What are the prerequisites for multi-node Cinder?

Asked by Shanthakumar K on 2013-04-05

I have installed the OpenStack controller + compute on a single node.
Now I want to expand only the OpenStack Cinder component onto another node.

Kindly let me know the prerequisites for configuring the Cinder node on a separate server.

Question information

Language:
English
Status:
Solved
For:
Cinder
Assignee:
No assignee
Solved by:
Khanh Nguyen
Solved:
2013-04-29
Last query:
2013-04-29
Last reply:
2013-04-26
Khanh Nguyen (ndquockhanh) said : #1

Go ahead and install the Cinder components on another node. ^.^
Just remember to update the Cinder volume endpoint in Keystone so it points to the new server.
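
For example, re-pointing the volume endpoint with the Grizzly-era keystone CLI looks roughly like this (a hedged sketch; the region, service ID, and address are placeholders, not values from this thread):

keystone endpoint-list
keystone endpoint-delete <old-cinder-endpoint-id>
keystone endpoint-create --region RegionOne \
  --service-id <cinder-service-id> \
  --publicurl 'http://<new-node-ip>:8776/v1/%(tenant_id)s' \
  --internalurl 'http://<new-node-ip>:8776/v1/%(tenant_id)s' \
  --adminurl 'http://<new-node-ip>:8776/v1/%(tenant_id)s'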

Shanthakumar K (shantha-kumar) said : #2

Thanks for your response.

Does the install include cinder-api and cinder-scheduler as well, or only cinder-volume?

Khanh Nguyen (ndquockhanh) said : #3

For me, I installed all of them on the storage node.

Shanthakumar K (shantha-kumar) said : #4

Thanks for your response.

So as per your setup, you have two cinder-schedulers (one on the controller node and the other on the new node). When I submit a request from Horizon, how is it routed between the two schedulers, and where does it create the volumes?

Khanh Nguyen (ndquockhanh) said : #5

For a simple deployment, I suggest you install one cinder-api and one cinder-scheduler on the controller node, and install cinder-volume on the cluster nodes.

I wanted HA for these Cinder services, so I installed all of them on every node.
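
As a rough sketch of that split (assuming Ubuntu packages of that era; names may differ on other distributions):

# On the controller node:
apt-get install cinder-api cinder-scheduler

# On each storage (cluster) node:
apt-get install cinder-volume tgt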

Khanh Nguyen (ndquockhanh) said : #6

Other options for expanding the storage nodes:

1. Use multi-backend LVM volumes (a minimal configuration sketch follows below); see the guide for more detail: http://docs.openstack.org/trunk/openstack-block-storage/admin/content/multi_backend.html

2. Select a stable storage back-end, which makes it easier to add more storage nodes dynamically. In my deployment I'm using Ceph as the storage back-end. ^.^

Hope they're useful to you ;)
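
A hedged sketch of option 1 in cinder.conf (the backend names, volume groups, and driver path here are illustrative assumptions, not values from this thread):

[DEFAULT]
enabled_backends=lvm-a,lvm-b

[lvm-a]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cinder-volumes-a
volume_backend_name=LVM_iSCSI_A

[lvm-b]
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group=cinder-volumes-b
volume_backend_name=LVM_iSCSI_B

A volume type keyed on volume_backend_name then steers each request to a particular backend.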

Shanthakumar K (shantha-kumar) said : #7

Thanks for your response.

Now I have removed the cinder-volume service from node1 and the setup looks like below, but I am still getting the "No valid host was found" error.

MySQL and RabbitMQ point to the controller node.

##################################

Controller node1:
cinder-api
cinder-scheduler

Cluster node2:
cinder-volume
tgtd

cinder.conf file
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.1.0.29/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
rabbit_host = 10.1.0.29
volumes_dir = /etc/cinder/volumes
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
verbose = True
auth_strategy = keystone
debug=true
#osapi_volume_listen_port=5900
iscsi_ip_address=10.1.0.7

Shanthakumar K (shantha-kumar) said : #8

Thanks for your response on the multi-backend drivers.

We have multi-backend enabled and it is working. Now we want to expand the backends to individual systems.

That means each cinder-volume node should point to only one backend.

Khanh Nguyen (ndquockhanh) said : #9

Share with me the log that includes the error; it will make it easier to detect your problem.

Shanthakumar K (shantha-kumar) said : #10

Thanks for your response.

I have attached the scheduler log from when I was trying to create the volume.

FYI - the cinder host list command lists the host, and the host is added in the MySQL DB.

###############################

tailf /var/log/cinder/cinder-scheduler.log
----------------------------------------------

2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'admin'], u'_context_request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', u'_context_quota_class': None, u'_unique_id': u'367cc5b4acf74f2ba4d379f9db804b81', u'_context_read_deleted': u'no', u'args': {u'service_name': u'volume', u'host': u'cinder', u'capabilities': {u'QoS_support': False, u'volume_backend_name': u'LVM_iSCSI', u'free_capacity_gb': 549.75, u'driver_version': u'1.0', u'total_capacity_gb': 549.75, u'reserved_percentage': 0, u'vendor_name': u'Open Source', u'storage_protocol': u'iSCSI'}}, u'_context_tenant': None, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id': None, u'_context_timestamp': u'2013-04-23T05:21:00.558719', u'_context_user': None, u'_context_user_id': None, u'method': u'update_service_capabilities', u'_context_remote_address': None}
2013-04-24 00:20:59 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': None, 'roles': [u'admin'], 'timestamp': u'2013-04-23T05:21:00.558719', 'auth_token': '<SANITIZED>', 'remote_address': None, 'quota_class': None, 'is_admin': True, 'user': None, 'request_id': u'req-17ecae07-31ff-4096-812d-062060fb2695', 'project_id': None, 'read_deleted': u'no', 'tenant': None}
2013-04-24 00:20:59 DEBUG [cinder.scheduler.host_manager] Received volume service update from cinder.

2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] received {u'_context_roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], u'_context_request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', u'_context_quota_class': None, u'_unique_id': u'7d86b91fcab24a3c8b00a9a3ad18d0f1', u'_context_read_deleted': u'no', u'args': {u'request_spec': {u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'volume_properties': {u'status': u'creating', u'volume_type_id': None, u'display_name': None, u'availability_zone': u'nova', u'attach_status': u'detached', u'source_volid': None, u'metadata': {}, u'volume_metadata': [], u'display_description': None, u'snapshot_id': None, u'user_id': u'4b106ff812b54472b8e2d3596524f730', u'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'id': u'73978248-b515-4c03-a339-5ca7979324a2', u'size': 2}, u'volume_type': {}, u'image_id': None, u'source_volid': None, u'snapshot_id': None}, u'volume_id': u'73978248-b515-4c03-a339-5ca7979324a2', u'filter_properties': {}, u'topic': u'cinder-volume', u'image_id': None, u'snapshot_id': None}, u'_context_tenant': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'version': u'1.2', u'_context_project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', u'_context_timestamp': u'2013-04-23T18:51:18.663053', u'_context_user': u'4b106ff812b54472b8e2d3596524f730', u'_context_user_id': u'4b106ff812b54472b8e2d3596524f730', u'method': u'create_volume', u'_context_remote_address': u'10.1.0.29'}
2013-04-24 00:21:18 DEBUG [cinder.openstack.common.rpc.amqp] unpacked context: {'user_id': u'4b106ff812b54472b8e2d3596524f730', 'roles': [u'KeystoneAdmin', u'admin', u'KeystoneServiceAdmin'], 'timestamp': u'2013-04-23T18:51:18.663053', 'auth_token': '<SANITIZED>', 'remote_address': u'10.1.0.29', 'quota_class': None, 'is_admin': True, 'user': u'4b106ff812b54472b8e2d3596524f730', 'request_id': u'req-213a11c7-9e44-4fb1-a0f3-e9363f247bd1', 'project_id': u'fedfaa89548e41188c2dbd6f96d0de4a', 'read_deleted': u'no', 'tenant': u'fedfaa89548e41188c2dbd6f96d0de4a'}
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 WARNING [cinder.scheduler.host_manager] service is down or disabled.
2013-04-24 00:21:18 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

Kindly let me know if you need any more logs.

Khanh Nguyen (ndquockhanh) said : #11

From your log, I see that the cinder-volume service is down or disabled.

Take a look at the services table in the cinder DB to make sure that cinder-volume is present and hasn't been disabled.
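
For example, a quick query (credentials and database host are placeholders):

mysql -u root -p cinder -e "SELECT host, binary, updated_at, disabled FROM services;"

A cinder-volume row with disabled = 0 and a recent updated_at is what the scheduler needs to see.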

Shanthakumar K (shantha-kumar) said : #12

Thanks for your response.

The cinder-volume service is running on the node (CINDER).

Please find the services table from the cinder DB, which shows cinder-volume:

root@grzrc3:~# mysql -uroot -piso*help cinder -e 'select * from services;'
+---------------------+---------------------+------------+---------+----+-----------------+------------------+------------------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+-----------------+------------------+------------------+--------------+----------+-------------------+
| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL | 0 | 1 | grzrc3 | cinder-scheduler | cinder-scheduler | 144940 | 0 | nova |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL | 0 | 6 | cinder | cinder-volume | cinder-volume | 43577 | 0 | nova |
+---------------------+---------------------+------------+---------+----+-----------------+------------------+------------------+--------------+----------+-------------------+
root@grzrc3:~#

root@grzrc3:~# cinder-manage host list
host zone
cinder nova
root@grzrc3:~#
root@grzrc3:~#

Do any configuration changes need to be made?

Best Khanh Nguyen (ndquockhanh) said : #13

| 2013-04-08 22:48:21 | 2013-04-25 18:45:44 | NULL | 0 | 1 | grzrc3 | cinder-scheduler | cinder-scheduler | 144940 | 0 | nova |
| 2013-04-18 10:14:32 | 2013-04-24 09:49:08 | NULL | 0 | 6 | cinder | cinder-volume | cinder-volume | 43577 | 0 | nova |

I see that the updated_at field for cinder-volume hasn't been updated recently, so the cinder-scheduler marked it as a service that is down.

Let me share the check Cinder uses to decide whether a service is down:

    # A service counts as up only if its last heartbeat (updated_at, or
    # created_at if it has never reported) is within service_down_time
    # seconds of the current UTC time.
    last_heartbeat = service['updated_at'] or service['created_at']
    # Timestamps in DB are UTC.
    elapsed = total_seconds(timeutils.utcnow() - last_heartbeat)
    return abs(elapsed) <= FLAGS.service_down_time

 FLAGS.service_down_time defaults to 60 seconds.

Hope it helps you detect your problem.
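
If heartbeats really do arrive more slowly than that, the threshold can be raised in cinder.conf (a hedged tweak for illustration only; fixing the clock skew, as happened later in this thread, is the proper fix):

[DEFAULT]
service_down_time = 120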

Shanthakumar K (shantha-kumar) said : #14

Thanks for your response.

I am not sure how to use the above check. Kindly advise me on how to proceed further.

Shanthakumar K (shantha-kumar) said : #15

Thanks a lot, Khanh Nguyen, for your help throughout the debugging.

The issue was very simple: the two nodes were not time-synchronized.

That is why the scheduler could not see the second server's services.
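
For anyone hitting the same thing, a quick way to confirm the clocks agree (generic commands, not taken from this deployment):

date -u     # run on each node and compare the output
ntpq -p     # confirm ntpd is actually synchronized to a peer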

Shanthakumar K (shantha-kumar) said : #16

Thanks Khanh Nguyen, that solved my question.

CLisa (lisa-chen626) said : #17

Hi shausy, I have encountered the same problem as you.
I don't understand the formula either; could you kindly let me know your resolution?

Shanthakumar K (shantha-kumar) said : #18

Can you please share the logs and tell me exactly what problem you are seeing now?

CLisa (lisa-chen626) said : #19

Thanks for your response.
I planned to add another cinder node, "c01".
On "c01" I installed the cinder-volume package and created a VG named cinder-vol.

The result by running cinder-manage host list is:
# cinder-manage host list
host zone
cloud nova
c01 nova

When the VG on "cloud" is completely used, I assume the cinder-vol VG on "c01" should be used.
So I ran "cinder create --display_name multi-node-test-01 8", but:
# cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
7ac33606-7030-4786-9b09-0eb90d781d3a | error | multi-node-test-01 | 8 | None | false | |

cinder-scheduler.log:
2013-09-13 18:13:12 ERROR [cinder.scheduler.filters.capacity_filter] Free capacity not set: volume node info collection broken.
2013-09-13 18:13:12 WARNING [cinder.scheduler.filters.capacity_filter] Insufficient free space for volume creation (requested / avail): 8/7.0
2013-09-13 18:13:12 ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found.

At the same time I can see this on "c01":
c01:~# pvscan
  PV /dev/loop2 VG cinder-vol lvm2 [10.00 GiB / 10.00 GiB free]
  Total: 1 [10.00 GiB] / in use: 1 [10.00 GiB] / in no VG: 0 [0 ]

So why can't the VG on c01 be used by Cinder?

Thanks again.

Shanthakumar K (shantha-kumar) said : #20

I believe you are able to create volumes successfully from the other node (the controller node where you have Cinder installed).

It is when the volume should be created on the newly added node that you see this problem: "ERROR [cinder.scheduler.manager] Failed to schedule_create_volume: No valid host was found."

"No valid host" means that although the node appears in Cinder's host list, it has not registered successfully.

Check all the settings carefully, especially NTP.

Khanh Nguyen (ndquockhanh) said : #21

Just reproduce this case by creating a new volume, and attach the full log file so we can get more detail on this issue.

Vijesh said : #22

Thanks Shausy!

I was having the same issue, where creating a volume on a cinder-volume host failed with a "No valid host was found" error.
This was a problem of time sync between the nodes. After correcting the time difference, volume creation started working.

Also, to select a particular volume node, I used different volume backend names and created a volume type associated with each backend name, then created a volume with the selected type.

e.g:

1) cinder.conf on the volume node:

enabled_backends=lvm-host1

[lvm-host1]
volume_group=host-cinder
volume_backend_name=LVM_iSCSI_host1

2) Create volume type
cinder type-create host-lvm

3) Assosiate a backend name with the type:
cinder type-key host-lvm set volume_backend_name=LVM_iSCSI_host1

4) Create a volume with the specified type:
cinder create --display-name sample-volume --volume-type host-lvm 10
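
An optional sanity check before creating the volume (standard cinderclient commands, shown as a hint only):

cinder type-list
cinder extra-specs-list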

Thanks,
Vijesh

Kaoen Beoyou (kon-bod) said : #23

Hello,

I have exactly the same problem as you. When I try to create a volume I see exactly the same warnings and errors [no valid host was found].

The controller is installed on Node-1, storage/Cinder on Node-2, and Node-3 is the compute node.

I think the error comes from the time synchronization of the servers, as you describe here and as I have read in other posts.

An NTP server is installed and running on all 3 nodes. For better understanding I am providing my ntp.conf file from Node-1. I have seen some example configurations of that file, but I don't know exactly what I should modify. Or please tell me if I have to make other modifications in other files.

Any help would be appreciated.
-----------------------------------------------------
/etc/ntp.conf:

tinker panic 0
# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift

# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# Specify one or more NTP servers.

# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.

# Use Ubuntu's ntp server as a fallback.

# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
#
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255

# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines. Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient
server 192.168.2.2 burst iburst
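
A common pattern (a sketch with placeholder addresses, not a verified fix for this particular setup) is to have Node-2 and Node-3 sync against Node-1 and then confirm on every node that ntpd has locked onto a peer:

# /etc/ntp.conf on Node-2 and Node-3 (placeholder address for Node-1):
server <node-1-ip> iburst

# then, on each node:
ntpq -p
date -u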