Unable to get cinder running

Asked by Brendy22

I have installed the OpenStack Havana release on Ubuntu 12.04 LTS Server.
Everything is OK except Cinder, which I am unable to get running.

cinder-api and cinder-scheduler are installed on the controller node (oscontroller1).
cinder-volume is installed on a compute node (oscompute13).

Creating a volume fails. cinder-scheduler.log:
2014-02-12 09:21:19.462 32390 INFO cinder.openstack.common.rpc.common [req-b4b3a1ee-49fc-43ff-8ee0-8aa80f235986 None None] Connected to AMQP server on oscontroller1:5672
2014-02-12 09:59:03.242 32390 WARNING cinder.scheduler.host_manager [req-9b1037db-51dc-4e14-9d0b-2442e58e70de 97ff23a4764947b391c087e734dad3fb c384862ff4f548b38e8f5acefbfc1e76] volume service is down or disabled. (host: oscompute13)
2014-02-12 09:59:03.243 32390 ERROR cinder.volume.flows.create_volume [req-9b1037db-51dc-4e14-9d0b-2442e58e70de 97ff23a4764947b391c087e734dad3fb c384862ff4f548b38e8f5acefbfc1e76] Failed to schedule_create_volume: No valid host was found.

root@oscontroller1:~# cinder service-list
+------------------+---------------+------+---------+-------+----------------------------+
|      Binary      |      Host     | Zone |  Status | State |         Updated_at         |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | oscontroller1 | nova | enabled |   up  | 2014-02-12T09:00:42.000000 |
|  cinder-volume   |  oscompute13  | nova | enabled |  down | 2014-02-12T09:03:16.000000 |
+------------------+---------------+------+---------+-------+----------------------------+

Here is my configuration:

On the controller:
/etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = VG-CINDER
verbose = True
auth_strategy = keystone
#state_path = /var/lib/cinder
#lock_path = /var/lock/cinder
#volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = oscontroller1
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
[database]
connection = mysql://cinder:cinder@oscontroller1/cinder

api-paste.ini (controller):

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1
[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2
[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory
[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory
[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp
[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=oscontroller1
auth_port = 35357
auth_protocol = http
admin_tenant_name=service
admin_user=cinder
admin_password=cinder
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the cinder process is running as.
#signing_dir = /var/lib/cinder/keystone-signing

On the compute node:

cinder.conf:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = VG-CINDER
debug=True
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = oscontroller1
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = guest
[database]
connection = mysql://cinder:cinder@oscontroller1/cinder

api-paste.ini (compute):

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1
[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2
[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory
[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory
[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp
[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = oscontroller1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = cinder
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the cinder process is running as.
#signing_dir = /var/lib/cinder/keystone-signing

I spent a lot of time looking for a solution with no success.
I would be glad if someone could help me.
Thanks.

Question information

Language: English
Status: Solved
For: Cinder
Assignee: No assignee
Solved by: lirenke
gabriel staicu (gabriel-staicu) said :
#1

Hi,

Can you tell whether the cinder-volume service is running on the compute node?
What is the content of the /var/log/cinder/cinder-volume.log on the compute node?

Brendy22 (brendy22) said :
#2

Yes, cinder-volume is running.
Here's the content of the log file:

2014-02-12 14:57:27.974 2819 DEBUG cinder.openstack.common.rpc.amqp [-] UNIQUE_ID is 2724b8abc0914d83872ee4134e4631e9. _add_unique_id /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py:345
2014-02-12 14:57:27.975 2819 DEBUG amqp [-] Closed channel #1 _do_close /usr/lib/python2.7/dist-packages/amqp/channel.py:88
2014-02-12 14:57:27.975 2819 DEBUG amqp [-] using channel_id: 1 __init__ /usr/lib/python2.7/dist-packages/amqp/channel.py:70
2014-02-12 14:57:27.976 2819 DEBUG amqp [-] Channel open _open_ok /usr/lib/python2.7/dist-packages/amqp/channel.py:420
2014-02-12 14:57:27.976 2819 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:176
2014-02-12 14:57:27.977 2819 INFO cinder.volume.manager [-] Updating volume status
2014-02-12 14:57:27.977 2819 DEBUG cinder.volume.drivers.lvm [-] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:357
2014-02-12 14:57:27.977 2819 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C LANG=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix VG-CINDER execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:142
2014-02-12 14:58:28.025 2819 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:176
2014-02-12 14:58:28.025 2819 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.7/dist-packages/cinder/manager.py:135
2014-02-12 14:58:28.025 2819 DEBUG cinder.openstack.common.rpc.amqp [-] Making asynchronous fanout cast... fanout_cast /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py:640
2014-02-12 14:58:28.025 2819 DEBUG cinder.openstack.common.rpc.amqp [-] UNIQUE_ID is fe545fb94ad043ba9599db35a0a3dc35. _add_unique_id /usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py:345
2014-02-12 14:58:28.027 2819 DEBUG amqp [-] Closed channel #1 _do_close /usr/lib/python2.7/dist-packages/amqp/channel.py:88
2014-02-12 14:58:28.027 2819 DEBUG amqp [-] using channel_id: 1 __init__ /usr/lib/python2.7/dist-packages/amqp/channel.py:70
2014-02-12 14:58:28.028 2819 DEBUG amqp [-] Channel open _open_ok /usr/lib/python2.7/dist-packages/amqp/channel.py:420
2014-02-12 14:58:28.028 2819 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/dist-packages/cinder/openstack/common/periodic_task.py:176
2014-02-12 14:58:28.029 2819 INFO cinder.volume.manager [-] Updating volume status
2014-02-12 14:58:28.029 2819 DEBUG cinder.volume.drivers.lvm [-] Updating volume stats _update_volume_stats /usr/lib/python2.7/dist-packages/cinder/volume/drivers/lvm.py:357
2014-02-12 14:58:28.029 2819 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C LANG=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix VG-CINDER execute /usr/lib/python2.7/dist-packages/cinder/openstack/common/processutils.py:142

gabriel staicu (gabriel-staicu) said :
#3

Something similar happened to me, and it was solved by clearing the RabbitMQ messages:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app

Brendy22 (brendy22) said :
#4

Thank you Gabriel, but no luck: I tried it and it did not solve the issue.

gabriel staicu (gabriel-staicu) said :
#5

Something is spooky. I successfully installed a Havana PoC on Ubuntu 12.04 using information from here: https://github.com/jedipunkz/openstack_havana_deploy.

I see some differences between your configuration and what results from running that script. Maybe that can offer you a hint.

Brendy22 (brendy22) said :
#6

Well, I tried modifying some parameters in cinder.conf to match the example in your link.
It still does not work. Some parameters look deprecated, e.g. "sql_connection" changed to "connection" in the [database] section.

lirenke (lvhancy) said :
#7

You can use "pdb" to trace cinder-scheduler, focus on "get_all_host_states" in "/cinder/scheduler/host_manager.py".
Cinder use service_is_up or service['disabled'] to judge.
For service_is_up, check the "updated_at" and current utc time on cinder-scheuler node.
For service['disabled'], just check the db.
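
In other words, the scheduler only counts a host as "up" if its last heartbeat is recent when compared with the scheduler's own clock. A minimal sketch of that check (not the actual Cinder code; it assumes the default service_down_time of 60 seconds):

from datetime import datetime, timedelta

SERVICE_DOWN_TIME = timedelta(seconds=60)  # cinder.conf option service_down_time

def service_is_up(updated_at, now=None):
    """Treat the service as 'up' only if its last heartbeat is recent enough."""
    now = now or datetime.utcnow()
    # updated_at is written by cinder-volume on oscompute13, while 'now' comes
    # from the scheduler on oscontroller1, so clock skew between the two nodes
    # in either direction can make a healthy service look dead.
    return abs(now - updated_at) <= SERVICE_DOWN_TIME

# With the timestamps from the service-list output above:
print(service_is_up(datetime(2014, 2, 12, 9, 3, 16),
                    now=datetime(2014, 2, 12, 9, 59, 3)))  # False -> "down"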

Brendy22 (brendy22) said :
#8

Lirenke,

I don't think I have the knowledge to try what you said. I am just getting started with OpenStack.

lirenke (lvhancy) said :
#9

Well, you can read the code in cinder/scheduler/host_manager.py, in get_all_host_states().

You can add some log statements to help.

That is the place where Cinder decides whether the host is OK.

Best lirenke (lvhancy) said :
#10

Check these:
1. "disabled" is 0 in the "services" table of the cinder DB;
2. the OS UTC time on oscontroller1 and oscompute13 must be the same;
3. add "service_down_time" to cinder.conf, e.g. service_down_time = 60 (which is also the default, in seconds).
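
For check 2, a quick way to compare the clocks is to run the same snippet on both nodes and compare the printed timestamps (this is just a convenience one-liner, not anything from Cinder; check 1 is a plain query against the cinder database, e.g. SELECT host, disabled, updated_at FROM services;):

# Run on both oscontroller1 and oscompute13; the two UTC timestamps should
# agree to within a second or two, otherwise the heartbeat comparison in
# the scheduler misfires.
from datetime import datetime
print(datetime.utcnow().isoformat())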

Brendy22 (brendy22) said :
#11

The problem is solved!
I found out the nodes' clocks were not synchronized. I installed ntp and everything is OK now.
Thanks for your help.

Brendy22 (brendy22) said :
#12

Thanks lirenke, that solved my question.