AttributeError: Volume creation and display

Asked by David Taylor

Hello all,

I am hoping some of you can help point me in the right direction. I am building a proof-of-concept OpenStack setup using GlusterFS as the back-end storage for virtual machine volumes. I am relatively new to OpenStack, so I am not sure what my problem is.

Setup:

1 x Controller node (Keystone, Glance, Nova Controller, Cinder, Heat, Ceilometer, Neutron Server)
1 x Compute node (Nova Compute, Cinder, Neutron Agent)
1 x Network node (Neutron dedicated server)
3 x GlusterFS nodes (2 exported volumes each, one for Glance images/Nova instances, one for Cinder volumes)

Volume 0: mounted at /var/lib/cinder/volumes (Cinder volumes)
Volume 1: mounted at /csv1, with nova/glance subdirectories owned by the appropriate users/groups
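
The ownership on the shared volume is set along these lines; this is just an illustration of what I mean by "appropriate users/groups", using the paths described above:

    chown -R glance:glance /csv1/glance
    chown -R nova:nova /csv1/nova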

Nova Version: 2.15.0
Cinder Version: 1.0.7

I am using CentOS 6.5 on all nodes, running the 2.6.32-431 kernel. Currently the firewall is disabled on all nodes and SELinux is set to enforcing. I have two networks, an internal and an external network. All nodes are resolvable with forward/reverse DNS.

Internal Network: 10.0.0.0/8
External Network: 192.168.218.0/24

Communication between nodes using Qpid works with no errors, and MySQL connections are good.

Here is my problem:

When creating a new instance in Horizon, booting from an existing image and creating a new volume, instance creation always fails at the block device setup stage. If I boot directly off the image without creating a new volume, the instance starts and I am able to connect via the VNC console.

I am booting from a CentOS 6.5 minimal ISO uploaded to Glance, stored on a separate GlusterFS volume that Glance is configured to use as its 'state_path'. I can create images fine on this share.
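
To rule out Horizon, the same flow can be reproduced from the CLI along these lines; the image ID, volume ID, flavor, and names below are placeholders rather than my exact values:

    # create a bootable volume from the Glance image (Cinder v1 CLI)
    cinder create --image-id <IMAGE_ID> --display-name test-bfv-vol 10

    # boot an instance from that volume once it is available
    nova boot --flavor m1.small \
        --block-device-mapping vda=<VOLUME_ID>:::0 \
        test-bfv-instance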

After instance creation fails, if I try to access the 'Volumes' tab in Horizon, I am always given an internal error message:

    AttributeError at /project/volumes/
    display_name
    Request Method: GET
    Request URL: http://192.168.218.193/dashboard/project/volumes/
    Django Version: 1.4.8
    Exception Type: AttributeError
    Exception Value:
    display_name
    Exception Location: /usr/lib/python2.6/site-packages/cinderclient/base.py in __getattr__, line 271
    Python Executable: /usr/bin/python
    Python Version: 2.6.6
    Python Path:
    ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
     '/usr/lib64/python26.zip',
     '/usr/lib64/python2.6',
     '/usr/lib64/python2.6/plat-linux2',
     '/usr/lib64/python2.6/lib-tk',
     '/usr/lib64/python2.6/lib-old',
     '/usr/lib64/python2.6/lib-dynload',
     '/usr/lib64/python2.6/site-packages',
     '/usr/lib64/python2.6/site-packages/PIL',
     '/usr/lib/python2.6/site-packages',
     '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info',
     '/usr/share/openstack-dashboard/openstack_dashboard']
    Server time: Wed, 4 Dec 2013 04:43:27 +0000

To access the Volumes page again, I have to manually delete the Cinder volume from the command line using 'cinder delete'; once the volume is gone, the page loads normally. The volumes path is located on a GlusterFS share.
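
For reference, the workaround looks like this from the controller, where the volume ID is whatever 'cinder list' reports for the stuck volume:

    cinder list
    cinder delete <VOLUME_ID>

This is my 'cinder.conf' file: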

    [DEFAULT]

    # SQL Connection
    sql_connection=mysql://cinder:<email address hidden>/cinder

    # Metering Configuration
    control_exchange=cinder
    notification_driver=cinder.openstack.common.notifier.rpc_notifier

    # Authentication Strategy
    auth_strategy=keystone
    auth_uri=http://oscon.bkk3.vpls.os:5000

    # Glance Configuration
    glance_host=oscon.bkk3.vpls.os
    glance_port=9292
    glance_api_servers=$glance_host:$glance_port

    # Qpid Messenger
    rpc_backend=cinder.openstack.common.rpc.impl_qpid
    qpid_hostname=oscon.bkk3.vpls.os
    qpid_port=5672
    qpid_hosts=$qpid_hostname:$qpid_port

    # GlusterFS Configuration
    volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config=/etc/cinder/glusterfs.conf
    glusterfs_mount_point_base=/var/lib/cinder/volumes

    [keystone_authtoken]

    # Keystone Authentication
    admin_tenant_name=service
    admin_user=cinder
    admin_password=password
    auth_host=oscon.bkk3.vpls.os
    auth_port=35357
    auth_protocol=http
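
The shares file referenced by 'glusterfs_shares_config' lists one Gluster share per line. Mine is along the lines of the following; the Gluster hostname and volume name here are placeholders rather than my exact values:

    gluster1.bkk3.vpls.os:/cinder-volumes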

This is my 'nova.conf' file on the compute node:

    [DEFAULT]

    # API configuration
    api_paste_config=/etc/nova/api-paste.ini

    # Glance image host
    glance_host=oscon.bkk3.vpls.os

    # State path and instances
    state_path=/csv1/nova

    # Network configuration
    my_ip = 10.0.0.3

    # VNC server options
    vnc_enabled=true
    vncserver_listen=192.168.218.197
    vncserver_proxyclient_address=192.168.218.197

    # Gluster options
    glusterfs_mount_point_base=/var/lib/cinder/volumes

    libvirt_type=qemu
    compute_driver=libvirt.LibvirtDriver
    instance_name_template=instance-%08x
    api_paste_config=/etc/nova/api-paste.ini

    # Authentication strategy
    auth_strategy=keystone

    # Qpid message broker
    rpc_backend=nova.openstack.common.rpc.impl_qpid
    qpid_hostname=oscon.bkk3.vpls.os

    # Metering service
    instance_usage_audit=True
    instance_usage_audit_period=hour
    notify_on_state_change=vm_and_task_state
    notification_driver=nova.openstack.common.notifier.rpc_notifier
    notification_driver=ceilometer.compute.nova_notifier

    # Neutron configuration
    network_api_class=nova.network.neutronv2.api.API
    neutron_url=http://oscon.bkk3.vpls.os:9696
    neutron_auth_strategy=keystone
    neutron_admin_tenant_name=service
    neutron_admin_username=neutron
    neutron_admin_password=password
    neutron_admin_auth_url=http://oscon.bkk3.vpls.os:35357/v2.0
    firewall_driver=nova.virt.firewall.NoopFirewallDriver
    security_group_api=neutron

    # Neutron metadata
    neutron_metadata_proxy_shared_secret=secret
    service_neutron_metadata_proxy=true

    # Cinder catalog
    cinder_catalog_info=volume:cinder:internalURL

    # Database backend configuration
    [database]

    connection=mysql://nova:<email address hidden>/nova

    # Keystone authentication
    [keystone_authtoken]

    admin_tenant_name=service
    admin_user=nova
    admin_password=password
    auth_host=oscon.bkk3.vpls.os
    auth_port=35357
    auth_protocol=http

When I go to check the logs for nova-compute on the compute node, I see the following error message:

    2013-12-04 11:43:03.596 8232 ERROR nova.compute.manager [req-1a3816a8-336f-442b-94ab-8eab4632f59f 3441275d567e4843a37df776416d823b 606557574dc74904a963a1a79cf0b510] [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] Instance failed block device setup
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] Traceback (most recent call last):
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1376, in _prep_block_device
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] self._await_block_device_map_created))
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] block_device_mapping)
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 236, in attach
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] '', '', image_id=self.image_id)
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 307, in create
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] return _untranslate_volume_summary_view(context, item)
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 136, in _untranslate_volume_summary_view
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] d['display_name'] = vol.display_name
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/cinderclient/base.py", line 271, in __getattr__
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] raise AttributeError(k)
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] AttributeError: display_name
    2013-12-04 11:43:03.596 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349]
    2013-12-04 11:43:04.570 8232 ERROR nova.virt.libvirt.driver [-] [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] During wait destroy, instance disappeared.
    2013-12-04 11:43:05.949 8232 ERROR nova.compute.manager [req-1a3816a8-336f-442b-94ab-8eab4632f59f 3441275d567e4843a37df776416d823b 606557574dc74904a963a1a79cf0b510] [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] Error: display_name
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] Traceback (most recent call last):
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1028, in _build_instance
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] context, instance, bdms)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1393, in _prep_block_device
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] instance=instance)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1376, in _prep_block_device
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] self._await_block_device_map_created))
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] block_device_mapping)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 236, in attach
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] '', '', image_id=self.image_id)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 307, in create
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] return _untranslate_volume_summary_view(context, item)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 136, in _untranslate_volume_summary_view
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] d['display_name'] = vol.display_name
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] File "/usr/lib/python2.6/site-packages/cinderclient/base.py", line 271, in __getattr__
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] raise AttributeError(k)
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349] AttributeError: display_name
    2013-12-04 11:43:05.949 8232 TRACE nova.compute.manager [instance: 68fa0263-8af8-47ec-bf58-2bd735dfc349]

At this point I am thoroughly confused as to what the problem can be. I have tried:

- Disabling SELinux/firewall
- Changing permissions on the mount points and GlusterFS volumes, and checking the mounts themselves (see the commands after this list). I find it strange that the volume is created, but it does not appear to be attached to the instance
- Many different configuration settings
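
The mount-side checks on the compute node look like this; the hashed directory name is only an illustration of how the GlusterFS driver lays volumes out under the mount point base, not my actual path:

    mount | grep glusterfs
    ls -l /var/lib/cinder/volumes/
    ls -l /var/lib/cinder/volumes/<share-hash>/volume-<VOLUME_ID>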

No matter what I try, I consistently see the Python error in the dashboard error page and the nova-compute logs. It looks like the error occurs right after block device creation starts, and then the build fails. My first thought was that the Nova Python driver for Cinder volumes was somehow failing to retrieve the 'display_name' property for the volume, causing a fatal error, but I am not a Python expert so I am unsure how to troubleshoot this.
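
One way to see what the Cinder API is actually returning, independent of Nova, might be a quick python-cinderclient script using the same service credentials as in my cinder.conf. This is only a rough sketch with placeholder credentials; my understanding (which may be wrong) is that if the raw attributes contain 'name' rather than 'display_name', the client is getting v2-style volumes back, which would explain the AttributeError in code that expects the v1 attribute:

    # rough diagnostic sketch (Python 2): what attributes does cinderclient see?
    from cinderclient.v1 import client as cinder_client

    # placeholder credentials mirroring the [keystone_authtoken] settings above
    c = cinder_client.Client('cinder', 'password', 'service',
                             'http://oscon.bkk3.vpls.os:5000/v2.0')

    for vol in c.volumes.list():
        # _info holds the raw dict returned by the API for this volume
        print vol.id, sorted(vol._info.keys())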

I have been searching for a few days and have been unable to find anything directly related to my error message. I would greatly appreciate any insight into this error, and any tips on how to adjust my configuration or server settings to at least get a different error or resolve this one. If you think any other information would be useful please let me know. Cheers!

Question information

Language: English
Status: Open
For: python-novaclient
Assignee: No assignee
