StorageError: Timeout waiting for device sdb to be created

Asked by Somik Behera

I had a multi-host OpenStack deployment before (synced about a month or two ago). I pulled the trunk and tried creating a new multi-hypervisor OpenStack deployment, and I am running into a few issues that I haven't been able to figure out yet. I would appreciate somebody helping me triage this as either a configuration issue or a bug.

I have a controller VM running glance, the scheduler, the network service, and the API service. I have another two VMs, each on a hypervisor, running the nova-compute service. I have enabled remote DB connectivity on the OpenStack controller, and everything is communicating fine.

Now, when I try to boot an image, I run into the error below. It seems like the image couldn't be downloaded, and I don't know what could be wrong:

Nova-compute log:

2011-05-24 17:19:26,374 DEBUG nova.rpc [-] received {u'_context_request_id': u'WKKBRZ-LFBG7422GG6M0', u'_context_read_deleted': False, u'args': {u'instance_id': 9, u'instance_type': {u'rxtx_quota': 0, u'deleted_at': None, u'name': u'm1.tiny', u'deleted': False, u'created_at': None, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'rxtx_cap': 0, u'swap': 0, u'flavorid': 1, u'id': 2, u'local_gb': 0}, u'injected_files': [], u'availability_zone': None}, u'_context_is_admin': True, u'_context_timestamp': u'2011-05-25T00:19:31Z', u'_context_user': u'somik', u'method': u'run_instance', u'_context_project': u'openstack', u'_context_remote_address': None} from (pid=8945) _receive /home/sbehera/openstack/nova/nova/rpc.py:177
2011-05-24 17:19:26,375 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-05-25T00:19:31Z', 'remote_address': None, 'project': u'openstack', 'is_admin': True, 'user': u'somik', 'request_id': u'WKKBRZ-LFBG7422GG6M0', 'read_deleted': False} from (pid=8945) _unpack_context /home/sbehera/openstack/nova/nova/rpc.py:350
2011-05-24 17:19:29,269 AUDIT nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] instance 9: starting...
2011-05-24 17:19:29,396 DEBUG nova.rpc [-] Making asynchronous call on network.NvpOpenStackController ... from (pid=8945) call /home/sbehera/openstack/nova/nova/rpc.py:370
2011-05-24 17:19:29,396 DEBUG nova.rpc [-] MSG_ID is 6f2a230e89414303bf6bf38cad02c426 from (pid=8945) call /home/sbehera/openstack/nova/nova/rpc.py:373
2011-05-24 17:19:29,501 DEBUG nova.rpc [-] received {u'_context_request_id': u'WKKBRZ-LFBG7422GG6M0', u'_context_read_deleted': False, u'args': {u'instance_id': 9, u'new_pass': u'Tibm7bLSrES4Gf6p'}, u'_context_is_admin': True, u'_context_timestamp': u'2011-05-25T00:19:31Z', u'_context_user': u'somik', u'method': u'set_admin_password', u'_context_project': u'openstack', u'_context_remote_address': None} from (pid=8945) _receive /home/sbehera/openstack/nova/nova/rpc.py:177
2011-05-24 17:19:29,501 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-05-25T00:19:31Z', 'remote_address': None, 'project': u'openstack', 'is_admin': True, 'user': u'somik', 'request_id': u'WKKBRZ-LFBG7422GG6M0', 'read_deleted': False} from (pid=8945) _unpack_context /home/sbehera/openstack/nova/nova/rpc.py:350
2011-05-24 17:19:29,502 INFO nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] check_instance_lock: decorating: |<function set_admin_password at 0x8fe36f4>|
2011-05-24 17:19:29,502 INFO nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x8fe2cec>| |<nova.context.RequestContext object at 0x9c2768c>| |9|
2011-05-24 17:19:29,502 DEBUG nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] instance 9: getting locked state from (pid=8945) get_lock /home/sbehera/openstack/nova/nova/compute/manager.py:741
2011-05-24 17:19:29,558 INFO nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] check_instance_lock: locked: |False|
2011-05-24 17:19:29,558 INFO nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] check_instance_lock: admin: |True|
2011-05-24 17:19:29,559 INFO nova.compute.manager [WKKBRZ-LFBG7422GG6M0 somik openstack] check_instance_lock: executing: |<function set_admin_password at 0x8fe36f4>|
2011-05-24 17:19:29,929 DEBUG nova.virt.xenapi.vm_utils [-] Detected DISK format for image 3, instance 9 from (pid=8945) log_disk_format /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:502
2011-05-24 17:19:30,222 DEBUG nova.virt.xenapi.vm_utils [-] Size for image 3:797819904 from (pid=8945) _fetch_image_glance_disk /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:444
2011-05-24 17:19:31,062 DEBUG nova.virt.xenapi.vm_utils [-] Created VDI OpaqueRef:b04e1d01-bb7d-41cd-3066-17915c589b0b (Glance image 3, 797852160, False) on OpaqueRef:fabc3e71-7004-3682-bad6-44dd792dcffa. from (pid=8945) create_vdi /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:279
2011-05-24 17:19:31,113 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:b04e1d01-bb7d-41cd-3066-17915c589b0b ... from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:927
2011-05-24 17:19:31,180 DEBUG nova.virt.xenapi.vm_utils [-] Creating VBD for VDI OpaqueRef:b04e1d01-bb7d-41cd-3066-17915c589b0b done. from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:929
2011-05-24 17:19:31,180 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:78cf1e13-c0de-1f87-e05d-62db81419dc1 ... from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:931
2011-05-24 17:19:32,479 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:78cf1e13-c0de-1f87-e05d-62db81419dc1 done. from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:933
2011-05-24 17:19:32,524 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:78cf1e13-c0de-1f87-e05d-62db81419dc1 plugged as xvdb from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:935
2011-05-24 17:19:32,535 DEBUG nova.virt.xenapi.vm_utils [-] VBD OpaqueRef:78cf1e13-c0de-1f87-e05d-62db81419dc1 plugged into wrong dev, remapping to sdb from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:939
2011-05-24 17:19:42,543 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:b04e1d01-bb7d-41cd-3066-17915c589b0b ... from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:947
2011-05-24 17:19:43,823 DEBUG nova.virt.xenapi.vm_utils [-] VBD.unplug successful first time. from (pid=8945) vbd_unplug_with_retry /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:962
2011-05-24 17:19:43,897 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:b04e1d01-bb7d-41cd-3066-17915c589b0b done. from (pid=8945) with_vdi_attached_here /home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py:950
2011-05-24 17:19:43,898 ERROR nova.compute.manager [-] Instance '9' failed to spawn. Is virtualization enabled in the BIOS? Details: Timeout waiting for device sdb to be created
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/compute/manager.py", line 256, in run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi_conn.py", line 199, in spawn
(nova.compute.manager): TRACE: self._vmops.spawn(instance)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vmops.py", line 117, in spawn
(nova.compute.manager): TRACE: vdi_uuid = self._create_disk(instance)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vmops.py", line 113, in _create_disk
(nova.compute.manager): TRACE: instance.image_id, user, project, disk_image_type)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py", line 384, in fetch_image
(nova.compute.manager): TRACE: access, image_type)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py", line 543, in _fetch_image_glance
(nova.compute.manager): TRACE: session, instance_id, image, access, image_type)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py", line 460, in _fetch_image_glance_disk
(nova.compute.manager): TRACE: lambda dev:
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py", line 944, in with_vdi_attached_here
(nova.compute.manager): TRACE: _wait_for_device(dev)
(nova.compute.manager): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vm_utils.py", line 909, in _wait_for_device
(nova.compute.manager): TRACE: raise StorageError(_('Timeout waiting for device %s to be created') % dev)
(nova.compute.manager): TRACE: StorageError: Timeout waiting for device sdb to be created
(nova.compute.manager): TRACE:
2011-05-24 17:19:44,114 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/exception.py", line 87, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/compute/manager.py", line 277, in run_instance
(nova.exception): TRACE: self._update_state(context, instance_id)
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/compute/manager.py", line 149, in _update_state
(nova.exception): TRACE: info = self.driver.get_info(instance_ref['name'])
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi_conn.py", line 274, in get_info
(nova.exception): TRACE: return self._vmops.get_info(instance_id)
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vmops.py", line 767, in get_info
(nova.exception): TRACE: vm_ref = self._get_vm_opaque_ref(instance)
(nova.exception): TRACE: File "/home/sbehera/openstack/nova/nova/virt/xenapi/vmops.py", line 256, in _get_vm_opaque_ref
(nova.exception): TRACE: raise exception.InstanceNotFound(instance_id=instance_obj.id)
(nova.exception): TRACE: UnboundLocalError: local variable 'instance_obj' referenced before assignment
(nova.exception): TRACE:
2011-05-24 17:19:44,115 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "/home/sbehera/openstack/nova/nova/rpc.py", line 198, in _receive
(nova): TRACE: rval = node_func(context=ctxt, **node_args)
(nova): TRACE: File "/home/sbehera/openstack/nova/nova/exception.py", line 93, in _wrap
(nova): TRACE: raise Error(str(e))
(nova): TRACE: Error: local variable 'instance_obj' referenced before assignment
(nova): TRACE:
2011-05-24 17:20:14,534 INFO nova.compute.manager [-] Updating host status
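
To illustrate what I think is happening: the log shows the VBD being plugged as xvdb, then remapped to sdb, and the StorageError firing about ten seconds after the remap. A minimal sketch of that kind of polling device wait (the timeout value and helper shape are my guesses for illustration, not nova's exact code):

import os
import time

class StorageError(Exception):
    """Raised when an attached VDI never appears as a block device."""
    pass

def _wait_for_device(dev, timeout=10, interval=0.5):
    # Poll for /dev/<dev> to show up; the log above shows roughly a
    # ten-second gap between the remap to sdb and the timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists('/dev/%s' % dev):
            return
        time.sleep(interval)
    raise StorageError('Timeout waiting for device %s to be created' % dev)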

Let me know if you would like me to include any other relevant configuration info.

Thanks,
Somik

Question information

Language: English
Status: Answered
For: OpenStack Compute (nova)
Assignee: No assignee
Jay Pipes (jaypipes) said:
#1

Looks like this is a bug in the exception code (_wrap()). I'll convert this question to a bug report.
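
For anyone hitting this: the second traceback's UnboundLocalError comes from the error path in _get_vm_opaque_ref, where instance_obj is only bound on one branch before the raise, so the real InstanceNotFound gets masked and _wrap() re-raises it as a generic Error. A self-contained sketch of that bug pattern (the lookup helpers are hypothetical stand-ins, not nova's actual API):

class InstanceNotFound(Exception):
    def __init__(self, instance_id):
        super(InstanceNotFound, self).__init__(
            'Instance %s could not be found' % instance_id)

def lookup_by_name(name):
    return None  # simulate "no matching VM record"

def lookup_by_id(instance_id):
    return None  # simulate "no matching VM record"

def _get_vm_opaque_ref(instance_or_id):
    if hasattr(instance_or_id, 'name'):
        instance_obj = instance_or_id          # only bound on this branch
        vm_ref = lookup_by_name(instance_obj.name)
    else:
        vm_ref = lookup_by_id(instance_or_id)  # instance_obj never assigned
    if vm_ref is None:
        # On the id branch instance_obj is unbound, so this line dies
        # with UnboundLocalError instead of raising InstanceNotFound.
        raise InstanceNotFound(instance_id=instance_obj.id)
    return vm_ref

Calling _get_vm_opaque_ref(9) raises UnboundLocalError: local variable 'instance_obj' referenced before assignment, matching the trace above.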
