failed to create VM on 2nd compute node

Asked by Upendra

Hi,

I have two compute nodes: the first (say CN_01) runs on the same machine as the cloud-controller node, while the second (say CN_02) is a dedicated compute node.

CN_01 runs the following processes:
1.) nova-api
2.) nova-network
3.) nova-compute
4.) nova-scheduler
5.) nova-objectstore
6.) glance-registry
7.) glance-api

I have not changed any settings in "/etc/glance/glance-api.conf", so everything is running with the defaults.
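For reference, the relevant defaults in glance-api.conf should be roughly the following (illustrative only, not pasted from my install), i.e. glance-api listening on all interfaces on port 9292:

[DEFAULT]
bind_host = 0.0.0.0
bind_port = 9292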

I have registered an image (the standard ttylinux image); the output of euca-describe-images is:
$ euca-describe-images
IMAGE ami-00000003 mybucket/ttylinux-uec-amd64-12.1_2.6.35-22_1.img.manifest.xml available public x86_64 machine aki-00000001 ari-00000002
IMAGE ari-00000002 mybucket/ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd.manifest.xml available public x86_64 ramdisk
IMAGE aki-00000001 mybucket/ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz.manifest.xml available public x86_64 kernel
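
For completeness, an image like this is typically published with uec-publish-tarball, roughly as follows (the exact tarball name may differ from what I used):

$ uec-publish-tarball ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz mybucket x86_64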

When I create an instance of this image on CN_01, everything works fine. However, when I create an instance of the same image on CN_02, it fails. I am not sure whether this is a bug or a misconfiguration; I am attaching the nova-compute.log.
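
The launch command is the same in both cases, something like the following (the keypair and flavor match what appears in the log below; which node the instance lands on is up to the scheduler):

$ euca-run-instances ami-00000003 -t m1.tiny -k upen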

Please let me know if the cause of the problem is obvious.

Thanks,
-upendra

========== CN_02:/var/log/nova/nova-compute.log ===========
2011-07-19 10:45:25,342 DEBUG nova.rpc [-] received {u'_context_request_id': u'4Z2Z7NMZR4-Z37V1ZLOB', u'_context_read_deleted': False, u'args': {u'instance_id': 68, u'request_spec': {u'instance_properties': {u'state_description': u'scheduling', u'availability_zone': None, u'ramdisk_id': u'2', u'instance_type_id': 2, u'user_data': u'', u'vm_mode': None, u'reservation_id': u'r-7rmxukz7', u'user_id': u'upendra', u'display_description': None, u'key_data': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQCfGYTuj0dtVgjSp5mMAHv/dPHUjBppmitimKEwroOy5HbWPsEWeWxvwXjz/q76X8kmdtoD43+IlZx1PG35xdVGgJ938tnmWubR4LtTxcbtM0f/dNdLjQRwkFmvJ4MmePbjLfTRTPIWfSayJKOFIiQM0A7f5uK2UYdfSo7wVNIdOw== nova@ibc6hb13\n', u'state': 0, u'project_id': u'myproj', u'metadata': {}, u'kernel_id': u'1', u'key_name': u'upen', u'display_name': None, u'local_gb': 0, u'locked': False, u'launch_time': u'2011-07-19T14:44:53Z', u'memory_mb': 512, u'vcpus': 1, u'image_ref': 3, u'architecture': None, u'os_type': None}, u'instance_type': {u'rxtx_quota': 0, u'deleted_at': None, u'name': u'm1.tiny', u'deleted': False, u'created_at': None, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'rxtx_cap': 0, u'extra_specs': {}, u'swap': 0, u'flavorid': 1, u'id': 2, u'local_gb': 0}, u'num_instances': 1, u'filter': u'nova.scheduler.host_filter.InstanceTypeFilter', u'blob': None}, u'admin_password': None, u'injected_files': None, u'availability_zone': None}, u'_context_is_admin': True, u'_context_timestamp': u'2011-07-19T14:44:53Z', u'_context_user': u'upendra', u'method': u'run_instance', u'_context_project': u'myproj', u'_context_remote_address': u'9.59.230.122'} from (pid=21444) process_data /usr/lib/pymodules/python2.7/nova/rpc.py:202
2011-07-19 10:45:25,342 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-07-19T14:44:53Z', 'msg_id': None, 'remote_address': u'9.59.230.122', 'project': u'myproj', 'is_admin': True, 'user': u'upendra', 'request_id': u'4Z2Z7NMZR4-Z37V1ZLOB', 'read_deleted': False} from (pid=21444) _unpack_context /usr/lib/pymodules/python2.7/nova/rpc.py:451
2011-07-19 10:45:25,400 AUDIT nova.compute.manager [4Z2Z7NMZR4-Z37V1ZLOB upendra myproj] instance 68: starting...
2011-07-19 10:45:25,564 DEBUG nova.rpc [-] Making asynchronous call on network ... from (pid=21444) multicall /usr/lib/pymodules/python2.7/nova/rpc.py:481
2011-07-19 10:45:25,564 DEBUG nova.rpc [-] MSG_ID is e2d87e6c9e0e4013ba2dd0a14583decb from (pid=21444) multicall /usr/lib/pymodules/python2.7/nova/rpc.py:484
2011-07-19 10:45:26,001 DEBUG nova.compute.manager [-] instance network_info: |[[{u'injected': True, u'bridge': u'br100', u'cidr_v6': None, u'cidr': u'9.59.231.192/29', u'id': 9}, {u'label': u'public', u'broadcast': u'9.59.231.255', u'ips': [{u'ip': u'9.59.231.197', u'netmask': u'255.255.254.0', u'enabled': u'1'}], u'mac': u'02:16:3e:67:32:b5', u'rxtx_cap': 0, u'dns': [u'8.8.4.4'], u'gateway': u'9.59.230.2'}]]| from (pid=21444) _run_instance /usr/lib/pymodules/python2.7/nova/compute/manager.py:295
2011-07-19 10:45:26,209 DEBUG nova.virt.libvirt_conn [-] instance instance-00000044: starting toXML method from (pid=21444) to_xml /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1038
2011-07-19 10:45:26,218 DEBUG nova.virt.libvirt_conn [-] instance instance-00000044: finished toXML method from (pid=21444) to_xml /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1042
2011-07-19 10:45:26,219 INFO nova [-] called setup_basic_filtering in nwfilter
2011-07-19 10:45:26,219 INFO nova [-] ensuring static filters
2011-07-19 10:45:26,232 DEBUG nova.virt.libvirt.firewall [-] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x7fe854461a10> from (pid=21444) instance_rules /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
2011-07-19 10:45:26,233 DEBUG nova.virt.libvirt.firewall [-] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x7fe854461a50> from (pid=21444) instance_rules /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
2011-07-19 10:45:26,233 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=21444) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-19 10:45:26,233 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=21444) inner /usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-19 10:45:26,234 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,259 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,284 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,308 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,333 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000044/ from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,346 INFO nova.virt.libvirt_conn [-] instance instance-00000044: Creating image
2011-07-19 10:45:26,361 DEBUG nova.utils [-] Attempting to grab semaphore "00000001" for method "call_if_not_exists"... from (pid=21444) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-19 10:45:26,362 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/00000001 /var/lib/nova/instances/instance-00000044/kernel from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,375 DEBUG nova.utils [-] Attempting to grab semaphore "00000002" for method "call_if_not_exists"... from (pid=21444) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-19 10:45:26,376 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/00000002 /var/lib/nova/instances/instance-00000044/ramdisk from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,401 DEBUG nova.utils [-] Attempting to grab semaphore "77de68daecd823babbb58edb1c8e14d7106e83bb_sm" for method "call_if_not_exists"... from (pid=21444) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-19 10:45:26,401 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/77de68daecd823babbb58edb1c8e14d7106e83bb_sm /var/lib/nova/instances/instance-00000044/disk from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:26,527 INFO nova.virt.libvirt_conn [-] instance instance-00000044: injecting key into image 3
2011-07-19 10:45:26,527 INFO nova.virt.libvirt_conn [-] instance instance-00000044: injecting net into image 3
2011-07-19 10:45:26,527 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000044/disk from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:27,558 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:27,582 DEBUG nova.utils [-] Result was 1 from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:161
2011-07-19 10:45:27,583 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=21444) execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-19 10:45:27,608 WARNING nova.virt.libvirt_conn [-] instance instance-00000044: ignoring error injecting data into image 3 (Unexpected error while running command.
Command: sudo tune2fs -c 0 -i 0 /dev/nbd15
Exit code: 1
Stdout: 'tune2fs 1.41.14 (22-Dec-2010)\n'
Stderr: "tune2fs: Invalid argument while trying to open /dev/nbd15\nCouldn't find valid filesystem superblock.\n")
2011-07-19 10:45:30,043 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/nova/exception.py", line 87, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 591, in spawn
(nova.exception): TRACE: domain = self._create_new_domain(xml)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 1090, in _create_new_domain
(nova.exception): TRACE: domain.createWithFlags(launch_flags)
(nova.exception): TRACE: File "/usr/lib/python2.7/dist-packages/libvirt.py", line 337, in createWithFlags
(nova.exception): TRACE: if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
(nova.exception): TRACE:
2011-07-19 10:45:30,045 ERROR nova.compute.manager [-] Instance '68' failed to spawn. Is virtualization enabled in the BIOS? Details: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 311, in _run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance, network_info, bd_mapping)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.7/nova/exception.py", line 93, in _wrap
(nova.compute.manager): TRACE: raise Error(str(e))
(nova.compute.manager): TRACE: Error: operation failed: failed to retrieve chardev info in qemu with 'info chardev'
(nova.compute.manager): TRACE:
2011-07-19 10:45:38,772 INFO nova.compute.manager [-] Found instance 'instance-00000044' in DB but no VM. State=5, so setting state to shutoff.
2011-07-19 10:46:38,774 INFO nova.compute.manager [-] Updating host status
========================================================================

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Vish Ishaya
Best Vish Ishaya (vishvananda) said:
#1

You probably need to set:
--glance_api_servers=<ip of glance>:9292

Vish
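
A minimal sketch of applying this on the dedicated compute node, assuming the flag-file style nova.conf of this release; <ip of glance> is a placeholder for the address of the host running glance-api (CN_01 here). Without the flag, nova-compute on CN_02 looks for glance at a default/local address, which would explain the invalid disk image (the tune2fs "Couldn't find valid filesystem superblock" warning) and the subsequent failure to start the guest:

# add to /etc/nova/nova.conf on CN_02 (illustrative)
--glance_api_servers=<ip of glance>:9292

$ sudo service nova-compute restart   # restart so the new flag is picked up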


Upendra (upendras) said:
#2

Thanks Vish Ishaya, that solved my question.