Creating zero-sized instances on a nova-compute node

Asked by P Spencer Davis

I have two nodes: one running nova-api, nova-network, nova-volume, nova-
compute, and glance; the second running just nova-compute. The nodes
are running Ubuntu 11.04 Server and I've installed from the
ppa.launchpad repository. Both nodes use the KVM hypervisor, and
kvm-ok reports that virtualization is enabled in their BIOS. On the
master node I can start instances and they run just fine, but when a
VM is scheduled on the second node, I receive the following errors:

2011-07-11 08:53:38,013 INFO nova.virt.libvirt_conn [-] instance instance-00000002: Creating image
2011-07-11 08:53:38,034 DEBUG nova.utils [-] Attempting to grab semaphore "00000001" for method "call_if_not_exists"... from (pid=6846) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-11 08:53:38,036 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 87, in _wrap
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 590, in spawn
(nova.exception): TRACE:     block_device_mapping=block_device_mapping)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 815, in _create_image
(nova.exception): TRACE:     project=project)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 751, in _cache_image
(nova.exception): TRACE:     call_if_not_exists(base, fn, *args, **kwargs)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/utils.py", line 613, in inner
(nova.exception): TRACE:     retval = f(*args, **kwargs)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 749, in call_if_not_exists
(nova.exception): TRACE:     fn(target=base, *args, **kwargs)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 762, in _fetch_image
(nova.exception): TRACE:     images.fetch(image_id, target, user, project)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/images.py", line 44, in fetch
(nova.exception): TRACE:     metadata = image_service.get(elevated, image_id, image_file)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/image/glance.py", line 139, in get
(nova.exception): TRACE:     image_meta, image_chunks = self.client.get_image(image_id)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/client.py", line 98, in get_image
(nova.exception): TRACE:     res = self.do_request("GET", "/images/%s" % image_id)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/client.py", line 54, in do_request
(nova.exception): TRACE:     headers, params)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/common/client.py", line 148, in do_request
(nova.exception): TRACE:     "server. Got error: %s" % e)
(nova.exception): TRACE: ClientConnectionError: Unable to connect to server. Got error: [Errno 111] ECONNREFUSED
(nova.exception): TRACE:
2011-07-11 08:53:38,037 ERROR nova.compute.manager [-] Instance '2' failed to spawn. Is virtualization enabled in the BIOS? Details: Unable to connect to server. Got error: [Errno 111] ECONNREFUSED

Looking in /var/lib/nova/instances/_base, there are 0000000# files that
are zero size.
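Those zero-byte files in _base are the footprint of the failed fetch: nova grabs the per-image lock and creates the target file, then the Glance download dies with ECONNREFUSED before any data is written. A quick, self-contained way to illustrate spotting such stale entries (the temp directory below is a stand-in for /var/lib/nova/instances/_base, so the commands run anywhere):

```shell
# Stand-in for /var/lib/nova/instances/_base.
BASE=$(mktemp -d)
touch "$BASE/00000001"                        # failed fetch: zero bytes
head -c 1048576 /dev/zero > "$BASE/00000003"  # healthy cached image
# List only the zero-byte (i.e. failed) cache entries.
find "$BASE" -maxdepth 1 -type f -size 0
```

On a real node, pointing find at /var/lib/nova/instances/_base shows which cached images never actually downloaded.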

The nodes have dual NICs attached to a public 172.16.0.0/16 network
and a private 10.0.0.0/8 network, and I was using
http://dodeeric.louvrex.net/?p=225 as an install guide.

/etc/nova/nova.conf:

# RabbitMQ
--rabbit_host=172.16.1.13
# MySQL
--sql_connection=mysql://nova:nova@172.16.1.13/nova
# Networking
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth1
--public_interface=eth0
--network_host=172.16.1.13
--routing_source_ip=172.16.1.13
--fixed_range=10.0.0.0/8
--network_size=1024
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
# Virtualization
--libvirt_type=kvm
# Volumes
--iscsi_ip_prefix=172.16.1.13
--num_targets=100
# APIs
--auth_driver=nova.auth.dbdriver.DbDriver
--cc_host=172.16.1.13
--ec2_url=http://172.16.1.13:8773/services/Cloud
--s3_host=172.16.1.13
--s3_dmz=172.16.1.13
# Image service
--glance_host=172.16.1.13
--image_service=nova.image.glance.GlanceImageService
# Misc
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
# VNC Console
--vnc_enabled=true
--vncproxy_url=http://172.16.1.13:6080
--vnc_console_proxy_url=http://172.16.1.13:6080

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Solved by: Vish Ishaya
Revision history for this message
Best answer: Vish Ishaya (vishvananda) said:
#1

the --glance_host and --glance_port flags were replaced with a single flag called
--glance_api_servers

try
--glance_api_servers=172.16.1.13:9292

Vish
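For reference, with that change the "# Image service" section of the nova.conf above would read something like this (9292 being Glance's default API port; adjust if yours differs):

```
# Image service
--glance_api_servers=172.16.1.13:9292
--image_service=nova.image.glance.GlanceImageService
```

Then restart nova-compute on each node so the new flag is picked up.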

P Spencer Davis (p-spencer-davis) said:
#2

The image seems to have copied, but VM creation is still failing with
this error in the compute node's nova-compute log:

2011-07-12 08:00:56,485 DEBUG nova.rpc [-] received
{u'_context_request_id': u'2PY8P4D06ZJCFARZTWNJ',
u'_context_read_deleted': False, u'args': {u'instance_id': 22,
u'request_spec': {u'instance_properties': {u'state_description':
u'scheduling', u'availability_zone': None, u'ramdisk_id': u'',
u'instance_type_id': 5, u'user_data': u'', u'vm_mode': None,
u'reservation_id': u'r-e1he7r0i', u'user_id': u'cscloud',
u'display_description': None, u'key_data': u'ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAAAgQCppUWe3nvWLzC3QStNUtUTu+hM2ZH5EgO3Al6YIdNeA/2MK4F60e54sN5nvjVP6gi0LhMVwM/SmJB7xhfMndvUZr1ajpv6i6ZHhLdNum5qz9A3fojbCG6pko30idbt0v/sc7KxacbH4b8SzNLma+VT1cAFnJKicfBDnL1tFpAHaQ==
nova@dhcp-172-16-1-13\n', u'state': 0, u'project_id': u'base',
u'metadata': {}, u'kernel_id': u'3', u'key_name': u'key-cscloud',
u'display_name': None, u'local_gb': 20, u'locked': False,
u'launch_time': u'2011-07-12T12:00:56Z', u'memory_mb': 2048, u'vcpus':
1, u'image_ref': 4, u'architecture': None, u'os_type': None},
u'instance_type': {u'rxtx_quota': 0, u'deleted_at': None, u'name':
u'm1.small', u'deleted': False, u'created_at': None, u'updated_at':
None, u'memory_mb': 2048, u'vcpus': 1, u'rxtx_cap': 0, u'extra_specs':
{}, u'swap': 0, u'flavorid': 2, u'id': 5, u'local_gb': 20},
u'num_instances': 1, u'filter':
u'nova.scheduler.host_filter.InstanceTypeFilter', u'blob': None},
u'admin_password': None, u'injected_files': None,
u'availability_zone': None}, u'_context_is_admin': True,
u'_context_timestamp': u'2011-07-12T12:00:56Z', u'_context_user':
u'cscloud', u'method': u'run_instance', u'_context_project': u'base',
u'_context_remote_address': u'172.16.1.13'} from (pid=4065)
process_data /usr/lib/pymodules/python2.7/nova/rpc.py:202
2011-07-12 08:00:56,485 DEBUG nova.rpc [-] unpacked context:
{'timestamp': u'2011-07-12T12:00:56Z', 'msg_id': None,
'remote_address': u'172.16.1.13', 'project': u'base', 'is_admin':
True, 'user': u'cscloud', 'request_id': u'2PY8P4D06ZJCFARZTWNJ',
'read_deleted': False} from (pid=4065) _unpack_context
/usr/lib/pymodules/python2.7/nova/rpc.py:451
2011-07-12 08:00:56,558 AUDIT nova.compute.manager
[2PY8P4D06ZJCFARZTWNJ cscloud base] instance 22: starting...
2011-07-12 08:00:56,778 DEBUG nova.rpc [-] Making asynchronous call on
network ... from (pid=4065) multicall
/usr/lib/pymodules/python2.7/nova/rpc.py:481
2011-07-12 08:00:56,778 DEBUG nova.rpc [-] MSG_ID is
1fb1ac62b5794d3bb119c424ab1f3602 from (pid=4065) multicall
/usr/lib/pymodules/python2.7/nova/rpc.py:484
2011-07-12 08:00:56,779 DEBUG nova.rpc [-] Creating new connection
from (pid=4065) create /usr/lib/pymodules/python2.7/nova/rpc.py:105
2011-07-12 08:00:57,383 DEBUG nova.compute.manager [-] instance
network_info: |[[{u'injected': False, u'bridge': u'br_vlan1',
u'cidr_v6': None, u'cidr': u'192.168.1.0/24', u'id': 1}, {u'label':
u'vlan1', u'broadcast': u'192.168.1.255', u'ips': [{u'ip':
u'192.168.1.8', u'netmask': u'255.255.255.0', u'enabled': u'1'}],
u'mac': u'02:16:3e:30:c4:0b', u'rxtx_cap': 0, u'dns': [None],
u'gateway': u'192.168.1.7'}]]| from (pid=4065) _run_instance
/usr/lib/pymodules/python2.7/nova/compute/manager.py:295
2011-07-12 08:00:57,390 DEBUG nova.utils [-] Attempting to grab
semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=4065)
inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:57,390 DEBUG nova.utils [-] Attempting to grab file
lock "ensure_vlan" for method "ensure_vlan"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-12 08:00:57,391 DEBUG nova.utils [-] Running cmd (subprocess):
ip link show dev vlan1 from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,399 DEBUG nova.utils [-] Attempting to grab
semaphore "ensure_bridge" for method "ensure_bridge"... from
(pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:57,400 DEBUG nova.utils [-] Attempting to grab file
lock "ensure_bridge" for method "ensure_bridge"... from (pid=4065)
inner /usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-12 08:00:57,400 DEBUG nova.utils [-] Running cmd (subprocess):
ip link show dev br_vlan1 from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,408 DEBUG nova.utils [-] Running cmd (subprocess):
sudo route -n from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,421 DEBUG nova.utils [-] Running cmd (subprocess):
sudo ip addr show dev vlan1 scope global from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,435 DEBUG nova.utils [-] Running cmd (subprocess):
sudo brctl addif br_vlan1 vlan1 from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,448 DEBUG nova.utils [-] Result was 1 from
(pid=4065) execute /usr/lib/pymodules/python2.7/nova/utils.py:161
2011-07-12 08:00:57,750 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: starting toXML method from (pid=4065) to_xml
/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1035
2011-07-12 08:00:57,829 DEBUG nova.virt.libvirt_conn [-] instance
instance-00000016: finished toXML method from (pid=4065) to_xml
/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1039
2011-07-12 08:00:57,830 INFO nova [-] called setup_basic_filtering in nwfilter
2011-07-12 08:00:57,830 INFO nova [-] ensuring static filters
2011-07-12 08:00:57,844 DEBUG nova.virt.libvirt.firewall [-] iptables
firewall: Setup Basic Filtering from (pid=4065) setup_basic_filtering
/usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:537
2011-07-12 08:00:57,845 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "_do_refresh_provider_fw_rules"...
from (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:57,845 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "_do_refresh_provider_fw_rules"... from
(pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-12 08:00:57,849 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "apply"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:57,849 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "apply"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-12 08:00:57,849 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t filter from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,864 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,879 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t nat from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,894 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,927 DEBUG nova.virt.libvirt.firewall [-] Adding
security group rule:
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x42a9350> from (pid=4065) instance_rules
/usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
2011-07-12 08:00:57,928 DEBUG nova.virt.libvirt.firewall [-] Adding
security group rule:
<nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
0x40a9fd0> from (pid=4065) instance_rules
/usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
2011-07-12 08:00:57,928 DEBUG nova.utils [-] Attempting to grab
semaphore "iptables" for method "apply"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:57,928 DEBUG nova.utils [-] Attempting to grab file
lock "iptables" for method "apply"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:605
2011-07-12 08:00:57,929 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t filter from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,944 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,959 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-save -t nat from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,973 DEBUG nova.utils [-] Running cmd (subprocess):
sudo iptables-restore from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,989 DEBUG nova.utils [-] Running cmd (subprocess):
mkdir -p /var/lib/nova/instances/instance-00000016/ from (pid=4065)
execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:57,998 INFO nova.virt.libvirt_conn [-] instance
instance-00000016: Creating image
2011-07-12 08:00:58,018 DEBUG nova.utils [-] Attempting to grab
semaphore "00000003" for method "call_if_not_exists"... from
(pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:58,018 DEBUG nova.utils [-] Running cmd (subprocess):
cp /var/lib/nova/instances/_base/00000003
/var/lib/nova/instances/instance-00000016/kernel from (pid=4065)
execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:58,042 DEBUG nova.utils [-] Attempting to grab
semaphore "1b6453892473a467d07372d45eb05abc2031647a" for method
"call_if_not_exists"... from (pid=4065) inner
/usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:58,042 DEBUG nova.utils [-] Running cmd (subprocess):
qemu-img create -f qcow2 -o
cluster_size=2M,backing_file=/var/lib/nova/instances/_base/1b6453892473a467d07372d45eb05abc2031647a
/var/lib/nova/instances/instance-00000016/disk from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:58,195 DEBUG nova.utils [-] Attempting to grab
semaphore "local_20" for method "call_if_not_exists"... from
(pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
2011-07-12 08:00:58,196 DEBUG nova.utils [-] Running cmd (subprocess):
truncate /var/lib/nova/instances/_base/local_20 -s 20G from (pid=4065)
execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:58,217 DEBUG nova.utils [-] Running cmd (subprocess):
qemu-img create -f qcow2 -o
cluster_size=2M,backing_file=/var/lib/nova/instances/_base/local_20
/var/lib/nova/instances/instance-00000016/disk.local from (pid=4065)
execute /usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:58,355 INFO nova.virt.libvirt_conn [-] instance
instance-00000016: injecting key into image 4
2011-07-12 08:00:58,356 DEBUG nova.utils [-] Running cmd (subprocess):
sudo qemu-nbd -c /dev/nbd15
/var/lib/nova/instances/instance-00000016/disk from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:59,390 DEBUG nova.utils [-] Running cmd (subprocess):
sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:59,422 DEBUG nova.utils [-] Result was 1 from
(pid=4065) execute /usr/lib/pymodules/python2.7/nova/utils.py:161
2011-07-12 08:00:59,422 DEBUG nova.utils [-] Running cmd (subprocess):
sudo qemu-nbd -d /dev/nbd15 from (pid=4065) execute
/usr/lib/pymodules/python2.7/nova/utils.py:143
2011-07-12 08:00:59,448 WARNING nova.virt.libvirt_conn [-] instance
instance-00000016: ignoring error injecting data into image 4
(Unexpected error while running command.
Command: sudo tune2fs -c 0 -i 0 /dev/nbd15
Exit code: 1
Stdout: 'tune2fs 1.41.14 (22-Dec-2010)\n'
Stderr: "tune2fs: Invalid argument while trying to open
/dev/nbd15\nCouldn't find valid filesystem superblock.\n")
2011-07-12 08:01:31,461 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File
"/usr/lib/pymodules/python2.7/nova/exception.py", line 87, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File
"/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line
591, in spawn
(nova.exception): TRACE: domain = self._create_new_domain(xml)
(nova.exception): TRACE: File
"/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line
1087, in _create_new_domain
(nova.exception): TRACE: domain.createWithFlags(launch_flags)
(nova.exception): TRACE: File
"/usr/lib/python2.7/dist-packages/libvirt.py", line 337, in
createWithFlags
(nova.exception): TRACE: if ret == -1: raise libvirtError
('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error process exited
while connecting to monitor: char device redirected to /dev/pts/1
(nova.exception): TRACE: qemu: could not load kernel
'/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
ioctl for device
(nova.exception): TRACE:
(nova.exception): TRACE:
2011-07-12 08:01:31,463 ERROR nova.compute.manager [-] Instance '22'
failed to spawn. Is virtualization enabled in the BIOS? Details:
internal error process exited while connecting to monitor: char device
redirected to /dev/pts/1
qemu: could not load kernel
'/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
ioctl for device
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File
"/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 311, in
_run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance,
network_info, bd_mapping)
(nova.compute.manager): TRACE: File
"/usr/lib/pymodules/python2.7/nova/exception.py", line 93, in _wrap
(nova.compute.manager): TRACE: raise Error(str(e))
(nova.compute.manager): TRACE: Error: internal error process exited
while connecting to monitor: char device redirected to /dev/pts/1
(nova.compute.manager): TRACE: qemu: could not load kernel
'/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
ioctl for device
(nova.compute.manager): TRACE:
(nova.compute.manager): TRACE:
2011-07-12 08:01:31,864 INFO nova.compute.manager [-] Found instance
'instance-00000016' in DB but no VM. State=5, so setting state to
shutoff.
2011-07-12 08:02:31,868 INFO nova.compute.manager [-] Updating host status
2011-07-12 08:02:31,906 INFO nova.compute.manager [-] Found instance
'instance-00000016' in DB but no VM. State=5, so setting state to
shutoff.

P Spencer Davis (p-spencer-davis) said:
#3

Scratch that: I deleted the files in /var/lib/nova/instances/ and all
is well. Thank you!
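For anyone landing here later: the "could not load kernel ... Inappropriate ioctl for device" failure happened because the zero-byte kernel cached by the earlier failed fetch was still sitting in _base, so qemu was handed an empty file. Clearing the stale zero-byte entries (with no instances running on the node) forces nova to re-fetch them; a minimal sketch, again using a temp directory in place of the real cache path:

```shell
# Stand-in for /var/lib/nova/instances/_base; on a real node make sure
# no instances are running before touching the cache.
CACHE=$(mktemp -d)
touch "$CACHE/00000003"                      # stale zero-byte kernel
head -c 4096 /dev/urandom > "$CACHE/keepme"  # a good cached image
# Delete only the zero-byte entries; intact images are left alone.
find "$CACHE" -maxdepth 1 -type f -size 0 -delete
ls "$CACHE"
```

The blunter fix of deleting everything under /var/lib/nova/instances/ also works on a node with no instances, since nova re-fetches whatever it needs.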

> u'cidr_v6': None, u'cidr': u'192.168.1.0/24', u'id': 1}, {u'label':
> u'vlan1', u'broadcast': u'192.168.1.255', u'ips': [{u'ip':
> u'192.168.1.8', u'netmask': u'255.255.255.0', u'enabled': u'1'}],
> u'mac': u'02:16:3e:30:c4:0b', u'rxtx_cap': 0, u'dns': [None],
> u'gateway': u'192.168.1.7'}]]| from (pid=4065) _run_instance
> /usr/lib/pymodules/python2.7/nova/compute/manager.py:295
> 2011-07-12 08:00:57,390 DEBUG nova.utils [-] Attempting to grab
> semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=4065)
> inner /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:57,390 DEBUG nova.utils [-] Attempting to grab file
> lock "ensure_vlan" for method "ensure_vlan"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:605
> 2011-07-12 08:00:57,391 DEBUG nova.utils [-] Running cmd (subprocess):
> ip link show dev vlan1 from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,399 DEBUG nova.utils [-] Attempting to grab
> semaphore "ensure_bridge" for method "ensure_bridge"... from
> (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:57,400 DEBUG nova.utils [-] Attempting to grab file
> lock "ensure_bridge" for method "ensure_bridge"... from (pid=4065)
> inner /usr/lib/pymodules/python2.7/nova/utils.py:605
> 2011-07-12 08:00:57,400 DEBUG nova.utils [-] Running cmd (subprocess):
> ip link show dev br_vlan1 from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,408 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo route -n from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,421 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo ip addr show dev vlan1 scope global from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,435 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo brctl addif br_vlan1 vlan1 from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,448 DEBUG nova.utils [-] Result was 1 from
> (pid=4065) execute /usr/lib/pymodules/python2.7/nova/utils.py:161
> 2011-07-12 08:00:57,750 DEBUG nova.virt.libvirt_conn [-] instance
> instance-00000016: starting toXML method from (pid=4065) to_xml
> /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1035
> 2011-07-12 08:00:57,829 DEBUG nova.virt.libvirt_conn [-] instance
> instance-00000016: finished toXML method from (pid=4065) to_xml
> /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1039
> 2011-07-12 08:00:57,830 INFO nova [-] called setup_basic_filtering in nwfilter
> 2011-07-12 08:00:57,830 INFO nova [-] ensuring static filters
> 2011-07-12 08:00:57,844 DEBUG nova.virt.libvirt.firewall [-] iptables
> firewall: Setup Basic Filtering from (pid=4065) setup_basic_filtering
> /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:537
> 2011-07-12 08:00:57,845 DEBUG nova.utils [-] Attempting to grab
> semaphore "iptables" for method "_do_refresh_provider_fw_rules"...
> from (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:57,845 DEBUG nova.utils [-] Attempting to grab file
> lock "iptables" for method "_do_refresh_provider_fw_rules"... from
> (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:605
> 2011-07-12 08:00:57,849 DEBUG nova.utils [-] Attempting to grab
> semaphore "iptables" for method "apply"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:57,849 DEBUG nova.utils [-] Attempting to grab file
> lock "iptables" for method "apply"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:605
> 2011-07-12 08:00:57,849 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t filter from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,864 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,879 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t nat from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,894 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,927 DEBUG nova.virt.libvirt.firewall [-] Adding
> security group rule:
> <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
> 0x42a9350> from (pid=4065) instance_rules
> /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
> 2011-07-12 08:00:57,928 DEBUG nova.virt.libvirt.firewall [-] Adding
> security group rule:
> <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at
> 0x40a9fd0> from (pid=4065) instance_rules
> /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:663
> 2011-07-12 08:00:57,928 DEBUG nova.utils [-] Attempting to grab
> semaphore "iptables" for method "apply"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:57,928 DEBUG nova.utils [-] Attempting to grab file
> lock "iptables" for method "apply"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:605
> 2011-07-12 08:00:57,929 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t filter from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,944 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,959 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-save -t nat from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,973 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo iptables-restore from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,989 DEBUG nova.utils [-] Running cmd (subprocess):
> mkdir -p /var/lib/nova/instances/instance-00000016/ from (pid=4065)
> execute /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:57,998 INFO nova.virt.libvirt_conn [-] instance
> instance-00000016: Creating image
> 2011-07-12 08:00:58,018 DEBUG nova.utils [-] Attempting to grab
> semaphore "00000003" for method "call_if_not_exists"... from
> (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:58,018 DEBUG nova.utils [-] Running cmd (subprocess):
> cp /var/lib/nova/instances/_base/00000003
> /var/lib/nova/instances/instance-00000016/kernel from (pid=4065)
> execute /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:58,042 DEBUG nova.utils [-] Attempting to grab
> semaphore "1b6453892473a467d07372d45eb05abc2031647a" for method
> "call_if_not_exists"... from (pid=4065) inner
> /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:58,042 DEBUG nova.utils [-] Running cmd (subprocess):
> qemu-img create -f qcow2 -o
> cluster_size=2M,backing_file=/var/lib/nova/instances/_base/1b6453892473a467d07372d45eb05abc2031647a
> /var/lib/nova/instances/instance-00000016/disk from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:58,195 DEBUG nova.utils [-] Attempting to grab
> semaphore "local_20" for method "call_if_not_exists"... from
> (pid=4065) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
> 2011-07-12 08:00:58,196 DEBUG nova.utils [-] Running cmd (subprocess):
> truncate /var/lib/nova/instances/_base/local_20 -s 20G from (pid=4065)
> execute /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:58,217 DEBUG nova.utils [-] Running cmd (subprocess):
> qemu-img create -f qcow2 -o
> cluster_size=2M,backing_file=/var/lib/nova/instances/_base/local_20
> /var/lib/nova/instances/instance-00000016/disk.local from (pid=4065)
> execute /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:58,355 INFO nova.virt.libvirt_conn [-] instance
> instance-00000016: injecting key into image 4
> 2011-07-12 08:00:58,356 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo qemu-nbd -c /dev/nbd15
> /var/lib/nova/instances/instance-00000016/disk from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:59,390 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:59,422 DEBUG nova.utils [-] Result was 1 from
> (pid=4065) execute /usr/lib/pymodules/python2.7/nova/utils.py:161
> 2011-07-12 08:00:59,422 DEBUG nova.utils [-] Running cmd (subprocess):
> sudo qemu-nbd -d /dev/nbd15 from (pid=4065) execute
> /usr/lib/pymodules/python2.7/nova/utils.py:143
> 2011-07-12 08:00:59,448 WARNING nova.virt.libvirt_conn [-] instance
> instance-00000016: ignoring error injecting data into image 4
> (Unexpected error while running command.
> Command: sudo tune2fs -c 0 -i 0 /dev/nbd15
> Exit code: 1
> Stdout: 'tune2fs 1.41.14 (22-Dec-2010)\n'
> Stderr: "tune2fs: Invalid argument while trying to open
> /dev/nbd15\nCouldn't find valid filesystem superblock.\n")
> 2011-07-12 08:01:31,461 ERROR nova.exception [-] Uncaught exception
> (nova.exception): TRACE: Traceback (most recent call last):
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.7/nova/exception.py", line 87, in _wrap
> (nova.exception): TRACE:     return f(*args, **kw)
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line
> 591, in spawn
> (nova.exception): TRACE:     domain = self._create_new_domain(xml)
> (nova.exception): TRACE:   File
> "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line
> 1087, in _create_new_domain
> (nova.exception): TRACE:     domain.createWithFlags(launch_flags)
> (nova.exception): TRACE:   File
> "/usr/lib/python2.7/dist-packages/libvirt.py", line 337, in
> createWithFlags
> (nova.exception): TRACE:     if ret == -1: raise libvirtError
> ('virDomainCreateWithFlags() failed', dom=self)
> (nova.exception): TRACE: libvirtError: internal error process exited
> while connecting to monitor: char device redirected to /dev/pts/1
> (nova.exception): TRACE: qemu: could not load kernel
> '/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
> ioctl for device
> (nova.exception): TRACE:
> (nova.exception): TRACE:
> 2011-07-12 08:01:31,463 ERROR nova.compute.manager [-] Instance '22'
> failed to spawn. Is virtualization enabled in the BIOS? Details:
> internal error process exited while connecting to monitor: char device
> redirected to /dev/pts/1
> qemu: could not load kernel
> '/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
> ioctl for device
> (nova.compute.manager): TRACE: Traceback (most recent call last):
> (nova.compute.manager): TRACE:   File
> "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 311, in
> _run_instance
> (nova.compute.manager): TRACE:     self.driver.spawn(instance,
> network_info, bd_mapping)
> (nova.compute.manager): TRACE:   File
> "/usr/lib/pymodules/python2.7/nova/exception.py", line 93, in _wrap
> (nova.compute.manager): TRACE:     raise Error(str(e))
> (nova.compute.manager): TRACE: Error: internal error process exited
> while connecting to monitor: char device redirected to /dev/pts/1
> (nova.compute.manager): TRACE: qemu: could not load kernel
> '/var/lib/nova/instances/instance-00000016/kernel': Inappropriate
> ioctl for device
> (nova.compute.manager): TRACE:
> (nova.compute.manager): TRACE:
> 2011-07-12 08:01:31,864 INFO nova.compute.manager [-] Found instance
> 'instance-00000016' in DB but no VM. State=5, so setting state to
> shutoff.
> 2011-07-12 08:02:31,868 INFO nova.compute.manager [-] Updating host status
> 2011-07-12 08:02:31,906 INFO nova.compute.manager [-] Found instance
> 'instance-00000016' in DB but no VM. State=5, so setting state to
> shutoff.
>
>
> On Tue, Jul 12, 2011 at 1:11 AM, Vish Ishaya
> <email address hidden> wrote:
>> Your question #164486 on OpenStack Compute (nova) changed:
>> https://answers.launchpad.net/nova/+question/164486
>>
>>    Status: Open => Answered
>>
>> Vish Ishaya proposed the following answer:
>> the --glance_host and --glance_port flags were replaced with a single flag called
>> --glance_api_servers
>>
>> try
>> --glance_api_servers=172.16.1.13:9292
>>
>> Vish
>>
>>
>> On Jul 11, 2011, at 6:31 PM, P Spencer Davis wrote:
>>
>>> New question #164486 on OpenStack Compute (nova):
>>> https://answers.launchpad.net/nova/+question/164486
>>>
>>>
>>> I have two nodes, one running nova-api, nova-network, nova-volume, nova-
>>> compute and glance, the second is just running nova-compute. The nodes
>>> are running Ubuntu 11.04 server and I've installed from the
>>> ppa.launchpad repository, additionally both nodes use the KVM
>>> hypervisor, and kvm-ok reports that virtualization is enabled in
>>> their BIOS. On the master node, I can start instances and they run just
>>> fine, but when a VM is scheduled on the second node, I receive the
>>> following errors:
>>>
>>> 2011-07-11 08:53:38,013 INFO nova.virt.libvirt_conn [-] instance instance-00000002: Creating image
>>> 2011-07-11 08:53:38,034 DEBUG nova.utils [-] Attempting to grab semaphore "00000001" for method "call_if_not_exists
>>> "... from (pid=6846) inner /usr/lib/pymodules/python2.7/nova/utils.py:600
>>> 2011-07-11 08:53:38,036 ERROR nova.exception [-] Uncaught exception
>>> (nova.exception): TRACE: Traceback (most recent call last):
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 87, in _wrap
>>> (nova.exception): TRACE:     return f(*args, **kw)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 590, in spawn
>>> (nova.exception): TRACE:     block_device_mapping=block_device_mapping)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 815, in _creat
>>> e_image
>>> (nova.exception): TRACE:     project=project)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 751, in _cache
>>> _image
>>> (nova.exception): TRACE:     call_if_not_exists(base, fn, *args, **kwargs)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/utils.py", line 613, in inner
>>> (nova.exception): TRACE:     retval = f(*args, **kwargs)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 749, in call_i
>>> f_not_exists
>>> (nova.exception): TRACE:     fn(target=base, *args, **kwargs)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py", line 762, in _fetch
>>> _image
>>> (nova.exception): TRACE:     images.fetch(image_id, target, user, project)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/virt/images.py", line 44, in fetch
>>> (nova.exception): TRACE:     metadata = image_service.get(elevated, image_id, image_file)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/nova/image/glance.py", line 139, in get
>>> (nova.exception): TRACE:     image_meta, image_chunks = self.client.get_image(image_id)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/client.py", line 98, in get_image
>>> (nova.exception): TRACE:     res = self.do_request("GET", "/images/%s" % image_id)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/client.py", line 54, in do_request
>>> (nova.exception): TRACE:     headers, params)
>>> (nova.exception): TRACE:   File "/usr/lib/pymodules/python2.7/glance/common/client.py", line 148, in do_request
>>> (nova.exception): TRACE:     "server. Got error: %s" % e)
>>> (nova.exception): TRACE: ClientConnectionError: Unable to connect to server. Got error: [Errno 111] ECONNREFUSED
>>> (nova.exception): TRACE:
>>> 2011-07-11 08:53:38,037 ERROR nova.compute.manager [-] Instance '2' failed to spawn. Is virtualization enabled in t
>>> he BIOS? Details: Unable to connect to server. Got error: [Errno 111] ECONNREFUSED
>>>
>>> Looking in /var/lib/nova/instances/_base, there are 0000000# files that
>>> are zero bytes in size.
>>>
>>> The nodes have dual NICs attached to a public 172.16.0.0/16 and a
>>> private 10.0.0.0/8 network, and I was using
>>> http://dodeeric.louvrex.net/?p=225 as an install guide.
>>>
>>> /etc/nova/nova.conf:
>>>
>>> # RabbitMQ
>>> --rabbit_host=172.16.1.13
>>> # MySQL
>>> --sql_connection=mysql://nova:nova@172.16.1.13/nova
>>> # Networking
>>> --network_manager=nova.network.manager.VlanManager
>>> --vlan_interface=eth1
>>> --public_interface=eth0
>>> --network_host=172.16.1.13
>>> --routing_source_ip=172.16.1.13
>>> --fixed_range=10.0.0.0/8
>>> --network_size=1024
>>> --dhcpbridge_flagfile=/etc/nova/nova.conf
>>> --dhcpbridge=/usr/bin/nova-dhcpbridge
>>> # Virtualization
>>> --libvirt_type=kvm
>>> # Volumes
>>> --iscsi_ip_prefix=172.16.1.13
>>> --num_targets=100
>>> # APIs
>>> --auth_driver=nova.auth.dbdriver.DbDriver
>>> --cc_host=172.16.1.13
>>> --ec2_url=http://172.16.1.13:8773/services/Cloud
>>> --s3_host=172.16.1.13
>>> --s3_dmz=172.16.1.13
>>> # Image service
>>> --glance_host=172.16.1.13
>>> --image_service=nova.image.glance.GlanceImageService
>>> # Misc
>>> --logdir=/var/log/nova
>>> --state_path=/var/lib/nova
>>> --lock_path=/var/lock/nova
>>> --verbose
>>> # VNC Console
>>> --vnc_enabled=true
>>> --vncproxy_url=http://172.16.1.13:6080
>>> --vnc_console_proxy_url=http://172.16.1.13:6080
>>>

Revision history for this message
P Spencer Davis (p-spencer-davis) said :
#4

reporting solved
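For the record, the change Vish describes amounts to editing /etc/nova/nova.conf on each compute node and restarting nova-compute. A sketch based on this thread only — the IP is specific to this deployment, and flag names may differ in other nova releases:

```
# Image service (old flags, no longer recognized by this nova release):
# --glance_host=172.16.1.13
# --glance_port=9292
# Replacement: a single flag listing one or more glance API endpoints
--glance_api_servers=172.16.1.13:9292
--image_service=nova.image.glance.GlanceImageService
```

With the old flags, nova-compute on the second node fell back to contacting glance on localhost, which explains the ECONNREFUSED in the first traceback and the resulting zero-byte images in _base.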

Revision history for this message
P Spencer Davis (p-spencer-davis) said :
#5

Thanks Vish Ishaya, that solved my question.