Failed to run instance in second nova-compute node

Asked by neo0 on 2012-05-20

I'm trying to install OpenStack across 2 nodes.

Each node has only one network interface, eth0. I configured nova-network in VlanManager mode.

Node 1 (a physical machine with IP 172.17.2.203) runs all components: keystone, glance, nova (including nova-compute), and horizon. I can run instances on this host; everything is okay. Then I installed another nova-compute on node 2.

Node 2 (a virtual machine on ESX with IP 172.17.2.202) runs only nova-compute. When I check with # nova-manage service list, it reports that the nova-compute service on node 2 is up (state :-)).

But in the dashboard, the services list doesn't show the compute service from node 2, and when I try to run a new instance on node 2, it fails with an error while 'Spawning' the instance.

For example, when I run an instance with the Ubuntu Lucid UEC image and then check its information with $ nova show lucid2, I get this error (the instance was created on node 2):

    {u'message': u'libvirtError', u'code': 500, u'created': u'2012-05-20T07:26:44Z'}

So I checked nova-compute.log on node 2; here is the relevant part of the error:

2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] Traceback (most recent call last):
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] self._legacy_nw_info(network_info), block_device_info)
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] return f(*args, **kw)
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] self._create_new_domain(xml)
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] domain.createWithFlags(launch_flags)
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/2
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] inet_listen_opts: bind(ipv4,172.17.2.203,5900): Cannot assign requested address
2012-05-20 14:26:44 TRACE nova.compute.manager [instance: a053fae3-0798-4fef-b596-e08d32756322] inet_listen_opts: FAILED
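The decisive line in this trace is the bind failure at the bottom: qemu's VNC listener is being told to bind to 172.17.2.203 (node 1's address) while running on node 2, where that IP does not exist. A minimal sketch (hypothetical, not part of the original report) reproduces the same OS error by binding to an address that is not configured locally; 192.0.2.1 is a TEST-NET address that is never assigned to a local interface:

```python
import errno
import socket

def try_bind(ip, port):
    # Attempt to bind a TCP socket to ip:port and report the errno name,
    # mirroring what qemu's inet_listen_opts does for the VNC listener.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return "OK"
    except OSError as e:
        return errno.errorcode.get(e.errno, str(e.errno))
    finally:
        s.close()

print(try_bind("127.0.0.1", 0))     # a local address binds fine
print(try_bind("192.0.2.1", 5900))  # non-local address: EADDRNOTAVAIL,
                                    # i.e. "Cannot assign requested address"
```

The second call fails exactly the way qemu does in the log: the kernel refuses to bind a listener to an IP it does not own.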

So I thought the problem came from node 2, because libvirt cannot create and run a new virtual machine there (node 2 is itself a virtual machine on ESX). But I configured nova-compute to use QEMU, not KVM, so I thought it should work.

Here is my /etc/nova/nova.conf:

##### RabbitMQ #####
--rabbit_host=172.17.2.203

##### MySQL #####
--sql_connection=mysql://nova:nova@172.17.2.203/nova_db

##### nova-api #####
--auth_strategy=keystone
--cc_host=172.17.2.203

##### nova-network #####
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--vlan_interface=eth0
--network_host=172.17.2.203
--fixed_range=10.0.0.0/8
--network_size=1024
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release=True
--fixed_ip_disassociate_timeout=30

##### nova-compute #####
--connection_type=libvirt
--libvirt_type=kvm # if you demo on VirtualBox, use 'qemu'
--libvirt_use_virtio_for_bridges=True
--use_cow_images=True
--snapshot_image_format=qcow2

##### nova-volume #####
--iscsi_ip_prefix=172.17.2.203
--num_targets=100
--iscsi_helper=tgtadm

##### glance #####
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.17.2.203:9292

##### VNC #####
--novnc_enabled=true
--novncproxy_base_url=http://172.17.2.203:6080/vnc_auto.html
--vncserver_proxyclient_address=172.17.2.203
--vncserver_listen=172.17.2.203

##### Misc #####
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--root_helper=sudo nova-rootwrap
--verbose

And all the configuration I used to install node 1 is in a note on my blog:

http://nphilo.blogspot.com/2012/05/install-openstack-essex-on-ubuntu-1204.html

Please give me some suggestions.

Thank you!

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: JuanFra Rodriguez Cardoso
Solved: 2012-05-24
Last query: 2012-05-24
Last reply: 2012-05-23

Hi neo:

I have the same problem, although my nova-compute nodes run KVM.
I've tried deleting the '/var/lib/nova/_base' directory, but it does not solve the problem.

I have also seen different ways of referencing the Glance service in the 'nova.conf' file:

--image_service=nova.image.glance.GlanceImageService
OR
--image_service=nova.image.glance.GlanceImage

Which is the correct form? Are both valid?

Does anyone know how to solve this error (the one neo0 mentioned above)?

Thanks!

Sorry, I forgot to attach the error log:

instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] Instance failed to spawn
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] Traceback (most recent call last):
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] self._legacy_nw_info(network_info), block_device_info)
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] return f(*args, **kw)
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] self._create_new_domain(xml)
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] domain.createWithFlags(launch_flags)
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/0
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] inet_listen_opts: bind(ipv4,163.117.148.131,5900): Cannot assign requested address
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] inet_listen_opts: FAILED
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277]
2012-05-23 17:18:02 TRACE nova.compute.manager [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277]
2012-05-23 17:18:02 DEBUG nova.compute.manager [req-fa3714d3-3ed1-4810-8cef-20b1be3d4192 dc570e363f03431b9c4d1fc0c2a92991 02fd2450caeb4b429ca8cbb41f1f96be] [instance: f1d6eabb-aae2-4500-9b77-e18bdeff6277] Deallocating network for instance from (pid=2161) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
2012-05-23 17:18:02 DEBUG nova.rpc.amqp [req-fa3714d3-3ed1-4810-8cef-20b1be3d4192 dc570e363f03431b9c4d1fc0c2a92991 02fd2450caeb4b429ca8cbb41f1f96be] Making asynchronous cast on network... from (pid=2161) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
2012-05-23 17:18:03 ERROR nova.rpc.amqp [req-fa3714d3-3ed1-4810-8cef-20b1be3d4192 dc570e363f03431b9c4d1fc0c2a92991 02fd2450caeb4b429ca8cbb41f1f96be] Exception during message handling
2012-05-23 17:18:03 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-05-23 17:18:03 TRACE nova.rpc.amqp rval = node_func(context=ctxt, **node_args)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-23 17:18:03 TRACE nova.rpc.amqp return f(*args, **kw)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
2012-05-23 17:18:03 TRACE nova.rpc.amqp sys.exc_info())
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-05-23 17:18:03 TRACE nova.rpc.amqp self.gen.next()
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
2012-05-23 17:18:03 TRACE nova.rpc.amqp return function(self, context, instance_uuid, *args, **kwargs)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 651, in run_instance
2012-05-23 17:18:03 TRACE nova.rpc.amqp do_run_instance()
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
2012-05-23 17:18:03 TRACE nova.rpc.amqp retval = f(*args, **kwargs)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 650, in do_run_instance
2012-05-23 17:18:03 TRACE nova.rpc.amqp self._run_instance(context, instance_uuid, **kwargs)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 451, in _run_instance
2012-05-23 17:18:03 TRACE nova.rpc.amqp self._set_instance_error_state(context, instance_uuid)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-05-23 17:18:03 TRACE nova.rpc.amqp self.gen.next()
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 432, in _run_instance
2012-05-23 17:18:03 TRACE nova.rpc.amqp self._deallocate_network(context, instance)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-05-23 17:18:03 TRACE nova.rpc.amqp self.gen.next()
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 429, in _run_instance
2012-05-23 17:18:03 TRACE nova.rpc.amqp injected_files, admin_password)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
2012-05-23 17:18:03 TRACE nova.rpc.amqp self._legacy_nw_info(network_info), block_device_info)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-05-23 17:18:03 TRACE nova.rpc.amqp return f(*args, **kw)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
2012-05-23 17:18:03 TRACE nova.rpc.amqp self._create_new_domain(xml)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
2012-05-23 17:18:03 TRACE nova.rpc.amqp domain.createWithFlags(launch_flags)
2012-05-23 17:18:03 TRACE nova.rpc.amqp File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
2012-05-23 17:18:03 TRACE nova.rpc.amqp if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2012-05-23 17:18:03 TRACE nova.rpc.amqp libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/0
2012-05-23 17:18:03 TRACE nova.rpc.amqp inet_listen_opts: bind(ipv4,163.117.148.131,5900): Cannot assign requested address
2012-05-23 17:18:03 TRACE nova.rpc.amqp inet_listen_opts: FAILED

Hi again:

I think the problem is related to the novnc service. I removed the novnc service (and its dependencies) and launched a new instance. The instance was deployed with no problems.

The error line 'inet_listen_opts: bind(ipv4,172.17.2.203,5900): Cannot assign requested address' comes from the VNC server: it cannot bind a new VNC session to that IP:port (5900/tcp). A possible solution would be to assign a new port automatically for each new connection.
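A possible alternative to removing novnc entirely, sketched here as an untested guess based on the bind error above: the VNC flags in the nova.conf earlier in the thread hardcode node 1's IP on every node, so each compute node would instead need vncserver_* values pointing at an address that actually exists on that node. For node 2 (172.17.2.202) that might look like:

```
##### VNC (per-node values; node 2 shown) #####
--novnc_enabled=true
--novncproxy_base_url=http://172.17.2.203:6080/vnc_auto.html
--vncserver_proxyclient_address=172.17.2.202
--vncserver_listen=172.17.2.202
```

With vncserver_listen set to the node's own IP (or 0.0.0.0), qemu can bind its VNC port locally, while the novnc proxy URL still points at the controller.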

Regards!

neo0 (tungns-inf) said : #4

@Juan: You're great!
When I removed novnc, it works. Thank you!

I have only one problem left: in nova-network, for example, I have instance_1 on node1 (controller + nova-compute) with fixed IP 10.0.1.2, and instance_2 on node2 (only nova-compute) with fixed IP 10.0.1.3.

From node1 (the controller) I can ping these 10.0.1.x IPs, but from node2 I can't. Floating IPs are okay.

When I run: $ ip addr

On node1:

br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether fa:16:3e:7b:66:79 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.1/24 brd 10.0.1.255 scope global br1
    inet6 fe80::64f2:cfff:fe89:b0ec/64 scope link
       valid_lft forever preferred_lft forever

but on node2:

br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 08:00:27:f3:57:4f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::980e:edff:fe79:da7e/64 scope link
       valid_lft forever preferred_lft forever

I configured nova-network with VlanManager. How can I make node2's VLAN bridge get a fixed IP?

neo0 (tungns-inf) said : #5

Thanks Juan F. Rodriguez, that solved my question.

Hi neo0:

I was asking myself the same question.
I think the multi_host option could be a solution, since 'node2' (the worker node) will also need the 'nova-network' service.
I'm going to try this configuration and will add a comment with the results, ok?

Regards!
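For reference, a minimal sketch of the multi-host setup mentioned above, assuming the standard Essex flags (untested here): install and run nova-network on every compute node and enable multi-host mode, so each node creates the VLAN bridge and acts as the gateway for its own instances:

```
##### nova-network on every compute node #####
--multi_host=True
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth0
--public_interface=eth0
```

A fixed network that already exists may also need its multi-host attribute set; in Essex, nova-manage network create accepts a --multi_host flag, so the network might have to be recreated with that option. Check the documentation for your exact version before applying this.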