"nova-manage service list" fails with the Diablo release

Asked by van dinh phuc

Hi,
I installed OpenStack Diablo on 2 nodes following the same instructions, and "sudo nova-manage db sync" ran without any issue.
But when I try the command "sudo nova-manage service list", the following error shows up in nova-manage.log:

2011-09-30 16:04:04,080 INFO nova.db.sqlalchemy [-] Using mysql/eventlet db_pool.
2011-09-30 16:04:09,093 CRITICAL nova [-]
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "/usr/bin/nova-manage", line 2141, in <module>
(nova): TRACE: main()
(nova): TRACE: File "/usr/bin/nova-manage", line 2129, in main
(nova): TRACE: fn(*fn_args, **fn_kwargs)
(nova): TRACE: File "/usr/bin/nova-manage", line 1010, in list
(nova): TRACE: services = db.service_get_all(ctxt)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/nova/db/api.py", line 94, in service_get_all
(nova): TRACE: return IMPL.service_get_all(context, disabled)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/api.py", line 101, in wrapper
(nova): TRACE: return f(*args, **kwargs)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/api.py", line 186, in service_get_all
(nova): TRACE: session = get_session()
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/session.py", line 53, in get_session
(nova): TRACE: _ENGINE = get_engine()
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/nova/db/sqlalchemy/session.py", line 87, in get_engine
(nova): TRACE: creator = eventlet.db_pool.ConnectionPool(MySQLdb, **pool_args)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/db_pool.py", line 50, in __init__
(nova): TRACE: order_as_stack=True)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/pools.py", line 108, in __init__
(nova): TRACE: self.free_items.append(self.create())
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/db_pool.py", line 246, in create
(nova): TRACE: **self._kwargs)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/db_pool.py", line 253, in connect
(nova): TRACE: conn = tpool.execute(db_module.connect, *args, **kw)
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/tpool.py", line 119, in execute
(nova): TRACE: rv = e.wait()
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/event.py", line 116, in wait
(nova): TRACE: return hubs.get_hub().switch()
(nova): TRACE: File "/usr/lib/python2.6/dist-packages/eventlet/hubs/hub.py", line 177, in switch
(nova): TRACE: return self.greenlet.switch()
(nova): TRACE: ConnectTimeout
(nova): TRACE:

It looks like the same issue as the error reported here: (http://<email address hidden>/msg04120.html)
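To rule nova itself out, the same MySQL connection that eventlet's db_pool opens can be attempted directly from Python. A minimal sketch, with placeholder host and credentials standing in for the actual sql_connection values:

    # Standalone check of the connection eventlet's db_pool wraps.
    # Host, user, password, and db are placeholders; substitute the
    # values from the sql_connection flag in nova.conf.
    import MySQLdb

    try:
        conn = MySQLdb.connect(host='10.2.76.3', user='nova',
                               passwd='secret', db='nova',
                               connect_timeout=5)
        print('connected: %s' % conn.get_server_info())
        conn.close()
    except MySQLdb.OperationalError as exc:
        # A timeout here points at reachability/auth, not nova.
        print('connection failed: %s' % exc)

If this also times out, the problem is network reachability or MySQL configuration rather than anything in nova.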

More information

node 1:
nova.conf
http://pastebin.com/YkdRHN55
-----------------------------------------
root@openstack2:/home/cloud# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:1d:60:e8:75:74 brd ff:ff:ff:ff:ff:ff
inet 10.2.76.3/24 brd 10.2.76.255 scope global eth0
inet6 fe80::21d:60ff:fee8:7574/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:1d:60:e8:76:6e brd ff:ff:ff:ff:ff:ff
inet 10.2.77.3/24 brd 10.2.77.255 scope global eth1
inet6 fe80::21d:60ff:fee8:766e/64 scope link
valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:1d:60:e8:72:99 brd ff:ff:ff:ff:ff:ff
5: eth3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:1d:60:e8:74:29 brd ff:ff:ff:ff:ff:ff
6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 8a:50:e4:60:e4:49 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

node 2:
nova.conf
http://pastebin.com/Has60MVg
------------------------------------------------------------------------
cloud4@openstack4:~$ ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:14:5e:7b:aa:8e brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:14:5e:7b:aa:8f brd ff:ff:ff:ff:ff:ff
inet6 fe80::214:5eff:fe7b:aa8f/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:11:0a:61:0e:a0 brd ff:ff:ff:ff:ff:ff
inet 10.2.76.4/24 brd 10.2.76.255 scope global eth2
inet6 fe80::211:aff:fe61:ea0/64 scope link
valid_lft forever preferred_lft forever
5: eth3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:11:0a:61:0e:a1 brd ff:ff:ff:ff:ff:ff
inet 10.2.77.4/24 brd 10.2.77.255 scope global eth3
inet6 fe80::211:aff:fe61:ea1/64 scope link
valid_lft forever preferred_lft forever
6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 0e:41:57:f3:05:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

Thanks,
Phucvdb

Question information

Language:
English
Status:
Solved
For:
OpenStack Compute (nova)
Solved by:
van dinh phuc
Nachi Ueno (nati-ueno) said:
#1

You can try a small value for sql_min_pool_size:

--sql_min_pool_size=1
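For context, in a Diablo-era flagfile the option sits next to the connection string. A minimal sketch of the relevant nova.conf lines, with a placeholder host and password rather than the real values from the pastebins:

    # placeholder credentials/host; use your actual sql_connection value
    --sql_connection=mysql://nova:secret@10.2.76.3/nova
    --sql_min_pool_size=1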

van dinh phuc (phucvdb) said:
#2

@Nachi Ueno: I added your flag to my nova.conf, but the same error still shows up in my nova-manage.log.

van dinh phuc (phucvdb) said:
#3

I changed the VLAN from 10.2.77.0/24 to 172.16.1.0/24, and the command "sudo nova-manage service list" now runs successfully without any error in my nova-manage.log.
I don't understand this issue. Is this a bug, or did I do something wrong?
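Since only the subnet changed, one way to see which path is failing is a raw TCP probe of the MySQL port from the compute node. A minimal sketch, with placeholder addresses standing in for the database host on each network:

    # Probe TCP reachability of MySQL (port 3306) with a short timeout.
    # The addresses are placeholders for the DB host on each subnet.
    import socket

    for host in ('10.2.76.3', '10.2.77.3'):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        result = s.connect_ex((host, 3306))  # 0 means the port answered
        print('%s:3306 -> %s' % (host, 'open' if result == 0 else 'errno %d' % result))
        s.close()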

Paul Belanger (pabelanger) said:
#4

Check your DNS settings. I was having the same problem until I figured out my /etc/resolv.conf was pointing to a non-existent nameserver.

Once I configured it properly, everything started working again.
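
To check for the nameserver problem Paul describes, verify that the hostname used in sql_connection resolves at all. A minimal sketch; 'openstack2' is a placeholder for whatever hostname your config actually uses:

    # A broken /etc/resolv.conf makes this lookup hang or fail, which can
    # surface higher up as the ConnectTimeout seen in nova-manage.log.
    import socket

    hostname = 'openstack2'  # placeholder: use the host from sql_connection
    try:
        print('%s -> %s' % (hostname, socket.gethostbyname(hostname)))
    except socket.gaierror as exc:
        print('resolution failed: %s' % exc)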