Cannot access second and subsequent instances via ssh/ping etc

Asked by arturo lorenzo

I launched an instance and it runs OK; I can access it via ssh/ping etc. When I launch a second instance from the same image, I cannot access it via ssh/ping, even though both instances show as running OK.
The console of the second instance shows:
=============================================
2011-02-25 16:09:20,916 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id

2011-02-25 16:09:20,916 - DataSourceEc2.py[WARNING]: 16:09:20 [ 1/100]: url error [timed out]

2011-02-25 16:10:26,875 - DataSourceEc2.py[WARNING]: 16:10:26 [17/100]: url error [[Errno 113] No route to host]

2011-02-25 16:50:42,117 - DataSourceEc2.py[CRITICAL]: giving up on md after 1213 seconds

Traceback (most recent call last):
  File "/usr/bin/cloud-init-cfg", line 116, in <module>
    main()
  File "/usr/bin/cloud-init-cfg", line 63, in main
    cc = cloudinit.CloudConfig.CloudConfig(cfg_path)
  File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig/__init__.py", line 35, in __init__
    self.cloud.get_data_source()
  File "/usr/lib/python2.6/dist-packages/cloudinit/__init__.py", line 270, in get_data_source
    log.debug("did not find data source from %s" % dslist)
TypeError: not all arguments converted during string formatting

=============================================

I have not been able to access via ssh/ping any instance launched after the first one, which is still running OK.
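(The TypeError at the end of the traceback is a side issue in this cloud-init version's logging call rather than the root cause; the real failure is that the metadata service never answers. The error appears in Python 2 whenever the right-hand side of % is a multi-element tuple, which suggests dslist is one here; a minimal reproduction, with made-up data source names:

python -c '"did not find data source from %s" % ("DataSourceEc2", "DataSourceNoCloud")'
# raises: TypeError: not all arguments converted during string formatting
python -c 'print("did not find data source from %s" % (("DataSourceEc2", "DataSourceNoCloud"),))'
# wrapping the tuple in a 1-tuple formats it as a single value)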

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: arturo lorenzo
Revision history for this message
Christian Berendt (berendt) said :
#1

Please paste your nova.conf and your logfiles (I think the logs of nova-network and nova-compute are interesting in this case) to http://paste.openstack.org and post the URLs here.
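If the web form rejects large logs, the pastebinit client can post them from the command line; a sketch, assuming pastebinit is installed and the logs are under the default /var/log/nova:

sudo apt-get install pastebinit
pastebinit -b http://paste.openstack.org /var/log/nova/nova-network.log
pastebinit -b http://paste.openstack.org /var/log/nova/nova-compute.log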

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#2

Here is the nova.conf file. I failed to post the logs using paste.openstack.org; is there another way?
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@13.141.230.64/nova
--s3_host=13.141.230.64
--rabbit_host=13.141.230.64
--ec2_host=13.141.230.64
--ec2_url=http://13.141.230.64:8773/services/Cloud
--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=13.141.230.96/28
--flat_network_dhcp_start=13.141.230.96
--flat_network_dns=13.141.230.8
--dns_server=13.141.230.8
--flat_injected=False
--network_size=16
--fixed_ip_disassociate_timeout=60

thanks

Revision history for this message
Vish Ishaya (vishvananda) said :
#3

Are you seeing the second IP get leased properly? (You should see lease reports every two minutes from each instance in nova-network.)
It seems like you might have an IP conflict. If you run describe_instances, do the instances show different IPs?
Are you sure there isn't an old instance (or another machine on the network) trying to claim the same IP?

Vish
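A quick way to check all three points, as a sketch (the log path, bridge name, and probed address are assumptions based on this thread):

grep -i lease /var/log/nova/nova-network.log | tail    # lease reports from each instance
euca-describe-instances                                # do the instances show distinct fixed IPs?
arping -D -I br100 13.141.230.98                       # duplicate-address probe for a suspect IP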

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#4

Vish,
now my first instance is not running anymore either. I think the problem is that the instance keeps trying to reach the URL http://169.254.169.254/2009-04-04/meta-data/instance-id; after a while the log switches to CRITICAL and stays there forever.
The range of IPs is unique and dedicated to the instances only. I also see a progressively higher IP on each new instance, but all of them keep trying to reach that URL. I have entered the iptables command on the compute node.
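The failing check can also be reproduced by hand; a sketch of what to test, with the URL taken from the log and the bridge name assumed:

curl -m 5 http://169.254.169.254/2009-04-04/meta-data/instance-id   # from inside the guest
ip addr show dev br100 | grep 169.254                               # on the network host: is the address bound?
ip route get 169.254.169.254                                        # does the host have a route to it?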
see the beginning and the end of the attached log:
===================
cloud-init start running: Tue, 01 Mar 2011 03:36:54 +0000. up 1.96 seconds
2011-03-01 03:36:56,243 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id

2011-03-01 03:36:56,244 - DataSourceEc2.py[WARNING]: 03:36:56 [ 1/100]: url error [timed out]

2011-03-01 03:36:57,250 - DataSourceEc2.py[WARNING]: 03:36:57 [ 2/100]: url error [[Errno 113] No route to host]

2011-03-01 03:37:00,256 - DataSourceEc2.py[WARNING]: 03:37:00 [ 3/100]: url error [timed out]

......

2011-03-01 04:17:48,760 - DataSourceEc2.py[WARNING]: 04:17:48 [100/100]: url error [timed out]

2011-03-01 04:18:08,782 - DataSourceEc2.py[CRITICAL]: giving up on md after 1208 seconds

Traceback (most recent call last):
  File "/usr/bin/cloud-init-cfg", line 116, in <module>
    main()
  File "/usr/bin/cloud-init-cfg", line 63, in main
    cc = cloudinit.CloudConfig.CloudConfig(cfg_path)
  File "/usr/lib/python2.6/dist-packages/cloudinit/CloudConfig/__init__.py", line 35, in __init__
    self.cloud.get_data_source()
  File "/usr/lib/python2.6/dist-packages/cloudinit/__init__.py", line 270, in get_data_source
    log.debug("did not find data source from %s" % dslist)
TypeError: not all arguments converted during string formatting

========
Here is my instance:
euca-describe-instances
RESERVATION r-gvmuusmn simple_cloud default
INSTANCE i-00000002 ami-zjnxlmfr 13.141.230.98 13.141.230.98 running mykey (simple_cloud, cc1) 0 m1.small 2011-03-01T03:36:25Z nova

thanks!

Revision history for this message
Vish Ishaya (vishvananda) said :
#5

The iptables command should be run on the network node, and it is done automatically by FlatDHCP. Also, you may need to add 169.254.169.254 to the network node to allow some guests to reach it properly (this generally happens with desktop images).
For example:
ip addr add 169.254.169.254/32 scope link dev br100
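To confirm the setup afterwards, you can check that the address is bound and that the DNAT redirect FlatDHCP installs for the metadata service is present; a quick check (rule shape assumed):

ip addr show dev br100 | grep 169.254
sudo iptables -t nat -L PREROUTING -n | grep 169.254.169.254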

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#6

Still can't run an instance. I have reinstalled nova from scratch and I am using this nova.conf:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@192.168.1.194/nova
--s3_host=192.168.1.194
--rabbit_host=192.168.1.194
--ec2_host=192.168.1.194
--ec2_url=http://192.168.1.194:8773/services/Cloud
--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=10.0.0.0/24
--flat_network_dhcp_start=10.0.0.2
--flat_network_dns=209.18.47.61
--flat_injected=False
--fixed_ip_disassociate_timeout=60
--network_size=16
--dns_server=209.18.47.61
and when I run an instance I get this:
=============================================================
2011-03-01 20:53:28,561 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute from (pid=8161) __init__ /usr/lib/pymodules/python2.6/nova/rpc.py:160
2011-03-01 20:53:28,607 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute.ubuntu-svr1 from (pid=8161) __init__ /usr/lib/pymodules/python2.6/nova/rpc.py:160
2011-03-01 21:09:47,520 DEBUG nova.rpc [-] received {u'_context_request_id': u'Y2LX5RC-4MKBFUPRCGU7', u'_context_read_deleted': False, u'args': {u'instance_id': 1, u'onset_files': None, u'availability_zone': None}, u'_context_is_admin': True, u'_context_timestamp': u'2011-03-02T02:09:46Z', u'_context_user': u'admin', u'method': u'run_instance', u'_context_project': u'simple_cloud', u'_context_remote_address': u'192.168.1.194'} from (pid=8161) _receive /usr/lib/pymodules/python2.6/nova/rpc.py:179
2011-03-01 21:09:47,521 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-03-02T02:09:46Z', 'remote_address': u'192.168.1.194', 'project': u'simple_cloud', 'is_admin': True, 'user': u'admin', 'request_id': u'Y2LX5RC-4MKBFUPRCGU7', 'read_deleted': False} from (pid=8161) _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:295
2011-03-01 21:09:47,581 DEBUG nova.virt.libvirt_conn [-] Connecting to libvirt: qemu:///system from (pid=8161) _get_connection /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:183
2011-03-01 21:09:47,770 AUDIT nova.compute.manager [Y2LX5RC-4MKBFUPRCGU7 admin simple_cloud] instance 1: starting...
2011-03-01 21:09:47,900 DEBUG nova.rpc [-] Making asynchronous call... from (pid=8161) call /usr/lib/pymodules/python2.6/nova/rpc.py:314
2011-03-01 21:09:47,900 DEBUG nova.rpc [-] MSG_ID is 6324edb1992c48c6a04dc70c1784b372 from (pid=8161) call /usr/lib/pymodules/python2.6/nova/rpc.py:317
2011-03-01 21:09:49,440 DEBUG nova.rpc [-] Making asynchronous call... from (pid=8161) call /usr/lib/pymodules/python2.6/nova/rpc.py:314
2011-03-01 21:09:49,441 DEBUG nova.rpc [-] MSG_ID is 310a43e781c643878c2dbaa90df86c82 from (pid=8161) call /usr/lib/pymodules/python2.6/nova/rpc.py:317
2011-03-01 21:09:49,944 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br100 from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:49,971 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables --delete FORWARD --in-interface br100 -j ACCEPT from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:49,982 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables -I FORWARD --in-interface br100 -j ACCEPT from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:49,997 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables --delete FORWARD --out-interface br100 -j ACCEPT from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,009 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables -I FORWARD --out-interface br100 -j ACCEPT from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,020 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables -N nova-local from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,030 DEBUG nova.utils [-] Result was 1 from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:145
2011-03-01 21:09:50,036 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables --delete FORWARD -j nova-local from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,046 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables -I FORWARD -j nova-local from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,120 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: starting toXML method from (pid=8161) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:665
2011-03-01 21:09:50,262 DEBUG nova.virt.libvirt_conn [-] instance instance-00000001: finished toXML method from (pid=8161) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:733
2011-03-01 21:09:50,318 INFO nova [-] called setup_basic_filtering in nwfilter
2011-03-01 21:09:50,318 INFO nova [-] ensuring static filters
2011-03-01 21:09:50,342 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,506 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x3e77450>
2011-03-01 21:09:50,506 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x3e77a90>
2011-03-01 21:09:50,506 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x3e77550>
2011-03-01 21:09:50,507 INFO nova [-] <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x3e77590>
2011-03-01 21:09:50,507 INFO nova [-] new_filter: # Generated by iptables-save v1.4.4 on Tue Mar 1 21:09:50 2011
*filter
:INPUT ACCEPT [204991:205122195]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [203774:202929089]
:nova-fallback - [0:0]
:nova-local - [0:0]
:nova-inst-1 - [0:0]
:nova-sg-1 - [0:0]
-A nova-fallback -j DROP
-A FORWARD -j nova-local
-A OUTPUT -j nova-local
-A nova-local -d 10.0.0.2 -j nova-inst-1
-A nova-inst-1 -m state --state INVALID -j DROP
-A nova-inst-1 -m state --state ESTABLISHED,RELATED -j ACCEPT
-A nova-inst-1 -j nova-sg-1
-A nova-inst-1 -s 10.0.0.1 -p udp --sport 67 --dport 68 -j ACCEPT
-A nova-inst-1 -s 10.0.0.0/28 -j ACCEPT
-A nova-inst-1 -j nova-fallback
-A nova-sg-1 -p icmp -s 0.0.0.0/0 -j ACCEPT
-A nova-sg-1 -p tcp -s 0.0.0.0/0 --dport 22 -j ACCEPT
-A nova-sg-1 -p tcp -s 0.0.0.0/0 --dport 80 -j ACCEPT
-A nova-sg-1 -p tcp -s 0.0.0.0/0 --dport 8080 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -o br100 -j ACCEPT
-A FORWARD -i br100 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Tue Mar 1 21:09:50 2011

2011-03-01 21:09:50,507 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,529 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000001/ from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,556 INFO nova.virt.libvirt_conn [-] instance instance-00000001: Creating image
2011-03-01 21:09:50,631 DEBUG nova.utils [-] Running cmd (subprocess): /usr/bin/curl --fail --silent http://192.168.1.194:3333/_images/ami-vn7lvhfv/image -H 'Date: Wed, 02 Mar 2011 02:09:50 GMT' -H 'Authorization: AWS 41575ff3-7a24-4f10-b542-1e3bbbd37447:simple_cloud:/NF+EeA2G4fh3Pt/E8T+OKE6xso=' -o /var/lib/nova/instances/_base/ami-vn7lvhfv from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:50,903 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/ami-vn7lvhfv /var/lib/nova/instances/instance-00000001/kernel from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:09:51,110 DEBUG nova.utils [-] Running cmd (subprocess): /usr/bin/curl --fail --silent http://192.168.1.194:3333/_images/ami-z21or64q/image -H 'Date: Wed, 02 Mar 2011 02:09:51 GMT' -H 'Authorization: AWS 41575ff3-7a24-4f10-b542-1e3bbbd37447:simple_cloud:fgHqRF4QU7vgVa3RwpMfdxiFHVc=' -o /var/lib/nova/instances/_base/ami-z21or64q_sm from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:12,024 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/ami-z21or64q_sm /var/lib/nova/instances/instance-00000001/disk from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:13,424 INFO nova.virt.libvirt_conn [-] instance instance-00000001: injecting key into image ami-z21or64q
2011-03-01 21:10:16,727 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000001/disk from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:17,993 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,391 DEBUG nova.utils [-] Running cmd (subprocess): sudo mount /dev/nbd15 /tmp/tmpJwN7gO from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,485 DEBUG nova.utils [-] Running cmd (subprocess): sudo mkdir -p /tmp/tmpJwN7gO/root/.ssh from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,507 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown root /tmp/tmpJwN7gO/root/.ssh from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,566 DEBUG nova.utils [-] Running cmd (subprocess): sudo chmod 700 /tmp/tmpJwN7gO/root/.ssh from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,584 DEBUG nova.utils [-] Running cmd (subprocess): sudo tee -a /tmp/tmpJwN7gO/root/.ssh/authorized_keys from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:19,638 DEBUG nova.utils [-] Running cmd (subprocess): sudo umount /dev/nbd15 from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:21,006 DEBUG nova.utils [-] Running cmd (subprocess): rmdir /tmp/tmpJwN7gO from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:21,042 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=8161) execute /usr/lib/pymodules/python2.6/nova/utils.py:132
2011-03-01 21:10:22,755 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 116, in _wrap
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 410, in spawn
(nova.exception): TRACE: self._conn.createXML(xml, 0)
(nova.exception): TRACE: File "/usr/lib/python2.6/dist-packages/libvirt.py", line 1358, in createXML
(nova.exception): TRACE: if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
(nova.exception): TRACE: libvirtError: internal error Process exited while reading console log output: chardev: opening backend "file" failed
(nova.exception): TRACE:
(nova.exception): TRACE:
2011-03-01 21:10:22,798 ERROR nova.compute.manager [Y2LX5RC-4MKBFUPRCGU7 admin simple_cloud] instance 1: Failed to spawn
(nova.compute.manager): TRACE: Traceback (most recent call last):
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/compute/manager.py", line 213, in run_instance
(nova.compute.manager): TRACE: self.driver.spawn(instance_ref)
(nova.compute.manager): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 122, in _wrap
(nova.compute.manager): TRACE: raise Error(str(e))
(nova.compute.manager): TRACE: Error: internal error Process exited while reading console log output: chardev: opening backend "file" failed
(nova.compute.manager): TRACE:
(nova.compute.manager): TRACE:
=============================================================
and the instance state is:
euca-describe-instances
RESERVATION r-m5a06is3 simple_cloud default
INSTANCE i-00000001 ami-z21or64q 10.0.0.2 10.0.0.2 failed to spawn mykey (simple_cloud, ubuntu-svr1) 0 m1.tiny 2011-03-02T02:09:47Z nova
===========================
I am using this image for launching VMs: ubuntu1010-UEC-localuser-image.tar.gz.
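(For reference, a UEC tarball like this is normally unpacked and registered with uec-publish-tarball from cloud-utils; a sketch, with the bucket name made up:

uec-publish-tarball ubuntu1010-UEC-localuser-image.tar.gz mybucket x86_64)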

Thanks!

Revision history for this message
indro.fedrigo (indro-fedrigo) said :
#7

Try:
chmod a+rw instances/i-00000001/console.log

It worked for me. I suspect it is a bug...

indro

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#8

Indro,
thanks so much for figuring out this problem. You are right: I tried changing the permissions on the console log and then ran the instance successfully with the virsh create command.
But I am launching this instance with euca-run-instances, and there is no way to set that permission, because the instance directories are created when euca-run-instances is launched.
So I guess I need to wait for the fix.
Thanks!

Revision history for this message
indro.fedrigo (indro-fedrigo) said :
#9

Or you can just change the permission and use euca-reboot-instances...
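Put together, the workaround looks like this (instance ID and state_path taken from this thread; adjust to yours):

sudo chmod a+rw /var/lib/nova/instances/instance-00000001/console.log
euca-reboot-instances i-00000001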

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#10

Yep, that is a temporary workaround and it works OK. Do you know if this has been reported as a bug? Should I create a bug report?
Thanks again Indro!

Revision history for this message
arturo lorenzo (arturo-lorenzo) said :
#11

I rebuilt the system from scratch, upgraded to the latest Bexar components, and reinstalled. This fixed the problem.
Thanks for all your help.