waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id

Asked by Everett Toews

I'm running an instance but I can't ssh or ping it. When I do a euca-get-console-output I get the following,

...
[ 0.728227] EXT3-fs: mounted filesystem with ordered data mode.
[ 0.729514] VFS: Mounted root (ext3 filesystem) readonly on device 252:0.
[ 0.731228] devtmpfs: mounted
[ 0.731878] Freeing unused kernel memory: 800k freed
[ 0.733335] Write protecting the kernel read-only data: 7808k
init: plymouth-splash main process (261) terminated with status 2
init: plymouth main process (48) killed by SEGV signal
cloud-init running: Sat, 12 Feb 2011 00:13:30 +0000. up 2.53 seconds
waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
  00:13:32 [ 1/100]: url error [timed out]
  00:13:35 [ 2/100]: url error [timed out]
...repeats 100 times...

Some details.

1. The instance I'm running is based on the Ubuntu 10.04 server image from http://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-uec-amd64.tar.gz

2. I've added the metadata service IP address to iptables on the CC with,
sudo iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 68.77.36.184:8773

3. Output from iptables-save looks like,
*nat
:PREROUTING ACCEPT [413:56592]
:OUTPUT ACCEPT [278:17152]
:POSTROUTING ACCEPT [297:19583]
-A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 68.77.36.184:8773

4. If I try to ping the metadata service from the CC I get,
user@ubuntu:~$ ping 169.254.169.254
PING 169.254.169.254 (169.254.169.254) 56(84) bytes of data.
From 38.112.35.21 icmp_seq=1 Destination Host Unreachable
From 38.112.35.21 icmp_seq=3 Destination Host Unreachable
...

I don't know where the 38.112.35.21 is coming from (see the note after this list).

5. I tried running an instance based on the Ubuntu 10.10 server image from http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-amd64.tar.gz and got the same results.

6. I'm running nova on Ubuntu 10.10
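
A note on point 4 (flagged there): 169.254.169.254 is a link-local address, so with no local route the ping presumably follows the default route, and the Destination Host Unreachable replies likely come from an upstream router, which would explain the unfamiliar 38.112.35.21. Also, the PREROUTING chain only sees packets arriving on an interface, so the DNAT rule from point 2 never applies to traffic generated on the CC itself; to exercise the DNAT path from the CC, a matching rule would be needed in the nat OUTPUT chain, something like:
sudo iptables -t nat -A OUTPUT -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 68.77.36.184:8773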

Any thoughts as to why the metadata service is inaccessible?

Thanks,
Everett

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Everett Toews
Everett Toews (everett-toews) said :
#1

If I run the command,
sudo ip addr add 169.254.169.254/32 scope link dev eth0

Then I can at least curl 169.254.169.254,
user@ubuntu:~$ curl http://169.254.169.254:8773/
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

That doesn't fix the problem for my instances but maybe it's a start...
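
(Presumably this works because the address is now local to the CC, so the request to port 8773 is answered by nova-api directly; instance traffic still goes to port 80 and depends on the DNAT rule. For reference, the request cloud-init makes from inside an instance is effectively:
curl http://169.254.169.254/2009-04-04/meta-data/instance-id
which, on a working instance, should return the instance ID, e.g. something of the form i-00000001.)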

Christian Berendt (berendt) said :
#2

    Metadata forwarding must be handled by the gateway, and since nova does
    not do any setup in this mode, it must be done manually. Requests to
    169.254.169.254 port 80 will need to be forwarded to the api server.

There is the following method in nova/network/linux_net.py, but it's only called when using FlatDHCPManager or VlanManager.

def metadata_forward():
    """Create forwarding rule for metadata"""
    _confirm_rule("PREROUTING", "-t nat -s 0.0.0.0/0 "
             "-d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT "
             "--to-destination %s:%s" % (FLAGS.ec2_dmz_host, FLAGS.ec2_port))

Everett Toews (everett-toews) said :
#3

This problem was fixed when I switched to FlatDHCPManager. See https://answers.launchpad.net/nova/+question/145820

richard, zhang (richard-zhang-bj) said :
#4

I'm hitting the same issue in a single-node installation with one NIC enabled. Furthermore, I only have a 10.x.x.x subnet, not a 192.168.x.x subnet. We have one DHCP server and gateway (10.0.0.1) and are not allowed to run another DHCP server, since the two would conflict on the same network.
How do I configure OpenStack's network to avoid this issue in my case? Thanks a lot.
Here is my nova.conf:
======

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
--s3_host=10.101.1.142
--rabbit_host=10.101.1.142
--cc_host=10.101.1.142
--ec2_url=http://10.101.1.142:8773/services/Cloud
--fixed_range=192.168.0.0/16
--network_size=64
--FAKE_subdomain=ec2
--routing_source_ip=10.101.1.142
--verbose
--sql_connection=mysql://root:iforgot@10.101.1.142/nova
--network_manager=nova.network.manager.FlatManager
=======
the error message of console log is below:
======
2011-07-11 10:04:11,496 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id

2011-07-11 10:04:11,498 - DataSourceEc2.py[WARNING]: 10:04:11 [ 1/100]: url error [[Errno 113] No route to host]
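
(Following comment #2 above: since FlatManager does no metadata forwarding setup, the rule would have to be added manually on the gateway, presumably something like
sudo iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.101.1.142:8773
assuming 10.101.1.142 is where nova-api listens, as in the nova.conf above.)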

JuanPM (juanpm) said :
#5

Hello everyone, I've got the same problem as Richard. I'm trying (hard) to set up Nova on a single node with just one NIC.

I can create and run instances, but:
- I'm not able to ping these instances from the Cloud Controller / Nova host
- These instances' consoles show "DataSourceEc2.py[WARNING]: 10:04:11 [ 1/100]: url error [[Errno 113] No route to host]" errors

Below are my configurations:
==============================================================================
  mgr01:~# cat /etc/lsb-release
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=10.10
  DISTRIB_CODENAME=maverick
  DISTRIB_DESCRIPTION="Ubuntu 10.10"

==============================================================================

mgr01:~# cat /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
--s3_host=188.138.101.59
--rabbit_host=188.138.101.59
--cc_host=188.138.101.59
--ec2_url=http://188.138.101.59:8773/services/Cloud
--fixed_range=192.168.0.0/24
--network_size=256
--FAKE_subdomain=ec2
--routing_source_ip=188.138.101.59
--verbose
--sql_connection=mysql://root:pass@188.138.101.59/nova
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_dhcp_start=192.168.0.2
--flat_interface=eth0
--flat_injected=False
--public_interface=eth0

==============================================================================
mgr01:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::219:99ff:fe9a:9c04/64 scope link
       valid_lft forever preferred_lft forever
3: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
    inet 188.138.101.59/24 brd 188.138.101.255 scope global br100
    inet 192.168.0.1/26 brd 192.168.0.63 scope global br100
    inet6 fe80::219:99ff:fe9a:9c04/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether e2:6b:78:fa:f5:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

==============================================================================
mgr01:~# nova-manage service list
mgr01.cloudzen.com.ar nova-network enabled :-) 2011-08-13 03:56:12
mgr01.cloudzen.com.ar nova-compute enabled :-) 2011-08-13 03:56:12
mgr01.cloudzen.com.ar nova-scheduler enabled :-) 2011-08-13 03:56:10

==============================================================================
mgr01:~# nova list
+----+----------+--------+-----------+-------------+
| ID | Name | Status | Public IP | Private IP |
+----+----------+--------+-----------+-------------+
| 1 | Server 1 | ACTIVE | | 192.168.0.2 |
| 2 | Server 2 | ACTIVE | | 192.168.0.3 |
| 3 | Server 3 | ACTIVE | | 192.168.0.4 |
+----+----------+--------+-----------+-------------+

==============================================================================
mgr01:~# iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 1548 packets, 246K bytes)
 pkts bytes target prot opt in out source destination
 1548 246K nova-compute-PREROUTING all -- any any anywhere anywhere
 1548 246K nova-network-PREROUTING all -- any any anywhere anywhere

Chain POSTROUTING (policy ACCEPT 1543 packets, 243K bytes)
 pkts bytes target prot opt in out source destination
 1544 243K nova-compute-POSTROUTING all -- any any anywhere anywhere
 1544 243K nova-network-POSTROUTING all -- any any anywhere anywhere
 1543 243K nova-postrouting-bottom all -- any any anywhere anywhere
    0 0 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    0 0 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    0 0 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24

Chain OUTPUT (policy ACCEPT 3 packets, 204 bytes)
 pkts bytes target prot opt in out source destination
    3 204 nova-compute-OUTPUT all -- any any anywhere anywhere
    3 204 nova-network-OUTPUT all -- any any anywhere anywhere

Chain nova-compute-OUTPUT (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-POSTROUTING (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-PREROUTING (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-floating-snat (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-snat (1 references)
 pkts bytes target prot opt in out source destination
 1543 243K nova-compute-floating-snat all -- any any anywhere anywhere

Chain nova-network-OUTPUT (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-network-POSTROUTING (1 references)
 pkts bytes target prot opt in out source destination
    0 0 ACCEPT all -- any any 192.168.0.0/24 10.128.0.0/24
    1 84 ACCEPT all -- any any 192.168.0.0/24 192.168.0.0/24

Chain nova-network-PREROUTING (1 references)
 pkts bytes target prot opt in out source destination
    0 0 DNAT tcp -- any any anywhere 169.254.169.254 tcp dpt:www to:188.138.101.59:8773

Chain nova-network-floating-snat (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-network-snat (1 references)
 pkts bytes target prot opt in out source destination
 1543 243K nova-network-floating-snat all -- any any anywhere anywhere
    0 0 SNAT all -- any any 192.168.0.0/24 anywhere to:188.138.101.59

Chain nova-postrouting-bottom (1 references)
 pkts bytes target prot opt in out source destination
 1543 243K nova-compute-snat all -- any any anywhere anywhere
 1543 243K nova-network-snat all -- any any anywhere anywhere

==============================================================================
mgr01:~# brctl show
bridge name bridge id STP enabled interfaces
br100 8000.0019999a9c04 no eth0
       vnet0
       vnet1
       vnet2
virbr0 8000.000000000000 yes

==============================================================================
mgr01:~# curl http://169.254.169.254
<html><body><h1>It works!</h1>
<p>This is the default web page for this server.</p>
<p>The web server software is running but no content has been added, yet.</p>
</body></html>
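
The default Apache page suggests this curl never hit the DNAT rule: 169.254.169.254 is bound on lo, locally generated traffic doesn't traverse PREROUTING, and so port 80 is answered by a local web server instead of nova-api. A more direct check from the host would be to query the API port itself, which should return the version list seen in comment #1, assuming nova-api is up:
mgr01:~# curl http://188.138.101.59:8773/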

==============================================================================

Any help or suggestions, please!? At least I can connect to the VMs' consoles using the virsh console command :S

JuanPM (juanpm) said :
#6

Hi everyone, over the last few days I've been reading some blogs and doing some testing, and I solved my network problems on a single host with one NIC by using a dummy network adapter. Here are my nova.conf and network configurations:

root:~# cat /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
--s3_host=192.168.66.1
--rabbit_host=192.168.66.1
--cc_host=192.168.66.1
--network_host=192.168.66.1
--ec2_url=http://192.168.66.1:8773/services/Cloud
--fixed_range=192.168.0.0/24
--network_size=65534
--FAKE_subdomain=ec2
--routing_source_ip=188.138.101.59
--verbose
--sql_connection=mysql://root:nova@192.168.66.1/nova
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=dummy0
--public_interface=eth0

root:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
    inet 188.138.101.59/24 brd 188.138.101.255 scope global eth0
    inet 188.138.99.184/32 scope global eth0
    inet6 fe80::219:99ff:fe9a:9c04/64 scope link
       valid_lft forever preferred_lft forever
3: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 5e:76:28:79:a3:1b brd ff:ff:ff:ff:ff:ff
    inet 192.168.66.1/24 brd 192.168.66.255 scope global br100
    inet 192.168.0.1/25 brd 192.168.0.127 scope global br100
    inet6 fe80::484:21ff:fecf:d1ed/64 scope link
       valid_lft forever preferred_lft forever
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 1e:4e:2c:3a:16:f9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
5: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 5e:76:28:79:a3:1b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5c76:28ff:fe79:a31b/64 scope link
       valid_lft forever preferred_lft forever
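
For anyone reproducing this: the dummy0 interface is not created by nova. Assuming the standard Linux dummy module, it can be brought up with something like:
sudo modprobe dummy
sudo ip link set dummy0 up
after which nova-network (FlatDHCPManager with --flat_interface=dummy0) bridges it into br100; note that br100 shares dummy0's MAC address in the ip addr output above, indicating the bridge has enslaved it.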