Determining remote IP from within VM

Asked by Everett Toews

Hi,

When you're inside a VM (KVM in our case) with a floating IP and you receive a connection from a remote machine, the source IP address always appears to be the default gateway of the VM, regardless of where the connection is actually coming from.

For example:

A VM is launched and is given a floating IP (the columns below are the instance ID, the floating IP, and the fixed IP):

i-000004f7 28.7.4.29 10.0.4.3

You ssh to that VM from a machine on a completely different network with the IP 44.22.66.99.

On the VM you run tcpdump:

root@i-000004f7:~# tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
16:14:51.076673 IP i-000004f7.novalocal.ssh > 10.0.4.1.50114
16:14:51.077239 IP i-000004f7.novalocal.56502 > 10.0.4.1.domain: 53009+ PTR? 1.4.0.10.in-addr.arpa. (39)
16:14:51.077667 IP 10.0.4.1.50114 > i-000004f7.novalocal.ssh
16:14:51.083420 IP 10.0.4.1.domain > i-000004f7.novalocal.56502: 53009 NXDomain* 0/0/0 (39)
16:14:51.083565 IP i-000004f7.novalocal.48465 > 10.0.4.1.domain: 26532+ PTR? 3.4.0.10.in-addr.arpa. (39)
16:14:51.083942 IP 10.0.4.1.domain > i-000004f7.novalocal.48465: 26532* 1/0/0 PTR i-000004f7.novalocal. (73)
16:14:51.086649 IP i-000004f7.novalocal.ssh > 10.0.4.1.50114
16:14:51.087937 IP 10.0.4.1.50114 > i-000004f7.novalocal.ssh
16:14:51.096715 IP i-000004f7.novalocal.ssh > 10.0.4.1.50114
16:14:51.097941 IP 10.0.4.1.50114 > i-000004f7.novalocal.ssh

tcpdump is showing the ssh connection you've made from 44.22.66.99. But even though you're connecting from 44.22.66.99, the remote address appears to be 10.0.4.1 (the VM's default gateway).
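
For what it's worth, you can see the same thing without tcpdump by checking the environment variable OpenSSH sets inside the session (illustrative output, matching the capture above):

root@i-000004f7:~# echo $SSH_CONNECTION
10.0.4.1 50114 10.0.4.3 22

The format is client IP, client port, server IP, server port, so the client again shows up as the gateway, 10.0.4.1.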

Is there a way with OpenStack to determine the remote IP address from within the VM (in Cactus or a future release)?

If not, could it be done manually such that it wouldn't interfere with the iptables rules that OpenStack creates?

BTW, we're using OpenStack Cactus and VlanManager.

Thanks,
Everett

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Everett Toews
Everett Toews (everett-toews) said:
#1

Turns out (for us) this was a symptom of an overzealous NAT rule in iptables. We're hiding all of our compute nodes behind our management node (a.k.a. cloud controller) on a private network, so we need to do NAT for the compute nodes to get updates and the like from the Internet.
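
Aside: for the management node to route and NAT at all, IP forwarding has to be enabled on it. We don't show that below, but it amounts to something like this:

# Enable IPv4 forwarding; persist it with net.ipv4.ip_forward=1 in /etc/sysctl.conf
sysctl -w net.ipv4.ip_forward=1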

These are the rules we used:

# Allow new connections from the compute nodes' subnet (eth0 = private side, eth1 = Internet side)
iptables -A FORWARD -i eth0 -o eth1 -s 192.168.2.0/24 -m conntrack --ctstate NEW -j ACCEPT
# Allow return traffic for established/related connections
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Masquerade outgoing traffic (this is the overzealous rule)
iptables -A POSTROUTING -t nat -j MASQUERADE

However, look at the MASQUERADE rule the last command creates:

root@dair-ua-v01:~# iptables -t nat -L -n -v | grep MASQ
 5071 700K MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0

It covers all source IPs, including the VMs'. A more sensible MASQUERADE rule is:

iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -j MASQUERADE

This only NATs traffic from the compute nodes' subnet. Once that rule was in place, traffic from the outside world showed up in the VMs with the proper remote IP address.
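
If the broad rule is already loaded, you can swap it out in place rather than flushing the table. iptables -D deletes a rule given the same spec it was added with, so something like this should do it:

# Delete the overly broad rule, add the scoped one, then verify
iptables -t nat -D POSTROUTING -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -j MASQUERADE
iptables -t nat -L POSTROUTING -n -v | grep MASQ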