Does Quantum support a multi-NIC setup?

Asked by Mandar Vaze

Host setup:
==========

Host A: Controller node. This has everything running via devstack/stack.sh.

Host B:
- Only nova-network and q-agt
- nova.conf from HostA copied over to HostB, with the references to localhost replaced to point to HostA (MySQL, RabbitMQ, etc.)
- Also updated the sql_connection entry in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini to point to HostA
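
For reference, the pointers I changed on HostB looked roughly like this (192.168.1.10 stands in for HostA's address; the credentials and database names are placeholders from my setup and may differ):

# nova.conf (on HostB)
sql_connection=mysql://root:password@192.168.1.10/nova
rabbit_host=192.168.1.10

# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (on HostB)
[DATABASE]
sql_connection = mysql://root:password@192.168.1.10/ovs_quantum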

I have not added multi-host flag to my nova.conf (Should I ????)

Network Setup:
===============
Created two networks for the "demo" tenant using "nova-manage network create": 10.0.8.0/24 and 10.0.9.0/24.
Associated 10.0.9.0/24 with HostB using "nova-manage network modify --host".
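
The commands were along these lines (the labels are placeholders, and the exact flag spellings may vary between Essex builds):

$ nova-manage network create --label=net1 --fixed_range_v4=10.0.8.0/24
$ nova-manage network create --label=net2 --fixed_range_v4=10.0.9.0/24
$ nova-manage network modify --fixed_range=10.0.9.0/24 --host=HostB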

Launched an instance for the demo tenant - the VM gets two fixed IPs, one each from the 10.0.8.x and 10.0.9.x networks.

I can ssh to the VM from HostA using the 10.0.8.x IP.

Problem:
========
I can't ssh to the VM using the 10.0.9.x IP from either HostA or HostB.

Output from VM
===============
When I ssh into the VM, I see the following:

$ ifconfig -a
eth0 Link encap:Ethernet HWaddr FA:16:3E:1D:72:F9
          inet addr:10.0.8.23 Bcast:10.0.8.255 Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe1d:72f9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:790 errors:0 dropped:0 overruns:0 frame:0
          TX packets:505 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:67734 (66.1 KiB) TX bytes:67244 (65.6 KiB)
          Interrupt:10 Base address:0xa000

eth1 Link encap:Ethernet HWaddr FA:16:3E:24:C2:B4
          inet6 addr: fe80::f816:3eff:fe24:c2b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B) TX bytes:1434 (1.4 KiB)
          Interrupt:11 Base address:0xe100

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

$ cat /etc/network/interfaces
# Configure Loopback
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

I added an entry for eth1 (similar to the eth0 one) manually and executed the following:

$ sudo ifup eth1
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
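
The eth1 stanza I added to /etc/network/interfaces mirrored the eth0 one, i.e.:

auto eth1
iface eth1 inet dhcp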

On Host A
=========
dnsmasq for 10.0.8.x is running here, with the following conf file:

cat /opt/stack/nova/networks/nova-gw-afc33410-c5.conf
fa:16:3e:1d:72:f9,host-10.0.8.23.novalocal,10.0.8.23

ifconfig has the following entry:

gw-afc33410-c5 Link encap:Ethernet HWaddr fa:16:3e:5e:82:fd
          inet addr:10.0.8.1 Bcast:10.0.8.255 Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe5e:82fd/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
          RX packets:6048 errors:0 dropped:5 overruns:0 frame:0
          TX packets:4739 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1359020 (1.3 MB) TX bytes:801255 (801.2 KB)

On Host B
=========
dnsmasq for 10.0.9.x is running here, with the following conf file:

cat /opt/stack/nova/networks/nova-gw-d173143a-33.conf
fa:16:3e:24:c2:b4,host-10.0.9.23.novalocal,10.0.9.23

ifconfig -a does NOT have an entry for gw-d173143a-33
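
This can be double-checked on HostB with something like the following (a sketch; the bridge/port layout may differ per setup):

$ ifconfig -a | grep gw-    # list any nova gateway interfaces present
$ sudo ovs-vsctl show       # inspect the OVS bridges and the ports attached to them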

So what am I doing wrong? Is this kind of setup supported by Quantum?

Let me know if you need additional information

Thanks !!!

Mandar Vaze (mandarvaze) said :
#1

For the related bug 983024 (linked) I have made some code changes which now execute dnsmasq on the appropriate host instead of running everything on HostA.

Mandar Vaze (mandarvaze) said :
#2

I started from scratch - this time I executed "nova-manage network create" on the two hosts: on HostA for the 10.0.8.0/24 network and on HostB for 10.0.9.0/24.

The only difference from the previous scenario is that now the gw-xxxx interface was created on HostB.

But I am still running into the same set of issues, i.e.:

1. The VM doesn't have eth1 when it comes up
2. When eth1 is added manually to /etc/network/interfaces (on the VM) followed by "sudo ifup eth1", I still get "No lease, failing"
3. nova list shows 10.0.9.x assigned to the VM, but the VM doesn't really have it (hence I can't ssh using the 10.0.9.x IP)

dnsmasq for 10.0.9.x is running on HostB (as expected)
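
To narrow down where the DHCP exchange dies, one check worth doing (a sketch; gw-xxxx stands for the gateway interface created on HostB) is whether the VM's DISCOVER packets ever reach the interface dnsmasq listens on:

$ sudo tcpdump -n -i gw-xxxx port 67 or port 68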

Unmesh Gurjar (unmesh-gurjar) said :
#3

Hi,

I am also facing a similar issue: when an instance gets created on the Compute host (a physically different server from the Network server), the DHCP lease fails. But when an instance is created on the Controller node, the networking works fine (I am able to ping the floating IP).

My setup is as follows:
Controller node: has all services running.
Compute host: has the Compute and Quantum agent services running.

nova.conf details:

network_manager=nova.network.quantum.manager.QuantumManager
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
quantum_use_dhcp=True
libvirt_type=kvm
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

dan wendlandt (danwent) said :
#4

There may be many things going on here, but nova-network should definitely only be run on one of the nodes, not both.

Also, you didn't say exactly how you were running devstack. Can you send your config?

dan wendlandt (danwent) said :
#5

btw, two more comments:

- "I have not added multi-host flag to my nova.conf (Should I ????)" - no. QuantumManager won't do anything with this flag, as this mode is unsupported, as mentioned in the "Limitations" section.

- "Associated 10.0.9.0/24 with HostB using "nova-manage network modify --host"" - this won't do anything (and may break things). Again, there is no notion of running nova-network on multiple hosts with Quantum in Essex.

Mandar Vaze (mandarvaze) said :
#6

> "Also, you didn't say exactly how you were running devstack

This is a pretty standard devstack config - using Quantum and nova_ipam. Other details are explained in the original question.

Are you looking for a specific config param?

> "there is no notion of running nova-network on multiple hosts with Quantum in Essex."

1. Does this mean that if a specific use case requires multiple networks, all of them should be managed by a single nova-network process? Or that the user should NOT choose the Quantum network manager (and opt for, say, VlanManager) in order to have multiple nova-network processes?

2. Is there a plan to support multiple nova-network processes with Quantum - in Folsom?

Soheil Hassas Yeganeh (soheil-h-y) said :
#7

Hi Mandar,

There are devstack forks for an easy multi-node, Quantum-enabled setup:
https://github.com/soheilhassasyeganeh/devstack/
https://github.com/davlaps/devstack/

You might want to take a look at them.

Best,
Soheil

Mandar Vaze (mandarvaze) said :
#8

Soheil,

I went through the README of the first link - it talks about nova-compute on multiple hosts.
My use case is nova-network on multiple hosts.
I don't think it is a devstack or configuration issue.

Dan Wendlandt said that Quantum does not support such a setup in Essex.

-Mandar

Soheil Hassas Yeganeh (soheil-h-y) said :
#9

My bad, I thought you wanted a multi-node Quantum setup.

Mandar Vaze (mandarvaze) said :
#10

Tushar (tpatil) was able to get this working (along with my code changes for https://bugs.launchpad.net/nova/+bug/983024) using:

1. https://github.com/soheilhassasyeganeh/devstack/
2. http://openvswitch.org/openstack/documentation/

Somik Behera (somikbehera) said :
#11

From Comment #10, it sounds like this issue has now been resolved. Please confirm.
