Is something wrong with the flow tables after Open vSwitch is initialized?

Asked by Shuquan Huang

Hi list,

My environment is Ubuntu 12.04.3 + Havana with three nodes:

server node: neutron-server

network node: neutron-openvswitch-agent and neutron-dhcp-agent

compute node: neutron-openvswitch-agent

I can create a VM and a network successfully, but the VM can't get an IP address.

compute node:

root@havana-cn:~# ovs-vsctl show
d4599681-a60f-441b-b0ab-ba56c177f883
    Bridge "br1"
        Port "br1"
            Interface "br1"
                type: internal
    Bridge br-int
        Port "tap45b4b1af-27"
            tag: 1
            Interface "tap45b4b1af-27"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.2.2", out_key=flow, remote_ip="192.168.2.1"}
    ovs_version: "1.10.2"

root@havana-cn:~# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:00006e57dff75746
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-int): addr:aa:33:a0:be:e1:f7
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(gre-1): addr:52:75:cf:71:c1:06
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:6e:57:df:f7:57:46
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

network node:

root@havana-net:~# ovs-vsctl show
be6364ba-b079-44b2-ae73-2f9e4299593c
    Bridge br-int
        Port "tap89dbfa13-8f"
            tag: 1
            Interface "tap89dbfa13-8f"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.2.1", out_key=flow, remote_ip="192.168.2.2"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "1.10.2"

root@havana-net:~# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000568a33b17b42
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-int): addr:9a:b7:0b:e8:64:e4
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 2(gre-2): addr:32:e8:9a:04:7c:1b
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:56:8a:33:b1:7b:42
     config: 0
     state: 0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

When I run udhcpc, I can't capture the DHCP packets coming from the network node, and I can't even see packets on br-tun on the compute node (tcpdump -i br-tun).
Here is the dump-flows output on the compute node.
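The capture attempts above can be sketched as a few diagnostics; this is only a sketch, and the physical interface name (eth1) is an assumption for this deployment:

```shell
# Watch the NIC carrying the 192.168.2.x tunnel network for GRE frames;
# no output here means tunnel traffic never leaves the host.
tcpdump -n -i eth1 proto gre

# Check whether the broadcast DHCP request ever reaches the tunnel bridge.
tcpdump -n -e -i br-tun

# Inspect only table 21, where output actions toward the GRE ports
# should appear for a working gre tenant network.
ovs-ofctl dump-flows br-tun table=21
```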

root@havana-cn:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=44.008s, table=0, n_packets=0, n_bytes=0, idle_age=44, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=43.217s, table=0, n_packets=0, n_bytes=0, idle_age=43, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=43.961s, table=0, n_packets=5, n_bytes=378, idle_age=34, priority=0 actions=drop
 cookie=0x0, duration=43.876s, table=1, n_packets=0, n_bytes=0, idle_age=43, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=43.922s, table=1, n_packets=0, n_bytes=0, idle_age=43, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=43.837s, table=2, n_packets=0, n_bytes=0, idle_age=43, priority=0 actions=drop
 cookie=0x0, duration=43.79s, table=3, n_packets=0, n_bytes=0, idle_age=43, priority=0 actions=drop
 cookie=0x0, duration=43.743s, table=10, n_packets=0, n_bytes=0, idle_age=43, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=43.695s, table=20, n_packets=0, n_bytes=0, idle_age=43, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=43.648s, table=21, n_packets=0, n_bytes=0, idle_age=43, priority=0 actions=drop

When I change the flows to the following on the network and compute nodes, the VM can get an IP from the DHCP agent.
ovs-ofctl dump-flows br-tun
cookie=0x0, duration=7435.355s, table=0, n_packets=14, n_bytes=3628, idle_age=7327, priority=0 actions=NORMAL
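For reference, the single NORMAL rule above can be installed like this. This is a debugging aid only, not a fix: it bypasses the agent's tunnel logic and turns br-tun into a plain learning switch.

```shell
# Remove the agent-installed flows and replace them with one rule that
# makes br-tun behave like a regular MAC-learning switch.
ovs-ofctl del-flows br-tun
ovs-ofctl add-flow br-tun "priority=0,actions=NORMAL"
```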

Is there something wrong with the flow tables?

Question information

Language: English
Status: Solved
For: neutron
Solved by: Shuquan Huang
Revision history for this message
yong sheng gong (gongysh) said :
#1

It seems some flow rules are missing on the br-tun bridge.
Table 21 should have more entries besides:
 cookie=0x0, duration=43.648s, table=21, n_packets=0, n_bytes=0, idle_age=43, priority=0 actions=drop

What is your configuration when you start the Open vSwitch agent? I mean, enable debug=True and paste the configuration here.

Revision history for this message
Shuquan Huang (shuquan) said :
#2

I found the reason at last. It was caused by a missing "tenant_network_type = gre" in the controller's OVS plugin config file.
As a result, the Neutron server sent the network information with the default tenant_network_type = local, and the plugin agent on the compute node did not create any extra flows.
The provision_local_vlan(self, net_uuid, network_type, physical_network, segmentation_id) method adds no flows for local networks:

def provision_local_vlan(self, net_uuid, network_type, physical_network,
                             segmentation_id):
....
        elif network_type == constants.TYPE_LOCAL:
            # no flows needed for local networks
            pass
        else:
            LOG.error(_("Cannot provision unknown network type "
                        "%(network_type)s for net-id=%(net_uuid)s"),
                      {'network_type': network_type,
                       'net_uuid': net_uuid})
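For anyone hitting the same problem, a minimal sketch of the controller-side plugin configuration follows. Only tenant_network_type = gre comes from the fix above; the file path, enable_tunneling, and the tunnel ID range are assumptions typical of a Havana OVS plugin deployment and may differ in yours.

```ini
# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the controller
[OVS]
# The missing option: without it, networks default to type "local"
# and provision_local_vlan() installs no tunnel flows on the agents.
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
```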