Using VlanManager, no network access to instances running on another nova-compute host.

Asked by P Spencer Davis

I have a setup with two hosts, one running as a combined management/compute node and the other as a pure compute node. I am able to run instances on both hosts and have network access to any instances that are running on the combined management/compute node. The instances running on the pure compute node, however, are inaccessible from the network.
Both hosts have two active NICs: eth0 is the public interface on the 10.192.30.128/25 network, and eth1 is on a 172.16.0.0/16 private network.
I have defined a virtual network for the project to run in as follows:
nova-manage network create --label=public --fixed_range_v4=192.168.1.0/24 --num_networks=1 --network_size=256 --vlan=1 --bridge=vlan1 --dns1=10.0.4.7
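
A quick way to confirm what that command recorded is to query the networks table directly. This is a sketch assuming the Cactus-era schema, reusing the MySQL credentials from the sql_connection flag shown below:

# Inspect the network row that nova-manage created.
mysql -h 10.192.30.137 -u nova -pnova nova \
  -e "select id, label, cidr, vlan, bridge, gateway, dhcp_start from networks;"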

Here is the configuration that nova-manage reports:

--storage_availability_zone=nova
--ca_file=cacert.pem
--ec2_dmz_host=$my_ip
--fixed_range=172.16.0.0/16
--compute_topic=compute
--dmz_mask=255.255.255.0
--fixed_range_v6=fd00::/48
--glance_api_servers=10.192.30.137:9292
--rabbit_password=guest
--user_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=%s-%s-%s
--s3_dmz=10.192.30.137
--quota_ram=51200
--find_host_timeout=30
--aws_access_key_id=admin
--vncserver_host=0.0.0.0
--network_size=1024
--enable_new_services
--my_ip=10.192.30.137
--live_migration_retry_count=30
--lockout_attempts=5
--credential_cert_file=cert.pem
--quota_max_injected_files=5
--zone_capabilities=hypervisor=xenserver;kvm,os=linux;windows
--logdir=/var/log/nova
--sqlite_db=nova.sqlite
--nouse_forwarded_for
--cpuinfo_xml_template=/usr/lib/pymodules/python2.7/nova/virt/cpuinfo.xml.template
--num_networks=1
--boot_script_template=/usr/lib/pymodules/python2.7/nova/cloudpipe/bootscript.template
--live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
--notification_driver=nova.notifier.no_op_notifier
--osapi_max_limit=1000
--rabbit_port=5672
--s3_access_key=notchecked
--rabbit_max_retries=12
--noresume_guests_state_on_host_boot
--ajax_console_proxy_url=http://127.0.0.1:8000
--injected_network_template=/usr/lib/pymodules/python2.7/nova/virt/interfaces.template
--network_host=10.192.30.137
--snapshot_name_template=snapshot-%08x
--vncproxy_url=http://10.192.30.137:6080
--s3_secret_key=notchecked
--ajax_console_proxy_topic=ajax_proxy
--minimum_root_size=10737418240
--quota_cores=20
--nouse_project_ca
--rabbit_userid=guest
--volume_topic=volume
--volume_name_template=volume-%08x
--lock_path=/var/lock/nova
--live_migration_uri=qemu+tcp://%s/system
--flat_network_dns=8.8.4.4
--live_migration_bandwidth=0
--connection_type=libvirt
--noupdate_dhcp_on_disassociate
--default_project=openstack
--s3_port=3333
--logfile_mode=420
--logging_context_format_string=%(asctime)s %(levelname)s %(name)s [%(request_id)s %(user_id)s %(project_id)s] %(message)s
--instance_name_template=instance-%08x
--ec2_host=$my_ip
--credential_key_file=pk.pem
--vpn_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=project-vpn-%s-%s
--logging_debug_format_suffix=from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d
--stub_network=False
--console_manager=nova.console.manager.ConsoleProxyManager
--rpc_backend=nova.rpc.amqp
--default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,eventlet.wsgi.server=WARN
--osapi_scheme=http
--credential_rc_file=%src
--sql_connection=mysql://nova:nova@10.192.30.137/nova
--console_topic=console
--instances_path=$state_path/instances
--flat_injected
--use_local_volumes
--host=csvirt-1
--fixed_ip_disassociate_timeout=600
--console_host=csvirt-1
--quota_instances=10
--quota_max_injected_file_content_bytes=10240
--libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtBridgeDriver
--floating_range=4.4.4.0/24
--nomulti_host
--lockout_window=15
--db_backend=sqlalchemy
--credentials_template=/usr/lib/pymodules/python2.7/nova/auth/novarc.template
--dmz_net=10.0.0.0
--sql_retry_interval=10
--vpn_start=1000
--volume_driver=nova.volume.driver.ISCSIDriver
--crl_file=crl.pem
--rpc_conn_pool_size=30
--s3_host=10.192.30.137
--qemu_img=qemu-img
--max_nbd_devices=16
--vlan_interface=eth1
--scheduler_topic=scheduler
--verbose
--sql_max_retries=12
--default_instance_type=m1.small
--firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
--password_length=12
--libvirt_type=kvm
--image_decryption_dir=/tmp
--vpn_key_suffix=-vpn
--use_cow_images
--block_size=268435456
--null_kernel=nokernel
--libvirt_xml_template=/usr/lib/pymodules/python2.7/nova/virt/libvirt.xml.template
--vpn_client_template=/usr/lib/pymodules/python2.7/nova/cloudpipe/client.ovpn.template
--credential_vpn_file=nova-vpn.conf
--service_down_time=60
--default_notification_level=INFO
--nopublish_errors
--quota_metadata_items=128
--allowed_roles=cloudadmin,itsec,sysadmin,netadmin,developer
--logging_exception_prefix=(%(name)s): TRACE:
--enabled_apis=ec2,osapi
--quota_max_injected_file_path_bytes=255
--scheduler_manager=nova.scheduler.manager.SchedulerManager
--ec2_port=8773
--rescue_kernel_id=aki-rescue
--osapi_port=8774
--auth_token_ttl=3600
--quota_volumes=10
--libvirt_uri=
--ec2_scheme=http
--keys_path=$state_path/keys
--vpn_image_id=0
--host_state_interval=120
--noauto_assign_floating_ip
--quota_floating_ips=10
--nofake_call
--state_path=/var/lib/nova
--sql_idle_timeout=3600
--vpn_ip=$my_ip
--default_image=ami-11111
--aws_secret_access_key=admin
--nouse_ipv6
--key_file=private/cakey.pem
--nofake_network
--osapi_extensions_path=/var/lib/nova/extensions
--quota_gigabytes=1000
--region_list=
--auth_driver=nova.auth.dbdriver.DbDriver
--network_manager=nova.network.manager.VlanManager
--noenable_zone_routing
--osapi_host=$my_ip
--zone_name=nova
--rescue_image_id=ami-rescue
--logging_default_format_string=%(asctime)s %(levelname)s %(name)s [-] %(message)s
--timeout_nbd=10
--compute_driver=nova.virt.connection.get_connection
--libvirt_vif_type=bridge
--nofake_rabbit
--rabbit_host=10.192.30.137
--vnc_keymap=en-us
--rescue_timeout=0
--ca_path=$state_path/CA
--nouse_syslog
--superuser_roles=cloudadmin
--osapi_path=/v1.0/
--ec2_path=/services/Cloud
--allow_project_net_traffic
--norabbit_use_ssl
--rabbit_retry_interval=10
--node_availability_zone=nova
--lockout_minutes=15
--db_driver=nova.db.api
--create_unique_mac_address_attempts=5
--ajaxterm_portrange=10000-12000
--volume_manager=nova.volume.manager.VolumeManager
--nostart_guests_on_host_boot
--vlan_start=100
--rpc_thread_pool_size=1024
--ipv6_backend=rfc2462
--vnc_enabled
--global_roles=cloudadmin,itsec
--rabbit_virtual_host=/
--rescue_ramdisk_id=ari-rescue
--network_driver=nova.network.linux_net
--ajax_console_proxy_port=8000
--project_cert_subject=/C=US/ST=California/L=MountainView/O=AnsoLabs/OU=NovaDev/CN=project-ca-%s-%s
--image_service=nova.image.glance.GlanceImageService
--control_exchange=nova
--cnt_vpn_clients=0
--vncproxy_topic=vncproxy
--compute_manager=nova.compute.manager.ComputeManager
--network_topic=network

Here is my nova.conf:

# RabbitMQ
--rabbit_host=10.192.30.137
# MySQL
--sql_connection=mysql://nova:nova@10.192.30.137/nova
# Networking
--network_manager=nova.network.manager.VlanManager
--vlan_interface=eth1
--public_interface=eth0
--network_host=10.192.30.137
--routing_source_ip=10.192.30.137
--fixed_range=172.16.0.0/16
--network_size=1024
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--user_ipv6=false
# Virtualization
--libvirt_type=kvm
# Volumes
--iscsi_ip_prefix=10.192.30.137
--num_targets=100
# APIs
--auth_driver=nova.auth.dbdriver.DbDriver
--cc_host=10.192.30.137
--ec2_url=http://10.192.30.137:8773/services/Cloud
--s3_host=10.192.30.137
--s3_dmz=10.192.30.137
# Image service
#--glance_host=10.192.30.137
--glance_api_servers=10.192.30.137:9292
--image_service=nova.image.glance.GlanceImageService
# Misc
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose
# VNC Console
--vnc_enabled=true
--vncproxy_url=http://10.192.30.137:6080
--vnc_console_proxy_url=http://10.192.30.137:6080
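
Since both hosts are meant to share this flag file, a quick sanity check that the pure compute node registered against the same database is to list the services (assuming the Cactus nova-manage syntax):

# Expect one nova-compute row per host; ":-)" marks a service that is checking in.
nova-manage service list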

Here is the output from /var/log/nova/nova-compute.log from the creation of the instance on the compute-only node:

2011-08-05 13:47:04,737 INFO nova.rpc [-] Created "compute_fanout" fanout exchange with "compute" routing key
2011-08-05 13:47:04,737 DEBUG nova.rpc [-] Initing the Adapter Consumer for compute from (pid=1303) __init__ /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:180
2011-08-05 13:48:04,745 INFO nova.compute.manager [-] Updating host status
2011-08-05 13:48:23,561 DEBUG nova.rpc [-] received {u'_context_roles': [u'cloudadmin', u'projectmanager'], u'_context_request_id': u'039409a5-6d4a-4b8a-97ba-7ecd95141be6', u'_context_read_deleted': False, u'args': {u'instance_id': 5, u'request_spec': {u'instance_properties': {u'state_description': u'scheduling', u'availability_zone': None, u'ramdisk_id': u'', u'instance_type_id': 5, u'user_data': u'', u'vm_mode': None, u'reservation_id': u'r-j413p6bj', u'root_device_name': None, u'user_id': u'cscloud', u'display_description': None, u'key_data': u'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQC2uI7g/I4ZqQu59NWiDtFd3AmbfT/h2O+YepSRTcoSL0QMkjSPVy6SK9MlFKjPxCqRDVDyie7mpyZF75das3Ra45IGDDSI3NP8EDL0Ldn9pDGMmt93H6YX53C+rXxRQzrp+pVI7mbOpTV9EtYlXjair9PKXfUItLiBJRc8kRzeMw== nova@csvirt-1\n', u'state': 0, u'project_id': u'cscloudbase', u'metadata': {}, u'kernel_id': u'1', u'key_name': u'key-cloudbase', u'display_name': None, u'local_gb': 20, u'locked': False, u'launch_time': u'2011-08-05T17:48:23Z', u'memory_mb': 2048, u'vcpus': 1, u'image_ref': 2, u'architecture': None, u'os_type': None}, u'instance_type': {u'rxtx_quota': 0, u'deleted_at': None, u'name': u'm1.small', u'deleted': False, u'created_at': None, u'updated_at': None, u'memory_mb': 2048, u'vcpus': 1, u'rxtx_cap': 0, u'extra_specs': {}, u'swap': 0, u'flavorid': 2, u'id': 5, u'local_gb': 20}, u'num_instances': 1, u'filter': u'nova.scheduler.host_filter.InstanceTypeFilter', u'blob': None}, u'admin_password': None, u'injected_files': None, u'availability_zone': None}, u'_context_auth_token': None, u'_context_is_admin': True, u'_context_project_id': u'cscloudbase', u'_context_timestamp': u'2011-08-05T17:48:23.046601', u'_context_user_id': u'cscloud', u'method': u'run_instance', u'_context_remote_address': u'10.192.30.137'} from (pid=1303) process_data /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:200
2011-08-05 13:48:23,561 DEBUG nova.rpc [-] unpacked context: {'user_id': u'cscloud', 'roles': [u'cloudadmin', u'projectmanager'], 'timestamp': u'2011-08-05T17:48:23.046601', 'auth_token': None, 'msg_id': None, 'remote_address': u'10.192.30.137', 'is_admin': True, 'request_id': u'039409a5-6d4a-4b8a-97ba-7ecd95141be6', 'project_id': u'cscloudbase', 'read_deleted': False} from (pid=1303) _unpack_context /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:430
2011-08-05 13:48:23,632 AUDIT nova.compute.manager [039409a5-6d4a-4b8a-97ba-7ecd95141be6 cscloud cscloudbase] instance 5: starting...
2011-08-05 13:48:23,869 DEBUG nova.rpc [-] Making asynchronous call on network ... from (pid=1303) multicall /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:460
2011-08-05 13:48:23,869 DEBUG nova.rpc [-] MSG_ID is aff1348ef9f642819f0065fa68d5d07c from (pid=1303) multicall /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:463
2011-08-05 13:48:23,869 DEBUG nova.rpc [-] Creating new connection from (pid=1303) create /usr/lib/pymodules/python2.7/nova/rpc/amqp.py:103
2011-08-05 13:48:24,470 DEBUG nova.compute.manager [-] instance network_info: |[[{u'bridge': u'br1', u'multi_host': False, u'bridge_interface': u'eth1', u'vlan': 1, u'id': 1, u'injected': False, u'cidr': u'192.168.1.0/24', u'cidr_v6': None}, {u'should_create_bridge': True, u'dns': [], u'label': u'public', u'broadcast': u'192.168.1.255', u'ips': [{u'ip': u'192.168.1.4', u'netmask': u'255.255.255.0', u'enabled': u'1'}], u'mac': u'02:16:3e:4d:4c:b1', u'rxtx_cap': 0, u'should_create_vlan': True, u'dhcp_server': u'192.168.1.1', u'gateway': u'192.168.1.1'}]]| from (pid=1303) _run_instance /usr/lib/pymodules/python2.7/nova/compute/manager.py:343
2011-08-05 13:48:24,744 DEBUG nova.virt.libvirt_conn [-] instance instance-00000005: starting toXML method from (pid=1303) to_xml /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1004
2011-08-05 13:48:24,745 DEBUG nova.virt.libvirt.vif [-] Ensuring vlan 1 and bridge br1 from (pid=1303) plug /usr/lib/pymodules/python2.7/nova/virt/libvirt/vif.py:82
2011-08-05 13:48:24,745 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_vlan" for method "ensure_vlan"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:24,745 DEBUG nova.utils [-] Attempting to grab file lock "ensure_vlan" for method "ensure_vlan"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:666
2011-08-05 13:48:24,745 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev vlan1 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,749 DEBUG nova.utils [-] Result was 255 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:176
2011-08-05 13:48:24,750 DEBUG nova.linux_net [-] Starting VLAN inteface vlan1 from (pid=1303) ensure_vlan /usr/lib/pymodules/python2.7/nova/network/linux_net.py:466
2011-08-05 13:48:24,750 DEBUG nova.utils [-] Running cmd (subprocess): sudo vconfig set_name_type VLAN_PLUS_VID_NO_PAD from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,890 DEBUG nova.utils [-] Running cmd (subprocess): sudo vconfig add eth1 1 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,899 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip link set vlan1 up from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,909 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:24,909 DEBUG nova.utils [-] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:666
2011-08-05 13:48:24,909 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br1 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,913 DEBUG nova.utils [-] Result was 255 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:176
2011-08-05 13:48:24,914 DEBUG nova.linux_net [-] Starting Bridge interface for vlan1 from (pid=1303) ensure_bridge /usr/lib/pymodules/python2.7/nova/network/linux_net.py:489
2011-08-05 13:48:24,914 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addbr br1 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,922 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl setfd br1 0 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,931 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl stp br1 off from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,939 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip link set br1 up from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,949 DEBUG nova.utils [-] Running cmd (subprocess): sudo route -n from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,958 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip addr show dev vlan1 scope global from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:24,985 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addif br1 vlan1 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,059 DEBUG nova.virt.libvirt_conn [-] instance instance-00000005: finished toXML method from (pid=1303) to_xml /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:1008
2011-08-05 13:48:25,059 INFO nova [-] called setup_basic_filtering in nwfilter
2011-08-05 13:48:25,059 INFO nova [-] ensuring static filters
2011-08-05 13:48:25,090 DEBUG nova.virt.libvirt.firewall [-] iptables firewall: Setup Basic Filtering from (pid=1303) setup_basic_filtering /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:538
2011-08-05 13:48:25,090 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "_do_refresh_provider_fw_rules"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:25,090 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "_do_refresh_provider_fw_rules"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:666
2011-08-05 13:48:25,093 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:25,093 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:666
2011-08-05 13:48:25,093 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,102 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,112 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,121 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,141 DEBUG nova.virt.libvirt.firewall [-] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x36af690> from (pid=1303) instance_rules /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:664
2011-08-05 13:48:25,141 DEBUG nova.virt.libvirt.firewall [-] Adding security group rule: <nova.db.sqlalchemy.models.SecurityGroupIngressRule object at 0x36af710> from (pid=1303) instance_rules /usr/lib/pymodules/python2.7/nova/virt/libvirt/firewall.py:664
2011-08-05 13:48:25,141 DEBUG nova.utils [-] Attempting to grab semaphore "iptables" for method "apply"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:25,141 DEBUG nova.utils [-] Attempting to grab file lock "iptables" for method "apply"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:666
2011-08-05 13:48:25,142 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t filter from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,151 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,160 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-save -t nat from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,169 DEBUG nova.utils [-] Running cmd (subprocess): sudo iptables-restore from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,179 DEBUG nova.utils [-] Running cmd (subprocess): mkdir -p /var/lib/nova/instances/instance-00000005/ from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,297 INFO nova.virt.libvirt_conn [-] instance instance-00000005: Creating image
2011-08-05 13:48:25,298 DEBUG nova.utils [-] Attempting to grab semaphore "00000001" for method "call_if_not_exists"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:25,725 DEBUG nova.utils [-] Running cmd (subprocess): cp /var/lib/nova/instances/_base/00000001 /var/lib/nova/instances/instance-00000005/kernel from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:25,745 DEBUG nova.utils [-] Attempting to grab semaphore "da4b9237bacccdf19c0760cab7aec4a8359010b0" for method "call_if_not_exists"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:48:52,492 DEBUG nova.utils [-] Running cmd (subprocess): truncate -s 10737418240 /var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:48:52,505 DEBUG nova.utils [-] Running cmd (subprocess): e2fsck -fp /var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:04,794 INFO nova.compute.manager [-] Found instance 'instance-00000005' in DB but no VM. State=9, so assuming spawn is in progress.
2011-08-05 13:49:09,914 DEBUG nova.utils [-] Running cmd (subprocess): resize2fs /var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:15,011 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/da4b9237bacccdf19c0760cab7aec4a8359010b0 /var/lib/nova/instances/instance-00000005/disk from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:15,165 DEBUG nova.utils [-] Attempting to grab semaphore "local_20" for method "call_if_not_exists"... from (pid=1303) inner /usr/lib/pymodules/python2.7/nova/utils.py:661
2011-08-05 13:49:15,165 DEBUG nova.utils [-] Running cmd (subprocess): truncate /var/lib/nova/instances/_base/local_20 -s 20G from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:15,169 DEBUG nova.utils [-] Running cmd (subprocess): qemu-img create -f qcow2 -o cluster_size=2M,backing_file=/var/lib/nova/instances/_base/local_20 /var/lib/nova/instances/instance-00000005/disk.local from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:15,312 INFO nova.virt.libvirt_conn [-] instance instance-00000005: injecting key into image 2
2011-08-05 13:49:15,312 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -c /dev/nbd15 /var/lib/nova/instances/instance-00000005/disk from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:16,346 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd15 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,096 DEBUG nova.utils [-] Running cmd (subprocess): sudo mount /dev/nbd15 /tmp/tmpsNhDZB from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,123 DEBUG nova.utils [-] Running cmd (subprocess): sudo mkdir -p /tmp/tmpsNhDZB/root/.ssh from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,133 DEBUG nova.utils [-] Running cmd (subprocess): sudo chown root /tmp/tmpsNhDZB/root/.ssh from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,142 DEBUG nova.utils [-] Running cmd (subprocess): sudo chmod 700 /tmp/tmpsNhDZB/root/.ssh from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,151 DEBUG nova.utils [-] Running cmd (subprocess): sudo tee -a /tmp/tmpsNhDZB/root/.ssh/authorized_keys from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:18,188 DEBUG nova.utils [-] Running cmd (subprocess): sudo umount /dev/nbd15 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:19,675 DEBUG nova.utils [-] Running cmd (subprocess): rmdir /tmp/tmpsNhDZB from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:19,696 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd15 from (pid=1303) execute /usr/lib/pymodules/python2.7/nova/utils.py:158
2011-08-05 13:49:21,929 DEBUG nova.virt.libvirt_conn [-] instance instance-00000005: is running from (pid=1303) spawn /usr/lib/pymodules/python2.7/nova/virt/libvirt/connection.py:583
2011-08-05 13:49:22,052 DEBUG nova.compute.manager [-] Checking state of instance-00000005 from (pid=1303) _update_state /usr/lib/pymodules/python2.7/nova/compute/manager.py:183
2011-08-05 13:49:22,157 INFO nova.virt.libvirt_conn [-] Instance instance-00000005 spawned successfully.
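
Condensed from the log above, the plumbing nova performs on the compute node amounts to the following (VLAN interface via vconfig, then a bridge via brctl; libvirt later attaches the guest tap, vnet0, to br1):

sudo vconfig set_name_type VLAN_PLUS_VID_NO_PAD
sudo vconfig add eth1 1        # creates interface vlan1 on top of eth1
sudo ip link set vlan1 up
sudo brctl addbr br1           # bridge that the guest NICs plug into
sudo brctl setfd br1 0
sudo brctl stp br1 off
sudo ip link set br1 up
sudo brctl addif br1 vlan1     # tie the bridge to the tagged interface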

iptables (nat table) on the compute-only node:
Chain PREROUTING (policy ACCEPT 358 packets, 22820 bytes)
 pkts bytes target prot opt in out source destination
  358 22820 nova-compute-PREROUTING all -- * * 0.0.0.0/0 0.0.0.0/0

Chain INPUT (policy ACCEPT 1 packets, 328 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 5 packets, 372 bytes)
 pkts bytes target prot opt in out source destination
    5 372 nova-compute-OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 9 packets, 1684 bytes)
 pkts bytes target prot opt in out source destination
    9 1684 nova-compute-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0
    9 1684 nova-postrouting-bottom all -- * * 0.0.0.0/0 0.0.0.0/0
    0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
    0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24

Chain nova-compute-OUTPUT (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-POSTROUTING (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-PREROUTING (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-floating-snat (1 references)
 pkts bytes target prot opt in out source destination

Chain nova-compute-snat (1 references)
 pkts bytes target prot opt in out source destination
    9 1684 nova-compute-floating-snat all -- * * 0.0.0.0/0 0.0.0.0/0

Chain nova-postrouting-bottom (1 references)
 pkts bytes target prot opt in out source destination
    9 1684 nova-compute-snat all -- * * 0.0.0.0/0 0.0.0.0/0

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Vish Ishaya
Best Vish Ishaya (vishvananda) said:
#1

Did you set up VLAN 1 on the switch that eth1 is plugged into? If you run ifconfig, do you see traffic in both directions on vlan1?

Vish
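
One way to check that, assuming tcpdump and iproute2 are available on the hosts (a diagnostic sketch):

# Watch the physical NIC for 802.1Q frames tagged with VLAN 1; silence here
# while the other host pings means the switch is not passing tagged frames.
sudo tcpdump -e -n -i eth1 vlan 1

# Alternatively, compare counters before and after a ping from the other host;
# RX packets stuck at 0 on vlan1 means nothing tagged is arriving.
ip -s link show vlan1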

P Spencer Davis (p-spencer-davis) said:
#2

Vish, I did not have to set up VLANs on the switch when I was testing this previously in my office using consumer-grade switches. The problem only showed up when the servers were racked in their final location. (I was using the trunk build from about a month ago when I was doing my initial tests.) I'll see if the hardware supports VLAN creation, but I was under the impression that the VLANs were created in software on the compute nodes using brctl and iptables...

Here is the output of ifconfig from the management/compute node:

br1 Link encap:Ethernet HWaddr 14:fe:b5:db:29:7a
          inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
          inet6 addr: fe80::5c46:aff:fe12:3298/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:338 errors:0 dropped:0 overruns:0 frame:0
          TX packets:378 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:45232 (45.2 KB) TX bytes:55288 (55.2 KB)

eth0 Link encap:Ethernet HWaddr 14:fe:b5:db:29:78
          inet addr:10.192.30.137 Bcast:10.192.30.255 Mask:255.255.255.128
          inet6 addr: fe80::16fe:b5ff:fedb:2978/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:113195 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1035491 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10605942 (10.6 MB) TX bytes:1556998125 (1.5 GB)
          Interrupt:36 Memory:d6000000-d6012800

eth1 Link encap:Ethernet HWaddr 14:fe:b5:db:29:7a
          inet addr:172.16.0.100 Bcast:172.16.255.255 Mask:255.255.0.0
          inet6 addr: fe80::16fe:b5ff:fedb:297a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:410 errors:0 dropped:0 overruns:0 frame:0
          TX packets:137 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:54684 (54.6 KB) TX bytes:11664 (11.6 KB)
          Interrupt:48 Memory:d8000000-d8012800

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:29319 errors:0 dropped:0 overruns:0 frame:0
          TX packets:29319 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:21201705 (21.2 MB) TX bytes:21201705 (21.2 MB)

virbr0 Link encap:Ethernet HWaddr f6:6d:6c:82:c5:0b
          inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vlan1 Link encap:Ethernet HWaddr 14:fe:b5:db:29:7a
          inet6 addr: fe80::16fe:b5ff:fedb:297a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:6744 (6.7 KB)

vnet0 Link encap:Ethernet HWaddr fe:16:3e:6d:7f:62
          inet6 addr: fe80::fc16:3eff:fe6d:7f62/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:375 errors:0 dropped:0 overruns:0 frame:0
          TX packets:405 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:53154 (53.1 KB) TX bytes:56566 (56.5 KB)

and the compute node:
br1 Link encap:Ethernet HWaddr 14:fe:b5:db:27:af
          inet6 addr: fe80::f8dc:eaff:fe6f:e854/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:78 errors:0 dropped:0 overruns:0 frame:0
          TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:24504 (24.5 KB) TX bytes:3392 (3.3 KB)

eth0 Link encap:Ethernet HWaddr 14:fe:b5:db:27:ad
          inet addr:10.192.30.138 Bcast:10.192.30.255 Mask:255.255.255.128
          inet6 addr: fe80::16fe:b5ff:fedb:27ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:1040427 errors:0 dropped:0 overruns:0 frame:0
          TX packets:111358 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1557300740 (1.5 GB) TX bytes:12493733 (12.4 MB)
          Interrupt:36 Memory:d6000000-d6012800

eth1 Link encap:Ethernet HWaddr 14:fe:b5:db:27:af
          inet addr:172.16.0.101 Bcast:172.16.255.255 Mask:255.255.0.0
          inet6 addr: fe80::16fe:b5ff:fedb:27af/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:345 errors:0 dropped:0 overruns:0 frame:0
          TX packets:195 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30110 (30.1 KB) TX bytes:35908 (35.9 KB)
          Interrupt:48 Memory:d8000000-d8012800

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:528 (528.0 B) TX bytes:528 (528.0 B)

virbr0 Link encap:Ethernet HWaddr fe:e9:cb:6a:96:ef
          inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

vlan1 Link encap:Ethernet HWaddr 14:fe:b5:db:27:af
          inet6 addr: fe80::16fe:b5ff:fedb:27af/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:30684 (30.6 KB)

vnet0 Link encap:Ethernet HWaddr fe:16:3e:4d:4c:b1
          inet6 addr: fe80::fc16:3eff:fe4d:4cb1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:114 errors:0 dropped:0 overruns:0 frame:0
          TX packets:76 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:28700 (28.7 KB) TX bytes:5088 (5.0 KB)

brctl show on the management node

bridge name bridge id STP enabled interfaces
br1 8000.14feb5db297a no vlan1
       vnet0
virbr0 8000.000000000000 yes

and the compute node:
bridge name bridge id STP enabled interfaces
br1 8000.14feb5db27af no vlan1
       vnet0
virbr0 8000.000000000000 yes

and pinging 192.168.1.1 from the management node:

PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.037 ms

and the compute node:
From 10.192.2.1 icmp_seq=1 Packet filtered
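
"Packet filtered" is ping's rendering of an ICMP "communication administratively prohibited" reply, here coming from an upstream device (10.192.2.1) rather than from anything local; since br1 on the compute node carries no IPv4 address (see the ifconfig output above), the echo request leaves via the default route on eth0. One way to confirm which path the kernel picks (assuming iproute2):

ip route get 192.168.1.1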

P Spencer Davis (p-spencer-davis) said:
#3

Previously, I was using the method for hand-editing the networks table laid out in the admin documentation for VlanManager, but with recent changes to the operation of nova-manage network create, I didn't think that was still necessary.

http://docs.openstack.org/cactus/openstack-compute/admin/content/configuring-vlan-networking.html

  Update the DB to match your network settings. The following script will generate SQL based on the predetermined settings for this example. You will need to modify this database update to fit your environment.

if [ -z "$1" ]; then
  echo "You need to specify the vlan to modify"
  exit 1
fi

if [ -z "$2" ]; then
  echo "You need to specify a network id number (check the DB for the network you want to update)"
  exit 1
fi

VLAN=$1
ID=$2

cat > vlan.sql << __EOF_
update networks set vlan = '$VLAN' where id = $ID;
update networks set bridge = 'br_$VLAN' where id = $ID;
update networks set gateway = '10.1.$VLAN.7' where id = $ID;
update networks set dhcp_start = '10.1.$VLAN.8' where id = $ID;
update fixed_ips set reserved = 1 where address in ('10.1.$VLAN.1','10.1.$VLAN.2','10.1.$VLAN.3','10.1.$VLAN.4','10.1.$VLAN.5','10.1.$VLAN.6','10.1.$VLAN.7');
__EOF_
After verifying that the above SQL will work for your environment, run it against the nova database, once for every VLAN you have in the environment.
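
For example, assuming the script above is saved as update_vlan.sh (the filename is arbitrary) and using the database settings from the sql_connection flag:

./update_vlan.sh 100 1    # VLAN 100, network id 1 -> writes vlan.sql
mysql -h 10.192.30.137 -u nova -pnova nova < vlan.sql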

Vish Ishaya (vishvananda) said:
#4

On Aug 5, 2011, at 12:06 PM, P Spencer Davis wrote:

>
> vlan1 Link encap:Ethernet HWaddr 14:fe:b5:db:29:7a
> inet6 addr: fe80::16fe:b5ff:fedb:297a/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:0 (0.0 B) TX bytes:6744 (6.7 KB)

As I suspected, there is no traffic on the VLAN (the RX packet count is 0). You have to trunk VLAN 1 on your switch for all ports that your hosts are connected to.

Vish
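
The exact commands are vendor-specific; purely as an illustrative sketch, on a Cisco-IOS-style switch trunking a host-facing port might look like the following. (Note that VLAN 1 is the native VLAN on many switches, so its frames are often sent untagged; that is one reason a non-default --vlan value can be easier to debug.)

! repeat for each port that a nova host is plugged into
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 1,100-199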

P Spencer Davis (p-spencer-davis) said:
#5

I've replaced the switch with the cheap consumer device and it works. Now I just need to reprogram the other one. Thanks for your help!

P Spencer Davis (p-spencer-davis) said:
#6

Thanks Vish Ishaya, that solved my question.