Attached nova-volume does not show up in the target VM

Asked by Chris McClung

I have set up OpenStack all on one machine and am currently trying to set up nova-volume. I can start instances OK and can SSH in fine.

I have created a volume group called nova-volumes.

When I try to attach a volume to a VM, the volume shows as in use after running:
euca-attach-volume -i i-00000010 -d /dev/vdb vol-00000002

The Status is:
euca-describe-volumes
VOLUME vol-00000002 10 nova in-use (proj, cloud1, i-00000010[cloud1], /dev/vdb) 2011-11-01T14:54:51Z

vgdisplay shows the following for nova-volumes:
  Alloc PE / Size 2560 / 10.00 GiB
  Free PE / Size 951302 / 3.63 TiB
  VG UUID nBoTgU-nTPQ-AOc6-M7ZE-dRGF-3nXl-kDgeYW

If I do a discovery, I get:
iscsiadm -m discovery -t st -p 192.168.2.200
192.168.2.200:3260,1 iqn.2010-10.org.openstack:volume-00000002

euca-describe-instances
RESERVATION r-kectzmil proj default
INSTANCE i-00000010 ami-00000002 192.168.22.6 192.168.22.6 running chriskey (proj, cloud1) 0 m1.tiny 2011-11-01T16:13:32Z nova aki-00000001 ami-00000000

However, if I run:
iscsiadm -m session
iscsiadm: No active sessions.
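
A manual login against the discovered target would show whether the export itself is reachable. This is a sanity check I am assuming, not something run in the thread; the IQN and portal come from the discovery output above:

# If this login succeeds, the tgt export is fine and the failure is
# on the nova-compute attach side; log out again afterwards.
sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 192.168.2.200:3260 --login
sudo iscsiadm -m session
sudo iscsiadm -m node -T iqn.2010-10.org.openstack:volume-00000002 -p 192.168.2.200:3260 --logout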

Running fdisk -l inside the VM shows no /dev/...

The volume.log shows:
2011-11-01 16:12:42,871 AUDIT nova [-] Starting volume node (version 2011.3-nova-milestone-tarball:tarmac-20110922115702-k9nkvxqzhj130av2)
2011-11-01 16:12:42,872 DEBUG nova.utils [-] Running cmd (subprocess): sudo vgs --noheadings -o name from (pid=2312) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2011-11-01 16:12:43,046 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=2312) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:450
2011-11-01 16:12:43,047 INFO nova.db.sqlalchemy [-] Using mysql/eventlet db_pool.
2011-11-01 16:12:43,130 DEBUG nova.volume.manager [-] Re-exporting 1 volumes from (pid=2312) init_host /usr/lib/python2.7/dist-packages/nova/volume/manager.py:89
2011-11-01 16:12:43,132 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op new --lld=iscsi --mode=target --tid=1 --targetname=iqn.2010-10.org.openstack:volume-00000002 from (pid=2312) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2011-11-01 16:12:43,136 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op bind --lld=iscsi --mode=target --initiator-address=ALL --tid=1 from (pid=2312) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2011-11-01 16:12:43,140 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op new --lld=iscsi --mode=logicalunit --tid=1 --lun=1 --backing-store=/dev/nova-volumes/volume-00000002 from (pid=2312) execute /usr/lib/python2.7/dist-packages/nova/utils.py:168
2011-11-01 16:12:43,168 INFO nova.rpc [-] Connected to AMQP server on 192.168.2.200:5672
2011-11-01 16:12:43,168 DEBUG nova [-] Creating Consumer connection for Service volume from (pid=2312) start /usr/lib/python2.7/dist-packages/nova/service.py:153
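
The tgtadm calls above appear to succeed, so an assumed follow-up check (not run in the thread) is to ask tgt directly whether the target and LUN actually exist on the volume host:

# Should list target 1 (iqn.2010-10.org.openstack:volume-00000002)
# with LUN 1 backed by /dev/nova-volumes/volume-00000002.
sudo tgtadm --lld iscsi --op show --mode target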

compute.log shows:

2011-11-01 16:15:37,297 DEBUG nova.rpc [-] unpacked context: {'user_id': u'novaadmin', 'roles': [u'projectmanager'], 'timestamp': u'2011-11-01T16:15:37.245193', 'auth_token': None, 'msg_id': None, 'remote_address': u'192.168.2.200', 'strategy': u'noauth', 'is_admin': True, 'request_id': u'8a6a994c-51e7-4c4e-97b0-6b46901e6df4', 'project_id': u'proj', 'read_deleted': False} from (pid=2536) _unpack_context /usr/lib/python2.7/dist-packages/nova/rpc/impl_kombu.py:646
2011-11-01 16:15:37,298 INFO nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] check_instance_lock: decorating: |<function attach_volume at 0x3542b18>|
2011-11-01 16:15:37,298 INFO nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x2dde8d0>| |<nova.rpc.impl_kombu.RpcContext object at 0x47e8910>| |16|
2011-11-01 16:15:37,298 DEBUG nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] instance 16: getting locked state from (pid=2536) get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1165
2011-11-01 16:15:37,331 INFO nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] check_instance_lock: locked: |False|
2011-11-01 16:15:37,331 INFO nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] check_instance_lock: admin: |True|
2011-11-01 16:15:37,331 INFO nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] check_instance_lock: executing: |<function attach_volume at 0x3542b18>|
2011-11-01 16:15:37,362 AUDIT nova.compute.manager [8a6a994c-51e7-4c4e-97b0-6b46901e6df4 novaadmin proj] instance 16: attaching volume 2 to /dev/vdb

In nova.conf I have the following:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--s3_host=192.168.2.200
--rabbit_host=192.168.2.200
--my_ip=192.168.2.200
--cc_host=192.168.2.200
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=br100
--flat_interface=eth0
--flat_injected=False
--daemonize=1
--libvirt_type=kvm
--ec2_url=http://192.168.2.200:8773/services/Cloud
--nova_url=http://192.168.2.200:8774/v1.1
--FAKE_subdomain=ec2
--routing_source_ip=192.168.2.200
--sql_connection=mysql://novadbadmin:novasecret@192.168.2.200/nova
--glance_api_servers=192.168.2.200:9292
--iscsi_helper=tgtadm
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.
--vlan_interface=eth0
--public_interface=eth0
--verbose

I am guessing that something quirky is going on when OpenStack tries to attach the volume.
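
One more place worth checking (an assumption on my part, not something tried above) is the libvirt/qemu log for the instance on the compute host, since device hot-plug failures are recorded there:

# The instance directory name is assumed; match it to the instance id.
sudo tail -n 50 /var/log/libvirt/qemu/instance-00000010.log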

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Chris McClung

Chris McClung (chris-mcclung) said (#1):

I am running Ubuntu 11.10 with the Ubuntu packages.

Chris McClung (chris-mcclung) said (#2):

WAD (working as designed).

Travis Rhoden (trhoden) said (#3):

What was the solution to your problem? I'm experiencing the exact same one right now.

The volume says it is attached, but iscsiadm shows no session.

Looking in nova-compute.log, there is the same line about "attaching volume 2 to /dev/vdb". However, after a few minutes, an error does eventually show up:

2011-12-14 17:17:31,391 AUDIT nova.compute.manager [374df4a8-a85e-4352-8b87-06cba81c75d1 trhoden testproj] instance 20: attaching volume 2 to /dev/vdb
2011-12-14 17:17:32,460 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/exception.py", line 98, in wrapped
(nova.exception): TRACE: return f(*args, **kw)
(nova.exception): TRACE: File "/usr/lib/pymodules/python2.6/nova/virt/libvirt/connection.py", line 361, in attach_volume
(nova.exception): TRACE: virt_dom.attachDevice(xml)
(nova.exception): TRACE: File "/usr/lib/python2.6/dist-packages/libvirt.py", line 263, in attachDevice
(nova.exception): TRACE: if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
(nova.exception): TRACE: libvirtError: operation failed: adding virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 device failed: Duplicate ID 'virtio-disk1' for device

Wondering if anyone has seen this before.
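
One way to see where the duplicate 'virtio-disk1' comes from (my guess at a diagnostic, not something confirmed in this thread) would be to dump the running domain's XML and look for a leftover <disk> entry from an earlier attach attempt:

# Domain names can be listed with: sudo virsh list
# instance-00000014 is assumed here (instance 20 in hex).
sudo virsh dumpxml instance-00000014 | grep -A 5 '<disk'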

Chris McClung (chris-mcclung) said (#4):

You have to add/use the following line in nova.conf:
--iscsi_helper=tgtadm

It seems to be an Ubuntu thing since 11.10.
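
For completeness, applying this means adding the flag to /etc/nova/nova.conf and restarting the affected services. The restart commands are assumed from the Ubuntu 11.10 packaging:

# Add --iscsi_helper=tgtadm to /etc/nova/nova.conf, then:
sudo service nova-volume restart
sudo service nova-compute restart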

Lorin Hochstein (lorinh) said (#5):

Perhaps the default value of that flag should be changed?

Chris McClung (chris-mcclung) said (#6):

Makes sense. I think some form of auto-configuration is needed once OpenStack has matured more.

Being able to declare "this is my setup: x, y, z" would cut out a lot of misconfigurations. Getting it to run properly on one machine with a single NIC took adding a few unofficial config options.

Travis Rhoden (trhoden) said (#7):

Thanks for the response, Chris.

However, that didn't resolve my problem. I may ask a separate question about my (seemingly unique) issue, rather than hijacking this Solved question.

gowtham (gowtham-cybercafe) said (#8):

I just rebooted my system, then ran the commands to restart the nova services, and it started.