Attached nova-volume does not show up in the target VM
I have set up OpenStack all on one machine and am currently trying to set up nova-volume. I can start instances fine and can SSH into them without problems.
I have created a volume group named nova-volumes.
When I attach a volume to a VM, the volume shows as in use after running:
euca-attach-volume -i i-00000010 -d /dev/vdb vol-00000002
The Status is:
euca-describe-
VOLUME vol-00000002 10 nova in-use (proj, cloud1, i-00000010[cloud1], /dev/vdb) 2011-11-
vgdisplay shows the following for nova-volumes:
Alloc PE / Size 2560 / 10.00 GiB
Free PE / Size 951302 / 3.63 TiB
VG UUID nBoTgU-
If I do a discovery, I get:
iscsiadm -m discovery -t st -p 192.168.2.200
192.168.
euca-describe-
RESERVATION r-kectzmil proj default
INSTANCE i-00000010 ami-00000002 192.168.22.6 192.168.22.6 running chriskey (proj, cloud1) 0 m1.tiny 2011-11-
However, if I run:
iscsiadm -m session
iscsiadm: No active sessions.
Running fdisk -l inside the VM shows no new block device.
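As I understand it, the attach is supposed to end up creating an iSCSI session on the compute host, roughly like the manual steps below. The portal and target IQN here are my guesses (nova's default iqn.2010-10.org.openstack: prefix plus the volume name); substitute whatever the discovery above actually returns:

```shell
# Hypothetical values -- replace with what
# 'iscsiadm -m discovery -t st -p 192.168.2.200' actually prints.
PORTAL="192.168.2.200:3260"
TARGET="iqn.2010-10.org.openstack:volume-00000002"

if command -v iscsiadm >/dev/null 2>&1; then
    # Log in to the target by hand; nova-compute normally does this itself.
    sudo iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
    # If the login works, one session should now be listed and a new
    # block device should appear on the compute host.
    sudo iscsiadm -m session
fi
```

If even a manual login fails, the problem would be on the tgtd/export side rather than in nova-compute.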
The volume.log shows:
2011-11-01 16:12:42,871 AUDIT nova [-] Starting volume node (version 2011.3-
2011-11-01 16:12:42,872 DEBUG nova.utils [-] Running cmd (subprocess): sudo vgs --noheadings -o name from (pid=2312) execute /usr/lib/
2011-11-01 16:12:43,046 DEBUG nova.utils [-] backend <module 'nova.db.
2011-11-01 16:12:43,047 INFO nova.db.sqlalchemy [-] Using mysql/eventlet db_pool.
2011-11-01 16:12:43,130 DEBUG nova.volume.manager [-] Re-exporting 1 volumes from (pid=2312) init_host /usr/lib/
2011-11-01 16:12:43,132 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op new --lld=iscsi --mode=target --tid=1 --targetname=
2011-11-01 16:12:43,136 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op bind --lld=iscsi --mode=target --initiator-
2011-11-01 16:12:43,140 DEBUG nova.utils [-] Running cmd (subprocess): sudo tgtadm --op new --lld=iscsi --mode=logicalunit --tid=1 --lun=1 --backing-
2011-11-01 16:12:43,168 INFO nova.rpc [-] Connected to AMQP server on 192.168.2.200:5672
2011-11-01 16:12:43,168 DEBUG nova [-] Creating Consumer connection for Service volume from (pid=2312) start /usr/lib/
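Since volume.log shows tgtadm creating target tid 1, I assume the server-side export can be inspected along these lines (a sketch; the port value is the standard iSCSI default, not something from my logs):

```shell
# Ask tgtd what it is actually exporting; tid 1 should match the
# target that volume.log created above.
ISCSI_PORT=3260
if command -v tgtadm >/dev/null 2>&1; then
    sudo tgtadm --lld iscsi --op show --mode target
fi
# Also confirm tgtd is listening on the standard iSCSI port at all:
if command -v netstat >/dev/null 2>&1; then
    netstat -ltn | grep ":$ISCSI_PORT" || echo "nothing listening on $ISCSI_PORT"
fi
```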
compute.log shows:
2011-11-01 16:15:37,297 DEBUG nova.rpc [-] unpacked context: {'user_id': u'novaadmin', 'roles': [u'projectmanager'], 'timestamp': u'2011- ss': u'192.168.2.200', 'strategy': u'noauth', 'is_admin': True, 'request_id': u'8a6a994c- b46901e6df4', 'project_id': u'proj', 'read_deleted': False} from (pid=2536) _unpack_context /usr/lib/python2.7/dist-
2011-11-01 16:15:37,298 INFO nova.compute. check_instance_lock: decorating: |<function attach_volume at 0x3542b18>|
2011-11-01 16:15:37,298 INFO nova.compute. check_instance_lock: arguments: |<nova. l_kombu.RpcContext object at 0x47e8910>| |16|
2011-11-01 16:15:37,298 DEBUG nova.compute. instance 16: getting locked state from (pid=2536) get_lock /usr/lib/ manager.py:1165
2011-11-01 16:15:37,331 INFO nova.compute. check_instance_lock: locked: |False|
2011-11-01 16:15:37,331 INFO nova.compute. check_instance_lock: admin: |True|
2011-11-01 16:15:37,331 INFO nova.compute. check_instance_lock: executing: |<function attach_volume at 0x3542b18>|
2011-11-01 16:15:37,362 AUDIT nova.compute. instance 16: attaching volume 2 to /dev/vdb
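compute.log stops at "attaching volume 2 to /dev/vdb" with no error, so one more thing I can try is checking whether libvirt ever received the disk. This is a sketch; nova names libvirt domains instance-%08x, so i-00000010 (instance id 16) should be instance-00000010:

```shell
# Hypothetical check: dump the libvirt XML for the instance and look
# for a second <disk> element (the attached /dev/vdb).
DOMAIN="instance-00000010"   # nova's instance-%08x naming; 0x10 == 16
if command -v virsh >/dev/null 2>&1; then
    sudo virsh dumpxml "$DOMAIN" | grep -A 4 "<disk"
fi
```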
In nova.conf I have the following:
--dhcpbridge_
--dhcpbridge=
--logdir=
--state_
--lock_
--s3_host=
--rabbit_
--my_ip=
--cc_host=
--network_
--flat_
--flat_
--flat_
--daemonize=1
--libvirt_type=kvm
--ec2_url=http://
--nova_url=http://
--FAKE_
--routing_
--sql_connectio
--glance_
--iscsi_
--image_
--iscsi_
--vlan_
--public_
--verbose
I am guessing that something goes wrong at the point where OpenStack tries to attach the volume.
Question information
- Language: English
- Status: Solved
- Assignee: No assignee
- Solved by: Chris McClung