euca-attach-volume issue

Asked by Sudhir

The following is my setup:

root@in01emt16:~# euca-describe-availability-zones verbose
AVAILABILITYZONE nova available
AVAILABILITYZONE |- in01emt16
AVAILABILITYZONE | |- nova-scheduler enabled :-) 2011-08-28 06:06:18
AVAILABILITYZONE | |- nova-network enabled :-) 2011-08-28 06:06:19
AVAILABILITYZONE | |- nova-volume enabled :-) 2011-08-28 06:06:18
AVAILABILITYZONE |- in01emt17
AVAILABILITYZONE | |- nova-compute enabled :-) 2011-08-28 06:06:22

The firewall is disabled on both nodes:

root@in01emt16:~# ufw status
Status: inactive

root@in01emt17:~# ufw status
Status: inactive

cat /etc/hosts
127.0.0.1 localhost
192.168.3.1 in01emt16.synopsys.com in01emt16
192.168.3.2 in01emt17.synopsys.com in01emt17

root@in01emt16:~# euca-describe-instances
RESERVATION r-6aui4s55 proj default
INSTANCE i-0000000c ami-00000002 192.168.3.3 192.168.3.3 running None (proj, in01emt17) 0 m1.small 2011-08-28T05:55:56Z nova

root@in01emt16:~# cat /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose=1
--s3_host=10.144.199.85
--rabbit_host=192.168.3.1
--cc_host=192.168.3.1
--fixed_range=192.168.0.0/16
--network_size=8
--ec2_url=http://10.144.199.85:8773/services/Cloud
--FAKE_subdomain=ec2
--routing_source_ip=192.168.3.1
--sql_connection=mysql://root:nova@10.144.199.85/nova
--glance_host=192.168.3.1
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=193.168.3
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--flat_injected=False
--public_interface=eth0
--flat_network_dhcp_start=192.168.3.3
--volume_manager=nova.volume.manager.VolumeManager
--volume_topic=volume

root@in01emt16:~# euca-describe-volumes
VOLUME vol-00000005 3 nova available (proj, in01emt16, None, None) 2011-08-27T16:45:44Z
VOLUME vol-00000006 1 nova available (proj, in01emt16, None, None) 2011-08-28T05:55:41Z

When I attach a volume from the controller to the instance, there is no error:

 euca-attach-volume -i i-0000000c -d /dev/vdc vol-00000006
VOLUME vol-00000006

but in nova-compute.log I see the following error, because of which the volume doesn't attach to the instance:

2011-08-28 11:32:35,414 DEBUG nova.rpc [-] received {u'_context_request_id': u'-VG5KGCH-AUHXCY9J-LT', u'_context_read_deleted': False, u'args': {u'instance_id': 12, u'mountpoint': u'/dev/vdc', u'volume_id': 6}, u'_context_is_admin': True, u'_context_timestamp': u'2011-08-28T06:02:35Z', u'_context_user': u'novaadmin', u'method': u'attach_volume', u'_context_project': u'proj', u'_context_remote_address': u'10.144.199.85'} from (pid=1263) _receive /usr/lib/pymodules/python2.7/nova/rpc.py:167
2011-08-28 11:32:35,414 DEBUG nova.rpc [-] unpacked context: {'timestamp': u'2011-08-28T06:02:35Z', 'remote_address': u'10.144.199.85', 'project': u'proj', 'is_admin': True, 'user': u'novaadmin', 'request_id': u'-VG5KGCH-AUHXCY9J-LT', 'read_deleted': False} from (pid=1263) _unpack_context /usr/lib/pymodules/python2.7/nova/rpc.py:331
2011-08-28 11:32:35,415 INFO nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] check_instance_lock: decorating: |<function attach_volume at 0x248f8c0>|
2011-08-28 11:32:35,415 INFO nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x2381210>| |<nova.context.RequestContext object at 0x3af2ad0>| |12|
2011-08-28 11:32:35,415 DEBUG nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] instance 12: getting locked state from (pid=1263) get_lock /usr/lib/pymodules/python2.7/nova/compute/manager.py:680
2011-08-28 11:32:35,468 INFO nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] check_instance_lock: locked: |False|
2011-08-28 11:32:35,469 INFO nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] check_instance_lock: admin: |True|
2011-08-28 11:32:35,469 INFO nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] check_instance_lock: executing: |<function attach_volume at 0x248f8c0>|
2011-08-28 11:32:35,522 AUDIT nova.compute.manager [-VG5KGCH-AUHXCY9J-LT novaadmin proj] instance 12: attaching volume 6 to /dev/vdc
2011-08-28 11:32:35,534 WARNING nova.volume.driver [-] ISCSI provider_location not stored, using discovery
2011-08-28 11:32:35,534 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m discovery -t sendtargets -p in01emt16 from (pid=1263) execute /usr/lib/pymodules/python2.7/nova/utils.py:150
2011-08-28 11:32:35,698 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/rpc.py", line 188, in _receive
(nova): TRACE: rval = node_func(context=ctxt, **node_args)
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 105, in decorated_function
(nova): TRACE: function(self, context, instance_id, *args, **kwargs)
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 743, in attach_volume
(nova): TRACE: volume_id)
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/volume/manager.py", line 164, in setup_compute_volume
(nova): TRACE: path = self.driver.discover_volume(context, volume_ref)
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/volume/driver.py", line 446, in discover_volume
(nova): TRACE: iscsi_properties = self._get_iscsi_properties(volume)
(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/volume/driver.py", line 407, in _get_iscsi_properties
(nova): TRACE: (volume['name']))
(nova): TRACE: Error: Could not find iSCSI export for volume volume-00000006
(nova): TRACE:

root@in01emt17:~# iscsiadm -m node

192.168.3.1:3260,1 iqn.2010-10.org.openstack:volume-00000005
10.144.199.85:3260,1 iqn.2010-10.org.openstack:volume-00000005
169.254.169.254:3260,1 iqn.2010-10.org.openstack:volume-00000005
192.168.122.1:3260,1 iqn.2010-10.org.openstack:volume-00000005
192.168.3.1:3260,1 iqn.2010-10.org.openstack:volume-00000006
10.144.199.85:3260,1 iqn.2010-10.org.openstack:volume-00000006
169.254.169.254:3260,1 iqn.2010-10.org.openstack:volume-00000006
192.168.122.1:3260,1 iqn.2010-10.org.openstack:volume-00000006

Please let me know why the volume cannot be attached to the instance, and what the fix/solution for this is.

Thanks,
Sudhir

Question information

Language: English
Status: Solved
For: OpenStack Compute (nova)
Assignee: No assignee
Solved by: Sudhir
Everett Toews (everett-toews) said :
#1

Have you enabled the iSCSI target on the machine running nova-volume?

echo "ISCSITARGET_ENABLE=true" > /etc/default/iscsitarget
/etc/init.d/iscsitarget restart
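
To double-check that the target daemon is actually exporting the volume, something like the following on the nova-volume node can help (a rough sketch; the /proc paths assume the iscsitarget/IET package that ships with Ubuntu):

# on in01emt16, the node running nova-volume
ps aux | grep ietd
cat /proc/net/iet/volume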

You might also need to set the --iscsi_ip_prefix flag; see
http://docs.openstack.org/cactus/openstack-compute/admin/content/reference-for-flags-in-nova-conf.html
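
For example, if iSCSI traffic to the volume node should go over the 192.168.3.x network, the flag would look something like this (just an illustration of the format, not a value taken from your config):

--iscsi_ip_prefix=192.168.3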

Sounds like you already have the nova-volumes volume group set up. There's
more info at:

http://docs.openstack.org/cactus/openstack-compute/admin/content/managing-volumes.html
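
To confirm that the volume group and the per-volume logical volumes exist on the volume node, the standard LVM commands are enough (the names below assume the default nova-volumes group):

sudo vgs nova-volumes
sudo lvs nova-volumes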

Everett


Sudhir (av-sudhir) said :
#2

Hi Everett,

When I attach a volume to the running instance with euca-attach-volume, I get the following error in nova-compute:

2011-08-29 21:18:45,738 WARNING nova.volume.driver [-] ISCSI provider_location not stored, using discovery
2011-08-29 21:18:45,738 DEBUG nova.utils [-] Running cmd (subprocess): sudo iscsiadm -m discovery -t sendtargets -p in01emt16 from (pid=1074) execute /usr/lib/pymodules/python2.7/nova/utils.py:150
2011-08-29 21:18:45,830 ERROR nova [-] Exception during message handling
(nova): TRACE: Traceback (most recent call last):

(nova): TRACE: File "/usr/lib/pymodules/python2.7/nova/volume/driver.py", line 407, in _get_iscsi_properties
(nova): TRACE: (volume['name']))
(nova): TRACE: Error: Could not find iSCSI export for volume volume-00000001

root@in01emt16:~# cat /etc/default/iscsitarget
ISCSITARGET_ENABLE=true

root@in01emt16:~# euca-describe-volumes
VOLUME vol-00000001 1 nova available (proj, in01emt16, None, None) 2011-08-29T12:14:14Z

root@in01emt16:~# euca-describe-instances
RESERVATION r-r06zfr6k proj default
INSTANCE i-0000000a ami-00000002 192.168.3.2 192.168.3.2 running None (proj, in01emt17) 0 m1.small 2011-08-29T15:44:21Z nova

root@in01emt16:~# cat /etc/hosts
127.0.0.1 localhost
192.168.3.1 in01emt16.synopsys.com in01emt16
192.168.3.2 in01emt17.synopsys.com in01emt17

root@in01emt16:~# cat /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose=1
--s3_host=10.144.199.85
--rabbit_host=192.168.3.1
--cc_host=192.168.3.1
--ec2_path=/services/Cloud
--ec2_port=8773
--ec2_scheme=http
--fixed_range=192.168.0.0/16
--network_size=8
--FAKE_subdomain=ec2
--routing_source_ip=192.168.3.1
--sql_connection=mysql://root:nova@10.144.199.85/nova
--glance_host=192.168.3.1
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=193.168.3.
--network_manager=nova.network.manager.FlatDHCPManager
--flat_interface=eth1
--flat_injected=False
--public_interface=eth0
--flat_network_dhcp_start=192.168.3.3

Sudhir (av-sudhir) said :
#3

The issue was fixed after reinstalling the controller & compute node from scratch.

Thanks,
Sudhir