Error attaching volume to VM on compute node

Asked by diul

Hi guys, I need your help.
I have an OpenStack installation on three nodes. The Controller Node runs all the services, with Quantum for networking and Cinder for storage. The other two nodes run only the nova-compute service.

The problem is that I can't attach a volume (located on the Controller Node) to an instance running on a Compute Node.
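For context, attaching a volume to an instance is normally done through the dashboard or with the nova client along these lines (the instance UUID and device name below are placeholders, not necessarily the exact command that was used):

    nova volume-attach INSTANCE_UUID 27e29461-0b46-4b64-941d-a8a27f4f91d2 /dev/vdc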
I get this error in the compute node's nova-compute log (/var/log/nova/compute.log):

2013-06-26 15:00:22 6280 ERROR nova.openstack.common.rpc.amqp [-] Exception during message handling
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp rval = self.proxy.dispatch(ctxt, version, method, **args)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/exception.py", line 117, in wrapped
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp temp_level, payload)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/exception.py", line 92, in wrapped
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return f(*args, **kw)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 175, in decorated_function
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp pass
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 161, in decorated_function
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 196, in decorated_function
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp kwargs['instance']['uuid'], e, sys.exc_info())
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 190, in decorated_function
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2020, in attach_volume
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp context, instance.get('uuid'), mountpoint)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp self.gen.next()
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2016, in attach_volume
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp mountpoint, instance)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2023, in _attach_volume
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp volume = self.volume_api.get(context, volume_id)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/volume/api.py", line 245, in get
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp rv = self.db.volume_get(context, volume_id)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/db/api.py", line 1101, in volume_get
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return IMPL.volume_get(context, volume_id)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 129, in wrapper
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp return f(*args, **kwargs)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py", line 3023, in volume_get
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp raise exception.VolumeNotFound(volume_id=volume_id)
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp VolumeNotFound: Volume 27e29461-0b46-4b64-941d-a8a27f4f91d2 could not be found.
2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp

What should I check? Ask if you need more information.

Question information

Language: English
Status: Solved
For: Cinder
Assignee: No assignee
Solved by: diul
Xiang Hui (xianghui) said :
#1

Hi diul,

The "VolumeNotFound" error means Nova cannot find the volume you specified to attach. Have you created an available volume yet? Please post more detailed info here, such as the volume details and the pv/vg/lv output.

diul (diul) said :
#2

Yes, the volume exists, but on the Controller Node, and it belongs to the cinder-volumes volume group. On the Compute Nodes there is no cinder and no cinder-volumes group. I've already checked that port 3260 is reachable from the two Compute Nodes.

In the DB, the "provider_location" column for that volume contains

"CONTROLLER_IP:3260,23 iqn.2010-10.org.openstack:volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8 1"
(this is not the volume from the error in the first post; it's another volume)
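For reference, reachability and target discovery from a compute node can be verified with something like the following (assuming the open-iscsi initiator tools are installed there; this is only an illustrative check):

    telnet CONTROLLER_IP 3260
    iscsiadm -m discovery -t sendtargets -p CONTROLLER_IP:3260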

Do I need to configure cinder on the Compute Node too? Am I wrong in assuming I can attach a remote volume (existing on the Controller Node) to a VM running on a Compute Node?

This is "nova.conf" on Compute Node

[DEFAULT]
#DEFAULT CONFIG
verbose=True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
#volumes_dir = /etc/nova/volumes
injected_network_template = /usr/share/nova/interfaces.template
rootwrap_config = /etc/nova/rootwrap.conf
auth_strategy = keystone

# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=CONTROLLER_IP:9292

# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=CONTROLLER_IP
s3_host=CONTROLLER_IP

#NETWORK
network_api_class=nova.network.quantumv2.api.API
quantum_admin_username=quantum
quantum_admin_password=####
quantum_admin_auth_url=http://CONTROLLER_IP:35357/v2.0/
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_url=http://CONTROLLER_IP:9696/
#firewall_driver=nova.virt.firewall.NoopFirewallDriver
#libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
#LIBVIRT
libvirt_nonblocking = True
libvirt_inject_partition = -1
compute_driver = libvirt.LibvirtDriver
libvirt_type=kvm
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini

# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True

#ISCSI
iscsi_helper = tgtadm

# VOLUMES
#volume_driver=nova.volume.driver.ISCSIDriver
#volume_group=nova-volumes
#volume_name_template=volume-%08x

#MYSQL
sql_connection = mysql://nova:nova@CONTROLLER_IP/nova

#MESSAGE QUEUE
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_hostname=CONTROLLER_IP
qpid_port=5672

# NOVNC CONSOLE
novncproxy_base_url=http://CONTROLLER_IP:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=NODE_IP
vncserver_listen=NODE_IP

[keystone_authtoken]
admin_tenant_name = service
admin_user = nova
admin_password = ####
auth_host = CONTROLLER_IP
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-nova

I hope that's clear and enough detail.
Thanks for the help.

Xiaoxi Chen (xiaoxi-chen) said :
#3

No, you don't need to configure cinder on the compute node.
It's likely that the volume you want to attach doesn't exist, due to some earlier error.
Could you please go through the following checklist:
    1. Run 'cinder list' and paste the result; check whether the volume you want to attach is "available".
    2. Run pvdisplay, lvdisplay and vgdisplay on the control node, and paste the results.
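For example (vgdisplay and lvdisplay accept an optional volume group name to limit the output; cinder-volumes is the VG name the LVM driver uses by default, so adjust it if yours differs):

    cinder list
    pvdisplay
    vgdisplay cinder-volumes
    lvdisplay cinder-volumes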

diul (diul) said :
#5

The volume exists, and I am able to attach it to a VM running on the Controller Node.
1. cinder list
+--------------------------------------+----------------+--------------+------+-------------+-------------+
|                  ID                  |     Status     | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+----------------+--------------+------+-------------+-------------+
| 4c820b34-ddc7-4719-bc8a-dc0277324451 |   available    |     HDD3     |  3   |     None    |             |
| b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8 |   available    |     HDD4     |  1   |     None    |             |
| ddd50017-f2c8-4e59-8369-281b9177280d | error_deleting |     Pro      |  1   |     None    |             |
+--------------------------------------+----------------+--------------+------+-------------+-------------+
2. lvdisplay output:
  --- Logical volume ---
  LV Path /dev/cinder-volumes/volume-ddd50017-f2c8-4e59-8369-281b9177280d
  LV Name volume-ddd50017-f2c8-4e59-8369-281b9177280d
  VG Name cinder-volumes
  LV UUID 8Mdy5e-isXi-xT75-T6bh-XZOf-pjJc-ltL26l
  LV Write Access read/write
  LV Creation host, time Controller_IP, 2013-06-24 17:44:12 +0200
  LV Status available
  # open 1
  LV Size 1,00 GiB
  Current LE 256
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:3
  --- Logical volume ---
  LV Path /dev/cinder-volumes/volume-4c820b34-ddc7-4719-bc8a-dc0277324451
  LV Name volume-4c820b34-ddc7-4719-bc8a-dc0277324451
  VG Name cinder-volumes
  LV UUID hwgLlu-m5h9-mYlw-QDXK-2wDF-JbhI-sKOXh7
  LV Write Access read/write
  LV Creation host, time Controller_IP, 2013-06-26 10:50:05 +0200
  LV Status available
  # open 1
  LV Size 3,00 GiB
  Current LE 768
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:5
  --- Logical volume ---
  LV Path /dev/cinder-volumes/volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8
  LV Name volume-b66bd34a-bfb1-4f18-a8a1-67a9ef51e8e8
  VG Name cinder-volumes
  LV UUID Q6pirW-xWwb-znt6-g4eZ-qYHv-4EtF-BkrGFi
  LV Write Access read/write
  LV Creation host, time Controller_IP, 2013-06-26 13:53:19 +0200
  LV Status available
  # open 1
  LV Size 1,00 GiB
  Current LE 256
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:7
  --- Logical volume ---
  LV Path /dev/vg_opscontroller/lv_root
  LV Name lv_root
  VG Name vg_opscontroller
  LV UUID qtM9O5-KNi0-7MUQ-8HMs-nmEd-ZZdE-JFq3Oi
  LV Write Access read/write
  LV Creation host, time ops-controller, 2013-04-18 15:02:54 +0200
  LV Status available
  # open 1
  LV Size 50,00 GiB
  Current LE 12800
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:0
  --- Logical volume ---
  LV Path /dev/vg_opscontroller/lv_home
  LV Name lv_home
  VG Name vg_opscontroller
  LV UUID PCBaOn-jVXf-1OC6-27Vz-lLNz-LWil-zN4b9v
  LV Write Access read/write
  LV Creation host, time ops-controller, 2013-04-18 15:03:04 +0200
  LV Status available
  # open 0
  LV Size 220,57 GiB
  Current LE 56466
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:6
  --- Logical volume ---
  LV Path /dev/vg_opscontroller/lv_swap
  LV Name lv_swap
  VG Name vg_opscontroller
  LV UUID jrLxU3-gKq3-I8Zn-r0sG-8jtR-bv5f-vcSCOO
  LV Write Access read/write
  LV Creation host, time ops-controller, 2013-04-18 15:03:44 +0200
  LV Status available
  # open 1
  LV Size 7,81 GiB
  Current LE 2000
  Segments 1
  Allocation inherit
  Read ahead sectors auto
  - currently set to 256
  Block device 253:1
________________________________________________________________________________
I'll take the opportunity to also ask how to delete that "error_deleting" volume. The problem, I think, is that in the DB the "provider_location" column is NULL for it. This volume, with its error, dates from before Cinder was running correctly.

Xiaoxi Chen (xiaoxi-chen) said :
#6

From the first log
   2013-06-26 15:00:22 6280 TRACE nova.openstack.common.rpc.amqp VolumeNotFound: Volume 27e29461-0b46-4b64-941d-a8a27f4f91d2 could not be found.

the volume you are trying to attach is 27e29461-0b46-4b64-941d-a8a27f4f91d2,

but I cannot find such a volume in your 'cinder list' output.

Have you deleted it manually?

diul (diul) said :
#7

Yes. It was stuck in "attaching" status, so I tried to delete the volume from the log, and the delete was successful.

Xiaoxi Chen (xiaoxi-chen) said :
#8

Oh, sorry, I misread the log.

This is simply because you forgot to configure nova-compute to use Cinder instead of nova-volume.

You should add
       volume_api_class=nova.volume.cinder.API
to your nova.conf.
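On a typical Folsom compute node this ends up looking something like the sketch below (the service name shown is the one used on RPM-based installs, so adapt it to your distribution; nova-compute must be restarted for the change to take effect):

       # /etc/nova/nova.conf on each compute node, [DEFAULT] section
       volume_api_class=nova.volume.cinder.API

       # then restart the compute service
       service openstack-nova-compute restart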

I would bet you are using the Folsom release, right? :)

diul (diul) said :
#9

Yes, it's Folsom.

I've added
     volume_api_class=nova.volume.cinder.API
(and nothing else) to nova.conf on the Compute Node, but I got this error:

2013-06-27 16:21:36 6280 INFO nova.service [-] Caught SIGTERM, exiting
2013-06-27 16:21:36 6280 CRITICAL nova [-] need more than 0 values to unpack
2013-06-27 16:21:36 6280 TRACE nova Traceback (most recent call last):
2013-06-27 16:21:36 6280 TRACE nova File "/usr/bin/nova-compute", line 48, in <module>
2013-06-27 16:21:36 6280 TRACE nova service.wait()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 659, in wait
2013-06-27 16:21:36 6280 TRACE nova _launcher.wait()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 202, in wait
2013-06-27 16:21:36 6280 TRACE nova rpc.cleanup()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/__init__.py", line 203, in cleanup
2013-06-27 16:21:36 6280 TRACE nova return _get_impl().cleanup()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 581, in cleanup
2013-06-27 16:21:36 6280 TRACE nova return rpc_amqp.cleanup(Connection.pool)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 419, in cleanup
2013-06-27 16:21:36 6280 TRACE nova connection_pool.empty()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 63, in empty
2013-06-27 16:21:36 6280 TRACE nova self.get().close()
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 368, in close
2013-06-27 16:21:36 6280 TRACE nova self.connection.close()
2013-06-27 16:21:36 6280 TRACE nova File "<string>", line 6, in close
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close
2013-06-27 16:21:36 6280 TRACE nova ssn.close(timeout=timeout)
2013-06-27 16:21:36 6280 TRACE nova File "<string>", line 6, in close
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close
2013-06-27 16:21:36 6280 TRACE nova if not self._ewait(lambda: self.closed, timeout=timeout):
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
2013-06-27 16:21:36 6280 TRACE nova result = self.connection._ewait(lambda: self.error or predicate(), timeout)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait
2013-06-27 16:21:36 6280 TRACE nova result = self._wait(lambda: self.error or predicate(), timeout)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 193, in _wait
2013-06-27 16:21:36 6280 TRACE nova return self._waiter.wait(predicate, timeout=timeout)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 57, in wait
2013-06-27 16:21:36 6280 TRACE nova self.condition.wait(3)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 96, in wait
2013-06-27 16:21:36 6280 TRACE nova sw.wait(timeout)
2013-06-27 16:21:36 6280 TRACE nova File "/usr/lib/python2.6/site-packages/qpid/compat.py", line 53, in wait
2013-06-27 16:21:36 6280 TRACE nova ready, _, _ = select([self], [], [], timeout)
2013-06-27 16:21:36 6280 TRACE nova ValueError: need more than 0 values to unpack
2013-06-27 16:21:36 6280 TRACE nova
2013-06-27 16:21:38 32208 INFO nova.compute.manager [-] Loading compute driver 'libvirt.LibvirtDriver'
2013-06-27 16:21:38 32208 CRITICAL nova [-] No module named cinderclient
2013-06-27 16:21:38 32208 TRACE nova Traceback (most recent call last):
2013-06-27 16:21:38 32208 TRACE nova File "/usr/bin/nova-compute", line 46, in <module>
2013-06-27 16:21:38 32208 TRACE nova server = service.Service.create(binary='nova-compute')
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 492, in create
2013-06-27 16:21:38 32208 TRACE nova periodic_fuzzy_delay=periodic_fuzzy_delay)
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/service.py", line 387, in __init__
2013-06-27 16:21:38 32208 TRACE nova self.manager = manager_class(host=self.host, *args, **kwargs)
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in __init__
2013-06-27 16:21:38 32208 TRACE nova self.volume_api = volume.API()
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/volume/__init__.py", line 27, in API
2013-06-27 16:21:38 32208 TRACE nova cls = importutils.import_class(nova.flags.FLAGS.volume_api_class)
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 30, in import_class
2013-06-27 16:21:38 32208 TRACE nova __import__(mod_str)
2013-06-27 16:21:38 32208 TRACE nova File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 24, in <module>
2013-06-27 16:21:38 32208 TRACE nova from cinderclient import service_catalog
2013-06-27 16:21:38 32208 TRACE nova ImportError: No module named cinderclient
2013-06-27 16:21:38 32208 TRACE nova

Do I have to modify something else?

diul (diul) said :
#10

It seems solved by editing nova.conf on the Compute Node as you suggested, and also installing python-cinderclient on the Compute Node ;)
Thanks a lot xiaoxi_chen
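For anyone who hits the same "No module named cinderclient" error, the missing piece is the Cinder client library on the compute node; on an RPM-based install this is roughly the following (package and service names are the usual ones there and may differ on other distributions):

    yum install python-cinderclient
    service openstack-nova-compute restart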

zhang.chen (superdebuger) said :
#11

I also hit this issue. Thanks.

But diul, it's generally not good manners to set "Solved by" to yourself unless you solved it without any external help.