Found a potential bug while setting up multiple cinder-volume instances on single cinder node

Asked by Ganpat Agarwal

I was trying to set up multiple cinder-volume instances on a single-node setup. I edited the cinder.conf file as per the instructions given in the OpenStack docs for multi-backend cinder.

http://docs.openstack.org/admin-guide-cloud/content//multi_backend.html

I set up two backends and was able to see both backend hosts in cinder service-list.
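
For reference, the configuration I followed has the shape below (section names, volume groups, and backend names are taken from the doc's example, not my exact settings):

[DEFAULT]
enabled_backends=lvmdriver-1,lvmdriver-2

[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI

[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI_b

Each volume-type then points at a backend through its extra specs, e.g.:

cinder type-create lvm1
cinder type-key lvm1 set volume_backend_name=LVM_iSCSI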

PART 1 :
I tried to create a volume using the volume-type (configured for the backends), but it failed saying "No valid host was found". I tried to debug the code and got to the point where the process was stalling.

cinder/openstack/common/scheduler/filter.py : get_filtered_objects :

for filter_cls in filter_classes:
    objs = filter_cls().filter_all(objs, filter_properties)
return list(objs)

In these lines of code, objs holds the list of hosts remaining after each filter class runs.
In my case, the code found a list of hosts for one of the classes in the first iteration only; in the second and third iterations the list was empty. As the return comes after the loop is over, it was always sending back the empty list, and hence the error: "No valid host was found".
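
For context, filter_all in the common scheduler code is (roughly) a generator that yields only the objects passing each filter, so the filters are ANDed together; once one filter rejects every host, every later filter sees an empty sequence. A simplified sketch, not the verbatim source:

class BaseFilter(object):
    def _filter_one(self, obj, filter_properties):
        # Subclasses override this to decide whether one object passes.
        return True

    def filter_all(self, filter_obj_list, filter_properties):
        # Lazily yield only the objects that pass this filter.
        for obj in filter_obj_list:
            if self._filter_one(obj, filter_properties):
                yield obj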

I changed the code to this and it was then returning the list of hosts.
Changed code :
for filter_cls in filter_classes:
    objs = filter_cls().filter_all(objs, filter_properties)
    if objs:
        return list(objs)
return list(objs)

PART 2 :
After making the above changes, I tried to create a volume using one of the volume-types and it succeeded. Then I tried to use the other volume-type to create a volume, but it used the other host, not the host associated with this volume-type.

I went through the logs and found that it was discovering both hosts in
cinder/cinder/scheduler/filter_scheduler.py : _get_weighted_candidates : line 222

After getting the host list, it was choosing the first host in the list in
cinder/cinder/scheduler/filter_scheduler.py : _schedule : line 239
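
As far as I can tell, the selection step works roughly like this (a minimal, self-contained sketch with hypothetical names, not the actual Cinder classes): the weighers sort the candidates by descending weight, so index 0 is simply the best-scoring host.

class WeighedHost(object):
    def __init__(self, host, weight):
        self.host = host
        self.weight = weight

def get_weighed_hosts(candidates):
    # Sort candidates by descending weight; the best-scoring
    # host ends up at index 0.
    return sorted(candidates, key=lambda h: h.weight, reverse=True)

weighed_hosts = get_weighed_hosts([WeighedHost('node1@lvm2', 50.0),
                                   WeighedHost('node1@lvm1', 100.0)])
print(weighed_hosts[0].host)  # -> node1@lvm1 (highest weight wins)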

My doubts:
==> Was the host list filtered as per the volume-type provided during the create-volume process?
==> Why are we using the first host from the "weighed_hosts" list?

Question information

Language:
English
Status:
Solved
For:
Cinder
Assignee:
No assignee
Solved by:
Ganpat Agarwal
Jon Bernard (jbernard) said :
#2

I'm seeing a different, but possibly related, behaviour to what you describe here. Multiple LVM backends of type `thin` result in only the last one having the thin pool created (the others remain uninitialized).

I'll be diving into this next week and I'll let you know how it goes. If you get any further on your issue, I would be very curious to know the details.

This should probably be turned into a bug; there's definitely something strange going on.

Cheers

Jon Bernard (jbernard) said :
#3

Ahh, you've already created a bug ;)

Ganpat Agarwal (gans-developer) said :
#4

Is this a bug, or am I doing something wrong?

Ganpat Agarwal (gans-developer) said :
#5

Please ignore this question.
My code was not returning the volume backend name correctly in the update_volume_stats function, and that was causing the whole trouble.
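
For anyone hitting the same thing, the fix pattern looks roughly like this (a hypothetical example driver, not my actual code): report the per-backend name from the backend's own config section so the scheduler can match it against the volume-type's extra spec.

class ExampleISCSIDriver(object):
    def __init__(self, configuration):
        self.configuration = configuration
        self._stats = {}

    def update_volume_stats(self):
        # safe_get returns None when volume_backend_name is not set
        # in this backend's section of cinder.conf.
        backend_name = self.configuration.safe_get('volume_backend_name')
        self._stats = {
            'volume_backend_name': backend_name or 'ExampleISCSIDriver',
            'vendor_name': 'Open Source',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'total_capacity_gb': 100,
            'free_capacity_gb': 50,
            'reserved_percentage': 0,
        }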
