How to assign 2 same type PCI passthrough devices to 2 instances separately

Asked by Yi Liu

I want to configure PCI passthrough to assign physical NICs to instances.

I have two NICs of the same type:
04:00.0 Ethernet controller [0200]: Intel Corporation 82599EB 10-Gigabit SFP+ Network Connection [8086:154d] (rev 01)
04:00.1 Ethernet controller [0200]: Intel Corporation 82599EB 10-Gigabit SFP+ Network Connection [8086:154d] (rev 01)

I added the configuration below to nova.conf on the compute node:
pci_passthrough_whitelist=[{"vendor_id":"8086", "product_id":"154d"}]
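As a side note, a single whitelist entry like this matches every device with that vendor/product pair, so both NICs (04:00.0 and 04:00.1) should be whitelisted by it. A rough Python sketch of that matching logic (an illustration only, not nova's actual code):

```python
# Hypothetical sketch of whitelist matching; nova's real implementation differs.
whitelist = [{"vendor_id": "8086", "product_id": "154d"}]

devices = [
    {"address": "0000:04:00.0", "vendor_id": "8086", "product_id": "154d"},
    {"address": "0000:04:00.1", "vendor_id": "8086", "product_id": "154d"},
]

def matches(spec, dev):
    """A device matches when every key in the spec equals the device's value."""
    return all(dev.get(k) == v for k, v in spec.items())

# Both addresses match the one entry, so both NICs end up in the pool.
whitelisted = [d["address"] for d in devices
               if any(matches(spec, d) for spec in whitelist)]
print(whitelisted)
```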

and added the configuration below to nova.conf on the controller node:
pci_alias={"vendor_id":"8086", "product_id":"154d", "name":"a1"}

Then I created a flavor and set the extra spec:
nova flavor-key test_flavor set "pci_passthrough:alias"="a1:1"
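The extra spec value has the form `<alias_name>:<count>`, so `a1:1` requests one matching device per instance (while `a1:2` would request two for a single instance). A minimal sketch of parsing that value (a hypothetical helper, not nova's code):

```python
def parse_alias_request(spec_value):
    """Split a 'pci_passthrough:alias' value like 'a1:1' into (name, count)."""
    name, count = spec_value.rsplit(":", 1)
    return name, int(count)

# 'a1:1' -> one device named by alias 'a1' per instance
name, count = parse_alias_request("a1:1")
print(name, count)
```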

After that, I launched the first instance using test_flavor, and it succeeded.

However, when I launched a second instance, also using test_flavor, it failed.
The scheduler.log shows an error saying the device is already in use.
I found that when launching the second instance, nova still assigned the first NIC, which had already been assigned to the first instance, to the second instance. So it failed...

Below is the error log:

2013-12-05 05:21:10.522 6109 ERROR nova.scheduler.filter_scheduler [req-6abe0403-36fe-4d24-9852-9436fbf75331 c7095324d1fe439e8c9f350a71690388 92a596418bac4d3a901f1956fbed4463] [instance: 8dcb7465-080b-4bb6-bcd6-d228ca17e5fd] Error from last host: lng-compute-1 (node lng-compute-1): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1037, in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1410, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u' File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1407, in _spawn\n block_device_info)\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2070, in spawn\n block_device_info, context=context)\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3205, in _create_domain_and_network\n domain = self._create_domain(xml, instance=instance, power_on=power_on)\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3148, in _create_domain\n domain.XMLDesc(0))\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3143, in _create_domain\n domain.createWithFlags(launch_flags)\n', u' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit\n result = proxy_call(self._autowrap, f, *args, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call\n rv = execute(f,*args,**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker\n rv = meth(*args,**kwargs)\n', u' File "/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\n if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', dom=self)\n', u'libvirtError: Requested operation is not valid: PCI device 0000:04:00.0 is in use by domain instance-00000010\n']

In my case, how should I configure things so that the two devices are assigned to the two instances separately?

Question information

Language:
English
Status:
Expired
For:
OpenStack Compute (nova)
Assignee:
No assignee
Revision history for this message
Yi Liu (ryan-yi-liu) said :
#1

BTW, I'm using the Havana release, and the nova version is 2.15.0.

Revision history for this message
gouzongmei (gouzongmei) said :
#2

Hello, I've tested this and it works in my environment. I don't know why PCI device 0000:04:00.0 is being assigned to the second instance; it should be PCI device 0000:04:00.1. I think you need to look at the pci_devices table in the nova DB. After my test, it looks like this in my environment:

mysql> select * from pci_devices;
+---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+--------------------------------------+
| created_at | updated_at | deleted_at | deleted | id | compute_node_id | address | product_id | vendor_id | dev_type | dev_id | label | status | extra_info | instance_uuid |
+---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+--------------------------------------+
| 2014-02-22 06:59:16 | 2014-02-24 09:47:58 | NULL | 0 | 1 | 1 | 0000:02:00.0 | 10c9 | 8086 | type-PCI | pci_0000_02_00_0 | label_8086_10c9 | allocated | {} | 23749835-30bd-43fc-9a37-e21609a30e70 |
| 2014-02-22 06:59:16 | 2014-02-24 09:49:06 | NULL | 0 | 2 | 1 | 0000:02:00.1 | 10c9 | 8086 | type-PCI | pci_0000_02_00_1 | label_8086_10c9 | allocated | {} | 5864a793-e96e-4ffd-bd84-9a3ada2ef9b4 |
| 2014-02-22 06:59:16 | NULL | NULL | 0 | 3 | 1 | 0000:03:00.0 | 10c9 | 8086 | type-PCI | pci_0000_03_00_0 | label_8086_10c9 | available | {} | NULL |
| 2014-02-22 06:59:16 | NULL | NULL | 0 | 4 | 1 | 0000:03:00.1 | 10c9 | 8086 | type-PCI | pci_0000_03_00_1 | label_8086_10c9 | available | {} | NULL |
+---------------------+---------------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+------------+--------------------------------------+
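In other words, the resource tracker should hand each new instance a device whose status is still 'available' and then mark it 'allocated', so the second instance automatically gets the second NIC. A toy illustration of that bookkeeping (assumptions for clarity, not nova's implementation):

```python
# Toy allocator mirroring the pci_devices status column; not nova's code.
devices = [
    {"address": "0000:04:00.0", "status": "available", "instance_uuid": None},
    {"address": "0000:04:00.1", "status": "available", "instance_uuid": None},
]

def allocate(devices, instance_uuid):
    """Claim the first still-available device for an instance."""
    for dev in devices:
        if dev["status"] == "available":
            dev["status"] = "allocated"
            dev["instance_uuid"] = instance_uuid
            return dev["address"]
    raise RuntimeError("no PCI device available")

# With correct tracking, each instance gets a distinct device.
first = allocate(devices, "instance-1")
second = allocate(devices, "instance-2")
print(first, second)
```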

Revision history for this message
Launchpad Janitor (janitor) said :
#3

This question was expired because it remained in the 'Open' state without activity for the last 15 days.