Instance does not boot when attaching a volume

Asked by Alagia Antonio

I have an architecture that uses Cinder to manage volumes. My problem is that when I create an instance and attach a volume, the system fails to boot it and the instance goes into an error state. I enclose the following configuration files:

nova.conf cloud controller:

[DEFAULT]
# LOGS/STATE
verbose=True
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
rootwrap_config=/etc/nova/rootwrap.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# Cinder
volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata,osapi_volume
#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume
osapi_volume_listen_port=5900
# DATABASE
sql_connection=mysql://nova:coritel@192.168.100.224/nova
# COMPUTE
connection_type=libvirt
libvirt_type=kvm
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.100.224
s3_host=192.168.100.224
# RABBITMQ
rabbit_host=192.168.100.224
# GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.100.224:9292
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.100.224
public_interface=eth0
flat_interface=eth1
flat_network_bridge=br100
fixed_range=192.168.4.1/27
network_size=32
flat_network_dhcp_start=192.168.4.33
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.100.224:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.100.224
vncserver_listen=192.168.100.224
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = coritel
signing_dirname = /tmp/keystone-signing-nova
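
A side note on the Cinder section above: the comments warn that no osapi_volume entry should remain anywhere in nova.conf, yet enabled_apis still lists osapi_volume and osapi_volume_listen_port is set. With Cinder serving the volume API, the commonly recommended Folsom-era settings look like this (a sketch of the usual guidance, not a verified fix for this deployment):

```ini
# Hypothetical Cinder section for nova.conf -- osapi_volume dropped so
# nova-api does not try to serve the volume API that cinder-api already owns.
volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata
```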

cinder.conf:

[DEFAULT]
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:coritel@192.168.100.224/cinder
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
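
The iscsiadm login failure shown later in the thread (error 19) often means the tgt daemon on the controller is not exporting the volume's target, or is exporting it on an address the compute node cannot reach. Two things worth checking are that tgt's configuration includes the volumes_dir shown above, and that cinder-volume advertises a portal IP reachable from the compute node; a hedged cinder.conf fragment (these exact values are assumptions for this deployment):

```ini
# Hypothetical additions to cinder.conf on the controller:
# advertise the iSCSI portal on the address compute nodes actually use.
iscsi_ip_address = 192.168.100.224
iscsi_port = 3260
```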

nova.conf compute node:

[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=192.168.100.224
ec2_host=192.168.100.224
ec2_dmz_host=192.168.100.224
rabbit_host=192.168.100.224
cc_host=192.168.100.224
nova_url=http://192.168.100.224:8774/v1.1/
sql_connection=mysql://nova:coritel@192.168.100.224/nova
ec2_url=http://192.168.100.224:8773/services/Cloud
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://192.168.100.224:5000/v2.0/ec2tokens
# Imaging service
glance_api_servers=192.168.100.224:9292
image_service=nova.image.glance.GlanceImageService
# Virt driver
connection_type=libvirt
libvirt_type=kvm
# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://192.168.100.224:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=192.168.100.223
vncserver_listen=192.168.100.223

# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.100.223
public_interface=eth0
flat_interface=eth1
flat_network_bridge=br100
fixed_range=192.168.4.1/27
network_size=32
flat_network_dhcp_start=192.168.4.33
routing_source_ip=192.168.100.224
rootwrap_config=/etc/nova/rootwrap.conf

# Cinder
volume_api_class=nova.volume.cinder.API
enabled_apis=ec2,osapi_compute,metadata,osapi_volume
#MAKE SURE NO ENTRY FOR osapi_volume anywhere in nova.conf!!!
#Leaving out enabled_apis altogether is NOT sufficient, as it defaults to include osapi_volume
osapi_volume_listen_port=5900

Thanks Antonio

Question information

Language:
English
Status:
Answered
For:
OpenStack Compute (nova)
Assignee:
No assignee
Revision history for this message
Yaguang Tang (heut2008) said :
#1

Can you show your error log messages?

Revision history for this message
Alagia Antonio (alagia-antonio90) said :
#3

This is the nova-compute log file:

2013-01-07 11:46:19 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:19 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=6310) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:231
2013-01-07 11:46:19 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous fanout cast... from (pid=6310) fanout_cast /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:384
2013-01-07 11:46:19 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:19 DEBUG nova.manager [-] Skipping ComputeManager._sync_power_states, 7 ticks left until next run from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:167
2013-01-07 11:46:19 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:19 DEBUG nova.manager [-] Running periodic task ComputeManager._instance_usage_audit from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:19 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:19 DEBUG nova.utils [-] Got semaphore "compute_resources" for method "update_available_resource"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:25 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 2391 from (pid=6310) _report_hypervisor_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:447
2013-01-07 11:46:25 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 251 from (pid=6310) _report_hypervisor_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:448
2013-01-07 11:46:25 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 4 from (pid=6310) _report_hypervisor_resource_view /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py:453
2013-01-07 11:46:25 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 3442
2013-01-07 11:46:25 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 274
2013-01-07 11:46:25 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 4
2013-01-07 11:46:25 INFO nova.compute.resource_tracker [-] Compute_service record updated for localadmin
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:25 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 27 ticks left until next run from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:167
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._check_instance_build_time from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:25 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 37 ticks left until next run from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:167
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:25 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=6310) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2711
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:25 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=6310) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:26 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_context_request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'volume', u'name': u'volume'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'internalURL': u'http://192.168.100.224:9292/v1'}], u'type': u'image', u'name': u'glance'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'compute', u'name': u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'internalURL': u'http://192.168.100.224:8773/services/Cloud'}], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'internalURL': u'http://192.168.100.224:5000/v2.0'}], u'type': u'identity', 
u'name': u'keystone'}], u'_context_user_name': u'admin', u'_context_auth_token': '<SANITIZED>', u'args': {u'request_spec': {u'block_device_mapping': [{u'volume_size': u'', u'volume_id': u'11f80196-9554-4a10-923f-75208b8aa794', u'delete_on_termination': u'0', u'device_name': u'vda'}], u'image': {u'status': u'active', u'name': u'fedora', u'deleted': False, u'container_format': u'bare', u'created_at': u'2012-12-03T15:06:50.000000', u'disk_format': u'qcow2', u'updated_at': u'2012-12-19T10:52:52.000000', u'id': u'c4c4a764-4605-4f5c-a2ad-006d7cb0a23c', u'owner': u'376a3c1c9c2546549863e258d7fd9520', u'min_ram': 0, u'checksum': u'755122332caeb9f661d5c978adb8b45f', u'min_disk': 0, u'is_public': True, u'deleted_at': None, u'properties': {}, u'size': 213581824}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'launch_index': 0, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'reservation_id': u'r-07lj2yca', u'root_device_name': None, u'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'display_description': u'antonio', u'key_data': None, u'power_state': 0, u'progress': 0, u'project_id': u'376a3c1c9c2546549863e258d7fd9520', u'config_drive': u'', u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'', u'key_name': None, u'display_name': u'antonio', u'config_drive_id': u'', u'root_gb': 0, u'locked': False, u'launch_time': u'2013-01-07T10:46:26Z', u'memory_mb': 512, u'vcpus': 1, u'image_ref': u'c4c4a764-4605-4f5c-a2ad-006d7cb0a23c', u'architecture': None, u'auto_disk_config': None, u'os_type': None, u'metadata': {}}, u'security_group': [u'default'], 
u'instance_uuids': [u'215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f']}, u'requested_networks': None, u'filter_properties': {u'config_options': {}, u'limits': {u'memory_mb': 5931.0}, u'request_spec': {u'block_device_mapping': [{u'volume_size': u'', u'volume_id': u'11f80196-9554-4a10-923f-75208b8aa794', u'delete_on_termination': u'0', u'device_name': u'vda'}], u'image': {u'status': u'active', u'name': u'fedora', u'deleted': False, u'container_format': u'bare', u'created_at': u'2012-12-03T15:06:50.000000', u'disk_format': u'qcow2', u'updated_at': u'2012-12-19T10:52:52.000000', u'id': u'c4c4a764-4605-4f5c-a2ad-006d7cb0a23c', u'owner': u'376a3c1c9c2546549863e258d7fd9520', u'min_ram': 0, u'checksum': u'755122332caeb9f661d5c978adb8b45f', u'min_disk': 0, u'is_public': True, u'deleted_at': None, u'properties': {}, u'size': 213581824}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'instance_properties': {u'vm_state': u'building', u'availability_zone': None, u'launch_index': 0, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'reservation_id': u'r-07lj2yca', u'root_device_name': None, u'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'display_description': u'antonio', u'key_data': None, u'power_state': 0, u'progress': 0, u'project_id': u'376a3c1c9c2546549863e258d7fd9520', u'config_drive': u'', u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'kernel_id': u'', u'key_name': None, u'display_name': u'antonio', u'config_drive_id': u'', u'root_gb': 0, u'locked': False, u'launch_time': u'2013-01-07T10:46:26Z', u'memory_mb': 512, u'vcpus': 1, u'image_ref': u'c4c4a764-4605-4f5c-a2ad-006d7cb0a23c', u'architecture': None, u'auto_disk_config': None, u'os_type': 
None, u'metadata': {}}, u'security_group': [u'default'], u'instance_uuids': [u'215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f']}, u'instance_type': {u'disabled': False, u'root_gb': 0, u'name': u'm1.tiny', u'flavorid': u'1', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'extra_specs': {}, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'deleted_at': None, u'vcpu_weight': None, u'id': 2}, u'retry': {u'num_attempts': 1, u'hosts': [u'localadmin']}, u'scheduler_hints': {}}, u'instance': {u'vm_state': u'building', u'availability_zone': None, u'terminated_at': None, u'ephemeral_gb': 0, u'instance_type_id': 2, u'user_data': None, u'vm_mode': None, u'deleted_at': None, u'reservation_id': u'r-07lj2yca', u'id': 3, u'security_groups': [{u'deleted_at': None, u'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'name': u'default', u'deleted': False, u'created_at': u'2013-01-07T10:42:29.000000', u'updated_at': None, u'rules': [], u'project_id': u'376a3c1c9c2546549863e258d7fd9520', u'id': 1, u'description': u'default'}], u'disable_terminate': False, u'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'uuid': u'215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f', u'server_name': None, u'default_swap_device': None, u'info_cache': {u'instance_uuid': u'215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f', u'deleted': False, u'created_at': u'2013-01-07T10:46:26.000000', u'updated_at': None, u'network_info': u'[]', u'deleted_at': None, u'id': 3}, u'hostname': u'antonio', u'launched_on': None, u'display_description': u'antonio', u'key_data': None, u'kernel_id': u'', u'power_state': 0, u'default_ephemeral_device': None, u'progress': 0, u'project_id': u'376a3c1c9c2546549863e258d7fd9520', u'launched_at': None, u'scheduled_at': u'2013-01-07T10:46:26.342435', u'ramdisk_id': u'', u'access_ip_v6': None, u'access_ip_v4': None, u'deleted': False, u'key_name': None, u'updated_at': u'2013-01-07T10:46:26.397424', u'host': u'localadmin', u'display_name': u'antonio', 
u'task_state': u'scheduling', u'shutdown_terminate': False, u'architecture': None, u'root_gb': 0, u'locked': False, u'name': u'instance-00000003', u'created_at': u'2013-01-07T10:46:26.000000', u'launch_index': 0, u'metadata': [], u'memory_mb': 512, u'instance_type': {u'disabled': False, u'root_gb': 0, u'deleted_at': None, u'name': u'm1.tiny', u'deleted': False, u'created_at': None, u'ephemeral_gb': 0, u'updated_at': None, u'memory_mb': 512, u'vcpus': 1, u'swap': 0, u'rxtx_factor': 1.0, u'is_public': True, u'flavorid': u'1', u'vcpu_weight': None, u'id': 2}, u'vcpus': 1, u'image_ref': u'c4c4a764-4605-4f5c-a2ad-006d7cb0a23c', u'root_device_name': None, u'auto_disk_config': None, u'os_type': None, u'config_drive': u''}, u'admin_password': '<SANITIZED>', u'injected_files': [], u'is_first_time': True}, u'_context_instance_lock_checked': False, u'_context_is_admin': True, u'version': u'2.0', u'_context_project_id': u'376a3c1c9c2546549863e258d7fd9520', u'_context_timestamp': u'2013-01-07T10:46:25.369452', u'_context_read_deleted': u'no', u'_context_user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'method': u'run_instance', u'_context_remote_address': u'192.168.100.224'} from (pid=6310) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:26 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': u'admin', 'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', 'roles': [u'admin', u'Member'], 'timestamp': u'2013-01-07T10:46:25.369452', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.100.224', 'quota_class': None, 'is_admin': True, 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'volume', u'name': u'volume'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'publicURL': u'http://192.168.100.224:9292/v1'}], u'endpoints_links': [], u'type': u'image', u'name': u'glance'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'compute', u'name': u'nova'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'publicURL': u'http://192.168.100.224:8773/services/Cloud'}], u'endpoints_links': [], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'publicURL': 
u'http://192.168.100.224:5000/v2.0'}], u'endpoints_links': [], u'type': u'identity', u'name': u'keystone'}], 'request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', 'instance_lock_checked': False, 'project_id': u'376a3c1c9c2546549863e258d7fd9520', 'user_name': u'admin', 'read_deleted': u'no'} from (pid=6310) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:26 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f" for method "do_run_instance"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:27 AUDIT nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Starting instance...
2013-01-07 11:46:27 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:27 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:27 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network ... from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2013-01-07 11:46:27 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 4f42864699cd431ea957b7c49e9ef011 from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2013-01-07 11:46:28 DEBUG nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Instance network_info: |[VIF({'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 'address': u'192.168.4.34'})], 'version': 4, 'meta': {u'dhcp_server': u'192.168.4.33'}, 'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 'routes': [], 'cidr': u'192.168.4.32/27', 'gateway': IP({'meta': {}, 'version': 4, 'type': u'gateway', 'address': u'192.168.4.33'})}), Subnet({'ips': [], 'version': None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, u'bridge_interface': u'eth1'}, 'id': u'63d6b959-8d54-44ca-9319-944546462a72', 'label': u'private'}), 'meta': {}, 'id': u'3844d4f0-3d11-4606-ae71-187f868ef924', 'address': u'fa:16:3e:56:53:90'})]| from (pid=6310) _allocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:715
2013-01-07 11:46:28 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "begin_resource_claim"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Total memory: 3954 MB, used: 512 MB, free: 0 MB
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Memory limit: 5931 MB, free: 5419 MB
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Total disk: 274 GB, used: 0 GB, free: 274 GB
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Disk limit not specified, defaulting to unlimited
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Total VCPUs: 4, used: 0
2013-01-07 11:46:28 AUDIT nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] VCPU limit not specified, defaulting to unlimited
2013-01-07 11:46:28 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:28 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:28 DEBUG nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Setting up bdm <nova.db.sqlalchemy.models.BlockDeviceMapping object at 0x4050d10> from (pid=6310) _setup_block_device_mapping /usr/lib/python2.7/dist-packages/nova/compute/manager.py:407
2013-01-07 11:46:28 AUDIT nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Booting with volume 11f80196-9554-4a10-923f-75208b8aa794 at vda
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on volume.localadmin ... from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is bf3de0e109f04860a96793eba81c85cb from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on volume.localadmin ... from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 4a327919b126453e8ab1001127c60b7a from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:29 DEBUG nova.virt.libvirt.driver [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Starting toXML method from (pid=6310) to_xml /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:1818
2013-01-07 11:46:29 DEBUG nova.virt.libvirt.driver [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] CPU mode 'host-model' model '' was chosen from (pid=6310) get_guest_cpu_config /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:1509
2013-01-07 11:46:29 DEBUG nova.virt.libvirt.driver [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] block_device_list [u'vda'] from (pid=6310) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:1447
2013-01-07 11:46:29 DEBUG nova.virt.libvirt.driver [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] block_device_list [u'vda'] from (pid=6310) _volume_in_mapping /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:1447
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "connect_volume" for method "connect_volume"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 255 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --op new from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 0 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:29 DEBUG nova.virt.libvirt.volume [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] iscsiadm ('--op', 'new'): stdout=New iSCSI node [tcp:[hw=,ip=,net_if=,iscsi_if=default] 192.168.100.224,3260,-1 iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794] added
 stderr= from (pid=6310) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:117
2013-01-07 11:46:29 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --login from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 255 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:30 DEBUG nova.virt.libvirt.volume [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] iscsiadm ('--login',): stdout=Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794, portal: 192.168.100.224,3260]
 stderr=iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794, portal: 192.168.100.224,3260]:
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
 from (pid=6310) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:117
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --op update -n node.startup -v automatic from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 0 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:30 DEBUG nova.virt.libvirt.volume [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'automatic'): stdout= stderr= from (pid=6310) _run_iscsiadm /usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py:117
2013-01-07 11:46:30 WARNING nova.virt.libvirt.volume [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] ISCSI volume not yet found at: vda. Will rescan & retry. Try number: 0
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --rescan from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 255 from (pid=6310) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:30 ERROR nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Instance failed to spawn
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Traceback (most recent call last):
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 743, in _spawn
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] block_device_info)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] temp_level, payload)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] self.gen.next()
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] return f(*args, **kw)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1056, in spawn
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] block_device_info=block_device_info)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1820, in to_xml
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] rescue, block_device_info)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1774, in get_guest_config
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] root_device):
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1642, in get_guest_storage_config
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] mount_device)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 627, in volume_driver_method
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] return method(connection_info, *args, **kwargs)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 752, in inner
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] retval = f(*args, **kwargs)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 181, in connect_volume
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] self._run_iscsiadm(iscsi_properties, ("--rescan",))
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 115, in _run_iscsiadm
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] check_exit_code=check_exit_code)
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 198, in execute
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] cmd=' '.join(cmd))
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] ProcessExecutionError: Unexpected error while running command.
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --rescan
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Exit code: 255
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Stdout: ''
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Stderr: 'iscsiadm: No portal found.\n'
2013-01-07 11:46:30 TRACE nova.compute.manager [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f]
2013-01-07 11:46:30 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "abort_resource_claim"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:30 INFO nova.compute.resource_tracker [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Aborting claim: [Claim 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f: 512 MB memory, 0 GB disk, 1 VCPUS]
2013-01-07 11:46:30 DEBUG nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Deallocating network for instance from (pid=6310) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:769
2013-01-07 11:46:30 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network ... from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2013-01-07 11:46:30 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 57b202afae66486ab308faa420c27acb from (pid=6310) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2013-01-07 11:46:31 DEBUG nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Re-scheduling instance: attempt 1 from (pid=6310) _reschedule /usr/lib/python2.7/dist-packages/nova/compute/manager.py:575
2013-01-07 11:46:31 DEBUG nova.utils [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Got semaphore "compute_resources" for method "update_usage"... from (pid=6310) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2013-01-07 11:46:31 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous cast on scheduler... from (pid=6310) cast /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:376
2013-01-07 11:46:31 ERROR nova.compute.manager [req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] [instance: 215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 496, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 743, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1056, in spawn\n block_device_info=block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1820, in to_xml\n rescue, block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1774, in get_guest_config\n root_device):\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1642, in get_guest_storage_config\n mount_device)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 627, in volume_driver_method\n return method(connection_info, *args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 752, in inner\n retval = f(*args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 181, in connect_volume\n self._run_iscsiadm(iscsi_properties, ("--rescan",))\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/volume.py", line 115, in _run_iscsiadm\n check_exit_code=check_exit_code)\n', ' File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 198, in execute\n cmd=\' \'.join(cmd))\n', 
"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf iscsiadm -m node -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 -p 192.168.100.224:3260 --rescan\nExit code: 255\nStdout: ''\nStderr: 'iscsiadm: No portal found.\\n'\n"]
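The failure is `iscsiadm: No portal found.` on the rescan, right after a non-retryable login failure (error 19). The same steps can be reproduced by hand on the compute host to see whether the portal is reachable at all; this is only a diagnostic sketch, using the IQN and portal address copied from the log above (adjust them if yours differ):

```shell
# Discover targets advertised by the portal; if this fails, tgtd on
# 192.168.100.224 is unreachable or is not exporting the volume.
sudo iscsiadm -m discovery -t sendtargets -p 192.168.100.224:3260

# Repeat the login step that Nova performs (IQN taken from the log above).
sudo iscsiadm -m node \
  -T iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 \
  -p 192.168.100.224:3260 --login

# List active sessions; --rescan only works once a session exists, which is
# why the failed login leads to "No portal found" on the rescan.
sudo iscsiadm -m session
```

In this log the `--login` already fails, so the later `--rescan` has no session or portal to act on.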

I also tried replacing Cinder with nova-volume, but the result is the same.
This is the nova-volume log:

2013-01-07 11:46:08 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_context_request_id': u'req-77846921-4fbd-4667-8d0f-4fc7613bfa6a', u'_context_quota_class': None, u'_context_project_name': u'admin', u'_context_service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'id': u'd954c6d484144050aaeba6d9904a0096', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'volume', u'name': u'volume'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'id': u'dab53ecdac144f4080e8182690daedd9', u'internalURL': u'http://192.168.100.224:9292/v1', u'publicURL': u'http://192.168.100.224:9292/v1'}], u'endpoints_links': [], u'type': u'image', u'name': u'glance'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'id': u'3a6436f55174480b918da20026b131e8', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'compute', u'name': u'nova'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'internalURL': u'http://192.168.100.224:8773/services/Cloud', u'publicURL': u'http://192.168.100.224:8773/services/Cloud'}], u'endpoints_links': [], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'internalURL': u'http://192.168.100.224:5000/v2.0', u'publicURL': u'http://192.168.100.224:5000/v2.0'}], u'endpoints_links': [], u'type': u'identity', 
u'name': u'keystone'}], u'_context_user_name': u'admin', u'_context_auth_token': '<SANITIZED>', u'args': {u'image_id': None, u'volume_id': u'11f80196-9554-4a10-923f-75208b8aa794', u'snapshot_id': None}, u'_context_instance_lock_checked': False, u'_context_is_admin': True, u'_context_project_id': u'376a3c1c9c2546549863e258d7fd9520', u'_context_timestamp': u'2013-01-07T10:46:08.237453', u'_context_read_deleted': u'no', u'_context_user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'method': u'create_volume', u'_context_remote_address': u'192.168.100.224'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:08 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': u'admin', 'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', 'roles': [u'admin', u'Member'], 'timestamp': u'2013-01-07T10:46:08.237453', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.100.224', 'quota_class': None, 'is_admin': True, 'service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'volume', u'name': u'volume'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'internalURL': u'http://192.168.100.224:9292/v1'}], u'type': u'image', u'name': u'glance'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'compute', u'name': u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'internalURL': u'http://192.168.100.224:8773/services/Cloud'}], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', 
u'internalURL': u'http://192.168.100.224:5000/v2.0'}], u'type': u'identity', u'name': u'keystone'}], 'request_id': u'req-77846921-4fbd-4667-8d0f-4fc7613bfa6a', 'instance_lock_checked': False, 'project_id': u'376a3c1c9c2546549863e258d7fd9520', 'user_name': u'admin', 'read_deleted': u'no'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:08 INFO nova.volume.manager [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] volume volume-11f80196-9554-4a10-923f-75208b8aa794: creating
2013-01-07 11:46:08 DEBUG nova.volume.manager [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] volume volume-11f80196-9554-4a10-923f-75208b8aa794: creating lv of size 1G from (pid=6391) create_volume /usr/lib/python2.7/dist-packages/nova/volume/manager.py:137
2013-01-07 11:46:08 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf lvcreate -L 1G -n volume-11f80196-9554-4a10-923f-75208b8aa794 nova-volumes from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:08 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 0 from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:08 DEBUG nova.volume.manager [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] volume volume-11f80196-9554-4a10-923f-75208b8aa794: creating export from (pid=6391) create_volume /usr/lib/python2.7/dist-packages/nova/volume/manager.py:159
2013-01-07 11:46:08 INFO nova.volume.iscsi [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Creating volume: volume-11f80196-9554-4a10-923f-75208b8aa794
2013-01-07 11:46:08 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf tgt-admin --update iqn.2010-10.org.openstack:volume-11f80196-9554-4a10-923f-75208b8aa794 from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:09 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 0 from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:09 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf tgt-admin --show from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176
2013-01-07 11:46:09 DEBUG nova.utils [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Result was 0 from (pid=6391) execute /usr/lib/python2.7/dist-packages/nova/utils.py:191
2013-01-07 11:46:09 DEBUG nova.volume.manager [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] volume volume-11f80196-9554-4a10-923f-75208b8aa794: created successfully from (pid=6391) create_volume /usr/lib/python2.7/dist-packages/nova/volume/manager.py:172
2013-01-07 11:46:09 INFO nova.volume.manager [req-77846921-4fbd-4667-8d0f-4fc7613bfa6a 5cc47236cd9d4a0390129587fe3d43b4 376a3c1c9c2546549863e258d7fd9520] Clear capabilities
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_msg_id': u'bf3de0e109f04860a96793eba81c85cb', u'_context_quota_class': None, u'_context_request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'volume', u'name': u'volume'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'internalURL': u'http://192.168.100.224:9292/v1'}], u'type': u'image', u'name': u'glance'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'compute', u'name': u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'internalURL': u'http://192.168.100.224:8773/services/Cloud'}], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'internalURL': u'http://192.168.100.224:5000/v2.0'}], u'type': 
u'identity', u'name': u'keystone'}], u'_context_user_name': u'admin', u'_context_auth_token': '<SANITIZED>', u'args': {u'connector': {u'ip': u'192.168.100.224', u'initiator': u'iqn.1993-08.org.debian:01:61ede93ee82c', u'host': u'localadmin'}, u'volume_id': u'11f80196-9554-4a10-923f-75208b8aa794'}, u'_context_instance_lock_checked': False, u'_context_project_name': u'admin', u'_context_is_admin': True, u'_context_project_id': u'376a3c1c9c2546549863e258d7fd9520', u'_context_timestamp': u'2013-01-07T10:46:25.369452', u'_context_read_deleted': u'no', u'_context_user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'method': u'initialize_connection', u'_context_remote_address': u'192.168.100.224'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': u'admin', 'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', 'roles': [u'admin', u'Member'], 'timestamp': u'2013-01-07T10:46:25.369452', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.100.224', 'quota_class': None, 'is_admin': True, 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'volume', u'name': u'volume'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'publicURL': u'http://192.168.100.224:9292/v1'}], u'endpoints_links': [], u'type': u'image', u'name': u'glance'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'compute', u'name': u'nova'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'publicURL': u'http://192.168.100.224:8773/services/Cloud'}], u'endpoints_links': [], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'publicURL': 
u'http://192.168.100.224:5000/v2.0'}], u'endpoints_links': [], u'type': u'identity', u'name': u'keystone'}], 'request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', 'instance_lock_checked': False, 'project_id': u'376a3c1c9c2546549863e258d7fd9520', 'user_name': u'admin', 'read_deleted': u'no'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [u'admin', u'Member'], u'_msg_id': u'4a327919b126453e8ab1001127c60b7a', u'_context_quota_class': None, u'_context_request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', u'_context_service_catalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'volume', u'name': u'volume'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'internalURL': u'http://192.168.100.224:9292/v1'}], u'type': u'image', u'name': u'glance'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'type': u'compute', u'name': u'nova'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'internalURL': u'http://192.168.100.224:8773/services/Cloud'}], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'publicURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'internalURL': u'http://192.168.100.224:5000/v2.0'}], u'type': 
u'identity', u'name': u'keystone'}], u'_context_user_name': u'admin', u'_context_auth_token': '<SANITIZED>', u'args': {u'instance_uuid': u'215ced6b-3b8b-493c-bfe1-3f0a0c2aaf7f', u'mountpoint': u'vda', u'volume_id': u'11f80196-9554-4a10-923f-75208b8aa794'}, u'_context_instance_lock_checked': False, u'_context_project_name': u'admin', u'_context_is_admin': True, u'_context_project_id': u'376a3c1c9c2546549863e258d7fd9520', u'_context_timestamp': u'2013-01-07T10:46:25.369452', u'_context_read_deleted': u'no', u'_context_user_id': u'5cc47236cd9d4a0390129587fe3d43b4', u'method': u'attach_volume', u'_context_remote_address': u'192.168.100.224'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:28 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': u'admin', 'user_id': u'5cc47236cd9d4a0390129587fe3d43b4', 'roles': [u'admin', u'Member'], 'timestamp': u'2013-01-07T10:46:25.369452', 'auth_token': '<SANITIZED>', 'remote_address': u'192.168.100.224', 'quota_class': None, 'is_admin': True, 'service_catalog': [{u'endpoints': [{u'adminURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520', u'id': u'd954c6d484144050aaeba6d9904a0096', u'publicURL': u'http://192.168.100.224:8776/v1/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'volume', u'name': u'volume'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:9292/v1', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:9292/v1', u'id': u'dab53ecdac144f4080e8182690daedd9', u'publicURL': u'http://192.168.100.224:9292/v1'}], u'endpoints_links': [], u'type': u'image', u'name': u'glance'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520', u'id': u'3a6436f55174480b918da20026b131e8', u'publicURL': u'http://192.168.100.224:8774/v2/376a3c1c9c2546549863e258d7fd9520'}], u'endpoints_links': [], u'type': u'compute', u'name': u'nova'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:8773/services/Admin', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:8773/services/Cloud', u'id': u'2b7b9bfd8dce47aebad363934c70c27b', u'publicURL': u'http://192.168.100.224:8773/services/Cloud'}], u'endpoints_links': [], u'type': u'ec2', u'name': u'ec2'}, {u'endpoints': [{u'adminURL': u'http://192.168.100.224:35357/v2.0', u'region': u'myregion', u'internalURL': u'http://192.168.100.224:5000/v2.0', u'id': u'9897b5d634a145678bb86b76bd8d44c2', u'publicURL': 
u'http://192.168.100.224:5000/v2.0'}], u'endpoints_links': [], u'type': u'identity', u'name': u'keystone'}], 'request_id': u'req-2c3243b6-4edc-4abd-903f-6c1ab1c26bcd', 'instance_lock_checked': False, 'project_id': u'376a3c1c9c2546549863e258d7fd9520', 'user_name': u'admin', 'read_deleted': u'no'} from (pid=6391) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2013-01-07 11:46:31 DEBUG nova.manager [-] Running periodic task VolumeManager._publish_service_capabilities from (pid=6391) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
2013-01-07 11:46:31 DEBUG nova.manager [-] Running periodic task VolumeManager._report_driver_status from (pid=6391) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:172
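The nova-volume log reports the export as successful, so on the volume host it is worth double-checking that tgtd is actually serving the target and listening on port 3260. A diagnostic sketch (service and tool names may differ slightly by distribution):

```shell
# Confirm the target and its backing LUN are present in tgtd.
sudo tgt-admin --show

# Confirm something is listening on the iSCSI port.
sudo netstat -tlnp | grep 3260

# If the target is missing, re-apply the target definitions and restart tgt.
sudo tgt-admin --update ALL
sudo service tgt restart
```

If `tgt-admin --show` lists the target but the login still fails from the compute host, a firewall or iptables rule blocking TCP 3260 between the hosts would also explain the non-retryable login failure.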

Can you help with this problem?
