Keystone resulting in slow MySQL queries
I used this install guide: https:/
1 controller/network node and 4 compute nodes, all running Ubuntu 13.04 Server 64-bit. I have Keystone running on the controller with a local MySQL database (15K SAS drives).
As you can see below, mysqld is constantly pegged at around 100% CPU utilization, and any API call that needs Keystone authentication triggers this.
10655 mysql 20 0 3970m 554m 7740 S 101.7 3.5 75:53.22 /usr/sbin/mysqld
API calls take 5-8 seconds to return. A simple nova list takes 5 seconds. I have checked the Keystone logs and nothing is being logged there. I enabled slow query logging in my.cnf and saw the following:
=======
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:32' AND token.valid = 0;
# Time: 130516 9:59:39
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.804186 Lock_time: 0.000048 Rows_sent: 3 Rows_examined: 117374
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:36' AND token.valid = 0;
# Time: 130516 9:59:43
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.800959 Lock_time: 0.000047 Rows_sent: 3 Rows_examined: 117375
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:39' AND token.valid = 0;
# Time: 130516 9:59:47
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.810399 Lock_time: 0.000048 Rows_sent: 3 Rows_examined: 117377
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:43' AND token.valid = 0;
# Time: 130516 9:59:51
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.871686 Lock_time: 0.000042 Rows_sent: 3 Rows_examined: 117377
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:47' AND token.valid = 0;
# Time: 130516 9:59:55
# User@Host: keystone[keystone] @ [10.70.0.1]
# Query_time: 3.928294 Lock_time: 0.000050 Rows_sent: 3 Rows_examined: 117379
SET timestamp=
SELECT token.id AS token_id, token.expires AS token_expires, token.extra AS token_extra, token.valid AS token_valid, token.user_id AS token_user_id, token.trust_id AS token_trust_id
FROM token
WHERE token.expires > '2013-05-16 13:59:51' AND token.valid = 0;
=======
As you can see, the same query runs over and over, taking about 4 seconds each time and examining roughly 117,000 rows to return just 3. This is not a large OpenStack install at all: I have 3 projects with fewer than 20 instances total.
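In case it points anyone in the right direction: the Rows_examined figure above (~117,000 rows scanned to return 3) suggests the token table has filled up with expired rows and the query is doing a full table scan. A rough way to check and work around this, as raw SQL against the keystone database (table and column names are taken from the slow query above; back up before deleting anything):

```sql
-- Total rows in the token table (expired tokens do not appear
-- to be purged automatically in this release)
SELECT COUNT(*) FROM token;

-- See whether the slow query can use an index or has to scan the whole table
EXPLAIN SELECT id FROM token
WHERE expires > NOW() AND valid = 0;

-- Possible workaround: prune expired tokens periodically (e.g. from cron)
DELETE FROM token WHERE expires < NOW();
```

If the DELETE brings the query time back down, the root cause is simply unbounded token-table growth rather than a MySQL misconfiguration.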
Please help me determine whether this is a bug or a configuration issue.
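For completeness, the slow query log excerpt above was captured with settings along these lines in my.cnf (the log file path is an assumption; adjust to your layout):

```ini
[mysqld]
# Log any statement slower than one second
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1
```

These option names are valid for the MySQL 5.5 that ships with Ubuntu 13.04; older releases used log_slow_queries instead.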
=======
keystone.conf
[DEFAULT]
log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile)
log_dir = /var/log/keystone
[sql]
connection = mysql:/
# the timeout before idle sql connections are reaped
idle_timeout = 200
[identity]
driver = keystone.
[trust]
driver = keystone.
[catalog]
driver = keystone.
[token]
driver = keystone.
[policy]
driver = keystone.
[ec2]
driver = keystone.
[ssl]
[signing]
[ldap]
[auth]
methods = password,token
password = keystone.
token = keystone.
[filter:debug]
paste.filter_
[filter:token_auth]
paste.filter_
[filter:
paste.filter_
[filter:xml_body]
paste.filter_
[filter:json_body]
paste.filter_
[filter:
paste.filter_
[filter:
paste.filter_
[filter:
paste.filter_
[filter:
paste.filter_
[filter:
paste.filter_
[filter:sizelimit]
paste.filter_
[filter:
paste.filter_
[filter:
paste.filter_
[filter:access_log]
paste.filter_
[app:public_
paste.app_factory = keystone.
[app:service_v3]
paste.app_factory = keystone.
[app:admin_service]
paste.app_factory = keystone.
[pipeline:
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service
[pipeline:
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service
[pipeline:api_v3]
pipeline = access_log sizelimit stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension service_v3
[app:public_
paste.app_factory = keystone.
[app:admin_
paste.app_factory = keystone.
[pipeline:
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body public_
[pipeline:
pipeline = access_log sizelimit stats_monitoring url_normalize xml_body admin_version_
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
=======
Also note that I have an Essex environment up and running on similar hardware with no MySQL performance issues.
Thanks in advance,
Adam
Question information
- Language: English
- Status: Solved
- Solved by: Adam