Savanna - Failed to create database

Asked by Akshay Thapa

Hi,

I have OpenStack Grizzly set up with a controller node and two compute nodes. All systems run Ubuntu 12.04 64-bit.

I followed this guide: https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst#4-compute-node

Now I am trying to integrate Savanna 0.3 into my setup, and I used the following installation guide: https://savanna.readthedocs.org/en/latest/userdoc/installation.guide.html

I have set up Savanna in a virtual environment.

Now when I try to run Savanna with the following command, it fails to create the database:
$ savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
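
For context, the install itself boiled down to roughly the following commands from the guide; the sample-config path is from memory and may differ in your version, so treat this as a sketch rather than the exact steps:

$ virtualenv savanna-venv
$ savanna-venv/bin/pip install savanna
$ mkdir savanna-venv/etc
# sample config location is an assumption; check your venv's share dir
$ cp savanna-venv/share/savanna/savanna.conf.sample savanna-venv/etc/savanna.conf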

The following are excerpts from the log.

2014-02-24 17:32:35.697 18858 ERROR savanna.db.sqlalchemy.api [-] Database registration exception: (OperationalError) unable to open database file None None
....
2014-02-24 17:32:35.708 18858 CRITICAL savanna [-] Failed to create database!
....
2014-02-24 17:32:35.708 18858 TRACE savanna raise RuntimeError('Failed to create database!')
2014-02-24 17:32:35.708 18858 TRACE savanna RuntimeError: Failed to create database!

Here is my savanna.conf configuration file:

[DEFAULT]

# Hostname or IP address that will be used to listen on
# (string value)
#host=

# Port that will be used to listen on (integer value)
port=8386

# Address and credentials that will be used to check auth tokens
os_auth_host=10.204.142.58
os_auth_port=35357
os_admin_username=admin
os_admin_password=password
os_admin_tenant_name=service

# If set to True, Savanna will use floating IPs to communicate
# with instances. To make sure that all instances have
# floating IPs assigned in Nova Network set
# "auto_assign_floating_ip=True" in nova.conf.If Neutron is
# used for networking, make sure that all Node Groups have
# "floating_ip_pool" parameter defined. (boolean value)
use_floating_ips=false

# Use Neutron or Nova Network (boolean value)
use_neutron=true

# Maximum length of job binary data in kilobytes that may be
# stored or retrieved in a single operation (integer value)
#job_binary_max_KB=5120

# Postfix for storing jobs in hdfs. Will be added to
# /user/hadoop/ (string value)
#job_workflow_postfix=

# Enables Savanna to use Keystone API v3. If that flag is
# disabled, per-job clusters will not be terminated
# automatically. (boolean value)
use_identity_api_v3=true

# enable periodic tasks (boolean value)
#periodic_enable=true

# Enables data locality for hadoop cluster.
# Also enables data locality for Swift used by hadoop.
# If enabled, 'compute_topology' and 'swift_topology'
# configuration parameters should point to OpenStack and Swift
# topology correspondingly. (boolean value)
enable_data_locality=false

# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
debug=true

# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
verbose=true

# Log output to standard error (boolean value)
use_stderr=true

# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
log_file=savanna.log

# (Optional) The base directory used for relative --log-file
# paths (string value)
log_dir=/var/log/savanna/

# Use syslog for logging. (boolean value)
use_syslog=true

# syslog facility to receive log lines (string value)
#syslog_log_facility=LOG_USER

# List of plugins to be loaded. Savanna preserves the order of
# the list when returning it. (list value)
plugins=vanilla,hdp

[plugin:vanilla]
plugin_class=savanna.plugins.vanilla.plugin:VanillaProvider

[plugin:hdp]
plugin_class=savanna.plugins.hdp.ambariplugin:AmbariPlugin

[database]
connection=sqlite:////savanna/openstack/common/db/$sqlite_db
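
As I understand it, everything after 'sqlite://' in an SQLAlchemy SQLite URL is a file path, so a fourth slash makes the path absolute; the value above therefore points somewhere under /savanna/openstack/common/db/, which may not exist on my host. For comparison, the two forms look like:

# relative path, resolved against the process's working directory
connection=sqlite:///savanna.db
# absolute path (note the fourth slash)
connection=sqlite:////tmp/savanna-server.db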

Can someone please guide me on this? Thanks a lot.

Question information

Language: English
Status: Solved
For: Sahara
Assignee: No assignee
Solved by: Akshay Thapa
Dmitry Mescheryakov (dmitrymex) said (#1):

Hello Akshay, try changing the 'connection' parameter in your config to
connection=sqlite:////tmp/savanna-server.db

It could be that the default location is not accessible to the user you run Savanna as.
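
If you want to confirm that the original location is the problem, try creating a file there by hand as the same user that runs savanna-api; if the first command fails while the second succeeds, it is a path/permissions issue:

$ touch /savanna/openstack/common/db/test.db
$ touch /tmp/test.db
$ rm -f /tmp/test.db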

Akshay Thapa (akshay-thapa23) said (#2):

Hi Dmitry,
Thanks for your help. I set the connection parameter to:
connection=sqlite:////tmp/savanna-server.db

Now when I try to run Savanna, the following gets generated in the logs. If I try to access Savanna through Horizon, I get this error:
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If that doesn't help, contact your local administrator.

The logs are as follows:

2014-02-25 13:01:18.431 24139 WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint

2014-02-25 13:03:26.598 24139 INFO keystoneclient.middleware.auth_token [-] Auth Token confirmed use of v2.0 apis
2014-02-25 13:03:26.661 24139 WARNING keystoneclient.middleware.auth_token [-] Unexpected response from keystone service: {u'error': {u'message': u'The request you have made requires authentication.', u'code': 401, u'title': u'Not Authorized'}}
2014-02-25 13:03:26.661 24139 DEBUG keystoneclient.middleware.auth_token [-] Token validation failure. _validate_user_token /home/akshay/savanna-venv/local/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py:848

2014-02-25 13:03:26.718 24139 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token d591ef09fd63030156616c0907b994f4

I'm sure I've missed some key step.
I've got OpenStack up and running and I'm able to create VMs. Then I installed Savanna 0.3 using the following guide: https://savanna.readthedocs.org/en/latest/userdoc/installation.guide.html

Is there anything else that I'm supposed to do? Something that I might have missed?
Thanks a lot again.

Dmitry Mescheryakov (dmitrymex) said (#3):

Akshay, check that the 'admin' user has the admin role in the 'service' tenant. It is a requirement that os_admin_username have the admin role in os_admin_tenant_name.
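
You can check this with the keystone CLI; exact option names vary a bit between client versions, so take these as an example rather than the precise incantation:

$ keystone --os-username admin --os-password password --os-tenant-name service --os-auth-url http://10.204.142.58:35357/v2.0 user-role-list

If the admin role is not in the output, something like this should add it:

$ keystone user-role-add --user admin --role admin --tenant service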

Sergey Lukjanov (slukjanov) said (#4):
Akshay Thapa (akshay-thapa23) said (#5):

Hey Dmitry,
Thanks for taking the time to help me out.
I couldn't get it to work, so I manually set up my Hadoop cluster on top of OpenStack.

Thanks,
Akshay