confused by the Cloud Controller and the Compute Node

Asked by herry

After reading http://docs.openstack.org/openstack-compute/admin/content/ch03s02.html#d5e194 about installing OpenStack Compute on multiple servers, I have some questions:
1. Is the only difference between the Cloud Controller Node and the Compute Node that the Cloud Controller runs a few extra applications such as MySQL, the EC2 tools, and RabbitMQ? I ask because I found that all the nova services are started at the end. Also, where is the auth manager?
2. If nova-api is installed on both the Cloud Controller and the Compute Node, which one should users connect to?
3. If nova-scheduler is installed on both nodes, which one gets used? And are there other schedulers to choose from besides the default?

Question information

Language:
English
Status:
Solved
For:
OpenStack Compute (nova)
Assignee:
No assignee
Solved by:
Sandy Walsh
Revision history for this message
Marc (nerens) said :
#1

I'm currently asking the same questions as you. We are setting up one box as the cloud controller, which runs the PostgreSQL database and RabbitMQ, and I presume the auth controller and nova-network as well; this box will not be a compute node. The other server is a compute node, which runs nova-compute and nova-api, and eventually there will be many more of these. I'm not sure where nova-scheduler is supposed to run either. Can someone please clarify which nova services go on which hosts?

Revision history for this message
Sandy Walsh (sandy-walsh) said :
#2

Hi guys,

You can run any service on any box you wish. The one exception is that XenServer currently requires the Compute service to run in a DomU on the host, but that may change.

Essentially all calls come into Nova via the API service. From there all calls are routed to the various services via RabbitMQ. So, it doesn't really matter where they run, so long as they have access to the RabbitMQ cluster.
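To make that concrete, here is a rough sketch of the cast pattern. This is not Nova's actual rpc code; the pika client, host name, queue name and message layout are purely illustrative:

    import json
    import pika  # generic AMQP client, used here only for illustration

    # Connect to the RabbitMQ cluster (host name is made up).
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit-host"))
    channel = connection.channel()
    channel.queue_declare(queue="scheduler")  # queue name is made up too

    # Cast a request onto the queue; whichever worker is listening picks it up.
    request = {"method": "run_instance", "args": {"instance_id": "i-0001"}}
    channel.basic_publish(exchange="", routing_key="scheduler", body=json.dumps(request))
    connection.close()

The API node never needs to know where the worker lives, only how to reach RabbitMQ.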

For development we often run all services on a single box (again, with the Compute/XenServer caveat above).

Network, Volume & Scheduler(s) are also part of this equation. You can stand up as many workers as you need to service the load. If the box has enough cores/CPUs you can run more than one worker on a single box. For testing the scheduler I'll often stand up several instances of that service on my dev machine. RabbitMQ will round-robin the requests to the appropriate worker.
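The worker side of that same sketch, under the same assumptions, looks roughly like this. Start two or more copies of it and RabbitMQ will round-robin requests between them, which is all that running multiple schedulers amounts to:

    import json
    import pika

    def handle(channel, method, properties, body):
        # Process one request from the shared queue, then acknowledge it.
        request = json.loads(body)
        print("worker handling:", request["method"])
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit-host"))
    channel = connection.channel()
    channel.queue_declare(queue="scheduler")
    # Several processes consuming the same queue get round-robined by RabbitMQ.
    channel.basic_consume(queue="scheduler", on_message_callback=handle)
    channel.start_consuming()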

Hope it helps!

Revision history for this message
herry (gongsiping) said :
#3

Thanks, Walsh.
I understand that we can run any service on any box, but I have some follow-up questions:
1. Every request from outside hits the API service first. If we run two API services, do we need a load balancer and DNS to make the API service highly available?
2. If two schedulers are running: as I understand it, a message from the API service (such as "run an instance") is cast to the scheduler queue, but how does the API service know which scheduler queue the message should be sent to? My guess is that it queries the metadata in the database about which scheduler services are in the running state and then picks one of them?

Revision history for this message
Best Sandy Walsh (sandy-walsh) said :
#4

1. Correct, handling the load-balancing and HA of the API service is an exercise external to OpenStack Nova.

2. The Scheduler has different drivers that you can select with the --scheduler-driver flag. The default is the ChanceScheduler, which doesn't do much: it sends the request to a host picked at random. We are working on more schedulers (with Server-Best-Match and weighting options) in this blueprint:

https://blueprints.launchpad.net/nova/+spec/distributed-scheduler

This is slated for Diablo currently.

I should also mention that what we are really doing in the scheduler is deciding which Compute or Volume node to send the request to. If more than one Scheduler node is running, they are simply round-robined thanks to RabbitMQ. *Where* the scheduler redirects the request to is the focus of the Distributed Scheduler effort.
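To give a feel for the default driver, here is a rough sketch of the ChanceScheduler idea. It is simplified: the real driver pulls the candidate hosts from the service records in the database, and you select it with the --scheduler-driver flag mentioned above:

    import random

    def schedule(hosts):
        """Pick any available host at random -- roughly what the ChanceScheduler does."""
        if not hosts:
            raise RuntimeError("no hosts available")
        return random.choice(hosts)

    # Example: in Nova the host list would come from the services table, not be hard-coded.
    print(schedule(["compute-1", "compute-2", "compute-3"]))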

Cheers!

Revision history for this message
herry (gongsiping) said :
#5

Thanks Sandy Walsh, that solved my question.