Cloud Web interface should allow overcommit on CPU

Asked by Neil Wilson on 2009-10-22

Currently the default installation of UEC restricts the number of VMs it will schedule based on the lesser of the memory and CPU core counts retrieved from the node controllers.

There should be an option in the web interface to allow a cluster to overcommit on CPU. Ordinarily, when you're providing VMs on a cluster, memory is the only limiting resource you need to track - KVM does a good job of sharing out processor time. The CPU core count should be the number of cores available to an instance rather than the number dedicated to it.

The workaround at the moment appears to be increasing MAX_CPUS to a large value in the cluster controller.
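As a rough illustration of that workaround, the edit might look like the following. The variable name MAX_CPUS is taken from the question itself; the file path here is a throwaway local stand-in so the commands can run anywhere - on a real cluster controller you would edit its eucalyptus.conf instead.

```shell
# Illustrative sketch only: a local stand-in file, not a real CC config.
CONF=/tmp/cc-eucalyptus.conf.demo
printf 'MAX_CPUS="8"\n' > "$CONF"            # pretend this is the current cap

# Raise MAX_CPUS to a large value so core count stops limiting VM scheduling
sed -i 's/^MAX_CPUS=.*/MAX_CPUS="1024"/' "$CONF"
grep MAX_CPUS "$CONF"
```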

Question information

Language:
English
Status:
Answered
For:
Ubuntu eucalyptus
Assignee:
No assignee
Last query:
2009-10-22
Last reply:
2009-10-22
Neil Soman (neilsoman) said : #1

You can already set this via the MAX_CORES value in eucalyptus.conf on individual nodes. It is unclear whether applying a blanket value across all nodes via the front-end config is a good idea. We provide a finer level of control at the individual node level, and I don't think this will be changing for Karmic.
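A per-node sketch of the MAX_CORES approach described above might look like this. The real config path and the restart command are illustrative for a Karmic-era UEC node and should be verified on your system; a throwaway file is used here so the sketch runs anywhere.

```shell
# Stand-in for /etc/eucalyptus/eucalyptus.conf on one node controller
CONF=/tmp/nc-eucalyptus.conf.demo
printf 'MAX_CORES="2"\n' > "$CONF"           # node reports 2 physical cores

# Advertise 8 cores to allow roughly 4x CPU overcommit on this node
sed -i 's/^MAX_CORES=.*/MAX_CORES="8"/' "$CONF"
grep MAX_CORES "$CONF"
# sudo service eucalyptus-nc restart         # apply on a real node (command may differ)
```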

Thierry Carrez (ttx) said : #2

Following upstream rationale on this.

Dustin Kirkland  (kirkland) said : #3

I'm actually going to convert this to a question.

I suspect this will come up again.

:-Dustin

Dustin Kirkland  (kirkland) said : #4

Neil Soman wrote 5 hours ago: #1

You can already set this via the MAX_CORES value in eucalyptus.conf on individual nodes. It is unclear whether using a blanket value across all nodes via the front end config is a good idea. We provide a finer level control at the individual node level and I don't think this will be changing for Karmic.

Neil Soman (neilsoman) said : #5

Configuration at the individual node controller level can be controlled via eucalyptus.conf. If it is tedious to change eucalyptus.conf manually, this file should be under the control of cfengine, puppet, chef, or whatever the site admin uses to manage node-level config files.

In general, the front end is not responsible for configuring node-level parameters across tens, hundreds, or thousands of nodes; at least, that is not the way the system works today. Nodes are throwaway entities that come and go, each one possibly with a different config. A redesign needs to be considered carefully, since there are parameters other than MAX_CORES that can be set at the node level. As a site admin, it is conceivable that I want one group of nodes overprovisioned but not another, because I know the latter are also being used to run other things. So far, the granularity of this config is the individual node.
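A hedged sketch of that per-group granularity: different node groups carry different MAX_CORES values, which a config-management tool (puppet, chef, cfengine) would normally drive. The hostnames below are hypothetical, and the script only writes the intended commands to a plan file rather than touching any hosts.

```shell
# Dry-run plan: record per-node edits instead of executing them over ssh.
PLAN=/tmp/overcommit-plan.txt
: > "$PLAN"
for node in node01 node02; do                # group safe to overprovision
  echo "ssh $node -- sed -i 's/^MAX_CORES=.*/MAX_CORES=\"16\"/' /etc/eucalyptus/eucalyptus.conf" >> "$PLAN"
done
for node in node03; do                       # shared-duty group: no overcommit
  echo "ssh $node -- sed -i 's/^MAX_CORES=.*/MAX_CORES=\"4\"/' /etc/eucalyptus/eucalyptus.conf" >> "$PLAN"
done
cat "$PLAN"
```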
