CPU occupation by yade?

Asked by kelaogui

Hello sir,
   I would like to create a simulation with a large number of particles. Since Yade is parallelized with OpenMP, a shared-memory system is preferred. In that case, I would like to know whether I can choose the number of cores used to run the simulation, and whether there is an optimal number of cores that reduces the computation time.

Question information

Language: English
Status: Answered
For: Yade
Assignee: No assignee
Anton Gladky (gladky-anton) said :
#1

Hello,
2012/12/19 kelaogui <email address hidden>:
> I would like to know whether I can choose the number of cores to run the simulation

Yade has the -j option, which sets the number of threads to use.

> and whether there is an optimal number of cores to reduce the computation time.

It depends on your task. You can try different numbers of "jobs" to find
the optimal one.

Anton
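
For reference, the -j option is given on the command line; a minimal sketch (the script name "simulation.py" is a placeholder, not from this thread):

```shell
# Run a Yade script with 4 OpenMP threads.
# "simulation.py" stands in for your own simulation script.
yade -j 4 simulation.py
```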

Christian Jakob (jakob-ifgt) said :
#2

Hi kelaogui,

> I would like to create a simulation with a large number of particles.

How many particles? 10,000? 100,000? More?

> Since Yade is parallelized with OpenMP, a shared-memory system is preferred.

Not only preferred, but required: OpenMP only works on shared-memory systems!

> In that case, I would like to know whether I can choose the number of cores used to run
> the simulation, and whether there is an optimal number of cores that reduces the
> computation time.

Well, that's a good question. It depends on the model geometry, model dynamics, contact law, model parameters, boundary conditions, and the script you use, so it is not easy to answer...

I can give you an example of the calculation speed-up in my simulations (approx. 2500 spheres, capillary law with Hertz model, clumps, and a periodic cell):
- 1 core: 5 days 10 hours
- 4 cores: 4 days 9 hours

If you have more particles and a model with a simple contact law, using many cores could be more "effective". Please try it out and let us know the results!

Cheers,

Christian.

kelaogui (kelaogui91) said :
#3

Thanks, Christian and Anton. I also wonder whether the number of cores has an upper limit. Even though more cores do not seem to reduce the computation time proportionally, I would like to use my computer's power as much as possible.

Bruno Chareyre (bruno-chareyre) said :
#4

For 500k particles, I got costs of ~1/N for N up to 8 cores.
For smaller problems, it is clearly not O(1/N).

Alexander Eulitz [Eugen] (kubeu) said :
#5

Hi Chareyre, could you please explain your answer in a little more detail? I'm really interested in understanding it.

Bruno Chareyre (bruno-chareyre) said :
#6

With large numbers of particles, it scales very well.
With 500k spheres, the time on 8 cores was almost 8 times smaller than on 1 core.
For small numbers of particles, it scales badly, as your previous results suggested.

Is it more clear?
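
One way to read Bruno's observation is through Amdahl's law: if a fraction p of the work runs in parallel, the speedup on N cores is 1/((1-p) + p/N). A small sketch, where the values of p are illustrative guesses rather than measurements from this thread:

```python
def amdahl_speedup(p, n):
    """Speedup on n cores when a fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# Large model (500k spheres): almost all the time is spent in parallel
# loops, so 8 cores give close to an 8x speedup.
print(amdahl_speedup(0.99, 8))   # ~7.5

# Small model (~2500 spheres): serial overhead dominates, so extra cores
# help little -- compare Christian's 5d10h (1 core) vs 4d9h (4 cores).
print(amdahl_speedup(0.26, 4))   # ~1.24
```

This is why "more cores" pays off much more for large particle counts than for small ones.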

Alexander Eulitz [Eugen] (kubeu) said :
#7

Yes, thanks.
I'll run Christian's test with a lot of particles to see how different numbers of cores perform.
