Basic Problem using parallel run in Yade
Hello,
I am trying out Yade's parallel computation capability (via OpenMP) for the first time.
This is a very basic question, sorry.
When I add -j X to the Yade command, where X is the number of cores available
on my machine, htop shows only one core running at 100% CPU; the other cores
do not seem to be used by Yade (S state).
I see the same behavior with yade-2016.06a on my machine and yade-2019-01-28 on a cluster.
Did I miss something about running Yade in parallel?
Thanks for your cooperation,
Best
Vincent
#1
Hello,
It depends on the script you are running.
Cheers,
Robert
#2
Dear Robert,
Could you be more precise?
How should I adapt my script in order to benefit
from parallel computing?
Thanks,
Cheers,
V.
#3
- It is possible there are too few spheres to take advantage of parallel computing.
- It is possible you have an expensive PyRunner running on each iteration and consuming most of the computation time.
No way to know without a script.
#5
Please use the yade.timing module and provide the output of yade.timing.stats() [1].
[1]https:/
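For instance, a minimal sketch of the usual workflow (the iteration count here is arbitrary; run it after loading your own script):
###
O.timingEnabled = True  # make Yade collect per-engine timing data
O.run(5000, True)       # run some iterations and wait for them to finish
from yade import timing
timing.stats()          # print the per-engine timing table
###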
#6
> plot.plot()
Not a minimal script.
https:/
Please.
Bruno
#7
OK, here is a self-contained shorter script that reproduces the problem; I got rid of the STL imports.
Thanks for your feed-back,
Best,
Vincent,
#******
#SCRIPT:
#******
# NB: several long lines were truncated by the question tracker; engine and
# function names below are reconstructed, and arguments shown as ... are not
# recoverable from the post
# gravity deposition (1), (2) continuing with oedometric test after stabilization (3)

# load parameters from file if run in batch
# default values are used if not run from batch
readParamsFromTable(rMean=.075, rRelFuzz=.3, maxLoad=1e6, minLoad=1e4)
# make rMean, rRelFuzz, maxLoad accessible directly as variables later
from yade.params.table import *

# create box with free top, and create loose packing inside the box
from yade import pack, plot

## Physical parameters
Density = 2400
frictionAngle = ...
tc = 0.01
en = 0.0001
et = 0.0001

## Materials and the walls' geometry (tc/en/et suggest a viscoelastic contact model)
facetMat = O.materials.append(ViscElMat(frictionAngle=frictionAngle, tc=tc, en=en, et=et))
sphereMat = O.materials.append(ViscElMat(density=Density, frictionAngle=frictionAngle, tc=tc, en=en, et=et))
from yade import ymport
fctIdscylinder = O.bodies.append(...)  # cylinder walls

#######
sp = pack.SpherePack()
sp.makeCloud(..., rMean=rMean, rRelFuzz=rRelFuzz)
sp.toSimulation(material=sphereMat)

O.engines = [
    ForceResetter(),
    # sphere, facet, wall
    InsertionSortCollider([Bo1_Sphere_Aabb(), Bo1_Facet_Aabb(), Bo1_Wall_Aabb()]),
    InteractionLoop(
        # the loading plate is a wall, we need to handle sphere+sphere, sphere+facet, sphere+wall
        [Ig2_Sphere_Sphere_ScGeom(), Ig2_Facet_Sphere_ScGeom(), Ig2_Wall_Sphere_ScGeom()],
        [Ip2_ViscElMat_ViscElMat_ViscElPhys()],
        [Law2_ScGeom_ViscElPhys_Basic()]
    ),
    NewtonIntegrator(damping=0.5, gravity=(0, 0, -9.81)),
    # the label creates an automatic variable referring to this engine
    # we use it below to change its attributes from the functions called
    PyRunner(command='checkUnbalanced()', realPeriod=2, label='checker')
]
O.dt = .5*PWaveTimeStep()

# the following checkUnbalanced, unloadPlate and stopUnloading functions are all called by the 'checker'
# (the last engine) one after another; this sequence defines the progression of the different stages of
# the simulation, as each of the functions, when its condition is satisfied, updates 'checker' to call
# the next function when it is run from within the simulation next time

# check whether the gravity deposition has already finished
# if so, add a plate on the top of the packing and start the oedometric test
def checkUnbalanced():
    # at the very start, the unbalanced force can be low as there are only a few contacts, but it does not mean the packing is stable
    if O.iter < 9000: return
    # the rest will be run only if unbalanced is < .1 (stabilized packing)
    if unbalancedForce() > .1: return
    # add the plug ("bouchon") facets at the position on the top of the packing
    global fctIdsbouchon
    fctIdsbouchon = O.bodies.append(...)
    global TransEngload
    TransEngload = TranslationEngine(ids=fctIdsbouchon, translationAxis=(0, 0, -1), velocity=...)
    O.engines = O.engines + [TransEngload]
    # next time, do not call this function anymore, but the next one (unloadPlate) instead
    checker.command = 'unloadPlate()'

def unloadPlate():
    # if the force on the plate exceeds the maximum load, start unloading
    Fn = sum(O.forces.f(i)[2] for i in fctIdsbouchon)
    if abs(Fn) > maxLoad:
        TransEngload.velocity = 0
        TransEngunload = TranslationEngine(ids=fctIdsbouchon, translationAxis=(0, 0, 1), velocity=...)
        O.engines = O.engines + [TransEngunload]
        # next time, do not call this function anymore, but the next one (stopUnloading) instead
        checker.command = 'stopUnloading()'

def stopUnloading():
    Fn = sum(O.forces.f(i)[2] for i in fctIdsbouchon)
    if abs(Fn) < minLoad:
        # remove the plug ("bouchon")
        for facet in fctIdsbouchon:
            O.bodies.erase(facet)
        O.save(...)
        print('******** end of the construction of the packing ********')
        O.pause()

from yade import timing
O.timingEnabled = True  # without this, timing.stats() has nothing to report
O.run()
O.wait()
timing.stats()
#******
#END SCRIPT
#******
#8
Please follow Robert's #5 request:
> ... provide yade.timing.stats [1]
thanks
Jan
#9
Hello,
Thanks for your suggestions.
Actually, I tested the code with a lower number of particles and several cores are used in that case; it works.
The point is that for a higher number of particles, the script gets stuck at the sp.makeCloud(...) call
(at the beginning of the script, before the engines) and does not seem to use the requested cores at this stage.
Is there an efficient way to create a cloud with a very high number of particles?
Thanks for your reply,
Best,
Vincent
#10
So in the end the question is not about any parallel portion of the code.
Duplicate a small cloud?
Note that nothing is «stuck»; it just takes more time (quadratically, I think).
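To illustrate, a quick timing sketch (box size, radius and counts below are arbitrary, not from your script):
###
# time makeCloud for a growing number of spheres in a fixed unit box
import time
from yade import pack
for num in (1000, 5000, 10000, 20000):
    sp = pack.SpherePack()
    t0 = time.time()
    sp.makeCloud((0, 0, 0), (1, 1, 1), rMean=0.01, num=num)
    print(num, time.time() - t0)
###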
Bruno
#11
The makeCloud code is not parallelized.
If you really need a very high number of particles, consider creating a smaller packing and copying it wherever needed (as suggested by Bruno).
cheers
Jan
#12
...Thank you very much for these suggestions.
But basically, how can I do this duplication of a packing in Yade?
How do I glue the pieces of packing together to make one large global packing?
Thanks for your input,
Best,
Vincent
#13
Have a look at how pack.randomDensePack [1] does it internally.
A MWE:
###
from yade import pack
r = 0.005
num = 20
dimSmall = (0.04, 0.04, 0.04)
dimFull = (1, 1, 1)
sp = pack.SpherePack()
sp.makeCloud((0, 0, 0), dimSmall, rMean=r, num=num, periodic=True)
print(len(sp))
sp.cellFill(dimFull)
print(len(sp))
#sp = pack.filterSpherePack(...)  # optionally clip the result to exact dimensions
#sp.toSimulation() # takes 30 s on my laptop
###
sp.toSimulation() might take some time, adding to the simulation is not parallelized :-)
but it should be significantly faster than makeCloud with all the particles.
cheers
Jan
[1] https:/
[2] https:/
[3] https:/
#14
> How do I glue the pieces of packing to make a large global packing ??
makeCloud alone will not reach this goal anyway, if I understand correctly what you have in mind.
You have anyway to compress your initial cloud to reach a dense, solid- (not gas-)like state. Before this subsequent compression, you can just as well replicate a smaller cloud as many times as you want, shifting it in space at each replication, in order to get a bigger cloud (see the sketch below).
See e.g. https:/
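A minimal sketch of this replicate-and-shift idea (radii, box size and grid counts are arbitrary; it assumes a Yade session, where Vector3 is available):
###
# build a small cloud once, then copy it onto a 3x3x3 grid of shifted positions
from yade import pack
small = pack.SpherePack()
small.makeCloud((0, 0, 0), (0.1, 0.1, 0.1), rMean=0.005, num=500)
big = pack.SpherePack()
for i in range(3):
    for j in range(3):
        for k in range(3):
            shift = Vector3(0.1*i, 0.1*j, 0.1*k)
            for c, r in small:  # SpherePack iterates as (center, radius)
                big.add(c + shift, r)
big.toSimulation()
###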
#15
> You have anyway to compress your initial cloud
This is not my suggestion.
Accelerating makeCloud is efficiently exemplified in #13 instead.
I was expecting some python loops in the script to duplicate/shift an initial packing, but I learned of the existence of "sp.cellFill()", which does it automatically. Thanks Jan!
B
#16
I will try that; it looks very relevant.
All the best
V.