result indeterminism and MPI=OFF
Hello,
I am trying to compare several runs made with different evolutions of parameters over time, which requires the same 'initial' behaviour,
so, according to https:/
I have compiled with ENABLE_MPI=OFF and set an identical seed in makeCloud... BUT I already got different results just after the gravity-deposition output, after having launched the same py code twice through my batch command.
What have I missed to achieve my goal?
Luc
Question information
- Language: English
- Status: Solved
- For: Yade
- Assignee: No assignee
- Solved by: Luc OGER
- Solved: 2020-08-31
- Last query: 2020-08-31
- Last reply: 2020-08-31
Jan Stránský (honzik) said: #1
Hello,
please provide the actual code; it is very difficult to help without it.
E.g. if you have a PyRunner with realPeriod, the results might differ, because the code runs under different circumstances each time.
Also, MPI != OpenMP (ENABLE_MPI=OFF does not make Yade single-threaded). Do you use the -j option for yade?
cheers
Jan
Luc OGER (luc-oger) said: #2
Here are the cmake command and the py code:
#Result indeterminism
#It is naturally expected that running the same simulation several times will give exactly the same results: although the computation is done
#with finite precision, round-off errors would be deterministically the same at every run. While this is true for single-threaded computation
#where exact order of all operations is given by the simulation itself, it is not true anymore in multi-threaded computation which is described in detail in later sections.
#The straight-forward manner of parallel processing in explicit DEM is given by the possibility of treating interactions in arbitrary order.
#Strain and stress is evaluated for each interaction independently, but forces from interactions have to be summed up. If summation order is also arbitrary
#(in Yade, forces are accumulated for each thread in the order interactions are processed, then summed together), then the results can be slightly different. For instance
#(1/10.)+(1/13.)+(1/17.) = 0.23574660633484162
#(1/17.)+(1/13.)+(1/10.) = 0.23574660633484165
#As forces generated by interactions are assigned to bodies in quasi-random order, summary force Fi
#on the body can be different between single-threaded and multi-threaded computations, but also between different runs of multi-threaded computation with exactly the same parameters.
#Exact thread scheduling by the kernel is not predictable since it depends on asynchronous events (hardware interrupts) and other unrelated tasks running on the system;
#and it is thread scheduling that ultimately determines summation order of force contributions from interactions.
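# Side note: the non-associativity above is easy to check in any Python shell:
#   (0.1 + 0.2) + 0.3   # -> 0.6000000000000001
#   0.1 + (0.2 + 0.3)   # -> 0.6
# The same mechanism makes the per-thread force sums depend on summation order.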
# The components of the batch are:
# 1. table with parameters, one set of parameters per line (ccc.table)
# 2. readParamsFromTable which reads respective line from the parameter file
# 3. the simulation must be run using yade-batch, not yade
#
# $ yade-batch --job-threads=1 03-oedometric-
#
# load parameters from file if run in batch
# default values are used if not run from batch
# gravity deposition in box, showing how to plot and save history of data,
# and how to control the simulation while it is running by calling
# python functions from within the simulation loop
#pas_box theta_max converg_min cover_pack_fraction init_seed friction ratio
#model_type bottom_cover pas_box theta_max converg_min cover_pack_fraction init_seed friction ratio
#.1 55. .002 0.1 210 0.4 4
readParamsFromTable(
# make rMean, rRelFuzz, maxLoad accessible directly as variables later
from yade.params.table import *
# import yade modules that we will use below
from yade import pack, plot, export,math
global ratio,nombre,
global converg_min, init_seed,
global i_pas, box_size,
#some parameters passed by batch_table:
####batch : friction = 0.5
####batch : theta_max = 30.0
####batch : pas_box = 0.1
####batch : ratio = 3 # size ratio between the glued spheres and the moving ones
####batch : cover_pack_fraction = 0.2 # coverage percent for moving spheres
####batch : init_seed=10
####batch : packing_fraction = 70./100.
####batch : model_type = 1
#some parameters:
shear_modulus = 1e5
poisson_ratio = 0.3
young_modulus = 2*shear_modulus*(1+poisson_ratio)  # E = 2G(1+nu)
local_damping = 0.01
viscous_normal = 0.021
viscous_shear = 0.8*viscous_normal
angle = math.atan(friction)
# initialisation of the initial coordinates
#newTable(
#creating a material (FrictMat):
id_SphereMat=
SphereMat=
box_size = 1.0
i_pas = 0
step0=0
mask1=0b01
num_Angle= pas_box*i_pas
gravity_y = 9.81*sin( num_Angle*math.pi/180.)
gravity_z = 9.81*cos( num_Angle*math.pi/180.)
O.reset()
nom_file=
filename_
rayon = 0.025
height = 5.0*rayon*nb_layers
# create rectangular box from facets
O.bodies.
# create empty sphere packing
# sphere packing is not equivalent to particles in simulation, it contains only the pure geometry
nombre= int(packing_
print("nombre = ",nombre)
sp = pack.SpherePack()
# generate randomly spheres with uniform radius distribution
sp.makeCloud(
# add the sphere pack to the simulation
sp.toSimulation()
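# Side note: with a fixed seed, makeCloud() itself is reproducible; e.g.
# (a sketch, with hypothetical geometry arguments)
#   sp2 = pack.SpherePack()
#   sp2.makeCloud((0,0,0), (1,1,1), rMean=0.025, num=100, seed=init_seed)
# gives the same cloud on every run, so the indeterminism enters later,
# in the multi-threaded force summation.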
nb_particles=
O.save(
# save file text mode; beginning of the run
filename1=
export.
# simulation loop
O.engines=[
ForceResetter(),
InsertionSortCollider(
InteractionLoop(
# handle sphere+sphere and facet+sphere collisions
[Ig2_
[Ip2_
# [Law2_L3Geom_
),
NewtonIntegrator(
# call the checkUnbalanced function (defined below) every 2 seconds
PyRunner(command='checkUnbalanced()', realPeriod=2)
]
# timesteps
O.dt = .5*PWaveTimeStep()  # half of the p-wave critical timestep
traitement_file = nom_file+
traitement = open(traitement_file, 'w')  # create/empty the file
traitement.close()
# if the unbalanced force goes below converg_min, the packing
# is considered stabilized, therefore we stop collecting
# data history and stop
def checkUnbalanced():
global converg_min, step0, i_pas
#print("iteration= ",utils.
if (O.iter)<(5000): return
if (O.iter-
#print ("bal iter steps :",utils.
#######
# the rest will be run only if unbalanced force is < converg_min (stabilized packing)
if utils.unbalancedForce() < converg_min:
#print ("bal iter steps :",utils.
#######
if step0==0:
step0 = O.iter
i_pas=1
print ("len-O.bodies = ",len(O.bodies))
O.save(
# save file text mode; beginning of the run
filename1=
export.
print ("fin de step 1: ",step0)
step1=1
# export.
print ("len-O.bodies (2 packs) = ",len(O.bodies))
O.pause()
O.run()
# when running with yade-batch, the script must not finish until the simulation is done fully
# this command will wait for that (has no influence in the non-batch mode)
waitIfBatch()
and the table for the batch command:
model_type packing_fraction pas_box theta_max nb_cycles converg_min nb_layers init_seed friction ratio
1 0.6 0.15 25.0 4 0.00005 10 440 0.15 1
1 0.6 0.15 22.0 4 0.00005 10 440 0.15 1
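For reference, each non-header line of this table becomes one batch job; after readParamsFromTable, the values are exposed as variables (a sketch of the mechanism, not part of the original post):

from yade.params.table import theta_max
print(theta_max)   # 25.0 in the first job, 22.0 in the second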
and the first lines of the output for the two runs:
#format x_y_z_r_attrs
# x y z r V_x V_y V_z
0.297853 0.453123 0.284843 0.0250031 5.65698e-06 -2.8491e-06 6.80765e-06
0.342439 0.740925 0.585506 0.0249955 -5.30826e-05 5.30518e-06 -1.07065e-05
and
#format x_y_z_r_attrs
# x y z r V_x V_y V_z
0.297853 0.453123 0.284843 0.0250031 4.84423e-06 -4.98896e-06 6.7201e-06
0.342438 0.740925 0.585506 0.0249955 -3.45893e-05 9.91756e-06 -2.91766e-06
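To locate where two such exports start to diverge, something like this can be used (a minimal sketch; the file names are hypothetical):

with open('run1.txt') as f1, open('run2.txt') as f2:
    # compare the two text exports line by line
    for i, (a, b) in enumerate(zip(f1, f2), 1):
        if a != b:
            print('first difference at line', i)
            print(a.strip())
            print(b.strip())
            break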
Luc OGER (luc-oger) said: #3
with the cmake command:
cmake -DENABLE_MPI=OFF -DOpenGL_
Jan Stránský (honzik) said: #4
> PyRunner(
as already discussed in #1: you have the same code, but because of realPeriod=2, it is very likely that two runs produce different results.
realPeriod=2 means running the code every two real (wall-clock) seconds, which (depending on random circumstances like other processes running on the computer and slowing Yade down) may happen at different simulation times.
Use iterPeriod or virtPeriod and you should get the same results.
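For instance (a sketch based on the script above; the period of 1000 iterations is an arbitrary choice):

# fires at the same iteration in every run -> reproducible
PyRunner(command='checkUnbalanced()', iterPeriod=1000)
# instead of firing at wall-clock times, which vary between runs:
# PyRunner(command='checkUnbalanced()', realPeriod=2)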
cheers
Jan
Luc OGER (luc-oger) said: #5
Dear Jan,
in my first quick test, the change to iterPeriod solved my problem.
Thanks,
Luc