Elastic energy from fieldsaver vs from checkpointer
Dear,
Recently, while working on a simulation of a uniaxial stretching process (the opposite of uniaxial compression), I tried to calculate the elastic and kinetic energy from the checkpointer output files. For the kinetic energy (at a given stage) the fieldsaver output was practically the same as the result calculated from the checkpointer file (summed over all existing particles). However, to my surprise I got significantly different results for the elastic energy. I used the NRotBondPrms() model with scaling=True. I calculated the elastic energy by going over the ``connectivity'' segment of the *.vtu file, taking the absolute values of the bond strains and using
the formula Ep_i = 1/2 * bond_strain_i^2 * normalK, summing over the whole ``connectivity'' list.
My question is: where is my mistake? Is it a result of the ``scaling'' setting? Thanks for the explanation. Regards
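The back-calculation described above can be sketched as follows (a minimal illustration of the stated procedure, not code from the thread; `bond_strains` stands for the absolute bond strains read from the ``connectivity'' segment):

```python
def elastic_energy(bond_strains, normalK):
    """Sum Ep_i = 1/2 * bond_strain_i**2 * normalK over all bonds,
    exactly as described in the post above."""
    return sum(0.5 * s * s * normalK for s in bond_strains)

# e.g. two bonds with strains 0.01 and 0.02 and normalK = 1.0e4:
# elastic_energy([0.01, 0.02], 1.0e4) gives 2.5
```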
Wojciech
Question information
Language: English
Status: Answered
#1
Hi Wojciech,
Yes, the discrepancy most likely arises from the use of "scaling=True" in NRotBondPrms.
When "scaling==True", the stiffness of individual particlepair interactions is calculated based upon the radii (R_1 and R_2) of the two bound particles. In that case, the value of normalK is assumed to be an elastic (Young's) modulus and the interaction is assumed to be a cylinder with equilibrium length of L_12 = (R_1 + R_2) and radius R_12 = 0.5*(R_1+R_2). Linear elastic beam theory then provides the formula for calculating the elastic stiffness (K_12) of the interaction:
K_12 = normalK * A_12 / L_12
     = normalK * pi * R_12 * R_12 / L_12
where A_12 is the cross-sectional area of the cylindrical beam.
For our cylindrical beam interactions, the calculation of the elastic stiffness simplifies to:
K_12 = 0.25 * pi * normalK * (R_1 + R_2)
As can be seen from this formula, you need to take account of the sum of the radii of the bonded particles in order to correctly back-calculate the elastic stiffness of each interaction. I suspect this is the primary reason for the discrepancy you observe. Another discrepancy arises due to the geometrical scaling factor (0.25*pi).
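A minimal sketch of the per-interaction stiffness calculation above (function name and signature are illustrative, not from the ESyS-Particle API):

```python
import math

def bond_stiffness(normalK, r1, r2):
    """Elastic stiffness of a bonded interaction when scaling=True:
    K_12 = normalK * A_12 / L_12, with R_12 = 0.5*(r1 + r2),
    A_12 = pi * R_12**2 and L_12 = r1 + r2."""
    R12 = 0.5 * (r1 + r2)
    L12 = r1 + r2
    return normalK * math.pi * R12 * R12 / L12

# This is identical to the simplified form 0.25 * pi * normalK * (r1 + r2).
```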
To check whether this is the case, set "scaling=False" and rerun your simulation. In this instance, every interaction will have an identical elastic stiffness, equal to the value of normalK. You should find that the back-calculated value for the elastic strain energy from the checkpointer files now more closely matches that of the FieldSaver. You won't get identical results though, because of truncation and round-off errors induced by only storing a few decimal places in the checkpoint files.
As a general rule, I would avoid back-calculating quantities from checkpoint files (particularly from the .vtu files!) if that quantity can be recorded via a FieldSaver. The output will always be more accurate using FieldSavers and the output files are likely more easily post-processed. Checkpoint files should only really be used in conjunction with the provided post-processing tools (dump2vtk, grainextract, fracextract etc.) to achieve the data analysis tasks for which these tools were designed.
I hope this helps. Have fun!
Cheers,
Dion
#2
Dear Dion,
Thanks for the in-depth explanation and such a prompt answer. I will promptly check setting scaling=False and let you know the results. Since in the post I tried to be concise, let me now explain the story behind it.
I fully agree with you that checkpoint outputs are a primary source of information about the current state of a sample/process and, due to their finite accuracy (with respect to fieldsaver output), should basically be used for imaging purposes. Moreover, I have noticed that their creation significantly slows down the simulation (nothing surprising), while fieldsavers much less so. (I presume fieldsavers are created by the master core, not the workers?) This is the standard way I am using checkpoint files.
However, recently I needed something more than the fieldsaver provides, namely the kinetic energy of BONDED particles only. The reason is that while simulating dynamic breaking (here a tissue under tension) it is very difficult to ensure exact stability of the simulations. The classical stability criteria, like the one you put down in the tutorial or the formulas developed by O'Sullivan, do not work. The reason is very simple: the stress factor which appears in the denominator of the bound on the time step becomes hard to estimate in dynamic cases. Apparently it is not the normalK factor from NRotBondPrms. In some simulations I started from a time step estimated using this formula with normalK, getting dt = 10^-6, and quickly got quite impressive noise. When I went down to dt = 10^-9 the numerical artefacts no longer masked the whole process but still existed; the calculations, however, took over a month with optimal worker loading.
Fortunately, DEM, like any other multibody system, exhibits a nice statistical property I call soft stability. Even if numerical stability is locally broken and some particles acquire completely crazy energies, this only slightly influences the overall state of the sample/process, provided the ``unstable'' particles quickly disappear from the simulation region. In many cases this can be arranged by a proper simulation setup. In my current simulations unstable particles break all their bonds and become unbonded, usually within one or two time steps, so I can detect their presence by inspecting the list of bonded particles from the checkpoint files.
Correcting the total kinetic energy by removing the spurious energy is then trivial.
Performing such an analysis (using the vtu file created by dump2vtk) I also performed a cross-check of my soft-stability correction, and that is when I arrived at the puzzle with the elastic energy. This is the story behind my post.
With best regards
Wojciech
PS. The analysis I was talking about will soon be published in Phil. Trans. Royal Soc. A
#3
Hi Wojciech,
> I presume fieldsavers are created by the master core not workers?
The master does the actual writing out of fieldsaver files, after having gathered the relevant information from the workers via MPI communication. FieldSavers are generally a lot smaller in terms of data output than checkpointers but can be a bit slow if you have very large simulations: a lot of comms and data output for RAW* output formats.
> The classical stability criteria, like the one you put down in the tutorial or the formulas developed by O'Sullivan, do not work.
This is a little surprising to me. The formula in the Tutorial is possibly a bit misleading. The one I use, often hardcoded into my scripts, is:
dt = 0.2 * sqrt ( 4./3. * density * Rmin**3. / youngsModulus_max / Rmax )
where density = particle density, Rmin is the minimum particle radius, Rmax is the maximum particle radius, and youngsModulus_max is the largest Young's modulus in your simulation.
If you use scaling=True, insert normalK in place of youngsModulus_max. If scaling==False:
dt = 0.2 * sqrt (4./3. * density * Rmin**3. / normalK)
If your simulations are highly dynamic, you will need to do a second check though. The timestep increment must also be equal to or smaller than:
dt_dyn = 0.2 * Rmin / Vmax
where Vmax is the maximum particle speed you expect to be achieved during your simulations. This will ensure that no particle will ever move more than 1/5th of the radius of the smallest particle. Note that this is also the Verlet distance for the neighbour search algorithm. In other words, you cannot ever move a particle more than the Verlet distance in a single timestep. If you do so, then the particle-pair list will be incorrect, leading to missed interaction force calculations.
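The two criteria above can be combined into one helper (a sketch of the quoted formulas only; the function name and the idea of taking the minimum are mine, not from ESyS-Particle):

```python
import math

def stable_timestep(density, Rmin, Rmax, youngsModulus_max, Vmax):
    """Combine the elastic criterion and the dynamic (Verlet-distance)
    criterion quoted above. With scaling=True, pass normalK in place of
    youngsModulus_max; with scaling=False drop the Rmax divisor as shown
    in the text."""
    dt_elastic = 0.2 * math.sqrt(4.0 / 3.0 * density * Rmin**3
                                 / youngsModulus_max / Rmax)
    dt_dynamic = 0.2 * Rmin / Vmax  # no particle moves > Rmin/5 per step
    return min(dt_elastic, dt_dynamic)
```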
> I needed something more than the fieldsaver provides me with, namely the kinetic energy of BONDED particles only.
I would have probably added a pair of fieldsavers storing the elastic strain energy of bonded interactions in RAW_WITH_POS_ID format, as well as kinetic energy in RAW format or similar. You can then post-process the fieldsaver output files to get a list of BONDED particle ids, then use this list to sum up the kinetic energy of only those particles. Arguably it is just a preference on my part, but I prefer to avoid post-processing checkpointer output when I'm specifically interested in particular information from a simulation.
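A rough sketch of this post-processing step. The file layouts below are assumptions, not taken from the thread: the bonded-interaction file is treated as whitespace-separated with the two particle ids in (0-based) columns 3 and 7 of each row, and the kinetic-energy file as `id value` pairs. Check the actual column layout of your own FieldSaver output before use.

```python
def bonded_particle_ids(bond_file, id_columns=(3, 7)):
    """Collect the ids of all particles appearing in a bonded-interaction
    FieldSaver file. id_columns gives the (0-based) columns holding the
    two particle ids; adjust to your actual output layout."""
    ids = set()
    with open(bond_file) as f:
        for line in f:
            cols = line.split()
            if not cols:
                continue
            for c in id_columns:
                ids.add(int(cols[c]))
    return ids

def bonded_kinetic_energy(ekin_file, bonded_ids):
    """Sum kinetic energy over bonded particles only, assuming each row
    of ekin_file is 'particle_id e_kin'."""
    total = 0.0
    with open(ekin_file) as f:
        for line in f:
            cols = line.split()
            if len(cols) >= 2 and int(cols[0]) in bonded_ids:
                total += float(cols[-1])
    return total
```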
> DEM, like any other multibody system, exhibits a nice statistical property I call soft stability...
I've seen that phenomenon many times, when the timestep increment is too large for the smallest particles but small enough for the larger particles. In such cases, the small particles quickly explode out of the simulation, leaving just the larger particles behind. IMO, this "soft stability" should be avoided at all costs. It is just another manifestation of numerical instability that is a little less strict than the traditional one (wherein everything explodes). Regardless, the simulation you end up running after this soft-stability explosion is not the one you originally defined: far fewer particles and many missing bonds/interactions that would otherwise have been there. My advice: don't "exploit" this soft stability in your simulations. Seek other ways to ensure the simulation is numerically stable for all particle sizes. Sometimes a simulation that takes a month to execute simply cannot be avoided in DEM.
Cheers,
Dion
#4
Dear Dion,
Thanks for your comments and explanation, especially for drawing my attention to the second stability criterion, the Verlet distance.
I was unaware of it. Let us put, however, a stability issue aside for the time being.
It seems to me that I have basically fixed the problem with the back-calculated elastic energy (Ecp). While calculating Ecp I made two mistakes.
The first was ignoring the effect of the scaling parameter: I was using normalK as the bond stiffness for the Ecp calculations while the simulations were run with scaling=True. Thanks for the help with this. The second was my misunderstanding of what the bond_strain parameter provided by dump2vtk is. I assumed that it is the direct elongation over the
initial bond length L_init = R_1 + R_2 (the sum of the bonded particles' radii), with L_current calculated as the distance between the centres of the bonded particles.
Correcting these two bugs and rerunning the simulations with scaling=False showed that the problem has been solved. I got agreement with the fieldsaver values at a level between 30% (for Ecp ~ 10^-6) and about 0.1% (for Ecp ~ 10^-4). The differences are apparently due to round-off errors. Perfect.
A problem appeared when I reran the simulation with scaling=True. This time, when I applied the scaling of normalK in the back-calculation, I got results almost exactly a factor of pi (3.1415...) larger than the fieldsaver data. Where is the extra/missing pi? I do not know; it must be somewhere in the Esys code (fieldsaver?).
I have also found one more problem. It is the following. The fieldsaver reports Ep = 0 for the first (zeroth) snapshot, no matter whether scaling is False or True. On the other hand, when I calculated Ecp from the zeroth snapshot I always got Ecp ~ 10^-6. I cross-checked the calculation using the particle positions (to calculate L_current) reported by gengeo (10 decimal digits) and got the same. Apparently the initial bond length, understood as the distance between particle centres, is larger than the sum of their radii.
The consequence is that L_init is either wrong in dump2vtk (it should be changed to L_init = |position_particle_1 - position_particle_2| at the initial snapshot)
or there is a wrong initialization in Esys. If it is a problem with dump2vtk, that's fine. As for the second case, I do not know. Anyway, I believe it is a minor problem, especially with respect to the missing/additional pi factor.
Anyway, the problem is solved.
Finally, I would like to share my conclusions from this discussion and analysis.
1. The dump2vtk tool is not a general post-processing tool, as the tutorial suggests, but rather a format converter designed primarily for visualization purposes. Due to the significantly reduced data resolution it should not be used for quantitative post-processing analysis. This should be clarified in an upgraded tutorial (I can offer some help).
2. Fieldsavers. In their current form they are OK and I basically see no reason to change them. I think a much more valuable and versatile option than extending fieldsavers would be changing or extending the checkpointers. Currently they are slightly inconvenient due to their huge sizes and the limited accuracy of the output data. I see two different ways of changing this. The first is to add a binary format to the existing checkpointer, with an additional parameter controlling ASCII/BIN output. The second is to add an additional ``checkpointer'' with fully binary output, (perhaps) better structured for parallel reading (see the VTK parallel formats). An advantage of the second approach is that the current form of the checkpointer is very convenient for beginners who are starting DEM simulations and do not yet have large needs for advanced HPC applications, while for more advanced applications such a ``binary parallel'' checkpointer would be perfect. It would also improve the versatility of Esys by allowing any case-dependent post-processing without quality limits.
Thanks, Dion, once more for a very nice and constructive discussion and help.
I will come back to the stability (especially soft stability) issue later on.
Anyway, the background of the problem which opened the Ep back-calculation issue is described in our paper, to be published soon by the Royal Society:
``Earthquake physics beyond the linear fracture mechanics: a discrete approach''. doi:10.
Here is a temporary link to it: http://
With best regards
Wojciech
PS. By the way, is there any preferable way to refer to EsysParticle in publications?
#5
Hi Wojciech,
> It seems to me that I have basically fixed the problem with backcalculated elastic energy (Ecp).
Glad to hear it!
> This time, when I applied the scaling of normalK in the back-calculation, I got results almost exactly a factor of pi (3.1415...)
> larger than the fieldsaver data. Where is the extra/missing pi? I do not know; it must be somewhere in the Esys code (fieldsaver?)
Looks like that might be my mistake. I just checked in Model/BondedInt
and the formula implemented there is missing the factor of pi:
K_12 = normalK * R_12 * R_12 / L_12
or
K_12 = 0.25 * normalK * (R_1 + R_2)
> Apparently the initial bond length, understood as the distance between particle centres, is larger than the sum of their radii.
Yes, this can happen although it will only be by an amount less than the tolerance supplied to gengeo during packing of the geometry.
> The consequence is that L_init is either wrong in dump2vtk or there is a wrong initialization in Esys.
The problem is in dump2vtk and we probably will not correct this. The "bond strain" field produced by dump2vtk is intended only as a visualisation aid. Indeed I am reasonably convinced that, in most circumstances, the individual bond strains are largely irrelevant and should not be considered as overly meaningful or diagnostic. When discussing "stress" or "strain" of bonded particle models, it is better to consider volume-averaged values of these tensors; a better analogue for the continuum concepts of "stress" and "strain".
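One common way to compute such a volume-averaged stress tensor is the Love-Weber formula, sigma = (1/V) * sum over interactions of the outer product of the interaction force f with the branch vector l joining the two particle centres. The sketch below is a standard textbook construction for illustration, not code from ESyS-Particle or this thread:

```python
def volume_averaged_stress(forces, branch_vectors, volume):
    """Love-Weber style volume-averaged stress tensor over a region:
    sigma_ij = (1/V) * sum_c f_i * l_j, summed over all interactions c
    inside the averaging volume V. forces and branch_vectors are
    sequences of 3-component vectors."""
    sigma = [[0.0] * 3 for _ in range(3)]
    for f, l in zip(forces, branch_vectors):
        for i in range(3):
            for j in range(3):
                sigma[i][j] += f[i] * l[j] / volume
    return sigma
```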
> [...] it should not be used for quantitative post-processing analysis. It should be clarified in an upgraded tutorial (I can offer some help).
I'm not sure I entirely agree that checkpoint files and dump2vtk are not suitable for quantitative post-processing analysis. These are extremely useful for 3D visualisation of simulation results but, like any such visualisation, they only produce pretty pictures. A rigorous scientific study should not rely solely upon visualisation but rather undertake careful quantitative analysis of specific measurable quantities (such as your Ecp analysis). In that case, FieldSavers are the recommended method to extract the data of interest for such analysis.
Your offer to help improve the Tutorial is much appreciated. There is quite a list of things I'd like to improve in the Tutorial if/as time permits. Contact me directly via email to discuss how you can get involved in this.
> The first is to add binary format to existing checkpointer with additional parameter allowing controlling ASCII/BIN output.
This has been discussed amongst the developers at various times. If we were to do this, there would need to be changes to various post-processing tools as well, to support either format.
> The second is adding an additional ``checkpointer'' with fully binary output and (perhaps)
> better structured for parallel reading (see the VTK parallel formats).
Currently ESySParticle does support the Silo data format (https:/
Cheers,
Dion