Sensitivity testing: Results seem random

Asked by Justin

Hello,

My Model: An object rotating in a pit of spheres.
Output: Torque on the object about the y-axis (vertical axis)

I can "calibrate" a DEM model to get results that fall in line with laboratory results (I used an MTS machine to rotate the object and measure torque values). However, when I try to calibrate my model (i.e., run many simulations while adjusting input parameters until my model results fall in line with laboratory results), my results are all over the place. One small change in an input parameter (e.g., E, density, etc.) can create large changes in the output torque load path. And even when I "hit the jackpot" and get a load path close to the laboratory results, when I try to do sensitivity testing (i.e., make small changes to one input parameter to see if my model is consistent), my results again go all over the place. I don't see a place to post them, but I do have plots showing load paths relative to small changes to an input parameter that I can email over.

Is this normal for DEM? It seems wrong and highly unscientific for a model's output load path to change so chaotically in response to small changes to an input parameter.

If a MWE is required, please let me know and I'll gladly create one.

As always, thanks in advance for the help,
Justin

Question information

Language: English
Status: Answered
For: Yade
Assignee: No assignee
Jan Stránský (honzik) said:
#1

Hello,

> y-axis (vertical axis)

although it does not really matter, I prefer z to be vertical

> I don't see a place to post them

yes, this is a drawback of Launchpad questions, no attachments..
In this case, you can probably put them on Dropbox / Google Drive / ... and give the link

> If a MWE is required, please let me know and I'll gladly create one.

Yes, please

> Is this normal for DEM? It seems wrong and highly unscientific to have a model's output load path change so chaotically to small changes to an input parameter.

In general, it is not normal, but it may be normal under certain circumstances.
E.g. if the result is strongly dependent on packing (initial particle positions) and you change the packing every run.
But it should be much more clear with the MWE.
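One way to rule out packing variation is to fix the random seed of the initial cloud so that every run starts from an identical packing (Yade's pack.SpherePack.makeCloud() accepts a seed argument for this purpose). A minimal pure-Python sketch of the idea, with made-up box dimensions:

```python
import random

def make_cloud(n, box=(0.1, 0.1, 0.1), seed=42):
    """Generate n random sphere centers inside a box.
    Seeding the generator makes the cloud reproducible,
    which is the same idea as passing seed= to Yade's makeCloud."""
    rng = random.Random(seed)
    return [tuple(rng.uniform(0.0, b) for b in box) for _ in range(n)]

cloud_a = make_cloud(1000, seed=42)
cloud_b = make_cloud(1000, seed=42)
assert cloud_a == cloud_b  # identical initial packing on every run
```

With the same seed, every run starts from the same cloud, so any remaining scatter in the results comes from the other factors.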

Cheers
Jan

Justin (justin-l-rittenhouse) said:
#2

Jan,

Link to google drive with files:
https://drive.google.com/drive/folders/1ygHckMnSksTSuMzfdsxSRS5pFaQP4-mC?usp=sharing

In the folder:
- Three MWEs (original, sphere density increased 5% and decreased 5%): basically, they're all exactly the same

- Three CAD files needed to run the models: block_test.mesh, small_base.mesh, and 1mm_heightchecker.mesh
        - block_test.mesh is not the object I'm using for my research. I didn't want to share everything with the world yet. Perhaps I can share the actual object over email.

- Three plots:
        - Sensitivity_Testing.png is the plot that goes with the included models (i.e., using block_test.mesh). Admittedly the results were better than I expected. But two models still started negative? And the original model had a higher torque than the plus 5% density model.
        - density_Sensitivity_Testing.png - Results from a model exactly the same as the MWE, except used my actual object (i.e., not block_test.mesh).
        - density_Sensitivity_Testing_2r.png - Results from same model as above, except sphere radius was reduced to 2 mm and sphere count increased by eight times.

Problems:
1) All plots have runs that start negative, which is not intuitive. I feel they should all start at zero (or close to it).
2) Sometimes results can be radically different (e.g., density_Sensitivity_Testing.png)
3) Even when not radically different, the results don't make inherent sense. I.e., spheres with a lower density can create more torque on the object than spheres with a higher density.

As always, thanks for the help,
Justin

Jan Stránský (honzik) said:
#3

Hello,

thanks for the data

> 1) All plots have runs that start negative, which is not intuitive. I feel they should all start at zero (or close to it).

You push "the object" down through the packing, which also is not very intuitive..
At the end, you of course have some forces and torque on the object.
The orientation of the forces and the sense of the torque are basically random (due to just pushing down), and possibly negative.

> 2) Sometimes results can be radically different (e.g., density_Sensitivity_Testing.png)

With this coarse spherical packing w.r.t. the object, I expect significant discrepancy w.r.t. initial particle packing.
Not the initial particle packing before the gravity deposition, which seems to be the same using memoizeDb, but the "starting packing" after gravity deposition, which might differ (see below).

Check if you have same or different packing after the gravity deposition.
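A quick way to run this check (assuming you can dump the sphere positions after deposition, e.g. via Yade's export module) is to compare the two position sets directly. A hypothetical helper, with toy coordinates:

```python
import math

def max_displacement(positions_a, positions_b):
    """Largest per-sphere distance between two packings
    (spheres must be listed in the same order in both)."""
    return max(math.dist(a, b) for a, b in zip(positions_a, positions_b))

# toy example: identical packing except the second sphere moved slightly
pack_a = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0)]
pack_b = [(0.0, 0.0, 0.0), (0.01, 0.003, 0.004)]
print(max_displacement(pack_a, pack_b))  # distance the second sphere moved
```

If this comes out around the particle radius or larger, the two runs effectively start from different packings and cannot be expected to produce matching load paths.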

> 3) Even when not radically different, the results doesn't make inherit sense. I.e., spheres with less density can create more torque on the object than spheres with a lower density.

What is density? Material density? Packing density?
If material density, then the packing difference is probably more significant for the results than the material density..

How do you run your simulation? Using parallel / multicore run, or not?
If yes, then you intrinsically get different results [1].
Specifically for your case, different "starting packing".
Even with same "starting packing", you simply get different results.
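The indeterminism under parallel runs comes from floating-point addition not being associative: when per-particle force contributions are summed in a different order (as happens across threads), the last bits of the result can differ, and DEM amplifies those tiny differences. A tiny pure-Python illustration:

```python
# Floating-point addition is not associative, so summing the same
# force contributions in a different (thread-dependent) order can
# change the rounded result.
forces = [0.1, 0.2, 0.3]
left_to_right = (forces[0] + forces[1]) + forces[2]
right_to_left = forces[0] + (forces[1] + forces[2])
print(left_to_right == right_to_left)  # False: 0.6000000000000001 vs 0.6
```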

Cheers
Jan

[1] https://yade-dem.org/doc/formulation.html#result-indeterminism

Justin (justin-l-rittenhouse) said:
#4

Jan,

1) I push down because my object has a top, a rigid bracket; this can be seen in the photo I added to the Google Drive link (top_of_object.png). Also, I didn't know another way to make sure the spheres would settle evenly: if I placed the object first, the spheres would fall on top of the object and off to the side, leaving the middle emptier compared to the sides.

2) You are correct, I just checked. The actual packing density (after gravity deposition) is different.

3) It was material density, and that's a good point.

4) Not parallel, it is single threaded. But this was good information, because I tried multicore in the past. And I did get different results running the same exact model.

Thank you for clearing these issues up for me.

It appears my coarse material is the cause of a lot of my issues. However, I use such a coarse material to reduce computational expense. Is there a way to pack the spheres around my object to start? That way I could bypass pushing my object down into the spheres, which appears to be a big cause of my issues and is by far the most computationally expensive part of my model.

Thanks,
Justin

Jan Stránský (honzik) said:
#5

> 1) ...
> Is there away to pack the spheres around my object to start?

You can create the object in its destination place and delete the undesired spheres
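That carving-out step can be sketched in plain Python; in an actual Yade script the loop would go over O.bodies and call O.bodies.erase() for the offending spheres, and the axis-aligned box below is just a hypothetical stand-in for the object's real mesh geometry:

```python
def overlaps_box(center, radius, box_min, box_max):
    """True if a sphere intersects an axis-aligned box (a stand-in
    for a proper overlap test against the real object's mesh)."""
    # squared distance from the sphere center to the closest box point
    d2 = sum(max(lo - c, 0.0, c - hi) ** 2
             for c, lo, hi in zip(center, box_min, box_max))
    return d2 <= radius ** 2

# (center, radius) pairs; one sphere sits inside the object region
spheres = [((0.0, 0.0, 0.0), 0.1), ((1.0, 1.0, 1.0), 0.1)]
box_min, box_max = (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5)

# keep only spheres that do NOT overlap the object's region
kept = [s for s in spheres if not overlaps_box(s[0], s[1], box_min, box_max)]
print(len(kept))  # 1
```

After deleting the overlapping spheres and letting the remaining packing relax briefly, the rotation stage can start without the expensive push-down phase.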

> It appears my coarse material is the cause of a lot of my issues
> I use such a coarse material to reduce computational expense.

It is always a tradeoff..

Cheers
Jan

Justin (justin-l-rittenhouse) said:
#6

Jan,

Does a DEM simulation ever converge relative to the time step? I.e., at a small enough time step, would any further reduction in the time step produce the same results?

I ask because I read in the literature that a good time step is 20-40% of the Rayleigh wave time step. However, when I go well below that, the model still produces slightly different results for different time steps.
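For reference, the Rayleigh-wave critical time step commonly quoted in the DEM literature is dt_R = pi * R * sqrt(rho / G) / (0.1631 * nu + 0.8766); Yade provides a helper for it (utils.RayleighWaveTimeStep()). A sketch of the formula with made-up rubber-like material properties:

```python
import math

def rayleigh_dt(radius, density, shear_modulus, poisson):
    """Rayleigh-wave critical time step for a sphere:
    dt_R = pi * R * sqrt(rho / G) / (0.1631 * nu + 0.8766)."""
    return (math.pi * radius * math.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

# hypothetical rubber-like values: R = 1 mm, rho = 1100 kg/m^3,
# G = 1 MPa, nu = 0.45
dt_crit = rayleigh_dt(1e-3, 1100.0, 1e6, 0.45)
print(f"critical dt ~ {dt_crit:.2e} s, 20% of it: {0.2 * dt_crit:.2e} s")
```

Note the critical step scales linearly with the smallest radius, which is why refining the spheres toward the real 0.2-0.8 mm rubber particles makes the runs so much more expensive.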

To see example plots, go back to the shared link:
https://drive.google.com/drive/folders/1ygHckMnSksTSuMzfdsxSRS5pFaQP4-mC?usp=sharing

The full torque path:
Sensitivity_Testing_time_step.png

The zoomed in start to the torque path:
time_step_zoomed.png

Legend:
 - Original: the original model with a time step of 20% of the Rayleigh wave time step
 - ts_1_wave_vtk: time step is 1% of the Rayleigh wave time step, and the model output VTK files
 - The other runs follow the same naming convention as ts_1_wave_vtk

This just goes back to trying to get my model to start from 0, or at least as close as possible. Note, these models are the same as above, except I reduced the sphere radius even further, to 1 mm. The real rubber particles this simulation is trying to mimic range from 0.2125 to 0.84 mm in radius.

Thanks,
Justin

Jan Stránský (honzik) said:
#7

> Does a DEM simulation ever converge relative to the time step?

Yes, it should; explicit time integration methods should converge for a decreasing time step.
For the deeper mathematics, discuss with a mathematician.

> any reduction in time step would produce the same results?
> the model still produces slightly different results for different time steps.

From the plot, the difference seems pretty small to me, on the order of a few percent.
In general, you have a lot of influencing factors in DEM.
Obviously the time step is one of them; practical values ("close" to the critical time step) influence the results.
Then you have many other factors, e.g. the initial packing. If you change the packing, you get different results.
If you run simulation in parallel, you get different results (!) [1].

So **from my point of view** your plot seems pretty good.
**From my point of view** it is not a good idea to match exactly one simulation to one result, but rather to match trends or some statistics from multiple runs if some "exact match" is needed.
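The multiple-runs idea can be sketched as follows: repeat the simulation with several seeds, then compare the lab curve against the pointwise mean and spread of the simulated torque paths rather than against any single run (the torque values below are invented):

```python
import statistics

# hypothetical torque load paths from three runs differing only in seed
runs = [
    [0.0, 1.0, 2.0],
    [0.5, 1.5, 2.5],
    [1.0, 2.0, 3.0],
]

# pointwise mean and sample standard deviation across the runs
mean_path = [statistics.mean(v) for v in zip(*runs)]
std_path = [statistics.stdev(v) for v in zip(*runs)]
print(mean_path)  # [0.5, 1.5, 2.5]
print(std_path)   # [0.5, 0.5, 0.5]
```

Calibration would then target the mean curve, and the standard deviation band tells you how much run-to-run scatter is inherent to the model, i.e. how close a match you can reasonably demand.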

Cheers
Jan
