Numerical error through multiple replications

Asked by matthias

I have a non-negligible numerical error in my simulations between some replications, caused by the described multithreading issue. Running more replications would solve the problem, but it burns so much CPU time that it is not feasible.

Another way could be to use the 80-bit long double of the CPU (Core i7). How can I compile Yade to use this long double data type? Or maybe double-double arithmetic?

matthias

Question information

Language:
English
Status:
Answered
For:
Yade
Assignee:
No assignee
Revision history for this message
Jan Stránský (honzik) said :
#1

Hello Matthias,

What quantities and values do you consider "not disregardable"? Multithreading might not be the only reason; some more possibilities that came to my mind:
- time step (decreasing the time step might help)
- initial state (e.g. particle positions) - what technique are you using? might this be the source?

For your second question, see the top of yade/lib/base/Math.hpp. If you want to use "long double" instead of "double", compile Yade with the cmake option
-DCMAKE_CXX_FLAGS="-DQUAD_PRECISION"
Personally I haven't tried it, so I can't guarantee the result :-)

cheers
Jan


matthias (matthias-frank) said :
#2

- I use the time step proposed by PWaveTimeStep, and sometimes a smaller one, but the effect is still the same.
- The initial state is exactly the same. I let a clump cloud fall, save this state, and (re-)start my simulation from this state.

These values are some time intervals measured in my model by running identical replications:
1.35329999999
1.33769999999
1.18169999999
1.28699999999
1.29869999999
1.33379999999
1.39619999999

I want to minimize this time interval with an optimizer. There are robust optimizers which can deal with noise, but this error is too big.

Recompiling and generating some values with long double would take all night.

matthias

Jan Stránský (honzik) said :
#3

Hi,

it was just an idea, apparently not the source of the error..

Could you be more specific about the term "some time intervals"? :-)
cheers
Jan

matthias (matthias-frank) said :
#4

The time interval is the time my particles need to fall through a hopper (for packaging).

Christian Jakob (jakob-ifgt) said :
#5

Hi,

Interesting results, I see your values differ with approx. +/- 20% ...

I simulate an increasing water level in wet sand;
see what numerical indeterminism can do when running it multithreaded:

http://www.youtube.com/watch?v=ZAkfHMyG6MM

Bruno Chareyre (bruno-chareyre) said :
#6

The word "error" may be simply irrelevant.
You can get mutiple results which are all correct though different.
80bit long double would then give another set of solutions which would not be intrinsically better than the previous ones.
Heard about this? : http://en.wikipedia.org/wiki/Chaos_theory

403175147 (yanfb-1019) said :
#7

thank you

matthias (matthias-frank) said :
#8

Ok, let's not say "error" - maybe noise, inaccuracy, or variance.
After evaluating 40 values over night, I get a nearly Gaussian-distributed histogram with mean=1.3016714286 and median=1.30455. Therefore I could reach "the real value" by running more replications, in a similar way as in discrete event simulation or other Monte Carlo based approaches, but I don't have that many computers/nodes/clusters.

So if multithreaded DEM has a chaotic nature, there are some points to consider:
- a multithreaded simulation is not repeatable, because the OS scheduling isn't
- for (automatic) simulation-based optimization, the optimizer cannot fine-tune a model; it can only find rough parameter sets which are distinguishable. Small changes are not distinguishable, because the noise of one run is too big.
