PSD generation

Asked by Bruno Chareyre on 2011-01-25

Hi,

A couple of questions on particle generation by PSD:

1/ What is the difference between makeCloud(psdSizes,psdCumm), particleSD, and particleSD2?
2/ If we want a mixture with, let's say, x% particles in [r1,r2] and (100-x)% particles in [r3,r4], with no particles between r2 and r3 (staircase psdCumm), will one of the above apply?
3/ Is there a precise reason why the "best" psdScaleExponent is not 3?
4/ It seems particle generation starts with the smaller particles and finishes with the bigger ones; this is highly demanding for the positioning algorithm, which tries to find free spots. It would be easier (i.e. it could achieve lower porosities) to fill the voids between the big ones with small ones. OK to invert that part?

Bruno

Question information

Language: English
Status: Solved
For: Yade
Assignee: No assignee
Last query: 2011-01-25
Last reply: 2011-01-25

(4) Does not apply. The ordering is random (right?).
(5) Trying psd.py, it seems the tabular PSD is not fitted very well.
Also, the size PSD and the mass PSD look similar. Am I interpreting
something the wrong way?

On 25/01/11 15:33, Chareyre wrote:
> New question #142804 on Yade:
> https://answers.launchpad.net/yade/+question/142804
> [...]

--
_______________
Bruno Chareyre
Associate Professor
ENSE³ - Grenoble INP
Lab. 3SR
BP 53 - 38041, Grenoble cedex 9 - France
Tél : +33 4 56 52 86 21
Fax : +33 4 76 82 70 43
________________

I found the problem in makeCloud and get a good fit now. I'll let you know shortly.

Hi Bruno,

See below for my answers to your questions.

(1) I will try to summarize them here, though there is some documentation in the code:
- makeCloud() has in fact three possibilities: you can obtain a uniform distribution by giving either the mean radius or the porosity as input (plus the number of balls, of course), or you can define psdSizes and psdCumm, in which case you should obtain the distribution as you input it. To be honest I would use particleSD() for the latter, since I do not remember whether makeCloud() was working properly in that sense (Vaclav?).
- particleSD() and particleSD2() do essentially the same job. The difference is that in the first case you guess the volume of solids from the number of particles and the mean radius; this means that if your size distribution is rather broad, the final number of balls you obtain is not the same as you input. Using particleSD2() instead, you will get the same number of balls, because the volume of solids is not really another parameter: it is sufficient to input the number of balls and the porosity together with the size distribution (there is a comment in the code about that; I hope it is clear).
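The ball-count drift for broad distributions can be illustrated with a small numerical sketch (plain Python, not the actual SpherePack code; the function name is made up for illustration): since E[r³] > (E[r])³ for any non-degenerate distribution, a solid volume guessed from the mean radius is exhausted before the requested number of balls is reached.

```python
import math
import random

def balls_from_mean_radius(n_target, radii):
    """Sketch: guess the solid volume from the mean radius, then count
    how many balls of the *actual* radii fit into that volume."""
    r_mean = sum(radii) / len(radii)
    v_solid = n_target * 4/3 * math.pi * r_mean**3  # particleSD-style guess
    n, v = 0, 0.0
    for r in radii:
        v += 4/3 * math.pi * r**3
        if v > v_solid:
            break
        n += 1
    return n

random.seed(0)
n_target = 1000
radii = [random.uniform(0.1, 1.0) for _ in range(n_target)]  # broad PSD
n_obtained = balls_from_mean_radius(n_target, radii)
print(n_obtained, "balls instead of", n_target)
```

For a narrow distribution the two counts nearly coincide; the broader the PSD, the fewer balls come out.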

(2) I think none of the above would apply to your case. The percentage passing you input is cumulative, so I do not think you can obtain an interval of radii with zero particles.

(3) Vaclav wrote that function and he can probably give you the answer.

(4) It does apply and makes sense, but only if you have the list of radii, so that you can actually choose to start from the bigger sizes; otherwise it is random, as you already say. This is indeed what happens in both particleSD() and particleSD2(), since the list of radii is available there (and it makes a big difference if you start placing the big balls rather than the small ones).

HTH, Chiara

closing

Václav Šmilauer (eudoxos) said : #5

(2) if you specify psdSizes (radii) and psdCumm with makeCloud, you should be able to get any psd you are able to describe by a non-decreasing piecewise-linear function. Therefore, in your case, you would have psdSizes=[r1,r2,r3,r4] and psdCumm=[0,.4,.4,1] to get 40% uniformly in (r1,r2) and remaining 60% uniformly in [r3,r4]. But I suspect that algorithm is buggy. Fixes welcome.
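Vaclav's piecewise-linear description can be made concrete with an inverse-transform sampling sketch (a hypothetical re-implementation, not the makeCloud code): a plateau in psdCumm between r2 and r3 carries zero probability mass, so no radius ever falls in that interval.

```python
import bisect
import random

def sample_psd(psd_sizes, psd_cumm, n, rng=None):
    """Inverse-transform sampling from a piecewise-linear cumulative PSD
    (sketch of what the psdSizes/psdCumm mode is meant to do)."""
    rng = rng or random.Random(42)
    radii = []
    for _ in range(n):
        u = rng.random()
        # first table point with cumm >= u (clamped so i-1 is valid)
        i = max(bisect.bisect_left(psd_cumm, u), 1)
        c0, c1 = psd_cumm[i - 1], psd_cumm[i]
        s0, s1 = psd_sizes[i - 1], psd_sizes[i]
        radii.append(s0 + (u - c0) / (c1 - c0) * (s1 - s0))
    return radii

r1, r2, r3, r4 = 1.0, 2.0, 4.0, 8.0
# plateau at 0.4 between r2 and r3 -> zero probability in (r2, r3)
radii = sample_psd([r1, r2, r3, r4], [0.0, 0.4, 0.4, 1.0], 10000)
print(all(r <= r2 or r >= r3 for r in radii))  # True
```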

(3) I was not able to discover the reason, although I checked the derivation carefully several times. It is true though that I never used that function since Chiara introduced the particleSD method, which is more straightforward and gives lower porosity due to the ability to place bigger spheres first.

Oh! We managed to send 3 posts in the same minute with Chiara!
Thank you both for the answers. It seems particleSD uses discrete
distributions (right?) and assumes a cubic volume, which is a bit restrictive.
1/ The difference between particleSD() and particleSD2() is not very
explicit in the doc.
2/ Vaclav is right, it works (with makeCloud at least), since it results
in zero probability for r2<r<r3.
3/ It's fixed. I re-derived it; psdScaleExponent disappears.

I'll commit a makeCloud generating decreasing radii and scaling the PSD
down if the target number can't be achieved (also retrying recursively
with higher porosity if the target porosity is too low). Not much
time now.

Bruno

p.s. trying to attach figures, not sure they will fit in Launchpad answers...

It is up to you to choose the shape of the box. With particleSD() you can assign the size of the box as input, so it is not necessarily a cubic volume.
(2) I understand your question now. You can do the same with particleSD() then (just set the same percentage for r2 and r3, as Vaclav suggests).
Chiara

So, this comment in SpherePack.cpp (particleSD2) is obsolete?
/* possible enhacement: proportions parameter, so that the domain is not
cube, but box with sides having given proportions */

That applies only to particleSD2(), which in fact does not ask you to specify the size of the box; it is assumed to be cubic for now.
By the way, when you say "scaling the psd down", you mean applying a shift to it, right? That would be quite nice to have, I agree.

For particleSD2 only, OK.

Scaling down: a homothetic transformation of the distribution. It looks
like a translation in log axes; see the figures I sent this morning for
the 20k-particles case. The only difference between the figures is the
target number of particles.

B.

Václav Šmilauer (eudoxos) said : #11

Bruno, it is not clear to me what the scaling does. Can you describe it concisely? My current idea about it:

1. Try to place particles as required.
2. If it is impossible to place all of them, then you make the particles which are already generated smaller, until the rest can be placed as well.

Right? That means, however, that the size distribution is different (although "only" scaled down). Wouldn't it be better to increase the box instead (move particles homothetically, but without changing their radii), so that the size (and mass) distribution is as required, although in a larger volume?
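The box-enlargement alternative amounts to solving for the box side rather than rescaling the radii. A minimal sketch, assuming a cubic box (box_side_for is a hypothetical helper, not an existing Yade function):

```python
import math

def box_side_for(radii, porosity):
    """Hypothetical helper: given the full list of radii and a target
    porosity, return the side of the cubic box that holds them without
    rescaling the PSD."""
    v_solid = sum(4/3 * math.pi * r**3 for r in radii)
    v_box = v_solid / (1.0 - porosity)
    return v_box ** (1/3)

side = box_side_for([0.01] * 5000, porosity=0.6)
print(side)
```

The size and mass distributions stay exactly as requested; only the sample volume grows.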

It's simpler than that: I made makeCloud(num>0,poro>0,psd) recursive
(such a call with all three params would currently throw). If num doesn't
fit, the list of generated spheres is simply erased and
makeCloud(num,poro,psd) calls makeCloud(num,poro2>poro,psd).

"Scaling" is a multiplication of all sizes in psdSizes by the same factor.

Is it better to scale down the PSD sizes or to increase the box size? It
depends: the underlying physics, boundary constraints, taste, etc.
I've always been in the case where it is better for me to scale
particles down. Because Yade's CundallStrack packings have
size-independent behaviour, I can see "num" as a mesh-refinement
parameter (if you refine a FEM mesh, you usually don't want to change
the dimensions of the problem).
That explains why I can grow particles, and why I define packings via num
and rRelFuzz (never by rMean). I'm not claiming it's fundamentally
better, but it is what I need, so I irrationally tend to think it is the
most frequent usage.
A bit more rational, maybe: minCorner and maxCorner are the only
mandatory parameters (the others have default values), suggesting that
they are not something we should fiddle with.

Anyway, I'll add a way to get the scaling factor from pack, so one can
easily scale up everything again if needed.

Good news: ordering sizes gives 0.55 porosity (vs. 0.85 otherwise)!

Bruno

Bruno,
Yes, the distribution in particleSD() is discrete; I will add more documentation about that. Do you think a continuous distribution would make a big difference? In DEM papers I can see that both are being used. Any idea about that?
Thanks, Chiara

I don't really know in which cases it could make a big difference. It is good if both discrete and continuous distributions can be generated in Yade.
makeCloud works perfectly for the continuous case now (you can try psd.py to see how it works; use a version before bzr2724 or after bzr2748, as there is a small bug in between).