My fluid-solid coupling script runs very slowly

Asked by Ziyu Wang

Hello, everyone!

What I want to do is to simulate the triaxial compression failure of rock under seepage conditions. But when I run the following script, it runs very slowly (about 3.0 iter/s). I wonder if this is normal?
PS: My model is a 0.05 * 0.05 * 0.05 cube with about 5000 particles.

------------------------------------------------
from yade import pack, ymport, plot, utils, export, timing
from builtins import range
import numpy as np

rate=-0.01
damp=0.4
stabilityThreshold=0.001
key='_triax_base_'
young=20e9
name='JCFPM_triax'
compFricDegree=30
poisson=0.1

mn,mx=Vector3(0,0,0),Vector3(0.05,0.05,0.05)
O.materials.append(JCFpmMat(type=1,density=2640,young=young,poisson=poisson,tensileStrength=1e7,cohesion=2e7,frictionAngle=radians(60),label='sphere'))
O.materials.append(JCFpmMat(type=1,frictionAngle=0,density=0,label='wall'))
walls=aabbWalls([mn,mx],thickness=0,material='wall')
wallIds=O.bodies.append(walls)

O.bodies.append(ymport.text('packing-JCFPM.spheres',scale=1,shift=Vector3(0,0,0),material='sphere',color=(0.6,0.5,0.15)))
### (The model imported here is a cube of about 5000 particles generated by randomDensePack)

triax=TriaxialStressController(
 maxMultiplier=1.+2e7/young,
 finalMaxMultiplier=1.+2e6/young,
 thickness = 0,
 stressMask = 7,
 internalCompaction=True,
)
newton=NewtonIntegrator(damping=damp)
def recorder():
 yade.plot.addData(
  i=O.iter,
  e11=-triax.strain[0],e22=-triax.strain[1],e33=-triax.strain[2],
  s11=-triax.stress(triax.wall_right_id)[0], # x-component of stress on the 'right' boundary
  s22=-triax.stress(triax.wall_top_id)[1], # y-component of stress on the 'top' boundary
  s33=-triax.stress(triax.wall_front_id)[2], # z-component of stress on the 'front' boundary
  P=triax.porosity,
  #meanS=-triax.meanStress[2]
  ev=triax.volumetricStrain,
  numberTc=interactionLaw.nbTensCracks,
  numberSc=interactionLaw.nbShearCracks,
  unb=unbalancedForce()
 )
 plot.saveDataTxt('triax_JCFPM_HM_Wy=10.0Mpa,Sy=2.0MPa')

def stop_condition():
 if abs(triax.strain[2]) > 0.015:
  O.pause()
  presentcohesive_count = 0
  for i in O.interactions:
   if hasattr(i.phys,'isCohesive'):
    if i.phys.isCohesive == 1:
     presentcohesive_count += 1
  print('the number of cohesive bonds now is:',presentcohesive_count)

O.engines=[
 ForceResetter(),
 InsertionSortCollider([Bo1_Sphere_Aabb(aabbEnlargeFactor=1.3,label='is2aabb'),Bo1_Box_Aabb()]),
 InteractionLoop(
  [Ig2_Sphere_Sphere_ScGeom(interactionDetectionFactor=1.3,label='ss2sc'),Ig2_Box_Sphere_ScGeom()],
  [Ip2_JCFpmMat_JCFpmMat_JCFpmPhys()],
  [Law2_ScGeom_JCFpmPhys_JointedCohesiveFrictionalPM(recordCracks=True,Key='triax_JCFPM_HM',label='interactionLaw')]
 ),
 GlobalStiffnessTimeStepper(active=1,timeStepUpdateInterval=100,timestepSafetyCoefficient=0.8),
 triax,
 #TriaxialStateRecorder(iterPeriod=100,file='WallStresses'+key),
 newton,
 FlowEngine(dead=1,label='flow'),
 PyRunner(iterPeriod=1000,initRun=True,command='recorder()',label='data',dead=0),
 PyRunner(iterPeriod=1000,command='stop_condition()',dead=0),

]

O.step()
ss2sc.interactionDetectionFactor=-1
is2aabb.aabbEnlargeFactor=-1

triax.goal1=triax.goal2=triax.goal3=-10e6
while 1:
 O.run(1000,1)
 unb=unbalancedForce()
 print('unbalanced force:',unb,'mean stress:',triax.meanStress)
 if unb<stabilityThreshold and abs(-10000000-triax.meanStress)/10000000<0.001:
  break

flow.dead=0
flow.meshUpdateInterval=2
flow.useSolver=4
flow.permeabilityFactor=1
flow.viscosity=1
flow.bndCondIsPressure=[0,0,0,0,1,1]
flow.bndCondValue=[0,0,0,0,2e6,0]

triax.internalCompaction=False
triax.stressMask=3
triax.goal1=-1e7
triax.goal2=-1e7
triax.goal3=rate

plot.plots={'e33':('s33',None,'unb'),'i':('numberTc','numberSc',None,'s33')}
plot.plot()
O.run()
----------------------------------------------------------

Thanks for your help!

Question information

Language: English
Status: Solved
For: Yade
Assignee: No assignee
Solved by: Ziyu Wang
Revision history for this message
Jan Stránský (honzik) said :
#1

Hello,

please do not open the same question twice [1].

Concerning where the "problem" is, whether it is "normal", etc., please first define those terms and provide more information: software (Yade version, OS), hardware, how you run the simulation (single/multi thread, ...), etc.
With the limited information provided, it may be normal and there may be no problem.

You have two stages in the simulation. I suggest investigating each separately.

A good start is probably to measure the times of the individual engines using the timing module [2].
Please provide the timing.stats() result for both stages.

Cheers
Jan

[1] https://answers.launchpad.net/yade/+question/700285
[2] https://yade-dem.org/doc/yade.timing.html

Ziyu Wang (ziyuwang1) said :
#2

Hello, Jan,
In fact, I did not know how to deal with my last question, and I did not want to reopen the same question. Anyway, I'm very sorry...

The following is the information from printAllVersions():
----------------------------------------------------
Yade version : 20220208-6309~ebf9df3~bionic1
Yade features : LOGGER USEFUL_ERRORS VTK OPENMP GTS QT5 CGAL PFVFLOW PFVFLOW LINSOLV MPI TWOPHASEFLOW LS_DEM FEMLIKE GL2PS LBMFLOW THERMAL PARTIALSAT POTENTIAL_PARTICLES POTENTIAL_BLOCKS
Yade config dir: ~/.yadedaily
Yade precision : 53 bits, 15 decimal places, without mpmath, PrecisionDouble
```

Libraries used :

| library | cmake | C++ |
| ------------- | -------------------- | ------------------- |
| boost | 106501 | 1.65.1 |
| cgal | | 4.11 |
| clp | 1.16.11 | 1.16.11 |
| cmake | 3.10.2 | |
| coinutils | 2.10.14 | 2.10.14 |
| compiler | /usr/bin/c++ 7.5.0 | gcc 7.5.0 |
| eigen | 3.3.4 | 3.3.4 |
| freeglut | 2.8.1 | |
| gl | | 20190911 |
| ipython | 5.5.0 | |
| metis | | 5.1.0 |
| mpi | 3.1 | ompi:2.1.1 |
| mpi4py | 2.0.0 | |
| mpmath | 1.0.0 | |
| openblas | | OpenBLAS 0.2.20 |
| python | 3.6.9 | 3.6.9 |
| qglviewer | | 2.6.3 |
| qt | | 5.9.5 |
| sphinx | 1.6.7-final-0 | |
| sqlite | | 3.22.0 |
| suitesparse | 5.1.2 | 5.1.2 |
| vtk | 6.3.0 | 6.3.0 |

```
Linux version : Ubuntu 18.04.6 LTS
Architecture : amd64
Little endian : True
-------------------------------------------------------------

About hardware:
Memory 15.6GiB
processor AMD® Ryzen 7 4800h with radeon graphics × 16
Graphics card NVIDIA GeForce RTX 2060/PCIe/SSE2
disk 101.7 GB

I run the simulation with `yadedaily -j 12 script.py` (actually, no matter whether I use many threads or a single thread, the running speed is 3.0 iter/s, so I think it is not normal).

I followed your suggestion and ran timing.stats(), but got the following:
In [10]: timing.stats()
Name Count Time Rel. time
-------------------------------------------------------------------------------------------------------
ForceResetter 0 0.0us
InsertionSortCollider 0 0.0us
"iloop" 0 0.0us
GlobalStiffnessTimeStepper 0 0.0us
TriaxialStressController 0 0.0us
"flow" 0 0.0us
"thermal" 0 0.0us
"VTKrec" 0 0.0us
NewtonIntegrator 0 0.0us
"recorder" 0 0.0us
PyRunner 0 0.0us
TOTAL 0.0us

I do not know what happened... I ran another, normal script and ran timing.stats(), and all the data is still 0.

Thanks for your help!

Jan Stránský (honzik) said :
#3

Put the timing code in your script:
###
O.timingEnabled=True # before any timing

yade.timing.reset() # before 1st stage
# the running you want to measure
while 1:
    ...
    O.run(1000,1)
yade.timing.stats()

yade.timing.reset() # before 2nd stage
flow.dead=0
...
O.run(10000,True) # instead of just O.run()
yade.timing.stats()
###

Cheers
Jan
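Jan's snippet above follows a reset/run/report pattern. For readers outside Yade, the same pattern can be sketched with plain Python wall-clock timing (`StageTimer` is an illustrative helper for this answer, not part of the Yade API):

```python
import time

class StageTimer:
    """Minimal per-stage wall-clock timer, mirroring the yade.timing reset/stats usage."""
    def __init__(self):
        self.reset()

    def reset(self):
        # Mark the start of a new stage (like yade.timing.reset()).
        self._t0 = time.perf_counter()

    def stats(self, label):
        # Report elapsed wall time for the current stage (like yade.timing.stats()).
        elapsed = time.perf_counter() - self._t0
        return f"{label}: {elapsed:.3f} s"

timer = StageTimer()
# ... run the first stage here ...
print(timer.stats("stage 1"))
timer.reset()
# ... run the second stage here ...
print(timer.stats("stage 2"))
```

Unlike Yade's built-in timing, this only gives per-stage totals, not a per-engine breakdown, so it is a coarse first check rather than a replacement.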

Bruno Chareyre (bruno-chareyre) said :
#4

> flow.meshUpdateInterval=2

This is an enormous workload: it means rebuilding the mesh and the permeability matrix, and factorizing it, every 2 iterations. It can only be very slow. See [1] for a more detailed explanation.
Bruno

[1] https://www.sciencedirect.com/science/article/abs/pii/S0010465519303340
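Bruno's point can be illustrated with a toy cost model (the costs below are made up for illustration; only their ratio matters): when one remesh plus refactorization costs far more than a single pressure solve with a reused factorization, the average per-step cost is dominated by the remesh term whenever the interval is small.

```python
def mean_step_cost(remesh_cost, solve_cost, interval):
    """Average per-step flow cost when the mesh is rebuilt every `interval` steps."""
    return remesh_cost / interval + solve_cost

# Illustrative (made-up) costs: one remesh+refactorization ~100x one solve.
slow = mean_step_cost(remesh_cost=100.0, solve_cost=1.0, interval=2)     # 51.0
fast = mean_step_cost(remesh_cost=100.0, solve_cost=1.0, interval=1000)  # 1.1
print(slow / fast)  # the small interval is ~46x more expensive per step
```

With `interval=2`, half of all steps pay the full remesh price; with the default `interval=1000`, that price is amortized over 1000 steps.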

Ziyu Wang (ziyuwang1) said :
#5

Hello, Jan,
The following is the information from timing.stats():
------------first stage (quick)------------------------------
ForceResetter 4000 75521.451us 0.50%
InsertionSortCollider 59 196899.836us 1.30%
InteractionLoop 4000 10281851.285us 67.77%
GlobalStiffnessTimeStepper 41 169944.973us 1.12%
TriaxialStressController 4000 3179726.783us 20.96%
"flow" 0 0.0us 0.00%
"data" 4 6129.148us 0.04%
PyRunner 4 273.363us 0.00%
NewtonIntegrator 4000 1260754.597us 8.31%
  forces sync 4000 8066.228us 0.64%
  motion integration 4000 1204713.592us 95.55%
  sync max vel 4000 8042.751us 0.64%
  terminate 4000 6417.242us 0.51%
  TOTAL 16000 1227239.813us 97.34%
TOTAL 15171101.436us 100.00%
------------------------------------------------------------------------

------------------------------second stage (very slow, 15000 steps)-----------------------
ForceResetter 15000 330213.906us 0.01%
InsertionSortCollider 114 251996.562us 0.00%
InteractionLoop 15000 51001932.733us 0.98%
GlobalStiffnessTimeStepper 150 741695.019us 0.01%
TriaxialStressController 15000 7243186.493us 0.14%
"flow" 15000 5159923026.425us 98.68%
  Position buffer 15001 4307152.614us 0.08%
  Triangulating 15001 296565.133us 0.01%
  Update_Volumes 15001 27251839.449us 0.53%
  Factorize + Solve 15001 2441294624.726us 47.31%
  compute_Forces 15001 221043860.595us 4.28%
  forces.sync() 15001 35730.031us 0.00%
  viscous forces 15001 31865.789us 0.00%
  Applying Forces 15001 2064120.419us 0.04%
  triangulate + init volumes 15001 2464117996.31us 47.75%
  TOTAL 135009 5160443755.066us 100.01%
"data" 15 35217.754us 0.00%
PyRunner 15 902.201us 0.00%
NewtonIntegrator 15000 9604592.545us 0.18%
  forces sync 15000 4004912.235us 41.70%
  motion integration 15000 5391893.156us 56.14%
  sync max vel 15000 37529.859us 0.39%
  terminate 15000 27078.136us 0.28%
  TOTAL 60000 9461413.386us 98.51%
TOTAL 5229132763.638us 100.00%
------------------------------------------------------------

I don't understand this very well. Please see if there is any abnormality.
Thanks for your kind help!
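A rough consistency check on the table above: "Factorize + Solve" (47.31 %) and "triangulate + init volumes" (47.75 %) dominate the second stage. Assuming that this work scales with the remesh frequency (an assumption, since part of the solve work recurs every step), the expected speedup from raising meshUpdateInterval from 2 to 1000 can be estimated:

```python
# Shares of total second-stage time, taken from the timing table above.
remesh_frac = 0.4731 + 0.4775   # "Factorize + Solve" + "triangulate + init volumes"
rest = 1.0 - remesh_frac        # everything else (interaction loop, Newton, ...)

# If remeshing happens every 1000 steps instead of every 2, that share
# shrinks by a factor of 2/1000 while the rest stays the same.
new_total = rest + remesh_frac * (2 / 1000)
speedup = 1.0 / new_total
print(round(speedup, 1))  # ~19.5x
```

This is the same order of magnitude as the change actually observed later in the thread (3 iter/s to about 55 iter/s, i.e. roughly 18x).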

Ziyu Wang (ziyuwang1) said :
#6

Hi Bruno,
You are right!
When I changed flow.meshUpdateInterval from 2 to 1000 (the default value), the running speed increased from 3 iter/s to about 55 iter/s.
So a further question is whether the default value can be used, or how to determine this value (I set it to 2 by referring to the setting in [1]).

Thanks for your help!

[1] https://gitlab.com/yade-dev/trunk/-/blob/master/examples/ThermalEngine/thermoHydroMechanical_coupling.py
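Concerning how to choose the value: one common idea, sketched below in plain Python as an illustration rather than a confirmed Yade feature, is to tie remeshing to accumulated deformation instead of a fixed iteration count, so the mesh is rebuilt only when particle motion has actually invalidated it.

```python
def need_remesh(vol_strain_now, vol_strain_at_last_remesh, tol=1e-3):
    """Remesh only when volumetric strain has changed by more than `tol`
    since the last remesh (tol is an illustrative tolerance)."""
    return abs(vol_strain_now - vol_strain_at_last_remesh) > tol

# Small strain change since the last remesh: keep the old triangulation.
print(need_remesh(0.0005, 0.0))  # False
# Larger change: the old mesh no longer matches the packing, so rebuild.
print(need_remesh(0.002, 0.0))   # True
```

Under such a criterion a slow quasi-static loading stage would remesh rarely, while a fast failure stage would remesh often, without hand-tuning an iteration interval.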

Jan Stránský (honzik) said :
#7

Thanks for the information.
See Bruno's answer and this line:
> "flow" 15000 5159923026.425us 98.68%
meaning that the flow engine takes 98.68% of the simulation time.
Cheers
Jan

Ziyu Wang (ziyuwang1) said :
#8

OK, I understand it. Thanks, Jan. I'd like to wait for an answer on how to set this value.

Ziyu Wang (ziyuwang1) said :
#9

No one seems to have noticed my last problem. I want to mark the question as solved first. I would appreciate it if someone could see it and answer.

Jan Stránský (honzik) said :
#10

Hello,
concerning the OP, the stated problem was "it runs very slowly ... I wonder if this is normal?", which was discussed and IMO solved.
The "last problem" is easy to miss among the other information.

I suggest opening a new question concerning the meshUpdateInterval value (see also [1], point 5).

Cheers
Jan

[1] https://www.yade-dem.org/wiki/Howtoask