MeshPartitioning throws assertion
I'm very new to FEniCS/DOLFIN, so hopefully this question is not a trivial one :)
I've implemented a sixth-order PDE (an advected phase-field crystal model) that works fine in sequential execution, but when I start the program with mpirun on two (or more) processes, the simulation crashes shortly after the start with
/.../FEniCS/
Since I don't have much knowledge about DOLFIN and parallel execution, I haven't changed anything in my code for the parallel case. The Cahn-Hilliard demo works fine in parallel. Is there anything that must be considered when programming for parallel execution with DOLFIN? Is it OK to use adaptive meshes in parallel? Is there something like repartitioning or mesh balancing implemented, in case I use an adaptation loop in my simulations?
Some information about my environment:
- Ubuntu 10.10 (amd64), quad-core AMD Phenom CPU
- gcc 4.4.5, libcgal5 3.6.1-2, PETSc 3.1-p4, DOLFIN 0.9.9, FFC 0.9.4
The program stops somewhere around the commands
ns_
ns_
ns_
The output messages before the error:
[...]
Process 0: Number of global vertices: 5551
Process 0: Number of global cells: 10800
Process 1: Partitioned mesh, edge cut is 61.
Process 0: Partitioned mesh, edge cut is 61.
Process 1: refine mesh... Creating directory "output".
Process 0: refine mesh... Creating directory "output".
Process 1: [ ok ]
Process 0: [ ok ]
Process 1: Vertices: 41737
Process 1: FunctionSpaces...
Process 0: Vertices: 45509
Process 0: FunctionSpaces...
where "refine mesh..." means that a standard rectangle mesh is refined locally using cell markers and a MeshFunction. The message "FunctionSpaces..." appears directly before the definition of the FunctionSpace V(mesh).
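For context, the local refinement step looks roughly like the following sketch. This is not the actual code from my program; it is a minimal reconstruction of the pattern described above, using the legacy DOLFIN (0.9.x-era) Python API, and the mesh sizes, refinement region, and element degree are illustrative assumptions:

```python
from dolfin import *

# Start from a standard rectangle mesh (illustrative dimensions)
mesh = Rectangle(0.0, 0.0, 1.0, 1.0, 40, 40)

# Mark cells for local refinement with a boolean MeshFunction
# (here: cells whose midpoint lies near the domain centre)
cell_markers = CellFunction("bool", mesh)
cell_markers.set_all(False)
centre = Point(0.5, 0.5)
for cell in cells(mesh):
    if cell.midpoint().distance(centre) < 0.2:
        cell_markers[cell] = True

# Refine only the marked cells
mesh = refine(mesh, cell_markers)

# "FunctionSpaces..." — the crash occurs around this point in parallel
V = FunctionSpace(mesh, "CG", 1)
```

In serial this works as expected; the question is whether `refine(mesh, cell_markers)` on an already-partitioned mesh is supported in parallel, or whether the partitioning has to be redone after refinement.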
It would be great to receive any answers to my questions :)
Question information
- Language: English
- Status: Solved
- For: DOLFIN
- Assignee: No assignee
- Solved by: Garth Wells