Python loop over cells for DG0 elements hangs in parallel
I'm using DOLFIN 1.0, built with Dorsal.
The following code works fine in parallel (4 processes, mpirun -np 4) for nx = 13, but hangs on my PC as soon as I increase it to nx = 14:
#-------------
from dolfin import *

nx = 13
mesh = UnitCube(nx, 10, 10)
DG = FunctionSpace(mesh, "DG", 0)
E = Function(DG)
#print cells(mesh)

i = 0
global_idx = mesh.parallel_          # line truncated in the original post
#print mesh
#print global_idx

print "Number of cells", mesh.num_cells()
for c in cells(mesh):
    i = i + 1
    #E.vector(
    if c.index() > mesh.num_cells():
        print "XXXXX", c.index()
        break
    # assumed completion: the original line is truncated after "E.vector("
    E.vector()[c.index()] = 1.0
print i

V = VectorFunctionSpace(mesh, "CG", 1)   # assumed arguments; truncated after "VectorFunctionS"
print "Done VF"
#-----------
I understand I should use a compiled Expression here; I'll ask a separate question about the problems I'm having with the Expression.
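For reference, this is roughly what I mean by the compiled-Expression route, as a minimal sketch (the expression string "x[0]" is only a placeholder, not the field I actually need):
#-------------
from dolfin import *

mesh = UnitCube(14, 10, 10)
DG = FunctionSpace(mesh, "DG", 0)
# JIT-compiled C++ expression; "x[0]" is just a placeholder formula
e = Expression("x[0]")
# interpolate fills the DG0 Function without a Python loop over cells
E = interpolate(e, DG)
#-----------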
Output and backtraces below.
Thanks,
Marco
Output for nx = 13:
marco@pao:
Process 0: Number of global vertices: 1694
Process 0: Number of global cells: 7800
Small graph
Process 0: Partitioned mesh, edge cut is 775.
Process 1: Partitioned mesh, edge cut is 775.
Process 2: Partitioned mesh, edge cut is 775.
Process 3: Partitioned mesh, edge cut is 775.
Number of cells 1950
Number of cells 1950
Number of cells 1950
Number of cells 1950
1950
1950
1950
1950
Done VF
Done VF
Done VF
Done VF
marco@pao:
Output for nx = 14:
marco@pao:
Process 0: Number of global vertices: 1815
Process 0: Number of global cells: 8400
Process 1: Partitioned mesh, edge cut is 790.
Process 2: Partitioned mesh, edge cut is 790.
Process 3: Partitioned mesh, edge cut is 790.
Small graph
Process 0: Partitioned mesh, edge cut is 790.
Number of cells 2098
Number of cells 2100
Number of cells 2102
Number of cells 2100
2098
and all four processes stay there, three of them with this backtrace:
#0 0x00007fda4a7fb4e8 in poll () from /lib64/libc.so.6
#1 0x00007fda4935e130 in poll_dispatch () from /home/marco/
#2 0x00007fda4935ce52 in opal_event_
#3 0x00007fda49351d21 in opal_progress () from /home/marco/
#4 0x00007fda4981f3d5 in ompi_request_
#5 0x00007fda24811cb6 in ompi_coll_
from /home/marco/
#6 0x00007fda4983393b in PMPI_Allreduce () from /home/marco/
#7 0x00007fda2cb14f7c in VecAssemblyBegi
#8 0x00007fda2cafcc69 in VecAssemblyBegi
#9 0x00007fda33073dd5 in dolfin:
at /home/marco/
#10 0x00007fda3361af65 in _set_vector_
at /home/marco/
#11 0x00007fda3361b224 in _wrap__
at /home/marco/
#12 ...... python stuff from here below
and the last one (I think the one that printed "2098") with this:
#0 0x00007f078fa934e8 in poll () from /lib64/libc.so.6
#1 0x00007f078e5f6130 in poll_dispatch () from /home/marco/
#2 0x00007f078e5f4e52 in opal_event_
#3 0x00007f078e5e9d21 in opal_progress () from /home/marco/
#4 0x00007f078eab73d5 in ompi_request_
#5 0x00007f0769aa9cb6 in ompi_coll_
from /home/marco/
#6 0x00007f078eaa54cb in ompi_comm_nextcid () from /home/marco/
#7 0x00007f078eaa117e in ompi_comm_dup () from /home/marco/
#8 0x00007f078eacf876 in PMPI_Comm_dup () from /home/marco/
#9 0x00007f077819840e in dolfin:
at /home/marco/
#10 0x00007f07787d13e3 in _wrap_MPI_barrier (args=<optimized out>)
at /home/marco/
#11 ...... python stuff from here below