PETSC ERROR with mpirun (reformulated question)

Asked by Jacopo Lanzoni

Dear all,

I wonder how I can use parallel computing for codes in which I use piecewise functions, i.e., how can I avoid indexing into arrays to define them? For example, running the program at the end of this post with mpirun, each of the four processes raises the error below, which I can't solve. Running without MPI, everything works fine.

Process 0: Number of global vertices: 10201
Process 0: Number of global cells: 20000
Process 1: Partitioned mesh, edge cut is 231.
Process 2: Partitioned mesh, edge cut is 231.
Process 3: Partitioned mesh, edge cut is 231.
Process 0: Partitioned mesh, edge cut is 231.
Traceback (most recent call last):
  File "temp2.py", line 26, in <module>
    k.vector()[:] = numpy.choose(help, k_values)
  File "/usr/local/packages/lib/python2.7/site-packages/dolfin/cpp.py", line 3662, in __setslice__
    self.__setitem__(slice(i, j, 1), values)
  File "/usr/local/packages/lib/python2.7/site-packages/dolfin/cpp.py", line 3693, in __setitem__
    _set_vector_items_array_of_float(self, indices, values)
RuntimeError: non matching dimensions on input

The code is the following:

from dolfin import *
import numpy

tol = 1E-14
mesh = UnitSquare(100, 100)

class Omega0(SubDomain):
        def inside(self, x, on_boundary):
                return True if x[1] <= 0.5 else False

class Omega1(SubDomain):
        def inside(self, x, on_boundary):
                return True if x[1] >= 0.5 else False

subdomains = MeshFunction('uint', mesh, 2)
subdomain0 = Omega0()
subdomain0.mark(subdomains, 0)
subdomain1 = Omega1()
subdomain1.mark(subdomains, 1)

V0 = FunctionSpace(mesh, 'DG', 0)
k = Function(V0)

k_values = [1.5, 5.0]
help = numpy.asarray(subdomains.array(), dtype=numpy.int32)
k.vector()[:] = numpy.choose(help, k_values)

Question information

Language: English
Status: Answered
For: DOLFIN
Assignee: No assignee
Johan Hake (johan-hake) said:
#1

You should only use dofs that are local to the processor when accessing
values in a parallel vector.

Try the following code:

from dolfin import *
import numpy

tol = 1E-14
mesh = UnitSquare(100, 100)

class Omega0(SubDomain):
        def inside(self, x, on_boundary):
                return True if x[1] <= 0.5 else False

class Omega1(SubDomain):
        def inside(self, x, on_boundary):
                return True if x[1] >= 0.5 else False

subdomains = MeshFunction('uint', mesh, 2)
subdomain0 = Omega0()
subdomain0.mark(subdomains, 0)
subdomain1 = Omega1()
subdomain1.mark(subdomains, 1)

V0 = FunctionSpace(mesh, 'DG', 0)
dm = V0.dofmap()
k1 = Function(V0)

# Collect the dofs local to this process, cell by cell, for each subdomain
inds0, inds1 = [], []
for cell in SubsetIterator(subdomains, 0):
    inds0.extend(dm.cell_dofs(cell.index()))

for cell in SubsetIterator(subdomains, 1):
    inds1.extend(dm.cell_dofs(cell.index()))

print "num 0:", len(inds0), "num 1:", len(inds1)

# Assign through local indices only; this is safe in parallel
k1.vector()[inds0] = 1.5
k1.vector()[inds1] = 5.0

# Sanity check: sum the local counts over all processes
print "dofs:", MPI.sum(float(len(k1.vector().array()))),
print "num 1.5:", MPI.sum(float(sum(k1.vector().array()==1.5))),
print "num 5:", MPI.sum(float(sum(k1.vector().array()==5)))

Note that this works because, with DG 0, a cell does not share any dofs
with other cells. If the FunctionSpace had been, for example, CG 1, this
script would not work: neighbouring cells share dofs, so you would first
need to de-duplicate the dofs you collect in inds{0,1} (see the sketch
below).

Johan
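
A minimal, untested sketch of the de-duplication Johan mentions for the CG 1 case, reusing the inds0/inds1 lists from his script (numpy.unique both sorts and removes duplicates):

# CG 1: neighbouring cells share dofs, so the collected index lists
# contain duplicates; remove them before assigning.
inds0 = numpy.unique(numpy.asarray(inds0, dtype='intc'))
inds1 = numpy.unique(numpy.asarray(inds1, dtype='intc'))
k1.vector()[inds0] = 1.5
# Dofs exactly on the interface y = 0.5 appear in both lists and
# keep whichever value is assigned last.
k1.vector()[inds1] = 5.0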

Garth Wells (garth-wells) said:
#2

On 30 August 2012 12:55, Jacopo Lanzoni
<email address hidden> wrote:
> I wonder how I can use parallel computing for codes in which I use piecewise functions, i.e., how can I avoid indexing into arrays to define them?

You need to say what you're trying to achieve at a higher level. For
example, if you want to interpolate a function, do it using an
Expression. It's rare that indexing is needed at the solver level.

Garth
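
A minimal sketch of the Expression approach Garth suggests, applied to the coefficient from the posted code (it assumes a DOLFIN version whose JIT-compiled Expression accepts a C++ conditional string; the values are the original k_values):

from dolfin import *

mesh = UnitSquare(100, 100)
V0 = FunctionSpace(mesh, 'DG', 0)

# Piecewise-constant coefficient defined symbolically; interpolate()
# runs in parallel since each process only writes its own dofs.
k = interpolate(Expression("x[1] <= 0.5 ? 1.5 : 5.0"), V0)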


Jacopo Lanzoni (lanzoni) said:
#3

Dear Johan and Garth,

First of all, thank you very much for your answers. I'm interested in the eigenvalue problem for the stiffness+mass pencil coming from FEM discretizations (using SLEPc), and 3D cases are too slow to assemble, and their eigenproblems too slow to solve, without parallelisation. Is there a way to avoid indexing?

Thanks again,
Jacopo
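
A minimal, untested sketch of how the piecewise coefficient can enter the stiffness form directly at assembly time, with no vector indexing, so that both assembly and the SLEPc solve run in parallel. It assumes the assemble(..., cell_domains=...) keyword and the SLEPcEigenSolver(A, M) constructor of DOLFIN around version 1.0; the coefficient values match the original post:

from dolfin import *

mesh = UnitSquare(100, 100)

class Omega0(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] <= 0.5

class Omega1(SubDomain):
    def inside(self, x, on_boundary):
        return x[1] >= 0.5

subdomains = MeshFunction('uint', mesh, 2)
Omega0().mark(subdomains, 0)
Omega1().mark(subdomains, 1)

V = FunctionSpace(mesh, 'CG', 1)
u, v = TrialFunction(V), TestFunction(V)

# The piecewise coefficient is written into the bilinear form itself,
# restricted to the marked cell subdomains dx(0) and dx(1).
a = 1.5*inner(grad(u), grad(v))*dx(0) + 5.0*inner(grad(u), grad(v))*dx(1)
m = u*v*dx

A, M = PETScMatrix(), PETScMatrix()
assemble(a, tensor=A, cell_domains=subdomains)
assemble(m, tensor=M)

# Generalized eigenproblem A x = lambda M x for the stiffness+mass pencil
eigensolver = SLEPcEigenSolver(A, M)
eigensolver.solve()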
