How to set the values of a function in a VectorFunctionSpace("Real")

Asked by Martin Sandve Alnæs

It seems the ordering of the dofs in a vector-valued Real space is also arbitrary, even though there are no mesh relations. How do I reliably set the values of such a function? Should this be considered a bug, or is it expected behaviour?

Question information

Language: English
Status: Solved
For: DOLFIN
Assignee: No assignee
Solved by: Martin Sandve Alnæs
Garth Wells (garth-wells) said :
#1

You should always access dof indices via the dofmap of a (sub)space. Nothing should be assumed regarding the dof ordering.
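
For instance (a minimal sketch, assuming the DOLFIN 1.x-era Python interface used in this thread and a serial run), the dofs of a space and of its component sub-spaces on a given cell can be queried through the dofmaps:

from dolfin import *

mesh = UnitSquareMesh(2, 2)
V = VectorFunctionSpace(mesh, "CG", 1)

# Dofs of the full space on cell 0
print(V.dofmap().cell_dofs(0))

# Dofs of each component sub-space on cell 0 (2 components on a 2D mesh)
for i in range(2):
    print(V.sub(i).dofmap().cell_dofs(0))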

Johan Hake (johan-hake) said :
#2

I have this little helper function to map dofs to vertex indices for CG1
and vector CG1 (three components assumed):

def dofs_to_verts(mesh):
    """
    Compute the dof to vertex mapping for CG1 and vector CG1
    (three vector components assumed).
    """
    import numpy as np
    import dolfin as d

    coords = mesh.coordinates()

    # FunctionSpaces
    V = d.FunctionSpace(mesh, "CG", 1)
    Vv = d.VectorFunctionSpace(mesh, "CG", 1)

    # Dof arrays
    dofs_to_vert = np.zeros(mesh.num_vertices(), dtype=np.uintp)
    vectordofs_to_vert = np.zeros(mesh.num_vertices()*3, dtype=np.uintp)
    vectordofs_to_subvert = np.zeros(mesh.num_vertices()*3, dtype=np.uintp)

    # DofMaps
    dm = V.dofmap()
    dms = [Vv.sub(i).dofmap() for i in range(3)]

    # Iterate over cells and collect dofs
    for cell in d.cells(mesh):
        cell_ind = cell.index()
        vert_inds = cell.entities(0)

        # Check that dof coordinates are the same as the vertex coordinates
        assert np.all(coords[vert_inds] == dm.tabulate_coordinates(cell))
        dofs_to_vert[dm.cell_dofs(cell_ind)] = vert_inds

        # Iterate over the sub dofmaps, recording vertex and component
        for i, dms_i in enumerate(dms):
            vectordofs_to_vert[dms_i.cell_dofs(cell_ind)] = vert_inds
            vectordofs_to_subvert[dms_i.cell_dofs(cell_ind)] = i

    # Return dof to vertex mappings
    return dofs_to_vert, vectordofs_to_vert, vectordofs_to_subvert
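
A brief usage sketch (assuming a small 3D mesh, since the helper hard-codes three vector components):

import dolfin as d

mesh = d.UnitCubeMesh(2, 2, 2)
dofs_to_vert, vectordofs_to_vert, vectordofs_to_subvert = dofs_to_verts(mesh)

# For a scalar CG1 dof `dof`, dofs_to_vert[dof] is its vertex index;
# for a vector CG1 dof, vectordofs_to_vert[dof] gives the vertex and
# vectordofs_to_subvert[dof] the component (0, 1 or 2).
print(dofs_to_vert[:5])
print(vectordofs_to_vert[:5])
print(vectordofs_to_subvert[:5])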

Johan

Martin Sandve Alnæs (martinal) said :
#3

Thanks Johan, I found the right calls in your example. We really should improve the utilities in DOLFIN for simple assignment; this is too cumbersome...

Johan Hake (johan-hake) said :
#4

Yup!

We should be able to add this functionality to either the FunctionSpace or
the DofMap.

Something like:

  std::vector<std::size_t>& DofMap::global_dofs()

which would be mapped to a numpy array in Python.

The dofs included in the returned argument should be parallel safe, only
including global dofs for the owning processor. Not sure
DofMap::dofs() does this.
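
Until such a helper exists, a serial workaround sketch for a vector "Real" space, going through the sub-dofmaps:

import numpy
from dolfin import *

mesh = UnitSquareMesh(5, 5)
n = 10
RT = VectorFunctionSpace(mesh, "Real", 0, dim=n)
f = Function(RT)

# Dof index of each component's single global dof
component_dofs = [RT.sub(i).dofmap().cell_dofs(0)[0] for i in range(n)]

# Set f[i] == i for all components
f.vector()[component_dofs] = numpy.arange(n, dtype=float)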

Johan

Martin Sandve Alnæs (martinal) said :
#5

Note that as much as I want parallel support, the immediate problem here was that I couldn't get this to work even in serial.

The case I'm talking about in this particular question is that of assigning to Real functions (vector type), which I don't think makes sense to distribute in parallel. I believe I'll usually want to treat these just like Python constants, and will want to have their values available on each process. It sounds like such a small issue, but I guess it's a bit complicated to fit it into the rest of the framework.

Anyway, here's my case with a solution, working in serial and failing miserably in parallel:

import numpy
from dolfin import *

# Setup two functions in R^10
mesh = UnitSquareMesh(5,5)
n = 10
RT = VectorFunctionSpace(mesh, "Real", 0, dim=n)
f1 = Function(RT)
f2 = Function(RT)

# Some arbitrary values, want to set f[i] == values[i] for all i
values1 = list(range(n))

# Reorder values to match random component ordering of RT dofs
values2 = [0]*n
for i,v in enumerate(values1):
    values2[RT.sub(i).dofmap().cell_dofs(0)[0]] = v

# Set values of vectors
f1.vector()[:] = numpy.array(values1)
f2.vector()[:] = numpy.array(values2)

# Evaluate func components to see the result:
print [f1[i]((0.0,0.0)) for i in range(n)]
print [f2[i]((0.0,0.0)) for i in range(n)]

# Result printed on my laptop:
#[8.0, 9.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 0.0]
#[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]

Johan Hake (johan-hake) said :
#6

The dofs you collect should be used to index the vector. You are lucky
that the values and indices are the same. Have a look at my updated
example below to see how you can index the vector.

Also, when you access the values through the eval function, you will be
off-mesh on all but one processor. I have also fixed this in the
example below.

My example also shows how global dofs are treated in DOLFIN. They are
only present on processor 0. You need to call update_ghost_values for
the values to be propagated to the other processors.

Johan

########################################################
import numpy
from dolfin import *

# Setup two functions in R^10
mesh = UnitSquareMesh(5,5)
n = 10
RT = VectorFunctionSpace(mesh, "Real", 0, dim=n)
f1 = Function(RT)
f2 = Function(RT)

# Some arbitrary values, want to set f[i] == values[i] for all i
values1 = list(range(n))

# Collect the dof index of each RT component (used to index the vector below)
values2 = []
for i in range(len(values1)):
    values2.append(RT.sub(i).dofmap().cell_dofs(0)[0])

# Set values of vectors
f1.vector()[:] = numpy.array(values1)
f2.vector()[values2] = numpy.array(values1)

cell = Cell(mesh, 0)
mid_point = (cell.midpoint().x(), cell.midpoint().y())

# Evaluate func components to see the result:
info("1: "+repr([f1[i](mid_point) for i in range(n)]))
info("2: "+repr([f2[i](mid_point) for i in range(n)]))

f1.vector().update_ghost_values()
f2.vector().update_ghost_values()

# Evaluate func components after ghost values are updated
info("1: "+repr([f1[i](mid_point) for i in range(n)]))
info("2: "+repr([f2[i](mid_point) for i in range(n)]))

Martin Sandve Alnæs (martinal) said :
#7

> The dofs you collect should be used to index the vector. You are lucky
> that the values and indices are the same.

Nope, you misread my code :)
I reorder the source indices while you reorder the target indices, same thing really.

For the rest of the code, thanks!
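
For the record, a tiny pure-numpy sketch of that equivalence (with dofs standing in for the per-component dof indices):

import numpy

values = numpy.arange(10.0)
dofs = numpy.random.permutation(10)  # stand-in for the component dof indices

# "Reorder the source": permute the values first, then assign contiguously
src = numpy.zeros(10)
src[dofs] = values
a = numpy.zeros(10)
a[:] = src

# "Reorder the target": assign the values directly into the permuted positions
b = numpy.zeros(10)
b[dofs] = values

assert numpy.all(a == b)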

Johan Hake (johan-hake) said :
#8

On 12/14/2012 11:01 AM, Martin Sandve Alnæs wrote:
>> The dofs you collect should be used to index the vector. You are lucky
>> that the values and indices are the same.
>
> Nope, you misread my code :)
> I reorder the source indices while you reorder the target indices, same thing really.

Ahh, well, but my way is of course much cleaner ;)

> For the rest of the code, thanks!

NP.

Johan