simple mesh-traversal procedure on parallel mesh

Asked by Jan Blechta

Hello,
does anybody have a clue how to implement this simple mesh-traversal in parallel:

for v in vertices(mesh):
    if all([cell_domains[c] == 1 for c in cells(v)]):
        do_something

This code has the wrong meaning in parallel, because not all physical cells neighbouring v are iterated over by cells(v) if v lies on a partition interface. Simply put, do_something is executed for more vertices than in the serial case. Am I missing some functionality of dolfin, or do I need to program the data transfer between processes by hand?

Thanks
Jan

Question information

Language: English
Status: Solved
For: DOLFIN
Assignee: No assignee
Solved by: Garth Wells

This question was reopened

Garth Wells (garth-wells) said :
#1
Jan Blechta (blechta) said :
#2

Thanks Garth Wells, that solved my question.

Jan Blechta (blechta) said :
#3

decide_do_something = MeshFunction('bool', mesh, 0, False)

# set decide_do_something on each partition independently
for v in vertices(mesh):
    if all([cell_domains[c] == 1 for c in cells(v)]):
        decide_do_something[v] = True

ngv = mesh.parallel_data().num_global_entities()[0]
g2lvi = mesh.parallel_data().global_to_local_entity_indices(0)

# sync values on shared vertices:
# if True on every partition we take True else False
for i in range(ngv):
    if g2lvi.has_key(i):
        temp = decide_do_something[g2lvi[i]]
    else:
        temp = True
    temp = MPI.min(temp) # little hack
    if g2lvi.has_key(i):
        decide_do_something[g2lvi[i]] = temp

# finally we can do something where True
for v in vertices(mesh):
    if decide_do_something[v] == True:
        do_something

Works in 1.0.0.
It could be made much more efficient by skipping the loop iterations for indices i that are not shared between any partitions.
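
A minimal sketch of that optimisation might look like the following. Note this is only a sketch under assumptions, not tested code: it uses mpi4py and mesh.topology().shared_entities(0), which is not in 1.0.0 but only in newer DOLFIN versions, and it continues from the snippet above (mesh, decide_do_something).

# Untested sketch: sync only the shared vertices, using mpi4py
# (assumes mesh.topology().shared_entities(0) and global_indices(0)
# from newer DOLFIN versions)
from mpi4py import MPI as pyMPI

comm = pyMPI.COMM_WORLD
l2g = mesh.topology().global_indices(0)      # local -> global vertex index
shared = mesh.topology().shared_entities(0)  # {local vertex: sharing ranks}

# Collect the local decision only for shared vertices, keyed by global index
local_shared = dict((int(l2g[v]), bool(decide_do_something[int(v)]))
                    for v in shared)

# Exchange the shared-vertex decisions between all processes
all_shared = comm.allgather(local_shared)

# A shared vertex stays True only if it is True on every process that has it
for v in shared:
    gv = int(l2g[v])
    decide_do_something[int(v)] = all(d[gv] for d in all_shared if gv in d)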

Jan Blechta (blechta) said :
#4

1.0.0: ngv = mesh.parallel_data().num_global_entities()[0]
trunk: ngv = mesh.size_global(0)

But what about the global_to_local_entity_indices map? Is it somehow accessible in trunk?

Best Garth Wells (garth-wells) said :
#5

On Sun, Jan 6, 2013 at 4:41 AM, Jan Blechta
<email address hidden> wrote:
> Question #218262 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/218262
>
> Status: Solved => Open
>
> Jan Blechta is still having a problem:
> 1.0.0: ngv = mesh.parallel_data().num_global_entities()[0]
> trunk: ngv = mesh.size_global(0)
>
> But what about global_to_local_entity_indices map? Is it somehow
> accessible in trunk?
>

   entity.global_index()

returns the global entity index. Cells and vertices will always have a
global index. Other entities need to have their numbering explicitly
instantiated.

If you want a global-to-local map, you can build it yourself by
iterating over all the entities on a process and getting the local and
global indices.
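
Something like this minimal sketch (untested, just assuming the Python interface exposes global_index() on vertices as described above):

# Untested sketch: build the global-to-local vertex map on this process
# by iterating over local vertices and reading their global indices
g2lvi = {}
for v in vertices(mesh):
    g2lvi[v.global_index()] = v.index()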

Garth


Jan Blechta (blechta) said :
#6

Thanks, Garth.

You can also build it this way:

l2gvi = mesh.topology().global_indices(0)
g2lvi = {}
for j in range(mesh.size(0)):
    g2lvi[l2gvi[j]] = j

Here l2gvi is the local-to-global vertex index map (numpy.array) and g2lvi
is the global-to-local vertex index map (dict).

Jan

Jan Blechta (blechta) said :
#7

Thanks Garth Wells, that solved my question.

Garth Wells (garth-wells) said :
#8

On Sun, Jan 6, 2013 at 5:01 PM, Jan Blechta
<email address hidden> wrote:
> Question #218262 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/218262
>
> Status: Answered => Open
>
> Jan Blechta is still having a problem:
> Thanks, Garth.
>
> You can also build it this way:
>
> l2gvi = mesh.topology().global_indices(0)
> g2lvi = {}
> for j in range(mesh.size(0)):
>     g2lvi[l2gvi[j]] = j
>
> Here l2gvi is local to global vertex index map (numpy.array) and g2lvi
> is global to local vertex index map (dict).
>

Take care with this approach because I'm working to de-couple the
local index from the array position (this is related to optimisations
for hybrid MPI/threaded operations). I don't know what will happen
with MeshTopology::global_indices(..).

Garth

> Jan
>

Jan Blechta (blechta) said :
#9

> Take care with this approach because I'm working to de-couple the
> local index from the array position (this is related to optimisations
> for hybrid MPI/threaded operations). I don't know what will happen
> with MeshTopology::global_indices(..).

OK. I assume that global_indices will stop working rather than give
wrong values, won't it?

Regarding hybrid MPI/OpenMP, do you plan to create some documentation or a demo
once it becomes useful? I'm interested in the current state of the approach,
how useful it is, and how to configure it. Do you know of any documentation
or other source of information so far?

Jan