export dolfin vector computed in parallel

Asked by Doug Arnold

Sometimes you want to export a solution computed in dolfin in order to process it further with other tools. For example, the FEniCS tutorial describes how to use scipy.io.savemat to export the array of a FEniCS Matrix or Vector into a file which can be read into Matlab or Octave for further processing.
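
In serial this looks roughly like the following (a sketch; the file name "u.mat" and the key are arbitrary):

  import scipy.io
  scipy.io.savemat("u.mat", {"u": u.vector().array()})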

My question is how to do this if one is using dolfin *in parallel*. Thus one computes, e.g., a FEniCS Function u and would like to export the associated array to a file that could be read by other programs. I tried

  File("u.xml") << u.vector()

for this, and it produces an XML file for a vector of the correct size, but the entries of the vector are not in an order that I know how to interpret.

Can anyone suggest a way to do this?

Question information

Language: English
Status: Solved
For: DOLFIN
Solved by: Anders Logg
Best Anders Logg (logg) said:
#1

Try this function:

from dolfin import MPI

def store_vertex_values(v):
    # One CSV file per process, named after the function and the MPI rank
    filename = "%s_%d.csv" % (v.name(), MPI.process_number())
    f = open(filename, "w")
    mesh = v.function_space().mesh()
    # Coordinates and values at this process's local vertices
    # (vertices shared between processes appear on each of them)
    vertex_coordinates = mesh.coordinates()
    vertex_values = v.compute_vertex_values(mesh)
    # One row per vertex: coordinates followed by the function value
    for x, vx in zip(vertex_coordinates, vertex_values):
        f.write(",".join("%.16e" % xx for xx in x) + "," + "%.16e\n" % vx)
    f.close()

It writes one file per processor with vertex coordinates stored along
with vertex values.
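
For a 2D mesh, each row has the form x,y,value in %.16e format, e.g. (made-up values):

  5.0000000000000000e-01,2.5000000000000000e-01,-1.2345678901234567e-01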

You could then do

  cat u_*.csv > u.csv

You will get multiple values for shared vertices, but that should be
possible to handle with a suitable script that post-processes the
merged file in serial.

--
Anders

Doug Arnold (dnarnold) said:
#2

Many thanks, Anders.

BTW, for others reading this, the syntax for dolfin 1.0.0 is

  import numpy as np
  vertex_values = np.zeros(mesh.num_vertices(), dtype='d')
  v.compute_vertex_values(vertex_values, mesh)

Lizao (Larry) Li (creatorlarryli) said:
#3

Here is an alternative approach. The vector of the function and the dofmap of the function space are saved to files during the parallel run. Then another program loads this data to reconstruct the function on a single CPU. Sample code is given below.

================= parallel_writing_program.py ==============
import numpy as np
import pickle
from dolfin import *
#parameters["num_threads"] = 4
set_log_level(ERROR)

mesh = UnitSquare(64, 64)
V = FunctionSpace(mesh, "CG", 2)
#f = Expression("2.0*x[0]+x[1]*6.0")
f = Expression("sin(5.0*x[0])*cos(3.0*x[1])")
u = interpolate(f, V)

# Save the vector (in parallel the entries are stored in partitioned order)
File("u.xml") << u.vector()

# Collect the cell dofmap as a list (one array of dofs per local cell)
dofmap = V.dofmap()
map_to_save = []
for i in range(mesh.num_cells()):
    map_to_save.append(dofmap.cell_dofs(i))

map_file = open("mapfile{}.txt".format(MPI.process_number()), "w")
pickle.dump(map_to_save, map_file)
# Save the global-to-local cell map as well (cells have dimension mesh.topology().dim())
pickle.dump(mesh.parallel_data().global_to_local_entity_indices(mesh.topology().dim()), map_file)
map_file.close()
====================================================================

================== serial_reading_program.py ============================
import pickle
import numpy as np
from dolfin import *

# User has to specify this. Easy to automate.
NUM_PROCESS = 2
# This part has to match the writing program. Easy to automate.
mesh = UnitSquare(64, 64)
V = FunctionSpace(mesh, "CG", 2)

# f is the function to be reconstructed; v is its underlying vector
f = Function(V)
v = f.vector()

# Read the data from the output of the parallel run
data = Vector()
File("u.xml") >> data

# We need the dofmap for the single CPU as well
dofmap = V.dofmap()

# Loop through the maps from each CPU
for i in range(NUM_PROCESS):
    mapfile = open("mapfile{}.txt".format(i), "r")
    cell_dofmap = pickle.load(mapfile)
    cell_global_to_local_map = pickle.load(mapfile)
    # For each global cell referred to in cell_global_to_local_map
    for global_cell in cell_global_to_local_map:
        # Find its corresponding local cell number on that CPU
        local_cell = cell_global_to_local_map[global_cell]
        # Get the cell dofs for the global cell and for its local counterpart
        global_dof = dofmap.cell_dofs(global_cell)
        local_dof = cell_dofmap[local_cell]
        # Copy the value of each local cell dof on that CPU to the
        # corresponding global cell dof (k avoids shadowing the outer index i)
        for k in range(len(global_dof)):
            v[global_dof[k]] = data[local_dof[k]]

# f is now reconstructed as an ordinary finite element function, so
# all the usual operations are available, including export to a numpy array
# Plotting just for fun
plot(f, interactive=True)
================================================================
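
For reference, a typical run of the two programs might look like this (assuming two MPI processes; the count must match NUM_PROCESS in the reading program):

  mpirun -n 2 python parallel_writing_program.py
  python serial_reading_program.py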

Doug Arnold (dnarnold) said:
#4

Very nice, Larry. Thanks.