Reading vector into function during MPI run
I use File("T0.xml") << T0.vector() to write a Function's vector to a file while running my code on a single processor.
I'm trying to read this saved vector into a function defined in the same way, but this time running the code on multiple processors using mpirun. When I read the vector in with
File("T0.xml") >> T0.vector()
the elements of the vector get assigned to different nodal positions. I can understand why this happens, but can't see a straightforward solution.
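The mismatch can be illustrated without FEniCS at all. The sketch below (pure numpy; the dof maps are made-up numbers, not real DOLFIN output) shows why reading a flat file slice-by-slice breaks once the mesh has been repartitioned: each rank's local dof i no longer corresponds to global dof i, so the values land on the wrong nodes unless a local-to-global map is applied.

```python
import numpy as np

# Values saved in serial, ordered by the serial dof numbering 0..3.
saved = np.array([10.0, 20.0, 30.0, 40.0])

# Hypothetical parallel partition: after repartitioning, rank 0 owns the
# mesh nodes corresponding to global dofs [2, 0], rank 1 owns [3, 1].
rank0_global_dofs = np.array([2, 0])

# Naive read: rank 0 just takes the first len(local) entries of the file.
naive_rank0 = saved[:len(rank0_global_dofs)]      # [10., 20.] -> wrong nodes

# Correct read: index the saved vector through the local-to-global map.
correct_rank0 = saved[rank0_global_dofs]          # [30., 10.]
```

The fix, conceptually, is that whatever does the reading must know the mapping between the serial ordering in the file and each process's local dofs.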
Is there a way to read the vector in correctly while running FEniCS under mpirun? I initially populate the vector by interpolating a 3D grid of T0 values at the points required by the finite element mesh. This relies on other software (e.g. numpy, scipy) that doesn't work well in parallel (or at least doesn't integrate well with FEniCS in parallel), so I've opted to split the code into the steps described above.
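For context, the serial interpolation step I mean looks roughly like the following sketch, using scipy's RegularGridInterpolator (the grid, the field T0_grid, and mesh_points are placeholder data; in the real code the points would come from the finite element mesh):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A coarse regular 3D grid of T0 values (hypothetical stand-in data).
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 1.0, 5)
z = np.linspace(0.0, 1.0, 5)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
T0_grid = X + 2.0 * Y + 3.0 * Z   # a linear field, exact under trilinear interp

interp = RegularGridInterpolator((x, y, z), T0_grid, method="linear")

# 'mesh_points' stands in for the dof coordinates of the FE function space.
mesh_points = np.array([[0.1, 0.2, 0.3],
                        [0.5, 0.5, 0.5]])
T0_values = interp(mesh_points)   # -> [1.4, 3.0]
```

The resulting T0_values array is what gets written into the Function's vector and then saved to T0.xml in the serial run.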
Any help would be appreciated. Thanks!
Question information
- Language: English
- Status: Solved
- For: DOLFIN
- Assignee: No assignee
- Solved by: Bob Myhill