MPI Communicator Error With PETSc in C++
Hello,
I'm currently playing around with FEniCS in parallel. I have a Mesh with three subdomains marked with a MeshFunction. I would like to see which parts of the mesh (and which subdomains) end up on which processors after partitioning. I currently have the following small piece of C++ code which I think should do the job:
#include <iostream>
#include <sstream>
#include <dolfin.h>
#include <petscsys.h>
#include <boost/mpi.hpp>
using namespace dolfin;
int main(int argc, char** argv) {
  Mesh mesh("mesh.xml");

  // subdomain markers for the three regions of the mesh
  MeshFunction<std::size_t> subdomains(mesh, "subdomains.xml");

  // wrap the DOLFIN MPI communicator (also attach it to a Boost.MPI communicator)
  MPICommunicator mpi_comm;
  boost::mpi::communicator comm(*mpi_comm, boost::mpi::comm_attach);

  // report, per process, which cells (and subdomain markers) it owns
  for (CellIterator c(mesh); !c.end(); ++c) {
    std::stringstream sstr;
    sstr << MPI::process_number() << ": cell " << c->index()
         << ", subdomain " << subdomains[*c] << "\n";
    PetscSynchronizedPrintf(*mpi_comm, "%s", sstr.str().c_str());
  }
  PetscSynchronizedFlush(*mpi_comm);

  return 0;
}
Unfortunately, when I try to execute this I get the following run-time error:
lesleis@... (command line truncated)
[lesleis-...] (per-process error/stack-trace output, truncated in the original post)
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 13323 on node lesleis-...
--------------------------------------------------------------------------
If I remove the calls to PETSc and just use cout, everything seems to work (though the output to stdout is obviously not synchronised), so possibly I am not acquiring the MPI communicator in the correct manner. Could someone please let me know what I'm doing wrong, or why the above code will not work?
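For reference, something like the following is what I mean by the cout-only version that does run (the file names and message format are just the ones from the sketch above; the output from the processes gets interleaved because nothing synchronises the writes to stdout):

#include <iostream>
#include <sstream>
#include <dolfin.h>

using namespace dolfin;

int main(int argc, char** argv) {
  Mesh mesh("mesh.xml");
  MeshFunction<std::size_t> subdomains(mesh, "subdomains.xml");

  // print, from each process, the cells it owns and their subdomain markers
  for (CellIterator c(mesh); !c.end(); ++c) {
    std::stringstream sstr;
    sstr << MPI::process_number() << ": cell " << c->index()
         << ", subdomain " << subdomains[*c] << "\n";
    std::cout << sstr.str();  // unsynchronised, so lines from different ranks interleave
  }

  return 0;
}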
Kind regards
Les
Question information
- Language: English
- Status: Solved
- Assignee: No assignee
- Solved by: Garth Wells