C++ demo_poisson hangs with mpiexec

Asked by Paul Constantine on 2013-01-31

Building and running ./demo_poisson in serial is fine, but it hangs in parallel on a cluster with a PBS submission system. I begin an interactive session with
$ qsub -I -lnodes=1:ppn=2
Note that this is just one node and two processors. Then I run
====
$ mpiexec ./demo_poisson
Process 0: Number of global vertices: 1089
Process 0: Number of global cells: 2048
Process 0: Time to build (parallel) dual graph: 0.00135803
Process 0: Solving linear variational problem.
Process 1: Solving linear variational problem.
====
Then it hangs. Forever.

I fear this is a rabbit hole of debugging. Perhaps you could give me some broad suggestions for where to look. Here is a bit more info:
- Intel compilers
- OpenMPI 1.6.1
- Built everything from source with PETSc backend that includes CHOLMOD.
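
One generic way to see where an MPI job is stuck (independent of DOLFIN) is to attach a debugger to each hung rank and dump a backtrace. This is only a sketch: it assumes gdb is installed on the compute node and that the process name matches the executable above.

```shell
# Attach gdb to every rank of the hung run and print where each one is
# blocked (e.g. inside an MPI collective or a solver call).
for pid in $(pgrep -f demo_poisson); do
    echo "=== backtrace for PID $pid ==="
    gdb -batch -p "$pid" -ex "thread apply all bt" 2>/dev/null
done
```

If every rank is sitting in the same MPI collective except one, that one is usually the place to look.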

Question information

Language:
English Edit question
Status:
Solved
For:
DOLFIN Edit question
Assignee:
No assignee Edit question
Solved by:
Paul Constantine
Solved:
2013-01-31
Last query:
2013-01-31
Last reply:
2013-01-31
Garth Wells (garth-wells) said : #1

Try building PETSc with MUMPS and SuperLU_dist.

It is built with SuperLU_DIST_3.1, but not with MUMPS. I will try MUMPS.

What is the default solver for "solve"? Maybe I can see what's happening by
changing the solver?
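
For reference, the DOLFIN Python interface of this era let you steer solve() away from the default direct solver via a solver_parameters dict. The key names below follow the DOLFIN 1.x-era API; check them against your installed version.

```python
# Hedged sketch (DOLFIN 1.x-era Python API): choose the Krylov solver and
# preconditioner explicitly instead of relying on the default direct solver.
solver_parameters = {"linear_solver": "gmres",
                     "preconditioner": "amg"}

# In a DOLFIN script this would be passed to solve(); commented out here
# because it needs a DOLFIN build with a parallel-capable PETSc:
# solve(a == L, u, bc, solver_parameters=solver_parameters)
print(solver_parameters["linear_solver"])
```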


Okay, that seemed to do the trick. No more hanging. Any ideas why it was happening? Things to avoid going forward?

Garth Wells (garth-wells) said : #4


On a parallel machine, I'd suggest starting with C++ (there can be
some weird Python issues on parallel machines, especially if you use
Intel MKL, which you don't want to worry about until other issues are
ironed out) and using a lower-level interface for the assembly and
solve, e.g.

    // Assemble directly into PETSc objects
    PETScMatrix A;
    PETScVector b, x;
    assemble_system(A, b, a, L, bcs);

    // Direct solve with MUMPS
    PETScLUSolver lu("mumps");
    lu.solve(A, x, b);

    // Or a Krylov solve with a Hypre AMG preconditioner
    PETScPreconditioner pc("hypre_amg");
    pc.parameters["report"] = true;

    PETScKrylovSolver krylov_solver("gmres", pc);
    krylov_solver.parameters["report"] = true;
    krylov_solver.parameters["monitor_convergence"] = true;

    krylov_solver.set_operator(A);
    krylov_solver.solve(x, b);

Garth


I am experiencing similar issues on a Linux cluster running RHEL5 on AMD processors.

I solved some problems by linking to the correct BLAS/LAPACK library (ACML).

However, I still see some unpredictable behavior, especially in "Init dofmap", which may take much more time than the solver: from one run to the next, on the same number of processors, it switches from 0.1 sec to 30 sec in list_timings(), while solver and assembly times stay reasonable.

Should one really avoid using the Python interface on clusters, or is it just difficult to configure correctly? I think this is important to know before developing an application.

Can you give some more information?

Which are the most sensible steps for a correct configuration?

Which benchmark do you suggest running as a test?

It would be nice if something about this appeared in the documentation.

Below is my dorsal_configure.log. I use the latest dev version.

###########################
more dorsal_configure.log
-- The C compiler identification is GNU
-- The CXX compiler identification is GNU
-- Check for working C compiler: /usr/bin/gcc
-- Check for working C compiler: /usr/bin/gcc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Performing Test HAVE_PIPE
-- Performing Test HAVE_PIPE - Success
-- Performing Test HAVE_PEDANTIC
-- Performing Test HAVE_PEDANTIC - Success
-- Performing Test HAVE_STD
-- Performing Test HAVE_STD - Success
-- Performing Test HAVE_DEBUG
-- Performing Test HAVE_DEBUG - Success
-- Performing Test HAVE_O2_OPTIMISATION
-- Performing Test HAVE_O2_OPTIMISATION - Success
-- The Fortran compiler identification is GNU
-- Check for working Fortran compiler: /usr/bin/gfortran
-- Check for working Fortran compiler: /usr/bin/gfortran -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/gfortran supports Fortran 90
-- Checking whether /usr/bin/gfortran supports Fortran 90 -- yes
-- Found MPI_C: /share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share/apps/openmpi_amd64/lib/lib
open-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib6
4/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so
-- Found MPI_CXX: /share/apps/openmpi_amd64/lib/libmpi_cxx.so;/share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/li
bopen-rte.so;/share/apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/
lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so
-- Found MPI_Fortran: /share/apps/openmpi_amd64/lib/libmpi_f90.so;/share/apps/openmpi_amd64/lib/libmpi_f77.so;/share/apps/openmpi_amd6
4/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share/apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/l
ib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/l
ib64/libm.so
-- Try OpenMP C flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Try OpenMP CXX flag = [-fopenmp]
-- Performing Test OpenMP_FLAG_DETECTED
-- Performing Test OpenMP_FLAG_DETECTED - Success
-- Found OpenMP: -fopenmp
-- Performing Test OPENMP_UINT_TEST_RUNS
-- Performing Test OPENMP_UINT_TEST_RUNS - Success
-- Boost version: 1.51.0
-- Found the following Boost libraries:
-- filesystem
-- program_options
-- system
-- thread
-- iostreams
-- math_tr1
-- mpi
-- serialization
-- timer
-- chrono
-- UFC version: 2.1.0+
-- Checking for package 'Armadillo'
-- Looking for Fortran sgemm
-- Looking for Fortran sgemm - found
-- Looking for include files CMAKE_HAVE_PTHREAD_H
-- Looking for include files CMAKE_HAVE_PTHREAD_H - found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- A library with BLAS API found.
-- A library with BLAS API found.
-- A library with LAPACK API found.
-- Performing Test ARMADILLO_TEST_RUNS
-- Performing Test ARMADILLO_TEST_RUNS - Success
-- Found Armadillo: /share/users/common/FEniCS/tao/lib/libarmadillo.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml4
4/gfortran64_mp/lib/libacml_mv.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml44/gfortran64_mp/lib/libacml_mv.so
-- Found LibXml2: /share/users/common/FEniCS/tao/lib/libxml2.so
-- Found PythonInterp: /share/users/common/FEniCS/tao/bin/python2.7 (Required is at least version "2")
-- Found PythonLibs: /share/users/common/FEniCS/tao/lib/libpython2.7.so (Required is at least version "2")
-- NumPy headers found
-- Found SWIG: /share/users/common/FEniCS/tao/bin/swig (found version "2.0.3")
-- Checking for package 'PETSc'
-- PETSC_DIR is /share/users/common/FEniCS/tao
-- PETSC_ARCH is empty
-- Found petscconf.h
-- Performing Test PETSC_TEST_RUNS
-- Performing Test PETSC_TEST_RUNS - Success
-- PETSc test runs
-- Performing Test PETSC_CUSP_FOUND
-- Performing Test PETSC_CUSP_FOUND - Failed
-- PETSc configured without Cusp support
-- Found PETSc: /share/users/common/FEniCS/tao/lib/libpetsc.so;/usr/lib64/libX11.so;/share/users/common/FEniCS/tao/lib/libcmumps.a;/sh
are/users/common/FEniCS/tao/lib/libdmumps.a;/share/users/common/FEniCS/tao/lib/libsmumps.a;/share/users/common/FEniCS/tao/lib/libzmump
s.a;/share/users/common/FEniCS/tao/lib/libmumps_common.a;/share/users/common/FEniCS/tao/lib/libpord.a;/share/users/common/FEniCS/tao/l
ib/libhwloc.so;/share/users/common/FEniCS/tao/lib/libscalapack.a;/share/users/common/FEniCS/tao/lib/libblacs.a;/share/users/common/FEn
iCS/tao/lib/libHYPRE.a;/share/users/common/FEniCS/tao/lib/libptesmumps.a;/share/users/common/FEniCS/tao/lib/libptscotch.a;/share/users
/common/FEniCS/tao/lib/libptscotcherr.a;/share/users/common/FEniCS/tao/lib/libumfpack.a;/share/users/common/FEniCS/tao/lib/libamd.a;/s
hare/apps/acml44/gfortran64_mp/lib/libacml_mp.so;/share/users/common/FEniCS/tao/lib/libnetcdf_c++.so;/share/users/common/FEniCS/tao/li
b/libnetcdf.so;/share/apps/openmpi_amd64/lib/libmpi_f90.so;/share/apps/openmpi_amd64/lib/libmpi_f77.so;/usr/lib/gcc/x86_64-redhat-linu
x/4.1.2/libgfortran.so;/usr/lib64/librt.so;/usr/lib64/libm.so;/usr/lib64/libz.so;/share/apps/openmpi_amd64/lib/libmpi_cxx.so;/usr/lib/
gcc/x86_64-redhat-linux/4.1.2/libstdc++.so;/share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share
/apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/u
sr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib/gcc/x86_64-redhat-linux/4.1.2/libgcc_s.so;/usr/lib64/libpthread.so;/usr/lib64/libdl.
so (Required is at least version "3.2")
-- Checking for package 'SLEPc'
-- SLEPC_DIR is SLEPC_DIR-NOTFOUND
-- SLEPc could not be found. Be sure to set SLEPC_DIR, PETSC_DIR, and PETSC_ARCH. (missing: SLEPC_LIBRARIES SLEPC_DIR SLEPC_INCLUDE_D
IRS SLEPC_TEST_RUNS SLEPC_VERSION SLEPC_VERSION_OK) (Required is at least version "3.2")
-- Checking for package 'TAO'
-- TAO_DIR is /share/users/common/FEniCS/src/tao-2.1-p0
-- TAO could not be found. Be sure to set TAO_DIR, PETSC_DIR, and PETSC_ARCH. (missing: TAO_LIBRARIES TAO_TEST_RUNS)
-- ParMETIS could not be found/configured. (missing: PARMETIS_LIBRARIES PARMETIS_TEST_RUNS PARMETIS_INCLUDE_DIRS PARMETIS_VERSION PAR
METIS_VERSION_OK) (Required is at least version "4.0.2")
-- Checking for package 'SCOTCH-PT'
-- Found SCOTCH (version 5.1.12)
-- Performing test SCOTCH_TEST_RUNS
-- Performing test SCOTCH_TEST_RUNS - Success
-- Found SCOTCH: /share/users/common/FEniCS/tao/lib/libptscotch.a;/share/users/common/FEniCS/tao/lib/libptscotcherr.a
-- Checking for package 'AMD'
-- Checking for package 'UMFPACK'
-- Checking for package 'AMD'
-- Checking for package 'CHOLMOD'
-- Checking for package 'AMD'
-- A library with BLAS API found.
-- Performing Test CHOLMOD_TEST_RUNS
-- Performing Test CHOLMOD_TEST_RUNS - Success
-- Performing Test UMFPACK_TEST_RUNS
-- Performing Test UMFPACK_TEST_RUNS - Success
-- Checking for package 'CHOLMOD'
-- Checking for package 'AMD'
-- A library with BLAS API found.
-- Found HDF5: debug;/share/users/common/FEniCS/tao/lib/libhdf5.so;debug;/usr/lib64/libpthread.so;debug;/usr/lib64/libz.so;debug;/usr/
lib64/librt.so;debug;/usr/lib64/libm.so;optimized;/share/users/common/FEniCS/tao/lib/libhdf5.so;optimized;/usr/lib64/libpthread.so;opt
imized;/usr/lib64/libz.so;optimized;/usr/lib64/librt.so;optimized;/usr/lib64/libm.so
-- PASTIX_LIBRARIES /share/users/common/FEniCS/tao/lib/libpastix.a;/usr/lib64/librt.so;/usr/lib64/libm.so
-- A library with BLAS API found.
-- Performing Test PASTIX_TEST_RUNS
-- Performing Test PASTIX_TEST_RUNS - Success
-- Found PASTIX: /share/users/common/FEniCS/tao/lib/libpastix.a;/usr/lib64/librt.so;/usr/lib64/libm.so;/share/users/common/FEniCS/tao/
lib/libhwloc.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml44/gfortran64_mp/lib/libacml_mv.so;/usr/lib/gcc/x86_64-r
edhat-linux/4.1.2/libgfortran.so
-- Checking for Trilinos
-- Unable to find Trilinos (>= 11.0.0)
-- Trilinos could not be found
-- Checking for package 'CGAL'
-- Performing Test CGAL_TEST_RUNS
-- Performing Test CGAL_TEST_RUNS - Success
-- Found CGAL: /share/users/common/FEniCS/tao/lib/libCGAL.so;/share/users/common/FEniCS/tao/lib/libboost_thread.so;/share/users/common
/FEniCS/tao/lib/libboost_system.so;/share/users/common/FEniCS/tao/lib/libgmp.so;/share/users/common/FEniCS/tao/lib/libmpfr.so (Require
d is at least version "3.8")
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.3")
-- checking for module 'cppunit'
-- package 'cppunit' not found
-- CPPUNIT could not be found. Be sure to set CPPUNIT_DIR. (missing: CPPUNIT_LIBRARIES CPPUNIT_INCLUDE_DIRS)
-- Checking for package 'Sphinx'
-- Could NOT find Sphinx (missing: SPHINX_EXECUTABLE SPHINX_VERSION_OK) (Required is at least version "1.0.7")
-- Could NOT find Qt4 (missing: QT_QMAKE_EXECUTABLE QT_MOC_EXECUTABLE QT_RCC_EXECUTABLE QT_UIC_EXECUTABLE QT_INCLUDE_DIR QT_LIBRARY_D
IR QT_QTCORE_LIBRARY)
-- Found VTK: /share/users/common/FEniCS/tao/lib/vtk-5.8 (found version "5.8")
--
-- The following optional packages were found:
-- -------------------------------------------
-- (OK) OPENMP
-- (OK) MPI
-- (OK) PETSC
-- (OK) UMFPACK
-- (OK) CHOLMOD
-- (OK) PASTIX
-- (OK) SCOTCH
-- (OK) CGAL
-- (OK) ZLIB
-- (OK) PYTHON
-- (OK) HDF5
-- (OK) VTK
--
-- The following optional packages were not be found:
-- --------------------------------------------------
-- (**) SLEPC
-- (**) TAO
-- (**) TRILINOS
-- (**) PARMETIS
-- (**) SPHINX
-- (**) QT
--
-- Could NOT find Qt4 (missing: QT_QMAKE_EXECUTABLE QT_MOC_EXECUTABLE QT_RCC_EXECUTABLE QT_UIC_EXECUTABLE QT_INCLUDE_DIR QT_LIBRARY_D
IR QT_QTCORE_LIBRARY)
-- QT not found, or QT/VTK not enabled in DOLFIN. Not building demo_plot-qt
-- Disabling generation of documentation because Sphinx is missing.
-- Configuring done
-- Generating done
-- Build files have been written to: /share/users/common/FEniCS/src/tao/dorsal_build_dir


Garth Wells (garth-wells) said : #6


Are you using DOLFIN 1.1?

> Should one really avoid using the python interface on clusters or is it
> just difficult to configure correctly?I think this is important to known
> before developing an application.
>
> May you give some more information?
>

The Python interface can work, but when getting started one problem is
better than two, which is why I would recommend starting with the C++
interface. There are some issues with Python loading shared MKL
libraries, launching threads (which some systems do not allow), etc.

> Which are the most sensible steps for a correct configuration?
>
> Which benchmark do you suggest to run to test?
>
> It would be nice if something about this will appear in the
> documentation
>

Unlikely, because it depends heavily on the configuration of the cluster.

On a cluster, I suggest using the options

      -D CMAKE_CXX_COMPILER:FILEPATH=mpicxx \
      -D CMAKE_C_COMPILER:FILEPATH=mpicc \
      -D CMAKE_Fortran_COMPILER:FILEPATH=mpif90 \
      -D DOLFIN_AUTO_DETECT_MPI=false

and make sure that you set CC=mpicc, CXX=mpicxx, FC=mpif90 (or
whatever the names of your MPI wrappers are) when building demos.
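
Concretely, the environment and configure step might look like the sketch below. The wrapper names mpicc/mpicxx/mpif90 are assumptions (substitute your site's wrappers), and the cmake invocation is left commented since the source path is site-specific.

```shell
# Export the MPI compiler wrappers so that DOLFIN's demos (and any other
# CMake-based builds) pick them up instead of the plain compilers.
export CC=mpicc
export CXX=mpicxx
export FC=mpif90

# Configure DOLFIN itself with the wrappers and disable MPI auto-detection:
# cmake -D CMAKE_C_COMPILER:FILEPATH=$CC \
#       -D CMAKE_CXX_COMPILER:FILEPATH=$CXX \
#       -D CMAKE_Fortran_COMPILER:FILEPATH=$FC \
#       -D DOLFIN_AUTO_DETECT_MPI=false \
#       /path/to/dolfin
echo "CC=$CC CXX=$CXX FC=$FC"
```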

The demos should probably get the compiler from the DOLFIN CMake
config file, but they don't. For me, not using the MPI wrappers leads
to hangs on a system that has a customised MPI for an InfiniBand
interconnect.

Garth

It might be nice to set up some kind of forum (wiki, etc.) where cluster users can share their experiences setting up and running DOLFIN. In an ideal world, I could write my Python DOLFIN code on my laptop and then run it blazingly fast on a cluster. But life never works that way, and all clusters are different.

On Feb 1, 2013, at 12:15 AM, Garth Wells wrote:

> Are you using DOLFIN 1.1?

I run the latest dev version.

>
>> Should one really avoid using the Python interface on clusters, or is
>> it just difficult to configure correctly? I think this is important to
>> know before developing an application.
>>
>> May you give some more information?
>>
>
> The Python interface can work, but when getting started one problem is
> better than two, which is why I would recommend starting with the C++
> interface. There are some issues with Python loading shared MKL
> libraries, launching threads (which some systems do not allow), etc.
>
>> Which are the most sensible steps for a correct configuration?
>>
>> Which benchmark do you suggest to run to test?
>>
>> It would be nice if something about this appeared in the
>> documentation.
>>
>
> Unlikely, because it depends heavily on the configuration of the
> cluster.
>
> On a cluster, I suggest using the options
>
> -D CMAKE_CXX_COMPILER:FILEPATH=mpicxx \
> -D CMAKE_C_COMPILER:FILEPATH=mpicc \
> -D CMAKE_Fortran_COMPILER:FILEPATH=mpif90 \
> -D DOLFIN_AUTO_DETECT_MPI=false
>
> and make sure that you set CC=mpicc, CXX=mpicxx, FC=mpif90 (or
> whatever the names of your MPI wrappers are) when building the demos.
>
> The demos should probably get the compiler from the DOLFIN CMake
> config file, but they don't. For me, not using the MPI wrappers leads
> to hangs on a system that has a customised MPI for an Infiniband
> interconnect.
>
> Garth

Thanks a lot for the hints

I will do further tests following your suggestions and report the results ...
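For readers following along, the configuration steps Garth describes can be collected into a single sketch. All paths, the source-tree location, and the demo binary name below are assumptions; substitute whatever your cluster and build tree actually use:

```shell
# Sketch of a cluster build following the advice above. The MPI wrapper
# names (mpicc/mpicxx/mpif90) and paths are assumptions -- adjust them.
export CC=mpicc CXX=mpicxx FC=mpif90

cmake \
  -D CMAKE_CXX_COMPILER:FILEPATH=mpicxx \
  -D CMAKE_C_COMPILER:FILEPATH=mpicc \
  -D CMAKE_Fortran_COMPILER:FILEPATH=mpif90 \
  -D DOLFIN_AUTO_DETECT_MPI=false \
  /path/to/dolfin/source
make install

# Sanity check: the demo binary should link against the cluster's MPI,
# not a different MPI picked up during auto-detection.
ldd ./demo_poisson | grep -i mpi
```

The point of `DOLFIN_AUTO_DETECT_MPI=false` plus explicit wrapper compilers is to ensure both DOLFIN and the demos are built against the same (possibly interconnect-customised) MPI, which is exactly the mismatch that caused hangs in Garth's experience.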

>
>> Below my dolfin_configure.log. I use the latest dev version.
>>
>> ###########################
>> more dorsal_configure.log
>> -- The C compiler identification is GNU
>> -- The CXX compiler identification is GNU
>> -- Check for working C compiler: /usr/bin/gcc
>> -- Check for working C compiler: /usr/bin/gcc -- works
>> -- Detecting C compiler ABI info
>> -- Detecting C compiler ABI info - done
>> -- Check for working CXX compiler: /usr/bin/c++
>> -- Check for working CXX compiler: /usr/bin/c++ -- works
>> -- Detecting CXX compiler ABI info
>> -- Detecting CXX compiler ABI info - done
>> -- Performing Test HAVE_PIPE
>> -- Performing Test HAVE_PIPE - Success
>> -- Performing Test HAVE_PEDANTIC
>> -- Performing Test HAVE_PEDANTIC - Success
>> -- Performing Test HAVE_STD
>> -- Performing Test HAVE_STD - Success
>> -- Performing Test HAVE_DEBUG
>> -- Performing Test HAVE_DEBUG - Success
>> -- Performing Test HAVE_O2_OPTIMISATION
>> -- Performing Test HAVE_O2_OPTIMISATION - Success
>> -- The Fortran compiler identification is GNU
>> -- Check for working Fortran compiler: /usr/bin/gfortran
>> -- Check for working Fortran compiler: /usr/bin/gfortran -- works
>> -- Detecting Fortran compiler ABI info
>> -- Detecting Fortran compiler ABI info - done
>> -- Checking whether /usr/bin/gfortran supports Fortran 90
>> -- Checking whether /usr/bin/gfortran supports Fortran 90 -- yes
>> -- Found MPI_C: /share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share/apps/openmpi_amd64/lib/lib
>> open-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib6
>> 4/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so
>> -- Found MPI_CXX: /share/apps/openmpi_amd64/lib/libmpi_cxx.so;/share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/li
>> bopen-rte.so;/share/apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/
>> lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib64/libm.so
>> -- Found MPI_Fortran: /share/apps/openmpi_amd64/lib/libmpi_f90.so;/share/apps/openmpi_amd64/lib/libmpi_f77.so;/share/apps/openmpi_amd6
>> 4/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share/apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/l
>> ib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/l
>> ib64/libm.so
>> -- Try OpenMP C flag = [-fopenmp]
>> -- Performing Test OpenMP_FLAG_DETECTED
>> -- Performing Test OpenMP_FLAG_DETECTED - Success
>> -- Try OpenMP CXX flag = [-fopenmp]
>> -- Performing Test OpenMP_FLAG_DETECTED
>> -- Performing Test OpenMP_FLAG_DETECTED - Success
>> -- Found OpenMP: -fopenmp
>> -- Performing Test OPENMP_UINT_TEST_RUNS
>> -- Performing Test OPENMP_UINT_TEST_RUNS - Success
>> -- Boost version: 1.51.0
>> -- Found the following Boost libraries:
>> -- filesystem
>> -- program_options
>> -- system
>> -- thread
>> -- iostreams
>> -- math_tr1
>> -- mpi
>> -- serialization
>> -- timer
>> -- chrono
>> -- UFC version: 2.1.0+
>> -- Checking for package 'Armadillo'
>> -- Looking for Fortran sgemm
>> -- Looking for Fortran sgemm - found
>> -- Looking for include files CMAKE_HAVE_PTHREAD_H
>> -- Looking for include files CMAKE_HAVE_PTHREAD_H - found
>> -- Looking for pthread_create in pthreads
>> -- Looking for pthread_create in pthreads - not found
>> -- Looking for pthread_create in pthread
>> -- Looking for pthread_create in pthread - found
>> -- Found Threads: TRUE
>> -- A library with BLAS API found.
>> -- A library with BLAS API found.
>> -- A library with LAPACK API found.
>> -- Performing Test ARMADILLO_TEST_RUNS
>> -- Performing Test ARMADILLO_TEST_RUNS - Success
>> -- Found Armadillo: /share/users/common/FEniCS/tao/lib/libarmadillo.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml4
>> 4/gfortran64_mp/lib/libacml_mv.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml44/gfortran64_mp/lib/libacml_mv.so
>> -- Found LibXml2: /share/users/common/FEniCS/tao/lib/libxml2.so
>> -- Found PythonInterp: /share/users/common/FEniCS/tao/bin/python2.7 (Required is at least version "2")
>> -- Found PythonLibs: /share/users/common/FEniCS/tao/lib/libpython2.7.so (Required is at least version "2")
>> -- NumPy headers found
>> -- Found SWIG: /share/users/common/FEniCS/tao/bin/swig (found version "2.0.3")
>> -- Checking for package 'PETSc'
>> -- PETSC_DIR is /share/users/common/FEniCS/tao
>> -- PETSC_ARCH is empty
>> -- Found petscconf.h
>> -- Performing Test PETSC_TEST_RUNS
>> -- Performing Test PETSC_TEST_RUNS - Success
>> -- PETSc test runs
>> -- Performing Test PETSC_CUSP_FOUND
>> -- Performing Test PETSC_CUSP_FOUND - Failed
>> -- PETSc configured without Cusp support
>> -- Found PETSc: /share/users/common/FEniCS/tao/lib/libpetsc.so;/usr/lib64/libX11.so;/share/users/common/FEniCS/tao/lib/libcmumps.a;/sh
>> are/users/common/FEniCS/tao/lib/libdmumps.a;/share/users/common/FEniCS/tao/lib/libsmumps.a;/share/users/common/FEniCS/tao/lib/libzmump
>> s.a;/share/users/common/FEniCS/tao/lib/libmumps_common.a;/share/users/common/FEniCS/tao/lib/libpord.a;/share/users/common/FEniCS/tao/l
>> ib/libhwloc.so;/share/users/common/FEniCS/tao/lib/libscalapack.a;/share/users/common/FEniCS/tao/lib/libblacs.a;/share/users/common/FEn
>> iCS/tao/lib/libHYPRE.a;/share/users/common/FEniCS/tao/lib/libptesmumps.a;/share/users/common/FEniCS/tao/lib/libptscotch.a;/share/users
>> /common/FEniCS/tao/lib/libptscotcherr.a;/share/users/common/FEniCS/tao/lib/libumfpack.a;/share/users/common/FEniCS/tao/lib/libamd.a;/s
>> hare/apps/acml44/gfortran64_mp/lib/libacml_mp.so;/share/users/common/FEniCS/tao/lib/libnetcdf_c++.so;/share/users/common/FEniCS/tao/li
>> b/libnetcdf.so;/share/apps/openmpi_amd64/lib/libmpi_f90.so;/share/apps/openmpi_amd64/lib/libmpi_f77.so;/usr/lib/gcc/x86_64-redhat-linu
>> x/4.1.2/libgfortran.so;/usr/lib64/librt.so;/usr/lib64/libm.so;/usr/lib64/libz.so;/share/apps/openmpi_amd64/lib/libmpi_cxx.so;/usr/lib/
>> gcc/x86_64-redhat-linux/4.1.2/libstdc++.so;/share/apps/openmpi_amd64/lib/libmpi.so;/share/apps/openmpi_amd64/lib/libopen-rte.so;/share
>> /apps/openmpi_amd64/lib/libopen-pal.so;/usr/lib64/libibcm.so;/usr/lib64/librdmacm.so;/usr/lib64/libibverbs.so;/usr/lib64/libnuma.so;/u
>> sr/lib64/libnsl.so;/usr/lib64/libutil.so;/usr/lib/gcc/x86_64-redhat-linux/4.1.2/libgcc_s.so;/usr/lib64/libpthread.so;/usr/lib64/libdl.
>> so (Required is at least version "3.2")
>> -- Checking for package 'SLEPc'
>> -- SLEPC_DIR is SLEPC_DIR-NOTFOUND
>> -- SLEPc could not be found. Be sure to set SLEPC_DIR, PETSC_DIR, and PETSC_ARCH. (missing: SLEPC_LIBRARIES SLEPC_DIR SLEPC_INCLUDE_DIRS SLEPC_TEST_RUNS SLEPC_VERSION SLEPC_VERSION_OK) (Required is at least version "3.2")
>> -- Checking for package 'TAO'
>> -- TAO_DIR is /share/users/common/FEniCS/src/tao-2.1-p0
>> -- TAO could not be found. Be sure to set TAO_DIR, PETSC_DIR, and PETSC_ARCH. (missing: TAO_LIBRARIES TAO_TEST_RUNS)
>> -- ParMETIS could not be found/configured. (missing: PARMETIS_LIBRARIES PARMETIS_TEST_RUNS PARMETIS_INCLUDE_DIRS PARMETIS_VERSION PARMETIS_VERSION_OK) (Required is at least version "4.0.2")
>> -- Checking for package 'SCOTCH-PT'
>> -- Found SCOTCH (version 5.1.12)
>> -- Performing test SCOTCH_TEST_RUNS
>> -- Performing test SCOTCH_TEST_RUNS - Success
>> -- Found SCOTCH: /share/users/common/FEniCS/tao/lib/libptscotch.a;/share/users/common/FEniCS/tao/lib/libptscotcherr.a
>> -- Checking for package 'AMD'
>> -- Checking for package 'UMFPACK'
>> -- Checking for package 'AMD'
>> -- Checking for package 'CHOLMOD'
>> -- Checking for package 'AMD'
>> -- A library with BLAS API found.
>> -- Performing Test CHOLMOD_TEST_RUNS
>> -- Performing Test CHOLMOD_TEST_RUNS - Success
>> -- Performing Test UMFPACK_TEST_RUNS
>> -- Performing Test UMFPACK_TEST_RUNS - Success
>> -- Checking for package 'CHOLMOD'
>> -- Checking for package 'AMD'
>> -- A library with BLAS API found.
>> -- Found HDF5: debug;/share/users/common/FEniCS/tao/lib/libhdf5.so;debug;/usr/lib64/libpthread.so;debug;/usr/lib64/libz.so;debug;/usr/
>> lib64/librt.so;debug;/usr/lib64/libm.so;optimized;/share/users/common/FEniCS/tao/lib/libhdf5.so;optimized;/usr/lib64/libpthread.so;opt
>> imized;/usr/lib64/libz.so;optimized;/usr/lib64/librt.so;optimized;/usr/lib64/libm.so
>> -- PASTIX_LIBRARIES /share/users/common/FEniCS/tao/lib/libpastix.a;/usr/lib64/librt.so;/usr/lib64/libm.so
>> -- A library with BLAS API found.
>> -- Performing Test PASTIX_TEST_RUNS
>> -- Performing Test PASTIX_TEST_RUNS - Success
>> -- Found PASTIX: /share/users/common/FEniCS/tao/lib/libpastix.a;/usr/lib64/librt.so;/usr/lib64/libm.so;/share/users/common/FEniCS/tao/
>> lib/libhwloc.so;/share/users/common/FEniCS/tao/lib/libacml.so;/share/apps/acml44/gfortran64_mp/lib/libacml_mv.so;/usr/lib/gcc/x86_64-r
>> edhat-linux/4.1.2/libgfortran.so
>> -- Checking for Trilinos
>> -- Unable to find Trilinos (>= 11.0.0)
>> -- Trilinos could not be found
>> -- Checking for package 'CGAL'
>> -- Performing Test CGAL_TEST_RUNS
>> -- Performing Test CGAL_TEST_RUNS - Success
>> -- Found CGAL: /share/users/common/FEniCS/tao/lib/libCGAL.so;/share/users/common/FEniCS/tao/lib/libboost_thread.so;/share/users/common/FEniCS/tao/lib/libboost_system.so;/share/users/common/FEniCS/tao/lib/libgmp.so;/share/users/common/FEniCS/tao/lib/libmpfr.so (Required is at least version "3.8")
>> -- Found ZLIB: /usr/lib64/libz.so (found version "1.2.3")
>> -- checking for module 'cppunit'
>> -- package 'cppunit' not found
>> -- CPPUNIT could not be found. Be sure to set CPPUNIT_DIR. (missing: CPPUNIT_LIBRARIES CPPUNIT_INCLUDE_DIRS)
>> -- Checking for package 'Sphinx'
>> -- Could NOT find Sphinx (missing: SPHINX_EXECUTABLE SPHINX_VERSION_OK) (Required is at least version "1.0.7")
>> -- Could NOT find Qt4 (missing: QT_QMAKE_EXECUTABLE QT_MOC_EXECUTABLE QT_RCC_EXECUTABLE QT_UIC_EXECUTABLE QT_INCLUDE_DIR QT_LIBRARY_DIR QT_QTCORE_LIBRARY)
>> -- Found VTK: /share/users/common/FEniCS/tao/lib/vtk-5.8 (found version "5.8")
>> --
>> -- The following optional packages were found:
>> -- -------------------------------------------
>> -- (OK) OPENMP
>> -- (OK) MPI
>> -- (OK) PETSC
>> -- (OK) UMFPACK
>> -- (OK) CHOLMOD
>> -- (OK) PASTIX
>> -- (OK) SCOTCH
>> -- (OK) CGAL
>> -- (OK) ZLIB
>> -- (OK) PYTHON
>> -- (OK) HDF5
>> -- (OK) VTK
>> --
>> -- The following optional packages were not be found:
>> -- --------------------------------------------------
>> -- (**) SLEPC
>> -- (**) TAO
>> -- (**) TRILINOS
>> -- (**) PARMETIS
>> -- (**) SPHINX
>> -- (**) QT
>> --
>> -- Could NOT find Qt4 (missing: QT_QMAKE_EXECUTABLE QT_MOC_EXECUTABLE QT_RCC_EXECUTABLE QT_UIC_EXECUTABLE QT_INCLUDE_DIR QT_LIBRARY_DIR QT_QTCORE_LIBRARY)
>> -- QT not found, or QT/VTK not enabled in DOLFIN. Not building demo_plot-qt
>> -- Disabling generation of documentation because Sphinx is missing.
>> -- Configuring done
>> -- Generating done
>> -- Build files have been written to: /share/users/common/FEniCS/src/tao/dorsal_build_dir
>>
>> On Jan 31, 2013, at 10:01 PM, Garth Wells wrote:
>>
>>> Question #220684 on DOLFIN changed:
>>> https://answers.launchpad.net/dolfin/+question/220684
>>>
>>> Garth Wells posted a new comment:
>>> On 31 January 2013 20:50, Paul Constantine
>>> <email address hidden> wrote:
>>>> Question #220684 on DOLFIN changed:
>>>> https://answers.launchpad.net/dolfin/+question/220684
>>>>
>>>> Status: Open => Solved
>>>>
>>>> Paul Constantine confirmed that the question is solved:
>>>> Okay, that seemed to do the trick. No more hanging. Any ideas why it was
>>>> happening? Things to avoid going forward?
>>>>
>>>
>>> On a parallel machine, I'd suggest starting with C++ (there can be
>>> some weird Python issues on parallel machines, especially if you use
>>> Intel MKL, which you don't want to worry about until other issues are
>>> ironed out) and using a lower-level interface for the assembly and
>>> solve, e.g.
>>>
>>> PETScMatrix A;
>>> PETScVector b;
>>> assemble_system(A, b, a, L, bcs);
>>>
>>> PETScLUSolver lu("mumps");
>>> lu.solve(A, x, b);
>>>
>>> PETScPreconditioner pc("hypre_amg");
>>> pc.parameters["report"] = true;
>>>
>>> PETScKrylovSolver krylov_solver("gmres", pc);
>>> krylov_solver.parameters["report"] = true;
>>> krylov_solver.parameters["monitor_convergence"] = true;
>>>
>>> krylov_solver.set_operator(A);
>>> krylov_solver.solve(x, b);
>>>
>>> Garth
>>>
>>>
>>>> --
>>>> You received this question notification because you are a member of
>>>> DOLFIN Team, which is an answer contact for DOLFIN.
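As a closing note for readers who land here with the same hang: before digging into DOLFIN itself, it can help to confirm that the MPI stack works on its own. A minimal smoke test might look like the following (the command names are the usual Open MPI ones and may differ on your cluster):

```shell
# Run a trivial non-MPI program under the same mpiexec used for
# demo_poisson. If even this hangs, the problem lies in the MPI
# installation or interconnect configuration, not in DOLFIN.
mpiexec -n 2 hostname

# Confirm which mpiexec is first on PATH and which MPI it belongs to,
# to rule out mixing two MPI installations.
which mpiexec
mpiexec --version
```

If the smoke test passes but DOLFIN still hangs, the wrapper-compiler advice earlier in the thread (building both DOLFIN and the demos with mpicc/mpicxx/mpif90) is the next thing to check.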