Unsure if MPI is working for escript on Swinburne's Swinstar supercomputer

Asked by Louise Olsen-Kettle

I wanted to check whether MPI is working for escript on Swinburne's Swinstar supercomputer. This is part of the output when I submit a job:

Warning: MPI disabled but number of processors per node set. Option ignored.

I have submitted a job using this script on the Swinstar supercomputer:
#!/bin/csh
# specify the queue name
#PBS -q gstar
# resource requests
#PBS -l nodes=1:ppn=12
#PBS -l walltime=01:00:00:00

# list the assigned CPUs and GPUs
echo Deploying job to CPUs ...
cat $PBS_NODEFILE
echo and using GPU ...
cat $PBS_GPUFILE

echo Working directory is $PBS_O_WORKDIR
cd $PBS_O_WORKDIR

# run process

module purge
module load escript/x86_64/gnu/5.1
run-escript -n1 -p12 -t4 CubeCompression.py

This is the job output:

Deploying job to CPUs ...
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
and using GPU ...
Working directory is /lustre/projects/p127_swin/ImplicitGradientModel/delVisoConcreteInterpolation/LocalDamageMeshDependent/lz_20_5
Warning: MPI disabled but number of processors per node set. Option ignored.
(21, 21, 21) 0.005 0.005 0.005

Lutz Gross (l-gross) said (#1):

The message suggests that escript has not been installed with MPI.
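
A quick way to confirm this is to ask escript itself how many MPI ranks it was started with. The minimal sketch below only uses the standard esys.escript calls getMPISizeWorld() and getMPIRankWorld(); the file name check_mpi.py is just a placeholder.

# check_mpi.py - report how many MPI ranks this escript run can see
from esys.escript import getMPISizeWorld, getMPIRankWorld

# An MPI-enabled build launched with run-escript -p12 reports a world size of 12;
# a build without MPI support always reports a world size of 1.
print("MPI world size:", getMPISizeWorld())
print("This rank:", getMPIRankWorld())

If run-escript -n1 -p12 check_mpi.py still prints a world size of 1 (alongside the same warning), the installed escript/5.1 module appears to have been built without MPI support, so the -p option is ignored and only the OpenMP thread count set with -t takes effect.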
