Unsure if MPI is working for eScript on Swinburne's Swinstar supercomputer
I want to check whether MPI is working for eScript on Swinburne's Swinstar supercomputer. This is part of the output when I submit a job:
Warning: MPI disabled but number of processors per node set. Option ignored.
I submitted the job on Swinstar using this script:
#!/bin/csh
# specify the queue name
#PBS -q gstar
# resource requests
#PBS -l nodes=1:ppn=12
#PBS -l walltime=
# list the assigned CPUs and GPUs
echo Deploying job to CPUs ...
cat $PBS_NODEFILE
echo and using GPU ...
cat $PBS_GPUFILE
echo Working directory is $PBS_O_WORKDIR
cd $PBS_O_WORKDIR
# run process
module purge
module load escript/
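# run escript requesting 1 node, 12 MPI processes per node and 4 OpenMP threads per process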
run-escript -n1 -p12 -t4 CubeCompression.py
This is the job output:
Deploying job to CPUs ...
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
gstar022
and using GPU ...
Working directory is /lustre/
Warning: MPI disabled but number of processors per node set. Option ignored.
(21, 21, 21) 0.005 0.005 0.005
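To see how many MPI ranks escript actually starts, I could submit a minimal check script the same way (this is only a sketch; it assumes the standard esys.escript helpers getMPISizeWorld(), getMPIRankWorld() and getNumberOfThreads() are available in this build, and mpi_check.py is just a hypothetical file name):

# mpi_check.py - print what escript was started with (sketch)
from esys.escript import getMPISizeWorld, getMPIRankWorld, getNumberOfThreads

# each MPI rank prints one line; with a working MPI build and -p12
# there should be 12 distinct ranks reported here
print("rank %d of %d MPI processes, %d OpenMP threads per process"
      % (getMPIRankWorld(), getMPISizeWorld(), getNumberOfThreads()))

If the build really has MPI disabled, getMPISizeWorld() should report 1 regardless of the -p setting, which would be consistent with the warning above.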