PY8_parallelization error

Asked by Shubhani Jain on 2020-08-10

Hi

I have been trying to run MadGraph on a PBS cluster, but every time the compilation stops with a Pythia8 parallelisation error. I am attaching the error and output files from the cluster run, as well as the debug file from MadGraph:

https://www.dropbox.com/sh/lz7zwgsd734ief0/AACtVfaw2O4O4EXzNpnzeoTja?dl=0

Please let me know how to rectify this; for now I have to run "shower pythia8 run_01" manually.

Thanks
Shubhani

Question information

Language:
English
Status:
Answered
For:
MadGraph5_aMC@NLO
Assignee:
No assignee
Last query:
2020-08-10
Last reply:
2020-08-14

Do I understand correctly that you are not running MadGraph in its PBS mode, but in its single-node mode (while submitting that single-node job to PBS and requesting a full node)?

Does it work on your front-end? Do you see any more detailed log concerning Pythia in the Events/run_01 directory?
My first guess is that this is a dynamic-library issue on the node, but I could be wrong.
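One way to check that dynamic-library guess is to run ldd against the Pythia8 interface binary on a compute node. This is a sketch: the path below assumes the default layout of an MG5_aMC installation and may need adjusting.

```shell
# Sketch: verify on the compute node that the PY8 interface binary can
# resolve all of its shared libraries. The path assumes a default
# MG5_aMC installation layout; adjust it to your own.
BIN=./HEPTools/MG5aMC_PY8_interface/MG5aMC_PY8_interface
if [ -x "$BIN" ]; then
    # any "not found" line here points at a missing runtime library
    ldd "$BIN" | grep "not found" || echo "all shared libraries resolved"
else
    echo "binary not found at $BIN (adjust the path)"
fi
```

Running this inside the PBS job (rather than on the front-end) matters, since the nodes may have different system libraries installed.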

Cheers,

Olivier

Shubhani Jain (s2697661) said : #2

Hi Olivier

Many thanks for the reply.

There is a tag_1_pythia8.log in the Events/run_01 directory; do you want me to upload it?

What I do is open the MG5 interface with ./bin/mg5_aMC, then generate the process and output the directory. After that I quit the interface, open the directory, and edit the cards. Then I submit the job to the PBS cluster. Here are my scripts:

madscript.sh:

  launch 4jets7
  shower=Pythia8
  madspin=OFF
  detector=OFF
  analysis=OFF
  done
  done
  exit

script for submitting to cluster:

#PBS -S /bin/bash

#PBS -l nodes=1:ppn=1,walltime=0:30:00
#PBS -N MA5_test
#PBS -m abe
#PBS -M <email address hidden>
#PBS -q batch

cd $PBS_O_WORKDIR
./bin/mg5_aMC mad_script1.sh
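If the node's default compiler runtime turns out to be older than the one MadGraph was built with, one common workaround is to load a matching compiler inside the job script before invoking MadGraph. This is a hedged variant of the script above, assuming the cluster provides environment modules; the module name gcc/8.3.0 is a placeholder, so check "module avail gcc" on your cluster first.

```shell
#PBS -S /bin/bash
#PBS -l nodes=1:ppn=1,walltime=0:30:00
#PBS -N MA5_test
#PBS -q batch

# Placeholder module name: pick the gcc version MadGraph was compiled with.
module load gcc/8.3.0

cd "$PBS_O_WORKDIR"
./bin/mg5_aMC mad_script1.sh
```

The front-end and the compute nodes can have different default environments, which is why the module load belongs inside the job script rather than in your login shell.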

I haven't changed any settings in the MadGraph directory, and my senior uses the same script and process; his Pythia8 parallelisation seems to work fine.

Thanks
Shubhani

Shubhani Jain (s2697661) said : #3

Hi Olivier

After I changed the settings in the me5_configuration file in my 4jets7 directory I was able to get past the PY8_parallelization error, but now it gives me another error:
decay_events -from_cards
INFO: Running Pythia8 [arXiv:1410.3012]
using cluster: pbs
Splitting .lhe event file for PY8 parallelization...
Submitting Pythia8 jobs...
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [0 second]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [30 seconds]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [1m00s]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [1m30s]
INFO: All jobs finished
Pythia8 shower jobs: 0 Idle, 0 Running, 1 Done [2m00s]
Merging results from the split PY8 runs...
Fail to produce a pythia8 output. More info in
     /scratch/sj1n19/MG5/MG5_aMC_v2_7_2/check/Events/run_01/tag_1_pythia8.log
INFO: storing files of previous run
gzipping output file: unweighted_events.lhe
INFO: Done
quit
INFO:
more information in /scratch/sj1n19/MG5/MG5_aMC_v2_7_2/check/index.html
exit
quit

Do you know the reason behind it?

Regards
Shubhani

Shubhani Jain (s2697661) said : #4

Hi Olivier

Sorry for the multiple comments. I am attaching the Pythia log file, as well as the error and output files from the cluster run, for the directory "check" (process p p > b b~ b b~) for the error:
decay_events -from_cards
INFO: Running Pythia8 [arXiv:1410.3012]
using cluster: pbs
Splitting .lhe event file for PY8 parallelization...
Submitting Pythia8 jobs...
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [0 second]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [30 seconds]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [1m00s]
Pythia8 shower jobs: 1 Idle, 0 Running, 0 Done [1m30s]
INFO: All jobs finished
Pythia8 shower jobs: 0 Idle, 0 Running, 1 Done [2m00s]
Merging results from the split PY8 runs...
Fail to produce a pythia8 output. More info in
     /scratch/sj1n19/MG5/MG5_aMC_v2_7_2/check/Events/run_01/tag_1_pythia8.log
INFO: storing files of previous run
gzipping output file: unweighted_events.lhe
INFO: Done
quit
INFO:
more information in /scratch/sj1n19/MG5/MG5_aMC_v2_7_2/check/index.html
exit
quit

https://www.dropbox.com/sh/lz7zwgsd734ief0/AACtVfaw2O4O4EXzNpnzeoTja?dl=0

Regards
Shubhani

The log is quite clear:
./MG5aMC_PY8_interface: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ./MG5aMC_PY8_interface)
./MG5aMC_PY8_interface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by ./MG5aMC_PY8_interface)
./MG5aMC_PY8_interface: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by ./MG5aMC_PY8_interface)
./MG5aMC_PY8_interface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by ./MG5aMC_PY8_interface)
./MG5aMC_PY8_interface: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by ./MG5aMC_PY8_interface)

You have a configuration error with your gcc libraries: the libstdc++ runtime on the node is too old and does not provide the GLIBCXX/CXXABI symbol versions required by the binary.
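The missing symbol versions can be confirmed by listing what the node's libstdc++ actually exports. This is a sketch; the library path is taken from the error messages above.

```shell
# List the GLIBCXX/CXXABI symbol versions the system libstdc++ exports.
# If the versions named in the error (e.g. GLIBCXX_3.4.20, CXXABI_1.3.8)
# are absent, the runtime predates the compiler used to build
# MG5aMC_PY8_interface.
LIB=/usr/lib64/libstdc++.so.6
if [ -e "$LIB" ]; then
    strings "$LIB" | grep -E '^(GLIBCXX|CXXABI)_' | sort -u
else
    echo "no libstdc++ at $LIB"
fi
```

Comparing this list between the front-end (where the build worked) and a compute node usually pinpoints the mismatch.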

Cheers,

Olivier

Shubhani Jain (s2697661) said : #6

Thanks Olivier, I will pass this info to the cluster team so that they can tell me which version of gcc is suitable.
