NLO QCD process for WWjj failed

Asked by Lebohang

Dear Authors,

I would like to generate the NLO QCD correction to the process of WWjj->lvlvjj.

 What I did was:

generate p p > e+ ve mu- vm~ j j [QCD]
add process p p > mu+ vm e- ve~ j j [QCD]

Generating the FKS-subtracted matrix elements for the Born process is relatively quick (about 30 minutes), while generating the virtual matrix elements with MadLoop takes a few hours; this step worked fine and completed. After this was complete I then ran,

output ppWWjj

The problem occurs when I try to create the corresponding output for MadGraph5_aMC@NLO.

The generation of matrix elements during "output" stops at 88/240. Since it simply stops, there is no error message and no log file I can use to diagnose the issue.

My colleagues and I tried running the process on different machines, and the output still stops in the same way.

Note: I am using:
macOS Mojave Version 10.14,
MG5_aMC_v2_6_5,
GNU Fortran (GCC) 6.2.0,
Python 2.7.15

Any advice on how to resolve this would be greatly appreciated.

Kind regards,

Lebohang.

Question information

Language: English
Status: Answered
For: MadGraph5_aMC@NLO
Assignee: marco zaro
Revision history for this message
marco zaro (marco-zaro) said :
#1

Hi,
this may be related to the process using too much memory. A more memory-efficient generation mode is available for complicated processes.

Try setting
set low_mem_multicore_nlo_generation True
in the MG5_aMC shell before generate and output.

You may also want to limit the number of cores to X with
set nb_core X
because this generation mode also runs on many cores, and with too many cores the memory consumption can again become too large.
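Putting the two settings together, a minimal command sequence would look like the sketch below (the output directory name is just an example):

```
set low_mem_multicore_nlo_generation True
set nb_core 4
generate p p > e+ ve mu- vm~ j j [QCD]
add process p p > mu+ vm e- ve~ j j [QCD]
output ppWWjj
```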

You can find many details on the simulation of WWjj with MG5_aMC@NLO (and other codes) here: arXiv:1803.07943

In case you have further issues, just let me know

Best,

Marco

Lebohang (lebohang0405) said :
#2

Good day Marco,

Regarding the previously stated problem, I tried the "set low_mem_multicore_nlo_generation True" command you recommended. Generating this process was indeed a memory issue that this command should solve; however, when I use it, the generation fails and no events are produced.

I received the following error while trying to generate the process,

Command "output ppWWjj_NLO" interrupted with error:
TypeError : Pool() got an unexpected keyword argument 'maxtasksperchild'
Please report this bug on https://bugs.launchpad.net/mg5amcnlo
More information is found in 'MG5_debug'.
Please attach this file to your report.

It seems the process is not actually being generated at the generate step, before output. I also followed previously answered questions similar to mine to check whether I was doing something wrong, but I still could not generate any events. Further assistance would be greatly appreciated.

Kind regards,

Lebohang.

I have copied the MG5_debug file below,

#************************************************************
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.6.5 2018-02-03 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* *
#************************************************************
#* *
#* Command File for MadGraph5_aMC@NLO *
#* *
#* run as ./bin/mg5_aMC filename *
#* *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set low_mem_multicore_nlo_generation False
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
set low_mem_multicore_nlo_generation True
set nb_core 4
save options
import model loop_sm
generate p p > e+ ve mu- vm~ j j [QCD]
add process p p > mu+ vm e- ve~ j j [QCD]
output ppWWjj_NLO
Traceback (most recent call last):
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1514, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1463, in onecmd_orig
    return func(arg, **opt)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 292, in do_output
    self.cmd.do_output(self, line, *args, **opts)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 553, in do_output
    self.export(nojpeg, main_file_name, group_processes=group_processes)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 306, in export
    return self.cmd.export(self, *args, **opts)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 648, in export
    ndiags, cpu_time = generate_matrix_elements(self, group=group_processes)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 596, in generate_matrix_elements
    loop_optimized= self.options['loop_optimized_output'])
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/fks/fks_helas_objects.py", line 284, in __init__
    pool = multiprocessing.Pool(processes=fksmulti['ncores_for_proc_gen'],maxtasksperchild=1)
TypeError: Pool() got an unexpected keyword argument 'maxtasksperchild'
                          MadGraph5_aMC@NLO Options
                          ----------------
        complex_mass_scheme : False
    default_unset_couplings : 99
                      gauge : unitary
         group_subprocesses : Auto
  ignore_six_quark_processes : False
           loop_color_flows : False
      loop_optimized_output : True
  low_mem_multicore_nlo_generation : True (user set)
     max_npoint_for_channel : 0
               stdout_level : 20 (user set)

                         MadEvent Options
                          ----------------
     automatic_html_opening : True
                    nb_core : 4 (user set)
        notification_center : True
                   run_mode : 2

                      Configuration Options
                      ---------------------
                        OLP : MadLoop
                    amcfast : amcfast-config
                   applgrid : applgrid-config
                auto_update : 7
         cluster_local_path : None
           cluster_nb_retry : 1
              cluster_queue : None (user set)
         cluster_retry_wait : 300
               cluster_size : 100
      cluster_status_update : (600, 30)
          cluster_temp_path : None
               cluster_type : condor
                    collier : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/collier (user set)
               cpp_compiler : None
             crash_on_error : False
               delphes_path : ./Delphes
                 eps_viewer : None
        exrootanalysis_path : ./ExRootAnalysis
              f2py_compiler : None
                    fastjet : fastjet-config
           fortran_compiler : None
                      golem : None (user set)
                 hepmc_path : None (user set)
                  hwpp_path : None (user set)
                     lhapdf : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/lhapdf6/bin/lhapdf-config (user set)
          madanalysis5_path : None (user set)
           madanalysis_path : ./MadAnalysis
  mg5amc_py8_interface_path : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/MG5aMC_PY8_interface (user set)
                      ninja : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/lib (user set)
        output_dependencies : external
                      pjfry : None (user set)
            pythia-pgs_path : ./pythia-pgs
               pythia8_path : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/pythia8 (user set)
                    samurai : None
               syscalc_path : ./SysCalc
                    td_path : ./td
                text_editor : None
                thepeg_path : None (user set)
                    timeout : 60
                web_browser : None

marco zaro (marco-zaro) said :
#3

Dear Lebohang,
it looks like an issue related to the version of Python you are using. Can you check the version number? Normally it should be fine with Python 2.7...
Best wishes,

Marco

Lebohang (lebohang0405) said :
#4

Hello again Marco,

I'm running Python 2.7.15 along with GNU Fortran (GCC) 6.2.0. I can generate processes at LO just fine and obtain an output. It is only when I use the "low_mem_multicore_nlo_generation" option and add "[QCD]" to a process, e.g. generate p p > e+ ve mu- vm~ j j [QCD], that mg5_aMC fails to generate the process and gives the previous error.

Kind regards,
Lebohang.

Olivier Mattelaer (olivier-mattelaer) said :
#5

Hi,

I can confirm that this error is a pure Python 2.6 issue.
Do you have Python 2.6 on that machine? Is it the default version?
That might explain the issue...
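For background, the failing keyword can be illustrated with a small compatibility sketch (this is not MG5_aMC code; make_pool is a hypothetical helper):

```python
import multiprocessing

def make_pool(ncores):
    """Hypothetical helper: build a worker pool, recycling workers if supported."""
    try:
        # maxtasksperchild was added to multiprocessing.Pool in Python 2.7;
        # it makes each worker exit after one task, which caps memory growth
        # during a long generation.
        return multiprocessing.Pool(processes=ncores, maxtasksperchild=1)
    except TypeError:
        # Python 2.6's Pool() has no such keyword and raises exactly the
        # TypeError quoted in the log above.
        return multiprocessing.Pool(processes=ncores)
```

So the traceback is consistent with a Python 2.6 interpreter being picked up somewhere along the way.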

Cheers,

Olivier

Lebohang (lebohang0405) said :
#6

Good day Olivier,

I checked the Python version I am running, and it is indeed Python 2.7.15.

echo $PYTHONPATH
/cvmfs/sft.cern.ch/lcg/views/LCG_94/x86_64-slc6-gcc62-opt/lib:/cvmfs/sft.cern.ch/lcg/views/LCG_94/x86_64-slc6-gcc62-opt/lib/python2.7/site-packages

python --version
Python 2.7.15

So I ran MadGraph using "python bin/mg5_aMC" instead of "./bin/mg5_aMC", and I still get the same error,

Command "output ppWWjj_NLO" interrupted with error:
TypeError : Pool() got an unexpected keyword argument 'maxtasksperchild'
Please report this bug on https://bugs.launchpad.net/mg5amcnlo
More information is found in 'MG5_debug'.
Please attach this file to your report.

Here is the MG5_debug file,

#************************************************************
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.6.5 2018-02-03 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* *
#************************************************************
#* *
#* Command File for MadGraph5_aMC@NLO *
#* *
#* run as ./bin/mg5_aMC filename *
#* *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set low_mem_multicore_nlo_generation True
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
import model loop_sm
generate p p > e+ ve mu- vm~ j j [QCD]
add process p p > mu+ vm e- ve~ j j [QCD]
output ppWWjj_NLO
Traceback (most recent call last):
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1514, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1463, in onecmd_orig
    return func(arg, **opt)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 292, in do_output
    self.cmd.do_output(self, line, *args, **opts)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 553, in do_output
    self.export(nojpeg, main_file_name, group_processes=group_processes)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 306, in export
    return self.cmd.export(self, *args, **opts)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 648, in export
    ndiags, cpu_time = generate_matrix_elements(self, group=group_processes)
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 596, in generate_matrix_elements
    loop_optimized= self.options['loop_optimized_output'])
  File "/afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/madgraph/fks/fks_helas_objects.py", line 284, in __init__
    pool = multiprocessing.Pool(processes=fksmulti['ncores_for_proc_gen'],maxtasksperchild=1)
TypeError: Pool() got an unexpected keyword argument 'maxtasksperchild'
                          MadGraph5_aMC@NLO Options
                          ----------------
        complex_mass_scheme : False
    default_unset_couplings : 99
                      gauge : unitary
         group_subprocesses : Auto
  ignore_six_quark_processes : False
           loop_color_flows : False
      loop_optimized_output : True
  low_mem_multicore_nlo_generation : True (user set)
     max_npoint_for_channel : 0
               stdout_level : 20 (user set)

                         MadEvent Options
                          ----------------
     automatic_html_opening : False (user set)
                    nb_core : 4 (user set)
        notification_center : True
                   run_mode : 2

                      Configuration Options
                      ---------------------
                        OLP : MadLoop
                    amcfast : amcfast-config
                   applgrid : applgrid-config
                auto_update : 7
         cluster_local_path : None
           cluster_nb_retry : 1
              cluster_queue : madgraph (user set)
         cluster_retry_wait : 300
               cluster_size : 150 (user set)
      cluster_status_update : (600, 30)
          cluster_temp_path : None
               cluster_type : condor
                    collier : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/lib (user set)
               cpp_compiler : gcc (user set)
             crash_on_error : False
               delphes_path : ./Delphes
                 eps_viewer : None
        exrootanalysis_path : ./ExRootAnalysis
              f2py_compiler : None
                    fastjet : fastjet-config
           fortran_compiler : gfortran (user set)
                      golem : None (user set)
                 hepmc_path : None (user set)
                  hwpp_path : None (user set)
                     lhapdf : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/lhapdf6/bin/lhapdf-config (user set)
          madanalysis5_path : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/madanalysis5/madanalysis5 (user set)
           madanalysis_path : ./MadAnalysis
  mg5amc_py8_interface_path : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/MG5aMC_PY8_interface (user set)
                      ninja : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/lib (user set)
        output_dependencies : external
                      pjfry : None (user set)
            pythia-pgs_path : ./pythia-pgs
               pythia8_path : /afs/cern.ch/work/l/lmokoena/public/local/MG5_aMC_v2_6_5/HEPTools/pythia8 (user set)
                    samurai : None
               syscalc_path : ./SysCalc
                    td_path : ./td
                text_editor : None
                thepeg_path : None (user set)
                    timeout : 60
                web_browser : None

Olivier Mattelaer (olivier-mattelaer) said :
#7

Hi,

I thought that you were running on a Mac...
I actually do not know how to select Python 2.7 on lxplus...
[lxplus047 ~]$ module av Python
does not return anything...
Could you show me how?

But lxplus certainly has Python 2.6 as its default, and it looks like at some point the code switches back to Python 2.6.

Cheers,

Olivier

marco zaro (marco-zaro) said :
#8

Hi,
Perhaps this link can be of help
https://sft.its.cern.ch/jira/si/jira.issueviews:issue-html/SPI-969/SPI-969.html
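For reference, on lxplus a newer Python can typically be picked up by sourcing an LCG view before launching MG5_aMC. A sketch, assuming the LCG_94 view already present in your PYTHONPATH provides a setup script at the usual location:

```
# Make Python 2.7 from the LCG_94 release the default for this shell
source /cvmfs/sft.cern.ch/lcg/views/LCG_94/x86_64-slc6-gcc62-opt/setup.sh
python --version   # should now report Python 2.7.x
./bin/mg5_aMC
```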

On Tue, 19 Mar 2019, 12:18 Olivier Mattelaer, <
<email address hidden>> wrote:

> Question #679049 on MadGraph5_aMC@NLO changed:
> https://answers.launchpad.net/mg5amcnlo/+question/679049
>
> Olivier Mattelaer proposed the following answer:
> Hi,
>
> I thought that you were running on mac...
> I actually do not know how to set python2.7 on lxplus...
> [lxplus047 ~]$ module av Python
> does not return anything...
> Could you teach me on that?
>
> But for sure it has python2.6 has default and looks like at some point
> the code switch back to python2.6.
>
> Cheers,
>
> Olivier
>
> --
> You received this question notification because you are subscribed to
> the question.
>

Lebohang (lebohang0405) said :
#9

Good day Olivier and Marco,

Thank you for the previous replies. lxplus causes a few too many issues when running certain processes with MadGraph, so I am now running it locally on my PC.

I hate to be a bother, but while the "set low_mem_multicore_nlo_generation True" command greatly reduces the time and memory needed to generate a process with NLO corrections, creating the corresponding output for the WWjj process still fails. It breaks during the output command while collecting infos and finalizing the matrix elements...

INFO: Generating born process: s s > mu+ vm e- ve d d~ [ all = QCD ]
INFO: Generating born process: d s > mu+ vm e- ve d s~ [ all = QCD ]
INFO: Generating born process: s s > mu+ vm e- ve s s~ [ all = QCD ]
INFO: Generating born process: s g > mu+ vm e- ve s~ g [ all = QCD ]
INFO: Generating born process: s u > mu+ vm e- ve s u~ [ all = QCD ]
INFO: Generating born process: s u > mu+ vm e- ve s~ u [ all = QCD ]
INFO: Generating born process: s u > mu+ vm e- ve d c~ [ all = QCD ]
INFO: Generating born process: s c > mu+ vm e- ve s c~ [ all = QCD ]
INFO: Generating born process: s c > mu+ vm e- ve u d~ [ all = QCD ]
INFO: Generating born process: s c > mu+ vm e- ve s~ c [ all = QCD ]
INFO: Generating born process: s d > mu+ vm e- ve s d~ [ all = QCD ]
INFO: Generating born process: s d > mu+ vm e- ve u c~ [ all = QCD ]
INFO: Generating born process: s s > mu+ vm e- ve s s~ [ all = QCD ]
INFO: Generating born process: s d > mu+ vm e- ve s~ d [ all = QCD ]
INFO: Collecting infos and finalizing matrix elements...
Command "output ppWWjj_NLO" interrupted with error:
ValueError : {
     'processes': [{
     'legs': [{
     'id': 4,
     'number': 1,
     'state': False,
     'from_group': True,
     'loop_line': False,
     'onshell': None,
     'fks': 'j',
     'color': 3,
     'charge': 0.67,
     'massless': True,
     'spin': 2,
     'is_part': True,
     'self_antipart': False
 }, {

This might be a Python-related issue, since the error in the MG5_debug file points to multiprocessing/pool.py. However, my colleague tried generating this process and received the same error. I have copied the file below.
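As an aside on why the traceback blames pool.py: an exception raised inside a multiprocessing worker is re-raised in the parent by .get(). A minimal sketch of the mechanism (finalize is a hypothetical stand-in, not the actual MG5_aMC function):

```python
import multiprocessing

def finalize(matrix_element):
    # Stand-in for a worker task such as async_finalize_matrix_elements:
    # any exception raised inside a worker process...
    raise ValueError({"processes": matrix_element})

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=2)
    try:
        pool.map_async(finalize, ["dummy"]).get(9999999)
    except ValueError as err:
        # ...is pickled, shipped back to the parent, and re-raised inside
        # multiprocessing/pool.py's get(), so the traceback points at
        # pool.py even though the real failure happened in the worker code.
        print("re-raised in parent:", err)
    finally:
        pool.close()
        pool.join()
```

So the pool.py frame in the traceback is just the messenger; the ValueError itself originates in the worker that was finalizing a matrix element.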

Regards,
Lebohang.

#************************************************************
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.6.5 2018-02-03 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* *
#************************************************************
#* *
#* Command File for MadGraph5_aMC@NLO *
#* *
#* run as ./bin/mg5_aMC filename *
#* *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
set low_mem_multicore_nlo_generation True
set nb_core 4
import model loop_sm
generate p p > e+ ve mu- vm~ j j [QCD]
add process p p > mu+ vm e- ve~ j j [QCD]
output ppWWjj_NLO
Traceback (most recent call last):
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1514, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/extended_cmd.py", line 1463, in onecmd_orig
    return func(arg, **opt)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 292, in do_output
    self.cmd.do_output(self, line, *args, **opts)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 553, in do_output
    self.export(nojpeg, main_file_name, group_processes=group_processes)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/master_interface.py", line 306, in export
    return self.cmd.export(self, *args, **opts)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 648, in export
    ndiags, cpu_time = generate_matrix_elements(self, group=group_processes)
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 596, in generate_matrix_elements
    loop_optimized= self.options['loop_optimized_output'])
  File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/fks/fks_helas_objects.py", line 347, in __init__
    memapout = pool.map_async(async_finalize_matrix_elements,memapin).get(9999999)
  File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
ValueError: {
    'processes': [{
    'legs': [{
    'id': 4,
    'number': 1,
    'state': False,
    'from_group': True,
    'loop_line': False,
    'onshell': None,
    'fks': 'j',
    'color': 3,
    'charge': 0.67,
    'massless': True,
    'spin': 2,
    'is_part': True,
    'self_antipart': False
},

marco zaro (marco-zaro) said :
#10

Hi Lebohang,
apologies for the delay. I cannot reproduce your error. For me, the generation

set complex_mass_scheme
set low_mem_multicore_nlo_generation
generate p p > e+ ve mu+ vm j j QCD=0 [QCD]
output VBS-ppwpwpjj-QCD

works fine. I have checked with 2.6.5, and it works both with the loop_sm model and with loop_qcd_qed_sm_Gmu.
I am not sure what to suggest here...

Cheers,

Marco

> ndiags, cpu_time = generate_matrix_elements(self, group=group_processes)
> File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/interface/amcatnlo_interface.py", line 596, in generate_matrix_elements
> loop_optimized= self.options['loop_optimized_output'])
> File "/Users/lebohangmokoena/Documents/MonteCarlo/MG5_aMC_v2_6_5/madgraph/fks/fks_helas_objects.py", line 347, in __init__
> memapout = pool.map_async(async_finalize_matrix_elements,memapin).get(9999999)
> File "/usr/local/Cellar/python@2/2.7.16/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 572, in get
> raise self._value
> ValueError: {
> 'processes': [{
> 'legs': [{
> 'id': 4,
> 'number': 1,
> 'state': False,
> 'from_group': True,
> 'loop_line': False,
> 'onshell': None,
> 'fks': 'j',
> 'color': 3,
> 'charge': 0.67,
> 'massless': True,
> 'spin': 2,
> 'is_part': True,
> 'self_antipart': False
> },
>
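For context, the `ValueError` above surfaces through the standard multiprocessing pattern visible in the traceback: an exception raised inside a worker process is re-raised in the parent when `.get()` is called, so the failure appears to come from multiprocessing/pool.py even though it originates in the worker function. A minimal, purely illustrative sketch (none of these names are MG5_aMC's actual code):

```python
# Illustrative sketch (not MG5_aMC code): an exception raised inside a
# multiprocessing worker is pickled back to the parent and re-raised at
# .get(), so the traceback ends in multiprocessing/pool.py even though
# the real failure happened inside the worker function.
import multiprocessing

def worker(item):
    # Stand-in for the per-process finalization task: fail on one input.
    if item == "bad":
        raise ValueError("failure inside worker: %r" % (item,))
    return item.upper()

def run(items):
    pool = multiprocessing.Pool(processes=2, maxtasksperchild=1)
    try:
        # Same map_async(...).get(timeout) pattern as in the traceback above.
        return pool.map_async(worker, items).get(9999999)
    finally:
        pool.close()
        pool.join()

if __name__ == "__main__":
    print(run(["ok", "fine"]))
    try:
        run(["ok", "bad"])
    except ValueError as exc:
        print("re-raised in parent: %s" % exc)
```

Note that `maxtasksperchild=1` is exactly the keyword at the centre of the later posts in this thread: older multiprocessing versions reject it.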

Revision history for this message
marco zaro (marco-zaro) said :
#11

Sorry, I have just realised that you are generating W+ W- and not W+ W+.
Let me check…
Cheers,

Marco

> On 1 Apr 2019, at 14:02, Marco Zaro <email address hidden> wrote:
>
> Hi Lebohang,
> Apologies for the delay. I cannot reproduce your error. For me, the generation
>
> set complex_mass_scheme
> set low_mem_multicore_nlo_generation
> generate p p > e+ ve mu+ vm j j QCD=0 [QCD]
> output VBS-ppwpwpjj-QCD
> works OK. I have checked with 2.6.5, and it works with both the loop_sm model and the loop_qcd_qed_sm_Gmu one.
> I am not sure what to suggest here...
>
> Cheers,
>
> Marco
>

Revision history for this message
Ogul Oencel (oguloncel) said :
#12

Dear Lebohang, Marco and Olivier,

I am resurrecting this topic because I recently encountered the same problem.

I was seeking a solution to slow NLO event generation in my MadGraph setup, which I run on LXPLUS7, came across this post, and implemented the "set low_mem_multicore_nlo_generation" command. I ended up with the same "maxtasksperchild" problem.

The topic here seems not to have a final resolution. I have also checked the JIRA ticket linked by Marco above and wrote a post there (https://sft.its.cern.ch/jira/browse/SPI-969). The problem seems to be related to the Python version used alongside MG. For example, 2.7.5 can accept "maxtasksperchild":

[ooncel@lxplus797 ~]$ python -V
Python 2.7.5
[ooncel@lxplus797 ~]$ python
Python 2.7.5 (default, Apr 2 2020, 13:16:51)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing import Pool
>>> x = Pool(maxtasksperchild=1)
>>> help(Pool)
Help on function Pool in module multiprocessing:
Pool(processes=None, initializer=None, initargs=(), maxtasksperchild=None)
 Returns a process pool object
(END)

However, I am using AthGeneration 21.6.40, which automatically sets the Python version to 2.7.13, and this one fails to accept "maxtasksperchild":

[ooncel@lxplus712 JO]$ source ../setuptest.sh
lsetup lsetup (...)Using AthGeneration/21.6.40 [cmake] with platform x86_64-slc6-gcc62-opt
 at /cvmfs/atlas.cern.ch/repo/sw/software/21.6
[ooncel@lxplus712 JO]$ python -V
Python 2.7.13
[ooncel@lxplus712 JO]$ python
Python 2.7.13 (default, Apr 22 2017, 20:06:00)
[GCC 6.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from multiprocessing import Pool
>>> x = Pool(maxtasksperchild=1)
Traceback (most recent call last):
 File "<stdin>", line 1, in <module>
TypeError: Pool() got an unexpected keyword argument 'maxtasksperchild'
>>>
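Whether a given interpreter's Pool accepts the keyword can also be probed programmatically, e.g. with a small helper like the (hypothetical) one below, which simply tries to build such a pool and reports the outcome:

```python
# Probe whether this interpreter's multiprocessing.Pool accepts the
# maxtasksperchild keyword (present in stock Python 2.7+, missing in
# the older 2.6-style multiprocessing discussed above).
import multiprocessing

def pool_supports_maxtasksperchild():
    """Return True if Pool() accepts the maxtasksperchild keyword."""
    try:
        pool = multiprocessing.Pool(processes=1, maxtasksperchild=1)
    except TypeError:
        # Old multiprocessing: the keyword is rejected.
        return False
    pool.close()
    pool.join()
    return True

if __name__ == "__main__":
    print(pool_supports_maxtasksperchild())
```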

I do not know if/how MadGraph can be forced to use a specified Python version. Do you have alternative suggestions, perhaps a version which is known to be unaffected?

Best regards,

Ogul

Revision history for this message
marco zaro (marco-zaro) said :
#13

Dear Ogul,
In order to use a specific Python installation (let us call it mypython) to run MG5_aMC, just do

/path/to/mypython ./bin/mg5_aMC

instead of

./bin/mg5_aMC

If mypython is already in the PATH, then just do
mypython ./bin/mg5_aMC
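As a quick sanity check (illustrative only, not part of MG5_aMC), a short script run through the chosen interpreter confirms which installation actually picked it up:

```python
# Illustrative check: print which Python interpreter and version a
# script actually runs under, to verify that "mypython ./bin/mg5_aMC"
# used the intended installation.
import sys

def interpreter_info():
    """Return (executable path, (major, minor, micro)) of this Python."""
    return sys.executable, tuple(sys.version_info[:3])

if __name__ == "__main__":
    exe, ver = interpreter_info()
    print(exe)
    print(".".join(str(v) for v in ver))
```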

Please also note that the bug linked to this question contains a patch that should be applied to the MG5_aMC code in order to have the code properly generated.

Let us know if it helps.

Marco Zaro


Revision history for this message
Ogul Oencel (oguloncel) said :
#14

Dear Marco,

Thank you very much for your prompt reply and bringing the bug fix to my attention.

However, I am working on LXPLUS, using the so-called Gen_tf.py function; I do not run MG on my local laptop. I think it is then not possible to apply the suggested "./bin/mg5_aMC" invocation.

What I do is this on LXPLUS7:

setupATLAS
asetup 21.6.40,AthGeneration
export ATHENA_PROC_NUMBER=10
Gen_tf.py --ecmEnergy=13000 --firstEvent=1 --maxEvents=1000 --randomSeed=123456 --jobConfig=000000 --outputEVNTFile=EVNT.root

where the folder 000000 contains a job options file with the "set low_mem_multicore_nlo_generation" command.

I don't know whether this case is beyond the scope of this forum post; please kindly let me know.

Best,

Ogul

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#15

Hi,

I would suggest using the same trick:
> mypython Gen_tf.py --ecmEnergy=13000 --firstEvent=1 --maxEvents=1000 --randomSeed=123456 --jobConfig=000000 --outputEVNTFile=EVNT.root

But since we have no idea what is inside "Gen_tf.py", I cannot tell you which versions of Python are supported or whether this will be enough (if they have not coded it correctly, you might still end up running another version of Python).

This might actually explain your issue: if they have hardcoded that MG5_aMC runs with Python 2.6, then that would trigger your error (since, according to Python, this argument should be supported in Python 2.7).

> I don't know if this case is then beyond the scope of this forum/post,
> please kindly let me know.

Clearly this starts to be outside of what we can help you with.

Cheers,

Olivier


Revision history for this message
Ogul Oencel (oguloncel) said :
#16

Dear Marco and Olivier,

Thank you for your suggestions. I would like to provide an update on this issue. Thanks to the help I received from Ivan Razumov on the JIRA ticket previously mentioned by Marco, here are two outcomes:

1) The problem originates from the LXPLUS setup. I was first confused because I was setting up a Python 2.7 release that should support this argument. Olivier's insight turns out to be right: the LXPLUS Python 2.7 release still uses the release 2.6 version of the multiprocessing package. This explains the crash in the multiprocessing tool despite having set up release 2.7.

2) Ivan proposed to use the following command after the generator setup to solve the issue: export PYTHONPATH="$(dirname $(which python))/../lib/python2.7:$PYTHONPATH"
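One way to check whether such a PYTHONPATH override actually took effect is to print where `multiprocessing` is imported from (a small diagnostic sketch, not MG5_aMC code):

```python
# Diagnostic sketch: report which multiprocessing package this
# interpreter actually imports, so a PYTHONPATH override like the one
# above can be verified before launching a long generation.
import multiprocessing

def multiprocessing_location():
    """Return the file path of the imported multiprocessing package."""
    return multiprocessing.__file__

if __name__ == "__main__":
    print(multiprocessing_location())
```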

Now, this solves the maxtasksperchild problem, but it creates another one. Generating a simple p p > t t~ [QCD] process and decaying both tops to W b with MadSpin, I get the following error during the MadSpin decay:

=============
generate 09:33:48 Py:MadGraphUtils INFO Some errors detected by MadGraphControl - checking for serious errors
generate 09:33:48 Py:MadGraphUtils INFO stty: standard input: Inappropriate ioctl for device
generate 09:33:48 Py:MadGraphUtils INFO stty: standard input: Inappropriate ioctl for device
generate 09:33:48 Py:MadGraphUtils ERROR Command "launch --name=run_01" interrupted with error:
generate 09:33:48 Py:MadGraphUtils ERROR OSError : [Errno 39] Directory not empty: '/afs/cern.ch/work/o/ooncel/HNL_evtgen_JULY20/NLO_MASTER_JULY20/EVTGENAUG/JO/PROCNLO_SM_HeavyN_NLO_0/full_me/SubProcesses/P2_gg_ttxg_t_bwp_wp_udx_tx_bxwm_wm_dux'
generate 09:33:48 Py:MadGraphUtils ERROR Please report this bug on https://bugs.launchpad.net/mg5amcnlo
generate 09:33:48 Py:MadGraphUtils ERROR More information is found in '/afs/cern.ch/work/o/ooncel/HNL_evtgen_JULY20/NLO_MASTER_JULY20/EVTGENAUG/JO/PROCNLO_SM_HeavyN_NLO_0/run_01_tag_1_debug.log'.
generate 09:33:48 Py:MadGraphUtils ERROR Please attach this file to your report.
=============

I tested the exact same JO with the PYTHONPATH command removed from the setup script, and it works fine. So the problem occurs only when that PYTHONPATH command is used.

Interestingly, there is a bug report dealing with the same error, for the same t t~ production, on the same LXPLUS7 machine, answered by Olivier, but I am not sure how to put that information to good use here. Perhaps it can help you figure something out.
Link: https://bugs.launchpad.net/mg5amcnlo/+bug/1788615

Cheers,

Ogul

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#17

I have nothing more to add about that issue; everything is said in the other bug report.

Cheers,

Olivier
