Cannot write output of generated process

Asked by Evgeny Soldatov

Dear MG Team,

I was trying to study the following process at the NLO level:
generate p p > vl vl~ a j j QCD=0 QED=5 [QCD]
with MG5_aMC v2.6.4.
The process generation completed successfully, but while writing out the output folder (via the 'output -f' command) I get the following error message:

INFO: Generating virtual matrix elements using MadLoop:
INFO: Generating virtual matrix element with MadLoop for process: b~ b~ > vt vt~ a b~ b~ QCD=0 QED<=5 [ all = QCD ] (492 / 492)
WARNING: Some loop diagrams contributing to this process are discarded because they are not pure (QCD)-perturbation.
Make sure you did not want to include them.
INFO: Generated 492 subprocesses with 390192 real emission diagrams, 28512 born diagrams and 219456 virtual diagrams
MG5_aMC>output -f
INFO: Writing out the aMC@NLO code, using optimized Loops
INFO: initialize a new directory: PROCNLO_loop_sm-no_b_mass_0
INFO: remove old information in PROCNLO_loop_sm-no_b_mass_0
Error detected in "output -f"
write debug file MG5_debug
If you need help with this issue please contact us on https://answers.launchpad.net/mg5amcnlo
MadGraph5Error : Failed to clean correctly PROCNLO_loop_sm-no_b_mass_0:
         [Errno 12] Cannot allocate memory

I have more than 15 GB free on the disk. Why do I get such an error, and how can I solve it?
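
For completeness, the available RAM and any per-process limits on the node can be checked with standard Linux commands (shown as a sketch; the output will of course differ per machine):

free -m      # physical RAM and swap usage, in MB
ulimit -v    # per-process virtual-memory limit, in kB ('unlimited' if none)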

gfortran is in place:
gfortran -v
Using built-in specs.
COLLECT_GCC=gfortran
COLLECT_LTO_WRAPPER=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/Gcc/gcc472p1_x86_64_slc6/slc6/x86_64-slc6-gcc47-opt/bin/../libexec/gcc/x86_64-unknown-linux-gnu/4.7.2/lto-wrapper
Target: x86_64-unknown-linux-gnu
Configured with: /build/vdiez/gcc-4.7.2/configure --prefix=/afs/cern.ch/sw/lcg/external/gcc/4.7.2/x86_64-slc6-gcc47-opt --with-mpfr=/afs/cern.ch/sw/lcg/external/mpfr/2.4.2/x86_64-slc6-gcc47-opt --with-gmp=/afs/cern.ch/sw/lcg/external/gmp/4.3.2/x86_64-slc6-gcc47-opt --with-mpc=/afs/cern.ch/sw/lcg/external/mpc/0.8.1/x86_64-slc6-gcc47-opt --enable-libstdcxx-time --enable-lto --with-libelf=/afs/cern.ch/sw/lcg/external/libelf/0.8.13/x86_64-slc6-gcc47-opt --with-ppl=/afs/cern.ch/sw/lcg/external/ppl/0.11.2/x86_64-slc6-gcc47-opt --with-cloog=/afs/cern.ch/sw/lcg/external/cloog-ppl/0.15.11/x86_64-slc6-gcc47-opt --enable-languages=c,c++,fortran,go
Thread model: posix
gcc version 4.7.2 (GCC)

This is the MG5_debug file:

#************************************************************
#*                    MadGraph5_aMC@NLO                     *
#*                                                          *
#*                *                       *                 *
#*                  *        * *        *                   *
#*                    * * * * 5 * * * *                     *
#*                  *        * *        *                   *
#*                *                       *                 *
#*                                                          *
#*                                                          *
#*         VERSION 2.6.4                 2018-11-09         *
#*                                                          *
#*    The MadGraph5_aMC@NLO Development Team - Find us at   *
#*    https://server06.fynu.ucl.ac.be/projects/madgraph     *
#*                                                          *
#************************************************************
#*                                                          *
#*          Command File for MadGraph5_aMC@NLO              *
#*                                                          *
#*     run as ./bin/mg5_aMC  filename                       *
#*                                                          *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set low_mem_multicore_nlo_generation False
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
import model loop_sm-no_b_mass
define p = 21 2 4 1 3 -2 -4 -1 -3 5 -5 # pass to 5 flavors
define j = p
define p = g u c d s b u~ c~ d~ s~ b~
define j = g u c d s b u~ c~ d~ s~ b~
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
generate p p > vl vl~ a j j QCD=0 QED=5 [QCD]
output -f
Traceback (most recent call last):
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/extended_cmd.py", line 1501, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/extended_cmd.py", line 1450, in onecmd_orig
    return func(arg, **opt)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/master_interface.py", line 292, in do_output
    self.cmd.do_output(self, line, *args, **opts)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/amcatnlo_interface.py", line 547, in do_output
    self._curr_exporter.copy_fkstemplate()
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/iolibs/export_fks.py", line 3427, in copy_fkstemplate
    % (os.path.basename(dir_path),why))
MadGraph5Error: Failed to clean correctly PROCNLO_loop_sm-no_b_mass_1:
 [Errno 12] Cannot allocate memory
                          MadGraph5_aMC@NLO Options
                          ----------------
        complex_mass_scheme : False
    default_unset_couplings : 99
                      gauge : unitary
         group_subprocesses : Auto
  ignore_six_quark_processes : False
           loop_color_flows : False
      loop_optimized_output : True
  low_mem_multicore_nlo_generation : False
     max_npoint_for_channel : 0
               stdout_level : 20 (user set)

                         MadEvent Options
                          ----------------
     automatic_html_opening : True
                    nb_core : None
        notification_center : True
                   run_mode : 2

                      Configuration Options
                      ---------------------
                        OLP : MadLoop
                    amcfast : amcfast-config
                   applgrid : applgrid-config
                auto_update : 7
         cluster_local_path : None
           cluster_nb_retry : 1
              cluster_queue : None (user set)
         cluster_retry_wait : 300
               cluster_size : 100
      cluster_status_update : (600, 30)
          cluster_temp_path : None
               cluster_type : condor
                    collier : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/lib (user set)
               cpp_compiler : None
             crash_on_error : False
               delphes_path : ./Delphes
                 eps_viewer : None
        exrootanalysis_path : ./ExRootAnalysis
              f2py_compiler : None
                    fastjet : /afs/cern.ch/work/e/esoldato/workarea/herwig/fastjet-3.2.1/fastjet/bin/fastjet-config (user set)
           fortran_compiler : None
                      golem : None (user set)
                 hepmc_path : None (user set)
                  hwpp_path : None (user set)
                     lhapdf : /afs/cern.ch/work/e/esoldato/workarea/herwig/LHAPDF-6.1.6/LHAPDF/bin/lhapdf-config (user set)
          madanalysis5_path : None (user set)
           madanalysis_path : ./MadAnalysis
  mg5amc_py8_interface_path : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/MG5aMC_PY8_interface (user set)
                      ninja : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/lib (user set)
        output_dependencies : external
                      pjfry : None (user set)
            pythia-pgs_path : ./pythia-pgs
               pythia8_path : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/pythia8 (user set)
                    samurai : None
               syscalc_path : ./SysCalc
                    td_path : ./td
                text_editor : None
                thepeg_path : None (user set)
                    timeout : 60
                web_browser : None

Best regards,
Evgeny

Olivier Mattelaer (olivier-mattelaer) said:
#1

Hi,

I have run this process on my laptop (16 GB of RAM) while monitoring the memory usage, and it stays well under control (less than 11 GB).

The part where you report the crash is actually not RAM-intensive but rather disk-intensive. However, that part of the code issues a "fork" command, and depending on your system this can temporarily double the amount of RAM accounted for (some operating systems do that). I guess you are running on one of those systems.
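
As a minimal illustration (plain Python, not MadGraph code; the 1 GB allocation and the helper function are hypothetical stand-ins): on a system with strict memory overcommit, a fork from a large parent process can be refused with exactly this errno, even with plenty of disk free.

import os

def spawn_helper():
    # Fork a child for a small side task. With strict overcommit, the
    # kernel must be able to reserve a full copy of the parent's address
    # space for the child, so the fork can fail with errno 12 (ENOMEM)
    # even though the child would never touch most of that memory.
    try:
        pid = os.fork()
    except OSError as err:
        print("fork failed: [Errno %d] %s" % (err.errno, err.strerror))
        return
    if pid == 0:        # child branch
        os._exit(0)     # do the small task here, then exit
    os.waitpid(pid, 0)  # parent waits for the child

if __name__ == '__main__':
    big = bytearray(10 ** 9)  # ~1 GB: stand-in for a RAM-hungry parent
    spawn_helper()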

My only advice here (apart from the impractical one of changing OS) is to run our code with the option
"set low_mem_multicore_nlo_generation True"
which should greatly reduce the amount of RAM used, and also lets the code use more than one core.
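
In full, the session would then look something like this (a sketch assembled from the commands in your debug file, untested):

set low_mem_multicore_nlo_generation True
import model loop_sm-no_b_mass
define p = g u c d s b u~ c~ d~ s~ b~
define j = p
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
generate p p > vl vl~ a j j QCD=0 QED=5 [QCD]
output -f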

Cheers,

Olivier

Evgeny Soldatov (esoldato) said:
#2

Hi Olivier,

Thank you for your answer.
Unfortunately, I now get another error, connected with the use of 'low_mem_multicore_nlo_generation'.

MG5_debug is now more than 5 MB, so I'm including a shortened version:

#************************************************************
#*                    MadGraph5_aMC@NLO                     *
#*                                                          *
#*                *                       *                 *
#*                  *        * *        *                   *
#*                    * * * * 5 * * * *                     *
#*                  *        * *        *                   *
#*                *                       *                 *
#*                                                          *
#*                                                          *
#*         VERSION 2.6.4                 2018-11-09         *
#*                                                          *
#*    The MadGraph5_aMC@NLO Development Team - Find us at   *
#*    https://server06.fynu.ucl.ac.be/projects/madgraph     *
#*                                                          *
#************************************************************
#*                                                          *
#*          Command File for MadGraph5_aMC@NLO              *
#*                                                          *
#*     run as ./bin/mg5_aMC  filename                       *
#*                                                          *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set low_mem_multicore_nlo_generation False
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
set low_mem_multicore_nlo_generation True
set complex_mass_scheme
import model loop_qcd_qed_sm_Gmu
define p = 21 2 4 1 3 -2 -4 -1 -3 5 -5 # pass to 5 flavors
define j = p
define p = g u c d s b u~ c~ d~ s~ b~
define j = g u c d s b u~ c~ d~ s~ b~
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
generate p p > vl vl~ a j j QCD=0 QED=5 [QCD]
output -f
Traceback (most recent call last):
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/extended_cmd.py", line 1501, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/extended_cmd.py", line 1450, in onecmd_orig
    return func(arg, **opt)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/master_interface.py", line 292, in do_output
    self.cmd.do_output(self, line, *args, **opts)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/amcatnlo_interface.py", line 553, in do_output
    self.export(nojpeg, main_file_name, group_processes=group_processes)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/master_interface.py", line 306, in export
    return self.cmd.export(self, *args, **opts)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/amcatnlo_interface.py", line 648, in export
    ndiags, cpu_time = generate_matrix_elements(self, group=group_processes)
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/interface/amcatnlo_interface.py", line 596, in generate_matrix_elements
    loop_optimized= self.options['loop_optimized_output'])
  File "/afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/madgraph/fks/fks_helas_objects.py", line 347, in __init__
    memapout = pool.map_async(async_finalize_matrix_elements,memapin).get(9999999)
  File "/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/x86_64/python/2.7.3-x86_64-slc6-gcc47/sw/lcg/external/Python/2.7.3/x86_64-slc6-gcc47-opt/lib/python2.7/multiprocessing/pool.py", line 528, in get
    raise self._value
ValueError: {
    'processes': [{
    'legs': [{
    'id': 4,
    'number': 1,
    'state': False,
    'from_group': True,
    'loop_line': False,
    'onshell': None,
    'fks': 'j',
    'color': 3,
    'charge': 0.67,
    'massless': True,
    'spin': 2,
    'is_part': True,
    'self_antipart': False
}, {
    'id': 1,
    'number': 2,
    'state': False,
    'from_group': True,
    'loop_line': False,
    'onshell': None,
    'fks': 'n',
    'color': 3,
    'charge': -0.33,
    'massless': True,
    'spin': 2,
    'is_part': True,
    'self_antipart': False
}, {
...
}, {
    'id': 24,
    'number': 2,
    'state': False,
    'from_group': True,
    'loop_line': False,
    'onshell': None
}]
}],
    'orders': {'WEIGHTED': 11, 'QED': 5, 'QCD': 1}
}],
    'has_mirror_process': False
},
    'has_mirror_process': False
} is not in list
                          MadGraph5_aMC@NLO Options
                          ----------------
        complex_mass_scheme : True (user set)
    default_unset_couplings : 99
                      gauge : Feynman (user set)
         group_subprocesses : Auto
  ignore_six_quark_processes : False
           loop_color_flows : False
      loop_optimized_output : True
  low_mem_multicore_nlo_generation : True (user set)
     max_npoint_for_channel : 0
               stdout_level : 20 (user set)

                         MadEvent Options
                          ----------------
     automatic_html_opening : True
                    nb_core : None
        notification_center : True
                   run_mode : 2

                      Configuration Options
                      ---------------------
                        OLP : MadLoop
                    amcfast : amcfast-config
                   applgrid : applgrid-config
                auto_update : 7
         cluster_local_path : None
           cluster_nb_retry : 1
              cluster_queue : None (user set)
         cluster_retry_wait : 300
               cluster_size : 100
      cluster_status_update : (600, 30)
          cluster_temp_path : None
               cluster_type : condor
                    collier : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/lib (user set)
               cpp_compiler : None
             crash_on_error : False
               delphes_path : ./Delphes
                 eps_viewer : None
        exrootanalysis_path : ./ExRootAnalysis
              f2py_compiler : None
                    fastjet : /afs/cern.ch/work/e/esoldato/workarea/herwig/fastjet-3.2.1/fastjet/bin/fastjet-config (user set)
           fortran_compiler : None
                      golem : None (user set)
                 hepmc_path : None (user set)
                  hwpp_path : None (user set)
                     lhapdf : /afs/cern.ch/work/e/esoldato/workarea/herwig/LHAPDF-6.1.6/LHAPDF/bin/lhapdf-config (user set)
          madanalysis5_path : None (user set)
           madanalysis_path : ./MadAnalysis
  mg5amc_py8_interface_path : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/MG5aMC_PY8_interface (user set)
                      ninja : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/lib (user set)
        output_dependencies : external
                      pjfry : None (user set)
            pythia-pgs_path : ./pythia-pgs
               pythia8_path : /afs/cern.ch/work/e/esoldato/workarea/MadGraph_full/MG5_aMC_v2_6_1/HEPTools/pythia8 (user set)
                    samurai : None
               syscalc_path : ./SysCalc
                    td_path : ./td
                text_editor : None
                thepeg_path : None (user set)
                    timeout : 60
                web_browser : None

I'm running MG5_aMC on CERN lxplus:
lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch
Distributor ID: ScientificCERNSLC
Description: Scientific Linux CERN SLC release 6.10 (Carbon)
Release: 6.10
Codename: Carbon

Best regards,
Evgeny

Olivier Mattelaer (olivier-mattelaer) said:
#3

Once again, I cannot reproduce the problem.
How many cores do you have available? On a cluster that number can be quite large (leading to many matrix elements being generated simultaneously, and hence to RAM issues).
Maybe you should limit the number of cores used, for example to 4 (set nb_core 4)?
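
For example, at the start of the session (a sketch combining both suggestions; the value 4 is just a reasonable starting point):

set nb_core 4
set low_mem_multicore_nlo_generation True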

Cheers,

Olivier
