Problem with dependence on order of adding processes

Asked by Jack

Hi,

When I do:

import model loop_sm
generate p p > mu+ mu- mu+ mu- $h [QCD]
add process p p > mu+ mu- mu+ mu- $h
display diagrams
output bkg_4mu
launch -i
multi_run 2
set nevents 100000
set iseed 0
set bwcutoff 999999
set ebeam1 6500
set ebeam2 6500
set lpp1 1
set lpp2 1

The generation runs fine. However, running it with the two processes in the other order, i.e.

import model loop_sm
generate p p > mu+ mu- mu+ mu- $h
add process p p > mu+ mu- mu+ mu- $h [QCD]
display diagrams
output bkg_4mu
launch -i
multi_run 2
set nevents 100000
set iseed 0
set bwcutoff 999999
set ebeam1 6500
set ebeam2 6500
set lpp1 1
set lpp2 1

I receive the error:

multi_run 2
Command "multi_run" not recognized, please try again
set nevents 100000
INFO: syntax: set stdout_level|fortran_compiler|cpp_compiler|timeout argument
INFO: -- set options
INFO: stdout_level DEBUG|INFO|WARNING|ERROR|CRITICAL
INFO: change the default level for printed information
INFO: timeout VALUE
INFO: (default 20) Seconds allowed to answer questions.
INFO: Note that pressing tab always stops the timer.
INFO: cluster_temp_path PATH
INFO: (default None) Allow to perform the run in PATH directory
INFO: This allow to not run on the central disk. This is not used
INFO: by condor cluster (since condor has it's own way to prevent it).
Command "import /location/control.txt" interrupted in sub-command:
"set nevents 100000" with error:
InvalidCmd : Possible options for set are ['stdout_level', 'fortran_compiler', 'cpp_compiler', 'timeout', 'text_editor', 'notification_center', 'pjfry', 'cluster_local_path', 'default_unset_couplings', 'group_subprocesses', 'ignore_six_quark_processes', 'loop_optimized_output', 'cluster_status_update', 'fortran_compiler', 'hepmc_path', 'collier', 'auto_update', 'pythia8_path', 'hwpp_path', 'low_mem_multicore_nlo_generation', 'golem', 'pythia-pgs_path', 'td_path', 'delphes_path', 'thepeg_path', 'cluster_type', 'madanalysis5_path', 'exrootanalysis_path', 'OLP', 'applgrid', 'eps_viewer', 'fastjet', 'run_mode', 'web_browser', 'automatic_html_opening', 'cluster_temp_path', 'cluster_size', 'cluster_queue', 'syscalc_path', 'madanalysis_path', 'lhapdf', 'stdout_level', 'nb_core', 'f2py_compiler', 'ninja', 'amcfast', 'cluster_retry_wait', 'output_dependencies', 'crash_on_error', 'mg5amc_py8_interface_path', 'loop_color_flows', 'samurai', 'cluster_nb_retry', 'mg5_path', 'timeout', 'gauge', 'complex_mass_scheme', 'cpp_compiler', 'max_npoint_for_channel']

Why is the order in which the processes are added important? Am I doing something wrong?

Thank you,
Jack

Olivier Mattelaer (olivier-mattelaer) said :
#1

Hi,

1) What kind of computation do you expect to do here?
Actually, the correct behaviour is a crash in both cases.
I will fix that in 2.6.6 and prevent the code from running in either case.

2) Additionally, the "multi_run" command is only available for LO processes, not for NLO ones, where no such command exists.

3) Why do you use "set bwcutoff 999999"? This is a very large value for that parameter and is not advised.

Cheers,

Olivier

Jack (jackhenli) said :
#2

Hi,

1) I meant to have the LO diagrams from p p > mu+ mu- mu+ mu- $h as well as the NLO ones. I was under the impression that p p > mu+ mu- mu+ mu- $h [QCD] didn't include the LO diagrams, only the NLO ones, but after checking it now I see I was wrong.

So I think what I should do is just generate p p > mu+ mu- mu+ mu- $h [QCD] and not add another process.
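For reference, a minimal sketch of the reduced card (the same settings as above, just without the added LO process and the LO-only multi_run line):

import model loop_sm
generate p p > mu+ mu- mu+ mu- $h [QCD]
output bkg_4mu
launch -i
set nevents 100000
set iseed 0
set bwcutoff 999999
set ebeam1 6500
set ebeam2 6500
set lpp1 1
set lpp2 1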

2) Why is multi_run not available at NLO, and how do I then generate more than 100k events in the NLO case without multi_run?

3) I wanted to save the off-shell particles to the LHE file, so I set bwcutoff very high to do so. I did not think this would cause a problem. From https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/FAQ-Cards-2, all I could see is that doing so would cut off less of the tail; is this an issue?

Thanks,
Jack

Olivier Mattelaer (olivier-mattelaer) said :
#3

Hi,

> 2) Why is multi_run not available at NLO, and how do I then generate
> more than 100k events in the NLO case without multi_run?

At NLO another strategy is used for the parallelisation of the problem, which is equivalent to the multi_run option of the LO case: in the run_card you have the parameter
 nevt_job
which allows you to split the generation into smaller pieces.

This is actually a better solution than the LO one.
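For example, a minimal sketch of how this could look at launch time (the numbers are illustrative; nevt_job defaults to -1, i.e. no splitting):

launch bkg_4mu
set nevents 1000000
set nevt_job 10000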

> 3) I wanted to save the off-shell particles to the LHE file, so I set
> bwcutoff very high to do so. I did not think this would cause a problem.
> From https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/FAQ-Cards-2, all I
> could see is that doing so would cut off less of the tail; is this an
> issue?

Saving the off-shell particles all the time makes me nervous, since the presence/absence of those particles will have some impact at the parton-shower level: the shower will not react in the same way depending on whether this particle is written or not. So setting such a value very high can have an impact on the tail of your distribution.
Also, at LO the $ syntax is associated with a cut linked to bwcutoff.
(The $ syntax is actually not defined at NLO; you should rather use the $$ or the "/" syntax.)
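For instance, the NLO-safe version of the process in question would then read (a sketch; at LO, $$ removes the s-channel Higgs diagrams entirely, while $ only vetoes events where the Higgs is on-shell within bwcutoff):

generate p p > mu+ mu- mu+ mu- $$ h [QCD]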

Cheers,

Olivier
