MG5 takes too long

Asked by Amin Aboubrahim on 2017-11-08

Dear Madgraph team,

I am generating some SUSY processes (here's the PROC Card):

define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define neu = n1 n2 n3 n4
define chg = x1- x1+ x2- x2+
define susy = neu chg h3

# Specify process(es) to run
generate p p > susy susy j j QCD<=99 QED<=99

I switched on matching in the run_card and asked for 50K events. I did not get any error, but the process is taking too long:

INFO: Idle: 60916, Running: 4, Completed: 7390 [ 17h 44m ]
INFO: Idle: 60915, Running: 4, Completed: 7391 [ 17h 45m ]
INFO: Idle: 60914, Running: 4, Completed: 7392 [ 17h 45m ]
INFO: Idle: 60913, Running: 4, Completed: 7393 [ 17h 45m ]
INFO: Idle: 60912, Running: 4, Completed: 7394 [ 17h 45m ]
INFO: Idle: 60911, Running: 4, Completed: 7395 [ 17h 46m ]
INFO: Idle: 60910, Running: 4, Completed: 7396 [ 17h 46m ]
INFO: Idle: 60909, Running: 4, Completed: 7397 [ 17h 46m ]
INFO: Idle: 60908, Running: 4, Completed: 7398 [ 17h 46m ]
INFO: Idle: 60907, Running: 4, Completed: 7399 [ 17h 46m ]
INFO: Idle: 60906, Running: 4, Completed: 7400 [ 17h 46m ]
INFO: Idle: 60905, Running: 4, Completed: 7401 [ 17h 46m ]
INFO: Idle: 60904, Running: 4, Completed: 7402 [ 17h 47m ]
INFO: Idle: 60903, Running: 4, Completed: 7403 [ 17h 47m ]
INFO: Idle: 60901, Running: 4, Completed: 7405 [ 17h 48m ]
INFO: Idle: 60900, Running: 4, Completed: 7406 [ 17h 48m ]
INFO: Idle: 60899, Running: 4, Completed: 7407 [ 17h 48m ]
INFO: Idle: 60898, Running: 4, Completed: 7408 [ 17h 49m ]
INFO: Idle: 60897, Running: 4, Completed: 7409 [ 17h 49m ]
INFO: Idle: 60896, Running: 4, Completed: 7410 [ 17h 50m ]
INFO: Idle: 60895, Running: 4, Completed: 7411 [ 17h 50m ]
INFO: Idle: 60893, Running: 4, Completed: 7413 [ 17h 50m ]
INFO: Idle: 60892, Running: 4, Completed: 7414 [ 17h 51m ]
INFO: Idle: 60891, Running: 4, Completed: 7415 [ 17h 51m ]
INFO: Idle: 60890, Running: 4, Completed: 7416 [ 17h 51m ]
INFO: Idle: 60889, Running: 4, Completed: 7417 [ 17h 52m ]

Is that normal? Is there a way to speed this up?
If I understand correctly, a gridpack would not help in my case.

Thank you,
Amin

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Olivier Mattelaer
Solved: 2017-11-09
Last query: 2017-11-09
Last reply: 2017-11-09

Hi,

Less than one minute per job is quite reasonable.

So you need to find a way to reduce the number of Feynman diagrams.
1) Do you really need QCD<=99 QED<=99?
2) You can use a restriction model to remove, a priori, all Feynman diagrams that have a zero coupling:
https://answers.launchpad.net/mg5amcnlo/+faq/2312
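For reference, the restriction-model setup described in the FAQ looks roughly like this (the model name MSSM_SLHA2 and the benchmark name my_benchmark are illustrative; check the FAQ for the exact procedure):

```
# Copy your param_card into the model directory as a restriction card,
# with every coupling you want removed set exactly to zero:
cp param_card.dat models/MSSM_SLHA2/restrict_my_benchmark.dat

# Then, inside the MG5 shell, import the restricted model:
MG5_aMC> import model MSSM_SLHA2-my_benchmark
```

MG5 then discards every Feynman diagram whose coupling vanishes for that benchmark, which can reduce the diagram count (and therefore the number of jobs) substantially.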

I doubt that this will be enough to make it run on a 4-core machine, but it should help.
I think that you have to rethink your generation strategy here.

Cheers,

Olivier

PS: From your process definition, it is a bit surprising that you are using matching.
What is the rationale here? (Note that turning it off will not really speed up the code.)


Amin Aboubrahim (amin83) said : #2

Hi Olivier,

1) Yes, I think I can decrease the order of QCD and QED couplings.
2) I will look into this and try to implement it. Thanks for the tip!

As for matching: I have added partons in the final state, and to avoid double counting when passing to Pythia I need matching. Isn't that its purpose, or did I misunderstand your question?

Thank you,
Amin

Hi,

For the matching/merging, I'm just surprised since you only generate one multiplicity here.
But if you have other multiplicities generated in other samples, then of course it makes sense (provided you set up Pythia accordingly).

That said, there is nothing wrong with activating matching for a single multiplicity: in that case it is equivalent to choosing a different dynamical scale.

Cheers,

Olivier

Amin Aboubrahim (amin83) said : #4

Hi,

Yes, Pythia is configured for the matching.
I have asked for two jets in the final state, so wouldn't that make my multiplicity 2?

I have one more question, if you don't mind: I am using a cluster that has no scheduler/queue. It consists of 20 nodes with 4 cores each, and I am currently using only one node. The cluster uses mpirun for parallel runs. Reading about MG5, I see that you support clusters with specific schedulers. Is there a way to use my cluster to parallelize the event generation in MG5?

Thanks a lot,
Amin

Hi,

> Yes, Pythia is configured for the matching.
> I have asked for two jets in the final state so wouldn’t that make my multiplicity 2?

So you generated events at a single jet multiplicity of 2.
In that case there is no double counting with another multiplicity (0j/1j/3j/...).

> I have one more question if you don't mind: I am using a cluster that
> has no scheduler/queue. It consists of 20 nodes, 4 cores each. I am
> using one node now as it is obvious. The cluster uses mpirun for
> parallel runs. Now reading about MG5, you support clusters with specific
> schedulers. Is there a way to make use of my cluster to parallelize the
> event generation in MG5?

A generic plugin exists that adds MPI support for gridpack generation.
You will likely need to update the file MPICluster.py
and edit the class OneCore to set up how you want to use your MPI cluster.

You can download such plugin via the command:
bzr branch lp:~mg5hpcteam/mg5amcnlo/mpi_plugin

More information on the plugin:
https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/MPI
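For context, MG5 selects its cluster backend through input/mg5_configuration.txt; a plugin-based setup would look roughly like the sketch below (the cluster type name "mpi" is a placeholder; the plugin's own documentation takes precedence):

```
# input/mg5_configuration.txt (excerpt, illustrative)
run_mode = 1          # 1 = cluster mode (0 = single machine, 2 = multicore)
cluster_type = mpi    # placeholder: use the type name defined by the plugin
```

The plugin then handles submitting the individual integration jobs through mpirun instead of a scheduler queue.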

Cheers,

Olivier


Amin Aboubrahim (amin83) said : #6

Hi Olivier,

Thanks for the help regarding the MPI cluster; I will implement that.
Regarding matching: I see what you mean now. I interpreted "one multiplicity" above as a multiplicity of 1; I simply misread. So I suppose I need to do something like:

generate p p > susy susy @0
add process p p > susy susy j @1
add process p p > susy susy j j @2

I know this forum is for discussing the technical aspects of MG5 rather than physics, but I'm curious about the advantage of matching compared to generating a single multiplicity with no matching.

Thanks a lot.
Amin

Hi,

If you only do
p p > t t~
then all the additional jets will be generated by Pythia. The problem is that Pythia works correctly only in the soft and collinear limit, so hard jets will not be modeled correctly.
The solution is matching/merging, where Pythia is restricted to generating soft and collinear jets, while the MadGraph samples with higher jet multiplicity are used to model the hard jets.

More information here: https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/IntroMatching
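As a concrete illustration, MLM matching is switched on in the run_card with settings along these lines (the xqcut value is a placeholder that must be tuned to the process and kept consistent with the Pythia merging setup):

```
# run_card.dat (excerpt, values illustrative)
 1    = ickkw   ! 0 = no matching, 1 = MLM matching
 30.0 = xqcut   ! minimum kt jet measure between matched partons
```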

Cheers,

Olivier


Amin Aboubrahim (amin83) said : #8

Thanks Olivier.
I know this, but what I meant was the difference between doing:
p p > t t~ j j (fixed multiplicity of 2 jets)
and p p > t t~ with 0, 1 and 2 jets, then merging and matching.

I think the difference would be critical if one is looking at final states with soft jets.

Best,
Amin

Correct

Amin Aboubrahim (amin83) said : #10

Thanks Olivier Mattelaer, that solved my question.