Computation time degradation

Asked by Sihyun Jeon

Hi,

I don't really mean to bother you about v265, but this is a somewhat interesting observation I made. I was trying out a somewhat complicated process using v299. It was not very time consuming with v265, but v299 seems to be super slow, and I wonder what caused the performance degradation.

I do see a message

WARNING: The optimizer detects that you have coupling evaluated to zero:
GC_100 GC_102 GC_78 GC_79 GC_81 GC_99
This will slow down the computation. Please consider using restricted model:
https://answers.launchpad.net/mg5amcnlo/+faq/2312
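
(For reference, the restricted-model route that the warning suggests would look roughly like the sketch below. This is only a sketch: the restriction name "e_mu_only" is hypothetical, and you would have to create the corresponding card yourself, typically by copying the model's param_card.dat into the model directory as restrict_e_mu_only.dat and setting the couplings you do not use to zero, as described in the FAQ linked above.)

# hypothetical restriction card: models/SM_HeavyN_Gen3Mass_NLO/restrict_e_mu_only.dat
# (a copy of param_card.dat with the unused heavy-neutrino mixings set to 0)
import model SM_HeavyN_Gen3Mass_NLO-e_mu_only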

But would this warning be the reason? I don't understand why v265 is fine.
I am running "./bin/mg5 test.cmd", with v299 under a Python 3 environment and v265 under Python 2.

It takes a noticeably longer time to compile the processes; below is the message from v299, while v265 does not have this issue.
Working on SubProcesses
INFO: Compiling for process 1/28.
INFO: P1_gg_ttx_t_wpb_wp_qq_tx_wmbx_wm_n1l_n1_llvl

This is the test.cmd command file:

import model SM_HeavyN_Gen3Mass_NLO
define em = e+ e- mu+ mu-
define ll = e+ e- mu+ mu- ta+ ta-
define le = e+ e-
define lm = mu+ mu-
define lt = ta+ ta-
define vv = ve ve~ vm vm~ vt vt~
generate p p > t t~, (t > w+ b, w+ > j j), (t~ > w- b~, (w- > n1 em, (n1 > ll ll vv)))
add process p p > t t~, (t > w+ b, w+ > ll vv), (t~ > w- b~, (w- > n1 em, (n1 > ll j j)))
add process p p > t t~, (t > w+ b, (w+ > n1 em, (n1 > ll j j))), (t~ > w- b~, w- > ll vv)
add process p p > t t~, (t > w+ b, (w+ > n1 em, (n1 > ll ll vv))), (t~ > w- b~, w- > j j)
output test
launch
set nevents 5000
set no_parton_cut
set dynamical_scale_choice -1
set lpp1 1
set lpp2 1
set ebeam1 6500
set ebeam2 6500
set use_syst F
set maxjetflavor 4
set mn1 40
set mn2 9999999
set mn3 9999999
set ven1 0.01
set ven2 0.
set ven3 0.
set vmun1 0.
set vmun2 0.
set vmun3 0.
set vtan1 0.
set vtan2 0.
set vtan3 0.
set wn1 Auto
set wn2 100
set wn3 100

Thanks!
Sihyun.

Question information

Language:
English
Status:
Solved
For:
MadGraph5_aMC@NLO
Solved by:
Olivier Mattelaer
Revision history for this message
Sihyun Jeon (shjeon) said :
#1

Or it might be a false alarm... let me test a few more things.

Revision history for this message
Sihyun Jeon (shjeon) said :
#2

Oh no, this seems to be true (at least under my computing environment).
But the effect is more noticeable if you switch "set mn1 40" to "set mn1 10" in the command lines I posted above.

Revision history for this message
Best Olivier Mattelaer (olivier-mattelaer) said :
#3

I guess that the issue is that you have a very fast code but a lot of different matrix elements, and that you are therefore dominated by compilation time in both versions.
However, since compilation time has increased a lot with 2.9.x due to the helicity-recycling optimization, your code is slower.
Instead of doing
output
you can do
output --hel_recycling=False

(or alternatively you can also turn off helicity recycling at run time via
set hel_recycling False)
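
For example, in the test.cmd above this would amount to changing only the output line (a minimal sketch; every other line stays as you posted it):

output test --hel_recycling=False

or, keeping "output test" unchanged, adding the run-time switch among the set commands after launch:

launch
set hel_recycling False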

Cheers,

Olivier

Revision history for this message
Sihyun Jeon (shjeon) said :
#4

Thanks Olivier Mattelaer, that solved my question.