Compilation error when running SMEFTatNLO using v3.0.4

Asked by Hesham El Faham

Hi, I am trying to run this process with SMEFTatNLO, using the 3.0.4_eft_running branch:
generate p p > t t t~ t~ NP^2==2 QCD<=4 QED<=4 YT<=4 [LOonly=QCD]
I am obviously running at LO; I am only using the [] syntax to access the fixed-order (FO) environment.
When I try to generate events using python ./bin/generate_events, I run into the following error:
"MadGraph5Error: A compilation Error occurs when trying to compile /nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source."
I attach the complete debug file:
-->
launch
Traceback (most recent call last):
  File "/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/bin/internal/extended_cmd.py", line 1544, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/bin/internal/extended_cmd.py", line 1493, in onecmd_orig
    return func(arg, **opt)
  File "/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/bin/internal/amcatnlo_run_interface.py", line 1783, in do_launch
    self.compile(mode, options)
  File "/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/bin/internal/amcatnlo_run_interface.py", line 5260, in compile
    misc.compile(cwd = sourcedir)
  File "/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/bin/internal/misc.py", line 553, in compile
    raise MadGraph5Error(error_text)
MadGraph5Error: A compilation Error occurs when trying to compile /nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source.
The compilation fails with the following output message:
    gfortran -O -fno-automatic -ffixed-line-length-132 -c alfas_functions_lhapdf6.f
    gfortran -O -fno-automatic -ffixed-line-length-132 -c -o setrun.o setrun.f
    run.inc:75.21:
        Included at setrun.f:12:

          common/to_rwgt/ do_rwgt_scale, rw_Fscale_down, rw_Fscale_up, rw_Rscale_do
                         1
    Warning: Padding of 4 bytes required before 'rw_fscale_down' in COMMON 'to_rwgt' at (1); reorder elements or use -fno-align-commons
    run.inc:10.22:
        Included at setrun.f:12:

          common/to_scale/scale,scalefact,ellissextonfact,alpsfact,fixed_ren_scale,
                          1
    Warning: Padding of 4 bytes required before 'mue_ref_fixed' in COMMON 'to_scale' at (1); reorder elements or use -fno-align-commons
    rm -f ../lib/libgeneric.a
    ar cru libgeneric.a alfas_functions_lhapdf6.o rw_routines.o kin_functions.o run_printout.o dgauss.o ranmar.o setrun.o derivative.o zerox64_cernlib.o extra_weights.o
    ranlib libgeneric.a
    mv libgeneric.a ../lib/
    cp -f extra_weights.mod ../lib/
    rm -f alfas_functions_lhapdf6.o
    rm -f ../lib/libpdf.a
    cd PDF; make
    make[1]: Entering directory `/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source/PDF'
    ar cru ../../lib/libpdf.a pdfwrap_lhapdf.o pdf_lhapdf6.o pdg2pdf_lhapdf6.o opendata.o
    ranlib ../../lib/libpdf.a
    make[1]: Leaving directory `/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source/PDF'
    rm -f ../lib/libmodel.a
    cd MODEL; make
    make[1]: Entering directory `/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source/MODEL'
    gfortran -O -fno-automatic -ffixed-line-length-132 -c -o couplings.o couplings.f
    couplings.f:41: Error: Can't open included file '../maxparticles.inc'
    make[1]: *** [couplings.o] Error 1
    make[1]: Leaving directory `/nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test/Source/MODEL'
    make: *** [../lib/libmodel.a] Error 2

Please try to fix this compilations issue and retry.
Help might be found at https://answers.launchpad.net/mg5amcnlo.
If you think that this is a bug, you can report this at https://bugs.launchpad.net/mg5amcnlo
Value of current Options:
              text_editor : None
      notification_center : True
       cluster_local_path : /cvmfs/cp3.uclouvain.be/madgraph/
    cluster_status_update : (900, 60)
               hepmc_path : None
          pythia-pgs_path : None
              thepeg_path : None
        madanalysis5_path : None
                 run_mode : 1
        cluster_temp_path : None
            cluster_queue : None
         madanalysis_path : None
                   lhapdf : /cvmfs/cp3.uclouvain.be/lhapdf/lhapdf-6.1.5_amd64_gcc44/bin/lhapdf-config
            f2py_compiler : None
                    ninja : /home/users/e/l/elfaham/pheno_work/working_dir/3.0.4_eft_running/HEPTools/lib
   automatic_html_opening : False
       cluster_retry_wait : 300
      exrootanalysis_path : None
                  timeout : 60
                  nb_core : 5
        f2py_compiler_py2 : None
        f2py_compiler_py3 : None
         fortran_compiler : None
                  collier : /home/users/e/l/elfaham/pheno_work/working_dir/3.0.4_eft_running/HEPTools/lib
             pythia8_path : None
                hwpp_path : None
                  td_path : None
             delphes_path : None
              auto_update : 7
             cluster_type : slurm
               eps_viewer : None
                  fastjet : /home/ucl/cp3/elfaham/pheno_work/tZW_work/fastjet-3.3.4/fastjet-install/bin/fastjet-config
              web_browser : None
             cluster_size : 150
           cluster_memory : None
             stdout_level : None
               lhapdf_py3 : None
               lhapdf_py2 : None
             cluster_time : None
         cluster_nb_retry : 1
                 mg5_path : /home/users/e/l/elfaham/pheno_work/working_dir/3.0.4_eft_running
             syscalc_path : None
             cpp_compiler : None
---
set group_subprocesses Auto
set ignore_six_quark_processes False
set max_t_for_channel 99
set loop_optimized_output True
set low_mem_multicore_nlo_generation False
set default_unset_couplings 99
set include_lepton_initiated_processes False
set zerowidth_tchannel True
set nlo_mixed_expansion True
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model SMEFTatNLO_coupling_yt-LO_4tops
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
define p = 21 2 4 1 3 -2 -4 -1 -3 5 -5 # pass to 5 flavors
define j = p
define q = u c d s u~ c~ d~ s~ b b~
define p = g u c d s u~ c~ d~ s~ b b~
generate p p > t t t~ t~ NP^2==2 QCD<=4 QED<=4 YT<=4 [LOonly=QCD]
output /nfs/scratch/fynu/elfaham/4tops_dir/dim64f/sm_dim64f_test
---
###################################
## INFORMATION FOR DIM6
###################################
Block dim6
    1 1.000000e+03 # Lambda

###################################
## INFORMATION FOR DIM64F
###################################
Block dim64f
    1 1.0 # cQq83
    2 1.0e-10 # cQq81
    3 1.0e-10 # cQu8
    4 1.0e-10 # ctq8
    6 1.0e-10 # cQd8
    7 1.0e-10 # ctu8
    8 1.0e-10 # ctd8
   10 1.0e-10 # cQq13
   11 1.0e-10 # cQq11
   12 1.0e-10 # cQu1
   13 1.0e-10 # ctq1
   14 1.0e-10 # cQd1
   16 1.0e-10 # ctu1
   17 1.0e-10 # ctd1
   19 1.0e-10 # cQQ8..
---
#***********************************************************************
# Tag name for the run (one word) *
#***********************************************************************
  tag_1 = run_tag ! name of the run
#***********************************************************************
# Number of LHE events (and their normalization) and the required *
# (relative) accuracy on the Xsec. *
# These values are ignored for fixed order runs *
#***********************************************************************
 10000 = nevents ! Number of unweighted events requested
 -1.0 = req_acc ! Required accuracy (-1=auto determined from nevents)
 -1 = nevt_job! Max number of events per job in event generation.
                 ! (-1= no split).
#***********************************************************************
# Output format
#***********************************************************************
  -1.0 = time_of_flight ! threshold (in mm) below which the invariant livetime is not written (-1 means not written)
  average = event_norm ! average/sum/bias. Normalization of the weight in the LHEF
#***********************************************************************
# Number of points per integration channel (ignored for aMC@NLO runs) *
#***********************************************************************
 0.01 = req_acc_FO ! Required accuracy (-1=ignored, and use the
                           ! number of points and iter. below)
# These numbers are ignored except if req_acc_FO is equal to -1
 5000 = npoints_FO_grid ! number of points to setup grids
 4 = niters_FO_grid ! number of iter. to setup grids
 10000 = npoints_FO ! number of points to compute Xsec
 6 = niters_FO ! number of iter. to compute Xsec
#***********************************************************************
# Random number seed *
#***********************************************************************
 0 = iseed ! rnd seed (0=assigned automatically=default))
#***********************************************************************
# Collider type and energy *
#***********************************************************************
 1 = lpp1 ! beam 1 type (0 = no PDF)
 1 = lpp2 ! beam 2 type (0 = no PDF)
 6500.0 = ebeam1 ! beam 1 energy in GeV
 6500.0 = ebeam2 ! beam 2 energy in GeV
#***********************************************************************
# PDF choice: this automatically fixes also alpha_s(MZ) and its evol. *
#***********************************************************************
 lhapdf = pdlabel ! PDF set
 303400 = lhaid ! If pdlabel=lhapdf, this is the lhapdf number. Only
              ! numbers for central PDF sets are allowed. Can be a list;
              ! PDF sets beyond the first are included via reweighting.
#***********************************************************************
# Include the NLO Monte Carlo subtr. terms for the following parton *
# shower (HERWIG6 | HERWIGPP | PYTHIA6Q | PYTHIA6PT | PYTHIA8) *
# WARNING: PYTHIA6PT works only for processes without FSR!!!! *
#***********************************************************************
  PYTHIA8 = parton_shower
  1.0 = shower_scale_factor ! multiply default shower starting
                                  ! scale by this factor
#***********************************************************************
# Renormalization and factorization scales *
# (Default functional form for the non-fixed scales is the sum of *
# the transverse masses divided by two of all final state particles *
# and partons. This can be changed in SubProcesses/set_scales.f or via *
# dynamical_scale_choice option) *
#***********************************************************************
 True = fixed_ren_scale ! if .true. use fixed ren scale
 True = fixed_fac_scale ! if .true. use fixed fac scale
 340.0 = muR_ref_fixed ! fixed ren reference scale
 340.0 = muF_ref_fixed ! fixed fact reference scale
 -1 = dynamical_scale_choice ! Choose one (or more) of the predefined
           ! dynamical choices. Can be a list; scale choices beyond the
           ! first are included via reweighting
 1.0 = muR_over_ref ! ratio of current muR over reference muR
 1.0 = muF_over_ref ! ratio of current muF over reference muF..
---
Could you please help with this?

Please note that running with 3.0.3 works fine, so I presume it is related to the mg5 version.
Best,
Hesham

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Olivier Mattelaer
Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#1

Hi,

I do not think that we did any validation of the LOonly mode with that branch,
so it worries me that you are trying to use that mode.

This being said, which revision are you using?
The latest revision should already put strong restrictions on the running when running at NLO (i.e. only fixed scales should be possible).
Is that the version that you are using?

Cheers,

Olivier

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#2

Just in case, I have merged that branch with the official 3.3.1 version.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#3

Hi,

I was using rev. 976. Now I have switched to 3.3.1. I noticed two things:
1) I don't get the coefficient-running block in the run card; is that expected?
2) I get the following error when running ./bin/generate_events with python2/3:
INFO: compile directory
Unhandled exception in thread started by <function call at 0x7f14509149d0>
Traceback (most recent call last):
  File "/usr/lib/python2.7/subprocess.py", line 172, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
Despite the error, the code still runs, but I don't know why this is happening.

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#4

> 1) I don't get the coefficients running block in the run card, is that expected?

Yes, we realised at the Bonn meeting that those were not correctly implemented (it looks like Eleni/Rafael were aware of that, but I was not). So it was important to remove them to avoid someone using them in the future.

> 2) I get the following error when python2/3 ./bin/generate_events:
> INFO: compile directory
> Unhandled exception in thread started by <function call at 0x7f14509149d0>
> Traceback (most recent call last):
> File "/usr/lib/python2.7/subprocess.py", line 172, in call
> return Popen(*popenargs, **kwargs).wait()
> File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
> errread, errwrite)
> File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
> raise child_exception
> OSError: [Errno 2] No such file or directory
> Even though the error, the code still runs, but I don't know why it is happening.

Do you have that with python3 too?
Python2 is going to be dropped, so if this is only a python2 issue then it does not matter, since this will not be released in a version of mg5amc that still supports python2.

The error is something to worry about (I guess), since it might mean that you are not running with the parameters of the param_card and/or run_card...
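
(For reference, that OSError is the generic failure subprocess.call raises when the program it is asked to run does not exist; a minimal sketch, independent of MG5 and using a deliberately made-up path, that reproduces the same message:)

    import subprocess

    # Hypothetical, nonexistent interpreter path: on python2 this raises
    # OSError: [Errno 2] No such file or directory
    # (on python3 it is FileNotFoundError, a subclass of OSError).
    subprocess.call(["/no/such/python", "--version"])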

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#5

Thanks for the quick reply. I get this with python3 too, yes.

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#6

Can you send me your model, together with the process and benchmark point that you use?

Cheers,

Olivier

PS: note that replying to this email will automatically drop attachments, since this is a question on Launchpad

Revision history for this message
Hesham El Faham (helfaham) said :
#7

Hi again,
I learned that using the running feature in 4-top production is what was causing the error, so I dropped it and I am now using the normal SMEFTatNLO model. I am now running:
generate p p > t t t~ t~ NP^2==2 QCD<=4 QED<=4 YT<=4 [QCD]
but then I choose LO when doing ./bin/generate_events. For this I have tried a couple of mg5 versions, but I consistently get the following error:
    --> raise aMCatNLOError('Some tests failed, run cannot continue. Please search on https://answers.launchpad.net/mg5amcnlo for more information, and in case there is none, report the problem there.')
which is basically this failure:
--> Collinear test 20 FAILED. Fraction of failures: 5.24
Could you please help with that?

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#8

Hi,

As we have communicated by email, for the past 48h I have been trying to fix the issue for
generate p p > t t t~ t~ NP^2==2 QCD<=4 QED<=4 YT<=4
(at LO); so far, even with the non-running version, I have huge issues with that process.
So let me first try to understand what the issue is at LO before even looking at NLO.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#9

Hi,

Sure, please take your time! I thought it was only problematic in running.

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#10

I actually have one question for you.
Is it possible that
>> generate g g > t t t~ t~ NP^2==2 QCD<=4 QED<=4 YT<=4
(so gluon initial state and LO)

is actually zero, for some reason? (If so, is it zero at the cross-section level, on the sum over helicities, or per helicity?)
If it is, this might explain a lot of the weird behaviour that we face with this process/model.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#11

I have now run this process with all four-fermion operators set to their default values; this is what I get:
Cross-section : -0.00226 +- 1.277e-05 pb

Best,
Hesham

Revision history for this message
Hesham El Faham (helfaham) said :
#12

And from the ongoing study we know that this process interferes with the SM through the four-fermion operator contributions, so perhaps it should not be zero.

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#13

Depending on the optimization options and/or version, I also get
12.01 +- 27
or
244 +- 127
or
520 pb (did not note the error on this one)
or
-153 (same for the error)
or
-0.001113 (with ~1e-5 error)

So I would not trust MG for any value that it returns, especially since a lot of those runs should actually have returned the exact same number. Maybe the issue is related to the numerical accuracy of the amplitude, triggered randomly depending on the (random) ordering of the operations.
(The ordering of the Feynman diagrams is not fixed between two consecutive runs, and the same is true within the ALOHA output: a - b + c is sometimes evaluated as a - b + c and sometimes as a + c - b, which is not the same if a and b are very large and close to each other while c is small.)
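
As a minimal illustration of that last point (plain Python, nothing MG5-specific, with made-up values):

    # a and b are large and nearly equal, c is small:
    a = 1.0e16
    b = 1.0e16 + 2.0
    c = 1.0

    print(a - b + c)   # left to right: (a - b) + c = -2.0 + 1.0 = -1.0
    print(a + c - b)   # c is absorbed into (a + c), leaving exactly -2.0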

Cheers,

Olivier

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#14

This would also explain why runs with hel_recycling and without hel_recycling differ quite a lot, since the two differ only (or close to it, at least) in the way the matrix element is computed (by changing the order of the elements in the sum).

Cheers,

Olivier

PS: and also why this only occurs for this process
PPS: and why looking at the differences between the various versions returning different numbers points in quite absurd directions (one pointing as if this were an issue in the generation of the phase space, another as if it were an issue with the PDFs)

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#15

Sorry for the avalanche of emails,

but the numerical issue might actually also explain the issue at NLO.
I have also tried setting the width of the top to zero and/or allowing the width in the t-channel, but this has no impact.

So after 48h of looking into this process/model, I will probably stop my investigation here with the conclusion that such an evaluation of the matrix element can be numerically unstable, and that you might therefore need to study which cancellations occur and then rewrite the code by hand to avoid those numerical issues (and might therefore need different code for different benchmark points).
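
(As an illustration of the idea behind "rewriting by hand", here is a sketch in plain Python rather than the generated Fortran; math.fsum computes an exactly rounded sum, independent of the ordering of the terms:)

    import math

    # Hypothetical contributions: two large, nearly cancelling terms plus a
    # small one that a naive running sum silently drops.
    terms = [1.0e16, 1.0, -1.0e16]

    print(sum(terms))        # naive left-to-right sum: 0.0 (the 1.0 is lost)
    print(math.fsum(terms))  # exactly rounded: 1.0, whatever the order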

In that context, going to NLO (or running) will obviously be very difficult.

Sorry,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#16

I see, thanks for the feedback. I might follow up with you on this later, if I may.

Actually, I wasn't aiming to study the process at NLO; I wanted to access the fixed-order (FO) environment, so I used the NLO syntax since I wasn't sure that [LOonly=QCD] would work.

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#17

And what would be the point of using [LOonly=QCD] here?
That code is much less optimized than the normal LO code (both in terms of the method used to evaluate the matrix element, the phase-space optimization, the possibilities for cuts, ...). For this syntax I really do not see the point, but I guess I might be missing something (systematics maybe?).

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#18

I want to use an HwU analysis, and I don't know how to do that without using the [] syntax.

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#19

Interesting,

I always considered making plots from the LHE file much easier (and more flexible) than using HwU, but I might be wrong on that.

This being said, the hypothesis that the numerical precision of the matrix element is to blame might not be correct:

1) doing "check g g > t t~ t t~ NP^2==2 QCD<=6 QED<=6" do not detect any issue.

2) If I do a diff of two generated codes, returning

-0.00113 ± 4.9e-05 and -153.9 ± 2.4e+02

I have the following:

> [2.7.3-lepcoll]$ diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/ PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/ | grep -v Binary | grep -v subdirectories
> diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/coupl.inc PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/coupl.inc
> 19c19
> < DOUBLE PRECISION MDL_MH,MDL_MT,MDL_MZ,MDL_MW
> ---
> > DOUBLE PRECISION MDL_MW,MDL_MZ,MDL_MH,MDL_MT
> 21c21
> < COMMON/MASSES/ MDL_MH,MDL_MT,MDL_MZ,MDL_MW
> ---
> > COMMON/MASSES/ MDL_MW,MDL_MZ,MDL_MH,MDL_MT
> 24c24
> < DOUBLE PRECISION MDL_WT,MDL_WZ,MDL_WH,MDL_WW
> ---
> > DOUBLE PRECISION MDL_WZ,MDL_WW,MDL_WH,MDL_WT
> 26c26
> < COMMON/WIDTHS/ MDL_WT,MDL_WZ,MDL_WH,MDL_WW
> ---
> > COMMON/WIDTHS/ MDL_WZ,MDL_WW,MDL_WH,MDL_WT
> diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/genps.f PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/genps.f
> 257a258
> > stop 1
> 1882c1883
> <
> ---
> > stop 1
> 1930c1931
> <
> ---
> > stop 1
> 1996c1997
> <
> ---
> > stop 1
> diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/matrix1_optim.f PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/matrix1_optim.f
> 394d393
> < DOUBLE PRECISION FK_MDL_WT
> 396d394
> < DOUBLE PRECISION FK_ZERO
> 398c396,397
> < SAVE FK_MDL_WT
> ---
> > DOUBLE PRECISION FK_MDL_WT
> > DOUBLE PRECISION FK_ZERO
> 400d398
> < SAVE FK_ZERO
> 401a400,401
> > SAVE FK_MDL_WT
> > SAVE FK_ZERO
> diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/matrix1_orig.f PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/matrix1_orig.f
> 394d393
> < DOUBLE PRECISION FK_MDL_WT
> 396d394
> < DOUBLE PRECISION FK_ZERO
> 398c396,397
> < SAVE FK_MDL_WT
> ---
> > DOUBLE PRECISION FK_MDL_WT
> > DOUBLE PRECISION FK_ZERO
> 400d398
> < SAVE FK_ZERO
> 401a400,401
> > SAVE FK_MDL_WT
> > SAVE FK_ZERO
> diff PROC_SMEFTatNLO-LO_47/SubProcesses/P1_gg_ttxttx/template_matrix1.f PROC_SMEFTatNLO-LO_46/SubProcesses/P1_gg_ttxttx/template_matrix1.f
> 217d216
> < double precision fk_mdl_WT
> 219d217
> < double precision fk_ZERO
> 221c219,220
> < save fk_mdl_WT
> ---
> > double precision fk_mdl_WT
> > double precision fk_ZERO
> 223d221
> < save fk_ZERO
> 224a223,224
> > save fk_mdl_WT
> > save fk_ZERO

So one can see that the matrix element itself is the same. This rules out the possibility that the difference is related to the ordering of the additions/subtractions.

The difference in genps seems to have an impact, but no run hits one of those "stop 1" statements, so I have no clue how such a sanity check can affect the computation. This sounds like a compilation glitch, but I set the compiler checks to the maximum and it did not raise any warning.

I am now doing 10 runs of each of those two codes to see whether the results are stable on both sides.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#20

Yes, HwU ended up being easier for me to do :)

Thanks for checking that, I'd be interested to know the outcome of this.

Best,
Hesham

Revision history for this message
Hesham El Faham (helfaham) said :
#21

Hi again Olivier,

Are there any updates on this?

On a different note, I think the process at LO works fine for me, and I see the cross section fairly stable across several versions.

Anyway, I decided to proceed, and I am still trying to get [LOonly=QCD] to work because I will also need to split the SMEFT contributions later on. I realised that the generation fails with this syntax because of the following error I saw in the log file of the subprocesses:
"Too many gridpacks to keep track off"
Is there a way to get around that?

I also see that [LOonly=QCD] or even [LOonly=all] generates more diagrams than the normal LO process (probably duplicates); is that understood?

Best,
Hesham

Revision history for this message
Olivier Mattelaer (olivier-mattelaer) said :
#22

Hi,

I succeeded in getting a version of the code where changing one line allows me to change the cross-section:
changing any line in that area of the code (like adding a print statement, a crash in some unused function, ...) changes the cross-section to a random number.

Since those lines are dedicated to e+e- colliders, I can literally change them however I want, and this changes the cross-section in an unpredictable way, while it should not have any impact since none of those lines is executed at run time.

So my understanding is that this is an issue related to the compiler. By adding such an irrelevant change, the code likely ends up with a different memory layout or different assembler code, and for some reason this code is sensitive to such side effects.
The reason is likely a syntax error somewhere in the code (maybe not even in the file where I add those lines).

Since I'm currently working on my other job, I do not expect to work on this for a couple of days (and I'm anyway out of ideas of what I can do).

My guess is that this is an issue introduced within 3.2.0, so if you use the long-term stable version, you should not be sensitive to it. But I cannot give any warranty for any version.

> I see also that [LOonly=QCD] or even [LOonly=all] generate more diagrams
> than the normal LO process (probably duplicates), is that understood?

I have never used LOonly, so it is maybe better to open another thread on that, which I can assign to the expert on that mode.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#23

Thanks for the update!

I opened another thread with the latest issue.

Best,
Hesham

Revision history for this message
Best Olivier Mattelaer (olivier-mattelaer) said :
#24

Hi,

Concerning the madevent issue,
I would advise working with the development version 3.3.2:
https://bazaar.launchpad.net/~maddevelopers/mg5amcnlo/3.3.2/revision/976
I have made a series of patches, and all of them combined seem to fix the issue.
(Or maybe it is just hidden now on my machine; there is no way to be sure, since this is something at the compiler level where, even with the flags at maximum level, the compiler did not raise a single warning...)

Thanks a lot for this report; it took me a while, but it was more than worth it.

Cheers,

Olivier

Revision history for this message
Hesham El Faham (helfaham) said :
#25

Hi,

I have tried with the 3.3.2 branch and I still get the same error when trying to generate events using lhapdf:
 File "./bin/generate_events", line 43, in <module>
    subprocess.call([sys.executable] + ['-O'] + sys.argv)
  File "/usr/lib64/python2.7/subprocess.py", line 524, in call
INFO: start to update the status
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib64/python2.7/subprocess.py", line 1376, in wait
    pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
  File "/usr/lib64/python2.7/subprocess.py", line 478, in _eintr_retry_call
    return func(*args)

Best,
Hesham

Revision history for this message
Hesham El Faham (helfaham) said :
#26

I confirm that things work fine now. I had to define a PYTHONPATH in my .bashrc. Thanks for your help!
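
(In case it helps anyone else: bin/generate_events re-executes itself through subprocess.call, which only sees the environment it inherits, so a PYTHONPATH defined in ~/.bashrc must actually be exported in the launching shell. A quick check in plain Python:)

    import os

    # If this prints "<not set>", the shell launching MG5 never exported
    # the PYTHONPATH defined in ~/.bashrc.
    print(os.environ.get("PYTHONPATH", "<not set>"))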

Best,
Hesham

Revision history for this message
Hesham El Faham (helfaham) said :
#27

Thanks Olivier Mattelaer, that solved my question.