Huge SubProcesses folder after runs

Asked by Maksym Ovchynnikov

Hi!

I launch some processes of the type

generate p p > particle
add process p p > particle j

and then launch multiple times. I turn off showering and study systematics with LHAPDF. For each run, the SubProcesses folder accumulates log files with an overall size > 1 GB, and my disk space quickly runs out. An example of these files is

SubProcesses/P2_gq_qpq/G1/run_20_log.txt

For it, I launched the production of a 2 GeV particle called "particle" for the configuration E_p1 = 400 GeV and E_p2 = 0. The content of the file suggests the reason: the code switched the ptj cut to xqcut = 30 GeV "to improve integration efficiency". This is far too large for such collision energies (and questionable in general, I would say) and leads to a flood of errors overfilling the log file.

Could you please tell me whether I can avoid such a large cut?

```
==== LHAPDF6 USING DEFAULT-TYPE LHAGLUE INTERFACE ====
LHAPDF 6.5.4 loading /home/name/Downloads/mg5/HEPTools/lhapdf6_py3//share/LHAPDF/NNPDF31_nnlo_as_0118/NNPDF31_nnlo_as_0118_0000.dat
NNPDF31_nnlo_as_0118 PDF set, member #0, version 1; LHAPDF ID = 303600
 Process in group number 2
 A PDF is used, so alpha_s(MZ) is going to be modified
 Old value of alpha_s from param_card: 0.11800220000000000
 New value of alpha_s from PDF lhapdf : 0.11800208008122040
 using LHAPDF
 Warning! ptj set to xqcut= 30.000000000000000 to improve integration efficiency
 Note that this might affect non-radiated jets,
 e.g. from decays. Use cut_decays=F in run_card.
 Warning! mmjj set to xqcut= 30.000000000000000 to improve integration efficiency
 Note that this might affect non-radiated jets,
 e.g. from decays. Use cut_decays=F in run_card
 Define smin to 900.00000000000000
 Define smin to 900.00000000000000
 Define smin to 900.00000000000000
 Define smin to 900.00000000000000
 *****************************************************
 * MadGraph/MadEvent *
 * -------------------------------- *
 * http://madgraph.hep.uiuc.edu *
 * http://madgraph.phys.ucl.ac.be *
 * http://madgraph.roma2.infn.it *
 * -------------------------------- *
 * *
 * PARAMETER AND COUPLING VALUES *
 * *
 *****************************************************

  External Params
  ---------------------------------

 aEWM1 = 127.90000000000001
 mdl_Gf = 1.1663700000000000E-005
 aS = 0.11800220000000000
 mdl_ymb = 4.7000000000000002
 mdl_ymt = 172.00000000000000
 mdl_ymtau = 1.7769999999999999
 mdl_cs = 1.0000000000000000
 mdl_cGs = 1.0000000000000000
 mdl_cG = 1.0000000000000000
 mdl_cq = 1.0000000000000000
 mdl_cn = 1.0000000000000000
 mdl_cv = 1.0000000000000000
 mdl_cbl = 1.0000000000000000
 mdl_cn1 = 0.23269999999999999
 mdl_cn2 = 0.18709999999999999
 mdl_MUq = 2.1600000000000000E-003
 mdl_MDq = 4.6699999999999997E-003
 mdl_MSq = 9.2999999999999999E-002
 mdl_MCq = 1.2749999999999999
 mdl_MBq = 4.7000000000000002
 mdl_MZ = 91.187600000000003
 mdl_MTA = 1.7769999999999999
 mdl_MT = 172.00000000000000
 mdl_MB = 4.7000000000000002
 mdl_MH = 125.00000000000000
 mdl_Ms = 5.0999999999999996
 mdl_Ma = 3.7999999999999998
 mdl_Mn = 11.000000000000000
 mdl_Mdp = 2.0000000000000000
 mdl_Mbl = 5.0000000000000000
 mdl_WZ = 2.4952000000000001
 mdl_WW = 2.0850000000000000
 mdl_WT = 1.5083359999999999
 mdl_WH = 4.0699999999999998E-003
 mdl_Wsp = 1.0000000000000000
 mdl_Wa = 1.0000000000000000
 mdl_WN = 1.0000000000000000
 mdl_WDP = 1.0000000000000000
 mdl_Wbl = 1.0000000000000000
  Internal Params
  ---------------------------------

 mdl_MZ__exp__2 = 8315.1783937600012
 mdl_MZ__exp__4 = 69142191.720053151
 mdl_sqrt__2 = 1.4142135623730951
 mdl_MH__exp__2 = 15625.000000000000
 mdl_complexi = (0.0000000000000000,1.0000000000000000)
 mdl_aEW = 7.8186082877247844E-003
 mdl_MW = 79.824359746197842
 mdl_sqrt__aEW = 8.8422894590285753E-002
 mdl_ee = 0.31345100004952897
 mdl_MW__exp__2 = 6371.9284088904105
 mdl_sw2 = 0.23369913342182447
 mdl_cw = 0.87538612427783857
 mdl_sqrt__sw2 = 0.48342438232036300
 mdl_sw = 0.48342438232036300
 mdl_g1 = 0.35807170271074895
 mdl_gw = 0.64839716719502682
 mdl_vev = 246.22056907348590
 mdl_vev__exp__2 = 60624.568634871241
 mdl_lam = 0.12886689630821144
 mdl_yb = 2.6995322804122722E-002
 mdl_yt = 0.98791394091683138
 mdl_ytau = 1.0206529494239589E-002
 mdl_muH = 88.388347648318444
 mdl_I1a33 = (2.69953228041227219E-002,0.0000000000000000)
 mdl_I2a33 = (0.98791394091683138,0.0000000000000000)
 mdl_I3a33 = (0.98791394091683138,0.0000000000000000)
 mdl_I4a33 = (2.69953228041227219E-002,0.0000000000000000)
 mdl_ee__exp__2 = 9.8251529432049817E-002
 mdl_sw__exp__2 = 0.23369913342182450
 mdl_cw__exp__2 = 0.76630086657817542
  Internal Params evaluated point by point
  ----------------------------------------

 mdl_sqrt__aS = 0.34351448295523146
 mdl_G__exp__2 = 1.4828593785097339
 mdl_G__exp__3 = 1.8057181045455082
 mdl_G__exp__4 = 2.1988719364342741
  Couplings of fips-jets
  ---------------------------------

        GC_14 = 0.00000E+00 0.12177E+01
        GC_33 = -0.00000E+00 -0.10095E+00
        GC_34 = 0.00000E+00 0.20191E+00

 Collider parameters:
 --------------------

 Running at P P machine @ 38.740160040970402 GeV
 PDF set = lhapdf
 alpha_s(Mz)= 0.1180 running at 3 loops.
 alpha_s(Mz)= 0.1180 running at 3 loops.
 Renormalization scale set on event-by-event basis
 Factorization scale set on event-by-event basis

 getting user params
Enter number of events and max and min iterations:
 Number of events and iterations 1000 5 3
Enter desired fractional accuracy:
 Desired fractional accuracy: 0.10000000000000001
Enter 0 for fixed, 2 for adjustable grid:
Suppress amplitude (0 no, 1 yes)?
 Using suppressed amplitude.
Exact helicity sum (0 yes, n = number/event)?
 Explicitly summing over helicities
Enter Configuration Number:
Running Configuration Number: 1
 Not subdividing B.W.
 Attempting mappinvarients 1 4
 Determine nb_t
 T-channel found: 0
 Completed mapping 4
 about to integrate 4 1000 5 3 4 1
 Using non-zero grid deformation.
  4 dimensions 1000 events 4 invarients 5 iterations 1 config(s), (0.99)
 Using h-tuple random number sequence.
 Error opening grid
 Using Uniform Grid! 16
 Using uniform alpha 1.0000000000000000
 Grid defined OK
 Set CM energy to 27.43
 Mapping Graph 1 to config 1
 Determine nb_t
 T-channel found: 0
Setting grid 1 0.53180E-02 1
 Transforming s_hat 1/s 3 1.1965544210340611 900.00000000000000 752.15968799999996
   1 1 2 3 4
 Masses: 0.000E+00 0.000E+00 0.200E+01 0.000E+00
Using random seed offsets 1 : 1
  with seed 138
 Ranmar initialization seeds 19620 9517
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000

 ********************************************
 * You are using the DiscreteSampler module *
 * part of the MG5_aMC framework *
 * Author: Valentin Hirschi *
 ********************************************

  Particle 3 4
      Et > 0.0 30.0
       E > 0.0 0.0
     Eta < -1.0 -1.0
   xqcut: 0.0 30.0
d R # 3 > -0.0 0.0
s min # 3> 0.0 0.0
xqcutij # 3> 0.0 0.0
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
 ERROR CMS ENERGY LESS THAN MINIMUM CMS ENERGY 752.15968799999996 900.00000000000000
```
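As a stopgap for the disk-space symptom itself, the accumulated per-run logs can be purged between launches. A minimal sketch (the path pattern follows the example file above; it assumes you no longer need the old logs):

```python
import pathlib

def purge_run_logs(subprocesses_dir: str) -> int:
    """Delete accumulated run_*_log.txt files under a SubProcesses folder.

    Returns the number of files removed; a safe no-op if the folder is absent.
    """
    root = pathlib.Path(subprocesses_dir)
    count = 0
    if root.is_dir():
        for log in root.rglob("run_*_log.txt"):
            log.unlink()
            count += 1
    return count
```

This only treats the symptom, of course; the real fix is the cut discussed in the answer below.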

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Olivier Mattelaer
Olivier Mattelaer (olivier-mattelaer) said :
#1

Hi,

It looks like you are doing an MLM merging computation. Be aware that such a computation requires running the parton shower, since the removal of the double counting between your two multiplicities can only be done AFTER the parton shower. So if you do not run the parton shower, no cross-section/distribution/scale uncertainty/... is physically meaningful.

The merging scale that removes this double counting is technically set within the parton shower, but in order for MLM merging not to reject too many events after the parton shower, we use within the run_card a proxy for this type of cut via the xqcut parameter.

In your run, the value that you picked for xqcut is 30 GeV, which means that the merging scale at the parton-shower level should be at least 40 GeV. A 30 GeV xqcut also means that all jets should have at least 30 GeV; otherwise they should be simulated by a sample with one fewer parton.

So here it does make sense to automatically set ptj to 30, since any event below that pt will anyway be discarded by the xqcut value that you used. There is indeed a way to tell the code not to set ptj to the value of xqcut, but this is for cases where you have non-QCD jets (VBF processes, single top) and therefore have jets at the lowest multiplicity (in that case you should/can set auto_ptj_mjj to False). But setting that parameter is irrelevant here, since it will just trade one error for another.
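For reference, the switch mentioned above lives in the run_card; a minimal fragment (the inline comments are paraphrased, and as noted, flipping auto_ptj_mjj does not solve the underlying problem here):

```
 False = auto_ptj_mjj  ! automatic setting of ptj and mjj if xqcut > 0
 30.0  = xqcut         ! minimum kt jet measure between partons
```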

So to summarise, the real issues are:
1) your process with different multiplicities double counts some contributions, due to the resummation of the PDF and the renormalization of alpha_s;
2) MadGraph did (correctly) set you to MLM mode to remove this double counting, but the default cutoff scale is too large for your particular collision -> so you need to change xqcut;
3) you do need to run the parton shower with MLM mode to have consistent results.

Now the usual recommendation for setting the merging scale is to take the hard scale of your process and divide it by a number between 2 and 8, and then to subtract (at least) 10 GeV to obtain xqcut. If your particle is 2 GeV, this is clearly problematic, since you are very close to the QCD scale.
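As a sketch, the rule of thumb above can be written out explicitly (the function name and defaults are my own; a divisor of 4 is just one choice in the 2-8 range):

```python
def suggested_xqcut(hard_scale_gev: float, divisor: float = 4.0,
                    offset_gev: float = 10.0) -> float:
    """Rule of thumb: merging scale ~ hard scale / (2..8), then subtract
    ~10 GeV to obtain xqcut.  A result clamped to zero signals that MLM
    merging is not viable at this scale."""
    merging_scale = hard_scale_gev / divisor
    return max(merging_scale - offset_gev, 0.0)
```

For a hard scale near the 2 GeV particle mass this collapses to zero, which is exactly the point above: no sensible merging scale exists for this process.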

I am not sure what the best setup is to simulate this; I guess I would drop the zero-jet multiplicity and focus on
p p > particle j
One additional issue for this process is the default dynamical scale, which includes an effective cut at 4 GeV^2.
So you probably want to switch either to a fixed scale for QCD or at least to a non-default dynamical scale choice.
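A sketch of the corresponding fixed-scale settings in a standard MG5_aMC run_card (the 10 GeV value is purely illustrative and would need to be chosen for the process at hand):

```
 True = fixed_ren_scale  ! if .true. use fixed ren scale
 True = fixed_fac_scale  ! if .true. use fixed fac scale
 10.0 = scale            ! fixed renormalization scale
 10.0 = dsqrt_q2fact1    ! fixed fact scale for pdf1
 10.0 = dsqrt_q2fact2    ! fixed fact scale for pdf2
```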

Note that the default dynamical scale choice is mandatory for MLM merging, which again means that MLM merging is not what you want to do here.

Cheers,

Olivier

Maksym Ovchynnikov (name-xxx) said :
#2

Hi Olivier,

Thanks for the very detailed answer! The problem is clear now.

May I also ask a somewhat related question? I would like to set the particle's mass to a value below 2 GeV, but then the cross-section vanishes for an unknown reason (I am using dynamical scale choice 4). Of course, I am entering the problematic domain there, but I would like to get at least an estimate of the cross-section within perturbative QCD.

Could you please tell me which parameter controls this cut-off?

Olivier Mattelaer (olivier-mattelaer) said (best answer) :
#3

I actually do not know the answer to your question.

I would say that the issue can be the PDF, which starts to be zero (our default PDF does not allow the fit to be negative and therefore has some exactly-zero values). Another possible problem is that running alpha_s down to too low a scale can lead alpha_s to be NaN, which automatically discards the phase-space point. If either of these is the case, it is a clear sign that you cannot trust the result.

And obviously at such low scales, PDF factorization, and in particular collinear factorization, is likely not to hold, and you might need to use, at least, TMD factorization (which is not implemented in MadGraph).

Cheers,

Olivier

Maksym Ovchynnikov (name-xxx) said :
#4

Thanks Olivier Mattelaer, that solved my question.