About run_card.dat parameters at NLO

Asked by abulikim2011

Hello:

    I don't understand some variables in run_card.dat, for example req_acc_fo, npoints_fo_grid, niters_fo_grid, npoints_fo, niters_fo, and also req_acc, nevt_job. When I run a process, I can't understand the difference between the two parameters req_acc and req_acc_fo, and I can't find any literature or link that explains them. I also have a naive question about analysing the number of NLO events: if I want to make a figure of the event number per bin for different bin sizes (x axis: the bin size, y axis: the event number for that bin size), how can I do this analysis?

Thanks!

best

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: abulikim2011
Olivier Mattelaer (olivier-mattelaer) said:
#1

Hi,

First you have to understand the difference between a fixed-order computation (fNLO) and a computation matched to a parton shower (NLO+PS).
You can choose which one to run in the first question asked by MG5aMC.

A pure NLO computation (fNLO) has two drawbacks:
1) it cannot generate events that can go through the parton shower (if you try, you will face double counting);
2) the weight of the computation is not bounded (which makes event generation problematic).

These two problems are solved when doing NLO+PS by adding appropriate MC counter-terms (in our case following the MC@NLO prescription).

Because of this fundamental difference, some parameters of the run_card are only relevant for fixed-order computations (fNLO), while others only apply to NLO+PS generation.

As stated in the card itself, the following parameters are relevant for fixed-order runs:
> #***********************************************************************
> # Number of points per integration channel (ignored for aMC@NLO runs) *
> #***********************************************************************
> 0.01 = req_acc_fo ! Required accuracy (-1=ignored, and use the
> ! number of points and iter. below)
> # These numbers are ignored except if req_acc_FO is equal to -1
> 5000 = npoints_fo_grid ! number of points to setup grids
> 4 = niters_fo_grid ! number of iter. to setup grids
> 10000 = npoints_fo ! number of points to compute Xsec
> 6 = niters_fo ! number of iter. to compute Xsec

Since we do not generate unweighted events in this mode, the only thing to choose is the required accuracy of each (of the many) integrals to evaluate.
You can also set req_acc_fo to -1 and control each step of the computation manually, by choosing the number of iterations and the number of points per iteration yourself.
Those last four parameters are quite technical and rarely customised.
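To see what req_acc_fo controls, here is a toy sketch (plain Python, not MG5aMC code) of accuracy-driven integration: the integrator keeps adding points until the relative Monte Carlo error of the integral falls below the requested accuracy, instead of stopping after a fixed number of points and iterations. The function name mc_integrate and all numbers here are illustrative only.

```python
import math
import random

random.seed(0)  # make this toy example reproducible

def mc_integrate(f, a, b, req_acc=0.01, n_start=5000, max_iter=10):
    """Toy accuracy-driven Monte Carlo integration: keep doubling the
    number of points until the relative statistical error drops below
    req_acc (the analogue of req_acc_fo)."""
    n = n_start
    for _ in range(max_iter):
        vals = [f(random.uniform(a, b)) for _ in range(n)]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)
        integral = (b - a) * mean
        error = (b - a) * math.sqrt(var / n)
        if error < req_acc * abs(integral):
            break  # requested relative accuracy reached
        n *= 2  # not accurate enough: double the statistics
    return integral, error

# Integrate x**2 on [0, 1] to 0.5% relative accuracy; exact answer is 1/3.
val, err = mc_integrate(lambda x: x * x, 0.0, 1.0, req_acc=0.005)
print(val, err)
```

Setting req_acc_fo = -1 in the run_card corresponds to skipping the accuracy test and instead fixing the number of points and iterations by hand, as the four manual parameters do.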

The following parameters are related to the NLO+PS computation:
> #***********************************************************************
> # Number of LHE events (and their normalization) and the required *
> # (relative) accuracy on the Xsec. *
> # These values are ignored for fixed order runs *
> #***********************************************************************
> 10000 = nevents ! Number of unweighted events requested
> -1.0 = req_acc ! Required accuracy (-1=auto determined from nevents)
> -1 = nevt_job ! Max number of events per job in event generation.
> ! (-1= no split).

Here the typical workflow is to ask the code to generate unweighted events, which is why the parameter you have to choose is "nevents".
The second parameter, the required accuracy on the cross section, is typically not specified (as in the LO case): with req_acc = -1 it is determined automatically from nevents.

nevt_job is a problem-linearisation mechanism. The complexity of our computation does not rise linearly with the number of events (probably quadratically, but I do not know exactly). An easy trick to avoid this slowdown for large generations is to stop and restart the computation after every N events, so that the cost for a total of n events goes like N**2 * (n/N) = N*n, which is much better than n**2.
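The arithmetic behind this trick can be spelled out with a short sketch (the quadratic cost is a rough estimate from the answer above, not a measured law, and the function names are hypothetical):

```python
def cost_single_job(n):
    """Hypothetical quadratic cost of generating n events in one job."""
    return n ** 2

def cost_split(n, N):
    """Cost when restarting every N events: n/N jobs, each of cost N**2,
    i.e. N**2 * (n/N) = N * n in total."""
    n_jobs = n // N
    return n_jobs * N ** 2

n, N = 1_000_000, 10_000
print(cost_single_job(n))  # prints 1000000000000
print(cost_split(n, N))    # prints 10000000000, i.e. 100x cheaper
```

So for a million events with nevt_job = 10000, the splitting would save a factor n/N = 100 under this assumed scaling.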

> if I want to make a figure of the event number per bin for different
> bin sizes (x axis: the bin size, y axis: the event number for that
> bin size), how can I do this analysis?

The answer depends on whether you want to do it for a fixed-order computation or for NLO+PS.
For NLO+PS, since you have unweighted events, plenty of standard public tools are available.
For fixed order, one option is to generate weighted events and then use the same public tools, but you have to check that the weights are indeed taken into account and be careful about how the statistical errors are computed
(many codes do not correctly handle this specific type of event, where events come in correlated groups, and fail to report the correct error).
The second option is to write a Fortran analysis that you run within MG5aMC; there are plenty of examples of this in the code.
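As an illustration of the weight-handling caveat, here is a minimal sketch (plain Python, hypothetical helper name) of filling a histogram with weighted events, where each bin's statistical error is sqrt(sum of w**2) rather than sqrt(N). Note that this formula assumes independent weights; for fixed-order events that come in correlated groups, the weights of each group should be combined before filling.

```python
import math

def weighted_hist(values, weights, edges):
    """Fill a 1D histogram with weighted events, tracking per-bin
    sum of weights (bin content) and sum of squared weights (for the
    statistical error sqrt(sum w**2))."""
    nbins = len(edges) - 1
    sumw = [0.0] * nbins
    sumw2 = [0.0] * nbins
    for x, w in zip(values, weights):
        for i in range(nbins):
            if edges[i] <= x < edges[i + 1]:
                sumw[i] += w
                sumw2[i] += w * w
                break
    errors = [math.sqrt(s2) for s2 in sumw2]
    return sumw, errors

# Three events, one with a negative weight (as can happen at NLO),
# filled into two bins of width 1.
vals = [0.5, 1.5, 1.6]
wts = [1.0, 2.0, -0.5]
contents, errs = weighted_hist(vals, wts, [0.0, 1.0, 2.0])
print(contents)  # [1.0, 1.5]
print(errs)
```

A tool that ignored the weights would report 1 and 2 events per bin with sqrt(N) errors, which is wrong for NLO weighted events; changing the bin edges and refilling gives the bin-size study asked about in the question.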

Cheers,

Olivier


abulikim2011 (abulikim2011) said:
#2

Thank you so much, Olivier.