Bias in reweighted cross sections for NLO production with systematics
Hi experts,
I'm generating a simple process (p p > w+) to test the reweighting to many PDFs with v2.6.0. My input cards are in [1].
I then call the systematics program as
pdfsets=
pdfsets+
pdfsets+
pdfsets+
pdfsets+
pdfsets+
scalevars=
./bin/aMCatNLO "systematics $1 --pdf=$pdfsets $scalevars"
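The truncated snippet above cannot be recovered exactly, but a hypothetical reconstruction of the pattern it follows might look like this (the PDF set names are copied from the output below; the exact list and the scale arguments are assumptions):

```shell
# Hypothetical reconstruction of the truncated commands above: build a
# comma-separated PDF list step by step, then hand it to the systematics
# module. Set names are taken from the output below; the scale-variation
# arguments are assumed, not recovered from the original.
pdfsets="ABMP15_3_nnlo"
pdfsets="${pdfsets},MMHT2014lo68cl"
pdfsets="${pdfsets},HERAPDF20_NLO_EIG"
pdfsets="${pdfsets},CT14nlo"
pdfsets="${pdfsets},HERAPDF20_NNLO_EIG"
scalevars="--mur=0.5,1,2 --muf=0.5,1,2"
run="${1:-run_01}"
echo "systematics $run --pdf=$pdfsets $scalevars"
# ./bin/aMCatNLO "systematics $run --pdf=$pdfsets $scalevars"
```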
The output I get is
#******
#
# original cross-section: 106033.6152
# scale variation: +-46.3% -55.5%
#
#PDF ABMP15_3_nnlo: 45793.2 +0.473% -0.473%
#PDF PDF4LHC15_
#PDF MMHT2014lo68cl: 51926.5 +0.713% -1.04%
#PDF PDF4LHC15_
#PDF NNPDF30_
#PDF HERAPDF20_NLO_EIG: 57731.6 +0.448% -0.49%
#PDF CT14nlo: 54370 +1.28% -1.49%
#PDF NNPDF31_
#PDF NNPDF31_
#PDF HERAPDF20_NNLO_EIG: 58950.4 +0.637% -0.675%
#PDF PDF4LHC15_
#PDF LUXqed_
#PDF PDF4LHC15_
#******
I've tried with both LHAPDF v6.1 and 6.2 and see the same thing. I also tried with p p > e+ e- and see the same effect. The weights show the same striking pattern every time: the scale-specific weights look fine, but the PDF weights are all roughly half the original cross section. Is it possible that there is a systematic factor of 2 being lost somewhere?
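The suspected factor of two can be sanity-checked directly against the numbers printed above; a minimal sketch (cross sections in pb, copied from the printout):

```python
# Cross sections (pb) copied from the systematics printout above.
orig = 106033.6152
pdf_xsecs = {
    "ABMP15_3_nnlo": 45793.2,
    "MMHT2014lo68cl": 51926.5,
    "HERAPDF20_NLO_EIG": 57731.6,
    "CT14nlo": 54370.0,
    "HERAPDF20_NNLO_EIG": 58950.4,
}

# Ratio of the original cross section to each PDF-reweighted one: every
# value clusters around 2; the residual spread is the genuine PDF
# dependence between the sets.
ratios = {name: orig / x for name, x in pdf_xsecs.items()}
for name, r in sorted(ratios.items()):
    print(f"{name}: {r:.2f}")
```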
Thanks,
Kenneth
Question information
- Language: English
- Status: Answered
- Assignee: No assignee
#1
Hi,
I have put the following line in the run_card.dat:
["--pdf=
and get the following numbers which are much more reasonable.
INFO: #******
#
# original cross-section: 83570.9312275
# scale variation: +12.3% -13.2%
#
#PDF NNPDF30_
#PDF CT14nlo: 89749.3 +2.67% -3.07%
# PDF 13065 : 91206.502861
#******
Can you try the same method of running? I am also surprised by the difference between our scale variations.
Cheers,
Olivier
> On Nov 6, 2017, at 20:08, Kenneth Long <email address hidden> wrote:
>
> [snip: original question, quoted in full above]
#2
Hi Olivier,
I guess this issue is still not resolved?
I continue to see the problem when I add
systematics = systematics_program
["--pdf=
to the run_card instead of running systematics after the event generation.
INFO: #******
#
# original cross-section: 105272.2398
# scale variation: +-47.3% -54.5%
#
#PDF NNPDF30_
#PDF CT14nlo: 50577.5 +1.25% -1.4%
# PDF 13065 : 51353.003542
#******
Andrew
#4
Hi,
Since it works for me and I did not get any additional feedback, I assumed the problem was solved.
I have run the following command:
set low_mem_
#import model loop_sm-
#switch to diagonal ckm matrix if needed for speed
import model loop_sm-no_b_mass
#include b quark in proton and jet definition for consistent 5 flavour scheme treatment
define p = p b b~
define j = j b b~
generate p p > w+ [QCD]
output wplustest_5f_NLO -nojpeg
launch
madspin=ON
set pdlabel lhapdf
set systematics_program systematics
set systematics_
.5", "--dyn=-1"]
and get:
INFO: #******
#
# original cross-section: 99413.5855427
# scale variation: +5.49% -9.5%
#
#PDF NNPDF30_
#PDF CT14nlo: 101189 +2.72% -3.05%
# PDF 13065 : 103084.692282
#******
I do not have your restrict_card, so I also tried the following model:
import model loop_sm-ckm
INFO: #Will Compute 173 weights per event.
INFO: #******
#
# original cross-section: 98111.9784094
# scale variation: +5.55% -9.56%
#
#PDF NNPDF30_
#PDF CT14nlo: 100104 +2.62% -2.96%
# PDF 13065 : 101864.820241
#******
If you can send me your restriction card, then I would be able to test with your model but so far I still do not reproduce your result...
#5
Hi Olivier,
I have run the same commands as you, but I still see the large cross-section difference:
[amlevin@lxplus055 madgraph-
*******
* *
* W E L C O M E to *
* M A D G R A P H 5 _ a M C @ N L O *
* *
* *
* * * *
* * * * * *
* * * * * 5 * * * * *
* * * * * *
* * * *
* *
* VERSION 2.6.3.2 2018-06-22 *
* *
* The MadGraph5_aMC@NLO Development Team - Find us at *
* https:/
* and *
* http://
* *
* Type 'help' for in-line help. *
* Type 'tutorial' to learn how MG5 works *
* Type 'tutorial aMCatNLO' to learn how aMC@NLO works *
* Type 'tutorial MadLoop' to learn how MadLoop works *
* *
*******
load MG5 configuration from MG5_aMC_
set collier to /afs/cern.
fastjet-config does not seem to correspond to a valid fastjet-config executable (v3+). We will use fjcore instead.
Please set the 'fastjet'variable to the full (absolute) /PATH/TO/
MG5_aMC> set fastjet /PATH/TO/
set lhapdf to /cvmfs/
set ninja to /afs/cern.
Using default eps viewer "evince". Set another one in ./input/
Using default web browser "firefox". Set another one in ./input/
Loading default model: sm
INFO: Restrict model sm with file MG5_aMC_
INFO: Run "set stdout_level DEBUG" before import for more information.
INFO: Change particles name to pass to MG5 convention
Defined multiparticle p = g u c d s u~ c~ d~ s~
Defined multiparticle j = g u c d s u~ c~ d~ s~
Defined multiparticle l+ = e+ mu+
Defined multiparticle l- = e- mu-
Defined multiparticle vl = ve vm vt
Defined multiparticle vl~ = ve~ vm~ vt~
Defined multiparticle all = g u c d s u~ c~ d~ s~ a ve vm vt e- mu- ve~ vm~ vt~ e+ mu+ t b t~ b~ z w+ h w- ta- ta+
MG5_aMC>set low_mem_
MG5_aMC>import model loop_sm-no_b_mass
INFO: Restrict model loop_sm with file MG5_aMC_
INFO: Run "set stdout_level DEBUG" before import for more information.
INFO: Change particles name to pass to MG5 convention
Pass the definition of 'j' and 'p' to 5 flavour scheme.
Kept definitions of multiparticles l- / vl / l+ / vl~ unchanged
Defined multiparticle all = g gh gh~ d u s c b d~ u~ s~ c~ b~ a ve vm vt e- mu- ve~ vm~ vt~ e+ mu+ t t~ z w+ h w- ta- ta+
MG5_aMC>import model loop_sm-no_b_mass
INFO: Restrict model loop_sm with file MG5_aMC_
INFO: Run "set stdout_level DEBUG" before import for more information.
INFO: Change particles name to pass to MG5 convention
Kept definitions of multiparticles l- / j / vl / l+ / p / vl~ unchanged
Defined multiparticle all = g gh gh~ d u s c b d~ u~ s~ c~ b~ a ve vm vt e- mu- ve~ vm~ vt~ e+ mu+ t t~ z w+ h w- ta- ta+
MG5_aMC>define j = j b b~
Defined multiparticle j = g u c d s b u~ c~ d~ s~ b~
MG5_aMC>generate p p > w+ [QCD]
INFO: Generating FKS-subtracted matrix elements for born process: u d~ > w+ [ all = QCD ] (1 / 4)
INFO: Generating FKS-subtracted matrix elements for born process: c s~ > w+ [ all = QCD ] (2 / 4)
INFO: Generating FKS-subtracted matrix elements for born process: d~ u > w+ [ all = QCD ] (3 / 4)
INFO: Generating FKS-subtracted matrix elements for born process: s~ c > w+ [ all = QCD ] (4 / 4)
MG5_aMC>output wplustest_5f_NLO -nojpeg
INFO: Writing out the aMC@NLO code, using optimized Loops
INFO: initialize a new directory: wplustest_5f_NLO
INFO: remove old information in wplustest_5f_NLO
INFO: Generating real matrix elements...
INFO: Generating real process: u d~ > w+ g [ all = QCD ]
INFO: Generating real process: g d~ > w+ u~ [ all = QCD ]
INFO: Generating real process: u g > w+ d [ all = QCD ]
INFO: Generating real process: c s~ > w+ g [ all = QCD ]
INFO: Generating real process: g s~ > w+ c~ [ all = QCD ]
INFO: Generating real process: c g > w+ s [ all = QCD ]
INFO: Generating real process: d~ u > w+ g [ all = QCD ]
INFO: Generating real process: g u > w+ d [ all = QCD ]
INFO: Generating real process: d~ g > w+ u~ [ all = QCD ]
INFO: Generating real process: s~ c > w+ g [ all = QCD ]
INFO: Generating real process: g c > w+ s [ all = QCD ]
INFO: Generating real process: s~ g > w+ c~ [ all = QCD ]
INFO: Generating born and virtual matrix elements...
INFO: Generating born process: u d~ > w+ [ all = QCD ]
INFO: Generating born process: c s~ > w+ [ all = QCD ]
INFO: Generating born process: d~ u > w+ [ all = QCD ]
INFO: Generating born process: s~ c > w+ [ all = QCD ]
INFO: Collecting infos and finalizing matrix elements...
INFO: ... Done
Writing directories...
INFO: Writing files in P0_dxu_wp (2 / 2)
INFO: Writing files in P0_udx_wp (1 / 2)
INFO: Creating files in directory V0_dxu_wp
INFO: Creating files in directory V0_udx_wp
ALOHA: aloha creates FFV1 set of routines with options: L1,P0
ALOHA: aloha creates FFV1 set of routines with options: L2,P0
INFO: Computing diagram color coefficients
INFO: Computing diagram color coefficients
INFO: Drawing loop Feynman diagrams for Process: u d~ > w+ QED<=1 [ all = QCD ]
INFO: Drawing loop Feynman diagrams for Process: d~ u > w+ QED<=1 [ all = QCD ]
INFO: Generating born Feynman diagrams for Process: d~ u > w+ QED<=1 [ all = QCD ]
INFO: Generating born Feynman diagrams for Process: u d~ > w+ QED<=1 [ all = QCD ]
History written to /afs/cern.
ALOHA: aloha creates FFV2 routines
ALOHA: aloha creates FFV1 routines
The option low_mem_
If you want to make this value the default for future session, you can run 'save options --all'
save configuration file to /afs/cern.
INFO: Use Fortran compiler gfortran
INFO: Use c++ compiler g++
INFO: Generate web pages
Type "launch" to generate events from this process, or see
/afs/cern.
Run "open index.html" to see more information about this process.
MG5_aMC>launch
INFO: *******
* *
* W E L C O M E to M A D G R A P H 5 *
* a M C @ N L O *
* *
* * * *
* * * * * *
* * * * * 5 * * * * *
* * * * * *
* * * *
* *
* VERSION 2.6.3.2 2018-06-22 *
* *
* The MadGraph5_aMC@NLO Development Team - Find us at *
* http://
* *
* Type 'help' for in-line help. *
* *
*******
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
launch auto
The following switches determine which programs are run:
/================== Description =======
| 1. Type of perturbative computation | order = NLO | LO |
| 2. No MC@[N]LO matching / event generation | fixed_order = OFF | ON |
| 3. Shower the generated events | shower = HERWIG6 | OFF|PYTHIA6Q|
| 4. Decay onshell particles | madspin = OFF | ON|onshell |
| 5. Add weights to events for new hypp. | reweight = OFF | ON|NLO|NLO_TREE|LO |
| 6. Run MadAnalysis5 on the events generated | madanalysis = Not Avail. | Please install module |
\======
Either type the switch number (1 to 6) to change its setting,
Set any switch explicitly (e.g. type 'fixed_order=ON' at the prompt)
Type 'help' for the list of all valid option
Type '0', 'auto', 'done' or just press enter when you are done.[60s to answer]
>madspin=ON
The following switches determine which programs are run:
/================== Description =======
| 1. Type of perturbative computation | order = NLO | LO |
| 2. No MC@[N]LO matching / event generation | fixed_order = OFF | ON |
| 3. Shower the generated events | shower = HERWIG6 | OFF|PYTHIA6Q|
| 4. Decay onshell particles | madspin = ON | onshell|OFF |
| 5. Add weights to events for new hypp. | reweight = OFF | ON|NLO|NLO_TREE|LO |
| 6. Run MadAnalysis5 on the events generated | madanalysis = Not Avail. | Please install module |
\======
Either type the switch number (1 to 6) to change its setting,
Set any switch explicitly (e.g. type 'fixed_order=ON' at the prompt)
Type 'help' for the list of all valid option
Type '0', 'auto', 'done' or just press enter when you are done.
>
INFO: will run in mode: aMC@NLO
INFO: modify parameter parton_shower of the run_card.dat to HERWIG6
Do you want to edit a card (press enter to bypass editing)?
/------
| 1. param : param_card.dat |
| 2. run : run_card.dat |
| 3. madspin : madspin_card.dat |
| 4. shower : shower_card.dat |
\------
you can also
- enter the path to a valid card or banner.
- use the 'set' command to modify a parameter directly.
The set option works only for param_card and run_card.
Type 'help set' for more information on this command.
- call an external program (ASperGE/
Type 'help' for the list of available command
[0, done, 1, param, 2, run, 3, madspin, 4, enter path, ... ][90s to answer]
>set pdlabel lhapdf
INFO: modify parameter pdlabel of the run_card.dat to lhapdf
Do you want to edit a card (press enter to bypass editing)?
/------
| 1. param : param_card.dat |
| 2. run : run_card.dat |
| 3. madspin : madspin_card.dat |
| 4. shower : shower_card.dat |
\------
you can also
- enter the path to a valid card or banner.
- use the 'set' command to modify a parameter directly.
The set option works only for param_card and run_card.
Type 'help set' for more information on this command.
- call an external program (ASperGE/
Type 'help' for the list of available command
[0, done, 1, param, 2, run, 3, madspin, 4, enter path, ... ]
>set systematics_program systematics
INFO: modify parameter systematics_program of the run_card.dat to systematics
Do you want to edit a card (press enter to bypass editing)?
/------
| 1. param : param_card.dat |
| 2. run : run_card.dat |
| 3. madspin : madspin_card.dat |
| 4. shower : shower_card.dat |
\------
you can also
- enter the path to a valid card or banner.
- use the 'set' command to modify a parameter directly.
The set option works only for param_card and run_card.
Type 'help set' for more information on this command.
- call an external program (ASperGE/
Type 'help' for the list of available command
[0, done, 1, param, 2, run, 3, madspin, 4, enter path, ... ]
>set systematics_
INFO: modify parameter systematics_
Do you want to edit a card (press enter to bypass editing)?
/------
| 1. param : param_card.dat |
| 2. run : run_card.dat |
| 3. madspin : madspin_card.dat |
| 4. shower : shower_card.dat |
\------
you can also
- enter the path to a valid card or banner.
- use the 'set' command to modify a parameter directly.
The set option works only for param_card and run_card.
Type 'help set' for more information on this command.
- call an external program (ASperGE/
Type 'help' for the list of available command
[0, done, 1, param, 2, run, 3, madspin, 4, enter path, ... ]
>
WARNING: To be able to run systematics program, we set store_rwgt_info to True
INFO: modify parameter store_rwgt_info of the run_card.dat to True
INFO: Update the dependent parameter of the param_card.dat
INFO: Starting run
INFO: Compiling the code
INFO: Using LHAPDF v6.1.6 interface for PDFs
INFO: Compiling source...
INFO: ...done, continuing with P* directories
INFO: Compiling directories...
INFO: Compiling on 10 cores
INFO: Compiling P0_dxu_wp...
INFO: Compiling P0_udx_wp...
INFO: P0_dxu_wp done.
INFO: P0_udx_wp done.
INFO: Checking test output:
INFO: P0_dxu_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for test_MC:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: P0_udx_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for test_MC:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: Starting run
INFO: Using 10 cores
INFO: Cleaning previous results
INFO: Doing NLO matched to parton shower
INFO: Setting up grids
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 12h38 ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 3.5s ]
INFO: Determining the number of unweighted events per channel
Intermediate results:
Random seed: 33
Total cross section: 9.907e+04 +- 6.4e+02 pb
Total abs(cross section): 1.102e+05 +- 5.9e+02 pb
INFO: Computing upper envelope
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 12h38 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 8.7s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 8.9s ]
INFO: Updating the number of unweighted events per channel
Intermediate results:
Random seed: 33
Total cross section: 9.924e+04 +- 3.7e+02 pb
Total abs(cross section): 1.103e+05 +- 3.2e+02 pb
INFO: Generating events
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 12h38 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 11.1s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 11.2s ]
INFO: Doing reweight
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 12h38 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 1.1s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 1.2s ]
INFO: Collecting events
INFO:
----
Summary:
Process p p > w+ [QCD]
Run at p-p collider (6500.0 + 6500.0 GeV)
Number of events generated: 10000
Total cross section: 9.924e+04 +- 3.7e+02 pb
----
Scale variation (computed from LHE events):
----
INFO: The /afs/cern.
INFO: Events generated
systematics run_01 --pdf=292200,
INFO: Running Systematics computation
INFO: Idle: 2, Running: 2, Completed: 0 [ current time: 12h39 ]
INFO: Idle: 0, Running: 3, Completed: 1 [ 46.6s ]
INFO: # events generated with PDF: NNPDF23_
INFO: #Will Compute 173 weights per event.
INFO: #******
#
# original cross-section: 99458.9939652
# scale variation: +-47.2% -54.8%
#
#PDF NNPDF30_
#PDF CT14nlo: 50623 +1.36% -1.53%
# PDF 13065 : 51572.0879909
#******
INFO: End of systematics computation
#6
Hi Andrew,
Ok, I have finally succeeded in reproducing your result.
It looks like the result differs depending on whether you run in debug mode or in production mode.
My results were produced in debug mode and do not show such a difference, while yours were produced in production mode and do show the issue.
I will now investigate what is going on, but one quick workaround is to force debug mode.
Cheers,
Olivier
#7
Hi Andrew,
Here is the patch to solve this issue (quite a stupid bug, actually).
Thanks a lot,
Olivier
#9
Hi,
Sorry, I do not know why the link did not go through.
https:/
I hope the link will be kept this time
Cheers,
Olivier
On 24 Jul 2018, at 15:17, Andrew Levin <email address hidden> wrote:
> Sorry, where is the patch?
#10
Hi Olivier,
Thanks a lot, and sorry for losing track of the thread and not following up sooner. Could you give a summary of the issue? In particular, are the weights meaningful if corrected by some normalization, or are they unusable?
Is it also straightforward to provide a patch on top of v2.6.0?
Thanks again,
Kenneth
#11
Sorry, I was doing too many things at the same time yesterday and put a link to another patch.
The correct patch for this thread is:
=== modified file 'madgraph/
--- madgraph/
+++ madgraph/
@@ -885,6 +885,7 @@
         for onewgt in cevent.wgts:
             if not __debug__ and (dyn== -1 and Dmur==1 and Dmuf==1 and pdf==self.
+                continue
             if dyn == -1:
It should work nicely for 2.6.0 as well.
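In plain Python, the effect of that one-line `continue` can be sketched with a toy model (this is not the actual MG5 systematics code; the function and argument names are illustrative):

```python
def nominal_weight(stored, recomputed, production=True, patched=True):
    """Toy model of the bug fixed by the diff above (names illustrative).

    Debug mode recomputes the original weight as a consistency check.
    Production mode should reuse the value stored in the event file, but
    the unpatched code also fell through and added the recomputed copy,
    doubling the nominal weight used for the PDF variations.
    """
    if not production:          # debug mode: recompute from scratch
        return recomputed
    if patched:                 # production + patch: stored value only
        return stored
    return stored + recomputed  # production without patch: added twice
```

With stored == recomputed, the unpatched production path returns exactly twice the debug-mode value, which is the factor of two seen in this thread.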
>Could you give a summary of the issue?
The issue is that in debug mode I wanted to force a recomputation of the original weight, so that I could check the internal consistency between the weight written in the LHE file and the recomputed one.
In production mode, I wanted to use the one written in the event.
So in production mode, I added the weight written in the event file but forgot to bypass the (re-)computation of that weight.
Consequently, the weight was added twice.
>In particular, are the weights meaningful if corrected by some normalization or are they unusable?
Indeed, this is an exact factor of two (up to numerical accuracy), so you can correct this issue easily via post-processing.
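Since the factor is exact, a post-processing correction can be as simple as the following sketch (it assumes, per the numbers in this thread, that the affected PDF cross sections from an unpatched production-mode run come out at half their true value):

```python
def correct_unpatched_pdf_xsec(xsec):
    """Undo the exact factor of two in PDF cross sections reported by an
    unpatched production-mode run (a sketch based on the thread above)."""
    return 2.0 * xsec

# Example with numbers from this thread: the unpatched run reported
# CT14nlo = 50577.5 pb, while a patched run gave about 101189 pb
# (different runs, so agreement is only up to statistics).
corrected = correct_unpatched_pdf_xsec(50577.5)  # 101155.0 pb
```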
Cheers,
Olivier