Inconsistency between (Z -> e+ e-) + jets and e+ e- + jets

Asked by Haoyi Jia

I am trying to simulate Z -> l+ l- + jets at NLO in two ways:
1. import model loop_sm-no_b_mass
generate p p > e+ e- [QCD] @0
add process p p > e+ e- j [QCD] @1
output bkg_zjets_nlo_13TeV
launch
shower=Pythia8
set pdlabel lhapdf
set lhaid 325100
set maxjetflavor 5
set nevents 75
set ickkw 3
set jetradius 1.0
set ptj 8
set etaj 10
set mll_sf 40
set mll 40
set Qcut 20
set njmax 2
set ebeam1 6500
set ebeam2 6500
2. import model loop_sm-no_b_mass
generate p p > z [QCD] @0
add process p p > z j [QCD] @1
output bkg_zjets_nlo_13TeV
launch
shower=Pythia8
set pdlabel lhapdf
set lhaid 325100
set maxjetflavor 5
set nevents 75
set ickkw 3
set jetradius 1.0
set ptj 8
set etaj 10
set mll_sf 40
set mll 40
set Qcut 20
set njmax 2
set ebeam1 6500
set ebeam2 6500

For the first configuration, I got a total cross section of 2.894e+03 pb. For the second configuration, I got 2.622e+06 pb. Taking BR(Z -> e+ e-) into account, we have 2.622e+06 * 3.365% = 8.823e+04 pb, which is one order of magnitude higher than the first configuration. I am wondering what causes the difference here. Also, if I want to require the Z to have at least 100 GeV pt, how can I do it? I don't find an equivalent of the LO ptllmin cut, and pt_min_pdg = {23: 100} doesn't seem to work for the first configuration.

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Haoyi Jia
Olivier Mattelaer (olivier-mattelaer) said:
#1

Hi,

I have done some investigation and did not spot any particular issue.
First, this is a DY process, so you have a contribution from the photon that you neglect in the second process, and this on its own leads to a roughly 10% difference in cross-section between the two syntaxes.
Then, obviously, in one case you use the BR and in the second case you do not (another couple of percent difference).

So in my test (which only differs by the PDF set) I do not spot any inconsistency that cannot be explained by the above statements.

Concerning your cut, you need to be very careful with cuts at NLO accuracy, and this explains why far fewer cuts are available at NLO compared to LO. For example, you cannot ask for a ptll cut for p p > e+ e- [QCD], since this is equivalent to asking for a cut on the first jet, which would break NLO accuracy (since you would be forbidding the Born and the virtual).
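
As an illustration (a sketch, not a validated recipe): pt_min_pdg only acts on particles that are still present in the final state, which is why pt_min_pdg = {23: 100} cannot do anything for p p > e+ e- [QCD], where no Z appears in the process definition. With the undecayed Z it is at least syntactically meaningful:

generate p p > z [QCD] @0
add process p p > z j [QCD] @1
output boosted_zjets_nlo_13TeV
launch
shower=Pythia8
madspin=ON
decay z > e+ e-
set ickkw 3
set Qcut 20
set njmax 2
set pt_min_pdg {23: 100}  # generation-level cut on the undecayed Z; illustrative value
set ebeam1 6500
set ebeam2 6500

But the same warning applies here: for the 0-jet process the Born Z has no recoil, so a hard Z-pt cut again forbids the Born and the virtual. The fully safe option is to generate inclusively and apply the pt cut at the analysis level after the shower.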

Cheers,

Olivier

Haoyi Jia (kn-jia625) said (last edit):
#2

Thank you for your fast reply, Olivier! I still don't understand why the p p > z cross-section times BR(Z -> e+ e-) is 10 times larger than p p > e+ e-. I am wondering if you are getting different numbers from your simulation, or is my comparison wrong (I understand the photon contribution, but it should be small)?
For my second question, is your recommendation for simulating the boosted Z + jets cross-section to do it at LO, or do you recommend an NLO simulation with the pt_min_pdg = {23: 100} cut?

Cheers,
Kenny

Olivier Mattelaer (olivier-mattelaer) said:
#3

If I take the 1-jet sample alone, I get the following three numbers (in pb):
- 1235 (e+ e-)
- 1146 (e+ e-, no photon allowed)
- 1168 (Z + MadSpin)

With 0+1j (FxFx) I get the following (before parton shower, so an unphysical result due to the large double counting):
- 3273 (e+ e-)
- 2990 (e+ e-, no photon allowed)
- 3060 (Z + MadSpin)

Here is the script that I used:

generate p p > z / a [QCD] @0
add process p p > z j / a [QCD]
output bkg_zjets_nlo_13TeV
launch
shower=OFF
madspin=ON
decay z > e+ e-
set maxjetflavor 5
set jetradius 1.0
set ptj 8
set etaj 10
set mll_sf 40
set mll 40
set Qcut 20
set ebeam1 6500
set ebeam2 6500

To be complete, I ran this with the LTS version of the code that I'm currently validating (so technically it will be version 2.9.18, which is going to be released soon).

Cheers,

Olivier

Olivier Mattelaer (olivier-mattelaer) said:
#4

Also, one potential issue on your side is the number of requested events, which is very small and can lead to a huge statistical error, especially if you compute the cross-section after the acceptance/rejection of the parton shower. This can blow up your statistical error by a huge factor.
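
For example (just a sketch; the only relevant change with respect to your setup is the event count):

launch bkg_zjets_nlo_13TeV
shower=Pythia8
set nevents 100000  # 75 events gives a huge statistical error; the one-time setup cost stays the same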

For your second question, I do not know; it depends on so many parameters that it is a question only you can answer.

Haoyi Jia (kn-jia625) said:
#5

I've used exactly the same script you have here:
"Here is the script that I used:
generate p p > z / a [QCD] @0
add process p p > z j / a [QCD]
output bkg_zjets_nlo_13TeV
launch
shower=OFF
madspin=ON
decay z > e+ e-
set maxjetflavor 5
set jetradius 1.0
set ptj 8
set etaj 10
set mll_sf 40
set mll 40
set Qcut 20
set ebeam1 6500
set ebeam2 6500"
in version 3.5.1 and still got 8.42e+04 pb. Maybe the version is causing the problem; I will try installing the LTS version 2.9.17 and see how it goes. Thank you!

Best,
Kenny

Haoyi Jia (kn-jia625) said:
#6

Okay, so I have tested with version 2.9.17, and Z + MadSpin gives me 3045 pb. Maybe something is wrong with 3.5.1?

Olivier Mattelaer (olivier-mattelaer) said:
#7

Just tested with 3.5.2 and got
2994 ± 12 pb
but the cross-section before MadSpin is indeed:
8.706e+04 ± 3.6e+02 pb
(which is consistent: 8.706e+04 pb times BR(Z -> e+ e-) is roughly 2930 pb, since MadSpin applies the branching ratio).
Are you sure that this is not your issue?

Cheers,

Olivier

Haoyi Jia (kn-jia625) said:
#8

Can you try with 3.5.1? I get a cross-section of order e+04 pb before MadSpin with 3.5.2 but of order e+06 pb before MadSpin with 3.5.1. I used a clean, reinstalled 3.5.1 to test that.

Olivier Mattelaer (olivier-mattelaer) said:
#9

For me 3.5.1 just crashed, but the reported cross-section was indeed:
      Total cross section: 2.449e+06 +- 7.9e+03 pb

Cheers,

Olivier

Haoyi Jia (kn-jia625) said:
#10

Ok, so there was indeed something wrong with 3.5.1; I guess I will use 2.9.17 instead. Sorry, I have one more question. When I run NLO with Pythia and parton matching, it seems to take more than O(1) hours to run even for just 100 events (I actually never finished it even once). Every time there is just one parton-shower job, and it just gets stuck there. Does this sound like a bug to you, or do NLO parton-shower jobs usually just take a long time to run?

Olivier Mattelaer (olivier-mattelaer) said:
#11

A lot of the computation steps do not scale with the number of requested events but are a "flat" one-time cost.
Therefore, if you request a small number of events, you will be dominated by those steps.

Now, the parton-shower step should scale linearly with the number of events, so that step should be quick if you only have 100 events.
However, since you are doing FxFx jobs, with only 100 events some multiplicity sample might not exist at all in your original sample, which might trigger unexpected behavior.

So my advice would be to test with a larger sample to see if that is the issue (and otherwise check the log of the parton shower to see why/where it is stuck).
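
For instance (a sketch; check "help" in the interface, since the available commands can differ between versions), you can re-run only the shower step on an already generated run and then inspect its log:

# typed in the ./bin/aMCatNLO interface of the process directory;
# re-showers the existing parton-level sample of run_01
shower run_01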

Cheers,

Olivier

Haoyi Jia (kn-jia625) said:
#12

I've looked into the Pythia8 log file, and it seems that Pythia8 gets stuck because of the classic mismatched-version-number problem, even though I export PYTHIA8 and PYTHIA8DATA every time. But unlike other people, who get "unmatched version number in code (correct version number) vs in xml (wrong version number)", I got "unmatched version number in code (wrong version number) vs in xml (correct version number)". I tried to look into how the code gets the version number but didn't find the problem. One thing I do know is that my python3.9 comes from LCG101, and LCG101 has the wrong Pythia8 version number; I believe this happens during the installation of Pythia8. So far I have no idea how to solve this, but I could run everything under python2 with 2.9.17. Thank you so much for the help!
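
For reference, one direction I might still try (a sketch, untested on my side) is to let MG5 install its own Pythia8, so that the library and its xml documentation carry the same version number instead of the mismatched LCG101 ones:

# inside the MG5_aMC interface
install pythia8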