discrete cross section oscillation near mass threshold

Asked by Derek Li

Hello,

I am interested in using MadGraph to study EW processes. As a primer, I am considering lepton pair production, e- e+ --> τ- τ+, which has a simple LO analytical cross section for the photon s-channel. I set the τ mass to 100 GeV to mimic the production of heavy particles. Varying the initial energy of the e- beam (which determines the kinematics of this 2 > 2 process entirely), I noticed a peculiar pattern in the computed cross section in the regime where the beam energies are only minimally above the τ pair-production threshold. In particular, zooming in on this region, the computed cross section seems to jump discretely between two values. Here are some simulated results I obtained (in version 3.5.1), with monotonically increasing energy:

CM energy per beam (GeV) cross section (pb)
100.00039743389324 0.006513
100.00039744797242 0.009842
100.00039746205212 0.006536
100.00039747613229 0.009821
100.00039749021296 0.009818
100.00039750429414 0.006546
100.00039751837583 0.006556
100.00039753245802 0.006524
100.00039754654068 0.00982

As shown, the cross section oscillates between two discrete values. Compared to the LO analytical expression (for unpolarized particles), we do not expect any oscillatory behavior: at beam energies just above the τ mass (with the e- still relativistic), the cross section should grow smoothly as the 1/2 power of the energy above threshold. Changing the random seed also changes which points land on and below the step, so this looks like an artifact.
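
For reference, the LO expression I am comparing against (s-channel photon only, massless electrons, unpolarized beams) is

    \sigma_{LO} = \frac{4\pi\alpha^2}{3s}\,\beta\,\frac{3-\beta^2}{2},
    \qquad \beta = \sqrt{1 - \frac{4 m_\tau^2}{s}}, \qquad s = 4 E_{beam}^2,

so just above threshold \beta \approx \sqrt{2(E_{beam}-m_\tau)/m_\tau} and \sigma_{LO} \propto (E_{beam}-m_\tau)^{1/2}, a smooth function of the beam energy. Taking the default \alpha \approx 1/132.507 from the SM param_card, this evaluates to about 0.0098 pb at the energies above, i.e. the upper of the two values in my table.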

Is there some integration setting I need to take into account in this situation? I attach the content of my event generation script below which resulted in the above data points (among a larger sample).

Thanks in advance,
Derek

generate e- e+ > a > ta- ta+
output Repository/e-e+_a_tau-tau+
launch
set ebeam1 100.00039716648348
set ebeam2 100.00039716648348
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039718055318
set ebeam2 100.00039718055318
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039719462339
set ebeam2 100.00039719462339
set mta 100.0
set nevents 10000
launch
set ebeam1 100.0003972086941
set ebeam2 100.0003972086941
set mta 100.0
set nevents 10000
launch
set ebeam1 100.0003972227653
set ebeam2 100.0003972227653
set mta 100.0
set nevents 10000
launch
set ebeam1 100.000397236837
set ebeam2 100.000397236837
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039725090922
set ebeam2 100.00039725090922
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039726498191
set ebeam2 100.00039726498191
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039727905512
set ebeam2 100.00039727905512
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039729312881
set ebeam2 100.00039729312881
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039730720303
set ebeam2 100.00039730720303
set mta 100.0
set nevents 10000
launch
set ebeam1 100.0003973212777
set ebeam2 100.0003973212777
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039733535291
set ebeam2 100.00039733535291
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039734942861
set ebeam2 100.00039734942861
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039736350482
set ebeam2 100.00039736350482
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039737758148
set ebeam2 100.00039737758148
set mta 100.0
set nevents 10000
launch
set ebeam1 100.0003973916587
set ebeam2 100.0003973916587
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039740573636
set ebeam2 100.00039740573636
set mta 100.0
set nevents 10000
launch
set ebeam1 100.00039741981455
set ebeam2 100.00039741981455
set mta 100.0
set nevents 10000

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Derek Li
Olivier Mattelaer (olivier-mattelaer) said :
#1

Hi,

I'm a bit confused about this, since this sounds like an old bug that should have been fixed.
Actually, if I look at our Long Term Stable version (2.9.17), I do not reproduce your issue:
# run_name tag cross error Nb_event cross_after_matching nb_event_after matching
run_01 tag_1 0.0098178 7.9449e-11 10000
run_02 tag_1 0.0098179 7.7329e-11 10000
run_03 tag_1 0.009818 7.7795e-11 10000
run_04 tag_1 0.0098183 7.4214e-11 10000
run_05 tag_1 0.0098182 2.5395e-07 10000
run_06 tag_1 0.0098186 7.7484e-11 10000
run_07 tag_1 0.0098188 9.1813e-11 10000
run_08 tag_1 0.0098189 8.218e-11 10000
run_09 tag_1 0.0098191 7.1874e-11 10000
run_10 tag_1 0.0098193 8.1531e-11 10000
run_11 tag_1 0.0098195 7.209e-11 10000
run_12 tag_1 0.0098196 8.5843e-11 10000
run_13 tag_1 0.0098199 7.221e-11 10000
run_14 tag_1 0.00982 7.7773e-11 10000
run_15 tag_1 0.0098201 7.3081e-11 10000
run_16 tag_1 0.0098204 8.18e-11 10000
run_17 tag_1 0.0098205 7.0681e-11 10000
run_18 tag_1 0.0098207 8.7411e-11 10000
run_19 tag_1 0.0098209 8.1887e-11 10000

But indeed I do reproduce it in 3.5.2:
run_01 tag_1 0.0065434 1.8846e-05 10000
run_02 tag_1 0.0098288 2.1555e-05 10000
run_03 tag_1 0.0065531 1.3416e-05 10000
run_04 tag_1 0.0065631 1.4395e-05 10000
run_05 tag_1 0.0065518 1.5518e-05 10000
run_06 tag_1 0.0065404 1.4914e-05 10000
run_07 tag_1 0.0065489 9.3155e-06 10000
run_08 tag_1 0.0065608 1.6383e-05 10000
run_09 tag_1 0.0098411 2.4522e-05 10000
run_10 tag_1 0.0065601 1.5666e-05 10000
run_11 tag_1 0.0065338 6.8808e-06 10000
run_12 tag_1 0.0065379 1.745e-05 10000
run_13 tag_1 0.0065667 1.6979e-05 10000
run_14 tag_1 0.0098342 1.0825e-05 10000
run_15 tag_1 0.006542 2.421e-05 10000
run_16 tag_1 0.0065387 7.5969e-06 10000
run_17 tag_1 0.0065502 1.5512e-05 10000
run_18 tag_1 0.0065427 2.1536e-05 10000
run_19 tag_1 0.0065533 1.1186e-05 10000

So I need to check whether the fix applied to the LTS version was propagated to the official running release or not.
In both cases I need to understand what is going on (either what this new bug is, or why git failed to merge the patch correctly).

Anyway, two quick solutions here:
1) run the check with 2.9.17 (Long Term Stable);
2) the issue was with the helicity-recycling optimisation (and I can clearly see in 3.5.2 that the issue is related to that optimisation), so the solution is to add the flag --hel_recycling=False to the output command, for example as shown below.
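
For example, adapting the first block of your own script (the output directory name is of course up to you):

generate e- e+ > a > ta- ta+
output Repository/e-e+_a_tau-tau+ --hel_recycling=False
launch
set ebeam1 100.00039716648348
set ebeam2 100.00039716648348
set mta 100.0
set nevents 10000

The flag only changes how the matrix-element code is written at output time; the launch/set blocks stay exactly as before.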

Derek Li (dereklogos) said :
#2

Hi Olivier,

Thanks for the response. In 3.5.1, I have modified the output line to

output Repository/lpp_hel --hel_recycling=False

with the flag you suggested. Unfortunately, this has not removed the discrete jump. For the same energies as before, I now obtain:

CM energy per beam (GeV)   cross section (pb)   uncertainty (pb)
100.00039716648348         0.007651             1.8e-05
100.00039718055318         0.006534             1.6e-05
100.00039719462339         0.00981              1.3e-05
100.0003972086941          0.006824             1.4e-05
100.0003972227653          0.00654              1.7e-05
100.000397236837           0.00655              6.3e-06
100.00039725090922         0.006841             1.6e-05
100.00039726498191         0.006838             1.9e-05
100.00039727905512         0.007647             0.0017
100.00039729312881         0.006814             1.6e-05
100.00039730720303         0.009822             1.4e-05
100.0003973212777          0.006839             2e-05
100.00039733535291         0.007141             0.0017
100.00039734942861         0.006554             1.9e-05
100.00039736350482         0.006574             1.2e-05
100.00039737758148         0.006539             1.2e-05
100.0003973916587          0.00982              9.9e-06
100.00039740573636         0.006845             2e-05
100.00039741981455         0.007098             1.3e-05

There are a few points with large uncertainties in the sample, but the pattern of the discrete jump persists.

I understand that you're still looking into this. Could you provide more insight into the nature of the bug? Does it have to do with optimizations related to helicity?

Thanks,
Derek

Olivier Mattelaer (olivier-mattelaer) said :
#3

Ah sorry, I was in a workshop at the beginning of the week and forgot to send a follow-up message: indeed, setting hel_recycling to False does not solve the issue.
I still think that the issue is helicity related, but likely rather independent of the new helicity-recycling features of the code.

Derek Li (dereklogos) said :
#4

Hello Olivier,

Thanks for the reply. I would still like to learn about the origin of this issue, if you're still planning to look into it. Should I leave this post open for now, or should we close it, and perhaps you can reach out again once you have identified the source of this effect?

Derek

Olivier Mattelaer (olivier-mattelaer) said :
#5

Hi Derek,

Since this is a question, there is no real open/close status.
If you really want official tracking of the status, then this should be a bug report.
But clearly this can/should stay open until I have a real patch.

The origin of the problem is now clearly identified and is indeed related to a mis-selection of the contributing helicities (and it is indeed independent of the algorithm used).

Two helicities are sometimes discarded because they are estimated to be numerically zero (i.e. they come out a factor of 10^-12 smaller than the others). The reason is a bug in the generation of the \theta angle, which is wrongly biased (and actually very strongly biased) towards pi.
The consequence is that helicities proportional to (1+cos\theta) are numerically suppressed. Depending on the seed, the code can get lucky (in around 20% of the runs) and detect that those helicities actually contribute, which is then reflected in the fluctuation you see and/or in the increase of the error.
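
As an illustration of the size of the effect: if the generated \theta sits within about 10^-6 of pi, then 1+cos\theta ≈ (pi-\theta)^2/2 ≈ 5x10^-13, which is exactly the kind of suppression that gets mistaken for a numerical zero.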

The origin of such a strong bias is relatively clear as well, and boils down to the fact that the initial-grid setup is written for a collision with PDFs.
A hacky fix for your process is "easy" but not generic: you can change the file Source/dsample.f and add a "return" statement as the first instruction of the setgrid routine (so around line 837); see the sketch below. This obviously works only for this specific example.
(I have not performed deep tests to cross-check that this indeed solves the issue, since I want a generic patch, which also requires understanding whether such an issue can only happen for 2>2 with an s-channel propagator and without PDFs, or whether other topologies without PDFs are also impacted. This is likely what I will investigate next.)
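
To make the workaround concrete, it would look roughly like this (an untested sketch, not the generic patch; the surrounding code is omitted and the location is the one quoted above):

--- a/Source/dsample.f
+++ b/Source/dsample.f
@@ (first executable statement of subroutine setgrid, around line 837)
+      return ! skip the initial-grid guess entirely (process-specific workaround)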

With that knowledge, I would then be able to implement a safe patch.

Cheers,

Olivier

PS: I also have to understand why 2.9.17 does not seem to be impacted, while so far everything points to 2.9.17 being impacted as well. I do have a suspicion but not yet an answer.

PPS: Do not worry, this is quite a serious report and I'm working on it as much as I can.

Olivier Mattelaer (olivier-mattelaer) said :
#6

Hi Derek,

We are currently testing the following patch:

diff --git a/Template/LO/SubProcesses/myamp.f b/Template/LO/SubProcesses/myamp.f
index 7b4b7888f..b3d549f12 100644
--- a/Template/LO/SubProcesses/myamp.f
+++ b/Template/LO/SubProcesses/myamp.f
@@ -364,6 +364,12 @@ c Look for identical particles to map radiation processes
 
 c Start loop over propagators
       do i=-1,-(nexternal-3),-1
+c do not do the last propagator since it corresponds to sqrtshat,
+c which is handled separately. Do not do it by reducing the do
+c loop, to avoid side effects for 2>2
+         if (i.eq.-(nexternal-3)) then
+            CYCLE
+         endif
          if (iforest(1,i,iconfig) .eq. 1.or.iforest(1,i,iconfig) .eq. 2)then
             tsgn=-1d0
          endif

It looks like the issue is also present in 2.9.17 (I am not fully sure why my tests were going through with that version), so the patch above is against that version. The issue has been present since the beginning of MadGraph5 (so 2011) and is likely even older than that.
It also affects every process. The catch is that it typically has zero impact on the final prediction, since that part of the code is only guessing an initial grid for the VEGAS integration; one grid, related to an angle, was being initialised with the mass. The grid is quickly updated during the run, so this normally has no impact.

The only case where the above is not true is the detection of zero helicities: there, your grid for the angle was so biased that some helicities were at the level of the numerical accuracy of the code and were treated as zero.

Cheers,

Olivier

Derek Li (dereklogos) said :
#7

Hello Olivier,

Thank you for the update. I think your description of the problem provides enough information for me.

Derek