Error #13 in genps_fks.f
Hi Folks,
I have a tricky case here involving massive tau leptons in diboson production (with full interference) at NLO in QCD. In short, I want an inclusive three-charged-lepton sample, but with a tight pT requirement (pTlXCut) on the leading and sub-leading charged leptons.
The cuts are of course nonstandard but easy to implement for pTlXCut = 75 GeV. For larger values of pTlXCut, however, I start to trigger an "Error #13" in genps_fks.f. After working out the numbers, I find that the presence of m_tau = 1.777 GeV is definitely involved; massless-lepton channels do not exhibit this behavior. I am starting to suspect that my additions and the crash are not directly related, and that my cuts merely expose a pre-existing issue.
The issue can be reproduced with v2.7.1.2 (and 2.6.7bzr) using the process
generate p p > ta+ ta- ta+ vt QED=4 QCD=0 [QCD]
output Test_pp_3lX_NLO
To force the leading and sub-leading (but not the third/trailing) charged leptons to have pT of at least pTlXCut = 75 GeV, I added the following to the header of cuts.f at L72:
c USER defined cuts for pTl2 from sum,min,max at L159
      double precision pTlXCut,pTlXSum,pTlXMax,pTlXMin
      logical gotLep1
      parameter (pTlXCut = 75.d0)
and at L159 added the cuts below (sorry, probably a bit sloppy! I sum the three charged-lepton pTs and then subtract the max and min to isolate the middle one):
c------
c USER cut: require the leading and sub-leading charged leptons
c to have pT > pTlXCut. Track the sum, max, and min of the charged
c lepton pTs; for three leptons, middle pT = sum - max - min.
c------
      pTlXSum = 0.d0
      pTlXMax = 0.d0
      pTlXMin = 0.d0
      gotLep1 = .false.
      do i=nincoming+1,nexternal
         if (is_a_lp(i).or.is_a_lm(i)) then
            if (.not.gotLep1) then
               pTlXMax = pt_04(p(0,i))
               pTlXMin = pt_04(p(0,i))
               gotLep1 = .true.
            else
               pTlXMax = max(pTlXMax,pt_04(p(0,i)))
               pTlXMin = min(pTlXMin,pt_04(p(0,i)))
            endif
            pTlXSum = pTlXSum + pt_04(p(0,i))
         endif
      enddo
c     middle (sub-leading) pT of the three charged leptons
      pTlXSum = pTlXSum - pTlXMax - pTlXMin
c     reject if the sub-leading pT fails; the leading pT (pTlXMax)
c     passes automatically whenever the middle one does
      if (pTlXSum.lt.pTlXCut) then
         passcuts_user = .false.
         return
      endif
c------
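As a quick standalone sanity check of the sum-minus-max-minus-min trick (a throwaway program with illustrative pT values, nothing from the actual run):
      program check_middle_pt
c     sum - max - min of three values returns the middle one
      implicit none
      double precision pt1,pt2,pt3,mid
      pt1 = 90.d0
      pt2 = 20.d0
      pt3 = 80.d0
      mid = pt1+pt2+pt3 - max(pt1,pt2,pt3) - min(pt1,pt2,pt3)
c     prints 80.0, i.e. the sub-leading pT that the cut tests
      write(*,*) 'middle pT =', mid
      end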
To avoid inefficient phase-space sampling, I also try to raise tau_min (tau = s_hat/s) in accordance with pTlXCut. To do this, I modify setcuts.f at L223 with:
      double precision pTlXCut,cutFact
      parameter (pTlXCut = 75.d0)
and at L421 added the following:
c------
c Add pTlXCut for leading and subleading charged leptons
c------
This is inserted just after the enddo at L422 and just before the line
      stot = 4d0*ebeam(1)*ebeam(2)
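Schematically, the point of the modification: with two leptons required above pTlXCut, the effective partonic threshold rises from 3*m_tau to roughly 2*sqrt(m_tau**2 + pTlXCut**2) + m_tau, and tau_min should rise with its square. A minimal sketch of that bound (standalone; sqrtshat_min_pt is an illustrative name, not a stock setcuts.f symbol):
c     Sketch: heuristic lower bound on sqrt(s_hat) when two of the
c     three charged leptons must carry pT > ptcut; each such lepton
c     has E >= sqrt(mtau**2 + ptcut**2), and the third adds mtau.
      double precision function sqrtshat_min_pt(mtau,ptcut)
      implicit none
      double precision mtau,ptcut
      sqrtshat_min_pt = 2.d0*sqrt(mtau**2+ptcut**2) + mtau
      end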
I should really set pTlXCut here to something between 75 GeV and 2x75 GeV, but then I trigger the issue below. For pTlXCut = 75.d0, the following steering commands run successfully:
order=NLO
shower=PYTHIA8
madspin=OFF
set LHC 13
set nevents 100
set no_parton_cut
set jetradius 0.4
set jetalgo -1
set etal 4
set mll 8
set ptj 30
set etaj 5.5
After ~1 hour on 24 cores, the run reports:
Intermediate results:
Random seed: 33
Total cross section: 1.030e-02 +- 9.4e-05 pb
Total abs(cross section): 1.195e-02 +- 9.8e-05 pb
For larger cuts in both cuts.f and setcuts.f, such as pTlXCut = 150.d0, event generation fails after about eight minutes on 24 cores with output like:
aMCatNLOError : An error occurred during the collection of results.
Please check the .log files inside the directories which failed:
/.../Test_
/.../Test_
/.../Test_
Further investigation reveals that the crash is due to the following:
$ tail Test_pp_
4 map 1 2
4 inv. map 1 2
======
tau_min 1 1 : 0.15533E+03 -- 0.24519E+03
tau_min 2 1 : 0.15533E+03 -- 0.24519E+03
tau_min 3 1 : 0.15533E+03 -- 0.24519E+03
tau_min 4 1 : 0.15533E+03 -- 0.24519E+03
Error #13 in genps_fks.f
12.630915999
Time in seconds: 0
It is not at all obvious, but the first number printed is equal to (2*m_tau)**2 and the second is (m_tau)**2 for m_tau = 1.777, which makes me wonder whether my cuts are actually the reason for the crash.
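For reference, the numerology is easy to verify with a throwaway program (m_tau = 1.777 GeV as in the model):
      program check_mtau_numbers
c     numbers appearing in the Error #13 printout, for m_tau = 1.777
      implicit none
      double precision mtau
      parameter (mtau = 1.777d0)
      write(*,*) '(2*m_tau)**2 =', (2.d0*mtau)**2
      write(*,*) 'm_tau**2     =', mtau**2
      write(*,*) '3*m_tau      =', 3.d0*mtau
c     gives 12.630916, 3.157729, and 5.331 respectively
      end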
In any case, I dug around a little more. At around L1860 of genps_fks.f, the Error #13 flag is raised here:
c Generate invariant masses for all s-channel branchings of the Born
      smin = (m(itree(1,i))+m(itree(2,i)))**2
      smax = (sqrtshat_born-totalmass+sqrt(smin))**2
      if (smax.lt.smin .or. smax.lt.0.d0 .or. smin.lt.0.d0) then
         write(*,*) 'Error #13 in genps_fks.f'
         write(*,*) smin,smax
         stop
      endif
which is saying that smin > smax, but by amounts proportional to some power of m_tau. I added extra information to the write statement and got the following:
$ tail Test_pp_
4 map 1 2
4 inv. map 1 2
======
tau_min 1 1 : 0.15533E+03 -- 0.24519E+03
tau_min 2 1 : 0.15533E+03 -- 0.24519E+03
tau_min 3 1 : 0.15533E+03 -- 0.24519E+03
tau_min 4 1 : 0.15533E+03 -- 0.24519E+03
Error #13 in genps_fks.f
12.630915999
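For completeness, the extended print is just the stock write with extra arguments appended, roughly like this (the third argument shown here is approximate):
      if (smax.lt.smin .or. smax.lt.0.d0 .or. smin.lt.0.d0) then
         write(*,*) 'Error #13 in genps_fks.f'
         write(*,*) smin,smax,sqrtshat_born,totalmass
         stop
      endif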
The fourth number (totalmass) is just 3*m_tau, so again consistent with m_tau causing problems.
Any suggestions on how to overcome this issue (or these issues) would be deeply appreciated. I have been dealing with this for ~2 weeks and finally figured out how to isolate it just this morning. Thanks for looking into this!
best,
richard
Question information
- Language: English
- Status: Solved
- Assignee: Rikkert Frederix
- Solved by: Rikkert Frederix