Error #13 in genps_fks.f

Asked by Richard Ruiz on 2020-06-09

Hi Folks,

I have a tricky case here involving massive tau leptons in diboson production (with full interference) at NLO in QCD. In short, I want an inclusive three-charged-lepton sample, but with a tight pT requirement (pTlXCut) on the leading and subleading charged leptons.

The cuts are of course nonstandard but easy to implement for pTlXCut = 75 GeV. For larger values of pTlXCut, however, I start to trigger an "Error #13" in genps_fks.f. After working out the numbers, I find that the presence of m_tau = 1.777 GeV is definitely involved: massless lepton channels do not exhibit this behavior. I am starting to suspect that my additions are not the root cause of the crash, just that they trigger/isolate something already there.

The issue can be reproduced with v2.7.1.2 (and 2.6.7bzr) using the process
generate p p > ta+ ta- ta+ vt QED=4 QCD=0 [QCD]
output Test_pp_3lX_NLO

To force the leading and sub-leading (but not the third/trailing) charged lepton to have at least pTlXCut = 75 GeV, in cuts.f, I added to the header at L72:

c USER defined cuts for pTl2 from sum,min,max at L159
      double precision pTlXCut,pTlXSum,pTlXMin,pTlXMax
      logical gotLep1
      parameter(pTlXCut = 75.d0)

and at L159 add the cuts:
(sorry, probably a bit sloppy! I sum all three charged-lepton pTs and then subtract the max and the min to get the middle value)
c------------------------------------------------------------
      pTlXSum = 0.d0
      pTlXMax = 0.d0
      pTlXMin = 0.d0
      gotLep1 = .false.
      do i=nincoming+1,nexternal
         if (is_a_lp(i).or.is_a_lm(i)) then
            if(.not.gotLep1) then
               pTlXMax = pt_04(p(0,i))
               pTlXMin = pt_04(p(0,i))
               gotLep1 = .true.
            endif
            pTlXSum = pTlXSum + pt_04(p(0,i))
            if(pt_04(p(0,i)).gt.pTlXMax) pTlXMax = pt_04(p(0,i))
            if(pt_04(p(0,i)).lt.pTlXMin) pTlXMin = pt_04(p(0,i))
         endif
      enddo
      pTlXSum = pTlXSum - pTlXMax - pTlXMin
      if(pTlXSum.lt.pTlXCut.or.pTlXMax.lt.pTlXCut) then
         passcuts_user=.false.
         return
      endif
c------------------------------------------------------------
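For clarity, the selection above can be checked in isolation. Here is a standalone Python sketch of the same sum-minus-extremes trick (the function name and interface are mine for illustration, not MG5_aMC code):

```python
def passes_ptlx_cut(lepton_pts, cut=75.0):
    """Require leading AND subleading charged-lepton pT >= cut.

    Sum all pTs, then subtract the max and the min: with three leptons,
    what remains is the middle (subleading) pT, mirroring the Fortran
    loop above. The trailing lepton is left unconstrained.
    """
    subleading = sum(lepton_pts) - max(lepton_pts) - min(lepton_pts)
    return subleading >= cut and max(lepton_pts) >= cut
```

For example, pTs of (100, 80, 10) GeV pass, while (100, 50, 10) GeV fail because the subleading lepton is below the cut.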

To avoid inefficient phase-space sampling, I try to increment tau_min (= s_hat / s) by pTlXCut. To do this, I modify setcuts.f at L223 with:
      double precision pTlXCut,cutFact
      parameter (pTlXCut = 75.d0)

and at L421 add the following:
c-----------------------------------------------------------------
c Add pTlXCut for leading and subleading charged leptons
            taumin(iFKS,ichan)  =taumin(iFKS,ichan)  +pTlXCut
            taumin_j(iFKS,ichan)=taumin_j(iFKS,ichan)+pTlXCut
            taumin_s(iFKS,ichan)=taumin_s(iFKS,ichan)+pTlXCut
c-----------------------------------------------------------------

This is inserted just after the enddo at L422 and just before the lines
            stot = 4d0*ebeam(1)*ebeam(2)
            tau_Born_lower_bound=taumin(iFKS,ichan)**2/stot
            tau_lower_bound=taumin_j(iFKS,ichan)**2/stot
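Numerically, the shift just raises the lower bound on s_hat. A quick Python illustration of the arithmetic (the 6500 GeV beams follow from "set LHC 13" below; the pre-shift taumin value is made up):

```python
# Illustrative arithmetic for the taumin shift. Only the beam energies
# are taken from the run setup; taumin here is a hypothetical value.
ebeam1 = ebeam2 = 6500.0              # GeV, from 'set LHC 13'
stot = 4.0 * ebeam1 * ebeam2          # collider s = 1.69e8 GeV^2
taumin = 80.33                        # hypothetical pre-existing threshold, GeV
pTlXCut = 75.0
tau_lower_bound = (taumin + pTlXCut) ** 2 / stot
# i.e. s_hat is restricted to s_hat >= (155.33 GeV)^2
```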

I should really set pTlXCut here to something between 75 GeV and 2×75 GeV, but then I trigger the issue below. For pTlXCut = 75.d0, the following steering commands work successfully:
order=NLO
shower=PYTHIA8
madspin=OFF
set LHC 13
set nevents 100
set no_parton_cut
set jetradius 0.4
set jetalgo -1
set etal 4
set mll 8
set ptj 30
set etaj 5.5

After ~1 hour on 24 cores, I get the intermediate results:
Intermediate results:
      Random seed: 33
      Total cross section: 1.030e-02 +- 9.4e-05 pb
      Total abs(cross section): 1.195e-02 +- 9.8e-05 pb

For larger cuts in both cuts.f and setcuts.f, such as pTlXCut = 150.d0, event generation fails after about eight minutes on 24 cores with output like:
aMCatNLOError : An error occurred during the collection of results.
 Please check the .log files inside the directories which failed:
 /.../Test_pp_3lX_NLO_Test/SubProcesses/P0_udx_taptamtapvt/GF4/log.txt
 /.../Test_pp_3lX_NLO_Test/SubProcesses/P0_dxu_taptamtapvt/GF2/log.txt
 /.../Test_pp_3lX_NLO_Test/SubProcesses/P0_dxu_taptamtapvt/GF3/log.txt

Further investigation reveals that the crash is due to the following:
$ tail Test_pp_3lX_NLO_Test/SubProcesses/P0_dxu_taptamtapvt/GF3/log.txt
  4 map 1 2
  4 inv. map 1 2
 ================================
tau_min 1 1 : 0.15533E+03 -- 0.24519E+03
tau_min 2 1 : 0.15533E+03 -- 0.24519E+03
tau_min 3 1 : 0.15533E+03 -- 0.24519E+03
tau_min 4 1 : 0.15533E+03 -- 0.24519E+03
 Error #13 in genps_fks.f
   12.630915999999999 3.1577289999999989 -1
Time in seconds: 0

It is not at all obvious, but the first number equals (2*m_tau)**2 and the second equals (m_tau)**2 for m_tau = 1.777, which makes me doubt that my cuts are actually the reason for the crash.

In any case, I dug around a little more. At around L1860 of genps_fks.f, the Error #13 check is

c Generate invariant masses for all s-channel branchings of the Born
         smin = (m(itree(1,i))+m(itree(2,i)))**2
         smax = (sqrtshat_born-totalmass+sqrt(smin))**2
         if(smax.lt.smin.or.smax.lt.0.d0.or.smin.lt.0.d0)then
            write(*,*)'Error #13 in genps_fks.f'
c write(*,*)smin,smax,i
            write(*,*)smin,smax,sqrtshat_born,totalmass,i
            stop
         endif

which is saying that smin > smax, with both values proportional to powers of m_tau. I added the extra information to the write statement and get the following:

$ tail Test_pp_3lX_NLO_Test/SubProcesses/P0_dxu_taptamtapvt/GF4/log.txt
  4 map 1 2
  4 inv. map 1 2
 ================================
tau_min 1 1 : 0.15533E+03 -- 0.24519E+03
tau_min 2 1 : 0.15533E+03 -- 0.24519E+03
tau_min 3 1 : 0.15533E+03 -- 0.24519E+03
tau_min 4 1 : 0.15533E+03 -- 0.24519E+03
Error #13 in genps_fks.f
   12.630915999999999 3.1577289999999989 0.0000000000000000 5.3309999999999995 -1

The fourth number (totalmass) is just 3×m_tau = 5.331 GeV, so again consistent with m_tau causing problems.
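In fact, the five printed numbers are mutually consistent: with sqrtshat_born printed as zero, the smax formula from genps_fks.f reproduces (m_tau)**2 exactly. A quick Python check of the arithmetic:

```python
# Reproduce the Error #13 arithmetic from genps_fks.f using the
# values printed in the log above (m_tau = 1.777 GeV).
m_tau = 1.777
smin = (2.0 * m_tau) ** 2                 # (m1 + m2)^2 for a ta+ ta- branching
sqrtshat_born = 0.0                       # third number in the log
totalmass = 3.0 * m_tau                   # fourth number: three massive taus
smax = (sqrtshat_born - totalmass + smin ** 0.5) ** 2
# smin = 12.630916..., smax = 3.157729... -> smax < smin, so the code stops
```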

Any suggestions on how to overcome this issue (or these issues) would be deeply appreciated. I have been dealing with this for about two weeks and only figured out how to isolate it this morning. Thanks for looking into this!

best,
richard

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: Rikkert Frederix
Solved by: Rikkert Frederix
Solved: 2020-06-10
Last query: 2020-06-10
Last reply: 2020-06-10
Best answer: Rikkert Frederix (frederix) said : #1

Hi Richard,

Thanks for your very detailed report.

I had a look, and it seems to be related to the interplay of 'competing resonances' in the phase-space generation and the updated values of tau_min you need to have enough points passing cuts. In particular, the code cannot determine correctly if there are competing resonances needed for your process, making all sorts of things fail. (Competing resonances appear when there are multiple massive s-channel propagators in a single diagram that cannot go on-shell all at the same time due to mass constraints).

I think you'll have to fall back to a less efficient solution: generate events with a looser pTlXCut and apply the more stringent cut at the analysis level. You can improve the statistics in the tails by using a non-trivial bias_weight_function(), which is available in cuts.f. With this function you can force the code to generate more events in certain regions of phase space, which then come with a smaller weight. Hence, you won't get entirely unweighted events, but you also won't lose too many events due to the stringent cuts. You'll have to play a bit with this function to see what functional form suits you best.
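To make the idea concrete: a bias function grows in the region you want to populate, and each event's weight is divided by the bias afterwards. A hypothetical functional form in illustrative Python (not the Fortran bias_weight_function() itself; the pt0 and power choices are placeholders you would tune):

```python
def bias(pt, pt0=75.0, power=2.0):
    """Hypothetical bias: oversample the high-pT tail above pt0.

    Events are generated with probability proportional to bias * |M|^2
    and then carry a compensating weight of (original weight) / bias,
    so the biased sample still integrates to the correct cross section.
    """
    return (pt / pt0) ** power if pt > pt0 else 1.0
```

For example, bias(150.0) = 4.0: the region around 150 GeV is sampled four times more densely, and those events carry a quarter of the weight.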

Best,
Rikkert

Richard Ruiz (rruiz) said : #2

Hi Rikkert,

Thanks for the explanation. I am starting to see the issue. A couple of quick follow-up questions:

1. The s_min, s_max checks (and hence the competing-resonance checks) are failing for m_tau = 1.777 GeV. Is it safe to expect things to work better if I simply set m_tau = 0 (with width_tau ≠ 0) at the generator level but keep m_tau ≠ 0 at the parton-shower (Pythia8) level?

2. I have never used the bias_weight_function() feature before. Is it compatible with FxFx merging?

Best,
Richard

Richard Ruiz (rruiz) said : #3

Thanks Rikkert Frederix, that solved my question.

Rikkert Frederix (frederix) said : #4

1. For sure, the problems are less severe with massless tau leptons. If you load a model with zero tau-lepton mass and generate your process from there, MG5_aMC running at NLO will assign a tau-lepton mass to the LHEF events before writing them out (and passing them to Pythia8). Simply setting the tau mass to zero in the param_card is not enough here: you have to generate the process from a model without a tau-lepton mass. Irrespective of the width used, Pythia8 will then decay the taus accordingly.

2. Yes, it's compatible with FxFx.

Best,
Rikkert

Richard Ruiz (rruiz) said : #5

Hi Rikkert,

Thanks for the quick responses!

1. This is really cool and helpful. I did not realize that MG5_aMC treated massless taus like this.

2. Excellent

best for now,
richard