multiparticles in BSM models

Asked by Pilar

Hello,

We used FeynRules (v2.3.26) to define a new model implementing a Seesaw (type I) mechanism, to which we added some effective low-energy interactions with mesons. The model generation is successful and the UFO files seem to contain the correct particles and vertices. However, we find a strange behavior when we use the model with MadGraph (v2.6.6) to compute decay widths for the new particles we introduced.

We have introduced 4 neutrino mass eigenstates: a heavy one (n1) and three massless ones (v1, v2 and v3).

If we define nu = v1 v2 v3 and generate the process n1 > v1 v1~ nu, we obtain a decay width of 2.3*10^-37 GeV.
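For completeness, our MG5 session is roughly of the following form (the model and output directory names here are only placeholders):

import model Seesaw_lowE_UFO
define nu = v1 v2 v3
generate n1 > v1 v1~ nu
output n1_width_test
launch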

However if we generate the separate processes:

n1 > v1 v1~ v1, n1 > v1 v1~ v2 and n1 > v1 v1~ v3

we obtain 3.7*10^-41, 1.8*10^-41 and 1.8*10^-41 GeV respectively, which does not add up to the first result.

This happens with several different processes involving neutrinos and we have not been able to determine what is causing it. Also, if we generate n1 > nu nu~ nu we obtain 2.6*10^-26 GeV, much larger than all the individual processes we generate.

We have also noticed that when requesting n1 > nu~ nu nu, all the generated subprocesses are labeled as n1 > v2~ v2 v3 and so on. However, when we request a process without using the nu definition, such as n1 > v2 v2~ v3, it is instead labeled as n1 > vl~ vl vl. This happens even though we *never* defined vl (neither in the model nor in the MadGraph interface when generating the events). Maybe this has something to do with the issue?

Any info you could provide on this would be very helpful. If you would like to see the UFO files of the model, I can send them to you by email.
Many thanks in advance,
Pilar

Olivier Mattelaer (olivier-mattelaer) said :
#1

My first idea here is that you are sensitive to numerical cancellation.
The question is why you have such a small value for your width.
If it comes from something like A-B where A and B are very close to each other, then you are certainly facing a numerical issue.

Cheers,

Olivier

Pilar (pcoloma) said :
#2

We don't expect such small values, nor any cancellations, in this computation. In fact, we think the larger result is the correct one (or at least close to it), so we don't understand the very small values obtained for the separate subprocesses. The width is expected to be small because of the mass we are using, in the O(100) MeV range.

Also, do you know why the vl multiparticle definition reappears if we have not declared it? If this is something that MadGraph does automatically, we think it could be part of the issue, mostly because it seems to reappear in combination with the very small (and incorrect) values.

Thanks again,

Pilar

Olivier Mattelaer (olivier-mattelaer) said :
#3

Hi,

Then I guess I would need to have the model in order to comment.

Regarding vl, this is perfectly normal: it comes from the subprocess-grouping feature of MadGraph, which has nothing to do with your definition of the multiparticles, so I do not see anything to worry about here.
You can obviously turn it off (via set group_subprocesses False), but I would not be particularly worried about this.
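To be concrete, this is set in the interface before the output command, for instance (the output directory name is just an example):

set group_subprocesses False
generate n1 > v1 v1~ nu
output n1_ungrouped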

Cheers,

Olivier

Pilar (pcoloma) said :
#4

Hi,

Well, the thing is that there are no ve, vm, vt particles in this model: we removed them and are only using the mass eigenstates.

So, how does MadGraph define the vl in this case?

In any case, I will send you the files by email so you can take a look. Thanks again,
Pilar

Olivier Mattelaer (olivier-mattelaer) said :
#5

Hi,

I have run your process and observe the following:

1. In the log, I see a lot of "Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG"
-> this is a sign of an issue with numerical stability (at this stage I do not know what is causing it, but it is bad).

2. The statistical error of your process is 100%:
  === Results Summary for run: run_01 tag: tag_1 ===

     Width : 4.271e-37 +- 2.159e-37 GeV
     Nb of events : 10000

Meaning that you cannot trust such a result.

3. The reason for this large error can be seen in debug mode; in that case, you would see lines such as:

find 12 multijob in /Users/omattelaer/Documents/workspace/2.6.7/PROC_test_UFO_0/SubProcesses/P1_n1_v1v1xv3/G8.2
multi run are inconsistent: 0.0 < 2.13503333333e-37 - 25* 0.0: assign error 4.3188e-37
Combined 6 file generating 37967 events for /Users/omattelaer/Documents/workspace/2.6.7/PROC_test_UFO_0/SubProcesses/P1_n1_v1v1xv3/G8.2

4. Now, debugging the source of this numerical error, it is actually linked to the running of alpha_s. So I would suggest switching to a fixed-scale computation for alpha_s and forcing a large scale (see the sketch below this list). If needed, you can correct for the value of alpha_s at such a low scale yourself, but we are not able to compute it.

5. Fixing alpha_s is actually not enough: you have an issue with at least the "Z" propagator.
When computing the Z propagator alone, it actually sometimes returns 0 (I am not sure whether this has a physics reason or not; is this expected for you?).

6. When generating this decay without the Z propagator, you get:
     Width : 1.94e-40 +- 5.978e-41 GeV
     Nb of events : 10000
while separately for the three of them you get:
     Width : 3.775e-41 +- 4.09e-44 GeV
     Nb of events : 10000
-------
     Width : 1.888e-41 +- 2.045e-44 GeV
     Nb of events : 10000
-------
     Width : 1.888e-41 +- 2.045e-44 GeV
     Nb of events : 10000

7. The issue is fixed if you do "set group_subprocesses True" (which is counter-intuitive, so I need to understand why); see the sketch below this list.

I will continue to look at this, but this is the progress I have made so far (and it potentially already gives you a way to generate this correctly).
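To be explicit, a minimal way of trying both workarounds (points 4 and 7 above) would be something along these lines, where the output directory name and the numerical value of the scale are only examples:

set group_subprocesses True
generate n1 > v1 v1~ nu
output n1_grouped
launch
set fixed_ren_scale True
set scale 91.188

The set commands given at launch time simply edit the corresponding run_card entries, so editing the run_card by hand works equally well.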

Cheers,

Olivier

Pilar (pcoloma) said :
#6

Hi,

Thank you very much for taking time to look into this.

Indeed you are right: the individual subprocesses have very large errors when generated with the multiparticles, and this seems to be fixed by doing "set group_subprocesses True" as you suggested.

However, we think that the values computed this way are too small (even now, with small errors) and therefore incorrect. Our estimate is that these processes should be of order 10^-26 GeV. And, in fact, if we instead generate the related process

n1 > Z > nu e+ e-

we do obtain 7.902e-27 +- 7.078e-30 GeV, in line with our expectations. However

n1 > Z > nu nu~ nu

returns 0 with the following message:

Survey return zero cross section.
   Typical reasons are the following:
   1) A massive s-channel particle has a width set to zero.
   2) The pdf are zero for at least one of the initial state particles
      or you are using maxjetflavor=4 for initial state b:s.
   3) The cuts are too strong.
   Please check/correct your param_card and/or your run_card.

In answer to your question regarding the width of the Z in this model: no, we do not expect it to change significantly with respect to the SM value. In fact, we checked that the Z width is set to its correct value in the param_card of the model:

DECAY 23 2.495200e+00 # WZ

We then thought we might have a problem with the Z coupling to nu, but when we compute

Z > nu nu~

we obtain Width : 0.4976 +- 7.17e-10 GeV. This is consistent with the invisible width of the Z in the SM,
so the coupling seems to be fine (at least for this process).

Do you have any suggestions we could try? We also fixed the scale for alpha_s, as you suggested, in all the tests described above, but that didn't solve the issue either...

By the way, we are using version 2.6.6 (from your response it seems you might be using 2.6.7).

Thank you very much for your help again!
Best,
Pilar

Olivier Mattelaer (olivier-mattelaer) said :
#7

Hi,

I continued my investigation from yesterday. I found that the phase-space integration could be wrong in the absence of cuts (and of any resonances) if the collision energy is below a GeV.

This can be fixed by the following patch:
=== modified file 'Template/LO/SubProcesses/myamp.f'
--- Template/LO/SubProcesses/myamp.f 2019-06-17 14:27:09 +0000
+++ Template/LO/SubProcesses/myamp.f 2019-08-01 22:10:48 +0000
@@ -430,7 +430,7 @@
      $        i.ne.-(nexternal-(nincoming+1)))then
                   a=prmass(i,iconfig)**2/stot
                   xo = min(xm(i)**2/stot, 1-1d-8)
-                  if (xo.eq.0d0) xo=1d0/stot
+                  if (xo.eq.0d0) xo=MIN(10d0/stot, stot/50d0, 0.5)
                   call setgrid(-i,xo,a,1)
                endif
 c     Set spmass for BWs
@@ -456,7 +456,7 @@
                      xo = (MMJJ * 0.8)**2/stot
                   endif
                endif
-               if (xo.eq.0d0) xo=1d0/stot
+               if (xo.eq.0d0) xo=MIN(10d0/stot, stot/50d0, 0.5)
 c              if (prwidth_tmp(i, iconfig) .eq. 0d0.or.iden_part(i).gt.0) then
                call setgrid(-i,xo,a,1)
 c              else

With that patch, you get a contribution from the Z diagram and it no longer matters whether you use set group_subprocesses True or False (True is obviously much faster, as usual).
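Remember that the files under Template/LO are copied into each process directory at output time, so the patch only takes effect in directories generated after the edit, for instance (the directory name is just an example):

generate n1 > v1 v1~ nu
output n1_patched
launch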

Thanks a lot for finding this issue in low-energy computations.

Cheers,

Olivier

Pilar (pcoloma) said :
#8

Great!!! Indeed, when we did as you suggested, the returned width is correct and the error is reasonable. So I am marking this question as solved.

However, note that when generating the events we still get:

Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG

So I am not sure whether we will run into numerical issues later on. If that happens, I will reopen this thread (let's hope that's not the case...).

Thanks a lot again, Olivier, for all your help!
Pilar

Pilar (pcoloma) said :
#9

Thanks Olivier Mattelaer, that solved my question.

Olivier Mattelaer (olivier-mattelaer) said :
#10

Did you turn off the running of alpha_s?
I do not face any such numerical issue.

Cheers,

Olivier

Pilar (pcoloma) said :
#11

Oh, you are right. Sorry, we forgot to fix it in that run... it seems this is completely solved then!
Thanks again,
Pilar