Large error while computing cross-section in MadGraph

Asked by Disha Bhatia on 2019-12-29

Hello,

I am evaluating a cross-section for the process:

p p > p2, (p2 > zi1~ zi2, (zi2 > zi1 h1), (h1 > a a))

Here zi1 and zi2 are two dark-matter particles, and h1 and p2 are scalars.
The problem is that the cross-section error for this process is huge, and I do not
understand the reason for it. Individually, the decays p2 > zi1~ zi2 and zi2 > zi1 h1
work fine, but they produce a problem in this combination.

Here is the current estimate of the cross-section: 5994.8 +- 9793100.0

Can you please help me debug this?

Also, while running MadGraph, I get this strange warning:
Traceback (most recent call last):
  File "write_param_card.py", line 205, in <module>
    ParamCardWriter('./param_card.dat', generic=True)
  File "write_param_card.py", line 30, in __init__
    self.write_card(list_of_parameters)
  File "write_param_card.py", line 90, in write_card
    self.write_dep_param_block(lhablock)
  File "write_param_card.py", line 123, in write_dep_param_block
    exec("%s = %s" % (parameter.name, parameter.value))
  File "<string>", line 1, in <module>
ZeroDivisionError: complex division by zero

but MadGraph seems to run fine despite this.

Thanks,
Disha

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Olivier Mattelaer
Solved: 2019-12-30
Last query: 2019-12-30
Last reply: 2019-12-30

Olivier Mattelaer said : #1

Hi,

> Also, while running MadGraph, I get this strange warning:

This means that your benchmark point is likely not physical (or your model is badly written).
It looks like you have a division by zero when evaluating some parameter of the model.
The division by zero it complains about is related to a parameter of the param_card
which is not considered a free parameter of the model.
In general, such non-free parameters of the param_card are masses or widths, since that information is mandatory in the param_card (for codes other than MG5aMC).

So you certainly have to fix this in order to have a consistent model.
Now this might be the origin of your issue (or not).
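
As an illustration (the parameter names below are hypothetical, not taken from your model): write_param_card.py evaluates each dependent parameter with exec(), as the traceback shows, so an internal parameter defined as, say,

    WH1 = (gDM**2 * MH1**3) / (8 * cmath.pi * MZI2**2)

raises exactly this ZeroDivisionError when the benchmark sets MZI2 = 0.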

> I am evaluating a cross-section for the process:
>
> p p > p2, (p2 > zi1~ zi2, (zi2 > zi1 h1), (h1 > a a))
>
> Here zi1 and zi2 are two dark-matter particles, and h1 and p2 are scalars.
> The problem is that the cross-section error for this process is huge, and I do not
> understand the reason for it. Individually, the decays p2 > zi1~ zi2 and zi2 > zi1 h1
> work fine, but they produce a problem in this combination.

If we assume that the above error is irrelevant for your process (i.e. the problematic mass/width or other parameter is not used in this computation), then the issue is likely related to the widths.
What are the widths of p2, zi2 and h1?
Since we have recently improved the handling of small widths: which version of MG5aMC are you using, and did you keep the default value of the small_width_treatment option?
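
As a quick check (the PDG codes below are placeholders; use your model's actual codes), you can let MG5aMC compute the widths automatically in the param_card instead of forcing a fixed value:

    DECAY 9000002 Auto # width of zi2, computed automatically by MG5aMC
    DECAY 9000005 Auto # width of p2

and the threshold for the small-width handling can be inspected or changed in the run_card of recent versions:

    1e-12 = small_width_treatment ! widths below this fraction of the mass receive the small-width treatment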

Cheers,

Olivier

Disha Bhatia (dishabhatia1989) said : #2

Thank you so much. I checked: for zi2, the width was set to zero by hand. As it is a heavy dark-matter particle, I should have set it to some non-zero value in FeynRules itself; that is why it was giving large errors.
Now at least this part is working fine.

I converted all couplings to external parameters, but the warning "ZeroDivisionError: complex division by zero" still appears.
I will perhaps rewrite the model in parts to see where I am making the mistake.

Another question I want to ask. I want to look at the process

p p > p2, (p2 > h1 p1, (p1 > zi1 zi1~), (h1 > a a))

In FeynRules, the h1 > a a decay exists analytically at tree level. Its numerical value is of course zero, but analytically the process is there. So when I rewrite the command as p p > p2, (p2 > h1 p1, (p1 > zi1 zi1~), (h1 > a a NP=1)), MadGraph takes all diagrams with NP<=1. The situation becomes worse when I want to look at off-shell decays: it takes all the spurious diagrams, which are zero.
Ideally there is no problem, since the answer is zero, but it increases the computation time. Is there a solution by which I can prevent these zero-cross-section diagrams?

Thank you,
Disha

Olivier Mattelaer said : #3

Hi,

Did you check this FAQ:
https://answers.launchpad.net/mg5amcnlo/+faq/2312

Otherwise, you should look at the h1 > a a diagram, check which coupling it carries, and forbid it via QED=0:

p p > p2, (p2 > h1 p1, (p1 > zi1 zi1~), (h1 > a a NP=1 QED=0))
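
A quick way to see which coupling orders the h1 > a a vertex actually carries (a minimal sketch; replace YourModel with the name of your UFO model) is to generate the decay on its own and draw its diagrams:

    import model YourModel
    generate h1 > a a
    display diagrams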

Cheers,

Olivier

Disha Bhatia (dishabhatia1989) said : #4

Thanks Olivier Mattelaer, that solved my question.