# EFT dim-8 aQGC decomposition in VBS pp->4l2j: sum of cross sections does not match the EFT prediction

Asked by Alexandros Marantis

Dear experts,
I am writing you regarding the "decomposition" of some anomalous Quartic Gauge Couplings samples (details: https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/MadGraphEFTMCdecomposition), in the VBS process pp->4l2j, generated with MG5 v2.6.1 by using a particular BSM dimension-8 model (SM_LT0: http://feynrules.irmp.ucl.ac.be/wiki/AnomalousGaugeCoupling ) and assuming that all the operators except one, ft0, are zero.

Following the general idea that the EFT predictions can be split up as the sum of the SM, the SM-EFT interference (linear term) and the pure EFT (quadratic term) contribution, I'm generating independent pure samples by using the "New Physics" parameter (NP==0 for SM, NP^2==1 for pure SM-EFT interference and NP==1 for pure QGC). The cross-terms (interference between EFT operators) are expected to be 0, as all the operators, except ft0, are 0. I am also generating a sample including all the contributions by using NP=1 (same as NP<=1) in order to compare the cross section of this sample with the sum of the independent samples.

The cross sections for
pp > j j l+ l- l+ l- QCD=0 QED=6
with ft0=5e-13 are the following:

pure SM, NP==0: xsec = 0.0005771 +- 2.049e-06 pb
pure aQGCs, NP==1: xsec = 0.0002521 +- 5.606e-07 pb
pure interference SM-EFT, NP^2==1: xsec = 1.187e-05 +- 4.084e-08 pb
SUM: 0.00084107 pb

while the cross section of the "all in one" sample is,
NP=1: xsec = 0.0007203 +- 2.18e-06 pb

We have a difference of ~14%. So, my question is if this behaviour is expected and if not, which of two results is more reliable, the sum of the independent samples or the sample with all the contributions?
(I can attach the banners of each run for more information)
Thank you in advance,
Alexandros
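For reference, the numbers above can be cross-checked quickly. A minimal sketch (using only the cross sections quoted in this thread) that propagates the statistical errors and shows the gap is far larger than the statistical uncertainty:

```python
import math

# Cross sections (pb) and MadGraph statistical errors quoted above.
sm,   d_sm = 5.771e-4, 2.049e-6   # NP==0   (pure SM)
quad, d_q  = 2.521e-4, 5.606e-7   # NP==1   (pure aQGC)
intf, d_i  = 1.187e-5, 4.084e-8   # NP^2==1 (SM-EFT interference)
tot,  d_t  = 7.203e-4, 2.18e-6    # NP<=1   ("all in one")

ssum = sm + quad + intf           # sum of the independent samples
diff = ssum - tot                 # discrepancy vs the inclusive sample
# statistical error on the difference, assuming uncorrelated samples
d_diff = math.sqrt(d_sm**2 + d_q**2 + d_i**2 + d_t**2)

print(f"sum  = {ssum:.5e} pb")    # 8.41070e-04 pb
print(f"diff = {diff:.3e} pb "
      f"({100 * diff / ssum:.1f}% of the sum, {diff / d_diff:.0f} sigma)")
```

The ~14% gap is roughly 40 standard deviations, so it cannot be a statistical fluctuation; it must be systematic (scale choice, cuts, or integration bias, as discussed below in the thread).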

 Revision history for this message Olivier Mattelaer (olivier-mattelaer) said on 2019-03-05: #1

Hi,

What is the scale uncertainty for all of those numbers?
I would expect them to agree within the scale uncertainty.
The default dynamical scale choice is Feynman-diagram based, so the two computations only agree within that theoretical error. You could choose another dynamical scale such that the two computations match within the statistical uncertainty.
This makes sense, since the default dynamical scale for the interference term might not be particularly well motivated physically.

Cheers,

Olivier


 Revision history for this message Alexandros Marantis (amaranti) said on 2019-03-18: #2

Hello Olivier,
I ran various configurations of the dyn_scale_factor parameter (=1,2,3,4), which unfortunately didn't solve the problem. The cross-section deviation between the sum (SM+QGC+interference) and the directly produced sample is still large (10-20%), and it definitely cannot be explained by the scale uncertainty.
The run_card I'm using is the same as that of an official ATLAS mc16 sample. I will test again with the default run_card, hoping for better results.
By the way, the QGC decomposition works fine for the WZ channel, giving deviations of 0.5-1%.

Cheers,
Alexandros

 Revision history for this message Olivier Mattelaer (olivier-mattelaer) said on 2019-03-18: #3

This then seems to point to a phase-space integration bias.
Can you test the linear behaviour of the interference?
(i.e. replace the value of ft0 by
scan:[-5e-14,-5e-13,-5e-12, 5e-13, 5e-14, 5e-12])

You can also test the value "0", but this might make MG5aMC simply crash.
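The scan above makes the scaling testable: under the decomposition sigma(ft0) = sigma_SM + ft0*sigma_int + ft0^2*sigma_quad, opposite-sign ft0 pairs isolate the linear and quadratic pieces. A toy sketch of that consistency check (the coefficients below are invented for illustration, not fitted to the real samples):

```python
# Toy consistency check for the EFT scan (coefficients are invented for
# illustration; they are NOT the real cross sections from this thread).
# If sigma(f) = sm + f*lin + f^2*quad, opposite-sign pairs give:
#   (sigma(+f) - sigma(-f)) / (2*f)            -> lin   (interference)
#   (sigma(+f) + sigma(-f) - 2*sm) / (2*f^2)   -> quad  (pure EFT)
sm, lin, quad = 5.8e-4, 2.4e7, 1.0e21   # toy values: pb, pb*GeV^4, pb*GeV^8

def sigma(f):
    return sm + f * lin + f * f * quad

for f in (5e-14, 5e-13, 5e-12):
    lin_est = (sigma(f) - sigma(-f)) / (2 * f)
    quad_est = (sigma(f) + sigma(-f) - 2 * sm) / (2 * f * f)
    print(f"ft0={f:.0e}: interference={lin_est:.3e}, quadratic={quad_est:.3e}")
```

Applied to the real scan (with sigma(f) replaced by the generated cross sections), any significant f-dependence of the extracted linear or quadratic coefficient would flag an integration bias.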

Did you use an "optimal" model (one from which all vertices associated with zero couplings have been removed)?
Using a non-optimal model complicates the phase-space integration, and I would strongly advise always using an optimal (restricted) model for EFT computations (especially dimension-8 ones).
If you did not, please follow those instructions:

This being said:
- Note that if you use a fixed scale, the "dynamical_scale_choice" does not have any impact on the computation (so what you observe is pure statistical fluctuation, likely under-estimated if you use the same seed in all cases).

- The phase-space integrator for interference computations is something that we do not really know how to do well. We use the "standard" code for the integration, but this might be quite sub-optimal, so the discrepancy might be related to that (especially since your process is quite complicated...).

Cheers,

Olivier


 Revision history for this message Yannis Maznas (imaznas) said on 2019-03-19: #4

Hello Olivier,

We can start producing some samples to test the behaviour of the interference.
Do you have a preference for the dyn_scale_factor parameter, or does it not matter?

About the "optimal" model, we aren't really certain. Is there an easy way to tell, or should we rather ask the author of the model?

- So fix-scale goes back to False, the default, I suppose.

- About the phase-space integrator for the interference: does this affect both the inclusive and the pure-interference simulations? To rephrase, can we trust one more than the other?

Cheers,
Yannis

 Revision history for this message Olivier Mattelaer (olivier-mattelaer) said on 2019-03-19: #5

Hi,

> Do you have a preference on the dyn_scale_factor parameter or it doesn't matter?

I'm not an expert on that, and I'm not able to determine which scale makes more sense for your process.
For the current comparison, it does not actually matter (it might matter for describing the data, but that's not the point for the moment).

> About the "optimal" model, we aren't really certain. Is there an easy
> way to tell or should we better ask the author of the model?

If you have a lot of "0" entries in your param_card, it means that the model is not optimal.
You do not need the author to optimise the model; this is something that any user can do
(just follow the instructions above).
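For readers landing here: the restriction Olivier refers to is the standard MG5aMC mechanism. A sketch of the procedure (the model directory name SM_LT0_UFO and the restriction name ft0only are assumptions for illustration, not taken from this thread):

```shell
# In the MG5aMC models/ directory (model name assumed: SM_LT0_UFO), create
# a restriction card from the default param_card and zero out every
# anomalous coupling except FT0.  Parameters set exactly to 0 in a
# restrict_*.dat card are removed from the model, together with the
# vertices they multiply.
cp models/SM_LT0_UFO/param_card.dat models/SM_LT0_UFO/restrict_ft0only.dat

# Edit restrict_ft0only.dat: in Block anoinputs keep FT0 at a non-zero
# value and set every other entry (FT1..., FM..., FS...) to 0.

# Then import the restricted model in the MG5aMC shell by suffixing the
# restriction name:
#   import model SM_LT0_UFO-ft0only
```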

> - So fix-scale goes back to False as the default one I suppose.

Yes, it is typically more accurate to use a dynamical scale (it resums some logs).

> - About the phase-space integrator for the interference, does this
> affect both inclusive and pure interference simulations? To rephrase,
> can we trust the one more than the other?

Technically both are affected, since they both use the same approximation (no interference) to set up the method of integration. This is obviously less problematic when the interference is 10% of the total cross section rather than 100%, as when you integrate the interference alone.
So in general I would trust the inclusive one more. But for the moment I would not trust either of the two computations as long as it is not clear which one is wrong. Looking at the details of all the logs might be useful (especially if you find an integral that returns 0 when it should not).

Cheers,

Olivier


 Revision history for this message Yannis Maznas (imaznas) said on 2019-03-19: #6

Hi Olivier,

Thanks for the quick reply!

Just for the sake of clarification, I'm posting the link to the param_card below, awaiting your feedback on whether the "0" values are many or not, since I cannot judge which parameters are critical and we have several zero-valued ones:
https://www.dropbox.com/s/ytsbwqbhe5v902i/param_card.txt?dl=0

Cheers,
Yannis

 Revision history for this message Olivier Mattelaer (olivier-mattelaer) said on 2019-03-19: #7

So yes, this is what I call un-optimised.
The Block anoinputs should have only one entry, FT0.

Cheers,

Olivier


 Revision history for this message Alexandros Marantis (amaranti) said on 2019-04-02: #8

Hello Olivier,
as we discussed above, we generated some samples to study the behaviour of the qgc decomposition for various cases. You can see the results of all these studies in this spreadsheet (https://docs.google.com/spreadsheets/d/1KdJynhnXjIAiZ6H9C5MR-nBsWAYwgcgVBsiQ_c1ygP4/edit?usp=sharing)

So, first, we did a scan to test the linear behaviour of the interference and the quadratic term with the non-restricted model, for ft0 = -5e-14, 5e-14, -5e-13, 5e-13, -5e-12, 5e-12. (The links to the banners of each run are in the rightmost column.) You were right about the value "ft0=0": it gives an error stating that the xsec for this process is 0.

The second study is about the restriction of the SM_LT0 model, which does not seem to affect the results compared with the non-restricted model. For example, compare the block "---------> Restricted model - run_card with drll=0.4 and drjj=0.4" with the block "---------> official run_card with drll=0.4 and drjj=0.4".

About the dynamical_scale choice: for all the samples, I chose to stick to the default value (-1), which agrees with that of the official ATLAS sample of my analysis.

Now, as I mentioned in a previous comment, we ran some samples with the default run_card in order to check the decomposition in that case. The difference in cross section between the sum of interference+QGC+SM and the total decreased to ~2%! I believe this is a good sign, so, starting from the run_card of the ATLAS sample, I changed the parameters I assumed were sensitive, one by one, in order to converge on a configuration.

I concluded that the pure-QGC samples are very sensitive to the cut on the minimum distance between leptons (drll). For drll=0.4 we have a deviation of around ~3%, which is ok, but the cut in the event selection for our analysis is smaller (0.2). Actually, the ATLAS sample we're using has been generated with all dr's equal to 0 (drll, drjj, drjl, drbb, drbl, etc.).
If you take a look at the spreadsheet, you will see that for drll=0.1 the cross section of the QGC sample is ~e-04 pb, while for drll=0.2 it is ~e-05 pb.
Do you find this behaviour normal?

Thank you,
Alex

 Revision history for this message Olivier Mattelaer (olivier-mattelaer) said on 2019-04-02: #9

If you keep -1 for the scale choice, then ~20% agreement is what I expect (without running the code).
To get a better idea of what to expect, you can do the standard estimation of the scale variation within MG5aMC;
it will give you an estimate of the precision you can expect between the two computations.

Therefore, when I see agreement at 14%, I think you have a pretty good agreement for the moment.

> Actually, the ATLAS sample which we're using has been generated with all dr's equal to 0 (drll, drjj, drjl, drbb, drbl etc).

Your process is divergent if drll is zero (since it contains a photon propagator).
This is likely related to the dependence of your cross section on the drll cut.
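The divergence can be made concrete with a toy model (an illustration only, not the actual matrix element): a massless photon propagator gives dsigma/dm_ll^2 ~ 1/m_ll^2 near m_ll -> 0, so the cross section integrated down to a small di-lepton cut grows logarithmically as the cut is lowered:

```python
import math

# Toy spectrum: dsigma/dm^2 ~ 1/m^2 (massless photon propagator).
# Integrating from a lower cut m_cut up to m_max gives log(m_max^2/m_cut^2),
# which diverges as m_cut -> 0.  A drll cut acts as such a regulator, which
# may be why the pure-QGC cross section jumps as drll is lowered.
def toy_sigma(m_cut, m_max=91.0):
    return math.log(m_max**2 / m_cut**2)

for cut in (10.0, 1.0, 0.1, 0.01):
    print(f"m_cut = {cut:5.2f} -> toy sigma = {toy_sigma(cut):6.2f}")
```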

Cheers,

Olivier
