QCD scales in Madgraph generation

Asked by lucia di ciaccio

Dear Madgraph Authors,

we would like to compare the MadGraph response with other generators. To this end, it is important to use the same QCD scale. Our MadGraph sample is already generated and uses the "default" QCD scales. The sample is pp -> WZ at LO (version 2.7.x).

We found the following information here: https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/FAQ-General-13
namely that the QCD scale "for pair of heavy particles it is the geometric mean of M^2+pT^2 for each particle".
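Read literally, the quoted formula would correspond to a scale like the following sketch (the function name is ours, not MadGraph's, and which mass M enters is precisely the ambiguity asked about below):

```python
def geom_mean_scale(p1, p2):
    """Candidate scale from the FAQ formula for a heavy pair:
    mu = ((M1^2 + pT1^2) * (M2^2 + pT2^2))**0.25,
    i.e. the geometric mean of M^2 + pT^2 of the two particles.
    Each argument is a four-momentum (E, px, py, pz); here M is taken
    as the invariant mass of that four-momentum, which is exactly the
    ambiguity raised here (pole mass vs mass of the decay products)."""
    def m2_plus_pt2(p):
        e, px, py, pz = p
        m2 = e * e - px * px - py * py - pz * pz  # invariant mass squared
        return m2 + px * px + py * py             # add pT^2
    return (m2_plus_pt2(p1) * m2_plus_pt2(p2)) ** 0.25
```

For two bosons at rest this reduces to the geometric mean of the two masses.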

We would like to know if M refers to the pole mass or to the invariant mass of the decay leptons of the heavy particle.
 Thanks a lot for your help.

   Lucia

Question information

Language: English
Status: Expired
For: MadGraph5_aMC@NLO
Assignee: No assignee
Revision history for this message
Launchpad Janitor (janitor) said :
#1

This question was expired because it remained in the 'Open' state without activity for the last 15 days.

lucia di ciaccio (ldcldc) said :
#2

Dear Madgraph Authors,

  as a follow-up to this question concerning the default QCD scale in MadGraph, I would like to mention that I tried some reverse engineering:

  I read the scale and the events from the LHEF MadGraph 2.7.x output (LO, polarised) and compared the scale with six possible transverse-mass definitions for the WZ system.

  In summary:
  none of the considered definitions of the transverse mass of the diboson system agrees perfectly with the scale value written by MadGraph. I could attach the plots here, but I do not know how to.
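A comparison of this kind can be sketched as follows (a minimal illustration, not the script actually used; `lhe_scales` is a name we introduce, and only one candidate definition, the geometric-mean scale from the FAQ, is shown):

```python
import re

def lhe_scales(lhe_text):
    """Read the SCALUP field and the intermediate W/Z lines of each
    <event> block in an LHE file (a minimal sketch, not a full LHEF
    parser).  Yields (scalup, candidate) pairs, where `candidate` is
    the geometric-mean scale ((M_W^2+pT_W^2)*(M_Z^2+pT_Z^2))**0.25
    built from the W and Z entries (PDG ids +-24 and 23)."""
    for block in re.findall(r"<event>(.*?)</event>", lhe_text, re.S):
        lines = [l for l in block.strip().splitlines() if l.strip()]
        scalup = float(lines[0].split()[3])    # SCALUP: event scale in GeV
        m2pt2 = []
        for line in lines[1:]:
            f = line.split()
            if len(f) < 11 or not f[0].lstrip("-").isdigit():
                continue                       # skip non-particle lines
            if abs(int(f[0])) in (23, 24):     # Z or W boson
                px, py, _, _, m = map(float, f[6:11])
                m2pt2.append(m * m + px * px + py * py)
        if len(m2pt2) == 2:
            yield scalup, (m2pt2[0] * m2pt2[1]) ** 0.25
```

Plotting `scalup` against each candidate definition event by event makes any mismatch visible immediately.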

 The question arises when one wants to compare with calculations.

 Would you advise, in this case, not using the default scale but rather a user-defined scale?
 I would like to propose that to the ATLAS community.

   Thanks a lot for your help
   Best

   Lucia Di Ciaccio
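For reference, a fixed user-defined scale can be selected at LO in the run_card, along these lines (a sketch of the standard LO run_card parameters; the value 91.188 GeV is purely illustrative):

```text
 True   = fixed_ren_scale    ! if .true. use fixed renormalisation scale
 True   = fixed_fac_scale    ! if .true. use fixed factorisation scale
 91.188 = scale              ! fixed renormalisation scale (GeV)
 91.188 = dsqrt_q2fact1      ! fixed factorisation scale for pdf1 (GeV)
 91.188 = dsqrt_q2fact2      ! fixed factorisation scale for pdf2 (GeV)
```

Recent run_cards also offer a `dynamical_scale_choice` switch with a few preset event-by-event scale definitions.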

Olivier Mattelaer (olivier-mattelaer) said :
#3

Hi,

The default scale computation is based on a CKKW-L clustering and does not have a closed formula in general.
On top of that, it is not a "free" clustering algorithm, since we feed in some information from the Feynman-diagram topology.

Such scales (used heavily in matching/merging algorithms at LO and NLO, and also used heavily in Sherpa) have nice properties for multi-scale processes, but they are indeed not appropriate for comparisons between a code and theory (or between codes).

I will not go as far as the Sherpa authors, who claim that such a scale is a must-have for comparison to data (the computation of such a scale takes 50% of the total Sherpa time at NLO accuracy), but it is indeed expected to improve agreement with data.

Cheers,

Olivier

lucia di ciaccio (ldcldc) said :
#4

Dear Olivier,

  thanks, this is much clearer.

 It is a pity that this information is not widely known (before posting the question on Launchpad, I had asked the experts in my group).
 I will take care to spread the information.

   Best
    Lucia

(quoting Olivier Mattelaer's answer #3 of 31 Oct 2020)

lucia di ciaccio (ldcldc) said :
#5

.. I have a further question (just to understand), and sorry if it is naive:

 for LO computations at parton level (in my case pp -> WZ -> ll lnu, where the final state is simple, just four leptons) I do not understand the role of the CKKW-L clustering, which is a scheme for merging fixed-order tree-level matrix-element generators with parton-shower models.

   Thanks again for your patience.

   Lucia

Launchpad Janitor (janitor) said :
#6

This question was expired because it remained in the 'Open' state without activity for the last 15 days.