Divergent cross section and low number of events for high mmnl cut

Asked by Kenneth Long

Hi Experts,

I have an issue similar to https://answers.launchpad.net/mg5amcnlo/+question/257133

I am generating the process p p > w+ l+ l- j j QED = 6, w+ > l+ vl. I am using the model SM_LT012 (http://feynrules.irmp.ucl.ac.be/wiki/AnomalousGaugeCoupling) with the f_{T1} parameter = 1 TeV^{-4}. Even when I generated 1 million events, the region with very high WZ mass and dijet mass was not well populated. My impression is that this is purely a matter of low statistics. However, I am interested in how this region is affected by new physics models (I will be using reweight to scan over a range of parameter values).

My proposed solution was to split the generation into two samples around the four-lepton mass using the mmnl/mmnlmax cuts. Setting mmnlmax = 1000 works, but setting mmnl = 1000 gives divergent cross sections and far fewer events than requested (~100 events when 10,000–100,000 were requested). I also ran the same process in the Standard Model to check whether the new-physics parameters introduced the divergence, but those runs diverged as well. My LHE files (with run_cards) can be found here:
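
For reference, the split I have in mind looks like the following pair of run_card fragments. The syntax follows the standard MadGraph run_card; the exact cut names and the convention that -1 disables a max cut are my assumptions about this MadGraph version, and the 1000 GeV boundary is just the value quoted above:

```text
# Sample A: low-mass region
 0     = mmnl     ! min invariant mass of all leptons
 1000  = mmnlmax  ! max invariant mass of all leptons

# Sample B: high-mass region
 1000  = mmnl     ! min invariant mass of all leptons
 -1    = mmnlmax  ! max invariant mass of all leptons (-1 = no cut)
```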

http://www.hep.wisc.edu/~kdlong/madgraph_files/

I also have a related question. I have been running survey/refine/combine_events/store_events separately to work around a problem where the combine steps do not work on our Condor cluster. I see that the generate_events script only calls the refine step twice, but my impression has been that difficult processes converge better with extra calls to refine. Is this correct? Could you point me to more information on how these two commands divide up the event generation, or provide an explanation? Also, will passing more options to the survey command, e.g. survey --accuracy=0.001, help increase the total number of events generated?
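
For concreteness, my manual workflow looks roughly like this inside the madevent shell. The command names are the ones I have been using; the --accuracy flag and the event counts are illustrative, and exact options may differ between MadGraph versions:

```text
./bin/madevent
> survey --accuracy=0.001   # integrate each channel to the requested accuracy
> refine 100000             # first pass at generating the requested events
> refine 100000             # extra refine passes for channels that have not converged
> combine_events            # merge the per-channel event samples
> store_events              # write the combined sample to the Events directory
```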

Thanks,

Kenneth

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: No assignee
Solved by: Kenneth Long
Revision history for this message
Kenneth Long (kdlong-e) said :
#1

Hi Experts,

I have a small update. A Standard Model process did converge today and produced the 100,000 events I requested. I did a couple of things differently, so I'm not certain which of them actually fixed the problem. First, I updated my version of MadGraph with bzr pull from the lp:mg5amcnlo branch, though I thought I was already using the newest version. I also changed auto_ptj_mjj from False to T (though I see no difference in the run_card printed in the LHE file compared to before), and called refine 10 times; see the madevent output here: http://www.hep.wisc.edu/~kdlong/madgraph_files/wpz_zg_all_SM_run04.out

I apologize for not having a more systematic description of the problem. Please let me know of any information I could provide to help you identify the source of the issue. As always, I greatly appreciate your time and expertise.

Kenneth

Kenneth Long (kdlong-e) said :
#2

Hi,

I was able to generate 100,000 events without issue with both the mmnl and mmnlmax cuts when auto_ptj_mjj = T. But the run_card indicates that it should be set to False for VBF processes. Can you explain why this has such a dramatic effect?

Also, I would still appreciate more info about the survey and refine commands.

Thanks again,

Kenneth

Olivier Mattelaer (olivier-mattelaer) said :
#3

Dear Kenneth,

Setting auto_ptj_mjj forces all jets (and mmjj) to behave like QCD jets, i.e. ptj is set to the value of xqcut, and mmjj as well.
You typically do not want those values applied to the jets in VBF/single-top processes, which is why the advice is to set it to False in those cases.
So the effect here is to enforce a non-zero mmjj, since you already have a cut on pt.
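
In other words, for a VBF-type process the recommended configuration is along these lines (a run_card fragment; the 10 GeV values are only illustrative, choose cuts appropriate for your analysis):

```text
 False = auto_ptj_mjj  ! do not let xqcut overwrite ptj/mmjj (keep False for VBF)
 10.0  = ptj           ! minimum jet pt, set explicitly
 10.0  = mmjj          ! minimum dijet invariant mass, set explicitly (non-zero)
```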

> Also, I would still appreciate more info about the survey and refine commands.

Not sure what I can tell you that you do not know already...
Clearly, if you force the survey to be more precise it will be slower, but this should help later. Though actually not that much,
since the first refine (I am not sure about the following ones) discards the grids coming from the survey, so it is really not that important to have a very precise survey.

Actually, the idea of a very precise survey is the one used in gridpack mode, where all channels are computed to high precision during the survey and the refine is then done in a completely different way.

Cheers,

Olivier

Kenneth Long (kdlong-e) said :
#4

Hi Olivier,

Thanks for your response. My impression now is that I can use the data set I generated with the automatic cuts, though I will also try manually setting mmjj = 10 and comparing the results. I will post a follow-up if those results show issues, but for now I think this largely solves the problem.

Best,

Kenneth