lvlvbb NLO with MG5_aMC

Asked by Alessio Pizzini

I am encountering problems when integrating lvlvbb~ at NLO with MadGraph (version 2.6.3.2).

Everything works smoothly at LO, both matrix elements and event generation, while generating events at NLO invariably results in a crash.
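For reference, the generation follows the usual MG5_aMC pattern; here is a sketch for a single lepton-flavour combination, consistent with the subprocess names discussed below (the model import, output directory name, and launch settings are assumptions; the exact process definition and cuts are in the report linked below):

  import model loop_sm
  generate p p > e- e+ ve ve~ b b~ [QCD]
  output lvlvbb_NLO
  launch lvlvbb_NLO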

Here is a full report of the settings, Madgraph instructions and error messages:

https://www.overleaf.com/read/phvsctjmwdqy

Alessio

Olivier Mattelaer (olivier-mattelaer) said: #1

What is the content of the directory
/project/atlas/Users/pizzal/MG52_6_3_2/grid_27may/SubProcesses/P0/GF5 ?
This is the directory reported as "wrong" in your case.

The name seems weird, actually, since P0 is typically something like P0_gg_ttx, so it might be a hint of the issue.

But I also see a link to a better-formatted path:
/project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/log_MINT0.txt

Can you check that file and the associated directory?
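For instance, with something like this (the tail length is arbitrary):

  ls -l /project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/
  tail -n 50 /project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/log_MINT0.txt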

Thanks,

Olivier

Alessio Pizzini (alessio94) said: #2

Hi,

There was an issue with how certain characters were rendered in the LaTeX code; /project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/log_MINT0.txt is actually the correct file.

However, this file doesn't exist, as shown by this line in the log file:

IOError: [Errno 2] No such file or directory: '/project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/log_MINT0.txt'

I will now upload a tarball of the whole /project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx/GF5/ folder to the same public folder on lxplus as the log file, so you can check its content.
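For example, with something like this (the archive name is arbitrary):

  cd /project/atlas/Users/pizzal/MG5_aMC_v2_6_3_2/grid_lvlvbb_NLO_27may/SubProcesses/P0_gg_emepvevexbbx
  tar -czf GF5.tar.gz GF5/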

Thanks,

Alessio

Olivier Mattelaer (olivier-mattelaer) said: #3

Please share it on Dropbox, CERNBox, or another public service, since I do not have access to lxplus.

Cheers,

Olivier


Alessio Pizzini (alessio94) said: #4

I shared the tarball via Dropbox with you (<email address hidden>).

Alessio Pizzini (alessio94) said: #5

You can also find the folder on CERNBox here: https://cernbox.cern.ch/index.php/s/keybR3mg9AFh0Tq

Please let me know whether you need more information.

Rikkert Frederix (frederix) said: #6

Dear Alessio,

I've looked into your log files on the AFS server (for CERNBox a password is needed), as specified in the short presentation you sent me by e-mail.

From there, nothing special can be seen. The log files in GF5 seem okay up to the point where the run stopped, but there is no error message written whatsoever. However, it is clear that this job did not finish correctly, since it is missing a large part of what is typically printed at the end of a run and, more importantly, some output files are missing. This should still be at the "generate the matrix elements" stage, so I am a bit surprised that you wrote that that step finished without errors.

Unfortunately, since there is no error message, I cannot really tell what has gone wrong. Could you check that all the SubProcesses/P*/G* directories have (more or less) the same content? And for the ones that have fewer files, is there anything special written at the end of the log.txt files?
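For instance, a quick comparison could be done with something like this, run from the SubProcesses directory (a sketch; the GF5 path in the last line is just an example):

  # count the files in each channel directory, emptiest first
  for d in P*/G*; do echo "$(ls "$d" | wc -w) $d"; done | sort -n | head
  # then inspect the end of log.txt in a directory with fewer files
  tail -n 30 P0_gg_emepvevexbbx/GF5/log.txt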

I cannot see anything wrong with your setup: the cuts on the leptons should screen all EW divergences, while the mass of the b-quark should screen the QCD ones. The log file specifies that you requested 1000 events with req_acc=-1. This is not consistent with what you wrote in the presentation, but it should not affect the problem itself; I guess it comes from restarting the run with --only_generation. So, no problem here.
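(For reference, those settings correspond to run_card entries of roughly this form, with the values as quoted from the log and the comment text approximate,

   1000 = nevents ! number of unweighted events requested
   -1.0 = req_acc ! required accuracy (-1 = determined from nevents)

and to a restart with ./bin/generate_events --only_generation from inside the process directory.)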

One thing that would slightly simplify the calculation would be to use only opposite-flavour leptons. That would remove some contributions related to pp > zzbb~ (or off-shell photons), which reduces the complexity a bit. But again, it only reduces the complexity somewhat; I don't think those contributions are responsible for the errors.
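For instance, a restriction to opposite-flavour leptons could look like this (a sketch assuming standard process syntax; with no same-flavour l+l- pair in the final state, the zz contributions drop out):

  generate p p > e+ ve mu- vm~ b b~ [QCD]
  add process p p > mu+ vm e- ve~ b b~ [QCD]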

Best regards,
Rikkert

Rikkert Frederix (frederix) said: #7

Dear Alessio,

I also checked the full SubProcesses folder that I got from CERNBox. Checking for log files, I find the following:

[ ~/Downloads/SubProcesses ]$ ls P*/GF*/log.txt | wc -w
    1254
[ ~/Downloads/SubProcesses ]$ ls P*/GF*/log_MINT0.txt | wc -w
    1212

So, there are 42 channels for which your 'generate matrix elements' step failed. I checked the log files in each of the failed ones, but none of them ends with an error message. It almost looks like the jobs were killed externally. Could it be that the site where you are running has a maximum allowed run time, and some of the jobs exceeded it?
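For reference, the channels missing that file can be listed with something like:

[ ~/Downloads/SubProcesses ]$ for d in P*/GF*; do [ -f "$d/log_MINT0.txt" ] || echo "$d"; done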

Best,
Rikkert

Alessio Pizzini (alessio94) said: #8

Dear Rikkert,

It could well be.
I will then try to generate the whole process again from scratch, with different settings for the cluster queues.
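For instance, the queue can be set in input/mg5_configuration.txt (cluster_type and the queue name are site-specific; 'long' below is just a placeholder):

  cluster_type = pbs
  cluster_queue = long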

Thank you,

Alessio
