State of read-only directory support?

Asked by Josh Bendavid

Can someone summarize at what level a read-only directory is supported, either for the mg5_amc@nlo directory itself and/or for generating events with an LO gridpack or a precompiled NLO process directory? What about for madspin?

Thanks,
Josh

Question information

Language: English
Status: Answered
For: MadGraph5_aMC@NLO
Assignee: Valentin Hirschi
Valentin Hirschi (valentin-hirschi) said :
#1

At LO, there is the option 'gridpack' in the run_card.dat. Setting it to '.true.' for the first run sets up a gridpack tarball that can be moved around and run independently on any system with the same architecture to generate events.
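For concreteness, a minimal sketch of this workflow (the run name 'run_01' and the event/seed numbers below are only placeholders): the run_card.dat contains a line like

 True = gridpack ! True = setting up the grid pack

and the first run then produces a tarball such as run_01_gridpack.tar.gz, which can be unpacked and run on the production node with

tar xzf run_01_gridpack.tar.gz
./run.sh 10000 42    # <number of events> <random seed>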

At NLO, such an option is not provided. However, before generating a process and outputting it, one can set the MG5_aMC option 'output_dependencies' to 'internal', typically by typing the command

MG5_aMC > set output_dependencies internal; save options;

This will force MG5_aMC to output all the sources of the extra dependencies of the NLO run (such as CutTools and StdHep) in the Source directory.
At this stage, the process directory output can be gzipped and moved around anywhere (assuming of course no absolute path was specified for lhapdf or fastjet). One should steer the execution by starting the MG5_aMC interface with the 'aMCatNLO' script *placed in the bin directory of the process output*. Notice that with this setup, the process output can in principle be compiled on a different machine than the one that generated it.
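For example (a sketch; 'PROC_NLO' stands for the actual process output directory name):

tar czf PROC_NLO.tar.gz PROC_NLO
# copy and unpack the tarball on the target machine, then
cd PROC_NLO
./bin/aMCatNLO

and compile and generate events by typing 'launch' at the resulting prompt.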

It is also possible to create the equivalent of an NLO gridpack by running the code a first time while specifying in the 'run_card.dat' that 0 events must be generated, with a required accuracy 'req_acc' equal to something like one over the square root of the total number of events one intends to generate. Notice that when generating a very large number of events (i.e. > 100M), one can afford to set up the grids with less precision than suggested by the rule of thumb I just gave.
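As a rough illustration of such a grid-setup run (the req_acc value below is just the rule-of-thumb choice for an intended ~1M events), the relevant run_card.dat entries would look like

 0     = nevents ! number of events: 0 for the grid-setup run
 0.001 = req_acc ! required accuracy, ~ 1/sqrt(1e6) for ~1M events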
Once the code has been compiled and the grids set up with the run above, one can perform only the event-generation step by setting the target number of events in the run_card and running

MG5_aMC > generate_events -x -o

from the MG5_aMC interface. The '-x' option skips the compilation and the '-o' option reuses the grids and jumps directly to event generation. Notice that one can add the option '-f' on top to skip the launch questions and reuse the current values in the cards.
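Putting the pieces together, a fully non-interactive event-generation pass on the precomputed grids (assuming the target number of events has already been set in the run_card) would simply be

MG5_aMC > generate_events -x -o -f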

Finally, MadSpin cannot be used with LO gridpacks or with the NLO setup described above. In all cases, MadSpin needs access to the MG5_aMC installation directory.
One can however generate the parton-level unweighted .lhe events with the above procedures and then decay them with MadSpin independently, in a separate run at a location that has access to the original MG5_aMC distribution.
Alternatively, one could also simply copy the whole MG5_aMC distribution (which is quite lightweight), with the process output within it, to the node that must generate events.
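As a sketch of such a separate MadSpin pass (the event file path and decay chains are placeholders, and this assumes the standalone 'madspin' executable shipped in the MadSpin directory of the MG5_aMC distribution), one would prepare an input file containing

import PROC_NLO/Events/run_01/events.lhe.gz
decay t > w+ b, w+ > e+ ve
decay t~ > w- b~, w- > e- ve~
launch

and feed it to ./MadSpin/madspin from within the MG5_aMC installation.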

Josh Bendavid (joshbendavid) said :
#2

Hi Valentin,
Thanks, indeed we already have event generation + MadSpin with packaged tarballs working for the NLO case, essentially as you described.

The question is whether it would be possible to generate events using precompiled and pre-generated grids (i.e. with -x -o) when the contents of the process directory reside on a read-only filesystem.

(Playing around with it a bit more since I asked the original question, I strongly suspect that the answer is no...)

Valentin Hirschi (valentin-hirschi) said :
#3

No, I don't think so, because MG5_aMC needs to write many files 'locally' in the process output when doing event generation.
