NLO gridpacks?
A possibly silly question: How can I generate a gridpack for an NLO process? Adding ".true. = gridpack" to the run_card.dat doesn't seem to trigger gridpack generation when calling generate_events.
Question information
- Language: English
- Status: Answered
- Assignee: Rikkert Frederix
#1
Dear Josh,
There is no special gridpack mode for NLO event generation.
However, you can easily create a "gridpack" yourself. What you have to do is the following:
1.) Generate and output the process as you normally do.
2.) When you launch the NLO event generation, set the 'nevents' parameter to zero (in the run_card) and the 'req_acc' parameter to 0.001 (also in the run_card). This skips the event generation step but still creates the grids with a 0.001 relative precision (basically enough for an infinite number of events), which can be used to generate events later on. If you want to generate fewer than 1M events in total, you can increase req_acc to 1/sqrt(number of events) so that the grid setup step is slightly faster. Once the run is done, you have a "gridpack": just tar the whole MG5_aMC directory, including the process directory.
3.) From the grids that you just generated, you can generate any small number of events in many bunches "by hand":
a.) set the nevents parameter in the run_card back to the number of events you want to generate in the bunch
b.) set the iseed parameter in the run_card to a non-trivial number
c.) execute the command:
./bin/generate_
and this will generate the small number of events from the grids created before.
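The per-bunch edits in step 3 can be scripted. Below is a minimal sketch, not part of the official tools: it assumes the standard `value = name ! comment` run_card layout quoted later in this thread, and the generation command itself is left as a comment because its name is truncated in the post above.

```python
import re

def set_run_card_param(card_text, name, value):
    """Replace the value of one parameter in an aMC@NLO run_card.

    Lines have the layout shown later in this thread:
        1000 = nevents ! Number of unweighted events requested
    i.e. the value sits to the LEFT of the '='.
    """
    pattern = re.compile(r'^(\s*)\S+(\s*=\s*%s\b)' % re.escape(name), re.M)
    new_text, count = pattern.subn(r'\g<1>%s\g<2>' % value, card_text)
    if count != 1:
        raise ValueError('parameter %r matched %d lines' % (name, count))
    return new_text

# Demo on an inline fragment (in real use, read and write Cards/run_card.dat):
sample = """1000 = nevents ! Number of unweighted events requested
0.001 = req_acc ! Required accuracy
0 = iseed ! rnd seed
"""
card = set_run_card_param(sample, 'nevents', 5000)   # step 3a
card = set_run_card_param(card, 'iseed', 12345)      # step 3b: unique per bunch
# step 3c: invoke the generation script (name truncated in the post above).
```

Giving each bunch its own iseed is what keeps the bunches statistically independent when they are later concatenated.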
Note that you probably want to run the event generation with the gridpack on a single CPU core (and not on multi-core or in cluster mode), otherwise there is not really a point in generating a gridpack in the first place ;-) . This can be changed in the Cards/amcatnlo_
Best regards,
Rikkert
#2
Hi Rikkert,
Thanks, this seems to work for the event generation.
Is there a way to do something similar for Madspin? At the moment it seems to recalculate and recompile everything each time I generate new events.
One other point: it would be extremely useful to have an equivalent of the clean4grid script provided for the LO gridpacks (or at least some instructions on which files can safely be omitted from the tarball; perhaps we can even use clean4grid directly as-is?). In general we would like to minimize the number of files which we have to untar on the grid worker nodes.
Thanks,
Josh
#3
Hi Josh,
Concerning MadSpin, we have implemented one option in this direction. If you place the following line in the madspin_card.dat:
set ms_dir PATH
then, if PATH does not exist, MadSpin creates it; if it already exists, MadSpin reuses the information from the previous run.
You can therefore create that directory at the same time as your gridpack and reuse the information.
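For concreteness, a sketch of a madspin_card.dat using this option (the decay lines are taken from the log quoted later in this thread; PATH is a directory of your choosing):

```
# madspin_card.dat (sketch)
set ms_dir ./madspin_grid   # created on the first run, reused afterwards
decay t > w+ b, w+ > all all
decay t~ > w- b~, w- > all all
launch
```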
Cheers,
Olivier
#4
Dear Josh,
There is no clean4grid file for the NLO runs.
Best,
Rikkert
#5
Hi Olivier,
When I try this with MadSpin, the first run computes/compiles everything and then succeeds in decaying the events, but the second run fails with a Python error. Full details below.
By the way, I assume that I need to generate at least a small number of events in order to initialize everything in MadSpin. I see that for my process it was using 75 events to calculate the max weight, etc. Is this number constant, or is it process- and/or decay-dependent? (I want to understand whether we can safely just generate 100 events at the compilation stage, in general, as part of our gridpack creation scripts.)
Details of the crash:
[lxplus442] /afs/cern.
No module named madgraph
No module named madgraph.
No module named madgraph
INFO: *******
* *
* W E L C O M E to M A D G R A P H 5 *
* a M C @ N L O *
* *
* * * *
* * * * * *
* * * * * 5 * * * * *
* * * * * *
* * * *
* *
* VERSION 5.2.0.1 *
* *
* The MadGraph5_aMC@NLO Development Team - Find us at *
* http://
* *
* Type 'help' for in-line help. *
* *
*******
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
Using default eps viewer "evince". Set another one in ./input/
Using default web browser "firefox". Set another one in ./input/
launch --nocompile --only_generation
INFO: Enter mode value: Go to the related mode
The following switches determine which operations are executed:
1 Perturbative order of the calculation: order=NLO
2 Fixed order (no event generation and no MC@[N]LO matching): fixed_order=OFF
3 Shower the generated events: shower=OFF
4 Decay particles with the MadSpin module: madspin=ON
Either type the switch number (1 to 4) to change its default setting,
or set any switch explicitly (e.g. type 'order=LO' at the prompt)
Type '0', 'auto', 'done' or just press enter when you are done.
[0, 1, 2, 3, 4, auto, done, order=LO, order=NLO, ... ][60s to answer]
>
INFO: will run in mode: noshower
WARNING: You have chosen not to run a parton shower. NLO events without showering are NOT physical.
Please, shower the Les Houches events before using them for physics analyses.
Do you want to edit a card (press enter to bypass editing)?
1 / param : param_card.dat
2 / run : run_card.dat
3 / madspin : madspin_card.dat
you can also
- enter the path to a valid card or banner.
- use the 'set' command to modify a parameter directly.
The set option works only for param_card and run_card.
Type 'help set' for more information on this command.
[0, done, 1, param, 2, run, 3, madspin, enter path][60s to answer]
>
INFO: Starting run
INFO: Compiling the code
INFO: For gauge cancellation, the width of 't' has been set to zero.
INFO: Starting run
INFO: Generating events without running the shower.
INFO: Updating the number of unweighted events per channel
Intermediate results:
Random seed: 39
Total cross-section: 4.527e-01 +- 7.5e-04 pb
Total abs(cross-section): 9.718e-01 +- 9.0e-04 pb
INFO: Generating events
INFO: Idle: 11, Running: 1, Completed: 0
INFO: Idle: 10, Running: 1, Completed: 1
INFO: Idle: 9, Running: 1, Completed: 2
INFO: Idle: 8, Running: 1, Completed: 3
INFO: Idle: 7, Running: 1, Completed: 4
INFO: Idle: 6, Running: 1, Completed: 5
INFO: Idle: 5, Running: 1, Completed: 6
INFO: Idle: 4, Running: 1, Completed: 7
INFO: Idle: 3, Running: 1, Completed: 8
INFO: Idle: 2, Running: 1, Completed: 9
INFO: Idle: 1, Running: 1, Completed: 10
INFO: Idle: 0, Running: 1, Completed: 11
INFO: Idle: 0, Running: 0, Completed: 12
INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 19h54 ]
INFO: Doing reweight
INFO: Idle: 11, Running: 1, Completed: 0
INFO: Idle: 10, Running: 1, Completed: 1
INFO: Idle: 9, Running: 1, Completed: 2
INFO: Idle: 8, Running: 1, Completed: 3
INFO: Idle: 7, Running: 1, Completed: 4
INFO: Idle: 6, Running: 1, Completed: 5
INFO: Idle: 5, Running: 1, Completed: 6
INFO: Idle: 4, Running: 1, Completed: 7
INFO: Idle: 3, Running: 1, Completed: 8
INFO: Idle: 2, Running: 1, Completed: 9
INFO: Idle: 1, Running: 1, Completed: 10
INFO: Idle: 0, Running: 1, Completed: 11
INFO: Idle: 0, Running: 0, Completed: 12
INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 19h54 ]
INFO: Collecting events
INFO:
Summary:
Process p p > t t~ h [QCD]
Run at p-p collider (6500 + 6500 GeV)
Total cross-section: 4.527e-01 +- 7.5e-04 pb
Number of events generated: 1000
Parton shower to be used: PYTHIA8
Fraction of negative weights: 0.27
Total running time : 2m 26s
INFO: The /afs/cern.
INFO: Events generated
decay_events -from_cards
INFO: Running MadSpin
INFO: This functionality allows for the decay of resonances
INFO: in a .lhe file, keeping track of the spin correlation effets.
INFO: BE AWARE OF THE CURRENT LIMITATIONS:
INFO: (1) Only a succession of 2 body decay are currently allowed
*******
* *
* W E L C O M E to M A D S P I N *
* *
*******
INFO: Extracting the banner ...
INFO: process: p p > t t~ h
INFO: options:
INFO: detected model: loop_sm-ckm. Loading...
WARNING: The UFO model does not include partial widths information.
Impossible to use analytical formula, will use MG5/MadEvent (slower).
set ms_dir ./madspin_grid
set max_weight_ps_point 400 # number of PS to estimate the maximum for each event
decay t > w+ b, w+ > all all
decay t~ > w- b~, w- > all all
decay w+ > all all
decay w- > all all
decay z > all all
launch
Command "launch --nocompile --only_generation" interrupted with error:
AttributeError : 'Block' object has no attribute 'name'
Please report this bug on https:/
More information is found in '/afs/cern.
Please attach this file to your report.
quit
INFO:
[lxplus442] /afs/cern.
[lxplus442] /afs/cern.
#******
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 5.2.0.1 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https:/
#* and *
#* http://
#* *
#******
#* *
#* Command File for aMCatNLO *
#* *
#* run as ./bin/aMCatNLO.py filename *
#* *
#******
launch --nocompile --only_generation
Traceback (most recent call last):
File "/afs/cern.
return self.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
self.
File "/afs/cern.
stop = Cmd.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
madspin_
File "/afs/cern.
self.
File "/afs/cern.
stop = Cmd.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
return self.run_
File "/afs/cern.
generate_all = save_load_
File "/afs/cern.
return files.read_
File "/afs/cern.
ret_value = myfunct(sock, *args)
File "/afs/cern.
return p.load()
File "/afs/cern.
dispatch[
File "/afs/cern.
list.
File "/afs/cern.
assert not obj.lhablock or obj.lhablock == self.name
AttributeError: 'Block' object has no attribute 'name'
Value of current Options:
cluster_
automatic_
exrootana
#******
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.0.1 2014-01-20 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https:/
#* *
#******
#* *
#* Command File for MadGraph5_aMC@NLO *
#* *
#* run as ./bin/mg5_aMC filename *
#* *
#******
set group_subprocesses Auto
set ignore_
set gauge unitary
set complex_mass_scheme False
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
import model loop_sm-ckm
generate p p > t t~ h [QCD]
output tthtest
#######
## PARAM_CARD AUTOMATICALY GENERATED BY MG5 FOLLOWING UFO MODEL ####
#######
## ##
## Width set on Auto will be computed following the information ##
## present in the decay.py files of the model. By default, ##
## this is only 1->2 decay modes. ##
## ##
#######
#######
## INFORMATION FOR LOOP
#######
Block loop
1 9.118800e+01 # MU_R
#######
## INFORMATION FOR MASS
#######
Block mass
5 4.700000e+00 # MB
6 1.730000e+02 # MT
15 1.777000e+00 # MTA
23 9.118800e+01 # MZ
25 1.250000e+02 # MH
## Dependent parameters, given by model restrictions.
## Those values should be edited following the
## analytical expression. MG5 ignores those values
## but they are important for interfacing the output of MG5
## to external program such as Pythia.
1 0.000000 # d : 0.0
2 0.000000 # u : 0.0
3 0.000000 # s : 0.0
4 0.000000 # c : 0.0
11 0.000000 # e- : 0.0
12 0.000000 # ve : 0.0
13 0.000000 # mu- : 0.0
14 0.000000 # vm : 0.0
16 0.000000 # vt : 0.0
21 0.000000 # g : 0.0
22 0.000000 # a : 0.0
24 80.419002 # w+ : cmath.sqrt(
82 0.000000 # gh : 0.0
#######
## INFORMATION FOR SMINPUTS
#######
Block sminputs
1 1.325070e+02 # aEWM1
2 1.166390e-05 # Gf
3 1.180000e-01 # aS
#######
## INFORMATION FOR WOLFENSTEIN
#######
Block wolfenstein
1 2.253000e-01 # lamWS
2 8.080000e-01 # AWS
3 1.320000e-01 # rhoWS
4 3.410000e-01 # etaWS
#######
## INFORMATION FOR YUKAWA
#######
Block yukawa
5 4.700000e+00 # ymb
6 1.730000e+02 # ymt
15 1.777000e+00 # ymtau
#######
## INFORMATION FOR DECAY
#######
DECAY 6 1.491500e+00 # WT
DECAY 23 2.441404e+00 # WZ
DECAY 24 2.047600e+00 # WW
DECAY 25 6.382339e-03 # WH
## Dependent parameters, given by model restrictions.
## Those values should be edited following the
## analytical expression. MG5 ignores those values
## but they are important for interfacing the output of MG5
## to external program such as Pythia.
DECAY 1 0.000000 # d : 0.0
DECAY 2 0.000000 # u : 0.0
DECAY 3 0.000000 # s : 0.0
DECAY 4 0.000000 # c : 0.0
DECAY 5 0.000000 # b : 0.0
DECAY 11 0.000000 # e- : 0.0
DECAY 12 0.000000 # ve : 0.0
DECAY 13 0.000000 # mu- : 0.0
DECAY 14 0.000000 # vm : 0.0
DECAY 15 0.000000 # ta- : 0.0
DECAY 16 0.000000 # vt : 0.0
DECAY 21 0.000000 # g : 0.0
DECAY 22 0.000000 # a : 0.0
DECAY 82 0.000000 # gh : 0.0
#======
# QUANTUM NUMBERS OF NEW STATE(S) (NON SM PDG CODE)
#======
Block QNUMBERS 82 # gh
1 0 # 3 times electric charge
2 1 # number of spin states (2S+1)
3 8 # colour rep (1: singlet, 3: triplet, 8: octet)
4 1 # Particle/
#******
# MadGraph5_aMC@NLO *
# *
# run_card.dat aMC@NLO *
# *
# This file is used to set the parameters of the run. *
# *
# Some notation/
# *
# Lines starting with a hash (#) are info or comments *
# *
# mind the format: value = variable ! comment *
#******
#
#******
# Running parameters
#******
#
#******
# Tag name for the run (one word) *
#******
tag_1 = run_tag ! name of the run
#******
# Number of events (and their normalization) and the required *
# (relative) accuracy on the Xsec. *
# These values are ignored for fixed order runs *
#******
1000 = nevents ! Number of unweighted events requested
0.001 = req_acc ! Required accuracy (-1=auto determined from nevents)
-1 = nevt_job! Max number of events per job in event generation.
! (-1= no split).
average = event_norm ! Normalize events to sum or average to the X sect.
#******
# Number of points per integration channel (ignored for aMC@NLO runs) *
#******
0.01 = req_acc_FO ! Required accuracy (-1=ignored, and use the
# These numbers are ignored except if req_acc_FO is equal to -1
5000 = npoints_FO_grid ! number of points to setup grids
4 = niters_FO_grid ! number of iter. to setup grids
10000 = npoints_FO ! number of points to compute Xsec
6 = niters_FO ! number of iter. to compute Xsec
#******
# Random number seed *
#******
0 = iseed ! rnd seed (0=assigned automatically=
#******
# Collider type and energy *
#******
1 = lpp1 ! beam 1 type (0 = no PDF)
1 = lpp2 ! beam 2 type (0 = no PDF)
6500 = ebeam1 ! beam 1 energy in GeV
6500 = ebeam2 ! beam 2 energy in GeV
#******
# PDF choice: this automatically fixes also alpha_s(MZ) and its evol. *
#******
cteq6_m = pdlabel ! PDF set
21100 = lhaid ! if pdlabel=lhapdf, this is the lhapdf number
#******
# Include the NLO Monte Carlo subtr. terms for the following parton *
# shower (HERWIG6 | HERWIGPP | PYTHIA6Q | PYTHIA6PT | PYTHIA8) *
# WARNING: PYTHIA6PT works only for processes without FSR!!!! *
#******
PYTHIA8 = parton_shower
#******
# Renormalization and factorization scales *
# (Default functional form for the non-fixed scales is the sum of *
# the transverse masses of all final state particles and partons. This *
# can be changed in SubProcesses/
#******
F = fixed_ren_scale ! if .true. use fixed ren scale
F = fixed_fac_scale ! if .true. use fixed fac scale
91.188 = muR_ref_fixed ! fixed ren reference scale
91.188 = muF1_ref_fixed ! fixed fact reference scale for pdf1
91.188 = muF2_ref_fixed ! fixed fact reference scale for pdf2
#******
# Renormalization and factorization scales (advanced and NLO options) *
#******
F = fixed_QES_scale ! if .true. use fixed Ellis-Sexton scale
91.188 = QES_ref_fixed ! fixed Ellis-Sexton reference scale
1 = muR_over_ref ! ratio of current muR over reference muR
1 = muF1_over_ref ! ratio of current muF1 over reference muF1
1 = muF2_over_ref ! ratio of current muF2 over reference muF2
1 = QES_over_ref ! ratio of current QES over reference QES
#******
# Reweight flags to get scale dependence and PDF uncertainty *
# For scale dependence: factor rw_scale_up/down around central scale *
# For PDF uncertainty: use LHAPDF with supported set *
#******
.true. = reweight_scale ! reweight to get scale dependence
0.5 = rw_Rscale_down ! lower bound for ren scale variations
2.0 = rw_Rscale_up ! upper bound for ren scale variations
0.5 = rw_Fscale_down ! lower bound for fact scale variations
2.0 = rw_Fscale_up ! upper bound for fact scale variations
.false. = reweight_PDF ! reweight to get PDF uncertainty
21101 = PDF_set_min ! First of the error PDF sets
21140 = PDF_set_max ! Last of the error PDF sets
#******
# Merging - WARNING! Applies merging only at the hard-event level. *
# After showering an MLM-type merging should be applied as well. *
# See http://
#******
0 = ickkw ! 0 no merging, 3 FxFx merging
#******
#
#******
# BW cutoff (M+/-bwcutoff*
#******
15 = bwcutoff
#******
# Cuts on the jets *
# When matching to a parton shower, these generation cuts should be *
# considerably softer than the analysis cuts. *
# (more specific cuts can be specified in SubProcesses/
#******
1 = jetalgo ! FastJet jet algorithm (1=kT, 0=C/A, -1=anti-kT)
0.7 = jetradius ! The radius parameter for the jet algorithm
10 = ptj ! Min jet transverse momentum
-1 = etaj ! Max jet abs(pseudo-rap) (a value .lt.0 means no cut)
#******
# Cuts on the charged leptons (e+, e-, mu+, mu-, tau+ and tau-) *
# (more specific gen cuts can be specified in SubProcesses/
#******
0 = ptl ! Min lepton transverse momentum
-1 = etal ! Max lepton abs(pseudo-rap) (a value .lt.0 means no cut)
0 = drll ! Min distance between opposite sign lepton pairs
30 = mll ! Min inv. mass of all oppositely charged lepton pairs
#******
# Photon-isolation cuts, according to hep-ph/9801442 *
# When ptgmin=0, all the other parameters are ignored *
#******
20 = ptgmin ! Min photon transverse momentum
-1 = etagamma ! Max photon abs(pseudo-rap)
0.4 = R0gamma ! Radius of isolation code
1.0 = xn ! n parameter of eq.(3.4) in hep-ph/9801442
1.0 = epsgamma ! epsilon_gamma parameter of eq.(3.4) in hep-ph/9801442
.true. = isoEM ! isolate photons from EM energy (photons and leptons)
#******
# maximal pdg code for quark to be considered as a jet *
#******
5 = maxjetflavor
#******
#6
I tried again in 2.0.2 btw and get the same error.
Looks like some problem saving/loading the configuration from the file?
#7
Thanks Josh,
I'm starting to investigate this; I'll keep you posted on my progress.
Cheers,
Olivier
#8
Hi Josh,
I succeeded in reproducing your problem; you will find the associated patch below (it will be part of the next release):
>By the way, I assume that I need to generate at least a small number of events in order to initialize everything in Madspin.
>I see that for my process it was using 75 events to calculate the max weight, etc. Is this number constant, or is it process and/or decay-dependent.
> (Want to understand if we can safely just generate 100 events at the compilation stage in general as part of our gridpack creation scripts.)
If you do the following
generate p p > z [QCD]
add process p p > w+ [QCD]
Then it needs 150 events (75 of each type)
Typically 75 events are enough to get a good estimate of the maximum weight required for the computation, but I personally tend to increase this number manually (to 100-150) if I have to generate a very large sample; this typically gives a better estimate of the maximum weight, which in turn speeds up the computation of the decay.
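If you want to raise that number yourself, MadSpin exposes it as a card parameter; the keyword appears truncated in a log later in this thread as `set Nevents_`, and in MadSpin versions of this era it is spelled `Nevents_for_max_weigth` (note the transposed letters). Verify the exact spelling against your own madspin_card template before relying on it:

```
# in madspin_card.dat (sketch; verify the keyword in your template)
set Nevents_for_max_weigth 150   # events used to estimate the max weight
set max_weight_ps_point 400      # PS points per event for that estimate
```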
Cheers,
Olivier
=== modified file 'models/
--- models/
+++ models/
@@ -170,6 +170,8 @@
def append(self, obj):
assert isinstance(obj, Parameter)
+ if not hasattr(self, 'name'): #can happen if loaded from pickle
+ self.__
assert not obj.lhablock or obj.lhablock == self.name
#The following line seems/is stupid but allow to pickle/unpickle this object
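The underlying issue is generic to pickling `list` subclasses: the (Python 2) unpickler creates the object without calling `__init__`, restores the list items through `append`, and only afterwards restores the instance `__dict__`, so any attribute that `append` relies on does not exist yet. A self-contained illustration of the failure mode and of the `hasattr` guard (class names here are stand-ins, not the actual MadGraph classes; the real patch lives in the truncated `models/` file above):

```python
class Parameter:
    """Stand-in for a param_card parameter."""
    def __init__(self, lhablock=None):
        self.lhablock = lhablock

class BrokenBlock(list):
    """Pre-patch behaviour: append() assumes self.name always exists."""
    def __init__(self, name=''):
        super().__init__()
        self.name = name
    def append(self, obj):
        assert not obj.lhablock or obj.lhablock == self.name
        super().append(obj)

class PatchedBlock(BrokenBlock):
    """With the guard from the patch applied."""
    def append(self, obj):
        if not hasattr(self, 'name'):       # can happen if loaded from pickle
            self.name = obj.lhablock or ''  # recover it from the item itself
        super().append(obj)

def simulate_unpickle(cls):
    # Mimic the unpickler's order of operations: no __init__ call,
    # items appended first, instance __dict__ restored last.
    obj = cls.__new__(cls)
    obj.append(Parameter('mass'))           # pre-patch: no self.name yet
    obj.__dict__.update({'name': 'mass'})   # state restored afterwards
    return obj

try:
    simulate_unpickle(BrokenBlock)
except AttributeError as err:
    print('pre-patch:', err)    # ... object has no attribute 'name'

fixed = simulate_unpickle(PatchedBlock)
print('patched:', fixed.name, len(fixed))
```

This is why the crash only appeared on the second run: only then was the Block object reconstructed from the pickle instead of being built normally.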
#9
Thanks Olivier, this indeed solves the problem!
#10
Hi Rikkert,
Although the NLO "gridpack" creation appeared to work, on further testing I have run into a serious problem.
It seems that the absolute path of the original folder used to generate the process is hardcoded in various places. The extracted tarball is able to generate events only as long as this path is still accessible. If I remove the original folder, the generation does not work any more.
I tried changing mg5_path in Cards/amcatnlo_
I still get errors as below. Can you confirm that this is a problem and/or advise all the places where I need to remove/replace the original path in order to make this work?
[lxplus410] /afs/cern.
No module named madgraph
No module named madgraph.
No module named madgraph
INFO: *******
* *
* W E L C O M E to M A D G R A P H 5 *
* a M C @ N L O *
* *
* * * *
* * * * * *
* * * * * 5 * * * * *
* * * * * *
* * * *
* *
* VERSION 5.2.0.2 *
* *
* The MadGraph5_aMC@NLO Development Team - Find us at *
* http://
* *
* Type 'help' for in-line help. *
* *
*******
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
Using default eps viewer "evince". Set another one in ./input/
Using default web browser "firefox". Set another one in ./input/
Traceback (most recent call last):
File "./bin/
launch = run.aMCatNLOCmd
File "/afs/cern.
self.
File "/afs/cern.
self.output()
File "/afs/cern.
old_run += self[key]
File "/afs/cern.
dico[
File "/afs/cern.
local_
File "/afs/cern.
out += self.special_
UnboundLocalError: local variable 'link' referenced before assignment
#11
Hi Josh,
This looks to me like a problem that Paolo faced recently as well and that I just fixed in the development version.
Paolo, can you confirm that this is the same problem, and that it should then indeed work in the upcoming version 2.1.0?
Cheers,
Olivier
#12
Hi all,
> Paolo can you confirm if this is the same problem? and then that it should indeed work with the future version 2.1.0?
Yes, it seems to be the very same problem, so it should be fixed now.
Cheers.
Paolo
#13
Hi Olivier,
Testing with 2.1.0, things are indeed working properly now, as long as I don't use MadSpin.
For Madspin it seems there are still a few places where the absolute path is being persisted inside the pickle file. When I try to run from a tarball with the original folder moved away (and mg5_path properly reset) I get an error as below.
Thanks,
Josh
[lxplus414] /afs/cern.
No module named madgraph
No module named madgraph.
No module named madgraph
INFO: *******
* *
* W E L C O M E to M A D G R A P H 5 *
* a M C @ N L O *
* *
* * * *
* * * * * *
* * * * * 5 * * * * *
* * * * * *
* * * *
* *
* VERSION 5.2.1.0 *
* *
* The MadGraph5_aMC@NLO Development Team - Find us at *
* http://
* *
* Type 'help' for in-line help. *
* *
*******
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
INFO: load configuration from /afs/cern.
Using default eps viewer "evince". Set another one in ./input/
Using default web browser "firefox". Set another one in ./input/
launch -fox -n testrun
INFO: Enter mode value: Go to the related mode
INFO: will run in mode: noshower
WARNING: You have chosen not to run a parton shower. NLO events without showering are NOT physical.
Please, shower the Les Houches events before using them for physics analyses.
INFO: Starting run
INFO: Compiling the code
INFO: Starting run
INFO: Generating events without running the shower.
INFO: Updating the number of unweighted events per channel
Intermediate results:
Random seed: 34
Total cross-section: 1.019e+05 +- 1.1e+02 pb
Total abs(cross-section): 1.158e+05 +- 8.9e+01 pb
INFO: Generating events
INFO: Idle: 5, Running: 1, Completed: 0
INFO: Idle: 4, Running: 1, Completed: 1
INFO: Idle: 3, Running: 1, Completed: 2
INFO: Idle: 2, Running: 1, Completed: 3
INFO: Idle: 1, Running: 1, Completed: 4
INFO: Idle: 0, Running: 1, Completed: 5
INFO: Idle: 0, Running: 0, Completed: 6
INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 01h00 ]
INFO: Doing reweight
INFO: Idle: 5, Running: 1, Completed: 0
INFO: Idle: 4, Running: 1, Completed: 1
INFO: Idle: 3, Running: 1, Completed: 2
INFO: Idle: 2, Running: 1, Completed: 3
INFO: Idle: 1, Running: 1, Completed: 4
INFO: Idle: 0, Running: 1, Completed: 5
INFO: Idle: 0, Running: 0, Completed: 6
INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 01h01 ]
INFO: Collecting events
INFO:
Summary:
Process p p > w+ [QCD]
Run at p-p collider (6500 + 6500 GeV)
Total cross-section: 1.019e+05 +- 1.1e+02 pb
Number of events generated: 1000
Parton shower to be used: PYTHIA8
Fraction of negative weights: 0.06
Total running time : 56s
INFO: The /afs/cern.
INFO: Events generated
decay_events -from_cards
INFO: Running MadSpin
INFO: This functionality allows for the decay of resonances
INFO: in a .lhe file, keeping track of the spin correlation effets.
INFO: BE AWARE OF THE CURRENT LIMITATIONS:
INFO: (1) Only a succession of 2 body decay are currently allowed
*******
* *
* W E L C O M E to M A D S P I N *
* *
*******
INFO: Extracting the banner ...
INFO: process: p p > w+
INFO: options:
INFO: detected model: loop_sm-
WARNING: The UFO model does not include partial widths information.
Impossible to use analytical formula, will use MG5/MadEvent (slower).
set ms_dir ./madspingrid
set Nevents_
set max_weight_ps_point 400 # number of PS to estimate the maximum for each event
decay t > w+ all, w+ > all all
t > all w+ , w+ > all all
decay t~ > w- all, w- > all all
t~ > all w- , w- > all all
decay w+ > all all
w+ > all all
decay w- > all all
w- > all all
decay z > all all
z > all all
launch
INFO: MadSpin: Decaying Events
INFO:
INFO: Decaying the events...
Command "launch -fox -n testrun" interrupted with error:
IOError : [Errno 2] No such file or directory: '/afs/cern.
Please report this bug on https:/
More information is found in '/afs/cern.
Please attach this file to your report.
quit
INFO:
[lxplus414] /afs/cern.
#******
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 5.2.1.0 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https:/
#* and *
#* http://
#* *
#******
#* *
#* Command File for aMCatNLO *
#* *
#* run as ./bin/aMCatNLO.py filename *
#* *
#******
launch -fox -n testrun
Traceback (most recent call last):
File "/afs/cern.
return self.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
self.
File "/afs/cern.
stop = Cmd.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
madspin_
File "/afs/cern.
self.
File "/afs/cern.
stop = Cmd.onecmd_
File "/afs/cern.
return func(arg, **opt)
File "/afs/cern.
return self.run_
File "/afs/cern.
generate_
File "/afs/cern.
efficiency = self.decaying_
File "/afs/cern.
self.outputfile = open(pjoin(
IOError: [Errno 2] No such file or directory: '/afs/cern.
Value of current Options:
web_browser : None
text_editor : None
cluster_temp_path : None
syscalc_path : None
cluster_queue : madgraph
madanalysis_path : None
lhapdf : /afs/cern.
mg5_path : /afs/cern.
cluster_memory : None
cluster_
cluster_time : None
hepmc_path : None
pythia8_path : None
hwpp_path : None
automatic_
cluster_retry_wait : 300
stdout_level : None
pythia-pgs_path : None
td_path : None
delphes_path : None
thepeg_path : None
cluster_type : condor
exrootanalysis_path : None
fortran_compiler : None
auto_update : 0
cluster_nb_retry : 1
eps_viewer : None
timeout : 60
nb_core : 16
run_mode : 0
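The failure above comes from absolute paths baked into MadSpin's pickled state. As a quick diagnostic for this kind of problem, one can scan a pickle's opcode stream for stored strings that look like absolute paths, without unpickling it (and thus without needing the original classes importable). This is a generic sketch, not part of MadGraph; the demo path and dict layout are made up:

```python
import pickle
import pickletools
import re

def find_absolute_paths(pickled_bytes):
    """Return the distinct absolute-path-looking strings stored in a pickle.

    Walks the raw opcode stream, so it works even when the pickled
    classes cannot be imported in the current environment.
    """
    paths = set()
    for _opcode, arg, _pos in pickletools.genops(pickled_bytes):
        if isinstance(arg, str) and re.match(r"^/\w", arg):
            paths.add(arg)
    return sorted(paths)

# Toy stand-in for a saved MadSpin configuration (layout is illustrative).
state = {"ms_dir": "/afs/example/proc/madspingrid", "nevents": 100}
blob = pickle.dumps(state)
print(find_absolute_paths(blob))  # → ['/afs/example/proc/madspingrid']
```

Running this against a real `madspin.pkl` would list every stored path that will dangle once the directory is moved.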
Revision history for this message
|
#14 |
Hi Josh,
Indeed, I had not thought of the MadSpin directory being moved as well.
Here is the patch to make it work; it will be part of 2.1.1:
Thanks a lot for your help,
Olivier
=== modified file 'MadSpin/
--- MadSpin/
+++ MadSpin/
@@ -509,6 +509,13 @@
+ if generate_
+ for decay in generate_
+ decay['path'] = decay['
+ for decay2 in decay['decays']:
+ decay2['path'] = decay2[
+ generate_
+ generate_all.ms_dir = generate_
if not hasattr(
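The patch above follows a general pattern for making pickled state relocatable: after loading, rewrite every stored absolute path so it hangs off the current base directory instead of the one recorded at save time. A minimal generic sketch of that idea (the nested-dict layout and names are illustrative, not MadSpin's actual data structures):

```python
def relocate_paths(obj, old_base, new_base):
    """Recursively rewrite absolute paths stored under old_base so loaded
    state keeps working after the directory has been moved to new_base.
    The layout below is purely illustrative, mimicking per-decay 'path'
    entries rewritten by the patch."""
    if isinstance(obj, str) and obj.startswith(old_base):
        return new_base + obj[len(old_base):]
    if isinstance(obj, dict):
        return {k: relocate_paths(v, old_base, new_base) for k, v in obj.items()}
    if isinstance(obj, list):
        return [relocate_paths(v, old_base, new_base) for v in obj]
    return obj

# demo: state as it might come out of a pickle saved in the old location
state = {"ms_dir": "/old/proc/madspingrid",
         "decays": [{"path": "/old/proc/madspingrid/decay1"}]}
state = relocate_paths(state, "/old/proc", "/new/location")
print(state["ms_dir"])  # → /new/location/madspingrid
```

The actual patch does the same prefix swap, but only on the specific attributes MadSpin stores (`decay['path']` entries and the `ms_dir`).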
On Feb 25, 2014, at 12:16 AM, Josh Bendavid <email address hidden> wrote:
> Question #243268 on MadGraph5_aMC@NLO changed:
> https:/
>
> Status: Answered => Open
>
> Josh Bendavid is still having a problem:
> Hi Olivier,
> Testing with 2.1.0 and things are indeed working properly now as long as I don't use MadSpin.
>
> For Madspin it seems there are still a few places where the absolute
> path is being persisted inside the pickle file. When I try to run from
> a tarball with the original folder moved away (and mg5_path properly
> reset) I get an error as below.
>
> Thanks,
> Josh
>
> [lxplus414] /afs/cern.
> No module named madgraph
> No module named madgraph.
> No module named madgraph
> INFO: *******
> * *
> * W E L C O M E to M A D G R A P H 5 *
> * a M C @ N L O *
> * *
> * * * *
> * * * * * *
> * * * * * 5 * * * * *
> * * * * * *
> * * * *
> * *
> * VERSION 5.2.1.0 *
> * *
> * The MadGraph5_aMC@NLO Development Team - Find us at *
> * http://
> * *
> * Type 'help' for in-line help. *
> * *
> *******
> INFO: load configuration from /afs/cern.
> INFO: load configuration from /afs/cern.
> INFO: load configuration from /afs/cern.
> Using default eps viewer "evince". Set another one in ./input/
> Using default web browser "firefox". Set another one in ./input/
> launch -fox -n testrun
> INFO: Enter mode value: Go to the related mode
> INFO: will run in mode: noshower
> WARNING: You have chosen not to run a parton shower. NLO events without showering are NOT physical.
> Please, shower the Les Houches events before using them for physics analyses.
> INFO: Starting run
> INFO: Compiling the code
> INFO: Starting run
> INFO: Generating events without running the shower.
> INFO: Updating the number of unweighted events per channel
>
> Intermediate results:
> Random seed: 34
> Total cross-section: 1.019e+05 +- 1.1e+02 pb
> Total abs(cross-section): 1.158e+05 +- 8.9e+01 pb
>
>
> INFO: Generating events
> INFO: Idle: 5, Running: 1, Completed: 0
> INFO: Idle: 4, Running: 1, Completed: 1
> INFO: Idle: 3, Running: 1, Completed: 2
> INFO: Idle: 2, Running: 1, Completed: 3
> INFO: Idle: 1, Running: 1, Completed: 4
> INFO: Idle: 0, Running: 1, Completed: 5
> INFO: Idle: 0, Running: 0, Completed: 6
> INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 01h00 ]
> INFO: Doing reweight
> INFO: Idle: 5, Running: 1, Completed: 0
> INFO: Idle: 4, Running: 1, Completed: 1
> INFO: Idle: 3, Running: 1, Completed: 2
> INFO: Idle: 2, Running: 1, Completed: 3
> INFO: Idle: 1, Running: 1, Completed: 4
> INFO: Idle: 0, Running: 1, Completed: 5
> INFO: Idle: 0, Running: 0, Completed: 6
> INFO: Idle: 0, Running: 0, Completed: 0 [ current time: 01h01 ]
> INFO: Collecting events
> INFO:
> Summary:
> Process p p > w+ [QCD]
> Run at p-p collider (6500 + 6500 GeV)
> Total cross-section: 1.019e+05 +- 1.1e+02 pb
> Number of events generated: 1000
> Parton shower to be used: PYTHIA8
> Fraction of negative weights: 0.06
> Total running time : 56s
>
> INFO: The /afs/cern.
>
> INFO: Events generated
> [...remainder of the quoted log snipped: identical to the MadSpin log and traceback already shown in message #13 above...]
> Value of current Options:
> web_browser : None
> text_editor : None
> cluster_temp_path : None
> syscalc_path : None
> cluster_queue : madgraph
> madanalysis_path : None
> lhapdf : /afs/cern.
> mg5_path : /afs/cern.
> cluster_memory : None
> cluster_
> cluster_time : None
> hepmc_path : None
> pythia8_path : None
> hwpp_path : None
> automatic_
> cluster_retry_wait : 300
> stdout_level : None
> pythia-pgs_path : None
> td_path : None
> delphes_path : None
> thepeg_path : None
> cluster_type : condor
> exrootanalysis_path : None
> fortran_compiler : None
> auto_update : 0
> cluster_nb_retry : 1
> eps_viewer : None
> timeout : 60
> nb_core : 16
> run_mode : 0
Revision history for this message
|
#15 |
Thanks.
With the patch this is now working as expected.
Will let you know once we've tried actual grid production.
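For the grid production mentioned here, the per-bunch recipe from the answer above (reset `nevents`, pick a fresh `iseed`, rerun `./bin/generate_events`) can be scripted. The sketch below only demonstrates the run_card bookkeeping on a mock card; the actual `generate_events` call is commented out since it needs a full process directory, and the directory name and event counts are assumptions:

```shell
#!/bin/sh
# Edit a (mock) run_card the way one would before each gridpack bunch:
# give every bunch a unique random seed and a small event count.
workdir=$(mktemp -d)
card="$workdir/run_card.dat"
printf '  1000 = nevents ! Number of unweighted events requested\n' >  "$card"
printf '    33 = iseed   ! Random seed\n'                           >> "$card"

for seed in 101 102 103; do
  # rewrite the numeric value, keeping indentation and trailing comment
  sed -i "s/^\( *\)[0-9][0-9]* = iseed/\1$seed = iseed/"   "$card"
  sed -i "s/^\( *\)[0-9][0-9]* = nevents/\1500 = nevents/" "$card"
  # (cd PROC_DIR && ./bin/generate_events -f -n bunch_$seed)  # real run
done
grep ' = iseed' "$card"   # last seed written: 103
```

Note that `sed -i` as written assumes GNU sed; on BSD/macOS use `sed -i ''`.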