Command "launch auto --only_generation" interrupted with error: ValueError: max() arg is an empty sequence

Asked by Bajarang

Dear Experts,

I am trying to generate p p > w+ [QCD] in MG5_aMC_v2_5_5
and to produce the grid files with ApplGrid 1.4.70 interfaced with amcfast-01-03-00.

I am following the steps given here to produce the grids:
http://amcfast.hepforge.org/instructions.html

For the first preconditioning step, I set fixed_order=ON and iappl=1, passed a proper lhapdf, and things went smoothly with no errors.
I did not change anything in FO_analyse_card.dat.
The iterations set for grid formation are the default ones:
 5000 = npoints_FO_grid ! number of points to setup grids
 4 = niters_FO_grid ! number of iter. to setup grids
 10000 = npoints_FO ! number of points to compute Xsec
 6 = niters_FO ! number of iter. to compute Xsec

The problem occurs in the second step, where the grids are expected to be filled.
After >launch -o
and changing iappl=2 in the run_card, I get this error:
Command "launch auto --only_generation" interrupted with error:
ValueError : max() arg is an empty sequence
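
For reference, the only run_card change between the two launches is the iappl switch (annotations mine, paraphrasing the instructions page):

```
 1 = iappl ! aMCfast switch: step 1, prepare the grids
 2 = iappl ! aMCfast switch: step 2, fill the grids (relaunch with launch -o)
```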

Here is the full log file:

===========================================================================================

#************************************************************
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.5.5 2017-05-26 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* and *
#* http://amcatnlo.cern.ch *
#* *
#************************************************************
#* *
#* Command File for aMCatNLO *
#* *
#* run as ./bin/aMCatNLO.py filename *
#* *
#************************************************************
launch auto --only_generation
Traceback (most recent call last):
  File "/home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/madgraph/interface/extended_cmd.py", line 1430, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/madgraph/interface/extended_cmd.py", line 1384, in onecmd_orig
    return func(arg, **opt)
  File "/home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/madgraph/interface/amcatnlo_run_interface.py", line 1227, in do_launch
    evt_file = self.run(mode, options)
  File "/home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/madgraph/interface/amcatnlo_run_interface.py", line 1372, in run
    self.applgrid_distribute(options,mode_dict[mode],p_dirs)
  File "/home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/madgraph/interface/amcatnlo_run_interface.py", line 2374, in applgrid_distribute
    max(time_stamps.iterkeys(), key=(lambda key:
ValueError: max() arg is an empty sequence
Value of current Options:
              text_editor : None
      notification_center : True
                    pjfry : None
       cluster_local_path : None
       group_subprocesses : Auto
ignore_six_quark_processes : False
    loop_optimized_output : True
    cluster_status_update : (600, 30)
         fortran_compiler : None
               hepmc_path : None
                  collier : ./HEPTools/lib
              auto_update : 7
             pythia8_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/HEPTools/pythia8
                hwpp_path : None
low_mem_multicore_nlo_generation : False
                    golem : None
          pythia-pgs_path : None
                  td_path : None
             delphes_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/Delphes
              thepeg_path : None
             cluster_type : condor
        madanalysis5_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/HEPTools/madanalysis5/madanalysis5
      exrootanalysis_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/ExRootAnalysis
                      OLP : MadLoop
                 applgrid : applgrid-config
               eps_viewer : None
                  fastjet : None
                 run_mode : 2
              web_browser : None
   automatic_html_opening : False
        cluster_temp_path : None
             cluster_size : 100
            cluster_queue : None
             syscalc_path : None
         madanalysis_path : None
                   lhapdf : /usr/local/bin/lhapdf-config
             stdout_level : 20
                  nb_core : 4
            f2py_compiler : None
                    ninja : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/HEPTools/lib
                  amcfast : amcfast-config
       cluster_retry_wait : 300
      output_dependencies : external
           crash_on_error : False
mg5amc_py8_interface_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5/HEPTools/MG5aMC_PY8_interface
         loop_color_flows : False
                  samurai : None
         cluster_nb_retry : 1
                 mg5_path : /home/rohit/HepSoft/MADGraph/MG5_aMC_v2_5_5
                  timeout : 60
                    gauge : unitary
      complex_mass_scheme : False
             cpp_compiler : None
   max_npoint_for_channel : 0
#************************************************************
#* MadGraph5_aMC@NLO *
#* *
#* * * *
#* * * * * *
#* * * * * 5 * * * * *
#* * * * * *
#* * * *
#* *
#* *
#* VERSION 2.5.5 2017-05-26 *
#* *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* *
#************************************************************
#* *
#* Command File for MadGraph5_aMC@NLO *
#* *
#* run as ./bin/mg5_aMC filename *
#* *
#************************************************************
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
import model loop_sm
generate p p > w+ [QCD]
output pp_wplus_qcd
######################################################################
## PARAM_CARD AUTOMATICALY GENERATED BY MG5 FOLLOWING UFO MODEL ####
######################################################################
## ##
## Width set on Auto will be computed following the information ##
## present in the decay.py files of the model. ##
## See arXiv:1402.1178 for more details. ##
## ##
######################################################################

###################################
## INFORMATION FOR LOOP
###################################
Block loop
    1 9.118800e+01 # MU_R

###################################
## INFORMATION FOR MASS
###################################
Block mass
    5 4.700000e+00 # MB
    6 1.730000e+02 # MT
   15 1.777000e+00 # MTA
   23 9.118800e+01 # MZ
   25 1.250000e+02 # MH
## Dependent parameters, given by model restrictions.
## Those values should be edited following the
## analytical expression. MG5 ignores those values
## but they are important for interfacing the output of MG5
## to external program such as Pythia.
  1 0.000000 # d : 0.0
  2 0.000000 # u : 0.0
  3 0.000000 # s : 0.0
  4 0.000000 # c : 0.0
  11 0.000000 # e- : 0.0
  12 0.000000 # ve : 0.0
  13 0.000000 # mu- : 0.0
  14 0.000000 # vm : 0.0
  16 0.000000 # vt : 0.0
  21 0.000000 # g : 0.0
  22 0.000000 # a : 0.0
  24 80.419002 # w+ : cmath.sqrt(MZ__exp__2/2. + cmath.sqrt(MZ__exp__4/4. - (aEW*cmath.pi*MZ__exp__2)/(Gf*sqrt__2)))

###################################
## INFORMATION FOR SMINPUTS
###################################
Block sminputs
    1 1.325070e+02 # aEWM1
    2 1.166390e-05 # Gf
    3 1.180000e-01 # aS

###################################
## INFORMATION FOR YUKAWA
###################################
Block yukawa
    5 4.700000e+00 # ymb
    6 1.730000e+02 # ymt
   15 1.777000e+00 # ymtau

###################################
## INFORMATION FOR DECAY
###################################
DECAY 6 1.491500e+00 # WT
DECAY 23 2.441404e+00 # WZ
DECAY 24 2.047600e+00 # WW
DECAY 25 6.382339e-03 # WH
## Dependent parameters, given by model restrictions.
## Those values should be edited following the
## analytical expression. MG5 ignores those values
## but they are important for interfacing the output of MG5
## to external program such as Pythia.
DECAY 1 0.000000 # d : 0.0
DECAY 2 0.000000 # u : 0.0
DECAY 3 0.000000 # s : 0.0
DECAY 4 0.000000 # c : 0.0
DECAY 5 0.000000 # b : 0.0
DECAY 11 0.000000 # e- : 0.0
DECAY 12 0.000000 # ve : 0.0
DECAY 13 0.000000 # mu- : 0.0
DECAY 14 0.000000 # vm : 0.0
DECAY 15 0.000000 # ta- : 0.0
DECAY 16 0.000000 # vt : 0.0
DECAY 21 0.000000 # g : 0.0
DECAY 22 0.000000 # a : 0.0
#***********************************************************************
# MadGraph5_aMC@NLO *
# *
# run_card.dat aMC@NLO *
# *
# This file is used to set the parameters of the run. *
# *
# Some notation/conventions: *
# *
# Lines starting with a hash (#) are info or comments *
# *
# mind the format: value = variable ! comment *
# *
# Some of the values of variables can be list. These can either be *
# comma or space separated. *
#***********************************************************************
#
#*******************
# Running parameters
#*******************
#
#***********************************************************************
# Tag name for the run (one word) *
#***********************************************************************
  tag_1 = run_tag ! name of the run
#***********************************************************************
# Number of LHE events (and their normalization) and the required *
# (relative) accuracy on the Xsec. *
# These values are ignored for fixed order runs *
#***********************************************************************
 10000 = nevents ! Number of unweighted events requested
 -1.0 = req_acc ! Required accuracy (-1=auto determined from nevents)
 -1 = nevt_job! Max number of events per job in event generation.
                 ! (-1= no split).
#***********************************************************************
# Normalize the weights of LHE events such that they sum or average to *
# the total cross section *
#***********************************************************************
 average = event_norm ! average or sum
#***********************************************************************
# Number of points per itegration channel (ignored for aMC@NLO runs) *
#***********************************************************************
 0.01 = req_acc_FO ! Required accuracy (-1=ignored, and use the
                     ! number of points and iter. below)
# These numbers are ignored except if req_acc_FO is equal to -1
 5000 = npoints_FO_grid ! number of points to setup grids
 4 = niters_FO_grid ! number of iter. to setup grids
 10000 = npoints_FO ! number of points to compute Xsec
 6 = niters_FO ! number of iter. to compute Xsec
#***********************************************************************
# Random number seed *
#***********************************************************************
 0 = iseed ! rnd seed (0=assigned automatically=default))
#***********************************************************************
# Collider type and energy *
#***********************************************************************
 1 = lpp1 ! beam 1 type (0 = no PDF)
 1 = lpp2 ! beam 2 type (0 = no PDF)
 6500.0 = ebeam1 ! beam 1 energy in GeV
 6500.0 = ebeam2 ! beam 2 energy in GeV
#***********************************************************************
# PDF choice: this automatically fixes also alpha_s(MZ) and its evol. *
#***********************************************************************
 lhapdf = pdlabel ! PDF set
 11000 = lhaid ! If pdlabel=lhapdf, this is the lhapdf number. Only
              ! numbers for central PDF sets are allowed. Can be a list;
              ! PDF sets beyond the first are included via reweighting.
#***********************************************************************
# Include the NLO Monte Carlo subtr. terms for the following parton *
# shower (HERWIG6 | HERWIGPP | PYTHIA6Q | PYTHIA6PT | PYTHIA8) *
# WARNING: PYTHIA6PT works only for processes without FSR!!!! *
#***********************************************************************
  HERWIG6 = parton_shower
  1.0 = shower_scale_factor ! multiply default shower starting
                                  ! scale by this factor
#***********************************************************************
# Renormalization and factorization scales *
# (Default functional form for the non-fixed scales is the sum of *
# the transverse masses divided by two of all final state particles *
# and partons. This can be changed in SubProcesses/set_scales.f or via *
# dynamical_scale_choice option) *
#***********************************************************************
 False = fixed_ren_scale ! if .true. use fixed ren scale
 False = fixed_fac_scale ! if .true. use fixed fac scale
 91.118 = muR_ref_fixed ! fixed ren reference scale
 91.118 = muF_ref_fixed ! fixed fact reference scale
 -1 = dynamical_scale_choice ! Choose one (or more) of the predefined
           ! dynamical choices. Can be a list; scale choices beyond the
           ! first are included via reweighting
 1.0 = muR_over_ref ! ratio of current muR over reference muR
 1.0 = muF_over_ref ! ratio of current muF over reference muF
#***********************************************************************
# Reweight variables for scale dependence and PDF uncertainty *
#***********************************************************************
 1.0, 2.0, 0.5 = rw_rscale ! muR factors to be included by reweighting
 1.0, 2.0, 0.5 = rw_fscale ! muF factors to be included by reweighting
 True = reweight_scale ! Reweight to get scale variation using the
            ! rw_rscale and rw_fscale factors. Should be a list of
            ! booleans of equal length to dynamical_scale_choice to
            ! specify for which choice to include scale dependence.
 False = reweight_PDF ! Reweight to get PDF uncertainty. Should be a
            ! list booleans of equal length to lhaid to specify for
            ! which PDF set to include the uncertainties.
#***********************************************************************
# Store reweight information in the LHE file for off-line model- *
# parameter reweighting at NLO+PS accuracy *
#***********************************************************************
 False = store_rwgt_info ! Store info for reweighting in LHE file
#***********************************************************************
# ickkw parameter: *
# 0: No merging *
# 3: FxFx Merging - WARNING! Applies merging only at the hard-event *
# level. After showering an MLM-type merging should be applied as *
# well. See http://amcatnlo.cern.ch/FxFx_merging.htm for details. *
# 4: UNLOPS merging (with pythia8 only). No interface from within *
# MG5_aMC available, but available in Pythia8. *
# -1: NNLL+NLO jet-veto computation. See arxiv:1412.8408 [hep-ph]. *
#***********************************************************************
 0 = ickkw
#***********************************************************************
#
#***********************************************************************
# BW cutoff (M+/-bwcutoff*Gamma). Determines which resonances are *
# written in the LHE event file *
#***********************************************************************
 15.0 = bwcutoff
#***********************************************************************
# Cuts on the jets. Jet clustering is performed by FastJet. *
# - When matching to a parton shower, these generation cuts should be *
# considerably softer than the analysis cuts. *
# - More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
  1.0 = jetalgo ! FastJet jet algorithm (1=kT, 0=C/A, -1=anti-kT)
  0.7 = jetradius ! The radius parameter for the jet algorithm
 10.0 = ptj ! Min jet transverse momentum
 -1.0 = etaj ! Max jet abs(pseudo-rap) (a value .lt.0 means no cut)
#***********************************************************************
# Cuts on the charged leptons (e+, e-, mu+, mu-, tau+ and tau-) *
# More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
  0.0 = ptl ! Min lepton transverse momentum
 -1.0 = etal ! Max lepton abs(pseudo-rap) (a value .lt.0 means no cut)
  0.0 = drll ! Min distance between opposite sign lepton pairs
  0.0 = drll_sf ! Min distance between opp. sign same-flavor lepton pairs
  0.0 = mll ! Min inv. mass of all opposite sign lepton pairs
 30.0 = mll_sf ! Min inv. mass of all opp. sign same-flavor lepton pairs
#***********************************************************************
# Photon-isolation cuts, according to hep-ph/9801442. When ptgmin=0, *
# all the other parameters are ignored. *
# More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
 20.0 = ptgmin ! Min photon transverse momentum
 -1.0 = etagamma ! Max photon abs(pseudo-rap)
  0.4 = R0gamma ! Radius of isolation code
  1.0 = xn ! n parameter of eq.(3.4) in hep-ph/9801442
  1.0 = epsgamma ! epsilon_gamma parameter of eq.(3.4) in hep-ph/9801442
 True = isoEM ! isolate photons from EM energy (photons and leptons)
#***********************************************************************
# For aMCfast+APPLGRID use in PDF fitting (http://amcfast.hepforge.org)*
#***********************************************************************
 2 = iappl ! aMCfast switch (0=OFF, 1=prepare grids, 2=fill grids)
#***********************************************************************

===========================================================================================
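
As context for the traceback above (my own aside, not part of the original report): Python's max() raises exactly this ValueError when given an empty iterable, so the time_stamps dictionary in applgrid_distribute must have stayed empty, i.e. no grid files were found to distribute. A minimal sketch of the failure mode:

```python
# Sketch of the failure in applgrid_distribute: the code does roughly
# max(time_stamps.iterkeys(), key=...) over the timestamps of grid files.
# If no aMCfast grid files were produced, the dict is empty and max() raises.
time_stamps = {}  # no grid files found -> empty

try:
    newest = max(time_stamps.keys(), key=lambda k: time_stamps[k])
except ValueError as err:
    print(err)  # -> max() arg is an empty sequence

# In Python 3 a default value avoids the exception entirely:
newest = max(time_stamps, key=time_stamps.get, default=None)
print(newest)  # -> None
```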

Can you please help me with this? Thanks in advance.

Regards,
Bajrang

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: Valerio Bertone
Solved by: Valerio Bertone
Rikkert Frederix (frederix) said :
#1

Dear Bajrang,

As written on the http://amcfast.hepforge.org/instructions.html webpage, you have to use a topdrawer or root analysis. The MG5_aMC default is the HwU format, which is not compatible with applgrid/aMCfast. Hence, you have to write a topdrawer or root analysis and specify it in the FO_analyse_card. There are several example analyses available in the FixedOrderAnalysis directory that should help you write your own.

best,
Rikkert

Bajarang (bajjubaba) said :
#2

Dear Rikkert,

Thanks for your reply. I tried implementing your suggestion with root as well as topdrawer, but I am still getting errors.
I used the four combinations [1], [2], [3] and [4] below for root, and [5] for topdrawer, inside FO_analyse_card.dat:

******************************************************************************************
[1]
FO_ANALYSIS_FORMAT = root
FO_EXTRALIBS =
FO_EXTRAPATHS =
FO_INCLUDEPATHS =
FO_ANALYSE = analysis_root_pp_V.o
******************************************************************************************
[2]
FO_ANALYSIS_FORMAT = root
FO_EXTRALIBS = Core Cint Hist Matrix MathCore RIO dl Thread
FO_EXTRAPATHS =/home/rohit/Downloads/root-6.06.06/lib
FO_INCLUDEPATHS =/home/rohit/Downloads/root-6.06.06/include
FO_ANALYSE = analysis_root_pp_V.o
******************************************************************************************
[3]
FO_ANALYSIS_FORMAT = root
FO_EXTRALIBS =
FO_EXTRAPATHS =/home/rohit/Downloads/root-6.06.06/lib
FO_INCLUDEPATHS =/home/rohit/Downloads/root-6.06.06/include
FO_ANALYSE = analysis_root_pp_V.o
******************************************************************************************
[4]
FO_ANALYSIS_FORMAT = root
FO_EXTRALIBS = Core Cint Hist Matrix MathCore RIO dl Thread
FO_EXTRAPATHS =/home/rohit/Downloads/root-6.06.06/lib
FO_INCLUDEPATHS =/home/rohit/Downloads/root-6.06.06/include
FO_ANALYSE = analysis_root_template.o
******************************************************************************************
[5]
FO_ANALYSIS_FORMAT =topdrawer
FO_EXTRALIBS =
FO_EXTRAPATHS =
FO_INCLUDEPATHS =
FO_ANALYSE = analysis_td_pp_V.o
******************************************************************************************

but every time I get the same error for root:
---------------------------------------------------------------------------------------------------------------------------------------------------
INFO: Starting run
INFO: Using 4 cores
INFO: Cleaning previous results
INFO: Doing fixed order NLO
INFO: Setting up grids
WARNING: program /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp/ajob1 1 all 0 0 launch ends with non zero status: 127. Stop all computation
INFO: Idle: 0, Running: 0, Completed: 2 [ current time: 12h41 ]
Command "launch auto " interrupted with error:
Exception : program /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp/ajob1 1 all 0 0 launch ends with non zero status: 127. Stop all computation
Please report this bug on https://bugs.launchpad.net/madgraph5
More information is found in '/home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_root/run_01_tag_1_debug.log'.
Please attach this file to your report.

---------------------------------------------------------------------------------------------------------------------------------------------------

And this error for topdrawer:
---------------------------------------------------------------------------------------------------------------------------------------------------
Starting run
INFO: Using 4 cores
INFO: Cleaning previous results
INFO: Doing fixed order NLO
INFO: Setting up grids
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 11h34 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 0.47s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 0.58s ]
INFO:
      Results after grid setup:
      Total cross-section: 1.044e+05 +- 1.2e+03 pb

INFO: Refining results, step 1
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 11h34 ]
/home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/ajob1: line 34: 3872 Killed ../madevent_mintFO > log.txt < input_app.txt 2>&1
/home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/ajob1: line 34: 3867 Killed ../madevent_mintFO > log.txt < input_app.txt 2>&1
WARNING: program /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/ajob1 1 all 0 1 launch ends with non zero status: 137. Stop all computation
WARNING: program /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/ajob1 1 all 0 1 launch ends with non zero status: 137. Stop all computation
INFO: Idle: 0, Running: 0, Completed: 2 [ 5m 19s ]
Command "launch auto " interrupted with error:
Exception : program /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/ajob1 1 all 0 1 launch ends with non zero status: 137. Stop all computation
Please report this bug on https://bugs.launchpad.net/madgraph5
More information is found in '/home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/run_01_tag_1_debug.log'.
Please attach this file to your report.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
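
A side note on the two exit statuses above (my interpretation, not from the thread): by POSIX convention, status 127 means the shell could not find the program or a required shared library failed to load (consistent with a ROOT linking problem), while 137 = 128 + 9 means the process was terminated by SIGKILL, often by the kernel's out-of-memory killer. A quick sketch:

```python
# POSIX conventions for the two exit statuses seen in the logs above:
#   127     -> command not found / loader failure
#   128 + N -> terminated by signal N; 137 = 128 + 9 (SIGKILL)
import signal
import subprocess

# Running a non-existent program through the shell yields 127:
proc = subprocess.run(["/bin/sh", "-c", "definitely_not_a_real_program_xyz"],
                      capture_output=True)
print(proc.returncode)  # -> 127

# A SIGKILL-ed process is reported as 128 + 9:
print(128 + signal.SIGKILL)  # -> 137
```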

Can you please help me?
Thanks.

Regards
Bajrang

Rikkert Frederix (frederix) said :
#3

Dear Bajrang,

Your problems with ROOT seem to be a compilation/linking issue that depends rather specifically on your installation of ROOT. It will be difficult to help you debug that.

For the topdrawer one, what does the log.txt file say? There should be one in the
/home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/
directory. If it doesn't say anything in particular, what is the error message if, from within that directory, you execute
../madevent_mintFO < input_app.txt

Best,
Rikkert

Bajarang (bajjubaba) said :
#4

Dear Rikkert,

For the topdrawer case, /home/rohit/Downloads/MG5_aMC_v2_3_3/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/log.txt does not say anything specific about the error.

I ran:
--------------------------------------------
$ ../madevent_mintFO < input_app.txt
aMCfast INFO: Setting up hook functions ...
 ===============================================================
 INFO: MadFKS read these parameters from FKS_params.dat
 ===============================================================
  > IRPoleCheckThreshold = 1.0000000000000001E-005
  > PrecisionVirtualAtRunTime = 1.0000000000000000E-003
  > NHelForMCoverHels = 4
  > VirtualFraction = 1.0000000000000000
  > MinVirtualFraction = 5.0000000000000001E-003
 ===============================================================
 Process in group number 0
 A PDF is used, so alpha_s(MZ) is going to be modified
 Old value of alpha_s from param_card: 0.11799999999999999
==== LHAPDF6 USING DEFAULT-TYPE LHAGLUE INTERFACE ====
LHAPDF 6.1.5 loading /usr/local/share/LHAPDF/CT10nlo/CT10nlo_0000.dat
CT10nlo PDF set, member #0, version 4; LHAPDF ID = 11000
 New value of alpha_s from PDF lhapdf : 0.11800062239984055
 using LHAPDF
 *****************************************************
 * MadGraph/MadEvent *
 * -------------------------------- *
 * http://madgraph.hep.uiuc.edu *
 * http://madgraph.phys.ucl.ac.be *
 * http://madgraph.roma2.infn.it *
 * -------------------------------- *
 * *
 * PARAMETER AND COUPLING VALUES *
 * *
 *****************************************************

  External Params
  ---------------------------------

 MU_R = 91.188000000000002
 aEWM1 = 132.50700000000001
 mdl_Gf = 1.1663900000000000E-005
 aS = 0.11799999999999999
 mdl_ymb = 4.7000000000000002
 mdl_ymt = 173.00000000000000
 mdl_ymtau = 1.7769999999999999
 mdl_MT = 173.00000000000000
 mdl_MB = 4.7000000000000002
 mdl_MZ = 91.188000000000002
 mdl_MH = 125.00000000000000
 mdl_MTA = 1.7769999999999999
 mdl_WT = 1.4915000000000000
 mdl_WZ = 2.4414039999999999
 mdl_WW = 2.0476000000000001
 mdl_WH = 6.3823389999999999E-003
  Internal Params
  ---------------------------------

 mdl_conjg__CKM3x3 = 1.0000000000000000
 mdl_CKM22 = 1.0000000000000000
 mdl_lhv = 1.0000000000000000
 mdl_CKM3x3 = 1.0000000000000000
 mdl_conjg__CKM22 = 1.0000000000000000
 mdl_conjg__CKM33 = 1.0000000000000000
 mdl_Ncol = 3.0000000000000000
 mdl_CA = 3.0000000000000000
 mdl_TF = 0.50000000000000000
 mdl_CF = 1.3333333333333333
 mdl_complexi = ( 0.0000000000000000 , 1.0000000000000000 )
 mdl_MZ__exp__2 = 8315.2513440000002
 mdl_MZ__exp__4 = 69143404.913893804
 mdl_sqrt__2 = 1.4142135623730951
 mdl_MH__exp__2 = 15625.000000000000
 mdl_Ncol__exp__2 = 9.0000000000000000
 mdl_MB__exp__2 = 22.090000000000003
 mdl_MT__exp__2 = 29929.000000000000
 mdl_aEW = 7.5467711139788835E-003
 mdl_MW = 80.419002445756163
 mdl_sqrt__aEW = 8.6872153846781555E-002
 mdl_ee = 0.30795376724436879
 mdl_MW__exp__2 = 6467.2159543705357
 mdl_sw2 = 0.22224648578577766
 mdl_cw = 0.88190334743339216
 mdl_sqrt__sw2 = 0.47143025548407230
 mdl_sw = 0.47143025548407230
 mdl_g1 = 0.34919219678733299
 mdl_gw = 0.65323293034757990
 mdl_v = 246.21845810181637
 mdl_v__exp__2 = 60623.529110035903
 mdl_lam = 0.12886910601690263
 mdl_yb = 2.6995554250465490E-002
 mdl_yt = 0.99366614581500623
 mdl_ytau = 1.0206617000654717E-002
 mdl_muH = 88.388347648318430
 mdl_AxialZUp = -0.18517701861793787
 mdl_AxialZDown = 0.18517701861793787
 mdl_VectorZUp = 7.5430507588273299E-002
 mdl_VectorZDown = -0.13030376310310560
 mdl_VectorAUp = 0.20530251149624587
 mdl_VectorADown = -0.10265125574812294
 mdl_VectorWmDxU = 0.23095271737156670
 mdl_AxialWmDxU = -0.23095271737156670
 mdl_VectorWpUxD = 0.23095271737156670
 mdl_AxialWpUxD = -0.23095271737156670
 mdl_I1x33 = ( 2.6995554250465490E-002, 0.0000000000000000 )
 mdl_I2x33 = ( 0.99366614581500623 , 0.0000000000000000 )
 mdl_I3x33 = ( 0.99366614581500623 , 0.0000000000000000 )
 mdl_I4x33 = ( 2.6995554250465490E-002, 0.0000000000000000 )
 mdl_Vector_tbGp = (-0.96667059156454072 , 0.0000000000000000 )
 mdl_Axial_tbGp = ( -1.0206617000654716 , -0.0000000000000000 )
 mdl_Vector_tbGm = ( 0.96667059156454072 , 0.0000000000000000 )
 mdl_Axial_tbGm = ( -1.0206617000654716 , -0.0000000000000000 )
 mdl_gw__exp__2 = 0.42671326129048615
 mdl_cw__exp__2 = 0.77775351421422245
 mdl_ee__exp__2 = 9.4835522759998875E-002
 mdl_sw__exp__2 = 0.22224648578577769
 mdl_yb__exp__2 = 7.2875994928982540E-004
 mdl_yt__exp__2 = 0.98737240933884918
  Internal Params evaluated point by point
  ----------------------------------------

 mdl_sqrt__aS = 0.34351128074635334
 mdl_G__exp__4 = 2.1987899468922913
 mdl_G__exp__2 = 1.4828317324943823
 mdl_R2MixedFactor_FIN_ = -2.5040377713124864E-002
 mdl_GWcft_UV_b_1EPS_ = -3.1300472141406080E-003
 mdl_GWcft_UV_t_1EPS_ = -3.1300472141406080E-003
 mdl_bWcft_UV_1EPS_ = -1.8780283284843650E-002
 mdl_tWcft_UV_1EPS_ = -1.8780283284843650E-002
 mdl_G__exp__3 = 1.8056676068262196
 mdl_MU_R__exp__2 = 8315.2513440000002
 mdl_GWcft_UV_b_FIN_ = -1.8563438626678915E-002
 mdl_GWcft_UV_t_FIN_ = 4.0087659331150384E-003
 mdl_bWcft_UV_FIN_ = -0.13642100947319838
 mdl_tWcft_UV_FIN_ = -9.8778211443463623E-004
  Couplings of loop_sm
  ---------------------------------

      R2_sxcW = -0.00000E+00 -0.11566E-01
         GC_5 = 0.00000E+00 0.12177E+01
        GC_47 = 0.00000E+00 0.46191E+00

 Collider parameters:
 --------------------

 Running at P P machine @ 13000.000000000000 GeV
 PDF set = lhapdf
 alpha_s(Mz)= 0.1180 running at 2 loops.
 alpha_s(Mz)= 0.1180 running at 2 loops.
 Renormalization scale set on event-by-event basis
 Factorization scale set on event-by-event basis

 Diagram information for clustering has been set-up for nFKSprocess 1
 Diagram information for clustering has been set-up for nFKSprocess 2
 Diagram information for clustering has been set-up for nFKSprocess 3
 Diagram information for clustering has been set-up for nFKSprocess 4
 getting user params
 Number of phase-space points per iteration: 5000
 Maximum number of iterations is: 2
 Desired accuracy is: 1.1448537750500001E-002
 Using adaptive grids: 2
 Using Multi-channel integration
 Do MC over helicities for the virtuals
Running Configuration Number: 1
 Splitting channel: 0
 doing the all of this channel
 Normal integration (Sfunction != 1)
 RESTART: Use old grids, but refil plots
 Not subdividing B.W.
 about to integrate 4 5000 2 1
 Initializing aMCfast ...
 ================================
 process combination map (specified per FKS dir):
  1 map 1 2
  1 inv. map 1 2
  2 map 1 2
  2 inv. map 1 2
  3 map 1 2
  3 inv. map 1 2
  4 map 1 2
  4 inv. map 1 2
 ================================
    ... done.
 Update iterations and points to 2 5000
 imode is -1
 ------- iteration 1
 Update # PS points (even): 5000 --> 4802
Using random seed offsets: 1 , 2 , 0
  with seed 33
 Ranmar initialization seeds 11949 9409
 Total number of FKS directories is 4
 For the Born we use nFKSprocesses # 2 0
nFKSprocess: 1. Absolute lower bound for tau at the Born is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 1. Lower bound for tau is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 1. Lower bound for tau is (taking resonances into account) 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 2. Absolute lower bound for tau at the Born is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 2. Lower bound for tau is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 2. Lower bound for tau is (taking resonances into account) 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 3. Absolute lower bound for tau at the Born is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 3. Lower bound for tau is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 3. Lower bound for tau is (taking resonances into account) 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 4. Absolute lower bound for tau at the Born is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 4. Lower bound for tau is 0.38268E-04 0.80419E+02 0.13000E+05
nFKSprocess: 4. Lower bound for tau is (taking resonances into account) 0.38268E-04 0.80419E+02 0.13000E+05
 bpower is 0.0000000000000000
aMCfast INFO: Booking grid from scratch with name grid_obs_0_in.root ...

aMCfast INFO: Report of the grid parameters:
- Q2 grid:
  * interpolation range: [ 100 : 1e+06 ] GeV^2
  * number of nodes: 30
  * interpolation order: 3
- x grid:
  * interpolation range: [ 2e-07 : 1 ]
  * number of nodes: 50
  * interpolation order: 3

lumi_pdf::lumi_pdf() amcatnlo_obs_0_20170727140527.config combinations 6
appl::grid::amcatnlo() using aMC@NLO convolution
aMCfast INFO: Booking grid from scratch with name grid_obs_1_in.root ...

aMCfast INFO: Report of the grid parameters:
- Q2 grid:
  * interpolation range: [ 100 : 1e+06 ] GeV^2
  * number of nodes: 30
  * interpolation order: 3
- x grid:
  * interpolation range: [ 2e-07 : 1 ]
  * number of nodes: 50
  * interpolation order: 3

lumi_pdf::lumi_pdf() amcatnlo_obs_1_20170727140546.config combinations 6
Killed
------------------------------------------------------------------------------

The process gets killed by itself after some time.

Regards,
Bajrang

Revision history for this message
Rikkert Frederix (frederix) said :
#5

Dear Bajrang,

I've forwarded your message to Valerio Bertone. He might be able to help you better than I do.

best,
Rikkert

Valerio Bertone (valerio-bertone) said :
#6

Dear Bajrang,

I suspect that this might be a memory issue.
The problem is that the default (x,Q2) grid allocated by aMCfast is quite large, and in some cases it may cause a memory overflow.
If this is the problem, it can be fixed by reducing the size of the grid.
To do so, you just have to uncomment the following lines:

c*
c* aMCfast common.
c* Needed to redefine the grid parameters
c*
c include "reweight_appl.inc"
c include "appl_common.inc"
c*
c* Grid parameters
c*
c appl_Q2min = 2500d0
c appl_Q2max = 40000d0
c appl_xmin = 2d-7
c appl_xmax = 1d0
c appl_nQ2 = 10
c appl_Q2order = 3
c appl_nx = 50
c appl_xorder = 3

in the analysis_begin subroutine of your analysis file.
To start with, I would use something like:

      appl_Q2min = 2500d0
      appl_Q2max = 40000d0
      appl_xmin = 1d-5
      appl_xmax = 1d0
      appl_nQ2 = 10
      appl_Q2order = 3
      appl_nx = 20
      appl_xorder = 3

which is a significantly smaller grid than the default one.
If this works but does not meet your accuracy requirements, you can increase the grid size step by step, as long as the code does not crash.
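As a rough illustration of why the reduced grid helps (the quadratic-in-nx scaling below is a back-of-the-envelope assumption about the grid layout, not a statement from this thread): the grids carry a weight for each (x1, x2, Q2) node per subprocess and histogram bin, so memory grows roughly like nx^2 * nQ2.

```python
# Back-of-the-envelope estimate of the (x, Q2) grid size relative to the
# default aMCfast settings (nx = 50, nQ2 = 30, as in the log above).
# ASSUMPTION: memory scales like nx^2 * nQ2; exact constants and layout
# are APPLgrid implementation details.
def relative_grid_size(nx, nq2, nx_ref=50, nq2_ref=30):
    """Size of an (nx, nq2) grid relative to the default (nx_ref, nq2_ref) one."""
    return (nx * nx * nq2) / (nx_ref * nx_ref * nq2_ref)

# The reduced grid suggested in this thread (nx = 20, nQ2 = 10):
print(relative_grid_size(20, 10))  # about 0.053, i.e. roughly 19x smaller
```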

If you change the grid, you need to restart the production from scratch, meaning that you need to redo step 1.
Also notice that if req_acc_FO in your run card is not equal to -1 (as seems to be the case for you), the parameters:

 5000 = npoints_FO_grid ! number of points to setup grids
 4 = niters_FO_grid ! number of iter. to setup grids
 10000 = npoints_FO ! number of points to compute Xsec
 6 = niters_FO ! number of iter. to compute Xsec

will be ignored.
In general, I find it a better idea to use the target accuracy req_acc_FO rather than setting the number of iterations.
Let us know if this solves the problem.

Best regards,
Valerio

Bajarang (bajjubaba) said :
#7

Dear Valerio,

thanks for your reply. I tried changing the grid size as per your
suggestions, with req_acc_FO set to -1 and the grid parameters
changed as below [1].

I tried running it with both the root and topdrawer options.
I still get an error, but it is different this time.
Can you please take a look at it? See [2] for the error with root and
[3] for the error with topdrawer.

----------------------------------------------------
[1]
c*
c* aMCfast common.
c* Needed to redefine the grid parameters
c*
c include "reweight_appl.inc"
c include "appl_common.inc"
c*
c* Grid parameters
c*
       appl_Q2min = 2500d0
       appl_Q2max = 40000d0
       appl_xmin = 1d-5
       appl_xmax = 1d0
       appl_nQ2 = 10
       appl_Q2order = 3
       appl_nx = 20
       appl_xorder = 3

ROOT Error :
---------------------------------------------------
[2]
INFO: Update the dependent parameter of the param_card.dat
INFO: Starting run
INFO: Compiling the code
INFO: Using LHAPDF v6.1.5 interface for PDFs
INFO: Compiling source...
INFO: ...done, continuing with P* directories
INFO: Compiling directories...
INFO: Compiling on 4 cores
INFO: Compiling P0_udx_wp...
INFO: Compiling P0_dxu_wp...
WARNING: fct <function compile_dir at 0x22c2578> does not return 0. Stopping the code in a clean way. The error was:
A compilation Error occurs when trying to compile /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_dxu_wp.
The compilation fails with the following output message:
    gfortran -O -fno-automatic -ffixed-line-length-132 -c -I. -I../../FixedOrderAnalysis/ -I/home/bajarang/packages/myroot/root/include ../../FixedOrderAnalysis//analysis_root_pp_V.f
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:29.15:

           appl_nQ2 = 10
                   1
    Error: Symbol 'appl_nq2' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:31.14:

           appl_nx = 20
                  1
    Error: Symbol 'appl_nx' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:26.17:

           appl_Q2max = 40000d0
                     1
    Error: Symbol 'appl_q2max' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:25.17:

           appl_Q2min = 2500d0
                     1
    Error: Symbol 'appl_q2min' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:30.19:

           appl_Q2order = 3
                       1
    Error: Symbol 'appl_q2order' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:28.16:

           appl_xmax = 1d0
                    1
    Error: Symbol 'appl_xmax' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:27.16:

           appl_xmin = 1d-5
                    1
    Error: Symbol 'appl_xmin' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:32.18:

           appl_xorder = 3
                      1
    Error: Symbol 'appl_xorder' at (1) has no IMPLICIT type
    make: *** [analysis_root_pp_V.o] Error 1

Please try to fix this compilations issue and retry.
Help might be found at https://answers.launchpad.net/mg5amcnlo.
If you think that this is a bug, you can report this at https://bugs.launchpad.net/mg5amcnlo
WARNING: fct <function compile_dir at 0x22c2578> does not return 0. Stopping the code in a clean way. The error was:
A compilation Error occurs when trying to compile /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp.
The compilation fails with the following output message:
    gfortran -O -fno-automatic -ffixed-line-length-132 -c -I. -I../../FixedOrderAnalysis/ -I/home/bajarang/packages/myroot/root/include ../../FixedOrderAnalysis//analysis_root_pp_V.f
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:29.15:

           appl_nQ2 = 10
                   1
    Error: Symbol 'appl_nq2' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:31.14:

           appl_nx = 20
                  1
    Error: Symbol 'appl_nx' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:26.17:

           appl_Q2max = 40000d0
                     1
    Error: Symbol 'appl_q2max' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:25.17:

           appl_Q2min = 2500d0
                     1
    Error: Symbol 'appl_q2min' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:30.19:

           appl_Q2order = 3
                       1
    Error: Symbol 'appl_q2order' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:28.16:

           appl_xmax = 1d0
                    1
    Error: Symbol 'appl_xmax' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:27.16:

           appl_xmin = 1d-5
                    1
    Error: Symbol 'appl_xmin' at (1) has no IMPLICIT type
    ../../FixedOrderAnalysis//analysis_root_pp_V.f:32.18:

           appl_xorder = 3
                      1
    Error: Symbol 'appl_xorder' at (1) has no IMPLICIT type
    make: *** [analysis_root_pp_V.o] Error 1

Please try to fix this compilations issue and retry.
Help might be found at https://answers.launchpad.net/mg5amcnlo.
If you think that this is a bug, you can report this at https://bugs.launchpad.net/mg5amcnlo
WARNING: Fail to compile the Subprocesses
INFO:

INFO: Checking test output:
INFO: P0_udx_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: P0_dxu_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: Starting run
INFO: Using 4 cores
INFO: Cleaning previous results
INFO: Doing fixed order NLO
INFO: Setting up grids
WARNING: program /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_dxu_wp/ajob1 1 all 0 0 launch ends with non zero status: 127. Stop all computation
INFO: Idle: 0, Running: 0, Completed: 2 [ current time: 19h19 ]
Command "launch auto " interrupted with error:
Exception : program /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_dxu_wp/ajob1 1 all 0 0 launch ends with non zero status: 127. Stop all computation
Please report this bug on https://bugs.launchpad.net/mg5amcnlo
More information is found in '/home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/run_01_tag_1_debug.log'.
Please attach this file to your report.
INFO:

quit
INFO:

----------------------------------------------------

TopDrawer Error :

----------------------------------------------------
[3]
INFO: Update the dependent parameter of the param_card.dat
INFO: Starting run
INFO: Compiling the code
INFO: Using LHAPDF v6.1.5 interface for PDFs
INFO: Compiling source...
INFO: ...done, continuing with P* directories
INFO: Compiling directories...
INFO: Compiling on 4 cores
INFO: Compiling P0_udx_wp...
INFO: Compiling P0_dxu_wp...
INFO: P0_udx_wp done.
INFO: P0_dxu_wp done.
INFO: Checking test output:
INFO: P0_udx_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: P0_dxu_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: Starting run
INFO: Using 4 cores
INFO: Cleaning previous results
INFO: Doing fixed order NLO
INFO: Setting up grids
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 18h50 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 4.1s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 5.4s ]
INFO:
      Results after grid setup:
      Total cross section: 1.028e+05 +- 6.4e+02 pb

INFO: Refining results, step 1
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 18h50 ]
/home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp/ajob1: line 34: 11930 Killed ../madevent_mintFO > log.txt < input_app.txt 2>&1
WARNING: program /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp/ajob1 1 all 0 1 launch ends with non zero status: 137. Stop all computation
INFO: Idle: 0, Running: 0, Completed: 2 [ 15m 27s ]
Command "launch auto " interrupted with error:
Exception : program /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/SubProcesses/P0_udx_wp/ajob1 1 all 0 1 launch ends with non zero status: 137. Stop all computation
Please report this bug on https://bugs.launchpad.net/mg5amcnlo
More information is found in '/home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_root/run_01_tag_1_debug.log'.
Please attach this file to your report.
INFO:

quit
INFO:

----------------------------------------------------

For root, I guess something is wrong with the Fortran code, which I simply copied from the example given on the aMCfast site, uncommenting only the grid-related part and changing the values accordingly. It would be really helpful if you could give some hints about how this Fortran code is meant to be used.

Also another thing I want to ask is :
while running with root, I chose this combination in FO_analyse_card.dat :
FO_ANALYSIS_FORMAT = root
FO_EXTRALIBS = Core Cint Hist Matrix MathCore RIO dl Thread
FO_EXTRAPATHS = /home/bajarang/packages/myroot/root/lib
FO_INCLUDEPATHS = /home/bajarang/packages/myroot/root/include
FO_ANALYSE = analysis_root_pp_V.o
I want to know whether these options are correct, or whether I need to include/exclude anything.
Thanks for your patience.

Regards,
Bajrang

Valerio Bertone (valerio-bertone) said :
#8

Dear Bajrang,

you also have to uncomment the two include lines in:

c*
c* aMCfast common.
c* Needed to redefine the grid parameters
c*
c include "reweight_appl.inc"
c include "appl_common.inc"
c*
c* Grid parameters
c*

otherwise you're trying to use undeclared variables.
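Putting the two pieces together, the grid-parameter block in the analysis_begin subroutine should then read as follows (reduced grid values as suggested earlier in this thread; the two .inc files declare the appl_* variables):

```fortran
c*
c* aMCfast common.
c* Needed to redefine the grid parameters
c*
      include "reweight_appl.inc"
      include "appl_common.inc"
c*
c* Grid parameters
c*
      appl_Q2min = 2500d0
      appl_Q2max = 40000d0
      appl_xmin = 1d-5
      appl_xmax = 1d0
      appl_nQ2 = 10
      appl_Q2order = 3
      appl_nx = 20
      appl_xorder = 3
```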

As for req_acc_FO, I meant exactly the contrary: I find it better to set it to some value (different from -1), as you were doing before.
In particular, in my experience, I find that req_acc_FO = 0.01 is good enough for the first step (iappl = 1), while something like req_acc_FO = 0.001 is a good compromise for the second step (iappl = 2).
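In the run_card this is a single line whose value you change between the two steps; following the format of the card excerpt quoted earlier, it would read (one line at a time, not both):

```text
 0.01  = req_acc_FO ! required accuracy, step 1 (iappl = 1)
 0.001 = req_acc_FO ! required accuracy, step 2 (iappl = 2)
```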
Let me know if this helps.

Best regards,
Valerio

Bajarang (bajjubaba) said :
#9

Dear Valerio,

Thanks for your quick reply. I completely misunderstood your earlier comment about setting req_acc_FO; I have now set it back to 0.01. I uncommented the include lines and tried both formats again. With FO_ANALYSIS_FORMAT = root there is no luck: the include-file errors did disappear, but the final error still persists.

With FO_ANALYSIS_FORMAT = topdrawer, it executed the commands and tried to save the output, but maybe there is something wrong with my ROOT installation (expecting your comment), as I am now getting the error below. Let me try to debug it on my side, but your comments would really help in pinpointing what exactly is wrong. Thanks in advance.

---------------------------------------------------------------------------------------------------------------------------
INFO: Update the dependent parameter of the param_card.dat
INFO: Starting run
INFO: Compiling the code
INFO: Using LHAPDF v6.1.5 interface for PDFs
INFO: Compiling source...
INFO: ...done, continuing with P* directories
INFO: Compiling directories...
INFO: Compiling on 4 cores
INFO: Compiling P0_udx_wp...
INFO: Compiling P0_dxu_wp...
INFO: P0_dxu_wp done.
INFO: P0_udx_wp done.
INFO: Checking test output:
INFO: P0_udx_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: P0_dxu_wp
INFO: Result for test_ME:
INFO: Passed.
INFO: Result for check_poles:
INFO: Poles successfully cancel for 20 points over 20 (tolerance=1.0e-05)
INFO: Starting run
INFO: Using 4 cores
INFO: Cleaning previous results
INFO: Doing fixed order NLO
INFO: Setting up grids
INFO: Idle: 0, Running: 0, Completed: 2 [ current time: 19h56 ]
INFO:
      Results after grid setup:
      Total cross section: 1.031e+05 +- 1.1e+03 pb

INFO: Refining results, step 1
INFO: Idle: 0, Running: 2, Completed: 0 [ current time: 19h56 ]
INFO: Idle: 0, Running: 1, Completed: 1 [ 5.1s ]
INFO: Idle: 0, Running: 0, Completed: 2 [ 5.1s ]
INFO:
   --------------------------------------------------------------
      Final results and run summary:
      Process p p > w+ [QCD]
      Run at p-p collider (6500.0 + 6500.0 GeV)
      Total cross section: 1.021e+05 +- 7.3e+02 pb
   --------------------------------------------------------------
      Scale variation (computed from histogram information):
          Dynamical_scale_choice -1 (envelope of 9 values):
              1.021e+05 pb +4.7% -8.9%
   --------------------------------------------------------------

INFO: The results of this run and the TopDrawer file with the plots have been saved in /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/Events/run_01
reading grids:
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/all_G1/grid_obs_0_out.root
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/grid_obs_0_out.root

output to: /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/Events/run_01/aMCfast_obs_0_starting_grid.root
Fatal in <TVirtualStreamerInfo::Factory>: Cannot find the plugin handler for TVirtualStreamerInfo! $ROOTSYS/etc/plugins/TVirtualStreamerInfo does not exist or is inaccessible.
aborting
Error in <TUnixSystem::StackTrace> script /home/bajarang/etc/gdb-backtrace.sh is missing
reading grids:
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/all_G1/grid_obs_1_out.root
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/grid_obs_1_out.root

output to: /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/Events/run_01/aMCfast_obs_1_starting_grid.root
Fatal in <TVirtualStreamerInfo::Factory>: Cannot find the plugin handler for TVirtualStreamerInfo! $ROOTSYS/etc/plugins/TVirtualStreamerInfo does not exist or is inaccessible.
aborting
Error in <TUnixSystem::StackTrace> script /home/bajarang/etc/gdb-backtrace.sh is missing
reading grids:
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/all_G1/grid_obs_2_out.root
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/grid_obs_2_out.root

output to: /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/Events/run_01/aMCfast_obs_2_starting_grid.root
Fatal in <TVirtualStreamerInfo::Factory>: Cannot find the plugin handler for TVirtualStreamerInfo! $ROOTSYS/etc/plugins/TVirtualStreamerInfo does not exist or is inaccessible.
aborting
Error in <TUnixSystem::StackTrace> script /home/bajarang/etc/gdb-backtrace.sh is missing
INFO: Run complete
INFO:
quit
INFO:

---------------------------------------------------------------------------------------------------------------------------

Regards,
Bajrang

Best Valerio Bertone (valerio-bertone) said :
#10

Dear Bajrang,

could you please check whether the grids:

 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_udx_wp/all_G1/grid_obs_0_out.root
 /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/SubProcesses/P0_dxu_wp/all_G1/grid_obs_0_out.root

actually exist? And if they do, whether their combination:

output to: /home/bajarang/packages/MG5_aMC_v2_5_5/bin/pp_wplus_qcd_top/Events/run_01/aMCfast_obs_0_starting_grid.root

is there?
The problem might be related to APPLgrid, because it looks like the code stops when trying to read the grids.
Maybe you installed it linking against a different version of ROOT?
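A quick way to check for such a mismatch (the helper scripts root-config and applgrid-config ship with ROOT and APPLgrid respectively; the library path in the ldd line is an assumption about a typical install layout):

```shell
# Commands one would run to inspect the two installations:
#
#   root-config --version                        # ROOT currently on PATH
#   applgrid-config --ldflags                    # ROOT lib path used when APPLgrid was built
#   ldd <applgrid-prefix>/lib/libAPPLgrid.so | grep -i root
#
# A tiny helper to compare the two version strings:
same_version() {
  if [ "$1" = "$2" ]; then
    echo "match"
  else
    echo "MISMATCH: $1 vs $2"
  fi
}

# Hypothetical version strings for illustration:
same_version "5.34/36" "6.08/06"   # prints "MISMATCH: 5.34/36 vs 6.08/06"
```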

Valerio

Bajarang (bajjubaba) said :
#11

Dear Valerio,

You are right: APPLgrid was configured with a different version of ROOT. I fixed it, and now the grids are produced with the topdrawer option.

Thank you so much.

Regards,
Bajrang

Bajarang (bajjubaba) said :
#12

Thanks Valerio Bertone, that solved my question.