Multi-core operation during single-core run in MadSpin?

Asked by Zachary Marshall

Hi all,

I just noticed something that looks a little funny. In a job with the configuration below, which runs MadSpin at the end of a generation chain, I see two `check` processes apparently running in parallel:

        PID USER     PR  NI   VIRT   RES   SHR S  %CPU %MEM     TIME+ COMMAND
     474490 zmarshal  20   0   6992  2944  2816 R 100.0  0.0  17:11.49 check
     486328 zmarshal  20   0  11004  3072  2944 R  99.7  0.0  11:50.72 check

You can see in the configuration

    run_mode = 0
    nb_core = 1

so I would've expected this to use just one core at a time. We're using MG5_aMC 3.5.5.
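(For completeness, the values the code actually picked up can be double-checked with the `display options` command mentioned in the me5_configuration comments further down; this is just the standard way of listing the current settings, shown here as an illustrative check.)

    # from ./bin/mg5_aMC or ./bin/madevent in the process directory
    display options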

Incidentally, I noticed this while looking into a report that this job takes an extremely long time, and the MadSpin portion does indeed take quite a while (30 minutes so far for me while writing this question, and others have waited hours; version 3.5.0 was also tested). In 3.5.0, I see only one instance of `check` running.

In case you can spot something wrong with the job setup or configuration, any tips would be welcome!

Thanks,
Zach

Process card:

set group_subprocesses Auto
set ignore_six_quark_processes False
set low_mem_multicore_nlo_generation False
set complex_mass_scheme False
set include_lepton_initiated_processes False
set gauge unitary
set loop_optimized_output True
set loop_color_flows False
set max_npoint_for_channel 0
set default_unset_couplings 99
set max_t_for_channel 99
set zerowidth_tchannel True
set nlo_mixed_expansion True
import model toponium_eta
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define charm = c c~
define up = u u~
define q = u u~ d d~ c c~ s s~
define e = e- e+
define mu = mu- mu+
define l+ = e+ mu+ ta+
define l- = e- mu- ta-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
generate g g > eta > t t~
output -f -nojpeg

Param card:
######################################################################
## PARAM_CARD AUTOMATICALY GENERATED BY MG5 ####
######################################################################
###################################
## INFORMATION FOR MASS
###################################
BLOCK MASS #
      5 4.700000e+00 # mb
      6 1.725000e+02 # mt
      23 9.118760e+01 # mz
      25 1.250000e+02 # mh
      6001 3.430000e+02 # meta
      6003 3.435000e+02 # mjpsi
      1 0.000000e+00 # d : 0.0
      2 0.000000e+00 # u : 0.0
      3 0.000000e+00 # s : 0.0
      4 0.000000e+00 # c : 0.0
      11 0.000000e+00 # e- : 0.0
      12 0.000000e+00 # ve : 0.0
      13 0.000000e+00 # mu- : 0.0
      14 0.000000e+00 # vm : 0.0
      15 0.000000e+00 # ta- : 0.0
      16 0.000000e+00 # vt : 0.0
      21 0.000000e+00 # g : 0.0
      22 0.000000e+00 # a : 0.0
      24 8.041851e+01 # w+ : cmath.sqrt(mz__exp__2/2. + cmath.sqrt(mz__exp__4/4. - (aew*cmath.pi*mz__exp__2)/(gf*sqrt__2)))
###################################
## INFORMATION FOR SMINPUTS
###################################
BLOCK SMINPUTS #
      1 1.325070e+02 # aewm1
      2 1.166390e-05 # gf
      3 1.180023e-01 # as (note that parameter not used if you use a pdf set)
###################################
## INFORMATION FOR YUKAWA
###################################
BLOCK YUKAWA #
      5 4.700000e+00 # ymb
      6 1.725000e+02 # ymt
      6000 1.355000e+00 # cy
      6001 1.570796e+00 # alphay
###################################
## INFORMATION FOR DECAY
###################################
DECAY 6 1.320000e+00 #
DECAY 23 2.495200e+00 #
DECAY 24 2.047600e+00 # ww
DECAY 25 6.382339e-03 #
DECAY 6001 7.000000e+00 # weta
DECAY 6003 3.150000e+00 # wjpsi
DECAY 1 0.000000e+00 # d : 0.0
DECAY 2 0.000000e+00 # u : 0.0
DECAY 3 0.000000e+00 # s : 0.0
DECAY 4 0.000000e+00 # c : 0.0
DECAY 5 0.000000e+00 # b : 0.0
DECAY 11 0.000000e+00 # e- : 0.0
DECAY 12 0.000000e+00 # ve : 0.0
DECAY 13 0.000000e+00 # mu- : 0.0
DECAY 14 0.000000e+00 # vm : 0.0
DECAY 15 0.000000e+00 # ta- : 0.0
DECAY 16 0.000000e+00 # vt : 0.0
DECAY 21 0.000000e+00 # g : 0.0
DECAY 22 0.000000e+00 # a : 0.0
###################################
## INFORMATION FOR QNUMBERS 6001
###################################
BLOCK QNUMBERS 6001 # eta
      1 0 # 3 times electric charge
      2 1 # number of spin states (2s+1)
      3 1 # colour rep (1: singlet, 3: triplet, 8: octet)
      4 0 # particle/antiparticle distinction (0=own anti)
###################################
## INFORMATION FOR QNUMBERS 6003
###################################
BLOCK QNUMBERS 6003 # jpsi
      1 0 # 3 times electric charge
      2 3 # number of spin states (2s+1)
      3 1 # colour rep (1: singlet, 3: triplet, 8: octet)
      4 0 # particle/antiparticle distinction (0=own anti)

Run card:

#*********************************************************************
# MadGraph5_aMC@NLO *
# *
# run_card.dat MadEvent *
# *
# This file is used to set the parameters of the run. *
# *
# Some notation/conventions: *
# *
# Lines starting with a '# ' are info or comments *
# *
# mind the format: value = variable ! comment *
# *
# To display more options, you can type the command: *
# update to_full *
#*********************************************************************
#
#*********************************************************************
# Tag name for the run (one word) *
#*********************************************************************
  tag_1 = run_tag ! name of the run
#*********************************************************************
# Number of events and rnd seed *
# Warning: Do not generate more than 1M events in a single run *
#*********************************************************************
  11000 = nevents ! Number of unweighted events requested
  0 = iseed ! rnd seed (0=assigned automatically=default))
#*********************************************************************
# Collider type and energy *
# lpp: 0=No PDF, 1=proton, -1=antiproton, *
# 2=elastic photon of proton/ion beam *
# +/-3=PDF of electron/positron beam *
# +/-4=PDF of muon/antimuon beam *
#*********************************************************************
  1 = lpp1 ! beam 1 type
  1 = lpp2 ! beam 2 type
  6800.0 = ebeam1 ! beam 1 total energy in GeV
  6800.0 = ebeam2 ! beam 2 total energy in GeV
# To see polarised beam options: type "update beam_pol"

#*********************************************************************
# PDF CHOICE: this automatically fixes alpha_s and its evol. *
# pdlabel: lhapdf=LHAPDF (installation needed) [1412.7420] *
# iww=Improved Weizsaecker-Williams Approx.[hep-ph/9310350] *
# eva=Effective W/Z/A Approx. [2111.02442] *
# edff=EDFF in gamma-UPC [eq.(11) in 2207.03012] *
# chff=ChFF in gamma-UPC [eq.(13) in 2207.03012] *
# none=No PDF, same as lhapdf with lppx=0 *
#*********************************************************************
  lhapdf = pdlabel ! PDF set
  303000 = lhaid ! if pdlabel=lhapdf, this is the lhapdf number
# To see heavy ion options: type "update ion_pdf"
#*********************************************************************
# Renormalization and factorization scales *
#*********************************************************************
  False = fixed_ren_scale ! if .true. use fixed ren scale
  False = fixed_fac_scale ! if .true. use fixed fac scale
  91.188 = scale ! fixed ren scale
  91.188 = dsqrt_q2fact1 ! fixed fact scale for pdf1
  91.188 = dsqrt_q2fact2 ! fixed fact scale for pdf2
  -1 = dynamical_scale_choice ! Choose one of the preselected dynamical choices
  1.0 = scalefact ! scale factor for event-by-event scales

#*********************************************************************
# Type and output format
#*********************************************************************
  False = gridpack !True = setting up the grid pack
  -1.0 = time_of_flight ! threshold (in mm) below which the invariant livetime is not written (-1 means not written)
  average = event_norm ! average/sum. Normalization of the weight in the LHEF
#*********************************************************************
# Matching parameter (MLM only)
#*********************************************************************
 1.0 = alpsfact ! scale factor for QCD emission vx
 False = chcluster ! cluster only according to channel diag
 5 = asrwgtflavor ! highest quark flavor for a_s reweight
 True = auto_ptj_mjj ! Automatic setting of ptj and mjj if xqcut >0
                                   ! (turn off for VBF and single top processes)
 0.0 = xqcut ! minimum kt jet measure between partons

#*********************************************************************
#
#*********************************************************************
# Phase-Space Optimization strategy (basic options)
#*********************************************************************
  0 = nhel ! using helicities importance sampling or not.
! 0: sum over helicity, 1: importance sampling
  1 = sde_strategy ! default integration strategy (hep-ph/2021.00773)
! 1 is old strategy (using amp square)
! 2 is new strategy (using only the denominator)
# To see advanced option for Phase-Space optimization: type "update psoptim"
#*********************************************************************
# Customization (custom cuts/scale/bias/...) *
# list of files containing fortran function that overwrite default *
#*********************************************************************
        = custom_fcts ! List of files containing user hook function
#*******************************
# Parton level cuts definition *
#*******************************
  0.0 = dsqrt_shat ! minimal shat for full process
#
#
#*********************************************************************
# BW cutoff (M+/-bwcutoff*Gamma) ! Define on/off-shell for "$" and decay
#*********************************************************************
  15.0 = bwcutoff ! (M+/-bwcutoff*Gamma)
#*********************************************************************
# Standard Cuts *
#*********************************************************************
# Minimum and maximum pt's (for max, -1 means no cut) *
#*********************************************************************
  {} = pt_min_pdg ! pt cut for other particles (use pdg code). Applied on particle and anti-particle
  {} = pt_max_pdg ! pt cut for other particles (syntax e.g. {6: 100, 25: 50})
#
# For display option for energy cut in the partonic center of mass frame type 'update ecut'
#
#*********************************************************************
# Maximum and minimum absolute rapidity (for max, -1 means no cut) *
#*********************************************************************
  {} = eta_min_pdg ! rap cut for other particles (use pdg code). Applied on particle and anti-particle
  {} = eta_max_pdg ! rap cut for other particles (syntax e.g. {6: 2.5, 23: 5})
#*********************************************************************
# Minimum and maximum DeltaR distance *
#*********************************************************************
#*********************************************************************
# Minimum and maximum invariant mass for pairs *
#*********************************************************************
  {} = mxx_min_pdg ! min invariant mass of a pair of particles X/X~ (e.g. {6:250})
  {'default': False} = mxx_only_part_antipart ! if True the invariant mass is applied only
! to pairs of particle/antiparticle and not to pairs of the same pdg codes.
#*********************************************************************
# Inclusive cuts *
#*********************************************************************
  0.0 = ptheavy ! minimum pt for at least one heavy final state
#*********************************************************************
# maximal pdg code for quark to be considered as a light jet *
# (otherwise b cuts are applied) *
#*********************************************************************
  4 = maxjetflavor ! Maximum jet pdg code
#*********************************************************************
#
#*********************************************************************
# Store info for systematics studies *
# WARNING: Do not use for interference type of computation *
#*********************************************************************
  True = use_syst ! Enable systematics studies
#
  systematics = systematics_program ! none, systematics [python], SysCalc [depreceted, C++]
  ['--weight_info=MUR%(mur).1f_MUF%(muf).1f_PDF%(pdf)i', '--remove_wgts=".*MUR0.5_MUF2.0.*|.*MUR2.0_MUF0.5.*"', '--pdf=NNPDF30_nlo_as_0118_hessian,PDF4LHC21_40_pdfas,NNPDF30_nlo_as_0119@0,NNPDF30_nlo_as_0117@0,NNPDF30_nnlo_as_0118_hessian@0,MSHT20nnlo_as118@0,MSHT20nlo_as118@0,CT18NNLO@0,CT18NLO@0,NNPDF31_nnlo_as_0118_hessian@0,NNPDF31_nlo_as_0118_hessian@0,NNPDF40_nnlo_as_01180_hessian@0,NNPDF40_nlo_as_01180@0,CT18ANNLO@0,CT18XNNLO@0,CT18ZNNLO@0,CT14nlo@0,MMHT2014nlo68clas118@0', '--muf=0.5,1.0,2.0', '--mur=0.5,1.0,2.0', '--dyn=-1'] = systematics_arguments ! see: https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/Systematics#Systematicspythonmodule

  0 = ickkw
  on = madspin
  123456 = python_seed

Madspin card:

#************************************************************
#* MadSpin *
#* *
#* P. Artoisenet, R. Frederix, R. Rietkerk, O. Mattelaer *
#* *
#* Part of the MadGraph5_aMC@NLO Framework: *
#* The MadGraph5_aMC@NLO Development Team - Find us at *
#* https://server06.fynu.ucl.ac.be/projects/madgraph *
#* *
#************************************************************
#Some options (uncomment to apply)
set Nevents_for_max_weight 500 # number of events for the estimate of the max. weight
set BW_cut 15 # cut on how far the particle can be off-shell
set max_weight_ps_point 500 # number of PS to estimate the maximum for each event
set seed 123456 # random seed
# set spinmode none # flag to turn off spin correlations
# specify the decay for the final state particles
decay t > w+ b, w+ > all all
decay t~ > w- b~, w- > all all
# running the actual code
launch

me5_configuration Card:

################################################################################
#
# Copyright (c) 2009 The MadGraph5_aMC@NLO Development team and Contributors
#
# This file is a part of the MadGraph5_aMC@NLO project, an application which
# automatically generates Feynman diagrams and matrix elements for arbitrary
# high-energy processes in the Standard Model and beyond.
#
# It is subject to the MadGraph5_aMC@NLO license which should accompany this
# distribution.
#
# For more information, visit madgraph.phys.ucl.ac.be and amcatnlo.web.cern.ch
#
################################################################################
#
# This File contains some configuration variable for MadGraph/MadEvent
#
# Line starting by #! are comment and should remain commented
# Line starting with # should be uncommented if you want to modify the default
# value.
# Current value for all options can seen by typing "display options"
# after either ./bin/mg5_aMC or ./bin/madevent
#
# You can place this files in ~/.mg5/mg5_configuration.txt if you have more than
# one version of MG5.
#
################################################################################

#! Allow/Refuse syntax that changed meaning in version 3.1 of the code
#! (Compare to 3.0, 3.1 is back to the meaning of 2.x branch)
#!
# acknowledged_v3.1_syntax = False

#! Prefered Fortran Compiler
#! If None: try to find g77 or gfortran on the system
#!
# fortran_compiler = None
# f2py_compiler_py2 = None
# f2py_compiler_py3 = None

#! Prefered C++ Compiler
#! If None: try to find g++ or clang on the system
#!
# cpp_compiler = None

#! Prefered Text Editor
#! Default: use the shell default Editor
#! or try to find one available on the system
#! Be careful: Only shell based editor are allowed
# text_editor = None

#! Prefered WebBrower
#! If None: try to find one available on the system
# web_browser = None

#! Prefered PS viewer
#! If None: try to find one available on the system
# eps_viewer = None

#! Time allowed to answer question (if no answer takes default value)
#! 0: No time limit
# timeout = 60

#! Pythia8 path.
#! Defines the path to the pythia8 installation directory (i.e. the
#! on containing the lib, bin and include directories) .
#! If using a relative path, that starts from the mg5 directory
# pythia8_path = ./HEPTools/pythia8

#! MG5aMC_PY8_interface path
#! Defines the path of the C++ driver file that is used by MG5_aMC to
#! steer the Pythia8 shower.
#! Can be installed directly from within MG5_aMC with the following command:
#! MG5_aMC> install mg5amc_py8_interface
# mg5amc_py8_interface_path = ./HEPTools/MG5aMC_PY8_interface

#! Herwig++/Herwig7 paths
#! specify here the paths also to HepMC ant ThePEG
#! define the path to the herwig++, thepeg and hepmc directories.
#! paths can be absolute or relative from mg5 directory
#! WARNING: if Herwig7 has been installed with the bootstrap script,
#! then please set thepeg_path and hepmc_path to the same value as
#! hwpp_path
# hwpp_path =
# thepeg_path =
# hepmc_path =

#! Control when MG5 checks if he is up-to-date.
#! Enter the number of day between two check (0 means never)
#! A question is always asked before any update
auto_update = 0

################################################################################
# INFO FOR MADEVENT / aMC@NLO
################################################################################
# If this file is in a MADEVENT Template. 'main directory' is the directory
# containing the SubProcesses directory. Otherwise this is the MadGraph5_aMC@NLO main
# directory (containing the directories madgraph and Template)

#! Allow/Forbid the automatic opening of the web browser (on the status page)
#! when launching MadEvent [True/False]
 automatic_html_opening = False
#! allow notification of finished job in the notification center (Mac Only)
# notification_center = True

#! Default Running mode
#! 0: single machine/ 1: cluster / 2: multicore
 run_mode = 0

#! Cluster Type [pbs|sge|condor|lsf|ge|slurm|htcaas|htcaas2] Use for cluster run only
#! And cluster queue (or partition for slurm)
#! And size of the cluster (some part of the code can adapt splitting accordingly)
# cluster_type = condor
# cluster_queue = madgraph
# cluster_size = 150
# cluster_walltime = # time in minute for slurm and second for condor (not supported for other scheduller)

#! Path to a node directory to avoid direct writing on the central disk
#! Note that condor clusters avoid direct writing by default (therefore this
#! options does not affect condor clusters)
# cluster_temp_path = None

#! path to a node directory where local file can be found (typically pdf)
#! to avoid to send them to the node (if cluster_temp_path is on True or condor)
# cluster_local_path = None # example: /cvmfs/cp3.uclouvain.be/madgraph/

#! Cluster waiting time for status update
#! First number is when the number of waiting job is higher than the number
#! of running one (time in second). The second number is in the second case.
# cluster_status_update = 600 30

#! How to deal with failed submission (can occurs on cluster mode)
#! 0: crash, -1: print error, hangs the program up to manual instructions, N(>0) retry up to N times.
# cluster_nb_retry = 1

#! How much time to wait for the output file before resubmission/crash (filesystem can be very slow)
# cluster_retry_wait = 300

#! Nb_core to use (None = all) This is use only for multicore run
#! This correspond also to the number core used for code compilation for cluster mode
 nb_core = 1

#! Pythia-PGS Package
#! relative path start from main directory
# pythia-pgs_path = ./pythia-pgs

#! Delphes Package
#! relative path start from main directory
# delphes_path = ./Delphes

#! MadAnalysis4 fortran-based package [for basic analysis]
#! relative path start from main directory
# madanalysis_path = ./MadAnalysis

#! MadAnalysis5 python-based Package [For advanced analysis]
#! relative path start from main directory
# madanalysis5_path = ./HEPTools/madanalysis5/madanalysis5

#! ExRootAnalysis Package
#! relative path start from main directory
# exrootanalysis_path = ./ExRootAnalysis

#! TOPDRAWER PATH
#! Path to the directory containing td executables
#! relative path start from main directory
# td_path = ./td

#! lhapdf-config --can be specify differently depending of your python version
#! If None: try to find one available on the system
lhapdf_py2 = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/MCGenerators/lhapdf/6.5.4-64499/x86_64-el9-gcc13-opt/bin/lhapdf-config
lhapdf_py3 = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/MCGenerators/lhapdf/6.5.4-64499/x86_64-el9-gcc13-opt/bin/lhapdf-config

#! fastjet-config
#! If None: try to find one available on the system
fastjet = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/LCG_106a_ATLAS_1/fastjet/3.4.1/x86_64-el9-gcc13-opt/bin/fastjet-config

#! eMELA-config
#! If None: try to find one available on the system
# eMELA = eMELA-config

#! MCatNLO-utilities
#! relative path starting from main directory
# MCatNLO-utilities_path = ./MCatNLO-utilities

#! Set what OLP to use for the loop ME generation
# OLP = MadLoop

#! Set the PJFRy++ directory containing pjfry's library
#! if auto: try to find it automatically on the system (default)
#! if '' or None: disabling pjfry
#! if pjfry=/PATH/TO/pjfry/lib: use that specific installation path for PJFry++
# pjfry = auto

#! Set the Golem95 directory containing golem's library
#! It only supports version higher than 1.3.0
#! if auto: try to find it automatically on the system (default)
#! if '' or None: disabling Golem95
#! if golem=/PATH/TO/golem/lib: use that speficif installation path for Golem95
# golem = auto

#! Set the samurai directory containing samurai's library
#! It only supports version higher than 2.0.0
#! if auto: try to find it automatically on the system (default)
#! if '' or None: disabling samurai
#! if samurai=/PATH/TO/samurai/lib: use that specific installation path for samurai
# samurai = None

#! Set the Ninja directory containing ninja's library
#! if '' or None: disabling ninja
#! if ninja=/PATH/TO/ninja/lib: use that specific installation path for ninja
 ninja = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/LCG_106a_ATLAS_1/MCGenerators//gosam_contrib/2.0-779ba/x86_64-el9-gcc13-opt/lib

#! Set the COLLIER directory containing COLLIER's library
#! if '' or None: disabling COLLIER
#! if ninja=/PATH/TO/ninja/lib: use that specific installation path for COLLIER
# Note that it is necessary that you have generated a static library for COLLIER
 collier = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/LCG_106a_ATLAS_1/MCGenerators//collier/1.2.8-d0321/x86_64-el9-gcc13-opt/lib

#! Set how MadLoop dependencies (such as CutTools) should be handled
#! > external : ML5 places a link to the MG5_aMC-wide libraries
#! > internal : ML5 copies all dependencies in the output so that it is independent
#! > environment_paths : ML5 searches for the dependencies in your environment path
output_dependencies = internal

#! SysCalc PATH
#! Path to the directory containing syscalc executables
#! relative path start from main directory
 syscalc_path = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/LCG_106a_ATLAS_1/MCGenerators//syscalc/1.1.7-fe2f4/x86_64-el9-gcc13-opt

#! Absolute paths to the config script in the bin directory of PineAPPL
#! (to generate PDF-independent fast-interpolation grids).
# pineappl = pineappl

mg5_path = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/MCGenerators/madgraph5amc/3.5.5.atlas11-b51a9/x86_64-el9-gcc13-opt

# MG5 MAIN DIRECTORY
mg5_path = /cvmfs/atlas-nightlies.cern.ch/repo/sw/main_AthGeneration_x86_64-el9-gcc13-opt/sw/lcg/releases/MCGenerators/madgraph5amc/3.5.5.atlas11-b51a9/x86_64-el9-gcc13-opt

Question information

Status: Solved
For: MadGraph5_aMC@NLO
Solved by: Olivier Mattelaer
Zachary Marshall (zach-marshall) said :
#1

Ugh.

The two `check` processes belong to my two separate jobs: inside the container I can only see that job's processes, while outside the container I can see both. Fine. So it does seem to be respecting the number-of-cores setting after all.
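For reference, this is the kind of listing that shows the difference (illustrative command; the brackets in the grep pattern just keep grep itself out of the output):

    # Run the same listing on the host and inside the container: the host sees
    # the `check` workers of both jobs, while the container's PID namespace
    # only shows its own.
    ps -eo pid,user,etime,comm | grep '[c]heck'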

I'd still love some help with why `check` seems to be running for so long!

Thanks,
Zach

Olivier Mattelaer (olivier-mattelaer) said :
#2

It's just that the narrow-width approximation is not valid here, which makes it very difficult for MadSpin to recover the correct distributions with the correct off-shell effects...
(and it is not clear whether it will manage to do so in any case)

Cheers,

Olivier


Zachary Marshall (zach-marshall) said :
#3

Hi Olivier,

Thanks, but I'm not sure I follow: we use MadSpin for top decays with some regularity. Do you mean that the history (the tops coming from the eta) affects MadSpin's ability to get the distributions right? Or is there something I've missed in those cards?

Thanks again,
Zach

Best Olivier Mattelaer (olivier-mattelaer) said :
#4

Your eta is very close to threshold (the param card has m_eta = 343 GeV, just below 2*m_t = 345 GeV), which breaks the narrow-width approximation for this process.

MadSpin is known to be slow, or to fail, when the NWA is clearly not valid, so this is likely related.
Now, in principle, the partonic sqrt(s) here equals the eta invariant mass and should be preserved by MadSpin, which avoids having to correct for off-shell effects of the eta itself (that correction would likely be unreliable anyway, but avoiding it should at least have let MadSpin run "fast").

But in any case, I would not trust MadSpin for this particular case.
I would suggest using the decay-chain syntax, which is more robust:
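For illustration, a decay-chain version of the process card might look something like the following sketch (dileptonic channel only, reusing the l+, l-, vl, vl~ labels already defined in the process card above; other W decay channels would need additional `add process` lines, and this is not necessarily the exact command intended here):

    generate g g > eta > t t~, (t > w+ b, w+ > l+ vl), (t~ > w- b~, w- > l- vl~)
    output -f -nojpeg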

Zachary Marshall (zach-marshall) said :
#5

Thanks Olivier Mattelaer, that solved my question.