Problem with the lepton pT cut when launching p p > e+ ve mu+ mu- a [QCD]

Asked by Huanfeng Cheng on 2018-11-14

Dear MG experts,

I ran into a problem when launching p p > e+ ve mu+ mu- a [QCD] at fixed LO (and also at fNLO) when I impose a stricter pT_l cut (e.g. pT_l > 20 GeV). It runs fine with pT_l > 5 GeV (even with pT_l > 18 GeV), but it crashes with pT_l > 19 GeV and reports:

"Exception: program /Users/huanfeng/Research/Automated_Tools/MG5_aMC/bin/wpza_decay_qcd/SubProcesses/P0_udx_epvemupmuma/ajob1 2 born 0 0 launch ends with non zero status: 1. Stop all computation"

Are there any hints as to why this happens?

I attach my debug.log below. I implemented the invariant mass of the final states as the dynamical scale in setscales.f and defined some separation cuts between the photon and the leptons (for LO) in cuts.f. But even if I disable these and use the defaults, the problem still occurs.

debug.log:
————————

launch auto
Traceback (most recent call last):
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/extended_cmd.py", line 1501, in onecmd
    return self.onecmd_orig(line, **opt)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/extended_cmd.py", line 1450, in onecmd_orig
    return func(arg, **opt)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/amcatnlo_run_interface.py", line 1664, in do_launch
    evt_file = self.run(mode, options)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/amcatnlo_run_interface.py", line 1828, in run
    self.run_all_jobs(jobs_to_run,integration_step)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/amcatnlo_run_interface.py", line 2101, in run_all_jobs
    self.wait_for_complete(run_type)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/interface/amcatnlo_run_interface.py", line 4636, in wait_for_complete
    self.cluster.wait(self.me_dir, update_status)
  File "/Users/huanfeng/Research/Automated_Tools/MG5_aMC/madgraph/various/cluster.py", line 844, in wait
    raise Exception, self.fail_msg
Exception: program /Users/huanfeng/Research/Automated_Tools/MG5_aMC/bin/wpza_decay_qcd/SubProcesses/P0_udx_epvemupmuma/ajob1 2 born 0 0 launch ends with non zero status: 1. Stop all computation
Value of current Options:
              text_editor : None
      notification_center : True
                    pjfry : None
       cluster_local_path : None
  default_unset_couplings : 99
       group_subprocesses : Auto
ignore_six_quark_processes : False
    loop_optimized_output : True
    cluster_status_update : (600, 30)
         fortran_compiler : None
               hepmc_path : None
                  collier : /Users/huanfeng/Research/Automated_Tools/MG5_aMC/HEPTools/lib
              auto_update : 7
             pythia8_path : None
                hwpp_path : None
low_mem_multicore_nlo_generation : False
                    golem : None
          pythia-pgs_path : None
                  td_path : None
             delphes_path : None
              thepeg_path : None
             cluster_type : condor
        madanalysis5_path : None
      exrootanalysis_path : None
                      OLP : MadLoop
                 applgrid : applgrid-config
               eps_viewer : None
                  fastjet : None
                 run_mode : 2
              web_browser : None
   automatic_html_opening : False
        cluster_temp_path : None
             cluster_size : 100
            cluster_queue : None
             syscalc_path : None
         madanalysis_path : None
                   lhapdf : /Users/huanfeng/Research/Automated_Tools/LHAPDF_6_1/bin/lhapdf-config
             stdout_level : 20
                  nb_core : 4
            f2py_compiler : None
                    ninja : /Users/huanfeng/Research/Automated_Tools/MG5_aMC/HEPTools/lib
                  amcfast : amcfast-config
       cluster_retry_wait : 300
      output_dependencies : external
           crash_on_error : False
mg5amc_py8_interface_path : None
         loop_color_flows : False
                  samurai : None
         cluster_nb_retry : 1
                 mg5_path : /Users/huanfeng/Research/Automated_Tools/MG5_aMC
                  timeout : 60
                    gauge : unitary
      complex_mass_scheme : False
             cpp_compiler : None
   max_npoint_for_channel : 0
#************************************************************
#*                    MadGraph5_aMC@NLO                     *
#*                                                          *
#*          VERSION 2.6.3.2             2018-06-22          *
#*                                                          *
#*   The MadGraph5_aMC@NLO Development Team - Find us at    *
#*    https://server06.fynu.ucl.ac.be/projects/madgraph     *
#*                                                          *
#************************************************************
#*                                                          *
#*            Command File for MadGraph5_aMC@NLO            *
#*                                                          *
#*              run as ./bin/mg5_aMC filename               *
#*                                                          *
#************************************************************
set default_unset_couplings 99
set group_subprocesses Auto
set ignore_six_quark_processes False
set loop_optimized_output True
set loop_color_flows False
set gauge unitary
set complex_mass_scheme False
set max_npoint_for_channel 0
import model sm
define p = g u c d s u~ c~ d~ s~
define j = g u c d s u~ c~ d~ s~
define l+ = e+ mu+
define l- = e- mu-
define vl = ve vm vt
define vl~ = ve~ vm~ vt~
import model loop_sm
generate p p > e+ ve mu+ mu- a [QCD]
output wpza_decay_qcd
######################################################################
## PARAM_CARD AUTOMATICALY GENERATED BY MG5 ####
######################################################################
###################################
## INFORMATION FOR LOOP
###################################
BLOCK LOOP #
      1 9.118800e+01 # mu_r
###################################
## INFORMATION FOR MASS
###################################
BLOCK MASS #
      5 0.000000e+00 # mb
      6 1.732000e+02 # mt
      15 1.777000e+00 # mta
      23 9.118760e+01 # mz
      25 1.250000e+02 # mh
      1 0.000000e+00 # d : 0.0
      2 0.000000e+00 # u : 0.0
      3 0.000000e+00 # s : 0.0
      4 0.000000e+00 # c : 0.0
      11 0.000000e+00 # e- : 0.0
      12 0.000000e+00 # ve : 0.0
      13 0.000000e+00 # mu- : 0.0
      14 0.000000e+00 # vm : 0.0
      16 0.000000e+00 # vt : 0.0
      21 0.000000e+00 # g : 0.0
      22 0.000000e+00 # a : 0.0
      24 8.039800e+01 # w+ : cmath.sqrt(mz__exp__2/2. + cmath.sqrt(mz__exp__4/4. - (aew*cmath.pi*mz__exp__2)/(gf*sqrt__2)))
###################################
## INFORMATION FOR SMINPUTS
###################################
BLOCK SMINPUTS #
      1 1.323407e+02 # aewm1
      2 1.166370e-05 # gf
      3 1.180000e-01 # as
###################################
## INFORMATION FOR YUKAWA
###################################
BLOCK YUKAWA #
      5 0.000000e+00 # ymb
      6 0.000000e+00 # ymt
      15 1.777000e+00 # ymtau
###################################
## INFORMATION FOR DECAY
###################################
DECAY 6 1.361764e+00 # wt
DECAY 23 2.508420e+00 # wz
DECAY 24 2.097673e+00 # ww
DECAY 25 6.382339e-03 # wh
DECAY 1 0.000000e+00 # d : 0.0
DECAY 2 0.000000e+00 # u : 0.0
DECAY 3 0.000000e+00 # s : 0.0
DECAY 4 0.000000e+00 # c : 0.0
DECAY 5 0.000000e+00 # b : 0.0
DECAY 11 0.000000e+00 # e- : 0.0
DECAY 12 0.000000e+00 # ve : 0.0
DECAY 13 0.000000e+00 # mu- : 0.0
DECAY 14 0.000000e+00 # vm : 0.0
DECAY 15 0.000000e+00 # ta- : 0.0
DECAY 16 0.000000e+00 # vt : 0.0
DECAY 21 0.000000e+00 # g : 0.0
DECAY 22 0.000000e+00 # a : 0.0
###################################
## INFORMATION FOR QNUMBERS 82
###################################
BLOCK QNUMBERS 82 # gh
      1 0 # 3 times electric charge
      2 1 # number of spin states (2s+1)
      3 8 # colour rep (1: singlet, 3: triplet, 8: octet)
      4 1 # particle/antiparticle distinction (0=own anti)

#***********************************************************************
# MadGraph5_aMC@NLO *
# *
# run_card.dat aMC@NLO *
# *
# This file is used to set the parameters of the run. *
# *
# Some notation/conventions: *
# *
# Lines starting with a hash (#) are info or comments *
# *
# mind the format: value = variable ! comment *
# *
# Some of the values of variables can be list. These can either be *
# comma or space separated. *
# *
# To display additional parameter, you can use the command: *
# update to_full *
#***********************************************************************
#
#*******************
# Running parameters
#*******************
#
#***********************************************************************
# Tag name for the run (one word) *
#***********************************************************************
  test_pTl = run_tag ! name of the run
#***********************************************************************
# Number of LHE events (and their normalization) and the required *
# (relative) accuracy on the Xsec. *
# These values are ignored for fixed order runs *
#***********************************************************************
 10000 = nevents ! Number of unweighted events requested
 -1.0 = req_acc ! Required accuracy (-1=auto determined from nevents)
 -1 = nevt_job! Max number of events per job in event generation.
                 ! (-1= no split).
#***********************************************************************
# Normalize the weights of LHE events such that they sum or average to *
# the total cross section *
#***********************************************************************
 average = event_norm ! valid settings: average, sum, bias
#***********************************************************************
# Number of points per itegration channel (ignored for aMC@NLO runs) *
#***********************************************************************
 0.01 = req_acc_FO ! Required accuracy (-1=ignored, and use the
                     ! number of points and iter. below)
# These numbers are ignored except if req_acc_FO is equal to -1
 5000 = npoints_FO_grid ! number of points to setup grids
 4 = niters_FO_grid ! number of iter. to setup grids
 10000 = npoints_FO ! number of points to compute Xsec
 6 = niters_FO ! number of iter. to compute Xsec
#***********************************************************************
# Random number seed *
#***********************************************************************
 0 = iseed ! rnd seed (0=assigned automatically=default))
#***********************************************************************
# Collider type and energy *
#***********************************************************************
 1 = lpp1 ! beam 1 type (0 = no PDF)
 1 = lpp2 ! beam 2 type (0 = no PDF)
 7000.0 = ebeam1 ! beam 1 energy in GeV
 7000.0 = ebeam2 ! beam 2 energy in GeV
#***********************************************************************
# PDF choice: this automatically fixes also alpha_s(MZ) and its evol. *
#***********************************************************************
 lhapdf = pdlabel ! PDF set
 10042 = lhaid ! If pdlabel=lhapdf, this is the lhapdf number. Only
              ! numbers for central PDF sets are allowed. Can be a list;
              ! PDF sets beyond the first are included via reweighting.
#***********************************************************************
# Include the NLO Monte Carlo subtr. terms for the following parton *
# shower (HERWIG6 | HERWIGPP | PYTHIA6Q | PYTHIA6PT | PYTHIA8) *
# WARNING: PYTHIA6PT works only for processes without FSR!!!! *
#***********************************************************************
  HERWIG6 = parton_shower
  1.0 = shower_scale_factor ! multiply default shower starting
                                  ! scale by this factor
#***********************************************************************
# Renormalization and factorization scales *
# (Default functional form for the non-fixed scales is the sum of *
# the transverse masses divided by two of all final state particles *
# and partons. This can be changed in SubProcesses/set_scales.f or via *
# dynamical_scale_choice option) *
#***********************************************************************
 False = fixed_ren_scale ! if .true. use fixed ren scale
 False = fixed_fac_scale ! if .true. use fixed fac scale
 91.118 = muR_ref_fixed ! fixed ren reference scale
 91.118 = muF_ref_fixed ! fixed fact reference scale
 10 = dynamical_scale_choice ! Choose one (or more) of the predefined
           ! dynamical choices. Can be a list; scale choices beyond the
           ! first are included via reweighting
 1.0 = muR_over_ref ! ratio of current muR over reference muR
 1.0 = muF_over_ref ! ratio of current muF over reference muF
#***********************************************************************
# Reweight variables for scale dependence and PDF uncertainty *
#***********************************************************************
 1.0, 2.0, 0.5 = rw_rscale ! muR factors to be included by reweighting
 1.0, 2.0, 0.5 = rw_fscale ! muF factors to be included by reweighting
 True = reweight_scale ! Reweight to get scale variation using the
            ! rw_rscale and rw_fscale factors. Should be a list of
            ! booleans of equal length to dynamical_scale_choice to
            ! specify for which choice to include scale dependence.
 False = reweight_PDF ! Reweight to get PDF uncertainty. Should be a
            ! list booleans of equal length to lhaid to specify for
            ! which PDF set to include the uncertainties.
#***********************************************************************
# Store reweight information in the LHE file for off-line model- *
# parameter reweighting at NLO+PS accuracy *
#***********************************************************************
 False = store_rwgt_info ! Store info for reweighting in LHE file
#***********************************************************************
# ickkw parameter: *
# 0: No merging *
# 3: FxFx Merging - WARNING! Applies merging only at the hard-event *
# level. After showering an MLM-type merging should be applied as *
# well. See http://amcatnlo.cern.ch/FxFx_merging.htm for details. *
# 4: UNLOPS merging (with pythia8 only). No interface from within *
# MG5_aMC available, but available in Pythia8. *
# -1: NNLL+NLO jet-veto computation. See arxiv:1412.8408 [hep-ph]. *
#***********************************************************************
 0 = ickkw
#***********************************************************************
#
#***********************************************************************
# BW cutoff (M+/-bwcutoff*Gamma). Determines which resonances are *
# written in the LHE event file *
#***********************************************************************
 15.0 = bwcutoff
#***********************************************************************
# Cuts on the jets. Jet clustering is performed by FastJet. *
# - When matching to a parton shower, these generation cuts should be *
# considerably softer than the analysis cuts. *
# - More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
  1.0 = jetalgo ! FastJet jet algorithm (1=kT, 0=C/A, -1=anti-kT)
  0.8 = jetradius ! The radius parameter for the jet algorithm
 20.0 = ptj ! Min jet transverse momentum
  4.5 = etaj ! Max jet abs(pseudo-rap) (a value .lt.0 means no cut)
#***********************************************************************
# Cuts on the charged leptons (e+, e-, mu+, mu-, tau+ and tau-) *
# More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
 20.0 = ptl ! Min lepton transverse momentum
  2.5 = etal ! Max lepton abs(pseudo-rap) (a value .lt.0 means no cut)
  0.3 = drll ! Min distance between opposite sign lepton pairs
  0.0 = drll_sf ! Min distance between opp. sign same-flavor lepton pairs
 15.0 = mll ! Min inv. mass of all opposite sign lepton pairs
  0.0 = mll_sf ! Min inv. mass of all opp. sign same-flavor lepton pairs
#***********************************************************************
# Photon-isolation cuts, according to hep-ph/9801442. When ptgmin=0, *
# all the other parameters are ignored. *
# More specific cuts can be specified in SubProcesses/cuts.f *
#***********************************************************************
 10.0 = ptgmin ! Min photon transverse momentum
  2.5 = etagamma ! Max photon abs(pseudo-rap)
  0.7 = R0gamma ! Radius of isolation code
  1.0 = xn ! n parameter of eq.(3.4) in hep-ph/9801442
  1.0 = epsgamma ! epsilon_gamma parameter of eq.(3.4) in hep-ph/9801442
 False = isoEM ! isolate photons from EM energy (photons and leptons)
#***********************************************************************
# Cuts associated to MASSIVE particles identified by their PDG codes. *
# All cuts are applied to both particles and anti-particles, so use *
# POSITIVE PDG CODES only. Example of the syntax is {6 : 100} or *
# {6:100, 25:200} for multiple particles *
#***********************************************************************
  {} = pt_min_pdg ! Min pT for a massive particle
  {} = pt_max_pdg ! Max pT for a massive particle
  {} = mxx_min_pdg ! inv. mass for any pair of (anti)particles
#***********************************************************************
# For aMCfast+APPLGRID use in PDF fitting (http://amcfast.hepforge.org)*
#***********************************************************************
 0 = iappl ! aMCfast switch (0=OFF, 1=prepare grids, 2=fill grids)
#***********************************************************************

——————

Thank you in advance!
Huanfeng

Question information

Language: English
Status: Solved
For: MadGraph5_aMC@NLO
Assignee: marco zaro
Solved by: Rikkert Frederix
Solved: 2018-11-20
Last query: 2018-11-20
Last reply: 2018-11-20

Hi,

I cannot reproduce this crash with the latest version (and with the cut at 19 GeV).

Can you check the logs in the subdirectories of
 /Users/huanfeng/Research/Automated_Tools/MG5_aMC/bin/wpza_decay_qcd/SubProcesses/P0_udx_epvemupmuma/

(you should have subdirectories like born_G8 and born_G9, and both should contain a log file stating the reason for the crash)

Cheers,

Olivier

Huanfeng Cheng (huanfeng) said : #2

Hi Olivier,

Thank you for the reply.

I tried again at fLO with pT_l > 20 GeV without modifying cuts.f; the program didn't crash, but it took very long to run and got stuck at 'refined integration step 2'.

Then I added a DeltaR cut between the charged leptons and the photon to cuts.f (which I think is necessary to make the integration converge efficiently; it is attached below), still with pT_l > 20 GeV, and the program crashed. In SubProcesses/P0_udx_epvemupmuma/born_G2/log.txt, it says:

'ERROR: INTEGRAL APPEARS TO BE ZERO.
TRIED 100122 PS POINTS AND ONLY 20 GAVE A NON-ZERO INTEGRAND.'

If I instead cut at pT_l > 18 GeV, it works with no problem: xs = 0.377 +/- 0.003 fb. Cross-checking with VBFNLO gives xs = 0.376 +/- 0.001 fb, so I am convinced that at fLO my cuts should be fine.

In case my own cut is what causes the crash when pT_l > 20 GeV, I attach it here:
_______

c$$$ cut on separation between charged leptons and photons, 'dr_lp'
c (this fragment goes inside passcuts_user in SubProcesses/cuts.f;
c  the logical arrays is_a_lm, is_a_lp and is_a_ph of size nexternal
c  are the ones already declared in the standard cuts.f)
      do i=1,nexternal
c find negatively charged leptons (e-, mu-, ta-)
         if(istatus(i).eq.1 .and.
     & (ipdg(i).eq.11 .or. ipdg(i).eq.13 .or. ipdg(i).eq.15)) then
            is_a_lm(i)=.true.
         else
            is_a_lm(i)=.false.
         endif
c find positively charged leptons (e+, mu+, ta+)
         if(istatus(i).eq.1 .and.
     & (ipdg(i).eq.-11 .or. ipdg(i).eq.-13 .or. ipdg(i).eq.-15)) then
            is_a_lp(i)=.true.
         else
            is_a_lp(i)=.false.
         endif
c find photons
         if (istatus(i).eq.1 .and. ipdg(i).eq.22) then
            is_a_ph(i)=.true.
         else
            is_a_ph(i)=.false.
         endif
      enddo
c cut on DeltaR between each charged lepton and each photon, 'dr_lp':
c reject the phase-space point if DeltaR(l,gamma) < 0.4 (R2_04 returns
c DeltaR squared, hence the comparison with 0.4**2)
      do i=nincoming+1,nexternal
         if (is_a_lp(i).or.is_a_lm(i)) then
            do j=nincoming+1,nexternal
               if (is_a_ph(j)) then
                  if (R2_04(p(0,i),p(0,j)).lt.0.4**2) then
                     passcuts_user=.false.
                     return
                  endif
               endif
            enddo
         endif
      enddo
_______

Cheers,
Huanfeng

marco zaro (marco-zaro) said : #3

Hi,
I see you have set the bottom mass to zero.
While this may be unrelated to the rest, it can cause problems, since you generated the process with a model that has a massive b quark (loop_sm).
Can you please retry either with a finite value of mb or by generating with loop_sm-no_b_mass?

Thanks,

Marco

Huanfeng Cheng (huanfeng) said : #4

Hi Marco,

I tried both: with loop_sm I set a non-zero mb, and I also used the loop_sm-no_b_mass model.
Both work when pT_l > 18 GeV (the cross sections agree), but crash (or rather, get stuck) when pT_l > 20 GeV.

The error messages are the same as those I mentioned above.

Cheers,
Huanfeng

marco zaro (marco-zaro) said : #5

Dear Huanfeng,
thanks, we are investigating this issue.
We will let you know as soon as possible.

cheers,

Marco

Huanfeng Cheng (huanfeng) said : #7

Dear Marco,

Thank you. Just one more thing: I'm working on Mac OS X 10.13, and I'm not sure whether this issue could have any OS dependence (I hope not).

Cheers,
Huanfeng

Best Rikkert Frederix (frederix) said : #8

Dear Huanfeng,

I looked into the reason for the problem, and it does not seem to be directly related to a bug. Rather, a large inefficiency in the phase-space generation leads to this behaviour. Let me explain.

The phase-space generation in MG5_aMC is based on a multi-channel approach, where the channels are defined according to the Feynman diagrams contributing to the process. In other words, the Feynman diagrams define the topologies according to which the phase space is set up. The integration channels can then be integrated mostly independently of each other, resulting in an efficient parallelisation of the computation of the cross section. Since the MG5_aMC code is process-independent, there are various checks to make sure that what is computed is consistent. One of these checks is that the contribution from every channel is non-zero, as it must be. Now, it turns out that for some of the channels of your process the cross section is reported as zero: the code tried 100000 phase-space points and only a very few gave a non-zero value for the matrix elements. This results in a full stop, since it typically signals a problem with the model, the cuts, or something similar.

However, in your case there is no such problem with the model or the cuts. Rather, the interplay between the topology of the Feynman diagram that defines the channel and the cuts on the transverse momenta of the final-state leptons and photon means that (almost) all generated phase-space points fail the generation cuts for the problematic integration-channel topologies. In other words, (nearly) all of the 100000 phase-space points generated were cut away by the cuts specified in the run_card.

The problematic topologies are the ones where the two initial-state quarks fuse into a W boson, which decays to a lepton-neutrino pair; the lepton then radiates a photon that splits into a lepton pair, and one of these leptons radiates the final-state photon. Since the phase-space generation first generates all the s-channel invariant masses related to this topology, the Breit-Wigner peak associated with the W boson sets the typical scale for this process to a small value, and all subsequent s-channel invariant masses must be smaller still. This leaves very little phase space for generating enough transverse momentum for the leptons and the photon to pass the generation cuts, since most of the s-channel invariant masses are rather small.

There are a couple of ways out.

1. The easy way is to relax the generation cuts a little and then apply analysis cuts that are much more stringent. For this, you'll have to write your analysis in the FixedOrderAnalysis folder and link it in through the FOanalysis_card.dat. The easiest route is simply to adapt the FixedOrderAnalysis/analysis_HwU_template.f template analysis file, which is already the default analysis linked to the code. This is, of course, a bit of work, since you'll have to write your cuts into this file by hand; a sketch is given after this list.

2. Another solution is to increase the 'max_points' data value from 100000 to a much larger value (e.g. 1 million or even 10 million). This lets the code try much longer to find phase-space points with a non-zero integrand, increasing the chance that enough of them pass the checks. Unfortunately, there is no guarantee that this will always work: if the phase space and cuts become complicated enough, there might always be channels for which no points pass the cuts. Once a few are found, the adaptive integration grids make sure that the code adapts correctly. The 'max_points' data value can be found in the SubProcesses/mint.inc file; see the sketch after this list. Make sure to delete all the SubProcesses/P*/*.o object files after changing this parameter, before starting a new run.

3. Instead of using the req_acc_FO parameter in the run_card to specify the accuracy, set it to -1 and specify by hand the number of phase-space points and iterations for which you want to run the code (see the run_card snippet after this list). This might be quite a bit slower and much less efficient, but you should not run into these kinds of problems.
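
For option 1, here is a minimal sketch of such an analysis-level cut, assuming the analysis_fill(p,istatus,ipdg,wgts,ibody) interface of the analysis_HwU_template.f template and that nexternal/nincoming are available through the usual nexternal.inc include. The names ptl_cut and ptlep are introduced here purely for illustration, and the declarations belong at the top of the routine:
_______

c illustrative analysis-level pT cut on the charged leptons (e, mu):
c returning before any HwU_fill call discards this phase-space point
c from all histograms (place this at the start of analysis_fill)
      double precision ptl_cut, ptlep
      parameter (ptl_cut=20d0)
      integer i
      do i=nincoming+1,nexternal
         if (istatus(i).eq.1 .and. (abs(ipdg(i)).eq.11 .or.
     &        abs(ipdg(i)).eq.13)) then
c transverse momentum from the x and y momentum components
            ptlep=dsqrt(p(1,i)**2+p(2,i)**2)
            if (ptlep.lt.ptl_cut) return
         endif
      enddo
_______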
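
For option 2, a rough sketch of the change in mint.inc; the exact form of the declaration may differ between versions, so treat this as an assumption to be checked against your file:
_______

c In SubProcesses/mint.inc: raise the number of trial phase-space
c points before the 'INTEGRAL APPEARS TO BE ZERO' check triggers
c (the default value is 100000)
      data max_points /1000000/
_______

Afterwards, remove the stale object files (e.g. rm SubProcesses/P*/*.o) so that the change is compiled in.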
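
For option 3, the corresponding run_card settings look like the following (parameter names as in the run_card quoted above; the point and iteration counts are illustrative and can be increased until the desired accuracy is reached):
_______

 -1 = req_acc_FO         ! -1: use the numbers of points/iterations below
 5000 = npoints_FO_grid  ! number of points to setup grids
 4 = niters_FO_grid      ! number of iter. to setup grids
 100000 = npoints_FO     ! number of points to compute Xsec
 6 = niters_FO           ! number of iter. to compute Xsec
_______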

Best regards,
Rikkert

Huanfeng Cheng (huanfeng) said : #9

Dear Rikkert,

Thank you very much for the detailed explanation; I think I understand the cause now.

I will choose the first option to handle the problem, which should be much more efficient, since I will in any case analyze the distributions using the analysis card.

Just one more question about FixedOrderAnalysis: am I correct that there is no way to apply the cut once at the very beginning, so that the same cut has to be added repeatedly for each analysis (histogram)?

Thanks again,
Huanfeng

Huanfeng Cheng (huanfeng) said : #10

Thanks Rikkert Frederix, that solved my question.

Huanfeng Cheng (huanfeng) said : #11

Sorry, forget my last question; I found the answer in one of the templates.

Thanks Rikkert, Marco, and Olivier.
Huanfeng