compute flux_norm

Asked by Jui-Lin Kuo

Hi,

I'm wondering how you decide the value of flux_norm in the prompt production case.
The only information I got is in the README: flux_norm = npot * A^alpha / xsec_pp_everything(in pb).

I don't know the definition of the scaling power (alpha) or how to compute it. I've tried to search for information about it, but in vain.
For xsec_pp_everything, do we need to sum over the cross sections of all possible SM and BSM processes in pp collisions?
Is there any reference or way to obtain the values of alpha and xsec_pp_everything for a different simulation setting? (For now I use the default value of flux_norm for prompt production in SHiP.)

I'm really new to this field and MC simulation, so any suggestion or comment will be very helpful.

Many thanks,
Jui-Lin

Question information

Language: English
Status: Solved
For: maddump
Assignee: Luca
Solved by: Luca
Luca (lbuono) said :
#1

Dear Jui-Lin,

here are some comments:

- the term A^alpha takes into account nuclear effects when you consider p-A collisions. The assumption is a simple scaling of the cross section with respect to the pp case:
sigma(pA) = A^alpha * sigma(pp)
The exponent can be derived from models; naively, one can expect the cross section to grow with the surface or the volume of the nucleus (I'm not a big expert here). In the literature you should find measurements for particular nuclei, with alpha fitted to data (see the numerical illustration below).

- the total cross section pp -> anything (the BSM contributions should be negligible in the total) has been measured at several energies; see for instance the PDG compilation:
http://pdg.lbl.gov/2018/hadronic-xsections/rpp2018-pp_pbarp.pdf

Some phenomenological formulae are available, for example
https://arxiv.org/pdf/1311.3870.pdf (formula 70)
and you can also use tools such as PYTHIA to get an estimate (especially at high collider energies).
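
As a numerical illustration of the A^alpha scaling in the first point (the numbers are placeholders, not values tied to a specific measurement): taking alpha = 0.71 (a value quoted later in this thread for the inclusive cross section) and a molybdenum target (A = 96), one gets
sigma(pA) = 96^0.71 * sigma(pp) ≈ 25.6 * sigma(pp)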

I hope this answers your questions.

Cheers,
Luca

Luca (lbuono) said :
#2

Dear Jui-Lin,

I was a bit sloppy in my previous reply about the alpha exponent.
Actually, for flux_norm you should think of the factor

sigma(pA>DM)/sigma(pA>anything) = A^(alpha1-alpha2) * sigma(pp>DM)/sigma(pp>anything)

assuming in general two different scalings:

sigma(pA>DM) = A^alpha1 * sigma(pp>DM) (for DM process)
sigma(pA>anything) = A^alpha2 * sigma(pp>anything) (for the inclusive total cross section)

The sigma(pp>DM) is added automatically by MadDump (if you set the flag prod_xsec_in_norm to True), so what is left in flux_norm is

A^(alpha1-alpha2)/sigma(pp>anything) := A^alpha/sigma(pp>anything)
alpha:=alpha1-alpha2

For example, in the SHiP case we used
alpha1 = 1
alpha2 = 0.71
so alpha = 0.29
(https://arxiv.org/pdf/1504.04855.pdf, p. 168)
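
If it helps, here is a minimal sketch of how the README recipe flux_norm = npot * A^alpha / xsec_pp_everything(in pb) could be evaluated in Python (all numerical values below are placeholders, to be replaced with the ones appropriate to your setup, not official SHiP inputs):

# sketch: flux_norm = npot * A**alpha / sigma_pp_tot_pb
npot = 2.0e20                        # protons on target (placeholder)
A = 96                               # mass number of the target nucleus (e.g. molybdenum)
alpha = 0.29                         # alpha1 - alpha2, as discussed above
sigma_pp_tot_mb = 40.0               # total pp cross section in mb at the relevant energy (placeholder)
sigma_pp_tot_pb = sigma_pp_tot_mb * 1.0e9   # 1 mb = 1e9 pb
flux_norm = npot * A**alpha / sigma_pp_tot_pb
print(flux_norm)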

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#3

Dear Luca,

Many thanks for your detailed explanation and the references!
This really helps. I'll read the references and try to estimate flux_norm.

BTW, I would like to simulate MiniBooNE as well. I notice that the MiniBooNE detector is spherical (if I'm correct).
I'm wondering whether there is a possible way to address a spherical detector using the settings for the cylindrical and parallelepiped detectors in the code?

Thanks again for answering my question.

Best regards,
Jui-Lin

Luca (lbuono) said :
#4

Dear Jui-Lin,

Yep, in MadDump we do not have a spherical detector. You have different options here, depending on your accuracy goal and the effort you are willing to invest:

- you can approximate the true geometry with one of the available ones (cylinder or parallelepiped), assuming equivalent mass. You can assess the error by trying two setups, one contained inside and one enclosing the actual detector volume. (In general, my expectation is that, as far as total rates are concerned, since the detectors are quite far away, the precise details of their shape introduce only small corrections.)

- you can use an available geometry and set its parameters such that the actual detector is contained in it (you should set the same density as in the real case, so your fake detector is heavier than the true one). Then, after the interaction events are generated, you can reject events according to the actual geometry (as compared to the fake one) on an event-by-event basis. In other words, you pick an event, look at the direction of the incoming particle entering the detector and compare the maximum travel distance inside the true detector with that inside the fake one, using hit-or-miss to accept or reject the event (see the sketch after this list).
This procedure should reproduce the 'exact' geometry but with some inefficiency, since you reject events (I expect the fraction shouldn't be that large if you start with a fake detector close to the real shape).

- you can try to model your own geometry by hacking the MadDump code according to a re-weighting strategy.
This gives you full control over the geometry and maximum efficiency (the events are all generated within the acceptance).
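
As a rough illustration of the hit-or-miss rejection in the second option, here is a sketch in Python (the function names, the fake-detector shape and the approximation of its path length by the detector depth are my own choices for the example, not MadDump code):

import math, random

def sphere_chord(p, u, center, r):
    # chord length of the line through point p with unit direction u
    # inside a sphere of radius r centered at 'center'
    d = [c - q for c, q in zip(center, p)]
    t = sum(di * ui for di, ui in zip(d, u))   # closest-approach parameter
    b2 = sum(di * di for di in d) - t * t      # impact parameter squared
    if b2 >= r * r:
        return 0.0
    return 2.0 * math.sqrt(r * r - b2)

def keep_event(p, u, center, r, depth_fake):
    # hit-or-miss: accept with probability (path in the true sphere)/(path in the fake detector);
    # here the path in the fake detector is approximated by its depth (depth_fake >= 2*r)
    return random.random() < sphere_chord(p, u, center, r) / depth_fake

# usage: for each interaction event, build p (entry point) and u (unit direction)
# from the incoming DM momentum stored in the LHE file, then keep or drop the event.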

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#5

Dear Luca,

Thanks a lot for the detailed suggestions.
I'll try these strategies.

I have run the simulation for SHiP for the first time and I'm analysing the output right now.
I'm wondering what is actually in ehist.dat.
I know that columns 1 and 2 are the boundaries of each energy bin.
If I understand correctly, column 3 seems to be the number of DM particles in each energy bin from production (or is it dN/dE?) and column 4 is the error associated with column 3.
Is this normalized according to flux_norm? I'm confused because the number of final detected events (the signal type is electron scattering) seems to be way larger than the total number of DM particles obtained by summing over all energy bins.
In the end, I would like to extract the energy-differential number of DM particles (dN/dE vs. E) at production, normalized to flux_norm, so I'm wondering whether I could start by analysing ehist.dat.

Many thanks for your help.

Best regards,
Jui-Lin

Luca (lbuono) said :
#6

Dear Jui-Lin,

well, the file is organized as you guessed (bin edges, height, error).
The normalization of ehist.dat contains the flux_norm factor times everything required to give you the expected number of events (number of scattering centers in the detector, geometrical acceptance, particle multiplicity, ...) except the interaction cross section, which is given in pb. So, indeed, the number of final events can be bigger than the integrated area of that histogram (if the current model parameters give a cross section bigger than 1 pb).
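
In other words, integrating the histogram and multiplying by the interaction cross section in pb should give back the expected number of events. A minimal sketch (assuming the four whitespace-separated columns you described; sigma_int_pb is a placeholder for your DM-electron cross section in pb, and whether you sum the bin contents or the contents times the bin widths depends on whether column 3 is a count per bin or dN/dE):

import numpy as np

e_lo, e_hi, height, err = np.loadtxt("ehist.dat", unpack=True)
area = np.sum(height * (e_hi - e_lo))   # if column 3 is dN/dE; use np.sum(height) if it is a count per bin
sigma_int_pb = 1.0                      # placeholder: DM-electron cross section in pb
print("expected signal events:", area * sigma_int_pb)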

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#7

Dear Luca,

Do you mean that all of the output quantities are normalized to the case that the production cross section is 1 pb?
So if I want to see the real case, I could just rescale the quantities with the real cross section, right?

I'm also wondering how you rescale the number of signal events in the simulation to get the expected number of signal events (e.g. nevts_electron in the parameter-scan summary) for a certain value of POT.
It seems to me that nevts_electron does not depend on the value of nevents in run_card.dat, as long as nevents is large enough to reach adequate accuracy.

Many thanks,
Jui-Lin

Luca (lbuono) said :
#8

Dear Jui-Lin,

as for the first question, I meant the interaction cross section. The production cross section is already included in the normalization if you set the flag prod_xsec_in_norm to True.
It means that if you properly set the value of flux_norm and allow for the inclusion of the production cross section (which in other situations is not required or not computed within MadDump, as for the meson-decay production mechanism), the area of the ehist histogram is the number of expected signal events within the detector geometry you gave, divided by the interaction cross section in pb.

As for the second point, as you said, the actual numbers of events generated in the production and interaction stages are important for statistical reasons and do not "directly" affect the total rates.
For example, the number of DM events produced by the interactions of nPOT protons on target is given by

#DMevents = nPOT * xsec(pA > DM DM~)/xsec(pA > anything)

so that only the ratio of the total cross sections matters. The generated events are unweighted and provide you with differential distributions. Their number is then important to determine, for instance, the geometrical acceptance, i.e. the fraction of events over the total which intercept the detector. So the total number matters only for statistical purposes: if too few events enter the detector, the statistics could be too poor to obtain a decent fit for the rest of the construction.
The same occurs at the interaction stage: for the total rate, what matters is the cross section. Then, if you want to impose cuts, you will rely on the events, since they give you the differential information.

Keep in mind also that the estimate of the cross section in MadGraph relies on the number of events, in the sense that the more events you generate, the more accurate that value is.

I hope this clarifies the whole picture.

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#9

Dear Luca,

Thank you very much for the explanation.

Sorry, I'm confused about the "interaction cross section" you mention. Is it the scattering cross section between DM and electrons?
Also, regarding "the area of the ehist histogram is the number of expected signal events divided by the interaction cross section in pb": is it the number of DM particles accepted by the detector geometry, or only those accepted by the detector geometry which also cause an electron-recoil signal?

For the second question, you say the same occurs at the interaction point, but it seems that we only have xsec(DM e > DM e) at the interaction point, which does not seem similar to the production phase.
I'm still wondering how you get the nevts_electron in the scanning summary from the number of events recorded in unweighted_events_electron.lhe.
It seems to me that the number of events in unweighted_events_electron.lhe is just the number of final signal events corresponding to the nevents we set in the beginning.
After the simulation is done, there is probably some formula to convert the number of events in unweighted_events_electron.lhe into the nevts_electron we see in the scanning summary.
I tried to find the formula but in vain. Can you clarify this point, please?

Best regards,
Jui-Lin

Luca (lbuono) said :
#10

Dear Jui-Lin,

1) yes, by 'interaction cross section' I mean the DM-electron scattering in this case. The terminology comes from the fact that you usually have a two-step process, production and detection, and the latter may be either a re-scattering (interaction) within the active volume or a visible decay. So your case is production-interaction in my jargon.

2) Let me clarify the situation in your specific case. The expected number of signal events (electron signatures in the detector) after nPOT protons on target is given by the formula:

#events = nDM_in * xsec(DM e > DM e) * < n(z) / A(z) > * l
where

< XXX > := average of XXX quantity
n := number density of scattering centers (as a function of the depth z inside the detector volume)
l := depth of the detector
A := transverse area (as a function of the depth z inside the detector volume)

nDM_in := nPOT * xsec(pA -> DM DM~)/xsec(pA -> anything) * <eps_geo>
(number of DM particles entering the detector volume)
<eps_geo> := average fraction of DM events entering the detector

Note: it is important that xsec(DM e > DM e) and A(z) are given in the same units (in MadDump, A is converted into pb).

Then, ehist is normalized such that
N = #events / xsec(DM e > DM e)

The number of unweighted DM e > DM e events that you produce does not affect the above estimate; you only need the cross section. If you prefer, you can think of the unweighted events as carrying a common weight equal to the cross section divided by the total number of generated events, so that they sum up to the cross section regardless of their number, and this is the contribution they give to #events. So, if you apply cuts (for instance on the energy of the recoil electron), only a fraction of them will contribute, each with the weight I mentioned, which means the contribution will be the corresponding fraction of the total cross section.
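
To make the bookkeeping explicit, here is a minimal sketch of the formula above in Python (every number is a placeholder; the cross sections, geometry factors and cut efficiency have to be taken from your own run, and the nuclear A^(alpha1-alpha2) factor from the earlier posts is written out explicitly):

# placeholders only, to be replaced with the values of your own setup
npot          = 2.0e20      # protons on target
sigma_prod_pb = 1.0         # xsec(pp > DM DM~) in pb
sigma_tot_pb  = 4.0e10      # xsec(pp > anything) in pb
A, alpha1, alpha2 = 96, 1.0, 0.71
eps_geo       = 1.0e-3      # fraction of DM events entering the detector

# number of DM particles entering the detector volume
ndm_in = npot * A**(alpha1 - alpha2) * sigma_prod_pb / sigma_tot_pb * eps_geo

sigma_int_pb  = 0.5         # xsec(DM e > DM e) in pb
n_over_A      = 1.0e-12     # < n(z)/A(z) >, with n(z) a linear density per cm and A(z) in pb
depth_cm      = 1000.0      # detector depth l in cm

n_events = ndm_in * sigma_int_pb * n_over_A * depth_cm

# with cuts: only the fraction of unweighted DM e > DM e events passing them contributes
n_pass, n_gen = 800, 1000
n_events_after_cuts = n_events * n_pass / n_gen
print(n_events, n_events_after_cuts)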

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#11

Dear Luca,

Thanks a lot for your detailed explanation!
Now I can understand the whole picture.

Is it possible to output or derive <eps_geo> and xsec(DM e > DM e) for each simulation?
I follow the default settings for SHiP, but in the scanning summary I only get nevts_electron and xsec_prod as output.

Many thanks,
Jui-Lin

Jui-Lin Kuo (juilinkuo) said :
#12

Dear Luca,

Just to be sure, the "n (number density)" here should be the number density of electrons in my case, right?
And in "#events = nDM_in * xsec(DM e > DM e) * < n(z) / A(z) > * l", it seems that the dimensions do not match: the right-hand side has dimension [length]^{-2}, while the left-hand side has dimension [length]^0. Or is "n" a linear number density?

Best regards,
Jui-Lin

Luca (lbuono) said (best answer):
#13

Dear Jui-Lin,

yes, n(z) is a linear density! (Sorry, I forgot to specify it.)

Regarding the other information, if you run with testplots set to True, together with ehist.dat (in each run_XXX folder of the scan) you will find the file in_DM.dat. The number of lines of that file (minus one, to be precise) corresponds to the number of events within the geometrical acceptance. So you can compute <eps_geo> by taking the ratio between that number and the number of unweighted events generated in the production process.
To be more precise, if you do so you will also take into account the average DM multiplicity per production event, which in this case should be close to 2 (p p > DM DM~: in each primary event, two DM particles are generated).
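
For instance, a minimal sketch of this counting in Python (file names as above; the value of n_prod is a placeholder for the number of unweighted production events you generated):

with open("in_DM.dat") as f:
    n_in = sum(1 for _ in f) - 1   # events within the geometrical acceptance

n_prod = 10000                     # placeholder: unweighted production events
eps_geo = n_in / n_prod            # note: this ratio also absorbs the average
                                   # DM multiplicity per production event (~2 here)
print(eps_geo)
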
As for the interaction cross section, it is actually not available. The reason is that it is not a single number, since your incoming DM flux is not mono-energetic. I simplified the formula for the number of events; actually, MadDump computes a convolution between the flux and the cross section over all available energies (this is well explained in our paper https://link.springer.com/article/10.1007/JHEP05(2019)028).
BTW, if you want to compute some values of the cross section at given energies, you can always use MadGraph with the same model and parameters and generate the process DM e > DM e (be careful, however, to fix the energies and set the beam type to electron, i.e. no PDF).

Cheers,
Luca

Jui-Lin Kuo (juilinkuo) said :
#14

Dear Luca,

Now it is clear.
Thanks a lot for helping me understand the whole picture of the code.

Best regards,
Jui-Lin

Jui-Lin Kuo (juilinkuo) said :
#15

Thanks Luca, that solved my question.