# How to make iteration converge faster

Hi Whizard team,

I am wondering, for a well-behaved particle process, how to make the error converge faster during the integrate() step. Based on the manual, I am currently using multiple passes such as iterations = 6:8000:"gw", 10:40000:"gw", 5:80000:"gw", 5:100000. But it still gives a time estimate for generating 1000 events that is longer than I expect.

Do you know if there are other ways which lead to faster convergence?

Thanks!

## Question information

- Language: English
- Status: Solved
- For: WHIZARD
- Assignee: Juergen Reuter
- Solved by: David Wang
- Solved: 2020-08-20
- Last query: 2020-08-20
- Last reply: 2020-08-20

Juergen Reuter (j.r.reuter) said (#1):

This is too little information to answer your question without knowing the details of the process you are looking at. A few general remarks are in order:

(1) WHIZARD does not apply any default cuts, so processes with soft and/or collinear divergences will not converge.

(2) There is a default choice for the number of iterations, which for standard processes (with appropriate cuts) is usually a good guess.

(3) If you decide to specify iteration numbers on your own, note that four passes are unnecessary: normally there is one pass with several iterations for the adaptation of grids and weights ("gw"), and a second pass in which no (weight) adaptation happens. This second pass should contain fewer iterations, but more calls.
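
As a sketch of this two-pass structure in SINDARIN (the process name and call counts below are placeholders for illustration, not a recommendation for any specific process):

```sindarin
! Pass 1: several iterations that adapt grids and weights ("gw");
! Pass 2: fewer iterations, more calls, no further adaptation.
! (Placeholder numbers -- tune them to your process.)
integrate (proc) { iterations = 10:50000:"gw", 3:100000 }
```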

(4) The time estimate for the generation of a certain number of events refers to unweighted events, and the unweighting efficiency depends on the efficiency of the integration.

David Wang (david-mhw) said (#2):

Thanks! For instance, one of the examples I am running is e2 E2 -> H, H, H, Z, n2, N2 with a cut requiring an invariant mass larger than 150 GeV to exclude the on-shell Z. There is no divergence issue for this process, but it takes longer to run than I expect (maybe because of more diagrams?). So I explicitly set the iteration control, and that gives a faster time estimate when simulating 10k events, but I am not clear about the optimal way to increase the speed.

Meanwhile, a general observation is that if the relative error after the integration step is still large (> 5%), the simulation takes longer to run, whereas when the relative error is small (~ 1%), the simulation usually takes a reasonable time. So I am wondering whether you have suggestions that lead to a faster simulation. My eventual error tolerance for the cross section is about 10%, but it seems that if I loosen my restriction on the relative error, the simulation takes longer instead of shorter. So I feel there might be an optimal relative error that leads to a fast run without being too small.

David Wang (david-mhw) said (#3):

Also, I am interested in using the EPA, for the process mu, mu -> n_mu, W, mu. I added the EPA code, but the integration does not seem to converge. Do we need to write out the divergent process in order for the EPA to kick in?

Juergen Reuter (j.r.reuter) said (#4):

For the EPA, please open up a separate question and provide the exact definition of process and cuts. The process you show, µµ -> HHHZvv, is a 2 -> 6 process, so the efficiency is expected to be of the order of a percent or below. I gave you a little guidance already in my first answer, but for a high-energy 2 -> 6 process (I assume this is for multi-TeV muon colliders) you should have something like 10-12 iterations in the first pass, with maybe a few 100,000 calls each. The number of phase-space channels gives you an estimate, because you want to have 50-100 calls per channel on average. The second pass should be shorter, i.e. 3-4 iterations, but with a higher number of calls (probably 0.5M - 1M) to get enough statistics. Your observation about the unweighting is correct, though it is mostly the efficiency, not the MC statistical error (weighted by the significance), that directly gives you the unweighting efficiency. Did you keep the muon mass explicit? In any case, the VBF process you mentioned has a logarithmic enhancement log(s/m_mu^2), which can be enormous for a muon collider.
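
A hedged SINDARIN sketch of what this advice could look like (the process name `hhhzvv` is a placeholder, and the exact numbers should be tuned to your own setup and channel count):

```sindarin
! First pass: many adaptation iterations, O(100k) calls each;
! second pass: fewer iterations with more calls for statistics.
! (Illustrative numbers only, following the guidance above.)
integrate (hhhzvv) { iterations = 12:200000:"gw", 4:500000 }
```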

Simon Braß (sbrass) said (#5):

Hi David,

Let me just add a few points to Jürgen's answer:

A good metric for how well the adaptation during integration worked is the accuracy field.

The accuracy is just the integration error *without* the MC sampling factor, a = √N × ΔI, and gives you a hint of how well the adaptation approximates the integrand, i.e. matrix element and phase space.
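
To make this concrete, here is a small numerical sketch of the definition above (the call count and error are made-up illustration values, not from any actual run):

```python
import math

def accuracy(n_calls: float, rel_error: float) -> float:
    """Accuracy a = sqrt(N) * (relative) integration error,
    i.e. the error with the 1/sqrt(N) MC scaling divided out."""
    return math.sqrt(n_calls) * rel_error

# Example: 100k calls with a 1% relative error give a ~ 3.2,
# near the 2-3 range quoted below as reasonable for 2 -> 6.
print(round(accuracy(100_000, 0.01), 2))
```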

From experience, I can tell you that an accuracy around 2-3 is reasonable for a 2 → 6 process.

You can try to achieve better convergence (or adaptation, in your case), and thus a higher efficiency, by increasing the number of iterations (maybe from 6 to 8, from 8 to 10, and so on).

You should then see the accuracy fall (slowly) below 2 or 3 and maybe stabilize around a value; you can also set an accuracy goal in SINDARIN.
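
For the accuracy goal, a minimal SINDARIN fragment might look like the following (the value 2.5 is a placeholder; pick one appropriate for your process):

```sindarin
! If nonzero, iteration within a pass stops once this
! accuracy is reached. (Placeholder value for illustration.)
accuracy_goal = 2.5
```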

Sometimes increasing the iterations is not sufficient (the error and accuracy will fluctuate strongly and not converge below O(10)); then you should try to increase the number of calls per iteration.

You can try jumping from 10k calls to 100k, and then to 250k, 500k, and so on.

However, you should always watch the metrics closely: error, accuracy, and efficiency.

The (relative) error alone is not sensitive enough to the adaptation progress; you should also consult the accuracy and efficiency fields.

The latter will tell you the (predicted) unweighting efficiency, i.e. for a 10% efficiency, 10 out of 100 generated (weighted) events will be accepted as unweighted events.
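
As a small sketch of why this efficiency dominates the time estimate (the numbers are hypothetical): the expected number of weighted trial events needed scales inversely with the unweighting efficiency.

```python
def expected_trials(n_events: int, efficiency: float) -> float:
    """Expected number of weighted trial events needed to obtain
    n_events accepted (unweighted) events, for a given
    unweighting efficiency (0 < efficiency <= 1)."""
    return n_events / efficiency

# At 10% efficiency, 1000 unweighted events need ~10,000 trials;
# at 1% efficiency, the same sample needs ~100,000 trials.
print(int(expected_trials(1000, 0.10)))
```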

David Wang (david-mhw) said (#6):

Thank you both.