Asked by Peter Christian Schmitz on 2019-06-17

Dear Developers and Siesta community,

I am having trouble converging the SCF cycle of a calculation with many atoms (>600) because of an imprecision in the restart procedure:

I save the density matrix from a previous run that reached a small dH error (though not yet small enough for convergence). On restart, the first SCF cycle always shows a much higher dH error, by 1-2 orders of magnitude, from which the code has to converge back down.

Since the cluster I am running on limits each job to 24 hours, this loss accumulates: even though every job makes progress towards convergence, the number of SCF cycles per job is not enough to recover the 1-2 orders of magnitude in dH lost at each restart.
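For reference, the restart setup described above corresponds roughly to the following fdf options (option names taken from the Siesta manual; this is a sketch, not my exact input):

```fdf
# Write the density matrix to <SystemLabel>.DM during the run
# (the Siesta default), and reuse it when the job is resubmitted:
WriteDM       .true.
DM.UseSaveDM  .true.
```
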

The error gets smaller when the mixing coefficient is decreased, but it is already at 0.01, and I doubt that decreasing it further would benefit convergence.
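The mixing settings in question are controlled by fdf options like the following (a sketch; the Pulay history length and iteration cap are illustrative values, only the 0.01 weight is from my input):

```fdf
# Pulay-accelerated density-matrix mixing with the small weight
# mentioned above; larger weights worsen the post-restart dH jump.
DM.MixingWeight   0.01
DM.NumberPulay    6      # Pulay history length (illustrative)
MaxSCFIterations  500    # as many cycles as fit in the 24 h limit
```
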

It seems impossible to restart the calculation directly from the wavefunctions of the previous run; only the density matrix is reused.
=> Could it be that the newly generated initial wavefunctions are too atomic / randomized / not good enough for such a large system, producing this behavior?
=> How can I improve the accuracy of the first SCF cycle after a restart?

I am using Siesta version r764.
Using the "expert" diagonalization mode (reducing the number of bands to be diagonalized) helped, but it does not mitigate the problem enough to reach a sufficiently small dH.

The problem occurs in both SOC and non-polarized calculations, but it is weaker in the non-polarized case, which does converge.

Thank you for your help,
with kind regards,

Peter Schmitz
PhD Student
RWTH Aachen University
