Callback for forward solution at iterate N, timestep K?

Asked by Martin Sandve Alnæs

I'd like to investigate how the forward solution converges (or not) toward the optimal state, by e.g. storing a selection of snapshots at all iterates x a subset of the timesteps, or storing/printing some norms and functionals computed from the intermediate forward solutions at a subset of the timesteps. Would it be possible to register a callback from which I can get access to these intermediate function values for such customized "postprocessing"? This callback would be called at the end of each timestep of each replay in an optimization run, with functions available by name through a dict or some interface.

Question information

Language: English
Project: dolfin-adjoint
Assignee: none
Solved by: Martin Sandve Alnæs
Patrick Farrell (pefarrell) said :

Hmm. It is possible to supply a callback to the ReducedFunctional, but it isn't powerful enough to do what you want.

I implemented an additional replay_cb callback to ReducedFunctional in the branch


Use it like:

def replay_cb(var, data, m):
    '''var is a libadjoint.Variable, data is a dolfin.Function,
    m is whatever your parameter is.'''
    pass

Jhat = ReducedFunctional(..., replay_cb=replay_cb)
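To illustrate the calling pattern, here is a minimal, self-contained sketch of a replay_cb that records a norm of the forward state at each timestep. The `Variable` and `FakeFunction` classes below are hypothetical stand-ins for libadjoint.Variable and dolfin.Function so the example runs without dolfin; in real use you would call dolfin's norm on the supplied function instead.

```python
class Variable(object):
    """Stand-in for libadjoint.Variable: just a name and a timestep."""
    def __init__(self, name, timestep):
        self.name = name
        self.timestep = timestep

class FakeFunction(object):
    """Stand-in for dolfin.Function, exposing only a norm."""
    def __init__(self, value):
        self.value = value
    def norm(self):
        return abs(self.value)

history = {}  # (variable name, timestep) -> norm of the replayed state

def replay_cb(var, data, m):
    # In real use: history[(var.name, var.timestep)] = norm(data)
    history[(var.name, var.timestep)] = data.norm()

# Jhat = ReducedFunctional(J, m, replay_cb=replay_cb)  # real hook-up

# Simulated replay over two timesteps:
for k, v in enumerate([1.5, -2.0]):
    replay_cb(Variable("u", k), FakeFunction(v), m=None)
```

After the simulated replay, `history` maps ("u", 0) to 1.5 and ("u", 1) to 2.0; in a real run each replay of the annotated forward model would fill it the same way.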

Take a look and see if that satisfies your needs. If so, we'll merge. If not, we'll iterate.

Martin Sandve Alnæs (martinal) said :

Thanks, this works well. I guess we can't get access to the reason for the call, since it's triggered by the scipy optimization algorithm. Plugging in a function like this can be useful for debugging:

def replay_cb(var, func, m):
    print "/// Replay timestep %d: %s (norm %g)" % (var.timestep, str(var), norm(func))
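Beyond printing, the same callback can serve the convergence study from the original question: accumulate a norm per watched timestep on every replay, giving one entry per optimization iterate. A self-contained sketch, where `FakeFunction` and `Variable` are hypothetical stubs for the dolfin/libadjoint objects and `watched`/`norm_history` are illustrative names, not dolfin-adjoint API:

```python
from collections import defaultdict

class FakeFunction(object):
    """Stand-in for dolfin.Function, exposing only a norm."""
    def __init__(self, value):
        self.value = value
    def norm(self):
        return abs(self.value)

class Variable(object):
    """Stand-in for libadjoint.Variable: just a timestep."""
    def __init__(self, timestep):
        self.timestep = timestep

watched = {0, 2}                  # subset of timesteps to monitor
norm_history = defaultdict(list)  # timestep -> [norm at each replay]

def replay_cb(var, func, m):
    # No iterate number is passed in (the replay is triggered by the
    # scipy loop), so each replay simply appends one more entry.
    if var.timestep in watched:
        norm_history[var.timestep].append(func.norm())

# Simulate two replays (two optimization iterates), three timesteps each:
for values in ([1.0, 2.0, 3.0], [0.5, 1.5, 2.5]):
    for k, v in enumerate(values):
        replay_cb(Variable(k), FakeFunction(v), m=None)
```

Plotting `norm_history[k]` against the iterate index then shows how the forward solution at timestep k evolves toward the optimal state; snapshots could be stored the same way by copying the function instead of taking its norm.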

Patrick Farrell (pefarrell) said :

Great, I've merged it into the dolfin-adjoint trunk.