Question regarding gradient calculation in optimization demo
I've been studying the optimization demo and have a question about the derivation of one of its steps: I'm curious how the gradient is calculated. In general, to compute the gradient of an objective function via an adjoint, one forms some sort of product of the residual derivative with the adjoint solution. This demo seems to approach that step differently, constructing a linear form for the gradient that involves spatial derivatives of the adjoint and forward solutions, and assembling it over a function space:
# Update of parameter
gradient = -inner(grad(z), w0*grad(u))*dx
where z is the solution of the adjoint problem, u is the solution of the forward problem, and w0 is a test function.
My question is, where does this expression come from, and how is it derived?
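For context, here is my best guess so far, written as a minimal sketch rather than anything taken from the demo itself. If the forward problem is a diffusion-type equation -div(m grad(u)) = f with the coefficient m as the control (my assumption, not checked against the demo source), then differentiating the weak residual with respect to m in the direction of the test function w0, with the adjoint z substituted in place of the forward test function, seems to produce exactly the form above. The names m and f and the function spaces below are placeholders, not the demo's:

from dolfin import *

# Minimal sketch (my reading, not the demo's code): assume the forward
# problem is -div(m*grad(u)) = f with the coefficient m as the control.
mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "CG", 1)   # state space (placeholder choice)
W = FunctionSpace(mesh, "CG", 1)   # control/parameter space (placeholder choice)

u = Function(V)        # forward solution (assumed already computed)
z = Function(V)        # adjoint solution (assumed already computed)
m = Function(W)        # control field (hypothetical name)
f = Constant(1.0)      # placeholder source term
w0 = TestFunction(W)

# Weak residual of the assumed forward problem, with the adjoint z
# substituted into the test-function slot:
R_z = m*inner(grad(u), grad(z))*dx - f*z*dx

# The adjoint identity says the objective gradient with respect to m is
# minus the derivative of this residual in the direction of w0.
dRdm = derivative(R_z, m, w0)   # equals w0*inner(grad(u), grad(z))*dx
g = assemble(dRdm)
g *= -1.0                        # same as assemble(-inner(grad(z), w0*grad(u))*dx)

If that reading is correct, the spatial derivatives of z and u appear simply because the residual derivative with respect to the coefficient is w0*inner(grad(u), grad(z))*dx, and the gradient is minus that. But I may be missing something, hence the question.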
Perhaps not specifically relevant to a support forum, but I figured that the authors of the demo could probably offer more insight than other sources.
Thanks,
Doug.
Question information
- Language: English
- Status: Answered
- For: DOLFIN
- Assignee: No assignee