# Convergence test doesn't run for an Initial Condition Parameter

Asked by emmanuel.malherbe

Hi,

I'm struggling to run the convergence test in the case of an initial condition parameter. I used tutorial4.py as a starting example; instead of having nu as a parameter, I have ic, the initial state. I always get the same error: w{0} has no entry (w{0} is the name of my initial parameter).
What can I do to correct my code?
Thank you for any advice.

Emmanuel

```python
from dolfin import *
from dolfin_adjoint import *

n = 30
mesh = UnitInterval(n)
V = FunctionSpace(mesh, "CG", 2)

def main(ic):
    nu = Constant(0.0001)
    u = Function(ic)
    u_next = Function(V)
    v = TestFunction(V)

    timestep = Constant(1.0/n)

    F = ((u_next - u)/timestep*v
         + u_next*grad(u_next)*v
         + nu*grad(u_next)*grad(v))*dx
    bc = DirichletBC(V, 0.0, "on_boundary")

    t = 0.0
    end = 0.2
    while (t <= end):
        solve(F == 0, u_next, bc)
        u.assign(u_next)
        t += float(timestep)

    return u

if __name__ == "__main__":
    ic = project(Expression("sin(2*pi*x[0])"), V)
    u = main(ic)

    dtm = TimeMeasure()
    J = Functional(inner(u, u)*dx*dtm[FINISH_TIME])
    dJdnu = compute_gradient(J, InitialConditionParameter(ic))

    Jnu = assemble(inner(u, u)*dx)  # current value

    parameters["adjoint"]["stop_annotating"] = True  # stop registering equations

    def Jhat(nu):  # the functional as a pure function of nu
        u = main(nu)
        return assemble(inner(u, u)*dx)

    conv_rate = taylor_test(Jhat, InitialConditionParameter(ic), Jnu, dJdnu)
```

## Question information

Language: English
Status: Solved
For: dolfin-adjoint
Assignee: No assignee
Solved by: Patrick Farrell

This question was originally filed as bug #1043231.

 Revision history for this message emmanuel.malherbe (emmanuel-malherbe) said on 2012-08-29: #1

(Same text and code as the question above.)

 Revision history for this message Patrick Farrell (pefarrell) said on 2012-09-04: #2

Hi Emmanuel,

There is a minor (but subtle, and admittedly not obvious) mismatch between your code and dolfin-adjoint, but fortunately it's straightforward to fix.

The problem is in the line

u = Function(ic)

At the time that is executed, ic is unknown to dolfin-adjoint -- it hasn't appeared in any equations yet. So because it has never seen ic before, it doesn't annotate the assignment u <- ic. That's why when you attempt to take the derivative with respect to ic later, it doesn't know anything about it. [We might consider changing this default behaviour, in light of your experience -- I agree now that it is not user-friendly.]

So, the fix that is easiest to apply is to change that line to

u = Function(ic, annotate=True)

[as of the latest trunk, r415], which forces dolfin-adjoint to annotate the assignment u <- ic anyway.

There's one other minor problem: compute_gradient tries by default to be as memory-efficient as possible, forgetting as much of the forward trajectory as it can as it goes backwards -- including the value of the initial condition. That's fine if you're just computing the gradient, but if you later want to run a Taylor test, it will complain that it doesn't have the necessary variables. The easy fix is to add forget=False to the call to compute_gradient, or to pass value=ic to taylor_test.

Here's the code that passes the Taylor test:

```python
from dolfin import *
from dolfin_adjoint import *

n = 30
mesh = UnitInterval(n)
V = FunctionSpace(mesh, "CG", 2)

def main(ic):
    nu = Constant(0.0001)
    u = Function(ic, annotate=True)
    u_next = Function(V)
    v = TestFunction(V)

    timestep = Constant(1.0/n)

    F = ((u_next - u)/timestep*v
         + u_next*grad(u_next)*v
         + nu*grad(u_next)*grad(v))*dx
    bc = DirichletBC(V, 0.0, "on_boundary")

    t = 0.0
    end = 0.2
    while (t <= end):
        solve(F == 0, u_next, bc)
        u.assign(u_next)
        t += float(timestep)

    return u

if __name__ == "__main__":
    ic = project(Expression("sin(2*pi*x[0])"), V)
    u = main(ic)

    dtm = TimeMeasure()
    J = Functional(inner(u, u)*dx*dtm[FINISH_TIME])
    dJdic = compute_gradient(J, InitialConditionParameter(ic), forget=False)

    Jic = assemble(inner(u, u)*dx)  # current value

    parameters["adjoint"]["stop_annotating"] = True  # stop registering equations

    def Jhat(ic):  # the functional as a pure function of ic
        u = main(ic)
        return assemble(inner(u, u)*dx)

    conv_rate = taylor_test(Jhat, InitialConditionParameter(ic), Jic, dJdic)
```

 Revision history for this message Patrick Farrell (pefarrell) said on 2012-09-04: #3

I should mention that there's another, better way of fixing your problem, which is to give labels to the Functions. See the example near the bottom of

http://dolfin-adjoint.org/documentation/misc.html#naming-functions-and-constants

The trick is to change

u = Function(ic)

to

u = Function(ic, name="Velocity")

Now you can refer to u as "Velocity" in dolfin-adjoint, and it will understand you. So later on you can do

dJdic = compute_gradient(..., InitialConditionParameter("Velocity"), ...)

This is much cleaner, I think, especially when your code is distributed in multiple modules.
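Conceptually, referring to functions by label works like a registry that maps string names to the recorded objects, so separate modules only need to agree on the label rather than share the Function object itself. The following is a rough pure-Python sketch of that idea, not dolfin-adjoint's actual internals; all names here are illustrative:

```python
# Minimal name-registry sketch: record objects under string labels so other
# modules can refer to them by name instead of holding a direct reference.
_registry = {}

def record(obj, name):
    # "Annotation" step: remember the object under its label.
    _registry[name] = obj
    return obj

def lookup(name):
    # Later (e.g. when computing a gradient), resolve the label back
    # to the recorded object; unknown labels have no entry.
    if name not in _registry:
        raise KeyError("%s has no entry" % name)
    return _registry[name]

# Module A records a value under the label "Velocity".
record([0.0, 1.0, 2.0], "Velocity")

# Module B refers to it purely by name.
print(lookup("Velocity"))
```

The "w{0} has no entry" error from the original question is the same failure mode: a lookup for a variable that was never recorded.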

Here's another working example which does it this way:

```python
from dolfin import *
from dolfin_adjoint import *

n = 30
mesh = UnitInterval(n)
V = FunctionSpace(mesh, "CG", 2)

def main(ic):
    nu = Constant(0.0001)
    u = Function(ic, name="Velocity")
    u_next = Function(V)
    v = TestFunction(V)

    timestep = Constant(1.0/n)

    F = ((u_next - u)/timestep*v
         + u_next*grad(u_next)*v
         + nu*grad(u_next)*grad(v))*dx
    bc = DirichletBC(V, 0.0, "on_boundary")

    t = 0.0
    end = 0.2
    while (t <= end):
        solve(F == 0, u_next, bc)
        u.assign(u_next)
        t += float(timestep)

    return u

if __name__ == "__main__":
    ic = project(Expression("sin(2*pi*x[0])"), V)
    u = main(ic)

    dtm = TimeMeasure()
    J = Functional(inner(u, u)*dx*dtm[FINISH_TIME])
    dJdic = compute_gradient(J, InitialConditionParameter("Velocity"), forget=False)

    Jic = assemble(inner(u, u)*dx)  # current value

    parameters["adjoint"]["stop_annotating"] = True  # stop registering equations

    def Jhat(ic):  # the functional as a pure function of ic
        u = main(ic)
        return assemble(inner(u, u)*dx)

    conv_rate = taylor_test(Jhat, InitialConditionParameter("Velocity"), Jic, dJdic)
```
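For readers new to dolfin-adjoint, it may help to see what taylor_test actually verifies. The sketch below is plain Python (no dolfin), using a toy quadratic functional; the names J, dJ, and rates are illustrative, not dolfin-adjoint API. The Taylor remainder |J(m + h*dm) - J(m) - h*dJ.dm| should shrink at second order in h when the gradient is correct, which is exactly the convergence rate taylor_test reports:

```python
# Toy illustration of a Taylor (convergence) test for a gradient.
import math

def J(m):
    return sum(x**2 for x in m)  # simple quadratic functional

def dJ(m):
    return [2*x for x in m]      # its exact gradient

m = [0.3, -0.7, 1.1]             # "current" parameter value
dm = [1.0, 1.0, 1.0]             # perturbation direction
Jm = J(m)
grad_dot_dm = sum(g*d for g, d in zip(dJ(m), dm))

# Taylor remainders |J(m + h*dm) - J(m) - h*grad.dm| for shrinking h.
remainders = []
for h in [1e-1, 1e-2, 1e-3]:
    perturbed = [x + h*d for x, d in zip(m, dm)]
    remainders.append(abs(J(perturbed) - Jm - h*grad_dot_dm))

# Observed convergence rate between successive step sizes;
# a rate close to 2 indicates the gradient is correct.
rates = [math.log(remainders[i]/remainders[i+1], 10)
         for i in range(len(remainders) - 1)]
print(rates)
```

If the gradient were wrong (or, as in the original question, the parameter was never recorded), the remainder would only shrink at first order and the test would fail.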

 Revision history for this message emmanuel.malherbe (emmanuel-malherbe) said on 2012-09-04: #4

Thanks Patrick Farrell, that solved my question.
