Matrix-valued function

Asked by Andreas Naumann

Hi all,

I am trying to solve a non-linear Stokes problem with a matrix-valued factor, similar to

(inner(mu(inner(gamma, gamma))*grad(u), v) - inner(p, div(v)))*dx + .... = ....

where gamma is given by a formula similar to:

gamma = gamma + dt * (gamma - grad(u))

How should I implement this problem with DOLFIN in Python? C++ is an option too :)

The simple way,

 (u, p) = TrialFunctions(...)
 gamma = grad(u)
 a = (inner(mu(inner(gamma, gamma))*grad(u), v) - inner(p, div(v)))*dx

 gamma = gamma + dt * (gamma - grad(u))

will not work. Something like

a.gamma = gamma + dt * (gamma - grad(u))

does not work either, because the form does not have an attribute gamma.

 swap = gamma + dt * (gamma - grad(u))
 gamma.assign(swap)

does not work either, because the gradient (an Expression? some UFL type?) does not have an assign method.

Is there any way to solve such a problem with DOLFIN? If yes, what does it look like?
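For reference, here is a minimal sketch of one possible approach (assuming legacy DOLFIN/FEniCS; the mesh, the viscosity law mu, the forcing f, and the boundary condition are all placeholder assumptions, and the form is written as a standard weak Stokes form with grad(v) as the test gradient): represent gamma as a Function on a TensorFunctionSpace, use it as a coefficient in the form, and update it after each solve by projecting the update expression back onto that space.

```python
from dolfin import *

mesh = UnitSquareMesh(16, 16)

# Taylor-Hood spaces for (u, p); a tensor space for the history variable gamma
V = VectorFunctionSpace(mesh, "CG", 2)
Q = FunctionSpace(mesh, "CG", 1)
W = V * Q
T = TensorFunctionSpace(mesh, "CG", 1)

gamma = Function(T)          # gamma is a coefficient, not an unknown; starts at zero
(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
f = Constant((0.0, 0.0))     # placeholder forcing
dt = 0.01

def mu(s):
    return 1.0/(1.0 + s)     # hypothetical viscosity law, stands in for the real mu

# Stokes form with the matrix-valued factor mu(gamma:gamma)
a = (inner(mu(inner(gamma, gamma))*grad(u), grad(v))
     - p*div(v) - q*div(u))*dx
L = inner(f, v)*dx

w = Function(W)
bc = DirichletBC(W.sub(0), Constant((1.0, 0.0)), "on_boundary")  # placeholder BC

for n in range(10):
    solve(a == L, w, bc)
    u_sol, _ = w.split()
    # update the history tensor: project the UFL expression back onto T,
    # then overwrite gamma's coefficient vector
    gamma.assign(project(gamma + dt*(gamma - grad(u_sol)), T))
```

Because gamma appears in the form as a Function, reassembling the form at the next solve picks up the updated values automatically.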

Question information

Language: English
Status: Answered
For: DOLFIN
Assignee: No assignee
Revision history for this message
Andreas Naumann (andreas-naumann) said :
#1

I solved the problem with the help of TensorFunctionSpace and a Function from this space. Now I am having some problems with memory: it seems to me that the command

swap = gamma + dt * (gamma - grad(u))

and the subsequent projection onto the tensor space increase the memory usage without freeing the memory already used.

Is this a bug, or simply improper DOLFIN usage?
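If repeated calls to project seem to grow memory, one workaround (a sketch under assumed legacy DOLFIN API, not a confirmed fix for the reported behaviour) is to set up the projection by hand: assemble the mass matrix on the tensor space once, and each step assemble only the right-hand side and solve into a preallocated Function, so no new matrices or Functions are created inside the loop.

```python
from dolfin import *

mesh = UnitSquareMesh(16, 16)
T = TensorFunctionSpace(mesh, "CG", 1)
V = VectorFunctionSpace(mesh, "CG", 2)

gamma = Function(T)
u_sol = Function(V)          # placeholder for the current velocity
dt = 0.01

tau = TrialFunction(T)
s = TestFunction(T)
M = assemble(inner(tau, s)*dx)    # projection mass matrix, assembled once

gamma_new = Function(T)           # preallocated target, reused every step
for n in range(10):
    # ... solve the Stokes system for u_sol here ...
    b = assemble(inner(gamma + dt*(gamma - grad(u_sol)), s)*dx)
    solve(M, gamma_new.vector(), b)   # L2 projection onto T
    gamma.assign(gamma_new)
```

This reproduces what project does internally, but keeps the matrix and the target Function alive across iterations instead of rebuilding them each time.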

Johan Hake (johan-hake) said :
#2

Andreas!

You need to provide a minimal but runnable script that reproduces your
problem. It is difficult to answer your question otherwise.

Johan

On Wednesday May 4 2011 04:08:06 Andreas Naumann wrote:
> Question #155668 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/155668
