Moving mesh boundary

Asked by Xiaoxian Liu

Hi all,

I simply wanted to move the boundary of a mesh. Below is the code from "demo_ale.py":

from dolfin import *
mesh = UnitSquare(20, 20)
boundary = BoundaryMesh(mesh)
for x in boundary.coordinates():
    x[0] *= 3.0
    x[1] += 0.1*sin(5.0*x[0])
mesh.move(boundary)

The error message points to a reported bug in "Harmonic Smoothing":
https://bugs.launchpad.net/dolfin/+bug/1047641

I could use MeshEditor to modify the mesh boundary vertex by vertex, but that would distort the mesh near the boundary.

Thank you!
Xiaoxian

Question information

Language:
English
Status:
Solved
For:
DOLFIN
Assignee:
No assignee
Solved by:
Jan Blechta
Revision history for this message
Best Jan Blechta (blechta) said :
#1

Read Anders' comment in the bug report!

Revision history for this message
Xiaoxian Liu (liuxiaox) said :
#2

Yes, I should have read it more carefully... I thought that was an unsolved problem

Revision history for this message
Xiaoxian Liu (liuxiaox) said :
#3

Thanks Jan Blechta, that solved my question.

Revision history for this message
Jan Blechta (blechta) said :
#4

On Sun, 24 Feb 2013 16:11:16 -0000
Xiaoxian Liu <email address hidden> wrote:
> Yes, I should have read it more carefully... I thought that was an
> unsolved problem
>

It is unsolved... this is only a workaround, running only serially.

But it is quite straightforward to write your own harmonic smoother that runs
in parallel. You just solve the Laplace equation with appropriate bcs
on the space FunctionSpace(mesh, 'CG', 1) for each spatial direction. Then
you create a vector function out of these components, for example (in 2D):

        # Vector-valued Expression built from two scalar components ux, uy
        class Disp(Expression):
            def __init__(self, ux, uy):
                self.ux = ux
                self.uy = uy
                Expression.__init__(self)
            def value_shape(self):
                return (2,)
            def eval(self, values, x):
                # write each scalar component into its slot of 'values'
                self.ux.eval(values[0:1], x)
                self.uy.eval(values[1:2], x)

and use this expression to move the mesh. This has the advantage that you can
specify your own bcs for the displacement, and you can even use another PDE
for the smoothing.
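
For example, a minimal sketch of that workflow (the boundary displacements
here are just hypothetical examples, and the exact move() overload depends
on the DOLFIN version) could be:

from dolfin import *

mesh = UnitSquareMesh(20, 20)
V = FunctionSpace(mesh, 'CG', 1)

u = TrialFunction(V)
v = TestFunction(V)
a = inner(grad(u), grad(v))*dx
L = Constant(0.0)*v*dx

# hypothetical prescribed boundary displacement, one scalar per direction
bcx = DirichletBC(V, Expression('2.0*x[0]'), 'on_boundary')
bcy = DirichletBC(V, Expression('0.1*sin(5.0*x[0])'), 'on_boundary')

ux, uy = Function(V), Function(V)
solve(a == L, ux, bcx)  # harmonic extension of the x-displacement
solve(a == L, uy, bcy)  # harmonic extension of the y-displacement

disp = Disp(ux, uy)     # the vector expression defined above
mesh.move(disp)         # or ALE.move(mesh, disp), depending on the version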

Jan

Revision history for this message
Jan Blechta (blechta) said :
#5

For performance you can rewrite this in C++ and also overload
compute_vertex_values. But the latter didn't work for me.

Revision history for this message
Anders Logg (logg) said :
#6

Jan, if you have the time, could you consider writing a patch for the
(C++) smoothers in DOLFIN that works in parallel?

--
Anders


Revision history for this message
Jan Blechta (blechta) said :
#7

On Wed, 27 Feb 2013 20:01:04 -0000
Anders Logg <email address hidden> wrote:
> Jan, if you have the time, could you consider writing a patch for the
> (C++) smoothers in DOLFIN that works in parallel?
>

Do you mean fixing the existing smoother (bug 1047641), and for now letting
users write their own smoothers that allow setting custom boundary
conditions and/or a custom PDE? I could look at the first option. The second
one requires some thinking about the interface, and about whether it would
still be able to save some resources by avoiding evaluations of the
resulting displacement through direct array manipulations.

Revision history for this message
Jan Blechta (blechta) said :
#8

On Wed, 27 Feb 2013 20:01:04 -0000
Anders Logg <email address hidden> wrote:
> Jan, if you have the time, could you consider writing a patch for the
> (C++) smoothers in DOLFIN that works in parallel?
>

One more question: does HarmonicSmoothing::move not work in parallel
because of bug 1047641, or did it never work in parallel?

Revision history for this message
Garth Wells (garth-wells) said :
#9

On 27 February 2013 21:15, Jan Blechta
<email address hidden> wrote:
> One more question: does HarmonicSmoothing::move not work in parallel
> because of bug 1047641, or did it never work in parallel?
>

Because of the issue that led to bug #1047641, it never worked in parallel.

The new function GenericDofMap::set should make #1047641 an easy fix.

Garth


Revision history for this message
Anders Logg (logg) said :
#10

On Wed, Feb 27, 2013 at 08:45:53PM -0000, Jan Blechta wrote:
> Do you mean fixing the existing smoother (bug 1047641), and for now letting
> users write their own smoothers that allow setting custom boundary
> conditions and/or a custom PDE? I could look at the first option. The second
> one requires some thinking about the interface, and about whether it would
> still be able to save some resources by avoiding evaluations of the
> resulting displacement through direct array manipulations.

Yes, first option. Sounds like you know how to fix it.

--
Anders

Revision history for this message
Jan Blechta (blechta) said :
#11

On Thu, 28 Feb 2013 02:50:55 -0000
Anders Logg <email address hidden> wrote:
> Yes, first option. Sounds like you know how to fix it.
>

I figured out how to make it run with reorder_dofs_serial set to True, but I
can't figure out how to make it run in parallel. One problem is that
when we get the solution of the Laplace problem, the solution vector has
fewer entries than there are vertex coordinates. In other words, each dof
is present only on one partition. It seems to me that explicit MPI
communication has to be introduced here.

Perhaps I'm wrong - is there a facility to gather all vertex values from
a distributed Vector?

Revision history for this message
Jan Blechta (blechta) said :
#12

On Wed, 27 Feb 2013 21:41:04 -0000
Garth Wells <email address hidden> wrote:
> Because of the issue that led to bug #1047641, it never worked in
> parallel.
>
> The new function GenericDofMap::set should make #1047641 an easy fix.

It seems to me that the GenericDofMap::set_x function would be great for
setting the bcs on the rhs of the Laplace problem. But it doesn't work in a
way that is helpful here. If V is a CG1 function space on mesh and
new_boundary is the boundary mesh of mesh (with updated coordinates), then

Vector b(mesh.num_vertices());
V->dofmap()->set_x(b, 1.0, 0, new_boundary);

segfaults in the UFC code, for example in
poisson2d_dofmap_0::tabulate_coordinates at line 1136:
coordinates[2][0] = x[2][0];

This is the file dolfin/ale/Poisson2D.h at revno 7465 if you would like
to investigate.

Jan


Revision history for this message
Anders Logg (logg) said :
#13

On Thu, Feb 28, 2013 at 05:21:11PM -0000, Jan Blechta wrote:
> I figured out how to make it run with reorder_dofs_serial set to True, but
> I can't figure out how to make it run in parallel. One problem is that
> when we get the solution of the Laplace problem, the solution vector has
> fewer entries than there are vertex coordinates. In other words, each dof
> is present only on one partition. It seems to me that explicit MPI
> communication has to be introduced here.
>
> Perhaps I'm wrong - is there a facility to gather all vertex values from
> a distributed Vector?

Is it possible to just use Function::compute_vertex_values? That
function will get the ghost dofs needed for all vertices.
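
For one component that could look roughly like this (just a sketch assuming
the Python API, where compute_vertex_values gives one value per local vertex
and mesh.coordinates() is writable in place):

from dolfin import *

mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, 'CG', 1)
ux = Function(V)  # e.g. the x-component of the smoothed displacement

vals = ux.compute_vertex_values(mesh)  # one value per local vertex
coords = mesh.coordinates()
coords[:, 0] += vals                   # shift the x-coordinates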

--
Anders

Revision history for this message
Jan Blechta (blechta) said :
#14

On Thu, 28 Feb 2013 20:41:18 -0000
Anders Logg <email address hidden> wrote:
> Is it possible to just use Function::compute_vertex_values? That
> function will get the ghost dofs needed for all vertices.
>

I think so, but I'm not sure whether it is efficiently implemented for a
CG1 function. Do you know?

Jan

Revision history for this message
Anders Logg (logg) said :
#15

On Thu, Feb 28, 2013 at 08:55:56PM -0000, Jan Blechta wrote:
> I think so, but I'm not sure whether it is efficiently implemented for a
> CG1 function. Do you know?

The value will be computed as a linear combination of the element
basis functions at each vertex and all except one of those will be
zero. So it will not be optimal, but I assume it's a small overhead
compared to solving the linear system in the first place.

--
Anders

Revision history for this message
Garth Wells (garth-wells) said :
#16

On 28 February 2013 20:41, Anders Logg
<email address hidden> wrote:
> On Thu, Feb 28, 2013 at 05:21:11PM -0000, Jan Blechta wrote:
>> I figured out how to make it run with reorder_dofs_serial set to True, but
>> I can't figure out how to make it run in parallel. One problem is that
>> when we get the solution of the Laplace problem, the solution vector has
>> fewer entries than there are vertex coordinates. In other words, each dof
>> is present only on one partition.

Not quite. The dof map has all dofs that a process can contribute to.

To get the values from the vector that are shared with but not owned by
the current process, you can access the ghost values.

Garth


Revision history for this message
Jan Blechta (blechta) said :
#17

On Thu, 28 Feb 2013 21:41:16 -0000
Garth Wells <email address hidden> wrote:
> Not quite. The dof map has all dofs that a process can contribute to.
>
> To get the values from the vector that are shared with but not owned by
> the current process, you can access the ghost values.
>
OK, but how? How do I get its local dof index?
DofMap::vertex_to_dof_map() returns vertex indices only for non-ghost
dofs.

Jan


Revision history for this message
Jan Blechta (blechta) said :
#18


Actually, it would be better if DofMap::vertex_to_dof_map() also contained
the ghost dofs, as it could then be inverted - you could easily obtain a
dof-to-vertex mapping.
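
For example, assuming vertex_to_dof_map comes back as a numpy index array
covering all local dofs (which is exactly what it does not do for ghost
dofs today, hence the request), the inversion is just a couple of lines:

import numpy as np
from dolfin import *

mesh = UnitSquareMesh(10, 10)
V = FunctionSpace(mesh, 'CG', 1)

v2d = V.dofmap().vertex_to_dof_map(mesh)  # vertex index -> local dof index
d2v = np.empty(len(v2d), dtype='intp')    # local dof index -> vertex index
d2v[v2d] = np.arange(len(v2d))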

Jan


Revision history for this message
Jan Blechta (blechta) said :
#19

On Thu, 28 Feb 2013 21:01:14 -0000
Anders Logg <email address hidden> wrote:
> The value will be computed as a linear combination of the element
> basis functions at each vertex and all except one of those will be
> zero. So it will not be optimal, but I assume it's a small overhead
> compared to solving the linear system in the first place.
>

Hi Anders,
it's done, and now it runs in parallel as well. I've added an optional
std::string argument to HarmonicSmoothing::move (and the calling functions)
that lets the user choose whether to solve for the coordinates themselves
or for the displacement. Later I realized that the two are equivalent given
exact arithmetic and an exact solver :) Although I haven't done an
analysis, it seems that the displacement mode is numerically better in the
typical case, as the solution and rhs vectors lack the additional linear
component, which is much larger than the displacement itself. The unit test
also gives better numbers for the displacement mode.

What do you think? Should I keep both options, or only the displacement
mode to keep the interface simple?

Jan

Revision history for this message
Jan Blechta (blechta) said :
#20


Anders, check
https://code.launchpad.net/~blechta/dolfin/harmonic_smoothing/+merge/151370

Revision history for this message
Marco Morandini (morandini) said :
#21

> What do you think? Should I keep both options, or only the displacement
> mode to keep the interface simple?

As a user I'd suggest keeping both.
Some time ago I spent more than a week debugging a fluid-structure
simulation. The structural part was handled by an external multibody solver.
It turned out that, by imposing incremental boundary displacements,
rounding errors accumulated, so that we ended up with negative volumes
for some very small elements at the boundary.
The mesh was not deformed with HarmonicSmoothing, but I fear that
something like what I observed might happen again.

Marco

Revision history for this message
Anders Logg (logg) said :
#22

On Mon, Mar 04, 2013 at 08:51:02AM -0000, Marco Morandini wrote:

> As a user I'd suggest keeping both.
> Some time ago I spent more than a week debugging a fluid-structure
> simulation. The structural part was handled by an external multibody solver.
> It turned out that, by imposing incremental boundary displacements,
> rounding errors accumulated, so that we ended up with negative volumes
> for some very small elements at the boundary.
> The mesh was not deformed with HarmonicSmoothing, but I fear that
> something like what I observed might happen again.

Is it the same issue? I assume in both cases that the input to the
solver is the location of the boundary, not its displacement, and that
the solver will subtract to get the displacements to solve for (but I
haven't looked at the code yet). Then there shouldn't be any problems
with round-off errors.

--
Anders

Revision history for this message
Marco Morandini (morandini) said :
#23

> Is it the same issue? I assume in both cases that the input to the
> solver is the location of the boundary, not its displacement,

If this is the case then you're right, there should be no problems.

Apologies for the noise.

Marco


Revision history for this message
Jan Blechta (blechta) said :
#24

On Mon, 04 Mar 2013 09:35:45 -0000
Anders Logg <email address hidden> wrote:
> Is it the same issue? I assume in both cases that the input to the
> solver is the location of the boundary, not its displacement, and that
> the solver will subtract to get the displacements to solve for (but I
> haven't looked at the code yet). Then there shouldn't be any problems
> with round-off errors.

You're right - the input is the coordinates of the BoundaryMesh. But it
turns out that for UnitSquareMesh(10, 10) and quite large displacements,
ranging from 0 to 0.5 for different vertices, the displacement approach
yields an error in the boundary coordinates that is about 1-2 orders of
magnitude smaller. This error is also quite dependent on the number of
processes. If users report that in some cases the accumulated error gets
too large, the iterative solver precision thresholds could be exposed to
users.

I cancelled the merge proposal because I'm getting rid of
Function::compute_vertex_values(). It will probably be done today or
tomorrow.

Jan


Revision history for this message
Anders Logg (logg) said :
#25

On Mon, Mar 04, 2013 at 12:15:45PM -0000, Jan Blechta wrote:

> You're right - the input is the coordinates of the BoundaryMesh. But it
> turns out that for UnitSquareMesh(10, 10) and quite large displacements,
> ranging from 0 to 0.5 for different vertices, the displacement approach
> yields an error in the boundary coordinates that is about 1-2 orders of
> magnitude smaller. This error is also quite dependent on the number of
> processes. If users report that in some cases the accumulated error gets
> too large, the iterative solver precision thresholds could be exposed to
> users.

So the conclusion is to keep only the displacement version?

> I cancelled the merge proposal because I'm getting rid of
> Function::compute_vertex_values(). It will probably be done today or
> tomorrow.

ok.

--
Anders

Revision history for this message
Jan Blechta (blechta) said :
#26

On Mon, 04 Mar 2013 15:45:51 -0000
Anders Logg <email address hidden> wrote:
> So the conclusion is to keep only the displacement version?

I'm not sure. What do good numerical analysts think? :)

Jan
