fluid flow - memory problems

Asked by Bento

Hi,

I recently installed the FEniCS package and am now trying to solve some problems with it. I want to simulate the flow of air around an obstacle. Eventually I want to include temperature and see how a hot obstacle cools down in a colder air flow. However, I am running into several problems, even with my toy example, which does not yet include temperature. I am new to CFD. To begin, I adapted a few examples of Navier-Stokes solvers that I found among the DOLFIN demos or on launchpad.net/nsbench (I tried both steady-state and transient flow examples). I use a 3D mesh.

My main problem is with mesh size. Unless I use a rather coarse mesh, I run out of memory, sometimes with a std::bad_alloc error, sometimes with the message that UMFPACK has run out of memory. It works with a mesh of 7828 cells, but not, for example, if I refine that mesh with DOLFIN's refine function.
Is there anything I can do about this? I could get more RAM, but I do not think that would ultimately solve my problem; in the end I want to use much larger meshes.

With this coarse mesh I do not get realistic results. No matter how I set my inflow velocity, the speed of the fluid in the bulk is very low. When I use a 2D version of my example with a finer mesh, I get results that make sense.

Is there a way to circumvent the memory issue?

Thank you!

Till

Question information

Language:
English
Status:
Solved
For:
DOLFIN
Assignee:
No assignee
Solved by:
Bento

Andre Massing (massing) said :
#1

To remedy the issues with the direct solver, you can switch to an iterative
solver, which consumes far less memory.
Have a look at the parameter demos or the stokes-iterative demo
in the undocumented demo section of your DOLFIN installation.
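
For illustration, a minimal sketch of the difference in the DOLFIN Python API of the time (this is not from the demos; the Poisson-style form is just a hypothetical stand-in for an assembled Navier-Stokes system):

from dolfin import *

# Hypothetical stand-in problem: any assembled system A*x = b will do.
mesh = UnitCube(8, 8, 8)
V = FunctionSpace(mesh, "CG", 1)
u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v))*dx
L = Constant(1.0)*v*dx
bc = DirichletBC(V, Constant(0.0), DomainBoundary())
A, b = assemble_system(a, L, bc)
x = Vector()

# Direct solver: computes an LU factorization whose fill-in is what
# exhausts memory on fine 3D meshes.
#   LUSolver().solve(A, x, b)

# Iterative Krylov solver: needs only the sparse matrix plus a handful
# of work vectors, so memory stays roughly proportional to mesh size.
solver = KrylovSolver("gmres", "ilu")
solver.solve(A, x, b)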

Cheers,
Andre

Anders Logg (logg) said :
#2

What is the size of the linear systems you are solving? If you use P2
elements in 3D you may very quickly get very large systems.

I suspect the problem with memory allocation is a result of using a
direct solver. UMFPACK allocates quite a lot of memory, which makes it
unsuitable for large problems.

Try an iterative method and you should see that the memory usage is
kept under control. The mesh itself should only require a minimal
amount of storage.
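
To see how quickly the system grows, you can check the dimension of the function space before solving. A minimal sketch in the Python API, assuming a Taylor-Hood (P2 velocity, P1 pressure) discretization like the one used in the Stokes demos:

from dolfin import *

# Hypothetical mesh; in practice e.g. mesh = Mesh("mesh.xml")
mesh = UnitCube(16, 16, 16)

V = VectorFunctionSpace(mesh, "CG", 2)  # P2 velocity (3 components in 3D)
Q = FunctionSpace(mesh, "CG", 1)        # P1 pressure
W = V * Q                               # mixed Taylor-Hood space

print("cells: %d" % mesh.num_cells())
print("unknowns (matrix dimension): %d" % W.dim())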

--
Anders

Bento (tbarmeier-deactivatedaccount) said :
#3

Hi,

Thank you both for your answers.

Anders:
With the working mesh, it solves a 37076 by 37076 matrix (which, however, is
singular). If I refine the mesh with the built-in refine function, I get a
279286 by 279286 matrix (and here I get the memory problems).
The mesh file itself (in XML format) is around 650 KB at the moment. I
already tried meshes of 30 MB (and then reduced the mesh density until I
arrived at a working version). Do you mean that the mesh.xml file does not
need much hard disk storage? Or that processing the mesh information should
not cause much memory usage?

Andre:
The iterative Stokes example does not work; it aborts with the message
"DOLFIN has not been configured with Trilinos or PETSc. Exiting.". I should
probably note that I use the precompiled Windows version of FEniCS.

Thank you so far

Till

Anders Logg (logg) said :
#4

300000x300000 is quite a big problem for UMFPACK. No wonder you run
out of memory.

It's unlikely that the storage of the mesh file itself will ever be a
problem (you have more hard disk space than RAM). If it gets big, you
can always gzip it. DOLFIN can read .xml.gz files.
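
For example (hypothetical filename):

from dolfin import *

# gzip mesh.xml  ->  mesh.xml.gz; DOLFIN reads it directly, no unpacking
mesh = Mesh("mesh.xml.gz")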

I meant that the storage (RAM) used by DOLFIN itself for meshes,
vectors, etc. is (or should be) quite small. Your problem is UMFPACK, or
in other words, that you use a direct solver instead of an iterative
solver.

If you want to solve any real problems with FEniCS, you need either
PETSc or Trilinos.
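
The check that prints that message sits at the top of the demo; a paraphrased sketch (the exact function name and wording may differ between versions):

from dolfin import *

# Iterative Krylov solvers need a backend that provides them.
if not has_la_backend("PETSc") and not has_la_backend("Epetra"):
    print("DOLFIN has not been configured with Trilinos or PETSc. Exiting.")
    exit()

# With PETSc available, it can be selected explicitly:
parameters["linear_algebra_backend"] = "PETSc"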

--
Anders

Bento (tbarmeier-deactivatedaccount) said :
#5

OK, then I need PETSc or Trilinos. As I said, I have the precompiled Windows
version of FEniCS. Is it possible to include these libraries in an existing
FEniCS installation? How do I do that?

Till

Johannes Ring (johannr) said :
#6

It is possible, but I wouldn't encourage you to try it ;-) I will take a shot at building Trilinos on Windows tomorrow and, if it's easy, rebuild the FEniCS package with Trilinos support.

Paul Robinson (prarobinson) said :
#7

Does this mean the future for FEniCS with Trilinos on Ubuntu is grim?

Paul Robinson (prarobinson) said :
#8

Sorry, I meant to post this under "Hierarchical wrapping troubles".

Anders Logg (logg) said :
#9

You had me quite confused for a moment.

If we can't get things to run smoothly on Ubuntu and other platforms,
we will need to hold off on the move to SWIG 2.0.

--
Anders

Bento (tbarmeier-deactivatedaccount) said :
#10

OK, thank you, Johannes. A fully featured FEniCS for Windows would be really
great!

Till

Johannes Ring (johannr) said :
#11

Unfortunately, I had no luck building Trilinos on Windows with the MinGW compilers. I also tried to build PETSc, but had no luck with that one either. This might change in the future, but for now I guess the conclusion is that the FEniCS binary package for Windows is not suitable for solving any real problems.

Andre Massing (massing) said :
#12

BTW, did you also use MinGW for the PETSc compilation?
At least according to their website, PETSc works with Cygwin:

http://www.mcs.anl.gov/petsc/petsc-as/documentation/installation.html#Windows

Maybe that would be worth a try?

--
Andre

Johannes Ring (johannr) said :
#13

Yes, I tried using Cygwin, but together with my own MinGW compilers. I ran into some strange path errors. I can try again with Cygwin-only tools and see if it makes any difference.

Bento (tbarmeier-deactivatedaccount) said :
#14

Hmm, too bad. Unfortunately my company only allows me to use Windows
machines, so I am afraid this will mean the end of my FEniCS experiments.
If you ever get Trilinos or PETSc working in FEniCS on Windows, let me
know and I will be happy to use it.

Thank you anyway for your efforts, Johannes.

Anders Logg (logg) said :
#15

You can install Ubuntu on a virtual machine running in Windows (like
VirtualBox). Then you will get all the packages you need.

--
Anders

Bento (tbarmeier-deactivatedaccount) said :
#16

No, unfortunately not. As far as I know, no virtual machines are allowed on
our computers. But I will speak to my boss; he was quite impressed by the
FEniCS examples, so maybe I can get an exception. Running FEniCS on a
virtual machine would also reduce performance, I think.

Bento (tbarmeier-deactivatedaccount) said :
#17

Hi,
it is me again. I have still not given up. I tried to build FEniCS from
the source that I downloaded here:
http://www.fenicsproject.org/pub/software/fenics/ The problem seems to be
that OpenMPI cannot be built with MinGW. This then also caused some path
errors, perhaps similar to the ones Johannes encountered.

Has anyone tried (and succeeded) to build FEniCS with Visual C++? That
would make it possible to use OpenMPI. Or is it possible to use other MPI
libraries?

Thank you!

Bento (tbarmeier-deactivatedaccount) said :
#18

Hi again!

So I finally got Xubuntu running on a virtual machine. I installed FEniCS from the FEniCS PPA repositories. Now the iterative Stokes demo works. But when I put my own mesh in, I get memory problems again. In addition, PETSc produces errors. My virtual machine has around 1.2 GB of RAM; more is not possible. Does this mean that FEniCS is in general not suitable for larger meshes?
Here is the output:

till@virtual-python:~/Desktop/sf_winshared/test$ python demo.py
Assembling linear system and applying boundary conditions...
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Out of memory. This could be due to allocating
[0]PETSC ERROR: too large an object or bleeding by not properly
[0]PETSC ERROR: destroying unneeded objects.
[0]PETSC ERROR: Memory allocated 0 Memory used by process 878977024
[0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
[0]PETSC ERROR: Memory requested 1131238140!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till Mon Feb 14 10:56:04 2011
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas-3gf --with-lapack-lib=-llapackgf-3 --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-superlu=1 --with-superlu-include=/usr/include/superlu --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscMallocAlign() line 61 in src/sys/memory/mal.c
[0]PETSC ERROR: MatSeqAIJSetPreallocation_SeqAIJ() line 2986 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: MatCreateSeqAIJ() line 2863 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Null argument, when expecting valid pointer!
[0]PETSC ERROR: Trying to zero at a null pointer!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till Mon Feb 14 10:56:04 2011
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas-3gf --with-lapack-lib=-llapackgf-3 --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-superlu=1 --with-superlu-include=/usr/include/superlu --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscMemzero() line 189 in src/sys/utils/memc.c
[0]PETSC ERROR: MatZeroEntries_SeqAIJ() line 727 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: MatZeroEntries() line 4796 in src/mat/interface/matrix.c
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Null argument, when expecting valid pointer!
[0]PETSC ERROR: Trying to zero at a null pointer!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till Mon Feb 14 10:56:04 2011
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas-3gf --with-lapack-lib=-llapackgf-3 --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-superlu=1 --with-superlu-include=/usr/include/superlu --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: PetscMemzero() line 189 in src/sys/utils/memc.c
[0]PETSC ERROR: MatZeroEntries_SeqAIJ() line 727 in src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: MatZeroEntries() line 4796 in src/mat/interface/matrix.c
Computing Dirichlet boundary values, topological search [=> ] 8.2%
Computing Dirichlet boundary values, topological search [==> ] 16.5%
Computing Dirichlet boundary values, topological search [===> ] 28.9%
Computing Dirichlet boundary values, topological search [=====> ] 41.2%
Computing Dirichlet boundary values, topological search [======> ] 53.6%
Computing Dirichlet boundary values, topological search [========> ] 65.9%
Computing Dirichlet boundary values, topological search [==========> ] 78.3%
Computing Dirichlet boundary values, topological search [===========> ] 90.7%
Computing Dirichlet boundary values, topological search [=============] 100.0%
Computing Dirichlet boundary values, topological search [=============] 100.0%
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named virtual-python by till Mon Feb 14 10:56:04 2011
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.0.0.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Thu Dec 31 09:53:25 2009
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0 --useThreads 0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas-3gf --with-lapack-lib=-llapackgf-3 --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-superlu=1 --with-superlu-include=/usr/include/superlu --with-superlu-lib=/usr/lib/libsuperlu.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-scotch=1 --with-scotch-include=/usr/include/scotch --with-scotch-lib=/usr/lib/libscotch.so
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 59.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

Anders Logg (logg) said :
#19

How large is your mesh? What kind of mesh is it? Triangles or
tetrahedra?

The storage of the mesh itself is very efficient in DOLFIN, but
depending on what else you do in your program, you may well run out of
memory. 1 GB is not that much.
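
To report the numbers, a quick check in the Python interface (hypothetical filename):

from dolfin import *

mesh = Mesh("mesh.xml")

info(mesh)  # brief summary: cell type and size
print("vertices: %d" % mesh.num_vertices())
print("cells:    %d" % mesh.num_cells())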

--
Anders

Bento (tbarmeier-deactivatedaccount) said :
#20

I made the mesh with Gmsh. It contains 227792 tetrahedra.

In the program I do nothing more than apply boundary conditions, define the
variational problem, and solve it. I took the iterative Stokes demo and just
edited the definition of the mesh and added my boundary conditions.
As in the other Stokes demos, I prepared the subdomains in a separate XML
file.

I accept that 1 GB of RAM is not much. But is there an amount of RAM that is
sufficient no matter how large the mesh is? When I first asked about my
memory problems, I understood that the iterative solver has an upper bound
on RAM consumption.

2011/2/14 Anders Logg <email address hidden>

> Your question #143963 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/143963
>
> Status: Open => Answered
>
> Anders Logg proposed the following answer:
> How large is your mesh? What kind of mesh is it? Triangles or
> tetrahedra?
>
> The storage of the mesh itself is very efficient in DOLFIN but
> depending on what other things you do in your program, you may well
> run out of memory. 1GB is not that much.
>
> --
> Anders

Revision history for this message
Anders Logg (logg) said :
#21

On Mon, Feb 14, 2011 at 12:15:50PM -0000, Till B wrote:
> Question #143963 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/143963
>
> Status: Answered => Open
>
> Till B is still having a problem:
> I accept that 1 GB of RAM is not much. But is there an amount of RAM
> that is sufficient no matter how large the mesh is?

No.

> When I first asked about my memory problems, I understood that an
> iterative solver has an upper bound on RAM consumption.

Not an upper bound. The point was that a direct solver uses much more
memory than an iterative one; an iterative solver's memory use still
grows with the size of the mesh.
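
As a very rough back-of-envelope estimate of where the memory goes
(assuming the Taylor-Hood P2-P1 elements of the stokes-iterative demo;
all numbers below are rules of thumb, not exact):

    cells = 227792            # tetrahedra, as reported above
    vertices = cells // 6     # a tet mesh has roughly 6 cells per vertex
    edges = vertices + cells  # Euler's formula, with ~2 faces per cell

    u_dofs = 3 * (vertices + edges)  # P2 vector velocity
    p_dofs = vertices                # P1 pressure
    n = u_dofs + p_dofs              # roughly 9.5e5 unknowns

    nnz_per_row = 80    # very rough for P2/P1 coupling in 3D
    bytes_per_nz = 12   # 8-byte value + 4-byte column index (AIJ format)
    mb = n * nnz_per_row * bytes_per_nz / 1e6
    print("matrix size: %.0f MB" % mb)   # on the order of 900 MB

That is on the order of the ~1.1 GB allocation PETSc reports in your
output, so the assembled matrix alone does not fit in 1.2 GB, and the
memory needed grows linearly with the number of cells.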

--
Anders


Revision history for this message
Bento (tbarmeier-deactivatedaccount) said :
#22

OK, thank you Anders. Then I will look for alternatives.

Revision history for this message
Anders Logg (logg) said :
#23

On Mon, Feb 14, 2011 at 12:35:06PM -0000, Till B wrote:
> Question #143963 on DOLFIN changed:
> https://answers.launchpad.net/dolfin/+question/143963
>
> Status: Answered => Solved
>
> Till B confirmed that the question is solved:
> OK, thank you Anders. Then I will look for alternatives.

Good luck finding an alternative that can store meshes of arbitrary
size in a small fixed amount of memory. :-)

--
Anders

Revision history for this message
Bento (tbarmeier-deactivatedaccount) said :
#24

Well, I tried a package called "Elmer", which had no memory problems with meshes a lot larger than my current example. Unfortunately, solving the heat equation with it produced wrong results.