Eigenvalue problems with Dirichlet boundary conditions
This concerns the computation of eigenvalues, e.g., for the Laplacian with Dirichlet boundary conditions. This is tricky, since in dolfin we work with matrices whose size is the number of DOFs including those on the boundary, but the number of eigenvalues for the eigenvalue problem we are considering is only the number of DOFs excluding those on the Dirichlet boundary. Looking at the dolfin mailing list, I see that people are unclear about (1) how to do this mathematically and (2) how to implement it in dolfin. I have figured out the solution to (1), and the implementation in the case we run serially, but I don't see how to implement it in parallel.
This is important, since it is natural to expect dolfin to solve Dirichlet eigenvalue problems in parallel. If anyone has done this and could share some code, please do. Meanwhile, I'll describe what I have done. At the end is a question, which I hope someone can help me with.
(1) Suppose that A is the stiffness matrix and M is the mass matrix, so the eigenvalue problem with Neumann boundary conditions is the (A,M) generalized eigenvalue problem
A u = lambda M u,
which can be solved in FEniCS using SLEPc.
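To make the pencil concrete outside of dolfin, here is a small self-contained sketch that builds the 1D P1 stiffness and consistent mass matrices by hand (the values are the standard FEM element contributions, assumed here for illustration) and solves the Neumann generalized eigenproblem with scipy instead of SLEPc:

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 1D P1 FEM pencil on [0,1] with Neumann BCs (small matrices
# written out by hand; a stand-in for what dolfin would assemble)
n = 8                           # number of nodes
h = 1.0 / (n - 1)
main = np.full(n, 2.0); main[[0, -1]] = 1.0
A = (np.diag(main) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / h
mm = np.full(n, 4.0); mm[[0, -1]] = 2.0
M = (np.diag(mm) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) * h / 6

# Generalized symmetric eigensolve A u = lambda M u (M is SPD here)
lam, U = eigh(A, M)
# Neumann Laplacian: smallest eigenvalue is 0 (constant eigenfunction);
# the next one approximates pi^2 from the continuous problem
```

With Neumann conditions the pencil is full-rank, so all n eigenvalues exist; the Dirichlet modification below is what removes the boundary DOFs from the spectrum.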
With Dirichlet boundary conditions on some or all of the boundary, the problem should be changed by
(a) zeroing out the rows and columns of A corresponding to boundary nodes, but putting a 1 (or other nonzero) on the diagonal of such rows;
(b) zeroing out the rows and columns of M corresponding to boundary nodes, including the diagonal elements.
With these changes the eigenpairs of the new (A,M) generalized eigenvalue problem are exactly the eigenpairs of the problem with Dirichlet boundary conditions. (Because M is singular, the number of eigenvalues is less than the size of the matrices.)
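The claim can be checked on a tiny example. The sketch below (hand-written 1D P1 matrices, assumed for illustration) applies steps (a) and (b), solves the modified pencil with scipy's QZ-based solver, discards the "infinite" eigenvalues coming from the boundary rows, and confirms that the remaining ones match the Dirichlet problem restricted to the interior DOFs:

```python
import numpy as np
from scipy.linalg import eig, eigh

# Small 1D P1 FEM pencil (illustrative); A = stiffness, M = consistent mass
n = 6
h = 1.0 / (n - 1)
main = np.full(n, 2.0); main[[0, -1]] = 1.0
A = (np.diag(main) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / h
mm = np.full(n, 4.0); mm[[0, -1]] = 2.0
M = (np.diag(mm) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)) * h / 6

bdry = [0, n - 1]
A2, M2 = A.copy(), M.copy()
for i in bdry:
    A2[i, :] = 0.0; A2[:, i] = 0.0; A2[i, i] = 1.0   # step (a)
    M2[i, :] = 0.0; M2[:, i] = 0.0                   # step (b), diagonal too

# The modified pencil has n-2 finite eigenvalues; the zeroed boundary rows
# of M2 produce "infinite" eigenvalues, which we filter out
w = eig(A2, M2, right=False)
lam = np.sort(np.real(w[np.isfinite(w) & (np.abs(w) < 1e8)]))

# They match the Dirichlet problem on the interior DOFs exactly
I = np.arange(1, n - 1)
lam_ref = eigh(A[np.ix_(I, I)], M[np.ix_(I, I)], eigvals_only=True)
assert np.allclose(lam, np.sort(lam_ref))
```

The decoupling is visible in block form: after reordering DOFs as (boundary, interior), A2 = [[I, 0], [0, A_ii]] and M2 = [[0, 0], [0, M_ii]], so the finite spectrum is exactly that of (A_ii, M_ii).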
(2) Now to implement this in dolfin, we can make the matrix A as follows:
u = TrialFunction(V)
v = TestFunction(V)
a = dot(grad(u), grad(v))*dx
b = v*dx
bc = DirichletBC(V, Constant(0.0), bdry)
A, B = assemble_system(a, b, bc)
(Remarks: (1) b and B are of no further use; (2) we have to set A = down_cast(A) to convert A to a PETSc matrix before calling SLEPc.)
The problem comes with constructing the matrix M in dolfin. I don't know a way to simply set the rows and columns associated with boundary DOFs to zero without putting something on the diagonal. So what I do now is assemble M with the boundary conditions applied (which puts 1s on the boundary diagonal), compute the list of indices of the boundary DOFs, and then use:
import numpy as np

for i in bdrydoflist:            # bdrydoflist: indices of boundary DOFs
    row = np.array((i,), dtype='I')
    col = row
    val = np.array((0.,))
    M.set(val, row, col)         # overwrite the (i, i) diagonal entry with 0
M.apply('insert')                # finalize the insertions
This seems ugly, but it works fine in serial. Not surprisingly, it does not work in parallel.
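For readers without dolfin at hand, here is a scipy.sparse sketch of the end result the loop above is after: a mass matrix whose boundary rows and columns, diagonal entry included, are zero. The matrix values and the variable `bdrydoflist` are assumptions for illustration, not dolfin output:

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical 1D P1 mass matrix (5 nodes, mesh size h), in LIL format so
# that row/column assignment is cheap
n, h = 5, 0.25
M = sp.diags([np.ones(n-1)*h/6, 4.0*np.ones(n)*h/6, np.ones(n-1)*h/6],
             [-1, 0, 1], format='lil')

bdrydoflist = [0, n - 1]     # boundary DOF indices (assumed known)
for i in bdrydoflist:
    M[i, :] = 0.0            # zero the row
    M[:, i] = 0.0            # zero the column -- the diagonal entry goes too
M = M.tocsr()                # back to a solver-friendly format
```

In dolfin the difficulty is that this kind of direct row/column surgery has to go through the backend matrix, which is what breaks in parallel.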
All this leads to my question: is there a nice way to construct the matrix M described in (b) above? Or, is there a way to set the diagonal elements corresponding to boundary DOFs to 0? PETSc has MatDiagonalSet for this, but I don't know how to use it from dolfin.
Question information
- Language: English
- Status: Solved
- For: DOLFIN
- Solved by: Joachim Haga