"spectral_transform" and "spectral_shift" parameters in eigenvalue solvers

Asked by Houdong Hu on 2013-03-25

I want to calculate the "smallest real" eigenvalue of a PDE, and I am using the "krylov-schur" method. The smallest eigenvalue is about -0.5.

1. Without setting "spectral_transform" and "spectral_shift", the number of iterations is about 10000.
2. With "spectral_transform" set and "spectral_shift" about -0.7, the solver converges in about 1 iteration.

What do the "spectral_transform" and "spectral_shift" parameters actually do? How can they increase the speed so much? Is it some method similar to the inverse power method that improves the convergence rate?
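For context, this kind of dramatic speedup is what a shift-and-invert spectral transformation produces: the solver iterates with (A - sigma*I)^-1 instead of A, which maps the eigenvalue closest to the shift sigma to a large, well-separated eigenvalue that converges almost immediately. A minimal NumPy sketch of the idea (the matrix and the shift -0.7 are made up for illustration, not taken from the actual PDE):

```python
import numpy as np

# Hypothetical symmetric matrix whose smallest eigenvalue is near -0.5.
A = np.diag([-0.5, 1.0, 2.0, 3.0]) + 0.01 * np.ones((4, 4))
A = (A + A.T) / 2  # keep it exactly symmetric

sigma = -0.7  # shift chosen close to the target eigenvalue
n = A.shape[0]

# Shift-and-invert iteration: repeatedly solve (A - sigma*I) y = x.
# An eigenvalue lambda of A becomes 1/(lambda - sigma) for the shifted
# inverse operator, so the eigenvalue nearest sigma dominates strongly
# and the iteration converges in very few steps.
x = np.ones(n)
for _ in range(10):
    y = np.linalg.solve(A - sigma * np.eye(n), x)
    x = y / np.linalg.norm(y)

# Rayleigh quotient recovers the eigenvalue of A nearest the shift.
lam = x @ A @ x
```

The cost per step is a linear solve with A - sigma*I rather than a multiply with A, which is why the transformation trades a factorization for far fewer iterations.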

Also, the standard way to compute one specific eigenvalue (such as largest magnitude or smallest magnitude) is the power method or the inverse power method. But I cannot call "power"; I always get errors like this:

Computing eigenvalues. This can take a minute.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Wrong value of eps->which!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 5, Sat Oct 29 13:45:54 CDT 2011
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Unknown Name on a linux-gnu named vincehouhou-ThinkPad-X230 by vincehouhou Sun Mar 24 23:49:29 2013
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.2.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Tue Jun 12 21:14:38 2012
[0]PETSC ERROR: Configure options --with-shared-libraries --with-debugging=0 --useThreads 0 --with-clanguage=C++ --with-c-support --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas --with-lapack-lib=-llapack --with-blacs=1 --with-blacs-include=/usr/include --with-blacs-lib="[/usr/lib/libblacsCinit-openmpi.so,/usr/lib/libblacs-openmpi.so]" --with-scalapack=1 --with-scalapack-include=/usr/include --with-scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-mumps=1 --with-mumps-include=/usr/include --with-mumps-lib="[/usr/lib/libdmumps.so,/usr/lib/libzmumps.so,/usr/lib/libsmumps.so,/usr/lib/libcmumps.so,/usr/lib/libmumps_common.so,/usr/lib/libpord.so]" --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-cholmod=1 --with-cholmod-include=/usr/include/suitesparse --with-cholmod-lib=/usr/lib/libcholmod.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-ptscotch=1 --with-ptscotch-include=/usr/include/scotch --with-ptscotch-lib="[/usr/lib/libptesmumps.so,/usr/lib/libptscotch.so,/usr/lib/libptscotcherr.so]" --with-hdf5=1 --with-hdf5-dir=/usr
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: EPSSetUp_Power() line 71 in src/eps/impls/power/power.c
[0]PETSC ERROR: EPSSetUp() line 138 in src/eps/interface/setup.c
[0]PETSC ERROR: EPSSolve() line 122 in src/eps/interface/solve.c
Eigenvalue solver (power) converged in 0 iterations.
Traceback (most recent call last):
  File "power_groundE.py", line 81, in <module>
    r, c, rx, cx = eigensolver.get_eigenpair(0)
  File "/usr/lib/python2.7/dist-packages/dolfin/cpp.py", line 5436, in get_eigenpair
    lr, lc = self._get_eigenpair(r_vec, c_vec, i)

I get similar errors when I try the "subspace", "lanczos" and "lapack" methods. What does this error mean?
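For reference, the plain power method referred to above converges only to the eigenvalue of largest magnitude, which is plausibly why a solver restricted to that spectral target rejects other "which" settings. A minimal NumPy sketch (the matrix is a made-up example):

```python
import numpy as np

# Hypothetical symmetric test matrix; largest-magnitude eigenvalue is 3.0.
A = np.diag([3.0, 1.0, -0.5])

# Plain power iteration: x <- A x / ||A x||.
# It converges to the eigenvector of the largest-magnitude eigenvalue;
# targeting "smallest real" instead requires a spectral transformation
# such as shift-and-invert.
x = np.array([1.0, 1.0, 1.0])
for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)

lam = x @ A @ x  # Rayleigh quotient estimate of the dominant eigenvalue
```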

My system is Ubuntu 12.10.


Question information

Language: English
Project: DOLFIN
Assignee: none
Solved by: Marie Rognes
Best answer: Marie Rognes (meg-simula) said: #1

Take a look at the SLEPc documentation (http://www.grycap.upv.es/slepc/documentation/manual.htm) for more information on the underlying eigensolver algorithms.

Houdong Hu (vincehouhou) said : #2

Thanks Marie Rognes, that solved my question.