User Contributed Dictionary
Etymology
From eigen + vector
Pronunciation
Noun
- A vector that is not rotated under a given linear transformation; a left or right eigenvector depending on context.
- (physics, engineering) A right eigenvector; a nonzero vector x such that, for a particular matrix A, A x = \lambda x for some scalar \lambda, which is its eigenvalue and an eigenvalue of the matrix.
Synonyms
Related terms
Translations
vector not rotated by linear transformation
- Chinese: 本徵向量, 特徵向量
- Dutch: eigenvector
- Finnish: ominaisvektori
- French: vecteur propre
- German: Eigenvektor
- Italian: autovettore
- Japanese: 固有ベクトル (koyū bekutoru)
- Norwegian: egenvektor
- Romanian: vector propriu
- Russian: собственный вектор (sóbstvennyj vektor)
- Swedish: egenvektor
See also
Extensive Definition
In mathematics, a vector
may be thought of as an arrow. It has a length, called its
magnitude, and it points in some particular direction. A linear
transformation may be considered to operate on a vector to
change it, usually changing both its magnitude and its direction.
An eigenvector of a given linear transformation is a non-zero
vector which is multiplied by a constant, called the eigenvalue, as a result of
that transformation. The direction of the eigenvector is either
unchanged by that transformation (for positive eigenvalues) or
reversed (for negative eigenvalues).
For example, an eigenvalue of +2 means that the
eigenvector is doubled in length and points in the same direction.
An eigenvalue of +1 means that the eigenvector is unchanged, while
an eigenvalue of −1 means that the eigenvector is reversed in
direction. An eigenspace of a given transformation is the span of the
eigenvectors of that transformation with the same eigenvalue,
together with the zero vector (which has no direction). An
eigenspace is an example of a subspace
of a vector
space.
In linear
algebra, every linear transformation between finite-dimensional
vector spaces can be given by a matrix,
which is a rectangular array of numbers arranged in rows and
columns. Standard methods for finding eigenvalues, eigenvectors,
and eigenspaces of a given matrix are discussed below.
These concepts play a major role in several
branches of both pure and
applied
mathematics — appearing prominently in linear
algebra, functional
analysis, and to a lesser extent in nonlinear mathematics.
Many kinds of mathematical objects can be treated
as vectors: functions,
harmonic modes,
quantum
states, and frequencies, for example. In
these cases, the concept of direction loses its ordinary meaning,
and is given an abstract definition. Even so, if this abstract
direction is unchanged by a given linear transformation, the prefix
"eigen" is used, as in eigenfunction, eigenmode, eigenstate, and
eigenfrequency.
History
Eigenvalues are often introduced in the context
of linear
algebra or matrix
theory. Historically, however, they arose in the study of
quadratic
forms and differential
equations.
Euler studied
the rotational motion of a rigid body and
discovered the importance of the principal
axes. As Lagrange realized, the principal axes are the
eigenvectors of the inertia matrix. In the early 19th century,
Cauchy
saw how their work could be used to classify the quadric
surfaces, and generalized it to arbitrary dimensions. Cauchy
also coined the term racine caractéristique (characteristic root)
for what is now called eigenvalue; his term survives in characteristic
equation.
Fourier
used the work of Laplace and Lagrange to solve the heat
equation by separation
of variables in his famous 1822 book Théorie analytique de la
chaleur.
Sturm developed Fourier's ideas further and he brought them to
the attention of Cauchy, who combined them with his own ideas and
arrived at the fact that symmetric matrices have real eigenvalues.
Schwarz
studied the first eigenvalue of Laplace's
equation on general domains towards the end of the 19th
century, while Poincaré
studied Poisson's
equation a few years later.
At the start of the 20th century, Hilbert
studied the eigenvalues of integral
operators by viewing the operators as infinite matrices. He was
the first to use the German
word eigen to denote eigenvalues and eigenvectors in 1904, though
he may have been following a related usage by Helmholtz.
"Eigen" can be translated as "own", "peculiar to",
"characteristic", or "individual" — emphasizing how important
eigenvalues are to defining the unique nature of a specific
transformation. For some time, the standard term in English was
"proper value", but the more distinctive term "eigenvalue" is
standard today.
The first numerical algorithm for computing
eigenvalues and eigenvectors appeared in 1929, when Von
Mises published the power
method. One of the most popular methods today, the QR
algorithm, was proposed independently by Francis
and Kublanovskaya
in 1961.
Definitions: the eigenvalue equation
see also Eigenplane
Linear
transformations of a vector space, such as rotation,
reflection,
stretching, compression, shear
or any combination of these, may be visualized by the effect they
produce on vectors.
In other words, they are vector functions. More formally, in a
vector space L a vector function A is defined if for each vector x
of L there corresponds a unique vector y = A(x) of L. For the sake
of brevity, the parentheses around the vector on which the
transformation is acting are often omitted. A vector function A is
linear if it has the following two properties:
- Additivity: A(x + y) = Ax + Ay
- Homogeneity: A(αx) = αAx
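As an illustrative aside (not part of the original text), these two properties can be checked numerically when the transformation is given by a matrix; this is a minimal sketch assuming Python with NumPy, and the matrix and vectors are arbitrary examples.

```python
import numpy as np

# A hypothetical linear transformation on R^2, represented by a matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([1.0, -3.0])
y = np.array([4.0, 0.5])
alpha = 2.5

# Additivity: A(x + y) == Ax + Ay
print(np.allclose(A @ (x + y), A @ x + A @ y))        # True

# Homogeneity: A(alpha * x) == alpha * (A x)
print(np.allclose(A @ (alpha * x), alpha * (A @ x)))  # True
```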
A non-zero vector x is defined to be an eigenvector of a linear transformation A if it satisfies the eigenvalue equation
- A \mathbf{x} = \lambda \mathbf{x}
for some scalar \lambda, called the eigenvalue associated with x.
The requirement that the eigenvector be non-zero
is imposed because the equation A0 = λ0 holds for every A and every
λ. Since that equation is always trivially true, the zero vector is not an
interesting case. In contrast, an eigenvalue can be zero in a
nontrivial way. Each eigenvector is associated with a specific
eigenvalue. One eigenvalue can be associated with several or even
with infinitely many eigenvectors.
Geometrically (Fig. 2), the eigenvalue equation
means that under the transformation A eigenvectors experience only
changes in magnitude and sign — the direction of Ax is the same as
that of x. The eigenvalue λ is simply the amount of "stretch" or
"shrink" to which a vector is subjected when transformed by A. If λ
= 1, the vector remains unchanged (unaffected by the
transformation). A transformation I under which a vector x remains
unchanged, Ix = x, is defined as the identity
transformation. If λ = –1, the vector flips to the opposite
direction (it is rotated by 180°); this is defined as a
reflection.
If x is an eigenvector of the linear
transformation A with eigenvalue λ, then any scalar multiple αx is
also an eigenvector of A with the same eigenvalue. Similarly, if
more than one eigenvector shares the same eigenvalue λ, any linear
combination of these eigenvectors will itself be an eigenvector
with eigenvalue λ. Together with the zero vector, the
eigenvectors of A with the same eigenvalue form a linear
subspace of the vector space called an eigenspace.
The eigenvectors corresponding to different
eigenvalues are linearly independent, meaning, in particular, that
in an n-dimensional space the linear transformation A cannot have
more than n eigenvectors with different eigenvalues.
If a basis
is defined in a vector space, all vectors can be expressed in terms
of components.
For finite-dimensional vector spaces of dimension n, linear
transformations can be represented with n × n square matrices.
Conversely, every such square matrix corresponds to a linear
transformation for a given basis. Thus, in the two-dimensional
vector space R2 equipped with the standard
basis, the eigenvector equation for a linear transformation A
can be written in the following matrix representation:
- \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},
where the juxtaposition of matrices means
matrix multiplication.
Characteristic equation
When the transformation is represented by a square matrix A, the eigenvalue equation can be expressed as
- A \mathbf{x} - \lambda I \mathbf{x} = \mathbf{0}.
This has a non-zero solution \mathbf{x} if and only if the matrix A - \lambda I is singular, i.e. its determinant is zero:
- \det(A - \lambda I) = 0.
This equation is defined as the characteristic
equation (less often, secular
equation) of A, and the left-hand side is defined as the
characteristic
polynomial. When expanded, this gives a polynomial equation for
\lambda. The eigenvector x or its components are not present in the
characteristic equation.
Example
The matrix
- \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}
has the characteristic equation
- \det\begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = 0,
whose roots are \lambda = 3 and \lambda = 1. For \lambda = 3, the eigenvalue equation becomes
- \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = 3\begin{bmatrix} x \\ y \end{bmatrix},
which gives y = x, so an eigenvector is
- \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
Similarly, for \lambda = 1 an eigenvector is
- \begin{bmatrix} 1 \\ -1 \end{bmatrix}.
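As a quick numerical cross-check of this example (a sketch assuming Python with NumPy), numpy.linalg.eig recovers the same eigenvalues and eigenvectors, the latter normalized and determined only up to sign and order:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # 3 and 1 (order may vary)
print(eigenvectors)   # columns proportional to (1, 1) and (1, -1), up to sign

# Verify the eigenvalue equation A x = lambda x for each pair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))   # True
```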
The complexity of the problem of finding the
roots/eigenvalues of the characteristic polynomial increases
rapidly as the degree of the polynomial (the dimension
of the vector space) grows. There are exact algebraic solutions for dimensions
below 5, but for higher dimensions there is in general no closed-form solution, and
one has to resort to numerical methods to find them approximately.
For large symmetric sparse
matrices, the Lanczos
algorithm is used to compute eigenvalues and
eigenvectors.
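As a hedged sketch of this approach (assuming Python with SciPy; the matrix below is a made-up example, a 1-D discrete Laplacian), scipy.sparse.linalg.eigsh uses a Lanczos-type method to find a few extreme eigenvalues of a large symmetric sparse matrix without ever forming the characteristic polynomial:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 10000
# A large, sparse, symmetric tridiagonal matrix (a 1-D discrete Laplacian).
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format='csr')

# Compute the 3 largest-magnitude eigenvalues and their eigenvectors.
vals, vecs = eigsh(A, k=3, which='LM')
print(vals)   # close to 4, the top of this Laplacian's spectrum
```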
Existence and Multiplicity of Eigenvalues
For transformations on real vector spaces, the
coefficients of the characteristic polynomial are all real.
However, the roots are not necessarily real; they may well be
complex numbers, or a mixture of real and complex numbers. For
example, a matrix representing a planar rotation of 45 degrees will
not leave any non-zero vector pointing in the same direction.
Over a complex vector space, the
fundamental theorem of algebra guarantees that the
characteristic polynomial has at least one root, and thus the
linear transformation has at least one eigenvalue.
As well as distinct roots, the characteristic
equation may also have repeated roots. However, having repeated
roots does not imply there are multiple distinct (i.e. linearly
independent) eigenvectors with that eigenvalue. The algebraic
multiplicity of an
eigenvalue is defined as the
multiplicity of the corresponding root of the characteristic
polynomial. The geometric multiplicity of an eigenvalue is defined
as the dimension of the associated eigenspace, i.e. number of
linearly independent eigenvectors with that eigenvalue.
Over a complex space, the sum of the algebraic
multiplicities will equal the dimension of the vector space, but
the sum of the geometric multiplicities may be smaller. In that
case, there may not be enough eigenvectors
to span the entire space. This is intimately related to the
question of whether a given matrix may be diagonalized by a
suitable choice of coordinates.
Example: Shear
Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line. Shearing a plane figure does not change its area. Shear can be horizontal (along the X axis) or vertical (along the Y axis). In horizontal shear (see figure), a point P of the plane moves parallel to the X axis to the place P' so that its coordinate y does not change while the x coordinate increments to become x' = x + k y, where k is called the shear factor.
The matrix of a horizontal shear transformation is
- \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}.
The characteristic equation is
- \lambda^2 - 2\lambda + 1 = (1 - \lambda)^2 = 0,
which has a single, repeated root λ = 1. Therefore, the eigenvalue λ = 1 has algebraic multiplicity 2.
The eigenvector(s) are found as solutions of
- \begin{bmatrix} 1-1 & k \\ 0 & 1-1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 & k \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ky \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
which gives ky = 0, i.e. y = 0 (assuming k ≠ 0). The eigenvectors are therefore the non-zero multiples of (1, 0): the eigenvalue λ = 1 has geometric multiplicity 1, even though its algebraic multiplicity is 2.
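A small numerical check of this example (a sketch assuming Python with NumPy; the shear factor k = 2 is an arbitrary choice) shows the repeated eigenvalue with only a one-dimensional eigenspace:

```python
import numpy as np

k = 2.0
S = np.array([[1.0, k],
              [0.0, 1.0]])   # horizontal shear matrix

eigenvalues, eigenvectors = np.linalg.eig(S)
print(eigenvalues)           # both eigenvalues equal 1 -> algebraic multiplicity 2

# Geometric multiplicity: dimension of the null space of S - 1*I.
rank = np.linalg.matrix_rank(S - np.eye(2))
print(2 - rank)              # 1 -> only one independent eigenvector, along (1, 0)
```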
Example: Uniform scaling and Reflection
As a one-dimensional vector space, consider a
rubber string tied to an unmoving support at one end, such as that on
a child's sling. Pulling the string away from the point of
attachment stretches it and elongates it by some scaling factor λ
which is a real number. Each vector on the string is stretched
equally, with the same scaling factor λ, and although elongated it
preserves its original direction. For a two-dimensional vector
space, consider a rubber sheet stretched equally in all directions
such as a small area of the surface of an inflating balloon (Fig.
3). All vectors originating at the fixed point on the balloon
surface (the origin) are stretched equally with the same scaling
factor λ. This transformation in two-dimensions is described by the
2×2 square matrix:
- A \mathbf{x} = \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \lambda \cdot x + 0 \cdot y \\ 0 \cdot x + \lambda \cdot y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \mathbf{x}.
Expressed in words, the transformation is
equivalent to multiplying the length of any vector by λ while
preserving its original direction. Since the vector taken was
arbitrary, every non-zero vector in the vector space is an
eigenvector. Whether the transformation is stretching (elongation,
extension, inflation), or shrinking (compression, deflation)
depends on the scaling factor: if λ > 1, it is stretching; if 0 < λ
< 1, it is shrinking. Negative values of λ correspond to a
reversal of direction, followed by a stretch or a shrink, depending
on the absolute value of λ.
Example: Unequal scaling
For a slightly more complicated example, consider
a sheet that is stretched unequally in two perpendicular directions
along the coordinate axes, or, similarly, stretched in one
direction, and shrunk in the other direction. In this case, there
are two different scaling factors: k1 for the scaling in direction
x, and k2 for the scaling in direction y. The transformation matrix
is
- \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix},
and the characteristic
equation is (k_1-\lambda)(k_2-\lambda) = 0. The eigenvalues,
obtained as roots of this equation are λ1 = k1, and λ2 = k2 which
means, as expected, that the two eigenvalues are the scaling
factors in the two directions. Plugging k1 back in the eigenvalue
equation gives one of the eigenvectors:
- \begin{bmatrix} 0 & 0 \\ 0 & k_2 - k_1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, or, more simply, y = 0.
The figure shows the case where k_1>1 and
1>k_2>0. The rubber sheet is stretched along the x axis and
simultaneously shrunk along the y axis. After this
stretching/shrinking transformation is applied repeatedly, almost any
vector on the surface of the rubber sheet will be oriented closer
and closer to the direction of the x axis (the direction of
stretching). The exceptions are vectors along the y-axis, which
will gradually shrink away to nothing.
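This convergence toward the direction of greatest stretching can be illustrated numerically (a sketch assuming Python with NumPy; the values k1 = 1.5 and k2 = 0.5 are made-up stand-ins for the situation in the figure):

```python
import numpy as np

k1, k2 = 1.5, 0.5                # stretch along x, shrink along y
A = np.diag([k1, k2])

v = np.array([1.0, 5.0])         # an arbitrary starting vector
for _ in range(20):
    v = A @ v
    v = v / np.linalg.norm(v)    # keep only the direction

print(v)   # approaches (1, 0), the eigenvector of the larger eigenvalue k1
```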
Example: Rotation
see also Rotation matrix
A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any (real) eigenvectors. But this is not necessarily the case if we consider the same matrix over a complex vector space.
A counterclockwise
rotation in the horizontal plane about the origin at an angle φ is
represented by the matrix
- \mathbf{R} = \begin{bmatrix} \cos \varphi & -\sin \varphi \\ \sin \varphi & \cos \varphi \end{bmatrix}.
Rotation matrices on complex vector spaces
The characteristic equation has two complex roots λ1 and λ2. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional space, we can consider these complex eigenvalues. The roots are complex conjugates of each other: λ1,2 = cos φ ± i sin φ = e^{±iφ}, each with an algebraic multiplicity equal to 1, where i is the imaginary unit.
The first eigenvector is found by substituting
the first eigenvalue, λ1, back in the eigenvalue equation:
- \begin{bmatrix} \cos \varphi - \lambda_1 & -\sin \varphi \\ \sin \varphi & \cos \varphi - \lambda_1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -i \sin \varphi & -\sin \varphi \\ \sin \varphi & -i \sin \varphi \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
This gives y = −ix, so an eigenvector for λ1 is
- \begin{bmatrix} 1 \\ -i \end{bmatrix}.
Similarly, an eigenvector for λ2 is
- \begin{bmatrix} 1 \\ i \end{bmatrix}.
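A numerical illustration of this (a sketch assuming Python with NumPy; φ = 45° is an arbitrary choice) shows that the rotation matrix has the complex conjugate eigenvalues e^{±iφ} and complex eigenvectors:

```python
import numpy as np

phi = np.pi / 4                      # a 45 degree rotation (arbitrary choice)
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)    # the complex conjugate pair exp(i*phi), exp(-i*phi)
print(eigenvectors)   # columns proportional (up to a complex scale) to (1, -i) and (1, i)

# Check the eigenvalue equation R x = lambda x for each complex pair.
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(R @ x, lam * x))   # True
```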
Infinite-dimensional spaces and Spectral Theory
see also Spectral theorem
If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λ)^{−1} is not defined; that is, such that T − λ has no bounded inverse.
Clearly, if λ is an eigenvalue of T, λ is in the
spectrum of T. In general, the converse is not true. There are
operators on Hilbert or
Banach
spaces which have no eigenvectors at all. This can be seen in
the following example. The bilateral
shift on the Hilbert space ℓ2(Z) (that
is, the space of all sequences of scalars … a−1, a0, a1, a2, … such
that
- \cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots
converges) has no eigenvalue but does have
spectral values.
In infinite-dimensional spaces, the spectrum of a
bounded
operator is always nonempty. This is also true for an unbounded
self-adjoint operator. Via its spectral
measures, the spectrum of any self-adjoint operator, bounded or
otherwise, can be decomposed into absolutely continuous, pure
point, and singular parts. (See
Decomposition of spectrum.)
The hydrogen
atom is an example where both types of spectra appear. The
eigenfunctions of the hydrogen
atom Hamiltonian are called eigenstates and are grouped into
two categories. The bound states
of the hydrogen atom correspond to the discrete part of the
spectrum (they have a discrete set of eigenvalues which can be
computed by the Rydberg
formula) while the ionization processes are
described by the continuous part (the energy of the
collision/ionization is not quantized).
Eigenfunctions
A common example of such maps on infinite-dimensional
spaces is the action of differential
operators on function
spaces. As an example, on the space of infinitely differentiable
functions, the process of differentiation defines
a linear operator since
- \displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt}.
The eigenvalue equation for linear differential
operators is then a set of one or more differential
equations. The eigenvectors are commonly called eigenfunctions.
The simplest case is the eigenvalue equation for differentiation
of a real-valued function of a single real variable. In this case,
the eigenvalue equation becomes the linear differential equation
- \displaystyle\frac{d}{dt} f(t) = \lambda f(t).
If λ = 0, the solutions are the constant functions
- f(t) = A,
while for any other λ the solutions are the exponential functions
- f(t) = Ae^{\lambda t}.
If we expand our horizons to complex valued
functions, the value of λ can be any complex
number. The spectrum of d/dt is therefore the whole complex
plane. This is an example of a continuous
spectrum.
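A symbolic check of this statement (a sketch assuming Python with SymPy) confirms that the exponential function is an eigenfunction of d/dt for an arbitrary λ:

```python
import sympy as sp

t = sp.symbols('t', real=True)
lam = sp.symbols('lambda')    # may be any complex number
A = sp.symbols('A')           # arbitrary amplitude

f = A * sp.exp(lam * t)

# d/dt applied to f returns lambda * f, i.e. f is an eigenfunction with eigenvalue lambda.
print(sp.simplify(sp.diff(f, t) - lam * f))   # 0
```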
Example: waves on a string
The displacement, h(x,t), of a stressed rope
fixed at both ends, like the vibrating
strings of a string
instrument, satisfies the wave
equation
- \frac{\partial^2 h}{\partial t^2} = c^2 \frac{\partial^2 h}{\partial x^2},
which is a linear partial differential equation. Separating variables by writing h(x,t) = X(x)T(t) leads to two ordinary differential equations,
- X'' = -\frac{\omega^2}{c^2} X and T'' = -\omega^2 T,
each of which is an eigenvalue equation (with eigenvalues -\omega^2/c^2 and -\omega^2, respectively). Their solutions are
- X = \sin\left(\frac{\omega x}{c} + \phi\right) and T = \sin(\omega t + \psi).
The condition that the rope is fixed at x = 0 requires
- \sin(\phi) = 0, and so the phase angle \phi = 0,
while the condition that it is fixed at x = L requires
- \sin\left(\frac{\omega L}{c}\right) = 0,
so the admissible frequencies are \omega_n = n\pi c/L. The eigenmodes of the vibrating rope are therefore
- h(x,t) = \sin(n\pi x/L)\sin(\omega_n t).
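These eigenmodes can be verified symbolically (a sketch assuming Python with SymPy; the symbols mirror those in the derivation above):

```python
import sympy as sp

x, t, L, c = sp.symbols('x t L c', positive=True)
n = sp.symbols('n', positive=True, integer=True)

omega_n = n * sp.pi * c / L
h = sp.sin(n * sp.pi * x / L) * sp.sin(omega_n * t)

# h satisfies the wave equation h_tt = c^2 h_xx ...
print(sp.simplify(sp.diff(h, t, 2) - c**2 * sp.diff(h, x, 2)))   # 0

# ... and vanishes at both fixed ends x = 0 and x = L.
print(h.subs(x, 0), h.subs(x, L))                                # 0 0
```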
Eigendecomposition
The spectral theorem for matrices can be stated as follows. Let A be a square n × n matrix. Let q1, ..., qk be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of A. If k = n, then A can be written
- \mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1},
where Q is the square n × n matrix whose i-th
column is the basis eigenvector q_i of A and Λ is the diagonal
matrix whose diagonal elements are the corresponding
eigenvalues, i.e. Λ_{ii} = λ_i.
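A small numerical sketch of this decomposition (assuming Python with NumPy; the matrix is an arbitrary diagonalizable example) reconstructs A from Q and Λ:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors
Lam = np.diag(eigenvalues)          # Lambda: eigenvalues on the diagonal

# This 2x2 matrix has 2 independent eigenvectors, so A = Q Lam Q^{-1}.
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True
```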
Applications
Schrödinger equation
An example of an eigenvalue equation where the
transformation T is represented in terms of a differential operator
is the time-independent Schrödinger
equation in quantum
mechanics:
- H\psi_E = E\psi_E
where H, the
Hamiltonian, is a second-order differential
operator and \psi_E, the wavefunction, is one of its
eigenfunctions corresponding to the eigenvalue E, interpreted as
its energy.
However, in the case where one is interested only
in the bound state
solutions of the Schrödinger equation, one looks for \psi_E within
the space of square
integrable functions. Since this space is a Hilbert
space with a well-defined scalar
product, one can introduce a basis
set in which \psi_E and H can be represented as a
one-dimensional array and a matrix respectively. This allows one to
represent the Schrödinger equation in a matrix form. (Fig. 8
presents the lowest eigenfunctions of the Hydrogen
atom Hamiltonian.)
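As a loose illustration of this matrix viewpoint (a sketch that is not from the original text: it assumes Python with NumPy, uses units with ħ = m = 1, and replaces the hydrogen atom by a particle in a box discretized on a finite grid), the Hamiltonian becomes an ordinary matrix whose lowest eigenvalues approximate the analytic energies n²π²/2:

```python
import numpy as np

# Particle in a box of length L = 1, discretized on an interior grid (hbar = m = 1).
N = 500
L = 1.0
dx = L / (N + 1)

# Kinetic term -1/2 d^2/dx^2 as a finite-difference (tridiagonal) matrix.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, wavefunctions = np.linalg.eigh(H)    # H is symmetric, so eigh applies
print(energies[:3])                            # approx. n^2 * pi^2 / 2 for n = 1, 2, 3
print((np.arange(1, 4)**2 * np.pi**2) / 2)     # analytic values ~4.93, 19.74, 44.41
```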
The Dirac
notation is often used in this context. A vector, which
represents a state of the system, in the Hilbert space of square
integrable functions is represented by |\Psi_E\rangle. In this
notation, the Schrödinger equation is:
- H|\Psi_E\rangle = E|\Psi_E\rangle
where |\Psi_E\rangle is an eigenstate of H. Here H is
a self-adjoint operator, the infinite-dimensional analog of Hermitian
matrices (see Observable). As
in the matrix case, in the equation above H|\Psi_E\rangle is
understood to be the vector obtained by application of the
transformation H to |\Psi_E\rangle.
Molecular orbitals
In quantum
mechanics, and in particular in atomic and
molecular
physics, within the Hartree-Fock
theory, the atomic and
molecular
orbitals can be defined by the eigenvectors of the Fock
operator. The corresponding eigenvalues are interpreted as
ionization
potentials via Koopmans'
theorem. In this case, the term eigenvector is used in a
somewhat more general meaning, since the Fock operator is
explicitly dependent on the orbitals and their eigenvalues. If one
wants to underline this aspect, one speaks of a nonlinear eigenvalue
problem. Such equations are usually solved by an iteration procedure, called in
this case the self-consistent
field method. In quantum
chemistry, one often represents the Hartree-Fock equation in a
non-orthogonal
basis
set. This particular representation is a generalized eigenvalue
problem called the Roothaan
equations.
Geology and glaciology: the orientation tensor
In geology, especially in the study
of glacial
till, eigenvectors and eigenvalues are used as a method by
which a mass of information about a clast fabric's constituents'
orientation and dip can be summarized in 3-D space by six
numbers. In the field, a geologist may collect such data for
hundreds or thousands of clasts in a soil sample, which
can only be compared graphically, such as in a Tri-Plot (Sneed and
Folk) diagram or as a Stereonet on a Wulff Net. The output for
the orientation tensor is in the three orthogonal (perpendicular)
axes of space. Eigenvectors output from programs such as Stereo32
are in the order
E1 ≥ E2 ≥ E3,
with E1 being the primary orientation of clast orientation/dip, E2
being the secondary and E3 being the tertiary, in terms of
strength. The clast orientation is defined as the eigenvector, on a
compass rose of 360°. Dip is measured as the eigenvalue, the
modulus of the tensor: this is valued from 0° (no dip) to 90°
(vertical). The relative values of E1, E2, and E3 are dictated by
the nature of the sediment's fabric. If
E1 = E2 = E3, the fabric is
said to be isotropic. If
E1 = E2 > E3 the fabric is
planar. If E1 > E2 > E3
the fabric is linear. See 'A Practical Guide to the Study of
Glacial Sediments' by Benn & Evans, 2004.
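A rough sketch of how such an orientation tensor can be computed (assuming Python with NumPy; the clast direction data here are randomly generated stand-ins, not field measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit vectors representing measured clast long-axis orientations (made-up data).
v = rng.normal(size=(1000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Orientation tensor: normalized sum of outer products of the unit vectors.
T = (v.T @ v) / len(v)

eigenvalues, eigenvectors = np.linalg.eigh(T)   # T is symmetric
E3, E2, E1 = eigenvalues                        # eigh returns ascending order
print(E1, E2, E3)   # E1 >= E2 >= E3; near-equal values indicate an isotropic fabric
```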
Factor analysis
In factor
analysis, the eigenvectors of a covariance
matrix or correlation
matrix correspond to factors,
and eigenvalues to the variance explained by these factors. Factor
analysis is a statistical technique used in
the social
sciences and in marketing, product
management, operations
research, and other applied sciences that deal with large
quantities of data. The objective is to explain most of the
covariability among a number of observable random
variables in terms of a smaller number of unobservable latent
variables called factors. The observable random variables are
modeled as linear
combinations of the factors, plus unique variance terms.
Eigenvalues are used in the analysis performed by Q-methodology software;
factors with eigenvalues greater than 1.00 are considered
significant, explaining an important amount of the variability in
the data, while eigenvalues less than 1.00 are considered too weak,
not explaining a significant portion of the data variability.
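A minimal sketch of this eigenvalue criterion (assuming Python with NumPy; the data matrix is randomly generated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: 200 observations of 6 observed variables.
data = rng.normal(size=(200, 6))
data[:, 1] += 0.8 * data[:, 0]          # induce some correlation between two variables

R = np.corrcoef(data, rowvar=False)     # 6 x 6 correlation matrix
eigenvalues = np.linalg.eigvalsh(R)     # real eigenvalues of the symmetric matrix

# Kaiser-style criterion: retain factors whose eigenvalue exceeds 1.00.
print(np.sort(eigenvalues)[::-1])
print(np.sum(eigenvalues > 1.0), "factor(s) considered significant")
```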
Eigenfaces
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems for determining hand gestures has also been conducted. More on determining sign language letters using eigen systems can be found here: http://www.geigel.com/signlanguage/index.php
Similarly, the concept of eigenvoices has
also been developed, representing the general direction of
variability in human pronunciations of a particular utterance, such
as a word in a language. Based on a linear combination of such
eigenvoices, a new voice pronunciation of the word can be
constructed. These concepts have been found useful in automatic
speech recognition systems for speaker adaptation.
Tensor of inertia
In mechanics, the eigenvectors of
the
inertia tensor define the principal
axes of a rigid body.
The tensor of inertia is a key quantity
required in order to determine the rotation of a rigid body around
its center of
mass.
Stress tensor
In solid
mechanics, the stress
tensor is symmetric and so can be decomposed into a diagonal tensor with the
eigenvalues on the diagonal and eigenvectors as a basis. Because it
is diagonal, in this orientation, the stress tensor has no shear
components; the components it does have are the principal
components.
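A brief sketch of this decomposition (assuming Python with NumPy; the stress tensor values are invented for illustration, in arbitrary units):

```python
import numpy as np

# A symmetric Cauchy stress tensor (made-up values).
sigma = np.array([[ 50.0,  30.0,  0.0],
                  [ 30.0, -20.0,  0.0],
                  [  0.0,   0.0, 10.0]])

# Principal stresses are the eigenvalues; principal directions are the eigenvectors.
principal_stresses, principal_directions = np.linalg.eigh(sigma)
print(principal_stresses)     # in this basis the stress tensor is diagonal: no shear components
print(principal_directions)   # columns: the orthonormal principal axes
```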
Eigenvalues of a graph
In spectral
graph theory, an eigenvalue of a graph is
defined as an eigenvalue of the graph's adjacency
matrix A, or (increasingly) of the graph's Laplacian
matrix, which is either T − A or I − T^{−1/2} A T^{−1/2}, where T is a
diagonal matrix holding the degree of each vertex, and in T^{−1/2}, 0
is substituted for 0^{−1/2}. The kth principal eigenvector of a graph
is defined as either the eigenvector corresponding to the kth
largest eigenvalue of A, or the eigenvector corresponding to the
kth smallest eigenvalue of the Laplacian. The first principal
eigenvector of the graph is also referred to merely as the
principal eigenvector.
The principal eigenvector is used to measure the
centrality
of its vertices. An example is Google's PageRank
algorithm. The principal eigenvector of a modified adjacency
matrix of the World Wide Web graph gives the page ranks as its
components. This vector corresponds to the stationary
distribution of the Markov chain
represented by the row-normalized adjacency matrix; however, the
adjacency matrix must first be modified to ensure a stationary
distribution exists. The second principal eigenvector can be used
to partition the graph into clusters, via
spectral clustering. Other methods are also available for
clustering.
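A toy sketch of the PageRank idea (assuming Python with NumPy; the four-page link graph and the damping factor 0.85 are illustrative choices, not taken from the text):

```python
import numpy as np

# Adjacency matrix of a tiny web graph: entry (i, j) = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-normalize to get link-following probabilities, then mix in random jumps
# so that a stationary distribution is guaranteed to exist (the "modification").
P = A / A.sum(axis=1, keepdims=True)
d = 0.85
G = d * P + (1 - d) / len(P)

# Power iteration: the PageRank vector is the principal left eigenvector of G.
r = np.full(len(P), 1.0 / len(P))
for _ in range(100):
    r = r @ G
print(r / r.sum())   # page ranks; higher values mean more "central" pages
```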
See also
Notes
References
- Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
- Pigolkina, T. S. and Shulman, V. S., Eigenvector (in Russian), In:Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
- Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0387909923.
External links
- MIT Video Lecture on Eigenvalues and Eigenvectors at Google Video, from MIT OpenCourseWare
- ARPACK is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
- IRBLEIGS, has MATLAB code with similar capabilities to ARPACK. (See this paper for a comparison between IRBLEIGS and ARPACK.)
- LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems
- ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, etc.
- MathWorld: Eigenvector
- Online calculator for Eigenvalues and Eigenvectors
- Online Matrix Calculator Calculates eigenvalues, eigenvectors and other decompositions of matrices online
- Vanderplaats Research and Development - Provides the SMS eigenvalue solver for Structural Finite Element. The solver is in the GENESIS program as well as other commercial programs. SMS can be used easily with MSC.Nastran or NX/Nastran via DMAPs.
- What are Eigen Values? from PhysLink.com's "Ask the Experts"
- Templates for the Solution of Algebraic Eigenvalue Problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst (a guide to the numerical solution of eigenvalue problems)