Eigenfunction
An eigenfunction is an eigenvector that is also a function. Like any eigenvector, its "direction" is unchanged when the associated linear transformation is applied; it is only multiplied by a scaling factor, the eigenvalue. By analogy with resizing a picture, eigenfunctions can be pictured as the fixed axes along which the linear transformation stretches, compresses, or flips the data. In multidimensional data analysis, using a function in place of a finite list of vector components makes it possible to describe all the dimensions of a space in a single formula.[1]
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as
- $Df = \lambda f$
for some scalar eigenvalue λ.[2][3][4] The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions.
Contents
1 Eigenfunctions
1.1 Derivative example
1.2 Link to eigenvalues and eigenvectors of matrices
1.3 Eigenvalues and eigenfunctions of Hermitian operators
2 Applications
2.1 Vibrating strings
2.2 Schrödinger equation
2.3 Signals and systems
3 See also
4 Notes
5 References
6 External links
Eigenfunctions
In general, an eigenvector of a linear operator D defined on some vector space is a nonzero vector in the domain of D that, when D acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where D is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function f is an eigenfunction of D if it satisfies the equation
$Df = \lambda f,$
(1)
where λ is a scalar.[2][3][4] The solutions to Equation (1) may also be subject to boundary conditions. Because of the boundary conditions, the possible values of λ are generally limited, for example to a discrete set λ1, λ2, ... or to a continuous set over some range. The set of all possible eigenvalues of D is sometimes called its spectrum, which may be discrete, continuous, or a combination of both.[2]
Each value of λ corresponds to one or more eigenfunctions. If multiple linearly independent eigenfunctions have the same eigenvalue, the eigenvalue is said to be degenerate and the maximum number of linearly independent eigenfunctions associated with the same eigenvalue is the eigenvalue's degree of degeneracy or geometric multiplicity.[5][6]
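As a minimal numerical sketch of degeneracy (the matrix below is chosen purely for illustration and does not come from the text), a diagonal operator with a repeated eigenvalue has an eigenspace whose dimension equals the degree of degeneracy:

```python
import numpy as np

# Hypothetical illustration: a 3x3 operator with a degenerate eigenvalue.
# The eigenvalue 2 appears twice, and two linearly independent eigenvectors
# share it, so its geometric multiplicity (degree of degeneracy) is 2.
D = np.diag([2.0, 2.0, 5.0])

eigenvalues, eigenvectors = np.linalg.eig(D)
print(eigenvalues)                                     # [2. 2. 5.]

# The eigenvectors belonging to eigenvalue 2 span a 2-dimensional eigenspace;
# the rank of that set of vectors is the geometric multiplicity.
mask = np.isclose(eigenvalues, 2.0)
print(np.linalg.matrix_rank(eigenvectors[:, mask]))    # 2
```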
Derivative example
A widely used class of linear operators acting on infinite dimensional spaces are differential operators on the space C∞ of infinitely differentiable real or complex functions of a real or complex argument t. For example, consider the derivative operator $\tfrac{d}{dt}$ with eigenvalue equation
- $\frac{d}{dt} f(t) = \lambda f(t).$
This differential equation can be solved by multiplying both sides by $\tfrac{dt}{f(t)}$ and integrating. Its solution, the exponential function
- $f(t) = f_0 e^{\lambda t},$
is the eigenfunction of the derivative operator, where f0 is a parameter that depends on the boundary conditions. Note that in this case the eigenfunction is itself a function of its associated eigenvalue λ, which can take any real or complex value. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.
Suppose in the example that f(t) is subject to the boundary conditions f(0) = 1 and $\tfrac{df}{dt}\big|_{t=0} = 2$. We then find that
- $f(t) = e^{2t},$
where λ = 2 is the only eigenvalue of the differential equation that also satisfies the boundary condition.
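The following short symbolic sketch (using SymPy; the symbol names are illustrative) checks that f(t) = f0 e^{λt} satisfies the eigenvalue equation and that the stated boundary conditions force f0 = 1 and λ = 2:

```python
import sympy as sp

# Check that f(t) = f0 * exp(lam * t) is an eigenfunction of d/dt, and that
# the boundary conditions f(0) = 1 and f'(0) = 2 pick out f0 = 1, lam = 2.
t, lam, f0 = sp.symbols('t lam f0')
f = f0 * sp.exp(lam * t)

# Eigenvalue relation: d/dt f - lam * f vanishes identically
print(sp.simplify(sp.diff(f, t) - lam * f))      # 0

# Impose the boundary conditions and solve for f0 and lam
solution = sp.solve([f.subs(t, 0) - 1,
                     sp.diff(f, t).subs(t, 0) - 2], [f0, lam])
print(solution)                                  # [(1, 2)]  ->  f(t) = exp(2*t)
```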
Link to eigenvalues and eigenvectors of matrices
Eigenfunctions can be expressed as column vectors and linear operators can be expressed as matrices, although they may have infinite dimensions. As a result, many of the concepts related to eigenvectors of matrices carry over to the study of eigenfunctions.
Define the inner product in the function space on which D is defined as
- $\langle f, g \rangle = \int_{\Omega} f^{*}(t)\, g(t)\, dt,$
integrated over some range of interest for t called Ω. The * denotes the complex conjugate.
Suppose the function space has an orthonormal basis given by the set of functions {u1(t), u2(t), ..., un(t)}, where n may be infinite. For the orthonormal basis,
- $\langle u_i, u_j \rangle = \int_{\Omega} u_i^{*}(t)\, u_j(t)\, dt = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases},$
where δij is the Kronecker delta and can be thought of as the elements of the identity matrix.
Functions can be written as a linear combination of the basis functions,
- $f(t) = \sum_{j=1}^{n} b_j u_j(t),$
for example through a Fourier expansion of f(t). The coefficients bj can be stacked into an n-by-1 column vector $\mathbf{b} = [b_1\ b_2\ \dots\ b_n]^{\mathsf{T}}$. In some special cases, such as the coefficients of the Fourier series of a sinusoidal function, this column vector has finite dimension.
Additionally, define a matrix representation of the linear operator D with elements
- $A_{ij} = \langle u_i, D u_j \rangle = \int_{\Omega} u_i^{*}(t)\, D u_j(t)\, dt.$
We can write the function Df(t) either as a linear combination of the basis functions or as D acting upon the expansion of f(t),
- $Df(t) = \sum_{j=1}^{n} c_j u_j(t) = \sum_{j=1}^{n} b_j D u_j(t).$
Taking the inner product of each side of this equation with an arbitrary basis function ui(t),
- $\begin{aligned} \sum_{j=1}^{n} c_j \int_{\Omega} u_i^{*}(t)\, u_j(t)\, dt &= \sum_{j=1}^{n} b_j \int_{\Omega} u_i^{*}(t)\, D u_j(t)\, dt, \\ c_i &= \sum_{j=1}^{n} b_j A_{ij}. \end{aligned}$
This is the matrix multiplication Ab = c written in summation notation and is a matrix equivalent of the operator D acting upon the function f(t) expressed in the orthonormal basis. If f(t) is an eigenfunction of D with eigenvalue λ, then Ab = λb.
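As a concrete numerical sketch of this correspondence (the operator, basis, and truncation below are assumptions chosen for illustration, not given in the text), take D = d/dt with the truncated orthonormal Fourier basis u_k(t) = e^{ikt}/√(2π) on Ω = [0, 2π). The matrix elements are computed from the inner-product integrals, and the coefficient vector of an eigenfunction satisfies Ab = λb:

```python
import numpy as np

# Matrix representation of D = d/dt in the truncated orthonormal Fourier
# basis u_k(t) = exp(i*k*t)/sqrt(2*pi) on Omega = [0, 2*pi), k = -K..K.
K = 3
ks = np.arange(-K, K + 1)
N = 256                                   # sample points for the integrals
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dt = 2 * np.pi / N

def u(k):
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

def Du(k):
    return 1j * k * u(k)                  # d/dt acting on u_k

# A_ij = <u_i, D u_j> = integral over Omega of conj(u_i) * D u_j dt
A = np.array([[np.sum(np.conj(u(i)) * Du(j)) * dt for j in ks] for i in ks])

# f(t) = u_2(t) is an eigenfunction of D with eigenvalue 2i; its coefficient
# vector b is the standard basis vector that selects k = 2.
b = (ks == 2).astype(complex)
print(np.allclose(A @ b, 2j * b))         # True: A b = lambda b
```

Because the chosen basis functions are themselves eigenfunctions of d/dt, the matrix A comes out diagonal with entries ik; for an operator whose eigenfunctions differ from the basis, A would be dense, but the relation Ab = λb would still hold for the coefficient vector of each eigenfunction.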
Eigenvalues and eigenfunctions of Hermitian operators
Many of the operators encountered in physics are Hermitian. Suppose the linear operator D acts on a function space that is a Hilbert space with an orthonormal basis given by the set of functions {u1(t), u2(t), ..., un(t)}, where n may be infinite. In this basis, the operator D has a matrix representation A with elements
- $A_{ij} = \langle u_i, D u_j \rangle = \int_{\Omega} dt\, u_i^{*}(t)\, D u_j(t),$
integrated over some range of interest for t denoted Ω.
By analogy with Hermitian matrices, D is a Hermitian operator if Aij = Aji*, or[7]
- $\begin{aligned} \langle u_i, D u_j \rangle &= \langle D u_i, u_j \rangle, \\ \int_{\Omega} dt\, u_i^{*}(t)\, D u_j(t) &= \int_{\Omega} dt\, u_j(t)\, [D u_i(t)]^{*}. \end{aligned}$
Consider the Hermitian operator D with eigenvalues λ1, λ2, ... and corresponding eigenfunctions f1(t), f2(t), ... . This Hermitian operator has the following properties:
- Its eigenvalues are real, λi = λi*[5][7]
- Its eigenfunctions obey an orthogonality condition, $\langle f_i, f_j \rangle = 0$ if i ≠ j[7][8][9]
The second condition always holds for λi ≠ λj. For degenerate eigenfunctions with the same eigenvalue λi, orthogonal eigenfunctions can always be chosen that span the eigenspace associated with λi, for example by using the Gram-Schmidt process.[6] Depending on whether the spectrum is discrete or continuous, the eigenfunctions can be normalized by setting the inner product of the eigenfunctions equal to either a Kronecker delta or a Dirac delta function, respectively.[9][10]
For many Hermitian operators, notably Sturm-Liouville operators, a third property is
- Its eigenfunctions form a basis of the function space on which the operator is defined[6]
As a consequence, in many important cases, the eigenfunctions of the Hermitian operator form an orthonormal basis. In these cases, an arbitrary function can be expressed as a linear combination of the eigenfunctions of the Hermitian operator.
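A small numerical sketch of these properties (the operator, interval, and grid below are assumptions made for illustration): discretizing the Sturm-Liouville-type operator −d²/dx² on [0, 1] with Dirichlet boundary conditions gives a real symmetric, hence Hermitian, matrix whose eigenvalues are real and whose eigenvectors are orthonormal and form a basis for expanding an arbitrary grid-sampled function:

```python
import numpy as np

# Finite-difference matrix for -d^2/dx^2 on [0, 1] with Dirichlet boundary
# conditions (values forced to zero at both ends).
N = 200
x = np.linspace(0, 1, N + 2)[1:-1]        # interior grid points
h = x[1] - x[0]
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

eigenvalues, eigenvectors = np.linalg.eig(A)

# Properties of the Hermitian operator:
print(np.allclose(eigenvalues.imag, 0))                        # eigenvalues are real
print(np.allclose(eigenvectors.T @ eigenvectors, np.eye(N)))   # eigenvectors orthonormal

# The eigenvectors form a basis: expand an arbitrary sampled function in them
# and reconstruct it exactly from the expansion coefficients.
f = x * (1 - x) * np.exp(x)
coeffs = eigenvectors.T @ f
print(np.allclose(eigenvectors @ coeffs, f))                   # True
```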
Applications
Vibrating strings
Let h(x, t) denote the transverse displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. Applying the laws of mechanics to infinitesimal portions of the string, the function h satisfies the partial differential equation
- $\frac{\partial^2 h}{\partial t^2} = c^2 \frac{\partial^2 h}{\partial x^2},$
which is called the (one-dimensional) wave equation. Here c is a constant speed that depends on the tension and mass of the string.
This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:
- $\frac{d^2}{dx^2} X = -\frac{\omega^2}{c^2} X, \qquad \frac{d^2}{dt^2} T = -\omega^2 T.$
Each of these is an eigenvalue equation with eigenvalues $-\tfrac{\omega^2}{c^2}$ and $-\omega^2$, respectively. For any values of ω and c, the equations are satisfied by the functions
- $X(x) = \sin\left(\frac{\omega x}{c} + \varphi\right), \qquad T(t) = \sin(\omega t + \psi),$
where the phase angles φ and ψ are arbitrary real constants.
If we impose boundary conditions, for example that the ends of the string are fixed at x = 0 and x = L, namely X(0) = X(L) = 0, and that T(0) = 0, we constrain the eigenvalues. For these boundary conditions, sin(φ) = 0 and sin(ψ) = 0, so the phase angles φ = ψ = 0, and
- $\sin\left(\frac{\omega L}{c}\right) = 0.$
This last boundary condition constrains ω to take a value ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form
- $h(x,t) = \sin\left(\frac{n\pi x}{L}\right) \sin(\omega_n t).$
In the example of a string instrument, the frequency ωn is the frequency of the nth harmonic, which is called the (n − 1)th overtone.
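A brief symbolic check (illustrative only; the symbols are the ones used above) that the standing wave satisfies the wave equation and the fixed-end boundary conditions:

```python
import sympy as sp

# Verify that h(x, t) = sin(n*pi*x/L) * sin(omega_n*t), with omega_n = n*pi*c/L,
# solves the wave equation and vanishes at the clamped ends x = 0 and x = L.
x, t, c, L = sp.symbols('x t c L', positive=True)
n = sp.symbols('n', integer=True, positive=True)
omega_n = n * sp.pi * c / L
h = sp.sin(n * sp.pi * x / L) * sp.sin(omega_n * t)

# Wave-equation residual: d^2h/dt^2 - c^2 * d^2h/dx^2
print(sp.simplify(sp.diff(h, t, 2) - c**2 * sp.diff(h, x, 2)))   # 0

# Fixed ends: h(0, t) = h(L, t) = 0
print(sp.simplify(h.subs(x, 0)), sp.simplify(h.subs(x, L)))      # 0 0
```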
Schrödinger equation
In quantum mechanics, the Schrödinger equation
- $i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = H \Psi(\mathbf{r}, t)$
with the Hamiltonian operator
- $H = -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}, t)$
can be solved by separation of variables if the Hamiltonian does not depend explicitly on time.[11] In that case, the wave function Ψ(r,t) = φ(r)T(t) leads to the two differential equations,
$H \varphi(\mathbf{r}) = E \varphi(\mathbf{r}),$
(2)
$i\hbar \frac{\partial T(t)}{\partial t} = E T(t).$
(3)
Both of these differential equations are eigenvalue equations with eigenvalue E. As shown in an earlier example, the solution of Equation (3) is the exponential
- $T(t) = e^{-iEt/\hbar}.$
Equation (2) is the time-independent Schrödinger equation. The eigenfunctions φk of the Hamiltonian operator are stationary states of the quantum mechanical system, each with a corresponding energy Ek. They represent allowable energy states of the system and may be constrained by boundary conditions.
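As an illustrative sketch of Equation (2) with boundary conditions (the infinite square well and the units ħ = m = 1 are assumptions made here, not taken from the text), the stationary states of a particle in a box can be approximated by discretizing the Hamiltonian on a grid with the wave function forced to zero at the walls, and the lowest numerical energies compared with the analytic values E_n = n²π²ħ²/(2mL²):

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 for a particle in a box of width L = 1
# (hbar = m = 1), with phi = 0 at the walls (Dirichlet boundary conditions).
L = 1.0
N = 500
x = np.linspace(0, L, N + 2)[1:-1]
dx = x[1] - x[0]
H = -0.5 * (np.eye(N, k=1) - 2 * np.eye(N) + np.eye(N, k=-1)) / dx**2

energies, states = np.linalg.eigh(H)      # eigh: suited to Hermitian matrices

# Lowest numerical energies versus the analytic E_n = n^2 pi^2 / (2 L^2)
n = np.arange(1, 4)
print(energies[:3])
print(n**2 * np.pi**2 / (2 * L**2))
```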
The Hamiltonian operator H is an example of a Hermitian operator whose eigenfunctions form an orthonormal basis. When the Hamiltonian does not depend explicitly on time, general solutions of the Schrödinger equation are linear combinations of the stationary states multiplied by the oscillatory T(t),[12]
- $\Psi(\mathbf{r}, t) = \sum_{k} c_k \varphi_k(\mathbf{r})\, e^{-iE_k t/\hbar}$
or, for a system with a continuous spectrum,
- $\Psi(\mathbf{r}, t) = \int dE\, c_E\, \varphi_E(\mathbf{r})\, e^{-iEt/\hbar}.$
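A small numerical sketch of such a superposition (the two box states, coefficients, and units below are assumptions chosen for illustration): because the stationary states are orthonormal, a linear combination remains normalized as each term evolves with its own phase factor e^{−iE_k t/ħ}:

```python
import numpy as np

# Superposition of the two lowest particle-in-a-box states (hbar = m = L = 1):
# Psi(x, t) = sum_k c_k phi_k(x) exp(-i E_k t / hbar)
L, hbar = 1.0, 1.0
x = np.linspace(0, L, 2000)

def phi(k):                                # box eigenfunctions
    return np.sqrt(2 / L) * np.sin(k * np.pi * x / L)

def E(k):                                  # corresponding energies
    return (k * np.pi * hbar) ** 2 / (2 * L**2)

c = np.array([1.0, 1.0]) / np.sqrt(2)      # expansion coefficients

def psi(t):
    return sum(ck * phi(k) * np.exp(-1j * E(k) * t / hbar)
               for k, ck in zip([1, 2], c))

for t in [0.0, 0.7, 3.1]:
    y = np.abs(psi(t)) ** 2
    norm = np.sum((y[:-1] + y[1:]) / 2) * (x[1] - x[0])   # trapezoidal rule
    print(np.isclose(norm, 1.0))           # True at every time
```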
The success of the Schrödinger equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics.
Signals and systems
In the study of signals and systems, an eigenfunction of a system is a signal f(t) that, when input into the system, produces a response y(t) = λf(t), where λ is a complex scalar eigenvalue.[13]
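A standard example is that complex exponentials are eigenfunctions of linear time-invariant (LTI) systems. The sketch below (with an arbitrarily chosen impulse response, not one from the text) feeds x[n] = e^{jωn} through a small FIR filter and confirms that the output is the same signal scaled by the complex eigenvalue, the frequency response H(e^{jω}):

```python
import numpy as np

# A discrete-time LTI system with impulse response h; the complex exponential
# input x[n] = exp(j*w*n) comes out scaled by lambda = H(e^{jw}).
h = np.array([0.5, 0.3, 0.2])              # impulse response of a small FIR system
w = 0.4 * np.pi                            # input frequency
n = np.arange(200)
x = np.exp(1j * w * n)

# Output by direct convolution (trim the edge so the filter fully overlaps x)
y = np.convolve(x, h)[len(h) - 1:len(x)]

# Eigenvalue: the frequency response H(e^{jw}) = sum_k h[k] exp(-j*w*k)
lam = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
print(np.allclose(y, lam * x[len(h) - 1:]))   # True: y = lambda * x
```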
See also
- Eigenvalues and eigenvectors
- Hilbert–Schmidt theorem
- Spectral theory of ordinary differential equations
- Fixed point combinator
- Fourier transform eigenfunctions
Notes
^ "What is an Eigenfunction?". deepai.org..mw-parser-output cite.citation{font-style:inherit}.mw-parser-output .citation q{quotes:"""""""'""'"}.mw-parser-output .citation .cs1-lock-free a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-limited a,.mw-parser-output .citation .cs1-lock-registration a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .citation .cs1-lock-subscription a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration{color:#555}.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration span{border-bottom:1px dotted;cursor:help}.mw-parser-output .cs1-ws-icon a{background:url("//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/12px-Wikisource-logo.svg.png")no-repeat;background-position:right .1em center}.mw-parser-output code.cs1-code{color:inherit;background:inherit;border:inherit;padding:inherit}.mw-parser-output .cs1-hidden-error{display:none;font-size:100%}.mw-parser-output .cs1-visible-error{font-size:100%}.mw-parser-output .cs1-maint{display:none;color:#33aa33;margin-left:0.3em}.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-format{font-size:95%}.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-left{padding-left:0.2em}.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-right{padding-right:0.2em}
^ abc Davydov 1976, p. 20.
^ ab Kusse 1998, p. 435.
^ ab Wasserman, Eric W. (2016). "Eigenfunction". MathWorld--A Wolfram Web Resource. Wolfram Research, Inc. Retrieved April 12, 2016.
^ ab Davydov 1976, p. 21.
^ abc Kusse 1998, p. 437.
^ abc Kusse 1998, p. 436.
^ Davydov 1976, p. 24.
^ ab Davydov 1976, p. 29.
^ Davydov 1976, p. 25.
^ Davydov 1976, p. 51.
^ Davydov 1976, p. 52.
^ Girod 2001, p. 49.
References
- Courant, R.; Hilbert, D. Methods of Mathematical Physics. ISBN 0471504475 (Volume 1 Paperback), ISBN 0471504394 (Volume 2 Paperback), ISBN 0471179906 (Hardback).
- Davydov, A. S. (1976). Quantum Mechanics. Translated, edited, and with additions by D. ter Haar (2nd ed.). Oxford: Pergamon Press. ISBN 0080204384.
- Girod, Bernd; Rabenstein, Rudolf; Stenger, Alexander (2001). Signals and Systems (2nd ed.). Wiley. ISBN 0471988006.
- Kusse, Bruce; Westwig, Erik (1998). Mathematical Physics. New York: Wiley Interscience. ISBN 0471154318.
External links
- More images (non-GPL) at Atom in a Box