User:XOR'easter/sandbox/perturbation theory


Retrieved from here and edited for tone and level of detail.

Rise of understanding of chaotic systems

The development of basic perturbation theory for differential equations was fairly complete by the middle of the 19th century. It was at that time that Charles-Eugène Delaunay was studying the perturbative expansion for the Earth–Moon–Sun system, and discovered the so-called "problem of small denominators". Here, the denominator appearing in the n-th term of the perturbative expansion could become arbitrarily small, causing the n-th correction to be as large as, or larger than, the first-order correction.

At the turn of the 20th century, this problem led Henri Poincaré to make one of the first deductions of the existence of chaos, and of what is poetically called the "butterfly effect": that even a very small perturbation can ultimately have a very large effect on a non-dissipative or "friction-free" dynamical system.

A partial resolution of the small-divisor problem came with the statement of the KAM theorem in 1954. Developed by Andrey Kolmogorov, Vladimir Arnold and Jürgen Moser, this theorem gives the conditions under which a nearly integrable Hamiltonian system will have only mildly chaotic behaviour under small perturbations.

Application to new problems in 20th century physics

Perturbation theory saw an expansion and evolution with the arrival of quantum mechanics. Although perturbation theory was used in the semi-classical theory of the Bohr atom, the calculations were complicated and subject to somewhat ambiguous interpretation. The discovery of Heisenberg's matrix mechanics simplified the application of perturbation theory. Notable examples are the Stark effect and the Zeeman effect, whose theory is simple enough to be included in standard undergraduate textbooks on quantum mechanics. Other early applications include the fine structure and the hyperfine structure of the hydrogen atom.

In modern times, perturbation theory underlies much of quantum chemistry and quantum field theory. In chemistry, perturbation theory was used to obtain the first solutions for the helium atom.

In the middle of the 20th century, Joseph E. Mayer and Elliott Montroll introduced diagrammatic methods for organizing perturbative calculations in statistical mechanics, and Richard Feynman realized that perturbative expansions in quantum field theory could be given a graphical representation in terms of what are now called Feynman diagrams. Diagrams now find use in many areas where perturbative expansions are studied.

Search for better methods for quantum mechanics

In the late 20th century, the quantum physics community grew broadly dissatisfied with perturbation theory, not only because of the difficulty of going beyond second order in the expansion, but also because of questions about whether the perturbative expansion even converges. This led to a strong interest in non-perturbative analysis, that is, the study of exactly solvable models.

Much of the theoretical work in non-perturbative analysis goes under the names of quantum groups and non-commutative geometry. The prototypical model is the Korteweg–de Vries equation, a highly non-linear equation whose interesting solutions, the solitons, cannot be reached by perturbation theory, even if the expansion were carried out to infinite order.
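
For illustration (a standard form of the equation, not specific to this article), the Korteweg–de Vries equation and its one-soliton solution may be written as

$$ \partial_t u + 6\, u\, \partial_x u + \partial_x^3 u = 0, \qquad u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\Bigl(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\Bigr), $$

where the soliton's amplitude and width both depend on its speed c, a feature no order of perturbation about the linearized equation reproduces.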

Perturbation orders

The standard exposition of perturbation theory is given in terms of the order to which the perturbation is carried out: first-order perturbation theory, second-order perturbation theory, and so on. It also depends on whether the perturbed states are degenerate, in which case singular perturbation theory is required. In the singular case extra care must be taken, and the theory is slightly more elaborate.

First-order, non-singular perturbation theory

This section develops, in simple terms,[1] the general theory for the perturbative solution to a differential equation to the first order. To keep the exposition simple, a crucial assumption is made: that the solutions to the unperturbed system are not degenerate, so that the perturbation series can be inverted. There are ways of dealing with the degenerate (or singular) case; these require extra care.

Suppose one wants to solve a differential equation of the form

$$ D\, g(x) = \lambda\, g(x), $$

where D is some specific differential operator, and λ is an eigenvalue. Many problems involving ordinary or partial differential equations can be cast in this form.
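
For example (a standard case, not specific to this article), the time-independent Schrödinger equation has this form, with D the Hamiltonian operator and λ the energy eigenvalue E:

$$ \Bigl( -\frac{\hbar^2}{2m}\,\frac{d^2}{dx^2} + V(x) \Bigr)\, \psi(x) = E\, \psi(x). $$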

It is presumed that the differential operator can be written in the form

$$ D = D^{(0)} + \varepsilon D^{(1)}, $$

where ε is presumed to be small, and that, furthermore, the complete set of solutions of $D^{(0)}$ is known.

That is, one has a set of solutions $f_n(x)$, labelled by some arbitrary index n, such that

$$ D^{(0)} f_n(x) = \lambda_n f_n(x). $$

Furthermore, one assumes that the set of solutions forms an orthonormal set,

$$ \int f_m(x)\, f_n(x)\, dx = \delta_{mn}, $$

with $\delta_{mn}$ the Kronecker delta.

To zeroth order, one expects that the solutions g(x) are then somehow "close" to one of the unperturbed solutions $f_n(x)$. That is,

$$ g(x) = f_n(x) + \mathcal{O}(\varepsilon) $$

and

$$ \lambda = \lambda_n + \mathcal{O}(\varepsilon), $$

where $\mathcal{O}(\varepsilon)$ denotes the relative size, in big-O notation, of the perturbation.

To solve this problem, one assumes that the solution g(x) can be written as a linear combination of the $f_m(x)$, with all the coefficients in the expansion small except for one, the coefficient of $f_n(x)$. Substituting this expansion into the differential equation and projecting onto $f_n(x)$ gives, to first order in ε,

$$ \lambda = \lambda_n + \varepsilon \int f_n(x)\, D^{(1)} f_n(x)\, dx, $$

since all of the other terms in the linear equation are of order $\varepsilon^2$. The above gives the eigenvalue to first order in perturbation theory.

The function g(x) to first order is obtained through similar reasoning, by projecting onto the remaining $f_m(x)$, giving

$$ g(x) = f_n(x) + \varepsilon \sum_{m \neq n} \frac{\int f_m(x)\, D^{(1)} f_n(x)\, dx}{\lambda_n - \lambda_m}\, f_m(x), $$

which gives the solution to the perturbed differential equation to first order in the perturbation ε.
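
The following is a minimal numerical sketch of these two first-order formulas, not part of the original exposition. It assumes a finite-dimensional (matrix) analogue of the operator D, so that the integrals above become inner products; all names and values are illustrative.

```python
import numpy as np

# Finite-dimensional analogue (an assumption for illustration): a real symmetric
# matrix D0 with known, non-degenerate eigenpairs, perturbed by eps * D1.
rng = np.random.default_rng(0)
n, eps = 6, 1e-3

# Unperturbed operator with distinct (non-degenerate) eigenvalues lambda_n.
lam0 = np.array([0.0, 1.0, 2.5, 4.0, 6.0, 9.0])
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthonormal columns play the role of f_n
D0 = Q @ np.diag(lam0) @ Q.T

# A symmetric perturbation D1.
A = rng.standard_normal((n, n))
D1 = (A + A.T) / 2

k = 2          # which unperturbed state to perturb
f = Q          # columns are the unperturbed "eigenfunctions" f_m

# First-order eigenvalue: lambda ~ lambda_k + eps * <f_k | D1 | f_k>
lam1 = lam0[k] + eps * f[:, k] @ D1 @ f[:, k]

# First-order solution: g ~ f_k + eps * sum_{m != k} <f_m|D1|f_k> / (lambda_k - lambda_m) * f_m
g = f[:, k].copy()
for m in range(n):
    if m != k:
        g += eps * (f[:, m] @ D1 @ f[:, k]) / (lam0[k] - lam0[m]) * f[:, m]

# Compare with the exact eigenvalue of D0 + eps*D1 closest to lambda_k.
exact = np.linalg.eigvalsh(D0 + eps * D1)
print("first-order eigenvalue:", lam1)
print("exact eigenvalue      :", exact[np.argmin(abs(exact - lam0[k]))])
print("residual |(D - lam1) g|:", np.linalg.norm((D0 + eps * D1) @ g - lam1 * g))
```

The printed eigenvalues agree to order $\varepsilon^2$, which is exactly what a first-order expansion promises.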

Several observations may be made about the form of this solution. First, the sum over functions with differences of eigenvalues in the denominator evokes the resolvent in Fredholm theory. This is no accident; the resolvent acts essentially as a kind of Green's function or propagator, passing the perturbation along. Higher-order perturbations resemble this form, with an additional sum over a resolvent appearing at each order.

The form of this solution also illustrates the idea behind the small-divisor problem. If, for whatever reason, two eigenvalues are close, so that their difference becomes small, the corresponding term in the above sum becomes disproportionately large. In particular, if this happens in higher-order terms, the higher-order perturbation may become as large as, or larger than, the first-order perturbation. Such a situation calls into question the validity of the perturbative analysis to begin with; it is frequently encountered in chaotic dynamical systems and requires techniques other than perturbation theory.

Curiously, the situation is not at all bad if two or more eigenvalues are exactly equal. This case is referred to as singular or degenerate perturbation theory. The degeneracy of eigenvalues indicates that the unperturbed system has some sort of symmetry, and that the generators of that symmetry commute with the unperturbed differential operator. Typically, the perturbing term does not possess the symmetry, so the full solutions do not either; one says that the perturbation lifts or breaks the degeneracy. In this case, the perturbation can still be performed; however, care must be taken to work in a basis for the degenerate unperturbed states in which the perturbation is diagonal, so that these map one-to-one onto the perturbed states rather than being mixtures.
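
A similarly hedged sketch of the degenerate case, again using an illustrative finite-dimensional analogue rather than anything from the original article: the first-order corrections are the eigenvalues of the perturbation restricted to the degenerate subspace, and its eigenvectors are the "good" basis just described.

```python
import numpy as np

# Illustrative setup (an assumption, not from the article): D0 has a two-fold
# degenerate eigenvalue 1.0, and the perturbation eps * D1 couples those states.
eps = 1e-3
D0 = np.diag([1.0, 1.0, 3.0])
D1 = np.array([[0.0, 0.5, 0.1],
               [0.5, 0.2, 0.0],
               [0.1, 0.0, 0.3]])

# The degenerate subspace is spanned by the first two basis vectors.
P = np.eye(3)[:, :2]

# Diagonalize the perturbation restricted to that subspace.
V = P.T @ D1 @ P                  # 2x2 block of D1 in the degenerate subspace
shifts, C = np.linalg.eigh(V)     # first-order shifts and the "good" basis

# First-order eigenvalues of the formerly degenerate pair.
approx = 1.0 + eps * shifts

# Compare with the exact lowest two eigenvalues of the perturbed operator.
exact = np.sort(np.linalg.eigvalsh(D0 + eps * D1))[:2]
print("first-order:", approx)
print("exact      :", exact)
```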

Some modern applications and limitations

Both regular and singular perturbation theory are frequently used in physics and engineering. Regular perturbation theory may only be used to find those solutions of a problem that evolve smoothly out of the initial solution when changing the parameter (that are "adiabatically connected" to the initial solution).

A well-known example from physics where regular perturbation theory fails is in fluid dynamics, when one treats the viscosity as a small parameter. Close to a boundary, the fluid velocity goes to zero, even for very small viscosity (the no-slip condition). For zero viscosity, it is not possible to impose this boundary condition, and a regular perturbative expansion amounts to an expansion about an unphysical solution. Singular perturbation theory can, however, be applied here; it amounts to "zooming in" at the boundaries (using the method of matched asymptotic expansions).
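
As a standard textbook illustration of this "zooming in" (not taken from this article), consider the boundary-value problem

$$ \varepsilon\, y'' + y' + y = 0, \qquad y(0) = 0, \quad y(1) = 1, \qquad 0 < \varepsilon \ll 1. $$

Setting ε = 0 loses the highest derivative, so the outer solution $y_{\text{out}} = e^{1-x}$ cannot satisfy both boundary conditions; rescaling $X = x/\varepsilon$ near $x = 0$ gives an inner (boundary-layer) solution, and matching the two yields the leading-order composite approximation

$$ y(x) \approx e^{1-x} - e^{\,1 - x/\varepsilon}. $$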

Perturbation theory can fail when the system can transition to a different "phase" of matter with qualitatively different behaviour that cannot be modelled by the physical formulas put into the perturbation theory (e.g., a solid crystal melting into a liquid). In some cases, this failure manifests itself as divergent behaviour of the perturbation series. Such divergent series can sometimes be resummed using techniques such as Borel resummation.
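
For reference (the standard definition, not from this article), Borel resummation replaces a divergent series $\sum_n a_n x^n$ by its Borel transform and the associated Laplace integral,

$$ \mathcal{B}(t) = \sum_{n=0}^{\infty} \frac{a_n}{n!}\, t^n, \qquad f(x) = \int_0^{\infty} e^{-t}\, \mathcal{B}(x t)\, dt, $$

which reproduces the original series term by term whenever the integral converges.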

Perturbation techniques can be also used to find approximate solutions to non-linear differential equations. Examples of techniques used to find approximate solutions to these types of problems are the Lindstedt–Poincaré technique and the method of multiple time scales.
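
As a standard example of the Lindstedt–Poincaré technique (not taken from this article), the undamped Duffing oscillator

$$ \ddot{x} + x + \varepsilon x^3 = 0, $$

oscillating with amplitude a, has, to first order in ε, the amplitude-dependent frequency

$$ \omega = 1 + \tfrac{3}{8}\, \varepsilon\, a^2 + \mathcal{O}(\varepsilon^2), $$

a secular-term-free result that a naive expansion in ε fails to capture.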

There is no guarantee that perturbative methods yield a convergent series; in fact, asymptotic series are the norm.
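
A classical illustration (standard, not drawn from this article) is the Euler integral

$$ \int_0^{\infty} \frac{e^{-t}}{1 + x t}\, dt \;\sim\; \sum_{n=0}^{\infty} (-1)^n\, n!\; x^n \qquad (x \to 0^+), $$

whose series diverges for every $x \neq 0$, yet gives excellent approximations when truncated at the optimal order.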

  1. ^ Sakurai, J. J.; Napolitano, J. (2011). Modern Quantum Mechanics (2nd ed.). Addison-Wesley. ISBN 978-0-8053-8291-4. Chapter 5.