Unconditional convergence

In mathematics, specifically functional analysis, a series is unconditionally convergent if all reorderings of the series converge to the same value. In contrast, a series is conditionally convergent if it converges but different orderings do not all converge to that same value. Unconditional convergence is equivalent to absolute convergence in finite-dimensional vector spaces, but is a weaker property in infinite dimensions.
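For instance, in $\mathbb{R}$ the series
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^2}$$
converges absolutely and hence unconditionally: every rearrangement of its terms has the same sum. The alternating harmonic series
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = \ln 2$$
is only conditionally convergent: suitable rearrangements of its terms converge to different values or diverge.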

Definition

Let $X$ be a topological vector space. Let $I$ be an index set and $x_i \in X$ for all $i \in I.$

The series $\sum_{i \in I} x_i$ is called unconditionally convergent to $x \in X$ if

  • the indexing set $I$ is countable, and
  • for every permutation (bijection) $\sigma : I \to I$ of $I$ the following relation holds: $\sum_{i \in I} x_{\sigma(i)} = x.$
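
A standard illustration of the gap between unconditional and absolute convergence (stated here for the sequence space $\ell^2$ with orthonormal basis $(e_n)_{n \ge 1}$) is the series
$$\sum_{n=1}^{\infty} \frac{e_n}{n}.$$
For every finite set $F \subset \mathbb{N}$ with $\min F > N,$
$$\Bigl\| \sum_{n \in F} \frac{e_n}{n} \Bigr\|^2 = \sum_{n \in F} \frac{1}{n^2} \le \sum_{n > N} \frac{1}{n^2} \longrightarrow 0 \quad (N \to \infty),$$
an estimate that does not depend on the order of the terms, so every rearrangement converges to the same sum. Since $\sum_{n} \|e_n/n\| = \sum_{n} 1/n$ diverges, the series is unconditionally but not absolutely convergent.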

Alternative definition

Unconditional convergence is often defined in an equivalent way: a series $\sum_{n=1}^{\infty} x_n$ is unconditionally convergent if for every sequence $(\varepsilon_n)_{n=1}^{\infty},$ with $\varepsilon_n \in \{-1, +1\},$ the series
$$\sum_{n=1}^{\infty} \varepsilon_n x_n$$
converges.
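
To see how this criterion detects conditional convergence, take the real series with $x_n = (-1)^{n+1}/n$ from above and choose the signs $\varepsilon_n = (-1)^{n+1}.$ Then
$$\sum_{n=1}^{\infty} \varepsilon_n x_n = \sum_{n=1}^{\infty} \frac{1}{n}$$
is the divergent harmonic series, so the alternating harmonic series is not unconditionally convergent, in agreement with the definition above.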

If $X$ is a Banach space, every absolutely convergent series is unconditionally convergent, but the converse implication does not hold in general. Indeed, if $X$ is an infinite-dimensional Banach space, then by the Dvoretzky–Rogers theorem there always exists an unconditionally convergent series in this space that is not absolutely convergent. However, when $X = \mathbb{R}^n,$ the Riemann series theorem shows that a series $\sum_n x_n$ is unconditionally convergent if and only if it is absolutely convergent.
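
The rearrangement phenomenon behind the Riemann series theorem can also be observed numerically. The following Python sketch (an illustration only; the function name rearranged_partial_sum and the chosen targets are arbitrary) greedily reorders the terms of the conditionally convergent alternating harmonic series so that its partial sums approach a prescribed target:

    import math

    def rearranged_partial_sum(target, n_terms=100_000):
        """Greedily interleave the positive terms 1, 1/3, 1/5, ... and the
        negative terms -1/2, -1/4, -1/6, ... of the alternating harmonic
        series so that the partial sums approach `target`."""
        pos = 1          # denominator of the next unused positive term
        neg = 2          # denominator of the next unused negative term
        s = 0.0
        for _ in range(n_terms):
            if s <= target:
                s += 1.0 / pos   # take the next positive term 1/pos
                pos += 2
            else:
                s -= 1.0 / neg   # take the next negative term -1/neg
                neg += 2
        return s

    print(rearranged_partial_sum(0.0))   # close to 0
    print(rearranged_partial_sum(2.0))   # close to 2
    print(math.log(2))                   # ~0.6931, the sum in the usual order

The rearranged series uses exactly the same terms as the original, yet its partial sums can be steered toward 0, toward 2, or toward any other real number; for an unconditionally convergent series this is impossible.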


This article incorporates material from Unconditional convergence on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.