Doob's martingale inequality


In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time. As the name suggests, the result is usually given in the case that the process is a martingale, but the result is also valid for submartingales.

The inequality is due to the American mathematician Joseph L. Doob.

Statement of the inequality

The setting of Doob's inequality is a submartingale relative to a filtration of the underlying probability space. The probability measure on the sample space of the martingale will be denoted by P. The corresponding expected value of a random variable X, as defined by Lebesgue integration, will be denoted by E[X].

Informally, Doob's inequality states that the expected value of the process at some final time controls the probability that a sample path will reach above any particular value beforehand. As the proof uses very direct reasoning, it does not require any restrictive assumptions on the underlying filtration or on the process itself, unlike for many other theorems about stochastic processes. In the continuous-time setting, right-continuity (or left-continuity) of the sample paths is required, but only for the sake of knowing that the supremal value of a sample path equals the supremum over an arbitrary countable dense subset of times.

Discrete time

Let X1, ..., Xn be a discrete-time submartingale relative to a filtration F1 ⊂ ⋅⋅⋅ ⊂ Fn of the underlying probability space, which is to say:

    E[X_{i+1} | F_i] ≥ X_i    for every 1 ≤ i < n.

Doob's submartingale inequality says that

    P[ max_{1≤i≤n} X_i ≥ C ] ≤ E[max(X_n, 0)] / C
for any positive number C. The proof relies on the set-theoretic fact that the event defined by max(Xi) ≥ C may be decomposed as the disjoint union of the events Ei defined by Xi ≥ C and Xj < C for all j < i. Then

    C P(E_i) ≤ E[X_i 1_{E_i}] ≤ E[E[X_n | F_i] 1_{E_i}] = E[X_n 1_{E_i}],

having made use of the submartingale property for the last inequality and the fact that E_i is measurable with respect to F_i for the last equality. Summing this result as i ranges from 1 to n results in the conclusion

    C P[ max_{1≤i≤n} X_i ≥ C ] ≤ E[ X_n 1_{max_{1≤i≤n} X_i ≥ C} ],
which is sharper than the stated result. By using the elementary fact that Xn ≤ max(Xn, 0), the given submartingale inequality follows.

In this proof, the submartingale property is used once, together with the definition of conditional expectation.[1] The proof can also be phrased in the language of stochastic processes so as to become a corollary of the powerful theorem that a stopped submartingale is itself a submartingale.[2] In this setup, the minimal index i appearing in the above proof is interpreted as a stopping time.
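The discrete-time inequality is easy to probe numerically. Below is a minimal Monte Carlo sketch (variable names are illustrative, not from the cited texts) checking P[max_i |S_i| ≥ C] ≤ E[|S_n|]/C for a simple ±1 random walk S, whose absolute value is a nonnegative submartingale:

```python
import random

# Monte Carlo sketch of the discrete-time submartingale inequality
# (illustrative names).  S_i is a simple +/-1 random walk, so
# X_i = |S_i| is a nonnegative submartingale and Doob's inequality
# gives  P[max_i |S_i| >= C] <= E[|S_n|] / C.
rng = random.Random(0)
n_steps, n_paths, C = 100, 20_000, 15
maxima, finals = [], []
for _ in range(n_paths):
    s, running_max = 0, 0
    for _ in range(n_steps):
        s += rng.choice((-1, 1))
        running_max = max(running_max, abs(s))
    maxima.append(running_max)
    finals.append(abs(s))
lhs = sum(m >= C for m in maxima) / n_paths  # estimate of P[max |S_i| >= C]
rhs = sum(finals) / n_paths / C              # estimate of E[|S_n|] / C
print(lhs, "<=", rhs)
```

The bound is usually slack; the sketch only illustrates that the estimated left-hand side stays below the right-hand side.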

Continuous time

Now let Xt be a submartingale indexed by an interval [0, T] of real numbers, relative to a filtration Ft of the underlying probability space, which is to say:

    E[X_t | F_s] ≥ X_s

for all s < t. Doob's submartingale inequality says that if the sample paths of the process are almost-surely right-continuous, then

    P[ sup_{0≤t≤T} X_t ≥ C ] ≤ E[max(X_T, 0)] / C
for any positive number C. This is a corollary of the above discrete-time result, obtained by writing

    { sup_{0≤t≤T} X_t > C } = { sup_{t ∈ ([0,T]∩Q) ∪ {T}} X_t > C } = ∪_i { max_{t ∈ (Q_i∩[0,T]) ∪ {T}} X_t > C },
in which Q1 ⊂ Q2 ⊂ ⋅⋅⋅ is any increasing sequence of finite sets whose union is the set of all rational numbers. The first equality is a consequence of the right-continuity assumption, while the second equality is purely set-theoretic. The discrete-time inequality applies to say that

    P[ max_{t ∈ (Q_i∩[0,T]) ∪ {T}} X_t > C ] ≤ E[max(X_T, 0)] / C
for each i, and this passes to the limit to yield the submartingale inequality.[3] This passage from discrete time to continuous time is very flexible, as it only requires a countable dense subset of [0,T], which can then automatically be built out of an increasing sequence of finite sets. As such, the submartingale inequality holds even for more general index sets, which are not required to be intervals or natural numbers.[4]

Further inequalities

There are further submartingale inequalities also due to Doob. Now let Xt be a martingale or a nonnegative submartingale; if the index set is uncountable, then (as above) assume that the sample paths are right-continuous. In these scenarios, Jensen's inequality implies that |Xt|^p is a submartingale for any number p ≥ 1, provided that these new random variables all have finite integral. The submartingale inequality is then applicable to say that[5]

    P[ sup_{0≤t≤T} |X_t| ≥ C ] ≤ E[|X_T|^p] / C^p

for any positive number C. Here T is the final time, i.e. the largest value of the index set. Furthermore, one has

    E[ sup_{0≤t≤T} |X_t|^p ] ≤ (p / (p − 1))^p E[|X_T|^p]
if p is larger than one. This bound, sometimes known as Doob's maximal inequality, is a direct result of combining the layer cake representation with the submartingale inequality and Hölder's inequality.[6]

In addition to the above inequality, there holds[7]

    E[ sup_{0≤t≤T} |X_t| ] ≤ (e / (e − 1)) ( 1 + E[ |X_T| log^+ |X_T| ] ).

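The maximal inequality with p = 2 can likewise be checked by simulation. The following sketch (illustrative names, simple random walk as the martingale) estimates both sides of E[(max_i |S_i|)^2] ≤ 4 E[S_n^2]:

```python
import random

# Monte Carlo sketch of Doob's maximal inequality with p = 2
# (illustrative names): for a simple +/-1 random walk S_i,
#   E[(max_i |S_i|)^2] <= (p/(p-1))^p * E[S_n^2] = 4 * E[S_n^2].
rng = random.Random(1)
n_steps, n_paths = 100, 20_000
sum_max_sq = 0.0
sum_final_sq = 0.0
for _ in range(n_paths):
    s, running_max = 0, 0
    for _ in range(n_steps):
        s += rng.choice((-1, 1))
        running_max = max(running_max, abs(s))
    sum_max_sq += running_max ** 2
    sum_final_sq += s ** 2
lhs = sum_max_sq / n_paths        # estimate of E[(max_i |S_i|)^2]
rhs = 4 * sum_final_sq / n_paths  # 4 * E[S_n^2]; E[S_n^2] = n_steps here
print(lhs, "<=", rhs)
```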
Related inequalities

Doob's inequality for discrete-time martingales implies Kolmogorov's inequality: if X1, X2, ... is a sequence of real-valued independent random variables, each with mean zero, it is clear that

    E[S_{n+1} | X_1, ..., X_n] = S_n + E[X_{n+1}] = S_n,

so Sn = X1 + ... + Xn is a martingale. Note that Jensen's inequality implies that |Sn| is a nonnegative submartingale if Sn is a martingale. Hence, taking p = 2 in Doob's martingale inequality,

    P[ max_{1≤i≤n} |S_i| ≥ λ ] ≤ E[S_n^2] / λ^2,
which is precisely the statement of Kolmogorov's inequality.[8]
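Kolmogorov's inequality can be illustrated with independent mean-zero uniform variables; the sketch below (illustrative names, not from any of the cited texts) compares the empirical exceedance probability with Var(S_n)/λ²:

```python
import random

# Monte Carlo sketch of Kolmogorov's inequality (illustrative names).
# X_i are i.i.d. uniform on [-1, 1] (mean 0, variance 1/3), so
#   P[max_{i<=n} |S_i| >= lam] <= Var(S_n) / lam^2 = n / (3 * lam^2).
rng = random.Random(2)
n, n_paths, lam = 300, 10_000, 20.0
exceed = 0
for _ in range(n_paths):
    s, running_max = 0.0, 0.0
    for _ in range(n):
        s += rng.uniform(-1.0, 1.0)
        running_max = max(running_max, abs(s))
    if running_max >= lam:
        exceed += 1
lhs = exceed / n_paths      # estimate of P[max |S_i| >= lam]
rhs = n / (3 * lam ** 2)    # Var(S_n) / lam^2 = 0.25
print(lhs, "<=", rhs)
```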

Application: Brownian motion

Let B denote canonical one-dimensional Brownian motion. Then[9]

    P[ sup_{0≤t≤T} B_t ≥ C ] ≤ exp(−C^2 / (2T)).

The proof is as follows: since the exponential function is monotonically increasing, for any non-negative λ,

    P[ sup_{0≤t≤T} B_t ≥ C ] = P[ sup_{0≤t≤T} exp(λB_t) ≥ exp(λC) ].

By Doob's inequality, and since the exponential of Brownian motion is a positive submartingale,

    P[ sup_{0≤t≤T} exp(λB_t) ≥ exp(λC) ] ≤ E[exp(λB_T)] / exp(λC) = exp(λ^2 T / 2 − λC).

Since the left-hand side does not depend on λ, choose λ to minimize the right-hand side: λ = C/T gives the desired inequality.
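This bound can be checked against a discretized Brownian path; the following sketch (illustrative names) approximates B by a Gaussian random walk on a grid, an approximation that slightly underestimates the true supremum:

```python
import math
import random

# Monte Carlo sketch of the Brownian bound above (illustrative names).
# B is approximated by a Gaussian random walk on n_steps grid points;
# the discretization slightly underestimates the supremum.
rng = random.Random(3)
T, C = 1.0, 2.0
n_steps, n_paths = 200, 5_000
dt = T / n_steps
exceed = 0
for _ in range(n_paths):
    b, running_max = 0.0, 0.0
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(dt))
        running_max = max(running_max, b)
    if running_max >= C:
        exceed += 1
lhs = exceed / n_paths               # estimate of P[sup B_t >= C]
rhs = math.exp(-C * C / (2.0 * T))   # exp(-C^2 / (2T))
print(lhs, "<=", rhs)
```

The exponential bound is not tight (the reflection principle gives the exact probability 2 P[B_T ≥ C]), but it decays at the correct Gaussian rate in C.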

References

  1. ^ Billingsley 1995, Theorem 31.3; Doob 1953, Theorem VII.3.2; Hall & Heyde 1980, Theorem 2.1; Shiryaev 2019, Theorem 7.3.1.
  2. ^ Doob 1953, Theorem VII.3.2; Durrett 2019, Theorem 5.4.2; Kallenberg 2021, Theorem 9.16; Revuz & Yor 1999, Proposition II.1.5.
  3. ^ Karatzas & Shreve 1991, Theorem 1.3.8.
  4. ^ Doob 1953, p. 353; Loève 1978, Section 39.
  5. ^ Revuz & Yor 1999, Corollary II.1.6 and Theorem II.1.7.
  6. ^ Hall & Heyde 1980, Theorem 2.2; Karatzas & Shreve 1991, Theorem 1.3.8; Revuz & Yor 1999, Corollary II.1.6 and Theorem II.1.7.
  7. ^ Durrett 2019, p. 55, Theorem 5.4.4; Revuz & Yor 1999; Shiryaev 2019, Theorem 7.3.2.
  8. ^ Durrett 2019, Example 5.4.1.
  9. ^ Revuz & Yor 1999, Proposition II.1.8.

Sources

  • Billingsley, Patrick (1995). Probability and measure. Wiley Series in Probability and Mathematical Statistics (Third edition of 1979 original ed.). New York: John Wiley & Sons, Inc. ISBN 0-471-00710-2. MR 1324786.
  • Doob, J. L. (1953). Stochastic processes. New York: John Wiley & Sons, Inc. MR 0058896.
  • Durrett, Rick (2019). Probability – theory and examples. Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 49 (Fifth edition of 1991 original ed.). Cambridge: Cambridge University Press. doi:10.1017/9781108591034. ISBN 978-1-108-47368-2. MR 3930614. S2CID 242105330.
  • Hall, P.; Heyde, C. C. (1980). Martingale limit theory and its application. Probability and Mathematical Statistics. San Diego, CA: Academic Press. doi:10.1016/C2013-0-10818-5. ISBN 0-12-319350-8.
  • Kallenberg, Olav (2021). Foundations of modern probability. Probability Theory and Stochastic Modelling. Vol. 99 (Third edition of 1997 original ed.). Springer, Cham. doi:10.1007/978-3-030-61871-1. ISBN 978-3-030-61871-1. MR 4226142.
  • Karatzas, Ioannis; Shreve, Steven E. (1991). Brownian motion and stochastic calculus. Graduate Texts in Mathematics. Vol. 113 (Second edition of 1988 original ed.). New York: Springer-Verlag. doi:10.1007/978-1-4612-0949-2. ISBN 0-387-97655-8. MR 1121940.
  • Loève, Michel (1978). Probability theory. II. Graduate Texts in Mathematics. Vol. 46 (Fourth edition of 1955 original ed.). New York–Heidelberg: Springer-Verlag. ISBN 0-387-90262-7. MR 0651018.
  • Revuz, Daniel; Yor, Marc (1999). Continuous martingales and Brownian motion. Grundlehren der mathematischen Wissenschaften. Vol. 293 (Third edition of 1991 original ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-662-06400-9. ISBN 3-540-64325-7. MR 1725357.
  • Shiryaev, Albert N. (2019). Probability—2. Graduate Texts in Mathematics. Vol. 95. Translated by Boas, R. P.; Chibisov, D. M. (Third edition of 1980 original ed.). New York: Springer. doi:10.1007/978-0-387-72208-5. ISBN 978-0-387-72207-8. MR 3930599.
