A Markov chain on a measurable state space is a discrete-time homogeneous Markov chain whose state space is a measurable space.
History
The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic processes with a discrete or continuous index set, living on a countable or finite state space; see Doob[1] or Chung.[2] Since the late 20th century it has become more common to consider a Markov chain as a stochastic process with a discrete index set, living on a measurable state space.[3][4][5]
Definition
Denote with $(E,\Sigma)$ a measurable space and with $p$ a Markov kernel with source and target $(E,\Sigma)$. A stochastic process $(X_n)_{n\in\mathbb{N}}$ on $(\Omega,\mathcal{F},\mathbb{P})$ is called a time-homogeneous Markov chain with Markov kernel $p$ and start distribution $\mu$ if

$$\mathbb{P}[X_{0}\in A_{0},X_{1}\in A_{1},\dots ,X_{n}\in A_{n}]=\int _{A_{0}}\dots \int _{A_{n-1}}p(y_{n-1},A_{n})\,p(y_{n-2},dy_{n-1})\dots p(y_{0},dy_{1})\,\mu (dy_{0})$$

is satisfied for any $n\in\mathbb{N}$ and $A_{0},\dots ,A_{n}\in\Sigma$. For any Markov kernel and any probability measure one can construct an associated Markov chain.[4]
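As a minimal sketch of this construction, consider the countable case, where a Markov kernel is just a row-stochastic matrix with entries $p(x,\{y\})$ and the chain is built by drawing $X_0\sim\mu$ and then $X_{k+1}\sim p(X_k,\cdot)$. The two-state kernel and start distribution below are hypothetical illustration values, not taken from the article:

```python
import random

# Hypothetical two-state example: E = {0, 1}; the kernel is the row-stochastic
# matrix p[x][y] = p(x, {y}), and mu is the start distribution.
p = [[0.9, 0.1],
     [0.4, 0.6]]
mu = [0.5, 0.5]

def sample_chain(p, mu, n, seed=0):
    """Draw X_0 ~ mu, then X_{k+1} ~ p(X_k, .) -- the chain's defining recipe."""
    rng = random.Random(seed)
    x = rng.choices(range(len(mu)), weights=mu)[0]
    path = [x]
    for _ in range(n):
        x = rng.choices(range(len(p)), weights=p[x])[0]
        path.append(x)
    return path

path = sample_chain(p, mu, 10)   # a sample trajectory X_0, ..., X_10
```

On a general measurable state space the same recipe applies, with `rng.choices` replaced by sampling from the measure $p(x,\cdot)$.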
For any measure $\mu\colon\Sigma\to[0,\infty]$ we denote the Lebesgue integral of a $\mu$-integrable function $f\colon E\to\mathbb{R}\cup\{-\infty,\infty\}$ as $\int_E f(x)\,\mu(dx)$. For the measure $\nu_x\colon\Sigma\to[0,\infty]$ defined by $\nu_x(A):=p(x,A)$ we use the following notation:

$$\int _{E}f(y)\,p(x,dy):=\int _{E}f(y)\,\nu _{x}(dy).$$
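In the countable case this kernel integral reduces to a weighted sum, $\int_E f(y)\,p(x,dy)=\sum_y f(y)\,p(x,\{y\})$. A small sketch, reusing the hypothetical two-state kernel from above:

```python
# Hypothetical two-state kernel: p[x][y] = p(x, {y}).
p = [[0.9, 0.1],
     [0.4, 0.6]]

def kernel_integral(f, p, x):
    """Compute ∫_E f(y) p(x, dy) as the sum Σ_y f(y) p(x, {y})."""
    return sum(f(y) * p[x][y] for y in range(len(p[x])))

val = kernel_integral(lambda y: y, p, 0)   # 0*0.9 + 1*0.1 = 0.1
```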
Basic properties
Starting in a single point
If $\mu$ is a Dirac measure in $x$, we denote the Markov chain associated with a Markov kernel $p$ and starting distribution $\mu$ as $(X_n)_{n\in\mathbb{N}}$ on $(\Omega,\mathcal{F},\mathbb{P}_x)$ and write the expectation value

$$\mathbb {E} _{x}[X]=\int _{\Omega }X(\omega )\,\mathbb {P} _{x}(d\omega )$$

for a $\mathbb{P}_x$-integrable function $X$. By definition, we then have $\mathbb{P}_x[X_0=x]=1$.

For any measurable function $f\colon E\to[0,\infty]$ we have the following relation:[4]

$$\int _{E}f(y)\,p(x,dy)=\mathbb {E} _{x}[f(X_{1})].$$
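This relation can be checked numerically: starting the chain at a fixed point $x$, a Monte Carlo average of $f(X_1)$ should converge to the kernel integral. A sketch with a hypothetical two-state kernel (the matrix values are illustration only):

```python
import random

# Hypothetical two-state kernel p[x][y] = p(x, {y}); chain started at x (Dirac start).
p = [[0.9, 0.1],
     [0.4, 0.6]]
f = lambda y: float(y)
x = 0

# Exact value of ∫_E f(y) p(x, dy) as a finite sum.
exact = sum(f(y) * p[x][y] for y in range(2))

# Monte Carlo estimate of E_x[f(X_1)]: draw X_1 ~ p(x, .) many times and average.
rng = random.Random(1)
draws = [rng.choices([0, 1], weights=p[x])[0] for _ in range(100_000)]
estimate = sum(f(y) for y in draws) / len(draws)
```

With 100,000 draws the estimate agrees with the exact integral to well within one percentage point.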
Family of Markov kernels
For a Markov kernel $p$ with starting distribution $\mu$ one can introduce a family of Markov kernels $(p_n)_{n\in\mathbb{N}}$ by

$$p_{n+1}(x,A):=\int _{E}p_{n}(y,A)\,p(x,dy)$$

for $n\in\mathbb{N},\ n\geq 1$, and $p_1:=p$. For the associated Markov chain $(X_n)_{n\in\mathbb{N}}$ according to $p$ and $\mu$ one obtains

$$\mathbb {P} [X_{n}\in A]=\int _{E}p_{n}(x,A)\,\mu (dx).$$
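On a finite state space the recursion $p_{n+1}(x,A)=\int_E p_n(y,A)\,p(x,dy)$ is exactly matrix multiplication, so $p_n$ is the $n$-th matrix power of the kernel. A sketch with a hypothetical two-state kernel:

```python
# Hypothetical two-state kernel p[x][y] = p(x, {y}).
p = [[0.9, 0.1],
     [0.4, 0.6]]

def compose(a, b):
    """(a b)[x][z] = Σ_y a[x][y] * b[y][z] -- the finite-state form of the recursion."""
    n = len(a)
    return [[sum(a[x][y] * b[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

p2 = compose(p, p)   # two-step transition probabilities p_2(x, {z})
```

Each row of `p2` is again a probability distribution, as the n-step kernels must be.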
Stationary measure
A probability measure $\mu$ is called a stationary measure of a Markov kernel $p$ if

$$\int _{A}\mu (dx)=\int _{E}p(x,A)\,\mu (dx)$$

holds for any $A\in\Sigma$. If $(X_n)_{n\in\mathbb{N}}$ on $(\Omega,\mathcal{F},\mathbb{P})$ denotes the Markov chain according to a Markov kernel $p$ with stationary measure $\mu$, and the distribution of $X_0$ is $\mu$, then all $X_n$ have the same probability distribution, namely:

$$\mathbb {P} [X_{n}\in A]=\mu (A)$$

for any $A\in\Sigma$.
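On a finite state space the stationarity condition reads $\mu(\{y\})=\sum_x p(x,\{y\})\,\mu(\{x\})$, i.e. $\mu$ is a fixed point of the kernel acting on measures. For the hypothetical two-state kernel used above, $\mu=(0.8,0.2)$ solves this equation, which the following sketch verifies:

```python
# Hypothetical two-state kernel and its stationary measure.
p = [[0.9, 0.1],
     [0.4, 0.6]]
mu = [0.8, 0.2]

# pushforward[y] = ∫_E p(x, {y}) mu(dx) -- the distribution of X_1 when X_0 ~ mu.
pushforward = [sum(mu[x] * p[x][y] for x in range(2)) for y in range(2)]
# pushforward equals mu, so every X_n has distribution mu when X_0 ~ mu.
```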
Reversibility
A Markov kernel $p$ is called reversible according to a probability measure $\mu$ if

$$\int _{A}p(x,B)\,\mu (dx)=\int _{B}p(x,A)\,\mu (dx)$$

holds for any $A,B\in\Sigma$. Replacing $A=E$ shows that if $p$ is reversible according to $\mu$, then $\mu$ must be a stationary measure of $p$.
References
- ^ Joseph L. Doob: Stochastic Processes. New York: John Wiley & Sons, 1953.
- ^ Kai L. Chung: Markov Chains with Stationary Transition Probabilities. Second edition. Berlin: Springer-Verlag, 1974.
- ^ Sean Meyn, Richard L. Tweedie: Markov Chains and Stochastic Stability. Second edition. Cambridge: Cambridge University Press, 2009.
- ^ Daniel Revuz: Markov Chains. Second edition. Amsterdam: North-Holland, 1984.
- ^ Rick Durrett: Probability: Theory and Examples. Fourth edition, 2005.