In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions, respectively. Many well-known distributions have simple convolutions. The following is a list of these convolutions. Each statement is of the form
\sum_{i=1}^{n} X_i \sim Y

where X_1, X_2, \dots, X_n are independent random variables, and Y is the distribution that results from the convolution of X_1, X_2, \dots, X_n. In place of X_i and Y, the names of the corresponding distributions and their parameters have been indicated.
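All of the identities below can be sanity-checked by simulation: draw the independent summands, add them, and compare the empirical distribution of the sum with samples drawn directly from the claimed result. A minimal Python sketch of this approach, assuming NumPy and SciPy are available (the helper name check_convolution is illustrative, not a library function):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def check_convolution(summand_samplers, result_sampler, size=100_000):
        # Two-sample Kolmogorov-Smirnov test between the simulated sum and
        # samples drawn directly from the claimed distribution.
        total = sum(sampler(size) for sampler in summand_samplers)
        claimed = result_sampler(size)
        return stats.ks_2samp(total, claimed)

    # Example: Exponential(rate 2) + Exponential(rate 2) vs. Gamma(shape 2, rate 2).
    result = check_convolution(
        [lambda n: rng.exponential(scale=0.5, size=n)] * 2,  # scale = 1/rate
        lambda n: rng.gamma(shape=2.0, scale=0.5, size=n),
    )
    print(result.pvalue)  # a large p-value is consistent with the identity

A large p-value does not prove an identity, but a consistently tiny one signals a mismatch, often a parameterization mix-up such as rate versus scale.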
Discrete distributions
\sum_{i=1}^{n} \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(n,p) \qquad 0<p<1 \quad n=1,2,\dots
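A quick numerical illustration of this identity, assuming NumPy and SciPy:

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(1)
    n, p, trials = 10, 0.3, 200_000

    # Each row is one realization of n independent Bernoulli(p) draws.
    sums = rng.binomial(1, p, size=(trials, n)).sum(axis=1)

    # Empirical frequencies should match the Binomial(n, p) pmf.
    for k in (0, 3, 7):
        print(k, (sums == k).mean(), binom.pmf(k, n, p))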
\sum_{i=1}^{n} \mathrm{Binomial}(n_i,p) \sim \mathrm{Binomial}\left(\sum_{i=1}^{n} n_i, p\right) \qquad 0<p<1 \quad n_i=1,2,\dots
\sum_{i=1}^{n} \mathrm{NegativeBinomial}(n_i,p) \sim \mathrm{NegativeBinomial}\left(\sum_{i=1}^{n} n_i, p\right) \qquad 0<p<1 \quad n_i=1,2,\dots
\sum_{i=1}^{n} \mathrm{Geometric}(p) \sim \mathrm{NegativeBinomial}(n,p) \qquad 0<p<1 \quad n=1,2,\dots
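One practical caveat when checking this identity with SciPy: scipy.stats.geom counts the number of trials up to and including the first success (support 1, 2, ...), while scipy.stats.nbinom counts failures before the n-th success (support 0, 1, ...), so the simulated sum must be shifted by n before comparing. A sketch under that convention:

    import numpy as np
    from scipy.stats import geom, nbinom

    rng = np.random.default_rng(2)
    n, p, trials = 5, 0.4, 200_000

    # Sum of n geometric draws, each counting trials until the first success.
    sums = geom.rvs(p, size=(trials, n), random_state=rng).sum(axis=1)

    # Shift by n to land on nbinom's failures-only support.
    for k in (n, n + 2, n + 5):
        print(k, (sums == k).mean(), nbinom.pmf(k - n, n, p))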
\sum_{i=1}^{n} \mathrm{Poisson}(\lambda_i) \sim \mathrm{Poisson}\left(\sum_{i=1}^{n} \lambda_i\right) \qquad \lambda_i>0
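Because both the mean and the variance of a Poisson distribution equal its rate, a simulated sum offers a quick check, assuming NumPy:

    import numpy as np

    rng = np.random.default_rng(3)
    lams = [0.5, 1.2, 2.3]
    trials = 200_000

    # Sum of independent Poisson draws with rates lams.
    sums = sum(rng.poisson(lam, size=trials) for lam in lams)
    print(sums.mean(), sum(lams))  # mean of the sum vs. summed rate
    print(sums.var(), sum(lams))   # variance of the sum vs. summed rate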
Continuous distributions
\sum_{i=1}^{n} \operatorname{Stable}(\alpha,\beta_i,c_i,\mu_i) \sim \operatorname{Stable}\left(\alpha, \frac{\sum_{i=1}^{n}\beta_i c_i^{\alpha}}{\sum_{i=1}^{n}c_i^{\alpha}}, \left(\sum_{i=1}^{n}c_i^{\alpha}\right)^{1/\alpha}, \sum_{i=1}^{n}\mu_i\right) \qquad 0<\alpha\leq 2 \quad -1\leq\beta_i\leq 1 \quad c_i>0 \quad -\infty<\mu_i<\infty
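This can be checked numerically with scipy.stats.levy_stable, with the caveat that the identity is stated for a parameterization in which location parameters add (SciPy's default S1 parameterization behaves this way for α ≠ 1); a sketch under that assumption:

    import numpy as np
    from scipy.stats import levy_stable, ks_2samp

    rng = np.random.default_rng(4)
    alpha = 1.5
    betas, cs, mus = [0.3, -0.2], [1.0, 2.0], [0.0, 1.0]
    size = 50_000

    # Simulate the sum of two independent stable variables with common alpha.
    total = sum(
        levy_stable.rvs(alpha, b, loc=m, scale=c, size=size, random_state=rng)
        for b, c, m in zip(betas, cs, mus)
    )

    # Parameters of the claimed stable distribution of the sum.
    beta_sum = sum(b * c**alpha for b, c in zip(betas, cs)) / sum(c**alpha for c in cs)
    c_sum = sum(c**alpha for c in cs) ** (1 / alpha)
    claimed = levy_stable.rvs(alpha, beta_sum, loc=sum(mus), scale=c_sum,
                              size=size, random_state=rng)
    print(ks_2samp(total, claimed).pvalue)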
The following three statements are special cases of the above statement:
\sum_{i=1}^{n} \operatorname{Normal}(\mu_i,\sigma_i^2) \sim \operatorname{Normal}\left(\sum_{i=1}^{n}\mu_i, \sum_{i=1}^{n}\sigma_i^2\right) \qquad -\infty<\mu_i<\infty \quad \sigma_i^2>0 \quad (\alpha=2,\ \beta_i=0)
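A direct simulation check, assuming NumPy (the location of the sum is the sum of locations, and the variances add):

    import numpy as np

    rng = np.random.default_rng(5)
    mus, sigmas = [0.0, 1.0, -2.0], [1.0, 0.5, 2.0]
    size = 200_000

    total = sum(rng.normal(m, s, size) for m, s in zip(mus, sigmas))
    print(total.mean(), sum(mus))                            # means add
    print(total.std(), np.sqrt(sum(s**2 for s in sigmas)))   # variances add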
\sum_{i=1}^{n} \operatorname{Cauchy}(a_i,\gamma_i) \sim \operatorname{Cauchy}\left(\sum_{i=1}^{n}a_i, \sum_{i=1}^{n}\gamma_i\right) \qquad -\infty<a_i<\infty \quad \gamma_i>0 \quad (\alpha=1,\ \beta_i=0)
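Unlike the normal case, the Cauchy scale parameters add linearly rather than in quadrature; a simulation sketch, assuming SciPy:

    import numpy as np
    from scipy.stats import cauchy, ks_2samp

    rng = np.random.default_rng(6)
    locs, gammas = [0.0, 1.0], [0.5, 1.5]
    size = 100_000

    total = sum(cauchy.rvs(loc=a, scale=g, size=size, random_state=rng)
                for a, g in zip(locs, gammas))
    claimed = cauchy.rvs(loc=sum(locs), scale=sum(gammas), size=size,
                         random_state=rng)
    print(ks_2samp(total, claimed).pvalue)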
\sum_{i=1}^{n} \operatorname{Levy}(\mu_i,c_i) \sim \operatorname{Levy}\left(\sum_{i=1}^{n}\mu_i, \left(\sum_{i=1}^{n}\sqrt{c_i}\right)^2\right) \qquad -\infty<\mu_i<\infty \quad c_i>0 \quad (\alpha=1/2,\ \beta_i=1)
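Here the square roots of the scale parameters add; a simulation sketch using scipy.stats.levy:

    import numpy as np
    from scipy.stats import levy, ks_2samp

    rng = np.random.default_rng(7)
    mus, cs = [0.0, 1.0], [0.5, 2.0]
    size = 100_000

    total = sum(levy.rvs(loc=m, scale=c, size=size, random_state=rng)
                for m, c in zip(mus, cs))
    claimed = levy.rvs(loc=sum(mus), scale=sum(np.sqrt(cs)) ** 2, size=size,
                       random_state=rng)
    print(ks_2samp(total, claimed).pvalue)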
\sum_{i=1}^{n} \operatorname{Gamma}(\alpha_i,\beta) \sim \operatorname{Gamma}\left(\sum_{i=1}^{n}\alpha_i, \beta\right) \qquad \alpha_i>0 \quad \beta>0
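The identity holds whether β is read as a rate or as a scale, as long as it is the same for every summand; NumPy's gamma sampler takes a scale, so a rate β must be inverted. A sketch assuming the rate reading:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(8)
    alphas, beta = [0.5, 1.5, 2.0], 2.0   # beta as a rate; scale = 1/beta
    size = 100_000

    total = sum(rng.gamma(shape=a, scale=1 / beta, size=size) for a in alphas)
    claimed = rng.gamma(shape=sum(alphas), scale=1 / beta, size=size)
    print(ks_2samp(total, claimed).pvalue)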
\sum_{i=1}^{n} \operatorname{Voigt}(\mu_i,\gamma_i,\sigma_i) \sim \operatorname{Voigt}\left(\sum_{i=1}^{n}\mu_i, \sum_{i=1}^{n}\gamma_i, \sqrt{\sum_{i=1}^{n}\sigma_i^2}\right) \qquad -\infty<\mu_i<\infty \quad \gamma_i>0 \quad \sigma_i>0 [1]
\sum_{i=1}^{n} \operatorname{VarianceGamma}(\mu_i,\alpha,\beta,\lambda_i) \sim \operatorname{VarianceGamma}\left(\sum_{i=1}^{n}\mu_i, \alpha, \beta, \sum_{i=1}^{n}\lambda_i\right) \qquad -\infty<\mu_i<\infty \quad \lambda_i>0 \quad \sqrt{\alpha^2-\beta^2}>0 [2]
\sum_{i=1}^{n} \operatorname{Exponential}(\theta) \sim \operatorname{Erlang}(n,\theta) \qquad \theta>0 \quad n=1,2,\dots
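The Erlang distribution is the Gamma distribution with integer shape, so this is a special case of the Gamma identity above. A simulation sketch, reading θ as a rate (scipy.stats.erlang takes a scale, so the rate is inverted):

    import numpy as np
    from scipy.stats import erlang, ks_2samp

    rng = np.random.default_rng(9)
    n, theta = 4, 1.5
    size = 100_000

    total = rng.exponential(scale=1 / theta, size=(size, n)).sum(axis=1)
    claimed = erlang.rvs(n, scale=1 / theta, size=size, random_state=rng)
    print(ks_2samp(total, claimed).pvalue)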
\sum_{i=1}^{n} \operatorname{Exponential}(\lambda_i) \sim \operatorname{Hypoexponential}(\lambda_1,\dots,\lambda_n) \qquad \lambda_i>0 [3]
\sum_{i=1}^{n} \chi^2(r_i) \sim \chi^2\left(\sum_{i=1}^{n} r_i\right) \qquad r_i=1,2,\dots
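A simulation check, assuming SciPy:

    import numpy as np
    from scipy.stats import chi2, ks_2samp

    rng = np.random.default_rng(10)
    rs = [1, 3, 6]
    size = 100_000

    total = sum(chi2.rvs(r, size=size, random_state=rng) for r in rs)
    claimed = chi2.rvs(sum(rs), size=size, random_state=rng)
    print(ks_2samp(total, claimed).pvalue)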
\sum_{i=1}^{r} N^2(0,1) \sim \chi_r^2 \qquad r=1,2,\dots
\sum_{i=1}^{n} (X_i-\bar{X})^2 \sim \sigma^2 \chi_{n-1}^2,
where
X_1, \dots, X_n
is a random sample from
N(\mu,\sigma^2)
and
\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i.
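The loss of one degree of freedom from centering at the sample mean can be seen by simulation, assuming SciPy:

    import numpy as np
    from scipy.stats import chi2, ks_2samp

    rng = np.random.default_rng(11)
    mu, sigma, n, reps = 3.0, 2.0, 10, 50_000

    x = rng.normal(mu, sigma, size=(reps, n))
    ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
    claimed = sigma**2 * chi2.rvs(n - 1, size=reps, random_state=rng)
    print(ks_2samp(ss, claimed).pvalue)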
Mixed distributions:
\operatorname{Normal}(\mu,\sigma^2) + \operatorname{Cauchy}(x_0,\gamma) \sim \operatorname{Voigt}(\mu+x_0,\gamma,\sigma) \qquad -\infty<\mu<\infty \quad -\infty<x_0<\infty \quad \gamma>0 \quad \sigma>0
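Since SciPy exposes the Voigt density as scipy.special.voigt_profile (centered at zero, with parameters σ and γ in that order), this mixed identity can be checked approximately by comparing a histogram of simulated Normal-plus-Cauchy sums with that density shifted to μ + x₀:

    import numpy as np
    from scipy.special import voigt_profile

    rng = np.random.default_rng(12)
    mu, sigma, x0, gamma = 1.0, 0.8, -0.5, 1.2
    size = 500_000

    # Normal(mu, sigma^2) plus Cauchy(x0, gamma), sampled directly.
    total = rng.normal(mu, sigma, size) + x0 + gamma * rng.standard_cauchy(size)

    # Empirical density near the center vs. the shifted Voigt density.
    hist, edges = np.histogram(total, bins=2000, range=(-50, 50), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    for g in (-2.0, 0.0, 0.5, 2.0):
        i = np.argmin(np.abs(centers - g))
        print(g, hist[i], voigt_profile(g - (mu + x0), sigma, gamma))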