In probability theory, to obtain a nondegenerate limiting distribution for the extreme value (the greatest value in a sample), it is necessary to "reduce" the actual greatest value by applying a linear transformation with coefficients that depend on the sample size.
If $X_1, X_2, \dots, X_n$ are independent random variables with common probability density function $p_{X_j}(x) = f(x)$ and corresponding cumulative distribution function $F(x)$, then the cumulative distribution function of $X'_n = \max\{X_1, \ldots, X_n\}$ is

$$F_{X'_n}(x) = [F(x)]^n.$$
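For illustration, this identity can be checked by simulation. The sketch below is not part of the article; it assumes an arbitrarily chosen exponential distribution, $F(x) = 1 - e^{-x}$, and compares the empirical distribution of the maximum with $[F(x)]^n$:

```python
# A minimal simulation check of F_{X'_n}(x) = [F(x)]^n.
# Assumption (not from the article): F is the exponential CDF 1 - exp(-x).
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

# Each row is one i.i.d. sample of size n; the row maximum is X'_n.
maxima = rng.exponential(size=(trials, n)).max(axis=1)

x = 1.5
empirical = np.mean(maxima <= x)       # Monte Carlo estimate of P(X'_n <= x)
theoretical = (1 - np.exp(-x)) ** n    # [F(x)]^n
print(empirical, theoretical)          # the two values should nearly agree
```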
If there is a limiting distribution of interest, the stability postulate states that the limiting distribution must arise as the limit of the distributions of linearly transformed ("reduced") values, such as $a_n X'_n + b_n$, where the coefficients $a_n, b_n$ may depend on $n$ but not on $x$.
To distinguish the limiting cumulative distribution function of the "reduced" greatest value from $F(x)$, we denote it by $G(x)$. It follows that $G(x)$ must satisfy the functional equation
$$[G(x)]^n = G(a_n x + b_n).$$
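As a concrete sketch, the functional equation can be verified numerically. The example below assumes a standard Gumbel CDF $G(x) = e^{-e^{-x}}$, for which $a_n = 1$ and $b_n = -\log(n)$ solve the equation (consistent with the maximum case further down):

```python
# Numerical check of [G(x)]^n = G(a_n x + b_n) for a standard Gumbel CDF.
# Assumption: G(x) = exp(-exp(-x)), with a_n = 1 and b_n = -log(n).
import numpy as np

def G(x):
    return np.exp(-np.exp(-x))

n = 7
a_n, b_n = 1.0, -np.log(n)
x = np.linspace(-2.0, 5.0, 15)
print(np.allclose(G(x) ** n, G(a_n * x + b_n)))  # True
```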
This functional equation was obtained by Maurice René Fréchet and also by Ronald Fisher. Boris Vladimirovich Gnedenko showed that the only distributions satisfying the stability postulate are the following:[1]
Gumbel distribution for the minimum stability postulate
If $X_i \sim \textrm{Gumbel}(\mu, \beta)$ (the minima-type Gumbel distribution) and $Y = \min\{X_1, \ldots, X_n\}$, then $a_n Y + b_n \sim X_i$, where $a_n = 1$ and $b_n = \beta \log(n)$. In other words,

$$Y \sim \textrm{Gumbel}(\mu - \beta \log(n), \beta).$$
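A Monte Carlo sketch of this claim, using SciPy's gumbel_l (the minima-type Gumbel distribution); the parameter values here are arbitrary:

```python
# Monte Carlo sketch of the minimum-stability claim; parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, beta, n, trials = 2.0, 1.5, 10, 100_000

# Minima of n i.i.d. minima-type Gumbel(mu, beta) variables.
samples = stats.gumbel_l.rvs(loc=mu, scale=beta, size=(trials, n), random_state=rng)
minima = samples.min(axis=1)

# Compare against Gumbel(mu - beta*log(n), beta) with a Kolmogorov-Smirnov test;
# a large p-value means the simulated minima are consistent with that law.
shifted = stats.gumbel_l(loc=mu - beta * np.log(n), scale=beta)
print(stats.kstest(minima, shifted.cdf))
```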
Extreme value distribution for the maximum stability postulate
If $X_i \sim \textrm{EV}(\mu, \sigma)$ and $Y = \max\{X_1, \ldots, X_n\}$, then $a_n Y + b_n \sim X_i$, where $a_n = 1$ and $b_n = \sigma \log({\tfrac{1}{n}})$. In other words,

$$Y \sim \textrm{EV}(\mu - \sigma \log({\tfrac{1}{n}}), \sigma).$$
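An analogous sketch for the maximum case, using SciPy's gumbel_r for the maxima-type extreme value distribution $\textrm{EV}(\mu, \sigma)$ (arbitrary parameters):

```python
# Monte Carlo sketch of the maximum-stability claim; parameters are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma, n, trials = 0.0, 2.0, 8, 100_000

# Maxima of n i.i.d. maxima-type Gumbel EV(mu, sigma) variables.
samples = stats.gumbel_r.rvs(loc=mu, scale=sigma, size=(trials, n), random_state=rng)
maxima = samples.max(axis=1)

# Note mu - sigma*log(1/n) = mu + sigma*log(n): the location shifts upward.
shifted = stats.gumbel_r(loc=mu - sigma * np.log(1.0 / n), scale=sigma)
print(stats.kstest(maxima, shifted.cdf))  # large p-value: consistent
```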
Fréchet distribution for the maximum stability postulate
If $X_i \sim \textrm{Fréchet}(\alpha, s, m)$ and $Y = \max\{X_1, \ldots, X_n\}$, then $a_n Y + b_n \sim X_i$, where $a_n = n^{-\tfrac{1}{\alpha}}$ and $b_n = m\left(1 - n^{-\tfrac{1}{\alpha}}\right)$. In other words,

$$Y \sim \textrm{Fréchet}(\alpha, n^{\tfrac{1}{\alpha}} s, m).$$
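A similar sketch for the Fréchet case; SciPy's invweibull is the Fréchet distribution with shape $\alpha$, and its loc and scale arguments supply $m$ and $s$ (arbitrary parameters):

```python
# Monte Carlo sketch of the Frechet maximum-stability claim; parameters are
# arbitrary. SciPy's invweibull(c, loc, scale) is Frechet(alpha=c, s=scale, m=loc).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, s, m, n, trials = 3.0, 1.0, 0.5, 6, 100_000

samples = stats.invweibull.rvs(alpha, loc=m, scale=s, size=(trials, n), random_state=rng)
maxima = samples.max(axis=1)

# Compare against Frechet(alpha, n^(1/alpha) * s, m).
rescaled = stats.invweibull(alpha, loc=m, scale=n ** (1.0 / alpha) * s)
print(stats.kstest(maxima, rescaled.cdf))  # large p-value: consistent
```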
^ Gnedenko, B. (1943). "Sur la distribution limite du terme maximum d'une série aléatoire". Annals of Mathematics. 44 (3): 423–453. doi:10.2307/1968974.