CHAPTER 2
Generalized Laplacian Distributions and
Autoregressive Processes
2.1 Introduction
The generalized Laplacian distribution corresponds to the distribution of the difference of independently and identically distributed (i.i.d.) gamma random variables. Mathai (1993a,b,c) and Mathai et al. (2006) discussed various generalizations of the Laplace distribution and their applications in different contexts, such as input-output processes, growth-decay mechanisms, the formation of sand dunes, the growth of melatonin in the human body, the formation of solar neutrinos,
etc. Another variant, namely the skew Laplace distribution, was introduced and studied in detail by Kozubowski and Podgorski (2008a,b). The skew Laplace distribution arises as a mixture of normal distributions whose stochastic variance follows a gamma distribution; hence it is also called the variance gamma model. The tails of the variance gamma distribution decrease more slowly than those of the normal distribution. Owing to its simplicity, flexibility and excellent fit to empirical data, the variance gamma model has recently become very popular among financial modelers.

Some results included in this chapter form part of the papers Jose and Manu (2009) and Jose and Manu (2011).
In the present chapter, we consider the generalized Laplacian distribution and a more general case, namely the bilateral gamma distribution. We develop first order autoregressive processes with generalized Laplacian marginals and discuss the generation of the process. The regression behaviour and the simulation of sample paths are discussed, and the distribution of sums and the covariance structure are considered. We introduce the geometric generalized Laplacian distribution, study its properties, and discuss the associated geometric generalized Laplacian processes as well as kth order autoregressive extensions. The bilateral gamma distribution, the estimation of its parameters, and a numerical illustration are also presented.
2.2 Generalized Laplacian distribution and its properties
Mathai (1993a,b,c) introduced a generalized Laplacian distribution with parameters α and β, which has the characteristic function (c.f.)
\[
\phi(t) = \frac{1}{(1+\beta^2 t^2)^{\alpha}}, \qquad \alpha > 0,\ \beta > 0.
\]
Kotz et al. (2001) give a more general form of the generalized Laplacian distribution with more parameters, and we concentrate on this form. Its c.f. is
\[
\phi(t) = \frac{e^{i\theta t}}{\left(1+\frac{1}{2}\sigma^2 t^2 - i\mu t\right)^{\tau}}, \qquad -\infty < t < \infty,\ \sigma > 0,\ -\infty < \mu < \infty.
\]
This c.f. can be factored as
\[
\phi(t) = e^{i\theta t}\left(\frac{1}{1+i\frac{\sigma\kappa}{\sqrt{2}}t}\right)^{\tau}\left(\frac{1}{1-i\frac{\sigma}{\sqrt{2}\,\kappa}t}\right)^{\tau}, \tag{2.2.1}
\]
where \(\kappa > 0\) and \(\mu = \frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}-\kappa\right)\).
The probability density function (p.d.f.) of the generalized Laplace distribution GL(θ, κ, σ, τ) is
\[
h(x) = \frac{\sqrt{2}\, e^{\frac{\sqrt{2}}{2\sigma}\left(\frac{1}{\kappa}-\kappa\right)(x-\theta)}}{\sqrt{\pi}\,\sigma^{\tau+\frac{1}{2}}\,\Gamma(\tau)}\left(\frac{\sqrt{2}\,|x-\theta|}{\kappa+\frac{1}{\kappa}}\right)^{\tau-\frac{1}{2}} K_{\tau-\frac{1}{2}}\!\left(\frac{\sqrt{2}}{2\sigma}\left(\kappa+\frac{1}{\kappa}\right)|x-\theta|\right) \quad \text{for } x \neq \theta, \tag{2.2.2}
\]
where Kλ is the modified Bessel function of the third kind with the index λ, see Kotz et
al. (2001). There are some special cases associated with this distribution. A standard GL
density is obtained for θ = 0 and σ = 1. For τ = 1, we have an asymmetric Laplace, and
for κ = 1 and θ = 0, we obtain a symmetric Laplace distribution. The p.d.f. (2.2.2) can
also be written as
\[
f(x) = (\alpha\beta)^{\tau}\exp\!\left(-\frac{(\alpha-\beta)x}{2}\right)\left(\frac{|x|}{\alpha+\beta}\right)^{\tau-1/2} K_{\tau-1/2}\!\left(\frac{\alpha+\beta}{2}|x|\right),
\]
where α and β describe the left- and right-tail behaviour and τ is a parameter governing the peakedness of the p.d.f. (for details see Wu (2008)).
The mean, variance, and coefficients of skewness and kurtosis of the generalized Laplacian distribution are
\[
\text{Mean} = \tau\mu, \qquad \text{Variance} = \tau(\mu^2+\sigma^2),
\]
\[
\text{Coefficient of skewness} = \frac{\mu(2\mu^2+3\sigma^2)}{\sqrt{\tau}\,(\mu^2+\sigma^2)^{3/2}},
\]
\[
\text{Coefficient of kurtosis} = 3+\frac{3(2\mu^4+4\sigma^2\mu^2+\sigma^4)}{\tau(\mu^2+\sigma^2)^2}.
\]
Figure 2.1: The histogram and frequency curve corresponding to GL(0, 2, 1, 1/2)
From the factorization of the c.f. of the generalized Laplacian distribution, it can be noted that a variable Y following GL(θ, κ, σ, τ) can be represented as \(Y \overset{d}{=} \theta+\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}G_1-\kappa G_2\right)\), where G1 and G2 are i.i.d. random variables following the one-parameter gamma distribution with p.d.f.
\[
f_{G_i}(x) = \frac{1}{\Gamma(\tau)}e^{-x}x^{\tau-1}; \qquad x > 0,\ \tau > 0,\ i = 1, 2.
\]
The generalized Laplacian random variable Y also admits a representation of the form \(Y \overset{d}{=} \theta+\mu W+\sigma\sqrt{W}\,Z\), where Z is a standard normal variate and W is a gamma variate with parameters (τ, 1). Figure 2.1 gives the histogram and frequency curve corresponding to GL(0, 2, 1, 1/2), obtained from a simulation study.
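Both representations are straightforward to simulate. The following sketch (parameter values are illustrative, NumPy assumed) draws GL samples via the gamma-difference representation and via the normal variance-mean mixture, and checks both against the mean τμ and variance τ(μ² + σ²) given above.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, kappa, sigma, tau = 0.0, 2.0, 1.0, 0.5   # illustrative values
mu = sigma / np.sqrt(2) * (1 / kappa - kappa)
n = 200_000

# Representation 1: scaled difference of i.i.d. gamma variables,
# Y = theta + (sigma/sqrt(2)) * (G1/kappa - kappa*G2), G_i ~ Gamma(tau, 1).
g1 = rng.gamma(tau, 1.0, n)
g2 = rng.gamma(tau, 1.0, n)
y1 = theta + sigma / np.sqrt(2) * (g1 / kappa - kappa * g2)

# Representation 2: normal variance-mean mixture,
# Y = theta + mu*W + sigma*sqrt(W)*Z, W ~ Gamma(tau, 1), Z ~ N(0, 1).
w = rng.gamma(tau, 1.0, n)
z = rng.standard_normal(n)
y2 = theta + mu * w + sigma * np.sqrt(w) * z

# Both samples should reproduce mean tau*mu and variance tau*(mu**2 + sigma**2).
print(y1.mean(), y2.mean(), tau * mu)
print(y1.var(), y2.var(), tau * (mu**2 + sigma**2))
```

Agreement of the two empirical distributions illustrates that the two representations describe the same law (for θ = 0, both have the c.f. (2.2.1)).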
2.3 Generalized Laplacian autoregressive processes
Consider the usual linear, additive first order autoregressive model given by
\[
X_n = aX_{n-1}+\varepsilon_n, \qquad 0 < a < 1,\ n = 0, \pm 1, \pm 2, \ldots, \tag{2.3.1}
\]
where {εn} is the innovation sequence of i.i.d. random variables. The basic problem is
to find the distribution of {εn} such that {Xn} has the generalized Laplacian distribution
GL(θ, κ, σ, τ) as the stationary marginal distribution. The following theorem summarizes
the results in this context.
Theorem 2.3.1. A necessary and sufficient condition for an AR(1) process of the form Xn = aXn−1 + εn to be stationary with GL(θ, κ, σ, τ) marginals is that the innovations {εn} are i.i.d. with the convolution structure X + Y1 − Y2 given in (2.3.3), provided X0 follows GL(θ, κ, σ, τ).
Proof. For the necessary part, in terms of c.f.s we can rewrite (2.3.1) as \(\phi_{X_n}(t) = \phi_{X_{n-1}}(at)\,\phi_{\varepsilon_n}(t)\). Under stationarity this yields
\[
\phi_\varepsilon(t) = \frac{\phi_X(t)}{\phi_X(at)}.
\]
Substituting the c.f. given by (2.2.1), we get
\[
\begin{aligned}
\phi_\varepsilon(t) &= \frac{e^{i\theta t}}{e^{i\theta a t}}\cdot\frac{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}at\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}at\right)^{\tau}}{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}t\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}t\right)^{\tau}}\\
&= e^{i\theta(1-a)t}\left[a+(1-a)\frac{1}{1+i\frac{\sigma\kappa}{\sqrt{2}}t}\right]^{\tau}\left[a+(1-a)\frac{1}{1-i\frac{\sigma}{\sqrt{2}\kappa}t}\right]^{\tau}.
\end{aligned} \tag{2.3.2}
\]
This implies that ε has a convolution structure of the form
\[
\varepsilon \overset{d}{=} X+Y_1-Y_2, \tag{2.3.3}
\]
where X is a degenerate random variable taking the value θ(1 − a) with probability one, and Y1 and Y2 are τ-fold convolutions of T1 and T2, where
\[
T_1 = \begin{cases}0, & \text{with probability } a\\ E_1, & \text{with probability } 1-a\end{cases}
\qquad
T_2 = \begin{cases}0, & \text{with probability } a\\ E_2, & \text{with probability } 1-a\end{cases}
\]
where E1 and E2 are exponential random variables with means \(\frac{\sigma}{\sqrt{2}\kappa}\) and \(\frac{\sigma\kappa}{\sqrt{2}}\) respectively.
When τ is integer valued, Y1 and Y2 can be easily generated as gamma convolutions.
The sufficiency part can be proved by induction. Assume that Xn−1 follows GL(θ, κ, σ, τ) with c.f. (2.2.1). Then
\[
\begin{aligned}
\phi_{X_n}(t) &= \phi_{X_{n-1}}(at)\,\phi_{\varepsilon_n}(t)\\
&= \left[\frac{e^{i\theta at}}{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}at\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}at\right)^{\tau}}\right]\left[\frac{e^{i\theta(1-a)t}\left(1+i\frac{\sigma\kappa}{\sqrt{2}}at\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}at\right)^{\tau}}{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}t\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}t\right)^{\tau}}\right]\\
&= e^{i\theta t}\left(\frac{1}{1+i\frac{\sigma\kappa}{\sqrt{2}}t}\right)^{\tau}\left(\frac{1}{1-i\frac{\sigma}{\sqrt{2}\kappa}t}\right)^{\tau},
\end{aligned}
\]
which is the generalized Laplacian c.f. This shows that the process {Xn} is strictly stationary with generalized Laplacian marginals, provided X0 follows GL(θ, κ, σ, τ).
2.3.1 Generation of the process
Irrespective of whether τ is integer-valued or not, the process can be generated using a compound Poisson representation. In (2.3.2), the factor \(\left[a+(1-a)\frac{1}{1-i\frac{\sigma}{\sqrt{2}\kappa}t}\right]^{\tau}\) can be obtained as the c.f. of a compound Poisson distribution; see Lawrance (1982). In particular, this is the c.f. of the random sum \(\sum_{i=1}^{N}a^{U_i}V_i\), where the {V_i} are gamma(1, \(\frac{\sigma}{\sqrt{2}\kappa}\)) (i.e. exponential) random variables, the {U_i} are i.i.d. random variables uniformly distributed on (0,1), N is a Poisson random variable with mean −τ ln a, and all these random variables are independent.
This can be proved as follows.
Let \(\eta = \sum_{i=1}^{N}a^{U_i}V_i\), with {V_i}, {U_i} and N as above. Then the c.f. of η can be obtained as
\[
\begin{aligned}
\phi_\eta(t) &= \sum_{n=0}^{\infty}\left[\int_0^1 \phi_V(a^u t)\,du\right]^n P[N=n]
= \sum_{n=0}^{\infty}\left[\int_0^1 \phi_V(a^u t)\,du\right]^n \frac{e^{\tau\ln a}(-\tau\ln a)^n}{n!}\\
&= \exp\!\left(\tau\ln a\left(1-\int_0^1 \frac{du}{1-ia^u\frac{\sigma}{\sqrt{2}\kappa}t}\right)\right)
= \exp\!\left(-\tau\ln a\int_0^1 \frac{ia^u\frac{\sigma}{\sqrt{2}\kappa}t}{1-ia^u\frac{\sigma}{\sqrt{2}\kappa}t}\,du\right)\\
&= \exp\!\left(\tau\left(\ln\!\left(1-ia\frac{\sigma}{\sqrt{2}\kappa}t\right)-\ln\!\left(1-i\frac{\sigma}{\sqrt{2}\kappa}t\right)\right)\right)
= \left[a+(1-a)\frac{1}{1-i\frac{\sigma}{\sqrt{2}\kappa}t}\right]^{\tau}.
\end{aligned}
\]
The other gamma factor can be generated in the same way, and taking the difference we obtain a generalized Laplacian process.
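The construction above can be sketched in code (parameter values are illustrative, NumPy assumed): each of the two factors of the innovation is generated as a Poisson random sum of damped exponential jumps, and the AR(1) recursion (2.3.1) is then run with these innovations. The stationary mean and variance provide a simple check.

```python
import numpy as np

rng = np.random.default_rng(1)
a, theta, kappa, sigma, tau = 0.6, 0.0, 1.3, 1.0, 2.0   # illustrative values
n = 50_000

def compound_poisson_part(scale, size):
    """Draws of sum_{i=1}^N a**U_i * V_i, with V_i ~ Exp(mean=scale),
    U_i ~ Uniform(0,1) and N ~ Poisson(-tau*ln a); each draw has c.f.
    [a + (1-a)/(1 - i*scale*t)]**tau."""
    counts = rng.poisson(-tau * np.log(a), size)
    total = counts.sum()
    jumps = a ** rng.uniform(size=total) * rng.exponential(scale, total)
    owner = np.repeat(np.arange(size), counts)
    return np.bincount(owner, weights=jumps, minlength=size)

# Innovation eps = theta*(1-a) + (positive part) - (negative part), cf. (2.3.2)-(2.3.3).
eps = (theta * (1 - a)
       + compound_poisson_part(sigma / (np.sqrt(2) * kappa), n)
       - compound_poisson_part(sigma * kappa / np.sqrt(2), n))

# AR(1) recursion; discard a burn-in so the marginal is close to stationary.
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + eps[i]
x = x[1000:]

mu = sigma / np.sqrt(2) * (1 / kappa - kappa)
print(x.mean(), theta + tau * mu)          # stationary mean
print(x.var(), tau * (mu**2 + sigma**2))   # stationary variance
```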
Remark 2.3.1. If X0 is arbitrarily distributed, the process is still asymptotically stationary with generalized Laplacian marginal distribution.
Proof. We have
\[
X_n = aX_{n-1}+\varepsilon_n = a^n X_0+\sum_{l=0}^{n-1}a^l\varepsilon_{n-l},
\]
so that
\[
\begin{aligned}
\phi_{X_n}(t) &= \phi_{X_0}(a^n t)\prod_{l=0}^{n-1}\phi_\varepsilon(a^l t)\\
&= \phi_{X_0}(a^n t)\exp\!\left(i\theta t(1-a)\sum_{l=0}^{n-1}a^l\right)\prod_{l=0}^{n-1}\frac{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}a^{l+1}t\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}a^{l+1}t\right)^{\tau}}{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}a^{l}t\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}a^{l}t\right)^{\tau}}\\
&\to \frac{e^{i\theta t}}{\left(1+i\frac{\sigma\kappa}{\sqrt{2}}t\right)^{\tau}\left(1-i\frac{\sigma}{\sqrt{2}\kappa}t\right)^{\tau}} \quad \text{as } n\to\infty.
\end{aligned}
\]
2.3.2 Regression behaviour and sample path properties
The forward regression equation for the process is given by
\[
E(X_n \mid X_{n-1}=x) = ax+(1-a)[\theta+\tau\mu].
\]
Similarly, the conditional variance is given by
\[
\mathrm{Var}(X_n \mid X_{n-1}=x) = (1-a^2)\,\tau(\mu^2+\sigma^2), \qquad \text{where } \mu = \frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}-\kappa\right).
\]
In the backward direction, the conditional distribution of Xn given Xn+1 = x has non-linear regression and non-constant conditional variance. In this backward case we proceed from the joint characteristic function of Xn and Xn+1, following the steps described in Lawrance (1978). We have
\[
\phi_{X_n,X_{n+1}}(t_1,t_2) = \frac{\phi_X(t_1+at_2)\,\phi_X(t_2)}{\phi_X(at_2)}.
\]
Differentiating with respect to t1 and setting t1 = 0, t2 = t gives
\[
i\,E\!\left[e^{itX_{n+1}}E(X_n \mid X_{n+1})\right] = \frac{\phi_X'(at)\,\phi_X(t)}{\phi_X(at)} = \phi_X'(at)\,\phi_\varepsilon(t), \tag{2.3.4}
\]
where \(\phi_\varepsilon(t)\) is as defined in (2.3.2). Lawrance (1978) showed that linear backward regression is possible only in the Gaussian case and, by a result of Weiss (1977), that non-normal autoregressive processes are not time reversible. In the present case, E(Xn | Xn−1) ≠ E(Xn | Xn+1), so the process is not time reversible.
2.3.3 Observed sample path properties
The sample path properties of the process are studied by generating 100 observations each from the generalized Laplacian process for various combinations of the parameters (θ, κ, σ, τ).

Figure 2.2: (a) Sample path of the process for a = 0.8 and (θ, κ, σ, τ) = (0, 1, 1, 2); (b) sample path of the process for a = 0.3 and (θ, κ, σ, τ) = (0, 1, 1, 2)

Figure 2.3: (a) Sample path of the process for a = 0.3 and (θ, κ, σ, τ) = (0, 1.3, 1, 2); (b) sample path of the process for a = 0.3 and (θ, κ, σ, τ) = (0, 1.2, 1.9, 2)

In Figure 2.2(a), we take a = 0.8 and (θ, κ, σ, τ) = (0, 1, 1, 2); in Figures 2.2(b), 2.3(a) and 2.3(b) we take a = 0.3 and parameter values (0, 1, 1, 2), (0, 1.3, 1, 2) and (0, 1.2, 1.9, 2) respectively. The process exhibits both positive and negative values, with upward as well as downward runs, as is evident from the figures. These figures point to the rich variety of contexts in which the newly developed time series models can be applied. These observations can be verified from the table showing P(Xn < Xn−1). These probabilities are obtained by a simulation procedure: sequences of 10,000 observations from the generalized Laplacian process are generated ten times, and from each sequence the probability is estimated. Table 2.1 provides the
a     (0,1,1,2)         (0,1.3,1,2)       (0,1.2,1.9,2)     (0,1.5,2.5,2)
0.1   0.4990 (0.0022)   0.4970 (0.0035)   0.4967 (0.0032)   0.4933 (0.0032)
0.2   0.5004 (0.0026)   0.4904 (0.0027)   0.4935 (0.0029)   0.4881 (0.0043)
0.3   0.5011 (0.0027)   0.4853 (0.0038)   0.4923 (0.0032)   0.4783 (0.0017)
0.4   0.4989 (0.0034)   0.4822 (0.0033)   0.4859 (0.0024)   0.4743 (0.0043)
0.5   0.5000 (0.0041)   0.4765 (0.0031)   0.4833 (0.0039)   0.4661 (0.0030)
0.6   0.5001 (0.0027)   0.4687 (0.0032)   0.4792 (0.0023)   0.4584 (0.0025)
0.7   0.4996 (0.0027)   0.4611 (0.0032)   0.4697 (0.0032)   0.4446 (0.0034)
0.8   0.4992 (0.0031)   0.4428 (0.0027)   0.4571 (0.0026)   0.4196 (0.0041)
0.9   0.5012 (0.0036)   0.4190 (0.0043)   0.4393 (0.0034)   0.3908 (0.0028)

Table 2.1: P(Xn < Xn−1) for the generalized Laplacian process (variance estimates in parentheses).
average of such probabilities from the ten trials along with an estimate of variance for
different values of a and for different combinations of (θ, κ, σ, τ ).
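A simulation of this kind can be sketched as follows for integer τ, generating the innovations directly from the convolution structure (2.3.3) (τ-fold sums of exponentials thinned with probability a); the parameter set (0, 1.3, 1, 2) used here is the second column of Table 2.1, and the estimated probability should fall below 1/2 as a grows.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, kappa, sigma, tau = 0.0, 1.3, 1.0, 2   # integer tau; second column of Table 2.1
n = 100_000

def innovations(a, size):
    """eps = theta*(1-a) + Y1 - Y2, where Y1, Y2 are tau-fold convolutions of
    exponentials zeroed out with probability a, as in (2.3.3)."""
    keep1 = rng.random((size, tau)) < 1 - a
    keep2 = rng.random((size, tau)) < 1 - a
    y1 = (keep1 * rng.exponential(sigma / (np.sqrt(2) * kappa), (size, tau))).sum(axis=1)
    y2 = (keep2 * rng.exponential(sigma * kappa / np.sqrt(2), (size, tau))).sum(axis=1)
    return theta * (1 - a) + y1 - y2

def prob_decrease(a):
    eps = innovations(a, n)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        x[i] = a * x[i - 1] + eps[i]
    return np.mean(x[1:] < x[:-1])

results = {a: prob_decrease(a) for a in (0.1, 0.5, 0.9)}
print(results)   # decreases with a, as in Table 2.1
```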
2.3.4 Distribution of sums and joint distribution of (Xn, Xn+1)
When a stationary sequence {Xn} is used, the distribution of the sums Tr = Xn + Xn+1 + · · · + Xn+r−1 is important. We have
\[
X_{n+j} = a^j X_n+a^{j-1}\varepsilon_{n+1}+a^{j-2}\varepsilon_{n+2}+\cdots+\varepsilon_{n+j}.
\]
Hence
\[
T_r = X_n+X_{n+1}+\cdots+X_{n+r-1}
= \sum_{j=0}^{r-1}\left[a^j X_n+a^{j-1}\varepsilon_{n+1}+\cdots+\varepsilon_{n+j}\right]
= X_n\left(\frac{1-a^r}{1-a}\right)+\sum_{j=1}^{r-1}\varepsilon_{n+j}\left(\frac{1-a^{r-j}}{1-a}\right).
\]
The c.f. of Tr is given by
\[
\phi_{T_r}(t) = \phi_{X_n}\!\left(t\,\frac{1-a^r}{1-a}\right)\prod_{j=1}^{r-1}\phi_\varepsilon\!\left(t\,\frac{1-a^{r-j}}{1-a}\right),
\]
that is,
\[
\phi_{T_r}(t) = \frac{e^{i\theta t\frac{1-a^r}{1-a}}}{\left[1+i\frac{\sigma\kappa}{\sqrt{2}}\left(t\frac{1-a^r}{1-a}\right)\right]^{\tau}\left[1-i\frac{\sigma}{\sqrt{2}\kappa}\left(t\frac{1-a^r}{1-a}\right)\right]^{\tau}}
\times\prod_{j=1}^{r-1}e^{i\theta t(1-a^{r-j})}\left[a+\frac{1-a}{1+i\frac{\sigma\kappa}{\sqrt{2}}t\frac{1-a^{r-j}}{1-a}}\right]^{\tau}\left[a+\frac{1-a}{1-i\frac{\sigma}{\sqrt{2}\kappa}t\frac{1-a^{r-j}}{1-a}}\right]^{\tau}.
\]
The distribution of Tr can be obtained by inverting the c.f. as the c.f. uniquely determines
the distribution of a random variable. Now the joint distribution of (Xn, Xn+1) can be given
in terms of the c.f. as
\[
\begin{aligned}
\phi_{X_n,X_{n+1}}(t_1,t_2) &= E\!\left[e^{it_1X_n+it_2X_{n+1}}\right]
= E\!\left[e^{it_1X_n+it_2(aX_n+\varepsilon_{n+1})}\right]\\
&= E\!\left[e^{i(t_1+at_2)X_n+it_2\varepsilon_{n+1}}\right]
= \phi_{X_n}(t_1+at_2)\,\phi_{\varepsilon_{n+1}}(t_2)\\
&= \frac{e^{i\theta(t_1+t_2)}\left[a+\frac{1-a}{1+i\frac{\sigma\kappa}{\sqrt{2}}t_2}\right]^{\tau}\left[a+\frac{1-a}{1-i\frac{\sigma}{\sqrt{2}\kappa}t_2}\right]^{\tau}}{\left[1+i\frac{\sigma\kappa}{\sqrt{2}}(t_1+at_2)\right]^{\tau}\left[1-i\frac{\sigma}{\sqrt{2}\kappa}(t_1+at_2)\right]^{\tau}}.
\end{aligned}
\]
The above c.f. is not symmetric in t1 and t2, demonstrating once again that the process is
not time reversible.
Remark 2.3.2. Since εn is independent of Xn−1, the covariance between Xn and Xn−1 is
\[
\mathrm{Cov}(X_n,X_{n-1}) = a\,\mathrm{Var}(X_{n-1}) = a\,\tau(\mu^2+\sigma^2),
\]
and hence the lag-one correlation coefficient is ρ = a, irrespective of θ.
2.4 Geometric generalized Laplacian distribution
Here we introduce a new distribution, namely the geometric generalized Laplacian distribution GGLD(θ, µ, σ, τ). Its c.f. is given by
\[
\psi(t) = \frac{1}{1+\tau\ln\!\left(1+\frac{1}{2}\sigma^2t^2-i\mu t\right)-i\theta t}. \tag{2.4.1}
\]
The above c.f. ψ(t) can be written in the form \(\phi(t) = \exp\!\left(1-\frac{1}{\psi(t)}\right)\), where φ(t) is the c.f. of the generalized Laplacian distribution, which is infinitely divisible. Therefore the geometric generalized Laplacian distribution is geometrically infinitely divisible (see Klebanov et al. (1984), Pillai and Sandhya (1990), Seethalekshmi and Jose (2004a,b; 2006)).
Theorem 2.4.1. Let X1, X2, . . . be i.i.d. GGLD(0, µ, σ, τ) random variables and let N be geometric with mean 1/p, such that P[N = k] = p(1 − p)^{k−1}, k = 1, 2, . . . , 0 < p < 1. Then Y = X1 + X2 + · · · + XN is distributed as GGLD(0, µ, σ, τ/p).
Proof. The c.f. of Y is
\[
\phi_Y(t) = \sum_{k=1}^{\infty}[\phi_X(t)]^k\,p(1-p)^{k-1}
= \frac{p\Big/\left(1+\tau\ln\!\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)\right)}{1-(1-p)\Big/\left(1+\tau\ln\!\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)\right)}
= \frac{1}{1+(\tau/p)\ln\!\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)}.
\]
Hence \(Y \overset{d}{=} GGLD(0,\mu,\sigma,\tau/p)\).
Theorem 2.4.2. The geometric generalized Laplacian distribution GGLD(0, µ, σ, τ) is the limit distribution of geometric sums of random variables following GL(0, µ, σ, τ/n).

Proof. Note that
\[
\left[1+\frac{t^2\sigma^2}{2}-i\mu t\right]^{-\tau/n} = \left\{1+\left[1+\frac{t^2\sigma^2}{2}-i\mu t\right]^{\tau/n}-1\right\}^{-1}
\]
is the c.f. of a probability distribution, since the generalized Laplacian distribution is infinitely divisible. Hence, by Lemma 3.2 of Pillai (1990),
\[
\phi_n(t) = \left\{1+n\left[\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)^{\tau/n}-1\right]\right\}^{-1}
\]
is the c.f. of a geometric sum (with p = 1/n) of i.i.d. generalized Laplacian random variables. Taking the limit as n → ∞,
\[
\phi(t) = \lim_{n\to\infty}\phi_n(t)
= \left\{1+\lim_{n\to\infty}n\left[\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)^{\tau/n}-1\right]\right\}^{-1}
= \left[1+\tau\ln\!\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)\right]^{-1}.
\]
2.4.1 Geometric generalized Laplacian processes
Here we develop a first order autoregressive process with geometric generalized Laplacian
marginals. Consider an autoregressive structure given by
\[
X_n = \begin{cases}\varepsilon_n, & \text{with probability } p\\ X_{n-1}+\varepsilon_n, & \text{with probability } 1-p\end{cases} \tag{2.4.2}
\]
where {Xn} and {εn} are independently distributed and 0 < p < 1. We now construct an AR(1) process with GGLD(0, µ, σ, τ) as the stationary marginal distribution.
Theorem 2.4.3. Consider an autoregressive process {Xn} with the structure (2.4.2). A necessary and sufficient condition for {Xn} to be stationary with GGLD(0, µ, σ, τ) marginal distribution is that {εn} follows GGLD(0, µ, σ, pτ).
Proof. In terms of c.f.s, we can write the above model as
\[
\phi_{X_n}(t) = p\,\phi_{\varepsilon_n}(t)+(1-p)\,\phi_{X_{n-1}}(t)\,\phi_{\varepsilon_n}(t).
\]
Assuming stationarity, this reduces to
\[
\phi_X(t) = p\,\phi_\varepsilon(t)+(1-p)\,\phi_X(t)\,\phi_\varepsilon(t),
\qquad\text{so that}\qquad
\phi_\varepsilon(t) = \frac{\phi_X(t)}{p+(1-p)\phi_X(t)}.
\]
Substituting \(\phi_X(t) = \frac{1}{1+\tau\ln(1+\frac{1}{2}\sigma^2t^2-i\mu t)}\), we get
\[
\phi_\varepsilon(t) = \frac{1}{1+p\tau\ln\!\left(1+\frac{1}{2}\sigma^2t^2-i\mu t\right)}. \tag{2.4.3}
\]
Hence \(\varepsilon_n \overset{d}{=} GGLD(0,\mu,\sigma,p\tau)\).

Conversely, assume that the {εn} are identically distributed as given by (2.4.3). Then
\[
\phi_X(t) = \frac{p\,\phi_\varepsilon(t)}{1-(1-p)\phi_\varepsilon(t)} = \frac{1}{1+\tau\ln\!\left(1+\frac{t^2\sigma^2}{2}-i\mu t\right)}.
\]
Hence Xn is distributed as GGLD(0, µ, σ, τ).
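The random-coefficient structure (2.4.2) itself is easy to simulate once an innovation sampler is available. The sketch below uses a standard Laplace innovation purely as a placeholder to show the mechanics (for GGLD marginals the innovations would have to be GGLD(0, µ, σ, pτ), which has no elementary sampler); at stationarity the marginal of (2.4.2) is a geometric sum of innovations, so for a zero-mean innovation its variance is Var(ε)/p.

```python
import numpy as np

rng = np.random.default_rng(4)

def geometric_ar1(innovation_sampler, p, n):
    """Simulate X_n = eps_n w.p. p and X_n = X_{n-1} + eps_n w.p. 1-p,
    as in structure (2.4.2); innovation_sampler(k) returns k i.i.d. draws."""
    eps = innovation_sampler(n)
    restart = rng.random(n) < p
    x = np.empty(n)
    x[0] = eps[0]
    for i in range(1, n):
        x[i] = eps[i] if restart[i] else x[i - 1] + eps[i]
    return x

# Placeholder innovation (standard Laplace, variance 2) to show the mechanics.
p = 0.3
x = geometric_ar1(lambda k: rng.laplace(size=k), p, 20_000)
# At stationarity X is a geometric sum of innovations, so Var(X) = 2/p.
print(x.mean(), x.var(), 2 / p)
```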
2.4.2 Extension to higher order processes
Consider a kth order autoregressive process having the structure
\[
X_n = \begin{cases}
\varepsilon_n, & \text{with probability } p,\\
a_1X_{n-1}+\varepsilon_n, & \text{with probability } p_1,\\
a_2X_{n-2}+\varepsilon_n, & \text{with probability } p_2,\\
\quad\vdots & \quad\vdots\\
a_kX_{n-k}+\varepsilon_n, & \text{with probability } p_k,
\end{cases}
\]
where \(p_1+p_2+\cdots+p_k = 1-p\), \(0 \le p_i, p \le 1\), \(i = 1, 2, \ldots, k\), and {εn} is a sequence of independently and identically distributed random variables.
When a1 = a2 = · · · = ak = a, 0 < a < 1, we can write the above model in terms of characteristic functions as
\[
\phi_{X_n}(t) = p\,\phi_{\varepsilon_n}(t)+p_1\,\phi_{X_{n-1}}(at)\,\phi_{\varepsilon_n}(t)+\cdots+p_k\,\phi_{X_{n-k}}(at)\,\phi_{\varepsilon_n}(t).
\]
If {Xn} is stationary, then
\[
\phi_X(t) = \phi_\varepsilon(t)\left[p+(1-p)\,\phi_X(at)\right],
\qquad\text{that is,}\qquad
\phi_\varepsilon(t) = \frac{\phi_X(t)}{p+(1-p)\phi_X(at)}.
\]
This shows that the innovation structure can be obtained as in the first order case.
2.5 Bilateral gamma distribution and bilateral gamma processes
Kuchler and Tappe (2008a,b) introduced and studied the bilateral gamma distribution and its various properties, and also developed an associated Levy process, namely the bilateral gamma process, which has applications in modelling financial market fluctuations. A bilateral gamma distribution with parameters α1, λ1, α2, λ2 > 0 (BG(α1, λ1, α2, λ2)) is defined as the distribution of the difference of two independent gamma variables with parameters (α1, λ1) and (α2, λ2). The characteristic function of a bilateral gamma distribution is
\[
\varphi(t) = \left(\frac{\lambda_1}{\lambda_1-it}\right)^{\alpha_1}\left(\frac{\lambda_2}{\lambda_2+it}\right)^{\alpha_2}, \qquad t \in \mathbb{R}. \tag{2.5.1}
\]
The p.d.f. is
\[
f(x) = \frac{\lambda_1^{\alpha_1}\lambda_2^{\alpha_2}}{(\lambda_1+\lambda_2)^{\alpha_2}\,\Gamma(\alpha_1)\Gamma(\alpha_2)}\,e^{-\lambda_1 x}\int_0^{\infty}v^{\alpha_2-1}\left(x+\frac{v}{\lambda_1+\lambda_2}\right)^{\alpha_1-1}e^{-v}\,dv, \qquad x > 0.
\]
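The integral representation can be evaluated by straightforward quadrature and checked against a Monte Carlo histogram of a gamma difference; the parameter values below are illustrative.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(5)
a1, l1, a2, l2 = 2.0, 1.5, 1.0, 2.0   # illustrative (alpha1, lambda1, alpha2, lambda2)

def bg_density(x):
    """Bilateral gamma density for x > 0 via the integral representation,
    evaluated with a trapezoidal rule on a truncated v-grid."""
    v = np.linspace(1e-8, 60.0, 20_000)
    integrand = v**(a2 - 1) * (x + v / (l1 + l2))**(a1 - 1) * np.exp(-v)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(v))
    const = l1**a1 * l2**a2 / ((l1 + l2)**a2 * gamma(a1) * gamma(a2))
    return const * np.exp(-l1 * x) * integral

# Monte Carlo check: a BG variable is the difference of two independent gammas.
sample = rng.gamma(a1, 1 / l1, 1_000_000) - rng.gamma(a2, 1 / l2, 1_000_000)
pairs = []
for x0 in (0.5, 1.5):
    h = 0.1                                           # half-width of histogram bin
    mc = np.mean(np.abs(sample - x0) < h) / (2 * h)   # density estimate at x0
    pairs.append((bg_density(x0), mc))
    print(x0, pairs[-1])
```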
The density function can also be expressed by means of the Whittaker function:
\[
f(x) = \frac{\lambda_1^{\alpha_1}\lambda_2^{\alpha_2}}{(\lambda_1+\lambda_2)^{\frac{1}{2}(\alpha_1+\alpha_2)}\,\Gamma(\alpha_1)}\,x^{\frac{1}{2}(\alpha_1+\alpha_2)-1}\,e^{-\frac{x}{2}(\lambda_1-\lambda_2)}\,W_{\frac{1}{2}(\alpha_1-\alpha_2),\,\frac{1}{2}(\alpha_1+\alpha_2-1)}\big(x(\lambda_1+\lambda_2)\big), \qquad x > 0, \tag{2.5.2}
\]
where
\[
W_{\lambda,\mu}(z) = \frac{z^{\lambda}e^{-z/2}}{\Gamma\!\left(\mu-\lambda+\frac{1}{2}\right)}\int_0^{\infty}t^{\mu-\lambda-\frac{1}{2}}\,e^{-t}\left(1+\frac{t}{z}\right)^{\mu+\lambda-\frac{1}{2}}dt \qquad \text{for } \mu-\lambda > -\tfrac{1}{2}.
\]
2.5.1 Geometric bilateral gamma distribution
Here, we introduce a new distribution, namely the geometric bilateral gamma distribution GBG(α1, λ1, α2, λ2). Its characteristic function is given by
\[
\psi(t) = \frac{1}{1+\alpha_1\ln\!\left(1-\frac{it}{\lambda_1}\right)+\alpha_2\ln\!\left(1+\frac{it}{\lambda_2}\right)}. \tag{2.5.3}
\]
The above characteristic function ψ(t) can be written in the form \(\varphi(t) = \exp\!\left(1-\frac{1}{\psi(t)}\right)\), where φ(t) is the characteristic function of the bilateral gamma distribution, which is infinitely divisible. Therefore the geometric bilateral gamma distribution is geometrically infinitely divisible; see Klebanov et al. (1984).
Theorem 2.5.1. Let X1, X2, . . . be i.i.d. GBG(α1, λ1, α2, λ2) random variables and let N be geometric with mean 1/p, such that P[N = k] = p(1 − p)^{k−1}, k = 1, 2, . . . , 0 < p < 1. Then Y = X1 + X2 + · · · + XN is distributed as GBG(α1/p, λ1, α2/p, λ2).
Proof. The characteristic function of Y is
\[
\psi_Y(t) = \sum_{k=1}^{\infty}[\psi_X(t)]^k\,p(1-p)^{k-1}
= \frac{p\Big/\left(1+\alpha_1\ln\!\left(1-\frac{it}{\lambda_1}\right)+\alpha_2\ln\!\left(1+\frac{it}{\lambda_2}\right)\right)}{1-(1-p)\Big/\left(1+\alpha_1\ln\!\left(1-\frac{it}{\lambda_1}\right)+\alpha_2\ln\!\left(1+\frac{it}{\lambda_2}\right)\right)}
= \frac{1}{1+(\alpha_1/p)\ln\!\left(1-\frac{it}{\lambda_1}\right)+(\alpha_2/p)\ln\!\left(1+\frac{it}{\lambda_2}\right)}.
\]
Hence \(Y \overset{d}{=} GBG(\alpha_1/p,\lambda_1,\alpha_2/p,\lambda_2)\).
2.5.2 Geometric bilateral gamma processes
Here, we develop a first-order autoregressive process with geometric bilateral gamma marginals. Consider an autoregressive structure given by
\[
X_n = \begin{cases}\varepsilon_n, & \text{with probability } p\\ X_{n-1}+\varepsilon_n, & \text{with probability } 1-p\end{cases} \tag{2.5.4}
\]
where {Xn} and {εn} are independently distributed and 0 < p < 1.
Now we shall construct an AR(1) process with stationary marginal distribution as
GBG (α1, λ1, α2, λ2). The innovation structure can be derived by considering the charac-
teristic functions on both sides of (2.5.4), which will yield
ψXn(t) = pψεn(t) + (1− p)ψXn−1(t)ψεn(t).
Assuming stationarity it reduces to the form
ψX(t) = pψε(t) + (1− p)ψX(t)ψε(t).
42
CHAPTER 2. GENERALIZED LAPLACIAN DISTRIBUTIONS AND AUTOREGRESSIVE PROCESSES
Therefore ψε(t) =ψX(t)
p+ (1− p)φX(t).
On substitution, we get
ψε(t) =1
1 + pα1 ln(1− itλ1
) + pα2 ln(1 + itλ2
).
Hence εnd=GBG(pα1, λ1, pα2, λ2).
2.5.3 Estimation of parameters
Bilateral gamma distributions are absolutely continuous with respect to Lebesgue measure, because they are convolutions of gamma distributions. In order to perform maximum likelihood estimation, the density in terms of the Whittaker function given in (2.5.2) is used. We take a sample of n observations. The logarithm of the likelihood function for Θ = (α1, α2, λ1, λ2) is given by
\[
\begin{aligned}
\ln L(\Theta) = {}& n\left(\alpha_1\ln\lambda_1+\alpha_2\ln\lambda_2-\frac{\alpha_1+\alpha_2}{2}\ln(\lambda_1+\lambda_2)-\ln\Gamma(\alpha_1)\right)\\
&+\left(\frac{\alpha_1+\alpha_2}{2}-1\right)\sum_{i=1}^{n}\ln|x_i|-\frac{\lambda_1-\lambda_2}{2}\sum_{i=1}^{n}x_i\\
&+\sum_{i=1}^{n}\ln\Big(W_{\frac{1}{2}(\alpha_1-\alpha_2),\,\frac{1}{2}(\alpha_1+\alpha_2-1)}\big(|x_i|(\lambda_1+\lambda_2)\big)\Big). 
\end{aligned} \tag{2.5.5}
\]
The vector Θ0 obtained from the method of moments is taken as the starting point for an algorithm, for example the Hooke-Jeeves algorithm (see Quarteroni et al. (2002)), which maximizes the log-likelihood function (2.5.5) numerically. This gives a maximum likelihood estimate of the parameters. For more details see Kuchler and Tappe (2008b).
Alternatively, we can use the conditional least squares approach to estimate the four parameters of the bilateral gamma distribution. Conditional least squares estimators are strongly consistent under certain conditions; see Klimko and Nelson (1978). Let x0, x1, ..., xk be a given set of observations of an AR(1) process {Xn}
with bilateral gamma marginals. Then the conditional estimators of the parameters can be
obtained by minimizing
\[
S = \sum_{n=1}^{k}\left[x_n-E(X_n \mid X_{n-1}=x_{n-1})\right]^2
= \sum_{n=1}^{k}\left[x_n-ax_{n-1}-(1-a)\left(\frac{\alpha_1}{\lambda_1}-\frac{\alpha_2}{\lambda_2}\right)\right]^2.
\]
We get the estimates of the parameters by solving the following equations recursively:
\[
\hat{a} = \frac{\sum_{n=1}^{k}\left[x_n-\left(\frac{\alpha_1}{\lambda_1}-\frac{\alpha_2}{\lambda_2}\right)\right]\left[x_{n-1}-\left(\frac{\alpha_1}{\lambda_1}-\frac{\alpha_2}{\lambda_2}\right)\right]}{\sum_{n=1}^{k}\left[x_{n-1}-\left(\frac{\alpha_1}{\lambda_1}-\frac{\alpha_2}{\lambda_2}\right)\right]^2},
\]
\[
\hat{\alpha}_1 = \frac{\lambda_1}{1-a}\,\frac{1}{k}\left[\sum_{n=1}^{k}x_n-a\sum_{n=1}^{k}x_{n-1}\right]+\frac{\lambda_1}{\lambda_2}\alpha_2,
\qquad
\hat{\alpha}_2 = \frac{\lambda_2}{1-a}\,\frac{1}{k}\left[a\sum_{n=1}^{k}x_{n-1}-\sum_{n=1}^{k}x_n\right]+\frac{\lambda_2}{\lambda_1}\alpha_1,
\]
\[
\hat{\lambda}_1 = \left\{\frac{1}{(1-a)\alpha_1}\,\frac{1}{k}\left[\sum_{n=1}^{k}x_n-a\sum_{n=1}^{k}x_{n-1}\right]+\frac{\alpha_2}{\alpha_1}\,\frac{1}{\lambda_2}\right\}^{-1},
\qquad
\hat{\lambda}_2 = \left\{\frac{1}{(1-a)\alpha_2}\,\frac{1}{k}\left[a\sum_{n=1}^{k}x_{n-1}-\sum_{n=1}^{k}x_n\right]+\frac{\alpha_1}{\alpha_2}\,\frac{1}{\lambda_1}\right\}^{-1}.
\]
These equations can be easily solved using standard software packages such as Matlab or Maple.
Tjetjep and Seneta (2006) discussed the generalized method of moments for estimating the parameters in situations where the method of moments is not easily tractable. This resembles minimum chi-squared estimation, but applied to moments. It is based on the quantity
\[
M = \min_{\text{parameters}}\sum_i M^{(i)}, \qquad \text{where} \qquad M^{(i)} = \left(\frac{M_P(i)-M_E(i)}{M_E(i)}\right)^2,
\]
Figure 2.4: (a) The histogram of the log-transformed data; (b) the Q-Q plot for the data and bilateral gamma random variables
where M_P(i) is the ith moment expressed in terms of the parameters and M_E(i) is the corresponding sample moment. When M = 0, the estimating equations for the model are mutually consistent and a solution for the parameter estimates exists.
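For the bilateral gamma model, the model moments needed by the generalized method of moments are available in closed form: since BG(α1, λ1, α2, λ2) is a difference of independent gammas, its cumulants are κ_r = (r − 1)!(α1/λ1^r + (−1)^r α2/λ2^r). The sketch below (illustrative parameters) verifies these expressions against sample cumulants from simulated data.

```python
import numpy as np

rng = np.random.default_rng(6)
a1, l1, a2, l2 = 0.8, 1.2, 1.5, 2.5   # illustrative parameters
n = 1_000_000

# Model cumulants of BG(a1, l1, a2, l2), a difference of independent gammas:
# kappa_r = (r-1)! * (a1/l1**r + (-1)**r * a2/l2**r).
k_model = [a1 / l1 - a2 / l2,
           a1 / l1**2 + a2 / l2**2,
           2 * (a1 / l1**3 - a2 / l2**3),
           6 * (a1 / l1**4 + a2 / l2**4)]

x = rng.gamma(a1, 1 / l1, n) - rng.gamma(a2, 1 / l2, n)
c = x - x.mean()
m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
k_sample = [x.mean(), m2, m3, m4 - 3 * m2**2]   # first four sample cumulants
for km, ks in zip(k_model, k_sample):
    print(km, ks)
```

Matching these four cumulants to their sample counterparts (by minimizing M over the four parameters) is one concrete way to implement the generalized method of moments for this model.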
2.5.4 Application to a financial data
Now we apply the model to data on the daily exchange rate of the Pound sterling to the Indian rupee during the period 2004 to 2008. The data are available in the 'Hand Book of Statistics on the Indian Economy', published by the Reserve Bank of India.
From the data, the summary statistics obtained are as follows:
Mean = −1.8543 × 10⁻⁵, Variance = 5.3628 × 10⁻⁶, Coefficient of skewness = −0.1139, Coefficient of kurtosis = 4.0465.
The parameter values obtained by the generalized method of moments are
α1 = 0.003927, α2 = 0.007339, λ1 = 84.9434, λ2 = 101.0374.
These estimated values make M equal to zero, showing that the model is consistent with the data. The histogram of the log-transformed data is given in Figure 2.4(a) and the Q-Q plot in Figure 2.4(b). The Q-Q plot confirms that the distribution is a good fit for the data.
2.6 Conclusion
In this chapter the generalized Laplacian distribution was considered and its properties were studied. We developed first order autoregressive processes with generalized Laplacian marginals and discussed the generation of the process, its regression behaviour and the simulation of sample paths, as well as the distribution of sums and the covariance structure. We introduced the geometric generalized Laplacian distribution, studied its properties, and discussed the associated processes along with kth order autoregressive extensions. A more general case, namely the bilateral gamma distribution, was also considered; its parameters were estimated using conditional least squares and the generalized method of moments, and a numerical illustration was carried out.
References
Jose, K.K., Manu, M.T. (2009). Bessel function processes, STARS International Journal
(Sciences) 1 (2), 97–106.
Jose, K.K., Manu, M.T. (2011). Generalized Laplacian distributions and autoregressive
processes, Communications in Statistics - Theory and Methods 40, 4263–4277.
Klebanov, L.B., Maniya, G.M., Melamed, I.A. (1984). A problem of Zolotarev and analogs of infinitely divisible and stable distributions in a scheme for summing a random number of random variables, Theory of Probability and its Applications 29, 791–794.
Klimko, L.A., Nelson, P.I. (1978). On conditional least squares estimation for stochastic
processes, Annals of Statistics 6(3), 629–642.
Kotz, S., Kozubowski, T.J., Podgorski, K. (2001). The Laplace Distribution and General-
izations - A Revisit with Applications to Communications, Economics, Engineering
and Finance, Birkhauser, Boston.
Kozubowski, T., Podgorski, K. (2008a). Skew Laplace distributions I, their origins and
inter-relations. The Mathematical Scientist 33, 22-34.
Kozubowski, T., Podgorski, K. (2008b). Skew Laplace distributions II, divisibility proper-
ties and extensions to stochastic processes. The Mathematical Scientist 33, 1–14.
Kuchler, U., Tappe, S. (2008a). On the shapes of bilateral Gamma densities, Statistics
and Probability Letters 78, 2478–2484.
Kuchler, U., Tappe, S. (2008b). Bilateral Gamma distributions and processes in financial
Mathematics, Stochastic Processes and their Applications 118(2), 261–283.
Lawrance, A.J. (1978). Some autoregressive models for point processes, In Proceedings of the Bolyai Mathematical Society Colloquium on Point Processes and Queuing Problems 24, Hungary, 257–275.
Lawrance, A.J. (1982) . The innovation distribution of a gamma distributed autoregressive
process, Scandinavian Journal of Statistics 9, 234–236.
Mathai, A.M., Saxena, R.K., Haubold, H.J. (2006). A certain class of Laplace trans-
forms with applications to reaction and reaction-diffusion equations, Astrophysics
and Space Science 305, 283–288.
Mathai, A.M. (1993a). On noncentral generalized Laplacian distribution, Journal of Statistical Research 27(1–2), 57–80.
Mathai, A.M. (1993b). On noncentral generalized Laplacianness of quadratic forms in
normal variables, Journal of Multivariate Analysis 45(2), 239–246.
Mathai, A.M. (1993c). The residual effect of a growth-decay mechanism and the distribu-
tion of covariance structures, The Canadian Journal of Statistics 21(3) 277–283.
Pillai, R.N. (1990). Harmonic mixtures and geometric infinite divisibility, Journal of Indian
Statistical Association 28, 87–98.
Pillai, R.N., Sandhya, E. (1990). Distributions with complete monotone derivative and
geometric infinite divisibility, Advances in Applied Probability 22, 751–754.
Quarteroni, A., Sacco, R., Saleri, F. (2002). Numerische Mathematik 1, Springer, Berlin.
Seethalekshmi, V., Jose, K.K. (2004a). An autoregressive process with geometric α-
Laplace marginals, Statistical Papers 45, 337–350.
Seethalekshmi, V., Jose, K.K. (2004b). Geometric Mittag-Leffler distributions and pro-
cesses, Journal of Applied Statistical Science 13(4), 335–342.
Seethalekshmi, V., Jose, K.K. (2006). Autoregressive process with Pakes and geometric
Pakes generalized Linnik marginals, Statistics and Probability Letters 76(2), 318–
326.
Tjetjep, A., Seneta, E. (2006). Skewed normal variance-mean models for asset pricing
and the method of moments, International Statistical Review 74(1), 109–126.
Weiss, G. (1977). Shot noise models for the generation of synthetic streamflow data,
Water Resources Research 13, 101–108.
Wu, F. (2008). Applications of the normal Laplace and generalized normal Laplace distributions, Unpublished M.Sc. Thesis, Department of Mathematics and Statistics, University of Victoria.