**Open Journal of Statistics**

Vol.07 No.02(2017), Article ID:75963,22 pages

10.4236/ojs.2017.72025

Likelihood and Quadratic Distance Methods for the Generalized Asymmetric Laplace Distribution for Financial Data

Andrew Luong

École d’actuariat, Université Laval, Canada

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 6, 2017; Accepted: April 27, 2017; Published: April 30, 2017

ABSTRACT

Maximum likelihood (ML) estimation for the generalized asymmetric Laplace (GAL) distribution, also known as the variance gamma distribution, using simplex direct search algorithms is investigated. In this paper, we use numerical direct search techniques for maximizing the log-likelihood to obtain ML estimators instead of the traditional EM algorithm. The density function of the GAL distribution is continuous but not differentiable with respect to the parameters, and the appearance of the Bessel function in the density makes it difficult to obtain the asymptotic covariance matrix for the entire GAL family. Using M-estimation theory, the properties of the ML estimators are investigated in this paper. The ML estimators are shown to be consistent for the GAL family, but their asymptotic normality can only be guaranteed for the asymmetric Laplace (AL) family. The asymptotic covariance matrix is obtained for the AL family, completing results obtained previously in the literature. For the general GAL model, alternative methods of inference based on quadratic distances (QD) are proposed. The QD methods appear to be overall more efficient than likelihood methods in finite samples using sample sizes $n\le 5000$ and the range of parameters often encountered for financial data. The proposed methods only require that the moment generating function of the parametric model exists and has a closed-form expression, and they can be used for other models.

**Keywords:**

M-Estimators, Cumulant Generating Function, Chi-Square Tests, Generalized Hyperbolic Distribution, Simplex Pattern Search, Variance Gamma, Minimum Distance, Value at Risk, Entropic Value at Risk, European Call Option

1. Introduction

1.1. Generalized Asymmetric Laplace (GAL) Distribution

The generalized asymmetric Laplace distribution (GAL) is an infinitely divisible continuous distribution with four parameters given by

$\beta ={\left(\theta ,\mu ,\sigma ,\tau \right)}^{\prime}.$ (1)

The parameter $\theta $ is a location parameter and $\sigma $ is a scale parameter. The parameter $\mu $ can be viewed as the asymmetry parameter of the distribution and $\tau $ is the shape parameter which controls the thickness of the tail of the distribution. If $\mu =0$ , the distribution is symmetric around $\theta $ ; see Kotz et al. ( [1] , p. 180). It is flexible and can be used as an alternative to the four-parameter stable distribution. The GAL distribution has a thicker tail than the normal distribution, but unlike the stable distribution, where even the first positive moment might not exist, all the positive integer moments exist. Its moment generating function is

$M\left(s\right)=\frac{{\text{e}}^{\theta s}}{{\left(1-\mu s-\frac{1}{2}{\sigma}^{2}{s}^{2}\right)}^{\tau}},\sigma ,\tau \ge 0,\beta ={\left(\theta ,\mu ,\sigma ,\tau \right)}^{\prime},$ (2)

$s$ must satisfy the inequality

$1-\frac{1}{2}{\sigma}^{2}{s}^{2}-\mu s>0.$ (3)

The GAL distribution is also known as the variance gamma (VG) distribution. It was introduced by Madan and Seneta [2] , Madan et al. [3] . For the GAL distribution, we adopt the parameterisations used by Kotz et al. [1] . It is not difficult to relate them to the original parameterisation, see Seneta [4] . The commonly used parameterisations will be discussed in Section 1.2.

From the moment generating function, it is easy to see that the first four cumulants of the GAL distribution are given by

${c}_{1}=\theta +\tau \mu ,\text{}{c}_{2}=\tau {\sigma}^{2}+\tau {\mu}^{2},$ (4)

${c}_{3}=3\tau {\sigma}^{2}\mu +2\tau {\mu}^{3},\text{}{c}_{4}=6\tau {\mu}^{4}+12\tau {\mu}^{2}{\sigma}^{2}+3\tau {\sigma}^{4}.$ (5)

Note that ${c}_{3}=0$ if $\mu =0$ and ${c}_{3}$ can be positive or negative depending on the values of the parameters. Therefore, the GAL distribution can be symmetric or asymmetric. Furthermore, with ${c}_{4}>0$ , the tail of the GAL distribution is thicker than that of the normal distribution. These characteristics make the GAL distribution useful for modelling asset returns; see Seneta [4] for further discussion of financial modelling using the GAL distribution.

The moments can be obtained based on cumulants and they are given below,

$\begin{array}{l}E\left(X\right)={c}_{1},\\ E{\left(X-E\left(X\right)\right)}^{2}={c}_{2},\\ E{\left(X-E\left(X\right)\right)}^{3}={c}_{3},\\ E{\left(X-E\left(X\right)\right)}^{4}=6\tau {\mu}^{4}+12\tau {\mu}^{2}{\sigma}^{2}+3\tau {\sigma}^{4}+3{\tau}^{2}{\sigma}^{4}+6{\sigma}^{2}{\tau}^{2}{\mu}^{2}+3{\mu}^{4}{\tau}^{2}.\end{array}$
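As a check on the cumulant expressions (4)-(5), the cumulants can be recovered by differentiating the cumulant generating function $K\left(s\right)=\mathrm{ln}M\left(s\right)$ obtained from (2). A small symbolic sketch (symbol names are my own):

```python
# Verify the first four GAL cumulants (4)-(5) by differentiating the
# cumulant generating function K(s) = log M(s) from the mgf (2).
import sympy as sp

s, theta, mu, sigma, tau = sp.symbols("s theta mu sigma tau", positive=True)

# K(s) = theta*s - tau*log(1 - mu*s - sigma^2 s^2 / 2)
K = theta * s - tau * sp.log(1 - mu * s - sp.Rational(1, 2) * sigma**2 * s**2)

def cumulant(n):
    # n-th cumulant = n-th derivative of K at s = 0
    return sp.simplify(sp.diff(K, s, n).subs(s, 0))

c1, c2, c3, c4 = (cumulant(n) for n in range(1, 5))
assert sp.simplify(c1 - (theta + tau * mu)) == 0
assert sp.simplify(c2 - (tau * sigma**2 + tau * mu**2)) == 0
assert sp.simplify(c3 - (3 * tau * sigma**2 * mu + 2 * tau * mu**3)) == 0
assert sp.simplify(c4 - (6 * tau * mu**4 + 12 * tau * mu**2 * sigma**2
                         + 3 * tau * sigma**4)) == 0
```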

The GAL distribution belongs to the class of normal mean-variance mixture distributions where the mixture variable follows a gamma distribution with shape parameter $\tau $ and scale parameter equal to 1, i.e., with density function

${f}_{w}\left(w\right)=\frac{1}{\Gamma \left(\tau \right)}{w}^{\tau -1}{\text{e}}^{-w},w,\tau >0$ , $\Gamma (.)$ is the commonly used gamma function.

This leads to the following representation in distribution using Expression (4.1.10) in Kotz et al. ( [1] , p. 183),

$X={}^{d}\theta +\mu Y+\sigma \sqrt{Y}Z$ where (6)

1) $Z~N\left(0,1\right),$

2) $Y~G\left(\tau ,1\right)$ with the gamma density given above, independent of $Z$ ,

3) $\theta ,\mu ,\sigma ,\tau $ are parameters with $\sigma ,\tau >0$ .

The representation given by expression (6) is useful for simulating samples from a GAL distribution. Note that despite the simple closed form expression for the moment generating function, the density function is rather complicated as it depends on the modified Bessel function of the third kind with real index $\lambda $ , i.e., ${K}_{\lambda}\left(u\right)$ ; see Kotz et al. ( [1] , p. 315) for various representations of the function ${K}_{\lambda}(.)$ . The density function will be introduced in Section 1.2. Using the moment generating function of the GAL distribution, it is easy to see that the distribution is related to a Lévy process; see Podgorski and Wegener [5] for GAL processes.
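The simulation scheme implied by representation (6) can be sketched as follows; the parameter values are illustrative, and the sample mean and variance are compared with the cumulants ${c}_{1},{c}_{2}$ from (4):

```python
# Simulate GAL draws via the normal mean-variance mixture representation (6):
# X = theta + mu*Y + sigma*sqrt(Y)*Z, Y ~ Gamma(tau, 1), Z ~ N(0, 1).
import numpy as np

def rgal(n, theta, mu, sigma, tau, rng):
    y = rng.gamma(shape=tau, scale=1.0, size=n)
    z = rng.standard_normal(n)
    return theta + mu * y + sigma * np.sqrt(y) * z

rng = np.random.default_rng(0)
theta, mu, sigma, tau = 0.1, -0.05, 0.2, 1.5   # illustrative values
x = rgal(200_000, theta, mu, sigma, tau, rng)

# Sample mean and variance should be close to c1 and c2 from (4)
c1 = theta + tau * mu
c2 = tau * sigma**2 + tau * mu**2
assert abs(x.mean() - c1) < 0.01
assert abs(x.var() - c2) < 0.01
```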

The GAL parametric family can be introduced as a limit case of the generalized hyperbolic (GH) family where the mixing random variable belongs to the generalized inverse Gaussian family; see McNeil et al. [6] for properties of the GH family. Note that the GAL family is nested within the bilateral gamma family as the GAL random variable can be represented in distribution as

$X={}^{d}\theta +\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}{G}_{1}-\kappa {G}_{2}\right)$ , (7)

${G}_{1}$ and ${G}_{2}$ are independent random variables with a common gamma distribution whose shape parameter is $\alpha =\tau $ . The common mgf of the gamma distribution is given by

${M}_{G}\left(s\right)=\frac{1}{{\left(1-s\right)}^{\alpha}}$ , see expression (4.1.1) given by Kotz et al. ( [1] , p. 183).

If we introduce $\kappa $ using $\mu =\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}-\kappa \right)$ , the GAL distribution can also be

parameterised using the four equivalent parameters, i.e., with $\theta ,\sigma ,\kappa ,\tau $ .
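The map between $\mu $ and $\kappa $ can be inverted explicitly: $\mu =\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}-\kappa \right)$ gives the quadratic ${\kappa}^{2}+a\kappa -1=0$ with $a=\sqrt{2}\mu /\sigma $ , whose positive root is $\kappa $ . A small round-trip sketch (function names are my own):

```python
# Convert between the asymmetry parameter mu (parameterisation 1) and
# kappa (parameterisation 2) via mu = (sigma/sqrt(2)) * (1/kappa - kappa).
import math

def kappa_from_mu(mu, sigma):
    a = math.sqrt(2.0) * mu / sigma
    # positive root of kappa**2 + a*kappa - 1 = 0
    return (-a + math.sqrt(a * a + 4.0)) / 2.0

def mu_from_kappa(kappa, sigma):
    return sigma / math.sqrt(2.0) * (1.0 / kappa - kappa)

mu, sigma = -0.05, 0.2        # illustrative values
kappa = kappa_from_mu(mu, sigma)
assert kappa > 0
assert abs(mu_from_kappa(kappa, sigma) - mu) < 1e-12
```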

Moment estimation for the GAL family has been given by Podgorski and Wegener [5] . Maximum likelihood estimation for the GH family by fixing the parameter $\tau $ within some bounds has been given by Protassov [7] , McNeil et al. ( [6] , p. 80). For ML estimation, they implicitly assumed that the mixing random

variable $Y~\text{Gamma}\left(\tau ,\frac{\varphi}{2}\right)$ which implies the following form of the moment

generating function for $X$ ,

$M\left(s\right)=\frac{{\text{e}}^{\theta s}}{{\left(1-\frac{\varphi}{2}\left(\mu s+\frac{1}{2}{\sigma}^{2}{s}^{2}\right)\right)}^{\tau}},\varphi >0.$

From the above expression, it is easy to see that the parameter $\varphi >0$ is redundant and the parameterisation using five parameters will introduce instability in the estimation process. It appears simpler to use the parameterisation given by Kotz et al. [1] or the parameterisation used by Madan and Seneta [2] , Madan et al. [3] , Seneta [4] with only four parameters by letting $\varphi =2$ .

Hu [8] advocated fitting the GAL distribution using the EM algorithm, but the drawback of this approach is the difficulty of obtaining the information matrix using the method of Louis [9] ; see McLachlan and Krishnan [10] for a comprehensive review of the EM algorithm. The lack of a closed form asymptotic covariance matrix for the estimators might create difficulties for hypothesis testing.

1.2. Some Properties of the GAL Distribution and Parameterisations

In this subsection, we first review a few parameterisations which are commonly used for the GAL distribution.

Definition 1 (GAL density)

From the GH density, the density function for the GAL distribution can be obtained and it can be expressed as

$\begin{array}{c}f\left(x;\theta ,\sigma ,\mu ,\tau \right)=\frac{\sqrt{2}{\text{e}}^{\frac{\mu}{\sigma}\left(\frac{x-\theta}{\sigma}\right)}}{\sigma \sqrt{\text{\pi}}\Gamma \left(\tau \right)}{\left(\frac{1}{\sqrt{2+{\left(\frac{\mu}{\sigma}\right)}^{2}}}\right)}^{\tau -0.5}{\left(\frac{\left|x-\theta \right|}{\sigma}\right)}^{\tau -0.5}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\cdot {K}_{\tau -0.5}\left(\sqrt{2+{\left(\frac{\mu}{\sigma}\right)}^{2}}\frac{\left|x-\theta \right|}{\sigma}\right).\end{array}$ (8)

The vector of parameters is $\beta ={\left(\theta ,\mu ,\sigma ,\tau \right)}^{\prime}$ and we shall call this parameterisation 1. The density can be derived using the normal mean-variance mixture representation given by expression (6); see expression (3.30) given by McNeil et al. ( [6] , p. 78).

Alternatively, by letting $\mu =\frac{\sigma}{\sqrt{2}}\left(\frac{1}{\kappa}-\kappa \right)$ and keeping other parameters as in

parametrisation 1, we obtain the following expression for the density of a GAL distribution

$\begin{array}{l}f\left(x;\theta ,\sigma ,\kappa ,\tau \right)=\frac{\sqrt{2}{\text{e}}^{\frac{\sqrt{2}}{2}\left(\frac{1}{\kappa}-\kappa \right)\left(\frac{x-\theta}{\sigma}\right)}}{\sigma \sqrt{\text{\pi}}\Gamma \left(\tau \right)}{\left(\frac{\sqrt{2}}{\frac{1}{\kappa}+\kappa}\right)}^{\tau -0.5}{\left(\frac{\left|x-\theta \right|}{\sigma}\right)}^{\tau -0.5}\\ \text{}\cdot {K}_{\tau -0.5}\left(\frac{1}{\sqrt{2}}\left(\frac{1}{\kappa}+\kappa \right)\frac{\left|x-\theta \right|}{\sigma}\right).\end{array}$ (9)

with the vector of parameters given by $\beta ={\left(\theta ,\kappa ,\sigma ,\tau \right)}^{\prime}$ . We shall call this parameterisation 2; it is used by Kotz et al. ( [1] , p. 184).

Note that $\theta ,\sigma $ are respectively the location and scale parameters with either parameterisation 1 or 2. Setting $\theta =0,\sigma =1$ , the standardized GAL density with parameterisation 2 has only two parameters and is given by

${f}_{\epsilon}\left(x;\kappa ,\tau \right)=\frac{\sqrt{2}{\text{e}}^{\frac{\sqrt{2}}{2}\left(\frac{1}{\kappa}-\kappa \right)x}}{\sqrt{\text{\pi}}\Gamma \left(\tau \right)}{\left(\frac{\sqrt{2}}{\frac{1}{\kappa}+\kappa}\right)}^{\tau -0.5}{\left(\left|x\right|\right)}^{\tau -0.5}{K}_{\tau -0.5}\left(\frac{1}{\sqrt{2}}\left(\frac{1}{\kappa}+\kappa \right)\left|x\right|\right)$

or equivalently, using parameterisation 1,

${f}_{\epsilon}\left(x;\mu ,\tau \right)=\frac{\sqrt{2}{\text{e}}^{\mu x}}{\sqrt{\text{\pi}}\Gamma \left(\tau \right)}{\left(\frac{1}{\sqrt{2+{\left(\mu \right)}^{2}}}\right)}^{\tau -0.5}{\left(\left|x\right|\right)}^{\tau -0.5}{K}_{\tau -0.5}\left(\sqrt{2+{\left(\mu \right)}^{2}}\left|x\right|\right)$ .

Following Kotz et al. [1] , we only use these two parameterisations, but it is easy to see their relationships with parameterisation 3 used by Madan et al. [3] and Seneta [4] . With parameterisation 3, the mgf of the GAL distribution is

$M\left(s\right)=\frac{{\text{e}}^{cs}}{{\left(1-{\theta}^{\prime}\nu s-\frac{1}{2}\nu {{\sigma}^{\prime}}^{2}{s}^{2}\right)}^{1/\nu}},$ (10)

the parameters are ${\theta}^{\prime},{\sigma}^{\prime},\nu ,c$ with

${\theta}^{\prime}=\frac{\mu}{\nu},{{\sigma}^{\prime}}^{2}=\frac{{\sigma}^{2}}{\nu},\nu =1/\tau ,c=\theta $ .
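These relations can be verified numerically: substituting ${\theta}^{\prime}\nu =\mu $ , $\nu {{\sigma}^{\prime}}^{2}={\sigma}^{2}$ and $1/\nu =\tau $ into (10) recovers (2). A sketch with illustrative parameter values:

```python
# Map Kotz parameters (theta, mu, sigma, tau) to the Madan-Seneta
# parameters (theta', sigma'^2, nu, c) and check that the mgf
# expressions (2) and (10) agree numerically.
import math

theta, mu, sigma, tau = 0.1, -0.05, 0.2, 1.5   # illustrative values
nu = 1.0 / tau
theta_p = mu / nu            # theta' = mu * tau
sigma_p2 = sigma**2 / nu     # sigma'^2 = sigma^2 * tau
c = theta

def mgf_kotz(s):
    # expression (2)
    return math.exp(theta * s) / (1 - mu * s - 0.5 * sigma**2 * s**2) ** tau

def mgf_ms(s):
    # expression (10)
    return math.exp(c * s) / (1 - theta_p * nu * s
                              - 0.5 * nu * sigma_p2 * s**2) ** (1 / nu)

for s in (-1.0, -0.3, 0.0, 0.3, 1.0):
    assert abs(mgf_kotz(s) - mgf_ms(s)) < 1e-10
```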

The first four moments using parameterisation 3 as given by Seneta ( [4] , p. 181) are given below,

$\begin{array}{l}E\left(X\right)=c+{\theta}^{\prime},V\left(X\right)={{\sigma}^{\prime}}^{2}+{{\theta}^{\prime}}^{2}\nu ,\\ E{\left(X-E\left(X\right)\right)}^{3}=2{{\theta}^{\prime}}^{3}{\nu}^{2}+3{{\sigma}^{\prime}}^{2}{\theta}^{\prime}\nu ,\\ E{\left(X-E\left(X\right)\right)}^{4}=3{{\sigma}^{\prime}}^{4}\nu +12{{\sigma}^{\prime}}^{2}{{\theta}^{\prime}}^{2}{\nu}^{2}+6{{\theta}^{\prime}}^{4}{\nu}^{3}+3{{\sigma}^{\prime}}^{4}+6{{\sigma}^{\prime}}^{2}{{\theta}^{\prime}}^{2}\nu +3{{\theta}^{\prime}}^{4}{\nu}^{2}.\end{array}$

The GAL random variable can also be expressed as the difference of two independent gamma random variables: it is nested inside the class of bilateral gamma random variables $Y$ which can be represented as

$Y={}^{d}\theta +{G}_{1}-{G}_{2}$ (11)

where ${G}_{1}$ and ${G}_{2}$ are independent gamma random variables with mgf’s given

respectively by ${M}_{{G}_{1}}\left(s\right)=\frac{1}{{\left(1-{\beta}_{1}s\right)}^{\alpha}}$ and ${M}_{{G}_{2}}\left(s\right)=\frac{1}{{\left(1-{\beta}_{2}s\right)}^{\alpha}}$ . We obtain the GAL random variable by letting $\alpha =\tau $ , ${\beta}_{1}=\frac{\sigma}{\kappa \sqrt{2}}$ and ${\beta}_{2}=\frac{\kappa \sigma}{\sqrt{2}}$ .
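A sketch of simulation from this bilateral gamma representation, with common shape $\tau $ , the scales above, and illustrative parameter values; the sample mean is compared with ${c}_{1}=\theta +\tau \mu $ from (4):

```python
# Simulate GAL draws via the bilateral gamma representation (7)/(11):
# X = theta + G1 - G2, G1 ~ Gamma(tau, beta1), G2 ~ Gamma(tau, beta2).
import math
import numpy as np

theta, sigma, tau, kappa = 0.1, 0.2, 1.5, 1.2   # illustrative values
mu = sigma / math.sqrt(2.0) * (1.0 / kappa - kappa)

beta1 = sigma / (kappa * math.sqrt(2.0))
beta2 = kappa * sigma / math.sqrt(2.0)

rng = np.random.default_rng(1)
n = 200_000
g1 = rng.gamma(shape=tau, scale=beta1, size=n)
g2 = rng.gamma(shape=tau, scale=beta2, size=n)
x = theta + g1 - g2

# The sample mean should match c1 = theta + tau*mu from (4)
assert abs(x.mean() - (theta + tau * mu)) < 0.01
```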

The class of bilateral gamma distributions was introduced by Küchler and Tappe [11] , and they have shown that the Esscher transform of a bilateral gamma distribution remains within this class of distributions. More specifically, let ${Y}^{E}$

be the random variable with mgf given by ${M}_{{Y}^{E}}\left(s\right)=\frac{{M}_{Y}\left(s+h\right)}{{M}_{Y}\left(h\right)}$ . It is easy to see

that ${Y}^{E}={}^{d}\theta +{\overline{G}}_{1}-{\overline{G}}_{2}$ , where ${\overline{G}}_{1}$ and ${\overline{G}}_{2}$ are independent gamma random variables with the same common shape parameter $\alpha $ and scale parameters given

respectively by ${\overline{\beta}}_{1}=\frac{{\beta}_{1}}{1-{\beta}_{1}h}$ and ${\overline{\beta}}_{2}=\frac{{\beta}_{2}}{1+{\beta}_{2}h}$ .
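A quick numerical check of this closure property, using the standard Esscher mgf identity ${M}_{{Y}^{E}}\left(s\right)={M}_{Y}\left(s+h\right)/{M}_{Y}\left(h\right)$ and illustrative parameter values:

```python
# Esscher-transform closure for a bilateral gamma Y = theta + G1 - G2:
# M_Y(s+h)/M_Y(h) equals the bilateral gamma mgf with rescaled scales
# beta1/(1 - beta1*h) and beta2/(1 + beta2*h).
import math

theta, alpha = 0.1, 1.5          # illustrative values
beta1, beta2 = 0.12, 0.17
h = 0.5

def mgf_bg(s, b1, b2):
    # mgf of theta + G1 - G2 with Gamma(alpha, b1) and Gamma(alpha, b2)
    return math.exp(theta * s) / ((1 - b1 * s) ** alpha * (1 + b2 * s) ** alpha)

b1_bar = beta1 / (1 - beta1 * h)
b2_bar = beta2 / (1 + beta2 * h)

for s in (-0.5, 0.0, 0.5, 1.0):
    lhs = mgf_bg(s + h, beta1, beta2) / mgf_bg(h, beta1, beta2)
    rhs = mgf_bg(s, b1_bar, b2_bar)
    assert abs(lhs - rhs) < 1e-12
```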

For option pricing with the risk-neutral approach, this property is useful as it is easy to simulate samples from a bilateral gamma distribution. The use of the Esscher transform to find risk-neutral parameters for option pricing in finance is due to the seminal work of Gerber and Shiu [12] . The Esscher transform risk-neutral parameters can also be interpreted as minimum entropy risk-neutral parameters; see Miyahara [13] for this interpretation, and see Section 6 for more discussion of financial applications.

For numerical methods to find estimators, the Nelder-Mead simplex method and related derivative-free simplex methods are recommended. Derivative-free simplex direct search methods are well described in Chapter 16 of the book by Bierlaire [14] .
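A sketch of this direct-search approach: the GAL log-likelihood built from density (8) is maximized with scipy's Nelder-Mead implementation on simulated data. The starting values, the log reparameterisation of $\sigma ,\tau $ and the small numerical guard are my own choices, not prescriptions from the paper:

```python
# Sketch: ML fit of the GAL density (8) by Nelder-Mead direct search.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, kv   # kv = modified Bessel K_lambda

def gal_negloglik(par, x):
    theta, mu = par[0], par[1]
    sigma, tau = np.exp(par[2]), np.exp(par[3])   # keep sigma, tau > 0
    z = np.maximum(np.abs(x - theta) / sigma, 1e-10)  # guard Bessel singularity
    a = np.sqrt(2.0 + (mu / sigma) ** 2)
    ll = (0.5 * np.log(2.0) + mu * (x - theta) / sigma**2
          - np.log(sigma) - 0.5 * np.log(np.pi) - gammaln(tau)
          - (tau - 0.5) * np.log(a) + (tau - 0.5) * np.log(z)
          + np.log(kv(tau - 0.5, a * z)))
    return -np.sum(ll)

# Simulated data via representation (6); true (theta, mu, sigma, tau)
rng = np.random.default_rng(2)
n = 2000
y = rng.gamma(1.5, 1.0, n)
x = -0.05 * y + 0.2 * np.sqrt(y) * rng.standard_normal(n)

start = np.array([np.median(x), 0.0, np.log(x.std()), 0.0])
fit = minimize(gal_negloglik, start, args=(x,), method="Nelder-Mead",
               options={"maxiter": 2000})
theta_hat, mu_hat = fit.x[0], fit.x[1]
sigma_hat, tau_hat = np.exp(fit.x[2]), np.exp(fit.x[3])
```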

The paper is organized as follows. In Section 2, some submodels of the GAL family are introduced to highlight the difficulty of obtaining the asymptotic covariance matrix using classical likelihood theory. Asymptotic properties of the ML estimators are investigated in Section 3. The ML estimators for the GAL

family are shown to be consistent for $\tau >\frac{1}{2}$ . For the special case with $\tau =1$ ,

which corresponds to the asymmetric Laplace (AL) model, we obtain the asymptotic covariance matrix in closed form using the approach based on M-estimation theory as given by Huber [15] , which completes the missing components of expression (2) given by Kotz et al. ( [16] , p. 818). As an alternative to ML estimation, QD estimation based on matching cumulant generating functions is developed in Section 4 for the entire GAL family. The QD estimators are shown to be consistent and follow an asymptotic normal distribution. The asymptotic covariance matrix can be obtained in closed form for the entire GAL family using QD methods, which makes testing for parameters easy to implement. Chi-square goodness-of-fit test statistics can also be constructed based on the distance function used to obtain the QD estimators. The methods are general and can be applied to other models. Numerical issues and simulation illustrations are discussed in Section 5. A limited simulation study shows that the proposed QD estimators perform better than ML estimators overall for sample sizes $n\le 5000$ using parameter values often encountered for financial data. Some applications drawn from finance are discussed in Section 6.

We shall consider first a few submodels of the GAL model to show the difficulties encountered when likelihood theory is used to obtain the asymptotic covariance matrix for ML estimators.

The difficulties are mainly due to the fact that the score functions, when viewed as functions of the parameters, have a discontinuity point and fail to be differentiable. If the asymptotic covariance matrix for the ML estimators is derived based on likelihood theory, it will have missing components. This is the problem with expression (2) given by Kotz et al. ( [16] , p. 818) for the AL family, a subfamily of the GAL family. M-estimation theory will be used to replace likelihood theory for deriving the asymptotic covariance matrix.

2. Some Subfamilies of the GAL Family

Example 1

Let $\mu =0,\sigma =1,\tau =1$ and the only parameter is the location parameter

and the family is symmetric around $\theta $ . Using the result ${K}_{\frac{1}{2}}\left(u\right)=\sqrt{\frac{\text{\pi}}{2u}}{\text{e}}^{-u}$ , the

density function is reduced to

$f\left(x,\theta \right)=\frac{1}{2{s}_{0}}{\text{e}}^{-\frac{\left|x-\theta \right|}{{s}_{0}}},{s}_{0}=\frac{1}{\sqrt{2}}.$

Equivalently,

$f\left(x,\theta \right)={f}_{0}\left(x-\theta \right),{f}_{0}\left(x\right)=\frac{1}{\sqrt{2}}{\text{e}}^{-\sqrt{2}\left|x\right|},-\infty <x<\infty .$

This is the well-known double exponential distribution; the maximum likelihood estimator for $\theta $ is the sample median. There is no Fisher information matrix available as the score function is discontinuous with respect to the parameter $\theta $ . The asymptotic variance of the sample median can be found by using M-estimation theory, see Huber [17] , Huber [15] , and also Amemiya ( [18] , p. 148-154) on the least absolute deviations (LAD) estimator. We shall use the same approach to derive the asymptotic covariance matrix for the ML estimators of the GAL distribution with $\tau =1$ . The GAL distribution with $\tau =1$ is the asymmetric Laplace (AL) distribution, which will be introduced below.
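A Monte Carlo sketch of this result: for the double exponential density above, ${f}_{0}\left(0\right)=1/\sqrt{2}$ , so the asymptotic variance of the sample median is $1/\left(4n{f}_{0}{\left(0\right)}^{2}\right)=1/\left(2n\right)$ ; the simulation below (sample sizes are illustrative) reproduces it:

```python
# Check that the sample median has asymptotic variance 1/(2n) under the
# double exponential density f0(x) = (1/sqrt(2)) * exp(-sqrt(2)*|x|).
import numpy as np

rng = np.random.default_rng(3)
n, reps, theta = 400, 4000, 0.5

# Laplace(scale=b) has density (1/(2b)) e^{-|x|/b}; here b = 1/sqrt(2)
medians = np.array([
    np.median(theta + rng.laplace(scale=1 / np.sqrt(2), size=n))
    for _ in range(reps)
])

assert abs(medians.mean() - theta) < 0.01       # median is consistent
assert abs(medians.var() * n - 0.5) < 0.1       # n * Var(median) ~ 1/2
```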

Example 2

Using the density of the GAL distribution and setting $\tau =1$ , we obtain the AL distribution with only three parameters. The location and scale parameters are given respectively by $\theta ,\sigma $ , and $\mu $ is the asymmetry parameter. If parameterisation 2 is used, the density function $g\left(x;\theta ,\sigma ,\kappa \right)$ of the AL distribution is based on the standardized AL density as given by expression (4.1.31) in Kotz et al. ( [1] , p. 189) with

$\begin{array}{l}g\left(x;\theta ,\sigma ,\kappa \right)=\frac{\sqrt{2}}{\sigma \gamma}\mathrm{exp}\left(\frac{\sqrt{2}}{2}\delta \left(\frac{x-\theta}{\sigma}\right)\right)\mathrm{exp}\left(-\frac{\sqrt{2}}{2}\gamma \left(\frac{\left|x-\theta \right|}{\sigma}\right)\right),\\ \gamma =\kappa +\frac{1}{\kappa},\delta =\frac{1}{\kappa}-\kappa .\end{array}$

The AL family can be considered as a subfamily of the GAL family and the score functions for this model are again discontinuous. We shall derive the asymptotic covariance matrix using M-estimation theory in Section 3.2 and complete expression (2) of Kotz et al. ( [16] , p. 818). The expression derived by the authors has missing components as it is derived based on likelihood theory. Kotz et al. [16] used a different parameterisation, but it is equivalent to the one used in Kotz et al. ( [1] , p. 189) and it is not difficult to establish the links between these two parameterisations.
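A numerical sanity check that the AL density integrates to one; the sign convention $\delta =1/\kappa -\kappa $ , $\gamma =\kappa +1/\kappa $ is chosen to match parameterisation 2, and the parameter values are illustrative:

```python
# The AL density g(x; theta, sigma, kappa) should integrate to one.
import math
from scipy.integrate import quad

def al_pdf(x, theta, sigma, kappa):
    gam = kappa + 1.0 / kappa          # gamma
    delta = 1.0 / kappa - kappa        # delta, parameterisation-2 convention
    z = (x - theta) / sigma
    return (math.sqrt(2.0) / (sigma * gam)
            * math.exp(math.sqrt(2.0) / 2.0 * (delta * z - gam * abs(z))))

theta, sigma, kappa = 0.1, 0.2, 1.3    # illustrative values
# Split the integral at the kink x = theta for accuracy
left, _ = quad(al_pdf, -math.inf, theta, args=(theta, sigma, kappa))
right, _ = quad(al_pdf, theta, math.inf, args=(theta, sigma, kappa))
assert abs(left + right - 1.0) < 1e-8
```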

3. Maximum Likelihood Estimation for the GAL Family

3.1. Maximum Likelihood Estimation for the GAL Distribution

For consistency of the MLE, the following theorem, which is Theorem 2.5 given by Newey and McFadden ( [19] , p. 2131), is useful. We make the basic assumption that we have a random sample consisting of $n$ iid observations ${X}_{1},\cdots ,{X}_{n}$ drawn from the GAL parametric family with density $f\left(x;\beta \right)$ , where ${\beta}_{0}$ is the vector of the true parameters.

Theorem (Consistency)

Assume that:

1) If ${\beta}_{1}\ne {\beta}_{2}$ then $f\left(x;{\beta}_{1}\right)\ne f\left(x;{\beta}_{2}\right)$ .

2) The parameter space $\Theta $ is compact, ${\beta}_{0}\in \Theta $ .

3) $f\left(x;\beta \right)$ is continuous with respect to $\beta $ .

4) $E\left({\mathrm{sup}}_{\beta \in \Theta}\left|\mathrm{ln}\text{}f\left(x;\beta \right)\right|\right)<\infty $ .

Under the conditions stated, the ML estimator (MLE) given by the vector $\widehat{\beta}$ obtained by maximizing the log-likelihood function $\mathrm{ln}L\left(\beta \right)={\displaystyle {\sum}_{i=1}^{n}\mathrm{ln}f\left({x}_{i};\beta \right)}$ is consistent, $\widehat{\beta}\stackrel{p}{\to}{\beta}_{0}$ .

One can see that the conditions for consistency are mild; condition 4) will be satisfied for the GAL family if $\tau >\frac{1}{2}$ as the density function remains bounded. For $\tau \le \frac{1}{2}$ , the density functions with $\theta =0,\mu =0$ tend to infinity as $x\to {0}_{+}$ , see Theorem 4.1.2 given by Kotz et al. ( [1] , p. 190-192).

It might be possible to prove consistency using the approach to obtain results of Theorem 4 by Broniatowski et al. ( [20] , p. 2578).

For asymptotic normality, matters are more complicated as standard theory often requires the function $\mathrm{ln}L\left(\beta \right)$ to be twice differentiable with respect to $\beta $ . The appearance of the Bessel function creates further complications. It makes it very difficult to establish asymptotic properties even with the use of M-estimation theory.

For the special case with $\tau =1$ , which corresponds to the AL distribution, the density function can be expressed without the use of the Bessel function and M-estimation theory can be used to find the asymptotic covariance matrix for the ML estimators. Asymptotic normality has been shown by Kotz et al. ( [1] , p. 158-174) but the asymptotic covariance matrix of the ML estimators is still incomplete.

The formula (2.2) given by Kotz et al. ( [16] , p. 818) does not give the correct asymptotic covariance matrix for the ML estimators. The complete formula for the asymptotic covariance matrix of the ML estimators can be obtained using M-estimation theory. An example is given at the end of Section 3.2 which shows that one cannot recover the well-known asymptotic variance of the sample median using the results in Kotz et al. ( [16] , p. 818).

M-estimation theory allows the score functions, when viewed as functions of the parameters, to have a few points of discontinuity, and full differentiability with respect to $\beta $ can be replaced by one-sided differentiability accordingly. Amemiya ( [18] , p. 151) uses this approach. For establishing asymptotic normality of the sample median, the sample median is viewed as a root given by a solution of the estimating equation

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\psi \left({x}_{i},\theta \right)}=0$ ,

using the indicator function $I[.]$ ,

$\psi \left(x,\theta \right)=-I\left[x>\theta \right]+I\left[x<\theta \right]$ , and $\psi \left(x,\theta \right)=0$ if $x=\theta $ .

The function $\psi \left(x,\theta \right)$ is simply the one-sided derivative and we adopt the notation $\psi \left(x,\theta \right)=\frac{\partial \left|x-\theta \right|}{\partial \theta}$ with the meaning of a one-sided derivative; also see Hogg et al. ( [21] , p. 538) on estimating equations based on the sign test. The probability of the existence of such a root tends to 1 as $n\to \infty $ .

Another M estimator for the location parameter $\theta $ has been proposed by Huber ( [17] , p. 232-233). It consists of estimating $\theta $ by solving

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\psi \left({x}_{i},\theta \right)}=0$ with

$\psi \left(x,\theta \right)=x-\theta $ if $\left|x-\theta \right|\le k$ , where $k$ is a chosen constant, and

$\psi \left(x,\theta \right)=k$ if $x-\theta >k$ , $\psi \left(x,\theta \right)=-k$ if $x-\theta <-k$ .
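A sketch of computing this Huber location estimate by solving the estimating equation with a bracketing root finder; the cutoff $k=1.345$ and the Laplace data are my own illustrative choices:

```python
# Huber location M-estimator: solve (1/n) sum psi(x_i, theta) = 0 with
# psi(x, theta) = clip(x - theta, -k, k).
import numpy as np
from scipy.optimize import brentq

def psi_mean(theta, x, k):
    return np.clip(x - theta, -k, k).mean()

rng = np.random.default_rng(4)
x = 0.5 + rng.laplace(scale=1 / np.sqrt(2), size=2000)   # true theta = 0.5

# psi_mean is monotone decreasing in theta, so [min(x), max(x)] brackets the root
theta_hat = brentq(psi_mean, x.min(), x.max(), args=(x, 1.345))
assert abs(theta_hat - 0.5) < 0.1
```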

For M-estimators based on $\psi \left(x,\beta \right)$ , where $\beta $ is a vector of parameters, Huber [17] , Huber [15] generalized and relaxed the conditions for the classical Taylor expansion. The technical details can be found in his seminal paper and in Huber [15] . They can be summarized as follows. Suppose that the M-estimators $\widehat{\beta}$ are given as the roots of the following estimating functions

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\psi \left(x,\beta \right)}=0$ . (12)

Under the following main conditions:

a) $\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\psi \left({x}_{i},\widehat{\beta}\right)}\stackrel{p}{\to}0$ , assuming $\widehat{\beta}\stackrel{p}{\to}{\beta}_{0}$ has been shown,

b) $\lambda \left({\beta}_{0}\right)={E}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right)=0,\lambda \left(\beta \right)={E}_{{\beta}_{0}}\left(\psi \left(x,\beta \right)\right),$

with assumption N-3 given by Huber ( [15] , p. 132) and $\lambda \left(\beta \right)$ is differentiable with respect to $\beta $ , then we have the following representation:

$\frac{1}{\sqrt{n}}{\displaystyle {\sum}_{i=1}^{n}\psi \left({x}_{i},{\beta}_{0}\right)}=-\Lambda \left({\beta}_{0}\right)\sqrt{n}\left(\widehat{\beta}-{\beta}_{0}\right)+{o}_{p}\left(1\right)$ ,

${\Lambda \left({\beta}_{0}\right)=\frac{\partial \lambda \left(\beta \right)}{\partial {\beta}^{\prime}}|}_{\beta ={\beta}_{0}}$ and ${o}_{p}\left(1\right)$ is a term converging to 0 in probability.

When we compare with the usual Taylor expansion, we only require $\lambda \left(\beta \right)={E}_{{\beta}_{0}}\left(\psi \left(x,\beta \right)\right)$ to be differentiable with respect to $\beta $ . This differentiability condition is satisfied for the AL family. Note that if indeed the score functions are differentiable then $-\Lambda \left({\beta}_{0}\right)$ is the Fisher information matrix.

For the technical details on how to verify the conditions N-3, see Hinkley and Revankar ( [22] , p. 7). Condition a) is usually verified by making use of the Lebesgue dominated convergence theorem (LDCT) as given by Rudin ( [23] , p. 321). It can become very technical to construct integrable functions to bound the score functions in order to check the sufficient conditions for the LDCT, but they are expected to hold for the AL distribution given the existence of all positive integer moments and a parameter space assumed to be compact. Essentially, we need to show that condition a) is met by showing the convergence in probability of the integrals

${\displaystyle {\int}_{-\infty}^{\infty}\psi \left(x,\widehat{\beta}\right)\text{d}{F}_{n}\left(x\right)}\stackrel{p}{\to}{\displaystyle {\int}_{-\infty}^{\infty}\psi \left(x,{\beta}_{0}\right)\text{d}{F}_{{\beta}_{0}}\left(x\right)}={E}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right)=0$ , ${F}_{n}\left(x\right)$ is the

sample distribution function, the score functions are given by expressions (14)-(16).

From the above representation, we then have

$\sqrt{n}\left(\widehat{\beta}-{\beta}_{0}\right)\stackrel{L}{\to}N\left(0,{\left[\Lambda \left({\beta}_{0}\right)\right]}^{-1}{V}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right){\left({\left[\Lambda \left({\beta}_{0}\right)\right]}^{-1}\right)}^{\prime}\right)$ .

The asymptotic covariance matrix of $\widehat{\beta}$ is given by

$V\left(\widehat{\beta}\right)=\frac{1}{n}\left({\left[\Lambda \left({\beta}_{0}\right)\right]}^{-1}\right){V}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right){\left({\left[\Lambda \left({\beta}_{0}\right)\right]}^{-1}\right)}^{\prime}$ , (13)

${V}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right)$ is the covariance matrix of the vector $\psi \left(x,{\beta}_{0}\right)$ , $\psi \left(x,{\beta}_{0}\right)$ is the vector of the true score functions or quasi score functions if a proxy density function is used to replace the true density function.
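As an illustration of formula (13), consider the sandwich variance for the sample median in the double exponential model of Example 1: there $\psi \left(x,\theta \right)$ takes values $\pm 1$ , so ${V}_{{\beta}_{0}}\left(\psi \right)=1$ , while $\Lambda \left({\theta}_{0}\right)$ is obtained by differentiating $\lambda \left(\theta \right)=2{F}_{0}\left(\theta -{\theta}_{0}\right)-1$ , giving $2{f}_{0}\left(0\right)=\sqrt{2}$ and the familiar variance $1/\left(2n\right)$ . A numerical sketch:

```python
# Sandwich variance (13) for the sample median in the double exponential
# model: psi(x, theta) = -I[x > theta] + I[x < theta] takes values +/-1,
# so V(psi) = 1, and Lambda(theta0) = d/dtheta E psi = 2*f0(0).
import math

def F0(t):
    # cdf of f0(x) = (1/sqrt(2)) * exp(-sqrt(2)*|x|)
    if t < 0:
        return 0.5 * math.exp(math.sqrt(2.0) * t)
    return 1.0 - 0.5 * math.exp(-math.sqrt(2.0) * t)

# lambda(theta) = 2*F0(theta - theta0) - 1; central difference at 0
h = 1e-6
Lambda = (2.0 * F0(h) - 2.0 * F0(-h)) / (2.0 * h)   # ~ 2*f0(0) = sqrt(2)

V_psi = 1.0
n = 1000
var_sandwich = V_psi / (n * Lambda**2)               # ~ 1/(2n)
assert abs(var_sandwich * n - 0.5) < 1e-5
```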

Now based on M-estimation theory, we proceed to find $\Lambda \left({\beta}_{0}\right)$ and ${V}_{{\beta}_{0}}\left(\psi \left(x,{\beta}_{0}\right)\right)$ for the AL distribution to obtain the asymptotic covariance matrix of the ML estimators in the following section.

3.2. Asymptotic Covariance Matrix for the AL Family

Kotz et al. [1] , Kotz et al. [16] have shown that the ML estimators for the AL distribution have an asymptotic normal distribution, but their asymptotic covariance matrix given by expression (3.5.1) of Kotz et al. ( [1] , p. 158), which is identical to expression (2) given by Kotz et al. ( [16] , p. 818), is still incomplete. If M-estimation theory is used, then the asymptotic covariance matrix should be based on Corollary (3.2) as given by Huber ( [15] , p. 133); also see expression (12.18) given by Wooldridge ( [24] , p. 407).

Since

$\mathrm{ln}g\left(x;\theta ,\sigma ,\kappa \right)=\mathrm{ln}\sqrt{2}-\mathrm{ln}\sigma -\mathrm{ln}\gamma +\frac{\delta \sqrt{2}}{2}\frac{\left(x-\theta \right)}{\sigma}-\frac{\gamma \sqrt{2}}{2}\frac{\left|x-\theta \right|}{\sigma}$ ,

the following derivatives are the score functions of the AL distribution,

$\begin{array}{l}{\psi}_{1}\left(x;\theta ,\sigma ,\kappa \right)=\frac{\partial \mathrm{ln}g\left(x;\theta ,\sigma ,\kappa \right)}{\partial \theta}=-\frac{\sqrt{2}\delta}{2\sigma}-\frac{\sqrt{2}\gamma}{2\sigma}v\left(x;\theta \right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}v\left(x;\theta \right)=-I\left[x>\theta \right]+I\left[x<\theta \right],\text{\hspace{0.17em}}v\left(x;\theta \right)=0\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}x=\theta .\end{array}$ (14)

${\psi}_{2}\left(x;\theta ,\sigma ,\kappa \right)=\frac{\partial \mathrm{ln}g\left(x;\theta ,\sigma ,\kappa \right)}{\partial \sigma}=-\frac{1}{\sigma}-\frac{\sqrt{2}\delta}{2}\frac{\left(x-\theta \right)}{{\sigma}^{2}}+\frac{\sqrt{2}\gamma}{2}\frac{\left|x-\theta \right|}{{\sigma}^{2}}$ , (15)

${\psi}_{3}\left(x;\theta ,\sigma ,\kappa \right)=\frac{\partial \mathrm{ln}g\left(x;\theta ,\sigma ,\kappa \right)}{\partial \kappa}=-\frac{\frac{\partial \gamma}{\partial \kappa}}{\gamma}+\frac{\sqrt{2}}{2}\frac{\partial \delta}{\partial \kappa}\frac{\left(x-\theta \right)}{\sigma}-\frac{\sqrt{2}}{2}\frac{\partial \gamma}{\partial \kappa}\frac{\left|x-\theta \right|}{\sigma}$ . (16)

Let $\beta ={\left(\theta ,\sigma ,\kappa \right)}^{\prime}$ and let ${\beta}_{0}$ be the vector of the true parameters. We first need to find the vector

$\begin{array}{l}\lambda \left(\beta \right)={\left({\lambda}_{1}\left(\beta \right),{\lambda}_{2}\left(\beta \right),{\lambda}_{3}\left(\beta \right)\right)}^{\prime},\\ {\lambda}_{i}\left(\beta \right)={E}_{{\beta}_{0}}\left({\psi}_{i}\left(x;\theta ,\sigma ,\kappa \right)\right),i=1,2,3.\end{array}$

Subsequently, we need to find the derivatives of these expressions with respect to $\beta $ then evaluated at $\beta ={\beta}_{0}$ to obtain the matrix $-\Lambda \left({\beta}_{0}\right)$ . The matrix $-\Lambda \left({\beta}_{0}\right)$ generalizes the Fisher information matrix.

It reduces to this matrix if the score functions ${\psi}_{i}\left(x;\beta \right),i=1,2,3,$ are differentiable with respect to $\beta $ . It is clear that the elements of $\Lambda \left({\beta}_{0}\right)$ have closed form expressions but are lengthy to display. To obtain ${E}_{{\beta}_{0}}\left({\psi}_{i}\left(x;\theta ,\sigma ,\kappa \right)\right),i=1,2,3$ , note that we have a location and a scale parameter. Consequently, it appears simpler to define first the standardized AL density as the AL density with $\theta =0,\sigma =1$ , i.e.,

${g}_{\epsilon}\left(x;\kappa \right)=\frac{\sqrt{2}}{\gamma}\mathrm{exp}\left(\frac{\sqrt{2}}{2}\delta x\right)\mathrm{exp}\left(-\frac{\sqrt{2}}{2}\gamma \left(\left|x\right|\right)\right)$ and the AL density with three

parameters as

$g\left(x;\theta ,\sigma ,\kappa \right)=\frac{1}{\sigma}{g}_{\epsilon}\left(\frac{x-\theta}{\sigma};\kappa \right)$ .

Making use of ${g}_{\epsilon}\left(x;\kappa \right)$ ,

${E}_{{\beta}_{0}}\left(v\left(x;\theta \right)\right)={\displaystyle {\int}_{-\infty}^{\theta}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x-{\displaystyle {\int}_{\theta}^{\infty}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x$ ,

or

${E}_{{\beta}_{0}}\left(v\left(x;\theta \right)\right)=2{G}_{\epsilon}\left(\frac{\theta -{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)-1$ , (17)

${G}_{\epsilon}\left(x;\kappa \right)$ is the distribution function with density function ${g}_{\epsilon}\left(x;\kappa \right)$ .

Similarly,

${E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)={\displaystyle {\int}_{\theta}^{\infty}\left(x-\theta \right)}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x+{\displaystyle {\int}_{-\infty}^{\theta}\left(\theta -x\right)}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x$ . (18)

Therefore, $\frac{\partial {E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)}{\partial \theta}$ can be obtained by first evaluating the term

$\begin{array}{c}\frac{\partial}{\partial \theta}{\displaystyle {\int}_{\theta}^{\infty}\left(x-\theta \right)}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x=-{\displaystyle {\int}_{\theta}^{\infty}\frac{1}{{\sigma}_{0}}{g}_{\epsilon}}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x\\ =-\left[1-{G}_{\epsilon}\left(\frac{\theta -{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\right],\end{array}$

using Leibniz's rule, which takes into account that the lower bound of the interval of integration also depends on $\theta $ ; subsequently, again using Leibniz's rule, evaluate the expression

$\frac{\partial}{\partial \theta}{\displaystyle {\int}_{-\infty}^{\theta}\left(\theta -x\right)\frac{1}{{\sigma}_{0}}}{g}_{\epsilon}\left(\frac{x-{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)\text{d}x={G}_{\epsilon}\left(\frac{\theta -{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right)$ .

Consequently,

$\frac{\partial {E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)}{\partial \theta}=-1+2{G}_{\epsilon}\left(\frac{\theta -{\theta}_{0}}{{\sigma}_{0}};{\kappa}_{0}\right).$
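As a numerical check of this identity, the sketch below evaluates ${E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)$ by quadrature for the symmetric case $\kappa =1$ , where the standardized density reduces to the double exponential of the location example considered later; the finite-difference derivative is then compared with $-1+2{G}_{\epsilon}\left(\frac{\theta -{\theta}_{0}}{{\sigma}_{0}}\right)$ . The parameter values are for illustration only.

```python
import numpy as np
from scipy.integrate import quad

SQ2 = np.sqrt(2.0)

def g0(x):
    # standardized AL density with kappa = 1 (double exponential)
    return np.exp(-SQ2 * np.abs(x)) / SQ2

def G0(x):
    # its distribution function
    return 0.5 * np.exp(SQ2 * x) if x < 0 else 1.0 - 0.5 * np.exp(-SQ2 * x)

theta0, sigma0 = 0.3, 1.5          # illustrative true values
theta = 0.8                        # point at which the derivative is checked

def E_abs(th):
    # E_{beta0} |X - th| by piecewise quadrature (integrand has kinks at th and theta0)
    f = lambda x: np.abs(x - th) * g0((x - theta0) / sigma0) / sigma0
    a, b = sorted((th, theta0))
    return quad(f, -np.inf, a)[0] + quad(f, a, b)[0] + quad(f, b, np.inf)[0]

h = 1e-4
numeric = (E_abs(theta + h) - E_abs(theta - h)) / (2.0 * h)   # finite difference
analytic = -1.0 + 2.0 * G0((theta - theta0) / sigma0)         # closed form above
```

The two quantities agree to the accuracy of the quadrature and of the finite difference.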

The elements of $\Lambda \left({\beta}_{0}\right)$ can be found subsequently by first forming

${\lambda}_{1}\left(\beta ;{\beta}_{0}\right)={\displaystyle {\int}_{-\infty}^{\infty}{\psi}_{1}\left(x,\beta \right)g\left(x;{\beta}_{0}\right)\text{d}x}=-\frac{\sqrt{2}}{2}\frac{\delta}{\sigma}-\frac{\sqrt{2}}{2}\frac{\gamma}{\sigma}{E}_{{\beta}_{0}}\left(v\left(x;\theta \right)\right)$ ,

${E}_{{\beta}_{0}}\left(v\left(x;\theta \right)\right)$ is as given by expression (17). Also,

$\begin{array}{c}{\lambda}_{2}\left(\beta ;{\beta}_{0}\right)={\displaystyle {\int}_{-\infty}^{\infty}{\psi}_{2}\left(x,\beta \right)g\left(x;{\beta}_{0}\right)\text{d}x}\\ =-\frac{1}{\sigma}-\frac{\sqrt{2}}{2}\frac{\delta}{{\sigma}^{2}}\left({E}_{{\beta}_{0}}\left(x\right)-\theta \right)+\frac{\sqrt{2}}{2}\frac{\gamma}{{\sigma}^{2}}{E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right),\end{array}$

${E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)$ is as given by expression (18), and ${E}_{{\beta}_{0}}\left(x\right)={\theta}_{0}+{\tau}_{0}{\mu}_{0}$ with ${\tau}_{0}=1$ using expression (4). Next,

${\lambda}_{3}\left(\beta ;{\beta}_{0}\right)={\displaystyle {\int}_{-\infty}^{\infty}{\psi}_{3}\left(x,\beta \right)g\left(x;{\beta}_{0}\right)}\text{d}x$ or equivalently,

${\lambda}_{3}\left(\beta ;{\beta}_{0}\right)=-\frac{\frac{\partial \gamma}{\partial \kappa}}{\gamma}+\frac{\sqrt{2}}{2}\frac{\partial \delta}{\partial \kappa}\frac{\left({E}_{{\beta}_{0}}\left(x\right)-\theta \right)}{\sigma}-\frac{\sqrt{2}}{2}\frac{\partial \gamma}{\partial \kappa}\frac{{E}_{{\beta}_{0}}\left(\left|x-\theta \right|\right)}{\sigma},$

then the matrix $\Lambda \left({\beta}_{0}\right)$ can be obtained by differentiating with respect to $\beta $ the vector

$\lambda \left(\beta ;{\beta}_{0}\right)={\left({\lambda}_{1}\left(\beta ;{\beta}_{0}\right),{\lambda}_{2}\left(\beta ;{\beta}_{0}\right),{\lambda}_{3}\left(\beta ;{\beta}_{0}\right)\right)}^{\prime}$ and set $\beta ={\beta}_{0}$ , i.e.,

$\Lambda \left({\beta}_{0}\right)={\frac{\partial \lambda \left(\beta ;{\beta}_{0}\right)}{\partial {\beta}^{\prime}}|}_{\beta ={\beta}_{0}}$ .

Clearly, the elements of the matrix $\Lambda \left({\beta}_{0}\right)$ have closed form expressions but are lengthy to display. Packages like MATLAB or Mathematica can handle symbolic derivatives and can be used to obtain these elements. Substituting ${\beta}_{0}$ by the ML estimator $\widehat{\beta}$ in $\Lambda \left({\beta}_{0}\right)$ yields an estimate for the matrix $\Lambda \left({\beta}_{0}\right)$ .

Now we turn our attention to the matrix $\Sigma $ which is the covariance matrix of the vector of score functions $\psi \left(x,{\beta}_{0}\right)={\left({\psi}_{1}\left(x,{\beta}_{0}\right),{\psi}_{2}\left(x,{\beta}_{0}\right),{\psi}_{3}\left(x,{\beta}_{0}\right)\right)}^{\prime}$ . Using a different but equivalent parameterisation, this matrix has been obtained by Kotz et al. ( [16] , p. 818) and Kotz et al. ( [1] , p. 158), but its inverse does not give the asymptotic covariance matrix of the ML estimators as claimed in their paper. It is not difficult to establish the relationships between the parameterisation used in Example 2 and the one used by Kotz et al. ( [16] , p. 818).

Note that the inverse of $\Sigma $ fails to be the asymptotic covariance matrix of the ML

estimators because $\Lambda \left({\beta}_{0}\right)={\frac{\partial \lambda \left(\beta ;{\beta}_{0}\right)}{\partial {\beta}^{\prime}}|}_{\beta ={\beta}_{0}}$ need not equal $-\Sigma $ when the differentiability assumptions for the score functions do not hold, see Corollary (3.2) and Proposition (3.3) given by Huber ( [15] , p. 133).

The matrix $\Sigma $ can also be estimated by the following estimator

$\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}\left[\psi \left({x}_{i},\widehat{\beta}\right)\right]{\left[\psi \left({x}_{i},\widehat{\beta}\right)\right]}^{\prime}}$ .
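This average of outer products is straightforward to code. The sketch below is generic in the score function $\psi $ ; the normal location-scale score used in the example is only a stand-in to illustrate the call, not the AL score ${\psi}_{1},{\psi}_{2},{\psi}_{3}$ .

```python
import numpy as np

def sigma_estimate(psi, x, beta_hat):
    """Estimate Sigma by (1/n) sum_i psi(x_i, beta_hat) psi(x_i, beta_hat)'."""
    scores = np.array([psi(xi, beta_hat) for xi in x])   # n x p matrix of score vectors
    return scores.T @ scores / len(x)

# illustration with the normal location-scale score (a stand-in for the AL score)
psi_normal = lambda xi, b: np.array([(xi - b[0]) / b[1] ** 2,
                                     ((xi - b[0]) ** 2 - b[1] ** 2) / b[1] ** 3])
rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=20000)
Sigma_hat = sigma_estimate(psi_normal, x, np.array([0.5, 1.0]))
```

For the normal example, the estimate should be close to the Fisher information matrix $\mathrm{diag}\left(1/{\sigma}^{2},2/{\sigma}^{2}\right)$ .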

Let us consider the following location model with known ${\sigma}_{0}$ and verify that expression (2.2) as given by Kotz et al. ( [16] , p. 818) has missing components. The density function is given by

$f\left(x;\theta \right)=\frac{1}{{\sigma}_{0}\sqrt{2}}{\text{e}}^{-\frac{\sqrt{2}}{{\sigma}_{0}}\left|x-\theta \right|}$ , or alternatively the density can also be expressed as

$f\left(x;\theta \right)={f}_{0}\left(x-\theta \right),{f}_{0}\left(x\right)=\frac{1}{{\sigma}_{0}\sqrt{2}}{\text{e}}^{-\frac{\sqrt{2}\left|x\right|}{{\sigma}_{0}}}$ .

This subfamily corresponds to their parametrisation with $\kappa =1$ . The sample median $\widehat{\theta}$ is the ML estimator for $\theta $ ; using their result, it would

lead to the conclusion that the asymptotic variance is given by ${\left(E{\left(\frac{\partial \mathrm{ln}f}{\partial \theta}\right)}^{2}\right)}^{-1}=\frac{{\sigma}_{0}^{2}}{2}$ ,

as indicated by case 1 in the table of their paper. On the other hand, it is known that the asymptotic distribution of the sample median is given by

$\sqrt{n}\left(\widehat{\theta}-{\theta}_{0}\right)\stackrel{L}{\to}N\left(0,\frac{1}{4{\left({f}_{0}\left(0\right)\right)}^{2}}\right)$ , see expression (2.4.19) given by Lehmann

( [25] , p. 81) for example. For the location model being considered, we have

$\frac{1}{4{\left({f}_{0}\left(0\right)\right)}^{2}}=\frac{1}{8{\sigma}_{0}^{2}}$ . Clearly, $\frac{1}{8{\sigma}_{0}^{2}}\ne \frac{{\sigma}_{0}^{2}}{2}$ but the correct asymptotic variance can

be obtained using expression (13).

For the general GAL distribution with four parameters, alternative methods of estimation based on quadratic distances (QD) which make use of the empirical cumulant generating function will be introduced in the next section. The QD

methods are developed based on empirical findings which show that, for finite sample sizes as large as n = 5000, ML methods do not give good estimates for the shape parameter $\tau $ and the scale parameter $\sigma $ , although they give good estimates for the other two parameters. However, the overall efficiency of ML methods lags behind QD methods in finite samples. Also, besides giving better estimates for $\sigma $ and $\tau $ , QD methods can be used for parameter testing, since the asymptotic covariance matrix of the QD estimators can be obtained explicitly for the entire GAL family. The methods also provide a chi-square test statistic of goodness-of-fit for the model being used. Therefore, it might be of interest to consider using QD methods whenever ML methods might have deficiencies.

4. Quadratic Distance Methods

General quadratic distance (QD) theory has been developed in Luong and Thompson [26] . However, if it is used for estimating the parameters of the GAL distribution, we need to specify a distance which can generate estimators with good efficiencies. For applied work, it is also preferable to have methods which are relatively simple to implement numerically.

For financial data, observations are recorded as percentages so they are small in magnitude. We recommend minimizing the following distance, based on matching the empirical cumulant generating function ${K}_{n}\left(t\right)$ with its model counterpart ${K}_{\beta}\left(t\right)$ at the following points

${t}_{j},j=1,\cdots ,m=20$

with

${t}_{1}=0.01,{t}_{2}=0.02,\cdots ,{t}_{10}=0.1,{t}_{11}=-0.01,{t}_{12}=-0.02,\cdots ,{t}_{20}=-0.1.$ (19)

The choice of points as given above is suggested based on empirical findings that overall, the QD estimators are more efficient than the ML estimators for the range of parameters often encountered for modelling financial data using finite sample sizes as large as n = 5000. Note that the set of points chosen does not include the origin 0.

The empirical moment generating function, empirical cumulant generating function are given respectively by

${M}_{n}\left(s\right)=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\text{e}}^{s{X}_{i}}}$ and ${K}_{n}\left(s\right)=\mathrm{log}{M}_{n}\left(s\right)$ .

The model cumulant generating function is ${K}_{\beta}\left(t\right)=\mathrm{log}{M}_{\beta}\left(t\right)$ with ${M}_{\beta}\left(s\right)$ being the model moment generating function as defined by expression (1). The proposed QD estimators, given by the vector $\tilde{\beta}$ , are obtained by minimizing with respect to $\beta $ the following specific QD distance

$D\left(\beta \right)={\displaystyle {\sum}_{j=1}^{20}{\left({K}_{n}\left({t}_{j}\right)-{K}_{\beta}\left({t}_{j}\right)\right)}^{2}}$ . (20)
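As an illustration, the minimization of (20) can be coded directly. Expression (1) for the model mgf is not repeated here; the sketch below assumes the variance-gamma form ${K}_{\beta}\left(t\right)=\theta t-\tau \mathrm{log}\left(1-\mu t-{\sigma}^{2}{t}^{2}/2\right)$ and simulates GAL data through the normal variance-mean gamma mixture. Both this representation and the parameter values are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

t = np.concatenate([np.arange(1, 11), -np.arange(1, 11)]) * 0.01   # points (19)

def K_model(t, beta):
    # assumed GAL cgf, parameters beta = (theta, mu, sigma, tau)
    th, mu, sg, ta = beta
    return th * t - ta * np.log(1.0 - mu * t - 0.5 * sg ** 2 * t ** 2)

# simulated GAL sample via the (assumed) normal variance-mean gamma mixture
rng = np.random.default_rng(1)
beta_true = np.array([0.01, 0.02, 0.05, 2.0])
n = 5000
w = rng.gamma(beta_true[3], 1.0, n)
x = beta_true[0] + beta_true[1] * w + beta_true[2] * np.sqrt(w) * rng.standard_normal(n)

z_n = np.log(np.exp(t[:, None] * x[None, :]).mean(axis=1))   # empirical cgf K_n

def D(beta):
    arg = 1.0 - beta[1] * t - 0.5 * beta[2] ** 2 * t ** 2
    if np.any(arg <= 0.0) or beta[3] <= 0.0:
        return 1e10                      # keep the search inside the mgf domain
    return np.sum((z_n - K_model(t, beta)) ** 2)

start = np.array([0.0, 0.0, 0.1, 1.0])
res = minimize(D, start, method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 20000})
beta_qd = res.x
```

The direct simplex search is consistent with the paper's use of derivative-free algorithms, since $D\left(\beta \right)$ is cheap to evaluate.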

Once the estimates are obtained, a goodness-of-fit test statistic with an asymptotic chi-square distribution with $r=16$ degrees of freedom can also be constructed. General QD distance theory can be used to derive the asymptotic covariance matrix of the QD estimators and the chi-square goodness-of-fit test statistic; they will be given at the end of this section. Having the asymptotic covariance matrix of the QD estimators in closed form for the GAL family is useful for parameter testing.

For notations, let us define the vector based on observations

${z}_{n}={\left({K}_{n}\left({t}_{1}\right),\cdots ,{K}_{n}\left({t}_{m}\right)\right)}^{\prime},m=20$ .

Its model counterpart is the vector

${z}_{\beta}={\left({K}_{\beta}\left({t}_{1}\right),\cdots ,{K}_{\beta}\left({t}_{m}\right)\right)}^{\prime}$ .

Therefore,

$D\left(\beta \right)={\left({z}_{n}-{z}_{\beta}\right)}^{\prime}\left({z}_{n}-{z}_{\beta}\right).$

Observe that the elements of the covariance matrix ${V}_{M}$ for the vector

$\sqrt{n}{\left({M}_{n}\left({t}_{1}\right),\cdots ,{M}_{n}\left({t}_{m}\right)\right)}^{\prime}$ are given by

${V}_{M}\left(i,j\right)={M}_{\beta}\left({t}_{i}+{t}_{j}\right)-{M}_{\beta}\left({t}_{i}\right){M}_{\beta}\left({t}_{j}\right),i=1,\cdots ,20,j=1,\cdots ,20.$

The elements of the approximate covariance matrix based on the differential method or delta method for

$\sqrt{n}{\left({K}_{n}\left({t}_{1}\right),\cdots ,{K}_{n}\left({t}_{m}\right)\right)}^{\prime}$

are given by

${V}_{K}\left(i,j\right)=\left({M}_{\beta}\left({t}_{i}+{t}_{j}\right)-{M}_{\beta}\left({t}_{i}\right){M}_{\beta}\left({t}_{j}\right)\right)/\left({M}_{\beta}\left({t}_{i}\right){M}_{\beta}\left({t}_{j}\right)\right),i=1,\cdots ,20,j=1,\cdots ,20.$
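These two covariance matrices are easy to compute once the model mgf is available; the helpers below take the mgf as a function argument. As a quick hand check, with the standard normal mgf ${\text{e}}^{{s}^{2}/2}$ the entries of ${V}_{K}$ reduce to ${\text{e}}^{{t}_{i}{t}_{j}}-1$ .

```python
import numpy as np

def V_M(t, M):
    # covariance of sqrt(n) M_n(t_i): M(t_i + t_j) - M(t_i) M(t_j)
    Mi = M(t)
    return M(t[:, None] + t[None, :]) - np.outer(Mi, Mi)

def V_K(t, M):
    # delta-method covariance of sqrt(n) K_n(t_i): V_M scaled by M(t_i) M(t_j)
    Mi = M(t)
    return V_M(t, M) / np.outer(Mi, Mi)

t = np.concatenate([np.arange(1, 11), -np.arange(1, 11)]) * 0.01   # points (19)
M_normal = lambda s: np.exp(0.5 * s ** 2)    # standard normal mgf, for the sanity check
VK_normal = V_K(t, M_normal)
```

For the GAL model, one simply passes ${M}_{\beta}$ from expression (1) instead of the normal mgf.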

Under the regularity conditions given by Lemma (3.4.1) of Luong and Thompson ( [26] , p. 244), the QD estimators given by the vector $\tilde{\beta}$ are consistent. Clearly, we need to assume that, on the restricted parameter space, the model moment generating function and the covariance matrix ${V}_{M}$ given above are well defined. Some modifications might be necessary if the methods are applied to other models. The conditions are met in general for the GAL distribution when used for modeling financial data. We then have

$\begin{array}{l}\sqrt{n}\left(\tilde{\beta}-{\beta}_{0}\right)\stackrel{L}{\to}N\left(0,V\right),\\ V={\left({S}^{\prime}S\right)}^{-1}{S}^{\prime}{V}_{K}S{\left({S}^{\prime}S\right)}^{-1}.\end{array}$

The asymptotic covariance for the QD estimators is simply $\frac{1}{n}V$ .

All the expressions which form $V$ as given above are evaluated under the true vector of parameters ${\beta}_{0}$ , $\beta ={\left(\theta ,\mu ,\sigma ,\tau \right)}^{\prime}={\left({\beta}_{1},{\beta}_{2},{\beta}_{3},{\beta}_{4}\right)}^{\prime}$ and

$S=\left[\begin{array}{ccc}\frac{\partial {K}_{\beta}\left({t}_{1}\right)}{\partial {\beta}_{1}}& \cdots & \frac{\partial {K}_{\beta}\left({t}_{1}\right)}{\partial {\beta}_{4}}\\ \vdots & \ddots & \vdots \\ \frac{\partial {K}_{\beta}\left({t}_{m}\right)}{\partial {\beta}_{1}}& \cdots & \frac{\partial {K}_{\beta}\left({t}_{m}\right)}{\partial {\beta}_{4}}\end{array}\right]$ , ${S}^{\prime}$ is the transpose of $S$ .

We also use $S=S\left({\beta}_{0}\right),{V}_{K}={V}_{K}\left({\beta}_{0}\right),{\Sigma}_{2}={\Sigma}_{2}\left({\beta}_{0}\right)$ to emphasize that these matrices depend on ${\beta}_{0}$ . The matrix ${\Sigma}_{2}$ is derived below. For constructing test statistics with a chi-square limiting distribution, use expression (3.4.2) given by Luong and Thompson ( [26] , p. 248) to obtain

$\sqrt{n}\left({z}_{n}-{z}_{\tilde{\beta}}\right)\stackrel{L}{\to}N\left(0,{\Sigma}_{2}\right)$ with ${\Sigma}_{2}$ , a covariance matrix which depends on ${\beta}_{0}$ and

${\Sigma}_{2}=\left[I-S{\left({S}^{\prime}S\right)}^{-1}{S}^{\prime}\right]{V}_{K}\left[I-S{\left({S}^{\prime}S\right)}^{-1}{S}^{\prime}\right].$ (21)

In practice, ${\beta}_{0}$ needs to be replaced by $\tilde{\beta}$ so that an estimate of ${\Sigma}_{2}$ can be defined as

${\tilde{\Sigma}}_{2}={\Sigma}_{2}\left(\tilde{\beta}\right)$ .

We need to find the Moore-Penrose (MP) generalized inverse of ${\tilde{\Sigma}}_{2}$ to construct a chi-square statistic. The quadratic form constructed with the MP inverse will follow a chi-square distribution asymptotically. Many computer packages provide prewritten functions to find the Moore-Penrose inverse of a matrix. It can also be computed easily using the spectral decomposition of ${\tilde{\Sigma}}_{2}$ , i.e., using the representation ${\tilde{\Sigma}}_{2}=PD{P}^{\prime}$ . The columns of the matrix $P$ are the eigenvectors of ${\tilde{\Sigma}}_{2}$ and $D$ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues of ${\tilde{\Sigma}}_{2}$ , given respectively by ${\lambda}_{i}\ge 0,i=1,\cdots ,m$ . The matrix $P$ is orthonormal with the property $P{P}^{\prime}=I$ .

The Moore Penrose inverse ${\tilde{\Sigma}}_{2}^{MP}$ can be obtained as

${\tilde{\Sigma}}_{2}^{MP}=P{D}^{-}{P}^{\prime}$ with

${D}^{-}$ being the diagonal matrix constructed based on the diagonal elements ${\lambda}_{i},i=1,\cdots ,20$ of $D$ . The diagonal elements of ${D}^{-}$ are given as

${\lambda}_{i}^{-}=\frac{1}{{\lambda}_{i}}$ if ${\lambda}_{i}>0$ and ${\lambda}_{i}^{-}=0$ if ${\lambda}_{i}=0$ .
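A sketch of this spectral-decomposition route; for symmetric matrices it agrees with the prewritten function numpy.linalg.pinv. The rank-deficient matrix used in the example is arbitrary.

```python
import numpy as np

def mp_inverse(A, tol=1e-10):
    """Moore-Penrose inverse of a symmetric PSD matrix via A = P D P';
    eigenvalues treated as zero are inverted to zero."""
    lam, P = np.linalg.eigh(A)         # eigenvalues ascending, columns of P orthonormal
    lam_inv = np.zeros_like(lam)
    keep = lam > tol * lam.max()       # threshold relative to the largest eigenvalue
    lam_inv[keep] = 1.0 / lam[keep]
    return (P * lam_inv) @ P.T         # P diag(lam_inv) P'

# example: a 5 x 5 positive semidefinite matrix of rank 2
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 2))
A = B @ B.T
A_mp = mp_inverse(A)
```

In finite precision a small relative threshold replaces the exact condition ${\lambda}_{i}>0$ .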

For discussions on properties of the Moore-Penrose generalized inverse,

see Theil ( [27] , p. 273-274); also see expressions (4.3)-(4.6) given by Harville ( [28] , p. 504). For numerical computations using R, see Section 8.3 given by Fieller ( [29] , p. 123-133). The chi-square test statistic for testing the null hypothesis which specifies that observations are drawn from the GAL family can be based on the criterion function

$Q\left(\beta \right)=n{\left({z}_{n}-{z}_{\beta}\right)}^{\prime}\left(P{D}^{-}{P}^{\prime}\right)\left({z}_{n}-{z}_{\beta}\right),$ (22)

$Q\left(\tilde{\beta}\right)=n{\left({z}_{n}-{z}_{\tilde{\beta}}\right)}^{\prime}\left(P{D}^{-}{P}^{\prime}\right)\left({z}_{n}-{z}_{\tilde{\beta}}\right)\stackrel{L}{\to}{\chi}^{2}\left(16\right)$ . (23)

The limiting distribution of the test statistic is chi-square with $r=16$ degrees of freedom, based on Theorem 3.4.1 of Luong and Thompson ( [26] , p. 248). The test statistic can also be viewed as a generalized Pearson test statistic. The criterion function $Q\left(\beta \right)$ can also be used to find a good starting vector to initialize the algorithms for finding the QD estimators, see Section (3) given by Andrews ( [30] , p. 917-922) for more discussions and Section (5.2) of this paper.

5. Numerical Issues

5.1. Simple Moment Estimators

The simple approximate moment estimates proposed by Seneta [4] can be found explicitly and can be used as starting points for numerical optimization to find the QDE or MLE. Let the sample moments be denoted by ${\widehat{\mu}}_{j},j=1,2,3,4$ with

${\widehat{\mu}}_{1}=\overline{X}$ and ${\widehat{\mu}}_{j}=\frac{1}{n}{\displaystyle {\sum}_{i=1}^{n}{\left({X}_{i}-\overline{X}\right)}^{j}},j=2,3,4$ ; equating them with the model counterparts

and neglecting all the terms with ${{\theta}^{\prime}}^{j},j=2,3,4$ yields the following system of estimating equations for moment estimation,

${\widehat{\mu}}_{1}=c+{\theta}^{\prime},{\widehat{\mu}}_{2}={{\sigma}^{\prime}}^{2},{\widehat{\mu}}_{3}=3{{\sigma}^{\prime}}^{2}{\theta}^{\prime}\nu ,{\widehat{\mu}}_{4}=3{{\sigma}^{\prime}}^{4}\nu +3{{\sigma}^{\prime}}^{4}$ . The moment estimators are

${\left({\overline{\sigma}}^{\prime}\right)}^{2}={\widehat{\mu}}_{2},\overline{\nu}=\frac{\frac{{\widehat{\mu}}_{4}}{3}-{\left({\overline{\sigma}}^{\prime}\right)}^{4}}{{\left({\overline{\sigma}}^{\prime}\right)}^{4}},{\overline{\theta}}^{\prime}=\frac{{\widehat{\mu}}_{3}}{3\overline{\nu}{\left({\overline{\sigma}}^{\prime}\right)}^{2}},\overline{c}={\widehat{\mu}}_{1}-{\overline{\theta}}^{\prime}.$ When converted to the

parameterization given by Kotz et al. [16] , the approximate moment estimators

for $\tau ,{\sigma}^{2},\mu ,\theta $ are given respectively as $\overline{\tau}=\frac{1}{\overline{\nu}},{\overline{\sigma}}^{2}={{\overline{\sigma}}^{\prime}}^{2}\overline{\nu},\overline{\mu}={\overline{\theta}}^{\prime}\overline{\nu},\overline{\theta}=\overline{c}$ . The

approximate moment estimators are not efficient but they are simple and given explicitly. Therefore, they can be used as starting points for the numerical algorithms to implement QD or ML estimation. Moment estimators can also be checked to see whether they are appropriate as starting points; this will be discussed in the next section.
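A sketch of these closed-form moment estimators, returning the Kotz et al. parameterization $\left(\theta ,\mu ,{\sigma}^{2},\tau \right)$ ; the simulated sample uses the normal variance-mean gamma mixture representation of the GAL law, which is an assumption of this example.

```python
import numpy as np

def moment_start(x):
    # Seneta's approximate moment estimators, intended only as starting values
    m1 = x.mean()
    xc = x - m1
    m2, m3, m4 = (xc ** 2).mean(), (xc ** 3).mean(), (xc ** 4).mean()
    s2p = m2                               # (sigma'-bar)^2
    nu = (m4 / 3.0 - s2p ** 2) / s2p ** 2  # nu-bar
    thp = m3 / (3.0 * nu * s2p)            # theta'-bar
    c = m1 - thp                           # c-bar
    # convert: theta = c, mu = theta' nu, sigma^2 = (sigma')^2 nu, tau = 1/nu
    return np.array([c, thp * nu, s2p * nu, 1.0 / nu])

# illustration on simulated data with (theta, mu, sigma^2, tau) = (0, 0.1, 1, 2)
rng = np.random.default_rng(2)
w = rng.gamma(2.0, 1.0, 200000)
x = 0.1 * w + np.sqrt(w) * rng.standard_normal(200000)
est = moment_start(x)
```

The recovered values are rough, as expected of approximate moment estimation, but adequate as starting points.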

5.2. The Choice of an Initial Vector

Most of the algorithms will return a local minimizer, whereas the vector which gives the estimators is defined to be the global minimizer. Due to this limitation, some care is needed to ensure that we can identify the global minimizer. In practice, it is important to test the algorithm with various starting vectors, see Andrews [30] . Andrews [30] has suggested that it is preferable to have the starting vector

${\beta}^{\left(0\right)}={\left({\theta}^{\left(0\right)},{\mu}^{\left(0\right)},{\sigma}^{\left(0\right)},{\tau}^{\left(0\right)}\right)}^{\prime}$ close to the vector of the estimators given by $\tilde{\beta}$

which globally minimizes the objective function. We might look for a different starting vector if the vector of moment estimators cannot be used as a starting vector to initialize the numerical algorithm.

The criterion function $Q\left(\beta \right)$ given by expression (22) which is used to construct goodness of fit test can also be used to select a good starting vector. The starting vector ${\beta}^{\left(0\right)}$ is subject to the screening test by checking whether

$Q\left({\beta}^{\left(0\right)}\right)\le {\chi}_{0.95}^{2}\left(16\right)$ ,

${\chi}_{0.95}^{2}\left(16\right)$ is the 95th percentile of the chi-square distribution with 16 degrees of freedom, for ${\beta}^{\left(0\right)}$ to qualify as a suitable starting vector; see expression (3.5) given by Andrews ( [30] , p. 919). If ${\beta}^{\left(0\right)}$ passes the screening test, then one might consider using it as the vector of starting points for the numerical algorithm used to find the vector of estimators; otherwise, look for another one.
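The screening test is one line of code once the critical value is available; ${\chi}_{0.95}^{2}\left(16\right)\approx 26.296$ .

```python
from scipy.stats import chi2

CUTOFF = chi2.ppf(0.95, df=16)     # 95th percentile of chi-square(16), about 26.296

def passes_screening(Q0):
    # accept beta^(0) as a starting vector only if Q(beta^(0)) <= chi2_{0.95}(16)
    return Q0 <= CUTOFF
```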

5.3. A Limited Simulation Study

For financial data, observations are recorded as percentages so they are small in magnitude. We are therefore in the situation of modeling with values of $\theta $ and $\mu $ near 0, and with plausible values of $\tau $ and $\sigma $ in the ranges $0<\tau \le 10,0<\sigma \le 0.1$ . For parameters in these ranges, we observe that the ML estimators for $\tau $ and $\sigma $ do not perform well for sample sizes as large as $n=5000$ . For comparisons between QD methods and ML methods, the ratio of total mean square errors is used as a measure of the overall relative efficiency. Due to limited computing capacity, as we only have access to a laptop computer, we can only use M = 100 samples, each of size n = 5000.

The overall relative efficiency for comparisons is defined as the ratio

$\frac{\text{TMSE}\left(QD\right)}{\text{TMSE}\left(ML\right)}=\frac{\text{MSE}\left(\tilde{\theta}\right)+\text{MSE}\left(\tilde{\mu}\right)+\text{MSE}\left(\tilde{\sigma}\right)+\text{MSE}\left(\tilde{\tau}\right)}{\text{MSE}\left(\widehat{\theta}\right)+\text{MSE}\left(\widehat{\mu}\right)+\text{MSE}\left(\widehat{\sigma}\right)+\text{MSE}\left(\widehat{\tau}\right)}.$

The expressions for MSE and TMSE which appear in Table 1 are estimated using simulated samples. The results of the simulation study are lengthy; we only extract the key findings, which are summarized in Table 1.

The study seems to indicate that overall ML methods are less efficient than QD methods but ML methods are more efficient for estimating the first two

Table 1. Illustrations of simulation results.

(a) Overall relative efficiency: $\frac{\text{TMSE}\left(\text{QD}\right)}{\text{TMSE}\left(\text{ML}\right)}=0.003$ .

(b) Overall relative efficiency: $\frac{\text{TMSE}\left(\text{QD}\right)}{\text{TMSE}\left(\text{ML}\right)}=0.0207$ .

(c) Overall relative efficiency: $\frac{\text{TMSE}\left(\text{QD}\right)}{\text{TMSE}\left(\text{ML}\right)}=8.357\times {10}^{-5}$ .

parameters, namely $\theta ,\mu $ , for the AL family and for the entire GAL family in finite samples, where little is known about the asymptotic distributions of the ML estimators.

6. Financial Applications

6.1. Option Pricing and Risk Neutral Parameters

For options, as they are tradable, risk neutral parameters are used for pricing. Risk neutral parameters are related to the physical parameters, which can be estimated using historical data. A set of risk neutral parameters can be obtained by using the Esscher transform change of measure, see Schoutens ( [31] , p. 77), based on the seminal works of Gerber and Shiu [12] . They can also be viewed as minimum entropy risk neutral parameters, see Miyahara [13] . We keep the four historical parameters of the GAL distribution as risk neutral parameters but introduce an extra parameter ${h}^{\ast}$ given by the following equation, with $h$ being the unknown variable and $r$ the known risk-free rate,

$r=\mathrm{log}M\left(h+1\right)-\mathrm{log}M\left(h\right)$

where $M\left(s\right)$ is the moment generating function as given by expression (2).
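Solving this one-dimensional equation numerically is straightforward with a bracketing root finder once $\mathrm{log}M\left(s\right)$ is available. The sketch below assumes the variance-gamma form of the GAL mgf; the bracket and parameter values are illustrative (the bracket must stay inside the domain where the mgf exists).

```python
import numpy as np
from scipy.optimize import brentq

def gal_log_K(s, beta):
    # assumed GAL log mgf: theta s - tau log(1 - mu s - sigma^2 s^2 / 2)
    th, mu, sg, ta = beta
    return th * s - ta * np.log(1.0 - mu * s - 0.5 * sg ** 2 * s ** 2)

def esscher_h(r, beta, lo=-30.0, hi=15.0):
    # root of log M(h+1) - log M(h) = r; the left side is increasing in h
    f = lambda h: gal_log_K(h + 1.0, beta) - gal_log_K(h, beta) - r
    return brentq(f, lo, hi)

beta_phys = (0.01, 0.02, 0.05, 2.0)     # illustrative physical parameters
h_star = esscher_h(0.002, beta_phys)    # illustrative risk-free rate r = 0.002
```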

Therefore, the risk neutral parameters are given by the vector

${\beta}_{0}^{N}={\left({h}^{\ast},{\theta}_{0},{\sigma}_{0},{\mu}_{0},{\tau}_{0}\right)}^{\prime}$ .

The price of the asset is modeled as ${S}_{T}={S}_{0}{\text{e}}^{{X}_{T}}$ where:

a) ${S}_{0}$ is the initial asset price at time $t=0$ ,

b) ${X}_{T}={\displaystyle {\sum}_{i=1}^{T}{R}_{i}}$ ,

c) the log returns ${R}_{i}=\mathrm{log}\left({S}_{i}\right)-\mathrm{log}\left({S}_{i-1}\right),i=1,\cdots ,T$ are i.i.d as $R~GAL\left(\beta \right)$ with mgf ${M}_{\beta}\left(s\right)$ .

We also assume $T\ge 1$ and $T$ is a positive integer.

For pricing a European call option with initial price ${S}_{0}$ , strike price $K$ and interest rate $r$ , the price of the European call option is ${\text{e}}^{-rT}E\left({\left({S}_{T}-K\right)}_{+}\right)$ where ${\left({S}_{T}-K\right)}_{+}=\mathrm{max}\left(\left({S}_{T}-K\right),0\right)$ and the expectation is taken under the risk neutral parameters. Therefore, it is possible to use simulated samples from a bilateral gamma distribution to obtain an estimate for $E\left({\left({S}_{T}-K\right)}_{+}\right)$ and price the option.
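A Monte Carlo sketch of this pricing recipe. The GAL log returns are simulated here through the normal variance-mean gamma mixture (an assumed representation, equivalent in law to sampling the bilateral gamma form for the variance-gamma case), and the risk neutral parameter values are purely illustrative.

```python
import numpy as np

def gal_sample(rng, beta, size):
    # assumed normal variance-mean gamma mixture for GAL(theta, mu, sigma, tau)
    th, mu, sg, ta = beta
    w = rng.gamma(ta, 1.0, size)
    return th + mu * w + sg * np.sqrt(w) * rng.standard_normal(size)

def call_price_mc(S0, K, r, T, beta, n_paths=100000, seed=0):
    # exp(-rT) E[(S_T - K)_+] with S_T = S0 exp(X_T), X_T a sum of T iid GAL returns
    rng = np.random.default_rng(seed)
    X_T = gal_sample(rng, beta, (n_paths, T)).sum(axis=1)
    return np.exp(-r * T) * np.maximum(S0 * np.exp(X_T) - K, 0.0).mean()

beta_rn = (0.0, 0.0005, 0.01, 1.0)                 # illustrative risk neutral values
price = call_price_mc(100.0, 100.0, 0.0002, 30, beta_rn)
```

The Monte Carlo error decreases at the usual ${n}^{-1/2}$ rate in the number of simulated paths.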

Seneta ( [4] , p. 182-184) has illustrated the use of the GAL family, moment and ML methods to analyze historical data from the Dow Jones industrial average and other indexes. It is not difficult to see that QD methods can be considered as alternative methods for analyzing financial data.

Besides option pricing, measures of risk are used in finance and actuarial sciences. These measures depend on the underlying distribution, which is specified by a set of parameters. We briefly discuss these notions below. The inference techniques can also be applied to estimate the parameters using historical data and to quantify the level of risks incurred.

6.2. VaR, CVar, EvaR Using the GAL Distribution

The value at risk at confidence level $1-\alpha $ of a continuous loss random variable $X$ with distribution function $F\left(x\right)$ and density function $f\left(x\right)$ is defined as

$Va{R}_{1-\alpha}\left(X\right)={F}^{-1}\left(1-\alpha \right)$ , the quantile of the loss $X=-R$ specified by

$P\left(X>Va{R}_{1-\alpha}\left(X\right)\right)=\alpha $ ; it quantifies the potential loss encountered by the holder of a financial asset for one unit of time. The conditional value at risk is

$CVa{R}_{1-\alpha}\left(X\right)=\frac{1}{\alpha}{\displaystyle {\int}_{Va{R}_{1-\alpha}}^{\infty}xf\left(x\right)\text{d}x}$ , see Rockafellar and Uriyasev [32] for this measure of risk. If the log return random variable $R$ follows a $GAL\left(\theta ,\sigma ,\mu ,\tau \right)$ distribution, then the loss random variable is $X~GAL\left(-\theta ,\sigma ,-\mu ,\tau \right)$ .

Ahmadi-Javid [33] proposed a coherent measure of risk, the entropic value-at-risk (EVaR), using the Chernoff bound; see the seminal paper by Chernoff [34] for the bound. EVaR is defined implicitly using the moment generating function $M\left(z\right)$ . Since the moment generating function of the GAL distribution is relatively simple and does not involve the Bessel function, EVaR can also be computed easily. For more discussions on estimation and risk measures, see Toma and Dedu [35] .

7. Conclusion

As we can see, in finite samples ML methods only offer good estimators for two of the four parameters of the GAL family. Asymptotic normality can only be guaranteed for the AL family, and the lack of a covariance matrix in closed form prevents hypothesis testing for the GAL family. Due to these restrictions, QD methods are developed as complementary methods to ML methods. The methods appear to be suitable for estimation and for parameter testing. The methods also produce a criterion function which, when evaluated at the values taken by the QD estimators, gives a chi-square goodness-of-fit test statistic for the GAL model. The criterion function can be used to select a starting vector which is close to the vector of the QD estimators to start a numerical search algorithm. These last two features are not shared directly by ML methods and appear to be useful for applications.

Acknowledgements

The helpful and constructive comments of the referees, which led to an improvement of the presentation of the paper, and the support from the editorial staff of Open Journal of Statistics in processing the paper are all gratefully acknowledged here.

Cite this paper

Luong, A. (2017) Likelihood and Quadratic Distance Methods for the Generalized Asymmetric Laplace Distribution for Financial Data. Open Journal of Statistics, 7, 347-368. https://doi.org/10.4236/ojs.2017.72025

References

- 1. Kotz, S., Kozubowski, T.J. and Podgorski, K. (2001) The Laplace Distribution and Generalizations. Birkhauser, Boston.

https://doi.org/10.1007/978-1-4612-0173-1 - 2. Madan, D.P. and Seneta, E. (1990) The Variance Gamma (VG) Model for Share Market Returns. Journal of Business, 63, 511-524.

https://doi.org/10.1086/296519 - 3. Madan, D.P., Carr, P. and Chang, E.C. (1998) The Variance Gamma Process and Option Pricing. European Finance Review, 2, 79-105.

- 4. Seneta, E. (2004) Fitting the Variance Gamma Model to Financial Data. Journal of Applied Probability, 41, 177-187.

https://doi.org/10.1017/S0021900200112288 - 5. Podgorski, K. and Wegener, J. (2011) Estimation for Stochastic Models Driven by Laplace Motion. Communications in Statistics, Theory and Methods, 40, 3281-3302.

https://doi.org/10.1080/03610926.2010.499051 - 6. McNeil, A.J., Frey, R. and Embrechts, P. (2005) Quantitative Risk Management. Princeton University Press, Princeton.
- 7. Protassov, R.S. (2004) EM-Based Maximum Likelihood Parameter Estimation for Multivariate Generalized Hyperbolic Distributions with Fixed λ. Statistics and Computing, 14, 67-77.

https://doi.org/10.1023/B:STCO.0000009419.12588.da - 8. Hu, W. (2005) Calibration of Multivariate Generalized Hyperbolic Distributions Using the EM Algorithm with Applications in Risk Management, Portfolio Optimization and Portfolio Credit Risk. Unpublished PHD Thesis, Department of Mathematics, The Florida State University, Tallahassee.
- 9. Louis, T.A. (1982) Finding the Observed Information Using the EM Algorithm. Journal of the Royal Statistical Society Series B, 44, 98-130.
- 10. McLachlan, G.J. and Krishnan, T. (2008) The EM Algorithm and Extensions. 2nd Edition, Wiley, New York.

https://doi.org/10.1002/9780470191613 - 11. Küchler, U. and Tappe, S. (2008) Bilateral Gamma Distributions and Processes in Financial Mathematics. Stochastic Processes and Their Applications, 118, 261-283.
- 12. Gerber, H.U. and Shiu, E.S.W. (1994) Option Pricing by Esscher Transforms. Transactions of the Society of Actuaries, 46, 99-191.
- 13. Miyahara, Y. (2012) Option Pricing in Incomplete Markets: Modeling Based on Geometric Lévy Processes and Minimal Entropy Martingales Measures. Imperial College Press, London.
- 14. Bierlaire, M. (2006) Introduction à l’optimisation différentiable. Presses Polytechniques et Universités Romandes, Lausanne.
- 15. Huber, P. (1981) Robust Statistics. Wiley, New York.

https://doi.org/10.1002/0471725250 - 16. Kotz, S., Kozubowski, T.J. and Podgorski, K. (2002) Maximum Likelihood Estimation of Asymmetric Laplace Parameters. Annals of Institute of Statistical Mathematics, 54, 816-826.

https://doi.org/10.1023/A:1022467519537 - 17. Huber, P. (1967) The Behaviour of Maximum Likelihood Estimates under Nonstandard Conditions. In: Proceeding 5th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, University of California Press, Berkeley.
- 18. Amemiya, T. (1985) Advanced Econometrics. Harvard University Press, Cambridge.
- 19. Newey, W.K. and McFadden, D. (1994) Large Sample Estimation and Hypothesis Testing. In: Engle, R.F. and McFadden, D., Eds., Handbook of Econometrics, Vol. 4, North Holland, Amsterdam.
- 20. Broniatowski, M., Toma, A. and Vajda, I. (2012) Decomposable Pseudodistances and Applications in Statistical Estimation. Journal of Statistical Planning and Inference, 142, 2574-2585.
- 21. Hogg, R., McKean, J.W. and Craig, A.T. (2013) Introduction to Mathematical Statistics. 7th Edition, Pearson, New York.
- 22. Hinkley, D.V. and Revankar, N.S. (1977) Estimation of the Pareto Law from Underreported Data: A Further Analysis. Journal of Econometrics, 5, 1-11.
- 23. Rudin, W. (1976) Principles of Mathematical Analysis. McGraw-Hill, New York.
- 24. Wooldridge, J.M. (2010) Econometric Analysis of Cross Section and Panel Data. 2nd Edition, MIT Press, Cambridge.
- 25. Lehmann, E.L. (1999) Elements of Large Sample Theory. Springer, New York. https://doi.org/10.1007/b98855
- 26. Luong, A. and Thompson, M.E. (1987) Minimum Distance Methods Based on Quadratic Distances for Transforms. Canadian Journal of Statistics, 15, 239-251. https://doi.org/10.2307/3314914
- 27. Theil, H. (1971) Principles of Econometrics. Wiley, New York.
- 28. Harville, D.A. (1997) Matrix Algebra from a Statistician’s Perspective. Springer, New York. https://doi.org/10.1007/b98818
- 29. Fieller, N. (2016) Basics of Matrix Algebra with R. Chapman and Hall, New York.
- 30. Andrews, D.W.K. (1997) A Stopping Rule for the Computation of the Generalized Method of Moments Estimators. Econometrica, 65, 913-931. https://doi.org/10.2307/2171944
- 31. Schoutens, W. (2003) Lévy Processes in Finance: Pricing Financial Derivatives. Wiley, New York. https://doi.org/10.1002/0470870230
- 32. Rockafellar, R.T. and Uryasev, S. (2002) Conditional Value-at-Risk for General Loss Distributions. Journal of Banking and Finance, 26, 1443-1471.
- 33. Ahmadi-Javid, A. (2012) Entropic Value-at-Risk: A New Coherent Risk Measure. Journal of Optimization Theory and Applications, 155, 1105-1123. https://doi.org/10.1007/s10957-011-9968-2
- 34. Chernoff, H. (1952) A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on a Sum of Observations. Annals of Mathematical Statistics, 23, 493-507. https://doi.org/10.1214/aoms/1177729330
- 35. Toma, A. and Dedu, S. (2014) Quantitative Techniques for Financial Risk Assessment: A Comparative Approach Using Different Risk Measures and Estimation Methods. Procedia Economics and Finance, 8, 712-719.