Limit Theory of Model Order Change-Point Estimator for GARCH Models

Journal of Mathematical Finance
Vol. 08, No. 02 (2018), Article ID: 84885, 20 pages
DOI: 10.4236/jmf.2018.82027


Irene W. Irungu1*, Peter N. Mwita2, Antony G. Waititu3

1Pan-African University Institute of Basic Sciences, Technology and Innovation, Nairobi, Kenya

2Machakos University, Machakos, Kenya

3Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya

Received: March 16, 2018; Accepted: May 25, 2018; Published: May 28, 2018

ABSTRACT

The limit theory of a change-point process which is based on the Manhattan distance of the sample autocorrelation function with applications to GARCH processes is examined. The general theory of the sample autocovariance and sample autocorrelation functions of a stationary GARCH process forms the basis of this study. Specifically the point processes theory is utilized to obtain their weak convergence limit at different lags. This is further extended to the change-point process. The limits are found to be generally random as a result of the infinite variance.

Keywords:

Autocorrelation Function, Change-Point, Convergence, GARCH, Manhattan Distance, Model Order, Point Process, Regular Variation, Weak Limit

1. Introduction

Empirical observations made in econometrics and the applied financial time series literature over long time horizons reveal that log-returns of various series of share prices, exchange rates and interest rates exhibit distinctive stylized features. These include: the frequency of large and small values is rather high, suggesting that the data come not from a normal but rather from a heavy-tailed distribution, and exceedances of high thresholds occur in clusters, indicating dependence in the tails. It is also observed that the sample autocorrelations of the data are small, whereas the sample autocorrelations of the absolute and squared values are significantly different from zero even at large lags. This behavior suggests some kind of long-range dependence in the data.

Various models have been proposed to describe these features. Among them is the GARCH model, which has been found appropriate for capturing volatility dynamics in financial time series, particularly in the modelling of stock market volatility and of derivative market volatility. GARCH (1, 1) in particular is often used in applications, as it is believed to capture, despite its simplicity, a variety of the empirically observed stylized features of log-returns. However, log-return data cannot be modelled by one particular GARCH model over a long period of time. In real financial time series, the effect of non-stationarity of the log-return series can be seen by considering the sample autocorrelation function of moving blocks of the same length, as the estimates seem to differ from block to block. This suggests the use of change-point analysis of financial time series modelled by GARCH processes with parameters varying with time. The likelihood ratio scan method has been proposed for estimating multiple change-points in piecewise stationary processes, where a scan statistic reduces the computationally infeasible global multiple change-point estimation problem to a number of single change-point detection problems in local windows. The cumulative sum test has been considered for determining volatility shifts in a GARCH model against long-range dependence, and has also been used for change-point detection in copula ARMA-GARCH models. A Markov-switching GARCH model has also been proposed, in which the volatility in each state is a convex combination of two different GARCH components with time-varying weights, giving the model dynamic behavior that captures variants of shocks. A change-point in the series could also be attributed to a change in the GARCH model order specification.
For this last setting, an estimator based on the Manhattan distance of the sample autocorrelation of squared values has been proposed. This paper furthers that work by deriving the distributional convergence of the process ${D}_{n}^{k}$ used in constructing the change-point estimator. Since ${D}_{n}^{k}$ is based on the Manhattan distance of the sample autocorrelation, the limit theory for sums of strictly stationary sequences is utilized. Conditions ensuring that partial sums of strictly stationary processes converge in distribution to an infinite-variance stable distribution have been established by relating the regular variation condition to the weak convergence of point processes. This approach has been used to derive the limit theory for the autocovariance function of linear processes, later extended to bilinear processes. The limit theory for the sample autocovariance of GARCH processes has also been considered using weak convergence of point processes in combination with the continuous mapping theorem. Point processes have likewise been used to examine the convergence of the partial sum process of stationary regularly varying GARCH (1, 1) sequences, for which the clusters of high-threshold excesses are broken into asymptotically independent blocks, the limit being established to be a stable Lévy process. We utilize point process theory and restrict ourselves to qualitative results.

The paper is organized as follows. Section 2 outlines the GARCH model specification and change-point estimator with corresponding assumptions utilized. The weak convergence of point processes associated with the sequence $\left({X}_{t}^{2},{\sigma }_{t}^{2}\right)$ is considered in Section 3. In Section 4, the asymptotic behavior of the change-point process ${D}_{n}^{k}$ is studied. Here the limiting distribution of ${D}_{n}^{k}$ is derived for a stationary GARCH sequence.

2. Change-Point Estimator

Let ${\left({X}_{t}\right)}_{t\in ℕ}$ be a GARCH process of order $\left(p,q\right)$ given by the equation

$\begin{array}{l}{X}_{t}={\sigma }_{t}{ϵ}_{t}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}t\in ℕ\\ {\sigma }_{t}^{2}={\alpha }_{0}+\underset{i=1}{\overset{p}{\sum }}{\alpha }_{i}{X}_{t-i}^{2}+\underset{j=1}{\overset{q}{\sum }}{\beta }_{j}{\sigma }_{t-j}^{2}\end{array}$ (1)

By iterating the defining difference Equation (1) for ${\sigma }_{t}^{2}$, the GARCH model can be further expressed as a stochastic recurrence equation as follows:

Let ${Y}_{t}=\left(\begin{array}{c}{\sigma }_{t+1}^{2}\\ ⋮\\ {\sigma }_{t-q+2}^{2}\\ {X}_{t}^{2}\\ ⋮\\ {X}_{t-p+1}^{2}\end{array}\right)$ , ${A}_{t}=\left(\begin{array}{ccccccccc}{\alpha }_{1}{ϵ}_{t}^{2}+{\beta }_{1}& {\beta }_{2}& \cdots & {\beta }_{q-1}& {\beta }_{q}& {\alpha }_{2}& {\alpha }_{3}& \cdots & {\alpha }_{p}\\ 1& 0& \cdots & 0& 0& 0& 0& \cdots & 0\\ 0& 1& \cdots & 0& 0& 0& 0& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 1& 0& 0& 0& \cdots & 0\\ {ϵ}_{t}^{2}& 0& \cdots & 0& 0& 0& 0& \cdots & 0\\ 0& 0& \cdots & 0& 0& 1& 0& \cdots & 0\\ ⋮& ⋮& \ddots & ⋮& ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& \cdots & 0& 0& 0& 0& 1& 0\end{array}\right)$ , ${B}_{t}={\left(\begin{array}{cccc}{\alpha }_{0}& 0& \cdots & 0\end{array}\right)}^{\prime }$ ; then ( ${Y}_{t}$ ) satisfies the following stochastic recurrence equation

${Y}_{t}={A}_{t}{Y}_{t-1}+{B}_{t}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}t\in ℕ$ (2)

Specifically, for the GARCH (1, 1) case, with ${A}_{t}={\alpha }_{1}{ϵ}_{t-1}^{2}+{\beta }_{1}$ and ${B}_{t}={\alpha }_{0}$, Equation (2) reduces to the one-dimensional stochastic recurrence equation

${\sigma }_{t}^{2}={\alpha }_{0}+\left({\alpha }_{1}{ϵ}_{t-1}^{2}+{\beta }_{1}\right){\sigma }_{t-1}^{2}\text{ for }t\in ℕ$ (3)
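As an illustration, recursion (3) is straightforward to simulate. The sketch below is ours, not the paper's: it assumes iid standard normal innovations and hypothetical parameter values chosen so that $\alpha_1 + \beta_1 < 1$, anticipating the sufficient stationarity condition of Assumption 1 below.

```python
import numpy as np

def simulate_garch11(alpha0, alpha1, beta1, n, burn_in=500, seed=None):
    """Simulate a GARCH(1,1) path via recursion (3):
    sigma_t^2 = alpha0 + (alpha1*eps_{t-1}^2 + beta1)*sigma_{t-1}^2,
    X_t = sigma_t * eps_t, with iid standard normal innovations."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + burn_in)
    sigma2 = np.empty(n + burn_in)
    sigma2[0] = alpha0 / (1.0 - alpha1 - beta1)  # stationary mean of sigma_t^2
    for t in range(1, n + burn_in):
        sigma2[t] = alpha0 + (alpha1 * eps[t - 1] ** 2 + beta1) * sigma2[t - 1]
    x = np.sqrt(sigma2) * eps
    # discard the burn-in so the start-up value is forgotten
    return x[burn_in:], sigma2[burn_in:]

# alpha1 + beta1 = 0.9 < 1: illustrative values satisfying the sufficient condition
X, sigma2 = simulate_garch11(0.1, 0.1, 0.8, n=2000, seed=42)
```

With these parameters the simulated path exhibits the volatility clustering described in the Introduction.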

Assumption 1. (Strictly Stationary)

The existence of a unique strictly stationary solution to (1) is equivalent to the negativity of the top Lyapunov exponent. This exponent cannot in general be calculated explicitly, but a sufficient condition for its negativity is given by

$\underset{i=1}{\overset{p}{\sum }}{\alpha }_{i}+\underset{j=1}{\overset{q}{\sum }}{\beta }_{j}<1$

Assumption 2. (Ergodic Process)

Standard ergodic theory yields that ( ${X}_{t}$ ) is an ergodic process. Thus its properties can be deduced from a single, sufficiently long realization of the process.

Consider the change-point test hypotheses to be investigated, defined as:

$\begin{array}{l}{H}_{0}:{X}_{t}\sim \text{GARCH}\left(1,1\right)\text{ for }t=1,\cdots ,n\\ \text{against}\\ {H}_{1}:{X}_{t}\sim \left\{\begin{array}{ll}\text{GARCH}\left(1,1\right)& \text{for }t=1,\cdots ,k\\ \text{GARCH}\left(p,q\right)& \text{for }t=k+1,\cdots ,n\end{array}\right.\\ \text{where }p,q\in ℕ\setminus \left\{0\right\}\end{array}$ (4)

Assumption 3. (Weight)

Let the weight ${w}_{k}$ be a measurable function of the sample size n and the change-point k, chosen arbitrarily subject to the condition that

$\begin{array}{l}\underset{i=1}{\overset{k}{\sum }}{\rho }_{i}=\frac{k}{n}\underset{i=1}{\overset{n}{\sum }}{\rho }_{i}\\ ⇒\frac{1}{n}\left(\underset{i=1}{\overset{k}{\sum }}{\rho }_{i}-\frac{k}{n}\underset{i=1}{\overset{n}{\sum }}{\rho }_{i}\right)=0\end{array}$ (5)

Suppose Assumptions 1, 2 and 3 are satisfied. The change-point estimator $\stackrel{^}{k}$ for the hypotheses in (4) is based on the lower bound of the weighted Manhattan divergence measure of the sample autocorrelation function, drawn from the process ${D}_{n}^{k}$ given by

${D}_{n}^{k}=\frac{k}{n}\left(1-\frac{k}{n}\right)|\frac{1}{k}\underset{i=1}{\overset{k}{\sum }}{\rho }_{i}-\frac{1}{n-k}\underset{i=k+1}{\overset{n}{\sum }}{\rho }_{i}|$ (6)

where ${\rho }_{i}$ and k denote the sample autocorrelation function and the unknown change-point respectively, estimated as:

${\rho }_{k}=\frac{{\sum }_{t=1}^{k-h}{X}_{t}^{2}{X}_{t+h}^{2}}{{\sum }_{t=1}^{k}{X}_{t}^{4}}\text{ for }0<h<k,\text{ }\stackrel{^}{k}=\mathrm{min}\left\{k:{D}_{n}^{k}=\underset{1<j<n}{\mathrm{max}}{D}_{n}^{j}\right\}$ (7)

Proof. Let $X=\left({X}_{1},{X}_{2},\cdots ,{X}_{k}\right)$ be a k-dimensional vector and $Y=\left({X}_{k+1},{X}_{k+2},\cdots ,{X}_{n}\right)$ an $\left(n-k\right)$-dimensional vector. The autocovariance and autocorrelation functions can be expressed in terms of the inner product as

$acovar〈X,Y〉=〈X-E\left(X\right),Y-E\left(Y\right)〉$ (8)

$acorr〈X,Y〉=〈\frac{X-E\left(X\right)}{sd\left(X\right)},\frac{Y-E\left(Y\right)}{sd\left(Y\right)}〉$ (9)

where $sd\left(X\right)$ and $sd\left(Y\right)$ denote the standard deviations of X and Y respectively, each an ${L}_{2}$ distance from the mean. Applying Hölder's inequality (Theorem 7) to (8) and (9) yields

$\begin{array}{l}|acovar\left(X,Y\right)|\le sd\left(X\right)sd\left(Y\right)\in {L}_{1}\text{ space}\\ |acorr\left(X,Y\right)|\le 1\in {L}_{1}\text{ space}\end{array}$ (10)

Following (10), we can define a sequence of autocorrelation functions ${\rho }_{i+1,j}$, where for fixed $i=0$, $1\le j\le n-1$ and for fixed $j=n$, $1\le i\le n-1$, so that we have two subsequences ${\rho }_{1j}=\left({\rho }_{1,1},{\rho }_{1,2},\cdots ,{\rho }_{1,k},\cdots ,{\rho }_{1,n-1}\right)$ and ${\rho }_{in}=\left({\rho }_{2,n},{\rho }_{3,n},\cdots ,{\rho }_{k+1,n},\cdots ,{\rho }_{n,n}\right)$, where ${\rho }_{1,k}$ and ${\rho }_{k+1,n}$ denote the autocorrelations of the sequences ${\left\{{X}_{t}^{2}\right\}}_{t=1}^{k}$ and ${\left\{{X}_{t}^{2}\right\}}_{t=k+1}^{n}$ for $1\le k\le n$. A change-point process ${D}_{n}^{k}$, quantifying the deviation between ${\rho }_{1,k}$ and ${\rho }_{k+1,n}$ through a divergence measure motivated by the weighted ${L}_{p}$ distance, with k denoting the change-point, is proposed. Specifically, taking $p=1$ yields a weighted Manhattan distance; linearity and the absolute-value inequality for the expectation operator then give

$\begin{array}{l}{L}_{p}\left({\rho }_{1,k}-{\rho }_{k+1,n}\right)=\underset{k=1}{\overset{n}{\sum }}{w}_{k}|{\rho }_{1,k}-{\rho }_{k+1,n}|\ge {w}_{k}|E\left({\rho }_{1,k}\right)-E\left({\rho }_{k+1,n}\right)|\\ \text{where }{\rho }_{1,k}=\frac{{\sum }_{t=1}^{k-h}{X}_{t}^{2}{X}_{t+h}^{2}}{{\sum }_{t=1}^{k}{X}_{t}^{4}}\text{ for }0<h<k\text{ and }{\rho }_{k+1,n}=\frac{{\sum }_{t=k+1}^{n-h}{X}_{t}^{2}{X}_{t+h}^{2}}{{\sum }_{t=k+1}^{n}{X}_{t}^{4}}\text{ for }0<h<n-k\end{array}$ (11)

The change-point process ${D}_{n}^{k}$ is taken to be the lower bound of the Manhattan divergence measure (11), where the weight ${w}_{k}$ is as specified in Assumption 3; the resulting process is as given in (6). The change-point estimator $\stackrel{^}{k}$ of a change-point ${k}^{*}$ is the point at which there is maximal sample evidence for a break in the sample autocorrelation function of the squared returns process. It is therefore estimated as the least value of k, $1<k<n$, that maximizes the value of ${D}_{n}^{k}$, as given in (7).
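A minimal computational sketch of the resulting procedure (all function names are ours): the lag-h sample autocorrelation of the squared values is computed as in (7) on the two segments $\{1,\dots,k\}$ and $\{k+1,\dots,n\}$, the contrast is weighted by $\frac{k}{n}(1-\frac{k}{n})$ as in (6), and the least maximizer is returned. The direct segment-wise contrast is a simplification of the full weighted construction in (11).

```python
import numpy as np

def rho_segment(x2, h):
    """Lag-h sample autocorrelation of a squared series, as in (7):
    sum_{t=1}^{k-h} x2_t * x2_{t+h} / sum_{t=1}^{k} x2_t^2."""
    k = len(x2)
    return np.dot(x2[: k - h], x2[h:]) / np.dot(x2, x2)

def change_point(x, h=1):
    """Least k maximizing D_n^k = (k/n)(1 - k/n)|rho_{1,k} - rho_{k+1,n}|."""
    n = len(x)
    x2 = np.asarray(x, dtype=float) ** 2
    D = np.full(n, -np.inf)
    for k in range(h + 2, n - h - 2):  # keep both segments longer than the lag
        d = abs(rho_segment(x2[:k], h) - rho_segment(x2[k:], h))
        D[k] = (k / n) * (1.0 - k / n) * d
    k_hat = int(np.argmax(D))          # first, i.e. least, maximizer
    return k_hat, D

def arch1(n, a1, rng):
    """ARCH(1)-type segment with volatility clustering (autocorrelated squares)."""
    e, x = rng.standard_normal(n), np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = np.sqrt(1.0 + a1 * x[t - 1] ** 2) * e[t]
    return x

rng = np.random.default_rng(0)
# iid segment followed by an ARCH(1) segment: a change in the ACF of the squares
x = np.concatenate([rng.standard_normal(300), arch1(200, 0.8, rng)])
k_hat, D = change_point(x, h=1)
```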

3. Point Process Theory

Point process techniques are utilized in obtaining the structure of limit variables and limit processes which occur in the theory of summation in time series analysis. The point process theory as developed by  is utilized. Consider the state space of the point process ${\stackrel{¯}{ℝ}}^{n}\\left\{0\right\}$ where $\stackrel{¯}{ℝ}=ℝ\cup \left\{\infty \right\}\cup \left\{-\infty \right\}$ . Let B be the collection of bounded Borel sets in ${\stackrel{¯}{ℝ}}^{n}\\left\{0\right\}$ . Let ${F}_{c}$ be a collection of bounded non-negative continuous functions on ${\stackrel{¯}{ℝ}}^{n}\\left\{0\right\}$ with bounded support and ${F}_{s}$ be a collection of bounded non-negative step functions on ${\stackrel{¯}{ℝ}}^{n}\\left\{0\right\}$ with bounded support. Write M for the collection of Radon counting measures on ${\stackrel{¯}{ℝ}}^{n}\\left\{0\right\}$ with null measure o. This means that $\mu \in M\\left\{0\right\}$ if and only if μ is of the form ${\sum }_{i=1}^{\infty }{n}_{i}{\epsilon }_{{X}_{i}}$ , where ${n}_{i}\in \left\{1,2,\cdots \right\}$ , the points ${X}_{i}$ are distinct and

${\vee }_{i=1}^{\infty }|{X}_{i}|<\infty$ and ${\epsilon }_{{X}_{i}}$ is the Dirac measure at ${X}_{i}$, that is, ${\epsilon }_{{X}_{i}}\left(B\right)=\left\{\begin{array}{ll}1& \text{for }{X}_{i}\in B\\ 0& \text{for }{X}_{i}\notin B\end{array}\right.$ for any Borel set B. Let ${M}_{y}\subset M$ be the collection of measures μ such that $\mu \left(\left\{X:|X|>y\right\}\right)>0$, so that ${M}_{0}=M\setminus \left\{o\right\}$. Define $\stackrel{˜}{M}=\left\{\mu \in M:\mu \left(\left\{X:|X|>1\right\}\right)=0\text{ and }\mu \left(\left\{X:X\in {S}^{n-1}\right\}\right)>0\right\}$ and let $B\left(\stackrel{˜}{M}\right)$ be the Borel σ-field on $\stackrel{˜}{M}$.

Consider a strictly stationary sequence ${\left({X}_{t}\right)}_{t\in ℕ}$ of random row vectors with values in ${ℝ}^{n}$ , that is, $X=\left({X}_{1},\cdots ,{X}_{n}\right)$ . The characterization of the asymptotic behavior of the tails of the random variable X is examined through the regular variation condition.

Theorem 1. (Regular Variation Condition)

Assume that ϵ has a density with unbounded support and that ${\alpha }_{0}>0$, $E\left[\mathrm{ln}\left({\alpha }_{1}{ϵ}^{2}+{\beta }_{1}\right)\right]<0$, $E{|{\alpha }_{1}{ϵ}^{2}+{\beta }_{1}|}^{\frac{p}{2}}\ge 1$ and $E{|ϵ|}^{p}\mathrm{ln}|ϵ|<\infty$ hold for some $p>0$. Then:

1) there exists a number $\kappa \in \left(0,p\right]$ which is the unique solution of the equation

$E\left[{\left({\beta }_{1}+{\alpha }_{1}{ϵ}_{n}^{2}\right)}^{\kappa /2}\right]=1$

and there exists a positive constant ${c}_{0}={c}_{0}\left({\alpha }_{0},{\alpha }_{1},{\beta }_{1}\right)$ such that

$P\left(\sigma >x\right)\sim {c}_{0}{x}^{-\kappa }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}x\to \infty$

2) If $E{|ϵ|}^{\kappa +\xi }<\infty$ for some $\xi >0$ , then

$P\left(|X|>x\right)\sim E{|ϵ|}^{\kappa }P\left(\sigma >x\right)$

and the vector $\left(X,\sigma \right)$ is jointly regularly varying such that

$\frac{P\left(|X,\sigma |>xt,\left(X,\sigma \right)/|X,\sigma |\in B\right)}{P\left(|X,\sigma |>t\right)}\stackrel{v}{\to }{x}^{-\kappa }P\left(\Theta \in B\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}t\to \infty$

where $\stackrel{v}{\to }$ denotes vague convergence on the Borel σ-field of the unit sphere ${S}^{1}$ of ${ℝ}^{2}$, relative to the norm $|\text{ }\cdot \text{ }|$, with

$P\left(\Theta \in \cdot \right)=\frac{E{|\left(ϵ,1\right)|}^{\kappa }{I}_{\left\{\left(ϵ,1\right)/|\left(ϵ,1\right)|\in \text{ }\cdot \right\}}}{E{|\left(ϵ,1\right)|}^{\kappa }}$

Proof. Assume ξ and η are independent non-negative random variables such that $P\left(\xi >x\right)\sim L\left(x\right){x}^{-\kappa }$ for some slowly varying function L, and $E{\eta }^{\kappa +\epsilon }<\infty$ for some $\epsilon >0$; then $P\left(\eta \xi >x\right)\sim E{\eta }^{\kappa }P\left(\xi >x\right)$ as $x\to \infty$ (Breiman's lemma).

Applying this result yields

$\begin{array}{l}P\left(|X,\sigma |>xt,\left(X,\sigma \right)/|X,\sigma |\in B\right)\\ =P\left(\sigma |ϵ,1|>xt,\left(ϵ,1\right)/|ϵ,1|\in B\right)\\ =P\left(\sigma |ϵ,1|{I}_{\left\{\left(ϵ,1\right)/|ϵ,1|\in B\right\}}>xt\right)\\ ~E{|ϵ,1|}^{\kappa }{I}_{\left\{\left(ϵ,1\right)/|ϵ,1|\in B\right\}}P\left(\sigma >xt\right)\\ ~E{|ϵ,1|}^{\kappa }{I}_{\left\{\left(ϵ,1\right)/|ϵ,1|\in B\right\}}{x}^{-\kappa }P\left(\sigma >t\right)\end{array}$

also

$P\left(|X,\sigma |>t\right)=P\left(\sigma |ϵ,1|>t\right)~E{|ϵ,1|}^{\kappa }P\left(\sigma >t\right)$

which completes the proof.
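Part 1) of Theorem 1 characterizes the tail index κ as the root of $E[(\beta_1+\alpha_1\epsilon^2)^{\kappa/2}]=1$. This root is rarely available in closed form, but it can be approximated numerically. The sketch below is ours, not the paper's: it uses Monte Carlo for the expectation and bisection for the root, assuming standard normal innovations and illustrative parameter values.

```python
import numpy as np

def tail_index(alpha1, beta1, n_mc=200_000, seed=0):
    """Solve E[(beta1 + alpha1*eps^2)^(kappa/2)] = 1 for kappa > 0.
    f(kappa) = E[A^(kappa/2)] - 1 equals 0 at kappa = 0, dips negative when
    E[log A] < 0, and crosses zero again exactly at the tail index."""
    rng = np.random.default_rng(seed)
    log_a = np.log(beta1 + alpha1 * rng.standard_normal(n_mc) ** 2)
    assert log_a.mean() < 0, "stationarity condition E[log A] < 0 must hold"
    f = lambda kappa: np.mean(np.exp(0.5 * kappa * log_a)) - 1.0
    lo, hi = 1e-6, 1.0
    while f(hi) < 0:           # grow the bracket until f changes sign
        hi *= 2.0
        if hi > 512:
            raise ValueError("no root found below kappa = 512")
    for _ in range(80):        # bisection on the bracketed root
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

kappa = tail_index(alpha1=0.5, beta1=0.3)
```

For these illustrative parameters the root lies between 2 and 4, the regime treated in part 2) of the autocovariance limit results below.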

Theorem 2. (Strongly Mixing Condition)

Let ( ${a}_{n}$ ) be a sequence of positive numbers such that

$nP\left(|X|>{a}_{n}\right)\to 1$ (12)

The sequence ( ${a}_{n}$ ) can be chosen as the $\left(1-{n}^{-1}\right)$-quantile of $|X|$. Since $|X|$ is regularly varying, ${a}_{n}={n}^{1/\kappa }L\left(n\right)$ for some slowly varying function L. Condition (13) below is required to hold for ( ${X}_{t}$ ): there must exist a sequence of positive integers ( ${r}_{n}$ ) with ${r}_{n}\to \infty$ and ${k}_{n}=\left[n/{r}_{n}\right]\to \infty$ as $n\to \infty$ such that

$E\left[\mathrm{exp}\left\{-\underset{t=1}{\overset{n}{\sum }}f\left(\frac{{X}_{t}}{{a}_{n}}\right)\right\}\right]-{\left(E\left[\mathrm{exp}\left\{-\underset{t=1}{\overset{{r}_{n}}{\sum }}f\left(\frac{{X}_{t}}{{a}_{n}}\right)\right\}\right]\right)}^{{k}_{n}}\to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}n\to \infty ,\forall \text{\hspace{0.17em}}f\in {F}_{s}$ (13)

Condition (13) is implied by strong mixing of the stationary sequence ( ${X}_{t}$ ).
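In practice the norming sequence can be read off a sample as the empirical $(1-n^{-1})$-quantile of $|X|$, as indicated below (12). A small sketch, ours, using a Pareto-type sample with tail index 3, for which the exact $(1-1/n)$-quantile is $n^{1/3}-1$ (function name and test distribution are our choices):

```python
import numpy as np

def norming_constant(abs_sample, n):
    """Empirical (1 - 1/n)-quantile of |X|: the choice of a_n in (12)."""
    return float(np.quantile(abs_sample, 1.0 - 1.0 / n))

rng = np.random.default_rng(1)
abs_x = rng.pareto(3.0, size=1_000_000)  # P(|X| > x) = (1 + x)^{-3}
a_1000 = norming_constant(abs_x, 1000)   # exact quantile: 1000^{1/3} - 1 = 9
```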

Assume that the joint regular variation condition of Theorem 1 and the strong mixing condition of Theorem 2 are satisfied for a stationary sequence ( ${X}_{t}$ ); then a statement can be made about the weak convergence of the sequence of point processes

${N}_{n}=\underset{t=1}{\overset{n}{\sum }}{\epsilon }_{{X}_{t}/{a}_{n}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}n=1,2,\cdots$ (14)

Define

${\stackrel{˜}{N}}_{n}=\underset{i=1}{\overset{{m}_{n}}{\sum }}{\stackrel{˜}{N}}_{{r}_{n},i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots ,{m}_{n}$ (15)

where ${\stackrel{˜}{N}}_{{r}_{n},i}$ are independent and identically distributed as ${\stackrel{˜}{N}}_{{r}_{n},0}={\sum }_{t=1}^{{r}_{n}}{\epsilon }_{{X}_{t}/{a}_{n}}$. It then follows that ( ${N}_{n}$ ) converges weakly if and only if ${\stackrel{˜}{N}}_{n}$ does, and they have the same limit N. The limit N is identical in law to the point process ${\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{\epsilon }_{{P}_{i}{Q}_{ij}}$, where ${\sum }_{i=1}^{\infty }{\epsilon }_{{P}_{i}}$ is a Poisson process on ${ℝ}_{+}$ with ${P}_{i}$ describing the radial part of the points, and ${\sum }_{j=1}^{\infty }{\epsilon }_{{Q}_{ij}}$ is a sequence of independent and identically distributed point processes with ${Q}_{ij}$ describing the spherical part, with joint distribution Q on $\left(\stackrel{˜}{M},B\left(\stackrel{˜}{M}\right)\right)$.

Theorem 3. Assume that ( ${X}_{t}$ ) is a stationary sequence of random vectors whose finite-dimensional distributions are jointly regularly varying with index $\kappa >0$. To be specific, let ${\theta }_{-m},\cdots ,{\theta }_{m}$ be the $\left(2m+1\right)n$-dimensional random row vector with values in the unit sphere ${S}^{\left(2m+1\right)n-1}$, $m\ge 0$. Assume that the strong mixing condition holds for ( ${X}_{t}$ ) and that

$\underset{m\to \infty }{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}\mathrm{sup}}P\left(\underset{t=m}{\overset{{r}_{n}}{\bigvee }}|{X}_{t}|>{a}_{n}y\text{ }\Big|\text{ }|{X}_{0}|>{a}_{n}y\right)=0,\text{ }y>0,$

Then the limit

$\gamma =\underset{m\to \infty }{\mathrm{lim}}\frac{E{\left({|{\theta }_{0}^{\left(m\right)}|}^{\kappa }-\underset{j=1}{\overset{m}{\bigvee }}{|{\theta }_{j}^{\left(m\right)}|}^{\kappa }\right)}_{+}}{E{|{\theta }_{0}^{\left(m\right)}|}^{\kappa }}$

exists and is the extremal index of the sequence $\left(|{X}_{t}|\right)$ .

1) If $\gamma =0$ , then ${N}_{n}\stackrel{d}{\to }o$

2) If $\gamma >0$, then ${N}_{n}\stackrel{d}{\to }N\stackrel{d}{=}{\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{\epsilon }_{{P}_{i}{Q}_{ij}}$

where ${\sum }_{i=1}^{\infty }{\epsilon }_{{P}_{i}}$ is a Poisson process on ${ℝ}_{+}$ with ${P}_{i}$ describing the radial part of the points and ${\sum }_{j=1}^{\infty }{\epsilon }_{{Q}_{ij}}$ is a sequence of independent and identically distributed point processes with ${Q}_{ij}$ describing the spherical part and a joint distribution Q on $\left(\stackrel{˜}{M},B\left(\stackrel{˜}{M}\right)\right)$ , where Q is the weak limit of

$Q=\underset{m\to \infty }{\mathrm{lim}}\frac{E\left[{\left({|{\theta }_{0}^{\left(m\right)}|}^{\kappa }-\underset{j=1}{\overset{m}{\bigvee }}{|{\theta }_{j}^{\left(m\right)}|}^{\kappa }\right)}_{+}I\left({\sum }_{|t|\le m}{\epsilon }_{{\theta }_{t}^{\left(m\right)}}\in \cdot \right)\right]}{E{\left({|{\theta }_{0}^{\left(m\right)}|}^{\kappa }-\underset{j=1}{\overset{m}{\bigvee }}{|{\theta }_{j}^{\left(m\right)}|}^{\kappa }\right)}_{+}}$

Theorem 4. Let ( ${X}_{t}$ ) be a stationary GARCH (1, 1) process and assume that the joint regular variation and strong mixing conditions hold. For fixed $h\ge 0$, set ${X}_{t}=\left({X}_{t},{\sigma }_{t},\cdots ,{X}_{t+h},{\sigma }_{t+h}\right)$; then the conditions of Theorem 3 above are met and hence

${\stackrel{˜}{N}}_{n}=\underset{i=1}{\overset{{m}_{n}}{\sum }}{\stackrel{˜}{N}}_{{r}_{n},i},\text{ }i=1,2,\cdots ,{m}_{n};\text{ }{N}_{n}=\underset{t=1}{\overset{n}{\sum }}{\epsilon }_{{X}_{t}/{a}_{n}}\stackrel{d}{\to }N=\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{\epsilon }_{{P}_{i}{Q}_{ij}}$ (16)

where ${Q}_{ij}=\left({Q}_{ij}^{\left(0\right)},\cdots ,{Q}_{ij}^{\left(m\right)}\right)$ and ${P}_{i}$ are as previously defined.

We now consider the convergence of point processes which are products of random variables, which forms the basis of the results on the weak convergence of sample autocovariance and autocorrelation for stationary processes.

Theorem 5. Let ( ${X}_{t}$ ) be a strictly stationary sequence such that $\left({X}_{t}\right)=\left(\left({X}_{t},\cdots ,{X}_{t+m}\right)\right)$ satisfies the joint regular variation condition for some $m\ge 0$, and further assume that the conditions of Theorem 2 and Theorem 3 hold; then:

${\stackrel{^}{N}}_{n}={\left({\stackrel{^}{N}}_{n,h}\right)}_{h=0,\cdots ,m}={\left(\underset{t=1}{\overset{n}{\sum }}{\epsilon }_{{a}_{n}^{-1}{X}_{t}{X}_{t+h}}\right)}_{h=0,\cdots ,m}\stackrel{d}{\to }\stackrel{^}{N}={\left(\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{\epsilon }_{{P}_{i}^{2}{Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)}}\right)}_{h=0,\cdots ,m}$ (17)

where the points ${Q}_{ij}=\left({Q}_{ij}^{\left(0\right)},\cdots ,{Q}_{ij}^{\left(m\right)}\right)$ and ${P}_{i}$ are as previously defined; ${\stackrel{^}{N}}_{n}$ and $\stackrel{^}{N}$ are point processes on $\stackrel{¯}{ℝ}\setminus \left\{0\right\}$, meaning that points are not included in the point processes if ${X}_{t}{X}_{t+h}=0$ or ${Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)}=0$.

We study the weak limit behaviour of the sample autocovariance and sample autocorrelation of a stationary sequence ( ${X}_{t}$ ). Construct from this process the strictly stationary $\left(m+1\right)$-dimensional processes $\left({X}_{t}\right)=\left(\left({X}_{t},\cdots ,{X}_{t+m}\right)\right)$, $m\ge 0$. Define the sample autocovariance function

${\gamma }_{n,X}\left(h\right)={n}^{-1}\underset{t=1}{\overset{n-h}{\sum }}{X}_{t}{X}_{t+h},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\ge 0$ (18)

and the corresponding sample autocorrelation function

${\rho }_{n,X}\left(h\right)=\frac{{\gamma }_{n,X}\left(h\right)}{{\gamma }_{n,X}\left(0\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\ge 1$ (19)

Define the deterministic counterparts of the autocovariance and autocorrelation functions as follows

${\gamma }_{X}\left(h\right)=E{X}_{0}{X}_{h},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\ge 0$ (20)

${\rho }_{X}\left(h\right)=\frac{{\gamma }_{X}\left(h\right)}{{\gamma }_{X}\left(0\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\ge 1$ (21)
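Definitions (18) and (19) translate directly into code; note the absence of mean-centering, matching the paper's definitions (function names are ours):

```python
import numpy as np

def sample_acov(x, h):
    """Sample autocovariance (18): n^{-1} * sum_{t=1}^{n-h} x_t * x_{t+h},
    without mean-centering."""
    n = len(x)
    return float(np.dot(x[: n - h], x[h:])) / n

def sample_acf(x, h):
    """Sample autocorrelation (19): gamma_n(h) / gamma_n(0)."""
    return sample_acov(x, h) / sample_acov(x, 0)
```

For $h=0$ the two slices coincide with the full series, so `sample_acov(x, 0)` is ${n}^{-1}\sum {X}_{t}^{2}$ and the autocorrelation at lag 0 is identically 1.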

Theorem 6. Assume that ( ${X}_{t}$ ) is a strictly stationary sequence of random variables and that for a fixed $m\ge 0$ , ( ${X}_{t}$ ) satisfies the regular variation condition and ${N}_{n}={\sum }_{t=1}^{n}{\epsilon }_{{X}_{t}/{a}_{n}}\stackrel{d}{\to }N={\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{\epsilon }_{{P}_{i}{Q}_{ij}}$ where the points ${Q}_{ij}=\left({Q}_{ij}^{\left(0\right)},\cdots ,{Q}_{ij}^{\left(m\right)}\right)$ and ${P}_{i}$ are as previously defined.

1) If $\kappa \in \left(0,2\right)$ , then

${\left(n{a}_{n}^{-2}{\gamma }_{n,X}\left(h\right)\right)}_{h=0,\cdots ,m}\stackrel{d}{\to }{\left({V}_{h}\right)}_{h=0,\cdots ,m}$

${\left({\rho }_{n,X}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{\left(\frac{{V}_{h}}{{V}_{0}}\right)}_{h=1,\cdots ,m}$

where

${V}_{h}=\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{P}_{i}^{2}{Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h=0,1,\cdots ,m$

The vector $\left({V}_{0},\cdots ,{V}_{m}\right)$ is jointly $\kappa /2$-stable in ${ℝ}^{m+1}$.

2) If $\kappa \in \left(2,4\right)$ and for $h=0,\cdots ,m$

$\underset{ϵ\to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}\mathrm{sup}}Var\left({a}_{n}^{-2}\underset{t=1}{\overset{n-h}{\sum }}{X}_{t}{X}_{t+h}{I}_{\left\{|{X}_{t}{X}_{t+h}|\le {a}_{n}^{2}ϵ\right\}}\right)=0,$

then

${\left(n{a}_{n}^{-2}\left({\gamma }_{n,X}\left(h\right)-{\gamma }_{X}\left(h\right)\right)\right)}_{h=0,\cdots ,m}\stackrel{d}{\to }{\left({V}_{h}\right)}_{h=0,\cdots ,m}$

which implies that

${\left(n{a}_{n}^{-2}\left({\rho }_{n,X}\left(h\right)-{\rho }_{X}\left(h\right)\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{\gamma }_{X}^{-1}\left(0\right){\left({V}_{h}-{\rho }_{X}\left(h\right){V}_{0}\right)}_{h=1,\cdots ,m}$

4. Limit Theory of Change-Point Estimator

The following proposition is our main result on the weak convergence of the proposed change-point process ${D}_{n}^{k}\left(h\right)$, as specified in (6), for GARCH processes based on point process theory. In addition to the previously stated theorems, further theorems utilized in the proof of the proposition are given in the Appendix.

Proposition 2. Let ${\left({X}_{t}\right)}_{t\in ℕ}$ be a strictly stationary sequence of random variables, irrespective of the distribution of the initial value ${X}_{0}$. Specifically, let ${\left({X}_{t}\right)}_{t\in ℕ}$ be a GARCH (1, 1) process defined in the form of the stochastic recurrence Equation (3). For fixed $h\ge 0$, set ${X}_{t}=\left({X}_{t},\cdots ,{X}_{t+h}\right)$. Assume that the regular variation conditions hold and let $\left({a}_{n}\right)$ be a sequence of constants such that the strong mixing condition is satisfied; then ${N}_{n}={\sum }_{t=1}^{n}{\epsilon }_{{X}_{t}/{a}_{n}}\stackrel{d}{\to }N={\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{\epsilon }_{{P}_{i}{Q}_{ij}}$, where the points ${Q}_{ij}=\left({Q}_{ij}^{\left(0\right)},\cdots ,{Q}_{ij}^{\left(m\right)}\right)$ and ${P}_{i}$ are as defined in Theorem 3. Thus the conditions of Theorem 5 are met, and hence there exists a sequence of bounded constants $\left({C}_{n}\left(h\right)\right)$ converging in distribution to ${C}_{h}$ such that the following statements hold:

1) If $\kappa \in \left(0,2\right)$ , then

${\left({D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{C}_{h}{\left(\frac{{V}_{h}}{{V}_{0}}\right)}_{h=1,\cdots ,m}$

2) If $\kappa \in \left(2,4\right)$ and for $h=0,\cdots ,m$

$\underset{ϵ\to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}\mathrm{sup}}Var\left({a}_{n}^{-4}\underset{t=1}{\overset{n-h}{\sum }}{X}_{t}^{2}{X}_{t+h}^{2}{I}_{\left\{|{X}_{t}^{2}{X}_{t+h}^{2}|\le {a}_{n}^{4}ϵ\right\}}\right)=0,$

then

${\left(n{a}_{n}^{-4}{D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}=n{a}_{n}^{-4}{\left({C}_{n}\left(h\right){\rho }_{n,{X}^{2}}\left(h\right)-{\rho }_{{X}^{2}}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{\gamma }_{X}^{-1}\left(0\right){\left({C}_{h}{V}_{h}-{\rho }_{{X}^{2}}\left(h\right){C}_{0}{V}_{0}\right)}_{h=1,\cdots ,m}$

where

$\begin{array}{l}{C}_{h}=\frac{{V}_{0}}{{V}_{h}}\left(\frac{{V}_{0}^{k}{V}_{h}-{V}_{h}^{k}{V}_{0}}{{V}_{0}^{k}\left({V}_{0}-{V}_{0}^{k}\right)}\right)\\ {V}_{h}=\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{P}_{i}^{2}{Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)},\text{ }h=0,1,\cdots ,m\\ {V}_{h}^{k}=\underset{i=k+1}{\overset{\infty }{\sum }}\underset{j=k+1}{\overset{\infty }{\sum }}{P}_{i}^{2}{Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)},\text{ }h=0,1,\cdots ,m\end{array}$

Proof. Consider the GARCH (1, 1) model written as the stochastic recurrence Equation (3), ${\sigma }_{t}^{2}={\alpha }_{0}+\left({\alpha }_{1}{ϵ}_{t-1}^{2}+{\beta }_{1}\right){\sigma }_{t-1}^{2}$. The necessary and sufficient conditions for stationarity are ${\alpha }_{0}>0$ and $E\left[\mathrm{log}\left({\beta }_{1}+{\alpha }_{1}{ϵ}_{n}^{2}\right)\right]<0$, where the latter is implied, via Jensen's inequality, by ${\sum }_{i=1}^{p}{\alpha }_{i}+{\sum }_{j=1}^{q}{\beta }_{j}<1$.

If we assume that the sample ${X}_{1},\cdots ,{X}_{n}$ comes from a stationary model, then the initial value ${X}_{0}$ also has the stationary distribution. More generally, the distribution of ${X}_{t}$ is asymptotically stationary whatever the distribution of ${X}_{0}$, provided the latter is independent of ${\left({ϵ}_{t}\right)}_{t=1,2,\cdots }$ and the stationarity conditions hold. To show this, consider two sequences ${X}_{t}{\left({X}_{0}\right)}_{t=0,1,2,\cdots }$ and ${X}_{t}{\left(Z\right)}_{t=0,1,2,\cdots }$ generated by the same stochastic recurrence Equation (2) but with initial conditions ${X}_{0}$ and Z, where both vectors are independent of the future values ${\left({A}_{t},{B}_{t}\right)}_{t=1,2,\cdots }$. Further assume that ${X}_{0}$ has the stationary distribution. By iterating the stochastic recurrence Equation (2) we have

${Y}_{t}={B}_{t}+\underset{i=1}{\overset{\infty }{\sum }}{A}_{t}\cdots {A}_{t-i+1}{B}_{t-i},\text{\hspace{0.17em}}\text{\hspace{0.17em}}t=1,2,\cdots$

Thus for any initial values Z we have the following recursion

${X}_{t}\left(Z\right)={A}_{t}\cdots {A}_{1}Z+\underset{j=1}{\overset{t}{\sum }}{A}_{t}\cdots {A}_{j+1}{B}_{j},\text{\hspace{0.17em}}\text{\hspace{0.17em}}t=1,2,\cdots$

Then for any $\epsilon >0$ , note that for the GARCH (1, 1) model (3) the product of random coefficients satisfies ${A}_{n}\cdots {A}_{1}={A}_{n}\underset{t=1}{\overset{n-1}{\prod }}\left({\beta }_{1}+{\alpha }_{1}{ϵ}_{t}^{2}\right)$ , so that the top Lyapunov exponent $\stackrel{˜}{\gamma }$ is negative under the stationarity conditions and

$\begin{array}{c}E{|{X}_{t}\left({X}_{0}\right)-{X}_{t}\left(Z\right)|}^{\epsilon }\le E{|{A}_{t}\cdots {A}_{1}\left({X}_{0}-Z\right)|}^{\epsilon }\\ =E{|{A}_{1}\left({X}_{0}-Z\right)|}^{\epsilon }{\left(E{|{\beta }_{1}+{\alpha }_{1}{ϵ}_{n}^{2}|}^{\epsilon }\right)}^{t-1}\\ \le E{‖{A}_{1}‖}^{\epsilon }E{|{X}_{0}-Z|}^{\epsilon }{\left(E{|{\beta }_{1}+{\alpha }_{1}{ϵ}_{n}^{2}|}^{\epsilon }\right)}^{t-1}\end{array}$ (22)

If $E{|ϵ|}^{2\epsilon }<\infty$ , $E{|{X}_{0}|}^{\epsilon }<\infty$ and $E{|Z|}^{\epsilon }<\infty$ , then the right-hand side is finite. In addition, given the stationarity conditions stated previously, $E{|{\beta }_{1}+{\alpha }_{1}{ϵ}_{n}^{2}|}^{\epsilon }<1$ for some sufficiently small ε. Thus the left-hand side of (22) decays to zero as $t\to \infty$ , and we conclude that ${\left({X}_{t}\right)}_{t\in ℕ}$ is asymptotically stationary irrespective of the distribution of the initial value ${X}_{0}$ .
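The contraction argument behind (22) can be visualized with a short simulation: two volatility recursions driven by the same innovations but started from very different initial values coalesce geometrically fast. A minimal sketch with illustrative parameter values:

```python
import numpy as np

# Two recursions sigma_t^2 = alpha0 + (alpha1*eps_{t-1}^2 + beta1)*sigma_{t-1}^2
# driven by the SAME innovations but started from different initial values.
rng = np.random.default_rng(1)
alpha0, alpha1, beta1 = 0.1, 0.1, 0.85   # illustrative, stationary parameters
eps2 = rng.standard_normal(500) ** 2

s2_a, s2_b = 5.0, 0.01                   # two very different initial variances
for e2 in eps2:
    A = alpha1 * e2 + beta1              # random coefficient A_t
    s2_a = alpha0 + A * s2_a
    s2_b = alpha0 + A * s2_b

# the gap equals prod(A_t) * |5.0 - 0.01|, which has contracted to near zero
print(abs(s2_a - s2_b) < 1e-6)           # True
```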

Now consider the sample autocorrelation function as defined in (19); then the following statements hold:

$\underset{t=1}{\overset{n-h}{\sum }}{X}_{t}^{2}{X}_{t+h}^{2}=\underset{t=1}{\overset{k}{\sum }}{X}_{t}^{2}{X}_{t+h}^{2}+\underset{t=k+1}{\overset{n-h}{\sum }}{X}_{t}^{2}{X}_{t+h}^{2}$ (23)

$\underset{t=1}{\overset{n-h}{\sum }}{X}_{t}^{4}=\underset{t=1}{\overset{k}{\sum }}{X}_{t}^{4}+\underset{t=k+1}{\overset{n-h}{\sum }}{X}_{t}^{4}$ (24)

From (23) and (24) it can be asserted that there exist constants ${c}_{k,{X}^{2}}\left(h\right)$ and ${c}_{n-k,{X}^{2}}\left(h\right)$ such that the autocorrelation functions ${\rho }_{k,{X}^{2}}\left(h\right)$ and ${\rho }_{n-k,{X}^{2}}\left(h\right)$ can be expressed in terms of the autocorrelation function ${\rho }_{n,{X}^{2}}\left(h\right)$ as follows:

${\rho }_{k,{X}^{2}}\left(h\right)={c}_{k,{X}^{2}}\left(h\right){\rho }_{n,{X}^{2}}\left(h\right)$ (25)

and

${\rho }_{n-k,{X}^{2}}\left(h\right)={c}_{n-k,{X}^{2}}\left(h\right){\rho }_{n,{X}^{2}}\left(h\right)$ (26)

The change-point process (6) can be expressed in terms of (25) and (26) as

${D}_{n}^{k}\left(h\right)={\rho }_{k,{X}^{2}}\left(h\right)-{\rho }_{n-k,{X}^{2}}\left(h\right)=\left({c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)\right)\left({\rho }_{n,{X}^{2}}\left(h\right)\right)$
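For intuition, the change-point process can be computed directly from data. A minimal sketch, assuming the raw (uncentered) form of the sample autocorrelation of the squares that appears in (37); the helper names `rho_sq` and `D` are ours:

```python
import numpy as np

def rho_sq(x, h):
    """Sample autocorrelation of the squared series at lag h, in the raw
    (uncentered) form sum X_t^2 X_{t+h}^2 / sum X_t^4 used in (37)."""
    x2 = np.asarray(x, dtype=float) ** 2
    return np.dot(x2[:len(x2) - h], x2[h:]) / np.dot(x2, x2)

def D(x, k, h):
    """Change-point process D_n^k(h): the lag-h autocorrelation of the squares
    before the candidate change point k minus the one after it."""
    return rho_sq(x[:k], h) - rho_sq(x[k:], h)

# In practice one scans k and flags the split that maximises |D(x, k, h)|.
rng = np.random.default_rng(0)
x = rng.standard_normal(400)
print(abs(D(x, 200, 1)) <= 2.0)   # True: each autocorrelation lies in [0, 1]
```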

The weak limits of the process ${D}_{n}^{k}\left(h\right)$ are characterized in terms of the limiting point processes for the sample autocovariance and autocorrelation functions through an application of the Continuous Mapping Theorem 12. To complete the proof we prove separately the convergence of ${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)$ and of ${\rho }_{n,{X}^{2}}\left(h\right)$ , and then apply Theorem 12.

Let $\delta >0$ and ${X}_{t}=\left({x}_{t,X}^{\left(0\right)},{x}_{t,\sigma }^{\left(0\right)},\cdots ,{x}_{t,X}^{\left(n\right)},{x}_{t,\sigma }^{\left(n\right)}\right)\in {\stackrel{¯}{ℝ}}^{n+1}\setminus \left\{0\right\}$ . In order to prove the results, we define several mappings

${T}_{h,\delta ,X}:\mathcal{M}\to \stackrel{¯}{ℝ}$

as follows

${T}_{0,\delta ,X}\left({N}_{n}\right)=\underset{t=1}{\overset{\infty }{\sum }}{n}_{t}{\left({x}_{t,X}^{\left(0\right)}\right)}^{2}{I}_{\left\{|{x}_{t,X}^{\left(0\right)}|>\delta \right\}}$

${T}_{1,\delta ,X}\left({N}_{n}\right)=\underset{t=1}{\overset{\infty }{\sum }}{n}_{t}{\left({x}_{t,X}^{\left(1\right)}\right)}^{2}{I}_{\left\{|{x}_{t,X}^{\left(0\right)}|>\delta \right\}}$

${T}_{h,\delta ,X}\left({N}_{n}\right)=\underset{t=1}{\overset{\infty }{\sum }}{n}_{t}\left({x}_{t,X}^{\left(0\right)}\right)\left({x}_{t,X}^{\left(h-1\right)}\right){I}_{\left\{|{x}_{t,X}^{\left(0\right)}|>\delta \right\}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}h\in \left[2,n\right]$

The set $\left\{{X}_{t}\in \stackrel{¯}{ℝ}\setminus \left\{0\right\}:|{x}^{\left(h\right)}|>\delta \right\}$ is bounded for any $h\ge 0$ and thus the mappings are continuous with respect to the limiting point process N. Consequently, by the Continuous Mapping Theorem 12, we have that

$T\left({N}_{n}\right)\stackrel{d}{\to }T\left(N\right)$

where

$T\left(N\right)=\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{P}_{i}^{2}{Q}_{ij}^{\left(0\right)}{Q}_{ij}^{\left(h\right)}{I}_{\left\{|{P}_{i}{Q}_{ij}^{\left(0\right)}|>\delta \right\}}$

The proof of the convergence of ${\rho }_{n,{X}^{2}}\left(h\right)$ is split into the cases $\kappa \in \left(0,2\right)$ and $\kappa \in \left(2,4\right)$ .

For the case $\kappa \in \left(0,2\right)$ , the point process results of Theorem 3 hold and a direct application of Theorem 5 yields:

${\left(n{a}_{n}^{-4}{\gamma }_{n,{X}^{2}}\left(h\right)\right)}_{h=0,\cdots ,m}\stackrel{d}{\to }{\left({V}_{h}\right)}_{h=0,\cdots ,m}$

${\left({\rho }_{n,{X}^{2}}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{\left(\frac{{V}_{h}}{{V}_{0}}\right)}_{h=1,\cdots ,m}$

For $\kappa \in \left(2,4\right)$ we commence with the $\left\{{\sigma }_{t}^{2}\right\}$ sequence and establish the convergence of ${\gamma }_{n,{\sigma }^{2}}\left(0\right)$ . We rewrite ${\gamma }_{n,{\sigma }^{2}}\left(0\right)$ using the recurrence structure of Equation (3), so that ${ϵ}_{t}^{2}={\alpha }_{1}^{-1}\left(\left({\alpha }_{1}{ϵ}_{t}^{2}+{\beta }_{1}\right)-{\beta }_{1}\right)={\alpha }_{1}^{-1}\left({A}_{t+1}-{\beta }_{1}\right)$ and ${\sigma }_{t}^{2}={\alpha }_{0}+{A}_{t}{\sigma }_{t-1}^{2}\approx {A}_{t}{\sigma }_{t-1}^{2}=\left({\alpha }_{1}\left({ϵ}_{t-1}^{2}-1\right)+\left({\alpha }_{1}+{\beta }_{1}\right)\right){\sigma }_{t-1}^{2}$ .

Now using this representation yields:

$\begin{array}{l}n{a}_{n}^{-4}\left({\gamma }_{n,{\sigma }^{2}}\left(0\right)-{\gamma }_{{\sigma }^{2}}\left(0\right)\right)\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\alpha }_{1}^{2}{\sigma }_{t-1}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}+2{\alpha }_{1}\left({\alpha }_{1}+{\beta }_{1}\right)\left({ϵ}_{t}^{2}-1\right){\sigma }_{t-1}^{4}+{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}{\sigma }_{t-1}^{4}-E\left({\sigma }^{4}\right)\right]\\ ={\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t-1}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]+{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t-1}^{4}-E\left({\sigma }^{4}\right)\right]+Op\left(1\right)\end{array}$

$\begin{array}{l}\left[1-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}\right]n{a}_{n}^{-4}\left({\gamma }_{n,{\sigma }^{2}}\left(0\right)-{\gamma }_{{\sigma }^{2}}\left(0\right)\right)\\ ={\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t-1}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]+Op\left(1\right)\\ ={\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+{\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}+Op\left(1\right)\\ =I+II+Op\left(1\right)\end{array}$ (27)

Assuming that the condition $E\left({ϵ}_{t}^{4}\right)<\infty$ is satisfied, we first show that II converges in probability to zero by applying Karamata’s theorem on regular variation and the tail behavior of the stationary distribution, which yields the following asymptotic equivalence.

$\begin{array}{c}Var\left(II\right)=Var\left[{\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t+1}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\right]\\ \le {a}_{n}^{-8}\underset{t=1}{\overset{n}{\sum }}E\left({\left({\sigma }_{t}^{4}\right)}^{2}{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\right)E\left({\left({ϵ}_{t+1}^{2}-1\right)}^{2}\right)\\ ~const\text{ }{\delta }^{8-\kappa }\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}n\to \infty \\ \to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}\delta \to 0\end{array}$ (28)
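The Karamata-type asymptotics used in (28) can be illustrated exactly for a Pareto tail, where the truncated eighth moment has a closed form and is regularly varying of index $8-\kappa$ . A sketch under the assumption of a Pareto(κ) density on $\left[1,\infty \right)$ (not the GARCH stationary distribution itself, only a stand-in with the same tail index):

```python
# For a Pareto(kappa) density f(x) = kappa * x**(-kappa - 1) on [1, inf),
# the truncated moment E[X^8 1{X <= u}] = kappa*(u**(8-kappa) - 1)/(8 - kappa),
# so E[X^8 1{X <= u}] / u**(8-kappa) -> kappa/(8 - kappa), as Karamata predicts.
kappa = 3.0  # a tail index in (2, 4), the case treated in the proof

def trunc_moment8(u, kappa=kappa):
    return kappa * (u ** (8 - kappa) - 1.0) / (8 - kappa)

for u in (10.0, 100.0, 1000.0):
    print(round(trunc_moment8(u) / u ** (8 - kappa), 4))  # approaches 0.6
```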

Now examining I we have

$\begin{array}{l}I={\alpha }_{1}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+Op\left(1\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\alpha }_{1}^{2}{\sigma }_{t}^{4}{\left\{\left({A}_{t+1}-\left({\alpha }_{1}+{\beta }_{1}\right)\right){\alpha }_{1}^{-1}\right\}}^{2}\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+Op\left(1\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}{\sigma }_{t+1}^{4}{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}{\sigma }_{t}^{4}{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+Op\left(1\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\stackrel{d}{\to }{T}_{1,\delta ,\sigma }\left({N}^{\left(2\right)}\right)-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}{T}_{0,\delta ,\sigma }\left({N}^{\left(2\right)}\right)\simeq S\left(\delta ,\infty \right)\end{array}$ (29)

We utilize the argument given in Theorem 12 where $S\left(\delta ,\infty \right)\stackrel{d}{\to }{V}_{0}^{*}$ as $\delta \to 0$ . Therefore, we finally obtain that:

$n{a}_{n}^{-4}\left({\gamma }_{n,{\sigma }^{2}}\left(0\right)-{\gamma }_{{\sigma }^{2}}\left(0\right)\right)\stackrel{d}{\to }\frac{1}{1-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}}{V}_{0}^{*}\simeq {V}_{0}$ (30)

In the presence of a change-point k as hypothesized in (4), the equality $E\left({A}_{t}\right)={\alpha }_{1}+{\beta }_{1}$ no longer holds for all t; rather

$E\left({A}_{t}\right)=\left\{\begin{array}{ll}{\alpha }_{1}+{\beta }_{1}\hfill & \text{for}\text{\hspace{0.17em}}1\le t\le k\\ E\left(A\right)\hfill & \text{for}\text{\hspace{0.17em}}k<t\le n\end{array}$ (31)

where $E\left(A\right)\ne {\alpha }_{1}+{\beta }_{1}$ denotes the mean of ${A}_{t}$ after the change-point.

Thus the convergence of ${\gamma }_{k,{\sigma }^{2}}\left(0\right)$ and ${\gamma }_{n-k,{\sigma }^{2}}\left(0\right)$ are respectively given by

$k{a}_{k}^{-4}\left({\gamma }_{k,{\sigma }^{2}}\left(0\right)-{\gamma }_{{\sigma }^{2}}\left(0\right)\right)\stackrel{d}{\to }\frac{1}{1-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}}{V}_{0}^{k*}\simeq {V}_{0}^{k}$ (32)

$\left(n-k\right){a}_{n-k}^{-4}\left({\gamma }_{n-k,{\sigma }^{2}}\left(0\right)-{\gamma }_{{\sigma }^{2}}\left(0\right)\right)\stackrel{d}{\to }\frac{1}{1-{\left(E\left(A\right)\right)}^{2}}{V}_{0}^{\left(n-k\right)*}\simeq {V}_{0}^{n-k}$ (33)

Following (31), (32) and (33) it is concluded that ${V}_{0}^{k}\ne {V}_{0}^{n-k}$ .

Convergence of ${\gamma }_{n,{\sigma }^{2}}\left(1\right)$ is determined in a similar manner where

$\begin{array}{l}n{a}_{n}^{-4}\left({\gamma }_{n,{\sigma }^{2}}\left(1\right)-{\gamma }_{{\sigma }^{2}}\left(1\right)\right)\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{2}{\sigma }_{t+1}^{2}-E\left({\sigma }_{0}^{2}{\sigma }_{1}^{2}\right)\right]\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{2}{\sigma }_{t+1}^{2}-{\sigma }_{t}^{4}EA\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{2}{\sigma }_{t+1}^{2}-{\sigma }_{t}^{4}EA\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}+Op\left(1\right)\\ ={T}_{2,\delta ,\sigma }\left({N}_{n}^{\left(2\right)}\right)-EA{T}_{1,\delta ,\sigma }\left({N}_{n}^{\left(2\right)}\right)\stackrel{d}{\to }{V}_{1}\end{array}$ (34)

Consequently for arbitrary lags we have

$n{a}_{n}^{-4}\left({\gamma }_{n,{\sigma }^{2}}\left(h\right)-{\gamma }_{{\sigma }^{2}}\left(h\right)\right)\stackrel{d}{\to }{V}_{h}$

In the presence of a change-point k the convergence of ${\gamma }_{k,{\sigma }^{2}}\left(1\right)$ and ${\gamma }_{n-k,{\sigma }^{2}}\left(1\right)$ are respectively given by

$k{a}_{k}^{-4}\left({\gamma }_{k,{\sigma }^{2}}\left(1\right)-{\gamma }_{{\sigma }^{2}}\left(1\right)\right)\stackrel{d}{\to }{V}_{1}^{k}$

$\left(n-k\right){a}_{n-k}^{-4}\left({\gamma }_{n-k,{\sigma }^{2}}\left(1\right)-{\gamma }_{{\sigma }^{2}}\left(1\right)\right)\stackrel{d}{\to }{V}_{1}^{n-k}$

Now we consider the $\left\{{X}_{t}^{2}\right\}$ sequence and establish the convergence of ${\gamma }_{n,{X}^{2}}\left(0\right)$ as follows:

$\begin{array}{l}n{a}_{n}^{-4}\left({\gamma }_{n,{X}^{2}}\left(0\right)-{\gamma }_{{X}^{2}}\left(0\right)\right)\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{4}-E\left({X}_{0}^{4}\right)\right]\\ =2{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+2{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\\ =III+IV\end{array}$ (35)

Equation (35) follows directly from Equation (27). In a similar way to Equation (28), $\underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}}\mathrm{sup}Var\left(IV\right)=0$ .

Now examining III and following the results obtained in Equation (29) we have that III converges as follows

$\begin{array}{l}2{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{\left({ϵ}_{t}^{2}-1\right)}^{2}\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\stackrel{d}{\to }{T}_{1,\delta ,\sigma }\left({N}^{\left(2\right)}\right)-{\left({\alpha }_{1}+{\beta }_{1}\right)}^{2}{T}_{0,\delta ,\sigma }\left({N}^{\left(2\right)}\right)\\ \simeq S\left(\delta ,\infty \right)\stackrel{d}{\to }{V}_{0}\end{array}$

Thus we have that

$n{a}_{n}^{-4}\left({\gamma }_{n,{X}^{2}}\left(0\right)-{\gamma }_{{X}^{2}}\left(0\right)\right)\stackrel{d}{\to }{V}_{0}$

Similarly it can be shown that the convergence of ${\gamma }_{k,{X}^{2}}\left(0\right)$ and ${\gamma }_{n-k,{X}^{2}}\left(0\right)$ are respectively given by

$k{a}_{k}^{-4}\left({\gamma }_{k,{X}^{2}}\left(0\right)-{\gamma }_{{X}^{2}}\left(0\right)\right)\stackrel{d}{\to }{V}_{0}^{k}$

$\left(n-k\right){a}_{n-k}^{-4}\left({\gamma }_{n-k,{X}^{2}}\left(0\right)-{\gamma }_{{X}^{2}}\left(0\right)\right)\stackrel{d}{\to }{V}_{0}^{n-k}$

Next we consider the $\left\{{X}_{t}^{2}\right\}$ sequence and establish the convergence of ${\gamma }_{n,{X}^{2}}\left(1\right)$ as follows:

$\begin{array}{l}n{a}_{n}^{-4}\left({\gamma }_{n,{X}^{2}}\left(1\right)-{\gamma }_{{X}^{2}}\left(1\right)\right)\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{X}_{t+1}^{2}-E\left({X}_{0}^{2}{X}_{1}^{2}\right)\right]\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t+1}^{2}\left({ϵ}_{t+1}^{2}-E\left(ϵ\right)\right)\right]+{a}_{n}^{-4}E\left(ϵ\right)\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t+1}^{2}-{\sigma }_{1}^{2}E\left({X}_{0}^{2}\right)\right]\\ =V+VI\end{array}$

Now examining VI we have

$\begin{array}{c}VI={a}_{n}^{-4}E\left(ϵ\right)\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t+1}^{2}-{\sigma }_{1}^{2}E\left({X}_{0}^{2}\right)\right]\\ ={a}_{n}^{-4}E\left(ϵ\right)\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}\left({\sigma }_{t+1}^{2}-{\sigma }_{t}^{2}{A}_{t+1}\right)\right]-E\left[{X}_{0}^{2}\left({\sigma }_{1}^{2}-{\sigma }_{0}^{2}{A}_{1}\right)\right]\end{array}$

$\begin{array}{c}Var\left(VI\right)={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}Var\left[{X}_{t}^{2}\left({\sigma }_{t+1}^{2}-{\sigma }_{t}^{2}{A}_{t+1}\right)\right]-E\left[{X}_{0}^{2}\left({\sigma }_{1}^{2}-{\sigma }_{0}^{2}{A}_{1}\right)\right]\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\underset{s=1}{\overset{n}{\sum }}Cov\left[{X}_{t}^{2}\left({\sigma }_{t+1}^{2}-{\sigma }_{t}^{2}{A}_{t+1}\right),{X}_{s}^{2}\left({\sigma }_{s+1}^{2}-{\sigma }_{s}^{2}{A}_{s+1}\right)\right]\\ \le const\text{ }n{a}_{n}^{-8}\underset{h=1}{\overset{n}{\sum }}{q}^{h}\to 0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{as}\text{\hspace{0.17em}}n\to \infty \end{array}$

where $q\in \left(0,1\right)$ is a constant and since $\left({X}_{t},{\sigma }_{t}\right)$ is strongly mixing with geometric rate, thus there exist a $\delta >0$ and a constant K such that $E{\left[{X}_{0}^{2}\left({\sigma }_{1}^{2}-{\sigma }_{0}^{2}{A}_{1}\right)\right]}^{2+\delta }<\infty$ and $Cov\left[{X}_{t}^{2}\left({\sigma }_{t+1}^{2}-{\sigma }_{t}^{2}{A}_{t+1}\right),{X}_{s}^{2}\left({\sigma }_{s+1}^{2}-{\sigma }_{s}^{2}{A}_{s+1}\right)\right]\le K{q}^{|t-s|}$ .
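The geometric covariance bound makes the double sum above of order n only. A quick numerical sketch of $\underset{t,s}{\sum }{q}^{|t-s|}$ (the constant K is dropped and q = 0.8 is illustrative):

```python
import numpy as np

def double_sum(n, q):
    """Sum of q**|t-s| over 1 <= t, s <= n, the quantity bounding Var(VI)."""
    idx = np.arange(n)
    return float(np.sum(q ** np.abs(idx[:, None] - idx[None, :])))

q = 0.8
for n in (100, 200, 400):
    # per-observation value grows towards (1 + q)/(1 - q) = 9, so the
    # double sum is O(n) and n * a_n^{-8} * O(n) still vanishes
    print(double_sum(n, q) / n)
```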

Now examining V we have

$\begin{array}{c}V={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t}^{2}\left({ϵ}_{t}^{2}-E\left(ϵ\right)\right)\right]\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t}^{2}{A}_{t+1}-{X}_{t}^{2}{\sigma }_{t}^{2}EA+{X}_{t}^{2}{\sigma }_{t}^{2}EA-E\left({X}_{0}^{2}{\sigma }_{0}^{2}{A}_{1}\right)\right]\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t}^{2}\left({A}_{t+1}-EA\right)\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}+{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t}^{2}\left({A}_{t+1}-EA\right)\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\end{array}$

$\begin{array}{l}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+EA{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}\left({ϵ}_{t}^{2}-E\left(ϵ\right)\right)\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+EA{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}\left({ϵ}_{t}^{2}-E\left(ϵ\right)\right)\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+EAE\left(ϵ\right){a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}-E\left({\sigma }^{4}\right)\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+EAE\left(ϵ\right){a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}-E\left({\sigma }^{4}\right)\right]{I}_{\left\{{\sigma }_{t}\le {a}_{n}\delta \right\}}\\ =VII+VIII+IX+X+XI+XII\end{array}$ (36)

By applying Karamata’s theorem to (36), we obtain

$\begin{array}{l}\underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}}\mathrm{sup}Var\left(VIII\right)=0\\ \underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}}\mathrm{sup}Var\left(IX\right)=0\\ \underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}}\mathrm{sup}Var\left(X\right)=0\\ \underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}}\mathrm{sup}Var\left(XII\right)=0\end{array}$

Examining VII we have

$\begin{array}{c}VII={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{X}_{t}^{2}{\sigma }_{t}^{2}\left({A}_{t+1}-EA\right)\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\\ ={a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{2}{\sigma }_{t+1}^{2}{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\right]-EA{a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}\right]\\ \stackrel{d}{\to }{T}_{2,\delta ,\sigma }\left({N}^{\left(2\right)}\right)-EA{T}_{1,\delta ,\sigma }\left({N}^{\left(2\right)}\right)\\ \stackrel{d}{\to }{V}_{1}\end{array}$

Since $E\left(ϵ\right)=0$ , the term XI vanishes:

$XI=EAE\left(ϵ\right){a}_{n}^{-4}\underset{t=1}{\overset{n}{\sum }}\left[{\sigma }_{t}^{4}-E\left({\sigma }^{4}\right)\right]{I}_{\left\{{\sigma }_{t}>{a}_{n}\delta \right\}}=0$

Thus we have that

$n{a}_{n}^{-4}\left({\gamma }_{n,{X}^{2}}\left(1\right)-{\gamma }_{{X}^{2}}\left(1\right)\right)\stackrel{d}{\to }{V}_{1}$

By extending to arbitrary lags $h=0,\cdots ,n$ the convergence of ${\gamma }_{n,{X}^{2}}\left(h\right)$ is given by

$n{a}_{n}^{-4}\left({\gamma }_{n,{X}^{2}}\left(h\right)-{\gamma }_{{X}^{2}}\left(h\right)\right)\stackrel{d}{\to }{V}_{h}$

Consequently the convergence of ${\rho }_{n,{X}^{2}}\left(h\right)$ is given by

$n{a}_{n}^{-4}\left({\rho }_{n,{X}^{2}}\left(h\right)-{\rho }_{{X}^{2}}\left(h\right)\right)\stackrel{d}{\to }{\gamma }_{{X}^{2}}^{-1}\left(0\right)\left({V}_{h}-{\rho }_{{X}^{2}}\left(h\right){V}_{0}\right)$

We have now examined the limiting behavior of ${\rho }_{n,{X}^{2}}\left(h\right)$ for two cases. In the first case, when $\kappa \in \left(0,2\right)$ , the variance of ${X}_{n}$ is infinite and thus ${\rho }_{n,{X}^{2}}\left(h\right)$ has a random limit without any normalization. When $\kappa \in \left(2,4\right)$ , the process has a finite variance but an infinite fourth moment, and $n{a}_{n}^{-4}\left({\rho }_{n,{X}^{2}}\left(h\right)-{\rho }_{{X}^{2}}\left(h\right)\right)$ converges to a $\frac{\kappa }{2}$ -stable distribution. By Theorem 8, convergence of ${\rho }_{n,{X}^{2}}\left(h\right)$ implies that the sequence is bounded, with $|{\rho }_{n,{X}^{2}}\left(h\right)|\le 1$ .

We now examine the convergence of ${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)$ . Considering ${D}_{n}^{k}\left(h\right)$ , we can express ${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)$ as follows:

${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)=\frac{{\rho }_{k,{X}^{2}}\left(h\right)-{\rho }_{n-k,{X}^{2}}\left(h\right)}{{\rho }_{n,{X}^{2}}\left(h\right)}$

By the Bolzano-Weierstrass Theorem 9, a bounded sequence always has a convergent subsequence. This is reinforced by the invariance property of subsequences in Theorem 10, which states that if ${\rho }_{n,{X}^{2}}\left(h\right)$ converges, then every subsequence, say ${\rho }_{k,{X}^{2}}\left(h\right)$ and ${\rho }_{n-k,{X}^{2}}\left(h\right)$ , converges to the same limit. By the linearity rule for sequences in Theorem 11, ${\rho }_{k,{X}^{2}}\left(h\right)-{\rho }_{n-k,{X}^{2}}\left(h\right)$ converges, and since every convergent sequence is bounded, ${\rho }_{k,{X}^{2}}\left(h\right)$ and ${\rho }_{n-k,{X}^{2}}\left(h\right)$ are bounded with $|{\rho }_{k,{X}^{2}}\left(h\right)|\le 1$ and $|{\rho }_{n-k,{X}^{2}}\left(h\right)|\le 1$ ; thus their difference is also bounded, with $|{\rho }_{k,{X}^{2}}\left(h\right)-{\rho }_{n-k,{X}^{2}}\left(h\right)|\le 2$ . Further, assuming that we consider only significant sample autocorrelation coefficients with $|{\rho }_{n,{X}^{2}}\left(h\right)|\ge 0.05$ , the quotient ${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)$ is also bounded. Applying the quotient property of subsequences, ${c}_{k,{X}^{2}}\left(h\right)-{c}_{n-k,{X}^{2}}\left(h\right)$ is also convergent.

Consider the proposed change-point process ${D}_{n}^{k}\left(h\right)$ as defined in (6), then we can derive the limit of ${C}_{n}$ as follows:

$\begin{array}{c}{\left({D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}\approx {\rho }_{k,{X}^{2}}\left(h\right)-{\rho }_{n-k,{X}^{2}}\left(h\right)\\ =\frac{{\gamma }_{k,{X}^{2}}\left(h\right)}{{\gamma }_{k,{X}^{2}}\left(0\right)}-\frac{{\gamma }_{n-k,{X}^{2}}\left(h\right)}{{\gamma }_{n-k,{X}^{2}}\left(0\right)}\\ =\frac{{\sum }_{t=1}^{k}{X}_{t}^{2}{X}_{t+h}^{2}}{{\sum }_{t=1}^{k}{X}_{t}^{4}}-\frac{{\sum }_{t=k+1}^{n}{X}_{t}^{2}{X}_{t+h}^{2}}{{\sum }_{t=k+1}^{n}{X}_{t}^{4}}\\ =\frac{{\sum }_{t=k+1}^{n}{X}_{t}^{4}{\sum }_{t=1}^{n}{X}_{t}^{2}{X}_{t+h}^{2}-{\sum }_{t=k+1}^{n}{X}_{t}^{2}{X}_{t+h}^{2}{\sum }_{t=1}^{n}{X}_{t}^{4}}{{\sum }_{t=k+1}^{n}{X}_{t}^{4}\left({\sum }_{t=1}^{n}{X}_{t}^{4}-{\sum }_{t=k+1}^{n}{X}_{t}^{4}\right)}\end{array}$ (37)
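The algebraic rearrangement in (37) is exact and can be verified numerically for any sample: the split-sample difference of the raw autocorrelations equals the combined-sum form. A sketch with simulated data (the helper names and index conventions are ours; sums stop at n − h so every term is well defined):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(300)
n, k, h = len(x), 120, 2
x2 = x ** 2

def s_cross(lo, hi):   # sum over t in [lo, hi) of X_t^2 * X_{t+h}^2
    return float(np.dot(x2[lo:hi], x2[lo + h:hi + h]))

def s_four(lo, hi):    # sum over t in [lo, hi) of X_t^4
    return float(np.sum(x2[lo:hi] ** 2))

# left-hand side: difference of the split-sample raw autocorrelations
lhs = s_cross(0, k) / s_four(0, k) - s_cross(k, n - h) / s_four(k, n - h)

# right-hand side: the combined-sum form of (37)
B, b = s_four(k, n - h), s_cross(k, n - h)
S4, Sx = s_four(0, n - h), s_cross(0, n - h)
rhs = (B * Sx - b * S4) / (B * (S4 - B))

print(abs(lhs - rhs) < 1e-12)   # True: the rearrangement is an identity
```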

Thus, applying Theorem 5 to (37), we have

${\left({D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }\frac{{V}_{0}^{k}{V}_{h}-{V}_{h}^{k}{V}_{0}}{{V}_{0}^{k}\left({V}_{0}-{V}_{0}^{k}\right)}=\frac{{V}_{0}}{{V}_{h}}\left(\frac{{V}_{0}^{k}{V}_{h}-{V}_{h}^{k}{V}_{0}}{{V}_{0}^{k}\left({V}_{0}-{V}_{0}^{k}\right)}\right)\frac{{V}_{h}}{{V}_{0}}$ (38)

From (38) above, the sequence ${C}_{n}$ converges in distribution to ${C}_{h}$ as follows

${C}_{n}\stackrel{d}{\to }\frac{{V}_{0}}{{V}_{h}}\left(\frac{{V}_{0}^{k}{V}_{h}-{V}_{h}^{k}{V}_{0}}{{V}_{0}^{k}\left({V}_{0}-{V}_{0}^{k}\right)}\right)={C}_{h}$

By application of the Continuous Mapping Theorem 12, we obtain the limiting behavior of the proposed change-point process ${D}_{n}^{k}\left(h\right)$ for the cases $\kappa \in \left(0,2\right)$ and $\kappa \in \left(2,4\right)$ as follows.

For $\kappa \in \left(0,2\right)$ , by application of Theorem 5 (i):

${\left({D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}={\left({C}_{n}\left(h\right){\rho }_{n,{X}^{2}}\left(h\right)\right)}_{h=1,\cdots ,m}\stackrel{d}{\to }{C}_{h}{\left(\frac{{V}_{h}}{{V}_{0}}\right)}_{h=1,\cdots ,m}$

For $\kappa \in \left(2,4\right)$ , by application of Theorem 5 (ii):

$\begin{array}{l}{\left(n{a}_{n}^{-4}{D}_{n}^{k}\left(h\right)\right)}_{h=1,\cdots ,m}=n{a}_{n}^{-4}{\left({C}_{n}{\rho }_{n,{X}^{2}}\left(h\right)-{\rho }_{{X}^{2}}\left(h\right)\right)}_{h=1,\cdots ,m}\\ \stackrel{d}{\to }{\gamma }_{{X}^{2}}^{-1}\left(0\right){\left({C}_{h}{V}_{h}-{\rho }_{{X}^{2}}\left(h\right){C}_{0}{V}_{0}\right)}_{h=1,\cdots ,m}\end{array}$

which completes the proof.

5. Conclusion

The asymptotic behavior of the change-point process ${D}_{n}^{k}$ is established by examining the asymptotic behavior of the sample autocovariance and sample autocorrelation functions. The limits of the suitably normalized sample autocovariance and sample autocorrelation functions are expressed in terms of the limiting point processes. The limit distributions are differences of ratios of infinite-variance stable vectors, or functions of such vectors. As a result, determination of quantiles of the limit distributions is difficult. The limits are also generally random as a result of the infinite variance. Future work will aim at identifying the limit distributions explicitly so as to make the results directly applicable for hypothesis testing.

Acknowledgements

The authors thank the Pan-African University Institute of Basic Sciences, Technology and Innovation (PAUSTI) for funding this research.

Cite this paper

Irungu, I.W., Mwita, P.N. and Waititu, A.G. (2018) Limit Theory of Model Order Change-Point Estimator for GARCH Models. Journal of Mathematical Finance, 8, 426-445. https://doi.org/10.4236/jmf.2018.82027

References

1. Chinzara, Z. (2010) Macroeconomic Uncertainty and Emerging Market Stock Market Volatility: The Case for South Africa. Working Paper 187, 1-19.

2. Manera, M., Nicolini, M. and Vignati, I. (2012) Financial Speculation in Energy and Agriculture Futures Markets: A Multivariate GARCH Approach. International Association for Energy Economics, 3.

3. Mikosch, T. and Starica, C. (2004) Nonstationarities in Financial Time Series, the Long-Range Dependence and the IGARCH Effects. Review of Economics and Statistics, 86, 378-390. https://doi.org/10.1162/003465304323023886

4. Yau, C.Y. and Zhao, Z. (2015) Inference for Multiple Change Points in Time Series via Likelihood Ratio Scan Statistics. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78, 895-916. https://doi.org/10.1111/rssb.12139

5. Lee, T., Kim, M. and Baek, C. (2014) Tests for Volatility Shifts in GARCH against Long-Range Dependence. Journal of Time Series Analysis, 36, 127-153. https://doi.org/10.1111/jtsa.12098

6. Na, O., Lee, J. and Lee, S. (2011) Change Point Detection in Copula ARMA-GARCH Models. Journal of Time Series Analysis, 33, 554-569. https://doi.org/10.1111/j.1467-9892.2011.00763.x

7. Alemohammad, N., Rezakhah, S. and Alizadeh, S.H. (2016) Markov Switching Component GARCH Model: Stability and Forecasting. Communications in Statistics - Theory and Methods, 45, 4332-4348. https://doi.org/10.1080/03610926.2013.841934

8. Irungu, I., Mwita, P. and Waititu, A. (2018) Consistency of the Model Order Change-Point Estimator for GARCH Models. Journal of Mathematical Finance, 8, 266-282. https://doi.org/10.4236/jmf.2018.82018

9. Bartkiewicz, K., Jakubowski, A., Mikosch, T. and Wintenberger, O. (2011) Stable Limits for Sums of Dependent Infinite Variance Random Variables. Probability Theory and Related Fields, 150, 337-372. https://doi.org/10.1007/s00440-010-0276-9

10. Davis, R.A. and Resnick, S.I. (1986) Limit Theory for the Sample Covariance and Correlation Functions of Moving Averages. Annals of Statistics, 14, 533-558. https://doi.org/10.1214/aos/1176349937

11. Davis, R.A. and Resnick, S.I. (1996) Limit Theory for Bilinear Processes with Heavy-Tailed Noise. Annals of Applied Probability, 6, 1191-1210. https://doi.org/10.1214/aoap/1035463328

12. Mikosch, T. and Starica, C. (2000) Limit Theory for the Sample Autocorrelations and Extremes of a GARCH (1,1) Process. Annals of Statistics, 28, 1427-1451.

13. Basrak, B., Krizmanic, D. and Segers, J. (2012) A Functional Limit Theorem for Dependent Sequences with Infinite Variance Stable Limits. The Annals of Probability, 40, 2008-2033. https://doi.org/10.1214/11-AOP669

14. Bougerol, P. and Picard, N. (1992) Strict Stationarity of Generalized Autoregressive Processes. Annals of Probability, 20, 1714-1730. https://doi.org/10.1214/aop/1176989526

15. Krengel, U. (1985) Ergodic Theorems. De Gruyter, Berlin.

16. Kallenberg, O. (1983) Random Measures. Akademie-Verlag, Berlin.

17. Kesten, H. (1973) Random Difference Equations and Renewal Theory for Products of Random Matrices. Acta Mathematica, 131, 207-248. https://doi.org/10.1007/BF02392040

18. Breiman, L. (1965) On Some Limit Theorems Similar to the Arc-Sin Law. Theory of Probability and Its Applications, 10, 323-331. https://doi.org/10.1137/1110037

19. Bingham, N.H., Goldie, C.M. and Teugels, J.L. (1987) Regular Variation. Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge.

Appendix

Theorem 7. (Hölder’s Inequality)

Let I be a finite or countable index set. Given $1\le p\le \infty$ , if $X={\left({X}_{k}\right)}_{k\in I}\in {L}_{p}\left(I\right)$ and $Y={\left({Y}_{k}\right)}_{k\in I}\in {L}_{{p}^{\prime }}\left(I\right)$ , where $\frac{1}{p}+\frac{1}{{p}^{\prime }}=1$ then $XY={\left({X}_{k}{Y}_{k}\right)}_{k\in I}\in {L}_{1}\left(I\right)$ and

${‖XY‖}_{1}\le {‖{\left({X}_{k}\right)}_{k\in I}‖}_{p}{‖{\left({Y}_{k}\right)}_{k\in I}‖}_{{p}^{\prime }}={\left(\underset{k\in I}{\sum }{|{X}_{k}|}^{p}\right)}^{\frac{1}{p}}{\left(\underset{k\in I}{\sum }{|{Y}_{k}|}^{{p}^{\prime }}\right)}^{\frac{1}{{p}^{\prime }}}<\infty$
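A quick numerical check of the inequality for a pair of random vectors (p = 3 is an illustrative choice of exponent):

```python
import numpy as np

rng = np.random.default_rng(3)
x, y = rng.standard_normal(50), rng.standard_normal(50)

p = 3.0
p_conj = p / (p - 1.0)   # conjugate exponent, so that 1/p + 1/p' = 1

lhs = float(np.sum(np.abs(x * y)))                         # ||XY||_1
rhs = float(np.sum(np.abs(x) ** p) ** (1 / p)
            * np.sum(np.abs(y) ** p_conj) ** (1 / p_conj))

print(lhs <= rhs)   # True
```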

Theorem 8. (Convergent sequences are bounded)

Let ${\left\{{A}_{n}\right\}}_{n\in ℕ}$ be a convergent sequence. Then the sequence is bounded and the limit is unique.

Theorem 9. (Bolzano-Weierstrass)

Let ${\left\{{A}_{n}\right\}}_{n\in ℕ}$ be a sequence of real numbers that is bounded. Then there exists a subsequence ${\left\{{A}_{{n}_{k}}\right\}}_{{n}_{k}\in ℕ}$ that converges.

Theorem 10. (Invariance property of subsequences)

If ${\left\{{A}_{n}\right\}}_{n\in ℕ}$ is a convergent sequence, then every subsequence of that sequence converges to the same limit.

Theorem 11. (Algebra on Sequences)

If the sequence ${\left\{{A}_{n}\right\}}_{n\in ℕ}$ converges to L and the sequence ${\left\{{B}_{n}\right\}}_{n\in ℕ}$ converges to M, then the following hold:

1) $\underset{n\to \infty }{\mathrm{lim}}\left({A}_{n}+{B}_{n}\right)=\underset{n\to \infty }{\mathrm{lim}}{A}_{n}+\underset{n\to \infty }{\mathrm{lim}}{B}_{n}=L+M$

2) $\underset{n\to \infty }{\mathrm{lim}}\left({A}_{n}\cdot {B}_{n}\right)=\underset{n\to \infty }{\mathrm{lim}}{A}_{n}\underset{n\to \infty }{\mathrm{lim}}{B}_{n}=L\cdot M$

3) $\underset{n\to \infty }{\mathrm{lim}}\frac{{A}_{n}}{{B}_{n}}=\frac{\underset{n\to \infty }{\mathrm{lim}}{A}_{n}}{\underset{n\to \infty }{\mathrm{lim}}{B}_{n}}=\frac{L}{M}$ for ${B}_{n}\ne 0,\forall n\in ℕ$ and $M\ne 0$

Theorem 12. (Continuous Mapping)

Let a function $g:{ℝ}^{k}\to {ℝ}^{m}$ be continuous at every point of a set C such that $P\left(X\in C\right)=1$ . If ${X}_{n}\stackrel{d}{\to }X$ , then $g\left({X}_{n}\right)\stackrel{d}{\to }g\left(X\right)$ .
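The theorem can be illustrated by a small simulation (an illustrative sketch, not part of the paper): the sample mean of n Uniform(0,1) draws converges to 1/2, so for the continuous map $g\left(x\right)={x}^{2}$ the transformed statistic converges to 1/4.

```python
import random

# Illustration of the continuous mapping theorem in its simplest form:
# the sample mean X_n of n Uniform(0,1) draws converges to 1/2, and the
# map g(x) = x^2 is continuous everywhere, so g(X_n) converges to 1/4.
random.seed(3)

def sample_mean(n):
    return sum(random.random() for _ in range(n)) / n

def g(x):
    return x * x

# Replicate the transformed statistic and measure its average deviation
# from the limit g(1/2) = 1/4.
vals = [g(sample_mean(5000)) for _ in range(200)]
avg_dev = sum(abs(v - 0.25) for v in vals) / len(vals)
assert avg_dev < 0.05  # g(X_n) concentrates around g(1/2)
```

The same principle underlies the use of the theorem in the appendix, where the continuous functional is the truncated-sum mapping T applied to the point process.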

Theorem 13. (Algebra on Series)

Let $\sum {A}_{n}$ and $\sum {B}_{n}$ be two absolutely convergent series. Then:

1) the sum of the two series is again absolutely convergent, and its limit is the sum of the limits of the two series.

2) the difference of the two series is again absolutely convergent, and its limit is the difference of the limits of the two series.

3) the product of the two series is again absolutely convergent, and its limit is the product of the limits of the two series.

Theorem 14. Let ${\left\{{X}_{t}\right\}}_{t\in ℕ}$ be a strictly stationary sequence. Define the partial sums of the sequence by ${S}_{n}={\sum }_{t=1}^{n}{X}_{t}$ .

1) if $\kappa \in \left(0,2\right)$ then

${a}_{n}^{-1}{S}_{n}\stackrel{d}{\to }S$

where $S={\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{P}_{i}{Q}_{ij}$ has a stable distribution

2) if $\kappa \in \left(2,4\right)$ and for all $\epsilon >0$ , $\underset{\delta \to 0}{\mathrm{lim}}\underset{n\to \infty }{\mathrm{lim}\mathrm{sup}}P\left[|{S}_{n}\left(0,\delta \right]-E{S}_{n}\left(0,\delta \right]|>\epsilon \right]=0$ then

${a}_{n}^{-1}{S}_{n}-E{S}_{n}\left(0,1\right]\stackrel{d}{\to }S$

where S is the distributional limit of

$\underset{i=1}{\overset{\infty }{\sum }}\underset{j=1}{\overset{\infty }{\sum }}{P}_{i}{Q}_{ij}{I}_{\left\{|{P}_{i}{Q}_{ij}|>{a}_{n}\delta \right\}}-{\int }_{\delta <|x|\le 1}x\mu \left(dx\right)$

as $\delta \to 0$ , where μ is the measure defined in section 2.1; the limit S has a stable distribution.

For every $\delta >0$ , the mapping T from the space M of point measures in section 2.1 into $ℝ$ is defined by

$T:\underset{t=1}{\overset{\infty }{\sum }}{\epsilon }_{{x}_{t}}\to \underset{t=1}{\overset{\infty }{\sum }}{x}_{t}{I}_{\left\{|{x}_{t}|>\delta \right\}}$

and is almost surely continuous with respect to the distribution of the point process N. Thus, by the continuous mapping theorem,

${S}_{n}\left(\delta ,\infty \right)=T\left({N}_{n}\right)\stackrel{d}{\to }T\left(N\right)=S\left(\delta ,\infty \right)$

As $\delta \to 0$ , $S\left(\delta ,\infty \right)\to S\left(0,\infty \right)={\sum }_{i=1}^{\infty }{\sum }_{j=1}^{\infty }{P}_{i}{Q}_{ij}$ .
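Part 1 of Theorem 14 can be explored numerically in its simplest i.i.d. special case (a hypothetical simulation sketch; the theorem itself covers strictly stationary sequences). With Pareto noise of tail index $\kappa =1.5\in \left(0,2\right)$ , the variance is infinite but the mean is finite, and under the normalization ${a}_{n}={n}^{1/\kappa }$ the centred partial sums remain stochastically bounded, consistent with convergence to a stable law:

```python
import random

# Simulation sketch of Theorem 14, part 1, for i.i.d. Pareto(kappa) noise
# with tail index kappa = 1.5 in (0, 2): infinite variance, finite mean.
# With a_n = n^{1/kappa}, the centred sums a_n^{-1}(S_n - E S_n) should
# stay stochastically bounded and non-degenerate (a stable limit).
random.seed(7)
KAPPA = 1.5
MEAN = KAPPA / (KAPPA - 1.0)  # mean of Pareto(kappa) supported on [1, inf)

def pareto(kappa):
    # Inverse-transform sampling: P(X > x) = x^{-kappa} for x >= 1.
    return random.random() ** (-1.0 / kappa)

def normalised_sum(n, kappa):
    s = sum(pareto(kappa) for _ in range(n)) - n * MEAN  # centre by the mean
    return s / n ** (1.0 / kappa)                        # divide by a_n

# Replicate the normalised sum; its typical magnitude should be of order
# one rather than growing or collapsing with n.
reps = [abs(normalised_sum(2000, KAPPA)) for _ in range(200)]
median_abs = sorted(reps)[len(reps) // 2]
assert 0.0 < median_abs < 100.0  # bounded, non-degenerate fluctuations
```

Unlike the Gaussian case, the replicated values show occasional very large excursions, reflecting the heavy tail of the stable limit S.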