Journal of Mathematical Finance
Vol. 08, No. 02 (2018), Article ID: 84885, 20 pages
DOI: 10.4236/jmf.2018.82027

Limit Theory of Model Order Change-Point Estimator for GARCH Models

Irene W. Irungu1*, Peter N. Mwita2, Antony G. Waititu3

1Pan-African University Institute of Basic Sciences, Technology and Innovation, Nairobi, Kenya

2Machakos University, Machakos, Kenya

3Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: March 16, 2018; Accepted: May 25, 2018; Published: May 28, 2018

ABSTRACT

The limit theory of a change-point process based on the Manhattan distance of the sample autocorrelation function, with applications to GARCH processes, is examined. The general theory of the sample autocovariance and sample autocorrelation functions of a stationary GARCH process forms the basis of this study. Specifically, point process theory is utilized to obtain their weak convergence limits at different lags. This is further extended to the change-point process. The limits are found to be generally random as a result of the infinite variance.

Keywords:

Autocorrelation Function, Change-Point, Convergence, GARCH, Manhattan Distance, Model Order, Point Process, Regular Variation, Weak Limit

1. Introduction

Empirical observations made in the econometrics and applied financial time series literature over long time horizons reveal that log-returns of various series of share prices, exchange rates and interest rates exhibit distinctive stylized features. First, the frequency of large and small values is rather high, suggesting that the data do not come from a normal but rather a heavy-tailed distribution, and exceedances of high thresholds occur in clusters, which indicates dependence in the tails. Second, the sample autocorrelations of the data are small, whereas the sample autocorrelations of the absolute and squared values are significantly different from zero even at large lags. This behavior suggests some kind of long-range dependence in the data.

Various models have been proposed in order to describe these features. Among them is the GARCH model, which has been found appropriate for capturing volatility dynamics in financial time series, particularly in the modelling of stock market volatility, as seen in [1], and derivative market volatility, as utilized by [2]. GARCH(1,1) in particular is often used in applications as it is believed to capture, despite its simplicity, a variety of the empirically observed stylized features of log-returns. However, log-return data cannot be modelled by one particular GARCH model over a long period of time [3]. The authors observe that in real financial time series the effect of non-stationarity of the log-return series can be seen by considering the sample autocorrelation function of moving blocks of the same length, as the estimates appear to differ from block to block. They suggest the use of change-point analysis of financial time series modelled by GARCH processes with parameters varying over time. The likelihood ratio scan method has been proposed by [4] for estimating multiple change-points in piecewise stationary processes, where a scan statistic reduces the computationally infeasible global multiple change-point estimation problem to a number of single change-point detection problems in local windows. The cumulative sum test is considered by [5] for determining volatility shifts in GARCH models against long-range dependence, and has also been used by [6] for change-point detection in copula ARMA-GARCH models. A Markov-switching GARCH model has been proposed by [7], where the volatility in each state is a convex combination of two different GARCH components with time-varying weights, giving the model dynamic behavior that captures different kinds of shocks. According to [8], a change-point in the series could also be attributed to a change in the GARCH model order specification; the authors propose an estimator based on the Manhattan distance of the sample autocorrelation of the squared values. This paper furthers the work of [8] by deriving the distributional convergence of the process $D_n^k$ used in constructing the change-point estimator. Since $D_n^k$ is based on the Manhattan distance of the sample autocorrelation, the limit theory for sums of strictly stationary sequences is utilized. Conditions ensuring that partial sums of strictly stationary processes converge in distribution to an infinite variance stable distribution have been provided by [9]; this is achieved by relating the regular variation condition to the weak convergence of point processes. This was utilized by [10] in deriving the limit theory for the autocovariance function of linear processes, later extended to bilinear processes in [11]. Limit theory for the sample autocovariance of GARCH processes was considered by [12], using weak convergence of point processes in combination with the continuous mapping theorem. Point processes were also utilized by [13] in examining the convergence of the partial sum process of stationary regularly varying GARCH(1,1) sequences, for which the clusters of high-threshold excesses are broken into asymptotically independent blocks; the limit was established to be a stable Lévy process. We utilize point process theory and restrict ourselves to qualitative results.

The paper is organized as follows. Section 2 outlines the GARCH model specification and the change-point estimator, with the corresponding assumptions. The weak convergence of point processes associated with the sequence $(X_t^2, \sigma_t^2)$ is considered in Section 3. In Section 4 the asymptotic behavior of the change-point process $D_n^k$ is studied, and its limiting distribution is derived for a stationary GARCH sequence.

2. Change-Point Estimator

Let $(X_t)_{t\in\mathbb{Z}}$ be a GARCH process of order $(p,q)$ given by the equation

$$X_t = \sigma_t\epsilon_t \quad \text{for } t\in\mathbb{Z}, \qquad \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p}\alpha_i X_{t-i}^2 + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^2 \tag{1}$$

By iterating the defining difference Equation (1) for $\sigma_t^2$, the GARCH model can be further expressed as a stochastic recurrence equation as follows:

Let
$$Y_t = \begin{pmatrix}\sigma_{t+1}^2\\ \vdots\\ \sigma_{t-q+2}^2\\ X_t^2\\ \vdots\\ X_{t-p+1}^2\end{pmatrix},\qquad
A_t = \begin{pmatrix}
\alpha_1\epsilon_t^2+\beta_1 & \beta_2 & \cdots & \beta_{q-1} & \beta_q & \alpha_2 & \cdots & \alpha_{p-1} & \alpha_p\\
1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
\vdots & & \ddots & & \vdots & \vdots & & & \vdots\\
0 & 0 & \cdots & 1 & 0 & 0 & \cdots & 0 & 0\\
\epsilon_t^2 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0\\
0 & 0 & \cdots & 0 & 0 & 1 & \cdots & 0 & 0\\
\vdots & & & & \vdots & & \ddots & & \vdots\\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 1 & 0
\end{pmatrix},\qquad
B_t = \begin{pmatrix}\alpha_0\\ 0\\ \vdots\\ 0\end{pmatrix};$$
then $(Y_t)$ satisfies the following stochastic recurrence equation

$$Y_t = A_tY_{t-1} + B_t \quad \text{for } t\in\mathbb{Z} \tag{2}$$

Specifically, for the GARCH(1,1) case with $A_t = \alpha_1\epsilon_t^2 + \beta_1$ and $B_t = \alpha_0$, Equation (2) reduces to the one-dimensional recurrence

$$\sigma_t^2 = \alpha_0 + \left(\alpha_1\epsilon_{t-1}^2 + \beta_1\right)\sigma_{t-1}^2 \quad \text{for } t\in\mathbb{Z} \tag{3}$$
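As a concrete illustration, the recurrence (3) can be simulated directly. The following is a minimal sketch, assuming standard normal innovations and illustrative parameters with $\alpha_1+\beta_1 < 1$; the function name and burn-in choice are ours, not part of the paper.

```python
import numpy as np

def simulate_garch11(n, alpha0, alpha1, beta1, burn=500, rng=None):
    """Simulate a GARCH(1,1) path via the recurrence (3):
    sigma_t^2 = alpha0 + (alpha1*eps_{t-1}^2 + beta1)*sigma_{t-1}^2,
    X_t = sigma_t * eps_t, with standard normal innovations."""
    rng = np.random.default_rng(rng)
    eps = rng.standard_normal(n + burn)
    sigma2 = np.empty(n + burn)
    sigma2[0] = alpha0 / (1.0 - alpha1 - beta1)  # stationary mean of sigma_t^2
    for t in range(1, n + burn):
        sigma2[t] = alpha0 + (alpha1 * eps[t - 1] ** 2 + beta1) * sigma2[t - 1]
    X = np.sqrt(sigma2) * eps
    return X[burn:], sigma2[burn:]  # drop burn-in so the path is near-stationary

# Illustrative parameters satisfying alpha1 + beta1 < 1 (Assumption 1)
X, sigma2 = simulate_garch11(2000, alpha0=0.1, alpha1=0.1, beta1=0.8, rng=42)
```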

Assumption 1. (Strictly Stationary)

According to [14], a necessary and sufficient condition for the existence of a unique strictly stationary solution to (1) is the negativity of the top Lyapunov exponent associated with the sequence $(A_t)$. This exponent cannot in general be calculated explicitly, but a sufficient condition for its negativity is

$$\sum_{i=1}^{p}\alpha_i + \sum_{j=1}^{q}\beta_j < 1$$

Assumption 2. (Ergodic Process)

According to [15], standard ergodic theory yields that $(X_t)$ is an ergodic process. Thus its properties can be deduced from a single, sufficiently long sample path.

Consider the change-point test hypothesis to be investigated, defined as:

$$H_0: X_t \sim \mathrm{GARCH}(1,1) \text{ for } t = 1,\ldots,n \quad\text{against}\quad H_1: X_t \sim \begin{cases}\mathrm{GARCH}(1,1) & \text{for } t = 1,\ldots,k\\ \mathrm{GARCH}(p,q) & \text{for } t = k+1,\ldots,n\end{cases} \quad\text{where } p,q\in\mathbb{N}\setminus\{0\} \tag{4}$$

Assumption 3. (Weight)

Let the weight $w_k$ be a measurable function that depends on the sample size $n$ and the change-point $k$. It is arbitrarily chosen such that it satisfies the condition

$$\sum_{i=1}^{k}\rho_i = \frac{k}{n}\sum_{i=1}^{n}\rho_i \quad\Longrightarrow\quad \frac{1}{n}\left(\sum_{i=1}^{k}\rho_i - \frac{k}{n}\sum_{i=1}^{n}\rho_i\right) = 0 \tag{5}$$

Suppose Assumption 1, Assumption 2 and Assumption 3 are satisfied. According to [8], the change-point estimator $\hat{k}$ for the hypothesis in (4) is based on the lower bound of the weighted Manhattan divergence measure of the sample autocorrelation function, through the process $D_n^k$ given by

$$D_n^k = \frac{k}{n}\left(1-\frac{k}{n}\right)\left|\frac{1}{k}\sum_{i=1}^{k}\rho_i - \frac{1}{n-k}\sum_{i=k+1}^{n}\rho_i\right| \tag{6}$$

where $\rho_i$ and $k$ denote the sample autocorrelation function and the unknown change-point respectively, which are estimated as:

$$\rho_k = \frac{\sum_{t=1}^{k-h}X_t^2X_{t+h}^2}{\sum_{t=1}^{k}X_t^4} \quad\text{for } 0 < k < n,\; 0 < h < n, \qquad \hat{k} = \min\left\{k : D_n = \max_{1<k<n}\left(D_n^k\right)\right\} \tag{7}$$

Proof. The works of [8] are utilized here. Let $X = (X_1, X_2, \ldots, X_k)$ be a $k$-dimensional vector and $Y = (X_{k+1}, X_{k+2}, \ldots, X_n)$ be an $(n-k)$-dimensional vector. The autocovariance and autocorrelation functions can be expressed in terms of the inner product as

$$\mathrm{acovar}\langle X, Y\rangle = \left\langle X - E(X),\, Y - E(Y)\right\rangle \tag{8}$$

$$\mathrm{acorr}\langle X, Y\rangle = \left\langle \frac{X - E(X)}{sd(X)},\, \frac{Y - E(Y)}{sd(Y)}\right\rangle \tag{9}$$

where $sd(X)$ and $sd(Y)$ denote the standard deviations of $X$ and $Y$ respectively, each representing an $L_2$ distance from the mean. Applying Hölder's inequality (Theorem 7) to (8) and (9) yields

$$|\mathrm{acovar}(X,Y)| \le sd(X)\,sd(Y) \in L_1 \text{ space}, \qquad |\mathrm{acorr}(X,Y)| \le 1 \in L_1 \text{ space} \tag{10}$$

Following (10) we can define a sequence of autocorrelation functions $\rho_{i+1,j}$, where for fixed $i = 0$, $1 \le j \le n-1$ and for fixed $j = n$, $1 \le i \le n-1$, such that we have two subsequences $\rho_{1j} = (\rho_{1,1}, \rho_{1,2}, \ldots, \rho_{1,k}, \ldots, \rho_{1,n-1})$ and $\rho_{in} = (\rho_{2,n}, \rho_{3,n}, \ldots, \rho_{k+1,n}, \ldots, \rho_{n,n})$, where $\rho_{1,k}$ and $\rho_{k+1,n}$ denote the autocorrelations of the sequences $\{X_t^2\}_{t=1}^{k}$ and $\{X_t^2\}_{t=k+1}^{n}$ for $1 \le k \le n$. A change-point process $D_n^k$ quantifying the deviation between $\rho_{1,k}$ and $\rho_{k+1,n}$ using a divergence measure motivated by the weighted $L_p$ distance, with $k$ denoting the change-point, is proposed. Specifically, they assumed the case $p = 1$, resulting in a weighted Manhattan distance; linearity and the absolute value inequality for the expectation operator then yield

$$L_p\left(\rho_{1,k} - \rho_{k+1,n}\right) = \left(\sum_{k=1}^{n} w_k\left|\rho_{1,k} - \rho_{k+1,n}\right|\right) \ge w_k\left|E(\rho_{1,k}) - E(\rho_{k+1,n})\right| \tag{11}$$

where $\rho_{1,k} = \frac{\sum_{t=1}^{k-h}X_t^2X_{t+h}^2}{\sum_{t=1}^{k}X_t^4}$ for $0 < k < n$, $0 < h < n$.

The change-point process $D_n^k$ is taken to be the lower bound of the Manhattan divergence measure (11), where the weight $w_k$ is as specified in Assumption 3. The resultant process is as specified in (6). The change-point estimator $\hat{k}$ of a change-point $k^*$ is the point at which there is maximal sample evidence for a break in the sample autocorrelation function of the squared returns process. It is therefore estimated as the least value of $k$ that maximizes the value of $D_n^k$, where $1 < k < n$ is chosen as given in (7).
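To make the estimator concrete, the following minimal sketch implements one reading of (6) and (7): for each candidate $k$, the lag-$h$ sample autocorrelation of the squared series is computed on the two segments, their weighted absolute difference forms $D_n^k$, and $\hat{k}$ is the least maximizer. The helper names, the boundary trimming, and the use of a single fixed lag $h$ are our own illustrative choices.

```python
import numpy as np

def acf_squares(x, h):
    """Sample autocorrelation of the squared series at lag h, in the spirit of
    (7): sum of x_t^2 * x_{t+h}^2 over the segment, divided by sum of x_t^4."""
    x2 = x ** 2
    return np.sum(x2[:-h] * x2[h:]) / np.sum(x2 ** 2)

def change_point_estimate(x, h=1, trim=30):
    """Scan candidate change-points k, computing the weighted Manhattan
    deviation D_n^k of (6) between the two segments' lag-h ACFs of squares;
    return the least maximizer k_hat together with the D_n^k profile."""
    n = len(x)
    ks = np.arange(trim, n - trim)
    D = np.empty(len(ks))
    for i, k in enumerate(ks):
        w = (k / n) * (1 - k / n)       # weight w_k of Assumption 3 / Eq. (6)
        D[i] = w * abs(acf_squares(x[:k], h) - acf_squares(x[k:], h))
    k_hat = int(ks[np.argmax(D)])       # argmax returns the first (least) maximizer
    return k_hat, D
```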

3. Point Process Theory

Point process techniques are utilized in obtaining the structure of limit variables and limit processes which occur in the theory of summation in time series analysis. The point process theory as developed by [16] is utilized. Consider the state space of the point process $\bar{\mathbb{R}}^n\setminus\{0\}$, where $\bar{\mathbb{R}} = \mathbb{R}\cup\{-\infty,\infty\}$. Let $\mathcal{B}$ be the collection of bounded Borel sets in $\bar{\mathbb{R}}^n\setminus\{0\}$. Let $F_c$ be the collection of bounded non-negative continuous functions on $\bar{\mathbb{R}}^n\setminus\{0\}$ with bounded support and $F_s$ be the collection of bounded non-negative step functions on $\bar{\mathbb{R}}^n\setminus\{0\}$ with bounded support. Write $M$ for the collection of Radon counting measures on $\bar{\mathbb{R}}^n\setminus\{0\}$ with null measure $o$. This means that $\mu \in M\setminus\{o\}$ if and only if $\mu$ is of the form $\sum_{i=1}^{\infty}n_i\varepsilon_{X_i}$, where $n_i\in\{1,2,\ldots\}$, the points $X_i$ are distinct, $\sum_{i=1}^{\infty}|X_i| < \infty$, and $\varepsilon_{X_i}$ is the Dirac measure at $X_i$, that is,

$$\varepsilon_{X_i}(B) = \begin{cases}1 & \text{for } X_i\in B\\ 0 & \text{for } X_i\notin B\end{cases}$$

for any $B\in\mathcal{B}$. Let $M_y\subset M$ be the collection of measures $\mu$ such that $\mu(\{X : |X| > y\}) > 0$, so that $M_0 = M\setminus\{o\}$. Define $\tilde{M} = \{\mu\in M : \mu(\{X : |X| > 1\}) = 0 \text{ and } \mu(\{X : X\in S^{n-1}\}) > 0\}$ and let $B(\tilde{M})$ be the Borel σ-field on $\tilde{M}$.

Consider a strictly stationary sequence $(\mathbf{X}_t)_{t\in\mathbb{Z}}$ of random row vectors with values in $\mathbb{R}^n$, that is, $\mathbf{X} = (X_1,\ldots,X_n)$. The characterization of the asymptotic behavior of the tails of the random variable $\mathbf{X}$ is examined through the regular variation condition.

Theorem 1. (Regular Variation Condition)

In light of [17], assume $\epsilon$ has a density with unbounded support, $\alpha_0 > 0$, $E[\ln(\alpha_1\epsilon^2+\beta_1)] < 0$, $E|\alpha_1\epsilon^2+\beta_1|^{p/2} \ge 1$ and $E|\epsilon|^p\ln|\epsilon| < \infty$ for some $p > 0$; then:

1) there exists a number $\kappa\in(0,p]$ which is the unique solution of the equation

$$E\left[\left(\beta_1+\alpha_1\epsilon^2\right)^{\kappa/2}\right] = 1$$

and there exists a positive constant $c_0 = c_0(\alpha_0,\alpha_1,\beta_1)$ such that

$$P(\sigma > x) \sim c_0x^{-\kappa} \quad\text{as } x\to\infty$$

2) if $E|\epsilon|^{\kappa+\xi} < \infty$ for some $\xi > 0$, then

$$P(|X| > x) \sim E|\epsilon|^{\kappa}\,P(\sigma > x)$$

and the vector $(X,\sigma)$ is jointly regularly varying, in the sense that

$$\frac{P\left(|(X,\sigma)| > xt,\; (X,\sigma)/|(X,\sigma)|\in B\right)}{P\left(|(X,\sigma)| > t\right)} \xrightarrow{v} x^{-\kappa}P(\Theta\in B) \quad\text{as } t\to\infty$$

where $\xrightarrow{v}$ denotes vague convergence on the Borel σ-field of the unit sphere $S^1$ of $\mathbb{R}^2$, relative to the norm $|\cdot|$, with

$$P(\Theta\in\cdot) = \frac{E\left[|(\epsilon,1)|^{\kappa}\,I\{(\epsilon,1)/|(\epsilon,1)|\in\cdot\}\right]}{E|(\epsilon,1)|^{\kappa}}$$

Proof. Following the works of [17] and [18], assume $\xi$ and $\eta$ are independent non-negative random variables such that $P(\xi > x) \sim L(x)x^{-\kappa}$ for some slowly varying function $L$ and $E\eta^{\kappa+\varepsilon} < \infty$ for some $\varepsilon > 0$; then $P(\eta\xi > x) \sim E\eta^{\kappa}\,P(\xi > x)$ as $x\to\infty$.

Applying this result yields

$$\begin{aligned}
P\left(|(X,\sigma)| > xt,\; (X,\sigma)/|(X,\sigma)|\in B\right) &= P\left(\sigma|(\epsilon,1)| > xt,\; (\epsilon,1)/|(\epsilon,1)|\in B\right)\\
&= P\left(\sigma|(\epsilon,1)|\,I\{(\epsilon,1)/|(\epsilon,1)|\in B\} > xt\right)\\
&\sim E\left[|(\epsilon,1)|^{\kappa}I\{(\epsilon,1)/|(\epsilon,1)|\in B\}\right]P(\sigma > xt)\\
&\sim E\left[|(\epsilon,1)|^{\kappa}I\{(\epsilon,1)/|(\epsilon,1)|\in B\}\right]x^{-\kappa}P(\sigma > t)
\end{aligned}$$

also

$$P(|(X,\sigma)| > t) = P(\sigma|(\epsilon,1)| > t) \sim E|(\epsilon,1)|^{\kappa}\,P(\sigma > t)$$

which completes the proof.
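The equation for $\kappa$ in part 1) rarely has a closed form, but for given $(\alpha_1,\beta_1)$ and a known innovation law it can be solved numerically. Below is a minimal sketch, assuming standard normal innovations, that approximates $\kappa\mapsto E[(\alpha_1\epsilon^2+\beta_1)^{\kappa/2}]$ by Monte Carlo and locates the positive root by bisection; the function name and bracket are ours, and the Monte Carlo error grows with $\kappa$ since high powers of $\alpha_1\epsilon^2+\beta_1$ are themselves heavy-tailed.

```python
import numpy as np

def tail_index_kappa(alpha1, beta1, n_mc=1_000_000, lo=1e-3, hi=20.0, seed=0):
    """Solve E[(alpha1*eps^2 + beta1)^(kappa/2)] = 1 for kappa > 0 by Monte
    Carlo over standard normal eps and bisection in kappa."""
    eps2 = np.random.default_rng(seed).standard_normal(n_mc) ** 2
    A = alpha1 * eps2 + beta1
    f = lambda kappa: np.mean(A ** (kappa / 2.0))
    # f(0) = 1, f is convex and initially decreasing (E[log A] < 0 under
    # stationarity), so it recrosses 1 exactly once, at the tail index kappa.
    assert f(lo) < 1.0 < f(hi), "bracket [lo, hi] does not straddle the root"
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Here kappa > 2 because E[A] = alpha1 + beta1 < 1 rules out a root at or below 2.
print(tail_index_kappa(0.1, 0.8))
```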

Theorem 2. (Strongly Mixing Condition)

Let $(a_n)$ be a sequence of positive numbers such that

$$nP(|X| > a_n) \to 1 \tag{12}$$

The sequence $(a_n)$ can be chosen as the $(1-n^{-1})$-quantile of $|X|$. Since $|X|$ is regularly varying, $a_n = n^{1/\kappa}L(n)$ for some slowly varying function $L$. The mixing condition holds for $(X_t)$ if there exists a sequence of positive integers $(r_n)$ such that $r_n\to\infty$, $k_n = [n/r_n]\to\infty$ as $n\to\infty$ and

$$E\left[\exp\left\{-\sum_{t=1}^{n}f\left(\frac{X_t}{a_n}\right)\right\}\right] - \left(E\left[\exp\left\{-\sum_{t=1}^{r_n}f\left(\frac{X_t}{a_n}\right)\right\}\right]\right)^{k_n} \to 0 \quad\text{as } n\to\infty,\; f\in F_s \tag{13}$$

Condition (13) is implied by strong mixing of the stationary sequence $(X_t)$.
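As a quick numerical illustration of the normalization (12), the sketch below picks $a_n$ as the empirical $(1-1/n)$-quantile of $|X|$; the use of Student-t draws as a stand-in regularly varying sample is our own assumption.

```python
import numpy as np

# Sketch: choose a_n as the empirical (1 - 1/n)-quantile of |X| and check the
# normalization (12). Any regularly varying |X| serves for illustration; here
# Student-t(3) draws act as a heavy-tailed sample (our assumption).
rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=100_000)
n = len(X)
a_n = np.quantile(np.abs(X), 1.0 - 1.0 / n)
# Roughly one observation in n exceeds its (1 - 1/n)-quantile, so
# n * P(|X| > a_n) is close to 1, matching (12).
print(a_n, n * np.mean(np.abs(X) > a_n))
```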

Assume that the joint regular variation condition in Theorem 1 and the strong mixing condition in Theorem 2 are satisfied for a stationary sequence $(X_t)$; then a statement can be made about the weak convergence of the sequence of point processes

$$N_n = \sum_{t=1}^{n}\varepsilon_{X_t/a_n}, \quad n = 1,2,\ldots \tag{14}$$

Define

$$\tilde{N}_n = \sum_{i=1}^{m_n}\tilde{N}_{r_n,i}, \quad i = 1,2,\ldots,m_n \tag{15}$$

where the $\tilde{N}_{r_n,i}$ are independent and identically distributed as $\tilde{N}_{r_n,0} = \sum_{t=1}^{r_n}\varepsilon_{X_t/a_n}$. It therefore follows that $(N_n)$ converges weakly if and only if $(\tilde{N}_n)$ does, and they have the same limit $N$. $N$ is identical in law to the point process $\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_iQ_{ij}}$, where $\sum_{i=1}^{\infty}\varepsilon_{P_i}$ is a Poisson process on $\mathbb{R}_+$ with the $P_i$ describing the radial part of the points, and $\left(\sum_{j=1}^{\infty}\varepsilon_{Q_{ij}}\right)_i$ is a sequence of independent and identically distributed point processes with the $Q_{ij}$ describing the spherical part, with joint distribution $Q$ on $(\tilde{M}, B(\tilde{M}))$.

Theorem 3. Assume that $(\mathbf{X}_t)$ is a stationary sequence of random vectors for which all finite-dimensional distributions are jointly regularly varying with index $\kappa > 0$. To be specific, let $(\theta_{-m},\ldots,\theta_{m})$ be the $(2m+1)n$-dimensional random row vector with values in the unit sphere $S^{(2m+1)n-1}$, $m\ge 0$, appearing in the regular variation condition. Assume that the strong mixing condition holds for $(\mathbf{X}_t)$ and that

$$\lim_{m\to\infty}\limsup_{n\to\infty}P\left(\bigvee_{m\le t\le r_n}|\mathbf{X}_t| > a_ny \,\Big|\, |\mathbf{X}_0| > a_ny\right) = 0, \quad y > 0.$$

Then the limit

$$\gamma = \lim_{m\to\infty}\frac{E\left(|\theta_0^{(m)}|^{\kappa} - \bigvee_{j=1}^{m}|\theta_j^{(m)}|^{\kappa}\right)_+}{E|\theta_0^{(m)}|^{\kappa}}$$

exists and is the extremal index of the sequence $(|\mathbf{X}_t|)$.

1) If $\gamma = 0$, then $N_n \xrightarrow{d} o$.

2) If $\gamma > 0$, then $N_n \xrightarrow{d} N \overset{d}{=} \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_iQ_{ij}}$,

where $\sum_{i=1}^{\infty}\varepsilon_{P_i}$ is a Poisson process on $\mathbb{R}_+$ with the $P_i$ describing the radial part of the points, and $\left(\sum_{j=1}^{\infty}\varepsilon_{Q_{ij}}\right)_i$ is a sequence of independent and identically distributed point processes with the $Q_{ij}$ describing the spherical part, with joint distribution $Q$ on $(\tilde{M}, B(\tilde{M}))$, where $Q$ is the weak limit of

$$Q(\cdot) = \lim_{m\to\infty}\frac{E\left[\left(|\theta_0^{(m)}|^{\kappa} - \bigvee_{j=1}^{m}|\theta_j^{(m)}|^{\kappa}\right)_+ I\left(\sum_{|t|\le m}\varepsilon_{\theta_t^{(m)}}\in\cdot\right)\right]}{E\left(|\theta_0^{(m)}|^{\kappa} - \bigvee_{j=1}^{m}|\theta_j^{(m)}|^{\kappa}\right)_+}$$

Theorem 4. Utilizing the theory developed by [3], let $(X_t)$ be a stationary GARCH(1,1) process and assume that the joint regular variation and strong mixing conditions hold. For fixed $h\ge 0$, set $\mathbf{X}_t = (X_t, \sigma_t, \ldots, X_{t+h}, \sigma_{t+h})$; then the conditions in Theorems 2 and 3 above are met and hence

$$\tilde{N}_n = \sum_{i=1}^{m_n}\tilde{N}_{r_n,i}; \qquad N_n = \sum_{t=1}^{n}\varepsilon_{\mathbf{X}_t/a_n} \xrightarrow{d} N = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_iQ_{ij}} \tag{16}$$

where $Q_{ij} = (Q_{ij}^{(0)},\ldots,Q_{ij}^{(m)})$ and the $P_i$ are as previously defined.

We now consider the convergence of point processes whose points are products of random variables; this forms the basis of the results on the weak convergence of the sample autocovariance and autocorrelation functions of stationary processes.

Theorem 5. Let $(X_t)$ be a strictly stationary sequence such that $(\mathbf{X}_t) = ((X_t,\ldots,X_{t+m}))$ satisfies the joint regular variation condition for some $m\ge 0$, and further assume that Theorem 2 and Theorem 3 hold; then:

$$\hat{N}_n = \left(\hat{N}_{n,h}\right)_{h=0,\ldots,m} = \left(\sum_{t=1}^{n}\varepsilon_{a_n^{-2}X_tX_{t+h}}\right)_{h=0,\ldots,m} \xrightarrow{d} \hat{N} = \left(\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_i^2Q_{ij}^{(0)}Q_{ij}^{(h)}}\right)_{h=0,\ldots,m} \tag{17}$$

where the points $Q_{ij} = (Q_{ij}^{(0)},\ldots,Q_{ij}^{(m)})$ and $P_i$ are as previously defined, and $\hat{N}_n$ and $\hat{N}$ are point processes on $\bar{\mathbb{R}}\setminus\{0\}$, meaning that points are not included in the point processes if $X_tX_{t+h} = 0$ or $Q_{ij}^{(0)}Q_{ij}^{(h)} = 0$.

We study the weak limit behaviour of the sample autocovariance and sample autocorrelation of a stationary sequence $(X_t)$. Construct from this process the strictly stationary $(m+1)$-dimensional processes $(\mathbf{X}_t) = ((X_t,\ldots,X_{t+m}))$, $m\ge 0$. Define the sample autocovariance function

$$\gamma_{n,X}(h) = n^{-1}\sum_{t=1}^{n-h}X_tX_{t+h}, \quad h\ge 0 \tag{18}$$

and the corresponding sample autocorrelation function

$$\rho_{n,X}(h) = \frac{\gamma_{n,X}(h)}{\gamma_{n,X}(0)}, \quad h\ge 1 \tag{19}$$

Define the deterministic counterparts of the autocovariance and autocorrelation functions as follows:

$$\gamma_X(h) = E[X_0X_h], \quad h\ge 0 \tag{20}$$

$$\rho_X(h) = \frac{\gamma_X(h)}{\gamma_X(0)}, \quad h\ge 1 \tag{21}$$

Theorem 6. Assume that $(X_t)$ is a strictly stationary sequence of random variables and that for a fixed $m\ge 0$, $(\mathbf{X}_t)$ satisfies the regular variation condition and $N_n = \sum_{t=1}^{n}\varepsilon_{\mathbf{X}_t/a_n} \xrightarrow{d} N = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_iQ_{ij}}$, where the points $Q_{ij} = (Q_{ij}^{(0)},\ldots,Q_{ij}^{(m)})$ and $P_i$ are as previously defined.

1) If $\kappa\in(0,2)$, then

$$\left(\frac{n}{a_n^2}\gamma_{n,X}(h)\right)_{h=0,\ldots,m} \xrightarrow{d} (V_h)_{h=0,\ldots,m}$$

$$\left(\rho_{n,X}(h)\right)_{h=1,\ldots,m} \xrightarrow{d} \left(\frac{V_h}{V_0}\right)_{h=1,\ldots,m}$$

where

$$V_h = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_i^2Q_{ij}^{(0)}Q_{ij}^{(h)}, \quad h = 0,1,\ldots,m$$

The vector $(V_0,\ldots,V_m)$ is jointly $\kappa/2$-stable in $\mathbb{R}^{m+1}$.

2) If $\kappa\in(2,4)$ and for $h = 0,\ldots,m$

$$\lim_{\epsilon\to 0}\limsup_{n\to\infty}\mathrm{Var}\left(a_n^{-2}\sum_{t=1}^{n-h}X_tX_{t+h}\,I\{|X_tX_{t+h}|\le a_n^2\epsilon\}\right) = 0,$$

then

$$\left(\frac{n}{a_n^2}\left(\gamma_{n,X}(h) - \gamma_X(h)\right)\right)_{h=0,\ldots,m} \xrightarrow{d} (V_h)_{h=0,\ldots,m}$$

which implies that

$$\left(\frac{n}{a_n^2}\left(\rho_{n,X}(h) - \rho_X(h)\right)\right)_{h=1,\ldots,m} \xrightarrow{d} \gamma_X^{-1}(0)\left(V_h - \rho_X(h)V_0\right)_{h=1,\ldots,m}$$
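The practical content of part 1) is that in the infinite-variance regime the sample autocorrelation need not settle down to a constant, however long the sample. The following minimal simulation sketch illustrates this for the squares of a GARCH(1,1) process; the parameter choice, which makes $E[A^2] > 1$ (so $X_t$ has an infinite fourth moment) while keeping $\alpha_1+\beta_1 < 1$, and all function names are our own.

```python
import numpy as np

def garch11_path(n, alpha0, alpha1, beta1, seed):
    """Simulate X_t = sigma_t * eps_t via the recurrence (3); repeated here
    so this snippet is self-contained."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + 500)
    s2 = alpha0 / (1.0 - alpha1 - beta1)
    X = np.empty(n + 500)
    for t in range(n + 500):
        X[t] = np.sqrt(s2) * eps[t]
        s2 = alpha0 + (alpha1 * eps[t] ** 2 + beta1) * s2
    return X[500:]

def sample_acf(x, h):
    xc = x - x.mean()
    return np.sum(xc[:-h] * xc[h:]) / np.sum(xc ** 2)

# alpha1 = 0.45, beta1 = 0.5 gives E[A] < 1 but E[A^2] > 1 for normal eps,
# so X_t has finite variance but an infinite fourth moment: the ACF of the
# squares then sits in the nonstandard regime and stays spread out.
rhos = [sample_acf(garch11_path(50_000, 0.1, 0.45, 0.5, seed) ** 2, 1)
        for seed in range(20)]
print(min(rhos), max(rhos))  # a wide spread across replications signals a random limit
```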

4. Limit Theory of Change-Point Estimator

The following proposition is our main result on weak convergence for the proposed change-point process $D_n^k(h)$, as specified in (6), for GARCH processes, based on point process theory. In addition to the previously stated theorems, further auxiliary theorems are utilized in the proof of the proposition; see the Appendix.

Proposition 2. Let $(X_t)_{t\in\mathbb{Z}}$ be a strictly stationary sequence of random variables, irrespective of the distribution of the initial value $X_0$. Specifically, let $(X_t)_{t\in\mathbb{Z}}$ be a GARCH(1,1) process defined in the form of the stochastic recurrence Equation (3). For fixed $h\ge 0$, set $\mathbf{X}_t = (X_t,\ldots,X_{t+h})$. Assume that the regular variation conditions hold. Let $(a_n)$ be a sequence of constants such that the strong mixing condition is satisfied; then $N_n = \sum_{t=1}^{n}\varepsilon_{\mathbf{X}_t/a_n} \xrightarrow{d} N = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\varepsilon_{P_iQ_{ij}}$, where the points $Q_{ij} = (Q_{ij}^{(0)},\ldots,Q_{ij}^{(m)})$ and $P_i$ are as previously defined. Thus the conditions in Theorem 5 are met, and hence there exists a sequence of bounded constants $(C_n(h))$ which converges in distribution to $C_h$ such that the following statements hold:

1) If $\kappa\in(0,2)$, then

$$\left(D_n^k(h)\right)_{h=1,\ldots,m} \xrightarrow{d} C_h\left(\frac{V_h}{V_0}\right)_{h=1,\ldots,m}$$

2) If $\kappa\in(2,4)$ and for $h = 0,\ldots,m$

$$\lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}\left(a_n^{-4}\sum_{t=1}^{n-h}X_t^2X_{t+h}^2\,I\{|X_t^2X_{t+h}^2|\le a_n^4\delta\}\right) = 0,$$

then

$$\left(\frac{n}{a_n^4}D_n^k(h)\right)_{h=1,\ldots,m} = \frac{n}{a_n^4}\left(C_n\left(\rho_{n,X^2}(h) - \rho_{X^2}(h)\right)\right)_{h=1,\ldots,m} \xrightarrow{d} \gamma_{X^2}^{-1}(0)\left(C_hV_h - \rho_{X^2}(h)C_0V_0\right)_{h=1,\ldots,m}$$

where

$$C_h = \frac{V_0}{V_h}\left(\frac{V_0^kV_h - V_h^kV_0}{V_0^k\left(V_0 - V_0^k\right)}\right), \qquad V_h = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_i^2Q_{ij}^{(0)}Q_{ij}^{(h)}, \quad h = 0,1,\ldots,m$$

$$V_h^k = \sum_{i=k+1}^{\infty}\sum_{j=k+1}^{\infty}P_i^2Q_{ij}^{(0)}Q_{ij}^{(h)}, \quad h = 0,1,\ldots,m$$

Proof. Consider the GARCH(1,1) model in the form of the stochastic recurrence Equation (3), $\sigma_t^2 = \alpha_0 + (\alpha_1\epsilon_{t-1}^2+\beta_1)\sigma_{t-1}^2$; then the necessary and sufficient conditions for stationarity are $\alpha_0 > 0$ and $E[\log(\beta_1+\alpha_1\epsilon^2)] < 0$, where the latter is implied (via Jensen's inequality) by

$$\sum_{i=1}^{p}\alpha_i + \sum_{j=1}^{q}\beta_j < 1.$$

If we assume that the sample vector $X_1,\ldots,X_n$ comes from a stationary model, then the initial value $X_0$ also has the stationary distribution. This means that the distribution of $X_t$ is stationary whatever the distribution of $X_0$, provided the latter is independent of $(\epsilon_t)_{t=1,2,\ldots}$ and the stationarity conditions hold. To show this, consider two sequences $X_t(X_0)_{t=0,1,2,\ldots}$ and $X_t(Z)_{t=0,1,2,\ldots}$ given by the same stochastic recurrence (2) but with initial conditions $X_0$ and $Z$, where both vectors are independent of the future values $(A_t,B_t)_{t=1,2,\ldots}$. Further assume that $X_0$ has the stationary distribution. By iteration of the stochastic recurrence Equation (2) we have

$$Y_t = B_t + \sum_{i=1}^{\infty}A_tA_{t-1}\cdots A_{t-i+1}B_{t-i}, \quad t = 1,2,\ldots$$

Thus for any initial values Z we have the following recursion

$$X_t(Z) = A_t\cdots A_1Z + \sum_{j=1}^{t}A_t\cdots A_{j+1}B_j, \quad t = 1,2,\ldots$$

For the GARCH(1,1) model (3) the product $A_n\cdots A_1 = \prod_{t=1}^{n}(\beta_1+\alpha_1\epsilon_t^2)$ governs the top Lyapunov exponent $\tilde{\gamma}$, and for any $\varepsilon > 0$,

$$E\left|X_t(X_0) - X_t(Z)\right|^{\varepsilon} \le E\left|A_t\cdots A_1(X_0-Z)\right|^{\varepsilon} = E\left|A_1(X_0-Z)\right|^{\varepsilon}\left(E\left|\beta_1+\alpha_1\epsilon^2\right|^{\varepsilon}\right)^{t-1} \le E|A_1|^{\varepsilon}\,E|X_0-Z|^{\varepsilon}\left(E\left|\beta_1+\alpha_1\epsilon^2\right|^{\varepsilon}\right)^{t-1} \tag{22}$$

If $E|\epsilon|^{2\varepsilon} < \infty$, $E|X_0|^{\varepsilon} < \infty$ and $E|Z|^{\varepsilon} < \infty$, then the right-hand side is finite. In addition, given the stationarity conditions previously stated, $E|\beta_1+\alpha_1\epsilon^2|^{\varepsilon} < 1$ for sufficiently small $\varepsilon$. Thus the left-hand side of (22) decays to zero as $t\to\infty$, and we conclude that $(X_t)_{t\in\mathbb{Z}}$ is stationary irrespective of the distribution of the initial value $X_0$.
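The contraction in (22) is easy to observe numerically. The following is a minimal sketch of the coupling argument, with illustrative parameters of our own choosing:

```python
import numpy as np

# A numerical check of the forgetting property (our own construction): two
# volatility recursions (3) driven by the SAME innovations but started from
# very different sigma_0^2 values; the gap contracts by the random factor
# A_t = alpha1*eps_t^2 + beta1 at each step, so it decays geometrically.
rng = np.random.default_rng(7)
alpha0, alpha1, beta1 = 0.1, 0.1, 0.8
eps2 = rng.standard_normal(200) ** 2
s2_a, s2_b = 0.01, 100.0
gaps = []
for e2 in eps2:
    s2_a = alpha0 + (alpha1 * e2 + beta1) * s2_a
    s2_b = alpha0 + (alpha1 * e2 + beta1) * s2_b
    gaps.append(abs(s2_a - s2_b))
print(gaps[0], gaps[99], gaps[199])  # the initial-condition effect vanishes
```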

Now consider the sample autocorrelation function as defined in (19); then the following identities hold:

$$\sum_{t=1}^{n-h}X_t^2X_{t+h}^2 = \sum_{t=1}^{k}X_t^2X_{t+h}^2 + \sum_{t=k+1}^{n-h}X_t^2X_{t+h}^2 \tag{23}$$

$$\sum_{t=1}^{n-h}X_t^4 = \sum_{t=1}^{k}X_t^4 + \sum_{t=k+1}^{n-h}X_t^4 \tag{24}$$

From (23) and (24) it can be asserted that there exist constants $c_{k,X^2}(h)$ and $c_{n-k,X^2}(h)$ such that the autocorrelation functions $\rho_{k,X^2}(h)$ and $\rho_{n-k,X^2}(h)$ can be expressed in terms of the autocorrelation function $\rho_{n,X^2}(h)$ as follows:

$$\rho_{k,X^2}(h) = c_{k,X^2}(h)\,\rho_{n,X^2}(h) \tag{25}$$

and

$$\rho_{n-k,X^2}(h) = c_{n-k,X^2}(h)\,\rho_{n,X^2}(h) \tag{26}$$

The change-point process (6) can be expressed in terms of (25) and (26) as

$$D_n^k(h) = \rho_{k,X^2}(h) - \rho_{n-k,X^2}(h) = \left(c_{k,X^2}(h) - c_{n-k,X^2}(h)\right)\rho_{n,X^2}(h)$$

The weak limits of the process $D_n^k(h)$ are characterized in terms of the limiting point processes for the sample autocovariance and autocorrelation functions through an application of the Continuous Mapping Theorem (Theorem 12). To complete the proof we independently prove the convergence of $c_{k,X^2}(h) - c_{n-k,X^2}(h)$ and of $\rho_{n,X^2}(h)$, and then apply Theorem 12.

Let $\delta > 0$ and $\mathbf{X}_t = \left(x_{t,X}^{(0)}, x_{t,\sigma}^{(0)}, \ldots, x_{t,X}^{(n)}, x_{t,\sigma}^{(n)}\right) \in \bar{\mathbb{R}}^{n+1}\setminus\{0\}$. In order to prove the results, we define several mappings

$$T_{h,\delta,X} : M \to \bar{\mathbb{R}}$$

as follows

$$T_{0,\delta,X}(N_n) = \sum_{t=1}^{n}\left(x_{t,X}^{(0)}\right)^2 I\left\{\left|x_{t,X}^{(0)}\right| > \delta\right\}$$

$$T_{1,\delta,X}(N_n) = \sum_{t=1}^{n}\left(x_{t,X}^{(1)}\right)^2 I\left\{\left|x_{t,X}^{(0)}\right| > \delta\right\}$$

$$T_{h,\delta,X}(N_n) = \sum_{t=1}^{n}x_{t,X}^{(0)}\,x_{t,X}^{(h-1)}\,I\left\{\left|x_{t,X}^{(0)}\right| > \delta\right\}, \quad h\in[2,n]$$

The set $\{\mathbf{X}_t\in\bar{\mathbb{R}}\setminus\{0\} : |x^{(h)}| > \delta\}$ is bounded for any $h\ge 0$, and thus the mappings are continuous with respect to the limit point process $N$. Consequently, by the Continuous Mapping Theorem 12, we have that

$$T(N_n) \xrightarrow{d} T(N)$$

where

$$T(N) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_i^2Q_{ij}^{(0)}Q_{ij}^{(h)}\,I\left\{\left|P_iQ_{ij}^{(0)}\right| > \delta\right\}$$

The convergence of $\rho_{n,X^2}(h)$ is examined for $\kappa\in(0,2)$ and $\kappa\in(2,4)$.

For the case $\kappa\in(0,2)$, the point process result of Theorem 3 holds and a direct application of Theorem 6 yields:

$$\left(\frac{n}{a_n^4}\gamma_{n,X^2}(h)\right)_{h=0,\ldots,m} \xrightarrow{d} (V_h)_{h=0,\ldots,m}$$

$$\left(\rho_{n,X^2}(h)\right)_{h=1,\ldots,m} \xrightarrow{d} \left(\frac{V_h}{V_0}\right)_{h=1,\ldots,m}$$

For $\kappa\in(2,4)$ we commence with the $\{\sigma_t^2\}$ sequence and establish the convergence of $\gamma_{n,\sigma^2}(0)$. We rewrite $\gamma_{n,\sigma^2}(0)$ using the recurrence structure of (3), so that $\epsilon_t^2 = \alpha_1^{-1}\left((\alpha_1\epsilon_t^2+\beta_1) - \beta_1\right) = \alpha_1^{-1}(A_{t+1} - \beta_1)$ and $\sigma_t^2 = \alpha_0 + A_t\sigma_{t-1}^2$, where $A_t\sigma_{t-1}^2 = \left(\alpha_1(\epsilon_t^2 - 1) + (\alpha_1+\beta_1)\right)\sigma_{t-1}^2$.

Now using this representation yields:

$$\begin{aligned}
\frac{n}{a_n^4}\left(\gamma_{n,\sigma^2}(0) - \gamma_{\sigma^2}(0)\right) &= a_n^{-4}\sum_{t=1}^{n}\Big\{\sigma_{t-1}^4(\epsilon_t^2-1)^2\alpha_1^2 + 2\alpha_1(\alpha_1+\beta_1)(\epsilon_t^2-1)\sigma_{t-1}^4 + (\alpha_1+\beta_1)^2\sigma_{t-1}^4\\
&\qquad\qquad - (\alpha_1+\beta_1)^2E(\sigma^4) + (\alpha_1+\beta_1)^2E(\sigma^4) - E(\sigma^4)\Big\}\\
&= \alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_{t-1}^4(\epsilon_t^2-1)^2\right] + (\alpha_1+\beta_1)^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_{t-1}^4 - E(\sigma^4)\right] + O_p(1)
\end{aligned}$$

$$\begin{aligned}
\left[1-(\alpha_1+\beta_1)^2\right]\frac{n}{a_n^4}\left(\gamma_{n,\sigma^2}(0) - \gamma_{\sigma^2}(0)\right) &= \alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_{t-1}^4(\epsilon_t^2-1)^2\right] + O_p(1)\\
&= \alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_{t+1}^2-1)^2\right]I\{\sigma_t > a_n\delta\}\\
&\quad + \alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_{t+1}^2-1)^2\right]I\{\sigma_t\le a_n\delta\} + O_p(1)\\
&= I + II + O_p(1)
\end{aligned} \tag{27}$$

Assuming that the condition $E(\epsilon_t^4) < \infty$ is satisfied, we first show that II converges in probability to zero by applying Karamata's theorem (see [19]) on the regular variation and tail behavior of the stationary distribution, which yields the asymptotic equivalence

$$\begin{aligned}
\mathrm{Var}(II) &= \mathrm{Var}\left[\alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_{t+1}^2-1)^2\right]I\{\sigma_t\le a_n\delta\}\right]\\
&\le a_n^{-8}\sum_{t=1}^{n}E\left(\left(\sigma_t^4\right)^2I\{\sigma_t\le a_n\delta\}\right)E\left((\epsilon_{t+1}^2-1)^2\right)\\
&\sim \mathrm{const}\cdot\delta^{8-\kappa} \quad\text{as } n\to\infty\\
&\to 0 \quad\text{as } \delta\to 0
\end{aligned} \tag{28}$$

Now examining I we have

$$\begin{aligned}
I &= \alpha_1^2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_{t+1}^2-1)^2\right]I\{\sigma_t > a_n\delta\} + O_p(1)\\
&= a_n^{-4}\sum_{t=1}^{n}\left[\alpha_1^2\sigma_t^4\left\{\left(A_{t+1} - (\alpha_1+\beta_1)\right)\alpha_1^{-1}\right\}^2\right]I\{\sigma_t > a_n\delta\} + O_p(1)\\
&= a_n^{-4}\sum_{t=1}^{n}\sigma_{t+1}^4I\{\sigma_t > a_n\delta\} - (\alpha_1+\beta_1)^2a_n^{-4}\sum_{t=1}^{n}\sigma_t^4I\{\sigma_t > a_n\delta\} + O_p(1)\\
&\xrightarrow{d} T_{1,\delta,\sigma}\left(N^{(2)}\right) - (\alpha_1+\beta_1)^2T_{0,\delta,\sigma}\left(N^{(2)}\right) \equiv S(\delta,\infty)
\end{aligned} \tag{29}$$

We utilize the argument given in Theorem 14, where $S(\delta,\infty) \xrightarrow{d} V_0^*$ as $\delta\to 0$. Therefore, we finally obtain that:

$$\frac{n}{a_n^4}\left(\gamma_{n,\sigma^2}(0) - \gamma_{\sigma^2}(0)\right) \xrightarrow{d} \frac{1}{1-(\alpha_1+\beta_1)^2}V_0^* \equiv V_0 \tag{30}$$

In the presence of a change-point $k$, as hypothesized in (4), it is evident that $E(A_t) \ne \alpha_1+\beta_1$ for all $t$; rather

$$E(A_t) = \begin{cases}\alpha_1+\beta_1 & \text{for } 1 < t\le k\\ E(A) & \text{for } k < t < n\end{cases} \tag{31}$$

Thus the convergences of $\gamma_{k,\sigma^2}(0)$ and $\gamma_{n-k,\sigma^2}(0)$ are respectively given by

$$\frac{k}{a_k^4}\left(\gamma_{k,\sigma^2}(0) - \gamma_{\sigma^2}(0)\right) \xrightarrow{d} \frac{1}{1-(\alpha_1+\beta_1)^2}V_0^{k*} \equiv V_0^k \tag{32}$$

$$\frac{n-k}{a_{n-k}^4}\left(\gamma_{n-k,\sigma^2}(0) - \gamma_{\sigma^2}(0)\right) \xrightarrow{d} \frac{1}{1-\left(E(A)\right)^2}V_0^{(n-k)*} \equiv V_0^{n-k} \tag{33}$$

Following (31), (32) and (33), it is concluded that $V_0^k \ne V_0^{n-k}$.

Convergence of $\gamma_{n,\sigma^2}(1)$ is determined in a similar manner, where

$$\begin{aligned}
\frac{n}{a_n^4}\left(\gamma_{n,\sigma^2}(1) - \gamma_{\sigma^2}(1)\right) &= a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^2\sigma_{t+1}^2 - E(\sigma^4)\right]\\
&= a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^2\sigma_{t+1}^2 - \sigma_t^4E(A)\right]I\{\sigma_t > a_n\delta\}\\
&\quad + a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^2\sigma_{t+1}^2 - \sigma_t^4E(A)\right]I\{\sigma_t\le a_n\delta\} + O_p(1)\\
&= T_{2,\delta,\sigma}\left(N_n^{(2)}\right) - E(A)\,T_{1,\delta,\sigma}\left(N_n^{(2)}\right) \xrightarrow{d} V_1
\end{aligned} \tag{34}$$

Consequently for arbitrary lags we have

$$\frac{n}{a_n^4}\left(\gamma_{n,\sigma^2}(h) - \gamma_{\sigma^2}(h)\right) \xrightarrow{d} V_h$$

In the presence of a change-point $k$, the convergences of $\gamma_{k,\sigma^2}(1)$ and $\gamma_{n-k,\sigma^2}(1)$ are respectively given by

$$\frac{k}{a_k^4}\left(\gamma_{k,\sigma^2}(1) - \gamma_{\sigma^2}(1)\right) \xrightarrow{d} V_1^k$$

$$\frac{n-k}{a_{n-k}^4}\left(\gamma_{n-k,\sigma^2}(1) - \gamma_{\sigma^2}(1)\right) \xrightarrow{d} V_1^{n-k}$$

Now we consider the $\{X_t^2\}$ sequence and establish the convergence of $\gamma_{n,X^2}(0)$ as follows:

$$\begin{aligned}
\frac{n}{a_n^4}\left(\gamma_{n,X^2}(0) - \gamma_{X^2}(0)\right) &= a_n^{-4}\sum_{t=1}^{n}\left[X_t^4 - E(X_0^4)\right]\\
&= 2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_t^2-1)^2\right]I\{\sigma_t > a_n\delta\} + 2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_t^2-1)^2\right]I\{\sigma_t\le a_n\delta\}\\
&= III + IV
\end{aligned} \tag{35}$$

Equation (35) follows directly from Equation (27). In the same way as in Equation (28), $\lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}(IV) = 0$.

Now examining III and following the results obtained in Equation (29) we have that III converges as follows

$$2a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4(\epsilon_t^2-1)^2\right]I\{\sigma_t > a_n\delta\} \xrightarrow{d} T_{1,\delta,\sigma}\left(N^{(2)}\right) - (\alpha_1+\beta_1)^2T_{0,\delta,\sigma}\left(N^{(2)}\right) \equiv S(\delta,\infty) \xrightarrow{d} V_0$$

Thus we have that

$$\frac{n}{a_n^4}\left(\gamma_{n,X^2}(0) - \gamma_{X^2}(0)\right) \xrightarrow{d} V_0$$

Similarly, it can be shown that the convergences of $\gamma_{k,X^2}(0)$ and $\gamma_{n-k,X^2}(0)$ are respectively given by

$$\frac{k}{a_k^4}\left(\gamma_{k,X^2}(0) - \gamma_{X^2}(0)\right) \xrightarrow{d} V_0^k$$

$$\frac{n-k}{a_{n-k}^4}\left(\gamma_{n-k,X^2}(0) - \gamma_{X^2}(0)\right) \xrightarrow{d} V_0^{n-k}$$

Next we consider the $\{X_t^2\}$ sequence and establish the convergence of $\gamma_{n,X^2}(1)$ as follows:

$$\begin{aligned}
\frac{n}{a_n^4}\left(\gamma_{n,X^2}(1) - \gamma_{X^2}(1)\right) &= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2X_{t+1}^2 - E(X_0^2X_1^2)\right]\\
&= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_{t+1}^2\left(\epsilon_{t+1}^2 - E(\epsilon)\right)\right] + a_n^{-4}E(\epsilon)\sum_{t=1}^{n}\left[X_t^2\sigma_{t+1}^2 - E(X_0^2\sigma_1^2)\right]\\
&= V + VI
\end{aligned}$$

Now examining VI we have

$$VI = a_n^{-4}E(\epsilon)\sum_{t=1}^{n}\left[X_t^2\sigma_{t+1}^2 - E(X_0^2\sigma_1^2)\right] = a_n^{-4}E(\epsilon)\sum_{t=1}^{n}\left\{\left[X_t^2\left(\sigma_{t+1}^2 - \sigma_t^2A_{t+1}\right)\right] - E\left[X_0^2\left(\sigma_1^2 - \sigma_0^2A_1\right)\right]\right\}$$

$$\begin{aligned}
\mathrm{Var}(VI) &= a_n^{-8}\sum_{t=1}^{n}\sum_{s=1}^{n}\mathrm{Cov}\left[X_t^2\left(\sigma_{t+1}^2 - \sigma_t^2A_{t+1}\right),\; X_s^2\left(\sigma_{s+1}^2 - \sigma_s^2A_{s+1}\right)\right]\\
&\le \mathrm{const}\cdot\frac{n}{a_n^8}\sum_{h=1}^{n}q^h \to 0 \quad\text{as } n\to\infty
\end{aligned}$$

where $q\in(0,1)$ is a constant. Since $(X_t,\sigma_t)$ is strongly mixing with geometric rate, there exist a $\delta > 0$ and a constant $K$ such that $E\left|X_0^2(\sigma_1^2 - \sigma_0^2A_1)\right|^{2+\delta} < \infty$ and $\mathrm{Cov}\left[X_t^2(\sigma_{t+1}^2-\sigma_t^2A_{t+1}),\; X_s^2(\sigma_{s+1}^2-\sigma_s^2A_{s+1})\right] \le Kq^{|t-s|}$.

Now examining V we have

$$\begin{aligned}
V &= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_t^2\left(\epsilon_t^2 - E(\epsilon)\right)\right]\\
&= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_t^2A_{t+1} - X_t^2\sigma_t^2E(A) + X_t^2\sigma_t^2E(A) - E\left(X_0^2\sigma_0^2A_1\right)\right]\\
&= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_t^2\left(A_{t+1} - E(A)\right)\right]I\{\sigma_t > a_n\delta\} + a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_t^2\left(A_{t+1} - E(A)\right)\right]I\{\sigma_t\le a_n\delta\}\\
&\quad + E(A)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4\left(\epsilon_t^2 - E(\epsilon)\right)\right]I\{\sigma_t > a_n\delta\} + E(A)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4\left(\epsilon_t^2 - E(\epsilon)\right)\right]I\{\sigma_t\le a_n\delta\}\\
&\quad + E(A)E(\epsilon)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4 - E(\sigma^4)\right]I\{\sigma_t > a_n\delta\} + E(A)E(\epsilon)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4 - E(\sigma^4)\right]I\{\sigma_t\le a_n\delta\}\\
&= VII + VIII + IX + X + XI + XII
\end{aligned} \tag{36}$$

By applying Karamata's theorem [19] to (36),

$$\lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}(VIII) = 0, \qquad \lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}(IX) = 0, \qquad \lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}(X) = 0, \qquad \lim_{\delta\to 0}\limsup_{n\to\infty}\mathrm{Var}(XII) = 0$$

Examining VII we have

$$\begin{aligned}
VII &= a_n^{-4}\sum_{t=1}^{n}\left[X_t^2\sigma_t^2\left(A_{t+1} - E(A)\right)\right]I\{\sigma_t > a_n\delta\}\\
&= a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^2\sigma_{t+1}^2I\{\sigma_t > a_n\delta\}\right] - E(A)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4I\{\sigma_t > a_n\delta\}\right]\\
&\xrightarrow{d} T_{2,\delta,\sigma}\left(N^{(2)}\right) - E(A)\,T_{1,\delta,\sigma}\left(N^{(2)}\right) \xrightarrow{d} V_1
\end{aligned}$$

Since $E(\epsilon) = 0$, for XI we have

$$XI = E(A)E(\epsilon)a_n^{-4}\sum_{t=1}^{n}\left[\sigma_t^4 - E(\sigma^4)\right]I\{\sigma_t > a_n\delta\} = 0$$

Thus we have that

$$\frac{n}{a_n^4}\left(\gamma_{n,X^2}(1) - \gamma_{X^2}(1)\right) \xrightarrow{d} V_1$$

By extending to arbitrary lags $h = 0,\ldots,m$, the convergence of $\gamma_{n,X^2}(h)$ is given by

$$\frac{n}{a_n^4}\left(\gamma_{n,X^2}(h) - \gamma_{X^2}(h)\right) \xrightarrow{d} V_h$$

Consequently, the convergence of $\rho_{n,X^2}(h)$ is given by

$$\frac{n}{a_n^4}\left(\rho_{n,X^2}(h) - \rho_{X^2}(h)\right) \xrightarrow{d} \gamma_{X^2}^{-1}(0)\left(V_h - \rho_{X^2}(h)V_0\right)$$

We have now examined the limiting behavior of $\rho_{n,X^2}(h)$ in two cases. In the first case, when $\kappa\in(0,2)$, the variance of $X_t$ is infinite and thus $\rho_{n,X^2}(h)$ has a random limit without any normalization. When $\kappa\in(2,4)$, the process has a finite variance but an infinite fourth moment, and $\frac{n}{a_n^4}\left(\rho_{n,X^2}(h) - \rho_{X^2}(h)\right)$ converges to a $\kappa/2$-stable distribution. By Theorem 8, convergence of $\rho_{n,X^2}(h)$ implies that the sequence is bounded, with $|\rho_{n,X^2}(h)| \le 1$.

We now examine the convergence of $c_{k,X^2}(h) - c_{n-k,X^2}(h)$. Considering $D_n^k(h)$, we can express $c_{k,X^2}(h) - c_{n-k,X^2}(h)$ as follows:

$$c_{k,X^2}(h) - c_{n-k,X^2}(h) = \frac{\rho_{k,X^2}(h) - \rho_{n-k,X^2}(h)}{\rho_{n,X^2}(h)}$$

By the Bolzano-Weierstrass theorem (Theorem 9), a bounded sequence always has a convergent subsequence. This is further confirmed through the invariance property of subsequences (Theorem 10), which states that if $\rho_{n,X^2}(h)$ converges, then every subsequence, say $\rho_{k,X^2}(h)$ and $\rho_{n-k,X^2}(h)$, converges to the same limit. By the linearity rule for sequences (Theorem 11), $\rho_{k,X^2}(h) - \rho_{n-k,X^2}(h)$ converges. This further implies that $\rho_{k,X^2}(h)$ and $\rho_{n-k,X^2}(h)$ are bounded, since every convergent sequence is bounded. The subsequences $\rho_{k,X^2}(h)$ and $\rho_{n-k,X^2}(h)$ satisfy $|\rho_{k,X^2}(h)| \le 1$ and $|\rho_{n-k,X^2}(h)| \le 1$; thus their absolute difference is also bounded, $|\rho_{k,X^2}(h) - \rho_{n-k,X^2}(h)| \le 2$. Further, assuming we consider only significant sample autocorrelation coefficients, with $|\rho_{n,X^2}(h)| \ge 0.05$, the quotient $c_{k,X^2}(h) - c_{n-k,X^2}(h)$ is also bounded. Applying the quotient rule for sequences (Theorem 11), $c_{k,X^2}(h) - c_{n-k,X^2}(h)$ is also convergent.

Consider the proposed change-point process $D_n^k(h)$ as defined in (6); then we can derive the limit of $C_n$ as follows:

$$\begin{aligned}
\left(D_n^k(h)\right)_{h=1,\ldots,m} &\propto \rho_{k,X^2}(h) - \rho_{n-k,X^2}(h) = \frac{\gamma_{k,X^2}(h)}{\gamma_{k,X^2}(0)} - \frac{\gamma_{n-k,X^2}(h)}{\gamma_{n-k,X^2}(0)}\\
&= \frac{\sum_{t=1}^{k}X_t^2X_{t+h}^2}{\sum_{t=1}^{k}X_t^4} - \frac{\sum_{t=k+1}^{n}X_t^2X_{t+h}^2}{\sum_{t=k+1}^{n}X_t^4}\\
&= \frac{\sum_{t=k+1}^{n}X_t^4\sum_{t=1}^{n}X_t^2X_{t+h}^2 - \sum_{t=k+1}^{n}X_t^2X_{t+h}^2\sum_{t=1}^{n}X_t^4}{\sum_{t=k+1}^{n}X_t^4\left(\sum_{t=1}^{n}X_t^4 - \sum_{t=k+1}^{n}X_t^4\right)}
\end{aligned} \tag{37}$$

Thus, applying Theorem 5 to (37), we have

$$\left(D_n^k(h)\right)_{h=1,\ldots,m} \xrightarrow{d} \frac{V_0^kV_h - V_h^kV_0}{V_0^k\left(V_0 - V_0^k\right)} = \frac{V_0}{V_h}\left(\frac{V_0^kV_h - V_h^kV_0}{V_0^k\left(V_0 - V_0^k\right)}\right)\frac{V_h}{V_0} \tag{38}$$

From (38) above, the sequence $C_n$ converges in distribution to $C_h$ as follows

$$C_n \xrightarrow{d} \frac{V_0}{V_h}\left(\frac{V_0^kV_h - V_h^kV_0}{V_0^k\left(V_0 - V_0^k\right)}\right) = C_h$$

By application of the Continuous Mapping Theorem 12, we obtain the limiting behavior of the proposed change-point process $D_n^k(h)$ for the two cases $\kappa\in(0,2)$ and $\kappa\in(2,4)$ as follows.

For $\kappa\in(0,2)$, by application of Theorem 6, part 1):

$$\left(D_n^k(h)\right)_{h=1,\ldots,m} = \left(C_n(h)\,\rho_{n,X^2}(h)\right)_{h=1,\ldots,m} \xrightarrow{d} C_h\left(\frac{V_h}{V_0}\right)_{h=1,\ldots,m}$$

For $\kappa\in(2,4)$, by application of Theorem 6, part 2):

$$\left(\frac{n}{a_n^4}D_n^k(h)\right)_{h=1,\ldots,m} = \frac{n}{a_n^4}\left(C_n\left(\rho_{n,X^2}(h) - \rho_{X^2}(h)\right)\right)_{h=1,\ldots,m} \xrightarrow{d} \gamma_{X^2}^{-1}(0)\left(C_hV_h - \rho_{X^2}(h)C_0V_0\right)_{h=1,\ldots,m}$$

which completes the proof.

5. Conclusion

The asymptotic behavior of the change-point process $D_n^k$ is established by examining the asymptotic behavior of the sample autocovariance and sample autocorrelation functions. The limits of the suitably normalized sample autocovariance and sample autocorrelation functions are expressed in terms of the limiting point processes. The limit distributions are differences of ratios of infinite variance stable vectors, or functions of such vectors. As a result, determination of quantiles from the limit distributions is difficult. The limits are also generally random as a result of the infinite variance. Future work will aim at identifying the limit distributions so as to make the results directly applicable for hypothesis testing purposes.

Acknowledgements

The authors thank the Pan-African University Institute of Basic Sciences, Technology and Innovation (PAUSTI) for funding this research.

Cite this paper

Irungu, I.W., Mwita, P.N. and Waititu, A.G. (2018) Limit Theory of Model Order Change-Point Estimator for GARCH Models. Journal of Mathematical Finance, 8, 426-445. https://doi.org/10.4236/jmf.2018.82027

References

  1. Chinzara, Z. (2010) Macroeconomic Uncertainty and Emerging Market Stock Market Volatility: The Case for South Africa. Working Paper 187, 1-19.

  2. Manera, M., Nicolini, M. and Vignati, I. (2012) Financial Speculation in Energy and Agriculture Futures Markets: A Multivariate GARCH Approach. International Association for Energy Economics, 3.

  3. Mikosch, T. and Starica, C. (2004) Nonstationarities in Financial Time Series, the Long-Range Dependence and the IGARCH Effects. Review of Economics and Statistics, 86, 378-390. https://doi.org/10.1162/003465304323023886

  4. Yau, C.Y. and Zhao, Z. (2015) Inference for Multiple Change Points in Time Series via Likelihood Ratio Scan Statistics. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78, 895-916. https://doi.org/10.1111/rssb.12139

  5. Lee, T., Kim, M. and Baek, C. (2014) Tests for Volatility Shifts in GARCH against Long-Range Dependence. Journal of Time Series Analysis, 36, 127-153. https://doi.org/10.1111/jtsa.12098

  6. Na, O., Lee, J. and Lee, S. (2011) Change Point Detection in Copula ARMA-GARCH Models. Journal of Time Series Analysis, 33, 554-569. https://doi.org/10.1111/j.1467-9892.2011.00763.x

  7. Alemohammad, N., Rezakhah, S. and Alizadeh, S.H. (2016) Markov Switching Component GARCH Model: Stability and Forecasting. Communications in Statistics - Theory and Methods, 45, 4332-4348. https://doi.org/10.1080/03610926.2013.841934

  8. Irungu, I., Mwita, P. and Waititu, A. (2018) Consistency of the Model Order Change-Point Estimator for GARCH Models. Journal of Mathematical Finance, 8, 266-282. https://doi.org/10.4236/jmf.2018.82018

  9. Bartkiewicz, K., Jakubowski, A., Mikosch, T. and Wintenberger, O. (2011) Stable Limits for Sums of Dependent Infinite Variance Random Variables. Probability Theory and Related Fields, 150, 337-372. https://doi.org/10.1007/s00440-010-0276-9

  10. Davis, R.A. and Resnick, S.I. (1986) Limit Theory for the Sample Covariance and Correlation Functions of Moving Averages. Annals of Statistics, 14, 533-558. https://doi.org/10.1214/aos/1176349937

  11. Davis, R.A. and Resnick, S.I. (1996) Limit Theory for Bilinear Processes with Heavy-Tailed Noise. Annals of Applied Probability, 6, 1191-1210. https://doi.org/10.1214/aoap/1035463328

  12. Mikosch, T. and Starica, C. (2000) Limit Theory for the Sample Autocorrelations and Extremes of a GARCH(1,1) Process. Annals of Statistics, 28, 1427-1451.

  13. Basrak, B., Krizmanic, D. and Segers, J. (2012) A Functional Limit Theorem for Dependent Sequences with Infinite Variance Stable Limits. The Annals of Probability, 40, 2008-2033. https://doi.org/10.1214/11-AOP669

  14. Bougerol, P. and Picard, N. (1992) Strict Stationarity of Generalized Autoregressive Processes. Annals of Probability, 20, 1714-1730. https://doi.org/10.1214/aop/1176989526

  15. Krengel, U. (1985) Ergodic Theorems. De Gruyter, Berlin.

  16. Kallenberg, O. (1983) Random Measures. 3rd Edition, Akademie-Verlag, Berlin.

  17. Kesten, H. (1973) Random Difference Equations and Renewal Theory for Products of Random Matrices. Acta Mathematica, 131, 207-248. https://doi.org/10.1007/BF02392040

  18. Breiman, L. (1965) On Some Limit Theorems Similar to the Arc-Sine Law. Theory of Probability and Its Applications, 10, 323-331. https://doi.org/10.1137/1110037

  19. Bingham, N.H., Goldie, C.M. and Teugels, J.L. (1987) Regular Variation. Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge.

Appendix

Theorem 7. (Holder’s Inequality)

Let $I$ be a finite or countable index set. Given $1\le p\le\infty$, if $X = (X_k)_{k\in I}\in L^p(I)$ and $Y = (Y_k)_{k\in I}\in L^{p'}(I)$, where $\frac{1}{p}+\frac{1}{p'} = 1$, then $XY = (X_kY_k)_{k\in I}\in L^1(I)$ and

$$\|XY\|_1 \le \left\|(X_k)_{k\in I}\right\|_p\left\|(Y_k)_{k\in I}\right\|_{p'} = \left(\sum_{k\in I}|X_k|^p\right)^{1/p}\left(\sum_{k\in I}|Y_k|^{p'}\right)^{1/p'} < \infty$$

Theorem 8. (Convergent sequences are bounded)

Let $\{A_n\}_{n\in\mathbb{N}}$ be a convergent sequence. Then the sequence is bounded and the limit is unique.

Theorem 9. (Bolzano-Weierstrass)

Let $\{A_n\}_{n\in\mathbb{N}}$ be a sequence of real numbers that is bounded. Then there exists a subsequence $\{A_{n_k}\}_{k\in\mathbb{N}}$ that converges.

Theorem 10. (Invariance property of subsequences)

If $\{A_n\}_{n\in\mathbb{N}}$ is a convergent sequence, then every subsequence of that sequence converges to the same limit.

Theorem 11. (Algebra on Sequences)

If the sequence $\{A_n\}_{n\in\mathbb{N}}$ converges to $L$ and $\{B_n\}_{n\in\mathbb{N}}$ converges to $M$, then the following hold:

1) $\lim_{n\to\infty}(A_n + B_n) = \lim_{n\to\infty}A_n + \lim_{n\to\infty}B_n = L + M$

2) $\lim_{n\to\infty}(A_nB_n) = \lim_{n\to\infty}A_n\cdot\lim_{n\to\infty}B_n = LM$

3) $\lim_{n\to\infty}\frac{A_n}{B_n} = \frac{\lim_{n\to\infty}A_n}{\lim_{n\to\infty}B_n} = \frac{L}{M}$ for $B_n\ne 0$ for all $n$ and $M\ne 0$

Theorem 12. (Continuous Mapping)

Let a function $g : \mathbb{R}^k\to\mathbb{R}^m$ be continuous at every point of a set $C$ such that $P(X\in C) = 1$. Then if $X_n\to X$, then $g(X_n)\to g(X)$.

Theorem 13. (Algebra on Series)

Let $\sum A_n$ and $\sum B_n$ be two absolutely convergent series. Then:

1) the sum of the two series is again absolutely convergent, and its limit is the sum of the limits of the two series;

2) the difference of the two series is again absolutely convergent, and its limit is the difference of the limits of the two series;

3) the product of the two series is again absolutely convergent, and its limit is the product of the limits of the two series.

Theorem 14. Let $\{X_t\}_{t\in\mathbb{Z}}$ be a strictly stationary sequence. Define the partial sums of the sequence by $S_n = \sum_{t=1}^{n}X_t$.

1) if $\kappa\in(0,2)$, then

$$a_n^{-1}S_n \xrightarrow{d} S$$

where $S = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_iQ_{ij}$ has a stable distribution;

2) if $\kappa\in(2,4)$ and for all $\varepsilon > 0$

$$\lim_{\delta\to 0}\limsup_{n\to\infty}P\left[\left|S_n(0,\delta] - ES_n(0,\delta]\right| > \varepsilon\right] = 0,$$

then

$$a_n^{-1}\left(S_n - ES_n(0,1]\right) \xrightarrow{d} S$$

where $S$ is the distributional limit of

$$\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_iQ_{ij}\,I\{|P_iQ_{ij}| > \delta\} - \int_{\delta < |x|\le 1}x\,\mu(dx)$$

as $\delta\to 0$, where $\mu$ is the limiting measure of Section 3; $S$ has a stable distribution.

For every $\delta > 0$, the mapping from $M$ (defined in Section 3) into $\mathbb{R}$ is given by

$$T_{\delta} : \sum_{t=1}^{\infty}\varepsilon_{x_t} \mapsto \sum_{t=1}^{\infty}x_t\,I\{|x_t| > \delta\}$$

and is almost surely continuous with respect to the point process $N$. Thus, by the continuous mapping theorem,

$$S_n(\delta,\infty) = T_{\delta}(N_n) \xrightarrow{d} T_{\delta}(N) = S(\delta,\infty)$$

As $\delta\to 0$, $S(\delta,\infty)\to S(0,\infty) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}P_iQ_{ij}$.