Applied Mathematics
Vol.08 No.12(2017), Article ID:81529,21 pages
10.4236/am.2017.812136

Some Applications of Higher Moments of the Linear Gaussian White Noise Process

I. S. Iwueze1, C. O. Arimie2, H. C. Iwu1, E. Onyemachi1

1Department of Statistics, Federal University of Technology, Owerri, Nigeria

2Department of Mathematics and Statistics, University of Port Harcourt, Port Harcourt, Nigeria

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: August 28, 2017; Accepted: December 26, 2017; Published: December 29, 2017

ABSTRACT

The linear Gaussian white noise process is an independent and identically distributed (iid) sequence with zero mean and finite variance, each term having distribution $N(0, \sigma^2)$. Hence, if $X_1, X_2, \ldots, X_n$ is a realization of such an iid sequence, this paper studies in detail the covariance structure of $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, \ldots$. By this study, it is shown that: 1) all powers of a linear Gaussian white noise process are iid but not normally distributed, and 2) the higher moments (variance and kurtosis) of $X_t^d$, $d = 2, 3, \ldots$ can be used to distinguish the linear Gaussian white noise process from other processes with a similar covariance structure.

Keywords:

Stochastic Process, Linear Gaussian White Noise Process, Covariance Structure, Stationarity, Test for White Noise Process, Test for Normality

1. Introduction

The objective of estimation procedures is to produce residuals (the estimated noise sequence) with no apparent deviations from stationarity, and in particular with no dependence among these residuals. If there is no dependence among the residuals, then we can regard them as observations of independent random variables; there is no further modeling to be done except to estimate their mean and variance. If there is significant dependence among the residuals, then we need to look for a model for the noise that accounts for the dependence [1].

In this paper, we examine the covariance structure of powers of the noise sequence when the noise sequence is assumed to be independent and identically distributed normal (Gaussian) random variates with mean zero and finite variance, $\sigma^2 > 0$. Some simple tests for checking the hypothesis that the residuals and their powers are observed values of independent and identically distributed random variables are also considered. Also considered are tests for normality of the residuals and their powers.

The stochastic process $\{X_t, t \in T\}$ is said to be strictly stationary if its distribution function is time invariant. That is,

$F(x_{t_1}, x_{t_2}, \ldots, x_{t_m}) = F(x_{t_1+k}, x_{t_2+k}, \ldots, x_{t_m+k})$ (1.1)

where

$F(x_{t_1}, x_{t_2}, \ldots, x_{t_m}) = P(X_{t_1} \leq x_{t_1}, X_{t_2} \leq x_{t_2}, \ldots, X_{t_m} \leq x_{t_m})$ (1.2)

That is, the probability measure for the sequence $\{X_t\}$ is the same as that for $\{X_{t+k}\}$ for all $k$. If a series satisfies the next three equations, it is said to be weakly or covariance stationary.

$\left.\begin{array}{l} 1.\ E(X_t) = \mu, \ t = 1, 2, \ldots \\ 2.\ E[(X_t - \mu)(X_t - \mu)] = \sigma^2 < \infty \\ 3.\ E[(X_{t_1} - \mu)(X_{t_2} - \mu)] = R(t_2 - t_1) \end{array}\right\}$ (1.3)

If the process is covariance stationary, all the variances are the same and all the covariances depend on the difference between $t_1$ and $t_2$. The moments

$E[(X_t - \mu)(X_{t+k} - \mu)] = R(k), \quad k = 0, 1, 2, \ldots$ (1.4)

are known as the autocovariance function. The autocorrelations, which do not depend on the units of measurement of $X_t$, are given by

$\rho(k) = \dfrac{R(k)}{R(0)}, \quad k = 0, 1, 2, \ldots$ (1.5)

A stochastic process $\{X_t, t \in Z\}$, where $Z = \{\ldots, -1, 0, 1, \ldots\}$, with finite mean and variance is called a white noise if all the autocovariances (1.4) are zero except at lag zero [$R(k) = 0$ for $k \neq 0$]. In many applications, $\{X_t, t \in Z\}$ is assumed to be normally distributed with mean zero and variance $\sigma^2 < \infty$, and the series is called a linear Gaussian white noise process if:

$\left.\begin{array}{l} E(X_t) = 0 \\ \mathrm{var}(X_t) = \sigma^2 \\ R(k) = \begin{cases} \sigma^2, & k = 0 \\ 0, & \text{otherwise} \end{cases} \\ \rho(k) = \begin{cases} 1, & k = 0 \\ 0, & \text{otherwise} \end{cases} \end{array}\right\}$ (1.6)

and

$\phi_{kk} = \mathrm{corr}(X_t, X_{t+k} \mid X_{t+1}, X_{t+2}, \ldots, X_{t+k-1}) = \begin{cases} 1, & k = 0 \\ 0, & \text{otherwise} \end{cases}$ (1.7)

where $\phi_{kk}$ is known as the partial autocorrelation function. For large $n$, the sample autocorrelations

$\hat{\rho}_X(k) = \dfrac{\sum_{t=1}^{n-k}(X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{n}(X_t - \bar{X})^2}$ (1.8)

of an iid sequence $X_1, X_2, \ldots, X_n$ with finite variance are approximately distributed as $N(0, \frac{1}{n})$ [1] [2] [3]. We can use this to carry out significance tests for the autocorrelation coefficients by constructing a confidence interval: if $X_1, X_2, \ldots, X_n$ is a realization of such an iid sequence, about $100(1-\alpha)\%$ of the sample autocorrelations should fall between the bounds

$\pm \dfrac{Z_{1-\frac{\alpha}{2}}}{\sqrt{n}}$ (1.9)

where $Z_{1-\frac{\alpha}{2}}$ is the $1-\frac{\alpha}{2}$ quantile of the standard normal distribution. The null and alternative hypotheses are:

$H_0: \rho_X(k) = 0 \ \forall\, k \neq 0 \quad \text{and} \quad H_1: \rho_X(k) \neq 0 \text{ for some } k \neq 0$ (1.10)

where $\rho_X(k)$ is the autocorrelation at lag $k$ computed from $X_1, X_2, \ldots, X_n$.
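These bounds are easy to check numerically. The following is a minimal Python sketch (the helper name is ours, not from the paper, and it assumes simulated $N(0,1)$ data): it computes the sample autocorrelations (1.8) and flags lags falling outside the 95% bounds (1.9).

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations rho_hat(1), ..., rho_hat(max_lag), Equation (1.8)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[:n - k] * xc[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)       # simulated N(0, 1) white noise
rho = sample_acf(x, max_lag=20)
bound = 1.96 / np.sqrt(len(x))           # Equation (1.9) with alpha = 0.05
print(f"95% bounds: +/-{bound:.3f}")
print("lags outside the bounds:", np.nonzero(np.abs(rho) > bound)[0] + 1)
```

For an iid series, about one lag in twenty is expected to fall outside the bounds by chance alone.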

We can also test the joint hypothesis that all $m$ of the $\rho_X(k)$ correlation coefficients are simultaneously equal to zero. The null and alternative hypotheses are:

$H_0: \rho_X(1) = \rho_X(2) = \cdots = \rho_X(m) = 0 \quad \text{and} \quad H_1: \rho_X(i) \neq 0 \text{ for some } i \in \{1, 2, \ldots, m\}$ (1.11)

The most popular test for (1.11) is the portmanteau test of [4], which takes the form

$Q_{BP}(m) = n \sum_{k=1}^{m} [\hat{\rho}_X(k)]^2$ (1.12)

where $m$ is the so-called lag truncation number [5], (typically) assumed to be fixed [6]. Under the assumption that $X_1, X_2, \ldots, X_n$ is an iid sequence, $Q_{BP}(m)$ is asymptotically a chi-squared random variable with $m$ degrees of freedom. [7] modified the $Q_{BP}(m)$ statistic to increase the power of the test in finite samples, obtaining

$Q_{LB}(m) = n(n+2) \sum_{k=1}^{m} \dfrac{[\hat{\rho}_X(k)]^2}{n-k}$ (1.13)

Several values of $m$ are often used, and simulation studies suggest that the choice $m \approx \ln(n)$ provides better power performance [8].

Another portmanteau test, formulated by [9], can be used as a further test of the iid hypothesis, since if the data are iid, then the squared data are also iid. It is based on the same statistic as the Ljung-Box test:

$Q_{ML}(m) = n(n+2) \sum_{k=1}^{m} \dfrac{[\hat{\rho}_{X^2}(k)]^2}{n-k}$ (1.14)

where the sample autocorrelations of the data are replaced by the sample autocorrelations of the squared data, $\hat{\rho}_{X^2}(k)$.
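The three portmanteau statistics are simple to compute. The sketch below is illustrative only (the helper names are ours, and the paper's own computations were done in Minitab): it evaluates $Q_{BP}(m)$, $Q_{LB}(m)$ and the McLeod-Li statistic, the latter being (1.13) applied to the squared data.

```python
import numpy as np
from scipy.stats import chi2

def acf(x, max_lag):
    """Sample autocorrelations rho_hat(1..max_lag) as in Equation (1.8)."""
    x = np.asarray(x, dtype=float)
    n, xc = len(x), x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[:n - k] * xc[k:]) / denom
                     for k in range(1, max_lag + 1)])

def box_pierce(x, m):
    """Q_BP(m) of Equation (1.12)."""
    return len(x) * np.sum(acf(x, m) ** 2)

def ljung_box(x, m):
    """Q_LB(m) of Equation (1.13)."""
    n = len(x)
    k = np.arange(1, m + 1)
    return n * (n + 2) * np.sum(acf(x, m) ** 2 / (n - k))

rng = np.random.default_rng(1)
x = rng.normal(size=400)
m = int(np.log(len(x)))                  # m ~ ln(n), the rule of thumb in [8]
print("Q_BP :", box_pierce(x, m))
print("Q_LB :", ljung_box(x, m))
print("Q_ML :", ljung_box(x ** 2, m))    # McLeod-Li statistic, Equation (1.14)
print("chi2 5% critical value:", chi2.ppf(0.95, df=m))
```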

According to [6], the methodology for testing for white noise can be roughly divided into two categories: time domain tests and frequency domain tests. Other time domain tests include the turning point test, the difference-sign test and the rank test [1]. Another time domain test is to fit an autoregressive model to the data, choosing the order that minimizes the AICC statistic; a selected order equal to zero suggests that the data are white noise [1].

Let

$f_x(\omega) = \dfrac{1}{2\pi} \sum_{k=-\infty}^{\infty} \rho_x(k) e^{-ik\omega}, \quad \omega \in [-\pi, \pi]$ (1.15)

be the normalized spectral density of $\{X_t, t \in Z\}$. The normalized spectral density function for the linear Gaussian white noise process is

$f_x(\omega) = \dfrac{1}{2\pi}, \quad \omega \in [-\pi, \pi]$ (1.16)

The equivalent frequency domain expressions to H0 and H1 are

$H_0: f_x(\omega) = \dfrac{1}{2\pi} \ \forall\, \omega \in [-\pi, \pi] \quad \text{and} \quad H_1: f_x(\omega) \neq \dfrac{1}{2\pi} \text{ for some } \omega \in [-\pi, \pi]$ (1.17)

In the frequency domain, [10] proposed test statistics based on the famous $U_p$ and $T_p$ processes [6], and a rigorous theoretical treatment of their limiting distributions was provided by [11]. Some contributions to the frequency domain tests can be found in [12] and [13], among others. This study will concentrate on the time domain approach only.

A stochastic process $\{X_t, t \in Z\}$ may have the covariance structure (1.6) even when it is not the linear Gaussian white noise process. Examples are found in the study of bilinear time series processes [14] [15]. Researchers are often confronted with the choice of the linear Gaussian white noise process for use in constructing time series models or generating other stationary processes in simulation experiments. The question now is: how do we distinguish the linear Gaussian white noise process from other processes with a similar covariance structure? Additional properties of the linear Gaussian white noise process are needed for proper identification and characterization of the process. Therefore, the ultimate aim of this study is to use higher moments to assess the acceptability of the linear Gaussian white noise process. The first moment (mean) and second or higher moments (variance, covariances, skewness and kurtosis) of powers of the linear Gaussian white noise process are established in Section 2. The methodology is discussed in Section 3, the results are contained in Section 4, and Section 5 is the conclusion.

2. Mean, Variance and Covariances of Powers of the Linear Gaussian White Noise Process

2.1. Mean of Powers of the Linear Gaussian White Noise Process

Let $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, where $\{X_t, t \in Z\}$ is the linear Gaussian white noise process. The expected value of $Y_t$, $t \in Z$ [$E(Y_t) = E(X_t^d)$] is needed for the effective determination of the variance and covariance structure of $Y_t$. Lemma 2.1 gives the required result.

Lemma 2.1: Let $\{X_t, t \in Z\}$ be a linear Gaussian white noise process with mean zero and variance $\sigma^2 > 0$ ($X_t$ iid $N(0, \sigma^2)$); then

$E(X_t^d) = \begin{cases} \sigma^{2m}(2m-1)!!, & d = 2m, \ m = 1, 2, \ldots \\ 0, & d = 2m+1, \ m = 0, 1, 2, \ldots \end{cases}$ (2.1)

where [16]

$(2m-1)!! = 1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1) = \prod_{k=1}^{m}(2k-1)$ (2.2)

Proof:

Let $X_t = Z \sim N(0, \sigma^2)$; then

$f(z) = \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{z^2}{2\sigma^2}}, \quad -\infty < z < \infty, \ \sigma^2 > 0$ (2.3)

Note that

$E(Z^d) = \int_{-\infty}^{\infty} z^d f(z)\, dz$ (2.4)

$= \int_{-\infty}^{\infty} z^d \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{z^2}{2\sigma^2}}\, dz$ (2.5)

1) Case I: $d = 2m$ (even)

Equation (2.5) reduces to

$E(Z^d) = 2 \int_0^{\infty} z^d \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{z^2}{2\sigma^2}}\, dz$ (2.6)

Let $y = \dfrac{z^2}{2\sigma^2}$, so that $z^2 = 2\sigma^2 y$ and $z = \sigma\sqrt{2}\, y^{1/2}$. Then

$\dfrac{dz}{dy} = \dfrac{\sigma\sqrt{2}}{2}\, y^{-1/2} = \dfrac{\sigma}{\sqrt{2}}\, y^{-1/2}$, i.e.,

$dz = \dfrac{\sigma y^{-1/2}}{\sqrt{2}}\, dy$ (2.7)

$E(Z^d) = \dfrac{2}{\sigma\sqrt{2\pi}} \int_0^{\infty} \left[\sigma\sqrt{2}\, y^{1/2}\right]^{2m} e^{-y}\, \dfrac{\sigma y^{-1/2}}{\sqrt{2}}\, dy = \dfrac{2^m \sigma^{2m}}{\sqrt{\pi}} \int_0^{\infty} y^{m-\frac{1}{2}} e^{-y}\, dy$ (2.8)

The integral in Equation (2.8) is a gamma function [$\int_0^{\infty} w^{t-1} e^{-w}\, dw = \Gamma(t)$] [17], and by definition

$E(Z^d) = \dfrac{2^m \sigma^{2m}}{\sqrt{\pi}}\, \Gamma\!\left(m + \dfrac{1}{2}\right)$ (2.9)

$\Gamma\!\left(m + \dfrac{1}{2}\right) = \dfrac{[1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1)]\, \Gamma\!\left(\frac{1}{2}\right)}{2^m} = \dfrac{[1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1)]\sqrt{\pi}}{2^m} = \dfrac{\sqrt{\pi}\,(2m-1)!!}{2^m}$ (2.10)

Thus

$E(Z^d) = \dfrac{2^m \sigma^{2m}}{\sqrt{\pi}} \cdot \dfrac{\sqrt{\pi}\,(2m-1)!!}{2^m} = \sigma^{2m}(2m-1)!!$ (2.11)

2) Case II: $d = 2m+1$ (odd)

$E(Z^d) = \dfrac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = \dfrac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{0} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz + \dfrac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = -\dfrac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz + \dfrac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = 0$ (2.12)

Thus

$E(Z^d) = E(X_t^d) = \begin{cases} \sigma^{2m}(2m-1)!!, & d = 2m, \ m = 1, 2, \ldots \\ 0, & d = 2m+1, \ m = 0, 1, 2, \ldots \end{cases}$
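Lemma 2.1 is easy to verify by simulation. Below is a minimal Monte Carlo sketch (illustrative only; the sample size, seed and $\sigma$ are arbitrary choices) comparing empirical estimates of $E(X_t^d)$ with the closed form (2.1).

```python
import numpy as np

def double_factorial(m):
    """(2m-1)!! = 1 x 3 x 5 x ... x (2m-1), Equation (2.2)."""
    return int(np.prod(np.arange(1, 2 * m, 2))) if m > 0 else 1

sigma = 1.5
rng = np.random.default_rng(2)
z = rng.normal(0.0, sigma, size=2_000_000)

for d in range(1, 7):
    empirical = np.mean(z ** d)
    if d % 2 == 0:                                   # d = 2m: sigma^{2m} (2m-1)!!
        m = d // 2
        theoretical = sigma ** (2 * m) * double_factorial(m)
    else:                                            # d odd: zero
        theoretical = 0.0
    print(d, round(float(empirical), 3), theoretical)
```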

2.2. Variances of Powers of the Linear Gaussian White Noise Process

Theorem 2.2: Let $\{X_t, t \in Z\}$ be a linear Gaussian white noise process with mean zero and variance $\sigma^2 > 0$ ($X_t$ iid $N(0, \sigma^2)$); then

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^d) = \begin{cases} \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right], & d = 2m \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1), & d = 2m+1 \end{cases}$ (2.13)

Proof:

Let $X_t \sim$ iid $N(0, \sigma^2)$; then the expected value of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ is given by Equation (2.1).

Case I: $d = 2m$, $m = 1, 2, 3, \ldots$ ($d$ even)

Now

$Y_t = X_t^d = X_t^{2m} \;\Rightarrow\; Y_t^2 = X_t^{2d} = X_t^{2(2m)} = X_t^{4m}$

From Equation (2.1)

$E(Y_t) = \sigma^{2m} \prod_{k=1}^{m}(2k-1)$ (2.14)

and

$E(Y_t^2) = \sigma^{4m} \prod_{k=1}^{2m}(2k-1)$ (2.15)

$\mathrm{Var}(Y_t) = E(Y_t^2) - E^2(Y_t) = \sigma^{4m} \prod_{k=1}^{2m}(2k-1) - \left[\sigma^{2m} \prod_{k=1}^{m}(2k-1)\right]^2 = \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right]$ (2.16)

Case II: $d = 2m+1$, $m = 0, 1, 2, \ldots$ ($d$ odd)

$Y_t = X_t^d = X_t^{2m+1} \;\Rightarrow\; Y_t^2 = X_t^{2d} = X_t^{2(2m+1)}$

From Equation (2.1)

$E(Y_t) = 0$

$E(Y_t^2) = \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1)$ (2.17)

and

$\mathrm{Var}(Y_t) = E(Y_t^2) - E^2(Y_t) = E(Y_t^2) = \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1)$ (2.18)

Generally

$\mathrm{Var}(Y_t) = \mathrm{Var}(X_t^d) = \begin{cases} \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right], & d = 2m \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1), & d = 2m+1 \end{cases}$ (2.19)

Table 1 summarizes the means and variances of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots, 10$. The standard deviation of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots, 10$ is also included for $\sigma = 1.0$. A plot of $\sigma_{Y_t} = \sqrt{\mathrm{var}(Y_t)}$ against $d$ for fixed $\sigma = 1$ is given in Figure 1. From Figure 1, we note that, for fixed $\sigma$, an increase in $d$ leads to an exponential increase in the standard deviation.
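The entries of Table 1 follow directly from Equations (2.1) and (2.13); no simulation is needed. A short sketch (assuming $\sigma = 1$, as in Table 1) that tabulates the theoretical mean, variance and standard deviation for $d = 1, \ldots, 10$:

```python
import numpy as np

def dfact(j):
    """(2j-1)!! = product of the first j odd integers, Equation (2.2)."""
    return int(np.prod(np.arange(1, 2 * j, 2))) if j > 0 else 1

sigma = 1.0
for d in range(1, 11):
    if d % 2 == 0:                       # d = 2m: Equation (2.13), even case
        m = d // 2
        mean = sigma ** d * dfact(m)
        var = sigma ** (2 * d) * (dfact(2 * m) - dfact(m) ** 2)
    else:                                # d = 2m+1: Equation (2.13), odd case
        mean = 0.0
        var = sigma ** (2 * d) * dfact(d)    # dfact(2m+1) = dfact(d) for odd d
    print(d, mean, var, np.sqrt(var))
```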

The specific objective of this paper is to investigate whether powers of $\{X_t, t \in Z\}$ are also iid and to determine the distribution of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, especially for $d = 2$. The analytical proofs are provided in Section 2.3.

2.3. Covariances of Powers of the Linear Gaussian White Noise Process

Theorem 2.3: If $\{X_t, t \in Z\}$ is a linear Gaussian white noise process, then

Figure 1. Plot of the standard deviation of $Y_t = X_t^d$ ($\sigma_{Y_t}$) against power $d$ for fixed $\sigma = 1$.

Table 1. Mean, variance and standard deviation of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots, 10$.

higher powers ($Y_t = X_t^d$, $d = 1, 2, 3, \ldots$) are also white noise processes (iid) but not normally distributed.

Proof:

Since $\{X_t, t \in Z\}$ are iid and $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, we consider, for $k \neq 0$,

$R_y(k) = \mathrm{cov}(Y_t, Y_{t-k}) = \mathrm{cov}(X_t^d, X_{t-k}^d) = E(X_t^d X_{t-k}^d) - E(X_t^d)E(X_{t-k}^d) = E(X_t^d)E(X_{t-k}^d) - E(X_t^d)E(X_{t-k}^d) = 0, \quad k \neq 0$

However, for $k = 0$, $R_y(0) = \mathrm{var}(Y_t) = \mathrm{var}(X_t^d)$. Hence

$R_y(l) = \begin{cases} \sigma^{4m}\left[\prod_{k=1}^{2m}(2k-1) - \left(\prod_{k=1}^{m}(2k-1)\right)^2\right], & d = 2m, \ l = 0 \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1}(2k-1), & d = 2m+1, \ l = 0 \\ 0, & l \neq 0 \end{cases}$ (2.20)

It is clear from Equation (2.20) that when $\{X_t, t \in Z\}$ are iid, the powers $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ of $\{X_t, t \in Z\}$ are also iid. That is,

$R_y(l) = \begin{cases} \mathrm{var}(Y_t), & l = 0 \\ 0, & l \neq 0 \end{cases}$ (2.21)

The probability density function (p.d.f.) of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ can be obtained to enable a detailed study of the series. Theorem 2.4 gives the p.d.f. of $Y_t = X_t^2$.

Theorem 2.4: If $\{X_t, t \in Z\}$ is a linear Gaussian white noise process, then $Y_t = X_t^2$ has the p.d.f.

$g(y) = \begin{cases} \dfrac{1}{\sigma\sqrt{2\pi}}\, y^{-1/2} e^{-\frac{y}{2\sigma^2}}, & 0 < y < \infty \\ 0, & \text{otherwise} \end{cases}$ (2.22)

Proof:

If $X_t = X \sim N(0, \sigma^2)$ and $Y = X_t^2 = X^2$, the distribution function of $Y$ is, for $y \geq 0$,

$G(y) = P(X^2 \leq y) = P(-\sqrt{y} \leq X \leq \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{x^2}{2\sigma^2}}\, dx = 2 \int_0^{\sqrt{y}} \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{x^2}{2\sigma^2}}\, dx$

Let $x = \sqrt{v}$; then, since $dx = \left(\dfrac{1}{2\sqrt{v}}\right) dv$, we have

$G(y) = 2 \int_0^{y} \dfrac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{v}{2\sigma^2}} \left(\dfrac{1}{2\sqrt{v}}\right) dv = \int_0^{y} \dfrac{1}{\sigma\sqrt{2\pi}}\, v^{-1/2} e^{-\frac{v}{2\sigma^2}}\, dv$

Of course, $G(y) = 0$ for $y < 0$. The p.d.f. of $Y$ is $g(y) = G'(y)$, and by one form of the fundamental theorem of calculus [17],

$g(y) = \begin{cases} \dfrac{1}{\sigma\sqrt{2\pi}}\, y^{-1/2} e^{-\frac{y}{2\sigma^2}}, & 0 < y < \infty \\ 0, & \text{otherwise} \end{cases}$

Note that the p.d.f. of $Y_t = X_t^2$ is the p.d.f. of a gamma distribution with parameters $\alpha = \frac{1}{2}$, $\beta = 2\sigma^2$. That is, $Y_t = X_t^2 \sim G(\alpha, \beta)$ with $\alpha = \frac{1}{2}$, $\beta = 2\sigma^2$.
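The gamma form can be checked by simulation. The sketch below is illustrative (it relies on SciPy's gamma parameterization with shape $\alpha = \frac{1}{2}$, location 0 and scale $\beta = 2\sigma^2$): it compares the empirical distribution of $X_t^2$ with $G(\frac{1}{2}, 2\sigma^2)$ via a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

sigma = 2.0
rng = np.random.default_rng(3)
x = rng.normal(0.0, sigma, size=100_000)
y = x ** 2                               # Y = X^2 should be Gamma(1/2, 2*sigma^2)

# shape = 1/2, loc = 0, scale = 2*sigma^2 in SciPy's parameterization
d_stat, p_value = stats.kstest(y, "gamma", args=(0.5, 0.0, 2 * sigma ** 2))
print(d_stat, p_value)                   # a large p-value supports the gamma fit
```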

However, for a more detailed study of the behavior of the linear Gaussian white noise process, the coefficients of symmetry and kurtosis for powers of the process are provided in Section 2.4.

2.4. Coefficient of Symmetry and Kurtosis for Powers of the Linear Gaussian White Noise Process

Non-normality of higher powers of $\{X_t, t \in Z\}$ ($d = 2, 3, \ldots$) can also be confirmed by the coefficients of symmetry and kurtosis, defined by

$\beta_1 = \dfrac{\mu_3(d)}{(\mu_2(d))^{3/2}}$ (2.23)

$\beta_2 = \dfrac{\mu_4(d)}{(\mu_2(d))^2}$ (2.24)

where

$\mu_2(d) = E\left[(X_t^d - E(X_t^d))^2\right] = \mathrm{var}(X_t^d)$ (2.25)

$\mu_3(d) = E\left[(X_t^d - E(X_t^d))^3\right]$ (2.26)

and

$\mu_4(d) = E\left[(X_t^d - E(X_t^d))^4\right]$ (2.27)

Note that

$\mu_3(d) = E(X_t^{3d}) - 3E(X_t^{2d})E(X_t^d) + 2E^3(X_t^d)$ (2.28)

$\mu_4(d) = E(X_t^{4d}) - 4E(X_t^{3d})E(X_t^d) + 6E(X_t^{2d})E^2(X_t^d) - 3E^4(X_t^d)$ (2.29)

The kurtosis for $d = 1, 2, 3, 4, 5$ and $6$ is given in Table 2. A plot of $\beta_2 = \frac{\mu_4(d)}{(\mu_2(d))^2}$ against $d = 1, 2, 3, 4, 5$ is given in Figure 2. From Figure 2, we note that an increase in $d$ leads to an exponential increase in the kurtosis.
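The kurtosis values of Table 2 can be reproduced exactly from Lemma 2.1 together with Equations (2.24), (2.25) and (2.29); no simulation is required. A minimal sketch (with $\sigma = 1$; the function names are ours):

```python
import numpy as np

def moment(d):
    """E(X^d) for X ~ N(0, 1), Equation (2.1) with sigma = 1."""
    if d % 2 == 1:
        return 0.0
    m = d // 2
    return float(np.prod(np.arange(1, 2 * m, 2)))     # (2m-1)!!

def kurtosis_of_power(d):
    """beta_2 of Y = X^d via Equations (2.24), (2.25) and (2.29)."""
    e1, e2, e3, e4 = (moment(k * d) for k in (1, 2, 3, 4))
    mu2 = e2 - e1 ** 2
    mu4 = e4 - 4 * e3 * e1 + 6 * e2 * e1 ** 2 - 3 * e1 ** 4
    return mu4 / mu2 ** 2

for d in range(1, 7):
    print(d, kurtosis_of_power(d))   # d = 1 gives 3 (normal kurtosis); d = 2 gives 15
```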

Figure 2. Plot of kurtosis coefficient against power of the linear Gaussian white noise process.

Table 2. Coefficient of symmetry and kurtosis for $Y_t = X_t^d$, $d = 1, 2, 3, \ldots, 6$.

3. Methodology

3.1. Checking for Normality

If the noise process is Gaussian (that is, if all of its joint distributions are normal), then stronger conclusions can be drawn when a model is fitted to the data. We have shown that all powers $d \geq 2$ of the linear Gaussian white noise process are non-normal. The only reasonable test is therefore one that enables us to check whether the observations are from an iid normal sequence. The Jarque-Bera (JB) test [18] [19] [20] for normality can be used. The JB test is based on the property that the normal distribution (with any mean or variance) has a skewness coefficient of zero and a kurtosis coefficient of three. We can test whether these two conditions hold against a suitable alternative, and the JB test statistic is

$JB = n\left(\dfrac{\hat{\beta}_1^2}{6} + \dfrac{(\hat{\beta}_2 - 3)^2}{24}\right)$ (3.1)

where

$\hat{\beta}_1 = \dfrac{\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^3}{\left(\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^2\right)^{3/2}}$ (3.2)

$\hat{\beta}_2 = \dfrac{\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^4}{\left(\frac{1}{n}\sum_{t=1}^{n}(X_t - \bar{X})^2\right)^2}$ (3.3)

where $n$ is the sample size and $\hat{\beta}_1$ and $\hat{\beta}_2$ are the sample skewness and kurtosis coefficients. The asymptotic null distribution of $JB$ is $\chi^2$ with 2 degrees of freedom.
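A minimal implementation of the JB test of Equations (3.1)-(3.3) is sketched below (our own function; statistical packages ship equivalent routines).

```python
import numpy as np
from scipy.stats import chi2

def jarque_bera(x):
    """JB statistic (3.1) with the moment estimators (3.2) and (3.3)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    m2 = np.mean(xc ** 2)
    b1 = np.mean(xc ** 3) / m2 ** 1.5            # sample skewness, Eq. (3.2)
    b2 = np.mean(xc ** 4) / m2 ** 2              # sample kurtosis, Eq. (3.3)
    jb = n * (b1 ** 2 / 6 + (b2 - 3) ** 2 / 24)
    return jb, chi2.sf(jb, df=2)                 # statistic and asymptotic p-value

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
print(jarque_bera(x))         # normal data: normality should not be rejected
print(jarque_bera(x ** 2))    # squared data are non-normal: expect a tiny p-value
```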

3.2. White Noise Testing

We have shown that $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, 3, \ldots$ is a white noise (iid) series whenever $X_1, X_2, \ldots, X_n$ is iid. We therefore adopt the Ljung-Box test, replacing the sample autocorrelations of the data $X_1, X_2, \ldots, X_n$ with those of $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, 3, \ldots$, and use the statistic

$Q^*(m) = n(n+2) \sum_{k=1}^{m} \dfrac{[\hat{\rho}_{X^d}(k)]^2}{n-k}$ (3.4)

The hypothesis of iid data is then rejected at level $\alpha$ if the observed $Q^*(m)$ is larger than the $1-\alpha$ quantile of the $\chi^2(m)$ distribution.
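A sketch of this procedure (the function name is ours; it simply applies the Ljung-Box statistic to the $d$-th power of the series):

```python
import numpy as np
from scipy.stats import chi2

def q_star(x, d, m):
    """Ljung-Box statistic (3.4) applied to the d-th power of the data."""
    y = np.asarray(x, dtype=float) ** d
    n = len(y)
    yc = y - y.mean()
    denom = np.sum(yc ** 2)
    rho = np.array([np.sum(yc[:n - k] * yc[k:]) / denom for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, m + 1)))

rng = np.random.default_rng(5)
x = rng.normal(size=300)
m = 6
for d in (1, 2, 3):
    q = q_star(x, d, m)
    print(d, q, q > chi2.ppf(0.95, df=m))   # True would mean rejecting iid at 5%
```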

3.3. Determining the Optimal Value of d

Figure 1 suggests two growth models: 1) the quadratic growth model and 2) the exponential growth model. We use the behavior of the variance and the kurtosis coefficient to determine the optimal value of $d$. The optimal value is that value of $d$ that gives a perfect fit for either the quadratic or the exponential growth curve. Using the standard deviation for $5 \leq d \leq 10$, the exponential growth curve performs better than the quadratic growth curve: the quadratic growth curve fitted negative values to positive values at some data points, while the exponential curve fitted only positive values. However, the residuals of the resulting exponential curve are very large, as measured by the following accuracy measures [21].

Mean Absolute Error (MAE)

$\mathrm{MAE} = \dfrac{1}{m} \sum_{i=1}^{m} |\hat{e}_i|$ (3.5)

Mean Absolute Percentage Error (MAPE)

$\mathrm{MAPE} = \left[\dfrac{1}{m} \sum_{i=1}^{m} \left|\dfrac{\hat{e}_i}{Z_i}\right|\right] \times 100$ (3.6)

Mean Squared Error (MSE)

$\mathrm{MSE} = \dfrac{1}{m} \sum_{i=1}^{m} \hat{e}_i^2$ (3.7)

where $m$ is the number of values of $d$ used in the trend analysis, $Z_i$ is the observed value, and

$\hat{e}_i = \begin{cases} \hat{\sigma}_{Y_t} - \sigma_{Y_t} & \text{for the standard deviation of } Y_t = X_t^d \\ \hat{\beta}_2 - \beta_2 & \text{for the kurtosis coefficient of } Y_t = X_t^d \end{cases}$ (3.8)
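The sketch below illustrates these accuracy measures (the paper's trend analysis was done in Minitab, whose exponential fitting criterion may differ; here both trends are fitted by least squares with numpy.polyfit). The observed values are the theoretical standard deviations of $X_t^d$ for $d = 3, \ldots, 6$ at $\sigma = 1$, obtained from Equation (2.13).

```python
import numpy as np

def accuracy(observed, fitted):
    """MAE, MAPE and MSE of Equations (3.5)-(3.7)."""
    e = np.asarray(fitted) - np.asarray(observed)       # residuals as in (3.8)
    mae = np.mean(np.abs(e))
    mape = np.mean(np.abs(e / np.asarray(observed))) * 100
    mse = np.mean(e ** 2)
    return mae, mape, mse

d = np.arange(3, 7)
sd = np.sqrt(np.array([15.0, 96.0, 945.0, 10170.0]))    # sd of X^d, d = 3..6, sigma = 1

quad = np.polyval(np.polyfit(d, sd, 2), d)                     # quadratic trend
expo = np.exp(np.polyval(np.polyfit(d, np.log(sd), 1), d))     # exponential trend
print("quadratic  :", accuracy(sd, quad))
print("exponential:", accuracy(sd, expo))
```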

Table 3 gives the accuracy measures for the trend analysis of the standard deviation of $Y_t = X_t^d$ when $\sigma = 1$, while Table 4 gives detailed results for optimality.

When $d = 4$, the quadratic growth curve performs better than the exponential curve, with minimal residuals. Both curves fitted positive values at the different data points. We also observed from Table 3 that, with $d = 3$, the quadratic

Table 3. Summary of accuracy measures for the exponential and quadratic curves using the standard deviation of $Y_t = X_t^d$ for $d = 3, 4, \ldots, 10$.

Table 4. Fitting exponential and quadratic curves to the standard deviation of powers of the linear Gaussian white noise process when $\sigma = 1$ and $d = 3, 4$.

*Exponential and quadratic trend analysis is not possible for $d = 1$ or $d = 2$.

growth curve performs better than the exponential growth curve. The resulting quadratic curve yielded zero residuals. The implication of this result is that we obtain a perfect fit for the data points when $d = 3$ for the quadratic curve only. Hence, the optimal value of $d$ is 3 when we use the standard deviation curve.

Figure 2 also suggests two growth models: 1) the quadratic growth model and 2) the exponential growth model. Using the kurtosis coefficient for $4 \leq d \leq 6$, the exponential growth curve performs better than the quadratic growth curve: the quadratic growth curve fitted negative values to positive values at some data points, while the exponential curve fitted only positive values.

When $d = 3$, the quadratic growth curve performs better than the exponential growth curve. The resulting quadratic curve yielded zero residuals, as for the standard deviation curve. The implication of these results is that we obtain a perfect fit for the data points when $d = 3$ for the quadratic curve only. Hence, the optimal value of $d$ is 3. Therefore, we recommend that, in order to stop the variance from exploding, the data points should not be raised to a power greater than three.

3.4. On the Use of Higher Moments for the Acceptability of the Linear Gaussian White Noise Process

We have shown that if $\{X_t, t \in Z\}$ is a linear Gaussian white noise process, then $Y_t = X_t^d$, $d = 1, 2, \ldots$ is also iid but not normally distributed. Using the variances and kurtosis of $Y_t = X_t^d$, we were able to establish that the optimal value of $d$ is three. Variances and kurtosis of $Y_t = X_t^d$ have been given in Table 5 and Table 6, respectively. It is also clear from Equation (2.24) that the kurtosis itself is a function of variances. We therefore insist that, for a stochastic process to be accepted as a linear Gaussian white noise process, the following variances must hold:

$\mathrm{var}(X_t) = \sigma^2$ (3.9)

$\mathrm{var}(X_t^2) = 2\sigma^4$ (3.10)

and

$\mathrm{var}(X_t^3) = 15\sigma^6$ (3.11)

Table 5. Summary of accuracy measures for the exponential and quadratic curves using the kurtosis coefficient of $Y_t = X_t^d$ for $d = 3, 4, 5, 6$.

*Exponential and quadratic trend analysis is not possible for $d = 1$ or $d = 2$.

Table 6. Fitting exponential and quadratic curves to the kurtosis coefficient of powers of the linear Gaussian white noise process when $\sigma = 1$ and $d = 3, 4$.

In view of these, we suggest that the following two null hypotheses be tested before a stochastic process is accepted as a linear Gaussian white noise process:

$H_{01}: \mathrm{var}(X_t^2) = 2\sigma_0^4$ (3.12)

and

$H_{02}: \mathrm{var}(X_t^3) = 15\sigma_0^6$ (3.13)

Then, the chi-square test statistic [22] for testing (3.12) is

$\chi^2_{cal} = \dfrac{(n-1)\, S^2_{X_t^2}}{2\sigma_0^4}$ (3.14)

while that for (3.13) is

$\chi^2_{cal} = \dfrac{(n-1)\, S^2_{X_t^3}}{15\sigma_0^6}$ (3.15)

where $S^2_{X_t^2}$ and $S^2_{X_t^3}$ are the estimated variances of the second and third powers of the stochastic process, $\sigma_0^2$ is the null value for the true variance of the stochastic process, and $n$ is the number of observations in the series. The null hypothesis is rejected at level $\alpha$ if the observed value of $\chi^2_{cal}$ is larger than the $1-\frac{\alpha}{2}$ quantile of the chi-square distribution with $n-1$ degrees of freedom.
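A sketch of the two variance tests (the function and its defaults are ours; the theoretical values come from Equations (3.10) and (3.11) and the statistics from (3.14) and (3.15)):

```python
import numpy as np
from scipy.stats import chi2

def variance_test(x, d, sigma0=1.0, alpha=0.05):
    """Chi-square test of H01/H02: var(X^d) equals its theoretical value."""
    theoretical = {2: 2 * sigma0 ** 4, 3: 15 * sigma0 ** 6}[d]  # Eqs. (3.10)-(3.11)
    y = np.asarray(x, dtype=float) ** d
    n = len(y)
    stat = (n - 1) * np.var(y, ddof=1) / theoretical            # Eqs. (3.14)-(3.15)
    crit = chi2.ppf(1 - alpha / 2, df=n - 1)
    return stat, crit, stat > crit

rng = np.random.default_rng(6)
x = rng.normal(size=200)            # sigma0 = 1 by construction
print(variance_test(x, d=2))        # H01: var(X^2) = 2*sigma0^4
print(variance_test(x, d=3))        # H02: var(X^3) = 15*sigma0^6
```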

4. Results

For an illustration, six (6) random series were simulated using Minitab 16 (see Appendix). The simulated series met the following conditions: 1) the simulated series ($X_t$) are normal, and 2) the powers $X_t^d$, $d = 1, 2, 3, 4, 5$ are shown to be iid but not normally distributed (see Table 7).

Table 7. Descriptive statistics and estimates of the test statistic for rejecting the null hypothesis of equality of the variance of higher moments for six simulated series, $X_t = e_t$, $e_t \sim N(0, 1)$, as a linear Gaussian white noise process.

The values of the chi-square test statistic for testing (3.12) and (3.13) are also shown in Table 7. We observed that the null hypothesis is rejected at level $\alpha = 5\%$ for two of the simulated series and is not rejected for the other four. The results clearly show that testing the variance of the higher powers $Y_t = X_t^d$, $d = 2, 3$ is a necessary condition for accepting the linear Gaussian white noise process.

5. Conclusion

We have been able to show that if $\{X_t, t \in Z\}$ are iid, then all powers of $\{X_t, t \in Z\}$ are also iid but non-normal. Hence, we computed the kurtosis of some higher powers of $\{X_t, t \in Z\}$ and established that an increase in the powers of $\{X_t, t \in Z\}$ leads to an exponential increase in the kurtosis. We recommend that stochastic processes (white noise processes) and processes with a similar covariance structure should be tested for normality, for white noise, and for equality of the variances of higher powers to the theoretical values of Table 1 with $d = 1, 2, 3$.

Cite this paper

Iwueze, I.S., Arimie, C.O., Iwu, H.C. and Onyemachi, E. (2017) Some Applications of Higher Moments of the Linear Gaussian White Noise Process. Applied Mathematics, 8, 1918-1938. https://doi.org/10.4236/am.2017.812136

References

1. Brockwell, P.J. and Davis, R.A. (2002) Introduction to Time Series and Forecasting. 2nd Edition, Springer, New York. https://doi.org/10.1007/b97391

2. Box, G.E.P., Jenkins, G.M. and Reinsel, G.C. (1994) Time Series Analysis: Forecasting and Control. 3rd Edition, John Wiley and Sons Inc., Hoboken.

3. Fuller, W.A. (1976) Introduction to Statistical Time Series. Wiley, New York.

4. Box, G.E.P. and Pierce, D.A. (1970) Distribution of Residual Autocorrelations in Autoregressive Integrated Moving Average Time Series Models. Journal of the American Statistical Association, 65, 1509-1526. https://doi.org/10.1080/01621459.1970.10481180

5. Hong, Y. (1996) Consistent Testing for Serial Correlation of Unknown Form. Econometrica, 64, 837-864. https://doi.org/10.2307/2171847

6. Shao, X. (2011) Testing for White Noise under Unknown Dependence and Its Applications to Goodness-of-Fit for Time Series Models. Econometric Theory, 27, 1-32. https://doi.org/10.1017/S0266466610000253

7. Ljung, G.M. and Box, G.E.P. (1978) On a Measure of Lack of Fit in Time Series Models. Biometrika, 65, 297-303. https://doi.org/10.1093/biomet/65.2.297

8. Tsay, R.S. (2002) Analysis of Financial Time Series. John Wiley & Sons, New York.

9. McLeod, A.I. and Li, W.K. (1983) Diagnostic Checking ARMA Time Series Models Using Squared-Residual Autocorrelations. Journal of Time Series Analysis, 4, 269-273.

10. Bartlett, M.S. (1956) An Introduction to Stochastic Processes: With Special Reference to Methods and Applications. Cambridge University Press, Cambridge.

11. Grenander, U. and Rosenblatt, M. (1957) Statistical Analysis of Stationary Time Series. Wiley, New York.

12. Durlauf, S. (1991) Spectral Based Testing of the Martingale Hypothesis. Journal of Econometrics, 50, 355-376. https://doi.org/10.1016/0304-4076(91)90025-9

13. Deo, R.S. (2000) Spectral Tests of the Martingale Hypothesis under Conditional Heteroscedasticity. Journal of Econometrics, 99, 291-315. https://doi.org/10.1016/S0304-4076(00)00027-0

14. Granger, C.W.J. and Andersen, A.P. (1978) An Introduction to Bilinear Time Series Models. Vandenhoeck and Ruprecht, Göttingen.

15. Iwueze, I.S. (1988) Bilinear White Noise Processes. Nigerian Journal of Mathematics and Applications, 1, 51-63.

16. Ibrahim, A.M. (2013) Extension of Factorial Concept to Negative Numbers. Notes on Number Theory and Discrete Mathematics, 19, 30-42.

17. Grossman, S.I. (1981) Calculus. 2nd Edition, Academic Press, New York.

18. Jarque, C.M. and Bera, A.K. (1980) Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals. Economics Letters, 6, 255-259. https://doi.org/10.1016/0165-1765(80)90024-5

19. Jarque, C.M. and Bera, A.K. (1981) Efficient Tests for Normality, Homoscedasticity and Serial Independence of Regression Residuals: Monte Carlo Evidence. Economics Letters, 7, 313-318. https://doi.org/10.1016/0165-1765(81)90035-5

20. Jarque, C.M. and Bera, A.K. (1987) A Test for Normality of Observations and Regression Residuals. International Statistical Review, 55, 163-172. https://doi.org/10.2307/1403192

21. Hyndman, R.J. and Athanasopoulos, G. (2012) Forecasting: Principles and Practice. OTexts. https://otexts.com/fpp

22. Milton, J.S. and Arnold, J.C. (1995) Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences. McGraw-Hill, New York.

Appendix

Table A1. Six simulated white noise series: $X_t = e_t$, $e_t \sim N(0, 1)$ data.