The linear Gaussian white noise process is an independent and identically distributed (iid) sequence with zero mean and finite variance, each observation having the $N(0, \sigma^2)$ distribution. Hence, if $X_1, X_2, \ldots, X_n$ is a realization of such an iid sequence, this paper studies in detail the covariance structure of $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, \ldots$. This study shows that: 1) all powers of a linear Gaussian white noise process are iid but not normally distributed, and 2) the higher moments (variance and kurtosis) of $X_t^d$, $d = 2, 3, \ldots$ can be used to distinguish the linear Gaussian white noise process from other processes with a similar covariance structure.
The objective of estimation procedures is to produce residuals (the estimated noise sequence) with no apparent deviations from stationarity and, in particular, with no dependence among them. If there is no dependence among the residuals, we can regard them as observations of independent random variables, and there is no further modeling to be done except to estimate their mean and variance. If there is significant dependence among the residuals, then we need to look for a noise model that accounts for the dependence.
In this paper, we examine the covariance structure of powers of the noise sequence when the noise sequence is assumed to consist of independent and identically distributed normal (Gaussian) random variables with mean zero and finite variance $\sigma^2 > 0$. Some simple tests for checking the hypothesis that the residuals and their powers are observed values of independent and identically distributed random variables are also considered, as are tests for normality of the residuals and their powers.
The stochastic process $X_t, t \in T$ is said to be strictly stationary if its distribution function is time invariant; that is,

$$F(x_{t_1}, x_{t_2}, \ldots, x_{t_m}) = F(x_{t_1 + k}, x_{t_2 + k}, \ldots, x_{t_m + k}) \quad (1.1)$$

where

$$F(x_{t_1}, x_{t_2}, \ldots, x_{t_m}) = P(X_{t_1} \le x_{t_1}, X_{t_2} \le x_{t_2}, \ldots, X_{t_m} \le x_{t_m}) \quad (1.2)$$
That is, the probability measure for the sequence $\langle X_t \rangle$ is the same as that for $\langle X_{t+k} \rangle$ for all $k$. If a series satisfies the following three conditions, it is said to be weakly or covariance stationary:
$$\left.\begin{aligned} 1.\ & E(X_t) = \mu, \quad t = 1, 2, \ldots \\ 2.\ & E[(X_t - \mu)(X_t - \mu)] = \sigma^2 < \infty \\ 3.\ & E[(X_{t_1} - \mu)(X_{t_2} - \mu)] = R(t_2 - t_1) \end{aligned}\right\} \quad (1.3)$$
If the process is covariance stationary, all the variances are the same and all the covariances depend only on the difference $t_2 - t_1$. The moments

$$E[(X_t - \mu)(X_{t+k} - \mu)] = R(k), \quad k = 0, 1, 2, \ldots \quad (1.4)$$

are known as the autocovariance function. The autocorrelations, which do not depend on the units of measurement of $X_t$, are given by

$$\rho(k) = \frac{R(k)}{R(0)}, \quad k = 0, 1, 2, \ldots \quad (1.5)$$
A stochastic process $X_t, t \in \mathbb{Z}$, where $\mathbb{Z} = \langle \ldots, -1, 0, 1, \ldots \rangle$, with finite mean and variance is called white noise if all the autocovariances (1.4) are zero except at lag zero [$R(k) = 0$ for $k \neq 0$]. In many applications, $X_t, t \in \mathbb{Z}$ is assumed to be normally distributed with mean zero and variance $\sigma^2 < \infty$, and the series is called a linear Gaussian white noise process if:
$$\left.\begin{aligned} E(X_t) &= 0 \\ \operatorname{var}(X_t) &= \sigma^2 \\ R(k) &= \begin{cases} \sigma^2, & k = 0 \\ 0, & \text{otherwise} \end{cases} \\ \rho(k) &= \begin{cases} 1, & k = 0 \\ 0, & \text{otherwise} \end{cases} \end{aligned}\right\} \quad (1.6)$$
and

$$\phi_{kk} = \operatorname{corr}(X_t, X_{t+k} \mid X_{t+1}, X_{t+2}, \ldots, X_{t+k-1}) = \begin{cases} 1, & k = 0 \\ 0, & \text{otherwise} \end{cases} \quad (1.7)$$
where $\phi_{kk}$ is known as the partial autocorrelation function. For large $n$, the sample autocorrelations
$$\hat{\rho}_X(k) = \frac{\sum_{t=1}^{n-k} (X_t - \bar{X})(X_{t+k} - \bar{X})}{\sum_{t=1}^{n} (X_t - \bar{X})^2} \quad (1.8)$$
of an iid sequence $X_1, X_2, \ldots, X_n$ with finite variance are approximately distributed as $N(0, 1/n)$. We can therefore check a series for randomness by examining its sample autocorrelation coefficients against a confidence interval: if $X_1, X_2, \ldots, X_n$ is a realization of such an iid sequence, about $100(1 - \alpha)\%$ of the sample autocorrelations should fall between the bounds

$$\pm \frac{Z_{1 - \alpha/2}}{\sqrt{n}} \quad (1.9)$$

where $Z_{1-\alpha/2}$ is the $1 - \frac{\alpha}{2}$ quantile of the standard normal distribution. The null and alternative hypotheses are

$$H_0: \rho_X(k) = 0 \ \forall\, k \neq 0 \quad \text{and} \quad H_1: \rho_X(k) \neq 0 \text{ for some } k \neq 0 \quad (1.10)$$

where $\rho_X(k)$ is the autocorrelation at lag $k$ computed from $X_1, X_2, \ldots, X_n$.
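As a numerical illustration of this bound check (our own sketch, not part of the original derivation), the following Python snippet simulates a Gaussian white noise series, computes the sample autocorrelations (1.8), and counts how many exceed the bounds (1.9); the helper name `sample_acf` and all parameter values are our own choices.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations (1.8) at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc ** 2)
    return np.array([np.sum(xc[: n - k] * xc[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)       # simulated N(0, 1) white noise
rho = sample_acf(x, max_lag=20)
bound = 1.96 / np.sqrt(len(x))           # (1.9) with alpha = 0.05
print("lags outside the bounds:", np.sum(np.abs(rho) > bound))
# For iid data, roughly 5% of the sample autocorrelations should fall outside.
```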
We can also test the joint hypothesis that all $m$ of the $\rho_X(k)$ correlation coefficients are simultaneously equal to zero. The null and alternative hypotheses are

$$H_0: \rho_X(1) = \rho_X(2) = \cdots = \rho_X(m) = 0 \quad \text{and} \quad H_1: \rho_X(i) \neq 0 \text{ for some } i \in \{1, 2, \ldots, m\} \quad (1.11)$$
The most popular test for (1.11) is the Box–Pierce portmanteau test, based on the statistic

$$Q_{BP}(m) = n \sum_{k=1}^{m} \left[\hat{\rho}_X(k)\right]^2 \quad (1.12)$$

where $m$ is the so-called lag truncation number. The Ljung–Box modification of (1.12), which has better finite-sample behavior, is

$$Q_{LB}(m) = n(n+2) \sum_{k=1}^{m} \frac{\left[\hat{\rho}_X(k)\right]^2}{n - k} \quad (1.13)$$
Several values of $m$ are often used, and simulation studies suggest that the choice $m \approx \ln(n)$ provides better power performance.
Another portmanteau test, the McLeod–Li statistic, is

$$Q_{ML}(m) = n(n+2) \sum_{k=1}^{m} \frac{\left[\hat{\rho}_{X^2}(k)\right]^2}{n - k} \quad (1.14)$$

where the sample autocorrelations of the data are replaced by the sample autocorrelations of the squared data, $\hat{\rho}_{X^2}(k)$.
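The three portmanteau statistics (1.12)–(1.14) are straightforward to compute directly; the sketch below reuses the `sample_acf` helper and the simulated series `x` from the previous snippet and compares the statistics with the $\chi^2(m)$ critical value (our own illustration, not the paper's code).

```python
from scipy.stats import chi2

def portmanteau(x, m):
    """Box-Pierce (1.12), Ljung-Box (1.13) and McLeod-Li (1.14) statistics."""
    n = len(x)
    rho = sample_acf(x, m)                        # ACF of the data
    rho_sq = sample_acf(np.asarray(x) ** 2, m)    # ACF of the squared data
    k = np.arange(1, m + 1)
    q_bp = n * np.sum(rho ** 2)
    q_lb = n * (n + 2) * np.sum(rho ** 2 / (n - k))
    q_ml = n * (n + 2) * np.sum(rho_sq ** 2 / (n - k))
    return q_bp, q_lb, q_ml

m = max(1, int(np.log(len(x))))    # m ~ ln(n), as suggested above
q_bp, q_lb, q_ml = portmanteau(x, m)
print(q_bp, q_lb, q_ml, "critical value:", chi2.ppf(0.95, df=m))
```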
The white noise hypothesis can also be stated in the frequency domain. Let

$$f_x(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \rho_x(k) e^{ik\omega}, \quad \omega \in [-\pi, \pi] \quad (1.15)$$
be the normalized spectral density of $X_t, t \in \mathbb{Z}$. The normalized spectral density function of the linear Gaussian white noise process is

$$f_x(\omega) = \frac{1}{2\pi}, \quad \omega \in [-\pi, \pi] \quad (1.16)$$

The equivalent frequency-domain expressions of $H_0$ and $H_1$ are

$$H_0: f_x(\omega) = \frac{1}{2\pi} \text{ for all } \omega \in [-\pi, \pi] \quad \text{and} \quad H_1: f_x(\omega) \neq \frac{1}{2\pi} \text{ for some } \omega \in [-\pi, \pi] \quad (1.17)$$
In the frequency domain, the hypothesis (1.17) is typically assessed by comparing an estimate of the spectral density (or the cumulative periodogram) with the constant density in (1.16).
A stochastic process $X_t, t \in \mathbb{Z}$ may have the covariance structure (1.6) even when it is not a linear Gaussian white noise process; examples are found in the study of bilinear time series processes.
Let $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, where $X_t, t \in \mathbb{Z}$ is the linear Gaussian white noise process. The expected values of $Y_t, t \in \mathbb{Z}$ [$E(Y_t) = E(X_t^d)$] are needed for the effective determination of the variance and covariance structure of $Y_t$. Lemma 2.1 gives the required result.
Lemma 2.1: Let $X_t, t \in \mathbb{Z}$ be a linear Gaussian white noise process with mean zero and variance $\sigma^2 > 0$ ($X_t$ iid $N(0, \sigma^2)$); then

$$E(X_t^d) = \begin{cases} \sigma^{2m} (2m-1)!!, & d = 2m, \ m = 1, 2, \ldots \\ 0, & d = 2m+1, \ m = 0, 1, 2, \ldots \end{cases} \quad (2.1)$$

where $(2m-1)!!$ denotes the double factorial

$$(2m-1)!! = 1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1) = \prod_{k=1}^{m} (2k-1) \quad (2.2)$$
Proof:
Let $X_t = Z \sim N(0, \sigma^2)$; then

$$f(z) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{z^2}{2\sigma^2}}, \quad -\infty < z < \infty, \ \sigma^2 > 0 \quad (2.3)$$

Note that

$$E(Z^d) = \int_{-\infty}^{\infty} z^d f(z)\, dz \quad (2.4)$$

$$= \int_{-\infty}^{\infty} z^d \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{z^2}{2\sigma^2}}\, dz \quad (2.5)$$
1) Case I: $d = 2m$ (even)

Equation (2.5) reduces to

$$E(Z^d) = 2 \int_{0}^{\infty} z^d \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{z^2}{2\sigma^2}}\, dz \quad (2.6)$$
Let $y = \frac{z^2}{2\sigma^2} \Rightarrow z^2 = 2\sigma^2 y \Rightarrow z = (\sigma\sqrt{2})\, y^{1/2}$, so that

$$\frac{dz}{dy} = (\sigma\sqrt{2}) \cdot \frac{1}{2}\, y^{-1/2} = \frac{\sigma}{\sqrt{2}}\, y^{-1/2}, \qquad dz = \frac{\sigma\, y^{-1/2}}{\sqrt{2}}\, dy \quad (2.7)$$

$$E(Z^d) = \frac{2}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} \left[\sigma\sqrt{2}\, y^{1/2}\right]^{2m} e^{-y} \left(\frac{\sigma\, y^{-1/2}}{\sqrt{2}}\right) dy = \frac{2^m \sigma^{2m}}{\sqrt{\pi}} \int_{0}^{\infty} y^{m - \frac{1}{2}} e^{-y}\, dy \quad (2.8)$$
The integral in Equation (2.8) is a gamma function [$\int_0^\infty w^{t-1} e^{-w}\, dw = \Gamma(t)$], so that

$$E(Z^d) = \frac{2^m \sigma^{2m}}{\sqrt{\pi}}\, \Gamma\!\left(m + \frac{1}{2}\right) \quad (2.9)$$

$$\Gamma\!\left(m + \frac{1}{2}\right) = \frac{\left[1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1)\right] \Gamma\!\left(\frac{1}{2}\right)}{2^m} = \frac{\left[1 \times 3 \times 5 \times 7 \times \cdots \times (2m-1)\right] \sqrt{\pi}}{2^m} = \frac{\sqrt{\pi}\, (2m-1)!!}{2^m} \quad (2.10)$$

Thus

$$E(Z^d) = \frac{2^m \sigma^{2m}}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}\, (2m-1)!!}{2^m} = \sigma^{2m} (2m-1)!! \quad (2.11)$$
2) Case II: $d = 2m + 1$ (odd)

$$E(Z^d) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{0} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz + \frac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = -\frac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz + \frac{1}{\sigma\sqrt{2\pi}} \int_{0}^{\infty} z^d e^{-\frac{z^2}{2\sigma^2}}\, dz = 0 \quad (2.12)$$

since the integrand is an odd function of $z$. Thus

$$E(Z^d) = E(X_t^d) = \begin{cases} \sigma^{2m} (2m-1)!!, & d = 2m, \ m = 1, 2, \ldots \\ 0, & d = 2m+1, \ m = 0, 1, 2, \ldots \end{cases}$$
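Lemma 2.1 is easy to sanity-check by simulation; the sketch below (our own illustration, using `scipy.special.factorial2` for the double factorial) compares Monte Carlo estimates of $E(X_t^d)$ with the closed form (2.1).

```python
import numpy as np
from scipy.special import factorial2   # double factorial (2m-1)!!

rng = np.random.default_rng(1)
sigma = 1.5
z = rng.normal(0.0, sigma, size=1_000_000)

for d in range(1, 9):
    if d % 2 == 0:
        m = d // 2
        theory = sigma ** (2 * m) * factorial2(2 * m - 1)  # (2.1), d = 2m
    else:
        theory = 0.0                                       # (2.1), d = 2m + 1
    print(d, round(np.mean(z ** d), 3), float(theory))
```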
Theorem 2.2: Let $X_t, t \in \mathbb{Z}$ be a linear Gaussian white noise process with mean zero and variance $\sigma^2 > 0$ ($X_t$ iid $N(0, \sigma^2)$); then

$$\operatorname{Var}(Y_t) = \operatorname{Var}(X_t^d) = \begin{cases} \sigma^{4m} \left[\prod_{k=1}^{2m} (2k-1) - \left(\prod_{k=1}^{m} (2k-1)\right)^2\right], & d = 2m \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1} (2k-1), & d = 2m+1 \end{cases} \quad (2.13)$$
Proof:
Let $X_t \sim$ iid $N(0, \sigma^2)$; then the expected value of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ is given by Equation (2.1).
Case I: $d = 2m$, $m = 1, 2, 3, \ldots$ ($d$ even)

Now

$$Y_t = X_t^d = X_t^{2m} \Rightarrow Y_t^2 = X_t^{2d} = X_t^{2(2m)} = X_t^{4m}$$

From Equation (2.1),

$$E(Y_t) = \sigma^{2m} \prod_{k=1}^{m} (2k-1) \quad (2.14)$$

and

$$E(Y_t^2) = \sigma^{4m} \prod_{k=1}^{2m} (2k-1) \quad (2.15)$$

$$\operatorname{Var}(Y_t) = E(Y_t^2) - E^2(Y_t) = \sigma^{4m} \prod_{k=1}^{2m} (2k-1) - \left[\sigma^{2m} \prod_{k=1}^{m} (2k-1)\right]^2 = \sigma^{4m} \left[\prod_{k=1}^{2m} (2k-1) - \left(\prod_{k=1}^{m} (2k-1)\right)^2\right] \quad (2.16)$$
Case II: $d = 2m + 1$, $m = 0, 1, 2, \ldots$ ($d$ odd)

$$Y_t = X_t^d = X_t^{2m+1} \Rightarrow Y_t^2 = X_t^{2d} = X_t^{2(2m+1)}$$

From Equation (2.1),

$$E(Y_t) = 0$$

$$E(Y_t^2) = \sigma^{2(2m+1)} \prod_{k=1}^{2m+1} (2k-1) \quad (2.17)$$

and

$$\operatorname{Var}(Y_t) = E(Y_t^2) - E^2(Y_t) = E(Y_t^2) = \sigma^{2(2m+1)} \prod_{k=1}^{2m+1} (2k-1) \quad (2.18)$$

Generally,

$$\operatorname{Var}(Y_t) = \operatorname{Var}(X_t^d) = \begin{cases} \sigma^{4m} \left[\prod_{k=1}^{2m} (2k-1) - \left(\prod_{k=1}^{m} (2k-1)\right)^2\right], & d = 2m \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1} (2k-1), & d = 2m+1 \end{cases} \quad (2.19)$$
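Equation (2.19) can be checked the same way; this sketch (again our own illustration) evaluates the two branches with plain double factorial products and compares them with the sample variances of $z^d$ from the previous snippet.

```python
import math

def var_power(d, sigma):
    """Var(X_t^d) from (2.19) for X_t ~ N(0, sigma^2)."""
    dfact = lambda j: math.prod(range(1, 2 * j, 2))   # (2j - 1)!!
    if d % 2 == 0:
        m = d // 2
        return sigma ** (4 * m) * (dfact(2 * m) - dfact(m) ** 2)
    m = (d - 1) // 2
    return sigma ** (2 * (2 * m + 1)) * dfact(2 * m + 1)

for d in range(1, 7):
    print(d, var_power(d, sigma), round(np.var(z ** d), 3))
```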
The specific objective of this paper is to investigate whether powers of $X_t, t \in \mathbb{Z}$ are also iid and to determine the distribution of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, especially for $d = 2$. The analytical proofs are provided in Section 2.3.
Theorem 2.3: If $X_t, t \in \mathbb{Z}$ is a linear Gaussian white noise process, then the higher powers $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ are also white noise processes (iid) but not normally distributed. The means, variances and standard deviations of $Y_t = X_t^d$ for $d = 1, 2, \ldots, 10$ (with $\sigma_{Y_t}$ evaluated at $\sigma = 1$) are summarized in the following table.

$d$ | $Y_t$ | $E(Y_t) = \mu_{Y_t}$ | $\operatorname{var}(Y_t) = \sigma_{Y_t}^2$ | $\sigma_{Y_t}$ when $\sigma = 1.0$
---|---|---|---|---
1 | $X_t$ | 0 | $\sigma^2$ | 1.0000
2 | $X_t^2$ | $\sigma^2$ | $2\sigma^4$ | 1.4142
3 | $X_t^3$ | 0 | $15\sigma^6$ | 3.8730
4 | $X_t^4$ | $3\sigma^4$ | $96\sigma^8$ | 9.7980
5 | $X_t^5$ | 0 | $945\sigma^{10}$ | 30.7409
6 | $X_t^6$ | $15\sigma^6$ | $10170\sigma^{12}$ | 100.8464
7 | $X_t^7$ | 0 | $135135\sigma^{14}$ | 367.6071
8 | $X_t^8$ | $105\sigma^8$ | $2016000\sigma^{16}$ | 1419.8591
9 | $X_t^9$ | 0 | $34459425\sigma^{18}$ | 5870.2151
10 | $X_t^{10}$ | $10395\sigma^{10}$ | $653836050\sigma^{20}$ | 25570.2180
Proof:
Since $X_t, t \in \mathbb{Z}$ are iid and $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$, we consider, for $k \neq 0$,

$$R_y(k) = \operatorname{cov}(Y_t, Y_{t-k}) = \operatorname{cov}(X_t^d, X_{t-k}^d) = E(X_t^d X_{t-k}^d) - E(X_t^d) E(X_{t-k}^d) = E(X_t^d) E(X_{t-k}^d) - E(X_t^d) E(X_{t-k}^d) = 0, \quad k \neq 0$$

where the second-to-last equality uses the independence of $X_t$ and $X_{t-k}$. However, for $k = 0$, $R_y(0) = \operatorname{var}(Y_t) = \operatorname{var}(X_t^d)$. Hence

$$R_y(l) = \begin{cases} \sigma^{4m} \left[\prod_{k=1}^{2m} (2k-1) - \left(\prod_{k=1}^{m} (2k-1)\right)^2\right], & d = 2m, \ l = 0 \\ \sigma^{2(2m+1)} \prod_{k=1}^{2m+1} (2k-1), & d = 2m+1, \ l = 0 \\ 0, & l \neq 0 \end{cases} \quad (2.20)$$

It is clear from Equation (2.20) that when $X_t, t \in \mathbb{Z}$ are iid, the powers $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ of $X_t, t \in \mathbb{Z}$ are also iid. That is,

$$R_y(l) = \begin{cases} \operatorname{var}(Y_t), & l = 0 \\ 0, & l \neq 0 \end{cases} \quad (2.21)$$
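Equation (2.21) can be visualized directly: the sample autocorrelations of $X_t^d$ should stay inside the white noise bounds (1.9) for every $d$. A short sketch, reusing `sample_acf` and the simulated series `x` from Section 1:

```python
for d in (1, 2, 3, 4):
    rho_d = sample_acf(x ** d, max_lag=20)
    outside = np.sum(np.abs(rho_d) > 1.96 / np.sqrt(len(x)))
    print(f"d={d}: {outside} of 20 sample autocorrelations outside the bounds")
```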
The probability density function (p.d.f.) of $Y_t = X_t^d$, $d = 1, 2, 3, \ldots$ can be obtained to enable a detailed study of the series. Theorem 2.4 gives the p.d.f. of $Y_t = X_t^2$.

Theorem 2.4: If $X_t, t \in \mathbb{Z}$ is a linear Gaussian white noise process, then $Y_t = X_t^2$ has the p.d.f.

$$g(y) = \begin{cases} \frac{1}{\sigma\sqrt{2\pi}}\, y^{-\frac{1}{2}} e^{-\frac{y}{2\sigma^2}}, & 0 < y < \infty \\ 0, & \text{otherwise} \end{cases} \quad (2.22)$$
Proof:
If $X_t = X \sim N(0, \sigma^2)$ and $Y = X_t^2 = X^2$, the distribution function of $Y$ is, for $y \ge 0$,

$$G(y) = P(X^2 \le y) = P(-\sqrt{y} \le X \le \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{x^2}{2\sigma^2}}\, dx = 2 \int_{0}^{\sqrt{y}} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{x^2}{2\sigma^2}}\, dx$$

Let $x = \sqrt{v}$; then, since $dx = \left(\frac{1}{2\sqrt{v}}\right) dv$, we have

$$G(y) = 2 \int_{0}^{y} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{v}{2\sigma^2}} \cdot \left(\frac{1}{2\sqrt{v}}\right) dv = \int_{0}^{y} \frac{1}{\sigma\sqrt{2\pi}}\, v^{-\frac{1}{2}} e^{-\frac{v}{2\sigma^2}}\, dv$$

Of course $G(y) = 0$ for $y < 0$. The p.d.f. of $Y$ is $g(y) = G'(y)$, and by one form of the fundamental theorem of calculus,

$$g(y) = \begin{cases} \frac{1}{\sigma\sqrt{2\pi}}\, y^{-\frac{1}{2}} e^{-\frac{y}{2\sigma^2}}, & 0 < y < \infty \\ 0, & \text{otherwise} \end{cases}$$

Note that the p.d.f. of $Y_t = X_t^2$ is the p.d.f. of a gamma distribution with parameters $\alpha = \frac{1}{2}$, $\beta = 2\sigma^2$. That is, $Y_t = X_t^2 \sim G(\alpha, \beta)$ with $\alpha = \frac{1}{2}$, $\beta = 2\sigma^2$.
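Since $Y_t = X_t^2 \sim G(1/2, 2\sigma^2)$, the result can be checked against scipy's gamma distribution (shape $a = 1/2$, scale $2\sigma^2$); the snippet below is a minimal sketch with an arbitrarily chosen $\sigma$.

```python
from scipy import stats
import numpy as np

sigma = 2.0
y = np.random.default_rng(2).normal(0.0, sigma, size=200_000) ** 2
dist = stats.gamma(a=0.5, scale=2 * sigma ** 2)   # G(1/2, 2*sigma^2)
print("theoretical mean, var:", dist.mean(), dist.var())  # sigma^2, 2*sigma^4
print("empirical   mean, var:", y.mean(), y.var())
print("Kolmogorov-Smirnov p-value:", stats.kstest(y, dist.cdf).pvalue)
```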
However, for a more detailed study of the behavior of the linear Gaussian white noise process, the coefficients of symmetry and kurtosis for powers of the process are provided in Section 2.4.
Non-normality of higher powers of $X_t, t \in \mathbb{Z}$ ($d = 2, 3, \ldots$) can also be confirmed by the coefficients of symmetry and kurtosis defined by

$$\beta_1 = \frac{\mu_3(d)}{\left(\mu_2(d)\right)^{3/2}} \quad (2.23)$$

$$\beta_2 = \frac{\mu_4(d)}{\left(\mu_2(d)\right)^2} \quad (2.24)$$

where

$$\mu_2(d) = E\left[\left(X_t^d - E(X_t^d)\right)^2\right] = \operatorname{var}(X_t^d) \quad (2.25)$$

$$\mu_3(d) = E\left[\left(X_t^d - E(X_t^d)\right)^3\right] \quad (2.26)$$

and

$$\mu_4(d) = E\left[\left(X_t^d - E(X_t^d)\right)^4\right] \quad (2.27)$$

Note that

$$\mu_3(d) = E(X_t^{3d}) - 3 E(X_t^{2d}) E(X_t^d) + 2 E^3(X_t^d) \quad (2.28)$$

$$\mu_4(d) = E(X_t^{4d}) - 4 E(X_t^{3d}) E(X_t^d) + 6 E(X_t^{2d}) E^2(X_t^d) - 3 E^4(X_t^d) \quad (2.29)$$
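The entries of the table that follows can be reproduced mechanically from (2.1) together with (2.23)–(2.29); a sketch (our own, reusing `factorial2` from the earlier snippet):

```python
def raw_moment(p, sigma=1.0):
    """E(X_t^p) from (2.1)."""
    return sigma ** p * factorial2(p - 1) if p % 2 == 0 else 0.0

def beta_coefficients(d):
    e1, e2, e3, e4 = (raw_moment(j * d) for j in (1, 2, 3, 4))
    mu2 = e2 - e1 ** 2                                        # (2.25)
    mu3 = e3 - 3 * e2 * e1 + 2 * e1 ** 3                      # (2.28)
    mu4 = e4 - 4 * e3 * e1 + 6 * e2 * e1 ** 2 - 3 * e1 ** 4   # (2.29)
    return mu3 / mu2 ** 1.5, mu4 / mu2 ** 2                   # (2.23), (2.24)

for d in range(1, 7):
    b1, b2 = beta_coefficients(d)
    print(d, round(b1, 3), round(b2, 3))   # d = 2 gives 2.828 and 15.0
```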
The skewness and kurtosis coefficients for $d = 1, 2, 3, 4, 5$ and $6$ are given in the following table.

$d$ | $Y_t$ | $E(Y_t)$ ($\mu_y$) | $\mu_2(d)$ ($\operatorname{var}(Y_t)$) | $\mu_3(d)$ | $\mu_4(d)$ | $\beta_1$ | $\beta_2$
---|---|---|---|---|---|---|---
1 | $X_t$ | 0 | $\sigma^2$ | 0 | $3\sigma^4$ | 0 | 3.000
2 | $X_t^2$ | $\sigma^2$ | $2\sigma^4$ | $8\sigma^6$ | $60\sigma^8$ | 2.828 | 15.000
3 | $X_t^3$ | 0 | $15\sigma^6$ | 0 | $10395\sigma^{12}$ | 0 | 46.200
4 | $X_t^4$ | $3\sigma^4$ | $96\sigma^8$ | $9504\sigma^{12}$ | $1907712\sigma^{16}$ | 10.104 | 207.000
5 | $X_t^5$ | 0 | $945\sigma^{10}$ | 0 | $654729075\sigma^{20}$ | 0 | 733.159
6 | $X_t^6$ | $15\sigma^6$ | $10170\sigma^{12}$ | $33998400\sigma^{18}$ | $3.142 \times 10^{11}\sigma^{24}$ | 33.150 | 3037.836
If the noise process is Gaussian (that is, if all of its joint distributions are normal), then stronger conclusions can be drawn when a model is fitted to the data. We have shown that all higher powers ($d \ge 2$) of the linear Gaussian process are non-normal. The only reasonable test, therefore, is one that enables us to check whether the observations are from an iid normal sequence. The Jarque–Bera (JB) test is based on the statistic

$$JB = n \left( \frac{\hat{\beta}_1^2}{6} + \frac{(\hat{\beta}_2 - 3)^2}{24} \right) \quad (3.1)$$

where

$$\hat{\beta}_1 = \frac{\frac{1}{n} \sum_{t=1}^{n} (X_t - \bar{X})^3}{\left( \frac{1}{n} \sum_{t=1}^{n} (X_t - \bar{X})^2 \right)^{3/2}} \quad (3.2)$$

$$\hat{\beta}_2 = \frac{\frac{1}{n} \sum_{t=1}^{n} (X_t - \bar{X})^4}{\left( \frac{1}{n} \sum_{t=1}^{n} (X_t - \bar{X})^2 \right)^{2}} \quad (3.3)$$

$n$ is the sample size, while $\hat{\beta}_1$ and $\hat{\beta}_2$ are the sample skewness and kurtosis coefficients. The asymptotic null distribution of JB is $\chi^2$ with 2 degrees of freedom.
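scipy ships this test as `scipy.stats.jarque_bera`; equivalently, (3.1)–(3.3) can be coded directly, as in this sketch (applied to the simulated series `x` from Section 1 and to its square):

```python
from scipy.stats import jarque_bera

def jb_statistic(x):
    """Jarque-Bera statistic (3.1) from the sample skewness and kurtosis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = x - x.mean()
    s2 = np.mean(c ** 2)
    b1 = np.mean(c ** 3) / s2 ** 1.5   # (3.2)
    b2 = np.mean(c ** 4) / s2 ** 2     # (3.3)
    return n * (b1 ** 2 / 6 + (b2 - 3) ** 2 / 24)

print(jb_statistic(x), jarque_bera(x).statistic)   # the two should agree
print("JB on x**2:", jb_statistic(x ** 2))         # large: X_t^2 is non-normal
```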
We have shown that the sample autocorrelations of $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, 3, \ldots$ are those of a white noise series whenever $X_1, X_2, \ldots, X_n$ is an iid sequence. We therefore adopt the Ljung–Box test, replacing the sample autocorrelations of the data $X_1, X_2, \ldots, X_n$ with those of $X_1^d, X_2^d, \ldots, X_n^d$, $d = 1, 2, 3, \ldots$, and use the statistic

$$Q^*(m) = n(n+2) \sum_{k=1}^{m} \frac{\left[\hat{\rho}_{X^d}(k)\right]^2}{n - k} \quad (3.4)$$

The hypothesis of iid data is then rejected at level $\alpha$ if the observed $Q^*(m)$ is larger than the $1 - \alpha$ quantile of the $\chi^2(m)$ distribution.
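Computationally, this is just the Ljung–Box statistic from Section 1 applied to the powered series; a one-loop sketch using the `portmanteau` helper defined earlier:

```python
for d in (1, 2, 3):
    _, q_star, _ = portmanteau(x ** d, m=max(1, int(np.log(len(x)))))
    print(f"d={d}: Q*(m) = {q_star:.3f}")
```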
The trend fits reported below are compared using three error measures.

Mean Absolute Error (MAE, reported as MAD in the tables below):

$$\text{MAE} = \frac{1}{m} \sum_{i=1}^{m} |\hat{e}_i| \quad (3.5)$$

Mean Absolute Percentage Error (MAPE):

$$\text{MAPE} = \left[ \frac{1}{m} \sum_{i=1}^{m} \left| \frac{\hat{e}_i}{Z_i} \right| \right] \times 100 \quad (3.6)$$

Mean Squared Error (MSE):

$$\text{MSE} = \frac{1}{m} \sum_{i=1}^{m} \hat{e}_i^2 \quad (3.7)$$

where $m$ is the value of $d$ used in the trend analysis and

$$\hat{e}_i = \begin{cases} \hat{\sigma}_{Y_t} - \sigma_{Y_t} & \text{for the standard deviation of } Y_t = X_t^d \\ \hat{\beta}_2 - \beta_2 & \text{for the kurtosis coefficient of } Y_t = X_t^d \end{cases} \quad (3.8)$$
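For the trend analyses reported below, one reasonable reconstruction (ours, not necessarily Minitab's exact procedure) fits the exponential curve by least squares on the log scale and the quadratic curve by a degree-2 polynomial, then evaluates (3.5)–(3.7):

```python
import numpy as np

def fit_errors(d_values, targets):
    """Fit exponential and quadratic trends to (d, target) points and
    return (MAE, MAPE, MSE) for each curve, as in (3.5)-(3.7)."""
    d = np.asarray(d_values, dtype=float)
    z = np.asarray(targets, dtype=float)
    b, log_a = np.polyfit(d, np.log(z), 1)           # z ~ a * exp(b * d)
    exp_fit = np.exp(log_a + b * d)
    quad_fit = np.polyval(np.polyfit(d, z, 2), d)    # z ~ c0 + c1*d + c2*d^2
    out = {}
    for name, fit in (("exponential", exp_fit), ("quadratic", quad_fit)):
        e = z - fit
        out[name] = (np.mean(np.abs(e)),              # MAE / MAD (3.5)
                     np.mean(np.abs(e / z)) * 100.0,  # MAPE (3.6)
                     np.mean(e ** 2))                 # MSE (3.7)
    return out

# Standard deviations of X_t^d for sigma = 1 (see the earlier table), d = 1..4
print(fit_errors([1, 2, 3, 4], [1.0000, 1.4142, 3.8730, 9.7980]))
```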
When $d = 4$, the quadratic growth curve performs better than the exponential curve, with minimal residuals; both curves fitted positive values at the different data points. We also observed the following from the tables below.
Exponential Curve

d | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3
---|---|---|---|---|---|---|---|---
MAD | 1192.79 | 270.02 | 63.70 | 15.80 | 4.14 | 1.44 | 0.43 | 0.29
MAPE | 30.28 | 27.92 | 25.50 | 22.58 | 19.87 | 18.42 | 14.92 | 15.17
MSE | 11,265,334.00 | 518,067.00 | 25,291.80 | 1385.29 | 75.87 | 5.70 | 0.31 | 0.10

Quadratic Curve

d | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3
---|---|---|---|---|---|---|---|---
MAD | 3136.76 | 697.92 | 154.93 | 36.78 | 7.94 | 1.73 | 0.14 | 0.00
MAPE | 91,218.00 | 11,088.40 | 3059.10 | 872.67 | 240.26 | 63.46 | 7.10 | 0.00
MSE | 14,342,392.00 | 664,288.00 | 31,868.30 | 1610.77 | 74.10 | 3.66 | 0.03 | 0.00
d* | $\sigma_{Y_t}$ ($\sigma = 1$) | Exponential fit (4 points) | Residual | Quadratic fit (4 points) | Residual | Exponential fit (3 points) | Residual | Quadratic fit (3 points) | Residual
---|---|---|---|---|---|---|---|---|---
1 | 1.0000 | 0.8333 | 0.1667 | 1.0711 | −0.0711 | 0.8957 | 0.1043 | 1.0000 | 0.0000
2 | 1.4142 | 1.8276 | −0.4134 | 1.2010 | 0.2132 | 1.7627 | −0.3485 | 1.4142 | 0.0000
3 | 3.8730 | 4.0084 | −0.1354 | 4.0862 | −0.2132 | 3.4690 | 0.4040 | 3.8730 | 0.0000
4 | 9.7980 | 8.7916 | 1.0064 | 9.7269 | 0.0711 | | | |
5 | 30.7409 | | | | | | | |
6 | 100.8464 | | | | | | | |
7 | 367.6071 | | | | | | | |
8 | 1419.8591 | | | | | | | |
9 | 5870.2151 | | | | | | | |
10 | 25,570.2180 | | | | | | | |
MAPE | | 14.9181 | | 7.1044 | | 15.1664 | | 0.0000 |
MAD | | 0.4305 | | 0.1422 | | 0.2856 | | 0.0000 |
MSD | | 0.3075 | | 0.0253 | | 0.0986 | | 0.0000 |
*Exponential and quadratic trend analysis is not possible for d = 2 or d = 1.
When $d = 3$, the quadratic growth curve outperforms the exponential growth curve; the resulting quadratic curve yielded zero residuals. The implication of this result is that we obtain a perfect fit of the data points when $d = 3$, for the quadratic curve only. Hence, the optimal value of $d$ is 3 when the standard deviation curve is used.
When $d = 3$, the quadratic growth curve likewise outperforms the exponential growth curve for the kurtosis curve, and the resulting quadratic fit again yielded zero residuals, as with the standard deviation curve. The implication of these results is that we obtain a perfect fit of the data points when $d = 3$ for the quadratic curve only. Hence, the optimal value of $d$ is 3. Therefore, we recommend that, in order to stop the variance from exploding, the data should not be raised to a power greater than three.
We have shown that if $X_t, t \in \mathbb{Z}$ is a linear Gaussian white noise process, then $Y_t = X_t^d$, $d = 1, 2, \ldots$ is also iid but not normally distributed. Using the variances and kurtosis of $Y_t = X_t^d$, we were able to establish that the optimal value of $d$ is three. The variances and kurtosis of $Y_t = X_t^d$ have been given in the tables above; in particular,

$$\operatorname{var}(X_t) = \sigma^2 \quad (3.9)$$

$$\operatorname{var}(X_t^2) = 2\sigma^4 \quad (3.10)$$

and

$$\operatorname{var}(X_t^3) = 15\sigma^6 \quad (3.11)$$
Exponential

*d | 6 | 5 | 4 | 3
---|---|---|---|---
MAD | 4.14 | 1.44 | 0.43 | 0.29
MAPE | 19.87 | 18.42 | 14.92 | 15.17
MSE | 75.87 | 5.70 | 0.31 | 0.10

Quadratic

d | 6 | 5 | 4 | 3
---|---|---|---|---
MAD | 7.94 | 1.73 | 0.14 | 0.00
MAPE | 240.26 | 63.46 | 7.10 | 0.00
MSE | 74.10 | 3.66 | 0.03 | 0.00
*Exponential and quadratic trend analysis is not possible for d = 2 or d = 1.
d | $\beta_2$ ($\sigma = 1$) | Exponential fit (4 points) | Residual | Quadratic fit (4 points) | Residual | Exponential fit (3 points) | Residual | Quadratic fit (3 points) | Residual
---|---|---|---|---|---|---|---|---|---
1 | 3.000 | 3.21 | −0.2188 | 8.52 | −5.52 | 3.2523 | −0.2523 | 3.0 | 0.0
2 | 15.000 | 12.829 | 2.1708 | −1.56 | 16.56 | 12.7630 | 2.2370 | 15.0 | 0.0
3 | 46.200 | 51.134 | −4.9342 | 62.76 | −16.56 | 50.0855 | −3.8855 | 46.0 | 0.0
4 | 207.000 | 203.808 | 3.1922 | 201.48 | 5.52 | | | |
5 | 733.157 | | | | | | | |
6 | 3037.836 | | | | | | | |
MAPE | | 8.4966 | | 83.2277 | | 10.5780 | | 0.00 |
MAD | | 2.6290 | | 11.0400 | | 2.1229 | | 0.00 |
MSD | | 9.8239 | | 152.3520 | | 6.7217 | | 0.00 |
In view of these results, we suggest that the following two null hypotheses be tested before a stochastic process is accepted as a linear Gaussian white noise process:

$$H_{01}: \operatorname{var}(X_t^2) = 2\sigma_0^4 \quad (3.12)$$

and

$$H_{02}: \operatorname{var}(X_t^3) = 15\sigma_0^6 \quad (3.13)$$
The chi-square test statistic for testing (3.12) is

$$\chi_{cal}^2 = \frac{(n-1)\, S_{X_t^2}^2}{2\sigma_0^4} \quad (3.14)$$

while that for (3.13) is

$$\chi_{cal}^2 = \frac{(n-1)\, S_{X_t^3}^2}{15\sigma_0^6} \quad (3.15)$$

where $S_{X_t^2}^2$ and $S_{X_t^3}^2$ are the estimated variances of the second and third powers of the stochastic process, $\sigma_0^2$ is the null value of the true variance of the stochastic process, and $n$ is the number of observations. The null hypothesis is rejected at level $\alpha$ if the observed value of $\chi_{cal}^2$ is larger than the $1 - \frac{\alpha}{2}$ quantile of the chi-square distribution with $n - 1$ degrees of freedom.
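The procedure is the standard one-sample chi-square variance test applied to $X_t^2$ and $X_t^3$; a sketch (with `sigma0` and `alpha` as assumed inputs, applied to the simulated series `x` from Section 1):

```python
from scipy.stats import chi2

def chi_square_variance_test(series, d, sigma0=1.0, alpha=0.05):
    """Test (3.12) (d = 2) or (3.13) (d = 3) via (3.14)/(3.15)."""
    y = np.asarray(series, dtype=float) ** d
    n = len(y)
    s2 = np.var(y, ddof=1)                        # sample variance of X_t^d
    target = {2: 2 * sigma0 ** 4, 3: 15 * sigma0 ** 6}[d]
    chi_cal = (n - 1) * s2 / target               # (3.14) or (3.15)
    critical = chi2.ppf(1 - alpha / 2, df=n - 1)
    return chi_cal, critical, chi_cal > critical  # True => reject H0

print(chi_square_variance_test(x, d=2))
print(chi_square_variance_test(x, d=3))
```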
For an illustration, six (6) random normal series of $n = 100$ observations each were simulated using Minitab 16 (see the Appendix). The simulated series met the following conditions: 1) the simulated series ($X_t$) are normal, and 2) the powers $X_t^d$ are iid but, for $d \ge 2$, not normally distributed (see the table below).
Series S/No | Statistic | Mean | Median | True variance | Estimated variance ($S^2$) | Min | Max | Skewness ($\gamma_1$) | Kurtosis ($\gamma_2$) | JB value | $Q^*$ | $\frac{(n-1)S_{X_t^2}^2}{2\hat{\sigma}_0^4}$ | $\frac{(n-1)S_{X_t^3}^2}{15\hat{\sigma}_0^6}$ | Decision at 5% level
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | $X_t$ | 0.0000 | −0.0011 | 1.0000 | 1.0000 | −2.05 | 2.39 | 0.11 | −0.60 | 1.70 | 1.05 | – | – | Do not reject
 | $X_t^2$ | 0.9900 | 0.5866 | 2.0000 | 1.3546 | 0.00 | 5.71 | 1.82 | 3.60 | 109.21 | 6.33 | 67.05 | – |
 | $X_t^3$ | 0.1079 | 0.0000 | 15.0000 | 7.8106 | −8.60 | 13.66 | 1.63 | 8.73 | 361.84 | 2.55 | – | 51.55 |
2 | $X_t$ | 0.0000 | 0.0131 | 1.0000 | 1.0000 | −2.09 | 2.43 | 0.08 | −0.69 | 2.09 | 0.43 | – | – | Do not reject
 | $X_t^2$ | 0.9900 | 0.4951 | 2.0000 | 1.2681 | 0.00 | 5.90 | 1.72 | 3.39 | 97.19 | 5.04 | 62.77 | – |
 | $X_t^3$ | 0.0753 | 0.0000 | 15.0000 | 7.1472 | −9.12 | 14.32 | 1.05 | 9.38 | 384.98 | 0.21 | – | 47.17 |
3 | $X_t$ | 0.0000 | 0.2008 | 1.0000 | 1.0000 | −2.29 | 2.07 | −0.16 | −0.61 | 1.98 | 3.25 | – | – | Do not reject
 | $X_t^2$ | 0.9900 | 0.5060 | 2.0000 | 1.3493 | 0.00 | 5.25 | 1.79 | 2.74 | 84.68 | 4.84 | 66.79 | – |
 | $X_t^3$ | −0.1592 | 0.0096 | 15.0000 | 7.7045 | −12.03 | 8.93 | −0.74 | 6.30 | 174.50 | 5.80 | – | 50.85 |
4 | $X_t$ | 0.0000 | −0.0543 | 1.0000 | 1.0000 | −3.07 | 2.88 | −0.06 | 0.41 | 0.76 | 0.45 | – | – | Do not reject
 | $X_t^2$ | 0.9900 | 0.4760 | 2.0000 | 2.3030 | 0.00 | 9.44 | 3.27 | 13.81 | 972.87 | 2.56 | 114.00 | – |
 | $X_t^3$ | −0.0627 | −0.0002 | 15.0000 | 19.0055 | −28.99 | 23.90 | −1.32 | 28.04 | 3305.05 | 0.60 | – | 125.43 |
5 | $X_t$ | 0.0000 | 0.0399 | 1.0000 | 1.0000 | −2.75 | 3.13 | −0.03 | 0.46 | 0.90 | 1.64 | – | – | Reject
 | $X_t^2$ | 0.9900 | 0.4353 | 2.0000 | 2.3529 | 0.00 | 9.77 | 3.30 | 13.82 | 977.30 | 2.80 | 116.47 | – |
 | $X_t^3$ | −0.0284 | 0.0001 | 15.0000 | 19.6277 | −20.83 | 30.54 | 1.88 | 27.99 | 3323.24 | 2.59 | – | 129.54 |
6 | $X_t$ | 0.0000 | 0.1302 | 1.0000 | 1.0000 | −2.74 | 3.07 | −0.15 | 0.52 | 1.50 | 3.00 | – | – | Reject
 | $X_t^2$ | 0.9900 | 0.4605 | 2.0000 | 2.4129 | 0.00 | 9.42 | 3.12 | 11.80 | 742.41 | 2.56 | 119.44 | – |
 | $X_t^3$ | −0.1487 | 0.0023 | 15.0000 | 19.5947 | −20.47 | 28.92 | 1.40 | 23.91 | 2414.70 | 0.23 | – | 129.33 |
The values of the chi-square test statistics for testing (3.12) and (3.13) are also shown in the table above.
We have been able to show that if $X_t, t \in \mathbb{Z}$ are iid, then all powers of $X_t, t \in \mathbb{Z}$ are also iid but non-normal. Hence, we computed the kurtosis of some higher powers of $X_t, t \in \mathbb{Z}$ and established that an increase in the power of $X_t, t \in \mathbb{Z}$ leads to an exponential increase in the kurtosis. We recommend that stochastic processes (white noise processes), and processes with a similar covariance structure, be checked for normality and for white noise behavior, and that the variances of the higher powers be tested against the theoretical values in (3.10) and (3.11).
Appendix: the six simulated series ($X_1$ to $X_6$), each of 100 standard normal observations.

S/No | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | $X_6$
---|---|---|---|---|---|---
1 | −0.27398 | −1.02796 | −0.04443 | 0.67426 | −0.84334 | 0.42972 |
2 | −1.02993 | −0.97605 | 0.49527 | 1.43828 | −1.89952 | 1.03306 |
3 | 0.38807 | −1.30594 | 1.95275 | −0.40151 | 0.34148 | 0.80854 |
4 | 0.68088 | 0.09151 | −1.04181 | −3.07185 | 3.12580 | −0.10717 |
5 | −0.96843 | 0.62066 | −0.57864 | 0.109 | −0.23441 | −0.9846 |
6 | 1.39035 | 1.05129 | 0.28400 | −1.52629 | −1.40929 | −2.04065 |
7 | 1.81134 | −0.6788 | −1.40899 | −0.53151 | −0.17057 | 1.12873 |
8 | −1.3766 | 0.97448 | 0.89222 | 1.57008 | 1.01262 | −0.11163 |
9 | −0.24121 | 1.77527 | 0.02342 | 0.72712 | −0.17059 | −0.80648 |
10 | −1.45076 | −0.13678 | 0.29285 | −0.10475 | 0.66291 | −1.08512 |
11 | −0.25423 | −0.46946 | −1.95159 | −0.08747 | 0.20546 | 0.07242 |
12 | 0.21163 | 0.82766 | −0.68752 | 1.07637 | −1.34176 | −2.50489 |
13 | 1.34799 | −0.56029 | 0.78114 | −1.89811 | −0.95515 | 0.17464 |
14 | −0.29782 | 0.01628 | −0.66970 | −0.2508 | −0.56939 | −0.86345 |
15 | 0.62809 | 0.20895 | −0.44001 | 0.93703 | 0.65664 | 0.77652 |
16 | −1.6913 | −0.946 | −0.04784 | −0.3515 | 0.91394 | 0.49688 |
17 | 0.4933 | 0.96825 | −1.13509 | 1.44387 | −1.35495 | 0.38705 |
18 | −0.51967 | 0.22284 | −0.04708 | 0.48667 | 0.02011 | −0.35363 |
19 | −0.6396 | 0.76324 | 1.23312 | 0.84948 | 0.20669 | 0.37068 |
20 | −0.82868 | 0.58037 | 0.29271 | −1.27291 | −0.60221 | 0.51689 |
21 | −1.11643 | 0.65455 | −0.50167 | −0.46987 | −0.03738 | 0.73852 |
22 | −1.44951 | −1.59485 | −0.73051 | 0.31361 | 0.78300 | 0.22635 |
23 | −1.16781 | −0.83839 | −0.89062 | 0.86961 | 1.02946 | −0.30452 |
24 | 0.5073 | −0.68632 | 1.32991 | −0.62985 | −0.48457 | 0.75797 |
25 | 0.87357 | 0.52189 | 0.46167 | −1.7023 | 1.26638 | 0.58846 |
26 | 0.92886 | 0.00997 | −0.67989 | −0.13366 | −0.37355 | −0.58715 |
27 | −0.19538 | 1.14368 | −0.64697 | 0.8744 | 1.00173 | 0.39232 |
28 | −0.89347 | −0.27941 | 0.44869 | −0.76926 | −1.04180 | −1.36701 |
29 | 0.22841 | 1.19672 | −2.29155 | −0.98832 | −0.03484 | 0.63325 |
30 | −0.41321 | 0.66025 | −0.62024 | 0.81164 | −2.27280 | 0.91453 |
31 | 0.24934 | 1.75558 | −1.96544 | 0.9269 | −2.36826 | 0.71918 |
32 | 2.24352 | 0.061 | −1.14678 | 0.23412 | 0.58710 | 0.62407 |
33 | −0.43648 | −1.90088 | −0.59296 | −1.43724 | −0.83297 | 0.91071 |
34 | −0.47532 | 1.40511 | −1.98847 | −0.94486 | 1.61033 | 1.14803 |
35 | −1.26658 | −0.24919 | 1.49152 | 1.36682 | 0.39868 | −1.06265 |
36 | 0.46604 | −0.46125 | 0.99116 | −0.86239 | 0.84830 | 0.33544 |
37 | −0.26797 | −0.64382 | 1.57322 | 0.97428 | −0.28943 | −0.90818 |
38 | −1.8616 | −1.20993 | 0.31967 | −1.22535 | 0.14880 | −0.15342 |
39 | −0.79105 | 0.60132 | 0.09620 | 0.10762 | 0.05979 | −1.01534 |
40 | −0.7376 | −0.12083 | −1.23366 | −0.80141 | −0.13743 | −2.73551 |
41 | −0.54908 | −2.08959 | −0.96486 | 1.57005 | −0.24971 | −0.24047 |
42 | 0.75899 | −0.0693 | 0.98989 | −1.94304 | 1.48971 | 0.83852 |
43 | 0.87974 | 0.39937 | 0.66662 | −0.33209 | 0.11830 | −0.13159 |
44 | −1.56767 | −1.2644 | 0.25153 | 0.25179 | 0.57021 | 0.3024 |
45 | 0.88676 | −0.17061 | 0.73065 | −1.12438 | 0.21618 | −0.7871 |
46 | −0.83478 | −0.96567 | −1.49011 | −0.70519 | −0.01597 | −0.87175 |
47 | −0.09571 | −0.44299 | −0.98312 | −0.92953 | −0.43570 | −0.63546 |
48 | 0.08933 | −0.41813 | 0.61319 | −1.00549 | 1.60558 | −1.20903 |
49 | 1.03336 | −0.72059 | 0.91105 | −0.04879 | −0.88526 | 0.18635 |
50 | −1.63874 | 1.65666 | 1.05754 | −0.10511 | −0.73240 | 0.11214 |
51 | 0.13195 | 0.24313 | 0.83947 | −0.37358 | 0.94916 | −1.12998 |
52 | 0.13345 | 1.67588 | 0.34752 | 0.23772 | −2.75144 | 0.22946 |
53 | −0.04943 | −0.68234 | −0.69456 | −0.08023 | 1.32076 | 1.74814 |
54 | −0.18236 | 0.26408 | 1.23475 | 0.47796 | −0.55622 | 0.52767 |
55 | −0.26388 | 1.14863 | −2.04852 | −0.51304 | −0.25991 | 0.17793 |
56 | −0.12861 | 0.54258 | −0.54983 | 0.91927 | −0.29258 | 2.04162 |
57 | −0.70432 | −0.65895 | 0.52073 | 0.52957 | 0.27476 | −0.26149 |
58 | −1.72085 | −0.08292 | 1.08228 | −0.94107 | 0.20609 | −0.29193 |
59 | −1.32903 | 0.13364 | 1.20236 | −0.02343 | 0.57154 | −0.51553 |
60 | −1.20925 | −0.87405 | −1.04843 | 2.88022 | 0.12533 | −1.2401 |
61 | 0.49597 | 0.02139 | 0.15003 | 1.47823 | 0.67854 | −0.15581 |
62 | 0.95511 | −0.21064 | 0.87717 | 0.33566 | 0.10858 | −0.08128 |
63 | 0.25296 | −1.26454 | −0.30127 | 0.73055 | 0.43881 | 0.18683 |
64 | 0.81087 | 1.29401 | −1.00489 | 0.57767 | −1.16929 | 1.07444 |
65 | 2.06072 | 1.4557 | 0.32523 | −0.32369 | −0.54597 | −0.8368 |
66 | 2.39035 | −0.727 | −0.07202 | 0.41405 | 1.18591 | 0.44699 |
67 | −1.38261 | 0.97672 | 0.72710 | −0.61505 | 1.21889 | −0.26585 |
68 | −0.76678 | −1.25025 | −1.10466 | −0.67036 | 1.72606 | 1.26778 |
69 | 1.16598 | 0.66914 | −0.49042 | −0.40702 | −0.98953 | 0.05222 |
70 | 1.45608 | 0.22788 | −1.19467 | 0.28835 | −0.04517 | 1.44719 |
71 | 0.03912 | −0.64965 | 0.68138 | 1.18748 | 1.77876 | −1.28748 |
72 | 0.41341 | 0.81042 | 0.46675 | −0.86381 | 0.26484 | −1.61369 |
73 | 0.20976 | −1.30694 | 0.39714 | −0.10127 | −0.83961 | 0.53758 |
74 | 0.54664 | 1.62919 | −0.63787 | −0.49827 | −0.21413 | −0.75779 |
75 | 0.2277 | 1.47017 | 0.33296 | 0.38573 | 1.54837 | 1.49182 |
76 | 0.43397 | 2.42827 | 0.90047 | −0.08696 | 1.11924 | 0.74011 |
77 | 1.03468 | −1.77708 | −0.03324 | −1.33189 | −1.16183 | −0.06952 |
78 | 0.92753 | 0.07674 | 1.36678 | −0.0266 | −0.12475 | 0.8712 |
79 | −2.04885 | 0.59972 | −0.41621 | −0.32919 | −1.21666 | −0.57515 |
80 | 1.23434 | −0.39571 | 2.07453 | 1.93271 | −0.37863 | 1.49873 |
81 | 1.74502 | −0.67093 | 0.69519 | −0.30482 | 0.17154 | 0.52483 |
82 | −0.3303 | −1.15588 | −0.91268 | 1.10958 | −1.03211 | −1.69178 |
83 | 1.22417 | −1.19194 | 0.60643 | 0.81764 | 1.04171 | 0.14834 |
84 | −1.39076 | 0.27032 | −0.29833 | 0.16774 | 0.90110 | 1.72858 |
85 | 1.2308 | 1.00547 | 1.75159 | 0.8735 | 0.06824 | −0.76692 |
86 | −1.01361 | 0.32435 | 0.54000 | 0.19267 | 0.52393 | 1.39012 |
87 | 1.31721 | 0.96086 | 0.60794 | −0.24791 | 1.59886 | −1.60376 |
88 | 0.0169 | 0.66278 | 0.45064 | −1.2737 | −1.18518 | −0.51405 |
89 | 0.68989 | −1.13499 | 1.32501 | −0.05978 | 0.21521 | −2.13481 |
90 | −0.44958 | −0.61601 | 0.11542 | −1.41891 | 0.21991 | 0.04175 |
91 | −0.89708 | 1.06236 | 0.28849 | 1.87618 | 0.37278 | −0.94765 |
92 | 0.38987 | 1.84019 | −1.67447 | −2.01358 | −0.97390 | 0.78005 |
93 | −0.73121 | 0.29223 | 1.03518 | −0.88304 | −1.43246 | 0.37597 |
94 | −0.68488 | −1.8725 | −1.02913 | 0.62784 | −0.92247 | 0.32093 |
95 | −0.01909 | −0.4742 | −0.89422 | 0.04727 | 0.13853 | 3.06963 |
96 | 1.45817 | −1.07199 | −1.32477 | 1.92723 | −0.36939 | −1.28983 |
97 | 0.89708 | −1.69795 | −1.37860 | 0.06466 | 1.08810 | −0.22214 |
98 | 0.79947 | −1.33792 | 0.30006 | 0.66493 | −1.27345 | 0.51469 |
99 | −0.76504 | 1.23803 | 0.43708 | 0.75755 | −1.22752 | 0.20206 |
100 | 0.61205 | −0.15894 | 2.02864 | −0.0729 | −0.02931 | 0.06008 |