Open Journal of Statistics
Vol. 08, No. 01 (2018), Article ID: 82593, 13 pages
10.4236/ojs.2018.81010

A Chi-Square Approximation for the F Distribution

L. Jiang1, Augustine Wong2

1Beijing Education Examinations Authority, Beijing, China

2Department of Mathematics and Statistics, York University, Toronto, Canada

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 18, 2018; Accepted: February 21, 2018; Published: February 24, 2018

ABSTRACT

The F distribution is one of the most frequently used distributions in statistics. For example, it is used for testing the equality of variances of two independent normal distributions, the equality of means in the one-way ANOVA setting, the overall significance of a normal linear regression model, and so on. In this paper, a simple chi-square approximation for the cumulative distribution function of the F distribution is obtained via an adjusted log-likelihood ratio statistic. This new approximation exhibits remarkable accuracy even when the degrees of freedom of the F distribution are small.

Keywords:

Bartlett Correction, Homoscedasticity, Likelihood Ratio Statistic, One-Way ANOVA

1. Introduction

The F distribution is one of the most frequently used distributions in statistics, and it arises in many practical situations. For example, the test statistic for testing equality of variances of two independent normal distributions is distributed as an F distribution. As another example, the test statistic for testing equality of means of k independent normal distributions with homogeneous variance is also distributed as an F distribution.

Johnson and Kotz [1] give a comprehensive review of approximations to the cumulative distribution function (cdf) of the F distribution. Li and Martin [2] propose a shrinking factor approximation method and approximate the cdf of the F distribution by the cdf of the $\chi^2$ distribution. On the other hand, considering the problem of testing equality of variances of two independent normal distributions, Wong [3] derives the modified signed log-likelihood ratio statistic. As a result, a normal approximation for the cdf of the F distribution is obtained. The approximation by Wong [3] has a theoretical order of convergence $O(n^{-3/2})$.

In this paper, we consider the problem of testing equality of means of k independent normal distributions with homogeneous variance. Rather than taking the standard one-way ANOVA approach, we derive an adjusted log-likelihood ratio statistic, which is asymptotically distributed as a $\chi^2$ distribution and whose mean is exactly the same as the mean of that $\chi^2$ distribution. As a result, a very accurate new $\chi^2$ approximation for the cdf of the F distribution is obtained.

2. Bartlett Corrected Log-Likelihood Ratio Statistic

Let $(Y_1, \ldots, Y_n)$ be independent and identically distributed random variables with joint log-likelihood function $l(\theta)$, where $\theta$ is a $p$-dimensional parameter vector. A frequently used asymptotic method for testing the hypothesis

$H_0: \psi(\theta) = \psi_0 \quad \text{vs} \quad H_a: \psi(\theta) \neq \psi_0,$ (1)

is based on the asymptotic distribution of the log-likelihood ratio statistic. In particular, the log-likelihood ratio statistic is defined as

$W = 2\{l(\hat{\theta}) - l(\tilde{\theta})\},$

where $\hat{\theta}$ is the unconstrained maximum likelihood estimator of $\theta$, obtained by maximizing the log-likelihood function with respect to $\theta$, and $\tilde{\theta}$ is the constrained maximum likelihood estimator of $\theta$, obtained by maximizing the log-likelihood function with respect to $\theta$ subject to the constraint $\psi(\theta) = \psi_0$. Generally, this constrained maximum likelihood estimator can be obtained by the Lagrange multiplier method. Under the regularity conditions stated in Cox and Hinkley [4], it is well known that $W$ is asymptotically distributed as a $\chi^2_r$ distribution, where the degrees of freedom $r$ is the difference between the number of parameters estimated without the constraint and the number estimated under the constraint. Hence, the observed level of significance for testing the hypothesis in (1) is $P(\chi^2_r > w)$, where $w$ is the observed value of the log-likelihood ratio statistic $W$. Note that Cox and Hinkley [4] show that this method of obtaining the observed level of significance has order of convergence of only $O(n^{-1/2})$.

There exist many different ways of improving the accuracy of the convergence of the log-likelihood ratio statistic. Barndorff-Nielsen and Cox [5] and Brazzale et al. [6] give detailed reviews of some higher-order asymptotic methods and their applications. Recently, Davison et al. [7] derived a directional test for a vector parameter of interest in linear exponential families. The method is quite complicated, both in theory and in computation.

In this paper, we propose a statistic that is very similar to the Bartlett corrected log-likelihood ratio statistic. Bartlett [8] [9] showed that the expected value of $W$ can be expressed as

$E(W) = r\left(1 + \frac{b}{n} + O(n^{-2})\right),$

where $b$ is known as the Bartlett factor. Since $E(W)$ does not equal the mean of the $\chi^2_r$ distribution, Bartlett [8] [9] proposed to adjust the log-likelihood ratio statistic to

$W^* = \frac{W}{1 + b/n}$

such that $E(W^*) = r$ with rate of convergence $O(n^{-2})$. Lawley [10] showed that, in fact, all cumulants of $W^*$ agree with those of a $\chi^2_r$ distribution to the same order. Lawley's proof is very complicated; Barndorff-Nielsen and Cox [11] discuss a much simpler derivation based on the saddlepoint approximation. However, the Bartlett factor $b$ is, in general, very difficult to obtain, which has limited the use of the Bartlett corrected log-likelihood ratio statistic in applied statistics.

In this paper, we propose to adjust the log-likelihood ratio statistic $W$ so that the adjusted log-likelihood ratio statistic has exactly the same mean as the $\chi^2_r$ distribution. In other words, let

$W^{\dagger} = \frac{W}{E(W)/r}.$ (2)

$W^{\dagger}$ is asymptotically distributed as a $\chi^2_r$ distribution. Thus, the observed level of significance for testing the hypothesis in (1) is $P(\chi^2_r > w^{\dagger})$, where $w^{\dagger}$ is the observed value of $W^{\dagger}$. Note that this adjusted log-likelihood ratio statistic is simply a modified version of the Bartlett corrected log-likelihood ratio statistic.
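To illustrate (2) concretely, the following Python sketch (not from the paper) applies the adjustment to a hypothetical one-parameter problem: testing $H_0: \theta = \theta_0$ for an Exponential($\theta$) sample, with $r = 1$ and $E(W)$ estimated by Monte Carlo under the null. In the application of Section 3, $E(W)$ is instead available by exact numerical integration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def log_lik_ratio(x, theta0):
    # W = 2{l(theta_hat) - l(theta0)} for an Exponential(rate theta) sample,
    # where l(theta) = n log(theta) - theta * sum(x) and theta_hat = n / sum(x).
    n, s = len(x), x.sum()
    theta_hat = n / s
    return 2.0 * ((n * np.log(theta_hat) - theta_hat * s)
                  - (n * np.log(theta0) - theta0 * s))

n, theta0, r = 10, 1.0, 1

# Monte Carlo estimate of E(W) under H0 (the paper obtains E(W) exactly, via (5)).
sims = np.array([log_lik_ratio(rng.exponential(1.0 / theta0, n), theta0)
                 for _ in range(20000)])
e_w = sims.mean()

x = rng.exponential(1.0 / theta0, n)   # an "observed" sample
w = log_lik_ratio(x, theta0)
w_adj = w / (e_w / r)                  # the adjusted statistic in (2)

print("estimated E(W):", e_w)
print("p-value from W:", stats.chi2.sf(w, r))
print("p-value from adjusted W:", stats.chi2.sf(w_adj, r))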

In the next section, the proposed adjusted log-likelihood ratio statistic for testing the equality of means of k homoscedastic normally distributed populations is derived. By comparison with the standard F-test in the one-way ANOVA approach, an approximation of the cdf of the F distribution is obtained.

3. Main Result

Let $X_{ij}$ be independent normally distributed random variables with mean $\mu_i$ and a common variance $\sigma^2$, where $i = 1, \ldots, k$ and $j = 1, \ldots, n_i$. Our aim is to test

$H_0: \mu_1 = \cdots = \mu_k = \mu \quad \text{vs} \quad H_a: \text{the means are not all the same}.$ (3)

From the one-way ANOVA approach, we have the sum of squares decomposition

$SST = SSTr + SSE$, that is,

$\sum_{i=1}^{k}\sum_{j=1}^{n_i}(X_{ij} - \bar{X})^2 = \sum_{i=1}^{k} n_i(\bar{X}_i - \bar{X})^2 + \sum_{i=1}^{k}\sum_{j=1}^{n_i}(X_{ij} - \bar{X}_i)^2,$

and the degrees of freedom are

$df_{Tr} = k - 1, \qquad df_E = \sum_{i=1}^{k} n_i - k.$

For testing the hypothesis in (3), the F-test is used. Denote the test statistic as

$F^* = \frac{SSTr/df_{Tr}}{SSE/df_E}.$ (4)

It is well known that $F^*$ is distributed as the F distribution with degrees of freedom $(df_{Tr}, df_E)$. Hence, the observed level of significance for testing the hypothesis in (3) is $P(F_{df_{Tr}, df_E} > f^*)$, where $f^*$ is the observed value of $F^*$.
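As a small illustration (the data below are made up, not from the paper), the following Python sketch computes $SSTr$, $SSE$, the statistic $F^*$ in (4), and the usual p-value; scipy's built-in one-way ANOVA is shown only as a cross-check.

import numpy as np
from scipy import stats

# Hypothetical samples from k = 3 groups.
groups = [np.array([4.2, 5.1, 3.8, 4.9]),
          np.array([5.0, 5.6, 4.7]),
          np.array([3.9, 4.4, 4.1, 4.5, 4.0])]

k = len(groups)
n_i = np.array([len(g) for g in groups])
grand_mean = np.concatenate(groups).mean()

ss_tr = sum(n * (g.mean() - grand_mean) ** 2 for n, g in zip(n_i, groups))
ss_e = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_tr, df_e = k - 1, n_i.sum() - k

f_star = (ss_tr / df_tr) / (ss_e / df_e)      # the statistic (4)
p_value = stats.f.sf(f_star, df_tr, df_e)     # P(F_{dfTr, dfE} > f*)

print(f_star, p_value)
print(stats.f_oneway(*groups))                # cross-check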

From the likelihood analysis point of view, let $\theta = (\mu_1, \ldots, \mu_k, \sigma^2)$, and the log-likelihood function can be written as

$l(\theta) = l(\mu_1, \ldots, \mu_k, \sigma^2) = \sum_{i=1}^{k}\left[-\frac{n_i}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{j=1}^{n_i}(X_{ij} - \mu_i)^2\right].$

It can be shown that the unconstrained maximum likelihood estimator is $\hat{\theta} = (\hat{\mu}_1, \ldots, \hat{\mu}_k, \hat{\sigma}^2)$, where

$\hat{\mu}_1 = \bar{X}_1, \; \ldots, \; \hat{\mu}_k = \bar{X}_k, \qquad \hat{\sigma}^2 = \frac{SSE}{n_1 + \cdots + n_k}.$

Therefore

$l(\hat{\theta}) = -\frac{n_1 + \cdots + n_k}{2}\log\hat{\sigma}^2 - \frac{n_1 + \cdots + n_k}{2}.$

When the null hypothesis in (3) is true, the log-likelihood function can be written as

$l(\mu, \ldots, \mu, \sigma^2) = \sum_{i=1}^{k}\left[-\frac{n_i}{2}\log\sigma^2 - \frac{1}{2\sigma^2}\sum_{j=1}^{n_i}(X_{ij} - \mu)^2\right],$

and the constrained maximum likelihood estimator is $\tilde{\theta} = (\tilde{\mu}, \ldots, \tilde{\mu}, \tilde{\sigma}^2)$, where

$\tilde{\mu} = \bar{X}, \qquad \tilde{\sigma}^2 = \frac{SST}{n_1 + \cdots + n_k}.$

Thus, we have

$l(\tilde{\theta}) = l(\tilde{\mu}, \ldots, \tilde{\mu}, \tilde{\sigma}^2) = -\frac{n_1 + \cdots + n_k}{2}\log\tilde{\sigma}^2 - \frac{n_1 + \cdots + n_k}{2}.$

Therefore, the log-likelihood ratio statistic is

$W = (n_1 + \cdots + n_k)\log\frac{SST}{SSE} = (df_{Tr} + df_E + 1)\log\left(1 + \frac{df_{Tr}}{df_E}F^*\right)$ (noting that $n_1 + \cdots + n_k = df_{Tr} + df_E + 1$ and $SST/SSE = 1 + \frac{df_{Tr}}{df_E}F^*$),

and $W$ is asymptotically distributed as a $\chi^2$ distribution with $df_{Tr}$ degrees of freedom.
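The two expressions for $W$ above can be verified numerically; the short sketch below (reusing made-up data) checks that $(n_1 + \cdots + n_k)\log(SST/SSE)$ and $(df_{Tr} + df_E + 1)\log\left(1 + \frac{df_{Tr}}{df_E}F^*\right)$ agree.

import numpy as np

groups = [np.array([4.2, 5.1, 3.8, 4.9]),
          np.array([5.0, 5.6, 4.7]),
          np.array([3.9, 4.4, 4.1, 4.5, 4.0])]

n_i = [len(g) for g in groups]
N, k = sum(n_i), len(groups)
grand_mean = np.concatenate(groups).mean()

ss_t = sum(((g - grand_mean) ** 2).sum() for g in groups)   # SST
ss_e = sum(((g - g.mean()) ** 2).sum() for g in groups)     # SSE
ss_tr = ss_t - ss_e                                         # SSTr
df_tr, df_e = k - 1, N - k
f_star = (ss_tr / df_tr) / (ss_e / df_e)

w1 = N * np.log(ss_t / ss_e)
w2 = (df_tr + df_e + 1) * np.log(1 + (df_tr / df_e) * f_star)
print(w1, w2)   # equal up to rounding, since dfTr + dfE + 1 = N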

Our proposed method requires $E(W)$. Since $F^*$ is distributed as the F distribution with $(df_{Tr}, df_E)$ degrees of freedom,

$E(W) = \int_0^{\infty} (df_{Tr} + df_E + 1)\log\left(1 + \frac{df_{Tr}}{df_E}y\right) g(y; df_{Tr}, df_E)\, dy,$ (5)

where $g(y; df_{Tr}, df_E)$ is the probability density function of the F distribution with degrees of freedom $(df_{Tr}, df_E)$. Therefore, the observed level of significance for testing the hypothesis in (3) based on the proposed adjusted log-likelihood ratio statistic is

$P\left(\chi^2_{df_{Tr}} > \frac{(df_{Tr} + df_E + 1)\log\left(1 + \frac{df_{Tr}}{df_E}f^*\right)}{E(W)/df_{Tr}}\right),$

where $E(W)$ is defined in (5) and $f^*$ is the observed value of the test statistic given in (4).
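A possible numerical recipe for (5) and for the adjusted p-value above is sketched below in Python; the values of $df_{Tr}$, $df_E$, and $f^*$ are arbitrary placeholders.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def expected_w(df_tr, df_e):
    # E(W) in (5): integrate (dfTr + dfE + 1) log(1 + (dfTr/dfE) y) against the
    # F(dfTr, dfE) density over (0, infinity).
    integrand = lambda y: ((df_tr + df_e + 1)
                           * np.log1p(df_tr / df_e * y)
                           * stats.f.pdf(y, df_tr, df_e))
    value, _ = quad(integrand, 0, np.inf)
    return value

df_tr, df_e, f_star = 2, 12, 3.89                      # placeholder values
e_w = expected_w(df_tr, df_e)

w = (df_tr + df_e + 1) * np.log1p(df_tr / df_e * f_star)
p_adjusted = stats.chi2.sf(w / (e_w / df_tr), df_tr)   # proposed p-value
p_exact = stats.f.sf(f_star, df_tr, df_e)              # exact F-test p-value

print(e_w, p_adjusted, p_exact)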

Re-indexing the above approximation, let $X$ be distributed as the $F_{u,v}$ distribution, where $(u, v)$ are the corresponding degrees of freedom. Then the cdf of $X$ is $P(F_{u,v} \le x)$ for $x > 0$. Hence, the log-likelihood ratio statistic is

$W = (u + v + 1)\log\left(1 + \frac{u}{v}X\right).$

Since $W$ is asymptotically distributed as a $\chi^2_u$ distribution, we have

$P(F_{u,v} \le x) \approx P\left(\chi^2_u \le (u + v + 1)\log\left(1 + \frac{u}{v}x\right)\right).$

However, this approximation has order of convergence of only $O(n^{-1/2})$.

The proposed approach gives

$W^{\dagger} = \frac{W}{E(W)/u} = \frac{W}{b(u,v)},$

where

$b(u,v) = \frac{E\left[(u + v + 1)\log\left(1 + \frac{u}{v}X\right)\right]}{u} = \frac{\int_0^{\infty}(u + v + 1)\log\left(1 + \frac{u}{v}x\right) g(x; u, v)\, dx}{u}.$

As a result,

$P(F_{u,v} \le x) \approx P\left(\chi^2_u \le \frac{(u + v + 1)\log\left(1 + \frac{u}{v}x\right)}{b(u,v)}\right).$

Note that $b(u,v)$ does not have a closed-form solution, but it can be obtained numerically with software such as R, Maple, or Matlab. Table 1 records some values of $b(u,v)$ for $u \le v$. Moreover,

$\lim_{v \to \infty} b(u,v) = 1 \quad \text{and} \quad \lim_{u \to \infty} b(u,v) = \infty.$

Hence, the proposed approximation becomes problematic when $u$ is large. Nevertheless, the $F_{u,v}$ distribution has the inverse property

$P(F_{u,v} \le x) = 1 - P(F_{v,u} \le 1/x),$

which can be applied to obviate this problem.

Table 1. Values of $b(u,v)$ for $u \le v$.

Thus, the proposed approximation is:

$P(F_{u,v} \le x) \approx \begin{cases} P\left(\chi^2_u \le \dfrac{(u + v + 1)\log\left(1 + \frac{u}{v}x\right)}{b(u,v)}\right) & \text{if } u \le v \\[2ex] 1 - P\left(\chi^2_v \le \dfrac{(v + u + 1)\log\left(1 + \frac{v}{u}\cdot\frac{1}{x}\right)}{b(v,u)}\right) & \text{if } u > v \end{cases}$ (6)
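A minimal Python implementation of (6), with $b(u,v)$ evaluated by numerical integration and the result compared against the exact cdf, might look as follows; the evaluation points are arbitrary.

import numpy as np
from scipy import stats
from scipy.integrate import quad

def b_factor(u, v):
    # b(u, v) = E[(u + v + 1) log(1 + (u/v) X)] / u, with X ~ F_{u,v}.
    integrand = lambda x: ((u + v + 1) * np.log1p(u / v * x)
                           * stats.f.pdf(x, u, v))
    value, _ = quad(integrand, 0, np.inf)
    return value / u

def f_cdf_approx(x, u, v):
    # Proposed chi-square approximation (6) to P(F_{u,v} <= x).
    if u <= v:
        w = (u + v + 1) * np.log1p(u / v * x)
        return stats.chi2.cdf(w / b_factor(u, v), u)
    # For u > v, use the inverse property P(F_{u,v} <= x) = 1 - P(F_{v,u} <= 1/x).
    w = (v + u + 1) * np.log1p(v / u / x)
    return 1.0 - stats.chi2.cdf(w / b_factor(v, u), v)

for (u, v, x) in [(1, 10, 2.5), (2, 2, 4.0), (15, 2, 3.0)]:
    print(u, v, x, f_cdf_approx(x, u, v), stats.f.cdf(x, u, v))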

4. Numerical Comparisons

Wong [3] gives a simple and accurate normal approximation to the cdf of the $F_{u,v}$ distribution, which has order of convergence $O(n^{-3/2})$. It takes the form

$P(F_{u,v} \le x) \approx \Phi\left(r - \frac{1}{r}\log\frac{r}{q}\right),$

where $\Phi(\cdot)$ is the cdf of the standard normal distribution,

$r = \operatorname{sgn}(x - 1)\left\{(u + v)\log\frac{ux + v}{u + v} - u\log x\right\}^{1/2},$

$q = \frac{x - 1}{ux + v}\left\{\frac{uv(u + v)}{2}\right\}^{1/2}.$

It is of interest to compare the proposed method with the approximation by Wong [3].
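For such a comparison, the sketch below implements Wong's approximation from the formulas above and reports its relative error (defined later in this section) at a few arbitrary points.

import numpy as np
from scipy import stats

def wong_cdf_approx(x, u, v):
    # P(F_{u,v} <= x) ~ Phi(r - (1/r) log(r/q)); not defined at x = 1, where r = 0.
    r = np.sign(x - 1) * np.sqrt((u + v) * np.log((u * x + v) / (u + v))
                                 - u * np.log(x))
    q = (x - 1) / (u * x + v) * np.sqrt(u * v * (u + v) / 2)
    return stats.norm.cdf(r - (1 / r) * np.log(r / q))

for (u, v, x) in [(1, 2, 0.5), (2, 10, 3.0), (10, 2, 0.8)]:
    exact = stats.f.cdf(x, u, v)
    approx = wong_cdf_approx(x, u, v)
    print(u, v, x, approx, (approx - exact) / exact)   # relative error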

Figures 1(a)-8(a) plot the cumulative distribution functions of the $F_{u,v}$ distribution for various $u$ and $v$ obtained by the exact method, the approximation by Wong [3], and the proposed method. The differences between the two approximate cumulative distribution functions and the exact cumulative distribution function are barely noticeable. To explore the accuracy of the two approximations, we examine the relative error, which is defined as

Figure 1. (a) cdf with (u,v) = (1,1); (b) Relative error.

Figure 2. (a) cdf with (u,v) = (1,2); (b) Relative error.

Figure 3. (a) cdf with (u,v) = (1,10); (b) Relative error.

Figure 4. (a) cdf with (u,v) = (2,1); (b) Relative error.

Figure 5. (a) cdf with (u,v) = (2,2); (b) Relative error.

Figure 6. (a) cdf with (u,v) = (2,10); (b) Relative error.

Figure 7. (a) cdf with (u,v) = (10,2); (b) Relative error.

Figure 8. (a) cdf with (u,v) = (15,2); (b) Relative error.

$\text{relative error} = \frac{\text{approximation} - \text{exact}}{\text{exact}}.$

Figures 1(b)-8(b) plot the corresponding relative errors. It is clear that the proposed method outperforms the approximation by Wong [3] in all cases considered.

5. Conclusion

In this paper, a simple chi-square approximation to the cumulative distribution function of the F distribution is obtained via an adjusted log-likelihood ratio statistic. Numerical comparisons illustrate that the new approximation outperforms the higher-order asymptotic method discussed in Wong [3], regardless of how small the degrees of freedom are.

Cite this paper

Jiang, L. and Wong, A. (2018) A Chi-Square Approximation for the F Distribution. Open Journal of Statistics, 8, 146-158. https://doi.org/10.4236/ojs.2018.81010

References

1. Johnson, N. and Kotz, S. (1994) Continuous Univariate Distributions, Volume 2. John Wiley & Sons, New York.

2. Li, B. and Martin, E.B. (2002) An Approximation to the F Distribution Using the Chi-Square Distribution. Computational Statistics and Data Analysis, 40, 21-26. https://doi.org/10.1016/S0167-9473(01)00097-4

3. Wong, A. (2008) Approximating the F Distribution via a General Version of the Modified Signed Log-Likelihood Ratio Statistic. Computational Statistics and Data Analysis, 52, 3902-3912. https://doi.org/10.1016/j.csda.2008.01.007

4. Cox, D.R. and Hinkley, D.V. (1997) Theoretical Statistics. Cambridge University Press, Cambridge.

5. Barndorff-Nielsen, O.E. and Cox, D.R. (1994) Inference and Asymptotics. Chapman and Hall, New York. https://doi.org/10.1007/978-1-4899-3210-5

6. Brazzale, A.R., Davison, A.C. and Reid, N. (2007) Applied Asymptotics. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511611131

7. Davison, A.C., Fraser, D.A.S., Reid, N. and Sartori, N. (2014) Accurate Directional Inference for Vector Parameters in Linear Exponential Families. Journal of the American Statistical Association, 109, 302-314. https://doi.org/10.1080/01621459.2013.839451

8. Bartlett, M.S. (1937) Properties of Sufficiency and Statistical Tests. Proceedings of the Royal Society A, 160, 268-282. https://doi.org/10.1098/rspa.1937.0109

9. Bartlett, M.S. (1953) Approximate Confidence Intervals. Biometrika, 40, 12-19. https://doi.org/10.1093/biomet/40.1-2.12

10. Lawley, D.N. (1956) A General Method for Approximating to the Distribution of Likelihood Ratio Criteria. Biometrika, 43, 295-303. https://doi.org/10.1093/biomet/43.3-4.295

11. Barndorff-Nielsen, O.E. and Cox, D.R. (1979) Edgeworth and Saddlepoint Approximation with Statistical Applications (with Discussion). Journal of the Royal Statistical Society, Series B, 41, 279-312.