Open Journal of Statistics, 2012, 2, 309-312
http://dx.doi.org/10.4236/ojs.2012.23038 Published Online July 2012 (http://www.SciRP.org/journal/ojs)
A Revision of AIC for Normal Error Models
Kunio Takezawa
Agroinformatics Division, Agricultural Research Center, National Agriculture and Food Research Organization,
Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Japan
Email: nonpara@gmail.com
Received March 21, 2012; revised April 22, 2012; accepted May 4, 2012
ABSTRACT
Conventional Akaike’s Information Criterion (AIC) for normal error models uses the maximum-likelihood estimator of
error variance. Other estimators of error variance, however, can be employed for defining AIC for normal error models.
The maximization of the log-likelihood using an adjustable error variance in light of future data yields a revised version
of AIC for normal error models. It also gives a new estimator of error variance, which will be called the “third variance”.
If the model is described as a constant plus normal error, which is equivalent to fitting a normal distribution to
one-dimensional data, the approximated value of the third variance is obtained by replacing (n − 1) (n is the number of
data) of the unbiased estimator of error variance with (n − 4). The existence of the third variance is confirmed by a simple numerical simulation.
Keywords: AIC; AICc; Normal Error Models; Third Variance
1. Introduction
Akaike’s Information Criterion (AIC) for multiple linear
models with normal i.i.d. errors is defined as (e.g., [1,2])
\[
\mathrm{AIC} = -2\, l(\hat{a}_j, \hat{\sigma}^2 \mid \mathbf{X}, \mathbf{y}) + 2q + 4
= n\log(2\pi) + n\log\!\left(\frac{RSS}{n}\right) + n + 2q + 4, \tag{1}
\]
where n is the number of data and q is the number of predictors of the multiple linear model. Hence, the number of regression coefficients in this model is (q + 2) when the error variance is regarded as a regression coefficient. X is a design matrix composed of the predictor values in the data. y is the vector composed of values of the target variable in the data. RSS stands for the residual sum of squares:
\[
RSS = \sum_{i=1}^{n}\left(y_i - \hat{a}_0 - \sum_{j=1}^{q} \hat{a}_j x_{ij}\right)^2, \tag{2}
\]
where $\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_q$ are the estimators of regression coefficients of a multiple linear model. $x_{ij}$ $(1 \le i \le n,\ 1 \le j \le q)$ is an element of $\mathbf{X}$. $y_i$ $(1 \le i \le n)$ is an element of $\mathbf{y}$.
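To make the formula concrete, here is a minimal sketch (not from the paper; the helper name `aic_normal` and the toy data are my own) that computes AIC from Equations (1) and (2) for an ordinary least-squares fit:

```python
import numpy as np

def aic_normal(X, y):
    """AIC = n log(2*pi) + n log(RSS/n) + n + 2q + 4 for a multiple
    linear model with intercept, q predictors, and normal errors."""
    n, q = X.shape
    Xd = np.column_stack([np.ones(n), X])            # prepend intercept column
    a_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # least-squares coefficients
    rss = np.sum((y - Xd @ a_hat) ** 2)              # residual sum of squares
    return n * np.log(2 * np.pi) + n * np.log(rss / n) + n + 2 * q + 4

# arbitrary simulated data: n = 50 observations, q = 2 predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(size=50)
print(aic_normal(X, y))
```

The penalty term 2q + 4 = 2(q + 2) counts the q slope coefficients, the intercept, and the error variance as estimated parameters.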
$l(\hat{a}_j, \hat{\sigma}^2 \mid \mathbf{X}, \mathbf{y})$ is the log-likelihood of the regression model in light of the data at hand. It is defined as
\[
l(\hat{a}_j, \hat{\sigma}^2 \mid \mathbf{X}, \mathbf{y})
= -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log\hat{\sigma}^2 - \frac{n}{2}. \tag{3}
\]
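As an illustrative check (the simulated data here are an arbitrary choice of mine, not from the paper), the closed form in Equation (3) agrees with a direct sum of normal log-densities evaluated at the least-squares residuals and the maximum-likelihood variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
y = 0.5 + 2.0 * x + rng.normal(size=n)

# least-squares fit of y = a0 + a1 * x
Xd = np.column_stack([np.ones(n), x])
a_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ a_hat
sigma2_ml = np.sum(resid ** 2) / n          # maximum-likelihood error variance

# direct evaluation: sum of normal log-densities at the residuals
loglik_direct = np.sum(-0.5 * np.log(2 * np.pi * sigma2_ml)
                       - resid ** 2 / (2 * sigma2_ml))

# closed form of Equation (3)
loglik_closed = (-0.5 * n * np.log(2 * np.pi)
                 - 0.5 * n * np.log(sigma2_ml) - 0.5 * n)
print(loglik_direct, loglik_closed)
```

The two quantities coincide because the residual term sums to exactly n/2 when the plug-in variance is RSS/n.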
The multiple linear model for obtaining Equations (1) and (3) contains $\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_q$ given by the least squares method (also called the maximum likelihood method for normal errors), and the error variance ($\hat{\sigma}^2$) given by the maximum likelihood method. $\hat{\sigma}^2$ is derived using
\[
\hat{\sigma}^2 = \frac{RSS}{n}. \tag{4}
\]
$\hat{\sigma}^2$ defined above is used as the error variance in AIC because AIC is a statistic based on the maximum-likelihood estimator. However, the unbiased estimator of error variance shown below, rather than the maximum-likelihood estimator, is utilized in most statistical calculations:
\[
\hat{\sigma}^2_{ub} = \frac{RSS}{n - q - 1}. \tag{5}
\]
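The two estimators in Equations (4) and (5) differ only in the divisor. A small sketch (arbitrary simulated data, not from the paper) makes the distinction concrete:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 30, 3
X = rng.normal(size=(n, q))
y = 3.0 + X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# least-squares fit with intercept
Xd = np.column_stack([np.ones(n), X])
a_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
rss = np.sum((y - Xd @ a_hat) ** 2)

sigma2_ml = rss / n              # maximum-likelihood estimator, Equation (4)
sigma2_ub = rss / (n - q - 1)    # unbiased estimator, Equation (5)
print(sigma2_ml, sigma2_ub)      # the ML estimator is always the smaller
```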
The maximum-likelihood estimator of error variance may not be the only choice of error variance for AIC. Hence, in this paper, we discuss the adjustment of the error variance used to calculate AIC for normal error models, after recalling the derivation of conventional AIC for normal error models. This consideration then leads to a new estimator of error variance, which will be called the “third variance”. Finally, the existence of the third variance is shown by a simple numerical simulation.
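The idea can be sketched with a small Monte Carlo experiment (my own illustration; the settings n = 10, 50 000 replications, and standard normal data are arbitrary). For the constant-plus-normal-error model, we score each candidate plug-in variance RSS/c on an independent future observation; per the abstract, the divisor near (n − 4) should achieve a higher mean predictive log-likelihood than n (maximum likelihood) or (n − 1) (unbiased):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 50_000
divisors = {"n": n, "n-1": n - 1, "n-4": n - 4}

samples = rng.normal(size=(reps, n))               # data at hand
future = rng.normal(size=reps)                     # one future observation per replication
ybar = samples.mean(axis=1)                        # fitted constant
rss = np.sum((samples - ybar[:, None]) ** 2, axis=1)

means = {}
for label, c in divisors.items():
    s2 = rss / c                                   # candidate error variance
    # normal log-density of the future observation under the fitted model
    loglik = -0.5 * np.log(2 * np.pi * s2) - (future - ybar) ** 2 / (2 * s2)
    means[label] = loglik.mean()

for label in divisors:
    print(label, means[label])                     # mean predictive log-likelihood
```

With these settings the divisor (n − 4) should come out on top, consistent with the approximation quoted in the abstract.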
2. Derivation of AIC for Normal Error Models
Conventional AIC for normal error models is easily derived when the multiple linear model with normal error assumed by an analyst contains the real equation producing the data as a special case. AIC based on this assumption is an approximation of
Copyright © 2012 SciRes. OJS