Open Journal of Statistics
Vol. 07, No. 02 (2017), Article ID: 75589, 9 pages
10.4236/ojs.2017.72017
Minimizing the Variance of a Weighted Average
Doron J. Shahar
Department of Mathematics, University of Arizona, Tucson, Arizona, USA
Copyright © 2017 by author and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
http://creativecommons.org/licenses/by/4.0/
Received: February 6, 2017; Accepted: April 21, 2017; Published: April 24, 2017
ABSTRACT
It is common practice in science to take a weighted average of estimators of a single parameter. If the original estimators are unbiased, any weighted average will be an unbiased estimator as well. The best estimator among the weighted averages can be obtained by choosing weights that minimize the variance of the weighted average. If the variances of the individual estimators are given, the ideal weights have long been known to be the inverse of the variance. Nonetheless, I have not found a formal proof of this result in the literature. In this article, I provide three different proofs of the ideal weights.
Keywords:
Variance, Weighted Average, Minimization
1. Introduction
Oftentimes in science, multiple point estimators of the same parameter are combined to form a better estimator. One method of forming the new estimator is taking a weighted average of the original estimators. If the original estimators are unbiased, the weighted average is guaranteed to be unbiased as well.
A weighted average may be used to combine the results of several studies (meta-analysis), or when several estimates are obtained within a study. For example, to deconfound it may be necessary to stratify on a covariate when estimating an effect. Assuming that the effect is the same in every stratum of the covariate, we may take a weighted average of the stratum-specific estimates.
The question remains though as to which weights should be chosen. Since the estimator will be unbiased regardless of the weights, we only need to consider the variance. In particular, the weights should be chosen to minimize the variance of the weighted average. Although it has long been known that the ideal weights should be the inverse of the variance, I have not found any complete, formal proof in the literature. Several sources mention the ideal weights either in general or for specific cases without any proof [1]-[6]. Others mention something similar to the ideal weights but again without proof [7] [8]. Hedges offers a so-called proof in his 1981 paper [9], which is far from a complete proof. The first two sentences of his proof contain the same content as the first two sentences and the last sentence of proof 1 in this paper. Hedges then continues to prove approximations for the ideal weights under a certain condition. In his 1982 and 1983 papers, he writes that this result is “easy to show” referencing his 1981 (and 1982) papers [10] [11]. Goldberg and Kercheval [12] provide a “proof” that contains only slightly more content than Hedges’ proof in that they mention the use of Lagrange multipliers. Proof 1 in this paper goes over the details thoroughly. Cochran also mentions the ideal weights, but proves only that these weights give the maximum likelihood estimate when the estimators are independent and normally distributed about a common mean [13]. Lastly, the problem of finding the ideal weights is included as an exercise (exercise 7.42) in Casella and Berger [14]. The problem, however, is not worked out in their solution manual [15]. A version of the problem when taking a weighted average of only two estimators is also an exercise (exercise 24.1) in Anderson and Bancroft [1].
Searching through articles dating back to the 1930s, it seems that this basic result has not been formally proven in the literature. One reason may be that the result depends on the variances of the estimators being known. The case when the variances are unknown is more difficult and has attracted more attention. For example, a few articles briefly mentioned the ideal weights when the variances are known before continuing to discuss the case when the variances are unknown [2] [4] [5] [13]. In this paper, I present three proofs of the ideal weights that minimize the variance of a weighted average.
2. Three Proofs
Let $\hat{\theta}_1, \ldots, \hat{\theta}_n$ be estimators of a single parameter $\theta$. In practice, the $\hat{\theta}_i$ are independent, and they are often assumed to be unbiased. If that’s the case, then any weighted average $\sum_{i=1}^{n} w_i \hat{\theta}_i$ (with $\sum_{i=1}^{n} w_i = 1$ and $w_i \geq 0$ for all $i$) is an unbiased estimator of $\theta$. Since the estimator is unbiased regardless of the weights, we want to choose weights that minimize the variance of $\sum_{i=1}^{n} w_i \hat{\theta}_i$.

As far as the ideal weights are concerned, it is not necessary though that the $\hat{\theta}_i$ be independent and unbiased. The proof of the ideal weights only requires that the $\hat{\theta}_i$ be uncorrelated and have a finite non-zero variance.
Proposition 1. If $\hat{\theta}_1, \ldots, \hat{\theta}_n$ are uncorrelated random variables with finite non-zero variances $\sigma_1^2, \ldots, \sigma_n^2$, then $\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right)$ is minimized when

$$w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}, \qquad i = 1, \ldots, n,$$

and its minimum value is

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}.$$
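As a quick numerical illustration of the proposition (my own sketch, not part of the original article), the following Python snippet compares the variance under the inverse-variance weights with the variance under randomly drawn weights; the variances in `sigma2` are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = np.array([1.0, 4.0, 0.25, 9.0])     # example variances (assumed values)

# Inverse-variance weights and the claimed minimum variance from Proposition 1
w_star = (1 / sigma2) / np.sum(1 / sigma2)
min_var = 1 / np.sum(1 / sigma2)

def var_weighted(w, sigma2):
    """Variance of sum_i w_i * theta_hat_i for uncorrelated estimators."""
    return np.sum(w ** 2 * sigma2)

print(var_weighted(w_star, sigma2), min_var)  # the two values agree

# No randomly drawn point of the weight simplex does better
for _ in range(10_000):
    w = rng.dirichlet(np.ones(len(sigma2)))   # random weights summing to 1
    assert var_weighted(w, sigma2) >= min_var - 1e-12
```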
The first proof uses the method of Lagrange multipliers.
Proof 1: $\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right) = \sum_{i=1}^{n} w_i^2\,\mathrm{Var}(\hat{\theta}_i) = \sum_{i=1}^{n} w_i^2 \sigma_i^2$, because the $\hat{\theta}_i$ are uncorrelated. We wish to minimize the previous expression under the constraint that $\sum_{i=1}^{n} w_i = 1$ and $w_i \geq 0$ for all $i$. The set $S$ of all $(w_1, \ldots, w_n)$ for which $\sum_{i=1}^{n} w_i = 1$ and $w_i \geq 0$ for all $i$ is closed and bounded. The extrema in the interior of $S$ can be found by considering only the first constraint, which may be written as $g(w_1, \ldots, w_n) = \sum_{i=1}^{n} w_i - 1 = 0$. Later we shall find the extrema on the boundary of $S$. To find the extrema in the interior of $S$, let

$$f(w_1, \ldots, w_n, \lambda) = \sum_{i=1}^{n} w_i^2 \sigma_i^2 - \lambda\left(\sum_{i=1}^{n} w_i - 1\right). \qquad (1)$$
By the method of Lagrange multipliers, the values of $(w_1, \ldots, w_n, \lambda)$ for which $\nabla f = \vec{0}$ are the critical points of $f$. (These contain all the extrema of $\sum_{i=1}^{n} w_i^2 \sigma_i^2$ in the interior of $S$.) $\nabla f$ equals zero only when

$$2 w_i \sigma_i^2 - \lambda = 0 \ \text{ for all } i \qquad \text{and} \qquad \sum_{i=1}^{n} w_i - 1 = 0. \qquad (2)$$
Therefore,

$$\lambda = \frac{2}{\sum_{j=1}^{n} 1/\sigma_j^2} \qquad (3)$$

and

$$w_i = \frac{\lambda}{2\sigma_i^2} = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}. \qquad (4)$$

This point $(w_1, \ldots, w_n)$ is indeed in the interior of $S$ since $w_i > 0$ for all $i$. For these values of $w_i$,

$$\sum_{i=1}^{n} w_i^2 \sigma_i^2 = \frac{\sum_{i=1}^{n} (1/\sigma_i^2)^2\,\sigma_i^2}{\left(\sum_{j=1}^{n} 1/\sigma_j^2\right)^2} = \frac{\sum_{i=1}^{n} 1/\sigma_i^2}{\left(\sum_{j=1}^{n} 1/\sigma_j^2\right)^2} = \frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}. \qquad (5)$$
Next, let’s find the extrema on the boundary of $S$. The boundary of $S$ is characterized by having some of the $w_i$ equal zero. For any point on the boundary, let $Z = \{i : w_i = 0\}$. At such a point, $\sum_{i=1}^{n} w_i^2 \sigma_i^2 = \sum_{i \notin Z} w_i^2 \sigma_i^2$. Using the method of Lagrange multipliers again, the critical points of $\sum_{i \notin Z} w_i^2 \sigma_i^2$ are found to be where

$$w_i = \begin{cases} \dfrac{1/\sigma_i^2}{\sum_{j \notin Z} 1/\sigma_j^2}, & i \notin Z, \\[2mm] 0, & i \in Z. \end{cases} \qquad (6)$$
(These contain all the extrema of $\sum_{i=1}^{n} w_i^2 \sigma_i^2$ on the boundary of $S$.) For these values of $w_i$,

$$\sum_{i=1}^{n} w_i^2 \sigma_i^2 = \frac{1}{\sum_{j \notin Z} 1/\sigma_j^2}. \qquad (7)$$
Note that

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2} < \frac{1}{\sum_{j \notin Z} 1/\sigma_j^2}, \qquad (8)$$

since $Z$ is non-empty and every $1/\sigma_j^2$ is positive, so the denominator on the left is strictly larger.
That is, of all critical points, the one in the interior of $S$ minimizes $\sum_{i=1}^{n} w_i^2 \sigma_i^2$. Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right)$ is minimized when

$$w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}$$

and its minimum value is

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}.$$
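For readers who want to sanity-check the constrained minimization numerically, here is a minimal sketch using scipy.optimize (again my own illustration, with arbitrary example variances); it recovers the inverse-variance weights of Equation (4) and the minimum variance of Equation (5).

```python
import numpy as np
from scipy.optimize import minimize

sigma2 = np.array([2.0, 0.5, 1.0, 3.0])        # example variances (assumed values)

def objective(w):
    # f(w) = sum_i w_i^2 sigma_i^2, the variance of the weighted average
    return np.sum(w ** 2 * sigma2)

constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # sum_i w_i = 1
bounds = [(0.0, 1.0)] * len(sigma2)                               # w_i >= 0

res = minimize(objective, x0=np.full(len(sigma2), 1 / len(sigma2)),
               method="SLSQP", bounds=bounds, constraints=constraints)

w_star = (1 / sigma2) / np.sum(1 / sigma2)     # Equation (4)
print(np.round(res.x, 6))                      # numerical minimizer
print(np.round(w_star, 6))                     # analytic inverse-variance weights
print(res.fun, 1 / np.sum(1 / sigma2))         # minimum variance, Equation (5)
```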
The second proof is done by induction.
Proof 2: The case $n = 2$ will be our base case for the induction.

$$\mathrm{Var}\left(w_1 \hat{\theta}_1 + w_2 \hat{\theta}_2\right) = w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 = w_1^2 \sigma_1^2 + (1 - w_1)^2 \sigma_2^2. \qquad (9)$$
The global minimum of the above expression occurs when

$$w_1 = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \frac{1/\sigma_1^2}{1/\sigma_1^2 + 1/\sigma_2^2} \qquad (10)$$

and

$$w_2 = 1 - w_1 = \frac{1/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}. \qquad (11)$$

The minimum variance is then

$$w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} = \frac{1}{1/\sigma_1^2 + 1/\sigma_2^2}. \qquad (12)$$
For the induction step, suppose that for some $n \geq 2$, the variance of a weighted average of $n$ estimators, $\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)$, is minimized when

$$v_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}$$

and its minimum value is

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}.$$
Then,

$$\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i \hat{\theta}_i\right) = \mathrm{Var}\left((1 - w_{n+1})\sum_{i=1}^{n} v_i \hat{\theta}_i + w_{n+1} \hat{\theta}_{n+1}\right) = (1 - w_{n+1})^2\,\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right) + w_{n+1}^2 \sigma_{n+1}^2, \qquad (13)$$

where $v_i = w_i/(1 - w_{n+1})$, $i = 1, \ldots, n$, are weights that do not depend on $w_{n+1}$. So for any possible values of $v_1, \ldots, v_n$, the above expression is minimized when
$$w_{n+1} = \frac{\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)}{\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right) + \sigma_{n+1}^2}. \qquad (14)$$

By plugging the above value for $w_{n+1}$ into Equation (13), we find a lower bound for the variance of the weighted average:

$$\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i \hat{\theta}_i\right) \geq \frac{\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)\sigma_{n+1}^2}{\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right) + \sigma_{n+1}^2} = \frac{1}{\dfrac{1}{\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)} + \dfrac{1}{\sigma_{n+1}^2}}, \qquad (15)$$

where equality is achieved for the specified value of $w_{n+1}$.
The variance of the weighted average will be minimized when it equals the right side of the above inequality and the right side of the inequality is minimized. The right side of the inequality is minimized when $\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)$ is minimized. We have assumed in the induction step that $\mathrm{Var}\left(\sum_{i=1}^{n} v_i \hat{\theta}_i\right)$ is minimized when

$$v_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}$$

and its minimum value is

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}.$$
Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i \hat{\theta}_i\right)$ is minimized when

$$w_{n+1} = \frac{\dfrac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}}{\dfrac{1}{\sum_{j=1}^{n} 1/\sigma_j^2} + \sigma_{n+1}^2} = \frac{1/\sigma_{n+1}^2}{\sum_{j=1}^{n+1} 1/\sigma_j^2} \qquad (16)$$

and

$$w_i = (1 - w_{n+1})\,v_i = \frac{\sum_{j=1}^{n} 1/\sigma_j^2}{\sum_{j=1}^{n+1} 1/\sigma_j^2} \cdot \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2} = \frac{1/\sigma_i^2}{\sum_{j=1}^{n+1} 1/\sigma_j^2} \qquad (17)$$

for $i = 1, \ldots, n$.
Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i \hat{\theta}_i\right)$ is minimized when

$$w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n+1} 1/\sigma_j^2}$$

for all $i$, and its minimum value is

$$\frac{1}{\sum_{j=1}^{n+1} 1/\sigma_j^2}.$$
This completes the induction step, and the proof.
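The induction step can also be illustrated numerically: combining the estimators one at a time with the two-estimator rule of the base case reproduces the inverse-variance weights. The sketch below is my own illustration (arbitrary example variances), not part of the original proof.

```python
import numpy as np

sigma2 = [1.0, 4.0, 0.25, 9.0]                 # example variances (assumed values)

# Combine the estimators one at a time, mirroring the induction step: the
# running optimal average (variance `running_var`) is combined with the next
# estimator using the two-estimator rule from the base case.
running_var = sigma2[0]
weights = np.array([1.0])
for s2 in sigma2[1:]:
    w_new = running_var / (running_var + s2)           # weight of the new estimator, cf. Eq. (16)
    weights = np.append(weights * (1 - w_new), w_new)  # rescale the old weights, cf. Eq. (17)
    running_var = running_var * s2 / (running_var + s2)

sigma2 = np.array(sigma2)
print(weights)                                  # weights built up sequentially
print((1 / sigma2) / np.sum(1 / sigma2))        # inverse-variance weights (identical)
print(running_var, 1 / np.sum(1 / sigma2))      # minimum variance (identical)
```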
The third proof utilizes the Cauchy-Schwarz inequality.
Proof 3: Using the Cauchy-Schwarz inequality, we obtain

$$1 = \left(\sum_{i=1}^{n} w_i\right)^2 = \left(\sum_{i=1}^{n} (w_i \sigma_i) \cdot \frac{1}{\sigma_i}\right)^2 \leq \left(\sum_{i=1}^{n} w_i^2 \sigma_i^2\right)\left(\sum_{i=1}^{n} \frac{1}{\sigma_i^2}\right). \qquad (18)$$
Dividing both sides of Equation (18) by $\sum_{i=1}^{n} 1/\sigma_i^2$, we have a lower bound for $\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right) = \sum_{i=1}^{n} w_i^2 \sigma_i^2$:

$$\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right) = \sum_{i=1}^{n} w_i^2 \sigma_i^2 \geq \frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}. \qquad (19)$$
By the Cauchy-Schwarz inequality, $\sum_{i=1}^{n} w_i^2 \sigma_i^2$ equals the lower bound iff

$$(w_1 \sigma_1, w_2 \sigma_2, \ldots, w_n \sigma_n)$$

and

$$\left(\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \ldots, \frac{1}{\sigma_n}\right)$$

are linearly dependent vectors. Since neither of these vectors is the zero vector, they are linearly dependent iff there exists an $\alpha$ such that $w_i \sigma_i = \alpha/\sigma_i$ for all $i$. Therefore, $\sum_{i=1}^{n} w_i^2 \sigma_i^2$ equals the lower bound iff there exists an $\alpha$ such that $w_i = \alpha/\sigma_i^2$ for all $i$. Since the $w_i$ are weights, this requires that

$$\alpha = \frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2} \qquad (20)$$

and

$$w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}. \qquad (21)$$
Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n} w_i \hat{\theta}_i\right)$ obtains the lower bound, and hence, is minimized when

$$w_i = \frac{1/\sigma_i^2}{\sum_{j=1}^{n} 1/\sigma_j^2}$$

and its minimum value is

$$\frac{1}{\sum_{j=1}^{n} 1/\sigma_j^2}.$$
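The Cauchy-Schwarz bound of Equations (18)-(19) can likewise be checked numerically; the following sketch (my own illustration, with arbitrary example variances) verifies the inequality for random weights and the equality case at the inverse-variance weights of Equation (21).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = np.array([0.5, 2.0, 1.5])             # example variances (assumed values)
inv_sum = np.sum(1 / sigma2)

# Inequality (18): 1 = (sum_i w_i)^2 <= (sum_i w_i^2 sigma_i^2)(sum_i 1/sigma_i^2)
for _ in range(1_000):
    w = rng.dirichlet(np.ones(len(sigma2)))    # random weights summing to 1
    assert np.sum(w ** 2 * sigma2) * inv_sum >= 1.0 - 1e-12

# Equality holds at the inverse-variance weights of Equation (21)
w_star = (1 / sigma2) / inv_sum
print(np.sum(w_star ** 2 * sigma2) * inv_sum)  # prints 1.0 (up to rounding)
```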
3. Discussion
Given the frequent use of inverse variance weighting, it is surprising that the proof of Proposition 1 was never published, to the best of my knowledge, in any book or journal. That the proofs use standard techniques is no excuse for their absence in the literature; there is value in a proof beyond the result it proves. For example, it is interesting that the proposition can be proven by induction and more succinctly using the Cauchy-Schwarz inequality.
Even more surprising are the trails of citations leading nowhere. It appears that generations of statisticians simply assumed that a proof had been published somewhere, relying on old, inaccurate citations. In that sense, this article not only offers three different proofs but also a broader lesson: every so often it is worthwhile to review the history of well-known facts. Surprises are possible.
Acknowledgements
Many thanks to Eyal Shahar for first asking me to prove this result after he was unable to find the proof in the literature and for inspiring the second proof. (An earlier version of the first two proofs was posted on his university website in an appendix to his commentary on standardization.) I would also like to thank Sunder Sethuraman and Shankar Venkataramani for inspiring the third proof. And I would like to thank all three for having commented on a draft manuscript.
Cite this paper
Shahar, D.J. (2017) Minimizing the Variance of a Weighted Average. Open Journal of Statistics, 7, 216-224. https://doi.org/10.4236/ojs.2017.72017
References
- 1. Anderson, R.L. and Bancroft, T.A. (1952) Statistical Theory in Research. McGraw-Hill, New York, 358-366.
- 2. Meier, P. (1953) Variance of a Weighted Mean. Biometrics, 9, 59-73. https://doi.org/10.2307/3001633
- 3. Cochran, W.G. and Cox, G. (1957) Experimental Designs. John Wiley, New York, 561-562.
- 4. Graybill, F.A. and Deal, R.D. (1959) Combining Unbiased Estimators. Biometrics, 15, 543-550. https://doi.org/10.2307/2527652
- 5. Rukhin, A.L. (2007) Conservative Confidence Intervals Based on Weighted Mean Statistics. Statistics and Probability Letters, 77, 853-861.
- 6. Hartung, J., Knapp, G. and Sinha, B.K. (2008) Statistical Meta-Analysis with Applications. John Wiley & Sons, Hoboken, 44. https://doi.org/10.1002/9780470386347
- 7. Kempthorne, O. (1952) Design and Analysis of Experiments. John Wiley & Sons, New York, 534.
- 8. Cochran, W.G. (1954) The Combination of Estimates from Different Experiments. Biometrics, 10, 101-129. https://doi.org/10.2307/3001666
- 9. Hedges, L.V. (1981) Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators. Journal of Educational Statistics, 6, 107-128. https://doi.org/10.2307/1164588
- 10. Hedges, L.V. (1982) Estimation of Effect Size from a Series of Independent Experiments. Psychological Bulletin, 92, 490-499. https://doi.org/10.1037/0033-2909.92.2.490
- 11. Hedges, L.V. (1983) A Random Effects Model for Effect Sizes. Psychological Bulletin, 93, 388-395. https://doi.org/10.1037/0033-2909.93.2.388
- 12. Goldberg, L.R. and Kercheval, A.N. (2002) t-Statistics for Weighted Means with Application to Risk Factor Models. The Journal of Portfolio Management, 28, 2.
- 13. Cochran, W.G. (1937) Problems Arising in the Analysis of a Series of Similar Experiments. Supplement to the Journal of the Royal Statistical Society, 4, 102-118. https://doi.org/10.2307/2984123
- 14. Casella, G. and Berger, R. (2001) Statistical Inference. 2nd Edition, Duxbury Press, Pacific Grove, 363.
- 15. Casella, G., Berger, R. and Santana, D. (2001) Solutions Manual for Statistical Inference, Second Edition. http://exampleproblems.com/Solutions-Casella-Berger.pdf