Open Journal of Statistics
Vol. 07, No. 02 (2017), Article ID: 75589, 9 pages
DOI: 10.4236/ojs.2017.72017

Minimizing the Variance of a Weighted Average

Doron J. Shahar

Department of Mathematics, University of Arizona, Tucson, Arizona, USA

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 6, 2017; Accepted: April 21, 2017; Published: April 24, 2017

ABSTRACT

It is common practice in science to take a weighted average of estimators of a single parameter. If the original estimators are unbiased, any weighted average will be an unbiased estimator as well. The best estimator among the weighted averages can be obtained by choosing weights that minimize the variance of the weighted average. If the variances of the individual estimators are given, the ideal weights have long been known to be proportional to the inverses of those variances. Nonetheless, I have not found a formal proof of this result in the literature. In this article, I provide three different proofs of this result.

Keywords:

Variance, Weighted Average, Minimization

1. Introduction

Oftentimes in science, multiple point estimators of the same parameter are combined to form a better estimator. One method of forming the new estimator is taking a weighted average of the original estimators. If the original estimators are unbiased, the weighted average is guaranteed to be unbiased as well.

A weighted average may be used to combine the results of several studies (meta-analysis), or when several estimates are obtained within a study. For example, to deconfound it may be necessary to stratify on a covariate when estimating an effect. Assuming that the effect is the same in every stratum of the covariate, we may take a weighted average of the stratum-specific estimates.

The question remains though as to which weights should be chosen. Since the estimator will be unbiased regardless of the weights, we only need to consider the variance. In particular, the weights should be chosen to minimize the variance of the weighted average. Although it has long been known that the ideal weights should be the inverse of the variance, I have not found any complete, formal proof in the literature. Several sources mention the ideal weights either in general or for specific cases without any proof [1]-[6]. Others mention something similar to the ideal weights but again without proof [7] [8]. Hedges offers a so-called proof in his 1981 paper [9], which is far from a complete proof. The first two sentences of his proof contain the same content as the first two sentences and the last sentence of proof 1 in this paper. Hedges then continues to prove approximations for the ideal weights under a certain condition. In his 1982 and 1983 papers, he writes that this result is “easy to show”, referencing his 1981 (and 1982) papers [10] [11]. Goldberg and Kercheval [12] provide a “proof” that contains only slightly more content than Hedges’ proof in that they mention the use of Lagrange multipliers. Proof 1 in this paper goes over the details thoroughly. Cochran also mentions the ideal weights, but proves only that these weights give the maximum likelihood estimate when the estimators are independent and normally distributed about a common mean [13]. Lastly, the problem of finding the ideal weights is included as an exercise (exercise 7.42) in Casella and Berger [14]. The problem, however, is not worked out in their solution manual [15]. A version of the problem when taking a weighted average of only two estimators is also an exercise (exercise 24.1) in Anderson and Bancroft [1].

Searching through articles dating back to the 1930s, it seems that this basic result has not been formally proven in the literature. One reason may be that the result depends on the variances of the estimators being known. The case when the variances are unknown is more difficult and attracted more attention. For example, a few articles briefly mentioned the ideal weights when the variances are known before continuing to discuss the case when the variances are unknown [2] [4] [5] [13] . In this paper, I present three proofs of the ideal weights that minimize the variance of a weighted average.

2. Three Proofs

Let $X_1, \ldots, X_n$ ($n \geq 2$) be estimators of a single parameter $\theta$. In practice, the $X_i$ are independent, and they are often assumed to be unbiased. If that’s the case, then any weighted average $X = \sum_{i=1}^n w_i X_i$ (with $\sum_{i=1}^n w_i = 1$ and $w_i \geq 0$ for all $i$) is an unbiased estimator of $\theta$. Since the estimator is unbiased regardless of the weights, we want to choose weights that minimize the variance of $X$.

As far as the ideal weights are concerned, it is not necessary, though, that the $X_i$ be independent and unbiased. The proof of the ideal weights only requires that the $X_i$ be uncorrelated and have finite, non-zero variances.

Proposition 1. If $X_1, \ldots, X_n$ ($n \geq 2$) are uncorrelated random variables with finite non-zero variances, then $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ is minimized when

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}$$

and its minimum value is

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}$$
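Before turning to the proofs, here is a minimal numerical sketch of the proposition (my own illustration, not part of the original argument, with hypothetical variances 1, 2, and 5): the inverse-variance weights yield a smaller variance for the weighted average than equal or arbitrary weights.

```python
import numpy as np

# Hypothetical variances of three uncorrelated estimators (illustration only).
variances = np.array([1.0, 2.0, 5.0])

def weighted_average_variance(weights, variances):
    """Variance of sum_i w_i X_i for uncorrelated X_i: sum_i w_i^2 * Var(X_i)."""
    return np.sum(weights**2 * variances)

# Inverse-variance weights from Proposition 1.
ivw = (1.0 / variances) / np.sum(1.0 / variances)

# Two competing weightings that also sum to 1.
equal = np.full(3, 1.0 / 3.0)
other = np.array([0.5, 0.3, 0.2])

print("inverse-variance weights:", ivw)
print("variance with inverse-variance weights:", weighted_average_variance(ivw, variances))
print("claimed minimum 1 / sum(1/Var):", 1.0 / np.sum(1.0 / variances))
print("variance with equal weights:", weighted_average_variance(equal, variances))
print("variance with arbitrary weights:", weighted_average_variance(other, variances))
```

The first two printed variances coincide, and both competing weightings give a larger value.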

The first proof uses the method of Lagrange multipliers.

Proof 1: $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right) = \sum_{i=1}^n \mathrm{Var}(w_i X_i) = \sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i)$, because the $X_i$ are uncorrelated. We wish to minimize the previous expression under the constraint that $\sum_{i=1}^n w_i = 1$ and $w_i \geq 0$ for all $i$. The set $T$ of all $(w_1, \ldots, w_n) \in \mathbb{R}^n$ for which $\sum_{i=1}^n w_i = 1$ and $w_i \geq 0$ for all $i$ is closed and bounded. The extrema in the interior of $T$ can be found by considering only the first constraint, which may be written as $\sum_{i=1}^n w_i - 1 = 0$. Later we shall find the extrema on the boundary of $T$. To find the extrema in the interior of $T$, let

$$F(w_1, \ldots, w_n, \lambda) = \sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i) - \lambda\left(\sum_{i=1}^n w_i - 1\right) \tag{1}$$

By the method of Lagrange multipliers, the values of $w_1, \ldots, w_n$ for which $\partial F / \partial w_j = 0$ for all $j$ are the critical points of $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$. (These contain all the extrema of $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ in the interior of $T$.) $\partial F / \partial w_j = 2 w_j\,\mathrm{Var}(X_j) - \lambda$, which equals zero only when $w_j = (\lambda/2)/\mathrm{Var}(X_j)$. Hence

$$1 = \sum_{j=1}^n w_j = \sum_{j=1}^n \frac{\lambda/2}{\mathrm{Var}(X_j)} = \frac{\lambda}{2} \sum_{j=1}^n \frac{1}{\mathrm{Var}(X_j)} \tag{2}$$

Therefore,

$$\lambda = \frac{2}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)} \tag{3}$$

and

$$w_i = \frac{\lambda/2}{\mathrm{Var}(X_i)} = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)} > 0 \tag{4}$$

$(w_1, \ldots, w_n)$ is indeed in the interior of $T$ since $w_i > 0$ for all $i$. For these values of $w_i$,

$$\sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i) = \sum_{i=1}^n \left(\frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}\right)^2 \mathrm{Var}(X_i) = \frac{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}{\left(\sum_{j=1}^n 1/\mathrm{Var}(X_j)\right)^2} = \frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)} \tag{5}$$

Next, let’s find the extrema on the boundary of $T$. The boundary of $T$ is characterized by having some of the $w_i$ equal to zero. For any point on the boundary, let $S = \{i : w_i \neq 0\}$. At such a point, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right) = \mathrm{Var}\left(\sum_{i \in S} w_i X_i\right)$. Using the method of Lagrange multipliers again, the critical points of $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right) = \mathrm{Var}\left(\sum_{i \in S} w_i X_i\right)$ are found to be $(w_1, \ldots, w_n)$ where

$$w_i = \begin{cases} \dfrac{1/\mathrm{Var}(X_i)}{\sum_{j \in S} 1/\mathrm{Var}(X_j)} & \text{if } i \in S \\[2ex] 0 & \text{if } i \notin S \end{cases} \tag{6}$$

(These contain all the extrema of $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ on the boundary of $T$.) For these values of $w_i$,

$$\sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i) = \sum_{i \in S} \left(\frac{1/\mathrm{Var}(X_i)}{\sum_{j \in S} 1/\mathrm{Var}(X_j)}\right)^2 \mathrm{Var}(X_i) = \frac{\sum_{i \in S} 1/\mathrm{Var}(X_i)}{\left(\sum_{j \in S} 1/\mathrm{Var}(X_j)\right)^2} = \frac{1}{\sum_{i \in S} 1/\mathrm{Var}(X_i)} \tag{7}$$

Note that

$$\frac{1}{\sum_{i \in S} 1/\mathrm{Var}(X_i)} \geq \frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)} \tag{8}$$

That is, of all the critical points, the one in the interior of $T$ minimizes $\sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i)$. Therefore, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right) = \sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i)$ is minimized when

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}$$

and its minimum value is

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}$$
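The result of proof 1 can also be checked numerically. The sketch below (my own illustration, not part of the proof; it assumes hypothetical variances 1, 2, and 5) minimizes $\sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i)$ over the simplex $T$ with SciPy's SLSQP solver and recovers the weights in Equation (4).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical variances of three uncorrelated estimators (illustration only).
variances = np.array([1.0, 2.0, 5.0])

def objective(w):
    # Var(sum_i w_i X_i) = sum_i w_i^2 * Var(X_i) for uncorrelated X_i.
    return np.sum(w**2 * variances)

# The set T: weights are non-negative and sum to 1.
constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
bounds = [(0.0, 1.0)] * len(variances)

result = minimize(objective,
                  x0=np.full(len(variances), 1.0 / len(variances)),
                  method="SLSQP", bounds=bounds, constraints=constraints)

analytic_weights = (1.0 / variances) / np.sum(1.0 / variances)
print("numerical weights:", result.x)
print("analytic weights: ", analytic_weights)
print("numerical minimum:", result.fun)
print("analytic minimum: ", 1.0 / np.sum(1.0 / variances))
```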

The second proof is done by induction.

Proof 2: The case n = 2 will be our base case for the induction.

$$\begin{aligned} \mathrm{Var}(w_1 X_1 + w_2 X_2) &= w_1^2\,\mathrm{Var}(X_1) + w_2^2\,\mathrm{Var}(X_2) = w_1^2\,\mathrm{Var}(X_1) + (1 - w_1)^2\,\mathrm{Var}(X_2) \\ &= w_1^2\left(\mathrm{Var}(X_1) + \mathrm{Var}(X_2)\right) - 2 w_1\,\mathrm{Var}(X_2) + \mathrm{Var}(X_2) \end{aligned} \tag{9}$$

The above expression is a quadratic in $w_1$ with a positive leading coefficient, so its global minimum occurs when

$$w_1 = \frac{\mathrm{Var}(X_2)}{\mathrm{Var}(X_1) + \mathrm{Var}(X_2)} = \frac{1/\mathrm{Var}(X_1)}{1/\mathrm{Var}(X_1) + 1/\mathrm{Var}(X_2)} \tag{10}$$

and

$$w_2 = 1 - w_1 = \frac{1/\mathrm{Var}(X_2)}{1/\mathrm{Var}(X_1) + 1/\mathrm{Var}(X_2)} \tag{11}$$

The minimum variance is then

$$\sum_{i=1}^2 \left(\frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^2 1/\mathrm{Var}(X_j)}\right)^2 \mathrm{Var}(X_i) = \frac{\sum_{i=1}^2 1/\mathrm{Var}(X_i)}{\left(\sum_{j=1}^2 1/\mathrm{Var}(X_j)\right)^2} = \frac{1}{\sum_{i=1}^2 1/\mathrm{Var}(X_i)} \tag{12}$$
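As a concrete check of the base case (a numerical example of my own, not from the original text): if $\mathrm{Var}(X_1) = 1$ and $\mathrm{Var}(X_2) = 4$, then Equations (10) and (11) give $w_1 = 4/5$ and $w_2 = 1/5$, and the minimum variance is $(4/5)^2 \cdot 1 + (1/5)^2 \cdot 4 = 4/5 = 1/(1/1 + 1/4)$, in agreement with Equation (12).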

For the induction step, suppose that for some $n \geq 2$, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ is minimized when

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}$$

and its minimum value is

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}$$

Then,

$$\begin{aligned} \mathrm{Var}\left(\sum_{i=1}^{n+1} w_i X_i\right) &= \mathrm{Var}\left(\left(\sum_{j=1}^n w_j\right)\left(\sum_{i=1}^n \frac{w_i}{\sum_{j=1}^n w_j} X_i\right) + w_{n+1} X_{n+1}\right) \\ &= \left(\sum_{j=1}^n w_j\right)^2 \mathrm{Var}\left(\sum_{i=1}^n \frac{w_i}{\sum_{j=1}^n w_j} X_i\right) + w_{n+1}^2\,\mathrm{Var}(X_{n+1}) \\ &= (1 - w_{n+1})^2\,\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right) + w_{n+1}^2\,\mathrm{Var}(X_{n+1}) \\ &= w_{n+1}^2\left(\mathrm{Var}(X_{n+1}) + \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)\right) - 2 w_{n+1}\,\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right) + \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right) \end{aligned} \tag{13}$$

where $u_i = w_i / \sum_{j=1}^n w_j$ are weights that do not depend on $w_{n+1}$. So for any possible values of the $u_i$, the above expression (again a quadratic in $w_{n+1}$ with a positive leading coefficient) is minimized when

$$w_{n+1} = \frac{\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)}{\mathrm{Var}(X_{n+1}) + \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)} \tag{14}$$

By plugging the above value for $w_{n+1}$ into Equation (13), we find a lower bound for the variance of the weighted average:

$$\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i X_i\right) \geq \frac{\mathrm{Var}(X_{n+1})\,\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)}{\mathrm{Var}(X_{n+1}) + \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)} \tag{15}$$

where equality is achieved for the specified value of $w_{n+1}$.
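For readers who want the intermediate algebra (added here for completeness), write $V_u = \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)$ and $V = \mathrm{Var}(X_{n+1})$. Substituting (14) into (13) gives

$$\frac{V_u^2}{(V + V_u)^2}(V + V_u) - \frac{2 V_u^2}{V + V_u} + V_u = \frac{V_u^2 - 2 V_u^2 + V_u(V + V_u)}{V + V_u} = \frac{V\,V_u}{V + V_u},$$

which is the right side of (15).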

The variance of the weighted average will be minimized when it equals the right side of the above inequality and the right side of the inequality is minimized. The right side, which may be written as $\left(1/\mathrm{Var}(X_{n+1}) + 1/\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)\right)^{-1}$, is an increasing function of $\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)$, so it is minimized when $\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)$ is minimized. We have assumed in the induction step that $\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)$ is minimized when

$$u_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}$$

and its minimum value is

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}$$

Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i X_i\right)$ is minimized when

$$w_{n+1} = \frac{\mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)}{\mathrm{Var}(X_{n+1}) + \mathrm{Var}\left(\sum_{i=1}^n u_i X_i\right)} = \frac{\dfrac{1}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}}{\mathrm{Var}(X_{n+1}) + \dfrac{1}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}} = \frac{1/\mathrm{Var}(X_{n+1})}{\sum_{j=1}^{n+1} 1/\mathrm{Var}(X_j)} \tag{16}$$

and

$$w_i = \left(\sum_{j=1}^n w_j\right) u_i = (1 - w_{n+1})\,u_i = \left(1 - \frac{1/\mathrm{Var}(X_{n+1})}{\sum_{j=1}^{n+1} 1/\mathrm{Var}(X_j)}\right) \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)} = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^{n+1} 1/\mathrm{Var}(X_j)} \tag{17}$$

for $i \in \{1, \ldots, n\}$.

Therefore, $\mathrm{Var}\left(\sum_{i=1}^{n+1} w_i X_i\right)$ is minimized when

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^{n+1} 1/\mathrm{Var}(X_j)}$$

for all $i$, and its minimum value is

$$\frac{1}{\sum_{i=1}^{n+1} 1/\mathrm{Var}(X_i)}$$

This completes the induction step, and the proof.
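The induction also has a computational reading: an optimal combination of $n+1$ estimators can be built by optimally combining the first $n$ and then combining the result with $X_{n+1}$ using the two-estimator rule from the base case. The sketch below (my own illustration, with hypothetical variances) folds in one estimator at a time and recovers the minimum variance $1/\sum_i 1/\mathrm{Var}(X_i)$.

```python
# Hypothetical variances of four uncorrelated estimators (illustration only).
variances = [1.0, 2.0, 5.0, 10.0]

def combine_two(var_a, var_b):
    """Optimally combine two uncorrelated estimators (the n = 2 base case).

    Returns the weight on the first estimator, Equation (10), and the
    variance of the optimal combination, 1 / (1/var_a + 1/var_b).
    """
    w_a = var_b / (var_a + var_b)
    min_var = var_a * var_b / (var_a + var_b)
    return w_a, min_var

# Mirror the induction step: fold in one estimator at a time.
running_var = variances[0]
for v in variances[1:]:
    _, running_var = combine_two(running_var, v)

direct = 1.0 / sum(1.0 / v for v in variances)
print("variance from pairwise combination:", running_var)
print("variance from the direct formula:  ", direct)
```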

The third proof utilizes the Cauchy-Schwarz inequality.

Proof 3: Using the Cauchy-Schwarz inequality, we obtain

$$1 = \left|\sum_{i=1}^n w_i\right|^2 = \left|\sum_{i=1}^n \left(w_i \sqrt{\mathrm{Var}(X_i)}\right)\left(\frac{1}{\sqrt{\mathrm{Var}(X_i)}}\right)\right|^2 \leq \left(\sum_{i=1}^n \left(w_i \sqrt{\mathrm{Var}(X_i)}\right)^2\right)\left(\sum_{i=1}^n \left(\frac{1}{\sqrt{\mathrm{Var}(X_i)}}\right)^2\right) = \left(\sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i)\right)\left(\sum_{i=1}^n \frac{1}{\mathrm{Var}(X_i)}\right) \tag{18}$$

Dividing both sides of Equation (18) by $\sum_{i=1}^n 1/\mathrm{Var}(X_i)$, we have a lower bound for $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$:

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)} \leq \sum_{i=1}^n w_i^2\,\mathrm{Var}(X_i) = \mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right) \tag{19}$$

By the Cauchy-Schwarz inequality, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ equals the lower bound iff

$$\left(w_1 \sqrt{\mathrm{Var}(X_1)}, \ldots, w_n \sqrt{\mathrm{Var}(X_n)}\right)$$

and

$$\left(\frac{1}{\sqrt{\mathrm{Var}(X_1)}}, \ldots, \frac{1}{\sqrt{\mathrm{Var}(X_n)}}\right)$$

are linearly dependent vectors. Since neither of these vectors is the zero vector, they are linearly dependent iff there exists an $\alpha$ such that $w_i \sqrt{\mathrm{Var}(X_i)} = \alpha / \sqrt{\mathrm{Var}(X_i)}$ for all $i$. Therefore, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ equals the lower bound iff there exists an $\alpha$ such that $w_i = \alpha / \mathrm{Var}(X_i)$ for all $i$. Since the $w_i$ are weights, this requires that

$$\alpha = \frac{1}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)} \tag{20}$$

and

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)} \tag{21}$$

Therefore, $\mathrm{Var}\left(\sum_{i=1}^n w_i X_i\right)$ attains the lower bound, and hence is minimized, when

$$w_i = \frac{1/\mathrm{Var}(X_i)}{\sum_{j=1}^n 1/\mathrm{Var}(X_j)}$$

and its minimum value is

$$\frac{1}{\sum_{i=1}^n 1/\mathrm{Var}(X_i)}$$
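As a brief numerical check of the bound in Equation (19) (my own sketch, using randomly generated hypothetical variances): random weight vectors never fall below the lower bound, and the inverse-variance weights attain it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variances of five uncorrelated estimators (illustration only).
variances = rng.uniform(0.5, 5.0, size=5)
lower_bound = 1.0 / np.sum(1.0 / variances)

# Check inequality (19) for many random weight vectors on the simplex.
for _ in range(10_000):
    w = rng.dirichlet(np.ones(5))  # non-negative weights summing to 1
    assert np.sum(w**2 * variances) >= lower_bound - 1e-12

# Equality holds at the inverse-variance weights.
ivw = (1.0 / variances) / np.sum(1.0 / variances)
print("lower bound:                         ", lower_bound)
print("variance at inverse-variance weights:", np.sum(ivw**2 * variances))
```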

3. Discussion

Given the frequent use of inverse-variance weighting, it is surprising that, to the best of my knowledge, a proof of Proposition 1 has never been published in any book or journal. That the proofs use standard techniques is no excuse for their absence from the literature; there is value in a proof beyond the result it proves. For example, it is interesting that the proposition can be proven by induction and, more succinctly, using the Cauchy-Schwarz inequality.

Even more surprising are the trails of citations leading nowhere. It appears that generations of statisticians simply assumed that a proof had been published somewhere, relying on old, inaccurate citations. In that sense, this article not only offers three different proofs but also a broader lesson: every so often it is worthwhile to review the history of well-known facts. Surprises are possible.

Acknowledgements

Many thanks to Eyal Shahar for first asking me to prove this result after he was unable to find the proof in the literature and for inspiring the second proof. (An earlier version of the first two proofs was posted on his university website in an appendix to his commentary on standardization.) I would also like to thank Sunder Sethuraman and Shankar Venkataramani for inspiring the third proof. And I would like to thank all three for having commented on a draft manuscript.

Cite this paper

Shahar, D.J. (2017) Minimizing the Variance of a Weighted Average. Open Journal of Statistics, 7, 216-224. https://doi.org/10.4236/ojs.2017.72017

References

1. Anderson, R.L. and Bancroft, T.A. (1952) Statistical Theory in Research. McGraw-Hill, New York, 358-366.

2. Meier, P. (1953) Variance of a Weighted Mean. Biometrics, 9, 59-73. https://doi.org/10.2307/3001633

3. Cochran, W.G. and Cox, G. (1957) Experimental Designs. John Wiley, New York, 561-562.

4. Graybill, F.A. and Deal, R.D. (1959) Combining Unbiased Estimators. Biometrics, 15, 543-550. https://doi.org/10.2307/2527652

5. Rukhin, A.L. (2007) Conservative Confidence Intervals Based on Weighted Mean Statistics. Statistics and Probability Letters, 77, 853-861.

6. Hartung, J., Knapp, G. and Sinha, B.K. (2008) Statistical Meta-Analysis with Applications. John Wiley & Sons, Hoboken, 44. https://doi.org/10.1002/9780470386347

7. Kempthorne, O. (1952) Design and Analysis of Experiments. John Wiley & Sons, New York, 534.

8. Cochran, W.G. (1954) The Combination of Estimates from Different Experiments. Biometrics, 10, 101-129. https://doi.org/10.2307/3001666

9. Hedges, L.V. (1981) Distribution Theory for Glass’s Estimator of Effect Size and Related Estimators. Journal of Educational Statistics, 6, 107-128. https://doi.org/10.2307/1164588

10. Hedges, L.V. (1982) Estimation of Effect Size from a Series of Independent Experiments. Psychological Bulletin, 92, 490-499. https://doi.org/10.1037/0033-2909.92.2.490

11. Hedges, L.V. (1983) A Random Effects Model for Effect Sizes. Psychological Bulletin, 93, 388-395. https://doi.org/10.1037/0033-2909.93.2.388

12. Goldberg, L.R. and Kercheval, A.N. (2002) t-Statistics for Weighted Means with Application to Risk Factor Models. The Journal of Portfolio Management, 28, 2.

13. Cochran, W.G. (1937) Problems Arising in the Analysis of a Series of Similar Experiments. Supplement to the Journal of the Royal Statistical Society, 4, 102-118. https://doi.org/10.2307/2984123

14. Casella, G. and Berger, R. (2001) Statistical Inference. 2nd Edition, Duxbury Press, Pacific Grove, 363.

15. Casella, G., Berger, R. and Santana, D. (2001) Solutions Manual for Statistical Inference, Second Edition. http://exampleproblems.com/Solutions-Casella-Berger.pdf