Open Journal of Statistics
Vol.07 No.02 (2017), Article ID: 75521, 12 pages
10.4236/ojs.2017.72014

The Coordinate-Free Prediction in Finite Populations with Correlated Observations

Silvia N. Elian

Department of Statistics, Institute of Mathematics and Statistics, University of São Paulo, São Paulo, Brazil

Copyright © 2017 by author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 18, 2017; Accepted: April 17, 2017; Published: April 20, 2017

ABSTRACT

In this paper, we obtain the best linear unbiased predictor of any linear function of the elements of a finite population under coordinate-free models. The optimal predictor of these quantities was obtained in an earlier work for models with a known diagonal covariance matrix. We extend this result to any known covariance matrix. It is shown that, in the particular case of coordinatized models, this general predictor coincides with the optimal predictor of the population total under a regression superpopulation model with correlated observations.

Keywords:

Coordinate-Free Models, Best Linear Unbiased Predictor, Covariance Matrix, Orthogonal Projection

1. Introduction

A coordinate-free approach in finite populations was introduced by [1] as an alternative to the Gauss-Markov setup for predicting linear functions. The Gauss-Markov approach is characterized by its dependence on a particular basis matrix, whereas in the coordinate-free language we only need to describe a parametric subspace of $\mathbb{R}^N$, where $N$ is the size of the finite population. Coordinate-free models in the linear models context are discussed by [2] and [3].

In a finite population $P = \{1, 2, \ldots, N\}$, where $N$ is the known population size, let $y_i$, $i = 1, 2, \ldots, N$, be the value of a random variable $y$ associated with each population unit. Under the superpopulation approach, we will assume that $Y$ is a random vector such that $Y \in Q$, where $Q$ is an $N$-dimensional real vector space with the usual inner product.

The superpopulation model is expressed by

$$E(Y) = \mu \in \Omega, \qquad \operatorname{Var}(Y) = \sigma^2 V, \qquad (1.1)$$

where $\Omega$ is a $p$-dimensional subspace of $Q$, $\sigma^2$ is an unknown positive parameter and $V$ is a known positive definite matrix.

The considered model is coordinate free in the sense that no basis is defined for $\Omega$, the parametric space of $\mu$.

Our main objective is to predict $l'Y$, a linear combination of the elements of $Y$. For this purpose, a sample of $n$ observations is drawn from the population, and the values of $y_i$ in $Y$ become known for the sample elements. Let $s$ and $r$ be the sets of sample and non-sample elements, respectively, so that $P = s \cup r$.

We will consider, without loss of generality, that $Y$ and $V$ are reordered as

$$Y = \begin{bmatrix} Y_s \\ Y_r \end{bmatrix} \quad \text{and} \quad V = \begin{bmatrix} V_s & V_{sr} \\ V_{rs} & V_r \end{bmatrix},$$

with $Y_s$ containing the $n$ observed sample elements, $Y_r$ containing the unobserved elements, and $V_s = \operatorname{Var}(Y_s)$, $V_r = \operatorname{Var}(Y_r)$ and $V_{sr} = \operatorname{Cov}(Y_s, Y_r)$ the corresponding covariance blocks.

Under a less general model, with $\operatorname{Var}(Y) = \sigma^2 D$, $D$ a known diagonal matrix, [1] presented the optimal linear predictor of $l'Y$. In the next section, we extend this result, obtaining the best linear unbiased predictor of $l'Y$ under model (1.1); this is the main contribution of the paper. In Section 3, we show that under the coordinatized model this predictor coincides with that given by [4]. Finally, we conclude the paper with some examples in Section 4.

2. Best Linear Unbiased Predictor of Linear Functions

The linear function $\theta = l'Y$ to be predicted may be written as

$$\theta = l'Y = l'I_sY + l'(I - I_s)Y,$$

where $I_s = \operatorname{diag}(i_1, i_2, \ldots, i_N)$ is a diagonal matrix whose $k$-th diagonal element is $i_k$, with $i_k = 1$ if $k \in s$ and $i_k = 0$ if $k \in r$, $s = \{1, 2, \ldots, n\}$, $r = \{n+1, n+2, \ldots, N\}$.

We note that, with this notation, $l'I_sY$ corresponds to the linear combination of the components of $Y$ in the sample and $l'(I - I_s)Y$ is the combination of the unobserved elements.

Before stating the prediction results, it is necessary to introduce some definitions and preliminary results.

Let

$$\Omega_s = \{\mu_s \mid \mu_s = I_s\mu,\ \mu \in \Omega\},$$

$$\Omega_r = \{\mu_r \mid \mu_r = (I - I_s)\mu,\ \mu \in \Omega\},$$

and the $N \times 1$ vectors $Y_s = I_sY$, $Y_r = (I - I_s)Y$.

Since, after the sample is observed, $I_sY$ will be known, we restrict our attention to linear predictors of $l'Y$ of the form

$$\hat\theta = l'I_sY + b'I_sY,$$

where $b$ is an $N$-dimensional vector.

Definition. A linear predictor $\hat\theta$ of $\theta$ is unbiased if and only if

$$E_\mu(\hat\theta - \theta) = 0,$$

for every $\mu \in \Omega$.

The class of all linear unbiased predictors of $l'Y$ will be denoted by $U_l$.

Finally, the next definition states the concept of optimality of a linear predictor of $\theta$.

Definition. The linear predictor $\hat\theta_0$ is the best linear unbiased predictor of $\theta$, or the optimal linear predictor of $\theta$, if $\hat\theta_0 \in U_l$ and

$$E_\mu(\hat\theta_0 - \theta)^2 \le E_\mu(\hat\theta - \theta)^2,$$

for every $\mu \in \Omega$ and every $\hat\theta \in U_l$.

The value of $E_\mu(\hat\theta_0 - \theta)^2$ corresponds to the mean-squared error of the predictor $\hat\theta_0$.

The optimal linear predictor of $\theta$ under the model

$$E(Y) = \mu \in \Omega, \qquad \operatorname{Var}(Y) = \sigma^2 D,$$

where $D$ is a known diagonal matrix and $\sigma^2$ is unknown, was obtained by [1]. It was shown that if $\dim(\Omega) = \dim(\Omega_s)$, where $\dim(\Omega)$ is the dimension of the linear space $\Omega$, then the best linear unbiased predictor of $\theta = l'Y$ is given by

$$\hat\theta^* = l'I_sY + l'\hat\mu^*,$$

where $\hat\mu^* = \begin{bmatrix} 0 \\ \hat\mu_r \end{bmatrix}$, $0$ is a null vector of dimension $n$, and $\hat\mu^*$ is such that

$$\hat\mu^* = (I - I_s)P_\Omega(Y_s + \hat\mu^*), \qquad (1.2)$$

and $P_\Omega$ is the orthogonal projector onto $\Omega$.

Returning to the model (1.1), with a non-diagonal covariance matrix $V$, let us consider the decomposition $V = PP'$, with $P$ a lower triangular matrix. As shown by [5] (Theorem 7.2.1), there is a unique lower triangular matrix $P$ such that $V = PP'$; in addition, $P$ is nonsingular. We then define the random vector $Z = P^{-1}Y$ and, as a consequence, by the usual properties of covariance matrices of linearly transformed random vectors,

$$\operatorname{Var}(Z) = \sigma^2 P^{-1}V(P^{-1})' = \sigma^2 P^{-1}PP'(P^{-1})' = \sigma^2 P'(P')^{-1} = \sigma^2 I.$$
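
For concreteness, the following minimal NumPy sketch (the matrix $V$, its size and the value of $\sigma^2$ are invented purely for the illustration) checks the whitening identity $\operatorname{Var}(Z) = \sigma^2 I$ numerically, using the Cholesky factor of $V$ as the lower triangular matrix $P$.

```python
import numpy as np

# Minimal sketch, assuming an arbitrary positive definite V: with V = P P'
# (P lower triangular), the transformation Z = P^{-1} Y has covariance sigma^2 I.
rng = np.random.default_rng(0)
N = 5
W = rng.standard_normal((N, N))
V = W @ W.T + N * np.eye(N)      # a known positive definite covariance matrix
sigma2 = 2.0                     # illustrative value of sigma^2

P = np.linalg.cholesky(V)        # the unique lower triangular P with V = P P'
Pinv = np.linalg.inv(P)

# Var(Z) = P^{-1} (sigma^2 V) (P^{-1})' = sigma^2 I
print(np.allclose(Pinv @ (sigma2 * V) @ Pinv.T, sigma2 * np.eye(N)))   # True
```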

The next theorem presents the best linear unbiased predictor of $l'Y$ under model (1.1).

Theorem 1. In the model (1.1),

$$E(Y) = \mu \in \Omega, \qquad \operatorname{Var}(Y) = \sigma^2 V,$$

with $V$ a known positive definite matrix, the optimal linear predictor of any linear function of $Y$, $h'Y$, is

$$h'I_sY + h'\hat\mu_Y, \qquad (2.1)$$

where $\hat\mu_Y = \begin{bmatrix} 0 \\ \hat Y_r \end{bmatrix}$, $0$ is the null vector of dimension $n$, $\hat Y_r$ is the solution in $Y_r$ of the system of linear equations

$$(I - I_s)P^{-1}Y = (I - I_s)P^{-1}P_\Omega Y,$$

and $P_\Omega$ is the orthogonal projection matrix onto $\Omega$.

Proof. Let $Z = P^{-1}Y = \begin{bmatrix} Z_s \\ Z_r \end{bmatrix}$, with $P$ the lower triangular matrix such that

$$V = PP',$$

$$\Omega^* = \{\mu^* \mid \mu^* = P^{-1}\mu,\ \mu \in \Omega\},$$

$$\Gamma = h'PI_sZ + h'P\hat\mu_Z,$$

where $\hat\mu_Z = \begin{bmatrix} 0 \\ \hat Z_r \end{bmatrix}$, $0$ is the null vector of dimension $n$ and $\hat Z_r$ is the solution in $Z_r$ of the system of linear equations

$$\hat\mu_Z = (I - I_s)P_{\Omega^*}(I_sZ + \hat\mu_Z).$$

We note that $\Gamma$ does not depend on unknown quantities because, as will be shown in the Appendix, $h'PI_sZ$ and $h'P\hat\mu_Z$ do not depend on unknown quantities.

Since

$$E(Z) = P^{-1}E(Y) = P^{-1}\mu = \mu^* \in \Omega^*$$

and

$$\operatorname{Var}(Z) = \sigma^2 I,$$

by the results of [1], the optimal linear predictor of $l'Z$ is

$$l'I_sZ + l'\hat\mu_Z,$$

with $\hat\mu_Z = \begin{bmatrix} 0 \\ \hat Z_r \end{bmatrix}$, where $0$ is the null vector of dimension $n$ and $\hat Z_r$, obtained from (1.2), is the solution of the system of linear equations

$$\hat\mu_Z = (I - I_s)P_{\Omega^*}(I_sZ + \hat\mu_Z).$$

Taking $l' = h'P$, this predictor reduces to $\Gamma$, and $l'Z = h'PP^{-1}Y = h'Y$. So, by (1.2), we have just proved that $\Gamma$ is the optimal linear predictor of $h'Y$.

To finish the proof, it is enough to show that $\Gamma = h'I_sY + h'\hat\mu_Y$. For this purpose, we write some of the matrices already defined in partitioned form as

$$P = \begin{bmatrix} P_1 & 0 \\ P_3 & P_4 \end{bmatrix}, \quad P^{-1} = \begin{bmatrix} C & 0 \\ B_1 & B_2 \end{bmatrix}, \quad P_{\Omega^*} = \begin{bmatrix} H_1 & H_2 \\ A_1 & A_2 \end{bmatrix}, \quad I_s = \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix}, \quad I - I_s = \begin{bmatrix} 0 & 0 \\ 0 & I_{N-n} \end{bmatrix},$$

where the submatrices are of dimensions $n \times n$, $n \times (N-n)$, $(N-n) \times n$ and $(N-n) \times (N-n)$, respectively, and $0$ denotes a null matrix.

Since $Z = \begin{bmatrix} Z_s \\ Z_r \end{bmatrix}$, the equation

$$\hat\mu_Z = (I - I_s)P_{\Omega^*}(I_sZ + \hat\mu_Z)$$

implies that

$$\hat Z_r = A_1Z_s + A_2\hat Z_r \quad \text{and} \quad \hat Z_r = (I - A_2)^{-1}A_1Z_s.$$

Further, $P_\Omega = PP_{\Omega^*}P^{-1}$ [6], so $P_{\Omega^*} = P^{-1}P_\Omega P$, and after some calculations we have

$$(I - I_s)P^{-1}Y = \begin{bmatrix} 0 \\ B_1Y_s + B_2Y_r \end{bmatrix}$$

and

$$(I - I_s)P^{-1}P_\Omega Y = (I - I_s)P^{-1}P_\Omega PP^{-1}Y = (I - I_s)P_{\Omega^*}P^{-1}Y = \begin{bmatrix} 0 \\ (A_1C + A_2B_1)Y_s + A_2B_2Y_r \end{bmatrix}.$$

Thus, if $\hat Y_r$ is the solution in $Y_r$ of

$$(I - I_s)P^{-1}Y = (I - I_s)P^{-1}P_\Omega Y,$$

it follows that

$$\hat Y_r = (B_2 - A_2B_2)^{-1}(A_1C + A_2B_1 - B_1)Y_s.$$

Now, with this notation,

$$Z = P^{-1}Y = \begin{bmatrix} CY_s \\ B_1Y_s + B_2Y_r \end{bmatrix},$$

which implies that

$$Z_s = CY_s.$$

So,

$$\begin{aligned}
B_1Y_s + B_2\hat Y_r &= B_1Y_s + B_2(B_2 - A_2B_2)^{-1}(A_1C + A_2B_1 - B_1)Y_s \\
&= \{B_1 + B_2B_2^{-1}(I - A_2)^{-1}(A_1C + A_2B_1 - B_1)\}Y_s \\
&= \{B_1 + (I - A_2)^{-1}A_1C + (I - A_2)^{-1}(A_2 - I)B_1\}Y_s \\
&= (I - A_2)^{-1}A_1CY_s = \hat Z_r.
\end{aligned}$$

Hence,

$$\begin{aligned}
\Gamma = h'PI_sZ + h'P\hat\mu_Z &= h'P\begin{bmatrix} Z_s \\ 0 \end{bmatrix} + h'P\begin{bmatrix} 0 \\ \hat Z_r \end{bmatrix} = h'\begin{bmatrix} P_1 & 0 \\ P_3 & P_4 \end{bmatrix}\begin{bmatrix} CY_s \\ B_1Y_s + B_2\hat Y_r \end{bmatrix} \\
&= h'\begin{bmatrix} P_1CY_s \\ P_3CY_s + P_4B_1Y_s + P_4B_2\hat Y_r \end{bmatrix}
\end{aligned}$$

and because

$$PP^{-1} = \begin{bmatrix} P_1C & 0 \\ P_3C + P_4B_1 & P_4B_2 \end{bmatrix} = \begin{bmatrix} I_n & 0 \\ 0 & I_{N-n} \end{bmatrix},$$

then

$$\Gamma = h'\begin{bmatrix} Y_s \\ \hat Y_r \end{bmatrix} = h'\begin{bmatrix} Y_s \\ 0 \end{bmatrix} + h'\begin{bmatrix} 0 \\ \hat Y_r \end{bmatrix} = h'I_sY + h'\hat\mu_Y.$$

It is important to observe that $P_\Omega$ has $N(N+1)/2$ unknown elements and may be difficult to compute from the above definition. However, it can be obtained as $P_\Omega = A(A'V^{-1}A)^{-1}A'V^{-1}$, where $A$ is a basis matrix for $\Omega$.
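
To illustrate how the predictor of Theorem 1 can be computed, the following NumPy sketch (the population size, basis matrix, covariance matrix, sample values and target function are all invented for the demonstration) builds $P_\Omega$ from a basis matrix $A$ and solves the linear system of Theorem 1 for $\hat Y_r$.

```python
import numpy as np

# A numerical sketch of Theorem 1 under invented inputs: build
# P_Omega = A (A' V^{-1} A)^{-1} A' V^{-1} and solve
# (I - I_s) P^{-1} Y = (I - I_s) P^{-1} P_Omega Y in the unobserved block Y_r.
rng = np.random.default_rng(1)
N, n = 6, 4                                  # population and sample sizes
s, r = np.arange(n), np.arange(n, N)         # sample and non-sample indices
A = np.column_stack([np.ones(N), np.arange(1.0, N + 1)])   # a basis matrix for Omega
W = rng.standard_normal((N, N))
V = W @ W.T + N * np.eye(N)                  # known positive definite covariance
y_s = rng.standard_normal(n)                 # the observed sample values
h = np.ones(N)                               # predict the population total h'Y

Vinv = np.linalg.inv(V)
P_Omega = A @ np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv)
P = np.linalg.cholesky(V)                    # V = P P'
I_s = np.diag(np.r_[np.ones(n), np.zeros(N - n)])

# The system is linear in Y_r: M Y = 0 on the non-sample rows,
# where M = (I - I_s) P^{-1} (I - P_Omega).
M = (np.eye(N) - I_s) @ np.linalg.inv(P) @ (np.eye(N) - P_Omega)
Yr_hat = np.linalg.solve(M[np.ix_(r, r)], -M[np.ix_(r, s)] @ y_s)

theta_hat = h[s] @ y_s + h[r] @ Yr_hat       # h'I_sY + h'mu_hat_Y from (2.1)
print(Yr_hat, theta_hat)
```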

Some applications of the result in Theorem 1 will be presented in the examples.

3. Best Linear Unbiased Predictor in the Coordinatized Model

We now consider a coordinatized version of the model (1.1), given by

$$E(Y) = X\beta, \quad \beta \in \mathbb{R}^p, \qquad \operatorname{Var}(Y) = \sigma^2 V, \qquad (3.1)$$

with $\sigma^2 > 0$, $V$ a known positive definite matrix and $X$ a basis matrix of $\Omega$.

Under this formulation, $X$ is an $N \times p$ matrix of full rank $p$ and there exists a unique $\beta \in \mathbb{R}^p$ such that $\mu = X\beta$. Regression models are included in the class of models defined in (3.1).

[4] derived the best linear unbiased predictor of the population total $T = \sum_{i=1}^N y_i$.

This predictor, adapted to the notation introduced here and to the prediction of any linear combination of $Y$, is given by

$$\hat T = h'Y_s + h'(I - I_s)\begin{bmatrix} 0 \\ X_r\hat\beta + V_{rs}V_s^{-1}(Y_s - X_s\hat\beta) \end{bmatrix}, \qquad (3.2)$$

where $\hat\beta = (X_s'V_s^{-1}X_s)^{-1}X_s'V_s^{-1}Y_s$ and $X = \begin{bmatrix} X_s \\ X_r \end{bmatrix}$.
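
As a computational illustration of (3.2), the NumPy sketch below evaluates Royall's predictor of the population total for invented values of $X$, $V$ and the observed sample (all names and numbers are assumptions made only for the demonstration).

```python
import numpy as np

# A sketch of Royall's predictor (3.2) for the population total, under
# invented X, V and sample data.
rng = np.random.default_rng(3)
N, n = 8, 5
X = np.column_stack([np.ones(N), rng.uniform(0, 10, size=N)])   # basis matrix (p = 2)
W = rng.standard_normal((N, N))
V = W @ W.T + N * np.eye(N)                                     # known positive definite V
X_s, X_r = X[:n], X[n:]
V_s, V_rs = V[:n, :n], V[n:, :n]
y_s = X_s @ np.array([1.0, 0.5]) + rng.standard_normal(n)       # observed sample values
h = np.ones(N)                                                  # population total: h = 1_N

Vs_inv = np.linalg.inv(V_s)
beta_hat = np.linalg.solve(X_s.T @ Vs_inv @ X_s, X_s.T @ Vs_inv @ y_s)
Yr_hat = X_r @ beta_hat + V_rs @ Vs_inv @ (y_s - X_s @ beta_hat)
T_hat = h[:n] @ y_s + h[n:] @ Yr_hat
print(beta_hat, T_hat)
```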

The next theorem shows that, in the coordinatized model (3.1), the optimal linear predictor obtained in Theorem 1 reduces to Royall's predictor defined in (3.2).

Theorem 2. Under model (3.1), the optimal linear predictor $h'I_sY + h'\hat\mu_Y$ given in (2.1) is equal to $\hat T$.

Proof. We must show that $\hat Y_r$ in (2.1) is equal to $X_r\hat\beta + V_{rs}V_s^{-1}(Y_s - X_s\hat\beta)$.

As proved in Theorem 1,

$$\hat Y_r = (B_2 - A_2B_2)^{-1}(A_1C + A_2B_1 - B_1)Y_s,$$

which is equivalent to

$$\hat Y_r = B_2^{-1}[(I - A_2)^{-1}A_1C - B_1]Y_s.$$

Applying (A.3), (A.1) and (A.2) of the Appendix, it follows that

$$\begin{aligned}
\hat Y_r &= B_2^{-1}[(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}X_s'C'C - B_1]Y_s \\
&= B_2^{-1}[(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}X_s'V_s^{-1} - B_1]Y_s \\
&= V_{rs}V_s^{-1}Y_s + B_2^{-1}(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}X_s'V_s^{-1}Y_s.
\end{aligned}$$

Now, it is enough to show that

$$B_2^{-1}(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1} = [X_r - V_{rs}V_s^{-1}X_s](X_s'V_s^{-1}X_s)^{-1}.$$

By (A.6),

$$B_2^{-1}(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1} = B_2^{-1}[I + (B_1X_s + B_2X_r)(X_s'V_s^{-1}X_s)^{-1}(X_s'B_1' + X_r'B_2')](B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}$$

and, employing (A.2), the last expression reduces to

$$\begin{aligned}
&[B_2^{-1} - V_{rs}V_s^{-1}X_s(X_s'V_s^{-1}X_s)^{-1}(X_s'B_1' + X_r'B_2') + X_r(X_s'V_s^{-1}X_s)^{-1}(X_s'B_1' + X_r'B_2')](B_1X_s + B_2X_r)(X'V^{-1}X)^{-1} \\
&\quad = \{-V_{rs}V_s^{-1}X_s + X_r + [X_r - V_{rs}V_s^{-1}X_s](X_s'V_s^{-1}X_s)^{-1}(X_s'B_1' + X_r'B_2')(B_1X_s + B_2X_r)\}(X'V^{-1}X)^{-1}.
\end{aligned}$$

Finally, using (A.5), we get

$$\begin{aligned}
B_2^{-1}(I - A_2)^{-1}(B_1X_s + B_2X_r)(X'V^{-1}X)^{-1} &= (X_r - V_{rs}V_s^{-1}X_s)[I + (X_s'V_s^{-1}X_s)^{-1}(X'V^{-1}X - X_s'V_s^{-1}X_s)](X'V^{-1}X)^{-1} \\
&= (X_r - V_{rs}V_s^{-1}X_s)(X_s'V_s^{-1}X_s)^{-1}.
\end{aligned}$$
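
The equality asserted by Theorem 2 can also be checked numerically. In the sketch below (all inputs are invented for the check), $\hat Y_r$ is computed once by solving the Theorem 1 system with $P_\Omega = X(X'V^{-1}X)^{-1}X'V^{-1}$ and once by Royall's formula (3.2); the two routes agree.

```python
import numpy as np

# Numerical illustration of Theorem 2 under invented X, V and sample data:
# the coordinate-free route of Theorem 1 and Royall's predictor (3.2) coincide.
rng = np.random.default_rng(4)
N, n = 7, 4
s, r = np.arange(n), np.arange(n, N)
X = np.column_stack([np.ones(N), rng.uniform(0, 5, size=N)])
W = rng.standard_normal((N, N))
V = W @ W.T + N * np.eye(N)
y_s = rng.standard_normal(n)

# Route 1: Theorem 1 with P_Omega = X (X' V^{-1} X)^{-1} X' V^{-1}
Vinv = np.linalg.inv(V)
P_Omega = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
P = np.linalg.cholesky(V)
I_s = np.diag(np.r_[np.ones(n), np.zeros(N - n)])
M = (np.eye(N) - I_s) @ np.linalg.inv(P) @ (np.eye(N) - P_Omega)
Yr_hat_1 = np.linalg.solve(M[np.ix_(r, r)], -M[np.ix_(r, s)] @ y_s)

# Route 2: Royall's predictor (3.2)
Vs_inv = np.linalg.inv(V[np.ix_(s, s)])
beta_hat = np.linalg.solve(X[s].T @ Vs_inv @ X[s], X[s].T @ Vs_inv @ y_s)
Yr_hat_2 = X[r] @ beta_hat + V[np.ix_(r, s)] @ Vs_inv @ (y_s - X[s] @ beta_hat)

print(np.allclose(Yr_hat_1, Yr_hat_2))   # True
```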

4. Examples

In this section, we present two examples to illustrate the optimal predictors that are obtained in the theorems.

In the first, we consider a coordinate-free model and derive the predictor by applying Theorem 1. The second example shows an application of Theorem 2 to a particular coordinatized model.

Example 1. Our objective is to predict the population total $T = \sum_{i=1}^N y_i$ in the model

$$E(Y) = \mu$$

and

$$\operatorname{Var}(Y) = \frac{\sigma^2}{1-\rho^2}\begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{N-1} \\ \rho & 1 & \rho & \cdots & \rho^{N-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{N-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{N-1} & \rho^{N-2} & \rho^{N-3} & \cdots & 1 \end{bmatrix},$$

with $\rho$ a known parameter and $\sigma^2 > 0$.

Because of the large amount of calculation involved, and without loss of generality, we restrict attention to the situation where $N = 4$ and $n = 3$, so that

$$Y = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix}, \quad Y_s = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}, \quad Y_r = [y_4] \quad \text{and} \quad V = \frac{1}{1-\rho^2}\begin{bmatrix} 1 & \rho & \rho^2 & \rho^3 \\ \rho & 1 & \rho & \rho^2 \\ \rho^2 & \rho & 1 & \rho \\ \rho^3 & \rho^2 & \rho & 1 \end{bmatrix}, \quad |\rho| < 1,\ \rho \text{ known}.$$

In this case,

$$V^{-1} = \begin{bmatrix} 1 & -\rho & 0 & 0 \\ -\rho & 1+\rho^2 & -\rho & 0 \\ 0 & -\rho & 1+\rho^2 & -\rho \\ 0 & 0 & -\rho & 1 \end{bmatrix} \quad \text{and} \quad P^{-1} = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & 0 \\ -\rho & 1 & 0 & 0 \\ 0 & -\rho & 1 & 0 \\ 0 & 0 & -\rho & 1 \end{bmatrix}.$$

Since

$$\Omega = \{v \in \mathbb{R}^4 \mid v = (\mu, \mu, \mu, \mu)',\ \mu \in \mathbb{R}\},$$

a basis matrix for $\Omega$ is given by $A = (1, 1, 1, 1)'$.

Then, it is easy to see that

$$P_\Omega = A(A'V^{-1}A)^{-1}A'V^{-1} = \frac{1}{4 - 6\rho + 2\rho^2}\begin{bmatrix} 1-\rho & 1+\rho^2-2\rho & 1+\rho^2-2\rho & 1-\rho \\ 1-\rho & 1+\rho^2-2\rho & 1+\rho^2-2\rho & 1-\rho \\ 1-\rho & 1+\rho^2-2\rho & 1+\rho^2-2\rho & 1-\rho \\ 1-\rho & 1+\rho^2-2\rho & 1+\rho^2-2\rho & 1-\rho \end{bmatrix}.$$

Also,

$$(I - I_s)P^{-1}Y = \begin{bmatrix} 0 \\ 0 \\ 0 \\ -\rho y_3 + y_4 \end{bmatrix}$$

and

$$(I - I_s)P^{-1}P_\Omega Y = \frac{1}{4 - 6\rho + 2\rho^2}\begin{bmatrix} 0 \\ 0 \\ 0 \\ (1 - 2\rho + \rho^2)(y_1 + y_4) + (1 - 3\rho + 3\rho^2 - \rho^3)(y_2 + y_3) \end{bmatrix}.$$

By Theorem 1, the optimal linear predictor of $T$ is $\hat T = \sum_{i=1}^3 y_i + \hat y_4$, where $\hat y_4$ is the solution in $y_4$ of the equation

$$(I - I_s)P^{-1}Y = (I - I_s)P^{-1}P_\Omega Y.$$

After some calculations, we get

$$\hat y_4 = \frac{(1 - 2\rho + \rho^2)y_1 + (1 - 3\rho + 3\rho^2 - \rho^3)y_2 + (1 + \rho - 3\rho^2 + \rho^3)y_3}{3 - 4\rho + \rho^2}$$

and

$$\hat T = a_1y_1 + a_2y_2 + a_3y_3,$$

where $a_1 = \dfrac{4 - 6\rho + 2\rho^2}{3 - 4\rho + \rho^2}$, $a_2 = \dfrac{4 - 7\rho + 4\rho^2 - \rho^3}{3 - 4\rho + \rho^2}$ and $a_3 = \dfrac{4 - 3\rho - 2\rho^2 + \rho^3}{3 - 4\rho + \rho^2}$.

It is interesting to note that if $\rho = 0$, so that $V = I$ and $y_i$ and $y_j$ are uncorrelated for $i \ne j$, then $\hat T = 4\bar y_s$, where $\bar y_s$ is the sample mean. In this case, $\hat T$ is the expansion predictor, which was found by [1] under the model $E(Y) = \mu$ and $\operatorname{Var}(Y) = \sigma^2 I$.
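
The coefficients above can be checked numerically. In the sketch below, $\rho$ and the observed values are chosen arbitrarily; the predictor obtained by solving the system of Theorem 1 is compared with the weights $a_1$, $a_2$, $a_3$ derived above.

```python
import numpy as np

# Numerical check of Example 1 with an arbitrary rho and arbitrary sample values:
# the coordinate-free predictor of the total for N = 4, n = 3 matches a1, a2, a3.
rho = 0.5
N, n = 4, 3
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
V = R / (1 - rho ** 2)

A = np.ones((N, 1))                          # basis matrix for Omega
Vinv = np.linalg.inv(V)
P_Omega = A @ np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv)
P = np.linalg.cholesky(V)
I_s = np.diag([1.0, 1.0, 1.0, 0.0])

# Solve the fourth equation of (I - I_s) P^{-1} (I - P_Omega) Y = 0 for y4
M = (np.eye(N) - I_s) @ np.linalg.inv(P) @ (np.eye(N) - P_Omega)
y_s = np.array([1.3, -0.7, 2.1])             # arbitrary observed values y1, y2, y3
y4_hat = -(M[3, :3] @ y_s) / M[3, 3]
T_hat = y_s.sum() + y4_hat

d = 3 - 4 * rho + rho ** 2
a = np.array([4 - 6 * rho + 2 * rho ** 2,
              4 - 7 * rho + 4 * rho ** 2 - rho ** 3,
              4 - 3 * rho - 2 * rho ** 2 + rho ** 3]) / d
print(np.isclose(T_hat, a @ y_s))            # True
```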

Example 2. Let us consider the superpopulation model

$$y_i = \beta x_i + \epsilon_i, \quad i = 1, 2, \ldots, N,$$

with $E(\epsilon_i) = 0$, $\operatorname{Var}(\epsilon_i) = 1$, $\operatorname{Cov}(\epsilon_i, \epsilon_j) = \rho$ for $i \ne j$, $i, j = 1, 2, \ldots, N$, and $\rho$ a known parameter, $|\rho| < 1$.

Our objective is to calculate the best linear unbiased predictor of the population total $T = \sum_{i=1}^N y_i$.

In this situation, the model is coordinatized and, by Theorem 2, it is enough to obtain the value

$$\hat Y_r = X_r\hat\beta + V_{rs}V_s^{-1}(Y_s - X_s\hat\beta).$$

Let $V_s$ and $V_{rs}$ be written as

$$V_s = (1 - \rho)I_n + \rho J_n, \qquad V_{rs} = \rho J_{N-n,n},$$

where $J_n$ and $J_{N-n,n}$ are, respectively, the $n \times n$ and $(N-n) \times n$ matrices of ones.

Thus, it is easy to see that

$$\hat\beta = \frac{\displaystyle\sum_{i=1}^n x_iy_i - \frac{\rho}{1 + (n-1)\rho}\sum_{i=1}^n x_i\sum_{i=1}^n y_i}{\displaystyle\sum_{i=1}^n x_i^2 - \frac{\rho}{1 + (n-1)\rho}\left(\sum_{i=1}^n x_i\right)^2}$$

and

$$\hat Y_r = \begin{bmatrix} a_{n+1}\hat\beta + b \\ a_{n+2}\hat\beta + b \\ \vdots \\ a_N\hat\beta + b \end{bmatrix},$$

where

$$a_{n+j} = x_{n+j} - \frac{\rho}{1 + (n-1)\rho}\sum_{i=1}^n x_i, \quad j = 1, 2, \ldots, N-n,$$

and

$$b = \frac{\rho\displaystyle\sum_{i=1}^n y_i}{1 + (n-1)\rho}.$$
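
These closed forms can be checked against the general expression $\hat Y_r = X_r\hat\beta + V_{rs}V_s^{-1}(Y_s - X_s\hat\beta)$. In the sketch below, $N$, $n$, $\rho$, the values of $x$ and the observed sample are invented for the check.

```python
import numpy as np

# Numerical check of Example 2 under invented data: the GLS form of (3.2) agrees
# with the closed-form beta_hat, a_{n+j} and b derived above.
rng = np.random.default_rng(2)
N, n, rho = 7, 4, 0.3
x = rng.uniform(1.0, 5.0, size=N)
y_s = 2.0 * x[:n] + rng.standard_normal(n)           # observed sample values

V_s = (1 - rho) * np.eye(n) + rho * np.ones((n, n))  # V_s = (1 - rho) I_n + rho J_n
V_rs = rho * np.ones((N - n, n))                     # V_rs = rho J_{N-n,n}
X_s, X_r = x[:n, None], x[n:, None]

# General form: beta_hat by GLS on the sample, then Y_r_hat as in (3.2)
Vs_inv = np.linalg.inv(V_s)
beta_hat = np.linalg.solve(X_s.T @ Vs_inv @ X_s, X_s.T @ Vs_inv @ y_s).item()
Yr_hat = (X_r * beta_hat).ravel() + V_rs @ Vs_inv @ (y_s - X_s.ravel() * beta_hat)

# Closed forms from the example
c = rho / (1 + (n - 1) * rho)
beta_closed = (x[:n] @ y_s - c * x[:n].sum() * y_s.sum()) / (x[:n] @ x[:n] - c * x[:n].sum() ** 2)
a = x[n:] - c * x[:n].sum()
b = rho * y_s.sum() / (1 + (n - 1) * rho)
print(np.isclose(beta_hat, beta_closed), np.allclose(Yr_hat, a * beta_closed + b))   # True True
```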

Cite this paper

Elian, S.N. (2017) The Coordinate-Free Prediction in Finite Populations with Correlated Observations. Open Journal of Statistics, 7, 182-193. https://doi.org/10.4236/ojs.2017.72014

References

1. Rodrigues, J. (1989) The Coordinate-Free Prediction in Finite Populations. Pakistan Journal of Statistics, 5, 119-129.

2. Arnold, S.F. (1980) The Theory of Linear Models and Multivariate Analysis. John Wiley, New York.

3. Drygas, H. (1970) The Coordinate-Free Approach to Gauss-Markov Estimation. Springer-Verlag, Berlin. https://doi.org/10.1007/978-3-642-65148-9

4. Royall, R.M. (1976) The Linear Least-Squares Prediction Approach to Two-Stage Sampling. Journal of the American Statistical Association, 71, 657-664. https://doi.org/10.1080/01621459.1976.10481542

5. Graybill, F.A. (1976) Theory and Application of the Linear Model. Duxbury Press, North Scituate, MA.

6. Rao, C.R. (1973) Linear Statistical Inference and Its Applications. 2nd Edition, Wiley, New York. https://doi.org/10.1002/9780470316436

Appendix

First, we show that $\Gamma = h'PI_sZ + h'P\hat\mu_Z$, defined in the proof of Theorem 1, does not depend on unknown quantities.

Since $P$ is a lower triangular matrix, $P^{-1}$ is also lower triangular; then

$$P^{-1}Y = \begin{bmatrix} \delta_{11} & 0 & \cdots & 0 \\ \delta_{21} & \delta_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \delta_{N1} & \delta_{N2} & \cdots & \delta_{NN} \end{bmatrix} Y = \begin{bmatrix} \delta_{11}y_1 \\ \delta_{21}y_1 + \delta_{22}y_2 \\ \vdots \\ \delta_{n1}y_1 + \delta_{n2}y_2 + \cdots + \delta_{nn}y_n \\ \delta_{n+1,1}y_1 + \delta_{n+1,2}y_2 + \cdots + \delta_{n+1,n+1}y_{n+1} \\ \vdots \\ \delta_{N1}y_1 + \delta_{N2}y_2 + \cdots + \delta_{NN}y_N \end{bmatrix}$$

and

$$I_sZ = \begin{bmatrix} I_n & 0 \\ 0 & 0 \end{bmatrix}P^{-1}Y = \begin{bmatrix} \delta_{11}y_1 \\ \delta_{21}y_1 + \delta_{22}y_2 \\ \vdots \\ \delta_{n1}y_1 + \delta_{n2}y_2 + \cdots + \delta_{nn}y_n \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

So it is shown that $h'PI_sZ$ does not depend on unknown quantities. By the proof of Theorem 1, we can see that $\hat Z_r = (I - A_2)^{-1}A_1CY_s$, and thus $h'P\hat\mu_Z$ also does not depend on unknown quantities. Then $\Gamma$ is a predictor of $h'Y$.

Now we derive the results (A.1) through (A.6) which are necessary to prove Theorem 2.

Let $P^{-1}$ be partitioned as in the proof of Theorem 1, $P^{-1} = \begin{bmatrix} C & 0 \\ B_1 & B_2 \end{bmatrix}$, which implies that

$$P = \begin{bmatrix} C^{-1} & 0 \\ -B_2^{-1}B_1C^{-1} & B_2^{-1} \end{bmatrix}.$$

Then, using the equality $V = PP'$ and after some algebraic manipulations, it follows that

$$C^{-1}(C^{-1})' = V_s,$$

and so

$$C'C = V_s^{-1}. \qquad (A.1)$$

Furthermore,

$$-C^{-1}(C^{-1})'B_1'(B_2^{-1})' = -V_sB_1'(B_2^{-1})' = V_{sr}$$

and hence

$$B_2^{-1}B_1 = -V_{rs}V_s^{-1}. \qquad (A.2)$$
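
Identities (A.1) and (A.2) are easy to verify numerically for any positive definite $V$; the NumPy sketch below (with $V$ and the sample size invented for the check) does so for the blocks of the inverse Cholesky factor.

```python
import numpy as np

# A small check of (A.1) and (A.2) under an invented V: with P^{-1} partitioned
# as [[C, 0], [B1, B2]], we expect C'C = V_s^{-1} and B2^{-1} B1 = -V_rs V_s^{-1}.
rng = np.random.default_rng(5)
N, n = 6, 4
W = rng.standard_normal((N, N))
V = W @ W.T + N * np.eye(N)

P = np.linalg.cholesky(V)            # V = P P', P lower triangular
Pinv = np.linalg.inv(P)              # also lower triangular
C, B1, B2 = Pinv[:n, :n], Pinv[n:, :n], Pinv[n:, n:]

V_s, V_rs = V[:n, :n], V[n:, :n]
print(np.allclose(C.T @ C, np.linalg.inv(V_s)))                          # (A.1)
print(np.allclose(np.linalg.solve(B2, B1), -V_rs @ np.linalg.inv(V_s)))  # (A.2)
```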

In the coordinatized model with $\mu = X\beta$ and covariance matrix $V$, it is well known [6] that

$$P_\Omega = X(X'V^{-1}X)^{-1}X'V^{-1}$$

and thus

$$P_{\Omega^*} = P^{-1}P_\Omega P = P^{-1}X(X'V^{-1}X)^{-1}X'(P^{-1})'.$$

In partitioned form, this matrix can be written as

$$\begin{aligned}
P_{\Omega^*} = \begin{bmatrix} H_1 & H_2 \\ A_1 & A_2 \end{bmatrix} &= \begin{bmatrix} C & 0 \\ B_1 & B_2 \end{bmatrix}\begin{bmatrix} X_s \\ X_r \end{bmatrix}(X'V^{-1}X)^{-1}\begin{bmatrix} X_s' & X_r' \end{bmatrix}\begin{bmatrix} C' & B_1' \\ 0 & B_2' \end{bmatrix} \\
&= \begin{bmatrix} CX_s(X'V^{-1}X)^{-1} \\ (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1} \end{bmatrix}\begin{bmatrix} X_s'C' & X_s'B_1' + X_r'B_2' \end{bmatrix} \\
&= \begin{bmatrix} CX_s(X'V^{-1}X)^{-1}X_s'C' & CX_s(X'V^{-1}X)^{-1}(X_s'B_1' + X_r'B_2') \\ (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}X_s'C' & (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}(X_s'B_1' + X_r'B_2') \end{bmatrix},
\end{aligned}$$

then

$$A_1 = (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}X_s'C', \qquad (A.3)$$

and

$$A_2 = (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}(X_s'B_1' + X_r'B_2'). \qquad (A.4)$$

Using the fact that $V^{-1} = (P^{-1})'P^{-1}$, it follows that

$$V^{-1} = \begin{bmatrix} C' & B_1' \\ 0 & B_2' \end{bmatrix}\begin{bmatrix} C & 0 \\ B_1 & B_2 \end{bmatrix} = \begin{bmatrix} C'C + B_1'B_1 & B_1'B_2 \\ B_2'B_1 & B_2'B_2 \end{bmatrix}$$

and

$$X'V^{-1}X = X_s'C'CX_s + X_s'B_1'B_1X_s + X_r'B_2'B_1X_s + X_s'B_1'B_2X_r + X_r'B_2'B_2X_r.$$

Applying (A.1),

$$X'V^{-1}X = X_s'V_s^{-1}X_s + X_s'B_1'B_1X_s + X_r'B_2'B_1X_s + X_s'B_1'B_2X_r + X_r'B_2'B_2X_r. \qquad (A.5)$$

Application of a result on the inverse of a matrix of this form, in conjunction with (A.4) and (A.5), yields

$$\begin{aligned}
(I - A_2)^{-1} &= [I - (B_1X_s + B_2X_r)(X'V^{-1}X)^{-1}(X_s'B_1' + X_r'B_2')]^{-1} \\
&= I - (B_1X_s + B_2X_r)[(X_r'B_2' + X_s'B_1')(B_1X_s + B_2X_r) - X'V^{-1}X]^{-1}(X_s'B_1' + X_r'B_2') \\
&= I + (B_1X_s + B_2X_r)(X_s'V_s^{-1}X_s)^{-1}(X_s'B_1' + X_r'B_2'). \qquad (A.6)
\end{aligned}$$