﻿ Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement

Open Journal of Statistics
Vol.07 No.05(2017), Article ID:79925,15 pages
10.4236/ojs.2017.75059

Nelson Kiprono Bii1, Christopher Ouma Onyango2, John Odhiambo1

1Institute of Mathematical Sciences, Strathmore University, Nairobi, Kenya

2Department of Statistics, Kenyatta University, Nairobi, Kenya

Received: September 1, 2017; Accepted: October 24, 2017; Published: October 27, 2017

ABSTRACT

Non-response is a regular occurrence in sample surveys. Developing estimators when non-response exists may result in large biases when estimating population parameters. In this paper, a finite population mean is estimated under random non-response in two stage cluster sampling with replacement. It is assumed that non-response arises in the survey variable at the second stage of cluster sampling. A weighting method of compensating for non-response is applied. Asymptotic properties of the proposed estimator of the population mean are derived, and under mild assumptions the estimator is shown to be asymptotically consistent.

Keywords:

Non-Response, Nadaraya-Watson Estimation, Two Stage Cluster Sampling

1. Introduction

In survey sampling, non-response is one source of error in data analysis. Non-response introduces bias into the estimation of population characteristics and causes samples to fail to follow the distributions determined by the original sampling design. This paper seeks to reduce the non-response bias in the estimation of a finite population mean in two stage cluster sampling.

The use of regression models is recognized as one of the procedures for reducing bias due to non-response using auxiliary information. In practice, information on the variables of interest is not available for non-respondents, but information on auxiliary variables may be. It is therefore desirable to model the response behavior and incorporate the auxiliary data into the estimation so that the bias arising from non-response can be reduced. If the auxiliary variables are correlated with the response behavior, then regression estimators are more precise in estimating population parameters, provided the auxiliary information is known.

Many authors have developed estimators of the population mean where non-response exists in both the study and auxiliary variables. But there exist cases that do not exhibit non-response in the auxiliary variables, such as the number of people in a family or the duration one takes to go through education. Imputation techniques have been used to account for non-response in the study variable. For instance,  applied a compromised method of imputation to estimate a finite population mean under two stage cluster sampling; this method, however, produced a large bias. In this study, the Nadaraya-Watson regression technique is applied in deriving the estimator of the finite population mean, and kernel weights are used to compensate for non-response.

Reweighting Method

Non-response causes loss of observations, and reweighting means that the weights are increased for all or almost all of the responding elements to compensate for those that fail to respond in a survey. The population mean, $\stackrel{¯}{Y}$ , is estimated by selecting a sample of size n at random with replacement. If responding units to item y are independent, so that the probability of unit j in cluster i responding is ${p}_{ij}\left(i=1,2,\cdots ,n;j=1,2,\cdots ,m\right)$ , then an imputed estimator, ${\stackrel{¯}{y}}_{I}$ , of $\stackrel{¯}{Y}$ is given by

${\stackrel{¯}{y}}_{I}=\frac{1}{{\sum }_{i,j\in s}{w}_{ij}}\left[\underset{i,j\in {s}_{r}}{\sum }{w}_{ij}{y}_{ij}+\underset{i,j\in {s}_{m}}{\sum }{w}_{ij}{y}_{ij}^{\ast }\right]$ (1.0)

where ${w}_{ij}=\frac{1}{{\pi }_{ij}}$ gives sample survey weight tied to unit j in cluster i and

${\pi }_{ij}=p\left[i,j\in s\right]$ is its second order inclusion probability, ${s}_{r}$ is the set of r units responding to item y, and ${s}_{m}$ is the set of m units that failed to respond to item y, so that $r+m=n$ , and ${y}_{ij}^{\ast }$ is the imputed value generated to compensate for the missing value ${y}_{ij}$ ,  .
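As a concrete illustration, the reweighted estimator in Equation (1.0) can be sketched as follows. All data, inclusion probabilities, and the respondent-mean imputation rule below are hypothetical placeholders, not the paper's proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two stage sample: n clusters of m units each, with
# equal second order inclusion probabilities (an assumption).
n, m = 5, 20
pi = np.full((n, m), 0.2)                 # pi_ij = P[i, j in s]
w = 1.0 / pi                              # survey weights w_ij = 1/pi_ij
y = rng.normal(50.0, 10.0, size=(n, m))   # survey variable y_ij
resp = rng.random((n, m)) < 0.8           # True on s_r, False on s_m

# Impute each missing y_ij by the respondent mean (placeholder rule);
# the paper later replaces this with Nadaraya-Watson predictions.
y_imputed = np.where(resp, y, y[resp].mean())

# Equation (1.0): weighted respondents plus weighted imputed values,
# normalised by the total weight.
y_bar_I = (w * y_imputed).sum() / w.sum()
```

With equal weights this reduces to the plain mean of the completed data; unequal $\pi_{ij}$ would make the weighting matter.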

2. The Proposed Estimator of Finite Population Mean

Consider a finite population of size M consisting of N clusters with ${M}_{i}$ elements in the ith cluster. A sample of n clusters is selected so that ${n}_{1}$ units respond and ${n}_{2}$ units fail to respond. Let ${y}_{ij}$ denote the value of the survey variable Y for unit j in cluster i, for $i=1,2,\cdots ,N$ , $j=1,2,\cdots ,{M}_{i}$ , and let the population mean be given by

$\stackrel{¯}{\stackrel{¯}{Y}}=\frac{1}{M{N}_{i}}\underset{i=1}{\overset{N}{\sum }}\underset{j=1}{\overset{{M}_{i}}{\sum }}\text{ }\text{ }{Y}_{ij}$ (2.1)

Let an estimator of the finite population mean be defined by $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$ as follows:

$\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}=\frac{1}{M}\left\{\frac{1}{{n}_{1}}\underset{i\in s}{\sum }\underset{j\in s}{\sum }\frac{{Y}_{ij}}{{\pi }_{ij}}{\delta }_{ij}+\frac{1}{{n}_{2}}\underset{i\in s}{\sum }\underset{j\notin s}{\sum }\left(1-\frac{1}{{\pi }_{ij}}\right){\stackrel{^}{Y}}_{ij}{\delta }_{ij}\right\}$ (2.2)

where ${\delta }_{ij}$ is an indicator variable defined by

${\delta }_{ij}=\left\{\begin{array}{l}1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{if}\text{\hspace{0.17em}}{j}^{\text{th}}\text{\hspace{0.17em}}\text{unit}\text{\hspace{0.17em}}\text{in}\text{\hspace{0.17em}}\text{the}\text{\hspace{0.17em}}{i}^{\text{th}}\text{\hspace{0.17em}}\text{cluster}\text{\hspace{0.17em}}\text{responds}\\ 0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{elsewhere}\end{array}$

and ${n}_{1}$ and ${n}_{2}$ are the number of units that respond and those that fail to respond respectively.

${\pi }_{ij}$ is the probability of selecting the jth unit in the ith cluster into the sample.

Let $w\left({x}_{ij}\right)=\frac{1}{{\pi }_{ij}}$ be the inverse of the second order inclusion probability, where ${x}_{ij}$ is the auxiliary random variable for the jth unit in the ith cluster. It follows that Equation (2.2) becomes

$\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}=\frac{1}{M}\left\{\frac{1}{{n}_{1}}\underset{i\in s}{\sum }\underset{j\in s}{\sum }\text{ }w\left({x}_{ij}\right){Y}_{ij}{\delta }_{ij}+\frac{1}{{n}_{2}}\underset{i\in s}{\sum }\underset{j\notin s}{\sum }\left(1-w\left({x}_{ij}\right)\right){\stackrel{^}{Y}}_{ij}{\delta }_{ij}\right\}$ (2.3)

Suppose ${\delta }_{ij}$ are Bernoulli random variables with probability of success ${\delta }_{ij}^{\ast }$ ; then $E\left({\delta }_{ij}\right)={p}_{r}\left({\delta }_{ij}=1\right)={\delta }_{ij}^{\ast }$ and $Var\left({\delta }_{ij}\right)={\delta }_{ij}^{\ast }\left(1-{\delta }_{ij}^{\ast }\right)$ ,  . Thus, the expected value of the estimator of the population mean is given by

$E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{M}\left\{\frac{1}{{n}_{1}}\underset{i\in s}{\sum }\underset{j\in s}{\sum }E\left(w\left({x}_{ij}\right){Y}_{ij}\right){\delta }_{ij}+\frac{1}{{n}_{2}}\underset{i\in s}{\sum }\underset{j\notin s}{\sum }E\left(\left(1-w\left({x}_{ij}\right)\right){\stackrel{^}{Y}}_{ij}\right){\delta }_{ij}^{\ast }\right\}$ (2.4)

Assuming non-response occurs in the second stage of sampling, the problem is therefore to estimate the values of ${\stackrel{^}{Y}}_{ij}$ . To do this, the linear regression model applied by  and , given below, is used:

${\stackrel{^}{Y}}_{ij}=m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}$ (2.5)

where $m\left(.\right)$ is a smooth function of the auxiliary variables and ${\stackrel{^}{e}}_{ij}$ is the residual term with mean zero and strictly positive variance. Substituting Equation (2.5) in Equation (2.4), the following result is obtained:

$\begin{array}{c}E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{M}\left\{\frac{1}{{n}_{1}}\underset{i\in s}{\sum }\underset{j\in s}{\sum }E\left(\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right)w\left({x}_{ij}\right)\right){\delta }_{ij}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\frac{1}{{n}_{2}}\underset{i\in s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\end{array}$ (2.6)

Assuming that ${n}_{1}={n}_{2}=n$ , and simplifying Equation (2.6) we obtain the following

$\begin{array}{c}E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{Mn}\left\{\underset{i\in s}{\sum }\underset{j\in s}{\sum }E\left(\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right)w\left({x}_{ij}\right)\right){\delta }_{ij}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\end{array}$ (2.7)

Detailed work by  proved that $E\left({\stackrel{^}{e}}_{ij}\right)=0$ . Therefore Equation (2.7) reduces to

$\begin{array}{c}E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{Mn}\left\{\underset{i\in s}{\sum }\underset{j\in s}{\sum }E\left(m\left({\stackrel{^}{x}}_{ij}\right)\right)E\left(w\left({x}_{ij}\right)\right){\delta }_{ij}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\underset{i\in s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\end{array}$ (2.8)

The second term in Equation (2.8) is simplified as follows:

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}{}^{*}\right\}\\ =\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)m\left({\stackrel{^}{x}}_{ij}\right){\delta }_{ij}\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right){e}_{ij}{\delta }_{ij}\right\}\end{array}$ (2.9)

But $E\left(m\left({\stackrel{^}{x}}_{ij}\right)\right)=m\left({x}_{ij}\right)$ ,  . Thus we get the following:

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\\ =\frac{1}{Mn}\left\{\underset{i=m+1}{\overset{M}{\sum }}\underset{j=n+1}{\overset{N}{\sum }}{\delta }_{ij}m\left({x}_{ij}\right)-w\left({x}_{ij}\right){\delta }_{ij}m\left({x}_{ij}\right)\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+\frac{1}{Mn}\left\{\underset{i=m+1}{\overset{M}{\sum }}\underset{j=n+1}{\overset{N}{\sum }}E\left({e}_{ij}{\delta }_{ij}\right)-E\left(w\left({x}_{ij}\right)\left({e}_{ij}{\delta }_{ij}\right)\right)\right\}\end{array}$ (2.10)

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\\ =\frac{1}{Mn}\left\{\left(M-\left(m+1\right)\right)\left(N-\left(n+1\right)\right)\left[\left({\delta }_{ij}\right)m\left({x}_{ij}\right)-w\left({x}_{ij}\right){\delta }_{ij}m\left({x}_{ij}\right)\right]\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+\frac{1}{Mn}\left\{\left(M-\left(m+1\right)\right)\left(N-\left(n+1\right)\right)\left[{\delta }_{ij}E\left({e}_{ij}\right)-E\left({e}_{ij}\right){\delta }_{ij}w\left({x}_{ij}\right)\right]\right\}\end{array}$ (2.11)

But $E\left({e}_{ij}\right)=0$ , for details see  .

On simplification, Equation (2.11) reduces to

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\\ =\frac{\left(M-\left(m+1\right)\right)\left(N-\left(n+1\right)\right)}{Mn}\left\{{\delta }_{ij}m\left({x}_{ij}\right)\left(1-w\left({x}_{ij}\right)\right)\right\}\end{array}$ (2.12)

Recall $w\left({x}_{ij}\right)=\frac{1}{{\pi }_{ij}}$

so that Equation (2.12) may be re-written as follows:

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\\ =\frac{\left(M-\left(m+1\right)\right)\left(N-\left(n+1\right)\right)}{Mn}\left\{{\delta }_{ij}m\left({x}_{ij}\right)\left(\frac{{\pi }_{ij}-1}{{\pi }_{ij}}\right)\right\}\end{array}$ (2.13)

Assume the sample sizes are large, i.e. $n\to N$ and $m\to M$ ; then Equation (2.13) simplifies to

$\begin{array}{l}\frac{1}{Mn}\left\{\underset{i\notin s}{\sum }\underset{j\notin s}{\sum }E\left(1-w\left({x}_{ij}\right)\right)E\left(m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}\right){\delta }_{ij}^{\ast }\right\}\\ =\frac{1}{Mn}\left\{{\delta }_{ij}m\left({x}_{ij}\right)\left(\frac{{\pi }_{ij}-1}{{\pi }_{ij}}\right)\right\}\end{array}$ (2.14)

Combining Equation (2.14) with the first term in Equation (2.8) gives

$E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{Mn}\left\{\underset{i\in s}{\sum }\underset{j\in s}{\sum }E\left(m\left({x}_{ij}\right)\right)E\left(\frac{{\delta }_{ij}}{{\pi }_{ij}}\right)+\underset{i\in s}{\sum }\underset{j\notin s}{\sum }{\delta }_{ij}\left(m\left({\stackrel{^}{x}}_{ij}\right)\right)\left(\frac{{\pi }_{ij}-1}{{\pi }_{ij}}\right)\right\}$ (2.15)

Since the first term represents the responding units, their values are all known. The problem is to estimate the non-response units in the second term. Setting the indicator variable ${\delta }_{ij}=1$ , the problem reduces to that of estimating the function $m\left({\stackrel{^}{x}}_{ij}\right)$ , which is a function of the auxiliary variables ${x}_{ij}$ . Hence the expected value of the estimator of the finite population mean under non-response is given by

$E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{Mn}\left\{\underset{i\in s}{\sum }\underset{j\in s}{\sum }{Y}_{ij}+\underset{i\in s}{\sum }\underset{j\notin s}{\sum }{\delta }_{ij}\left(m\left({\stackrel{^}{x}}_{ij}\right)\right)\left(\frac{{\pi }_{ij}-1}{{\pi }_{ij}}\right)\right\}$ (2.16)

In order to derive the asymptotic properties of the expected value of the proposed estimator in Equation (2.16), a review of the Nadaraya-Watson estimator is first given below.

Given a random sample of bivariate data $\left({x}_{1},{y}_{1}\right),\cdots ,\left({x}_{n},{y}_{n}\right)$ having a joint pdf $g\left(x,y\right)$ , consider the regression model

${Y}_{ij}=m\left({x}_{ij}\right)+{e}_{ij}$ as in Equation (2.5), where $m\left(.\right)$ is unknown. Let the error term satisfy the following conditions:

$E\left({e}_{ij}\right)=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}Var\left({e}_{ij}\right)={\sigma }_{ij}^{2},\text{\hspace{0.17em}}\text{\hspace{0.17em}}cov\left({e}_{i},{e}_{j}\right)=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{for}\text{\hspace{0.17em}}i\ne j$ (3.0)

Furthermore, let $K\left(.\right)$ denote a symmetric kernel density function which is twice continuously differentiable with:

$\begin{array}{l}{\int }_{-\infty }^{\infty }k\left(w\right)\text{d}w=1\\ {\int }_{-\infty }^{\infty }wk\left(w\right)\text{d}w=0\\ {\int }_{-\infty }^{\infty }{k}^{2}\left(w\right)\text{d}w<\infty \\ {\int }_{-\infty }^{\infty }{w}^{2}k\left(w\right)\text{d}w={d}_{k}\\ k\left(w\right)=k\left(-w\right)\end{array}\right\}$ (3.1)
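These conditions can be checked numerically for a particular kernel. The sketch below uses the standard Gaussian density as $K$ , a common choice satisfying (3.1) with ${d}_{k}=1$ ; the grid limits and step are arbitrary numerical choices:

```python
import numpy as np

# Standard Gaussian kernel k(w) on a fine grid wide enough that the
# tails are negligible; integrals are approximated by Riemann sums.
w = np.linspace(-10.0, 10.0, 200_001)
dw = w[1] - w[0]
k = np.exp(-0.5 * w**2) / np.sqrt(2.0 * np.pi)

total  = (k * dw).sum()         # integral of k(w) dw      -> 1
first  = (w * k * dw).sum()     # integral of w k(w) dw    -> 0 (symmetry)
second = (w**2 * k * dw).sum()  # integral of w^2 k(w) dw  -> d_k = 1
k_sq   = (k**2 * dw).sum()      # integral of k^2(w) dw    -> 1/(2*sqrt(pi)), finite
```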

In addition, let the smoothing weights be defined by

$w\left({x}_{ij}\right)=\frac{K\left(\frac{x-{X}_{ij}}{b}\right)}{{\sum }_{i\in s}{\sum }_{j\in s}K\left(\frac{x-{X}_{ij}}{b}\right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}i=1,2,\cdots ,n;j=1,2,\cdots ,m$ (3.2)

where b is a smoothing parameter, normally referred to as the bandwidth, such that ${\sum }_{i}{\sum }_{j}w\left({x}_{ij}\right)=1$ .

Using Equation (3.2), the Nadaraya-Watson estimator of $m\left({x}_{ij}\right)$ is given by:

$m\left({\stackrel{^}{x}}_{ij}\right)=\underset{i\in s}{\sum }\underset{j\in s}{\sum }w\left({x}_{ij}\right){Y}_{ij}=\frac{{\sum }_{i\in s}{\sum }_{j\in s}K\left(\frac{x-{X}_{ij}}{b}\right){Y}_{ij}}{{\sum }_{i\in s}{\sum }_{j\in s}K\left(\frac{x-{X}_{ij}}{b}\right)},\text{\hspace{0.17em}}i=1,2,\cdots ,n;j=1,2,\cdots ,m$ (3.3)
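A minimal sketch of the estimator in Equation (3.3), flattening the double index $\left(i,j\right)$ into a single array and using a Gaussian kernel (both simplifying assumptions), with hypothetical toy data:

```python
import numpy as np

def nadaraya_watson(x0, X, Y, b):
    """Estimate m(x0) as the kernel-weighted average of the Y values,
    i.e. Equation (3.3) with a Gaussian kernel and bandwidth b."""
    K = np.exp(-0.5 * ((x0 - X) / b) ** 2)   # unnormalised kernel values
    weights = K / K.sum()                    # smoothing weights (3.2), sum to 1
    return float((weights * Y).sum())

# Toy data from the model Y = m(x) + e with m(x) = x^2 (hypothetical).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, 500)
Y = X**2 + rng.normal(0.0, 0.05, 500)
m_hat = nadaraya_watson(0.5, X, Y, b=0.1)    # close to m(0.5) = 0.25
```

Because the weights sum to one, the estimate is a convex combination of the observed ${Y}_{ij}$ , which is the linear-smoother property noted below.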

Given the model ${\stackrel{^}{Y}}_{ij}=m\left({\stackrel{^}{x}}_{ij}\right)+{\stackrel{^}{e}}_{ij}$ and the conditions on the error term in Equation (3.0) above, the regression function of the survey variable ${Y}_{ij}$ on the auxiliary variable ${X}_{ij}$ can be expressed in terms of the joint pdf $g\left({x}_{ij},{y}_{ij}\right)$ as follows:

$m\left({x}_{ij}\right)=E\left({Y}_{ij}/{X}_{ij}={x}_{ij}\right)={\int }^{\text{​}}yg\left[y/x\right]\text{d}y=\frac{{\int }^{\text{​}}yg\left(x,y\right)\text{d}y}{{\int }^{\text{​}}g\left(x,y\right)\text{d}y}$ (3.4)

where ${\int }^{\text{​}}g\left(x,y\right)\text{d}y$ is the marginal density of ${X}_{ij}$ . The numerator and the denominator of Equation (3.4) can be estimated separately using kernel functions as follows:

$g\left(x,y\right)$ is estimated by;

$\stackrel{^}{g}\left(x,y\right)=\frac{1}{mn}\underset{i}{\sum }\underset{j}{\sum }\left(\frac{1}{b}K\left(\frac{x-{X}_{ij}}{b}\right)\frac{1}{b}K\left(\frac{y-{Y}_{ij}}{b}\right)\right)$ (3.5)

and

${\int }^{\text{​}}y\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mn}\underset{i}{\sum }\underset{j}{\sum }{\int }^{\text{​}}\left(\frac{1}{b}K\left(\frac{x-{X}_{ij}}{b}\right)\frac{1}{b}K\left(\frac{y-{Y}_{ij}}{b}\right)\right)y\text{d}y$ (3.6)

Using change of variables technique; let

$\begin{array}{l}w=\frac{y-{Y}_{ij}}{b}\\ y=wb+{Y}_{ij}\\ \text{d}y=b\text{d}w\end{array}\right\}$ (3.7)

So that

${\int }^{\text{​}}y\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mn}\underset{i}{\sum }\underset{j}{\sum }{\int }^{\text{​}}\frac{1}{b}K\left(\frac{x-{X}_{ij}}{b}\right)\frac{1}{b}\left(bw+{Y}_{ij}\right)K\left(w\right)b\text{d}w$ (3.8)

$=\frac{1}{mnb}\underset{i}{\sum }\underset{j}{\sum }K\left(\frac{x-{X}_{ij}}{b}\right)\left[{\int }^{\text{​}}wK\left(w\right)b\text{d}w+\frac{1}{b}{Y}_{ij}{\int }^{\text{​}}K\left(w\right)b\text{d}w\right]$ (3.9)

From the conditions specified in Equation (3.1), Equation (3.9) simplifies to

${\int }^{\text{​}}y\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mnb}\underset{i}{\sum }\underset{j}{\sum }K\left(\frac{x-{X}_{ij}}{b}\right)\left[0+{Y}_{ij}\right]$ (3.10)

which reduces to:

${\int }^{\text{​}}y\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mnb}\underset{i}{\sum }\underset{j}{\sum }K\left(\frac{x-{X}_{ij}}{b}\right){Y}_{ij}$ (3.11)

Following the same procedure, the denominator can be obtained as follows:

$\begin{array}{c}{\int }^{\text{​}}\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mn}\underset{i}{\sum }\underset{j}{\sum }{\int }^{\text{​}}\left(\frac{1}{b}K\left(\frac{x-{X}_{ij}}{b}\right)\frac{1}{b}K\left(\frac{y-{Y}_{ij}}{b}\right)\right)\text{d}y\\ =\frac{1}{mnb}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}K\left(\frac{x-{X}_{ij}}{b}\right){\int }^{\text{​}}\frac{1}{b}K\left(\frac{y-{Y}_{ij}}{b}\right)\text{d}y\end{array}$ (3.12)

Using change of variable technique as in Equation (3.7), Equation (3.12) can be re-written as follows:

${\int }^{\text{​}}\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mnb}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}K\left(\frac{x-{X}_{ij}}{b}\right){\int }^{\text{​}}\frac{1}{b}K\left(w\right)b\text{d}w$ (3.13)

which yields

${\int }^{\text{​}}\stackrel{^}{g}\left(x,y\right)\text{d}y=\frac{1}{mnb}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}K\left(\frac{x-{X}_{ij}}{b}\right)$ (3.14)

since ${\int }^{\text{​}}\frac{1}{b}K\left(w\right)b\text{d}w={\int }^{\text{​}}K\left(w\right)\text{d}w=1$ , being a pdf.

It follows from Equations (3.11) and (3.14) that the estimator $m\left({\stackrel{^}{x}}_{ij}\right)$ is as given in Equation (3.3). Thus the estimator of $m\left({x}_{ij}\right)$ is a linear smoother, since it is a linear function of the observations ${Y}_{ij}$ . Given a sample and a specified kernel function, for a given auxiliary value ${x}_{ij}$ the corresponding y-estimate is obtained by the estimator in Equation (3.3), which can be written as:

${\stackrel{^}{y}}_{ij}={m}_{NW}\left({\stackrel{^}{x}}_{ij}\right)=\underset{i}{\sum }\underset{j}{\sum }{W}_{ij}\left({x}_{ij}\right){Y}_{ij}$ (3.15)

where ${m}_{NW}\left({\stackrel{^}{x}}_{ij}\right)$ is the Nadaraya-Watson estimator for estimating the unknown function $m\left(.\right)$ , for details see   .

This provides a way of estimating for instance the non-response values of the survey variable ${Y}_{ij}$ , given the auxiliary values ${x}_{ij}$ , for a specified kernel function.
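As an illustrative sketch (hypothetical data, scalar auxiliary variable, Gaussian kernel), the non-response values of ${Y}_{ij}$ can be filled in by evaluating the smoother of Equation (3.15), fitted on the respondents, at each non-respondent's auxiliary value:

```python
import numpy as np

def nw_predict(x0, X, Y, b):
    # Nadaraya-Watson prediction at x0, as in Equation (3.15),
    # with a Gaussian kernel (illustrative choice).
    K = np.exp(-0.5 * ((x0 - X) / b) ** 2)
    return float((K * Y).sum() / K.sum())

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 300)              # auxiliary variable, fully observed
y = 2.0 * x + rng.normal(0.0, 0.1, 300)     # survey variable, m(x) = 2x
resp = rng.random(300) < 0.7                # response indicators delta_ij

# Impute each non-respondent from the respondents' (x, y) pairs.
y_filled = y.copy()
for i in np.where(~resp)[0]:
    y_filled[i] = nw_predict(x[i], x[resp], y[resp], b=0.05)

mean_hat = y_filled.mean()                  # estimated population mean
```

Here the true mean of y is 1, so a sensible imputation should leave the completed-data mean close to that value.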

4. Asymptotic Bias of the Mean Estimator $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$

Equation (2.16) may be written as

$E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{MN}\left\{\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}{Y}_{ij}+\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}{m}_{NW}\left({\stackrel{^}{x}}_{ij}\right)\right\}$ (4.1)

Replacing $x$ by ${x}_{ij}$ and re-writing Equation (3.15) using the symmetry of the kernel, we obtain

${m}_{NW}\left({\stackrel{^}{x}}_{ij}\right)=\frac{{\sum }_{i\in s}{\sum }_{j\in s}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){Y}_{ij}}{{\sum }_{i\in s}{\sum }_{j\in s}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)},i=1,2,\cdots ,n;j=1,2,\cdots ,m$ (4.2)

$=\frac{1}{\stackrel{^}{g}\left({x}_{ij}\right)}\left[\frac{1}{mnb}\underset{i}{\sum }\underset{j}{\sum }K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){Y}_{ij}\right]$ (4.3)

where $\stackrel{^}{g}\left({x}_{ij}\right)$ is the estimated marginal density of auxiliary variables ${X}_{ij}$ .

But for a finite population mean, the expected value of the estimator is given in Equation (4.1). The bias is given by

$\text{Bias}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}-\stackrel{¯}{\stackrel{¯}{Y}}\right)$ (4.4)

$\begin{array}{c}\text{Bias}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=E\left\{\frac{1}{MN}\left[\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}{Y}_{ij}+\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}m\left({\stackrel{^}{x}}_{ij}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}-\frac{1}{MN}\left[\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}{Y}_{ij}+\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}{Y}_{ij}\right]\right\}\end{array}$ (4.5)

which reduces to

$\text{Bias}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{MN}\left\{\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}m\left({\stackrel{^}{x}}_{ij}\right)-\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}{Y}_{ij}\right\}$ (4.6)

$=\frac{1}{MN}\left\{\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}m\left({\stackrel{^}{x}}_{ij}\right)-\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}m\left({x}_{ij}\right)\right\}$ (4.7)

Re-writing the regression model given by ${Y}_{ij}=m\left({X}_{ij}\right)+{e}_{ij}$ as

${Y}_{ij}=m\left({x}_{ij}\right)+\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]+{e}_{ij}$ (4.8)

So that, using Equation (4.3), the first term in Equation (4.7) before taking the expectation is given by:

$\begin{array}{l}\frac{1}{MN}\left\{\frac{\frac{1}{mnb}{\sum }_{i=n+1}^{N}{\sum }_{j=m+1}^{M}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){Y}_{ij}}{\stackrel{^}{g}\left({x}_{ij}\right)}\right\}\\ =\frac{1}{MN}\left\{\frac{1}{\stackrel{^}{g}\left({x}_{ij}\right)}\left\{\frac{1}{mnb}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)m\left({x}_{ij}\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\frac{1}{mnb}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\frac{1}{mnb}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){e}_{ij}\right\}\right\}\end{array}$ (4.9)

Simplifying Equation (4.9) the following is thus obtained:

$\begin{array}{l}\frac{1}{MN}\left\{\frac{1}{mnb\stackrel{^}{g}\left({x}_{ij}\right)}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){Y}_{ij}\right\}\\ =\frac{1}{MN}\left\{\frac{{\sum }_{i=n+1}^{N}{\sum }_{j=m+1}^{M}\left[\stackrel{^}{g}\left({x}_{ij}\right)m\left({x}_{ij}\right)+{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)+{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]}{mnb\stackrel{^}{g}\left({x}_{ij}\right)}\right\}\end{array}$ (4.10)

where

${\stackrel{^}{m}}_{1}\left({x}_{ij}\right)=\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]$

${\stackrel{^}{m}}_{2}\left({x}_{ij}\right)=\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){e}_{ij}$

Taking conditional expectation of Equation (4.10) we get

$\begin{array}{l}E\left[{\sum }_{i=n+1}^{N}{\sum }_{j=m+1}^{M}m\left({\stackrel{^}{x}}_{ij}\right)/{x}_{ij}\right]\\ =\frac{1}{MN}E\left[\frac{1}{mnb}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[m\left({x}_{ij}\right)+\frac{{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)}{\stackrel{^}{g}\left({x}_{ij}\right)}+\frac{{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)}{\stackrel{^}{g}\left({x}_{ij}\right)}\right]\right]\end{array}$ (4.11)

To obtain the relationship between the conditional mean and the selected bandwidth, the following theorem due to  is applied;

Theorem: (Dorfman, 1992)

Let $k\left(w\right)$ be a symmetric density function with ${\int }^{\text{​}}wk\left(w\right)\text{d}w=0$ and ${\int }^{\text{​}}{w}^{2}k\left(w\right)\text{d}w={k}_{2}$ . Assume n and N increase together such that $\frac{n}{N}\to \pi$ with

$0<\pi <1$ . Besides, assume the sampled and non-sampled values of x are in the interval $\left[c,d\right]$ and are generated by densities ${d}_{s}$ and ${d}_{p-s}$ respectively both bounded away from zero on $\left[c,d\right]$ and assumed to have continuous second derivatives. If for any variable $\mathcal{Z}$ , $E\left(\mathcal{Z}/U=u\right)=A\left(u\right)+O\left(B\right)$ and $Var\left(\mathcal{Z}/U=u\right)=O\left(C\right)$ , then $\mathcal{Z}=A\left(u\right)+{O}_{p}\left(B+{C}^{\frac{1}{2}}\right)$ .

Applying this theorem, we have

$\begin{array}{l}MSE\left(\frac{\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}}{{x}_{ij}}\right)=\frac{1}{{\left(MN\right)}^{2}}\left\{\frac{{\left(MN-mn\right)}^{2}{\int }^{\text{​}}{k}^{2}\left(w\right)\text{d}w}{mnbg\left({x}_{ij}\right)}\\ +\frac{{\left(MN-mn\right)}^{2}}{4{m}^{2}{n}^{2}}{b}^{4}{k}_{2}^{2}\left(k\right){\left[{m}^{″}\left({x}_{ij}\right)+\frac{2{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)}{g\left({x}_{ij}\right)}\right]}^{2}\\ +O\left({b}^{4}\right)+O\left[\frac{{\left(MN-mn\right)}^{2}}{mnb}+\frac{1}{mnb}\right]\right\}\end{array}$ (4.12)

The theorem is stated here without proof. To apply it, the mean squared error is partitioned into bias and variance terms, which are derived separately as follows:

From Equation (3.0) it follows that $E\left({e}_{ij}/{X}_{ij}\right)=0$ . Therefore, $E\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=0$ . Thus $E\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]$ can be obtained as follows:

$\begin{array}{l}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{MN}\left\{\frac{1}{mnb}E\left\{\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]\right\}\right\}\end{array}$ (4.13)

Using substitution and change of variable technique below

$w=\frac{V-{x}_{ij}}{b}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}\text{that}\text{\hspace{0.17em}}\text{ }V={x}_{ij}+bw\text{\hspace{0.17em}}\text{ }\text{and}\text{\hspace{0.17em}}\text{ }\text{d}V=b\text{d}w$ (4.14)

Equation (4.13) simplifies to:

$\begin{array}{l}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{MN}\left\{\frac{MN-mn}{mnb}{\int }^{\text{​}}k\left(w\right)\left[m\left({x}_{ij}+bw\right)-m\left({x}_{ij}\right)\right]g\left({x}_{ij}+bw\right)b\text{d}w\right\}\end{array}$ (4.15)

$=\frac{1}{MN}\left\{\frac{MN-mn}{mn}{\int }^{\text{​}}k\left(w\right)\left[m\left({x}_{ij}+bw\right)-m\left({x}_{ij}\right)\right]g\left({x}_{ij}+bw\right)\text{d}w\right\}$ (4.16)

Using the Taylor series expansion about the point ${x}_{ij}$ , the kth order expansion can be written as follows:

$g\left({x}_{ij}+bw\right)=g\left({x}_{ij}\right)+{g}^{\prime }\left({x}_{ij}\right)bw+\frac{1}{2}{g}^{″}\left({x}_{ij}\right){b}^{2}{w}^{2}+\cdots +\frac{1}{k!}{g}^{k}\left({x}_{ij}\right){b}^{k}{w}^{k}+O\left({b}^{2}\right)$ (4.17)

Similarly,

$m\left({x}_{ij}+bw\right)=m\left({x}_{ij}\right)+{m}^{\prime }\left({x}_{ij}\right)bw+\frac{1}{2}{m}^{″}\left({x}_{ij}\right){b}^{2}{w}^{2}+\cdots +\frac{1}{k!}{m}^{k}\left({x}_{ij}\right){b}^{k}{w}^{k}+O\left({b}^{2}\right)$ (4.18)

Expanding up to the third-order terms, Equation (4.18) becomes

$\left[m\left({x}_{ij}+bw\right)-m\left({x}_{ij}\right)\right]={m}^{\prime }\left({x}_{ij}\right)bw+\frac{1}{2}{m}^{″}\left({x}_{ij}\right){b}^{2}{w}^{2}+\frac{1}{3!}{m}^{‴}\left({x}_{ij}\right){b}^{3}{w}^{3}$ (4.19)

In a similar manner, the expansion of Equation (4.16) up to order $O\left({b}^{2}\right)$ is given by:

$\begin{array}{l}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{MN}\left\{\frac{MN-mn}{mn}{\int }^{\text{​}}k\left(w\right)\left({m}^{\prime }\left({x}_{ij}\right)bw+\frac{1}{2}{m}^{″}\left({x}_{ij}\right){b}^{2}{w}^{2}\right)\left(g\left({x}_{ij}\right)+{g}^{\prime }\left({x}_{ij}\right)bw\right)\text{d}w\right\}\end{array}$ (4.20)

Simplifying Equation (4.20) gives;

$\begin{array}{c}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]=\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right)g\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)b{\int }^{\text{​}}wk\left(w\right)\text{d}w\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(\frac{MN-mn}{mn}\right){g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right){b}^{2}{\int }^{\text{​}}{w}^{2}k\left(w\right)\text{d}w\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+\left(\frac{MN-mn}{mn}\right)\frac{1}{2}g\left({x}_{ij}\right){m}^{″}\left({x}_{ij}\right){b}^{2}{\int }^{\text{​}}{w}^{2}k\left(w\right)\text{d}w+O\left({b}^{2}\right)\right\}\end{array}$ (4.21)

Using the conditions stated in Equation (3.1), the derivation in (4.21) can further be simplified to obtain:

$\begin{array}{l}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right)\left[{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)+\frac{1}{2}g\left({x}_{ij}\right){m}^{″}\left({x}_{ij}\right)\right]{b}^{2}{d}_{k}+O\left({b}^{2}\right)\right\}\end{array}$ (4.22)

Hence the expected value of the second term in Equation (4.11) then becomes:

$\begin{array}{l}E\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right)\left[\frac{1}{2}{m}^{″}\left({x}_{ij}\right)+\frac{{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)}{g\left({x}_{ij}\right)}\right]{b}^{2}{d}_{k}+O\left({b}^{2}\right)\right\}\end{array}$ (4.23)

$=\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right)\left[\frac{{m}^{″}\left({x}_{ij}\right)}{2}+{\left[g\left({x}_{ij}\right)\right]}^{-1}{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)\right]{b}^{2}{d}_{k}+O\left({b}^{2}\right)\right\}$ (4.24)

$=\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right){b}^{2}{d}_{k}C\left(x\right)+O\left({b}^{2}\right)\right\}$ (4.25)

where

$C\left(x\right)=\frac{1}{2}{m}^{″}\left({x}_{ij}\right)+{\left[g\left({x}_{ij}\right)\right]}^{-1}{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)$ (4.26)

and ${d}_{k}$ is as stated in Equation (3.1).
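The kernel conditions behind ${d}_{k}$ can be verified numerically for a standard kernel; the Epanechnikov kernel is an assumed illustrative choice, for which $\int K\left(w\right)\text{d}w=1$, $\int wK\left(w\right)\text{d}w=0$ and ${d}_{k}=\int {w}^{2}K\left(w\right)\text{d}w=1/5$:

```python
# Numerical check of the kernel moment conditions of Equation (3.1),
# using the Epanechnikov kernel as an assumed example.
def epanechnikov(w):
    return 0.75 * (1.0 - w * w) if abs(w) <= 1.0 else 0.0

def integrate(f, a=-1.0, b=1.0, n=20000):
    # trapezoidal rule over the kernel support [-1, 1]
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

total = integrate(lambda w: epanechnikov(w))          # should equal 1
first = integrate(lambda w: w * epanechnikov(w))      # should equal 0
d_k = integrate(lambda w: w * w * epanechnikov(w))    # should equal 1/5
print(total, first, d_k)
```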

Using the expression for the bias given in Equation (4.4) and the conditional expectation in Equation (4.11), the bias of the estimator is obtained as:

$\text{Bias}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right){b}^{2}{d}_{k}C\left(x\right)+O\left({b}^{2}\right)\right\}$ (4.27)
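The leading bias term in Equation (4.27) is easy to evaluate once $m$ and $g$ are fixed. The choices below are hypothetical: $m\left(x\right)={x}^{2}$ as the regression function and $g$ the standard normal density, for which ${g}^{\prime }\left(x\right)/g\left(x\right)=-x$, so $C\left(x\right)$ in Equation (4.26) has a closed form:

```python
# Leading bias term of Eq. (4.27) under hypothetical choices of m and g.
def C(x):
    m1, m2 = 2.0 * x, 2.0            # m'(x) and m''(x) for m(x) = x^2
    g_ratio = -x                     # g'(x)/g(x) for the N(0, 1) density
    return 0.5 * m2 + g_ratio * m1   # Equation (4.26)

def leading_bias(x, b, d_k, n_clusters, m_units, N, M):
    # first term of Eq. (4.27); n_clusters, m_units are the sampled first-
    # and second-stage sizes, N, M their population counterparts
    MN, mn = M * N, m_units * n_clusters
    return (1.0 / MN) * ((MN - mn) / mn) * b * b * d_k * C(x)

bias = leading_bias(x=0.5, b=0.1, d_k=0.2, n_clusters=10, m_units=20, N=100, M=50)
print(bias)
```

With these inputs the leading bias is of order ${10}^{-6}$, illustrating how the ${b}^{2}$ factor keeps the bias small for a small bandwidth.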

5. Asymptotic Variance of the Estimator, $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$

From Equations ((4.9) and (4.11)),

${\stackrel{^}{m}}_{2}\left({x}_{ij}\right)=\frac{1}{mnb}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){e}_{ij}$ (5.0)

Hence

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=\frac{1}{{\left(MN\right)}^{2}}{\left(\frac{MN-mn}{mnb}\right)}^{2}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}\text{Var}\left({D}_{x}\right)$ (5.1)

where

${D}_{x}=K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right){e}_{ij}$

Expressing Equation (5.1) in terms of expectation we obtain:

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=\frac{1}{{\left(MN\right)}^{2}}\left[\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}}\right]\left\{E\left[{D}_{x}^{2}\right]-{\left[E\left({D}_{x}\right)\right]}^{2}\right\}$ (5.2)

Using the fact that the conditional expectation $E\left({e}_{ij}/{X}_{ij}\right)=0$ , the second term in Equation (5.2) reduces to zero. Therefore,

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=\frac{1}{{\left(MN\right)}^{2}}\left[\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}}\right]{\sigma }_{\left({x}_{ij}\right)}^{2}$ (5.3)

where

$E{\left({e}_{ij}/{X}_{ij}\right)}^{2}={\sigma }_{\left({x}_{ij}\right)}^{2}$

Let $X={X}_{ij}$ and $x={x}_{ij}$ , and make the following substitutions:

$\begin{array}{l}w=\frac{X-x}{b}\\ X-x=bw\\ \text{d}X=b\text{d}w\end{array}\right\}$ (5.4)

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(\frac{X-x}{b}\right)}^{2}{\sigma }_{\left({x}_{ij}\right)}^{2}g\left(X\right)\text{d}X$ (5.5)

$=\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(w\right)}^{2}{\sigma }_{\left({x}_{ij}\right)}^{2}g\left(x+bw\right)b\text{d}w$ (5.6)

which can be simplified to get:

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]=\frac{{\left(MN-mn\right)}^{2}}{mnb{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(w\right)}^{2}g\left(x\right){\sigma }_{\left({x}_{ij}\right)}^{2}\text{d}w+O\left(\frac{1}{mnb}\right)$ (5.7)
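The simplification from Equation (5.6) to Equation (5.7) replaces $g\left(x+bw\right)$ by $g\left(x\right)$ inside the integral, with the remainder vanishing as $b\to 0$. This step can be checked numerically; the Epanechnikov kernel and a standard normal design density $g$ are assumed purely for illustration:

```python
import math

# Check of the step from Eq. (5.6) to (5.7): for small b,
# the integral of K(w)^2 g(x + b*w) approaches g(x) * int K(w)^2 dw.
def K(w):
    return 0.75 * (1.0 - w * w) if abs(w) <= 1.0 else 0.0

def g(t):
    # hypothetical design density: standard normal
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def trap(f, a=-1.0, c=1.0, n=20000):
    # trapezoidal rule on the kernel support [-1, 1]
    h = (c - a) / n
    return h * (0.5 * (f(a) + f(c)) + sum(f(a + i * h) for i in range(1, n)))

x = 0.3
H = trap(lambda w: K(w) ** 2)    # int K(w)^2 dw, equal to 3/5 in closed form
limit = g(x) * H                 # the term retained in Eq. (5.7)
errs = [abs(trap(lambda w: K(w) ** 2 * g(x + b * w)) - limit) for b in (0.4, 0.1)]
print(H, errs)                   # the discrepancy shrinks with b
```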

$\begin{array}{l}\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{1}{{\left(MN\right)}^{2}}\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[\frac{1}{mnb}\underset{i=1}{\overset{n}{\sum }}\underset{j=1}{\overset{m}{\sum }}K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]\right]\end{array}$ (5.8)

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]=\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}\text{Var}\left\{K\left(\frac{{X}_{ij}-{x}_{ij}}{b}\right)\left[m\left({X}_{ij}\right)-m\left({x}_{ij}\right)\right]\right\}$ (5.9)

Hence

$\begin{array}{l}\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(\frac{X-x}{b}\right)}^{2}{\left[m\left(X\right)-m\left(x\right)\right]}^{2}g\left(X\right)\text{d}X\end{array}$ (5.10)

where $X=bw+x$ so that $\text{d}X=b\text{d}w$ .

Changing variables and applying Taylor’s series expansion then

$\begin{array}{l}\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\\ =\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(w\right)}^{2}{\left[m\left(x+bw\right)-m\left(x\right)\right]}^{2}g\left(x+bw\right)\text{d}w\end{array}$ (5.11)

$=\frac{{\left(MN-mn\right)}^{2}}{mn{b}^{2}{\left(MN\right)}^{2}}{\int }^{\text{​}}K{\left(w\right)}^{2}{\left[m\left(x\right)+{m}^{\prime }\left(x\right)bw+\cdots -m\left(x\right)\right]}^{2}\left(g\left(x\right)+{g}^{\prime }\left(x\right)bw\right)\text{d}w$ (5.12)

which simplifies to

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]=O\left[\frac{{\left(MN-mn\right)}^{2}{b}^{2}}{mnb}\right]$ (5.13)

For large samples, that is, as $n\to N$ and $m\to M$ , and for $b\to 0$ such that $mnb\to \infty$ , the variance in Equation (5.13) asymptotically tends to zero, that is,

$\text{Var}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)\right]\to 0$

$\text{Var}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{{\left(MN-mn\right)}^{2}}{mnb{\left(MN\right)}^{2}}\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\text{Var}\left[m\left({x}_{ij}\right)+\frac{{\stackrel{^}{m}}_{1}\left({x}_{ij}\right)+{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)}{\stackrel{^}{g}\left({x}_{ij}\right)}\right]$ (5.14)

On simplification,

$\text{Var}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{{\left(MN-mn\right)}^{2}}{mnb{\left(MN\right)}^{2}{\left[\stackrel{^}{g}\left({x}_{ij}\right)\right]}^{2}}\text{Var}\left\{\underset{i=n+1}{\overset{N}{\sum }}\underset{j=m+1}{\overset{M}{\sum }}\left[{\stackrel{^}{m}}_{2}\left({x}_{ij}\right)\right]\right\}$ (5.15)

Substituting Equation (5.7) into Equation (5.15) yields the following:

$\text{Var}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=\frac{1}{{\left(MN\right)}^{2}}\left\{\frac{{\left(MN-mn\right)}^{2}{\int }^{\text{​}}K{\left(w\right)}^{2}{\sigma }_{\left({x}_{ij}\right)}^{2}\text{d}w}{mnb\left(g\left({x}_{ij}\right)\right)}+O\left[\frac{{\left(MN-mn\right)}^{2}}{mnb}+\frac{1}{mnb}\right]\right\}$ (5.16)

$=\frac{1}{{\left(MN\right)}^{2}}\left\{\frac{{\left(MN-mn\right)}^{2}H\left(w\right){\sigma }_{\left({x}_{ij}\right)}^{2}}{mnb\left(g\left({x}_{ij}\right)\right)}+O\left[\frac{{\left(MN-mn\right)}^{2}}{mnb}+\frac{1}{mnb}\right]\right\}$ (5.17)

where, $H\left(w\right)={\int }^{\text{​}}K{\left(w\right)}^{2}\text{d}w$

It is notable that the variance term still depends on the marginal density function, $g\left({x}_{ij}\right)$ of the auxiliary variables ${X}_{ij}$ . It can also be observed that the variance is inversely related to the smoothing parameter b. This implies that an increase in b results in a smaller variance. However, increasing the bandwidth would give a larger bias. Therefore there is a trade-off between the bias and variance of the estimated population mean. A bandwidth that provides a compromise between the two measures would therefore be desirable.
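The trade-off described above can be made concrete. Collecting everything that does not depend on $b$ into constants, the leading MSE terms take the form $A/b$ (variance, as in Equation (5.17)) plus $B{b}^{4}$ (squared bias, from Equation (4.27)), and the minimising bandwidth is ${b}^{*}={\left(A/4B\right)}^{1/5}$. The constants $A$ and $B$ below are hypothetical:

```python
# Bias-variance trade-off in the bandwidth: MSE(b) ~ A/b + B*b^4,
# with A (variance scale) and B (squared-bias scale) hypothetical.
A, B = 2.0e-3, 5.0e-2

def mse(b):
    return A / b + B * b ** 4

# setting d/db (A/b + B*b^4) = 0 gives b* = (A/(4B))^(1/5)
b_star = (A / (4.0 * B)) ** 0.2
# confirm by grid search over a fine bandwidth grid
b_grid = min((i / 10000.0 for i in range(1, 20000)), key=mse)
print(b_star, b_grid)
```

The analytic minimiser and the grid search agree, showing that an intermediate bandwidth balances the two error sources.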

6. Mean Squared Error (MSE) of the Finite Population Mean Estimator $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$

The MSE of $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$ combines the bias and the variance terms of this estimator that is,

$MSE\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=E{\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}-\stackrel{¯}{\stackrel{¯}{Y}}\right)}^{2}$ (6.0)

$MSE\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=E{\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}-E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]+E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]-\stackrel{¯}{\stackrel{¯}{Y}}\right)}^{2}$ (6.1)

Expanding Equation (6.1) gives:

$\begin{array}{c}MSE\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)=E{\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}-E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]\right)}^{2}+E{\left(E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]-\stackrel{¯}{\stackrel{¯}{Y}}\right)}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}+2E\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}-E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]\right)\left(E\left[\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right]-\stackrel{¯}{\stackrel{¯}{Y}}\right)\end{array}$ (6.2)

$=\text{Var}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)+{\text{Bias}}^{2}\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}\right)+0$ (6.3)
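The decomposition in Equation (6.3) is an algebraic identity and holds exactly for any collection of estimator realisations; the values below are hypothetical:

```python
# Eq. (6.3) as an identity: mean squared deviation from the target equals
# the (population) variance plus the squared bias.
estimates = [9.8, 10.4, 10.1, 9.9, 10.3]   # hypothetical estimator realisations
theta = 10.0                               # true value being estimated

mean = sum(estimates) / len(estimates)
mse = sum((e - theta) ** 2 for e in estimates) / len(estimates)
var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
bias_sq = (mean - theta) ** 2
print(mse, var + bias_sq)   # the two agree up to rounding error
```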

Combining the bias in Equation (4.27) and the variance in Equation (5.17), and conditioning on the observed values ${x}_{ij}$ of the auxiliary variables ${X}_{ij}$ , then

$\begin{array}{l}MSE\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}/{X}_{ij}={x}_{ij}\right)\\ =\frac{1}{{\left(MN\right)}^{2}}\left\{\frac{{\left(MN-mn\right)}^{2}H\left(w\right){\sigma }_{\left({x}_{ij}\right)}^{2}}{mnb\left(g\left({x}_{ij}\right)\right)}+O\left(\frac{1}{MN}\left\{\frac{{\left(MN-mn\right)}^{2}}{mnb}+\frac{1}{mnb}\right\}\right)\right\}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{ }+\frac{1}{MN}\left\{\left(\frac{MN-mn}{mn}\right){b}^{2}{d}_{k}C\left(x\right)+O\left({b}^{2}\right)\right\}\end{array}$ (6.4)

$\begin{array}{l}MSE\left(\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}/{X}_{ij}={x}_{ij}\right)\\ =\frac{1}{{\left(MN\right)}^{2}}\left\{\frac{{\left(MN-mn\right)}^{2}H\left(w\right){\sigma }_{\left({x}_{ij}\right)}^{2}}{mnb\left(g\left({x}_{ij}\right)\right)}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+\left[\frac{{\left(MN-mn\right)}^{2}}{4{\left(mn\right)}^{2}{\left(MN\right)}^{2}}{b}^{4}{d}_{k}^{2}{\left[{m}^{″}\left({x}_{ij}\right)+\frac{2{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)}{g\left({x}_{ij}\right)}\right]}^{2}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }+O\left({b}^{4}\right)+\frac{1}{MN}\left(O\left\{\left(\frac{MN-mn}{mnb}\right)+\frac{1}{mnb}\right\}\right)\right]\right\}\end{array}$ (6.5)

where $H\left(w\right)={\int }^{\text{​}}K{\left(w\right)}^{2}\text{d}w$ , ${d}_{k}={\int }^{\text{​}}{w}^{2}K\left(w\right)\text{d}w$ and $C\left(x\right)=\frac{1}{2}{m}^{″}\left({x}_{ij}\right)+{\left[g\left({x}_{ij}\right)\right]}^{-1}{g}^{\prime }\left({x}_{ij}\right){m}^{\prime }\left({x}_{ij}\right)$ , as defined earlier in the derivations.

7. Conclusion

If the sample size is large enough, that is, as $n\to N$ and $m\to M$ , the MSE of $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$ in Equation (6.5) due to the kernel tends to zero for a sufficiently small bandwidth b. The estimator $\stackrel{^}{\stackrel{¯}{\stackrel{¯}{Y}}}$ is therefore asymptotically consistent since its MSE converges to zero.
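The consistency argument can be illustrated with a small deterministic sketch: with noiseless data, a Nadaraya-Watson estimate at a point approaches $m\left(x\right)$ as the sample grows and the bandwidth shrinks, which isolates the bias behaviour discussed above. The regression function and equally spaced design below are hypothetical choices:

```python
import math

# Nadaraya-Watson estimate at a point; with noiseless responses the error
# at an interior point is pure kernel-smoothing bias, which vanishes as
# the sample size grows and the bandwidth b shrinks.
def nw(x, xs, ys, b):
    # Epanechnikov kernel weights
    k = [max(0.0, 0.75 * (1.0 - ((xi - x) / b) ** 2)) for xi in xs]
    return sum(ki * yi for ki, yi in zip(k, ys)) / sum(k)

m = lambda t: math.sin(2.0 * t)   # hypothetical regression function
x = 0.5
errors = []
for n, b in [(50, 0.5), (500, 0.2), (5000, 0.05)]:
    xs = [i / n for i in range(n + 1)]   # equally spaced design on [0, 1]
    ys = [m(t) for t in xs]
    errors.append(abs(nw(x, xs, ys, b) - m(x)))
print(errors)   # errors decrease down the list
```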

Cite this paper

Bii, N.K., Onyango, C.O. and Odhiambo, J. (2017) Estimating a Finite Population Mean under Random Non-Response in Two Stage Cluster Sampling with Replacement. Open Journal of Statistics, 7, 834-848. https://doi.org/10.4236/ojs.2017.75059

References

1. Singh, S. and Horn, S. (2000) Compromised Imputation in Survey Sampling. Metrika, 51, 267-276. https://doi.org/10.1007/s001840000054

2. Lee, H., Rancourt, E. and Sarndal, C. (2002) Variance Estimation from Survey Data under Single Imputation. Survey Nonresponse, 315-328.

3. Bethlehem, J.G. (2012) Using Response Probabilities for Assessing Representativity. Statistics Netherlands, International Statistical Review, 80, 382-399.

4. Ouma, C. and Wafula, C. (2005) Bootstrap Confidence Intervals for Model-Based Surveys. East African Journal of Statistics, 1, 84-90.

5. Onyango, C.O., Otieno, R.O. and Orwa, G.O. (2010) Generalized Model Based Confidence Intervals in Two Stage Cluster Sampling. Pakistan Journal of Statistics and Operation Research, 6. https://doi.org/10.18187/pjsor.v6i2.128

6. Dorfman, A.H. (1992) Nonparametric Regression for Estimating Totals in Finite Populations. In: Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA, 622-625.

7. Nadaraya, E.A. (1964) On Estimating Regression. Theory of Probability & Its Applications, 9, 141-142. https://doi.org/10.1137/1109020

8. Watson, G.S. (1964) Smooth Regression Analysis. Sankhya: The Indian Journal of Statistics, Series A, 359-372.