In this article we study an estimation method for the nonparametric regression measurement error model based on validation data. The estimation procedure combines orthogonal series estimation with a truncated series approximation and requires neither a specified structural equation nor a distributional assumption for the measurement error. Convergence rates of the proposed estimator are derived. An example and simulation results show that the method is robust against misspecification of the measurement error model.
Let Y be a scalar response variable and X be an explanatory variable in regression. We consider the nonparametric regression model
Y = g ( X ) + ε (1)
where g ( ⋅ ) is an unknown nonparametric regression function and ε is a noise variable; given X, the errors ε = Y − g ( X ) are assumed to be independent and identically distributed. We consider model (1) when the explanatory variable X is measured with error while Y is measured exactly. That is, instead of the true X, a surrogate variable W is observed. Throughout we assume
E [ ε | W ] = 0 with probability 1 (2)
which is always satisfied if, for example, W is a function of X and some independent noise (see, e.g., [ ]).
The nonparametric regression model (1) in the presence of errors in the covariates has attracted considerable attention in the literature and is by now well understood; see Carroll et al. [ ].
We consider settings where validation data are available for relating X and W. To be specific, we assume that independent validation data ( W_j , X_j ) , N + 1 ≤ j ≤ N + n , are available in addition to the independent primary data { ( Y_i , W_i ) }_{i=1}^{N}. Several approaches to statistical inference based on surrogate data and a validation sample have recently become available (see, for example, [ ]).
In this paper, without specifying any structural equations, an orthogonal series method is proposed to estimate g with the help of validation data. As explained in Section 2, we estimate g by solving the following Fredholm equation of the first kind,
T g = m (3)
Here, we propose an orthogonal series estimator of T based on the validation data. Using a similar approach, we estimate m from the primary data set. An estimator of g is then obtained by the Tikhonov regularization method.
This paper is organized as follows. In Section 2, we describe the orthogonal series estimation method. In Section 3, we state the convergence rates of the proposed estimator. Simulation results are reported in Section 4, and a brief discussion is given in Section 5. Proofs of the theorems are presented in the Appendix.
Recall model (1) and the assumptions below it. Assume that, in addition to the primary data set consisting of N independent and identically distributed observations { ( Y_i , W_i ) }_{i=1}^{N} from model (1), validation data consisting of n independent and identically distributed observations { ( X_j , W_j ) }_{j=N+1}^{N+n} are available. Furthermore, we suppose that X and W are both real-valued random variables. The extension to random vectors complicates the notation but does not affect the main ideas and results. Without loss of generality, let the supports of X and W both be contained in [ 0,1 ] (otherwise, one can carry out monotone transformations of X and W).
Let f X W and f W denote respectively the joint density of ( X , W ) and marginal density of W. Then, according to (2), we have
E ( Y | W = w ) = E [ g ( X ) | W = w ] = ∫ g ( x ) [ f X W ( x , w ) / f W ( w ) ] d x (4)
Let m ( w ) = E ( Y | W = w ) f W ( w ) and
L 2 ( [ 0 , 1 ] ) = { φ : [ 0 , 1 ] → R , s .t . ‖ φ ‖ = ( ∫ | φ ( x ) | 2 d x ) 1 / 2 < ∞ }
Define the operator T : L 2 ( [ 0,1 ] ) → L 2 ( [ 0,1 ] ) as
( T φ ) ( w ) = ∫ φ ( x ) f X W ( x , w ) d x
So that Equation (4) is equivalent to the operator equation
m ( w ) = ( T g ) ( w ) (5)
According to Equation (5), the function g is the solution of a Fredholm integral equation of the first kind. This inverse problem is known to be ill-posed and requires a regularization method. A variety of regularization schemes are available in the literature (see, e.g., [ ]); here we use the Tikhonov regularization scheme and define
g α = arg min g [ ‖ T g − m ‖ 2 + α ‖ g ‖ 2 ] (6)
where α > 0 is the regularization parameter controlling the penalization term.
We define the adjoint operator T ∗ of T by
( T ∗ ψ ) ( x ) = ∫ ψ ( w ) f X W ( x , w ) d w
where ψ ∈ L 2 ( [ 0,1 ] ) . The regularized solution of (6) can then be written equivalently as
g α = ( α I + T ∗ T ) − 1 T ∗ m (7)
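For completeness, (7) follows from (6) by the standard first-order condition for the Tikhonov functional; the following one-line verification is our sketch, not part of the original text:

$$
\frac{\partial}{\partial g}\Big[\,\|Tg-m\|^{2}+\alpha\|g\|^{2}\Big]
= 2\,T^{*}(Tg-m)+2\alpha g = 0
\quad\Longrightarrow\quad
(\alpha I+T^{*}T)\,g_{\alpha}=T^{*}m .
$$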
In order to compute the solution (7), we need to estimate T , T ∗ and m . In this paper, we consider the orthogonal series method. Under the regularity conditions given in Section 3, the density function f X W ( x , w ) and the function m ( w ) can be approximated to any desired accuracy by truncated orthogonal series,
f X W K ( x , w ) = ∑ k = 1 K ∑ l = 1 K d k l ϕ k ( x ) ϕ l ( w ) and m K ( w ) = ∑ k = 1 K m k ϕ k ( w )
where
d k l = ∫ ∫ f X W ( x , w ) ϕ k ( x ) ϕ l ( w ) d x d w and m k = ∫ m ( w ) ϕ k ( w ) d w
Here, { ϕ_k } is an orthonormal basis of L 2 ( [ 0,1 ] ) , which may be trigonometric, polynomial, spline, wavelet, and so on. A discussion of different bases and their properties can be found in the literature (see, e.g., [ ]). In this paper we use the orthonormal shifted Legendre polynomials on [ 0,1 ] ,
ϕ_k ( x ) = ( √(2 k + 1) / k ! ) d^k/dx^k [ ( x^2 − x )^k ] (8)
The integer K is a truncation point which is the main smoothing parameter in the approximating series, and d k l and m k represent the generalized Fourier coefficients of f X W and m, respectively.
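For illustration, the following minimal sketch evaluates the basis in (8), assuming that (8) denotes the orthonormal shifted Legendre polynomials ϕ_k ( x ) = √(2k+1) P_k ( 2x − 1 ) with P_k the standard Legendre polynomial; the function name and 0-based indexing are ours, not the paper's.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi(k, x):
    """Orthonormal shifted Legendre basis on [0, 1]:
    phi_k(x) = sqrt(2k + 1) * P_k(2x - 1), k = 0, 1, 2, ...
    (0-based here; shift the index to match the paper's k = 1, 2, ... as needed)."""
    c = np.zeros(k + 1)
    c[k] = 1.0                                     # select the k-th Legendre polynomial P_k
    return np.sqrt(2 * k + 1) * leg.legval(2 * np.asarray(x, float) - 1.0, c)

# crude orthonormality check by Monte Carlo integration over [0, 1]
x = np.random.default_rng(0).uniform(size=200_000)
print(np.mean(phi(2, x) * phi(2, x)))              # approximately 1
print(np.mean(phi(2, x) * phi(3, x)))              # approximately 0
```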
Note that d k l = E [ ϕ k ( X ) ϕ l ( W ) ] and m k = E [ Y ϕ k ( W ) ] . Intuitively, we can obtain the estimators of d k l , f X W ( x , w ) , m k and m ( w ) by
d ^ k l = 1 n ∑ j = N + 1 N + n ϕ k ( X j ) ϕ l ( W j ) , f ^ X W ( x , w ) = ∑ k = 1 K ∑ l = 1 K d ^ k l ϕ k ( x ) ϕ l ( w )
m ^ k = 1 N ∑ i = 1 N Y i ϕ k ( W i ) and m ^ ( w ) = ∑ k = 1 K m ^ k ϕ k ( w )
respectively. The operators T and T ∗ can then be consistently estimated by
( T ^ φ ) ( w ) = ∫ φ ( x ) f ^ X W ( x , w ) d x and ( T ^ ∗ ψ ) ( x ) = ∫ ψ ( w ) f ^ X W ( x , w ) d w
We conclude that the estimator of g ( x ) is given by
g ^ α = ( α I + T ^ ∗ T ^ ) − 1 T ^ ∗ m ^ (9)
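In the truncated basis, (9) has a simple finite-dimensional form: the operators T ^ and T ^ ∗ act on coefficient vectors through the K × K matrix D ^ = ( d ^ k l ) and its transpose, and the Tikhonov step becomes a ridge-type linear solve. The following is a minimal sketch of this coefficient-space implementation; the function names and the Legendre basis choice are ours.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def basis(K, x):
    """K x len(x) matrix whose k-th row is the orthonormal shifted Legendre
    function of degree k evaluated at points x in [0, 1]."""
    x = np.asarray(x, float)
    B = np.empty((K, x.size))
    for k in range(K):
        c = np.zeros(k + 1); c[k] = 1.0
        B[k] = np.sqrt(2 * k + 1) * leg.legval(2 * x - 1.0, c)
    return B

def fit_g(Y, W, Xv, Wv, K, alpha):
    """Orthogonal-series / Tikhonov estimator sketched from (9).
    (Y, W): primary data; (Xv, Wv): validation data.  Returns a callable estimate of g."""
    Y = np.asarray(Y, float)
    D = basis(K, Xv) @ basis(K, Wv).T / len(Xv)    # D[k, l] = mean of phi_k(X_j) phi_l(W_j)
    m_hat = basis(K, W) @ Y / len(Y)               # m_hat[k] = mean of Y_i phi_k(W_i)
    # (alpha I + T*T)^{-1} T* m, written in coefficient space
    coef = np.linalg.solve(alpha * np.eye(K) + D @ D.T, D @ m_hat)
    return lambda x: basis(K, x).T @ coef
```

In this representation, the ill-posedness of Equation (3) appears as near-singularity of the matrix in the linear system as K grows, which the ridge term α I stabilizes.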
The main objective of this section is to derive the statistical properties of the estimator proposed in Section 2.2. For this purpose, we assume:
Assumption 1. 1) The support of ( X , W ) is contained in [ 0,1 ]² ; 2) The joint density of ( X , W ) is square integrable with respect to the Lebesgue measure on [ 0,1 ]² .
This is a sufficient condition for T to be a Hilbert-Schmidt operator and therefore compact (see [ ]). Let ( λ_k , φ_k , ψ_k )_{k ≥ 0} denote the singular system of T, with singular values λ_k and orthonormal families { φ_k } and { ψ_k } , so that
T φ k = λ k ψ k , T * ψ k = λ k φ k ; T * T φ k = λ k 2 φ k , T T * ψ k = λ k 2 ψ k , for k ≥ 0
For β > 0 , we define the β-regularity space Φ_β as
Φ_β = { φ ∈ L 2 ( [ 0,1 ] ) such that ∑_{k ≥ 0} 〈 φ , φ_k 〉^2 / λ_k^{2 β} < + ∞ }
Here and below, we denote by 〈 ⋅ , ⋅ 〉 the scalar product in L 2 ( [ 0,1 ] ) .
Assumption 2. We have g ∈ Φ β for some β > 0 .
We then obtain the following result (see [ ]).
Proposition 3.1. Suppose Assumptions 1 and 2 hold, then we have ‖ g − g α ‖ 2 = O ( α β ∧ 2 ) , where β ∧ 2 = min { β , 2 } .
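The rate in Proposition 3.1 can be read off the singular system; the following spectral bound is a standard argument that we sketch here for the reader's convenience (it is not reproduced from the paper's appendix):

$$
g-g_{\alpha}=\sum_{k\ge 0}\frac{\alpha}{\alpha+\lambda_{k}^{2}}\,\langle g,\varphi_{k}\rangle\,\varphi_{k},
\qquad
\|g-g_{\alpha}\|^{2}
\le\Big(\sup_{k}\frac{\alpha^{2}\lambda_{k}^{2\beta}}{(\alpha+\lambda_{k}^{2})^{2}}\Big)
\sum_{k\ge 0}\frac{\langle g,\varphi_{k}\rangle^{2}}{\lambda_{k}^{2\beta}}
=O\big(\alpha^{\beta\wedge 2}\big),
$$

since the supremum is of order α^β when β ≤ 2 and of order α² when β > 2, while the series is finite by Assumption 2.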
In order to obtain the rate of convergence for ‖ g ^ α − g ‖ 2 , we impose the following additional conditions:
Assumption 3. 1) The joint density f X W is r-times continuously differentiable on [ 0,1 ]² ; 2) The function m ( ⋅ ) is s-times continuously differentiable on [ 0,1 ] .
Assumption 4. The function E ( Y 2 | W = w ) is bounded uniformly on [ 0,1 ] .
Assumption 5. 1) lim n / N = μ ∈ [ 0 , ∞ ) ; 2) α → 0 , K = K ( N , n ) → ∞ , K / N → 0 , K 2 / n → 0 as n → ∞ , N → ∞ .
Theorem 3.1. Suppose Assumptions 1 - 5 hold. Let γ = min { r , s } , then we have
‖ g ^ α − g ‖^2 = O_P [ α^{−1} ( K / N + K^{−2 γ} + K^2 / n ) + α^{β ∧ 2} ] (10)
In (10), the term K^{−2 γ} arises from the bias caused by truncating the series approximations of f X W and m ; this truncation bias decreases as γ increases. The terms K / N and K^2 / n are induced, respectively, by the random sampling error of the primary (surrogate) data in the estimates m ^ k and by the random sampling error of the validation data in the estimates d ^ k l . From Theorem 3.1, the following corollary is immediate.
Corollary 3.1. Suppose the assumptions of Theorem 3.1 are satisfied. Let K = O ( n^{1 / ( 2 γ + 2 )} ) and α = O ( n^{−γ / [ ( γ + 1 ) ( β ∧ 2 + 1 ) ]} ) ; then we have
‖ g ^ α − g ‖^2 = O_P ( n^{−κ ( β ∧ 2 ) / ( β ∧ 2 + 1 )} )
where κ = γ / ( γ + 1 ) . For example, when γ = 2 and β ≥ 2 , the resulting rate is ‖ g ^ α − g ‖^2 = O_P ( n^{−4/9} ) .
The proofs of all the results are reported in the Appendix.
In this section, we report simulation studies of the finite-sample performance of the proposed estimator. First, for comparison, we consider the standard Nadaraya-Watson estimator based on the primary data set { ( Y_i , W_i ) }_{i=1}^{N} (denoted g ^ N ); this naive estimator ignores the measurement error and regresses Y directly on the surrogate W. Second, the performance of an estimator g_est is assessed by the square root of the average squared errors (RASE),
RASE = { 1 M ∑ s = 1 M [ g e s t ( u s ) − g ( u s ) ] 2 } 1 / 2
where u s , s = 1 , ⋯ , M , are grid points at which g e s t ( u s ) is evaluated.
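A direct computation of the RASE criterion above might look as follows; this is an illustrative helper (the fit_g call in the comment refers to the sketch in Section 2, and the evaluation grid is the user's choice).

```python
import numpy as np

def rase(g_est, g_true, grid):
    """Root average squared error of an estimated curve over grid points u_1, ..., u_M."""
    grid = np.asarray(grid, float)
    return np.sqrt(np.mean((g_est(grid) - g_true(grid)) ** 2))

# e.g. rase(fit_g(Y, W, Xv, Wv, K=6, alpha=1e-3), g, np.linspace(0, 1, 201))
```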
We considered model (1) with the regression function being
1) g ( x ) = ϕ 0,1.5 ( 4 x ) + ϕ 1,2 ( 4 x ) + ϕ 2,5 ( 4 x ) , ε ~ N ( 0,0.2 ) ,
2) g ( x ) = 5 s i n ( 2 x ) e x p ( − 16 x 2 / 50 ) , ε ~ N ( 0,0.2 ) ,
where ϕ_{μ , σ} is the density of a Normal ( μ , σ^2 ) variable. To perform the simulation, we generate W from f_W and δ from f_δ , and set X = W + δ . The densities f_W and f_δ , chosen in the beta family, are
f W ( w ) = [ ( 1 − w^2 / 4 )^2 / B ( 1 / 2 , 2 ) ] I ( w ∈ [ − 2 , 2 ] )
f δ ( u ) = [ ( 1 − u^2 )^{ρ_δ} / B ( 1 / 2 , ρ_δ + 1 ) ] I ( u ∈ [ − 1 , 1 ] )
where we chose ρ_δ = 1 , ρ_δ = 3 and ρ_δ = 5 (the greater the value of ρ_δ , the smaller the variance of δ ). Simulations were run with validation and primary data sizes ( n , N ) ranging from ( 20 , 60 ) to ( 50 , 250 ) , according to the ratios κ = N / n = 3 and κ = N / n = 5 , respectively. For each configuration of ( n , N ) , 500 simulated data sets were generated.
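For concreteness, one way to generate such a data set is by accept-reject sampling from the beta-family densities. The sketch below is ours rather than the authors' code; it treats 0.2 as the error variance and omits the rescaling of ( X , W ) to [ 0,1 ] assumed by the theory.

```python
import numpy as np
rng = np.random.default_rng(0)

def beta_family(size, a, rho):
    """Draw from the density proportional to (1 - (u/a)^2)^rho on [-a, a]
    by accept-reject with a uniform proposal."""
    out = np.empty(0)
    while out.size < size:
        u = rng.uniform(-a, a, size=2 * size)
        accept = rng.uniform(size=u.size) < (1.0 - (u / a) ** 2) ** rho
        out = np.concatenate([out, u[accept]])
    return out[:size]

def one_dataset(N, n, rho_delta, g):
    """Primary data (Y, W) of size N and validation data (X, W) of size n."""
    W = beta_family(N + n, a=2.0, rho=2)           # surrogate W drawn from f_W
    delta = beta_family(N + n, a=1.0, rho=rho_delta)
    X = W + delta                                  # X = W + delta, as in the design above
    Y = g(X[:N]) + rng.normal(0.0, np.sqrt(0.2), size=N)   # epsilon ~ N(0, 0.2), variance 0.2 assumed
    return (Y, W[:N]), (X[N:], W[N:])
```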
To implement our method (9), the regularization parameter α and the truncation parameter K must be chosen. Here, we select α and K by minimizing the following two-dimensional cross-validation criterion
CV ( α , K ) = ∑ i = 1 N { Y i − ∑ k = 1 K g ^ k ( − i ) ϕ k ( W i ) } 2
where g ^ k ( − i ) are the generalized Fourier coefficients of the solution (9) computed after deleting the i-th primary observation ( Y_i , W_i ) . For the naive estimator g ^ N , we used the standard normal kernel, and the bandwidth was selected by the leave-one-out cross-validation approach. In all graphs, to illustrate the performance of an estimator, we show the estimated curves corresponding to the first (Q1), second (Q2) and third (Q3) quartiles of the ordered RASEs. The target curve is always represented by a solid curve.
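The two-dimensional search can be carried out by brute force. The following leave-one-out sketch implements the criterion CV ( α , K ) literally and is intended only to fix ideas; it relies on the fit_g helper from the sketch in Section 2, and the grids for α and K are illustrative.

```python
import numpy as np
from itertools import product

def cv_select(Y, W, Xv, Wv, alphas, Ks):
    """Grid search minimizing CV(alpha, K) = sum_i {Y_i - g_hat^(-i)(W_i)}^2,
    where g_hat^(-i) is the estimator (9) refitted without the i-th primary point."""
    Y, W = np.asarray(Y, float), np.asarray(W, float)
    best, best_score = None, np.inf
    for alpha, K in product(alphas, Ks):
        score = 0.0
        for i in range(len(Y)):
            keep = np.arange(len(Y)) != i          # delete the i-th primary observation
            g_i = fit_g(Y[keep], W[keep], Xv, Wv, K, alpha)
            score += (Y[i] - g_i(np.array([W[i]]))[0]) ** 2
        if score < best_score:
            best, best_score = (alpha, K), score
    return best

# e.g. alpha_hat, K_hat = cv_select(Y, W, Xv, Wv, alphas=np.logspace(-4, -1, 8), Ks=range(3, 9))
```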
| κ | ( n , N ) | g ^ α (ρ_δ = 1) | g ^ N (ρ_δ = 1) | g ^ α (ρ_δ = 3) | g ^ N (ρ_δ = 3) | g ^ α (ρ_δ = 5) | g ^ N (ρ_δ = 5) |
|---|---|---|---|---|---|---|---|
| κ = 3 | (20, 60) | 1.2136 | 2.0637 | 1.0597 | 1.9101 | 1.0537 | 1.7948 |
|  | (30, 90) | 0.9319 | 1.7127 | 0.8452 | 1.5598 | 0.8012 | 1.4746 |
|  | (50, 150) | 0.7002 | 1.3890 | 0.6397 | 1.3086 | 0.5590 | 1.2341 |
| κ = 5 | (20, 100) | 0.8332 | 1.5661 | 0.7981 | 1.4378 | 0.7266 | 1.3667 |
|  | (30, 150) | 0.7511 | 1.3930 | 0.6546 | 1.3074 | 0.5635 | 1.2514 |
|  | (50, 250) | 0.5373 | 1.1940 | 0.4824 | 1.0696 | 0.3993 | 1.0033 |
| κ | ( n , N ) | g ^ α (ρ_δ = 1) | g ^ N (ρ_δ = 1) | g ^ α (ρ_δ = 3) | g ^ N (ρ_δ = 3) | g ^ α (ρ_δ = 5) | g ^ N (ρ_δ = 5) |
|---|---|---|---|---|---|---|---|
| κ = 3 | (20, 60) | 9.1934 | 22.3251 | 8.5573 | 21.9729 | 7.2736 | 21.7931 |
|  | (30, 90) | 9.0040 | 20.7143 | 7.4141 | 20.0651 | 5.2429 | 19.9803 |
|  | (50, 150) | 8.5422 | 19.4815 | 6.5897 | 18.6286 | 3.7987 | 18.5734 |
| κ = 5 | (20, 100) | 9.1158 | 20.3010 | 6.8672 | 19.7296 | 4.4416 | 19.5391 |
|  | (30, 150) | 8.9352 | 19.3803 | 6.9529 | 18.6887 | 4.0148 | 18.5831 |
|  | (50, 250) | 7.7743 | 18.5282 | 5.3232 | 18.3610 | 2.5054 | 18.2475 |
From the two tables, it can be seen that the estimator g ^ α outperforms g ^ N in all cases. Also, the performance of g ^ α improves considerably (i.e., the corresponding RASEs decrease) as the sample sizes increase. For any nonparametric method in a measurement error regression problem, the quality of the estimator also depends on the discrepancy of the observed sample; that is, the performance of the estimator depends on the variance of the measurement error. Here, we compare the results for different values of ρ_δ . As expected, the RASEs of both estimators decrease as ρ_δ increases, that is, as the variance of the measurement error decreases.
In this paper, we have proposed a new method for estimating nonparametric regression models when the explanatory variable is measured with error, under the assumption that a proper validation data set is available. The validation data set allows us to estimate the joint density f X W of the true variable and the surrogate variable via an orthogonal series method. In practice, our proposed method can be extended to the multidimensional case in which X is a p-variate explanatory variable. When the dimension of X, and hence of W, is large, the curse of dimensionality may arise because of the multivariate density estimation of f X W . In this case, the exponential series estimator proposed in [ ] may be a useful alternative.
This work was supported by grant GJJ160927 and by the Natural Science Foundation of Jiangxi Province of China under grant number 20142BAB211018.
Proofs of Theorem 3.1 and Corollary 3.1:
We first present some lemmas that are needed to prove the main theorem.
Lemma 7.1. Suppose Assumptions 1 and 3(1) hold. Then:
1) ‖ T ^ − T ‖ H S 2 = O P ( K − 2 r + n − 1 K 2 ) ;
2) ‖ T ^ * − T * ‖ H S 2 = O P ( K − 2 r + n − 1 K 2 ) .
where ‖ ⋅ ‖ H S denotes the Hilbert-Schmidt norm, i.e.:
‖ T ^ − T ‖ H S 2 = ∫ ∫ [ f ^ X W ( x , w ) − f X W ( x , w ) ] 2 d x d w
Proof of Lemma 7.1. According to Lemma A1 of Wu [ ], we have
‖ f X W − f X W K ‖ 2 = O ( K − 2 r )
Note that the Legendre polynomials ϕ k in (8) are orthonormal and complete on L 2 ( [ 0,1 ] ) . Then
‖ f ^ X W − f X W K ‖ 2 = ∑ k = 1 K ∑ l = 1 K ( d ^ k l − d k l ) 2
Since E d ^ k l = d k l , we have
E [ ∑_{k=1}^{K} ∑_{l=1}^{K} ( d ^ k l − d k l )^2 ] = ∑_{k=1}^{K} ∑_{l=1}^{K} Var ( d ^ k l ) ≤ (1/n) ∑_{k=1}^{K} ∑_{l=1}^{K} E [ ϕ_k ( X )^2 ϕ_l ( W )^2 ] = O [ (1/n) ∑_{k=1}^{K} ∑_{l=1}^{K} ‖ ϕ_k ‖^2 ‖ ϕ_l ‖^2 ] = O ( K^2 / n )
where we have used the fact that f X W is uniformly bounded on [ 0,1 ] 2 .
By Chebyshev’s inequality, we then have ‖ f ^ X W − f X W K ‖^2 = O_P ( K^2 / n ) , and the desired result follows immediately.
Lemma 7.2. Suppose Assumptions 1, 3 and 4 hold. Let γ = m i n { r , s } , then
‖ m ^ − T ^ g ‖ 2 = O P ( N − 1 K + K − 2 γ + n − 1 K 2 )
Proof of Lemma 7.2. Note that T g = m . By the triangle inequality and Jensen's inequality, we have
‖ m ^ − T ^ g ‖ 2 ≤ 2 [ ‖ m ^ − m ‖ 2 + ‖ ( T − T ^ ) g ‖ 2 ]
Since g ∈ L 2 ( [ 0,1 ] ) , Lemma 7.1 gives ‖ ( T − T ^ ) g ‖^2 = O_P ( K^{−2 r} + n^{−1} K^2 ) . Following the proof of Lemma 7.1, under Assumptions 3(2) and 4 one can show that ‖ m ^ − m ‖^2 = O_P ( K^{−2 s} + N^{−1} K ) . The result of Lemma 7.2 then follows.
Proof of Theorem 3.1. Define A ^ α = ( α I + T ^ * T ^ ) − 1 and A α = ( α I + T * T ) − 1 . Notice that g α = A α T * T g , then we have
g ^ α − g = A ^ α ( T ^ * m ^ − T ^ * T ^ g ) + ( A ^ α T ^ * T ^ g − A α T * T g ) + ( g α − g )
The second term on the right-hand side can itself be decomposed into two components:
A ^ α T ^ * T ^ g − A α T * T g = A ^ α ( T ^ * T ^ − T * T ) g + ( A ^ α − A α ) T * T g
Actually, since in this case A ^ α = ( α I + T ^ * T ^ ) − 1 and A α = ( α I + T * T ) − 1 , the identity B − 1 − C − 1 = B − 1 ( C − B ) C − 1 gives:
A ^ α − A α = − A ^ α ( T ^ * T ^ − T * T ) A α
Thus,
A ^ α T ^ * T ^ g − A α T * T g = A ^ α ( T ^ * T ^ − T * T ) ( g − g α )
From the properties of norm, we have
‖ g ^ α − g ‖ 2 ≤ 3 [ ‖ A ^ α T ^ * ( m ^ − T ^ g ) ‖ 2 + ‖ A ^ α ( T ^ * T ^ − T * T ) ( g − g α ) ‖ 2 + ‖ g α − g ‖ 2 ]
Let us consider the first term. We have
‖ A ^ α T ^ * ( m ^ − T ^ g ) ‖ 2 ≤ ‖ A ^ α T ^ * ‖ 2 ‖ m ^ − T ^ g ‖ 2
The operator norm ‖ A ^ α T ^ * ‖ = ‖ ( α I + T ^ * T ^ )^{−1} T ^ * ‖ equals the largest singular value of A ^ α T ^ * . These singular values are of the form λ ^ k / ( α + λ ^ k^2 ) , where λ ^ k are the singular values of T ^ , and are therefore bounded by 1 / ( 2 √α ) , so that ‖ A ^ α T ^ * ‖^2 = O ( 1 / α ) . It then follows from Lemma 7.2 that
‖ A ^ α T ^ * ( m ^ − T ^ g ) ‖ 2 = O P [ α − 1 ( N − 1 K + K − 2 γ + n − 1 K 2 ) ] (11)
Next, we consider the term ‖ A ^ α ( T ^ * T ^ − T * T ) ( g − g α ) ‖ 2 . Note that
T ^ * T ^ − T * T = T ^ * ( T ^ − T ) + ( T ^ * − T * ) T
Then
‖ A ^ α ( T ^ * T ^ − T * T ) ( g − g α ) ‖ 2 ≤ 2 [ ‖ A ^ α T ^ * ‖ 2 ‖ T ^ − T ‖ 2 ‖ g − g α ‖ 2 + ‖ A ^ α ‖ 2 ‖ T ^ * − T * ‖ 2 ‖ T ( g − g α ) ‖ 2 ]
We have ‖ A ^ α T ^ * ‖^2 = O_P ( 1 / α ) and ‖ A ^ α ‖^2 = O_P ( 1 / α^2 ) (see [ ]), while Lemma 7.1 controls ‖ T ^ − T ‖^2 and ‖ T ^ * − T * ‖^2 .
By Proposition 3.1:
‖ g α − g ‖ 2 = O ( α β ∧ 2 ) (12)
The term T ( g − g α ) , which is identical to α ( α I + T T * )^{−1} T g , is the regularization bias associated with T g and is of order O ( α^{( β + 1 ) ∧ 2} ) .
Therefore, we have
‖ A ^ α ( T ^ * T ^ − T * T ) ( g − g α ) ‖ 2 = O P [ α ( β − 1 ) ∧ 0 ( K − 2 r + n − 1 K 2 ) ] (13)
Combining (11), (12) and (13) gives the desired result of Theorem 3.1.
Proof of Corollary 3.1. By Theorem 3.1, the proof of Corollary 3.1 is straightforward and is omitted.