Journal of Signal and Information Processing, 2011, 2, 152-158
doi:10.4236/jsip.2011.23019 Published Online August 2011 (http://www.SciRP.org/journal/jsip)
Copyright © 2011 SciRes. JSIP

RLS Wiener Predictor with Uncertain Observations in Linear Discrete-Time Stochastic Systems

Seiichi Nakamori¹, Raquel Caballero-Águila², Aurora Hermoso-Carazo³, Josefa Linares-Pérez³

¹Department of Technical Education, Kagoshima University, Kagoshima, Japan; ²Departamento de Estadística e I.O., Universidad de Jaén, Paraje Las Lagunillas, Jaén, Spain; ³Departamento de Estadística e I.O., Universidad de Granada, Campus Fuentenueva S/N, Granada, Spain.
Email: nakamori@edu.kagoshima-u.ac.jp, raguila@ujaen.es, {ahermoso, jlinares}@ugr.es

Received June 27th, 2011; revised July 29th, 2011; accepted August 8th, 2011.

ABSTRACT

This paper proposes recursive least-squares (RLS) l-step ahead predictor and filtering algorithms with uncertain observations in linear discrete-time stochastic systems. The observation equation is given by y(k) = γ(k)z(k) + v(k), z(k) = Hx(k), where γ(k) is a binary switching sequence with conditional probability. The estimators require the information of the system state-transition matrix Φ, the observation matrix H, the variance K(k,k) of the state vector x(k), the variance R(k) of the observation noise, the probability p(k) = P(γ(k) = 1) that the signal exists in the uncertain observation equation and the (2,2) element P_{2,2}(k|j) of the conditional probability of γ(k), given γ(j).

Keywords: Estimation Theory, Synthesis of Stochastic Systems, RLS Wiener Predictor, Uncertain Observations, Markov Probability

1. Introduction

The estimation problem given uncertain observations is an important research topic in the area of detection and estimation problems in communication systems [1].
Nahi [2], assuming that the state-space model is given, proposes the RLS estimation method with uncertain observations, when the uncertainty is modeled in terms of independent random variables, and the probability that the signal exists in each observation is available. The term uncertain observations refers to the fact that some observations may not contain the signal and consist only of observation noise. In Hadidi and Schwartz [3], Nahi's results are extended to the case where the variables modeling the uncertainty are not necessarily independent.

In the above studies, it is assumed that the state-space model for the signal is given. However, in real applications, state-space modeling errors might degrade the estimation accuracy. Nakamori [4] derived the RLS Wiener fixed-point smoothing and filtering algorithms, based on the invariant imbedding method, from uncertain observations with uncertainty modeled by independent random variables. In the derivation of such RLS Wiener estimators, the state-transition matrix Φ, the observation matrix H, the variance K(k,k) of the state vector x(k), the variance R(k) of the observation noise v(k) and the observed values y(k) are used. Moreover, Nakamori et al. [5], based on the innovation approach, proposed the RLS Wiener fixed-point smoother and filter in linear discrete-time stochastic systems. Here, the observation equation is given by y(k) = γ(k)z(k) + v(k), z(k) = Hx(k), where γ(k) is a binary switching sequence with conditional probability. The innovation process is given by

ν(s) = y(s) − ŷ(s,s−1), ŷ(s,s−1) = P_{2,2}(s)HΦx̂(s−1,s−1),

in terms of the (2,2) element P_{2,2}(k|j) of the conditional probability of γ(k), given γ(j) (see Nakamori et al. [5,6] for details). In the current paper, under the same assumptions for the observation equation as in Nakamori et al. [5], an algorithm for the RLS Wiener l-step ahead predictor is derived, based on the invariant imbedding method.
Thus, the observation equation is given by

y(k) = γ(k)z(k) + v(k), z(k) = Hx(k),

where γ(k) is a binary switching sequence with conditional probability. The observation equation adopted in this paper is suitable, for example, to model remote sensing situations with data transmission in multichannels, where the independence assumption of the variables describing the uncertainty in the observations is not realistic.

The estimators require the information of the system state-transition matrix Φ, the observation matrix H, the variance K(k,k) of the state vector x(k), the variance R(k) of the observation noise, the probability p(k) = P(γ(k) = 1) that the signal exists in the uncertain observation equation and the (2,2) element P_{2,2}(k|j) of the conditional probability of γ(k), given γ(j). The RLS Wiener prediction and filtering algorithms are summarized in Theorem 1, whose proof is deferred to the Appendix. The main issues in this paper which differ from those in Nakamori et al. [5] concern the algorithm derivation; namely:

1) The prediction estimate is given as a linear transformation of the observed values.
2) The prediction algorithms are derived on the basis of the invariant imbedding method.

The current paper's main contribution is the derivation of a recursive least-squares algorithm for the predictor and filter design in systems with non-independent uncertain observations, using covariance information. Without making use of the state-space model, the algorithm is obtained from the autocovariance functions of the signal and the observation noise, the probability that the signal exists in the observed values and the (2,2) element of the conditional probability matrices of the sequence which describes the uncertainty in the observations.
This approach is suitable in many practical situations where the equation generating the signal process is unknown, so that the state-space model cannot be used to address the estimation problem. The deduction of the algorithm is mainly based on an invariant imbedding method.

2. Problem Formulation

Consider the following observation equation

y(k) = γ(k)z(k) + v(k), z(k) = Hx(k),   (1)

where z(k) is an m×1 signal, x(k) is the n×1 zero-mean state vector and H is the m×n observation matrix.

The sequence v(k) is a white noise with zero mean and variance R(k); that is,

E[v(k)v^T(s)] = R(k)δ_K(k−s),   (2)

where δ_K denotes the Kronecker delta function.

The random sequence γ(k), which describes the uncertainty in the observations, has the following stochastic properties:

(P-1) γ(k) is a discrete-time random variable taking the values 0 or 1 with P(γ(k) = 1) = p(k). Therefore, p(k) represents the probability that the observed value y(k) contains the signal z(k); this probability is assumed to be nonzero.

(P-2) The noise γ(k) is a sequence of random variables with initial probability vector [1 − p(0), p(0)]^T and conditional probability matrix P(k|j). The (2,2) element of the conditional probability matrix of γ(k), given γ(j), is independent of j, for j < k; that is,

E[γ(j)γ(k)] = P_{2,2}(k|j)P(γ(j) = 1) = P_{2,2}(k)p(j), j = 0, …, k−1.   (3)

The state process x(k) and the sequences γ(k) and v(k) are mutually independent.

Let us introduce the system matrix Φ in the state-space model for the state vector x(k) and the variance K(s,s) of the state vector x(s). Then the autocovariance function K_z(k,s) of the signal z(k) is factorized as

K_z(k,s) = HK(k,s)H^T, K(k,s) = A(k)B^T(s), 0 ≤ s ≤ k,
A(k) = Φ^k, B^T(s) = Φ^{−s}K(s,s).   (4)

The purpose of this paper is to design a covariance-based recursive algorithm to obtain the l-step ahead prediction estimate of x(k+l) from uncertain observations y(i), 1 ≤ i ≤ k.
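To illustrate the moment property (3), the following short Python sketch (ours, not part of the paper) simulates a binary sequence γ(k) generated by a non-independent channel-selection mechanism and checks empirically that E[γ(j)γ(k)] = P_{2,2}(k)p(j). The numbers q and p1 are hypothetical values chosen only for the check.

```python
import random

# A channel is drawn once per realization: with probability q the "uncertain"
# channel is used, on which gamma(k) = U(k) with P(U(k) = 1) = p1; otherwise
# gamma(k) = 1 for every k.  This makes the gamma(k) sequence dependent.
random.seed(0)
q, p1 = 0.3, 0.8
n_runs, j, k = 20000, 1, 5

sum_gk = 0.0       # accumulates gamma(k)
sum_prod = 0.0     # accumulates gamma(j) * gamma(k)
for _ in range(n_runs):
    bad = random.random() < q
    gamma = [1.0 if not bad else float(random.random() < p1)
             for _ in range(k + 1)]
    sum_gk += gamma[k]
    sum_prod += gamma[j] * gamma[k]

p = 1 - q + q * p1                  # p(k) = P(gamma(k) = 1)
p22 = (1 - q + q * p1 ** 2) / p     # P(gamma(k) = 1 | gamma(j) = 1)
est_p = sum_gk / n_runs             # empirical p(k)
est_prod = sum_prod / n_runs        # empirical E[gamma(j) gamma(k)]
```

With these values, est_p should be close to p = 0.94 and est_prod close to P_{2,2}(k)·p(j) = p22·p, matching (3).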
Due to the presence of a multiplicative noise component in the observation Equation (1), even if the additive noise is Gaussian, the conditional expectation of x(k+l) given y(i), 1 ≤ i ≤ k, which provides the least-squares estimator, is not a linear function of the observations, and its computation can be very complicated, requiring, in general, an exponentially growing memory. For this reason, our attention is focused on the least-squares linear estimation problem. Specifically, we are interested in obtaining the least-squares linear estimator of the state vector x(k+l) based on the observations y(i), 1 ≤ i ≤ k. This estimator, x̂(k+l,k), is the orthogonal projection of x(k+l) on the space of n-dimensional linear transformations of the observations. So, x̂(k+l,k) is given by

x̂(k+l,k) = Σ_{i=1}^{k} h(k+l,i,k)y(i)   (5)

as a linear transformation of the observed values y(i), 1 ≤ i ≤ k, where h(k+l,i,k), 1 ≤ i ≤ k, denotes the impulse-response function.

Let us consider the least-squares prediction problem, which minimizes the criterion

J = E[(x(k+l) − x̂(k+l,k))^T (x(k+l) − x̂(k+l,k))].   (6)

The orthogonal projection lemma [7] assures that x̂(k+l,k) is the only linear combination of the observations y(i), 1 ≤ i ≤ k, such that the estimation error is orthogonal to them,

x(k+l) − x̂(k+l,k) ⊥ y(s), 1 ≤ s ≤ k;   (7)

that is,

E[(x(k+l) − Σ_{i=1}^{k} h(k+l,i,k)y(i)) y^T(s)] = 0, 1 ≤ s ≤ k.

This condition is equivalent to the Wiener–Hopf equation

E[x(k+l)y^T(s)] = Σ_{i=1}^{k} h(k+l,i,k)E[y(i)y^T(s)], 1 ≤ s ≤ k,   (8)

which is useful to determine the optimum impulse-response function h(k+l,i,k), 1 ≤ i ≤ k, minimizing the cost function (6). Since P(γ(s) = 1) = p(s), the left-hand side of (8) is written as

E[x(k+l)y^T(s)] = K(k+l,s)H^T p(s).   (9)

Here, E denotes the statistical expectation.
Then, from the observation equation (1) and the covariance function (2) of the white observation noise, E[y(i)y^T(s)] is reduced to

E[y(i)y^T(s)] = E[γ(i)γ(s)]HK(i,s)H^T + R(i)δ_K(i−s).   (10)

Substituting (9) and (10) into (8), we have

h(k+l,s,k)R(s) = K(k+l,s)H^T p(s) − Σ_{i=1}^{k} h(k+l,i,k)E[γ(i)γ(s)]HK(i,s)H^T.   (11)

Under these conditions, in Section 3 the RLS Wiener prediction and filtering algorithms are presented.

3. RLS Wiener Prediction and Filtering Algorithm

Nakamori et al. [5,6], based on the innovation approach, proposed the algorithms for the fixed-point smoothing estimate and the filtering estimate. These algorithms are derived taking into account that the innovation process is expressed as

ν(s) = y(s) − ŷ(s,s−1), ŷ(s,s−1) = P_{2,2}(s)HΦx̂(s−1,s−1).

Under the preliminary assumptions made in Section 2, Theorem 1 proposes the RLS Wiener algorithms for the l-step ahead prediction estimates of the signal z(k+l) and the state vector x(k+l). These algorithms are derived, starting from (11), by iterative use of the invariant imbedding method.

Theorem 1. Consider the observation equation described in (1) and assume that the probability p(k) and the (2,2) element P_{2,2}(k) of the conditional probability matrix P(k|j) are given. Let the system state-transition matrix Φ, the observation matrix H, the autocovariance function K(s,s) of the state vector x(s), the variance R(k) of the white observation noise v(k) and the observed value y(k) be given. Then the RLS Wiener algorithms for the l-step ahead prediction estimate ẑ(k+l,k) of the signal z(k+l) and the l-step ahead prediction estimate x̂(k+l,k) of the state vector x(k+l) consist of (12)-(17).
lstep ahead prediction estimate of the signal ˆ:,lzk lkzk ˆˆ,zk lkHxk lk , (12) lstep ahead prediction estimate of the state vector ˆ:,xklxklk 2,2ˆˆ,1,1ˆ,,1,1,ˆ,0 0lxk lkxk lkhkkkykPkH xkkxl  (13) Filtering estimate of  ˆ:,zk zkk ˆˆ,zkk Hxkk, (14) Filtering estimate of ˆ:,xkxkk 2,2ˆˆ,1,1ˆ,,1,1,ˆ0,00xkkxk khkkkykPkHxkkx (15) Filter gain: ,,hkkk    2,2122,2,, ,1,1TTTTTThkkkKkkH pkPkSk HRk pkHKkkHPkHSk H (16) Copyright © 2011 SciRes. JSIP RLS Wiener Predictor with Uncertain Observations in Linear Discrete-Time Stochastic Systems155   2,21,,,100TTSk SkhkkkHKkkpkPkHSkS , (17) Proof of Theorem 1 is detailed in the Appendix. Clearly, the algorithms for the filtering estimate are the same as those proposed in Nakamori et al. . From Theorem 1, the innovation process k is represented by  2,2 ˆ1,1 .kykPkHxkk  (18) 4. A Numerical Simulation Example In order to illustrate the application of the RLS Wiener prediction algorithm proposed in Theorem 1, we consider a scalar signal zk whose autocovariance function zKm is given as follows    2221212 121221221210,11110,zmzmKKmm,    (19) with 212112,4aaa 2, where and 1 20.1,a 0.8a0.5. The covariance function (19) corresponds to a signal process generated by a second-order AR model. There- fore, according to Nakamori , the observation vector ,H the variance of the state vector  ,Kkk K0xk and the system matrix in the state equation are as follows:   210110, ,,1001,00.25,1 0.125.zzzzzzKKHKkkKKKKaa  (20) As in Nakamori et al. 
, we consider that the signal z(k) is transmitted through one of two channels, characterized by its observation equation as follows:

Channel 1: y(k) = z(k) + v(k),
Channel 2: y(k) = U(k)z(k) + v(k),

where v(k) is a zero-mean white observation noise and U(k) is a sequence of independent random variables taking values 0 or 1 with P(U(k) = 1) = p₁ = 0.8, for all k.

We assume that channel 1 is chosen at random with probability 1 − q = 0.7 and, hence, channel 2 is selected with probability q = 0.3. Then, the observation equation is

y(k) = γ(k)z(k) + v(k),   (21)

where γ(k) = (1 − C) + C·U(k), and C is a random variable, independent of U(k), taking values 0 or 1 with P(C = 1) = q = 0.3. Clearly, γ(k) is a sequence of random variables which take values 0 or 1, with

p(k) = P(γ(k) = 1) = 1 − q + q·p₁ = 0.94, for all k,

and conditional probability matrix

P(k|j) = [[1 − p₁, p₁], [q·p₁(1 − p₁)/(1 − q + q·p₁), (1 − q + q·p₁²)/(1 − q + q·p₁)]]
       = [[0.2, 0.8], [0.0510638, 0.9489362]],

for all k, j = 0, …, k−1. From (3),

P_{2,2}(k|j) = P_{2,2}(k) = 0.9489362, for all k, j = 0, …, k−1.

Substituting Φ, H and K(k,k), given in (20), into the prediction algorithm of Theorem 1, the prediction estimate of the signal has been calculated recursively. Figure 1 illustrates the signal z(k) and its prediction estimate ẑ(k+3,k) for zero-mean white observation noise with variance σ² = 0.3. Figure 2 illustrates the mean-square values (MSVs) of the filtering and prediction errors for zero-mean white observation noises with variances σ² = 0.1, 0.3, 0.5 and 0.7, comparing both the uncertain and certain observations cases (the latter corresponds to the case p(k) = P_{2,2}(k) = 1). The MSVs of the filtering and prediction errors are evaluated by

MSV = Σ_{i=1}^{2000} (z(i+l) − ẑ(i+l,i))² / 2000, l = 1, 2, …, 5.

Here, l = 0 corresponds to the calculation of the MSVs of the filtering errors. From Figure 2, it is deduced that, as l becomes larger, the prediction accuracy worsens in both the uncertain and the certain observations cases, with each different observation noise. It might also be noticed that the MSVs with uncertain observations are almost equal to those with certain observations, except for the observation noise with variance σ² = 0.1. For the observation noise with variance σ² = 0.1, the MSVs of the prediction errors with the certain observations are smaller than those with the uncertain observations, particularly for the 2- and 4-step ahead predictions.

For reference, the autoregressive (AR) model used to generate the signal process z(k) in the simulations is given by

z(k+1) = −a₁z(k) − a₂z(k−1) + w(k+1),
E[w(k)w(s)] = σ_w² δ_K(k−s).   (22)

Figure 1. Signal z(k) and its prediction estimate ẑ(k+3,k) for the zero-mean white observation noise with the variance σ² = 0.3.

Figure 2. Mean-square values (MSVs) of the filtering and prediction errors for the zero-mean white observation noises with the variances σ² = 0.1, 0.3, 0.5 and 0.7, for both the uncertain and certain observations.

5. Conclusions

In this paper, under the preliminary assumptions of Section 2, for the observation Equation (1), the RLS Wiener algorithms for the l-step ahead prediction estimates ẑ(k+l,k) of the signal z(k+l) and x̂(k+l,k) of the state vector x(k+l) are derived by iterative use of the invariant imbedding method. The prediction algorithms take into account the stochastic properties of the random variables γ(k) in the observation Equation (1), such as the probability p(k) = P(γ(k) = 1) that the signal exists in the uncertain observation equation, and the (2,2) element P_{2,2}(k|j) of the conditional probability of γ(k), given γ(j). A numerical simulation example in Section 4 shows that the prediction algorithm proposed in this paper is feasible.

REFERENCES

[1] H. L. Van Trees, "Detection, Estimation and Modulation Theory (Part I)," Wiley, New York, 1968.
[2] N. Nahi, "Optimal Recursive Estimation with Uncertain Observation," IEEE Transactions on Information Theory, Vol. IT-15, No. 4, 1969, pp. 457-462. doi:10.1109/TIT.1969.1054329
[3] N. Hadidi and S. Schwartz, "Linear Recursive State Estimators under Uncertain Observations," IEEE Transactions on Automatic Control, Vol. AC-24, No. 6, 1979, pp. 944-948. doi:10.1109/TAC.1979.1102171
[4] S. Nakamori, "Estimation Technique Using Covariance Information in Linear Discrete-Time Systems," Signal Processing, Vol. 58, No. 3, 1997, pp. 309-317. doi:10.1016/S0165-1684(97)00032-7
[5] S. Nakamori, R. Caballero-Águila, A. Hermoso-Carazo and J. Linares-Pérez, "Linear Recursive Discrete-Time Estimators Using Covariance Information under Uncertain Observations," Signal Processing, Vol. 83, No. 7, 2003, pp. 1553-1559. doi:10.1016/S0165-1684(03)00056-2
[6] S. Nakamori, R. Caballero-Águila, A. Hermoso-Carazo and J. Linares-Pérez, "Fixed-Point Smoothing with Non-Independent Uncertainty Using Covariance Information," International Journal of Systems Science, Vol. 34, No. 7, 2003, pp. 439-452. doi:10.1080/00207720310001636390
[7] A. P. Sage and J. L. Melsa, "Estimation Theory with Applications to Communications and Control," McGraw-Hill, New York, 1971.
[8] S. Haykin, "Adaptive Filter Theory," Prentice-Hall, New Jersey, 2003.
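As a supplementary numerical check on the two-channel model of Section 4, the probability p(k) = 0.94 and the entries of the conditional probability matrix P(k|j) can be reproduced from q = 0.3 and p₁ = 0.8 with a few lines of Python (an illustrative sketch; the variable names are our own):

```python
# Channel 2 (with multiplicative noise U(k)) is selected with probability q;
# on channel 2 the signal is present with probability p1.  gamma(j) = 1 either
# because channel 1 was chosen (prob. 1 - q) or because channel 2 was chosen
# and U(j) = 1 (prob. q * p1).
q, p1 = 0.3, 0.8

p = 1 - q + q * p1                   # p(k) = P(gamma(k) = 1)
# Rows of P(k|j): conditioning on gamma(j) = 0 (row 1) or gamma(j) = 1 (row 2).
row0 = [1 - p1, p1]                  # gamma(j) = 0 forces channel 2
row1 = [q * p1 * (1 - p1) / p,       # P(gamma(k) = 0 | gamma(j) = 1)
        (1 - q + q * p1 ** 2) / p]   # P_22(k) = P(gamma(k) = 1 | gamma(j) = 1)
```

The second row evaluates to [0.0510638, 0.9489362], matching the matrix given in Section 4.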
Appendix A. Proof of Theorem 1

Let us introduce the function J(k,s) through the equation

J(k,s)R(s) = p(s)B^T(s)H^T − Σ_{i=1}^{k} J(k,i)E[γ(i)γ(s)]HK(i,s)H^T.   (A-1)

From (11) and (A-1) it follows that

h(k+l,s,k) = A(k+l)J(k,s).   (A-2)

Subtracting the equation obtained by putting k → k−1 in (A-1) from (A-1) yields

[J(k,s) − J(k−1,s)]R(s) = −J(k,k)E[γ(k)γ(s)]HK(k,s)H^T
− Σ_{i=1}^{k−1} [J(k,i) − J(k−1,i)]E[γ(i)γ(s)]HK(i,s)H^T.   (A-3)

From (A-1), (A-3) and the relationship E[γ(k)γ(s)] = P_{2,2}(k)p(s), it follows that

J(k,s) = J(k−1,s) − J(k,k)P_{2,2}(k)HA(k)J(k−1,s), 1 ≤ s ≤ k−1.   (A-4)

Putting s = k in (A-1) yields

J(k,k)R(k) = p(k)B^T(k)H^T − J(k,k)p(k)HK(k,k)H^T
− P_{2,2}(k) Σ_{i=1}^{k−1} J(k,i)p(i)HB(i)A^T(k)H^T.   (A-5)

Here, the relationships E[γ(k)γ(k)] = p(k) and E[γ(i)γ(k)] = P_{2,2}(k)p(i), i < k, together with (4) and (A-4), are used. Let us introduce a function

r(k) = Σ_{i=1}^{k} J(k,i)p(i)HB(i).   (A-6)

Hence,

J(k,k) = [p(k)B^T(k)H^T − P_{2,2}(k)r(k−1)A^T(k)H^T]
× [R(k) + p(k)HK(k,k)H^T − P_{2,2}²(k)HA(k)r(k−1)A^T(k)H^T]^{−1}.   (A-7)

Subtracting the equation obtained by putting k → k−1 in (A-6) from (A-6) yields

r(k) = r(k−1) + J(k,k)[p(k)HB(k) − P_{2,2}(k)HA(k)r(k−1)].   (A-8)

Here, (A-4) and (A-7) have been used. Clearly, from (A-6), the initial condition for the recursive Equation (A-8) of r(k) at k = 0 is given by r(0) = 0.

The l-step ahead prediction estimate x̂(k+l,k) of x(k+l) is given by (5). From (5) and (A-2), it follows that

x̂(k+l,k) = Σ_{i=1}^{k} A(k+l)J(k,i)y(i).   (A-9)

Let us introduce a function

e(k) = Σ_{i=1}^{k} J(k,i)y(i).   (A-10)

Hence, the l-step ahead prediction estimate x̂(k+l,k) of the state vector x(k+l) and the filtering estimate x̂(k,k) of the state vector x(k) are given by

x̂(k+l,k) = A(k+l)e(k), x̂(k,k) = A(k)e(k).   (A-11)

Subtracting the equation obtained by putting k → k−1 in (A-10) from (A-10) yields

e(k) = e(k−1) + J(k,k)y(k) + Σ_{i=1}^{k−1} [J(k,i) − J(k−1,i)]y(i).   (A-12)

From (A-4) and (A-12), we get

e(k) = e(k−1) + J(k,k)[y(k) − P_{2,2}(k)HA(k)e(k−1)], e(0) = 0.   (A-13)

From (A-2), (A-11) and (A-13), it follows that

x̂(k,k) = Φx̂(k−1,k−1) + h(k,k,k)[y(k) − P_{2,2}(k)HΦx̂(k−1,k−1)], x̂(0,0) = 0,
x̂(k+l,k) = Φx̂(k+l−1,k−1) + Φ^l h(k,k,k)[y(k) − P_{2,2}(k)HΦx̂(k−1,k−1)], x̂(l,0) = 0.   (A-14)

Introducing

S(k) = A(k)r(k)A^T(k),   (A-15)

it follows, from (A-8) and (A-15), that

S(k) = ΦS(k−1)Φ^T + h(k,k,k)[p(k)HK(k,k) − P_{2,2}(k)HΦS(k−1)Φ^T], S(0) = 0.   (A-16)

Finally, the filter gain h(k,k,k) = A(k)J(k,k) is expressed as

h(k,k,k) = [p(k)K(k,k)H^T − P_{2,2}(k)ΦS(k−1)Φ^T H^T]
× [R(k) + p(k)HK(k,k)H^T − P_{2,2}²(k)HΦS(k−1)Φ^T H^T]^{−1}.

(Q.E.D.)
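For reference, the recursion proved above can be sketched in code. The following is an illustrative Python sketch (ours, not the authors' implementation) of the Theorem 1 filter (15)-(17) and the l-step predictor (12)-(13) for the scalar-observation AR(2) example of Section 4. The constants a₁, a₂, K_z(0), K_z(1), q and p₁ follow Section 4; the observation-noise variance R = 0.3 matches one of the paper's simulation settings; the driving-noise variance sw2 is chosen so that the simulated signal has variance K_z(0) = 0.25, and the helper names are our own.

```python
import random

def mm(A, B):  # product of two 2x2 matrices
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def mv(A, x):  # product of a 2x2 matrix and a 2-vector
    return [sum(A[i][t] * x[t] for t in range(2)) for i in range(2)]

def tr(A):  # transpose of a 2x2 matrix
    return [[A[j][i] for j in range(2)] for i in range(2)]

random.seed(1)
a1, a2 = -0.1, -0.8
Phi = [[0.0, 1.0], [-a2, -a1]]          # system matrix, Equation (20)
Kz0, Kz1 = 0.25, 0.125
K = [[Kz0, Kz1], [Kz1, Kz0]]            # K(k,k), Equation (20); H = [1, 0]
q, p1 = 0.3, 0.8
p = 1 - q + q * p1                      # p(k) = P(gamma(k) = 1) = 0.94
P22 = (1 - q + q * p1 ** 2) / p         # P_22(k), about 0.9489362
R = 0.3                                 # observation-noise variance

# Stationary AR(2) signal z(k) = -a1 z(k-1) - a2 z(k-2) + w(k); sw2 gives
# signal variance Kz0 (standard AR(2) variance formula).
phi1, phi2 = -a1, -a2
sw2 = Kz0 * (1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2) / (1 - phi2)
N, burn = 2000, 300
z = [0.0, 0.0]
for _ in range(N + burn):
    z.append(phi1 * z[-1] + phi2 * z[-2] + random.gauss(0.0, sw2 ** 0.5))
z = z[burn:]

bad = random.random() < q               # channel 2 selected with prob. q
xhat = [0.0, 0.0]                       # x(0,0) = 0
S = [[0.0, 0.0], [0.0, 0.0]]            # S(0) = 0
sq_f = sq_0 = 0.0
for k in range(N):
    gamma = 1.0 if not bad else float(random.random() < p1)
    y = gamma * z[k] + random.gauss(0.0, R ** 0.5)
    Px = mv(Phi, xhat)                  # Phi xhat(k-1,k-1)
    PSP = mm(mm(Phi, S), tr(Phi))       # Phi S(k-1) Phi^T
    denom = R + p * K[0][0] - P22 ** 2 * PSP[0][0]   # scalar denominator, (16)
    g = [(p * K[i][0] - P22 * PSP[i][0]) / denom for i in range(2)]  # gain (16)
    nu = y - P22 * Px[0]                # innovation (18)
    xhat = [Px[i] + g[i] * nu for i in range(2)]     # filtering estimate (15)
    row = [p * K[0][j] - P22 * PSP[0][j] for j in range(2)]
    S = [[PSP[i][j] + g[i] * row[j] for j in range(2)] for i in range(2)]  # (17)
    sq_f += (z[k] - xhat[0]) ** 2       # zhat(k,k) = H xhat(k,k), (14)
    sq_0 += z[k] ** 2

mse_filter, mse_zero = sq_f / N, sq_0 / N

# 3-step ahead prediction, (12)-(13): xhat(k+3,k) = Phi^3 xhat(k,k).
Phi3 = mm(mm(Phi, Phi), Phi)
zhat_pred3 = mv(Phi3, xhat)[0]
```

Since the filter is the optimal linear one, its empirical mean-square error should be clearly below the signal variance (the MSE of the trivial zero estimate); S(k) stays symmetric, consistent with its interpretation S(k) = E[x̂(k,k)x̂^T(k,k)] in (A-15).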