Applied Mathematics, 2011, 2, 363-368
doi:10.4236/am.2011.23043 Published Online March 2011 (http://www.SciRP.org/journal/am)
Copyright © 2011 SciRes. AM

Iterated Logarithm Laws on GLM Randomly Censored with Random Regressors and Incomplete Information#

Qiang Zhu1, Zhihong Xiao1*, Guanglian Qin1, Fang Ying2
1Statistics Research Institute, Huazhong Agricultural University, Wuhan, China
2School of Science, Hubei University of Technology, Wuhan, China
E-mail: xzhhsfxym@mail.hzau.edu.cn
Received December 8, 2010; revised January 26, 2011; accepted January 29, 2011

Abstract

In this paper, we define the generalized linear model (GLM) based on observed data with incomplete information and random censorship, in the case that the regressors are stochastic. Under the given conditions, we obtain a law of the iterated logarithm and a Chung-type law of the iterated logarithm for the maximum likelihood estimator (MLE) $\hat{\beta}_n$ in the present model.

Keywords: Generalized Linear Model, Incomplete Information, Stochastic Regressor, Iterated Logarithm Laws

1. Introduction

The generalized linear model (GLM) was put forward by Nelder and Wedderburn in the 1970s and has been studied widely since then. The maximum likelihood estimator (MLE) $\hat{\beta}_n$ of the parameter vector $\beta$ in GLM was given, and its strong consistency and asymptotic normality were discussed, by Fahrmeir and Kaufmann in 1985. The randomly censored model with incomplete information was presented by Elperin and Gertsbakh in 1988. The analysis of randomly censored data with incomplete information has become a new branch of mathematical statistics. Xiao and Liu in 2008 discussed the strong consistency and asymptotic normality of the MLE $\hat{\beta}_n$ of GLM based on data with random censorship and incomplete information.
Xiao and Liu discussed laws of the iterated logarithm for the quasi-maximum likelihood estimator of GLM in 2008; meanwhile, Xiao and Liu in 2009 discussed laws of the iterated logarithm for the maximum likelihood estimator of the generalized linear model randomly censored with incomplete information, with the regressors given. However, Lai and Wei, and Zeger and Karim, have studied the linear regression model in the case that the regressors are stochastic. In practical applications, especially in the biomedical and social sciences, the regressors in GLM are often stochastic. Fahrmeir investigated GLM with regressors $X_1,\dots,X_n$ that are independent and identically distributed and gave the MLE of the parameter matrix, without proof, under the given conditions. Ding and Chen in 2006 gave asymptotic properties of the MLE in GLM with stochastic regressors. So, in the present paper, we investigate the law of the iterated logarithm and the Chung-type law of the iterated logarithm for the maximum likelihood estimator of the generalized linear model randomly censored with incomplete information, in the case that the regressor variables $X_i,\ i \ge 1$, are independent but not necessarily identically distributed.

From a statistical perspective, the importance of these laws stems from the fact that the first gives, in an asymptotic sense, the smallest 100% confidence interval for the parameter, while the second gives an almost sure lower bound on the accuracy that the estimator can achieve.

2. Model with the Random Regressor

Suppose that the response variables $Y_i,\ i = 1,2,\dots,n$, are one-dimensional random variables, and the regressor variables $X_i,\ i = 1,2,\dots,n$, are $q$-dimensional random variables with distribution functions $K_i,\ i = 1,2,\dots,n$, respectively. Here, $x_i$ is the observed value of $X_i$. Suppose that the observations $(Y_i, X_i),\ i = 1,2,\dots$, are mutually independent and satisfy the following.

*Corresponding Author.
#Foundation items: supported by the Huazhong Agricultural University Doctoral Fund (52204-02067), the Interdisciplinary Fund (2008xkjc008), and the Torch Plan Fund (2009XH003).

1) The regression equation:

$$E(Y_i \mid X_i = x_i) = m(\beta^T x_i), \quad i \ge 1, \tag{2.1}$$

where the unknown parameter $\beta \in B \subset \mathbb{R}^q$.

2) The conditional distribution of $Y_i$ given $X_i = x_i$ belongs to the exponential family, i.e.,

$$P(Y_i \in dy \mid X_i = x_i) = C(y)\exp\{\theta_i y - b(\theta_i)\}\,\mu(dy), \quad i \ge 1, \tag{2.2}$$

where $\mu$ is a $\sigma$-finite measure, the parameter $\theta_i \in \Theta$, $i = 1,2,\dots,n$,

$$\Theta = \Big\{\theta : \int C(y)\exp\{\theta y\}\,\mu(dy) < \infty\Big\}$$

is the natural parameter space, and $\Theta^0$ is the interior of $\Theta$. Since this conditional density integrates to 1, we see that $\exp\{b(\theta_i)\} = \int C(y)\exp\{\theta_i y\}\,\mu(dy)$, from which follow the standard expressions for the conditional mean, $E(Y_i \mid X_i = x_i) = b'(\theta_i)$, and the variance, $\mathrm{Var}(Y_i \mid X_i = x_i) = b''(\theta_i)$, where $b'$ and $b''$ denote the first and second derivatives of $b$, respectively.

Suppose that the censoring random variables $U_i,\ i = 1,2,\dots,n$, are mutually independent but not necessarily identically distributed, with distribution functions $G_i(u)$ and $dG_i(u) = g_i(u)\,du$. Denote $K_i(dx) = \kappa_i(x)\,dx$, $i = 1,2,\dots,n$. Suppose that $U_i$ is independent of $(Y_i, X_i)$. For $i = 1,2,\dots,n$, let

$$\delta_i = I(Y_i \le U_i),$$

$$\tau_i = \begin{cases} 0, & \text{if } Y_i \le U_i \text{ but the real value of } Y_i \text{ isn't observed}, \\ 1, & \text{else}, \end{cases}$$

$$Z_i = \begin{cases} Y_i, & \text{if } \delta_i\tau_i = 1, \\ U_i, & \text{otherwise}. \end{cases}$$

Obviously, $(Z_i, \delta_i, \tau_i, X_i),\ i = 1,2,\dots$, is a mutually independent and observable sample. The conditional density and distribution function of $Y_i$ given $X_i = x_i$ are respectively denoted as

$$f(y; \beta^T x_i) = C(y)\exp\{\beta^T x_i\, y - b(\beta^T x_i)\},$$

$$F(z; \beta^T x_i) = \int_{-\infty}^{z} C(y)\exp\{\beta^T x_i\, y - b(\beta^T x_i)\}\,dy = P(Y_i \le z \mid X_i = x_i).$$

Let $\bar{G}_i(z) = 1 - G_i(z)$ and $\bar{F}(z; \beta^T x_i) = 1 - F(z; \beta^T x_i)$, $i = 1,2,\dots,n$. Suppose

$$P(\tau_i = 1 \mid Y_i = y, U_i = u, X_i = x_i) = p, \quad \text{if } y \le u, \tag{2.3}$$

$$P(\tau_i = 0 \mid Y_i = y, U_i = u, X_i = x_i) = 1 - p, \quad \text{if } y \le u, \tag{2.4}$$

where $0 < p \le 1$. This assumption comes from T. Elperin and I. Gertsbakh. In the reliability study, the instant of an item's failure is observed if it occurs before a randomly chosen inspection time and the failure is signaled.
Otherwise, the experiment is terminated at the instant of inspection, during which the true state of the item is discovered. T. Elperin and I. Gertsbakh assumed that the failure time of every item was signaled randomly with probability $p$ before the randomly chosen inspection time. Then we have

$$P(Y_i \le y,\ U_i \le u \mid X_i = x_i) = P(Y_i \le y \mid X_i = x_i)\,P(U_i \le u), \quad \forall\, y, u, x_i.$$

Without loss of generality, assuming that $X_i$ is discrete, we have

$$P(Y_i \le y,\ U_i \le u \mid X_i = x_i) = P(Y_i \le y \mid X_i = x_i)\,P(U_i \le u). \tag{2.5}$$

We first give the following proposition.

Proposition 2.1. Under the regular assumptions above, we have

$$P(Z_i \le z,\ \delta_i = 1,\ \tau_i = 1 \mid X_i = x_i) = p\int_{-\infty}^{z} \bar{G}_i(y)\,f(y; \beta^T x_i)\,dy, \tag{2.6}$$

$$P(Z_i \le z,\ \delta_i = 1,\ \tau_i = 0 \mid X_i = x_i) = (1-p)\int_{-\infty}^{z} F(y; \beta^T x_i)\,dG_i(y), \tag{2.7}$$

$$P(Z_i \le z,\ \delta_i = 0 \mid X_i = x_i) = \int_{-\infty}^{z} \bar{F}(y; \beta^T x_i)\,dG_i(y). \tag{2.8}$$

Proof. We only show (2.6) for the discrete case; the continuous case can be shown in a way similar to that of the discrete case.

$$\begin{aligned}
P(Z_i \le z,\ \delta_i = 1,\ \tau_i = 1 \mid X_i = x_i)
&= E\big[I(Y_i \le z)\,I(Y_i \le U_i)\,\tau_i \mid X_i = x_i\big] \\
&= \sum_{y \le z}\,\sum_{u \ge y} P(\tau_i = 1 \mid Y_i = y,\ U_i = u,\ X_i = x_i)\,P(Y_i = y,\ U_i = u \mid X_i = x_i) \\
&= p\int_{-\infty}^{z} \bar{G}_i(y)\,dF(y; \beta^T x_i) = p\int_{-\infty}^{z} \bar{G}_i(y)\,f(y; \beta^T x_i)\,dy,
\end{aligned} \tag{2.9}$$

where (2.9) follows from (2.3) and (2.5). Similarly, we can demonstrate (2.7) and (2.8).

Suppose that $z_i$ is the observation of $Z_i$, $d_i$ is the observation of $\delta_i$, and $t_i$ is the observation of $\tau_i$. Then (2.6), (2.7) and (2.8) imply that for all $i \ge 1$, the conditional distribution of $(Z_i, \delta_i, \tau_i)$ given $X_i = x_i$ is the following:

$$\big[p\,\bar{G}_i(z)f(z;\beta^Tx_i)\big]^{d_i t_i}\,\big[(1-p)\,F(z;\beta^Tx_i)\,g_i(z)\big]^{d_i(1-t_i)}\,\big[\bar{F}(z;\beta^Tx_i)\,g_i(z)\big]^{1-d_i}\,dz. \tag{2.10}$$

Let $Z^{(n)} = (Z_1,\dots,Z_n)$, $z^{(n)} = (z_1,\dots,z_n)$, $\delta^{(n)} = (\delta_1,\dots,\delta_n)$, $d^{(n)} = (d_1,\dots,d_n)$, $\tau^{(n)} = (\tau_1,\dots,\tau_n)$, $t^{(n)} = (t_1,\dots,t_n)$, $X = (X_1, X_2, \dots)$, $X^{(n)} = (X_1,\dots,X_n)$, $x^{(n)} = (x_1,\dots,x_n)$, $x = (x_1, x_2, \dots)$. We easily get the following proposition.

Proposition 2.2.
For all $n \ge 1$, we have

$$P\big(Z^{(n)} \le z^{(n)},\ \delta^{(n)} = d^{(n)},\ \tau^{(n)} = t^{(n)} \,\big|\, X = x\big) = P\big(Z^{(n)} \le z^{(n)},\ \delta^{(n)} = d^{(n)},\ \tau^{(n)} = t^{(n)} \,\big|\, X^{(n)} = x^{(n)}\big) = \prod_{i=1}^{n} P\big(Z_i \le z_i,\ \delta_i = d_i,\ \tau_i = t_i \,\big|\, X_i = x_i\big) \tag{2.11}$$

and

$$P\big(Z_i \le z_i,\ \delta_i = d_i,\ \tau_i = t_i \,\big|\, X = x\big) = P\big(Z_i \le z_i,\ \delta_i = d_i,\ \tau_i = t_i \,\big|\, X^{(n)} = x^{(n)}\big) = P\big(Z_i \le z_i,\ \delta_i = d_i,\ \tau_i = t_i \,\big|\, X_i = x_i\big), \quad 1 \le i \le n, \tag{2.12}$$

where $Z^{(n)} \le z^{(n)}$ means $Z_i \le z_i$ for $1 \le i \le n$.

Remark 2.1. Proposition 2.2 implies that, under $P(\cdot \mid X = x)$, the $U_i,\ i \ge 1$, are mutually independent, and so are the $Y_i,\ i \ge 1$, and the $(Z_i, \delta_i, \tau_i),\ i \ge 1$.

(2.10) and (2.11) imply that the conditional distribution of $(Z_1,\delta_1,\tau_1,\dots,Z_n,\delta_n,\tau_n)$ given $X^{(n)} = x^{(n)}$ is

$$\prod_{i=1}^{n}\big[p\,\bar{G}_i(z_i)f(z_i;\beta^Tx_i)\big]^{d_i t_i}\big[(1-p)F(z_i;\beta^Tx_i)g_i(z_i)\big]^{d_i(1-t_i)}\big[\bar{F}(z_i;\beta^Tx_i)g_i(z_i)\big]^{1-d_i}\,dz_i. \tag{2.13}$$

The conditional probability measure corresponding to (2.13) is written as $P_\beta(\cdot \mid X = x)$. Meanwhile, let $E_x^\beta$ and $\mathrm{Var}_x^\beta$ denote the conditional expectation and conditional variance under the conditional probability measure $P_\beta(\cdot \mid X = x)$, respectively. Let $\beta_0$ denote the real value of $\beta$. For notational simplicity, let $E_x = E_x^{\beta_0}$ and $\mathrm{Var}_x = \mathrm{Var}_x^{\beta_0}$. (2.13) implies that the joint distribution of $(Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n)$ is

$$\prod_{i=1}^{n}\big[p\,\bar{G}_i(z_i)f(z_i;\beta^Tx_i)\big]^{d_i t_i}\big[(1-p)F(z_i;\beta^Tx_i)g_i(z_i)\big]^{d_i(1-t_i)}\big[\bar{F}(z_i;\beta^Tx_i)g_i(z_i)\big]^{1-d_i}\,\kappa_i(x_i)\,dz_i\,dx_i. \tag{2.14}$$

The (unconditional) probability measure corresponding to (2.14) is denoted as $P_\beta$. Meanwhile, let $E_\beta$ and $\mathrm{Var}_\beta$ denote the expectation and variance under the probability measure $P_\beta$, respectively. For notational simplicity, let $P = P_{\beta_0}$, $E = E_{\beta_0}$ and $\mathrm{Var} = \mathrm{Var}_{\beta_0}$. It is the parameters in (2.14) that we study.
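The observation scheme of this section is easy to simulate. The following minimal sketch (an illustration only; the exponential lifetime and censoring distributions and the value $p = 0.7$ are assumptions, not taken from the paper) generates the observable sample $(Z_i, \delta_i, \tau_i)$ and empirically checks the total-mass version of (2.6), i.e. $P(\delta_i = 1, \tau_i = 1) = p\,P(Y_i \le U_i)$ obtained by letting $z \to \infty$.

```python
import random

def observe(y, u, p, rng):
    """Elperin-Gertsbakh incomplete-information censoring:
    delta = I(Y <= U); when the failure occurred before inspection,
    it is signaled (tau = 1) with probability p, cf. (2.3)-(2.4);
    the observed time is Z = Y exactly when delta*tau = 1, else Z = U."""
    delta = 1 if y <= u else 0
    tau = 1 if (delta == 0 or rng.random() < p) else 0
    z = y if delta * tau == 1 else u
    return z, delta, tau

rng = random.Random(0)
p = 0.7
sample = [observe(rng.expovariate(1.0),    # lifetime Y_i ~ Exp(1)  (assumed)
                  rng.expovariate(0.5),    # censoring U_i ~ Exp(0.5) (assumed)
                  p, rng)
          for _ in range(20000)]

# Letting z -> infinity in (2.6) gives P(delta=1, tau=1) = p * P(Y <= U);
# for these rates P(Y <= U) = 1/(1 + 0.5) = 2/3, so the target is 0.7 * 2/3.
frac = sum(d * t for _, d, t in sample) / len(sample)
print(frac)
```

The printed empirical frequency should be close to $0.7 \times 2/3 \approx 0.467$, illustrating how the signaling probability $p$ thins the uncensored observations.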
3. Main Results

Furthermore, from (2.14) we get the likelihood function of $(Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n)$ as follows:

$$L_n\big(\beta; Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big) = \prod_{i=1}^{n}\big[p\,\bar{G}_i(Z_i)f(Z_i;\beta^TX_i)\big]^{\delta_i\tau_i}\big[(1-p)F(Z_i;\beta^TX_i)g_i(Z_i)\big]^{\delta_i(1-\tau_i)}\big[\bar{F}(Z_i;\beta^TX_i)g_i(Z_i)\big]^{1-\delta_i}\,\kappa_i(X_i). \tag{3.1}$$

Taking the logarithm of (3.1) and dropping the terms which are free of $\beta$ yield the logarithm likelihood function

$$l_n^*(\beta) = \sum_{i=1}^{n}\Big\{\delta_i\tau_i\log f(Z_i;\beta^TX_i) + \delta_i(1-\tau_i)\log F(Z_i;\beta^TX_i) + (1-\delta_i)\log\bar{F}(Z_i;\beta^TX_i)\Big\} = l_n\big(\beta; Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big), \tag{3.2}$$

where $l_n(\beta; Z_1,\delta_1,\tau_1,x_1,\dots,Z_n,\delta_n,\tau_n,x_n)$ is the logarithm likelihood function defined in Xiao and Liu.

We have the score function

$$T_n^*(\beta) = \frac{\partial l_n^*(\beta)}{\partial\beta} = \sum_{i=1}^{n} X_i\Bigg\{\delta_i\tau_i\big[Z_i - b'(\beta^TX_i)\big] + \delta_i(1-\tau_i)\,\frac{\int_{-\infty}^{Z_i}\big[y - b'(\beta^TX_i)\big]f(y;\beta^TX_i)\,dy}{F(Z_i;\beta^TX_i)} - (1-\delta_i)\,\frac{\int_{-\infty}^{Z_i}\big[y - b'(\beta^TX_i)\big]f(y;\beta^TX_i)\,dy}{\bar{F}(Z_i;\beta^TX_i)}\Bigg\} = T_n\big(\beta; Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big), \tag{3.3}$$

where $T_n(\beta; Z_1,\delta_1,\tau_1,x_1,\dots,Z_n,\delta_n,\tau_n,x_n)$ is defined as in Xiao and Liu. And

$$\begin{aligned}
H_n^*(\beta) = -\frac{\partial^2 l_n^*(\beta)}{\partial\beta\,\partial\beta^T}
&= \sum_{i=1}^{n} X_iX_i^T\Bigg\{ b''(\beta^TX_i)
+ \delta_i(1-\tau_i)\Bigg[\bigg(\frac{\int_{-\infty}^{Z_i}\big[y-b'(\beta^TX_i)\big]f(y;\beta^TX_i)\,dy}{F(Z_i;\beta^TX_i)}\bigg)^{2}
- \frac{\int_{-\infty}^{Z_i}\big[y-b'(\beta^TX_i)\big]^{2}f(y;\beta^TX_i)\,dy}{F(Z_i;\beta^TX_i)}\Bigg] \\
&\qquad\qquad + (1-\delta_i)\Bigg[\bigg(\frac{\int_{-\infty}^{Z_i}\big[y-b'(\beta^TX_i)\big]f(y;\beta^TX_i)\,dy}{\bar{F}(Z_i;\beta^TX_i)}\bigg)^{2}
- \frac{\int_{Z_i}^{\infty}\big[y-b'(\beta^TX_i)\big]^{2}f(y;\beta^TX_i)\,dy}{\bar{F}(Z_i;\beta^TX_i)}\Bigg]\Bigg\} \\
&= H_n\big(\beta; Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big),
\end{aligned}$$

where $H_n(\beta; Z_1,\delta_1,\tau_1,x_1,\dots,Z_n,\delta_n,\tau_n,x_n)$ is defined as in Xiao and Liu. Write

$$\Sigma_n^*(\beta) = E_x\big[T_n^*(\beta)\,T_n^{*T}(\beta)\big] = E_x\big[H_n^*(\beta)\big] = \Sigma_n(\beta; x_1,\dots,x_n),$$

where $\Sigma_n(\beta; x_1,\dots,x_n)$ is defined as in Xiao and Liu. The solution of the logarithm likelihood equation

$$T_n^*(\beta) = 0 \tag{3.4}$$

is written as

$$\hat{\beta}_n = \hat{\beta}_n\big(Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big). \tag{3.5}$$

(3.3) and (3.4) imply that

$$\hat{\beta}_n\big(Z_1,\delta_1,\tau_1,X_1,\dots,Z_n,\delta_n,\tau_n,X_n\big) = \hat{\beta}_n\big(Z_1,\delta_1,\tau_1,x_1,\dots,Z_n,\delta_n,\tau_n,x_n\big)\Big|_{x_i = X_i,\ 1\le i\le n}, \tag{3.6}$$

where $\hat{\beta}_n(Z_1,\delta_1,\tau_1,x_1,\dots,Z_n,\delta_n,\tau_n,x_n)$ is defined as in Xiao and Liu.

The norm of a matrix $A = (a_{ij})_{p\times q}$ is defined as $\|A\| = \big(\sum_{i=1}^{p}\sum_{j=1}^{q} a_{ij}^2\big)^{1/2}$. We write $\langle\cdot,\cdot\rangle$ for the usual inner product and $e_s$ for the $s$-th canonical basis vector in $\mathbb{R}^q$.
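To make the likelihood equation (3.4) concrete, the following numerical sketch solves it in the scalar case $q = 1$, specializing (2.2) to exponential lifetimes, for which $b(\theta) = -\log(-\theta)$, $f(y;\theta) = (-\theta)e^{\theta y}$ and $F(z;\theta) = 1 - e^{\theta z}$ with $\theta = \beta x < 0$. All data-generating choices (uniform regressors, exponential censoring, $p = 0.8$, $\beta_0 = -1$) and the golden-section maximizer are illustrative assumptions, not the paper's method.

```python
import math
import random

def loglik(beta, data):
    """Log-likelihood l_n*(beta) of (3.2), specialized to exponential
    lifetimes: theta_i = beta * x_i < 0, log f = log(-theta) + theta*z,
    log F = log(1 - exp(theta*z)), log Fbar = theta*z."""
    total = 0.0
    for z, d, t, x in data:
        th = beta * x
        if th >= 0:
            return -math.inf                      # outside the natural parameter space
        if d == 1 and t == 1:
            total += math.log(-th) + th * z       # uncensored, signaled: log f
        elif d == 1:
            total += math.log1p(-math.exp(th * z))  # failed but unsignaled: log F
        else:
            total += th * z                       # censored: log Fbar
    return total

# Simulate from the model with true beta0 = -1 (illustrative choices).
rng = random.Random(1)
beta0, p = -1.0, 0.8
data = []
for _ in range(4000):
    x = rng.uniform(0.5, 2.0)            # stochastic regressor X_i
    y = rng.expovariate(-beta0 * x)      # Y_i | X_i = x ~ Exp(-beta0 * x)
    u = rng.expovariate(0.5)             # censoring time U_i
    d = 1 if y <= u else 0
    t = 1 if (d == 0 or rng.random() < p) else 0
    z = y if d * t == 1 else u
    data.append((z, d, t, x))

# Maximize l_n*(beta) by golden-section search on a bracketing interval,
# which is equivalent to solving the score equation (3.4) here.
lo, hi = -5.0, -0.05
invphi = (math.sqrt(5) - 1) / 2
for _ in range(80):
    m1 = hi - invphi * (hi - lo)
    m2 = lo + invphi * (hi - lo)
    if loglik(m1, data) < loglik(m2, data):
        lo = m1
    else:
        hi = m2
beta_hat = (lo + hi) / 2
print(beta_hat)
```

With $n = 4000$ observations the printed estimate lands near the true value $-1$, consistent with the strong consistency of $\hat{\beta}_n$ discussed in Section 1.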
Let 11,,;zzFzyfydy   122,, ;zzFzyfydy    13,,;zzFz yfy dy   124,, ;zzFz yfy dy   and  |.TnnnnnET TXx  We state the following assump tions: (1C) For all 1i, for all B，0,TiXa.s., where iX. Here is compact. (2C) 100;lim;nnnQXnX is a q-order positive define matrix. (3C) For all12,B, 221112112;; ,,TTzxzxL zx  (3.7) 21222 12;; ,,TTzxzxLzx  (3.8) 2231323 12;; ,,TTzxzxL zx  (3.9) 41424 12;; ,,TTzxzxLzx  (3.10) where 1sup;,. .1,2,3,4,1.bTiji jEL ZxXLasjb  (4C) 2,1, 0,XTiBiEt X  a.s. It is also easy to see that the conditions in the present paper imply the conditions (C1), (C2), (C3) and (C4) given in Xiao and Liu. So, there almost sure exists the maximum likelihood estimator of0. Hence, our first result states a law of the iterated logarithm for the max-imum likelihood estimator of 0. Theorem 3.1. Under conditions (1C), (2C), (3C) and (4C), if ˆn is the MLE of 0, then for 1sq, we have Q. ZHU ET AL. Copyright © 2011 SciRes. AM 367000ˆlimsup,1,. .2log logTsnssnnP