**Open Journal of Statistics**

Vol.07 No.01(2017), Article ID:74046,7 pages

10.4236/ojs.2017.71002

The Conditional Poisson Process and the Erlang and Negative Binomial Distributions

Anurag Agarwal, Peter Bajorski, David L. Farnsworth, James E. Marengo, Wei Qian

School of Mathematical Sciences, Rochester Institute of Technology, Rochester, New York, USA

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: December 3, 2016; Accepted: February 6, 2017; Published: February 9, 2017

ABSTRACT

It is a well-known fact that for the hierarchical model of a Poisson random variable $Y$ whose mean has an Erlang distribution, the unconditional distribution of $Y$ is negative binomial. However, the proofs in the literature [1] [2] provide no intuitive understanding as to why this result should be true. The purpose of this manuscript is to give a new proof of this result which provides such an understanding. The memoryless property of the exponential distribution allows one to conclude that the events in two independent Poisson processes may be regarded as Bernoulli trials, and this fact is used to achieve the research purpose. Another goal of this manuscript is to give a proof of this last fact which does not rely on the memoryless property.

**Keywords:**

Conditional Distribution, Hierarchical Model, Mixture Distribution, Poisson Process, Stochastic Process

1. Introduction

There is much current interest in compounding or mixing distributions and their applications. Indeed, the early history of statistics was greatly concerned with the problem [3]. The work by Greenwood and Yule [4] in the more modern era has been followed up with new results and extensive applications [5], including many based on the Poisson distribution because of its centrality in statistical analysis and probability modeling [6] [7] [8] [9]. The present derivations supply new insights into the structure of this type of modeling by revealing how compounded Poisson variables produce a negative binomial distribution.

There is a relatively simple fact about mixture distributions which says that if the mean of a conditional Poisson random variable has an Erlang distribution, then the unconditional distribution of this variable is negative binomial. In particular,

Theorem 1. Let $m\in N$ and $\theta \in {R}^{+}$ . Suppose that the random variable $\Lambda $ has the Erlang distribution with probability density function

$f\left(\lambda \right)=\theta {e}^{-\theta \lambda}\frac{{\left(\theta \lambda \right)}^{m-1}}{\left(m-1\right)!}\quad \text{for }\lambda >0$ (1)

and that, given that $\Lambda =\lambda $ , $N\left(t\right)$ has the Poisson distribution with probability mass function

$p\left(k|\lambda \right)={e}^{-\lambda t}\frac{{\left(\lambda t\right)}^{k}}{k!}.$ (2)

Then, the unconditional distribution of $N\left(t\right)$ is that of the number of failures before the ${m}^{th}$ success in Bernoulli trials with success probability

$p=\frac{\theta}{\theta +t}$ . That is,

$P\left(N\left(t\right)=k\right)=\left(\begin{array}{c}k+m-1\\ k\end{array}\right){\left(\frac{\theta}{\theta +t}\right)}^{m}{\left(\frac{t}{\theta +t}\right)}^{k}.$ (3)

The proof involves an application of the law of total probability which conditions on the value of $\Lambda $ . That is, an integration of the product of (1) and (2) yields (3). This result appears in many different settings ([10], pp. 194-195; [11], p. 191; [2], pp. 332-333; [12], p. 84).
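As a quick numerical sanity check of this integration (not part of any proof here), one can integrate the product of (1) and (2) with a simple midpoint rule and compare the result to the closed form (3). The sketch below uses only the standard library; the parameter values are arbitrary illustrative choices.

```python
import math

def mixture_pmf(k, m, theta, t, upper=40.0, steps=50000):
    """Midpoint-rule integration over lambda of the Poisson pmf (2)
    times the Erlang density (1), per the law of total probability."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        lam = (i + 0.5) * h
        poisson = math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
        erlang = theta * math.exp(-theta * lam) * (theta * lam) ** (m - 1) / math.factorial(m - 1)
        total += poisson * erlang * h
    return total

def negbin_pmf(k, m, theta, t):
    """Closed form (3): probability of k failures before the m-th success."""
    p = theta / (theta + t)
    return math.comb(k + m - 1, k) * p ** m * (1 - p) ** k

m, theta, t = 3, 2.0, 1.5
for k in range(8):
    assert abs(mixture_pmf(k, m, theta, t) - negbin_pmf(k, m, theta, t)) < 1e-4
```

The agreement for each $k$ is to within the quadrature error, as Theorem 1 predicts.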

Could this result have been guessed? This proof provides no intuitive understanding as to why it is true. One purpose of this paper is to give another proof of Theorem 1 which provides such an understanding. In that proof, two independent Poisson processes are carefully chosen, and the memoryless property of their exponentially distributed interarrival times is used to conclude that the events in these processes may be regarded as Bernoulli trials. In Section 3, another proof that these events are Bernoulli trials is given which does not use the memoryless property.

2. Proof of Theorem 1

This section contains an alternative proof of Theorem 1 which can facilitate one’s intuitive understanding of this result. The proof uses properties of the Poisson process and exponential distribution to obtain (3).

Fix $t\ge 0$ , let $\left\{{N}_{1}\left(u\right),u\ge 0\right\}$ be a Poisson process with rate $\theta $ , and let $\left\{{N}_{2}\left(u\right),u\ge 0\right\}$ be an independent Poisson process with rate $t$ . One may think of the events in the first process as “successes” and those in the second as “failures”. Using the well-known facts [2] that the interarrival times in the first (second) process are independent exponentially distributed random variables with rate $\theta $ (rate $t$ ) and the memoryless property of the exponential distribution ([2], pp. 150, 159; [10], p. 102), one may regard these successes and failures as being Bernoulli trials. That is, the trials are independent, and the probability of success is the same on each trial. Intuitively, the process probabilistically restarts itself at any point in time. Specifically, suppose an event has just occurred in one of the two processes. Then, regardless of the amount of time that has elapsed since the last event in the other process, the distribution of the amount of time remaining until the next event occurs in the other process is exponential with the rate for that process. Hence, independently of what has occurred up to that point of time, the probability that the next event is a success is the probability that an exponential random variable with rate $\theta $ is less than an independent exponential random variable with rate $t$ , and this probability is

easily seen to be $\frac{\theta}{\theta +t}$ ([12], p. 287). A proof (which does not make direct

reference to the memoryless property) that the events in the two processes constitute Bernoulli trials is given in Section 3.
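For completeness, the probability cited above follows directly by conditioning on the value of the rate- $\theta $ variable. If $X$ and $Y$ are independent exponential random variables with rates $\theta $ and $t$ , then

$P\left(X<Y\right)={\displaystyle {\int}_{0}^{\infty}}P\left(Y>x\right)\theta {e}^{-\theta x}\text{d}x={\displaystyle {\int}_{0}^{\infty}}{e}^{-tx}\theta {e}^{-\theta x}\text{d}x=\frac{\theta}{\theta +t}.$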

Since the sum of $m$ independent exponential random variables each having rate $\theta $ has the Erlang distribution in (1), one may think of $\Lambda $ as being the time of occurrence of the ${m}^{th}$ event in the process $\left\{{N}_{1}\left(u\right),u\ge 0\right\}$ . That is, $\Lambda $ is the occurrence time of the ${m}^{th}$ success ([2], p. 150). Given that $\Lambda =\lambda $ , the conditional distributions of both $N\left(t\right)$ and ${N}_{2}\left(\lambda \right)$ are the same Poisson distribution with mean $\lambda t$ . By conditioning on $\Lambda $ , the unconditional distribution of $N\left(t\right)$ is the same as the unconditional distribution of ${N}_{2}\left(\Lambda \right)$ . The proof now follows by observing that ${N}_{2}\left(\Lambda \right)$ is the number of failures before the time of the ${m}^{th}$ success.
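The equivalence just described can be illustrated by simulation: generate $\Lambda $ as the ${m}^{th}$ arrival time of a rate- $\theta $ process, count the arrivals of an independent rate- $t$ process that occur before it, and compare the empirical distribution with (3). The following is a rough stdlib-only sketch; the function name and parameter values are arbitrary illustrative choices.

```python
import math
import random

def failures_before_mth_success(m, theta, t, trials=100000, seed=1):
    """Empirical pmf of N2(Lambda): the number of rate-t arrivals that
    occur before the m-th arrival of an independent rate-theta process."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        # Lambda = time of the m-th event in the rate-theta process
        lam = sum(rng.expovariate(theta) for _ in range(m))
        # count rate-t arrivals strictly before time Lambda
        k, clock = 0, rng.expovariate(t)
        while clock < lam:
            k += 1
            clock += rng.expovariate(t)
        counts[k] = counts.get(k, 0) + 1
    return {k: c / trials for k, c in counts.items()}

m, theta, t = 2, 1.0, 1.0
emp = failures_before_mth_success(m, theta, t)
p = theta / (theta + t)
for k in range(5):
    exact = math.comb(k + m - 1, k) * p ** m * (1 - p) ** k
    assert abs(emp.get(k, 0.0) - exact) < 0.01
```

The empirical frequencies match the negative binomial pmf (3) to within Monte Carlo error.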

3. Proof That the Trials Are Bernoulli

This section contains a proof, which does not depend on the memoryless property, that the events in two independent Poisson processes may be regarded as Bernoulli trials.

Theorem 2. Consider two independent Poisson processes with respective rates ${\lambda}_{1}$ and ${\lambda}_{2}$ in which the events that occur in either process are called trials and are referred to as successes or failures according as they come from the first or second process. Then the trials are independent and the probability of

success is $\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}$ on each trial. That is, these trials are Bernoulli trials.

Before proving Theorem 2, the following two lemmas are needed.

Lemma 1. For nonnegative integers $m$ and $n$ ,

${\displaystyle \underset{k=0}{\overset{n}{\sum}}}\left(\begin{array}{c}m+k\\ m\end{array}\right)=\left(\begin{array}{c}m+n+1\\ m+1\end{array}\right).$

Proof. The number of possible choices of $m+1$ distinct numbers from the

set $\left\{1,2,3,\cdots ,m+n+1\right\}$ is $\left(\begin{array}{c}m+n+1\\ m+1\end{array}\right)$ . By conditioning on the value of the

largest number chosen, one can see that this number of choices is also given by

${\displaystyle {\sum}_{k=0}^{n}}\left(\begin{array}{c}m+k\\ m\end{array}\right)$ . □
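Lemma 1 is the familiar hockey-stick identity for binomial coefficients; a quick exhaustive check over small values (an illustrative verification, not part of the proof):

```python
import math

# Verify Lemma 1 for all small nonnegative m and n.
for m in range(8):
    for n in range(8):
        lhs = sum(math.comb(m + k, m) for k in range(n + 1))
        rhs = math.comb(m + n + 1, m + 1)
        assert lhs == rhs
```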

Lemma 2. Using the terminology in Theorem 2, let ${E}_{n,k}$ be the event that there are exactly $k$ successes among the first $n$ trials. Then

$P\left({E}_{n,k}\text{ and success on trial }n+1\right)=\left(\begin{array}{c}n\\ k\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k+1}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}.$

Proof. Let ${T}_{i,j}$ be the time between the $\left(i-1\right)$ st event and the $i$ th event (i.e., the $i$ th interarrival time) in the $j$ th process. Also, let ${S}_{j}$ and ${F}_{j}$ be the respective times until the $j$ th success and $j$ th failure, so that

${S}_{j}={\displaystyle \underset{i=1}{\overset{j}{\sum}}}{T}_{i,1}\quad \text{and}\quad {F}_{j}={\displaystyle \underset{i=1}{\overset{j}{\sum}}}{T}_{i,2}.$

Then,

$\begin{array}{c}P\left({E}_{n,k}\text{ and success on trial }n+1\right)=P\left({F}_{n-k}<{S}_{k+1}<{F}_{n-k+1}\right)\\ =P\left({F}_{n-k}<{S}_{k}+{T}_{k+1,1}<{F}_{n-k}+{T}_{n-k+1,2}\right).\end{array}$

By conditioning on the independent random variables ${S}_{k}$ and ${F}_{n-k}$ , and using the fact that the interarrival times in a Poisson process are independent random variables, it follows that the last probability is

${\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{0}^{\infty}}P\left(v<u+{T}_{k+1,1}<v+{T}_{n-k+1,2}\right){f}_{{S}_{k}}\left(u\right){f}_{{F}_{n-k}}\left(v\right)\text{d}u\text{d}v,$ (4)

where ${f}_{{S}_{k}}$ and ${f}_{{F}_{n-k}}$ are the pdf’s of ${S}_{k}$ and ${F}_{n-k}$ , respectively. Now, use the fact that ${T}_{k+1,1}$ and ${T}_{n-k+1,2}$ are independent and have exponential distributions with respective rates ${\lambda}_{1}$ and ${\lambda}_{2}$ . If $u\le v$ , it follows that

$P\left(v<u+t<v+{T}_{n-k+1,2}\right)=\left\{\begin{array}{cc}0& \text{if }t\le v-u\\ {e}^{-{\lambda}_{2}\left(t+u-v\right)}& \text{if }t>v-u.\end{array}\right.$

By conditioning on the value of ${T}_{k+1,1}$ , it can be concluded that

$\begin{array}{c}P\left(v<u+{T}_{k+1,1}<v+{T}_{n-k+1,2}\right)={\displaystyle {\int}_{v-u}^{\infty}}{e}^{-{\lambda}_{2}\left(t+u-v\right)}{\lambda}_{1}{e}^{-{\lambda}_{1}t}\text{d}t\\ =\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}{e}^{-{\lambda}_{1}\left(v-u\right)}.\end{array}$ (5)

Similarly, if $u>v$ ,

$P\left(v<u+t<v+{T}_{n-k+1,2}\right)={e}^{-{\lambda}_{2}\left(t+u-v\right)},$

and hence

$\begin{array}{c}P\left(v<u+{T}_{k+1,1}<v+{T}_{n-k+1,2}\right)={\displaystyle {\int}_{0}^{\infty}}{e}^{-{\lambda}_{2}\left(t+u-v\right)}{\lambda}_{1}{e}^{-{\lambda}_{1}t}\text{d}t\\ =\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}{e}^{-{\lambda}_{2}\left(u-v\right)}.\end{array}$ (6)
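The closed forms (5) and (6) can be spot-checked by Monte Carlo for fixed $u$ and $v$ . The sketch below is purely illustrative; the rates and the chosen values of $u$ and $v$ are arbitrary.

```python
import math
import random

def mc_prob(u, v, lam1, lam2, trials=300000, seed=3):
    """Monte Carlo estimate of P(v < u + T1 < v + T2) for independent
    exponentials T1 ~ Exp(lam1) and T2 ~ Exp(lam2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t1 = rng.expovariate(lam1)
        t2 = rng.expovariate(lam2)
        if v < u + t1 < v + t2:
            hits += 1
    return hits / trials

lam1, lam2 = 1.0, 2.0
ratio = lam1 / (lam1 + lam2)
# case u <= v: formula (5) gives ratio * exp(-lam1 * (v - u))
u, v = 0.5, 1.0
assert abs(mc_prob(u, v, lam1, lam2) - ratio * math.exp(-lam1 * (v - u))) < 0.01
# case u > v: formula (6) gives ratio * exp(-lam2 * (u - v))
u, v = 1.0, 0.3
assert abs(mc_prob(u, v, lam1, lam2) - ratio * math.exp(-lam2 * (u - v))) < 0.01
```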

Using the fact that ${S}_{k}$ and ${F}_{n-k}$ have Erlang distributions with respective shape parameters $k$ and $n-k$ and respective rate parameters ${\lambda}_{1}$ and ${\lambda}_{2}$ , substituting (5) and (6) into (4) leads to

$\begin{array}{l}P\left({E}_{n,k}\text{ and success on trial }n+1\right)\\ \begin{array}{l}=\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}({\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{u}^{\infty}}{e}^{-{\lambda}_{1}\left(v-u\right)}\frac{{\left({\lambda}_{1}u\right)}^{k-1}}{\left(k-1\right)!}{\lambda}_{1}{e}^{-{\lambda}_{1}u}\frac{{\left({\lambda}_{2}v\right)}^{n-k-1}}{\left(n-k-1\right)!}{\lambda}_{2}{e}^{-{\lambda}_{2}v}\text{d}v\text{d}u\hfill \\ \text{}+{\displaystyle {\int}_{0}^{\infty}}{\displaystyle {\int}_{v}^{\infty}}{e}^{-{\lambda}_{2}\left(u-v\right)}\frac{{\left({\lambda}_{1}u\right)}^{k-1}}{\left(k-1\right)!}{\lambda}_{1}{e}^{-{\lambda}_{1}u}\frac{{\left({\lambda}_{2}v\right)}^{n-k-1}}{\left(n-k-1\right)!}{\lambda}_{2}{e}^{-{\lambda}_{2}v}\text{d}u\text{d}v).\hfill \end{array}\end{array}$ (7)

To evaluate the first double integral in (7), one can use the fact that the waiting time until the ${\left(n-k\right)}^{th}$ event in a Poisson process with rate ${\lambda}_{1}+{\lambda}_{2}$ exceeds $u$ if and only if the number of events in this process that occur by time $u$ is at most $n-k-1$ , and that this number has a Poisson distribution with mean $\left({\lambda}_{1}+{\lambda}_{2}\right)u$ . Hence the first integral in (7) is

$\begin{array}{l}\frac{{\lambda}_{1}^{k}{\lambda}_{2}^{n-k}}{{\left({\lambda}_{1}+{\lambda}_{2}\right)}^{n-k}}{\displaystyle {\int}_{0}^{\infty}}\frac{{u}^{k-1}}{\left(k-1\right)!}{\displaystyle {\int}_{u}^{\infty}}\frac{{\left(\left({\lambda}_{1}+{\lambda}_{2}\right)v\right)}^{n-k-1}}{\left(n-k-1\right)!}\left({\lambda}_{1}+{\lambda}_{2}\right){e}^{-\left({\lambda}_{1}+{\lambda}_{2}\right)v}\text{d}v\text{d}u\\ =\frac{{\lambda}_{1}^{k}{\lambda}_{2}^{n-k}}{{\left({\lambda}_{1}+{\lambda}_{2}\right)}^{n-k}}{\displaystyle {\int}_{0}^{\infty}}\frac{{u}^{k-1}}{\left(k-1\right)!}{\displaystyle \underset{j=0}{\overset{n-k-1}{\sum}}}{e}^{-\left({\lambda}_{1}+{\lambda}_{2}\right)u}\frac{{\left(\left({\lambda}_{1}+{\lambda}_{2}\right)u\right)}^{j}}{j!}\text{d}u\\ =\frac{{\lambda}_{1}^{k}{\lambda}_{2}^{n-k}}{{\left({\lambda}_{1}+{\lambda}_{2}\right)}^{n}}{\displaystyle \underset{j=0}{\overset{n-k-1}{\sum}}}\left(\begin{array}{c}k-1+j\\ k-1\end{array}\right){\displaystyle {\int}_{0}^{\infty}}\frac{{\left(\left({\lambda}_{1}+{\lambda}_{2}\right)u\right)}^{j+k-1}}{\left(j+k-1\right)!}\left({\lambda}_{1}+{\lambda}_{2}\right){e}^{-\left({\lambda}_{1}+{\lambda}_{2}\right)u}\text{d}u\\ ={\displaystyle \underset{j=0}{\overset{n-k-1}{\sum}}}\left(\begin{array}{c}k-1+j\\ k-1\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}\\ =\left(\begin{array}{c}n-1\\ k\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k},\end{array}$ (8)

where the penultimate equation follows from the fact that the integral in the preceding expression is one, and the last equation follows by an application of Lemma 1. By interchanging ${\lambda}_{1}$ with ${\lambda}_{2}$ , $k$ with $n-k$ , and once again applying Lemma 1, it can be concluded in a similar manner that the value of the second double integral in (7) is

$\begin{array}{l}{\displaystyle \underset{j=0}{\overset{k-1}{\sum}}}\left(\begin{array}{c}n-k-1+j\\ n-k-1\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}\\ =\left(\begin{array}{c}n-1\\ k-1\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}.\end{array}$ (9)

From (7), (8) and (9), it now follows that

$\begin{array}{l}P\left({E}_{n,k}\text{ and success on trial }n+1\right)\\ =\left[\left(\begin{array}{c}n-1\\ k\end{array}\right)+\left(\begin{array}{c}n-1\\ k-1\end{array}\right)\right]{\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k+1}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}\\ =\left(\begin{array}{c}n\\ k\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k+1}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{n-k}.\end{array}$

The argument just presented assumes that $0<k<n$ . The case $n=0$ and the cases $k=0$ and $k=n$ are simpler and are left to the reader. The proof of Lemma 2 is complete. □
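Lemma 2 can also be checked empirically by merging the two processes and labeling each event by its source process. The sketch below is illustrative only; the rates and the $(n,k)$ pairs are arbitrary choices.

```python
import math
import random

def joint_prob(n, k, lam1, lam2, trials=100000, seed=11):
    """Estimate P(E_{n,k} and success on trial n+1): simulate the merged
    stream of the two processes and label each event by its source."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t1 = rng.expovariate(lam1)  # next arrival time, process 1
        t2 = rng.expovariate(lam2)  # next arrival time, process 2
        labels = []
        for _ in range(n + 1):
            if t1 < t2:
                labels.append(True)  # success (process 1 event)
                t1 += rng.expovariate(lam1)
            else:
                labels.append(False)  # failure (process 2 event)
                t2 += rng.expovariate(lam2)
        if sum(labels[:n]) == k and labels[n]:
            hits += 1
    return hits / trials

lam1, lam2 = 1.0, 2.0
p = lam1 / (lam1 + lam2)
for n, k in [(0, 0), (3, 1), (4, 2)]:
    exact = math.comb(n, k) * p ** (k + 1) * (1 - p) ** (n - k)
    assert abs(joint_prob(n, k, lam1, lam2) - exact) < 0.01
```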

Proof of Theorem 2. It will now be shown by induction on $m$ that for $m\ge 2$ , the first $m$ trials are independent and that the probability of success on each of

these trials is $\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}$ .

First, suppose $m=2$ . Set $n=k=0$ in Lemma 2 to conclude that

$P\left(\text{success on the 1st trial}\right)=\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}},$ (10)

and hence $P\left(\text{failure on the 1st trial}\right)=1-\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}=\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}$ .

Set $n=1$ and $k=1$ in Lemma 2 to see that

$P\left(\text{success on each of the first two trials}\right)={\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{2},$ (11)

and set $n=1$ and $k=0$ to obtain

$P\left(\text{failure on the 1st trial and success on the 2nd trial}\right)=\frac{{\lambda}_{1}{\lambda}_{2}}{{\left({\lambda}_{1}+{\lambda}_{2}\right)}^{2}}.$

Adding the last two probabilities gives

$P\left(\text{success on the 2nd trial}\right)={\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{2}+\frac{{\lambda}_{1}{\lambda}_{2}}{{\left({\lambda}_{1}+{\lambda}_{2}\right)}^{2}}=\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}.$ (12)

The independence of the first two trials now follows from (10), (11) and (12), and consequently Theorem 2 is true for $m=2$ .

Suppose that Theorem 2 is true for some $m\ge 2$ . The first $m$ trials are therefore Bernoulli trials, so that the number of successes has a binomial distribution. Specifically,

$P\left({E}_{m,k}\right)=\left(\begin{array}{c}m\\ k\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{m-k}.$ (13)

Using Lemma 2 and conditioning on the number of successes in the first $m$ trials, one sees that

$\begin{array}{c}P\left(\text{success on trial }m+1\right)={\displaystyle \underset{k=0}{\overset{m}{\sum}}}P\left({E}_{m,k}\text{ and success on trial }m+1\right)\\ ={\displaystyle \underset{k=0}{\overset{m}{\sum}}}\left(\begin{array}{c}m\\ k\end{array}\right){\left(\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{k+1}{\left(\frac{{\lambda}_{2}}{{\lambda}_{1}+{\lambda}_{2}}\right)}^{m-k}\\ =\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}.\end{array}$ (14)

It follows that on each of the first $m+1$ trials, the probability of success is

$\frac{{\lambda}_{1}}{{\lambda}_{1}+{\lambda}_{2}}$ . Furthermore, it follows from Lemma 2, (13), and (14) that for

$k=0,1,\cdots ,m$

$P\left({E}_{m,k}\text{ and success on trial }m+1\right)=P\left({E}_{m,k}\right)P\left(\text{success on trial }m+1\right),$

from which one may conclude that the first $m+1$ trials are independent. The proof of Theorem 2 is now complete.

□
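Theorem 2 can be illustrated empirically in full strength: merge the two processes, record the source of each of the first $m$ events, and check that every success/failure pattern has probability ${p}^{s}{\left(1-p\right)}^{m-s}$ , where $s$ is the number of successes in the pattern. That product form is exactly the Bernoulli-trials property. The rates and $m$ below are arbitrary illustrative choices.

```python
import random

def success_pattern_probs(m, lam1, lam2, trials=150000, seed=5):
    """Empirical probability of each success ('S') / failure ('F')
    pattern among the first m events of the merged process."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        t1 = rng.expovariate(lam1)
        t2 = rng.expovariate(lam2)
        pattern = []
        for _ in range(m):
            if t1 < t2:
                pattern.append('S')
                t1 += rng.expovariate(lam1)
            else:
                pattern.append('F')
                t2 += rng.expovariate(lam2)
        key = ''.join(pattern)
        counts[key] = counts.get(key, 0) + 1
    return {key: c / trials for key, c in counts.items()}

lam1, lam2 = 1.0, 2.0
p = lam1 / (lam1 + lam2)
probs = success_pattern_probs(3, lam1, lam2)
for pattern, prob in probs.items():
    s = pattern.count('S')
    expected = p ** s * (1 - p) ** (3 - s)
    assert abs(prob - expected) < 0.01
```

Each of the ${2}^{m}$ patterns matches the product form to within Monte Carlo error, consistent with independence and common success probability.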

4. Conclusion

In this paper, a new proof has been provided for the fact that, in the hierarchical model of a Poisson random variable $Y$ whose mean has an Erlang distribution, the unconditional distribution of $Y$ is negative binomial. A new proof that the events in two independent Poisson processes may be regarded as Bernoulli trials has also been provided. The distinguishing feature of this proof is that it does not make use of the memoryless property of the exponential distribution.

Cite this paper

Agarwal, A., Bajorski, P., Farnsworth, D.L., Marengo, J.E. and Qian, W. (2017) The Conditional Poisson Process and the Erlang and Negative Binomial Distributions. Open Journal of Statistics, 7, 16-22. https://doi.org/10.4236/ojs.2017.71002

References

- 1. Karlin, S. and Taylor, H.M. (1975) A First Course in Stochastic Processes. 2nd Edition, Academic Press, San Diego, CA.
- 2. Ross, S.M. (2014) Introduction to Probability Models. 11th Edition, Academic Press, San Diego, CA.
- 3. Stigler, S.M. (1986) The History of Statistics: The Measurement of Uncertainty before 1900. Belknap Press of Harvard University Press, Cambridge, MA.
- 4. Greenwood, M. and Yule, G.U. (1920) An Inquiry into the Nature of Frequency Distributions of Multiple Happenings, with Particular Reference to the Occurrence of Multiple Attacks of Disease or Repeated Accidents. Journal of the Royal Statistical Society A, 83, 255-279. https://doi.org/10.2307/2341080
- 5. Sundt, B. and Vernic, R. (2009) Recursions for Convolutions and Compound Distributions with Insurance Applications. Springer-Verlag, Berlin Heidelberg.
- 6. Albrecht, P. (1982) On Some Statistical Methods Connected with the Mixed Poisson Process. Scandinavian Actuarial Journal, No. 1, 1-14. https://doi.org/10.1080/03461238.1982.10405427
- 7. Antzoulakos, D. and Chadjiconstantinidis, S. (2004) On Mixed and Compound Mixed Poisson Distributions. Scandinavian Actuarial Journal, No. 3, 161-188. https://doi.org/10.1080/03461230110106525
- 8. Nadarajah, S. and Kotz, S. (2006) Compound Mixed Poisson Distributions I. Scandinavian Actuarial Journal, No. 3, 141-162. https://doi.org/10.1080/03461230600783384
- 9. Nadarajah, S. and Kotz, S. (2006) Compound Mixed Poisson Distributions II. Scandinavian Actuarial Journal, No. 3, 163-181. https://doi.org/10.1080/03461230600715253
- 10. Casella, G. and Berger, R.L. (1990) Statistical Inference. Wadsworth & Brooks/Cole, Pacific Grove, CA.
- 11. Hogg, R., McKean, J.W. and Craig, A.T. (2005) Introduction to Mathematical Statistics. 6th Edition, Pearson, Upper Saddle River, NJ.
- 12. Taylor, H.M. and Karlin, S. (1998) An Introduction to Stochastic Modeling. 3rd Edition, Academic Press, San Diego, CA.