
This article presents different algorithms for estimating the parameters of the Weibull Geometric (WG) distribution under the progressive Type II censoring sampling plan, with particular attention to joint confidence intervals for the parameters. The approximate joint confidence intervals for the parameters, approximate confidence regions, and percentile bootstrap confidence intervals are discussed, and several Markov chain Monte Carlo (MCMC) techniques are also presented. In terms of mean square errors (MSEs) and credible interval lengths, the Bayes estimators based on non-informative priors perform better than the maximum likelihood estimates (MLEs) and the bootstrap estimates. Across the compared models, the MSEs and average confidence interval lengths of the MLEs and Bayes estimators are smaller for the censored models.

Statistical distributions occupy a very important place in computer science because of the great number of their particular applications. When structured masks based on the Weibull distribution are applied to images, they yield good results for the diagnosis of early Alzheimer's disease [

Weibull distributions are significant to statisticians because of their large number of particular features, and to practitioners because of their ability to fit data from many fields, ranging from real-life data to observations in economics, weather, acceptance sampling, hydrology, biology, etc. [

The Weibull-Geometric (WG), Exponential-Poisson (EP), Weibull-Power-Series (WPS), Complementary-Exponential-Geometric (CEG), Exponential-Geometric (EG), Generalized-Exponential-Power-Series (GEPS), Exponential-Weibull-Poisson (EWP), and Generalized-Inverse-Weibull-Poisson (GIWP) distributions were introduced and presented by Adamidis and Loukas [

Barreto-Souza [

The paper is organized as follows: the probability density and cumulative distribution functions of the WG distribution are presented in Section 2. Section 3 provides Markov chain Monte Carlo algorithms. The maximum likelihood estimates of the parameters of the WG distribution, the point and interval estimates of the parameters, as well as the approximate joint confidence region, are studied in Section 4. The parametric bootstrap confidence intervals of the parameters are discussed in Section 5. Bayes estimation of the model parameters and the Gibbs sampling algorithm are provided in Section 6. Data analysis and Monte Carlo simulation results are presented in Section 7. Section 8 concludes the paper.

Assume there are n independent groups, each containing k items, placed on a lifetime test. Consider the progressive censoring scheme R = { R₁, R₂, ⋯, Rₘ }: when the first failure X₁;ₘ,ₙ,ₖᴿ occurs, R₁ groups are randomly selected and removed from the test. Similarly, when the second failure X₂;ₘ,ₙ,ₖᴿ occurs, R₂ of the remaining groups, together with the group in which the second failure was observed, are randomly removed, and so on. Finally, when the m-th failure Xₘ;ₘ,ₙ,ₖᴿ occurs, the remaining Rₘ groups are randomly removed from the test. The observations x₁;ₘ,ₙ,ₖᴿ < x₂;ₘ,ₙ,ₖᴿ < ⋯ < xₘ;ₘ,ₙ,ₖᴿ are known as progressively first-failure censored order statistics, where m is the number of first failures (1 < m ≤ n). In terms of the distribution function F(x) and probability density function f(x), the joint probability density function of X₁;ₘ,ₙ,ₖᴿ, X₂;ₘ,ₙ,ₖᴿ, ⋯, Xₘ;ₘ,ₙ,ₖᴿ, the failure times of the k × n items from a continuous population, is given by (see Balakrishnan and Sandhu [

$$f_{1,2,\cdots,m}\left(x_{1;m,n,k}^{R}, x_{2;m,n,k}^{R}, \cdots, x_{m;m,n,k}^{R}\right) = \kappa\, k^{m} \prod_{i=1}^{m} f\left(x_{i;m,n,k}^{R}\right)\left[1-F\left(x_{i;m,n,k}^{R}\right)\right]^{k(R_i+1)-1},$$
$$0 < x_{1;m,n,k}^{R} < x_{2;m,n,k}^{R} < \cdots < x_{m;m,n,k}^{R} < \infty, \qquad (1)$$

and

$$\kappa = n(n-R_1-1)(n-R_1-R_2-2)\cdots(n-R_1-R_2-\cdots-R_{m-1}-m+1). \qquad (2)$$

There are special cases of the progressive first-failure censoring scheme of Equation (1) as follows:

1) When R = { 0 , 0 , ⋯ , 0 } , the first-failure censoring scheme is obtained.

2) When k = 1 , the progressive Type II censored order statistics are obtained.

3) When R = { 0 , 0 , ⋯ , 0 } and k = 1 , the complete sample case is obtained.

In general, the progressively first-failure censored order statistics X₁;ₘ,ₙ,ₖᴿ, X₂;ₘ,ₙ,ₖᴿ, ⋯, Xₘ;ₘ,ₙ,ₖᴿ can be viewed as progressive Type II censored order statistics from a population with distribution function 1 − (1 − F(x))ᵏ. Hence, results for progressive Type II censoring can easily be extended to progressive first-failure censored order statistics. The progressive first-failure censoring plan reduces testing time: n × k items are placed on test, but only m failures are observed.
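This representation suggests a simple way to simulate such samples: generate progressive Type II uniform order statistics with the Balakrishnan-Sandhu transformation and push them through the quantile function of 1 − (1 − F(x))ᵏ. The following is a minimal sketch (function names and parameter values are illustrative; the quantile function inverts the WG cdf):

```python
import numpy as np

def wg_quantile(q, alpha, beta, p):
    # Inverts the WG cdf F(x) = (1 - e^{-(beta x)^alpha}) / (1 - p e^{-(beta x)^alpha})
    u = -np.log((1.0 - q) / (1.0 - p * q))
    return u ** (1.0 / alpha) / beta

def progressive_ff_sample(R, k, alpha, beta, p, rng):
    """Progressive first-failure censored order statistics from WG(alpha, beta, p).

    Generates progressive Type II uniform order statistics for scheme R with
    the Balakrishnan-Sandhu transformation, then maps them through the
    quantile function of G(x) = 1 - (1 - F(x))**k.
    """
    m = len(R)
    W = rng.uniform(size=m)
    # V_i = W_i^{1 / (i + R_m + ... + R_{m-i+1})}
    V = np.array([W[i] ** (1.0 / ((i + 1) + sum(R[m - (i + 1):]))) for i in range(m)])
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}: progressive Type II uniform order stats
    U = 1.0 - np.cumprod(V[::-1])
    # G^{-1}(u) = F^{-1}(1 - (1 - u)^{1/k})
    return wg_quantile(1.0 - (1.0 - U) ** (1.0 / k), alpha, beta, p)

rng = np.random.default_rng(42)
x = progressive_ff_sample([2, 0, 0, 1, 2], k=3, alpha=1.5, beta=0.9, p=0.3, rng=rng)
```

Setting k = 1 and R = (0, …, 0) reproduces the complete-sample special case noted above.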

The probability density function (pdf) of the WG distribution is represented by the following equation:

$$f(x) = \frac{\alpha\beta(1-p)(\beta x)^{\alpha-1} e^{-(\beta x)^{\alpha}}}{\left(1-p\, e^{-(\beta x)^{\alpha}}\right)^{2}}, \quad x > 0, \qquad (3)$$

and the cumulative distribution function (cdf) of the WG distribution is shown by:

$$F(x) = \frac{1-e^{-(\beta x)^{\alpha}}}{1-p\, e^{-(\beta x)^{\alpha}}}, \quad x \ge 0, \qquad (4)$$

where α > 0 , β > 0 , and p ( 0 < p < 1 ) are parameters. The parameters α and β are the shape and scale parameters, respectively, while p is the mixing parameter.
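Equations (3) and (4) are straightforward to implement, and the cdf inverts in closed form, which yields an inverse-transform sampler. A minimal sketch (function names are illustrative):

```python
import numpy as np

def wg_pdf(x, alpha, beta, p):
    # Equation (3)
    e = np.exp(-(beta * x) ** alpha)
    return alpha * beta * (1.0 - p) * (beta * x) ** (alpha - 1.0) * e / (1.0 - p * e) ** 2

def wg_cdf(x, alpha, beta, p):
    # Equation (4)
    e = np.exp(-(beta * x) ** alpha)
    return (1.0 - e) / (1.0 - p * e)

def wg_rvs(size, alpha, beta, p, rng):
    # Inverse-transform sampling: solving F(x) = q gives a closed-form quantile
    q = rng.uniform(size=size)
    u = -np.log((1.0 - q) / (1.0 - p * q))
    return u ** (1.0 / alpha) / beta
```

Setting p = 0 recovers the ordinary Weibull cdf, a quick sanity check consistent with the special cases listed below.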

The WG distribution in Equation (3) produces some special models as follows:

1) Weibull distribution, when p = 0 .

2) When p → 1 , the WG distribution tends to a distribution degenerate at zero.

Hence, the parameter p can be interpreted as a concentration parameter.

The EG distribution, a two-parameter distribution with decreasing failure rate, was introduced by Adamidis and Loukas [

$$u + p^{-1} e^{u}\left(u - 1 + \frac{1}{\alpha}\right) = -1 + \frac{1}{\alpha}. \qquad (5)$$

The WG density can also be unimodal when p < − 1 . For instance, when p < − 1 and α = 1 , the EEG distribution is unimodal. The hazard and survival functions of X are:

$$h(t) = \frac{\alpha\beta(\beta t)^{\alpha-1}}{1-p\, e^{-(\beta t)^{\alpha}}} \qquad (6)$$

and

$$S(t) = \frac{(1-p)\, e^{-(\beta t)^{\alpha}}}{1-p\, e^{-(\beta t)^{\alpha}}}, \qquad (7)$$
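The survival and hazard functions follow from (3) and (4) via S(t) = 1 − F(t) and h(t) = f(t)/S(t), and the identity h = f/S can be checked numerically. A small sketch with illustrative parameter values:

```python
import numpy as np

alpha, beta, p = 1.5, 0.9, 0.3   # illustrative values

def wg_pdf(t):
    e = np.exp(-(beta * t) ** alpha)
    return alpha * beta * (1 - p) * (beta * t) ** (alpha - 1) * e / (1 - p * e) ** 2

def wg_surv(t):
    # S(t) = (1 - p) e^{-(beta t)^alpha} / (1 - p e^{-(beta t)^alpha})
    e = np.exp(-(beta * t) ** alpha)
    return (1 - p) * e / (1 - p * e)

def wg_hazard(t):
    # h(t) = alpha beta (beta t)^{alpha - 1} / (1 - p e^{-(beta t)^alpha})
    e = np.exp(-(beta * t) ** alpha)
    return alpha * beta * (beta * t) ** (alpha - 1) / (1 - p * e)

t = np.linspace(0.1, 5.0, 50)
assert np.allclose(wg_hazard(t), wg_pdf(t) / wg_surv(t))   # h = f / S
```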

The Markov chain Monte Carlo (MCMC) technique has spread widely for Bayesian computation in complex statistical modeling. In general, it provides a useful tool for realistic statistical modeling (Gilks et al. [

A Markov chain is a stochastic process in which future states are independent of past states given the current state.

Monte Carlo integration is a simulation technique; it is used to approximate integrals rather than to evaluate them analytically. In this way, quantities of interest for a distribution can be computed from simulated draws from that distribution. Bayesian inference requires integration over possibly high-dimensional probability distributions to make predictions or to draw inferences about model parameters. In MCMC techniques, Monte Carlo integration is combined with Markov chains: samples are drawn from the desired distribution, and sample averages are then used to approximate expectations (see Geman [

The Metropolis-Hastings (MH) procedure was introduced by Metropolis et al. [

When the proposal distribution is symmetric, i.e. ℏ ( τ | η ) = ℏ ( η | τ ) for all possible η and τ , then in particular ℏ ( τ ⁽ᵗ⁻¹⁾ | τ* ) = ℏ ( τ* | τ ⁽ᵗ⁻¹⁾ ) , so that the acceptance probability is given by

$$\chi\left(\tau^{(t-1)}, \tau^{*}\right) = \min\left[1, \frac{f(\tau^{*}|x)}{f(\tau^{(t-1)}|x)}\right]. \qquad (9)$$
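The symmetric-proposal case is particularly easy to implement. The following is a minimal random-walk Metropolis sketch of the acceptance rule above (the target density and tuning values are illustrative, not from the paper):

```python
import numpy as np

def metropolis(log_target, init, step, n_iter, rng):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal.

    With a symmetric proposal, the acceptance probability reduces to
    min(1, f(tau*) / f(tau^(t-1))), computed here on the log scale.
    """
    chain = np.empty(n_iter)
    tau, lp = init, log_target(init)
    for t in range(n_iter):
        prop = tau + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept w.p. min(1, ratio)
            tau, lp = prop, lp_prop
        chain[t] = tau
    return chain

# Illustrative target: a standard normal log-density (up to an additive constant)
rng = np.random.default_rng(0)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000, rng)
```

After a burn-in period, the chain's empirical moments should approximate those of the target distribution.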

The Gibbs sampler (GS) is a straightforward special case of MCMC algorithms. It was introduced by Geman [

The three unknown parameters of the WG distribution are studied through various estimation algorithms based on progressive Type-II censoring. MCMC procedures are used with the Bayesian technique to sample from the posterior distributions.

This section determines the maximum likelihood estimates (MLEs) of the WG distribution parameters. Assume that Xᵢ = Xᵢ;ₘ,ₙᴿ , i = 1 , 2 , ⋯ , m are the progressive first-failure censored order statistics from a WG distribution with censoring plan R. Using Equations (1)-(3), the likelihood function is given by:

$$L(\alpha,\beta,p|\underline{x}) = \kappa\,\alpha^{m}\beta^{\alpha m}(1-p)^{n} \exp\left((\alpha-1)\sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(\beta x_i)^{\alpha}(R_i+1)\right) \prod_{i=1}^{m}\left[1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right]^{-(R_i+2)}, \qquad (10)$$

where κ is given in (2). The logarithm of the likelihood function is:

$$\ell(\alpha,\beta,p|\underline{x}) = m\log\alpha + m\alpha\log\beta + n\log(1-p) + (\alpha-1)\sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(\beta x_i)^{\alpha}(R_i+1) - \sum_{i=1}^{m}(R_i+2)\log\left(1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right). \qquad (11)$$

Computing the derivatives ∂ℓ/∂α , ∂ℓ/∂β , and ∂ℓ/∂p and setting each equal to zero, the likelihood equations are obtained as follows:

$$\frac{\partial \ell(\alpha,\beta,p|\underline{x})}{\partial\alpha} = \frac{m}{\alpha} + m\log\beta + \sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(R_i+1)(\beta x_i)^{\alpha}\log(\beta x_i) - p\sum_{i=1}^{m}\frac{(R_i+2)(\beta x_i)^{\alpha}\log(\beta x_i)\exp\left(-(\beta x_i)^{\alpha}\right)}{1-p\exp\left(-(\beta x_i)^{\alpha}\right)} = 0, \qquad (12)$$

$$\frac{\partial \ell(\alpha,\beta,p|\underline{x})}{\partial\beta} = \frac{m\alpha}{\beta} - \alpha\sum_{i=1}^{m}(R_i+1)x_i(\beta x_i)^{\alpha-1} - p\alpha\sum_{i=1}^{m}\frac{(R_i+2)x_i(\beta x_i)^{\alpha-1}\exp\left(-(\beta x_i)^{\alpha}\right)}{1-p\exp\left(-(\beta x_i)^{\alpha}\right)} = 0, \qquad (13)$$

and

$$\frac{\partial \ell(\alpha,\beta,p|\underline{x})}{\partial p} = \frac{-n}{1-p} + \sum_{i=1}^{m}\frac{(R_i+2)\exp\left(-(\beta x_i)^{\alpha}\right)}{1-p\exp\left(-(\beta x_i)^{\alpha}\right)} = 0. \qquad (14)$$

Solving Equations (12)-(14) analytically for α̂ , β̂ , and p̂ is very difficult. Hence, a numerical technique such as Newton's method may be used.
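As a sketch of this numerical step, the log-likelihood (11) (for k = 1) can be maximized with a general-purpose optimizer. The example below assumes SciPy is available and uses bounded quasi-Newton (L-BFGS-B) rather than a hand-rolled Newton iteration; the data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x, R):
    # Negative log-likelihood from Equation (11) (k = 1, constants dropped)
    a, b, p = theta
    Rv = np.asarray(R)
    m, n = len(x), len(x) + Rv.sum()
    u = (b * x) ** a
    return -(m * np.log(a) + m * a * np.log(b) + n * np.log(1 - p)
             + (a - 1) * np.sum(np.log(x))
             - np.sum((Rv + 1) * u)
             - np.sum((Rv + 2) * np.log(1 - p * np.exp(-u))))

# Illustrative data: a complete sample (R = 0), which the WG fits with small p
rng = np.random.default_rng(5)
x = np.sort(rng.weibull(1.5, size=30)) / 0.9
R = [0] * 30
start = np.array([1.0, 1.0, 0.5])
res = minimize(neg_loglik, start, args=(x, R), method="L-BFGS-B",
               bounds=[(1e-4, None), (1e-4, None), (1e-4, 1 - 1e-4)])
alpha_hat, beta_hat, p_hat = res.x
```

The bounds keep each parameter in its admissible range ( α, β > 0 and 0 < p < 1 ), which avoids evaluating the log-likelihood at invalid points during the search.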

From the log-likelihood function in (11), the Fisher information matrix I ( α , β , p ) is obtained by taking the expectation of the negative second partial derivatives of (11). Under some mild regularity conditions, ( α̂ , β̂ , p̂ ) is approximately normally distributed with mean ( α , β , p ) and covariance matrix I⁻¹ ( α , β , p ) . In practice, I⁻¹ ( α , β , p ) is commonly estimated by I⁻¹ ( α̂ , β̂ , p̂ ) . It is simpler, and still valid, to use the approximation

$$(\hat\alpha, \hat\beta, \hat p) \to N\left((\alpha,\beta,p),\; I_0^{-1}(\hat\alpha,\hat\beta,\hat p)\right), \qquad (15)$$

where I₀ ( α , β , p ) is the observed information matrix:

$$I_0^{-1}(\hat\alpha,\hat\beta,\hat p) = \begin{bmatrix} -\frac{\partial^2 \ell}{\partial\alpha^2} & -\frac{\partial^2 \ell}{\partial\alpha\,\partial\beta} & -\frac{\partial^2 \ell}{\partial\alpha\,\partial p} \\ -\frac{\partial^2 \ell}{\partial\beta\,\partial\alpha} & -\frac{\partial^2 \ell}{\partial\beta^2} & -\frac{\partial^2 \ell}{\partial\beta\,\partial p} \\ -\frac{\partial^2 \ell}{\partial p\,\partial\alpha} & -\frac{\partial^2 \ell}{\partial p\,\partial\beta} & -\frac{\partial^2 \ell}{\partial p^2} \end{bmatrix}_{(\hat\alpha,\hat\beta,\hat p)}^{-1} = \begin{bmatrix} \operatorname{var}(\hat\alpha) & \operatorname{cov}(\hat\alpha,\hat\beta) & \operatorname{cov}(\hat\alpha,\hat p) \\ \operatorname{cov}(\hat\beta,\hat\alpha) & \operatorname{var}(\hat\beta) & \operatorname{cov}(\hat\beta,\hat p) \\ \operatorname{cov}(\hat p,\hat\alpha) & \operatorname{cov}(\hat p,\hat\beta) & \operatorname{var}(\hat p) \end{bmatrix}. \qquad (16)$$

Approximate confidence intervals for α , β , and p can be calculated by treating ( α̂ , β̂ , p̂ ) as approximately normally distributed with mean ( α , β , p ) and covariance matrix I₀⁻¹ ( α̂ , β̂ , p̂ ) . Hence, the approximate 100 ( 1 − α ) % confidence intervals for α , β , and p are

$$\hat\alpha \pm z_{\alpha/2}\sqrt{v_{11}}, \quad \hat\beta \pm z_{\alpha/2}\sqrt{v_{22}}, \quad \text{and} \quad \hat p \pm z_{\alpha/2}\sqrt{v_{33}}, \qquad (17)$$

respectively, where v₁₁ , v₂₂ , and v₃₃ are the diagonal elements of the covariance matrix I₀⁻¹ ( α̂ , β̂ , p̂ ) , and z_{α/2} is the percentile of the standard normal distribution with right-tail probability α/2 .
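The mechanics of (16)-(17), inverting the observed information matrix and reading the variances off its diagonal, can be sketched with a finite-difference Hessian. For easy checking, the example below uses a one-parameter exponential model, whose observed information n/λ̂² is known in closed form (all names are illustrative):

```python
import numpy as np

def numerical_hessian(f, theta, eps=1e-4):
    """Central finite-difference Hessian of a scalar function f at theta."""
    theta = np.asarray(theta, dtype=float)
    d = len(theta)
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            def shifted(di, dj):
                s = theta.copy()
                s[i] += di * eps
                s[j] += dj * eps
                return f(s)
            H[i, j] = (shifted(1, 1) - shifted(1, -1)
                       - shifted(-1, 1) + shifted(-1, -1)) / (4.0 * eps ** 2)
    return H

# One-parameter check: exponential(lam) sample, observed information n / lam_hat^2
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)
lam_hat = 1.0 / x.mean()                                  # MLE of the rate
nll = lambda th: -(len(x) * np.log(th[0]) - th[0] * x.sum())
cov = np.linalg.inv(numerical_hessian(nll, [lam_hat]))    # as in Equation (16)
half = 1.96 * np.sqrt(cov[0, 0])
ci = (lam_hat - half, lam_hat + half)                     # as in Equation (17)
```

For the WG model, the same routine would be applied to the three-parameter negative log-likelihood evaluated at ( α̂ , β̂ , p̂ ).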

The bootstrap technique is a resampling method used in statistical inference. It is usually employed to construct confidence regions, and it can also be applied to estimate the bias and variance of an estimator or to calibrate hypothesis tests. For further discussion of the parametric and nonparametric bootstrap techniques, see Davison and Hinkley [

Percentile bootstrap confidence interval: Assume that G ( y ) = P ( φ̂ⱼ* ≤ y ) is the cumulative distribution function of φ̂ⱼ* . Define φ̂ⱼ,boot* = G⁻¹ ( y ) for a given y. The approximate 100 ( 1 − γ ) % bootstrap confidence interval for φⱼ may then be obtained as:

$$\left[\hat\varphi_{j,\mathrm{boot}}^{*}\left(\tfrac{\gamma}{2}\right),\ \hat\varphi_{j,\mathrm{boot}}^{*}\left(1-\tfrac{\gamma}{2}\right)\right]. \qquad (18)$$

Bootstrap-t confidence interval: First, find the order statistics δⱼ*[1] < δⱼ*[2] < ⋯ < δⱼ*[N] , where

$$\delta_{j}^{*}[i] = \frac{\hat\varphi_{j}^{*}[i]-\hat\varphi_{j}}{\sqrt{\operatorname{var}\left(\hat\varphi_{j}^{*}[i]\right)}}, \quad i=1,2,\cdots,N,\ j=1,2,3, \qquad (19)$$

and φ ^ 1 = α ^ , φ ^ 2 = β ^ , φ ^ 3 = p ^ .

Assume that H ( y ) = P ( δⱼ* < y ) is the cumulative distribution function of δⱼ* . For a given y, the bootstrap-t confidence bound is

$$\hat\varphi_{j,\mathrm{boot}\text{-}t} = \hat\varphi_{j} + \sqrt{\operatorname{var}(\hat\varphi_{j})}\, H^{-1}(y). \qquad (20)$$
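In practice, (18) amounts to taking empirical quantiles of the bootstrap replicates, and (19)-(20) to taking quantiles of the studentized replicates. A minimal sketch for the sample mean (the data and replication count are illustrative):

```python
import numpy as np

def percentile_ci(boot, gamma):
    # Equation (18): empirical gamma/2 and 1 - gamma/2 quantiles of the replicates
    return np.quantile(boot, [gamma / 2.0, 1.0 - gamma / 2.0])

def boot_t_ci(est, se, deltas, gamma):
    # Equations (19)-(20): quantiles of the studentized replicates, rescaled
    lo, hi = np.quantile(deltas, [gamma / 2.0, 1.0 - gamma / 2.0])
    return est + se * lo, est + se * hi

rng = np.random.default_rng(7)
x = rng.normal(loc=5.0, scale=2.0, size=100)
n, N = len(x), 2000
boot = np.empty(N)
deltas = np.empty(N)
for bi in range(N):
    xb = rng.choice(x, size=n, replace=True)          # resample with replacement
    boot[bi] = xb.mean()
    deltas[bi] = (xb.mean() - x.mean()) / (xb.std(ddof=1) / np.sqrt(n))

se = x.std(ddof=1) / np.sqrt(n)
ci_pct = percentile_ci(boot, 0.05)
ci_t = boot_t_ci(x.mean(), se, deltas, 0.05)
```

For the WG parameters, `boot` would hold the bootstrap replicates of α̂ , β̂ , or p̂ instead of the sample mean.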

Considering that the parameters α , β , and p are all unknown, the joint prior density may be taken as the product of gamma densities for α and β and a uniform prior for p, where

$$\pi_1(\alpha) = \frac{b^{a}}{\Gamma(a)}\alpha^{a-1}\exp(-b\alpha), \quad \alpha > 0,\ (a, b > 0), \qquad (21)$$

$$\pi_2(\beta) = \frac{d^{c}}{\Gamma(c)}\beta^{c-1}\exp(-d\beta), \quad \beta > 0,\ (c, d > 0), \qquad (22)$$

and

$$\pi_3(p) = 1, \quad 0 \le p \le 1. \qquad (23)$$

Multiplying π₁ ( α ) , π₂ ( β ) , and π₃ ( p ) , the joint prior density of α , β , and p is

$$\pi(\alpha,\beta,p) = \frac{b^{a}d^{c}}{\Gamma(a)\Gamma(c)}\alpha^{a-1}\beta^{c-1}\exp(-(b\alpha+d\beta)), \quad (\alpha,\beta>0 \text{ and } 0 \le p \le 1). \qquad (24)$$

Based on the joint prior distribution of α , β , and p, the joint posterior density function of α , β , and p given the data, denoted by π* ( α , β , p | x̲ ) , can be expressed as follows:

$$\pi^{*}(\alpha,\beta,p|\underline{x}) = \frac{L(\alpha,\beta,p|\underline{x})\,\pi(\alpha,\beta,p)}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{1} L(\alpha,\beta,p|\underline{x})\,\pi(\alpha,\beta,p)\, dp\, d\beta\, d\alpha}. \qquad (25)$$

Hence, under the squared error loss (SEL) function, the Bayes estimate of any function φ ( α , β , p ) of α , β , and p can be expressed as

$$\hat\varphi(\alpha,\beta,p) = E_{\alpha,\beta,p|\underline{x}}\left(\varphi(\alpha,\beta,p)\right) = \frac{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{1} \varphi(\alpha,\beta,p)\, L(\alpha,\beta,p|\underline{x})\,\pi(\alpha,\beta,p)\, dp\, d\beta\, d\alpha}{\int_0^{\infty}\!\int_0^{\infty}\!\int_0^{1} L(\alpha,\beta,p|\underline{x})\,\pi(\alpha,\beta,p)\, dp\, d\beta\, d\alpha}. \qquad (26)$$

In general, the ratio of the two integrals in (26) cannot be obtained in closed form. In this situation, the MCMC procedure is used to generate samples from the posterior distribution, and the Bayes estimator of φ ( α , β , p ) under the SEL function is then computed from these samples. A wide variety of MCMC techniques is available, and it can be difficult to choose among them. An important class of MCMC techniques comprises Gibbs samplers and the more general Metropolis-within-Gibbs samplers.

The MCMC procedure has an advantage over the MLE procedure in that we can always obtain a sensible interval estimate of the parameters by constructing probability intervals from the empirical posterior distribution. This is sometimes not available with MLE. The MCMC samples can also be used to summarize completely the posterior uncertainty about the parameters α , β , and p , through a kernel estimate of the posterior distribution.

The joint posterior density function of α , β , and p may be written as

$$\pi^{*}(\alpha,\beta,p|\underline{x}) \propto \alpha^{m+a-1}\beta^{\alpha m+c-1}(1-p)^{n} \exp\Big\{-b\alpha-d\beta+\alpha\sum_{i=1}^{m}\log x_i - \sum_{i=1}^{m}(R_i+2)\log\left[1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right] - \sum_{i=1}^{m}(R_i+1)(\beta x_i)^{\alpha}\Big\}. \qquad (27)$$

The conditional posterior PDFs of α , β , and p are given by

$$\pi_1^{*}(\alpha|\beta,p,\underline{x}) \propto \alpha^{m+a-1}\exp\Big\{\alpha\Big(m\log\beta - b + \sum_{i=1}^{m}\log x_i\Big) - \sum_{i=1}^{m}(R_i+2)\log\left[1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right] - \sum_{i=1}^{m}(R_i+1)(\beta x_i)^{\alpha}\Big\}, \qquad (28)$$

$$\pi_2^{*}(\beta|\alpha,p,\underline{x}) \propto \beta^{\alpha m+c-1}\exp\Big\{-d\beta - \sum_{i=1}^{m}(R_i+2)\log\left[1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right] - \sum_{i=1}^{m}(R_i+1)(\beta x_i)^{\alpha}\Big\}, \qquad (29)$$

and

$$\pi_3^{*}(p|\alpha,\beta,\underline{x}) \propto (1-p)^{n}\exp\Big\{-\sum_{i=1}^{m}(R_i+2)\log\left[1-p\exp\left(-(\beta x_i)^{\alpha}\right)\right]\Big\}. \qquad (30)$$
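Since the conditionals (28)-(30) are not standard distributions, each parameter can be updated with a Metropolis step inside a Gibbs cycle. A minimal MH-within-Gibbs sketch for the posterior kernel (27) follows (step sizes, starting values, and data are illustrative, not from the paper):

```python
import numpy as np

def log_post(theta, x, R, a=1e-4, b=1e-4, c=1e-4, d=1e-4):
    """Log of the joint posterior kernel in Equation (27)."""
    al, be, p = theta
    if al <= 0 or be <= 0 or not (0 < p < 1):
        return -np.inf                      # outside the parameter space
    Rv = np.asarray(R)
    m, n = len(x), len(x) + Rv.sum()
    u = (be * x) ** al
    return ((m + a - 1) * np.log(al) + (al * m + c - 1) * np.log(be)
            + n * np.log(1 - p) - b * al - d * be
            + al * np.sum(np.log(x))
            - np.sum((Rv + 2) * np.log(1 - p * np.exp(-u)))
            - np.sum((Rv + 1) * u))

def mh_within_gibbs(x, R, n_iter, step=(0.1, 0.1, 0.05), init=(1.0, 1.0, 0.5), rng=None):
    rng = np.random.default_rng(rng)
    theta = np.array(init, dtype=float)
    lp = log_post(theta, x, R)
    chain = np.empty((n_iter, 3))
    for t in range(n_iter):
        for j in range(3):                  # one Metropolis update per coordinate
            prop = theta.copy()
            prop[j] += step[j] * rng.standard_normal()
            lp_prop = log_post(prop, x, R)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        chain[t] = theta
    return chain

# Illustrative data and censoring scheme
rng = np.random.default_rng(3)
data = np.sort(rng.weibull(1.5, size=8))
scheme = [1, 0, 0, 1, 0, 0, 0, 2]
chain = mh_within_gibbs(data, scheme, n_iter=300, rng=4)
```

Posterior means of the chain columns (after burn-in) give the Bayes estimates under SEL, and empirical quantiles give the credible intervals.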

The Metropolis-Hastings procedure [

To illustrate the estimation procedures developed in this paper, the gamma prior (21) with hyperparameters ( a = 1.5 , b = 1 ) is used to generate a random sample of size 10, and the sample average ᾱ ≅ (1/10) Σᵢ₌₁¹⁰ αᵢ is computed and taken as the true population value α = 1.5 . The hyperparameters are chosen so that E ( α ) = a/b ≅ α , i.e. the prior mean is approximately the true value. Similarly, with c = 2 and d = 1 , β is generated from the gamma prior (22) around the true value β = 2 ; the hyperparameters are chosen so that E ( β ) = c/d ≅ β , approximately the mean of the gamma distribution. Progressive Type II samples are then generated by employing the procedure of Balakrishnan and Sandhu [

The approximate bootstrap estimates, Bayes estimates, and MLEs of α , β , and p are calculated for these data using the MCMC algorithm outputs, and are explained in the tables below. The average estimates are computed as

$$\overline{\hat\varphi_k} = \frac{1}{M}\sum_{i=1}^{M}\hat\varphi_k^{(i)}, \quad (\varphi_1=\alpha,\ \varphi_2=\beta,\ \varphi_3=p), \qquad (34)$$

and

$$\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M}\left(\hat\varphi_k^{(i)}-\varphi_k\right)^{2}. \qquad (35)$$
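Equations (34) and (35) are plain averages over the M simulation replications. For example, with hypothetical replicate estimates (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical replicate estimates of (alpha, beta, p) over M = 4 runs
est = np.array([[0.52, 1.48, 0.11],
                [0.47, 1.55, 0.09],
                [0.55, 1.43, 0.12],
                [0.49, 1.52, 0.10]])
true = np.array([0.5, 1.5, 0.1])

mean_est = est.mean(axis=0)              # Equation (34)
mse = ((est - true) ** 2).mean(axis=0)   # Equation (35)
```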

In the simulation studies, the population parameter values ( α = 0.5 , β = 1.5 , p = 0.1 ), various sample sizes n, different effective sample sizes m, and different censoring schemes R are considered. For computing the Bayes estimators, non-informative priors ( a = 0.0001 , b = 0.0001 , c = 0.0001 , d = 0.0001 ) are used without loss of generality. The Bayes estimates are calculated under the squared error loss function, and the 95% credible intervals are obtained from 11,000 MCMC samples. The mean Bayes estimates, MSEs, coverage percentages, and average confidence interval lengths based on 500 replications are reported.

For the example data, the point estimates are:

Procedure | α̂ | β̂ | p̂
---|---|---|---
(.)_{ML} | 1.6178 | 1.9614 | 0.4566
(.)_{Boot} | 1.8122 | 2.2451 | 0.7724
(.)_{MCMC} | 1.6299 | 1.9787 | 0.4727

and the 95% interval estimates are:

Procedure | α | β | p
---|---|---|---
(.)_{ML} | (1.1424, 2.0931) | (1.3282, 2.5946) | (0.4654, 0.9210)
(.)_{Boot} | (1.0004, 2.3599) | (1.1012, 2.5840) | (0.4655, 0.8821)
(.)_{Boot-t} | (1.1235, 2.0147) | (0.8561, 2.6523) | (0.1361, 0.8111)
(.)_{MCMC} | (1.1444, 1.9440) | (0.9569, 2.5709) | (0.0361, 0.7768)

For comparison, the MLEs with 95% confidence intervals are calculated based on the observed Fisher information matrix and the two bootstrap confidence intervals.

1) From

2) From

3) The MSEs and average confidence interval lengths of the estimators decrease in almost all situations as the effective sample proportion m/n increases.

Several estimation algorithms for the WG distribution, based on the progressive Type II censored sampling plan, are discussed. The joint confidence intervals for the parameters are also studied. The approximate confidence regions, percentile bootstrap confidence intervals, as well as the approximate joint confidence region

Mean estimates and MSEs of ( α , β , p ) for the three procedures; for each scheme, the first row gives the mean estimates and the second row the corresponding MSEs:

m (scheme) | α̂ | β̂ | p̂ | α̂ | β̂ | p̂ | α̂ | β̂ | p̂
---|---|---|---|---|---|---|---|---|---
15 (15, 14^{0}) | 0.6245 | 1.6664 | 0.1021 | 0.7241 | 1.4217 | 0.1539 | 0.5241 | 1.4399 | 0.1597
 | 0.1234 | 0.4736 | 0.0664 | 0.2235 | 0.4840 | 0.1021 | 0.1200 | 0.1914 | 0.1056
15 (15^{1}) | 0.6754 | 1.6828 | 0.1016 | 0.6788 | 1.4027 | 0.1482 | 0.5751 | 1.4194 | 0.1533
 | 0.2101 | 0.5398 | 0.0614 | 0.2479 | 0.5438 | 0.0906 | 0.1101 | 0.4530 | 0.0936
15 (14^{0}, 15) | 0.5364 | 1.7667 | 0.1001 | 0.5388 | 1.3614 | 0.1554 | 0.5064 | 1.3977 | 0.1621
 | 0.2351 | 0.7337 | 0.0707 | 0.3331 | 0.8748 | 0.1069 | 0.1111 | 0.5925 | 0.1105
20 (10, 19^{0}) | 0.6200 | 1.6165 | 0.1023 | 0.7070 | 1.4327 | 0.1432 | 0.5205 | 1.4424 | 0.1471
 | 0.1088 | 0.3890 | 0.0623 | 0.1282 | 0.3347 | 0.0874 | 0.0988 | 0.3389 | 0.0596
20 (1, 0, ⋯, 1, 0) | 0.6151 | 1.6173 | 0.1024 | 0.6999 | 1.4234 | 0.1404 | 0.5151 | 1.4322 | 0.1439
 | 0.1901 | 0.4028 | 0.057 | 0.2220 | 0.4500 | 0.0793 | 0.1102 | 0.3543 | 0.0511
20 (19^{0}, 10) | 0.6164 | 1.6782 | 0.0968 | 0.7162 | 1.4204 | 0.1400 | 0.5146 | 1.4375 | 0.1443
 | 0.2225 | 0.5091 | 0.0577 | 0.2335 | 0.6194 | 0.0813 | 0.1325 | 0.4272 | 0.0537
30 (30, 29^{0}) | 0.5777 | 1.7881 | 0.0974 | 0.7357 | 1.4828 | 0.1297 | 0.5577 | 1.4934 | 0.1320
 | 0.1100 | 0.7534 | 0.054 | 0.2102 | 0.7296 | 0.0707 | 0.0985 | 0.6377 | 0.0518
30 (1^{30}) | 0.6360 | 1.9606 | 0.094 | 0.5399 | 1.5464 | 0.1233 | 0.6355 | 1.5575 | 0.1254
 | 0.1840 | 1.0418 | 0.0447 | 0.1990 | 0.9999 | 0.0570 | 0.1000 | 0.8539 | 0.0480
30 (29^{0}, 30) | 0.6321 | 2.1049 | 0.0950 | 0.6355 | 1.4381 | 0.1320 | 0.6333 | 1.4661 | 0.1351
 | 0.2114 | 1.5074 | 0.0548 | 0.2554 | 1.5381 | 0.0741 | 0.2000 | 1.2615 | 0.0758

Coverage percentages and average confidence interval lengths for ( α , β , p ); for each scheme, the first row gives the coverage percentages and the second row the average interval lengths:

m (scheme) | α | β | p | α | β | p | α | β | p
---|---|---|---|---|---|---|---|---|---
15 (15, 14^{0}) | 0.90 | 0.923 | 0.87 | 0.892 | 0.933 | 0.86 | 0.9021 | 0.916 | 0.936
 | 0.7502 | 1.6373 | 0.2507 | 0.9840 | 1.8201 | 0.3503 | 0.7001 | 1.5616 | 0.3504
15 (15^{1}) | 0.901 | 0.923 | 0.874 | 0.9011 | 0.963 | 0.894 | 0.910 | 0.952 | 0.952
 | 0.8821 | 1.9465 | 0.2358 | 0.9991 | 2.012 | 0.4458 | 0.8111 | 1.8784 | 0.3231
15 (14^{0}, 15) | 0.880 | 0.908 | 0.886 | 0.874 | 0.918 | 0.896 | 0.891 | 0.933 | 0.939
 | 0.9921 | 2.5915 | 0.3571 | 1.001 | 2.5313 | 0.4574 | 0.8821 | 2.4269 | 0.3695
20 (10, 19^{0}) | 0.911 | 0.945 | 0.883 | 0.933 | 0.947 | 0.890 | 0.924 | 0.943 | 0.936
 | 0.7001 | 1.4041 | 0.2222 | 0.8801 | 1.3331 | 0.3332 | 0.3001 | 1.3522 | 0.2932
20 (1, 0, ⋯, 1, 0) | 0.905 | 0.962 | 0.895 | 0.915 | 0.920 | 0.905 | 0.932 | 0.958 | 0.943
 | 0.7721 | 1.5183 | 0.2133 | 0.8723 | 1.6683 | 0.2442 | 0.7028 | 1.4749 | 0.2778
20 (19^{0}, 10) | 0.891 | 0.963 | 0.857 | 0.901 | 0.980 | 0.889 | 0.901 | 0.968 | 0.965
 | 0.9823 | 1.9051 | 0.2237 | 0.9977 | 1.9352 | 0.2299 | 0.7023 | 1.8133 | 0.302
30 (30, 29^{0}) | 0.933 | 0.943 | 0.855 | 0.906 | 0.955 | 0.921 | 0.935 | 0.945 | 0.946
 | 0.6002 | 2.4938 | 0.1935 | 0.6880 | 2.8888 | 0.1991 | 0.5001 | 1.4563 | 0.2011
30 (1^{30}) | 0.925 | 0.946 | 0.873 | 0.933 | 0.920 | 0.913 | 0.965 | 0.959 | 0.975
 | 0.7522 | 3.4213 | 0.13 | 0.7588 | 1.4211 | 0.118 | 0.6122 | 1.4142 | 0.223
30 (29^{0}, 30) | 0.901 | 0.945 | 0.855 | 0.921 | 0.944 | 0.952 | 0.933 | 0.956 | 0.956
 | 0.7923 | 1.1995 | 0.1061 | 0.8925 | 1.1997 | 0.1880 | 0.6982 | 1.0515 | 0.260

for the parameters are developed and extended. Numerical examples with a real data set and simulated data are used to compare the proposed joint confidence regions. In terms of MSEs and credible interval lengths, the Bayes estimators based on non-informative priors perform better than the MLEs and the bootstrap estimates. Across the compared models, the MSEs and average confidence interval lengths of the MLEs and Bayes estimators are smaller for the censored models.

El-Sayed, M.A., Riad, F.H., Elsafty, M.A. and Estaitia, Y.A. (2017) Algorithms of Confidence Intervals of WG Distribution Based on Progressive Type-II Censoring Samples. Journal of Computer and Communications, 5, 101-116. https://doi.org/10.4236/jcc.2017.57011