**Open Journal of Statistics**

Vol.07 No.02(2017), Article ID:75696,24 pages

10.4236/ojs.2017.72024

Inference on Constant-Partially Accelerated Life Tests for Mixture of Pareto Distributions under Progressive Type-II Censoring

Tahani A. Abushal^{1}, Areej M. AL-Zaydi^{2}

^{1}Department of Mathematics, Umm AL-Qura University, Mecca, KSA

^{2}Department of Mathematics, Taif University, Taif, KSA

Copyright © 2017 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 7, 2017; Accepted: April 24, 2017; Published: April 27, 2017

ABSTRACT

The main purpose of this paper is to make inferences about the parameters of a heterogeneous population represented by a finite mixture of two Pareto (MTP) distributions of the second kind. Constant-partially accelerated life tests are applied based on progressively type-II censored samples. The maximum likelihood estimates (MLEs) of the model parameters are obtained by solving the likelihood equations numerically. The Bayes estimators are obtained by using a Markov chain Monte Carlo algorithm under the balanced squared error loss function. Based on Monte Carlo simulation, Bayes estimators are compared with their corresponding maximum likelihood estimators. The two-sample prediction technique is considered to derive Bayesian prediction bounds for future order statistics based on progressively type-II censored informative samples obtained from constant-partially accelerated life testing models. The informative and future samples are assumed to be obtained from the same population. The coverage probabilities and the average interval lengths of the confidence intervals are computed via a Monte Carlo simulation to assess the performance of the prediction intervals. Analysis of a simulated data set is also presented for illustrative purposes. Finally, comparisons are made between Bayesian and maximum likelihood estimators via a Monte Carlo simulation study.

**Keywords:**

Pareto Distribution, Finite Mixtures, Constant-Partially ALT, Progressive Type-II Censoring, Bayesian Estimation, Maximum Likelihood Estimation, Bayesian Prediction, Two-Sample Prediction, MCMC

1. Introduction

Accelerated life tests (ALTs) are used to obtain information quickly on the lifetime distribution of materials or products. The test units are run at higher-than-usual levels of stress to induce early failures. A model relating life length to stress is fitted to the accelerated failure times and then extrapolated to estimate the failure time distribution under the normal use condition. ALTs are widely used in manufacturing industries to obtain, in a short period of time, enough failure data to make inferences about the relationship between lifetime and external stress variables.

According to [1] , there are mainly three ALT methods. The first method is called the constant stress ALT; the stress is kept at a constant level throughout the life of test products, (see for example [2] [3] [4] [5] ). The second one is referred to as progressive stress ALT; the stress applied to a test product is continuously increasing in time (see for example, [6] [7] [8] ).

The third is the step-stress ALT, in which the test condition changes at a given time or upon the occurrence of a specified number of failures; it has been studied by several authors. [9] obtained optimal simple step-stress ALT plans for the case where test products had exponentially distributed lives and were observed continuously until all test products failed; [10] extended these results to the case of censoring. The optimal step-stress test under progressive type-I censoring, assuming an exponential lifetime distribution, was considered by [11] . For more recent research on step-stress ALTs, see [12] [13] [14] [15] .

When the acceleration factor cannot be assumed to be a known value, the partially accelerated life test (PALT) is a good choice for performing the life test. In ALTs, the units are tested only at accelerated conditions (see [5] ), whereas in partially ALTs (PALTs), the units are tested at both accelerated and normal conditions. PALTs are of two types: step PALTs (see [16] ) and constant PALTs (see [17] ).

From the Bayesian viewpoint, few studies have considered PALT. [18] used the Bayesian approach to estimate the acceleration factor and the parameters in the case of step-stress PALT with complete sampling for items having exponential and uniform distributions. [19] investigated the optimal Bayesian design of a PALT for the exponential distribution under complete sampling. [20] discussed the Bayesian approach to estimating the parameters of the Weibull distribution in step-stress PALT with censoring. [21] considered the Bayesian estimates of the Pareto distribution parameters under step-stress PALT with censored data. [4] considered the Bayesian estimates of the parameters, reliability and hazard rate functions of a mixture of two Weibull components under ALT, using the approximate form due to Tierney and Kadane. Finally, [22] obtained the Bayesian estimation of the Gompertz distribution parameters in the case of step-stress PALT with two stress levels and Type-I censoring, with the approximate Bayes estimates computed using the method of Lindley.

The Pareto distribution of the second type (also known as the Lomax distribution) has been widely used in economic studies and to analyze business failure data. The Pareto distribution has been studied by several authors. According to [23] , the Pareto distribution is well adapted for modeling reliability problems, since many of its properties are interpretable in that context, and it can be an alternative to the well-known distributions used in reliability. This distribution was used for modeling size spectra data in aquatic ecology by [24] . [25] considered order statistics from non-identical right-truncated Lomax distributions and provided applications for this situation. [26] used the Pareto distribution as a mixing distribution for the Poisson parameter and obtained the discrete Poisson-Pareto distribution.

[27] investigated the Bayesian estimation of the Pareto survival function. More recently, [28] discussed some Bayesian inferences based on censored samples from the Pareto distribution. [29] determined the optimal times of changing stress level for simple stress plans under a cumulative exposure model using the Pareto distribution. Finite mixtures of distributions have proved to be of considerable interest in recent years in terms of both methodological development and multiple applications. Mixture distribution modeling was studied as early as the 1890s by [30] ; see also [31] [32] [33] . [4] [5] used a finite mixture model to study the effect of a constant stress on the parameters, reliability and hazard rate functions. [8] considered the progressive stress ALT applied to a product whose lifetime under the design condition is assumed to follow a mixture of k components, each of which represents a different cause of failure.

A random variable T is said to have a mixture of two Pareto distributions (MTPD) if its probability density function (PDF) is given by

${f}_{1\Theta}\left(t\right)={p}_{1}{f}_{11}\left(t,{\theta}_{1}\right)+{p}_{2}{f}_{12}\left(t,{\theta}_{2}\right),$ (1)

where $\Theta =\left({\theta}_{1},{\theta}_{2},{p}_{1},{p}_{2}\right)$ and for $j=1,2$ ,

${\theta}_{j}=\left({\alpha}_{j},{\beta}_{j}\right)$ ,

$\begin{array}{l}{f}_{1j}\left(t;{\theta}_{j}\right)={\alpha}_{j}{\beta}_{j}^{{\alpha}_{j}}{\left({\beta}_{j}+t\right)}^{-\left({\alpha}_{j}+1\right)},\\ t>0,\left({\alpha}_{j},{\beta}_{j}>0\right),\text{}0\le {p}_{j}\le 1,\text{}{p}_{1}+{p}_{2}=1.\end{array}$ (2)
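As a quick numerical companion to (1)-(3), the following Python sketch (the function and parameter names are ours, not from the paper) evaluates the mixture density and draws variates by inverting the component CDF (3):

```python
import numpy as np

def pareto2_pdf(t, alpha, beta):
    # Lomax component density f_1j of Eq. (2)
    return alpha * beta**alpha * (beta + t) ** -(alpha + 1)

def mtp_pdf(t, p1, th1, th2):
    # Two-component mixture density f_1Theta of Eq. (1); p2 = 1 - p1
    return p1 * pareto2_pdf(t, *th1) + (1.0 - p1) * pareto2_pdf(t, *th2)

def mtp_rvs(n, p1, th1, th2, rng=None):
    # Draw from the mixture: pick a component, then invert its CDF (3):
    # F^{-1}(u) = beta * (1-u)^{-1/alpha} - beta
    rng = np.random.default_rng(rng)
    comp = rng.random(n) < p1
    u = rng.random(n)
    out = np.empty(n)
    for mask, (a, b) in ((comp, th1), (~comp, th2)):
        out[mask] = b * (1.0 - u[mask]) ** (-1.0 / a) - b
    return out
```

With p_1 = 0.4, (α_1, β_1) = (2, 1) and (α_2, β_2) = (3, 2), the density integrates to one up to truncation error, which is a convenient sanity check on any implementation.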

Also, the cumulative distribution function (CDF), the reliability function (RF) and the hazard rate function (HRF) are given, respectively, by

${F}_{1j}\left(t;{\theta}_{j}\right)=1-{\beta}_{j}^{{\alpha}_{j}}{\left({\beta}_{j}+t\right)}^{-{\alpha}_{j}},$ (3)

${R}_{1j}\left(t;{\theta}_{j}\right)={\beta}_{j}^{{\alpha}_{j}}{\left({\beta}_{j}+t\right)}^{-{\alpha}_{j}},$ (4)

${H}_{1j}\left(t;{\theta}_{j}\right)={\alpha}_{j}{\left({\beta}_{j}+t\right)}^{-1},$ (5)

where ${H}_{1j}(\cdot )=\frac{{f}_{1j}(\cdot )}{{R}_{1j}(\cdot )}.$ Note that (2) is a special form of the Pearson type VI distribution.

In life-testing and reliability studies, the experimenter may not always obtain complete information on failure times for all experimental units. Data obtained from such experiments are called censored data. Saving the total time on test, and the cost associated with it, are among the major reasons for censoring. A censoring scheme that can balance the total time spent on the experiment, the number of units used in the experiment and the efficiency of the statistical inference based on the results is desirable. The most common censoring schemes are Type-I (time) censoring and Type-II (item) censoring. The conventional Type-I and Type-II censoring schemes do not have the flexibility of allowing removal of units at points other than the terminal point of the experiment. For this reason, a more general censoring scheme called progressive Type-II right censoring is used in this article. Data are progressively Type-II right censored when a prespecified number of survivors is removed whenever an individual fails; this continues until a fixed number of failures has occurred, at which stage the remaining surviving individuals are also removed or censored. This scheme includes ordinary Type-II censoring and the complete-sample case as special cases. A general account of theoretical developments and applications concerning progressive censoring is given in the books by [34] [35] .

An important problem that may face the experimenter in life testing experiments is the prediction of unknown observations that belong to a future sample, based on the currently available sample, known in the literature as the informative sample. For example, experimenters or manufacturers would like to have bounds for the life of their products so that warranty limits can be set plausibly, and customers purchasing manufactured products would like to know bounds for the life of the product to be purchased. For different application areas, the reader may see [36] [37] . The prediction of progressive Type-II censored data from the Gompertz and Rayleigh distributions has been considered, respectively, by [38] [39] . [40] presented methods for constructing prediction limits for a step-stress model in ALT. Bayesian inference and prediction for the inverse Weibull distribution and the Weibull distribution under Type-II censored data are described by [41] and [42] , respectively.

The novelty of this paper is to consider the constant PALT applied to items whose lifetimes under the design condition are assumed to follow the MTPD under progressive Type-II censoring; the main aim is to obtain the Bayes estimators (BEs) and prediction of the acceleration factor and the parameters under consideration using the MCMC method. The rest of this paper is organized as follows. In Section 2, a description of the model is presented and the MLEs of the parameters are derived. In Section 3, Bayes estimates are obtained using the balanced squared error loss (BSEL) function. Bayesian two-sample prediction is presented in Section 4. Monte Carlo simulation results are presented in Section 5. Finally, some concluding remarks are given in Section 6.

2. Model Description and Basic Assumptions

2.1. Model Description

In a constant-PALT, ${n}_{1}$ items randomly chosen among the $n$ test items are allocated to the use condition, and the ${n}_{2}=n-{n}_{1}$ remaining items are subjected to the accelerated condition. Progressive type-II censoring is then performed as follows.

At the time of the first failure ${t}_{s1:{m}_{s}:{n}_{s}}^{{R}_{s}}$, ${R}_{s1}$ items are randomly withdrawn from the remaining ${n}_{s}-1$ surviving items. At the second failure ${t}_{s2:{m}_{s}:{n}_{s}}^{{R}_{s}}$, ${R}_{s2}$ items are randomly withdrawn from the remaining ${n}_{s}-2-{R}_{s1}$ items. The test continues until the ${m}_{s}\text{-th}$ failure ${t}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{{R}_{s}}$, at which time all remaining ${R}_{s{m}_{s}}={n}_{s}-{m}_{s}-{\displaystyle {\sum}_{\upsilon =1}^{{m}_{s}-1}{R}_{s\upsilon}}$ items are withdrawn, for $s=1,2$ . In our study, the ${R}_{si}$ are fixed in advance and ${m}_{s}<{n}_{s}$ .

If the failure times of the ${n}_{s}$ items originally on test are from a continuous population with distribution function ${F}_{s\Theta}\left(t\right)$ and probability density function ${f}_{s\Theta}\left(t\right)$ , the joint probability density function for ${t}_{s1:{m}_{s}:{n}_{s}}^{{R}_{s}}<{t}_{s2:{m}_{s}:{n}_{s}}^{{R}_{s}}<\cdots <{t}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{{R}_{s}}$ and $s=1,2$ is given by

$L\left(\theta ;t\right)=\underset{s=1}{\overset{2}{{\displaystyle \prod}}}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{{\displaystyle \prod}}}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (6)

where $t=\left({t}_{1},{t}_{2}\right)$ and, for $s=1,2$ , ${t}_{s}=\left({t}_{s1},\cdots ,{t}_{s{m}_{s}}\right)$ and

${A}_{s}={n}_{s}\left({n}_{s}-1-{R}_{s1}\right)\left({n}_{s}-2-{R}_{s1}-{R}_{s2}\right)\cdots \left({n}_{s}-{m}_{s}+1-{R}_{s1}-{R}_{s2}\cdots -{R}_{s\left({m}_{s}-1\right)}\right).$

It is clear from (6) that the constant-PALT progressively Type-II censored scheme contains the following censoring schemes as special cases:

1) The ordinary Type-II censored scheme, when ${R}_{s}=\left\{0,0,\cdots ,0,{n}_{s}-{m}_{s}\right\}.$

2) The complete sample case, when ${R}_{s}=\left\{0,0,\cdots ,0\right\}$ and ${n}_{s}={m}_{s}$ .
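A progressively Type-II censored sample under this scheme can be simulated with the standard uniform-transformation algorithm (generate uniform progressively censored order statistics, then invert the target CDF). The sketch below uses our own helper names and inverts the mixture CDF (obtained from (1) and (3)) by bisection, since it has no closed-form quantile function:

```python
import numpy as np

def progressive_t2_uniform(R, rng=None):
    # Uniform progressively Type-II censored order statistics for the
    # scheme R = (R_1, ..., R_m): V_i = W_i^(1/(i + R_m + ... + R_{m-i+1})),
    # U_i = 1 - V_m V_{m-1} ... V_{m-i+1}
    rng = np.random.default_rng(rng)
    m = len(R)
    W = rng.random(m)
    E = np.arange(1, m + 1) + np.cumsum(np.asarray(R)[::-1])
    V = W ** (1.0 / E)
    return 1.0 - np.cumprod(V[::-1])

def mtp_cdf(t, p1, th1, th2):
    # Mixture CDF from Eqs. (1) and (3)
    F = lambda t, a, b: 1.0 - b**a * (b + t) ** -a
    return p1 * F(t, *th1) + (1.0 - p1) * F(t, *th2)

def progressive_sample(R, p1, th1, th2, rng=None):
    # Map the uniform order statistics through the inverse mixture CDF
    U = progressive_t2_uniform(R, rng)
    ts = []
    for u in U:
        lo, hi = 0.0, 1.0
        while mtp_cdf(hi, p1, th1, th2) < u:   # bracket the root
            hi *= 2.0
        for _ in range(80):                     # bisection
            mid = (lo + hi) / 2.0
            if mtp_cdf(mid, p1, th1, th2) < u:
                lo = mid
            else:
                hi = mid
        ts.append((lo + hi) / 2.0)
    return np.array(ts)
```

The resulting failure times are increasing by construction, mirroring the ordering of the observed sample.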

2.2. Assumptions

1) The lifetimes ${T}_{1i}\equiv T,\text{}i=1,\cdots ,{n}_{1}$ , of items allocated to the use condition are independent and identically distributed random variables (i.i.d. r.v.'s) and follow the MTP distribution with PDF given in (1).

2) The lifetimes ${T}_{2i}\equiv X,\text{}i=1,\cdots ,{n}_{2}$ , of items allocated to the accelerated condition are i.i.d. r.v.'s.

3) The PDF, RF, CDF and HRF of an item tested at accelerated condition are given, respectively, by

$\begin{array}{l}{f}_{2\Theta}\left(x\right)={p}_{1}{f}_{21}\left(x;{\theta}_{1}\right)+{p}_{2}{f}_{22}\left(x;{\theta}_{2}\right),\hfill \\ {R}_{2\Theta}\left(x\right)={p}_{1}{R}_{21}\left(x;{\theta}_{1}\right)+{p}_{2}{R}_{22}\left(x;{\theta}_{2}\right),\hfill \\ {F}_{2\Theta}\left(x\right)={p}_{1}{F}_{21}\left(x;{\theta}_{1}\right)+{p}_{2}{F}_{22}\left(x;{\theta}_{2}\right),\hfill \\ {H}_{2\Theta}\left(x\right)=\frac{{f}_{2\Theta}\left(x\right)}{{R}_{2\Theta}\left(x\right)},\hfill \end{array}\}$ , (7)

where for $j=1,2,{\theta}_{j}=\left({\alpha}_{j},{\beta}_{j},{\lambda}_{j}\right),$ and

${H}_{2j}\left(x;{\theta}_{j}\right)={\lambda}_{j}{H}_{1j}\left(x,{\theta}_{j}\right)={\lambda}_{j}{\alpha}_{j}{\left({\beta}_{j}+x\right)}^{-1},$ (8)

${R}_{2j}\left(x;{\theta}_{j}\right)={\beta}_{j}^{{\lambda}_{j}{\alpha}_{j}}{\left({\beta}_{j}+x\right)}^{-{\lambda}_{j}{\alpha}_{j}},$ (9)

${F}_{2j}\left(x;{\theta}_{j}\right)=1-{\beta}_{j}^{{\lambda}_{j}{\alpha}_{j}}{\left({\beta}_{j}+x\right)}^{-{\lambda}_{j}{\alpha}_{j}},$ (10)

${f}_{2j}\left(x;{\theta}_{j}\right)={\alpha}_{j}{\lambda}_{j}{\beta}_{j}^{{\lambda}_{j}{\alpha}_{j}}{\left({\beta}_{j}+x\right)}^{-\left({\lambda}_{j}{\alpha}_{j}+1\right)},$ (11)

where ${\lambda}_{j}$ is an acceleration factor satisfying ${\lambda}_{j}>1$ .

4) The lifetimes ${T}_{1i}$ , $i=1,\cdots ,{n}_{1}$ , and ${T}_{2i}$ , $i=1,\cdots ,{n}_{2}$ , are mutually statistically independent.
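The accelerated-condition functions (8)-(11) amount to scaling the use-condition hazard by ${\lambda}_{j}$ , which is equivalent to raising the use-condition RF to the power ${\lambda}_{j}$ . A quick numerical check of both relations (illustrative parameter values only):

```python
import numpy as np

# use-condition RF (4) and HRF (5) vs. accelerated RF (9) and HRF (8)
a, b, lam = 2.5, 1.5, 2.0                    # alpha_j, beta_j, lambda_j
x = np.linspace(0.1, 10.0, 50)

H1 = a / (b + x)                              # Eq. (5)
R1 = b**a * (b + x) ** -a                     # Eq. (4)
H2 = lam * a / (b + x)                        # Eq. (8): lambda_j * H_1j
R2 = b**(lam * a) * (b + x) ** -(lam * a)     # Eq. (9)

assert np.allclose(H2, lam * H1)              # tampered hazard
assert np.allclose(R2, R1**lam)               # equivalently R_2j = R_1j^lambda_j
```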

2.3. ML Estimation

Let, for $s=1,2$ , ${T}_{s1:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}<{T}_{s2:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}<\cdots <{T}_{s{m}_{s}:{m}_{s}:{n}_{s}}^{\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)}$ denote two progressively type-II censored samples from the two populations whose PDFs are given by (1) and (7), respectively, with ${R}_{s}=\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)$ being the two progressive censoring schemes. We denote the observed values by ${t}_{s1:{m}_{s}:{n}_{s}}<{t}_{s2:{m}_{s}:{n}_{s}}<\cdots <{t}_{s{m}_{s}:{m}_{s}:{n}_{s}}$ . The log-likelihood function $l\left(\Theta ;t\right)=\mathrm{ln}L\left(\Theta ;t\right)$ , without the normalizing constant, is then given by

$\begin{array}{c}l\equiv \mathrm{ln}L\left(\theta ;t\right)={\displaystyle {\sum}_{s=1}^{2}\mathrm{ln}{A}_{s}}+{\displaystyle {\sum}_{s=1}^{2}{\displaystyle {\sum}_{i=1}^{{m}_{s}}\mathrm{ln}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)}}\\ \text{}+{\displaystyle {\sum}_{s=1}^{2}{\displaystyle {\sum}_{i=1}^{{m}_{s}}{R}_{si}\mathrm{ln}{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)}}.\end{array}$ (12)

Assuming that the parameters ${p}_{j},\text{}{\lambda}_{j}$ and ${\beta}_{j}$ are unknown and ${\alpha}_{j}$ is known, the likelihood equations are given, for $j=1,2$ , by

$\begin{array}{c}\frac{\partial \mathcal{l}}{\partial {p}_{j}}={\displaystyle {\sum}_{s=1}^{2}{\displaystyle {\sum}_{i=1}^{{m}_{s}}{\psi}_{s}\left({t}_{si}\right)}}+{\displaystyle {\sum}_{s=1}^{2}{\displaystyle {\sum}_{i=1}^{{m}_{s}}{R}_{si}{\psi}_{s}^{*}\left({t}_{si}\right)}}=0,\\ \frac{\partial l}{\partial {\lambda}_{j}}={\displaystyle {\sum}_{i=1}^{{m}_{2}}\frac{{p}_{j}{\xi}_{j}\left({t}_{2i}\right)}{{f}_{2\Theta}\left({t}_{2i:{m}_{2}:{n}_{2}}\right)}}+{\displaystyle {\sum}_{i=1}^{{m}_{2}}\frac{{R}_{2i}{p}_{j}{\xi}_{j}^{*}\left({t}_{2i}\right)}{{R}_{2\Theta}\left({t}_{2i:{m}_{2}:{n}_{2}}\right)}}=0,\text{}j=1,2\\ \frac{\partial l}{\partial {\beta}_{j}}={\displaystyle \underset{s=1}{\overset{2}{\sum}}{\displaystyle \underset{i=1}{\overset{{m}_{s}}{\sum}}\frac{{p}_{j}{\vartheta}_{sj}\left({t}_{si}\right)}{{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)}}}+{\displaystyle \underset{s=1}{\overset{2}{\sum}}{\displaystyle \underset{i=1}{\overset{{m}_{s}}{\sum}}\frac{{R}_{si}{p}_{j}{\vartheta}_{sj}^{*}\left({t}_{si}\right)}{{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)}}}=0,\text{}j=1,2\end{array}\}$ (13)

where, for $j=1,2$

$\begin{array}{c}{\psi}_{s}\left({t}_{si}\right)=\frac{{f}_{s1}\left({t}_{si};{\theta}_{1}\right)-{f}_{s2}\left({t}_{si};{\theta}_{2}\right)}{{f}_{s\text{\Theta}}\left({t}_{si:{m}_{s}:{n}_{s}}\right)},\\ {\psi}_{s}^{*}\left({t}_{si}\right)=\frac{{R}_{s1}\left({t}_{si};{\theta}_{1}\right)-{R}_{s2}\left({t}_{si};{\theta}_{2}\right)}{{R}_{s\text{\Theta}}\left({t}_{si:{m}_{s}:{n}_{s}}\right)},\\ {\xi}_{j}\left({t}_{2i}\right)=\frac{\partial {f}_{2j}\left({t}_{2i};{\theta}_{j}\right)}{\partial {\lambda}_{j}}={\alpha}_{j}^{2}{\lambda}_{j}{\beta}_{j}^{{\lambda}_{j}{\alpha}_{j}}{\left({\beta}_{j}+{t}_{2i}\right)}^{-\left({\lambda}_{j}{\alpha}_{j}+1\right)}\left[\mathrm{ln}\frac{{\beta}_{j}}{{\beta}_{j}+{t}_{2i}}+\frac{1}{{\lambda}_{j}{\alpha}_{j}}\right],\\ {\xi}_{j}^{*}\left({t}_{2i}\right)=\frac{\partial {R}_{2j}\left({t}_{2i};{\theta}_{j}\right)}{\partial {\lambda}_{j}}={\alpha}_{j}{\beta}_{j}^{{\lambda}_{j}{\alpha}_{j}}{\left({\beta}_{j}+{t}_{2i}\right)}^{-{\lambda}_{j}{\alpha}_{j}}\left[\mathrm{ln}\frac{{\beta}_{j}}{{\beta}_{j}+{t}_{2i}}\right],\\ {\vartheta}_{1j}\left({t}_{1i}\right)=\frac{\partial {f}_{1j}\left({t}_{1i};{\theta}_{j}\right)}{\partial {\beta}_{j}}={\alpha}_{j}{\beta}_{j}^{{\alpha}_{j}-1}{\left({\beta}_{j}+{t}_{1i}\right)}^{-\left({\alpha}_{j}+2\right)}\left({\alpha}_{j}{t}_{1i}-{\beta}_{j}\right),\\ {\vartheta}_{2j}\left({t}_{2i}\right)=\frac{\partial {f}_{2j}\left({t}_{2i};{\theta}_{j}\right)}{\partial {\beta}_{j}}={\alpha}_{j}{\lambda}_{j}{\beta}_{j}^{{\alpha}_{j}{\lambda}_{j}-1}{\left({\beta}_{j}+{t}_{2i}\right)}^{-\left({\lambda}_{j}{\alpha}_{j}+2\right)}\left({\alpha}_{j}{\lambda}_{j}{t}_{2i}-{\beta}_{j}\right),\\ {\vartheta}_{1j}^{*}\left({t}_{1i}\right)=\frac{\partial {R}_{1j}\left({t}_{1i};{\theta}_{j}\right)}{\partial {\beta}_{j}}={\alpha}_{j}{\beta}_{j}^{{\alpha}_{j}-1}{t}_{1i}{\left({\beta}_{j}+{t}_{1i}\right)}^{-\left({\alpha}_{j}+1\right)},\\ {\vartheta}_{2j}^{*}\left({t}_{2i}\right)=\frac{\partial {R}_{2j}\left({t}_{2i};{\theta}_{j}\right)}{\partial {\beta}_{j}}={\alpha}_{j}{\lambda}_{j}{\beta}_{j}^{{\alpha}_{j}{\lambda}_{j}-1}{t}_{2i}{\left({\beta}_{j}+{t}_{2i}\right)}^{-\left({\lambda}_{j}{\alpha}_{j}+1\right)},\end{array}\}.$ (14)

Equations (13) do not yield explicit solutions for $p,\text{}{\lambda}_{j}$ and ${\beta}_{j},\text{}j=1,2,$ and have to be solved numerically to obtain the ML estimates of the five parameters; Newton-Raphson iteration is employed to solve (13).
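As an alternative sketch of this numerical step, one may maximize the log-likelihood (12) directly rather than solving (13) by Newton-Raphson. The fragment below (our own helper names; the ${\alpha}_{j}$ treated as known, as in the paper) uses a bounded quasi-Newton routine over the admissible region $0<{p}_{1}<1$ , ${\beta}_{j}>0$ , ${\lambda}_{j}>1$ :

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(par, t1, R1, t2, R2, alpha):
    # par = (p1, beta1, beta2, lam1, lam2); alpha = (alpha1, alpha2) known.
    # t1, R1: use-condition failure times and removals; t2, R2: accelerated.
    p1, b1, b2, l1, l2 = par
    a1, a2 = alpha
    def fR(t, a, b, lam):
        # component PDF (11) and RF (9); lam = 1 recovers (2) and (4)
        f = a * lam * b ** (lam * a) * (b + t) ** -(lam * a + 1)
        S = b ** (lam * a) * (b + t) ** -(lam * a)
        return f, S
    ll = 0.0
    for t, Rs, u1, u2 in ((t1, R1, 1.0, 1.0), (t2, R2, l1, l2)):
        f1, S1 = fR(t, a1, b1, u1)
        f2, S2 = fR(t, a2, b2, u2)
        fmix = p1 * f1 + (1 - p1) * f2          # mixture PDF, Eqs. (1)/(7)
        Smix = p1 * S1 + (1 - p1) * S2          # mixture RF
        ll += np.sum(np.log(fmix)) + np.sum(np.asarray(Rs) * np.log(Smix))
    return -ll                                  # minimize the negative of (12)

def mle(t1, R1, t2, R2, alpha, x0):
    bounds = [(1e-4, 1 - 1e-4), (1e-4, None), (1e-4, None),
              (1.0 + 1e-4, None), (1.0 + 1e-4, None)]
    return minimize(neg_loglik, x0, args=(t1, R1, t2, R2, alpha),
                    method="L-BFGS-B", bounds=bounds)
```

This is a sketch under our own parameterization, not the paper's Newton-Raphson scheme; both approaches target the same likelihood (12).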

3. Bayes Estimation of the Model Parameters

In the Bayesian approach, in order to select a single value as our "best" estimate of an unknown parameter, a loss function must be specified. A wide variety of loss functions have been developed in the literature to describe various types of loss structures. The balanced loss function was introduced by [43] . [44] introduced an extended class of balanced loss functions of the form

${L}_{\Phi ,\Omega ,{\delta}_{o}}\left(\Psi \left(\theta \right),\delta \right)=\Omega \Upsilon \left(\theta \right)\Phi \left({\delta}_{o},\delta \right)+\left(1-\Omega \right)\Upsilon \left(\theta \right)\Phi \left(\Psi \left(\theta \right),\delta \right),$ (15)

where $\Upsilon (\cdot )$ is a suitable positive weight function and $\Phi \left(\Psi \left(\theta \right),\delta \right)$ is an arbitrary loss function for estimating $\Psi \left(\theta \right)$ by $\delta $ . The quantity ${\delta}_{o}$ is a chosen prior estimator of $\Psi \left(\theta \right)$ , obtained, for instance, by maximum likelihood, least squares or unbiasedness, among other criteria. They give a general Bayesian connection between the cases $\Omega >0$ and $\Omega =0$ , where $0\le \Omega <1$ .

This section studies the Bayes estimates of the parameters under consideration using the balanced squared error loss (BSEL) function with non-informative prior (NIP) distributions. A NIP for the acceleration factor ${\lambda}_{j}$ is given by

${\pi}_{1}\left({\lambda}_{j}\right)\propto \frac{1}{{\lambda}_{j}},\left({\lambda}_{j}>1\right).$ (16)

Also, the NIPs for the scale parameter ${\beta}_{j}$ and the mixing proportion ${p}_{j}$ are, respectively,

${\pi}_{2}\left({\beta}_{j}\right)\propto \frac{1}{{\beta}_{j}}\text{,}\left({\beta}_{j}>0\right),$ (17)

${\pi}_{3}\left({p}_{j}\right)\propto \frac{1}{{p}_{j}},\text{}\left({p}_{j}>0\right).$ (18)

Therefore, the joint NIP of the three parameters can be expressed by

$\pi \left(\text{\Theta}\right)={\pi}_{1}\left({\lambda}_{j}\right){\pi}_{2}\left({\beta}_{j}\right){\pi}_{3}\left({p}_{j}\right)\propto \frac{1}{{p}_{j}{\lambda}_{j}{\beta}_{j}},\text{}\left({\lambda}_{j}>1,{\beta}_{j},{p}_{j}>0\right),$ (19)

where $\text{\Theta}=\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right).$

It is to be noted that our objective is to consider vague priors, so that the priors do not play a significant role in the analyses that follow. However, if one uses prior beliefs different from (19) and resorts to sample-based approaches for analyzing the posterior, one may use the concept of sampling-importance-resampling without working afresh with the new prior-likelihood setup (see [45] ).

3.1. Bayes Estimation Based on BSEL Function

The symmetric squared-error (SE) loss is one of the most popular loss functions. Choosing $\Phi \left(\Psi \left(\theta \right),\delta \right)={\left(\delta -\Psi \left(\theta \right)\right)}^{2}$ and $\Upsilon \left(\theta \right)=1$ in (15) reduces the balanced loss function to the BSEL function, used by [46] [47] , of the form

${L}_{\Omega ,{\delta}_{o}}\left(\Psi \left(\theta \right),\delta \right)=\Omega {\left(\delta -{\delta}_{o}\right)}^{2}+\left(1-\Omega \right){\left(\delta -\Psi \left(\theta \right)\right)}^{2},$ (20)

and the corresponding Bayes estimate of the function $\Psi \left(\theta \right)$ is given by

${\delta}_{\Omega ,\Psi ,{\delta}_{o}}\left(t\right)=\Omega {\delta}_{o}+\left(1-\Omega \right)E\left(\Psi \left(\theta \right)|t\right).$ (21)
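Equation (21) is simply a convex combination of the target estimator and the posterior mean; as a minimal sketch (our own function name):

```python
def bsel_estimate(omega, delta0, post_mean):
    # Eq. (21): blend a target estimator delta_o (e.g. the MLE) with the
    # posterior mean E(Psi(theta)|t), with weight 0 <= omega < 1
    return omega * delta0 + (1.0 - omega) * post_mean
```

Setting omega = 0 recovers the usual posterior-mean (SEL) Bayes estimate, while omega close to 1 pulls the estimate toward the target.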

Under the BSEL function, the Bayes part of the estimator of a parameter (or of a given function of the parameters) is the posterior mean, as seen in (21). Thus, Bayes estimators of the parameters are obtained by using the loss function (20). The Bayes estimator of a function $u\equiv u\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right)={p}_{j},\text{}{\lambda}_{j}\text{}\text{or}\text{}{\beta}_{j}$ is given by

${\stackrel{^}{u}}_{BS}=\Omega {\stackrel{^}{u}}_{ML}+\left(1-\Omega \right){\displaystyle {\int}_{0}^{\infty}u{\pi}^{*}}\left({p}_{j},{\lambda}_{j},{\beta}_{j}|t\right)\text{d}\Theta ,$ (22)

where ${\stackrel{^}{u}}_{ML}$ is the ML estimate of $u$ . It is not possible to compute (22) analytically; therefore, we approximate (22) by using the MCMC technique to generate samples from the posterior distribution and then compute the Bayes estimates of the individual parameters.

3.2. MCMC Method

The MCMC method is a useful technique for computing Bayes estimates of the function $u\equiv u\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right)$ . A wide variety of MCMC schemes are available, and it can be difficult to choose among them. An important sub-class of MCMC methods comprises Gibbs sampling and the more general Metropolis-within-Gibbs samplers. The advantage of the MCMC method over the MLE method is that we can always obtain a reasonable interval estimate of the parameters by constructing probability intervals based on the empirical posterior distribution. This is often unavailable in maximum likelihood estimation. Indeed, the MCMC samples may be used to completely summarize the posterior uncertainty about the parameters ${p}_{j},\text{}{\lambda}_{j}$ and ${\beta}_{j}$ through a kernel estimate of the posterior distribution. This is also true of any function of the parameters. For more details about MCMC methods see, for example, [48] [49] [50] .

The Metropolis-Hastings algorithm generates samples from an (essentially) arbitrary proposal distribution (i.e., a Markov transition kernel). From the product of Equations (19) and (6), the joint posterior density function of ${p}_{j},\text{}{\lambda}_{j}$ and ${\beta}_{j}$ given the data can be written as

${\pi}^{*}\left({p}_{j},{\lambda}_{j},{\beta}_{j}|t\right)={B}_{1}{\left({p}_{j}{\lambda}_{j}{\beta}_{j}\right)}^{-1}\underset{s=1}{\overset{2}{{\displaystyle \prod}}}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{{\displaystyle \prod}}}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (23)

where

${B}_{1}^{-1}={\displaystyle {\int}_{\text{\Theta}}\pi \left(\text{\Theta}\right)L\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right)\text{d\Theta}}.$

with $t$ as defined following (6). The conditional posterior distributions of the parameters ${p}_{j},\text{}{\lambda}_{j}$ and ${\beta}_{j}$ can be computed and written, respectively, as

${\pi}^{*}\left({p}_{j}|{\lambda}_{j},{\beta}_{j},t\right)\propto {p}_{j}^{-1}\underset{s=1}{\overset{2}{{\displaystyle \prod}}}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{{\displaystyle \prod}}}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (24)

${\pi}^{*}\left({\lambda}_{j}|{p}_{j},{\beta}_{j},t\right)\propto {\lambda}_{j}^{-1}\underset{s=1}{\overset{2}{{\displaystyle \prod}}}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{{\displaystyle \prod}}}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\},$ (25)

${\pi}^{*}\left({\beta}_{j}|{p}_{j},{\lambda}_{j},t\right)\propto {\beta}_{j}^{-1}\underset{s=1}{\overset{2}{{\displaystyle \prod}}}\left\{{A}_{s}\underset{i=1}{\overset{{m}_{s}}{{\displaystyle \prod}}}{f}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right){\left[{R}_{s\Theta}\left({t}_{si:{m}_{s}:{n}_{s}}\right)\right]}^{{R}_{si}}\right\}.$ (26)

The posteriors of ${p}_{j},\text{}{\lambda}_{j}$ and ${\beta}_{j}$ in (24), (25) and (26) are not of known form, but their plots show that they are similar to the normal distribution. Therefore, to generate from these distributions, we use the Metropolis-Hastings method ( [51] ) with a normal proposal distribution. For details regarding the implementation of the Metropolis-Hastings algorithm, the reader may refer to [52] . To run the Gibbs sampler algorithm we started with the ML estimates. We then drew samples from the various full conditionals, in turn, using the most recent values of all other conditioning variables, until a systematic pattern of convergence was achieved. The following Gibbs sampling algorithm is proposed to compute Bayes estimators of $u\equiv u\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right)$ based on the BSEL function.

1) Start with an initial guess of $\left({p}_{j},{\lambda}_{j},{\beta}_{j}\right)$ , say $\left({p}_{j}^{0},{\lambda}_{j}^{0},{\beta}_{j}^{0}\right)$ .

2) Set $i=1$ .

3) Generate ${p}^{i}$ from (24) and ${\lambda}^{i}$ from (25).

4) Generate ${\beta}^{i}$ from (26).

5) Set $i=i+1.$

6) Repeat steps 3 - 5 N times.

7) An approximate Bayes estimator of $u$ under BSEL function is given by

$E\left(u|t\right)=\left(1/\left(N-\nu \right)\right)\underset{i=\nu +1}{\overset{N}{{\displaystyle \sum}}}u\left({p}^{i},{\lambda}^{i},{\beta}^{i}\right),$ (27)

where $\nu $ is the burn-in period. The Bayes estimator of $u$ based on the BSEL function is then given by

${\stackrel{^}{u}}_{BS}=\Omega {\stackrel{^}{u}}_{ML}+\left(1-\Omega \right)E\left(u|t\right).$ (28)
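Steps 1)-7) can be sketched as a generic Metropolis-within-Gibbs routine with normal random-walk proposals. The fragment below uses our own names; in practice `logpost` would be the logarithm of the joint posterior (23), and the post-burn-in average implements (27). It is shown here on a toy log-density so the behavior is easy to verify:

```python
import numpy as np

def metropolis_within_gibbs(logpost, start, step, n_iter=6000, burn=1000,
                            rng=None):
    # Update one coordinate at a time with a normal random-walk proposal
    # (Metropolis-Hastings acceptance), then keep the post-burn-in draws.
    rng = np.random.default_rng(rng)
    x = np.array(start, dtype=float)
    lp = logpost(x)
    chain = np.empty((n_iter, x.size))
    for i in range(n_iter):
        for k in range(x.size):
            prop = x.copy()
            prop[k] += rng.normal(0.0, step[k])
            lp_prop = logpost(prop)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                x, lp = prop, lp_prop
        chain[i] = x
    return chain[burn:]

# toy target: two independent N(3, 1) coordinates, so the posterior
# mean of each coordinate should come out near 3
draws = metropolis_within_gibbs(lambda v: -0.5 * np.sum((v - 3.0) ** 2),
                                start=[0.0, 0.0], step=[1.0, 1.0], rng=0)
post_mean = draws.mean(axis=0)   # Eq. (27) with N = 6000, nu = 1000
```

For the model at hand, the resulting `post_mean` would be plugged into (28) together with the ML estimate and the weight $\Omega $ .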

4. Bayesian Two-Sample Prediction

The two-sample prediction technique is considered to derive Bayesian prediction bounds for future order statistics based on progressively Type-II censored informative samples obtained from constant-PALT models. The coverage probabilities and the average interval lengths of the confidence intervals are computed via a Monte Carlo simulation to assess the performance of the prediction intervals. Suppose that, for $s=1,2,$ the two-sample scheme is used, in which the informative sample $\left({T}_{s1:{m}_{s}:{n}_{s}}<{T}_{s2:{m}_{s}:{n}_{s}}<\cdots <{T}_{s{m}_{s}:{m}_{s}:{n}_{s}}\right)$ represents an observed informative progressively type-II right censored sample of size ${m}_{s}$ obtained from a sample of size ${n}_{s}$ with progressive censoring scheme (CS) ${R}_{s}=\left({R}_{s1},\cdots ,{R}_{s{m}_{s}}\right)$ , drawn from a population whose PDFs are as given by (1) and (7). Suppose also that ${Y}_{1:M:N},{Y}_{2:M:N},\cdots ,{Y}_{M:M:N}$ represents a future (unobserved) independent progressively type-II right censored sample of size $M$ obtained from a sample of size $N$ with progressive CS ${R}^{*}=\left({R}_{1}^{*},\cdots ,{R}_{M}^{*}\right),$ drawn from the population whose CDF is given in (7). We want to predict any future (unobserved) ${Y}_{b},\text{}b=1,2,\cdots ,M,$ in the future sample of size $M$ . The PDF of ${Y}_{b},\text{}b=1,2,\cdots ,M,$ given the vector of parameters $\theta $ , is obtained as (see [34] ):

${g}^{*}\left({y}_{b}|\theta \right)={C}_{b-1}{f}_{2\Theta}\left({y}_{b}\right)\underset{i=1}{\overset{b}{{\displaystyle \sum}}}{\kappa}_{i}{\left[1-{F}_{2\Theta}\left({y}_{b}\right)\right]}^{{\gamma}_{i}-1},$ (29)

where

$\begin{array}{c}{\gamma}_{i}=\underset{j=i}{\overset{M}{{\displaystyle \sum}}}\left({R}_{j}^{*}+1\right)=N-\underset{j=1}{\overset{i-1}{{\displaystyle \sum}}}\left({R}_{j}^{*}+1\right),\text{}{C}_{b-1}=\underset{i=1}{\overset{b}{{\displaystyle \prod}}}{\gamma}_{i},\\ {\kappa}_{i}=\underset{j=1,j\ne i}{\overset{b}{{\displaystyle \prod}}}\frac{1}{{\gamma}_{j}-{\gamma}_{i}},\text{}b>1,\text{and}\text{}{\kappa}_{1}=1\text{}\text{for}\text{}b=1.\end{array}$
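These constants depend only on the future censoring scheme ${R}^{*}$ ; a small sketch (our own helper name) computes them and checks the normalization of (29), since the density integrates to ${C}_{b-1}{\sum}_{i=1}^{b}{\kappa}_{i}/{\gamma}_{i}$ , which must equal one:

```python
import numpy as np

def prediction_constants(Rstar, b):
    # gamma_i = sum_{j=i}^{M} (R*_j + 1), C_{b-1} = prod_{i<=b} gamma_i,
    # kappa_i = prod_{j<=b, j!=i} 1/(gamma_j - gamma_i)  (kappa_1 = 1 if b = 1)
    Rstar = np.asarray(Rstar)
    M = len(Rstar)
    gamma = np.array([(Rstar[i:] + 1).sum() for i in range(M)], dtype=float)
    C = gamma[:b].prod()
    if b == 1:
        kappa = np.array([1.0])
    else:
        kappa = np.array([np.prod([1.0 / (gamma[j] - gamma[i])
                                   for j in range(b) if j != i])
                          for i in range(b)])
    return gamma[:b], C, kappa
```

For example, with R* = (1, 0, 2) (so M = 3, N = 6) and b = 2, one gets gamma = (6, 4), C = 24 and kappa = (-1/2, 1/2), and indeed 24(-1/12 + 1/8) = 1.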

Substituting from (7) and (9) in (29), we have:

$\begin{array}{l}{g}^{*}\left({y}_{b}|\theta \right)\\ ={C}_{b-1}\left({p}_{1}{f}_{21}\left({y}_{b};{\theta}_{1}\right)+{p}_{2}{f}_{22}\left({y}_{b};{\theta}_{2}\right)\right){\displaystyle {\sum}_{i=1}^{b}{\kappa}_{i}{\left[1-\left({p}_{1}{F}_{21}\left({y}_{b};{\theta}_{1}\right)+{p}_{2}{F}_{22}\left({y}_{b};{\theta}_{2}\right)\right)\right]}^{{\gamma}_{i}-1}}.\end{array}$ (30)
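A sketch of how (30) can be evaluated numerically. We assume Lomax mixture components at the accelerated condition, ${F}_{2j}\left(y\right)=1-{\left({\beta}_{j}/\left({\beta}_{j}+y\right)\right)}^{{\lambda}_{j}{\alpha}_{j}}$, as implied by (7) and step (e) of the simulation algorithm; all function names are ours:

```python
import math

def mix_cdf(y, p, alpha, beta, lam):
    # Mixture CDF at the accelerated condition (assumed Lomax components).
    return sum(pj * (1.0 - (bj / (bj + y)) ** (lj * aj))
               for pj, aj, bj, lj in zip((p, 1 - p), alpha, beta, lam))

def mix_pdf(y, p, alpha, beta, lam):
    # Matching mixture PDF.
    total = 0.0
    for pj, aj, bj, lj in zip((p, 1 - p), alpha, beta, lam):
        k = lj * aj
        total += pj * k * bj ** k * (bj + y) ** -(k + 1.0)
    return total

def g_star(y, b, R_star, p, alpha, beta, lam):
    """Density (30) of the b-th future progressively censored order statistic."""
    M = len(R_star)
    gamma = [sum(R_star[j] + 1 for j in range(i, M)) for i in range(M)]
    C = math.prod(gamma[:b])
    kappa = ([1.0] if b == 1 else
             [math.prod(1.0 / (gamma[j] - gamma[i])
                        for j in range(b) if j != i) for i in range(b)])
    S = 1.0 - mix_cdf(y, p, alpha, beta, lam)
    return C * mix_pdf(y, p, alpha, beta, lam) * sum(
        ki * S ** (gi - 1.0) for ki, gi in zip(kappa, gamma))
```

For $b=1$ the expression reduces to ${C}_{0}{f}_{2\Theta}\left(y\right){\left[1-{F}_{2\Theta}\left(y\right)\right]}^{{\gamma}_{1}-1}$, a proper density integrating to one, which provides a useful numerical check.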

4.1. Maximum Likelihood Prediction When ${\alpha}_{j}$ Is Known

Maximum likelihood prediction (MLP) can be obtained using (30) by replacing the parameters $\theta =\left(p,{\beta}_{1},{\beta}_{2},{\lambda}_{1},{\lambda}_{2}\right)$ by ${\stackrel{^}{\theta}}_{\left(ML\right)}=\left({\stackrel{^}{p}}_{\left(ML\right)},{\stackrel{^}{{\beta}_{1}}}_{\left(ML\right)},{\stackrel{^}{{\beta}_{2}}}_{\left(ML\right)},{\stackrel{^}{{\lambda}_{1}}}_{\left(ML\right)},{\stackrel{^}{{\lambda}_{2}}}_{\left(ML\right)}\right).$

1) Interval prediction:

The maximum likelihood prediction interval (MLPI) for any future observation ${y}_{b},\text{}1\le b\le M$ can be obtained by

$\mathrm{Pr}\left[{y}_{b}\ge \upsilon |t\right]={\displaystyle {\int}_{\upsilon}^{\infty}{g}^{*}\left({y}_{b}|{\stackrel{^}{\theta}}_{\left(ML\right)}\right)\text{d}{y}_{b}}.$ (31)

A $\left(1-\tau \right)\times 100\%$ MLPI $\left(L,U\right)$ of the future observation ${y}_{b}$ is given by solving the following two nonlinear equations

$\mathrm{Pr}\left[{y}_{b}\ge L\left(t\right)|t\right]=1-\frac{\tau}{2},\text{}\mathrm{Pr}\left[{y}_{b}\ge U\left(t\right)|t\right]=\frac{\tau}{2}.$ (32)

2) Point prediction:

The maximum likelihood prediction point (MLPP) for any future observation ${y}_{b}$ is given by

${\stackrel{^}{y}}_{b\left(ML\right)}=E\left[{y}_{b}|t\right]={\displaystyle {\int}_{0}^{\infty}{y}_{b}{g}^{*}\left({y}_{b}|{\stackrel{^}{\theta}}_{\left(ML\right)}\right)\text{d}{y}_{b}}.$ (33)
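Integrating (30) term by term gives a closed form for the survival probability in (31), namely ${C}_{b-1}{\sum}_{i=1}^{b}{\kappa}_{i}{\left[1-{F}_{2\Theta}\left(v\right)\right]}^{{\gamma}_{i}}/{\gamma}_{i}$, so (32) can be solved with any one-dimensional root finder. A sketch, assuming Lomax mixture components as in (7) and using bisection (our choice of solver; helper names are ours):

```python
import math

def mix_cdf(y, p, alpha, beta, lam):
    # Mixture CDF at the accelerated condition (assumed Lomax components).
    return sum(pj * (1.0 - (bj / (bj + y)) ** (lj * aj))
               for pj, aj, bj, lj in zip((p, 1 - p), alpha, beta, lam))

def surv_b(v, b, R_star, theta):
    """Pr[Y_b >= v] from (31): C_{b-1} * sum_i kappa_i * S(v)^gamma_i / gamma_i."""
    M = len(R_star)
    gamma = [sum(R_star[j] + 1 for j in range(i, M)) for i in range(M)]
    C = math.prod(gamma[:b])
    kappa = ([1.0] if b == 1 else
             [math.prod(1.0 / (gamma[j] - gamma[i])
                        for j in range(b) if j != i) for i in range(b)])
    S = 1.0 - mix_cdf(v, *theta)
    return C * sum(ki * S ** gi / gi for ki, gi in zip(kappa, gamma))

def mlpi(b, R_star, theta, tau=0.05):
    """Solve (32) for (L, U) by bisection on the decreasing survival function."""
    def solve(q):
        lo, hi = 0.0, 1.0
        while surv_b(hi, b, R_star, theta) > q:   # expand the upper bracket
            hi *= 2.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if surv_b(mid, b, R_star, theta) > q else (lo, mid)
        return 0.5 * (lo + hi)
    return solve(1.0 - tau / 2.0), solve(tau / 2.0)
```

The MLPP in (33) can be computed from the same ingredients by numerical integration of ${y}_{b}{g}^{*}\left({y}_{b}|{\stackrel{^}{\theta}}_{\left(ML\right)}\right)$.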

4.2. Bayesian Prediction When ${\alpha}_{j}$ Is Known

The predictive density function of ${Y}_{b},\text{}1\le b\le M$ is given by:

${\Psi}^{*}\left({y}_{b}|t\right)={\displaystyle {\int}_{0}^{\infty}{g}^{*}\left({y}_{b}|\theta \right){\pi}^{*}\left(\theta |t\right)\text{d}\theta},\text{}{y}_{b}>0,$ (34)

where ${\pi}^{*}\left(\theta |t\right)$ is the posterior density function given by (23).

1) Interval prediction:

The Bayesian prediction interval (BPI) for the future observation ${Y}_{b},\text{}1\le b\le M,$ can be computed using (34), which can be approximated via the MCMC algorithm in the form

${\Psi}^{\star}\left({y}_{b}|t\right)=\frac{{{\displaystyle \sum}}_{i=1}^{\mu}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)}{{{\displaystyle \sum}}_{i=1}^{\mu}{{\displaystyle \int}}_{0}^{\infty}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)\text{d}{y}_{b}},$ (35)

where ${\theta}^{i},\text{}i=1,2,\cdots ,\mu ,$ are generated from the posterior density function (23) using the Gibbs sampler and Metropolis-Hastings techniques.

A $\left(1-\tau \right)\times 100\%$ BPI $\left(L,U\right)$ of the future observation ${y}_{b}$ is obtained by solving the following two nonlinear equations

$\frac{{{\displaystyle \sum}}_{i=1}^{\mu}{{\displaystyle \int}}_{L}^{\infty}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)\text{d}{y}_{b}}{{{\displaystyle \sum}}_{i=1}^{\mu}{{\displaystyle \int}}_{0}^{\infty}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)\text{d}{y}_{b}}=1-\frac{\tau}{2},$ (36)

$\frac{{{\displaystyle \sum}}_{i=1}^{\mu}{{\displaystyle \int}}_{U}^{\infty}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)\text{d}{y}_{b}}{{{\displaystyle \sum}}_{i=1}^{\mu}{{\displaystyle \int}}_{0}^{\infty}{g}^{*}\left({y}_{b}|{\theta}^{i}\right)\text{d}{y}_{b}}=\frac{\tau}{2}.$ (37)

Numerical methods, such as Newton-Raphson, are necessary to solve the above two nonlinear Equations (36) and (37) for $L$ and $U$ at a given value of $\tau $.
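Under the Lomax-mixture assumption of (7), each integral of ${g}^{*}\left(\cdot |{\theta}^{i}\right)$ has a closed form, and since each ${g}^{*}$ is a proper density the denominator in (36)-(37) equals $\mu $; the predictive survival is then just the average of the per-draw survivals. A sketch (bisection is used in place of Newton-Raphson for robustness; the draws would come from the MCMC sampler of (23), and the placeholders below are purely illustrative):

```python
import math

def mix_cdf(y, p, alpha, beta, lam):
    # Mixture CDF at the accelerated condition (assumed Lomax components).
    return sum(pj * (1.0 - (bj / (bj + y)) ** (lj * aj))
               for pj, aj, bj, lj in zip((p, 1 - p), alpha, beta, lam))

def surv_b(v, b, R_star, theta):
    """Closed-form Pr[Y_b >= v | theta], obtained by integrating (30)."""
    M = len(R_star)
    gamma = [sum(R_star[j] + 1 for j in range(i, M)) for i in range(M)]
    C = math.prod(gamma[:b])
    kappa = ([1.0] if b == 1 else
             [math.prod(1.0 / (gamma[j] - gamma[i])
                        for j in range(b) if j != i) for i in range(b)])
    S = 1.0 - mix_cdf(v, *theta)
    return C * sum(ki * S ** gi / gi for ki, gi in zip(kappa, gamma))

def bpi(b, R_star, draws, tau=0.05):
    """Solve (36)-(37) for (L, U): bisection on the MCMC-averaged survival."""
    mu = len(draws)
    def pred_surv(v):
        return sum(surv_b(v, b, R_star, th) for th in draws) / mu
    def solve(q):
        lo, hi = 0.0, 1.0
        while pred_surv(hi) > q:                  # expand the upper bracket
            hi *= 2.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if pred_surv(mid) > q else (lo, mid)
        return 0.5 * (lo + hi)
    return solve(1.0 - tau / 2.0), solve(tau / 2.0)
```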

2) Point prediction:

a) The Bayesian prediction point (BPP) for the future observation ${y}_{b}$ based on the BSEL function can be obtained using

${\stackrel{\u02dc}{y}}_{b\left(BS\right)}=\Omega {\stackrel{^}{y}}_{b\left(ML\right)}+\left(1-\Omega \right)E\left({y}_{b}|t\right),$ (38)

where ${\stackrel{^}{y}}_{b\left(ML\right)}$ is the ML prediction of the future observation ${y}_{b}$, obtained from (33), and $E\left({y}_{b}|t\right)$ can be obtained using

$E\left({y}_{b}|t\right)={\displaystyle {\int}_{0}^{\infty}{y}_{b}{\Psi}^{*}\left({y}_{b}|t\right)\text{d}{y}_{b}.}$ (39)

b) The BPP for the future observation ${y}_{b}$ based on the BLINX loss function can be obtained using

${\stackrel{^}{y}}_{b\left(BL\right)}=-\frac{1}{a}\mathrm{ln}\left[\Omega \mathrm{exp}\left[-a{\stackrel{^}{y}}_{b\left(ML\right)}\right]+\left(1-\Omega \right)E\left({\text{e}}^{-a{y}_{b}}|t\right)\right],$ (40)

where ${\stackrel{^}{y}}_{b\left(ML\right)}$ is the ML prediction of the future observation ${y}_{b}$, obtained from (33), and $E\left({\text{e}}^{-a{y}_{b}}|t\right)$ can be obtained using

$E\left({\text{e}}^{-a{y}_{b}}|t\right)={\displaystyle {\int}_{0}^{\infty}{\text{e}}^{-a{y}_{b}}{\Psi}^{*}\left({y}_{b}|t\right)\text{d}{y}_{b}}.$ (41)
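Once ${\stackrel{^}{y}}_{b\left(ML\right)}$ and the posterior expectations in (39) and (41) are available (e.g. by averaging over the MCMC output), the two balanced predictors are simple combinations; a minimal sketch (function names are ours):

```python
import math

def bpp_bsel(omega, y_ml, post_mean):
    """Balanced squared-error point predictor, Equation (38)."""
    return omega * y_ml + (1.0 - omega) * post_mean

def bpp_blinx(omega, a, y_ml, post_exp):
    """Balanced LINEX point predictor, Equation (40);
    post_exp = E[exp(-a * Y_b) | t] from (41)."""
    return -(1.0 / a) * math.log(omega * math.exp(-a * y_ml)
                                 + (1.0 - omega) * post_exp)
```

At $\Omega =1$ both reduce to the ML prediction; at $\Omega =0$ they reduce to the usual SEL and LINEX Bayes predictors.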

5. Simulation Studies

In this section, numerical examples are provided to demonstrate the theoretical results given in this paper. All computations were performed using Mathematica ver. 8.0.

To generate progressively Type-II censored Pareto samples, we used the algorithm proposed by [34] . The MLEs and Bayes estimates of the parameters are computed and compared in a Monte Carlo simulation study according to the following steps:

1) For given values of the parameters, ${n}_{s}$ and ${m}_{s}\left(1\le {m}_{s}\le {n}_{s}\right),\text{}s=1,2,$ we generate progressively Type-II censored samples from the MTP distribution as follows:

a) For given values of ${m}_{s}$ , we generate two independent random samples of sizes ${m}_{1}$ and ${m}_{2}$ from the Uniform(0,1) distribution: $\left({U}_{s1},{U}_{s2},\cdots ,{U}_{s{m}_{s}}\right),\text{}s=1,2.$

b) For given values of the progressive censoring scheme ${R}_{si},\text{}s=1,2,\text{}i=1,\cdots ,{m}_{s},$ we set ${E}_{si}=1/\left(i+{{\displaystyle \sum}}_{\kappa ={m}_{s}-i+1}^{{m}_{s}}{R}_{s\kappa}\right).$

c) Set ${V}_{si}={U}_{si}^{{E}_{si}}.$

d) Set ${U}_{si}^{*}=1-{\displaystyle {\prod}_{\kappa ={m}_{s}-i+1}^{{m}_{s}}{V}_{s\kappa}},\text{}s=1,2,i=1,\cdots ,{m}_{s}.$

e) For given values of $p,\text{}{\alpha}_{j},\text{}{\beta}_{j},\text{}{\lambda}_{j}$ and ${n}_{s},\text{}{m}_{s}$ , solve the following equation for ${t}_{si}$ :

${U}_{si}^{*}=p\left[1-{\beta}_{1}^{{\lambda}_{1}^{\left(s-1\right)}{\alpha}_{1}}{\left({\beta}_{1}+{t}_{si}\right)}^{-{\lambda}_{1}^{\left(s-1\right)}{\alpha}_{1}}\right]+\left(1-p\right)\left[1-{\beta}_{2}^{{\lambda}_{2}^{\left(s-1\right)}{\alpha}_{2}}{\left({\beta}_{2}+{t}_{si}\right)}^{-{\lambda}_{2}^{\left(s-1\right)}{\alpha}_{2}}\right],$

This yields the required progressively Type-II censored samples of sizes ${m}_{s}$ from the MTP distribution under constant-PALT.

2) The MLEs of the parameters are obtained by solving the nonlinear Equations (13) numerically.

3) Based on the BSEL function, the Bayes estimates of the parameters are computed from (28) using the MCMC method described above.
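Steps (a)-(e) of step 1) can be sketched as follows. Since the mixture CDF in step (e) has no closed-form inverse, the sketch inverts it by bisection (our choice; the paper does not specify the inversion method), and all names are illustrative:

```python
import math
import random

def mtp_cdf(t, s, p, alpha, beta, lam):
    """MTP mixture CDF under constant-PALT; s = 1 (use condition) or
    s = 2 (accelerated), with exponent lambda_j^(s-1) * alpha_j as in step (e)."""
    return sum(pj * (1.0 - (bj / (bj + t)) ** (lj ** (s - 1) * aj))
               for pj, aj, bj, lj in zip((p, 1 - p), alpha, beta, lam))

def invert_cdf(u, s, theta):
    """Step (e): invert the mixture CDF numerically by bisection."""
    hi = 1.0
    while mtp_cdf(hi, s, *theta) < u:    # expand the upper bracket
        hi *= 2.0
    lo = 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mtp_cdf(mid, s, *theta) < u else (lo, mid)
    return 0.5 * (lo + hi)

def progressive_sample(m, R, s, theta, rng):
    """Steps (a)-(d) of the algorithm of [34], followed by step (e)."""
    U = [rng.random() for _ in range(m)]                           # (a)
    E = [1.0 / (i + sum(R[m - i:])) for i in range(1, m + 1)]      # (b)
    V = [u ** e for u, e in zip(U, E)]                             # (c)
    Ustar = [1.0 - math.prod(V[m - i:]) for i in range(1, m + 1)]  # (d)
    return [invert_cdf(u, s, theta) for u in Ustar]                # (e)
```

The returned times are ordered, as required of a progressively Type-II censored sample of size ${m}_{s}$.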

Simulation studies have been performed to illustrate the theoretical results of the estimation problem. The performance of the resulting estimators of the acceleration, shape and scale parameters has been assessed in terms of their average (AVG), relative absolute bias (RAB) and mean square error (MSE), where

$\begin{array}{c}{\stackrel{\xaf}{\stackrel{^}{\Phi}}}_{k}=\left(1/M\right){\displaystyle \underset{i=1}{\overset{M}{\sum}}{\stackrel{^}{\Phi}}_{k}^{\left(i\right)}}\\ k=1,2,\cdots ,5,\left({\Phi}_{1}=p,{\Phi}_{2}={\lambda}_{1},{\Phi}_{3}={\lambda}_{2},{\Phi}_{4}={\beta}_{1},{\Phi}_{5}={\beta}_{2}\right),\end{array}$

$RAB=\frac{\left|{\stackrel{\xaf}{\stackrel{^}{\Phi}}}_{k}-{\Phi}_{k}\right|}{{\Phi}_{k}},$

$MSE=\left(1/M\right){\displaystyle {\sum}_{i=1}^{M}{\left({\stackrel{^}{\Phi}}_{k}^{\left(i\right)}-{\Phi}_{k}\right)}^{2}}$ .
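These three summary measures can be computed with a small helper (illustrative name, applied per parameter across the $M$ Monte Carlo replications):

```python
def summarize(estimates, true_value):
    """AVG, RAB and MSE of Monte Carlo estimates of a single parameter."""
    M = len(estimates)
    avg = sum(estimates) / M                                   # AVG
    rab = abs(avg - true_value) / true_value                   # RAB
    mse = sum((e - true_value) ** 2 for e in estimates) / M    # MSE
    return avg, rab, mse
```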

In our study, we have used three different censoring schemes (C.S), namely:

Scheme I: ${R}_{{m}_{s}}={n}_{s}-{m}_{s},\text{}{R}_{i}=0$ for $i\ne {m}_{s}$ .

Scheme II: ${R}_{1}={n}_{s}-{m}_{s},{R}_{i}=0$ for $i\ne 1$ .

Scheme III: ${R}_{\left(\left({m}_{s}+1\right)/2\right)}={n}_{s}-{m}_{s},\text{}{R}_{i}=0$ for $i\ne \left({m}_{s}+1\right)/2$ , if ${m}_{s}$ is odd; and ${R}_{\left({m}_{s}/2\right)}={n}_{s}-{m}_{s},\text{}{R}_{i}=0$ for $i\ne {m}_{s}/2$ , if ${m}_{s}$ is even.
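The three schemes can be encoded as removal vectors of length ${m}_{s}$ (`scheme` is an illustrative helper name):

```python
def scheme(n, m, kind):
    """Censoring-scheme vectors: all n - m removals at the last ('I'),
    first ('II') or middle ('III') observed failure; R_i = 0 elsewhere."""
    R = [0] * m
    # Scheme III: position (m+1)/2 if m is odd, m/2 if m is even;
    # (m + 1) // 2 - 1 gives the correct 0-based index in both cases.
    idx = {"I": m - 1, "II": 0, "III": (m + 1) // 2 - 1}[kind]
    R[idx] = n - m
    return R
```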

In the simulation studies, we consider two cases separately:

a) The population parameter values $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,p=0.5\right)$ , the sample sizes $\left({n}_{1}={n}_{2}=n\right)$ and observed numbers of failures $\left({m}_{1}={m}_{2}=m\right)$ ; the results are shown in Table 1. The progressive censoring schemes used in this case are displayed in Table 2.

b) The population parameter values $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,p=0.5\right)$ , the sample sizes $\left({n}_{1}\ne {n}_{2}\right)$ and observed numbers of failures $\left({m}_{1}\ne {m}_{2}\right)$ ; the results are shown in Table 3. Figure 1 and Figure 2 represent the MSE and RAB of the estimates of $\theta =\left(p,{\alpha}_{1},{\alpha}_{2},{\beta}_{1},{\beta}_{2}\right)$ when the sample sizes $\left({n}_{1}={n}_{2}=n\right)$ , while Table 4 gives the progressive censoring schemes used in the simulation study at ${n}_{1}\ne {n}_{2}$ and ${m}_{1}\ne {m}_{2}$ .

Table 1. MLEs and Bayes estimates of the parameters and their MSEs and RABs at $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,\Omega =0.5,p=0.5\right)$ .

Table 2. Progressive censoring schemes used in simulation study at ${n}_{1}={n}_{2}=n$ and ${m}_{1}={m}_{2}=m$ .

The ML prediction (point and interval) and Bayesian prediction (point and interval) are computed according to the following steps:

Generate ${\theta}^{i}=\left({p}^{i},{\beta}_{1}{}^{i},{\beta}_{2}{}^{i},{\lambda}_{1}{}^{i},{\lambda}_{2}{}^{i}\right)$ from the posterior PDF using the MCMC algorithm.

Solving Equations (32), we obtain the 95% MLPI for the ${b}^{\text{th}}$ order statistic in a future progressively Type-II censored sample; the MLPP for the future observation ${y}_{b}$ is computed using (33).

Table 3. MLEs and Bayes estimates of the parameters and their MSEs and RABs at $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,\Omega =0.5,p=0.5\right)$ .

Table 4. Progressive censoring schemes used in simulation study at ${n}_{1}\ne {n}_{2}$ and ${m}_{1}\ne {m}_{2}\text{.}$

Figure 1. Mean square error (MSE) of the estimates of $\theta =\left(p,{\alpha}_{1},{\alpha}_{2},{\beta}_{1},{\beta}_{2}\right)$ when the sample sizes $\left({n}_{1}={n}_{2}=n\right).$

Figure 2. Relative absolute bias (RAB) of the estimates of $\theta =\left(p,{\alpha}_{1},{\alpha}_{2},{\beta}_{1},{\beta}_{2}\right)$ when the sample sizes $\left({n}_{1}={n}_{2}=n\right).$

Table 5. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,$ $i=1,2,\cdots ,M,$ C.S I and $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,\Omega =0.5,p=0.5\right)$ .

Table 6. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,$ $i=1,2,\cdots ,M,$ C.S II and $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,\Omega =0.5,p=0.5\right)$ .

The 95% BPI for the future observation ${y}_{b}$ are obtained by solving Equations (36) and (37).

Table 7. Point and 95% interval predictors for ${Y}_{b}^{*},b=1,$ when $N=M=10,{R}_{i}^{*}=0,$ $i=1,2,\cdots ,M,$ C.S III and $\left({\alpha}_{1}=1.1,{\alpha}_{2}=2.3,{\beta}_{1}=0.3,{\beta}_{2}=0.7,{\lambda}_{1}=1.5,{\lambda}_{2}=2,\Omega =0.5,p=0.5\right)$ .

The BPP for the future observation ${y}_{b}$ is computed based on the BSEL function using (38) and based on the BLINX loss function using (40).

Generate $10,000$ progressively Type-II censored samples, each of size $M$ , from a population whose CDF is given by (7) with ${R}_{i}^{*},\text{}i=1,2,\cdots ,M,$ and then calculate the coverage percentage (CP) of ${Y}_{b}$ . For simplicity, we consider ${R}_{i}^{*}=0,\text{}i=1,2,\cdots ,M,$ which corresponds to ordinary order statistics, and $M=N=10.$

6. Conclusions

Progressive Type-II censoring is of great importance in planning duration experiments in reliability studies. It has been shown by [53] that inference is possible and practical when the sample data are gathered according to a progressive Type-II censoring scheme. This paper dealt with constant-PALT in the case of progressive Type-II censoring. It is assumed that the lifetimes of the test units follow the MTP distributions. MLEs and BEs of the acceleration factor and the parameters under consideration are derived. The BEs were obtained under the assumptions of the BSEL function and non-informative priors (NIPs). Since the BEs cannot be obtained in explicit form, the MCMC method was used to compute them. This clearly shows the scope of MCMC-based Bayesian solutions, which make every inferential development routinely available.

From the result, we observe the following:

It is noticed from the numerical calculations that the Bayes estimates under the BSEL function have the smallest MSEs as compared with their corresponding MLEs.

In general, as the effective sample proportion $m/n$ increases, the MSEs and RABs of the considered parameters decrease.

For fixed values of the sample size and number of failures, Scheme II, in which the censoring occurs after the first observed failure, gives more accurate results in terms of MSEs and RABs than the other schemes; this coincides with Theorem 2.2 of [54] .

The MLEs of ${\beta}_{1}$ are better than the BEs in general.

In most cases, we observed that when the sample size increased, the MSEs and RABs decreased for all censoring schemes.

The results in Tables 5-7 show that the lengths of the prediction intervals obtained by the ML procedure are shorter than those obtained by the Bayes procedure.

The simulation results show that the attained prediction levels are satisfactory compared with the nominal prediction level of 95%.

Cite this paper

Abushal, T.A. and AL-Zaydi, A.M. (2017) Inference on Constant-Partially Accelerated Life Tests for Mixture of Pareto Distributions under Pro- gressive Type-II Censoring. Open Journal of Statistics, 7, 323-346. https://doi.org/10.4236/ojs.2017.72024

References

- 1. Nelson, W. (1990) Accelerated Life Testing: Statistical Models, Data Analysis and Test Plans. John Wiley and Sons, New York. https://doi.org/10.1002/9780470316795
- 2. Bagdonavicius, V. and Nikulin, M. (2002) Accelerated Life Models: Modeling and Statistical Analysis. Chapman and Hall/CRC Press, Boca Raton, Florida.
- 3. Kim, C.M. and Bai, D.S. (2002) Analysis of Accelerated Life Test Data under Two Failure Modes. International Journal of Reliability, Quality and Safety Engineering, 9, 111-125. https://doi.org/10.1142/S0218539302000706
- 4. AL-Hussaini, E.K. and Abdel-Hamid, A.H. (2004) Bayesian Estimation of the Parameters, Reliability and Hazard Rate Functions of Mixtures under Accelerated Life Tests. Communications in Statistics-Simulation and Computation, 33, 963-982.
- 5. AL-Hussaini, E.K. and Abdel-Hamid, A.H. (2006) Accelerated Life Tests under Finite Mixture Models. Journal of Statistical Computation and Simulation, 76, 673-690. https://doi.org/10.1080/10629360500108087
- 6. Bai, D.S., Cha, M.S. and Chung, S.W. (1992) Optimum Simple Ramp-Tests for the Weibull Distribution and Type-I Censoring. IEEE Transactions on Reliability, 41, 407-413. https://doi.org/10.1109/24.159808
- 7. Wang, R. and Fei, H. (2004) Inference of Weibull Distribution for Tampered Failure Rate Model in Progressive Stress Accelerated Life Testing. Journal of Systems Science and Complexity, 17, 237-243.
- 8. Abdel-Hamid, A.H. and Al-Hussaini, E.K. (2007) Progressive Stress Accelerated Life Tests under Finite Mixture Models. Metrika, 66, 213-231. https://doi.org/10.1007/s00184-006-0106-3
- 9. Miller, R. and Nelson, W. (1983) Optimum Simple Step-Stress Plans for Accelerated Life Testing. IEEE Transactions on Reliability, R-32, 59-65. https://doi.org/10.1109/TR.1983.5221475
- 10. Bai, D.S., Kim, M.S. and Lee, S.H. (1989) Optimum Simple Step-Stress Accelerated Life Tests with Censoring. IEEE Transactions on Reliability, 38, 528-532. https://doi.org/10.1109/24.46476
- 11. Gouno, E., Sen, A. and Balakrishnan, N. (2004) Optimal Step-Stress Test under Progressive Type-I Censoring. IEEE Transactions on Reliability, 53, 388-393. https://doi.org/10.1109/TR.2004.833320
- 12. Fan, T.H., Wang, W.L. and Balakrishnan, N. (2008) Exponential Progressive Step-Stress Life-Testing with Link Function Based on Box-Cox Transformation. Journal of Statistical Planning and Inference, 138, 2340-2354.
- 13. Ma, H. and Meeker, W.Q. (2008) Optimum Step-Stress Accelerated Life Test Plans for Log-Location-Scale Distributions. Naval Research Logistics, 55, 551-562. https://doi.org/10.1002/nav.20299
- 14. Nelson, W. (2008) Residuals and Their Analysis for Accelerated Life Tests with Step and Varying Stress. IEEE Transactions on Reliability, 57, 360-368. https://doi.org/10.1109/TR.2008.920789
- 15. Wu, S.J. and Lin, Y.P. (2008) Optimal Step-Stress Test under Type I Progressive Group-Censoring with Random Removals. Journal of Statistical Planning and Inference, 138, 817-826.
- 16. Abdel-Hamid, A.H. and AL-Hussaini, E.K. (2008) Step Partially Accelerated Life Tests under Finite Mixture Models. Journal of Statistical Computation and Simulation, 78, 911-924. https://doi.org/10.1080/00949650701447084
- 17. Abdel-Hamid, A.H. (2009) Constant-Partially Accelerated Life Tests for Burr Type-XII Distribution with Progressive Type-II Censoring. Computational Statistics & Data Analysis, 53, 2511-2523.
- 18. Goel, P.K. (1971) Some Estimation Problems in the Study of Tampered Random Variables. Technical Rep. No. 50, Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania.
- 19. DeGroot, M.H. and Goel, P.K. (1979) Bayesian Estimation and Optimal Designs in Partially Accelerated Life Testing. Naval Research Logistics, 26, 223-235. https://doi.org/10.1002/nav.3800260204
- 20. Abdel-Ghani, M.M. (1998) Investigation of Some Lifetime Models under Partially Accelerated Life Tests. PhD Thesis, Department of Statistics, Faculty of Economics & Political Science, Cairo University, Cairo.
- 21. Ismail, A.A. (2004) The Test Design and Parameter Estimation of Pareto Lifetime Distribution under Partially Accelerated Life Tests. PhD Thesis, Department of Statistics, Faculty of Economics and Political Science, Cairo University, Cairo.
- 22. Ismail, A.A. (2009) Bayes Estimation of Gompertz Distribution Parameters and Acceleration Factor under Partially Accelerated Life Tests with Type-I Censoring. Journal of Statistical Computation and Simulation, 80, 1253-1264. https://doi.org/10.1080/00949650903045058
- 23. Arnold, B.C. (1983) Pareto Distributions. International Cooperative Publishing House, Fairland, MD.
- 24. Vidondo, B., Prairie, Y.T., Blanco, J.M. and Duarte, C.M. (1997) Some Aspects of the Analysis of Size Spectra in Aquatic Ecology. Limnology and Oceanography, 42, 184-192. https://doi.org/10.4319/lo.1997.42.1.0184
- 25. Childs, A., Balakrishnan, N. and Moshref, M. (2001) Order Statistics from Non-Identical Right Truncated Lomax Random Variables with Applications. Statistical Papers, 42, 187-206. https://doi.org/10.1007/s003620100050
- 26. Al-Awadhi, S.A. and Ghitany, M.E. (2001) Statistical Properties of Poisson-Lomax Distribution and Its Application to Repeated Accidents Data. Journal of Applied Statistical Science, 10, 365-372.
- 27. Howlader, H.A. and Hossain, A.M. (2002) Bayesian Survival Estimation of Pareto Distribution of the Second Kind Based on Failure Censored Data. Computational Statistics & Data Analysis, 38, 301-314.
- 28. Abd-Elfattah, A.M., Alaboud, F.M. and Alharby, A.H. (2007) On Sample Size Estimation for Lomax Distribution. Australian Journal of Basic and Applied Sciences, 1, 373-378.
- 29. Hassan, A.S. and Al-Ghamdi, A.S. (2009) Optimum Step Stress Accelerated Life Testing for Lomax Distribution. Journal of Applied Sciences Research, 5, 2153-2164.
- 30. Pearson, K. (1894) Contributions to the Mathematical Theory of Evolution. Philosophical Transactions of the Royal Society Series A, 185, 71-110. https://doi.org/10.1098/rsta.1894.0003
- 31. Richardson, S. and Green, P.J. (1997) On Bayesian Analysis of Mixtures with an Unknown Number of Components (with Discussion). Journal of the Royal Statistical Society: Series B, 59, 731-792. https://doi.org/10.1111/1467-9868.00095
- 32. Ahmad, K.E., Jaheen, Z.F. and Mohammed, H.S. (2011) Finite Mixture of Burr Type XII Distribution and Its Reciprocal: Properties and Applications. Statistical Papers, 52, 835-845. https://doi.org/10.1007/s00362-009-0290-0
- 33. Ahmad, K.E., Jaheen, Z.F. and Mohammed, H.S. (2011) Bayesian Estimation under a Mixture of Burr Type XII Distribution and Its Reciprocal. Journal of Statistical Computation and Simulation, 81, 2121-2130. https://doi.org/10.1080/00949655.2010.519703
- 34. Balakrishnan, N. and Aggarwala, R. (2000) Progressive Censoring: Theory, Methods and Applications. Birkhauser, Boston. https://doi.org/10.1007/978-1-4612-1334-5
- 35. Balakrishnan, N. (2007) Progressive Censoring Methodology: An Appraisal (with Discussions). Test, 16, 211-296. https://doi.org/10.1007/s11749-007-0061-y
- 36. AL-Hussaini, E.K. (1999) Predicting Observables from a General Class of Distributions. Journal of Statistical Planning and Inference, 79-91.
- 37. AL-Hussaini, E.K. and Ahmad, A.A. (2003) On Bayesian Interval Prediction of Future Records. Test, 12, 79-99. https://doi.org/10.1007/BF02595812
- 38. Jaheen, Z.F. (2003) Prediction of Progressive Censored Data from the Gompertz Model. Communications in Statistics-Simulation and Computation, 32, 663-676. https://doi.org/10.1081/SAC-120017855
- 39. Ali Mousa, M.A.M. and AL-Sagheer, S.A. (2005) Bayesian Prediction for Progressively Type-II Censored Data from the Rayleigh Model. Communications in Statistics-Simulation and Computation, 34, 2353-2361. https://doi.org/10.1080/03610920500313767
- 40. Xiong, C. and Milliken, G.A. (2002) Prediction for Exponential Lifetimes Based on Step-Stress Testing. Communications in Statistics-Simulation and Computation, 31, 539-556.
- 41. Kundu, D. and Howlader, H. (2010) Bayesian Inference and Prediction of the Inverse Weibull Distribution for Type-II Censored Data. Computational Statistics & Data Analysis, 54, 1547-1558.
- 42. Kundu, D. and Raqab, M.Z. (2012) Bayesian Inference and Prediction of Order Statistics for a Type-II Censored Weibull Distribution. Journal of Statistical Planning and Inference, 142, 41-47.
- 43. Zellner, A. (1994) Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions. In: Berger, J.O. and Gupta, S.S., Eds., Statistical Decision Theory and Methods, Springer, New York.
- 44. Jozani, M.J., Marchand, E. and Parsian, A. (2006) Bayes Estimation under a General Class of Balanced Loss Functions. Rapport Derecherche 36, Departement de Mathematiques, Universite de Sherbrooke.
- 45. Upadhyay, S.K., Agrawal, R. and Smith, A.F.M. (1996) Bayesian Analysis of Inverse Gaussian Non-Linear Regression by Simulation. Sankhyā B, 58, 363-378.
- 46. Ahmadi, J., Jozani, M.J., Marchand, E. and Parsian, A. (2009) Bayes Estimation Based on k-Record Data from a General Class of Distributions under Balanced Type Loss Functions. Journal of Statistical Planning and Inference, 139, 1180-1189.
- 47. Ahmadi, J., Jozani, M.J., Marchand, E. and Parsian, A. (2009) Prediction of k-Records from a General Class of Distributions under Balanced Type Loss Functions. Metrika, 70, 19-33. https://doi.org/10.1007/s00184-008-0176-5
- 48. Upadhyay, S.K., Vasishta, N. and Smith, A.F.M. (2001) Bayes Inference in Life Testing and Reliability via Markov Chain Monte Carlo Simulation. Sankhyā A, 63, 15-40.
- 49. Press, S.J. (2003) Subjective and Objective Bayesian Statistics: Principles, Models and Applications. IEEE Transactions on Reliability, 57, 435-444.
- 50. Upadhyay, S.K. and Gupta, A. (2010) A Bayes Analysis of Modified Weibull Distribution via Markov Chain Monte Carlo Simulation. Journal of Statistical Computation and Simulation, 80, 241-254. https://doi.org/10.1080/00949650802600730
- 51. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953) Equations of State Calculations by Fast Computing Machines. Journal Chemical Physics, 21, 1087-1091. https://doi.org/10.1063/1.1699114
- 52. Robert, C.P. and Casella, G. (2004) Monte Carlo Statistical Methods. Springer, New York. https://doi.org/10.1007/978-1-4757-4145-2
- 53. Viveros, R. and Balakrishnan, N. (1994) Interval Estimation of Parameters of Life from Progressively Censored Data. Technometrics, 36, 84-91.
- 54. Burkschat, M., Cramer, E. and Kamps, U. (2006) On Optimal Schemes in Progressive Censoring. Statistics & Probability Letters, 76, 1032-1036.