Journal of Statistical Theory and Applications

Volume 19, Issue 2, June 2020, Pages 197 - 211

A 3-Component Mixture of Exponential Distribution Assuming Doubly Censored Data: Properties and Bayesian Estimation

Authors
Muhammad Tahir1, *, Muhammad Aslam2, Muhammad Abid1, Sajid Ali3, Mohammad Ahsanullah4
1Department of Statistics, Government College University, Faisalabad, Pakistan
2Department of Mathematics and Statistics, Riphah International University, Islamabad, Pakistan
3Department of Statistics, Quaid-i-Azam University, Islamabad, Pakistan
4Department of Management Sciences, Rider University, Lawrenceville, NJ, 08648, USA
*Corresponding author. Email: tahirqaustat@yahoo.com
Received 18 November 2018, Accepted 11 December 2019, Available Online 20 May 2020.
DOI
10.2991/jsta.d.200508.002
Keywords
Mixture model; Doubly censoring sampling; Priors; Bayes estimators; Loss function; Posterior risks
Abstract

The output of an engineering process results from several inputs, which may be homogeneous or heterogeneous, and studying them requires a model flexible enough to summarize the nature of such processes efficiently. Compared to simple models, mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process in survival analysis and reliability studies. Moreover, due to time and cost constraints, censoring is an unavoidable feature of most lifetime testing experiments. This article focuses on a mixture of exponential distributions, considered for three reasons: its application in reliability modeling of electronic components, its skewed behavior, and, most importantly, its memory-less property. In particular, we deal with the problem of estimating the parameters of a 3-component mixture of exponential distributions under a type-II doubly censored sampling scheme. Elegant closed-form expressions for the Bayes estimators and their posterior risks are derived under the squared error loss function, the precautionary loss function and the DeGroot loss function, assuming noninformative (uniform and Jeffreys') and informative priors. A detailed Monte Carlo simulation and real data studies are carried out to investigate the performance (in terms of posterior risks) of the Bayes estimators. The results show that the Bayes estimates assuming the informative prior perform better than those assuming the noninformative priors.

Copyright
© 2020 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

The exponential distribution has been used successfully to model lifetimes of industrial objects. It is mathematically tractable and fits a wide range of empirical lifetime data. Because of its memory-less property, it is commonly used in testing objects whose remaining lifetimes do not depend on their age.

Mixture models have played a significant role in many real-life applications over the last few decades. Finite mixtures of lifetime distributions have proved to be of considerable interest both in terms of their methodological development and practical applications. As defined in Mendenhall and Hader [1], for practical purposes, an engineer may divide the causes of failure of a system or device into two or more types. For example, to determine the proportion of failures due to a certain cause and to improve the manufacturing process, Acheson and McElwee [2] divided causes of electronic tube failures into gaseous defects, mechanical defects and normal deterioration of the cathode. Similarly, an engineering system may be composed of different homogeneous and/or heterogeneous subsystems. Instead of single probability models, heterogeneity in such systems can be captured through mixture models. Another important feature of mixture models is that when a population comprises a number of subpopulations mixed in unknown proportions, commonly available distributions cannot represent the situation at hand. Some applications of mixtures of exponential distributions include McCullagh [3], Hebert and Scariano [4], Raqab and Ahsanullah [5], Ali et al. [6] and Abu-Taleb et al. [7]. Direct applications of mixture models can be found in industrial engineering (Ali et al. [8]), medicine (Chivers [9] and Burekhardt [10]), biology (Bhattacharya [11] and Gregor [12]), social sciences (Harris [13]), economics (Jedidi et al. [14]), life testing (Shawky and Bakoban [15]) and reliability analysis (Sultan et al. [16]).

In many applications, data can be considered as coming from a mixture of two or more distributions. This motivates the researchers to mix common statistical distributions to get a new distribution. Several authors have extensively applied 2-component mixture modeling in different practical problems using the Bayesian approach. For instance, we refer to Liu [17], Saleem and Irfan [18], Saleem et al. [19], Santos [20], Al-Hussaini and Hussein [21], Mohammadi and Salehi-Rad [22], Kazmi et al. [23], Ahmad and Al-Zaydi [24], Ali et al. [25], Mohammadi et al. [26], Ali [27], Ateya [28], Feroze and Aslam [29], Mohamed et al. [30] and Zhang and Huang [31] for the Bayesian estimation of mixture models. However, limited work is available in the literature on the Bayesian analysis of the 3-component mixture distribution.

Due to time and cost limitations, it is often impossible to continue testing until the last observation. Therefore, values greater than a pre-fixed life-test termination time are taken as censored observations. It is worth mentioning that censoring is a property of data sets, not of parameters, and it is commonly used in lifetime experiments. A valuable account of censoring is given in Romeu [32], Gijbels [33], Kalbfleisch and Prentice [34] and the references cited therein.

Motivated by the applications of the exponential distribution and mixture models, the focus of the present article is to develop a 3-component mixture of exponential distributions (3-CMED) from the Bayesian perspective. We assume that all the parameters of the 3-CMED are unknown and estimate them by considering different priors and loss functions. In addition, a type-II doubly censored sampling scheme is considered.

The rest of the article is arranged as follows: The development of the 3-CMED is given in Section 2. The sampling scheme and the likelihood function of the mixture model are defined in Section 3. The joint posterior distributions assuming the noninformative and the informative priors are derived in Sections 4 and 5, respectively. In Section 6, Bayesian estimation under the squared error loss function (SELF), the precautionary loss function (PLF) and the DeGroot loss function (DLF) is presented. The posterior predictive distribution and the Bayesian predictive intervals are described in Section 7. The elicitation of hyperparameters is discussed in Section 8. The simulation study and the real-life data application are presented in Sections 9 and 10, respectively, followed by some concluding remarks.

2. A 3-COMPONENT MIXTURE OF THE EXPONENTIAL DISTRIBUTIONS

As defined by Barger [35] and Střelec and Stehlík [36], the probability density function of a finite 3-CMED with mixing proportions p1 and p2, is given by

$$f(y;\Phi)=p_1 f_1(y;\Phi_1)+p_2 f_2(y;\Phi_2)+(1-p_1-p_2)f_3(y;\Phi_3),\qquad p_1,p_2\ge 0,\; p_1+p_2\le 1,\tag{1}$$
where $\Phi=(\theta_1,\theta_2,\theta_3,p_1,p_2)$, $\Phi_m=\theta_m$, $m=1,2,3$, and $f_m(y;\Phi_m)$ is the pdf of the $m$th component, defined as
$$f_m(y;\Phi_m)=\theta_m\exp(-\theta_m y),\qquad 0<y<\infty,\;\theta_m>0,\;m=1,2,3.$$

The cdf of a 3-component mixture of the exponential distributions is defined as

$$F(y;\Phi)=p_1 F_1(y;\Phi_1)+p_2 F_2(y;\Phi_2)+(1-p_1-p_2)F_3(y;\Phi_3),\tag{2}$$
where $F_m(y;\Phi_m)$, the cdf of the $m$th component, is
$$F_m(y;\Phi_m)=1-\exp(-\theta_m y),\qquad 0<y<\infty,\;\theta_m>0,\;m=1,2,3.$$
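For readers who want to experiment numerically, the density (1) and distribution function (2) can be coded directly. The following is a minimal Python sketch (the function names are ours, not from the paper), using the parameter values of the later simulation study:

```python
import math

def mixture_pdf(y, theta, p):
    """Eq. (1): density of the 3-component exponential mixture."""
    weights = (p[0], p[1], 1.0 - p[0] - p[1])
    return sum(w * t * math.exp(-t * y) for w, t in zip(weights, theta))

def mixture_cdf(y, theta, p):
    """Eq. (2): distribution function of the 3-component exponential mixture."""
    weights = (p[0], p[1], 1.0 - p[0] - p[1])
    return sum(w * (1.0 - math.exp(-t * y)) for w, t in zip(weights, theta))

# Parameter values used later in the simulation study
theta, p = (2.0, 3.0, 4.0), (0.2, 0.4)
```

A quick sanity check is that a central finite difference of the cdf recovers the pdf.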

The following two theorems give characterizations of the 3-CMED by the truncated first moment.

Theorem 1

Suppose that the random variable $Y$ satisfies the conditions given in Assumption A, with pdf $f(y;\Phi)$ and cdf $F(y;\Phi)$, with $\alpha=0$ and $\beta=\infty$. Then $E(Y\mid Y\le y)=g(y;\Phi)\,\tau(y;\Phi)$, where $\tau(y;\Phi)=\dfrac{f(y;\Phi)}{F(y;\Phi)}$ and
$$g(y;\Phi)=\frac{\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2)}{p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y)},$$
with $\Gamma_y(2)=\int_0^y u\,e^{-u}\,du$, if and only if
$$f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y).$$

Proof:

We have $f(y;\Phi)\,g(y;\Phi)=\int_0^y u f(u;\Phi)\,du$.

If $f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y)$, then
$$f(y;\Phi)\,g(y;\Phi)=\int_0^y u\left[p_1\theta_1 e^{-\theta_1 u}+p_2\theta_2 e^{-\theta_2 u}+(1-p_1-p_2)\theta_3 e^{-\theta_3 u}\right]du=\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2),$$
where $\Gamma_y(2)=\int_0^y u\,e^{-u}\,du$. Thus
$$g(y;\Phi)=\frac{\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2)}{p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}}.$$

Conversely, differentiating $f(y;\Phi)\,g(y;\Phi)=\int_0^y u f(u;\Phi)\,du$ gives $f'g+fg'=yf$, so that
$$g'(y;\Phi)=y+g(y;\Phi)\,p(y;\Phi),$$
where
$$p(y;\Phi)=\frac{p_1\theta_1^2 e^{-\theta_1 y}+p_2\theta_2^2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3^2 e^{-\theta_3 y}}{p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}}.$$

By Lemma 1, we obtain
$$\frac{f'(y;\Phi)}{f(y;\Phi)}=\frac{y-g'(y;\Phi)}{g(y;\Phi)}=-p(y;\Phi).$$

Integrating both sides of this equation, we obtain
$$f(y;\Phi)=c\left[p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}\right],$$
where $c$ is a constant. Using the boundary conditions $F(0)=0$ and $F(\infty)=1$, we obtain $c=1$ and hence
$$f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y).$$
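Theorem 1 can be checked numerically: for the 3-CMED, $\int_0^y u f(u;\Phi)\,du=\sum_m \frac{p_m}{\theta_m}\Gamma_{\theta_m y}(2)$, and $\Gamma_z(2)=\int_0^z u e^{-u}du$ has the closed form $1-e^{-z}(1+z)$. A small Python sketch of this check (our own illustration, not part of the paper's proof):

```python
import math

def mixture_pdf(y, theta, p):
    w = (p[0], p[1], 1.0 - p[0] - p[1])
    return sum(wm * t * math.exp(-t * y) for wm, t in zip(w, theta))

def truncated_first_moment_closed(y, theta, p):
    # sum_m (p_m / theta_m) * Gamma_{theta_m y}(2), with Gamma_z(2) = 1 - e^{-z}(1 + z)
    w = (p[0], p[1], 1.0 - p[0] - p[1])
    return sum(wm / t * (1.0 - math.exp(-t * y) * (1.0 + t * y))
               for wm, t in zip(w, theta))

def truncated_first_moment_quad(y, theta, p, n=20000):
    # trapezoidal approximation of the integral of u*f(u) over (0, y)
    h = y / n
    s = 0.5 * (0.0 + y * mixture_pdf(y, theta, p))
    for i in range(1, n):
        u = i * h
        s += u * mixture_pdf(u, theta, p)
    return s * h
```

Both routes should agree to quadrature accuracy for any admissible parameter values.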

Theorem 2

Suppose that the random variable $Y$ satisfies the conditions given in Assumption A, with pdf $f(y;\Phi)$ and cdf $F(y;\Phi)$, with $\alpha=0$ and $\beta=\infty$. Then $E(Y\mid Y>y)=h(y;\Phi)\,r(y;\Phi)$, where $r(y;\Phi)=\dfrac{f(y;\Phi)}{1-F(y;\Phi)}$ and
$$h(y;\Phi)=\frac{E(Y)-\left[\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2)\right]}{p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y)},$$
if and only if
$$f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y).$$

Proof:

We have $f(y;\Phi)\,h(y;\Phi)=\int_y^\infty u f(u;\Phi)\,du=E(Y)-\int_0^y u f(u;\Phi)\,du$.

If $f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y)$, then
$$f(y;\Phi)\,h(y;\Phi)=E(Y)-\int_0^y u\left[p_1\theta_1 e^{-\theta_1 u}+p_2\theta_2 e^{-\theta_2 u}+(1-p_1-p_2)\theta_3 e^{-\theta_3 u}\right]du=E(Y)-\left[\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2)\right].$$

Thus
$$h(y;\Phi)=\frac{E(Y)-\left[\frac{p_1}{\theta_1}\Gamma_{\theta_1 y}(2)+\frac{p_2}{\theta_2}\Gamma_{\theta_2 y}(2)+\frac{(1-p_1-p_2)}{\theta_3}\Gamma_{\theta_3 y}(2)\right]}{p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}}.$$

Conversely, differentiating $f(y;\Phi)\,h(y;\Phi)=\int_y^\infty u f(u;\Phi)\,du$ gives $f'h+fh'=-yf$, so that
$$h'(y;\Phi)=-y+h(y;\Phi)\,p(y;\Phi),\qquad\text{i.e.}\qquad \frac{y+h'(y;\Phi)}{h(y;\Phi)}=p(y;\Phi),$$
where
$$p(y;\Phi)=\frac{p_1\theta_1^2 e^{-\theta_1 y}+p_2\theta_2^2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3^2 e^{-\theta_3 y}}{p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}}.$$

By Lemma 2, we obtain
$$\frac{f'(y;\Phi)}{f(y;\Phi)}=-p(y;\Phi).$$

Integrating both sides of this equation, we obtain
$$f(y;\Phi)=c\left[p_1\theta_1 e^{-\theta_1 y}+p_2\theta_2 e^{-\theta_2 y}+(1-p_1-p_2)\theta_3 e^{-\theta_3 y}\right],$$
where $c$ is a constant. Using the boundary conditions $F(0)=0$ and $F(\infty)=1$, we obtain $c=1$ and hence
$$f(y;\Phi)=p_1\theta_1\exp(-\theta_1 y)+p_2\theta_2\exp(-\theta_2 y)+(1-p_1-p_2)\theta_3\exp(-\theta_3 y).$$

3. LIKELIHOOD FUNCTION FOR A 3-CMED UNDER DOUBLY CENSORED DATA

To explain the construction of the likelihood function, suppose $n$ units are placed in a life-testing experiment, and only the ordered observations $y_{(r)}, y_{(r+1)},\ldots, y_{(w)}$ can be observed. Let $y_{1r_1},\ldots,y_{1w_1}$, $y_{2r_2},\ldots,y_{2w_2}$ and $y_{3r_3},\ldots,y_{3w_3}$ be the failed observations belonging to subpopulation-I, subpopulation-II and subpopulation-III, respectively, so that $s_1=w_1-r_1+1$, $s_2=w_2-r_2+1$ and $s_3=w_3-r_3+1$ failures are observed from the three subpopulations. The $r_m-1$ smallest observations in the $m$th subpopulation and the $n-w$ largest observations overall are censored from the study: observations smaller than $y_r=\min(y_{1r_1},y_{2r_2},y_{3r_3})$ or larger than $y_w=\max(y_{1w_1},y_{2w_2},y_{3w_3})$ are assumed censored. Writing $r=r_1+r_2+r_3$, $w=w_1+w_2+w_3$ and $s=s_1+s_2+s_3$, the number of uncensored observations is $s=w-r+3$ and the remaining $n-w+r-3$ observations are censored. The likelihood function of the type-II doubly censored sample $\mathbf{y}=(y_{1r_1},\ldots,y_{1w_1},y_{2r_2},\ldots,y_{2w_2},y_{3r_3},\ldots,y_{3w_3})$ from the 3-component mixture distribution is

$$L(\Phi|\mathbf{y})\propto\prod_{i=r_1}^{w_1}p_1 f_1(y_{1i})\prod_{i=r_2}^{w_2}p_2 f_2(y_{2i})\prod_{i=r_3}^{w_3}(1-p_1-p_2)f_3(y_{3i})\times\left\{F_1(y_{1r_1})\right\}^{r_1-1}\left\{F_2(y_{2r_2})\right\}^{r_2-1}\left\{F_3(y_{3r_3})\right\}^{r_3-1}\left\{1-F(y_w)\right\}^{n-w}.$$

On simplification, the likelihood function of the 3-CMED becomes

$$L(\Phi|\mathbf{y})\propto\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\exp\left[-\theta_1\left(\sum_{i=r_1}^{w_1}y_{1i}+v_1 y_{1r_1}+(n-w-v_4)y_w\right)\right]\exp\left[-\theta_2\left(\sum_{i=r_2}^{w_2}y_{2i}+v_2 y_{2r_2}+(v_4-v_5)y_w\right)\right]\exp\left[-\theta_3\left(\sum_{i=r_3}^{w_3}y_{3i}+v_3 y_{3r_3}+v_5 y_w\right)\right]\theta_1^{s_1}\theta_2^{s_2}\theta_3^{s_3}\,p_1^{s_1+n-w-v_4}\,p_2^{s_2+v_4-v_5}\,(1-p_1-p_2)^{s_3+v_5}.\tag{3}$$
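The expansion leading to (3) is a finite binomial identity, so it can be verified numerically against the unexpanded likelihood on a toy sample. The sketch below (our own check; the data values, $r_m$ and $n$ are arbitrary illustrative choices, and component labels are assumed known) evaluates both forms:

```python
import math
from itertools import product

def direct_likelihood(obs, r, n, w, yw, theta, p):
    # obs[m]: sorted failure times of component m+1; obs[m][0] = y_{m, r_m}
    weights = (p[0], p[1], 1.0 - p[0] - p[1])
    L = 1.0
    for m in range(3):
        for y in obs[m]:
            L *= weights[m] * theta[m] * math.exp(-theta[m] * y)
        # left-censored part: F_m(y_{m, r_m})^{r_m - 1}
        L *= (1.0 - math.exp(-theta[m] * obs[m][0])) ** (r[m] - 1)
    # right-censored part: {1 - F(y_w)}^{n - w}
    surv = sum(wm * math.exp(-t * yw) for wm, t in zip(weights, theta))
    return L * surv ** (n - w)

def expanded_likelihood(obs, r, n, w, yw, theta, p):
    # five-fold binomial expansion as in Eq. (3)
    p3 = 1.0 - p[0] - p[1]
    s = [len(o) for o in obs]
    S = [sum(o) for o in obs]
    total = 0.0
    for v1, v2, v3 in product(range(r[0]), range(r[1]), range(r[2])):
        sign = (-1) ** (v1 + v2 + v3)
        c123 = math.comb(r[0]-1, v1) * math.comb(r[1]-1, v2) * math.comb(r[2]-1, v3)
        for v4 in range(n - w + 1):
            for v5 in range(v4 + 1):
                term = sign * c123 * math.comb(n - w, v4) * math.comb(v4, v5)
                term *= math.exp(-theta[0] * (S[0] + v1*obs[0][0] + (n-w-v4)*yw))
                term *= math.exp(-theta[1] * (S[1] + v2*obs[1][0] + (v4-v5)*yw))
                term *= math.exp(-theta[2] * (S[2] + v3*obs[2][0] + v5*yw))
                term *= theta[0]**s[0] * theta[1]**s[1] * theta[2]**s[2]
                term *= p[0]**(s[0]+n-w-v4) * p[1]**(s[1]+v4-v5) * p3**(s[2]+v5)
                total += term
    return total
```

On any small data set the two functions agree to floating-point accuracy, which confirms the expansion.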

In the next section, we discuss the joint posterior distribution for Bayesian analysis.

4. THE JOINT POSTERIOR DISTRIBUTIONS ASSUMING THE NIPS

There are situations where no prior information about the parameter(s) of interest is available, or where a researcher is uncomfortable with subjective knowledge. In such situations, one can use noninformative priors (NIPs). Box and Tiao [37] described an NIP as a prior which provides little information relative to the experiment. Similarly, Bernardo [38] argued that an NIP should be regarded as a reference prior, that is, a prior convenient to use as a standard when analyzing statistical data. Later, Bernardo and Smith [39] defined NIPs as priors having minimal effect relative to the data.

In the existing literature, the most commonly used NIPs are the uniform prior (UP) and the Jeffreys' prior (JP). Both priors are used only when no formal prior information is available. To obtain JP, Jeffreys [40,41] suggested a method based on the square-root of the Fisher information. Later on, Geisser [42] also proposed some techniques to determine NIP.

In this section, the joint posterior distributions of parameters given data y are derived assuming the UP and the JP.

4.1. The Joint Posterior Distribution Assuming the UP

To derive the joint posterior distribution, we assume the improper UP for each unknown component parameter $\theta_m$, that is, $\theta_m\sim\text{Uniform}(0,\infty)$, $m=1,2,3$, and the UP over the interval $(0,1)$ for each unknown proportion parameter $p_u$, that is, $p_u\sim\text{Uniform}(0,1)$, $u=1,2$. Assuming independence of the parameters, the joint prior distribution of $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ is $\pi_1(\Phi)\propto 1$. Therefore, the joint posterior distribution of the parameters given data $\mathbf{y}$ is

$$g_1(\Phi|\mathbf{y})=\frac{L(\Phi|\mathbf{y})\pi_1(\Phi)}{\int_\Phi L(\Phi|\mathbf{y})\pi_1(\Phi)\,d\Phi}$$
$$g_1(\Phi|\mathbf{y})=\frac{1}{\Omega_1}\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\theta_1^{A_{11}-1}\theta_2^{A_{21}-1}\theta_3^{A_{31}-1}e^{-B_{11}\theta_1}e^{-B_{21}\theta_2}e^{-B_{31}\theta_3}p_1^{A_{01}-1}p_2^{B_{01}-1}(1-p_1-p_2)^{C_{01}-1},\tag{4}$$
where
$$A_{11}=s_1+1,\quad A_{21}=s_2+1,\quad A_{31}=s_3+1,\quad A_{01}=s_1+n-w-v_4+1,\quad B_{01}=s_2+v_4-v_5+1,\quad C_{01}=s_3+v_5+1,$$
$$B_{11}=\sum_{i=r_1}^{w_1}y_{1i}+v_1y_{1r_1}+(n-w-v_4)y_w,\quad B_{21}=\sum_{i=r_2}^{w_2}y_{2i}+v_2y_{2r_2}+(v_4-v_5)y_w,\quad B_{31}=\sum_{i=r_3}^{w_3}y_{3i}+v_3y_{3r_3}+v_5y_w,$$
$$\Omega_1=\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\frac{\Gamma(A_{11})\Gamma(A_{21})\Gamma(A_{31})}{B_{11}^{A_{11}}B_{21}^{A_{21}}B_{31}^{A_{31}}}B(A_{01},B_{01},C_{01}),$$
and $B(a,b,c)=\Gamma(a)\Gamma(b)\Gamma(c)/\Gamma(a+b+c)$ denotes the bivariate beta function.

4.2. The Joint Posterior Distribution Assuming the JP

The JP is defined as $p(\theta_m)\propto\sqrt{|I(\theta_m)|}$, $m=1,2,3$, where $I(\theta_m)=-E\left[\frac{\partial^2\ln f(y;\theta_m)}{\partial\theta_m^2}\right]$ is the Fisher information. The prior distributions of the proportion parameters $p_1$ and $p_2$ are again taken as $p_u\sim\text{Uniform}(0,1)$, $u=1,2$. Assuming independence of the parameters, the joint prior distribution of $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ is $\pi_2(\Phi)\propto\frac{1}{\theta_1\theta_2\theta_3}$. Combining the likelihood function and the joint prior distribution, we obtain the joint posterior distribution of the parameters given data $\mathbf{y}$ as

$$g_2(\Phi|\mathbf{y})=\frac{L(\Phi|\mathbf{y})\pi_2(\Phi)}{\int_\Phi L(\Phi|\mathbf{y})\pi_2(\Phi)\,d\Phi}$$
$$g_2(\Phi|\mathbf{y})=\frac{1}{\Omega_2}\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\theta_1^{A_{12}-1}\theta_2^{A_{22}-1}\theta_3^{A_{32}-1}e^{-B_{12}\theta_1}e^{-B_{22}\theta_2}e^{-B_{32}\theta_3}p_1^{A_{02}-1}p_2^{B_{02}-1}(1-p_1-p_2)^{C_{02}-1},\tag{5}$$
where
$$A_{12}=s_1,\quad A_{22}=s_2,\quad A_{32}=s_3,\quad A_{02}=s_1+n-w-v_4+1,\quad B_{02}=s_2+v_4-v_5+1,\quad C_{02}=s_3+v_5+1,$$
$$B_{12}=\sum_{i=r_1}^{w_1}y_{1i}+v_1y_{1r_1}+(n-w-v_4)y_w,\quad B_{22}=\sum_{i=r_2}^{w_2}y_{2i}+v_2y_{2r_2}+(v_4-v_5)y_w,\quad B_{32}=\sum_{i=r_3}^{w_3}y_{3i}+v_3y_{3r_3}+v_5y_w,$$
$$\Omega_2=\sum_{v_1=0}^{r_1-1}\cdots\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\frac{\Gamma(A_{12})\Gamma(A_{22})\Gamma(A_{32})}{B_{12}^{A_{12}}B_{22}^{A_{22}}B_{32}^{A_{32}}}B(A_{02},B_{02},C_{02}).$$

5. THE JOINT POSTERIOR DISTRIBUTION ASSUMING THE INFORMATIVE PRIOR

The information available on the parameter(s) of interest is quantified as an informative prior. In this article, we take gamma priors for the component parameters, that is, $\theta_m\sim\text{Gamma}(a_m,b_m)$, $m=1,2,3$, and a bivariate beta prior for the proportion parameters, that is, $(p_1,p_2)\sim\text{Bivariate Beta}(a,b,c)$. Assuming independence, the joint prior distribution of $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ is written as

$$\pi_3(\Phi)=\frac{b_1^{a_1}}{\Gamma(a_1)}\theta_1^{a_1-1}e^{-b_1\theta_1}\,\frac{b_2^{a_2}}{\Gamma(a_2)}\theta_2^{a_2-1}e^{-b_2\theta_2}\,\frac{b_3^{a_3}}{\Gamma(a_3)}\theta_3^{a_3-1}e^{-b_3\theta_3}\,\frac{p_1^{a-1}p_2^{b-1}(1-p_1-p_2)^{c-1}}{B(a,b,c)}.$$

Thus, the joint posterior distribution of parameters θ1,θ2,θ3,p1 and p2 given data y is

$$g_3(\Phi|\mathbf{y})=\frac{L(\Phi|\mathbf{y})\pi_3(\Phi)}{\int_\Phi L(\Phi|\mathbf{y})\pi_3(\Phi)\,d\Phi}$$
$$g_3(\Phi|\mathbf{y})=\frac{1}{\Omega_3}\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\theta_1^{A_{13}-1}\theta_2^{A_{23}-1}\theta_3^{A_{33}-1}e^{-B_{13}\theta_1}e^{-B_{23}\theta_2}e^{-B_{33}\theta_3}p_1^{A_{03}-1}p_2^{B_{03}-1}(1-p_1-p_2)^{C_{03}-1},\tag{6}$$
where

$$A_{13}=s_1+a_1,\quad A_{23}=s_2+a_2,\quad A_{33}=s_3+a_3,\quad B_{13}=\sum_{i=r_1}^{w_1}y_{1i}+v_1y_{1r_1}+(n-w-v_4)y_w+b_1,\quad B_{23}=\sum_{i=r_2}^{w_2}y_{2i}+v_2y_{2r_2}+(v_4-v_5)y_w+b_2,\quad B_{33}=\sum_{i=r_3}^{w_3}y_{3i}+v_3y_{3r_3}+v_5y_w+b_3,$$

$$A_{03}=s_1+n-w-v_4+a,\quad B_{03}=s_2+v_4-v_5+b,\quad C_{03}=s_3+v_5+c,$$

$$\Omega_3=\sum_{v_1=0}^{r_1-1}\cdots\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}\frac{\Gamma(A_{13})\Gamma(A_{23})\Gamma(A_{33})}{B_{13}^{A_{13}}B_{23}^{A_{23}}B_{33}^{A_{33}}}B(A_{03},B_{03},C_{03}).$$

6. BAYESIAN ESTIMATION UNDER LOSS FUNCTIONS

In this section, we derive the algebraic expressions of the Bayes estimators and their associated posterior risks using the UP, the JP and the IP under three different loss functions, namely the SELF, the PLF and the DLF. If $L(\theta,\hat\omega)$ is a loss function, the expected value of the loss with respect to the posterior distribution is known as the posterior risk, and the Bayes estimator $\hat\omega$ is obtained by minimizing the posterior expected loss $E_{\theta|\mathbf{y}}\left[L(\theta,\hat\omega)\right]$, where $L(\theta,\hat\omega)$ is the loss incurred in estimating $\theta$ by $\hat\omega$. The SELF, defined as $L(\theta,\hat\omega)=(\theta-\hat\omega)^2$, goes back to Legendre [43], who used it to develop least squares theory. Later, Norstrom [44] discussed an asymmetric PLF and introduced a special case of a general class of PLFs, defined as $L(\theta,\hat\omega)=\frac{(\theta-\hat\omega)^2}{\hat\omega}$. The DLF, presented by DeGroot [45], is defined as $L(\theta,\hat\omega)=\left(\frac{\theta-\hat\omega}{\hat\omega}\right)^2$. For a given prior, the general forms of the Bayes estimators and their posterior risks under the SELF, the PLF and the DLF are given in Table 1.

Loss Function | Bayes Estimator | Posterior Risk
SELF: $L(\theta,\hat\omega)=(\theta-\hat\omega)^2$ | $\hat\omega=E_{\theta|\mathbf{y}}(\theta)$ | $\rho(\hat\omega)=E_{\theta|\mathbf{y}}(\theta^2)-\left[E_{\theta|\mathbf{y}}(\theta)\right]^2$
PLF: $L(\theta,\hat\omega)=\frac{(\theta-\hat\omega)^2}{\hat\omega}$ | $\hat\omega=\sqrt{E_{\theta|\mathbf{y}}(\theta^2)}$ | $\rho(\hat\omega)=2\sqrt{E_{\theta|\mathbf{y}}(\theta^2)}-2E_{\theta|\mathbf{y}}(\theta)$
DLF: $L(\theta,\hat\omega)=\left(\frac{\theta-\hat\omega}{\hat\omega}\right)^2$ | $\hat\omega=\frac{E_{\theta|\mathbf{y}}(\theta^2)}{E_{\theta|\mathbf{y}}(\theta)}$ | $\rho(\hat\omega)=1-\frac{\left[E_{\theta|\mathbf{y}}(\theta)\right]^2}{E_{\theta|\mathbf{y}}(\theta^2)}$

SELF, squared error loss function; PLF, precautionary loss function; DLF, DeGroot loss function.

Table 1

Bayes estimators and posterior risks under SELF, PLF and DLF.
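The Table 1 expressions require only the first two posterior moments, so when closed forms are unavailable they can be approximated from posterior draws by Monte Carlo. A minimal sketch (a generic illustration of the Table 1 formulas, not the paper's closed-form route):

```python
import math

def bayes_estimates(draws):
    """Monte Carlo versions of the Table 1 formulas from posterior draws of a parameter.

    Returns {loss: (Bayes estimate, posterior risk)}.
    """
    m1 = sum(draws) / len(draws)                  # E(theta | y)
    m2 = sum(d * d for d in draws) / len(draws)   # E(theta^2 | y)
    return {
        "SELF": (m1, m2 - m1**2),
        "PLF": (math.sqrt(m2), 2 * math.sqrt(m2) - 2 * m1),
        "DLF": (m2 / m1, 1 - m1**2 / m2),
    }
```

By the Cauchy-Schwarz inequality the three estimates always satisfy SELF $\le$ PLF $\le$ DLF, and all three risks are nonnegative, which provides a quick internal consistency check.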

6.1. Expressions of the Bayes Estimators and Posterior Risks Under SELF

Throughout this section, let
$$\sum_{\boldsymbol v}T_{\boldsymbol v}\equiv\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}$$
denote the five-fold sum with sign-binomial factor appearing in the likelihood expansion (3). The algebraic expressions for the Bayes estimators and posterior risks of the parameters $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ under SELF, assuming the UP, the JP and the IP, are obtained with the respective marginal distributions as
$$\hat\theta_{\varpi,\text{SELF}}=\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+1)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+1}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})\tag{7}$$
$$\hat p_{\alpha,\text{SELF}}=\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+1,\gamma_{0\xi},C_{0\xi})\tag{8}$$
$$\rho(\hat\theta_{\varpi,\text{SELF}})=\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+2)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+2}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})-\hat\theta_{\varpi,\text{SELF}}^2\tag{9}$$
$$\rho(\hat p_{\alpha,\text{SELF}})=\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+2,\gamma_{0\xi},C_{0\xi})-\hat p_{\alpha,\text{SELF}}^2,\tag{10}$$
where $(\varpi,\pi,\eta)$ is a permutation of $(1,2,3)$ with $\varpi$ the index of the component parameter being estimated; for $\alpha=1$, $(\Delta,\gamma)=(A,B)$, and for $\alpha=2$, $(\Delta,\gamma)=(B,A)$; and $\xi=1$ for the UP, $\xi=2$ for the JP and $\xi=3$ for the IP.

6.2. Expressions of the Bayes Estimators and Posterior Risks Under PLF

The respective marginal distributions yield the algebraic expressions for the Bayes estimators and posterior risks of the parameters $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ under PLF, assuming the UP, the JP and the IP, as
$$\hat\theta_{\varpi,\text{PLF}}=\sqrt{\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+2)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+2}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})}\tag{11}$$
$$\hat p_{\alpha,\text{PLF}}=\sqrt{\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+2,\gamma_{0\xi},C_{0\xi})}\tag{12}$$
$$\rho(\hat\theta_{\varpi,\text{PLF}})=2\hat\theta_{\varpi,\text{PLF}}-\frac{2}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+1)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+1}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})\tag{13}$$
$$\rho(\hat p_{\alpha,\text{PLF}})=2\hat p_{\alpha,\text{PLF}}-\frac{2}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+1,\gamma_{0\xi},C_{0\xi}).\tag{14}$$

6.3. Expressions of the Bayes Estimators and Posterior Risks Under DLF

The algebraic expressions for the Bayes estimators and posterior risks of the parameters $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ under DLF, assuming the UP, the JP and the IP, are derived with the respective marginal distributions as
$$\hat\theta_{\varpi,\text{DLF}}=\frac{\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+2)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+2}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})}{\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+1)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+1}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})}\tag{15}$$
$$\hat p_{\alpha,\text{DLF}}=\frac{\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+2,\gamma_{0\xi},C_{0\xi})}{\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+1,\gamma_{0\xi},C_{0\xi})}\tag{16}$$
$$\rho(\hat\theta_{\varpi,\text{DLF}})=1-\frac{\left[\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+1)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+1}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})\right]^2}{\Omega_\xi\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{\varpi\xi}+2)\Gamma(A_{\pi\xi})\Gamma(A_{\eta\xi})}{B_{\varpi\xi}^{A_{\varpi\xi}+2}B_{\pi\xi}^{A_{\pi\xi}}B_{\eta\xi}^{A_{\eta\xi}}}\,B(A_{0\xi},B_{0\xi},C_{0\xi})}\tag{17}$$
$$\rho(\hat p_{\alpha,\text{DLF}})=1-\frac{\left[\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+1,\gamma_{0\xi},C_{0\xi})\right]^2}{\Omega_\xi\displaystyle\sum_{\boldsymbol v}T_{\boldsymbol v}\,\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}\,B(\Delta_{0\xi}+2,\gamma_{0\xi},C_{0\xi})}.\tag{18}$$

7. THE POSTERIOR PREDICTIVE DISTRIBUTION AND BAYESIAN PREDICTIVE INTERVAL

A significant feature of the Bayesian methodology is the predictive distribution, used to predict a future observation $X=Y_{n+1}$ of a random variable given the already observed data $\mathbf{y}$. Al-Hussaini et al. [46], Bolstad [47] and Bansal [48] give detailed discussions of prediction and predictive distributions in the Bayesian paradigm. We now present the derivation of the posterior predictive distribution and the Bayesian predictive interval.

The posterior predictive distribution of a future observation X=Yn+1 given data y assuming the UP, the JP and the IP is defined as

$$f(x|\mathbf{y})=\int_\Phi p(x|\Phi)\,g_\xi(\Phi|\mathbf{y})\,d\Phi.\tag{19}$$

After substituting and simplifying, the posterior predictive distribution in (19) becomes
$$f(x|\mathbf{y})=\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\left[\frac{\Gamma(A_{1\xi}+1)\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{(B_{1\xi}+x)^{A_{1\xi}+1}B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}B(A_{0\xi}+1,B_{0\xi},C_{0\xi})+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi}+1)\Gamma(A_{3\xi})}{B_{1\xi}^{A_{1\xi}}(B_{2\xi}+x)^{A_{2\xi}+1}B_{3\xi}^{A_{3\xi}}}B(A_{0\xi},B_{0\xi}+1,C_{0\xi})+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi}+1)}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}(B_{3\xi}+x)^{A_{3\xi}+1}}B(A_{0\xi},B_{0\xi},C_{0\xi}+1)\right],\tag{20}$$
where $\sum_{\boldsymbol v}T_{\boldsymbol v}=\sum_{v_1=0}^{r_1-1}\sum_{v_2=0}^{r_2-1}\sum_{v_3=0}^{r_3-1}\sum_{v_4=0}^{n-w}\sum_{v_5=0}^{v_4}\prod_{k=1}^{3}(-1)^{v_k}\binom{r_k-1}{v_k}\binom{n-w}{v_4}\binom{v_4}{v_5}$ is the five-fold sum with sign-binomial factor from the likelihood expansion (3).

To construct a Bayesian predictive interval, let $L$ and $U$ be the two endpoints of the interval. These endpoints can be obtained from the posterior predictive distribution in (20). A $100(1-\gamma)\%$ Bayesian predictive interval $(L, U)$ is obtained by solving the equations
$$\int_0^L f(x|\mathbf{y})\,dx=\frac{\gamma}{2}=\int_U^\infty f(x|\mathbf{y})\,dx.$$

On simplifying these equations, the endpoints $L$ and $U$ satisfy
$$\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\left[\frac{\Gamma(A_{1\xi}+1)\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{A_{1\xi}}\left(\frac{1}{B_{1\xi}^{A_{1\xi}}}-\frac{1}{(B_{1\xi}+L)^{A_{1\xi}}}\right)\frac{B(A_{0\xi}+1,B_{0\xi},C_{0\xi})}{B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi}+1)\Gamma(A_{3\xi})}{A_{2\xi}}\left(\frac{1}{B_{2\xi}^{A_{2\xi}}}-\frac{1}{(B_{2\xi}+L)^{A_{2\xi}}}\right)\frac{B(A_{0\xi},B_{0\xi}+1,C_{0\xi})}{B_{1\xi}^{A_{1\xi}}B_{3\xi}^{A_{3\xi}}}+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi}+1)}{A_{3\xi}}\left(\frac{1}{B_{3\xi}^{A_{3\xi}}}-\frac{1}{(B_{3\xi}+L)^{A_{3\xi}}}\right)\frac{B(A_{0\xi},B_{0\xi},C_{0\xi}+1)}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}}\right]=\frac{\gamma}{2}$$
and
$$\frac{1}{\Omega_\xi}\sum_{\boldsymbol v}T_{\boldsymbol v}\left[\frac{\Gamma(A_{1\xi}+1)\Gamma(A_{2\xi})\Gamma(A_{3\xi})}{A_{1\xi}(B_{1\xi}+U)^{A_{1\xi}}}\,\frac{B(A_{0\xi}+1,B_{0\xi},C_{0\xi})}{B_{2\xi}^{A_{2\xi}}B_{3\xi}^{A_{3\xi}}}+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi}+1)\Gamma(A_{3\xi})}{A_{2\xi}(B_{2\xi}+U)^{A_{2\xi}}}\,\frac{B(A_{0\xi},B_{0\xi}+1,C_{0\xi})}{B_{1\xi}^{A_{1\xi}}B_{3\xi}^{A_{3\xi}}}+\frac{\Gamma(A_{1\xi})\Gamma(A_{2\xi})\Gamma(A_{3\xi}+1)}{A_{3\xi}(B_{3\xi}+U)^{A_{3\xi}}}\,\frac{B(A_{0\xi},B_{0\xi},C_{0\xi}+1)}{B_{1\xi}^{A_{1\xi}}B_{2\xi}^{A_{2\xi}}}\right]=\frac{\gamma}{2}.$$
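In practice these endpoint equations have no closed-form solution in $L$ and $U$ and must be solved numerically. Any root-finding method applied to the predictive cdf works; below is a bisection sketch using a single-component stand-in predictive cdf (for an exponential model with a Gamma(A, B) posterior, the predictive distribution is Lomax with cdf $F(x)=1-(B/(B+x))^A$; the values of A, B and the 90% level are illustrative assumptions, not the paper's):

```python
def predictive_interval(cdf, gamma, upper=1e7):
    """Solve cdf(L) = gamma/2 and cdf(U) = 1 - gamma/2 by bisection on [0, upper]."""
    def solve(target):
        lo, hi = 0.0, upper
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if cdf(mid) < target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return solve(gamma / 2.0), solve(1.0 - gamma / 2.0)

# Stand-in predictive cdf: Lomax with shape A and scale B
A, B = 3.0, 2.0
lomax_cdf = lambda x: 1.0 - (B / (B + x)) ** A
L, U = predictive_interval(lomax_cdf, 0.10)  # 90% predictive interval
```

For the 3-CMED one would substitute the mixture predictive cdf implied by (20) for `lomax_cdf`; bisection is robust here because any predictive cdf is monotone.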

8. ELICITATION OF HYPERPARAMETERS

Elicitation is a tool used to quantify a person's belief and knowledge about the parameter(s) of interest and in the Bayesian perspective, elicitation, most often, arises as a method for specifying the hyperparameter of the prior distribution for the random parameter(s). Elicitation has remained a challenging problem for the Bayesian statistician. However, in this study, we adopt a method based on predictive probabilities, given by Aslam [49].

For eliciting the hyperparameters, we use the prior predictive distribution (PPD). The PPD for a random variable Y is

$$p(y)=\int_\Phi p(y|\Phi)\,\pi_3(\Phi)\,d\Phi$$
$$p(y)=\frac{1}{a+b+c}\left[\frac{a\,a_1 b_1^{a_1}}{(b_1+y)^{a_1+1}}+\frac{b\,a_2 b_2^{a_2}}{(b_2+y)^{a_2+1}}+\frac{c\,a_3 b_3^{a_3}}{(b_3+y)^{a_3+1}}\right].\tag{21}$$

To elicit the nine hyperparameters involved in the PPD in (21), we consider the nine intervals (0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8) and (8, 9) and assume the respective probabilities 0.57, 0.20, 0.10, 0.05, 0.02, 0.015, 0.01, 0.005 and 0.003. It is worth mentioning that in practice these probabilities would be obtained from expert(s) as their opinion about the likelihood of these intervals; different intervals could also be considered. The resulting nine equations are solved simultaneously using Mathematica to elicit the hyperparameters $a_1, b_1, a_2, b_2, a_3, b_3, a, b$ and $c$. Using this procedure, we obtain the hyperparameter values 3.8330, 3.7310, 3.3570, 3.1360, 2.9030, 2.7330, 3.0280, 0.6995 and 2.7350, respectively.
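Since each term of (21) integrates in closed form, $\int_0^t a_m b_m^{a_m}(b_m+y)^{-a_m-1}\,dy = 1-(b_m/(b_m+t))^{a_m}$, the elicited hyperparameters can be sanity-checked by recomputing the nine interval probabilities. A sketch of that check (the ordering of the reported values matching $a_1, b_1, \ldots, c$ and the tolerance are our assumptions; the published values reproduce the elicited probabilities only approximately):

```python
def ppd_cdf(t, hyper):
    # hyper = (a1, b1, a2, b2, a3, b3, a, b, c); Eq. (21) integrated over (0, t)
    a1, b1, a2, b2, a3, b3, a, b, c = hyper
    total = a + b + c
    return (a * (1 - (b1 / (b1 + t)) ** a1)
            + b * (1 - (b2 / (b2 + t)) ** a2)
            + c * (1 - (b3 / (b3 + t)) ** a3)) / total

# Hyperparameter values reported in the text, in the order a1, b1, a2, b2, a3, b3, a, b, c
elicited = (3.8330, 3.7310, 3.3570, 3.1360, 2.9030, 2.7330, 3.0280, 0.6995, 2.7350)
probs = [ppd_cdf(i + 1, elicited) - ppd_cdf(i, elicited) for i in range(9)]
```

The interval probabilities must decrease (the PPD density is strictly decreasing) and their sum must stay below 1, with the first close to the elicited 0.57.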

9. MONTE CARLO SIMULATION STUDY

From Equations (7)-(8), (11)-(12) and (15)-(16), it is clear that comparing the Bayes estimators (under different priors and loss functions) analytically is almost impossible. Therefore, a Monte Carlo simulation study is conducted to assess the performance of the Bayes estimators under different priors, loss functions, parametric values, sample sizes and left and right test termination times. For different values of each of the five parameters $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ of the 3-CMED, we calculate the Bayes estimates and their posterior risks using the following steps:

  1. A sample from the mixtures may be generated through the Mathematica package as follows:

    1. Generate $p_1 n$ observations randomly from the first component density $f_1(y;\Phi_1)$.

    2. Generate $p_2 n$ observations randomly from the second component density $f_2(y;\Phi_2)$.

    3. Generate the remaining $(1-p_1-p_2)n$ observations randomly from the third component density $f_3(y;\Phi_3)$.

  2. Select a sample censored at fixed test termination times on left and right, that is, yr and yw.

  3. Take observations which are less than yr and greater than yw as censored ones.

  4. Using the Steps 1–3 for the fixed values of parameters, test termination time and sample size, generate 1000 samples.

  5. Calculate the Bayes estimates and posterior risks of the parameters $\theta_1,\theta_2,\theta_3,p_1$ and $p_2$ based on 1000 repetitions by solving (7)-(18).
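Steps 1-3 above can be sketched in Python as follows (the function name and the fixed-seed convention are ours; the paper uses a Mathematica implementation):

```python
import random

def doubly_censored_mixture_sample(n, theta, p, y_r, y_w, seed=1):
    """Steps 1-3: draw component sizes, exponential lifetimes, then censor outside [y_r, y_w].

    Returns the observed (component label, failure time) pairs and the numbers
    of left- and right-censored observations.
    """
    random.seed(seed)
    sizes = (round(p[0] * n), round(p[1] * n))
    sizes = sizes + (n - sizes[0] - sizes[1],)   # remaining (1 - p1 - p2)n units
    observed, n_left, n_right = [], 0, 0
    for m, size in enumerate(sizes):
        for _ in range(size):
            y = random.expovariate(theta[m])     # exponential lifetime, rate theta_m
            if y < y_r:
                n_left += 1
            elif y > y_w:
                n_right += 1
            else:
                observed.append((m + 1, y))      # keep component label with the time
    return observed, n_left, n_right
```

With $(\theta_1,\theta_2,\theta_3,p_1,p_2)=(2,3,4,0.2,0.4)$ and $(y_r,y_w)=(0.01,0.8)$, the expected censoring rate falls in the 7%-20% range quoted below.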

The above Steps 1-5 are repeated for each of the sample sizes $n=40, 80$ and $140$. The parameter vector is taken as $(\theta_1,\theta_2,\theta_3,p_1,p_2)=(2,3,4,0.2,0.4)$ with left and right test termination times $(y_r,y_w)=(0.01,0.8)$. It is worth mentioning that the left and right test termination times are chosen so that the censoring rate in the resulting sample remains between 7% and 20%. The simulation results are presented in Tables 2-4. The simulated results for $(\theta_1,\theta_2,\theta_3,p_1,p_2)=(2,3,4,0.2,0.4)$ with $(y_r,y_w)=(0.005,1.2)$ are available from the first author on request.

yr,yw n Prior Distribution θ^1SELF θ^2SELF θ^3SELF p^1SELF p^2SELF
0.01, 0.8 40 UP BE 3.395080 3.605950 4.564170 0.206830 0.394006
PR 3.189950 1.380540 1.915510 0.004616 0.006416
JP BE 2.784800 3.430350 4.284070 0.205418 0.395564
PR 2.936520 1.313790 1.790290 0.004524 0.006386
IP BE 1.556880 2.355800 2.722950 0.238333 0.351486
PR 0.278267 0.386267 0.506775 0.004511 0.005442
80 UP BE 2.719220 3.291960 4.265730 0.204338 0.397148
PR 1.146740 0.636585 0.911545 0.002552 0.003508
JP BE 2.456510 3.253020 4.144790 0.203833 0.397956
PR 0.986246 0.621705 0.870667 0.002469 0.003467
IP BE 1.723430 2.612290 3.163760 0.220376 0.373973
PR 0.240709 0.293481 0.410092 0.002441 0.003127
140 UP BE 2.452820 3.176990 4.153470 0.203616 0.398351
PR 0.620311 0.380634 0.520377 0.001613 0.001898
JP BE 2.284210 3.164150 4.066220 0.203075 0.398527
PR 0.569558 0.315852 0.434408 0.001586 0.001858
IP BE 1.831800 2.747620 3.432180 0.213538 0.381421
PR 0.196568 0.227209 0.331019 0.001510 0.001718

SELF, squared error loss function; UP, uniform prior; JP, Jeffreys' prior; CMED, component mixture of exponential distribution.

Table 2

Bayes estimates (BE) and posterior risks (PR) of 3-CMED using the UP, the JP and the IP under SELF with parameters θ1=2,θ2=3,θ3=4, p1=0.2,p2=0.4.

yr,yw n Prior Distribution θ^1PLF θ^2PLF θ^3PLF p^1PLF p^2PLF
0.01, 0.8 40 UP BE 3.699000 3.849730 4.767100 0.217908 0.405841
PR 0.717074 0.356772 0.394466 0.022028 0.016086
JP BE 3.053300 3.664720 4.384080 0.216180 0.404422
PR 0.693655 0.352933 0.380654 0.022006 0.016010
IP BE 1.634710 2.431220 2.795510 0.245329 0.360427
PR 0.173252 0.154085 0.178664 0.018648 0.015317
80 UP BE 2.888690 3.446400 4.376220 0.210699 0.403334
PR 0.369371 0.186865 0.206311 0.012482 0.008699
JP BE 2.638560 3.326240 4.190550 0.210432 0.403134
PR 0.361023 0.184805 0.204036 0.012440 0.008668
IP BE 1.781800 2.667790 3.237220 0.226288 0.378155
PR 0.134257 0.108032 0.126989 0.010931 0.008318
140 UP BE 2.623740 3.268870 4.229650 0.205550 0.402903
PR 0.242676 0.125873 0.129712 0.008339 0.005497
JP BE 2.476790 3.165040 4.142140 0.205369 0.402307
PR 0.225903 0.121273 0.125100 0.008282 0.005367
IP BE 1.875440 2.777690 3.451640 0.217994 0.384240
PR 0.107602 0.088266 0.096329 0.007372 0.005126

PLF, precautionary loss function; UP, uniform prior; JP, Jeffreys' prior; CMED, component mixture of exponential distribution.

Table 3

Bayes estimates (BE) and posterior risks (PR) of 3-CMED using the UP, the JP and the IP under PLF with parameters θ1=2,θ2=3,θ3=4, p1=0.2,p2=0.4.

yr,yw n Prior Distribution θ^1DLF θ^2DLF θ^3DLF p^1DLF p^2DLF
0.01, 0.8 40 UP BE 4.159730 4.017760 4.978260 0.227945 0.412073
PR 0.191222 0.092280 0.083481 0.100028 0.039774
JP BE 3.488430 3.779970 4.726180 0.221350 0.411067
PR 0.215770 0.095264 0.085669 0.102242 0.039905
IP BE 1.724710 2.487160 2.887600 0.256362 0.367605
PR 0.104512 0.062960 0.063214 0.074873 0.038481
80 UP BE 3.082070 3.515470 4.415310 0.217892 0.407875
PR 0.129499 0.055152 0.048539 0.060574 0.021708
JP BE 2.818450 3.420770 4.355840 0.214324 0.406578
PR 0.134775 0.055252 0.048982 0.060620 0.021766
IP BE 1.870780 2.735790 3.306760 0.231815 0.381260
PR 0.075300 0.039535 0.038694 0.048479 0.021646
140 UP BE 2.694600 3.341800 4.324170 0.209112 0.405395
PR 0.104543 0.021811 0.024297 0.042675 0.019338
JP BE 2.511670 3.303690 4.193300 0.207597 0.404229
PR 0.110332 0.023206 0.028588 0.043811 0.020176
IP BE 1.928560 2.760290 3.556690 0.221630 0.386456
PR 0.053041 0.019671 0.023283 0.032184 0.017784

DLF, DeGroot loss function; UP, uniform prior; JP, Jeffreys' prior; CMED, component mixture of exponential distribution.

Table 4

Bayes estimates (BE) and posterior risks (PR) of 3-CMED using the UP, the JP and the IP under DLF with parameters θ1=2,θ2=3,θ3=4, p1=0.2,p2=0.4.

From Tables 2-4, it is observed that the degree of over-estimation (and/or under-estimation) of the Bayes estimates of the component and proportion parameters, using the NIPs (UP and JP) and the IP under the SELF, PLF and DLF, is larger for smaller sample sizes than for larger ones at fixed left and right test termination times $(y_r, y_w)$. Also, for a fixed sample size, the degree of over- or under-estimation is smaller for a smaller left test termination time $y_r$ combined with a larger right test termination time $y_w$ than for a larger $y_r$ combined with a smaller $y_w$. The bias of the Bayes estimates approaches zero as the sample size increases at varying left and right test termination times. Moreover, the Bayes estimates tend to converge to the true parameter values faster with a smaller $y_r$ and a larger $y_w$ than with a larger $y_r$ and a smaller $y_w$ for the different sample sizes.

The posterior risk of the Bayes estimates is a suitable criterion for comparing the performance of the different loss functions. From our study, we observed that the posterior risk increases with the true parameter values and decreases with the sample size. In particular, the posterior risks of the component and proportion parameters using the NIPs (UP and JP) and the IP under the SELF, PLF and DLF decrease as the sample size increases for fixed left and right test termination times. The same pattern holds for smaller left and larger right test termination times as compared to larger left and smaller right test termination times at varying sample sizes.

As far as the problem of selecting a suitable prior is concerned, the IP, having the least associated posterior risk of the Bayes estimates for a given loss function, is the most efficient among the priors considered in this study. Also, the UP (JP) emerges as a better prior than the JP (UP) under DLF (SELF and PLF) due to its smaller associated posterior risk. On the other hand, for estimating the component parameters, the DLF performs better than PLF and SELF, whereas the performance of SELF is superior to PLF and DLF for estimating the proportion parameters. It was also observed that the selection of the best prior or the most suitable loss function is independent of the left and right test termination times and the sample size. It is worth mentioning that the selection of the best prior (loss function) for a given loss function (prior) is made on the basis of the minimum posterior risks.
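For reference, the Bayes estimator and posterior risk under the three loss functions take the standard forms BE = E(θ|y), PR = Var(θ|y) for SELF; BE = sqrt(E(θ²|y)), PR = 2[sqrt(E(θ²|y)) − E(θ|y)] for PLF; and BE = E(θ²|y)/E(θ|y), PR = 1 − [E(θ|y)]²/E(θ²|y) for DLF. A minimal Monte Carlo sketch of these formulas follows; the gamma posterior for a single rate parameter is a hypothetical stand-in, since the paper's closed-form expressions need no simulation:

```python
import numpy as np

def bayes_estimates(draws):
    """(BE, PR) under SELF, PLF and DLF computed from posterior draws."""
    m1 = draws.mean()            # E(theta | y)
    m2 = np.mean(draws ** 2)     # E(theta^2 | y)
    return {
        "SELF": (m1, m2 - m1 ** 2),                      # mean, variance
        "PLF":  (np.sqrt(m2), 2.0 * (np.sqrt(m2) - m1)),
        "DLF":  (m2 / m1, 1.0 - m1 ** 2 / m2),
    }

# Hypothetical gamma posterior for one component rate parameter
rng = np.random.default_rng(1)
draws = rng.gamma(shape=19.0, scale=1.0 / 6.0, size=200_000)
results = bayes_estimates(draws)
for loss, (be, pr) in results.items():
    print(f"{loss}: BE = {be:.3f}, PR = {pr:.4f}")
```

These forms imply BE_SELF ≤ BE_PLF ≤ BE_DLF for any posterior, which matches the ordering of the estimates across loss functions visible in Table 5.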

10. REAL DATA APPLICATION

In this section, we present the analysis of real-life data to illustrate the methodology discussed in the previous sections. The main idea is to determine whether the results and properties of the Bayes estimators explored by the simulation study hold under a real-life situation. For this purpose, we use the data set reported in Gómez et al. [50] on the fatigue-life fracture of Kevlar 373/epoxy specimens subjected to constant pressure at the 90% stress level until all had failed; that is, we have complete data with exact failure times. To illustrate the proposed methodology, the data are randomly divided into three sets, with 26 observations belonging to subpopulation-I, 25 to subpopulation-II and the remaining 25 to subpopulation-III. To implement doubly censored sampling, we take the failed observations y1r1, …, y1w1, y2r2, …, y2w2 and y3r3, …, y3w3 from subpopulation-I, subpopulation-II and subpopulation-III, respectively. The remaining observations, namely those less than 0.05 or greater than 0.34, are assumed to be censored in each component, so that yr = min(y1r1, y2r2, y3r3) = 0.05 and yw = max(y1w1, y2w2, y3w3) = 0.34. Notice that s1 = w1 - r1 + 1 = 19, s2 = w2 - r2 + 1 = 20 and s3 = w3 - r3 + 1 = 19 failed observations are obtained from subpopulation-I, subpopulation-II and subpopulation-III, respectively. The remaining n - (w - r + 3) = 18 observations are assumed to be censored and w - r + 3 = 58 are uncensored, where r = r1 + r2 + r3, w = w1 + w2 + w3 and s = s1 + s2 + s3. The data are summarized as below:

n1 = 26, r1 = 4, w1 = 22; n2 = 25, r2 = 3, w2 = 22; n3 = 25, r3 = 3, w3 = 21; n = 76, r = 10, w = 65, s = 58; ∑_{i=r1}^{w1} y1i = 3.05256, ∑_{i=r2}^{w2} y2i = 3.19514, ∑_{i=r3}^{w3} y3i = 2.97166.

Since n - (w - r + 3) = 18, about 23.68% of the data are doubly censored. Bayes estimates and posterior risks using the NIP (UP and JP) and the IP under SELF, PLF and DLF are presented in Table 5.
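The bookkeeping above can be verified directly; this snippet only re-derives the reported counts from the per-subpopulation values:

```python
# Per-subpopulation sizes and ranks of the first/last observed failures
n1, r1, w1 = 26, 4, 22
n2, r2, w2 = 25, 3, 22
n3, r3, w3 = 25, 3, 21

# Number of exactly observed failures per subpopulation: s_i = w_i - r_i + 1
s1 = w1 - r1 + 1   # 19
s2 = w2 - r2 + 1   # 20
s3 = w3 - r3 + 1   # 19

n = n1 + n2 + n3   # 76
r = r1 + r2 + r3   # 10
w = w1 + w2 + w3   # 65
s = s1 + s2 + s3   # 58, which equals w - r + 3

censored = n - (w - r + 3)    # 18 doubly censored observations
pct = 100 * censored / n      # censoring percentage
print(s1, s2, s3, censored, round(pct, 2))
```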

Loss Function Prior θ^1 θ^2 θ^3 p^1 p^2
SELF UP BE 5.565440 5.131310 5.357700 0.324222 0.347934
PR 2.304870 1.980750 2.285350 0.003952 0.004180
JP BE 5.313890 4.913600 5.101590 0.324376 0.347532
PR 2.136780 1.840220 2.106690 0.003924 0.004138
IP BE 3.163980 3.375500 3.465220 0.344839 0.320345
PR 0.450884 0.527196 0.601823 0.003577 0.003425
PLF UP BE 5.768790 5.320820 5.566900 0.330260 0.353890
PR 0.406710 0.379013 0.418386 0.012076 0.011912
JP BE 5.511280 5.097420 5.304050 0.330369 0.353435
PR 0.394780 0.367639 0.404913 0.011986 0.011807
IP BE 3.234450 3.452710 3.551000 0.349986 0.325647
PR 0.140936 0.154417 0.171552 0.010295 0.010605
DLF UP BE 5.979580 5.517320 5.784260 0.336411 0.359948
PR 0.069259 0.069964 0.073744 0.036231 0.033378
JP BE 5.716000 5.288110 5.514540 0.336472 0.359438
PR 0.070348 0.070822 0.074883 0.035950 0.033126
IP BE 3.306490 3.531680 3.638900 0.355210 0.331037
PR 0.043099 0.044223 0.047727 0.029200 0.032300

SELF, squared error loss function; PLF, precautionary loss function; DLF, DeGroot loss function; CMED, component mixture of exponential distribution; UP, uniform prior; JP, Jeffreys' prior; IP, informative prior.

Table 5

Bayes estimates (BE) and posterior risks (PR) of 3-CMED using the UP, the JP and the IP under SELF, PLF and DLF with a real-life mixture data.

It is observed that the results obtained from the real data are compatible with the simulation results discussed in the previous section. Table 5 also reveals that the IP performs best compared with the NIP (UP and JP) in terms of minimum posterior risks. Moreover, the results are relatively more precise with the UP (JP) than with the JP (UP) under DLF (SELF and PLF). In addition, SELF (DLF) performs better than PLF and DLF (PLF and SELF) for estimating the proportion (component) parameters.

The 90% Bayesian predictive intervals (L, U) using the NIP (UP and JP) and the IP are presented in Table 6. It can be seen that the 90% Bayesian predictive intervals using the IP are narrower than the Bayesian predictive intervals using the NIP (UP and JP).
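A 90% equal-tailed Bayesian predictive interval can be approximated by simulating from the posterior predictive distribution of a future observation. In the sketch below, the gamma posteriors for the rates and the Dirichlet posterior for the weights are placeholders, not the paper's fitted posteriors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior draws: gamma for the three component rates,
# Dirichlet for the mixing weights (both are assumptions of this sketch).
m = 100_000
theta = rng.gamma(shape=[19.0, 20.0, 19.0], scale=1 / 3.1, size=(m, 3))
p = rng.dirichlet([20, 21, 20], size=m)

# Posterior predictive draw: pick a component, then an exponential lifetime
u = rng.random(m)
comp = (u[:, None] > p.cumsum(axis=1)).sum(axis=1)
comp = np.clip(comp, 0, 2)  # guard against floating-point cumsum rounding

y_new = rng.exponential(scale=1.0 / theta[np.arange(m), comp])

# Equal-tailed 90% predictive interval
L, U = np.quantile(y_new, [0.05, 0.95])
print(round(L, 4), round(U, 4))
```

Narrower intervals from the IP, as in Table 6, reflect its more concentrated posterior; the same quantile computation applies under any of the priors.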

Prior L U
UP 0.009775 1.102830
JP 0.010232 1.183630
IP 0.015505 1.026150

CMED, component mixture of exponential distribution; UP, uniform prior; JP, Jeffreys' prior; IP, informative prior.

Table 6

Bayesian predictive interval (L, U) of 3-CMED using the UP, the JP and the IP with real-life mixture data.

11. CONCLUSION

In this article, a 3-CMED under a doubly censored sampling scheme is considered for modeling lifetime data. Assuming different NIPs and an IP, expressions for the Bayes estimators and their posterior risks are derived under different loss functions. To judge the relative performance of the Bayes estimators, and to address the problem of selecting a suitable prior and loss function, a comprehensive simulation study and a real-life application have been conducted for different sample sizes and various left and right test termination times. The simulation study revealed some important and interesting properties of the Bayes estimators. From the numerical results in Tables 2–4, we observed that increasing the sample size, or decreasing the left and increasing the right test termination time, improves the Bayes estimates. We also observed that the effect of the test termination times, sample size and parameter values on the Bayes estimates appears as over-estimation and/or under-estimation. To be more specific, a larger (smaller) sample size results in a smaller (larger) degree of under-estimation and/or over-estimation of the parameters at fixed left and right test termination times, whereas the extent of over-estimation and/or under-estimation is smaller (larger) with a relatively smaller left and larger right (larger left and smaller right) test termination time for a fixed sample size. It is also observed that the posterior risks of the Bayes estimates decrease as the sample size increases (for fixed test termination times), and increase as the left test termination time increases and the right test termination time decreases (for a fixed sample size). Finally, we conclude that for a Bayesian analysis of mixture data under a doubly censored sampling scheme, the IP performs well with the DLF (SELF) for estimating the component (proportion) parameters.
However, if only the NIPs are considered, the JP (UP) is suitable with SELF (DLF) for estimating the proportion (component) parameters. Moreover, the real-data results coincide with the simulation results, which confirms the correctness of our simulation scheme.

In the future, this work can be extended by comparing the Bayes estimates with the maximum likelihood estimates, and by assuming record values and other types of censoring schemes. Moreover, the performance of the Bayes estimators under other loss functions can also be assessed.

CONFLICT OF INTEREST

The authors declare that no conflict of interest exists.

AUTHORS' CONTRIBUTIONS

Conceptualization: Muhammad Tahir, Muhammad Aslam; Formal analysis: Muhammad Abid, Sajid Ali; Methodology: Muhammad Tahir, Mohammad Ahsanullah; Software: Muhammad Abid, Sajid Ali; Supervision: Muhammad Aslam; Writing – original draft: Muhammad Tahir, Sajid Ali, Muhammad Abid and Writing – review & editing: Muhammad Aslam, Mohammad Ahsanullah.

Funding Statement

The authors received no specific funding for this work.

ACKNOWLEDGMENTS

The authors are grateful to the editor and referees for their constructive comments that led to substantial improvements in the article.

REFERENCES

15. A.I. Shawky and R.A. Bakoban, J. Appl. Sci. Res., Vol. 5, 2009, pp. 1351-1369.
17. Z. Liu, Bayesian Mixture Models, MS Thesis, McMaster University, Hamilton, Canada, 2010.
18. M. Saleem and M. Irfan, Pak. J. Stat., Vol. 26, 2010, pp. 547-555.
20. A.M. Santos, Robust Estimation of Censored Mixture Models, PhD Thesis, University of Colorado Denver, Denver, CO, USA, 2011.
23. S.M.A. Kazmi, M. Aslam, and S. Ali, Int. J. Appl. Sci. Technol., Vol. 2, 2012, pp. 197-218.
25. S. Ali, M. Aslam, and S.M.A. Kazmi, Electron. J. Appl. Stat. Anal., Vol. 6, 2013, pp. 32-56.
31. H. Zhang and Y. Huang, Austin Biom. Biostat., Vol. 2, 2015, pp. 1-6.
32. L.J. Romeu, Selected Topics in Assurance Related Technologies (START), Vol. 11, 2004, pp. 1-8.
34. J.D. Kalbfleisch and R.L. Prentice, The Statistical Analysis of Failure Time Data, John Wiley & Sons, Inc., New York, NY, USA, 2011.
35. K.J. Barger, Mixtures of Exponential Distributions to Describe the Distribution of Poisson Means in Estimating the Number of Unobserved Classes, PhD Thesis, Cornell University, Ithaca, NY, USA, 2006.
37. G.E.P. Box and G.C. Tiao, Bayesian Inference in Statistical Analysis, John Wiley & Sons, New York, NY, USA, 1973.
41. H. Jeffreys, Theory of Probability, Clarendon Press, Oxford, UK, 1961.
43. A.M. Legendre, Nouvelles Méthodes pour la Détermination des Orbites des Comètes, Courcier, Paris, France, 1805.
45. M.H. DeGroot, Optimal Statistical Decisions, John Wiley & Sons, New York, NY, USA, 2005.
48. A.K. Bansal, Bayesian Parametric Inference, Narosa Publishing House Pvt. Ltd., New Delhi, India, 2007.
49. M. Aslam, J. Stat. Theory Appl., Vol. 2, 2003, pp. 70-83.
ISSN (Online)
2214-1766
ISSN (Print)
1538-7887
Copyright
© 2020 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).
