Journal of Statistical Theory and Applications

Volume 18, Issue 1, March 2019, Pages 79 - 86

Bayes and Non-Bayes Estimation of Change Point in Nonstandard Mixture Inverse Weibull Distribution

Authors
Masoud Ganji*, Roghayeh Mostafayi
Department of Statistics, Faculty of Mathematical Science, University of Mohaghegh Ardabili, Ardabil, Iran
*Corresponding author. Email: mganji@uma.ac.ir

Received 1 March 2015, Accepted 13 March 2017, Available Online 31 March 2019.
DOI
10.2991/jsta.d.190306.011
Keywords
Bayes estimate; change point; mixture distribution; inverse Weibull distribution; maximum likelihood estimate
Abstract

We consider a sequence of independent random variables $X_1, X_2, \ldots, X_m, \ldots, X_n$ $(n \geq 3)$ exhibiting a change in the probability distribution of the data-generating mechanism. We suppose that the distribution changes at some point, called a change point, to a second distribution for the remaining observations. We propose Bayes estimators of the change point under symmetric and asymmetric loss functions. A sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons in R.

Copyright
© 2019 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

It is generally recognized that a physical entity experiences structural change as it evolves over time. Such structural change problems are often used to describe abrupt changes in the mechanism underlying a sequence of random measurements. Further, in many real-life problems, theoretical or empirical considerations suggest models in which one or more parameters change occasionally. There is an enormous frequentist and Bayesian literature on detecting such changes, on inference concerning the change point, and on related problems for various statistical models.

Control charts are one of the most important tools in statistical process control for monitoring manufacturing processes and services. When a control chart signals an out-of-control condition, a search begins to identify and eliminate the root cause(s) of the process disturbance. The time at which the disturbance first manifests itself in the process is referred to as the change point. Identification of the change point is an essential step in analyzing and eliminating the disturbance source(s) effectively.

A nonstandard mixture inverse Weibull (IW) distribution arises in many applied situations; for instance, the lifetime of a unit may follow an IW distribution while some units fail instantaneously. In the study of tooth decay, the number of surfaces in a mouth that are filled, missing, or decayed is scored to produce a decay index; healthy teeth are scored (0) for no evidence of decay, so the distribution is a mixture of a mass point at (0) and a nontrivial continuous distribution of decay scores. In the study of tumor characteristics, two variates can be recorded: a discrete variable indicating the absence (0) or presence (1) of a tumor and a continuous variable measuring the tumor size.

A sequence of random variables $X_1, X_2, \ldots, X_m, \ldots, X_n$ has a change point at $m$ $(1 \leq m \leq n-1)$ if $X_i$, $i = 1, \ldots, m$, has probability distribution $F_1(x_i|\theta_1)$ and $X_i$, $i = m+1, \ldots, n$, has probability distribution $F_2(x_i|\theta_2)$, where $F_1(x_i|\theta_1) \neq F_2(x_i|\theta_2)$ and $\theta_1 \neq \theta_2$. Change point inference has a long history. Many statisticians, such as Ganji [1], Chernoff and Zacks [2], Kander and Zacks [3], Smith [4], Jani and Pandya [5], Pandya and Jani [6], Pandya and Jadav [7], and Ebrahimi and Ghosh [8], have studied change point models in the Bayesian framework. The monograph by Broemeling and Tsurumi [9] is also a useful reference.

2. CHANGE POINT MODEL

Let the first $m$ observations $X_1, X_2, \ldots, X_m$ come from a mixture of an IW distribution and a degenerate distribution at zero, with probability density function

$$f(x_i; \alpha_1, \beta, p_1) = (1-p_1)\, I_{\{x_i=0\}}(x_i) + p_1\, \beta\, \alpha_1^{-\beta} x_i^{-(\beta+1)} e^{-(\alpha_1 x_i)^{-\beta}}\, I_{\{x_i>0\}}(x_i), \qquad \alpha_1>0,\ \beta>0,\ 0<p_1<1,\ i=1,2,\ldots,m,$$

and let the later $n-m$ observations $X_{m+1}, \ldots, X_n$ come from a mixture of an IW distribution and a degenerate distribution at zero, with probability density function

$$f(x_i; \alpha_2, \beta, p_2) = (1-p_2)\, I_{\{x_i=0\}}(x_i) + p_2\, \beta\, \alpha_2^{-\beta} x_i^{-(\beta+1)} e^{-(\alpha_2 x_i)^{-\beta}}\, I_{\{x_i>0\}}(x_i), \qquad \alpha_2>0,\ \beta>0,\ 0<p_2<1,\ i=m+1,m+2,\ldots,n.$$
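As a quick illustration of this model, the following R sketch (our own illustration, not the authors' code; the function name rmix_iw and the use of rweibull are assumptions) simulates a sample with a change point, using the fact that if $Y$ follows a Weibull distribution with shape $\beta$ and scale $\alpha$, then $X = 1/Y$ has the IW distribution function $F(x) = e^{-(\alpha x)^{-\beta}}$ assumed above.

```r
# Minimal sketch: simulate from a mixture of a point mass at zero and an
# inverse Weibull distribution with F(x) = exp{-(alpha*x)^(-beta)}.
rmix_iw <- function(n, alpha, beta, p) {
  nonzero <- rbinom(n, size = 1, prob = p)               # 1 = IW component, 0 = point mass at zero
  ifelse(nonzero == 1, 1 / rweibull(n, shape = beta, scale = alpha), 0)
}

set.seed(1)
m <- 10; n <- 20
x <- c(rmix_iw(m,     alpha = 0.06, beta = 1, p = 0.8),  # segment before the change point
       rmix_iw(n - m, alpha = 9.5,  beta = 1, p = 0.6))  # segment after the change point
round(x, 2)
```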

3. BAYES ESTIMATORS OF PARAMETERS

The likelihood function of the given sample information is

$$L(\alpha_1,\alpha_2,\beta,p_1,p_2,m|x) = (1-p_1)^{N_m} p_1^{A_m} (1-p_2)^{N-N_m} p_2^{B_m}\, \beta^{\,n-N} \alpha_1^{-\beta A_m} \times \alpha_2^{-\beta B_m}\, [u(A_m)]^{-(\beta+1)} e^{-\alpha_1^{-\beta} v(A_m)}\, [u(B_m)]^{-(\beta+1)} e^{-\alpha_2^{-\beta} v(B_m)},$$
where
$$u(A_m)=\prod_{i=1}^{A_m} y_i; \qquad u(B_m)=\prod_{j=1}^{B_m} z_j; \qquad v(A_m)=\sum_{i=1}^{A_m} y_i^{-\beta}; \qquad v(B_m)=\sum_{j=1}^{B_m} z_j^{-\beta}.$$

Let $N$ be the number of observations equal to zero, $N_m$ the number of zero observations before the change point $m$, $A_m$ the number of nonzero observations before the change point $m$, and $B_m$ the number of nonzero observations after the change point $m$. Denote by $y_1, y_2, \ldots, y_{A_m}$ the nonzero observations before the change point $m$, and by $z_1, z_2, \ldots, z_{B_m}$ the nonzero observations after the change point $m$.

For Bayesian estimation, we need to specify a prior distribution for the parameters. As in Broemeling and Tsurumi [9], suppose that the marginal prior distribution of $m$ is discrete uniform over the set $\{1, 2, \ldots, n-1\}$.

As in Calabria and Pulcini [10] and Erto and Guida [11], we assume that some prior information on the failure mechanism, in terms of the reliability level at a prefixed time value, is available. In addition, we assume that this prior technical information is given in terms of mean values $\mu_1$ and $\mu_2$. Following Pandya and Jadav [12], let a log inverse exponential density represent this prior knowledge on $R_1(t)$ and $R_2(t)$ at a common prefixed time $t$, with respective means $\mu_1$ and $\mu_2$,

$$g(R_1(t)) = \frac{\left[\ln\!\left(\frac{1}{1-R_1(t)}\right)\right]^{a_1-1}}{\Gamma(a_1)}, \qquad 0 \le R_1(t) \le 1,\ a_1>0,$$
$$g(R_2(t)) = \frac{\left[\ln\!\left(\frac{1}{1-R_2(t)}\right)\right]^{a_2-1}}{\Gamma(a_2)}, \qquad 0 \le R_2(t) \le 1,\ a_2>0.$$

If the prior information is given in terms of the prior means $\mu_1$ and $\mu_2$, then the parameters $a_i$, $i=1,2$, can be obtained as
$$a_i = \frac{\ln\!\left(\frac{1}{1-\mu_i}\right)}{\ln 2}, \qquad i = 1, 2.$$
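As a small check of this elicitation formula (an illustrative helper of our own, not part of the paper's code), a prior mean of 0.5 returns $a_i = 1$, while the values $\mu_1 = 0.95$ and $\mu_2 = 0.05$ used later in the sensitivity study give $a_1 \approx 4.32$ and $a_2 \approx 0.074$:

```r
# Elicit the shape parameter a_i from the prior mean mu_i,
# using a_i = ln(1 / (1 - mu_i)) / ln(2) as given above.
a_from_mu <- function(mu) log(1 / (1 - mu)) / log(2)

a_from_mu(0.5)            # = 1
a_from_mu(c(0.95, 0.05))  # approx. 4.32 and 0.074
```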

Making the change of variables $R_i(t) = 1 - e^{-(\alpha_i t)^{-\beta}}$, the densities on $R_i(t)$ can be converted into conditional prior densities on $\alpha_1$ and $\alpha_2$ as

$$g_i(\alpha_i|\beta) = \frac{t^{-\beta a_i}\,\beta\,\alpha_i^{-\beta a_i - 1}\exp\!\left\{-(\alpha_i t)^{-\beta}\right\}}{\Gamma(a_i)}, \qquad a_i>0.$$

Suppose the marginal prior distributions of $p_1$ and $p_2$ are Beta priors with respective means $\mu_3$, $\mu_4$ and common standard deviation $\sigma_1$,

$$g(p_1) = \frac{p_1^{a_3-1}(1-p_1)^{b_3-1}}{B(a_3,b_3)}, \qquad a_3, b_3>0,\ 0 \le p_1 \le 1,$$
$$g(p_2) = \frac{p_2^{a_4-1}(1-p_2)^{b_4-1}}{B(a_4,b_4)}, \qquad a_4, b_4>0,\ 0 \le p_2 \le 1.$$

The mean and standard deviation of these Beta priors are

$$\mu_i = \frac{a_i}{a_i+b_i}, \qquad \sigma_1 = \sqrt{\frac{a_i b_i}{(a_i+b_i)^2 (a_i+b_i+1)}}, \qquad i = 3, 4,$$
so that
$$a_i = \frac{(1-\mu_i)\mu_i^2 - \mu_i \sigma_1^2}{\sigma_1^2}, \qquad b_i = \frac{(1-\mu_i)\,a_i}{\mu_i}, \qquad i = 3, 4.$$
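For illustration only (the common standard deviation $\sigma_1$ is not reported in the paper, so the value 0.05 below is an assumption of ours), these moment formulas can be inverted in R as follows:

```r
# Convert a Beta-prior mean mu and common standard deviation sigma1 into the
# Beta parameters (a, b), following the formulas above.
beta_params <- function(mu, sigma1) {
  a <- ((1 - mu) * mu^2 - mu * sigma1^2) / sigma1^2
  b <- (1 - mu) * a / mu
  c(a = a, b = b)
}

beta_params(mu = 0.8, sigma1 = 0.05)  # hypothetical prior for p1 (mean 0.8)
beta_params(mu = 0.6, sigma1 = 0.05)  # hypothetical prior for p2 (mean 0.6)
```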

For $\beta$, consider the uniform density on $(\beta_1, \beta_2)$, that is,

$$g(\beta) = \frac{1}{\beta_2 - \beta_1}, \qquad \beta_1 \le \beta \le \beta_2.$$

Then, the joint prior distribution of $\alpha_1$, $\alpha_2$, $\beta$, $p_1$, $p_2$, and $m$ is given by

$$g(\alpha_1,\alpha_2,\beta,m,p_1,p_2) = k\, t^{-\beta(a_1+a_2)}\, \beta^{2}\, \alpha_1^{-\beta a_1 - 1}\exp\!\left\{-(\alpha_1 t)^{-\beta}\right\} \times \alpha_2^{-\beta a_2 - 1}\exp\!\left\{-(\alpha_2 t)^{-\beta}\right\} \times p_1^{a_3-1}(1-p_1)^{b_3-1}\, p_2^{a_4-1}(1-p_2)^{b_4-1},$$
where
$$k = \frac{1}{\Gamma(a_1)\,\Gamma(a_2)\,(\beta_2-\beta_1)\,(n-1)\,B(a_3,b_3)\,B(a_4,b_4)}.$$

Also, the joint posterior distribution of $\alpha_1$, $\alpha_2$, $\beta$, $p_1$, $p_2$, and $m$ is given by

$$g(\alpha_1,\alpha_2,\beta,m,p_1,p_2|x) = k\, t^{-\beta(a_1+a_2)}\, p_1^{A_m+a_3-1}(1-p_1)^{N_m+b_3-1} \times (1-p_2)^{N-N_m+b_4-1}\, p_2^{B_m+a_4-1}\, \beta^{\,n-N+2} \times \alpha_1^{-\beta(A_m+a_1)-1}\, \alpha_2^{-\beta(B_m+a_2)-1}\, [u(A_m)]^{-(\beta+1)} \times e^{-\alpha_1^{-\beta}\left[v(A_m)+t^{-\beta}\right]}\, [u(B_m)]^{-(\beta+1)}\, e^{-\alpha_2^{-\beta}\left[v(B_m)+t^{-\beta}\right]}\, [h(x)]^{-1},$$
where
$$h(x) = k \sum_{m=1}^{n-1} I_1(m),$$
and
$$I_1(m) = \int_{\beta_1}^{\beta_2} [u(A_m)]^{-(\beta+1)}\, [u(B_m)]^{-(\beta+1)}\, \beta^{\,n-N}\, \left[v(A_m)+t^{-\beta}\right]^{-(A_m+a_1)} \left[v(B_m)+t^{-\beta}\right]^{-(B_m+a_2)} \times B(A_m+a_3,\, N_m+b_3)\,\Gamma(A_m+a_1)\, B(B_m+a_4,\, N-N_m+b_4)\,\Gamma(B_m+a_2)\, d\beta.$$

So, the marginal posterior distributions of $m$, $p_1$, and $p_2$ are given by

$$g(m|x) = \frac{I_1(m)}{\sum_{m=1}^{n-1} I_1(m)},$$
$$g(p_1|x) = k \sum_{m=1}^{n-1} p_1^{A_m+a_3-1}(1-p_1)^{N_m+b_3-1}\, k_2(m)\, J(m)\, [h(x)]^{-1},$$
$$g(p_2|x) = k \sum_{m=1}^{n-1} (1-p_2)^{N-N_m+b_4-1}\, p_2^{B_m+a_4-1}\, k_3(m)\, J(m)\, [h(x)]^{-1},$$
where
$$k_2(m) = \Gamma(A_m+a_1)\,\Gamma(B_m+a_2)\, B(B_m+a_4,\, N-N_m+b_4),$$
$$k_3(m) = \Gamma(A_m+a_1)\,\Gamma(B_m+a_2)\, B(A_m+a_3,\, N_m+b_3),$$
and
$$J(m) = \int_{\beta_1}^{\beta_2} [u(A_m)]^{-(\beta+1)}\, [u(B_m)]^{-(\beta+1)}\, \beta^{\,n-N}\, \left[v(A_m)+t^{-\beta}\right]^{-(A_m+a_1)} \left[v(B_m)+t^{-\beta}\right]^{-(B_m+a_2)}\, d\beta.$$
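The marginal posterior $g(m|x)$ can be evaluated numerically. The R sketch below is our own illustration, not the authors' program; the function name post_m and the treatment of the one-dimensional $\beta$-integral via integrate() are assumptions. It computes $I_1(m)$ on the log scale to avoid overflow and returns the normalized posterior of the change point.

```r
# Posterior g(m | x) of the change point for the nonstandard mixture IW model,
# following I1(m) above; beta is integrated numerically over (beta1, beta2).
post_m <- function(x, t, a1, a2, a3, b3, a4, b4, beta1, beta2) {
  n <- length(x); N <- sum(x == 0)
  I1 <- sapply(1:(n - 1), function(m) {
    y  <- x[1:m][x[1:m] > 0]              # nonzero observations before m
    z  <- x[(m + 1):n][x[(m + 1):n] > 0]  # nonzero observations after m
    Am <- length(y); Bm <- length(z); Nm <- m - Am
    integrand <- function(beta) {
      sapply(beta, function(b) {
        lg <- -(b + 1) * (sum(log(y)) + sum(log(z))) + (n - N) * log(b) -
          (Am + a1) * log(sum(y^(-b)) + t^(-b)) -
          (Bm + a2) * log(sum(z^(-b)) + t^(-b)) +
          lbeta(Am + a3, Nm + b3) + lgamma(Am + a1) +
          lbeta(Bm + a4, N - Nm + b4) + lgamma(Bm + a2)
        exp(lg)
      })
    }
    integrate(integrand, beta1, beta2)$value
  })
  I1 / sum(I1)   # g(m | x), m = 1, ..., n - 1
}
```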

3.1. Point Estimation under Symmetric Loss Functions

In the Bayesian framework, a loss function is used to select the estimator that minimizes the expected posterior loss. The Bayes estimator of a generic parameter (or function thereof) $\theta$ under the squared error loss (SEL) function

$$L_1(\theta, d) \propto (\theta - d)^2, \qquad \theta, d \in \mathbb{R},$$
is the posterior mean, where $d$ is the decision rule used to estimate $\theta$. For estimation of the change point $m$, which takes nonnegative integer values, the loss function $L_1(m, \nu)$ is defined only for integer values of $m$ and $\nu$. Hence, the Bayes estimator of the change point under the SEL function, $m^*$, is the posterior mean. The posterior mean is
$$\text{Posterior mean} = \frac{\sum_{m=1}^{n-1} m\, I_1(m)}{\sum_{m=1}^{n-1} I_1(m)}.$$

The Bayes estimators of $p_1$ and $p_2$ under the SEL function are as follows:

$$p_1^* = k \sum_{m=1}^{n-1} B(A_m+a_3+1,\, N_m+b_3)\, k_2(m)\, J(m)\, [h(x)]^{-1},$$
and
$$p_2^* = k \sum_{m=1}^{n-1} B(B_m+a_4+1,\, N-N_m+b_4)\, k_3(m)\, J(m)\, [h(x)]^{-1}.$$

Other Bayes estimators of the change point, under the loss functions

$$L_2(m, d) = |m - d|,$$
and
$$L_3(m, d) = \begin{cases} 0, & \text{if } |m - d| < \varepsilon,\ \varepsilon > 0,\\ 1, & \text{otherwise}, \end{cases}$$
are the posterior median and the posterior mode, respectively.
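Combining the pieces sketched so far (the simulated data x, a_from_mu, beta_params, and post_m, all of which are our own illustrative helpers with assumed hyperparameter values), the three point estimates of the change point can be read off the posterior weights:

```r
# Illustrative hyperparameters: prior means for R1(t), R2(t) taken near the
# simulation values, Beta priors from beta_params(), and an assumed (beta1, beta2).
ab1  <- beta_params(mu = 0.8, sigma1 = 0.05)
ab2  <- beta_params(mu = 0.6, sigma1 = 0.05)
post <- post_m(x, t = 5,
               a1 = a_from_mu(0.96), a2 = a_from_mu(0.02),
               a3 = ab1["a"], b3 = ab1["b"], a4 = ab2["a"], b4 = ab2["b"],
               beta1 = 0.5, beta2 = 1.5)

m_vals   <- seq_along(post)
m_sel    <- sum(m_vals * post)                        # posterior mean   (SEL)
m_median <- m_vals[min(which(cumsum(post) >= 0.5))]   # posterior median (L2)
m_mode   <- m_vals[which.max(post)]                   # posterior mode   (L3)
```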

3.2. Point Estimation under Asymmetric Loss Functions

In this section, we obtain the Bayes estimator of the change point under the Linex loss function. The Linex loss function, proposed by Varian (1975), whose behavior was discussed by Zellner (1986), is defined as

$$L_4(\theta, d) \propto \exp\{q_1(d - \theta)\} - q_1(d - \theta) - 1, \qquad q_1 \neq 0,\ \theta, d \in \mathbb{R},$$

where $d$ is the decision rule used to estimate the parameter $\theta$. It is appropriate in situations where overestimation is penalized more heavily than underestimation, or vice versa. The Bayes estimate of the change point $m$ under the Linex loss function, $m_L^*$, is

$$m_L^* = -\frac{1}{q_1}\,\ln\!\left[\frac{\sum_{m=1}^{n-1} e^{-q_1 m}\, I_1(m)}{\sum_{m=1}^{n-1} I_1(m)}\right].$$
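A one-line evaluation under the posterior weights from the earlier sketch (post and its length are as computed above; q1 = 0.5 is just an example value):

```r
# Linex estimate of the change point: m_L* = -(1/q1) * log(E[exp(-q1 * m) | x]).
linex_m <- function(post, q1) {
  m_vals <- seq_along(post)
  -(1 / q1) * log(sum(exp(-q1 * m_vals) * post))
}
linex_m(post, q1 = 0.5)
```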

Calabria and Pulcini [13] introduced the following asymmetric loss function

$$L_5(\theta, d) \propto \left(\frac{d}{\theta}\right)^{q_2} - q_2\,\ln\!\left(\frac{d}{\theta}\right) - 1.$$

This loss function is known as the general entropy loss (GEL) function. The Bayes estimate of the change point $m$ under GEL, $m_E^*$, is

$$m_E^* = \left[\frac{\sum_{m=1}^{n-1} I_1(m)}{\sum_{m=1}^{n-1} m^{-q_2}\, I_1(m)}\right]^{1/q_2}.$$
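The corresponding sketch under the GEL function (same illustrative post object; q2 = 0.5 is an example value):

```r
# General entropy (GEL) estimate of the change point: [E(m^(-q2) | x)]^(-1/q2).
gel_m <- function(post, q2) {
  m_vals <- seq_along(post)
  sum(m_vals^(-q2) * post)^(-1 / q2)
}
gel_m(post, q2 = 0.5)
```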

Also, the Bayes estimates of p1 and p2 are given by

$$p_{1E}^* = \left[k \sum_{m=1}^{n-1} B(A_m+a_3-q_2,\, N_m+b_3)\, k_2(m)\, J(m)\, [h(x)]^{-1}\right]^{-1/q_2},$$
$$p_{2E}^* = \left[k \sum_{m=1}^{n-1} B(B_m+a_4-q_2,\, N-N_m+b_4)\, k_3(m)\, J(m)\, [h(x)]^{-1}\right]^{-1/q_2}.$$

4. MAXIMUM LIKELIHOOD ESTIMATORS

In this section, we obtain the maximum likelihood estimate of the change point. We suppose that $\beta$, $\alpha_1$, and $\alpha_2$ are known. The logarithm of the likelihood function is

$$\ln L(\alpha_1,\alpha_2,\beta,p_1,p_2,m|x) = N_m \ln(1-p_1) + A_m \ln(p_1) + (N-N_m)\ln(1-p_2) + B_m \ln(p_2) + (n-N)\ln\beta - \beta A_m \ln\alpha_1 - \beta B_m \ln\alpha_2 - (\beta+1)\ln u(A_m) - \alpha_1^{-\beta} v(A_m) - (\beta+1)\ln u(B_m) - \alpha_2^{-\beta} v(B_m).$$

Then, the maximum likelihood estimates of $p_1$ and $p_2$ are given by

$$\hat{p}_1 = \frac{A_m}{N_m + A_m}, \qquad \hat{p}_2 = \frac{B_m}{N - N_m + B_m}.$$

So, the maximum likelihood estimate of the change point is the value of $m$ that maximizes the likelihood function

$$L(\alpha_1,\alpha_2,\beta,\hat{p}_1,\hat{p}_2,m|x) = (1-\hat{p}_1)^{N_m}\, \hat{p}_1^{A_m}\, (1-\hat{p}_2)^{N-N_m}\, \hat{p}_2^{B_m}\, \beta^{\,n-N}\, \alpha_1^{-\beta A_m} \times \alpha_2^{-\beta B_m}\, [u(A_m)]^{-(\beta+1)}\, e^{-\alpha_1^{-\beta} v(A_m)}\, [u(B_m)]^{-(\beta+1)}\, e^{-\alpha_2^{-\beta} v(B_m)}.$$
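A short R sketch of this profile search (again our own illustration; the guard that reads $0\ln 0$ as 0 for empty segments is an implementation detail we add):

```r
# MLE of the change point with beta, alpha1, alpha2 treated as known:
# plug in p1-hat, p2-hat for each candidate m and maximize the log-likelihood.
mle_m <- function(x, alpha1, alpha2, beta) {
  n <- length(x); N <- sum(x == 0)
  xlogp <- function(k, p) ifelse(k == 0, 0, k * log(p))   # treat 0 * log(0) as 0
  loglik <- sapply(1:(n - 1), function(m) {
    y  <- x[1:m][x[1:m] > 0]
    z  <- x[(m + 1):n][x[(m + 1):n] > 0]
    Am <- length(y); Bm <- length(z); Nm <- m - Am
    p1 <- Am / m; p2 <- Bm / (n - m)
    xlogp(Nm, 1 - p1) + xlogp(Am, p1) +
      xlogp(N - Nm, 1 - p2) + xlogp(Bm, p2) +
      (n - N) * log(beta) - beta * Am * log(alpha1) - beta * Bm * log(alpha2) -
      (beta + 1) * (sum(log(y)) + sum(log(z))) -
      alpha1^(-beta) * sum(y^(-beta)) - alpha2^(-beta) * sum(z^(-beta))
  })
  which.max(loglik)
}
mle_m(x, alpha1 = 0.06, alpha2 = 9.5, beta = 1)   # simulation settings of Section 5
```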

5. NUMERICAL STUDY, SENSITIVITY ANALYSIS OF BAYES ESTIMATES

The data given in Table 1 are a random sample of size $n = 20$ generated in R from the change point model introduced above. We set $m = 10$, meaning that the change point occurs after the 10th observation. The first 10 observations come from the mixture of IW and degenerate distribution with $\beta = 1$, $\alpha_1 = 0.06$, $p_1 = 0.8$, and $R_1(t) = 0.96$ at $t = 5$, and the next 10 observations come from the mixture of IW and degenerate distribution with $\beta = 1$, $\alpha_2 = 9.5$, $p_2 = 0.6$, and $R_2(t) = 0.02$ at $t = 5$. The posterior median and the posterior mode of the change point $m$ under the informative prior are also calculated; the results are shown in Table 2. We calculated the Bayes estimators of the proportions $p_1$ and $p_2$ under the squared error loss function and the GEL function by writing programs in R; the results are shown in Tables 3 and 4.

i 1 2 3 4 5 6 7 8 9 10
Xi 2.72 4.40 11.94 6.91 25.15 4.53 261.35 0 46.18 0
i 11 12 13 14 15 16 17 18 19 20
Xi 0 0.10 0.12 0.07 0.16 0.06 0 0 0.17 0

IW, inverse Weibull.

Table 1

Generated observations from mixture of IW and degenerate distribution.

Bayes estimates of change point
Prior m* Posterior median Posterior mode
Informative 10 10 10
Table 2

The values of Bayes estimators of change point.

Bayes estimates of proportions
Prior         Posterior mean of p1    Posterior mean of p2
Informative   0.83                    0.62
Table 3

The values of Bayes estimators of proportions p1 and p2.

Bayes estimates of proportions p1 and p2
Prior         q2      p1E*    p2E*
Informative −2 0.84 0.63
−1 0.83 0.62
0.09 0.82 0.60
0.5 0.81 0.59
0.9 0.84 0.58
Table 4

The Bayes estimates using general entropy loss.

Table 5 reports the Bayes estimates of the change point $m$ under the Linex loss function and the GEL function for different values of the shape parameters $q_1$ and $q_2$. Also, the sensitivity of the Bayes estimators of the change point and of the proportions $p_1$ and $p_2$ with respect to the parameters of the prior distributions has been studied. In Tables 6 and 7, we computed the Bayes estimator of the change point under the SEL function for different sets of values of $(\mu_1, \mu_2)$ and $(\mu_3, \mu_4)$. In addition, Table 8 contains the Bayes estimates of the proportions under the GEL function for different sets of values of $(\mu_3, \mu_4)$. The mean square errors (MSEs) of the estimators are given in Table 9. The results in Tables 6–8 lead to the conclusion that the Bayes estimates of the change point and of the proportions are robust to reasonable choices of the prior parameters. From Fig. 1, in which the experiment is repeated 1000 times, we see that the Bayes estimator performs better than the MLE.

Prior Shape parameter Bayes estimates of change point
q1 q2 mL* mE*
Informative 2 −2 11 10
−1 −1 10 10
0.09 0.09 9 10
0.5 0.5 9 9
0.9 0.9 8 9
Table 5

The Bayes estimates using asymmetric loss functions.

μ1 μ2 m*
0.95 0.05 10
0.85 0.05  9
0.75 0.05  9
0.95 0.04 10
0.95 0.03 10
0.95 0.02  9
0.85 0.01  9
0.75 0.01  9

SEL, squared error loss.

Table 6

The Bayes estimates of m under SEL function for different values of μ1 and μ2.

μ3 μ4 m*
0.8 0.6 10
0.7 0.6 10
0.6 0.6 10
0.8 0.5 10
0.8 0.4 10
0.8 0.3 10
0.7 0.5 10

SEL, squared error loss.

Table 7

The Bayes estimates of m under SEL function for different values of μ3 and μ4.

μ3    μ4    p1E* (q2=1.2)    p2E* (q2=1.2)    p1E* (q2=0.5)    p2E* (q2=0.5)    p1E* (q2=2)    p2E* (q2=2)
0.8 0.6 0.83 0.62 0.81 0.59 0.84 0.63
0.8 0.5 0.84 0.59 0.85 0.57 0.84 0.61
0.7 0.6 0.80 0.60 0.79 0.59 0.82 0.63
0.7 0.5 0.80 0.60 0.79 0.56 0.82 0.61
0.6 0.6 0.78 0.61 0.76 0.60 0.79 0.63
0.6 0.5 0.77 0.60 0.77 0.56 0.79 0.62
Table 8

Bayes estimates of proportions.

m    mMLE    MSE(mMLE)    mB    MSE(mB)
1 1 0.15 2 2.3
2 2 0.17 2 0.90
3 3 0.20 3 0.5
4 4 0.25 4 0.30
5 5 0.26 5 0.25
6 6 0.29 6 0.24
7 7 0.31 7 0.27
8 8 0.35 8 0.22
9 9 0.37 9 0.21
10 10 0.38 10 0.16
11 11 0.39 11 0.15
12 12 0.40 12 0.18
13 13 0.40 13 0.23
14 14 0.42 14 0.24
15 15 0.45 15 0.22
16 16 0.48 16 0.30
17 17 0.48 17 0.33
18 18 0.35 18 0.34
19 19 0.10 18 2.9

MSE, mean square error.

Table 9

The values of the MSE of the change point estimates.

Figure 1

Comparison of Bayes and MLE estimators.

ACKNOWLEDGMENTS

We would like to thank the referees for a careful reading of our paper and for many valuable suggestions on the first draft of the manuscript.

REFERENCES

1. M. Ganji, World Acad. Sci. Eng. Technol., Vol. 4, 2010, pp. 5-24.
2. H. Chernoff and S. Zacks, Ann. Math. Stat., Vol. 35, 1964, pp. 999-1018.
3. Z. Kander and S. Zacks, Ann. Math. Stat., Vol. 37, 1966, pp. 1196-1210.
4. A.F.M. Smith, Biometrika, Vol. 62, 1975, pp. 407-416.
5. P.N. Jani and M. Pandya, Commun. Stat. Theory Methods, Vol. 28, No. 11, 1999, pp. 2623-2639.
6. M. Pandya and P.N. Jani, Commun. Stat. Theory Methods, Vol. 35, No. 12, 2006, pp. 2223-2237.
7. M. Pandya and P. Jadav, Int. J. Agric. Stat. Sci., Vol. 3, No. 2, 2007, pp. 589-595.
8. N. Ebrahimi and S.K. Ghosh, in N. Balakrishnan and C.R. Rao (editors), Handbook of Statistics, Vol. 20: Advances in Reliability, 2001, pp. 777-787.
9. L.D. Broemeling and H. Tsurumi, Econometrics and Structural Change, Marcel Dekker, New York, 1987.
10. R. Calabria and G. Pulcini, Microelectron. Reliab., Vol. 34, 1994, pp. 789-802.
11. P. Erto and M. Guida, Qual. Reliab. Eng. Int., Vol. 1, 1985, pp. 161-164.
12. M. Pandya and P. Jadav, Commun. Stat. Theory Methods, Vol. 39, No. 15, 2010, pp. 2725-2742.
13. R. Calabria and G. Pulcini, Microelectron. Reliab., Vol. 34, 1994, pp. 1897-1907.