Journal of Statistical Theory and Applications

Volume 19, Issue 2, June 2020, Pages 173 - 184

Bayesian Analysis of Misclassified Generalized Power Series Distributions Under Different Loss Functions

Authors
Peer Bilal Ahmad*
Department of Mathematical Sciences, Islamic University of Science & Technology, Awantipora, Pulwama, Jammu and Kashmir, India
Corresponding Author
Peer Bilal Ahmad
Received 28 June 2017, Accepted 26 January 2019, Available Online 25 May 2020.
DOI
10.2991/jsta.d.200513.001
Keywords
Posterior distributions; Squared error loss function (SELF); Weighted squared error loss function (WSELF); Bayes estimators; Misclassified distributions; Negative binomial distribution; Logarithmic series distribution; Poisson distribution
Abstract

In certain experimental investigations involving discrete distributions, external factors may induce measurement error in the form of misclassification. For instance, a situation may arise where certain values are erroneously reported; such a situation, termed modified or misclassified, has been studied by many researchers. Cohen (J. Am. Stat. Assoc. 55 (1960), 139–143; Ann. Inst. Stat. Math. 9 (1960), 189–193; Technometrics. 2 (1960), 109–113) studied misclassification in Poisson and binomial random variables. In this paper, we discuss misclassification in a very general class of discrete distributions, the generalized power series distributions (GPSDs), where some of the observations corresponding to $x = c+1$ ($c \ge 0$) are erroneously observed or reported as $x = c$ with probability $\alpha$. This class includes, among others, the binomial, negative binomial, logarithmic series and Poisson distributions. We derive the Bayes estimators of functions of the parameters of the misclassified GPSD under different loss functions. The results obtained for the misclassified GPSD are then applied to its particular cases, such as the negative binomial, logarithmic series and Poisson distributions. Finally, a few numerical examples are provided to illustrate the results.

Copyright
© 2020 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Binomial, negative binomial, Poisson and logarithmic series distributions, some well-known examples of generalized power series (GPS) distributions, are widely used for modeling count data. The modality and divisibility properties of these distributions are well documented in the literature. Stochastic ordering comparisons between these distributions and their mixtures have also attracted recent interest; see Misra et al. [1], Alamatsaz and Abbasi [2], Aghababaei Jazi and Alamatsaz [3], Abbasi et al. [4] and Aghababaei Jazi et al. [5].

In certain experimental investigations involving discrete distributions, external factors may induce measurement error in the form of misclassification. For instance, certain values may be erroneously reported, e.g., when a defective item is wrongly inspected as nondefective, and vice versa. Such a situation, termed modified or misclassified, has been studied by many researchers. Cohen [6–8] studied misclassification in Poisson and binomial random variables. Cohen [6], with a suitable alteration of the data to reflect the misplacement of ones to zeroes, used Bortkiewicz's [9] classical example of deaths from the kick of a horse per army corps per year, for ten Prussian army corps over twenty years (1875–1894). For the purpose of that illustration, Cohen assumed that twenty of the records which should have shown one death were in error by reporting no deaths.

Jani and Shah [10] studied misclassification in modified power series distributions (MPSDs), and Patel and Patel [11] in the case of the generalized power series distribution (GPSD), where some of the values of one are sometimes reported as zero. Hassan and Ahmad [12] studied misclassification in size-biased modified power series distributions (SBMPSDs), where some of the observations corresponding to $x=2$ are misclassified as $x=1$. Patel and Patel [13] also studied misclassification in the MPSD, and Hassan and Ahmad [14] in the SBMPSD, for the more general situation in which the value $c+1$ is sometimes erroneously reported as $c$. In all five of these papers the authors studied the structural properties of the respective distributions.

Our aim is to give Bayes estimators of functions of the parameters, under the squared error loss function (SELF) and the weighted squared error loss function (WSELF), of the misclassified generalized power series distribution (MGPSD), where some of the observations corresponding to $x = c+1$ ($c \ge 0$) are erroneously observed or reported as $x = c$ with probability $\alpha$. This class includes, among others, the binomial, negative binomial, logarithmic series and Poisson distributions (PD).

A random variable $X$ is said to have the MGPSD if the resulting distribution is of the form

\[
P(X=x)=\begin{cases}
\dfrac{\theta^{c}\left(a_{c}+\alpha\,a_{c+1}\,\theta\right)}{f(\theta)}, & \text{for } x=c,\\[2mm]
\dfrac{(1-\alpha)\,a_{c+1}\,\theta^{c+1}}{f(\theta)}, & \text{for } x=c+1,\\[2mm]
\dfrac{a_{x}\,\theta^{x}}{f(\theta)}, & \text{for } x\in S,
\end{cases}\tag{1}
\]
where $0\le\alpha\le 1$, $f(\theta)=\sum_{x}a_{x}\theta^{x}$ is positive, finite and differentiable, and the coefficients $a_{x}$ are nonnegative and free of $\theta$. Here $S$ is the subset of the set $I$ of nonnegative integers not containing $c$ and $c+1$.

It is interesting to note that for $\alpha=0$, the model (1) reduces to the simple GPSD introduced by Patil [15], given by

\[
P(X=x)=\frac{a_{x}\,\theta^{x}}{f(\theta)}, \qquad \text{for } x\in T,\tag{2}
\]
where $T$ is a subset of the set $I$ of nonnegative integers.
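
To make the construction in (1) concrete, the short check below (my own illustration, not part of the paper) builds the misclassified pmf for the Poisson-type special case $a_{x}=1/x!$, $f(\theta)=e^{\theta}$, verifies that the probabilities still sum to one, and confirms that setting $\alpha=0$ recovers the plain GPSD (2). The function name and the parameter values are hypothetical.

```python
from math import exp, factorial

def mgpsd_pmf(x, theta, alpha, c, a, f):
    """Misclassified GPSD pmf (1): probability mass alpha is moved from x = c+1 to x = c."""
    if x == c:
        return theta**c * (a(c) + alpha * a(c + 1) * theta) / f(theta)
    if x == c + 1:
        return (1 - alpha) * a(c + 1) * theta**(c + 1) / f(theta)
    return a(x) * theta**x / f(theta)

# Poisson-type special case: a_x = 1/x!, f(theta) = exp(theta); values are arbitrary
a, f, theta, c = (lambda x: 1 / factorial(x)), exp, 1.3, 1

total = sum(mgpsd_pmf(x, theta, 0.4, c, a, f) for x in range(60))
print(round(total, 10))                        # ~1.0: still a proper distribution
print(mgpsd_pmf(3, theta, 0.0, c, a, f),       # alpha = 0 ...
      a(3) * theta**3 / f(theta))              # ... agrees with the plain GPSD (2)
```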

In Section 2, we obtain the Bayes estimators of functions of the parameters of the MGPSD under the SELF and the weighted squared error loss function. In Sections 3, 4 and 5, the results for the misclassified GPSD are used to obtain the Bayes estimators of functions of the parameters of the misclassified Poisson, misclassified negative binomial and misclassified logarithmic series distributions, respectively, under the SELF and the weighted squared error loss function using different prior distributions. Finally, in Section 6, a few numerical examples are provided to illustrate the results.

2. BAYESIAN ESTIMATION OF MGPSD

Let $X_{1},X_{2},\ldots,X_{N}$ represent a random sample of size $N$ drawn from the misclassified GPSD (1). The likelihood function of $X_{1},X_{2},\ldots,X_{N}$ is then of the form

\[
L(\theta,\alpha\mid\underline{x}) \propto \sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\alpha^{N_{c}-j}\,(1-\alpha)^{N_{c+1}}\;\frac{\theta^{\,y+N_{c}-j}}{\left[f(\theta)\right]^{N}},\tag{3}
\]
where $\underline{x}=(x_{1},x_{2},\ldots,x_{N})$, $y=\sum_{i=1}^{N}x_{i}$, and $N_{i}$ is the number of observations in the $i$th class, so that $\sum_{i\ge 0}N_{i}=N$. The sum over $j$ arises from the binomial expansion of the factor $\left(a_{c}+\alpha\,a_{c+1}\,\theta\right)^{N_{c}}$ contributed by the $N_{c}$ observations in the class $x=c$.

As the parameter $\alpha$ represents the probability of misclassifying an observation from the class $x=c+1$ by reporting it as belonging to the class $x=c$, we may take a Beta$(u,v)$ prior for $\alpha$, with probability density function

\[
g(\alpha)=\frac{\alpha^{u-1}(1-\alpha)^{v-1}}{B(u,v)}, \qquad 0<\alpha<1,\; u,v>0,\tag{4}
\]
where $B(u,v)=\Gamma(u)\Gamma(v)/\Gamma(u+v)$, and the prior distribution for $\theta$ is taken to be a conjugate or nonconjugate prior density $h(\theta)$.

The joint posterior p.d.f. of $\theta$ and $\alpha$ corresponding to the priors $h(\theta)$ and $g(\alpha)$ is given by

\[
\Pi(\theta,\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)\,d\theta}\tag{5}
\]

The marginal posterior distributions of $\theta$ and $\alpha$ are, respectively, given by

\[
\Pi(\theta\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)\,d\theta}\tag{6}
\]
\[
\Pi(\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)\,d\theta}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}\,h(\theta)\,d\theta}\tag{7}
\]

Under the SELF, given by $L(\eta(\theta),d)=\left(\eta(\theta)-d\right)^{2}$ and $L(\gamma(\alpha),d)=\left(\gamma(\alpha)-d\right)^{2}$, where $\eta(\theta)$ and $\gamma(\alpha)$ are functions of $\theta$ and $\alpha$, respectively, and $d$ is a decision, the Bayes estimates $\hat{\eta}_{B}$ of $\eta(\theta)$ and $\hat{\gamma}_{B}$ of $\gamma(\alpha)$ are given by

\[
\hat{\eta}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\eta(\theta)\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{8}
\]
\[
\hat{\gamma}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\int_{0}^{1}\!\int_{\Theta}\gamma(\alpha)\,\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta\,d\alpha}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{9}
\]
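
Although (8) and (9) look involved, for fixed data they are ratios of finite sums whose terms each require only a one-dimensional integral over $\theta$ (the $\alpha$-integrals reduce to Beta functions). The sketch below is my own illustration of how (8) can be evaluated numerically, not code from the paper; the function name, the toy counts and the particular choices of $a_{x}$, $f$ and $h$ in the example call are all assumptions.

```python
import numpy as np
from math import comb, factorial
from scipy.integrate import quad
from scipy.special import betaln

def bayes_self(counts, c, u, v, a, f, h, eta=lambda t: t, theta_max=50.0):
    """SELF Bayes estimator (8) of eta(theta) for a misclassified GPSD.

    counts    : dict {observed value x: frequency}, the altered sample
    c         : misclassification point (some x = c+1 values reported as x = c)
    u, v      : Beta(u, v) prior parameters for alpha
    a, f, h   : series coefficients a_x, series function f(theta), prior density h(theta)
    theta_max : finite upper limit used in place of the parameter space Theta
    """
    N = sum(counts.values())
    y = sum(x * n for x, n in counts.items())
    Nc, Nc1 = counts.get(c, 0), counts.get(c + 1, 0)

    num = den = 0.0
    for j in range(Nc + 1):
        # weight of the j-th term: C(Nc, j) (a_c/a_{c+1})^j B(Nc - j + u, Nc1 + v)
        w = comb(Nc, j) * (a(c) / a(c + 1)) ** j * np.exp(betaln(Nc - j + u, Nc1 + v))
        kernel = lambda t: t ** (y + Nc - j) * f(t) ** (-N) * h(t)
        den += w * quad(kernel, 0.0, theta_max, epsabs=0, epsrel=1e-9)[0]
        num += w * quad(lambda t: eta(t) * kernel(t), 0.0, theta_max, epsabs=0, epsrel=1e-9)[0]
    return num / den

# illustrative run: Poisson-type GPSD (a_x = 1/x!, f = e^theta), Gamma prior with b = 1, a = 0.25,
# and a small hypothetical altered sample with misclassification at c = 0
toy = {0: 12, 1: 3, 2: 1}
print(bayes_self(toy, c=0, u=3, v=3,
                 a=lambda x: 1 / factorial(x), f=np.exp,
                 h=lambda t: 0.25 * np.exp(-0.25 * t)))
```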

Similarly, under the WSELF, given by $L(\eta(\theta),d)=w(\theta)\left(\eta(\theta)-d\right)^{2}$ and $L(\gamma(\alpha),d)=z(\alpha)\left(\gamma(\alpha)-d\right)^{2}$, where $w(\theta)$ is a function of $\theta$ and $z(\alpha)$ is a function of $\alpha$, the Bayes estimates $\hat{\eta}_{w}$ of $\eta(\theta)$ and $\hat{\gamma}_{w}$ of $\gamma(\alpha)$ are given by

\[
\hat{\eta}_{w}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}w(\theta)\,\eta(\theta)\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}w(\theta)\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{10}
\]
\[
\hat{\gamma}_{w}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\int_{0}^{1}\!\int_{\Theta}z(\alpha)\,\gamma(\alpha)\,\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta\,d\alpha}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\int_{0}^{1}\!\int_{\Theta}z(\alpha)\,\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta\,d\alpha}\tag{11}
\]

We consider two different forms of $w(\theta)$ and $z(\alpha)$, as given below:

  1. Let $w(\theta)=\theta^{-2}$ and $z(\alpha)=\alpha^{-2}$. The Bayes estimates $\hat{\eta}_{M}$ of $\eta(\theta)$ and $\hat{\gamma}_{M}$ of $\gamma(\alpha)$, known as the minimum expected loss (MEL) estimates, are given by

     \[
     \hat{\eta}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\eta(\theta)\,\theta^{\,y+N_{c}-j-2}\,[f(\theta)]^{-N}h(\theta)\,d\theta}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j-2}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{12}
     \]
     \[
     \hat{\gamma}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\int_{0}^{1}\!\int_{\Theta}\gamma(\alpha)\,\alpha^{N_{c}-j+u-3}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta\,d\alpha}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{13}
     \]

    This loss function was used by Tummala and Sathe [16] for estimating the reliability of certain life time distributions and by Zellner and Park [17] for estimating functions of parameters of some econometric models.

  2. Let $w(\theta)=\theta^{-2}e^{-\delta\theta}$, $\delta>0$, and $z(\alpha)=\alpha^{-2}e^{-\lambda\alpha}$, $\lambda>0$. The Bayes estimates $\hat{\eta}_{E}$ of $\eta(\theta)$ and $\hat{\gamma}_{E}$ of $\gamma(\alpha)$, known as the exponentially weighted minimum expected loss (EWMEL) estimates, are given by

     \[
     \hat{\eta}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\eta(\theta)\,\theta^{\,y+N_{c}-j-2}\,[f(\theta)]^{-N}e^{-\delta\theta}h(\theta)\,d\theta}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\int_{\Theta}\theta^{\,y+N_{c}-j-2}\,[f(\theta)]^{-N}e^{-\delta\theta}h(\theta)\,d\theta}\tag{14}
     \]
     \[
     \hat{\gamma}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}\int_{0}^{1}\!\int_{\Theta}\gamma(\alpha)\,\alpha^{N_{c}-j+u-3}(1-\alpha)^{N_{c+1}+v-1}e^{-\lambda\alpha}\,\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta\,d\alpha}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{a_{c}}{a_{c+1}}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)M\!\left(N_{c}-j+u-2,\,N_{c}+N_{c+1}-j+u+v-2;\,-\lambda\right)\int_{\Theta}\theta^{\,y+N_{c}-j}\,[f(\theta)]^{-N}h(\theta)\,d\theta}\tag{15}
     \]

     For solving (15), we have used the relation
     \[
     \frac{\Gamma(b-a)\,\Gamma(a)}{\Gamma(b)}\,M(a,b;z)=\int_{0}^{1}t^{a-1}(1-t)^{b-a-1}e^{zt}\,dt
     \]
     given by Abramowitz and Stegun [18]. Here $M(a,b;z)$ is the confluent hypergeometric function, with series representation

     \[
     M(a,b;z)=\sum_{j=0}^{\infty}\frac{(a)_{j}\,z^{j}}{(b)_{j}\,j!},\tag{16}
     \]
     where $(a)_{0}=1$ and $(a)_{j}=a(a+1)(a+2)\cdots(a+j-1)$. (17)

     Atanasiu [19] used this loss function for estimating the premium for risks, where a fraction of the variance of the risk is included as a loading on the net risk premium.
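
The confluent hypergeometric function appearing here (and repeatedly in the estimators below) is Kummer's function $M(a,b;z)={}_1F_1(a;b;z)$, which is available in standard numerical libraries, so the series (16)–(17) rarely needs to be coded by hand. The short consistency check below is my own illustration; the argument values are arbitrary.

```python
import numpy as np
from scipy.special import factorial, hyp1f1, poch

def M_series(a, b, z, terms=60):
    """Confluent hypergeometric function M(a, b; z) summed from the series (16)-(17)."""
    j = np.arange(terms)
    return np.sum(poch(a, j) * z**j / (poch(b, j) * factorial(j)))

# compare the truncated series with SciPy's implementation of Kummer's function
print(M_series(2.5, 6.0, -0.25), hyp1f1(2.5, 6.0, -0.25))
```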

Now, we shall consider some special cases of the p.m.f. (1) and obtain the corresponding Bayes estimators in each case.

3. BAYESIAN ESTIMATION OF MPD

A discrete random variable X is said to have misclassified Poisson distribution (MPD) if its probability mass function is given by

\[
P(X=x)=\begin{cases}
\dfrac{\theta^{c}}{c!}\left(1+\dfrac{\alpha\theta}{c+1}\right)e^{-\theta}, & \text{for } x=c,\\[2mm]
\dfrac{(1-\alpha)\,e^{-\theta}\,\theta^{c+1}}{(c+1)!}, & \text{for } x=c+1,\\[2mm]
\dfrac{\theta^{x}e^{-\theta}}{x!}, & \text{for } x\in S,
\end{cases}\tag{18}
\]
where $\theta>0$, $0\le\alpha\le 1$ and $S=I\setminus\{c,c+1\}$. For $c=0$ it reduces to the modified PD defined by Cohen [6], and for $\alpha=0$ the model (18) reduces to the classical PD. It is a special case of the misclassified GPSD (1) with
\[
f(\theta)=e^{\theta},\qquad a_{x}=\frac{1}{x!},\qquad a_{c}=\frac{1}{c!},\qquad a_{c+1}=\frac{1}{(c+1)!}.
\]

In this case, the likelihood function $L(\theta,\alpha\mid\underline{x})$ is of the form

\[
L(\theta,\alpha\mid\underline{x})\propto\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}\,\alpha^{N_{c}-j}(1-\alpha)^{N_{c+1}}\,\theta^{\,y+N_{c}-j}\,e^{-\theta N}.\tag{19}
\]

With the gamma prior for θ given by

\[
h(\theta)=\frac{a^{b}}{\Gamma(b)}\,e^{-a\theta}\,\theta^{b-1},\qquad \theta,a,b>0,\tag{20}
\]
and the beta prior for $\alpha$ given by (4), the joint posterior probability density function of $\theta$ and $\alpha$ is given by
\[
\Pi(\theta,\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}\,\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}+b-j-1}\,e^{-\theta(N+a)}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}\,B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{21}
\]

The marginal posterior distributions of $\theta$ and $\alpha$ are, respectively, given by

\[
\Pi(\theta\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\theta^{\,y+N_{c}+b-j-1}\,e^{-\theta(N+a)}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{22}
\]
\[
\Pi(\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{23}
\]

Under the SELF, given by $L(\theta,d)=(\theta-d)^{2}$ and $L(\alpha,d)=(\alpha-d)^{2}$, where $d$ is a decision, the Bayes estimates $\hat{\theta}^{\,r}_{B}$ of $\theta^{r}$ and $\hat{\alpha}^{\,r}_{B}$ of $\alpha^{r}$ are given by

\[
\hat{\theta}^{\,r}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j+r)}{(N+a)^{\,y+N_{c}+b-j+r}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{24}
\]
\[
\hat{\alpha}^{\,r}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u+r,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{25}
\]

Similarly, under the WSELF with $w(\theta)=\theta^{-2}$ and $z(\alpha)=\alpha^{-2}$, the MEL estimates of $\eta(\theta)=\theta^{r}$ and $\gamma(\alpha)=\alpha^{r}$ are obtained as

\[
\hat{\theta}^{\,r}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j+r-2)}{(N+a)^{\,y+N_{c}+b-j+r-2}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j-2)}{(N+a)^{\,y+N_{c}+b-j-2}}}\tag{26}
\]
\[
\hat{\alpha}^{\,r}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{27}
\]

Finally, under the WSELF with $w(\theta)=\theta^{-2}e^{-\delta\theta}$, $\delta>0$, and $z(\alpha)=\alpha^{-2}e^{-\lambda\alpha}$, $\lambda>0$, the EWMEL estimates of $\eta(\theta)=\theta^{r}$ and $\gamma(\alpha)=\alpha^{r}$ are given by

\[
\hat{\theta}^{\,r}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j+r-2)}{(N+a+\delta)^{\,y+N_{c}+b-j+r-2}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\dfrac{\Gamma(y+N_{c}+b-j-2)}{(N+a+\delta)^{\,y+N_{c}+b-j-2}}}\tag{28}
\]
\[
\hat{\alpha}^{\,r}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)M_{1}\,\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}(c+1)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)M_{2}\,\dfrac{\Gamma(y+N_{c}+b-j)}{(N+a)^{\,y+N_{c}+b-j}}}\tag{29}
\]
where $M_{1}=M\!\left(N_{c}-j+u+r-2,\;N_{c}+N_{c+1}-j+u+v+r-2;\;-\lambda\right)$ and $M_{2}=M\!\left(N_{c}-j+u-2,\;N_{c}+N_{c+1}-j+u+v-2;\;-\lambda\right)$.
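
Because the estimators (24)–(29) are finite sums of gamma and beta terms, they can be computed directly; working on the log scale avoids overflow when $N_{c}$ is large. The sketch below is my own illustration (the function name is hypothetical), computing the SELF estimates (24)–(25) for $r=1$. The example call uses the altered counts of Table 1 together with the prior values stated in Section 6, but its output has not been checked against the published estimates.

```python
import numpy as np
from math import comb, log
from scipy.special import betaln, gammaln, logsumexp

def mpd_self_estimates(counts, c, u, v, a, b, r=1):
    """SELF Bayes estimates (24)-(25) of theta^r and alpha^r for the
    misclassified Poisson distribution, evaluated in log space."""
    N = sum(counts.values())
    y = sum(x * n for x, n in counts.items())
    Nc, Nc1 = counts.get(c, 0), counts.get(c + 1, 0)

    def log_terms(du, dr):
        # log of the j-th summand: C(Nc, j) (c+1)^j B(Nc-j+u+du, Nc1+v)
        #                          * Gamma(y+Nc+b-j+dr) / (N+a)^(y+Nc+b-j+dr)
        return np.array([
            log(comb(Nc, j)) + j * log(c + 1)
            + betaln(Nc - j + u + du, Nc1 + v)
            + gammaln(y + Nc + b - j + dr) - (y + Nc + b - j + dr) * log(N + a)
            for j in range(Nc + 1)
        ])

    base = logsumexp(log_terms(0, 0))
    theta_r = np.exp(logsumexp(log_terms(0, r)) - base)   # Eq. (24)
    alpha_r = np.exp(logsumexp(log_terms(r, 0)) - base)   # Eq. (25)
    return theta_r, alpha_r

# Altered counts of Table 1 with the Section 6 priors (u = v = 3, a = 0.25, b = 1, c = 0);
# output not verified against the published estimates.
counts = {0: 124, 1: 25, 2: 4, 3: 2, 4: 1}
print(mpd_self_estimates(counts, c=0, u=3, v=3, a=0.25, b=1.0))
```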

4. BAYESIAN ESTIMATION OF MNBD

A discrete random variable X is said to have misclassified negative binomial distribution (MNBD) if its probability mass function is given by

\[
P(X=x)=\begin{cases}
\theta^{c}\left[\dfrac{(m+c-1)!}{c!\,(m-1)!}+\alpha\theta\,\dfrac{(m+c)!}{(c+1)!\,(m-1)!}\right](1-\theta)^{m}, & \text{for } x=c,\\[2mm]
(1-\alpha)\,\dfrac{(m+c)!}{(c+1)!\,(m-1)!}\,\theta^{c+1}(1-\theta)^{m}, & \text{for } x=c+1,\\[2mm]
\dbinom{m+x-1}{x}\theta^{x}(1-\theta)^{m}, & \text{for } x\in S,
\end{cases}\tag{30}
\]
where $0<\theta<1$, $0\le\alpha\le 1$ and $S$ is the subset of the set $I$ of nonnegative integers not containing $c$ and $c+1$. For $\alpha=0$, (30) reduces to the negative binomial distribution.

It is a special case of (1) with $f(\theta)=(1-\theta)^{-m}$ and $a_{x}=\dbinom{m+x-1}{x}$, $a_{c}=\dfrac{(m+c-1)!}{c!\,(m-1)!}$, $a_{c+1}=\dfrac{(m+c)!}{(c+1)!\,(m-1)!}$.

The likelihood function $L(\theta,\alpha\mid\underline{x})$ is of the form

\[
L(\theta,\alpha\mid\underline{x})\propto\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}\alpha^{N_{c}-j}(1-\alpha)^{N_{c+1}}\,\theta^{\,y+N_{c}-j}(1-\theta)^{mN}.\tag{31}
\]

Since 0<θ<1, we have taken two different prior distributions for θ given below

\[
h_{1}(\theta)=\frac{\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)},\qquad 0<\theta<1,\; a,b>0,\tag{32}
\]
where $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$, and
\[
h_{2}(\theta)=\frac{e^{-k\theta}\,\theta^{a-1}(1-\theta)^{b-1}}{B(a,b)\,M(a,\,a+b;\,-k)},\qquad 0<\theta<1,\; a,b>0,\tag{33}
\]
where $M(a,b;z)$ is the confluent hypergeometric function with the series representation given by (16) and (17).

Both $h_{1}(\theta)$ and $h_{2}(\theta)$ are natural conjugate prior densities. The prior density $h_{2}(\theta)$ is the generalized beta density considered by Holla [20] and Bhattacharya [21].

The joint posterior p.d.f. of $\theta$ and $\alpha$ corresponding to the priors $h_{1}(\theta)$ and $g(\alpha)$ is given by

\[
\Pi_{1}(\theta,\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j+a-1}(1-\theta)^{mN+b-1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{34}
\]

The marginal posterior distributions of $\theta$ and $\alpha$ are, respectively, given by

\[
\Pi_{1}(\theta\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\theta^{\,y+N_{c}-j+a-1}(1-\theta)^{mN+b-1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{35}
\]
\[
\Pi_{1}(\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(y+N_{c}-j+a,\,mN+b\right)\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{36}
\]

Similarly, the joint posterior p.d.f. of $\theta$ and $\alpha$ corresponding to the priors $h_{2}(\theta)$ and $g(\alpha)$ is given by

\[
\Pi_{2}(\theta,\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j+a-1}(1-\theta)^{mN+b-1}e^{-k\theta}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{37}
\]
where $M_{3}=M\!\left(y+N_{c}-j+a,\;y+N_{c}-j+mN+a+b;\;-k\right)$.

The marginal posterior distributions of $\theta$ and $\alpha$ are, respectively, given by

\[
\Pi_{2}(\theta\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\theta^{\,y+N_{c}-j+a-1}(1-\theta)^{mN+b-1}e^{-k\theta}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{38}
\]
\[
\Pi_{2}(\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}\,\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{39}
\]

Under the SELF, the Bayes estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (35) and (36), respectively, are given by

\[
\hat{\theta}^{\,r}_{1B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r,\,mN+b\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{40}
\]
\[
\hat{\alpha}^{\,r}_{1B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u+r,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{41}
\]

Under the WSELF with $w(\theta)=\theta^{-2}$ and $z(\alpha)=\alpha^{-2}$, the MEL estimates of $\eta(\theta)=\theta^{r}$ and $\gamma(\alpha)=\alpha^{r}$ corresponding to the posterior densities (35) and (36), respectively, are given by

\[
\hat{\theta}^{\,r}_{1M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r-2,\,mN+b\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a-2,\,mN+b\right)}\tag{42}
\]
\[
\hat{\alpha}^{\,r}_{1M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)}\tag{43}
\]

Finally, under the WSELF with $w(\theta)=\theta^{-2}e^{-\delta\theta}$, $\delta>0$, and $z(\alpha)=\alpha^{-2}e^{-\lambda\alpha}$, $\lambda>0$, the EWMEL estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (35) and (36), respectively, are given by

\[
\hat{\theta}^{\,r}_{1E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r-2,\,mN+b\right)M_{4}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a-2,\,mN+b\right)M_{5}}\tag{44}
\]
where $M_{4}=M\!\left(y+N_{c}-j+a+r-2,\;y+N_{c}-j+a+b+mN+r-2;\;-\delta\right)$ and $M_{5}=M\!\left(y+N_{c}-j+a-2,\;y+N_{c}-j+a+b+mN-2;\;-\delta\right)$, and
\[
\hat{\alpha}^{\,r}_{1E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(y+N_{c}-j+a,\,mN+b\right)B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)M_{1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{2}}\tag{45}
\]
with $M_{1}$ and $M_{2}$ as defined after (29).

Also, under the SELF, the Bayes estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (38) and (39), respectively, are given by

\[
\hat{\theta}^{\,r}_{2B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r,\,mN+b\right)M_{6}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{46}
\]
where $M_{6}=M\!\left(y+N_{c}-j+a+r,\;y+N_{c}-j+a+b+mN+r;\;-k\right)$, and
\[
\hat{\alpha}^{\,r}_{2B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u+r,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{47}
\]

Under the WSELF with $w(\theta)=\theta^{-2}$ and $z(\alpha)=\alpha^{-2}$, the MEL estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (38) and (39), respectively, are given by

\[
\hat{\theta}^{\,r}_{2M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r-2,\,mN+b\right)M_{7}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a-2,\,mN+b\right)M_{8}}\tag{48}
\]
where $M_{7}=M\!\left(y+N_{c}-j+a+r-2,\;y+N_{c}-j+a+b+mN+r-2;\;-k\right)$ and $M_{8}=M\!\left(y+N_{c}-j+a-2,\;y+N_{c}-j+a+b+mN-2;\;-k\right)$, and
\[
\hat{\alpha}^{\,r}_{2M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{49}
\]

Finally, under the WSELF with $w(\theta)=\theta^{-2}e^{-\delta\theta}$, $\delta>0$, and $z(\alpha)=\alpha^{-2}e^{-\lambda\alpha}$, $\lambda>0$, the EWMEL estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (38) and (39), respectively, are given by

\[
\hat{\theta}^{\,r}_{2E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a+r-2,\,mN+b\right)M_{9}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+a-2,\,mN+b\right)M_{10}}\tag{50}
\]
where $M_{9}=M\!\left(y+N_{c}-j+a+r-2,\;y+N_{c}-j+a+b+mN+r-2;\;-(\delta+k)\right)$ and $M_{10}=M\!\left(y+N_{c}-j+a-2,\;y+N_{c}-j+a+b+mN-2;\;-(\delta+k)\right)$, and
\[
\hat{\alpha}^{\,r}_{2E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)M_{1}\,B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{m+c}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)M_{2}\,B\!\left(y+N_{c}-j+a,\,mN+b\right)M_{3}}\tag{51}
\]
with $M_{1}$ and $M_{2}$ as defined after (29).
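
The estimators (40)–(51) are likewise finite sums; for example, the SELF estimates (40)–(41) under the beta prior (32) involve only ratios of log-beta terms. A minimal sketch, written by me for illustration (the function name, counts and parameter values are hypothetical):

```python
import numpy as np
from math import comb, log
from scipy.special import betaln, logsumexp

def mnbd_self_estimates(counts, c, m, u, v, a, b, r=1):
    """SELF Bayes estimates (40)-(41) of theta^r and alpha^r for the
    misclassified negative binomial distribution under the beta prior (32)."""
    N = sum(counts.values())
    y = sum(x * n for x, n in counts.items())
    Nc, Nc1 = counts.get(c, 0), counts.get(c + 1, 0)
    log_ratio = log((c + 1) / (m + c))          # log of a_c / a_{c+1}

    def log_terms(du, dr):
        # log of the j-th summand: C(Nc, j) ((c+1)/(m+c))^j
        #   * B(Nc-j+u+du, Nc1+v) * B(y+Nc-j+a+dr, mN+b)
        return np.array([
            log(comb(Nc, j)) + j * log_ratio
            + betaln(Nc - j + u + du, Nc1 + v)
            + betaln(y + Nc - j + a + dr, m * N + b)
            for j in range(Nc + 1)
        ])

    base = logsumexp(log_terms(0, 0))
    return (np.exp(logsumexp(log_terms(0, r)) - base),    # Eq. (40)
            np.exp(logsumexp(log_terms(r, 0)) - base))    # Eq. (41)

# hypothetical example: m = 2, misclassification at c = 1, flat Beta(1, 1) prior on theta
counts = {0: 40, 1: 32, 2: 15, 3: 8, 4: 3, 5: 2}
print(mnbd_self_estimates(counts, c=1, m=2, u=3, v=3, a=1.0, b=1.0))
```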

5. BAYESIAN ESTIMATION OF MLSD

The probability mass function of the misclassified logarithmic series distribution (MLSD) is given by

\[
P(X=x)=\begin{cases}
\dfrac{\theta^{c}\left(\dfrac{1}{c}+\dfrac{\alpha\theta}{c+1}\right)}{-\log(1-\theta)}, & \text{for } x=c,\\[2mm]
\dfrac{(1-\alpha)}{(c+1)}\,\dfrac{\theta^{c+1}}{-\log(1-\theta)}, & \text{for } x=c+1,\\[2mm]
\dfrac{\theta^{x}}{-x\log(1-\theta)}, & \text{for } x\in S,
\end{cases}\tag{52}
\]
where $0<\theta<1$, $0\le\alpha\le 1$ and $S$ is the subset of the set $I$ of nonnegative integers not containing $c$ and $c+1$. For $\alpha=0$, we note that (52) reduces to the usual logarithmic series distribution (LSD). It is also a special case of (1) with $f(\theta)=-\log(1-\theta)$, $a_{x}=\dfrac{1}{x}$, $a_{c}=\dfrac{1}{c}$, $a_{c+1}=\dfrac{1}{c+1}$.

The likelihood function in this case is given by

\[
L(\theta,\alpha\mid\underline{x})\propto\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}\alpha^{N_{c}-j}(1-\alpha)^{N_{c+1}}\,\frac{\theta^{\,y+N_{c}-j}}{\left[-\log(1-\theta)\right]^{N}}.\tag{53}
\]

Since $0<\theta<1$, we have taken the prior distribution for $\theta$ as given below:

\[
h(\theta)=\frac{(k+1)^{N+1}\,(1-\theta)^{k}\left[-\log(1-\theta)\right]^{N}}{\Gamma(N+1)},\qquad 0<\theta<1,\; k>0,\tag{54}
\]
where $N$ is a positive integer equal to the size of the random sample. This is a nonconjugate prior p.d.f.; the conjugate priors given by (32) and (33) could also be used.

The joint posterior p.d.f. of $\theta$ and $\alpha$ corresponding to the priors $h(\theta)$ and $g(\alpha)$ is given by

\[
\Pi(\theta,\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}\,\theta^{\,y+N_{c}-j}(1-\theta)^{k}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{55}
\]

The marginal posterior distributions of $\theta$ and $\alpha$ are, respectively, given by

\[
\Pi(\theta\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)\theta^{\,y+N_{c}-j}(1-\theta)^{k}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{56}
\]
\[
\Pi(\alpha\mid\underline{x})=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(y+N_{c}-j+1,\,k+1\right)\alpha^{N_{c}-j+u-1}(1-\alpha)^{N_{c+1}+v-1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{57}
\]

Under the SELF, the Bayes estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (56) and (57), respectively, are given by

\[
\hat{\theta}^{\,r}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+r+1,\,k+1\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{58}
\]
\[
\hat{\alpha}^{\,r}_{B}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u+r,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{59}
\]

Under the WSELF with $w(\theta)=\theta^{-2}$ and $z(\alpha)=\alpha^{-2}$, the MEL estimates of $\eta(\theta)=\theta^{r}$ and $\gamma(\alpha)=\alpha^{r}$ corresponding to the posterior densities (56) and (57), respectively, are given by

\[
\hat{\theta}^{\,r}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+r-1,\,k+1\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j-1,\,k+1\right)}\tag{60}
\]
\[
\hat{\alpha}^{\,r}_{M}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+1,\,k+1\right)}\tag{61}
\]

Finally, under the WSELF with $w(\theta)=\theta^{-2}e^{-\delta\theta}$, $\delta>0$, and $z(\alpha)=\alpha^{-2}e^{-\lambda\alpha}$, $\lambda>0$, the EWMEL estimates of $\theta^{r}$ and $\alpha^{r}$ corresponding to the posterior densities (56) and (57), respectively, are given by

\[
\hat{\theta}^{\,r}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j+r-1,\,k+1\right)M_{11}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(N_{c}-j+u,\,N_{c+1}+v\right)B\!\left(y+N_{c}-j-1,\,k+1\right)M_{12}}\tag{62}
\]
where $M_{11}=M\!\left(y+N_{c}-j+r-1,\;y+N_{c}-j+k+r;\;-\delta\right)$ and $M_{12}=M\!\left(y+N_{c}-j-1,\;y+N_{c}-j+k;\;-\delta\right)$, and
\[
\hat{\alpha}^{\,r}_{E}=\frac{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(y+N_{c}-j+1,\,k+1\right)B\!\left(N_{c}-j+u+r-2,\,N_{c+1}+v\right)M_{1}}{\sum_{j=0}^{N_{c}}\binom{N_{c}}{j}\left(\frac{c+1}{c}\right)^{j}B\!\left(y+N_{c}-j+1,\,k+1\right)B\!\left(N_{c}-j+u-2,\,N_{c+1}+v\right)M_{2}}\tag{63}
\]
with $M_{1}$ and $M_{2}$ as defined after (29).
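
The logarithmic series estimators have the same finite-sum structure. A brief sketch of the SELF estimates (58)–(59) under the prior (54), again my own illustration with hypothetical counts:

```python
import numpy as np
from math import comb, log
from scipy.special import betaln, logsumexp

def mlsd_self_estimates(counts, c, u, v, k, r=1):
    """SELF Bayes estimates (58)-(59) of theta^r and alpha^r for the
    misclassified logarithmic series distribution under the prior (54)."""
    y = sum(x * n for x, n in counts.items())
    Nc, Nc1 = counts.get(c, 0), counts.get(c + 1, 0)
    log_ratio = log((c + 1) / c)                # log of a_c / a_{c+1}

    def log_terms(du, dr):
        return np.array([
            log(comb(Nc, j)) + j * log_ratio
            + betaln(Nc - j + u + du, Nc1 + v)
            + betaln(y + Nc - j + 1 + dr, k + 1)
            for j in range(Nc + 1)
        ])

    base = logsumexp(log_terms(0, 0))
    return (np.exp(logsumexp(log_terms(0, r)) - base),    # Eq. (58)
            np.exp(logsumexp(log_terms(r, 0)) - base))    # Eq. (59)

# hypothetical counts on x = 1, 2, 3, ... with misclassification at c = 1 and prior k = 2
counts = {1: 60, 2: 25, 3: 10, 4: 4, 5: 1}
print(mlsd_self_estimates(counts, c=1, u=3, v=3, k=2.0))
```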

6. SOME ILLUSTRATIVE EXAMPLES

In this section we consider a few numerical examples in order to apply and illustrate the foregoing results, comparing the Bayes estimates with the moment estimates and the maximum likelihood estimates.

The data set presented in Tables 1–3 pertains to the number of outbreaks of strikes per 4-week period in three leading industries in the United Kingdom during 1948–1958. The same data were used by Kendall [22] for fitting a PD model. For the purpose of this illustration, it has been assumed that ten of the observations corresponding to the value one were erroneously reported as zero. Both the original and the altered frequencies are given in Tables 1–3. From the altered data we have estimated the parameters of the misclassified PD by the method of moments, by maximum likelihood and by Bayesian estimation under the different loss functions. The prior values used for the beta distribution (4) are $u=v=3$, those used for the gamma distribution (20) are $a=0.25$, $b=1$, and those used for the EWMEL estimates (28) and (29) are $\delta=\lambda=0.25$. The values chosen for $a$ and $b$ in (20) give a fairly flat form of the gamma distribution. Hassan [23] and Islam and Consul [24], while discussing the Bayesian estimation of the generalized PD and the generalized negative binomial distribution, respectively, have shown that the estimated Bayes frequencies were quite close to the simulated sample frequencies when $u$ and $v$ were equal. The values of the prior parameters $a,b,u,v,\delta,\lambda$ were chosen so that the posterior distribution would reflect the data as much, and the prior information as little, as possible.

Table 1. Number of outbreaks of strikes in the transport industry in the United Kingdom during 1948–1958 (Kendall [22]), with parameter estimates for the misclassified PD.

No. of Outbreaks of Strikes    Original Data    Altered Data
0                              114              124
1                              35               25
2                              4                4
3                              2                2
4                              1                1
Total                          156              156

Estimates of the parameters from the altered data:

Method                              $\hat{\theta}$    $\hat{\alpha}$
Moment estimates                    0.4190            0.4795
Maximum likelihood estimates        0.3767            0.3868
Bayes estimates (SELF)              0.3899            0.4068
Bayes estimates (MEL)               0.3678            0.3306
Bayes estimates (EWMEL)             0.3668            0.3296

SELF, squared error loss function; MEL, minimum expected loss; EWMEL, exponentially weighted minimum expected loss.

Estimate of $\theta$ when the calculations are based on the original unaltered sample: $\hat{\theta}=0.3397$.

Actual proportion of observations misclassified: 10/35 = 0.2857.

Table 2. Number of outbreaks of strikes in the vehicle manufacturing industry in the United Kingdom during 1948–1958 (Kendall [22]), with parameter estimates for the misclassified PD.

No. of Outbreaks of Strikes    Original Data    Altered Data
0                              110              120
1                              33               23
2                              9                9
3                              3                3
4                              1                1
Total                          156              156

Estimates of the parameters from the altered data:

Method                              $\hat{\theta}$    $\hat{\alpha}$
Moment estimates                    0.5133            0.5263
Maximum likelihood estimates        0.5073            0.5221
Bayes estimates (SELF)              0.5082            0.5046
Bayes estimates (MEL)               0.4874            0.4560
Bayes estimates (EWMEL)             0.4861            0.4547

SELF, squared error loss function; MEL, minimum expected loss; EWMEL, exponentially weighted minimum expected loss.

Estimate of $\theta$ when the calculations are based on the original unaltered sample: $\hat{\theta}=0.4103$.

Actual proportion of observations misclassified: 10/33 = 0.3030.

Table 3. Number of outbreaks of strikes in the shipbuilding industry in the United Kingdom during 1948–1958 (Kendall [22]), with parameter estimates for the misclassified PD.

No. of Outbreaks of Strikes    Original Data    Altered Data
0                              117              127
1                              29               19
2                              9                9
3                              0                0
4                              1                1
Total                          156              156

Estimates of the parameters from the altered data:

Method                              $\hat{\theta}$    $\hat{\alpha}$
Moment estimates                    0.4165            0.5613
Maximum likelihood estimates        0.4159            0.5570
Bayes estimates (SELF)              0.4151            0.5298
Bayes estimates (MEL)               0.3920            0.4735
Bayes estimates (EWMEL)             0.3909            0.4724

SELF, squared error loss function; MEL, minimum expected loss; EWMEL, exponentially weighted minimum expected loss.

Estimate of $\theta$ when the calculations are based on the original unaltered sample: $\hat{\theta}=0.3269$.

Actual proportion of observations misclassified: 10/29 = 0.3448.

The estimate $\hat{\theta}$ obtained from the altered data is to be compared with the estimate of $\theta$ obtained when the calculations are based on the original unaltered sample, and the estimate $\hat{\alpha}$ is to be compared with the actual proportion of ones that were misclassified in altering the original data for this illustration. It is clear from the tables that the Bayes estimates are closer to these target values than both the maximum likelihood estimates and the moment estimates. It is also encouraging to observe that the Bayes estimates obtained under the WSELF are much closer than the Bayes estimates obtained under the SELF, and that the EWMEL estimates are closer than the MEL estimates.

CONFLICT OF INTEREST

There is no potential conflict of interest related to this study.

REFERENCES

4. S. Abbasi, M. Aghababaei Jazi, and M.H. Alamatsaz, Global J. Pure Appl. Math., Vol. 6, 2010, pp. 305-316.
7. A.C. Cohen, Ann. Inst. Stat. Math., Vol. 9, 1960, pp. 189-193.
9. L. von Bortkiewicz, Das Gesetz der Kleinen Zahlen, Teubner, Leipzig, Germany, 1898.
10. P.N. Jani and S.M. Shah, Metron, Vol. 37, 1979, pp. 121-136.
11. A.I. Patel and I.D. Patel, ASR, Vol. 10, 1996, pp. 107-119.
12. A. Hassan and P.B. Ahmad, Math. Bohemica, Vol. 134, 2009, pp. 1-17.
13. A.I. Patel and I.D. Patel, ASR, Vol. 15, 2001, pp. 55-69.
14. A. Hassan and P.B. Ahmad, J. Korean Soc. Ind. Appl. Math., Vol. 13, 2009, pp. 55-72.
15. G.P. Patil, Sankhya, Vol. 23, 1961, pp. 269-280.
18. M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, New York, NY, USA, 1964.
19. V. Atanasiu, Revista Inf. Econ., Vol. 2, 2008, pp. 12-17.
23. A. Hassan, Problems of Estimation in Lagrangian Probability Distributions, PhD Thesis, Patna University, Patna, India, 1995.
