# Journal of Statistical Theory and Applications

Volume 17, Issue 2, June 2018, Pages 230 - 241

# A note on Sum, Difference, Product and Ratio of Kumaraswamy Random Variables

Authors
Avishek Mallick (mallicka@marshall.edu)
Department of Mathematics, Marshall University, Huntington, West Virginia 25755, USA
Indranil Ghosh (ghoshi@uncw.edu)
Department of Mathematics and Statistics, University of North Carolina, Wilmington, North Carolina 28403, USA
G. G. Hamedani (gholamhoss.hamedani@marquette.edu)
Department of Mathematics, Statistics and Computer Science, Marquette University, Milwaukee, Wisconsin 53201, USA
Received 14 December 2016, Accepted 15 September 2017, Available Online 30 June 2018.
DOI
10.2991/jsta.2018.17.2.4
Keywords
Ratio of random variables; product of random variables; Kumaraswamy distribution; sub-independence
Abstract

Explicit expressions for the densities of S = X1 + X2, D = X1 − X2, P = X1X2 and R = X1/X2 are derived when X1 and X2 are independent or sub-independent Kumaraswamy random variables. The expressions involve the incomplete beta functions. Some possible real life scenarios are mentioned in which such quantities might be of interest.

Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

## 1. Introduction

Kumaraswamy (1980) [3] introduced a two-parameter absolutely continuous distribution which compares extremely favorably, in terms of simplicity, with the beta distribution. The Kumaraswamy distribution on the interval (0,1) has probability density function (pdf) with two shape parameters a > 0 and b > 0 given by

$$f(x) = a b\, x^{a-1}\left(1 - x^{a}\right)^{b-1} I(0 < x < 1), \tag{1.1}$$

and its cumulative distribution function (cdf) is given by

$$F(x) = 1 - \left(1 - x^{a}\right)^{b}. \tag{1.2}$$

If a random variable X has the pdf given in (1.1), then we will write X ∼ K(a, b).
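To make the pdf, cdf and the closed-form quantile function concrete, they can be written out directly; since the cdf in (1.2) inverts in closed form, sampling is straightforward. This is a minimal sketch, not from the paper; the function names are ours:

```python
import random

def kuma_pdf(x, a, b):
    """Kumaraswamy K(a, b) density: f(x) = a*b*x^(a-1)*(1 - x^a)^(b-1) on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    return a * b * x ** (a - 1.0) * (1.0 - x ** a) ** (b - 1.0)

def kuma_cdf(x, a, b):
    """Kumaraswamy K(a, b) cdf: F(x) = 1 - (1 - x^a)^b."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 - (1.0 - x ** a) ** b

def kuma_inv(u, a, b):
    """Quantile function: solve F(x) = u, giving x = (1 - (1-u)^(1/b))^(1/a)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kuma_sample(a, b, rng=random):
    """Inverse-cdf draw from K(a, b)."""
    return kuma_inv(rng.random(), a, b)
```

The closed-form inverse cdf is one of the tractability advantages over the beta distribution mentioned above.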

The density function in (1.1) has similar properties to those of the beta distribution but has some advantages in terms of tractability. Like the beta density, the Kumaraswamy pdf is unimodal, uniantimodal, increasing, decreasing or constant depending on the values of the parameters: a > 1 and b > 1 (unimodal); a < 1 and b < 1 (uniantimodal); a > 1 and b ≤ 1 (increasing); a ≤ 1 and b > 1 (decreasing); a = b = 1 (constant). For a detailed survey of properties of the Kumaraswamy distribution, the reader is referred to Jones (2009) [2]. This distribution has a close relation with the beta and the generalized beta distribution of the first kind, as listed below:

• If X ∼ Beta(1,b) then X ∼ K(1,b)

• If X ∼ Beta(a,1) then X ∼ K(a,1)

• If X ∼ K(a,b), then X ∼ GB1(a,1,1,b),

where GB1 stands for the generalized beta distribution of the first kind.

In this article we consider two independent (or sub-independent) Kumaraswamy random variables with appropriate parameters and study the distributions of their sum, difference, product and ratio. We also discuss some real life scenarios in which such quantities might be attractive models; this is the major contribution of this article. We list below the scenarios where sums, differences, products and ratios of two independent Kumaraswamy random variables might be useful.

• Situations in which Kumaraswamy sums will be useful:

• Length or weight of a chain with n links.

• Electric resistance of a series circuit.

• The total life length of n products, where the next one is put on operation after failure of the preceding one.

In each of these cases, individual items (for example Xi denoting individual links for the first example, individual resistance for the second example and individual life length for the third example) can be considered as having univariate Kumaraswamy densities.

• Situations in which Kumaraswamy differences will be useful:

• Let us suppose that in a certain mass-produced assembly, a 5 cm shaft must slide into a cylindrical sleeve. Shafts are manufactured whose diameter X1 follows a Kumaraswamy distribution, and cylindrical sleeves are manufactured whose internal diameter X2 follows another Kumaraswamy distribution. Assembly is performed by selecting a shaft and a cylindrical sleeve at random. Suppose our interest is the following: in what proportion of cases will it be impossible to fit the selected shaft and cylindrical sleeve together? Clearly, the shaft and the cylindrical sleeve will fit together only if the diameter of the shaft is smaller than the internal diameter of the cylindrical sleeve, that is, only if the difference of the two random variables X2 and X1 is greater than zero. We can take the difference X2 − X1 and find its distribution.

• Situations in which Kumaraswamy product will be useful:

We cite a real life application below:

• An observer’s information about a classical system is captured by the observation that he/she assigns to it. It is a legitimate assumption that the two observers do not have to have the same information about the classical system. Next, if two observers have obtained their information about the system independently, then clearly together they have gathered more information about the system than each has individually. A natural question then is: is it possible to come up with a single observation which embodies their combined information? An answer to this question may be provided by considering distribution of the product of the two independent random variables.

• In evaluating the revenue from holding an asset, defined by initial investment (X1) × net return (X2).

• Consider the portfolio value accumulation scenario: a portfolio with current value X1 will become X1(1 + X2) after some period, where X2 is the interest rate and X1 and X2 are independent.

• ARCH models in time series studies.

• Number of cancer cells in tumor biology.

We believe that the above list of real life scenarios merits the study of the distributions of the ratio and of the product of X1 and X2 when the random variables X1 and X2 are independent and each follows a univariate Kumaraswamy distribution with the density function in (1.1).

The rest of the paper is organized as follows: in Section 2, we discuss the distributions of the sum, difference, product and ratio. Section 3 deals with the same distributions under non-central Kumaraswamy distributions. In Section 4, some concluding remarks are provided.

For the derivation of the distributions of the sum S = X1 + X2 and of the difference D = X1 − X2, the assumption of independence is not needed; a much weaker concept called sub-independence (defined below) can replace it.

For the sake of completeness we state below a few definitions related to the concept of sub-independence. The random variables (rvs) X and Y with cdfs FX and FY are s.i. (sub-independent) if the cdf of X + Y is given by

$$F_{X+Y}(z) = (F_X * F_Y)(z) = \int_{-\infty}^{\infty} F_X(z - y)\, dF_Y(y), \quad z \in \mathbb{R}, \tag{1.3}$$

or, equivalently, if and only if

$$\varphi_{X+Y}(t) = \varphi_{X,Y}(t, t) = \varphi_X(t)\, \varphi_Y(t), \quad \text{for all } t \in \mathbb{R}, \tag{1.4}$$

where $\varphi_X$, $\varphi_Y$, $\varphi_{X+Y}$ and $\varphi_{X,Y}$ are the characteristic functions (cfs) of X, Y, X + Y and (X, Y), respectively.

The equations (1.3) and (1.4) above are in terms of the cdf and the cf. The definition of sub-independence in terms of events, similar to that of independence, is as follows.

We observe that the half-plane H = {(x, y) : x + y < 0} (⊆ ℝ²) can be written as a countable disjoint union of rectangles:

$$H = \bigcup_{i=1}^{\infty} E_i \times F_i,$$

where the E_i and F_i are intervals. Now, let (X, Y) : Ω → ℝ² be a continuous random vector and, for c ∈ ℝ, let

$$A_c = \{\omega \in \Omega : X(\omega) + Y(\omega) < c\}$$

and

$$A_i(c) = \left\{\omega \in \Omega : X(\omega) - \tfrac{c}{2} \in E_i\right\}, \qquad B_i(c) = \left\{\omega \in \Omega : Y(\omega) - \tfrac{c}{2} \in F_i\right\}.$$

### Definition 1.1.

The continuous rvs X and Y are s.i. if, for every c ∈ ℝ,

$$P(A_c) = \sum_{i=1}^{\infty} P(A_i(c))\, P(B_i(c)). \tag{1.5}$$

### Remark 1.1.

(a) The representation (1.4) can be extended to the multivariate case as well.

(b) For a detailed treatment of the concept of sub-independence, please refer to Hamedani (2013) [1].

Consider the distribution of the sum S = X1 + X2 when the rvs X1 and X2 are s.i.; the pdf of S is then the convolution of the pdfs of X1 and X2, which is given by

$$f_S(s) = \begin{cases} \displaystyle\int_0^s f_{X_1}(t)\, f_{X_2}(s - t)\, dt, & \text{if } 0 < s < 1, \\[6pt] \displaystyle\int_0^{2-s} f_{X_1}(s - 1 + t)\, f_{X_2}(1 - t)\, dt, & \text{if } 1 < s < 2. \end{cases} \tag{1.6}$$

The difference D = X1 − X2, when the rvs X1 and −X2 are s.i., has a pdf which is the convolution of the pdfs of X1 and −X2, given by

$$f_D(d) = \begin{cases} \displaystyle\int_0^{1+d} f_{X_1}(t)\, f_{X_2}(t - d)\, dt, & \text{if } -1 < d < 0, \\[6pt] \displaystyle\int_0^{1-d} f_{X_1}(d + t)\, f_{X_2}(t)\, dt, & \text{if } 0 < d < 1. \end{cases} \tag{1.7}$$

Now, assuming that the rvs X1 and X2 are independent, the pdfs of P = X1X2 and of R = X2/X1 are given, respectively, by

$$f_P(p) = \int_p^1 \frac{f_{X_1}(t)}{t}\, f_{X_2}\!\left(\frac{p}{t}\right) dt, \quad 0 < p < 1, \tag{1.8}$$

and

$$f_R(r) = \begin{cases} \displaystyle\int_0^1 t\, f_{X_1}(t)\, f_{X_2}(rt)\, dt, & \text{if } r \le 1, \\[6pt] \displaystyle\int_0^{1/r} t\, f_{X_1}(t)\, f_{X_2}(rt)\, dt, & \text{if } r > 1. \end{cases} \tag{1.9}$$
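As a numerical sanity check, the integrals in (1.6) and (1.8) can be evaluated by simple quadrature and verified to integrate to one. The sketch below assumes independence (which implies sub-independence); the function names and the midpoint quadrature scheme are ours, not the paper's:

```python
def kuma_pdf(x, a, b):
    """Kumaraswamy K(a, b) density on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    return a * b * x ** (a - 1.0) * (1.0 - x ** a) ** (b - 1.0)

def f_sum(s, a1, b1, a2, b2, n=500):
    """f_S(s): midpoint quadrature of the convolution integral in (1.6)."""
    lo, hi = max(0.0, s - 1.0), min(1.0, s)
    h = (hi - lo) / n
    return sum(kuma_pdf(lo + (i + 0.5) * h, a1, b1) *
               kuma_pdf(s - lo - (i + 0.5) * h, a2, b2)
               for i in range(n)) * h

def f_prod(p, a1, b1, a2, b2, n=500):
    """f_P(p): midpoint quadrature of (1.8) over t in (p, 1)."""
    h = (1.0 - p) / n
    total = 0.0
    for i in range(n):
        t = p + (i + 0.5) * h
        total += kuma_pdf(t, a1, b1) / t * kuma_pdf(p / t, a2, b2)
    return total * h
```

Integrating `f_sum` over (0, 2) and `f_prod` over (0, 1) should give total mass close to 1 for any valid shape parameters.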

## 2. Kumaraswamy Sums, Differences, Products and Ratios

In this section we derive explicit expressions for the pdfs of S, D, P and R. Note that for S and D, we only require the concept of sub-independence.

## 2.1. Explicit expression for the pdf of the Sum (S)

### Theorem 2.1.

For 0 < s < 2, the pdf of S will be

$$f_S(s) = \begin{cases} a_2 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k} \binom{b_2-1}{j}\binom{a_2(1+j)-1}{k}\, s^{k}\, I_{s^{a_1}}\!\left(\dfrac{a_2(1+j)-1-k}{a_1}+1,\, b_1\right), & 0 < s < 1, \\[10pt] a_1 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k} \binom{b_1-1}{j}\binom{a_1(1+j)-1}{k}\, s^{k}\, \delta_{(s-1)^{a_2}}\!\left(\dfrac{a_1(1+j)-1-k}{a_2}+1,\, b_2\right), & 1 < s < 2, \end{cases}$$

where $I_x(a,b) = \int_0^x u^{a-1}(1-u)^{b-1}\, du$ and $\delta_x(a,b) = \int_x^1 u^{a-1}(1-u)^{b-1}\, du$ are the (unnormalized) lower and upper incomplete beta functions, respectively, and $\binom{b_2-1}{j} = \frac{(b_2-1)^{(j)}}{j!}$ with $(b_2-1)^{(j)} = (b_2-1)(b_2-2)\cdots(b_2-j)$ the falling factorial; the other generalized binomial coefficients are interpreted analogously.

### Proof.

Consider the case 0 < s < 1. From (1.6), we have

$$\begin{aligned}
\int_0^s f_1(t)\, f_2(s-t)\, dt &= a_1 b_1 a_2 b_2 \int_0^s t^{a_1-1}(1-t^{a_1})^{b_1-1}(s-t)^{a_2-1}\left[1-(s-t)^{a_2}\right]^{b_2-1} dt \\
&= a_1 b_1 a_2 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j} \int_0^s t^{a_1-1}(1-t^{a_1})^{b_1-1}(s-t)^{a_2(1+j)-1}\, dt, \\
&\qquad \text{on using the generalized binomial expansion,} \\
&= b_1 a_2 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j} \int_0^{s^{a_1}} (1-u)^{b_1-1}\left[s-u^{1/a_1}\right]^{a_2(1+j)-1} du, \\
&\qquad \text{on using the transformation } u = t^{a_1}, \\
&= b_1 a_2 b_2 \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}(-1)^{j+k} \binom{b_2-1}{j}\binom{a_2(1+j)-1}{k}\, s^k \int_0^{s^{a_1}} (1-u)^{b_1-1}\, u^{\frac{a_2(1+j)-1-k}{a_1}}\, du \\
&= a_2 b_1 b_2 \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}(-1)^{j+k} \binom{b_2-1}{j}\binom{a_2(1+j)-1}{k}\, s^k\, I_{s^{a_1}}\!\left(\frac{a_2(1+j)-1-k}{a_1}+1,\, b_1\right).
\end{aligned}$$

The result for 1 < s < 2 can be established similarly. Hence the proof.

Some representative density plots for S are provided in Figures 1 and 2. The following observations can be made from these two graphs.

• When b1 and b2 are kept fixed, then over all the possible combinations of the first shape parameters (i.e., a1 and a2) the density appears to be slightly left-skewed, except for the case a1 = a2, in which it is approximately symmetric, as expected (see Figure 1).

• When a1 and a2 are kept fixed, then over all the possible combinations of the second shape parameters (i.e., b1 and b2) the density appears to be slightly right-skewed, except for the case b1 = b2, in which it is approximately symmetric, as expected (see Figure 2).

As a consequence, in order to model bounded risks (with a proper modification, if required, to be in the interval [0,1]) with right-skewed data, one might consider varying the second shape parameters while keeping the first shape parameters fixed, and vice versa for left-skewed data.

## 2.2. Explicit expression for the pdf of the Difference (D)

### Theorem 2.2.

For −1 < d < 1, the pdf of D will be

$$f_D(d) = \begin{cases} a_2 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k} \binom{b_2-1}{j}\binom{a_2(1+j)-1}{k}\, d^{k}\, I_{(1+d)^{a_1}}\!\left(\dfrac{a_2(1+j)-1-k}{a_1}+1,\, b_1\right), & -1 < d < 0, \\[10pt] a_2 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}\sum_{k=0}^{\infty} (-1)^{j+k} \binom{b_1-1}{j}\binom{a_1(1+j)-1}{k}\, d^{a_1(1+j)-1-k}\, \delta_{(1-d)^{a_2}}\!\left(\dfrac{k}{a_2}+1,\, b_2\right), & 0 < d < 1. \end{cases}$$

### Proof.

Similar to that of Theorem 2.1.

Some representative density plots of D are provided in Figures 3 and 4. The following observations can be made from these two graphs.

• When b1 and b2 are kept fixed, the density appears to be slightly right-skewed if a1 < a2 and left-skewed if a1 > a2, with the obvious symmetric scenario in which a1 = a2 (see Figure 3).

• When a1 and a2 are kept fixed, the density appears to be slightly left-skewed if b1 < b2. Noticeably, the density appears to be approximately symmetric in the remaining two parametric configurations (i.e., b1 = b2 and b1 > b2) (see Figure 4).

It appears that for the difference, the form of the density is very sensitive to any change in the first shape parameters (i.e., a1 and a2).

## 2.3. Explicit expression for the pdf of the Product (P)

### Theorem 2.3.

For 0 < p < 1, the pdf of P will be

$$f_P(p) = a_2 b_1 b_2 \sum_{j=0}^{\infty} (-1)^j \binom{b_2-1}{j}\, p^{a_2(1+j)-1}\, \delta_{p^{a_1}}\!\left(1 - \frac{a_2(1+j)}{a_1},\, b_1\right).$$

### Proof.

From (1.8), we can write

$$\begin{aligned}
f_P(p) &= a_1 b_1 a_2 b_2 \int_p^1 t^{a_1-2}(1-t^{a_1})^{b_1-1}\left(\frac{p}{t}\right)^{a_2-1}\left[1-\left(\frac{p}{t}\right)^{a_2}\right]^{b_2-1} dt \\
&= a_1 b_1 a_2 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, p^{a_2(1+j)-1} \int_p^1 t^{a_1-1-a_2(1+j)}(1-t^{a_1})^{b_1-1}\, dt, \\
&\qquad \text{again using the binomial series expansion,} \\
&= a_2 b_1 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, p^{a_2(1+j)-1}\, \delta_{p^{a_1}}\!\left(1-\frac{a_2(1+j)}{a_1},\, b_1\right),
\end{aligned}$$

on using the transformation u = t^{a_1}.

Hence the result.
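When b2 − 1 is a positive integer the binomial series terminates, so the series form of the product density can be checked numerically against direct quadrature of (1.8). The sketch below uses our own rederivation of the expansion (with the upper incomplete beta integral computed by quadrature); it is illustrative, not the authors' code:

```python
def kuma_pdf(x, a, b):
    """Kumaraswamy K(a, b) density on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    return a * b * x ** (a - 1.0) * (1.0 - x ** a) ** (b - 1.0)

def upper_inc_beta(x, a, b, n=4000):
    """delta_x(a, b) = integral of u^(a-1)*(1-u)^(b-1) over (x, 1), midpoint rule."""
    h = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        u = x + (i + 0.5) * h
        total += u ** (a - 1.0) * (1.0 - u) ** (b - 1.0)
    return total * h

def f_prod_series(p, a1, b1, a2, b2, jmax=60):
    """Series form of f_P(p); the j-sum is finite and exact when b2 - 1 is an integer."""
    total, sign, coef = 0.0, 1.0, 1.0   # coef = generalized binomial C(b2-1, j)
    for j in range(jmax + 1):
        total += sign * coef * p ** (a2 * (1 + j) - 1.0) * \
                 upper_inc_beta(p ** a1, 1.0 - a2 * (1 + j) / a1, b1)
        coef *= (b2 - 1.0 - j) / (j + 1.0)   # step C(b2-1, j) -> C(b2-1, j+1)
        sign = -sign
        if coef == 0.0:
            break
    return a2 * b1 * b2 * total

def f_prod_direct(p, a1, b1, a2, b2, n=4000):
    """Direct midpoint quadrature of (1.8)."""
    h = (1.0 - p) / n
    total = 0.0
    for i in range(n):
        t = p + (i + 0.5) * h
        total += kuma_pdf(t, a1, b1) / t * kuma_pdf(p / t, a2, b2)
    return total * h
```

Agreement of the two routines for integer b2 gives an independent check on the algebra behind the theorem.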

Some representative density plots of P are provided in Figures 5 and 6. From both of these plots it appears that regardless of the choice of the four shape parameters, the density of P is right-skewed.

## 2.4. Explicit expression for the pdf of the Ratio (R)

### Theorem 2.4.

For 0 < r < ∞, the pdf of R will be

$$f_R(r) = \begin{cases} a_2 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, r^{a_2(1+j)-1}\, B\!\left(\dfrac{a_2(1+j)}{a_1}+1,\, b_1\right), & 0 < r < 1, \\[10pt] a_2 b_1 b_2 \displaystyle\sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, r^{a_2(1+j)-1}\, I_{(1/r)^{a_1}}\!\left(\dfrac{a_2(1+j)}{a_1}+1,\, b_1\right), & 1 < r < \infty. \end{cases}$$

### Proof.

Let us consider the case 0 < r ≤ 1. From (1.9), we can write

$$\begin{aligned}
\int_0^1 t\, f_1(t)\, f_2(rt)\, dt &= a_1 b_1 a_2 b_2 \int_0^1 t\cdot t^{a_1-1}(1-t^{a_1})^{b_1-1}(rt)^{a_2-1}\left[1-(rt)^{a_2}\right]^{b_2-1} dt \\
&= a_1 b_1 a_2 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, r^{a_2(1+j)-1} \int_0^1 t^{a_1+a_2(1+j)-1}(1-t^{a_1})^{b_1-1}\, dt, \\
&\qquad \text{on using the generalized binomial expansion,} \\
&= a_2 b_1 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, r^{a_2(1+j)-1} \int_0^1 u^{\frac{a_2(1+j)}{a_1}}(1-u)^{b_1-1}\, du, \\
&\qquad \text{on using the transformation } u = t^{a_1}, \\
&= a_2 b_1 b_2 \sum_{j=0}^{\infty}(-1)^j \binom{b_2-1}{j}\, r^{a_2(1+j)-1}\, B\!\left(\frac{a_2(1+j)}{a_1}+1,\, b_1\right).
\end{aligned}$$

The result for 1 < r < ∞ can be established similarly. Hence the proof.

Note: if b2 − 1 is an integer, the sum in j terminates at j = b2 − 1.
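The ratio density in (1.9) can also be validated by computing P(R ≤ 1) in two independent ways: by integrating f_R over (0, 1), and directly as P(X2 ≤ X1) = ∫ F_{X2}(x) f_{X1}(x) dx. A sketch (our own helper names, midpoint quadrature throughout):

```python
def kuma_pdf(x, a, b):
    """Kumaraswamy K(a, b) density on (0, 1)."""
    if not 0.0 < x < 1.0:
        return 0.0
    return a * b * x ** (a - 1.0) * (1.0 - x ** a) ** (b - 1.0)

def kuma_cdf(x, a, b):
    """Kumaraswamy K(a, b) cdf."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return 1.0 - (1.0 - x ** a) ** b

def f_ratio(r, a1, b1, a2, b2, n=500):
    """f_R(r) for R = X2/X1 by midpoint quadrature of (1.9)."""
    hi = 1.0 if r <= 1.0 else 1.0 / r
    h = hi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t * kuma_pdf(t, a1, b1) * kuma_pdf(r * t, a2, b2)
    return total * h
```

The two computations of P(R ≤ 1) should agree up to quadrature error, which checks the form and the limits of integration in (1.9).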

Some representative density plots of R are provided in Figures 7 and 8. From both of these plots it appears that regardless of the choice of the four shape parameters, the density of R is strictly right-skewed.

### Remark 2.1.

We have also provided some representative density plots (see Figures 9 and 10) for the density of W = X1/(X1 + X2). It appears that when we consider different combinations of the first shape parameters (i.e., a1 and a2), keeping the second pair of shape parameters fixed, the density is approximately symmetric, while in the other case it is slightly left-skewed.

## 3. Sum, Difference, Product and Ratio for non-central Kumaraswamy distribution

In this section we will make frequent use of the following representation of a Kumaraswamy variable as a power of a beta variable: if Y ∼ Beta(1, b), then X = Y^{1/a} ∼ K(a, b). Next, we start with three non-central beta models (for details, see Nagar et al. (2013) [4]) and subsequently obtain the expressions for the densities of the sum (S), difference (D), product (P) and ratio (R).
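This representation can be verified at the level of cdfs: P(Y^{1/a} ≤ x) = P(Y ≤ x^a), and composing the Beta(1, b) cdf with x ↦ x^a yields exactly the K(a, b) cdf. A small self-contained check (the function names are ours):

```python
def beta_1b_cdf(y, b):
    """cdf of Beta(1, b): F(y) = 1 - (1 - y)^b on (0, 1)."""
    return 1.0 - (1.0 - y) ** b

def kuma_cdf(x, a, b):
    """cdf of K(a, b): F(x) = 1 - (1 - x^a)^b on (0, 1)."""
    return 1.0 - (1.0 - x ** a) ** b

def power_transform_cdf(x, a, b):
    """cdf of X = Y^(1/a) with Y ~ Beta(1, b): P(Y <= x^a)."""
    return beta_1b_cdf(x ** a, b)
```

The same monotone-transformation argument is what carries the non-central beta models below over to their Kumaraswamy counterparts.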

The non-central Type I beta distribution is given by

$$f(u) = \exp(-\delta)\, \frac{u^{a-1}(1-u)^{b-1}}{B(a,b)}\, {}_1F_1\!\left(a+b;\, b;\, \delta(1-u)\right), \quad 0 < u < 1,$$

where ${}_1F_1(a;\, c;\, z) = \sum_{j=0}^{\infty} \frac{\Gamma(c)\,\Gamma(a+j)}{\Gamma(a)\,\Gamma(c+j)} \frac{z^j}{j!}$ is the confluent hypergeometric function.

The Type I beta distribution is well known in Bayesian methodology as a prior distribution on the success probability of a binomial distribution. Next, setting a = 1 and then making the transformation X = U^{1/α} (for any α > 0), the corresponding non-central Kumaraswamy (Type I) density will be

$$f(x) = b\alpha\, \exp(-\delta)\, x^{\alpha-1}\left(1-x^{\alpha}\right)^{b-1}\, {}_1F_1\!\left(b+1;\, b;\, \delta(1-x^{\alpha})\right), \quad 0 < x < 1.$$

For notational simplicity, we henceforth denote the above distribution by X ∼ NCKW(Type I)(α, δ, b). Next, we have the following theorem:

### Theorem 3.1.

Suppose X1 ∼ NCKW(Type I)(α1, δ1, b1) and X2 ∼ NCKW(Type I)(α2, δ2, b2), and that they are sub-independent. Then for 0 < s < 2, the pdf of S will be

$$f_S(s) = \begin{cases} \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\, A_1, & 0 < s < 1, \\[10pt] \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\, A_2, & 1 < s < 2, \end{cases}$$

where $\theta = b_1 b_2 \alpha_1 \alpha_2 \exp\!\left(-(\delta_1 + \delta_2)\right)$,

$$A_1 = \left[\frac{\Gamma(b_1)\,\Gamma(b_2)\,(b_1+j_1)(b_2+j_2)}{\Gamma(b_1+1)\,\Gamma(b_2+1)}\cdot\frac{\delta_1^{j_1}\,\delta_2^{j_2}}{j_1!\, j_2!}\right]\left[\frac{\Gamma\!\left((j_3+1)\alpha_1\right)\Gamma\!\left((j_4+1)\alpha_2\right)}{\Gamma\!\left((j_3+1)\alpha_1+(j_4+1)\alpha_2\right)}\, s^{\alpha_1(j_3+1)+\alpha_2(j_4+1)-1}\right],$$
and

$$\begin{aligned}
A_2 ={}& s^{-2}\,(s-1)^{\alpha_2(j_4+1)}\left(\frac{s-1}{s}\right)^{-\alpha_2(j_4+1)} \\
&\times\left[\frac{s\; {}_2F_1\!\left((j_3+1)\alpha_1,\,-(j_4+1)\alpha_2;\,(j_3+1)\alpha_1+1;\,\tfrac{1}{s}\right)}{\alpha_1(j_3+1)} + \frac{{}_2F_1\!\left((j_3+1)\alpha_1+1,\,1-(j_4+1)\alpha_2;\,(j_3+1)\alpha_1+2;\,\tfrac{1}{s}\right)}{\alpha_1(j_3+1)+1}\right] \\
&- (s-1)^{\alpha_1(j_3+1)}\,(1-s)^{-\alpha_2(j_4+1)} \\
&\times\left[\frac{s\; {}_2F_1\!\left((j_3+1)\alpha_1,\,-(j_4+1)\alpha_2;\,(j_3+1)\alpha_1+1;\,\tfrac{s-1}{s}\right)}{\alpha_1(j_3+1)} + \frac{(s-1)\; {}_2F_1\!\left((j_3+1)\alpha_1+1,\,1-(j_4+1)\alpha_2;\,(j_3+1)\alpha_1+2;\,\tfrac{s-1}{s}\right)}{\alpha_1(j_3+1)+1}\right].
\end{aligned}$$

### Proof.

Similar to that of Theorem 2.1.

### Theorem 3.2.

For −1 < d < 1, the pdf of D will be

$$f_D(d) = \begin{cases} \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\, B_1, & -1 < d < 0, \\[10pt] \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\, B_2, & 0 < d < 1, \end{cases}$$
where

$$B_1 = (d-1)^{\alpha_1(j_3-1)}\,(-d)^{\alpha_1(j_3+1)}\,(-d)^{\alpha_2(j_4+1)-1}\,(d+1)^{\alpha_1(j_3+1)}\, B_{1+\frac{1}{d}}\!\left((j_3+1)\alpha_1,\,(j_4+1)\alpha_2\right),$$

and

$$B_2 = (d-1)^{\alpha_1(j_3-1)}\,(d-1)^{\alpha_2(j_4-1)}\, d^{\alpha_1(j_3+1)-1}\left(\frac{1-d}{d}\right)^{\alpha_2(j_4+1)} B_{\frac{d-1}{d}}\!\left((j_4+1)\alpha_2,\,(j_3+1)\alpha_1\right).$$

### Proof.

Similar to that of Theorem 2.2.

### Theorem 3.3.

For 0 < p < 1, the pdf of P will be

$$f_P(p) = \theta \sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\; \frac{p^{\alpha_2(j_4+1)} - p^{\alpha_1(j_3+1)+1}}{p\left(\alpha_1(j_3+1) - \alpha_2(j_4+1) + 1\right)},$$

provided $\alpha_1\alpha_2 - 1 \ge 0$.

### Theorem 3.4.

For 0 < r < ∞, the pdf of R will be

$$f_R(r) = \begin{cases} \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\left[\frac{r^{\alpha_2(j_4+1)-1}}{\alpha_1(j_3+1)+\alpha_2(j_4+1)-1}\right], & 0 < r < 1, \\[10pt] \theta \displaystyle\sum_{j_1=0}^{\infty}\sum_{j_2=0}^{\infty}\sum_{j_3=0}^{\infty}\sum_{j_4=0}^{\infty} (-1)^{j_3+j_4} \binom{b_1-1+j_1}{j_3}\binom{b_2-1+j_2}{j_4}\left[\frac{(1/r)^{\alpha_1(j_3+1)}}{\alpha_1(j_3+1)+\alpha_2(j_4+1)-1}\right], & 1 < r < \infty. \end{cases}$$

### Remark 3.1.

One can also obtain, in a similar fashion, the distributions of the sum, difference, product and ratio from the following two other non-central Kumaraswamy distributions:

• Non-central Kumaraswamy (Type II), given by

$$f(x) = b\alpha\, \exp(-\delta)\, x^{\alpha-1}\left(1+x^{\alpha}\right)^{-(b+1)}\, {}_1F_1\!\left(b+1;\, b;\, \frac{\delta}{1+x^{\alpha}}\right), \quad 0 < x < 1.$$

• Non-central Kumaraswamy (Type III), given by

$$f(x) = 2^{b}\, b\alpha\, \exp(-\delta)\, \frac{x^{\alpha-1}\left(1-x^{\alpha}\right)^{b-1}}{\left(1+x^{\alpha}\right)^{b+1}}\, {}_1F_1\!\left(b+1;\, b;\, \frac{\delta(1-x^{\alpha})}{1+x^{\alpha}}\right), \quad 0 < x < 1.$$

## 4. Conclusion

In this article, by using the traditional method of transformation of variables, we have obtained the probability density functions of the sum, difference, product and ratio of two independent (or sub-independent) random variables, both having the regular Kumaraswamy distribution and also the non-central Kumaraswamy (Type I) distribution. Furthermore, in recent times, the construction of bivariate and multivariate Kumaraswamy distributions has received a significant amount of attention. Possible future works based on this article are as follows:

• Possible applications of independent Kumaraswamy sums, products and ratios.

• Construction of Kumaraswamy sums, products etc. via dependent set up.

• A comparison study between sums, products, ratios and differences of beta distributions and those of Kumaraswamy distributions.

Future efforts will be made to address these points and subsequently be reported elsewhere.

