Divergence Measures Estimation and Its Asymptotic Normality Theory Using Wavelets Empirical Processes I
- Amadou Diadié Ba, LERSTAD, Gaston Berger University, Saint-Louis, SENEGAL; Gane Samb Lo, LERSTAD, Gaston Berger University, Saint-Louis, SENEGAL, Associate Researcher, LSTA, Pierre and Marie Curie University, Paris, FRANCE, Associate Professor, African University of Sciences and Technology, Abuja, NIGERIA; Diam Ba, LERSTAD, Gaston Berger University, Saint-Louis, SENEGAL
1178, Evanston Drive, NW, Calgary, Canada, T3P 0J9
- https://doi.org/10.2991/jsta.2018.17.1.12
- Divergence measures estimation, Asymptotic normality, Wavelet theory, Wavelets empirical processes, Besov spaces
We deal with the asymptotic normality theory of empirical divergence measures based on wavelets in a series of three papers. In this first paper, we provide the asymptotic theory for the general class of ϕ-divergence measures, which includes the most common divergence measures: the Renyi and Tsallis families and the Kullback-Leibler measure. Instead of using Parzen nonparametric estimators of the probability density functions whose discrepancy is estimated, we use the wavelets approach and the geometry of Besov spaces. One-sided and two-sided statistical tests are derived. This paper is devoted to the foundations of the general asymptotic theory and to the exposition of the main theoretical tools concerning the ϕ-forms, while the proofs and the detailed, applied results will be given in two subsequent papers dealing with important key divergence measures and symmetrized estimators.
- Copyright © 2018, the Authors. Published by Atlantis Press.
- Open Access
- This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).
1.1. General Introduction
In this paper, we deal with divergence measures estimation, essentially using wavelets density function estimation. Let 𝒫 be a class of probability measures on ℝd, d ≥ 1. A divergence measure on 𝒫 is a function 𝒟, defined on 𝒫 × 𝒫, such that 𝒟(ℚ, ℚ) = 0 for any ℚ such that (ℚ, ℚ) is in the domain of application of 𝒟.
The function 𝒟 is not necessarily defined everywhere on 𝒫 × 𝒫. And when it is, it is not always symmetric, nor does it have to be a metric. A great number of divergence measures are based on probability density functions (pdf). So let us suppose that any ℚ ∈ 𝒫 admits a pdf fℚ with respect to a σ-finite measure ν on (ℝd, ℬ(ℝd)), which is usually the Lebesgue measure λd (with λ1 = λ) or a counting measure on ℝd.
We may present the following divergence measures.
The -divergence measure:
The family of Renyi divergence measures indexed by α ≠ 1, α > 0, known under the name of Renyi-α:
The family of Tsallis divergence measures indexed by α ≠ 1, α > 0, also known under the name of Tsallis-α:
The Kullback-Leibler divergence measure. The latter may be interpreted as a limit case of both the Renyi family and the Tsallis family by letting α → 1. As well, for α near 1, the Tsallis family may be seen as derived from 𝒟R,α(ℚ, 𝕃) through the first-order expansion of the logarithm function in the neighborhood of unity.
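Since the displayed formulas are not reproduced in this excerpt, the following sketch (ours) uses the classical Renyi-α, Tsallis-α and Kullback-Leibler forms the text refers to, and checks numerically that both families approach the Kullback-Leibler measure as α → 1, here for two Gaussian densities:

```python
import numpy as np

# Classical forms (assumed, consistent with the text):
#   I_a(f, g)  = ∫ f(x)^a g(x)^(1-a) dx
#   D_R,a(f,g) = log(I_a) / (a - 1)        (Renyi-a)
#   D_T,a(f,g) = (I_a - 1) / (a - 1)       (Tsallis-a)
#   D_KL(f,g)  = ∫ f(x) log(f(x)/g(x)) dx  (Kullback-Leibler)

x = np.linspace(-10.0, 10.0, 200_001)   # quadrature grid
dx = x[1] - x[0]

def gauss(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

f, g = gauss(x, 0.0), gauss(x, 1.0)     # N(0,1) versus N(1,1)

def I(a):
    return np.sum(f ** a * g ** (1.0 - a)) * dx

def renyi(a):
    return np.log(I(a)) / (a - 1.0)

def tsallis(a):
    return (I(a) - 1.0) / (a - 1.0)

kl = np.sum(f * np.log(f / g)) * dx     # closed form for this pair: 1/2

print(kl, renyi(1.001), tsallis(1.001))  # all close to 0.5
```

For this Gaussian pair one can check by hand that D_R,α = α/2, so letting α → 1 recovers the Kullback-Leibler value 1/2, as claimed.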
From this small sample of divergence measures, we may give the following remarks.
The -divergence measure is both everywhere defined and a metric on 𝒫2, where 𝒫 is the class of probability measures on ℝd such that
For example, for both the Renyi and the Tsallis families, we may have integrability problems and a lack of symmetry. From this quick tour, we see that we have to be cautious when speaking about divergence measures as mappings and/or metrics. In the most general case, we have to consider the divergence measure between two specific probability measures as a number or a real parameter.
Originally, divergence measures came as extensions and developments of information theory, which was first set up for discrete probability measures. In that situation, the boundedness of these discrete probability measures away from zero and below +∞ was guaranteed. That is, the following assumption holds:
Boundedness Assumption (BD). There exist two finite numbers 0 < κ1 < κ2 < +∞ such that κ1 ≤ fℚ ≤ κ2 and κ1 ≤ f𝕃 ≤ κ2. (1.6) If Assumption (1.6) holds, we do not have to worry about integrability problems, especially for the Tsallis, Renyi and Kullback-Leibler measures, in the computations arising in the estimation theories. This explains why Assumption (1.6) is systematically used in a great number of works on that topic, for example in [Singh and Poczos(2014)], [Krishnamurthy et al.(2014)] and [Hall(1987)], to cite a few. But instead of Assumption (1.6), we use the following.
Modified Boundedness Condition: There exist 0 < κ1 < κ2 < +∞ and a compact domain D, as large as possible, such that κ1 ≤ fℚ ≤ κ2 and κ1 ≤ f𝕃 ≤ κ2 on D. This implies that the modified divergence measure, denoted by 𝒟(m), is applied to the modified pdf's fℚ1D/D1 and f𝕃1D/D2, where D1 and D2 are the integrals of fℚ and f𝕃 over D, respectively. Based on this technique, which we apply in case of integrability problems, we will suppose, when appropriate, that Assumption (1.6) holds on a compact set D.
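A small sketch of this technique (the Cauchy/Gaussian pair is our own illustration): the raw Kullback-Leibler integral of a Cauchy density against a Gaussian one diverges because of the heavy Cauchy tail, while the modified version, restricted to a compact D and renormalized by the masses D1 and D2, is finite:

```python
import numpy as np

# Restrict both densities to the compact D = [-5, 5], renormalize by their
# masses on D, and apply the divergence to the modified pdf's, as in the
# Modified Boundedness Condition.
D = (-5.0, 5.0)
x = np.linspace(D[0], D[1], 100_001)
dx = x[1] - x[0]

f = 1.0 / (np.pi * (1.0 + x ** 2))                # Cauchy(0,1) pdf
g = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)  # N(0,1) pdf

D1, D2 = f.sum() * dx, g.sum() * dx               # masses of f and g on D
f_m, g_m = f / D1, g / D2                         # modified pdf's on D

# KL of the modified pdf's: finite and non-negative on the compact D,
# even though the unrestricted KL(Cauchy || Gaussian) integral diverges.
kl_mod = np.sum(f_m * np.log(f_m / g_m)) * dx
print(kl_mod)
```

On D, both modified densities are bounded away from zero and infinity, which is exactly what Assumption (1.6) requires on a compact set.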
Although we are focusing on the aforementioned divergence measures in this paper, it is worth mentioning that there exist quite a number of others. Let us cite, for example, the ones named after Ali-Silvey (or f-divergence) [Topsoe(2000)], Cauchy-Schwarz, Jeffrey (see [Evren(2012)]), Chernoff (see [Evren(2012)]) and Jensen-Shannon (see [Evren(2012)]). According to [Cichocki and Amari(2010)], there are more than a dozen different divergence measures in the literature. In a longer version of this paper (see [Ba et al.(2017)]), some important applications of them are highlighted with their references. The reader interested in such a review is referred to that paper.
In the next subsection, we describe the frame in which we place the estimation problems we deal with in this paper.
1.2. Statistical Estimation
The divergence measures may be applied to two statistical problems among others.
First, they may be used for a fitting problem, as described here. Let X1, X2, ... be a sample from X with an unknown probability distribution ℙX, and suppose we want to test the hypothesis that ℙX is equal to a known and fixed probability measure ℙ0. Theoretically, we can answer this question by estimating the divergence measure 𝒟(ℙX, ℙ0) by a plug-in estimator where, for each n ≥ 1, ℙX is replaced by an estimator of the probability law based on the sample X1, X2, ..., Xn, to be made precise later.
From there, establishing an asymptotic theory of this estimator is necessary to conclude.
Next, they may be used as a tool for comparing two distributions. We may have two samples and wonder whether they come from the same probability measure. Here also, we may face two different cases.
In the first, we have two independent samples X1, X2, ... and Y1, Y2, ..., respectively from random variables X and Y. Here the estimated divergence, where n and m are the sizes of the available samples, is the natural estimator of 𝒟(ℙX, ℙY), on which the statistical test of the hypothesis ℙX = ℙY depends.
But the data may also be paired: (X1, Y1), (X2, Y2), ..., that is, Xi and Yi are measurements on the same case i = 1, 2, ... In such a situation, testing the equality of the margins ℙX = ℙY should be based on an estimator of the joint probability law of the couple (X, Y), built on the paired observations (Xi, Yi), i = 1, 2, ..., n.
We did not encounter the approach (B2) in the literature. In the (B1) approach, almost all the papers used the same sample size, with the exception of [Poczos and Jeff(2011)] for the double-size estimation problem. In our view, the study should rely on the available data, so that imposing the same sample size may lead to a loss of information: to apply their method, one should take the minimum of the two sizes and thus lose information. We suggest coming back to the general case and studying the asymptotic theory based on samples X1, X2, ..., Xn and Y1, Y2, ..., Ym. In this paper, we will systematically use arbitrary sample sizes.
In the context of the situation (B1), there are several papers dealing with the estimation of divergence measures. As we are concerned in this paper with the weak laws of the estimators, our review returned only a few results on that problem. Instead, the literature presented many kinds of results on the almost-sure efficiency of the estimation, with rates of convergence and laws of the iterated logarithm, Lp (p = 1, 2) convergence, etc. To be precise, [Dhakher et al.(2016)] used recent techniques based on the functional empirical process to provide a series of interesting rates of convergence of the estimators in the case of the one-sided approach for the Renyi, Tsallis and Kullback-Leibler classes, to cite a few. Unfortunately, the authors did not address the problem of integrability, taking for granted that the divergence measures are finite. Although the results should be correct under the boundedness assumption (BD) we described earlier, a new formulation in that frame would be welcome.
The paper of [Krishnamurthy et al.(2015)] is exactly what we want to do, except that it concentrates on the L2-divergence measure and uses the Parzen approach. Instead, we will handle the most general case of the ϕ-divergence measure and will use wavelets probability density estimators.
In the context of the situation (B1), we may cite first the works of [Krishnamurthy et al.(2014)] and [Singh and Poczos(2014)]. They both used divergence measures based on probability density functions and concentrated on the Renyi-α, Tsallis-α and Kullback-Leibler measures. In the description of the results below, the estimated pdf's f and g usually lie in a periodic Hölder class of known smoothness s.
Specifically, [Krishnamurthy et al.(2014)] defined Renyi and Tsallis estimators by correcting the plug-in estimator and established convergence rates, as long as 𝒟R,α(f, g) ≥ c and 𝒟T,α(f, g) ≥ c for some constant c > 0. [Poczos and Jeff(2011)] used a k-nearest-neighbor approach to obtain results for |α − 1| < k (α ≠ 1). There has been recent interest in deriving convergence rates for divergence estimators ([Moon and Hero(2014)], [Krishnamurthy et al.(2014)]). The rates are typically derived in terms of the smoothness s of the densities:
The estimator of [Liu et al.(2012)] achieves the parametric rate when s > d.
Similarly, [Sricharan et al.(2012)] showed that when s > d, a k-nearest-neighbor style estimator achieves the rate n^(−2/d) (in absolute error), ignoring logarithmic factors. In a follow-up work, the authors improved this result to O(n^(−1/2)) by using a set of weak estimators, but they required s > d orders of smoothness. One can also see [Singh and Poczos(2014)] and [Kallberg and Seleznjev(2012)] for other contributions.
The majority of the aforementioned articles worked with densities in Hölder classes, whereas our work applies to densities in Besov classes.
Here, we will focus on divergence measures between probability laws that are absolutely continuous with respect to the Lebesgue measure. As well, our results apply to the approaches (A) and (B1) defined above. As a consequence, we estimate divergence measures by their plug-in counterparts, meaning that we replace the probability density functions (pdf) in the expression of the divergence measure by nonparametric estimators of the pdf's. From now on, we have on our probability space two independent sequences:
a sequence of independent and identically distributed random variables with common pdf fℙX:
a sequence of independent and identically distributed random variables with common pdf gℙY: To simplify the notation, we write f = fℙX and g = gℙY. We focus on pdf estimates provided by the wavelets approach. We will deal with the Parzen approach in a forthcoming study. So we need to explain the frame in which we are going to express our results.
We also wish to obtain, first, general laws for an arbitrary functional of the form J(f, g), where ϕ(x, y) is a measurable function on which we will impose appropriate conditions. The results on the functional J(f, g), which is also known under the name of ϕ-divergence, will lead to those on the particular cases of the Renyi, Tsallis and Kullback-Leibler measures.
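The displayed formula for J is not reproduced in this excerpt; assuming the usual shape J(f, g) = ∫ ϕ(f(x), g(x)) dx, the sketch below (ours) shows how particular choices of ϕ recover the named measures:

```python
import numpy as np

# Assumed phi-form: J(f, g) = ∫ phi(f(x), g(x)) dx. Particular choices:
#   phi(s, t) = s * log(s / t)                 -> Kullback-Leibler,
#   phi(s, t) = (s**a * t**(1-a) - s)/(a - 1)  -> Tsallis-a (using ∫ f = 1).

x = np.linspace(-8.0, 8.0, 100_001)
dx = x[1] - x[0]

def gauss(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

f, g = gauss(x, 0.0), gauss(x, 1.0)   # N(0,1) versus N(1,1)

def J(phi):
    return np.sum(phi(f, g)) * dx     # quadrature over the grid

kl = J(lambda s, t: s * np.log(s / t))
a = 2.0
tsallis = J(lambda s, t: (s ** a * t ** (1.0 - a) - s) / (a - 1.0))
print(kl, tsallis)
```

For this Gaussian pair the closed forms are KL = 1/2 and, for α = 2, Tsallis = e − 1, so the two functional evaluations can be checked directly.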
Our results will be exposed in a series of three papers. This paper is devoted to the foundations of the general asymptotic theory and the exposition of the main theoretical tools concerning the ϕ-forms. The second paper will deal with important key divergence measures and symmetrized estimators. Finally, a third paper will focus on the proofs.
1.3. Wavelets estimation of pdf’s
To begin with the wavelets theory and its statistical applications, the wavelets setting involves two functions φ and ψ in L2(ℝ), respectively called the father and mother wavelets, such that the associated dilated-translated family is an orthonormal basis of L2(ℝ). We adopt the following notation, for j ≥ 0 and k ∈ ℤ: φj,k(x) = 2^(j/2)φ(2^j x − k) and ψj,k(x) = 2^(j/2)ψ(2^j x − k). Thus, any function f in L2(ℝ) is characterized by its coordinates in this orthonormal basis, namely the inner products of f with the φ0,k and the ψj,k, for j ≥ 0, k ∈ ℤ. For an easy introduction to the wavelets theory and to its applications in statistics, see for instance [Hardle et al.(1998)], [Daubechies(1992)], [Blatter(1998)], etc. In this paper we only mention the unavoidable elements of this frame.
Based on the orthonormal basis defined above, the following kernel function is introduced: K(x, y) = Σk φ(x − k)φ(y − k). For any fixed j ≥ 1, called a resolution level, we define Kj(x, y) = 2^j K(2^j x, 2^j y) and, for a measurable function h, the projection operator Kj of h onto the subspace Vj of L2(ℝ) (spanned by the functions 2^(j/2)φ(2^j(·) − k), k ∈ ℤ) by Kj(h)(x) = ∫ Kj(x, y)h(y)dy. In the frame of this wavelets theory, for each n ≥ 1 we fix the resolution level depending on n and denoted j = jn, and we use the following estimator of the pdf f associated with X, based on the sample of size n from X as defined in (1.8): fn(x) = (1/n) Σi=1,...,n Kjn(x, Xi). As well, in a two-samples problem, we estimate the pdf g associated with Y, based on a sample of size n from Y as defined in (1.9), by gn(y) = (1/n) Σi=1,...,n Kjn(y, Yi). These estimators are known under the name of linear wavelet estimators.
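A minimal sketch of the linear wavelet estimator (ours, with the Haar father wavelet φ = 1_[0,1) standing in for the compactly supported generators the paper allows, and with the resolution chosen as in Assumption 4, 2^jn ≈ n^(1/4)):

```python
import numpy as np

# Haar linear wavelet density estimator:
#   f_n(x) = sum_k alpha_hat_{j,k} * phi_{j,k}(x),
#   alpha_hat_{j,k} = (1/n) * sum_i phi_{j,k}(X_i),
#   phi_{j,k}(x) = 2^(j/2) * phi(2^j x - k),  phi = 1_[0,1).
# For Haar this collapses to 2^j * (fraction of the sample in the dyadic
# bin of width 2^-j containing x).

rng = np.random.default_rng(0)
n = 20_000
sample = rng.uniform(0.0, 1.0, n)        # true density: 1 on [0, 1)

j = int(round(np.log2(n) / 4.0))         # Assumption 4: 2^j ~ n^(1/4)

def f_hat(x, sample, j):
    """Evaluate the Haar linear wavelet estimator at the points x."""
    k_x = np.floor(2.0 ** j * x).astype(int)       # bin index of each x
    k_i = np.floor(2.0 ** j * sample).astype(int)  # bin index of each X_i
    counts = np.bincount(k_i, minlength=2 ** j)
    # alpha_hat_{j,k} = 2^(j/2) * counts[k]/n and phi_{j,k}(x) = 2^(j/2)
    # on the bin, hence f_hat(x) = 2^j * counts[k_x] / n.
    return 2.0 ** j * counts[k_x] / len(sample)

grid = np.linspace(0.0, 0.999, 1000)
est = f_hat(grid, sample, j)
print(est.mean())   # close to 1 for the uniform density
```

Smoother generators (Daubechies, Coiflets, Symmlets) replace the indicator φ but leave the structure of the estimator unchanged.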
Before we give the main assumptions on the wavelets we are working with, we have to define the concept of weak differentiation. Denote by 𝒟(ℝ) the class of infinitely differentiable functions from ℝ to ℝ with compact support. A function f : ℝ → ℝ is weakly differentiable if and only if there exists a locally integrable (on compact sets) function g : ℝ → ℝ such that, for any ϕ ∈ 𝒟(ℝ), we have ∫ f(x)ϕ′(x)dx = −∫ g(x)ϕ(x)dx. In such a case, g is called the weak derivative of f. If the first weak derivative itself has a weak derivative, and so forth up to the (p − 1)-th derivative, we get the p-th weak derivative f[p]. Now we may expose the four assumptions we require on the wavelets.
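As a standard textbook illustration (ours, not the paper's), f(x) = |x| is weakly differentiable with weak derivative sign(x): integrating by parts on each half-line, for every ϕ ∈ 𝒟(ℝ),

```latex
\int_{\mathbb{R}} |x|\,\phi'(x)\,dx
  = \int_{0}^{\infty} x\,\phi'(x)\,dx - \int_{-\infty}^{0} x\,\phi'(x)\,dx
  = -\int_{0}^{\infty} \phi(x)\,dx + \int_{-\infty}^{0} \phi(x)\,dx
  = -\int_{\mathbb{R}} \operatorname{sign}(x)\,\phi(x)\,dx ,
```

the boundary terms vanishing because ϕ has compact support. Note that sign(x) has no weak derivative that is a locally integrable function, so |x| admits weak derivatives only up to order 1.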
Assumption 1. The wavelets φ and ψ are bounded and have compact support, and either (i) the father wavelet φ has weak derivatives up to order T in Lp(ℝ) (1 ≤ p ≤ ∞), or (ii) the mother wavelet ψ associated with φ satisfies ∫ x^m ψ(x)dx = 0 for all m = 0, ..., T.
Assumption 2. φ : ℝ → ℝ is of bounded p-variation for some 1 ≤ p < ∞ and vanishes on (B1, B2]c for some −∞ < B1 < B2 < ∞.
Wavelets generators with compact support are available in the literature. We may cite those named after Daubechies, Coiflets and Symmlets (see [Hardle et al.(1998)]). The cited generators fulfill our two main assumptions.
Assumption 3. There exists a non-negative, symmetric and continuous function Φ(t) of t ∈ ℝ with compact support 𝒦 such that |K(x, y)| ≤ Φ(x − y). The fourth assumption concerns the resolution level we choose. We fix once and for all an increasing sequence (jn)n≥1 such that
Assumption 4. lim(n→+∞) n^(−1/4) 2^(jn) = 1.
In particular, we have 2^(jn) → ∞ and 2^(jn)/n → 0 as n → ∞. These conditions allow the use of the results of [Giné and Nickl(2009)].
We also introduce the quantities an, bn and cn, defined through the uniform norms of the estimation errors, where ‖h‖∞ stands for supx∈D(h) |h(x)| and D(h) is the domain of application of h.
In the sequel we suppose that the densities f and g belong to a Besov space. We will say a word about simple conditions under which our pdf's do belong to such spaces.
Suppose that the densities f and g belong to such a Besov space, that φ satisfies Assumption 2, and that φ and ψ satisfy Assumption 1. Then Theorem 3 of [Giné and Nickl(2009)] implies that the rates of convergence an, bn and cn all converge to zero almost surely at the same rate (with 0 < t < T).
In order to establish the asymptotic normality of the divergence estimators, we need a key tool concerning the wavelets empirical process, where 𝔼X(h) = ∫ h(x)f(x)dx denotes the expectation of the measurable function h with respect to the probability distribution ℙX, and the superscript w refers to wavelets. The centering property of this process follows from Fubini's Theorem. We are now ready to give our results on the functional J introduced in Formula (1.10).
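The exact displayed definition of the process is not reproduced in this excerpt; evaluated at a fixed bounded h, an empirical process of this kind reduces to the standard centered-and-scaled form, whose Gaussian limit is the basic ingredient behind the asymptotic-normality results. A simulation sketch (ours, with an arbitrary bounded h):

```python
import numpy as np

# Standard empirical process at a fixed bounded h:
#   G_n(h) = sqrt(n) * ( (1/n) * sum_i h(X_i) - E_X[h] ),
# asymptotically N(0, Var h(X)) by the CLT. Here h = cos and X ~ N(0,1),
# for which E[cos(X)] = exp(-1/2).

rng = np.random.default_rng(1)
h = np.cos
n, reps = 2_000, 1_000

X = rng.normal(size=(reps, n))          # reps independent samples of size n
Eh = np.exp(-0.5)                       # E[cos(Z)] for Z ~ N(0,1)
Gn = np.sqrt(n) * (h(X).mean(axis=1) - Eh)

# Empirical mean close to 0, empirical std close to sqrt(Var cos(Z)),
# where Var cos(Z) = (1 + e^{-2})/2 - e^{-1}.
print(Gn.mean(), Gn.std())
```

The wavelet-specific work in the paper consists in controlling such processes uniformly over the classes of functions generated by the wavelet projections.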
2.1. Main Results
Here, we present a general asymptotic theory for a class of divergence measure estimators including the Renyi and Tsallis families and the Kullback-Leibler measure.
Actually, we gather them in the ϕ-divergence measure form. We will obtain a general frame from which we will derive a number of corollaries. The assumption (1.6) will be used in the particular cases to ensure the finiteness of the divergence measure as mentioned in the beginning of the article. However, in the general results, the assumption (1.6) is part of the general conditions.
We begin by stating a result related to the wavelets empirical process, as a general tool which we will use for establishing the asymptotic normality of divergence measures.
C-A. All the constants Ai are finite.
C-h. All the functions hi used in the theorem below are bounded and lie in a Besov space for some t such that t > 1/2.
C1-ϕ. The following integral is finite:
C2-ϕ. For any sequences of measurable functions of x ∈ D converging uniformly to zero, the associated integral terms vanish as n → ∞ and m → ∞.
To check C-h, we may use criteria based on properties of Besov spaces, derived from higher-order differentiability and from the fact that we work on compact sets, as will be seen in the second part of this paper or in the Appendix section of [Ba et al.(2017)]. These techniques show that our results apply to all the usual distributions.
The conditions in C2-ϕ may be justified by the Dominated Convergence Theorem, the Monotone Convergence Theorem or other limit theorems. We could also state conditions on the general function ϕ under which these results hold true. But here, we choose to state the final results and then to check them in particular cases, in which we may use convergence theorems.
I - Statements of the main results.
The first concerns the almost sure efficiency of the estimators.
Under the assumptions 1–3, C-A, C-h, C1-ϕ, C2-ϕ and (BD), we have the following, where an, bn and cn are as in (1.16).
The second concerns the asymptotic normality of the estimators.
Under the assumptions 1–3, C-A, C-h, C1-ϕ, C2-ϕ and (BD), we have the following, as n → +∞ and m → +∞.
3. Comments and Announcements
In a second paper, we will give versions of our main results on specific and classical divergence measures. The references below, in general, will not be repeated in the two other papers.
The second author acknowledges support from the World Bank Excellence Center (CEA-MITIC), which has continuously funded his research activities since 2014.
Cite this article
Amadou Diadié Ba, Gane Samb Lo, Diam Ba (2018). Divergence Measures Estimation and Its Asymptotic Normality Theory Using Wavelets Empirical Processes I. Journal of Statistical Theory and Applications, 17(1), 158–171. https://doi.org/10.2991/jsta.2018.17.1.12