International Journal of Computational Intelligence Systems

Volume 13, Issue 1, 2020, Pages 1619 - 1635

Information Structures in an Ordered Information System Under Granular Computing View and Their Optimal Selection Based on Uncertainty Measures

Authors
Yini Wang1, 2, Sichun Wang1, *, Hongxiang Tang1
1Guangxi Key Laboratory of Cross-Border E-commerce Intelligent Information Processing, Guangxi University of Finance and Economics, Guangxi 530003, P.R. China
2Panyapiwat Institute of Management, Bangkok 10310, Thailand
*Corresponding author. Email: sichunwang123@126.com
Corresponding Author
Sichun Wang
Received 29 August 2019, Accepted 21 September 2020, Available Online 21 October 2020.
DOI
10.2991/ijcis.d.201007.001
Keywords
Ordered information system; Information granule; Information structure; Dependence; Information distance; Inclusion degree; Lattice; Map
Abstract

Information structures (i-structures) in an ordered information system (OIS) are mathematical structures of the information granules (i-granules) granulated from the data set of this OIS. This article investigates i-structures in an OIS from the granular computing (GrC) point of view, i.e., i-structures in an OIS are seen as granular structures. Dependence and independence between i-structures in the same OIS are depicted first. Then, an information distance (i-distance) between i-structures in the same OIS is proposed, and it is proved that information entropy in an OIS can be expressed by the i-distance between i-structures. Next, properties of i-structures in an OIS are shown. Moreover, group, lattice and map characters of i-structures in an OIS are derived, and some invariant characters of i-structures under homomorphisms are obtained. Finally, the optimal selection of i-structures in an OIS based on the proposed measures is studied. These results will contribute to building a framework of GrC in an OIS.

Copyright
© 2020 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Granular computing (GrC), presented by Zadeh [1,2], is a world view and methodology for viewing the objective world. Its main idea is to solve complex problems with granular thinking: a complex problem is abstracted and divided into several relatively simple subproblems, and theories, methods, techniques and tools are developed for solving them.

An information granule (i-granule) exists in many fields, and its manifestation differs from field to field. i-granules are collections of elements that are drawn together by indistinguishability, similarity, or some function that binds them [3,4]. Granulating objects means decomposing a whole into small parts and then studying the decomposed parts; each decomposed part is an i-granule. A granular structure is a family of i-granules [5,6].

An information system (IS) based on rough set theory was presented by Pawlak [7]. Within GrC in an IS, the investigation of information structures (i-structures) is a meaningful subject. In an IS, an attribute subset determines an equivalence relation which divides the object set into disjoint equivalence classes, which may be regarded as i-granules of the IS [8,3,9]. All these i-granules constitute a vector that is viewed as the i-structure induced by this attribute subset in the IS. In light of this idea, Zhang et al. [10] investigated i-structures in fully fuzzy ISs, Liang et al. [4] discussed i-structures in ISs, and Li et al. [11,12] considered i-structures in covering ISs and fuzzy relation ISs. Besides, Liang et al. [13] researched i-granules and entropy theory, Qian et al. [14] studied i-granularity in a fuzzy binary GrC model, Li et al. [15,12] inquired into knowledge structures in a knowledge base, and Yao et al. [16] investigated i-granulation and rough set approximation. These results have helped to establish a framework of GrC in knowledge bases [17].

Given an attribute of an IS, if the domain of this attribute is a partially ordered set in accordance with an increasing preference, then this attribute is a criterion of this IS. If every attribute of an IS is a criterion, then the IS is viewed as an ordered IS (an OIS). So far, i-structures in an OIS have not been studied. This article aims to study i-structures in an OIS, described by set vectors.

This article's research route is described in Figure 1.

Figure 1

The work process of the article.

This article is organized as follows: Section 2 reviews some elementary concepts on binary relations, OISs and homomorphisms. Section 3 introduces i-structures in an OIS through set vectors. Section 4 discusses relationships between i-structures in an OIS from dependence and separation. Section 5 researches properties of i-structures in an OIS. Section 6 obtains group, lattice and map characters of i-structures. Section 7 studies the optimal selection of i-structures in an OIS based on the proposed measures. Section 8 makes a comparison. Section 9 concludes this article.

2. PRELIMINARIES

Throughout this article, U and V express finite sets, 2^U denotes the collection of all subsets of U, and |X| means the cardinality of X ∈ 2^U.

Put

U = {u1, u2, …, un},

V = {v1, v2, …, vm},

δ = U × U, Δ = {(u, u) : u ∈ U},

p(X) = |X| / |U| (X ∈ 2^U).

2.1. Binary Relations

Throughout this article, 2^(U×U) expresses the collection of all binary relations on U.

Suppose that R ∈ 2^(U×U). Then R is called

  1. reflexive, if ∀x ∈ U, xRx;

  2. symmetric, if ∀x, y ∈ U, xRy ⟹ yRx;

  3. transitive, if ∀x, y, z ∈ U, xRy and yRz ⟹ xRz.

If R satisfies conditions (1), (2) and (3), it is addressed as an equivalence relation on U.

Given R ∈ 2^(U×U). If R = δ (resp. R = Δ), then R is regarded as a universal relation (resp. an identity relation).
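As a quick illustration (a minimal sketch, not part of the paper), these three properties can be checked mechanically when a relation is stored as a set of pairs; the usual order ≤ on {1, 2, 3} is, like a dominance relation, reflexive and transitive but not symmetric:

```python
# Check reflexivity, symmetry and transitivity of a binary relation R on U.
U = {1, 2, 3}
R = {(x, y) for x in U for y in U if x <= y}  # the usual order on U

reflexive = all((x, x) in R for x in U)
symmetric = all((y, x) in R for (x, y) in R)
transitive = all((x, z) in R
                 for (x, y1) in R for (y2, z) in R if y1 == y2)

print(reflexive, symmetric, transitive)  # True False True
```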

2.2. An OIS

Definition 2.1. [7]

Suppose that U and A are finite object and attribute sets, respectively. Then (U,A) is regarded as an IS if each a ∈ A determines a function a: U → V_a, where V_a = {a(u) : u ∈ U}.

Definition 2.2. [18]

Let (U,A) be an IS. Then (U,A) is regarded as an ordered information system (OIS) if each a ∈ A is a criterion (i.e., V_a = {a(u) : u ∈ U} is a partially ordered set).

(U,A′) is addressed as a subsystem of (U,A) if A′ ⊆ A.

In this article, an OIS is denoted by S = (U,A), and the partial order on the partially ordered set V_a is denoted by ≼_a.

Given a ∈ A and u, v ∈ U, define

u ≼_a v ⟺ a(u) ≼_a a(v).

Given A′ ⊆ A and u, v ∈ U, define

u ≼_{A′} v ⟺ ∀a ∈ A′, u ≼_a v.

Clearly, ≼_{A′} = ⋂_{a∈A′} ≼_a.

Definition 2.3. [18]

Let S = (U,A) be an OIS. Suppose A′ ⊆ A. Denote

R_{A′} = {(u,v) ∈ U × U : ∀a ∈ A′, a(v) ≼_a a(u)}.

Then R_{A′} is addressed as the dominance relation (d-relation) with regard to A′.

Clearly, R_{A′} = ≽_{A′}.

Denote

[u]_{A′} = {v ∈ U : (v,u) ∈ R_{A′}}.

Then [u]_{A′} is viewed as the dominance class (d-class) of u ∈ U with regard to A′.

Clearly, [u]_{A′} = {v ∈ U : u ≼_{A′} v}.

Denote

U/R_{A′} = {[u]_{A′} : u ∈ U}.

Then U/R_{A′} is deemed the quotient set of U with regard to A′.

Proposition 2.4. [18]

Suppose that S = (U,A) is an OIS and R_{A′} is the d-relation with regard to A′ ⊆ A. Then the following properties hold:

  1. R_{A′} is reflexive and transitive, but in general not symmetric;

  2. If A1 ⊆ A2 ⊆ A, then R_{A2} ⊆ R_{A1};

  3. If A1 ⊆ A2 ⊆ A, then ∀u ∈ U, [u]_{A2} ⊆ [u]_{A1};

  4. Given A′ ⊆ A, if v ∈ [u]_{A′}, then [v]_{A′} ⊆ [u]_{A′};

  5. Given A′ ⊆ A, [u]_{A′} = [v]_{A′} if for any a ∈ A′, a(u) = a(v);

  6. Given A′ ⊆ A, U/R_{A′} constitutes a covering of U.

Let S = (U,A) be an OIS. Then ∀A′ ⊆ A, each X ∈ 2^U can be described by means of A′ through the lower and upper approximations

A′̲(X) = ⋃{Y ∈ U/R_{A′} : Y ⊆ X},
A′¯(X) = ⋃{Y ∈ U/R_{A′} : Y ∩ X ≠ ∅}.

It should be noted that U/R_{A′} provides i-granules for depicting the concept X ∈ 2^U.
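The dominance classes and the two approximations can be sketched in a few lines of Python (an illustration, not part of the paper, under the convention that larger attribute values are preferred; the data are Table 1 of Example 3.4 below, with attributes a1, a2, a3 indexed 0, 1, 2):

```python
# Table 1 (Example 3.4): six objects, three criteria a1, a2, a3.
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    """Dominance class [u]_{A'} = {v in U : a(v) >= a(u) for all a in A'}."""
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def approximations(X, attrs):
    """Lower and upper approximations of X by the covering U/R_{A'}."""
    quotient = {d_class(u, attrs) for u in table}      # U / R_{A'}
    lower = {x for Y in quotient if Y <= X for x in Y}
    upper = {x for Y in quotient if Y & X for x in Y}
    return lower, upper

print(sorted(d_class("u4", [0])))   # ['u2', 'u4', 'u5', 'u6']
```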

2.3. Homomorphisms between OISs

Communication between ISs is a significant subject in rough set theory and can be viewed as a map between ISs. The notion of homomorphisms between ISs was first presented by Grzymala-Busse [19,20].

Definition 2.5.

Assume that (U,A) and (W,P) are OISs and that h1: U → W, h2: A → P and h3: U* → W* are three maps, where

U* = ⋃_{a∈A} V_a, V_a = {a(u) : u ∈ U},
W* = ⋃_{b∈P} W_b, W_b = {b(w) : w ∈ W}.

The triple h = (h1, h2, h3) is viewed as a homomorphism from (U,A) to (W,P) if ∀a ∈ A, h3|_{V_a}: V_a → W_{h2(a)} is order-preserving, and ∀u ∈ U, ∀a ∈ A,

h3(a(u)) = h2(a)(h1(u)).

Denote

(U,A) ~_h (W,P).

3. THE CONCEPT OF i-STRUCTURES IN AN OIS

Suppose that S = (U,A) is an OIS. Given A′ ⊆ A, the d-relation R_{A′} on U is obtained, and for u ∈ U, [u]_{A′} is the d-class of u under R_{A′}. If u1 and u2 belong to the same d-class, then u1 and u2 are said to be indistinguishable under R_{A′}. Each d-class can be viewed as an i-granule whose objects are indistinguishable. All d-classes, i.e., all i-granules, constitute a vector that is viewed as the i-structure of (U,A′).

Definition 3.1.

Assume that S = (U,A) is an OIS. Then ∀A′ ⊆ A,

I(A′) = ([u1]_{A′}, [u2]_{A′}, …, [un]_{A′})

is addressed as the i-structure of the subsystem (U,A′).

If R_{A′} is an identity relation, then

I(A′) = ({u1}, {u2}, …, {un}).

Example 1.

I(∅) = (U, U, …, U) = Ũ.

Definition 3.2.

Let S = (U,A) be an OIS. Assume that I(A1), I(A2) are the i-structures of (U,A1) and (U,A2), respectively. Then I(A1) and I(A2) are viewed as the same if ∀i, [ui]_{A1} = [ui]_{A2}; denote I(A1) = I(A2).

Definition 3.3.

Let S = (U,A) be an OIS. Then

K(U,A) = {I(A′) : A′ ∈ 2^A}

is viewed as the i-structure base of (U,A).

Example 3.4. (Continued from Example 2.1 in [21])

Given an OIS S=(U,A) in Table 1, where U={u1,u2,u3,u4,u5,u6}, A={a1,a2,a3}.

a1 a2 a3
u1 1 2 1
u2 3 2 2
u3 1 1 2
u4 2 1 3
u5 3 3 2
u6 3 2 3
Table 1

An ordered information system (OIS) (U,A).

Pick A1 = {a1}, A2 = {a2}, A3 = {a3}, A4 = {a1, a2}, A5 = {a1, a3}, A6 = {a2, a3}.

Note that

[u1]_{A1} = [u3]_{A1} = {u1, u2, u3, u4, u5, u6}, [u2]_{A1} = [u5]_{A1} = [u6]_{A1} = {u2, u5, u6}, [u4]_{A1} = {u2, u4, u5, u6};

[u1]_{A2} = [u2]_{A2} = [u6]_{A2} = {u1, u2, u5, u6}, [u3]_{A2} = [u4]_{A2} = {u1, u2, u3, u4, u5, u6}, [u5]_{A2} = {u5};

[u1]_{A3} = {u1, u2, u3, u4, u5, u6}, [u2]_{A3} = [u3]_{A3} = [u5]_{A3} = {u2, u3, u4, u5, u6}, [u4]_{A3} = [u6]_{A3} = {u4, u6};

[u1]_{A4} = {u1, u2, u5, u6}, [u2]_{A4} = [u6]_{A4} = {u2, u5, u6}, [u3]_{A4} = {u1, u2, u3, u4, u5, u6}, [u4]_{A4} = {u2, u4, u5, u6}, [u5]_{A4} = {u5};

[u1]_{A5} = {u1, u2, u3, u4, u5, u6}, [u2]_{A5} = [u5]_{A5} = {u2, u5, u6}, [u3]_{A5} = {u2, u3, u4, u5, u6}, [u4]_{A5} = {u4, u6}, [u6]_{A5} = {u6};

[u1]_{A6} = {u1, u2, u5, u6}, [u2]_{A6} = {u2, u5, u6}, [u3]_{A6} = {u2, u3, u4, u5, u6}, [u4]_{A6} = {u4, u6}, [u5]_{A6} = {u5}, [u6]_{A6} = {u6};

[u1]_A = {u1, u2, u5, u6}, [u2]_A = {u2, u5, u6}, [u3]_A = {u2, u3, u4, u5, u6}, [u4]_A = {u4, u6}, [u5]_A = {u5}, [u6]_A = {u6}.

Then

I(∅) = (U, U, U, U, U, U) = Ũ,

I(A1) = (U, {u2, u5, u6}, U, {u2, u4, u5, u6}, {u2, u5, u6}, {u2, u5, u6}),

I(A2) = ({u1, u2, u5, u6}, {u1, u2, u5, u6}, U, U, {u5}, {u1, u2, u5, u6}),

I(A3) = (U, {u2, u3, u4, u5, u6}, {u2, u3, u4, u5, u6}, {u4, u6}, {u2, u3, u4, u5, u6}, {u4, u6}),

I(A4) = ({u1, u2, u5, u6}, {u2, u5, u6}, U, {u2, u4, u5, u6}, {u5}, {u2, u5, u6}),

I(A5) = (U, {u2, u5, u6}, {u2, u3, u4, u5, u6}, {u4, u6}, {u2, u5, u6}, {u6}),

I(A) = I(A6) = ({u1, u2, u5, u6}, {u2, u5, u6}, {u2, u3, u4, u5, u6}, {u4, u6}, {u5}, {u6}).

Then

K(U,A) = {I(∅), I(A1), I(A2), I(A3), I(A4), I(A5), I(A)}.
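Definitions 3.1–3.3 can be replayed on Table 1 with a short script (a sketch, not from the paper); it confirms that the eight subsets of A induce exactly the seven distinct i-structures listed above, with I(A6) = I(A):

```python
from itertools import chain, combinations

# Table 1 (Example 3.4); attributes a1, a2, a3 are indexed 0, 1, 2.
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def i_structure(attrs):
    """I(A') = ([u1]_{A'}, ..., [un]_{A'}) as a tuple of frozensets."""
    return tuple(d_class(u, attrs) for u in sorted(table))

subsets = chain.from_iterable(combinations(range(3), r) for r in range(4))
base = {i_structure(s) for s in subsets}               # the i-structure base K(U, A)

print(i_structure((1, 2)) == i_structure((0, 1, 2)))   # True: I(A6) = I(A)
print(len(base))                                       # 7
```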

4. RELATIONSHIPS BETWEEN i-STRUCTURES IN AN OIS

4.1. Dependence and Independence between i-Structures

Definition 4.1.

Let S = (U,A) be an OIS. Assume that I(A1), I(A2) are the i-structures of (U,A1) and (U,A2), respectively.

  1. I(A2) is said to depend on I(A1) if ∀i, [ui]_{A1} ⊆ [ui]_{A2}; this is written I(A1) ⪯ I(A2). I(A2) is said to depend strictly on I(A1) if I(A1) ⪯ I(A2) and I(A1) ≠ I(A2); this is written I(A1) ≺ I(A2).

  2. I(A2) is said to depend partially on I(A1) if ∃i, [ui]_{A1} ⊆ [ui]_{A2}; this is written I(A1) ⪯_p I(A2). I(A2) is said to depend partially strictly on I(A1) if I(A1) ⪯_p I(A2) and I(A1) ⋠ I(A2); this is written I(A1) ≺_p I(A2).

  3. I(A2) is said to be independent of I(A1) if ∀i, [ui]_{A1} ⊈ [ui]_{A2}; this is written I(A1) ⊥ I(A2).

Example 4.2. (Continued from Example 3.4)

It can be obtained that

I(A) = I(A6);

I(A) ≺ I(A4) ≺ I(A1) ≺ I(∅), I(A) ≺ I(A5) ≺ I(A3) ≺ I(∅), I(A) ≺ I(A2) ≺ I(∅), I(A4) ≺ I(A2), I(A4) ≺_p I(A3), I(A5) ≺ I(A1);

I(A1) ≺_p I(A2) and I(A2) ≺_p I(A1), I(A1) ≺_p I(A3) and I(A3) ≺_p I(A1), I(A2) ≺_p I(A3) and I(A3) ≺_p I(A2), I(A2) ≺_p I(A5) and I(A5) ≺_p I(A2), I(A4) ≺_p I(A5) and I(A5) ≺_p I(A4) (see Figure 2).

Figure 2

Relationships among i-structures.

4.2. Information Distance between Two i-Structures

∀X, Y ⊆ U, X ⊕ Y is addressed as the symmetric difference of X and Y, where

X ⊕ Y = (X ∪ Y) − (X ∩ Y).

Definition 4.3.

Let S = (U,A) be an OIS. Assume that I(A1), I(A2) are the i-structures of (U,A1) and (U,A2), respectively. The information distance (i-distance) between I(A1) and I(A2) is defined as

d(I(A1), I(A2)) = (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ [ui]_{A2}|.
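A direct implementation of Definition 4.3 (a sketch on the Table 1 data of Example 3.4, not part of the paper) reproduces, e.g., d(I(∅), I(A1)) = 11/36 from Example 4.11:

```python
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def i_distance(attrs1, attrs2):
    """d(I(A1), I(A2)) = (1/n^2) * sum of symmetric-difference sizes."""
    n = len(table)
    return sum(len(d_class(u, attrs1) ^ d_class(u, attrs2))
               for u in table) / n**2

print(i_distance([], [0]) == 11/36)   # True: d(I(empty), I(A1)) = 11/36
```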

Lemma 4.4. ([10])

Given X, Y ⊆ U. Then

X = Y ⟺ |X ⊕ Y| = 0.

Lemma 4.5. ([10])

Given X, Y, Z ⊆ U. Then

|X ⊕ Y| + |Y ⊕ Z| ≥ |X ⊕ Z|.

Lemma 4.6. ([10])

Given X, Y, Z ⊆ U. If X ⊆ Y ⊆ Z or Z ⊆ Y ⊆ X, then

|X ⊕ Y| + |Y ⊕ Z| = |X ⊕ Z|.

Theorem 4.7.

Let S=(U,A) be an OIS. Then (K(U,A),d) is a distance space.

Proof.

Assume A1, A2, A3 ⊆ A.

Clearly,

d(I(A1), I(A2)) ≥ 0 and d(I(A1), I(A2)) = d(I(A2), I(A1)).

By Lemma 4.4,

d(I(A1), I(A2)) = 0 ⟺ ∀i, |[ui]_{A1} ⊕ [ui]_{A2}| = 0 ⟺ ∀i, [ui]_{A1} = [ui]_{A2} ⟺ I(A1) = I(A2).

By Lemma 4.5, ∀i,

|[ui]_{A1} ⊕ [ui]_{A2}| + |[ui]_{A2} ⊕ [ui]_{A3}| ≥ |[ui]_{A1} ⊕ [ui]_{A3}|.

Then

d(I(A1), I(A2)) + d(I(A2), I(A3))
= (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ [ui]_{A2}| + (1/n²) ∑_{i=1}^{n} |[ui]_{A2} ⊕ [ui]_{A3}|
= (1/n²) ∑_{i=1}^{n} (|[ui]_{A1} ⊕ [ui]_{A2}| + |[ui]_{A2} ⊕ [ui]_{A3}|)
≥ (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ [ui]_{A3}|
= d(I(A1), I(A3)).

So (K(U,A), d) is a distance space.

Proposition 4.8.

Let S = (U,A) be an OIS. Then ∀A1, A2 ⊆ A,

  1. 0 ≤ d(I(A1), I(A2)) ≤ 1 − 1/n;

  2. If I(A1) ⪯ I(A2) and B is an attribute set such that R_B is an identity relation on U, then

    d(I(A1), I(B)) ≤ d(I(A2), I(B));

  3. If I(A1) ⪯ I(A2), then

    d(I(A1), I(∅)) ≥ d(I(A2), I(∅)).

Proof.

(1) Clearly,

∀i, 1 ≤ |[ui]_{A1} ∩ [ui]_{A2}| ≤ n and 1 ≤ |[ui]_{A1} ∪ [ui]_{A2}| ≤ n.

Then, ∀i,

0 ≤ |[ui]_{A1} ⊕ [ui]_{A2}| ≤ n − 1.

Therefore

0 ≤ d(I(A1), I(A2)) ≤ (1/n²) ∑_{i=1}^{n} (n − 1) = (n² − n)/n² = 1 − 1/n.

(2) Since I(A1) ⪯ I(A2), it can be obtained that ∀i, [ui]_{A1} ⊆ [ui]_{A2}, and hence |[ui]_{A1}| ≤ |[ui]_{A2}|.

Thus

d(I(A1), I(B)) = (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ {ui}|
= (1/n²) ∑_{i=1}^{n} (|[ui]_{A1} ∪ {ui}| − |[ui]_{A1} ∩ {ui}|)
= (1/n²) ∑_{i=1}^{n} (|[ui]_{A1}| − 1) ≤ (1/n²) ∑_{i=1}^{n} (|[ui]_{A2}| − 1)
= d(I(A2), I(B)).

(3) Since I(A1) ⪯ I(A2), ∀i, |[ui]_{A1}| ≤ |[ui]_{A2}|.

So

d(I(A1), I(∅)) = (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ U|
= (1/n²) ∑_{i=1}^{n} (|[ui]_{A1} ∪ U| − |[ui]_{A1} ∩ U|)
= (1/n²) ∑_{i=1}^{n} (n − |[ui]_{A1}|) ≥ (1/n²) ∑_{i=1}^{n} (n − |[ui]_{A2}|)
= d(I(A2), I(∅)).

Proposition 4.9.

Let S = (U,A) be an OIS. If B is an attribute set such that R_B is an identity relation on U, then for any B′ ⊆ A,

d(I(B′), I(B)) + d(I(B′), I(∅)) = 1 − 1/n.

Proof.

It can be obtained that

d(I(B′), I(B)) + d(I(B′), I(∅))
= (1/n²) ∑_{i=1}^{n} |[ui]_{B′} ⊕ {ui}| + (1/n²) ∑_{i=1}^{n} |[ui]_{B′} ⊕ U|
= (1/n²) ∑_{i=1}^{n} (|[ui]_{B′}| − 1) + (1/n²) ∑_{i=1}^{n} (n − |[ui]_{B′}|)
= (1/n²) · n(n − 1) = 1 − 1/n.

Proposition 4.10.

Let S = (U,A) be an OIS. Put A1, A2, A3 ⊆ A. If I(A1) ⪯ I(A2) ⪯ I(A3) or I(A3) ⪯ I(A2) ⪯ I(A1), then

d(I(A1), I(A2)) + d(I(A2), I(A3)) = d(I(A1), I(A3)).

Proof.

Since I(A1) ⪯ I(A2) ⪯ I(A3) or I(A3) ⪯ I(A2) ⪯ I(A1), it can be obtained that ∀i,

[ui]_{A1} ⊆ [ui]_{A2} ⊆ [ui]_{A3} or [ui]_{A3} ⊆ [ui]_{A2} ⊆ [ui]_{A1}.

By Lemma 4.6, ∀i,

|[ui]_{A1} ⊕ [ui]_{A2}| + |[ui]_{A2} ⊕ [ui]_{A3}| = |[ui]_{A1} ⊕ [ui]_{A3}|.

Then

d(I(A1), I(A2)) + d(I(A2), I(A3))
= (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ [ui]_{A2}| + (1/n²) ∑_{i=1}^{n} |[ui]_{A2} ⊕ [ui]_{A3}|
= (1/n²) ∑_{i=1}^{n} (|[ui]_{A1} ⊕ [ui]_{A2}| + |[ui]_{A2} ⊕ [ui]_{A3}|)
= (1/n²) ∑_{i=1}^{n} |[ui]_{A1} ⊕ [ui]_{A3}|
= d(I(A1), I(A3)).

Example 4.11.

(Continued from Example 3.4)

  1. By Definition 4.3, one can obtain that

    d(I(∅), I(A1)) = 11/36, d(I(∅), I(A2)) = 11/36,

    d(I(∅), I(A3)) = 11/36, d(I(∅), I(A4)) = 15/36,

    d(I(∅), I(A5)) = 16/36, d(I(∅), I(A)) = 20/36,

    d(I(A1), I(A2)) = 8/36, d(I(A1), I(A3)) = 10/36,

    d(I(A1), I(A4)) = 4/36, d(I(A1), I(A5)) = 5/36,

    d(I(A1), I(A)) = 9/36, d(I(A2), I(A3)) = 18/36,

    d(I(A2), I(A4)) = 4/36, d(I(A2), I(A5)) = 15/36,

    d(I(A2), I(A)) = 9/36, d(I(A3), I(A4)) = 14/36,

    d(I(A3), I(A5)) = 5/36, d(I(A3), I(A)) = 9/36,

    d(I(A4), I(A5)) = 9/36, d(I(A4), I(A)) = 5/36,

    d(I(A5), I(A)) = 4/36 (see Figure 3).

    Figure 3

    Information distance between i-structures.

  2. Let B be an attribute set such that R_B is an identity relation on U, i.e., I(B) = ({u1}, {u2}, {u3}, {u4}, {u5}, {u6}). Then

    d(I(A3), I(B)) = 19/36, d(I(A5), I(B)) = 14/36.

    Since I(A5) ⪯ I(A3), it can be obtained that

    d(I(A5), I(B)) = 14/36 ≤ 19/36 = d(I(A3), I(B)),
    d(I(A5), I(∅)) = 16/36 ≥ 11/36 = d(I(A3), I(∅)).

  3. Since R_B = Δ, ∀A′ ⊆ A,

    d(I(A′), I(B)) + d(I(A′), I(∅)) = 1 − 1/6 = 1 − 1/n.

  4. Since I(A) ≺ I(A4) ≺ I(A1) ≺ I(∅),

    d(I(A), I(A4)) + d(I(A4), I(A1)) = 5/36 + 4/36 = 9/36 = d(I(A), I(A1)),

    d(I(A4), I(A1)) + d(I(A1), I(∅)) = 4/36 + 11/36 = 15/36 = d(I(A4), I(∅)).

5. PROPERTIES OF i-STRUCTURES IN AN OIS

Theorem 5.1.

Let S=(U,A) be an OIS. Then A1,A2A, the following are equivalent:

  1. I(A1)=I(A2);

  2. RA1=RA2;

  3. d(I(A1), I(A2)) = 0.

Proof.

(1) (2). Obviously.

(1) (3). It can be proved by Theorem 4.7.

Theorem 5.2.

Assume that S = (U,A) is an OIS. Then ∀A1, A2 ⊆ A,

I(A1) ⪯ I(A2) ⟺ R_{A1} ⊆ R_{A2}.

Proof.

Evidently.

Proposition 5.3.

Let S = (U,A) be an OIS. If A1 ⊆ A2 ⊆ A, then I(A2) ⪯ I(A1).

Proof.

Since A1 ⊆ A2, by Proposition 2.4(2) it can be obtained that R_{A2} ⊆ R_{A1}. By Theorem 5.2, I(A2) ⪯ I(A1).

Proposition 5.4.

Let S = (U,A) be an OIS. Given A′ ⊆ A. Then I(A) ⪯ I(A′) ⪯ I(∅).

Proof.

It can be proved by Proposition 5.3.

Definition 5.5. [22]

Let S = (U,A) be an OIS and let K(U,A) be the i-structure base of (U,A). Then a map D: K(U,A) × K(U,A) → [0,1] is addressed as an inclusion degree on K(U,A) if

  1. 0 ≤ D(I(A2)/I(A1)) ≤ 1;

  2. I(A1) ⪯ I(A2) ⟹ D(I(A2)/I(A1)) = 1;

  3. I(A1) ⪯ I(A2) ⪯ I(A3) ⟹ D(I(A1)/I(A3)) ≤ D(I(A1)/I(A2)).

Definition 5.6.

Let S = (U,A) be an OIS and A1, A2 ⊆ A. Define

D(I(A2)/I(A1)) = ∑_{l=1}^{n} ( |[ul]_{A2}| / ∑_{i=1}^{n} |[ui]_{A2}| ) · χ_{[ul]_{A2}}([ul]_{A1}),

where

χ_{[ul]_{A2}}([ul]_{A1}) = 1 if [ul]_{A1} ⊆ [ul]_{A2}, and 0 if [ul]_{A1} ⊈ [ul]_{A2}.
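Definition 5.6 translates directly into code (a sketch on the Table 1 data of Example 3.4, not from the paper; the helper name `incl` is ours, with `incl(A1, A2)` computing D(I(A2)/I(A1))):

```python
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def incl(attrs1, attrs2):
    """D(I(A2)/I(A1)): weights from the A2-classes, chi tests inclusion."""
    classes2 = {u: d_class(u, attrs2) for u in table}
    total = sum(len(c) for c in classes2.values())
    return sum(len(classes2[u]) / total
               for u in table if d_class(u, attrs1) <= classes2[u])

print(round(incl([0, 1], [1]), 6))   # 1.0, since I(A4) depends on nothing less: I(A4) ⪯ I(A2)
print(round(incl([1], [0]), 6))      # 0.6
```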

Proposition 5.7.

D in Definition 5.6 is the inclusion degree.

Proof.

Obviously.

Theorem 5.8.

Let S = (U,A) be an OIS. Suppose A1, A2 ⊆ A. Then

  1. I(A1) ⪯ I(A2) ⟺ D(I(A2)/I(A1)) = 1;

  2. I(A1) ⊥ I(A2) ⟺ D(I(A2)/I(A1)) = 0;

  3. I(A1) ⪯_p I(A2) ⟺ 0 < D(I(A2)/I(A1)) ≤ 1.

Proof.

Please see “Appendix.”

Corollary 5.9.

Let S = (U,A) be an OIS. Then ∀A1, A2 ⊆ A,

I(A1) ≺ I(A2) ⟺ D(I(A2)/I(A1)) = 1 and d(I(A1), I(A2)) ≠ 0.

Proof.

It can be proved by Theorems 4.7 and 5.8.

6. CHARACTERS OF i-STRUCTURES IN AN OIS

In this section, we obtain group, lattice and map characters of i-structures in an OIS. These characters may be helpful for understanding the nature of i-structures in an OIS.

6.1. Group Characters of i-Structures

Theorem 6.1.

(K(U,A), ∧) is a commutative semigroup with the identity element I(∅), where ∧ denotes the componentwise intersection of i-structures.

Proof.

Suppose A1, A2, A3 ⊆ A. Then

I(A1) ∧ I(A2)
= ([u1]_{A1} ∩ [u1]_{A2}, [u2]_{A1} ∩ [u2]_{A2}, …, [un]_{A1} ∩ [un]_{A2})
= ([u1]_{A1∪A2}, [u2]_{A1∪A2}, …, [un]_{A1∪A2})
= I(A1 ∪ A2).

Clearly, I(A2) ∧ I(A1) = I(A2 ∪ A1).

Thus

I(A1) ∧ I(A2) = I(A2) ∧ I(A1).

Additionally, (I(A1) ∧ I(A2)) ∧ I(A3) = I(A1 ∪ A2) ∧ I(A3) = I(A1 ∪ A2 ∪ A3) and I(A1) ∧ (I(A2) ∧ I(A3)) = I(A1) ∧ I(A2 ∪ A3) = I(A1 ∪ A2 ∪ A3). Then

(I(A1) ∧ I(A2)) ∧ I(A3) = I(A1) ∧ (I(A2) ∧ I(A3)).

Thus (K(U,A), ∧) is a commutative semigroup.

Clearly, I(∅) is the identity element.

Example 6.2.

(Continued from Example 3.4) (K(U,A), ∧) is not a group.

By Theorem 6.1, (K(U,A), ∧) is a commutative semigroup with the identity element I(∅).

Additionally, ∀A′ ⊆ A,

I(A′) ∧ I(A) = I(A′ ∪ A) = I(A) ≠ I(∅).

Then I(A) has no inverse element.

Thus (K(U,A), ∧) is not a group.
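The semigroup operation is just componentwise intersection; the following sketch (not from the paper, using the Table 1 data of Example 3.4) verifies I(A1) ∧ I(A2) = I(A1 ∪ A2) and the identity role of I(∅):

```python
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def i_structure(attrs):
    return tuple(d_class(u, attrs) for u in sorted(table))

def meet(I1, I2):
    """Componentwise intersection of two i-structures."""
    return tuple(x & y for x, y in zip(I1, I2))

I_A1, I_A2, I_empty = i_structure([0]), i_structure([1]), i_structure([])
print(meet(I_A1, I_A2) == i_structure([0, 1]))  # True: I(A1) meet I(A2) = I(A1 union A2)
print(meet(I_A1, I_empty) == I_A1)              # True: I(empty) is the identity
```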

6.2. Lattice Characters of i-Structures

Theorem 6.3.

Let S = (U,A) be an OIS. Then

  1. L = (K(U,A), ∧, ∨) is a lattice with 1_L = I(∅) and 0_L = I(A);

  2. If A1, A2 ⊆ A and A(A1, A2) = {A′ ⊆ A : R_{A1} ∪ R_{A2} ⊆ R_{A′}}, then

    I(A1) ∧ I(A2) = I(A1 ∪ A2),
    I(A1) ∨ I(A2) = ∧_{A′ ∈ A(A1, A2)} I(A′) = I(⋃A(A1, A2)).

Proof.

Please see “Appendix.”

Corollary 6.4.

(K(U,A), ∨) is a commutative semigroup with the identity element I(A).

Example 6.5.

(Continued from Example 3.4) L = (K(U,A), ∧, ∨) is not a distributive lattice.

By Theorem 6.3, L = (K(U,A), ∧, ∨) is a lattice with 1_L = I(∅) and 0_L = I(A).

It can be obtained that

A(A1, A2 ∪ A3) = A(A1, A6) = {A′ ⊆ A : R_{A1} ∪ R_{A6} ⊆ R_{A′}} = {∅, A1},

A(A1, A2) = {A′ ⊆ A : R_{A1} ∪ R_{A2} ⊆ R_{A′}} = {∅},

A(A1, A3) = {A′ ⊆ A : R_{A1} ∪ R_{A3} ⊆ R_{A′}} = {∅}.

Then

⋃A(A1, A2 ∪ A3) = {a1}, (⋃A(A1, A2)) ∪ (⋃A(A1, A3)) = ∅.

So

I(⋃A(A1, A2 ∪ A3)) ≠ I((⋃A(A1, A2)) ∪ (⋃A(A1, A3))).

By Theorem 6.3,

I(A1) ∨ (I(A2) ∧ I(A3)) = I(A1) ∨ I(A2 ∪ A3) = I(⋃A(A1, A2 ∪ A3)),

(I(A1) ∨ I(A2)) ∧ (I(A1) ∨ I(A3)) = I(⋃A(A1, A2)) ∧ I(⋃A(A1, A3)) = I((⋃A(A1, A2)) ∪ (⋃A(A1, A3))).

Thus

I(A1) ∨ (I(A2) ∧ I(A3)) ≠ (I(A1) ∨ I(A2)) ∧ (I(A1) ∨ I(A3)). So (K(U,A), ∧, ∨) is not a distributive lattice.

Example 6.6.

(Continued from Example 3.4) (K(U,A), ⪯_p) is not a partially ordered set.

It can be obtained that I(A1) ⪯_p I(A2) and I(A2) ⪯_p I(A1), but I(A1) ≠ I(A2). So ⪯_p is not anti-symmetric.

Hence (K(U,A), ⪯_p) is not a partially ordered set.

6.3. Map Characters of i-Structures

Lemma 6.7.

Assume that (U,A) and (W,P) are OISs. If (U,A) ~_h (W,P) with h = (h1, h2, h3), then ∀A′ ⊆ A and ∀u, u′ ∈ U,

h1(u′) ∈ [h1(u)]_{h2(A′)} ⟺ u′ ∈ g_{A′}(u),

where g_a = h3 ∘ a, g_{A′} = {g_a : a ∈ A′} and g_{A′}(u) = {u′ ∈ U : ∀a ∈ A′, g_a(u) ≼ g_a(u′)}.

Proof.

⟹. Assume h1(u′) ∈ [h1(u)]_{h2(A′)}. Then (h1(u′), h1(u)) ∈ R_{h2(A′)}. Thus ∀a ∈ A′,

h2(a)(h1(u)) ≼ h2(a)(h1(u′)).

Since (U,A) ~_h (W,P), ∀a ∈ A′,

h2(a)(h1(u)) = h3(a(u)) = g_a(u), h2(a)(h1(u′)) = h3(a(u′)) = g_a(u′).

So ∀a ∈ A′, g_a(u) ≼ g_a(u′).

Thus u′ ∈ g_{A′}(u).

⟸. Suppose u′ ∈ g_{A′}(u). Then ∀a ∈ A′, g_a(u) ≼ g_a(u′), i.e., ∀a ∈ A′, h3(a(u)) ≼ h3(a(u′)). Since (U,A) ~_h (W,P), ∀a ∈ A′, h2(a)(h1(u)) = h3(a(u)) and h2(a)(h1(u′)) = h3(a(u′)). Then ∀a ∈ A′,

h2(a)(h1(u)) ≼ h2(a)(h1(u′)).

Thus h1(u′) ∈ [h1(u)]_{h2(A′)}.

Hence

h1(u′) ∈ [h1(u)]_{h2(A′)} ⟺ u′ ∈ g_{A′}(u).

Theorem 6.8.

Let (U,A) and (W,P) be two OISs. Assume (U,A) ~_h (W,P) with h = (h1, h2, h3). Denote wj = h1(u_{kj}) (j = 1, 2, …, m). Then ∀A′ ⊆ A,

I(h2(A′)) = (h1(g_{A′}(u_{k1})), h1(g_{A′}(u_{k2})), …, h1(g_{A′}(u_{km}))),

where g_a = h3 ∘ a, g_{A′} = {g_a : a ∈ A′} and g_{A′}(u_{kj}) = {u ∈ U : ∀a ∈ A′, g_a(u_{kj}) ≼ g_a(u)} (j = 1, 2, …, m).

Proof.

By Lemma 6.7,

h1(u) ∈ [h1(u_{kj})]_{h2(A′)} ⟺ u ∈ g_{A′}(u_{kj}) (j = 1, 2, …, m).

Then

h1(g_{A′}(u_{kj})) = [h1(u_{kj})]_{h2(A′)} (j = 1, 2, …, m).

Thus

I(h2(A′)) = ([w1]_{h2(A′)}, [w2]_{h2(A′)}, …, [wm]_{h2(A′)})
= ([h1(u_{k1})]_{h2(A′)}, [h1(u_{k2})]_{h2(A′)}, …, [h1(u_{km})]_{h2(A′)})
= (h1(g_{A′}(u_{k1})), h1(g_{A′}(u_{k2})), …, h1(g_{A′}(u_{km}))).

Proposition 6.9.

Let (U,A) and (W,P) be two OISs. Assume (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

R_{A1} ⊆ R_{A2} ⟺ R_{h2(A1)} ⊆ R_{h2(A2)}.

Proof.

Please see “Appendix.”

Corollary 6.10.

Assume that (U,A) and (W,P) are OISs. Given (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

R_{A1} = R_{A2} ⟺ R_{h2(A1)} = R_{h2(A2)}.

Proof.

It can be proved by Proposition 6.9.

Theorem 6.11.

Assume that (U,A) and (W,P) are OISs. Given (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

  1. I(A1) = I(A2) ⟺ I(h2(A1)) = I(h2(A2));

  2. I(A1) ⪯ I(A2) ⟺ I(h2(A1)) ⪯ I(h2(A2)).

Proof.

(1) It can be proved by Theorem 5.1 and Corollary 6.10.

(2) This follows from Theorem 5.2 and Proposition 6.9.

Corollary 6.12.

Let (U,A) and (W,P) be two OISs. Suppose (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

I(A1) ≺ I(A2) ⟺ I(h2(A1)) ≺ I(h2(A2)).

Proof.

It can be proved by Theorem 6.11.

Theorem 6.13.

Assume that (U,A) and (W,P) are two OISs. Suppose (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

I(A1) ⪯_p I(A2) ⟺ I(h2(A1)) ⪯_p I(h2(A2)).

Proof.

Please see “Appendix.”

Corollary 6.14.

Assume that (U,A) and (W,P) are OISs, and (U,A) ~_h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

  1. I(A1) ≺_p I(A2) ⟺ I(h2(A1)) ≺_p I(h2(A2));

  2. I(A1) ⊥ I(A2) ⟺ I(h2(A1)) ⊥ I(h2(A2)).

Proof.

(1) It can be proved by Theorems 6.11 and 6.13.

(2) This follows from Theorem 6.13.

Example 6.15.

Assume that (U,A) and (W,P) are two OISs (see Tables 2 and 3 ).

a1 a2 a3 a4 a5 a6 a7 a8 a9 a10
u1 2 {0, 1} 2 0.1 {0} 2 0.1 4 2 0.2
u2 6 {0, 2} 4 0.3 {0, 1} 4 0.2 4 4 0.2
u3 2 {1} 4 0.1 {0, 1} 4 0.2 2 4 0.1
u4 4 {2} 6 0.2 {0, 1, 2} 6 0.3 2 6 0.1
u5 6 {0, 1, 2} 4 0.3 {1, 2} 4 0.2 6 4 0.3
u6 2 {0} 4 0.1 {0, 2} 4 0.2 2 4 0.1
u7 2 {0} 4 0.1 {0, 1} 4 0.2 2 4 0.1
u8 6 {1, 2} 4 0.3 {1, 2} 4 0.2 4 4 0.2
u9 6 {0, 1, 2} 4 0.3 {0, 1} 4 0.2 6 4 0.3
u10 2 {1, 2} 2 0.1 {1} 2 0.1 4 2 0.2
u11 6 {0, 2} 4 0.3 {0, 1} 4 0.2 4 4 0.2
u12 6 {0, 1, 2} 4 0.3 {1, 2} 4 0.2 6 4 0.3
u13 6 {0, 2} 6 0.3 {0, 1, 2} 6 0.3 4 6 0.2
u14 2 {2} 4 0.1 {0, 2} 4 0.2 2 4 0.1
u15 6 {0, 1} 6 0.3 {0, 1, 2} 6 0.3 4 6 0.2
Table 2

An ordered information system (OIS) (U,A).

b1 b2 b3
w1 1 2 1
w2 3 2 2
w3 1 1 2
w4 2 1 3
w5 3 3 2
w6 3 2 3
Table 3

An ordered information system (OIS) (W,P).

A map h1: U → W is defined as follows:

u1, u10 ↦ w1; u2, u8, u11 ↦ w2; u3, u6, u7, u14 ↦ w3; u4 ↦ w4; u5, u9, u12 ↦ w5; u13, u15 ↦ w6.

A map h2: A → P is defined as follows:

a1, a4 ↦ b1; a2, a8, a10 ↦ b2; a3, a5, a6, a7, a9 ↦ b3.

Put V_{ai} = {ai(us) : 1 ≤ s ≤ 15} and W_{bj} = {bj(wt) : 1 ≤ t ≤ 6}. Then

U* = ⋃_{i=1}^{10} V_{ai}, W* = ⋃_{j=1}^{3} W_{bj}.

A map h3: U* → W* is defined as follows: ∀w ∈ V_a,

h3|_{V_a}(w) = w/2 if a ∈ {a1, a3, a6, a8, a9}; |w| if a ∈ {a2, a5}; 10w if a ∈ {a4, a7, a10}.

Obviously, ∀u ∈ U and ∀a ∈ A,

h3(a(u)) = h2(a)(h1(u)).

Therefore (U,A) ~_h (W,P) with h = (h1, h2, h3).

Pick Q1 = {b1}, Q2 = {b2}, Q3 = {b3}, Q4 = {b1, b2}, Q5 = {b1, b3}, Q6 = {b2, b3}.

It is easy to verify that

I(P) = I(Q6);

I(P) ≺ I(Q4) ≺ I(Q1) ≺ I(∅), I(P) ≺ I(Q5) ≺ I(Q3) ≺ I(∅), I(P) ≺ I(Q2) ≺ I(∅), I(Q4) ≺ I(Q2), I(Q4) ≺_p I(Q3), I(Q5) ≺ I(Q1);

I(Q1) ≺_p I(Q2) and I(Q2) ≺_p I(Q1), I(Q1) ≺_p I(Q3) and I(Q3) ≺_p I(Q1), I(Q2) ≺_p I(Q3) and I(Q3) ≺_p I(Q2), I(Q2) ≺_p I(Q5) and I(Q5) ≺_p I(Q2), I(Q4) ≺_p I(Q5) and I(Q5) ≺_p I(Q4).

Now, dependence and independence between i-structures of the image OIS (W,P) have been obtained. By Theorem 6.11 and Corollary 6.14, they can also be read off from the original OIS (U,A).

Pick P′ = {a1, a4} and P″ = {a9}. Then h2(P′) = Q1 and h2(P″) = Q3.

Note that

U/R_{P′} = {U, {u2, u5, u8, u9, u11, u12, u13, u15}, {u2, u4, u5, u8, u9, u11, u12, u13, u15}},

U/R_{P″} = {U, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u4, u13, u15}},

W/R_{Q1} = {W, {w2, w5, w6}, {w2, w4, w5, w6}},

W/R_{Q3} = {W, {w2, w3, w4, w5, w6}, {w4, w6}}.

Clearly,

I(P′) = (U, {u2, u5, u8, u9, u11, u12, u13, u15}, U, {u2, u4, u5, u8, u9, u11, u12, u13, u15}, {u2, u5, u8, u9, u11, u12, u13, u15}, U, U, {u2, u5, u8, u9, u11, u12, u13, u15}, {u2, u5, u8, u9, u11, u12, u13, u15}, U, {u2, u5, u8, u9, u11, u12, u13, u15}, {u2, u5, u8, u9, u11, u12, u13, u15}, {u2, u5, u8, u9, u11, u12, u13, u15}, U, {u2, u5, u8, u9, u11, u12, u13, u15}),

I(P″) = (U, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u4, u13, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, U, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u4, u13, u15}, {u2, u3, u4, u5, u6, u7, u8, u9, u11, u12, u13, u14, u15}, {u4, u13, u15}).

I(Q1)=(W,{w2,w5,w6},W,{w2,w4,w5,w6},{w2,w5,w6},{w2,w5,w6}),

I(Q3)=(W,{w2,w3,w4,w5,w6},{w2,w3,w4,w5,w6},{w4,w6},{w2,w3,w4,w5,w6},{w4,w6}).

Then

d(I(P′), I(P″)) = (1/15²) ∑_{i=1}^{15} |[ui]_{P′} ⊕ [ui]_{P″}| = 58/225,

d(I(h2(P′)), I(h2(P″))) = d(I(Q1), I(Q3)) = (1/6²) ∑_{j=1}^{6} |[wj]_{Q1} ⊕ [wj]_{Q3}| = 10/36.

Thus d(I(P′), I(P″)) ≠ d(I(h2(P′)), I(h2(P″))).

From this example and these discussions, we can see that a complex, massive OIS can be compressed into a relatively small-scale OIS while some of the same data structures are preserved.

7. UNCERTAINTY MEASURES OF AN OIS AND THE OPTIMAL SELECTION OF i-STRUCTURES BASED ON THEM

In this section, we select the optimal i-structure in an OIS based on uncertainty measures.

7.1. Uncertainty Measures of an OIS

Definition 7.1.

Consider that (U,A) is an OIS. Suppose B ⊆ A. Then the information granulation (i-granulation) of the subsystem (U,B) is defined as

G(B) = (1/n²) ∑_{i=1}^{n} |[ui]_B|².

In [21], R_B is viewed as a knowledge and GK(R_B) = (1/n²) ∑_{i=1}^{n} |[ui]_B|² is called the granulation of the knowledge R_B. Actually,

GK(R_B) = G(B).
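A sketch of Definition 7.1 on Table 1 (not from the paper; the helper name `granulation` is ours) reproduces the values of Example 7.18, e.g., G(A1) = 115/36 ≈ 3.194:

```python
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def granulation(attrs):
    """G(B) = (1/n^2) * sum_i |[ui]_B|^2."""
    n = len(table)
    return sum(len(d_class(u, attrs))**2 for u in table) / n**2

print(round(granulation([0]), 3))   # 3.194
print(granulation([]))              # 6.0, the maximum value n
```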

Theorem 7.2.

(Equivalence) Let (U,A) be an OIS, and let U/R_B = {[u]_B | u ∈ U} and U/R_C = {[u]_C | u ∈ U} be the classifications induced by the two dominance relations R_B and R_C, respectively. If |U/R_B| = |U/R_C| and there exists a bijective map h: U/R_B → U/R_C such that |[u]_B| = |h([u]_B)| for each u ∈ U, then G(B) = G(C).

Proof.

It can be achieved by Definition 7.1.

Proposition 7.3.

Assume that (U,A) is an OIS. Then for any B ⊆ A,

1/n ≤ G(B) ≤ n.

Moreover, if R_B is an identity relation on U, then G(B) achieves the minimum value 1/n; if R_B is a universal relation on U, then G(B) achieves the maximum value n.

Proof.

Since for each i, 1 ≤ |[ui]_B| ≤ n, we have n ≤ ∑_{i=1}^{n} |[ui]_B|² ≤ n³.

By Definition 7.1,

1/n ≤ G(B) ≤ n.

If R_B is an identity relation on U, then ∀i, |[ui]_B| = 1, so G(B) = 1/n.

If R_B is a universal relation on U, then ∀i, |[ui]_B| = n, so G(B) = n.

Theorem 7.4.

Suppose that (U,A) is an OIS. Suppose B, C ⊆ A.

  1. If I(B) ⪯ I(C), then G(B) ≤ G(C).

  2. If I(B) ≺ I(C), then G(B) < G(C).

Proof.

(1) This is obvious.

(2) By Definition 7.1,

G(B) = (1/n²) ∑_{i=1}^{n} |[ui]_B|², G(C) = (1/n²) ∑_{i=1}^{n} |[ui]_C|².

Note that I(B) ≺ I(C). Then ∀i, [ui]_B ⊆ [ui]_C and ∃j, [uj]_B ⊊ [uj]_C. Thus ∀i, |[ui]_B| ≤ |[ui]_C| and ∃j, |[uj]_B| < |[uj]_C|.

Consequently, G(B) < G(C).

This theorem shows that when the available information becomes coarser, the i-granulation increases, and when the available information becomes finer, the i-granulation decreases. That is to say, the greater the uncertainty of the available information, the greater the value of the i-granulation. Thus, the i-granulation introduced in Definition 7.1 can be used to assess the uncertainty degree of an OIS.

Proposition 7.5.

Assume that (U,A) is an OIS. If B ⊆ C ⊆ A, then G(C) ≤ G(B).

Proof.

It can be proved by Theorem 7.4 and Proposition 5.3.

Definition 7.6. [21]

Suppose that (U,A) is an OIS. Given B ⊆ A. Then the rough entropy (r-entropy) of the subsystem (U,B) is defined as

Er(B) = −∑_{i=1}^{n} (|[ui]_B| / n) log₂ (1 / |[ui]_B|).

In [21], R_B is viewed as a knowledge and Er(R_B) = −∑_{i=1}^{n} (|[ui]_B| / n) log₂ (1 / |[ui]_B|) is called the r-entropy of the knowledge R_B. Actually,

Er(R_B) = Er(B).
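A sketch of Definition 7.6 on Table 1 (not from the paper; the helper name `rough_entropy` is ours). It confirms the extreme value Er = n log₂ n for the universal relation and the monotonicity stated in Theorem 7.8:

```python
import math

table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def rough_entropy(attrs):
    """Er(B) = -sum_i (|[ui]_B|/n) * log2(1/|[ui]_B|)."""
    n = len(table)
    return -sum(len(d_class(u, attrs)) / n
                * math.log2(1 / len(d_class(u, attrs))) for u in table)

print(round(rough_entropy([]), 4))                     # 15.5098, i.e. 6*log2(6)
print(rough_entropy([0, 1, 2]) < rough_entropy([0]))   # True: finer => smaller Er
```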

Proposition 7.7.

Assume that (U,A) is an OIS. Given B ⊆ A. Then

0 ≤ Er(B) ≤ n log₂ n.

Furthermore, if R_B is an identity relation on U, then Er(B) achieves the minimum value 0; if R_B is a universal relation on U, then Er(B) achieves the maximum value n log₂ n.

Proof.

Since ∀i, 1 ≤ |[ui]_B| ≤ n, we have 0 ≤ log₂ |[ui]_B| ≤ log₂ n and 1/n ≤ |[ui]_B|/n ≤ 1.

Then ∀i,

0 ≤ (|[ui]_B|/n) log₂ |[ui]_B| ≤ log₂ n.

By Definition 7.6,

0 ≤ Er(B) ≤ n log₂ n.

If R_B is an identity relation on U, then ∀i, |[ui]_B| = 1, so Er(B) = 0.

If R_B is a universal relation on U, then ∀i, |[ui]_B| = n, so Er(B) = n log₂ n.

Theorem 7.8.

Let (U,A) be an OIS. Suppose B, C ⊆ A.

  1. If I(B) ⪯ I(C), then Er(B) ≤ Er(C).

  2. If I(B) ≺ I(C), then Er(B) < Er(C).

Proof.

(1) This is obvious.

(2) By Definition 7.6,

Er(B) = ∑_{i=1}^{n} (|[ui]_B|/n) log₂ |[ui]_B|, Er(C) = ∑_{i=1}^{n} (|[ui]_C|/n) log₂ |[ui]_C|.

Note that I(B) ≺ I(C). Then ∀i, [ui]_B ⊆ [ui]_C and ∃j, [uj]_B ⊊ [uj]_C. Thus ∀i, |[ui]_B| ≤ |[ui]_C| and ∃j, |[uj]_B| < |[uj]_C|.

Since x log₂ x is increasing for x ≥ 1, ∀i,

|[ui]_B| log₂ |[ui]_B| ≤ |[ui]_C| log₂ |[ui]_C|,

and ∃j,

|[uj]_B| log₂ |[uj]_B| < |[uj]_C| log₂ |[uj]_C|.

Therefore Er(B) < Er(C).

This theorem shows that the greater the uncertainty of the available information, the greater the r-entropy. Therefore, the r-entropy proposed in Definition 7.6 can be used to evaluate the uncertainty degree of an OIS.

Proposition 7.9.

Given that (U,A) is an OIS. If B ⊆ C ⊆ A, then Er(C) ≤ Er(B).

Proof.

This follows from Theorem 7.8.

Definition 7.10.

Let (U,A) be an OIS. Given B ⊆ A. Then the information entropy (i-entropy) of (U,B) is defined by

E(B) = ∑_{i=1}^{n} (1/n) (1 − |[ui]_B|/n).

In [21], R_B is viewed as a knowledge and E(R_B) = ∑_{i=1}^{n} (1/n)(1 − |[ui]_B|/n) is called the i-entropy of the knowledge R_B. Actually,

E(R_B) = E(B).

Theorem 7.11.

Let (U,A) be an OIS. Then for any B ⊆ A,

E(B) = d(I(B), I(∅)).

Proof.

By Definition 7.10,

E(B) = ∑_{i=1}^{n} (1/n) (1 − |[ui]_B|/n).

On the other hand,

d(I(B), I(∅)) = (1/n²) ∑_{i=1}^{n} |[ui]_B ⊕ U|
= (1/n²) ∑_{i=1}^{n} (|[ui]_B ∪ U| − |[ui]_B ∩ U|)
= (1/n²) ∑_{i=1}^{n} (n − |[ui]_B|)
= ∑_{i=1}^{n} (1/n) (1 − |[ui]_B|/n).

Therefore

E(B) = d(I(B), I(∅)).
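Theorem 7.11 can be checked numerically on Table 1 (a sketch, not from the paper): the i-entropy of each subsystem coincides with its i-distance to I(∅):

```python
table = {"u1": (1, 2, 1), "u2": (3, 2, 2), "u3": (1, 1, 2),
         "u4": (2, 1, 3), "u5": (3, 3, 2), "u6": (3, 2, 3)}

def d_class(u, attrs):
    return frozenset(v for v, row in table.items()
                     if all(row[a] >= table[u][a] for a in attrs))

def i_entropy(attrs):
    """E(B) = sum_i (1/n) * (1 - |[ui]_B|/n)."""
    n = len(table)
    return sum((1 - len(d_class(u, attrs)) / n) / n for u in table)

def i_distance(attrs1, attrs2):
    n = len(table)
    return sum(len(d_class(u, attrs1) ^ d_class(u, attrs2))
               for u in table) / n**2

# E(B) = d(I(B), I(empty)) for every choice of B tried here.
print(all(abs(i_entropy(B) - i_distance(B, [])) < 1e-12
          for B in ([0], [1], [2], [0, 1], [0, 2], [1, 2], [0, 1, 2])))   # True
```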

Example 7.12.

(Continued from Example 3.4)

E(A1) = E(A2) = 11/36,
E(A3) = d(I(A3), I(∅)) = 11/36,
E(A4) = d(I(A4), I(∅)) = 15/36,
E(A5) = d(I(A5), I(∅)) = 16/36,
E(A6) = E(A) = 20/36.

Proposition 7.13.

Assume that (U,A) is an OIS. Suppose B ⊆ A. Then

0 ≤ E(B) ≤ 1 − 1/n.

Proof.

The proof is similar to Proposition 7.7.

Theorem 7.14.

Let (U,A) be an OIS. Given B, C ⊆ A.

  1. If I(B) ⪯ I(C), then E(B) ≥ E(C).

  2. If I(B) ≺ I(C), then E(B) > E(C).

Proof.

(1) This is obvious.

(2) By Definition 7.10,

E(B) = ∑_{i=1}^{n} (1/n)(1 − |[ui]_B|/n), E(C) = ∑_{i=1}^{n} (1/n)(1 − |[ui]_C|/n).

Note that I(B) ≺ I(C). Then ∀i, [ui]_B ⊆ [ui]_C and ∃j, [uj]_B ⊊ [uj]_C. Thus ∀i, |[ui]_B| ≤ |[ui]_C| and ∃j, |[uj]_B| < |[uj]_C|.

Hence ∀i, 1 − |[ui]_B|/n ≥ 1 − |[ui]_C|/n, and ∃j, 1 − |[uj]_B|/n > 1 − |[uj]_C|/n. It follows that E(B) > E(C).

This theorem shows that if the structure of an OIS becomes finer, the i-entropy increases, and if it becomes coarser, the i-entropy decreases.

Proposition 7.15.

Consider that (U,A) is an OIS. If B ⊆ C ⊆ A, then E(C) ≥ E(B).

Proof.

It can be proved by Theorem 7.14.

7.2. Effectiveness Analysis

To evaluate the performance of the presented measures for the uncertainty of an OIS, an effectiveness analysis is performed by using the standard deviation coefficient.

Assume that D = {d1, d2, …, dn} is a dataset. Its arithmetic average is d̄ = (1/n) ∑_{i=1}^{n} di, its standard deviation is σ(D) = √((1/n) ∑_{i=1}^{n} (di − d̄)²), and its standard deviation coefficient (CV for short) is

CV(D) = σ(D) / d̄.
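The standard deviation coefficient is straightforward to compute (a minimal sketch, not tied to the UCI experiments below; the helper name `cv` is ours):

```python
import math

def cv(values):
    """Standard deviation coefficient CV(D) = sigma(D) / mean(D)."""
    mean = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return sigma / mean

print(round(cv([1.0, 2.0, 3.0]), 3))   # 0.408
print(cv([5.0, 5.0, 5.0]))             # 0.0
```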

We select four datasets (i.e., Energy efficiency, Airfoil Self-Noise, QSAR fish toxicity and HCV for Egyptian patients) from UCI for effectiveness analysis.

The Energy efficiency dataset can be expressed as an OIS (U,A) with |U| = 768 and |A| = 8. Denote Ai = {a1, …, ai} (i = 1, …, 8). Then for each i, R_{Ai} is the dominance relation induced by Ai in Energy efficiency. Three measure sets on Energy efficiency are defined as follows:

DG(En)={G(A1),,G(A8)},
DE(En)={E(A1),,E(A8)},
DEr(En)={Er(A1),,Er(A8)}.

The Airfoil Self-Noise dataset can be expressed as an OIS (V,B) with |V| = 1503 and |B| = 6. Denote Bi = {b1, …, bi} (i = 1, …, 6). Then for each i, R_{Bi} is the dominance relation induced by Bi in Airfoil Self-Noise. Three measure sets on Airfoil Self-Noise are defined as follows:

DG(Ai)={G(B1),,G(B6)},
DE(Ai)={E(B1),,E(B6)},
DEr(Ai)={Er(B1),,Er(B6)}.

The QSAR fish toxicity dataset can be expressed as an OIS (W,C) with |W| = 908 and |C| = 7. Denote Ci = {c1, …, ci} (i = 1, …, 7). Then for each i, R_{Ci} is the dominance relation induced by Ci in QSAR fish toxicity. Three measure sets on QSAR fish toxicity are defined as follows:

DG(QS)={G(C1),,G(C7)},
DE(QS)={E(C1),,E(C7)},
DEr(QS)={Er(C1),,Er(C7)}.

The HCV for Egyptian patients dataset can be expressed as an OIS (X,D) with |X| = 1385 and |D| = 29. Denote Di = {d1, …, di} (i = 1, …, 29). Then for each i, R_{Di} is the dominance relation induced by Di in HCV for Egyptian patients. Three measure sets on HCV for Egyptian patients are defined as follows:

DG(Ht)={G(D1),,G(D29)},
DE(Ht)={E(D1),,E(D29)},
DEr(Ht)={Er(D1),,Er(D29)}.

The CV-values of the three measure sets are then compared. The results are shown in Figure 4.

Figure 4

CV-values of three measure sets.

The dispersion degree of E is the smallest when Energy efficiency is selected as the test set. Similarly, the dispersion degree of E is the smallest when the other datasets (i.e., Airfoil Self-Noise, QSAR fish toxicity and HCV for Egyptian patients) are selected as test sets.

Thus, the i-entropy E has the best performance for measuring the uncertainty of an OIS.
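The comparison itself amounts to picking the measure whose value set has the smallest CV. A sketch with hypothetical measure values (illustrative only, not the data behind Figure 4):

```python
import math

def cv(values):
    """Standard deviation coefficient of a list of measure values."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / n) / mean

def least_dispersed(measure_sets):
    """Name of the measure whose value set has the smallest CV."""
    return min(measure_sets, key=lambda name: cv(measure_sets[name]))

# Hypothetical measure values on one dataset
sets = {
    "G":  [1.0, 3.0, 5.0],
    "Er": [1.0, 2.0, 3.0],
    "E":  [0.30, 0.31, 0.30],
}
print(least_dispersed(sets))  # E
```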

7.3. The Optimal Selection of i-Structures in an OIS Based on i-Granulation

In this subsection, we select the optimal i-structure based on i-granulation.

Definition 7.16.

Let (U,A) be an OIS.

  1. If there exists B ⊆ A such that G(B) = max{G(B′) : B′ ⊆ A}, then I(B) is called the maximum i-structure in (U,A) based on i-granulation;

  2. If there exists B ⊆ A such that G(B) = min{G(B′) : B′ ⊆ A}, then I(B) is called the minimum i-structure in (U,A) based on i-granulation.

The maximum i-structure and the minimum i-structure in (U,A) are collectively called the optimal i-structure in (U,A) based on i-granulation.
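Definition 7.16 suggests a direct, exhaustive selection procedure. A sketch in Python, with the G-values supplied as a lookup table (the numbers reuse Example 7.18; enumeration over all subsets is exponential in |A|, so this is only practical for small attribute sets). The same routine applies verbatim to Er and E in the later subsections:

```python
from itertools import combinations

def optimal_subsets(attrs, measure):
    """Return (argmax, argmin) of `measure` over all nonempty attribute
    subsets, following Definition 7.16: I(B) with maximal measure value is
    the maximum i-structure, I(B) with minimal value is the minimum one.
    `measure` maps a frozenset of attributes to a number."""
    subsets = [frozenset(c) for r in range(1, len(attrs) + 1)
               for c in combinations(attrs, r)]
    return (max(subsets, key=measure), min(subsets, key=measure))

# G-values keyed by attribute subset (numbers from Example 7.18)
g = {
    frozenset({"a1"}): 3.194, frozenset({"a2"}): 3.361,
    frozenset({"a3"}): 3.306, frozenset({"a1", "a2"}): 2.417,
    frozenset({"a1", "a3"}): 2.333, frozenset({"a2", "a3"}): 1.556,
    frozenset({"a1", "a2", "a3"}): 1.556,
}
hi, lo = optimal_subsets(["a1", "a2", "a3"], g.get)
print(sorted(hi), sorted(lo))  # ['a2'] ['a2', 'a3']
```

Note that the minimum is attained with ties (A6 and A); Python's `min` simply returns the first subset attaining the minimal value.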

Theorem 7.17.

Let (U,A) be an OIS. Then I(A) is the minimum i-structure in (U,A) based on i-granulation.

Proof.

This follows from Proposition 7.5.

Example 7.18.

(Continued from Example 3.4) Given an OIS (U,A) in Table 1.

Pick A1={a1}, A2={a2}, A3={a3}, A4={a1,a2}, A5={a1,a3}, A6={a2,a3}, A={a1,a2,a3}.

G(A1)=3.194, G(A2)=3.361,
G(A3)=3.306, G(A4)=2.417,
G(A5)=2.333, G(A6)=G(A)=1.556.

Then G(A6)=G(A)<G(A5)<G(A4)<G(A1)<G(A3)<G(A2).

Thus, I(A2) is the maximum i-structure in (U,A) based on i-granulation; I(A6) and I(A) are the minimum i-structures in (U,A) based on i-granulation.

7.4. The Optimal Selection of i-Structures in an OIS Based on r-Entropy

In this subsection, we select the optimal i-structure in (U,A) based on r-entropy.

Definition 7.19.

Let (U,A) be an OIS.

  1. If there exists B ⊆ A such that Er(B) = max{Er(B′) : B′ ⊆ A}, then I(B) is called the maximum i-structure in (U,A) based on r-entropy;

  2. If there exists B ⊆ A such that Er(B) = min{Er(B′) : B′ ⊆ A}, then I(B) is called the minimum i-structure in (U,A) based on r-entropy.

The maximum i-structure and the minimum i-structure in (U,A) are collectively called the optimal i-structure in (U,A) based on r-entropy.

Example 7.20.

(Continued from Example 3.4) Given an OIS (U,A) in Table 1.

Pick A1={a1}, A2={a2}, A3={a3}, A4={a1,a2}, A5={a1,a3}, A6={a2,a3}, A={a1,a2,a3}.

Er(A1)=1.988, Er(A2)=1.862,
Er(A3)=1.925, Er(A4)=1.626,
Er(A5)=1.513, Er(A6)=Er(A)=1.151.

We have Er(A6)=Er(A)<Er(A5)<Er(A4)<Er(A2)<Er(A3)<Er(A1).

Thus, I(A1) is the maximum i-structure in (U,A) based on r-entropy; I(A6) and I(A) are the minimum i-structures in (U,A) based on r-entropy.

7.5. The Optimal Selection of i-Structures in an OIS Based on i-Entropy

In this subsection, we select the optimal i-structure in (U,A) based on i-entropy.

Definition 7.21.

Let (U,A) be an OIS.

  1. If there exists B ⊆ A such that E(B) = max{E(B′) : B′ ⊆ A}, then I(B) is called the maximum i-structure in (U,A) based on i-entropy;

  2. If there exists B ⊆ A such that E(B) = min{E(B′) : B′ ⊆ A}, then I(B) is called the minimum i-structure in (U,A) based on i-entropy.

The maximum i-structure and the minimum i-structure in (U,A) are collectively called the optimal i-structure in (U,A) based on i-entropy.

Example 7.22.

(Continued from Example 3.4) Given an OIS (U,A) in Table 1.

Pick A1={a1}, A2={a2}, A3={a3}, A4={a1,a2}, A5={a1,a3}, A6={a2,a3}, A={a1,a2,a3}.

E(A1)=0.306, E(A2)=0.305,
E(A3)=0.306, E(A4)=0.417,
E(A5)=0.445, E(A6)=E(A)=0.556.

We have E(A2)<E(A1)=E(A3)<E(A4)<E(A5)<E(A6)=E(A).

Thus, I(A6) and I(A) are the maximum i-structures in (U,A) based on i-entropy; I(A2) is the minimum i-structure in (U,A) based on i-entropy.

8. COMPARISONS

In this section, we compare this article with the literature [11,23] to show its innovations more clearly.

  1. All three articles are based on GrC. Thus, their research ideas are the same and the obtained results are similar.

  2. The research path of the three articles is the same: granular structures or i-structures in the three ISs are first introduced; then dependence, independence and difference between i-structures are discussed; finally, mathematical characteristics of i-structures are obtained.

  3. The differences among the three articles are as follows.

    1. The studied ISs are different: This article considers i-structures in an OIS, literature [11] studies i-structures in a covering IS, and literature [23] investigates i-structures in an incomplete interval-valued IS.

    2. The introduced relations are different: This article introduces the dominance relation on the object set in an OIS by defining the order between two information values, literature [11] proposes the similarity relation on the universe in a covering IS by means of the neighborhood of each point, and literature [23] presents the tolerance relation on the object set in an incomplete interval-valued IS by defining the similarity degree between two information values.

    3. The constructed i-granules are different: This article constructs an i-granule by means of the dominance class on each object in an OIS, literature [11] constructs an i-granule by means of the similarity class on each point in a covering IS and literature [23] constructs an i-granule by means of the tolerance class on each object in an incomplete interval-valued IS.

    4. This article considers homomorphisms between OISs and then obtains map characters of i-structures. Literature [11] also considers homomorphisms between covering ISs and obtains map characters of i-structures, but these two kinds of homomorphisms are totally different: one is based on an IS and the other is based on coverings of the universe. Moreover, literature [23] does not obtain map characters of i-structures because it does not consider homomorphisms between incomplete interval-valued ISs.

    5. This article studies the optimal selection of i-structures in an OIS based on uncertainty measures, but literatures [11,23] do not study the optimal selection of i-structures.

9. CONCLUSIONS

In this article, i-structures in an OIS have been introduced. Dependence and independence between i-structures have been depicted. Properties of i-structures have been investigated by utilizing the inclusion degree. Group, lattice and map characters of i-structures in an OIS have been obtained. Invariant characters of i-structures in an OIS under homomorphisms have been shown (see Table 4). The optimal i-structures in an OIS based on the proposed measures have been selected. These results will contribute to establishing a framework of GrC in an OIS. This vector-based framework can be employed to describe the different dimensions of information in an OIS and may contribute to knowledge discovery in an OIS. In the future, we will provide some applications to deal with knowledge discovery in an OIS.

Characterizations | Invariant
Equality between i-structures in an OIS (=) | √
Dependence between i-structures in an OIS | √
Strict dependence between i-structures in an OIS | √
Partial dependence between i-structures in an OIS | √
Strictly partial dependence between i-structures in an OIS | √
Independence between i-structures in an OIS | √
Information distance between i-structures in an OIS (d) | ×
Table 4

Invariant characters of i-structures in an ordered information system (OIS) under homomorphisms.

CONFLICTS OF INTEREST

The authors declare that they have no conflict of interest.

AUTHORS' CONTRIBUTIONS

Y.N. Wang designed the overall structure of this paper and improved the language, S.C. Wang wrote the paper, and H.X. Tang collected the data.

ACKNOWLEDGMENTS

The authors would like to thank the editors and the anonymous reviewers for their valuable comments and suggestions, which have helped immensely in improving the quality of the paper. This work was supported by Natural Science Foundation of Guangxi (2016GXNSFAA380282, 2018GXNSFAA294134), Guangxi Higher Education Institutions of China (Document No. [2019] 52), National Social Science Fund's Major Research Special Project (18VHQ013), China-ASEAN Institute for Innovation Governance and Intellectual Property Research (2019ZCY04), and Collaborative Innovation Center for Integration of Terrestrial and Marine Economies (2019YB22).

APPENDIX

Theorem 5.8.

Let S=(U,A) be an OIS. Suppose A1, A2 ⊆ A. Then

  1. I(A1) ⪯ I(A2) ⟺ D(I(A2)/I(A1)) = 1.

  2. I(A1) ⊥ I(A2) ⟺ D(I(A2)/I(A1)) = 0.

  3. I(A1) partially depends on I(A2) ⟺ 0 < D(I(A2)/I(A1)) ≤ 1.

Proof.

(1) Obviously, "⇒" holds.

"⇐". Put

|[u_l]_{A2}| = q_l, ∑_{l=1}^{n} |[u_l]_{A2}| = q.

Then q = ∑_{l=1}^{n} q_l. Due to D(I(A2)/I(A1)) = 1, it can be obtained that

∑_{l=1}^{n} q_l χ_{[u_l]_{A2}}([u_l]_{A1}) = q = ∑_{l=1}^{n} q_l.

Then

∑_{l=1}^{n} q_l (1 − χ_{[u_l]_{A2}}([u_l]_{A1})) = 0.

Thus ∀l,

1 − χ_{[u_l]_{A2}}([u_l]_{A1}) = 0.

It follows that ∀l, [u_l]_{A1} ⊆ [u_l]_{A2}.

Hence I(A1) ⪯ I(A2).

(2) "⇒". On account of I(A1) ⊥ I(A2), ∀l, [u_l]_{A1} ⊈ [u_l]_{A2}; then ∀l,

χ_{[u_l]_{A2}}([u_l]_{A1}) = 0.

Thus D(I(A2)/I(A1)) = 0.

"⇐". Due to D(I(A2)/I(A1)) = 0, ∀l,

χ_{[u_l]_{A2}}([u_l]_{A1}) = 0.

Then ∀l, [u_l]_{A1} ⊈ [u_l]_{A2}. Therefore I(A1) ⊥ I(A2).

(3) It can be obtained from (2).
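The inclusion degree used in this proof can be computed directly from granule lists. A minimal sketch with hypothetical granules, where D(I(A2)/I(A1)) = 1 corresponds to I(A1) ⪯ I(A2) as in item (1):

```python
def inclusion_degree(I1, I2):
    """Inclusion degree D(I(A2)/I(A1)) as used in the proof of Theorem 5.8:
    D = (1/q) * sum_l q_l * chi, where q_l = |[u_l]_A2|, q = sum_l q_l, and
    chi is 1 iff [u_l]_A1 is a subset of [u_l]_A2 (0 otherwise).
    I1 and I2 are lists of granules (sets), indexed by the same objects."""
    q = sum(len(c2) for c2 in I2)
    hit = sum(len(c2) for c1, c2 in zip(I1, I2) if c1 <= c2)
    return hit / q

# Hypothetical granules over U = {1, 2, 3}
I_A1 = [{1}, {1, 2}, {3}]
I_A2 = [{1, 2}, {1, 2}, {2, 3}]
print(inclusion_degree(I_A1, I_A2))  # every granule of A1 is included -> 1.0
```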

Theorem 6.3.

Let S=(U,A) be an OIS. Then

  1. L = (K(U,A), ⪯) is a lattice with 1_L = I(∅) and 0_L = I(A);

  2. If A1, A2 ⊆ A and A(A1,A2) = {A′ : A′ ⊆ A, R_{A1} ∪ R_{A2} ⊆ R_{A′}}, then

    I(A1) ∧ I(A2) = I(A1) ∩ I(A2) = I(A1 ∪ A2),
    I(A1) ∨ I(A2) = ∧_{A′∈A(A1,A2)} I(A′) = I(∪A(A1,A2)).

Proof.

Clearly, ∀P ⊆ A, I(P) ⪯ I(P).

Assume that I(A1) ⪯ I(A2) and I(A2) ⪯ I(A1). By Theorem 5.2, R_{A1} ⊆ R_{A2} and R_{A2} ⊆ R_{A1}. Then R_{A1} = R_{A2}. Thus I(A1) = I(A2).

Assume that I(A1) ⪯ I(A2) and I(A2) ⪯ I(L). By Theorem 5.2, R_{A1} ⊆ R_{A2} and R_{A2} ⊆ R_L. Then R_{A1} ⊆ R_L. By Theorem 5.2, I(A1) ⪯ I(L).

Hence (K(U,A), ⪯) is a partially ordered set.

By the proof of Theorem 5.1, it can be obtained that

I(A1) ∩ I(A2) = I(A1 ∪ A2),

∩_{P∈A(A1,A2)} I(P) = I(∪A(A1,A2)).

On account of A1, A2 ⊆ A1 ∪ A2, by Proposition 5.3, I(A1 ∪ A2) ⪯ I(A1) and I(A1 ∪ A2) ⪯ I(A2). Then I(A1 ∪ A2) is a lower bound of {I(A1), I(A2)}.

Assume that I(P) is a lower bound of {I(A1), I(A2)} with P ⊆ A. Then I(P) ⪯ I(A1) and I(P) ⪯ I(A2). By Theorem 5.2, R_P ⊆ R_{A1} and R_P ⊆ R_{A2}. Then R_P ⊆ R_{A1} ∩ R_{A2} = R_{A1∪A2}. By Theorem 5.2, I(P) ⪯ I(A1 ∪ A2).

Thus

I(A1) ∧ I(A2) = I(A1 ∪ A2).

∀A′ ∈ A(A1,A2), since R_{A1} ∪ R_{A2} ⊆ R_{A′}, it can be obtained that

R_{A1}, R_{A2} ⊆ R_{A′}.

Then

R_{A1}, R_{A2} ⊆ ∩_{A′∈A(A1,A2)} R_{A′} = R_{∪A(A1,A2)}.

By Theorem 5.2, I(A1), I(A2) ⪯ I(∪A(A1,A2)). Then I(∪A(A1,A2)) is an upper bound of {I(A1), I(A2)}.

Assume that I(S) is an upper bound of {I(A1), I(A2)} with S ⊆ A. Then I(A1) ⪯ I(S) and I(A2) ⪯ I(S). By Theorem 5.2, R_{A1} ⊆ R_S and R_{A2} ⊆ R_S. Then R_{A1} ∪ R_{A2} ⊆ R_S, so S ∈ A(A1,A2). So S ⊆ ∪A(A1,A2). By Proposition 5.3, I(∪A(A1,A2)) ⪯ I(S).

Hence, I(A1) ∨ I(A2) = I(∪A(A1,A2)).

Thus, L is a lattice.

Clearly, 1_L = I(∅) and 0_L = I(A).
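The meet I(A1) ∧ I(A2) = I(A1 ∪ A2) can be checked numerically by intersecting dominance classes componentwise. A sketch assuming the hypothetical encoding (u, x) ∈ R_B iff a(u) ≤ a(x) for all a ∈ B, on invented data:

```python
def dominance_classes(table, attrs):
    """I(B): the list of dominance classes [x]_B, one per object (sorted
    order), assuming (u, x) in R_B iff a(u) <= a(x) for all a in B
    (a hypothetical encoding of the dominance relation)."""
    objs = sorted(table)
    return [{u for u in objs
             if all(table[u][a] <= table[x][a] for a in attrs)}
            for x in objs]

# Hypothetical toy OIS
tbl = {
    "u1": {"a1": 1, "a2": 3},
    "u2": {"a1": 2, "a2": 2},
    "u3": {"a1": 3, "a2": 1},
}

# Meet by componentwise intersection of granules ...
meet = [c1 & c2 for c1, c2 in zip(dominance_classes(tbl, ["a1"]),
                                  dominance_classes(tbl, ["a2"]))]
# ... coincides with the i-structure of the attribute union
print(meet == dominance_classes(tbl, ["a1", "a2"]))  # True
```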

Proposition 6.9.

Let (U,A) and (W,P) be two OISs. Assume (U,A) ~h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

R_{A1} ⊆ R_{A2} ⟺ R_{h2(A1)} ⊆ R_{h2(A2)}.

Proof.

Suppose R_{A1} ⊆ R_{A2}. ∀(w,y) ∈ R_{h2(A1)}, it can be obtained that

∀a ∈ A1, h2(a)(w) ⪯_{h2(a)} h2(a)(y).

Let y = h1(x) and w = h1(u).

Then ∀a ∈ A1, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x)).

Additionally, (U,A) ~h (W,P). Then ∀a ∈ A,

h2(a)(h1(x)) = h3(a(x)), h2(a)(h1(u)) = h3(a(u)).

So ∀a ∈ A1, h3(a(u)) ⪯_{h2(a)} h3(a(x)).

Because h3|_{U_a} is order-preserving, ∀a ∈ A1, a(u) ⪯_a a(x). This means (u,x) ∈ R_{A1}. By hypothesis, (u,x) ∈ R_{A2}, i.e., ∀a ∈ A2, a(u) ⪯_a a(x). Because h3|_{U_a} is order-preserving, it can be obtained that ∀a ∈ A2,

h3(a(u)) ⪯_{h2(a)} h3(a(x)).

Then ∀a ∈ A2,

h2(a)(w) = h2(a)(h1(u)) = h3(a(u)) ⪯_{h2(a)} h3(a(x)) = h2(a)(h1(x)) = h2(a)(y).

This implies (w,y) ∈ R_{h2(A2)}.

Thus

R_{h2(A1)} ⊆ R_{h2(A2)}.

Conversely, suppose R_{h2(A1)} ⊆ R_{h2(A2)}. ∀(u,x) ∈ R_{A1}, it can be obtained that

∀a ∈ A1, a(u) ⪯_a a(x).

Because h3|_{U_a} is order-preserving, ∀a ∈ A1, h3(a(u)) ⪯_{h2(a)} h3(a(x)).

Additionally, (U,A) ~h (W,P). Then ∀a ∈ A,

h2(a)(h1(x)) = h3(a(x)), h3(a(u)) = h2(a)(h1(u)).

Thus ∀a ∈ A1, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x)). This implies (h1(u), h1(x)) ∈ R_{h2(A1)}.

On account of R_{h2(A1)} ⊆ R_{h2(A2)}, it can be obtained that (h1(u), h1(x)) ∈ R_{h2(A2)}, i.e., ∀a ∈ A2, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x)).

Then ∀a ∈ A2, h3(a(u)) ⪯_{h2(a)} h3(a(x)).

Because h3|_{U_a} is order-preserving, it can be obtained that ∀a ∈ A2, a(u) ⪯_a a(x). This implies (u,x) ∈ R_{A2}.

Thus

R_{A1} ⊆ R_{A2}.

Theorem 6.13.

Assume that (U,A) and (W,P) are two OISs. Suppose (U,A) ~h (W,P) with h = (h1, h2, h3). Then ∀A1, A2 ⊆ A,

I(A1) ⪯ I(A2) ⟺ I(h2(A1)) ⪯ I(h2(A2)).

Proof.

Suppose I(A1) ⪯ I(A2). Then ∀i, [x_i]_{A1} ⊆ [x_i]_{A2}.

Let y_j = h1(x_i). ∀w ∈ [y_j]_{h2(A1)}, it can be obtained that (w, y_j) ∈ R_{h2(A1)}. This implies

∀a ∈ A1, h2(a)(w) ⪯_{h2(a)} h2(a)(y_j).

Let w = h1(u). Then ∀a ∈ A1, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x_i)).

Additionally, (U,A) ~h (W,P). Then ∀a ∈ A,

h2(a)(h1(x_i)) = h3(a(x_i)), h2(a)(h1(u)) = h3(a(u)).

So ∀a ∈ A1, h3(a(u)) ⪯_{h2(a)} h3(a(x_i)).

Because h3|_{U_a} is order-preserving, it can be obtained that ∀a ∈ A1, a(u) ⪯_a a(x_i).

This implies (u, x_i) ∈ R_{A1}. Then u ∈ [x_i]_{A1}.

On account of [x_i]_{A1} ⊆ [x_i]_{A2}, it can be obtained that u ∈ [x_i]_{A2}. So (u, x_i) ∈ R_{A2}, i.e., ∀a ∈ A2, a(u) ⪯_a a(x_i).

Because h3|_{U_a} is order-preserving, ∀a ∈ A2,

h3(a(u)) ⪯_{h2(a)} h3(a(x_i)).

Then ∀a ∈ A2,

h2(a)(w) = h2(a)(h1(u)) = h3(a(u)) ⪯_{h2(a)} h3(a(x_i)) = h2(a)(h1(x_i)) = h2(a)(y_j).

So (w, y_j) ∈ R_{h2(A2)}.

This implies w ∈ [y_j]_{h2(A2)}.

Thus

I(h2(A1)) ⪯ I(h2(A2)).

Conversely, suppose I(h2(A1)) ⪯ I(h2(A2)). Then ∀j, [y_j]_{h2(A1)} ⊆ [y_j]_{h2(A2)}.

Let h1(x_i) = y_j. ∀u ∈ [x_i]_{A1}, it can be obtained that (u, x_i) ∈ R_{A1}. Then

∀a ∈ A1, a(u) ⪯_a a(x_i).

Because h3|_{U_a} is order-preserving, it can be obtained that ∀a ∈ A1, h3(a(u)) ⪯_{h2(a)} h3(a(x_i)).

Additionally, (U,A) ~h (W,P). Then ∀a ∈ A,

h2(a)(h1(x_i)) = h3(a(x_i)), h3(a(u)) = h2(a)(h1(u)).

Then ∀a ∈ A1, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x_i)). This implies (h1(u), h1(x_i)) ∈ R_{h2(A1)}, i.e., h1(u) ∈ [h1(x_i)]_{h2(A1)} = [y_j]_{h2(A1)}.

On account of [y_j]_{h2(A1)} ⊆ [y_j]_{h2(A2)}, it can be obtained that h1(u) ∈ [y_j]_{h2(A2)}, i.e., (h1(u), h1(x_i)) = (h1(u), y_j) ∈ R_{h2(A2)}.

Then, ∀a ∈ A2, h2(a)(h1(u)) ⪯_{h2(a)} h2(a)(h1(x_i)).

So ∀a ∈ A2, h3(a(u)) ⪯_{h2(a)} h3(a(x_i)).

Because h3|_{U_a} is order-preserving, it can be obtained that ∀a ∈ A2, a(u) ⪯_a a(x_i). Then (u, x_i) ∈ R_{A2}. This implies u ∈ [x_i]_{A2}.

Thus

I(A1) ⪯ I(A2).

REFERENCES

3.T.Y. Lin, Granular computing on binary relations II: rough set representations and belief functions, A. Skowron and L. Polkowski (editors), Rough Sets in Knowledge Discovery, Physica-Verlag, Berlin, 1998, pp. 121-140.
4.J.Y. Liang and K.S. Qu, Information measures of roughness of knowledge and rough sets for information systems, J. Syst. Sci. Syst. Eng., Vol. 10, 2002, pp. 95-103.
8.T.Y. Lin, Granular computing on binary relations I: data mining and neighborhood systems, A. Skowron and L. Polkowski (editors), Rough Sets in Knowledge Discovery, Physica-Verlag, Berlin, 1998, pp. 107-121.
20.J.W. Grzymala-Busse and W.A. Sedelow Jr., On rough sets and information system homomorphism, Bull. Polish Acad. Technol. Sci., Vol. 36, 1988, pp. 233-239.
22.W.X. Zhang and G.F. Qiu, Uncertain Decision Making Based on Rough Sets, Tsinghua University Publishers, Beijing, China, 2005.