International Journal of Computational Intelligence Systems

Volume 10, Issue 1, 2017, Pages 212 - 233

Incremental update of rough set approximation under the grade indiscernibility relation

Authors
Junfang Luo1, junfangluo@163.com, Yaya Liu1, 1109731034@qq.com, Keyun Qin1, *, keyunqin@263.net, Heng Ding2, dh_swjtu@163.com
*Corresponding author.
Received 28 March 2016, Accepted 30 September 2016, Available Online 1 January 2017.
DOI
10.2991/ijcis.2017.10.1.15
Keywords
Rough set; Fuzzy relation; The grade indiscernibility relation; Incremental learning; Approximation operators
Abstract

The incremental updating of lower and upper approximations under variations of an information system is an important issue in rough set theory. Many incremental updating approaches with respect to different kinds of indiscernibility relations have been proposed. The grade indiscernibility relation is a fuzzification of Pawlak's classical indiscernibility relation that characterizes the similarity between objects more precisely. Based on the fuzzy rough set model, this paper discusses approaches for dynamically acquiring the upper and lower approximations with respect to the grade indiscernibility relation when an attribute or an object is added or removed, and when the attribute value of an object changes. Since the approaches can be applied in succession, the approximations can be updated correctly and efficiently under any possible change in the information system. Finally, extensive experiments on data sets from the University of California, Irvine (UCI) show that the incremental methods effectively reduce computing time in comparison with the traditional non-incremental method.

Copyright
© 2017, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1. Introduction

Rough set theory, a mathematical tool for dealing with vagueness and uncertainty, was introduced by Pawlak in 19821. It can be used in attribute value representation models to describe the dependencies among attributes, evaluate the significance of attributes and derive decision rules2,3,4,5,6. Rough set-based data analysis starts from a data table, also called an information system, which contains data about objects of interest that are characterized by a finite set of attributes. Objects with the same information are indiscernible and the indiscernibility relation generated in this way forms the mathematical basis for the theory of rough sets. By using the indiscernibility relation, a rough set is characterized by a pair of sets, called the lower and upper approximations. In recent years, classical rough sets have been extended to several general models, such as covering rough set model7, fuzzy rough set model8, variable precision rough set model9, generalized rough set model10, probabilistic rough set model11, etc.

With the rapid development of modern information technology, different types of data have increased dramatically. In many real-time cases, information systems evolve over time; in other words, some new information becomes available continuously while some information is no longer useful. One can retrain the system from scratch whenever data (attributes or objects) are added or removed, which is known as a non-incremental approach12. However, the non-incremental approach becomes very costly or even intractable as the amount of data grows. Alternatively, one can apply an incremental learning scheme12. The essence of incremental learning is to allow the learning process to take place in a continuous and progressive manner rather than as a one-shot experience13. Research on updating knowledge incrementally has shown its importance in many areas, such as clinical decision making, intrusion detection, stock evaluation, and text categorization14. Several incremental learning methods based on rough set theory have been proposed14,15,16,17. Chan first put forward an incremental method for updating the approximations of a crisp concept based on the lower and upper boundary sets15. Li et al. presented an incremental method for updating decision rules when multiple attributes are deleted or added simultaneously under rough sets based on the characteristic relation16. Zhang et al. investigated an approach for updating approximations under neighborhood rough sets17. Cheng proposed two incremental methods for the fast computation of rough fuzzy approximations, based on the boundary set and on the cut sets of a fuzzy set, respectively14. These studies have significantly enriched the theory of rough sets and paved a way for dynamic data mining, and even big data mining.

The indiscernibility relation is a key notion of rough set theory, which partitions the object set of an information system into a collection of equivalence classes. Zhao proposed the notion of the grade indiscernibility relation, a fuzzy relation on an information system that characterizes differences in the grade of discernibility18. Based on the fuzzy rough set model8, Qin investigated rough approximation operators based on the grade indiscernibility relation19. To better apply the grade indiscernibility relation, Luo extended it to incomplete information systems20. The value tolerance relation based rough approximation operators have also been investigated21. In the same manner, we defined approximation operators based on the grade indiscernibility relation20. Furthermore, rule acquisition and attribute reduction were discussed, and the advantages of the grade indiscernibility relation were explained. In this paper, we discuss approaches for incrementally acquiring approximations based on the grade indiscernibility relation when the information system changes. Since these approaches can be applied in succession, they can update the approximations effectively whenever any possible change in the information system occurs. To show this succession, each example in this paper takes as input the output of the example before it. It should be noted that the order of the changes to the information system can be arbitrary.

This paper is arranged as follows. In Section 2, we review some fundamental concepts of Pawlak rough sets and the grade indiscernibility relation. The remaining sections focus on approaches for incrementally updating approximations based on the grade indiscernibility relation when the information system varies with time. In Section 3, we discuss how to update the approximation operators under changes of the attribute set. In Section 4, we investigate methods for updating approximations when adding or removing an object of the universal set. The approach for updating approximations when the attribute value of an object changes is given in Section 5. In Section 6, we analyze the time complexity of the algorithms presented in Sections 3, 4, and 5. In Section 7, the incremental methods are evaluated on data sets from UCI. Finally, we conclude the work of this paper and preview further work.

2. Preliminaries

In this section, for our further development, we briefly review some basic notions of Pawlak rough set1 and the grade indiscernibility relation18. Meanwhile, the traditional non-incremental algorithm of calculating approximations is presented.

2.1. The grade indiscernibility relation

Definition 1 1

An information system is a quadruple S = (U,A,V,F), where

  1. U = {x1, x2, ⋯, xn} is a nonempty finite set of objects called the universe of discourse;

  2. A is a nonempty finite set of attributes;

  3. $V=\bigcup_{a\in A}V_a$, where $V_a$ is the value domain of a;

  4. $f : U\times A\to V$ is an information function such that $f(x,a)\in V_a$ for any $x\in U$, $a\in A$.

Definition 2 1

Let S = (U,A,V,F) be an information system and B ⊆ A. The indiscernibility relation ind(B) induced by B is defined as:

$$ind(B)=\{(x,y)\in U\times U;\ \forall b\in B\,(f(x,b)=f(y,b))\}.$$

Clearly, ind(B) is an equivalence relation. If (x,y) ∈ ind(B), then x and y are indiscernible with respect to B. Notice that if x and y are discernible with respect to B, i.e. (x,y) ∉ ind(B), then there exists at least one attribute b ∈ B such that f(x,b) ≠ f(y,b). Thus the grade of discernibility may differ between different pairs of objects. This difference is not captured by Pawlak's indiscernibility relation. To address this issue, Zhao proposed the grade indiscernibility relation for information systems18, which is defined as follows.

Definition 3 18

Let S = (U,A,V,F) be an information system and B ⊆ A. The grade indiscernibility relation $GR_B$ on B is a binary fuzzy relation on U, i.e. $GR_B : U\times U\to[0,1]$, defined for any x, y ∈ U by

$$GR_B(x,y)=\frac{1}{|B|}\,|\{b\in B;\ f(x,b)=f(y,b)\}|.$$

According to this definition, $GR_B(x,y)$ represents the proportion of attributes in B on which x and y are indistinguishable. Clearly, (x,y) ∈ ind(B) if and only if $GR_B(x,y)=1$. Thus $GR_B$ is a fuzzification of the indiscernibility relation ind(B). Notice that $GR_B$ is a reflexive and symmetric fuzzy relation, but not necessarily transitive, i.e., $GR_B(x,y)\wedge GR_B(y,z)\le GR_B(x,z)$ does not necessarily hold. From the viewpoint of the fuzzy rough set model8, Qin presented approximation operators based on the grade indiscernibility relation19.

Definition 4 19

Let S = (U,A,V,F) be an information system and B ⊆ A. For any fuzzy subset μ ∈ F(U), the lower approximation $\underline{GR}_B(\mu)$ and the upper approximation $\overline{GR}_B(\mu)$ of μ are defined as follows:

$$\forall x\in U,\quad \underline{GR}_B(\mu)(x)=\bigwedge_{y\in U}\big((1-GR_B(x,y))\vee\mu(y)\big);\qquad \overline{GR}_B(\mu)(x)=\bigvee_{y\in U}\big(GR_B(x,y)\wedge\mu(y)\big).$$

In this definition, if X ∈ P(U) is a crisp subset of U, then

$$\underline{GR}_B(X)(x)=\bigwedge_{y\in U-X}(1-GR_B(x,y)),\qquad \overline{GR}_B(X)(x)=\bigvee_{y\in X}GR_B(x,y).$$

Clearly, by the reflexivity of $GR_B$, we have $\underline{GR}_B(X)\subseteq X\subseteq\overline{GR}_B(X)$ for any X ⊆ U.
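The grade relation and the crisp lower and upper approximations above can be sketched in a few lines of Python (a minimal sketch; the function names and the dictionary-based data layout are ours, not the paper's; objects are dictionaries mapping attribute names to values, and exact rationals are used so that grades like 3/4 are not rounded):

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree (Definition 3)
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def lower_approx(U, X, B, x):
    # Crisp lower approximation: min over y in U - X of 1 - GR_B(x, y)
    return min((1 - gr(U[x], U[y], B) for y in U if y not in X), default=Fraction(1))

def upper_approx(U, X, B, x):
    # Crisp upper approximation: max over y in X of GR_B(x, y)
    return max((gr(U[x], U[y], B) for y in X), default=Fraction(0))
```

With the data of Table 1 below, this reproduces, for instance, a lower approximation value of 1/4 at x2 and an upper approximation value of 3/4 at x1.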

2.2. The non-incremental algorithm of computing approximations

To obtain the approximations based on the grade indiscernibility relation, the non-incremental method first builds the relation matrix and then derives the approximations from the matrix. The calculation process of the traditional approach can be represented by the following steps of Algorithm 2.1.

  • Step 1: Input S = (U,A,V,f), X, B.

  • Step 2: Build the relation matrix $M_{n\times m}$, where n = |X| and m = |U − X|:

    $$M_{n\times m}=\big(GR_B(x_i,x_j)\big)=\Big(\frac{1}{|B|}|\{b\in B;\ f(x_i,b)=f(x_j,b)\}|\Big),\quad x_i\in X,\ x_j\in U-X.$$

  • Step 3: Calculate the lower approximations $\underline{GR}_B(X)(x_i)=\bigwedge_{y\in U-X}(1-GR_B(x_i,y))=1-\bigvee_{y\in U-X}GR_B(x_i,y)$ and the sets $Y_{x_i}=\{y\in U-X;\ GR_B(x_i,y)=\bigvee_{z\in U-X}GR_B(x_i,z)\}$, $x_i\in X$.

  • Step 4: Calculate the upper approximations $\overline{GR}_B(X)(x_j)=\bigvee_{y\in X}GR_B(x_j,y)$ and the sets $Y_{x_j}=\{y\in X;\ GR_B(x_j,y)=\bigvee_{z\in X}GR_B(x_j,z)\}$, $x_j\in U-X$.

  • Step 5: Output $M_{n\times m}$; $\underline{GR}_B(X)(x_i)$ and $Y_{x_i}$, $x_i\in X$; $\overline{GR}_B(X)(x_j)$ and $Y_{x_j}$, $x_j\in U-X$; $\underline{GR}_B(X)(x_i)=0$, $x_i\in U-X$; $\overline{GR}_B(X)(x_j)=1$, $x_j\in X$.

Algorithm 2.1

(The traditional non-incremental algorithm of computing approximations)

It is easy to see that Algorithm 2.1 has a time complexity of O(|X| |U − X| |B|), which is mainly determined by the cost of building the relation matrix in Step 2. The following Example 1 shows the process of using Algorithm 2.1 to obtain the approximations based on the grade indiscernibility relation.
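One reading of the steps above is the following sketch (names and data layout are ours; the maximizer sets Y, which the incremental algorithms of the later sections reuse, are returned alongside the approximations):

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def non_incremental(U, X, B):
    """Sketch of Algorithm 2.1: build the relation matrix over X x (U - X),
    then read off both approximations and the maximizer sets Y."""
    out = sorted(y for y in U if y not in X)
    M = {xi: {xj: gr(U[xi], U[xj], B) for xj in out} for xi in X}   # Step 2
    low, up, Y_low, Y_up = {}, {}, {}, {}
    for xi in X:                                  # Step 3: lower approximation
        m = max(M[xi].values())
        low[xi] = 1 - m
        Y_low[xi] = {y for y in out if M[xi][y] == m}
    for xj in out:                                # Step 4: upper approximation
        m = max(M[xi][xj] for xi in X)
        up[xj] = m
        Y_up[xj] = {y for y in X if M[y][xj] == m}
    return M, low, up, Y_low, Y_up
```

Running this on the data of Table 1 reproduces the values derived by hand in Example 1.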

Example 1

Let U = {x1,x2,x3,x4,x5,x6} be the universal set, A = {b1,b2,b3,b4,b5} the conditional attribute set, B = {b1,b2,b3,b4}, and X = {x2,x3,x6} the decision set. The related information system is given in Table 1.

Table 1.

     b1 b2 b3 b4 b5 d
x1   2  2  1  1  3  U − X
x2   2  2  1  2  2  X
x3   2  2  2  1  3  X
x4   1  1  1  2  2  U − X
x5   1  2  2  2  3  U − X
x6   1  1  2  2  2  X

From Step 2 of Algorithm 2.1 we obtain the relation matrix $M_{3\times3}=(GR_B(x_i,x_j))$, $x_i\in X$, $x_j\in U-X$:

$$M_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 3/4 & 2/4 & 2/4\\ x_3 & 3/4 & 0 & 2/4\\ x_6 & 0 & 3/4 & 3/4\end{array}$$

Rows are labelled by the objects $x_i\in X$ and columns by the objects $x_j\in U-X$.

From Step 3 of Algorithm 2.1 we have the lower approximations:

  • $\underline{GR}_B(X)(x_2)=\bigwedge_{y\in U-X}(1-GR_B(x_2,y))=1-\bigvee_{y\in U-X}GR_B(x_2,y)=1-\frac34=\frac14$, and $Y_{x_2}=\{y\in U-X;\ GR_B(x_2,y)=\frac34\}=\{x_1\}$.

  • $\underline{GR}_B(X)(x_3)=1-\bigvee_{y\in U-X}GR_B(x_3,y)=1-\frac34=\frac14$, and $Y_{x_3}=\{x_1\}$.

  • $\underline{GR}_B(X)(x_6)=1-\bigvee_{y\in U-X}GR_B(x_6,y)=1-\frac34=\frac14$, and $Y_{x_6}=\{x_4,x_5\}$.

  • $\underline{GR}_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

From Step 4 of Algorithm 2.1 we have the upper approximations:

  • $\overline{GR}_B(X)(x_1)=\bigvee_{y\in X}GR_B(x_1,y)=\frac34$, and $Y_{x_1}=\{y\in X;\ GR_B(x_1,y)=\frac34\}=\{x_2,x_3\}$.

  • $\overline{GR}_B(X)(x_4)=\bigvee_{y\in X}GR_B(x_4,y)=\frac34$, and $Y_{x_4}=\{x_6\}$.

  • $\overline{GR}_B(X)(x_5)=\bigvee_{y\in X}GR_B(x_5,y)=\frac34$, and $Y_{x_5}=\{x_6\}$.

  • $\overline{GR}_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

3. Incrementally updating approximations while adding or removing an attribute

The traditional non-incremental update method works on a static information system and becomes very time-consuming as the amount of data grows. An incremental update method can improve efficiency by reusing the existing approximation knowledge18.

3.1. Incrementally updating approximations when adding an attribute

Proposition 1

Let S = (U,A,V,F) be an information system, B ⊆ A, b ∈ A and b ∉ B. For any x, y ∈ U,

$$GR_{B\cup\{b\}}(x,y)=\frac{|B|\cdot GR_B(x,y)+GR_{\{b\}}(x,y)}{|B|+1}=\begin{cases}\frac{|B|\cdot GR_B(x,y)+1}{|B|+1}, & f(x,b)=f(y,b);\\ \frac{|B|\cdot GR_B(x,y)}{|B|+1}, & f(x,b)\neq f(y,b).\end{cases}$$

Proof.

This proof is straightforward.
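Indeed, Proposition 1 is the observation that the grade over B ∪ {b} is a size-weighted average of the grades over B and over {b}, so an existing value can be updated in constant time. A quick numerical check (a sketch with illustrative data of our own):

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def gr_add(x, y, B, b):
    # Proposition 1: update GR_B(x, y) to GR over B + {b} without rescanning B
    bump = 1 if x[b] == y[b] else 0
    return Fraction(len(B) * gr(x, y, B) + bump, len(B) + 1)
```

For any pair of objects, `gr_add(x, y, B, b)` agrees with computing the grade over B ∪ {b} from scratch.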

Proposition 2

Let S = (U,A,V,F) be an information system, B ⊆ A, b ∈ A, b ∉ B, and X ⊆ U. The lower and upper approximations of X after adding b to B can be updated respectively as follows.

  • Lower approximation: If $x_i\in U-X$, then $\underline{GR}_{B\cup\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$; if $x_i\in X$, then

    $$\underline{GR}_{B\cup\{b\}}(X)(x_i)=\begin{cases}\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}, & \exists y\in Y_{x_i}\,(f(x_i,b)=f(y,b));\\ \frac{1+|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}, & \forall y\in Y_{x_i},\ f(x_i,b)\neq f(y,b),\end{cases}$$

    where $Y_{x_i}=\{y\in U-X;\ \underline{GR}_B(X)(x_i)=1-GR_B(x_i,y)\}=\{y\in U-X;\ GR_B(x_i,y)=\bigvee_{z\in U-X}GR_B(x_i,z)\}$.

  • Upper approximation: If $x_j\in X$, then $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$; if $x_j\in U-X$, then

    $$\overline{GR}_{B\cup\{b\}}(X)(x_j)=\begin{cases}\frac{|B|\,\overline{GR}_B(X)(x_j)+1}{|B|+1}, & \exists y\in Y_{x_j}\,(f(x_j,b)=f(y,b));\\ \frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|+1}, & \forall y\in Y_{x_j},\ f(x_j,b)\neq f(y,b),\end{cases}$$

    where $Y_{x_j}=\{y\in X;\ \overline{GR}_B(X)(x_j)=GR_B(x_j,y)\}$.

Proof.

  • Lower approximation: For any $x_i\in U-X$, it is obvious that $\underline{GR}_{B\cup\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$.

    For any $x_i\in X$, if $\exists y\in Y_{x_i}$ such that $f(x_i,b)=f(y,b)$, then $\underline{GR}_{B\cup\{b\}}(X)(x_i)=1-GR_{B\cup\{b\}}(x_i,y)=1-\frac{|B|GR_B(x_i,y)+1}{|B|+1}=1-\frac{|B|(1-\underline{GR}_B(X)(x_i))+1}{|B|+1}=\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}$.

    If $f(x_i,b)\neq f(y,b)$ for every $y\in Y_{x_i}$, then

    $$\underline{GR}_{B\cup\{b\}}(X)(x_i)=\bigwedge_{y\in U-X}(1-GR_{B\cup\{b\}}(x_i,y))=1-\Big(GR_{B\cup\{b\}}(x_i,y_0)\vee\bigvee_{y\in U-X-Y_{x_i},\,f(x_i,b)=f(y,b)}GR_{B\cup\{b\}}(x_i,y)\Big)=1-\Big(\frac{|B|GR_B(x_i,y_0)}{|B|+1}\vee\bigvee_{y\in U-X-Y_{x_i},\,f(x_i,b)=f(y,b)}\frac{|B|GR_B(x_i,y)+1}{|B|+1}\Big),$$

    where $y_0\in Y_{x_i}$. For any $y\in U-X-Y_{x_i}$ we have $GR_B(x_i,y_0)>GR_B(x_i,y)$, and since the grades are multiples of $\frac{1}{|B|}$, $\frac{|B|GR_B(x_i,y_0)}{|B|+1}\ge\frac{|B|GR_B(x_i,y)+1}{|B|+1}$. Therefore $\underline{GR}_{B\cup\{b\}}(X)(x_i)=1-\frac{|B|GR_B(x_i,y_0)}{|B|+1}=\frac{|B|\,\underline{GR}_B(X)(x_i)+1}{|B|+1}$.

  • Upper approximation: For any $x_j\in X$, it is obvious that $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$.

    For any $x_j\in U-X$, if $\exists y\in Y_{x_j}$ such that $f(x_j,b)=f(y,b)$, then $\overline{GR}_{B\cup\{b\}}(X)(x_j)=GR_{B\cup\{b\}}(x_j,y)=\frac{|B|GR_B(x_j,y)+1}{|B|+1}=\frac{|B|\,\overline{GR}_B(X)(x_j)+1}{|B|+1}$.

    If $f(x_j,b)\neq f(y,b)$ for every $y\in Y_{x_j}$, then

    $$\overline{GR}_{B\cup\{b\}}(X)(x_j)=\bigvee_{y\in X}GR_{B\cup\{b\}}(x_j,y)=GR_{B\cup\{b\}}(x_j,y_0)\vee\bigvee_{y\in X-Y_{x_j},\,f(x_j,b)=f(y,b)}GR_{B\cup\{b\}}(x_j,y)=\frac{|B|GR_B(x_j,y_0)}{|B|+1}\vee\bigvee_{y\in X-Y_{x_j},\,f(x_j,b)=f(y,b)}\frac{|B|GR_B(x_j,y)+1}{|B|+1},$$

    where $y_0\in Y_{x_j}$. For any $y\in X-Y_{x_j}$ we have $GR_B(x_j,y_0)>GR_B(x_j,y)$, and hence $\frac{|B|GR_B(x_j,y_0)}{|B|+1}\ge\frac{|B|GR_B(x_j,y)+1}{|B|+1}$. Therefore $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\frac{|B|GR_B(x_j,y_0)}{|B|+1}=\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|+1}$.

In particular, if $Y_{x_i}=\{y_k\}$ and $Y_{x_j}=\{y_l\}$ are singletons, the lower and upper approximations of X after adding b to B can be updated respectively as follows.

  • Lower approximation: If $x_i\in U-X$, then $\underline{GR}_{B\cup\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$; if $x_i\in X$, then

    $$\underline{GR}_{B\cup\{b\}}(X)(x_i)=\begin{cases}\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}, & f(x_i,b)=f(y_k,b);\\ \frac{|B|\,\underline{GR}_B(X)(x_i)+1}{|B|+1}, & f(x_i,b)\neq f(y_k,b).\end{cases}$$

  • Upper approximation: If $x_j\in X$, then $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$; if $x_j\in U-X$, then

    $$\overline{GR}_{B\cup\{b\}}(X)(x_j)=\begin{cases}\frac{|B|\,\overline{GR}_B(X)(x_j)+1}{|B|+1}, & f(x_j,b)=f(y_l,b);\\ \frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|+1}, & f(x_j,b)\neq f(y_l,b).\end{cases}$$

Since the incremental update approaches reuse the sets $Y_{x_i}$ and $Y_{x_j}$, these sets must also be updated in order to maintain the continuity of the approaches.

Proposition 3

The sets $Y_{x_i}$ and $Y_{x_j}$ after adding b to B can be updated respectively as follows.

$$Y^*_{x_i}=\begin{cases}\{y\in Y_{x_i};\ f(x_i,b)=f(y,b)\}, & \exists y\in Y_{x_i}\,(f(x_i,b)=f(y,b));\\ \{y\in U-X;\ GR_{B\cup\{b\}}(x_i,y)=\bigvee_{z\in U-X}GR_{B\cup\{b\}}(x_i,z)\}, & \forall y\in Y_{x_i},\ f(x_i,b)\neq f(y,b).\end{cases}$$

$$Y^*_{x_j}=\begin{cases}\{y\in Y_{x_j};\ f(x_j,b)=f(y,b)\}, & \exists y\in Y_{x_j}\,(f(x_j,b)=f(y,b));\\ \{y\in X;\ GR_{B\cup\{b\}}(x_j,y)=\bigvee_{z\in X}GR_{B\cup\{b\}}(x_j,z)\}, & \forall y\in Y_{x_j},\ f(x_j,b)\neq f(y,b).\end{cases}$$

  • Step 1: Input the relation matrix $M_{n\times m}$; $\underline{GR}_B(X)(x_i)$, $Y_{x_i}$, $x_i\in X$; $\overline{GR}_B(X)(x_j)$, $Y_{x_j}$, $x_j\in U-X$; the added attribute b.

  • Step 2: Build the new relation matrix

    $$M^*_{n\times m}=\big(GR_{B\cup\{b\}}(x_i,x_j)\big)=\begin{cases}\frac{|B|\cdot GR_B(x_i,x_j)+1}{|B|+1}, & f(x_i,b)=f(x_j,b);\\ \frac{|B|\cdot GR_B(x_i,x_j)}{|B|+1}, & f(x_i,b)\neq f(x_j,b),\end{cases}$$

    $x_i\in X$, $x_j\in U-X$. // According to Proposition 1.

  • Step 3: Calculate the lower approximations, $x_i\in X$:

    If $\exists y\in Y_{x_i}$ such that $f(x_i,b)=f(y,b)$, then $\underline{GR}_{B\cup\{b\}}(X)(x_i)=\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}$ and $Y^*_{x_i}=\{y\in Y_{x_i};\ f(x_i,b)=f(y,b)\}$.

    Else $\underline{GR}_{B\cup\{b\}}(X)(x_i)=\frac{1+|B|\,\underline{GR}_B(X)(x_i)}{|B|+1}$ and $Y^*_{x_i}=\{y\in U-X;\ GR_{B\cup\{b\}}(x_i,y)=\bigvee_{z\in U-X}GR_{B\cup\{b\}}(x_i,z)\}$. // According to Propositions 2 and 3.

  • Step 4: Calculate the upper approximations, $x_j\in U-X$:

    If $\exists y\in Y_{x_j}$ such that $f(x_j,b)=f(y,b)$, then $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\frac{|B|\,\overline{GR}_B(X)(x_j)+1}{|B|+1}$ and $Y^*_{x_j}=\{y\in Y_{x_j};\ f(x_j,b)=f(y,b)\}$.

    Else $\overline{GR}_{B\cup\{b\}}(X)(x_j)=\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|+1}$ and $Y^*_{x_j}=\{y\in X;\ GR_{B\cup\{b\}}(x_j,y)=\bigvee_{z\in X}GR_{B\cup\{b\}}(x_j,z)\}$. // According to Propositions 2 and 3.

  • Step 5: Output the relation matrix $M^*_{n\times m}$; $\underline{GR}_{B\cup\{b\}}(X)(x_i)$, $Y^*_{x_i}$, $x_i\in X$; $\overline{GR}_{B\cup\{b\}}(X)(x_j)$, $Y^*_{x_j}$, $x_j\in U-X$; $\underline{GR}_{B\cup\{b\}}(X)(x_i)=0$, $x_i\in U-X$; $\overline{GR}_{B\cup\{b\}}(X)(x_j)=1$, $x_j\in X$.

Algorithm 3.1

(Incremental algorithm for updating approximations when adding an attribute b)

The time complexity of Algorithm 3.1 is O(|X| |U − X|), which is mainly determined by the cost of building the relation matrix in Step 2. In the following Example 2, we use the results of Example 1 to demonstrate how Algorithm 3.1 updates the approximations when an attribute is added.
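Algorithm 3.1 can be sketched as follows (a sketch under our own naming and data-layout assumptions: `low`/`up` map each object to its current approximation value, `Y_low`/`Y_up` to its maximizer set, and the dictionaries are updated in place; only when every stored maximizer disagrees on the new attribute is U − X or X rescanned):

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def add_attribute(U, X, B, b, low, up, Y_low, Y_up):
    """Sketch of Algorithm 3.1: update approximations when b is added to B."""
    n = len(B)
    out = {y for y in U if y not in X}
    for xi in X:                                  # Step 3: lower approximation
        hit = {y for y in Y_low[xi] if U[xi][b] == U[y][b]}
        if hit:                                   # a maximizer still agrees on b
            low[xi] = Fraction(n * low[xi], n + 1)
            Y_low[xi] = hit
        else:                                     # all maximizers disagree: rescan
            low[xi] = Fraction(1 + n * low[xi], n + 1)
            g = {y: gr(U[xi], U[y], B + [b]) for y in out}
            Y_low[xi] = {y for y in out if g[y] == max(g.values())}
    for xj in out:                                # Step 4: upper approximation
        hit = {y for y in Y_up[xj] if U[xj][b] == U[y][b]}
        if hit:
            up[xj] = Fraction(n * up[xj] + 1, n + 1)
            Y_up[xj] = hit
        else:
            up[xj] = Fraction(n * up[xj], n + 1)
            g = {y: gr(U[xj], U[y], B + [b]) for y in X}
            Y_up[xj] = {y for y in X if g[y] == max(g.values())}
```

Starting from the output of Example 1 and adding b5, this reproduces the values derived by hand in Example 2.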

Example 2

We consider the information system given in Table 1. Let U = {x1,x2,x3,x4,x5,x6} be the universal set, B = {b1,b2,b3,b4} be the conditional attribute set, X = {x2,x3,x6} be the decision set. Adding an attribute b5 to B.

Using the result of Example 1.

$$M_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 3/4 & 2/4 & 2/4\\ x_3 & 3/4 & 0 & 2/4\\ x_6 & 0 & 3/4 & 3/4\end{array}$$

$\underline{GR}_B(X)(x_2)=\frac14$, $Y_{x_2}=\{x_1\}$; $\underline{GR}_B(X)(x_3)=\frac14$, $Y_{x_3}=\{x_1\}$; $\underline{GR}_B(X)(x_6)=\frac14$, $Y_{x_6}=\{x_4,x_5\}$; $\underline{GR}_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

$\overline{GR}_B(X)(x_1)=\frac34$, $Y_{x_1}=\{x_2,x_3\}$; $\overline{GR}_B(X)(x_4)=\frac34$, $Y_{x_4}=\{x_6\}$; $\overline{GR}_B(X)(x_5)=\frac34$, $Y_{x_5}=\{x_6\}$; $\overline{GR}_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

From Step 2 of Algorithm 3.1 we have the new relation matrix $M^*_{3\times3}$:

$$M^*_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 3/5 & 3/5 & 2/5\\ x_3 & 4/5 & 0 & 3/5\\ x_6 & 0 & 4/5 & 3/5\end{array}$$

The lower and upper approximations of X = {x2,x3,x6} by adding b5 to B are updated as follows.

From Step 3 of Algorithm 3.1 we have the lower approximations:

  • $\underline{GR}_{B\cup\{b_5\}}(X)(x_2)$: For every $y\in Y_{x_2}$, $f(x_2,b_5)\neq f(y,b_5)$. So $\underline{GR}_{B\cup\{b_5\}}(X)(x_2)=\frac{1+4\cdot\frac14}{4+1}=\frac25$ and $Y^*_{x_2}=\{y\in U-X;\ GR_{B\cup\{b_5\}}(x_2,y)=\bigvee_{z\in U-X}GR_{B\cup\{b_5\}}(x_2,z)\}=\{x_1,x_4\}$.

  • $\underline{GR}_{B\cup\{b_5\}}(X)(x_3)$: $\exists x_1\in Y_{x_3}$ such that $f(x_3,b_5)=f(x_1,b_5)$, so $\underline{GR}_{B\cup\{b_5\}}(X)(x_3)=\frac{4\cdot\frac14}{4+1}=\frac15$ and $Y^*_{x_3}=\{y\in Y_{x_3};\ f(x_3,b_5)=f(y,b_5)\}=\{x_1\}$.

  • $\underline{GR}_{B\cup\{b_5\}}(X)(x_6)$: $\exists x_4\in Y_{x_6}$ such that $f(x_6,b_5)=f(x_4,b_5)$, so $\underline{GR}_{B\cup\{b_5\}}(X)(x_6)=\frac{4\cdot\frac14}{4+1}=\frac15$ and $Y^*_{x_6}=\{y\in Y_{x_6};\ f(x_6,b_5)=f(y,b_5)\}=\{x_4\}$.

  • $\underline{GR}_{B\cup\{b_5\}}(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

From Step 4 of Algorithm 3.1 we have the upper approximations:

  • $\overline{GR}_{B\cup\{b_5\}}(X)(x_1)$: $\exists x_3\in Y_{x_1}$ such that $f(x_1,b_5)=f(x_3,b_5)$, so $\overline{GR}_{B\cup\{b_5\}}(X)(x_1)=\frac{4\cdot\frac34+1}{4+1}=\frac45$ and $Y^*_{x_1}=\{y\in Y_{x_1};\ f(x_1,b_5)=f(y,b_5)\}=\{x_3\}$.

  • $\overline{GR}_{B\cup\{b_5\}}(X)(x_4)$: $\exists x_6\in Y_{x_4}$ such that $f(x_4,b_5)=f(x_6,b_5)$, so $\overline{GR}_{B\cup\{b_5\}}(X)(x_4)=\frac{4\cdot\frac34+1}{4+1}=\frac45$ and $Y^*_{x_4}=\{y\in Y_{x_4};\ f(x_4,b_5)=f(y,b_5)\}=\{x_6\}$.

  • $\overline{GR}_{B\cup\{b_5\}}(X)(x_5)$: For every $y\in Y_{x_5}$, $f(x_5,b_5)\neq f(y,b_5)$, so $\overline{GR}_{B\cup\{b_5\}}(X)(x_5)=\frac{4\cdot\frac34}{4+1}=\frac35$ and $Y^*_{x_5}=\{y\in X;\ GR_{B\cup\{b_5\}}(x_5,y)=\bigvee_{z\in X}GR_{B\cup\{b_5\}}(x_5,z)\}=\{x_3,x_6\}$.

  • $\overline{GR}_{B\cup\{b_5\}}(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

3.2. Incrementally updating approximations when removing an attribute

Proposition 4

Let S = (U,A,V,F) be an information system, B ⊆ A and b ∈ B. For any x, y ∈ U,

$$GR_{B-\{b\}}(x,y)=\frac{|B|\cdot GR_B(x,y)-GR_{\{b\}}(x,y)}{|B|-1}=\begin{cases}\frac{|B|\cdot GR_B(x,y)-1}{|B|-1}, & f(x,b)=f(y,b);\\ \frac{|B|\cdot GR_B(x,y)}{|B|-1}, & f(x,b)\neq f(y,b).\end{cases}$$

Proof.

This proof is straightforward.

Proposition 5

Let S = (U,A,V,F) be an information system, B ⊆ A, b ∈ B, and X ⊆ U. The lower and upper approximations of X after removing b from B can be updated respectively as follows.

  • Lower approximation: If $x_i\in U-X$, then $\underline{GR}_{B-\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$; if $x_i\in X$, then

    $$\underline{GR}_{B-\{b\}}(X)(x_i)=\begin{cases}\frac{|B|\,\underline{GR}_B(X)(x_i)-1}{|B|-1}, & \exists y\in Y_{x_i}\,(f(x_i,b)\neq f(y,b));\\ \frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|-1}, & \forall y\in Y_{x_i},\ f(x_i,b)=f(y,b),\end{cases}$$

    where $Y_{x_i}=\{y\in U-X;\ \underline{GR}_B(X)(x_i)=1-GR_B(x_i,y)\}=\{y\in U-X;\ GR_B(x_i,y)=\bigvee_{z\in U-X}GR_B(x_i,z)\}$.

  • Upper approximation: If $x_j\in X$, then $\overline{GR}_{B-\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$; if $x_j\in U-X$, then

    $$\overline{GR}_{B-\{b\}}(X)(x_j)=\begin{cases}\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|-1}, & \exists y\in Y_{x_j}\,(f(x_j,b)\neq f(y,b));\\ \frac{|B|\,\overline{GR}_B(X)(x_j)-1}{|B|-1}, & \forall y\in Y_{x_j},\ f(x_j,b)=f(y,b),\end{cases}$$

    where $Y_{x_j}=\{y\in X;\ \overline{GR}_B(X)(x_j)=GR_B(x_j,y)\}$.

Proof.

  • Lower approximation: For any $x_i\in U-X$, it is obvious that $\underline{GR}_{B-\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$.

    For any $x_i\in X$, if $\exists y\in Y_{x_i}$ such that $f(x_i,b)\neq f(y,b)$, then $\underline{GR}_{B-\{b\}}(X)(x_i)=1-GR_{B-\{b\}}(x_i,y)=1-\frac{|B|GR_B(x_i,y)}{|B|-1}=1-\frac{|B|(1-\underline{GR}_B(X)(x_i))}{|B|-1}=\frac{|B|\,\underline{GR}_B(X)(x_i)-1}{|B|-1}$.

    If $f(x_i,b)=f(y,b)$ for every $y\in Y_{x_i}$, then

    $$\underline{GR}_{B-\{b\}}(X)(x_i)=\bigwedge_{y\in U-X}(1-GR_{B-\{b\}}(x_i,y))=1-\Big(GR_{B-\{b\}}(x_i,y_0)\vee\bigvee_{y\in U-X-Y_{x_i},\,f(x_i,b)\neq f(y,b)}GR_{B-\{b\}}(x_i,y)\Big)=1-\Big(\frac{|B|GR_B(x_i,y_0)-1}{|B|-1}\vee\bigvee_{y\in U-X-Y_{x_i},\,f(x_i,b)\neq f(y,b)}\frac{|B|GR_B(x_i,y)}{|B|-1}\Big),$$

    where $y_0\in Y_{x_i}$. For any $y\in U-X-Y_{x_i}$ we have $GR_B(x_i,y_0)>GR_B(x_i,y)$, and since the grades are multiples of $\frac{1}{|B|}$, $\frac{|B|GR_B(x_i,y_0)-1}{|B|-1}\ge\frac{|B|GR_B(x_i,y)}{|B|-1}$. Therefore $\underline{GR}_{B-\{b\}}(X)(x_i)=1-\frac{|B|GR_B(x_i,y_0)-1}{|B|-1}=\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|-1}$.

  • Upper approximation: For any $x_j\in X$, it is obvious that $\overline{GR}_{B-\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$. For any $x_j\in U-X$, if $\exists y\in Y_{x_j}$ such that $f(x_j,b)\neq f(y,b)$, then $\overline{GR}_{B-\{b\}}(X)(x_j)=GR_{B-\{b\}}(x_j,y)=\frac{|B|GR_B(x_j,y)}{|B|-1}=\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|-1}$.

    If $f(x_j,b)=f(y,b)$ for every $y\in Y_{x_j}$, then

    $$\overline{GR}_{B-\{b\}}(X)(x_j)=\bigvee_{y\in X}GR_{B-\{b\}}(x_j,y)=GR_{B-\{b\}}(x_j,y_0)\vee\bigvee_{y\in X-Y_{x_j},\,f(x_j,b)\neq f(y,b)}GR_{B-\{b\}}(x_j,y)=\frac{|B|GR_B(x_j,y_0)-1}{|B|-1}\vee\bigvee_{y\in X-Y_{x_j},\,f(x_j,b)\neq f(y,b)}\frac{|B|GR_B(x_j,y)}{|B|-1},$$

    where $y_0\in Y_{x_j}$. For any $y\in X-Y_{x_j}$ we have $GR_B(x_j,y_0)>GR_B(x_j,y)$, and hence $\frac{|B|GR_B(x_j,y_0)-1}{|B|-1}\ge\frac{|B|GR_B(x_j,y)}{|B|-1}$. Therefore $\overline{GR}_{B-\{b\}}(X)(x_j)=\frac{|B|GR_B(x_j,y_0)-1}{|B|-1}=\frac{|B|\,\overline{GR}_B(X)(x_j)-1}{|B|-1}$.

In particular, if $Y_{x_i}=\{y_k\}$ and $Y_{x_j}=\{y_l\}$ are singletons, the lower and upper approximations of X after removing b from B can be updated respectively as follows.

  • Lower approximation: If $x_i\in U-X$, then $\underline{GR}_{B-\{b\}}(X)(x_i)=\underline{GR}_B(X)(x_i)=0$. If $x_i\in X$, then

    $$\underline{GR}_{B-\{b\}}(X)(x_i)=\begin{cases}\frac{|B|\,\underline{GR}_B(X)(x_i)-1}{|B|-1}, & f(x_i,b)\neq f(y_k,b);\\ \frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|-1}, & f(x_i,b)=f(y_k,b).\end{cases}$$

  • Upper approximation: If $x_j\in X$, then $\overline{GR}_{B-\{b\}}(X)(x_j)=\overline{GR}_B(X)(x_j)=1$. If $x_j\in U-X$, then

    $$\overline{GR}_{B-\{b\}}(X)(x_j)=\begin{cases}\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|-1}, & f(x_j,b)\neq f(y_l,b);\\ \frac{|B|\,\overline{GR}_B(X)(x_j)-1}{|B|-1}, & f(x_j,b)=f(y_l,b).\end{cases}$$

Proposition 6

The sets $Y_{x_i}$ and $Y_{x_j}$ after removing b from B can be updated respectively as follows.

$$Y^*_{x_i}=\begin{cases}\{y\in Y_{x_i};\ f(x_i,b)\neq f(y,b)\}, & \exists y\in Y_{x_i}\,(f(x_i,b)\neq f(y,b));\\ \{y\in U-X;\ GR_{B-\{b\}}(x_i,y)=\bigvee_{z\in U-X}GR_{B-\{b\}}(x_i,z)\}, & \forall y\in Y_{x_i},\ f(x_i,b)=f(y,b).\end{cases}$$

$$Y^*_{x_j}=\begin{cases}\{y\in Y_{x_j};\ f(x_j,b)\neq f(y,b)\}, & \exists y\in Y_{x_j}\,(f(x_j,b)\neq f(y,b));\\ \{y\in X;\ GR_{B-\{b\}}(x_j,y)=\bigvee_{z\in X}GR_{B-\{b\}}(x_j,z)\}, & \forall y\in Y_{x_j},\ f(x_j,b)=f(y,b).\end{cases}$$

  • Step 1: Input the relation matrix $M_{n\times m}$; $\underline{GR}_B(X)(x_i)$, $Y_{x_i}$, $x_i\in X$; $\overline{GR}_B(X)(x_j)$, $Y_{x_j}$, $x_j\in U-X$; the attribute b to be removed.

  • Step 2: Build the new relation matrix

    $$M^*_{n\times m}=\big(GR_{B-\{b\}}(x_i,x_j)\big)=\begin{cases}\frac{|B|\cdot GR_B(x_i,x_j)-1}{|B|-1}, & f(x_i,b)=f(x_j,b);\\ \frac{|B|\cdot GR_B(x_i,x_j)}{|B|-1}, & f(x_i,b)\neq f(x_j,b),\end{cases}$$

    $x_i\in X$, $x_j\in U-X$. // According to Proposition 4.

  • Step 3: Calculate the lower approximations, $x_i\in X$:

    If $\exists y\in Y_{x_i}$ such that $f(x_i,b)\neq f(y,b)$, then $\underline{GR}_{B-\{b\}}(X)(x_i)=\frac{|B|\,\underline{GR}_B(X)(x_i)-1}{|B|-1}$ and $Y^*_{x_i}=\{y\in Y_{x_i};\ f(x_i,b)\neq f(y,b)\}$.

    Else $\underline{GR}_{B-\{b\}}(X)(x_i)=\frac{|B|\,\underline{GR}_B(X)(x_i)}{|B|-1}$ and $Y^*_{x_i}=\{y\in U-X;\ GR_{B-\{b\}}(x_i,y)=\bigvee_{z\in U-X}GR_{B-\{b\}}(x_i,z)\}$. // According to Propositions 5 and 6.

  • Step 4: Calculate the upper approximations, $x_j\in U-X$:

    If $\exists y\in Y_{x_j}$ such that $f(x_j,b)\neq f(y,b)$, then $\overline{GR}_{B-\{b\}}(X)(x_j)=\frac{|B|\,\overline{GR}_B(X)(x_j)}{|B|-1}$ and $Y^*_{x_j}=\{y\in Y_{x_j};\ f(x_j,b)\neq f(y,b)\}$.

    Else $\overline{GR}_{B-\{b\}}(X)(x_j)=\frac{|B|\,\overline{GR}_B(X)(x_j)-1}{|B|-1}$ and $Y^*_{x_j}=\{y\in X;\ GR_{B-\{b\}}(x_j,y)=\bigvee_{z\in X}GR_{B-\{b\}}(x_j,z)\}$. // According to Propositions 5 and 6.

  • Step 5: Output the relation matrix $M^*_{n\times m}$; $\underline{GR}_{B-\{b\}}(X)(x_i)$, $Y^*_{x_i}$, $x_i\in X$; $\overline{GR}_{B-\{b\}}(X)(x_j)$, $Y^*_{x_j}$, $x_j\in U-X$; $\underline{GR}_{B-\{b\}}(X)(x_i)=0$, $x_i\in U-X$; $\overline{GR}_{B-\{b\}}(X)(x_j)=1$, $x_j\in X$.

Algorithm 3.2

(Incremental algorithm for updating approximations when removing an attribute b)

Algorithm 3.2 has a time complexity of O(|X| |U − X|), which is mainly determined by Step 2. In the following Example 3, we use the results of Example 2 to demonstrate how Algorithm 3.2 updates the approximations when an attribute is removed.
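Algorithm 3.2 mirrors Algorithm 3.1; a sketch under the same assumed data layout as before (names are ours). A maximizer that disagreed on the removed attribute stays maximal, so the stored value can be rescaled directly; only when every maximizer agreed on b must the tie set be rescanned:

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def remove_attribute(U, X, B, b, low, up, Y_low, Y_up):
    """Sketch of Algorithm 3.2: update approximations when b is removed from B."""
    n = len(B)
    B2 = [a for a in B if a != b]
    out = {y for y in U if y not in X}
    for xi in X:                                  # Step 3: lower approximation
        miss = {y for y in Y_low[xi] if U[xi][b] != U[y][b]}
        if miss:                                  # a maximizer disagreed on b
            low[xi] = Fraction(n * low[xi] - 1, n - 1)
            Y_low[xi] = miss
        else:                                     # all maximizers agreed: rescan ties
            low[xi] = Fraction(n * low[xi], n - 1)
            g = {y: gr(U[xi], U[y], B2) for y in out}
            Y_low[xi] = {y for y in out if g[y] == max(g.values())}
    for xj in out:                                # Step 4: upper approximation
        miss = {y for y in Y_up[xj] if U[xj][b] != U[y][b]}
        if miss:
            up[xj] = Fraction(n * up[xj], n - 1)
            Y_up[xj] = miss
        else:
            up[xj] = Fraction(n * up[xj] - 1, n - 1)
            g = {y: gr(U[xj], U[y], B2) for y in X}
            Y_up[xj] = {y for y in X if g[y] == max(g.values())}
```

Starting from the output of Example 2 and removing b2, this reproduces the values derived by hand in Example 3.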

Example 3

We consider the information system given in Table 1. Let U = {x1,x2,x3,x4,x5,x6} be the universal set, B = {b1,b2,b3,b4,b5} be the conditional attribute set, X = {x2,x3,x6} be the decision set. Removing an attribute b2 from B.

Using the result of Example 2.

$$M_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 3/5 & 3/5 & 2/5\\ x_3 & 4/5 & 0 & 3/5\\ x_6 & 0 & 4/5 & 3/5\end{array}$$

$\underline{GR}_B(X)(x_2)=\frac25$, $Y_{x_2}=\{x_1,x_4\}$; $\underline{GR}_B(X)(x_3)=\frac15$, $Y_{x_3}=\{x_1\}$; $\underline{GR}_B(X)(x_6)=\frac15$, $Y_{x_6}=\{x_4\}$; $\underline{GR}_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

$\overline{GR}_B(X)(x_1)=\frac45$, $Y_{x_1}=\{x_3\}$; $\overline{GR}_B(X)(x_4)=\frac45$, $Y_{x_4}=\{x_6\}$; $\overline{GR}_B(X)(x_5)=\frac35$, $Y_{x_5}=\{x_3,x_6\}$; $\overline{GR}_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

From Step 2 of Algorithm 3.2 we have the new relation matrix $M^*_{3\times3}$:

$$M^*_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 2/4 & 3/4 & 1/4\\ x_3 & 3/4 & 0 & 2/4\\ x_6 & 0 & 3/4 & 3/4\end{array}$$

The lower and upper approximations of X = {x2,x3,x6} by removing b2 from B are updated as follows.

From Step 3 of Algorithm 3.2 we have the lower approximations:

  • $\underline{GR}_{B-\{b_2\}}(X)(x_2)$: $\exists x_4\in Y_{x_2}$ such that $f(x_2,b_2)\neq f(x_4,b_2)$, so $\underline{GR}_{B-\{b_2\}}(X)(x_2)=\frac{5\cdot\frac25-1}{5-1}=\frac14$ and $Y^*_{x_2}=\{y\in Y_{x_2};\ f(x_2,b_2)\neq f(y,b_2)\}=\{x_4\}$.

  • $\underline{GR}_{B-\{b_2\}}(X)(x_3)$: For every $y\in Y_{x_3}$, $f(x_3,b_2)=f(y,b_2)$, so $\underline{GR}_{B-\{b_2\}}(X)(x_3)=\frac{5\cdot\frac15}{5-1}=\frac14$ and $Y^*_{x_3}=\{y\in U-X;\ GR_{B-\{b_2\}}(x_3,y)=\bigvee_{z\in U-X}GR_{B-\{b_2\}}(x_3,z)\}=\{x_1\}$.

  • $\underline{GR}_{B-\{b_2\}}(X)(x_6)$: For every $y\in Y_{x_6}$, $f(x_6,b_2)=f(y,b_2)$, so $\underline{GR}_{B-\{b_2\}}(X)(x_6)=\frac{5\cdot\frac15}{5-1}=\frac14$ and $Y^*_{x_6}=\{y\in U-X;\ GR_{B-\{b_2\}}(x_6,y)=\bigvee_{z\in U-X}GR_{B-\{b_2\}}(x_6,z)\}=\{x_4,x_5\}$.

  • $\underline{GR}_{B-\{b_2\}}(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

From Step 4 of Algorithm 3.2 we have the upper approximations:

  • $\overline{GR}_{B-\{b_2\}}(X)(x_1)$: For every $y\in Y_{x_1}$, $f(x_1,b_2)=f(y,b_2)$, so $\overline{GR}_{B-\{b_2\}}(X)(x_1)=\frac{5\cdot\frac45-1}{5-1}=\frac34$ and $Y^*_{x_1}=\{y\in X;\ GR_{B-\{b_2\}}(x_1,y)=\bigvee_{z\in X}GR_{B-\{b_2\}}(x_1,z)\}=\{x_3\}$.

  • $\overline{GR}_{B-\{b_2\}}(X)(x_4)$: For every $y\in Y_{x_4}$, $f(x_4,b_2)=f(y,b_2)$, so $\overline{GR}_{B-\{b_2\}}(X)(x_4)=\frac{5\cdot\frac45-1}{5-1}=\frac34$ and $Y^*_{x_4}=\{y\in X;\ GR_{B-\{b_2\}}(x_4,y)=\bigvee_{z\in X}GR_{B-\{b_2\}}(x_4,z)\}=\{x_2,x_6\}$.

  • $\overline{GR}_{B-\{b_2\}}(X)(x_5)$: $\exists x_6\in Y_{x_5}$ such that $f(x_5,b_2)\neq f(x_6,b_2)$, so $\overline{GR}_{B-\{b_2\}}(X)(x_5)=\frac{5\cdot\frac35}{5-1}=\frac34$ and $Y^*_{x_5}=\{y\in Y_{x_5};\ f(x_5,b_2)\neq f(y,b_2)\}=\{x_6\}$.

  • $\overline{GR}_{B-\{b_2\}}(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

4. Incrementally updating approximations while adding or removing an object

In this section, we consider the problem of updating the approximations of a target concept based on the grade indiscernibility relation in terms of adding or removing an object.

4.1. Incrementally updating approximations when adding an object

Proposition 7

Let S = (U,A,V,F) be an information system, B ⊆ A, x ∉ U, X ⊆ U, and U′ = U ∪ {x}. The lower and upper approximations of X after adding x to U − X (denoted with a prime) can be updated respectively as follows.

  • Lower approximation: When $x_i\in U-X$, we have $\underline{GR}'_B(X)(x_i)=0$. When $x_i\in X$, we have:

    • If $1-GR_B(x_i,x)>\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}$.

    • If $1-GR_B(x_i,x)=\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}\cup\{x\}$.

    • If $1-GR_B(x_i,x)<\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=1-GR_B(x_i,x)$ and $Y^*_{x_i}=\{x\}$.

  • Upper approximation: If $x_j\in X$, $\overline{GR}'_B(X)(x_j)=1$; if $x_j\in U-X$, then $\overline{GR}'_B(X)(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}$. If $x_j=x$, then $\overline{GR}'_B(X)(x)=\bigvee_{y\in X}GR_B(x,y)$ and $Y^*_x=\{y\in X;\ GR_B(x,y)=\bigvee_{z\in X}GR_B(x,z)\}$.

Proof.

  • Lower approximation:

    If $x_i\in X$, we have $\underline{GR}'_B(X)(x_i)=\bigwedge_{y\in(U-X)\cup\{x\}}(1-GR_B(x_i,y))=(1-GR_B(x_i,x))\wedge\bigwedge_{y\in U-X}(1-GR_B(x_i,y))=(1-GR_B(x_i,x))\wedge\underline{GR}_B(X)(x_i)$. If $x_i\in U-X$, it is obvious that $\underline{GR}'_B(X)(x_i)=\underline{GR}_B(X)(x_i)=0$.

  • Upper approximation:

    If $x_j\in X$, it is obvious that $\overline{GR}'_B(X)(x_j)=\overline{GR}_B(X)(x_j)=1$. If $x_j\in U-X$, we have $\overline{GR}'_B(X)(x_j)=\bigvee_{y\in X}GR_B(x_j,y)=\overline{GR}_B(X)(x_j)$. If $x_j=x$, we have $\overline{GR}'_B(X)(x)=\bigvee_{y\in X}GR_B(x,y)$.

  • Step 1: Input the relation matrix $M_{n\times m}$; $\underline{GR}_B(X)(x_i)$, $Y_{x_i}$, $x_i\in X$; $\overline{GR}_B(X)(x_j)$, $Y_{x_j}$, $x_j\in U-X$; the added object x.

  • Step 2: Build the new relation matrix $M^*_{n\times(m+1)}$. // Calculate $GR_B(x,x_i)$, $x_i\in X$.

  • Step 3: Calculate the lower approximations, $x_i\in X$:

    • If $1-GR_B(x_i,x)>\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}$.

    • If $1-GR_B(x_i,x)=\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}\cup\{x\}$.

    • If $1-GR_B(x_i,x)<\underline{GR}_B(X)(x_i)$, then $\underline{GR}'_B(X)(x_i)=1-GR_B(x_i,x)$ and $Y^*_{x_i}=\{x\}$. // According to Proposition 7.

  • Step 4: Calculate the upper approximations, $x_j\in(U-X)\cup\{x\}$:

    If $x_j\in U-X$, then $\overline{GR}'_B(X)(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}$.

    If $x_j=x$, then $\overline{GR}'_B(X)(x)=\bigvee_{y\in X}GR_B(x,y)$ and $Y^*_x=\{y\in X;\ GR_B(x,y)=\bigvee_{z\in X}GR_B(x,z)\}$. // According to Proposition 7.

  • Step 5: Output the relation matrix $M^*_{n\times(m+1)}$; $\underline{GR}'_B(X)(x_i)$, $Y^*_{x_i}$, $x_i\in X$; $\overline{GR}'_B(X)(x_j)$, $Y^*_{x_j}$, $x_j\in(U-X)\cup\{x\}$; $\underline{GR}'_B(X)(x_i)=0$, $x_i\in(U-X)\cup\{x\}$; $\overline{GR}'_B(X)(x_j)=1$, $x_j\in X$.

Algorithm 4.1

(Incremental algorithm for updating approximations when adding an object x to U − X)

The time complexity of Algorithm 4.1 is O(|X| |B|), which is mainly determined by Step 2. In the following Example 4, we use the results of Example 3 to demonstrate how Algorithm 4.1 updates the approximations when an object is added to U − X.
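A sketch of Algorithm 4.1 under the same assumed data layout as before (names are ours). Only |X| grade values involving the new object are computed; all other stored values are reused:

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def add_object_outside(U, X, B, name, attrs, low, up, Y_low, Y_up):
    """Sketch of Algorithm 4.1: a new object joins U - X (Proposition 7)."""
    U[name] = attrs
    for xi in X:                                  # Step 3: lower approximation
        v = 1 - gr(U[xi], attrs, B)
        if v < low[xi]:                           # new object is the unique maximizer
            low[xi] = v
            Y_low[xi] = {name}
        elif v == low[xi]:                        # ties the current maximum
            Y_low[xi].add(name)
        # otherwise nothing changes
    g = {y: gr(attrs, U[y], B) for y in X}        # Step 4: row for the new object
    up[name] = max(g.values())
    Y_up[name] = {y for y in X if g[y] == up[name]}
```

Starting from the output of Example 3 (Table 2) and adding x7, this reproduces the values derived by hand in Example 4.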

Example 4

We consider the information system given in Table 2. Let U = {x1,x2,x3,x4,x5,x6} be the universal set, B = {b1,b3,b4,b5} the conditional attribute set, and X = {x2,x3,x6} the decision set. We add an object x7 to U − X, so that U′ = U ∪ {x7}.

Table 2.

     b1 b3 b4 b5 d
x1   2  1  1  3  U − X
x2   2  1  2  2  X
x3   2  2  1  3  X
x4   1  1  2  2  U − X
x5   1  2  2  3  U − X
x6   1  2  2  2  X

Using the result of Example 3.

$$M_{3\times 3}=\begin{array}{c|ccc} & x_1 & x_4 & x_5\\\hline x_2 & 2/4 & 3/4 & 1/4\\ x_3 & 3/4 & 0 & 2/4\\ x_6 & 0 & 3/4 & 3/4\end{array}$$

$\underline{GR}_B(X)(x_2)=\frac14$, $Y_{x_2}=\{x_4\}$; $\underline{GR}_B(X)(x_3)=\frac14$, $Y_{x_3}=\{x_1\}$; $\underline{GR}_B(X)(x_6)=\frac14$, $Y_{x_6}=\{x_4,x_5\}$; $\underline{GR}_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5\}$.

$\overline{GR}_B(X)(x_1)=\frac34$, $Y_{x_1}=\{x_3\}$; $\overline{GR}_B(X)(x_4)=\frac34$, $Y_{x_4}=\{x_2,x_6\}$; $\overline{GR}_B(X)(x_5)=\frac34$, $Y_{x_5}=\{x_6\}$; $\overline{GR}_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

From Step 2 of Algorithm 4.1 we have the new relation matrix:

$$M^*_{3\times 4}=\begin{array}{c|cccc} & x_1 & x_4 & x_5 & x_7\\\hline x_2 & 2/4 & 3/4 & 1/4 & 1/4\\ x_3 & 3/4 & 0 & 2/4 & 1/4\\ x_6 & 0 & 3/4 & 3/4 & 3/4\end{array}$$

The lower and upper approximations of X = {x2,x3,x6} after adding x7 to U − X are updated as follows.

From Step 3 of Algorithm 4.1 we have the lower approximations:

  • $\underline{GR}'_B(X)(x_2)$: Since $1-GR_B(x_2,x_7)>\underline{GR}_B(X)(x_2)$, we have $\underline{GR}'_B(X)(x_2)=\underline{GR}_B(X)(x_2)=\frac14$ and $Y^*_{x_2}=Y_{x_2}=\{x_4\}$.

  • $\underline{GR}'_B(X)(x_3)$: Since $1-GR_B(x_3,x_7)>\underline{GR}_B(X)(x_3)$, we have $\underline{GR}'_B(X)(x_3)=\underline{GR}_B(X)(x_3)=\frac14$ and $Y^*_{x_3}=Y_{x_3}=\{x_1\}$.

  • $\underline{GR}'_B(X)(x_6)$: Since $1-GR_B(x_6,x_7)=\underline{GR}_B(X)(x_6)$, we have $\underline{GR}'_B(X)(x_6)=\underline{GR}_B(X)(x_6)=\frac14$ and $Y^*_{x_6}=Y_{x_6}\cup\{x_7\}=\{x_4,x_5,x_7\}$.

  • $\underline{GR}'_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5,x_7\}$.

From Step 4 of Algorithm 4.1 we have the upper approximations:

  • $\overline{GR}'_B(X)(x_7)=\bigvee_{y\in\{x_2,x_3,x_6\}}GR_B(x_7,y)=\frac34$, and $Y^*_{x_7}=\{y\in\{x_2,x_3,x_6\};\ GR_B(x_7,y)=\frac34\}=\{x_6\}$.

  • $\overline{GR}'_B(X)(x_j)=\overline{GR}_B(X)(x_j)$, $Y^*_{x_j}=Y_{x_j}$, $x_j\in\{x_1,x_4,x_5\}$.

  • $\overline{GR}'_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

Proposition 8

Let S = (U,A,V,F) be an information system, B ⊆ A, x ∉ U, X ⊆ U, and U′ = U ∪ {x}.

The lower and upper approximations of X′ = X ∪ {x} after adding x to X can be updated respectively as follows.

  • Lower approximation: If $x_i\in U-X$, then $\underline{GR}_B(X')(x_i)=0$; if $x_i\in X$, then $\underline{GR}_B(X')(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}$. If $x_i=x$, then $\underline{GR}_B(X')(x)=\bigwedge_{y\in U-X}(1-GR_B(x,y))$ and $Y^*_x=\{y\in U-X;\ GR_B(x,y)=\bigvee_{z\in U-X}GR_B(x,z)\}$.

  • Upper approximation: When $x_j\in X'$, we have $\overline{GR}_B(X')(x_j)=1$. When $x_j\in U-X$, we have:

    • If $GR_B(x_j,x)>\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=GR_B(x_j,x)$ and $Y^*_{x_j}=\{x\}$.

    • If $GR_B(x_j,x)=\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}\cup\{x\}$.

    • If $GR_B(x_j,x)<\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}$.

Proof.

This proof is similar to that of Proposition 7.

  • Step 1: Input the relation matrix $M_{n\times m}$; $\underline{GR}_B(X)(x_i)$, $Y_{x_i}$, $x_i\in X$; $\overline{GR}_B(X)(x_j)$, $Y_{x_j}$, $x_j\in U-X$; the added object x.

  • Step 2: Build the new relation matrix $M^*_{(n+1)\times m}$. // Calculate $GR_B(x,x_j)$, $x_j\in U-X$.

  • Step 3: Calculate the lower approximations, $x_i\in X'$:

    If $x_i\in X$, then $\underline{GR}_B(X')(x_i)=\underline{GR}_B(X)(x_i)$ and $Y^*_{x_i}=Y_{x_i}$. If $x_i=x$, then $\underline{GR}_B(X')(x)=\bigwedge_{y\in U-X}(1-GR_B(x,y))$ and $Y^*_x=\{y\in U-X;\ GR_B(x,y)=\bigvee_{z\in U-X}GR_B(x,z)\}$. // According to Proposition 8.

  • Step 4: Calculate the upper approximations, $x_j\in U-X$:

    • If $GR_B(x_j,x)>\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=GR_B(x_j,x)$ and $Y^*_{x_j}=\{x\}$.

    • If $GR_B(x_j,x)=\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}\cup\{x\}$.

    • If $GR_B(x_j,x)<\overline{GR}_B(X)(x_j)$, then $\overline{GR}_B(X')(x_j)=\overline{GR}_B(X)(x_j)$ and $Y^*_{x_j}=Y_{x_j}$. // According to Proposition 8.

  • Step 5: Output the relation matrix $M^*_{(n+1)\times m}$; $\underline{GR}_B(X')(x_i)$, $Y^*_{x_i}$, $x_i\in X'$; $\overline{GR}_B(X')(x_j)$, $Y^*_{x_j}$, $x_j\in U-X$; $\underline{GR}_B(X')(x_i)=0$, $x_i\in U-X$; $\overline{GR}_B(X')(x_j)=1$, $x_j\in X'$.

Algorithm 4.2

(Incremental algorithm for updating approximations when adding an object x to X)

Algorithm 4.2 has a time complexity of O(|U − X| |B|), which is mainly determined by Step 2. In the following Example 5, we use the results of Example 4 to demonstrate how Algorithm 4.2 updates the approximations when an object is added to X.
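Algorithm 4.2 is symmetric to Algorithm 4.1; a sketch under the same assumed data layout (names are ours), computing only the |U − X| grade values that involve the new object:

```python
from fractions import Fraction

def gr(x, y, B):
    # GR_B(x, y): proportion of attributes in B on which x and y agree
    return Fraction(sum(x[b] == y[b] for b in B), len(B))

def add_object_inside(U, X, B, name, attrs, low, up, Y_low, Y_up):
    """Sketch of Algorithm 4.2: a new object joins X (Proposition 8)."""
    U[name] = attrs
    X.add(name)
    out = {y for y in U if y not in X}
    g = {y: gr(attrs, U[y], B) for y in out}      # Step 3: row for the new object
    low[name] = 1 - max(g.values())
    Y_low[name] = {y for y in out if g[y] == max(g.values())}
    for xj in out:                                # Step 4: upper approximation
        if g[xj] > up[xj]:                        # new object is the unique maximizer
            up[xj] = g[xj]
            Y_up[xj] = {name}
        elif g[xj] == up[xj]:                     # ties the current maximum
            Y_up[xj].add(name)
        # otherwise nothing changes
```

As a usage sketch, one can start from the state after Example 4 and insert a hypothetical object (the attribute values of the paper's x8 are given in a table not reproduced here, so a made-up object is used below purely for illustration).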

Example 5

We consider the information system given in Table 3. Let U = {x1,x2,x3,x4,x5,x6,x7} be the universal set, B = {b1,b3,b4,b5} the conditional attribute set, and X = {x2,x3,x6} the decision set. We add an object x8 to X, so that X′ = X ∪ {x8}.

Table 3.

     b1 b3 b4 b5 d
x1   2  1  1  3  U − X
x2   2  1  2  2  X
x3   2  2  1  3  X
x4   1  1  2  2  U − X
x5   1  2  2  3  U − X
x6   1  2  2  2  X
x7   1  2  2  1  U − X

Using the result of Example 4.

$$M_{3\times 4}=\begin{array}{c|cccc} & x_1 & x_4 & x_5 & x_7\\\hline x_2 & 2/4 & 3/4 & 1/4 & 1/4\\ x_3 & 3/4 & 0 & 2/4 & 1/4\\ x_6 & 0 & 3/4 & 3/4 & 3/4\end{array}$$

$\underline{GR}_B(X)(x_2)=\frac14$, $Y_{x_2}=\{x_4\}$; $\underline{GR}_B(X)(x_3)=\frac14$, $Y_{x_3}=\{x_1\}$; $\underline{GR}_B(X)(x_6)=\frac14$, $Y_{x_6}=\{x_4,x_5,x_7\}$; $\underline{GR}_B(X)(x_i)=0$, $x_i\in\{x_1,x_4,x_5,x_7\}$.

$\overline{GR}_B(X)(x_1)=\frac34$, $Y_{x_1}=\{x_3\}$; $\overline{GR}_B(X)(x_4)=\frac34$, $Y_{x_4}=\{x_2,x_6\}$; $\overline{GR}_B(X)(x_5)=\frac34$, $Y_{x_5}=\{x_6\}$; $\overline{GR}_B(X)(x_7)=\frac34$, $Y_{x_7}=\{x_6\}$; $\overline{GR}_B(X)(x_j)=1$, $x_j\in\{x_2,x_3,x_6\}$.

From Step 2 of Algorithm 4.2 we have the new relation matrix $M^*_{4\times4}$:

$$M^*_{4\times 4}=\begin{array}{c|cccc} & x_1 & x_4 & x_5 & x_7\\\hline x_2 & 2/4 & 3/4 & 1/4 & 1/4\\ x_3 & 3/4 & 0 & 2/4 & 1/4\\ x_6 & 0 & 3/4 & 3/4 & 3/4\\ x_8 & 2/4 & 3/4 & 1/4 & 1/4\end{array}$$

The lower and upper approximations of X = {x2,x3,x6,x8} by adding x8 to X are updated as follows.

From Step 3 of Algorithm 4.2 we have the lower approximations:

  • GR¯B(X)(x8)=∧y∈{x1,x4,x5,x7}(1−GRB(x8,y))=1/4, and Yx8*={y ∈ {x1,x4,x5,x7} ; GR¯B(X)(x8)=1−GRB(x8,y)}={x4}.

  • GR¯B(X)(xi)=GR¯B(X)(xi), Yxi*=Yxi, xi ∈ {x2,x3,x6}.

  • GR¯B(X)(xi)=0, xi ∈ {x1,x4,x5,x7}.

From Step 4 of Algorithm 4.2 we have the upper approximations:

  • GR¯B(X)(x1): Since GRB(x1,x8)<GR¯B(X)(x1), then GR¯B(X)(x1)=GR¯B(X)(x1)=3/4 and Yx1*=Yx1={x3}.

  • GR¯B(X)(x4): Since GRB(x4,x8)=GR¯B(X)(x4), then GR¯B(X)(x4)=GR¯B(X)(x4)=3/4 and Yx4*=Yx4∪{x8}={x2,x6,x8}.

  • GR¯B(X)(x5): Since GRB(x5,x8)<GR¯B(X)(x5), then GR¯B(X)(x5)=GR¯B(X)(x5)=3/4 and Yx5*=Yx5={x6}.

  • GR¯B(X)(x7): Since GRB(x7,x8)<GR¯B(X)(x7), then GR¯B(X)(x7)=GR¯B(X)(x7)=3/4 and Yx7*=Yx7={x6}.

  • GR¯B(X)(xj)=1, xj ∈ {x2,x3,x6,x8}.

4.2. Incrementally updating approximations when removing an object

Proposition 9

Let S = (U,A,V,F) be an information system, B ⊆ A, x ∈ U and x ∉ X, X ⊆ U, U = U − {x}. The lower and upper approximations of X after removing x from U can be updated as follows.

  • Lower approximation: If xi ∈ U−X, then GR¯B(X)(xi)=GR¯B(X)(xi)=0. If xi ∈ X, then we have:

    • If GR¯B(X)(xi)=1−GRB(xi,x), then GR¯B(X)(xi)=∧y∈U−X(1−GRB(xi,y)) and Yxi*={y ∈ U−X ; GR¯B(X)(xi)=1−GRB(xi,y)}.

    • If ∃y ∈ Yxi with y ≠ x, such that GR¯B(X)(xi)=1−GRB(xi,y), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x}.

  • Upper approximation: GR¯B(X)(xj)=1, xj ∈ X, and GR¯B(X)(xj)=GR¯B(X)(xj), Yxj*=Yxj, xj ∈ U−X.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X.

  • Step 2: We get a new relation matrix Mn×(m−1)*. // Delete GRB(xi,x), xi ∈ X, from the relation matrix Mn×m.

  • Step 3: Calculate the lower approximations for each xi ∈ X:

    If ∃y ∈ Yxi with y ≠ x, such that GR¯B(X)(xi)=1−GRB(xi,y), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x}.

    Else GR¯B(X)(xi)=∧y∈U−X(1−GRB(xi,y)) and Yxi*={y ∈ U−X ; GR¯B(X)(xi)=1−GRB(xi,y)}. // According to Proposition 9.

  • Step 4: Calculate the upper approximations for each xj ∈ U−X:

    GR¯B(X)(xj)=GR¯B(X)(xj), Yxj*=Yxj. // According to Proposition 9.

  • Step 5: Output the relation matrix Mn×(m−1)*; GR¯B(X)(xi), Yxi*, xi ∈ X; GR¯B(X)(xj), Yxj*, xj ∈ U−X; GR¯B(X)(xi)=0, xi ∈ U−X, and GR¯B(X)(xj)=1, xj ∈ X.

Algorithm 4.3

(Incremental algorithm for updating approximations when removing an object x from U−X)

Algorithm 4.3 has a time complexity of O(|X|), which is mainly determined by Step 2. In Example 6 we use the results of Example 5 to demonstrate how Algorithm 4.3 updates the approximations when an object is removed from U−X.
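Under the same assumed dict-of-dicts layout as the earlier sketch (M[xi][y] = GRB(xi,y), rows indexed by X, columns by U−X), Algorithm 4.3 reduces to witness-set bookkeeping:

```python
from fractions import Fraction

def remove_from_complement(M, X, comp, lower, Ylo, x):
    """Algorithm 4.3 sketch: object x (x in U - X) leaves the universe.
    comp is U - X; the upper approximations on U - X are untouched
    (Proposition 9), so only lower/Ylo are passed in."""
    comp.remove(x)
    for xi in X:
        if Ylo[xi] - {x}:   # the old minimum is still witnessed by some y != x
            Ylo[xi] -= {x}
        else:               # x was the only witness: recompute over the new U - X
            lower[xi] = min(1 - M[xi][y] for y in comp)
            Ylo[xi] = {y for y in comp if 1 - M[xi][y] == lower[xi]}
        del M[xi][x]        # Step 2: drop the column of x
```

On the data of Example 6, only x2 and x8 need a fresh minimum; x3 and x6 keep their degrees.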

Example 6

We consider the information system given in Table 4. Let U = {x1,x2,x3,x4,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x3,x6,x8} be the decision set. Removing x4 ∈ Ψ from U−X.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x3 2 2 1 3 X
x4 1 1 2 2 UX
x5 1 2 2 3 UX
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 X

Using the result of Example 5.

M4×4 (rows x2, x3, x6, x8; columns x1, x4, x5, x7):

x2: 2/4 3/4 1/4 1/4
x3: 3/4 0 2/4 1/4
x6: 0 3/4 3/4 3/4
x8: 2/4 3/4 1/4 1/4

GR¯B(X)(x2)=1/4, Yx2={x4}; GR¯B(X)(x3)=1/4, Yx3={x1}; GR¯B(X)(x6)=1/4, Yx6={x4,x5,x7}; GR¯B(X)(x8)=1/4, Yx8={x4}; GR¯B(X)(xi)=0, xi ∈ {x1,x4,x5,x7}.

GR¯B(X)(x1)=3/4, Yx1={x3}; GR¯B(X)(x4)=3/4, Yx4={x2,x6,x8}; GR¯B(X)(x5)=3/4, Yx5={x6}; GR¯B(X)(x7)=3/4, Yx7={x6}; GR¯B(X)(xj)=1, xj ∈ {x2,x3,x6,x8}.

From Step 2 of Algorithm 4.3 we obtain the new relation matrix M4×3*:

M4×3* (rows x2, x3, x6, x8; columns x1, x5, x7):

x2: 2/4 1/4 1/4
x3: 3/4 2/4 1/4
x6: 0 3/4 3/4
x8: 2/4 1/4 1/4

The lower and upper approximations of X = {x2,x3,x6,x8} after removing x4 from U−X are updated as follows:

From Step 3 of Algorithm 4.3 we have the lower approximations:

  • GR¯B(X)(x2): Since GR¯B(X)(x2)=1−GRB(x2,x4), then GR¯B(X)(x2)=∧y∈{x1,x5,x7}(1−GRB(x2,y))=2/4 and Yx2*={y ∈ {x1,x5,x7} ; GR¯B(X)(x2)=1−GRB(x2,y)}={x1}.

  • GR¯B(X)(x3): Since Yx3={x1} and GR¯B(X)(x3)=1−GRB(x3,x1), then GR¯B(X)(x3)=GR¯B(X)(x3)=1/4 and Yx3*=Yx3−{x4}={x1}.

  • GR¯B(X)(x6): Since x5 ∈ Yx6, such that GR¯B(X)(x6)=1−GRB(x6,x5), then GR¯B(X)(x6)=GR¯B(X)(x6)=1/4 and Yx6*=Yx6−{x4}={x5,x7}.

  • GR¯B(X)(x8): Since GR¯B(X)(x8)=1−GRB(x8,x4), then GR¯B(X)(x8)=∧y∈{x1,x5,x7}(1−GRB(x8,y))=2/4 and Yx8*={y ∈ {x1,x5,x7} ; GR¯B(X)(x8)=1−GRB(x8,y)}={x1}.

  • GR¯B(X)(xi)=GR¯B(X)(xi)=0, xi ∈ {x1,x5,x7}.

From Step 4 of Algorithm 4.3 we have the upper approximations:

  • GR¯B(X)(xj)=GR¯B(X)(xj), Yxj*=Yxj, xj ∈ {x1,x5,x7}, and GR¯B(X)(xj)=1, xj ∈ {x2,x3,x6,x8}.

Proposition 10

Let S = (U,A,V,F) be an information system, B ⊆ A, U = U − {x}, x ∈ X, X ⊆ U. The lower and upper approximations of X = X − {x} after removing x from X can be updated as follows.

  • Lower approximation: GR¯B(X)(xi)=0, xi ∈ U−X, and GR¯B(X)(xi)=GR¯B(X)(xi), Yxi*=Yxi, xi ∈ X.

  • Upper approximation: If xj ∈ X, then GR¯B(X)(xj)=GR¯B(X)(xj)=1. If xj ∈ U−X, we have:

    • If GR¯B(X)(xj)=GRB(xj,x), then GR¯B(X)(xj)=∨y∈X GRB(xj,y) and Yxj*={y ∈ X ; GR¯B(X)(xj)=GRB(xj,y)}.

    • If ∃y ∈ Yxj with y ≠ x, such that GR¯B(X)(xj)=GRB(xj,y), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x}.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X.

  • Step 2: We get a new relation matrix M(n−1)×m*. // Delete GRB(x,xj), xj ∈ U−X, from the relation matrix Mn×m.

  • Step 3: Calculate the lower approximations for each xi ∈ X:

    • GR¯B(X)(xi)=GR¯B(X)(xi), Yxi*=Yxi. // According to Proposition 10.

  • Step 4: Calculate the upper approximations for each xj ∈ U−X:

    If ∃y ∈ Yxj with y ≠ x, such that GR¯B(X)(xj)=GRB(xj,y), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x}.

    Else GR¯B(X)(xj)=∨y∈X GRB(xj,y) and Yxj*={y ∈ X ; GR¯B(X)(xj)=GRB(xj,y)}. // According to Proposition 10.

  • Step 5: Output the relation matrix M(n−1)×m*; GR¯B(X)(xi), Yxi*, xi ∈ X; GR¯B(X)(xj), Yxj*, xj ∈ U−X; GR¯B(X)(xi)=0, xi ∈ U−X, and GR¯B(X)(xj)=GR¯B(X)(xj)=1, xj ∈ X.

Algorithm 4.4

(Incremental algorithm for updating approximations when removing an object x from X)

Algorithm 4.4 has a time complexity of O(|U−X|), which is mainly determined by Step 2. In Example 7 we use the results of Example 6 to demonstrate how Algorithm 4.4 updates the approximations when an object is removed from X.
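The mirror-image update, removing a member of X, can be sketched in the same assumed representation; here it is the upper approximations on U−X that may need a fresh maximum:

```python
from fractions import Fraction

def remove_from_X(M, X, comp, upper, Yup, x):
    """Algorithm 4.4 sketch: object x (x in X) leaves the universe.
    comp is U - X; lower approximations on X are untouched
    (Proposition 10), so only upper/Yup are passed in."""
    X.remove(x)
    del M[x]                # Step 2: drop the row of x
    for xj in comp:
        if Yup[xj] - {x}:   # the old maximum is still witnessed by some y != x
            Yup[xj] -= {x}
        else:               # x was the only witness: recompute over the new X
            upper[xj] = max(M[xi][xj] for xi in X)
            Yup[xj] = {xi for xi in X if M[xi][xj] == upper[xj]}
```

On the data of Example 7, only x1 (whose sole witness was the removed x3) is recomputed.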

Example 7

We consider the information system given in Table 5. Let U = {x1,x2,x3,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x3,x6,x8} be the decision set. Removing x3 ∈ Φ from X, X = X − {x3}.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x3 2 2 1 3 X
x5 1 2 2 3 UX
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 X

Using the result of Example 6.

M4×3 (rows x2, x3, x6, x8; columns x1, x5, x7):

x2: 2/4 1/4 1/4
x3: 3/4 2/4 1/4
x6: 0 3/4 3/4
x8: 2/4 1/4 1/4

GR¯B(X)(x2)=2/4, Yx2={x1}; GR¯B(X)(x3)=1/4, Yx3={x1}; GR¯B(X)(x6)=1/4, Yx6={x5,x7}; GR¯B(X)(x8)=2/4, Yx8={x1}; GR¯B(X)(xi)=0, xi ∈ {x1,x5,x7}.

GR¯B(X)(x1)=3/4, Yx1={x3}; GR¯B(X)(x5)=3/4, Yx5={x6}; GR¯B(X)(x7)=3/4, Yx7={x6}; GR¯B(X)(xj)=1, xj ∈ {x2,x3,x6,x8}.

From Step 2 of Algorithm 4.4 we obtain the new relation matrix M3×3*:

M3×3* (rows x2, x6, x8; columns x1, x5, x7):

x2: 2/4 1/4 1/4
x6: 0 3/4 3/4
x8: 2/4 1/4 1/4

The lower and upper approximations of X = {x2,x6,x8} after removing x3 from X are updated as follows:

From Step 3 of Algorithm 4.4 we have the lower approximations:

  • GR¯B(X)(xi)=GR¯B(X)(xi), Yxi*=Yxi, xi ∈ {x2,x6,x8}, and GR¯B(X)(xi)=0, xi ∈ {x1,x5,x7}.

From Step 4 of Algorithm 4.4 we have the upper approximations:

  • GR¯B(X)(x1): Since GR¯B(X)(x1)=GRB(x1,x3), then GR¯B(X)(x1)=∨y∈{x2,x6,x8}GRB(x1,y)=2/4 and Yx1*={y ∈ {x2,x6,x8} ; GR¯B(X)(x1)=GRB(x1,y)}={x2,x8}.

  • GR¯B(X)(x5): Since Yx5={x6} and GR¯B(X)(x5)=GRB(x5,x6) with x6 ≠ x3, then GR¯B(X)(x5)=GR¯B(X)(x5)=3/4 and Yx5*=Yx5−{x3}={x6}.

  • GR¯B(X)(x7): Since Yx7={x6} and GR¯B(X)(x7)=GRB(x7,x6) with x6 ≠ x3, then GR¯B(X)(x7)=GR¯B(X)(x7)=3/4 and Yx7*=Yx7−{x3}={x6}.

  • GR¯B(X)(xj)=GR¯B(X)(xj)=1, xj ∈ {x2,x6,x8}.

5. Incrementally updating approximations when changing the attribute value of the object

In practical situations, the attribute values of objects are also likely to change. In this section, we discuss methods for incrementally updating approximations based on the grade indiscernibility relation when the decision attribute value or a conditional attribute value of an object changes.

5.1. Incrementally updating approximations when changing the decision attribute value of the object

Proposition 11

Let S = (U,A,V,F) be an information system, B ⊆ A, X ⊆ U, x ∈ U and x ∉ X. The lower and upper approximations of X = X ∪ {x} after changing the decision attribute value of x from U−X to X can be updated as follows.

  • Lower approximation: If xi ∈ U−X, we have GR¯B(X)(xi)=GR¯B(X)(xi)=0. If xi ∈ X, we have:

    • If ∃y ∈ Yxi with y ≠ x, such that GR¯B(X)(xi)=1−GRB(xi,y), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x}.

    • If GR¯B(X)(xi)=1−GRB(xi,x) or xi = x, then GR¯B(X)(xi)=∧y∈U−X(1−GRB(xi,y)) and Yxi*={y ∈ U−X ; GR¯B(X)(xi)=1−GRB(xi,y)}.

  • Upper approximation: If xj ∈ X, we have GR¯B(X)(xj)=1. If xj ∈ U−X, we have:

    • If GRB(xj,x)>GR¯B(X)(xj), then GR¯B(X)(xj)=GRB(xj,x) and Yxj*={x}.

    • If GRB(xj,x)=GR¯B(X)(xj), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj∪{x}.

    • If GRB(xj,x)<GR¯B(X)(xj), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X.

  • Step 2: We get a new relation matrix M(n+1)×(m−1)*. // Delete GRB(xi,x), xi ∈ X, from the relation matrix Mn×m and calculate GRB(x,xj), xj ∈ U−X.

  • Step 3: Calculate the lower approximations for each xi ∈ X ∪ {x}:

    If ∃y ∈ Yxi with y ≠ x, such that GR¯B(X)(xi)=1−GRB(xi,y), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x}.

    Else (in particular for xi = x) GR¯B(X)(xi)=∧y∈U−X(1−GRB(xi,y)) and Yxi*={y ∈ U−X ; GR¯B(X)(xi)=1−GRB(xi,y)}. // According to Proposition 11.

  • Step 4: Calculate the upper approximations for each xj ∈ U−X:

    • If GRB(xj,x)>GR¯B(X)(xj), then GR¯B(X)(xj)=GRB(xj,x) and Yxj*={x}.

    • If GRB(xj,x)=GR¯B(X)(xj), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj∪{x}.

    • If GRB(xj,x)<GR¯B(X)(xj), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj. // According to Proposition 11.

  • Step 5: Output the relation matrix M(n+1)×(m−1)*; GR¯B(X)(xi), Yxi*, xi ∈ X ∪ {x}; GR¯B(X)(xj), Yxj*, xj ∈ U−X; GR¯B(X)(xi)=0, xi ∈ U−X, and GR¯B(X)(xj)=1, xj ∈ X ∪ {x}.

Algorithm 5.1

(Incremental algorithm for updating approximations when changing the decision attribute value of x from x ∈ U−X to x ∈ X)

Algorithm 5.1 has a time complexity of O(|U−X| |B|), which is mainly determined by Step 2. In Example 8 we use the results of Example 7 to demonstrate how Algorithm 5.1 updates the approximations when the decision value of an object changes from U−X to X.
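A sketch of Algorithm 5.1 under the same assumed layout; the `grade` helper recomputes GRB from raw attribute tuples, and the function name and parameter ordering are our own:

```python
from fractions import Fraction

def move_to_X(f, B, M, X, comp, lower, Ylo, upper, Yup, x):
    """Algorithm 5.1 sketch: the decision value of x flips from U - X to X.
    Step 2 deletes the column of x and computes a new row for x."""
    grade = lambda u, v: Fraction(sum(f[u][b] == f[v][b] for b in B), len(B))
    comp.remove(x)
    # Step 3: lower approximations of the old members of X.
    for xi in X:
        if Ylo[xi] - {x}:   # the old minimum is still witnessed by some y != x
            Ylo[xi] -= {x}
        else:               # recompute the minimum over the new U - X
            lower[xi] = min(1 - M[xi][y] for y in comp)
            Ylo[xi] = {y for y in comp if 1 - M[xi][y] == lower[xi]}
        del M[xi][x]
    # The new row of the matrix and the lower approximation of x itself.
    M[x] = {y: grade(x, y) for y in comp}
    lower[x] = min(1 - g for g in M[x].values())
    Ylo[x] = {y for y, g in M[x].items() if 1 - g == lower[x]}
    # Step 4: upper approximations on the remaining U - X.
    for xj in comp:
        if M[x][xj] > upper[xj]:
            upper[xj], Yup[xj] = M[x][xj], {x}
        elif M[x][xj] == upper[xj]:
            Yup[xj] |= {x}
    del upper[x], Yup[x]    # x no longer belongs to U - X
    X.add(x)
```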

Example 8

We consider the information system given in Table 6. Let U = {x1,x2,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x6,x8} be the decision set. Changing the decision attribute value of x5 from U−X to X, X = X ∪ {x5}.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x5 1 2 2 3 UX
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 X

Using the result of Example 7.

M3×3 (rows x2, x6, x8; columns x1, x5, x7):

x2: 2/4 1/4 1/4
x6: 0 3/4 3/4
x8: 2/4 1/4 1/4

GR¯B(X)(x2)=2/4, Yx2={x1}; GR¯B(X)(x6)=1/4, Yx6={x5,x7}; GR¯B(X)(x8)=2/4, Yx8={x1}; GR¯B(X)(xi)=0, xi ∈ {x1,x5,x7}.

GR¯B(X)(x1)=2/4, Yx1={x2,x8}; GR¯B(X)(x5)=3/4, Yx5={x6}; GR¯B(X)(x7)=3/4, Yx7={x6}; GR¯B(X)(xj)=1, xj ∈ {x2,x6,x8}.

From Step 2 of Algorithm 5.1 we obtain the new relation matrix M4×2*:

M4×2* (rows x2, x6, x8, x5; columns x1, x7):

x2: 2/4 1/4
x6: 0 3/4
x8: 2/4 1/4
x5: 1/4 3/4

The lower and upper approximations of X = {x2,x5,x6,x8} after changing the decision attribute value of x5 from U−X to X are updated as follows:

From Step 3 of Algorithm 5.1 we have the lower approximations:

  • GR¯B(X)(x2): Since Yx2={x1} and GR¯B(X)(x2)=1−GRB(x2,x1), then GR¯B(X)(x2)=GR¯B(X)(x2)=2/4 and Yx2*=Yx2−{x5}={x1}.

  • GR¯B(X)(x6): Since x7 ∈ Yx6, such that GR¯B(X)(x6)=1−GRB(x6,x7), then GR¯B(X)(x6)=GR¯B(X)(x6)=1/4 and Yx6*=Yx6−{x5}={x7}.

  • GR¯B(X)(x8): Since Yx8={x1} and GR¯B(X)(x8)=1−GRB(x8,x1), then GR¯B(X)(x8)=GR¯B(X)(x8)=2/4 and Yx8*=Yx8−{x5}={x1}.

  • GR¯B(X)(x5): GR¯B(X)(x5)=∧y∈{x1,x7}(1−GRB(x5,y))=1/4 and Yx5*={y ∈ {x1,x7} ; GR¯B(X)(x5)=1−GRB(x5,y)}={x7}.

  • GR¯B(X)(xi)=GR¯B(X)(xi)=0, xi ∈ {x1,x7}.

From Step 4 of Algorithm 5.1 we have the upper approximations:

  • GR¯B(X)(x1): Since GRB(x1,x5)<GR¯B(X)(x1), then GR¯B(X)(x1)=GR¯B(X)(x1)=2/4 and Yx1*=Yx1={x2,x8}.

  • GR¯B(X)(x7): Since GRB(x7,x5)=GR¯B(X)(x7), then GR¯B(X)(x7)=GR¯B(X)(x7)=3/4 and Yx7*=Yx7∪{x5}={x5,x6}.

  • GR¯B(X)(xj)=GR¯B(X)(xj)=1, xj ∈ {x2,x5,x6,x8}.

Proposition 12

Let S = (U,A,V,F) be an information system, B ⊆ A, X ⊆ U, x ∈ X. The lower and upper approximations of X = X − {x} after changing the decision attribute value of x from X to U−X can be updated as follows.

  • Lower approximation: If xi ∈ U−X, we have GR¯B(X)(xi)=0. If xi ∈ X, we have:

    • If 1−GRB(xi,x)>GR¯B(X)(xi), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi.

    • If 1−GRB(xi,x)=GR¯B(X)(xi), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi∪{x}.

    • If 1−GRB(xi,x)<GR¯B(X)(xi), then GR¯B(X)(xi)=1−GRB(xi,x) and Yxi*={x}.

  • Upper approximation: If xj ∈ X, we have GR¯B(X)(xj)=GR¯B(X)(xj)=1. If xj ∈ U−X, we have:

    • If ∃y ∈ Yxj with y ≠ x, such that GR¯B(X)(xj)=GRB(xj,y), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x}.

    • If GR¯B(X)(xj)=GRB(xj,x) or xj = x, then GR¯B(X)(xj)=∨y∈X GRB(xj,y) and Yxj*={y ∈ X ; GR¯B(X)(xj)=GRB(xj,y)}.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X.

  • Step 2: We get a new relation matrix M(n−1)×(m+1)*. // Delete GRB(x,xj), xj ∈ U−X, from the relation matrix Mn×m and calculate GRB(xi,x), xi ∈ X.

  • Step 3: Calculate the lower approximations for each xi ∈ X:

    • If 1−GRB(xi,x)>GR¯B(X)(xi), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi;

    • If 1−GRB(xi,x)=GR¯B(X)(xi), then GR¯B(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi∪{x};

    • If 1−GRB(xi,x)<GR¯B(X)(xi), then GR¯B(X)(xi)=1−GRB(xi,x) and Yxi*={x}. // According to Proposition 12.

  • Step 4: Calculate the upper approximations for each xj ∈ (U−X) ∪ {x}:

    If ∃y ∈ Yxj with y ≠ x, such that GR¯B(X)(xj)=GRB(xj,y), then GR¯B(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x};

    Else (in particular for xj = x) GR¯B(X)(xj)=∨y∈X GRB(xj,y) and Yxj*={y ∈ X ; GR¯B(X)(xj)=GRB(xj,y)}. // According to Proposition 12.

  • Step 5: Output the relation matrix M(n−1)×(m+1)*; GR¯B(X)(xi), Yxi*, xi ∈ X; GR¯B(X)(xj) and Yxj*, xj ∈ (U−X) ∪ {x}; GR¯B(X)(xi)=0, xi ∈ (U−X) ∪ {x}, and GR¯B(X)(xj)=1, xj ∈ X.

Algorithm 5.2

(Incremental algorithm for updating approximations when changing the decision attribute value of x from x ∈ X to x ∈ U−X)

Algorithm 5.2 has a time complexity of O(|X| |B|), which is mainly determined by Step 2. In Example 9 we use the results of Example 8 to demonstrate how Algorithm 5.2 updates the approximations when the decision attribute value of an object changes from X to U−X.
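The converse move can be sketched the same way, again under our assumed representation (dict-of-dicts matrix, `Fraction` grades):

```python
from fractions import Fraction

def move_to_complement(f, B, M, X, comp, lower, Ylo, upper, Yup, x):
    """Algorithm 5.2 sketch: the decision value of x flips from X to U - X.
    Step 2 deletes the row of x and computes a new column for x."""
    grade = lambda u, v: Fraction(sum(f[u][b] == f[v][b] for b in B), len(B))
    X.remove(x)
    del M[x], lower[x], Ylo[x]
    # Step 3: lower approximations on X against the new column entry.
    for xi in X:
        M[xi][x] = grade(xi, x)
        g = 1 - M[xi][x]
        if g < lower[xi]:
            lower[xi], Ylo[xi] = g, {x}
        elif g == lower[xi]:
            Ylo[xi] |= {x}
    # Step 4: upper approximations on the old U - X ...
    for xj in comp:
        if Yup[xj] - {x}:   # the old maximum is still witnessed by some y != x
            Yup[xj] -= {x}
        else:               # x was the only witness: recompute over the new X
            upper[xj] = max(M[xi][xj] for xi in X)
            Yup[xj] = {xi for xi in X if M[xi][xj] == upper[xj]}
    # ... and a fresh maximum for x itself.
    comp.add(x)
    upper[x] = max(M[xi][x] for xi in X)
    Yup[x] = {xi for xi in X if M[xi][x] == upper[x]}
```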

Example 9

We consider the information system given in Table 7. Let U = {x1,x2,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x5,x6,x8} be the decision set. Changing the decision attribute value of x8 from X to U−X, X = {x2,x5,x6}.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x5 1 2 2 3 X
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 X

Using the result of Example 8.

M4×2 (rows x2, x6, x8, x5; columns x1, x7):

x2: 2/4 1/4
x6: 0 3/4
x8: 2/4 1/4
x5: 1/4 3/4

GR¯B(X)(x2)=2/4, Yx2={x1}; GR¯B(X)(x6)=1/4, Yx6={x7}; GR¯B(X)(x8)=2/4, Yx8={x1}; GR¯B(X)(x5)=1/4, Yx5={x7}; GR¯B(X)(xi)=0, xi ∈ {x1,x7}.

GR¯B(X)(x1)=2/4, Yx1={x2,x8}; GR¯B(X)(x7)=3/4, Yx7={x5,x6}; GR¯B(X)(xj)=1, xj ∈ {x2,x5,x6,x8}.

From Step 2 of Algorithm 5.2 we obtain the new relation matrix M3×3*:

M3×3* (rows x2, x6, x5; columns x1, x7, x8):

x2: 2/4 1/4 4/4
x6: 0 3/4 2/4
x5: 1/4 3/4 1/4

The lower and upper approximations of X = {x2,x5,x6} after changing the decision attribute value of x8 from X to U−X are updated as follows:

From Step 3 of Algorithm 5.2 we have the lower approximations:

  • GR¯B(X)(x2): Since 1−GRB(x2,x8)<GR¯B(X)(x2), then GR¯B(X)(x2)=1−GRB(x2,x8)=0 and Yx2*={x8}.

  • GR¯B(X)(x6): Since 1−GRB(x6,x8)>GR¯B(X)(x6), then GR¯B(X)(x6)=GR¯B(X)(x6)=1/4 and Yx6*=Yx6={x7}.

  • GR¯B(X)(x5): Since 1−GRB(x5,x8)>GR¯B(X)(x5), then GR¯B(X)(x5)=GR¯B(X)(x5)=1/4 and Yx5*=Yx5={x7}.

  • GR¯B(X)(xi)=GR¯B(X)(xi)=0, xi ∈ {x1,x7,x8}.

From Step 4 of Algorithm 5.2 we have the upper approximations:

  • GR¯B(X)(x1): Since x2 ∈ Yx1, such that GR¯B(X)(x1)=GRB(x1,x2), then GR¯B(X)(x1)=GR¯B(X)(x1)=2/4 and Yx1*=Yx1−{x8}={x2}.

  • GR¯B(X)(x7): Since x6 ∈ Yx7, such that GR¯B(X)(x7)=GRB(x7,x6), then GR¯B(X)(x7)=GR¯B(X)(x7)=3/4 and Yx7*=Yx7−{x8}={x5,x6}.

  • GR¯B(X)(x8): GR¯B(X)(x8)=∨y∈{x2,x5,x6}GRB(x8,y)=4/4 and Yx8*={y ∈ {x2,x5,x6} ; GR¯B(X)(x8)=GRB(x8,y)}={x2}.

  • GR¯B(X)(xj)=GR¯B(X)(xj)=1, xj ∈ {x2,x5,x6}.

5.2. Incrementally updating approximations when changing the conditional attribute value of the object

Proposition 13

Let S = (U,A,V,F) be an information system, B ⊆ A, b ∈ B, and suppose the conditional attribute value f(x*,b) is changed to f*(x*,b), f*(x*,b) ≠ f(x*,b), x* ∈ U. For any y ∈ U,

GRB*(x*,y) = (|B|·GRB(x*,y)+1)/|B|, if f*(x*,b) = f(y,b);
GRB*(x*,y) = (|B|·GRB(x*,y)−1)/|B|, if f*(x*,b) ≠ f(y,b) and f(x*,b) = f(y,b);
GRB*(x*,y) = GRB(x*,y), if f*(x*,b) ≠ f(y,b) and f(x*,b) ≠ f(y,b).
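Proposition 13 amounts to a one-line adjustment per grade. A sketch (the function name and boolean parameters are ours; they record whether x* and y agree on the changed attribute b before and after the change):

```python
from fractions import Fraction

def updated_grade(g, b_size, old_equal, new_equal):
    """Proposition 13 sketch: adjust GR_B(x*, y) after f(x*, b) changes.
    g is the old grade, b_size is |B|; the grade moves by at most 1/|B|."""
    if new_equal and not old_equal:
        return g + Fraction(1, b_size)   # x* now agrees with y on b
    if old_equal and not new_equal:
        return g - Fraction(1, b_size)   # x* no longer agrees with y on b
    return g                             # disagreement before and after
```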

Proposition 14

Let S = (U,A,V,F) be an information system, B ⊆ A, X ⊆ U, x* ∈ X. The lower and upper approximations of X after changing the conditional attribute value f(x*,b) of x* to f*(x*,b) (f(x*,b) ≠ f*(x*,b)) can be updated as follows.

  • Lower approximation: If xi ∈ U−X, then GR¯B*(X)(xi)=0. If xi ∈ X, we have:

    • If xi ∈ X − {x*}, then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi.

    • If xi = x*, then GR¯B*(X)(x*)=∧y∈U−X(1−GRB*(x*,y)) and Yx**={y ∈ U−X ; GR¯B*(X)(x*)=1−GRB*(x*,y)}.

  • Upper approximation: If xj ∈ X, then GR¯B*(X)(xj)=1. If xj ∈ U−X, we have:

    • If GR¯B(X)(xj)=GRB(xj,x*), then GR¯B*(X)(xj)=∨y∈X GRB*(xj,y) and Yxj*={y ∈ X ; GR¯B*(X)(xj)=GRB*(xj,y)}.

    • If ∃y ∈ Yxj with y ≠ x*, such that GR¯B(X)(xj)=GRB(xj,y), we have:

      • If GR¯B(X)(xj)>GRB*(xj,x*), then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x*}.

      • If GR¯B(X)(xj)=GRB*(xj,x*), then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj∪{x*}.

      • If GR¯B(X)(xj)<GRB*(xj,x*), then GR¯B*(X)(xj)=GRB*(xj,x*) and Yxj*={x*}.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X; the changed attribute value f*(x*,b) of x*.

  • Step 2: We get a new relation matrix Mn×m* by changing the values GRB(x*,y) of the relation matrix Mn×m to GRB*(x*,y), y ∈ U−X. // According to Proposition 13.

  • Step 3: Calculate the lower approximations for each xi ∈ X:

    • If xi ∈ X − {x*}, then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi.

    • If xi = x*, then GR¯B*(X)(x*)=∧y∈U−X(1−GRB*(x*,y)) and Yx**={y ∈ U−X ; GR¯B*(X)(x*)=1−GRB*(x*,y)}. // According to Proposition 14.

  • Step 4: Calculate the upper approximations for each xj ∈ U−X:

    If ∃y ∈ Yxj with y ≠ x*, such that GR¯B(X)(xj)=GRB(xj,y), we have:

      • If GR¯B(X)(xj)>GRB*(xj,x*), then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj−{x*}.

      • If GR¯B(X)(xj)=GRB*(xj,x*), then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj∪{x*}.

      • If GR¯B(X)(xj)<GRB*(xj,x*), then GR¯B*(X)(xj)=GRB*(xj,x*) and Yxj*={x*}.

    Else GR¯B*(X)(xj)=∨y∈X GRB*(xj,y) and Yxj*={y ∈ X ; GR¯B*(X)(xj)=GRB*(xj,y)}. // According to Proposition 14.

  • Step 5: Output the relation matrix Mn×m*; GR¯B*(X)(xi), Yxi*, xi ∈ X; GR¯B*(X)(xj), Yxj*, xj ∈ U−X; GR¯B*(X)(xi)=0, xi ∈ U−X, and GR¯B*(X)(xj)=1, xj ∈ X.

Algorithm 5.3

(Incremental algorithm for updating approximations when changing the conditional attribute value f(x*,b) to f*(x*,b), x* ∈ X)

Algorithm 5.3 has a time complexity of O(|U−X| |B|), which is mainly determined by Step 2. In Example 10 we use the results of Example 9 to demonstrate how Algorithm 5.3 updates the approximations when the attribute value f(x*,b) of an object x* ∈ X is changed to f*(x*,b).
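A sketch of Algorithm 5.3 in the same assumed representation; `new_row` is the row of updated grades GRB*(x*,y) that the caller has already produced via Proposition 13:

```python
from fractions import Fraction

def change_value_in_X(M, X, comp, lower, Ylo, upper, Yup, xs, new_row):
    """Algorithm 5.3 sketch: a conditional attribute value of xs (xs in X)
    changes; comp is U - X."""
    M[xs] = dict(new_row)                       # Step 2: replace xs's row
    # Step 3: only xs's own lower approximation can move (Proposition 14).
    lower[xs] = min(1 - g for g in M[xs].values())
    Ylo[xs] = {y for y, g in M[xs].items() if 1 - g == lower[xs]}
    # Step 4: upper approximations on U - X.
    for xj in comp:
        if Yup[xj] - {xs}:                      # old maximum witnessed elsewhere
            if M[xs][xj] > upper[xj]:
                upper[xj], Yup[xj] = M[xs][xj], {xs}
            elif M[xs][xj] == upper[xj]:
                Yup[xj] |= {xs}
            else:
                Yup[xj] -= {xs}
        else:                                   # xs was the only witness
            upper[xj] = max(M[xi][xj] for xi in X)
            Yup[xj] = {xi for xi in X if M[xi][xj] == upper[xj]}
```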

Example 10

We consider the information system given in Table 8. Let U = {x1,x2,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x5,x6} be the decision set. Changing f(x5,b3) = 2 to f*(x5,b3) = 1.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x5 1 2 2 3 X
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 UX

Using the result of Example 9.

M3×3 (rows x2, x6, x5; columns x1, x7, x8):

x2: 2/4 1/4 4/4
x6: 0 3/4 2/4
x5: 1/4 3/4 1/4

GR¯B(X)(x2)=0, Yx2={x8}; GR¯B(X)(x6)=1/4, Yx6={x7}; GR¯B(X)(x5)=1/4, Yx5={x7}; GR¯B(X)(xi)=0, xi ∈ {x1,x7,x8}.

GR¯B(X)(x1)=2/4, Yx1={x2}; GR¯B(X)(x7)=3/4, Yx7={x5,x6}; GR¯B(X)(x8)=4/4, Yx8={x2}; GR¯B(X)(xj)=1, xj ∈ {x2,x5,x6}.

From Step 2 of Algorithm 5.3 we obtain the new relation matrix M3×3*:

M3×3* (rows x2, x6, x5; columns x1, x7, x8):

x2: 2/4 1/4 4/4
x6: 0 3/4 2/4
x5: 2/4 2/4 2/4

The lower and upper approximations of X = {x2,x5,x6} by changing f(x5,b3) to f*(x5,b3) are updated as follows.

From Step 3 of Algorithm 5.3 we have the lower approximations:

  • GR¯B*(X)(xi)=GR¯B(X)(xi), Yxi*=Yxi, xi ∈ {x2,x6}.

  • GR¯B*(X)(x5)=∧y∈{x1,x7,x8}(1−GRB*(x5,y))=2/4 and Yx5*={y ∈ {x1,x7,x8} ; GR¯B*(X)(x5)=1−GRB*(x5,y)}={x1,x7,x8}.

From Step 4 of Algorithm 5.3 we have the upper approximations:

  • GR¯B*(X)(x1): Since x2 ∈ Yx1, GR¯B(X)(x1)=GRB(x1,x2), and GR¯B(X)(x1)=GRB*(x1,x5), then GR¯B*(X)(x1)=GR¯B(X)(x1)=2/4 and Yx1*=Yx1∪{x5}={x2,x5}.

  • GR¯B*(X)(x7): Since x6 ∈ Yx7, GR¯B(X)(x7)=GRB(x7,x6), and GR¯B(X)(x7)>GRB*(x7,x5), then GR¯B*(X)(x7)=GR¯B(X)(x7)=3/4 and Yx7*=Yx7−{x5}={x6}.

  • GR¯B*(X)(x8): Since x2 ∈ Yx8, GR¯B(X)(x8)=GRB(x8,x2), and GR¯B(X)(x8)>GRB*(x8,x5), then GR¯B*(X)(x8)=GR¯B(X)(x8)=4/4 and Yx8*=Yx8−{x5}={x2}.

Proposition 15

Let S = (U,A,V,F) be an information system, B ⊆ A, X ⊆ U, x* ∈ U−X. The lower and upper approximations of X after changing the conditional attribute value f(x*,b) of x* to f*(x*,b) (f(x*,b) ≠ f*(x*,b)) can be updated as follows.

  • Lower approximation: If xi ∈ U−X, then GR¯B*(X)(xi)=0. If xi ∈ X, we have:

    • If GR¯B(X)(xi)=1−GRB(xi,x*), then GR¯B*(X)(xi)=∧y∈U−X(1−GRB*(xi,y)) and Yxi*={y ∈ U−X ; GR¯B*(X)(xi)=1−GRB*(xi,y)}.

    • If ∃y ∈ Yxi with y ≠ x*, such that GR¯B(X)(xi)=1−GRB(xi,y), we have:

      • If 1−GRB*(xi,x*)>GR¯B(X)(xi), then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x*};

      • If 1−GRB*(xi,x*)=GR¯B(X)(xi), then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi∪{x*};

      • If 1−GRB*(xi,x*)<GR¯B(X)(xi), then GR¯B*(X)(xi)=1−GRB*(xi,x*) and Yxi*={x*}.

  • Upper approximation: If xj ∈ X, then GR¯B*(X)(xj)=1. If xj ∈ U−X, we have:

    • If xj ∈ U−X − {x*}, then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj.

    • If xj = x*, then GR¯B*(X)(x*)=∨y∈X GRB*(x*,y) and Yx**={y ∈ X ; GR¯B*(X)(x*)=GRB*(x*,y)}.

  • Step 1: Input the relation matrix Mn×m; GR¯B(X)(xi), Yxi, xi ∈ X; GR¯B(X)(xj), Yxj, xj ∈ U−X; the changed attribute value f*(x*,b) of x*.

  • Step 2: We get a new relation matrix Mn×m* by changing the values GRB(xi,x*) of the relation matrix Mn×m to GRB*(xi,x*), xi ∈ X. // According to Proposition 13.

  • Step 3: Calculate the lower approximations for each xi ∈ X:

    If ∃y ∈ Yxi with y ≠ x*, such that GR¯B(X)(xi)=1−GRB(xi,y), we have:

      • If 1−GRB*(xi,x*)>GR¯B(X)(xi), then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi−{x*};

      • If 1−GRB*(xi,x*)=GR¯B(X)(xi), then GR¯B*(X)(xi)=GR¯B(X)(xi) and Yxi*=Yxi∪{x*};

      • If 1−GRB*(xi,x*)<GR¯B(X)(xi), then GR¯B*(X)(xi)=1−GRB*(xi,x*) and Yxi*={x*}.

    Else GR¯B*(X)(xi)=∧y∈U−X(1−GRB*(xi,y)) and Yxi*={y ∈ U−X ; GR¯B*(X)(xi)=1−GRB*(xi,y)}. // According to Proposition 15.

  • Step 4: Calculate the upper approximations for each xj ∈ U−X:

    • If xj ∈ U−X − {x*}, then GR¯B*(X)(xj)=GR¯B(X)(xj) and Yxj*=Yxj.

    • If xj = x*, then GR¯B*(X)(x*)=∨y∈X GRB*(x*,y) and Yx**={y ∈ X ; GR¯B*(X)(x*)=GRB*(x*,y)}. // According to Proposition 15.

  • Step 5: Output the relation matrix Mn×m*; GR¯B*(X)(xi), Yxi*, xi ∈ X; GR¯B*(X)(xj), Yxj*, xj ∈ U−X; GR¯B*(X)(xi)=0, xi ∈ U−X, and GR¯B*(X)(xj)=1, xj ∈ X.

Algorithm 5.4

(Incremental algorithm for updating approximations when changing the conditional attribute value f(x*,b) to f*(x*,b), x* ∈ U−X)

The time complexity of Algorithm 5.4 is O(|X| |B|), which is mainly determined by the cost of updating the relation matrix in Step 2. In Example 11 we use the results of Example 10 to demonstrate how Algorithm 5.4 updates the approximations when the conditional attribute value f(x*,b) of an object x* ∈ U−X is changed to f*(x*,b).
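Finally, a sketch of Algorithm 5.4 in the same assumed representation, with the lower-approximation comparisons taken on 1−GRB*(xi,x*), the quantity the minimum actually ranges over; `new_col` is the updated column of grades from Proposition 13:

```python
from fractions import Fraction

def change_value_in_complement(M, X, comp, lower, Ylo, upper, Yup, xs, new_col):
    """Algorithm 5.4 sketch: a conditional attribute value of xs
    (xs in U - X) changes; comp is U - X."""
    for xi in X:                                # Step 2: replace xs's column
        M[xi][xs] = new_col[xi]
    # Step 3: lower approximations on X against the new column.
    for xi in X:
        if Ylo[xi] - {xs}:                      # old minimum witnessed elsewhere
            g = 1 - M[xi][xs]
            if g < lower[xi]:
                lower[xi], Ylo[xi] = g, {xs}
            elif g == lower[xi]:
                Ylo[xi] |= {xs}
            else:
                Ylo[xi] -= {xs}
        else:                                   # xs was the only witness
            lower[xi] = min(1 - M[xi][y] for y in comp)
            Ylo[xi] = {y for y in comp if 1 - M[xi][y] == lower[xi]}
    # Step 4: only xs's own upper approximation can move (Proposition 15).
    upper[xs] = max(M[xi][xs] for xi in X)
    Yup[xs] = {xi for xi in X if M[xi][xs] == upper[xs]}
```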

Example 11

We consider the information system given in Table 9. Let U = {x1,x2,x5,x6,x7,x8} be the universal set, B = {b1,b3,b4,b5} be the conditional attribute set, X = {x2,x5,x6} be the decision set. Changing f(x8,b4) = 2 to f*(x8,b4) = 1.

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x5 1 1 2 3 X
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 2 2 UX

After changing f(x8,b4) = 2 to f*(x8,b4) = 1:

b1 b3 b4 b5 d
x1 2 1 1 3 UX
x2 2 1 2 2 X
x5 1 1 2 3 X
x6 1 2 2 2 X
x7 1 2 2 1 UX
x8 2 1 1 2 UX

Using the result of Example 10.

M3×3 (rows x2, x6, x5; columns x1, x7, x8):

x2: 2/4 1/4 4/4
x6: 0 3/4 2/4
x5: 2/4 2/4 2/4

GR¯B(X)(x2)=0, Yx2={x8}; GR¯B(X)(x6)=1/4, Yx6={x7}; GR¯B(X)(x5)=2/4, Yx5={x1,x7,x8}; GR¯B(X)(xi)=0, xi ∈ {x1,x7,x8}.

GR¯B(X)(x1)=2/4, Yx1={x2,x5}; GR¯B(X)(x7)=3/4, Yx7={x6}; GR¯B(X)(x8)=4/4, Yx8={x2}; GR¯B(X)(xj)=1, xj ∈ {x2,x5,x6}.

From Step 2 of Algorithm 5.4 we obtain the new relation matrix M3×3*:

M3×3* (rows x2, x6, x5; columns x1, x7, x8):

x2: 2/4 1/4 3/4
x6: 0 3/4 1/4
x5: 2/4 2/4 1/4

The lower and upper approximations of X = {x2,x5,x6} by changing f(x8,b4) to f*(x8,b4) are updated as follows.

From Step 3 of Algorithm 5.4 we have the lower approximations:

  • GR¯B*(X)(x2): Since GR¯B(X)(x2)=1−GRB(x2,x8), then GR¯B*(X)(x2)=∧y∈{x1,x7,x8}(1−GRB*(x2,y))=1/4 and Yx2*={y ∈ {x1,x7,x8} ; GR¯B*(X)(x2)=1−GRB*(x2,y)}={x8};

  • GR¯B*(X)(x6): Since x7 ∈ Yx6, such that GR¯B(X)(x6)=1−GRB(x6,x7), and 1−GRB*(x6,x8)>GR¯B(X)(x6), then GR¯B*(X)(x6)=GR¯B(X)(x6)=1/4 and Yx6*=Yx6−{x8}={x7};

  • GR¯B*(X)(x5): Since x7 ∈ Yx5, such that GR¯B(X)(x5)=1−GRB(x5,x7), and 1−GRB*(x5,x8)>GR¯B(X)(x5), then GR¯B*(X)(x5)=GR¯B(X)(x5)=2/4 and Yx5*=Yx5−{x8}={x1,x7}.

From Step 4 of Algorithm 5.4 we have the upper approximations:

  • GR¯B*(X)(xj)=GR¯B(X)(xj), Yxj*=Yxj, xj ∈ {x1,x7}.

  • GR¯B*(X)(x8)=∨y∈{x2,x5,x6}GRB*(x8,y)=3/4 and Yx8*={y ∈ {x2,x5,x6} ; GR¯B*(X)(x8)=GRB*(x8,y)}={x2}.

6. Algorithm analysis

The time complexities of all the incremental updating algorithms proposed in this paper are listed in Table 11.

Operation Non-incremental algorithm Time complexity Incremental algorithm Time complexity
Adding an attribute b Algorithm 2.1 O(|X| |U−X| |B|) Algorithm 3.1 O(|X| |U−X|)
Removing an attribute b Algorithm 3.2 O(|X| |U−X|)
Adding an object x ∈ U−X Algorithm 4.1 O(|X| |B|)
Adding an object x ∈ X Algorithm 4.2 O(|U−X| |B|)
Removing an object x ∈ U−X Algorithm 4.3 O(|X|)
Removing an object x ∈ X Algorithm 4.4 O(|U−X|)
x ∈ U−X → x ∈ X Algorithm 5.1 O(|U−X| |B|)
x ∈ X → x ∈ U−X Algorithm 5.2 O(|X| |B|)
f(x,b) → f*(x,b), x ∈ X Algorithm 5.3 O(|U−X| |B|)
f(x,b) → f*(x,b), x ∈ U−X Algorithm 5.4 O(|X| |B|)

From Table 11 we can see that the proposed incremental updating algorithms have lower time complexity for every operation on the data. Besides the time cost, each incremental algorithm also obviously uses less memory to update the information, because there is no need to rebuild the whole relation matrix.

The incremental updating algorithms can be seen as a group of functions that update the existing results so that the relation matrix, the approximations and the witness sets Yxi and Yxj change in the right way when the information system changes. The different types of incremental algorithms can be applied in any order and any number of times, so any possible change of the information system can be handled. For example, when two objects must be added and one attribute removed, we can apply Algorithm 3.2 once and then Algorithm 4.1 or 4.2 twice. Note that applying the removing-type algorithms before the others reduces the time cost a little further.

7. Experimental evaluation

In this section, we compare the computational time of the non-incremental algorithm and the proposed incremental algorithms on the data sets shown in Table 12. The three data sets come from the University of California, Irvine (UCI) Machine Learning Repository (www.ics.uci.edu/). In the data set Dermatology, we deleted some attributes with missing values. Experiments were performed on a 2.40GHz Pentium server with 8GB of memory, running Windows 10. The algorithms were coded in Matlab R2012a.

ID Data set Number of objects Number of attributes Decision classes Missing value
1 Zoo 101 17 7 No
2 Dermatology 366 33 7 Yes
3 Chess 3196 36 No

From the experimental results in Table 13 (where NIA denotes the non-incremental updating algorithm and IA the incremental updating algorithm), it is clear that the incremental algorithms outperform the traditional non-incremental algorithm in all cases; the proposed methods can effectively update approximations when the information system changes.

Operation Zoo Dermatology Chess
NIA IA NIA IA NIA IA
Adding an attribute b 0.0115 0.0083 0.1735 0.0064 15.8108 0.2609
Removing an attribute b 0.0115 0.0077 0.1774 0.0065 15.8190 0.2316
Adding an object xUX 0.0123 0.0030 0.1582 0.0052 15.9404 0.1340
Adding an object xX 0.0120 0.0081 0.1485 0.0074 16.2771 0.1263
Removing an object xUX 0.0101 0.0044 0.1708 0.0049 15.9115 0.1308
Removing an object xX 0.0114 0.0067 0.1697 0.0057 16.0953 0.1302
xUXxX 0.0117 0.0055 0.1726 0.0054 15.9011 0.1641
xXxUX 0.0112 0.0099 0.1744 0.0067 15.7100 0.1548
f(x,b) → f*(x,b), xX 0.0120 0.0079 0.1795 0.0067 15.9708 0.1184
f(x,b) → f*(x,b), xUX 0.0115 0.0030 0.1623 0.0060 15.8709 0.1442
Table 13.

Comparisons of computational time between the non-incremental algorithm and the incremental algorithms (s).

8. Conclusion

This paper discusses approaches for incrementally updating the rough approximations based on the grade indiscernibility relation when a complete information system varies. Making full use of the existing information greatly reduces the time cost of these approaches, especially when dealing with large-scale data. In the future, we will discuss methods for dynamically updating the rough approximations based on the grade indiscernibility relation in incomplete information systems. Moreover, the incremental updating of decision rules with respect to the grade indiscernibility relation is another important and interesting issue to be addressed.

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (Grant No. 61473239, 61175044, 61372187), the Fundamental Research Funds for the Central Universities of China (Grant No. 2682014ZT28), and the open research fund of key laboratory of intelligent network information processing, Xihua University (szjj2014-052).

References

22.M Lichman, UCI Machine Learning Repository, University of California, Irvine, School of Information and Computer Sciences, 2013. http://archive.ics.uci.edu/ml