International Journal of Computational Intelligence Systems

Volume 14, Issue 1, 2021, Pages 1419 - 1425

Attribute Reduction Method of Covering Rough Set Based on Dependence Degree

Authors
Li Fachao1,2, Ren Yexing2,*, Jin Chenxia1
1School of Economics and Management, Hebei University of Science and Technology, Shijiazhuang, 050018, China
2College of Science, Hebei University of Science and Technology, Shijiazhuang, 050018, China
*Corresponding author. Email: x960312@yeah.net
Received 10 January 2021, Accepted 13 April 2021, Available Online 30 April 2021.
DOI
10.2991/ijcis.d.210419.002
Keywords
Covering rough sets; Attribute reduction; Dependence degree; Local dependence degree; ε-Boolean identification matrix
Abstract

Attribute reduction is a hot topic in the field of data mining. Compared with traditional methods, attribute reduction algorithms based on covering rough sets are better suited to numerical data. However, such algorithms are still not efficient enough for large-scale data. In this paper, we first propose the ε-Boolean identification matrix of covering rough sets, give calculation methods for the dependence degree and the local dependence degree, and discuss their properties. We then give two attribute reduction algorithms, based on the dependence degree and the local dependence degree, respectively. Finally, we test the performance of the algorithms on several UCI data sets. Experimental results show that our algorithms are substantially more efficient than existing ones while producing similar reductions; they are therefore better suited to large-scale data processing and have wide application value.

Copyright
© 2021 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Rough set theory was proposed by the Polish mathematician Z. Pawlak in 1982. It is a mathematical tool for dealing with fuzzy and uncertain knowledge, and has been successfully applied in machine learning, knowledge discovery, data mining, pattern recognition, and other fields [1,2]. Attribute reduction (removing redundant attributes while keeping certain performance of the information system unchanged) is one of the important topics in rough set theory.

Classical rough set theory uses equivalence relations to obtain a partition of the objects under an attribute set, and is an effective tool for processing complete discrete data. In reality, however, the data in an information system are often numerical, which means they must be discretized before attribute reduction, causing information loss [3]. Since the equivalence relation is no longer suitable for numerical information systems, many scholars have extended it to dominance relations, or replaced the partition of the universe with a covering of the universe, in order to handle attribute reduction and rule acquisition. Moreover, the attribute reduction of an information system is usually not unique, and seeking the reduction with the fewest attributes is an NP-hard problem [4]; how to obtain a reduction through heuristic algorithms is therefore a hot research topic, on which many scholars have carried out useful work.

Guan and Wang [5] proposed the concept of relative reduction of the maximum tolerance class, and defined a discriminant function to calculate relative reductions using Boolean reasoning techniques. Ma and Zhang [6] gave the form of the generalized binary discernibility matrix for processing incomplete information systems, introduced some useful properties of the attribute kernel and attribute relative reduction, and proposed an algorithm for the relative reduction of attributes based on the generalized binary discernibility matrix. Meng and Shi [7] derived some properties of incomplete information systems, and proposed an attribute reduction algorithm based on positive regions. Chen et al. [8] proposed a method to reduce the attributes of covering decision systems: consistent and inconsistent covering decision systems and their attribute reductions are defined, and necessary and sufficient conditions for reduction are given. Wang et al. [9] gave some basic structural properties of attribute reduction in covering rough sets, and proposed a heuristic algorithm based on the discernibility matrix to find approximate minimum reduction attribute subsets. Gao et al. [10] rewrote the neighborhood approximation set in matrix form and, based on it, proposed the relative dependence of attributes and a corresponding algorithm. Chen et al. [11] proposed an acceleration strategy based on attribute groups: all candidate attributes are first divided into groups, and in the search for a reduction only the attributes in groups containing at least one attribute of the potential reduction need to be evaluated. Tsang et al. [12,13] improved the upper approximation definition of covering generalized rough sets, and proposed a new method of constructing a simpler discernibility matrix based on covering rough sets. Liu et al. [14] proposed two new matrix-based methods for computing the minimal and maximal descriptions in covering rough sets, which reduce the computational complexity of traditional methods. Li et al. [15] proposed a single-attribute identification matrix, gave the concept of dual-attribute comprehensive dependence, and designed an attribute reduction algorithm based on it. Shi et al. [16] proposed a new attribute reduction model based on Boolean operations under the concept of the neighborhood rough set. Chen et al. [17] studied the attribute reduction problem of covering decision systems from the viewpoint of graph theory, and proposed a corresponding algorithm. Chen and Chen [18] proposed a variable precision neighborhood rough set model and designed a feature subset selection algorithm for it.

The above studies have promoted the development of rough set theory to a certain extent, but the discernibility matrix suffers from large storage requirements and low generation efficiency. Therefore, in the context of covering rough sets, this paper first proposes the ε-Boolean identification matrix and designs an attribute reduction algorithm based on the dependence degree, which alleviates the above problems and greatly improves operating efficiency. Secondly, the submatrices of the ε-Boolean identification matrix are used in place of the full matrix, and an attribute reduction algorithm based on the local dependence degree is designed, which further improves performance.

The main work of this paper is as follows: (1) the basic concepts of rough sets are reviewed; (2) the calculation methods of the dependence degree and the local dependence degree under covering rough sets are given, and the concept of the ε-Boolean identification matrix is proposed; (3) attribute reduction algorithms based on the dependence degree and the local dependence degree are designed; (4) the effectiveness of the algorithms is verified on several commonly used UCI data sets. Theoretical analysis and simulation results show that our methods effectively improve efficiency, with reduction results similar to those of existing algorithms.

2. PRELIMINARIES

This section reviews some basic concepts related to rough sets. For convenience, we assume that (1) (U, F, A) is an information system, where U = {x1, x2, …, xn} is the object set, A = {a1, a2, …, am} is the attribute set, F = {fa: U → Va | a ∈ A} is the set of relationships between U and A, and Va is the range of a; (2) U is a nonempty finite universe and C is a family of nonempty subsets of U; C is called a covering of U if no subset in C is empty and ∪C = U.

Definition 1.

[12] Suppose U is a finite universe and C = {k1, k2, …, kn} is a covering of U. For any x ∈ U, let Cx = ∩{kj : kj ∈ C, x ∈ kj}. Then Cov(C) = {Cx : x ∈ U} is also a covering of U, called the induced covering of C.

Definition 2.

[12] Suppose U is a finite universe and Δ = {Ci : i = 1, 2, …, m} is a family of coverings of U. For any x ∈ U, let Δx = ∩{(Ci)x : (Ci)x ∈ Cov(Ci), i = 1, 2, …, m}. Then Cov(Δ) = {Δx : x ∈ U} is also a covering of U, called the induced covering of Δ.

Generally speaking, for each numerical attribute a ∈ A, we can define a neighborhood of each sample x ∈ U: Na(x, ε) = {y ∈ U : d(x, y) ≤ ε} and NA(x, ε) = ∩{Na(x, ε) : a ∈ A}, where d(x, y) = |a(x) − a(y)| is a distance function and ε is a specified threshold. Obviously, Na = {Na(x, ε) : x ∈ U} is a covering of U, Δ = {Na : a ∈ A} is a covering family of U, and Δx = NA(x, ε).
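As an illustration (our own sketch, not code from the paper), the per-attribute neighborhood Na(x, ε) and the induced neighborhood NA(x, ε) can be computed directly from their definitions; the function names are ours:

```python
# A sketch (ours) of N_a(x, eps) for one attribute and the induced
# neighborhood N_A(x, eps), the intersection over all attributes.

def neighborhood(column, x_idx, eps):
    """Indices of samples y with |a(x) - a(y)| <= eps for one attribute column."""
    return {j for j, v in enumerate(column) if abs(column[x_idx] - v) <= eps}

def induced_neighborhood(data, x_idx, eps):
    """N_A(x, eps): intersection of the per-attribute neighborhoods.

    `data` is a list of samples, each a list of attribute values."""
    m = len(data[0])
    common = set(range(len(data)))
    for a in range(m):
        common &= neighborhood([row[a] for row in data], x_idx, eps)
    return sorted(common)
```

Each sample always belongs to its own neighborhood, since d(x, x) = 0 ≤ ε.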

Theorem 1.

[12] Let Δ = {Ci : i = 1, 2, …, m} be a family of coverings on U and P ⊆ Δ. Then Cov(P) = Cov(Δ) if and only if Δx = Px for all x ∈ U.

Definition 3.

Let (U, F, A) be an information system, ai ∈ A, x, y ∈ U, B a nonempty subset of A, and ε a specified threshold. Define

d(s, t) = 1 if |s − t| > ε; d(s, t) = 0 if |s − t| ≤ ε. (1)

  1. If d(fai(x), fai(y)) = 1, then ai can distinguish x from y; otherwise, ai cannot distinguish x from y.

  2. Let at(x, y) = d(fat(x), fat(y)) and

G(ai) = Σ_{x,y∈U} ai(x, y), (2)

G(B) = Σ_{x,y∈U} max_{ai∈B} ai(x, y), (3)

D(ai|B)_x = Σ_{y∈U} ai(x, y) · d(max_{ak∈B} ak(x, y), ai(x, y)), (4)

D(ai|B) = Σ_{x,y∈U} ai(x, y) · d(max_{ak∈B} ak(x, y), ai(x, y)). (5)

Then G(ai) is called the distinguishing ability of ai, G(B) the comprehensive distinguishing ability of the attribute set B, D(ai|B)_x the local dependence degree of B on ai, and D(ai|B) the dependence degree of B on ai.

Obviously, (1) G(ai) is the total number of ordered pairs that can be distinguished by ai, and G(B) is the total number of ordered pairs that can be distinguished by the attribute set B; (2) G(B1) ≤ G(B2) and D(ai|B1) ≥ D(ai|B2) hold for any B1 ⊆ B2 ⊆ A; (3) D(ai|B)_x is the number of objects that cannot be distinguished from x by B but can be distinguished by ai, and D(ai|B) is the total number of ordered pairs that cannot be distinguished by B but can be distinguished by ai. This shows that D(ai|B)_x and D(ai|B) are important (complementary) measures of how much ai improves the distinguishing ability of B, and serve as a selection basis for adding attributes when designing additive attribute reduction methods.
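To make formulas (1)-(5) concrete, here is a small sketch (ours, not the paper's code); note that for 0-1 arguments and ε < 1, d(·,·) in (4)-(5) acts as an inequality test, so the product is 1 exactly when ai distinguishes a pair that B does not:

```python
def d(s, t, eps):
    # Eq. (1): 1 if the two values differ by more than eps, else 0.
    return 1 if abs(s - t) > eps else 0

def G(data, attrs, eps):
    # Eqs. (2)-(3): ordered pairs distinguished by at least one attribute
    # in `attrs` (a singleton gives Eq. (2)).
    n = len(data)
    return sum(max(d(data[x][a], data[y][a], eps) for a in attrs)
               for x in range(n) for y in range(n))

def dependence(data, ai, B, eps):
    # Eq. (5): ordered pairs distinguished by ai but by no attribute in B.
    n = len(data)
    total = 0
    for x in range(n):
        for y in range(n):
            ai_xy = d(data[x][ai], data[y][ai], eps)
            b_xy = max((d(data[x][a], data[y][a], eps) for a in B), default=0)
            total += ai_xy * d(b_xy, ai_xy, eps)
    return total
```

With B = ∅ the dependence degree of ai reduces to its distinguishing ability G(ai), matching remark (3) above.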

3. DEPENDENCE DEGREE ON COVERING ROUGH SETS

For convenience, in the following we assume that:

  1. δ(x) = 1 for any x > 0, and δ(x) = 0 for any x ≤ 0.

  2. For X = (x1, x2, …, xn), Y = (y1, y2, …, yn) and Xi = (xi1, xi2, …, xin), i = 1, 2, …, s: S(X) = Σ_{i=1}^{n} xi, δ(X) = (δ(x1), δ(x2), …, δ(xn)), X − Y = (x1 − y1, x2 − y2, …, xn − yn), and ∨_{i∈I} Xi = (∨_{i∈I} xi1, ∨_{i∈I} xi2, …, ∨_{i∈I} xin).

  3. If the first row of a matrix Q is (a1, a2, …, am) and the entries of the other rows are 0 or 1, then Q is called the identity Boolean matrix of a1, a2, …, am, and the 0-1 column under aj is denoted P(Q, aj).

3.1. ε-Boolean Identification Matrix and Its Submatrix

Definition 4.

Let (U, F, A) be an information system with A = {a1, a2, …, am} and U = {x1, x2, …, xn}. Define

r^(k)_ij = 1 if |faj(xk) − faj(xi)| > ε; r^(k)_ij = 0 if |faj(xk) − faj(xi)| ≤ ε, (6)

and let M(xk) = (r^(k)_ij)_{(n−k)×m} denote the matrix with entries r^(k)_ij, where i = k + 1, k + 2, …, n and j = 1, 2, …, m.

  1. The [n(n − 1)/2 + 1] × m matrix

    M = [A; M(x1); M(x2); …; M(x_{n−1})], (7)

    whose first row is A = (a1, a2, …, am) and whose 0-1 column under aj is denoted P(M, aj), is called the ε-Boolean identification matrix of the information system (U, F, A).

  2. The [(n − k) + 1] × m matrix

    [A; M(xk)], (8)

    whose 0-1 column under aj is denoted P(M(xk), aj), is called the submatrix of the ε-Boolean identification matrix of (U, F, A).

It is easy to see that (1) r^(k)_ij is a 0-1 description of whether attribute aj (j = 1, 2, …, m) can distinguish objects xk and xi (i = k + 1, k + 2, …, n); (2) M is a formal comprehensive description of the discriminative performance of the attribute set A with respect to U = {x1, x2, …, xn}, and M(xk) is a 0-1 comprehensive description of whether A can distinguish xk from xi (i = k + 1, k + 2, …, n); (3) in row t (t = 1, 2, …, n − k) of M(xk), the attributes corresponding to element 1 can distinguish xk from xk+t, while those corresponding to element 0 cannot; hence the number of 1s in that row is the number of attributes in A that can distinguish xk from xk+t (in particular, if a row contains exactly one 1, the attribute of that column must be a core attribute); (4) the j-th column of M is a comprehensive description of the distinguishing performance of aj with respect to U, with the basic characteristics given in Theorem 2.
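A sketch (ours) of building the 0-1 body of the ε-Boolean identification matrix, block by block as in Definition 4:

```python
def identification_rows(data, eps):
    """0-1 rows of the eps-Boolean identification matrix M (Definition 4):
    one row per pair k < i, with a 1 in column j iff attribute j separates
    x_k and x_i by more than eps. The block for fixed k is M(x_k)."""
    n, m = len(data), len(data[0])
    rows = []
    for k in range(n - 1):
        for i in range(k + 1, n):
            rows.append([1 if abs(data[k][j] - data[i][j]) > eps else 0
                         for j in range(m)])
    return rows
```

Stacking all n(n − 1)/2 rows under the attribute row (a1, …, am) gives M itself; keeping only the block for one k gives the submatrix of (8).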

Theorem 2.

Let (U, F, A) be an information system with A = {a1, a2, …, am} and U = {x1, x2, …, xn}, let M be the ε-Boolean identification matrix and M(xk) its submatrices. For ai ∈ A and B ⊆ A, the following statements hold:

  1. G(ai) = 2S(P(M, ai)) = 2 Σ_{k=1}^{n−1} S(P(M(xk), ai));

  2. G(B) = 2S(∨_{at∈B} P(M, at)) = 2 Σ_{k=1}^{n−1} S(∨_{at∈B} P(M(xk), at));

  3. D(aj|B)_{xk} = 2S(δ(P(M(xk), aj) − ∨_{at∈B} P(M(xk), at)));

  4. D(ai|B) = 2S(δ(P(M, ai) − ∨_{at∈B} P(M, at))).

Proof.

It is obvious by Definition 3.

Theorem 3.

Let (U, F, A) be an information system. For xk, xk+t ∈ U and B ⊆ A, the following statements hold:

  1. xk+t ∈ NA(xk, ε) if and only if the t-th row of the matrix M(xk) is a zero row;

  2. Cov(B) = Cov(A) if and only if NA(x, ε) = NB(x, ε) holds for every x ∈ U.

Proof.

  1. Necessity. If xk+t ∈ NA(xk, ε), then xk+t ∈ Na(xk, ε) holds for every a ∈ A. We prove that row t of M(xk) is a zero row by contradiction. Suppose some entry in row t is not 0; without loss of generality, let the entry in the first column be nonzero. Then by Definitions 3 and 4 we have |fa1(xk) − fa1(xk+t)| > ε, that is, xk+t ∉ Na1(xk, ε), contradicting xk+t ∈ Na1(xk, ε).

    Sufficiency. Similar.

  2. It can be proved by Theorem 1.

3.2. Properties and Calculation Methods of Dependence Degree and Local Dependence Degree

Suppose that (1) M_{ak} is the matrix formed from M by deleting the rows in which the column entry of attribute ak is 1, and then deleting the column of ak; (2) for B ⊆ A, M_B is the matrix formed from M by deleting the rows corresponding to the 1-entries of ∨_{at∈B} P(M, at), and then deleting the columns of the attributes in B; (3) M_∅ = M. From this convention, the following operation rules are easy to see: (1) commutative law, i.e., M_{ai,aj} = M_{aj,ai}; (2) recurrence law, i.e., M_{ai,aj} = (M_{ai})_{aj} and M_{ai,aj,ak} = (M_{ai,aj})_{ak}.

Using the above convention and Definition 3, for any nonempty subset B of A we have

D(ai|B)_{xk} = 2S(P((M(xk))_B, ai)), (9)

D(ai|B) = 2S(P(M_B, ai)). (10)

It can be seen that the dependence degree D(ai|B) is an extension of the local dependence degree D(ai|B)_x. In addition, the dependence degree can be extended to a multi-attribute comprehensive dependence degree: for nonempty subsets B and C of A,

D(C|B) = 2S(∨_{au∈C} P(M_B, au)). (11)

In an additive attribute reduction algorithm, adding, in sequence, the attribute with the largest dependence degree or local dependence degree with respect to the current attribute set B improves the comprehensive distinguishing ability of B fastest. Formulas (9) and (10) are progressive attribute-importance calculation methods for the attribute-addition process. The properties of the dependence degree are given below.
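Under the conventions above, (9) and (10) reduce to simple row filtering; a sketch (ours; for simplicity we keep all attribute columns and index by attribute, rather than deleting the columns of B):

```python
def reduce_rows(rows, B):
    """Row filter realizing M_B: keep only the pairs that no attribute
    in B distinguishes (columns are kept and indexed, not deleted)."""
    return [r for r in rows if not any(r[a] for a in B)]

def dependence_from_rows(rows, ai, B):
    # Eq. (10): D(ai|B) = 2 * S(P(M_B, ai)); the factor 2 converts the
    # unordered pairs stored in the matrix into ordered pairs.
    return 2 * sum(r[ai] for r in reduce_rows(rows, B))
```

Here `rows` is the 0-1 body of the ε-Boolean identification matrix; applying `reduce_rows` to the block of one object xk gives the local version (9) in the same way.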

Theorem 4.

Let (U,F,A) be an information system, B,CA. Then the following statements hold:

  1. D(aj|B) = Σ_{k=1}^{n−1} D(aj|B)_{xk};

  2. G(B ∪ C) = G(B) + D(C|B).

Proof.

It can be proved by Definition 3 and Theorem 2.

4. ATTRIBUTE REDUCTION ALGORITHM BASED ON DEPENDENCE AND LOCAL DEPENDENCE

Since the above analysis shows that dependence degree and local dependence degree can be used as a selection basis for attribute addition, this section will design a corresponding heuristic algorithm to obtain the reduction of the information system. The detailed process is stated as follows:

Algorithm 1: Attribute reduction algorithm based on dependence degree

Input: Information system S=(U,F,A), where U is the object set and A is the attribute set;

Output: A reduction B of the information system;

Step 1: Construct the corresponding ε-Boolean identification matrix M according to the information system S, and let B=ϕ;

Step 2: Delete the all-0 rows and all-1 rows of the ε-Boolean identification matrix M;

Step 3: Calculate the sum of the elements of each row of M_B. If there is a row whose sum is 1, add the attribute a corresponding to that 1 to B, update B to B ∪ {a}, and update M_B; otherwise, go to step 5;

Step 4: Calculate the sum of the elements of each column of M_B, add the attribute a with the largest column sum to B, update B to B ∪ {a}, and update M_B. Go to step 5;

Step 5: If M_B is not a row vector, go to step 4; otherwise, output B.

The basic idea of this algorithm is to first add the core attributes to the reduction set B and update the matrix M_B, and then add, in turn, the attribute with the greatest dependence degree on B, until the comprehensive distinguishing ability of B equals that of the original attribute set. It is worth noting that although the ε-Boolean identification matrix overcomes, to a certain extent, the large storage requirements and low generation efficiency of the discernibility matrix, it still incurs a high computational cost. We therefore design Algorithm 2, which further improves efficiency. The implementation process is as follows:
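The steps of Algorithm 1 can be sketched as follows (our reading of the steps, not the authors' code; `rows` is the 0-1 body of M and `m` the number of attributes):

```python
def algorithm1(rows, m):
    """Greedy attribute reduction over the eps-Boolean identification matrix."""
    # Step 2: drop all-0 rows (no attribute distinguishes the pair) and
    # all-1 rows (every attribute does).
    rows = [r for r in rows if any(r) and not all(r)]
    B = set()
    while rows:
        single = next((r for r in rows if sum(r) == 1), None)
        if single is not None:
            a = single.index(1)          # Step 3: a core attribute
        else:
            # Step 4: attribute with the largest column sum, i.e. the
            # largest dependence degree on the current B.
            a = max(range(m), key=lambda j: sum(r[j] for r in rows))
        B.add(a)
        rows = [r for r in rows if not r[a]]   # update M_B
    return B
```

The loop ends when every remaining pair is covered, i.e., when M_B is reduced to the attribute row alone (step 5).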

Algorithm 2: Attribute reduction algorithm based on local dependence degree

Input: Information system S=(U,F,A), where U is the object set and A is the attribute set;

Output: A reduction B of the information system;

Step 1: Let k=1, B=ϕ;

Step 2: Compute the submatrix M(xk) of the ε-Boolean identification matrix and (M(xk))_B;

Step 3: Delete the all-0 and all-1 rows of the submatrix M(xk);

Step 4: Calculate the sum of the elements of each row of (M(xk))_B. If there is a row whose sum is 1, add the attribute ai corresponding to that 1 to B, update B to B ∪ {ai}, and update (M(xk))_B; otherwise, go to step 6;

Step 5: Calculate the sum of the elements of each column of (M(xk))_B, add the attribute ai with the largest column sum to B, update B to B ∪ {ai}, and update (M(xk))_B;

Step 6: If (M(xk))_B is not a row vector, go to step 5; otherwise, go to step 7;

Step 7: Let k = k + 1; if k = n, output B; otherwise go to step 2.

The basic idea of the algorithm is as follows: first, according to the submatrix M(x1) of the ε-Boolean identification matrix, attributes with the largest local dependence degree on the current reduction set are added in turn until the first object is distinguished from all other objects. Similarly, according to the submatrix M(x2), the second object is separated from the remaining objects (the first object no longer needs to be considered), and this process is repeated until the last object has been separated. Because the local dependence degree is simpler to compute, the attribute reduction algorithm based on it is more efficient.
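Algorithm 2 can be sketched in the same style (ours, not the authors' code); only one submatrix is materialized at a time, so live memory stays at O(n × |A|):

```python
def algorithm2(data, eps):
    """Attribute reduction based on local dependence degree (a sketch)."""
    n, m = len(data), len(data[0])
    B = set()
    for k in range(n - 1):
        # Step 2: 0-1 body of the submatrix M(x_k): pairs (x_k, x_i), i > k.
        rows = [[1 if abs(data[k][j] - data[i][j]) > eps else 0
                 for j in range(m)] for i in range(k + 1, n)]
        # Step 3 and the M(x_k)_B update: drop all-0 rows and the pairs
        # already distinguished by the current B.
        rows = [r for r in rows if any(r) and not any(r[a] for a in B)]
        while rows:
            single = next((r for r in rows if sum(r) == 1), None)
            a = (single.index(1) if single is not None else
                 max(range(m), key=lambda j: sum(r[j] for r in rows)))
            B.add(a)
            rows = [r for r in rows if not r[a]]
    return B
```

Because attributes chosen for earlier objects remain in B, later iterations often find all their pairs already covered and skip the inner loop entirely.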

The computational complexity of Algorithms 1 and 2 is analyzed as follows. The time complexity of computing the ε-Boolean identification matrix is O(n² × |A|), so the time and space complexity of Algorithm 1 are both O(n² × |A|). The time complexity of computing one submatrix of the ε-Boolean identification matrix is O(n × |A|); because Algorithm 2 loops n − 1 times, its time complexity is O(n² × |A|) and its space complexity is O(n × |A|). Therefore, Algorithm 2 reduces the storage space, which greatly improves its efficiency; on the other hand, the reduction produced by Algorithm 2 may contain more attributes than that of Algorithm 1.

5. SIMULATION AND COMPARATIVE ANALYSIS

In this section, 11 commonly used data sets from the UCI database (see Table 1) are used to compare the reduction performance of Algorithms 1 and 2 with the CVR algorithm proposed in [12] and the CDG algorithm proposed in [15] (since the algorithms in this paper are designed for information systems, only the conditional attribute set is used when a data set is a decision table). The results are shown in Tables 2 and 3, where |U| is the number of examples, |A| the number of attributes, T the average running time (seconds), and * indicates that the computation exceeded the computer memory.

No Data Sets |U| |A|
1 Anuran calls (MFCC) 7195 22
2 HCV for Egyptian patients 1385 29
3 Zoo 101 17
4 MEU-Mobile KSD 2856 71
5 Abalone 4177 8
6 Yeast 1484 8
7 Image segmentation 2310 19
8 Wine 178 13
9 Ionosphere 351 34
10 Glass 214 10
11 Audit data 777 18
Table 1

Data information.

No |A| Algorithm 1 (ε) Algorithm 2 (ε) CVR (ε) CDG (ε)
0 0.1 0.2 0 0.1 0.2 0 0.1 0.2 0 0.1 0.2
1 22 2 22 22
2 29 2 6 4 2 6 4 2 6 4 2 6 4
3 17 11 11 1 11 11 1 11 11 1 11 11 1
4 71 2 17 8 2 20 10 2 17 8
5 8 3 8 5 4 8 5 3 8 5
6 8 5 8 8 4 8 8 4 5 6 5 8 8
7 19 3 5 2 3 5 2 3 5 2 5 5 2
8 13 2 1 1 2 2 1 2 1 1 2 1 1
9 34 7 31 32 7 31 32 7 6 7 7 31 32
10 10 2 1 ϕ 2 1 ϕ 2 ϕ 1 2 ϕ 1
11 18 5 3 2 5 3 3 5 4 3 5 3 2
Table 2

Reduction results.

No |A| T
Algorithm 1 Algorithm 2 CVR CDG
1 22 * 25.93 * *
2 29 1.64 0.74 496.115 3.29
3 17 0.03 0.06 0.03 1.74
4 71 167.4 22.15 * 479.39
5 8 13.03 4.96 * 42.42
6 8 0.52 0.74 334.97 2.38
7 19 3.21 1.91 1575.52 1.46
8 13 0.01 0.19 9.04 0.04
9 34 0.20 0.36 93.22 0.28
10 10 0.04 0.10 8.56 0.04
11 18 0.39 0.31 190.88 0.74
Table 3

Running time of the reduction with the four algorithms (in seconds).

The information of the data sets is given in Table 1. Before reduction, each numeric or integer attribute is normalized into the interval [0, 1]. The reduction results are shown in Table 2.
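The min-max normalization applied before reduction can be sketched as follows (ours; the convention for constant columns is our assumption):

```python
def min_max_normalize(data):
    """Scale each attribute column into [0, 1]; constant columns map to 0."""
    m = len(data[0])
    lo = [min(row[j] for row in data) for j in range(m)]
    hi = [max(row[j] for row in data) for j in range(m)]
    return [[(row[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
             for j in range(m)] for row in data]
```

Normalizing all attributes to a common scale is what makes a single threshold ε meaningful across attributes with different ranges.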

The CVR and CDG algorithms have important applications in covering rough sets. According to the attribute reduction results for the 11 data sets in Table 2, the reductions produced by Algorithms 1 and 2 are similar to those of the two algorithms above. It is not difficult to see that the reduction result depends on the value of ε. As ε increases, the number of attributes after reduction does not necessarily decrease: ordered pairs that were distinguishable may become indistinguishable, which can increase the number of core attributes and the number of attributes after reduction. However, Algorithms 1 and 2 greatly reduce the running time (see Table 3). This is because, compared with the identification matrix in the CVR algorithm, the elements of the ε-Boolean identification matrix are 0 or 1, which reduces storage requirements; moreover, the CDG algorithm needs n − 1 comparisons when distinguishing an object, whereas Algorithms 1 and 2 need only n − k comparisons when distinguishing the k-th object from the others. The running times of the four algorithms are reported in Table 3.

As the results in Table 3 show, Algorithms 1 and 2 greatly reduce the running time compared with CVR and CDG. However, Algorithm 1 is still not efficient on some large data sets, so this paper further improves it: the submatrices of the ε-Boolean identification matrix replace the full matrix, and the local dependence degree serves as the selection basis for attribute addition. Table 3 shows that Algorithm 2 further reduces the running time and improves efficiency. On data set 1, Algorithm 1 runs out of memory because it needs to create a 25880415 × 22 array (4.2 GB), which exceeds the preset maximum array size.

In summary, the data in Tables 2 and 3 show that the reduction results of Algorithms 1 and 2 are similar to those of the other two algorithms (CVR, CDG), while our algorithms are more efficient. Compared with Algorithm 1, Algorithm 2 is more efficient and better suited to processing large-scale data sets. The experimental results are consistent with the theoretical analysis, which verifies the effectiveness of the proposed algorithms.

6. CONCLUSION

Covering rough sets are a generalization of classical rough sets. In this context, this paper designs attribute reduction algorithms based on the dependence degree and the local dependence degree, respectively. The proposed algorithms have the following advantages:

First, the reduction results of Algorithms 1 and 2 are comparable to those of existing algorithms (the CVR and CDG algorithms), while Algorithms 1 and 2 greatly improve operating efficiency and are better suited to processing large-scale data sets.

Second, Algorithm 2 further reduces the storage cost of Algorithm 1 and is more efficient on large-scale data sets, although its reduction may contain more attributes than those of other algorithms.

Third, the proposed algorithms also apply to attribute reduction of decision tables, and have strong application value in data processing, data mining, and other fields.

It is worth noting that this paper does not consider applications in the big-data setting; the next step is therefore to reduce the samples before attribute reduction of the information system, or to use sampling for multiple reductions, so as to further improve the efficiency of the algorithm.

CONFLICTS OF INTEREST

The authors have no conflicts of interest to declare.

AUTHORS’ CONTRIBUTIONS

Li Fachao: Visualization; Ren Yexing: Writing - original draft; Jin Chenxia: review & editing.

ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (71771078, 71371064).

REFERENCES

1.W.X. Zhang and G.F. Qiu, Uncertain Decision Making Based on Rough Sets, Tsinghua University Press, Beijing, China, 2005.
4. S.K.M. Wong and W. Ziarko, On optimal decision rules in decision tables, Bull. Pol. Acad. Sci., Vol. 33, 1985, pp. 693-696.
ISSN (Online)
1875-6883
ISSN (Print)
1875-6891
