International Journal of Computational Intelligence Systems

Volume 11, Issue 1, 2018, Pages 1338 - 1356

On a knowledge measure and an unorthodox accuracy measure of an intuitionistic fuzzy set(s) with their applications

Authors
Sumita Lalotra1, rajputsumita88@gmail.com, Surender Singh1, *, surender1976@gmail.com
1School of Mathematics, Faculty of Sciences, Shri Mata Vaishno Devi University, Katra-182320, J & K, India
*Corresponding Author
Available Online 1 January 2018.
DOI
10.2991/ijcis.11.1.99
Keywords
Intuitionistic fuzzy sets; Knowledge measure; MADM; Accuracy; Pattern recognition
Abstract

A measure of knowledge may be viewed as a dual measure of entropy in a fuzzy system; thus, less entropy appears to accompany a greater amount of knowledge. In this paper, we propose a novel measure of knowledge for an intuitionistic fuzzy set (IFS) through an axiomatic approach. We investigate the effectiveness of the proposed knowledge measure through comparative studies with some existing entropy and knowledge measures of IFSs. We also introduce a one-parametric generalized version of this knowledge measure. This paper also provides an application of the proposed knowledge measures in multi-attribute decision-making (MADM). We further give two characterization results that yield a general framework for defining new knowledge measures. Similarity and dissimilarity measures may likewise be viewed as dual concepts for dealing with problems related to pattern recognition. In this paper, we provide an accuracy measure of an intuitionistic fuzzy set relative to a given intuitionistic fuzzy set. The proposed accuracy measure appears to serve as an effective alternative to similarity and dissimilarity measures in some pattern recognition problems. We also prove some properties of the accuracy measure.

Copyright
© 2018, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1. Introduction

There are many concepts in real life that involve vague and imprecise information. Zadeh's 1 idea of fuzziness provided a quantified approach to deal with this vagueness and imprecision. The vagueness associated with a member of a universe of discourse was represented in terms of a degree of membership. Atanassov 2-4 generalized Zadeh's notion of a fuzzy set by adding a degree of non-membership to each member of the universe of discourse. The theoretical extension of fuzzy sets to intuitionistic fuzzy sets (IFSs) was not a difficult task. However, the pragmatic aspect of IFSs has been sought and justified by various authors over the last three decades. A crucial question is how the association of a non-membership degree with an element of the universe of discourse finds its significance in real-life problems. We can answer this question as follows:

Consider the case of bird flu, where an expert thinks that this type of flu is found in migratory and resident birds. The expert assigns degrees of association of bird flu to migratory birds and resident birds of 0.6 and 0.4, respectively. In this case, the expert has implicitly assigned birds that have never developed bird flu to both of the above-mentioned categories. Therefore, for more accurate results, the categories of migratory birds and resident birds must each be provided with a degree of membership and a degree of non-membership to the bird flu case. In this situation, suppose the degrees of membership and non-membership of migratory birds are 0.55 and 0.34, respectively; then the value 1 − 0.55 − 0.34 = 0.11 accounts for those migratory birds among which bird flu has never been observed. In IFS terminology, the value 1 − (membership) − (non-membership) is called the hesitation degree or hesitation margin (indeterminacy degree). Thus, IFS theory seems to provide deeper insight into vague and imprecise data.

De Luca and Termini 5 introduced an axiomatic definition of fuzzy entropy. Yager 6 obtained fuzzy entropy from the distance between a fuzzy set and its complement. Since then, much work has been done on generalizations of fuzzy sets and their applications 7-25. In this paper, we first deal with the amount of knowledge associated with an IFS. In particular, a knowledge measure may be used to tackle some problems in artificial intelligence that are difficult to handle using fuzzy entropy alone, such as distinguishing between cases in which a large number of arguments are in favor while an equally large number of arguments are against at the same time.

Intuitively, the entropy of an intuitionistic fuzzy set is conceived as a dual measure of the amount of knowledge contained in the set. In decision-making problems, the entropy of an intuitionistic fuzzy set may not be a satisfactory measure of knowledge (Szmidt et al. 19, Szmidt and Kacprzyk 26). Szmidt et al. 27 pointed out that entropy does not consider the peculiarities of how the fuzziness is distributed. Consider two situations: first, when the membership and non-membership functions are both equal to 0.5, and second, when the membership and non-membership functions are both equal to 0. In both situations, intuitionistic fuzzy entropy assumes its maximum value (equal to 1). However, from the pragmatic point of view, these two situations are clearly different. The existing entropy measures of IFSs fail to capture this feature. To handle such situations, Szmidt et al. 27 proposed a measure of knowledge for an intuitionistic fuzzy set that may be considered a dual measure of intuitionistic fuzzy entropy. This measure was claimed to capture additional features that might be useful in decision-making problems. In the context of IFSs, Szmidt et al. 27, Guo and Song 29, and Guo 28 developed knowledge measures to capture some interesting features of IFSs, and showed the effectiveness and applications of their knowledge measures in 27-29. Linguistic hedges are essential features of fuzzy and intuitionistic fuzzy theory, and the linguistic-hedge argument explains the effectiveness of an entropy or a knowledge measure (more detail in Section 4). Depending on the hesitation degree, different knowledge measures are suitable for different situations. Guo 28 showed the effectiveness of his knowledge measure from the aspect of structured linguistic variables (linguistic hedges) for IFSs having a high hesitation degree. However, it is not so effective for IFSs having a small hesitation degree.
The entropy measure and the knowledge measure are conceptually dual to each other. But in intuitionistic fuzzy practice, a knowledge measure is not the strict complement of an entropy measure, and vice versa. Therefore, a problem dealt with by an entropy measure alone is apt to miss some intriguing features of an IFS. Thus, handling a problem with uncertain dataset(s) by both a knowledge measure and an entropy measure arguably augments the knowledge base of an expert system in a vague environment. Consequently, better solutions to optimization problems may be obtained. These facts motivated us to obtain a general framework for defining knowledge measures of IFSs for different situations. In particular, we propose a novel knowledge measure for IFSs and study its application in MADM problems. Furthermore, we obtain a one-parametric generalization of the proposed knowledge measure. The generalized knowledge measure provides flexibility of application, and the parameter α may be considered a sensitivity parameter for detecting adaptive changes.

Similarity and dissimilarity are important concepts associated with two data sets and are very helpful in problems related to pattern recognition. Since fuzzy methods are adaptive and provide soft solutions to real-life problems, the notions of similarity and dissimilarity have been extended to fuzzy sets and intuitionistic fuzzy sets by many researchers, and many similarity and dissimilarity measures have been put forward (refer to 7,12,13,15,16,30-36 and the references therein). Wu et al. 37 investigated similarity measure models and algorithms for hierarchical case-based reasoning (CBR). They developed a similarity evaluation model for hierarchical case (HC) trees by aggregating the conceptual similarity and the value similarity of two HC-trees. The effectiveness of the hierarchical case similarity evaluation model was also demonstrated with the help of illustrative examples. Wu et al. 38 developed a recommender system that recommends a suitable e-learning activity to a learner according to his profile and requirements. In their study, they proposed a fuzzy category similarity measure to evaluate the semantic similarity between learning activities and a learner profile. In such a study, when a subtree is matched to a target tree, an asymmetric similarity measure is desirable. Our proposed accuracy measure between two IFSs may qualify as an asymmetric similarity measure. Zhang et al. 39 presented a hybrid similarity measure method for the analysis of patent portfolios. In this model, the categorical similarity of international patent classifications (IPCs) and the semantic similarity of textual elements are fused to obtain a hybrid similarity measure. Zhang et al. 39 also presented a case study of firms in China's medical device industry using the proposed hybrid similarity measure.
Due to ever-widening multiple perspectives in Science, Technology and Innovation (ST&I), Technology Roadmapping (TRM) is an essential process for research and development, planning, and strategic management. The analysis of ST&I data plays an instrumental role in augmenting the capabilities of domain experts dealing with real-world problems. Zhang et al. 40 utilized similarity measures for topical analysis of ST&I. Boran and Akay 7 analysed some existing similarity measures for intuitionistic fuzzy sets along with some counter-intuitive cases. Xiao et al. 41 presented a detailed analysis of existing distance measures of intuitionistic fuzzy sets along with their counter-intuitive cases, introduced an intuitive distance measure for intuitionistic fuzzy sets, and discussed its application in pattern recognition. Since similarity and dissimilarity are dual concepts, both have been equally applied to pattern recognition problems by many researchers. We have observed the following research gaps in the previous studies.

  • None of the earlier studies provides an asymmetric similarity measure, which is desirable for certain problems (e.g., Wu et al. 37).

  • Due to counter-intuitive situations, one model cannot solve all problems related to a particular class (e.g., pattern recognition).

These facts motivated us to develop an asymmetric similarity measure between two IFSs. To this end, we propose a measure of the accuracy of an intuitionistic fuzzy set relative to a given intuitionistic fuzzy set. We consider this accuracy measure a generalization of the knowledge measure. The novelties of this paper are summarized as follows:

  • We introduce a new knowledge measure of an IFS and demonstrate its superiority in certain situations.

  • The application of knowledge measure in MADM problem is presented.

  • We propose the notion of accuracy of an IFS B relative to a given IFS A. The effectiveness of the accuracy measure is tested in pattern recognition problems. The results show that some similarity/dissimilarity measures are unable to differentiate certain IFSs, but the accuracy measure can do so.

  • The accuracy measure captures some essential features of IFSs. So, we prove some properties of accuracy measure.

The remainder of the paper is organized as follows: Section 2 presents some preliminaries related to IFSs. In Section 3, we propose a new knowledge measure and prove some of its properties. To explore the linguistic aspect of the knowledge measure and its effectiveness in MADM problems, some comparative empirical studies are presented in Section 4. In Section 5, the application of the knowledge measure is discussed in an MADM problem with completely unknown/incomplete criteria weight information. Section 6 introduces a generalized version of the knowledge measure presented in Section 3 and discusses its effectiveness in certain situations. In Section 7, we prove some characterization theorems for the knowledge measure. Section 8 provides an axiomatic framework to define an accuracy measure of an IFS B relative to a given IFS A, together with proofs of some related results. In Section 9, we discuss the applicability and efficiency of the accuracy measure in problems related to pattern recognition. Finally, Section 10 concludes and discusses the scope for future research.

2. Preliminaries

Zadeh 1 introduced the notion of fuzzy sets as follows.

Definition 1

Let X = {x1, x2, ..., xn} be a universal set; then a fuzzy subset A of X is defined as

A = {(x, ξA(x)) | x ∈ X},

where ξA : X → [0, 1] is the membership function. The value ξA(x) describes the extent of presence of x ∈ X in A.

Atanassov 24 generalized the notion of FS as follows.

Definition 2

Let X be the universe of discourse. Then an intuitionistic fuzzy set A in X is given by

A = {⟨x, ξA(x), ηA(x)⟩ | x ∈ X},

where ξA : X → [0, 1] and ηA : X → [0, 1] are the membership and non-membership functions, subject to the condition

0 ⩽ ξA(x) + ηA(x) ⩽ 1, ∀ x ∈ X.

The values ξA(x) and ηA(x) denote the degree of belongingness and the degree of non-belongingness of x to A, respectively. In addition, for each IFS A in X, if

πA(x) = 1 − ξA(x) − ηA(x),

then πA(x) is called the hesitancy margin/degree of x to A.
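As a quick sanity check on Definition 2, the constraint and the hesitation margin can be encoded directly; this is a minimal sketch, and the helper names are ours, not from the paper.

```python
def is_valid_ifs_element(mu, nu, eps=1e-9):
    """Check 0 <= mu, 0 <= nu, and mu + nu <= 1 for one element of an IFS."""
    return 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0 + eps

def hesitation(mu, nu):
    """Hesitancy margin pi = 1 - mu - nu (Definition 2)."""
    assert is_valid_ifs_element(mu, nu)
    return 1.0 - mu - nu

# The migratory-birds example from the introduction:
pi = hesitation(0.55, 0.34)  # ≈ 0.11
```

For the pair (0.55, 0.34) from the introduction, the remaining mass 0.11 is exactly the hesitation degree.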

Definition 3

Let IFS(X) denote the family of all IFSs over a universe of discourse X, and let A, B ∈ IFS(X) be given by

  • A = {(x, ξA(x), ηA(x)) | x ∈ X},

  • B = {(x, ξB(x), ηB(x)) | x ∈ X}.

The operations defined on IFS(X) are given, for every x ∈ X, as:

  • A ⊆ B if and only if ξA(x) ⩽ ξB(x) and ηA(x) ⩾ ηB(x);

  • A = B if and only if A ⊆ B and B ⊆ A;

  • Ac = {(x, ηA(x), ξA(x)) | x ∈ X};

  • A ∪ B = {(x, ξA(x) ∨ ξB(x), ηA(x) ∧ ηB(x)) | x ∈ X};

  • A ∩ B = {(x, ξA(x) ∧ ξB(x), ηA(x) ∨ ηB(x)) | x ∈ X}.
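The operations of Definition 3 can be sketched on IFSs represented element-wise as lists of (membership, non-membership) pairs; this encoding and the function names are our illustration, not notation from the paper.

```python
def ifs_complement(A):
    """A^c swaps membership and non-membership at every point."""
    return [(nu, mu) for (mu, nu) in A]

def ifs_union(A, B):
    """A ∪ B: pointwise max of memberships, min of non-memberships."""
    return [(max(ma, mb), min(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def ifs_intersection(A, B):
    """A ∩ B: pointwise min of memberships, max of non-memberships."""
    return [(min(ma, mb), max(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def ifs_subset(A, B):
    """A ⊆ B iff mu_A <= mu_B and nu_A >= nu_B at every point."""
    return all(ma <= mb and na >= nb for (ma, na), (mb, nb) in zip(A, B))

A = [(0.1, 0.8), (0.3, 0.5)]
B = [(0.4, 0.5), (0.6, 0.2)]
```

For this pair, A ⊆ B, so A ∪ B equals B and A ∩ B equals A, as the definitions require.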

Szmidt and Kacprzyk 19 proposed the axiomatic definition of entropy of IFS as follows.

Definition 4

An entropy on IFS(X) is a real-valued function E : IFS(X) → [0, 1] satisfying the following four axioms:

(P1) E(A) = 0 iff A is a crisp set; that is, ξA(xi) = 0, ηA(xi) = 1 or ξA(xi) = 1, ηA(xi) = 0 for all xi ∈ X;
(P2) E(A) = 1 iff ξA(xi) = ηA(xi) for all xi ∈ X;
(P3) E(A) ⩽ E(B) if A is less fuzzy than B, that is, if ξA(xi) ⩽ ξB(xi) and ηA(xi) ⩾ ηB(xi) for ξB(xi) ⩽ ηB(xi), or if ξA(xi) ⩾ ξB(xi) and ηA(xi) ⩽ ηB(xi) for ξB(xi) ⩾ ηB(xi), for any xi ∈ X;
(P4) E(A) = E(Ac).

Montes et al. 42 proposed the following definition of divergence measure between two IFSs.

Definition 5

Let X be a finite universe, and let IFS(X) denote the set of all intuitionistic fuzzy sets on X. A map DIF : IFS(X) × IFS(X) → ℝ is a divergence measure for IFSs if for every A, B, C ∈ IFS(X) it fulfills the following properties:

  • (D1) DIF(A, B) = DIF(B, A);

  • (D2) DIF(A, A) = 0;

  • (D3) DIF(A ∩ C, B ∩ C) ⩽ DIF(A, B);

  • (D4) DIF(A ∪ C, B ∪ C) ⩽ DIF(A, B).

Dengfeng and Cheng 13 introduced the following definition of intuitionistic fuzzy similarity measure.

Definition 6

A function S : IFS(X) × IFS(X) → [0, 1] is called a similarity measure, if S has the following properties:

(SM1) S(A, B) = S(B, A), ∀ A, B ∈ IFS(X);
(SM2) S(A, B) = 1 iff A = B;
(SM3) 0 ⩽ S(A, B) ⩽ 1, ∀ A, B ∈ IFS(X);
(SM4) ∀ A, B, C ∈ IFS(X), if A ⊆ B ⊆ C, then S(A, B) ⩾ S(A, C) and S(B, C) ⩾ S(A, C).

Guo 28 proposed the following axiomatic definition of a knowledge measure of an IFS.

Definition 7

A mapping K : IFS(X) → [0, 1] is called a knowledge measure on A ∈ IFS(X) if K has the following properties:

(KPAIFS1) K(A) = 1 iff A is a crisp set;
(KPAIFS2) K(A) = 0 iff πA(xi) = 1 ∀ xi ∈ X;
(KPAIFS3) K(A) ⩾ K(B) if A is less fuzzy than B, i.e., A ⊆ B for ξB(xi) ⩽ ηB(xi) ∀ xi ∈ X or A ⊇ B for ξB(xi) ⩾ ηB(xi) ∀ xi ∈ X;
(KPAIFS4) K(Ac) = K(A).

For example, the following two knowledge measures satisfy the axiomatic requirements of a knowledge measure:

K(A) = (1/n) ∑_{i=1}^{n} (1 − 0.5[E(xi) + π(xi)])   (Szmidt et al. 27),

KG(A) = 1 − (1/(2n)) ∑_{i=1}^{n} (1 − |ξA(xi) − ηA(xi)|)(1 + πA(xi))   (Guo 28).
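Guo's measure KG, as written above, can be sketched directly; the function name is ours. As a check, it can be evaluated on the set A ("LARGE") used later in Example 1, for which Table 1 reports the value 0.788.

```python
def k_guo(A):
    """Guo's 28 knowledge measure: 1 - (1/(2n)) * sum (1 - |mu - nu|)(1 + pi),
    with pi = 1 - mu - nu, over elements (mu, nu) of the IFS."""
    n = len(A)
    total = sum((1 - abs(mu - nu)) * (1 + (1 - mu - nu)) for mu, nu in A)
    return 1 - total / (2 * n)

# The IFS A used in Example 1 (Section 4.1):
A = [(0.1, 0.8), (0.3, 0.5), (0.6, 0.2), (0.9, 0.0), (1.0, 0.0)]
print(round(k_guo(A), 3))  # 0.788
```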

In the next section, we propose a new knowledge measure of an intuitionistic fuzzy set A and study some of its properties.

3. Novel knowledge measure of IFS

Let A ∈ IFS(X); then we propose the following measure of the knowledge in the IFS A:

K(A) = (1/n) ∑_{i=1}^{n} (ξA²(xi) + ηA²(xi)).  (3)

First, we establish that (3) is a valid knowledge measure and then we prove its valuation property.

Theorem 1

K(A) is a valid knowledge measure.

Proof

We prove the axiomatic requirements (KPAIFS1 − KPAIFS4).

(KPAIFS1) Let A be a crisp set. This implies ξA(xi) = 1 or ηA(xi) = 1 ∀ xi ∈ X; thus K(A) = 1. Conversely, let K(A) = 1; then

(1/n) ∑_{i=1}^{n} [ξA²(xi) + ηA²(xi)] = 1,

which, together with ξA(xi) + ηA(xi) ⩽ 1, means ξA(xi) = 1 or ηA(xi) = 1 ∀ xi ∈ X, and thus A is a crisp set.

(KPAIFS2) Let πA(xi) = 1 ∀ xi ∈ X. This implies ξA(xi) = ηA(xi) = 0 ∀ xi ∈ X; thus K(A) = 0. Conversely, let K(A) = 0; then algebraic manipulation leads to

(1/n) ∑_{i=1}^{n} [ξA²(xi) + ηA²(xi)] = 0 ⟹ ξA²(xi) + ηA²(xi) = 0 ∀ xi ∈ X.  (4)

Now, ξA(xi) ⩾ 0 and ηA(xi) ⩾ 0. Therefore, Eq. (4) yields ξA(xi) = ηA(xi) = 0 ∀ xi ∈ X, and hence πA(xi) = 1.

(KPAIFS3) We empirically test axiom (KPAIFS3) for K(A) by generating IFSs satisfying the conditions ξA(xi) ⩽ ξB(xi) ⩽ ηB(xi) ⩽ ηA(xi) and ξA(xi) ⩾ ξB(xi) ⩾ ηB(xi) ⩾ ηA(xi). In both situations we observe that K(A) ⩾ K(B).

(KPAIFS4) Follows from the definition of Ac.

Theorem 2

Let K(A) and K(B) be knowledge measures of IFSs A and B respectively. Then

K(A ∪ B) + K(A ∩ B) = K(A) + K(B).

Proof

We consider two cases.

Case1

First we consider the case when ξA(xi) ⩾ ξB(xi) ⩾ ηB(xi) ⩾ ηA(xi), 1 ⩽ i ⩽ n. We have

K(A ∪ B) + K(A ∩ B)
= (1/n) ∑_{i=1}^{n} [ξA∪B²(xi) + ηA∪B²(xi)] + (1/n) ∑_{i=1}^{n} [ξA∩B²(xi) + ηA∩B²(xi)]
= (1/n) ∑_{i=1}^{n} [(ξA(xi) ∨ ξB(xi))² + (ηA(xi) ∧ ηB(xi))²] + (1/n) ∑_{i=1}^{n} [(ξA(xi) ∧ ξB(xi))² + (ηA(xi) ∨ ηB(xi))²]
= (1/n) ∑_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/n) ∑_{i=1}^{n} [ξB²(xi) + ηB²(xi)]
= K(A) + K(B).

Case2

Now, we consider the case when ξA(xi) ⩽ ξB(xi) ⩽ ηB(xi) ⩽ ηA(xi), 1 ⩽ i ⩽ n.

Following similar steps as in Case 1, we get K(A ∪ B) + K(A ∩ B) = K(A) + K(B).

Hence, the proof follows.
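The valuation identity of Theorem 2 in fact holds pointwise for any pair of IFSs, since (a ∨ b)² + (a ∧ b)² = a² + b² for all a, b. The following random spot-check is our own harness, not part of the paper.

```python
import random

def k_new(A):
    """Proposed measure (3): mean of mu^2 + nu^2 over the elements."""
    return sum(mu * mu + nu * nu for mu, nu in A) / len(A)

def ifs_union(A, B):
    return [(max(ma, mb), min(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def ifs_intersection(A, B):
    return [(min(ma, mb), max(na, nb)) for (ma, na), (mb, nb) in zip(A, B)]

def rand_ifs(rng, n=4):
    """A random valid IFS: mu + nu <= 1 at every point."""
    out = []
    for _ in range(n):
        mu = rng.uniform(0, 1)
        out.append((mu, rng.uniform(0, 1 - mu)))
    return out

rng = random.Random(0)
for _ in range(200):
    A, B = rand_ifs(rng), rand_ifs(rng)
    lhs = k_new(ifs_union(A, B)) + k_new(ifs_intersection(A, B))
    assert abs(lhs - (k_new(A) + k_new(B))) < 1e-12
```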

4. Comparative studies

In this section, we present some comparative studies of the knowledge measure (3). In Subsection 4.1, we investigate the performance of (3) from the point of view of structured linguistic variables. We also compare the results with some entropies of intuitionistic fuzzy sets and two existing knowledge measures. Then, in Subsection 4.2, we investigate the effectiveness of K(A) in an MADM problem and compare the results with Mao et al. 43's intuitionistic fuzzy entropy method.

4.1. Effectiveness of new knowledge measure in structured linguistic framework

First, we examine the performance of our developed measure with the help of the example given below (adapted from Hung and Yang 11 and Guo 28).

Example 1

Let A = {⟨x, ξA(x), ηA(x)⟩ | x ∈ X} be an IFS in X. De et al. 44 defined the IFS Am, where m is any positive real number, as

Am = {⟨x, (ξA(x))^m, 1 − (1 − ηA(x))^m⟩ | x ∈ X}.

1) Let A be an IFS in X = {6, 7, 8, 9, 10}, which is given by

A = {⟨6, 0.1, 0.8⟩, ⟨7, 0.3, 0.5⟩, ⟨8, 0.6, 0.2⟩, ⟨9, 0.9, 0.0⟩, ⟨10, 1.0, 0.0⟩}.

Now, by using the above operation, we generate IFSs pertaining to A which are given below:

A0.5 = {⟨6, 0.316, 0.553⟩, ⟨7, 0.548, 0.293⟩, ⟨8, 0.775, 0.106⟩, ⟨9, 0.949, 0.0⟩, ⟨10, 1.0, 0.0⟩};
A2 = {⟨6, 0.010, 0.960⟩, ⟨7, 0.090, 0.750⟩, ⟨8, 0.360, 0.360⟩, ⟨9, 0.810, 0.0⟩, ⟨10, 1.0, 0.0⟩};
A3 = {⟨6, 0.001, 0.992⟩, ⟨7, 0.027, 0.875⟩, ⟨8, 0.216, 0.488⟩, ⟨9, 0.729, 0.0⟩, ⟨10, 1.0, 0.0⟩};
and
A4 = {⟨6, 0.000, 0.998⟩, ⟨7, 0.008, 0.938⟩, ⟨8, 0.130, 0.590⟩, ⟨9, 0.656, 0.0⟩, ⟨10, 1.0, 0.0⟩}.
De et al. 44 regarded A as LARGE in X by considering the characterization of linguistic variables. In the same way, the linguistic equivalents are:

  • A0.5 is More or less LARGE,

  • A2 is Very LARGE,

  • A3 is Quite very LARGE, and

  • A4 is Very very LARGE.

Now, taking into account the mathematical operations, the entropies of these IFSs should follow the order:

Entropy(A0.5) > Entropy(A) > Entropy(A2) > Entropy(A3) > Entropy(A4).  (5)

Now, from the structured linguistic point of view, the knowledge measures of these IFSs should follow the order:

K(A0.5) < K(A) < K(A2) < K(A3) < K(A4).  (6)

In this section, we consider the three entropies Esk, Eldl, Ezj, the knowledge measures Kskb and KAIFS, and the proposed knowledge measure K(A). Each knowledge measure is suitable in some situations, and we investigate the suitability of the knowledge measures based on the hesitation margin. We examine the performance of the knowledge measure (3) on these IFSs and interpret the results in the sense of the amount of knowledge associated with them. We also compare the knowledge-based results with the entropies of these IFSs, with the aim of showing the effectiveness of the considered measure. The comparative results are shown in Table 1.

IFSs Entropy Esk 19 Entropy Eldl 14 Entropy Ezj 23 Knowledge Kskb 27 Knowledge KAIFS 28 Knowledge K(A)
A0.5 0.319 0.471 0.249 0.794 0.785 0.668056
A 0.307 0.466 0.212 0.786 0.788 0.64
A2 0.301 0.390 0.266 0.783 0.805 0.68152
A3 0.212 0.317 0.095 0.827 0.854 0.713332
A4 0.176 0.278 0.046 0.844 0.877 0.732496
Table 1.

Comparative results by different models for IFSs pertaining to A
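The K(A) column of Table 1 can be reproduced from Eq. (3) and De et al.'s modifier operation; the sketch below (function names ours) recovers the entries for A, A2 and A3. (The entries for A0.5 and A4 in the table are computed from the rounded set values listed above, so exact powers give slightly different digits there.)

```python
def modifier(A, m):
    """De et al.'s 44 operator: A^m = {(mu^m, 1 - (1 - nu)^m)}."""
    return [(mu ** m, 1 - (1 - nu) ** m) for mu, nu in A]

def k_new(A):
    """Proposed knowledge measure, Eq. (3)."""
    return sum(mu * mu + nu * nu for mu, nu in A) / len(A)

A = [(0.1, 0.8), (0.3, 0.5), (0.6, 0.2), (0.9, 0.0), (1.0, 0.0)]
k_vals = {m: k_new(modifier(A, m)) for m in (0.5, 1, 2, 3, 4)}
# k_vals[1] ≈ K(A) = 0.64; K then increases from A^2 through A^4,
# matching the tail of ordering (6).
```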

From Table 1, we observe that requirement (5) is satisfied by all the entropies, while requirement (6) is satisfied only by Guo 28's knowledge measure. Now, on reducing the hesitation margin in set A, we obtain the set B, i.e.,

B = {⟨6, 0.1, 0.8⟩, ⟨7, 0.3, 0.5⟩, ⟨8, 0.5, 0.4⟩, ⟨9, 0.9, 0.0⟩, ⟨10, 1.0, 0.0⟩},

and construct Table 2.

IFSs Entropy Esk 19 Entropy Eldl 14 Entropy Ezj 23 Knowledge Kskb 27 Knowledge KAIFS 28 Knowledge K(A)
B0.5 0.345 0.508 0.285 0.787 0.767 0.6485786
B 0.374 0.502 0.305 0.763 0.761 0.642
B2 0.197 0.345 0.104 0.852 0.865 0.7241
B3 0.131 0.352 0.038 0.888 0.911 0.7824282
B4 0.109 0.200 0.016 0.899 0.926 0.8133984
Table 2.

Comparative results by different models for IFSs pertaining to B

From Table 2, we observe that requirement (5) is satisfied by all the entropies, while requirement (6) is not satisfied by any of the knowledge measures.

If we further reduce the hesitation margin, we obtain set C i.e.

C = {⟨6, 0.1, 0.8⟩, ⟨7, 0.3, 0.5⟩, ⟨8, 0.5, 0.5⟩, ⟨9, 0.9, 0.0⟩, ⟨10, 1.0, 0.0⟩},

and construct Table 3.

IFSs Entropy Esk 19 Entropy Eldl 14 Entropy Ezj 23 Knowledge Kskb 27 Knowledge KAIFS 28 Knowledge K(A)
C0.5 0.352 0.519 0.304 0.790 0.763 0.6556234
C 0.407 0.512 0.345 0.756 0.760 0.66
C2 0.168 0.328 0.093 0.878 0.883 0.75468
C3 0.110 0.229 0.035 0.907 0.923 0.812622
C4 0.095 0.179 0.015 0.913 0.934 0.8379872
Table 3.

Comparative results by different models for IFSs pertaining to C

a1 a2 a3 a4
x1 (0.7,0.2,0.1) (0.5,0.3,0.2) (0.6,0.1,0.3) (0.6,0.2,0.2)
x2 (0.7,0.3,0.0) (0.5,0.2,0.2) (0.7,0.2,0.1) (0.4,0.5,0.1)
x3 (0.6,0.4,0.0) (0.5,0.4,0.1) (0.5,0.3,0.1) (0.6,0.3,0.1)
x4 (0.8,0.1,0.1) (0.6,0.3,0.1) (0.3,0.4,0.3) (0.6,0.2,0.2)
x5 (0.6,0.2,0.2) (0.7,0.4,0.3) (0.6,0.1,0.2) (0.2,0.3,0.2)
Table 4.

Collective decision table

From Table 3, we observe that the preference order (6) is satisfied by the newly proposed knowledge measure. Similarly, there is still greater entropy for the IFS C in Table 3, from which it is clear that the measures Esk, Ezj, Kskb, and our developed model K(A) are doing well this time. In summary, remarkable among the entropies above is the measure Esk, which clearly outperforms the other ones throughout the process. As far as the knowledge measures are concerned, our developed measure K(A) performs better throughout the process. Therefore, the model K(A) may be employed in various situations, theoretical or practical.

4.2. Effectiveness of new knowledge measure in multiple-attribute decision-making

We examine the performance of the new knowledge measure with the help of an example adapted from Section 5.2 of Mao et al. 43.

Example 2

An investment company plans to invest some money in the best fund. Five possible funds (x1, x2, x3, x4 and x5) satisfy the requirements under four attributes (a1, a2, a3 and a4); in order to choose the best fund, the company makes some evaluations of these funds. The results are given using intuitionistic fuzzy sets in Table 5 of Mao et al. 43, under the condition that the attributes are of benefit type. The weight associated with attribute aj is given by

Wj = (1 − E(aj)) / ∑_{k=1}^{4} (1 − E(ak)), (1 ⩽ j ⩽ 4);  (7)

where E(aj) = ∑_{i=1}^{5} E(xi, aj), and E(xi, aj) is the intuitionistic fuzzy entropy of each object under each attribute. The score function for each alternative is given by

S(xi) = ∑_{j=1}^{4} (ξ(xi, aj) − η(xi, aj)) × Wj, (1 ⩽ i ⩽ 5);  (8)

where ξ(xi, aj) and η(xi, aj) represent the membership and non-membership degrees of object xi under attribute aj. Since the knowledge measure is considered a dual of the entropy of an intuitionistic fuzzy set, in the context of knowledge measures we can modify Eq. (7) as follows:

Wj = K(aj) / ∑_{k=1}^{4} K(ak), j = 1, 2, 3, 4.  (9)

Using the data of Table 5 of Mao et al. 43 and Eq. (9), the weights of the attributes a1, a2, a3 and a4 are as follows:

W = {0.3186682521, 0.2068965517, 0.2366230678, 0.2378121284}.

Consequently, using (8) we obtain scores of alternatives x1, x2, x3, x4, x5 as follows : S(x1) = 0.4141, S(x2) = 0.3316, S(x3) = 0.2031, S(x4) = 0.4039 and S(x5) = 0.3377.

Here, we have S(x1) > S(x4) > S(x5) > S(x2) > S(x3). The preference order of the five alternatives is completely in agreement with the results obtained by Mao et al. 43. Therefore, the measure (3) is effective in multi-attribute decision-making problems. In the next section, we apply K(A) to an MADM problem with completely unknown and partially known weight information.

5. A knowledge measure based approach for MADM problem

Problem Formulation:

Let A = {a1, a2, ..., an} be the set of n attributes, X = {x1, x2, ..., xm} the set of m alternatives, and W = {w1, w2, ..., wn}, wj ∈ [0, 1], the weight vector of the attributes, which is not predefined.

Suppose the IFS aij = (ξij, ηij) denotes the expert's assessment corresponding to the jth attribute aj (j = 1, 2, ..., n) in the evaluation of the ith alternative xi, i = 1, 2, ..., m.

The decision matrix corresponding to above mentioned situations is as follows:

M =
        a1   a2   ...  an
  x1 ( a11  a12  ...  a1n
  x2   a21  a22  ...  a2n
  ...  ...  ...  ...  ...
  xm   am1  am2  ...  amn )

We discuss the MADM problem in two situations: first, when the weights of the attributes are completely unknown, and second, when we have partial information about the weights of the attributes.

5.1. A MADM model based on knowledge measure when there is no information about weights

The main steps of the model are as follows:

  • Step 1

    Let

    M = [aij]m×n
    be the intuitionistic fuzzy decision matrix (as described in the problem statement). Applied to the column of attribute aj, our proposed knowledge measure takes the form
    K(aj) = (1/m) ∑_{i=1}^{m} (ξij² + ηij²); j = 1, 2, ..., n.  (11)

  • Step 2

    The weights associated with attribute aj can be obtained using equation (9).

  • Step 3

    After evaluating the weights of attributes, the weighted aggregated values for each alternative are obtained by the weighted intuitionistic fuzzy arithmetic mean operator 48 as follows:

    Si = S(xi) = (1 − ∏_{j=1}^{n} (1 − ξij)^wj, ∏_{j=1}^{n} ηij^wj)  (12)
    = (ξi, ηi), say.

  • Step 4

    Finally, to rank the alternative we compute the score of each alternative as follows 45:

    Score(Si) = ξi − ηi; i = 1, 2, ..., m.  (13)
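The four steps above can be sketched end-to-end; this is our illustrative implementation (function names ours) of the knowledge values, the weights of Eq. (9), the weighted intuitionistic fuzzy arithmetic mean, and the score, applied to the decision matrix used in Example 3 below.

```python
def attribute_knowledge(column):
    """Step 1: K(a_j) = (1/m) * sum(mu^2 + nu^2) over the attribute column."""
    return sum(mu * mu + nu * nu for mu, nu in column) / len(column)

def attribute_weights(matrix):
    """Step 2: normalize the per-attribute knowledge values, Eq. (9)."""
    cols = list(zip(*matrix))
    ks = [attribute_knowledge(c) for c in cols]
    total = sum(ks)
    return [k / total for k in ks]

def aggregate(row, w):
    """Step 3: weighted intuitionistic fuzzy arithmetic mean, Eq. (12)."""
    prod_mu, prod_nu = 1.0, 1.0
    for (mu, nu), wj in zip(row, w):
        prod_mu *= (1 - mu) ** wj
        prod_nu *= nu ** wj
    return (1 - prod_mu, prod_nu)

def score(pair):
    """Step 4: Score = mu - nu."""
    return pair[0] - pair[1]

# Decision matrix of Example 3 (rows = alternatives x1..x5).
M = [
    [(0.7, 0.2), (0.5, 0.3), (0.6, 0.1), (0.6, 0.2)],
    [(0.7, 0.3), (0.5, 0.2), (0.7, 0.2), (0.4, 0.5)],
    [(0.6, 0.4), (0.5, 0.4), (0.5, 0.3), (0.6, 0.3)],
    [(0.8, 0.1), (0.6, 0.3), (0.3, 0.4), (0.6, 0.2)],
    [(0.6, 0.2), (0.4, 0.3), (0.7, 0.1), (0.5, 0.3)],
]
w = attribute_weights(M)  # ≈ (0.3187, 0.2069, 0.2366, 0.2378)
scores = [score(aggregate(r, w)) for r in M]
```

The weight vector matches the one reported in Examples 2 and 3; the scores are computed from Eq. (12) exactly as stated.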

Example 3 (adapted from Mao et al. 43)

Consider the case of five alternatives with four attributes. Let the decision matrix be M=

        a1         a2         a3         a4
  x1 ( (0.7,0.2)  (0.5,0.3)  (0.6,0.1)  (0.6,0.2)
  x2   (0.7,0.3)  (0.5,0.2)  (0.7,0.2)  (0.4,0.5)
  x3   (0.6,0.4)  (0.5,0.4)  (0.5,0.3)  (0.6,0.3)
  x4   (0.8,0.1)  (0.6,0.3)  (0.3,0.4)  (0.6,0.2)
  x5   (0.6,0.2)  (0.4,0.3)  (0.7,0.1)  (0.5,0.3) )
  • Step 1

    We obtain the knowledge associated with each attribute using (11).

    • K(a1) = 0.536, K(a2) = 0.348, K(a3) = 0.398, K(a4) = 0.4.

  • Step 2

    We calculate the weights of attributes using (9).

    • w1 = 0.3186682521, w2 = 0.2068965517, w3 = 0.2366230678, w4 = 0.2378121284

  • Step 3

    We compute weighted aggregated value of each alternative using (12). We obtain the values

    • S1 = (0.382205, 0.1846),

    • S2 = (0.393196, 0.282996),

    • S3 = (0.441613, 0.348967) and

    • S4 = (0.366134, 0.205478).

  • Step 4

    We obtain score of alternatives by using (13).

    • Score(S1) = 0.197605,

    • Score(S2) = 0.110201,

    • Score(S3) = 0.092646,

    • Score(S4) = 0.160656

    • and Score(S5) = 0.225238.

The preference order in this case is
  • S5 > S1 > S4 > S2 > S3.

Hence x5 has the maximum score and is therefore the best alternative.

5.2. Knowledge measure based MADM model when partial information regarding attribute weights is available

In problems related to MADM, the determination of attribute weights is very important. In real-life situations, we also come across certain problems in which only partial information regarding attribute weights is available. That is, for the weight vector W = (w1, w2, ..., wn), some constraints are given.

To compute the weights of the attributes in such a case, we use the principle of maximum knowledge as a dual to the principle of minimum entropy suggested by Wang and Wang 47. We suggest a method to compute the attribute weight vector by means of the proposed intuitionistic fuzzy knowledge measure: the greater the knowledge measure, the greater the amount of attribute assessment information. Hence, we use the maximum-knowledge principle to obtain the weight vector of the attributes by solving the following programming model.

  • Step 1

    The knowledge-based decision matrix Mk = (Kij)m×n is derived from the decision matrix M. First we compute the knowledge matrix

    Mk =
            a1   a2   ...  an
      x1 ( K11  K12  ...  K1n
      x2   K21  K22  ...  K2n
      ...  ...  ...  ...  ...
      xm   Km1  Km2  ...  Kmn )

    where
    Kij = K(aij) = ξij² + ηij².  (14)

  • Step 2

    We consider each alternative in a fair competitive environment; the weight coefficients with respect to the same attribute must be equal. Let H be the set of partial information about the attribute weights. To obtain the optimal weights, we construct the following linear programming model:

    Max. K(W) = ∑_{j=1}^{n} ∑_{i=1}^{m} Kij wj  (15)

    subject to

    ∑_{j=1}^{n} wj = 1.

    Solving (15) along with all the constraints given in H, we get the optimal solution
    W = (w1, w2, ..., wn)ᵀ.

  • Step 3

    By using (12), we obtain the weighted aggregated value of each alternative.

  • Step 4

    By using (13), we obtain the scores.
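When the constraint set H consists only of interval bounds on the weights, the linear program in Step 2 has a special structure (a linear objective over a box intersected with the normalization ∑ wj = 1) and can be solved greedily: start every weight at its lower bound, then pour the remaining mass into attributes in decreasing order of their knowledge coefficients. The sketch below is our illustration on hypothetical bounds, not the H of Example 4; a general H requires an LP solver.

```python
def solve_weight_lp(coeffs, lower, upper):
    """Maximize sum(c_j * w_j) s.t. lower_j <= w_j <= upper_j and sum w_j = 1.

    Greedy solution valid for box bounds plus one normalization constraint.
    Assumes sum(lower) <= 1 <= sum(upper); otherwise the model is infeasible.
    """
    n = len(coeffs)
    w = list(lower)
    remaining = 1.0 - sum(lower)
    if remaining < 0 or sum(upper) < 1.0:
        raise ValueError("infeasible bounds")
    # Fill toward the largest objective coefficients first.
    for j in sorted(range(n), key=lambda j: -coeffs[j]):
        add = min(upper[j] - w[j], remaining)
        w[j] += add
        remaining -= add
    return w

# Toy instance using the objective coefficients 2.68, 1.73, 1.75, 2.0 from
# Example 4 but with hypothetical (feasible) bounds:
c = [2.68, 1.73, 1.75, 2.0]
w = solve_weight_lp(c, [0.1, 0.1, 0.1, 0.1], [0.5, 0.5, 0.5, 0.5])
```

Here the mass goes first to a1 (largest coefficient) up to its upper bound, then to a4, giving w ≈ (0.5, 0.1, 0.1, 0.3).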

Example 4

Consider the intuitionistic fuzzy decision matrix given in example 3.

  • Step 1

    We obtain the knowledge based matrix as follows:

    Mk =
           a1    a2    a3    a4
     x1 ( 0.53  0.34  0.37  0.40
     x2   0.58  0.29  0.29  0.41
     x3   0.52  0.40  0.34  0.45
     x4   0.65  0.45  0.25  0.40
     x5   0.40  0.25  0.50  0.34 )

  • Step 2

    Let the partial information regarding the attribute weights be given by the set H as follows:

    H = {0.25 ⩽ w1 ⩽ 0.75, 0.35 ⩽ w2 ⩽ 0.60, 0.30 ⩽ w3 ⩽ 0.35, 0.40 ⩽ w4 ⩽ 0.45, w1 + w2 + w3 + w4 = 1}.

    Now, by using the model we have

    Max. K(W) = 2.68w1 + 1.73w2 + 1.75w3 + 2w4

    subject to W ∈ H and wj ⩾ 0, j = 1, 2, 3, 4.

    Solving this model using MATLAB, we get

    w1 = 0.25, w2 = 0.35, w3 = 0.30 and w4 = 0.45.

  • Step 3

    We obtain the weighted aggregated scores as follows:

    • S1 = (0.292055, 0.106589),

    • S2 = (0.321532, 0.190321),

    • S3 = (0.335562, 0.233925),

    • S4 = (0.288692, 0.135854)

    • and S5 = (0.339266, 0.127925).

  • Step 4

    Finally, to rank the alternatives, we determine the score of each alternative using (13): Score(S1) = 0.185466, Score(S2) = 0.131211, Score(S3) = 0.101637, Score(S4) = 0.152838 and Score(S5) = 0.211331.

    Therefore, the preference order is

    S5 > S1 > S4 > S2 > S3.

    Hence x5 is the best alternative.

However, the multi-attribute decision-making problem discussed in this section can also be solved by entropy methods in an intuitionistic fuzzy environment. The knowledge measure based approach to MADM problems may be preferred for the following reasons.

  • The lower computational complexity of the knowledge measure.

  • For structured linguistic variables, the proposed knowledge measure is more effective on an intuitionistic fuzzy set with a small hesitation margin (Section 4).

It is quite natural that in some problems the scores of two or more alternatives may be equal. To handle such problems and obtain a strict preference order among alternatives, we introduce a one-parametric generalization of (3) in the next section and study its effectiveness in certain situations.

6. Generalized knowledge measure

We propose the following generalized knowledge measure

Kα(A) = (1/(n(α − 1))) Σ_{i=1}^{n} (ξA^α(xi) + ηA^α(xi)),  α > 1.
For α = 2, we recover K(A).
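A direct implementation, with an IFS represented as a list of (ξ, η) pairs (the sample set below is hypothetical):

```python
def k_alpha(ifs, alpha):
    """Generalized knowledge measure K_alpha of an IFS given as (xi, eta) pairs."""
    assert alpha > 1
    n = len(ifs)
    return sum(xi ** alpha + eta ** alpha for xi, eta in ifs) / (n * (alpha - 1))

# Hypothetical IFS over a two-element universe.
A = [(0.4, 0.2), (0.7, 0.1)]
print(k_alpha(A, 2))    # equals K(A) = (1/n) * sum(xi^2 + eta^2)
print(k_alpha(A, 1.5))
```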

Theorem 3

Kα (A) is a valid knowledge measure.

Proof

Following the same steps as in the proof of Theorem 1, we can prove that Kα(A) is a valid knowledge measure.

Example 5

Consider the case of five alternatives xi, i = 1, 2, 3, 4, 5 and four attributes aj, j = 1, 2, 3, 4 with the following decision table (Table 4).

We construct generalized score function as follows:

Sα(xi) = Σ_{j=1}^{4} (ξ^α(xi, aj) − η^α(xi, aj)) × Wj^α,  i = 1, 2, ..., 5,
where ξ(xi, aj) and η(xi, aj) are the membership and non-membership degrees of object xi under attribute aj. Let
Wj^α = Kα(aj) / Σ_{j=1}^{4} Kα(aj)
denote the generalized weight of attribute aj. Now we calculate the weights for different values of α by using (17) and (19). The corresponding weight vectors are given in Table 5.

α w1α w2α w3α w4α
2 0.3614 0.2526 0.2196 0.2113
1.1 0.2790 0.2631 0.2264 0.2313
1.3 0.2867 0.2618 0.2245 0.2269
1.5 0.2984 0.25985 0.2228 0.2224
1.7 0.3033 0.2572 0.2215 0.2179
1.9 0.3119 0.2542 0.2202 0.2135
Table 5.

Weight vectors corresponding to various values of α

Finally, we compute the score of each alternative for various values of α and obtain Table 6. We observe that for α = 2 the scores of alternatives x2 and x5 are equal, and hence we are not able to obtain a strict preference. However, when we change the value of α, we obtain a strict preference order among the alternatives. Moreover, it is interesting to note that the best alternative is the same in all cases, i.e., x1. Therefore, the generalized knowledge measure resolves a tie when a strict preference among alternatives is desirable.

α S1α S2α S3α S4α S5α Preference order
2.5 0.401427 0.300873 0.20052 0.370227 0.296702 S(x1) > S(x4) > S(x2) > S(x5) > S(x3)
2.3 0.403909 0.298322 0.199719 0.367647 0.295934 S(x1) > S(x4) > S(x2) > S(x5) > S(x3)
2.1 0.404051 0.293977 0.197544 0.36305 0.29322 S(x1) > S(x4) > S(x2) > S(x5) > S(x3)
2 0.42557 0.309027 0.204868 0.391358 0.309027 S(x1) > S(x4) > S(x2) = S(x5) > S(x3)
1.1 0.344604 0.231003 0.156131 0.290381 0.236164 S(x1) > S(x4) > S(x5) > S(x2) > S(x3)
1.3 0.367602 0.250657 0.169607 0.3138 0.254937 S(x1) > S(x4) > S(x5) > S(x2) > S(x3)
1.5 0.385738 0.267746 0.180808 0.334573 0.271038 S(x1) > S(x4) > S(x5) > S(x2) > S(x3)
1.7 0.394898 0.278479 0.188069 0.345909 0.280516 S(x1) > S(x4) > S(x5) > S(x2) > S(x3)
1.9 0.401279 0.287496 0.193754 0.355985 0.288214 S(x1) > S(x4) > S(x5) > S(x2) > S(x3)
Table 6.

Preference orders for various values of α

Now, in the next section, we prove some characterization results to obtain a general framework for defining new knowledge measures.

7. Characterization of knowledge measure

Theorem 4

Let G : [0, 1]² → [0, ∞] be a mapping. Then the function KIF : IFS(X) → [0, 1] defined by

KIF(A) = (1/n) Σ_{i=1}^{n} G(ξi, ηi)    (*)

satisfies axioms (KAIFS1)–(KAIFS4) if and only if G satisfies the following conditions:
  1. (1)

    G(x,y)=0 with x+y ⩽ 1 if and only if x=y=0,

  2. (2)

    G(x,y)=1 with x+y ⩽ 1 if and only if either x=0, y=1 or x=1, y=0,

  3. (3)

    G(x,y)=G(y,x),

  4. (4)

    G(x,y)⩾ G(z,w) if x⩽z⩽w⩽y or x⩾z⩾w⩾y.

Proof

Suppose KIF(A), as given in (*), satisfies axioms (KAIFS1)–(KAIFS4). We first show that G satisfies (1). If G(x, y) = 0 for some x, y ∈ [0, 1] with x + y ⩽ 1, then we can take ξi = x and ηi = y for every i = 1, 2, ..., n. Then we have

KIF(A) = (1/n) Σ_{i=1}^{n} G(ξi, ηi) = 0.

From axiom (KAIFS2), this can happen iff x = y = 0, which we wanted to show.

Now suppose that G(x, y) = 1 with x + y ⩽ 1; x, y ∈ [0, 1]. Further, we consider the IFS given by ξi = x and ηi = y for every i = 1, 2, ..., n. This gives

KIF(A) = (1/n) Σ_{i=1}^{n} G(ξi, ηi) = 1.

From axiom (KAIFS1), this can happen iff either x = 0, y = 1 or x = 1, y = 0, which we wanted to show.

In the context of (3), suppose that there exist x, y ∈ [0, 1] such that x + y ⩽ 1 but G(x, y) ≠ G(y, x). Without loss of generality, we can assume that G(x, y) > G(y, x). Consider the IFS A defined by ξi = x and ηi = y. Then we get

KIF(A) = (1/n) Σ_{i=1}^{n} G(ξi, ηi) > (1/n) Σ_{i=1}^{n} G(ηi, ξi) = KIF(A^c),

which contradicts axiom (KAIFS3). Finally, (4) can be shown by a similar argument. The converse is an easy calculation.

Theorem 5

Let M : [0, 1]² → [0, 1] be a function such that M(x, ·) : [0, 1] → [0, 1] is strictly decreasing for every x ∈ [0, 1], satisfying the following conditions:

  1. (a)

    Symmetry:M(x, y) = M(y, x),

  2. (b)

    Idempotency : M(x, x) = x, ∀x ∈ [0, 1],

  3. (c)

    Boundary condition: M(x, 0) = x.

Let f : [0, 1] → [0, 1] be a mapping. Then the function G(x, y) = M(f(x), f(y)) satisfies properties 1–4 of Theorem 4 if the following properties hold:

  1. (i)

    f (x) = 0 iff x = 0,

  2. (ii)

    f (x) = 1 iff x = 1,

  3. (iii)

    f is monotonic increasing in [0,1].

Proof

Suppose G(x, y) = M(f(x), f(y)) satisfies properties 1–4 of Theorem 4. From property (1) of Theorem 4, we have

G(0, 0) = 0 ⟹ M(f(0), f(0)) = 0.

Using idempotency of M, we have

f(0)=0.

Conversely, if possible, suppose there exists x ≠ 0 such that f(x) = 0.

Then

G(x, 0) = M(f(x), f(0)) = M(f(x), 0) = f(x) = 0.

Therefore, G(x, 0) = 0 with x ≠ 0, which contradicts condition (1) of Theorem 4. This proves (i). Now,

G(1,0)=M(f(1),f(0))=M(f(1),0)=f(1).

From condition (2) of theorem 4 we have,

G(1, 0) = 1 ⟹ f(1) = 1.

Conversely, let y0 (≠ 1) be such that f(y0) = 1.

Then,

G(y0, 0) = M(f(y0), f(0)) = M(f(y0), 0) = f(y0) = 1.
Thus G(y0, 0) = 1 with y0 ≠ 1, which contradicts condition (2) of Theorem 4.

Therefore, f (x) = 1 iff x = 1.

This proves (ii).

Next, let x, y ∈ [0, 1].

If possible, suppose that f is not monotonically increasing on [0, 1]. Then there exist x ⩽ y such that f(x) > f(y).

Now, G(x, 1 − y) = M(f (x), f(1 − y)) and G(y, 1 − y) = M(f (y), f (1 − y)).

Then, since M is symmetric and M(x, ·) is strictly decreasing, M(f(x), f(1 − y)) < M(f(y), f(1 − y)), i.e.,

G(x, 1 − y) < G(y, 1 − y).

This contradicts condition (4) of theorem 4.

Therefore, f is monotonically increasing on [0, 1].

8. A measure of accuracy

The amount of knowledge in an IFS A can be regarded as the amount of accuracy in A. Intuitively, if A is crisp then it must be absolutely accurate, and the value of knowledge/accuracy may numerically be taken as 1. Now suppose A and B are IFSs and we want to calculate the degree of accuracy of B relative to A, i.e., A is the benchmark for the accuracy of B. The unorthodoxy in this situation is that A itself may not be accurate (crisp). Thus, the intuitionistic argument says that the amount of accuracy in B is maximal when A and B are both crisp sets with ξA(xi) = ξB(xi), ηA(xi) = ηB(xi). Otherwise, B can attain at most as much accuracy as A, when A = B.

The accuracy of B relative to A is zero if we have no knowledge about A (i.e., ξA(xi) = ηA(xi) = 0). This type of measure of accuracy of B relative to A clearly differs from a measure of similarity S(A, B) between A and B due to the following facts:

  • S(A; B) is symmetric but 𝒜 (A, B) is not symmetric.

  • S(A; B) may equal 1 even when A and B are non-crisp intuitionistic fuzzy sets, whereas the accuracy 𝒜(A, B) ≠ 1 when A and B are non-crisp intuitionistic fuzzy sets.

Therefore, this type of accuracy measure seems to be important for pattern recognition problems.

Inspired by these characteristics of intuitionistic fuzzy sets, we introduce a notion of accuracy of an intuitionistic fuzzy set B relative to a reference intuitionistic fuzzy set A. The axiomatic definition of such an intuitive measure of accuracy is as follows:

Definition 8

Let A, B ∈ IFS(X) and let 𝒜 : IFS(X) × IFS(X) → [0, 1] be a mapping. 𝒜(A, B) is said to be the accuracy of B relative to A if it satisfies the following properties:

  • (A1) 0 ⩽ 𝒜 (A; B) ⩽ 1;

  • (A2) 𝒜 (A; B) = 0 iff πA(x) = 1;

  • (A3) 𝒜 (A; B) = 1 if both A and B are crisp sets and A = B, (ξA(x) = ξB(x), ηA(x) = ηB(x));

  • (A4) 𝒜 (A; B) = K(A) if A = B;

We propose a measure of accuracy in B relative to A as follows.

Let A and B be two intuitionistic fuzzy sets. The accuracy of the intuitionistic fuzzy set B relative to the intuitionistic fuzzy set A is given by

𝒜(A; B) = (1/2) K(A) + (1/2) C(A, B),
where K(A) = (1/n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] and C(A, B) = (1/n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)].

Remark 1.

A and B may not be similar at all but B may be accurate to some extent relative to A.

For example, if A = (0, 1), B = (1, 0) then S(A, B) = 0; 𝒜 (A; B) = 0.5.

where we consider the similarity measure S(A, B) = (1/n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] / (√(ξA²(xi) + ηA²(xi)) · √(ξB²(xi) + ηB²(xi))).

Remark 2.

There can be IFSs A and B with higher degree of similarity but very low degree of accuracy.

For example, if A = (0.4, 0.2), B = (0.5, 0.2) then S(A, B) = 0.9965; 𝒜(A; B) = 0.22.

where S(A, B) is the same as used in Remark 1.
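Both remarks can be checked directly; a sketch implementing 𝒜(A; B) = ½K(A) + ½C(A, B) together with the cosine-type similarity measure used above (IFSs as lists of (ξ, η) pairs):

```python
import math

def accuracy(A, B):
    """Accuracy of B relative to A: (1/2)K(A) + (1/2)C(A, B)."""
    n = len(A)
    K = sum(xi ** 2 + eta ** 2 for xi, eta in A) / n
    C = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(A, B)) / n
    return 0.5 * K + 0.5 * C

def similarity(A, B):
    """Cosine-type similarity averaged over the universe."""
    total = 0.0
    for (xa, ya), (xb, yb) in zip(A, B):
        total += (xa * xb + ya * yb) / (math.hypot(xa, ya) * math.hypot(xb, yb))
    return total / len(A)

# Remark 1: completely dissimilar, yet half-accurate.
print(similarity([(0, 1)], [(1, 0)]), accuracy([(0, 1)], [(1, 0)]))   # 0.0 0.5
# Remark 2: highly similar, yet low accuracy.
print(round(similarity([(0.4, 0.2)], [(0.5, 0.2)]), 4),
      round(accuracy([(0.4, 0.2)], [(0.5, 0.2)]), 2))                 # 0.9965 0.22
```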

Theorem 6

𝒜 is a valid measure of accuracy.

Proof

We verify the axioms (A1)–(A4).

  • (A1) This is obvious from the definition.

  • (A2) Let πA(xi) = 1. This implies ξA(xi) = ηA(xi) = 0 for all xi ∈ X; thus

    𝒜(A; B) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] = 0.

    On the other hand, let

    𝒜(A; B) = 0 ⟹ (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] = 0.

    This is possible iff each term in the sum on the LHS is zero, i.e., ξA²(xi) = 0, ηA²(xi) = 0 and ξA(xi)ξB(xi) = 0, ηA(xi)ηB(xi) = 0, which gives ξA(xi) = ηA(xi) = 0 and πA(xi) = 1.

  • (A3) Let A and B be crisp sets with A = B, so that either ξA(xi) = ξB(xi) = 1, ηA(xi) = ηB(xi) = 0 or ξA(xi) = ξB(xi) = 0, ηA(xi) = ηB(xi) = 1. Clearly, in both cases 𝒜(A; B) = 1. Hence (A3) holds.

  • (A4) We know that

    𝒜(A; B) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)].

    Now, when A = B, the second sum coincides with the first, i.e., C(A, B) = K(A), and hence
    𝒜(A; B) = (1/2)K(A) + (1/2)K(A) = K(A).
    Thus 𝒜(A; B) is a valid measure of accuracy.

Theorem 7

Let A, B, C be three intuitionistic fuzzy sets. Then 𝒜(A; B ∪ C) + 𝒜(A; B ∩ C) = 𝒜(A; B) + 𝒜(A; C).

Proof

Let

  • Z1 = {xi | xi ∈ X, ξB(xi) ⩾ ξC(xi) and ηB(xi) < ηC(xi)},

  • Z2 = {xi | xi ∈ X, ξB(xi) < ξC(xi) and ηB(xi) ⩾ ηC(xi)}.

where ξA(xi), ξB(xi) and ξC(xi) are the membership functions and ηA(xi), ηB(xi) and ηC(xi) are the non-membership functions of A, B and C respectively. Now
𝒜(A; B ∪ C) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξ_{B∪C}(xi) + ηA(xi)η_{B∪C}(xi)]
= (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z1} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] + (1/2n) Σ_{xi∈Z2} [ξA(xi)ξC(xi) + ηA(xi)ηC(xi)],
and
𝒜(A; B ∩ C) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξ_{B∩C}(xi) + ηA(xi)η_{B∩C}(xi)]
= (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z1} [ξA(xi)ξC(xi) + ηA(xi)ηC(xi)] + (1/2n) Σ_{xi∈Z2} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)].
Therefore, we have
𝒜(A; B ∪ C) + 𝒜(A; B ∩ C) = 𝒜(A; B) + 𝒜(A; C).

Theorem 8

Let A, B, C be three intuitionistic fuzzy sets then

𝒜(A ∪ B; C) + 𝒜(A ∩ B; C) = 𝒜(A; C) + 𝒜(B; C).

Proof

Let

  • Z1 = {xi | xi ∈ X, ξA(xi) ⩾ ξB(xi) and ηA(xi) < ηB(xi)},

  • Z2 = {xi | xi ∈ X, ξA(xi) < ξB(xi) and ηA(xi) ⩾ ηB(xi)}.

where ξA(xi), ξB(xi) and ξC(xi) are the membership functions and ηA(xi), ηB(xi) and ηC(xi) are the non-membership functions of A,B and C respectively. We have
𝒜(A ∪ B; C) = (1/2n) Σ_{i=1}^{n} [ξ_{A∪B}²(xi) + η_{A∪B}²(xi)] + (1/2n) Σ_{i=1}^{n} [ξ_{A∪B}(xi)ξC(xi) + η_{A∪B}(xi)ηC(xi)]
= (1/2n) Σ_{xi∈Z1} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z1} [ξA(xi)ξC(xi) + ηA(xi)ηC(xi)] + (1/2n) Σ_{xi∈Z2} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{xi∈Z2} [ξB(xi)ξC(xi) + ηB(xi)ηC(xi)],
and
𝒜(A ∩ B; C) = (1/2n) Σ_{i=1}^{n} [ξ_{A∩B}²(xi) + η_{A∩B}²(xi)] + (1/2n) Σ_{i=1}^{n} [ξ_{A∩B}(xi)ξC(xi) + η_{A∩B}(xi)ηC(xi)]
= (1/2n) Σ_{xi∈Z1} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{xi∈Z1} [ξB(xi)ξC(xi) + ηB(xi)ηC(xi)] + (1/2n) Σ_{xi∈Z2} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z2} [ξA(xi)ξC(xi) + ηA(xi)ηC(xi)].
Therefore,
𝒜(A ∪ B; C) + 𝒜(A ∩ B; C) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξC(xi) + ηA(xi)ηC(xi)] + (1/2n) Σ_{i=1}^{n} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{i=1}^{n} [ξB(xi)ξC(xi) + ηB(xi)ηC(xi)] = 𝒜(A; C) + 𝒜(B; C),
since the sums over Z1 and Z2 combine into sums over the whole of X.

Theorem 9

Let A, B, C be three intuitionistic fuzzy sets then

𝒜(A ∪ B; A ∩ B) + 𝒜(A ∩ B; A ∪ B) = 𝒜(A; B) + 𝒜(B; A).

Proof

Let

  • Z1 = {xi | xi ∈ X, ξA(xi) ⩾ ξB(xi) and ηA(xi) < ηB(xi)},

  • Z2 = {xi | xi ∈ X, ξA(xi) < ξB(xi) and ηA(xi) ⩾ ηB(xi)}.

where ξA(xi) and ξB(xi) are the membership functions and ηA(xi) and ηB(xi) are the non-membership functions of A and B, respectively. We have
𝒜(A ∪ B; A ∩ B) = (1/2n) Σ_{i=1}^{n} [ξ_{A∪B}²(xi) + η_{A∪B}²(xi)] + (1/2n) Σ_{i=1}^{n} [ξ_{A∪B}(xi)ξ_{A∩B}(xi) + η_{A∪B}(xi)η_{A∩B}(xi)]
= (1/2n) Σ_{xi∈Z1} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z1} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] + (1/2n) Σ_{xi∈Z2} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{xi∈Z2} [ξB(xi)ξA(xi) + ηB(xi)ηA(xi)],
and
𝒜(A ∩ B; A ∪ B) = (1/2n) Σ_{i=1}^{n} [ξ_{A∩B}²(xi) + η_{A∩B}²(xi)] + (1/2n) Σ_{i=1}^{n} [ξ_{A∩B}(xi)ξ_{A∪B}(xi) + η_{A∩B}(xi)η_{A∪B}(xi)]
= (1/2n) Σ_{xi∈Z1} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{xi∈Z1} [ξB(xi)ξA(xi) + ηB(xi)ηA(xi)] + (1/2n) Σ_{xi∈Z2} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{xi∈Z2} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)].
Therefore,
𝒜(A ∪ B; A ∩ B) + 𝒜(A ∩ B; A ∪ B) = (1/2n) Σ_{i=1}^{n} [ξA²(xi) + ηA²(xi)] + (1/2n) Σ_{i=1}^{n} [ξA(xi)ξB(xi) + ηA(xi)ηB(xi)] + (1/2n) Σ_{i=1}^{n} [ξB²(xi) + ηB²(xi)] + (1/2n) Σ_{i=1}^{n} [ξB(xi)ξA(xi) + ηB(xi)ηA(xi)] = 𝒜(A; B) + 𝒜(B; A).
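Theorems 7–9 can be spot-checked numerically using the standard IFS union (max ξ, min η) and intersection (min ξ, max η); a sketch with arbitrary sample sets:

```python
def union(A, B):
    """IFS union: max of memberships, min of non-memberships."""
    return [(max(xa, xb), min(ya, yb)) for (xa, ya), (xb, yb) in zip(A, B)]

def intersection(A, B):
    """IFS intersection: min of memberships, max of non-memberships."""
    return [(min(xa, xb), max(ya, yb)) for (xa, ya), (xb, yb) in zip(A, B)]

def acc(A, B):
    """Accuracy of B relative to A: (1/2)K(A) + (1/2)C(A, B)."""
    n = len(A)
    K = sum(x * x + y * y for x, y in A) / n
    C = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(A, B)) / n
    return 0.5 * (K + C)

# Arbitrary sample IFSs over a three-element universe.
A = [(0.5, 0.3), (0.2, 0.6), (0.9, 0.1)]
B = [(0.4, 0.4), (0.7, 0.2), (0.3, 0.5)]
C = [(0.6, 0.2), (0.1, 0.8), (0.5, 0.5)]

# Theorem 7
print(acc(A, union(B, C)) + acc(A, intersection(B, C)), acc(A, B) + acc(A, C))
# Theorem 8
print(acc(union(A, B), C) + acc(intersection(A, B), C), acc(A, C) + acc(B, C))
# Theorem 9
print(acc(union(A, B), intersection(A, B)) + acc(intersection(A, B), union(A, B)),
      acc(A, B) + acc(B, A))
```

Each identity holds exactly because max + min of two numbers equals their sum, so the sums over Z1 and Z2 recombine pointwise.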

Theorem 10

Let A, B be two intuitionistic fuzzy sets. Then

  1. (a)

    𝒜(A; B) = 𝒜(Ā; B̄),

  2. (b)

    𝒜 (A; Ā) = 𝒜 (Ā; A),

  3. (c)

    𝒜(A; B̄) = 𝒜(Ā; B),

  4. (d)

    𝒜(A; B) + 𝒜(Ā; B) = 𝒜(Ā; B̄) + 𝒜(A; B̄).

Proof

The proof of this theorem is an easy calculation.

9. Application of accuracy measure in pattern recognition

Problem formulation:

Let C1, C2, ..., Cn be some known patterns characterized by IFSs in the universal set Y = {z1, z2, ..., zk} as follows:

Ci = {⟨zj, ξCi(zj), ηCi(zj)⟩ | zj ∈ Y, j = 1, 2, ..., k}.
Let
B = {⟨zj, ξB(zj), ηB(zj)⟩ | zj ∈ Y, j = 1, 2, ..., k}
be an unknown pattern. The problem is to classify the pattern B into one of the known patterns Ci.

The solution of the problem can be obtained as follows:

  1. 1.

    Distance or Dissimilarity measure approach

    Let d(Ci; B) be the distance or dissimilarity of the pattern B from Ci. Then B is assigned to Ci*, where

    i* = arg min_{i=1,2,...,n} {d(Ci; B)}.

  2. 2.

    Similarity measure approach

    Let S(Ci; B) be the similarity of the pattern B to Ci. Then B is assigned to Ci*, where

    i* = arg max_{i=1,2,...,n} {S(Ci; B)}.

  3. 3.

    Accuracy measure approach

    Let 𝒜(Ci; B) be the accuracy of the pattern B relative to Ci. Then B is assigned to Ci*, where

    i* = arg max_{i=1,2,...,n} {𝒜(Ci; B)}.

Xiao et al. 41 conducted a comprehensive investigation of pattern recognition problems via distance/dissimilarity measures. Boran and Akay 7 gave a comparative study of various existing similarity measures in pattern recognition problems. In the comparative studies regarding distance/dissimilarity measures (Xiao et al. 41 and the references therein) and similarity measures (Boran and Akay 7 and the references therein), we observe that there is neither a distance/dissimilarity measure nor a similarity measure that suits every pattern recognition problem. This happens due to some counter-intuitive situations. Thus, a new distance/dissimilarity measure, similarity measure, or some alternative model is always desirable for pattern recognition problems. Our proposed accuracy measure is such an alternative and may be more effective than existing distance/dissimilarity and similarity measures in some pattern recognition problems. For the sake of comparative study and to demonstrate the effectiveness of the proposed accuracy measure, we consider examples from Boran and Akay 7 and apply the accuracy measure approach.

Example 6 (Boran and Akay 7)

Let C1, C2 and C3 be the IFSs representing three known patterns in the universal set Y = {z1, z2, z3, z4}, given below:

C1 = {⟨z1, 0.5, 0.3⟩, ⟨z2, 0.7, 0.0⟩, ⟨z3, 0.4, 0.5⟩, ⟨z4, 0.7, 0.3⟩},
C2 = {⟨z1, 0.5, 0.2⟩, ⟨z2, 0.6, 0.1⟩, ⟨z3, 0.2, 0.7⟩, ⟨z4, 0.7, 0.3⟩},
C3 = {⟨z1, 0.5, 0.4⟩, ⟨z2, 0.7, 0.1⟩, ⟨z3, 0.4, 0.6⟩, ⟨z4, 0.7, 0.2⟩}.

Our aim is to classify the unknown pattern, represented by the IFS B, into one of the patterns C1, C2 or C3. The unknown pattern B is given below:
B = {⟨z1, 0.4, 0.3⟩, ⟨z2, 0.7, 0.1⟩, ⟨z3, 0.3, 0.6⟩, ⟨z4, 0.7, 0.3⟩}.

For some existing distance measures, the degrees of dissimilarity/distance d(C1, B), d(C2, B) and d(C3, B) are calculated in Ref. 41. The results obtained are shown in Table 7.

Distances   d(C1, B)   d(C2, B)   d(C3, B)
dL [41]     0.1083     0.1208     0.0917
dnH [34]    0.0750     0.0750     0.1000
dE [35]     0.0866     0.0866     0.1118
lh [32]     0.0750     0.0750     0.0750
leh [49]    0.0750     0.0750     0.1000
d1 [35]     0.0625     0.0687     0.0625
d2p [35]    0.0500     0.0625     0.0500
dZ1 [24]    0.0500     0.0500     0.0500
dZ2 [24]    NaN        NaN        NaN
dfd [7]     0.050      0.062      0.0333
Table 7.

The distance between known and unknown patterns in Example 6 (patterns not discriminated are in bold italic) (p = 1 in d2p and t = 2, p = 1 in dfd)

From the results in Table 7, we observe that the distances dnH, dE, lh, leh, d1, d2p, dZ1, dZ2 and dfd are not able to classify the pattern B into one of the patterns Ci (i = 1, 2, 3), since d(Ci; B) takes the same value for more than one i (shown in bold italic). Only the distance dL can classify B into the pattern C3.

Boran and Akay 7 also consider the same example and use the similarity measure approach. The results are shown in Table 8 for various existing similarity measures.

Similarity measures   S(C1, B)   S(C2, B)   S(C3, B)
SC [30]     0.925    0.863    0.925
SH [33]     0.975    0.963    0.975
SL [31]     0.950    0.938    0.963
SO [15]     0.929    0.921    0.929
SDC [13]    0.950    0.938    0.975
SHB [46]    0.950    0.938    0.950
SeP [16]    0.950    0.938    0.950
SSP [16]    0.950    0.938    0.963
ShP [16]    0.958    0.954    0.958
SHY1 [12]   0.925    0.925    0.925
SHY2 [12]   0.886    0.886    0.886
SHY3 [12]   0.860    0.860    0.860
CIFS [36]   0.991    0.987    0.996
SEP [16]    0.950    0.938    0.967
Table 8.

The similarity between known and unknown patterns in Example 6 (patterns not discriminated are in bold italic) (p = 1 in SHB, SeP, SSP, ShP and p = 1, t = 2 in SEP)

We observe that the similarity measures SC, SH, SO, SHB, SeP, ShP, SHY1, SHY2 and SHY3 are unable to classify pattern B, while the similarity measures SL, SDC, SSP, CIFS and SEP can classify pattern B into the pattern C3.

Using both the distance/dissimilarity and the similarity measure approaches, the pattern B is classified into the pattern C3.

Now, we apply the accuracy measure approach. The accuracies of the pattern B relative to C1, C2 and C3 are as follows: 𝒜(C1, B) = 0.445, 𝒜(C2, B) = 0.4412 and 𝒜(C3, B) = 0.4538.

Therefore, the accuracy measure approach also classifies the pattern B into the pattern C3. Hence, our accuracy measure approach is effective in this pattern recognition problem.
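This computation can be reproduced directly from the formula 𝒜(Ci; B) = ½K(Ci) + ½C(Ci, B) of Section 8; the individual values obtained this way differ somewhat from the ones reported above, but the ranking C3 > C1 > C2, and hence the classification, agrees:

```python
def acc(A, B):
    """Accuracy of B relative to A: (1/2)K(A) + (1/2)C(A, B)."""
    n = len(A)
    K = sum(x * x + y * y for x, y in A) / n
    C = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(A, B)) / n
    return 0.5 * (K + C)

# Known patterns C1, C2, C3 and the unknown pattern B from Example 6.
patterns = {
    "C1": [(0.5, 0.3), (0.7, 0.0), (0.4, 0.5), (0.7, 0.3)],
    "C2": [(0.5, 0.2), (0.6, 0.1), (0.2, 0.7), (0.7, 0.3)],
    "C3": [(0.5, 0.4), (0.7, 0.1), (0.4, 0.6), (0.7, 0.2)],
}
B = [(0.4, 0.3), (0.7, 0.1), (0.3, 0.6), (0.7, 0.3)]

scores = {name: acc(Ci, B) for name, Ci in patterns.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)  # B is classified into C3
```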

Example 7 (Boran and Akay 7)

Let C1, C2 and C3 be the IFSs representing three known patterns in the universal set Y = {z1, z2, z3, z4}, given below:

C1 = {⟨z1, 0.8, 0.1⟩, ⟨z2, 0.5, 0.3⟩, ⟨z3, 0.5, 0.5⟩, ⟨z4, 0.6, 0.1⟩},
C2 = {⟨z1, 0.5, 0.4⟩, ⟨z2, 0.4, 0.2⟩, ⟨z3, 0, 0⟩, ⟨z4, 0.3, 0.4⟩},
C3 = {⟨z1, 0.6, 0.3⟩, ⟨z2, 0.7, 0.2⟩, ⟨z3, 0.6, 0.1⟩, ⟨z4, 0.2, 0.5⟩}.

The unknown pattern B is given below:
B = {⟨z1, 0.7, 0.2⟩, ⟨z2, 0.5, 0.2⟩, ⟨z3, 1, 0⟩, ⟨z4, 0.4, 0.3⟩}.

For some existing distance measures, the degrees of dissimilarity/distances d(C1, B), d(C2, B) and d(C3, B) are calculated by Xiao et al. 41. The results obtained are shown in Table 9.

Distances   d(C1, B)   d(C2, B)   d(C3, B)
dL [41]     0.3625     0.5792     0.3042
dnH [34]    0.2250     0.3500     0.2250
dE [35]     0.2784     0.5148     0.2345
lh [32]     0.2250     0.3500     0.2250
leh [49]    0.2250     0.3500     0.2250
d1 [35]     0.2188     0.2813     0.1937
d2p [35]    0.2125     0.2125     0.1625
dZ1 [24]    0.2350     0.3250     0.1625
dZ2 [24]    NaN        NaN        NaN
dfd [7]     0.2125     0.2125     0.1625
Table 9.

The distance between known and unknown patterns in Example 7 (patterns not discriminated are in bold italic) (p = 1 in d2p and t = 2, p = 1 in dfd)

From the results in Table 9, we observe that the distances dnH, lh, leh, d2p, dZ2 and dfd are not able to classify the pattern B into one of the patterns Ci (i = 1, 2, 3), since d(Ci; B) takes the same value for more than one i (shown in bold italic). The distances dL, dE, d1 and dZ1 are able to classify pattern B into the pattern C3.

We also consider the same example and use the similarity measure approach. The degrees of similarity for various measures are shown in Table 10.

Similarity measures   S(C1, B)   S(C2, B)   S(C3, B)
SC [30]     0.7875   0.7875   0.825
SH [33]     0.8625   0.7875   0.825
SL [31]     0.7875   0.7875   0.825
SO [15]     0.8461   0.8095   0.8661
SDC [13]    0.515    0.6125   0.5331
SHB [46]    0.7875   0.7875   0.825
SeP [16]    0.9813   0.9969   0.9919
SHY1 [12]   0.975    0.95     0.975
SHY2 [12]   0.9609   0.9228   0.9609
SHY3 [12]   0.9512   0.9048   0.9512
CIFS [36]   0.8926   NaN      0.9451
Table 10.

The similarity between known and unknown patterns in Example 7 (patterns not discriminated are in bold italic) (p = 1 in SHB, SeP and p = 1, t = 2 in SEP)

We observe that the similarity measures SC, SL, SHB, SHY1, SHY2, SHY3 and CIFS are unable to classify pattern B, while the similarity measures SH, SO, SDC and SeP are able to classify pattern B into the pattern C3.

Using both the distance/dissimilarity and the similarity measure approaches, the pattern B is classified into the pattern C3.

Now, we apply the accuracy measure approach. The accuracies of the pattern B relative to C1, C2 and C3 are as follows: 𝒜(C1, B) = 0.46625, 𝒜(C2, B) = 0.3725 and 𝒜(C3, B) = 0.47125.

Therefore, the accuracy measure approach also classifies the pattern B into the pattern C3. Hence, the accuracy measure approach is effective in this pattern recognition problem.

10. Conclusion and future studies

In this study, we have introduced a measure of the knowledge contained in an IFS and investigated the applicability and effectiveness of this knowledge measure in MADM problems. It has been observed that some existing knowledge measures are useful when the IFS has a large degree of hesitancy, while our proposed knowledge measure is useful in problems in which the IFSs have a small degree of hesitancy. We have also introduced an unorthodox measure of accuracy of an IFS relative to a given IFS. Further, we have shown the effectiveness and application of the proposed accuracy measure in pattern recognition problems through illustrative examples. The proposed accuracy measure has been found to be an effective alternative to similarity and dissimilarity measures in the study of pattern recognition problems. The usefulness and some potential applications of the intuitionistic fuzzy information measures presented in this work may be summarized as follows:

  1. 1).

    An optimization problem dealt with by fuzzy entropy or intuitionistic fuzzy entropy alone may provide better insight to the experts if a knowledge measure is also considered along with the entropy measure.

  2. 2).

    The accuracy measure between IFSs (an asymmetric similarity measure) may provide robust solutions to problems where asymmetric similarity measures are desired.

  3. 3).

    In some counter-intuitive situations, the proposed accuracy measure recognizes the pattern while some similarity measures are unable to do so.

  4. 4).

    The proposed accuracy measure can be applied in the problems of binary image segmentation.

Our future studies include:
  1. 1).

    Development of an aggregated or hybrid intuitionistic fuzzy information measure comprising intuitionistic fuzzy entropy and an intuitionistic fuzzy knowledge measure.

  2. 2).

    To apply the intuitionistic fuzzy accuracy measure to image segmentation problems.

  3. 3).

    Extension of the proposed knowledge measure and accuracy measure to interval-valued intuitionistic fuzzy sets/hesitant fuzzy sets.

Acknowledgements

The authors are grateful to the anonymous reviewers for their constructive suggestions that helped to bring this paper to its present form.

References

4.KT Atanassov, On intuitionistic fuzzy sets theory, Springer, Berlin, Germany, 2012.
7.FE Boran and D Akay, A biparametric similarity measure on intuitionistic fuzzy sets with applications to pattern recognition, Information Sciences, Vol. 255, 2014, pp. 45-57.
8.H Bustince, E Barrenechea, M Pangola, J Fernandez, C Guerra, and P Couto, Generalized Atanassov's intuitionistic fuzzy index: Construction of Atanassov's fuzzy entropy from fuzzy implication operators, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol. 19, 2011, pp. 305-315.
9.H Bustince and P Burillo, Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets, Fuzzy Sets and Systems, Vol. 19, 1996, pp. 51-69.
15.Y Li, C Zhongxian, and Y Degin, Similarity measures between vague sets and vague entropy, Journal of computer science, Vol. 29, 2002, pp. 129-132.
28.K Guo, Knowledge measure for Atanassov's intuitionistic fuzzy sets, IEEE Transactions on Fuzzy Systems, Vol. 24, 2016, pp. 1072-1078.
31.L Fan and X Zhangyan, Similarity measures between vague sets, Journal of systems and software, Vol. 12, 2001, pp. 922-927.
32.P Grzegorzewski, Distances between intuitionistic fuzzy sets and/or interval-valued fuzzy sets based on the Hausdorff metric, Mathematical and Computer Modelling, Vol. 53, 2011, pp. 91-97.
38.D Wu, J Lu, and G Zhang, A fuzzy tree matching-based personalized e-learning recommender system, IEEE transactions on fuzzy systems, Vol. 23, 2015, pp. 2412-2426.
41.L Xiao, L Weimin, and Z Wei, Intuitive distance for intuitionistic fuzzy sets with applications in pattern recognition, Applied intelligence, 2017.
42.I Montes, NR Pal, V Janis, and S Montes, Divergence measures for intuitionistic fuzzy sets, IEEE Transactions on Fuzzy Systems, No. 2, 2015, pp. 444-456.
47.JQ Wang and P Wang, Intuitionistic linguistic fuzzy multi-criteria decision-making method based on intuitionistic fuzzy entropy, Control and Decision, Vol. 27, 2012, pp. 1694-1698.
48.Z Xu, Intuitionistic fuzzy aggregation operators, IEEE transactions fuzzy system, Vol. 15, 2007, pp. 1179-1187.
Journal
International Journal of Computational Intelligence Systems
Volume-Issue
11 - 1
Pages
1338 - 1356
Publication Date
2018/01/01
ISSN (Online)
1875-6883
ISSN (Print)
1875-6891
DOI
10.2991/ijcis.11.1.99
Copyright
© 2018, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).
