International Journal of Computational Intelligence Systems

Volume 12, Issue 1, November 2018, Pages 227 - 237

A Novel Comparative Linguistic Distance Measure Based on Hesitant Fuzzy Linguistic Term Sets and Its Application in Group Decision-Making

Authors
Mei Cai1, *, Yiming Wang1, Zaiwu Gong1, Guo Wei2
1School of Management Science and Engineering, Nanjing University of Information Science & Technology, Nanjing, Jiangsu, 210044, China
2Department of Mathematics & Computer Science, University of North Carolina at Pembroke, North Carolina, 28372, United States of America
*Corresponding author. Email: sanmoon_1980@163.com

Received 20 August 2018, Accepted 30 December 2018, Available Online 14 January 2019.
DOI
10.2991/ijcis.2018.125905643
Keywords
Hesitant fuzzy linguistic term set (HFLTS); Comparative linguistic expression; Fuzzy group decision making (FGDM); Distance measure; Aggregation approach
Abstract

Linguistic approaches are required to assess the qualitative aspects of many real problems. In most of these problems, decision makers adopt only single and very simple terms, which may not reflect exactly what the experts mean in intricate applications. Frequently, the assessments in decision making problems involve comparative linguistic expressions. Accordingly, we propose a novel distance measure between hesitant fuzzy linguistic term sets (HFLTSs) to solve fuzzy group decision making (FGDM) problems. First, we define characteristic functions to describe the HFLTSs transformed from comparative linguistic expressions. Then we construct a weighted HFLTS graph containing all HFLTSs as its nodes. Distances between individual assessments in the graph are defined by measures that consider the diversity and specificity of an HFLTS's granularity. We put forward a new aggregation approach for group decision making that minimizes the distances to the individual assessments. Finally, numerical examples are illustrated.

Copyright
© 2019 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Decision making is a common process for human beings. Decision making problems are usually defined in uncertain and imprecise situations. Different tools have been provided to solve such problems, such as fuzzy set theory and fuzzy linguistic approaches [1]. It seems natural to apply the computing with words (CWW) methodology in order to create and enrich decision models in which the information provided and manipulated has a qualitative nature [2]. CWW processes can be carried out by different linguistic representation models and computational models. Since Zadeh [1] proposed the fuzzy linguistic approach, many extensions such as the 2-tuple linguistic computational model [3], the proportional 2-tuple model [4], the continuous linguistic term model [5] and numerical scales of the 2-tuple linguistic model [6, 7] have been introduced. However, these models have some limitations, mainly because they assess a linguistic variable by using single and very simple terms which may not reflect exactly what the experts mean [8, 9]. Considering the diversity that different sources of knowledge exhibit, decision makers show personal preferences when giving assessments. In order to enrich linguistic expressions in different decision making situations, decision makers are permitted to use context-free grammars to generate comparative linguistic expressions, for example "lower than medium" or "greater than high". However, comparative linguistic expressions cannot participate directly in computations. Rodríguez et al. [9,10] elicited computable information from comparative linguistic expressions and transformed them into hesitant fuzzy linguistic term sets (HFLTSs). HFLTSs provide experts with greater flexibility. Many applications [11-20] were developed based on HFLTS computational models.

In order to apply HFLTSs to solve decision making problems, studies of computational models about HFLTSs were developed. These are outlined below.

  1. Envelope-based approaches.

    Rodríguez, Martínez [9] defined the envelope of an HFLTS, and several envelope-based approaches were proposed. Different kinds of aggregation operators are calculated by aggregating the envelopes of HFLTSs, which are presented as linguistic intervals. Rodríguez, Martínez [10] aggregated the lower values of the linguistic intervals as the pessimistic perception and the upper values as the optimistic perception. Chen and Hong [21] performed the minimum and maximum operations among the linguistic intervals and used the likelihood method for ranking the priorities. Aggregating HFLTSs is thus transformed into aggregating two sets of simple linguistic terms.

  2. All-elements-included approaches.

    The second kind of approach uses the initial fuzzy representation of an HFLTS in the computing process, which we call the all-elements-included approach, whereas envelope-based operators only use the upper and lower bounds of an HFLTS. Wei, Zhao [22] defined two aggregation operators which use all elements in HFLTSs to obtain a new HFLTS. Liao, Xu [8] introduced the Hamming distance and the Euclidean distance for HFLTSs based on all-elements-included approaches. Liu and Rodríguez [23] defined a fuzzy envelope for an HFLTS, which is a trapezoidal fuzzy membership function obtained by aggregating the fuzzy membership functions of the linguistic terms of the HFLTS. Chen and Hong [21] proposed a method to aggregate the fuzzy sets in each HFLTS into a single fuzzy set and performed α-cut operations on these aggregated fuzzy sets to obtain intervals. A difficulty in this method arises when the cardinalities of two HFLTSs differ. Xia and Xu [24] introduced the axioms of distance and similarity measures for hesitant fuzzy sets (HFSs) with different cardinalities. Rodríguez, Martínez [25] also discussed the comparison problem of two HFEs with different cardinalities. Liao, Xu [8] extended the shorter HFLTS by adding its average value until both HFLTSs have the same length.

Each of these two approaches has advantages and disadvantages. Envelope-based approaches make computation easier, but they rely only on the upper and lower bounds of an HFLTS, which may lead to information distortion and/or loss [14]. The most important merit of HFLTSs is that they can reflect decision makers' uncertainty and hesitancy, and envelope-based approaches cannot reflect this merit. All-elements-included approaches make full use of the information contained in HFLTSs, and even use fuzzy membership functions or possibility degree formulas of HFLTSs, but the resulting decision making process is complex. How to combine the advantages of these two approaches and provide a novel approach which both simplifies the computational complexity and identifies the differences between HFLTSs is the main motivation of this paper.

Distance and similarity measures are fundamentally important in decision making and pattern recognition [8]. The individual assessments presented by HFLTSs compose a graph, and the individual assessments are the vertices of the graph. Liao and Xu [26] introduced a family of novel distance and similarity measures for HFLTSs from the geometric point of view. Falcó, García-Lapresta [27] applied the distance measure between two HFLTSs to solve decision problems in a new majority judgment voting system. Wei, Zhao [15] introduced distance measures for extended HFLTSs (EHFLTSs) and developed a novel multi-criteria group decision making model. Wang and Xu [28] proposed two distinct consistency measures of extended hesitant fuzzy linguistic preference relations for group decision making. However, assumptions such as the equality of the distances between consecutive linguistic terms for all agents [27], or that the distance measures can only be used to solve MCDM problems with a single expert/decision maker [26], limit their scope of application.

Inspired by decision making models based on distance measures, we look for other measures, based on fuzzy sets and fuzzy logic, to identify the differences between HFLTSs while avoiding complex computation. We propose novel distance measures from the viewpoint of graphs. As distance and similarity measures are the foundation of many decision making models, it is interesting to integrate the geometric distance and envelope-based approaches into HFLTS decision making approaches.

In this paper, we focus on investigating the distance measures for HFLTSs, and their properties. Then we define a set of aggregation operators especially for fuzzy group decision-making (FGDM).

The remainder of the paper is organized as follows. In Section 2, we present a brief review of HFLTSs. In Section 3, we propose a novel distance measure between HFLTSs: we present a weighted HFLTS graph whose vertices are linguistic expressions and obtain a distance measure between any two vertices. In Section 4, an approach based on the distance measure to solve FGDM problems is provided. Examples are illustrated in Section 5. The last section draws our conclusions.

2. PRELIMINARIES

2.1. Comparative Linguistic Expressions and HFLTSs

In traditional CWW, an assessment is selected from a predefined linguistic term set. This kind of assessment is not powerful enough to reflect a decision maker's hesitance. Hence, more attention has been paid to another possibility for generating more elaborate linguistic expressions, which relies on a context-free grammar [29]. In this section we recall some concepts of HFLTSs. The HFLTS was proposed by Rodríguez, Martínez [10] to model comparative linguistic expressions generated by a context-free grammar. A context-free grammar [9] $G_H$ is a 4-tuple $(V_N, V_T, I, P)$, where $V_N$ is the set of non-terminal symbols, $V_T$ is the set of terminal symbols, $I$ is the starting symbol, and $P$ is the set of production rules.

Comparative linguistic expressions cannot be used directly in the decision making process. Elicitation is necessary to obtain a formal representation suitable for CWW [30]. Comparative linguistic expressions can be converted into HFLTSs by the transformation function $E_{G_H}$ [10]:

  1. $E_{G_H}(s_i) = \{s_i \mid s_i \in S\}$,

  2. $E_{G_H}(\text{less than } s_i) = \{s_j \mid s_j \in S \text{ and } s_j \le s_i\}$,

  3. $E_{G_H}(\text{greater than } s_i) = \{s_j \mid s_j \in S \text{ and } s_j \ge s_i\}$,

  4. $E_{G_H}(\text{between } s_i \text{ and } s_j) = \{s_k \mid s_k \in S \text{ and } s_i \le s_k \le s_j\}$.
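
To make the elicitation step concrete, the following Python sketch shows one possible encoding of these four rules over an indexed term set $S = \{s_0, \ldots, s_g\}$. The function name, the tuple encoding of expressions, and the use of non-strict inequalities for "less than"/"greater than" are our illustrative assumptions, not part of the original formulation.

```python
# Illustrative sketch of the transformation function E_GH over an indexed
# term set S = {s_0, ..., s_g}; an HFLTS is returned as a set of term indices.

def e_gh(expression, g):
    """Map a comparative linguistic expression to an HFLTS.

    `expression` is a tuple such as ("term", i), ("less than", i),
    ("greater than", i) or ("between", i, j); `g` is the index of the top term.
    """
    kind = expression[0]
    if kind == "term":                        # E_GH(s_i) = {s_i}
        return {expression[1]}
    if kind == "less than":                   # {s_j | s_j <= s_i}
        return set(range(0, expression[1] + 1))
    if kind == "greater than":                # {s_j | s_j >= s_i}
        return set(range(expression[1], g + 1))
    if kind == "between":                     # {s_k | s_i <= s_k <= s_j}
        return set(range(expression[1], expression[2] + 1))
    raise ValueError("unknown comparative linguistic expression")

# e_gh(("between", 4, 5), g=6) -> {4, 5}, i.e. the HFLTS {s_4, s_5}
```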

The basic concepts and operations of HFLTSs are defined as follows.

Definition 1.

[10] Let $G_H$ be a context-free grammar and $S = \{s_0, \ldots, s_g\}$ a linguistic term set. The elements of $G_H = (V_N, V_T, I, P)$ are defined as follows:

$V_N = \{\langle\text{primary term}\rangle, \langle\text{composite term}\rangle, \langle\text{unary relation}\rangle, \langle\text{binary relation}\rangle, \langle\text{conjunction}\rangle\}$

$V_T = \{\text{lower than}, \text{greater than}, \text{at least}, \text{at most}, \text{between}, \text{and}, s_0, \ldots, s_g\}$

$I \in V_N$

$P = \{\, I ::= \langle\text{primary term}\rangle \mid \langle\text{composite term}\rangle;$
$\langle\text{composite term}\rangle ::= \langle\text{unary relation}\rangle\langle\text{primary term}\rangle \mid \langle\text{binary relation}\rangle\langle\text{primary term}\rangle\langle\text{conjunction}\rangle\langle\text{primary term}\rangle;$
$\langle\text{primary term}\rangle ::= s_0 \mid s_1 \mid \ldots \mid s_g;$
$\langle\text{unary relation}\rangle ::= \text{lower than} \mid \text{greater than} \mid \text{at least} \mid \text{at most};$
$\langle\text{binary relation}\rangle ::= \text{between};$
$\langle\text{conjunction}\rangle ::= \text{and}\,\}$

For example, by using the previous grammar $G_H$ and linguistic terms $s_i \in S$, an expert may express preferences about the items by comparative linguistic expressions such as:

$P^1 = \{\text{between high and very high},\ \text{very high},\ \text{at most low},\ \text{high},\ \text{at most low},\ \text{between very low and low}\}$

Definition 2.

[9] Let $S = \{s_0, s_1, \ldots, s_g\}$ be a linguistic term set. An HFLTS $H_S$ is an ordered finite subset of consecutive linguistic terms of $S$.

Definition 3.

[9] Let $S = \{s_0, s_1, \ldots, s_g\}$ be a linguistic term set. The upper bound $H_S^+$ and the lower bound $H_S^-$ of the HFLTS $H_S$ are defined, respectively, as follows:

  1. $H_S^+ = \max(s_i) = s_j$, where $s_i \le s_j$ for all $s_i \in H_S$,

  2. $H_S^- = \min(s_i) = s_j$, where $s_i \ge s_j$ for all $s_i \in H_S$.

Definition 4.

[9] Let $S = \{s_0, s_1, \ldots, s_g\}$ be a linguistic term set. The envelope $\operatorname{env}(H_S)$ of an HFLTS $H_S$ is a linguistic interval obtained from the upper bound $H_S^+$ and the lower bound $H_S^-$ of $H_S$:

$\operatorname{env}(H_S) = [H_S^-, H_S^+]$

Many computational models of HFLTSs are based on the envelope env (Hs) of an HFLTS Hs. They treat HFLTSs as linguistic intervals.

2.2. The Distance Measure Between HFLTSs

Traditional work on distance measures was based on the Hamming distance and the Euclidean distance, which Liao, Xu [8] adopted for HFLTSs.

Definition 5.

[8] Let $S = \{s_\alpha \mid \alpha = -\tau, \ldots, -1, 0, 1, \ldots, \tau\}$ be a linguistic term set, and let $H_S^1$ and $H_S^2$ be two HFLTSs, $H_S^1 = \{s_{\delta_l^1} \mid l = 1, \ldots, \#H_S^1\}$ and $H_S^2 = \{s_{\delta_l^2} \mid l = 1, \ldots, \#H_S^2\}$, where $\#H_S^i$ is the number of linguistic terms in $H_S^i$ and $\#H_S^1 = \#H_S^2 = L$.

The Hamming distance is

$d_{hd}(H_S^1, H_S^2) = \frac{1}{L}\sum_{l=1}^{L}\frac{|\delta_l^1 - \delta_l^2|}{2\tau + 1}$

and the Euclidean distance is

$d_{ed}(H_S^1, H_S^2) = \left[\frac{1}{L}\sum_{l=1}^{L}\left(\frac{|\delta_l^1 - \delta_l^2|}{2\tau + 1}\right)^2\right]^{1/2}$

Another kind of distance measure is based on the Manhattan distance. Roselló, Sánchez [31] defined a space for computing this distance; the distance between two HFLTSs is then the length of the shortest path connecting the two corresponding vertices in a graph.

Definition 6.

[31] Given an order-of-magnitude space $S_n^*$, the graph $G_{S_n^*}$ associated with $S_n^*$ is the graph whose vertices are the basic labels of $S_n^*$ or the connex unions of basic labels, i.e., the set of vertices is $V(G_{S_n^*}) = \{x_{i,j} = [B_i, B_j] \mid B_i, B_j \in S_n^*,\ i \le j\}$, and whose set of edges is $E(G_{S_n^*}) = \{x_{i,j}x_{r,s} \mid (r = i \text{ and } s = j + 1) \text{ or } (s = j \text{ and } r = i + 1)\}$.

For example, consider the set of linguistic terms $S_n^* = \{B_1, B_2, \ldots, B_5\}$. Each vertex is represented as $x_{i,j} = [B_i, B_j]$. The graph $G_{S_n^*}$ is given in Figure 1.

Figure 1

An example of the graph representation.

According to the graph $G_{S_n^*}$, some interesting distance measures can be proposed. For example, suppose $l: V(G_{S_n^*}) \times V(G_{S_n^*}) \to \mathbb{R}$ is the geodesic distance in $V(G_{S_n^*})$. Roselló, Sánchez [31] considered the special situation where the weights of all edges are equal, and obtained:

$l([B_i, B_j], [B_r, B_s]) = k\,(|i - r| + |j - s|)$

where all edges have an equal weight $k$.

The definition of distance between two linguistic expressions is also given (Definition 7).

Definition 7.

[31] The distance between two linguistic expressions $E = [B_i, B_j]$ and $F = [B_h, B_k]$ is defined as

$d(E, F) = d_M(\psi(E), \psi(F)) = |i - h| + |j - k|$

The function $\psi$ maps a linguistic expression $[B_i, B_j]$ to a point in the plane:

$\psi([B_i, B_j]) = (j - 1, i - 1)$

3. A NOVEL DISTANCE MEASURE BETWEEN HFLTSs

3.1. Characteristics of HFLTSs

The set $H_S$ presented in Definition 4 appears in interval form and can be viewed as an information granule. Information granules offer a unique way of quantifying the diversity of the sources of knowledge under consideration and expressing this aspect in the form of a level of granularity (specificity) [32]. We apply two criteria (the coverage criterion and the specificity criterion [32]) to describe the characteristics of $H_S$.

Definition 8.

[32] The characteristics of $H_S$ are described by a coverage criterion $\operatorname{cov}(H_S)$ and a specificity criterion $\operatorname{spe}(H_S)$:

  1. The coverage criterion $\operatorname{cov}(H_S)$ expresses the extent to which $H_S$ covers the term set $S$. It reflects the uncertainty degree of $H_S$.

  2. The specificity criterion $\operatorname{spe}(H_S)$ articulates the cumulative numerical value of the terms covered by $H_S$. It reflects the position of $H_S$ in the domain $S$.

Let $H_S = [H_S^-, H_S^+]$ with $H_S^- = B_i$ and $H_S^+ = B_j$, so that $H_S$ contains the elements $B_i, B_{i+1}, \ldots, B_j$. The granularity of $H_S$ is defined as the cardinality of the set $\{B_i, B_{i+1}, \ldots, B_j\}$, denoted $\operatorname{card}(B_i, B_{i+1}, \ldots, B_j)$. The coverage index is expressed as the ratio

$\operatorname{cov}(H_S) = \frac{\operatorname{card}(B_i, B_{i+1}, \ldots, B_j)}{\operatorname{card}(S)}$

The specificity index is expressed as

$\operatorname{spe}(H_S) = \sum_{k=i}^{j} \Delta^{-1}(B_k) \big/ \operatorname{card}(S)$

where $\Delta^{-1}$ [3] is a function transforming a linguistic label $B_k$ into a numerical value and $\operatorname{card}(S)$ is the number of elements in the set $S$.
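
A minimal sketch of the two indices for an HFLTS given by its bound indices; taking $\Delta^{-1}(B_k)$ to be the term index $k$ is an illustrative choice of ours, and any other numerical scale could be substituted.

```python
def coverage(i, j, card_s):
    """cov(H_S) for H_S = [B_i, B_j]: fraction of the term set S it covers."""
    return (j - i + 1) / card_s

def specificity(i, j, card_s, delta_inv=lambda k: k):
    """spe(H_S): cumulative numerical value of the covered terms over card(S).
    delta_inv plays the role of Delta^{-1}; it defaults to the term index."""
    return sum(delta_inv(k) for k in range(i, j + 1)) / card_s

# For card(S) = 5 and H_S = [B_2, B_4]:
# coverage(2, 4, 5) -> 0.6 and specificity(2, 4, 5) -> 1.8
```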

There are two decision scenarios. In the first, decision makers use multi-granular linguistic term sets to discriminate assessments with different precision levels; that is, decision makers are free to choose elements with different coverage criteria. In the second, a decision maker chooses one linguistic term set and uses several elements of this set to generate different HFLTSs that discriminate assessments with different precision levels, as in Definition 2. According to Definition 2, $H_S$ is generated from $S = \{s_0, s_1, \ldots, s_g\}$. $S$ is an input space that is granulated into a collection of fuzzy sets $(s_i)$ and reveals the structure of the input space. $S$ contains $\operatorname{card}(S)$ elements. These elements play the same semantic role in $S$: their coverage criteria are the same according to our Definition 8, but their specificity criteria are different. In any linguistic approach, an important parameter to be determined is the granularity (cardinality) of uncertainty [33].

3.2. A Weighted HFLTSs Graph

The distance between two HFLTSs is defined as the geodesic distance in the graph GSn*. Firstly, we give the definition of the graph GSn*.

Definition 9.

Given a weighted graph $G_{S_n^*} = (V, E, \phi)$, where $V$ is a nonempty set of vertices, $E$ is a set of edges, and $\phi$ is the set of edge weights.

Let us first consider the vertex set. $S_n^* = \{B_1, \ldots, B_n\}$ is a finite set of basic labels. $S_n^*$ is the base of the graph, since every vertex is composed of labels in $S_n^*$. The basic linguistic term set $S_n^*$ has $\operatorname{card}(S_n^*)$ granules, and $\operatorname{card}(S_n^*)$ is an index of the inherent diversity of the graph we have to deal with. A vertex of $G_{S_n^*}$ is denoted $x_{i,j}$, which is the label $[B_i, B_j]$ with $i \le j$. A decision maker's assessment $H_S$ is constructed from $S_n^*$, and the granularity of $H_S$ is $\operatorname{card}(B_i, B_{i+1}, \ldots, B_j)$.

We group the vertices according to $\operatorname{cov}(x_{i,j})$; vertices in the same group lie at the same "cover". We use $\Lambda_t$ to denote a group of vertices sharing the same coverage index: $\Lambda_t = \{[B_i, B_j] \mid B_i, B_j \in S_n^*,\ j = i + t\}$. For example, for the set of linguistic terms $S_n^* = \{B_1, B_2, \ldots, B_5\}$, $\Lambda_2 = \{[B_1, B_3], [B_2, B_4], [B_3, B_5]\}$ is a group at the same cover.

The graph $G_{S_n^*}$ can thus be seen as the combination of several levels with different coverage indexes, from the basic label set $S_n^*$ up to the top level consisting of a single label set. A new definition of the graph $G_{S_n^*}$ as the union of all levels follows.

Definition 10.

A graph GSn* is defined as the union of all levels

$G_{S_n^*} = S_n^* \cup \bigcup_{t} \Lambda_t$

where $t = 1, 2, \ldots, (n - 1)$ and $S_n^* = \{B_1, \ldots, B_n\}$.

For example, $G_{S_5^*} = S_5^* \cup \Lambda_1 \cup \Lambda_2 \cup \Lambda_3 \cup \Lambda_4$.

In the graph $G_{S_n^*}$, the lowest layer represents the basic labels of $S_n^*$. The second layer represents the linguistic expressions created by two consecutive linguistic terms $[B_i, B_{i+1}]$, and so on up to the last layer. As a result, the higher a vertex is, the more imprecise the expression it represents. When a decision maker is confident about his or her opinion on an alternative, he or she can assign an HFLTS in a lower level; when unconfident, he or she may assign an HFLTS in a higher level. The coverage index of this interval, which captures the vagueness of $H_S$, therefore also reflects the confidence of a decision maker.
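
As an illustration of this layered view, a short Python sketch (names are ours) that enumerates the levels $\Lambda_t$ of $G_{S_n^*}$:

```python
def levels(n):
    """Vertices of G_{S_n*} grouped by coverage: level t holds the intervals
    [B_i, B_{i+t}]; level 0 corresponds to the basic label set S_n* itself."""
    return {t: [(i, i + t) for i in range(1, n - t + 1)] for t in range(n)}

# levels(5)[2] -> [(1, 3), (2, 4), (3, 5)],
# i.e. Lambda_2 = {[B_1, B_3], [B_2, B_4], [B_3, B_5]}
```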

Next, consider the edge set $E(G_{S_n^*}) = \{x_{i,j}x_{r,s} \mid (r = i \text{ and } s = j + 1) \text{ or } (s = j \text{ and } r = i + 1)\}$. It gives the positional relations of the vertices.

Finally, consider the weight set $\phi(G_{S_n^*}) = \{w(x_{i,j}x_{r,s}) \in \mathbb{R}^+ \mid x_{i,j}x_{r,s} \in E(G_{S_n^*})\}$. To separate different vertices we need to know not only their positional relations but also their qualitative relations. By defining the edge weights of the graph $G_{S_n^*}$, we obtain the qualitative relations of different vertices; this is the advantage of our approach. In Section 3.3 we give the weight function of the edge set.

3.3. The Distance Measure Between HFLTSs

Note that each edge of the graph has a specific weight $w(x_{i,j}x_{r,s})$ given by a weight function $w: E(G_{S_n^*}) \to \mathbb{R}^+$. Before presenting the definition of the weight function for the graph $G_{S_n^*}$, we introduce some important concepts.

Roselló, Prats [34] used an information function of qualitative labels to measure the consensus degree of two qualitative labels in group decisions. We modify this function to measure the distance between two linguistic intervals. The information $I$ of a vertex $x_{i,j}$ is a positive continuous real function used to build the weight function of each edge. It satisfies that if $x_{i,j}$ and $x_{h,k}$ are two vertices with $\operatorname{cov}(x_{i,j}) \le \operatorname{cov}(x_{h,k})$, then $I(x_{i,j}) \ge I(x_{h,k})$:

$I(x_{i,j}) = \ln\frac{1}{\operatorname{cov}(x_{i,j})}$

Definition 11.

The weight function $w: E(G_{S_n^*}) \to \mathbb{R}^+$ on the edge set $E(G_{S_n^*}) = \{x_{i,j}x_{r,s} \mid (r = i \text{ and } s = j + 1) \text{ or } (s = j \text{ and } r = i + 1)\}$ is

$w(x_{i,j}x_{r,s}) = 1 + k\,|I(x_{i,j}) - I(x_{r,s})|$

where $k \in [0, +\infty)$ is the imprecision index.

Remarks.

$k$ reflects the importance of imprecision when deciding the weight of an edge. If two vertices differ more in coverage, then $|I(x_{i,j}) - I(x_{r,s})|$ is larger. By adjusting the value of $k$, we can obtain different weights between the two vertices. The distance measures given by Falcó, García-Lapresta [27] and Roselló, Sánchez [31] can be viewed as the simplified form of Definition 11 with $k = 0$, in which the weights of all edges are equal.

In our model, an edge connects two neighboring levels. Since every vertex in a level is uniformly distributed, we have $\operatorname{cov}([B_i, B_{i+t}]) = (t + 1)/\operatorname{card}(S_n^*)$ for any vertex in $\Lambda_t$.

Hence, the weight of an edge between level $\Lambda_t$ and level $\Lambda_{t+1}$ is

$w(x_{i,j}x_{r,s}) = 1 + k\,|\ln(t + 2) - \ln(t + 1)|$

Corollary 1.

If $k = 0$, the weights of all edges in the graph $G_{S_n^*}$ are equal to 1.

The weight function (9) gives the distance between two directly connected vertices. We now introduce an approach to calculate the distance between arbitrary vertices $v_0$ and $v_k$. There are several possible reachable paths from vertex $v_0$ to $v_k$; a path is denoted as a finite sequence $p = v_0 v_1 \cdots v_k$, where $v_i v_{i+1}$ is an edge. The length of the path $p$ is $\varphi(p) = w(v_0 v_1) + w(v_1 v_2) + \cdots + w(v_{k-1} v_k)$. The shortest path length from $v_0$ to $v_k$ is the minimum length over all reachable paths from the starting vertex to the target vertex. We define the distance from vertex $v_0$ to $v_k$ as

$d(v_0, v_k) = \min\{\varphi(p) : p \text{ is a reachable path from } v_0 \text{ to } v_k\}$

Dijkstra's algorithm [35] is a good tool to find the shortest path from $v_0$ to $v_k$. Based on the refined Dijkstra's algorithm proposed in [36], we find the shortest paths from $x_{i,j}$ to $x_{r,s}$ in the graph $G_{S_n^*}$.
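
The following Python sketch illustrates the whole construction: it builds the weighted graph of Definitions 9-11 over $S_n^* = \{B_1, \ldots, B_n\}$ and computes the geodesic distance with Dijkstra's algorithm. All names are ours, and the plain (unrefined) Dijkstra algorithm is used for brevity instead of the refined version of [36].

```python
import heapq
import math

def build_hflts_graph(n, k=0.0):
    """Adjacency map of the weighted graph of Definition 9 over S_n* = {B_1,...,B_n}.
    Vertices are index pairs (i, j) with 1 <= i <= j <= n; an edge joins (i, j)
    to (i, j+1) and to (i-1, j), weighted by 1 + k * |I(x_ij) - I(x_rs)| with
    I(x_ij) = ln(1 / cov(x_ij)) = ln(n / (j - i + 1))  (Definition 11)."""
    info = lambda i, j: math.log(n / (j - i + 1))
    adj = {(i, j): [] for i in range(1, n + 1) for j in range(i, n + 1)}
    for (i, j) in list(adj):
        for (r, s) in ((i, j + 1), (i - 1, j)):        # the two "wider" neighbours
            if 1 <= r <= s <= n:
                w = 1.0 + k * abs(info(i, j) - info(r, s))
                adj[(i, j)].append(((r, s), w))
                adj[(r, s)].append(((i, j), w))
    return adj

def geodesic_distance(adj, source, target):
    """Shortest-path length d(v_0, v_k) between two vertices via Dijkstra."""
    dist, frontier = {source: 0.0}, [(0.0, source)]
    while frontier:
        d, v = heapq.heappop(frontier)
        if v == target:
            return d
        if d > dist.get(v, math.inf):
            continue                                   # stale queue entry
        for u, w in adj[v]:
            if d + w < dist.get(u, math.inf):
                dist[u] = d + w
                heapq.heappush(frontier, (dist[u], u))
    return math.inf

# With k = 0 every edge weighs 1 and the result reduces to |i - r| + |j - s|
# (Corollary 2), e.g. geodesic_distance(build_hflts_graph(5), (2, 4), (3, 5)) -> 2.0
```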

Corollary 2.

If the weights of all edges in the graph $G_{S_n^*}$ are equal to 1, the geodesic distance between the vertices $x_{i,j}$ and $x_{r,s}$ can be expressed as

$d([B_i, B_j], [B_r, B_s]) = |i - r| + |j - s|$

Proof.

The shortest path is the union of two parts, joined at a vertex $x_{a,j}$ that is in the same level as $x_{r,s}$ and lies between $x_{i,j}$ and $x_{r,s}$ in the vertical direction.

Without loss of generality, we assume $r \ge i$ and $s \ge j$. See Figure 2.

Figure 2

An example to show the positions of vertices xi, j and xr, s.

The red lines in Figure 2 are the shortest path from xi,j to xr,s. The path can be divided into two parts. One path is vertical and the other is horizontal. The vertical part is from xi,j to xa,j, and the horizontal part is from xa,j to xr,s. These two paths constitute the shortest path from xi,j to xr,s.

The shortest path from $x_{i,j}$ to $x_{a,j}$ is a straight (vertical) line. The length of this part is

$d([B_i, B_j], [B_a, B_j]) = |a - i| = a - i$

The shortest path from $x_{a,j}$ to $x_{r,s}$ is a zigzag line. The length of this part is

$d([B_a, B_j], [B_r, B_s]) = (s - j) + (r - a)$

The whole distance from $x_{i,j}$ to $x_{r,s}$ is

$d([B_i, B_j], [B_r, B_s]) = (a - i) + (s - j) + (r - a) = (r - i) + (s - j)$

Other situations can be treated in a similar way, so we conclude:

$d([B_i, B_j], [B_r, B_s]) = |i - r| + |j - s|$

Remarks.

We conclude from Corollary 2 that in this special situation our distance measure coincides with Function (4); in other words, Function (4) equals our distance measure when $k = 0$. Setting $k = 0$ means that the coverage characteristic of $H_S$ is not considered. If we do consider the coverage criterion of HFLTSs, our distance measure shows its advantage. In fact, the coverage criterion of HFLTSs is exactly what many other distance measures ignore. A deeper analysis of the characteristics of HFLTSs shows that the coverage criterion reflects the hesitance of decision makers, and this hesitance reflects their personality and knowledge background, which is important for FGDM. A distance measure that does not fully consider the characteristics of HFLTSs is therefore incomplete.

3.4. Comparison Operators

The comparison of HFLTSs represented by linguistic intervals is carried out according to an ordinary lexicographic order [27], which is intuitive for decision makers.

Given $F, \varepsilon \in G_{S_n^*}$, the binary relation $\succeq_1$ (first superior) is defined as

$F \succeq_1 \varepsilon \iff d(F, s_g) \le d(\varepsilon, s_g)$

The binary relation $\succeq_2$ (second superior) is defined as

$F \succeq_2 \varepsilon \iff d(F, [s_{g-1}, s_g]) \le d(\varepsilon, [s_{g-1}, s_g])$

This order is natural: the closer an assessment is to the linguistic term $s_g$, the better the alternative. The better alternative also has the larger geodesic distance from $s_0$, as we now prove.

Lemma 1.

For all $F, \varepsilon \in G_{S_n^*}$: $F \succeq_1 \varepsilon \iff d(F, s_0) \ge d(\varepsilon, s_0)$.

Proof.

Let $F = [s_l, s_k]$. Then

$d(F, s_g) = d([s_l, s_k], [s_g, s_g]) = |l - g| + |k - g| = 2g - (l + k) = 2g - (|l - 0| + |k - 0|) = 2g - d([s_l, s_k], [s_0, s_0]) = 2g - d(F, s_0)$

Hence $F \succeq_1 \varepsilon \iff d(F, s_g) \le d(\varepsilon, s_g) \iff d(F, s_0) \ge d(\varepsilon, s_0)$.

Lemma 2.

For all $F, \varepsilon \in G_{S_n^*}$: $F \succeq_2 \varepsilon \iff d(F, [s_0, s_1]) \ge d(\varepsilon, [s_0, s_1])$.

Proof.

Let $F = [s_l, s_k]$. Then

$d(F, [s_{g-1}, s_g]) = d([s_l, s_k], [s_{g-1}, s_g]) = |l - g + 1| + |k - g| = 2g - 1 - (l + k) = 2(g - 1) - (|l - 0| + |k - 1|) = 2(g - 1) - d([s_l, s_k], [s_0, s_1]) = 2(g - 1) - d(F, [s_0, s_1])$

Hence $F \succeq_2 \varepsilon \iff d(F, [s_{g-1}, s_g]) \le d(\varepsilon, [s_{g-1}, s_g]) \iff d(F, [s_0, s_1]) \ge d(\varepsilon, [s_0, s_1])$.
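
A small sketch of how the two relations could be applied to rank collective opinions lexicographically: first by the distance to $s_g$, with ties broken by the distance to $[s_{g-1}, s_g]$. The distance function is passed in as a parameter (for instance, the geodesic distance sketched in Section 3.3), and all names are illustrative.

```python
def rank_opinions(opinions, dist, g):
    """Rank collective opinions {name: (i, j)} best-first: primarily by the
    distance to s_g (first superior), ties broken by the distance to
    [s_{g-1}, s_g] (second superior).  `dist(u, v)` is a vertex distance."""
    def key(name):
        o = opinions[name]
        return (dist(o, (g, g)), dist(o, (g - 1, g)))
    return sorted(opinions, key=key)

# With k = 0 (Manhattan distance of Corollary 2) and the opinions of Example 2:
# manhattan = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
# rank_opinions({"x1": (2, 4), "x2": (3, 4)}, manhattan, g=5) -> ["x2", "x1"]
```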

4. APPLICATION TO FUZZY GROUP DECISION MAKING

4.1. The Description of FGDM

FGDM, which uses linguistic expressions as assessments, has become an important research subject because of the fuzziness of objective things and the hesitance of human thinking. Assume a set of decision makers $E = \{e_1, e_2, \ldots, e_n\}$ and a set of alternatives $X = \{x_1, x_2, \ldots, x_m\}$. The decision matrix is denoted by $A = (a_{ij})_{n \times m}$, where $a_{ij}$ is the assessment of alternative $x_j$ by decision maker $e_i$. The decision makers have a weighting vector $W = (w_1, w_2, \ldots, w_n)$, where $0 \le w_i \le 1$ and $\sum_{i=1}^{n} w_i = 1$.

Decision makers use comparative linguistic expressions to express their assessments. The key problem in our decision process lies in the aggregation phase: we need to aggregate $A_j = \{a_{1j}, a_{2j}, \ldots, a_{nj}\}$ to obtain $O = \{O_1, O_2, \ldots, O_m\}$, where $O_j$ is the global opinion on alternative $x_j$.

In this section, we will introduce an approach based on distance measures to solve FGDM problems.

4.2. The Aggregation Approach Based on Distance Measure

The basic assumptions of the aggregation approach are as follows:

  • The result of aggregating a set of linguistic intervals is also a linguistic interval.

  • The linguistic interval should reflect all information of the set of linguistic intervals as accurately as possible.

In the aggregation phase, we find the linguistic expression that has the minimal distances from all assessments $A_j = \{a_{1j}, a_{2j}, \ldots, a_{nj}\}$ and take it as the global assessment $O_j$. This aggregation result satisfies the two assumptions above. We call this linguistic expression the "substitution", because it is the best substitute for the set of assessments $\{a_{1j}, a_{2j}, \ldots, a_{nj}\}$.

We now give Algorithm 1 to find the linguistic expression that has the minimal distances from $U = \{u_1, u_2, \ldots, u_n\}$.

Algorithm 1:

Step 1. Construct $G_{S_n^*} = (V, E, \phi)$, where $u_i \in V$.

Step 2. Calculate the distances from $u_i$.

For each starting node $u_i \in V$, use Dijkstra's shortest path algorithm to calculate the distance $d(u_i, v)$ for all $v \in V$.

Step 3. Find the substitution xi,j.

We present the following model, based on the weighted averaging operator, to find the linguistic expression having the minimal distances from $u_1, u_2, \ldots, u_n$:

$\min \sum_{u_i \in U}\sum_{x_{i,j} \in V} d(u_i, x_{i,j})\, y_{i,j}\, w_i$

$\text{s.t.}\quad y_{i,j} \in \{0, 1\},\quad \sum_{x_{i,j} \in V} y_{i,j} = 1,\quad \sum_{i=1}^{n} w_i = 1$

In the model, $y_{i,j} = 1$ means that vertex $x_{i,j}$ is selected as the best substitution. Since only one vertex may be selected, the constraint $\sum_{x_{i,j} \in V} y_{i,j} = 1$ enforces this condition. The selected linguistic interval $x_{i,j}$ is the one that best reflects all information in the set of assessments.
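
Because $y_{i,j}$ selects exactly one vertex, the model can also be solved by enumerating the vertices and keeping the one with the smallest weighted sum of distances. A minimal sketch under that reading (names are ours; `dist` is any distance on the graph, e.g. the geodesic distance sketched in Section 3.3):

```python
def find_substitution(assessments, weights, vertices, dist):
    """Solve the 0-1 model by enumeration: return the vertex x minimising
    sum_i w_i * d(u_i, x) over all candidate vertices of the graph."""
    return min(vertices,
               key=lambda x: sum(w * dist(u, x)
                                 for u, w in zip(assessments, weights)))

# Example 1 of Section 5 (k = 0, so the Manhattan distance of Corollary 2 applies):
# vertices = [(i, j) for i in range(1, 6) for j in range(i, 6)]
# manhattan = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
# find_substitution([(4, 4), (3, 4), (2, 4)], [1/3] * 3, vertices, manhattan)
# -> (3, 4), i.e. O_1 = [l_3, l_4]
```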

We can extend this model to meet the needs of some operators. Now we give an extension for the hesitant fuzzy LOWA (HFLOWA) operator.

$\min \sum_{u_{\sigma(i)} \in U}\sum_{x_{i,j} \in V} d(u_{\sigma(i)}, x_{i,j})\, y_{i,j}\, w_i$

$\text{s.t.}\quad y_{i,j} \in \{0, 1\},\quad \sum_{x_{i,j} \in V} y_{i,j} = 1,\quad \sum_{i=1}^{n} w_i = 1$
where $W = (w_1, \ldots, w_n)^T$ is an associated weighting vector, and $(u_{\sigma(1)}, u_{\sigma(2)}, \ldots, u_{\sigma(n)})$ is a permutation of $(u_1, u_2, \ldots, u_n)$ such that $u_{\sigma(i)} \ge u_{\sigma(j)}$ for all $i < j$.

A common extension operator is given as follows.

$\min\ \Theta_{u_{\sigma(i)} \in U,\ x_{i,j} \in V}\big(d(u_{\sigma(i)}, x_{i,j}),\ y_{i,j},\ w_i\big)$

$\text{s.t.}\quad y_{i,j} \in \{0, 1\},\quad \sum_{x_{i,j} \in V} y_{i,j} = 1,\quad \sum_{i=1}^{n} w_i = 1$

where $W = (w_1, \ldots, w_n)^T$ is an associated weighting vector and $\Theta$ is an aggregation operator.

Since our models find the result with the minimal distances from $\{a_{1j}, a_{2j}, \ldots, a_{nj}\}$, the proximity degree of the aggregation result is at least as good as that obtained by any other aggregation operator. We use proximity measures [37] to evaluate the distance between the experts' individual opinions and the group (collective) opinion. The proximity degree is a number between 0 and 1; the closer it is to 1, the better the aggregation result.

Definition 12.

The proximity degree function is defined as

$\delta_j = 1 - \frac{\sum_{i=1}^{n} d(x_{ij}, O_j)\, w_i}{\max_{v_x \in V} d(v_x, O_j)}$

In Function (10), $\max_{v_x \in V} d(v_x, O_j)$ is the longest distance from any vertex $v_x$ in $G_{S_n^*}$ to $O_j$; its effect is to normalize the proximity degree so that it lies between 0 and 1.
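
A direct transcription of Function (10) as a sketch (names are ours; `dist` is again a pluggable distance on the graph):

```python
def proximity_degree(assessments, weights, collective, vertices, dist):
    """delta_j = 1 - sum_i w_i * d(x_ij, O_j) / max_v d(v, O_j)  (Function (10))."""
    numerator = sum(w * dist(a, collective) for a, w in zip(assessments, weights))
    denominator = max(dist(v, collective) for v in vertices)
    return 1.0 - numerator / denominator

# Example 1 (k = 0): assessments (4,4), (3,4), (2,4), collective opinion (3,4):
# proximity_degree([(4, 4), (3, 4), (2, 4)], [1/3] * 3, (3, 4),
#                  [(i, j) for i in range(1, 6) for j in range(i, 6)],
#                  lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1]))
# -> 0.8667, i.e. delta_1 = 0.867 as reported in Example 1
```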

Our aggregation approach not only solves the problem of aggregating decision makers' assessments, but also yields results with good proximity degrees, so we consider it especially suitable for FGDM.

4.3. The Process of FGDM with HFLTSs

Based on the proposed algorithm of finding a substitute expression of HFLTSs, the proposed FGDM approach is presented as follows:

  • Step 1. Transform comparative linguistic assessment aij into HFLTS.

    Use the transformation function $E_{G_H}$ [10] to convert comparative linguistic expressions into HFLTSs. Each HFLTS is represented by its envelope $\operatorname{env}(H_S) = [H_S^-, H_S^+]$.

  • Step 2. Construct a graph for decision makers.

For all decision makers, all assessments are vertices of the graph $G_{S_n^*}$, which is described as a weighted graph $G_{S_n^*} = (V, E, \phi)$, where $S_n^* = \{B_1, \ldots, B_n\}$ is the basic label set.

  • Step 3. Aggregate the assessments of the decision makers.

    Given $j = 1$, the assessment set for alternative $x_j$ is $A_j = \{a_{1j}, a_{2j}, \ldots, a_{nj}\}$. We use the following model to obtain $O_j$:

$\min \sum_{a_{ij} \in A_j}\sum_{x_{h,l} \in V(G_{S_n^*})} d(a_{ij}, x_{h,l})\, y_{h,l}\, w_i$

$\text{s.t.}\quad y_{h,l} \in \{0, 1\},\quad \sum_{x_{h,l} \in V(G_{S_n^*})} y_{h,l} = 1,\quad \sum_{i=1}^{n} w_i = 1$

    $O_j$ is the global opinion of alternative $x_j$.

    Set $j = j + 1$ and repeat Steps 2 and 3 until $j = m$.

    We obtain $O = \{O_1, O_2, \ldots, O_m\}$, where $O_j$ is the global opinion of alternative $x_j$.

  • Step 4. Exploitation phase.

Establish a rank ordering among $O_1, O_2, \ldots, O_m$ using the comparison operators. First, rank the opinions with the binary relation $\succeq_1$: calculate $d(O_j, s_g)$ and rank the set $\{d(O_1, s_g), \ldots, d(O_m, s_g)\}$. If the first superior gives a clear ranking order, the exploitation phase is over. If not, rank the opinions with the binary relation $\succeq_2$. Repeat the process until a clear ranking order is obtained.
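
Putting Steps 1-4 together, a compact orchestration sketch follows. It assumes the hypothetical helpers `build_hflts_graph`, `geodesic_distance`, `find_substitution` and `rank_opinions` sketched earlier are in scope, and that the assessments have already been elicited by $E_{G_H}$ into index pairs (Step 1).

```python
def fgdm(assessments, weights, n, k=0.0):
    """End-to-end sketch of Steps 1-4.  `assessments[i][j]` is expert e_i's
    HFLTS on alternative x_j, given as an index pair (lower, upper) over
    S_n* = {l_1, ..., l_n}."""
    adj = build_hflts_graph(n, k)                        # Step 2: weighted graph
    dist = lambda u, v: geodesic_distance(adj, u, v)
    vertices = list(adj)
    opinions = {}
    for j in range(len(assessments[0])):                 # Step 3: aggregate columns
        column = [row[j] for row in assessments]
        opinions["x%d" % (j + 1)] = find_substitution(column, weights, vertices, dist)
    ranking = rank_opinions(opinions, dist, g=n)         # Step 4: exploitation
    return opinions, ranking

# Example 2 (using the natural-log information function of Definition 11):
# fgdm([[(2, 2), (3, 3)], [(3, 4), (4, 5)], [(2, 5), (1, 4)]], [1/3] * 3, n=5, k=1.0)
# -> ({'x1': (2, 4), 'x2': (3, 4)}, ['x2', 'x1']), matching the opinions of Example 2
```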

4.4. Discussion and Comparative Analyses

Now we highlight some advantages of our approach with respect to others. We focus on two aspects:

  1. Accuracy of the distance measure

    In our approach, the distance measure is based on the Manhattan distance. Although most distance or similarity measures are based on the Hamming distance and the Euclidean distance, we think these kinds of distance measures cannot reflect the diversity of linguistic intervals, which cover different numbers of granules in $S$. The distance function in Definition 5 adds linguistic terms to $H_S^1$ to ensure $\operatorname{card}(H_S^1) = \operatorname{card}(H_S^2) = L$. This approach can deal with the case where $H_S^1$ has fewer granules than $H_S^2$, but the granules added to $H_S^1$ are not original elements of $S_n^*$. The granules of a linguistic term set share the same semantics, so the added granules in $H_S^1$ have different semantics from the granules in $H_S^2$, and directly computing on two granules with different semantics is not suitable. We call this problem information distortion. Applying the Hamming distance to measure information granules with different coverage indexes therefore needs further study.

  2. Complexity of the aggregation model

    It is widely acknowledged that the aggregation result of a set of hesitant fuzzy information should have the minimum difference from that set. Different aggregation models realize this aim in different ways. Rodríguez, Martínez [9] defined two aggregation operators, min_upper and max_lower, which carry out the aggregation using HFLTSs; these aggregators do not require the cardinalities of the $H_S^i$ to be equal. Liao, Xu [8] proposed a family of aggregation operators based on distance measures (Definition 5); the possible information distortion problem was discussed in the first aspect above. In our aggregation phase, we find the linguistic expression that has the minimal distances from all assessments and take it as the global assessment. By adjusting $k$ in Function (9), we can recognize nuances among alternatives in FGDM.

5. ILLUSTRATIVE EXAMPLES

We use two examples to illustrate the process of our aggregation approach.

Example 1.¹

Consider two alternatives $X = \{x_1, x_2\}$, a group of decision makers $E = \{e_1, e_2, e_3\}$ with weights $\{w_1, w_2, w_3\}$, and the assessment matrix $(a_{ij})_{3 \times 2}$ given in Table 1.

     x1        x2        wi
e1   l4        l3        1/3
e2   [l3, l4]  [l3, l4]  1/3
e3   [l2, l4]  [l3, l5]  1/3
Table 1

Decision makers' assessments of Example 1.

The graph $G_{S_n^*}$ describing all assessments is constructed, where $S_n^* = \{l_1, l_2, \ldots, l_5\}$ and $l_1$ = VERY BAD, $l_2$ = BAD, $l_3$ = NORMAL, $l_4$ = GOOD, $l_5$ = VERY GOOD.

We assume that the global opinion $O_j = [l_h, l_k]$ satisfies

$\min \sum_{i=1}^{3} d(x_{i1}, O_1)\, w_i$

$\min \sum_{i=1}^{3} d(x_{i2}, O_2)\, w_i$

When $k = 0$, the weight of any edge is

$w(x_{i,j}x_{r,s}) = 1 + k\,|\ln(t + 2) - \ln(t + 1)| = 1$

Using the Lingo software, we obtain $O_1 = O_2 = [l_3, l_4]$.

The global opinions of the two alternatives are the same.

Since $O_1 = O_2$, the alternatives are ranked $x_1 \sim x_2$.

Then we calculate the proximity degrees:

$\delta_1 = 1 - \frac{\sum_{i=1}^{n} d(x_{i1}, O_1)\, w_i}{\max_{v_x \in V} d(v_x, O_1)} = 1 - \frac{1 + 0 + 1}{5 \times 3} = 0.867$

$\delta_2 = 0.867$

where $v_x = l_1$ attains $\max_{v_x \in V} d(v_x, O_j)$.

Since δ1 = δ2, the proximity degrees of the two aggregation results are the same.

Example 2.

Consider two alternatives $X = \{x_1, x_2\}$, a group of decision makers $E = \{e_1, e_2, e_3\}$ with weights $\{w_1, w_2, w_3\}$, and the assessment matrix $(a_{ij})_{3 \times 2}$ given in Table 2.

     x1        x2        wi
e1   l2        l3        1/3
e2   [l3, l4]  [l4, l5]  1/3
e3   [l2, l5]  [l1, l4]  1/3
Table 2

Decision makers' assessments of Example 2.

When $k = 1$, the weight of any edge is

$w(x_{i,j}x_{r,s}) = 1 + |\ln(t + 2) - \ln(t + 1)|$

where $x_{i,j}x_{r,s}$ satisfies $r = i$ and $s = j + 1$, or $s = j$ and $r = i + 1$.

The weight of an edge connecting the basic label set $S_n^*$ and level $\Lambda_1$ is

$w(l_i\,[l_i, l_{i+1}]) = 1 + |I(l_i) - I([l_i, l_{i+1}])| = 1 + |\lg(1/5) - \lg(2/5)| = 1.30$

The weight of an edge connecting level $\Lambda_1$ and level $\Lambda_2$ is

$w([l_i, l_{i+1}]\,[l_i, l_{i+2}]) = 1 + |I([l_i, l_{i+1}]) - I([l_i, l_{i+2}])| = 1 + |\lg(2/5) - \lg(3/5)| = 1.18$

The weight of an edge connecting level $\Lambda_2$ and level $\Lambda_3$ is

$w([l_i, l_{i+2}]\,[l_i, l_{i+3}]) = 1 + |I([l_i, l_{i+2}]) - I([l_i, l_{i+3}])| = 1 + |\lg(3/5) - \lg(4/5)| = 1.12$

The weight of an edge connecting level $\Lambda_3$ and level $\Lambda_4$ is

$w([l_i, l_{i+3}]\,[l_i, l_{i+4}]) = 1 + |I([l_i, l_{i+3}]) - I([l_i, l_{i+4}])| = 1 + |\lg(4/5) - \lg(5/5)| = 1.10$

We construct the model to find the substitution $O_j$:

$\min \sum_{a_{ij} \in A_j}\sum_{x_{h,l} \in V(G_{S_n^*})} d(a_{ij}, x_{h,l})\, y_{h,l}\, w_i$

$\text{s.t.}\quad y_{h,l} \in \{0, 1\},\quad \sum_{x_{h,l} \in V(G_{S_n^*})} y_{h,l} = 1,\quad \sum_{i=1}^{n} w_i = 1$

$O_j$ is the global opinion of alternative $x_j$:

$O_1 = [l_2, l_4]$

$O_2 = [l_3, l_4]$

Since $O_1 \prec_1 O_2$, the alternatives are ranked $x_1 \prec x_2$.

Then we calculate the proximity degrees:

$\delta_1 = 1 - \frac{\sum_{i=1}^{n} d(x_{i1}, O_1)\, w_i}{\max_{v_x \in V} d(v_x, O_1)} = 1 - \frac{1.12 + 1.18 + 2.48}{4.72 \times 3} = 0.662$

where $v_x = l_1$ or $l_5$ attains $\max_{v_x \in V} d(v_x, O_1)$, and

$\delta_2 = 1 - \frac{\sum_{i=1}^{n} d(x_{i2}, O_2)\, w_i}{\max_{v_x \in V} d(v_x, O_2)} = 1 - \frac{2.3 + 2.36 + 1.3}{5.9 \times 3} = 0.663$

where $v_x = l_1$ attains $\max_{v_x \in V} d(v_x, O_2)$.

Since $\delta_1 < \delta_2$, the proximity degree of $O_2$ is better than that of $O_1$.

Remarks: In Example 2 we apply $I(x_{i,j})$ to build the weight function of each edge; if two vertices differ more in coverage, then $|I(x_{i,j}) - I(x_{r,s})|$ is larger. This result is interesting because the edge weights in Example 1 ignore the difference in coverage between two vertices, which leads to information loss: the approach of Example 1 cannot tell the difference between the alternatives and reaches the conclusion $x_1 \sim x_2$, while our approach maintains all the information and reaches the conclusion $x_1 \prec x_2$. One of the advantages of our approach to be highlighted is the accuracy of the distance measure; the information $I$ of the vertex $x_{i,j}$ used to build the weight function of each edge serves exactly this accuracy.

Roselló, Sánchez [31] computed the distances from an optimal label $F$ and solved group decision making under multi-granular linguistic assessments. With their method, we obtain the following result for Example 2:

$d(x_1, F) = d(l_2, l_5)\, w_1 + d([l_3, l_4], l_5)\, w_2 + d([l_2, l_5], l_5)\, w_3 = \frac{7.2 + 3.66 + 3.6}{3} = 4.82$

$d(x_2, F) = d(l_3, l_5)\, w_1 + d([l_4, l_5], l_5)\, w_2 + d([l_1, l_4], l_5)\, w_3 = \frac{4.96 + 1.3 + 5.8}{3} = 4.02$

$d(x_1, F) > d(x_2, F) \Rightarrow x_1 \prec x_2$

This result is the same as ours.

Liao et al. [8] extended the shorter HFLTS to obtain the same length and computed distances to rank alternatives in multi-criteria decision making. With their method, the solution to the problem in Example 2 is as follows.

First, the preferences are extended to the same length, giving the matrix

$(a_{ij})_{3 \times 2} = \begin{pmatrix} \{l_2, l_2, l_2, l_2\} & \{l_3, l_3, l_3, l_3\} \\ \{l_3, l_3, l_4, l_4\} & \{l_4, l_4, l_5, l_5\} \\ \{l_2, l_3, l_4, l_5\} & \{l_1, l_2, l_3, l_4\} \end{pmatrix}$

The hesitant fuzzy linguistic positive ideal solution is $m^+ = \{l_5, l_5\}$ and the negative ideal solution is $m^- = \{l_0, l_0\}$. The satisfaction degree $\eta(x_i)$ of each alternative is calculated using Equation (11):

$\eta(x_i) = \frac{(1 - \theta)\, d(x_i, m^-)}{\theta\, d(x_i, m^+) + (1 - \theta)\, d(x_i, m^-)}$

where $\theta = 0.5$ and $d(x_i, m)$ is the Hamming distance between $x_i$ and $m$, with $\operatorname{card}(x_i) = \operatorname{card}(m) = L$ and $\tau = \operatorname{card}(S_n^*)$:

$d(x_i, m) = \frac{1}{L}\sum_{l=1}^{L}\frac{|\delta_l^1 - \delta_l^2|}{2\tau + 1}$

Finally, we get

$\eta(x_1) = 0.6$

$\eta(x_2) = 0.667$

$\eta(x_1) < \eta(x_2) \Rightarrow x_1 \prec x_2$

This result is the same as ours.
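
As a quick numerical check of these satisfaction degrees, the following sketch assumes that the three experts' extended HFLTSs are combined with equal weights $1/3$ and uses the Hamming distance of Definition 5 with $2\tau + 1 = 11$; both assumptions are ours, made only to reproduce the reported values.

```python
def hamming(a, b, two_tau_plus_1=11):
    """Hamming distance of Definition 5 between two equal-length index lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a) / two_tau_plus_1

def satisfaction(extended, weights, pos=5, neg=0, theta=0.5):
    """eta(x_i) of Equation (11), with positive ideal l_pos and negative ideal l_neg."""
    d_pos = sum(w * hamming(e, [pos] * len(e)) for e, w in zip(extended, weights))
    d_neg = sum(w * hamming(e, [neg] * len(e)) for e, w in zip(extended, weights))
    return (1 - theta) * d_neg / (theta * d_pos + (1 - theta) * d_neg)

# satisfaction([[2, 2, 2, 2], [3, 3, 4, 4], [2, 3, 4, 5]], [1/3] * 3) -> 0.600
# satisfaction([[3, 3, 3, 3], [4, 4, 5, 5], [1, 2, 3, 4]], [1/3] * 3) -> 0.667
```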

Hence, consistent ranking results are obtained by our approach and those of Roselló et al. and Liao et al. for the problem in Example 2; the advantages of our approach with respect to the others were highlighted in Section 4.4.

6. CONCLUSION

The newly developed distance measure complements existing computational models and is particularly suitable for solving FGDM problems in which assessments are generated by a context-free grammar. HFLTSs differ not only in their center points but also in their coverage (degree of uncertainty). The most difficult problem when dealing with HFLTSs is aggregation, because the coverage indexes of HFLTSs differ, so a proper operator is key. We propose a new approach to obtain the aggregation result; the idea is that the result should have the minimal distances from the individual HFLTSs. This idea is reasonable and consistent with the definition of the proximity measure in GDM, which makes the distance measure well suited to FGDM. Our numerical examples illustrate the process and the effects of our method.

However, our distance measure assumes that edges connecting the same two levels have equal length. This condition rests on the assumption that the basic linguistic term set $S_n^*$ is a balanced linguistic term set whose terms are uniformly and symmetrically distributed. Future research will therefore include extending the proposed methodology to unbalanced linguistic term sets and exploring its application to FGDM problems with unbalanced linguistic term sets. Moreover, the grammar $G_H$ used to generate comparative linguistic expressions also has room for improvement: more freedom can be given to generate linguistic expressions that are closer to human language. Future research will also include applying the proposed distance measure to other kinds of decision models, such as the TOPSIS method.

CONFLICT OF INTEREST

The authors certify that they have no potential conflict of interest.

This article does not contain any studies with human participants or animals performed by any of the authors.

ACKNOWLEDGEMENTS

This work was supported by National Natural Science Foundation of China (NSFC) (71871121, 71401078, 71503134), Top-notch Academic Programs Project of Jiangsu High Education Institutions, and HRSA, US Department of Health & Human Services (No.H49MC0068).

Footnotes

1. This example is a revised version of Example 3 in [26].

REFERENCES

1.L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning—I, Info. Sci., Vol. 8, No. 3, 1975, pp. 199-249.
2.F. Herrera, S. Alonso, F. Chiclana, and E. Herrera-Viedma, Computing with words in decision making: foundations, trends and prospects, Fuzzy Optim. Decis. Making, Vol. 8, No. 4, 2009, pp. 337-364.
3.F. Herrera and L. Martinez, A 2-tuple fuzzy linguistic representation model for computing with words, IEEE Trans. Fuzzy Syst., Vol. 8, No. 6, 2000, pp. 746-752.
4.J.-H. Wang and J. Hao, A new version of 2-tuple fuzzy linguistic representation model for computing with words, IEEE Trans. Fuzzy Syst., Vol. 14, No. 3, 2006, pp. 435-445.
5.Z.S. Xu, A method based on linguistic aggregation operators for group decision making with linguistic preference relations, Info. Sci., Vol. 166, No. 1–4, 2004, pp. 19-30.
6.Y. Dong, C.-C. Li, and F. Herrera, Connecting the linguistic hierarchy and the numerical scale for the 2-tuple linguistic model and its use to deal with hesitant unbalanced linguistic information, Info. Sci., Vol. 367, 2016, pp. 259-278.
7.Y.C. Dong and E. Herrera-Viedma, Consistency-Driven automatic methodology to set interval numerical scales of 2-Tuple linguistic term sets and its use in the linguistic GDM with preference relation, IEEE Trans. Cybern., Vol. 45, No. 4, 2015, pp. 780-792.
8.H. Liao, Z. Xu, and X.J. Zeng, Distance and similarity measures for hesitant fuzzy linguistic term sets and their application in multi-criteria decision making, Info. Sci., Vol. 271, 2014, pp. 125-142.
9.R.M. Rodriguez, L. Martinez, and F. Herrera, Hesitant fuzzy linguistic term sets for decision making, IEEE Trans. Fuzzy Syst., Vol. 20, No. 1, 2012, pp. 109-119.
10.R.M. Rodríguez, L. Martínez, and F. Herrera, A group decision making model dealing with comparative linguistic expressions based on hesitant fuzzy linguistic term sets, Info. Sci., Vol. 241, 2013, pp. 28-42.
11.S. Liu, F.T.S. Chan, and W. Ran, Multi-attribute group decision-making with multi-granularity linguistic assessment information: an improved approach based on deviation and TOPSIS, Appl. Math. Model., Vol. 37, No. 24, 2013, pp. 10129-10140.
12.J.-Q. Wang, J.-T. Wu, J. Wang, H.-Y. Zhang, and X.-H. Chen, Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems, Info. Sci., Vol. 288, 2014, pp. 55-72.
13.Z. Zhang and C. Wu, Hesitant fuzzy linguistic aggregation operators and their applications to multiple attribute group decision making, J. Intell. Fuzzy Syst., Vol. 26, No. 5, 2014, pp. 2185-2202.
14.J. Wang, J.-Q. Wang, H.-Y. Zhang, and X.-H. Chen, Multi-criteria decision-making based on hesitant fuzzy linguistic term sets: an outranking approach, Knowl. Based Syst., Vol. 86, 2015, pp. 224-236.
15.C. Wei, N. Zhao, and X. Tang, A novel linguistic group decision-making model based on extended hesitant fuzzy linguistic term sets, Int. J. Uncertainty Fuzziness Knowl. Based Syst., Vol. 23, No. 3, 2015, pp. 379-398.
16.G. Hesamian and M. Shams, Measuring similarity and ordering based on hesitant fuzzy linguistic term sets, J. Intell. Fuzzy Syst., Vol. 28, No. 2, 2015, pp. 983-990.
17.H. Liao, Z. Xu, X.-J. Zeng, and J.M. Merigó, Qualitative decision making with correlation coefficients of hesitant fuzzy linguistic term sets, Knowl. Based Syst., Vol. 76, 2015, pp. 127-138.
18.M. Yavuz, B. Oztaysi, S.C. Onar, and C. Kahraman, Multi-criteria evaluation of alternative-fuel vehicles via a hierarchical hesitant fuzzy linguistic model, Expert Syst. Appl., Vol. 42, No. 5, 2015, pp. 2835-2848.
19.C. Wei, R.M. Rodríguez, and L. Martínez, Uncertainty measures of extended hesitant fuzzy linguistic term sets, IEEE Trans. Fuzzy Syst., Vol. 26, No. 3, 2018, pp. 1763-1768.
20.R.M. Rodríguez, Á. Labella, and L. Martínez, An overview on fuzzy modelling of complex linguistic preferences in decision making, Int. J. Comput. Intell. Syst., Vol. 9, No. sup1, 2016, pp. 81-94.
21.S.-M. Chen and J.-A. Hong, Multicriteria linguistic decision making based on hesitant fuzzy linguistic term sets and the aggregation of fuzzy sets, Info. Sci., Vol. 286, 2014, pp. 63-74.
22.C. Wei, N. Zhao, and X. Tang, Operators and comparisons of hesitant fuzzy linguistic term sets, IEEE Trans. Fuzzy Syst., Vol. 22, No. 3, 2014, pp. 575-585.
23.H. Liu and R.M. Rodríguez, A fuzzy envelope for hesitant fuzzy linguistic term set and its application to multicriteria decision making, Info. Sci., Vol. 258, 2014, pp. 220-238.
24.M. Xia and Z. Xu, Hesitant fuzzy information aggregation in decision making, Int. J. Approx. Reason., Vol. 52, No. 3, 2011, pp. 395-407.
25.R.M. Rodríguez, L. Martinez, V. Torra, Z.S. Xu, and F. Herrera, Hesitant fuzzy sets: state of the art and future directions, Int. J. Intell. Syst., Vol. 29, No. 6, 2014, pp. 495-524.
26.H. Liao and Z. Xu, Approaches to manage hesitant fuzzy linguistic information based on the cosine distance and similarity measures for HFLTSs and their application in qualitative decision making, Expert Syst. Appl., Vol. 42, No. 12, 2015, pp. 5328-5336.
27.E. Falcó, J.L. García-Lapresta, and L. Roselló, Allowing agents to be imprecise: a proposal using multiple linguistic terms, Info. Sci., Vol. 258, 2014, pp. 249-265.
28.H. Wang and Z. Xu, Some consistency measures of extended hesitant fuzzy linguistic preference relations, Info. Sci., Vol. 297, 2015, pp. 316-331.
29.G. Bordogna and G. Pasi, A fuzzy linguistic approach generalizing Boolean information retrieval: a model and its evaluation, J. Am. Soc. Info. Sci., Vol. 44, No. 2, 1993, pp. 70-82.
30.L.-W. Lee and S.-M. Chen, Fuzzy decision making based on likelihood-based comparison relations of hesitant fuzzy linguistic term sets and hesitant fuzzy linguistic operators, Info. Sci., Vol. 294, 2015, pp. 513-529.
31.L. Roselló, M. Sanchez, and N. Agell, Using consensus and distances between generalized multi-attribute linguistic assessments for group decision-making, Info. Fusion, Vol. 17, 2014, pp. 83-92.
32.W. Pedrycz and M. Song, Granular fuzzy models: a study in knowledge management in fuzzy modeling, Int. J. Approx. Reason., Vol. 53, No. 7, 2012, pp. 1061-1079.
33.E. Herrera-Viedma, O. Cordón, M. Luque, A.G. Lopez, and A.M. Muñoz, A model of fuzzy linguistic IRS based on multi-granular linguistic information, Int. J. Approx. Reason., Vol. 34, No. 2–3, 2003, pp. 221-239.
34.L. Roselló, F. Prats, N. Agell, and M. Sánchez, Measuring consensus in group decisions by means of qualitative reasoning, Int. J. Approx. Reason., Vol. 51, No. 4, 2010, pp. 441-452.
35.E.W. Dijkstra, A note on two problems in connexion with graphs, Numer. Math., Vol. 1, No. 1, 1959, pp. 269-271.
36.M.H. Xu, Y.Q. Liu, Q.L. Huang, Y.X. Zhang, and G.F. Luan, An improved Dijkstra’s shortest path algorithm for sparse network, Appl. Math. Comput., Vol. 185, No. 1, 2007, pp. 247-254.
37.E. Herrera-Viedma, F. Mata, L. Martinez, F. Chiclana, and L.G. Pérez, Measurements of consensus in multi-granular linguistic group decision-making, V. Torra and Y. Narukawa (editors), Modeling Decisions for Artificial Intelligence, Springer, Berlin/Heidelberg, Vol. 3131, 2004, pp. 61-73.

Cite this article

TY  - JOUR
AU  - Mei Cai
AU  - Yiming Wang
AU  - Zaiwu Gong
AU  - Guo Wei
PY  - 2019
DA  - 2019/01/14
TI  - A Novel Comparative Linguistic Distance Measure Based on Hesitant Fuzzy Linguistic Term Sets and Its Application in Group Decision-Making
JO  - International Journal of Computational Intelligence Systems
SP  - 227
EP  - 237
VL  - 12
IS  - 1
SN  - 1875-6883
UR  - https://doi.org/10.2991/ijcis.2018.125905643
DO  - 10.2991/ijcis.2018.125905643
ID  - Cai2019
ER  -