International Journal of Computational Intelligence Systems

Volume 10, Issue 1, 2017, Pages 970 - 985

Generalizing linguistic distributions in hesitant decision context

Authors
Guiqing Zhang1, gqzhang@mail.xjtu.edu.cn, Yuzhu Wu2, yzwu@stu.scu.edu.cn, Yucheng Dong2, ycdong@scu.edu.cn
1School of Finance and Economics, Xi’an Jiaotong University, Xi’an, 710049, China
2Business School, Sichuan University, Chengdu, 610065, China
Received 18 April 2017, Accepted 13 June 2017, Available Online 28 June 2017.
DOI
10.2991/ijcis.2017.10.1.65
Keywords
decision making; computing with words; hesitant fuzzy linguistic term set; linguistic distribution
Abstract

The hesitant fuzzy linguistic term set (HFLTS) and the linguistic distribution (LD) are becoming popular tools to describe decision makers' linguistic preferences. By combining the HFLTS and the LD, this paper proposes a new concept called the hesitant linguistic distribution (HLD), presents the transformation between HLDs and LDs, and defines the basic comparison and aggregation operations to perform on HLDs. Comparisons among several linguistic expressions are then made. Finally, the use and behavior mechanism of the HLD in multiple attribute group decision making is demonstrated.

Copyright
© 2017, the Authors. Published by Atlantis Press.
Open Access
This is an open access article under the CC BY-NC license (http://creativecommons.org/licences/by-nc/4.0/).

1. Introduction

Linguistic decision making, in which linguistic information is utilized to describe decision makers' preferences/opinions qualitatively, is a common activity in our daily life, and different linguistic approaches have been proposed to deal with computing with words (CW) [11, 16, 19, 26, 36] in linguistic decision making problems. Two classical linguistic computation models, (1) the semantic model [26] and (2) the symbolic model [20, 21], have been intensively studied. In particular, the 2-tuple linguistic representation model [13], which avoids the information loss suffered by earlier symbolic computations, has been widely applied (e.g., [4, 17, 22, 23, 24]). Furthermore, different extensions have been built on the 2-tuple linguistic model, such as the linguistic hierarchy model [12, 14], the multi-granular linguistic model [25, 27], the proportional 2-tuple linguistic model [32] and the numerical scale model [5, 9].

In the above mentioned models, decision makers can only utilize single linguistic terms to elicit their preferences, which restricts them from expressing their opinions with flexible and rich linguistic expressions [15, 30]. To address this issue, Rodríguez et al. [31] introduced the concept of the hesitant fuzzy linguistic term set (HFLTS), which takes decision makers' hesitancy among different linguistic terms into consideration. Based on the use of HFLTSs, Beg and Rashid [1] proposed a TOPSIS method to aggregate HFLTSs in multi-criteria decision making. Liu and Rodríguez [18] proposed the fuzzy envelope to carry out the CW processes of HFLTSs. Wei et al. [33] introduced aggregation operators and comparisons of HFLTSs. Dong et al. [3] presented a novel approach to the consensus reaching process with hesitant linguistic assessments in group decision making. The recent progress of the HFLTS in decision making can be found in the position paper by Rodríguez et al. [29].

Different from the HFLTS, which does not consider the symbolic proportion information of the terms, Zhang et al. [39] proposed the linguistic distribution (LD), in which symbolic proportions are assigned to all the terms in a linguistic term set. Dong et al. [7] introduced the unbalanced LD with interval symbolic proportions in a multi-granular context. Wu and Xu [34] proposed possibility distributions for HFLTSs, with symbolic proportions uniformly distributed over the terms in an HFLTS. Chen et al. [2] proposed the proportional hesitant fuzzy linguistic term set, which includes the proportional information of generalized linguistic terms. Zhang et al. [41] discussed LDs in large-scale multi-attribute group decision making (MAGDM). Pang et al. [28], Guo et al. [10] and Wu and Dong [35] discussed the cases of LDs with incomplete information.

However, in the LD and its variants, all symbolic proportion information of these expressions is attached to single terms, and in some situations decision makers have to express symbolic proportion information over HFLTSs. For example, an expert is asked to evaluate a football player according to the player's past performances. The established linguistic term set is S = {s0: very poor, s1: poor, s2: slightly poor, s3: medium, s4: slightly good, s5: good, s6: very good}.

When evaluating, the expert considers that the proportion of the performance 'slightly good' is 0.3, that of 'good' is 0.2 and that of 'very good' is 0.2. But the expert hesitates among the terms 'very poor', 'poor', 'slightly poor' and 'medium', and he/she can only be sure that the proportion of the performance 'no better than medium' is 0.3. In this situation, the evaluation provided by the expert can be described by {({s0, s1, s2, s3}, 0.3), (s4, 0.3), (s5, 0.2), (s6, 0.2)}: the expert does not provide proportion information for the single terms in {s0, s1, s2, s3} but a total proportion for the HFLTS {s0, s1, s2, s3}. In this paper, we call this kind of evaluation information a hesitant linguistic distribution (HLD). We will present the basic operations, including the comparison and aggregation operations, to perform on HLDs, and will also propose comparisons among several linguistic expressions to show that the HLD is their generalization.

The rest of this paper is organized as follows. Section 2 introduces the basic knowledge regarding the 2-tuple linguistic model, the HFLTS, and the LD. Then, Section 3 proposes the HLD and its basic operations. Next, Section 4 presents the comparisons among several linguistic expressions. Section 5 then discusses the use and behavior mechanism of the HLD in multi-attribute group decision making (MAGDM). Finally, concluding remarks are included in Section 6.

2. Preliminaries

This section introduces the basic knowledge regarding the 2-tuple linguistic model, the HFLTS and the LD. The introduction of the 2-tuple linguistic model is necessary because (1) it provides the basis for CW in this paper, and (2) the HFLTS and the LD are both generalizations of it.

2.1. The 2-tuple linguistic model

Let S = {s0, s1,…, sg} be a linguistic term set with odd cardinality satisfying [11, 13, 19]:

(1) The set is ordered: sk ≥ st if k ≥ t;

(2) There is a negation operator: neg(sk) = st such that k + t = g.

g + 1 is called the cardinality of S and the term sk (k = 0, 1, …, g) represents a possible value for a linguistic variable. The basic notations and operation laws of linguistic variables are introduced in [37].

Herrera and Martínez [13] proposed the 2-tuple linguistic model.

Definition 1 [13]:

Let S = {s0, s1,…, sg} be as before and α ∈ [0, g] be a value representing the result of a symbolic aggregation operation. A linguistic 2-tuple (sk, β) that expresses the equivalent information to α is obtained by the function:

Δ: [0, g] → S × [−0.5, 0.5),
where Δ(α) = (sk, β), with k = round(α) and β = α − k ∈ [−0.5, 0.5), round being the usual rounding operation. The set of all linguistic 2-tuples is denoted by S̄, i.e., S̄ = {(sk, β) | sk ∈ S, β ∈ [−0.5, 0.5), k = 0, 1,…, g}.

Clearly, Δ is a one-to-one mapping and its inverse function is:

Δ⁻¹: S̄ → [0, g],
with Δ⁻¹(sk, β) = k + β.
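To make these translation functions concrete, the following is a minimal Python sketch (the names delta and delta_inv are our own illustration, not from [13]):

```python
import math

def delta(alpha, g):
    """Delta: [0, g] -> S x [-0.5, 0.5). Returns (k, beta) standing for (s_k, beta)."""
    k = math.floor(alpha + 0.5)   # usual rounding, so that beta lands in [-0.5, 0.5)
    k = min(k, g)                 # alpha = g rounds to g, keeping the index in {0, ..., g}
    beta = alpha - k              # the symbolic translation
    return k, beta

def delta_inv(k, beta):
    """Inverse of Delta: maps the 2-tuple (s_k, beta) back to a value in [0, g]."""
    return k + beta

# alpha = 3.5 over S = {s0, ..., s6} corresponds to the 2-tuple (s4, -0.5):
print(delta(3.5, 6))        # (4, -0.5)
print(delta_inv(4, -0.5))   # 3.5
```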

For any linguistic 2-tuple of S¯ , there are the following computational operations:

(1) Negation operation: Neg(sk, β) = Δ(g − Δ⁻¹(sk, β)).

(2) Comparison operation: let (sk, β1) and (st, β2) be two linguistic 2-tuples.

  (i) If k < t, then (sk, β1) is smaller than (st, β2);

  (ii) If k = t, then:

    (a) if β1 = β2, (sk, β1) and (st, β2) represent the same information;

    (b) if β1 < β2, (sk, β1) is smaller than (st, β2).

Several aggregation operators such as the linguistic weighted average (WA) operator and the ordered weighted average (OWA) operator have been developed (see [13, 20, 21]).
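For instance, the linguistic WA reduces to Δ(∑_j wj × Δ⁻¹(s_kj, βj)); a small sketch reusing the delta and delta_inv functions above (again with illustrative names):

```python
def two_tuple_wa(tuples, weights, g):
    """2-tuple weighted average: Delta(sum_j w_j * Delta^{-1}(s_kj, beta_j))."""
    return delta(sum(w * delta_inv(k, b) for (k, b), w in zip(tuples, weights)), g)

# Average of (s2, 0.1) and (s4, -0.2) with equal weights over g = 6:
print(two_tuple_wa([(2, 0.1), (4, -0.2)], [0.5, 0.5], 6))  # (3, -0.05) up to rounding
```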

2.2. Hesitant fuzzy linguistic term set

The 2-tuple linguistic model proposed by Herrera and Martínez [13] can deal with linguistic information expressed by single terms. However, there are situations that single terms cannot handle. For example, a coach may hesitate among several terms when he/she is not sure whether the performance of a player is 'slightly good', 'good' or 'very good'. To overcome this limitation, Rodríguez et al. [31] proposed the concept of the HFLTS, in which multiple consecutive terms are allowed to represent a decision maker's hesitant preference. The HFLTS and its envelope are introduced in Definitions 2–4.

Definition 2 [31]:

Let S = {s0, s1,…, sg} be a linguistic term set. An HFLTS, denoted as HS, is an ordered finite subset of consecutive linguistic terms of S.

If HS = {}, HS is called an empty HFLTS; if HS = S, HS is called a full HFLTS.

Definition 3 [31]:

Let S = {s0, s1,…, sg} be a linguistic term set, and let HS be an HFLTS. The upper bound HS+ and lower bound HS of HS are defined as:

(1) HS+ = max(sk) = sj, where sk ∈ HS and sk ≤ sj for all k;

(2) HS− = min(sk) = sj, where sk ∈ HS and sk ≥ sj for all k.

Definition 4 [31]:

Let S = {s0, s1,…, sg} be a linguistic term set, and the envelope of the HFLTS, env(HS), is a linguistic interval whose limits are obtained by the upper bound (max) and lower bound (min). Hence

env(HS) = [HS−, HS+], with HS− ≤ HS+.

For any two HFLTSs H1S and H2S , there is the following comparison operation:

(1) H1S > H2S if and only if env(H1S) > env(H2S);

(2) H1S = H2S if and only if env(H1S) = env(H2S).

The details for the HFLTS can be found in Rodríguez et al. [31].
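As a small illustration, an HFLTS can be represented by the set of its term indices, from which the envelope follows directly (a sketch with our own naming):

```python
def envelope(hflts):
    """env(H_S) = [H_S^-, H_S^+]: lower and upper bound indices of an HFLTS."""
    return min(hflts), max(hflts)

# H_S = {s1, s2, s3} over S = {s0, ..., s6}:
print(envelope({1, 2, 3}))  # (1, 3), i.e., env(H_S) = [s1, s3]
```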

2.3. Linguistic distribution

Zhang et al. [39] and Dong et al. [7] proposed the concept of linguistic distribution, as Definition 5.

Definition 5 [7, 39]:

Let S = {s0, s1,…, sg} be a linguistic term set. Let m = {(sk, β(sk)) | k = 0, 1,…, g}, where β(sk) is the symbolic proportion of sk, β(sk) ≥ 0, and ∑_{k=0}^{g} β(sk) = 1. Then m is called a linguistic distribution (LD) over S.

Let m = {(sk, β(sk))| k = 0, 1, …, g} be an LD over S. Then the expectation of m is defined as [7, 39]:

E(m) = Δ( ∑_{k=0}^{g} β(sk) × Δ⁻¹(sk) ),
where E(m) ∈ S̄.

For any LD over S, there are the following operations:

(1) Negation operation: Neg({(sk, β(sk))}) = {(sk, β(sg−k))}, k = 0, 1,…, g.

(2) Comparison operation: let m1 and m2 be two LDs over S.

  (i) If E(m1) < E(m2), m1 is smaller than m2;

  (ii) If E(m1) = E(m2), m1 and m2 have the same expectation; Zhang et al. [41] discussed this situation.

The weighted average operator and the ordered weighted average operator of LD have been developed (see [7, 39]).

In Wu and Dong [35], LD was generalized to incomplete linguistic distribution as Definition 6.

Definition 6 [35]:

Let S = {s0, s1,…, sg} be a linguistic term set. Let m = {(sk, β(sk))| k = 0, 1, …, g}, where β(sk) ∈ [0,1]∪{null}. Then

(1) If β(sk) ∈ [0, 1] for all sk ∈ S and ∑_{k=0}^{g} β(sk) = 1, then β(sk) is called the symbolic proportion of sk and m is a complete linguistic distribution (CLD), denoted as mC (refer to [39]).

(2) If there exists β(sk) = null and ∑_{β(sk)≠null} β(sk) < 1, then m is called an incomplete linguistic distribution (ILD) over S, denoted as mI.

The possibility distribution for HFLTS proposed by Wu and Xu [34], in which the sum of the symbolic proportions over the terms of an HFLTS of S equals one, is a special CLD.

3. The hesitant linguistic distribution and its operations

The symbolic proportion information of an LD is assigned to single terms. However, in some situations decision makers cannot provide symbolic proportion information for single terms. Instead, they may hesitate among several consecutive terms and provide a total proportion for an HFLTS. In order to deal with this situation, in this section we propose the concept of the hesitant linguistic distribution (HLD) and some of its operations.

3.1. Definition of the hesitant linguistic distribution

Let S = {s0, s1,…, sg} be a linguistic term set, and let HS be an HFLTS of S. Then the set of all HFLTSs of S is denoted as H in this paper.

The concept of the HLD can be formally defined as Definition 7.

Definition 7:

Let S = {s0, s1,…, sg} and H be as before. An HLD is defined as:

M = {(HiS, β(HiS)) | HiS ∈ H},
where HiS = {s(Li), s(Li+1),…, s(Ui)} and β(HiS) ∈ [0, 1] ∪ {null}. In the HLD M, β(HiS) is called the symbolic proportion of HiS if β(HiS) ≠ null.

Further, we consider two cases:

(1) If ∑_{HiS∈H, β(HiS)≠null} β(HiS) ≥ 1, we normalize the symbolic proportions β(HiS) by setting β(HiS) = β(HiS) / ∑_{HiS∈H, β(HiS)≠null} β(HiS) if β(HiS) ≠ null, and β(HiS) = null otherwise.

    The normalized M, denoted as MN, is a complete hesitant linguistic distribution (CHLD) over S.

(2) If ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1, M is an incomplete hesitant linguistic distribution (IHLD) over S.

Example 1.

Let S = {s0, s1, s2, s3, s4, s5, s6} be a linguistic term set. Then the set of all HFLTSs of S is:

H = {{s0}, {s1}, {s2}, {s3}, {s4}, {s5}, {s6}, {s0, s1}, {s1, s2}, {s2, s3}, {s3, s4}, {s4, s5}, {s5, s6}, {s0, s1, s2}, {s1, s2, s3}, {s2, s3, s4}, {s3, s4, s5}, {s4, s5, s6}, {s0, s1, s2, s3}, {s1, s2, s3, s4}, {s2, s3, s4, s5}, {s3, s4, s5, s6}, {s0, s1, s2, s3, s4}, {s1, s2, s3, s4, s5}, {s2, s3, s4, s5, s6}, {s0, s1, s2, s3, s4, s5}, {s1, s2, s3, s4, s5, s6}, {s0, s1, s2, s3, s4, s5, s6}},
and consider
M1 = {({s0, s1}, 0.2), ({s2, s3, s4}, 0.3), ({s3, s4}, 0.3), ({s4, s5, s6}, 0.3), (s6, 0.1)}.

As ∑_{Hi1S∈H, β(Hi1S)≠null} β(Hi1S) = 1.2 > 1, we normalize M1; the normalized M1 is:

M1N = {({s0, s1}, 0.17), ({s2, s3, s4}, 0.25), ({s3, s4}, 0.25), ({s4, s5, s6}, 0.25), (s6, 0.08)},
which is a CHLD over S, while M2 = {(s0, 0.2), ({s1, s2}, 0.2), ({s3, s4}, 0.3), ({s5, s6}, 0.2)} is an IHLD over S.
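A minimal sketch of Definition 7's normalization, representing an HLD as a mapping from tuples of term indices to symbolic proportions (null entries simply omitted; names and data layout are our own):

```python
def normalize_hld(M):
    """Normalize an HLD whose non-null proportions sum to more than one (Definition 7)."""
    total = sum(M.values())
    return {h: round(b / total, 2) for h, b in M.items()}

# M1 from Example 1; e.g. (0, 1) stands for the HFLTS {s0, s1}:
M1 = {(0, 1): 0.2, (2, 3, 4): 0.3, (3, 4): 0.3, (4, 5, 6): 0.3, (6,): 0.1}
print(normalize_hld(M1))
# {(0, 1): 0.17, (2, 3, 4): 0.25, (3, 4): 0.25, (4, 5, 6): 0.25, (6,): 0.08}
```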

3.2. The transformation and comparison operations

After introducing the concept of the HLD, it is necessary to introduce its operations. First, we propose a transformation approach between LDs and HLDs. Let HiS = {s(Li), s(Li+1),…, s(Ui)} ∈ H. The basic idea of the transformation approach is to evenly distribute the symbolic proportion β(HiS) of HiS over the single terms s(Li), s(Li+1),…, s(Ui).

The transformation approach between LD and HLD can be formally presented as Algorithm I.

Algorithm I.

Input: The HLD M={(HiS,β(HiS))|HiSH} .

Output: The transformed LD M* = {(sk, αk)|k = 0,1,…, g}.

Step 1: Let αk = 0 and βk = −1 (k = 0, 1,…, g).

Step 2: Let HτS be any HFLTS in H with β(HτS) ≠ null. For any sk ∈ HτS, let βk = 1 and αk = αk + β(HτS)/#(HτS), where #(HτS) is the number of terms in HτS. Let H = H − {HτS}.

Step 3: If H ≠ ∅, go to Step 2; otherwise, for each k with βk = −1, let αk = null. Let M* = {(sk, αk) | k = 0, 1,…, g} and output M*.

We call M* the transformed LD of M.
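A sketch of Algorithm I under the same dictionary representation (a missing key plays the role of β = null; terms never touched come out as None, i.e., null):

```python
def transform_to_ld(M, g):
    """Algorithm I: spread each HFLTS's proportion evenly over its single terms."""
    alpha = [0.0] * (g + 1)
    touched = [False] * (g + 1)
    for hflts, beta in M.items():
        share = beta / len(hflts)        # beta(H_tau) / #(H_tau)
        for k in hflts:
            alpha[k] += share
            touched[k] = True
    return [alpha[k] if touched[k] else None for k in range(g + 1)]

M1N = {(0, 1): 0.17, (2, 3, 4): 0.25, (3, 4): 0.25, (4, 5, 6): 0.25, (6,): 0.08}
print([round(a, 2) for a in transform_to_ld(M1N, 6)])
# ~[0.09, 0.09, 0.08, 0.21, 0.29, 0.08, 0.16], the LD M1N* of Example 2 below
```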

Next, we present an example to illustrate the transformation between LD and HLD.

Example 2.

Continuing Example 1. Based on Algorithm I, the CHLD M1N and the IHLD M2 can be transformed into the LDs M1N* and M2* as follows:

M1N* = {(s0, 1/2 × 0.17), (s1, 1/2 × 0.17), (s2, 1/3 × 0.25), (s3, 1/3 × 0.25 + 1/2 × 0.25), (s4, 1/3 × 0.25 + 1/2 × 0.25 + 1/3 × 0.25), (s5, 1/3 × 0.25), (s6, 1/3 × 0.25 + 0.08)}
= {(s0, 0.09), (s1, 0.09), (s2, 0.08), (s3, 0.21), (s4, 0.29), (s5, 0.08), (s6, 0.16)},

M2* = {(s0, 0.2), (s1, 1/2 × 0.2), (s2, 1/2 × 0.2), (s3, 1/2 × 0.3), (s4, 1/2 × 0.3), (s5, 1/2 × 0.2), (s6, 1/2 × 0.2)}
= {(s0, 0.2), (s1, 0.1), (s2, 0.1), (s3, 0.15), (s4, 0.15), (s5, 0.1), (s6, 0.1)}.

Proposition 1:

Let M = {(HiS, β(HiS)) | HiS ∈ H} be an HLD, and let M* = {(sk, β(sk)) | k = 0, 1,…, g} be the transformed LD of M. Then,

(1) if ∑_{HiS∈H, β(HiS)≠null} β(HiS) = 1, then ∑_{k=0}^{g} β(sk) = 1 and M* is a CLD;

(2) if ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1, then ∑_{k=0}^{g} β(sk) < 1 and M* is an ILD.

Proof.

Let φ(k, i) = 0 if sk ∉ HiS and φ(k, i) = 1 if sk ∈ HiS (HiS ∈ H). Then, according to Algorithm I, we have αk = ∑_{HiS∈H, β(HiS)≠null} (β(HiS)/#(HiS)) × φ(k, i). As ∑_{k=0}^{g} φ(k, i) = #(HiS), we obtain ∑_{k=0}^{g} αk = ∑_{HiS∈H, β(HiS)≠null} β(HiS). Let M* = {(sk, αk) | k = 0, 1,…, g}. Then, if ∑_{HiS∈H, β(HiS)≠null} β(HiS) = 1, we have ∑_{k=0}^{g} αk = 1 and M* is a CLD; if ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1, we have ∑_{k=0}^{g} αk < 1 and M* is an ILD.

This completes the proof of Proposition 1.

Proposition 1 indicates that, based on Algorithm I, the HLD M = {(HiS, β(HiS)) | HiS ∈ H} can be transformed into a CLD if ∑_{HiS∈H, β(HiS)≠null} β(HiS) = 1 and into an ILD if ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1.

Next, we define the comparison operations of any two HLDs based on the use of expectation and variation of transformed LD.

Definition 8:

Let M and M* = {(sk, β(sk))|k = 0,1,…, g} be as before. Then the expectation and variation of M are defined as Eqs. (4) and (5):

E(M) = Δ( ∑_{sk∈S, β(sk)≠null} β(sk) × Δ⁻¹(sk) )   (4)
and
V(M) = ∑_{sk∈S, β(sk)≠null} β(sk) × (E(M) − Δ⁻¹(sk))²   (5)
where (sk, β(sk)) ∈ M* (k = 0,1,…, g).

Let M1 and M2 be two HLDs over S, and the comparison operations between M1 and M2 are as follows.

(1) If E(M1) < E(M2), then M1 < M2;

(2) If E(M1) > E(M2), then M1 > M2;

(3) If E(M1) = E(M2), then:

  (i) if V(M1) < V(M2), then M1 > M2;

  (ii) if V(M1) > V(M2), then M1 < M2;

  (iii) if V(M1) = V(M2), there is no difference between M1 and M2.

Next, we present an example to illustrate the comparison operations.

Example 3.

Continuing Example 2. As E(M1) = 3.4 > E(M2) = 2.45, we have M1 > M2.
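Under the list representation of transformed LDs (None for null entries), E(M) and V(M) can be sketched as follows; the numbers reproduce Example 3 up to rounding:

```python
def expectation(ld):
    """E(M): proportion-weighted mean index of the transformed LD (Eq. (4))."""
    return sum(b * k for k, b in enumerate(ld) if b is not None)

def variation(ld):
    """V(M): proportion-weighted squared deviation from E(M) (Eq. (5))."""
    e = expectation(ld)
    return sum(b * (e - k) ** 2 for k, b in enumerate(ld) if b is not None)

M1_star = [0.09, 0.09, 0.08, 0.21, 0.29, 0.08, 0.16]  # transformed LD of M1 (Example 2)
M2_star = [0.2, 0.1, 0.1, 0.15, 0.15, 0.1, 0.1]       # transformed LD of M2 (Example 2)
print(round(expectation(M1_star), 2), round(expectation(M2_star), 2))  # 3.4 2.45, so M1 > M2
```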

3.3. Aggregation operations for HLDs

In this section, we introduce the weighted average operator and the ordered weighted average operator for HLDs.

For any two HLDs, the weighted union of them is defined as Definition 9.

Definition 9:

Let S = {s0, s1,…, sg} and H be as before, and let M1 = {(Hi1S, β(Hi1S)) | Hi1S ∈ H} and M2 = {(Hi2S, β(Hi2S)) | Hi2S ∈ H} be two HLDs over S, where β(Hi1S) ∈ [0, 1] ∪ {null} and β(Hi2S) ∈ [0, 1] ∪ {null}. Let w1 and w2 be the corresponding weights, where w1, w2 ≥ 0 and w1 + w2 = 1. Then the weighted union of M1 and M2 is defined as

U(M1, M2)_{w1,w2} = U(w1M1, w2M2),
where U(w1M1, w2M2) = {(HiS, β(HiS)) | HiS ∈ H} with

(HiS, β(HiS)) =
  (Hi1S, w1β(Hi1S) + w2β(Hi2S)), if β(Hi1S), β(Hi2S) ≠ null and Hi1S = Hi2S;
  (Hi1S, w1β(Hi1S)) ∪ (Hi2S, w2β(Hi2S)), if β(Hi1S), β(Hi2S) ≠ null and Hi1S ≠ Hi2S;
  (Hi1S, w1β(Hi1S)), if β(Hi1S) ≠ null and β(Hi2S) = null;
  (Hi2S, w2β(Hi2S)), if β(Hi2S) ≠ null and β(Hi1S) = null.

Example 4.

Continuing Example 2. Suppose the weights for M1 and M2 are w1 = 0.6 and w2 = 0.4. Then

U(M1, M2)_{w1,w2} = U(0.6M1, 0.4M2) = {(s0, 0.08), ({s0, s1}, 0.102), ({s1, s2}, 0.08), ({s2, s3, s4}, 0.15), ({s3, s4}, 0.27), ({s4, s5, s6}, 0.15), ({s5, s6}, 0.08), (s6, 0.048)}.
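A sketch of the weighted union under the same dictionary representation (omitted keys play the role of null entries), which reproduces the merge rule case by case:

```python
def weighted_union(M1, M2, w1, w2):
    """Weighted union of two HLDs (Definition 9): identical HFLTSs merge their
    weighted proportions; the others keep their own weighted share."""
    out = dict()
    for M, w in ((M1, w1), (M2, w2)):
        for h, b in M.items():
            out[h] = out.get(h, 0.0) + w * b
    return out

M1N = {(0, 1): 0.17, (2, 3, 4): 0.25, (3, 4): 0.25, (4, 5, 6): 0.25, (6,): 0.08}
M2 = {(0,): 0.2, (1, 2): 0.2, (3, 4): 0.3, (5, 6): 0.2}
u = weighted_union(M1N, M2, 0.6, 0.4)
print(round(u[(3, 4)], 2))  # 0.27 = 0.6*0.25 + 0.4*0.3, as in Example 4
```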

Based on Definition 9, we introduce the weighted average operator and the ordered weighted average operator for HLDs as Definitions 10 and 11.

Definition 10:

Let S = {s0, s1,…, sg} and H be as before, and let {M1, M2,…, Mn} be a set of HLDs over S, where Mj={(HijS,β(HijS))|HijSH} (j = 1,2,…, n), and let w = {w1, w2,…, wn}T be an associated weighting vector satisfying wj ≥ 0 and j=1nwj=1 . Then the weighted average operator for {M1, M2,…, Mn} is defined as:

HLDWA(M1, M2,…, Mn)_w = U(w1M1, U(w2M2,…, wnMn)).

Definition 11:

Let S = {s0, s1,…, sg} and H be as before, and let {M1, M2,…, Mn} be a set of HLDs over S, where Mj={(HijS,β(HijS))|HijSH} (j = 1,2,…, n). Let w = {w1, w2,…, wn}T be an associated weighting vector satisfying wj ≥ 0 and j=1nwj=1 . The ordered weighted average operator of {M1, M2,…, Mn} is defined as:

HLDOWA(M1, M2,…, Mn)_w = U(w1Mσ(1), U(w2Mσ(2),…, wnMσ(n))),
where (σ(1), σ(2),…, σ(n)) is a permutation of {1, 2,…, n} such that Mσ(j−1) > Mσ(j) for j = 2,…, n.
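Both operators reduce to repeated weighted unions; a compact sketch building on weighted_union's representation (expectation ties, which Definition 8 breaks by variation, are ignored here for brevity):

```python
def hldwa(Ms, ws):
    """HLDWA (Definition 10): weighted union of all the HLDs."""
    out = dict()
    for M, w in zip(Ms, ws):
        for h, b in M.items():
            out[h] = out.get(h, 0.0) + w * b
    return out

def hld_expectation(M):
    """E(M) via the transformed LD: each HFLTS spreads its proportion evenly."""
    return sum(b / len(h) * k for h, b in M.items() for k in h)

def hldowa(Ms, ws):
    """HLDOWA (Definition 11): sort by decreasing expectation, then weight."""
    return hldwa(sorted(Ms, key=hld_expectation, reverse=True), ws)
```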

Example 5:

Continuing Examples 1–3. Let {M1, M2, M3, M4, M5} be a set of HLDs over S, where M3, M4 and M5 are as follows:

M3 = {({s0, s1}, 0.1), ({s2, s3}, 0.2), ({s3, s4, s5}, 0.4), ({s5, s6}, 0.3)},
M4 = {({s0, s1, s2}, 0.3), ({s3, s4}, 0.4), ({s5, s6}, 0.3)},
and
M5 = {({s0, s1}, 0.1), ({s2, s3}, 0.2), (s3, 0.2), ({s4, s5, s6}, 0.5)}.

Without loss of generality, assume that w = (0.2,0.3,0.2,0.2,0.1)T. Then

HLDWA(M1, M2, M3, M4, M5) = U(w1M1, U(w2M2, w3M3, w4M4, w5M5)) = {(s0, 0.06), ({s0, s1}, 0.055), ({s0, s1, s2}, 0.06), ({s1, s2}, 0.06), ({s2, s3}, 0.06), ({s2, s3, s4}, 0.05), (s3, 0.02), ({s3, s4}, 0.22), ({s3, s4, s5}, 0.08), ({s4, s5, s6}, 0.1), ({s5, s6}, 0.18), (s6, 0.016)}.

Based on Definition 8, we have:

E(M3) > E(M5) > E(M1) > E(M4) > E(M2).

So,

HLDOWA(M1, M2, M3, M4, M5) = U(w1M3, U(w2M5, w3M1, w4M4, w5M2)) = {({s0}, 0.02), ({s0, s1}, 0.39), ({s0, s1, s2}, 0.06), ({s1, s2}, 0.02), ({s2, s3}, 0.1), ({s2, s3, s4}, 0.05), (s3, 0.06), ({s3, s4}, 0.16), ({s3, s4, s5}, 0.08), ({s4, s5, s6}, 0.2), ({s5, s6}, 0.14), (s6, 0.016)}.

In the following, we discuss the desirable properties of the proposed operators. We take the ordered weighted average operator as an example; the properties of the weighted average operator are similar.

Property 1.

Let {M1, M2,…, Mn} and w = {w1, w2,…, wn}T be as before. For any HLDOWA operator, HLDOWA(M1, M2,…, Mn) is an HLD over S.

Proof.

Based on Definition 11, we have

HLDOWA(M1, M2,…, Mn)_w = U(w1Mσ(1), U(w2Mσ(2),…, wnMσ(n))),
where Mσ(j) is the jth largest HLD in {M1, M2,…, Mn}. According to the definition of the HLD (Definition 7) and Definition 9, we can easily obtain that HLDOWA(M1, M2,…, Mn) is an HLD over S.

This completes the proof of Property 1.

Property 2.

For any HLDOWA operator

min_j{Mj} ≤ HLDOWA(M1, M2,…, Mn) ≤ max_j{Mj}.

Proof.

Let Mj* be the transformed LD associated with Mj. According to the comparisons for HLD, we have

min_j{E(Mj)} = min_j{ Δ( ∑_{k=0}^{g} β(sk,σ(j)) × Δ⁻¹(sk,σ(j)) ) }
≤ ∑_{j=1}^{n} wj × Δ( ∑_{k=0}^{g} β(sk,σ(j)) × Δ⁻¹(sk,σ(j)) )
= E(HLDOWA(M1, M2,…, Mn))
≤ max_j{ Δ( ∑_{k=0}^{g} β(sk,σ(j)) × Δ⁻¹(sk,σ(j)) ) } = max_j{E(Mj)},
where (sk,σ(j), β(sk,σ(j))) ∈ Mσ(j)* and Mσ(j)* is the jth largest in {M1*, M2*,…, Mn*}.

That is,

min_j{Mj} ≤ HLDOWA(M1, M2,…, Mn) ≤ max_j{Mj}.

This completes the proof of Property 2.

Property 3 (Commutativity).

Let {M1, M2,…, Mn} be a set of HLDs over S, and {D1, D2,…, Dn} be a permutation of {M1, M2,…, Mn}. Then, for any HLDOWA operator

HLDOWA(M1, M2,…, Mn) = HLDOWA(D1, D2,…, Dn).

Proof.

Let (σ(1), σ(2),…, σ(n)) be a permutation of {1, 2,…, n} such that Mσ(j−1) > Mσ(j) for j = 2,…, n, and let (δ(1), δ(2),…, δ(n)) be a permutation of {1, 2,…, n} such that Dδ(j−1) > Dδ(j) for j = 2,…, n. As {D1, D2,…, Dn} is a permutation of {M1, M2,…, Mn}, we have Mσ(j) = Dδ(j) (j = 1, 2,…, n). Thus,

HLDOWA(M1, M2,…, Mn) = U(w1Mσ(1), U(w2Mσ(2),…, wnMσ(n))) = U(w1Dδ(1), U(w2Dδ(2),…, wnDδ(n))) = HLDOWA(D1, D2,…, Dn).

This completes the proof of Property 3.

Property 4 (Monotonicity).

Let {M1, M2,…, Mn} be a set of HLDs over S, and let {L1, L2,…, Ln} be another set of HLDs over S. If Mj ≥ Lj for j = 1, 2,…, n, then

HLDOWA(M1, M2,…, Mn) ≥ HLDOWA(L1, L2,…, Ln).

Proof.

Let (σ(1), σ(2),…, σ(n)) be a permutation of {1, 2,…, n} such that Mσ(j−1) > Mσ(j) for j = 2,…, n, and let (δ(1), δ(2),…, δ(n)) be a permutation of {1, 2,…, n} such that Lδ(j−1) > Lδ(j) for j = 2,…, n. As Mj ≥ Lj, we have Mσ(j) ≥ Lδ(j). Thus,

E(HLDOWA(M1, M2,…, Mn)) = ∑_{j=1}^{n} wj E(Mσ(j)) ≥ ∑_{j=1}^{n} wj E(Lδ(j)) = E(HLDOWA(L1, L2,…, Ln)).

So, we have

HLDOWA(M1, M2,…, Mn) ≥ HLDOWA(L1, L2,…, Ln).

This completes the proof of Property 4.

Property 5 (Idempotency).

If Mj = M for all j = 1,2,…, n, then for any HLDOWA,

HLDOWA(M1, M2,…, Mn) = M.

Proof.

Let (σ(1), σ(2),…, σ(n)) be a permutation of {1, 2,…, n} such that Mσ(j−1) > Mσ(j) for j = 2,…, n. We have

HLDOWA(M1, M2,…, Mn) = U(w1Mσ(1), U(w2Mσ(2),…, wnMσ(n))) = U(w1M, U(w2M,…, wnM)) = M.

This completes the proof of Property 5.

To improve the readability, the basic notations in this paper are listed below.

  • S = {s0, s1,…, sg}: The linguistic term set.

  • HS : An HFLTS of S.

  • H : The set of all HFLTSs of S.

  • m = {(sk, β(sk)) | k = 0, 1,…, g}: An LD over S, where β(sk) ∈ [0, 1] ∪ {null} and ∑_{sk∈S, β(sk)≠null} β(sk) ≤ 1.

  • mC = {(sk, β(sk)) | k = 0, 1,…, g}: A CLD over S, where β(sk) ∈ [0, 1] for all sk ∈ S and ∑_{k=0}^{g} β(sk) = 1.

  • mI = {(sk, β(sk)) | k = 0, 1,…, g}: An ILD over S, where there exists β(sk) = null and ∑_{β(sk)≠null} β(sk) < 1.

  • M = {(HiS, β(HiS)) | HiS ∈ H}: An HLD over S, where β(HiS) ∈ [0, 1] ∪ {null}. If ∑_{HiS∈H, β(HiS)≠null} β(HiS) = 1, M is a CHLD over S; if ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1, M is an IHLD over S.

  • M* = {(sk, β(sk))|k = 0,1,…, g} : The transformed LD of M.

  • E(M): The expectation of M.

  • V(M): The variation of M.

  • U(M1, M2,…, Mn): The union of {M1, M2,…, Mn} (n ≥ 2).

4. Comparisons among different linguistic expressions

In this section, we briefly describe the concepts of LD and its variants, and discuss the differences among several different linguistic expressions.

4.1. LD and its variants

Let S = {s0, s1,…, sg} be a linguistic term set, HS be an HFLTS of S, and H be the set of all HFLTSs of S.

(1) The CLD. Zhang et al. [39] discussed the LD in which the symbolic proportion information provided for the terms is complete, i.e., the sum of the proportions equals one, which can be mathematically described as:

    m = {(sk, β(sk)) | k = 0, 1,…, g},
    where β(sk) ∈ [0, 1] for all sk ∈ S and ∑_{k=0}^{g} β(sk) = 1, and β(sk) is the symbolic proportion of sk.

(2) The ILD. Pang et al. [28], Guo et al. [10] and Wu and Dong [35] discussed the cases of LDs with incomplete information, in which the symbolic proportion information for the terms is incomplete, i.e., the sum of the proportions is less than one, which can be mathematically described as:

    m = {(sk, β(sk)) | k = 0, 1,…, g},
    where ∃β(sk) = null and ∑_{β(sk)≠null} β(sk) < 1.

    Note 1. Pang et al. [28] and Guo et al. [10] discussed the cases of LDs with partial ignorance of symbolic proportion information by introducing the concepts of probabilistic linguistic term sets and proportional fuzzy linguistic distributions, respectively, through different mathematical representations.

(3) The possibility distribution for HFLTS. Wu and Xu [34] proposed the concept of the possibility distribution for HFLTS (PDHFLTS), in which the symbolic proportion information is uniformly distributed over the single terms in an HFLTS and the sum of the possibilities equals one, which can be mathematically described as:

    m = {(sk, β(sk)) | k = 0, 1,…, g},
    where β(sk) ∈ [0, 1] for all sk ∈ S and ∑_{sτ∈HS} β(sτ) = 1, with HS an HFLTS of S.

    The PDHFLTS is a special CLD.

    Note 2. The mathematical formulation of the PDHFLTS in this paper is different from the definition provided in [34], but they have the same meaning.

(4) The LD with interval symbolic proportions (interval LD). Dong et al. [7] proposed the concept of the LD with interval symbolic proportions, in which the symbolic proportion information for the terms is given by interval values, which can be mathematically described as:

    m = {(sk, β(sk)) | k = 0, 1,…, g},
    where β(sk) = [β−(sk), β+(sk)] ⊆ [0, 1] is the interval symbolic proportion of sk, with ∑_{k=0}^{g} β−(sk) ≤ 1 ≤ ∑_{k=0}^{g} β+(sk), i.e., 1 ∈ ∑_{k=0}^{g} β(sk).

(5) The HLD proposed in this paper. In an HLD, the symbolic proportions are distributed over HFLTSs, which can be mathematically described as:

    M = {(HiS, β(HiS)) | HiS ∈ H},
    where β(HiS) ∈ [0, 1] ∪ {null} and β(HiS) is called the symbolic proportion of HiS if β(HiS) ≠ null. If ∑_{HiS∈H, β(HiS)≠null} β(HiS) = 1, M is a CHLD over S. If ∑_{HiS∈H, β(HiS)≠null} β(HiS) < 1, M is an IHLD over S.

4.2. Comparisons among LD and its variants

Based on the analysis of LD and its variants, their comparison results are listed in Table 1.

| Name of linguistic expression | Mathematical format | Symbolic proportion information |
|---|---|---|
| PDHFLTS [34] (an LD) | {(sk, β(sk))} (k = 0,…, g) | ∑_{sk∈HS} β(sk) = 1 |
| CLD [39] (an LD) | {(sk, β(sk))} (k = 0,…, g) | ∑_{sk∈S} β(sk) = 1 |
| ILD [10, 28, 35] (an LD) | {(sk, β(sk))} (k = 0,…, g) | ∑_{sk∈S, β(sk)≠null} β(sk) < 1 |
| Interval LD [7] (an LD) | {(sk, β(sk))} (k = 0,…, g) | β(sk) = [β−(sk), β+(sk)] ⊆ [0, 1]; 1 ∈ ∑_{k=0}^{g} β(sk) |
| HLD (our proposal) | {(HiS, β(HiS))} (HiS ∈ H) | ∑_{HiS∈H, β(HiS)≠null} β(HiS) ≤ 1 |

Table 1.

The comparisons of LD and its variants

From the above comparisons, we can observe the following characteristics:

(1) The PDHFLTS is a special CLD.

(2) The CLD is a special CHLD.

(3) The ILD is a special IHLD.

(4) The interval LD is a generalization of the LD.

The relationships among the LD and its variants are depicted in Fig. 1.

Fig. 1

5. Use of the proposed HLDs in MAGDM

In this section, we apply our proposal to MAGDM problems [6, 8, 38, 40]. Procedures for MAGDM with HLDs are presented, and an illustrative example is given.

5.1. Description of the MAGDM problem with HLDs

In this section we propose the procedures for the MAGDM with HLDs.

Let S = {s0, s1,…, sg} be the linguistic term set and D = {d1, d2,…, dn} be a set of n decision makers with w = {w1, w2,…, wn}T being the weighting vector satisfying wj ≥ 0 and ∑_{j=1}^{n} wj = 1. Let X = {x1, x2,…, xq} be a set of q alternatives and A = {a1, a2,…, ap} be a set of p attributes with v = {v1, v2,…, vp}T being the weighting vector satisfying vi ≥ 0 and ∑_{i=1}^{p} vi = 1. As the information about the alternatives is uncertain, the decision makers {d1, d2,…, dn} provide their preferences over the alternatives {x1, x2,…, xq} in the form of LDs. In addition, the decision makers come from different fields and may hesitate, due to their vague knowledge, when providing professional opinions on the alternatives. So, HLDs are adopted by the decision makers to elicit their preferences. The procedure for MAGDM with HLDs is as follows:

  • Step 1: Construct the evaluation matrix of each decision maker dj (j = 1, 2,…, n) for the alternatives {x1, x2,…, xq}, Lj = (Mrtj)q×p, where Mrtj = {(Hi,rtS,j, β(Hi,rtS,j)) | Hi,rtS,j ∈ H} is an HLD.

    The HLD Mrtj represents dj's preference for alternative xr (r = 1, 2,…, q) with respect to attribute at (t = 1, 2,…, p).

  • Step 2: Obtain the collective evaluation matrix L̄ = (M̄rt)q×p for each alternative xr with respect to attribute at by aggregating the decision makers' evaluation matrices.

    Without loss of generality, we utilize the HLDOWA operator in the aggregation process in this step and the next. We obtain M̄rt = {(Hi,rtS, β(Hi,rtS)) | Hi,rtS ∈ H}, where

    M̄rt = HLDOWA(Mrt1, Mrt2,…, Mrtn)_w.

  • Step 3: Calculate the overall value zr of each alternative xr:

    zr = HLDOWA(M̄r1, M̄r2,…, M̄rp)_v.

  • Step 4: Rank the alternatives {x1, x2,…, xq}.

    According to the comparisons for HLDs in Section 3 and Algorithm I, we rank the alternatives based on the expectations and variations of zr computed by Eqs. (4) and (5). The larger E(zr), the better the alternative xr. A compact sketch of this whole pipeline is given after this list.
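A compact sketch of Steps 1–4, reusing hldowa and hld_expectation from Section 3.3's sketch; the nested-list layout L[j][r][t] for expert j, alternative r and attribute t is our own illustration:

```python
def magdm_rank(L, w, v):
    """Aggregate expert HLD matrices over experts (Step 2), then over attributes
    (Step 3), and rank alternatives by expectation of the overall HLDs (Step 4)."""
    n, q, p = len(L), len(L[0]), len(L[0][0])
    overall = []
    for r in range(q):
        collective = [hldowa([L[j][r][t] for j in range(n)], w) for t in range(p)]
        overall.append(hldowa(collective, v))    # the overall value z_r
    ranking = sorted(range(q), key=lambda r: hld_expectation(overall[r]), reverse=True)
    return ranking  # alternative indices, best first (ties by variation not handled here)
```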

5.2. Illustrative example

Suppose that five experts, D = {d1, d2, d3, d4, d5}, are invited to provide professional evaluations of football teams. Four football teams, X = {x1, x2, x3, x4}, are considered and four attributes, A = {a1, a2, a3, a4}, are taken into account in the evaluation process. The information provided to the experts consists of each team's previous performances, so the experts {d1, d2, d3, d4, d5} provide their preferences over {x1, x2, x3, x4} in the form of LDs. As the expert team consists of experts from different fields, who may hesitate when evaluating due to their vague knowledge of the football teams, HLDs are adopted by the experts to elicit their preferences. The established linguistic term set is

S = {s0: very poor, s1: poor, s2: slightly poor, s3: average, s4: slightly good, s5: good, s6: very good}.
  • Step 1: The five experts' evaluation matrices for the alternatives xr (r = 1, 2, 3, 4) are listed in Tables 2–6.

  • Step 2: Obtain the collective evaluation L̄ = (M̄rt)4×4 for each alternative xr (r = 1, 2, 3, 4) with respect to attribute at (t = 1, 2, 3, 4). Without loss of generality, suppose that all the experts have equal weights. See Table 7.

  • Step 3: Calculate the overall value zr of each alternative xr (r = 1, 2, 3, 4). Without loss of generality, suppose v = (0.15, 0.3, 0.25, 0.3)T. See Table 8.

  • Step 4: Rank the alternatives {x1, x2, x3, x4} according to the expectations of zr (r = 1, 2, 3, 4). Based on Algorithm I, we obtain the transformed LDs zr* associated with zr. See Table 9.

L1 a1 a2 a3 a4
x1 {(s0, 0.2),
({s1, s2}, 0.2),
({s3, s4}, 0.3),
({s5, s6}, 0.3)}
{({s0, s1}, 0.1),
({s2, s3}, 0.2),
(s3, 0.2),
({s4, s5, s6}, 0.5)}
{({s0, s1}, 0.1),
(s2, 0.1),
({s3, s4}, 0.2),
({s4, s5, s6}, 0.6)}
{(s0, 0.2),
({s1, s2, s3}, 0.2),
({s4, s5, s6}, 0.6)}
x2 {({s0, s1}, 0.1),
({s2, s3}, 0.2),
({s3, s4, s5}, 0.4),
({s5, s6}, 0.3)}
{({s0, s1}, 0.2),
({s2, s3}, 0.2),
({s3, s4}, 0.3),
({s5, s6}, 0.3)}
{(s0, 0.1),
({s1, s2, s3}, 0.4),
({s4, s5}, 0.3),
({s5, s6}, 0.2)}
{({s0, s1, s2}, 0.4),
(s3, 0.2),
({s4, s5, s6}, 0.4)}
x3 {({s0, s1, s2}, 0.3),
({s3, s4}, 0.4),
({s5, s6}, 0.3)}
{(s0, 0.1),
({s1, s2, s3}, 0.3),
(s4, 0.3),
({s5, s6}, 0.3)}
{({s0, s1, s2}, 0.4),
({s3, s4, s5}, 0.4),
(s6, 0.2)}
{({s0, s1}, 0.2),
(s2, 0.1),
({s3, s4, s5, s6}, 0.7)}
x4 {({s0, s1}, 0.1),
({s2, s3, s4}, 0.4),
({s3, s4}, 0.2),
({s4, s5}, 0.2), (s6, 0.1)}
{({s0, s1, s2}, 0.2),
(s3, 0.1),
({s3, s4}, 0.3),
({s5, s6}, 0.4)}
{({s0, s1}, 0.3),
({s1, s2, s3}, 0.3),
({s4, s5, s6}, 0.4)}
{(s0, 0.1),
({s0, s1, s2}, 0.4),
({s3, s4}, 0.2),
({s5, s6}, 0.3)}
Table 2.

The linguistic preference L1 provided by d1.

L2 a1 a2 a3 a4
x1 {({s0, s1, s2}, 0.4),
({s3, s4, s5}, 0.3),
(s6, 0.3)}
{({s0, s1}, 0.1),
({s2, s3, s4}, 0.4),
({s5, s6}, 0.5)}
{({s0, s1}, 0.2),
({s2, s3, s4}, 0.3),
({s4, s5}, 0.2),
({s4, s5, s6}, 0.3)}
{(s0, 0.1), (s1, 0.1)
({s2, s3}, 0.2),
({s4, s5, s6}, 0.6)}
x2 {({s0, s1}, 0.1),
({s1, s2, s3}, 0.3),
({s3, s4}, 0.3),
({s5, s6}, 0.3)}
{({s0, s1, s2, s3}, 0.5),
({s3, s4, s5, s6}, 0.5)}
{(s0, 0.1),
({s1, s2, s3}, 0.4),
({s4, s5, s6}, 0.3),
({s5, s6}, 0.2)}
{({s0, s1, s2}, 0.3),
(s3, 0.2), (s4, 0.1),
({s4, s5, s6}, 0.4)}
x3 {(s1, 0.1),
({s2, s3, s4}, 0.4),
({s5, s6}, 0.3)}
{(s0, 0.1),
({s1, s2}, 0.3),
({s3, s4}, 0.3),
(s5, 0.2), (s6, 0.1)}
{({s0, s1, s2}, 0.4),
({s3, s4}, 0.3),
(s5, 0.1),
({s5, s6}, 0.2)}
{({s0, s1}, 0.2),
(s2, 0.1),
({s3, s4, s5}, 0.6),
({s5, s6}, 0.1)}
x4 {(s0, 0.1),
({s1, s2, s3}, 0.3),
({s3, s4}, 0.2),
({s4, s5}, 0.2), (s6, 0.2)}
{({s0, s1}, 0.1),
(s2, 0.1), (s3, 0.1),
({s3, s4}, 0.2),
({s4, s5, s6}, 0.5)}
{({s0, s1}, 0.2),
({s1, s2, s3}, 0.3),
({s4, s5, s6}, 0.3),
({s5, s6}, 0.2)}
{(s0, 0.1),
({s0, s1, s2}, 0.4),
({s2, s3, s4}, 0.3),
({s5, s6}, 0.2)}
Table 3.

The linguistic preference L2 provided by d2.

L3 a1 a2 a3 a4
x1 {(s0, 0.1),
(s1, 0.1), (s2, 0.1),
({s3, s4, s5}, 0.3),
({s5, s6}, 0.4)}
{({s0, s1}, 0.1),
(s1, 0.1), (s2, 0.1)
(s3, 0.2), ({s2, s3, s4}, 0.3),
({s5, s6}, 0.2)}
{({s0, s1}, 0.2),
({s1, s2, s3}, 0.3),
(s4, 0.2), (s5, 0.1),
(s6, 0.2)}
{(s0, 0.1), (s1, 0.1)
({s0, s1, s2}, 0.2),
(s3, 0.1),
({s4, s5, s6}, 0.5)}
x2 {({s0, s1, s2}, 0.2),
({s1, s2}, 0.2),
({s3, s4, s5}, 0.3),
({s5, s6}, 0.3)}
{({s0, s1}, 0.1),
({s1, s2, s3}, 0.3)
({s3, s4, s5}, 0.4),
({s5, s6}, 0.2)}
{(s0, 0.1), ({s0, s1}, 0.2),
({s1, s2, s3}, 0.3),
({s4, s5}, 0.3),
({s5, s6}, 0.1)}
{({s0, s1, s2}, 0.3),
(s2, 0.1), (s3, 0.2),
(s4, 0.1),
({s4, s5, s6}, 0.3)}
x3 {({s0, s1}, 0.2),
({s1, s2}, 0.1),
({s2, s3, s4}, 0.4),
({s4, s5, s6}, 0.3)}
{(s0, 0.1),
({s1, s2}, 0.2), (s2, 0.1),
({s3, s4}, 0.3),
({s4, s5}, 0.2), (s6, 0.1)}
{({s0, s1, s2}, 0.3),
({s3, s4}, 0.3),
({s4, s5}, 0.1),
({s5, s6}, 0.3)}
{({s0, s1}, 0.2),
(s2, 0.1), (s3, 0.1),
({s3, s4, s5}, 0.3),
(s4, 0.1), ({s5, s6}, 0.2)}
x4 {(s0, 0.1), ({s1, s2}, 0.2)
({s1, s2, s3}, 0.3),
({s3, s4, s5}, 0.3),
({s5, s6}, 0.1)}
{({s0, s1}, 0.1),
(s2, 0.1), (s3, 0.1),
({s2, s3, s4}, 0.3),
({s5, s6}, 0.4)}
{({s0, s1}, 0.2),
({s1, s2}, 0.2),
({s3, s4, s5, s6}, 0.4),
({s5, s6}, 0.1), (s6, 0.1)}
{(s0, 0.1), ({s0, s1}, 0.2),
({s0, s1, s2}, 0.3),
({s3, s4}, 0.2),
({s5, s6}, 0.2)}
Table 4.

The linguistic preference L3 provided by d3.

L4 a1 a2 a3 a4
x1 {(s0, 0.1), (s1, 0.1),
(s2, 0.1), (s3, 0.1),
({s4, s5, s6}, 0.6)}
{({s0, s1}, 0.1),
(s1, 0.1), (s2, 0.1)
(s3, 0.2), (s4, 0.1),
({s5, s6}, 0.4)}
{({s0, s1, s2}, 0.3),
({s2, s3}, 0.3),
(s4, 0.1), (s5, 0.2),
(s6, 0.1)}
{({s0, s1, s2}, 0.3),
(s3, 0.1),
({s4, s5, s6}, 0.6)}
x2 {({s0, s1, s2}, 0.3),
(s3, 0.2), (s4, 0.2),
(s5, 0.1), (s6, 0.2)}
{({s0, s1, s2}, 0.3),
({s3, s4}, 0.2),
(s5, 0.3), (s6, 0.2)}
{({s0, s1}, 0.2),
({s1, s2, s3}, 0.3),
(s4, 0.1), (s5, 0.2),
(s6, 0.2)}
{(s0, 0.1),
({s0, s1, s2}, 0.3),
(s2, 0.1), (s3, 0.2),
(s4, 0.1), ({s5, s6}, 0.2)}
x3 {({s0, s1}, 0.2),
(s2, 0.1), ({s2, s3}, 0.3),
(s4, 0.1), ({s5, s6}, 0.3)}
{({s0, s1}, 0.2),
({s2, s3}, 0.3),
(s3, 0.1), (s4, 0.1),
(s5, 0.1), ({s5, s6}, 0.2)}
{({s0, s1}, 0.2),
({s2, s3, s4}, 0.3),
(s5, 0.2), ({s5, s6}, 0.2),
(s6, 0.1)}
{(s0, 0.1), ({s1, s2}, 0.2),
({s3, s4, s5}, 0.3),
({s4, s5, s6}, 0.3), (s6, 0.1)}
x4 {(s0, 0.1), ({s1, s2}, 0.2)
(s3, 0.3), (s4, 0.1),
(s5, 0.2), (s6, 0.1)}
{(s0, 0.1),
({s1, s2, s3}, 0.3), (s4, 0.2),
(s5, 0.2), (s6, 0.2)}
{(s0, 0.1),
({s1, s2, s3, s4}, 0.6),
(s5, 0.2), (s6, 0.1)}
{({s0, s1}, 0.2),
({s2, s3, s4, s5}, 0.5),
(s6, 0.3)}
Table 5.

The linguistic preference L4 provided by d4.

L5 a1 a2 a3 a4
x1 {(s0, 0.1), (s1, 0.1)
(s2, 0.1), ({s3, s4}, 0.1),
({s4, s5, s6}, 0.6)}
{(s0, 0.1), (s1, 0.1),
(s2, 0.1), (s3, 0.2),
(s4, 0.2), ({s5, s6}, 0.3)}
{({s0, s1}, 0.2),
(s1, 0.1), (s2, 0.1)
(s3, 0.1), (s4, 0.1),
(s5, 0.1), ({s5, s6}, 0.3)}
{(s0, 0.1), (s1, 0.1),
(s2, 0.1), (s3, 0.2),
(s4, 0.2), (s5, 0.1),
({s5, s6}, 0.2)}
x2 {({s0, s1, s2}, 0.3),
(s2, 0.1), (s3, 0.2), (s4, 0.1),
(s5, 0.1), (s6, 0.2)}
{(s0, 0.1), ({s1, s2, s3}, 0.3),
(s3, 0.1), (s4, 0.1),
(s5, 0.2), (s6, 0.2)}
{(s0, 0.1), (s1, 0.2),
({s2, s3, s4}, 0.3),
(s4, 0.1), (s5, 0.1),
(s6, 0.2)}
{(s0, 0.1), (s1, 0.2),
(s2, 0.1), ({s3, s4, s5}, 0.3),
(s6, 0.3)}
x3 {({s0, s1}, 0.2),
(s2, 0.1), (s3, 0.3),
(s4, 0.1), ({s5, s6}, 0.3)}
{({s0, s1}, 0.2),
({s2, s3}, 0.3),
(s4, 0.1), (s5, 0.1),
({s4, s5, s6}, 0.3)}
{(s0, 0.2), ({s1, s2}, 0.2),
({s3, s4}, 0.3),
(s5, 0.1), (s6, 0.2)}
{(s0, 0.1), ({s1, s2, s3}, 0.3),
({s3, s4, s5}, 0.3),
({s4, s5, s6}, 0.3)}
x4 {(s0, 0.1), (s1, 0.1), (s2, 0.2)
(s3, 0.1), (s4, 0.2),
(s5, 0.2), (s6, 0.1)}
{(s0, 0.2),
({s1, s2, s3}, 0.3),
({s4, s5}, 0.2),
({s4, s5, s6}, 0.3)}
{(s0, 0.2),
({s1, s2, s3, s4, s5}, 0.6),
(s6, 0.2)}
{({s0, s1}, 0.2),
({s2, s3, s4, s5, s6}, 0.8)}
Table 6.

The linguistic preference L5 provided by d5.

L¯ a1 a2 a3 a4
x1 {(s0, 0.1),
({s0, s1, s2}, 0.08),
(s1, 0.06), ({s1, s2}, 0.04),
(s2, 0.06), (s3, 0.02),
({s3, s4}, 0.08),
({s3, s4, s5}, 0.12),
({s4, s5, s6}, 0.24),
({s5, s6}, 0.14), (s6, 0.06)}
{(s0, 0.02),
({s0, s1}, 0.08),
(s1, 0.06), (s2, 0.06),
({s2, s3}, 0.04),
({s2, s3, s4}, 0.14),
(s3, 0.16), (s4, 0.06),
({s4, s5, s6}, 0.1),
({s5, s6}, 0.28)}
{({s0, s1}, 0.14),
({s0, s1, s2}, 0.06), (s1, 0.02),
({s1, s2, s3}, 0.06), (s2, 0.04),
({s2, s3}, 0.06),
({s2, s3, s4}, 0.06),
(s3, 0.02), ({s3, s4}, 0.04),
(s4, 0.08), ({s4, s5}, 0.04),
({s4, s5, s6}, 0.18), (s5, 0.08),
({s5, s6}, 0.06), (s6, 0.06)}
{(s0, 0.1),
({s0, s1, s2}, 0.1),
(s1, 0.06),
({s1, s2, s3}, 0.04),
(s2, 0.02),
({s2, s3}, 0.04),
(s3, 0.08), (s4, 0.04),
({s4, s5, s6}, 0.46),
(s5, 0.02), ({s5, s6}, 0.04)}
x2 {({s0, s1}, 0.04),
({s0, s1, s2}, 0.16),
({s1, s2}, 0.04),
({s1, s2, s3}, 0.06),
(s2, 0.02), ({s2, s3}, 0.04),
(s3, 0.08),
({s3, s4}, 0.06),
({s3, s4, s5}, 0.14),
(s4, 0.06), (s5, 0.04),
({s5, s6}, 0.18), (s6, 0.08)}
{(s0, 0.02), ({s0, s1}, 0.06),
({s0, s1, s2}, 0.06),
({s0, s1, s2, s3}, 0.1),
({s1, s2, s3}, 0.12),
({s2, s3}, 0.04), (s3, 0.02),
({s3, s4}, 0.1),
({s3, s4, s5}, 0.08),
({s3, s4, s5, s6}, 0.1),
(s4, 0.02), (s5, 0.1),
({s5, s6}, 0.1), (s6, 0.08)}
{(s0, 0.12),
({s0, s1}, 0.04),
(s1, 0.04),
({s1, s2, s3}, 0.28),
({s2, s3, s4}, 0.06),
(s4, 0.04), ({s4, s5}, 0.12),
({s4, s5, s6}, 0.06),
(s5, 0.06),
({s5, s6}, 0.1), (s6, 0.08)}
{(s0, 0.04),
({s0, s1, s2}, 0.26),
(s1, 0.04), (s2, 0.06),
(s3, 0.16),
({s3, s4, s5}, 0.06),
(s4, 0.06),
({s4, s5, s6}, 0.22),
({s5, s6}, 0.04), (s6, 0.06)}
x3 {({s0, s1}, 0.16),
({s0, s1, s2}, 0.06), (s1, 0.02),
({s1, s2}, 0.02), (s2, 0.04),
({s2, s3}, 0.06),
({s2, s3, s4}, 0.16),
(s3, 0.06), ({s3, s4}, 0.08),
(s4, 0.04),
({s4, s5, s6}, 0.06),
({s5, s6}, 0.24)}
{(s0, 0.06), ({s0, s1}, 0.08),
({s1, s2}, 0.1),
({s1, s2, s3}, 0.06),
(s2, 0.02), ({s2, s3}, 0.12),
(s3, 0.02), ({s3, s4}, 0.12),
(s4, 0.1), ({s4, s5}, 0.04),
({s4, s5, s6}, 0.06),
(s5, 0.08), ({s5, s6}, 0.1),
(s6, 0.04)}
{(s0, 0.04),
({s0, s1}, 0.04),
({s0, s1, s2}, 0.22),
({s1, s2}, 0.04),
({s2, s3, s4}, 0.06),
({s3, s4}, 0.18),
({s3, s4, s5}, 0.08),
({s4, s5}, 0.02), (s5, 0.08),
({s5, s6}, 0.14), (s6, 0.1)}
{(s0, 0.04), ({s0, s1}, 0.12),
({s1, s2}, 0.04),
({s1, s2, s3}, 0.06),
(s2, 0.06), (s3, 0.02),
({s3, s4, s5}, 0.3),
({s3, s4, s5, s6}, 0.14),
(s4, 0.02),
({s4, s5, s6}, 0.12),
({s5, s6}, 0.06), (s6, 0.02)}
x4 {(s0, 0.08), ({s0, s1}, 0.02),
(s1, 0.02), ({s1, s2}, 0.08),
({s1, s2, s3}, 0.12),
(s2, 0.04), ({s2, s3, s4}, 0.08),
(s3, 0.08), ({s3, s4}, 0.08),
({s3, s4, s5}, 0.06),
(s4, 0.06), ({s4, s5}, 0.08),
(s5, 0.08), ({s5, s6}, 0.02),
(s6, 0.1)}
{(s0, 0.06), ({s0, s1}, 0.04),
({s0, s1, s2}, 0.04),
({s1, s2, s3}, 0.12),
(s2, 0.04),
({s2, s3, s4}, 0.06),
(s3, 0.06), ({s3, s4}, 0.1),
(s4, 0.04), ({s4, s5}, 0.04),
(s5, 0.04), ({s4, s5, s6}, 0.16),
({s5, s6}, 0.16), (s6, 0.04)}
{(s0, 0.06), ({s0, s1}, 0.14),
({s1, s2}, 0.04),
({s1, s2, s3}, 0.12),
({s1, s2, s3, s4}, 0.12),
({s1, s2, s3, s4, s5}, 0.12),
({s3, s4, s5, s6}, 0.08),
({s4, s5, s6}, 0.14),
(s5, 0.04), ({s5, s6}, 0.06),
(s6, 0.08)}
{(s0, 0.06),
({s0, s1}, 0.12),
({s0, s1, s2}, 0.22),
({s2, s3, s4}, 0.06),
({s2, s3, s4, s5}, 0.1),
({s2, s3, s4, s5, s6}, 0.16),
({s3, s4}, 0.08),
({s5, s6}, 0.14),
(s6, 0.06)}
Table 7.

The collective linguistic preference L¯ .

zr (r = 1,2,3,4)
x1 {(s0, 0.058), ({s0, s1, s2}, 0.067), ({s0, s1}, 0.054), (s1, 0.048), ({s1, s2}, 0.012),
({s1, s2, s3}, 0.028), (s2, 0.044), ({s2, s3}, 0.034), ({s2, s3, s4}, 0.039), (s3, 0.056),
({s3, s4}, 0.036), ({s3, s4, s5}, 0.036), (s4, 0.043), ({s4, s5}, 0.012),
({s4, s5, s6}, 0.256), (s5, 0.029), ({s5, s6}, 0.112), (s6, 0.036)}
x2 {(s0, 0.045), ({s0, s1}, 0.031), ({s0, s1, s2}, 0.135), ({s0, s1, s2, s3}, 0.015),
(s1, 0.022), ({s1, s2}, 0.012), ({s1, s2, s3}, 0.106), (s2, 0.024), ({s2, s3}, 0.018),
({s2, s3, s4}, 0.015), (s3, 0.075), ({s3, s4}, 0.033), ({s3, s4, s5}, 0.072),
({s3, s4, s5, s6}, 0.015), (s4, 0.049), ({s4, s5}, 0.03), ({s4, s5, s6}, 0.081),
(s5, 0.042), ({s5, s6}, 0.106), (s6, 0.074)}
x3 {(s0, 0.033), ({s0, s1}, 0.098), ({s0, s1, s2}, 0.084), (s1, 0.006), ({s1, s2}, 0.049),
({s1, s2, s3}, 0.024), (s2, 0.026), ({s2, s3}, 0.048), ({s2, s3, s4}, 0.066), (s3, 0.026),
({s3, s4}, 0.108), ({s3, s4, s5}, 0.069), ({s3, s4, s5, s6}, 0.021), (s4, 0.04),
({s4, s5}, 0.016), ({s4, s5, s6}, 0.051), (s5, 0.044), ({s5, s6}, 0.148), (s6, 0.043)}
x4 {(s0, 0.066), ({s0, s1}, 0.083), ({s0, s1, s2}, 0.072), (s1, 0.006), ({s1, s2}, 0.034),
({s1, s2, s3}, 0.084), ({s1, s2, s3, s4}, 0.03), ({s1, s2, s3, s4, s5}, 0.03), (s2, 0.018),
({s2, s3, s4}, 0.051), ({s2, s3, s4, s5}, 0.03), ({s2, s3, s4, s5, s6}, 0.048), (s3, 0.033),
({s3, s4}, 0.063), ({s3, s4, s5}, 0.018), ({s3, s4, s5, s6}, 0.02), (s4, 0.024),
({s4, s5}, 0.03), ({s4, s5, s6}, 0.059), (s5, 0.04), ({s5, s6}, 0.087), (s6, 0.074)}
Table 8.

The overall value of alternative xr (r = 1, 2, 3, 4).

zr*(r=1,2,3,4)
x1 {(s0, 0.1073), (s1, 0.1127), (s2, 0.1117), (s3, 0.1253), (s4, 0.1773), (s5, 0.1883), (s6, 0.1773)}
x2 {(s0, 0.10925), (s1, 0.1276), (s2, 0.1281), (s3, 0.1723), (s4, 0.14025), (s5, 0.16475), (s6, 0.15775)}
x3 {(s0, 0.11), (s1, 0.1155), (s2, 0.1325), (s3, 0.16225), (s4, 0.16925), (s5, 0.17125), (s6, 0.13925)}
x4 {(s0, 0.1315), (s1, 0.13), (s2, 0.1346), (s3, 0.1511), (s4, 0.1488), (s5, 0.1523), (s6, 0.1517)}
Table 9.

The transformed LDs zr* associated with zr (r = 1,2,3,4).

As E(z1) > E(z3) > E(z2) > E(z4), the ranking of the alternatives is x1 ≻ x3 ≻ x2 ≻ x4.

6. Conclusions

In this paper, we propose the concept of the HLD to model decision makers’ linguistic expression preferences based on the use of HFLTSs and LDs, and study the proposed HLD from the following aspects.

(1) The transformation between HLDs and LDs is presented, and basic operations, including comparison and aggregation operations, are proposed to perform on HLDs.

(2) Comparisons among several linguistic expressions, namely the PDHFLTS, CLD, ILD, interval LD and HLD, are discussed to show that the HLD is their generalization.

(3) The use and behavior mechanism of the HLD in MAGDM is discussed.

GDM with linguistic expressions relates not only to mathematical models but also to philosophical issues. Therefore, it would be interesting to psychologically investigate decision makers' behaviors when using different linguistic expressions in future research.

Acknowledgements

This work was supported by the grants (Nos. 71201122, 71571124) from NSF of China.

References

6. YC Dong, YT Liu, HM Liang, F Chiclana and E Herrera-Viedma, Strategic weight manipulation in multiple attribute decision making, Omega, in press.
10. WT Guo, VN Huynh and S Sriboonchitta, A proportional linguistic distribution based model for multiple attribute decision making under linguistic uncertainty, Annals of Operations Research, in press.
17. WQ Liu, YC Dong, F Chiclana, FJ Cabrerizo and E Herrera-Viedma, Group decision-making based on heterogeneous preference relations with self-confidence, Fuzzy Optimization and Decision Making, in press.
21. L Martínez, RM Rodríguez and F Herrera, The 2-tuple linguistic model: Computing with words in decision making, Springer, New York, 2015.
27. JA Morente-Molinera, J Mezei, C Carlsson and E Herrera-Viedma, Improving supervised learning classification methods using multi-granular linguistic modelling and fuzzy entropy, IEEE Transactions on Fuzzy Systems, in press.
38. HJ Zhang, YC Dong and X Chen, The 2-rank consensus reaching model in the multigranular linguistic multiple-attribute group decision-making, IEEE Transactions on Systems, Man, and Cybernetics: Systems, in press.
41. Z Zhang, CH Guo and L Martínez, Managing multi-granular linguistic distribution assessments in large-scale multi-attribute group decision making, IEEE Transactions on Systems, Man, and Cybernetics: Systems, in press.