International Journal of Computational Intelligence Systems

Volume 14, Issue 1, 2021, Pages 1229 - 1241

Multi-Attribute Decision-Making Method Based Distance and COPRAS Method with Probabilistic Hesitant Fuzzy Environment

Authors
Haifeng Song, Zi-chun Chen*
School of Sciences, Xihua University, Chengdu, Sichuan, 610039, China
*Corresponding author. Email: czclph@163.com
Received 11 January 2021, Accepted 14 March 2021, Available Online 29 March 2021.
DOI
10.2991/ijcis.d.210318.001
Keywords
Multi-attributive decision-making; Probabilistic hesitant fuzzy set; New distance measures; COPRAS method
Abstract

As an extension of the hesitant fuzzy set, the probabilistic hesitant fuzzy set (PHFS) can more accurately express the initial decision information given by experts, so decision methods based on PHFSs are more faithful and reliable. In this paper, a multi-attribute decision-making (MADM) method is proposed under the probabilistic hesitant fuzzy environment, which is based on new distance measures of probabilistic hesitant fuzzy elements (PHFEs) and the COmplex PRoportional ASsessment (COPRAS) method. Firstly, the problems of some existing distances are analyzed, and we propose some new distance measures, including a new Hamming distance, a new Euclidean distance and a new generalized distance, under the probabilistic hesitant fuzzy environment. Secondly, a maximizing deviation method based on the new Hamming distance measure is proposed to obtain the attribute weights from probabilistic hesitant fuzzy information. Then, the COPRAS method is extended to solve MADM problems under the probabilistic hesitant fuzzy environment. Finally, through comparison with other methods, an example is given to demonstrate the effectiveness of the proposed method.

Copyright
© 2021 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Multi-attribute decision-making (MADM) problems are becoming more and more common in human social activities. For example, MADM can be applied to site selection assessment [1], hotel selection [2], passenger demand and service quality of high-speed rail [3,4], sustainable supplier selection [5], and so on. Decision makers (DMs) solve MADM problems based on their own values, preferences and knowledge. However, when DMs evaluate MADM problems, they often cannot give a specific evaluation value but only a vague range, due to the uncertainty and hesitation involved in expressing information. The hesitant fuzzy set (HFS) proposed by Torra [6] solves this problem well, since it allows experts to take several possible values as the membership degree. After that, many scholars at home and abroad have made a lot of achievements in HFS theory [7-15]. An HFS allows multiple membership degrees in an element, but these membership degrees have the same importance by default. However, in most cases, because of the preferences and the number of experts, different membership degrees actually carry different importance. In order to solve this problem, the probabilistic hesitant fuzzy set (PHFS) [16] was introduced, which attaches a probability to each membership degree and thus expresses its importance. Compared with the fuzzy set and its other extended forms, the PHFS is a practical and effective tool that can improve the rationality and credibility of the result. In recent years, scholars have done some work to extend PHFS theory [17-23]. It is worth noting that these studies require the sum of the probabilities of all membership degrees in the same probabilistic hesitant fuzzy element (PHFE) to be one. In fact, in many practical decision-making problems, this requirement usually cannot be met. Because of this, Zhang et al. [24] relaxed the conditions on the probabilistic information and perfected the definition of PHFS.

For an MADM problem, alternatives are evaluated by their attributes, so attribute weights become an important part of the ranking of alternatives. There are several ways to obtain attribute weights, and they can be divided into three categories [25]: objective, subjective, and integrated subjective-objective methods. The objective methods [26-29] determine attribute weights based on objective decision information, mainly including the maximizing deviation method and the entropy method. The subjective methods [30-35] determine the attribute weights according to the DMs' subjective preference information, mainly including the eigenvector method, the weighted least squares method and the card method. The synthetic subjective-objective methods [36-38] determine attribute weights by combining the subjective preference information provided by the DMs with the objective decision information. In addition, there are still many new challenges in obtaining attribute weights. Up to now, little research has focused on extracting the weights of evaluation attributes in the probabilistic hesitant fuzzy environment. In this sense, it is necessary to develop an effective method to obtain attribute weights for solving probabilistic hesitant MADM problems.

In the decision-making process, distance is one of the basic tools to measure the difference or similarity between PHFEs. Differences between PHFEs include differences in length, membership degrees and probabilities. In the probabilistic hesitant fuzzy environment, most existing distances have some disadvantages. For example, the distance proposed by Wu and Xu [39] does not satisfy the reflexivity required by the definition of a distance measure; the Hamming distance proposed by Zhang et al. [40] and the Euclidean distance proposed by Ding et al. [41] do not consider the differences in length and probability of PHFEs; and the Euclidean distance proposed by Li et al. [21] is based on normalized PHFEs rather than the original PHFEs, which has a certain influence on the decision result. In short, the research on distance measures in the probabilistic hesitant fuzzy environment is not sufficient and needs to be improved. Therefore, it is necessary to establish effective distances that make up for these shortcomings, so as to solve probabilistic hesitant fuzzy decision problems.

The COmplex PRoportional ASsessment (COPRAS) is one of the latest MADM methods. The COPRAS method is a compensation method, which is easy to calculate. Besides, it converts qualitative attributes into quantitative attributes. It can calculate the maximizing and minimizing index values separately in the evaluation process, and considers the influence of maximization and minimization of attributes on the evaluation results respectively. Using the COPRAS approach, many researchers have achieved notable results. Manik Chandra Das et al. [42] used a comprehensive model combining fuzzy Analytic Hierarchy Process and fuzzy COPRAS to evaluate and rank the preferences of stakeholders in seven Indian Institutes of Technology. Zheng et al. [43] established an indicator system for evaluating chronic obstructive pulmonary disease severity from systems engineering and practical clinical experience, and proposed a hesitant fuzzy linguistic COPRAS method to solve decision-making problems in a fuzzy linguistic environment. Harish Garg and Nancy [44] proposed two algorithms based on the COPRAS method and aggregation operators to solve decision-making problems with possibility linguistic single-valued neutrosophic set information. Harsh S. Dhiman and Dipankar Deb [45] evaluated the rankings produced by fuzzy COPRAS and the fuzzy Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) and applied them to select the best sites among three hybrid wind farms. Although the COPRAS method has achieved good results, there is hardly any extension of it for MADM under the probabilistic hesitant fuzzy environment. Based on this, we propose a COPRAS method for the probabilistic hesitant fuzzy environment.

In this study, in order to overcome the shortcomings of distance in the literature, the focus of this research is to introduce several new distances to obtain attribute weights. Meanwhile, the COPRAS approach is extended into probabilistic hesitant fuzzy environments. The main motivations and contributions of our work are summarized as follows.

  1. In order to make up for the existing distances [21,39-41] that do not take into account the differences in length, membership degrees and probability between PHFEs, this study proposes some new distances for PHFEs.

  2. In order to obtain attribute weights, the newly proposed distance is applied to the maximizing deviation method to establish a nonlinear programming model under the probabilistic hesitant fuzzy environment.

  3. In order to make up for the shortcoming that the traditional COPRAS method cannot handle MADM problems with PHFE evaluation information, the COPRAS method is extended to the probabilistic hesitant fuzzy environment.

The structure of this paper is as follows. Section 2 reviews the basic definitions and operations of HFSs and PHFSs. In Section 3, the existing distance measures of PHFEs under the probabilistic hesitant fuzzy environment are discussed, and new distance measures are proposed. In Section 4, a new maximizing deviation method based on the new distance measure is presented in the probabilistic hesitant fuzzy environment, and the extended COPRAS method is proposed to solve the probabilistic hesitant MADM problem. In Section 5, an example of energy selection is used to prove the effectiveness of the proposed method. Finally, the paper is summarized in Section 6.

2. PRELIMINARIES

In this section, we review some basic definitions of HFSs and PHFSs.

2.1. Hesitant Fuzzy Sets

Definition 1.

[6] Let X be a finite universe of discourse. An HFS A on X is defined as:

$$A=\{\langle x, h_A(x)\rangle \mid x\in X\},$$
where $h_A(x)$ is a set of values in $[0,1]$, denoting the possible membership degrees of the element $x\in X$ to the set $A$. Xia and Xu [46] called $h=h_A(x)$ a hesitant fuzzy element (HFE).

Definition 2.

[46] Given three HFEs h, h1 and h2, and a positive number ε, it holds that:

  1. $h^{\varepsilon}=\bigcup_{\varsigma\in h}\{\varsigma^{\varepsilon}\}$;

  2. $\varepsilon h=\bigcup_{\varsigma\in h}\{1-(1-\varsigma)^{\varepsilon}\}$;

  3. $h_1\oplus h_2=\bigcup_{\varsigma_1\in h_1,\varsigma_2\in h_2}\{\varsigma_1+\varsigma_2-\varsigma_1\varsigma_2\}$;

  4. $h_1\otimes h_2=\bigcup_{\varsigma_1\in h_1,\varsigma_2\in h_2}\{\varsigma_1\varsigma_2\}$.

Definition 3.

[46] For a HFE h, the score function of h can be defined as:

$$S(h)=\frac{1}{\iota_h}\sum_{\varsigma\in h}\varsigma,$$
where $\iota_h$ is the number of values $\varsigma$ in $h$.

Definition 4.

[47] For a HFE h, we define the deviation degree:

$$DF(h)=\left[\frac{1}{\iota_h}\sum_{\varsigma\in h}\big(\varsigma-S(h)\big)^{2}\right]^{\frac{1}{2}}$$

Definition 5.

[47] Let h1 and h2 be two HFEs, S(h1) and S(h2) be the scores of h1 and h2, respectively, and D(h1) and D(h2) be the deviation degrees of h1 and h2, respectively; then:

  1. If S(h1)<S(h2), then h1<h2;

  2. If S(h1)=S(h2), then

    • If D(h1)=D(h2), then h1=h2;

    • If D(h1)>D(h2), then h1<h2;

    • If D(h1)<D(h2), then h1>h2.
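The following is a minimal Python sketch (not from the paper) of the HFE operations and the score/deviation-based comparison of Definitions 2-5; an HFE is modelled simply as a list of membership degrees in [0, 1].

```python
# Sketch of Definitions 2-5 for hesitant fuzzy elements (HFEs).
from itertools import product

def hfe_power(h, eps):              # h^eps, Definition 2(1)
    return [v ** eps for v in h]

def hfe_scalar(eps, h):             # eps * h, Definition 2(2)
    return [1 - (1 - v) ** eps for v in h]

def hfe_sum(h1, h2):                # h1 (+) h2, Definition 2(3)
    return [a + b - a * b for a, b in product(h1, h2)]

def hfe_product(h1, h2):            # h1 (x) h2, Definition 2(4)
    return [a * b for a, b in product(h1, h2)]

def score(h):                       # Definition 3
    return sum(h) / len(h)

def deviation(h):                   # Definition 4
    s = score(h)
    return (sum((v - s) ** 2 for v in h) / len(h)) ** 0.5

def compare(h1, h2):                # Definition 5: -1 means h1 < h2, 1 means h1 > h2
    if score(h1) != score(h2):
        return -1 if score(h1) < score(h2) else 1
    if deviation(h1) == deviation(h2):
        return 0
    return -1 if deviation(h1) > deviation(h2) else 1
```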

2.2. Probabilistic Hesitant Fuzzy Sets

Definition 6.

[16,24] Suppose that $X=\{x_1,x_2,\ldots,x_n\}$ is a nonempty fixed set; then a PHFS H on X is defined as:

$$H=\{\langle x, h_x(p)\rangle \mid x\in X\},$$
where $h_x(p)$ is a set of values in $[0,1]$: $h_x$ denotes the possible membership degrees of the element $x\in X$ to the set $H$, and $p$ is a set of values in $[0,1]$ representing the probabilities associated with $h_x$.

A PHFE can be described as $h(p)=h_x(p)=\{\varsigma^{\tau}(p^{\tau}),\ \tau=1,2,\ldots,\iota,\ \sum_{\tau=1}^{\iota}p^{\tau}\leq 1\}$,

where $\varsigma^{\tau}$ denotes a possible membership degree of the PHFE $h(p)$, $p^{\tau}$ represents the probability associated with $\varsigma^{\tau}$, and $\iota$ is the number of all $\varsigma^{\tau}(p^{\tau})$ in $h(p)$.

Definition 7.

[24,48] Given three PHFEs h(p), h1(p) and h2(p), and a positive number ε, then

  1. $h(p)^{\varepsilon}=\bigcup_{\varsigma^{\tau}\in h}\{(\varsigma^{\tau})^{\varepsilon}(p^{\tau})\}$;

  2. $\varepsilon h(p)=\bigcup_{\varsigma^{\tau}\in h}\{[1-(1-\varsigma^{\tau})^{\varepsilon}](p^{\tau})\}$;

  3. $h_1(p)\oplus h_2(p)=\bigcup_{\varsigma_1^{\tau}\in h_1,\varsigma_2^{\tau}\in h_2}\Big\{[\varsigma_1^{\tau}+\varsigma_2^{\tau}-\varsigma_1^{\tau}\varsigma_2^{\tau}]\Big((p_1^{\tau}+p_2^{\tau})\big/\sum_{\tau=1}^{\iota}(p_1^{\tau}+p_2^{\tau})\Big)\Big\}$;

  4. $h_1(p)\otimes h_2(p)=\bigcup_{\varsigma_1^{\tau}\in h_1,\varsigma_2^{\tau}\in h_2}\Big\{[\varsigma_1^{\tau}\varsigma_2^{\tau}]\Big((p_1^{\tau}+p_2^{\tau})\big/\sum_{\tau=1}^{\iota}(p_1^{\tau}+p_2^{\tau})\Big)\Big\}$.

Definition 8.

[24] Let h(p) be a PHFE, the score function of h(p) can be defined as:

$$S(h(p))=\sum_{\tau=1}^{\iota}\varsigma^{\tau}p^{\tau}\Big/\sum_{\tau=1}^{\iota}p^{\tau}.$$

Definition 9.

[24] For a PHFE h(p), we define the deviation degree:

$$DF(h(p))=\sum_{\tau=1}^{\iota}\Big(\big(\varsigma^{\tau}-S(h(p))\big)p^{\tau}\Big)^{2}\Big/\sum_{\tau=1}^{\iota}p^{\tau}.$$

Definition 10.

[24] Let h1(p) and h2(p) be two PHFEs, S(h1(p)) and S(h2(p)) be the scores of h1(p) and h2(p), respectively, and D(h1(p)) and D(h2(p)) be the deviation degrees of h1(p) and h2(p), respectively; then

  1. If S(h1(p))<S(h2(p)), then h1(p)<h2(p);

  2. If S(h1(p))=S(h2(p)), then

    • If D(h1(p))=D(h2(p)), then h1(p)=h2(p);

    • If D(h1(p))>D(h2(p)), then h1(p)<h2(p);

    • If D(h1(p))<D(h2(p)), then h1(p)>h2(p).
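A minimal Python sketch of the PHFE score and deviation degree and the resulting comparison rule of Definitions 8-10 follows; a PHFE is stored as a list of (membership degree, probability) pairs, and the deviation degree follows Definition 9 as written above.

```python
# Sketch of Definitions 8-10 for probabilistic hesitant fuzzy elements (PHFEs).
def phfe_score(h):
    return sum(v * p for v, p in h) / sum(p for _, p in h)

def phfe_deviation(h):
    s = phfe_score(h)
    return sum(((v - s) * p) ** 2 for v, p in h) / sum(p for _, p in h)

def phfe_compare(h1, h2):           # -1: h1 < h2, 0: equal, 1: h1 > h2
    if phfe_score(h1) != phfe_score(h2):
        return -1 if phfe_score(h1) < phfe_score(h2) else 1
    d1, d2 = phfe_deviation(h1), phfe_deviation(h2)
    if d1 == d2:
        return 0
    return -1 if d1 > d2 else 1

# e.g. phfe_score([(0.3, 0.1), (0.8, 0.8)]) = (0.03 + 0.64) / 0.9 ≈ 0.744
```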

Definition 11.

[48] Given a collection of PHFEs hi(p)(i=1,2,,n).

  1. An adjusted probabilistic hesitant fuzzy weighted averaging (APHFWA) operator is

    $$\mathrm{APHFWA}(h_1(p),h_2(p),\ldots,h_n(p))=\bigoplus_{i=1}^{n}\big(W_i h_i(p)\big)=\bigcup_{\tau=1}^{\iota}\left\{\left[1-\prod_{i=1}^{n}(1-\varsigma_i^{\tau})^{W_i}\right]\left(\sum_{i=1}^{n}p_i^{\tau}\Big/\sum_{\tau=1}^{\iota}\sum_{i=1}^{n}p_i^{\tau}\right)\right\}$$

  2. An adjusted probabilistic hesitant fuzzy weighted geometric (APHFWG) operator is

    $$\mathrm{APHFWG}(h_1(p),h_2(p),\ldots,h_n(p))=\bigotimes_{i=1}^{n}\big(h_i^{W_i}(p)\big)=\bigcup_{\tau=1}^{\iota}\left\{\prod_{i=1}^{n}(\varsigma_i^{\tau})^{W_i}\left(\sum_{i=1}^{n}p_i^{\tau}\Big/\sum_{\tau=1}^{\iota}\sum_{i=1}^{n}p_i^{\tau}\right)\right\}$$
    where $W_i$ is the weight of $h_i(p)$ and $\sum_{i=1}^{n}W_i=1$.
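Below is a minimal Python sketch (ours, not the authors' code) of the APHFWA and APHFWG operators; it assumes the PHFEs have already been adjusted to a common length so that the τth entries of all hi(p) can be combined position by position.

```python
# Sketch of the adjusted weighted averaging/geometric operators of Definition 11.
from math import prod

def aphfwa(phfes, weights):
    """phfes: n adjusted PHFEs, each a list of iota (value, prob) pairs."""
    iota = len(phfes[0])
    total_p = sum(h[t][1] for h in phfes for t in range(iota))
    return [(1 - prod((1 - h[t][0]) ** w for h, w in zip(phfes, weights)),
             sum(h[t][1] for h in phfes) / total_p)
            for t in range(iota)]

def aphfwg(phfes, weights):
    iota = len(phfes[0])
    total_p = sum(h[t][1] for h in phfes for t in range(iota))
    return [(prod(h[t][0] ** w for h, w in zip(phfes, weights)),
             sum(h[t][1] for h in phfes) / total_p)
            for t in range(iota)]
```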

3. THE EXISTING DISTANCE MEASURES AND THE NEW DISTANCE MEASURES UNDER PHFEs

In this section, when a distance measure is used, it is generally required that the numbers of elements in the two PHFEs being compared are equal. Therefore, elements (the maximum, minimum or median value of the membership degrees in the PHFE) are added to the PHFE with fewer elements, so that the numbers of elements in the PHFEs become the same. The value added to a PHFE reflects whether the DM's attitude to risk is positive or negative. In this paper, we choose to add the median value to the shorter PHFE so that the numbers of elements of the PHFEs are the same, and the probability of each added value is 0.

Distance is one of the basic tools to measure the difference or similarity between PHFEs. It can be widely used in MADM. The existing distance measures of PHFE are analyzed, and the new distance measures are proposed.

3.1. The Existing Distance Measures Under PHFEs

Based on the definition of distance measure, this section analyzes the existing distance measures of PHFEs.

Definition 12.

[41,49] Let $H=\{\langle x,h_x(p)\rangle\mid x\in X\}$ be a PHFS and $h$, $h_1$ and $h_2$ be three PHFEs on $H$; then the distance measure between $h_1$ and $h_2$ is defined as $d(h_1,h_2)$, which satisfies the following properties:

  1. $0\leq d(h_1,h_2)\leq 1$

  2. d(h1,h2)=0 if and only if h1=h2

  3. d(h1,h2)=d(h2,h1)

  4. $d(h_1,h)+d(h,h_2)\geq d(h_1,h_2)$

The existing distance measures of PHFEs are as follows.

Wu and Xu [39] proposed the distance between h1(p) and h2(p) as follows:

$$d_h(h_1(p),h_2(p))=\sum_{\tau_1=1}^{\iota_1}\sum_{\tau_2=1}^{\iota_2}\left|\varsigma_1^{\tau_1}-\varsigma_2^{\tau_2}\right|p_1^{\tau_1}p_2^{\tau_2}\tag{1}$$
where $\iota_i$ is the number of elements in $h_i(p)$ and $\varsigma_i^{\tau_i}(p_i^{\tau_i})$ is a value in the PHFE $h_i(p)$, $i=1,2$.

For two PHFEs h1(p) and h2(p), the distance measure d(h1(p),h2(p)) between them should satisfy the following properties: (1) Nonnegativity: $0\leq d(h_1(p),h_2(p))\leq 1$; (2) Reflexivity: $d(h_1(p),h_2(p))=0$ if and only if $h_1(p)=h_2(p)$; (3) Commutativity: $d(h_1(p),h_2(p))=d(h_2(p),h_1(p))$; and (4) Triangle inequality: $d(h_1(p),h_2(p))+d(h_2(p),h_3(p))\geq d(h_1(p),h_3(p))$. Unfortunately, the distance of Eq. (1) does not satisfy reflexivity. Consider the following example.

Example 1.

Let $h_1(p)=h_2(p)=\{0.4(\tfrac{1}{3}),0.85(\tfrac{1}{3})\}$ be two equal PHFEs. The distance between them computed by Eq. (1) is $d(h_1(p),h_2(p))=0.1$. Obviously, reflexivity fails because $d(h_1(p),h_2(p))=0.1\neq 0$.
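A quick numeric check of Example 1 (a sketch, not the authors' code): applying the distance of Eq. (1) to two identical PHFEs does not return zero, so reflexivity fails.

```python
# Eq. (1): a PHFE is represented as a list of (value, probability) pairs.
def d_wu_xu(h1, h2):
    return sum(abs(v1 - v2) * p1 * p2 for v1, p1 in h1 for v2, p2 in h2)

h = [(0.4, 1 / 3), (0.85, 1 / 3)]
print(d_wu_xu(h, h))   # 0.1 (up to floating point), although h equals itself
```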

Zhang et al. [40] defined the Hamming distance between h1(p) and h2(p) as follows:

$$d_{hg}(h_1(p),h_2(p))=\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|\tag{2}$$
where $\iota_1$ and $\iota_2$ are the numbers of elements in $h_1(p)$ and $h_2(p)$ with $\iota_1=\iota_2=\iota$; $\varsigma_1^{\tau}(p_1^{\tau})$ is the $\tau$th largest value in $h_1(p)$ and $\varsigma_2^{\tau}(p_2^{\tau})$ is the $\tau$th largest value in $h_2(p)$.

Ding et al. [41] defined a Euclidean distance: let h1(p) and h2(p) be two PHFEs; the Euclidean distance between them was defined as follows:

$$d_{E}(h_1(p),h_2(p))=\left(\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left(p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right)^{2}\right)^{\frac{1}{2}}\tag{3}$$
where $\iota_i$ is the number of elements in $h_i(p)$, $i=1,2$, with $\iota_1=\iota_2=\iota$; $\varsigma_1^{\tau}(p_1^{\tau})$ and $\varsigma_2^{\tau}(p_2^{\tau})$ are the $\tau$th lowest values in $h_1(p)$ and $h_2(p)$.

Li et al. [21] defined a distance between h1(p) and h2(p); their Euclidean distance was developed as follows:

$$d_{EN}(h_1(p),h_2(p))=\sqrt{\sum_{\tau=1}^{\iota}\left(\varsigma_1^{\tau}-\varsigma_2^{\tau}\right)^{2}p^{\tau}},\quad p_1^{\tau}=p_2^{\tau}=p^{\tau},\ \iota=\iota_1=\iota_2\tag{4}$$
where $\varsigma_1^{\tau}$ and $\varsigma_2^{\tau}$ are the $\tau$th values in $h_1(p)$ and $h_2(p)$, $p_i^{\tau}$ is the probability associated with $\varsigma_i^{\tau}$, and $\iota_i$ is the number of elements in $h_i(p)$, $i=1,2$.

The distance proposed by Li et al. [21] is applied after normalization of the PHFEs, so it does not act on the original PHFEs. Since PHFEs are different before and after normalization, the resulting distance will also be different.

Differences among PHFEs include differences in their lengths, differences in their values, and differences in their probabilities. Therefore, in order to study the differences among PHFEs, we should consider their membership degrees, their lengths and their probabilities; otherwise, distance measures will lead to unreasonable results. However, the distance measures proposed in the above literature do not take into account the effect of the PHFEs' lengths or the gap between the probability sum and 1.

Example 2.

Suppose that there are two patterns, which are represented by the PHFEs h1(p)={0.55(0.3),0.8(0.3)} and h2(p)={0.45(0.2),0.55(0.2),0.65(0.1),0.75(0.1)}. Now suppose there is a sample to recognize, which is represented by the PHFE h(p)={0.5(0.3),0.65(0.2)}. According to the principle of the minimum distance measure between PHFEs,

$$d(h_0(p),h(p))=\min\{d(h_1(p),h(p)),\ d(h_2(p),h(p))\}$$

Then we can decide that the sample h(p) belongs to the pattern h0(p).

We can see that the difference in membership degrees and probability values between h(p) and h1(p) and the difference between h(p) and h2(p) are almost the same. However, the number of elements of h(p) is the same as that of h1(p) and their probability distributions are similar, while both differ from those of h2(p). This means that h(p) reflects roughly the same hesitancy as h1(p), but not the same hesitancy as h2(p). It is then easy to understand that h(p) should belong to pattern h1(p). However, applying the distance measures of Eqs. (2) and (3) gives dhg(h(p),h1(p))=0.0625, dhg(h(p),h2(p))=0.055, dE(h(p),h1(p))=0.0785 and dE(h(p),h2(p))=0.0588, so h(p) would be assigned to the pattern h2(p). This is in stark contrast to our intuition.

Therefore, we believe that it is necessary to further consider the distance measures between PHFEs. In the following, we propose some new distances that overcome the above shortcomings by considering the hesitancy of each PHFE.

3.2. Proposed the New Distance Measures Under PHFEs

Before coming up with the new distance measures, we still have some work to do. In order to fully study the differences between PHFEs, we should consider the differences of membership degrees, probability and their lengths. Based on this, we have the following definition.

Definition 13.

Let $h(p)=\{\varsigma^{\tau}(p^{\tau}),\ \tau=1,2,\ldots,\iota,\ \sum_{\tau=1}^{\iota}p^{\tau}\leq 1\}$ be a PHFE; then we define:

$$Z(h(p))=2-\frac{4}{2+\left(1-\frac{1}{\iota}\right)+\left(1-\sum_{\tau=1}^{\iota}p^{\tau}\right)}$$
where $Z(h(p))\in[0,1]$ is the overall hesitancy of the PHFE $h(p)$, $\iota\ (\iota\geq 1)$ is the length of $h(p)$, and $p^{\tau}\in[0,1]$ is the probability associated with $\varsigma^{\tau}$. The term $1-\frac{1}{\iota}$ reflects the hesitant degree of $h(p)$: the more elements in a PHFE, the greater the hesitancy. The term $1-\sum_{\tau=1}^{\iota}p^{\tau}$ is the incompleteness of the probability information of the PHFE $h(p)$; this incompleteness also reflects the hesitant degree of the PHFE, since the higher the total probability, the smaller the incompleteness and the smaller the hesitancy. When $Z(h(p))=0$, we get $\iota=1$ and $p=1$; in this case, we use the distance measure defined by Eq. (2), so the case $Z(h(p))=0$ is not discussed further.

Based on the above definition, some new distance measures under PHFEs are proposed.

Definition 14.

Let h1(p) and h2(p) be two PHFEs; the new generalized distance measure between them is defined as follows:

$$d_{ng}(h_1(p),h_2(p))=\left\{\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|^{\lambda}+\left|Z(h_1(p))-Z(h_2(p))\right|^{\lambda}\right]\right\}^{\frac{1}{\lambda}}\tag{5}$$

When $\lambda=1$, the new Hamming distance between h1(p) and h2(p) is defined as:

$$d_{nh}(h_1(p),h_2(p))=\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|+\left|Z(h_1(p))-Z(h_2(p))\right|\right]\tag{6}$$

When $\lambda=2$, the new Euclidean distance between h1(p) and h2(p) is defined as:

$$d_{ne}(h_1(p),h_2(p))=\left\{\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|^{2}+\left|Z(h_1(p))-Z(h_2(p))\right|^{2}\right]\right\}^{\frac{1}{2}}\tag{7}$$
where $Z(h_i(p))=2-\dfrac{4}{2+\left(1-\frac{1}{\iota_i}\right)+\left(1-\sum_{\tau_i=1}^{\iota_i}p_i^{\tau_i}\right)}$, $\lambda>0$, $\iota_1$ and $\iota_2$ are the lengths of $h_1(p)$ and $h_2(p)$, $\iota=\max\{\iota_1,\iota_2\}$, $\varsigma_i^{\tau}(p_i^{\tau})$ is the $\tau$th lowest value in $h_i(p)$, and $p_i^{\tau}$ is the probability associated with $\varsigma_i^{\tau}$, $i=1,2$.
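The following Python sketch illustrates Eqs. (5)-(7) as reconstructed above, together with the median-padding normalization described at the beginning of this section (added values receive probability 0). The helper names are ours, and the hesitancy term is computed here on the padded elements; under these assumptions the sketch reproduces the Example 2 values quoted after Theorem 1.

```python
# Sketch of the new distances of Eqs. (5)-(7); PHFEs are lists of
# (membership degree, probability) pairs.
from statistics import median

def hesitancy(h):                       # Z(h(p)) of Definition 13
    iota = len(h)
    p_sum = sum(p for _, p in h)
    return 2 - 4 / (2 + (1 - 1 / iota) + (1 - p_sum))

def pad_to(h, iota):                    # pad with the median value, probability 0
    h = sorted(h)
    med = median(v for v, _ in h)
    while len(h) < iota:
        h.append((med, 0.0))
    return sorted(h)

def d_new(h1, h2, lam=1):               # Eq. (5); lam=1 -> Eq. (6), lam=2 -> Eq. (7)
    iota = max(len(h1), len(h2))
    a, b = pad_to(h1, iota), pad_to(h2, iota)
    term = sum(abs(p1 * v1 - p2 * v2) ** lam
               for (v1, p1), (v2, p2) in zip(a, b)) / iota
    term += abs(hesitancy(a) - hesitancy(b)) ** lam
    return (0.5 * term) ** (1 / lam)

# Example 2: the sample h is closer to pattern h1 than to h2 under d_new.
h  = [(0.5, 0.3), (0.65, 0.2)]
h1 = [(0.55, 0.3), (0.8, 0.3)]
h2 = [(0.45, 0.2), (0.55, 0.2), (0.65, 0.1), (0.75, 0.1)]
print(round(d_new(h, h1), 2), round(d_new(h, h2), 2))   # 0.05 0.06
```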

Theorem 1.

$d_{ng}(h_1(p),h_2(p))$, $d_{nh}(h_1(p),h_2(p))$ and $d_{ne}(h_1(p),h_2(p))$ satisfy Definition 12.

Proof.

(1) Nonnegativity:

According to the definition of PHFS, we have $\varsigma_i^{\tau},p_i^{\tau}\in[0,1]$, so $\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|\in[0,1]$ and $Z(h_i(p))\in[0,1]$, $i=1,2$.

So $d_{ng}(h_1(p),h_2(p))\in[0,1]$.

(2) Reflexivity:

$d_{ng}(h_1(p),h_2(p))=0\Leftrightarrow\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|=0,\ \iota_1=\iota_2\ \text{and}\ 1-\sum_{\tau_1=1}^{\iota_1}p_1^{\tau_1}=1-\sum_{\tau_2=1}^{\iota_2}p_2^{\tau_2}\Leftrightarrow h_1(p)=h_2(p)$.

(3) Commutativity:

By Definition 14,

$d_{ng}(h_1(p),h_2(p))=\left\{\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|^{\lambda}+\left|Z(h_1(p))-Z(h_2(p))\right|^{\lambda}\right]\right\}^{\frac{1}{\lambda}}=\left\{\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_2^{\tau}\varsigma_2^{\tau}-p_1^{\tau}\varsigma_1^{\tau}\right|^{\lambda}+\left|Z(h_2(p))-Z(h_1(p))\right|^{\lambda}\right]\right\}^{\frac{1}{\lambda}}=d_{ng}(h_2(p),h_1(p))$

(4) Triangle inequality:

Since $\left|p_1^{\tau}\varsigma_1^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|^{\lambda}\leq\left|p_1^{\tau}\varsigma_1^{\tau}-p_3^{\tau}\varsigma_3^{\tau}\right|^{\lambda}+\left|p_3^{\tau}\varsigma_3^{\tau}-p_2^{\tau}\varsigma_2^{\tau}\right|^{\lambda}$ and $\left|Z(h_1(p))-Z(h_2(p))\right|^{\lambda}\leq\left|Z(h_1(p))-Z(h_3(p))\right|^{\lambda}+\left|Z(h_3(p))-Z(h_2(p))\right|^{\lambda}$, we obtain $d_{ng}(h_1(p),h_2(p))\leq d_{ng}(h_1(p),h_3(p))+d_{ng}(h_3(p),h_2(p))$.

Similarly, it can be proved that dnh(h1(p),h2(p)) and dne(h1(p),h2(p)) also satisfy the four properties of Definition 12.

The distances proposed in Eqs. (5)-(7) overcome the shortcomings of the above-mentioned distances in Eqs. (1)-(4): they satisfy the definition of a distance and fully consider the differences in membership degrees as well as the differences in length and probability. For Example 2, using the new Hamming distance we get dnh(h(p),h1(p))=0.05 and dnh(h(p),h2(p))=0.06. This means that h(p) belongs to pattern h1(p), so the validity of the proposed new distance is confirmed.

4. THE DISTANCE-BASED COPRAS METHOD UNDER THE PROBABILISTIC HESITANT FUZZY ENVIRONMENT

In this section, the COPRAS method is extended to solve the probabilistic hesitant MADM problems.

The MADM problem is formulated as follows:

  • $A$ is the set of alternatives, represented by $A=\{A_1,A_2,\ldots,A_m\}$.

  • $C$ is the set of attributes, denoted by $C=\{C_1,C_2,\ldots,C_n\}$.

  • $W$ is the set of attribute weights, expressed by $W=\{W_1,W_2,\ldots,W_n\}$.

  • The evaluation of the alternative $A_i$ with respect to the attribute $C_j$ is denoted by $h_{ij}(p_{ij})=\{\varsigma_{ij}^{(\tau)}(p_{ij}^{(\tau)})\mid i=1,2,\ldots,m;\ j=1,2,\ldots,n;\ \tau=1,2,\ldots,\iota\}$.

4.1. The Classical COPRAS Method

The COPRAS method appraises the maximizing and minimizing index values separately, and considers the influence of maximization and minimization of attributes on the evaluation results respectively. It is a compensation method in which the attributes are independent, and it converts qualitative attributes into quantitative attributes. It is important to note that the classical COPRAS method can only be applied when the attribute information consists of crisp numbers. The attribute weights $[W_1,W_2,\ldots,W_n]$ are given by the DM. The MADM steps of the classical COPRAS method are described as follows:

Step 1: Obtain the evaluation decision matrix $X=[s_{ij}]_{m\times n}$ and normalize it into $X^{*}=[s_{ij}^{*}]_{m\times n}$,

where $s_{ij}$ is the evaluation value of the $i$th alternative under the $j$th attribute, expressed as a crisp number, and $s_{ij}^{*}$ is the normalization of $s_{ij}$, $i\in M$, $M=\{1,2,\ldots,m\}$, $j\in N$, $N=\{1,2,\ldots,n\}$:

$$s_{ij}^{*}=\frac{s_{ij}}{\sum_{i=1}^{m}s_{ij}},\quad j=1,2,\ldots,n$$

Step 2: Calculate the weighted values of X* through the following expression:

$$\hat{s}_{ij}=s_{ij}^{*}W_j,\quad i=1,2,\ldots,m;\ j=1,2,\ldots,n$$

Step 3: Compute the aggregated values of the weighted values of X* for positive or negative type attributes by the following expression:

$$PN_{+i}=\sum_{j=1}^{r}\hat{s}_{ij},\qquad PN_{-i}=\sum_{j=r+1}^{n}\hat{s}_{ij},\quad i=1,2,\ldots,m$$
where $r$ is the number of positive (benefit) type attributes and $n-r$ is the number of negative (cost) type attributes, and $PN_{+i}$ and $PN_{-i}$ describe the maximizing and minimizing indexes of the $i$th alternative.

Step 4: Calculate the relative priority value or significance value RVi of each alternative by the following formula:

$$RV_i=PN_{+i}+\frac{\min_i PN_{-i}\sum_{i=1}^{m}PN_{-i}}{PN_{-i}\sum_{i=1}^{m}\frac{\min_i PN_{-i}}{PN_{-i}}}$$

It can also be written as follows:

$$RV_i=PN_{+i}+\frac{\sum_{i=1}^{m}PN_{-i}}{PN_{-i}\sum_{i=1}^{m}\frac{1}{PN_{-i}}}$$

Step 5: The alternatives are ranked in descending order according to the relative importance value.
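As an illustration, here is a compact Python sketch (not the authors' code) of the classical COPRAS steps above for a crisp decision matrix whose first r columns are positive (benefit) attributes and whose remaining columns are negative (cost) attributes; it uses the simplified form of the relative significance value.

```python
# Classical COPRAS for a crisp m x n decision matrix X and given weights.
def copras(X, weights, r):
    m, n = len(X), len(X[0])
    col_sum = [sum(X[i][j] for i in range(m)) for j in range(n)]        # Step 1
    Xs = [[X[i][j] / col_sum[j] for j in range(n)] for i in range(m)]
    Xw = [[Xs[i][j] * weights[j] for j in range(n)] for i in range(m)]  # Step 2
    P_plus = [sum(row[:r]) for row in Xw]                               # Step 3
    P_minus = [sum(row[r:]) for row in Xw]
    minus_total = sum(P_minus)                                          # Step 4
    inv_total = sum(1 / p for p in P_minus)
    RV = [P_plus[i] + minus_total / (P_minus[i] * inv_total) for i in range(m)]
    order = sorted(range(m), key=lambda i: RV[i], reverse=True)         # Step 5
    return RV, order
```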

4.2. Determine the Weights of Evaluation Attributes

In this section, based on the maximizing deviation method [50], the new Hamming distance of Eq. (6) is used to obtain the evaluation attribute weights under the probabilistic hesitant fuzzy environment.

Firstly, calculate the new Hamming distance between the alternative $A_i$ and the other alternatives $A_k\ (k=1,2,\ldots,m,\ k\neq i)$ with respect to the attribute $C_j$ by the following expression:

$$D_{ij}=\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big),\quad i=1,2,\ldots,m;\ j=1,2,\ldots,n$$
where
$$d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)=\frac{1}{2}\left[\frac{1}{\iota}\sum_{\tau=1}^{\iota}\left|p_{ij}^{(\tau)}\varsigma_{ij}^{(\tau)}-p_{kj}^{(\tau)}\varsigma_{kj}^{(\tau)}\right|+\left|Z\big(h_{ij}(p_{ij})\big)-Z\big(h_{kj}(p_{kj})\big)\right|\right]$$
with $Z\big(h_{ij}(p_{ij})\big)=2-\dfrac{4}{2+\left(1-\frac{1}{\iota_{ij}}\right)+\left(1-\sum_{\tau_{ij}=1}^{\iota_{ij}}p_{ij}^{(\tau_{ij})}\right)}$ and $Z\big(h_{kj}(p_{kj})\big)$ defined analogously.

Secondly, for each attribute $C_j$, the total distance over all alternatives is calculated as follows:

$$D_j=\sum_{i=1}^{m}D_{ij}$$

Let

$$D(W)=\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)W_j,$$
where $D(W)$ represents the weighted distance function.

Because the attribute weights are completely unknown, we can construct a nonlinear programming model to obtain the weights as follows:

$$\begin{cases}\max\ D(W)=\sum_{j=1}^{n}\sum_{i=1}^{m}W_j D_{ij}\\ \text{s.t.}\ W_j\geq 0,\ \sum_{j=1}^{n}W_j^{2}=1,\ j=1,2,\ldots,n.\end{cases}$$

To solve the above model, we construct the Lagrange function as follows:

$$f(W,\eta)=D(W)+\frac{\eta}{2}\left(\sum_{j=1}^{n}W_j^{2}-1\right)$$
where $\eta$ is a real number, the Lagrange multiplier. Then the partial derivatives of $f$ are computed as:
$$\frac{\partial f}{\partial W_j}=\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)+\eta W_j=0\tag{8}$$
$$\frac{\partial f}{\partial\eta}=\frac{1}{2}\left(\sum_{j=1}^{n}W_j^{2}-1\right)=0\tag{9}$$

Next, it follows from Eq. (8) that

$$W_j=-\frac{\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)}{\eta}\tag{10}$$

Substituting Eq. (10) into Eq. (9), we have

$$\eta=-\sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)\right)^{2}}\tag{11}$$

Obviously, $\eta<0$. Then, combining Eqs. (10) and (11), we get

$$W_j=\frac{\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)}{\sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1,k\neq i}^{m}d_{nh}\big(h_{ij}(p_{ij}),h_{kj}(p_{kj})\big)\right)^{2}}}$$

Lastly, the attribute weight Wj can be rewritten as:

$$W_j=\frac{D_j}{\sqrt{\sum_{j=1}^{n}D_j^{2}}}$$

By normalizing $W_j\ (j=1,2,\ldots,n)$, we can get:

$$W_j^{*}=\frac{W_j}{\sum_{j=1}^{n}W_j}.$$
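A minimal Python sketch of this weighting scheme follows; it reuses the d_new helper sketched in Section 3.2 with lam=1 (the new Hamming distance), and the function name attribute_weights is ours.

```python
# Maximizing-deviation weights: W_j = D_j / sqrt(sum_j D_j^2), then normalized.
def attribute_weights(H):
    """H: m x n matrix whose entries are PHFEs (lists of (value, prob) pairs)."""
    m, n = len(H), len(H[0])
    D = [sum(d_new(H[i][j], H[k][j], lam=1)
             for i in range(m) for k in range(m) if k != i)
         for j in range(n)]                       # D_j, total deviation of attribute j
    norm = sum(d ** 2 for d in D) ** 0.5
    W = [d / norm for d in D]
    return [w / sum(W) for w in W]                # normalized weights W_j*
```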

4.3. The Extended COPRAS Method Under Probabilistic Hesitant Fuzzy Environment

The COPRAS method has the ability to consider positive and negative attributes, which can be evaluated independently during the evaluation process. The main advantage of this approach is that it can be used to calculate the utility value of a given alternative, thereby showing how much better or worse one alternative is than another. The following steps summarize the method based on the new Hamming distance in the probabilistic hesitant fuzzy environment.

Step 1: Obtain the evaluation decision matrix $H=[h_{ij}(p_{ij})]_{m\times n}$ and normalize it to $H'=[h'_{ij}(p_{ij})]_{m\times n}$,

where $h'_{ij}(p_{ij})=\{\varsigma_{ij}^{(\tau)}(p_{ij}^{(\tau)})\mid i=1,2,\ldots,m;\ j=1,2,\ldots,n;\ \tau=1,2,\ldots,\iota\}$. In the evaluation matrix, the numbers of membership degrees may differ, so it is necessary to normalize the evaluation matrix: the median of the membership degrees is added to each PHFE with fewer membership degrees, with probability 0 for the added degrees, so that all PHFEs have the same number of membership degrees.

Step 2: Obtain the attribute weights $W_j\ (j=1,2,\ldots,n)$ by the method in Section 4.2.

Step 3: Calculate the weighted evaluation value $\hat{h}_{ij}$ for each attribute:

$$\hat{h}_{ij}=\{\hat{\varsigma}_{ij}^{(\tau)}(\hat{p}_{ij}^{(\tau)}),\ \tau=1,2,\ldots,\iota\}=W_j h_{ij}=\bigcup_{\varsigma_{ij}^{(\tau)}\in h_{ij}}\left\{\left[1-\left(1-\varsigma_{ij}^{(\tau)}\right)^{W_j}\right]\left(p_{ij}^{(\tau)}\right)\right\},\quad i=1,2,\ldots,m;\ j=1,2,\ldots,n.$$
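A short sketch of this step: each membership degree is weighted through the scalar operation of Definition 7(2), while the probabilities are kept unchanged; the quoted numbers assume the weight W1 = 0.2366 computed later in the case study.

```python
# Weighted evaluation value of a PHFE h under attribute weight w.
def weighted_phfe(h, w):
    return [(1 - (1 - v) ** w, p) for v, p in h]

# e.g. for A1 under C1 with W1 = 0.2366 (cf. Table 3):
# weighted_phfe([(0.3, 0.1), (0.55, 0.0), (0.8, 0.8)], 0.2366)
# -> values approximately 0.0809, 0.1722, 0.3167
```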

Step 4: According to Definition 11, the aggregated value of the weighted values $\hat{h}_{ij}$ is calculated for the positive and negative type attributes, as shown below:

$$PN_{+i}=\bigoplus_{j=1}^{r}\hat{h}_{ij}=\bigcup_{\tau=1}^{\iota}\left\{\left[1-\prod_{j=1}^{r}\left(1-\hat{\varsigma}_{ij}^{(\tau)}\right)\right]\left(\sum_{j=1}^{r}\hat{p}_{ij}^{(\tau)}\Big/\sum_{\tau=1}^{\iota}\sum_{j=1}^{r}\hat{p}_{ij}^{(\tau)}\right)\right\}$$
$$PN_{-i}=\bigoplus_{j=r+1}^{n}\hat{h}_{ij}=\bigcup_{\tau=1}^{\iota}\left\{\left[1-\prod_{j=r+1}^{n}\left(1-\hat{\varsigma}_{ij}^{(\tau)}\right)\right]\left(\sum_{j=r+1}^{n}\hat{p}_{ij}^{(\tau)}\Big/\sum_{\tau=1}^{\iota}\sum_{j=r+1}^{n}\hat{p}_{ij}^{(\tau)}\right)\right\}$$
where $r$ is the number of positive type attributes and $n-r$ is the number of negative type attributes, and $PN_{\pm i}=\{\hat{\varsigma}_{\pm i}^{(\tau)}(p_{\pm i}^{(\tau)})\}$ describes the maximizing or minimizing index of the $i$th alternative.

Step 5: Calculate the score values of the aggregated values based on the negative and positive type attributes.

$$S(PN_{+i})=\sum_{\tau=1}^{\iota}\hat{\varsigma}_{+i}^{(\tau)}p_{+i}^{(\tau)},\qquad S(PN_{-i})=\sum_{\tau=1}^{\iota}\hat{\varsigma}_{-i}^{(\tau)}p_{-i}^{(\tau)}$$

Step 6: Integrate the scores of the negative and positive attributes of each alternative. The relative priority value or significance value RVi of each alternative Ai is calculated by the following formula:

$$RV_i=S(PN_{+i})+\frac{\min_i S(PN_{-i})\sum_{i=1}^{m}S(PN_{-i})}{S(PN_{-i})\sum_{i=1}^{m}\frac{\min_i S(PN_{-i})}{S(PN_{-i})}}$$

It can also be written as follows:

$$RV_i=S(PN_{+i})+\frac{\sum_{i=1}^{m}S(PN_{-i})}{S(PN_{-i})\sum_{i=1}^{m}\frac{1}{S(PN_{-i})}}$$

Step 7: Rank the alternatives based on the relative priority or significance values. The higher the relative value $RV_i$, the better the alternative $A_i$.
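The sketch below (ours, not the authors' code) ties the steps of this subsection together by reusing the helpers sketched earlier (pad_to, attribute_weights, weighted_phfe, aphfwa, phfe_score); for simplicity it assumes that the positive attributes occupy the first r columns, so for the case study of Section 5 the columns would have to be reordered or passed as explicit index sets.

```python
# Extended COPRAS for a matrix H of PHFEs (lists of (value, prob) pairs).
def extended_copras(H, r):
    m, n = len(H), len(H[0])
    # Step 1: pad every PHFE to a common length (median value, probability 0)
    iota = max(len(H[i][j]) for i in range(m) for j in range(n))
    H = [[pad_to(H[i][j], iota) for j in range(n)] for i in range(m)]
    # Step 2: attribute weights from the maximizing deviation model
    W = attribute_weights(H)
    # Step 3: weighted evaluation values
    Hw = [[weighted_phfe(H[i][j], W[j]) for j in range(n)] for i in range(m)]
    # Steps 4-5: aggregate positive / negative attributes (the (+)-sum equals
    # the APHFWA operator with unit weights) and take the scores
    S_plus  = [phfe_score(aphfwa(Hw[i][:r], [1] * r)) for i in range(m)]
    S_minus = [phfe_score(aphfwa(Hw[i][r:], [1] * (n - r))) for i in range(m)]
    # Step 6: relative significance values
    total, inv_total = sum(S_minus), sum(1 / s for s in S_minus)
    RV = [sp + total / (sm * inv_total) for sp, sm in zip(S_plus, S_minus)]
    # Step 7: rank alternatives by decreasing RV
    return RV, sorted(range(m), key=lambda i: RV[i], reverse=True)
```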

5. CASE STUDY

5.1. Instance Profile

Suppose that there are five sources of energy $(A_i,\ i=1,2,\ldots,5)$ to choose from. The DM uses four attributes to assess the energy sources: advanced technology (C1), degree of impact on the environment (C2), potential market value (C3) and energy service life (C4). Among them, C1, C3 and C4 are positive attributes and C2 is a negative attribute. The attribute weights are completely unknown. Because of the uncertainty of the information, the evaluation value of each energy source under the corresponding attribute is given in the form of PHFEs. The DMs give the evaluation information $H=[h_{ij}(p_{ij})]_{5\times 4}$ in Table 1.

   | C1 | C2 | C3 | C4
A1 | 0.3(0.1), 0.8(0.8) | 0.3(0.1), 0.6(0.7) | 0.6(0.7) | 0.3(0.4), 0.4(0.3), 0.6(0.2)
A2 | 0.5(0.6), 0.7(0.4) | 0.4(0.8) | 0.3(0.4), 0.5(0.4) | 0.6(0.9)
A3 | 0.2(0.4), 0.7(0.4), 0.9(0.1) | 0.4(0.3), 0.6(0.4) | 0.5(0.3), 0.6(0.6) | 0.8(0.7)
A4 | 0.4(0.2), 0.5(0.3), 0.6(0.4) | 0.3(0.1), 0.5(0.5) | 0.3(0.2), 0.4(0.1), 0.6(0.5) | 0.5(0.4), 0.7(0.2)
A5 | 0.4(0.3), 0.6(0.1), 0.7(0.2) | 0.3(0.3), 0.4(0.2), 0.5(0.4) | 0.3(0.4), 0.5(0.3), 0.6(0.3) | 0.1(0.2), 0.5(0.7)
Table 1

Evaluation information.

5.2. The Decision-Making Process

Step 1: Normalize the evaluation information $H$ into $H'=[h'_{ij}(p_{ij})]_{m\times n}$, shown in Table 2.

   | C1 | C2 | C3 | C4
A1 | 0.3(0.1), 0.55(0), 0.8(0.8) | 0.3(0.1), 0.45(0), 0.6(0.7) | 0.6(0), 0.6(0), 0.6(0.7) | 0.3(0.4), 0.4(0.3), 0.6(0.2)
A2 | 0.5(0.6), 0.6(0), 0.7(0.4) | 0.4(0), 0.4(0), 0.4(0.8) | 0.3(0.4), 0.4(0), 0.5(0.4) | 0.6(0), 0.6(0), 0.6(0.9)
A3 | 0.2(0.4), 0.7(0.4), 0.9(0.1) | 0.4(0.3), 0.5(0), 0.6(0.4) | 0.5(0.3), 0.55(0), 0.6(0.6) | 0.8(0), 0.8(0), 0.8(0.7)
A4 | 0.4(0.2), 0.5(0.3), 0.6(0.4) | 0.3(0.1), 0.4(0), 0.5(0.5) | 0.3(0.2), 0.4(0.1), 0.6(0.5) | 0.5(0.4), 0.6(0), 0.7(0.2)
A5 | 0.4(0.3), 0.6(0.1), 0.7(0.2) | 0.3(0.3), 0.4(0.2), 0.5(0.4) | 0.3(0.4), 0.5(0.3), 0.6(0.3) | 0.1(0.2), 0.3(0), 0.5(0.7)
Table 2

Normalize the evaluation information.

Step 2: Obtain the attribute weights $W_j\ (j=1,2,3,4)$ by the method in Section 4.2; the resulting weights are $W_1=0.2366$, $W_2=0.215$, $W_3=0.2052$, $W_4=0.3432$.

Step 3: Table 3 shows the weighted evaluation values $\hat{h}_{ij}$ for each attribute.

   | C1 | C2 | C3 | C4
A1 | 0.0809(0.1), 0.1722(0), 0.3167(0.8) | 0.0738(0.1), 0.1206(0), 0.1788(0.7) | 0.1714(0), 0.1714(0), 0.1714(0.7) | 0.1152(0.4), 0.1608(0.3), 0.2698(0.2)
A2 | 0.1513(0.6), 0.1949(0), 0.2479(0.4) | 0.104(0), 0.104(0), 0.104(0.8) | 0.0706(0.4), 0.0995(0), 0.1326(0.4) | 0.2698(0), 0.2698(0), 0.2698(0.9)
A3 | 0.0514(0.4), 0.2479(0.4), 0.42(0.1) | 0.104(0.3), 0.1385(0), 0.1788(0.4) | 0.1326(0.3), 0.1511(0), 0.1714(0.6) | 0.4244(0), 0.4244(0), 0.4244(0.7)
A4 | 0.1138(0.2), 0.1513(0.3), 0.1949(0.4) | 0.0738(0.1), 0.104(0), 0.1385(0.5) | 0.0706(0.2), 0.0995(0.1), 0.1714(0.5) | 0.2117(0.4), 0.2698(0), 0.3385(0.2)
A5 | 0.1138(0.3), 0.1949(0.1), 0.2479(0.2) | 0.0738(0.3), 0.104(0.2), 0.1385(0.4) | 0.0706(0.4), 0.1326(0.3), 0.1714(0.3) | 0.0355(0.2), 0.1152(0), 0.2117(0.7)
Table 3

Weighted value hij^ of normalize evaluation information.

Step 4: The aggregated values of $\hat{h}_{ij}$ are shown in Table 4.

     | A1 | A2 | A3 | A4 | A5
PN+i | 0.3262(0.2), 0.4244(0.12), 0.5866(0.68) | 0.424(0.3704), 0.4706(0), 0.5236(0.6296) | 0.5264(0.28), 0.6325(0.16), 0.7234(0.56) | 0.3507(0.35), 0.4419(0.17), 0.5587(0.48) | 0.2056(0.36), 0.3821(0.16), 0.5087(0.48)
PN−i | 0.0738(0.125), 0.1206(0), 0.1788(0.875) | 0.104(0), 0.104(0), 0.104(1) | 0.104(0.429), 0.1385(0), 0.1788(0.571) | 0.0738(0.17), 0.104(0), 0.1385(0.83) | 0.0738(0.33), 0.104(0.22), 0.1385(0.45)
Table 4

The aggregated value of hij^.

Step 5: The score function values of aggregated value are shown in Table 5.

        | A1 | A2 | A3 | A4 | A5
S(PN+i) | 0.5151 | 0.4867 | 0.6537 | 0.466 | 0.3793
S(PN−i) | 0.1657 | 0.104 | 0.1467 | 0.1275 | 0.1096
Table 5

The score function values of aggregated value.

Step 6: The relative priority value or significance value of each alternative is calculated as follows: RV1=0.6150, RV2=0.6460, RV3=0.7666, RV4=0.5742, RV5=0.5306
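As a quick sanity check (a sketch under our assumptions), Step 6 can be recomputed directly from the rounded score values of Table 5.

```python
S_plus  = [0.5151, 0.4867, 0.6537, 0.4660, 0.3793]   # S(PN+i) from Table 5
S_minus = [0.1657, 0.1040, 0.1467, 0.1275, 0.1096]   # S(PN-i) from Table 5

total = sum(S_minus)
inv_total = sum(1 / s for s in S_minus)
RV = [sp + total / (sm * inv_total) for sp, sm in zip(S_plus, S_minus)]
ranking = sorted(range(5), key=lambda i: RV[i], reverse=True)
# ranking == [2, 1, 0, 3, 4], i.e. A3 > A2 > A1 > A4 > A5 as in Step 7; the
# individual RV values need not match Step 6 exactly, since Table 5 is rounded.
```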

Step 7: Regarding the relative values: RV3>RV2>RV1>RV4>RV5

and alternatives are ranked as follows: A3>A2>A1>A4>A5

5.3. Comparative Analysis

In order to illustrate the effectiveness of the proposed method, a comparative analysis with other methods is carried out on the basis of the above case study.

Case 1 Comparing with the classical COPRAS method.

In the classical COPRAS method, the attribute information consists of crisp numbers. Based on this, we convert the probabilistic hesitant fuzzy evaluation information into crisp numbers using the score function, and then compare the result with the extended COPRAS method. Following Section 4.1, we calculate the relative significance or priority values, and the results are as follows: RV1=0.1954, RV2=0.2042, RV3=0.2257, RV4=0.1927, RV5=0.1797.

Thus, A3>A2>A1>A4>A5.

Case 2 The proposed method is compared with the aggregating operator-based methods.

Zhang et al. [24] improved the properties and operation rules of PHFEs and proposed some aggregation operators.

Here the weighted geometric aggregation operator and weighted averaging operator are used to integrate the probabilistic hesitant fuzzy information.

First, calculate the attribute weights as in Section 4.2: W1=0.2366, W2=0.215, W3=0.205, W4=0.3432. Then aggregate the evaluation values of each alternative with the probabilistic hesitant fuzzy weighted geometric operator (PHFWG), and finally calculate the score value of each alternative after aggregation; the score values are as follows:

S(PHFWG(A1))=0.5411, S(PHFWG(A2))=0.5706, S(PHFWG(A3))=0.6106, S(PHFWG(A4))=0.5397, S(PHFWG(A5))=0.4382, and the ranking is obtained in descending order.

Thus, A3>A2>A1>A4>A5.

Second, aggregate the evaluation values of each alternative with the probabilistic hesitant fuzzy weighted averaging operator (PHFWA); the score values of the alternatives are as follows:

S(PHFWA(A1))=0.5748, S(PHFWA(A2))=0.5854, S(PHFWA(A3))=0.6995, S(PHFWA(A4))=0.5715, S(PHFWA(A5))=0.4949, and finally the ranking is obtained according to the descending order of the scores of the alternatives.

Thus, A3>A2>A1>A4>A5.

Case 3 Comparing with the possibility degree formula-based method for PHFEs.

Song et al. [51] proposed a charting technique to analyze the structure of PHFEs, and then proposed a new possibility degree formula to rank PHFEs. This comparison method is more accurate, especially when facing different PHFEs with the same or intersecting values. At the same time, the proposed possibility degree formula can realize the optimal ranking in the probabilistic hesitant fuzzy environment. Here, we use the possibility degree to rank the alternatives of the above case. The specific steps are as follows:

First, we can use the PHFWA operator to aggregate the evaluation values of the same alternative. The results are as follows:

$A_1=\{0.48(0.18),0.52(0.09),0.63(0.73)\}$, $A_2=\{0.53(0.29),0.57(0),0.61(0.71)\}$, $A_3=\{0.61(0.31),0.68(0.13),0.75(0.56)\}$, $A_4=\{0.5(0.31),0.54(0.14),0.62(0.55)\}$, $A_5=\{0.39(0.35),0.49(0.18),0.58(0.47)\}$

After that, calculate the possibility degree that each alternative has priority over other alternatives through the possibility degree formula, as shown in Table 6.

   | A1 | A2 | A3 | A4 | A5
A1 | 0.5 | 0.49 | 0.47 | 0.59 | 0.66
A2 | 0.51 | 0.5 | 0.48 | 0.6 | 0.68
A3 | 0.53 | 0.52 | 0.5 | 0.63 | 0.72
A4 | 0.41 | 0.4 | 0.37 | 0.5 | 0.61
A5 | 0.34 | 0.32 | 0.28 | 0.39 | 0.5

PHFEs, probabilistic hesitant fuzzy elements.

Table 6

Possibility degree for the PHFEs.

Then, derive the priorities of the complementary judgment matrix by using the exact solution:

$$V=(v_1,v_2,v_3,v_4,v_5)^{T}=(0.2308,0.2367,0.2646,0.1590,0.1084)$$

Last, according to the above possibility degree matrix, we get the ranking of the alternatives:

$$A_3\succ^{0.52}A_2\succ^{0.51}A_1\succ^{0.59}A_4\succ^{0.61}A_5$$

Case 4 The proposed method is compared with the distance-based method.

Xu and Zhang [50] extended the TOPSIS method to hesitant fuzzy environments, and utilized the distance measures of HFEs to solve the MADM problem. The method is divided into four steps, namely, normalizing the HFEs, finding the positive ideal solution (PIS) and negative ideal solution (NIS), calculating the distance between each alternative and the PIS and NIS, and obtaining the ranking of the alternatives. Next, for the above illustrative example, we extend a similar TOPSIS method to deal with probabilistic hesitant fuzzy information. The complete steps are as follows:

Firstly, normalize the decision-making evaluation values.

Secondly, determine the PIS and NIS. The PIS and the NIS are determined by the score values of PHFEs in Table 7.

    | C1 | C2 | C3 | C4
A1  | 0.3(0.1), 0.55(0), 0.8(0.8) | 0.3(0.1), 0.45(0), 0.6(0.7) | 0.6(0), 0.6(0), 0.6(0.7) | 0.3(0.4), 0.4(0.3), 0.6(0.2)
A2  | 0.5(0.6), 0.6(0), 0.7(0.4) | 0.4(0), 0.4(0), 0.4(0.8) | 0.3(0.4), 0.4(0), 0.5(0.4) | 0.6(0), 0.6(0), 0.6(0.9)
A3  | 0.2(0.4), 0.7(0.4), 0.9(0.1) | 0.4(0.3), 0.5(0), 0.6(0.4) | 0.5(0.3), 0.55(0), 0.6(0.6) | 0.8(0), 0.8(0), 0.8(0.7)
A4  | 0.4(0.2), 0.5(0.3), 0.6(0.4) | 0.3(0.1), 0.4(0), 0.5(0.5) | 0.3(0.2), 0.4(0.1), 0.6(0.5) | 0.5(0.4), 0.6(0), 0.7(0.2)
A5  | 0.4(0.3), 0.6(0.1), 0.7(0.2) | 0.3(0.3), 0.4(0.2), 0.5(0.4) | 0.3(0.4), 0.5(0.3), 0.6(0.3) | 0.1(0.2), 0.3(0), 0.5(0.7)
PIS | 0.3(0.1), 0.55(0), 0.8(0.8) | 0.4(0), 0.4(0), 0.4(0.8) | 0.6(0), 0.6(0), 0.6(0.7) | 0.8(0), 0.8(0), 0.8(0.7)
NIS | 0.2(0.4), 0.7(0.4), 0.9(0.1) | 0.3(0.1), 0.45(0), 0.6(0.7) | 0.3(0.4), 0.4(0), 0.5(0.4) | 0.3(0.4), 0.4(0.3), 0.6(0.2)

NIS, negative ideal solutions; PHFEs, probabilistic hesitant fuzzy elements; PIS, positive ideal solutions.

Table 7

The PIS and the NIS are determined by the PHFEs.

Thirdly, calculate the distance by Eq. (2) from PIS and NIS for each alternative and the results are put in Table 8.

   | Di1+ | Di1− | Di2+ | Di2− | Di3+ | Di3− | Di4+ | Di4−
A1 | 0 | 0.1667 | 0.0433 | 0 | 0 | 0.1133 | 0.2267 | 0
A2 | 0.21 | 0.0967 | 0 | 0.0433 | 0.1133 | 0 | 0.3667 | 0.22
A3 | 0.1667 | 0 | 0.06 | 0.09 | 0.07 | 0.0633 | 0 | 0.2267
A4 | 0.2 | 0.0933 | 0.0333 | 0.0567 | 0.0733 | 0.0667 | 0.2067 | 0.0733
A5 | 0.2167 | 0.1033 | 0.0967 | 0.12 | 0.17 | 0.0567 | 0.0767 | 0.15

(Columns Dij+ and Dij− give, for each attribute Cj, the distance of alternative Ai to the PIS and NIS, respectively.)

NIS, negative ideal solutions; PIS, positive ideal solutions.

Table 8

Distance between each alternative to the PIS and NIS.

Ultimately, obtain the ranking of the alternatives.

According to the previous step, we can get $d_i^{+}=\sum_{j=1}^{4}D_{ij}^{+}W_j$ and $d_i^{-}=\sum_{j=1}^{4}D_{ij}^{-}W_j$; the results are as follows: $d_1^{+}=0.0871$, $d_2^{+}=0.1987$, $d_3^{+}=0.0667$, $d_4^{+}=0.1429$, $d_5^{+}=0.1678$ and $d_1^{-}=0.0626$, $d_2^{-}=0.1351$, $d_3^{-}=0.1101$, $d_4^{-}=0.0822$, $d_5^{-}=0.0897$. Next, the relative closeness coefficient is given by $C(A_i)=\frac{d_i^{-}}{d_i^{-}+d_i^{+}}$, $i=1,2,\ldots,5$, so the values of the alternatives are obtained as follows: $C(A_1)=0.4185$, $C(A_2)=0.4047$, $C(A_3)=0.6228$, $C(A_4)=0.3653$, $C(A_5)=0.3484$.

Thus,

A3>A1>A2>A4>A5.

In order to compare these methods more intuitively, the ranking results are obtained in Table 9.

Method | Ranking Values | Ranking
The classical COPRAS method | 0.1954, 0.2042, 0.2257, 0.1927, 0.1797 | A3 > A2 > A1 > A4 > A5
Method of Zhang et al. [24], PHFWA | 0.5748, 0.5854, 0.6995, 0.5715, 0.4949 | A3 > A2 > A1 > A4 > A5
Method of Zhang et al. [24], PHFWG | 0.5411, 0.5706, 0.6106, 0.5397, 0.4382 | A3 > A2 > A1 > A4 > A5
Method of Xu and Zhang [50] | 0.4185, 0.4047, 0.6228, 0.3653, 0.3484 | A3 > A1 > A2 > A4 > A5
Method of Song et al. [51] | 0.2308, 0.2367, 0.2646, 0.1590, 0.1084 | A3 > A2 > A1 > A4 > A5
Proposed method | 0.6150, 0.6460, 0.7666, 0.5742, 0.5306 | A3 > A2 > A1 > A4 > A5

COPRAS, COmplex PRoportional Assessment; PHFWA, probabilistic hesitant fuzzy weighted averaging operator; PHFWG, probabilistic hesitant fuzzy weighted geometric operator.

Table 9

The ranking results of the different methods.

Table 9 shows that the ranking results obtained by the classical COPRAS method and Zhang et al. [24] and Song et al. [51] are the same as those obtained by the proposed method, indicating that the proposed method is effective.

The possible reasons why different methods yield the same ranking are as follows: these methods obtain the alternative ranking according to the same steps, use the same normalization of the evaluation values, then aggregate the evaluation values to obtain the score value of each alternative, and finally rank according to the score value or possibility degree. Comparing the aggregation-operator-based method with the proposed one, we find that the calculation based on aggregation operators is more complex. On the other hand, the classical COPRAS method yields the same ranking result, but the proposed method better reflects the gaps between the alternatives, while the results of the classical method make the alternatives look similar. Finally, although the possibility-degree-based method obtains the same result, the differences it produces between the alternatives are very small.

From Table 9, we can also see that the ranking of the distance-based method differs from that of the proposed method. This is because of differences in their underlying theory. The distance-based method compares the alternatives according to their distances from the ideal solutions and chooses the alternative farthest from the NIS (and closest to the PIS) as the best alternative. Moreover, the alternatives are compared according to the hesitant fuzzy information, and some issues, such as whether the choice of PIS and NIS is appropriate, still need to be considered. The difference may also be due to the choice of normalization method and distance measures. Therefore, it is normal for its ranking to differ from that of the proposed method.

Comparative analysis shows that the MADM method proposed in this study has the following advantages over other methods.

  1. Based on the proposed new distance measures, this paper constructs a nonlinear model to obtain attribute weights, which makes the results more objective and accurate.

  2. Compared with the aggregation-operator-based method, the proposed method is easier to calculate, and aggregation operators are not suitable for solving multi-attribute problems with a large number of attribute indexes.

  3. Compared with the method based on the classical COPRAS method and the possibility degree-based method, the proposed method can better reflect the differences between alternatives, and the calculation is objective. Moreover, it is reasonable to use the improved COPRAS method to select the optimal alternative under the probabilistic hesitant fuzzy environment.

6. CONCLUSION

PHFS is generated by the preferences of DMs and can reflect the importance of different membership degrees; in other words, it can reflect the characteristics of different evaluations in MADM problems. In this study, an MADM method with probabilistic hesitant evaluation information is proposed. First, the definitions and operation rules of HFS and PHFS are reviewed. Second, the existing distance measures of PHFEs are discussed and new distances between PHFEs are proposed. Subsequently, the newly proposed distance is combined with the maximizing deviation method to establish a nonlinear programming model for obtaining the attribute weights. Then, the COPRAS method is extended to the probabilistic hesitant fuzzy environment. Lastly, an energy-selection problem is provided to show that the proposed method is effective.

This study provides several important contributions to MADM problems with probabilistic hesitant information, summarized as follows. First, new distance measures are proposed that account for the lengths of the PHFEs and the gap between the probability sum and 1. Then, a maximizing deviation method based on the new Hamming distance is proposed to obtain the attribute weights. Finally, the COPRAS method is extended to the probabilistic hesitant fuzzy environment. The proposed method provides more options for DMs to solve probabilistic hesitant MADM problems. In general, the study of PHFSs is in its infancy and still has great potential for development. In future work, the determination of attribute weights under probabilistic hesitant fuzzy information can be extended further, and the approach proposed by Wang et al. [52] can be used for reference. When facing membership degrees and probability information at the same time, existing research usually merges the two kinds of information in a certain way and then handles the result with the original methods. Therefore, in future research, we need to find new methods so that the probability information can be processed more reasonably when establishing measures of probabilistic hesitant fuzzy information.

CONFLICTS OF INTEREST

The authors declare they have no conflicts of interest.

AUTHORS' CONTRIBUTIONS

All authors have contributed equally to the paper.

Funding Statements

Funding to complete this paper was provided by the authors.

ACKNOWLEDGMENTS

The authors thank the editors and reviewers for providing very pertinent comments, which helped improve this paper.

REFERENCES

49.W. Takahashi, Nonlinear Functional Analysis: Fixed Point Theory and its Applications, Yokohama Publishers, Yokohama, 2000.
50.Z. Xu and X. Zhang, Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information, Knowl.-Based Syst., Vol. 52, 2013, pp. 53-64. https://doi.org/10.1016/j.knosys.2013.05.011
