International Journal of Computational Intelligence Systems

Volume 13, Issue 1, 2020, Pages 1590 - 1597

Application of Fuzzy C-Mean Clustering Based on Multi-Polar Fuzzy Entropy Improvement in Dynamic Truck Scale Cheating Recognition

Authors
Zhenyu Lu1, Xianyun Huang2, *
1Artificial Intelligence Institute, Nanjing University of Information Science and Technology, Nanjing, Jiangsu, 210044, China
2Scientific Research Post, Suzhou Institute of Metrology, Suzhou, Jiangsu, 215128, China
*Corresponding author. Email: huangxianyun06@163.com
Received 31 May 2020, Accepted 15 September 2020, Available Online 1 October 2020.
DOI
10.2991/ijcis.d.200923.001
Keywords
Multi-polar fuzzy entropy; Fuzzy C-means clustering; Multi-polar fuzzy feature; Dynamic truck scale
Abstract

Against the background of big data, the uncertainty of data is increasingly apparent. Multi-polar fuzzy features of data have been used increasingly by the research community for classifying weighing cheating on dynamic truck scales and for clustering multi-polar fuzzy feature information. However, traditional classification methods handle such features poorly, leading to slow speed and inaccuracy. Therefore, considering the classification defects for multi-polar fuzzy features, a fuzzy c-means (FCM) clustering algorithm based on multi-polar fuzzy entropy is proposed. Firstly, the characteristic values of the clustering samples are established according to the evaluation of the available characteristics. Secondly, the multi-polar fuzzy entropy of the clustering samples is calculated. Finally, an improved FCM clustering algorithm based on multi-polar fuzzy entropy is presented. Experimental results on a data set collected from 5 types of weighing-cheating trucks demonstrate that the algorithm improves the classification accuracy of FCM on multi-polar fuzzy feature information and significantly reduces both the number of iterations and the classification time.

Copyright
© 2020 The Authors. Published by Atlantis Press B.V.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

Artificial intelligence has been widely used in various research fields to solve complex problems through computer simulation. Cayley bipolar fuzzy graph theory was proposed by Alshehri et al. [1] to solve real-time system modeling problems in which the level of information inherent in the system varies with different levels of precision. In 2014, Mesiarová et al. [2] expanded bipolar fuzzy sets to m-polar fuzzy sets, promoting the development of bipolar fuzzy theory. In 2016, Zhou et al. [3] discussed nonlinear optimization with bipolar fuzzy relation equation constraints, further advancing the field. Hanying et al., in turn, successfully applied the YinYang bipolar fuzzy cognitive Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) method to the diagnosis of bipolar disorder.

FCM is a fuzzy clustering algorithm based on an objective function, proposed by Dunn and extended by Bezdek [4]. It has been widely used in image segmentation, automatic speech recognition, fault diagnosis and customer classification. In the traditional fuzzy C-means clustering algorithm, the eigenvalues of the clustering objects are by default exact, but characteristic information has the essential attributes of fuzziness and of bipolarity or multi-polarity [5]. In this paper, to solve this problem of FCM clustering in the analysis of multi-polar fuzzy characteristic information, an improved FCM algorithm based on multi-polar fuzzy information entropy is proposed.

2. MULTI-POLAR FUZZY SET THEORY

2.1. Fuzzy Set Theory

The fuzzy set was proposed by Zadeh in 1965 [6]; it provides a formalism for dealing with imprecise information.

Definition 1.

Let $Z$ be a set of elements, with $z$ denoting a generic element, namely $Z=\{z\}$; this set is called the domain. A fuzzy subset $A$ of $Z$ is represented by a membership function $u_A(z)$ whose value lies in $[0,1]$:

$$A=\{(z,\,u_A(z)) \mid z \in Z\} \tag{1}$$

where $0 \le u_A(z) \le 1$ extends the characteristic function of classical set theory to fuzzy sets. The mapping $u_A: Z \to [0,1]$ represents a fuzzy set, or fuzzy subset, of $Z$; the mapping (function) $u_A(\cdot)$, or $A(\cdot)$ for short, is the membership function of the fuzzy set $A$.
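For readers who prefer code, here is a minimal illustration of Definition 1 (not from the paper; the domain and membership values are hypothetical): a discrete fuzzy set stored as a table of membership degrees.

```python
# Minimal illustration (hypothetical values): a fuzzy set "heavy truck"
# over a discrete domain of gross weights, represented by its
# membership function u_A: Z -> [0, 1].
heavy_truck = {
    10.0: 0.0,   # 10 t: clearly not heavy
    25.0: 0.4,   # 25 t: somewhat heavy
    40.0: 0.9,   # 40 t: almost certainly heavy
    55.0: 1.0,   # 55 t: fully a member of the set
}

def membership(z: float) -> float:
    """Membership degree u_A(z); 0.0 for elements outside the table."""
    return heavy_truck.get(z, 0.0)

assert 0.0 <= membership(40.0) <= 1.0
```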

2.2. Bipolar Fuzzy Set Theory

An extension of the fuzzy set, called the bipolar fuzzy set, was first proposed by Zhang [7] in 1994. After more than 20 years of development, the bipolar fuzzy set matured and was applied to the clinical diagnosis of mental health diseases by the World Health Organization [8].

Definition 2.

Zhang [7] defined a bipolar fuzzy set as a pair $(u^+, u^-)$, where $u^+: X \to [0,1]$ and $u^-: X \to [-1,0]$ are arbitrary mappings. Let $B=(u^+,u^-)$ be a bipolar fuzzy set on $X$, in which $u^+$ represents the membership degree with which an element $z$ possesses some property with respect to the bipolar fuzzy set $B$, and $u^-$ represents the membership degree with which $z$ possesses the opposite property. The set of all bipolar fuzzy sets on $X$ is denoted by $BF(X)$.

Chinese medicine distinguishes Yin and Yang, and astronomy divides time into day and night; judgments about things likewise carry positive and negative comments. In fact, a wide variety of human decision-making is based on double-sided, bipolar judgmental thinking with a positive side and a negative side. In recent years, substantial development of bipolar fuzzy theory has been reported by many scholars [3,8–13].

2.3. Multi-Polar Fuzzy Set

The bipolar fuzzy set is a special case of multi-polar fuzzy sets, and most multi-polar problems are more comprehensive than bipolar ones. Modeling of real-world problems often involves multiple agents, attributes, objects and indexes. If a problem is evaluated only from its positive and negative sides, much useful information is lost; if more information is drawn from its multi-polar feature information, the description becomes more faithful.

Definition 3.

Let $A=\{\langle x_i, u_A(x_i)\rangle \mid x_i \in X\}$ and $B=\{\langle x_i, u_B(x_i)\rangle \mid x_i \in X\}$ be two multi-polar fuzzy sets on $X=\{x_1,x_2,\ldots,x_n\}$, where each $u_A(x_i), u_B(x_i) \in [0,1]^m$ is an $m$-dimensional membership vector. The Euclidean distance of the two multi-polar fuzzy sets is

$$D = d(A,B) = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \big\lVert u_A(x_i)-u_B(x_i) \big\rVert^{2}} \tag{2}$$
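As a sketch, Equation (2) can be computed as follows, under the reading that a multi-polar fuzzy set over $n$ elements is stored as an $n \times m$ array of membership vectors (the function name and sample values are ours):

```python
import numpy as np

def mpolar_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Normalized Euclidean distance between two multi-polar fuzzy sets
    (Equation (2)), under the reading that A and B are n x m arrays:
    one m-dimensional membership vector per element x_i.
    """
    assert A.shape == B.shape
    n = A.shape[0]
    return float(np.sqrt(np.sum((A - B) ** 2) / n))

# Hypothetical 3-polar fuzzy sets over n = 2 elements.
A = np.array([[0.49, 0.46, 0.51], [0.45, 0.42, 0.59]])
B = np.array([[0.50, 0.40, 0.54], [0.40, 0.49, 0.60]])
print(mpolar_distance(A, B))  # small value: the sets are similar
```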
In graph-theoretic terms, let $A: V \to [0,1]^m$ be an $m$-polar fuzzy set on a vertex set $V$ and $B: E \to [0,1]^m$ an $m$-polar fuzzy set on an edge set $E \subseteq V \times V$ satisfying symmetry, that is, $\langle x,y\rangle \in E \Rightarrow \langle y,x\rangle \in E$. If every $\langle x,y\rangle \in E$ satisfies $B(\langle x,y\rangle) \le \inf\{A(x), A(y)\}$, then the pair $(A,B)$ is an $m$-polar fuzzy graph with base set $(V,E)$ [3], where $A$ is the $m$-polar fuzzy vertex set on $V$ and $B$ is the $m$-polar fuzzy edge set on $E$.

Let $G=(A,B)$ be an $m$-polar fuzzy graph with base set $(V,E)$. If $B(\langle x,y\rangle)=B(\langle y,x\rangle)$ for all $(x,y) \in E$ and $B(\langle x,x\rangle)=\mathbf{0}$ (the minimum element of $[0,1]^m$) for all $x \in V$, then $G$ is a simple $m$-polar fuzzy graph [14].

Let $G=(A,B)$ be a multi-polar fuzzy graph with base set $(V,E)$. If every $\langle x,y\rangle \in E$ satisfies $B(\langle x,y\rangle)=\inf\{A(x),A(y)\}$, then $G$ is a strong multi-polar fuzzy graph. The complement of $G$ is the multi-polar fuzzy graph $\bar G=(A,\bar B)$ on the same base set $(V,E)$, where $\bar B: E \to [0,1]^m$ is defined, for $\langle x,y\rangle \in E$ and each $i \le m$, by

$$p_i \circ \bar B(\langle x,y\rangle)=\begin{cases}0, & \text{if } p_i \circ B(\langle x,y\rangle)>0\\ \inf\{p_i \circ A(x),\, p_i \circ A(y)\}, & \text{if } p_i \circ B(\langle x,y\rangle)=0\end{cases} \tag{3}$$

2.4. Multi-Polar Fuzzy Entropy

Fuzzy entropy is no longer determined by a single given value; it comes from a fuzzy-valued function that is closer to the real object, and therefore it has a better fitting effect. Bipolar fuzzy entropy can be defined as follows:

Definition 4.

Let $X=\{x_1,x_2,\ldots,x_n\}$. A bipolar fuzzy set on $X$ can be expressed as $A=\{\langle x_i, u_A^P(x_i), u_A^N(x_i)\rangle \mid x_i \in X\}$, $i=1,2,\ldots,n$, where $u_A^P(x_i) \in [0,1]$ and $u_A^N(x_i) \in [0,1]$ denote the positive-pole and negative-pole membership of $x_i$ in $A$, respectively.

Definition 5.

Let $X=\{x_1,x_2,\ldots,x_n\}$. A multi-polar fuzzy set on $X$ can be expressed as $A=\{\langle x_i, u_A^1(x_i), u_A^2(x_i), \ldots, u_A^m(x_i)\rangle \mid x_i \in X\}$, $i=1,2,\ldots,n$, where $u_A^j(x_i) \in [0,1]$, $j=1,\ldots,m$, denotes the degree to which $x_i$ belongs to $A$ on the $j$-th pole.

Definition 6.

Let $X=\{x_1,x_2,\ldots,x_n\}$ be a nonempty set; the bipolar fuzzy entropy $E(A)$ of a bipolar fuzzy set $A$ is

$$E(A)=\frac{1}{2n}\sum_{i=1}^{n}\big(\pi_A(x_i)+\theta_A(x_i)\big) \tag{4}$$

where $\pi_A(x_i)=1-u_A^P(x_i)-u_A^N(x_i)$ is the uncertainty of $x_i$ in $A$ and $\theta_A(x_i)=1-\lvert u_A^P(x_i)-u_A^N(x_i)\rvert$ is the fuzziness of $x_i$ in $A$. Let $X=\{x_1,x_2,\ldots,x_n\}$ constitute the evaluation criteria and $C=\{c_1,c_2,\ldots,c_m\}$ the group of poles over which the entropy is condensed.
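A small sketch of Equation (4) as reconstructed above (combining $\pi$ and $\theta$ by a sum is our reading of the extracted formula; the sample memberships are hypothetical):

```python
import numpy as np

def bipolar_entropy(u_pos: np.ndarray, u_neg: np.ndarray) -> float:
    """Bipolar fuzzy entropy E(A) of Equation (4), as reconstructed above:
    the average of the per-element uncertainty pi and fuzziness theta.
    u_pos, u_neg are the positive/negative membership vectors in [0, 1].
    """
    pi = 1.0 - u_pos - u_neg             # uncertainty of x_i in A
    theta = 1.0 - np.abs(u_pos - u_neg)  # fuzziness of x_i in A
    return float(np.sum(pi + theta) / (2 * len(u_pos)))

# Hypothetical bipolar memberships for n = 4 elements.
u_pos = np.array([0.6, 0.5, 0.8, 0.3])
u_neg = np.array([0.3, 0.5, 0.1, 0.4])
print(bipolar_entropy(u_pos, u_neg))
```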

Definition 7.

Let $X=\{x_1,x_2,\ldots,x_n\}$ be a nonempty set, and let $E(A)$ be the multi-polar fuzzy entropy of the set. Write the multi-polar fuzzy set as

$$A=\left\{\frac{\langle c_{11}, c_{12}, \ldots, c_{1m}\rangle}{x_1}, \frac{\langle c_{21}, c_{22}, \ldots, c_{2m}\rangle}{x_2}, \ldots, \frac{\langle c_{n1}, c_{n2}, \ldots, c_{nm}\rangle}{x_n}\right\} \tag{5}$$

Then the multi-polar fuzzy entropy of the $j$-th pole is

$$E(c_j)=\frac{1}{n}\sum_{i=1}^{n} \frac{\langle 0,0,\ldots,c_{ij},\ldots,0,0\rangle}{x_i} \tag{6}$$

In the formula, $j=1,2,\ldots,m$.

3. FCM CLUSTERING BASED ON MULTI-POLAR FUZZY ENTROPY

3.1. Limitations of FCM

Clustering analysis is the process of grouping a collection of elements into multiple classes, based on high similarity among elements of the same class and large differences between elements of different classes. Traditional hard clustering is a crisp partition with an either/or quality, but in practical applications classes often shade into one another with no definite boundary. Ruspini applied fuzzy set theory to cluster analysis in 1969, combining hard-partition clustering with fuzzy membership [15]. On this basis, Dunn [16] proposed the FCM algorithm in 1974. FCM clustering remedies the defect of hard clustering: it is essentially a soft clustering algorithm that uses membership degrees to describe the degree to which each element belongs to each cluster. Fuzzy C-means clustering has the advantages of low complexity and easy implementation, and has been widely used in many fields such as data classification, image segmentation and cell analysis.

The FCM algorithm is thus a soft clustering algorithm that uses membership degrees to express the degree to which each sample in $X=\{x_1,x_2,\ldots,x_n\}$ belongs to each class center in $C_{center}=\{c_1,c_2,\ldots,c_c\}$ [17].

The FCM algorithm is a flexible partitioning clustering algorithm. Its basic classification principle is to maximize the similarity of elements within the same class and minimize the similarity of elements across different classes.

An important concept in FCM is the membership function, which represents the degree to which an object $x_i$ belongs to a set $A$, generally written $u_A(x_i)$. Its argument ranges over all objects under consideration, and its value lies in $[0,1]$: $u_A(x_i)=1$ means $x_i$ is fully a member of $A$; $u_A(x_i)=0$ means $x_i$ is not a member of $A$ at all; $u_A(x_i) \in (0,1)$ means $x_i$ belongs partially to $A$. A membership function defined on the space $X=\{x_i \mid x_i \in X\}$ determines a fuzzy set $S$.

FCM partitions the sample data set $X=\{x_1,x_2,\ldots,x_n\}$, composed of the sample data of $n$ research objects, into $c$ categories and finds the cluster center $c_i$ of each group, making samples of different categories as dissimilar as possible while data in the same category are as similar as possible. FCM adopts soft-partition membership degrees within the interval $[0,1]$ to determine the degree to which each sample belongs to each category. $U=\{u_{ij} \mid i=1,2,\ldots,c;\ j=1,2,\ldots,n\}$, where $u_{ij}$ is the membership degree of sample $x_j$ to category $c_i$; the membership matrix satisfies

$$\sum_{i=1}^{c} u_{ij}=1,\ 1\le j\le n; \qquad \sum_{j=1}^{n} u_{ij}>0,\ 1\le i\le c; \qquad u_{ij}\ge 0,\ 1\le i\le c,\ 1\le j\le n \tag{7}$$

The general form of the FCM objective function is

$$\mathrm{Obj}(U, c_1, c_2, \ldots, c_c)=\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\, d_{ij}^{2} \tag{8}$$

where $0\le u_{ij}\le 1$, $c_i$ is the center of the $i$-th class, $d_{ij}=\lVert c_i-x_j\rVert$ is the distance from the center $c_i$ of the $i$-th class to the $j$-th sample $x_j$, and $m$ is the weighted exponent.

Construct the Lagrange function of the objective function; the necessary conditions for reaching the minimum value are obtained from

$$\bar{\mathrm{Obj}}(U, c_1, \ldots, c_c, \lambda_1, \ldots, \lambda_n)=\sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\, d_{ij}^{2}+\sum_{j=1}^{n} \lambda_j\left(\sum_{i=1}^{c} u_{ij}-1\right) \tag{9}$$

where $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the Lagrange multipliers. Differentiating Equation (9) and setting the derivatives to zero, the necessary conditions for the minimum are

$$c_i=\frac{\sum_{j=1}^{n} u_{ij}^{m}\, x_j}{\sum_{j=1}^{n} u_{ij}^{m}} \tag{10}$$

$$u_{ij}=\frac{1}{\sum_{k=1}^{c}\left(\dfrac{d_{ij}}{d_{kj}}\right)^{\frac{2}{m-1}}} \tag{11}$$

The goal of the FCM clustering algorithm is to find the optimal cluster centers and membership matrix, and then use them to classify the samples.
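For reference, here is a compact sketch of the classical FCM iteration of Equations (8)-(11), with the random membership initialization whose drawbacks are discussed next (this is the textbook algorithm, not the paper's improved version; names are ours):

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Plain FCM per Equations (8)-(11): alternate the center update (10)
    and the membership update (11) until the objective stops improving.
    X: (n, s) data matrix; c: number of clusters; m: weighting exponent.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n).T    # (c, n), columns sum to 1
    prev_obj = np.inf
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)   # Eq. (10)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard zero distances
        obj = np.sum(Um * d ** 2)              # objective, Eq. (8)
        # Eq. (11): u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)),
                         axis=1)
        if prev_obj - obj < eps:
            break
        prev_obj = obj
    return U, centers
```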

Traditional FCM clustering has three defects. Firstly, the characteristic values of the clustering samples are all assumed to be exact, which is often difficult to guarantee in practical problems; secondly, the algorithm is not stable; and finally, the classification result is strongly affected by the initial cluster centers. To address FCM's high sensitivity to cluster-center initialization, Yuan et al. [18] proposed a chemistry-based heuristic that successfully obtains optimal cluster centers. However, practical problems tend to be multi-polar and require multi-polar fuzzy theory to optimize the FCM algorithm.

3.2. Calculate the Eigenvalue Matrix

Fuzzy entropy describes the degree of fuzziness and uncertainty of a fuzzy set. The larger the fuzzy entropy of a feature, the smaller its weight should be, and vice versa, since the feature weights of the samples must reflect the relative importance of each feature. To avoid subjective arbitrariness in the weighting process, this work uses multi-polar fuzzy entropy to determine the weight $w_j$ of feature $g_j$, as follows:

$$E(g_j)=\frac{1}{n}\sum_{i=1}^{n} \frac{\langle 0,0,\ldots,g_{ij},\ldots,0,0\rangle}{x_i} \tag{12}$$

where $j=1,2,\ldots,s$.

According to the literature [19], formula (12) can be written as

$$E(g_j)=\frac{1}{n}\sum_{i=1}^{n} \mathrm{Mean}\,\frac{\langle 0,0,\ldots,g_{ij},\ldots,0,0\rangle}{x_i} \tag{13}$$

$$w_j=\frac{1-E(g_j)}{\sum_{j=1}^{s}\big(1-E(g_j)\big)} \tag{14}$$

where $E(g_j)$ reflects the fuzziness and uncertainty of the eigenvalues of the sample set $Y$ on feature $g_j$; the larger the weight $w_j$, the greater the dependence of the clustering result on feature $g_j$.

Weighting the eigenvalue matrix $P$ yields the weighted eigenvalue matrix $P=(p_{ij})_{n\times s}$, where

$$p_{ij}=w_j\, P_{ij} \tag{15}$$
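The weighting step of Equations (13)-(15) can be sketched as follows. Since the "Mean" operator in Equation (13) is not fully legible in the source, the default entropy below is a stand-in per-feature fuzziness measure; the reader should substitute the intended Equation (13). Function names are ours:

```python
import numpy as np

def feature_weights(P, entropy=None):
    """Entropy-based feature weights per Equations (13)-(14).
    P: (n, s, m) array of m-polar membership vectors for n samples and
    s features. `entropy` maps the (n, m) values of one feature to a
    scalar in [0, 1]; the default is a stand-in fuzziness measure,
    since the extracted form of Eq. (13) is ambiguous.
    """
    if entropy is None:
        entropy = lambda G: float(np.mean(1.0 - np.abs(2.0 * G - 1.0)))
    E = np.array([entropy(P[:, j, :]) for j in range(P.shape[1])])
    return (1.0 - E) / np.sum(1.0 - E)          # Eq. (14)

def weight_matrix(P, w):
    """Weighted eigenvalue matrix of Eq. (15): scale feature j by w_j."""
    return P * w[None, :, None]
```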

3.3. Identify Cluster Centers

FCM clustering has been widely used since its inception in pattern recognition, fault diagnosis and image processing. Although many samples are preprocessed by filtering and other algorithms, they still contain considerable noise. Since FCM is sensitive to the initial cluster centers, different initializations can produce quite different clustering results, especially when FCM clustering is used to classify such samples [18]. To effectively avoid noisy regions, the selection of initial cluster centers can be restricted by density, taking the $c$ points farthest from each other within a high-density region.

Let $p_i$ be the position of a feature vector and define $\rho_i$ as its area density. Using Definition 3, calculate the $n$ Euclidean distances centered on $p_i$, $d(p_i,p_1), d(p_i,p_2), \ldots, d(p_i,p_n)$, and rearrange them from small to large:

$$d(p_i,p_{(1)}) \le d(p_i,p_{(2)}) \le \cdots \le d(p_i,p_{(n)}) \tag{16}$$

In the formula, the subscript $(\cdot)$ denotes the rearrangement of subscripts satisfying the above condition. Since $d(p_i,p_i)=0$, after reordering $p_{(1)}$ is $p_i$ itself.

The area radius of $p_i$ is the minimum Euclidean distance within which $C$ feature vectors, including $p_i$, are contained; it is recorded as $R(p_i)$ and given by

$$R(p_i)=d(p_i,p_{(C)}) \tag{17}$$

The region where $p_i$ is located thus contains the $C$ feature vectors $p_{(1)}, p_{(2)}, \ldots, p_{(C)}$, and the area density parameter $\rho_i$ of $p_i$ is

$$\rho_i=\frac{1}{R(p_i)}=\frac{1}{d(p_i,p_{(C)})} \tag{18}$$

In the formula, $0<C<n$, where $C$ is an integer whose specific value is chosen according to the data. The greater $\rho_i$ is, the larger the regional density of $p_i$; conversely, the smaller $\rho_i$ is, the smaller the regional density of $p_i$.

Calculate the regional density parameters of the feature vectors $p_1, p_2, \ldots, p_n$ using Equations (17) and (18). After comparison, the vector of the region with the highest density is selected as the first cluster center $Z_1$, yielding the feature vector set of its high-density region, $H=\{p_{(1)}, p_{(2)}, \ldots, p_{(C)}\}$. Then the feature vector in $H$ farthest from $Z_1$ is taken as the second cluster center $Z_2$. Next, recalculate the distances of all feature vectors in $H$ to $Z_1$ and $Z_2$, and take the vector in $H$ satisfying

$$\max_r\big(\min(d(p_r,Z_1),\, d(p_r,Z_2))\big), \quad r=1,2,\ldots,C \tag{19}$$

as the third cluster center $Z_3$. In general, calculate the distances of all feature vectors in $H$ to $Z_1, Z_2, \ldots, Z_{k-1}$ and take the vector in $H$ satisfying

$$\max_r\big(\min(d(p_r,Z_1),\, d(p_r,Z_2),\, \ldots,\, d(p_r,Z_{k-1}))\big), \quad r=1,2,\ldots,C \tag{20}$$

as the $k$-th cluster center $Z_k$ $(k=1,2,\ldots,c)$. In this way the cluster centers $Z=(Z_1,Z_2,\ldots,Z_c)$ are obtained, where the $c$ cluster centers are taken from the $C$ feature vectors of the set $H$. When determining the initial cluster centers, excessive concentration should be avoided, and the feature vectors in $H$ should be chosen so as to avoid noise points. In this work we take $C \in \left(\frac{n+c}{2},\, n\right)$, an integer.
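A sketch of the density-based initialization of this subsection (Equations (16)-(20)); variable names are ours, and ties are broken by NumPy's argmax:

```python
import numpy as np

def initial_centers(P, c, C):
    """Density-based initialization of Section 3.3 (Equations (16)-(20)).
    P: (n, d) weighted feature vectors; c: number of clusters;
    C: size of the high-density region, an integer in ((n + c) / 2, n).
    """
    # Pairwise Euclidean distances, Eq. (16) after row-wise sorting.
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    R = np.sort(D, axis=1)[:, C - 1]        # area radius R(p_i), Eq. (17)
    rho = 1.0 / np.fmax(R, 1e-12)           # area density rho_i, Eq. (18)
    dense = int(np.argmax(rho))             # densest vector -> Z_1
    H = np.argsort(D[dense])[:C]            # its high-density region H
    centers = [dense]
    while len(centers) < c:                 # farthest-point rule, (19)-(20)
        dmin = np.min(D[np.ix_(H, centers)], axis=1)
        centers.append(int(H[np.argmax(dmin)]))
    return P[centers]
```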

3.4. Update Cluster Center and Membership Matrix

In the FCM clustering algorithm based on multi-polar fuzzy entropy, it is necessary to calculate the corresponding membership matrix by using the cluster center.

(1) If there exists $h$, $1\le h\le c$, such that $d(p_i,Z_h)=0$, then let

$$u_{ik}=\begin{cases}1, & k=h\\ 0, & k\ne h\end{cases} \tag{21}$$

(2) If $d(p_i,Z_h)>0$ for all $h$, $1\le h\le c$, then let

$$u_{ik}=\frac{1}{\sum_{h=1}^{c}\left(\dfrac{d(p_i,Z_k)}{d(p_i,Z_h)}\right)^{\frac{2}{m-1}}} \tag{22}$$

In the formula, m is the fuzziness parameter.

Update the cluster centers with the membership matrix; the $k$-th cluster center is recorded as

$$Z_k=\{z_{k1}, z_{k2}, \ldots, z_{ks}\} \tag{23}$$

At this point, the squared generalized Euclidean distance of the sample set $Y$ from the cluster centers $Z$ is

$$J(U,Z)=\sum_{i=1}^{n}\sum_{k=1}^{c}(u_{ik})^{m}\big(d(p_i,Z_k)\big)^{2} \tag{24}$$
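A sketch of the membership update of Equations (21)-(22) and the objective of Equation (24). The even split among several zero-distance centers in the Equation (21) branch is our tie-breaking choice; the source covers a single matching center:

```python
import numpy as np

def update_membership(P, Z, m=2.0):
    """Membership update of Equations (21)-(22) for centers Z (c, d).
    Returns (U, d) with u_ik = membership of sample i to cluster k."""
    d = np.linalg.norm(P[:, None, :] - Z[None, :, :], axis=2)  # (n, c)
    U = np.zeros_like(d)
    zero = d < 1e-12
    hit = zero.any(axis=1)
    # Eq. (21): membership 1 at a coinciding center (split if several).
    U[hit] = zero[hit] / zero[hit].sum(axis=1, keepdims=True)
    # Eq. (22): u_ik = 1 / sum_h (d(p_i,Z_k) / d(p_i,Z_h))^(2/(m-1)).
    r = (d[~hit][:, :, None] / d[~hit][:, None, :]) ** (2.0 / (m - 1.0))
    U[~hit] = 1.0 / r.sum(axis=2)
    return U, d

def objective(U, d, m=2.0):
    """Objective J(U, Z) of Equation (24)."""
    return float(np.sum((U ** m) * d ** 2))
```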

3.5. MPFCM Algorithm Steps

The specific steps of the multi-polar fuzzy entropy optimization FCM clustering algorithm are as follows:

Step 1: Input the sample eigenvalue matrix $P$, the number of clusters $c$, the fuzziness parameter $m$, the stopping threshold $\epsilon$ and the maximum number of iterations $\delta$.

Step 2: Calculate the feature weights by using Equations (13) and (14), and then calculate the weighted eigenvalue matrix P using Equation (15).

Step 3: Let $t=0$; determine the initial cluster centers $Z^{(0)}$ as in Section 3.3 and calculate the membership matrix $U^{(0)}$ using Equations (21) and (22).

Step 4: Determine whether $t$ is less than $\delta$. If yes, continue with Step 5; if not, go to Step 7.

Step 5: Let $t=t+1$; update the cluster centers $Z^{(t)}$ using Equation (10), and then update the membership matrix $U^{(t)}$ using Equations (21) and (22).

Step 6: Use Equation (24) to calculate $J(U^{(t)},Z^{(t)})$; if $J(U^{(t-1)},Z^{(t-1)})-J(U^{(t)},Z^{(t)})<\epsilon$, continue with Step 7; if not, go to Step 4.

Step 7: Output the membership matrix U and the cluster center Z.
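Putting the steps together, a minimal driver for the MPFCM procedure might look as follows. It reuses the sketch functions feature_weights, weight_matrix, initial_centers, update_membership and objective defined above (all our names), and the choice C = (n + c) // 2 + 1 is one admissible value in the interval prescribed in Section 3.3:

```python
import numpy as np

def mpfcm(P_raw, c, m=2.0, eps=1e-5, delta=100):
    """Sketch of the MPFCM steps, reusing the helper sketches above.
    P_raw: (n, s, poles) sample eigenvalue matrix; c: clusters;
    eps: stopping threshold; delta: maximum number of iterations.
    """
    n = P_raw.shape[0]
    w = feature_weights(P_raw)                   # Step 2, Eqs. (13)-(14)
    P = weight_matrix(P_raw, w).reshape(n, -1)   # Eq. (15), flattened
    C = (n + c) // 2 + 1                         # region size in ((n+c)/2, n)
    Z = initial_centers(P, c, C)                 # Step 3, Section 3.3
    U, d = update_membership(P, Z, m)
    J_prev = objective(U, d, m)
    for _ in range(delta):                       # Steps 4-6
        Um = U ** m
        Z = (Um.T @ P) / Um.sum(axis=0)[:, None] # center update, Eq. (10)
        U, d = update_membership(P, Z, m)        # Eqs. (21)-(22)
        J = objective(U, d, m)                   # Eq. (24)
        if J_prev - J < eps:
            break
        J_prev = J
    return U, Z                                  # Step 7
```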

4. EXPERIMENTAL VERIFICATION

FCM clustering has been successfully applied to the diagnosis of diseases, making diagnosis more accurate [20]. Here, we apply the optimized algorithm to dynamic truck scales for application analysis.

Expressways play an important role in transportation, but overloading of freight vehicles occurs from time to time, seriously affecting road service life and transportation safety, and traditional inspection is far from meeting the demand. Nowadays, expressways have begun to operate as networked systems. Based on the analysis of the characteristics of truck monitoring data, we realize prejudgment of truck overload cheating and improve the accuracy of overload judgment.

High-speed jumping and towing are two weight-reduction tricks used when vehicles pass over the scale. Jumping the scale at high speed shortens the reaction time, so that less effective data is collected and the reading is inaccurate; in towing, the rear wheels brake while the head and tail of the vehicle are weighed separately, reducing the recorded weight. Both cheating methods are dynamic and fast, and they are the most difficult to define and prevent.

There are five passing vehicles to be inspected, $X=\{x,y,z,u,v\}$. Five system devices photograph key points of the vehicles from different angles while they are weighed on the scale, giving the image data $G=\{g_1,g_2,g_3,g_4\}$. Four features, namely loading capacity, S-type alignment, acceleration and deceleration, and location, are evaluated against the three classes of high-speed jumping, normal passage and towing, and multi-polar fuzzy theory is used to assign membership degrees. The membership degrees for feature $g_1$ are as follows: $A(x)=\langle 0.49,0.46,0.51\rangle$, $A(y)=\langle 0.45,0.42,0.59\rangle$, $A(z)=\langle 0.50,0.40,0.54\rangle$, $A(u)=\langle 0.40,0.49,0.60\rangle$ and $A(v)=\langle 0.51,0.52,0.50\rangle$.

Step 1: Similarly, the 3-polar fuzzy sets $A: X \to [0,1]^3$ of the 5 vehicles on all four features are described as follows:

$A(g_1)=\left\{\frac{\langle 0.49,0.46,0.51\rangle}{x}, \frac{\langle 0.45,0.42,0.59\rangle}{y}, \frac{\langle 0.50,0.40,0.54\rangle}{z}, \frac{\langle 0.40,0.49,0.60\rangle}{u}, \frac{\langle 0.51,0.52,0.50\rangle}{v}\right\}$

$A(g_2)=\left\{\frac{\langle 0.46,0.51,0.48\rangle}{x}, \frac{\langle 0.43,0.49,0.53\rangle}{y}, \frac{\langle 0.45,0.51,0.44\rangle}{z}, \frac{\langle 0.49,0.51,0.48\rangle}{u}, \frac{\langle 0.42,0.45,0.53\rangle}{v}\right\}$

$A(g_3)=\left\{\frac{\langle 0.52,0.47,0.49\rangle}{x}, \frac{\langle 0.47,0.45,0.51\rangle}{y}, \frac{\langle 0.45,0.54,0.50\rangle}{z}, \frac{\langle 0.46,0.53,0.47\rangle}{u}, \frac{\langle 0.41,0.48,0.52\rangle}{v}\right\}$

$A(g_4)=\left\{\frac{\langle 0.51,0.46,0.50\rangle}{x}, \frac{\langle 0.52,0.47,0.51\rangle}{y}, \frac{\langle 0.58,0.47,0.50\rangle}{z}, \frac{\langle 0.55,0.51,0.46\rangle}{u}, \frac{\langle 0.57,0.52,0.43\rangle}{v}\right\}$
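To connect the data to Equation (15): the weighted matrix of Table 2 is obtained from Table 1 by scaling each feature with its weight. The weights below are inferred from the element-wise ratio of Table 2 to Table 1 (approximately 0.2511, 0.2443, 0.2474 and 0.2572, summing to 1 as Equation (14) requires); the paper itself does not print them.

```python
import numpy as np

# 3-polar memberships of the 5 vehicles (rows x, y, z, u, v) on the
# 4 features g1..g4, as listed in Table 1 / Step 1.
P_raw = np.array([
    [[0.49,0.46,0.51],[0.46,0.51,0.48],[0.52,0.47,0.49],[0.51,0.46,0.50]],
    [[0.45,0.42,0.59],[0.43,0.49,0.53],[0.47,0.45,0.51],[0.52,0.47,0.51]],
    [[0.50,0.40,0.54],[0.45,0.51,0.44],[0.45,0.54,0.50],[0.58,0.47,0.50]],
    [[0.40,0.49,0.60],[0.49,0.51,0.48],[0.46,0.53,0.47],[0.55,0.51,0.46]],
    [[0.51,0.52,0.50],[0.42,0.45,0.53],[0.41,0.48,0.52],[0.57,0.52,0.43]],
])

# Feature weights inferred from the ratio Table 2 / Table 1 (they are
# not printed in the paper); note they sum to 1 per Eq. (14).
w = np.array([0.2511, 0.2443, 0.2474, 0.2572])

P_weighted = P_raw * w[None, :, None]            # Eq. (15)
print(np.round(P_weighted[0, 0], 4))             # ~ [0.1230 0.1155 0.1281]
```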

Step 2: The sample eigenvalue matrix is shown in Table 1, and the weighted sample eigenvalue matrix $P$ obtained from it is shown in Table 2. The initial cluster centers calculated from the weighted sample feature matrix are given in Table 3, and the corresponding membership matrix $U^{(0)}$ in Table 4. The sample feature cluster centers after one iteration are shown in Table 5 and the membership matrix $U^{(1)}$ in Table 6; the cluster centers after two iterations are shown in Table 7 and the membership matrix $U^{(2)}$ in Table 8.

With parameters $c=3$, $m=2$, $\epsilon=10^{-5}$, $\delta=100$, we calculate each feature weight from Table 1 and obtain the weighted sample eigenvalue matrix shown in Table 2. From Table 2, the method proposed in this work determines the initial cluster centers, whose values are $Z_1^{(0)}$, $Z_2^{(0)}$ and $Z_3^{(0)}$, respectively. Using the membership calculation formula, we compute the membership matrix corresponding to the initial cluster centers. The cluster centers obtained after the iterations are $Z_1^{(2)}$, $Z_2^{(2)}$ and $Z_3^{(2)}$, as shown in Table 7. After two iterations, $J(U^{(1)},Z^{(1)})-J(U^{(2)},Z^{(2)})\le\epsilon$. The membership iteration results are shown in Table 8: according to the principle of maximum membership, samples $x$, $y$ and $u$ fall into the first class, $v$ into the second class, and $z$ into the third class.

g1 g2 g3 g4
x <0.49,0.46,0.51> <0.46,0.51,0.48> <0.52,0.47,0.49> <0.51,0.46,0.50>
y <0.45,0.42,0.59> <0.43,0.49,0.53> <0.47,0.45,0.51> <0.52,0.47,0.51>
z <0.50,0.40,0.54> <0.45,0.51,0.44> <0.45,0.54,0.50> <0.58,0.47,0.50>
u <0.40,0.49,0.60> <0.49,0.51,0.48> <0.46,0.53,0.47> <0.55,0.51,0.46>
v <0.51,0.52,0.50> <0.42,0.45,0.53> <0.41,0.48,0.52> <0.57,0.52,0.43>
Table 1

Sample eigenvalue matrix.

g1 g2 g3 g4
x <0.1230,0.1155,0.1281> <0.1124,0.1246,0.1173> <0.1286,0.1163,0.1212> <0.1312,0.1183,0.1286>
y <0.1130,0.1055,0.1482> <0.1050,0.1197,0.1295> <0.1163,0.1113,0.1262> <0.1338,0.1209,0.1312>
z <0.1256,0.1004,0.1356> <0.1099,0.1246,0.1075> <0.1113,0.1336,0.1237> <0.1492,0.1209,0.1286>
u <0.1004,0.1230,0.1507> <0.1197,0.1246,0.1173> <0.1138,0.1311,0.1163> <0.1415,0.1312,0.1183>
v <0.1281,0.1306,0.1256> <0.1026,0.1099,0.1295> <0.1014,0.1187,0.1286> <0.1466,0.1338,0.1106>
Table 2

Weighted sample eigenvalue matrix.

z1(0) z2(0) z3(0)
<0.1230,0.1155,0.1281> <0.1281,0.1306,0.1256> <0.1256,0.1004,0.1356>
<0.1124,0.1246,0.1173> <0.1026,0.1099,0.1295> <0.1099,0.1246,0.1075>
<0.1286,0.1163,0.1212> <0.1014,0.1187,0.1286> <0.1113,0.1336,0.1237>
<0.1312,0.1183,0.1286> <0.1466,0.1338,0.1106> <0.1492,0.1209,0.1286>
Table 3

Initial cluster center.

x y z u v
Ui1(0) 1 0.6295 0 0.3797 0
Ui2(0) 0 0.1114 0 0.2094 1
Ui3(0) 0 0.2591 1 0.4109 0
Table 4

The membership matrix U(0).

z1(1) z2(1) z3(1)
<0.1183,0.1136,0.1353> <0.1267,0.1300,0.1269> <0.1214,0.1038,0.1383>
<0.1112,0.1233,0.1204> <0.1033,0.1107,0.1290> <0.1110,0.1243,0.1100>
<0.1241,0.1164,0.1220> <0.1021,0.1192,0.1281> <0.1119,0.1320,0.1228>
<0.1328,0.1202,0.1283> <0.1463,0.1335,0.1112> <0.1473,0.1223,0.1273>
Table 5

Cluster center Z(1).

x y z u v
Ui1(1) 0.8598 0.6179 0.0448 0.5039 0.0032
Ui2(1) 0.0475 0.1353 0.0208 0.3301 0.9940
Ui3(1) 0.0927 0.2469 0.9344 0.1660 0.0028
Table 6

The membership matrix U(1).

z1(2) z2(2) z3(2)
<0.1153,0.1134,0.1393> <0.1221,0.1261,0.1318> <0.1176,0.1074,0.1406>
<0.1114,0.1230,0.1210> <0.1059,0.1138,0.1269> <0.1117,0.1238,0.1136>
<0.1212,0.1181,0.1219> <0.1058,0.1201,0.1262> <0.1136,0.1287,0.1222>
<0.1345,0.1218,0.1274> <0.1441,0.1314,0.1146> <0.1473,0.1103,0.1041>
Table 7

Clustering center Z(2).

x y z u v
Ui1(2) 0.8598 0.6313 0.0311 0.5046 0.0116
Ui2(2) 0.0426 0.1335 0.0220 0.3296 0.9866
Ui3(2) 0.0776 0.2352 0.9469 0.1658 0.0018
Table 8

The membership matrix U(2).

Table 9 shows that the iterative speed of the proposed algorithm is superior to that of the other two algorithms. In addition, the magnitude of $J(U^{(0)},Z^{(0)})-J(U^{(n)},Z^{(n)})$ reflects the change of the objective function over the whole iteration process, and hence the effectiveness of initializing the cluster centers based on multi-polar fuzzy entropy.

Method Number of Iterations J(U(0),Z(0)) − J(U(n),Z(n))
Literature [21] algorithm 8 0.8446
Literature [19] algorithm 4 0.7562
Algorithm in this paper 2 0.2635
Table 9

Algorithm comparison.

5. CONCLUSION

This paper proposes a multi-polar fuzzy FCM algorithm for the classification of multi-polar fuzzy features. The algorithm uses the multi-polar fuzzy membership degrees of the samples to calculate feature weights and to obtain new cluster centers. This alleviates the problem of random initialization of cluster centers in the FCM clustering algorithm, which tends to cause slow convergence or even trapping in local minima. The algorithm can effectively improve the recognition accuracy of high-speed jumping and towing cheating on dynamic truck scales, increasing recognition reliability and reducing disputes.

CONFLICTS OF INTEREST

The authors declare that they have no conflicts of interest regarding this work.

ACKNOWLEDGMENTS

This work was supported by the National Natural Science Foundation of China under Grant number 61773220.

We extend our thanks to the Weighing Laboratory of the Suzhou Institute of Metrology for providing data, and we thank all members of the research team who participated in this study.

REFERENCES

