International Journal of Computational Intelligence Systems

Volume 13, Issue 1, 2020, Pages 663 - 671

A Neural Network for Moore–Penrose Inverse of Time-Varying Complex-Valued Matrices

Authors
Yiyuan Chai1, Haojin Li2, Defeng Qiao2, Sitian Qin2,*, Jiqiang Feng1
1Shenzhen Key Laboratory of Advanced Machine Learning and Application, College of Mathematics and Statistics, Shenzhen University, Shenzhen, 518060, China
2Department of Mathematics, Harbin Institute of Technology, Weihai, 264209, China
*Corresponding author. Email: qinsitian@163.com
Received 1 March 2020, Accepted 17 May 2020, Available Online 17 June 2020.
DOI
10.2991/ijcis.d.200527.001
Keywords
Zhang neural network; Moore–Penrose inverse; Finite-time convergence; Noise suppression
Abstract

The Moore–Penrose inverse of a matrix plays a very important role in practical applications. In general, it is not easy to compute the Moore–Penrose inverse of a matrix in real time, especially for a complex-valued matrix in time-varying situations. To solve this problem conveniently, a novel Zhang neural network (ZNN) with a convergence-accelerating time-varying parameter is proposed in this paper, which can compute the Moore–Penrose inverse of a matrix over the complex field in real time. Analysis shows that, with the weighted sign-bi-power (WSBP) activation function, the state solutions of the proposed model achieve super convergence in finite time, and the upper bound of the convergence time is calculated. A related noise-tolerant model, which also possesses the finite-time convergence property, is proved to be more efficient in noise suppression. Finally, numerical simulations illustrate the performance of the proposed models.

Copyright
© 2020 The Authors. Published by Atlantis Press SARL.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).

1. INTRODUCTION

The Moore–Penrose inverse of a matrix is one of the basic problems encountered extensively in scientific and engineering fields, such as pattern recognition [1], optimization [2], and signal processing [3]. In general, it is not easy to compute the Moore–Penrose inverse immediately and accurately in numerical simulation. Because of its wide applications, the calculation of the Moore–Penrose inverse has been studied extensively, and many algorithms for constant matrices have been proposed, such as the continuous matrix square algorithm [4], the Newton iteration method [5], and Greville's recursion [6]. However, in the online control of redundant robots [7], the inverse-kinematics problem requires the Moore–Penrose inverse of a matrix to be computed in real time. Although there are many effective methods for computing the matrix Moore–Penrose inverse, as far as we know, most of the research is based on the real field, and few works generalize this problem to the complex field. In fact, the Moore–Penrose inverse of a complex-valued matrix is also often involved in the abovementioned fields, and it plays a pivotal role in practical applications.

Parallel computing is widely used to solve linear and nonlinear problems [7–16] for its superior performance in complex large-scale online applications. The neural network is considered an effective alternative for scientific computing [17,18] due to its parallel distribution and easy hardware implementation. Some recurrent neural networks [19–21] have been constructed to compute the pseudo-inverse of matrices.

The Zhang neural network (ZNN), a special kind of recurrent neural network, differs from gradient neural networks (GNNs), which use a passive tracking method. The ZNN performs better by involving the time-derivative information of the time-varying coefficients, and it enjoys a fast convergence rate in solving time-varying problems. It is widely used in dynamic problems such as nonlinear dynamic optimization [22] and motion control of redundant robot arms [23]. In 2013, a sign-bi-power activation function that enables the ZNN to converge in finite time was proposed by Li et al. [24]. Then, Shen et al. [25] tried to accelerate the convergence rate of the ZNN by constructing a tunable activation function. In addition, robustness is also an important property because noise interference is unavoidable in practical applications. Thus, inspired by Xiao et al. [26], a noise-tolerant ZNN model which possesses the finite-time convergence property is proposed in this paper.

The ZNN has been proven to possess superior properties in many studies (see [27–29]). Therefore, for the time-varying full-rank matrix Moore–Penrose inversion problem, we choose to construct a ZNN to solve the time-varying inversion problem. The contributions of this paper are listed as follows:

  1. To the best of our knowledge, there is little research on solving the time-varying Moore–Penrose inverse problem over the complex field with a ZNN model. Compared with the existing results [30], we generalize the problem to the complex field and solve the Moore–Penrose inverse in finite time.

  2. A time-varying parameter is utilized in the design formula, which can effectively reduce the convergence time of the model solution.

  3. An improved ZNN model, proved to be more efficient in noise suppression than the traditional ZNN model, is proposed, and the value of its error function converges to zero in finite time.

The remainder of this paper is organized as follows: Some definitions and preliminaries about the generalized inverse and complex analysis are introduced in Section 2. In Section 3, the ZNN model for the right Moore–Penrose inverse of a matrix with a special activation function is constructed, and its Lyapunov stability and finite-time convergence are proved. In addition, an improved ZNN model for the matrix Moore–Penrose inverse, which has the ability to suppress noise, is introduced. It is worth mentioning that the improved ZNN model not only suppresses noise effectively, but its solution also reaches finite-time convergence under the acceleration of a special activation function. In Section 4, numerical examples of time-varying complex-valued problems are given to demonstrate the validity of our results.

2. PROBLEM DESCRIPTION AND PRELIMINARIES

In order to lay the foundation for further discussion, some definitions about the matrix Moore–Penrose inverse are introduced. In this paper, ℂ^{m×n} denotes the m×n-dimensional linear space over the complex field and ℝ^{2n} denotes the 2n-dimensional real vector space. A^H denotes the conjugate transpose of a matrix A.

Definition 1.

[7,31] For a matrix A ∈ ℂ^{m×n}, if X ∈ ℂ^{n×m} satisfies one or more of the following equations:

AXA = A,  XAX = X,  (AX)^H = AX,  (XA)^H = XA,
then X is called the generalized inverse of A. And if X satisfies all of the equations above, then X is called the Moore–Penrose inverse of A, which is denoted by A+.

If rank(A) = min{m,n}, then AA^H (or A^H A) is nonsingular, and the unique Moore–Penrose inverse A+ ∈ ℂ^{n×m} of the matrix A can be written as [30]

A+ := A^H(AA^H)^{-1},   if m < n,
      A^{-1},            if m = n,
      (A^H A)^{-1}A^H,   if m > n,   (1)

where the three expressions from top to bottom represent the right Moore–Penrose inverse, the ordinary inverse, and the left Moore–Penrose inverse, respectively.

From (1), the Moore–Penrose inverse of the matrix A depends on (AA^H)^{-1}. However, calculating (AA^H)^{-1} is difficult in the high-dimensional case, so it is hard to compute the Moore–Penrose inverse of A at high dimension.
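As a quick numerical sanity check of formula (1) (our own illustration, not part of the paper's method), one can form the right Moore–Penrose inverse of a small random full row-rank complex matrix and verify it against an SVD-based pseudoinverse and the four Penrose equations; the sizes and the random matrix are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
# A random complex matrix of this shape has full row rank with probability 1.
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Right Moore–Penrose inverse from the first case of formula (1): A^H (A A^H)^{-1}.
A_plus = A.conj().T @ np.linalg.inv(A @ A.conj().T)

# It agrees with NumPy's SVD-based pseudoinverse ...
assert np.allclose(A_plus, np.linalg.pinv(A))
# ... and satisfies the four Penrose equations of Definition 1:
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
assert np.allclose((A @ A_plus).conj().T, A @ A_plus)
assert np.allclose((A_plus @ A).conj().T, A_plus @ A)
```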

A+ cannot be obtained from (1) when rank(A) < min{m,n}; therefore, in this paper, only full-rank matrices A(t) ∈ ℂ^{m×n} are considered.

Definition 2.

[32] For any Z ∈ ℂ^n, the linear function φ: ℂ^n → ℝ^{2n} is defined as

φ(Z) = [Re(Z)^T, Im(Z)^T]^T ∈ ℝ^{2n}.

From the definition of φ(Z), it is clear that

Proposition 1.

[32] For any λ ∈ ℝ and Z, Z1, Z2 ∈ ℂ^n, it holds that

φ(λZ) = λφ(Z);  φ(Z1 ± Z2) = φ(Z1) ± φ(Z2);  Z^H Z = φ^T(Z) φ(Z).
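These identities are easy to verify numerically; the sketch below (our own, with arbitrary test vectors, and with λ taken real as Proposition 1 requires) checks all three:

```python
import numpy as np

def phi(Z):
    # Definition 2: stack real and imaginary parts, C^n -> R^{2n}.
    return np.concatenate([Z.real, Z.imag])

rng = np.random.default_rng(1)
Z1 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
Z2 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
lam = 0.7  # a real scalar; the first identity needs lambda to be real

assert np.allclose(phi(lam * Z1), lam * phi(Z1))
assert np.allclose(phi(Z1 + Z2), phi(Z1) + phi(Z2))
# Z^H Z is real and equals phi(Z)^T phi(Z):
assert np.isclose((Z1.conj() @ Z1).real, phi(Z1) @ phi(Z1))
```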

3. MODEL DESIGN AND CONVERGENCE ANALYSIS

In this section, we will propose a ZNN model and an improved noise-tolerance ZNN model to solve right time-varying Moore–Penrose inverse of matrices. Their stability and convergence are discussed as well.

3.1. A ZNN Model for Right Time-Varying Moore–Penrose Inverse

In this subsection, we propose a ZNN model to solve the right Moore–Penrose inverse of a time-varying matrix A(t) with full row rank. Hence, by (1), if rank(A(t)) = m, the right Moore–Penrose inverse A+(t) satisfies

A+(t)A(t)A^H(t) = A^H(t). (2)

To solve for A+(t), we first let X(t) = A+(t).

The corresponding error function [8,33] could be defined as

e(t) = X(t)A(t)A^H(t) − A^H(t), (3)

then, motivated by Zhang et al. [34], the following design formula is introduced:

ė(t) = −γ1 e^{α1 t} Φ(e(t)), (4)

where γ1, α1 > 0 and Φ(·): ℂ^{n×m} → ℂ^{n×m} is an activation function array. For any z ∈ ℂ^{n×m}, the ij-th element of the function Φ(z) is defined as

Φ(z)_ij = ϕ(Re(z_ij)) + iϕ(Im(z_ij)), (5)

where ϕ(·): ℝ → ℝ is a monotonically increasing odd function constructed as [35]

ϕ(x) = λ1 sgn^k(x) + λ2 sgn^{1/k}(x) + λ3 x. (6)

Here λ1, λ2, λ3 > 0, k ∈ (0,1), and

sgn^k(x) = |x|^k for x > 0,  0 for x = 0,  −|x|^k for x < 0.
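A direct elementwise implementation of (5) and (6) might look as follows (a sketch of ours; the parameter defaults mirror the values used later in the simulations):

```python
import numpy as np

def sgn_pow(x, k):
    # sgn^k(x) = |x|^k for x > 0, 0 for x = 0, -|x|^k for x < 0.
    return np.sign(x) * np.abs(x) ** k

def phi_wsbp(x, lam1=1.0, lam2=1.0, lam3=1.0, k=0.5):
    # Weighted sign-bi-power function (6): monotonically increasing and odd.
    return lam1 * sgn_pow(x, k) + lam2 * sgn_pow(x, 1.0 / k) + lam3 * x

def Phi(Z, **kw):
    # Complex activation array (5): phi acts on real and imaginary parts separately.
    return phi_wsbp(np.real(Z), **kw) + 1j * phi_wsbp(np.imag(Z), **kw)

# Oddness and monotonicity spot checks on a grid:
x = np.linspace(-2.0, 2.0, 101)
assert np.allclose(phi_wsbp(-x), -phi_wsbp(x))   # odd
assert np.all(np.diff(phi_wsbp(x)) > 0)          # increasing
```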

Combining (3) with (4), the ZNN model is written as follows:

Ẋ(t)A(t)A^H(t) = −γ1 e^{α1 t} Φ(X(t)A(t)A^H(t) − A^H(t)) − X(t)(Ȧ(t)A^H(t) + A(t)Ȧ^H(t)) + Ȧ^H(t),

which can be modified as

Ẋ(t) = Ẋ(t)(I − A(t)A^H(t)) − γ1 e^{α1 t} Φ(X(t)A(t)A^H(t) − A^H(t)) − X(t)(Ȧ(t)A^H(t) + A(t)Ȧ^H(t)) + Ȧ^H(t). (7)

Theorem 2.

For any complex-valued activation function array Φ(·) defined as in (5), the zero solution of the complex ZNN design formula (4) is stable in the sense of Lyapunov.

Proof.

For convenience, design formula (4) could be written as subsystems:

ė_ij(t) = −γ1 e^{α1 t} (Φ(e(t)))_ij, (8)

where i ∈ {1,2,…,n}, j ∈ {1,2,…,m}.

The Lyapunov function is defined as

v_ij(t) = e_ij^H(t) e_ij(t) = φ^T(e_ij(t)) φ(e_ij(t)), (9)

where e_ij(t) denotes the ij-th element of e(t). It is obvious that v_ij(t) ≥ 0, and v_ij(t) = 0 if and only if e_ij(t) = 0, that is, Re(e_ij(t)) = 0 and Im(e_ij(t)) = 0.

The derivative with respect to time is

v̇_ij(t) = 2φ^T(e_ij(t)) φ(ė_ij(t)) = −2γ1 e^{α1 t} φ^T(e_ij(t)) φ((Φ(e(t)))_ij) = −2γ1 e^{α1 t} [Re(e_ij(t)) ϕ(Re(e_ij(t))) + Im(e_ij(t)) ϕ(Im(e_ij(t)))].

Since ϕ() is a monotonically increasing odd function, it holds that

Re(e_ij(t)) ϕ(Re(e_ij(t))) ≥ 0,  Im(e_ij(t)) ϕ(Im(e_ij(t))) ≥ 0.

So we can get that v̇_ij(t) ≤ 0, and v̇_ij(t) = 0 if and only if e_ij(t) = 0. By Lyapunov theory [36], the zero solution of (4) is stable in the sense of Lyapunov.

It is worth mentioning that, under the acceleration of the time-varying parameter e^{α1 t}, the solution of our proposed model not only remains stable but also achieves finite-time convergence. The upper bound of the convergence time of the state solution X(t) is calculated in the following theorem.

Theorem 3.

From any initial value X(0)=X0, the solution of the ZNN model (7) with the activation function (6) will converge to the theoretical solution A+(t) in finite time.

Proof.

For the Lyapunov function v_ij(t) in (9) and the ZNN model (7) with the activation function (6), we can get

v̇_ij(t) = −2γ1 e^{α1 t} [Re(e_ij(t)) ϕ(Re(e_ij(t))) + Im(e_ij(t)) ϕ(Im(e_ij(t)))]
= −2γ1 e^{α1 t} Re(e_ij(t)) [λ1 sgn^k(Re(e_ij(t))) + λ2 sgn^{1/k}(Re(e_ij(t))) + λ3 Re(e_ij(t))] − 2γ1 e^{α1 t} Im(e_ij(t)) [λ1 sgn^k(Im(e_ij(t))) + λ2 sgn^{1/k}(Im(e_ij(t))) + λ3 Im(e_ij(t))]
= −2γ1 e^{α1 t} (λ1 [|Re(e_ij(t))|^{k+1} + |Im(e_ij(t))|^{k+1}] + λ2 [|Re(e_ij(t))|^{(k+1)/k} + |Im(e_ij(t))|^{(k+1)/k}] + λ3 [|Re(e_ij(t))|^2 + |Im(e_ij(t))|^2])
≤ −2γ1 e^{α1 t} (λ1 [(|Re(e_ij(t))|^2)^{(k+1)/2} + (|Im(e_ij(t))|^2)^{(k+1)/2}] + λ3 [|Re(e_ij(t))|^2 + |Im(e_ij(t))|^2])
≤ −2γ1 e^{α1 t} (λ1 [|Re(e_ij(t))|^2 + |Im(e_ij(t))|^2]^{(k+1)/2} + λ3 [|Re(e_ij(t))|^2 + |Im(e_ij(t))|^2])
= −2γ1 e^{α1 t} (λ1 v_ij^{(k+1)/2}(t) + λ3 v_ij(t)).

That is,

v̇_ij(t) ≤ −2γ1 λ1 e^{α1 t} v_ij^{(k+1)/2}(t) − 2γ1 λ3 e^{α1 t} v_ij(t).

From the above inequality, we can conclude that v_ij(t) is monotonically decreasing and nonnegative, which means that once v_ij(T_ij) = 0, we have v_ij(t) = 0 for every t ≥ T_ij; that is to say, e_ij(t) = 0 for every t ≥ T_ij. So, without loss of generality, we suppose v_ij(t) > 0 for t ∈ (0, T_ij). The following inequality is obtained:

v_ij^{−(k+1)/2}(t) v̇_ij(t) ≤ −2γ1 λ1 e^{α1 t} − 2γ1 λ3 e^{α1 t} v_ij^{(1−k)/2}(t)

for t ∈ (0, T_ij). Multiplying both sides by ((1−k)/2) e^{(1/α1) γ1 λ3 (1−k) e^{α1 t}} and rearranging, we have

d[ e^{(1/α1) γ1 λ3 (1−k) e^{α1 t}} v_ij^{(1−k)/2}(t) ] ≤ −(1−k) γ1 λ1 e^{α1 t + (1/α1) γ1 λ3 (1−k) e^{α1 t}} dt.

Integrating both sides of this inequality from 0 to t for t ∈ (0, T_ij), one has

e^{(1/α1) γ1 λ3 (1−k) e^{α1 t}} v_ij^{(1−k)/2}(t) − e^{(1/α1) γ1 λ3 (1−k)} v_ij^{(1−k)/2}(0) ≤ −(λ1/λ3) e^{(1/α1) γ1 λ3 (1−k) e^{α1 t}} + (λ1/λ3) e^{(1/α1) γ1 λ3 (1−k)}.

Since v_ij(t) > 0, we can get

(λ1/λ3) e^{(1/α1) γ1 λ3 (1−k) e^{α1 t}} ≤ e^{(1/α1) γ1 λ3 (1−k)} v_ij^{(1−k)/2}(0) + (λ1/λ3) e^{(1/α1) γ1 λ3 (1−k)}.

Therefore,

t ≤ T_ij ≤ (1/α1) ln{ [α1/(γ1 λ3 (1−k))] ln[ (λ3/λ1) e^{(1/α1) γ1 λ3 (1−k)} v_ij^{(1−k)/2}(0) + e^{(1/α1) γ1 λ3 (1−k)} ] }.

The upper bound on the convergence time of X(t) is thus T̃1 = max{T_ij | i = 1,…,n, j = 1,…,m}.
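For concreteness, this bound can be evaluated numerically. The sketch below uses the parameter values later adopted in Section 4 (λ1 = λ2 = λ3 = γ1 = 1, α1 = 1, k = 0.5) together with an assumed, purely illustrative initial value v_ij(0) = 1:

```python
import numpy as np

lam1, lam3 = 1.0, 1.0
gamma1, alpha1, k = 1.0, 1.0, 0.5
v0 = 1.0                              # assumed initial value v_ij(0)

c = gamma1 * lam3 * (1 - k) / alpha1  # the recurring exponent (1/alpha1) gamma1 lam3 (1-k)
inner = (lam3 / lam1) * np.exp(c) * v0 ** ((1 - k) / 2) + np.exp(c)
T_bound = (1 / alpha1) * np.log((alpha1 / (gamma1 * lam3 * (1 - k))) * np.log(inner))
print(T_bound)  # ≈ 0.87, a finite convergence-time bound
```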

As is proved above, the state solution of ZNN model can converge to the theoretical solution A+(t) in finite time.

It is worth noting that, compared with the methods proposed in other references [8,30], we not only generalize the problem to the complex domain, but the state solution of the model proposed in this paper also achieves finite-time convergence.

Remark 1.

The ZNN model for the left time-varying Moore–Penrose inverse of matrices can be obtained similarly. The error function could be defined as

ε(t) = A^H(t)A(t)X(t) − A^H(t), (10)

and the following design formula is introduced:

ε̇(t) = −γ2 e^{α2 t} Φ(ε(t)), (11)

where γ2, α2 > 0. The corresponding ZNN model can be written as

A^H(t)A(t)Ẋ(t) = −γ2 e^{α2 t} Φ(A^H(t)A(t)X(t) − A^H(t)) − (Ȧ^H(t)A(t) + A^H(t)Ȧ(t))X(t) + Ȧ^H(t),

which can be rewritten as

Ẋ(t) = (I − A^H(t)A(t))Ẋ(t) − γ2 e^{α2 t} Φ(A^H(t)A(t)X(t) − A^H(t)) − (Ȧ^H(t)A(t) + A^H(t)Ȧ(t))X(t) + Ȧ^H(t). (12)

The convergence analysis is similar to the proof of Theorems 2 and 3, thus omitted here.

3.2. An Improved Matrix Inversion ZNN Model

The analysis of the ZNN model for the right time-varying Moore–Penrose inverse in Section 3.1 does not consider external noise, which means the state solutions of the ZNN models (7) and (12) may be unstable once external noise appears. In this section, we propose an improved matrix inversion ZNN model which is more efficient in noise suppression.

We introduce a design formula [26] for the error function e(t) in (3) as follows:

ė(t) = −δ1 e^{α1 t} Φ(e(t)) − δ2 Φ( e(t) + δ1 ∫_0^t e^{α1 τ} Φ(e(τ)) dτ ), (13)

where δ1, δ2 > 0. By employing the design formula (13), the following improved ZNN model for solving the right Moore–Penrose inverse is obtained:

Ẋ(t)A(t)A^H(t) = −δ1 e^{α1 t} Φ(X(t)A(t)A^H(t) − A^H(t)) − δ2 Φ( X(t)A(t)A^H(t) − A^H(t) + δ1 ∫_0^t e^{α1 τ} Φ(X(τ)A(τ)A^H(τ) − A^H(τ)) dτ ) − X(t)(Ȧ(t)A^H(t) + A(t)Ȧ^H(t)) + Ȧ^H(t). (14)

The following theorem ensures the state solution of the ZNN model (14) converges to the exact solution in finite time.

Theorem 4.

From any initial state X(0), the state solution X(t) of the ZNN model (14) with the activation function Φ() will converge to the exact solution A+(t).

Proof.

We first introduce an intermediate variable r(t) = e(t) + δ1 ∫_0^t e^{α1 τ} Φ(e(τ)) dτ; it is clear that

ṙ(t) = ė(t) + δ1 e^{α1 t} Φ(e(t)) = −δ2 Φ(r(t)).

Define a Lyapunov function as

w_ij(t) = r_ij^H(t) r_ij(t) = φ^T(r_ij(t)) φ(r_ij(t)),

where r_ij(t) denotes the ij-th element of r(t). The derivative with respect to time is

ẇ_ij(t) = 2φ^T(r_ij(t)) φ(ṙ_ij(t)) = −2δ2 φ^T(r_ij(t)) φ((Φ(r(t)))_ij) = −2δ2 [Re(r_ij(t)) ϕ(Re(r_ij(t))) + Im(r_ij(t)) ϕ(Im(r_ij(t)))] ≤ 0. (15)

Note that w_ij(t) ≥ 0 and ẇ_ij(t) ≤ 0; moreover, w_ij(t) = 0 holds if and only if r_ij(t) = 0, and ẇ_ij(t) = 0 holds if and only if r_ij(t) = 0. According to Lyapunov theory and the analysis in Theorem 3, we know that there exists T̃2 such that r_ij(t) = 0 for t > T̃2. In this situation, (13) is further written as

ė(t) = −δ1 e^{α1 t} Φ(e(t)). (16)

This is of the same form as (8). According to the proof of Theorem 2, e_ij(t) converges to zero finally. That is to say, X(t) will converge to the exact state solution finally.

Next, the upper bound of the convergence time of the state solution X(t) will be calculated, as described in the following theorem.

Theorem 5.

Starting from any initial state X(0), the state solution X(t) of the ZNN model (14) with activation function Φ() will converge to the exact solution A+(t) in finite time.

Proof.

According to (15), we can further get

ẇ_ij(t) = −2δ2 Re(r_ij(t)) [λ1 sgn^k(Re(r_ij(t))) + λ2 sgn^{1/k}(Re(r_ij(t))) + λ3 Re(r_ij(t))] − 2δ2 Im(r_ij(t)) [λ1 sgn^k(Im(r_ij(t))) + λ2 sgn^{1/k}(Im(r_ij(t))) + λ3 Im(r_ij(t))]
= −2δ2 (λ1 [|Re(r_ij(t))|^{k+1} + |Im(r_ij(t))|^{k+1}] + λ2 [|Re(r_ij(t))|^{(k+1)/k} + |Im(r_ij(t))|^{(k+1)/k}] + λ3 [|Re(r_ij(t))|^2 + |Im(r_ij(t))|^2])
≤ −2δ2 (λ1 [(|Re(r_ij(t))|^2)^{(k+1)/2} + (|Im(r_ij(t))|^2)^{(k+1)/2}] + λ3 [|Re(r_ij(t))|^2 + |Im(r_ij(t))|^2])
≤ −2δ2 (λ1 [|Re(r_ij(t))|^2 + |Im(r_ij(t))|^2]^{(k+1)/2} + λ3 [|Re(r_ij(t))|^2 + |Im(r_ij(t))|^2])
= −2δ2 (λ1 w_ij^{(k+1)/2}(t) + λ3 w_ij(t)).

Multiplying both sides by e^{δ2 λ3 (1−k) t}, we have

d[ e^{δ2 λ3 (1−k) t} w_ij^{(1−k)/2}(t) ] ≤ −(1−k) δ2 λ1 e^{δ2 λ3 (1−k) t} dt,

which means that

t ≤ T_ij ≤ [1/(δ2 λ3 (1−k))] ln( 1 + (λ3/λ1) w_ij^{(1−k)/2}(0) ).

Thus, we get T̃2 = max{T_ij | i = 1,…,n, j = 1,…,m}. It follows that when t ≥ T̃2, we have r_ij(t) = 0. In this situation, (13) reduces to (16). Hence, we obtain the upper bound of the remaining convergence time as follows:

t ≤ T_ij ≤ (1/α1) ln{ [α1/(δ1 λ3 (1−k))] ln[ (λ3/λ1) e^{(1/α1) δ1 λ3 (1−k)} v_ij^{(1−k)/2}(0) + e^{(1/α1) δ1 λ3 (1−k)} ] },

which follows as in the proof of Theorem 3. Defining T̃3 = max{T_ij | i = 1,…,n, j = 1,…,m}, the final convergence time is T̃N = T̃2 + T̃3. The proof is complete.

To verify the noise-tolerance ability, we add a constant external noise η to the design formula (13), and thus the following noise-polluted design formula is obtained:

ė(t) = −δ1 e^{α1 t} Φ(e(t)) − δ2 Φ( e(t) + δ1 ∫_0^t e^{α1 τ} Φ(e(τ)) dτ ) + η, (17)

and the noise-polluted ZNN model can be written as

Ẋ(t)A(t)A^H(t) = −δ1 e^{α1 t} Φ(X(t)A(t)A^H(t) − A^H(t)) − δ2 Φ( X(t)A(t)A^H(t) − A^H(t) + δ1 ∫_0^t e^{α1 τ} Φ(X(τ)A(τ)A^H(τ) − A^H(τ)) dτ ) − X(t)(Ȧ(t)A^H(t) + A(t)Ȧ^H(t)) + Ȧ^H(t) + η. (18)
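To see the effect of the integral term under noise, consider a scalar analogue (our own illustrative sketch, not the paper's matrix simulation): forward-Euler integration of the noise-polluted design formula (17) against the plain formula (4) polluted by the same constant noise η; all parameter values are illustrative.

```python
import numpy as np

def phi(x, l1=1.0, l2=1.0, l3=1.0, k=0.5):
    # Scalar WSBP activation (6).
    s = np.sign(x)
    return l1 * s * abs(x) ** k + l2 * s * abs(x) ** (1 / k) + l3 * x

delta1 = delta2 = gamma1 = alpha1 = 1.0
eta, dt, T = 0.5, 1e-3, 2.0
e_imp = e_plain = 1.0   # common initial error
integ = 0.0             # running value of delta1 * int_0^t e^{alpha1*tau} phi(e) dtau

t = 0.0
while t < T:
    g = np.exp(alpha1 * t)
    # Improved noise-tolerant formula (17):
    de = -delta1 * g * phi(e_imp) - delta2 * phi(e_imp + integ) + eta
    integ += delta1 * g * phi(e_imp) * dt
    e_imp += de * dt
    # Plain design formula (4) under the same noise:
    e_plain += (-gamma1 * g * phi(e_plain) + eta) * dt
    t += dt

print(abs(e_imp), abs(e_plain))  # the improved formula leaves a much smaller residual
```

With these (illustrative) values the integral term effectively absorbs the constant noise, so the residual error of (17) ends well below that of the plain formula at t = 2.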

Theorem 6.

For the noise-polluted model (18), the state solution X(t) starting from any initial state X(0) will converge to the exact solution A+(t) with the activation function Φ(·).

Proof.

As defined in Theorem 4, r(t) = e(t) + δ1 ∫_0^t e^{α1 τ} Φ(e(τ)) dτ. It can be written as subsystems whose derivatives with respect to time are

ṙ_ij(t) = −δ2 (Φ(r(t)))_ij + η_ij.

Define a Lyapunov function as

W(t) = (δ2 (Φ(r(t)))_ij − η_ij)^H (δ2 (Φ(r(t)))_ij − η_ij).

It is obvious that W(t) ≥ 0, and W(t) = 0 holds if and only if δ2 (Φ(r(t)))_ij − η_ij = 0. Note that the entries of the function array Φ(·) are monotonically increasing odd functions; the derivative of W(t) with respect to time can be calculated as

Ẇ(t) = ( δ2 (∂(Φ(r(t)))_ij/∂r_ij) ṙ_ij(t) )^H (δ2 (Φ(r(t)))_ij − η_ij) + (δ2 (Φ(r(t)))_ij − η_ij)^H ( δ2 (∂(Φ(r(t)))_ij/∂r_ij) ṙ_ij(t) )
= −( δ2 (∂(Φ(r(t)))_ij/∂r_ij)^H + δ2 (∂(Φ(r(t)))_ij/∂r_ij) ) (δ2 (Φ(r(t)))_ij − η_ij)^H (δ2 (Φ(r(t)))_ij − η_ij)
= −2 Re( δ2 ∂(Φ(r(t)))_ij/∂r_ij ) φ^T(δ2 (Φ(r(t)))_ij − η_ij) φ(δ2 (Φ(r(t)))_ij − η_ij) ≤ 0.

That is to say, r_ij(t) converges to the equilibrium according to Lyapunov theory; in other words, lim_{t→+∞} ṙ_ij(t) = lim_{t→+∞} [−δ2 (Φ(r(t)))_ij + η_ij] = 0. In this situation, since ė_ij(t) = ṙ_ij(t) − δ1 e^{α1 t} (Φ(e(t)))_ij, it can be obtained that ė_ij(t) = −δ1 e^{α1 t} (Φ(e(t)))_ij asymptotically, which is similar to (8). According to the proof of Theorem 2, e_ij(t) converges to zero finally. That is to say, X(t) will converge to the exact state solution finally.

4. NUMERICAL EXAMPLES

In this section, numerical examples are presented to illustrate the effectiveness of the proposed ZNN model.

Example 1.

Consider the right Moore–Penrose inverse of the following complex time-varying matrix:

A(t) = [A1(t)  A2(t)  A3(t)],

where

A1(t) = (sin(t) + i cos(t), cos(t))^T,
A2(t) = (cos(t) + i sin(t), sin(t) + i cos(t))^T,
A3(t) = (sin(t), cos(t) + i sin(t))^T.

The Zhang neural network (ZNN) model (7) is used to compute the Moore–Penrose inverse of the time-varying matrix in Example 1. Here, we take λ1 = 1, λ2 = 1, λ3 = 1, γ1 = 1, α1 = 1 and k = 0.5. The resulting state solutions are shown in Figure 1. We split the element in each position into its real part and imaginary part and put them in the same diagram; there are four different initial values in every diagram. It can be seen from Figure 1 that, starting from any of the four given initial values, the solution curves of model (7) converge to one state corresponding to the respective elements of X(t), and after that the curves remain converged. This illustrates the effectiveness of our proposed model for solving the matrix Moore–Penrose inverse problem.
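The simulation behind these figures can be reproduced with a short forward-Euler sketch of model (7); this is our own reconstruction (the step size, horizon, and single zero initial value are arbitrary choices, and A(t) is the Example 1 matrix as read above):

```python
import numpy as np

def phi_wsbp(x, l1=1.0, l2=1.0, l3=1.0, k=0.5):
    # Weighted sign-bi-power activation (6), elementwise on real arrays.
    s = np.sign(x)
    return l1 * s * np.abs(x) ** k + l2 * s * np.abs(x) ** (1 / k) + l3 * x

def Phi(Z):
    # Complex activation array (5).
    return phi_wsbp(Z.real) + 1j * phi_wsbp(Z.imag)

def A_of(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[s + 1j * c, c + 1j * s, s],
                     [c, s + 1j * c, c + 1j * s]])

def Adot_of(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[c - 1j * s, -s + 1j * c, c],
                     [-s, c - 1j * s, -s + 1j * c]])

gamma1, alpha1, dt, T = 1.0, 1.0, 1e-3, 2.0
X = np.zeros((3, 2), dtype=complex)   # one arbitrary initial value X(0)
t = 0.0
while t < T:
    A, Ad = A_of(t), Adot_of(t)
    AH = A.conj().T
    e = X @ A @ AH - AH                       # error function (3)
    rhs = (-gamma1 * np.exp(alpha1 * t) * Phi(e)
           - X @ (Ad @ AH + A @ Ad.conj().T) + Ad.conj().T)
    # Model (7) in the form X_dot A A^H = rhs; A(t) has full row rank here,
    # so we right-multiply by (A A^H)^{-1} as a simulation device.
    X += dt * rhs @ np.linalg.inv(A @ AH)
    t += dt

print(np.linalg.norm(X - np.linalg.pinv(A_of(t))))  # small residual
```

At the end of the run the state X(t) is close to the SVD-based pseudoinverse of A(t), consistent with the convergence seen in Figures 1 and 2.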

Figure 1

State trajectories of ZNN models (7) for Example 1.

In order to further illustrate the validity of the conclusion, we plot the trajectories of the error function; the results are presented in Figure 2. We again split the element in each position into its real part and imaginary part and put them in the same diagram. It can be seen that, starting from the same four initial values as before, the real and imaginary parts of each element rapidly converge to zero, indicating that the state solution converges well to the theoretical solution. Compared with the traditional ZNN model proposed in another reference [30] that also solves the matrix Moore–Penrose inverse, the results in Figure 3 show that the variable parameter e^{α1 t} can indeed accelerate convergence with the same initial values, and that the convergence time of the model solution decreases as the parameter α1 becomes larger.

Figure 2

State trajectories of error function (3) for Example 1.

Figure 3

Influence of parameters on convergence time for Example 1.

Example 2.

Consider the left Moore–Penrose inverse of the following complex time-varying matrix:

A(t) =
[ sin(t) + i cos(t)   cos(t) + i sin(t)
  cos(t)              sin(t)
  sin(t) + i cos(t)   cos(t) + i sin(t) ].

The ZNN model (12) is used to compute the left Moore–Penrose inverse of the time-varying matrix in Example 2. In this case, we take λ1 = 1, λ2 = 1, λ3 = 1, γ2 = 1, α2 = 1 and k = 0.5.

Choosing four different initial values arbitrarily in every diagram, we split the element in each position into its real part and imaginary part; in order to reflect the convergence speed more intuitively, we put them in the same diagram. Similarly, as shown in Figures 4 and 5, we can observe that, starting from any of the four given initial values, the solution curves of model (12) converge to one state corresponding to the respective elements of X(t). That is to say, our model effectively solves the proposed time-varying Moore–Penrose inverse problem.

Figure 4

State trajectories of Zhang neural network (ZNN) models (12) for Example 2.

Figure 5

State trajectories of error function (10) for Example 2.

Example 3.

Consider the right Moore–Penrose inverse of the following complex time-varying matrix with noise:

A(t) = [A1(t)  A2(t)  A3(t)],

where

A1(t) = (sin(t) + i cos(t), cos(t))^T,
A2(t) = (cos(t) + i sin(t), sin(t) + i cos(t))^T,
A3(t) = (sin(t), cos(t) + i sin(t))^T.

We first examine the stability of the model without interference from external noise. In order to further illustrate the validity of the conclusion, we plot the trajectories of the error function, shown in Figure 6. Every element of the error function, whether real or imaginary, eventually converges to zero, which effectively shows that model (14) can solve the time-varying Moore–Penrose inverse of matrices. Next, we randomly choose a set of constant noise values and compare the convergence of the error curves under this noise interference. The error curves of model (14), represented by the blue solid line and the blue dotted line in Figure 7, both converge to zero quickly, whereas the error curves of model (7), represented in red in Figure 7, converge slowly under the interference of external noise. In other words, model (14) does better in noise suppression.

Figure 6

Trajectories of error function without noise for Example 3.

Figure 7

Trajectories of error function with noise for Example 3.

5. CONCLUSION

In this paper, a new ZNN model is proposed for solving the time-varying Moore–Penrose inverse over the complex field. The solutions of the ZNN models (7) and (12) are proved to be stable in the sense of Lyapunov. Furthermore, for any initial value, the state solution of the ZNN converges to the theoretical solution in finite time. Compared with existing results, our model converges faster because of the new activation function. The improved ZNN model (14) is proved to be more efficient in noise suppression. Models with faster convergence rates and models for solving the Moore–Penrose inverse of rank-deficient matrices are directions for our future research.

CONFLICT OF INTEREST

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or other personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.

AUTHORS' CONTRIBUTIONS

Yiyuan Chai: Conceptualization and Methodology; Haojin Li: Writing- Original draft preparation; Defeng Qiao: Computer programs; Sitian Qin: Supervision, Validation and Investigation; Jiqiang Feng: Writing- Reviewing and Editing.

ACKNOWLEDGMENTS

This research is supported by the National Natural Science Foundation of China (61773136, 11871178).

REFERENCES

36.Y. Zhang and C. Yi, Zhang Neural Networks and Neural-Dynamic Method, Nova Science Publishers, New York, NY, USA, 2011.
ISSN (Online)
1875-6883
ISSN (Print)
1875-6891
