Exponential estimates and exponential stability for neutral-type neural networks with multiple delays

Xiaofeng Liao a,b,*, Yilu Liu a,b, Huiwei Wang a,b, Tingwen Huang c

a College of Electronic and Information Engineering, Southwest University, Chongqing 400715, PR China
b College of Computer, Chongqing University, Chongqing, PR China
c Texas A & M University, Qatar, Doha, P.O. Box 23874, Qatar

Article info

Article history:
Received 9 March 2014
Received in revised form 8 May 2014
Accepted 14 July 2014
Communicated by He Huang
Available online 12 August 2014

Keywords:
Neutral-type neural networks
Exponential estimate
Exponential stability
Lyapunov–Krasovskii functional
Linear matrix inequalities
Descriptor transformation

Abstract

In this paper, exponential estimates and sufficient criteria for the exponential stability of neutral-type neural networks with multiple delays are given. First, because of the key role of the difference equation part of the neutral-type neural networks with multiple delays, some novel results concerning exponential estimates for non-homogeneous difference equations evolving in continuous time are derived. Then, by constructing several different Lyapunov–Krasovskii functionals combined, in some cases, with a descriptor transformation approach, several novel global exponential stability conditions are presented and expressed in terms of linear matrix inequalities (LMIs); the obtained results are less conservative and restrictive than the known results. Some numerical examples are also given to show their effectiveness and advantages over others.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

In the last decade, there has been great interest in various types of recurrent neural networks, for example, Cohen–Grossberg neural networks, Hopfield neural networks, cellular neural networks, bidirectional associative memory neural networks and so on, which have been applied in various engineering fields, such as signal and image processing, associative memories, combinatorial optimization, and automatic control [1–5]. However, when neural networks are implemented by means of very large-scale integrated (VLSI) electronic circuits, the finite switching speed of the associated amplifiers and the communication time expended in the interaction between neurons inevitably lead to time delays [5–9]. These time delays may result in oscillatory behaviors or network instability (for example, periodic oscillation and chaos). Hence, the stability of recurrent neural networks with time delays has received much attention and has been investigated by many researchers in recent years; see [5–11] and the references therein.

Generally, there exist two types of time delays in nonlinear systems, i.e., retarded-type delays and neutral-type delays. Retarded-type delays are those appearing in the states of the system (giving retarded differential equations), whereas neutral-type delays are those appearing in the derivatives of the states (giving neutral differential equations). In contrast to retarded differential systems, neutral differential systems, in which time delays appear explicitly in the state velocity vector, can describe more complicated nonlinear engineering and bioscience models, for example, population ecology [12], distributed networks with lossless transmission lines [13,14], chemical reactors [15], partial element equivalent circuits in very large scale integration (VLSI) systems [16], and so on. As discussed above, recurrent neural networks can be implemented using VLSI circuits. Therefore, both retarded-type delays and neutral-type delays are inherent in the dynamics of recurrent neural networks. In recent years, several approaches have been employed to derive stability criteria for retarded or neutral differential systems, such as the characteristic function approach [17,18], Lyapunov functionals, M-matrices, the LMI approach, Lyapunov functions combined with the Razumikhin technique [5–16], and the augmented Lyapunov–Krasovskii functional and free-weighting matrix approaches [19,20]. In the last decade, most works have focused on the stability and robust stability of recurrent neural networks with retarded-type delays.


* Corresponding author. Tel.: +86 23 68250655; fax: +86 23 6825099.
E-mail addresses: [email protected], [email protected] (X. Liao), [email protected] (T. Huang).


However, there are few reports on neutral-type neural networks with time delays, i.e., recurrent neural networks with both retarded-type delays and neutral-type delays. Furthermore, the known stability criteria for neutral-type neural networks rarely consider the importance and the key role of the difference equation part of neutral-type neural networks with multiple delays.

When neural circuits are employed as an associative memory, the existence of multiple equilibrium points is a necessary feature. However, in applications to parallel computation and signal processing involving the solution of optimization problems, it is required that there is a well-defined computable solution for all possible initial states. From a mathematical viewpoint, this means that the network should have a unique equilibrium point which is globally and exponentially convergent. Indeed, earlier applications to optimization problems have suffered from the existence of a complicated set of equilibria [21]. Thus, the global exponential stability of systems is of great importance for both practical and theoretical purposes, and has been the major concern of most authors. Recently, the authors of [22] have given delay-dependent stability criteria for a class of neutral differential systems describing neural networks. These delay-dependent criteria are less conservative than their delay-independent counterparts when the delay is sufficiently small; however, the sufficient criteria derived in [22] do not consider exponential estimates for the difference equations in neutral-type neural networks. Cheng et al. [23] investigated the global asymptotic stability of a class of neutral-type neural networks with delays, which includes Hopfield neural networks, cellular neural networks, and Cohen–Grossberg neural networks. By applying the Lyapunov stability approach, two delay-independent sufficient stability criteria were derived in terms of matrix norms, but they involve neither an exponential estimate of the difference operator nor the exponential stability of neutral-type neural networks. In [24], the authors studied a class of switched neutral-type neural networks (SNTNNs) with time-varying delays, and some less conservative robust stability criteria for SNTNNs with time-varying delays were derived based on a new Lyapunov–Krasovskii functional and a new series compensation technique. Lien et al. [25] studied the global exponential stability of a class of uncertain delayed neural networks of neutral type with mixed delays; by using linear matrix inequality (LMI) and Razumikhin-like approaches, delay-dependent and delay-independent stability criteria were presented. In [26], the authors obtained new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium points for a class of neutral-type systems with multiple delays; however, the results obtained in [26] are very complex and also difficult to verify. In [27], the authors proposed semi-free weighting matrices to replace the free weighting matrices of [19,20], whose stability criteria involve too many matrices, and used the Lyapunov functional and the augmented Lyapunov functional approaches to derive global stability conditions for neutral-type delayed neural networks. However, all of the above results only guarantee the asymptotic stability of neutral-type neural networks: they neither discuss exponential estimates for the difference equation part of the neutral systems nor provide any estimate of the rate of convergence or of the bound on the norm of the solution. One may then ask whether the LMI approach can be used to derive exponential estimates for the solutions of neutral-type neural networks. However, to the best of our knowledge, such estimates, as well as exponential bounds for the case of neutral-type neural networks with multiple delays, have not been addressed in the literature.

The rest of this paper is organized as follows. Because the difference equations of neutral-type neural networks play a key role in our work, some results concerning exponential estimates for difference equations evolving in the continuous-time domain are given in Section 2. In Section 3, some delay-dependent conditions are derived in terms of linear matrix inequalities (LMIs) by using the Lyapunov–Krasovskii functional approach and a descriptor model transformation of the system. These results allow us to obtain exponential estimates for the solution of neutral-type neural networks with multiple delays, and the derived conditions also lead to less restrictive and less conservative exponential estimates on the solution of this system. As a by-product, we also derive some criteria for neutral-type networks that are linear in the derivative of the system's states; therefore, our results generalize and extend the known results. Illustrative numerical examples are presented in Section 4, and the contribution ends with some concluding remarks.

Notation. Throughout this paper, for real symmetric matrices $X$ and $Y$, the notation $X \ge Y$ ($X > Y$, respectively) means that the matrix $X - Y$ is positive semi-definite (positive definite, respectively). $\mathbb{R}$ and $\mathbb{R}_+$ denote the sets of all real and positive real numbers, respectively. $\mathbb{R}^{n\times n}$ stands for the set of all $n\times n$ matrices with entries in $\mathbb{R}$. The superscript "T" represents the transpose. We use $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ to denote the minimum and maximum eigenvalue of a real symmetric matrix, respectively. The notation $\|x\|$ denotes the vector norm defined by $\|x\|=(\sum_{i=1}^{n}x_i^2)^{1/2}$, where $x$ is a vector, while $\|A\|$ denotes the matrix norm defined by $\|A\|=(\lambda_{\max}(A^TA))^{1/2}$, where $A$ is a matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2. Neutral-type neural networks and exponential estimates for difference equations

Consider the following set of nonlinear differential equations, which describes the class of neutral-type neural networks with multiple delays:

$$\frac{d}{dt}\Big[u_i(t)-\sum_{k=1}^{r}\sum_{j=1}^{n}d_{ij}^{(k)}g_j\big(u_j(t-\tau_j^{(k)})\big)\Big]=-a_i u_i(t)+\sum_{k=1}^{r}\sum_{j=1}^{n}w_{ij}^{(k)}g_j\big(u_j(t-\tau_j^{(k)})\big)+I_i,\quad i=1,2,\ldots,n. \qquad (1)$$

The above neutral time delay system can be rewritten in the vector–matrix form

$$\frac{d}{dt}\Big[u(t)+\sum_{k=1}^{r}D_k g\big(u(t-\tau_k)\big)\Big]=-A u(t)+\sum_{k=1}^{r}W_k g\big(u(t-\tau_k)\big)+I, \qquad (2)$$

where

$$\begin{aligned}
&u(t)=(u_1(t),u_2(t),\ldots,u_n(t))^T,\quad A=\mathrm{diag}(a_1,a_2,\ldots,a_n),\quad I=(I_1,I_2,\ldots,I_n)^T,\\
&D_k=(d_{ij}^{(k)})_{n\times n}\in\mathbb{R}^{n\times n},\quad W_k=(w_{ij}^{(k)})_{n\times n}\in\mathbb{R}^{n\times n},\quad k=1,\ldots,r,\;\; i,j=1,2,\ldots,n,\\
&g(u(t-\tau_k))=\big(g_1(u_1(t-\tau_1^{(k)})),\ldots,g_n(u_n(t-\tau_n^{(k)}))\big)^T,\quad \tau_j^{(k)}\in\mathbb{R}_+,\quad k=1,\ldots,r,\;\; j=1,2,\ldots,n,
\end{aligned} \qquad (3)$$

and $0=\tau_0\le\tau_1\le\tau_2\le\cdots\le\tau_r=\tau$ are the time delays. Here $n$ is the number of neurons in the network and $r\ge 2$; $u_i$ denotes the state variable associated with the $i$-th neuron; $a_i$ represents the amplification constants; the delayed feedback connection matrix $W_k=(w_{ij}^{(k)})_{n\times n}\in\mathbb{R}^{n\times n}$ describes the strength of the neuron interconnections within the network with time delay parameter $\tau_k$; and $D_k=(d_{ij}^{(k)})_{n\times n}\in\mathbb{R}^{n\times n}$ shows how the derivatives of the neurons are delay feed-forward connected in the network, i.e., how the time delays occur in the state velocity vector. If $D_k=0$, then system (1) describes a class of neural networks with retarded-type delays. The activation function $g_i$ shows how the neurons respond to one another, and $I_i$ is the external constant input. Typically, the activation functions $g_i$ are bounded and satisfy

$$0<\frac{g_i(x)-g_i(y)}{x-y}\le m_i,\quad i=1,2,\ldots,n, \qquad (4)$$

for any $x,y\in\mathbb{R}$, $x\neq y$, where $m_i>0$ for $i=1,2,\ldots,n$ and $M=\max(m_1,\ldots,m_n)$.

For any continuous initial function $\varphi\in C([-\tau,0];\mathbb{R}^n)$, there exists a unique solution $u(t;\varphi)$ of (1) satisfying the initial condition

$$u(\theta;\varphi)=\varphi(\theta),\quad \theta\in[-\tau,0]. \qquad (5)$$

The space of initial functions is provided with the uniform norm $\|\varphi\|_\tau=\max_{\theta\in[-\tau,0]}\{\|\varphi(\theta)\|\}$.

In [22–27], the authors only consider a form that is linear in the derivative of the state, i.e., system (1) takes the following form:

$$\frac{d}{dt}\Big[u_i(t)-\sum_{k=1}^{r}\sum_{j=1}^{n}d_{ij}^{(k)}u_j(t-\tau_j^{(k)})\Big]=-a_i u_i(t)+\sum_{k=1}^{r}\sum_{j=1}^{n}w_{ij}^{(k)}g_j\big(u_j(t-\tau_j^{(k)})\big)+I_i,\quad i=1,2,\ldots,n, \qquad (6)$$

and its vector–matrix form is as follows:

$$\frac{d}{dt}\Big[u(t)+\sum_{k=1}^{r}D_k u(t-\tau_k)\Big]=-A u(t)+\sum_{k=1}^{r}W_k g\big(u(t-\tau_k)\big)+I. \qquad (7)$$

The equilibrium point $u^*$ of system (2) or (7) satisfies the equation

$$A u^*=\sum_{k=1}^{r}W_k g(u^*)+I.$$

In the following, we always shift the equilibrium point $u^*$ of system (2) to the origin by the transformation $x=u-u^*$, which changes system (2) into

$$\frac{d}{dt}\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]=-A x(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big), \qquad (8)$$

and, similarly, system (7) becomes

$$\frac{d}{dt}\Big[x(t)+\sum_{k=1}^{r}D_k x(t-\tau_k)\Big]=-A x(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big), \qquad (9)$$

where $x=(x_1,x_2,\ldots,x_n)^T$ is the state vector of the transformed system, with $f_j(x_j)=g_j(x_j+u_j^*)-g_j(u_j^*)$, $j=1,2,\ldots,n$.

Throughout this paper, we use the Euclidean norm for vectors and the induced matrix norm for matrices. For $t\ge 0$ we denote by $u_t(\varphi)$ the segment of trajectory $u_t(\varphi):\theta\mapsto u(t+\theta;\varphi)$, $\theta\in[-\tau,0]$.

Definition 1 (Bellman and Cooke [28]). System (8) or (9) is said to be exponentially stable if there exist constants $\gamma>0$ and $\sigma>0$ such that, for every solution $x(t;\varphi)$ with $\varphi\in C([-\tau,0];\mathbb{R}^n)$, the following exponential estimate holds:

$$\|x(t;\varphi)\|\le\gamma e^{-\sigma t}\|\varphi\|_\tau. \qquad (10)$$

The aim of this paper is to determine a lower bound on the decay rate $\sigma$ and an upper bound on the factor $\gamma$ in terms of linear matrix inequalities derived using the Lyapunov–Krasovskii approach.

First, we give a technical result of independent interest. It will be used in the sequel for deriving exponential estimates for difference equations which evolve in continuous time.

Lemma 1. Let $x(t)$, $t\ge-\tau_r$, be a function that satisfies the condition

$$\|x(t)\|\le\lambda\sup_{\tau_1\le\theta\le\tau_r}\|x(t-\theta)\|+\mu e^{-\beta t}, \qquad (11)$$

with $\lambda$ a positive real number, and let $z(t)$, $t\ge-\tau_r$, be a continuous function such that

$$z(t)\ge\lambda\sup_{\tau_1\le\theta\le\tau_r}z(t-\theta)+\mu e^{-\beta t},\quad\text{for }t\ge 0. \qquad (12)$$

Then, if

$$\|x(\theta)\|\le z(\theta),\quad\text{for }-\tau_r\le\theta\le 0, \qquad (13)$$

it follows that $z(t)$ defines an upper bound for $\|x(t)\|$ for $t\ge 0$, i.e.,

$$\|x(t)\|\le z(t),\quad\text{for }t\ge 0.$$

Proof. For $0\le t\le\tau_1$, we have

$$z(t)\ge\lambda\sup_{\tau_1\le\theta\le\tau_r}z(t-\theta)+\mu e^{-\beta t}.$$

As the time argument of $z$ on the right-hand side is negative, it follows from (13) that

$$z(t)\ge\lambda\sup_{\tau_1\le\theta\le\tau_r}\|x(t-\theta)\|+\mu e^{-\beta t},$$

and (11) implies

$$z(t)\ge\|x(t)\|.$$

Repeating the argument on successive time intervals of length $\tau_1$, we arrive at the conclusion. □

We consider the non-homogeneous difference equation of the following form:

$$x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)=h(t), \qquad (14)$$

with $D_k\in\mathbb{R}^{n\times n}$, $k=1,\ldots,r$, delays $0<\tau_1<\tau_2<\cdots<\tau_r$, and where $h(t)$ satisfies

$$\|h(t)\|\le\mu e^{-\beta t},\quad \mu\ge 0,\;\beta>0. \qquad (15)$$

For any continuous function $\varphi\in C([-\tau,0];\mathbb{R}^n)$, there exists a unique solution $x(t;\varphi)$ of (14) and (15) satisfying the initial condition

$$x(\theta;\varphi)=\varphi(\theta),\quad\theta\in[-\tau,0]. \qquad (16)$$

In this paper, unless otherwise stated, we make the following assumption.

Assumption A. The matrices $D_k$, $k=1,\ldots,r$, are such that

$$\lambda\triangleq M\sum_{k=1}^{r}\|D_k\|<1. \qquad (17)$$

In order to use Lemma 1 for determining exponential estimates for the solution of (14), we look for a function $z(t)$ of the form $L e^{-\sigma t}$, with $\sigma>0$ and $L>0$, satisfying inequality (12); i.e., $z(t)$ must satisfy

$$L e^{-\sigma t}\ge\lambda L\sup_{\tau_1\le\theta\le\tau_r}e^{-\sigma(t-\theta)}+\mu e^{-\beta t}. \qquad (18)$$

Obviously,

$$L e^{-\sigma t}\ge\lambda L e^{-\sigma t}e^{\sigma\tau_r}+\mu e^{-\beta t}$$

must hold, or equivalently

$$1\ge\lambda e^{\sigma\tau_r}+\frac{\mu}{L}e^{(\sigma-\beta)t}. \qquad (19)$$

Let us define a positive constant $\alpha$ such that $\lambda=e^{-\alpha\tau_r}$. Inequality (19) then takes the form

$$1\ge e^{(\sigma-\alpha)\tau_r}+\frac{\mu}{L}e^{(\sigma-\beta)t}, \qquad (20)$$

and we obtain that for any choice of $\sigma$ and $L$ such that

$$0<\sigma<\min\{\alpha,\beta\}, \qquad (21)$$

and, correspondingly,

$$L>\frac{\mu}{1-e^{(\sigma-\alpha)\tau_r}}, \qquad (22)$$

condition (12) of Lemma 1 is satisfied.
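As a purely illustrative instance of (21) and (22) (the numbers below are chosen for demonstration only and do not come from the paper): take $\lambda=0.5$, $\tau_r=1$, $\mu=1$, $\beta=1$, so that $\alpha=-\ln\lambda/\tau_r=\ln 2\approx 0.693$. Choosing $\sigma=0.3<\min\{\alpha,\beta\}$, condition (22) requires

$$L>\frac{\mu}{1-e^{(\sigma-\alpha)\tau_r}}=\frac{1}{1-e^{-0.393}}\approx 3.08,$$

so $z(t)=3.1\,e^{-0.3t}$, for example, satisfies condition (12) of Lemma 1 for this data.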

Now, we look for a $z(t)$ such that (21) and (22) hold and that also satisfies condition (13) of Lemma 1. We observe that, since $\lambda<1$, $\mu\ge 0$ and $\sigma<\alpha$, the choice

$$z(t)=\max\Big\{\|\varphi\|_{\tau_r},\,\frac{\mu}{1-e^{(\sigma-\alpha)\tau_r}}\Big\}\,e^{-\min\{-\ln(\lambda)/\tau_r,\,\beta\}\,t} \qquad (23)$$

is such that, for $-\tau_r\le\theta\le 0$,

$$z(\theta)\ge\|\varphi\|_{\tau_r}\ge\|\varphi(\theta)\|\ge\|x(\theta)\|;$$

hence for $z(t)$ as in (23) conditions (13) and (12) of Lemma 1 hold. We are now able to present exponential estimates for the solution of the non-homogeneous difference equation.

Lemma 2. Consider system (14) with initial condition (16) and suppose that Assumption A is satisfied. Then the solution $x(t;\varphi)$ of (14) satisfies the inequality

$$\|x(t)\|\le K(\varphi)e^{-\sigma t}, \qquad (24)$$

with

$$\sigma<\min\{\alpha,\beta\}, \qquad (25)$$

where $\alpha=-\ln(\lambda)/\tau_r$, and

$$K(\varphi)>\max\Big\{\|\varphi\|_{\tau_r},\,\frac{\mu}{1-e^{(\sigma-\alpha)\tau_r}}\Big\}. \qquad (26)$$

Proof. Observe first that the solution $x(t)$ of system (14) is such that

$$\|x(t)\|\le\sum_{k=1}^{r}\|D_k\|\,M\,\|x(t-\tau_k)\|+\|h(t)\|\le\Big[M\sum_{k=1}^{r}\|D_k\|\Big]\sup_{\tau_1\le\theta\le\tau_r}\|x(t-\theta)\|+\mu e^{-\beta t}=\lambda\sup_{\tau_1\le\theta\le\tau_r}\|x(t-\theta)\|+\mu e^{-\beta t}, \qquad (27)$$

hence condition (11) of Lemma 1 holds. Observe also that for $z(t)=K(\varphi)e^{-\sigma t}$, $t\ge-\tau_r$, with $\sigma$ and $K(\varphi)$ defined in (25) and (26), respectively, it follows from the above arguments (see (23)) that $z(t)$ satisfies conditions (12) and (13) of Lemma 1. The result follows straightforwardly. □

In the homogeneous case the results reduce to the following.

Corollary 1. Consider system (14) with initial condition (16) and suppose that Assumption A is satisfied. In the homogeneous case, i.e., $\mu=0$ and $\beta$ arbitrary, the solution $x(t;\varphi)$ is such that

$$\|x(t)\|\le\|\varphi\|_{\tau_r}e^{\rho t},\quad\text{for }t\ge 0,$$

with $\rho=\ln\lambda/\tau_r$.

Proof. The result is obtained by taking $\mu=0$ and $\beta$ arbitrarily large in Lemma 1. □
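The estimate of Corollary 1 is easy to check numerically. The short script below is ours, not part of the paper: it uses the matrices $D_1$, $D_2$ and the delays of the example in Section 4, takes $f=\tanh$ (so that $M=1$), steps the homogeneous difference equation on a grid whose step divides both delays, and compares the trajectory with the bound $\|\varphi\|_{\tau_r}e^{\rho t}$.

```python
import numpy as np

# D1, D2, tau_1, tau_2 are those of the example in Section 4; f = tanh gives M = 1.
D = [np.array([[-0.02, -0.01], [-0.03, -0.07]]),
     np.array([[ 0.05,  0.02], [ 0.08,  0.04]])]
tau, M = [0.5, 1.0], 1.0

lam = M * sum(np.linalg.norm(Dk, 2) for Dk in D)        # eq. (17): about 0.1824 < 1
rho = np.log(lam) / max(tau)                            # decay exponent of Corollary 1

h, T = 0.01, 20.0                                       # h divides both delays exactly
n_hist, n_steps = int(round(max(tau) / h)), int(round(T / h))
x = np.zeros((n_hist + n_steps + 1, 2))
x[: n_hist + 1] = np.array([1.0, -2.0])                 # constant initial function phi
for i in range(n_hist + 1, n_hist + n_steps + 1):
    x[i] = -sum(Dk @ np.tanh(x[i - int(round(dk / h))]) for Dk, dk in zip(D, tau))

t = np.arange(n_steps + 1) * h
bound = np.linalg.norm(x[n_hist]) * np.exp(rho * t)     # ||phi||_{tau_r} * e^{rho t}
assert np.all(np.linalg.norm(x[n_hist:], axis=1) <= bound + 1e-9)
print(f"lambda = {lam:.4f}, rho = {rho:.4f}; Corollary 1 bound holds on the grid")
```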

3. Exponential estimates for neutral-type neural networks with multiple delays

In this section, we determine exponential estimates for the solutions of neutral-type neural networks with multiple delays, when theparameters are known.

Lemma 3. Let the nonlinear time delay system (8) be given. If there exist positive definite matrices $P$ and $Q_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\mu(P,Q_k)+2\beta N(P)<0 \qquad (28)$$

holds, where

$$\mu(P,Q_k)=\begin{bmatrix}
rM^2\sum_{k=1}^{r}Q_k-PA-AP & PW_1-APD_1 & PW_2-APD_2 & \cdots & PW_r-APD_r\\
W_1^TP-D_1^TPA & D_1^TPW_1+W_1^TPD_1-e^{-2\beta\tau_1}Q_1 & D_1^TPW_2+W_1^TPD_2 & \cdots & D_1^TPW_r+W_1^TPD_r\\
W_2^TP-D_2^TPA & D_2^TPW_1+W_2^TPD_1 & D_2^TPW_2+W_2^TPD_2-e^{-2\beta\tau_2}Q_2 & \cdots & D_2^TPW_r+W_2^TPD_r\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
W_r^TP-D_r^TPA & D_r^TPW_1+W_r^TPD_1 & D_r^TPW_2+W_r^TPD_2 & \cdots & D_r^TPW_r+W_r^TPD_r-e^{-2\beta\tau_r}Q_r
\end{bmatrix}, \qquad (29)$$

$$N(P)=[I,D_1,\ldots,D_r]^T P\,[I,D_1,\ldots,D_r], \qquad (30)$$

then for any initial condition (16), the solution $x(t;\varphi)$ of system (8) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\alpha_2}{\alpha_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (31)$$

where the positive constants $\alpha_1$ and $\alpha_2$ are defined as follows:

$$\alpha_1=\lambda_{\min}(P), \qquad (32)$$

$$\alpha_2=\lambda_{\max}(P)\Big(1+M\sum_{k=1}^{r}\|D_k\|\Big)+M\sum_{k=1}^{r}\tau_k\lambda_{\max}(Q_k). \qquad (33)$$

Here $\lambda_{\min}(P)$ and $\lambda_{\max}(P)$ denote the minimum and maximum eigenvalues of the positive definite matrix $P$.

Proof. Consider the Lyapunov–Krasovskii functional

$$V(x_t)=\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]^T P\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta, \qquad (34)$$

where $P$ and $Q_k$, $k=1,\ldots,r$, are the positive definite matrices of Lemma 3. From (34), we obtain the following inequalities:

$$\alpha_1\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t)\le\alpha_2\|x_t\|_{\tau_r}^2, \qquad (35)$$

where $\alpha_1$ and $\alpha_2$ are given by (32) and (33), respectively.

The time derivative of this functional along the trajectories of system (8) is

$$\begin{aligned}
\frac{d}{dt}V(x_t)
&=2\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]^T P\Big[-Ax(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\Big]
+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}Q_k\Big]f\big(x(t)\big)\\
&\quad-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)
-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta\\
&=-2x^T(t)PAx(t)+2\sum_{k=1}^{r}x^T(t)\big[PW_k-APD_k\big]f\big(x(t-\tau_k)\big)
+2\Big[\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)D_k^T\Big]P\Big[\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\Big]\\
&\quad+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}Q_k\Big]f\big(x(t)\big)
-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)
-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta\\
&\le x^T(t)\Big[rM^2\sum_{k=1}^{r}Q_k-PA-AP\Big]x(t)+2\sum_{k=1}^{r}x^T(t)\big[PW_k-APD_k\big]f\big(x(t-\tau_k)\big)
+2\Big[\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)D_k^T\Big]P\Big[\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\Big]\\
&\quad-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)
-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta.
\end{aligned}$$

This can be rewritten as

$$\frac{d}{dt}V(x_t)\le y^T(t)\mu(P,Q_k)y(t)-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta,$$

where $y(t)=\big[x^T(t),f^T(x(t-\tau_1)),\ldots,f^T(x(t-\tau_r))\big]^T$ and $\mu(P,Q_k)$ is the matrix given in (29). Noting that $x(t)+\sum_{k=1}^{r}D_k f(x(t-\tau_k))=[I,D_1,\ldots,D_r]\,y(t)$, so that the first term of $V(x_t)$ equals $y^T(t)N(P)y(t)$ with $N(P)$ as in (30), we clearly have

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le y^T(t)\big[\mu(P,Q_k)+2\beta N(P)\big]y(t).$$

Hence, condition (28) implies that

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le 0.$$

This inequality leads to the following one:

$$V(x_t(\varphi))\le e^{-2\beta t}V(\varphi),\quad t\ge 0.$$

Combining (35) with the previous inequality, we get for $t\ge 0$ the estimate

$$\alpha_1\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t(\varphi))\le e^{-2\beta t}V(\varphi)\le\alpha_2 e^{-2\beta t}\|\varphi\|_{\tau_r}^2,$$

and we arrive at the conclusion

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\alpha_2}{\alpha_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0,$$

where $\alpha_1$ and $\alpha_2$ are given by (32) and (33). □

By Lemma 3, we have the following theorem.

Theorem 1. Consider system (8) and suppose that Assumption A is satisfied. If there exist positive definite matrices $P$, $Q_k$, $k=1,\ldots,r$, and a positive constant $\beta$ such that inequality (28) is satisfied, where $\mu(P,Q_k)$ and $N(P)$ are defined in (29) and (30) respectively, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\alpha_2}{\alpha_1}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\alpha_1$, $\alpha_2$ are defined in (32) and (33), respectively.

Proof. The result follows from Lemma 3 together with Lemma 2, by substituting $\mu=\sqrt{\alpha_2/\alpha_1}\,\|\varphi\|_{\tau_r}$ in (26). □


Remark 1. There is a trade-off between the estimate of the decay rate and the bound on the norm of the solutions. The decay rate can be chosen in the range $(0,\min\{-\ln\lambda/\tau_r,\beta\})$; if $\sigma$ is close to zero, we get a smaller $\gamma$ factor at the cost of a slow convergence rate, whereas if $\sigma$ is close to its upper bound, the $\gamma$ factor increases.
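The trade-off can be made concrete with a few lines of code (ours, not in the paper): the script below evaluates the $\gamma$ bound of Theorem 1 for several admissible $\sigma$, using $\lambda=0.1824$ from (17) and the constants $\alpha_1$, $\alpha_2$ obtained for the example of Section 4 (Table 1, Lemma 3 row).

```python
import numpy as np

alpha1, alpha2 = 43.5901, 82.6526          # Table 1 (Lemma 3 row)
lam, tau_r, beta = 0.1824, 1.0, 0.8168     # lambda from (17); largest feasible beta (Section 4)
alpha = -np.log(lam) / tau_r               # about 1.70

for sigma in (0.01, 0.2, 0.4, 0.6, 0.8):   # any sigma in (0, min{alpha, beta})
    gamma = np.sqrt(alpha2 / alpha1) / (1.0 - np.exp((sigma - alpha) * tau_r))
    print(f"sigma = {sigma:4.2f}  ->  gamma > {gamma:.3f}")
# Small sigma: slow decay but a tight gamma; sigma near its upper bound: fast decay, larger gamma.
```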

Remark 2. As seen above, the exponential estimate and exponential stability problems for system (8) can be formulated in terms of the linear matrix inequality (28) with (29) and (30). Obviously, it only makes sense to pose the problem in this form if the inequality can be solved efficiently and reliably; i.e., a key question related to the study of the LMI (28) with (29) and (30) is feasibility: testing whether or not there exist solutions $P$, $Q_k$, $k=1,\ldots,r$, for a given $\beta$ is called a feasibility problem. The LMI (28) with (29) and (30) is called infeasible if no solutions exist.
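For a fixed $\beta$, condition (28) is linear in $P$ and the $Q_k$, so the feasibility test can be posed as a semidefinite program. The sketch below is ours (the paper itself uses the MATLAB LMI toolbox) and formulates (28)-(30) with CVXPY; the function name and the small margin $\varepsilon$ used to enforce strict inequalities are our own choices.

```python
import cvxpy as cp
import numpy as np

def lmi_28_feasible(A, W, D, tau, M, beta, eps=1e-6):
    """Check feasibility of (28): find P > 0, Q_k > 0 with mu(P, Q_k) + 2*beta*N(P) < 0."""
    n, r = A.shape[0], len(W)
    P = cp.Variable((n, n), symmetric=True)
    Q = [cp.Variable((n, n), symmetric=True) for _ in range(r)]

    blocks = [[None] * (r + 1) for _ in range(r + 1)]        # block matrix mu of (29)
    blocks[0][0] = r * M**2 * sum(Q) - P @ A - A @ P
    for k in range(r):
        blocks[0][k + 1] = P @ W[k] - A @ P @ D[k]
        blocks[k + 1][0] = blocks[0][k + 1].T
        for j in range(r):
            blocks[k + 1][j + 1] = D[k].T @ P @ W[j] + W[k].T @ P @ D[j]
        blocks[k + 1][k + 1] = blocks[k + 1][k + 1] - np.exp(-2 * beta * tau[k]) * Q[k]
    mu = cp.bmat(blocks)

    E = np.hstack([np.eye(n)] + list(D))                      # [I, D_1, ..., D_r]
    lhs = mu + 2 * beta * (E.T @ P @ E)                       # mu + 2*beta*N(P), eq. (30)
    lhs = (lhs + lhs.T) / 2                                   # enforce symmetry numerically

    cons = [P >> eps * np.eye(n)]
    cons += [Qk >> eps * np.eye(n) for Qk in Q]
    cons += [lhs << -eps * np.eye(n * (r + 1))]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    ok = prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
    return ok, P.value, [Qk.value for Qk in Q]
```

A bisection over $\beta$ with such a test is one way to approach the largest decay parameter for which (28) stays feasible (cf. the value $\beta=0.8168$ reported in Section 4).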

If we construct another Lyapunov–Krasovskii functional, then the following Lemma 4 is immediate.

Lemma 4. Let the nonlinear time delay system (8) be given. If there exist positive definite matrices $P$, $Q_k$ and $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\tilde{\mu}(P,Q_k,R_k)+2\beta\tilde{N}(P)<0 \qquad (36)$$

holds, where

$$\tilde{\mu}(P,Q_k,R_k)=\begin{bmatrix}
H & 0 & \cdots & 0 & PW_1-APD_1 & \cdots & PW_r-APD_r\\
0 & -e^{-2\beta\tau_1}Q_1 & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & -e^{-2\beta\tau_r}Q_r & 0 & \cdots & 0\\
W_1^TP-D_1^TPA & 0 & \cdots & 0 & D_1^TPW_1+W_1^TPD_1-e^{-2\beta\tau_1}R_1 & \cdots & D_1^TPW_r+W_1^TPD_r\\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots\\
W_r^TP-D_r^TPA & 0 & \cdots & 0 & D_r^TPW_1+W_r^TPD_1 & \cdots & D_r^TPW_r+W_r^TPD_r-e^{-2\beta\tau_r}R_r
\end{bmatrix},$$

$$\text{with }H=\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big)-PA-AP, \qquad (37)$$

and

$$\tilde{N}(P)=[\,I,\underbrace{0,\ldots,0}_{r},D_1,\ldots,D_r\,]^T P\,[\,I,\underbrace{0,\ldots,0}_{r},D_1,\ldots,D_r\,], \qquad (38)$$

then for any initial condition (16), the solution $x(t;\varphi)$ of system (8) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\tilde{\alpha}_2}{\tilde{\alpha}_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (39)$$

where the positive constants $\tilde{\alpha}_1$ and $\tilde{\alpha}_2$ are defined as follows:

$$\tilde{\alpha}_1=\lambda_{\min}(P), \qquad (40)$$

$$\tilde{\alpha}_2=\lambda_{\max}(P)\Big(1+M\sum_{k=1}^{r}\|D_k\|\Big)+M\sum_{k=1}^{r}\tau_k\big[\lambda_{\max}(Q_k)+\lambda_{\max}(R_k)\big]. \qquad (41)$$

Here $\lambda_{\min}(P)$ and $\lambda_{\max}(P)$ denote the minimum and maximum eigenvalues of the positive definite matrix $P$.

Proof. Consider the Lyapunov–Krasovskii functional

$$V(x_t)=\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]^T P\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]+\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta, \qquad (42)$$

where $P$, $Q_k$ and $R_k$, $k=1,2,\ldots,r$, are the positive definite matrices of Lemma 4.

From (42), we obtain the following inequalities:

$$\tilde{\alpha}_1\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t)\le\tilde{\alpha}_2\|x_t\|_{\tau_r}^2, \qquad (43)$$

where $\tilde{\alpha}_1$ and $\tilde{\alpha}_2$ are, respectively, given by (40) and (41).

The time derivative of this functional along the trajectories of system (8) is

$$\begin{aligned}
\frac{d}{dt}V(x_t)
&=2\Big[x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big]^T P\Big[-Ax(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\Big]
+x^T(t)\Big[\sum_{k=1}^{r}Q_k\Big]x(t)-\sum_{k=1}^{r}x^T(t-\tau_k)e^{-2\beta\tau_k}Q_k x(t-\tau_k)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta
+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}R_k\Big]f\big(x(t)\big)-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}R_k f\big(x(t-\tau_k)\big)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta\\
&\le x^T(t)\Big[\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big)-PA-AP\Big]x(t)+2\sum_{k=1}^{r}x^T(t)\big[PW_k-APD_k\big]f\big(x(t-\tau_k)\big)
+2\Big[\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)D_k^T\Big]P\Big[\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\Big]\\
&\quad-\sum_{k=1}^{r}x^T(t-\tau_k)e^{-2\beta\tau_k}Q_k x(t-\tau_k)-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}R_k f\big(x(t-\tau_k)\big)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta.
\end{aligned}$$

This can be rewritten as

$$\frac{d}{dt}V(x_t)\le\tilde{y}^T(t)\tilde{\mu}(P,Q_k,R_k)\tilde{y}(t)-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta,$$

where $\tilde{y}(t)=\big[x^T(t),x^T(t-\tau_1),\ldots,x^T(t-\tau_r),f^T(x(t-\tau_1)),\ldots,f^T(x(t-\tau_r))\big]^T$ and $\tilde{\mu}(P,Q_k,R_k)$ is the matrix given in (37). Since the first term of $V(x_t)$ equals $\tilde{y}^T(t)\tilde{N}(P)\tilde{y}(t)$ with $\tilde{N}(P)$ as in (38), we clearly have

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le\tilde{y}^T(t)\big[\tilde{\mu}(P,Q_k,R_k)+2\beta\tilde{N}(P)\big]\tilde{y}(t).$$

Hence, condition (36) implies that

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le 0.$$

This inequality leads to the following one:

$$V(x_t(\varphi))\le e^{-2\beta t}V(\varphi),\quad t\ge 0.$$

Combining (43) with the previous inequality, we get for $t\ge 0$ the estimate

$$\tilde{\alpha}_1\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t(\varphi))\le e^{-2\beta t}V(\varphi)\le\tilde{\alpha}_2 e^{-2\beta t}\|\varphi\|_{\tau_r}^2,$$

and we arrive at the conclusion

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\tilde{\alpha}_2}{\tilde{\alpha}_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0,$$

where $\tilde{\alpha}_1$ and $\tilde{\alpha}_2$ are given by (40) and (41). □

By Lemma 4, we have the following result, similar to Theorem 1.

Theorem 2. Consider system (8) and suppose that Assumption A is satisfied. If there exist positive definite matrices $P$, $Q_k$ and $R_k$, $k=1,\ldots,r$, and a positive constant $\beta$ such that inequality (36) is satisfied, where $\tilde{\mu}(P,Q_k,R_k)$ and $\tilde{N}(P)$ are defined in (37) and (38) respectively, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\tilde{\alpha}_2}{\tilde{\alpha}_1}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\tilde{\alpha}_1$, $\tilde{\alpha}_2$ are defined in (40) and (41), respectively.

For system (9), if we construct the Lyapunov–Krasovskii functional

$$V(x_t)=\Big[x(t)+\sum_{k=1}^{r}D_k x(t-\tau_k)\Big]^T P\Big[x(t)+\sum_{k=1}^{r}D_k x(t-\tau_k)\Big]+\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta, \qquad (44)$$

then the following Corollary 2 is immediate.


Corollary 2. Let the nonlinear time delay system (9) be given. If there exist positive definite matrices $P$, $Q_k$ and $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\tilde{\mu}(P,Q_k,R_k)+2\beta\tilde{N}(P)<0 \qquad (45)$$

holds, where

$$\tilde{\mu}(P,Q_k,R_k)=\begin{bmatrix}
H & -APD_1 & \cdots & -APD_r & PW_1 & \cdots & PW_r\\
-D_1^TPA & -e^{-2\beta\tau_1}Q_1 & \cdots & 0 & D_1^TPW_1 & \cdots & D_1^TPW_r\\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots\\
-D_r^TPA & 0 & \cdots & -e^{-2\beta\tau_r}Q_r & D_r^TPW_1 & \cdots & D_r^TPW_r\\
W_1^TP & W_1^TPD_1 & \cdots & W_1^TPD_r & -e^{-2\beta\tau_1}R_1 & \cdots & 0\\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots\\
W_r^TP & W_r^TPD_1 & \cdots & W_r^TPD_r & 0 & \cdots & -e^{-2\beta\tau_r}R_r
\end{bmatrix},$$

$$\text{with }H=\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big)-PA-AP, \qquad (46)$$

and

$$\tilde{N}(P)=[\,I,D_1,\ldots,D_r,\underbrace{0,\ldots,0}_{r}\,]^T P\,[\,I,D_1,\ldots,D_r,\underbrace{0,\ldots,0}_{r}\,], \qquad (47)$$

then for any initial condition (16), the solution $x(t;\varphi)$ of system (9) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k x(t-\tau_k)\Big\|\le\sqrt{\frac{\alpha_2}{\alpha_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (48)$$

where the positive constants $\alpha_1$ and $\alpha_2$ are defined as follows:

$$\alpha_1=\lambda_{\min}(P), \qquad (49)$$

$$\alpha_2=\lambda_{\max}(P)\Big(1+\sum_{k=1}^{r}\|D_k\|\Big)+\sum_{k=1}^{r}\tau_k\big[\lambda_{\max}(Q_k)+\lambda_{\max}(R_k)\big]. \qquad (50)$$

Here $\lambda_{\min}(P)$ and $\lambda_{\max}(P)$ denote the minimum and maximum eigenvalues of the positive definite matrix $P$. By Corollary 2, we have the following result, similar to Theorem 2.

Theorem 3. Consider system (9) and suppose that Assumption A is satisfied. If there exist positive definite matrices $P$, $Q_k$ and $R_k$, $k=1,\ldots,r$, and a positive constant $\beta$ such that inequality (45) is satisfied, where $\tilde{\mu}(P,Q_k,R_k)$ and $\tilde{N}(P)$ are defined in (46) and (47) respectively, then for any initial condition (16) the solution $x(t;\varphi)$ of system (9) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\alpha_2}{\alpha_1}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\alpha_1$, $\alpha_2$ are defined in (49) and (50), respectively.

Rewrite system (8) in the following equivalent descriptor form:

$$\begin{cases}
\dot{y}(t)=-Ax(t)+\displaystyle\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big),\\[2mm]
0=-y(t)+x(t)+\displaystyle\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big).
\end{cases} \qquad (51)$$

We have the following results.

Lemma 5. Let the nonlinear time delay system (8) be given. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\mu'(P_1,P_2,P_3,Q_k)=\begin{bmatrix}
2P_3+rM^2\sum_{k=1}^{r}Q_k & -AP_1^T+P_2^T-P_3 & P_3D_1 & P_3D_2 & \cdots & P_3D_r\\
-P_1A+P_2-P_3^T & 2\beta P_1-2P_2 & P_1W_1+P_2D_1 & P_1W_2+P_2D_2 & \cdots & P_1W_r+P_2D_r\\
D_1^TP_3 & W_1^TP_1+D_1^TP_2 & -e^{-2\beta\tau_1}Q_1 & 0 & \cdots & 0\\
D_2^TP_3 & W_2^TP_1+D_2^TP_2 & 0 & -e^{-2\beta\tau_2}Q_2 & \cdots & 0\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
D_r^TP_3 & W_r^TP_1+D_r^TP_2 & 0 & 0 & \cdots & -e^{-2\beta\tau_r}Q_r
\end{bmatrix}<0 \qquad (52)$$

holds, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\alpha_2'}{\alpha_1'}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (53)$$

where the positive constants $\alpha_1'$ and $\alpha_2'$ are defined as follows:

$$\alpha_1'=\lambda_{\min}(P_1), \qquad (54)$$

$$\alpha_2'=\lambda_{\max}(P_1)\Big(1+M\sum_{k=1}^{r}\|D_k\|\Big)+M\sum_{k=1}^{r}\tau_k\lambda_{\max}(Q_k). \qquad (55)$$

Proof. Consider the following Lyapunov–Krasovskii functional:

$$\begin{aligned}
V(x_t)&=\big[y^T(t),x^T(t)\big]\begin{bmatrix}I&0\\0&0\end{bmatrix}\begin{bmatrix}P_1&0\\P_2&P_3\end{bmatrix}\begin{bmatrix}y(t)\\x(t)\end{bmatrix}+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta\\
&=y^T(t)P_1y(t)+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta, \qquad (56)
\end{aligned}$$

where $P_1$ is a positive definite matrix and $P_2$, $P_3$ are real matrices. This functional is degenerate, as is usual for descriptor systems [29]. From (56), we obtain the following inequalities:

$$\alpha_1'\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t)\le\alpha_2'\|x_t\|_{\tau_r}^2, \qquad (57)$$

where $\alpha_1'$ and $\alpha_2'$ are given by (54) and (55), respectively.

The time derivative of this functional along the trajectories of system (51) is

$$\begin{aligned}
\frac{d}{dt}V(x_t)&=2\big[y^T(t),x^T(t)\big]\begin{bmatrix}P_1&P_2\\0&P_3\end{bmatrix}\begin{bmatrix}\dot{y}(t)\\0\end{bmatrix}+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}Q_k\Big]f\big(x(t)\big)-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta\\
&=2\big[y^T(t),x^T(t)\big]\begin{bmatrix}P_1&P_2\\0&P_3\end{bmatrix}\begin{bmatrix}-Ax(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\\[1mm]-y(t)+x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\end{bmatrix}+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}Q_k\Big]f\big(x(t)\big)\\
&\quad-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}Q_k f\big(x(t+\theta)\big)\,d\theta\\
&\le x^T(t)\Big(2P_3+rM^2\sum_{k=1}^{r}Q_k\Big)x(t)+y^T(t)\big[2\beta P_1-2P_2\big]y(t)+2y^T(t)\big[-P_1A+P_2-P_3^T\big]x(t)\\
&\quad+2\sum_{k=1}^{r}y^T(t)\big[P_1W_k+P_2D_k\big]f\big(x(t-\tau_k)\big)+2\sum_{k=1}^{r}x^T(t)P_3D_k f\big(x(t-\tau_k)\big)\\
&\quad-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}Q_k f\big(x(t-\tau_k)\big)-2\beta V(x_t).
\end{aligned}$$

This can be rewritten as

$$\frac{d}{dt}V(x_t)\le z^T(t)\mu'(P_1,P_2,P_3,Q_k)z(t)-2\beta V(x_t),$$

where $z(t)=\big[x^T(t),y^T(t),f^T(x(t-\tau_1)),\ldots,f^T(x(t-\tau_r))\big]^T$ and $\mu'(P_1,P_2,P_3,Q_k)$ is the matrix given in (52).

Hence, condition (52) implies that

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le 0.$$

This inequality leads to the following one:

$$V(x_t(\varphi))\le e^{-2\beta t}V(\varphi),\quad t\ge 0.$$

Combining (57) with the previous inequality, we get for $t\ge 0$ the estimate

$$\alpha_1'\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t(\varphi))\le e^{-2\beta t}V(\varphi)\le\alpha_2' e^{-2\beta t}\|\varphi\|_{\tau_r}^2,$$

and we arrive at the conclusion

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\alpha_2'}{\alpha_1'}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0,$$

where $\alpha_1'$ and $\alpha_2'$ are given by (54) and (55). □

By Lemma 5, we have the following results similar to Theorem 2.

Theorem 4. Consider system (8) and suppose that Assumption A is satisfied. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that inequality (52) is satisfied, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\alpha_2'}{\alpha_1'}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\alpha_1'$, $\alpha_2'$ are defined in (54) and (55), respectively.

If we construct another Lyapunov–Krasovskii functional, then the following Lemma 6 is immediate.

Lemma 6. Let the nonlinear time delay system (8) be given. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\tilde{\mu}'(P_1,P_2,P_3,Q_k,R_k)=\begin{bmatrix}
H & -AP_1^T+P_2^T-P_3 & 0 & \cdots & 0 & P_3D_1 & P_3D_2 & \cdots & P_3D_r\\
-P_1A+P_2-P_3^T & 2\beta P_1-2P_2 & 0 & \cdots & 0 & P_1W_1+P_2D_1 & P_1W_2+P_2D_2 & \cdots & P_1W_r+P_2D_r\\
0 & 0 & -e^{-2\beta\tau_1}Q_1 & \cdots & 0 & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & -e^{-2\beta\tau_r}Q_r & 0 & 0 & \cdots & 0\\
D_1^TP_3 & W_1^TP_1+D_1^TP_2 & 0 & \cdots & 0 & -e^{-2\beta\tau_1}R_1 & 0 & \cdots & 0\\
D_2^TP_3 & W_2^TP_1+D_2^TP_2 & 0 & \cdots & 0 & 0 & -e^{-2\beta\tau_2}R_2 & \cdots & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots\\
D_r^TP_3 & W_r^TP_1+D_r^TP_2 & 0 & \cdots & 0 & 0 & 0 & \cdots & -e^{-2\beta\tau_r}R_r
\end{bmatrix}<0,$$

$$\text{with }H=2P_3+\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big), \qquad (58)$$

holds, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\tilde{\alpha}_2'}{\tilde{\alpha}_1'}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (59)$$

where the positive constants $\tilde{\alpha}_1'$ and $\tilde{\alpha}_2'$ are defined as follows:

$$\tilde{\alpha}_1'=\lambda_{\min}(P_1), \qquad (60)$$

$$\tilde{\alpha}_2'=\lambda_{\max}(P_1)\Big(1+M\sum_{k=1}^{r}\|D_k\|\Big)+\sum_{k=1}^{r}\tau_k\big[\lambda_{\max}(Q_k)+M\lambda_{\max}(R_k)\big]. \qquad (61)$$

Here $\lambda_{\min}(P_1)$ and $\lambda_{\max}(P_1)$ denote the minimum and maximum eigenvalues of the positive definite matrix $P_1$.

Proof. Consider the Lyapunov–Krasovskii functional

$$\begin{aligned}
V(x_t)&=\big[y^T(t),x^T(t)\big]\begin{bmatrix}I&0\\0&0\end{bmatrix}\begin{bmatrix}P_1&0\\P_2&P_3\end{bmatrix}\begin{bmatrix}y(t)\\x(t)\end{bmatrix}+\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta\\
&=y^T(t)P_1y(t)+\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta+\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta. \qquad (62)
\end{aligned}$$

The time derivative of this functional along the trajectories of system (51) is

$$\begin{aligned}
\frac{d}{dt}V(x_t)&=2\big[y^T(t),x^T(t)\big]\begin{bmatrix}P_1&P_2\\0&P_3\end{bmatrix}\begin{bmatrix}-Ax(t)+\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big)\\[1mm]-y(t)+x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\end{bmatrix}
+x^T(t)\Big[\sum_{k=1}^{r}Q_k\Big]x(t)-\sum_{k=1}^{r}x^T(t-\tau_k)e^{-2\beta\tau_k}Q_k x(t-\tau_k)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}x^T(t+\theta)e^{2\beta\theta}Q_k x(t+\theta)\,d\theta
+f^T\big(x(t)\big)\Big[\sum_{k=1}^{r}R_k\Big]f\big(x(t)\big)-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}R_k f\big(x(t-\tau_k)\big)\\
&\quad-2\beta\sum_{k=1}^{r}\int_{-\tau_k}^{0}f^T\big(x(t+\theta)\big)e^{2\beta\theta}R_k f\big(x(t+\theta)\big)\,d\theta\\
&\le x^T(t)\Big(2P_3+\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big)\Big)x(t)+y^T(t)\big[2\beta P_1-2P_2\big]y(t)+2y^T(t)\big[-P_1A+P_2-P_3^T\big]x(t)\\
&\quad+2\sum_{k=1}^{r}y^T(t)\big[P_1W_k+P_2D_k\big]f\big(x(t-\tau_k)\big)+2\sum_{k=1}^{r}x^T(t)P_3D_k f\big(x(t-\tau_k)\big)\\
&\quad-\sum_{k=1}^{r}x^T(t-\tau_k)e^{-2\beta\tau_k}Q_k x(t-\tau_k)-\sum_{k=1}^{r}f^T\big(x(t-\tau_k)\big)e^{-2\beta\tau_k}R_k f\big(x(t-\tau_k)\big)-2\beta V(x_t).
\end{aligned}$$

This can be rewritten as

$$\frac{d}{dt}V(x_t)\le z^T(t)\tilde{\mu}'(P_1,P_2,P_3,Q_k,R_k)z(t)-2\beta V(x_t),$$

where $z(t)=\big[x^T(t),y^T(t),x^T(t-\tau_1),\ldots,x^T(t-\tau_r),f^T(x(t-\tau_1)),\ldots,f^T(x(t-\tau_r))\big]^T$ and $\tilde{\mu}'(P_1,P_2,P_3,Q_k,R_k)$ is the matrix given in (58).

Hence, condition (58) implies that

$$\frac{d}{dt}V(x_t)+2\beta V(x_t)\le 0.$$

This inequality leads to the following one:

$$V(x_t(\varphi))\le e^{-2\beta t}V(\varphi),\quad t\ge 0.$$

It is then easy to obtain, for $t\ge 0$, the estimate

$$\tilde{\alpha}_1'\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|^2\le V(x_t(\varphi))\le e^{-2\beta t}V(\varphi)\le\tilde{\alpha}_2' e^{-2\beta t}\|\varphi\|_{\tau_r}^2,$$

and we arrive at the conclusion

$$\Big\|x(t)+\sum_{k=1}^{r}D_k f\big(x(t-\tau_k)\big)\Big\|\le\sqrt{\frac{\tilde{\alpha}_2'}{\tilde{\alpha}_1'}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0,$$

where $\tilde{\alpha}_1'$ and $\tilde{\alpha}_2'$ are given by (60) and (61). □

By Lemma 6, we have the following result, similar to Theorem 2.

Theorem 5. Consider system (8) and suppose that Assumption A is satisfied. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that inequality (58) is satisfied, then for any initial condition (16) the solution $x(t;\varphi)$ of system (8) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\tilde{\alpha}_2'}{\tilde{\alpha}_1'}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\tilde{\alpha}_1'$, $\tilde{\alpha}_2'$ are defined in (60) and (61), respectively.

Rewrite system (9) in the following equivalent descriptor form:

$$\begin{cases}
\dot{y}(t)=-Ax(t)+\displaystyle\sum_{k=1}^{r}W_k f\big(x(t-\tau_k)\big),\\[2mm]
0=-y(t)+x(t)+\displaystyle\sum_{k=1}^{r}D_k x(t-\tau_k).
\end{cases} \qquad (63)$$

For system (63), if we construct the Lyapunov–Krasovskii functional (62), then, similarly to the above proof, the following Corollary 3 is immediate.


Corollary 3. Let the nonlinear time delay system (9) be given. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that the inequality

$$\mu(P_1,P_2,P_3,Q_k,R_k)=\begin{bmatrix}
H & -AP_1^T+P_2^T-P_3 & P_3D_1 & \cdots & P_3D_r & 0 & \cdots & 0\\
-P_1A+P_2-P_3^T & 2\beta P_1-2P_2 & P_2D_1 & \cdots & P_2D_r & P_1W_1 & \cdots & P_1W_r\\
D_1^TP_3 & D_1^TP_2 & -e^{-2\beta\tau_1}Q_1 & \cdots & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots\\
D_r^TP_3 & D_r^TP_2 & 0 & \cdots & -e^{-2\beta\tau_r}Q_r & 0 & \cdots & 0\\
0 & W_1^TP_1 & 0 & \cdots & 0 & -e^{-2\beta\tau_1}R_1 & \cdots & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots\\
0 & W_r^TP_1 & 0 & \cdots & 0 & 0 & \cdots & -e^{-2\beta\tau_r}R_r
\end{bmatrix}<0,$$

$$\text{with }H=2P_3+\sum_{k=1}^{r}\big(Q_k+rM^2R_k\big), \qquad (64)$$

holds, then for any initial condition (16), the solution $x(t;\varphi)$ of system (9) satisfies the inequality

$$\Big\|x(t)+\sum_{k=1}^{r}D_k x(t-\tau_k)\Big\|\le\sqrt{\frac{\alpha_2}{\alpha_1}}\,e^{-\beta t}\|\varphi\|_{\tau_r},\quad t\ge 0, \qquad (65)$$

where the positive constants $\alpha_1$ and $\alpha_2$ are defined as follows:

$$\alpha_1=\lambda_{\min}(P_1), \qquad (66)$$

$$\alpha_2=\lambda_{\max}(P_1)\Big(1+M\sum_{k=1}^{r}\|D_k\|\Big)+\sum_{k=1}^{r}\tau_k\big[\lambda_{\max}(Q_k)+M\lambda_{\max}(R_k)\big]. \qquad (67)$$

By Corollary 3, we have the following result, similar to Theorem 2.

Theorem 6. Consider system (9) and suppose that Assumption A is satisfied. If there exist real matrices $P_2$, $P_3$ and symmetric positive definite matrices $P_1$ and $Q_k$, $R_k$, $k=1,2,\ldots,r$, and a positive constant $\beta$ such that inequality (64) is satisfied, then for any initial condition (16) the solution $x(t;\varphi)$ of system (9) satisfies the exponential estimate (10) with

$$0<\sigma<\min\{\alpha,\beta\}\quad\text{and}\quad\gamma>\frac{1}{1-e^{(\sigma-\alpha)\tau_r}}\sqrt{\frac{\alpha_2}{\alpha_1}},$$

where $\alpha=-\ln\lambda/\tau_r$, and the constants $\alpha_1$, $\alpha_2$ are defined in (66) and (67), respectively.

4. Numerical examples

In this section, we illustrate the correctness and effectiveness of our results and also compare them with those in Refs. [22,26,27].

Example. Consider the delayed neutral-type neural network (1) with parameters

$$A=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad D_1=\begin{bmatrix}-0.02&-0.01\\-0.03&-0.07\end{bmatrix},\quad D_2=\begin{bmatrix}0.05&0.02\\0.08&0.04\end{bmatrix},\quad W_1=\begin{bmatrix}0.08&0.02\\0.07&0.04\end{bmatrix},$$

$$W_2=\begin{bmatrix}-0.04&-0.01\\-0.05&-0.01\end{bmatrix},\quad I=[1,1]^T,\quad \beta=0.0001,\quad M=1,\quad \tau_1=0.5,\quad \tau_2=1.$$

It is easy to calculate that $\lambda=0.1824$ in (17), so Assumption A is obviously satisfied. In this case, all of the conditions in Lemmas 3–6 can be solved by using the LMI toolbox, and feasible solutions are obtained, which means that the exponential stability problem for the delayed neutral-type neural network is solved; the corresponding feasible solutions are shown in Table 1.
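The value $\lambda=0.1824$ can be reproduced directly from (17); the short check below (ours, using NumPy rather than the MATLAB toolbox) also reports $\alpha=-\ln\lambda/\tau_r$, the upper limit of the admissible decay rates $\sigma$ in Theorems 1–6 for this example.

```python
import numpy as np

D1 = np.array([[-0.02, -0.01], [-0.03, -0.07]])
D2 = np.array([[ 0.05,  0.02], [ 0.08,  0.04]])
M, tau_r = 1.0, 1.0

lam = M * (np.linalg.norm(D1, 2) + np.linalg.norm(D2, 2))   # eq. (17), spectral norms
print(f"lambda = {lam:.4f}")                                # about 0.1824 < 1: Assumption A holds
print(f"alpha  = {-np.log(lam) / tau_r:.4f}")               # about 1.70
```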

Taking a sigmoid function as the activation function, that is, $g(x)=1/(1+e^{-x})$, which satisfies (4) with $M=1$, and taking the initial state $\varphi=[0,3]^T$, the dynamics of the studied neural network are shown in Fig. 1.

The LMI conditions (28), (36), (52) and (58) remain feasible for $\beta=0.8168$ when $\tau_1=0.5$, $\tau_2=1$. For these values of $\beta\in(0,0.8168)$ and $\tau_1=0.5$, $\tau_2=1$, applying Theorems 1, 2, 4 and 5, we obtain the exponential estimates shown in Table 2.

Now, let us consider the special case with $D_1=O$ and $W_1=O$; that is, the multi-delayed neutral-type neural network in (6) or (7) reduces to the model in [22,27], with the other parameters unchanged. Then, by Corollaries 2 and 3, we can obtain feasible solutions for system (6); they are omitted here due to space limitations. It can be verified that the delay-dependent conditions of Theorem 1 in [22,27] cannot be satisfied for any $\tau>0$, i.e., those results cannot provide any bound on the maximum allowed delay $\tau_{\max}$. However, by Corollaries 2 and 3, the maximum allowed delay with respect to different values of the parameter $\beta$ is computed as shown in Table 3. The table shows that, with a proper $\beta$, one can obtain a maximum allowed delay asymptotically approaching infinity. It can thus be concluded that, although several delay-dependent results are derived in this paper, these results can achieve the same delay-tolerance target as some delay-independent results [26]. Therefore, Corollaries 2 and 3 are less conservative than the delay-dependent results in [22,27] and the delay-independent results in [26].


Table 1. The corresponding feasible solutions to the given parameters (matrices are listed row-wise as [row 1; row 2]).

Lemma 3: P = [45.4292, −1.3302; −1.3302, 44.5522], Q_1 = [18.1776, −0.5911; −0.5911, 17.7506], Q_2 = [18.0778, −0.5224; −0.5224, 17.8563]; $\alpha_1$ = 43.5901, $\alpha_2$ = 82.6526.

Lemma 4: P = [2.2442, −0.0690; −0.0690, 2.1972], Q_1 = Q_2 = [0.8569, −0.0095; −0.0095, 0.8469], R_1 = [0.5552, −0.0267; −0.0267, 0.5382], R_2 = [0.5503, −0.0233; −0.0233, 0.5418]; $\tilde{\alpha}_1$ = 2.1478, $\tilde{\alpha}_2$ = 4.8628.

Lemma 5: P_1 = [40.2795, −0.5269; −0.5269, 39.9106], P_2 = [8.1765, 3.4065; −3.2537, 8.3418], P_3 = [−40.3050, 3.8899; −2.7117, −39.6023], Q_1 = [16.1219, −0.2356; −0.2356, 15.8408], Q_2 = [16.1223, −0.2357; −0.2357, 15.8412]; $\alpha_1'$ = 39.5369, $\alpha_2'$ = 72.4544.

Lemma 6: P_1 = [201.3477, −4.7621; −4.7621, 195.0590], P_2 = [28.4071, 19.0691; −16.8845, 28.9437], P_3 = [−186.7549, 23.4297; −11.4138, −178.3116], Q_1 = [57.2591, −1.7960; −1.7960, 55.6088], Q_2 = [57.2615, −1.7961; −1.7961, 55.6110], R_1 = [51.2865, −1.8307; −1.8307, 47.8949], R_2 = [51.2694, −1.9656; −1.9656, 47.2081]; $\tilde{\alpha}_1'$ = 192.4968, $\tilde{\alpha}_2'$ = 406.8384.
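As a consistency check (ours, not in the paper), the values $\alpha_1$ and $\alpha_2$ reported for Lemma 3 in Table 1 can be recomputed from (32) and (33) with the feasible $P$, $Q_1$, $Q_2$ above and the example data ($M=1$, $\tau_1=0.5$, $\tau_2=1$):

```python
import numpy as np

P  = np.array([[45.4292, -1.3302], [-1.3302, 44.5522]])
Q1 = np.array([[18.1776, -0.5911], [-0.5911, 17.7506]])
Q2 = np.array([[18.0778, -0.5224], [-0.5224, 17.8563]])
D1 = np.array([[-0.02, -0.01], [-0.03, -0.07]])
D2 = np.array([[ 0.05,  0.02], [ 0.08,  0.04]])
M, tau = 1.0, (0.5, 1.0)

alpha1 = np.linalg.eigvalsh(P).min()                                           # eq. (32)
alpha2 = (np.linalg.eigvalsh(P).max()
          * (1 + M * (np.linalg.norm(D1, 2) + np.linalg.norm(D2, 2)))
          + M * (tau[0] * np.linalg.eigvalsh(Q1).max()
                 + tau[1] * np.linalg.eigvalsh(Q2).max()))                     # eq. (33)
print(f"alpha_1 = {alpha1:.4f}, alpha_2 = {alpha2:.4f}")   # about 43.5901 and 82.6526
```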

Fig. 1. The solution trajectory of system (1).

Table 2. The exponential estimates with respect to the given parameters (in every case $\sigma$ ranges over $(0,\beta)$).

              β = 0.8168    β = 0.1       β = 0.01      β = 0.001     β = 0.0001
Theorem 1     γ > 3.3424    γ > 3.3424    γ > 3.3424    γ > 3.3424    γ > 3.3424
Theorem 2     γ > 3.3424    γ > 3.3424    γ > 3.3424    γ > 3.3424    γ > 3.3424
Theorem 4     γ > 3.3291    γ > 3.3291    γ > 3.3291    γ > 3.3291    γ > 3.3291
Theorem 5     γ > 3.3291    γ > 3.3291    γ > 3.3291    γ > 3.3291    γ > 1.7782

Table 3. The feasible maximum delay with respect to the given parameters.

              β = 1         β = 0.1             β = 0.01             β = 0.001              β → 0
Corollary 2   Infeasible    τ_max = 19.8329     τ_max = 199.6817     τ_max = 1998.0949      τ_max → ∞
Corollary 3   Infeasible    τ_max = 21.4643     τ_max = 219.2480     τ_max = 2196.3495      τ_max → ∞
[22,27]       Infeasible for any τ > 0


5. Conclusions

In this paper, we have studied exponential estimates for the rate of convergence and for the norm of the solution of neutral-type neural networks with multiple delays. First, we have derived exponential estimates for the solution of non-homogeneous difference equations evolving in continuous time, which are not discussed in the known references. Second, some novel global exponential stability conditions have been obtained in terms of linear matrix inequalities by constructing several different Lyapunov–Krasovskii functionals combined, in some cases, with a descriptor transformation approach. The derived results are less conservative and restrictive than the known results. A numerical example has also been presented to demonstrate the validity and correctness of the obtained criteria. Our results can be extended to the case of neutral-type neural networks with uncertainties and with commensurate delays; these extensions will be studied in future work.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant 61170249, in part by the Research Fund of Preferential Development Domain for the Doctoral Program of Ministry of Education of China under Grant 20110191130005, in part by the Talents of Science and Technology Promote Plan, Chongqing Science & Technology Commission, and in part by the Program for Changjiang Scholars.

References

[1] M. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. Syst. Man Cybern. 13 (5) (1983) 815–826.
[2] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (8) (1982) 2554–2558.
[3] L.O. Chua, L. Yang, Cellular neural networks: theory, IEEE Trans. Circuits Syst. 35 (10) (1988) 1257–1272.
[4] B. Kosko, Neural Networks and Fuzzy Systems, Prentice-Hall, Upper Saddle River, NJ, 1992.
[5] X.F. Liao, C.G. Li, K.W. Wong, Criteria for exponential stability of Cohen–Grossberg neural networks, Neural Netw. 17 (10) (2004) 1401–1414.
[6] X.F. Liao, G.R. Chen, E.N. Sanchez, Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach, Neural Netw. 15 (7) (2002) 855–866.
[7] X.F. Liao, G.R. Chen, E.N. Sanchez, LMI-based approach for asymptotically stability analysis of delayed neural networks, IEEE Trans. Circuits Syst. I 49 (7) (2002) 1033–1039.
[8] X.F. Liao, K.W. Wong, Robust stability of interval bidirectional associative memory neural network with time delays, IEEE Trans. Syst. Man Cybern. B 34 (2) (2004) 1142–1154.
[9] C.D. Li, X.F. Liao, New algebraic conditions for global exponential stability of delayed recurrent neural networks, Neurocomputing 64 (2005) 319–333.
[10] C.D. Li, X.F. Liao, Delay-dependent and delay-independent stability criteria for cellular neural networks with delays, Int. J. Bifur. Chaos 16 (11) (2006) 3323–3340.
[11] J. Cao, S. Zhong, Y. Hu, Global stability analysis for a class of neutral networks with varying delays and control input, Appl. Math. Comput. 189 (2) (2007) 1480–1490.
[12] K. Gopalsamy, Stability and Oscillations in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Boston, 1992.
[13] A. Bellen, N. Guglielmi, A.E. Ruehli, Methods for linear systems of circuit delay differential equations of neutral type, IEEE Trans. Circuits Syst. 76 (1) (1999) 212–215.
[14] R.K. Brayton, Small signal stability criterion for networks containing lossless transmission lines, IBM J. Res. Dev. 12 (1968) 431–440.
[15] J. Hale, S.M.V. Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993.
[16] D. Yue, Q.L. Han, A delay-dependent stability criterion of neutral systems and its application to a partial element equivalent circuit model, IEEE Trans. Circuits Syst. II Exp. Briefs 51 (12) (2004) 685–689.
[17] P. Fu, S. Niculescu, J. Chen, Stability of linear neutral time-delay systems: exact conditions via matrix pencil solutions, IEEE Trans. Autom. Control 51 (6) (2006) 1063–1069.
[18] S. Niculescu, Delay Effects on Stability: A Robust Approach, Springer-Verlag, Germany, 2001.
[19] M. Wu, Y. He, J.H. She, New delay-dependent stability criteria and stabilizing method for neutral systems, IEEE Trans. Autom. Control 49 (12) (2004) 2266–2271.
[20] Y. He, Q.G. Wang, L.H. Xie, C. Lin, Further improvement of free-weighting matrices technique for systems with time-varying delay, IEEE Trans. Autom. Control 52 (2) (2007) 293–299.
[21] D. Tank, J.J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit, IEEE Trans. Circuits Syst. 33 (1986) 533–541.
[22] S. Xu, J. Lam, W.C. Ho, Y. Zou, Delay-dependent exponential stability for a class of neural networks with time delays, J. Comput. Appl. Math. 183 (1) (2005) 16–28.
[23] C.J. Cheng, T.L. Liao, J.J. Yan, C.C. Hwang, Globally asymptotic stability of a class of neutral-type neural networks with delays, IEEE Trans. Syst. Man Cybern. B 36 (5) (2006) 1191–1195.
[24] H.G. Zhang, Z.W. Liu, G.B. Huang, Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique, IEEE Trans. Syst. Man Cybern. B 40 (6) (2010) 1480–1490.
[25] C.H. Lien, K.W. Yu, Y.F. Lin, Y.J. Chuang, L.Y. Chuang, Global exponential stability for uncertain delayed neural networks of neutral-type with mixed time delays, IEEE Trans. Syst. Man Cybern. B 38 (3) (2008) 709–719.
[26] Z. Orman, New sufficient conditions for global stability of neutral-type neural networks with time delays, Neurocomputing 97 (2012) 141–148.
[27] H.H. Mai, X.F. Liao, C.D. Li, A semi-free weighting matrices approach for neutral-type delayed neural networks, J. Comput. Appl. Math. 225 (2009) 44–55.
[28] R. Bellman, K.L. Cooke, Differential-Difference Equations, Academic Press, New York, 1963.
[29] S.J. Deng, X.F. Liao, S.T. Guo, Asymptotic stability analysis of certain neutral differential equations: a descriptor system approach, Math. Comput. Simul. 79 (2009) 2981–2993.

Xiaofeng Liao received the B.S. and M.S. degrees in mathematics from Sichuan University, Chengdu, China, in 1986 and 1992, respectively, and the Ph.D. degree in circuits and systems from the University of Electronic Science and Technology of China in 1997. From 1999 to 2012, he was a professor at Chongqing University. At present, he is a professor at Southwest University and the Dean of the School of Electronic and Information Engineering. He is also a Yangtze River Scholar of the Ministry of Education of China. From November 1997 to April 1998, he was a research associate at the Chinese University of Hong Kong. From October 1999 to October 2000, he was a research associate at the City University of Hong Kong. From March 2001 to June 2001 and from March 2002 to June 2002, he was a senior research associate at the City University of Hong Kong. From March 2006 to April 2007, he was a research fellow at the City University of Hong Kong.

Professor Liao holds 4 patents and has published 4 books and over 300 international journal and conference papers. His current research interests include neural networks, nonlinear dynamical systems, bifurcation and chaos, and cryptography.


Yilu Liu received the B.S. degree in information security from Chongqing University, Chongqing, China, in 2011. At present, she is pursuing a master's degree in computer architecture at Chongqing University, China.

Her research areas include neural networks, cryptography, chaos and nonlinear dynamical systems.

Huiwei Wang received the B.Sc. degree in information and computing science and the M.Sc. degree in computer application, both from Chongqing Jiaotong University, Chongqing, China, in 2008 and 2011, respectively. Currently, he is working towards the Ph.D. degree at the College of Computer Science, Chongqing University, Chongqing, China. From October 2012 to April 2013, he was a program aide at Texas A&M University at Qatar, Doha, Qatar. His research interests involve neural networks, multi-agent networks, complex networks and wireless sensor networks.

Tingwen Huang received his B.S. degree in mathematics from Southwest Normal University, Chongqing, China, in 1990, M.S. degree in applied mathematics from Sichuan University, Chengdu, China, in 1993, and Ph.D. degree in mathematics from Texas A&M University, College Station, Texas, USA, in 2002.

He was a lecturer at Jiangsu University, China, from 1994 to 1998, and a Visiting Assistant Professor at Texas A&M University, College Station, USA, from January 2003 to July 2003. From August 2003 to June 2009 he was an Assistant Professor, and since July 2009 he has been an Associate Professor, at Texas A&M University at Qatar, Doha, Qatar.

His research areas include neural networks, complex networks, chaos and dynamics of systems, and operator semigroups and their applications.
