
Neural Networks 44 (2013) 64–71


A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks

Ozlem Faydasicok a, Sabri Arik b,∗

a Istanbul University, Department of Mathematics, 34134 Vezneciler, Istanbul, Turkey
b Isik University, Department of Electrical and Electronics Engineering, 34980 Sile, Istanbul, Turkey

Article info

Article history: Received 18 October 2012; Revised and accepted 21 March 2013

Keywords: Interval matrices; Robust stability; Delayed neural networks; Lyapunov functionals; Homeomorphic mapping

Abstract

The main problem in the analysis of robust stability of neural networks is to find an upper bound on the norm of the intervalized interconnection matrices of the network. In the previous literature, three major upper bound norms for the intervalized interconnection matrices have been reported, and they have been successfully applied to derive new sufficient conditions for the robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound and using the stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are shown to be new, and they can be considered alternatives to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

In this paper, we consider the neural network model whose dynamical behavior is described by the following set of nonlinear differential equations:

$$\frac{dx_i(t)}{dt} = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j)) + u_i, \quad i = 1, 2, \ldots, n \qquad (1)$$

where $n$ is the number of neurons, $x_i(t)$ denotes the state of neuron $i$ at time $t$, $f_i(\cdot)$ denote the activation functions, $a_{ij}$ and $b_{ij}$ denote the strengths of connectivity between neurons $j$ and $i$ at time $t$ and $t - \tau_j$, respectively; $\tau_j$ represents the time delay required in transmitting a signal from neuron $j$ to neuron $i$, $u_i$ is the constant input to neuron $i$, and $c_i$ is the charging rate of neuron $i$.

∗ Corresponding author. E-mail addresses: [email protected] (O. Faydasicok), [email protected], [email protected] (S. Arik).


The neural network model defined by (1) can be written in the matrix–vector form

$$\dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + u \qquad (2)$$

where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T \in \mathbb{R}^n$, $A = (a_{ij})_{n\times n}$, $B = (b_{ij})_{n\times n}$, $C = \mathrm{diag}(c_i > 0)$, $u = (u_1, u_2, \ldots, u_n)^T \in \mathbb{R}^n$, $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T \in \mathbb{R}^n$ and $f(x(t-\tau)) = (f_1(x_1(t-\tau_1)), f_2(x_2(t-\tau_2)), \ldots, f_n(x_n(t-\tau_n)))^T \in \mathbb{R}^n$.
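Before proceeding, a minimal numerical sketch may make the model concrete: the snippet below integrates model (2) with a forward Euler scheme and a single constant delay. All parameter values, and the choice of tanh as a slope-bounded activation, are illustrative assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

# Forward Euler integration of model (2) with one constant delay tau.
# C, A, B, u, tau and the tanh activation are invented for illustration.
n, tau, dt, steps = 2, 1.0, 0.001, 20000
C = np.diag([2.0, 2.0])
A = np.array([[0.5, -0.3], [0.2, 0.4]])
B = np.array([[0.1, 0.2], [-0.1, 0.3]])
u = np.array([0.5, -0.2])
f = np.tanh                       # slope-bounded: f in K with k_i = 1

d = int(tau / dt)                 # delay measured in integration steps
x = np.zeros((steps + d, n))      # state history
x[:d] = np.array([1.0, -1.0])     # constant initial function on [-tau, 0]

for t in range(d, steps + d - 1):
    dx = -C @ x[t] + A @ f(x[t]) + B @ f(x[t - d]) + u   # right-hand side of (2)
    x[t + 1] = x[t] + dt * dx

print("state near the end of the run:", x[-1])
```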

The main approach to establishing the desired equilibrium and stability properties of dynamical neural networks is first to determine the character of the activation functions $f_i$, and then to impose some constraint conditions on the interconnection weight matrices $A$ and $B$.

The activation functions $f_i$ are assumed to be slope-bounded in the sense that there exist positive constants $k_i$ such that

$$0 \le \frac{f_i(x) - f_i(y)}{x - y} \le k_i, \quad i = 1, 2, \ldots, n, \quad \forall x, y \in \mathbb{R}, \ x \neq y.$$

This class of functions will be denoted by $f \in \mathcal{K}$. Functions of this class need not be bounded, differentiable, or monotonically increasing. In the recent literature, many authors have considered unbounded but slope-bounded activation functions in the equilibrium and stability analysis of different types of neural networks and presented various robust stability conditions for the neural network models considered (Baese, Koshkouei, Emmett, & Goodall, 2009; Balasubramaniam & Ali, 2010; Cao, 2001; Cao & Ho, 2005; Cao, Huang, & Qu, 2005; Cao & Wang, 2005; Deng, Hua, Liu, Peng, & Fei, 2011; Ensari & Arik, 2010; Faydasicok & Arik, 2012; Guo & Huang, 2009; Han, Kao, & Wang, 2011; Huang, Ho, & Qu, 2007; Huang, Li, Mohamad, & Lu, 2009; Kao, Guo, Wang, & Sun, 2012; Kwon & Park, 2008; Liao & Wong, 2004; Liu, Han, & Li, 2009; Lou, Ye, & Cui, 2012; Mahmoud & Ismail, 2010; Ozcan & Arik, 2006; Pan, Wang, & Hu, 2011; Qi, 2007; Shao, Huang, & Zhou, 2010; Shen & Wang, 2012; Shen, Wang, & Liu, 2011; Singh, 2007; Wang, Liu, Liu, & Shi, 2010; Wu, Park, Su, & Chu, 2012; Zeng & Wang, 2009; Zhang, Liu, & Huang, 2010; Zhang, Yang, & Huang, 2011; Zhou & Wan, 2010). It is well known from Brouwer's fixed point theorem that an equilibrium point of a neural network always exists if the activation functions are bounded. If the activation functions are not bounded, however, the existence of an equilibrium point cannot always be guaranteed. Therefore, in the next sections, we first focus on investigating the existence of a unique equilibrium point for neural networks, which is necessary for the global robust asymptotic stability of the neural network model (1).

When we desire to employ a stable neural network that is robust against deviations in the values of the elements of the non-delayed and delayed interconnection matrices of the neural system, we need to know the exact parameter intervals for these matrices. A standard approach to formulating this problem is to intervalize the system matrices $A = (a_{ij})$, $B = (b_{ij})$ and $C = \mathrm{diag}(c_i > 0)$ as follows:

$$C_I := \{C = \mathrm{diag}(c_i) : 0 < \underline{C} \le C \le \overline{C}, \text{ i.e., } 0 < \underline{c}_i \le c_i \le \overline{c}_i, \ \forall i\}$$
$$A_I := \{A = (a_{ij}) : \underline{A} \le A \le \overline{A}, \text{ i.e., } \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}, \ i, j = 1, 2, \ldots, n\} \qquad (3)$$
$$B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}.$$

In most robust stability analyses of the neural network model (1), the norms of the matrices $A = (a_{ij})$ and $B = (b_{ij})$ inevitably enter the stability conditions. When the matrices $A$ and $B$ are given by the parameter ranges defined by (3), then, in order to express the exact involvement of the norms of $A$ and $B$ in the stability conditions, we need an upper bound on the norms of these matrices. In the previous literature, three different upper bounds on the norms of $A$ and $B$ have been reported. A key contribution of the current paper is to define a new upper bound on the norms of $A$ and $B$.

2. Preliminaries

In this section, we first review some basic norms of vectors and matrices. Let $v = (v_1, v_2, \ldots, v_n)^T \in \mathbb{R}^n$. The three commonly used vector norms $\|v\|_1$, $\|v\|_2$, $\|v\|_\infty$ are defined as

$$\|v\|_1 = \sum_{i=1}^{n} |v_i|, \qquad \|v\|_2 = \left(\sum_{i=1}^{n} v_i^2\right)^{1/2}, \qquad \|v\|_\infty = \max_{1 \le i \le n} |v_i|.$$

If $Q = (q_{ij})_{n\times n}$, then $\|Q\|_1$, $\|Q\|_2$ and $\|Q\|_\infty$ are defined as

$$\|Q\|_1 = \max_{1 \le i \le n} \sum_{j=1}^{n} |q_{ji}|, \qquad \|Q\|_2 = \left[\lambda_{\max}(Q^T Q)\right]^{1/2}, \qquad \|Q\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |q_{ij}|.$$
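As a quick sanity check, these definitions can be evaluated directly in a few lines and compared against numpy's built-in norms; the matrix below is arbitrary.

```python
import numpy as np

Q = np.array([[1.0, -2.0], [3.0, 4.0]])

norm1   = np.abs(Q).sum(axis=0).max()                 # max absolute column sum
norm2   = np.sqrt(np.linalg.eigvalsh(Q.T @ Q).max())  # [lambda_max(Q^T Q)]^(1/2)
norminf = np.abs(Q).sum(axis=1).max()                 # max absolute row sum

assert np.isclose(norm1,   np.linalg.norm(Q, 1))
assert np.isclose(norm2,   np.linalg.norm(Q, 2))
assert np.isclose(norminf, np.linalg.norm(Q, np.inf))
```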

We use the following notation. For a vector $v = (v_1, v_2, \ldots, v_n)^T$, $|v|$ denotes $|v| = (|v_1|, |v_2|, \ldots, |v_n|)^T$. For any real matrix $Q = (q_{ij})_{n\times n}$, $|Q|$ denotes $|Q| = (|q_{ij}|)_{n\times n}$, and $\lambda_m(Q)$ and $\lambda_M(Q)$ denote the minimum and maximum eigenvalues of $Q$, respectively. If $Q = (q_{ij})_{n\times n}$ is a symmetric matrix, then $Q > 0$ ($\ge 0$) means that $Q$ is positive definite (positive semidefinite), i.e., $Q$ has all positive (nonnegative) eigenvalues. Let $P = (p_{ij})_{n\times n}$ and $Q = (q_{ij})_{n\times n}$ be two positive definite matrices. Then $P < Q$ means that $v^T P v < v^T Q v$ for every real vector $v = (v_1, v_2, \ldots, v_n)^T$.

We can now present one of the main results of this paper in the following lemma, which formulates a new upper bound on the norm of interval matrices:

Lemma 1. Let $B$ be any real matrix defined by $B \in B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}$. Define $B^* = \frac{1}{2}(\overline{B} + \underline{B})$ and $B_* = \frac{1}{2}(\overline{B} - \underline{B})$. Let

$$\sigma_1(B) = \sqrt{\left\| \, |B^{*T} B^*| + 2|B^{*T}| B_* + B_*^T B_* \, \right\|_2}.$$

Then the following inequality holds:

$$\|B\|_2 \le \sigma_1(B).$$

Proof. If $B \in B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}$, then each $b_{ij}$ can be expressed as

$$b_{ij} = \frac{1}{2}(\overline{b}_{ij} + \underline{b}_{ij}) + \frac{1}{2}\delta_{ij}(\overline{b}_{ij} - \underline{b}_{ij}), \quad -1 \le \delta_{ij} \le 1.$$

Let $\tilde{B} = (\tilde{b}_{ij})_{n\times n}$ with $\tilde{b}_{ij} = \frac{1}{2}\delta_{ij}(\overline{b}_{ij} - \underline{b}_{ij})$. Then $B$ can be expressed as

$$B = \frac{1}{2}(\overline{B} + \underline{B}) + \tilde{B} = B^* + \tilde{B}.$$

For any vector $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, we can write

$$x^T B^T B x = x^T (B^* + \tilde{B})^T (B^* + \tilde{B}) x = x^T B^{*T} B^* x + 2 x^T B^{*T} \tilde{B} x + x^T \tilde{B}^T \tilde{B} x \le |x^T| \, |B^{*T} B^*| \, |x| + 2 |x^T| \, |B^{*T}| \, |\tilde{B}| \, |x| + |x^T| \, |\tilde{B}^T| \, |\tilde{B}| \, |x|.$$

Using the property $|\tilde{b}_{ij}| \le \frac{1}{2}(\overline{b}_{ij} - \underline{b}_{ij})$, $i, j = 1, 2, \ldots, n$, we directly get

$$2 |x^T| \, |B^{*T}| \, |\tilde{B}| \, |x| \le 2 |x^T| \, |B^{*T}| B_* |x| \quad \text{and} \quad |x^T| \, |\tilde{B}^T| \, |\tilde{B}| \, |x| \le |x^T| B_*^T B_* |x|.$$

Hence, we can write

$$x^T B^T B x \le |x^T| \left( |B^{*T} B^*| + 2|B^{*T}| B_* + B_*^T B_* \right) |x| \le \left\| \, |B^{*T} B^*| + 2|B^{*T}| B_* + B_*^T B_* \, \right\|_2 \|x\|_2^2,$$

implying that

$$\|B\|_2^2 \le \left\| \, |B^{*T} B^*| + 2|B^{*T}| B_* + B_*^T B_* \, \right\|_2,$$

or $\|B\|_2 \le \sigma_1(B)$. Hence, the proof of Lemma 1 is complete. □

Note that if $A \in A_I := \{A = (a_{ij}) : \underline{A} \le A \le \overline{A}, \text{ i.e., } \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}, \ i, j = 1, 2, \ldots, n\}$, then, for $A^* = \frac{1}{2}(\overline{A} + \underline{A})$ and $A_* = \frac{1}{2}(\overline{A} - \underline{A})$, we can write $\|A\|_2 \le \sigma_1(A)$, where

$$\sigma_1(A) = \sqrt{\left\| \, |A^{*T} A^*| + 2|A^{*T}| A_* + A_*^T A_* \, \right\|_2}.$$

The result of Lemma 1 will play a key role in determining the robust stability conditions for the neural network model (1).
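A small computational sketch of Lemma 1 may help: the helper below forms $\sigma_1(B)$ from the interval endpoints (named B_lo and B_hi here for $\underline{B}$ and $\overline{B}$) and checks the bound $\|B\|_2 \le \sigma_1(B)$ on randomly sampled members of the interval. The helper name and the test data are this sketch's own.

```python
import numpy as np

def sigma1(B_lo, B_hi):
    """Upper bound of Lemma 1 on ||B||_2 over the interval [B_lo, B_hi]."""
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)   # B^*, B_*
    M = np.abs(Bs.T @ Bs) + 2.0 * np.abs(Bs.T) @ Bd + Bd.T @ Bd
    return np.sqrt(np.linalg.norm(M, 2))                # spectral norm of M

# Monte-Carlo check of ||B||_2 <= sigma1(B) on random members of the interval.
rng = np.random.default_rng(0)
B_lo = rng.uniform(-2.0, 0.0, (4, 4))
B_hi = B_lo + rng.uniform(0.0, 2.0, (4, 4))
bound = sigma1(B_lo, B_hi)
for _ in range(1000):
    B = rng.uniform(B_lo, B_hi)          # random B with B_lo <= B <= B_hi
    assert np.linalg.norm(B, 2) <= bound + 1e-9
```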

In the following, we recall the previous literature results that state various upper bounds on the norms of interval matrices:

Lemma 2 (Cao et al., 2005). Let $B$ be any real matrix defined by $B \in B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}$. Define $B^* = \frac{1}{2}(\overline{B} + \underline{B})$ and $B_* = \frac{1}{2}(\overline{B} - \underline{B})$. Let

$$\sigma_2(B) = \|B^*\|_2 + \|B_*\|_2.$$

Then the following inequality holds:

$$\|B\|_2 \le \sigma_2(B).$$

Lemma 3 (Ensari & Arik, 2010). Let $B$ be any real matrix defined by $B \in B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}$. Define $B^* = \frac{1}{2}(\overline{B} + \underline{B})$ and $B_* = \frac{1}{2}(\overline{B} - \underline{B})$. Let

$$\sigma_3(B) = \sqrt{\|B^*\|_2^2 + \|B_*\|_2^2 + 2\|B_*^T |B^*|\|_2}.$$

Then the following inequality holds:

$$\|B\|_2 \le \sigma_3(B).$$

Lemma 4 (Singh, 2007). Let $B$ be any real matrix defined by $B \in B_I := \{B = (b_{ij}) : \underline{B} \le B \le \overline{B}, \text{ i.e., } \underline{b}_{ij} \le b_{ij} \le \overline{b}_{ij}, \ i, j = 1, 2, \ldots, n\}$. Define $\hat{B} = (\hat{b}_{ij})_{n\times n}$ with $\hat{b}_{ij} = \max\{|\underline{b}_{ij}|, |\overline{b}_{ij}|\}$. Let

$$\sigma_4(B) = \|\hat{B}\|_2.$$

Then the following inequality holds:

$$\|B\|_2 \le \sigma_4(B).$$
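For comparison, the bounds of Lemmas 2–4 can be coded in the same style (again with this sketch's own helper names). Since all four quantities are only upper bounds, none dominates the others for every interval, which is what motivates the unified results of Section 5.

```python
import numpy as np

spec = lambda M: np.linalg.norm(M, 2)        # spectral norm ||.||_2

def sigma2(B_lo, B_hi):                      # Lemma 2 (Cao et al., 2005)
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)
    return spec(Bs) + spec(Bd)

def sigma3(B_lo, B_hi):                      # Lemma 3 (Ensari & Arik, 2010)
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)
    return np.sqrt(spec(Bs)**2 + spec(Bd)**2 + 2.0 * spec(Bd.T @ np.abs(Bs)))

def sigma4(B_lo, B_hi):                      # Lemma 4 (Singh, 2007)
    return spec(np.maximum(np.abs(B_lo), np.abs(B_hi)))

rng = np.random.default_rng(0)
B_lo = rng.uniform(-2.0, 0.0, (4, 4))
B_hi = B_lo + rng.uniform(0.0, 2.0, (4, 4))
print([round(s(B_lo, B_hi), 4) for s in (sigma2, sigma3, sigma4)])
```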

The following results are also important in the context of this paper:

Lemma 5 (Ensari & Arik, 2010). Let $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$. If

$$A \in A_I := \{A = (a_{ij}) : \underline{A} \le A \le \overline{A}, \text{ i.e., } \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}, \ i, j = 1, 2, \ldots, n\}$$

then, for any positive diagonal matrix $P$, the following inequality holds:

$$x^T (PA + A^T P) x \le -|x^T| S |x|$$

where $S = (s_{ij})$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$.

Lemma 6 (Qi, 2007). Let $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$. If

$$A \in A_I := \{A = (a_{ij}) : \underline{A} \le A \le \overline{A}, \text{ i.e., } \underline{a}_{ij} \le a_{ij} \le \overline{a}_{ij}, \ i, j = 1, 2, \ldots, n\}$$

then, for any positive diagonal matrix $P$, the following inequality holds:

$$x^T (PA + A^T P) x \le x^T \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) x$$

where $A^* = \frac{1}{2}(\overline{A} + \underline{A})$ and $A_* = \frac{1}{2}(\overline{A} - \underline{A})$.

In what follows, we restate the result that is the key factor in proving the existence and uniqueness part of our stability conditions:

Lemma 7 (Ensari & Arik, 2010). If $H(x) \in C^0$ satisfies the following conditions:
(i) $H(x) \neq H(y)$ for all $x \neq y$,
(ii) $\|H(x)\| \to \infty$ as $\|x\| \to \infty$,
then $H(x)$ is a homeomorphism of $\mathbb{R}^n$.

3. Existence and uniqueness analysis of equilibrium point

In this section, we obtain new sufficient conditions that ensure the existence and uniqueness of the equilibrium point of the neural network model (2).

We first present the following result:

Theorem 1. For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) has a unique equilibrium point for each $u$ if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi_1 = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_1(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$, $B^* = \frac{1}{2}(\overline{B}+\underline{B})$ and $\sigma_1(B) = \sqrt{\| \, |B^{*T}B^*| + 2|B^{*T}|B_* + B_*^T B_* \, \|_2}$ as in Lemma 1.
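Computationally, the condition of Theorem 1 can be tested directly: assemble $\Phi_1$ from the interval data and inspect its smallest eigenvalue. The sketch below assumes $\underline{C}$, $K$ and $P$ are supplied as diagonal numpy matrices; all helper names are ours.

```python
import numpy as np

def sigma1(B_lo, B_hi):
    # sigma_1 of Lemma 1; B_lo, B_hi are the interval endpoints.
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)
    M = np.abs(Bs.T @ Bs) + 2.0 * np.abs(Bs.T) @ Bd + Bd.T @ Bd
    return np.sqrt(np.linalg.norm(M, 2))

def phi1(C_lo, A_lo, A_hi, B_lo, B_hi, K, P):
    """Test matrix of Theorem 1; Phi_1 > 0 certifies a unique equilibrium."""
    n = P.shape[0]
    As, Ad = 0.5 * (A_hi + A_lo), 0.5 * (A_hi - A_lo)   # A^*, A_*
    term_A = P @ As + As.T @ P + np.linalg.norm(P @ Ad + Ad.T @ P, 2) * np.eye(n)
    return (2.0 * C_lo @ P @ np.linalg.inv(K) - term_A
            - 2.0 * np.linalg.norm(P, 2) * sigma1(B_lo, B_hi) * np.eye(n))

def is_positive_definite(M):
    # Phi_1 is symmetric here (C_lo, P, K are diagonal), so the smallest
    # eigenvalue decides definiteness.
    return np.linalg.eigvalsh(M).min() > 0
```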

Proof. It has been customary to make use of the result of Lemma 7 to prove the existence and uniqueness of the equilibrium point. For neural system (2), we define the map

$$H(x) = -Cx + Af(x) + Bf(x) + u. \qquad (4)$$

Note that every solution of $H(x) = 0$ is an equilibrium point of (2), since $H(x) = 0$ is equivalent to $\dot{x} = 0$ in (2). Therefore, showing that $H(x)$ is a homeomorphism of $\mathbb{R}^n$ is sufficient for the proof of Theorem 1. Let $x, y \in \mathbb{R}^n$ be two vectors such that $x \neq y$. Then, $H(x)$ defined by (4) gives

$$H(x) - H(y) = -C(x - y) + A(f(x) - f(y)) + B(f(x) - f(y)). \qquad (5)$$

When $f \in \mathcal{K}$, $x \neq y$ covers two distinct cases: $f(x) - f(y) = 0$ or $f(x) - f(y) \neq 0$. First let $x \neq y$ and $f(x) - f(y) = 0$. Then, (5) takes the form

$$H(x) - H(y) = -C(x - y).$$

Since $C$ is a positive diagonal matrix, $x - y \neq 0$ directly implies $H(x) \neq H(y)$. Now assume that $x - y \neq 0$ and $f(x) - f(y) \neq 0$. Let $P = \mathrm{diag}(p_i > 0)$. Multiplying both sides of (5) by $2(f(x) - f(y))^T P$, we obtain

$$\begin{aligned} 2(f(x) - f(y))^T P (H(x) - H(y)) =\; & -2(f(x) - f(y))^T P C (x - y) + 2(f(x) - f(y))^T P A (f(x) - f(y)) \\ & + 2(f(x) - f(y))^T P B (f(x) - f(y)) \\ =\; & -2(f(x) - f(y))^T P C (x - y) + (f(x) - f(y))^T (P A + A^T P)(f(x) - f(y)) \\ & + 2(f(x) - f(y))^T P B (f(x) - f(y)). \qquad (6) \end{aligned}$$

Using the fact that $\|PB\|_2 \le \|P\|_2 \|B\|_2$, we can write

$$2(f(x) - f(y))^T P B (f(x) - f(y)) \le 2\|PB\|_2 \|f(x) - f(y)\|_2^2 \le 2\|P\|_2 \|B\|_2 (f(x) - f(y))^T (f(x) - f(y)). \qquad (7)$$

In the light of Lemma 1, (7) can be written as

$$2(f(x) - f(y))^T P B (f(x) - f(y)) \le 2\|P\|_2 \sigma_1(B) (f(x) - f(y))^T (f(x) - f(y)). \qquad (8)$$


$f \in \mathcal{K}$ implies that

$$-2(f(x) - f(y))^T P C (x - y) = -2\sum_{i=1}^{n} p_i c_i (f_i(x_i) - f_i(y_i))(x_i - y_i) \le -2\sum_{i=1}^{n} \frac{p_i \underline{c}_i}{k_i} (f_i(x_i) - f_i(y_i))^2 = -2(f(x) - f(y))^T \underline{C} P K^{-1} (f(x) - f(y)). \qquad (9)$$

From Lemma 6, we can get

$$(f(x) - f(y))^T (PA + A^T P)(f(x) - f(y)) \le (f(x) - f(y))^T \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right)(f(x) - f(y)). \qquad (10)$$

Using (8)–(10) in (6) yields

$$\begin{aligned} 2(f(x) - f(y))^T P (H(x) - H(y)) \le\; & -2(f(x) - f(y))^T \underline{C} P K^{-1} (f(x) - f(y)) \\ & + (f(x) - f(y))^T (P A^* + A^{*T} P)(f(x) - f(y)) \\ & + (f(x) - f(y))^T \left( \|P A_* + A_*^T P\|_2 I \right)(f(x) - f(y)) \\ & + 2\|P\|_2 \sigma_1(B) (f(x) - f(y))^T (f(x) - f(y)) \\ =\; & -(f(x) - f(y))^T \Phi_1 (f(x) - f(y)). \end{aligned}$$

The fact that $\Phi_1$ is a positive definite matrix implies

$$2(f(x) - f(y))^T P (H(x) - H(y)) \le -\lambda_m(\Phi_1) \|f(x) - f(y)\|_2^2. \qquad (11)$$

Since $f(x) - f(y) \neq 0$ and $\lambda_m(\Phi_1) > 0$, it follows from (11) that

$$2(f(x) - f(y))^T P (H(x) - H(y)) < 0,$$

which directly implies $H(x) \neq H(y)$. Hence, we have proved that $H(x) \neq H(y)$ for all $x \neq y$.

Now letting $y = 0$ in (11) leads to

$$2(f(x) - f(0))^T P (H(x) - H(0)) \le -\lambda_m(\Phi_1) \|f(x) - f(0)\|_2^2.$$

Taking the absolute value of both sides of the above inequality yields

$$|2(f(x) - f(0))^T P (H(x) - H(0))| \ge \lambda_m(\Phi_1) \|f(x) - f(0)\|_2^2,$$

from which we can write

$$2\|P\|_\infty \|f(x) - f(0)\|_\infty \|H(x) - H(0)\|_1 \ge \lambda_m(\Phi_1) \|f(x) - f(0)\|_2^2.$$

Using the well-known norm properties $\|f(x) - f(0)\|_\infty \le \|f(x) - f(0)\|_2$, $\|H(x) - H(0)\|_1 \le \|H(x)\|_1 + \|H(0)\|_1$ and $\|f(x) - f(0)\|_2 \ge \|f(x)\|_2 - \|f(0)\|_2$, we then have

$$\|H(x)\|_1 \ge \frac{\lambda_m(\Phi_1)\|f(x)\|_2 - \lambda_m(\Phi_1)\|f(0)\|_2 - 2\|P\|_\infty \|H(0)\|_1}{2\|P\|_\infty}.$$

Since $\|P\|_\infty$, $\|H(0)\|_1$ and $\|f(0)\|_2$ are finite, it follows that $\|H(x)\| \to \infty$ as $\|f(x)\| \to \infty$, or equivalently $\|H(x)\| \to \infty$ as $\|x\| \to \infty$. Thus, the proof of Theorem 1 is complete. □
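Since $H$ is a homeomorphism under the condition of Theorem 1, $H(x) = 0$ has exactly one root, and a standard root-finder should locate it from any initial guess. A brief illustration with invented parameters (the same hypothetical values as in the earlier simulation sketch):

```python
import numpy as np
from scipy.optimize import fsolve

C = np.diag([2.0, 2.0])
A = np.array([[0.5, -0.3], [0.2, 0.4]])
B = np.array([[0.1, 0.2], [-0.1, 0.3]])
u = np.array([0.5, -0.2])
f = np.tanh

# H(x) = -Cx + (A + B) f(x) + u, the map (4) whose unique zero is the equilibrium.
H = lambda x: -C @ x + (A + B) @ f(x) + u
x_star = fsolve(H, np.zeros(2))
print("equilibrium:", x_star, "residual:", H(x_star))
```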

We now give the following existence and uniqueness result:

Theorem 2. For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) has a unique equilibrium point for each $u$ if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta_1 = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_1(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $B_* = \frac{1}{2}(\overline{B}-\underline{B})$, $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.
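Theorem 2's test matrix can be assembled the same way; the sketch below builds $S$ from Lemma 5 and then $\Theta_1$ (helper names are ours, and $\sigma_1$ is repeated from the earlier sketch for self-containment).

```python
import numpy as np

def sigma1(B_lo, B_hi):
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)
    M = np.abs(Bs.T @ Bs) + 2.0 * np.abs(Bs.T) @ Bd + Bd.T @ Bd
    return np.sqrt(np.linalg.norm(M, 2))

def build_S(A_lo, A_hi, p):
    """Matrix S of Lemma 5 for the positive diagonal P = diag(p)."""
    n = len(p)
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                S[i, j] = -2.0 * p[i] * A_hi[i, i]
            else:
                S[i, j] = -max(abs(p[i] * A_lo[i, j] + p[j] * A_lo[j, i]),
                               abs(p[i] * A_hi[i, j] + p[j] * A_hi[j, i]))
    return S

def theta1(C_lo, A_lo, A_hi, B_lo, B_hi, K, p):
    """Test matrix of Theorem 2; Theta_1 > 0 certifies a unique equilibrium."""
    n = len(p)
    P = np.diag(p)
    return (2.0 * C_lo @ P @ np.linalg.inv(K) + build_S(A_lo, A_hi, p)
            - 2.0 * np.linalg.norm(P, 2) * sigma1(B_lo, B_hi) * np.eye(n))
```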

Proof. From Lemma 5, we know that

$$(f(x) - f(y))^T (PA + A^T P)(f(x) - f(y)) \le -|(f(x) - f(y))^T| S |(f(x) - f(y))|. \qquad (12)$$

Using (8), (9) and (12) in (6) results in

$$\begin{aligned} 2(f(x) - f(y))^T P (H(x) - H(y)) \le\; & -2|(f(x) - f(y))^T| \underline{C} P K^{-1} |(f(x) - f(y))| \\ & - |(f(x) - f(y))^T| S |(f(x) - f(y))| \\ & + 2\|P\|_2 \sigma_1(B) (f(x) - f(y))^T (f(x) - f(y)) \\ =\; & -|(f(x) - f(y))^T| \Theta_1 |(f(x) - f(y))|. \end{aligned}$$

Given that $\Theta_1$ is a positive definite matrix, we obtain

$$2(f(x) - f(y))^T P (H(x) - H(y)) \le -\lambda_m(\Theta_1) \|f(x) - f(y)\|_2^2. \qquad (13)$$

Note that (13) is exactly of the same form as (11), with $\Phi_1$ replaced by $\Theta_1$. Therefore, $\Theta_1 > 0$ directly implies the existence and uniqueness of the equilibrium point of neural system (2). □

4. Stability analysis of equilibrium point

In this section, we show that the conditions obtained in Theorems 1 and 2 for the existence and uniqueness of the equilibrium point also imply the asymptotic stability of the equilibrium point of neural system (2). To proceed, we denote the equilibrium point of (2) by $x^*$ and use the transformation $z_i(\cdot) = x_i(\cdot) - x_i^*$, $i = 1, 2, \ldots, n$, to put system (2) in the form

$$\dot{z}_i(t) = -c_i z_i(t) + \sum_{j=1}^{n} a_{ij} g_j(z_j(t)) + \sum_{j=1}^{n} b_{ij} g_j(z_j(t - \tau_j)) \qquad (14)$$

where $g_i(z_i(\cdot)) = f_i(z_i(\cdot) + x_i^*) - f_i(x_i^*)$, $i = 1, 2, \ldots, n$. It can be seen that the functions $g_i$ satisfy the assumptions on $f_i$; i.e., $f \in \mathcal{K}$ implies $g \in \mathcal{K}$ with $g_i(0) = 0$, $i = 1, 2, \ldots, n$. Note that this transformation simply shifts the equilibrium point $x^*$ of (2) to the origin, which is the equilibrium point of system (14). Therefore, our focus will now be on proving the stability of the origin of the transformed system (14) instead of considering the stability of $x^*$.

Neural system (14) can be written in the form

$$\dot{z}(t) = -Cz(t) + A g(z(t)) + B g(z(t - \tau)) \qquad (15)$$

where $z(t) = (z_1(t), z_2(t), \ldots, z_n(t))^T$ is the new state vector, $g(z(t)) = (g_1(z_1(t)), g_2(z_2(t)), \ldots, g_n(z_n(t)))^T$ and $g(z(t-\tau)) = (g_1(z_1(t-\tau_1)), g_2(z_2(t-\tau_2)), \ldots, g_n(z_n(t-\tau_n)))^T$.

We can now proceed with the following result:

Theorem 3. For the neural system defined by (15), let $g \in \mathcal{K}$ and the network parameters satisfy (3). Then, the origin of the neural network model (15) is globally asymptotically stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi_1 = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_1(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$ and $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Proof. We employ the following positive definite Lyapunov functional:

$$V(z(t)) = z^T(t) z(t) + 2\alpha \sum_{i=1}^{n} p_i \int_0^{z_i(t)} g_i(s)\, ds + (\alpha\gamma + \beta) \sum_{i=1}^{n} \int_{t-\tau_i}^{t} g_i^2(z_i(\zeta))\, d\zeta$$

where $p_i$, $\alpha$, $\beta$ and $\gamma$ are positive constants to be determined later.

where the pi, α, β and γ are some positive constants to bedetermined later. The time derivative of the functional along thetrajectories of system (15) is obtained as follows

V (z(t)) = −2zT (t)Cz(t) + 2zT (t)Ag(z(t)) + 2zT (t)× Bg(z(t − τ)) − 2αgT (z(t))PCz(t) + 2αgT

× (z(t))PAg(z(t)) + 2αgT (z(t))PBg(z(t − τ))

+ αγ ∥g(z(t))∥22 − αγ ∥g(z(t − τ))∥2

2

+ β∥g(z(t))∥22 − β∥g(z(t − τ))∥2

2. (16)

We note the following inequalities:

$$-z^T(t) C z(t) + 2 z^T(t) A g(z(t)) \le g^T(z(t)) A^T C^{-1} A g(z(t)) \le \|A\|_2^2 \|C^{-1}\|_2 \|g(z(t))\|_2^2 \qquad (17)$$

$$-z^T(t) C z(t) + 2 z^T(t) B g(z(t-\tau)) \le g^T(z(t-\tau)) B^T C^{-1} B g(z(t-\tau)) \le \|B\|_2^2 \|C^{-1}\|_2 \|g(z(t-\tau))\|_2^2 \qquad (18)$$

$$\begin{aligned} 2\alpha g^T(z(t)) P B g(z(t-\tau)) &\le 2\alpha \|PB\|_2 \|g(z(t))\|_2 \|g(z(t-\tau))\|_2 \\ &\le \alpha \|PB\|_2 \|g(z(t))\|_2^2 + \alpha \|PB\|_2 \|g(z(t-\tau))\|_2^2 \\ &\le \alpha \|P\|_2 \|B\|_2 \|g(z(t))\|_2^2 + \alpha \|P\|_2 \|B\|_2 \|g(z(t-\tau))\|_2^2 \\ &\le \alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2 + \alpha \|P\|_2 \sigma_1(B) \|g(z(t-\tau))\|_2^2 \end{aligned} \qquad (19)$$

$$-2\alpha g^T(z(t)) P C z(t) \le -2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)). \qquad (20)$$

Inserting (17)–(20) into (16) results in

$$\begin{aligned} \dot{V}(z(t)) \le\; & \|A\|_2^2 \|C^{-1}\|_2 \|g(z(t))\|_2^2 + \|B\|_2^2 \|C^{-1}\|_2 \|g(z(t-\tau))\|_2^2 - 2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)) \\ & + \alpha g^T(z(t)) (PA + A^T P) g(z(t)) + \alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2 + \alpha \|P\|_2 \sigma_1(B) \|g(z(t-\tau))\|_2^2 \\ & + \alpha\gamma \|g(z(t))\|_2^2 - \alpha\gamma \|g(z(t-\tau))\|_2^2 + \beta \|g(z(t))\|_2^2 - \beta \|g(z(t-\tau))\|_2^2. \end{aligned}$$

Considering that $\|A\|_2 \le \sigma_1(A)$ and $\|B\|_2 \le \sigma_1(B)$, $\dot{V}(z(t))$ can be written as

$$\begin{aligned} \dot{V}(z(t)) \le\; & \sigma_1^2(A) \|C^{-1}\|_2 \|g(z(t))\|_2^2 + \sigma_1^2(B) \|C^{-1}\|_2 \|g(z(t-\tau))\|_2^2 - 2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)) \\ & + \alpha g^T(z(t)) (PA + A^T P) g(z(t)) + \alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2 + \alpha \|P\|_2 \sigma_1(B) \|g(z(t-\tau))\|_2^2 \\ & + \alpha\gamma \|g(z(t))\|_2^2 - \alpha\gamma \|g(z(t-\tau))\|_2^2 + \beta \|g(z(t))\|_2^2 - \beta \|g(z(t-\tau))\|_2^2. \end{aligned}$$

By taking $\beta = \sigma_1^2(B) \|C^{-1}\|_2$ and $\gamma = \|P\|_2 \sigma_1(B)$, we can write $\dot{V}(z(t))$ in the form

$$\dot{V}(z(t)) \le (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - 2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)) + \alpha g^T(z(t)) (PA + A^T P) g(z(t)) + 2\alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2. \qquad (21)$$

From Lemma 6, we can write

$$g^T(z(t)) (PA + A^T P) g(z(t)) \le g^T(z(t)) \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) g(z(t)).$$

Using the above inequality in (21) yields

$$\begin{aligned} \dot{V}(z(t)) \le\; & (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - 2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)) \\ & + \alpha g^T(z(t)) (P A^* + A^{*T} P) g(z(t)) + \alpha g^T(z(t)) \|P A_* + A_*^T P\|_2 g(z(t)) + 2\alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2 \\ =\; & (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - \alpha g^T(z(t)) \Phi_1 g(z(t)). \qquad (22) \end{aligned}$$

Since $\Phi_1$ is a positive definite matrix, from (22) it follows that

$$\dot{V}(z(t)) \le (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - \alpha \lambda_m(\Phi_1) \|g(z(t))\|_2^2. \qquad (23)$$

If we make the choice

$$\alpha > \frac{(\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2}{\lambda_m(\Phi_1)}$$

then it directly follows that $\dot{V}(z(t))$ is negative definite for all $g(z(t)) \neq 0$. (We note here that $g(z(t)) \neq 0$ implies $z(t) \neq 0$.)

If $g(z(t)) = 0$ and $z(t) \neq 0$, then $\dot{V}(z(t))$ is of the form

$$\begin{aligned} \dot{V}(z(t)) &= -2 z^T(t) C z(t) + 2 z^T(t) B g(z(t-\tau)) - \beta g^T(z(t-\tau)) g(z(t-\tau)) - \alpha\gamma g^T(z(t-\tau)) g(z(t-\tau)) \\ &\le -2 z^T(t) C z(t) + 2 z^T(t) B g(z(t-\tau)) - \beta g^T(z(t-\tau)) g(z(t-\tau)). \end{aligned}$$

Using the inequality

$$-z^T(t) C z(t) + 2 z^T(t) B g(z(t-\tau)) - \beta g^T(z(t-\tau)) g(z(t-\tau)) \le 0$$

we get

$$\dot{V}(z(t)) \le -z^T(t) C z(t).$$

Clearly, $\dot{V}(z(t))$ is negative definite for all $z(t) \neq 0$. Now assume that $g(z(t)) = 0$ and $z(t) = 0$. In this case,

$$\dot{V}(z(t)) = -\beta g^T(z(t-\tau)) g(z(t-\tau)) - \alpha\gamma g^T(z(t-\tau)) g(z(t-\tau)).$$

It can be seen that $\dot{V}(z(t))$ is negative definite for all $g(z(t-\tau)) \neq 0$. Hence, $\dot{V}(z(t)) = 0$ if and only if $z(t) = g(z(t)) = g(z(t-\tau)) = 0$; otherwise $\dot{V}(z(t)) < 0$. In addition, $V(z(t))$ is radially unbounded since $V(z(t)) \to \infty$ as $\|z(t)\| \to \infty$. Thus, the origin of system (15), or equivalently the equilibrium point of system (2), is globally asymptotically stable. Therefore, the condition of Theorem 3 implies that neural system (2) is globally asymptotically robustly stable. □
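As an informal check of Theorem 3's conclusion, one can sample $(A, B)$ from their intervals, simulate (15), and watch $\|z(t)\|$ decay. The parameters below are invented for the sketch and deliberately well inside the stability region (the diagonal $C$ dominates the coupling).

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, dt, steps = 2, 0.5, 0.001, 15000
C = np.diag([3.0, 3.0])
A_lo, A_hi = -0.2 * np.ones((n, n)), 0.2 * np.ones((n, n))
B_lo, B_hi = -0.2 * np.ones((n, n)), 0.2 * np.ones((n, n))
A = rng.uniform(A_lo, A_hi)      # one admissible A in [A_lo, A_hi]
B = rng.uniform(B_lo, B_hi)      # one admissible B in [B_lo, B_hi]
g = np.tanh                      # g in K with g(0) = 0

d = int(tau / dt)
z = np.zeros((steps + d, n))
z[:d] = np.array([0.8, -0.6])    # initial function on [-tau, 0]
for t in range(d, steps + d - 1):
    dz = -C @ z[t] + A @ g(z[t]) + B @ g(z[t - d])   # right-hand side of (15)
    z[t + 1] = z[t] + dt * dz

print("||z|| early/mid/late:",
      [round(float(np.linalg.norm(z[k])), 5) for k in (d, (steps + d) // 2, -1)])
```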


We now obtain the following result:

Theorem 4. For the neural system defined by (15), let $g \in \mathcal{K}$ and the network parameters satisfy (3). Then, the origin of the neural network model (15) is globally asymptotically stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta_1 = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_1(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $B_* = \frac{1}{2}(\overline{B}-\underline{B})$, $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Proof. From Lemma 5, we have

$$g^T(z(t)) (PA + A^T P) g(z(t)) \le -|g^T(z(t))| S |g(z(t))|.$$

Using the above inequality in (21) yields

$$\begin{aligned} \dot{V}(z(t)) \le\; & (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - 2\alpha g^T(z(t)) \underline{C} P K^{-1} g(z(t)) \\ & - \alpha |g^T(z(t))| S |g(z(t))| + 2\alpha \|P\|_2 \sigma_1(B) \|g(z(t))\|_2^2 \\ =\; & (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - \alpha |g^T(z(t))| \Theta_1 |g(z(t))|. \qquad (24) \end{aligned}$$

Since $\Theta_1$ is a positive definite matrix, (24) can be written as

$$\dot{V}(z(t)) \le (\sigma_1^2(A) + \sigma_1^2(B)) \|C^{-1}\|_2 \|g(z(t))\|_2^2 - \alpha \lambda_m(\Theta_1) \|g(z(t))\|_2^2. \qquad (25)$$

Note that (25) is exactly of the same form as (23), with $\Phi_1$ replaced by $\Theta_1$. Therefore, it can be directly concluded that $\Theta_1 > 0$ implies the global robust asymptotic stability of neural system (2). □

5. A comparative numerical example

In this section, we give an example to compare our results with the previous corresponding literature results. We first restate the previous results:

Theorem 5 (Qi, 2007). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi_2 = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_2(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$, $\sigma_2(B) = \|B^*\|_2 + \|B_*\|_2$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$ and $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Theorem 6 (Faydasicok & Arik, 2012). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi_3 = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_3(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$, $\sigma_3(B) = \sqrt{\|B^*\|_2^2 + \|B_*\|_2^2 + 2\|B_*^T |B^*|\|_2}$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$ and $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Theorem 7 (Shao et al., 2010). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi_4 = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_4(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$, $\sigma_4(B) = \|\hat{B}\|_2$ and $\hat{B} = (\hat{b}_{ij})_{n\times n}$ with $\hat{b}_{ij} = \max\{|\underline{b}_{ij}|, |\overline{b}_{ij}|\}$.

Theorem 8 (Ozcan & Arik, 2006). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta_2 = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_2(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $\sigma_2(B) = \|B^*\|_2 + \|B_*\|_2$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$, $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Theorem 9 (Ensari & Arik, 2010). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta_3 = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_3(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $\sigma_3(B) = \sqrt{\|B^*\|_2^2 + \|B_*\|_2^2 + 2\|B_*^T |B^*|\|_2}$, $B_* = \frac{1}{2}(\overline{B}-\underline{B})$, $B^* = \frac{1}{2}(\overline{B}+\underline{B})$.

Theorem 10 (Singh, 2007). For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta_4 = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_4(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $\sigma_4(B) = \|\hat{B}\|_2$, $\hat{B} = (\hat{b}_{ij})_{n\times n}$ with $\hat{b}_{ij} = \max\{|\underline{b}_{ij}|, |\overline{b}_{ij}|\}$.

We will now give the following example to demonstrate the applicability and advantages of our results:

Example 1. Assume that the network parameters of the neural network model (2) are given as follows:

$$\underline{A} = \begin{pmatrix} -a & -a & -a & -a \\ -a & -a & -a & -a \\ -a & -a & -a & -a \\ -a & -a & -a & -a \end{pmatrix}, \quad \overline{A} = \begin{pmatrix} a & a & a & a \\ a & a & a & a \\ a & a & a & a \\ a & a & a & a \end{pmatrix}$$

$$\underline{B} = \begin{pmatrix} a & a & a & a \\ a & a & a & a \\ a & a & a & a \\ a & a & a & -3a \end{pmatrix}, \quad \overline{B} = \begin{pmatrix} 3a & 2a & 3a & 2a \\ 2a & 3a & 2a & 3a \\ 3a & 2a & 3a & 2a \\ 2a & 3a & 2a & a \end{pmatrix}$$

$k_1 = k_2 = k_3 = k_4 = 1$ and $c_1 = c_2 = c_3 = c_4 = 13.76$.

From the above matrices, we obtain

$$A^* = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad A_* = \begin{pmatrix} a & a & a & a \\ a & a & a & a \\ a & a & a & a \\ a & a & a & a \end{pmatrix}$$

$$B^* = \frac{1}{2}\begin{pmatrix} 4a & 3a & 4a & 3a \\ 3a & 4a & 3a & 4a \\ 4a & 3a & 4a & 3a \\ 3a & 4a & 3a & -2a \end{pmatrix}, \quad B_* = \frac{1}{2}\begin{pmatrix} 2a & a & 2a & a \\ a & 2a & a & 2a \\ 2a & a & 2a & a \\ a & 2a & a & 4a \end{pmatrix}, \quad \hat{B} = \begin{pmatrix} 3a & 2a & 3a & 2a \\ 2a & 3a & 2a & 3a \\ 3a & 2a & 3a & 2a \\ 2a & 3a & 2a & 3a \end{pmatrix}$$

from which we can calculate the norms

$$\sigma_1(B) = \sqrt{\| \, |B^{*T}B^*| + 2|B^{*T}|B_* + B_*^T B_* \, \|_2} = 9.76a, \qquad \sigma_2(B) = \|B^*\|_2 + \|B_*\|_2 = 9.79a,$$
$$\sigma_3(B) = \sqrt{\|B^*\|_2^2 + \|B_*\|_2^2 + 2\|B_*^T |B^*|\|_2} = 9.83a \qquad \text{and} \qquad \sigma_4(B) = \|\hat{B}\|_2 = 10a.$$
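These four values can be reproduced numerically; the sketch below evaluates all bounds at $a = 1$ (each bound scales linearly in $a$). The paper's reported values are 9.76, 9.79, 9.83 and 10.

```python
import numpy as np

a = 1.0
B_lo = a * np.array([[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, -3]], float)
B_hi = a * np.array([[3, 2, 3, 2], [2, 3, 2, 3], [3, 2, 3, 2], [2, 3, 2, 1]], float)

Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)   # B^*, B_*
spec = lambda M: np.linalg.norm(M, 2)

s1 = np.sqrt(spec(np.abs(Bs.T @ Bs) + 2 * np.abs(Bs.T) @ Bd + Bd.T @ Bd))
s2 = spec(Bs) + spec(Bd)
s3 = np.sqrt(spec(Bs)**2 + spec(Bd)**2 + 2 * spec(Bd.T @ np.abs(Bs)))
s4 = spec(np.maximum(np.abs(B_lo), np.abs(B_hi)))
print(s1, s2, s3, s4)   # approx. 9.76, 9.79, 9.83, 10.00
```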

Let us compare the conditions of the above theorems by taking $P = I$. In this case, for the network parameters of this example, $\Phi_1$ in Theorem 3 is calculated as

$$\Phi_1 = 2\underline{C} K^{-1} - \left( A^* + A^{*T} + \|A_* + A_*^T\|_2 I \right) - 2\sigma_1(B) I = (27.52 - 27.52a) I,$$

so $\Phi_1 > 0$ provided that $a < 1$. Therefore, the sufficient condition for robust stability is obtained as $a < 1$.

$\Phi_2$ in Theorem 5 is calculated as

$$\Phi_2 = 2\underline{C} K^{-1} - \left( A^* + A^{*T} + \|A_* + A_*^T\|_2 I \right) - 2\sigma_2(B) I = (27.52 - 27.58a) I,$$

where $\Phi_2 > 0$ if $a < 0.9978$. Hence, the sufficient condition for robust stability is obtained as $a < 0.9978$.

$\Phi_3$ in Theorem 6 is calculated as

$$\Phi_3 = 2\underline{C} K^{-1} - \left( A^* + A^{*T} + \|A_* + A_*^T\|_2 I \right) - 2\sigma_3(B) I = (27.52 - 27.66a) I,$$

and $a < 0.9949$ implies $\Phi_3 > 0$. Hence, the sufficient condition for robust stability is obtained as $a < 0.9949$.

$\Phi_4$ in Theorem 7 is calculated as

$$\Phi_4 = 2\underline{C} K^{-1} - \left( A^* + A^{*T} + \|A_* + A_*^T\|_2 I \right) - 2\sigma_4(B) I = (27.52 - 28a) I,$$

and $a < 0.9829$ ensures $\Phi_4 > 0$. Hence, the sufficient condition for robust stability is obtained as $a < 0.9829$.

In the case of this example, for $P = I$, the matrix $S$ in Theorem 4 is of the form

$$S = \begin{pmatrix} -2a & -2a & -2a & -2a \\ -2a & -2a & -2a & -2a \\ -2a & -2a & -2a & -2a \\ -2a & -2a & -2a & -2a \end{pmatrix}.$$

$\Theta_1$ in Theorem 4 is calculated as

$$\Theta_1 = 2\underline{C} K^{-1} + S - 2\sigma_1(B) I = (27.52 - 19.52a) I - \begin{pmatrix} 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \end{pmatrix}.$$

Note that $\Theta_1 > 0$ if and only if $a < 1$. Thus, the sufficient condition for robust stability is determined to be $a < 1$.

$\Theta_2$ in Theorem 8 is calculated as

$$\Theta_2 = 2\underline{C} K^{-1} + S - 2\sigma_2(B) I = (27.52 - 19.58a) I - \begin{pmatrix} 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \end{pmatrix}.$$

Note that $\Theta_2 > 0$ if and only if $a < 0.9978$. Thus, the sufficient condition for robust stability is determined to be $a < 0.9978$.

$\Theta_3$ in Theorem 9 is calculated as

$$\Theta_3 = 2\underline{C} K^{-1} + S - 2\sigma_3(B) I = (27.52 - 19.66a) I - \begin{pmatrix} 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \end{pmatrix}.$$

Note that $\Theta_3 > 0$ if and only if $a < 0.9949$. Thus, the sufficient condition for robust stability is determined to be $a < 0.9949$.

$\Theta_4$ in Theorem 10 is calculated as

$$\Theta_4 = 2\underline{C} K^{-1} + S - 2\sigma_4(B) I = (27.52 - 20a) I - \begin{pmatrix} 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \\ 2a & 2a & 2a & 2a \end{pmatrix}.$$

Note that $\Theta_4 > 0$ if and only if $a < 0.9829$. Thus, the sufficient condition for robust stability is determined to be $a < 0.9829$.

Hence, for this example, we have shown that the conditions imposed on the network parameters by Theorems 3 and 4 are weaker than those imposed by Theorems 5–10, demonstrating the novelty of our results. We should point out, however, that our results are more advantageous than the previous results only in the case of this example. One can find different sets of network parameters for which the conditions presented in this paper and the previous stability conditions have advantages over each other, as all of these results give only sufficient conditions. Therefore, we can unify our current results and the previous corresponding results. The conditions of Theorems 3 and 5–7 can be unified as follows:

Theorem 11. For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Phi = 2\underline{C} P K^{-1} - \left( P A^* + A^{*T} P + \|P A_* + A_*^T P\|_2 I \right) - 2\|P\|_2 \sigma_m(B) I > 0$$

where $A^* = \frac{1}{2}(\overline{A}+\underline{A})$, $A_* = \frac{1}{2}(\overline{A}-\underline{A})$, $K = \mathrm{diag}(k_i > 0)$ and $\sigma_m(B) = \min(\sigma_1(B), \sigma_2(B), \sigma_3(B), \sigma_4(B))$.

The conditions of Theorems 4 and 8–10 can be unified as follows:

Theorem 12. For the neural system defined by (2), let $f \in \mathcal{K}$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P = \mathrm{diag}(p_i > 0)$ such that

$$\Theta = 2\underline{C} P K^{-1} + S - 2\|P\|_2 \sigma_m(B) I > 0$$

where $K = \mathrm{diag}(k_i > 0)$, $S = (s_{ij})_{n\times n}$ with $s_{ii} = -2 p_i \overline{a}_{ii}$ and $s_{ij} = -\max(|p_i \underline{a}_{ij} + p_j \underline{a}_{ji}|, |p_i \overline{a}_{ij} + p_j \overline{a}_{ji}|)$ for $i \neq j$, and $\sigma_m(B) = \min(\sigma_1(B), \sigma_2(B), \sigma_3(B), \sigma_4(B))$.
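Computationally, the unified conditions simply replace the single bound with the smallest of the four; a sketch (helper names are ours):

```python
import numpy as np

def sigma_m(B_lo, B_hi):
    """min(sigma_1, ..., sigma_4) of Theorems 11 and 12."""
    Bs, Bd = 0.5 * (B_hi + B_lo), 0.5 * (B_hi - B_lo)
    spec = lambda M: np.linalg.norm(M, 2)
    s1 = np.sqrt(spec(np.abs(Bs.T @ Bs) + 2 * np.abs(Bs.T) @ Bd + Bd.T @ Bd))
    s2 = spec(Bs) + spec(Bd)
    s3 = np.sqrt(spec(Bs)**2 + spec(Bd)**2 + 2 * spec(Bd.T @ np.abs(Bs)))
    s4 = spec(np.maximum(np.abs(B_lo), np.abs(B_hi)))
    return min(s1, s2, s3, s4)

# For Example 1, sigma_m(B) coincides with sigma_1(B) = 9.76a:
a = 0.99
B_lo = a * np.array([[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, -3]], float)
B_hi = a * np.array([[3, 2, 3, 2], [2, 3, 2, 3], [3, 2, 3, 2], [2, 3, 2, 1]], float)
print(sigma_m(B_lo, B_hi))
```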


6. Conclusions

We have first introduced a new upper bound for the norm of interval matrices. Then, by applying this result to the dynamical analysis of delayed neural networks, together with the Lyapunov stability and homeomorphic mapping theorems, we have derived new sufficient conditions for the global robust asymptotic stability of the equilibrium point for this class of neural networks. By giving a comparative numerical example, we have shown that our proposed results are new and can be considered alternative conditions to some of the previous robust stability results derived in the literature. By unifying the previous robust stability results with our current results, we have also stated a generalization of robust stability results.

References

Baese, A. M., Koshkouei, A. J., Emmett, M. R., & Goodall, D. P. (2009). Global stability analysis and robust design of multi-time-scale biological networks under parametric uncertainties. Neural Networks, 22, 658–663.

Balasubramaniam, P., & Ali, M. S. (2010). Robust stability of uncertain fuzzy cellular neural networks with time-varying delays and reaction diffusion terms. Neurocomputing, 74, 439–446.

Cao, J. (2001). Global stability conditions for delayed CNNs. IEEE Transactions on Circuits and Systems I: Regular Papers, 48, 1330–1333.

Cao, J., & Ho, D. W. C. (2005). A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach. Chaos, Solitons and Fractals, 24, 1317–1329.

Cao, J., Huang, D. S., & Qu, Y. (2005). Global robust stability of recurrent neural networks. Chaos, Solitons and Fractals, 23, 221–229.

Cao, J., & Wang, J. C. (2005). Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I: Regular Papers, 52, 417–426.

Deng, F., Hua, M., Liu, X., Peng, Y., & Fei, J. (2011). Robust delay-dependent exponential stability for uncertain stochastic neural networks with mixed delays. Neurocomputing, 74, 1503–1509.

Ensari, T., & Arik, S. (2010). New results for robust stability of dynamical neural networks with discrete time delays. Expert Systems with Applications, 37, 5925–5930.

Faydasicok, O., & Arik, S. (2012). Robust stability analysis of a class of neural networks with discrete time delays. Neural Networks, 29–30, 52–59.

Guo, Z., & Huang, L. (2009). LMI conditions for global robust stability of delayed neural networks with discontinuous neuron activations. Applied Mathematics and Computation, 215, 889–900.

Han, W., Kao, Y., & Wang, L. (2011). Global exponential robust stability of static interval neural networks with S-type distributed delays. Journal of the Franklin Institute, 348, 2072–2081.

Huang, H., Ho, D. W. C., & Qu, Y. (2007). Robust stability of stochastic delayed additive neural networks with Markovian switching. Neural Networks, 20, 799–809.

Huang, Z., Li, X., Mohamad, S., & Lu, Z. (2009). Robust stability analysis of static neural network with S-type distributed delays. Applied Mathematical Modelling, 33, 760–769.

Kao, Y. G., Guo, J. F., Wang, C. H., & Sun, X. Q. (2012). Delay-dependent robust exponential stability of Markovian jumping reaction–diffusion Cohen–Grossberg neural networks with mixed delays. Journal of the Franklin Institute, 349, 1972–1988.

Kwon, O. M., & Park, J. H. (2008). New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays. Applied Mathematics and Computation, 205, 417–427.

Liao, X. F., & Wong, K. (2004). Robust stability of interval bidirectional associative memory neural network with time delays. IEEE Transactions on Systems, Man and Cybernetics, Part B, 34, 1142–1154.

Liu, L., Han, Z., & Li, W. (2009). Global stability analysis of interval neural networks with discrete and distributed delays of neutral type. Expert Systems with Applications, 36, 7328–7331.

Lou, X., Ye, Q., & Cui, B. (2012). Parameter-dependent robust stability of uncertain neural networks with time-varying delay. Journal of the Franklin Institute, 349, 1891–1903.

Mahmoud, M. S., & Ismail, A. (2010). Improved results on robust exponential stability criteria for neutral-type delayed neural networks. Applied Mathematics and Computation, 217, 3011–3019.

Ozcan, N., & Arik, S. (2006). Global robust stability analysis of neural networks with multiple time delays. IEEE Transactions on Circuits and Systems I: Regular Papers, 53, 166–176.

Pan, W., Wang, Z., & Hu, J. (2011). Robust stability of delayed genetic regulatory networks with different sources of uncertainties. 13 (pp. 645–654).

Qi, H. (2007). New sufficient conditions for global robust stability of delayed neural networks. IEEE Transactions on Circuits and Systems I: Regular Papers, 54, 1131–1141.

Shao, J. L., Huang, T. Z., & Zhou, S. (2010). Some improved criteria for global robust exponential stability of neural networks with time-varying delays. Communications in Nonlinear Science and Numerical Simulation, 15, 3782–3794.

Shen, Y., & Wang, J. (2012). Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Transactions on Neural Networks and Learning Systems, 23, 87–96.

Shen, B., Wang, Z., & Liu, X. (2011). Bounded H-infinity synchronization and state estimation for discrete time-varying stochastic complex networks over a finite-horizon. IEEE Transactions on Neural Networks, 22, 145–157.

Singh, V. (2007). Global robust stability of delayed neural networks: Estimating upper limit of norm of delayed connection weight matrix. Chaos, Solitons and Fractals, 32, 259–263.

Wang, Z., Liu, Y., Liu, X., & Shi, Y. (2010). Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays. Neurocomputing, 74, 256–264.

Wu, Z. G., Park, J. H., Su, H., & Chu, J. (2012). Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dynamics, 69, 1323–1332.

Zeng, Z., & Wang, J. (2009). Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates. Neural Networks, 22, 651–657.

Zhang, H., Liu, Z., & Huang, G. B. (2010). Novel delay-dependent robust stability analysis for switched neutral-type neural networks with time-varying delays via SC technique. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40, 1480–1491.

Zhang, Z., Yang, Y., & Huang, Y. (2011). Global exponential stability of interval general BAM neural networks with reaction–diffusion terms and multiple time-varying delays. Neural Networks, 24, 457–465.

Zhou, Q., & Wan, L. (2010). Global robust asymptotic stability analysis of BAM neural networks with time delay and impulse: an LMI approach. Applied Mathematics and Computation, 216, 1538–1545.