
Research Article
Finite-Time Stability of Fractional-Order BAM Neural Networks with Distributed Delay

    Yuping Cao1 and Chuanzhi Bai2

1 Department of Basic Courses, Lianyungang Technical College, Lianyungang, Jiangsu 222000, China
2 Department of Mathematics, Huaiyin Normal University, Huaian, Jiangsu 223300, China

    Correspondence should be addressed to Chuanzhi Bai; [email protected]

    Received 8 February 2014; Accepted 1 April 2014; Published 22 April 2014

    Academic Editor: Sabri Arik

Copyright © 2014 Y. Cao and C. Bai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Based on the theory of fractional calculus, the generalized Gronwall inequality, and estimates of Mittag-Leffler functions, the finite-time stability of Caputo fractional-order BAM neural networks with distributed delay is investigated in this paper. An illustrative example is also given to demonstrate the effectiveness of the obtained result.

    1. Introduction

Fractional calculus (integral and differential operations of noninteger order) was first introduced about 300 years ago. Owing to the lack of application background and to its complexity, it did not attract much attention for a long time. In recent decades, however, fractional calculus has been applied to physics, applied mathematics, and engineering [1–6]. Since the fractional-order derivative is nonlocal and has weakly singular kernels, it provides an excellent instrument for describing the memory and hereditary properties of dynamical processes. Nowadays, the study of the complex dynamical behaviors of fractional-order systems has become a very active research topic.

We know that the next state of a system depends not only upon its current state but also upon its history. Since a model described by fractional-order equations possesses memory, it can describe the states of neurons more precisely. Moreover, an advantage of the Caputo fractional derivative is that the initial conditions for fractional differential equations with Caputo derivatives take the same form as those for integer-order differential equations. Therefore, it is necessary and interesting to study fractional-order neural networks both in theory and in applications.

Recently, fractional-order neural networks have been proposed and designed, as distinguished from the classical integer-order models [7–10]. A number of excellent results on fractional-order neural networks have already been obtained, for example, by Kaslik and Sivasundaram [11, 12], Zhang et al. [13], Delavari et al. [14], and Li et al. [15, 16]. On the other hand, time delay is one of the inevitable problems affecting the stability of dynamical systems in the real world [17–20]. Until now, however, there have been few results on fractional-order delayed neural networks: Chen et al. [21] studied the uniform stability of a class of fractional-order neural networks with constant delay by an analytical approach; Wu et al. [22] investigated the finite-time stability of fractional-order neural networks with delay by the generalized Gronwall inequality and estimates of Mittag-Leffler functions; Alofi et al. [23] discussed the finite-time stability of Caputo fractional-order neural networks with distributed delay.

The integer-order bidirectional associative memory (BAM) model, known as an extension of the unidirectional autoassociator of Hopfield [24], was first introduced by Kosko [25]. This neural network has been widely studied owing to its promising potential for applications in pattern recognition and automatic control. In recent years, integer-order BAM neural networks have been studied extensively [26–29]. However, to the best of our knowledge, no effort has been made so far to study the finite-time stability of fractional-order BAM neural networks.

Motivated by the above-mentioned works, this paper is devoted to establishing the finite-time stability of Caputo fractional-order BAM neural networks with distributed delay.



In this paper, we apply the Laplace transform, the generalized Gronwall inequality, and estimates of Mittag-Leffler functions to establish a finite-time stability criterion for fractional-order BAM neural networks with distributed delay.

This paper is organized as follows. In Section 2, some definitions and lemmas of fractional differential and integral calculus are given and the fractional-order BAM neural network model is presented. A criterion for the finite-time stability of fractional-order BAM neural networks with distributed delay is obtained in Section 3. Finally, the effectiveness and feasibility of the theoretical result are shown by an example in Section 4.

    2. Preliminaries

For the convenience of the reader, we first briefly recall some definitions of fractional calculus; for more details, see, for example, [1, 2, 5].

Definition 1. The Riemann-Liouville fractional integral of order $\alpha > 0$ of a function $u : (0,\infty) \to \mathbb{R}$ is given by

$$I^{\alpha}_{0+}u(t) = \frac{1}{\Gamma(\alpha)}\int_{0}^{t}(t-s)^{\alpha-1}u(s)\,ds, \tag{1}$$

provided that the right-hand side is pointwise defined on $(0,\infty)$, where $\Gamma(\cdot)$ is the Gamma function.

Definition 2. The Caputo fractional derivative of order $\gamma > 0$ of a function $u : (0,\infty) \to \mathbb{R}$ can be written as

$${}^{C}_{0}D^{\gamma}_{t}u(t) = \frac{1}{\Gamma(n-\gamma)}\int_{0}^{t}\frac{u^{(n)}(s)}{(t-s)^{\gamma+1-n}}\,ds, \qquad n-1 < \gamma < n. \tag{2}$$
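For orientation, two standard consequences of Definition 2 can be read off directly from (2); the first is used in Section 2 below, and the second assumes $p > n-1$ so that the integral in (2) converges:
$${}^{C}_{0}D^{\gamma}_{t}\,c = 0 \ \ (c\ \text{constant}), \qquad {}^{C}_{0}D^{\gamma}_{t}\,t^{p} = \frac{\Gamma(p+1)}{\Gamma(p-\gamma+1)}\,t^{p-\gamma},$$
the latter following by inserting $u^{(n)}(s) = \frac{\Gamma(p+1)}{\Gamma(p-n+1)}s^{p-n}$ into (2) and evaluating the resulting Beta integral.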

Definition 3. The Mittag-Leffler function with two parameters is defined as

$$E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(k\alpha+\beta)}, \tag{3}$$

where $\alpha > 0$, $\beta > 0$, and $z \in \mathbb{C}$, where $\mathbb{C}$ denotes the complex plane. In particular, for $\beta = 1$, one has

$$E_{\alpha,1}(z) = E_{\alpha}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(k\alpha+1)}. \tag{4}$$

The Laplace transform of the Mittag-Leffler function is

$$\mathcal{L}\{t^{\beta-1}E_{\alpha,\beta}(-\lambda t^{\alpha})\}(s) = \frac{s^{\alpha-\beta}}{s^{\alpha}+\lambda}, \qquad \big(\operatorname{Re}(s) > |\lambda|^{1/\alpha}\big), \tag{5}$$

where $t$ and $s$ are, respectively, the variables in the time domain and in the Laplace domain, and $\mathcal{L}\{\cdot\}$ stands for the Laplace transform.
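Because the Mittag-Leffler function appears throughout the estimates below, a small numerical sketch may be useful; the following Python snippet (an illustration only, not part of the paper, with `mittag_leffler` a name chosen here) evaluates $E_{\alpha,\beta}(z)$ by truncating the series (3), using the log-Gamma function to avoid overflow, and checks the result against the elementary special cases $E_{1,1}(z) = e^{z}$ and $E_{2,1}(z) = \cosh\sqrt{z}$.

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=200):
    """Truncated series (3): sum of z**k / Gamma(k*alpha + beta) for k = 0..n_terms-1.

    Each term is built as exp(k*ln|z| - lgamma(k*alpha + beta)) so that the large
    Gamma values appearing for big k do not overflow; the sign of z**k is restored
    separately when z is negative.
    """
    if z == 0.0:
        return 1.0 / math.gamma(beta)
    log_abs_z = math.log(abs(z))
    total = 0.0
    for k in range(n_terms):
        term = math.exp(k * log_abs_z - math.lgamma(k * alpha + beta))
        if z < 0.0 and k % 2 == 1:
            term = -term
        total += term
    return total

# Sanity checks: E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)) for z >= 0.
print(mittag_leffler(1.0, 1.0, 2.0), math.exp(2.0))
print(mittag_leffler(2.0, 1.0, 4.0), math.cosh(2.0))
```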

In this paper, we are interested in the finite-time stability of fractional-order BAM neural networks with distributed delay described by the following state equations:

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}x_{i}(t) ={}& -c_{i}x_{i}(t) + \sum_{j=1}^{n}b_{ij}f_{j}(y_{j}(t)) + \sum_{j=1}^{n}\int_{0}^{\tau}r_{ij}(s)f_{j}(y_{j}(t-s))\,ds + I_{i}, \qquad t \ge 0,\\
{}^{C}_{0}D^{\beta}_{t}y_{j}(t) ={}& -c_{j}y_{j}(t) + \sum_{i=1}^{n}d_{ji}g_{i}(x_{i}(t)) + \sum_{i=1}^{n}\int_{0}^{\tau}p_{ji}(s)g_{i}(x_{i}(t-s))\,ds + \tilde{I}_{j}, \qquad i, j = 1, \ldots, n,
\end{aligned} \tag{6}$$

or, in matrix-vector notation,

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}x(t) &= -Cx(t) + Bf(y(t)) + \int_{0}^{\tau}R(s)f(y(t-s))\,ds + I,\\
{}^{C}_{0}D^{\beta}_{t}y(t) &= -Cy(t) + Dg(x(t)) + \int_{0}^{\tau}P(s)g(x(t-s))\,ds + \tilde{I}, \qquad t \ge 0,
\end{aligned} \tag{7}$$

where $1 < \alpha, \beta < 2$. The model (6) is made up of two neural fields $F_{x}$ and $F_{y}$, where $x_{i}(t)$ and $y_{j}(t)$ are the activations of the $i$th neuron in $F_{x}$ and the $j$th neuron in $F_{y}$, respectively;
$$(x(t), y(t)) = (x_{1}(t), \ldots, x_{n}(t), y_{1}(t), \ldots, y_{n}(t))^{T} \in \mathbb{R}^{2n} \tag{8}$$
is the state vector of the network at time $t$; the functions
$$f(y(t)) = (f_{1}(y_{1}(t)), f_{2}(y_{2}(t)), \ldots, f_{n}(y_{n}(t)))^{T}, \qquad g(x(t)) = (g_{1}(x_{1}(t)), g_{2}(x_{2}(t)), \ldots, g_{n}(x_{n}(t)))^{T} \tag{9}$$

are the activation functions of the neurons at time $t$; $C = \mathrm{diag}(c_{i})$ is a diagonal matrix, where $c_{i} > 0$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external inputs; $B = (b_{ij})_{n\times n}$ and $D = (d_{ji})_{n\times n}$ are the feedback matrices; $\tau > 0$ denotes the maximum possible transmission delay from one neuron to another; $R(s) = (r_{ij}(s))_{n\times n}$ and $P(s) = (p_{ji}(s))_{n\times n}$ are the delayed feedback matrices; $I = (I_{1}, \ldots, I_{n})^{T}$ and $\tilde{I} = (\tilde{I}_{1}, \ldots, \tilde{I}_{n})^{T}$ are two external bias vectors.

Let $C^{1}([-\tau, 0], \mathbb{R}^{n})$ denote the Banach space of all continuously differentiable functions mapping the interval $[-\tau, 0]$ into $\mathbb{R}^{n}$, equipped with the norm defined as follows: for every $\varphi \in C^{1}([-\tau, 0], \mathbb{R}^{n})$,

$$\|\varphi\|_{1} = \max\{\|\varphi\|, \|\varphi'\|\} = \max\Big\{\sup_{\theta\in[-\tau,0]}\|\varphi(\theta)\|,\ \sup_{\theta\in[-\tau,0]}\|\varphi'(\theta)\|\Big\}. \tag{10}$$

    The initial conditions associated with (6) are given by

$$x_{i}(\theta) = \varphi_{i}(\theta), \quad x_{i}'(\theta) = \varphi_{i}'(\theta), \quad y_{j}(\theta) = \psi_{j}(\theta), \quad y_{j}'(\theta) = \psi_{j}'(\theta), \qquad \theta \in [-\tau, 0], \tag{11}$$

where $\varphi_{i}, \psi_{j} \in C^{1}([-\tau, 0], \mathbb{R})$. In order to obtain the main result, we make the following assumptions.

(H1) For $i, j = 1, \ldots, n$, the functions $r_{ij}(\cdot)$ and $p_{ji}(\cdot)$ are continuous on $[0, \tau]$.

(H2) The neuron activation functions $f_{i}$ and $g_{j}$ ($i, j = 1, \ldots, n$) are bounded.

(H3) The neuron activation functions $f_{i}$ and $g_{j}$ are Lipschitz continuous; that is, there exist positive constants $h_{i}, l_{j}$ ($i, j = 1, \ldots, n$) such that
$$|f_{i}(u) - f_{i}(v)| \le h_{i}|u - v|, \qquad |g_{j}(u) - g_{j}(v)| \le l_{j}|u - v|, \qquad \forall u, v \in \mathbb{R}. \tag{12}$$

Since the Caputo fractional derivative of a constant is equal to zero, the equilibrium point of system (6) is a constant vector $(x^{*}, y^{*}) = (x_{1}^{*}, x_{2}^{*}, \ldots, x_{n}^{*}, y_{1}^{*}, y_{2}^{*}, \ldots, y_{n}^{*})^{T} \in \mathbb{R}^{2n}$ which satisfies the system

$$\begin{aligned}
c_{i}x_{i}^{*} - \sum_{j=1}^{n}b_{ij}f_{j}(y_{j}^{*}) - \sum_{j=1}^{n}\int_{0}^{\tau}r_{ij}(s)f_{j}(y_{j}^{*})\,ds - I_{i} &= 0, \qquad i = 1, \ldots, n,\\
c_{j}y_{j}^{*} - \sum_{i=1}^{n}d_{ji}g_{i}(x_{i}^{*}) - \sum_{i=1}^{n}\int_{0}^{\tau}p_{ji}(s)g_{i}(x_{i}^{*})\,ds - \tilde{I}_{j} &= 0, \qquad j = 1, \ldots, n.
\end{aligned} \tag{13}$$

By using the Schauder fixed point theorem and assumptions (H1)–(H3), it is easy to prove that the equilibrium points of system (6) exist. We can shift the equilibrium point of system (6) to the origin. Denoting

$$\begin{aligned}
(u(t), v(t)) &= (u_{1}(t), \ldots, u_{n}(t), v_{1}(t), \ldots, v_{n}(t))^{T}\\
&= (x_{1}(t)-x_{1}^{*}, \ldots, x_{n}(t)-x_{n}^{*}, y_{1}(t)-y_{1}^{*}, \ldots, y_{n}(t)-y_{n}^{*})^{T},
\end{aligned} \tag{14}$$

    then system (6) can be written as

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}u_{i}(t) &= -c_{i}u_{i}(t) + \sum_{j=1}^{n}b_{ij}F_{j}(v_{j}(t)) + \sum_{j=1}^{n}\int_{0}^{\tau}r_{ij}(s)F_{j}(v_{j}(t-s))\,ds, \qquad t \ge 0,\\
{}^{C}_{0}D^{\beta}_{t}v_{j}(t) &= -c_{j}v_{j}(t) + \sum_{i=1}^{n}d_{ji}G_{i}(u_{i}(t)) + \sum_{i=1}^{n}\int_{0}^{\tau}p_{ji}(s)G_{i}(u_{i}(t-s))\,ds, \qquad i, j = 1, \ldots, n,
\end{aligned} \tag{15}$$

    with the initial conditions

$$u_{i}(\theta) = \bar{\varphi}_{i}(\theta), \quad u_{i}'(\theta) = \bar{\varphi}_{i}'(\theta), \quad v_{j}(\theta) = \bar{\psi}_{j}(\theta), \quad v_{j}'(\theta) = \bar{\psi}_{j}'(\theta), \qquad \theta \in [-\tau, 0], \tag{16}$$

    where

$$\begin{aligned}
F_{j}(v_{j}(t)) &= f_{j}(v_{j}(t) + y_{j}^{*}) - f_{j}(y_{j}^{*}), \qquad G_{i}(u_{i}(t)) = g_{i}(u_{i}(t) + x_{i}^{*}) - g_{i}(x_{i}^{*}),\\
\bar{\varphi}_{i}(\theta) &= \varphi_{i}(\theta) - x_{i}^{*}, \qquad \bar{\psi}_{j}(\theta) = \psi_{j}(\theta) - y_{j}^{*}, \qquad \theta \in [-\tau, 0].
\end{aligned} \tag{17}$$

Similarly, by using the matrix-vector notation, system (15) can be expressed as

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}u(t) &= -Cu(t) + BF(v(t)) + \int_{0}^{\tau}R(s)F(v(t-s))\,ds, \qquad t \ge 0,\\
{}^{C}_{0}D^{\beta}_{t}v(t) &= -Cv(t) + DG(u(t)) + \int_{0}^{\tau}P(s)G(u(t-s))\,ds, \qquad t \ge 0,
\end{aligned} \tag{18}$$

    with the initial condition

$$u(\theta) = \bar{\varphi}(\theta), \quad u'(\theta) = \bar{\varphi}'(\theta), \quad v(\theta) = \bar{\psi}(\theta), \quad v'(\theta) = \bar{\psi}'(\theta), \qquad \theta \in [-\tau, 0], \tag{19}$$

    where

$$F(v(t)) = (F_{1}(v_{1}(t)), F_{2}(v_{2}(t)), \ldots, F_{n}(v_{n}(t)))^{T}, \qquad G(u(t)) = (G_{1}(u_{1}(t)), G_{2}(u_{2}(t)), \ldots, G_{n}(u_{n}(t)))^{T}. \tag{20}$$


    Define the functions as follows:

$$h_{i}(t) = \begin{cases} \dfrac{F_{i}(v_{i}(t))}{v_{i}(t)}, & v_{i}(t) \ne 0,\\[1mm] 0, & v_{i}(t) = 0, \end{cases} \qquad l_{j}(t) = \begin{cases} \dfrac{G_{j}(u_{j}(t))}{u_{j}(t)}, & u_{j}(t) \ne 0,\\[1mm] 0, & u_{j}(t) = 0, \end{cases} \tag{21}$$

where $i, j = 1, \ldots, n$. From assumption (H3), we obtain $|h_{i}(t)| \le h_{i}$ and $|l_{j}(t)| \le l_{j}$. By (21), we have

$$F_{i}(v_{i}(t)) = h_{i}(t)v_{i}(t), \qquad G_{j}(u_{j}(t)) = l_{j}(t)u_{j}(t), \qquad i, j = 1, \ldots, n. \tag{22}$$

Thus, system (18) can be further written in the following form:

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}u(t) &= -Cu(t) + BH(t)v(t) + \int_{0}^{\tau}R(s)H(t-s)v(t-s)\,ds, \qquad t \ge 0,\\
{}^{C}_{0}D^{\beta}_{t}v(t) &= -Cv(t) + DL(t)u(t) + \int_{0}^{\tau}P(s)L(t-s)u(t-s)\,ds, \qquad t \ge 0,
\end{aligned} \tag{23}$$

where $H(t) = \mathrm{diag}\{h_{i}(t)\}$ and $L(t) = \mathrm{diag}\{l_{j}(t)\}$.

Definition 4. System (23) with the initial condition (19) is finite-time stable with respect to $\{\delta, \varepsilon, t_{0}, J\}$, $\delta < \varepsilon$, if and only if
$$\|(\bar{\varphi}, \bar{\psi})\|_{1} := \|\bar{\varphi}\|_{1} + \|\bar{\psi}\|_{1} < \delta \tag{24}$$
implies
$$\|(u(t), v(t))\| = \|u(t)\| + \|v(t)\| < \varepsilon, \qquad \forall t \in J, \tag{25}$$
where $\delta$ and $\varepsilon$ are positive real numbers with $\delta < \varepsilon$, $t_{0}$ denotes the initial time of observation of the system, and $J$ denotes the time interval $J = [t_{0}, t_{0} + T)$.

A technical result from [30], which provides a norm upper bound for the matrix function $E_{\alpha,\beta}$, is given as follows.

Lemma 5. If $\alpha \ge 1$, then, for $\beta = 1, 2, \alpha$, one has
$$\|E_{\alpha,\beta}(At^{\alpha})\| \le \|e^{At^{\alpha}}\|, \qquad t \ge 0. \tag{26}$$
Moreover, if $A$ is a diagonal stability matrix, then
$$\|E_{\alpha,\beta}(At^{\alpha})\| \le e^{-\omega t}, \qquad t \ge 0, \tag{27}$$
where $-\omega$ ($\omega > 0$) is the largest eigenvalue of the diagonal stability matrix $A$.

Lemma 6 (see [31]). Let $u(t)$, $a(t)$ be nonnegative and locally integrable on $[0, T)$ ($T \le +\infty$), and let $g(t)$ be a nonnegative, nondecreasing continuous function defined on $[0, T)$ with $g(t) \le M$, where $M$ is a real constant, and let $\alpha > 0$, with
$$u(t) \le a(t) + g(t)\int_{0}^{t}(t-s)^{\alpha-1}u(s)\,ds, \qquad t \in [0, T). \tag{28}$$

    Then

$$u(t) \le a(t) + \int_{0}^{t}\Bigg[\sum_{n=1}^{\infty}\frac{(g(t)\Gamma(\alpha))^{n}}{\Gamma(n\alpha)}(t-s)^{n\alpha-1}a(s)\Bigg]ds, \qquad t \in [0, T). \tag{29}$$

    Moreover, if 𝑎(𝑡) is a nondecreasing function on [0, 𝑇), then

$$u(t) \le a(t)E_{\alpha,1}(g(t)\Gamma(\alpha)t^{\alpha}), \qquad t \in [0, T). \tag{30}$$

    3. Main Result

We first give a key lemma that is used in the proof of our main result.

Lemma 7. Let $u(t)$, $v(t)$ be nonnegative and locally integrable on $[0, T)$ ($T \le +\infty$), let $a_{1}(t)$, $a_{2}(t)$ be nonnegative, nondecreasing, and locally integrable on $[0, T)$, and let $b_{1}$, $b_{2}$ be two positive constants, $\alpha, \beta > 1$, with
$$u(t) \le a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}v(s)\,ds, \qquad t \in [0, T), \tag{31}$$
$$v(t) \le a_{2}(t) + b_{2}\int_{0}^{t}(t-s)^{\beta-1}u(s)\,ds, \qquad t \in [0, T). \tag{32}$$

    Then

$$\begin{aligned}
u(t) &\le \Big(a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds\Big)E_{\alpha+\beta}\big(b_{1}b_{2}\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\big), \qquad t \in [0, T),\\
v(t) &\le \Big(a_{2}(t) + b_{2}\int_{0}^{t}(t-s)^{\beta-1}a_{1}(s)\,ds\Big)E_{\alpha+\beta}\big(b_{1}b_{2}\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\big), \qquad t \in [0, T).
\end{aligned} \tag{33}$$

    Proof. Substituting (32) into (31), we obtain

$$\begin{aligned}
u(t) &\le a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}v(s)\,ds\\
&\le a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds + b_{1}b_{2}\int_{0}^{t}(t-s)^{\alpha-1}\int_{0}^{s}(s-\xi)^{\beta-1}u(\xi)\,d\xi\,ds.
\end{aligned} \tag{34}$$


Changing the order of integration in the above double integral, we obtain

$$\begin{aligned}
u(t) &\le a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds + b_{1}b_{2}\int_{0}^{t}u(\xi)\,d\xi\int_{\xi}^{t}(t-s)^{\alpha-1}(s-\xi)^{\beta-1}\,ds\\
&= a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds + b_{1}b_{2}\int_{0}^{t}\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}(t-\xi)^{\alpha+\beta-1}u(\xi)\,d\xi.
\end{aligned} \tag{35}$$

Let $a(t) = a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds$ and $g(t) = b_{1}b_{2}\Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$; then $a(t)$ is a nonnegative, nondecreasing, and locally integrable function, and $g(t)$ is a nonnegative, nondecreasing continuous function. Thus, by Lemma 6 (30), one has

$$u(t) \le \Big(a_{1}(t) + b_{1}\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds\Big)E_{\alpha+\beta}\big(b_{1}b_{2}\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\big). \tag{36}$$

    Similarly, we get

$$v(t) \le \Big(a_{2}(t) + b_{2}\int_{0}^{t}(t-s)^{\beta-1}a_{1}(s)\,ds\Big)E_{\alpha+\beta}\big(b_{1}b_{2}\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\big). \tag{37}$$

    For convenience, let

$$\begin{aligned}
\bar{R} &= \sup_{0\le s\le\tau}\{\|R(s)\|\}, \qquad \bar{P} = \sup_{0\le s\le\tau}\{\|P(s)\|\}, \qquad h = \max_{1\le i\le n}\{h_{i}\}, \qquad l = \max_{1\le j\le n}\{l_{j}\},\\
\Theta(t) &:= \max\Big\{\frac{h}{\alpha}t^{\alpha}\Big(1+\frac{t}{\alpha+1}\Big)\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big),\ \frac{l}{\beta}t^{\beta}\Big(1+\frac{t}{\beta+1}\Big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Big\},
\end{aligned} \tag{38}$$

where $-\gamma$ is the largest eigenvalue of the diagonal stability matrix $-C$ and $\mu(\cdot)$ denotes the largest singular value of a matrix.

In the following, a sufficient condition for the finite-time stability of fractional-order BAM neural networks with distributed delay is derived.

Theorem 8. Let $1 < \alpha, \beta < 2$. Suppose that system (23) satisfies (H1)–(H3) with the initial condition (19) and that
$$e^{-\gamma t}\big((1+t)+\Theta(t)\big)E_{\alpha+\beta}\Big[hl\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\Big] < \frac{\varepsilon}{\delta} \tag{39}$$
for all $t \in J = [0, T)$; then system (23) is finite-time stable with respect to $\{\delta, \varepsilon, 0, J\}$, $\delta < \varepsilon$.

Proof. By applying the Laplace transform and the inverse Laplace transform, system (23) is equivalent to

$$\begin{aligned}
u(t) ={}& E_{\alpha}(-Ct^{\alpha})\bar{\varphi}(0) + tE_{\alpha,2}(-Ct^{\alpha})\bar{\varphi}'(0)\\
&+ \int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}\big(-C(t-s)^{\alpha}\big)\Big[BH(s)v(s) + \int_{0}^{\tau}R(\eta)H(s-\eta)v(s-\eta)\,d\eta\Big]ds,
\end{aligned} \tag{40}$$

$$\begin{aligned}
v(t) ={}& E_{\beta}(-Ct^{\beta})\bar{\psi}(0) + tE_{\beta,2}(-Ct^{\beta})\bar{\psi}'(0)\\
&+ \int_{0}^{t}(t-s)^{\beta-1}E_{\beta,\beta}\big(-C(t-s)^{\beta}\big)\Big[DL(s)u(s) + \int_{0}^{\tau}P(\eta)L(s-\eta)u(s-\eta)\,d\eta\Big]ds.
\end{aligned} \tag{41}$$
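For completeness, the transform pair behind this step is the standard one for Caputo derivatives of order $1 < \alpha < 2$ combined with (5): writing $U(s) = \mathcal{L}\{u\}(s)$, the Laplace transform of the Caputo derivative is
$$\mathcal{L}\{{}^{C}_{0}D^{\alpha}_{t}u(t)\}(s) = s^{\alpha}U(s) - s^{\alpha-1}u(0) - s^{\alpha-2}u'(0),$$
and inverting term by term with (5) (used with $\beta = 1$, $\beta = 2$, and $\beta = \alpha$) yields the three terms of (40); (41) is obtained in the same way.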

    From (40), (41), and Lemma 5, we obtain

$$\begin{aligned}
\|u(t)\| \le{}& \big(\|\bar{\varphi}\| + \|\bar{\varphi}'\|t\big)e^{-\gamma t}\\
&+ \int_{0}^{t}(t-s)^{\alpha-1}e^{-\gamma(t-s)}\Big\|BH(s)v(s) + \int_{0}^{\tau}R(\eta)H(s-\eta)v(s-\eta)\,d\eta\Big\|\,ds,
\end{aligned} \tag{42}$$


$$\begin{aligned}
\|v(t)\| \le{}& \big(\|\bar{\psi}\| + \|\bar{\psi}'\|t\big)e^{-\gamma t}\\
&+ \int_{0}^{t}(t-s)^{\beta-1}e^{-\gamma(t-s)}\Big\|DL(s)u(s) + \int_{0}^{\tau}P(\eta)L(s-\eta)u(s-\eta)\,d\eta\Big\|\,ds.
\end{aligned} \tag{43}$$

Let $U(t) = \sup_{\theta\in[t-\tau,t]}\|u(\theta)\|e^{\gamma\theta}$ and $V(t) = \sup_{\theta\in[t-\tau,t]}\|v(\theta)\|e^{\gamma\theta}$; then
$$\|u(s)\|e^{\gamma s} \le U(s), \qquad \|u(s-\tau)\|e^{\gamma(s-\tau)} \le U(s), \tag{44}$$
$$\|v(s)\|e^{\gamma s} \le V(s), \qquad \|v(s-\tau)\|e^{\gamma(s-\tau)} \le V(s). \tag{45}$$

Thus, by (42) and (45), we have

$$\begin{aligned}
\|u(t)\|e^{\gamma t} \le{}& \|\bar{\varphi}\| + \|\bar{\varphi}'\|t + \int_{0}^{t}(t-s)^{\alpha-1}\Big[h\mu(B)\|v(s)\|e^{\gamma s} + \int_{0}^{\tau}h\bar{R}\|v(s-\eta)\|e^{\gamma(s-\eta)}e^{\gamma\eta}\,d\eta\Big]ds\\
\le{}& \|\bar{\varphi}\| + \|\bar{\varphi}'\|t + \int_{0}^{t}(t-s)^{\alpha-1}h\big[\mu(B)+\bar{R}\tau e^{\gamma\tau}\big]V(s)\,ds,
\end{aligned} \tag{46}$$

where $\mu(B)$ denotes the largest singular value of the matrix $B$. Similarly, by (43) and (44), we get

$$\begin{aligned}
\|v(t)\|e^{\gamma t} \le{}& \|\bar{\psi}\| + \|\bar{\psi}'\|t + \int_{0}^{t}(t-s)^{\beta-1}\Big[l\mu(D)\|u(s)\|e^{\gamma s} + \int_{0}^{\tau}l\bar{P}\|u(s-\eta)\|e^{\gamma(s-\eta)}e^{\gamma\eta}\,d\eta\Big]ds\\
\le{}& \|\bar{\psi}\| + \|\bar{\psi}'\|t + \int_{0}^{t}(t-s)^{\beta-1}l\big[\mu(D)+\bar{P}\tau e^{\gamma\tau}\big]U(s)\,ds.
\end{aligned} \tag{47}$$

    Hence, by (46) and (47), we have

$$\begin{aligned}
U(t) &\le \|\bar{\varphi}\|_{1}(1+t) + \int_{0}^{t}(t-s)^{\alpha-1}h\big[\mu(B)+\bar{R}\tau e^{\gamma\tau}\big]V(s)\,ds,\\
V(t) &\le \|\bar{\psi}\|_{1}(1+t) + \int_{0}^{t}(t-s)^{\beta-1}l\big[\mu(D)+\bar{P}\tau e^{\gamma\tau}\big]U(s)\,ds.
\end{aligned} \tag{48}$$

    Set

$$a_{1}(t) = \|\bar{\varphi}\|_{1}(1+t), \qquad a_{2}(t) = \|\bar{\psi}\|_{1}(1+t), \qquad b_{1} = h\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big), \qquad b_{2} = l\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big). \tag{49}$$

    By simple computation, we have

$$\begin{aligned}
\int_{0}^{t}(t-s)^{\alpha-1}a_{2}(s)\,ds &= \|\bar{\psi}\|_{1}\int_{0}^{t}(t-s)^{\alpha-1}(1+s)\,ds = \frac{\|\bar{\psi}\|_{1}}{\alpha}t^{\alpha}\Big(1+\frac{t}{\alpha+1}\Big),\\
\int_{0}^{t}(t-s)^{\beta-1}a_{1}(s)\,ds &= \|\bar{\varphi}\|_{1}\int_{0}^{t}(t-s)^{\beta-1}(1+s)\,ds = \frac{\|\bar{\varphi}\|_{1}}{\beta}t^{\beta}\Big(1+\frac{t}{\beta+1}\Big).
\end{aligned} \tag{50}$$

    It follows from (48)–(50) and Lemma 7 that

$$\begin{aligned}
U(t) \le{}& \Big[(1+t)\|\bar{\varphi}\|_{1} + \frac{\|\bar{\psi}\|_{1}}{\alpha}t^{\alpha}\Big(1+\frac{t}{\alpha+1}\Big)h\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\Big]\\
&\cdot E_{\alpha+\beta}\Big[hl\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\Big],\\
V(t) \le{}& \Big[(1+t)\|\bar{\psi}\|_{1} + \frac{\|\bar{\varphi}\|_{1}}{\beta}t^{\beta}\Big(1+\frac{t}{\beta+1}\Big)l\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Big]\\
&\cdot E_{\alpha+\beta}\Big[hl\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\Big].
\end{aligned} \tag{51}$$


    By (51), we obtain

$$\begin{aligned}
\|(u(t), v(t))\| &= \|u(t)\| + \|v(t)\|\\
&\le e^{-\gamma t}\|(\bar{\varphi}, \bar{\psi})\|_{1}\big((1+t)+\Theta(t)\big)E_{\alpha+\beta}\Big[hl\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\Big].
\end{aligned} \tag{52}$$

Thus, if condition (39) is satisfied and $\|(\bar{\varphi}, \bar{\psi})\|_{1} < \delta$, then $\|(u(t), v(t))\| < \varepsilon$ for all $t \in J$; that is, system (23) is finite-time stable. This completes the proof.

    4. An Illustrative Example

In this section, we give an example to illustrate the effectiveness of our main result.

Consider the following two-state Caputo fractional-order BAM neural network model with distributed delay:

$$\begin{aligned}
{}^{C}_{0}D^{\alpha}_{t}x_{1}(t) &= -0.7x_{1}(t) - 0.2f_{1}(y_{1}(t)) + 0.1f_{2}(y_{2}(t)) + \int_{0}^{\tau}s^{3/2}f_{1}(y_{1}(t-s))\,ds + \int_{0}^{\tau}s\,f_{2}(y_{2}(t-s))\,ds,\\
{}^{C}_{0}D^{\alpha}_{t}x_{2}(t) &= -0.6x_{2}(t) + 0.3f_{1}(y_{1}(t)) + 0.2f_{2}(y_{2}(t)) + \int_{0}^{\tau}s\,f_{1}(y_{1}(t-s))\,ds - \int_{0}^{\tau}s^{3/2}f_{2}(y_{2}(t-s))\,ds,\\
{}^{C}_{0}D^{\beta}_{t}y_{1}(t) &= -0.7y_{1}(t) + 0.4g_{1}(x_{1}(t)) + 0.2g_{2}(x_{2}(t)) - \int_{0}^{\tau}s\,g_{1}(x_{1}(t-s))\,ds + \int_{0}^{\tau}s^{2}g_{2}(x_{2}(t-s))\,ds,\\
{}^{C}_{0}D^{\beta}_{t}y_{2}(t) &= -0.6y_{2}(t) + 0.1g_{1}(x_{1}(t)) - 0.3g_{2}(x_{2}(t)) + \int_{0}^{\tau}s^{2}g_{1}(x_{1}(t-s))\,ds + \int_{0}^{\tau}s\,g_{2}(x_{2}(t-s))\,ds,
\end{aligned} \tag{53}$$

    with the initial condition

$$\begin{aligned}
x(t) &= \varphi(t) = \tfrac{1}{15}\sin t, \qquad x'(t) = \varphi'(t) = \tfrac{1}{15}\cos t, \qquad t \in [-\tau, 0],\\
y(t) &= \psi(t) = \tfrac{1}{15}\cos t, \qquad y'(t) = \psi'(t) = -\tfrac{1}{15}\sin t, \qquad t \in [-\tau, 0],
\end{aligned} \tag{54}$$

where $\alpha = 1.2$, $\beta = 1.3$, $\tau = 0.2$, and $f_{j}(x_{j}) = g_{j}(x_{j}) = \frac{1}{2}(|x_{j}+1| - |x_{j}-1|)$, $j = 1, 2$. It is easy to verify that $(x_{1}^{*}, x_{2}^{*}, y_{1}^{*}, y_{2}^{*})^{T} = (0, 0, 0, 0)^{T}$ is an equilibrium point of system (53). Since $\|(\varphi, \psi)\|_{1} = 1/15 < 0.07$, we may take $\delta = 0.07$. Take

$$t_{0} = 0, \qquad J = [0, 30), \qquad \varepsilon = 1,$$
$$-C = \begin{bmatrix} -0.7 & 0\\ 0 & -0.6 \end{bmatrix}, \qquad B = \begin{bmatrix} -0.2 & 0.1\\ 0.3 & 0.2 \end{bmatrix}, \qquad D = \begin{bmatrix} 0.4 & 0.2\\ 0.1 & -0.3 \end{bmatrix},$$
$$R(s) = \begin{bmatrix} s^{3/2} & s\\ s & -s^{3/2} \end{bmatrix}, \qquad P(s) = \begin{bmatrix} -s & s^{2}\\ s^{2} & s \end{bmatrix}. \tag{55}$$

    It is easy to check that

$$\begin{aligned}
&h = l = 1, \qquad \gamma = 0.6, \qquad \mu(B) = 0.3828, \qquad \mu(D) = 0.4515, \qquad \bar{R} = 0.2894, \qquad \bar{P} = 0.24,\\
&\Theta(t) = \max\{0.1697t^{1.2}(t+2.2),\ 0.1691t^{1.3}(t+2.3)\},\\
&E_{\alpha+\beta}\Big[hl\big(\mu(B)+\bar{R}\tau e^{\gamma\tau}\big)\big(\mu(D)+\bar{P}\tau e^{\gamma\tau}\big)\Gamma(\alpha)\Gamma(\beta)t^{\alpha+\beta}\Big] = E_{2.5}\big(0.1867t^{2.5}\big).
\end{aligned} \tag{56}$$

From condition (39) of Theorem 8, we get

$$e^{-0.6t}\Big[(1+t) + \max\{0.1697t^{1.2}(t+2.2),\ 0.1691t^{1.3}(t+2.3)\}\Big]E_{2.5}\big(0.1867t^{2.5}\big) < \frac{1}{0.07}. \tag{57}$$

Solving (57) numerically, the estimated time of finite-time stability is $T \approx 23.78$. Hence, system (53) is finite-time stable with respect to $\{0.07, 1, 0, [0, 30)\}$.
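The estimate can be checked numerically. The following Python sketch (not part of the paper; the truncated-series Mittag-Leffler evaluator, the grid step, and the function names are choices made for this illustration) evaluates the left-hand side of (57) with the constants from (56) and reports the first time at which it reaches $1/0.07$; the result should land close to $T \approx 23.78$.

```python
import math

DELTA, EPS, GAMMA = 0.07, 1.0, 0.6   # delta, epsilon, and gamma from the example

def ml(alpha, z, tol=1e-15):
    """Mittag-Leffler E_alpha(z) for z >= 0 by its power series, using log-Gamma;
    summation stops once the terms, past their peak, are negligible."""
    if z <= 0.0:
        return 1.0
    log_z, total, prev, k = math.log(z), 0.0, float("inf"), 0
    while True:
        term = math.exp(k * log_z - math.lgamma(k * alpha + 1.0))
        total += term
        if term < prev and term < tol * total:
            return total
        prev, k = term, k + 1

def theta(t):
    # Theta(t) as computed in (56)
    return max(0.1697 * t**1.2 * (t + 2.2), 0.1691 * t**1.3 * (t + 2.3))

def lhs(t):
    # left-hand side of condition (57)
    if t == 0.0:
        return 1.0
    return math.exp(-GAMMA * t) * ((1.0 + t) + theta(t)) * ml(2.5, 0.1867 * t**2.5)

# scan J = [0, 30) and report where the bound epsilon/delta is first reached
t_cross = next((k * 0.01 for k in range(3000) if lhs(k * 0.01) >= EPS / DELTA), None)
print("estimated finite-time stability horizon T ≈", t_cross)
```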

    Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


    Acknowledgments

This work is supported by the Natural Science Foundation of Jiangsu Province (BK2011407) and the Natural Science Foundation of China (11271364 and 10771212).

    References

[1] K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley & Sons, New York, NY, USA, 1993.
[2] I. Podlubny, Fractional Differential Equations, Academic Press, San Diego, Calif, USA, 1999.
[3] E. Soczkiewicz, "Application of fractional calculus in the theory of viscoelasticity," Molecular and Quantum Acoustics, vol. 23, pp. 397–404, 2002.
[4] V. V. Kulish and J. L. Lage, "Application of fractional calculus to fluid mechanics," Journal of Fluids Engineering, vol. 124, no. 3, pp. 803–806, 2002.
[5] A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier Science, Amsterdam, The Netherlands, 2006.
[6] J. Sabatier, O. P. Agrawal, and J. Machado, Advances in Fractional Calculus: Theoretical Developments and Applications, Springer, Berlin, Germany, 2007.
[7] P. Arena, R. Caponetto, L. Fortuna, and D. Porto, "Bifurcation and chaos in noninteger order cellular neural networks," International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 8, no. 7, pp. 1527–1539, 1998.
[8] P. Arena, L. Fortuna, and D. Porto, "Chaotic behavior in noninteger-order cellular neural networks," Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics, vol. 61, no. 1, pp. 776–781, 2000.
[9] M. P. Lazarević, "Finite time stability analysis of $PD^{\alpha}$ fractional control of robotic time-delay systems," Mechanics Research Communications, vol. 33, no. 2, pp. 269–279, 2006.
[10] A. Boroomand and M. Menhaj, "Fractional-order Hopfield neural networks," in Proceedings of the Natural Computation International Conference, pp. 883–890, 2010.
[11] E. Kaslik and S. Sivasundaram, "Dynamics of fractional-order neural networks," in Proceedings of the International Joint Conference on Neural Networks (IJCNN '11), pp. 611–618, August 2011.
[12] E. Kaslik and S. Sivasundaram, "Nonlinear dynamics and chaos in fractional-order neural networks," Neural Networks, vol. 32, pp. 245–256, 2012.
[13] R. Zhang, D. Qi, and Y. Wang, "Dynamics analysis of fractional order three-dimensional Hopfield neural network," in Proceedings of the 6th International Conference on Natural Computation (ICNC '10), pp. 3037–3039, August 2010.
[14] H. Delavari, D. Baleanu, and J. Sadati, "Stability analysis of Caputo fractional-order nonlinear systems revisited," Nonlinear Dynamics, vol. 67, no. 4, pp. 2433–2439, 2012.
[15] Y. Li, Y. Chen, and I. Podlubny, "Mittag-Leffler stability of fractional order nonlinear dynamic systems," Automatica, vol. 45, no. 8, pp. 1965–1969, 2009.
[16] Y. Li, Y. Chen, and I. Podlubny, "Stability of fractional-order nonlinear dynamic systems: Lyapunov direct method and generalized Mittag-Leffler stability," Computers & Mathematics with Applications, vol. 59, no. 5, pp. 1810–1821, 2010.
[17] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems, Birkhäuser, Boston, Mass, USA, 2003.
[18] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1239–1242, 2002.
[19] J. Tian, W. Xiong, and F. Xu, "Improved delay-partitioning method to stability analysis for neural networks with discrete and distributed time-varying delays," Applied Mathematics and Computation, vol. 233, pp. 152–164, 2014.
[20] J. Tian, Y. Li, J. Zhao, and S. Zhong, "Delay-dependent stochastic stability criteria for Markovian jumping neural networks with mode-dependent time-varying delays and partially known transition rates," Applied Mathematics and Computation, vol. 218, no. 9, pp. 5769–5781, 2012.
[21] L. Chen, Y. Chai, R. Wu, and T. Zhai, "Dynamic analysis of a class of fractional-order neural networks with delay," Neurocomputing, vol. 111, pp. 190–194, 2013.
[22] R.-C. Wu, X.-D. Hei, and L.-P. Chen, "Finite-time stability of fractional-order neural networks with delay," Communications in Theoretical Physics, vol. 60, no. 2, pp. 189–193, 2013.
[23] A. Alofi, J. Cao, A. Elaiw, and A. Al-Mazrooei, "Delay-dependent stability criterion of Caputo fractional neural networks with distributed delay," Discrete Dynamics in Nature and Society, vol. 2014, Article ID 529358, 6 pages, 2014.
[24] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
[25] B. Kosko, "Bidirectional associative memories," IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[26] S. Arik and V. Tavsanoglu, "Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays," Neurocomputing, vol. 68, no. 1–4, pp. 161–176, 2005.
[27] S. Senan, S. Arik, and D. Liu, "New robust stability results for bidirectional associative memory neural networks with multiple time delays," Applied Mathematics and Computation, vol. 218, no. 23, pp. 11472–11482, 2012.
[28] B. Liu, "Global exponential stability for BAM neural networks with time-varying delays in the leakage terms," Nonlinear Analysis: Real World Applications, vol. 14, no. 1, pp. 559–566, 2013.
[29] R. Raja and S. M. Anthoni, "Global exponential stability of BAM neural networks with time-varying delays: the discrete-time case," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 2, pp. 613–622, 2011.
[30] M. de la Sen, "About robust stability of Caputo linear fractional dynamic systems with time delays through fixed point theory," Fixed Point Theory and Applications, vol. 2011, Article ID 867932, 19 pages, 2011.
[31] H. Ye, J. Gao, and Y. Ding, "A generalized Gronwall inequality and its application to a fractional differential equation," Journal of Mathematical Analysis and Applications, vol. 328, no. 2, pp. 1075–1081, 2007.
