
Consensus Behavior of Multi-Agent Systems Under Digital Network Topology

YU Hong-Wang 1   ZHENG Yu-Fan 2,3   LI Ding-Gen 4

Abstract  This paper studies the consensus behavior of multi-agent systems under a digital network topology. It is assumed that the agents are distributed on a plane and communicate through a digital network topology. The location coordinates of each agent are measured by remote sensors and transmitted digitally to its neighbors. The control protocol is designed using the exact discrete states of the neighbors, and the dynamics of each agent is described by a hybrid system. It is shown that the whole multi-agent system may exhibit not only aggregation but also divergence and periodic oscillation, depending on the dynamic properties of each agent and the sampling period, as well as the algebraic characterization of the network topology. Moreover, we give the exact algebraic discriminant that separates these behaviors. Examples show the effectiveness of our theoretical results.

Key words Multi-agent systems, hybrid systems, distributed protocol, consensus

Citation  Yu Hong-Wang, Zheng Yu-Fan, Li Ding-Gen. Consensus behavior of multi-agent systems under digital network topology. Acta Automatica Sinica, 2012, 38(3): 357−363

DOI 10.1016/S1874-1029(11)60298-X

In recent years, distributed coordination for multi-agent systems has emerged as a hot research area. This is mainly because of the demands of engineering applications such as cooperative control of unmanned air vehicles (UAVs), formation control, distributed sensor networks, attitude alignment of clusters of satellites, congestion control in communication networks, and flocking of biological swarms, see, e.g., [1−15]. One interesting research topic is to find the conditions under which the dynamic agents in the network achieve aggregation, also called consensus stability. Consensus control has been discussed systematically by Olfati-Saber et al.[2], where consensus means reaching an agreement (or aggregation) of the agents regarding a certain quantity of interest that depends on the initial states of all agents in the network (or dynamical multi-agent system). In their work, the dynamics of each agent was modeled by a simple scalar continuous-time integrator ẋ = u. Following the work of [2], Xie et al.[3] studied the average-consensus problem where the agent was a point-mass located on a line, and its dynamics was described by Newton's law ma = F. A linear consensus control protocol was established for solving such a consensus problem in their work.

In most of the literature on consensus problems of multi-agent systems, a continuous-time setting is often assumed. However, information may not be transmitted continuously, or may only be exchanged periodically, due to the unreliability of communication and the constraints of total cost. Meanwhile, in many cases, though the system itself is a continuous process, only sampled data at discrete sampling instants are available for control synthesis, due to the use of digital sensors and controllers. Thus, it is more practical to take intermittent information transmission into account, which results in a discrete-time or hybrid formulation, see, e.g., [16−21].

Manuscript received March 8, 2010; accepted March 2, 2011
Supported by National Natural Science Foundation of China (60674046) and Theory and Application of Differential Equations Foundation of Nanjing Audit University (202720035)
Recommended by Associate Editor GUAN Xin-Ping
1. Department of Mathematics and Statistics, Nanjing Audit University, Nanjing 211815, China  2. NICTA, Victoria Research Laboratory, University of Melbourne, Australia  3. Institute of Systems Sciences, Shanghai University, Shanghai 200436, China  4. School of Energy and Power Engineering, Huazhong University of Science and Technology, Wuhan 430047, China

For discrete-time systems, the sampling period is often assumed to be 1 and its impact on system performance is neglected (see, e.g., [16]). For the hybrid case, the dynamics of each agent is often assumed to be described by differential equations, and the control inputs or control protocol are designed from the approximate discrete states of the neighbors and/or leader(s) at the sampling instants. The approximate states at the sampling instants are usually obtained by means of first-order or second-order discretization of the dynamical equation of the agent. For example, in the case of agents described by single-integrator kinematics, [18] proposed a proportional-and-derivative-like discrete-time consensus algorithm to study a team of vehicles communicating with their local neighbors at discrete-time instants, and obtained conditions on the sampling period and the control gain that ensure stability. In [21], the authors discussed first-order dynamic agents, showed that the convergence results rely on input-to-output stability properties, and extended them to a class of n-th-order discrete-time dynamic average consensus algorithms. In our paper, the dynamics of all agents are identical and are Lyapunov stable when the agents are control-free. The dynamics of each agent may describe the behavior of an unmanned vehicle. In our setting, each agent communicates with its neighbors through a digital communication network topology. Under a linear control protocol, the agents in the network are formulated as a hybrid decentralized networked control system. Using the exact discrete states of the neighbors in the control protocol, we show that the agents under a digital network topology may exhibit not only aggregation but also divergence and periodic oscillation, which depend on the dynamic properties of the agents, the sampling period T, and the algebraic characterization of the network topology. Moreover, we give the exact algebraic discriminant among these behaviors when the network topology is fixed.

The paper is organized as follows. Section 1 presents some properties of graph theory and describes the problem formulation. Section 2 provides the main results of this paper and gives the proofs. The simulation results are presented in Section 3. We conclude the paper in Section 4.


1 Preliminaries and problem description

By G = (V, E, A), we denote an undirected graph with adjacency matrix A = [a_ij], where V = {p_1, · · · , p_M} is the set of nodes and E ⊆ V × V is the set of edges. The node indices belong to a finite index set M = {1, · · · , M}. An edge of G is denoted by e_ij = (p_i, p_j) for some i, j ∈ M. The adjacency elements a_ij are defined as follows: e_ij ∈ E ⇔ a_ij = 1 and e_ij ∉ E ⇔ a_ij = 0. Moreover, we assume a_ii = 0 for all i ∈ M. The set of neighbors of node p_i is denoted by N_i = {p_j ∈ V | (p_i, p_j) ∈ E}. A path between a pair of distinct nodes p_i and p_j is a sequence of distinct edges of G of the form (p_i, p_k1), (p_k1, p_k2), · · · , (p_kl, p_j). A graph is called connected if there exists a path between any two distinct nodes of the graph. The node set V consists of M identical continuous-time dynamic agents, which may represent unmanned vehicles widely distributed in a plane. The communication among the agents is defined in the following way: if e_ij ∈ E, then agent p_i receives the message from p_j, while e_ij ∉ E implies there is no message from p_j to p_i.

The mathematical model of the dynamic agents is described as follows. By x_i = (x_i1, x_i2)^T ∈ R², we denote the location coordinate of the i-th agent, and v_i = (v_i1, v_i2)^T ∈ R² represents its velocity. By p_i we denote the i-th agent, and the dynamical equation of agent p_i is described by

    ẋ_i = v_i,
    m_i v̇_i = ρ v_i + u_i,    (1)
    y_i = F (x_i^T, v_i^T)^T,

where y_i is the measured output of agent p_i obtained from a remote sensor and transmitted through the communication network topology to other agents. We assume the sensor can obtain the information of the agents' locations rather than their velocities. Thus, let F = (I_{2×2}  0_{2×2}) and y_i = x_i. u_i = (u_i1, u_i2)^T is the control input to p_i. For dynamics (1), ρ is the speed feedback gain, and ρ < 0 implies that the dynamics of the vehicle is Lyapunov stable, i.e., the agent will gradually stop if there is no input control signal. The dynamical response of the agent is affected by ρ: the larger ρ is (i.e., the closer ρ is to 0), the faster the dynamical response of the agent is. Without loss of generality, we assume that m_i = 1 for all i ∈ M. The sensors measure the state at time instants with constant interval, {t_0, t_1, · · · , t_k, · · · }, i.e., t_{k+1} − t_k = T, k ≥ 0, and transmit the data through the communication network topology to the neighbors. We write x_i(k) = x_i(kT) and v_i(k) = v_i(kT). A zero-order hold is used, so u_i(t) = u_i(k) for t ∈ [t_k, t_{k+1}), k = 0, 1, · · · . Denoting

    ξ_i(k) = (x_i^T(k), v_i^T(k))^T,  i ∈ M,

the discretized version of (1) with sampling period T is

    ξ_i(k + 1) = A_d ξ_i(k) + B_d u_i(k),    (2)

where

    A_d = [ 1    (e^{ρT} − 1)/ρ ]
          [ 0    e^{ρT}         ] ⊗ I_{2×2},

    B_d = [ −T/ρ + (e^{ρT} − 1)/ρ² ]
          [ (e^{ρT} − 1)/ρ          ] ⊗ I_{2×2}.

The control protocol for each agent in the network topology is defined as follows:

    u_i(k) = Σ_{j ∈ N_i} a_ij (y_j(k) − y_i(k)),    (3)

where N_i is the set of neighbors of agent p_i. This paper works on the collective behaviors of the multi-agent system described by (1) and graph G = (V, E, A) under control protocol (3).

2 Conditions of consensus stability for multi-agent systems

Denoting ξ(k) = (ξ_1^T(k), · · · , ξ_M^T(k))^T, we describe the collective behavior of the agents. The controlled dynamic agents in the network topology are of the following form:

    ξ(k + 1) = Ω ξ(k),    (4)

where

    Ω = I_{M×M} ⊗ A_d − L ⊗ B_d F,    (5)

and L is the Laplacian associated with graph G. We assume that the M eigenvalues of the Laplacian L of graph G are denoted as 0 = λ_1 < λ_2 ≤ · · · ≤ λ_M [17]. With initial condition ξ(0), one has ξ(k) = Ω^k ξ(0).

Let J be the Jordan form associated with L; there exists an orthogonal matrix W such that W^T L W = W^{−1} L W = J. It follows that

    (W^T ⊗ I_{4×4}) Ω (W ⊗ I_{4×4}) = I_{M×M} ⊗ A_d − J ⊗ B_d F = diag{A_d, A_d − λ_2 B_d F, · · · , A_d − λ_M B_d F}.

Therefore, the behavior of the agents largely depends on the eigenvalues of A_d − λ_i B_d F, i ∈ M.
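As an illustrative sketch (added here, not from the paper), one can build Ω from (5) for a concrete connected graph and verify numerically that its spectrum coincides with the union of the spectra of the blocks A_d − λ_i B_d F; the 4-node Laplacian of the simulation section (Fig. 1) is used as the test graph.

import numpy as np

def build_omega(L, rho, T):
    """Omega = I_M (x) A_d - L (x) (B_d F) of (5), with F = [I_2  0_2], y_i = x_i."""
    e = np.exp(rho * T)
    I2 = np.eye(2)
    Ad = np.kron(np.array([[1.0, (e - 1.0) / rho], [0.0, e]]), I2)
    Bd = np.kron(np.array([[-T / rho + (e - 1.0) / rho**2], [(e - 1.0) / rho]]), I2)
    F = np.hstack([I2, np.zeros((2, 2))])
    M = L.shape[0]
    return np.kron(np.eye(M), Ad) - np.kron(L, Bd @ F), Ad, Bd @ F

L = np.array([[ 1, -1,  0,  0],
              [-1,  3, -1, -1],
              [ 0, -1,  2, -1],
              [ 0, -1, -1,  2]], dtype=float)
Omega, Ad, BdF = build_omega(L, rho=-0.8, T=0.1)

lam = np.linalg.eigvalsh(L)                                    # 0, 1, 3, 4
block_eigs = np.concatenate([np.linalg.eigvals(Ad - li * BdF) for li in lam])
omega_eigs = np.linalg.eigvals(Omega)
# Every eigenvalue of Omega is (numerically) an eigenvalue of some block A_d - lambda_i*B_d*F.
print(max(np.min(np.abs(block_eigs - z)) for z in omega_eigs) < 1e-8)   # expected: True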

Lemma 1. Given ρ < 0 and T > 0, matrix A_d of (2) has only two eigenvalues equal to 1; the other two are less than 1. In other words, each agent with dynamics (2) is Lyapunov stable when control-free (i.e., u_i(t) = 0).

Proof. The eigenvalues of matrix A_d are s_11 = s_12 = 1 and s_13 = s_14 = e^{ρT} < 1. □

We now discuss the eigenvalues of A_d − λ_i B_d F for i ∈ {2, · · · , M}. We obtain A_d − λ_i B_d F = A_i ⊗ I_{2×2}, where

    A_i = [ 1 + λ_i T/ρ + (λ_i/ρ²)(1 − e^{ρT})    (e^{ρT} − 1)/ρ ]
          [ (λ_i/ρ)(1 − e^{ρT})                    e^{ρT}        ].

Thus, we will focus on the eigenvalues of A_i. Consider the characteristic polynomial of A_i,

    f_i(s) = det(sI − A_i) = s² + a_{i1} s + a_{i0},    (6)


where i ∈ {2, · · · , M} and

    a_{i1} = −[e^{ρT} + 1 + (λ_i/ρ²)(1 + ρT − e^{ρT})],
    a_{i0} = e^{ρT} + (λ_i/ρ²)(1 + ρT e^{ρT} − e^{ρT}).    (7)

For any λ_i > 0, the eigenvalues of A_d − λ_i B_d F are located inside the unit circle centered at the origin if and only if the following inequalities hold:

    f_i(1) = 1 + a_{i1} + a_{i0} > 0,
    f_i(−1) = 1 − a_{i1} + a_{i0} > 0,    (8)
    |a_{i0}| < 1.

Using (7), one can obtain the equivalent inequalities:

    2(1 + e^{ρT}) + (λ_i/ρ²)(2 + ρT + ρT e^{ρT} − 2e^{ρT}) > 0,
    1 − e^{ρT} − (λ_i/ρ²)(ρT e^{ρT} + 1 − e^{ρT}) > 0.    (9)

It is well known that matrix A_d − λ_i B_d F is Schur stable if and only if inequalities (8) hold. Thus, inequalities (9) are the fundamental tool for us to study the behaviors of the agents.
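A small numerical helper (an addition for illustration, not part of the paper) that evaluates the two left-hand sides of (9) and compares the result with a direct Schur-stability test on the matrix A_i given above:

import numpy as np

def conditions_9(rho, T, lam):
    """Left-hand sides of the two inequalities in (9); both must be positive."""
    e = np.exp(rho * T)
    c1 = 2.0 * (1.0 + e) + (lam / rho**2) * (2.0 + rho * T + rho * T * e - 2.0 * e)
    c2 = 1.0 - e - (lam / rho**2) * (rho * T * e + 1.0 - e)
    return c1, c2

def is_schur(rho, T, lam):
    """Direct check: spectral radius of A_i strictly less than 1."""
    e = np.exp(rho * T)
    Ai = np.array([[1.0 + lam * T / rho + lam * (1.0 - e) / rho**2, (e - 1.0) / rho],
                   [lam * (1.0 - e) / rho, e]])
    return np.max(np.abs(np.linalg.eigvals(Ai))) < 1.0

# With rho = -0.8 and lam = 4 (the largest Laplacian eigenvalue of the later example),
# T = 0.1 satisfies both inequalities, while T = 0.5 violates the second one.
for T in (0.1, 0.5):
    print(T, conditions_9(-0.8, T, 4.0), is_schur(-0.8, T, 4.0))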

2.1 How does the sampling period affect the collective behavior of the multi-agent system?

We now discuss how the sampling period T affects the collective behavior of the multi-agent system.

Lemma 2. Given ρ < 0 and λ > 0, there exists a unique T_1 > 0 such that the first inequality in (9) holds for all T ∈ (0, T_1), where T_1 satisfies

    2(1 + e^{ρT_1}) + (λ/ρ²)(2 + ρT_1 + ρT_1 e^{ρT_1} − 2e^{ρT_1}) = 0.    (10)

Proof. Denoting q(T) = 2(1 + e^{ρT}) + (λ/ρ²)(2 + ρT + ρT e^{ρT} − 2e^{ρT}), one may obtain its derivative q′(T) = 2ρ e^{ρT} + (λ/ρ)(1 + ρT e^{ρT} − e^{ρT}).

Letting r(T) = 1 + ρT e^{ρT} − e^{ρT}, we get r′(T) = ρ²T e^{ρT} > 0 for all T > 0. As r(0) = 0, it holds that r(T) > 0 for all T > 0. Thus, one can obtain q′(T) = 2ρ e^{ρT} + λ r(T)/ρ < 0, which implies that q(T) is a decreasing function of T > 0. Consider q(0) = 4 and q(+∞) = lim_{T→+∞} q(T) = −∞. Therefore, there exists a unique T_1 > 0 such that q(T_1) = 0, and the first inequality in (9) holds for all T ∈ (0, T_1). □

Lemma 3. Given ρ < 0 and λ > 0, the second inequality in (9) holds if one of the following conditions holds:

1) ρ ≤ −√λ;

2) There exists a unique T_2 > 0 such that T ∈ (0, T_2), where T_2 satisfies

    1 − e^{ρT_2} − (λ/ρ²)(ρT_2 e^{ρT_2} + 1 − e^{ρT_2}) = 0.    (11)

Proof. Denoting p(T) = 1 − e^{ρT} − (λ/ρ²)(ρT e^{ρT} + 1 − e^{ρT}), one can obtain its derivative p′(T) = −(ρ + λT) e^{ρT}. If T < −ρ/λ, then p(T) is an increasing function of T > 0; if T > −ρ/λ, then p(T) is a decreasing function of T > 0. One gets p(0) = 0 and p(+∞) = lim_{T→+∞} p(T) = 1 − λ/ρ². So, if ρ ≤ −√λ, which implies p(+∞) ≥ 0, then the second inequality in (9) holds for all T ∈ (0, +∞). Otherwise, there exists a unique T_2 > −ρ/λ such that p(T_2) = 0, and the second inequality in (9) holds for all T ∈ (0, T_2). □

From the above analysis, we can get the following result. We assume that dynamics (1) with ρ < 0 and the topology G of the communication network are given. Thus, the M eigenvalues, 0 = λ_1 < λ_2 ≤ · · · ≤ λ_M, of the Laplacian L of graph G are fixed.

Lemma 4. 1) If ρ ≤ −√λ_M, then there exists a unique T̄_M > 0 such that inequalities (9) hold for all i ∈ M and T ∈ (0, T̄_M), where T̄_M > 0 satisfies

    2(1 + e^{ρT̄_M}) + (λ_M/ρ²)(2 + ρT̄_M + ρT̄_M e^{ρT̄_M} − 2e^{ρT̄_M}) = 0.    (12)

2) If ρ > −√λ_M, then there exists a unique T_M = min{T̄_M, T̃_M} > 0 such that inequalities (9) hold for all T ∈ (0, T_M), where T̃_M satisfies

    1 − e^{ρT̃_M} − (λ_M/ρ²)(ρT̃_M e^{ρT̃_M} + 1 − e^{ρT̃_M}) = 0.    (13)

Proof. Denote f(λ) = 1 − e^{ρT} − (λ/ρ²)(ρT e^{ρT} + 1 − e^{ρT}) and g(λ) = 2(1 + e^{ρT}) + (λ/ρ²)(2 + ρT + ρT e^{ρT} − 2e^{ρT}). Due to the fact that 1 + x e^x − e^x > 0 for all x < 0, one can obtain 2 + ρT + ρT e^{ρT} − 2e^{ρT} < 0 and ρT e^{ρT} + 1 − e^{ρT} > 0 for all ρ < 0 and T > 0. So f(λ) and g(λ) are both decreasing functions of λ > 0.

1) For given ρ < 0 and λ_M, by Lemma 2 there exists a unique T̄_M > 0 satisfying (12) such that the first inequality in (9) holds. Then the first inequality in (9) also holds for all λ_i, i ∈ M. As ρ ≤ −√λ_M, one gets f(λ_M) > 0 by Lemma 3, so the second inequality in (9) holds for all λ_i, i ∈ M. Therefore, if ρ ≤ −√λ_M, then there exists a unique T̄_M > 0 such that inequalities (9) hold for T ∈ (0, T̄_M) and all i ∈ M.

2) By the definition of T̄_M, the first inequality in (9) also holds for all λ_i, i ∈ M. If ρ > −√λ_M, by Lemma 3 there exists a unique T̃_M > 0 satisfying (13) such that f(λ_M) > 0 for all T ∈ (0, T̃_M). Then the second inequality in (9) also holds for all λ_i, i ∈ M. Therefore, inequalities (9) hold for all T ∈ (0, T_M) with T_M = min{T̄_M, T̃_M}. □

Then we have the following theorem.

Theorem 1. Under linear control protocol (3), the agents in the network are described by (4) and (5). If the sampling period T satisfies

    0 < T < T*,    (14)

where

    T* = T̄_M                       if ρ ≤ −√λ_M,
    T* = T_M = min{T̄_M, T̃_M}      if −√λ_M < ρ < 0,    (15)

λ_M denotes the largest eigenvalue of the Laplacian matrix L, and T̄_M, T̃_M and T_M are defined in Lemma 4, then it holds that

    lim_{k→+∞} Ω^k = w_r w_l^T,    (16)


where

    w_r = (1/√M) 1_M ⊗ [ ρ^{−1} I_{2×2} ]
                        [ 0_{2×2}        ],

    w_l = (1/√M) 1_M ⊗ [ ρ I_{2×2} ]
                        [ −I_{2×2}  ],    (17)

1_M = (1, 1, · · · , 1)^T ∈ R^M, and w_l^T w_r = I_{2×2}.

Proof. From Lemma 4, none of the eigenvalues of A_d − λ_i B_d F, i ≥ 1, lies outside the unit circle centered at the origin when 0 < T < T*. Hence 4M − 2 eigenvalues of Ω are located inside the unit circle and two eigenvalues are equal to 1.

By (5) and the fact that L 1_M = 0, one has

    Ω w_r = (1/√M) (I_{M×M} ⊗ A_d) (1_M ⊗ [ ρ^{−1} I_{2×2} ; 0_{2×2} ]) = w_r.

Thus, the two columns of w_r are right eigenvectors of Ω with respect to the eigenvalue λ = 1. In a similar way, w_l^T Ω = w_l^T, so the two columns of w_l are left eigenvectors of Ω with respect to the eigenvalue λ = 1. Furthermore, as 4M − 2 eigenvalues of Ω are located inside the unit circle, one can check that lim_{k→+∞} Ω^k = w_r w_l^T. It is also easy to check that w_l^T w_r = I_{2×2}. □
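For a numerical illustration of Theorem 1 (an added sketch; it reuses the closed-form matrices of Section 1 and, for concreteness, the 4-node Laplacian of Fig. 1), raising Ω to a large power reproduces the rank-two limit w_r w_l^T of (16)-(17):

import numpy as np

rho, T, M = -0.8, 0.1, 4
e = np.exp(rho * T)
I2, O2 = np.eye(2), np.zeros((2, 2))
Ad = np.kron(np.array([[1.0, (e - 1.0) / rho], [0.0, e]]), I2)
Bd = np.kron(np.array([[-T / rho + (e - 1.0) / rho**2], [(e - 1.0) / rho]]), I2)
F = np.hstack([I2, O2])
L = np.array([[ 1, -1,  0,  0],
              [-1,  3, -1, -1],
              [ 0, -1,  2, -1],
              [ 0, -1, -1,  2]], dtype=float)
Omega = np.kron(np.eye(M), Ad) - np.kron(L, Bd @ F)

ones = np.ones((M, 1))
w_r = np.kron(ones, np.vstack([I2 / rho, O2])) / np.sqrt(M)    # right eigenvectors, eq. (17)
w_l = np.kron(ones, np.vstack([rho * I2, -I2])) / np.sqrt(M)   # left eigenvectors, eq. (17)

P = np.linalg.matrix_power(Omega, 2000)                        # Omega^k for large k
print(np.allclose(P, w_r @ w_l.T, atol=1e-8))                  # expected: True
print(np.allclose(w_l.T @ w_r, I2))                            # w_l^T w_r = I_2, expected: True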

Now we give the main result of our paper.

Theorem 2. Consider dynamic agents (1) under linear control protocol (3) in the digital communication network topology described by G.

1) If the sampling period T satisfies T < T*, where T* is defined in (15), then the agents achieve consensus stability;

2) If T = T*, then the dynamic agents have globally asymptotically stable periodic trajectories;

3) If T > T*, then the dynamic agents have divergent trajectories.

Proof. If 0 < T < T*, by Theorem 1 we have ξ(k) = Ω^k ξ(0) and lim_{k→+∞} Ω^k = w_r w_l^T. It follows that

    lim_{k→+∞} ξ(k) = lim_{k→+∞} Ω^k ξ(0) = w_r w_l^T ξ(0)
        = (1/M) (1_M ⊗ [ ρ^{−1} I_{2×2} ; 0_{2×2} ]) (1_M^T ⊗ [ ρ I_{2×2}  −I_{2×2} ]) ξ(0)
        = (1/M) (1_M 1_M^T ⊗ [ I_{2×2}  −ρ^{−1} I_{2×2} ; 0_{2×2}  0_{2×2} ]) ξ(0),

where ξ(0) = (x_1^T(0), v_1^T(0), · · · , x_M^T(0), v_M^T(0))^T. Therefore, as k → +∞, x_i(k) → (1/M) Σ_{i=1}^{M} [x_i(0) − (1/ρ) v_i(0)], and it is obvious that lim_{k→+∞} v_i(k) = 0, i ∈ {1, 2, · · · , M}. This implies that the agents in the network globally asymptotically achieve consensus stability.

When T = T*, one can show that A_d − λ_M B_d F has two eigenvalues located on the unit circle, and all other eigenvalues of A_d − λ_j B_d F, j ∈ {2, · · · , M − 1}, are located inside the unit circle. Thus all A_d − λ_j B_d F, j ∈ {2, · · · , M − 1}, are Lyapunov stable. Together with Lemma 1, Ω then has only four eigenvalues on the unit circle when T = T*. According to the results of linear control theory, the agents tend to asymptotically periodic trajectories centered at the same fixed point.

If T > T*, then one can show that the matrices A_d − λ_i B_d F, i ∈ {2, · · · , M}, have at least two eigenvalues located outside the unit circle. Then Ω is not Lyapunov stable. Thus, the agents may have divergent trajectories. □

2.2 How does the feedback gain affect the collective behavior of the multi-agent system?

Now we discuss how the negative feedback gain ρ affects the collective behavior of the multi-agent system when the sampling period T is fixed.

Lemma 5. 1) Given λ_i > 0, i ∈ M, there exists a unique ρ_i < 0 such that inequalities (9) hold for all ρ ∈ (−∞, ρ_i);

2) All matrices A_d − λ_i B_d F, i ∈ M, are Schur stable if and only if ρ < ρ* = ρ_M (< 0).

Proof. Denoting g(ρ) = 1 − e^{ρT} − (λ_i/ρ²)(ρT e^{ρT} + 1 − e^{ρT}), one can obtain its derivative

    g′(ρ) = −T e^{ρT} + (2λ_i/ρ³)[1 + e^{ρT}(ρT − 1 − ρ²T²/2)].

Let h(ρ) = 1 + e^{ρT}(ρT − 1 − ρ²T²/2); the derivative of h(ρ) can be written as h′(ρ) = −(ρ²T³/2) e^{ρT} < 0, which implies that h(ρ) is a decreasing function of ρ < 0. For given T > 0 and λ_i > 0, one gets h(0−) = lim_{ρ→0−} h(ρ) = 0 and h(−∞) = lim_{ρ→−∞} h(ρ) = 1. Then 0 < h(ρ) < 1 for all ρ < 0.

Furthermore, since ρ³ < 0, one can obtain that g′(ρ) < 0 for all ρ < 0. We conclude that g(ρ) is a decreasing function of ρ < 0. One also gets

    g(0−) = lim_{ρ→0−} g(ρ) = −λ_i T²/2,    g(−∞) = lim_{ρ→−∞} g(ρ) = 1.

Therefore, there exists a unique ρ_i < 0 such that the inequality g(ρ) > 0 holds for all ρ ∈ (−∞, ρ_i), which implies that the second inequality of (9) holds for all ρ ∈ (−∞, ρ_i). Given a sampling period satisfying T ∈ (0, T̄_M), where T̄_M is defined in (12), it is obvious that the first inequality of (9) holds if ρ < 0. Therefore, for given λ_i > 0 and a proper sampling period T, there exists a unique ρ_i < 0 such that inequalities (9) hold for all ρ ∈ (−∞, ρ_i). Moreover, all matrices A_d − λ_i B_d F, i ∈ {2, · · · , M}, are Schur stable if and only if ρ < ρ* = ρ_M (< 0). □

Theorem 3. Consider dynamic agents (1) in the digital network topology G with sampling period T ∈ (0, T̄_M), where T̄_M is defined in (12). Under linear control protocol (3), the agents in network topology G achieve consensus stability if the feedback gain ρ in the dynamical equation (1) satisfies

    ρ < ρ* < 0,    (18)

where ρ* satisfies

    1 − e^{ρ*T} − (λ_M/(ρ*)²)(1 + ρ*T e^{ρ*T} − e^{ρ*T}) = 0.    (19)

Moreover, if ρ = ρ*, the dynamic agents have globally asymptotically stable periodic trajectories. If 0 > ρ > ρ*, then the dynamic agents have divergent trajectories.

Proof. Based on Lemma 5, the proof follows the same lines as that of Theorem 2 and is omitted to save space. □


3 Simulation

In this section, we study a simple example to show that our results are effective. The network topology of the dynamic agents is described in Fig. 1.

Fig. 1 An undirected graph G with M = 4 nodes

Its Laplacian is

    L = [  1  −1   0   0 ]
        [ −1   3  −1  −1 ]
        [  0  −1   2  −1 ]
        [  0  −1  −1   2 ]

with eigenvalues λ_1 = 0, λ_2 = 1, λ_3 = 3, λ_4 = 4.

First, we consider the case where the gain ρ is fixed and the sampling period T varies. In order to achieve consensus stability, we have to find the upper bound T* of the sampling period. Let q(T) = 2(1 + e^{ρT}) + (λ_4/ρ²)(2 + ρT + ρT e^{ρT} − 2e^{ρT}) and p(T) = 1 − e^{ρT} − (λ_4/ρ²)(1 + ρT e^{ρT} − e^{ρT}). The corresponding sampling period thresholds (refer to Section 2) were obtained by Maple computation. Table 1 shows the thresholds for different feedback gains.

Table 1  Sampling period thresholds T̄_M and T̃_M for different feedback gains

    ρ       T̄_M            T̃_M             T*
    −0.8    2.130588974    0.4239146850    0.4239146850
    −1.9    1.960174297    3.255052884     1.960174297
    −2.5    2.040311827    arbitrary       2.040311827
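The thresholds in Table 1 can be reproduced numerically (this sketch is an addition; it replaces the Maple computation mentioned above with scipy's brentq root finder and uses λ_4 = 4; T_bar and T_tilde stand for T̄_M and T̃_M):

import numpy as np
from scipy.optimize import brentq

lam_M = 4.0   # largest Laplacian eigenvalue of the graph in Fig. 1

def q(T, rho):   # left-hand side of the first inequality in (9); its zero is T_bar, cf. (12)
    e = np.exp(rho * T)
    return 2.0 * (1.0 + e) + (lam_M / rho**2) * (2.0 + rho * T + rho * T * e - 2.0 * e)

def p(T, rho):   # left-hand side of the second inequality in (9); its zero is T_tilde, cf. (13)
    e = np.exp(rho * T)
    return 1.0 - e - (lam_M / rho**2) * (rho * T * e + 1.0 - e)

for rho in (-0.8, -1.9, -2.5):
    T_bar = brentq(q, 1e-9, 100.0, args=(rho,))
    if rho <= -np.sqrt(lam_M):                       # Lemma 3, case 1: p(T) > 0 for every T > 0
        T_tilde, T_star = np.inf, T_bar              # "arbitrary" in Table 1
    else:
        T_tilde = brentq(p, 1e-9, 100.0, args=(rho,))
        T_star = min(T_bar, T_tilde)
    print(rho, T_bar, T_tilde, T_star)
# Expected output matches Table 1, e.g. rho = -0.8 gives T_bar ~ 2.130589 and T_tilde ~ 0.423915.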

We choose ρ = −0.8 and T = 0.1 < T*, which satisfy (14). Fig. 2 demonstrates that, with the initial conditions x_1(0) = (10, −15)^T, x_2(0) = (−6, 13)^T, x_3(0) = (15, −4)^T, x_4(0) = (−15, 8)^T and the initial velocities v_1(0) = (12, 10)^T, v_2(0) = (18, 6)^T, v_3(0) = (16, 18)^T, v_4(0) = (6, 6)^T, the state trajectories of the agents aggregate to x* = (17.25, 13)^T, and the velocities of the agents tend to 0.

Fig. 2  State and velocity trajectories of the agents with T = 0.1 and ρ = −0.8 ((a) state trajectories; (b) velocity trajectories)
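The aggregation point x* = (17.25, 13)^T reported above can be checked directly from the limit obtained in the proof of Theorem 2, x* = (1/M) Σ_i [x_i(0) − v_i(0)/ρ], as in the following short sketch (an illustrative addition):

import numpy as np

rho = -0.8
x0 = np.array([[10.0, -15.0], [-6.0, 13.0], [15.0, -4.0], [-15.0, 8.0]])   # x_i(0)
v0 = np.array([[12.0, 10.0], [18.0, 6.0], [16.0, 18.0], [6.0, 6.0]])       # v_i(0)

x_star = (x0 - v0 / rho).mean(axis=0)
print(x_star)   # expected: [17.25 13.  ]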

Fig. 3  State and velocity trajectories of the agents with T = 0.5 and ρ = −0.8 ((a) state trajectories; (b) velocity trajectories)

But if one chooses ρ = −0.8 and T = 0.5 > T* under the same initial conditions as before, Fig. 3 shows the divergence of the agents. Under the same initial conditions and


ρ = −0.8, the asymptotically stable periodic trajectories of the states of the agents appear when T = 0.4239146850 = T*. Their velocity trajectories also become periodic orbits, as shown in Fig. 4.

Fig. 4  State and velocity trajectories of the agents with T = 0.4239 and ρ = −0.8 ((a) state trajectories; (b) velocity trajectories)

Fig. 5  State and velocity trajectories of the agents with T = 1 and ρ = −2 ((a) state trajectories; (b) velocity trajectories)

Next, we study the behavior of the dynamic agents under a fixed sampling period T and varying ρ. Without loss of generality, letting T = 1, one gets ρ* = −1.513705800. With the same initial conditions as above, when ρ = −2 < ρ*, Fig. 5 shows that the agents aggregate together; meanwhile, their velocities tend to 0.
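The critical gain ρ* = −1.513705800 quoted above can be reproduced by solving (19) for ρ* with T = 1 and λ_M = 4 (an illustrative sketch using scipy's brentq root finder):

import numpy as np
from scipy.optimize import brentq

T, lam_M = 1.0, 4.0

def g(rho):
    """Left-hand side of (19) as a function of rho, with T and lam_M fixed."""
    e = np.exp(rho * T)
    return 1.0 - e - (lam_M / rho**2) * (1.0 + rho * T * e - e)

rho_star = brentq(g, -10.0, -0.1)
print(rho_star)   # expected: approximately -1.513705800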

Fig. 6  State and velocity trajectories of the agents with T = 1 and ρ = −1.513705800 ((a) state trajectories; (b) velocity trajectories)


In Fig. 6, the asymptotically stable periodic state and velocity trajectories of the agents appear when ρ = ρ* = −1.513705800.

4 Conclusion

This paper has studied the consensus behavior of multi-agent systems in a hybrid formulation. We analyzed the conditions on the communication network topology, the sampling period, and the dynamical properties of the agents that ensure consensus behavior. We also discussed the divergence and periodic oscillation behaviors of multi-agent systems and gave the exact algebraic discriminant among them. Although this paper focuses on the consensus behavior over an undirected fixed communication network topology, a similar analysis may be extended to the case of a switching and/or directed communication network topology. Meanwhile, time delay and packet loss should also be considered in the digital communication network. These will be our future research directions.

References

1 Fax J A, Murray R M. Information flow and cooperative control of vehicle formations. IEEE Transactions on Automatic Control, 2004, 49(9): 1465−1476

2 Olfati-Saber R, Murray R M. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 2004, 49(9): 1520−1533

3 Xie G M, Wang L. Consensus control for a class of networks of dynamic agents. International Journal of Robust and Nonlinear Control, 2007, 17(10−11): 941−959

4 Lin P, Jia Y M. Consensus of a class of second-order multi-agent systems with time-delay and jointly-connected topologies. IEEE Transactions on Automatic Control, 2010, 55(3): 778−784

5 Wang L, Xiao F. Finite-time consensus problems for networks of dynamic agents. IEEE Transactions on Automatic Control, 2010, 55(4): 950−955

6 Liu X W, Lu W L, Chen T P. Consensus of multi-agent systems with unbounded time-varying delays. IEEE Transactions on Automatic Control, 2010, 55(10): 2396−2401

7 Hong Y G, Chen G R, Bushnell L. Distributed observers design for leader-following control of multi-agent networks. Automatica, 2008, 44(3): 846−850

8 Olfati-Saber R, Fax J A, Murray R M. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 2007, 95(1): 215−233

9 Hou Z G, Cheng L, Tan M. Decentralized robust adaptive control for the multiagent system consensus problem using neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2009, 39(3): 636−647

10 Meng Z Y, Ren W, Cao Y C, You Z. Leaderless and leader-following consensus with communication and input delays under a directed network topology. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011, 41(1): 75−88

11 Lin Z Y, Francis B, Maggiore M. Necessary and sufficient graphical conditions for formation control of unicycles. IEEE Transactions on Automatic Control, 2005, 50(1): 121−127

12 Yu W W, Chen G R, Cao M, Kurths J. Second-order consensus for multiagent systems with directed topologies and nonlinear dynamics. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2010, 40(3): 881−891

13 Ren W. On consensus algorithms for double-integrator dynamics. IEEE Transactions on Automatic Control, 2008, 53(6): 1503−1509

14 Savkin A V. Coordinated collective motion of groups of autonomous mobile robots: analysis of Vicsek's model. IEEE Transactions on Automatic Control, 2004, 49(6): 981−983

15 Liu Y, Passino K M, Polycarpou M. Stability analysis of one-dimensional asynchronous swarms. IEEE Transactions on Automatic Control, 2003, 48(10): 1848−1854

16 Xiao F, Wang L, Wang A. Consensus problems in discrete-time multiagent systems with fixed topology. Journal of Mathematical Analysis and Applications, 2006, 322(2): 587−598

17 Ren W, Cao Y C. Convergence of sampled-data consensus algorithms for double-integrator dynamics. In: Proceedings of the 47th IEEE Conference on Decision and Control. Cancun, Mexico: IEEE, 2008. 3965−3970

18 Cao Y C, Ren W, Li Y. Distributed discrete-time coordinated tracking with a time-varying reference state and limited communication. Automatica, 2009, 45(5): 1299−1305

19 Gao Y P, Wang L, Xie G M, Wu B. Consensus of multi-agent systems based on sampled-data control. International Journal of Control, 2009, 82(12): 2193−2205

20 Xie G M, Liu H V, Wang L, Jia Y M. Consensus in networked multiagent systems via sampled-data control: fixed topology case. In: Proceedings of the American Control Conference. St. Louis, MO, USA, 2009. 3902−3907

21 Zhu M H, Martínez S. Discrete-time dynamic average consensus. Automatica, 2010, 46(2): 322−329

YU Hong-Wang  Received his master's degree from the Department of Mathematics, East China Normal University, in 2005 and his Ph.D. degree from Shanghai University in 2008. His research interest covers nonlinear control, collective behavior, and coordinated control of multi-agent systems. Corresponding author of this paper. E-mail: [email protected]

ZHENG Yu-Fan  Graduated from East China Normal University and worked as a professor at that university until 2009. From 1996 to 2006, he was a professor at the University of Melbourne. After retiring from that position, he was appointed Honorary Professor of the University of Melbourne and worked for the Victoria Research Laboratory. He is now a professor at Shanghai University. His research interest covers nonlinear control systems, complex systems, and multi-agent systems in networks. E-mail: [email protected]

LI Ding-Gen  Associate professor at Huazhong University of Science and Technology. He received his master's and doctoral degrees in engineering mechanics from Huazhong University of Science and Technology in 2001 and 2004, respectively. He was engaged in postdoctoral research in computer science and technology at Zhejiang University from 2005 to 2006. His research interest covers engine control and clean power control. E-mail: [email protected]