ANALYTICAL APPROACH FOR OPINION DYNAMICS ON SOCIAL NETWORKS
By
Weituo Zhang
A Thesis Submitted to the Graduate
Faculty of Rensselaer Polytechnic Institute
in Partial Fulfillment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
Major Subject: MATHEMATICS
Approved by the Examining Committee:
Chjan Lim, Thesis Adviser
Boleslaw Szymanski, Member
Gyorgy Korniss, Member
Ashwani Kapila, Member
David Isaacson, Member
Rensselaer Polytechnic Institute, Troy, New York
Nov. 2012 (For Graduation Dec. 2012)
© Copyright 2012
by
Weituo Zhang
All Rights Reserved
CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGMENT
ABSTRACT
1. Introduction
   1.1 Opinion Dynamics
   1.2 Naming Game
       1.2.1 2-word Naming Game
   1.3 Features of Naming Game
       1.3.1 Consensus Time
       1.3.2 Committed Agents
   1.4 Coarse-graining Approach
   1.5 Contributions and Organizations
2. Coarse-graining Approach
   2.1 Basic Models
       2.1.1 ODE Model
       2.1.2 Random Walk Model
       2.1.3 Link between the two models
   2.2 A Prototypical Case: Voter Model
       2.2.1 Expected Consensus Times for Voter Model
3. Expected Consensus Time for Naming Game on Complete Graphs
   3.1 The 2-word Naming Game
       3.1.1 Expected Consensus Times for 2-word Naming Game
   3.2 The 2-word Naming Game with External Influence
       3.2.1 Probability of Consensus
   3.3 2-word Naming Game with Committed Agents
       3.3.1 Expected Consensus Time for 2-word NG with Committed Agents
   3.4 Summary
4. Consensus Time Distribution for Naming Game on Complete Graphs
   4.1 Recursive Calculation for Consensus Time Distribution
   4.2 Qualitative Property of the Consensus Time Distribution
       4.2.1 A Tricky Approximation of E[Tc] when q < qc
   4.3 Variance of Consensus Time
   4.4 Path Integral Approximation
   4.5 Summary
5. Naming Game on Sparse Networks
   5.1 Improved Mean Field Approach
   5.2 Naming Game without Committed Agents
       5.2.1 Analysis
       5.2.2 Numerics
   5.3 Naming Game with Committed Agents
       5.3.1 Analysis
       5.3.2 Numerics
   5.4 Effect of the Var(k)
   5.5 Summary
BIBLIOGRAPHY
LIST OF TABLES
1.1 All possible interactions of the Naming Game
1.2 All possible interactions of the Listener-Only Naming Game
2.1 Update events for the 2-word LO-NG with committed agents and the associated random walk transition probabilities
2.2 Update events for the voter model and the associated random walk transition probabilities
3.1 Update events for the 2-word LO-NG and the associated random walk transition probabilities
3.2 Update events for the 2-word LO-NG with central influence and the associated random walk transition probabilities
3.3 Update events for the 2-word LO-NG with committed agents and the associated random walk transition probabilities
LIST OF FIGURES
2.1 Consensus time for voter model on complete graph
3.1 Vector field of the expected drift for 2-word Naming Game on complete graphs
3.2 Consensus time (normalized by N) vs. the network size N for 2-word Naming Game on a complete graph
3.3 The expected time spent on each macrostate before consensus for 2-word NG on a complete graph
3.4 Vector fields of expected drift for NG with different central influence levels
3.5 Expected time spent on each macrostate before consensus T(nA, nB) in the 2-word NG on complete graphs
3.6 Probability of all-A consensus PA for different external influence levels f
3.7 Probability of all-B consensus 1 − PA
3.8 Expected normalized consensus time (τ(~n)/N) as a function of the initial macrostate (nA, nB) on the complete graph with N = 100 nodes
3.9 Expected time spent in each macrostate before consensus T(nA, nB) on the complete graph with N = 100 nodes
3.10 Normalized time spent near the consensus state before consensus as a function of network size N for different fractions of committed agents q
3.11 Normalized time spent near the meta-stable state as a function of network size N for different fractions of committed agents q
4.1 Comparison between the theoretical prediction of the consensus time distribution (red line) and the statistics of the numerical simulation of the Naming Game on an N = 100 complete network
4.2 Rescaled consensus time distributions and their asymptotic behavior
4.3 Exact variance of consensus time Var(Tc) for different N
4.4 Trajectories of macrostates from the Naming Game simulation
4.5 Approximate variance of consensus time
5.1 Evolution of the fractions of A, B and AB nodes
5.2 Trajectories of the NG (no committed agents) solved from the ODE with different <k>, mapped onto the 2D macrostate space
5.3 Comparison of η-consensus times T0.95 of the NG (no committed agents) between simulation and theoretical prediction for different system sizes N and average degrees <k>
5.4 Fraction of B nodes at the stable point (p*B) as a function of the fraction of nodes committed to A (p)
5.5 Normalized consensus time T0.95/N around the tipping point pc = 0.8205 (vertical dashed line) when <k> = 10
5.6 Evolution of η = 1 − q − pA on an ER network with committed agents for different <k>
ACKNOWLEDGMENT
I would like to express my appreciation and gratitude to my advisor, Professor Chjan
Lim, for his assistance, guidance and support. It is an honor and great pleasure to
work with him. I learned a lot from him in all these years.
I would like to acknowledge my committee members: Professors Boleslaw Szymanski, Gyorgy Korniss, Ashwani Kapila and David Isaacson. I benefited a lot from their valuable comments and suggestions.
I appreciate all of the professors, colleagues and staff members of the Department of Mathematical Sciences and the SCNARC Center with whom I have interacted during my PhD studies at Rensselaer Polytechnic Institute. They helped me a lot in my academic work and daily life.
Lastly, I would like to express my deepest gratitude to my family. I thank my parents, Qingdong Zhang and Yueqin Gao, for giving me life and supporting me in all my pursuits, and my wife, Le Yin, for her constant love, encouragement and patience through all these years.
ABSTRACT
Opinion dynamics is a class of agent-based models arising from the study of the origins and evolution of languages that now attracts broad interest. These models consist of a large number of individuals updating their states through pairwise interactions, and they provide powerful tools for simulating and investigating collective behavior in complex systems in sociology, physics and computer science.
The main objective of this thesis is to provide a framework for analyzing opinion dynamics, in particular the 2-word Naming Game (NG). We develop a random walk model which works for complete graphs and highly connected networks without community structure. We also develop an improved ODE model which works for sparse networks without community structure. Both models are based on a so-called coarse-graining approach which requires an underlying mean-field assumption. We derive analytical results from these models and confirm them by numerical simulations of the NG dynamics.
The thesis is organized as follows. Chapter 2 introduces two basic models based on the coarse-graining approach, the ODE model and the random walk model, and analyzes a prototypical opinion dynamics model, the voter model, as a random walk. Chapter 3 applies the random walk model to the 2-word Naming Game on complete graphs and calculates the expectation of the total consensus time Tc and the expected time spent on each macrostate before consensus, T(nA, nB), in three cases: (i) the purely symmetric case; (ii) with biased central influence; (iii) with committed agents. The tipping point of the committed fraction and the different dynamical behaviors below and above it are also discussed. Chapter 4 analyzes the distribution of the consensus time Tc of the 2-word Naming Game on complete graphs in several ways, calculates the variance of Tc through a martingale approach, and provides a path integral approximation for the variance of Tc. Chapter 5 develops an improved ODE model using the homogeneous pairwise assumption, a variation of the mean-field assumption, which works for sparse networks with narrow degree distributions and no community structure. This model shows the dependence of the dynamical behavior on the average degree <k>.
CHAPTER 1
Introduction
This chapter introduces the background, the basic models, and the organization of the thesis.
1.1 Opinion Dynamics
Opinion dynamics is a class of agent-based models arising from the study of the origins and evolution of languages [37] which now attracts broad interest [1, 2, 7, 8]. These models typically consist of a large number of individuals, each of which is assigned a state called an opinion and updates that opinion through interactions with its neighbors. The opinion here is represented by a word or a set of words chosen from a dictionary. Classical opinion dynamics models include the voter model [19, 53], the majority rule model [5], threshold models [40], the Axelrod model [67], and the Naming Game [23, 24, 29]. Recently, models in which the network topology and the opinions change at the same time have also been studied [58, 59]. We mainly focus on the Naming Game in this thesis. In these opinion dynamics models, the individual behavior follows a very simple rule and the effect of the network topology enters through the neighbor interactions, so these models provide powerful tools to simulate and investigate collective behavior in complex systems in sociology, physics and computer science.
1.2 Naming Game
The Naming Game (NG) [23, 24, 29] is a typical model of opinion dynamics. The motivation is to consider a group of agents trying to name the same object. Each agent can store several different names for the object in its name list. The agents can learn new names from their neighbors. They can also confirm a name as a "right" one if they find their neighbors using the same name. The detailed algorithm of the Naming Game is described as follows.
A network contains N nodes labeled from 1 to N. Each node i has a word list chosen from a finite-size dictionary Name = {A, B, C, ...}. Γ = {γk}, which consists of all non-empty subsets γk of Name, represents all possible "word lists"/"opinions" of a node. Each node i is assigned a word list si ∈ Γ. Therefore the network state S (with one word list assigned to each node) is given by S = {si}, i = 1, 2, ..., N.
In each time step, the network updates its state S using the following rules:
1. Two neighboring agents i, j are randomly picked, one as the speaker and one as the listener.
2. The speaker randomly picks a name X from its name list si and sends it as a message to the listener.
3. If the listener does not have the name X in its name list sj, the listener adds the name to its name list.
4. If the listener has the name X in its name list sj, it is called a successful communication, and both agents delete all names in their lists except X.
In step 1 of the above rules, there are several different ways of randomly picking i and j. First, the direct way: pick a "speaker" first and then pick a "listener" among its neighbors. Second, the reverse way: pick a "listener" first and then pick a "speaker" among its neighbors. Finally, the neutral way: pick an edge first and then assign its two ends as "listener" and "speaker" with equal probability. One should note that in the direct and reverse ways the second node picked has a higher average degree than the first one [66], while in the neutral way the "listener" and "speaker" occupy symmetric roles. Therefore, in most social networks, for example power-law networks, these three ways lead to different dynamical behaviors [22]. However, for the complete graphs and the sparse networks with narrow degree distributions considered in this thesis, these three ways are equivalent.
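As a concrete illustration, the update rules above can be written as a short simulation. The sketch below is not from the thesis: it assumes a complete graph (where the three picking schemes coincide), and the function names are illustrative.

```python
import random

def ng_step(lists, rng):
    """One interaction of the original Naming Game on a complete graph."""
    N = len(lists)
    i, j = rng.sample(range(N), 2)        # rule 1: speaker i, listener j
    word = rng.choice(sorted(lists[i]))   # rule 2: speaker utters a name
    if word in lists[j]:                  # rule 4: success, both collapse to X
        lists[i] = {word}
        lists[j] = {word}
    else:                                 # rule 3: listener learns the name
        lists[j] = lists[j] | {word}

def consensus_time(N, seed=0):
    """Interactions until every agent holds the same single name,
    starting from a random A/B assignment."""
    rng = random.Random(seed)
    lists = [{rng.choice("AB")} for _ in range(N)]
    steps = 0
    while not (len(lists[0]) == 1 and all(s == lists[0] for s in lists)):
        ng_step(lists, rng)
        steps += 1
    return steps
```

For moderate N this typically converges after a few thousand interactions, in line with the fast agreement discussed in Sec. 1.3.1.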
1.2.1 2-word Naming Game
In this thesis, we focus on a special case, the 2-word Naming Game, in which the dictionary contains only two names, i.e. |Name| = 2. There are several reasons to do so. First, no matter how many names there are in the initial network state, a 2-word stage is inevitable before consensus, and the time spent in this stage dominates the total consensus time. Second, the adoption of two competing opinions is a very common scenario in real-world social network dynamics, such as the opinions in a political election, on same-sex marriage, etc.
Denoting the two names by A and B, all possible interactions of the 2-word Naming Game are listed in Table 1.1.

Table 1.1: All possible interactions of the Naming Game

before (speaker-listener)   message   after (speaker-listener)
A-A                         A         A-A
A-B                         A         A-AB
A-AB                        A         A-A
B-A                         B         B-AB
B-B                         B         B-B
B-AB                        B         B-B
AB-A                        A         A-A
AB-A                        B         AB-AB
AB-B                        A         AB-AB
AB-B                        B         B-B
AB-AB                       A         A-A
AB-AB                       B         B-B
The rules listed in Table 1.1 are called the Original Naming Game [23]. A variant of it is the so-called Listener-Only Naming Game (LO-NG) [42], in which only the listener changes its state, whether or not the communication is successful. Compared with the Original Naming Game, the LO-NG brings considerable convenience in analytical studies, as will be seen later. All possible interactions of the 2-word LO-NG are listed in Table 1.2.
There is another variation of the NG, the Speaker-Only Naming Game (SO-NG) [38], in which, in contrast with the LO-NG, only the speaker changes its state when the communication is successful.
Table 1.2: All possible interactions of the Listener-Only Naming Game

before (speaker-listener)   message   after (speaker-listener)
A-A                         A         A-A
A-B                         A         A-AB
A-AB                        A         A-A
B-A                         B         B-AB
B-B                         B         B-B
B-AB                        B         B-B
AB-A                        A         AB-A
AB-A                        B         AB-AB
AB-B                        A         AB-AB
AB-B                        B         AB-B
AB-AB                       A         AB-A
AB-AB                       B         AB-B
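The listener-side rule of Table 1.2 can be condensed into a single function: on a successful hearing the listener collapses to the spoken word, otherwise it adds the word. A minimal sketch (the function name is illustrative, not from the thesis):

```python
def lo_ng_listener_update(listener, word):
    """Listener-Only NG: only the listener's name list changes.
    Success (word already known) -> collapse to {word}; failure -> add it."""
    if word in listener:
        return {word}
    return listener | {word}

# reproduce the listener outcomes of Table 1.2
assert lo_ng_listener_update({"A"}, "A") == {"A"}
assert lo_ng_listener_update({"A"}, "B") == {"A", "B"}
assert lo_ng_listener_update({"A", "B"}, "A") == {"A"}
assert lo_ng_listener_update({"A", "B"}, "B") == {"B"}
```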
1.3 Features of Naming Game
1.3.1 Consensus Time
The consensus is a state of network in which all agents share the same single
name opinion. An interesting phenomenon in opinion dynamics is that the agents
can achieve a global consensus through local interactions without a central coordi-
nator. The main object of observation here is Tc, the time(the number of individual
interactions) needed to achieve global consensus. In previous literature, expected
value of the consensus time is extensively discussed [16, 19, 20]. Networks topology,
such as scale-free [60] and small world [61], have significant impact on the expected
consensus time [62, 63, 65, 64, 17].
A significant difference between the NG and other social influence models [3, 4, 5] such as the voter models [45] is that in the NG an agent is allowed to hold more than one opinion before switching to the other opinion. Even in some multi-dimensional models, such as the Axelrod model [67], an agent still cannot hold two competing opinions in one dimension at the same time. This changes the expected time to consensus starting from uniform initial conditions, even in the perfectly symmetric form of the models. Numerical studies in [39] have shown that for the symmetric NG on a complete graph, starting from the state where each agent has one of the two opinions with equal probability, the system achieves consensus in order O(ln N) time, as compared to order O(N) time for the voter models. Here N is the number of nodes in the network, and a unit of time consists of N speaker-listener interactions.
1.3.2 Committed Agents
The committed agents are nodes that influence their neighbors through the usual interaction rules but never change their own states. Starting from an initial state in which a small fraction q of the nodes are committed agents holding opinion A and all other nodes are uncommitted and hold opinion B, the only absorbing state of the dynamics is the consensus state in which all nodes adopt opinion A.
The questions of interest are: can the minority of committed agents persuade all the uncommitted agents to achieve global consensus, and how does the consensus time depend on the fraction of committed agents?
Previous studies [42, 41] find that there exists a critical value qc for the committed fraction q. When the committed fraction grows across this critical value, there is a sharp transition in the total consensus time Tc, and the corresponding mean-field system of two coupled nonlinear differential equations [41] undergoes a saddle-node bifurcation at q = qc, in which the saddle point (symmetric in the phase plane in the case with no committed agents) merges with a node to form a new equilibrium point of saddle-node type [50, 51]. For the complete graph, qc ≈ 0.0979. In the subcritical case q < qc, the mean consensus time Tc ~ e^N, while in the supercritical case q > qc, Tc ~ ln N.
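The tipping point can be recovered numerically from the mean-field ODEs of [41] introduced in Sec. 2.1.1: below qc a trajectory started from the all-B state settles at a B-majority stable point, while above qc it escapes to the all-A consensus. The bisection sketch below is an illustration, not code from the thesis; the integration horizon and tolerances are assumptions.

```python
def ng_drift(pA, pB, q):
    """Mean-field drift of the original NG with committed fraction q
    (the two coupled ODEs of [41]; pAB = 1 - q - pA - pB)."""
    pAB = 1.0 - q - pA - pB
    dA = -pA * pB + pAB**2 + pAB * pA + 1.5 * q * pAB
    dB = -pA * pB + pAB**2 + pAB * pB - q * pB
    return dA, dB

def collapses(q, T=600.0, dt=0.01):
    """Euler-integrate from the all-B start; True if pB has collapsed."""
    pA, pB = 0.0, 1.0 - q
    for _ in range(int(T / dt)):
        dA, dB = ng_drift(pA, pB, q)
        pA += dt * dA
        pB += dt * dB
    return pB < 0.1

# Bisect on q between a clearly subcritical and a clearly
# supercritical value; the estimate should land near qc ≈ 0.0979.
lo, hi = 0.05, 0.15
for _ in range(12):
    mid = 0.5 * (lo + hi)
    if collapses(mid):
        hi = mid
    else:
        lo = mid
qc_estimate = 0.5 * (lo + hi)
```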
1.4 Coarse-graining Approach
In opinion dynamics, the network state consists of the states of all nodes. For example, in the Naming Game the network state is fully described by S = {si}, i = 1, 2, ..., N (Sec. 1.2). The network state containing all detailed information is called the microstate in the statistical physics context. However, the microstate space is very high-dimensional, which naturally leads to a coarse-graining approach [56, 57]: lumping many microstates sharing the same macroscopic feature into one macrostate. The macrostate is a much lower-dimensional variable and is more convenient for mathematical analysis.
The selection of the macrostate for opinion dynamics is not unique, but when considering complete graphs or sparse networks without community structure, the most natural macrostate is the mean field: the fractions/numbers of nodes holding each type of opinion. These two selections of macrostate, the fractions and the numbers of the different types of nodes, lead to a continuous ODE model and a random walk model, respectively. Both models are introduced in Chapter 2.
1.5 Contributions and Organizations
The main objective of this thesis is to provide a framework for analyzing opinion dynamics, in particular the 2-word Naming Game. We develop a random walk model which works for complete graphs and highly connected networks without community structure. We also develop an improved ODE model which works for sparse networks without community structure. Both models are based on a so-called coarse-graining approach which requires an underlying mean-field assumption. We derive analytical results from these models and confirm them by numerical simulations of the NG dynamics.
The thesis is organized as follows. Chapter 2 introduces two basic models based on the coarse-graining approach, the ODE model and the random walk model, and analyzes a prototypical opinion dynamics model, the voter model, as a random walk. Chapter 3 applies the random walk model to the 2-word Naming Game on complete graphs and calculates the expectation of the total consensus time Tc and the expected time spent on each macrostate before consensus, T(nA, nB), in three cases: (i) the purely symmetric case; (ii) with biased central influence; (iii) with committed agents. The tipping point of the committed fraction and the different dynamical behaviors below and above it are also discussed. Chapter 4 analyzes the distribution of the consensus time Tc of the 2-word Naming Game on complete graphs in several ways, calculates the variance of Tc through a martingale approach, and provides a path integral approximation for the variance of Tc. Chapter 5 develops an improved ODE model using the homogeneous pairwise assumption, a variation of the mean-field assumption, which works for sparse networks with narrow degree distributions and no community structure. This model shows the dependence of the dynamical behavior on the average degree <k>.
CHAPTER 2
Coarse-graining Approach
This chapter introduces the analytic framework based on the coarse-graining approach. Sec. 2.1 introduces the two basic models used in this thesis; both work for opinion dynamics on complete graphs. Sec. 2.2 develops the analytical framework which maps the opinion dynamics onto a random walk and derives, in linear-system form, the equations for the expected total consensus time and the expected time spent in each macrostate, which can be solved in closed form for the voter model.
2.1 Basic Models
In this section, we briefly introduce the ODE model and the random walk model, which will be derived and discussed in later chapters.
2.1.1 ODE Model
In the ODE model [41], pA, pB and pAB denote the fractions of nodes having the single name A, the single name B, or both A and B in their name lists, respectively. The model also includes committed agents, which have only the name A in their name lists and never change their states; their fraction is denoted by q.
Since pA + pB + pAB = 1 − q, when q is given, (pA, pB) is enough to describe a macrostate of the whole network. Under the typical mean-field assumption, the evolution of the macrostate (pA, pB) in the original NG is given by:

dpA/dt = −pA pB + pAB^2 + pAB pA + (3/2) q pAB
dpB/dt = −pA pB + pAB^2 + pAB pB − q pB
In this thesis, we mainly focus on the LO-NG, which gives a slightly different version of the ODEs:
dpA/dt = −pA pB + (1/2) pAB^2 + (1/2) pAB pA + q pAB
dpB/dt = −pA pB + (1/2) pAB^2 + (1/2) pAB pB − q pB     (2.1)
This model is very cheap to compute and correctly predicts some behaviors of the expected consensus time. To fully understand its predictions, one should distinguish two concepts. One is Tc, the consensus time: the first absorption time of the consensus state in which all nodes adopt the single name A. The other is Tη, the first time at which the fraction of nodes with the single name A crosses the threshold η. The ODE model predicts the latter for any threshold η, and can only estimate the former by setting η = 1 − 1/N. Since the ODE model ignores the randomness of the dynamics, it cannot investigate the distribution of Tc or Tη; it also fails in the committed-agent case q < qc, where the randomness is essential.
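The role of Tη can be illustrated by integrating Eq. 2.1 forward and recording the first crossing time. The Euler sketch below is an illustration only; the step size, threshold convention (single-A fraction including the committed nodes) and names are assumptions, not from the thesis.

```python
def lo_ng_drift(pA, pB, q):
    """Right-hand side of the LO-NG mean-field ODEs, Eq. 2.1."""
    pAB = 1.0 - q - pA - pB
    dA = -pA * pB + 0.5 * pAB**2 + 0.5 * pAB * pA + q * pAB
    dB = -pA * pB + 0.5 * pAB**2 + 0.5 * pAB * pB - q * pB
    return dA, dB

def eta_crossing_time(q, eta=0.95, dt=0.01, t_max=1000.0):
    """First normalized time at which the fraction of nodes holding only
    the name A (committed included) crosses eta; None if never crossed."""
    pA, pB = 0.0, 1.0 - q       # all uncommitted nodes start with B
    t = 0.0
    while t < t_max:
        if pA + q >= eta:
            return t
        dA, dB = lo_ng_drift(pA, pB, q)
        pA += dt * dA
        pB += dt * dB
        t += dt
    return None
```

With q = 0.2, well above the tipping point, the crossing time is finite; with q = 0.02 the trajectory is trapped at the mixed stable point and the function returns None, illustrating why the ODE model fails for q < qc.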
2.1.2 Random Walk Model
In the random walk model [42], nA, nB and nAB denote the numbers of nodes with the single name A, the single name B, or both A and B in their name lists, respectively, and nq denotes the number of nodes committed to the name A. The macrostate of the network is given by ~n = (nA, nB). Since nA, nB ≥ 0 and nA + nB ≤ N, all possible macrostates form a lattice in a triangular domain D. Therefore ~n_t, the macrostate at time step t, is a random walk in D:

~n_T = ~n_0 + Σ_{t=0}^{T−1} Δ~n(~n(t)),     (2.2)

where Δ~n(~n(t)) is a random vector representing the change of the macrostate ~n at time step t. All possible values and the corresponding probabilities of the random vector Δ~n(nA, nB) are given in Table 2.1.
The random walk model is more expensive, but it can analytically produce the expectation and distribution of Tc rather than Tη for a finite complete network with N nodes, and it also works in the committed-agent case q < qc.
Table 2.1: Update events for the 2-word LO-NG with committed agents and the associated random walk transition probabilities

speaker    listener   event       Δ~n       probability
B or AB    A          A → AB      (−1, 0)   P(A−) = (nA − nq)(N − nA + nB)/(2N^2)
A or AB    AB         AB → A      (1, 0)    P(A+) = (N − nA − nB)(N + nA − nB)/(2N^2)
A or AB    B          B → AB      (0, −1)   P(B−) = nB(N + nA − nB)/(2N^2)
B or AB    AB         AB → B      (0, 1)    P(B+) = (N − nA − nB)(N − nA + nB)/(2N^2)
any        A or B     unchanged   (0, 0)    P0 = (nA + nB)/(2N) + (nA − nB)^2/(2N^2) + nq(N − nA + nB)/(2N^2)
2.1.3 Link between the two models
There is a direct link between the random walk model of Eq. 2.2 and the ODE model of Eq. 2.1: the latter can be derived from the former by simply taking the expectation. Denote the changes of the fractions of nodes by the random vector Δ~p = (ΔpA, ΔpB). Considering the expectations of ΔpA and ΔpB, we have

dpA/dt = E[ΔpA(~p)]/dt = (1/(N dt)) E[ΔnA]
dpB/dt = E[ΔpB(~p)]/dt = (1/(N dt)) E[ΔnB]

Let dt = 1/N (t is therefore the time normalized by the system size N); calculating E[ΔnA] and E[ΔnB] according to Table 2.1, the above equations become Eq. 2.1. In this sense, the ODE model is the drift part of the random walk model.
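This link can be checked directly: for any macrostate, P(A+) − P(A−) computed from Table 2.1 equals the right-hand side of the LO-NG equation for dpA/dt, and similarly for B. A small numerical check (the particular macrostate values are arbitrary illustrations):

```python
# Illustrative macrostate on a complete graph
N, nq = 1000, 100            # network size and number of committed-A nodes
nA, nB = 400, 300            # nA counts the committed nodes as well

pA = (nA - nq) / N           # uncommitted single-A fraction (as in Eq. 2.1)
pB = nB / N
q = nq / N
pAB = 1.0 - q - pA - pB

# Transition probabilities from Table 2.1
P_A_plus = (N - nA - nB) * (N + nA - nB) / (2 * N**2)
P_A_minus = (nA - nq) * (N - nA + nB) / (2 * N**2)
P_B_plus = (N - nA - nB) * (N - nA + nB) / (2 * N**2)
P_B_minus = nB * (N + nA - nB) / (2 * N**2)

# With dt = 1/N, the per-unit-time drifts of pA and pB are just the
# per-step expectations of Delta nA and Delta nB
drift_A = P_A_plus - P_A_minus
drift_B = P_B_plus - P_B_minus

# Right-hand sides of the LO-NG ODEs (Eq. 2.1)
rhs_A = -pA * pB + 0.5 * pAB**2 + 0.5 * pAB * pA + q * pAB
rhs_B = -pA * pB + 0.5 * pAB**2 + 0.5 * pAB * pB - q * pB

assert abs(drift_A - rhs_A) < 1e-12
assert abs(drift_B - rhs_B) < 1e-12
```

The agreement is exact (a polynomial identity), not merely a large-N approximation.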
2.2 A Prototypical Case: Voter Model
In this section, we consider a well-studied prototypical model of opinion formation, the voter model [30, 1, 12]. In this model, the evolution of a suitably defined global variable can be easily mapped onto a random-walk problem. Further, the solution of this model is known in all dimensions, including on the complete graph, so our method can be easily tested.
Given a network of N nodes, with each node in a state chosen from the set Name, the voter model is defined by the following update rule:
1. Two neighboring agents i, j are randomly picked, one as the speaker and one as the listener.
2. The listener then changes its state to that of the speaker.
If the set of nodes in the network is denoted by S (N = |S|), then the above rule defines a Markov chain on the N-dimensional space Γ^S. Here Γ, as defined in Sec. 1.2, is the set of all possible opinions; for the voter model, Γ is simply Name instead of the power set of Name.
Under the mean-field assumption, which is justifiable when dealing with complete graphs, one coarse-graining approach is to take all network states in Γ^S corresponding to the same mean field ~n = (n_1, n_2, ..., n_M) as one macrostate, where n_{γk} denotes the number of nodes in state γk ∈ Γ and M = |Name|. The coarse-grained random process therefore takes values in an (M−1)-dimensional hyperplane (since Σ_{i=1}^{M} n_i = N) of the M-dimensional space. When M = 2, let Name = {A, B}; the coarse-grained process lives on a segment (since nA + nB = N with nA, nB ≥ 0), and every macrostate can be represented by a single discrete variable nA = 0, 1, ..., N.
In each time step, ΔnA, the change of nA, is a random variable depending only on the current macrostate nA. Its possible values and the corresponding events and probabilities are listed in Table 2.2.

Table 2.2: Update events for the voter model and the associated random walk transition probabilities

speaker   listener   event       ΔnA   probability
A         B          B → A        1    (1 − nA/N) · nA/(N−1)
B         A          A → B       −1    (nA/N) · (N − nA)/(N−1)
A         A          unchanged    0    (nA/N) · (nA − 1)/(N−1)
B         B          unchanged    0    (1 − nA/N) · (N − 1 − nA)/(N−1)
Hence, the coarse-grained process of the voter model can be mapped onto a random walk in one dimension,

nA(T) = nA(0) + Σ_{t=0}^{T−1} ΔnA(nA(t)),     (2.3)

where nA = 0 and nA = N are the two absorbing states.
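The one-dimensional walk of Eq. 2.3 can also be simulated directly from the probabilities of Table 2.2. The sketch below is an illustration under assumed run counts and names, not code from the thesis:

```python
import math
import random

def voter_walk_time(N, rng):
    """Steps until absorption of the coarse-grained voter walk, Eq. 2.3,
    started from the symmetric macrostate nA = N/2."""
    nA = N // 2
    steps = 0
    while 0 < nA < N:
        # From Table 2.2: P(+1) = P(-1) = nA (N - nA) / (N (N - 1))
        p_move = nA * (N - nA) / (N * (N - 1))
        r = rng.random()
        if r < p_move:
            nA += 1
        elif r < 2 * p_move:
            nA -= 1
        # otherwise a lazy (unchanged) step
        steps += 1
    return steps

N = 40
rng = random.Random(0)
runs = 200
mean_tc = sum(voter_walk_time(N, rng) for _ in range(runs)) / runs
approx = math.log(2) * N * (N - 1) + N   # first-step-analysis estimate (Sec. 2.2.1)
```

The sample mean should fall close to the ln(2)N(N−1) + N estimate derived in the next subsection.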
2.2.1 Expected Consensus Times for Voter Model
The consensus time Tc is the most interesting feature of the opinion dynamics. We first analyze the expectation of Tc or, in the random walk context, the expected time before absorption, by first-step analysis [31]. The idea is based on a straightforward statement: the absorption time can be decomposed into two parts, the time steps before and after leaving the current macrostate. The former is called the residence time tr(nA) of the given macrostate nA. We denote the expected time before absorption and the expected residence time starting from a specific macrostate nA on a complete graph with N nodes by τ(nA|N) and t(nA|N), respectively, or simply τ(nA) and t(nA) when this is not ambiguous. Taking expectations of all the random variables mentioned in the statement above, we get the following equations:

τ(nA) = E[time before leaving nA] + E[time before absorption and after leaving nA]
      = E[tr(nA)] + E[ E[time before absorption and after leaving nA | entering new macrostate nA'] ]
      = t(nA) + Σ_{i=−1,1} P(ΔnA(nA) = i) τ(nA + i) / (1 − P(ΔnA(nA) = 0))
      = t(nA) + (1/2)(τ(nA + 1) + τ(nA − 1)),     (2.4)
where t(nA) is given by the following argument:

t(nA) = Σ_{k=1}^{∞} k P(tr(nA) = k)
      = Σ_{k=1}^{∞} P(tr(nA) ≥ k)
      = Σ_{k=1}^{∞} P(ΔnA(nA) = 0)^{k−1}
      = 1/(1 − P(ΔnA(nA) = 0))
      = N(N − 1)/(2 nA (N − nA)).
Defining ~τ = (τ(1), ..., τ(N − 1))^T, ~t = (t(1), ..., t(N − 1))^T and using the boundary conditions τ(0) = τ(N) = 0, we rewrite Eq. (2.4) as a linear system:

~τ = Q~τ + ~t,     (2.5)

where Q is the (N − 1) × (N − 1) tridiagonal matrix with zeros on the diagonal and 1/2 on the sub- and super-diagonals:

    [  0   1/2               ]
    [ 1/2   0   1/2          ]
Q = [       ..   ..   ..     ]     (2.6)
    [           1/2   0  1/2 ]
    [                1/2   0 ]

τ(nA|N) can be solved exactly from this equation in terms of nA and N.
Next, we derive the expected time spent on each macrostate. Define δ^i_N as an N-entry column vector in which only the ith entry is non-zero, with value 1. We have:

τ(i|N) = (δ^i_{N−1})^T ~τ = (δ^i_{N−1})^T (I − Q)^{−1} ~t = ~u · ~t,     (2.7)

Since the above equation is valid for arbitrary random walks with different Q's and ~t's, Eq. 2.7 requires (δ^i_{N−1})^T (I − Q)^{−1} = ~u^T, i.e. ~u = (u_1, u_2, ..., u_j, ..., u_{N−1})^T is the solution of the equation:

(I − Q)^T ~u = δ^i_{N−1}.     (2.8)
As the total consensus time starting from macrostate i is

τ(i|N) = ~u · ~t = Σ_{j=1}^{N−1} u_j t_j,

u_j can be understood as the average number of visits (counting entering and leaving a macrostate as one visit) to the macrostate nA = j before absorption, starting from the macrostate nA = i. Moreover, the expected number of time steps (including self-repeating steps) spent on the macrostate nA = j before absorption is given by:

T_j = u_j · t_j,     (2.9)

where u_j, t_j are the elements of ~u and ~t.
It is quite easy to show that τ(n|N) is monotonic with respect to N. Therefore, in order to obtain the large-N behavior of τ(nA|N), we focus on the case where N is even and always assume nA(0) = N/2. In this case, ~u = (1, 2, 3, ..., N/2, ..., 3, 2, 1)^T and

τ(N/2|N) = ~u · ~t
         = 2 Σ_{k=1}^{N/2−1} k · N(N − 1)/(2k(N − k)) + (N/2) t(N/2)
         = N(N − 1) Σ_{k=1}^{N/2−1} 1/(N − k) + (N − 1)
         ≈ N(N − 1) ∫_{N/2}^{N−1} dx/x + N
         ≈ ln(2) N(N − 1) + N.
As is the convention for agent-based models in statistical physics, unit time is
assumed to consist of N update events. Following this convention, the normalized
consensus time τ(N/2|N)/N is ln(2)N to leading order. This agrees with the
scaling behavior obtained previously for the characteristic (relaxation) time of the
voter model through a Fokker-Planck approach [27, 28] and through simulations [19].
Figure 2.1 shows the comparison of the consensus time from numerical
Figure 2.1: Consensus time for the voter model on the complete graph. The vertical axis represents the consensus time (normalized by N); the horizontal axis is the number of nodes in the complete graph. Each star is an average of 10 runs of numerical simulations of the voter model, and the solid straight line consists of the solutions of the linear equation for each N value.
simulations (averaged over 50 runs for each N) with the analytical results.

In this section, we have developed the first-step analysis technique for the voter
model and derived Eqs. (2.5), (2.8) and (2.9) for calculating the expected consensus
time and the expected time spent on each macrostate before consensus. All these
techniques and equations can be easily extended to many opinion dynamics models,
including the 2-word Naming Game.
CHAPTER 3
Expected Consensus Time for the Naming Game on
Complete Graphs
In many recent examples, unlike in low-dimensional spatially-embedded systems
with short-range connections[30], the collective dynamics in sparse random graphs
(with no community structure) exhibit scaling properties very similar to those ob-
served on the complete graph. Therefore, studying fundamental agreement processes
on the complete graph can yield insights for the ordering process in more realistic
sparse random networks. In this and the next chapter, we focus on the Naming
Game on the complete graph; we discuss the effect of network topology in
Chapter 5.
In this chapter, we follow the coarse-graining approach to map models for
NG on the complete graph to an associated random walk problem which yields
asymptotically exact consensus times for large but finite complete graphs of size
N , and test our framework through known results for the spontaneous agreement
process (without any influencing) [27, 28, 19, 24, 29]. In Sec. 3.1, we employ the
method of Chapter 2 for the 2-word Listener-Only Naming Game (LO-NG) [29]. Then,
we investigate models for social influencing: we present new asymptotic results
and discuss the behavior of the 2-word NG under the influence of an external field
(Sec. 3.2) and in the presence of committed agents (Sec. 3.3). A brief summary is
given in Sec. 3.4.
3.1 The 2-word Naming Game
The Naming Game (NG) (defined in Sec. 1.2) is somewhat more complicated
than the voter model because each agent can process multiple names simultaneously.
Let Name be the given set of all possible names, with M = |Name|; the state of
each node is then a member of the power set Γ ⊂ 2^Name (the set of all subsets of
Name) rather than of Name itself.
We adopt the LO-NG rule in our analysis, which is slightly different from the
Table 3.1: Update events for the 2-word LO-NG and the associated random walk transition probabilities

    speaker      listener   event       Δ~n(nA,nB)   probability
    B or AB      A          A → AB      (−1, 0)      P(A−) = nA(N−nA+nB)/2N²
    A or AB      AB         AB → A      (1, 0)       P(A+) = (N−nA−nB)(N+nA−nB)/2N²
    A or AB      B          B → AB      (0, −1)      P(B−) = nB(N+nA−nB)/2N²
    B or AB      AB         AB → B      (0, 1)       P(B+) = (N−nA−nB)(N−nA+nB)/2N²
    A, B or AB   A or B     unchanged   (0, 0)       P0 = (nA+nB)/2N + (nA−nB)²/2N²
original NG [23] in that, upon “successful” communication, only the listener changes
its state [32]. This restriction eliminates steps of size 2 in the associated random
walk model, making it easier to apply the method developed in Sec. 2.1.1, without
changing the universal features of the system's dynamics compared to the original
NG [32]. Furthermore, we consider the version of the NG with only two words
[29].
The coarse-graining approach mentioned in Chapter 2 merges all network
states labeled by the same vector ~n = (n_i)_{i∈Γ}, Γ = 2^Name \ {∅}, into a macrostate.
Here n_i is the number of nodes in state i and |Γ| = 2^M − 1. The coarse-grained
random process takes values in the (2^M − 2)-dimensional hyperplane Σ_{i∈Γ} n_i = N,
thus we can map the coarse-grained process into a (2^M − 2)-dimensional random
walk. In the case of the 2-word NG, M = 2, so assuming Name = {A, B},
~n = (n_{A}, n_{B}, n_{A,B}), or (nA, nB, nAB) for short, where nA, nB, and nAB are
the numbers of individuals carrying word A, word B, or both, respectively. Since
nAB = N − nA − nB, we drop one redundant dimension and take the 2-d vector
~n = (nA, nB) to represent the macrostate. In each time step, the change of
macrostate Δ~n has five possible values. For all these possible values of Δ~n, the
corresponding events and probabilities are listed in Table 3.1.
In Table 3.1, the event A → AB, for example, means the listener node changes its
state from A to AB. Analogous to the procedure followed for the voter model, we
map the coarse-grained 2-word NG to a 2-d random walk,

    ~n(T) = ~n(0) + Σ_{t=0}^{T−1} Δ~n(~n(t)) .                                     (3.1)

Here the domain D of ~n is the set containing all integer grid points bounded by the
lines nA = 0, nB = 0 and nA + nB = N, while ~n = (0, N), (N, 0) are two absorbing
states.
The expected drift E[Δ~n] is given by

    E[Δ~n(~n)] = Σ_i i · P(Δ~n(~n) = i) = (P(A+) − P(A−), P(B+) − P(B−)) .
The vector field of the expected drifts starting from different macrostates,
E[Δ~n(~n)], is plotted in Fig. 3.1 for N = 10. As shown in Fig. 3.1, on average,
~n(t) quickly approaches a stable trajectory and then slowly converges to one of the
consensus states. In other words, in contrast to the unbiased random walk of the
voter model, the 2-word NG is “attracted” to the consensus state after a spontaneous
symmetry breaking. So it is reasonable to expect that the 2-word NG achieves
consensus much faster than the voter model, both starting from the nA = nB = N/2
initial configuration on the complete graph.
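The entries of Table 3.1 translate directly into code. The sketch below (Python; the function names are ours, not from the text) returns the five transition probabilities of the coarse-grained walk and the expected drift plotted in Fig. 3.1; checking that the five probabilities sum to one is a quick consistency test of the table:

```python
def lo_ng_probs(nA, nB, N):
    """Transition probabilities of the coarse-grained 2-word LO-NG (Table 3.1)."""
    nAB = N - nA - nB
    pAminus = nA * (N - nA + nB) / (2 * N**2)        # listener A -> AB,  move (-1, 0)
    pAplus  = nAB * (N + nA - nB) / (2 * N**2)       # listener AB -> A,  move (+1, 0)
    pBminus = nB * (N + nA - nB) / (2 * N**2)        # listener B -> AB,  move (0, -1)
    pBplus  = nAB * (N - nA + nB) / (2 * N**2)       # listener AB -> B,  move (0, +1)
    p0 = (nA + nB) / (2 * N) + (nA - nB)**2 / (2 * N**2)   # no change, move (0, 0)
    return pAplus, pAminus, pBplus, pBminus, p0

def expected_drift(nA, nB, N):
    """E[dn] = (P(A+) - P(A-), P(B+) - P(B-)), the vector field of Fig. 3.1."""
    pAp, pAm, pBp, pBm, _ = lo_ng_probs(nA, nB, N)
    return (pAp - pAm, pBp - pBm)

print(sum(lo_ng_probs(30, 20, 100)))   # the five probabilities sum to 1
print(expected_drift(20, 20, 100))     # symmetric state: equal drift components
```

Along the symmetry line nA = nB the two drift components coincide, which is the spontaneous-symmetry-breaking picture described above.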
3.1.1 Expected Consensus Times for 2-word Naming Game
We now repeat the first-step analysis (developed in Chapter 2), noting that the
method is essentially independent of the number of dimensions. Assume τ(~n|N) and
t(~n|N) are the expected numbers of time steps before absorption and before leaving
the current state, respectively, starting at the macrostate ~n = (nA, nB); then we have:

    τ(~n) = t(~n) + [ Σ_{i∈{(1,0),(−1,0),(0,1),(0,−1)}} P(Δ~n(~n) = i) τ(~n + i) ] / [ 1 − P(Δ~n(~n) = (0, 0)) ]
          = t(~n) + [ P(A+) τ(nA+1, nB) + P(A−) τ(nA−1, nB)
                    + P(B+) τ(nA, nB+1) + P(B−) τ(nA, nB−1) ] / (1 − P0)           (3.2)
Figure 3.1: Vector field of the expected drift E[Δ~n(~n)] for the random walk coarse-grained from the 2-word naming game. Each vector is E[Δ~n(~n)], the expected drift of the random walk at macrostate ~n. The network is a 10-node complete graph and the domain of the random walk is the lower triangle of the square lattice.
and

    t(~n) = 1 / [ 1 − P(Δ~n(~n) = (0, 0)) ] = 1 / [ 1 − (nA + nB)/2N − (nA − nB)²/2N² ] .   (3.3)
Ordering all the macrostates ~n (except the two absorbing states) in a vector, and
arranging τ(~n), t(~n) in the same order to get ~τ and ~t, whose dimension is
(N + 2)(N + 1)/2 − 2, we write Eq. (3.2) in the same linear-system form ~τ = Q~τ + ~t
as Eq. (2.5). This equation gives the expected consensus time.
Furthermore, taking δ^{~n}_N as a column vector whose only nonzero entry is the 1
corresponding to ~n, we solve for the expected number of visits to each macrostate
through Eq. (2.8): (I − Q)^T ~u = δ^{~n}_N. The expected number of time steps spent on
each macrostate, T(nA, nB), is obtained by multiplying the corresponding elements of ~u
and ~t (as in Eq. (2.9)): T(nA, nB) = u(nA, nB) · t(nA, nB). These equations give the
expected time spent on each macrostate before consensus.
Although in this case the matrix Q is not easy to write down in general, it has the
useful property that the sum of each row is ≤ 1, and for some rows (those corresponding
to ~n = (N − 1, 0) and (0, N − 1)) the inequality is strict. Consequently the moduli of
all the eigenvalues of Q are strictly less than 1 and (I − Q) is invertible; therefore the
existence and uniqueness of the solutions are guaranteed.
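As an illustration of this construction, the sketch below (Python with NumPy; the names are ours) enumerates the (N + 2)(N + 1)/2 − 2 macrostates, assembles the linear system of Eqs. (3.2)-(3.3), and solves for the normalized consensus time from (N/2, N/2); for modest N a dense solve suffices:

```python
import numpy as np

def lo_ng_probs(nA, nB, N):
    """Transition probabilities of Table 3.1."""
    nAB = N - nA - nB
    return (nAB * (N + nA - nB) / (2 * N**2),            # P(A+)
            nA * (N - nA + nB) / (2 * N**2),             # P(A-)
            nAB * (N - nA + nB) / (2 * N**2),            # P(B+)
            nB * (N + nA - nB) / (2 * N**2),             # P(B-)
            (nA + nB) / (2 * N) + (nA - nB)**2 / (2 * N**2))  # P(0)

def ng_consensus_time(N):
    """Solve Eq. (3.2) for the 2-word LO-NG; return tau(N/2, N/2 | N) / N."""
    absorbing = {(N, 0), (0, N)}
    states = [(a, b) for a in range(N + 1) for b in range(N + 1 - a)
              if (a, b) not in absorbing]
    index = {s: k for k, s in enumerate(states)}
    M = len(states)                                      # (N+1)(N+2)/2 - 2 macrostates
    A = np.eye(M)                                        # A = I - Q
    t = np.zeros(M)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for (a, b), k in index.items():
        pAp, pAm, pBp, pBm, p0 = lo_ng_probs(a, b, N)
        t[k] = 1.0 / (1.0 - p0)                          # t(n), Eq. (3.3)
        for p, (da, db) in zip((pAp, pAm, pBp, pBm), moves):
            nxt = (a + da, b + db)
            if nxt in index:                             # absorbing neighbors contribute 0
                A[k, index[nxt]] -= p / (1.0 - p0)
    tau = np.linalg.solve(A, t)
    return tau[index[(N // 2, N // 2)]] / N

r20, r40 = ng_consensus_time(20), ng_consensus_time(40)
print(r20, r40)   # slow, logarithmic-like growth in N
```

Comparing N = 20 with N = 40 shows the slow growth of the normalized consensus time seen in Fig. 3.2, in contrast to the linear growth found for the voter model.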
We study the case of even N and unbiased initial state ~n(0) = (N/2, N/2), and
compare the normalized consensus time τ(N/2, N/2|N)/N obtained from numerical
simulations against the solution obtained from the linear system. Figure 3.2 shows
that the normalized consensus time is of order O(ln N), which is much smaller
than the order O(N) for the voter model. Note that the O(ln N) consensus time
of the original 2-word NG with the above initial configuration has been previously
found using simulations and a rate-equation approach [24, 29].
Figure 3.3, depicting the expected number of time steps spent on each macrostate,
T(nA, nB), for N = 100 and ~n(0) = (50, 50), shows that the time before absorption
is mainly spent on the macrostates near the consensus states.
3.2 The 2-word Naming Game with External Influence
The previous section focused on the time taken by the system to spontaneously
reach consensus. A natural question that arises is how consensus can be sped up
through an external influencing force such as mass media [33, 34]. In this section we
study this case: the 2-word NG subject to an external field of magnitude f, for which
the update rule is defined as follows: in each time step, if the listener is in the mixed
state AB, it will with probability f spontaneously change into state A, and with
probability 1 − f follow the original NG rule (Sec. 1.2). All the differences between
this case and the spontaneous case lie in the probabilities of Δ~n(~n), which are listed
in Table 3.2.
Figure 3.2: Consensus time (normalized by N) as a function of the logarithm of the network size N for the 2-word naming game on the complete graph. Each star is an average of 10 runs of numerical simulations of the 2-word naming game, and the solid straight line consists of the solutions of the linear equation for each N.
In Fig. 3.4 we show how the vector field E[Δ~n(~n)], which is intuitively the
“drift” part of the coarse-grained random walk, changes with the influence level f.
Following the first-step analysis, we can solve for the expected consensus time τ
and the expected number of time steps spent at each macrostate T (nA, nB) starting
from any given macrostate. A specific solution of T (nA, nB) on a complete graph
with N = 100 starting from macrostate (50, 50) is shown in Fig. 3.5. As shown,
there are two peaks around the two consensus states, just as in the spontaneous case,
Figure 3.3: The expected time spent on each macrostate before consensus in the 2-word NG on a complete graph with N = 100 nodes. The vertical axis T(nA, nB) is the expected time that the random walk spends in macrostate (nA, nB) before consensus, starting from the (nA(0), nB(0)) = (50, 50) initial macrostate.
Table 3.2: Update events for the 2-word LO-NG with central influence and the associated random walk transition probabilities

    speaker      listener   event       Δ~n(nA,nB)   probability
    B or AB      A          A → AB      (−1, 0)      P(A−) = nA(N−nA+nB)/2N²
    A, AB or f   AB         AB → A      (1, 0)       P(A+) = (1−f)(N−nA−nB)(N+nA−nB)/2N² + f(N−nA−nB)/N
    A or AB      B          B → AB      (0, −1)      P(B−) = nB(N+nA−nB)/2N²
    B or AB      AB         AB → B      (0, 1)       P(B+) = (1−f)(N−nA−nB)(N−nA+nB)/2N²
    A, B or AB   A or B     unchanged   (0, 0)       P0 = (nA+nB)/2N + (nA−nB)²/2N²
Figure 3.4: Vector fields of the expected drift E[Δ~n(~n)] for the NG with different central influence levels. Several vector fields are drawn in different colors; each color shows the drift of the coarse-grained random walk E[Δ~n(~n)] on a complete graph with N = 20 nodes at a different central influence level f. The length of each vector has been rescaled to its square root to avoid cluttering the graph.
although the peak near the consensus state which the external influence prefers (the
all-A state) is much higher than the other one, even at a low influence level f = 0.05.
3.2.1 Probability of Consensus
To better understand the effect of external influence, we apply the first-step
analysis to the probability of reaching a specific consensus state. Defining PA(~n)
Figure 3.5: Expected time spent on each macrostate before consensus, T(nA, nB), in the 2-word NG on the complete graph. N = 100, central influence f = 0.05, starting from the unbiased initial macrostate (nA(0), nB(0)) = (50, 50).
as the probability of going to the all-A consensus starting from the macrostate ~n =
(nA, nB), PA(~n) follows:

    PA(~n(t)) = E[PA(~n(t + 1))] = Σ_{i∈{(1,0),(−1,0),(0,1),(0,−1),(0,0)}} P(Δ~n(~n) = i) PA(~n + i)   (3.4)
and satisfies the boundary conditions PA(N, 0) = 1 and PA(0, N) = 0. Ordering the
PA(~n) of all macrostates (including the two consensus states) in a vector ~PA, we
rewrite the equations as ~PA = Q0~PA. Q0 is a square matrix of order (N + 2)(N + 1)/2
and all its elements are given in Table 3.2.
In Fig. 3.6, we consider the NG on a 100-node complete graph starting from the
macrostate (n0, N − n0). For n0 = 50 and central influence f = 0 (the left end of the
black curve), the process has equal probability of going to the all-A and the all-B
consensus. When n0 < 50, it is more probable to go to the all-B consensus without
central influence. However, with a biased central influence f, one can always convert
the preference of the process to the opposite side, the all-A consensus.
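The boundary-value problem of Eq. (3.4) is again a linear system. The sketch below (Python with NumPy; our own construction, using the field-modified probabilities of Table 3.2 and a smaller N = 60 to keep the dense solve cheap) moves the known boundary values PA(N, 0) = 1 and PA(0, N) = 0 to the right-hand side and solves for PA:

```python
import numpy as np

def field_probs(nA, nB, N, f):
    """Transition probabilities of Table 3.2 (external field of strength f)."""
    nAB = N - nA - nB
    pAp = (1 - f) * nAB * (N + nA - nB) / (2 * N**2) + f * nAB / N
    pAm = nA * (N - nA + nB) / (2 * N**2)
    pBp = (1 - f) * nAB * (N - nA + nB) / (2 * N**2)
    pBm = nB * (N + nA - nB) / (2 * N**2)
    p0 = (nA + nB) / (2 * N) + (nA - nB)**2 / (2 * N**2)
    return pAp, pAm, pBp, pBm, p0

def prob_all_A(N, f, start):
    """Solve Eq. (3.4) for P_A with boundary values P_A(N,0)=1, P_A(0,N)=0."""
    states = [(a, b) for a in range(N + 1) for b in range(N + 1 - a)
              if (a, b) not in ((N, 0), (0, N))]
    index = {s: k for k, s in enumerate(states)}
    M = len(states)
    A, rhs = np.eye(M), np.zeros(M)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for (a, b), k in index.items():
        pAp, pAm, pBp, pBm, p0 = field_probs(a, b, N, f)
        for p, (da, db) in zip((pAp, pAm, pBp, pBm), moves):
            nxt = (a + da, b + db)
            if nxt == (N, 0):
                rhs[k] += p / (1 - p0)       # absorbing all-A boundary, P_A = 1
            elif nxt in index:
                A[k, index[nxt]] -= p / (1 - p0)
    return np.linalg.solve(A, rhs)[index[start]]

print(prob_all_A(60, 0.0, (30, 30)))         # unbiased start, f = 0: 1/2 by symmetry
print(prob_all_A(60, 0.05, (30, 30)))        # a field toward A pushes P_A above 1/2
```

Transitions into the all-B consensus (0, N) carry PA = 0 and so drop out of the system, which is why only the (N, 0) boundary contributes to the right-hand side.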
Figure 3.6: Probability of all-A consensus PA for different external influence levels f, starting from macrostate (nA, nB) = (n0, N − n0), with n0 = 50, 40, 30, 20, on a 100-node complete graph.
Furthermore, we show in Fig. 3.7 that the external influence becomes more
powerful in forcing the network to a desired consensus state as the network size N
grows. Starting from the unbiased macrostate (N/2, N/2), the probability of going
to the all-B consensus, 1 − PA(N/2, N/2) (which is against the central influence),
decays exponentially with the network size N. It is therefore reasonable to expect
that in real social networks, where the network size N is very large, even a slightly
biased central influence can strongly affect the social consensus.
Figure 3.7: Probability of all-B consensus 1 − PA, starting from macrostate (nA, nB) = (N/2, N/2), as a function of network size N for external influence levels f = 0.05, 0.10, 0.15.
3.3 2-word Naming Game with Committed Agents
A second method of speeding up consensus is by introducing an intrinsic bias
in the system, through a set of inflexible agents [35, 18] promoting a designated
opinion. We refer to such individuals as committed agents [18]. Intuitively, the
introduction of committed agents will break the symmetry of the original NG and
will facilitate global consensus to the state adopted by the committed agents. In
this section we provide asymptotic solutions of the 2-word NG with committed agents.
Suppose that the number of committed agents is nq and all committed agents
are in state A. The corresponding events in the NG with committed agents and the
associated random walk transition probabilities are summarized in Table 3.3.
Table 3.3: Update events for the 2-word LO-NG with committed agents and the associated random walk transition probabilities

    speaker      listener   event       Δ~n(nA,nB)   probability
    B or AB      A          A → AB      (−1, 0)      P(A−) = (nA−nq)(N−nA+nB)/2N²
    A or AB      AB         AB → A      (1, 0)       P(A+) = (N−nA−nB)(N+nA−nB)/2N²
    A or AB      B          B → AB      (0, −1)      P(B−) = nB(N+nA−nB)/2N²
    B or AB      AB         AB → B      (0, 1)       P(B+) = (N−nA−nB)(N−nA+nB)/2N²
    A, B or AB   A or B     unchanged   (0, 0)       P0 = (nA+nB)/2N + (nA−nB)²/2N² + nq(N−nA+nB)/2N²
3.3.1 Expected Consensus Time for 2-word NG with Committed Agents
The equations ~τ = Q~τ + ~t and (I − Q)^T ~u = δ^{~n}_N are derived in exactly the same
way as in the non-committed case.
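The modified entries of Table 3.3 can be checked mechanically. The sketch below (Python; the function name is ours) verifies that the five probabilities still sum to one for a committed count nq, and that P(A−) vanishes when only committed agents carry A (nA = nq):

```python
def committed_probs(nA, nB, N, nq):
    """Transition probabilities of Table 3.3 (nq committed A-agents, nq <= nA)."""
    nAB = N - nA - nB
    pAm = (nA - nq) * (N - nA + nB) / (2 * N**2)     # only uncommitted A can move to AB
    pAp = nAB * (N + nA - nB) / (2 * N**2)
    pBm = nB * (N + nA - nB) / (2 * N**2)
    pBp = nAB * (N - nA + nB) / (2 * N**2)
    p0 = ((nA + nB) / (2 * N) + (nA - nB)**2 / (2 * N**2)
          + nq * (N - nA + nB) / (2 * N**2))         # committed listeners never change
    return pAp, pAm, pBp, pBm, p0

probs = committed_probs(10, 80, 100, 6)
print(sum(probs))                                    # sums to 1
print(committed_probs(6, 80, 100, 6)[1])             # P(A-) = 0 when nA = nq
```

Because P(A−) = 0 on the line nA = nq, the walk can never drop below nq agents in state A, so the all-B corner is no longer an absorbing state.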
From the dynamics of infinite systems with homogeneous mixing, one can
expect [35, 36] that there exists a critical value qc of the committed fraction q = nq/N,
above which consensus times drop dramatically. More specifically, for q < qc the
phase space exhibits three fixed points: a “meta-stable” one, dominated by individuals
in state B; a stable absorbing fixed point with all individuals in state A; and
a “saddle” point separating them. Initializing the system in a configuration corresponding
to macrostate (nA(0), nB(0)) = (nq, N − nq) (a small number of committed
agents embedded among Bs), the system quickly relaxes to the meta-stable fixed
point and stays there for times exponentially large in the system size (i.e., forever
in an infinite system). As q → qc, the meta-stable fixed point and the saddle point
merge and become an unstable fixed point. For q > qc, regardless of the initial
configuration, the system quickly relaxes to the all-A absorbing fixed point.
Figure 3.8(a) and (b) show the consensus times as a function of the initial
macrostate (nA, nB) for q < qc and q > qc, respectively. In the former case, q < qc,
the expected consensus time τ(nA, nB) is given by the equation ~τ = Q~τ + ~t.
A fast drop-off in consensus time as a function of the initial configuration,
observed for q < qc [Fig. 3.8(a)], indicates the presence of the saddle point. For
q > qc, there is only one stable fixed point (the absorbing one), so the consensus
time starting from any initial configuration is short, including initial configurations
in the vicinity of the previously meta-stable states [Fig. 3.8(b)].
Figure 3.9(a) and (b) present the expected number of time steps spent before
absorption in each macrostate, T(nA, nB), starting from the initial state (nq, N − nq),
for q < qc and q > qc, respectively. According to the two peaks in each panel of
Fig. 3.9, the random walk before absorption spends its time mainly in two areas:
one near the meta-stable state [close to the initial state (nq, N − nq)], the other
around the consensus state (N, 0). When q < qc, the peak around the meta-stable
state dominates the total consensus time, while for q > qc it is negligible compared
to the peak around consensus.
We sum T(nA, nB) over these two areas separately (since the peaks in Fig. 3.9
are very concentrated, the precise domain of summation does not matter very much),
and define the time spent near the consensus state as Tc = Σ_{nA>nB} T(nA, nB) and
that near the meta-stable state as Tm = Σ_{nA<nB} T(nA, nB). Figure 3.10 shows that
the normalized time spent near consensus, Tc/N, has the same order O(ln(N)) as N
grows as demonstrated in the non-committed case. We conclude that for different
q's, regardless of whether q is less or greater than the critical qc, the peaks around the
consensus state have roughly the same scale and are about twice the height of the
corresponding peak in the 2-word NG without committed agents shown in Fig. 3.3.
Figure 3.11 shows that the normalized time that the random walk is stuck in the
vicinity of the meta-stable state, Tm/N, grows exponentially with N for q < qc,
while it decreases weakly with N for q > qc. The crossover between these drastically
different scaling behaviors appears at around q = 0.08; hence our rough estimate for
the critical fraction of committed agents is qc ≈ 0.08 ± 0.01. A detailed finite-size
analysis of this crossover behavior should be performed to extract qc in the infinite
system-size limit. Since for the consensus time we approximately have τ ≈ Tm + Tc,
the above findings imply that τ/N ∼ O(e^{cN}) (where c is a constant) for q < qc,
while τ/N ∼ O(ln(N)) for q > qc.
Figure 3.8: Expected normalized consensus time (τ(~n)/N) as a functionof the initial macrostate (nA, nB) on the complete graph withN = 100 nodes. (a) when the fraction of committed agents isq = 0.06 < qc; (b) when q = 0.12 > qc.
Figure 3.9: Expected time spent in each macrostate before consensusT (nA, nB) on the complete graph with N = 100 nodes, startingfrom the (nA(0), nB(0)) = (nq, N −nq) initial macrostate, (a) forq = 0.06 < qc; (b) for q = 0.12 > qc.
Figure 3.10: Normalized time spent near the consensus state before consensus as a function of network size N for different fractions of committed agents q = 0.06, 0.08, 0.10, 0.12, including cases with both q < qc and q > qc. Note the logarithmic scale on the horizontal axis.
3.4 Summary
In this chapter, we studied influencing and consensus formation, in particu-
lar, the asymptotic consensus times, in stylized individual-based models for social
dynamics on the complete graph. We accomplished this by a coarse-graining ap-
proach (lumping microscopic variables into macrostates) resulting in an associated
random walk. We then analyzed first-passage times (corresponding to consensus)
and times spent in each macro-state of the random-walk model. The method yields
asymptotically exact consensus times for large but finite complete graphs of size
N . Direct individual-based simulations can become time consuming for large sys-
tems and prohibitive even for moderately-sized systems when the system initially is
in a meta-stable configuration, as the escape time can increase exponentially with
Figure 3.11: Normalized time spent near the meta-stable state as a function of network size N for different fractions of committed agents q, (a) for q < qc (q = 0.04, 0.06, 0.07, 0.08); (b) for q > qc (q = 0.08, 0.09, 0.10, 0.12). The behavior for q = 0.08 is shown in both (a) and (b), corresponding to our rough estimate of the critical fraction of committed agents, qc ≈ 0.08 ± 0.01.
the system size. The method presented here provides an alternative way to obtain
the asymptotic behavior of consensus times, including the cases associated with
extremely slow meta-stable escapes.
After testing this framework on spontaneous opinion formation in two known
models, we applied it to two scenarios for social influencing in a variation of the
2-word naming game. First, we considered the case when individuals are exposed
to a global external field (or central influence). We found that the external field
dominates the consensus in the large network-size limit. Second, we investigated the
impact of committed individuals with a fixed designated opinion (i.e., individuals
who can influence others but themselves are immune to influence). In the latter
case, we found the existence of a tipping point, associated with the disappearance
of the meta-stable state in the opinion space: when the fraction of committed nodes
is below a critical value, consensus times increase exponentially with system size;
when it is above this threshold value (tipping point), the system is quickly driven
to consensus with weak system-size dependence.
The method and the results presented in this chapter are applicable to the
complete graph or to highly connected graphs without community structure. When
applying these results to large homogeneous sparse random networks, the consensus
times often exhibit the same asymptotic scaling as in simulations [19, 20, 17],
although they do not match quantitatively. Thus we gain some insight into ordering
and consensus in realistic social networks from a relatively simple model.
CHAPTER 4
Consensus Time Distribution for Naming Game on
Complete Graphs
Chapter 3 investigated the average of the consensus time Tc. As Tc is a random
variable depending on the random dynamical process of the NG, it is necessary
to also analyze the distribution of Tc. In this chapter, we first derive a recursive
way to calculate the exact distribution of Tc in a finite domain 0 ≤ Tc ≤ Tmax (Tmax
is an artificial cutoff for the Tc distribution), and compare it with simulation results
of the NG. Next, we show the asymptotic distribution of Tc as the network size N
tends to infinity. We then give an asymptotically exact solution for the variance of
Tc using a martingale approach. In the end, we provide an approximation of Tc
which is very cheap to calculate.
4.1 Recursive Calculation for Consensus Time Distribution
Our goal is to obtain P{X0 = (nA, nB), Tc = T}, the probability that the NG
starts from the (nA, nB) state and achieves consensus at time T; we denote it simply
as P(nA, nB, T). Applying the first-step analysis to P(nA, nB, T), as we did for
τ(nA, nB) in Chapter 3, we have the recursive relationship:

    P(nA, nB, T + 1) = Q(nA+1, nB|nA, nB) P(nA+1, nB, T)
                     + Q(nA−1, nB|nA, nB) P(nA−1, nB, T)
                     + Q(nA, nB+1|nA, nB) P(nA, nB+1, T)
                     + Q(nA, nB−1|nA, nB) P(nA, nB−1, T)
                     + Q(nA, nB|nA, nB) P(nA, nB, T)
                     + Q(nA, nB+2|nA, nB) P(nA, nB+2, T)
                     + Q(nA+2, nB|nA, nB) P(nA+2, nB, T) ,

where Q(·|nA, nB) are the one-step transition probabilities and the last two terms
on the right-hand side are vanishing for the LO-NG (not
vanishing for the original NG). Ordering all macrostates, including the consensus
states, in a vector, we have ~P(T) = (P(~n1, T), P(~n2, T), ..., P(~nM, T))^T,
where the ~nk's are macrostates represented in the format (nA, nB). We rewrite the
above recursive equation as:

    ~P(T + 1) = Q ~P(T) .                                                          (4.1)
The initial condition for ~P(T = 0) is as follows:

    P(~n, 0) = 1 if ~n is a consensus state, and P(~n, 0) = 0 otherwise.

The vector sequence ~P(T) generated by Eq. (4.1) is considered as a vector-valued
function of T. For a given dimension, i.e. a given macrostate ~nk = (nA, nB), this
sequence is a real-valued function of T which represents the consensus time
distribution starting from the macrostate ~nk corresponding to this dimension:

    P_{(nA,nB)}(Tc = T) = P(nA, nB, T) .
Remarks:

(i) We actually calculate the consensus time distributions starting from all the
macrostates at the same time;

(ii) We can only calculate the distribution ~P(T) in a finite domain 0 ≤ T ≤ Tmax < ∞,
but the probability values we obtain are identical with those of the full distribution
on [0, ∞); they are not normalized within the finite domain.
In Figure 4.1, we compare the probability distribution calculated analytically by
our recursive equation with the statistics of NG simulations. The good match in
the figure confirms our method.
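The recursion is cheap to iterate numerically. In the sketch below (Python with NumPy; our own construction, for the plain 2-word LO-NG on a small complete graph) we zero the outgoing transitions of the two consensus states, so that iterating ~P(T+1) = Q~P(T) from the stated initial condition yields the probability of reaching consensus at exactly time T; the horizon Tmax plays the role of the artificial cutoff mentioned in the text:

```python
import numpy as np

def lo_ng_probs(nA, nB, N):
    """Transition probabilities of Table 3.1."""
    nAB = N - nA - nB
    return (nAB * (N + nA - nB) / (2 * N**2),   # P(A+)
            nA * (N - nA + nB) / (2 * N**2),    # P(A-)
            nAB * (N - nA + nB) / (2 * N**2),   # P(B+)
            nB * (N + nA - nB) / (2 * N**2),    # P(B-)
            (nA + nB) / (2 * N) + (nA - nB)**2 / (2 * N**2))  # P(0)

def consensus_time_pmf(N, start, Tmax):
    """Iterate P(T+1) = Q P(T) (Eq. 4.1); return the pmf of Tc from `start`."""
    states = [(a, b) for a in range(N + 1) for b in range(N + 1 - a)]
    index = {s: k for k, s in enumerate(states)}
    M = len(states)
    Q = np.zeros((M, M))
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
    for (a, b), k in index.items():
        if (a, b) in ((N, 0), (0, N)):
            continue                           # absorbing: no outgoing transitions
        for p, (da, db) in zip(lo_ng_probs(a, b, N), moves):
            if p > 0:
                Q[k, index[(a + da, b + db)]] += p
    P = np.zeros(M)
    P[index[(N, 0)]] = P[index[(0, N)]] = 1.0  # consensus reached at T = 0
    pmf, k0 = np.zeros(Tmax + 1), index[start]
    pmf[0] = P[k0]
    for T in range(1, Tmax + 1):
        P = Q @ P                              # P(n, T) = sum_n' Q(n'|n) P(n', T-1)
        pmf[T] = P[k0]
    return pmf

N = 20
pmf = consensus_time_pmf(N, (N // 2, N // 2), Tmax=20000)
print(pmf.sum())                               # ~ 1: the cutoff captures nearly all mass
print((np.arange(len(pmf)) * pmf).sum() / N)   # mean consensus time, normalized by N
```

The total mass of the computed pmf is a check that the cutoff captures essentially the whole distribution, and the normalized mean is consistent with the O(ln N) values of Fig. 3.2.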
(Figure 4.1 panels: committed fractions q = 0.1, q = 0.09, and q = 0.06.)
Figure 4.1: Comparison between the theoretical prediction of the consensus time distribution (red line) and the statistics of the numerical simulation of the Naming Game on an N = 100 complete network.
4.2 Qualitative Property of the Consensus Time Distribution
A previous study shows there is a critical fraction qc for the committed agents.
Networks with more or fewer committed agents than the critical fraction lead to
qualitatively different behaviors of the NG dynamics. We calculated the Tc distribution
by Eq. (4.1) and rescaled the distribution by the mean or standard deviation of
Tc. In Fig. 4.2, (a) and (c) are rescaled by E[Tc], and (b) and (d) are rescaled by
std(Tc); the probability P(Tc) is also multiplied by the same scaling parameter so
that the rescaled curve is still a probability distribution. (a) and (b) are obtained
for committed fraction q = 0.12 > qc, showing that the distribution tends to a
Gaussian; in particular (b) shows that the distribution of Tc concentrates around
E[Tc] as N grows. (c) and (d) are obtained for q = 0.06 < qc, showing that the
distribution tends to an exponential.
These observations imply that for a large enough network, when q > qc, two
parameters (E[Tc] and std(Tc)) are sufficient to describe the Tc distribution; when
q < qc, only one parameter (E[Tc] or std(Tc)) is necessary to determine the asymptotic
exponential distribution of Tc.
4.2.1 A Tricky Approximation of E[Tc] when q < qc
According to Eq. (4.1), ~P(T) has an asymptotic expansion:

    ~P(T) = C1 exp(log(λ1) T) ~P1 + C2 exp(log(λ2) T) ~P2 + ... ,

where λ1, λ2, ... are the largest few eigenvalues of the transition matrix Q, and
C1, C2, ... are constants to be determined. Consequently, each dimension of this
equation gives us:

    P_{(nA,nB)}(Tc) = C1 exp(log(λ1) Tc) + C2 exp(log(λ2) Tc) + ... .
As we know, when q < qc and N is large enough, P_{(nA,nB)}(Tc) is almost
exponential. Hence, in the above equation, the first term dominates, that is:
(Figure 4.2 panels: (a), (b) committed fraction q = 0.12 with N = 50, 100, 200, 400, 800, (b) also showing the standard normal curve; (c), (d) q = 0.06 with N = 50, 100, 150, compared with y = exp(−x) and y = exp(−x−1).)
Figure 4.2: Rescaled consensus time distributions and their asymptotic behavior. All distributions are calculated analytically by Eq. (4.1). In (a) and (c), Tc is rescaled by its mean; in (b) and (d), by its standard deviation. (a) and (b) are for committed fraction q = 0.12 > qc: the distribution tends to a Gaussian and concentrates as N grows. (c) and (d) are for q = 0.06 < qc: the distribution tends to an exponential.
    P_{(nA,nB)}(Tc) ≈ (−log(λ1)) exp(log(λ1) Tc) .

Therefore

    E[Tc] ≈ 1 / (−log(λ1)) ,
where λ1 is the largest eigenvalue of the transition matrix Q in the random walk
model. This result is consistent with intuition: when q < qc, the system spends
most of the time around the metastable state before achieving consensus, and λ1
in some sense measures the leaking rate out of the metastable state.
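This eigenvalue estimate is easy to test numerically. The sketch below (Python with NumPy; our own construction, using the committed-agent probabilities of Table 3.3) builds the substochastic transition matrix over the non-absorbed macrostates for a small system with q < qc, computes its largest eigenvalue λ1, and compares 1/(−log λ1) with the exact E[Tc] from the first-step linear system:

```python
import numpy as np

def committed_probs(nA, nB, N, nq):
    """Transition probabilities of Table 3.3; P(0) obtained by normalization."""
    nAB = N - nA - nB
    pAp = nAB * (N + nA - nB) / (2 * N**2)
    pAm = (nA - nq) * (N - nA + nB) / (2 * N**2)
    pBp = nAB * (N - nA + nB) / (2 * N**2)
    pBm = nB * (N + nA - nB) / (2 * N**2)
    return pAp, pAm, pBp, pBm, 1.0 - pAp - pAm - pBp - pBm

def build_system(N, nq):
    """Substochastic matrix Q over non-absorbed states; all-A is absorbing."""
    states = [(a, b) for a in range(nq, N + 1) for b in range(N + 1 - a)
              if (a, b) != (N, 0)]
    index = {s: k for k, s in enumerate(states)}
    M = len(states)
    Q = np.zeros((M, M))
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]
    for (a, b), k in index.items():
        for p, (da, db) in zip(committed_probs(a, b, N, nq), moves):
            nxt = (a + da, b + db)
            if p > 0 and nxt in index:
                Q[k, index[nxt]] += p        # probability into (N, 0) leaks out
    return index, Q

N, nq = 50, 3                                # q = nq/N = 0.06 < qc ~ 0.08
index, Q = build_system(N, nq)
M = Q.shape[0]
tau = np.linalg.solve(np.eye(M) - Q, np.ones(M))   # exact E[Tc] per macrostate
lam1 = np.max(np.real(np.linalg.eigvals(Q)))       # largest eigenvalue of Q
E_exact = tau[index[(nq, N - nq)]]
E_approx = 1.0 / (-np.log(lam1))
print(E_exact, E_approx)                           # same order of magnitude
```

In the metastable regime the two values agree up to a modest factor, reflecting the dominance of the slowest decay mode of Q.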
4.3 Variance of Consensus Time
According to the analysis of Section 4.2, we focus on the variance of Tc when
the committed fraction q > qc. In this section, we calculate Var(Tc) within the
random walk model.

Let Xt be the macrostate (nA, nB) at time step t, as mentioned in Section 2.3.
The random variable Tc(X0), or simply Tc, denotes the total consensus time starting
from the macrostate X0 at t = 0. We define an auxiliary martingale:

    Zt(x) = E[Tc(X0) | Ft] = E[Tc(X0) | Xt = x] .
Here the random walk {Xt} is a homogeneous Markov chain, so Zt(x) does not
depend on X0, although X0 appears in the definition. Zt has the following
properties:

1. Z0(X0) = E[Tc(X0)].
2. When t > Tc, Zt = Tc.
3. Considered as a random process, Zt(Xt) is a martingale.
4. Zt(x) = Z0(x) + t.

Properties 1 and 2 are straightforward from the definition. We prove properties 3
and 4 as follows:
Proof of property 3:

    E[Z_{t+1} | Ft] = E[ E[Tc(X0) | F_{t+1}] | Ft ] = E[Tc(X0) | Ft] = Zt .

Proof of property 4:

    Zt(x) = E[Tc(X0) | Xt = x] = E[Tc(x) + t | X0 = x] = Z0(x) + t .
For a finite system, the NG reaches consensus in finite time almost surely, so
according to property 2 we have

    lim_{t→∞} Zt(Xt) = Tc(X0)   (a.s.)
As a property of a martingale (increments are uncorrelated):

    Var(Tc) = Var( lim_{t→∞} Zt(Xt) )
            = Var( Z0(X0) + Σ_{t=0}^∞ (Z_{t+1}(X_{t+1}) − Zt(Xt)) )
            = Var(E[Tc(X0)]) + Σ_{t=0}^∞ Var( Z_{t+1}(X_{t+1}) − Zt(Xt) )
            = Σ_{t=0}^∞ Var( Z_{t+1}(X_{t+1}) − Zt(Xt) ) .
According to property 4, we have

    Z_{t+1}(X_{t+1}) − Zt(Xt) = Z0(X_{t+1}) + (t + 1) − (Z0(Xt) + t)
                              = Z0(X_{t+1}) − Z0(Xt) + 1 .

Therefore, Z_{t+1}(X_{t+1}) − Zt(Xt) is independent of t and depends only on the
current macrostate Xt = (nA, nB). So we define:

    V(nA, nB) = Var( Z_{t+1}(X_{t+1}) − Zt(Xt) | Xt = (nA, nB) ) .
Define T(n_A, n_B), as in Chap. 3, as the expected number of time steps spent in the macrostate (n_A, n_B) before consensus:

$$T(n_A, n_B) = E\Big[\sum_{t=0}^{\infty} \mathbf{1}\{X_t = (n_A, n_B)\}\Big].$$
Resuming the computation, we have:

$$\begin{aligned}
Var(T_c) &= \sum_{t=0}^{\infty} Var\big(Z_{t+1}(X_{t+1}) - Z_t(X_t)\big) \\
&= \sum_{t=0}^{\infty}\sum_{(n_A,n_B)} P\big(X_t = (n_A,n_B)\big)\,V(n_A,n_B) \qquad (4.2)\\
&= \sum_{(n_A,n_B)}\sum_{t=0}^{\infty} E\big[\mathbf{1}\{X_t = (n_A,n_B)\}\big]\,V(n_A,n_B) \\
&= \sum_{(n_A,n_B)} T(n_A,n_B)\,V(n_A,n_B). \qquad (4.3)
\end{aligned}$$
Here we can switch the order of the two summations because each term P(X_t = (n_A,n_B)) V(n_A,n_B) is nonnegative, so the double sum is absolutely convergent whenever it converges.
In Chap. 3, Z_0(n_A, n_B) = τ(n_A, n_B), the expectation of T_c(n_A, n_B), and T(n_A, n_B) are calculated for all (n_A, n_B) using first-step analysis.
V(n_A, n_B) is given by:

$$\begin{aligned}
V(n_A,n_B) &= Var\big(Z_{t+1}(X_{t+1}) - Z_t(X_t)\mid X_t = (n_A,n_B)\big) \\
&= Var\big(Z_1(X_1) - Z_0(X_0)\mid X_0 = (n_A,n_B)\big) \\
&= E\big[(Z_1(X_1) - Z_0(X_0))^2\mid X_0 = (n_A,n_B)\big] \\
&= P(A+)\big(Z_0(X_0 + (1,0)) - Z_0(X_0)\big)^2 + P(A-)\big(Z_0(X_0 + (-1,0)) - Z_0(X_0)\big)^2 \\
&\quad + P(B+)\big(Z_0(X_0 + (0,1)) - Z_0(X_0)\big)^2 + P(B-)\big(Z_0(X_0 + (0,-1)) - Z_0(X_0)\big)^2.
\end{aligned}$$
Therefore the values of T(n_A, n_B) and V(n_A, n_B) are known from the random walk model, and Var(T_c) is finally given by:

$$Var(T_c) = \sum_{(n_A,n_B)} T(n_A,n_B)\,V(n_A,n_B). \qquad (4.4)$$
Fig. 4.3 shows the comparison between Var(Tc) from direct NG simulations and the analytical prediction.
4.4 Path Integral Approximation
In the previous section, we obtained an exact method for calculating Var(Tc). But this method is still expensive for large systems. Furthermore, when we consider more general cases such as the Naming Game on sparse networks in Chap. 5, establishing the random walk model is nearly impossible, so the method based on the random walk model does not apply to those cases.
In this section, we propose a method purely based on the ODE model to
approximately calculate the variance of consensus time. Since the ODE model does
not include N explicitly, the complexity of this method is constant.
In the ODE model, it is hard to identify a proper cutoff for "total consensus". Therefore, to compare the theoretical prediction with simulation, we consider η-consensus and its first passage time Tη, the first time pA or pB reaches η. In the calculations below, we focus on Var(Tη) instead of Var(Tc).
Figure 4.3: Exact variance of consensus time Var(Tc) for different N. Each circle is the variance of Tc estimated from 100 runs of NG simulations; the solid line is the analytical prediction of Var(Tc) by equation 4.4. q = 0.2 > qc.
We start from equation (4.2) and rewrite Var(Tc) as a path integral:
$$\begin{aligned}
Var(T_\eta) &= \sum_{t=0}^{\infty}\sum_{(n_A,n_B)} P\big(X_t = (n_A,n_B)\big)\,V(n_A,n_B) \\
&= \sum_{tr\in Tr}\Big[\prod_{t=0}^{T_\eta} P(X_t\mid X_{t-1}) \sum_{t=0}^{T_\eta} V(X_t)\Big].
\end{aligned}$$
Here the outer sum is over all possible trajectories $tr = (X_1, X_2, \ldots, X_{T_\eta})$, and Tr denotes the space of all possible trajectories. The product $\prod_{t=0}^{T_\eta} P(X_t\mid X_{t-1})$ is the probability measure of a specific trajectory, and the inner sum is the variance integrated along the trajectory. Fig. 4.4 shows one significant feature of the Naming
Game: there is a center manifold, and most of the trajectories stay close to it.

Figure 4.4: Trajectories of macrostates from Naming Game simulations with N = 200 and q = 0.12 > qc, starting from different macrostates X0. Most of the trajectories starting from the same initial state are close to each other.
To make this precise, let μ(·) be the probability measure on the trajectory space Tr (μ(Tr) = 1). There exists a subset U ⊂ Tr with μ(U) = 1 − ε1 such that for all tr1, tr2 ∈ U,

$$\Big|\sum_{t=0}^{T_\eta} V\big(X_t^{tr_1}\big) - \sum_{t=0}^{T_\eta} V\big(X_t^{tr_2}\big)\Big| \le \varepsilon_2,$$

where ε1 and ε2 are small quantities.
Under these conditions, the path integrals along these trajectories can be
estimated as follows:

$$\begin{aligned}
Var(T_\eta) &= \sum_{tr\in Tr}\Big[\prod_{t=0}^{T_\eta} P(X_t\mid X_{t-1}) \sum_{t=0}^{T_\eta} V(X_t)\Big] \\
&= \sum_{tr\in U}\prod_{t=0}^{T_\eta} P(X_t\mid X_{t-1})\Big[\sum_{t=0}^{T_\eta} V(X_t) + O(\varepsilon_2)\Big] + O(\varepsilon_1) \\
&\approx (1-\varepsilon_1)\sum_{t=0}^{T_\eta} V(X_t) \\
&\approx \int_0^{E[T_\eta]} V\big(n_A(t), n_B(t)\big)\,dt.
\end{aligned}$$
The remaining problem is to estimate V(x, y) through the ODE model. We define F_η(x, y) = E[T_η | (n_A(0), n_B(0)) = (x, y)], a continuous function determined by the ODE model. The gradient of F_η along the center manifold is given by

$$\nabla F_\eta = \frac{\left(-\dfrac{dn_A}{dt},\; -\dfrac{dn_B}{dt}\right)}{\left(\dfrac{dn_A}{dt}\right)^2 + \left(\dfrac{dn_B}{dt}\right)^2}.$$
We locally linearize F_η(x, y):

$$F_\eta\Big((x,y) + \frac{1}{N}\Delta\vec{N}\Big) = F_\eta(x,y) + \frac{1}{N}\Delta\vec{N}\cdot\nabla F_\eta,$$
where ΔN⃗ is the random vector in Table 3.1. Then we estimate V(x, y) as follows:

$$\begin{aligned}
V(x,y) &= P(A+)\Big(F_\eta\big(x + \tfrac{1}{N}, y\big) - F_\eta(x,y)\Big)^2 + P(A-)\Big(F_\eta\big(x - \tfrac{1}{N}, y\big) - F_\eta(x,y)\Big)^2 \\
&\quad + P(B+)\Big(F_\eta\big(x, y + \tfrac{1}{N}\big) - F_\eta(x,y)\Big)^2 + P(B-)\Big(F_\eta\big(x, y - \tfrac{1}{N}\big) - F_\eta(x,y)\Big)^2.
\end{aligned}$$
Fig. 4.5 compares Var(Tη) as a function of 1 − η from direct NG simulations and from the path integral approximation.
Figure 4.5: Approximate variance of consensus time. The x axis is the consensus threshold 1 − η; the y axis is the variance of the first passage time to the consensus threshold η. The solid line is the path integral approximation of Var(Tη); the dots come from statistics of 50 runs of NG simulations on the complete network with N = 5000, q = 0.2.
4.5 Summary
In this chapter, we study the probability distribution of the consensus time. In the first section, we find a way to recursively calculate the exact distribution of Tc in a finite domain 0 ≤ Tc ≤ Tmax, where Tmax is a cutoff of the recursion. A natural cutoff here is when $P(T_{max}) \ll \sup_{0\le T\le T_{max}} P(T)$. This natural cutoff grows tremendously large for big systems, especially when q < qc. Compared with simulation results of the NG, this recursive method shows good accuracy, but it is very inefficient (although more efficient than direct simulation), because through this approach the distributions of Tc(nA, nB) starting from all the different macrostates are highly coupled and we need to calculate all of them at the same time.

We then show the asymptotic distribution of Tc as the network size N tends to infinity. When the committed fraction q < qc, P(Tc) tends to the exponential distribution $-\log(\lambda_1)\exp(\log(\lambda_1)T_c)$, where λ1 is the largest eigenvalue of the transition matrix Q of the random walk. When the committed fraction q > qc, or when there are no committed agents (in both cases there is no metastable state), P(Tc) tends to a Gaussian distribution N(μ, σ). The mean μ = E[Tc] is studied in Chapter 3, and in this chapter we calculate the variance σ² in two ways. First, we give an asymptotically exact solution for the variance of Tc using a martingale approach: the total variance of Tc can be decoupled into the variance generated in each macrostate. Second, we provide a path integral approximation which is very cheap to calculate (constant complexity). We compare the approximated variance with the simulation results. The weakness of this method is that we cannot improve the accuracy of the approximation to an arbitrary level.
CHAPTER 5
Naming Game on Sparse Networks
5.1 Improved Mean Field Approach
In Chapter 3,4, we fully studied the consensus time of 2-word Naming Game
on complete graph, including the expectation and distribution. All the analysis
are based on mean field assumption. However, as most applications of the mean
field approximation, these theoretical predictions deviate from the simulations on
complex networks especially when the network is relatively ”sparse”. In many stud-
ies, the dynamical behavior of the network given its average degree or the degree
distribution is very important.
Extensive and costly numerical experiments on a range of sparse random networks with 100–10000 nodes have shown that the tipping point effect of the NG with a minority fraction of committed agents is a very robust phenomenon with respect to the underlying network topology. Of particular significance is the numerical observation [41] that as one lowers the average degree of the underlying random network, the tipping fraction pc decreases.
Homogeneous pair approximation is a variation of the simple mean field approximation which was introduced to the voter model in [53]. It improves on the simple mean field approximation by taking into account the correlation between nearest neighbors. The analysis in [53] is based on the master equation of the active links, the links between nodes with different opinions. Although it shows a spurious transition point in the average degree, it captures most features of the dynamics and works very accurately on most uncorrelated networks such as ER and scale-free networks.
This chapter further studies the expected time to consensus of the Naming Game on sparse networks and its dependence on the network topology, i.e. the average degree and the variance of the degree distribution. We analytically establish the numerical discovery using the homogeneous pair approximation [52] and report on precise changes in NG dynamics with respect to the average degree < k > of an uncorrelated underlying network which are beyond the reach of the straightforward
mean field model in [41], [42]. Specifically, the critical tipping fraction in the binary agreement model decreases from a maximum of 10 percent for complete graphs to a minimum of 5 percent when the average degree < k > = 4. This shows that the new mean field model is in better agreement with the numerical results reported in [41] and provides a much improved approximation to NG dynamics on large random networks in comparison to the straightforward mean field model in [41].
5.2 Naming Game without Committed Agents
5.2.1 Analysis
Consider the NG dynamics on an uncorrelated random network in which the presence of each link is independent, together with the following assumptions, which comprise the foundation of the homogeneous pair approximation:

1. The opinions of direct neighbors are correlated, but there is no extra correlation beyond that through the nearest neighbor. To make this assumption clear, suppose three nodes in the network are linked as 1-2-3 (so there is no link between 1 and 3), and denote their opinions by the random variables X1, X2, X3. Then the assumption says: P(X1|X2) ≠ P(X1), but P(X1|X2, X3) = P(X1|X2). This assumption is valid for all uncorrelated networks (Chung-Lu type networks [55], in particular the ER network).
2. The opinion of a node and its degree are mutually independent. Suppose the node index i is a random variable labeling a random node. In terms of the opinion and degree of node i, denoted respectively Xi and ki, this assumption means E[ki|Xi] = < k >, P(Xi|ki) = P(Xi) and P(Xi|Xj, ki, kj) = P(Xi|Xj), where j is a neighbor of i. This assumption is obviously satisfied for networks in which every node has the same degree (regular geometry), but it is also valid for networks whose degree distribution is concentrated around its average (for example, a Gaussian distribution with relatively small variance, or a Poisson distribution with not too small < k >).
In other words, in typical mean field language, the probability distribution of the neighboring opinions of a specific node is an effective field. This field is, however, not uniform over the network but depends on the opinion of the given node. For an uncorrelated random network with N nodes and average degree < k >, the number of links is M = N < k > /2. We denote the numbers of nodes holding opinions A, B and AB by nA, nB, nAB, and their fractions by pA, pB, pAB. We also denote the numbers of the different types of links by $\vec{L} = [L_{A\text{-}A}, L_{A\text{-}B}, L_{A\text{-}AB}, L_{B\text{-}B}, L_{B\text{-}AB}, L_{AB\text{-}AB}]^T$, with fractions $\vec{l} = \vec{L}/M$. We take $\vec{L}$ or $\vec{l}$ as the coarse-grained macrostate vector. The global mean field is given by:
$$\vec{p}(\vec{L}) = \begin{bmatrix} p_A \\ p_B \\ p_{AB} \end{bmatrix} = \frac{1}{2M}\begin{bmatrix} \langle k\rangle n_A \\ \langle k\rangle n_B \\ \langle k\rangle n_{AB} \end{bmatrix} = \frac{1}{2M}\begin{bmatrix} 2L_{A\text{-}A} + L_{A\text{-}B} + L_{A\text{-}AB} \\ L_{A\text{-}B} + 2L_{B\text{-}B} + L_{B\text{-}AB} \\ L_{A\text{-}AB} + L_{B\text{-}AB} + 2L_{AB\text{-}AB} \end{bmatrix}.$$
Suppose Xi, Xj are the opinions of two neighboring nodes. We simply write P(Xi = A | Xj = B), for example, as P(A|B). We also express the effective fields for all types of nodes in terms of L⃗:
$$\overrightarrow{P(\cdot|A)}(\vec{L}) = \begin{bmatrix} P(A|A) \\ P(B|A) \\ P(AB|A) \end{bmatrix} = \frac{1}{2L_{A\text{-}A} + L_{A\text{-}B} + L_{A\text{-}AB}}\begin{bmatrix} 2L_{A\text{-}A} \\ L_{A\text{-}B} \\ L_{A\text{-}AB} \end{bmatrix},$$

$$\overrightarrow{P(\cdot|B)}(\vec{L}) = \begin{bmatrix} P(A|B) \\ P(B|B) \\ P(AB|B) \end{bmatrix} = \frac{1}{L_{A\text{-}B} + 2L_{B\text{-}B} + L_{B\text{-}AB}}\begin{bmatrix} L_{A\text{-}B} \\ 2L_{B\text{-}B} \\ L_{B\text{-}AB} \end{bmatrix},$$

$$\overrightarrow{P(\cdot|AB)}(\vec{L}) = \begin{bmatrix} P(A|AB) \\ P(B|AB) \\ P(AB|AB) \end{bmatrix} = \frac{1}{L_{A\text{-}AB} + L_{B\text{-}AB} + 2L_{AB\text{-}AB}}\begin{bmatrix} L_{A\text{-}AB} \\ L_{B\text{-}AB} \\ 2L_{AB\text{-}AB} \end{bmatrix}.$$
To derive the averaged nonlinear ODE for the NG dynamics, we calculate the expected change of L⃗ in one time step, E[ΔL⃗ | L⃗]. In the following equation, we add up the expectations E[ΔL⃗ | L⃗, ω] conditioned on each type of communication event ω, weighted by the probability P(ω) of that event:

$$E[\Delta\vec{L}\mid\vec{L}] = \sum_{\omega} P(\omega)\,E[\Delta\vec{L}\mid\vec{L}, \omega]. \qquad (5.1)$$
For brevity, we display the calculation of one term in the above summation as an example. Consider the case where the listener holds opinion A while the speaker holds opinion B, and denote this case by ω = (B → A). The probability of this type of communication is

$$P(B\to A) = p_B\,P(A|B) = \frac{1}{2M}L_{A\text{-}B}.$$
The direct consequence of this communication is that the link between the listener and the speaker changes from A-B into AB-B, so L_{A-B} decreases by 1 and L_{B-AB} increases by 1. This direct change of L⃗ is represented by

$$\vec{D}(B\to A) = \begin{bmatrix} 0 \\ -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}.$$
Furthermore, since the listener changes opinion from A to AB, all of its other links change as well. The number of these links is on average < k > − 1 (here we use assumption 2, E[ki|Xi] = < k >). The probabilities for each such link to be A-A, A-B, A-AB before the communication are given by $\overrightarrow{P(\cdot|A)}$ (here we use assumption 1). After the communication, these links change into AB-A, AB-B, AB-AB correspondingly. This related change of L⃗ is represented by
$$(\langle k\rangle - 1)\begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} P(A|A) \\ P(B|A) \\ P(AB|A) \end{bmatrix}.$$
The 6-by-3 matrix in the above expression encodes the link correspondence between A-A, A-B, A-AB and AB-A, AB-B, AB-AB when an "A node" changes into an "AB node"; we denote it by

$$Q_A = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Let $\vec{R}(B\to A) = Q_A\,\overrightarrow{P(\cdot|A)}$; we obtain:

$$E[\Delta\vec{L}\mid\vec{L}, B\to A] = \vec{D}(B\to A) + (\langle k\rangle - 1)\,\vec{R}(B\to A).$$
On the right hand side of this equation, the first term represents the direct change and the second term represents the related changes.
Analyzing all the other terms in equation (5.1) for the different ω (the listener's and speaker's opinions) in the same way, and writing the weighted sum in matrix form, we obtain:

$$E[\Delta\vec{L}\mid\vec{L}] = \frac{1}{M}\big[D + (\langle k\rangle - 1)R\big]\vec{L},$$
where D is a constant matrix whose column vectors come from linear combinations of the $\vec{D}(\omega)$'s:

$$D = \begin{bmatrix}
0 & 0 & \tfrac{3}{4} & 0 & 0 & \tfrac{1}{2} \\
0 & -1 & 0 & 0 & 0 & 0 \\
0 & \tfrac{1}{2} & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{3}{4} & \tfrac{1}{2} \\
0 & \tfrac{1}{2} & 0 & 0 & -1 & 0 \\
0 & 0 & \tfrac{1}{4} & 0 & \tfrac{1}{4} & -1
\end{bmatrix},$$
and the matrix R is a function of $\vec{L}$, with columns coming from the $\vec{R}(\omega)$'s:

$$R = \Big(\vec{0},\;\tfrac{1}{2}\big[Q_A\overrightarrow{P(\cdot|A)} + Q_B\overrightarrow{P(\cdot|B)}\big],\;Q_A\big[\tfrac{1}{4}\overrightarrow{P(\cdot|A)} - \tfrac{3}{4}\overrightarrow{P(\cdot|AB)}\big],\;\vec{0},\;Q_B\big[\tfrac{1}{4}\overrightarrow{P(\cdot|B)} - \tfrac{3}{4}\overrightarrow{P(\cdot|AB)}\big],\;-(Q_A + Q_B)\overrightarrow{P(\cdot|AB)}\Big).$$
Here QB, analogous to QA defined above, encodes the link correspondence between B-A, B-B, B-AB and AB-A, AB-B, AB-AB when a "B node" changes into an "AB node":

$$Q_B = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}.$$
When an "AB node" changes into "A" or "B", the link correspondence is given by −QA or −QB respectively.
Then we normalize L⃗ by the total number of links M and normalize time by the number of nodes N to obtain:

$$\frac{d}{dt}\vec{l} = \frac{N}{M}E[\Delta\vec{L}\mid\vec{L}] = \frac{N}{M}\big[D + (\langle k\rangle - 1)R\big]\vec{l} = 2\Big[\frac{1}{\langle k\rangle}D + \Big(\frac{\langle k\rangle - 1}{\langle k\rangle}\Big)R\Big]\vec{l}. \qquad (5.2)$$
Thus we have derived the new mean field ODEs for l⃗, in which the average degree < k > of the underlying social network on which the NG is played appears explicitly. In the last line, the first term is linear and comes from the direct change of the link between the listener and the speaker; the second term is nonlinear and comes from the related changes.
Under the previous basic mean field assumptions in [41], the first term does not exist because there is no specific "speaker" and everyone receives messages from the effective mean field. When < k > → 1, the new ODE becomes:

$$\frac{d}{dt}\vec{l} = 2D\vec{l},$$
which is a linear system. When < k > → ∞, this ODE becomes:

$$\frac{d}{dt}\vec{l} = 2R\vec{l}.$$
If in the matrix R we further require $\overrightarrow{P(\cdot|A)} = \overrightarrow{P(\cdot|B)} = \overrightarrow{P(\cdot|AB)} = \vec{p}$ and transform the coordinates by $\vec{L}\to\vec{p}(\vec{L})$, this ODE reverts to the one obtained under the basic mean field assumption in [41].
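As a sanity check on equation (5.2), the six-dimensional system can be integrated directly. The sketch below is a minimal NumPy implementation (a hand-rolled RK4 step and hypothetical parameter values, using the matrices D, QA, QB given above); it verifies that the total link fraction is conserved and that mixed-opinion links appear from a pure A/B mixture:

```python
import numpy as np

# Link-type order: l = [AA, AB, A-AB, BB, B-AB, AB-AB]
D = np.array([
    [0,  0,    0.75, 0, 0,    0.5],
    [0, -1,    0,    0, 0,    0  ],
    [0,  0.5, -1,    0, 0,    0  ],
    [0,  0,    0,    0, 0.75, 0.5],
    [0,  0.5,  0,    0, -1,   0  ],
    [0,  0,    0.25, 0, 0.25, -1 ],
])
QA = np.array([[-1,0,0],[0,-1,0],[1,0,-1],[0,0,0],[0,1,0],[0,0,1]], float)
QB = np.array([[0,0,0],[-1,0,0],[1,0,0],[0,-1,0],[0,1,-1],[0,0,1]], float)

def cond(v):
    """Normalize a 3-vector of link weights into conditional probabilities."""
    s = v.sum()
    return v / s if s > 0 else np.zeros(3)

def rhs(l, k):
    PA  = cond(np.array([2*l[0], l[1], l[2]]))   # P(.|A)
    PB  = cond(np.array([l[1], 2*l[3], l[4]]))   # P(.|B)
    PAB = cond(np.array([l[2], l[4], 2*l[5]]))   # P(.|AB)
    R = np.zeros((6, 6))
    R[:, 1] = 0.5 * (QA @ PA + QB @ PB)
    R[:, 2] = QA @ (0.25 * PA - 0.75 * PAB)
    R[:, 4] = QB @ (0.25 * PB - 0.75 * PAB)
    R[:, 5] = -(QA + QB) @ PAB
    return 2.0 * ((1.0 / k) * (D @ l) + ((k - 1.0) / k) * (R @ l))

def rk4(l, k, dt, steps):
    for _ in range(steps):
        k1 = rhs(l, k); k2 = rhs(l + dt/2*k1, k)
        k3 = rhs(l + dt/2*k2, k); k4 = rhs(l + dt*k3, k)
        l = l + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return l

l0 = np.array([0.25, 0.5, 0.0, 0.25, 0.0, 0.0])  # well-mixed 50/50 A/B start
l1 = rk4(l0, k=5.0, dt=0.01, steps=50)
print(l1, l1.sum())
```

Every column of D, QA and QB sums to zero, so the right-hand side conserves the total link fraction exactly; this makes the sum of l a convenient numerical invariant to monitor.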
5.2.2 Numerics
In this section, we solve our ODEs numerically by the Runge-Kutta method and compare the phase trajectories with those of the basic mean field theory and with the stochastic dynamical trajectories of the simulated NG on random networks of varying average degree. Fig. 5.1 shows the comparison between our theoretical prediction (color lines) and the simulation on ER networks (black solid lines); the dotted lines are the prediction of the basic mean field approximation. We calculate the evolution of the fractions of nodes holding the A, B
and AB opinions respectively and show that the prediction of the older basic mean
field approximation deviates from the simulations significantly while that of the
homogeneous pair approximation matches simulations very well.
Figure 5.1: Evolution of the fractions of A, B and AB nodes. The three color lines are averages over 50 runs of the NG (without committed agents) on an ER network with N = 500 and < k > = 5. The black solid lines are solved from the ODE above with the same < k >. The black dotted lines are from the ODE under the mean field assumption.
Fig. 5.2 shows the trajectories of the macrostate mapped onto the two dimensional space (pA, pB); the black line is the trajectory predicted by the mean field approximation. We find that when < k > is large enough, say 50, the homogeneous pair approximation is very close to the mean field approximation. When < k > decreases, the trajectory tends to the line pAB = 1 − pA − pB = 0, which means there are fewer nodes with mixed opinions than predicted by the mean field. In this situation, the opinions of neighbors are highly correlated, forming "opinion blocks", and mixed opinion (AB) nodes can only appear on the boundary between an "A opinion block" and a "B opinion block".
Figure 5.2: Trajectories of the NG (no committed agents) solved from the ODE with different < k >, mapped onto the 2D macrostate space. When < k > → ∞, the trajectory tends to that of the mean field equation. When < k > → 1, the trajectory gets close to the line pAB = 1 − pA − pB = 0.
For the reason mentioned in Sec. 4.4, to compare the theoretical prediction with simulation we consider η-consensus and its first passage time Tη, the first time pA or pB reaches η. Fig. 5.3 shows the comparison of Tη (η = 0.95) for different system sizes N and average degrees < k >. According to this figure, when N grows the relative standard deviation of T0.95 (ΔT0.95/T0.95 ≈ Δ ln(T0.95)) decreases, which validates the pair approximation in the thermodynamic limit. Furthermore, when < k > grows, the pair approximation tends to the simple mean field assumption.
Figure 5.3: Comparison of the η-consensus times T0.95 of the NG (no committed agents) between simulation and theoretical prediction for different system sizes N and average degrees < k >. The straight lines come from theoretical analysis under the simple mean field assumption or the pair approximation. The circles and error bars show the means and relative standard deviations of T0.95 from simulations of the dynamics.
5.3 Naming Game with Committed Agents
In this section, we consider the asymmetric case of the NG on large random networks with a fraction p of committed agents (nodes that never change their opinion) holding opinion A. Initially, all the other nodes hold opinion B. The main question considered here is under what conditions it is possible for the committed nodes to persuade the others and achieve a global consensus. Previous studies found that there is a robust critical value of p called the tipping point. Above this value persuasion is possible and takes a short time, while below this value it is nearly impossible, taking a time exponentially long in the system size [41, 42].
5.3.1 Analysis
Similar to the previous section, we derive the new mean field ODE for the macrostate of the NG with committed agents, although the macrostate now contains three more dimensions:

$$\vec{L} = [L_{A\text{-}C}, L_{B\text{-}C}, L_{AB\text{-}C}, L_{A\text{-}A}, L_{A\text{-}B}, L_{A\text{-}AB}, L_{B\text{-}B}, L_{B\text{-}AB}, L_{AB\text{-}AB}]^T,$$

where C denotes the committed A opinion and A itself denotes the non-committed one. Hence we have a nine dimensional ODE of the same form as equation (5.2), but with different D and R, given below:
$$D = \begin{bmatrix}
0 & 0 & \tfrac{3}{4} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -\tfrac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \tfrac{1}{2} & -\tfrac{3}{4} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{3}{4} & 0 & 0 & \tfrac{1}{2} \\
0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \tfrac{1}{2} & -1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \tfrac{3}{4} & \tfrac{1}{2} \\
0 & 0 & 0 & 0 & \tfrac{1}{2} & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & \tfrac{1}{4} & 0 & \tfrac{1}{4} & -1
\end{bmatrix},$$
$$Q_A = \begin{bmatrix}
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 1 & 0 & -1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}, \qquad
Q_B = \begin{bmatrix}
0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 1 & -1 \\
0 & 0 & 0 & 1
\end{bmatrix},$$
$$R = \Big(\vec{0},\;\tfrac{1}{2}Q_B\overrightarrow{P(\cdot|B)},\;-\tfrac{3}{4}Q_A\overrightarrow{P(\cdot|AB)},\;\vec{0},\;\tfrac{1}{2}\big[Q_A\overrightarrow{P(\cdot|A)} + Q_B\overrightarrow{P(\cdot|B)}\big],\;Q_A\big[\tfrac{1}{4}\overrightarrow{P(\cdot|A)} - \tfrac{3}{4}\overrightarrow{P(\cdot|AB)}\big],\;\vec{0},\;Q_B\big[\tfrac{1}{4}\overrightarrow{P(\cdot|B)} - \tfrac{3}{4}\overrightarrow{P(\cdot|AB)}\big],\;-(Q_A + Q_B)\overrightarrow{P(\cdot|AB)}\Big).$$
5.3.2 Numerics
We show the change of the critical tipping fraction with respect to the average degree < k > of the underlying random network in Fig. 5.4. Starting from the state pB = 1 − p, the new ODE system goes to a stable state with pB = p∗B; p∗B is 0 if the committed agents eventually achieve global consensus. The sharp drop of each curve indicates the tipping point transition for the corresponding < k >. Fig. 5.5 shows the normalized consensus time T0.95/N around the tipping point pc for different system sizes. When p > pc, T0.95/N grows logarithmically with N; when p < pc, T0.95/N grows very fast (since it takes too much time, we stop the simulation when T0.95/N exceeds 10^4). Fig. 5.5 confirms that the tipping point found in Fig. 5.4 is consistent with the transition point between the regions of logarithmic and exponential consensus times, and that the transition becomes sharper as the system size grows.
According to Fig. 5.4, the tipping point shifts left when the average degree < k > decreases. This theoretical result confirms and replicates in full, without costly numerical simulations, the observed lowering of the tipping fraction as the average degree of the underlying large random network decreases.
Fig. 5.6 shows the time evolution of η = 1 − q − pA on ER networks with committed agents (q > qc) for different average degrees < k >. It illustrates the relationship between the consensus time Tη and the threshold parameter η. According to Fig. 5.6, suppose there are two random networks G1, G2 with average degrees < k1 > ≥ < k2 >. At first the NG dynamics on
Figure 5.4: Fraction of B nodes at the stable point (p∗B) as a function of the fraction of nodes committed to A (p). The color lines consist of stable points obtained by tracking the ODE of the NG on ER networks for a long enough time. The black line shows the stable points solved from the mean field ODE.
G1 is slower than that on G2, but after achieving a certain consensus fraction η′ (η′ depends on < k1 >, < k2 >) the former becomes faster than the latter. The interpretation of this result is that the sparsity of a social network helps committed agents introduce a new opinion into a social group and influence a significant fraction of people, but hinders them from achieving total consensus quickly.
5.4 Effect of Var(k)
In the above analysis, the random network model is parameterized merely by
the average degree < k >. We do not specify the degree distribution of the network
other than assuming that the spread of the degree distribution is narrow, i.e. that Var(k) is small. This narrow spread assumption is also required to validate assumption 2, so the above analysis is only exact for networks in which every node has the same degree (regular geometry). In this section, we consider the effect of Var(k), while still assuming that Var(k) is small enough to validate assumption 2.

Figure 5.5: Normalized consensus time T0.95/N around the tipping point pc = 0.08205 (vertical dashed line) for < k > = 10. Each data point is an average over 100 runs of NG simulations with committed agents on ER networks. The simulation stops when T0.95/N exceeds 10^4, since consensus is almost never achieved when p < pc.
The effect of Var(k) comes into play when selecting the listeners and speakers. In the LO-NG, if we select nodes in the direct way, i.e. first select the speaker and then select the listener among its neighbors, the average degree of the listener is given by Eq. 5.3:
Figure 5.6: Evolution of η = 1 − q − pA on ER networks with committed fraction q = 0.15 > qc for different average degrees < k >. Dynamics on sparse networks with low < k > is at first faster, and then slower, to consensus compared with that on complete graphs (simple mean field).

$$\langle k'\rangle = \frac{\langle k^2\rangle}{\langle k\rangle} = \langle k\rangle + \frac{Var(k)}{\langle k\rangle}. \qquad (5.3)$$
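Eq. 5.3 is the usual size-biasing of degrees and is easy to check numerically. The following sketch (with hypothetical parameter values) verifies both the algebraic identity and the Poisson special case Var(k) = < k >, for which < k′ > = < k > + 1:

```python
import numpy as np

rng = np.random.default_rng(42)
k = rng.poisson(lam=10.0, size=200_000).astype(float)  # Poisson degrees, <k> ~ 10

kbar  = k.mean()
k2bar = (k ** 2).mean()
vark  = k.var()

k_prime_lhs = k2bar / kbar            # <k'> = <k^2>/<k>
k_prime_rhs = kbar + vark / kbar      # <k>  + Var(k)/<k>

print(k_prime_lhs, k_prime_rhs, kbar + 1.0)
```

The first two numbers agree to machine precision (the identity holds for sample moments as well), and both are close to < k > + 1 because the sample variance of a Poisson distribution matches its mean.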
Here < k′ > is the average degree of a neighbor of a randomly picked node. In the LO-NG, all the related changes are to the edges linked to the listener, so the expected number of related changes is < k′ > − 1 instead of < k > − 1. For networks with regular geometry, Var(k) = 0 and < k′ > = < k >; in this case the three ways of picking nodes (direct, reverse, neutral) are equivalent. For ER networks, the degree k follows a Poisson distribution, hence Var(k) = < k > and
< k′ > = < k > + 1. Thus for the "direct" LO-NG on ER networks, the ODE becomes

$$\frac{d}{dt}\vec{l} = 2\Big[\frac{1}{\langle k\rangle}D + \frac{\langle k'\rangle - 1}{\langle k\rangle}R\Big]\vec{l} = 2\Big[\frac{1}{\langle k\rangle}D + R\Big]\vec{l}, \qquad (5.4)$$
which is slightly different from Eq. 5.2. The “neutral” LO-NG is the same as
the “direct” case.
For the "reverse" LO-NG, the listener is selected first, so its average degree is not affected by Var(k); the ODE model in this case is always Eq. 5.2, for any degree distribution. For other versions of the Naming Game, the effect of Var(k) is similar. Qualitatively, NG dynamics on a sparse network with Var(k) > 0 behaves like that on a sparse network with zero Var(k) and a slightly higher average degree < k >. The "reverse" LO-NG is the only version which avoids this effect.
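The listener-degree bias that separates the "direct" and "reverse" variants can be seen directly on a sampled graph. The sketch below, with hypothetical ER parameters, measures the mean listener degree under both selection rules:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 1500, 0.008                       # ER graph G(n, p), <k> around 12
upper = np.triu(rng.random((n, n)) < p, 1)
adj = upper | upper.T
deg = adj.sum(axis=1)
nodes = np.flatnonzero(deg > 0)

# "direct": pick the speaker uniformly, then the listener among its neighbors
direct = []
for u in rng.choice(nodes, size=20_000):
    nbrs = np.flatnonzero(adj[u])
    direct.append(deg[rng.choice(nbrs)])
mean_direct = float(np.mean(direct))

# "reverse": pick the listener uniformly at random
mean_reverse = float(deg[rng.choice(nodes, size=20_000)].mean())

size_biased = (deg ** 2).mean() / deg.mean()   # <k^2>/<k> on this sample
print(mean_direct, size_biased, mean_reverse, deg.mean())
```

The "direct" rule over-samples high-degree listeners, landing near < k² >/< k > ≈ < k > + 1 for this ER graph, while the "reverse" rule recovers the plain average degree, consistent with the discussion above.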
5.5 Summary
In this chapter, we apply an improved mean field approach to the NG, especially the binary agreement model. Compared with the voter model, there is more than one type of active link (edge) in the NG, so we have to analyze all types of links, both active and inert. As a consequence, instead of the one dimensional averaged nonlinear ODE of the voter model, we have a six dimensional nonlinear coupled system. We derive the equations by analyzing all possible updates in the process and write them in matrix form with the average degree < k > as an explicit parameter. In contrast to the basic mean field theory, this improved ODE approximation clearly shows how the NG dynamics changes when < k > decreases to 1, the critical value for an ER network to have a giant component, and converges to the basic mean field equations in [41] when < k > grows to infinity. Next we show the significantly better agreement between the theoretical predictions of the new mean field theory and the simulations on ER networks. We extend this model to the case with committed agents and obtain a nine dimensional system. We also analyze the
effect of the spread of the degree distribution on different versions of NG.
Using this improved mean field model, we are able to predict and replicate the empirical observations, including the lowering of the critical tipping fraction and the change of the consensus speed on networks of low average degree: in a loosely connected social network, fewer committed agents are needed to force a global consensus, but with the same fraction of committed agents more time is needed to achieve it.
BIBLIOGRAPHY
[1] C. Castellano, S. Fortunato, V. Loreto, Rev. Mod. Phys. 81, 591 (2009).
[2] S. Galam, Int. J. Mod. Phys. C 19, 409 (2008).
[3] S. Galam, Physica A, vol. 274, no. 1-2, p. 132, Dec. 1999.
[4] D. Stauffer, Comput. Phys. Comm., vol. 146, no. 1, pp. 93–98, 2002.
[5] P. L. Krapivsky and S. Redner, Phys. Rev. Lett., vol. 90, no. 23, p. 238701,2003.
[6] V. Sood and S. Redner, Phys. Rev. Lett., vol. 94, p. 178701, 2005.
[7] J. Lorenz, Int. J. Mod. Phys. C 18, 1819 (2007).
[8] X. Pan and J. Yang, in Complex Systems and Complexity Science vol. 6, no.2, pp. 87–92 (2009).
[9] J.M. Epstein and R. Axtell, Growing Artificial Societies (MIT Press, 1996).
[10] D. Challet, M. Marsili, and Y.-C. Zhang, Minority Games: Interacting Agentsin Financial Markets (Oxford University Press, 2005).
[11] M. Anghel, Z. Toroczkai, K.E. Bassler, and G. Korniss, Phys. Rev. Lett. 92,058701 (2004).
[12] P.L. Krapivsky, Phys. Rev. A 45, 1067 (1992).
[13] L. Frachebourg and P.L. Krapivsky, Phys. Rev. E 53, R3009 (1996).
[14] E. Ben-Naim, L. Frachebourg, and P.L. Krapivsky, Phys. Rev. E 53, 3078(1996).
[15] I. Dornic, H. Chate, J. Chave, and H. Hinrichsen, Phys. Rev. Lett. 87, 045701(2001).
[16] A. Baronchelli, M. Felici, E. Caglioti, V. Loreto and L. Steels, J. Stat. Mech.:Theory Exp. P06014 (2006).
[17] Q. Lu, G. Korniss, and B.K. Szymanski, Phys. Rev. E 77, 016111 (2008).
[18] Q. Lu, G. Korniss ,B.K. Szymanski, J. Econ. Interact. Coord. 4, 221 (2009) .
[19] C. Castellano, V. Loreto, A. Barrat, F. Cecconi, and D. Parisi, Phys. Rev. E71, 066107 (2005).
[20] L. Dall’Asta, A. Baronchelli, A. Barrat, and V. Loreto, Phys. Rev. E 74,036105 (2006).
[21] L. Steels, Artificial Life 2, 319 (1995).
[22] L. Dall’Asta and A. Baronchelli, J. Phys. A: Math. Gen. 39, 14851 (2006).
[23] A. Baronchelli, L. Dall’Asta, A. Barrat and V. Loreto, Phys. Rev. E 73,015102 (2006).
[24] A. Baronchelli, V. Loreto, L. Steels, Int. J. Mod. Phys. C 19, 785 (2008).
[25] L. Dall'Asta and T. Galla, J. Phys. A: Math. Gen. 41, 435003 (2008).
[26] F. Vazquez, X. Castello and M. San Miguel J. Stat. Mech. Theory. Exp.P04007 (2010)
[27] F. Slanina and H. Lavicka, Eur. Phys. J. B 35, 279 (2003).
[28] V. Sood, T. Antal, and S. Redner, Phys. Rev. E 77, 041121 (2008).
[29] X. Castelló, A. Baronchelli, and V. Loreto, Eur. Phys. J. B 71, 557 (2009).
[30] T.M. Liggett, Interacting Particle Systems (Springer-Verlag, New York, 1985).
[31] P. Bremaud, in Markov chains: Gibbs fields, Monte Carlo simulation, andqueues (Springer, 1991), pp. 253–311.
[32] A. Baronchelli, Phys. Rev. E 83, 046103 (2011).
[33] K.I. Mazzitello, J. Candia, and V. Dossetti, Int. J. Mod. Phys. 18 1475 (2007).
[34] J. Candia and K.I. Mazzitello, J. Stat. Mech. Theory. Exp. P07007 (2008).
[35] S. Galam, Physica A 381 366, (2007).
[36] J. Xie, S. Sreenivasan, G. Korniss, W. Zhang, and C. Lim, B.K. Szymanski,preprint (2011).
[37] A. Baronchelli, M. Felici, V. Loreto, E. Caglioti and L. Steels: Sharptransition towards shared vocabularies in multi-agent systems. J. Stat. Mech.:Theory Exp. P06014(2006).
[38] A. Baronchelli: Role of feedback and broadcasting in the naming game. Phys.Rev. E 83,046103 (2011).
[39] C. Castellano, S. Fortunato and V. Loreto: Statistical physics of socialdynamics. Reviews of Modern Physics, 81(2):591-646 (2009).
[40] M. Granovetter: Threshold Models of Collective Behavior. American Journal of Sociology 83 (6): 1420–1443 (1978).
[41] J. Xie, S. Sreenivasan, G. Korniss, W. Zhang, C. Lim, and B.K. Szymanski: Social Consensus through the Influence of Committed Minorities. Phys. Rev. E 84, 011130 (2011).
[42] W. Zhang, C. Lim, S. Sreenivasan, J. Xie, B.K. Szymanski, and G. Korniss: Social influencing and associated random walk models: Asymptotic consensus times on the complete graph. Chaos 21, 025115 (2011).
[43] T. Schelling: Micromotives and Macrobehavior. Norton (1978).
[44] D. Kempe, J. Kleinberg, and E. Tardos: Maximizing the Spread of Influence through a Social Network. Proc. 9th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (2003).
[45] P. Clifford and A. Sudbury: A Model for Spatial Conflict. Biometrika 60(3): 581–588 (1973).
[46] F. Bass: A new product growth model for consumer durables. Management Science 15(5): 215–227 (1969).
[47] L. Steels: A self-organizing spatial vocabulary. Artificial Life, 2(3): 319–332 (1995).
[48] L. Steels and A. MacIntyre: Spatially distributed naming games. Advances in Complex Systems, 1: 301–324 (1998).
[49] A. Baronchelli, L. Dall'Asta, A. Barrat, and V. Loreto: Topology Induced Coarsening in Language Games. Physical Review E, 73: 015102 (2005).
[50] S.H. Strogatz: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Da Capo Press (1994).
[51] M. Golubitsky, D.G. Schaeffer, and I. Stewart: Singularities and Groups in Bifurcation Theory, Volume 2. Springer (1988).
[52] W. Zhang, C. Lim, G. Korniss, B. Szymanski, S. Sreenivasan, and J. Xie: Tipping Points of Diehards in Social Consensus on Large Random Networks. Proc. 3rd Workshop on Complex Networks, CompleNet, Melbourne, FL, March 7–9, 2012.
[53] F. Vazquez and V.M. Eguíluz: Analytical solution of the voter model on uncorrelated networks. New Journal of Physics 10, 063011 (2008).
[54] E. Pugliese and C. Castellano: Heterogeneous pair approximation for voter models on networks. Europhys. Lett. 88, 58004 (2009).
[55] F. Chung and L. Lu: The Average Distances in Random Graphs with Given Expected Degrees. Proceedings of the National Academy of Sciences 99, 15879–15882 (2002).
[56] J.J. de Pablo, Annual Review of Physical Chemistry 62: 555–574 (2011).
[57] S.J. Marrink, A.H. de Vries, and A.E. Mark, J. Phys. Chem. B 108, 750–760 (2004).
[58] B. Kozma and A. Barrat, Phys. Rev. E 77, 016102 (2008).
[59] B. Kozma and A. Barrat, Phys. Rev. Lett. 100, 158701 (2008).
[60] R. Albert, H. Jeong, and A.-L. Barabási, Nature 401, 130 (1999).
[61] D. Watts and S. Strogatz, Nature 393, 440 (1998).
[62] K.K. Kaski, J. Nieminen, and J.D. Gunton, Phys. Rev. B 31, 2998 (1985).
[63] S. Kumar, J.D. Gunton, and K.K. Kaski, Phys. Rev. B 35, 8517 (1987).
[64] M.A. Nowak and N.L. Komarova, Science 291, 114 (2001).
[65] C. Roland and M. Grant, Phys. Rev. B 41, 4663 (1990).
[66] P. Erdős and A. Rényi, Publications of the Mathematical Institute of the Hungarian Academy of Sciences 5: 17–61 (1960).
[67] R. Axelrod, J. Conflict Resolut. 41(2), 203 (1997).