The Mathematical Analysis of Biological Aggregation and Dispersal:
Progress, Problems and Perspectives

Hans G. Othmer∗ Chuan Xue†

December 9, 2011

Contents
1 Introduction
2 An overview of population-level descriptions
   2.1 A summary of the levels of description
   2.2 The Fokker-Planck and Smoluchowski equations
   2.3 Interacting particles, Liouville’s equation and reduced descriptions
3 Simple and reinforced random walks in space
   3.1 The Pearson random walk
   3.2 The general evolution equation for space-jump or kangaroo processes
   3.3 The evolution of spatial moments for general kernels
   3.4 The effects of long waits or large jumps
   3.5 Biased jumps dependent on gradients or internal dynamics
   3.6 Aggregation in reinforced random walks
4 Velocity jump processes and taxis equations
   4.1 The general velocity-jump process
   4.2 The telegraph process
   4.3 Reduction of the VJ process to a diffusion process
   4.4 The role of internal dynamics
   4.5 Macroscopic descriptions of eukaryotic cell movement
5 Discussion
∗School of Mathematics and Digital Technology Center, University of Minnesota, Minneapolis, MN 55455, USA, email: [email protected]
†Mathematical Biosciences Institute, Ohio State University, Columbus, OH 43210, USA; e-mail: [email protected]
1 Introduction

The central topic of this chapter is the process of aggregation of biological organisms, which occurs in systems that range in scale from single-celled organisms such as the bacterium E. coli, to flocks of birds, schools of fish, and herds of ungulates. Aggregation is a broad term, which we use to mean a self-induced spatial localization of motile individuals that results from direct or indirect communication between them and produces a local density of individuals higher than would be observed under random motion. Depending on the organisms involved, more specific terms may be used: swarming in insects, flocking in birds, schooling in fishes and herding in mammals – but all refer to the same underlying process. In some aggregates there is large-scale organization, such as alignment in fish schools, which undoubtedly involves at least nearest-neighbor interactions, whereas in other aggregates, such as the bacterial aggregates discussed later, there is no coherence to the motion even though there may be indirect interaction between individuals via the external medium.

Whatever the scale or type of aggregation, locomotion – which we define to be self-induced movement that results from active forces generated by the individual – is an essential process in aggregation, but of course it also plays a role in numerous other contexts, including searching for food, mates or shelter. For example, cell locomotion, either individually or collectively as tissues, is essential for early development, angiogenesis, tissue regeneration, the immune response, and wound healing in multicellular organisms, and plays a very deleterious role in cancer metastasis in humans. Directed locomotion, as opposed to random wandering, usually involves several steps: (i) the detection and transduction of external signals, be they visual, chemical, mechanical, or of other types, (ii) integration of the signals into an internal signal, (iii) the control of the internal neural, biochemical and mechanical responses that lead to force generation and directed movement, and (iv) perhaps relay of the signal. A schematic of the sub-processes involved is shown in Fig. 1.
Figure 1: The general steps involved in generating the response to an external signal: detection of the external signal, signal transduction across the membrane, internal signal propagation, and the internal response.
A detailed description of locomotion of higher organisms such as birds or fishes is extremely complex, and simpler descriptions are used for understanding aggregation. A starting point is to treat individuals as points and attempt to understand the collective behavior of an aggregate based on postulated interactions between individuals or between individuals and an external field, either imposed or generated by the population. In this framework the problem is mathematically similar to the study of interacting molecular species, and techniques established in that context can be carried over to biological problems. Because single cells are the simplest systems capable of self-locomotion, the description of cellular motion can be more complete in models of aggregation, but the principles that emerge from the analysis of cellular motion apply at higher levels as well. Thus several concrete examples of cell-level aggregation will be described in detail later.
Many single-celled organisms use flagella or cilia to swim, and the best-studied example of this is E. coli. As we show later, much can be learned about ‘run-and-tumble’ organisms such as E. coli without a detailed description of the mechanical forces, but in eukaryotes forces play a more central role. There are two basic modes of movement used by eukaryotic cells that lack cilia or flagella – mesenchymal and amoeboid (10). The former, which can be characterized as ‘crawling’ in fibroblasts or ‘gliding’ in keratocytes, involves the extension of finger-like filopodia or pseudopodia and/or broad flat lamellipodia, whose protrusion is driven by actin polymerization at the leading edge. This mode dominates in cells such as fibroblasts when moving on a 2D substrate. In the amoeboid mode, which does not rely on strong adhesion, cells are more rounded and employ shape changes to move – in effect ‘jostling through the crowd’ or ‘swimming’. Recent experiments have shown that numerous eukaryotic cell types display enormous plasticity in locomotion, in that they sense the mechanical properties of their environment and adjust the balance between the modes accordingly by altering the balance between parallel signal transduction pathways (85). Thus pure crawling and pure swimming are the extremes on a continuum of locomotion strategies for eukaryotic cells, but many cells can sense their environment and use the most efficient strategy in a given context. Significant progress has been made in going beyond the point-particle description in such systems (cf. (90) and references therein).
Some basic questions that arise in studying aggregation, either from the experimental or the mathematical standpoint, are as follows.

• At what level of detail must individuals be described to explain the observed phenomena?

• What is the coarsest or highest-level description of the forces involved that suffices?

• What is the nature of the signal that is used to initiate aggregation? Is the signal externally imposed, as, for example, when bacteria move up the gradient of a desirable substance, or is it relayed from individual to individual, and what is the range of the signal?

• What determines the size of an aggregate and how does it depend on the nature and range of the signal?

• When aggregates move coherently, by which we mean they locally adjust their speed and direction to those of their neighbors, the latter perhaps weighted in decreasing importance with distance, what is the time scale on which coherence is achieved beginning from an incoherent state, and how do the type of signal and its range affect this time?
There is a huge literature on the subject of aggregation, orientation and alignment, and other chapters in this volume will cover other aspects (see the chapters by Hillen and Painter and by Franz and Erban). Recent papers that discuss some of the topics treated herein are given in (21; 6; 14; 50; 67; 95). Classic texts related to the topics herein include (69; 9). We have two main objectives here: (i) to summarize some of the recent work on the derivation of macroscopic equations such as the Patlak-Keller-Segel chemotaxis equations from individual-based descriptions, and (ii) to illustrate the use of the resulting macroscopic equations in cellular aggregation.
The classical taxis problem began with phenomenological equations in which a biased drift term was added to a diffusion equation to describe the movement of individuals in response to an imposed or self-generated signal (52), although a more fundamental approach along the lines described later was initiated earlier by Patlak (80), and the resulting taxis equation is called the PKS equation. To describe it more precisely, let Ω ⊂ R^n be a compact domain with smooth boundary, let n be the ‘particle’ density, and let S be the ‘signal’ density. The first of the following pair is the PKS equation, and the second describes the self-generated signal field, when applicable.

   n_t = ∇ · (∇n − n∇Φ(S)) = ∇ · (∇n − nχ(S)∇S)    (1)
   S_t = D∆S + f(n, S)    (2)

The first rigorous derivation of the coupled equations beginning with an interacting particle system is due to Stevens (89). A review of the major developments from 1970 to about 2003 can be found in (45), and a ‘users guide’ to these and other taxis equations can be found in (42). The quantity χ ≡ Φ_S(n, S, x, . . . ) is called the chemotactic sensitivity, and u_c ≡ χ(S)∇S is called the chemotactic velocity. The fundamental problem we address is how knowledge of the internal dynamics governing signal transduction and response is reflected in these quantities. We develop the machinery for addressing this and describe some success for simple organisms such as E. coli, and partial success for eukaryotic cells.
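Equations (1)-(2) are easy to explore numerically. The following is a minimal one-dimensional finite-difference sketch, assuming a constant sensitivity χ and the illustrative signal kinetics f(n, S) = n − S; the function name and all parameter values are ours, not from the text. Writing the n-equation in flux form makes the scheme conserve total particle mass exactly, mirroring the no-flux boundary conditions usually imposed on (1).

```python
import numpy as np

# Hedged sketch of the 1D PKS system (1)-(2) with explicit Euler stepping:
#   n_t = (n_x - chi * n * S_x)_x,   S_t = D * S_xx + n - S
# on a no-flux interval.  chi constant and f(n,S) = n - S are illustrative
# assumptions, not choices made in the text.
def pks_step(n, S, dx, dt, chi=1.0, D=1.0):
    Sx = (S[1:] - S[:-1]) / dx                 # S_x at cell interfaces
    nx = (n[1:] - n[:-1]) / dx                 # n_x at cell interfaces
    n_mid = 0.5 * (n[1:] + n[:-1])             # n at cell interfaces
    flux = nx - chi * n_mid * Sx               # total flux of n
    dn = np.zeros_like(n)
    dn[1:-1] = (flux[1:] - flux[:-1]) / dx     # interior divergence
    dn[0] = flux[0] / dx                       # zero flux through left wall
    dn[-1] = -flux[-1] / dx                    # zero flux through right wall
    lap = np.zeros_like(S)
    lap[1:-1] = (S[2:] - 2.0 * S[1:-1] + S[:-2]) / dx**2
    lap[0] = 2.0 * (S[1] - S[0]) / dx**2       # reflecting boundaries for S
    lap[-1] = 2.0 * (S[-2] - S[-1]) / dx**2
    return n + dt * dn, S + dt * (D * lap + n - S)
```

Because the update of n is a discrete divergence of interface fluxes, the total mass ∫ n dx is invariant step by step, which is a useful sanity check on any discretization of (1).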
2 An overview of population-level descriptions

2.1 A summary of the levels of description
We begin by summarizing classical approaches to the transition from equations of motion for individuals to population-level distribution functions. The material in this section is standard and widely discussed, but it is useful to remind the reader of some of the underlying assumptions. To understand the broad picture before delving into the details, we regard the particles or individuals as structureless, but we admit the possibility that they can exert forces, and allow for external forces as well. We first consider point particles, and thus describe their motion by Newton’s law. For later purposes we include an evolution equation for the internal state of the particles, but at present we do not include coupling of the latter to the movement. The general case of forcing on both position and velocity leads to the differential equations

   dx_i = v_i dt + dX_i    (3)
   m_i dv_i = F_i dt + dV_i    (4)
   dy/dt = G(x, v, y, t).    (5)
Here (x, v) ∈ R^n × R^n, n = 1, 2, 3, are the positions and velocities, and y ∈ R^s characterizes the internal state. If the imposed forces X and V are deterministic they can be written as dX_i = X_i dt, and similarly for dV, and equations (3) and (4) are the standard Newton equations for particles. When X and V are random forces these are stochastic differential equations, the integral forms of which are interpreted in the Ito sense (4; 13).
The two major types of random forcing processes that are widely used are Brownian motion and compound Poisson processes. Both Brownian motion and the Poisson process are examples of a more general class of random processes called Lévy processes (86; 2), which are stochastic processes that have independent, stationary increments, are stochastically continuous, i. e., for any ε > 0, Pr{|X_{t+s} − X_s| > ε} → 0 as t → 0, and have sample paths that are right-continuous with left limits. Brownian motion and Poisson processes differ in that the former has continuous sample paths whereas Poisson processes have discontinuities at the jumps. Lévy processes with fat-tailed distributions will arise in Section 3.4 in the context of anomalous diffusion.

The formal differentials that appear in (3) and (4) are assumed to be white noise, which is a wide-sense stationary random process in which the component functions dX_i have zero mean and are uncorrelated, i. e.

   ⟨dX_i(t)⟩ = 0    (6)
   ⟨dX_i(t_1) dX_i(t_2)⟩ = σ² δ(t_1 − t_2).    (7)

Gaussian white noise is the generalized derivative of a single-variable Wiener process, i. e., of Brownian motion (4; 36).
As used here, a Poisson forcing function is a compound Poisson process, which can be thought of as a train of jumps distributed in time according to a Poisson law. Thus

   X_i(t) = Σ_{k=1}^{N(t)} Y_k H(t − t_k),    (8)

where the amplitudes Y_k are independent random variables, H is the step function, and N(t) is a homogeneous Poisson counting process with parameter λ that counts the number of jumps in [0, t], assuming that N(0) = 0 with certainty. A generalization of this allows coupling between the amplitudes of the impulses and their temporal occurrence, and can be defined by a random measure M(dt, dY) that gives the number of jumps in ((t, t + dt) × (y, y + dy)). The derivative of the forcing, which is called Poisson white noise, is thus a train of impulses that arrive at the jump times of the underlying Poisson process:

   dX_i(t) = Σ_{k=1}^{N(t)} Y_k δ(t − t_k).    (9)

Later we allow the Poisson parameter to depend on external fields or on the internal state of individuals.
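A sample path of (8) can be generated by drawing N(T) from a Poisson law and placing the jump times uniformly on [0, T], which is equivalent to simulating the arrivals sequentially. The standard-normal amplitudes Y_k below are an illustrative choice of ours; any i.i.d. amplitude distribution works, and then E[X(t)] = λt E[Y] and Var X(t) = λt E[Y²].

```python
import numpy as np

# Sketch of a compound Poisson path (8): N(T) ~ Poisson(lam * T), jump
# times uniform on [0, T] given N(T), i.i.d. amplitudes Y_k (standard
# normal here, purely as an illustration).  Returns X(t) at the times in
# t_eval.
def compound_poisson(t_eval, lam=2.0, T=10.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n_jumps = rng.poisson(lam * T)                      # N(T)
    t_k = np.sort(rng.uniform(0.0, T, size=n_jumps))    # jump times t_k
    y_k = rng.normal(size=n_jumps)                      # amplitudes Y_k
    # X(t) = sum of the amplitudes of all jumps that occurred by time t
    return np.array([y_k[t_k <= t].sum() for t in t_eval])
```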
The simplest problem arises when there are no inter-particle interactions, and the forces stem from interactions with the environment. One example is the original Einstein model of a heavy particle in a bath that receives Gaussian-distributed momentum impulses from the surrounding bath (27). In Einstein’s formulation this leads to the diffusion equation for the position of the particle, and the probability of finding a walker at x ∈ R, having started at the origin at t = 0, is

   P(x, t) = (1/√(4πDt)) e^{−x²/(4Dt)},    (10)

for (x, t) ∈ R × R⁺. In the next section we discuss descriptions that account for both velocity and position.
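A quick Monte Carlo check of (10): the second moment of simulated Brownian paths should match the variance 2Dt of the Gaussian density. The parameter values in this sketch are arbitrary.

```python
import numpy as np

# Monte Carlo estimate of <x^2> for Brownian paths started at the origin;
# by (10) this should approach 2 * D * t.  D, t and the discretization are
# illustrative choices.
def brownian_msd(D=0.5, t=2.0, n_paths=50000, n_steps=100, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    dt = t / n_steps
    # each increment is Gaussian with variance 2 * D * dt
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_paths, n_steps))
    x = steps.sum(axis=1)              # walker positions at time t
    return (x ** 2).mean()
```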
When there are impulsive forces on the position in (3), rather than Gaussian forces, we obtain the familiar random walk, in which there are instantaneous changes in position at random times. These are called space-jump processes (72), and later we show that the probability density for such a process satisfies the renewal equation

   P(x, t|0) = Φ̄(t)δ(x) + ∫₀ᵗ ∫_{R^n} φ(t − τ) T(x, y) P(y, τ|0) dy dτ.    (11)

Here P(x, t|0) is the conditional probability that a walker who begins at the origin at time zero is in the interval (x, x + dx) at time t, φ(t) is the density for the waiting time distribution, Φ̄(t) is the complementary cumulative distribution function associated with φ(t), and T(x, y) is the redistribution kernel for the jump process. In Section 3.2 we show that this also leads to diffusion equations in certain limits, which reflects the fact that under mild conditions on the distribution of jump sizes the compound Poisson process approaches Brownian motion in the limit λ → ∞.
If we admit impulsive forces on the velocity in (4) then we arrive at the second major type of jump-driven movement, which is called a velocity-jump process (72). As described in detail later, the motion consists of a sequence of “runs” separated by re-orientations, during which a new velocity is chosen instantaneously. If we assume that the velocity changes are the result of a Poisson process of intensity λ, then in the absence of other forces we show later that we obtain the evolution equation

   ∂p/∂t + ∇_x · vp + ∇_v · Fp = −λp + λ ∫ T(v, v′) p(x, v′, t) dv′.    (12)

A similar equation to describe the random movement of bacteria was first derived by Stroock (91).
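The velocity-jump process behind (12) is easy to simulate in one space dimension. In the following sketch the speed is fixed at s and each Poisson reorientation picks a new direction uniformly from ±1, a 1D stand-in for the kernel T(v, v′); for this choice the velocity autocorrelation decays as s² e^{−λt}, so for large t the walk is diffusive with D = s²/λ, anticipating the diffusion limits discussed in Section 4. All parameter values are illustrative.

```python
import numpy as np

# 1D "run-and-tumble" velocity-jump sketch: runs at speed +-s, Poisson
# reorientations at rate lam, new direction uniform on {-1, +1} (the 1D
# analogue of T(v, v')).  Returns the walker position at t_final.
def velocity_jump(t_final, s=1.0, lam=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    v = s * rng.choice([-1.0, 1.0])
    while True:
        run = rng.exponential(1.0 / lam)          # exponential run duration
        if t + run >= t_final:
            return x + v * (t_final - t)          # finish the last run
        x += v * run
        t += run
        v = s * rng.choice([-1.0, 1.0])           # instantaneous reorientation
```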
2.2 The Fokker-Planck and Smoluchowski equations
A generalization of the Einstein description of Brownian motion involves both velocity-dependent interaction of the particle with a fluid environment, and diffusion in velocity space. This is based on (3) and (4), in which we assume that the forcing on position is zero, the random forcing on velocity is Gaussian white noise, and we allow velocity-dependent frictional forces. In the standard notation of statistical physics, we write

   dx = v dt    (13)
   m dv = −mζv dt + F dt + √(2ζm k_B T) dW(t)    (14)

where ζ is the friction coefficient, k_B is Boltzmann’s constant and T is the temperature. This description is predicated on the assumption that the fluid particles are much lighter than the Brownian particle, and as a result, that the fluid motion relaxes on a much shorter time scale than the motion of the particle. Thus the hydrodynamic forces appear both via the deterministic friction force and via the random forces, which are assumed to be Gaussian. If the assumption on the relaxation time of the fluid variables is not applicable, the process is no longer Markovian, and a non-Markovian generalization of (14) has been derived (11).
The stochastic differential equations are equivalent, under the Gaussian assumption, to a partial differential equation for the conditional probability density p(x, v, t|x′, v′, t′), namely

   ∂p/∂t + ∇_x · vp + ∇_v · ((−ζv + F/m) p) = (ζ k_B T/m) ∇_v · ∇_v p.    (15)

This is commonly called the Fokker-Planck-Kramers-Klein equation (100), or simply the Fokker-Planck equation, although the latter name is used for a much broader class of equations (18; 93; 36). This is a mixed-type equation that describes drift-diffusion in the velocity component and drift in x due to the external force. If the latter vanishes it reduces to pure drift-diffusion in velocity space. The equation has also been formally generalized to describe the motion of multiple Brownian particles by incorporating an integral operator on the right-hand side to account for particle-particle interactions (63).
If the friction coefficient ζ is large, one may intuitively expect that the velocity relaxes on a time scale O(ζ⁻¹), and then (14) reduces to an algebraic equation that can be used to replace the velocity in (13). The result is the Smoluchowski equation

   ∂n/∂t = D ∇_x · (∇_x n − (1/(k_B T)) F n)    (16)

for the number density n(x, t) = ∫ p dv, where the diffusion coefficient is defined by the Einstein relation D = k_B T/ζ. Clearly this has the form of the PKS equation (1) for a suitable choice of the force. However, the reduction as described is formal, since the full equation is a singularly-perturbed hyperbolic equation, and (16) only describes the outer solution (100). Smoluchowski equations have been widely used in studies of aggregation, but their limitations are frequently not appreciated. Similar issues arise in the diffusion approximation of velocity-jump processes described in Section 4, and we will return to them there.
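One aspect of the reduction can be probed numerically: integrating the full Langevin pair (13)-(14) in a harmonic potential F = −kx, the stationary position variance equals the Boltzmann value k_B T/k regardless of ζ, which is why the Smoluchowski equation (16) reproduces the correct equilibrium even though the reduction to it is formal. The scheme and all parameter values below are illustrative choices of ours.

```python
import numpy as np

# Semi-implicit Euler integration of (13)-(14) with F = -k * x for an
# ensemble of independent particles; returns the ensemble variance of x
# at the final time, to compare with the Boltzmann value kBT / k.
# All parameters are illustrative.
def langevin_var(k=1.0, zeta=10.0, kBT=1.0, m=1.0, dt=5e-3,
                 n_steps=20000, n_particles=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n_particles)
    v = np.zeros(n_particles)
    amp = np.sqrt(2.0 * zeta * kBT / m * dt)     # noise per step, from (14)
    for _ in range(n_steps):
        v += (-zeta * v - (k / m) * x) * dt + amp * rng.normal(size=n_particles)
        x += v * dt
    return x.var()
```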
2.3 Interacting particles, Liouville’s equation and reduced descriptions
Next we suppose that there are no external forces – only inter-particle forces. Newton’s second law for the system reads

   dx/dt = v
   M dv/dt = F(x)    (17)

where x = (x_1, x_2, · · · , x_N) is the vector of positions, v = (v_1, · · · , v_N), F = (F_1, · · · , F_N), and M is the diagonal matrix with M_ii = m_i. Note that we assume here that F_i does not depend on the velocity of any particle, nor does it depend explicitly on time. Velocity-dependence introduces dissipation and substantially changes the BBGKY hierarchy developed later. Thus there is no built-in friction-like force such as arises when an individual interacts with the background environment, nor is there a force for alignment, with the result that it may be difficult to obtain alignment of individuals for such models. This is in contrast to the force

   F_i(r_ij) = Σ_j φ(|r_ij|)(v_i − v_j)    (18)

used in the Cucker-Smale model (22), where φ(s) is a monotone decreasing function. We assume hereafter that the force between i and j depends only on their separation, i. e.,

   F_i(x_j) = F_i(r_ij) ≡ F(r_i1, . . . , r_i,i−1, r_i,i+1, . . . , r_iN),

where r_ij ≡ |x_i − x_j|. Furthermore, we assume that the particles are identical, and that the forces are conservative. Then there is a potential Φ such that

   F_i(r_ij) = −∇_{x_i} Φ,

and we assume that Φ can be written as the sum of pairwise interaction potentials

   Φ = Σ_{i<j}^{N} ϕ(r_ij).
While (17) can be solved locally in time for any N (in fact global solutions exist when Φ is the Newtonian potential and N ≥ 2, as long as there are no collisions (97)), a less detailed description of the system for large N can be gotten by finding the joint probability P_N(x_1, x_2, . . . , x_N, v_1, . . . , v_N, t) = P_N(x, v, t) that particle i has position x_i and velocity v_i. Denote the solution of (17) subject to x(0) = x_o, v(0) = v_o as (x, v) = (χ(x_o, v_o, t), V(x_o, v_o, t)), which defines a unique curve in the 6N-dimensional phase space for suitable F_i, and implies that

   P_N(x, v, t) = Π_{i=1}^{N} δ(x_i − χ_i(x_o, v_o, t)) · Π_{i=1}^{N} δ(v_i − V_i(x_o, v_o, t)).

Thus if we specify an initial condition with certainty then the probability distribution at any later time is concentrated at one point. Now suppose that we run the ‘experiment’ many times or that we consider a large number of copies of the system. Given a distribution of initial conditions, P_N is no longer concentrated at a point or a finite number of points, but since there is no dissipation, the evolution of P_N follows from the Reynolds transport theorem (3). Thus the N-particle distribution function evolves according to

   ∂P_N/∂t + v · ∇_x P_N + (F/m) · ∇_v P_N = 0,    (19)
which is called Liouville’s equation. This is formally equivalent to Newton’s equations and thus equally intractable for large numbers of particles, but one can derive equations for reduced or marginal distribution functions, defined as

   P_l(x_1, . . . , x_l, v_1, . . . , v_l, t) ≡ ∫ P_N(x, v, t) dx_{l+1} . . . dx_N dv_{l+1} . . . dv_N.

Liouville’s equation can be written

   ∂P_N/∂t + Σ_{k=1}^{N} v_k · ∇_{x_k} P_N − Σ_{i<j} (1/m) [ (∂ϕ(r_ij)/∂x_i) · ∂/∂v_i + (∂ϕ(r_ij)/∂x_j) · ∂/∂v_j ] P_N = 0    (20)

and more compactly as

   ∂P_N/∂t + L_N P_N = 0.    (21)
By integrating over N − l particles one obtains the evolution equation for the l-particle distribution function (17)

   ∂P_l/∂t + L_l P_l = − Σ_{i=1}^{l} ∂/∂v_i · ∫ (F_{i,l+1}/m) P_{l+1}(x_1, . . . , x_{l+1}, v_1, . . . , v_{l+1}, t) dx_{l+1} dv_{l+1}    (22)

where

   F_ij ≡ −∂ϕ(r_ij)/∂x_i

is the force between particles i and j. These equations for l = 1, · · · , N are called the BBGKY hierarchy. Clearly the system is not closed unless l = N, because for any l < N one must know P_{l+1} to solve (22). Thus we seem once again to have come full circle; the only self-contained equation is Liouville’s equation and it is equivalent to Newton’s equations.
Of particular use in this context are the evolution equations for the one- and two-particle number density functions:

   ∂P_1/∂t + L_1 P_1 = − ∂/∂v_1 · ∫ (F_{1,2}/m) P_2(x_1, x_2, v_1, v_2, t) dx_2 dv_2,    (23)

   ∂P_2/∂t + L_2 P_2 = − ∫ [ ∂/∂v_1 · (F_{1,3}/m) + ∂/∂v_2 · (F_{2,3}/m) ] P_3(· · · ) dx_3 dv_3.    (24)
As noted previously, when there are no collisions Liouville’s equation has a smooth global solution for suitable potentials – it is the collisions that lead to Boltzmann’s equation. When only binary interactions are involved, i.e., in the dilute limit, the two-particle distribution function factors and the equation for the single-particle distribution reduces to Boltzmann’s equation (12). Convergence of solutions of the BBGKY hierarchy to a smooth solution of a kinetic equation for a single-particle distribution function is still an unresolved problem for general particle-particle interactions. A very accessible discussion of this, and in particular of the Boltzmann-Grad continuum limit N → ∞, σ² → 0, with Nσ² constant, is given in Cercignani et al. (16). More complete treatments of mathematical techniques for kinetic equations are given in (17; 16; 60; 83). Application of the BBGKY hierarchy to derive reduced descriptions for flocking problems is widespread (38; 15), but the use of idealized kinetic models frequently fails to capture some essential characteristics of animal movement (47).
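Since (17) with conservative pairwise forces underlies the Liouville and BBGKY development above, a direct simulation is a useful sanity check: with no dissipation the total energy is an invariant, which is exactly what makes the Reynolds-transport argument for (19) work. The soft-core potential ϕ(r) = 1/(r² + a²) and all parameter values are illustrative choices of ours; the integrator is standard velocity Verlet.

```python
import numpy as np

# Conservative N-particle system (17) with the illustrative pairwise
# potential phi(r) = 1/(r^2 + a^2); m_i = 1 throughout.
def pair_forces(x, a=0.5):
    # F_i = -grad_{x_i} sum_{i<j} phi(r_ij)
    diff = x[:, None, :] - x[None, :, :]          # x_i - x_j
    r2 = (diff ** 2).sum(axis=-1)
    w = 2.0 / (r2 + a ** 2) ** 2                  # -phi'(r)/r for this phi
    w[np.diag_indices(len(x))] = 0.0              # no self-interaction
    return (w[..., None] * diff).sum(axis=1)

def total_energy(x, v, a=0.5):
    diff = x[:, None, :] - x[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    iu = np.triu_indices(len(x), 1)               # pairs with i < j
    return 0.5 * (v ** 2).sum() + (1.0 / (r2[iu] + a ** 2)).sum()

def verlet(x, v, dt, n_steps, a=0.5):
    # velocity-Verlet: second order and energy-conserving in practice
    F = pair_forces(x, a)
    for _ in range(n_steps):
        v = v + 0.5 * dt * F
        x = x + dt * v
        F = pair_forces(x, a)
        v = v + 0.5 * dt * F
    return x, v
```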
3 Simple and reinforced random walks in space

3.1 The Pearson random walk
Consider a random jump process on R^n in which the walker executes a sequence of jumps of negligible duration, driven by a Poisson forcing X. This is called a random walk, and the earliest analysis of these processes apparently dates to Bachelier (5) around 1900, in the context of his analysis of financial time series. However the term ‘random walk’ was apparently coined by Pearson (81), who proposed the following problem.

A man starts from the point O and walks ℓ yards in a straight line; he then turns through any angle whatever and walks another ℓ yards in a second straight line. He repeats this process n times. I require the probability that after n stretches he is at a distance between r and r + δr from his starting point O.
Figure 2: Three steps in a Pearson walk of fixed step-length.
The solution to this problem had previously been obtained by Rayleigh (84) in a study of the superposition of sound waves. Later we will see that this walk fits into a more general framework that incorporates a waiting time distribution and a jump size distribution, but for now we treat the simple 2D walk shown in Figure 2. Let P_n(r) be the probability that a walker who begins at the origin is in the interval (r, r + dr) at the n-th step, and T(ρ) be the probability of taking a step of length |ρ| in the direction ρ/|ρ|. If the steps are uncorrelated then P_n(r) satisfies the renewal equation

   P_{n+1}(r) = ∫_{R²} T(ρ) P_n(r − ρ) dρ.    (25)

In the Pearson walk the angular distribution is uniform on the circle of radius ℓ and thus

   T(ρ) = δ(|ρ| − ℓ)/(2πℓ),

and for this kernel the probability at the (n+1)-st step is simply the average of the probabilities at the previous step over the circle of radius ℓ centered at r. The solution of (25) is

   P_n(r) = (1/2π) ∫₀^∞ J_0^n(kℓ) J_0(kr) k dk,    (26)

where r = |r| (55; 7), and in the limit n → ∞ this reduces to

   P_n(r) ∼ (1/(nπℓ²)) e^{−r²/(nℓ²)}.    (27)
The result sought by Pearson is just 2πr times this, i. e.,

   P_n(r) ∼ (2r/(nℓ²)) e^{−r²/(nℓ²)},

which is Rayleigh’s result. In an historical coincidence, Einstein’s seminal paper (27) on Brownian motion also appeared in 1905, and the parallel between (10) and (27) for the discrete Pearson walk is evident. An isotropic diffusion equation is also derived from the Pearson walk in the chapter by Hillen and Painter, using different notation.

A variation of the 2D Pearson-Rayleigh random walk in which the steps are random vectors of exponentially-distributed length and uniform orientation was considered in (33). It is shown there that imposing a constraint of fixed total length on a walk leads to a number of interesting results. For instance, after exactly three steps the probability distribution is uniform on the disc of radius ℓ, while for fewer steps the distribution is concentrated near the boundary and for more it is concentrated near the origin.
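Pearson's problem is also a two-line Monte Carlo experiment, and because the headings are independent and uniform, ⟨r²⟩ = nℓ² holds exactly for every n, not just in the asymptotic regime of (27). The function name and sample sizes below are ours.

```python
import numpy as np

# Monte Carlo version of the Pearson walk: n steps of fixed length ell
# with i.i.d. uniform headings; returns the mean-squared displacement,
# which equals n * ell^2 exactly since cross terms average to zero.
def pearson_msd(n_steps, ell=1.0, n_walkers=50000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_steps))
    x = ell * np.cos(theta).sum(axis=1)
    y = ell * np.sin(theta).sum(axis=1)
    return (x ** 2 + y ** 2).mean()
```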
3.2 The general evolution equation for space-jump or kangaroo processes
We generalize the simple random walk as follows. Suppose that the waiting times between successive jumps are independent and identically distributed. Let T be the waiting time between jumps and let φ(t) be the probability density function (PDF) for the waiting time distribution (WTD). If a jump has occurred at t = 0 then

   φ(t) dt = Pr{t < T ≤ t + dt}.

The cumulative distribution function for the waiting times is Φ(t) = ∫₀ᵗ φ(s) ds = Pr{T ≤ t} and the complementary cumulative distribution function is Φ̄(t) = Pr{T > t} = 1 − Φ(t). If the waiting times are exponentially distributed then Φ(t) = 1 − e^{−λt} and φ(t) = λe^{−λt}, and this is the only smooth distribution for which the jump process is Markovian ((31), p. 458).
In general the jumps in space may depend on the waiting time, and conversely, the WTD may depend on the size of the preceding jump, but to make the analysis tractable we assume that the spatial redistribution that occurs at jumps is independent of the WTD. Let T(x, y) be the PDF for a jump from y to x, i. e., given that a jump occurs at T_i,

   T(x, y) dx = Pr{x ≤ X(T_i⁺) ≤ x + dx | X(T_i⁻) = y},    (28)

where the superscripts ± denote limits from the right and left, respectively. If the underlying medium is spatially non-homogeneous and anisotropic, the transition probability depends on x and y separately, while in a homogeneous medium T(x, y) = T(x − y), where T is the unconditional probability of a jump of length |x − y|. In either case, T is a probability kernel if and only if ∫_{R^n} T(x, y) dx = 1. We further assume that T is a smooth function and that for any fixed y the first two x-moments of T are finite, though they depend on y unless the system is spatially homogeneous. Later we comment on the effect of infinite moments.
Let P(x, t|0) dx be the probability that a jumper which begins at the origin at t = 0 is in the interval (x, x + dx) at time t. It was shown in (72) that P(x, t|0) satisfies the renewal equation

   P(x, t|0) = Φ̄(t)δ(x) + ∫₀ᵗ ∫_{R^n} φ(t − τ) T(x, y) P(y, τ|0) dy dτ.    (29)

Many of the standard jump processes can be recovered from this general result by particular choices of φ and T. For instance, if φ(t) = δ(t − t₀) then Φ̄(t) = H(t₀ − t), where H(·) is the Heaviside function, and (29) reduces to

   P(x, t|0) = H(t₀ − t)δ(x) + [1 − H(t₀ − t)] ∫_{R^n} T(x, y) P(y, t − t₀|0) dy.

This is the governing equation for a discrete-time, continuous-space process in which jumps occur at intervals of t₀. If in addition the support of T is concentrated on the points of a lattice Z^n ⊂ R^n, then

   P(x_i, t|0) = H(t₀ − t)δ_{i0} + [1 − H(t₀ − t)] Σ_j T_ij P(x_j, t − t₀|0),

where δ_{i0} is the Kronecker delta and x_i is a lattice point. This can be written in the more conventional Chapman-Kolmogorov form

   P_{i0}(n + 1) = Σ_j T_ij P_{j0}(n),    n ≥ 1.

If the WTD is exponential, one obtains the continuous-time random walk

   ∂P/∂t (x, t|0) = −λP(x, t|0) + λ ∫_{R^n} T(x, y) P(y, t|0) dy,    (30)
and if in addition the support of the kernel T(x, y) is a lattice, then

   ∂P/∂t (x_i, t|0) = −λP(x_i, t|0) + λ Σ_j T_ij P(x_j, t|0).    (31)

One can cast the latter into the form of a master equation for a countable-state Markov process by applying the condition on T that guarantees conservation of walkers, to obtain

   ∂P/∂t (x_i, t|0) = −λ Σ_j T_ji P(x_i, t|0) + λ Σ_j T_ij P(x_j, t|0).    (32)

A generalization of this that allows for non-exponential WTDs takes the form

   ∂P/∂t (x_i, t|0) = ∫₀ᵗ Ψ(t − τ) [ −Σ_j T_ji P(x_i, τ|0) + Σ_j T_ij P(x_j, τ|0) ] dτ,    (33)

and of course one can couple the jump probabilities with the WTD (53).

There is a large literature on the various special cases. For instance, the continuous-time random walk (CTRW) dates at least back to Irwin (48) and has been extensively developed for birth-death processes (40) and on lattices (66; 54; 98). The general form (29) was first derived in (72).
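For the exponential WTD of (30)-(31) the number of jumps completed by time t is Poisson(λt), so the walk can be sampled without stepping through time. The symmetric nearest-neighbour kernel below is an illustrative choice of ours; for unit jumps the second jump moment is m₂ = 1 and the mean-squared displacement is m₂λt, anticipating Section 3.3.

```python
import numpy as np

# Continuous-time lattice walk (31) with exponential(lam) waiting times:
# sample N(t) ~ Poisson(lam * t) jumps per walker, each +-1 with
# probability 1/2 (a symmetric nearest-neighbour T_ij, chosen here for
# illustration).  Returns walker positions at t_final.
def ctrw_positions(t_final, lam=1.0, n_walkers=20000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n_jumps = rng.poisson(lam * t_final, size=n_walkers)   # N(t) per walker
    return np.array([rng.choice([-1, 1], size=k).sum() for k in n_jumps])
```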
3.3 The evolution of spatial moments for general kernels
To determine how the evolution of the spatial moments in time depends on the waiting time distribution, we assume that the medium is one-dimensional and spatially homogeneous – the generalization to n dimensions is straightforward. Let

〈x^n(t)〉 = ∫_{−∞}^{+∞} x^n P(x, t|0) dx = ∫_{−∞}^{+∞} ∫_0^t ∫_{−∞}^{+∞} x^n T(x − y) φ(t − τ) P(y, τ|0) dy dτ dx.   (34)

Denote by

m_k = ∫_{−∞}^{+∞} x^k T(x) dx

the k-th moment about zero of the jump length distribution – then as shown in (72)

〈x^n(t)〉 = ∫_0^t Σ_{k=0}^{n} \binom{n}{k} m_k φ(t − τ) 〈x^{n−k}(τ)〉 dτ,   (35)
and thus the Laplace transform of the n-th moment is given by

X̂_n(s) = φ̂(s)/(1 − φ̂(s)) [ Σ_{k=1}^{n−1} \binom{n}{k} m_k X̂_{n−k}(s) + m_n/s ].

In particular the first two moments are

X̂_1(s) = (m_1/s) · φ̂(s)/(1 − φ̂(s)),

X̂_2(s) = (2 m_1 X̂_1(s) + m_2/s) · φ̂(s)/(1 − φ̂(s)).   (36)
The two most widely-used waiting time distributions are the exponential distribution and the gamma distribution. Suppose in either case that m_1 = 0, since a non-zero first moment simply adds a drift. Then for the exponential WTD one finds that φ̂(s) = λ/(s + λ) and that 〈x²(t)〉 = m_2 λ t. If φ is a gamma WTD with parameters (2, λ), then φ(t) = λ² t e^{−λt}, φ̂(s) = λ²/(s + λ)², and

〈x²(t)〉 = m_2 ∫_0^t L^{−1}( λ²/(s(s + 2λ)) ) dτ = (m_2 λ/2) [ t − (1/(2λ))(1 − e^{−2λt}) ].   (37)
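The gamma-WTD result (37) can be checked by direct simulation: a renewal process with Gamma(2, λ) waiting times is a Poisson(λ) event stream in which every second event triggers a jump, so the jump count by time t is ⌊Poisson(λt)/2⌋. A sketch with illustrative parameters:

```python
import numpy as np

def gamma_ctrw_msd(n_walkers=40000, lam=2.0, t_final=5.0, seed=1):
    """MSD of a CTRW with Gamma(2, lam) WTD and symmetric unit jumps:
    the jump count by time t is floor(Poisson(lam*t)/2)."""
    rng = np.random.default_rng(seed)
    n_jumps = rng.poisson(lam * t_final, n_walkers) // 2
    x = 2.0 * rng.binomial(n_jumps, 0.5) - n_jumps   # sum of +-1 jumps
    return np.mean(x ** 2)

lam, t = 2.0, 5.0
theory = (lam / 2.0) * (t - (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam))  # eq. (37), m2 = 1
print(gamma_ctrw_msd(), theory)   # both ≈ 4.75
```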
In general the asymptotic behavior of the moments can be gotten by applying limit theorems for Laplace transforms (99). If we denote the k-th moment of the WTD as M_k and suppose that m_1 = 0, then the leading terms in an asymptotic expansion of X̂_2(s) are

X̂_2(s) = m_2/(M_1 s²) [ 1 + ((M_2 − 2M_1²)/(2M_1)) s + O(s²) ].

Therefore, by (i) applying the limit result that lim_{s→0} s f̂(s) = lim_{t→∞} f(t), and (ii) using the fact that

L(t^{ρ−1}) = Γ(ρ)/s^ρ   for ρ > 0,

one sees that if the mean waiting time M_1 is finite, then the mean-squared displacement for large t is given by

〈x²(t)〉 ∼ (m_2/M_1) t.
Thus so far as the mean-squared displacement is concerned, any jump process for which the jump distribution has a finite variance and the WTD has a finite mean behaves like a diffusion process with diffusion coefficient D = m_2/(2M_1) for large t.
To make the connection with the PDE descriptions of motion more explicit, consider first the case of an exponential WTD, and suppose that the jump kernel is spatially homogeneous. If

T(x − y) = δ(|x − y| − ℓ) / (ω_n ℓ^{n−1}),

where ω_n = 2π^{n/2}/Γ(n/2) is the surface measure of the unit sphere in R^n, one finds that

∂P/∂t = λ [ P̄(x, ℓ, t) − P(x, t) ],

where P̄ is the average of P over the surface of a sphere of radius ℓ centered at x. Expansion of P̄ about x leads, in the diffusion limit λ → ∞, ℓ → 0, λℓ²/2n = D, to

∂P/∂t = D ∇²P,   (38)

provided that all higher-order derivatives are bounded. The Pearson walk described earlier falls into this class.
A similar conclusion holds for more general kernels, written in 1D for simplicity, of the form

T(x − y) = (1/ℓ) T_0((x − y)/ℓ, ℓ).

Then

∂P/∂t = λ ( ℓ ∫_R T_0(r, ℓ) r dr ) ∂P/∂x + λ ( (ℓ²/2) ∫_R T_0(r, ℓ) r² dr ) ∂²P/∂x² + O(ℓ³).   (39)
Therefore if the first moment of T_0 is O(ℓ) for ℓ → 0, if the second moment of T_0 tends to a constant, and if all higher moments are bounded, then in the diffusion limit we obtain a diffusion equation with drift. The diffusion coefficient is given by

D = (λℓ²/2) lim_{ℓ→0} ∫_R T_0(r, ℓ) r² dr   (40)

and the drift coefficient is given by

β = (λℓ²/2) lim_{ℓ→0} ∫_R (T_0(r, ℓ)/ℓ) r dr.   (41)

The latter vanishes if the kernel is symmetric.
3.4 The effects of long waits or large jumps
The fact that any jump process with a WTD that has a finite first moment and a jump distribution having a finite second moment evolves like a standard Brownian motion for large t is simply a reflection of the central limit theorem applied to the sum of the IID steps taken in the walk (51). When the large-time limit of the mean-squared displacement grows either sub- or super-linearly the process is said to exhibit anomalous diffusion. For example, if

〈x²(t)〉 ∼ γ t^β

for β ≠ 1 and t → ∞, it is called subdiffusion if β < 1 and superdiffusion if β > 1 (65). Subdiffusion occurs when particles spread slowly, whether because they rest or are trapped for a long time, and in particular, if the mean waiting time between jumps is infinite. For example, if m_1 = 0, then from (36)

〈x²(t)〉 = m_2 L^{−1}( φ̂(s) / (s(1 − φ̂(s))) ).
Therefore, if 1 − φ̂(s) ∼ s^ρ for ρ ∈ (0, 1) and s → 0, then 〈x²(t)〉 ∼ m_2 t^ρ/Γ(1 + ρ) for t → ∞, i. e., movement is asymptotically subdiffusive. As another example, consider

φ(t) = 1/(1 + t)²,

which is a well-defined distribution, but for which M_k = ∞ for all k ≥ 1. The transform of φ is

φ̂(s) = (π/2 − Si(s)) cos s + Ci(s) sin s,

where Si and Ci are the sine and cosine integral functions (20). From the asymptotic expansion of the integrals one finds that

〈x²(t)〉 ∼ t/log t,

and thus the process is subdiffusive.
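The subdiffusive scaling can also be seen numerically. The sketch below (illustrative parameters) draws waiting times from the Pareto-type density φ(t) = ρ(1 + t)^{−(1+ρ)} with ρ = 1/2, whose mean is infinite, and estimates the growth exponent of the mean jump count — which equals the MSD for unit ±1 jumps.

```python
import numpy as np

def mean_jump_counts(n_walkers=5000, rho=0.5, times=(100.0, 10000.0), seed=2):
    """Mean number of jumps completed by each time in `times` for a renewal
    process with the heavy-tailed WTD phi(t) = rho*(1+t)^(-(1+rho)),
    whose mean waiting time is infinite for rho <= 1."""
    rng = np.random.default_rng(seed)
    times = np.asarray(times, dtype=float)       # must be sorted ascending
    counts = np.zeros(times.size)
    for _ in range(n_walkers):
        t, n, k = 0.0, 0, 0
        while k < times.size:
            t += rng.random() ** (-1.0 / rho) - 1.0   # inverse-CDF draw
            while k < times.size and t > times[k]:
                counts[k] += n                   # jumps completed by times[k]
                k += 1
            n += 1
    return counts / n_walkers

m = mean_jump_counts()
slope = np.log(m[1] / m[0]) / np.log(100.0)
print(slope)   # ≈ rho = 0.5, i.e. <x^2(t)> ~ t^0.5
```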
The superdiffusive case arises either when the walk is highly persistent in time or when the jump distribution has a fat tail. The simplest example of the first case arises when the walker never turns, which leads to a wave equation for which the mean square displacement scales as t². More generally this arises if φ̂(s) ∼ Γ(3)/(s² + Γ(3)) for s → 0. An application to bacteria that exhibit long runs is discussed in (64).
The latter case arises when the variance of the jump distribution diverges and the central limit theorem does not apply. The motion corresponds to a Levy flight, which leads to periods of localized meandering punctuated by occasional long steps. A comparison of a Levy flight for the jump distribution

T(x) = A μσ / (σ|x|)^{1+μ}

for μ = 1.5 with Brownian motion is shown in Figure 3. The applicability of Levy flights as a description of animal movement is discussed in (26).

Figure 3: An example of Brownian motion (lower left) in the X-Y plane, and a Levy walk (From (65)).
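A minimal sampler for such a fat-tailed jump distribution — a Pareto law with the tail exponent μ of the kernel above, parameters illustrative — shows why the CLT fails: a non-negligible fraction of steps are very long.

```python
import numpy as np

def levy_step_lengths(n=100000, mu=1.5, sigma=1.0, seed=3):
    """Draw jump lengths with the power-law tail P(|X| > r) = (sigma/r)^mu
    for r >= sigma, matching T(x) ~ |x|^(-(1+mu)); the variance is
    infinite for mu < 2, so the central limit theorem does not apply."""
    rng = np.random.default_rng(seed)
    return sigma * rng.random(n) ** (-1.0 / mu)   # inverse-CDF sampling

r = levy_step_lengths()
print(np.mean(r > 10.0))   # ≈ 10**(-1.5) ≈ 0.032: rare but very long steps
```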
3.5 Biased jumps dependent on gradients or internal dynamics
Several generalizations of the preceding examples are possible. The WTD for the jump process can depend on time or on the density of individuals, the redistribution kernel may depend on the local density or a local average of the density, and of course the WTD and jump distributions need not be independent. Examples of the latter case include introduction of a resting phase in which the resting time depends on the preceding jump length, or alternatively, the WTD may depend directly on the jump length. It is known that a resting phase with Poisson-driven entry and exit simply rescales the diffusion coefficient in simple random walks (98; 46).
If the waiting time distribution depends on the number density n and t, then

φ(n, t) = λ(n(x, t), t) e^{−∫_0^t λ(n(x,s),s) ds},

and the renewal equation for the number density is now the nonlinear equation

n(x, t) = e^{−∫_0^t λ(n(x,s),s) ds} F(x) + ∫_0^t ∫_R λ(n(x, t − τ), t − τ) e^{−∫_0^{t−τ} λ(n(x,s),s) ds} T(x, y) n(y, τ) dy dτ.
For suitable choices of the dependence on the density this can describe either aggregation or dispersal. Dispersal at high densities would obtain if λ(n, ·) is an increasing function of n, in which case the mean waiting time between jumps is a decreasing function of n. On the other hand, density-dependent aggregation could be modeled using a λ that decreases with n, in which case the waiting time between jumps increases with the density. A different approach in which the parameters depend on internal state variables will be discussed later.

The kernel T may also depend on external fields such as the concentration of an attractant and on the internal state of the organism, and one expects this dependence to be reflected in the resulting limit equations. This will be discussed in greater detail in the context of velocity-jump processes, but here we briefly illustrate the issue for space-jump processes.
Let x = xξ and y = yη, where ξ and η are the directions of x and y. For a fixed y, the average x̄ after a jump is defined as

x̄ = ∫ T(x, y) x dx = ∫∫ T(x, y) ξ x^n dx dω_n.
The angle between ξ and η measures the tendency for the next jump to remain aligned with η. Therefore we define an index of directional persistence as

ψ_d ≡ 〈ξ, η〉   (42)

and clearly ψ_d ∈ [−1, +1]. If the step lengths are fixed at ∆, as in the Pearson walk, and if the turning probability depends only on the cone angle

θ(x, y) ≡ cos^{−1}(〈ξ, η〉)

between y and x, then T(x, y) has the form

T(x, y) = [δ(|x − y| − ∆)/∆^{n−1}] h(θ(x, y))

for any n ≥ 2 and a normalized distribution h.
To illustrate how external fields can be incorporated we write

T(x, y) = T_0(x − y) + T_1(x, y)

and we suppose that the drift in T_0 vanishes, that the bias kernel T_1 has compact support and vanishing first moment, and that

∫_{R^n} T_1(x, y) P(y) dy = ∫_{B_δ(x)} (y − x) · F(S(y)) P(y) dy.

Here S is a specified field, F is a vector-valued function of S, and B_δ(x) is a ball of radius δ, the sensing radius, centered at x. For example, let F = −χ∇S, define y − x = ρ, and expand around x; then one finds that

∫_{R^n} T_1(x, y) P(y) dy = −γ [∇S(x)∇P(x) + P(x)∇∇S(x)] : ∫_{B_δ(x)} ρρ dρ   (43)

= −γ V_n δ³ [∇S(x)∇P(x) + P(x)∇∇S(x)] : δ   (44)

where

V_n = π^{n/2}/Γ(n/2 + 1)

is the volume of B_1 in n dimensions, and δ is the unit second-rank isotropic tensor (71). Thus the n-dimensional extension of the drift-free version of (39) to include the bias given above reads

∂P/∂t = D∆P − χ (∇S · ∇P + P∇ · ∇S),   (45)

which is a form of the chemotaxis equation discussed later.
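A minimal explicit finite-difference sketch of (45) in one dimension illustrates the balance of diffusion and chemotactic drift. It is written in flux form so that mass is conserved exactly; the Gaussian attractant profile S and all parameter values are hypothetical choices for illustration only.

```python
import numpy as np

def chemotaxis_1d(nx=101, D=0.01, chi=0.02, dt=0.002, n_steps=2000):
    """Explicit flux-form scheme for dP/dt = D P_xx - chi*(P S_x)_x on
    [0,1] with zero-flux boundaries; S is a hypothetical Gaussian
    attractant peaked at x = 0.5."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    S = np.exp(-(x - 0.5) ** 2 / 0.02)
    Sx = np.gradient(S, dx)
    P = np.ones(nx)                                  # uniform initial density
    for _ in range(n_steps):
        # fluxes at the nx-1 cell interfaces; zero flux outside the ends
        Pm = 0.5 * (P[1:] + P[:-1])
        Sxm = 0.5 * (Sx[1:] + Sx[:-1])
        F = -D * (P[1:] - P[:-1]) / dx + chi * Pm * Sxm
        P[1:-1] -= dt / dx * (F[1:] - F[:-1])
        P[0] -= dt / dx * F[0]
        P[-1] += dt / dx * F[-1]
    return x, P

x, P = chemotaxis_1d()
print(x[np.argmax(P)])   # the density accumulates near the attractant peak
```

The flux form makes the update telescoping, so the discrete total mass is invariant up to roundoff, mirroring conservation of walkers in the underlying jump process.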
3.6 Aggregation in reinforced random walks16
The rigorous analysis of random walks is more complicated when particle interactions, either direct or indirect,17
are taken into account (cf. (88; 68; 89)). As will be discussed later, E. coli releases a diffusible attractant,18
whereas myxobacteria gliding on a slime trail react to their own contribution to these trails and to the contributions19
of the other bacteria (101). There is a growing mathematical literature on what are called reinforced random walks20
that began with the work of (24); a recent review can be found in (82). Here we sketch the approach developed in21
(73), where the particle motion is governed by a jump process and the walkers modify the transition probabilities22
on intervals for subsequent transitions of an interval.23
Davis (24) considered a reinforced random walk for a single particle in one dimension. Initially there is a weight w_n^0 on each interval (n, n + 1), n ∈ Z. If by step i an interval has been crossed by the particle exactly k times, its weight will be

w_n^i = w_n^0 + Σ_{j=1}^{k} a_j,

where a_j ≥ 0, j = 1, ..., k. Furthermore, the transition probabilities are given by

P(x_{i+1} = n + 1 | x_i = n) = w_n^i / (w_n^i + w_{n−1}^i).

Davis' main theorem asserts that localization of the particle will occur if the weight on the intervals grows rapidly enough with each crossing, as summarized in the following. Let x_i be the particle position at the i-th step, let X ≡ {x_i, i ≥ 0}, and let

φ(a) ≡ Σ_{n=1}^{∞} (1 + Σ_{i=1}^{n} a_i)^{−1}.
Theorem 3.1 Suppose that w_n^0 = 1. Then

(i) If φ(a) = ∞ then X is recurrent.

(ii) If φ(a) < ∞ then X has finite range and there are random integers n and I such that x_i ∈ {n, n + 1} for all i > I.
Here recurrent means that every integer is visited infinitely often a.s., i. e., the walker does not become trapped. From this it follows that if a_j = constant, which corresponds to linear growth of the weight, then X is recurrent almost surely, whereas if the growth is superlinear then the particle oscillates between two random integers almost surely after some random elapsed time. Since the result deals with a single particle it does not directly address the aggregation problem, but it does suggest that if the particles interact only through the modification of the transition probability there may be aggregation if this modification is strong enough.
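The trapping predicted by Theorem 3.1(ii) is easy to see in simulation. The sketch below (illustrative, not from (73)) runs Davis' walk with the superlinear choice a_j = 2^j, for which φ(a) < ∞; exact integer weights avoid floating-point overflow.

```python
import random

def davis_walk(n_steps=2000, seed=4):
    """Davis' reinforced walk on Z with a_j = 2**j: after k crossings an
    interval has weight 1 + 2 + ... + 2**k = 2**(k+1) - 1, and the
    particle steps right with probability w_right/(w_right + w_left)."""
    rng = random.Random(seed)
    crossings = {}                  # interval n -> crossings of (n, n+1)

    def weight(n):
        return 2 ** (crossings.get(n, 0) + 1) - 1   # = 1 before any crossing

    pos, path = 0, [0]
    for _ in range(n_steps):
        wr, wl = weight(pos), weight(pos - 1)
        if rng.random() < wr / (wr + wl):
            crossings[pos] = crossings.get(pos, 0) + 1
            pos += 1
        else:
            crossings[pos - 1] = crossings.get(pos - 1, 0) + 1
            pos -= 1
        path.append(pos)
    return path

path = davis_walk()
print(sorted(set(path[-500:])))   # the walk ends up oscillating in one interval
```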
This theorem motivated the following development, in which we begin with a master equation for a continuous-time, discrete-space random walk, and postulate a generalized form of (31) in which the transition rates are modulated by the density of a control species (73). We restrict attention to one-step jumps; using the framework given earlier it is easy to apply this to general graphs, but one may not obtain diffusion equations in the continuum limit.
Suppose that the conditional probability p_n(t) that a walker is at n ∈ Z at time t, conditioned on the fact that it begins at n = 0 at t = 0, evolves according to the continuous time master equation

∂p_n/∂t = T_{n−1}^+(W) p_{n−1} + T_{n+1}^−(W) p_{n+1} − (T_n^+(W) + T_n^−(W)) p_n.   (46)

Here T_n^±(·) are the transition probabilities per unit time for a one-step jump to n ± 1, and (T_n^+(W) + T_n^−(W))^{−1} is the mean waiting time at the n-th site. We assume throughout that these are nonnegative and suitably smooth functions of their arguments. The vector W is given by

W = (· · · , w_{−n−1/2}, w_{−n}, w_{−n+1/2}, · · · , w_0, w_{1/2}, · · · ).   (47)

¹In this section the weight w may be equivalent to the signal S used earlier, or some function of it.
Note that the density of the control species w is defined on the embedded lattice of half the step size. The evolution of w will be considered later; for now we assume that the distribution of w is given. Clearly a time- and p-independent spatial distribution of w can model a heterogeneous environment, but this static situation is not treated here.

As (46) is written, the transition probabilities can depend on the entire state and on the entire distribution of the control species. Since there is no explicit dependence on the previous state the jump process may appear to be Markovian, but if the evolution of w_n depends on p_n, then there is an implicit history dependence, and the space-jump process by itself is not Markovian. However, if one enlarges the state space by appending w one obtains a Markov process in this new state space.
Three distinct types of models are developed and analyzed in (73), which differ in the dependence of the transition rates on w: (i) strictly local models, (ii) barrier models, and (iii) gradient models. In the first of these the transition rates are based on local information, so that T_n^± = T(w_n), and to simplify the analysis we assume that the jumps are symmetric, i. e., that T^+ = T^− ≡ T. In this case (46) reduces to

∂p_n/∂t = T(w_{n−1}) p_{n−1} + T(w_{n+1}) p_{n+1} − 2 T(w_n) p_n.

If we assume that the transition rates are proportional to a basal jump frequency λ, and that the formal diffusion limit

lim_{h→0, λ→∞} λh² = constant ≡ D

exists, we obtain the nonlinear diffusion equation

∂p/∂t = D ∂²(T(w)p)/∂x².   (48)
The second type is called a barrier model, for which there are two sub-cases, depending on whether or not the transition rates are re-normalized. In the first case one assumes that T_n^±(W) = T(w_{n±1/2}), which leads to the equation

∂p/∂t = D ∇ · (T ∇p).

If one re-normalizes the transition rates so that

T_n^+(W) + T_n^−(W) = constant ≡ λ,

then after some analysis one finds that in the diffusion limit this leads to

∂p/∂t = D ∂/∂x ( p ∂/∂x ( ln(p/T) ) ).   (49)
For later comparison with velocity jump processes we define the chemotactic sensitivity χ and velocity u as

χ = D (ln T)_w,   u = −D ∂(ln p)/∂x + D (ln T(w))′ ∂w/∂x.   (50)

Thus the taxis is positive if T′(w) > 0. The simplest form of w-dependence is to assume that T(w) = α + βw, and we use this form later in examples.
The last type of sensing leads to the gradient-based, or look-ahead, model, for which T_{n−1}^+ = α + β(τ(w_n) − τ(w_{n−1})) and T_{n+1}^− = α + β(τ(w_n) − τ(w_{n+1})), with α ≥ 0, and again there are two cases, depending on whether or not the rates are re-normalized. The chemotactic velocities and sensitivities for these and the preceding cases are summarized in Table 1.

Table 1: Dependence of the response on the sensing mechanism

Type of Sensing                           | Taxis Velocity | Chemotactic Sensitivity | Type of Taxis
1. Local                                  | −D∇T           | −D T′(w)                | Negative if T′(w) > 0
2. Barrier without re-normalization       | 0              | 0                       | None
3. Barrier with re-normalization          | D∇ln T         | D (ln T(w))′            | Positive if T′(w) > 0
4. Nearest neighbor with re-normalization | 2D∇ln T        | 2D (ln T(w))′           | Positive if T′(w) > 0
5. Gradient without re-normalization      | 2Dβ∇τ          | 2Dβ τ′(w)               | Positive if βτ′(w) > 0
6. Gradient with re-normalization         | D(β/α)∇τ       | D(β/α) τ′(w)            | Positive if βτ′(w) > 0
Of course we also have to specify the local dynamics for the evolution of w, and here we use the general form

∂w/∂t = pw/(1 + λw) + γ_r p/(K + p) − µw ≡ R(p, w)   (51)

in the examples shown in Figure 4. For all cases we set D = 0.36, and in the first panel we show the solution of (49) and (51) for α = γ_r = µ = 0 and β = 1, λ = 10^{−5}. The second panel is as in the first, but with λ = 0, and in the third panel a more complicated transition rate is used (cf. (73)). One sees in that figure that both the dependence of the transition rates on the local modulator w, and the dynamics of w itself, play an important role in the dynamics of the system. In the first panel the solution stabilizes at some smooth distribution, in the second panel the solution blows up in finite time (around t = 9.3 – this assertion is supported by analysis of the Fourier components – see (73)), and in the third panel the solution ultimately collapses, in a very interesting step-wise fashion that is not understood at present.
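The interplay between the transition rates and the w-dynamics is easy to explore numerically. The sketch below integrates the master equation (46) with the renormalized barrier rates T_n^± = λ w_{n±1/2}/(w_{n+1/2} + w_{n−1/2}) by forward Euler, coupled to a simplified special case of (51) (γ_r = 0, µ = 1; all parameter values are illustrative, not those of Figure 4). By construction the scheme conserves total mass.

```python
import numpy as np

def reinforced_lattice(nx=41, lam=1.0, dt=0.01, n_steps=500):
    """Forward-Euler integration of (46) with renormalized barrier rates
    T_n^{+-} = lam*w_{n+-1/2}/(w_{n+1/2}+w_{n-1/2}) and modulator
    dynamics dw/dt = p*w/(1 + 0.1*w) - w on the half-lattice
    (a simplified special case of (51)); boundaries are reflecting."""
    p = 1.0 + 0.5 * np.exp(-np.linspace(-3.0, 3.0, nx) ** 2)  # initial bump
    w = np.ones(nx - 1)                                       # w at half-sites
    for _ in range(n_steps):
        Tp = np.zeros(nx)
        Tm = np.zeros(nx)
        Tp[1:-1] = lam * w[1:] / (w[1:] + w[:-1])
        Tm[1:-1] = lam * w[:-1] / (w[1:] + w[:-1])
        Tp[0], Tm[-1] = lam, lam        # reflecting ends: single exit rate
        gain = np.zeros(nx)
        gain[1:] += Tp[:-1] * p[:-1]    # arrivals from the left neighbor
        gain[:-1] += Tm[1:] * p[1:]     # arrivals from the right neighbor
        p = p + dt * (gain - (Tp + Tm) * p)
        pw = 0.5 * (p[1:] + p[:-1])     # p interpolated to half-sites
        w = w + dt * (pw * w / (1.0 + 0.1 * w) - w)
    return p, w

p, w = reinforced_lattice()
print(p.sum())   # total mass is conserved by the gain-loss structure
```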
The analysis of reinforced random walks presented in (73) can be generalized in many directions. For example, consider the re-normalized transition rates

T_n^±(w) = w_{n±1/2} / (w_{n+1/2} + w_{n−1/2}).   (52)

These can be regarded as the discrete version of the continuous forms

T^+(w) = [ (1/h) ∫_x^{x+h} w(s) ds ] / [ (1/h) ∫_{x−h}^{x+h} w(s) ds ],

T^−(w) = [ (1/h) ∫_{x−h}^{x} w(s) ds ] / [ (1/h) ∫_{x−h}^{x+h} w(s) ds ].
Figure 4: The density profiles from three examples of the local dynamics. Reproduced from (73), copyright 1997 Society for Industrial and Applied Mathematics.
The continuous version implies that the walker averages over the interval (x, x + h) or (x − h, x) to determine the transition rate. Of course one can incorporate a more general kernel. For example, one might use

T^+(w(x)) = ∫_x^{∞} λ²(y − x)² e^{−λ²(y−x)²} w(y) dy / ∫_{−∞}^{∞} λ²(y − x)² e^{−λ²(y−x)²} w(y) dy,

with the analogous integral over (−∞, x) for T^−, which assigns the maximum weight to x ± 1/λ. More generally, we may simply assume that

T^+(w(x)) = ∫_x^{∞} K(y − x, h) w(y) dy / [ ∫_x^{∞} K(y − x, h) w(y) dy + ∫_{−∞}^{x} K(x − y, h) w(y) dy ],

T^−(w(x)) = ∫_{−∞}^{x} K(x − y, h) w(y) dy / [ ∫_x^{∞} K(y − x, h) w(y) dy + ∫_{−∞}^{x} K(x − y, h) w(y) dy ],

for a suitable kernel K. To recover (52) we choose

K(y − x, h) = δ(y − x − h/2).
4 Velocity jump processes and taxis equations
As described in Section 2, the velocity-jump (VJ) process is predicated on the assumption that particles make instantaneous jumps in velocity space, rather than in physical space (72). By comparing the underlying basis of the FPKK equation with that of the Smoluchowski equation, one should expect that the VJ process gives rise to evolution equations that depend jointly on physical- and velocity-space operators. Just as the FPKK equation leads to the Smoluchowski equation in certain regimes, it is known that the long-time asymptotics of VJ processes lead to diffusion processes in space under suitable scalings of space and time (77; 1; 41). In this section we define the general VJ process and summarize results on diffusion limits of this process. In the last subsections we describe the application of this process to two classes of biological organisms – swimming bacteria and crawling cells.
4.1 The general velocity-jump process
We shall work directly with the differential equation form of the conservation equation for a phase space density function that depends only on the position, velocity, time and some intracellular variables. In essence the resulting equation is the analog of the Liouville equation (19) with an additional term to account for the gain or loss of particles at a point in phase space due to the underlying jump process. Throughout we focus on the evolution of a smooth density function, and do not address the question of how to connect this to limiting forms of the empirical density for an N-particle system.

Let p(x, v, y, t) be the density function for individuals in a (2n + m)-dimensional phase space with coordinates (x, v, y), where x ∈ R^n is the position of an individual, v ∈ R^n is its velocity, and y ∈ R^m is the set of intracellular state variables involved in cell movement. The evolution of p is governed by the equation

∂p/∂t + ∇_x · (vp) + ∇_v · (Fp) + ∇_y · (fp) = R,   (53)

where F denotes an external, velocity-independent force acting on the individuals, f is the rate of change of the internal variable y, and R is the rate of change of p due to birth/death processes, a jump process that generates random changes of velocity, etc. Normally cell proliferation is independent of the velocity, and the rate of proliferation can be approximated by r(n)p, where r(n) is the density-dependent growth rate, but here we only include random velocity changes. In addition we assume that cells are sufficiently separated and neglect cell-cell mechanical interactions.
The jump process for velocity changes is the direct analog of the stochastic process underlying space jumps. Initially we suppose that the waiting time between jumps and the changes in velocity are independent, and that the WTD is exponential. As a result, the turning can be described by two quantities, the turning rate λ, and the turning kernel T(v, v′), which defines the probability of a change in velocity from v′ to v, given that a reorientation occurs. T(v, v′) is non-negative and normalized so that ∫ T(v, v′) dv = 1, and at present we assume that it is independent of time and space. In light of the foregoing assumptions, (53) becomes

∂p/∂t + ∇_x · (vp) + ∇_v · (Fp) + ∇_y · (fp) = −λp + λ ∫ T(v, v′) p(x, v′, y, t) dv′,   (54)

and the underlying stochastic process is called a velocity jump process. For most purposes one does not need the full distribution p, but only its first few velocity moments. The first three are the observable density of individuals n(x, t), the momentum, and the momentum flux:

n(x, t) = ∫ p(x, v, y, t) dv dy,   j(x, t) = ∫ p(x, v, y, t) v dv dy,   P = ∫ p(x, v, y, t) vv dv dy.

The momentum j defines the average velocity u = j/n. Integration of (54) over (v, y) leads to

∂n/∂t + ∇_x · (nu) = 0.   (55)

When λ is independent of y, multiplication of (54) by v and integration over (v, y) yields

∂(nu)/∂t + ∇ · P − Fn = −λnu + λ ∫ T(v, v′) v p(x, v′, y, t) dv′ dv dy.   (56)

These are not closed, except in a special case noted later, due to the presence of the momentum flux tensor P and the integral term on the right. Until stated otherwise, we assume that F = 0.
It is observed experimentally that the movement of cells often exhibits directional persistence, and as a result, the turning kernel depends on the angle θ between the previous velocity v′ and the new direction v (8; 62; 39; 59). Let s denote the cell speed and e_v the direction of the velocity, so that v = s e_v. For a fixed v′, the average velocity v̄ after reorientation is defined as

v̄ = ∫ T(v, v′) v dv = ∫∫ T(v, v′) s^n e_v ds dω_n

and the average speed is

s̄ ≡ ∫ T(v, v′) ‖v‖ dv = ∫∫ T(v, v′) s^n ds dω_n.

As in the space-jump framework, we characterize persistence via an index of directional persistence, defined as

ψ_d ≡ (v̄ · v′)/(s̄ s′) ∈ [−1, +1],   (57)

which measures the tendency of the motion to persist in a given direction e_{v′}. Of particular interest is the case in which the speed does not change with reorientation and the turning probability depends only on θ. Then T(v, v′) has the form

T(v, v′) = h(θ(v, v′))   (58)

for any n ≥ 2. For such T, ψ_d is independent of v′ and

v̄ = ψ_d v′,   (59)

where

ψ_d = 2 ∫_0^π h(θ) cos θ dθ   for n = 2,
ψ_d = 2π ∫_0^π h(θ) cos θ sin θ dθ   for n = 3.   (60)
Observations of the movement of Dictyostelium discoideum (Dd) amoebae yield ψ_d ≈ 0.7 (39), whereas the three-dimensional bacterial random walk data in (8) show ψ_d ≈ 0.33.
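For a concrete check of (60) in two dimensions, take the hypothetical cardioid-like turn-angle density h(θ) = (1 + cos θ)/(2π), normalized so that 2∫_0^π h dθ = 1; then ψ_d = 1/2 exactly, which trapezoidal quadrature reproduces.

```python
import numpy as np

def psi_d_2d(h, n_pts=20001):
    """Directional persistence index (60) for n = 2,
    psi_d = 2*int_0^pi h(theta)*cos(theta) dtheta, by trapezoidal rule."""
    theta = np.linspace(0.0, np.pi, n_pts)
    f = h(theta) * np.cos(theta)
    return 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

# hypothetical turn-angle kernel; a uniform kernel would give psi_d = 0
h = lambda th: (1.0 + np.cos(th)) / (2.0 * np.pi)
print(psi_d_2d(h))   # ≈ 0.5: moderately persistent motion
```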
External signals enter either through a direct effect on the turning rate λ and the turning kernel T, or indirectly via internal variables y that reflect the external signal and in turn influence λ and/or T. The first case arises when experimental results are used to directly estimate parameters in the equation (32), but the latter approach is more fundamental. The reduction of (54) to the macroscopic chemotaxis equations is done for the first case in (41; 70), and for the second case in (29; 28; 30; 104; 106). In (104), external forces are also included. We summarize some of the important aspects of the reduction in the following sections.
4.2 The telegraph process
A simple example will illustrate both the reduction of the jump process to a diffusion process, and how the parameters of the jump process have to be controlled so as to produce aggregation. Suppose that the walkers are confined to the real line R, that the speeds s^± to the right and left may depend on position, and that direction is reversed at random instants governed by Poisson processes of intensity λ^±. Let p^± denote the density of walkers moving to the right and left, respectively. Then the conservation equations for these densities are²

∂p^+/∂t + ∂(s^+ p^+)/∂x = −λ^+ p^+ + λ^− p^−,
                                                          (61)
∂p^−/∂t − ∂(s^− p^−)/∂x = λ^+ p^+ − λ^− p^−.

Let n ≡ p^+ + p^− be the macroscopic density and note that the flux j is s^+ p^+ − s^− p^−; then (61) can be written in the alternative form

∂n/∂t + ∂j/∂x = 0,
                                                          (62)
∂j/∂t + s^+ ∂(s^+ p^+)/∂x + s^− ∂(s^− p^−)/∂x = (s^+ + s^−)(−λ^+ p^+ + λ^− p^−).
To illustrate the essence of aggregation and taxis in this simple context, we ask how the walkers should modify their behavior so as to produce a nonuniform distribution in space at steady-state, and we consider three cases.

Case I: Constant and equal turning rates and speeds, λ^+ = λ^− = λ_0 and s^+(x) = s^−(x) = s_0

By combining the two equations at (62) we obtain the classical telegrapher's equation

∂²n/∂t² + 2λ_0 ∂n/∂t = s² ∂²n/∂x²,   (63)
and by formally taking the limit λ_0 → ∞, s → ∞ with s²/λ_0 ≡ 2D constant in (63), one obtains the diffusion equation. However the limiting procedure can be made more precise by considering the exact solution of (63), which is

n(x, t) = (e^{−λ_0 t}/2) [ δ(x − st) + δ(x + st) + (λ_0/s)( I_0(Λ) + (λ_0 t/Λ) I_1(Λ) ) ] + n_0   for |x| ≤ st,
n(x, t) = n_0   for |x| > st,

where Λ ≡ λ_0 √(t² − x²/s²).
Here I_0 and I_1 are modified Bessel functions of the first kind. By applying the asymptotic expansions

I_0(z) = (e^z/√(2πz)) (1 + O(1/z)),   I_1(z) = (e^z/√(2πz)) (1 + O(1/z))   as z → ∞,

one finds that

n(x, t) = (1/√(4πDt)) e^{−x²/4Dt} + n_0 + e^{−λ_0 t} O(ξ²),   ξ² ≡ (x/st)².
From this one sees that the telegraph process reduces to a diffusion process on space scales that are small compared with the ballistic scale st. This fact was known to Einstein, and this process has since been studied by many (92; 34; 37; 49; 72).
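A direct Monte Carlo of Case I (a sketch with illustrative parameters) confirms the diffusion approximation: for t ≫ 1/λ_0 the simulated mean-squared displacement approaches 2Dt with D = s²/(2λ_0).

```python
import numpy as np

def telegraph_msd(n_walkers=20000, s=1.0, lam0=1.0, t_final=50.0, seed=5):
    """Monte Carlo of the telegraph process: speed +-s, direction reversed
    at Poisson(lam0) instants.  For t >> 1/lam0 the MSD approaches
    2*D*t with D = s**2/(2*lam0)."""
    rng = np.random.default_rng(seed)
    x2 = np.empty(n_walkers)
    for i in range(n_walkers):
        t, x = 0.0, 0.0
        v = s if rng.random() < 0.5 else -s   # unbiased initial direction
        while True:
            tau = rng.exponential(1.0 / lam0)
            if t + tau >= t_final:
                x += v * (t_final - t)        # run out the remaining time
                break
            x += v * tau
            t += tau
            v = -v                            # reversal
        x2[i] = x * x
    return x2.mean()

print(telegraph_msd())   # ≈ 2*D*t = 50 for these parameters
```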
If we define τ = ε²t and ξ = εx, where ε is a small parameter, then (63) reduces to

ε² ∂²n/∂τ² + 2λ_0 ∂n/∂τ = s² ∂²n/∂ξ².   (64)

²These equations are the restriction of (54) to one space dimension only when the speeds s^± are constant, and in that case the moment equations close at the second level for constant λ (72). We consider the more general case for illustrative purposes.
In these coordinates x/(st) = εξ/(sτ) and the diffusion regime only requires that ξ/(sτ) ≤ O(1). In the limit ε → 0 the exact solution can be used to show that (64) again reduces to the diffusion equation, both formally and rigorously (for t bounded away from zero). However this shows that the approximation of the telegraph process by a diffusion process hinges on the appropriate relation between the space and time scales, not necessarily on the limit of speed and turning rate tending to infinity. In any case, it is clear that the spatial distribution of n is asymptotically constant, and thus there is no localization of walkers. Imposing no-flux boundary conditions on a finite interval does not change this conclusion.

Case II: Constant and equal turning rate λ^+ = λ^− = λ_0, distinct speeds s^+(x) ≠ s^−(x)
By assuming that the flux j at infinity vanishes, and solving for the steady state solutions of (61), one finds that

p^+(x) = [ s^+(0) p^+(0) / s^+(x) ] exp( λ_0 ∫_0^x (s^+ − s^−)/(s^+ s^−) dξ ),

p^−(x) = [ s^+(0) p^+(0) / s^−(x) ] exp( λ_0 ∫_0^x (s^+ − s^−)/(s^+ s^−) dξ ),
where the constant p^+(0) is the density of right-moving cells at x = 0, which is determined from the conservation of total particle number. From this we see that (a) aggregation can occur when the speed of the cell depends on the spatial location, i. e., when s^± are not constants, (b) the distributions of the right-moving cells and the left-moving cells differ if s^+(x) ≠ s^−(x), and (c) if s^+ = s^−, both left- and right-moving cells aggregate at points of low speed. This is somewhat similar to the scenario of traffic flow – when the road becomes narrower, cars slow down, and traffic jams may form.
Case III: Distinct turning rates λ^+(x) ≠ λ^−(x), constant and equal speeds s^+ = s^− = s_0

We write

λ^± = (λ^+ + λ^−)/2 ± (λ^+ − λ^−)/2 =: λ_0 ± λ_1,
then the density-flux form (62) becomes

∂n/∂t + ∂j/∂x = 0,
                                                          (65)
∂j/∂t + s² ∂n/∂x = −2λ_0 j − 2sλ_1 n.

When λ_0 is constant this reduces to

∂²n/∂t² + 2λ_0 ∂n/∂t = s² ∂²n/∂x² − 2s ∂(λ_1 n)/∂x.   (66)

We call this a hyperbolic aggregation or taxis equation, and we will see later how this emerges in general. The difference of the turning rates produces a drift in the dynamical evolution equal to u_c = sλ_1/λ_0. This is similar to what is observed in a 1D space-jump process when the probabilities of right and left jumps differ.

The steady-state solution of (65) is

n(x) = n_0 exp( −(2/s) ∫_0^x λ_1(ξ) dξ ),

and again there may be a non-constant solution, which is a result of the difference in turning of cells.
We see from the simple 1D process that non-uniform cell distributions can arise when either the cell speeds or the turning rates are non-uniform, and these two cases correspond to what are called chemokinesis and chemotaxis, respectively. In particular, in case II cells aggregate where their speed is lowest, which is the case when amoeboid cells reach the peak of a potential attractant, while in case III cells aggregate most strongly where the turning rate deviation λ_1 returns to zero, which happens when run-and-tumble cells adapt to the signal gradient.
4.3 Reduction of the VJ process to a diffusion process
In general, in higher space dimensions equations (55) and (56) do not specify n and u as they stand, for they involve the second v-moment of p and the as yet unspecified kernel T(v, v′). We call the process unbiased when the turning rate and kernel depend only on v and v′, and biased when external fields or internal state variables are included. Note that an unbiased kernel does not mean that reorientation is isotropic. We assume hereafter that λ is independent of the velocity, and we write (54) for the unbiased process as

∂p(x, v, t)/∂t + v · ∇p(x, v, t) = −λp(x, v, t) + λ ∫_V T_0(v, v′) p(x, v′, t) dv′ ≡ L_0 p(x, v, t).   (67)

We consider the spatial domain Ω = R^n, and we suppose that the velocities lie in a compact set V ⊂ R^n that is symmetric with respect to the origin.
To state some of the results from (41), we let K denote the cone of nonnegative functions in L²(V), and for fixed (x, t) define an integral operator T and its adjoint T* by

T p = ∫_V T(v, v′) p(x, v′, t) dv′,   T* p = ∫_V T(v′, v) p(x, v′, t) dv′.   (68)

We impose the following conditions on the kernel and the integral operator:

(T1) T(v, v′) ≥ 0, ∫_V T(v, v′) dv = 1, and ∫_V ∫_V T²(v, v′) dv′ dv < ∞.

(T2) There are functions u_0, φ, and ψ ∈ K with u_0 ≢ 0 and φ, ψ ≠ 0 a.e. such that for all (v, v′) ∈ V × V

u_0(v) φ(v′) ≤ T(v′, v) ≤ u_0(v) ψ(v′).   (69)

(T3) ‖T‖_{〈1〉⊥} < 1, where 〈1〉⊥ is the orthogonal complement in L²(V) of the span of 1.

(T4) ∫_V T(v, v′) dv′ = 1.
Then the turning operator L_0 acts in L²(V), and has the following spectral properties (41).

Theorem 4.1 Assume (T1)–(T4); then the following hold.

1. 0 is a simple eigenvalue of L_0, and the corresponding eigenfunction is φ(v) ≡ 1.

2. There is a decomposition L²(V) = 〈1〉 ⊕ 〈1〉⊥, and, for all ψ ∈ 〈1〉⊥,

∫_V ψ L_0 ψ dv ≤ −µ_2 ‖ψ‖²_{L²(V)},   where µ_2 ≡ λ(1 − ‖T‖_{〈1〉⊥}).   (70)

3. All nonzero eigenvalues µ satisfy −2λ < Re µ ≤ −µ_2 < 0, and to within scalar multiples there is no other positive eigenfunction.

4. ‖L_0‖_{L(L²(V), L²(V))} ≤ 2λ.

5. L_0 restricted to 〈1〉⊥ ⊂ L²(V) has an inverse F with norm

‖F‖_{L(〈1〉⊥, 〈1〉⊥)} ≤ 1/µ_2.   (71)
If, for example, the turning kernel T(v,v′) is symmetric, then the constant μ₂ given in (70) is the negative of the second eigenvalue of the turning operator ℒ₀. This defines a time scale for relaxation of the reorientation process; in particular, if 1 is not a simple eigenvalue of 𝒯, the streaming character of the transport process dominates, and we can no longer expect to obtain a diffusion limit.
Under the preceding assumptions the parabolic scaling τ = ε²t and ξ = εx, where ε is a small dimensionless parameter, leads to a diffusion approximation of the transport equation (41). In these variables we have
\[
\varepsilon^2 \frac{\partial p}{\partial \tau} + \varepsilon\, v\cdot\nabla_{\xi}\, p = -\lambda p + \lambda \int_V T(v,v')\, p(\xi,v',\tau)\, dv', \tag{72}
\]
where the subscript on ∇, which we drop hereafter, indicates differentiation with respect to the scaled space variable. The right-hand side of (72) is O(1) compared with the left-hand side, whatever the magnitude of p, and this leads to a diffusion equation for the lowest-order term p₀ of an outer expansion, which we write as
\[
p(\xi,v,\tau) = \sum_{i=0}^{k} p_i(\xi,v,\tau)\,\varepsilon^i + \varepsilon^{k+1} p_{k+1}(\xi,v,\tau). \tag{73}
\]
An approximation result valid to any order in ε, which bounds the difference between the solution of the transport equation and an expansion derived from the solution of the associated parabolic diffusion equation, has also been proven.

Theorem 4.2 (41) Assume (T1)–(T4) and the Hilbert expansion (73), where p₀ solves the parabolic limit equation
\[
\frac{\partial p_0}{\partial \tau} - \nabla\cdot\bigl(D \nabla p_0\bigr) = 0, \qquad p_0(\xi,0) = \int_V p(\xi,v,0)\, dv, \tag{74}
\]
with diffusion tensor
\[
D = -\frac{1}{\omega} \int_V v\, \mathcal{F}\, v\, dv. \tag{75}
\]
In addition, the higher-order corrections are given by
\[
p_1 = \mathcal{F}(v\cdot\nabla p_0), \qquad p_2 = \mathcal{F}\bigl(p_{0,\tau} + v\cdot\nabla \mathcal{F}\, v\cdot\nabla p_0\bigr),
\]
where ℱ is the pseudoinverse defined in Theorem 4.1 and ω = |V|. Then, for each ϑ > 0, there exists a constant C > 0 such that for each ϑ/ε² < t < ∞ and each x ∈ ℝⁿ
\[
\|p(x,\cdot,t) - q_2(\varepsilon x, \cdot, \varepsilon^2 t)\|_{L^2(V)} \le C\, \varepsilon^3,
\]
and the constant C depends on μ₂, V, D, and ϑ.³

³In (41) this estimate appears with the L²-norm squared, but it is clear from the proof that there should be no square.
In general, the approximate solution depends only on the solution of the limiting parabolic equation, and therefore it cannot be uniformly valid in time (cf. (41)). When the speed is constant and the outgoing directions are uniformly distributed on Sⁿ⁻¹, ℱ = −λ⁻¹, and
\[
D = \frac{1}{\omega} \int_V \frac{v\,v}{\lambda}\, dv = \frac{s^2}{\lambda n}\, I.
\]
One can prove in general that the diffusion tensor is positive definite, and one can also derive necessary and sufficient conditions for it to be a scalar multiple of I.
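The isotropic formula D = s²/(λn) I can be checked against a direct Monte Carlo simulation of the unbiased velocity jump process (an illustrative sketch with assumed parameter values: constant speed, unit turning rate, uniform reorientation in 2D), using ⟨|x(t)|²⟩ ≈ 2nDt for large t:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unbiased 2D velocity jump walk: constant speed s, Poisson turning rate lam,
# uniform reorientation after each jump. Parameter values are illustrative.
s, lam, T, n_walkers = 1.0, 1.0, 50.0, 5000
pos = np.zeros((n_walkers, 2))
t = np.zeros(n_walkers)
theta = rng.uniform(0.0, 2.0 * np.pi, n_walkers)
active = np.ones(n_walkers, dtype=bool)
while active.any():
    na = active.sum()
    dt = np.minimum(rng.exponential(1.0 / lam, na), T - t[active])  # run durations
    v = np.column_stack((np.cos(theta[active]), np.sin(theta[active])))
    pos[active] += s * dt[:, None] * v
    t[active] += dt
    theta[active] = rng.uniform(0.0, 2.0 * np.pi, na)               # tumble
    active = t < T - 1e-12
msd = np.mean(np.sum(pos**2, axis=1))
D_est = msd / (4.0 * T)        # <|x|^2> = 2 n D t with n = 2
print(D_est)                   # compare with s^2/(n*lam) = 0.5
```

With these values the estimate lands close to s²/(2λ) = 0.5, up to the finite-time correction of order 1/(λT).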
Since the reduction depends critically on the existence of the parabolic scaling, we give an example of how it is determined. Let L be a characteristic scale associated with the macroscopic evolution, for instance, the size of the domain on which an experiment is done. Define the dimensionless velocity, space and time variables
\[
u = \frac{v}{s}, \qquad \xi = \frac{x}{L}, \qquad \tau = \frac{t}{\sigma},
\]
where s is a characteristic speed and σ is as yet undetermined. Then
\[
\frac{1}{\sigma}\frac{\partial p}{\partial \tau} + \left(\frac{s}{L}\right) u\cdot\nabla_{\xi}\, p = -\lambda p + \lambda \int T(u,u')\, p(\xi,u',\tau)\, du'.
\]
We estimate a diffusion coefficient as the product of the characteristic speed and the average distance traveled between velocity jumps, which gives D ∼ O(s²/λ), a characteristic diffusion time
\[
\tau_{\mathrm{DIFF}} \sim \frac{L^2}{D} = \frac{L^2 \lambda}{s^2},
\]
and a characteristic drift time
\[
\tau_{\mathrm{DRIFT}} = \frac{L}{s}.
\]
A characteristic speed for bacteria such as E. coli is 10–20 µm/sec, and λ⁻¹ ∼ O(1) second. On a length scale of 1 mm, τ_DRIFT ∼ 50–100 seconds and τ_DIFF ∼ 2500–10⁴ seconds. Therefore, in this example we have τ_RUN ∼ O(1) on the dimensional scale, and
\[
\tau_{\mathrm{DRIFT}} \sim O(1/\varepsilon), \qquad \tau_{\mathrm{DIFF}} \sim O(1/\varepsilon^2),
\]
where ε ∼ O(10⁻²). Then
\[
\tau_{\mathrm{RUN}} \equiv \lambda^{-1} \ll \tau_{\mathrm{DRIFT}} \ll \tau_{\mathrm{DIFF}},
\]
and the scaled equation results for σ = τ_DIFF.
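The arithmetic behind this scaling example can be reproduced directly (the numerical values s = 15 µm/s, λ = 1/s, L = 1 mm are representative choices within the ranges quoted above):

```python
# Representative values from the text: s in um/s, lam in 1/s, L in um.
s, lam, L = 15.0, 1.0, 1000.0
D = s**2 / lam              # D ~ O(s^2/lam), in um^2/s
tau_drift = L / s           # ~ 67 s, within the quoted 50-100 s
tau_diff = L**2 / D         # ~ 4.4e3 s, within the quoted 2.5e3-1e4 s
eps = s / (L * lam)         # ~ 1.5e-2, so tau_drift ~ 1/eps and tau_diff ~ 1/eps^2
print(tau_drift, tau_diff, eps)
```

Note that τ_DRIFT·ε and τ_DIFF·ε² both reduce to the run time 1/λ, which is exactly the separation of time scales that justifies the parabolic scaling.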
When biases are introduced, their magnitude relative to the base turning rate is critical. We write the kernel with bias as T(v,v′,p(·)), and if, for example, we assume the bias is linear in a signal gradient, then
\[
T(v,v',p(\cdot)) = T_0(v,v') + \kappa\, (v\cdot\nabla p)(v'\cdot\nabla p).
\]
One finds that
\[
D(\xi,\tau) = \frac{s^2}{\lambda_0 n}\left( I + \frac{\omega s^2}{n}\kappa\, \nabla p\, \nabla p \left( I - \frac{\omega s^2}{n}\kappa\, \nabla p\, \nabla p \right)^{-1} \right),
\]
and, as expected, there is no drift or taxis in this case.
On the other hand, if the perturbation is O(ε) and linear in the gradient, then one finds that
\[
\frac{\partial p_0}{\partial \tau} = \nabla\cdot\bigl( D\nabla p_0 - u_c\, p_0 \bigr),
\]
where the drift or chemotactic velocity is given by
\[
u_c \equiv -\frac{\lambda_0}{\omega} \int_V \int_V v\, \mathcal{F}_0\, T_1(v,v')\, dv'\, dv.
\]
Here ℱ₀ denotes the pseudoinverse defined by the kernel T₀. If the perturbation T₁ has the particular form
\[
T_1 = k_1(v',S)\, v,
\]
then
\[
\chi(p) = k(p)\,\frac{s^2}{\omega n},
\]
and the lowest-order approximation is the solution of
\[
\frac{\partial p_0}{\partial \tau} = \nabla\cdot\left( \frac{s^2}{n}\nabla p_0 - p_0\, \chi(p)\,\nabla p \right).
\]
4.4 The role of internal dynamics
The most widely studied examples of organisms whose motion can be described as a velocity jump process are the flagellated bacteria, the most studied of which is E. coli. E. coli generates the force needed for swimming by rotating flagella embedded in the cell membrane; thus the swimming speed is fixed by the hydrodynamic loading, and can be taken to be essentially constant in a specified medium. To search for food or escape an unfavorable environment, E. coli alternates two basic behavioral modes: swimming in a more or less straight line, called a run, and a highly erratic motion called tumbling, the purpose of which is to reorient the cell. Run times are typically much longer than the time spent tumbling, and when bacteria move in a favorable direction (i.e., either in the direction of foodstuffs or away from harmful substances) the run times are increased further. Conversely, when bacteria move in an unfavorable direction the run length decreases and the relative frequency of tumbling increases. The distribution of new directions is not uniform on the unit sphere, but has a bias in the direction of the preceding run. The effect of alternating these two modes of behavior, and in particular of increasing the run length when moving in a favorable direction, is that a bacterium executes a three-dimensional random walk with drift in a favorable direction when observed on a sufficiently long time scale (56; 9) (cf. Figure 5).

Figure 5: The movement of a particle driven by a VJ process, in the absence (left, the unbiased process) and presence (right, the biased process) of an external bias.
To illustrate the main points involved in the inclusion of internal dynamics in macroscopic equations, we begin with a simple example based on E. coli, and assume that there is no interaction between cells. This is a reasonable assumption, since typical bacterial densities are of the order of 10⁸/ml and individual bacteria have a volume per cell of order π µm³; thus the volume fraction is O(10⁻³). Therefore we can consider either the probability of a single walker being at a given position with a given velocity at time t, or the density of walkers, and we choose the latter here.
New technology has led to extensive experimental data at the cell and molecular level, and as a result, more complete descriptions of inter- and intracellular signal transduction are possible for use in population-level models of E. coli. Detailed models of the full signal transduction network exist (87; 102), but simplified cartoon models that capture the essential dynamics involved in aggregation and patterning have been used in recent studies (29; 28; 104). Neglecting body forces and cell growth, the transport equation for the cell density becomes
\[
\frac{\partial p}{\partial t} + \nabla_x\cdot(v p) + \nabla_y\cdot(f p) = -\lambda(y) p + \int_V \lambda(y)\, T(v,v',y)\, p(x,v',y,t)\, dv', \tag{76}
\]
where y = (y₁,y₂)ᵀ. The vector y encodes the excitation and adaptation response of cells to external signals, and λ(y) describes the motor response. The vector y evolves according to
\[
\frac{dy_1}{dt} = \frac{G(S(x,t)) - (y_1 + y_2)}{t_e}, \tag{77}
\]
\[
\frac{dy_2}{dt} = \frac{G(S(x,t)) - y_2}{t_a}, \tag{78}
\]
where G(S) models signal detection via surface receptors and t_e and t_a specify the excitation and adaptation time scales, with t_e ≪ t_a. A complete quantitative understanding of how different parameters at the cell level influence the population dynamics requires incorporating the entire signal transduction network of the bacteria, but the cartoon description can predict biological aggregations and traveling bands of bacteria (cf. Figure 6). Other intracellular variables, such as the metabolic state, can also be included in the transport equation, and this allows for a description of nutrient-dependent cell growth. The existence of traveling wave solutions of the transport equations when coupled with the signal evolution equations was established in (103). A further comparison with the traveling waves obtained from the classical Keller-Segel equations is presented in the chapter by Frantz and Erban.

Figure 6: Simulated E. coli patterns in a cell-based model. (a) Network formation from a uniform cell lawn; (b) aggregate formation from the network; (c) traveling wave formation from a single inoculum in the center. Adapted from (74) with permission.
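The excitation-adaptation structure of (77)–(78) is easy to verify numerically: for a step increase in the signal, y₁ rises rapidly on the time scale t_e and then returns to zero on the slower time scale t_a, while y₂ relaxes to G(S). A minimal sketch, with assumed values G(S) = 1, t_e = 0.1, and t_a = 10:

```python
# Forward-Euler integration of the cartoon model (77)-(78) for a step
# stimulus at t = 0; G(S) = 1, te = 0.1, ta = 10 are assumed values.
te, ta = 0.1, 10.0
G = 1.0
y1, y2 = 0.0, 0.0
dt, T = 0.005, 100.0
peak = 0.0
for _ in range(int(T / dt)):
    dy1 = (G - (y1 + y2)) / te   # fast excitation
    dy2 = (G - y2) / ta          # slow adaptation
    y1 += dt * dy1
    y2 += dt * dy2
    peak = max(peak, y1)
print(peak, y1, y2)
# y1 excites (peak near G) and then adapts back toward zero; y2 -> G(S)
```

Since λ depends on y₁, this transient is what transiently lowers the turning rate during runs up a gradient.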
Macroscopic equations can be derived from the above multiscale models using perturbation methods and moment closure techniques, and this has been carried out successfully for the cartoon description above. The macroscopic equation
\[
\frac{\partial n}{\partial t} = \nabla\cdot\left[ \frac{s^2}{N\lambda_0}\nabla n - G'(S(x,t))\, \frac{b s^2 t_a}{N\lambda_0 (1+\lambda_0 t_a)(1+\lambda_0 t_e)}\, n\nabla S \right], \tag{79}
\]
with b = −∂λ/∂y₁|_{y₁=0} and N the space dimension, was first derived in 1D in (29) and extended to 3D in (28). The major assumption used there and in earlier papers is that the signal gradient is shallow: G′(S)∇S·v ∼ O(ε) sec⁻¹ and t_aλ₀ ∼ O(1), which results in a clear separation of the microscopic time scales from the macroscopic transport and diffusion time scales. Other assumptions include time-independent signals S = S(x), a linear turning rate λ = λ₀ − by₁, and no directional persistence. From this equation one sees that if cells adapt instantly, i.e., if t_a = 0, then the taxis term vanishes and the population simply diffuses. In this case no aggregates will form, which is consistent with experimental observations.
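The dependence of the chemotactic sensitivity in (79) on the adaptation time can be made explicit numerically (parameter values are illustrative, not fitted): the sensitivity vanishes for t_a = 0, as just noted, and saturates for large t_a:

```python
# Chemotactic sensitivity chi = b s^2 ta / (N lam0 (1+lam0 ta)(1+lam0 te))
# from (79), as a function of the adaptation time ta; values are illustrative.
def chi(ta, b=1.0, s=15.0, N=2, lam0=1.0, te=0.1):
    return b * s**2 * ta / (N * lam0 * (1 + lam0 * ta) * (1 + lam0 * te))

print(chi(0.0))                                   # instant adaptation: no taxis
print([chi(ta) for ta in (0.1, 1.0, 10.0, 100.0)])  # monotone, saturating in ta
```

The saturation value b s²/(Nλ₀²(1+λ₀t_e)) shows that slow adaptation maximizes the taxis term for fixed turning-rate parameters.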
New moment closure methods were developed in (104) to account for time-dependent signals S = S(x,t) and nonlinear dependence of the turning rate on internal variables via λ = λ₀ − by₁ + a₂y₁² − ⋯. In the general case considered there, the shallow-gradient assumption becomes
\[
\frac{b}{\lambda_0}\, G'(S)\left(\nabla S\cdot v + \frac{\partial S}{\partial t}\right) \sim O(\varepsilon)\ \mathrm{sec}^{-1}.
\]
As before, the implication of this assumption is the separability of the microscopic and macroscopic time scales. The same equation (79) results from the derivation, with the directional persistence appearing as a scaling of the turning rates by a factor of 1/(1 − ψ_d). The method also applies for any finite system of internal dynamics f(y) in polynomial form.
Chemotaxis equations in the presence of multiple signals and external forces were also derived in (104) in the context of bacterial chemotaxis. In general cells have multiple receptor types and thus can respond to many different signals. How a cell integrates these different signals and responds appropriately depends on the cell type and is not known in general. However, in bacteria different signaling pathways share the same network downstream of the receptors, and therefore different signals are integrated at the signal-processing step. In this case the function G is generally a function of all signals, G = G(S₁, S₂, ⋯, S_m), and the macroscopic equation for the cell density becomes
\[
\frac{\partial n}{\partial t} = \nabla\cdot\left[ D_n \nabla n - \chi_0\, n \left( \frac{\partial G}{\partial S_1}\nabla S_1 + \cdots + \frac{\partial G}{\partial S_m}\nabla S_m \right) \right], \tag{80}
\]
where
\[
D_n = \frac{s^2}{N \lambda_0 (1-\psi_d)},
\]
and
\[
\chi_0 = \frac{b s^2 t_a}{N\lambda_0 \bigl(1 + \lambda_0(1-\psi_d) t_a\bigr)\bigl(1 + \lambda_0(1-\psi_d) t_e\bigr)}.
\]

Figure 7: Streams in a growing Proteus mirabilis colony (left: model; right: experiment). Reproduced from (107) with permission.
Other systems may involve separate transduction pathways for different signals, which will lead to different chemotactic sensitivities for different signals. Examples of how this affects pattern formation are given in (75).

When there are external forces that act on cells, then F ≠ 0 in (53), and additional terms appear in the chemotaxis equations. For example, when E. coli swims, the flagella rotate counterclockwise when viewed from behind, and under typical conditions the Reynolds number is very small. As a result, the motion is both force- and torque-free, and thus the cell body must rotate clockwise. When cells swim close to a surface there is an imbalance in the viscous force between the top and bottom of the cell, which produces a clockwise swimming bias when viewed from above (25). When this bias is incorporated into a cell-based model of aggregation, it leads to spiral density patterns as shown in Figure 7 (107).
This was treated as a velocity-dependent force in a continuum description derived from the transport equation (54), and this led to the macroscopic chemotaxis equation
\[
\frac{\partial n}{\partial t} = \nabla\cdot\Bigl( D_n \nabla n - G'(S)\, n \bigl( \chi_0 \nabla S + \beta_0 (\nabla S)^{\perp} \bigr) \Bigr) \tag{81}
\]
in two space dimensions (104). Here (∇S)^⊥ = (∂_{x₂}S, −∂_{x₁}S)ᵀ is a vector orthogonal to ∇S, and the diffusion coefficient and the chemotactic sensitivities, assuming fast excitation, are as follows:
\[
D_n = \frac{s^2}{2\lambda_0(1-\psi_d) + \dfrac{2\omega_0^2}{\lambda_0(1-\psi_d)}},
\]
\[
\chi_0 = \frac{b(1-\psi_d)s^2\left[\lambda_0(1-\psi_d)\left(\lambda_0(1-\psi_d) + \tfrac{1}{t_a}\right) - \omega_0^2\right]}{2\left(\left(\lambda_0(1-\psi_d)+\tfrac{1}{t_a}\right)^2 + \omega_0^2\right)\left(\lambda_0^2(1-\psi_d)^2 + \omega_0^2\right)}, \tag{82}
\]
\[
\beta_0 = \frac{\omega_0\, b(1-\psi_d)s^2\left(2\lambda_0(1-\psi_d) + \tfrac{1}{t_a}\right)}{2\left(\left(\lambda_0(1-\psi_d)+\tfrac{1}{t_a}\right)^2 + \omega_0^2\right)\left(\lambda_0^2(1-\psi_d)^2 + \omega_0^2\right)}.
\]
The parameter ω₀ measures the swimming bias, while ψ_d is the index of directional persistence. Notice that the swimming bias decreases the diffusion coefficient and the chemotactic sensitivity χ₀, and introduces a second, taxis-like drift in the direction orthogonal to the signal gradient. Since the force here is velocity-dependent, the moment analysis had to be modified accordingly. The method developed in (104) can be used to incorporate the effect of more general imposed forces as well.
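A quick numerical evaluation of the coefficient formulas in (81)–(82) illustrates the claims above (parameter values are illustrative): increasing ω₀ decreases both D_n and χ₀ and turns on the orthogonal drift β₀:

```python
# Coefficients Dn, chi0, beta0 from (81)-(82) as functions of the swimming
# bias w0; the other parameter values are illustrative, not fitted.
def coeffs(w0, b=1.0, s=15.0, lam0=1.0, psi=0.3, ta=10.0):
    mu = lam0 * (1.0 - psi)                         # effective turning rate
    Dn = s**2 / (2.0 * mu + 2.0 * w0**2 / mu)
    den = 2.0 * ((mu + 1.0 / ta)**2 + w0**2) * (mu**2 + w0**2)
    chi0 = b * (1.0 - psi) * s**2 * (mu * (mu + 1.0 / ta) - w0**2) / den
    beta0 = w0 * b * (1.0 - psi) * s**2 * (2.0 * mu + 1.0 / ta) / den
    return Dn, chi0, beta0

for w0 in (0.0, 0.5, 1.0):
    print(w0, coeffs(w0))   # bias lowers Dn and chi0, and makes beta0 nonzero
```

Note that for ω₀ large enough the numerator of χ₀ changes sign, so the drift along ∇S can even reverse while the orthogonal component β₀ persists.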
Figure 8: (a) The positions of 10 randomly chosen cells from Figure 7, with each position recorded every 30 sec by a blue dot. (b) The macroscopic drift given by Equation (79) yields a qualitative explanation of the spirals; the path of an individual cell is marked at successive times t₁, t₂, t₃. Reproduced from (107) with permission.
Equation (81) leads to a heuristic explanation of the handedness of the spirals shown in Figure 7. This is illustrated in Figure 8, where the traces of 10 cells are shown in (a), and the path of an individual cell is shown in (b). At t = t₁ the blue cell detects a signal gradient (red arrow) roughly in the 1 o'clock direction, but according to (81) its average drift is in the direction of the blue arrow, due to the combined effects of the attractant gradient and the swimming bias. This balance is repeated at each successive time point, with the result that the cell approaches the signal source (the red dot) along an inward-spiraling, counterclockwise path as shown in (a).

As remarked earlier, (79) was derived for shallow signal gradients, i.e.,
\[
H \equiv \frac{b}{\lambda_0}\, G'(S)\left(\nabla S\cdot v + \frac{\partial S}{\partial t}\right) \sim O(\varepsilon)\ \mathrm{sec}^{-1}
\]
with ε = s/(Lλ₀) ≈ 10⁻². It remains to be determined whether the chemotaxis equation or its variant forms gives an accurate representation of the population dynamics for bacterial chemotaxis under large spatial or temporal signal gradients. This hinges on how the macroscopic quantities relate to microscopic parameters, and, if the PKS equation fails under certain conditions, what macroscopic equation can be derived. For an ultra-small signal gradient, H ≤ O(ε²) sec⁻¹, the chemotactic response of the population provides a small perturbation, via higher-order terms, of the cell density, which evolves according to a diffusion process with Dₙ = s²/(Nλ₀) (105). For large signal gradients (H ≥ O(1) sec⁻¹) the macroscopic equation should include the nonlinear effects of the gradients in the macroscopic drift, for otherwise the linear approximation may predict a chemotactic velocity that exceeds the cell speed, which is unrealistic since there are no cell-cell or hydrodynamic interactions in the model. In this case the microscopic and macroscopic time scales may overlap, and new techniques are needed to derive macroscopic equations.
In any case, the dependence of the diffusion coefficient D and the chemotactic velocity u_c on H can be obtained by stochastic simulations. For each level of H, 10⁴ stochastic simulations are performed with the same initial conditions. The turning rate is given by
\[
\lambda = \lambda_0\left(1 - \frac{y_1}{\gamma + |y_1|}\right).
\]
The positions of the cell are recorded every half minute, and the data are analyzed to obtain the diffusion rate and the macroscopic drift. Figure 9 compares the diffusion rate and the macroscopic drift inferred from the stochastic simulations of the 2D cell-based model with the predictions from the macroscopic equation (79). The macroscopic description gives a good approximation for H ∼ O(ε) sec⁻¹, but the nonlinearity in the cell-based model for H ∼ O(1) sec⁻¹ cannot be captured by the macroscopic equation, which is linear in H. More specifically, the stochastic simulations of the cell-based model reveal saturation in the macroscopic velocity, and gradient-dependent diffusion rates.
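The inference step described here, recovering a drift and a diffusion rate from positions recorded at fixed intervals, can be sketched on synthetic data (a hypothetical drift-diffusion track generator with assumed u and D stands in for the cell-based model output): the mean and variance of the recorded increments give u and D:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D drift-diffusion increments sampled every dt = 30 s; u_true and
# D_true are assumed values, not taken from the simulations in the text.
u_true, D_true, dt = 0.5, 100.0, 30.0
n_tracks, n_steps = 10000, 20
steps = u_true * dt + rng.normal(0.0, np.sqrt(2.0 * D_true * dt),
                                 (n_tracks, n_steps))
u_est = steps.mean() / dt            # macroscopic drift per unit time
D_est = steps.var() / (2.0 * dt)     # diffusion rate from increment variance
print(u_est, D_est)
```

For the cell-based model the same estimators are applied per component, which is how the anisotropic Dx, Dy, and Dxy in Figure 9 are obtained.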
Figure 9: A comparison of the cell-based and macroscopic predictions of the chemotactic velocity and the diffusion coefficients in 2D. Panels A–C (top row, no swimming bias) and D–F (bottom row, εb = 0.04π) show, as functions of H, the diffusion rates (in units of s²/(2λ₀)), the macroscopic speed u_s/s, and the angle θ between u_s and ∇S, for both the stochastic simulations and the PDE (79). Here Dx and Dy are the diffusion coefficients perpendicular to and parallel to the signal gradient, respectively, and Dxy is the cross-diffusion coefficient. Reproduced from (107) with permission.
4.5 Macroscopic descriptions of eukaryotic cell movement

Many single-celled organisms such as E. coli use flagella or cilia to swim, but eukaryotic cells that lack such structures use one of two basic modes of movement – mesenchymal and amoeboid (10). The former can be characterized as 'crawling' and involves the extension of structures whose protrusion is driven by actin polymerization at the leading edge. This mode dominates in cells such as fibroblasts when moving on a 2D substrate. In the amoeboid mode cells are more rounded and employ shape changes to move – in effect 'jostling through the crowd' or 'swimming'. Leukocytes use this mode for movement through the extracellular matrix in the absence of adhesion sites (57). Moreover, it has been shown that numerous cell types can sense the mechanical properties of their environment and adjust the balance between the modes appropriately (85). Thus pure crawling and pure swimming are the extremes on a continuum of locomotion strategies, and many cells can choose the most effective strategy in a given context.

While 'run-and-tumble' organisms such as E. coli use temporal sensing to modulate their motile behavior, the motile program of eukaryotic cells such as Dd or leukocytes is more complicated. These cells are large enough to detect gradients in extracellular chemical and mechanical signals over the length of the cell, and can amplify small differences in the extracellular signal into large end-to-end intracellular differences that control the motile machinery (78; 19). Given that these cells use spatial sensing, an individual-based model that incorporates direction sensing and movement cannot treat cells as points, but must allow for spatial variation over the finite cell volume (or area in 2D). Recent experiments show that cells in a steady gradient can polarize in the direction of the gradient without extending pseudopods (78), and thus must rely entirely on differences in the signal across the cell body for orientation. Analysis of a model for the cAMP relay pathway in Dd shows that a cell experiences a significant difference in the front-to-back ratio of cAMP when a neighboring cell begins to signal (23), which demonstrates that end-to-end differences sufficient for reliable orientation can be generated for typical extracellular signals; all that is needed is that the direction-sensing pathways respond at least as fast as the cAMP relay pathway.
In addition to the fact that eukaryotes use spatial differences to measure signals, another major difference from 'run-and-tumble' swimmers lies in the force-generation machinery that drives the motion of eukaryotic cells. In the 'run-and-tumble' description of bacterial motion we assumed that jumps were instantaneous, which led to the velocity jump process. Furthermore, the reduction to a diffusion process can still be carried through if there is a finite lifetime in the tumble state, as long as the transitions are generated by a Poisson process (70). In contrast, the directional changes in eukaryotic cells are much slower and depend directly on the signal location, and thus this has to be included in the model. This has been done at the single-cell level, using a model for intracellular cAMP dynamics and treating the cells as deformable viscoelastic ellipsoids that exert forces on the substrate and on one another. This more complex model also produces realistic aggregation patterns (76), but there is a large gap between realistic single-cell models and continuum descriptions. Thus far only relatively simple cell-based models have been used for the derivation of macroscopic descriptions.

One approach is to start with a Smoluchowski equation and to postulate a relationship between the force and the chemotactic gradient. If one assumes that the motive force exerted by a cell is a function of the attractant concentration, one can compute the difference between the force at the leading and trailing edges, and then, by a mean-value argument, obtain a linear relation between this difference and the chemotactic gradient (79). In this approach the chemotactic sensitivity is related to the rate of change of the force with attractant concentration. Support for this comes from experiments which show that as many pseudopods are produced down-gradient as up, but those up-gradient are more successful in generating cell movement (94). However, Dd, and perhaps other cells, adapt to the signal, and simplified models cannot capture this effectively (43). Thus a different approach that incorporates signal transduction and internal dynamics is needed.
In (23; 30), a cell is described as a disk (N = 2) or a ball (N = 3), B_ρ = {ξ ∈ ℝᴺ : ‖ξ‖ ≤ ρ}, and the model is formulated in terms of the position of its center x ∈ ℝᴺ, its velocity v ∈ ℝᴺ, its internal state functions y : B_ρ → ℝ^{d₁} and its membrane state functions z : ∂B_ρ → ℝ^{d₂}. Denote by 𝐲 = (y, z) ∈ Y the combined internal and membrane state, where Y is, in general, an infinite-dimensional Banach space.

The internal state and the acceleration are assumed to evolve according to
\[
\frac{d\mathbf{y}}{dt} = G(\mathbf{y}, S), \tag{83}
\]
\[
\frac{dv}{dt} = F(x, v, \mathbf{y}), \tag{84}
\]
where G : Y × 𝒮 → Y is a mapping between Banach spaces and F : ℝᴺ × ℝᴺ × Y → ℝᴺ is the force per unit mass on the centroid. Thus the acceleration depends on the internal state. In this formulation the combined internal state 𝐲 includes quantities that depend on the spatial location in the cell or on the membrane, and which may, for example, satisfy a reaction-diffusion equation such as
\[
\frac{\partial y}{\partial t} = D\Delta y + f(y) \quad \text{in } B_\rho, \tag{85}
\]
\[
\mathcal{B}(y, z) = 0 \quad \text{on } \partial B_\rho. \tag{86}
\]
Thus the boundary condition for y depends on the membrane state functions z, perhaps to reflect binding or other processes such as scaffold formation. The boundary variables in turn evolve according to the equation
\[
\frac{\partial z}{\partial t} = g(z, S) \quad \text{on } \partial B_\rho, \tag{87}
\]
where S is the external signal; diffusion on the boundary could also be incorporated by suitably altering this equation.
Given the complexity of the single-cell description, it is a formidable task to derive macroscopic equations for populations of eukaryotic cells. A simple model of the form (83)–(84) for a single cell was analyzed in (30). This model captures the essential features of cell movement in response to traveling waves of chemoattractant. Moreover, in that context there is a mapping P : Y → ℝᵏ, k < ∞, satisfying F(x,v,𝐲) = F̄(x,v,P(𝐲)), where F̄ : ℝᴺ × ℝᴺ × ℝᵏ → ℝᴺ, such that a closed evolution equation for the variable z = P(𝐲) can be derived. Then the cellular random walk written in terms of (x, v, 𝐲) can be equivalently formulated in terms of the finite-dimensional state variables (x, v, z). In particular, one can formulate an equation for the probability distribution p(x, v, z) (cf. (76), written for p(x, v, y) in the bacterial case). Asymptotic analysis of this transport equation leads to a system of macroscopic hyperbolic equations that accurately reflect the dynamics of the full system, but it is not known whether that system can in turn be reduced to a PKS equation (30). Other approaches have been used, e.g., generalized PKS equations have been derived beginning with a cellular Potts model (61), but the internal state plays no role in those formulations.
5 Discussion

How cells or organisms move about in space in response to signals, and how they coordinate their movement and form stationary or dynamic patterns, are important questions in many biological processes, including embryonic development, cancer progression, wound healing and biofilm formation. These phenomena have been modeled in two ways in the literature. First, there are continuum models based on phenomenological descriptions that lead to convection-diffusion equations, such as the chemotaxis equation (42), for the evolution of the macroscopic cell density n = n(x,t). However, new experimental technology has advanced our knowledge of how cells detect, transduce, respond to, and propagate external stimuli, and this has led to the second approach, in which detailed cell-based models of collective cell movement toward chemical or mechanical signals are developed (23; 107; 90; 96). Due to the complexity of intracellular dynamics and the large number of cells that are often involved, however, cell-based models are computationally expensive, and new techniques are needed to embed cell-level knowledge into macroscopic equations. This is a difficult problem, comparable to deriving the macroscopic rheological properties of a complex fluid such as the cytosol from knowledge of the molecular interactions, and thus, not surprisingly, progress has been slow.
Here we have reviewed recent progress on deriving chemotaxis equations from space-jump and velocity-jump processes. When swimming bacteria such as E. coli move independently toward chemical signals, their movement can be described as independent velocity jump processes. When cells are well separated and the signal gradient is sufficiently small, chemotaxis equations can be derived from the moment equations of the transport equation that describes the evolution of the cells in phase space (41; 70; 29; 28; 30; 104). When the signal gradient is large, 1D stochastic simulations of a cell-based model show that the movement of cells is more persistent and cells run up the gradient with very little turning. Therefore, statistically, the diffusion rate decreases to zero and the macroscopic velocity increases to the maximum cell speed as the signal gradient increases. This shows that under extremely large signal gradients the macroscopic equations for cell movement are of a more hyperbolic type, and also reflects the fact that the low-order moments of the internal dynamics cannot capture the strongly nonlinear dependence of the turning rate on the signal. This is similar to what is observed in eukaryotic cells, which suppress random movement in the presence of strong chemotactic signals. However, in 2D stochastic simulations, bacteria moving roughly orthogonal to the signal gradient still run and tumble, and this leads to diffusion coefficients that do not approach zero, in contrast with the 1D case.
There are many open problems in this area, a few of which are listed below.

• A more complete analysis of the time scales, and how they depend on the external signal and the internal dynamics, is needed. For example, the second eigenvalue of the turning operator controls the rate at which the diffusion regime is approached, but little has been done to obtain better estimates of the second eigenvalue based on properties of the turning kernel.

• The formulation of VJ processes herein is based on the assumption that the velocity jumps are generated by a Poisson process, but there is some evidence, mentioned earlier (64), that bacteria show abnormally long run lengths that are inconsistent with this assumption. In general the non-streaming component of the transport equation (54) is simply the time derivative of the stochastic process generating the jumps, which may change depending on the signal strength, and the use of other waiting time distributions in the VJ process should be explored.

• To date most derivations of macroscopic equations from a microscopic model have ignored density effects, but these are important in examples of bacterial movement and related problems. Most analyses of density effects begin with continuum descriptions and add forces due to active motile particles (58; 44), but a more fundamental approach is needed.

• In many situations, cell-cell contact and contact-induced signaling are important for collective movement. To describe this one must include cell-cell mechanical interaction terms in F, and cell-cell contact signaling terms in the internal dynamics. A suitable starting point for this may be to add the evolution of internal dynamics to the (x,v) evolution described by the Fokker-Planck-Kramers-Klein equation (15).

• As an adjunct to this, continuum-level descriptions of tissue movement based on microscopic models should be formulated (35), but many difficult homogenization issues arise here.

• The derivation of macroscopic equations for systems in which the finite-dimensional reduction P : Y → ℝᵏ is not possible is an open problem. In fact the entire formulation as a transport equation breaks down, and a new approach is needed.
References10
[1] W. Alt. Biased random walk models for chemotaxis and related diffusion approximations. J. Math. Biol., 9:147–177, 1980.
[2] D. Applebaum. Lévy Processes and Stochastic Calculus, volume 93. Cambridge University Press, 2004.
[3] R. Aris. Vectors, Tensors and the Basic Equations of Fluid Mechanics. Prentice-Hall, New York, 1962.
[4] L. Arnold. Stochastic Differential Equations: Theory and Applications. Wiley-Interscience, 1974.
[5] L. Bachelier. Théorie de la spéculation. Gauthier-Villars, 1900.
[6] I. L. Bajec and F. H. Heppner. Organized flight in birds. Animal Behaviour, 78(4):777–789, 2009.
[7] M. N. Barber and B. W. Ninham. Random and Restricted Walks: Theory and Applications, volume 10. Gordon and Breach, New York, 1970.
[8] H. C. Berg and D. A. Brown. Chemotaxis in Escherichia coli analysed by three-dimensional tracking. Nature, 239:500–504, 1972.
[9] H. C. Berg. Random Walks in Biology. Princeton University Press, 1983.
[10] F. Binamé, G. Pawlak, P. Roux, and U. Hibner. What makes cells move: requirements and obstacles for spontaneous cell motility. Molecular BioSystems, 6(4):648–661, 2010.
[11] L. Bocquet and J. Piasecki. Microscopic derivation of non-Markovian thermalization of a Brownian particle. J. Stat. Phys., 87(5):1005–1035, 1997.
[12] M. Born and H. S. Green. A general kinetic theory of liquids. I. The molecular distribution functions. Proc. Roy. Soc. London Ser. A, 188(1012):10, 1946.
[13] V. Capasso and D. Bakstein. An Introduction to Continuous-time Stochastic Processes: Theory, Models, and Applications to Finance, Biology, and Medicine. Birkhäuser, 2005.
[14] V. Capasso and D. Morale. Asymptotic behavior of a system of stochastic particles subject to nonlocal interactions. Stochastic Analysis and Applications, 27(3):574–603, 2009.
[15] J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil. Particle, kinetic, and hydrodynamic models of swarming. In Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, pages 297–336, 2010.
[16] C. Cercignani, R. Illner, and M. Pulvirenti. The Mathematical Theory of Dilute Gases. Springer, New York, 1994.
[17] C. Cercignani. Mathematical Methods in Kinetic Theory. Plenum, second edition, 1969.
[18] S. Chandrasekhar. Stochastic problems in physics and astronomy. Reviews of Modern Physics, 15:2–89, 1943.
[19] C. Y. Chung, S. Funamoto, and R. A. Firtel. Signaling pathways controlling cell polarity and chemotaxis. Trends Biochem. Sci., 26(9):557–566, 2001. Review.
[20] R. V. Churchill. Operational Mathematics. McGraw-Hill, 1958.
[21] E. A. Codling, M. J. Plank, and S. Benhamou. Random walk models in biology. J. Roy. Soc. Interface, 5(25):813, 2008.
[22] F. Cucker and S. Smale. On the mathematical foundations of learning. Bull. Am. Math. Soc., 39(1):1–49, 2001.
[23] J. C. Dallon and H. G. Othmer. A continuum analysis of the chemotactic signal seen by Dictyostelium discoideum. J. Theor. Biol., 194(4):461–483, 1998.
[24] B. Davis. Reinforced random walks. Probab. Theory Rel. Fields, pages 203–229, 1990.
[25] W. R. DiLuzio, L. Turner, M. Mayer, P. Garstecki, D. B. Weibel, H. C. Berg, and G. M. Whitesides. Escherichia coli swim on the right-hand side. Nature, 435(7046):1271–1274, 2005.
[26] A. M. Edwards, R. A. Phillips, N. W. Watkins, M. P. Freeman, E. J. Murphy, V. Afanasyev, S. V. Buldyrev, M. G. E. da Luz, E. P. Raposo, H. E. Stanley, et al. Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. Nature, 449(7165):1044–1048, 2007.
[27] A. Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Ann. der Physik, 17:549–560, 1905.
[28] R. Erban and H. G. Othmer. From signal transduction to spatial pattern formation in E. coli: A paradigm for multi-scale modeling in biology. Multiscale Modeling and Simulation, 3(2):362–394, 2005.
[29] R. Erban and H. G. Othmer. From individual to collective behavior in bacterial chemotaxis. SIAM J. Appl. Math., 65(2):361–391, 2004.
[30] R. Erban and H. G. Othmer. Taxis equations for amoeboid cells. J. Math. Biol., 54:847–885, 2007.
[31] W. Feller. An Introduction to Probability Theory. Wiley, New York, 1968.
[32] R. Ford and D. A. Lauffenburger. A simple expression for quantifying bacterial chemotaxis using capillary assay data: Application to the analysis of enhanced chemotactic responses from growth-limited cultures. Math. Biosci., 109(2):127–150, 1992.
[33] M. Franceschetti. When a random walk of fixed length can lead uniformly anywhere inside a hypersphere. Journal of Statistical Physics, 127(4):813–823, 2007.
[34] R. Fürth. Die Brownsche Bewegung bei Berücksichtigung einer Persistenz der Bewegungsrichtung. Zeitsch. f. Physik, 2:244–256, 1920.
[35] J. Galle, M. Hoffmann, and G. Aust. From single cells to tissue architecture - a bottom-up approach to modelling the spatio-temporal organisation of complex multi-cellular systems. J. Math. Biol., 58(1-2):261–283, 2009.
[36] C. W. Gardiner. Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. Springer-Verlag, 2nd edition, 1985.
[37] S. Goldstein. On diffusion by discontinuous movements, and on the telegraph equation. Quart. J. Mech. Applied Math., VI:129–156, 1951.
[38] S.-Y. Ha and E. Tadmor. From particle to kinetic and hydrodynamic descriptions of flocking. Kinetic and Related Models, 1(3):415–435, 2008.
[39] R. L. Hall. Amoeboid movement as a correlated walk. J. Math. Biol., 4:327–335, 1977.
[40] C. R. Heathcote and J. E. Moyal. The random walk [in continuous time] and its application to the theory of queues. Biometrika, 46(3-4):400, 1959.
[41] T. Hillen and H. G. Othmer. The diffusion limit of transport equations derived from velocity jump processes. SIAM J. Appl. Math., 61:751–775, 2000.
[42] T. Hillen and K. Painter. A user's guide to PDE models for chemotaxis. J. Math. Biol., 2008.
[43] T. Höfer, J. A. Sherratt, and P. K. Maini. Cellular pattern formation during Dictyostelium aggregation. Physica D, 85(3):425–444, 1995.
[44] C. Hohenegger and M. J. Shelley. Stability of active suspensions. Physical Review E, 81(4):046311, 2010.
[45] D. Horstmann. From 1970 until present: the Keller-Segel model in chemotaxis and its consequences I. Jahresbericht der DMV, 105(3):103–165, 2003.
[46] J. Hu and H. G. Othmer. A theoretical analysis of filament length fluctuations in actin and other polymers. J. Math. Biol., to appear, 2011.
[47] J. M. Hutchinson and P. M. Waser. Use, misuse and extensions of "ideal gas" models of animal encounter. Biological Reviews, 82(3):335, 2007.
[48] J. O. Irwin. The frequency distribution of the difference between two independent variates following the same Poisson distribution. Journal of the Royal Statistical Society, 100(3):415–416, 1937.
[49] M. Kac. Some stochastic problems in physics and mathematics. Magnolia Petroleum Company Colloquium Lectures in the Pure and Applied Sciences, No. 2 (mimeographed lecture notes), 1956.
[50] K. Kang, B. Perthame, A. Stevens, and J. J. L. Velázquez. An integro-differential equation model for alignment and orientational aggregation. Journal of Differential Equations, 246(4):1387–1421, 2009.
[51] S. Karlin and H. Taylor. A First Course in Stochastic Processes. Academic Press, New York, 1975.
[52] E. Keller and L. Segel. Initiation of slime mold aggregation viewed as an instability. J. Theor. Biol., 26:399–415, 1970.
[53] V. M. Kenkre. The generalized master equation and its applications. In Statistical Mechanics and Statistical Methods in Theory and Application. Plenum Publishing Corporation, 1977.
[54] V. M. Kenkre, E. W. Montroll, and M. F. Shlesinger. Generalized master equations for continuous-time random walks. Journal of Statistical Physics, 9(1):45–50, 1973.
[55] J. C. Kluyver. A local probability theorem. In Ned. Akad. Wet. Proc. A, volume 8, pages 341–350, 1906.
[56] D. E. Koshland. Bacterial Chemotaxis as a Model Behavioral System. Raven Press, New York, 1980.
[57] T. Lämmermann, B. L. Bader, S. J. Monkley, T. Worbs, R. Wedlich-Söldner, K. Hirsch, M. Keller, R. Förster, D. R. Critchley, R. Fässler, et al. Rapid leukocyte migration by integrin-independent flowing and squeezing. Nature, 453:51–55, 2008.
[58] J. Lega and T. Passot. Hydrodynamics of bacterial colonies: a model. Phys. Rev. E, 67(3):031906, 2003.
[59] L. Li, S. F. Nørrelykke, and E. C. Cox. Persistent cell motion in the absence of external signals: a search strategy for eukaryotic cells. PLoS One, 3(5):e2093, 2008.
[60] R. L. Liboff. Kinetic Theory: Classical, Quantum, and Relativistic Descriptions. Springer-Verlag, 2003.
[61] P. M. Lushnikov, N. Chen, and M. Alber. Macroscopic dynamics of biological cells interacting via chemotaxis and direct contact. Physical Review E, 78(6):061904, 2008.
[62] R. M. Macnab. Sensing the environment: Bacterial chemotaxis. In R. Goldberg, editor, Biological Regulation and Development, pages 377–412. Plenum Press, New York, 1980.
[63] U. M. B. Marconi and P. Tarazona. Nonequilibrium inertial dynamics of colloidal systems. The Journal of Chemical Physics, 124:164901, 2006.
[64] F. Matthäus, M. Jagodic, and J. Dobnikar. E. coli superdiffusion and chemotaxis – search strategy, precision, and motility. Biophysical Journal, 97(4):946–957, 2009.
[65] R. Metzler and J. Klafter. The random walk's guide to anomalous diffusion: a fractional dynamics approach. Physics Reports, 339(1):1–77, 2000.
[66] E. W. Montroll and G. H. Weiss. Random walks on lattices. II. Journal of Mathematical Physics, 6:167, 1965.
[67] G. Naldi. Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences. Springer, 2010.
[68] K. Oelschläger. A fluctuation theorem for moderately interacting diffusion processes. Probab. Theory Rel. Fields, 74:591–616, 1987.
[69] A. Okubo. Diffusion and Ecological Problems: Mathematical Models. Springer-Verlag, New York, 1980.
[70] H. G. Othmer and T. Hillen. The diffusion limit of transport equations 2: Chemotaxis equations. SIAM J. Appl. Math., 62:1222–1250, 2002.
[71] H. G. Othmer. Interactions of Reaction and Diffusion in Open Systems. PhD thesis, University of Minnesota, 1969.
[72] H. G. Othmer, S. R. Dunbar, and W. Alt. Models of dispersal in biological systems. J. Math. Biol., 26:263–298, 1988.
[73] H. G. Othmer and A. Stevens. Aggregation, blowup, and collapse: The ABC's of taxis in reinforced random walks. SIAM J. Appl. Math., 57(4):1044–1081, 1997.
[74] H. G. Othmer, K. Painter, D. Umulis, and C. Xue. The intersection of theory and application in biological pattern formation. Math. Mod. Nat. Phenom., 4:3–79, 2009.
[75] K. J. Painter, P. K. Maini, and H. G. Othmer. Development and applications of a model for cellular response to multiple chemotactic cues. J. Math. Biol., 41(4):285–314, 2000.
[76] E. Palsson and H. G. Othmer. A model for individual and collective cell movement in Dictyostelium discoideum. Proc. Nat. Acad. Sci., 97:11448–11453, 2000.
[77] G. C. Papanicolaou. Asymptotic analysis of transport processes. Bulletin AMS, 81:330–392, 1975.
[78] C. A. Parent and P. N. Devreotes. A cell's sense of direction. Science, 284(5415):765–770, 1999. Review.
[79] E. Pate and H. G. Othmer. Differentiation, cell sorting and proportion regulation in the slug stage of Dictyostelium discoideum. J. Theor. Biol., 118:301–319, 1986.
[80] C. S. Patlak. Random walk with persistence and external bias. Bull. of Math. Biophys., 15:311–338, 1953.
[81] K. Pearson. The problem of the random walk. Nature, 72(1865):294, 1905.
[82] R. Pemantle. A survey of random processes with reinforcement. Probability Surveys, 4:1–79, 2007.
[83] B. Perthame. Mathematical tools for kinetic equations. Bull. Am. Math. Soc., 41(2):205–244, 2004.
[84] Lord Rayleigh. On the resultant of a large number of vibrations of the same pitch and of arbitrary phase. Phil. Mag., 10(73):491, 1880.
[85] J. Renkawitz, K. Schumann, M. Weber, T. Lämmermann, H. Pflicke, M. Piel, J. Polleux, J. P. Spatz, and M. Sixt. Adaptive force transmission in amoeboid cell migration. Nature Cell Biology, 11(12):1438–1443, 2009.
[86] K. I. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, 1999.
[87] P. A. Spiro, J. S. Parkinson, and H. G. Othmer. A model of excitation and adaptation in bacterial chemotaxis. Proc. Nat. Acad. Sci., 94(14):7263–7268, 1997.
[88] H. Spohn. Large Scale Dynamics of Interacting Particles. Springer-Verlag, New York, 1991.
[89] A. Stevens. The derivation of chemotaxis equations as limit dynamics of moderately interacting stochastic many-particle systems. SIAM J. Appl. Math., 61:183–212, 2000.
[90] M. A. Stolarska, Y. Kim, and H. G. Othmer. Multi-scale models of cell and tissue dynamics. Philosophical Transactions of the Royal Society A, 367(1902):3525, 2009.
[91] D. W. Stroock. Some stochastic processes which arise from a model of the motion of a bacterium. Probab. Theory Rel. Fields, 28:305–315, 1974.
[92] G. I. Taylor. Diffusion by continuous movements. Proc. Lon. Math. Soc., 20:196–212, 1920.
[93] N. van Kampen. Stochastic Processes in Physics and Chemistry. North-Holland, Amsterdam, 3rd edition, 2007.
[94] B. J. Varnum-Finney, E. Voss, and D. R. Soll. Frequency and orientation of pseudopod formation of Dictyostelium discoideum amebae chemotaxing in a spatial gradient: Further evidence for a temporal mechanism. Cell Motil. Cytoskeleton, 8(1):18–26, 1987.
[95] T. Vicsek and A. Zafiris. Collective motion. Submitted to Rev. Mod. Phys.
[96] D. C. Walker, G. Hill, S. M. Wood, R. H. Smallwood, and J. Southgate. Agent-based computational modeling of wounded epithelial cell monolayers. IEEE Trans. Nanobioscience, 3(3):153–163, 2004.
[97] Q. D. Wang. The global solution of the n-body problem. Celestial Mechanics and Dynamical Astronomy, 50:73–88, 1991.
[98] G. H. Weiss. Aspects and Applications of the Random Walk, volume 121. North-Holland, Amsterdam, 1994.
[99] D. Widder. The Laplace Transform. Princeton University Press, Princeton, NJ, 1946.
[100] G. Wilemski. On the derivation of Smoluchowski equations with corrections in the classical theory of Brownian motion. J. Stat. Phys., 14(2):153–169, 1976.
[101] Y. Wu, A. D. Kaiser, Y. Jiang, and M. S. Alber. Periodic reversal of direction allows myxobacteria to swarm. Proc. Nat. Acad. Sci., 106(4):1222, 2009.
[102] X. Xin and H. G. Othmer. A 'trimer-of-dimers' model of signal transduction in Escherichia coli. Submitted, 2011.
[103] C. Xue, H. J. Hwang, K. J. Painter, and R. Erban. Travelling waves in hyperbolic chemotaxis equations. Bull. Math. Biol., 2011.
[104] C. Xue and H. G. Othmer. Multiscale models of taxis-driven patterning in bacterial populations. SIAM J. Appl. Math., 70(1):133–167, 2009.
[105] C. Xue. Mathematical Models of Taxis-driven Bacterial Pattern Formation. PhD thesis, University of Minnesota, 2008.
[106] C. Xue, H. G. Othmer, and R. Erban. From individual to collective behavior of unicellular organisms: Recent results and open problems. In AIP Conference Proceedings, volume 1167, pages 3–14, 2009.
[107] C. Xue, E. O. Budrene, and H. G. Othmer. Radial and spiral stream formation in Proteus mirabilis colonies. PLoS Computational Biology, to appear.