
Journal of Economic Behavior and Organization, Vol. 28 (1995) 241-256

The coexistence of conventions

Robert Sugden
School of Economic and Social Studies, University of East Anglia, Norwich NR4 7TJ, United Kingdom

Received 20 April 1993; revised 21 November 1994

Abstract

The paper presents a model of repeated, anonymous play of a 2 × 2 game with two strict Nash equilibria, one of which risk-dominates the other. Stage games differ according to their location in a “social space”; players receive noisy signals of these locations. The dynamics of the model are such that strategy choices, conditional on any signal, gravitate towards whichever strategy maximizes expected utility. Social space need not be uniform. Certain kinds of non-uniformity in social space can allow stationary states in which the two equilibria coexist, each being played in a different region.

JEL classification: C73

Keywords: Convention; Coordination game; Equilibrium selection

There are many commodities that could serve as money in any economy. If we expect everyone else to trade in Deutschmarks, it pays each of us to take stocks of Deutschmarks when shopping; if we expect everyone else to trade in cigarettes, it pays each of us to take cigarettes. There are many ways of assigning priority at road junctions: if we expect everyone else to give priority to the vehicle on the right, it pays each of us to give priority to the right; if we expect everyone else to give priority to the vehicle on the more important road, then it pays each of us to do the same. There are many newspapers in which job vacancies can be advertised; employers want to advertise in those newspapers that job-seekers are most likely to buy, and job-seekers want to buy those newspapers in which employers are most likely to advertise. These are examples of a significant class of social interactions, in which there are at least two different rules that people might follow when interacting with others; if an individual can expect everyone with whom she interacts to follow any specific one of these rules, she would choose to follow the same rule herself. Such a rule will be called a convention.

A convention is a self-regulating social institution, and a building block of social order. From David Hume (1740) onwards, there has been a tradition in philosophy and social theory which seeks to explain how conventions can emerge out of the repeated, uncoordinated interactions of self-interested individuals.¹ In that tradition, conventions are understood as paradigms of spontaneous order; understanding the emergence and stability of conventions is seen as a first step towards understanding undesigned institutions. This paper belongs to that tradition.

¹ Modern contributors to this tradition include Schelling (1960), Lewis (1969), and Sugden (1986).

Once established, conventions can persist over long periods; and alternative conventions often seem to be able to coexist in a fairly stable relationship, each being used in a different domain of interactions. For example, many linguistic boundaries have changed little over hundreds of years. Conventions of speech and dress differ between social classes, ethnic groups, and age groups. Conventions which determine who gets served first at a counter differ according to the type of counter: British people, for example, are much more likely to queue in banks than in pubs. And so on. Nevertheless, over time, some conventions encroach on others; sometimes, one convention is completely supplanted by another.

My object in this paper is to investigate the stability - or instability - of conventions by studying the conditions under which conventions can and cannot coexist. I shall present a stylized model both of the coexistence of conventions, and of the process by which one convention replaces another.

1. Existing theories of the evolution of conventions

One of the simplest and best-known theories of the evolution of conventions was developed for theoretical biology by Maynard Smith and various collaborators (e.g. Maynard Smith and Price, 1973; Maynard Smith, 1982) and then applied to social interaction by, among others, Axelrod (1981), Axelrod (1984) and myself (Sugden, 1986; Sugden, 1989). The basic idea of this theory can be summarized as follows. Consider a symmetrical game for two players which is played repeatedly by the members of some large population. Pairs of players are formed by independent random draws from the population. Each member of the population plays many games. Games are anonymous in the sense that no one remembers the identity of any particular opponent. Thus no one, when playing against a particular opponent, can condition her strategy on that opponent's past behaviour.

For each member of the population, the expected utility of playing a given strategy depends on the frequency with which the alternative strategies are being played in the population as a whole. It is assumed that, if one strategy yields a higher expected utility than another, the frequency with which the first strategy is played will tend to increase relative to that of the second. In biological applications, utility is understood as a measure of fitness, and gravitation towards successful strategies occurs by natural selection. Different strategies are associated with different patterns of genes, and organisms which follow more successful strategies are more successful at replicating themselves. New strategies are introduced into the population by the random shuffling of genes that occurs in sexual reproduction, and by mutation. In social applications, in contrast, utility is understood as a measure of preference, and gravitation occurs as a result of learning: individuals tend to switch from less successful to more successful strategies. New strategies are introduced into the population as a result of experiments made by individuals. Successful experiments are repeated and imitated, while unsuccessful ones are not.
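As an illustration of this gravitation process, the sketch below iterates a simple discrete-time adjustment rule in which the population share of strategy A grows whenever A earns the higher expected utility against the current population mix. The payoff numbers and the adjustment rule itself are assumptions chosen only for concreteness; they are not part of the theory described above.

```python
# Illustrative sketch (not part of the model described above): the share of
# A-players in a large population drifts towards whichever strategy earns the
# higher expected utility against the current population mix.

# Hypothetical payoffs U[(own, opponent)] for a 2 x 2 coordination game.
U = {("A", "A"): 4, ("A", "B"): 2,
     ("B", "A"): 1, ("B", "B"): 3}

def expected_utility(strategy, p_a):
    """Expected utility of `strategy` when a share p_a of opponents play A."""
    return p_a * U[(strategy, "A")] + (1 - p_a) * U[(strategy, "B")]

def step(p_a, rate=0.1):
    """Move the share of A-players a little towards the better-performing strategy."""
    gap = expected_utility("A", p_a) - expected_utility("B", p_a)
    if gap > 0:
        return min(1.0, p_a + rate * (1 - p_a))
    if gap < 0:
        return max(0.0, p_a - rate * p_a)
    return p_a

p = 0.55                      # initial share of A-players
for _ in range(60):
    p = step(p)
print(round(p, 3))            # approaches 1: A spreads from this starting point
```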

Given this model of interaction, there is a state of equilibrium if the frequency distribution of strategies in the population has no tendency to change. But whether or not there is such a tendency for change depends on the process by which new strategies enter the population.

Maynard Smith's equilibrium concept of evolutionary stability rests on two assumptions about this process. First, when a new strategy enters the population, the frequency with which it is initially played is vanishingly small. Second, each player's opponents are a random sample of the whole population, so that for each player, irrespective of her own strategy, the probability of meeting an opponent who is following the new strategy is equal to the frequency of that strategy in the population. Thus, if initially the whole population is playing some strategy S, a new or mutant² strategy R cannot invade unless one of two conditions is satisfied: either against an S-playing opponent, an R-player does strictly better than an S-player; or against an S-playing opponent, an R-player does at least as well as an S-player, and against an R-playing opponent, the R-player does strictly better.³ In a game in which the positions of the two players are symmetrical, any symmetrical and strict Nash equilibrium is evolutionarily stable. That is, a population in which everyone plays in accordance with the same symmetrical and strict Nash equilibrium cannot be invaded by any other strategy.

² I shall use the term “mutant” to refer both to strategies which represent biological mutations and (in the context of social evolution) to strategies which result from experiment or error.

³ If R-players do just as well as S-players against both types of opponent, it is possible for the new strategy to spread by random drift.
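The two invasion conditions just stated can be written as a small test in code. The sketch below is illustrative only: u(own, opp) denotes the payoff to a player choosing own against an opponent choosing opp, and the payoff numbers are hypothetical.

```python
# Illustrative sketch: the invasion test described in the text, for a symmetric
# two-strategy game. u(own, opp) is the payoff to `own` against `opp`.

def can_invade(u, R, S):
    """True if a vanishingly rare mutant R can invade a population playing S:
    either R does strictly better than S against an S-opponent, or R does at
    least as well against S and strictly better against an R-opponent."""
    if u(R, S) > u(S, S):
        return True
    return u(R, S) >= u(S, S) and u(R, R) > u(S, R)

# Hypothetical payoffs for a coordination game with two strict Nash equilibria.
payoffs = {("A", "A"): 4, ("A", "B"): 2, ("B", "A"): 1, ("B", "B"): 3}
u = lambda own, opp: payoffs[(own, opp)]

print(can_invade(u, "B", "A"))   # False: (A, A) is a strict Nash equilibrium
print(can_invade(u, "A", "B"))   # False: (B, B) is also a strict Nash equilibrium
```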

Recently, Kandori et al. (1993) and Young (1993) have constructed models of populations of the kind analyzed by Maynard Smith, but with explicitly specified dynamics. In these models, the probability with which a player chooses any strategy depends on the expected utility of playing that strategy, given the strategies actually chosen by other players in the immediate past: players gravitate towards those strategies that are best replies to the strategies that their potential opponents chose in the past. For this kind of dynamic process, symmetrical strict Nash equilibria are stationary states. Superimposed on this process, however, is a random process of mutation (or error). There is always some positive probability that the same mutant strategy will be played by many individuals at the same time. Because of this possibility, any symmetrical strict Nash equilibrium is to some degree vulnerable to invasion by any other. It turns out that equilibria which are risk-dominant (as defined by Harsanyi and Selten, 1988) are less vulnerable to invasion by risk-dominated equilibria than risk-dominated equilibria are to invasion by risk-dominant ones. However, as Ellison (1993) points out, this conclusion may have little significance for large populations, unless we are interested in behaviour over the very long run (perhaps measured in billions of years). In typical cases, the invasion of one symmetrical strict Nash equilibrium by another is such an improbable event that we might as well treat it as if it had a probability of zero.

Invasion can become a much less improbable event if the interactions of individuals who play a mutant strategy are “clustered”, so that for a given mutant individual, the probability of its playing a similar mutant is greater than the frequency of mutants in the population. Axelrod (1981) (also Axelrod, 1984) and Axelrod and Hamilton (1981) propose a concept of equilibrium which is similar to evolutionary stability, but which rests on a stronger test of invulnerability to invasion: an equilibrium must be invulnerable to “invasion by a cluster”. Axelrod and Hamilton assume that, when a mutant strategy initially invades a population, each mutant has some probability p of playing against another mutant of the same type; in contrast to Maynard Smith's analysis, p need not be vanishingly small. Initially, non-mutants are assumed to play one another with probability very close to 1. In a game in which the players' positions are symmetrical, if one symmetrical Nash equilibrium strategy S Pareto-dominates another symmetrical Nash equilibrium strategy R, then R cannot invade S, whatever the value of p. (Even if p = 1, so that mutant R-players play only among themselves, they will do less well than S-players.) But there are conditions under which a Pareto-dominant strategy can invade a Pareto-dominated one.
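A rough numerical illustration of the clustering argument: a rare R-mutant who meets another mutant with probability p earns approximately p·u(R, R) + (1 - p)·u(R, S), while incumbents earn approximately u(S, S). The sketch below works through this comparison with hypothetical payoffs in which R Pareto-dominates S; the payoff numbers and the approximation itself are assumptions made for illustration, not part of Axelrod and Hamilton's formal analysis.

```python
# Illustrative sketch: invasion by a cluster in the sense described above.
# Each rare R-mutant meets another mutant with probability p; incumbents are
# taken to meet other incumbents with probability close to 1.

def cluster_can_invade(u, R, S, p):
    """True if clustered R-mutants earn more than S-incumbents."""
    mutant_payoff = p * u(R, R) + (1 - p) * u(R, S)
    incumbent_payoff = u(S, S)
    return mutant_payoff > incumbent_payoff

# Hypothetical payoffs in which R Pareto-dominates S: u(R, R) > u(S, S).
payoffs = {("R", "R"): 5, ("R", "S"): 0, ("S", "R"): 1, ("S", "S"): 3}
u = lambda own, opp: payoffs[(own, opp)]

for p in (0.2, 0.5, 0.8):
    print(p, cluster_can_invade(u, "R", "S", p))
# The critical p here is (3 - 0) / (5 - 0) = 0.6: a sufficiently clustered
# Pareto-dominant mutant can invade. With the roles reversed, S can never
# invade R, since even at p = 1 its payoff u(S, S) = 3 falls short of u(R, R) = 5.
```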

Axelrod and Hamilton provide some motivation for using this model of clustering in biological contexts. Their idea is that interactions and kinship both tend to be geographically localized; thus, for any individual, a significant proportion of the games it plays will be with close kin. If strategies are genetically determined, then closely related individuals are likely to play the same strategies as one another; and this will remain true as an invading gene gradually disperses through a population. This allows us to set a minimum value for the probability with which each individual plays against an opponent which follows the same strategy as itself; this is the biological interpretation of the parameter p. But there is no obvious analogue for kinship in the case of social evolution. And even in biological contexts, it is not obvious that in the early stages of an invasion, each non-mutant will play other non-mutants with probability close to 1. If interactions are geographically localized, an invading cluster of mutants will tend to be concentrated in a particular area; then for non-mutants who are close to this area, the probability of meeting a mutant will not be vanishingly small.

A more satisfactory approach is to build an explicit model of the network of interactions, and then to investigate whether an initially localized invasion by a mutant strategy can spread throughout the network. An early example of this kind of model of localized social interaction is due to Schelling (1971), who offers an explanation of patterns of racial segregation. Axelrod (1984), pp. 158-168, presents a very basic model of localized interaction and reports some simulations, but does not provide much theoretical analysis. More substantial models have been presented by Boyer and Orléan (1992), Blume (1993), Ellison (1993), and Goyal and Janssen (1993). These four models have many similarities; I shall focus on Ellison's model, because it is the one most closely related to the approach I shall follow in this paper.

Ellison models a population in which pairs of individuals play the coordination game which is shown below.

Game 1

                 Player 2
                 A          B
Player 1    A    a, a       c, d
            B    d, c       b, b

a > d,  b > c,  (a - d) > (b - c)

This is a symmetrical game in which each player chooses one of two strategies, A and B. Each of the strategy pairs (A, A) and (B, B) is a strict Nash equilibrium; I shall call these convention A and convention B. Convention A risk-dominates convention B. No restriction is placed on whether convention A Pareto-dominates convention B (the case in which a > b) or vice versa (the case in which b > a).

Each player has a fixed location around a circle. The neighbours of a player are the n nearest players on each side, where n is some small constant. (In a typical example, n = 4, so that each player has eight neighbours.) As in Maynard Smith's model, each player plays repeatedly against anonymous opponents. The difference is that games are always between neighbours; in any given game for a given player, each of that player's neighbours has an equal probability of being his opponent.

The dynamic processes in Ellison's model are similar to those in the model of Kandori et al. (1993), described above. In Ellison's model, however, it is much easier than in Kandori et al.'s for a risk-dominant convention to invade a population in which a risk-dominated convention is being followed. All that is needed to set off such an invasion is an appropriate coincidence of mutations within a neighbourhood. If at any time, n adjacent players all follow the risk-dominant convention, this pattern of play will spread to all players. In contrast, a population in which everyone plays the risk-dominant convention can be invaded only if there is the right coincidence of mutations in the whole population. If neighbourhoods are small, the population is large, and mutations are not too infrequent, we can expect the system to gravitate quickly to a state in which all players are following the risk-dominant convention.
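The sketch below simulates a stylized version of this process on a circle: every player simultaneously best-responds, choosing A whenever the share of A-players among her 2n neighbours is at least α. The parameter values (N = 60, n = 4, α = 0.25), the simultaneous updating and the deterministic best-response rule are simplifying assumptions, not Ellison's exact specification; the point is only to show a block of n adjacent A-players taking over the whole circle.

```python
# Illustrative sketch: deterministic best-response dynamics on a circle of N
# players, each matched with the n nearest players on either side. A player
# chooses A when the share of A among her 2n neighbours is at least alpha;
# alpha < 0.5 because convention A risk-dominates convention B.

N, n, alpha = 60, 4, 0.25            # assumed parameter values

def share_of_A_neighbours(state, i):
    neighbours = [state[(i + k) % N] for k in range(-n, n + 1) if k != 0]
    return neighbours.count("A") / len(neighbours)

def step(state):
    return ["A" if share_of_A_neighbours(state, i) >= alpha else "B"
            for i in range(N)]

state = ["B"] * N
for i in range(n):                   # a single block of n adjacent A-mutants
    state[i] = "A"

rounds = 0
while "B" in state:
    state = step(state)
    rounds += 1
print("convention A reaches every player after", rounds, "rounds")
```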

Irrespective of the frequency of mutations, Ellison's model does not permit what I shall call the localized coexistence of conventions. Consider the pattern of play at a point in time. In the context of Ellison's model, I shall say that a given convention i ∈ {A, B} is universally followed at a location ℓ if (i) the player at ℓ plays strategy i with a probability of unity, and (ii) for a randomly-selected stage game involving this player, the ex ante probability that his opponent will choose strategy i is also unity. A pattern of play has the property of localized coexistence if (i) there is some location at which convention A is universally followed, (ii) there is some location at which convention B is universally followed, and (iii) in the absence of mutations, this pattern of play is a stationary state. In Ellison's model, localized coexistence requires that each of 2n + 1 adjacent players follows convention A, so that the player in the centre of this neighbourhood is certain to meet an A-playing opponent in any interaction. It also requires that each of 2n + 1 adjacent players follows convention B. But two such neighbourhoods cannot persist: even n adjacent A-playing individuals are sufficient to set in motion a process that leads to the elimination of B-playing behaviour.

I shall now present a model which, although quite similar to Ellison’s, is compatible with the localized coexistence of conventions. There are two main novelties in my model. The first of these will be developed in Section 2: in my model, players do not have locations, but their interactions do. This allows a conception of continuous social space. The second and more significant novelty will be developed in Section 3: working with a continuous social space, it becomes natural to consider how the density of interactions varies across that space. Ellison’s model turns out to be analogous with the special case in which the density of interactions is constant across space. Variations in this density can allow conventions to coexist.

2. A model with continuous social space

Consider a large population of identical individuals, from which pairs are repeatedly and randomly drawn to play the coordination game that was described in Section 1. Each stage game - that is, each case in which two players are selected to play the coordination game - has a location in a social space. The locations of stage games are determined by a random process. In interpreting the model, social space might be understood in terms of, for example, geography, ethnicity or class. The players have imperfect knowledge of the locations of the stage games that they play. Each player in a stage game receives a private signal of the game's location; signals are dispersed around the true location, the errors in the two signals being independent. The players know neither the true location nor each other's signals.

In the context of this model, I shall say that convention i ∈ {A, B} is universally followed at location ℓ if (i) the probability that any player chooses strategy i, conditional on his signal being ℓ, is unity, and (ii) the probability that any player chooses strategy i, conditional on his opponent's signal being ℓ, is also unity. (Thus, a player whose signal is ℓ can be sure that his opponent will choose strategy i.) Localized coexistence is then defined as before.

It may help to think in terms of an example. Suppose that the coordination game is played between the captains of ships which meet on collision courses. The two conventions are different rules which, if followed by both captains, will ensure that their ships do not collide. Social space is to be interpreted as a shipping channel. Each captain independently plots the common location of the two ships; these plots, which are independent random variables, dispersed around the true location, correspond with the private signals of the model. The density of shipping, which may vary along the channel, corresponds with the density of interaction.

A player may condition her strategy on her signal, but not on her actual location. The significance of this is that if the two conventions are to coexist, each being played in a different region of social space, there will be a zone of actual locations (i.e., as opposed to signals) at the boundary between these regions, in which both conventions are followed with positive probability. Suppose, for example, that social space is a line, and that all players follow convention A when their signals are z* or less, and convention B when their signals are greater than z*. Then if a stage game is located sufficiently close to z*, the two players may receive signals which lead them to follow different conventions.

My approach of assigning stage games, rather than players, to locations is somewhat unusual. The more normal approach, which Ellison among others follows, is to give each individual a fixed location, and then to constrain each individual to choose a single strategy which she plays in all stage games, whatever the location of her opponent. A natural interpretation of this constraint is to suppose that each player knows her own location, but not those of her opponents. Formally, however, the two approaches are not greatly different. In my model, private signals play essentially the same role as do players' locations in Ellison's. In effect, Ellison's players condition their strategies on their own locations while my players condition their strategies on their private signals. Ellison's players behave as if they do not know their opponents' locations; my players do not know their opponents' signals. In Ellison's model, the neighbourhood structure of interactions ensures that, for any given player in a given game, the location of her opponent is a random variable, with a distribution around the first player's own location. In my model, the diffused nature of signals ensures that for any given player, her opponent's signal is a random variable, with a distribution around the first player's signal. Mathematically, the most significant difference between the model that I shall present in this section of the paper and Ellison's model is this: in my model, social space is a continuum, over which the distributions of interactions and of signals are continuous, whereas Ellison's players are assigned to discrete locations. This feature of my model makes it easier to investigate cases in which the density of interaction varies across social space.

I shall now set out my model in detail. I shall assume social space to be the set of points on the real line.⁴ The location of each stage game is a random variable in the interval [0, 1], whose distribution is the same for all individuals. For each individual, the probability that the location of his next stage game is less than or equal to y is denoted by F(y); the corresponding density function is denoted by f(·). I shall call f(y) the density of interaction at y. I shall assume that f(·) is continuous, with f(y) > 0 for all y ∈ [0, 1].

⁴ In some ways, it would have been neater to follow Ellison and to assume social space to be the set of points on a circle. The main advantage of using a line is that it permits a more familiar notation for continuous functions. For example, probability density and distribution functions are normally defined on the real line.

Consider a stage game that is played at location y. Each player receives a private signal of this location; these signals will be denoted by z₁, z₂. Each signal takes the value y + ε, where ε is a random variable with the probability density function e(·). The value of ε is stochastically independent for the two players. There is some finite, strictly positive v such that e(x) > 0 if and only if x ∈ (-v, v); e(·) is continuous in the interval [-v, v] and symmetrical around a mean of zero. For x ∈ [-v, 0], e(x) is non-decreasing; for x ∈ [0, v], e(x) is non-increasing. I shall assume that v < 0.5, so that there are signals z such that v < z < 1 - v. In interpreting the model, I shall take v to be small, so that most signals are in the interval [v, 1 - v].

At any given time t, the pattern of play in the model can be described by a state-of-play function gₜ(·), which gives, for each feasible signal z (that is, for each value of z in the interval [-v, 1 + v]), the probability gₜ(z) with which a randomly-selected individual will play strategy A, conditional on receiving this signal. Given any particular state-of-play function gₜ(·), there is a best-response correspondence rₜ(·), which gives, for each feasible signal z, the set of pure strategies rₜ(z) which are best responses to that state-of-play function for a player who has received that signal. I shall make the following assumptions about the dynamics of the model. For each z, if rₜ(z) = {A}, then the value of gₜ(z) increases over time (unless gₜ(z) = 1, in which case this value remains constant); and if rₜ(z) = {A} remains true for a sufficiently long finite time, the value of gₜ(z) will reach 1. Similarly, if rₜ(z) = {B}, then the value of gₜ(z) decreases over time (or remains constant at gₜ(z) = 0); and if rₜ(z) = {B} remains true for a sufficiently long finite time, the value of gₜ(z) will reach 0. These assumptions are analogous with those made by Ellison, except that I have not specified any mutation mechanism. My strategy will be to investigate the dynamics of the model in the absence of mutations, and then to consider the effect of introducing exogenous mutations or invasions.

It will now be useful to consider how the best-response correspondence is derived from any given state-of-play function g(.). 5 Let the parameter (Y be defined by the equation:

o/(1 - 01) = (b - c)/(a - d). (1)

Strategy A is optimal for a player if and only if the probability that her opponent plays strategy A is greater than or equal to a; strategy B is optimal if and only if this probability is less than or equal to cr. Notice that (Y is an index of the risk-dominance relation between the two conventions; if (Y < 0.5, convention A is risk-dominant, while if a > 0.5, convention B is risk-dominant. The parameter restrictions of Game 1 imply that (Y < 0.5.

Let H(x | z) be the cumulative distribution function for one player's signal, conditional on the other player's signal being z. That is,

H(x | z) = Pr[zᵢ ≤ x | zⱼ = z]   (i, j = 1, 2; i ≠ j)    (2)

where zᵢ, zⱼ are the signals received by the two players. Let h(x | z) be the corresponding probability density function. For a player who has received the signal z, the probability that her opponent plays strategy A is π(z), where

π(z) = ∫ h(x | z) g(x) dx.    (3)

Thus, for a player who receives the signal z, strategy A is optimal if π(z) ≥ α, and strategy B is optimal if π(z) ≤ α. Because of the continuity properties of f(·) and e(·), π(·) is a continuous function.

It will sometimes be convenient to work with a function φ(·), defined so that φ(z) = H(z | z) for all feasible signals z. Thus φ(z) is the probability that one player's signal is no greater than z, conditional on the other player's signal being z. It follows from the assumptions that have been made about f(·) and e(·) that φ(·) is continuous, with φ(-v) = 0 and φ(1 + v) = 1. This function is useful in the analysis of behaviour at the boundaries of regions in which different strategies are played. For a given state-of-play function g(·), I shall define an A-playing region (respectively, a B-playing region) as an interval (z′, z″) such that g(z) = 1 (respectively, g(z) = 0) for all z ∈ (z′, z″). I shall refer to z″ - z′ as the width of such a region.
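The objects H(x | z), π(z) and φ(z) can be approximated numerically by discretizing social space and integrating out the unobserved location. The sketch below does this for an assumed uniform density f, a triangular noise density e and a simple step-function state of play g; all three are illustrative assumptions, not part of the model's general specification.

```python
# Illustrative sketch: numerical approximation of h(x | z), of pi(z) from
# Eq. (3), and of phi(z) = H(z | z). The density of interaction f, the noise
# density e and the state-of-play function g below are assumptions chosen
# only for illustration.

import numpy as np

v = 0.1                                        # half-width of the noise support
ys = np.linspace(0.0, 1.0, 801)                # grid of true locations
xs = np.linspace(-v, 1.0 + v, 961)             # grid for the opponent's signal

def f(y):                                      # assumed density of interaction: uniform
    return np.ones_like(y)

def e(u):                                      # assumed noise density: triangular on (-v, v)
    return np.maximum(0.0, (v - np.abs(u)) / v**2)

def g(x):                                      # assumed state of play: A at signals up to 0.5
    return (x <= 0.5).astype(float)

def h_given(z):
    """h(x | z): joint signal density with the location integrated out, normalized."""
    row = np.array([np.trapz(f(ys) * e(z - ys) * e(x - ys), ys) for x in xs])
    return row / np.trapz(row, xs)

def pi(z):                                     # Eq. (3)
    return np.trapz(h_given(z) * g(xs), xs)

def phi(z):                                    # phi(z) = H(z | z)
    h = h_given(z)
    return np.trapz(np.where(xs <= z, h, 0.0), xs)

print(round(phi(0.5), 3))                      # about 0.5 when f is uniform
print(round(pi(0.5), 3))                       # equals phi(0.5) for this step-function g
```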

I shall begin by investigating the properties of the model under the assumption that f(y) = 1 for all y ∈ [0, 1]. On the analogue of Ellison's model, we might expect to find that, once an A-playing region of some critical size comes into existence, A-playing behaviour will spread throughout social space. Theorem 1, which is proved in an appendix, establishes that this is indeed the case:

Theorem 1. If f(y) = 1 for all y ∈ [0, 1], the following is true. Let g*(·) be any state-of-play function which has the property that, for some z′ ∈ [2v, 1 - 2v], (z′ - v, z′ + v) is an A-playing region. If at any time, the state-of-play function is g*(·), then the model will tend to a stationary state in which g(z) = 1 for all z ∈ [-v, 1 + v].

The intuition behind this theorem can be explained quite simply. Whether an A-playing region will expand depends on the optimal responses to signals which are exactly on the boundaries of that region. If strategy A is the unique best response to a signal on either of the region's boundaries, then the region will expand. If an A-playing region is at least 2v wide, then if a player receives a signal on one boundary, her opponent's signal cannot be beyond the other boundary. Thus, if the first player's signal is on the lower boundary of the A-playing region, i.e., at z′, the probability that the opponent's signal is within the region is 1 - φ(z′); if the first player's signal is on the upper boundary, i.e., at z″, the probability that the opponent's signal is within the region is φ(z″). If the density of interaction is constant, neither 1 - φ(z′) nor φ(z″) can be less than 0.5, and so both must be strictly greater than α. Thus, irrespective of the pattern of play outside the A-playing region, strategy A is uniquely optimal at each boundary; this leads to the expansion of the region.
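This expansion mechanism can be seen in a crude simulation. The sketch below discretizes the space of signals, takes f(y) = 1, and replaces the gradual adjustment of g by an immediate switch to the unique best response; these are simplifying assumptions, but they preserve the logic of Theorem 1. Starting from an A-playing region of width 2v, the share of signals at which A is played rises to one.

```python
# Illustrative sketch: best-response dynamics of the continuous-signal model
# with a uniform density of interaction, started from an A-playing region of
# width 2v. Immediate switching to the unique best response is a simplifying
# assumption; the model only requires gradual adjustment in that direction.

import numpy as np

v, alpha = 0.1, 0.25                           # assumed parameter values
ys = np.linspace(0.0, 1.0, 401)                # true locations
zs = np.linspace(-v, 1.0 + v, 243)[1:-1]       # feasible signals, endpoints trimmed

e = lambda u: np.maximum(0.0, (v - np.abs(u)) / v**2)   # assumed triangular noise
f = lambda y: np.ones_like(y)                            # uniform density of interaction

# conditional signal density h(x | z) on the grid, location integrated out
joint = np.array([[np.trapz(f(ys) * e(z - ys) * e(x - ys), ys) for x in zs] for z in zs])
h = joint / np.trapz(joint, zs, axis=1)[:, None]

g = ((zs > 0.4) & (zs < 0.6)).astype(float)    # A-playing region of width 2v around 0.5

for t in range(26):
    pi = np.trapz(h * g[None, :], zs, axis=1)  # Eq. (3) at every feasible signal
    g = np.where(pi > alpha, 1.0, np.where(pi < alpha, 0.0, g))
    if t % 5 == 0:
        print(t, round(g.mean(), 2))           # share of signals at which A is played
# the printed share rises towards 1: the A-playing region takes over social space
```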

Clearly, there can be a stationary state in which convention A is universally followed (i.e., g(z) = 1 for all feasible signals z); and there can be a stationary state in which convention B is universally followed (i.e., g(z) = 0 for all feasible signals z). Either of these stationary states would be stable with respect to small perturbations of g(·). But Theorem 1 implies that the B-playing stationary state has a smaller basin of attraction than the A-playing stationary state. Or, to put the same idea another way, the B-playing state is more vulnerable to invasion by convention A than the A-playing state is to invasion by convention B.

A further implication of Theorem 1 is that when the density of interaction is constant, there cannot be localized coexistence of the two conventions. In order for there to be localized coexistence, there must be some signal z′ at which strategy A is played, for which π(z′) = 1 is true (i.e., a player receiving this signal can be sure that her opponent will play strategy A). But this requires⁶ there to be an A-playing region whose width is at least 4v. Theorem 1 implies that such a region would expand until strategy A was played at all feasible signals: it could not coexist with a B-playing region.

⁶ Strictly, there is another possibility: there could be an A-playing region, at least 2v wide, either with a lower bound at z = -v or with an upper bound at z = 1 + v. Theorem 1 can be adapted to show that such a region would expand until strategy A was played at all feasible signals.

It is easy to see that this result depends on the assumption that signals are diffused. If players had perfect knowledge of the locations of stage games (i.e., if v = 0), the two conventions could coexist in any spatial pattern. That is, we could take any partition of social space into two (not necessarily compact) subsets, and then set g(z) = 1 for all values of z in one subset and g(z) = 0 for all values of z in the other; and this state-of-play function would be a stationary state. But any diffusion in the signals, however slight, rules out the localized coexistence of the conventions.

3. The coexistence of conventions

I shall now consider the implications of relaxing the assumption that the density of interaction is constant. It may help to think again about the example of the ships in the channel. The analysis of Section 2 would apply to this case if the density of shipping was the same everywhere in the channel. But suppose instead that this density varies. In particular, suppose that, for some location z* ∈ [v, 1 - v], the density of shipping is increasing over the interval [z* - v, z* + v]. Now suppose that player 1 is the captain of a ship who plots his position as z*. He then finds himself on a collision course with another ship, whose captain is player 2. What probability φ(z*) will player 1 assign to player 2's having plotted her position as z* or less? This probability will be less than 0.5, because, other things being equal, the observation of another ship is a signal that one is likely to be in a part of the channel where shipping is dense. Such asymmetries may allow risk-dominant and risk-dominated conventions to coexist. For example, suppose that z* is the upper bound of an A-playing region and the lower bound of a B-playing region. Each region has a width of at least 2v. It is a necessary condition for this boundary to be part of a stationary state that φ(z*) = α. If the density of interactions is increasing around z*, this condition may be satisfied.

This example suggests that the two conventions can coexist, provided there is some interval of social space in which the density of interactions is increasing or decreasing to a sufficient degree. A boundary between an A-playing region and a B-playing region can exist within such an interval, the A-playing region being located on whichever side of the boundary has the lower density of interaction. This idea is expressed more formally in the following theorems:

Theorem 2. Let g*(·) be any state-of-play function which has the following properties for some feasible signals z′, z″: (i) (z′, z″) is an A-playing region; (ii) either z″ - z′ ≥ 4v or [z′ = -v and z″ - z′ ≥ 2v] or [z″ = 1 + v and z″ - z′ ≥ 2v]; (iii) 1 - φ(z′) ≥ α; and (iv) φ(z″) ≥ α. If at any time t, the state-of-play function is g*(·), then (a) in all later periods, (z′, z″) will be an A-playing region; (b) if z′ > -v and 1 - φ(z′) > α, then the A-playing region will expand at its lower bound; and (c) if z″ < 1 + v and φ(z″) > α, then the A-playing region will expand at its upper bound.

Theorem 3. Let g*(·) be any state-of-play function which has the following properties for some feasible signals z′, z″: (i) (z′, z″) is a B-playing region; (ii) either z″ - z′ ≥ 4v or [z′ = -v and z″ - z′ ≥ 2v] or [z″ = 1 + v and z″ - z′ ≥ 2v]; (iii) φ(z′) ≤ α; and (iv) 1 - φ(z″) ≤ α. If at any time t, the state-of-play function is g*(·), then (a) in all later periods, (z′, z″) will be a B-playing region; (b) if z′ > -v and φ(z′) < α, then the B-playing region will expand at its lower bound; and (c) if z″ < 1 + v and 1 - φ(z″) < α, then the B-playing region will expand at its upper bound.

Theorem 2 is proved in an appendix. The proof of Theorem 3 is very similar to that of Theorem 2, and so is omitted.

To confirm that there can be localized coexistence of conventions, consider the following case. Suppose there is some z* ∈ [v, 1 - v] such that φ(z*) = α. If the density of interaction is increasing in an interval of social space around z*, this supposition is consistent with the assumptions that have been made about f(·) and e(·). Now suppose that (-v, z*) is an A-playing region and that (z*, 1 + v) is a B-playing region. It is an implication of Theorems 2 and 3 that this pattern of play is a stationary state. This pattern of play implies that π(-v) = 1 and that π(1 + v) = 0, satisfying the definition of localized coexistence.
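This construction can be checked numerically. In the sketch below, the sharply increasing density f, the triangular noise density e and the value α = 0.25 are assumptions chosen to satisfy the conditions just stated; the code locates a signal z* at which φ(z*) is approximately α and then confirms that, when A is played below z* and B above it, π(z) stays at or above α wherever A is played and at or below α wherever B is played, up to discretization error.

```python
# Illustrative sketch: localized coexistence with a non-uniform density of
# interaction. The sharply increasing density f, the triangular noise e and
# alpha = 0.25 are assumptions chosen to satisfy the conditions in the text.

import numpy as np

v, alpha = 0.1, 0.25
ys = np.linspace(0.0, 1.0, 401)
zs = np.linspace(-v, 1.0 + v, 243)[1:-1]                 # feasible signals, endpoints trimmed

e = lambda u: np.maximum(0.0, (v - np.abs(u)) / v**2)    # assumed noise density
f = lambda y: np.exp(30.0 * np.clip(y, 0.4, 0.6))        # density rising steeply around 0.5

joint = np.array([[np.trapz(f(ys) * e(z - ys) * e(x - ys), ys) for x in zs] for z in zs])
h = joint / np.trapz(joint, zs, axis=1)[:, None]         # h(x | z) on the grid

phi = np.array([np.trapz(np.where(zs <= z, h[i], 0.0), zs) for i, z in enumerate(zs)])

interior = (zs >= v) & (zs <= 1.0 - v)
i_star = np.where(interior & (phi < alpha))[0][0]        # downward crossing of alpha
z_star = zs[i_star]
print("z* =", round(z_star, 3), " phi(z*) =", round(phi[i_star], 3))

g = (zs < z_star).astype(float)                          # convention A below z*, B above
pi = np.trapz(h * g[None, :], zs, axis=1)                # Eq. (3) at every signal

print("min pi where A is played:", round(pi[zs < z_star].min(), 3))    # roughly alpha or above
print("max pi where B is played:", round(pi[zs >= z_star].max(), 3))   # roughly alpha or below
# Up to discretization error, no signal prefers to deviate, so the pattern in
# which A is played below z* and B above it is a stationary state.
```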

4. Perturbations and mutations

Theorems 2 and 3 imply that, under certain conditions, the two conventions can coexist in a stationary state. But notice that a “stationary state” is being understood as a property of a model in which there are no mutations. One may still ask whether a state of localized coexistence would be stable with respect to small perturbations of the model, and whether, in the long run, it would be vulnerable to invasion by mutants.

As a starting point, consider the stationary state described at the end of Section 3: (-v, z*) is an A-playing region, (z*, 1 + v) is a B-playing region, and φ(z*) = α. If φ(·) is strictly decreasing at z*, this stationary state is stable with respect to small perturbations of g(·). To see why, suppose that the boundary between the two regions were to be shifted slightly below z*. Theorem 2 implies that in this case, the A-playing region would tend to expand, moving the boundary back to z*. Similarly, Theorem 3 implies that if the boundary were to be shifted slightly above z*, the B-playing region would tend to expand, moving the boundary back to z*.⁷

⁷ If (-v, z*) was a B-playing region and (z*, 1 + v) was an A-playing region, the condition for stationarity would be 1 - φ(z*) = α, and stability would require φ(·) to be strictly decreasing at z*.

Theorem 2 implies that, under certain conditions, a relatively small A-playing region will tend to expand. Theorem 3 implies that, under other conditions, a relatively small B-playing region will tend to expand. Given that α < 0.5, it is easy to see that the conditions that allow A-playing regions to expand are less demanding than those that allow B-playing regions to expand: a risk-dominant convention has an advantage over a risk-dominated one. This may suggest that, were we to add some mutation mechanism to the model, there would be a long-run tendency towards a state of affairs in which almost all interactions were governed by convention A. Whether there would be such a tendency would depend on the properties of f(·), as well as on the mutation mechanism; but I conjecture that for most of the sets of assumptions that one might plausibly make, this tendency would indeed exist.

However, for a given mutation mechanism, variations in the density of interaction tend to slow down the spread of the risk-dominant convention. I shall use the term localized invasion for an episode in which a coincidence of mutations creates an A-playing or B-playing region of at least some critical size (4v or 2v according to the context). Recall that, if the density of interaction is constant, a single localized invasion by convention A will set in motion a chain of events which ultimately eliminates all B-playing behaviour. But if the density of interaction varies over social space, a single localized invasion may not be enough to produce this result. If there are intervals of the space of feasible signals in which φ(z) < α, these act as barriers to the "rightward" (or "upward") expansion of A-playing regions; conversely, intervals in which 1 - φ(z) < α act as barriers to the "leftward" (or "downward") expansion of such regions. Thus several separate localized invasions may be needed to eliminate B-playing behaviour.

Some feel for the mechanisms involved may be gained by using a metaphor. Think of social space as if it were geographical space, and think of the density of interaction as if it were elevation. Then the function f(·) defines a terrain; where f(·) is increasing, there is an upward slope from left to right; where f(·) is decreasing, there is a downward slope. Roughly speaking, upward slopes from left to right correspond with low values of φ(z) and downward slopes with high values; flat terrain corresponds with φ(z) = 0.5. An A-playing region, provided it is of at least the critical size, can expand downhill and across flat terrain. It can also expand uphill, provided the slope is not too steep; the lower the value of α, the steeper the slope that can be climbed. A B-playing region, if of at least the critical size, can expand only downhill, and then only if the downward slope is sufficiently steep. Thus A-playing and B-playing regions can coexist, provided that their boundaries follow sufficiently steep escarpments, with the B-playing region on the uphill side. Such coexistence can be disturbed by localized invasions. Every part of a B-playing region is vulnerable to invasion by convention A, although, depending on the terrain inside the region, a single invasion may not be sufficient to eliminate the region altogether. In contrast, a localized invasion of an A-playing region by convention B will not take hold unless it occurs at the top of a hill, with a sufficiently steep downhill slope on both sides; and even then, it will spread only down sufficiently steep slopes.

If mutations are relatively infrequent, and if players converge on best responses relatively quickly, the evolution of play in a model which includes mutations seems likely to have the following pattern. Starting from an arbitrary state-of-play function, there will be an initial phase of rapid adjustment, leading to a state of affairs which, in the absence of mutations, would be a stable stationary state. In this state, the space of feasible signals will typically be partitioned into A-regions and B-regions, so that the conventions coexist. From then on, the history of the model will be one of extended periods of stability, punctuated by episodes in which regions “tip”⁸ from one convention to another - more commonly from B to A than vice versa.

⁸ Schelling (1971) gives a classic analysis of the phenomenon of “tipping” in the context of spontaneously-generated racial segregation.

5. Conclusion

My main aim in this paper has been to explore the implications of relaxing the assumption, common to many recent models of the evolution of conventions, that the “social space” in which interactions occur is uniform. The model that I have presented has been abstract and stylized, but some of its properties seem to correspond with significant features of the real world. In particular, the model allows two conventions to coexist, each being followed in different regions of a continuous social space; and it can generate episodes of “tipping”, in which one convention rapidly supplants another within a region. I suggest that evolutionary models of the kind presented in this paper are worthy of further study.


Acknowledgements

This paper grew out of discussions that took place at the first of two workshops on the Emergence and Stability of Norms, held at the Université Catholique de Louvain in June 1991 and December 1992. It is particularly influenced by two papers that were presented at that workshop, one written by Robert Boyer and André Orléan (1992), the other by Hans Carlsson and Eric Van Damme (1993). It was presented at the second workshop. I am grateful for the comments of the participants in that workshop, particularly my discussant, André Orléan. I have also benefitted from discussing the paper with Daniel Probst, Karl Schlag and Avner Shaked, and from the comments of two anonymous referees.

Appendix A. Proof of theorems

A.1. Proof of Theorem 1

The assumption that f(y) = 1 for all y ∈ [0, 1], in conjunction with the assumptions that have been made about e(·), implies that the function h(x | z) has the following properties for all signals z ∈ [v, 1 - v] and for all feasible signals x: (i) h(x | z) > 0 if and only if x ∈ (z - 2v, z + 2v); (ii) h(x | z) is symmetrical around a mean of z; and (iii) h(x | z) is increasing for all x ∈ (z - 2v, z), and is decreasing for all x ∈ (z, z + 2v). Using Eq. (3) and the fact that α < 0.5, properties (i)-(iii) imply that, if the state-of-play function is g*(·), then strategy A is uniquely optimal in response to every signal z ∈ (z′ - v, z′ + v). Thus the dynamic properties of the model will preserve (z′ - v, z′ + v) as an A-playing region. At the lower bound of the region, π(z′ - v) ≥ 1 - φ(z′ - v) = 0.5 > α, and so A is the unique best response. At the upper bound of the region, π(z′ + v) ≥ φ(z′ + v) = 0.5 > α, and so A is again the unique best response. Because of the continuity properties of the model, the results stated in the last two sentences imply that the A-playing region will expand at both ends. Property (ii) implies that φ(z) = 0.5 for all z ∈ [v, 1 - v]; it can be shown that φ(z) < 0.5 for all z ∈ [-v, v), and that φ(z) > 0.5 for all z ∈ (1 - v, 1 + v]. Thus, however far the A-playing region expands, strategy A will be uniquely optimal in response to signals at both the upper and lower bounds of the region; and so expansion will continue until a stationary state is reached in which g(z) = 1 for all z ∈ [-v, 1 + v].

A.2. Proof of Theorem 2

Given the assumed properties of f(·) and e(·), it can be shown that ∂H(x | z)/∂z < 0 for all feasible signals x, z such that x ∈ [z - 2v, z + 2v]. Thus, for all z ∈ [z′, z′ + 2v], π(z) ≥ 1 - H(z′ | z) ≥ 1 - φ(z′) ≥ α. Similarly, for all z ∈ [z″ - 2v, z″], π(z) ≥ H(z″ | z) ≥ φ(z″) ≥ α. Clearly, π(z) = 1 for all z ∈ (z′ + 2v, z″ - 2v]. Thus the dynamic properties of the model will preserve (z′, z″) as an A-playing region. Because of the continuity properties of the model, if 1 - φ(z′) > α, then strategy A is optimal for all feasible signals in some neighbourhood of z′; thus if z′ > -v, the A-playing region will expand at its lower end. Similarly, if φ(z″) > α, then strategy A is optimal for all feasible signals in some neighbourhood of z″; thus if z″ < 1 + v, the A-playing region will expand at its upper end.

References

Axelrod, Robert, 1981, The emergence of cooperation among egoists, American Political Science Review, 75, 306-318.
Axelrod, Robert, 1984, The evolution of cooperation, New York, Basic Books.
Axelrod, Robert and William D. Hamilton, 1981, The evolution of cooperation, Science, 211, 1390-1396.
Blume, Lawrence E., 1993, The statistical mechanics of strategic interaction, Games and Economic Behavior, 5, 387-424.
Boyer, Robert and André Orléan, 1992, How do conventions evolve?, Journal of Evolutionary Economics, 2, 165-177.
Carlsson, Hans and Eric Van Damme, 1993, Global games and equilibrium selection, Econometrica, 61, 989-1018.
Ellison, Glenn, 1993, Learning, local interaction, and coordination, Econometrica, 61, 1047-1071.
Goyal, Sanjeev and Maarten Janssen, 1993, Interaction structure and the stability of conventions, Mimeo, Erasmus University, Rotterdam.
Harsanyi, John C. and Reinhard Selten, 1988, A general theory of equilibrium selection in games, Cambridge, Mass., MIT Press.
Hume, David, 1740, A treatise of human nature, 1978 edition published by Clarendon Press, Oxford.
Maynard Smith, John, 1982, Evolution and the theory of games, Cambridge, Cambridge University Press.
Maynard Smith, John and G.R. Price, 1973, The logic of animal conflict, Nature, 246, 15-18.
Kandori, Michihiro, George J. Mailath and Rafael Rob, 1993, Learning, mutation, and long run equilibria in games, Econometrica, 61, 29-56.
Lewis, David K., 1969, Convention: a philosophical study, Cambridge, Mass., Harvard University Press.
Schelling, Thomas C., 1960, The strategy of conflict, Cambridge, Mass., Harvard University Press.
Schelling, Thomas C., 1971, Dynamic models of segregation, Journal of Mathematical Sociology, 1, 143-186.
Sugden, Robert, 1986, The economics of rights, cooperation and welfare, Oxford, Basil Blackwell.
Sugden, Robert, 1989, Spontaneous order, Journal of Economic Perspectives, 3, 85-98.
Young, H. Peyton, 1993, The evolution of conventions, Econometrica, 61, 57-84.