
Journal of Economic Theory 94, 7-45 (2000)

Large Poisson Games

Roger B. Myerson

MEDS Department, J. L. Kellogg Graduate School of Management, Northwestern University, Evanston, Illinois 60208-2009

myerson@nwu.edu

Received July 14, 1997; final version received June 6, 1998

Existence of equilibria is proven for Poisson games with compact type sets and finite action sets. Then three theorems are introduced for characterizing limits of probabilities in Poisson games when the expected number of players becomes large. The magnitude theorem characterizes the rate at which probabilities of events go to zero. The offset theorem characterizes the ratios of probabilities of events that differ by a finite additive translation. The hyperplane theorem estimates probabilities of hyperplane events. These theorems are applied to derive formulas for pivot probabilities in binary elections, and to analyze a voting game that was studied by Ledyard. Journal of Economic Literature Classification Numbers: C63, C70. © 2000 Academic Press

1. INTRODUCTION

This paper develops some fundamental mathematical tools for analyzing games with a very large number of players, such as the game played by the voters in a large election. In such games, it is unrealistic to assume that every player knows all the other players in the game; instead, a more realistic model should admit some uncertainty about the number of players in the game. Furthermore, if we assume that such uncertainty about the number of players in the game can be described by a Poisson distribution, then the special properties of the Poisson distribution may actually make our analysis of the game simpler than under the questionable assumption that the exact number of players was common knowledge.

In a previous paper by this author (Myerson [8]), fundamental principles for analyzing general games with population uncertainty have been introduced, and it has been shown that some convenient simplifying properties (independent actions and environmental equivalence) are uniquely satisfied by Poisson games with population uncertainty. In this paper, we will focus on some general theorems that facilitate the analysis of large Poisson games.

doi:10.1006/jeth.1998.2453, available online at http://www.idealibrary.com

Copyright © 2000 by Academic Press. All rights of reproduction in any form reserved.


In Section 2, a general model of Poisson games is formulated, and existence of equilibrium is proven. Section 3 develops some general formulas that can be useful for characterizing the limits of equilibria of Poisson games as the expected number of players goes to infinity. The main results in Section 3 are the magnitude theorem, which enables us to easily characterize the relative orders of magnitude of the probabilities of events; the offset theorem, which characterizes the ratios of probabilities of events that differ by a finite additive translation; and the hyperplane theorem, which gives probabilities of linear events. The implications of these limit theorems for simple one-dimensional events are developed in Section 4. Section 5 then applies these results to derive formulas for pivot probabilities in large binary elections. An application of these formulas to a voting game studied by Ledyard [5] is developed in Section 6. The proofs of the limit theorems of Section 3 are presented in Section 7.

2. POISSON GAMES AND THEIR EQUILIBRIA

In a Poisson game, we assume that the number of players is a random variable drawn from a Poisson distribution with some mean $n$. (See Haight [3] and Johnson and Kotz [4].) Given this parameter $n$, the probability that there are $k$ players in the game is

$$p(k \mid n) = e^{-n} n^k / k!.$$

From the perspective of any one player in the game, the number of other players in the game (not counting this player) is also a Poisson random variable with the same mean $n$. This property of Poisson games is called environmental equivalence; see Myerson [8] for a formal derivation. To understand this environmental-equivalence property of Poisson games, imagine that you are a player in a game with population uncertainty. The number of players other than you is one less than the number of all players; but the fact that you have been recruited as a player in the game is itself evidence in favor of a larger number of players. These two effects exactly cancel out in the case where the number of players has been drawn from a Poisson distribution. That is, after learning that you are a player in a Poisson game, your posterior probability distribution on the number of other players is the same as an outside observer's prior distribution on the number of all players.
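As a numerical illustration of environmental equivalence (an added sketch, not part of the original article; the mean $n = 5$ and the truncation point are arbitrary assumptions), the following Python fragment size-biases the Poisson($n$) prior on the total number of players by the chance of including a given player, removes that player, and confirms that the posterior on the remaining players is again Poisson($n$).

from math import exp, factorial

n = 5.0        # assumed mean number of players for this illustration
K_MAX = 60     # truncation point; the Poisson(5) tail beyond it is negligible

def poisson_pmf(k, mean):
    return exp(-mean) * mean ** k / factorial(k)

# Prior distribution of the total number of players K.
prior = [poisson_pmf(k, n) for k in range(K_MAX)]

# Being recruited as a player is proportionally more likely when K is large,
# so the posterior on K is the k-weighted (size-biased) prior distribution.
weights = [k * prior[k] for k in range(K_MAX)]
total = sum(weights)
posterior_K = [wk / total for wk in weights]

# Posterior distribution of the number of OTHER players, K - 1.
posterior_others = posterior_K[1:]

# Environmental equivalence: this matches the Poisson(n) prior term by term.
for j in range(10):
    assert abs(posterior_others[j] - poisson_pmf(j, n)) < 1e-9
print("posterior on the other players matches Poisson(n) for j = 0,...,9")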

The private information of each player in the game is his (or her) type, which is a random variable drawn from some given set of possible types $T$. In this paper, we assume that this type set $T$ is a compact metric space. The previous paper (Myerson [8]) assumed a finite type set $T$. The class of compact metric spaces includes any finite set, as well as any closed and bounded subset of a finite-dimensional vector space; so more generality is being allowed here.

Each player's type is independently drawn from this type set $T$ according to some given probability distribution which we denote by $r$. That is, for any set $S$ that is a Borel-measurable subset of $T$, we let $r(S)$ denote the probability that any given player's type is in $S$, and this probability is assumed to be independent of the number and types of all other players. By the decomposition property of the Poisson distribution (see Myerson [8]), the total number of players with types in the subset $S$ is a Poisson random variable with mean $n r(S)$, and this random variable is independent of the numbers of players with types in any other disjoint sets.

Each player in the game must choose an action from a set of possible actions which we denote by $C$. In this paper, we assume that this action set $C$ is a nonempty finite set.

The action profile of a group of players is the vector that lists, for each action $c$, the number of players in this group who are choosing action $c$. We let $Z(C)$ denote the set of possible action profiles for the players in a Poisson game. That is, $Z(C)$ is the set of vectors $x = (x(c))_{c \in C}$, with components indexed on the actions in $C$, such that each component $x(c)$ is a nonnegative integer. Notice that $Z(C)$ is a countable set, because $C$ is finite.

The utility payoff to each player in a Poisson game depends on his type, his action, and the numbers of other players who choose each action. So utility payoffs can be mathematically specified by a utility function of the form $U: Z(C) \times C \times T \to \mathbb{R}$. Here $U(x, b, t)$ denotes the utility payoff to a player whose type is $t$ and who chooses action $b$, when $x$ is the action profile of the other players in the game (that is, when, for each $c$ in $C$, there are $x(c)$ other players who choose action $c$, not counting this player in the case of $c = b$). We assume here that $U(\cdot, \cdot, \cdot)$ is a bounded function and $U(x, b, \cdot)$ is a continuous function on the type set $T$, for every $x$ in $Z(C)$ and every $b$ in $C$.

These parameters $(T, n, r, C, U)$ together define a Poisson game. For other related models of population uncertainty see also Myerson [8, 9] and Milchtaich [6].

The strategic behavior of players in a Poisson game can be described by a distributional strategy, following Milgrom and Weber [7]. A distributional strategy for a Poisson game $(T, n, r, C, U)$ is any probability distribution over the set $C \times T$ such that the marginal distribution on $T$ is equal to $r$. So if $\tau$ is a distributional strategy then, for any action $c$ in $C$ and any set $S$ that is a Borel-measurable subset of the type set $T$, $\tau(c, S)$ can be interpreted as the probability that a randomly sampled player will have a type in the set $S$ and will choose the action $c$. Because the game specifies that players' types are drawn from the distribution $r$, the marginal distribution of $\tau$ on the type set $T$ is required to satisfy the equation

$$\sum_{c \in C} \tau(c, S) = r(S)$$

for every set $S$ that is a Borel-measurable subset of $T$. Also, as a measure, $\tau$ must be countably additive on measurable partitions of $T$.

Any distributional strategy $\tau$ is associated with a unique strategy function $\sigma$ that specifies numbers $\sigma(c \mid S)$ such that

$$\sigma(c \mid S) = \tau(c, S) / r(S),$$

for any measurable set of types $S$ that has positive probability, and for any action $c$ in $C$. Here $\sigma(c \mid S)$ can be interpreted as the conditional probability that a randomly sampled player will choose the action $c$ given that the player's type is in the set $S$. In other papers (Myerson [8, 9]), strategy functions are used instead of distributional strategies to characterize players' behavior in a Poisson game, but it will be more convenient here to use distributional strategies.

We may let $\Delta(C)$ denote the set of probability distributions on the finite action set $C$. Any distributional strategy $\tau$ induces a marginal probability distribution on $C$, which may also be denoted by $\tau$ without danger of confusion. That is, under any distributional strategy $\tau$, the marginal probability $\tau(c)$ of any action $c$ in $C$ is

$$\tau(c) = \tau(c, T).$$

When the players behave according to the distributional strategy $\tau$ (or the corresponding strategy function $\sigma$), the number of players who choose each action $c$ in $C$ is a Poisson random variable with mean $n\tau(c)$. Furthermore, the number of players who choose the action $c$ is independent of the numbers of players who choose all other actions. This result is called the independent-actions property, and it can be shown to characterize Poisson games (see Myerson [8]). So for any $x$ in $Z(C)$, the probability that $x$ is the action profile of the players in the game is

$$P(x \mid n\tau) = \prod_{c \in C} \frac{e^{-n\tau(c)} (n\tau(c))^{x(c)}}{x(c)!}.$$

By the environmental-equivalence property of Poisson games, any player in the game assesses the same probabilities for the action profile of the other players in the game (not counting himself). Thus, the expected payoff to a player of type $t$ who chooses action $b$, when the other players are expected to behave according to the distributional strategy $\tau$, is

$$\sum_{x \in Z(C)} P(x \mid n\tau)\, U(x, b, t).$$

Let $G(b, n\tau)$ denote the set of all types for whom choosing action $b$ would maximize this expected payoff over all possible actions, when $n$ is the expected number of players and $\tau$ is the distributional strategy. That is,

$$G(b, n\tau) = \Big\{ t \in T \;\Big|\; \sum_{x \in Z(C)} P(x \mid n\tau)\, U(x, b, t) = \max_{c \in C} \sum_{x \in Z(C)} P(x \mid n\tau)\, U(x, c, t) \Big\}.$$

This set $G(b, n\tau)$ is a closed subset of $T$, because it is defined by an equality between two continuous functions of $t$.

A distributional strategy $\tau$ is an equilibrium of the Poisson game iff

$$\tau(b, G(b, n\tau)) = \tau(b), \quad \forall b \in C.$$

That is, a distributional strategy is an equilibrium iff, for every action $b$, all the probability of choosing action $b$ comes from types for whom $b$ is an optimal action, when everyone else is expected to behave according to this distributional strategy.

Our first main result is a general existence theorem for equilibria of Poisson games. (The existence theorem of Myerson [8] allows forms of population uncertainty more general than the Poisson, but only allows finite type sets, whereas infinite type sets are allowed here.)

Theorem 0. For any Poisson game $(T, n, r, C, U)$ as above (where $T$ is a compact metric space, $C$ is a finite set, and $U$ is continuous and bounded), there must exist at least one distributional strategy that is an equilibrium.

Proof. We use a fixed-point argument on $\Delta(C)$, the set of probability distributions on the finite action set $C$. Notice that $G(b, n\eta)$ is well defined for any vector $\eta = (\eta(c))_{c \in C}$ in $\Delta(C)$, because the definitions of $P(x \mid n\tau)$ and $G(b, n\tau)$ above depend only on the components $(\tau(c))_{c \in C}$, which form a vector in $\Delta(C)$.

For any vector $\eta$ in $\Delta(C)$, let $R^*(\eta)$ denote the set of distributional strategies $\tau$ that satisfy the equation $\tau(c) = \tau(c, G(c, n\eta))$ for every $c$ in $C$. Let $R(\eta)$ denote the set of all vectors $(\tau(c))_{c \in C}$ in $\Delta(C)$ such that $\tau$ is a distributional strategy in $R^*(\eta)$, where we use the convention $\tau(c) = \tau(c, T)$. These sets $R^*(\eta)$ and $R(\eta)$ are convex, because they are defined by linear conditions on $\tau$.


The sets $R^*(\eta)$ and $R(\eta)$ are also nonempty. To show this, put an arbitrary ordering on the finite set $C$ and consider the distributional strategy $\tau$ such that

$$\tau(c, S) = r(\{ t \in S \mid c = \min\{ b \mid t \in G(b, n\eta) \} \}).$$

This distributional strategy $\tau$ assigns all type-$t$ players to the minimal action (according to our ordering) among their optimal responses to the anticipated behavior $\eta$. Then this distributional strategy $\tau$ is in $R^*(\eta)$, and the vector $(\tau(c))_{c \in C}$ is in $R(\eta)$.

$G(b, n\eta)$ is a closed subset of $T$ and depends upper-hemicontinuously on $\eta$, because the probabilities $P(x \mid n\eta)$ are continuous functions of $\eta$ and the utility numbers $U(x, c, t)$ are bounded. Now suppose that we are given sequences $\{\eta_k\}_{k=1}^{\infty}$ and $\{\tau_k\}_{k=1}^{\infty}$ such that $\tau_k \in R^*(\eta_k)$ for every $k$, and suppose that $\eta_k \to \eta$ as $k \to \infty$. The set of distributional strategies on the compact set $C \times T$ is itself a compact metric space (see Milgrom and Weber [7] and Billingsley [2]), and so there must exist an infinite subsequence in which $\tau_k$ converges to some distributional strategy $\tau$ in the weak topology on measures. For every action $b$ we have $\tau_k(b, T \setminus G(b, n\eta_k)) = 0$ for each $k$, and so $\tau(b, T \setminus G(b, n\eta)) = 0$. The limit vector $(\tau(c))_{c \in C}$ must therefore be in $R(\eta)$, and so $R: \Delta(C) \rightrightarrows \Delta(C)$ is an upper-hemicontinuous correspondence.

Thus, by the Kakutani fixed-point theorem, there exists some $\eta$ in $\Delta(C)$ such that $\eta \in R(\eta)$. The distributional strategy in $R^*(\eta)$ that verifies this inclusion is an equilibrium. Q.E.D.

3. LIMITS OF PROBABILITIES IN LARGE POISSON GAMES

We now develop some general theorems for estimating probabilities of events in equilibria of large Poisson games. Let us consider a sequence of Poisson games that are parameterized by the expected size parameter $n$. For each of these games, suppose that some equilibrium $\tau_n$ has been identified that predicts what the players' behavior would be in the game. Our goal is to characterize the limits of probabilities in these equilibria as the size parameter $n$ goes to infinity. In this section, we will not actually use the full Poisson-game structure $(T, n, r, C, U)$ that was introduced in the previous section. We will only use the set of actions $C$ and the size parameter $n$, along with the corresponding equilibrium $\tau_n$.

In this section, we can let $\tau_n$ denote the vector $(\tau_n(c))_{c \in C}$, because the other components of the distributional strategy that was denoted by $\tau$ in the preceding section will not be used here. For each $n$ and each $c$ in $C$, $\tau_n(c)$ is defined such that $n\tau_n(c)$ is the expected number of players who would choose action $c$ in the predicted equilibrium of the game of size $n$.


The vector $n\tau_n = (n\tau_n(c))_{c \in C}$ is the expected action profile in the game of size $n$.

The size parameter $n$ denotes the expected total number of players in the game, so we could assume that $\sum_{c \in C} n\tau_n(c) = n$. Actually, we will only use the weaker assumption that

$$\sum_{c \in C} n\tau_n(c) \le n \quad\text{and}\quad \sum_{c \in C} \tau_n(c) \le 1, \quad \forall n.$$

With this weaker assumption, the set $C$ can be reinterpreted as a subset of the players' feasible actions, excluding from $C$ those actions which do not affect the events that we want to study. For example, in a voting game where players can vote for a candidate or abstain, it may be convenient to reduce the dimensionality of the action profile in $Z(C)$ by ignoring the number of players who choose to abstain. Boundedness here implies that, selecting a subsequence if necessary, we can assume that the $\tau_n$ distributions converge to some limit $\tau$ as $n \to \infty$.

To approximate Poisson probabilities, we can use Stirling's formula, one version of which asserts that $k!$ is approximately equal to $(k/e)^k \sqrt{2\pi k + \pi/3}$ when $k$ is large. To be more precise, let us define

$$\iota(k) = \frac{k!}{(k/e)^k \sqrt{2\pi k + \pi/3}},$$

for any nonnegative integer $k$. Then Stirling's formula (see Abramowitz and Stegun [1, Eq. 6.1.38]) implies that

$$e^{-1/(12k)} < \iota(k) < e^{1/(12k)} \quad\text{for all } k > 0,$$

and so

$$\lim_{k \to \infty} \iota(k) = 1.$$

Thus, with expected action profile $n\tau_n$, the Poisson probability of any possible action profile $x$ in $Z(C)$ is

$$P(x \mid n\tau_n) = \prod_{c \in C} \left( \frac{e^{-n\tau_n(c)} (n\tau_n(c))^{x(c)}}{x(c)!} \right) = \prod_{c \in C} \left( \frac{e^{-n\tau_n(c)} (n\tau_n(c))^{x(c)}}{\iota(x(c))\, (x(c)/e)^{x(c)} \sqrt{2\pi x(c) + \pi/3}} \right)$$

$$= \prod_{c \in C} \left( \frac{e^{x(c) - x(c)\log(x(c)/(n\tau_n(c))) - n\tau_n(c)}}{\iota(x(c)) \sqrt{2\pi x(c) + \pi/3}} \right).$$


(Here the log function is the logarithm base $e$.) To simplify this probability formula, let us define the function $\psi: \mathbb{R}_+ \to \mathbb{R}$ by the equations

$$\psi(\theta) = \theta(1 - \log(\theta)) - 1 = -\int_{1}^{\theta} \log(\gamma)\, d\gamma, \quad \forall \theta > 0,$$

$$\psi(0) = \lim_{\theta \to 0} \psi(\theta) = -1.$$

It is straightforward to verify that $\psi(\cdot)$ is a concave function with derivative

$$\psi'(\theta) = -\log(\theta).$$

The graph of this $\psi$ function is shown in Fig. 1. Notice that $\psi(\theta)$ is negative when $\theta \ne 1$, and

$$\psi(1) = 0 = \max_{\theta \ge 0} \psi(\theta).$$

With this $\psi$ function, the Poisson probabilities can be written

$$P(x \mid n\tau_n) = \prod_{c \in C} \left( \frac{e^{n\tau_n(c)\, \psi(x(c)/(n\tau_n(c)))}}{\iota(x(c)) \sqrt{2\pi x(c) + \pi/3}} \right). \qquad (3.1)$$

To make Eq. (3.1) valid in the case where $\tau_n(c) = 0$, we adopt the conventions

$$\text{if } \tau_n(c) = 0 \text{ and } x(c) = 0, \text{ then } \tau_n(c)\, \psi(x(c)/(n\tau_n(c))) = 0,$$

$$\text{if } \tau_n(c) = 0 \text{ and } x(c) > 0, \text{ then } \tau_n(c)\, \psi(x(c)/(n\tau_n(c))) = -\infty$$

(and $e^{-\infty} = 0$). Taking the logarithm of Eq. (3.1), we get

$$\frac{\log(P(x_n \mid n\tau_n))}{n} = \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right) - \frac{\sum_{c \in C} \big( \log(\iota(x_n(c))) + 0.5\, \log(2\pi x_n(c) + \pi/3) \big)}{n}. \qquad (3.2)$$
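As a sanity check on this rewriting (an added illustration, with arbitrary assumed values of $n$, $\tau_n$, and $x$), the following Python sketch confirms numerically that the right-hand side of Eq. (3.1) reproduces the Poisson probabilities exactly, and that dropping the correction factor $\iota(x(c)) \approx 1$ changes the answer only slightly.

from math import lgamma, log, pi

def log_poisson_pmf(k, mean):
    # log of e^{-mean} * mean^k / k!
    return -mean + k * log(mean) - lgamma(k + 1)

def psi(theta):
    # psi(theta) = theta*(1 - log(theta)) - 1, with psi(0) = -1.
    return -1.0 if theta == 0 else theta * (1.0 - log(theta)) - 1.0

def log_iota(k):
    # log of the factor iota(k) = k! / ((k/e)^k * sqrt(2*pi*k + pi/3)).
    return lgamma(k + 1) - k * (log(k) - 1.0) - 0.5 * log(2 * pi * k + pi / 3)

n = 1000.0
tau = {"a": 0.30, "b": 0.55}     # assumed expected shares of each action
x = {"a": 315, "b": 530}         # an arbitrary action profile to evaluate

log_exact = sum(log_poisson_pmf(x[c], n * tau[c]) for c in tau)
log_eq_3_1 = sum(
    n * tau[c] * psi(x[c] / (n * tau[c]))
    - log_iota(x[c])
    - 0.5 * log(2 * pi * x[c] + pi / 3)
    for c in tau
)
log_without_iota = log_eq_3_1 + sum(log_iota(x[c]) for c in tau)

print(log_exact, log_eq_3_1)     # identical up to floating-point rounding
print(log_without_iota)          # nearly the same, since iota(k) is close to 1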

Let us say that a sequence of vectors $\{x_n\}_{n=1}^{\infty}$ in $Z(C)$ has a magnitude $\mu$ iff the sequence $\log(P(x_n \mid n\tau_n))/n$ converges to $\mu$ as $n$ goes to infinity, that is,

$$\mu = \lim_{n \to \infty} \log(P(x_n \mid n\tau_n))/n.$$

Notice that this magnitude must be zero or negative, because the logarithm of a probability is never positive. When the magnitude $\mu$ is negative, the probabilities $P(x_n \mid n\tau_n)$ are going to zero at the rate of $e^{\mu n}$.

FIG. 1. Graph of the $\psi$ function.

The following lemma, which follows easily from Eq. (3.2), is useful for computing magnitudes of sequences.

Lemma 1. Let $\{x_n\}_{n=1}^{\infty}$ be any sequence of possible action profiles in $Z(C)$. Then

$$\lim_{n \to \infty} \frac{\log(P(x_n \mid n\tau_n))}{n} = \lim_{n \to \infty} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right).$$

When we assert here the equality of two limits, we mean that if either limit exists then both exist and are equal. This lemma and the other main results of this section are proven in Section 7.

We now extend the notion of magnitude to sequences of events. We can represent any event $A$ as a subset of $Z(C)$. When $n\tau_n$ is the expected action profile, the probability of the event $A$ is

$$P(A \mid n\tau_n) = \sum_{x \in A} P(x \mid n\tau_n).$$


Given any sequence of events $\{A_n\}_{n=1}^{\infty}$ such that $A_n \subseteq Z(C)$ for each $n$, we may say that the magnitude of the sequence $\{A_n\}_{n=1}^{\infty}$ is

$$\lim_{n \to \infty} \log(P(A_n \mid n\tau_n))/n,$$

whenever this limit exists.

Let us say that $\{x_n\}_{n=1}^{\infty}$ is a major sequence of points in the event-sequence $\{A_n\}_{n=1}^{\infty}$ iff each $x_n$ is a point in $A_n$ and the sequence of points $\{x_n\}_{n=1}^{\infty}$ has a magnitude that is equal to the greatest magnitude of any sequence that can be selected from the $A_n$ events; that is,

$$x_n \in A_n \quad \forall n,$$

and

$$\lim_{n \to \infty} \log(P(x_n \mid n\tau_n))/n = \lim_{n \to \infty} \max_{y \in A_n} \log(P(y \mid n\tau_n))/n.$$

To satisfy the definition of a major sequence, we require that the limits in the above equation must exist. The following theorem asserts that the magnitude of any sequence of events must coincide with the magnitude of any major sequence of points in these events. Similar results have been found for other probabilistic models in the mathematical theory of large deviations (see Strook [11] or Shwartz and Weiss [10]).

Theorem 1. Let $\{A_n\}_{n=1}^{\infty}$ be any sequence of events in $Z(C)$. Then

$$\lim_{n \to \infty} \log(P(A_n \mid n\tau_n))/n = \lim_{n \to \infty} \max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n = \lim_{n \to \infty} \max_{y_n \in A_n} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{y_n(c)}{n\tau_n(c)}\right).$$

Theorem 1 implies as a corollary that, in large Poisson games, almost all of the probability in any event must be concentrated in the regions where the formula

$$\sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x(c)}{n\tau_n(c)}\right)$$

is close to its maximum. If $B_n \subseteq A_n$ for all $n$, then the hypothesis about $\{B_n\}_{n=1}^{\infty}$ in the following corollary is equivalent to assuming that every major sequence of points in $\{A_n\}_{n=1}^{\infty}$ can have only finitely many points that are in the corresponding subsets $\{B_n\}_{n=1}^{\infty}$.


Corollary 1. Suppose that $\{A_n\}_{n=1}^{\infty}$ is a sequence of events that has a finite magnitude. Suppose that $\{B_n\}_{n=1}^{\infty}$ is a sequence of events such that

$$\limsup_{n \to \infty} \max_{y_n \in B_n} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{y_n(c)}{n\tau_n(c)}\right) < \lim_{n \to \infty} \max_{x_n \in A_n} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right).$$

Then

$$\lim_{n \to \infty} \big( P(B_n \mid n\tau_n)/P(A_n \mid n\tau_n) \big) = 0 \quad\text{and}\quad \lim_{n \to \infty} \big( P(A_n \setminus B_n \mid n\tau_n)/P(A_n \mid n\tau_n) \big) = 1.$$

Theorem 1 and Corollary 1 alert us to a useful way of recalibrating action profiles. For any possible action profile $x$ in $Z(C)$ and for any action $c$ in $C$, the ratio $x(c)/(n\tau_n(c))$ may be called the $c$-offset of $x$ when $n\tau_n$ is the expected action profile. That is, the $c$-offset is a ratio which describes the number of players who are choosing $c$ as a fraction of the mean of the Poisson distribution from which this number was drawn.

For any action $c$ in $C$, we may say that $\alpha(c)$ is the limit of major $c$-offsets in the sequence of events $\{A_n\}_{n=1}^{\infty}$ iff $\{A_n\}_{n=1}^{\infty}$ has a finite magnitude and, for every major sequence of points $\{x_n\}_{n=1}^{\infty}$ in $\{A_n\}_{n=1}^{\infty}$, we have

$$\alpha(c) = \lim_{n \to \infty} x_n(c)/(n\tau_n(c)).$$

Consider any vector $w = (w(c))_{c \in C}$ in $\mathbb{R}^C$ such that each component $w(c)$ is an integer (which may be positive or negative or zero). For any event $A$, we let $A - w$ denote the set of vectors in $Z(C)$ such that adding the vector $w$ would yield a vector in the event $A$; that is,

$$A - w = \{ x - w \mid x \in A,\; x - w \in Z(C) \}.$$

The following theorem relates the probabilities of pairs of events that differ by such an additive translation in large Poisson games, when limits of major offsets exist.

Theorem 2. Let $w$ be any vector in $\mathbb{R}^C$ such that each component $w(c)$ is an integer. For each action $c$ such that $w(c) \ne 0$, suppose that $\lim_{n \to \infty} n\tau_n(c) = +\infty$, and suppose that some number $\alpha(c)$ is the limit of major $c$-offsets in the sequence of events $\{A_n\}_{n=1}^{\infty}$. Then

$$\lim_{n \to \infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} = \prod_{c \in C} \alpha(c)^{w(c)}.$$

Theorem 2 has the following corollary, for infinite unions of additive translations.


Corollary 2. Let $w$ be any vector in $\mathbb{R}^C$ such that each component $w(c)$ is an integer. Suppose that, for every positive integer $\gamma$, the sets $A_n$ and $A_n - \gamma w$ are disjoint. Let

$$B_n = \{ x - \gamma w \in Z(C) \mid x \in A_n,\; \gamma \text{ is a nonnegative integer} \}.$$

For each action $c$ such that $w(c) \ne 0$, suppose that $\lim_{n \to \infty} n\tau_n(c) = +\infty$, and suppose that $\alpha(c)$ is the limit of major $c$-offsets in the sequence of events $\{B_n\}_{n=1}^{\infty}$. Then

$$\lim_{n \to \infty} \frac{P(A_n \mid n\tau_n)}{P(B_n \mid n\tau_n)} = 1 - \prod_{c \in C} \alpha(c)^{w(c)}.$$

Proof of Corollary 2. Notice that $B_n = A_n \cup (B_n - w)$ and $A_n \cap (B_n - w) = \emptyset$. So $P(A_n \mid n\tau_n) = P(B_n \mid n\tau_n) - P(B_n - w \mid n\tau_n)$. But Theorem 2 implies that

$$\lim_{n \to \infty} P(B_n - w \mid n\tau_n)/P(B_n \mid n\tau_n) = \prod_{c \in C} \alpha(c)^{w(c)}. \qquad\text{Q.E.D.}$$

Our definitions of magnitude and major sequence can be applied to a single event $A$ as well as to a sequence of events $\{A_n\}_{n=1}^{\infty}$ in the obvious way. That is, given $A \subseteq Z(C)$, $\mu$ is the magnitude of the event $A$ and $\{x_n\}_{n=1}^{\infty}$ is a major sequence in $A$ iff $x_n \in A$ for all $n$ and

$$\mu = \lim_{n \to \infty} \log(P(A \mid n\tau_n))/n = \lim_{n \to \infty} \log(P(x_n \mid n\tau_n))/n.$$

Theorem 1 may be called the magnitude theorem, and Theorem 2 may be called the offset theorem. If the magnitude of an event $A$ is larger than the magnitude of some other event $B$, then we know that the probability of $B$ will become infinitesimal relative to the probability of $A$, and the conditional probability of $B$ given $A \cup B$ will go to 0 as $n \to \infty$. But the magnitude theorem is not useful for comparing the probabilities of events that differ by adding or subtracting a fixed vector, because the difference between such events may seem small in large Poisson games, and so they usually have the same magnitude. So relative probabilities of events that differ by a simple additive translation must be compared using the offset theorem instead.

The magnitude of an event only tells us about the rate at which its probability goes to zero. Our next limit theorem gives more precise estimates of the probabilities of events that have a simple linear structure.

Let $J$ be a positive integer. Let $w_1, \ldots, w_J$ be vectors such that, for each $i$, $w_i = (w_i(c))_{c \in C}$ is a vector in $\mathbb{R}^C$ and each component $w_i(c)$ is an integer. We allow that $w_i(c)$ may be a negative integer (in which case $w_i$ would not be in $Z(C)$, because $Z(C)$ only includes the nonnegative integer vectors in $\mathbb{R}^C$). Suppose that the vectors $w_1, \ldots, w_J$ are linearly independent.

For any vector $y$ in $Z(C)$, let $H(y, w_1, \ldots, w_J)$ denote the set of all vectors $x$ in $Z(C)$ such that there exist integers $\gamma_1, \ldots, \gamma_J$ such that $x = y + \gamma_1 w_1 + \cdots + \gamma_J w_J$. (Notice that the integers $\gamma_1, \ldots, \gamma_J$ may be negative, but the linear combination $y + \gamma_1 w_1 + \cdots + \gamma_J w_J$ must have all nonnegative components to be in $Z(C)$.) This set $H(y, w_1, \ldots, w_J)$ may be called the hyperplane event in $Z(C)$ that includes $y$ plus all linear combinations of $\{w_1, \ldots, w_J\}$. That is,

$$H(y, w_1, \ldots, w_J) = \Big\{ y + \sum_{i=1}^{J} \gamma_i w_i \;\Big|\; y(c) + \sum_{i=1}^{J} \gamma_i w_i(c) \ge 0 \;\forall c,\; \gamma_i \text{ is an integer } \forall i \Big\} \subseteq Z(C).$$

Let $H^*(y, w_1, \ldots, w_J)$ denote the set that we get if we drop the restriction that each $\gamma_i$ coefficient must be an integer; that is,

$$H^*(y, w_1, \ldots, w_J) = \Big\{ y + \sum_{i=1}^{J} \gamma_i w_i \;\Big|\; y(c) + \sum_{i=1}^{J} \gamma_i w_i(c) \ge 0 \;\forall c,\; \gamma_i \in \mathbb{R} \;\forall i \Big\} \subseteq \mathbb{R}^C.$$

We say that the vector $y$ is a near-maximizer in $H(y, w_1, \ldots, w_J)$ of a function $f(x)$ over $x$ in $H^*(y, w_1, \ldots, w_J)$ iff there exist numbers $(\gamma_1, \ldots, \gamma_J)$ such that

$$-1 < \gamma_i < 1, \quad \forall i \in \{1, \ldots, J\},$$

and

$$f\Big( y + \sum_{i=1}^{J} \gamma_i w_i \Big) = \max_{x \in H^*(y, w_1, \ldots, w_J)} f(x).$$

That is, a near-maximizer is a rounding into the lattice $H(y, w_1, \ldots, w_J)$ of the maximizer over $H^*(y, w_1, \ldots, w_J)$.

Theorem 3. Given $w_1, \ldots, w_J$ as above, let $\{y_n\}_{n=1}^{\infty}$ be a sequence in $Z(C)$. For each $n$, suppose that $y_n$ is a near-maximizer of

$$\sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x(c)}{n\tau_n(c)}\right)$$

over $x$ in $H^*(y_n, w_1, \ldots, w_J)$. Suppose also that, for each $c$ in $C$, both $\tau_n(c)$ and $y_n(c)/n$ converge to finite positive limits as $n \to \infty$. Let $M(y_n)$ be the $J \times J$ matrix such that, for each $i$ and each $j$ in $\{1, \ldots, J\}$, the $(i, j)$ component is

$$M_{ij}(y_n) = \sum_{c \in C} w_i(c)\, w_j(c) / y_n(c).$$

Then for any sequence $\{x_n\}_{n=1}^{\infty}$ such that $x_n \in H(y_n, w_1, \ldots, w_J)$ for all $n$, we have

$$\lim_{n \to \infty} \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} = \lim_{n \to \infty} \prod_{c \in C} e^{-(x_n(c) - y_n(c))^2 / (2 y_n(c))}. \qquad (3.3)$$

Furthermore,

$$\lim_{n \to \infty} \frac{P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)}{P(y_n \mid n\tau_n)\, (2\pi)^{J/2}\, (\det(M(y_n)))^{-0.5}} = 1. \qquad (3.4)$$

Equation (3.3) in Theorem 3 implies that

$$\lim_{n \to \infty} \log(P(x_n \mid n\tau_n))/n - \lim_{n \to \infty} \log(P(y_n \mid n\tau_n))/n = \lim_{n \to \infty} \; -\sum_{c \in C} \frac{y_n(c)}{2n} \left( \frac{x_n(c)}{y_n(c)} - 1 \right)^2.$$

So any major sequence $\{x_n\}_{n=1}^{\infty}$ in $\{H(y_n, w_1, \ldots, w_J)\}_{n=1}^{\infty}$ must satisfy

$$\lim_{n \to \infty} x_n(c)/y_n(c) = 1, \quad \forall c \in C.$$

The right-hand side of Eq. (3.3), as a function of $x_n$, is proportional to the product of probability densities of independent Normal random variables with means $y_n(c)$ and standard deviations $\sqrt{y_n(c)}$. Thus, in the large Poisson game with expected action profile $n\tau_n$, the conditional probability distribution within the hyperplane event $H(y_n, w_1, \ldots, w_J)$ is almost the same as it would be in a game where the number of players choosing each action $c$ is the integer-rounding of an independent Normal random variable with mean $y_n(c)$ and standard deviation $\sqrt{y_n(c)}$. (This Normal approximation cannot be used, however, to estimate the relative probabilities of two subsets of $H(y_n, w_1, \ldots, w_J)$ if the probabilities of these subsets are both becoming infinitesimal relative to $P(y_n \mid n\tau_n)$ as $n \to \infty$.)


To interpret Eq. (3.4) in Theorem 3, let us use the approximate-equality symbol $\approx$ to indicate functions of $n$ whose ratio converges to 1 as $n$ goes to infinity. With this notation, Eq. (3.4) may be rewritten

$$P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n) \approx \frac{(2\pi)^{J/2}\, P(y_n \mid n\tau_n)}{\sqrt{\det(M(y_n))}} \qquad (3.5)$$

when $y_n$ is a near-maximizer of the magnitude formula over the hyperplane. To complement this approximate equality, we can use Eq. (3.1) (with the assumption that each $y_n(c) \to \infty$ as $n \to \infty$) to get the following approximation formula for $P(y_n \mid n\tau_n)$:

$$P(y_n \mid n\tau_n) \approx \prod_{c \in C} \left( \frac{e^{n\tau_n(c)\, \psi(y_n(c)/(n\tau_n(c)))}}{\sqrt{2\pi y_n(c)}} \right). \qquad (3.6)$$

In the case where $\tau_n = \tau$ and $y_n = n w_0$ for all $n$, where $w_0$ is in $Z(C)$, (3.6) and (3.5) yield

$$P(H(n w_0, w_1, \ldots, w_J) \mid n\tau) \approx \frac{e^{\sum_{c \in C} n\tau(c)\, \psi(w_0(c)/\tau(c))}}{(2\pi n)^{(\#C - J)/2} \sqrt{\det(M(w_0)) \prod_{c \in C} w_0(c)}}. \qquad (3.7)$$

Theorem 3 can be applied in the special case where $J = \#C$ and $\{w_1, \ldots, w_J\}$ are the unit vectors that span $Z(C)$. In this case, the hyperplane event $H(y_n, w_1, \ldots, w_J)$ is all of $Z(C)$, and the near-maximizer $y_n$ is an integer-rounding of the vector $n\tau_n$. Then Eq. (3.3) tells us that, for computing the probabilities of events that have positive limiting probability as $n \to \infty$, the number of players choosing each action $c$ can be approximated by the integer-rounding of a Normal random variable with mean $n\tau_n(c)$ and standard deviation $\sqrt{n\tau_n(c)}$. (However, this well-known Normal approximation for Poisson random variables cannot be used for estimating ratios of probabilities that go to zero with negative magnitude as $n \to \infty$, and hence the need for the theorems of this section.)
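To illustrate this special case numerically (an added sketch; the values of $n$, $\tau_n$, and the displaced profile $x$ below are arbitrary assumptions), the following Python fragment compares the exact ratio $P(x \mid n\tau_n)/P(y_n \mid n\tau_n)$, where $y_n$ is the integer-rounding of $n\tau_n$, with the Normal-style approximation on the right-hand side of Eq. (3.3).

from math import lgamma, log

def log_poisson_pmf(k, mean):
    return -mean + k * log(mean) - lgamma(k + 1)

n = 100000.0
tau = {"a": 0.4, "b": 0.6}                  # assumed action frequencies
y = {c: round(n * tau[c]) for c in tau}     # near-maximizer: rounded n*tau
x = {"a": y["a"] + 250, "b": y["b"] - 180}  # a nearby action profile

log_ratio_exact = sum(
    log_poisson_pmf(x[c], n * tau[c]) - log_poisson_pmf(y[c], n * tau[c])
    for c in tau
)
log_ratio_normal = sum(-((x[c] - y[c]) ** 2) / (2.0 * y[c]) for c in tau)
print(log_ratio_exact, log_ratio_normal)    # the two values are close for large n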

4. ONE-DIMENSIONAL EVENTS

As an application of the limit theorems from the preceding section, consider an event that consists of a single ray from the origin of $\mathbb{R}^C$. Let $w$ be a vector in $Z(C)$ such that $w(c) > 0$ for all $c$, and let $L$ be the set of all multiples of $w$,

$$L = \{ \gamma w \mid \gamma \text{ is a nonnegative integer} \}.$$


Let $\omega$ denote the sum of the components of $w$,

$$\omega = \sum_{c \in C} w(c).$$

We assume in this section that the distributions $\tau_n$ are convergent to a limit denoted by $\tau$; that is,

$$\tau(c) = \lim_{n \to \infty} \tau_n(c), \quad \forall c \in C.$$

By Theorem 1, the magnitude $\mu$ of the event $L$ is

$$\mu = \lim_{n \to \infty} \log(P(L \mid n\tau_n))/n = \lim_{n \to \infty} \max_{\gamma \ge 0} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{\gamma w(c)}{n\tau_n(c)}\right).$$

Because $\psi' = -\log$, an optimal solution of this maximization must satisfy the first-order condition

$$0 = \sum_{c \in C} w(c) \log\!\left(\frac{\gamma w(c)}{n\tau_n(c)}\right) = \omega \log(\gamma/n) + \sum_{c \in C} w(c) \log(w(c)/\tau_n(c))$$

(when the integer restriction on $\gamma$ is ignored), which gives us a unique optimal solution such that

$$\gamma/n = \prod_{c \in C} \left( \frac{\tau_n(c)}{w(c)} \right)^{w(c)/\omega}.$$

Let $\kappa_n$ denote this ratio,

$$\kappa_n = \prod_{c \in C} \left( \frac{\tau_n(c)}{w(c)} \right)^{w(c)/\omega},$$

and let

$$\kappa = \lim_{n \to \infty} \kappa_n = \prod_{c \in C} \left( \frac{\tau(c)}{w(c)} \right)^{w(c)/\omega}.$$

So when $\gamma_n$ is the rounding of $n\kappa_n$ to the nearest integer, the sequence $\{\gamma_n w\}_{n=1}^{\infty}$ is a major sequence in $L$, and it yields the magnitude

$$\mu = \lim_{n \to \infty} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{n\kappa_n w(c)}{n\tau_n(c)}\right) = \lim_{n \to \infty} \sum_{c \in C} \tau_n(c) \left( \frac{\kappa_n w(c)}{\tau_n(c)} \left( 1 - \log\!\left(\frac{\kappa_n w(c)}{\tau_n(c)}\right) \right) - 1 \right)$$
$$= \lim_{n \to \infty} \kappa_n \omega - \sum_{c \in C} \tau_n(c) = \kappa\omega - \sum_{c \in C} \tau(c). \qquad (4.1)$$

By Lemma 1 and the uniqueness of this optimal solution, for any $c$ such that $\tau(c) > 0$, any major sequence $\{x_n\}$ in $L$ must have $x_n(c)/(n\tau_n(c))$ converging to the limit of $\gamma_n w(c)/(n\tau_n(c))$. So the limit of major $c$-offsets in $L$ is

$$\alpha(c) = \lim_{n \to \infty} \kappa_n w(c)/\tau_n(c) = \kappa w(c)/\tau(c) \quad\text{if } \tau(c) > 0. \qquad (4.2)$$

The event $L$ is of course a hyperplane event with $J = 1$ dimension. So we can apply Theorem 3 here if we add the assumption that $\tau(c) > 0$ for all $c$. When $\gamma_n$ is an integer-rounding of $n\kappa_n$, the vector $y_n = \gamma_n w$ is the near-maximizer required by Theorem 3, and the $J \times J$ matrix $M(y_n)$ is just the number

$$M(\gamma_n w) = \sum_{c \in C} w(c)^2/(\gamma_n w(c)) \approx \omega/(n\kappa_n).$$

Then from Theorem 3, the approximate equalities (3.5) and (3.6) become

$$P(L \mid n\tau_n) \approx \sqrt{2\pi n\kappa_n/\omega}\; P(\gamma_n w \mid n\tau_n) \approx \sqrt{2\pi n\kappa_n/\omega} \prod_{c \in C} \left( \frac{e^{n\tau_n(c)\, \psi(\kappa_n w(c)/\tau_n(c))}}{\sqrt{2\pi n\kappa_n w(c)}} \right)$$
$$\approx \frac{e^{n\left(\kappa_n\omega - \sum_{c \in C} \tau_n(c)\right)}}{(2\pi n\kappa_n)^{(\#C - 1)/2}\, \omega^{1/2} \prod_{c \in C} w(c)^{1/2}}. \qquad (4.3)$$
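As a numerical check of the approximation (4.3) (an added illustration; the values of $n$, $\tau_n$, and $w$ below are arbitrary assumptions), the following Python sketch sums the exact Poisson probabilities along the ray $L$ and compares the total with the right-hand side of (4.3).

from math import exp, lgamma, log, pi, sqrt

def poisson_pmf(k, mean):
    return exp(-mean + k * log(mean) - lgamma(k + 1))

n = 200.0
tau = {1: 0.35, 2: 0.65}     # assumed limiting frequencies tau(c)
w = {1: 1, 2: 2}             # direction of the ray, with omega = 3
omega = sum(w.values())

# Exact probability of L by direct summation over gamma (the tail is negligible).
exact = sum(
    poisson_pmf(g * w[1], n * tau[1]) * poisson_pmf(g * w[2], n * tau[2])
    for g in range(3000)
)

# Approximation (4.3).
kappa = 1.0
for c in w:
    kappa *= (tau[c] / w[c]) ** (w[c] / omega)
num_actions = len(w)
approx = exp(n * (kappa * omega - sum(tau.values()))) / (
    (2 * pi * n * kappa) ** ((num_actions - 1) / 2)
    * sqrt(omega) * sqrt(w[1] * w[2])
)
print(exact, approx, approx / exact)   # the ratio is close to 1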

It may be useful to compare these results for Poisson games to those that we would get from considering the corresponding Multinomial model in which the number of players is known to be exactly equal to $n$ (instead of being a Poisson random variable with mean $n$), and each player's action is independently drawn from $C$ according to the probability distribution $\tau_n$. To formulate the Multinomial model, we need to assume that

$$\sum_{c \in C} \tau_n(c) = 1$$

(that is, $C$ includes all feasible actions). Then the event $L$ has a positive probability only if $n$ is an integer multiple of $\omega$, because otherwise $L$ does not include any vector that has components summing to $n$. When $n$ is an integer multiple of $\omega$, the probability of event $L$ is just the probability of the point $(n/\omega)w$ in the Multinomial model, that is,

$$n! \prod_{c \in C} \left( \frac{\tau_n(c)^{n w(c)/\omega}}{(n w(c)/\omega)!} \right).$$

Using Stirling's formula to get $\log((kn)!)/n \approx k(\log(kn) - 1)$ for any integer $k$, the magnitude of this event $L$ in the Multinomial model is then

$$\lim_{n \to \infty} \frac{\log(n!) + \sum_{c \in C} \big( (n w(c)/\omega) \log(\tau_n(c)) - \log((n w(c)/\omega)!) \big)}{n}$$
$$= \lim_{n \to \infty} \left( \log(n) - 1 + \sum_{c \in C} (w(c)/\omega)\big( \log(\tau_n(c)) + 1 - \log(n w(c)/\omega) \big) \right)$$
$$= \sum_{c \in C} \frac{w(c)}{\omega} \log\!\left( \frac{\tau(c)\,\omega}{w(c)} \right) = \log(\kappa\omega) = \log(\mu + 1),$$

where $\mu$ is the magnitude of the event $L$ that we found above in the Poisson model.

Notice that $\log(\mu + 1)$ is an increasing function of $\mu$. So among any collection of such ray events, the one with the highest magnitude in the Poisson model would also have the highest magnitude (when it has positive probability) in the corresponding Multinomial model. In this sense, we should anticipate similar results from analyzing large Poisson games and large Multinomial games. But any ray that does not go in the direction $\tau$ will have a smaller magnitude in the Multinomial model than in the Poisson model, because $\log(\mu + 1) \le \mu$.

5. PIVOT PROBABILITIES IN VOTING GAMES

Let us now apply the formulas from the preceding section to the special case where $C = \{1, 2\}$ and $w(1) = w(2) = 1$. The ray event $L$ generated by this vector $w$ is

$$L = \{ (\gamma, \gamma) \mid \gamma \text{ is a nonnegative integer} \}.$$

This game can be interpreted as an election where each player must vote either for candidate 1 or candidate 2, and the event $L$ can be interpreted as the event of a tie between the two candidates. The expected vote total for each candidate $c$ is then $n\tau_n(c)$ in the game of size $n$. We let $\tau(c)$ denote the limit of $\tau_n(c)$ as $n \to \infty$.


In the notation of the preceding section, we now have $\omega = w(1) + w(2) = 2$, and

$$\kappa_n = \prod_{c \in C} \left( \frac{\tau_n(c)}{w(c)} \right)^{w(c)/\omega} = \sqrt{\tau_n(1)\,\tau_n(2)} \quad\text{and}\quad \kappa = \lim_{n \to \infty} \kappa_n = \sqrt{\tau(1)\,\tau(2)}.$$

So in this Poisson voting game, our formula (4.1) for the magnitude $\mu$ of the tie event $L$ can now be written

$$\mu = 2\sqrt{\tau(1)\,\tau(2)} - \tau(1) - \tau(2) = -\big( \sqrt{\tau(1)} - \sqrt{\tau(2)} \big)^2. \qquad (5.1)$$

Formula (4.2) for the limits of major $c$-offsets in the tie event $L$ here becomes

$$\alpha(1) = \lim_{n \to \infty} \kappa_n w(1)/\tau_n(1) = \lim_{n \to \infty} \sqrt{\tau_n(2)/\tau_n(1)} = \sqrt{\tau(2)/\tau(1)}, \quad\text{if } \tau(1) > 0, \qquad (5.2)$$

and

$$\alpha(2) = \sqrt{\tau(1)/\tau(2)} \quad\text{if } \tau(2) > 0.$$

With the assumption that both $\tau(1) > 0$ and $\tau(2) > 0$, formula (4.3) here becomes

$$P(L \mid n\tau_n) \approx \frac{e^{n\left(2\sqrt{\tau_n(1)\tau_n(2)} - \tau_n(1) - \tau_n(2)\right)}}{2\sqrt{\pi n \sqrt{\tau_n(1)\tau_n(2)}}}. \qquad (5.3)$$

Suppose that the winner of this voting game will be the candidate with the most votes, but the winner will be determined by the toss of a fair coin in the event of a tie. In the analysis of rational voting behavior, we may want to estimate the probability that one more vote for some candidate $c$ would change the outcome of the election. This probability is called the pivot probability of a vote for $c$, and we may denote this pivot probability by $v(c \mid n\tau_n)$. Considering the case of $c = 1$, there are two ways that adding one vote for candidate 1 could change the outcome of the election: this additional vote could break a tie in which candidate 1 would have lost the fair coin toss, or this vote could make a tie in which candidate 1 would win the fair coin toss. Notice that, with $L$ denoting the event of a tie, the event that candidate 1 is one vote behind candidate 2 is $L - (1, 0)$. Thus, the pivot probability of a vote for candidate 1 is

$$v(1 \mid n\tau_n) = \big( P(L \mid n\tau_n) + P(L - (1, 0) \mid n\tau_n) \big)/2.$$


By Theorem 2, the probability of $L - (1, 0)$ can be approximated by the probability of $L$ multiplied by the limit of major 1-offsets in $L$,

$$\lim_{n \to \infty} \frac{P(L - (1, 0) \mid n\tau_n)}{P(L \mid n\tau_n)} = \alpha(1) = \sqrt{\frac{\tau(2)}{\tau(1)}}.$$

Even if $\tau(1) = 0$, we can get the same result with $\tau(2) > 0$, because

$$L - (1, 0) = L - (0, -1),$$

and

$$\lim_{n \to \infty} \frac{P(L \mid n\tau_n)}{P(L - (0, -1) \mid n\tau_n)} = \alpha(2) = \sqrt{\frac{\tau(1)}{\tau(2)}}.$$

So, assuming only that $\tau(1) + \tau(2) > 0$, we find that the pivot probability for candidate 1 satisfies

$$\lim_{n \to \infty} \frac{P(L \mid n\tau_n)}{v(1 \mid n\tau_n)} = \frac{1}{\left(1 + \sqrt{\tau(2)/\tau(1)}\right)/2} = \frac{2\sqrt{\tau(1)}}{\sqrt{\tau(1)} + \sqrt{\tau(2)}}.$$

Then the pivot probability for candidate 2 similarly satisfies

$$\lim_{n \to \infty} \frac{P(L \mid n\tau_n)}{v(2 \mid n\tau_n)} = \frac{2\sqrt{\tau(2)}}{\sqrt{\tau(1)} + \sqrt{\tau(2)}},$$

and so we get

$$\lim_{n \to \infty} \frac{v(1 \mid n\tau_n)}{v(2 \mid n\tau_n)} = \lim_{n \to \infty} \sqrt{\frac{\tau_n(2)}{\tau_n(1)}}. \qquad (5.4)$$

In particular, if the expected vote total for candidate 1 is less than the expected vote total for candidate 2, then a vote for candidate 1 is more likely to be pivotal than a vote for candidate 2, because the probability of candidate 1 being behind by one vote is greater than the probability of candidate 2 being behind by one vote.

With the additional assumption that the limiting fractions $\tau(1)$ and $\tau(2)$ are both positive, the approximate equality (5.3) can be applied to get

$$v(c \mid n\tau_n) \approx \frac{e^{n\left(2\sqrt{\tau_n(1)\tau_n(2)} - \tau_n(1) - \tau_n(2)\right)}}{4\sqrt{\pi n \sqrt{\tau_n(1)\tau_n(2)}}} \left( \frac{\sqrt{\tau_n(1)} + \sqrt{\tau_n(2)}}{\sqrt{\tau_n(c)}} \right), \quad \forall c \in \{1, 2\}. \qquad (5.5)$$

(Recall that the approximate-equality symbol $\approx$ is being used to indicate functions of $n$ whose ratio converges to 1 as $n \to \infty$.) Notice that $c$ appears in the right-hand side of (5.5) only in the denominator of the second factor.
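For a concrete sense of the accuracy of (5.5) (an added numerical illustration; $n$ and the vote shares below are arbitrary assumptions), the following Python sketch computes the exact pivot probabilities directly from the two independent Poisson vote totals and compares them with the approximation.

from math import exp, lgamma, log, pi, sqrt

def poisson_pmf(k, mean):
    return exp(-mean + k * log(mean) - lgamma(k + 1))

n = 1000.0
tau = {1: 0.46, 2: 0.54}     # assumed expected vote shares

def exact_pivot(c):
    # v(c) = ( P(tie) + P(candidate c exactly one vote behind) ) / 2.
    other = 2 if c == 1 else 1
    tie = sum(poisson_pmf(k, n * tau[1]) * poisson_pmf(k, n * tau[2])
              for k in range(3000))
    behind = sum(poisson_pmf(k, n * tau[c]) * poisson_pmf(k + 1, n * tau[other])
                 for k in range(3000))
    return (tie + behind) / 2.0

def approx_pivot(c):
    # Right-hand side of formula (5.5).
    head = exp(n * (2 * sqrt(tau[1] * tau[2]) - tau[1] - tau[2])) / (
        4 * sqrt(pi * n * sqrt(tau[1] * tau[2]))
    )
    return head * (sqrt(tau[1]) + sqrt(tau[2])) / sqrt(tau[c])

for c in (1, 2):
    print(c, exact_pivot(c), approx_pivot(c))   # the two values nearly agree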


We have been assuming that all players in the voting game must vote for either candidate 1 or candidate 2. Let us now drop this assumption and suppose that voters have a third option of abstaining.

We find some difficulty extending the preceding analysis to the game with abstention only because the fraction of abstainers may go to one. One way to avoid this difficulty is to reinterpret the parameter $n$ as the expected number of players who choose not to abstain, with $\tau_n(c)$ reinterpreted as the fraction who vote for $c$ among these nonabstaining voters, so that $\tau_n(1) + \tau_n(2) = 1$. (In this reinterpretation, $n$ is no longer a parameter of the game alone, because it depends on the equilibrium strategies, but the above analysis took the strategies as given.) With this reinterpretation of $n$, the preceding analysis can be directly applied to the game with abstention, provided that these reinterpreted values of $n$ are still going to infinity. That is, our above results can be extended to voting games with abstention if the expected number of nonabstaining voters ($n\tau_n(1) + n\tau_n(2)$) is going to infinity in the sequence of games.

Notice that the reinterpretation proposed above did not affect the numerical value of $n\tau_n(1)$ or $n\tau_n(2)$ or $\tau_n(1)/\tau_n(2)$. The reinterpretation decreased $n$ by excluding those players who choose to abstain, but it also increased the fractions $\tau_n(1)$ and $\tau_n(2)$ in the same proportion. So the values of the pivot-probability formulas in (5.4) and (5.5) were not affected by this reinterpretation. Thus, when we return to our original interpretation of $n$ as the expected number of players in the voting game, Eq. (5.4) remains valid if $\lim_{n \to \infty} n\tau_n(1) + n\tau_n(2) = +\infty$. Equation (5.5) remains valid if $\lim_{n \to \infty} n\tau_n(1) + n\tau_n(2) = +\infty$ and $0 < \lim_{n \to \infty} \tau_n(1)/\tau_n(2) < +\infty$.

(These pivot-probability formulas can also be derived from mathematical formulas involving Bessel functions. When the numbers of votes for candidates 1 and 2 are independent Poisson random variables with means $n\tau(1)$ and $n\tau(2)$, respectively, the probability that candidate 1 gets exactly $k$ more votes than candidate 2 is

$$e^{-n(\tau(1)+\tau(2))} \left( \frac{\tau(1)}{\tau(2)} \right)^{k/2} I_k\!\left( 2n\sqrt{\tau(1)\tau(2)} \right),$$

where $I_k$ is a modified Bessel function; see formula 9.6.10 in Abramowitz and Stegun [1]. Such modified Bessel functions can be approximated by the following formula for large $z$,

$$I_k(z) \approx \frac{e^{\sqrt{z^2 + k^2}}}{\sqrt{2\pi\sqrt{z^2 + k^2}}} \left( \frac{\sqrt{z^2 + k^2} - k}{\sqrt{z^2 + k^2} + k} \right)^{k/2},$$

using formulas 9.7.1 and 9.7.7 in Abramowitz and Stegun [1].)
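The quoted Bessel-function identity can be checked directly: the difference of two independent Poisson random variables follows what is now usually called the Skellam distribution, whose probability mass function has exactly this form. The following short SciPy sketch (an added illustration; the parameter values are arbitrary assumptions) compares the two.

import numpy as np
from scipy.special import ive      # exponentially scaled modified Bessel I_k
from scipy.stats import skellam

n = 500.0
tau1, tau2 = 0.48, 0.52            # assumed expected vote shares
mu1, mu2 = n * tau1, n * tau2
z = 2.0 * np.sqrt(mu1 * mu2)       # argument 2*n*sqrt(tau1*tau2)

for k in (-3, -1, 0, 1, 2, 5):
    # e^{-(mu1+mu2)} * (mu1/mu2)^(k/2) * I_k(z); the scaled function
    # ive(k, z) = I_k(z) * e^{-z} is used here to avoid overflow.
    bessel_form = np.exp(-(mu1 + mu2) + z) * (mu1 / mu2) ** (k / 2.0) * ive(k, z)
    print(k, bessel_form, skellam.pmf(k, mu1, mu2))   # the two columns agree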


6. LEDYARD'S MODEL WITH COSTLY VOTING

To illustrate the power of these results, we now derive a Poisson version of a basic theorem in social choice that was originally shown by Ledyard [5]. By using a Poisson model, we should be able to derive Ledyard's results more cleanly and simply than was possible with the Multinomial model of nonrandom population size that Ledyard [5] used.

We again consider a voting game in which the players can either abstain or vote for one of two candidates. In this voting game, each player's type has two components: his policy type and his voting cost. Suppose that the set of possible policy types is some finite set $\Theta$, and suppose that the voting costs are drawn out of the interval from 0 to 1. So the type set $T$ is the compact set

$$T = \Theta \times [0, 1].$$

As in Section 5, there are two candidates numbered 1 and 2, and each player has three possible actions, denoted by elements in the set $C = \{0, 1, 2\}$. Here action 1 is voting for candidate 1, action 2 is voting for candidate 2, and action 0 is abstaining. As above, the winner is the candidate with the most votes, and we assume that the winner will be determined by the toss of a fair coin in the event of a tie.

Each player's policy type $\theta$ in $\Theta$ determines the policy benefits $u(c, \theta)$ that he will get if candidate $c$ is the winner of the election. But we must also take the cost of voting into account. When candidate $c$ wins, a player who has policy type $\theta$ and voting cost $\gamma$ would get a total utility payoff equal to $u(c, \theta) - \gamma$ if he voted in the election, while a similar player would get a total utility payoff equal to $u(c, \theta)$ if he abstained in the election.

Let the number of players in this voting game be a Poisson random variable with mean $n$. Each player's policy type is a random variable drawn from $\Theta$ according to some probability distribution $\rho$, where $\rho(\theta)$ denotes the probability of having policy type $\theta$. Each player's voting cost is a random variable drawn from $[0, 1]$ according to a probability distribution that has a cumulative distribution function $F$ such that the derivative at zero, $F'(0)$, is strictly positive. That is, we assume that the probability density of voting costs must be strictly positive at 0, but nobody can have a negative cost of voting.

We also assume that the policy types and voting costs of all players are independent random variables. That is, each player's policy type and voting cost are independent of each other and of all other players' types.

The total utility payoffs defined above are bounded and depend continuously on the voter's type, as the equilibrium-existence theorem in Section 2 requires. Thus, this Poisson game of size $n$ has at least one equilibrium, which we may denote by $\tau_n$. The main result of this section is that, if the expected number of players in the voting game is large, then the candidate who offers the greater expected policy benefits will almost surely win in equilibrium.

Theorem 4. In the voting game described above, suppose that the expected policy benefits for a randomly sampled voter are greater from candidate 2 than from candidate 1; that is,

$$\sum_{\theta \in \Theta} \rho(\theta)\, u(2, \theta) > \sum_{\theta \in \Theta} \rho(\theta)\, u(1, \theta).$$

Then the probability of candidate 2 winning in the voting game of size $n$ under the equilibrium $\tau_n$ must converge to 1 as the size parameter $n$ goes to infinity.

Proof. Let $\Theta_1$ denote the set of policy types in $\Theta$ that prefer candidate 1, and let $\Theta_2$ denote the other policy types; that is,

$$\Theta_1 = \{ \theta \in \Theta \mid u(1, \theta) > u(2, \theta) \}, \quad \Theta_2 = \{ \theta \in \Theta \mid u(1, \theta) \le u(2, \theta) \}.$$

In equilibrium, for each candidate $c$, each player with policy type in $\Theta_c$ will either vote for his preferred candidate $c$ or abstain, because voting for the less preferred candidate is strictly dominated by abstaining.

Given any candidate $c$ in this two-candidate election, let $-c$ denote the other candidate. Let $v_n(c)$ denote the probability that an additional vote for candidate $c$ would be pivotal in the equilibrium $\tau_n$ of the voting game of size $n$. In this equilibrium, a player of policy type $\theta$ in $\Theta_c$ prefers to actually vote for candidate $c$ (rather than abstain) iff his voting cost is less than

$$(u(c, \theta) - u(-c, \theta))\, v_n(c).$$

(Environmental equivalence is being applied here.) Thus, the probability that a randomly sampled player will vote for candidate $c$ in equilibrium, which we denote by $\tau_n(c)$, must satisfy

$$\tau_n(c) = \sum_{\theta \in \Theta_c} \rho(\theta)\, F\big( (u(c, \theta) - u(-c, \theta))\, v_n(c) \big). \qquad (6.1)$$

We now claim that the expected total number of votes $n\tau_n(1) + n\tau_n(2)$ must go to infinity as $n \to \infty$. If not, then both candidates' expected scores $n\tau_n(1)$ and $n\tau_n(2)$ would have finite limits (taking a subsequence if necessary), and then the pivot probabilities $v_n(1)$ and $v_n(2)$ would converge to the positive pivot probabilities that are associated with independent Poisson-distributed vote totals that have these limiting expected values. But then (6.1) would imply that $\tau_n(2)$ must have a strictly positive limit, and so $n\tau_n(2)$ goes to infinity, as claimed.

So if we look only at the players who actually vote, then the sequence of games considered here has an expected voting turnout that goes to infinity as $n \to \infty$. So, as shown in Section 5, we can apply Eq. (5.4) here to get

$$\lim_{n \to \infty} \frac{v_n(1)}{v_n(2)} = \lim_{n \to \infty} \sqrt{\frac{\tau_n(2)}{\tau_n(1)}}. \qquad (6.2)$$

The fact that at least one candidate's expected score is going to infinity implies that both pivot probabilities $v_n(1)$ and $v_n(2)$ go to 0 as $n \to \infty$. Thus, by differentiability of the cumulative distribution function $F$ at zero,

$$\sum_{\theta \in \Theta_c} \rho(\theta)\, F\big( (u(c, \theta) - u(-c, \theta))\, v_n(c) \big) \approx \sum_{\theta \in \Theta_c} \rho(\theta)\, F'(0)\, (u(c, \theta) - u(-c, \theta))\, v_n(c)$$

for each candidate $c$. Then (6.1) gives us

$$\frac{\tau_n(1)}{\tau_n(2)} \approx \frac{\sum_{\theta \in \Theta_1} \rho(\theta)\, F'(0)\, (u(1, \theta) - u(2, \theta))\, v_n(1)}{\sum_{\theta \in \Theta_2} \rho(\theta)\, F'(0)\, (u(2, \theta) - u(1, \theta))\, v_n(2)}.$$

So applying (6.2) we get

$$\frac{\tau_n(1)}{\tau_n(2)} \approx \frac{\sum_{\theta \in \Theta_1} \rho(\theta)\, (u(1, \theta) - u(2, \theta))}{\sum_{\theta \in \Theta_2} \rho(\theta)\, (u(2, \theta) - u(1, \theta))} \sqrt{\frac{\tau_n(2)}{\tau_n(1)}}.$$

That is,

$$\frac{\tau_n(1)}{\tau_n(2)} \approx \left( \frac{\sum_{\theta \in \Theta_1} \rho(\theta)\, (u(1, \theta) - u(2, \theta))}{\sum_{\theta \in \Theta_2} \rho(\theta)\, (u(2, \theta) - u(1, \theta))} \right)^{2/3}. \qquad (6.3)$$

By the basic assumption that candidate 2 offers greater expected policy benefits than candidate 1, the right-hand side of (6.3) is strictly less than one. So the expected score of candidate 2 must be going to infinity and, in the limit, the expected score of candidate 1 is less than the expected score of candidate 2 by a strictly positive fraction of candidate 2's expected score.

Recall that the standard deviation of any Poisson random variable is the square root of its expected value, and this square root is a vanishing fraction of the expected value as the expected value becomes large. So the expected excess of candidate 2's score over candidate 1's score is becoming infinitely many times the standard deviation of either score as $n \to \infty$. Thus, the probability that candidate 2 wins must be converging to one. Q.E.D.
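To see this last step numerically (an added illustration; the limiting vote shares below are arbitrary assumptions, not derived from any particular equilibrium), the following Python sketch computes the probability that candidate 2 wins when the two vote totals are independent Poisson random variables whose means grow in a fixed ratio, and shows this probability approaching one.

from math import exp, lgamma, log

def poisson_pmf(k, mean):
    return exp(-mean + k * log(mean) - lgamma(k + 1))

tau1, tau2 = 0.0008, 0.0010   # assumed limiting vote shares, with tau1 < tau2

for n in (10**4, 10**5, 10**6):
    lam1, lam2 = n * tau1, n * tau2
    top = int(lam1 + lam2 + 50 * (lam1 + lam2) ** 0.5)   # generous truncation
    pmf1 = [poisson_pmf(k, lam1) for k in range(top)]
    pmf2 = [poisson_pmf(k, lam2) for k in range(top)]
    cdf2, running = [], 0.0
    for k in range(top):
        running += pmf2[k]
        cdf2.append(running)
    win2 = sum(pmf1[k] * (1.0 - cdf2[k]) for k in range(top))  # candidate 2 ahead
    tie = sum(pmf1[k] * pmf2[k] for k in range(top))
    print(n, win2 + 0.5 * tie)    # increases toward 1 as n grows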


Following Ledyard [5], we can now take the story back one stage to the point in time where the candidates choose their policy positions. Suppose that the players in the voting game have preferences over some given policy space, and each candidate can choose any policy position in this space. After the candidates choose these policies, the policy benefits $u(c, \theta)$ will be equal to the benefits that a player of policy type $\theta$ would get from the policy position chosen by candidate $c$. Theorem 4 tells us that, when $n$ is large, any candidate who does not choose a policy position that maximizes the players' expected benefits can be beaten almost surely by a candidate who chooses a policy position that maximizes the players' expected benefits. Thus, both candidates should rationally choose a policy position that maximizes the players' expected benefits. If there is a unique policy position that maximizes the players' expected benefits, then both candidates must rationally choose that same position, in which case nobody will actually vote in the voting game. Thus Ledyard [5] showed that democracy may achieve the classical utilitarian ideal of expected-welfare maximization in a voting game where nobody actually votes in equilibrium!

7. PROOFS OF THE LIMIT THEOREMS

We begin with a useful fact about $\psi(\theta) = \theta(1 - \log(\theta)) - 1$. For any nonnegative number $\theta$,

$$\psi(\theta) < 2 - \theta. \qquad (7.1)$$

To verify this inequality, it can be shown by differentiation that the convex function $2 - \theta - \psi(\theta)$ is minimal when $\theta = e$, where it equals $3 - e$, which is positive.

Lemma 1. Let $\{x_n\}_{n=1}^{\infty}$ be any sequence of possible action profiles in $Z(C)$. Then

$$\lim_{n \to \infty} \frac{\log(P(x_n \mid n\tau_n))}{n} = \lim_{n \to \infty} \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right).$$

Proof. From Eq. (3.1) we get

$$\frac{\log(P(x_n \mid n\tau_n))}{n} - \sum_{c \in C} \tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right) = -\sum_{c \in C} \frac{\log(\iota(x_n(c))) + 0.5\, \log(2\pi x_n(c) + \pi/3)}{n}.$$


The term $\log(\iota(x_n(c)))/n$ must go to zero as $n$ goes to infinity, because $\iota(x_n(c))$ is always close to 1, and the term $\log(2\pi x_n(c) + \pi/3)$ is always positive. So the equality in Lemma 1 can fail only if there exists some action $c$, some positive number $\varepsilon$, and some infinite subsequence of the $n$'s such that

$$\log(2\pi x_n(c) + \pi/3) > \varepsilon n \quad\text{and}\quad x_n(c) > e^{\varepsilon n}/(2\pi) - 1/6, \quad \forall n,$$

and so $x_n(c)/n$ goes to $+\infty$. Inequality (7.1) implies

$$\tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right) < 2\tau_n(c) - x_n(c)/n,$$

and

$$\tau_n(c)\, \psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right) + 0.5\, \log(2\pi x_n(c) + \pi/3)/n < 2\tau_n(c) - \big( x_n(c) - 0.5\, \log(2\pi x_n(c) + \pi/3) \big)/n.$$

With $x_n(c)/n$ going to $+\infty$ and $\tau_n(c)$ bounded, the right-hand sides of these two inequalities must both go to $-\infty$ as $n \to +\infty$. But then $\sum_{c \in C} \tau_n(c)\, \psi(x_n(c)/(n\tau_n(c)))$ and $\log(P(x_n \mid n\tau_n))/n$ must both go to $-\infty$. That is, in any subsequence where the difference between $\sum_{c \in C} \tau_n(c)\, \psi(x_n(c)/(n\tau_n(c)))$ and $\log(P(x_n \mid n\tau_n))/n$ goes to any limit other than zero, both expressions must go to $-\infty$. Thus their limits must be equal. Q.E.D.

For any nonnegative integer $k$ and any nonnegative number $\lambda$, let $p(k \mid \lambda)$ denote the probability that a Poisson random variable with mean $\lambda$ would equal $k$. That is,

$$p(k \mid \lambda) = e^{-\lambda} \lambda^k / k!.$$

We now prove two more computational lemmas about Poisson distributions.

Lemma 2. Let $\lambda$ be any nonnegative number and let $i$ be any integer such that $i > \lambda$. Consider the event that a Poisson random variable with mean $\lambda$ is greater than or equal to $i$. The probability of this event can be bounded by the inequality

$$\sum_{k=i}^{\infty} p(k \mid \lambda) \le p(i \mid \lambda) \left( \frac{i}{i - \lambda} \right).$$

Proof. For any positive integer $\delta$,

$$p(i+\delta \mid \lambda) = e^{-\lambda}\lambda^{i+\delta}/(i+\delta)! = (e^{-\lambda}\lambda^{i}/i!)\bigl(\lambda^{\delta}/((i+1)\cdots(i+\delta))\bigr) \le p(i \mid \lambda)(\lambda/i)^{\delta}.$$

Thus,

$$\sum_{k=i}^{\infty} p(k \mid \lambda) \;\le\; p(i \mid \lambda)\left(\sum_{\delta=0}^{\infty}(\lambda/i)^{\delta}\right) = p(i \mid \lambda)\left(\frac{1}{1-\lambda/i}\right) = p(i \mid \lambda)\left(\frac{i}{i-\lambda}\right).$$

Q.E.D
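A small numerical check of Lemma 2 (illustrative only; the values of $\lambda$ and $i$ below are arbitrary):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

lam, i = 50.0, 65                                             # arbitrary example with i > lam
tail = sum(poisson_pmf(k, lam) for k in range(i, i + 500))    # truncated tail; remainder is negligible
bound = poisson_pmf(i, lam) * i / (i - lam)
print("tail probability:", tail)
print("Lemma 2 bound   :", bound)
assert tail <= bound
```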

Lemma 3. Let $\lambda$ be any positive number, and let $h$ and $k$ be any two integers. Then there exist some numbers $\zeta$ and $\eta$ such that $\zeta$ is between 0 and 1, $\eta$ is between $h+\zeta$ and $k+\zeta$, and

$$\frac{p(k \mid \lambda)}{p(h \mid \lambda)} = e^{\lambda(\psi((k+\zeta)/\lambda) - \psi((h+\zeta)/\lambda))} = \left(\frac{\lambda}{h+\zeta}\right)^{k-h} e^{-(k-h)^2/(2\eta)}.$$

Proof. Suppose first that $k < h$. Then

$$\log\left(\frac{p(k \mid \lambda)}{p(h \mid \lambda)}\right) = \log\left(\frac{\lambda^{k}\,h!}{k!\,\lambda^{h}}\right) = \sum_{i=k+1}^{h} \log(i/\lambda).$$

But from the basic definition of the Riemann integral,

$$\lambda \int_{k/\lambda}^{h/\lambda} \log(\theta)\,d\theta \;\le\; \sum_{i=k+1}^{h} \log(i/\lambda) \;\le\; \lambda \int_{(k+1)/\lambda}^{(h+1)/\lambda} \log(\theta)\,d\theta,$$

because the log function is monotone increasing. So there must exist some $\zeta$ between 0 and 1 such that

$$\log\left(\frac{p(k \mid \lambda)}{p(h \mid \lambda)}\right) = \lambda \int_{(k+\zeta)/\lambda}^{(h+\zeta)/\lambda} \log(\theta)\,d\theta = \lambda\bigl(\psi((k+\zeta)/\lambda) - \psi((h+\zeta)/\lambda)\bigr),$$

where the second equality follows from the fact that $\psi'(\theta) = -\log(\theta)$. Reversing the roles of $k$ and $h$, it is straightforward to show that these equalities also hold in the case where $k > h$.

By second-order Taylor expansion, there is a number $\eta$ between $k+\zeta$ and $h+\zeta$ such that

$$\psi\!\left(\frac{k+\zeta}{\lambda}\right) - \psi\!\left(\frac{h+\zeta}{\lambda}\right) = -\log\!\left(\frac{h+\zeta}{\lambda}\right)\left(\frac{k-h}{\lambda}\right) - \frac{1}{2}\left(\frac{\lambda}{\eta}\right)\left(\frac{k-h}{\lambda}\right)^{2}.$$

The second equality in the lemma follows easily from this Taylor expansion. Q.E.D
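The following sketch checks Lemma 3 numerically for one arbitrary $(\lambda, k, h)$: it verifies the exact identity $\log(p(k \mid \lambda)/p(h \mid \lambda)) = \sum_{i=k+1}^{h}\log(i/\lambda)$ used in the proof, and confirms that the target value is bracketed by $\lambda(\psi((k+\zeta)/\lambda)-\psi((h+\zeta)/\lambda))$ at $\zeta = 0$ and $\zeta = 1$, so some intermediate $\zeta$ attains it.

```python
import math

def psi(t):
    return t * (1.0 - math.log(t)) - 1.0

def log_pmf(k, lam):
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

lam, k, h = 40.0, 33, 47                               # arbitrary example with k < h
target = log_pmf(k, lam) - log_pmf(h, lam)             # log( p(k|lam) / p(h|lam) )
riemann = sum(math.log(i / lam) for i in range(k + 1, h + 1))
print("exact identity:", target, "=", riemann)

def g(zeta):
    # lam * ( psi((k+zeta)/lam) - psi((h+zeta)/lam) )
    return lam * (psi((k + zeta) / lam) - psi((h + zeta) / lam))

print("bracketed:", min(g(0.0), g(1.0)) <= target <= max(g(0.0), g(1.0)))   # True
```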

Theorem 1. Let $\{A_n\}_{n=1}^{\infty}$ be any sequence of events in $Z(C)$. Then

$$\lim_{n\to\infty} \frac{\log(P(A_n \mid n\tau_n))}{n} = \lim_{n\to\infty} \max_{y_n \in A_n} \frac{\log(P(y_n \mid n\tau_n))}{n} = \lim_{n\to\infty} \max_{y_n \in A_n} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{y_n(c)}{n\tau_n(c)}\right).$$

Proof. For any negative number $\nu$, define the set $S(\nu, n)$ such that

$$S(\nu, n) = \left\{ x \in Z(C) \;\Big|\; \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x(c)}{n\tau_n(c)}\right) < \nu \right\}.$$

Our first claim is that $\limsup_{n\to\infty} \log(P(S(\nu, n) \mid n\tau_n))/n$ is not greater than $\nu$.

To prove this claim, we cover $S(\nu, n)$ by $\#C + 1$ subsets. (Here $\#C$ denotes the number of actions in the set $C$.) For any action $c$ in $C$, let $S_c(\nu, n)$ be the set such that

$$S_c(\nu, n) = \{ x \in S(\nu, n) \mid x(c) \ge n(2 - \nu) \}.$$

Also, let $S^*(\nu, n)$ be the set such that

$$S^*(\nu, n) = \{ x \in S(\nu, n) \mid x(c) < n(2 - \nu),\ \forall c \in C \}.$$

Thus, $S(\nu, n) \subseteq S^*(\nu, n) \cup \bigl(\bigcup_{c \in C} S_c(\nu, n)\bigr)$.

Let $\theta$ denote the next integer larger than $n(2 - \nu)$. By (7.1), the inequality $\theta \ge n(2 - \nu)$ implies that

$$\tau_n(c)\,\psi\!\left(\frac{\theta}{n\tau_n(c)}\right) < \tau_n(c)\left(2 - \frac{\theta}{n\tau_n(c)}\right) \le 2\tau_n(c) - (2 - \nu) \le \nu.$$

So the probability that exactly $\theta$ players choose $c$ satisfies

$$p(\theta \mid n\tau_n(c)) = \frac{e^{n\tau_n(c)\,\psi(\theta/(n\tau_n(c)))}}{\vartheta(\theta)\sqrt{2\pi\theta + \pi/3}} < e^{n\nu}.$$

Our set $S_c(\nu, n)$ is a subset of the event that at least $\theta$ players choose action $c$. So by Lemma 2,

$$P(S_c(\nu, n) \mid n\tau_n) < e^{n\nu}\left(\frac{n(2-\nu)}{n(2-\nu) - n\tau_n(c)}\right) < e^{n\nu}\left(\frac{2-\nu}{1-\nu}\right).$$

The set $S^*(\nu, n)$ contains at most $(n(2-\nu))^{\#C}$ points, each of which has a probability less than $e^{n\nu}$. So

$$P(S^*(\nu, n) \mid n\tau_n) < e^{n\nu}(n(2-\nu))^{\#C}.$$

Thus we get

$$P(S(\nu, n) \mid n\tau_n) < \bigl((n(2-\nu))^{\#C} + \#C(2-\nu)/(1-\nu)\bigr)e^{n\nu}, \qquad (7.2)$$

which proves our first claim.

Now suppose that $\{A_n\}_{n=1}^{\infty}$ is a sequence of events that has at least one major sequence of points, and let $\mu$ denote the magnitude of such a major sequence in $\{A_n\}_{n=1}^{\infty}$; that is,

$$\mu = \lim_{n\to\infty} \max_{y \in A_n} \frac{\log(P(y \mid n\tau_n))}{n}.$$

Let $\varepsilon$ be any strictly positive number. Then for all sufficiently large $n$, the set $A_n$ must be a subset of $S(\mu+\varepsilon, n)$. So for all sufficiently large $n$, we get

$$P(A_n \mid n\tau_n) < \bigl((n(2-\mu-\varepsilon))^{\#C} + \#C(2-\mu-\varepsilon)/(1-\mu-\varepsilon)\bigr)e^{n(\mu+\varepsilon)} < e^{n(\mu+2\varepsilon)},$$

because $e^{n\varepsilon} > (n(2-\mu-\varepsilon))^{\#C} + \#C(2-\mu-\varepsilon)/(1-\mu-\varepsilon)$ when $n$ is large. Thus,

$$\limsup_{n\to\infty} \log(P(A_n \mid n\tau_n))/n \le \mu + 2\varepsilon, \quad \forall \varepsilon > 0,$$

which in turn implies that

$$\limsup_{n\to\infty} \log(P(A_n \mid n\tau_n))/n \le \mu.$$

But the assumption that a major sequence $\{x_n\}_{n=1}^{\infty}$ in $\{A_n\}_{n=1}^{\infty}$ has magnitude $\mu$ also implies that

$$\liminf_{n\to\infty} \log(P(A_n \mid n\tau_n))/n \ge \lim_{n\to\infty} \log(P(x_n \mid n\tau_n))/n = \mu,$$

because each $x_n$ is in $A_n$. Thus we conclude

$$\lim_{n\to\infty} \log(P(A_n \mid n\tau_n))/n = \mu.$$

That is, if there is a major sequence in $\{A_n\}_{n=1}^{\infty}$, then its magnitude is equal to the magnitude of $\{A_n\}_{n=1}^{\infty}$.

Now suppose that the sequence of events $\{A_n\}_{n=1}^{\infty}$ has a magnitude $\mu$. Obviously, no subsequence of points in $\{A_n\}_{n=1}^{\infty}$ can have a magnitude greater than $\mu$, and so

$$\mu \ge \limsup_{n\to\infty} \max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n.$$

If $\liminf_{n\to\infty} \max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n$ were strictly less than $\mu$, then we could choose an infinite subsequence in which the numbers $\max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n$ converge to this limit-infimum. But along this subsequence, a major sequence of points would exist, and so (as just shown) the limit of $\max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n$ would be equal to the limit of $\log(P(A_n \mid n\tau_n))/n$, which equals $\mu$. Thus, $\lim_{n\to\infty} \max_{y_n \in A_n} \log(P(y_n \mid n\tau_n))/n$ must exist and must equal $\mu$. Q.E.D
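To illustrate the magnitude theorem numerically (this is only a sketch with arbitrary vote shares, and it assumes that the two candidates' vote totals are independent Poisson variables with means $n\tau(a)$ and $n\tau(b)$): for the tie event $\{x : x(a) = x(b)\}$, maximizing $\tau(a)\psi(k/(n\tau(a))) + \tau(b)\psi(k/(n\tau(b)))$ over $k$ gives $k = n\sqrt{\tau(a)\tau(b)}$ and the value $2\sqrt{\tau(a)\tau(b)} - 1$, so Theorem 1 predicts $\log P(\mathrm{tie})/n \to 2\sqrt{\tau(a)\tau(b)} - 1$.

```python
import math

def log_pmf(k, lam):
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

tau_a, tau_b = 0.6, 0.4                 # arbitrary expected vote shares
for n in (100, 1000, 10000):
    la, lb = n * tau_a, n * tau_b
    # P(tie) = sum_k p(k | n*tau_a) * p(k | n*tau_b); truncate the sum far beyond both means
    log_terms = [log_pmf(k, la) + log_pmf(k, lb) for k in range(2 * n)]
    m = max(log_terms)
    log_p_tie = m + math.log(sum(math.exp(t - m) for t in log_terms))
    print(n, log_p_tie / n)

print("predicted magnitude:", 2 * math.sqrt(tau_a * tau_b) - 1)   # about -0.0202
```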

Corollary 1. Suppose that $\{A_n\}_{n=1}^{\infty}$ is a sequence of events that has a finite magnitude. Suppose that $\{B_n\}_{n=1}^{\infty}$ is a sequence of events such that

$$\limsup_{n\to\infty} \max_{y_n \in B_n} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{y_n(c)}{n\tau_n(c)}\right) \;<\; \lim_{n\to\infty} \max_{x_n \in A_n} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right).$$

Then $\lim_{n\to\infty} P(B_n \mid n\tau_n)/P(A_n \mid n\tau_n) = 0$ and $\lim_{n\to\infty} P(A_n \setminus B_n \mid n\tau_n)/P(A_n \mid n\tau_n) = 1$.

Proof. Let $\mu$ denote the magnitude of $\{A_n\}_{n=1}^{\infty}$. If the corollary failed, then we could find some infinite subsequence along which $P(B_n \mid n\tau_n)/P(A_n \mid n\tau_n)$ is bounded below by some positive number $q$, and so

$$0 \ge \liminf_{n\to\infty} \log(P(B_n \mid n\tau_n))/n \ge \lim_{n\to\infty} \bigl(\log(q) + \log(P(A_n \mid n\tau_n))\bigr)/n = \lim_{n\to\infty} \log(P(A_n \mid n\tau_n))/n = \mu.$$

So this subsequence could also be chosen so that the $\{B_n\}$ subsequence has a magnitude and

$$\lim_{n\to\infty} \log(P(B_n \mid n\tau_n))/n \ge \mu.$$

By Theorem 1, we could then select points $y_n$ in $B_n$ such that, along this subsequence,

$$\lim_{n\to\infty} \log(P(y_n \mid n\tau_n))/n \ge \mu,$$

and then Lemma 1 would imply

$$\lim_{n\to\infty} \sum_{c \in C} \tau_n(c)\,\psi(y_n(c)/(n\tau_n(c))) \ge \mu.$$

But this result would contradict the strict inequality that was assumed in the corollary. Q.E.D

Theorem 2. Let $w$ be any vector in $\mathbb{R}^{C}$ such that each component $w(c)$ is an integer. For each action $c$ such that $w(c) \ne 0$, suppose that $\lim_{n\to\infty} n\tau_n(c) = +\infty$, and suppose that some number $\alpha(c)$ is the limit of major $c$-offsets in the sequence of events $\{A_n\}_{n=1}^{\infty}$. Then

$$\lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} = \prod_{c \in C} \alpha(c)^{w(c)}.$$

Proof. Let $\varepsilon$ be any positive number. Let $D_n(\varepsilon)$ be the set of all $x$ in $A_n$ such that

$$\alpha(c) - \varepsilon < \frac{x(c) - |w(c)|}{n\tau_n(c)} \quad\text{and}\quad \frac{x(c) + |w(c)|}{n\tau_n(c)} < \alpha(c) + \varepsilon$$

for every $c$ such that $w(c) \ne 0$. Because $w(c)/(n\tau_n(c))$ converges to 0 and $\alpha(c)$ is the limit of major $c$-offsets in $\{A_n\}_{n=1}^{\infty}$ for each such $c$, any major sequence in $\{A_n\}_{n=1}^{\infty}$ must have at most finitely many points outside of $D_n(\varepsilon)$. So by Corollary 1,

$$\lim_{n\to\infty} \frac{P(D_n(\varepsilon) \mid n\tau_n)}{P(A_n \mid n\tau_n)} = 1.$$

Let $\mu$ denote the magnitude of $\{A_n\}_{n=1}^{\infty}$. Because all major sequences in $\{A_n\}_{n=1}^{\infty}$ are eventually in $\{D_n(\varepsilon)\}_{n=1}^{\infty}$, we know that $\mu$ is also the magnitude of $\{D_n(\varepsilon)\}_{n=1}^{\infty}$. If $\{x_n - w\}_{n=1}^{\infty}$ is any sequence of points in $\{(A_n - w) \setminus (D_n(\varepsilon) - w)\}_{n=1}^{\infty}$ then

$$\limsup_{n\to\infty} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x_n(c) - w(c)}{n\tau_n(c)}\right) = \limsup_{n\to\infty} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x_n(c)}{n\tau_n(c)}\right) < \mu,$$

because $\psi$ is continuous and $n\tau_n(c) \to +\infty$ whenever $w(c) \ne 0$. So by Corollary 1,

$$\lim_{n\to\infty} \frac{P((A_n - w) \setminus (D_n(\varepsilon) - w) \mid n\tau_n)}{P(D_n(\varepsilon) \mid n\tau_n)} = 0.$$

Thus,

$$\lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} = \lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(D_n(\varepsilon) \mid n\tau_n)} = \lim_{n\to\infty} \frac{P(D_n(\varepsilon) - w \mid n\tau_n)}{P(D_n(\varepsilon) \mid n\tau_n)}.$$

Now consider any point $x - w$ in $D_n(\varepsilon) - w$ and the corresponding point $x$ in $D_n(\varepsilon)$. The ratio of the probabilities of these two points is

$$\frac{P(x - w \mid n\tau_n)}{P(x \mid n\tau_n)} = \prod_{c \in C} \frac{e^{-n\tau_n(c)}(n\tau_n(c))^{x(c)-w(c)}/(x(c)-w(c))!}{e^{-n\tau_n(c)}(n\tau_n(c))^{x(c)}/x(c)!} = \prod_{c \in C} \left(\frac{x(c)!/(x(c)-w(c))!}{(n\tau_n(c))^{w(c)}}\right).$$

If $w(c) > 0$ then $x(c)!/(x(c)-w(c))!$ is the product of $w(c)$ factors between $x(c)$ and $x(c)-w(c)$. Similarly, if $w(c) < 0$ then $(x(c)-w(c))!/x(c)!$ is the product of $-w(c)$ factors between $x(c)$ and $x(c)-w(c)$. Applying the definition of $D_n(\varepsilon)$, we then get

$$\prod_{c \in C} (\alpha(c) - \varepsilon)^{w(c)} \le \frac{P(x - w \mid n\tau_n)}{P(x \mid n\tau_n)} \le \prod_{c \in C} (\alpha(c) + \varepsilon)^{w(c)}.$$

Now there are two cases to consider. First, consider the case where, for all sufficiently large $n$, for every point $x$ in $D_n(\varepsilon)$, the point $x - w$ has all nonnegative components and so is in $Z(C)$. Then for all sufficiently large $n$, $P(D_n(\varepsilon) - w \mid n\tau_n)$ and $P(D_n(\varepsilon) \mid n\tau_n)$ are sums of point probabilities that can be put in a one-to-one correspondence where each corresponding pair has a ratio between $\prod_{c \in C}(\alpha(c)-\varepsilon)^{w(c)}$ and $\prod_{c \in C}(\alpha(c)+\varepsilon)^{w(c)}$. So we get

$$\prod_{c \in C} (\alpha(c)-\varepsilon)^{w(c)} \le \frac{P(D_n(\varepsilon) - w \mid n\tau_n)}{P(D_n(\varepsilon) \mid n\tau_n)} \le \prod_{c \in C} (\alpha(c)+\varepsilon)^{w(c)}.$$

Because these inequalities hold for all $\varepsilon$, we can conclude

$$\lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} = \prod_{c \in C} \alpha(c)^{w(c)},$$

which proves the theorem for this case.

Now consider the alternative case where there exist arbitrarily large $n$ such that, for some point $x$ in $D_n(\varepsilon)$, the point $x - w$ has some negative components and so is not in $Z(C)$. In this case, the argument in the preceding paragraph fails only because there may be some extra terms in $P(D_n(\varepsilon) \mid n\tau_n)$ that do not correspond to any terms in $P(D_n(\varepsilon) - w \mid n\tau_n)$. Thus, we can only claim

$$\frac{P(D_n(\varepsilon) - w \mid n\tau_n)}{P(D_n(\varepsilon) \mid n\tau_n)} \le \prod_{c \in C} (\alpha(c)+\varepsilon)^{w(c)}.$$

Because this condition holds for any positive $\varepsilon$, we get

$$0 \le \lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} \le \prod_{c \in C} \alpha(c)^{w(c)}.$$

But notice that this case can occur only if there is some action $c$ such that $w(c) > 0$ and $\alpha(c) = 0$, because otherwise the condition $x \in D_n(\varepsilon)$ would force $x(c)$ to be larger than $w(c)$ for all sufficiently large $n$. So in this case we can also conclude that

$$\lim_{n\to\infty} \frac{P(A_n - w \mid n\tau_n)}{P(A_n \mid n\tau_n)} = \prod_{c \in C} \alpha(c)^{w(c)} = 0.$$

Q.E.D
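The offset theorem can be illustrated with the same (arbitrary) binary-election numbers, again assuming independent Poisson vote totals. For the tie event, the major sequences have $x(c) \approx n\sqrt{\tau(a)\tau(b)}$, so the major $c$-offsets are $\alpha(a) = \sqrt{\tau(b)/\tau(a)}$ and $\alpha(b) = \sqrt{\tau(a)/\tau(b)}$. Taking $w = (1, -1)$, the translated event $A_n - w$ is the event that candidate $b$'s total exceeds $a$'s by exactly 2, and Theorem 2 then predicts that the probability ratio tends to $\alpha(a)\,\alpha(b)^{-1} = \tau(b)/\tau(a)$.

```python
import math

def log_pmf(k, lam):
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

def log_event_prob(log_terms):
    # log of a sum of point probabilities, given their logs (log-sum-exp)
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

tau_a, tau_b = 0.6, 0.4                 # arbitrary expected vote shares
for n in (100, 1000, 10000):
    la, lb = n * tau_a, n * tau_b
    log_tie = log_event_prob([log_pmf(k, la) + log_pmf(k, lb) for k in range(2 * n)])
    # Translated event A_n - w with w = (1, -1): profiles where b's total equals a's total plus 2
    log_off = log_event_prob([log_pmf(k, la) + log_pmf(k + 2, lb) for k in range(2 * n)])
    print(n, math.exp(log_off - log_tie))

print("predicted limit tau_b/tau_a:", tau_b / tau_a)    # about 0.667
```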

Theorem 3. Given $w_1, \ldots, w_J$ as above, let $\{y_n\}_{n=1}^{\infty}$ be a sequence in $Z(C)$. Suppose that, for each $n$, $y_n$ is a near-maximizer of

$$\sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x(c)}{n\tau_n(c)}\right)$$

over $x$ in $H^*(y_n, w_1, \ldots, w_J)$. Suppose also that, for each $c$ in $C$, both $\tau_n(c)$ and $y_n(c)/n$ converge to finite positive limits as $n \to \infty$. Let $M(y_n)$ be the $J \times J$ matrix such that, for each $i$ and each $j$ in $\{1, \ldots, J\}$, the $(i, j)$ component is

$$M_{ij}(y_n) = \sum_{c \in C} w_i(c)\,w_j(c)/y_n(c).$$

Then for any sequence $\{x_n\}_{n=1}^{\infty}$ such that $x_n \in H(y_n, w_1, \ldots, w_J)$ for all $n$, we have

$$\lim_{n\to\infty} \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} = \lim_{n\to\infty} \prod_{c \in C} e^{-(x_n(c) - y_n(c))^2/(2y_n(c))}. \qquad (3.3)$$

Furthermore,

$$\lim_{n\to\infty} \frac{P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)}{P(y_n \mid n\tau_n)(2\pi)^{J/2}(\det(M(y_n)))^{-0.5}} = 1. \qquad (3.4)$$

Proof. Let $\delta_n$ be the vector in $\mathbb{R}^{C}$ such that

$$y_n + \delta_n \in \arg\max_{x \in H^*(y_n, w_1, \ldots, w_J)} \sum_{c \in C} \tau_n(c)\,\psi\!\left(\frac{x(c)}{n\tau_n(c)}\right).$$

The assumption that $y_n$ is a near-maximizer implies that these $\delta_n$ vectors are uniformly bounded, because

$$|\delta_n(c)| \le \sum_{i \in \{1, \ldots, J\}} |w_i(c)|, \quad \forall c \in C.$$

The objective in the above optimization is strictly concave, goes to $-\infty$ if any $x(c)$ goes to $+\infty$, and has partial derivatives $\partial/\partial x(c)$ that go to $+\infty$ when any $x(c)$ approaches 0. Thus, the maximizer $y_n + \delta_n$ is unique and all of its components $y_n(c) + \delta_n(c)$ are strictly positive. But we can move within the hyperplane from this maximizer in the direction $w_i$, and so (using $\psi'(\theta) = -\log(\theta)$), the first-order conditions for optimality give us

$$\sum_{c \in C} w_i(c)\,\log\!\left(\frac{y_n(c) + \delta_n(c)}{n\tau_n(c)}\right) = 0, \quad \forall i \in \{1, \ldots, J\}.$$

Thus

$$\prod_{c \in C} \left(\frac{y_n(c) + \delta_n(c)}{n\tau_n(c)}\right)^{w_i(c)} = 1, \quad \forall i \in \{1, \ldots, J\}. \qquad (7.3)$$

Now consider any $x_n \in H(y_n, w_1, \ldots, w_J)$. Lemma 3 implies that there exist some vectors $\zeta_n$ and $\eta_n$ in $\mathbb{R}^{C}$ such that

$$0 \le \zeta_n(c) \le 1 \quad\text{and}\quad \eta_n(c) \text{ is between } y_n(c) + \zeta_n(c) \text{ and } x_n(c) + \zeta_n(c), \quad \forall c \in C,$$

and

$$\frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} = \prod_{c \in C} \left(\frac{n\tau_n(c)}{y_n(c) + \zeta_n(c)}\right)^{x_n(c) - y_n(c)} e^{-(x_n(c) - y_n(c))^2/(2\eta_n(c))}.$$

But $x_n - y_n$ is a linear combination of the vectors $\{w_1, \ldots, w_J\}$, and so Eq. (7.3) implies

$$\prod_{c \in C} \left(\frac{n\tau_n(c)}{y_n(c) + \delta_n(c)}\right)^{x_n(c) - y_n(c)} = 1.$$

Thus we get

$$\frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} = \prod_{c \in C} \left(\frac{1 + \delta_n(c)/y_n(c)}{1 + \zeta_n(c)/y_n(c)}\right)^{x_n(c) - y_n(c)} e^{-(x_n(c) - y_n(c))^2/(2\eta_n(c))}. \qquad (7.4)$$

Because $\delta_n(c)$ and $\zeta_n(c)$ are uniformly bounded and $y_n(c) \to \infty$ as $n \to \infty$, we get

$$\lim_{n\to\infty} \left(\frac{1 + \delta_n(c)/y_n(c)}{1 + \zeta_n(c)/y_n(c)}\right)^{x_n(c) - y_n(c)} = \lim_{n\to\infty} e^{(\delta_n(c) - \zeta_n(c))(x_n(c)/y_n(c) - 1)}.$$

In any subsequence where $x_n(c)/y_n(c)$ converges to 1, the ratio $\eta_n(c)/y_n(c)$ must also converge to 1, and so the two quantities

$$e^{(\delta_n(c) - \zeta_n(c))(x_n(c)/y_n(c) - 1)}\,e^{-(x_n(c) - y_n(c))^2/(2\eta_n(c))} \quad\text{and}\quad e^{(\delta_n(c) - \zeta_n(c))(x_n(c)/y_n(c) - 1)}\,e^{-(x_n(c) - y_n(c))^2/(2y_n(c))}$$

cannot converge to different limits. On the other hand, in any subsequence where $x_n(c)/y_n(c)$ does not converge to 1, the above two quantities must both converge to 0. Thus,

$$\lim_{n\to\infty} \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} = \lim_{n\to\infty} \prod_{c \in C} e^{-(x_n(c) - y_n(c))^2/(2y_n(c))},$$

which is Eq. (3.3) in the theorem.

This Eq. (3.3) immediately implies that $\{y_n\}_{n=1}^{\infty}$ is a major sequence in $\{H(y_n, w_1, \ldots, w_J)\}_{n=1}^{\infty}$, which has a finite magnitude (because, for each $c$, $y_n(c)/(n\tau_n(c))$ converges to a finite limit as $n \to \infty$).

Now let $\varepsilon$ be any small positive number. For each $n$, let $\Lambda_n(\varepsilon)$ be

$$\Lambda_n(\varepsilon) = \{ x_n \in H(y_n, w_1, \ldots, w_J) \mid (1-\varepsilon)y_n(c) < x_n(c) < (1+\varepsilon)y_n(c) - 1,\ \forall c \in C \}.$$

As argued in Section 3, Eq. (3.3) implies that every major sequence in $\{H(y_n, w_1, \ldots, w_J)\}_{n=1}^{\infty}$ must eventually be in $\Lambda_n(\varepsilon)$, for all sufficiently large $n$, and so (by Corollary 1),

$$\lim_{n\to\infty} \frac{P(\Lambda_n(\varepsilon) \mid n\tau_n)}{P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)} = 1.$$

Now let $x_n$ be any vector in $\Lambda_n(\varepsilon)$. Let $\Omega$ denote the maximum of all $|w_i(c)|$, over all $i$ and $c$, so that $\Omega$ is a uniform upper bound for all $|\delta_n(c)|$. With $x_n$ in $\Lambda_n(\varepsilon)$, we have

$$e^{-\varepsilon(\Omega+1)} \le \left(\frac{1 + \delta_n(c)/y_n(c)}{1 + \zeta_n(c)/y_n(c)}\right)^{x_n(c) - y_n(c)} \le e^{\varepsilon(\Omega+1)}.$$

Because $\eta_n(c)$ is between $y_n(c) + \zeta_n(c)$ and $x_n(c) + \zeta_n(c)$ and $\zeta_n(c)$ is between 0 and 1, we also have

$$e^{-(x_n(c) - y_n(c))^2/(2(1+\varepsilon)y_n(c))} \ge e^{-(x_n(c) - y_n(c))^2/(2\eta_n(c))} \ge e^{-(x_n(c) - y_n(c))^2/(2(1-\varepsilon)y_n(c))}.$$

So the probability of the vector $x_n$ in $\Lambda_n(\varepsilon)$ satisfies

$$\prod_{c \in C} e^{-\varepsilon(\Omega+1)}\,e^{-(x_n(c) - y_n(c))^2/(2(1+\varepsilon)y_n(c))} \le \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} \le \prod_{c \in C} e^{\varepsilon(\Omega+1)}\,e^{-(x_n(c) - y_n(c))^2/(2(1-\varepsilon)y_n(c))}.$$

There exist integers $\gamma_1, \ldots, \gamma_J$ such that

$$x_n = y_n + \sum_{i=1}^{J} \gamma_i w_i.$$

With these integers, the above inequalities can be rewritten

$$e^{-\varepsilon(\Omega+1)\#C}\,e^{-\sum_{c \in C}(\sum_{i=1}^{J}\gamma_i w_i(c))^2/(2(1+\varepsilon)y_n(c))} \le \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} \le e^{\varepsilon(\Omega+1)\#C}\,e^{-\sum_{c \in C}(\sum_{i=1}^{J}\gamma_i w_i(c))^2/(2(1-\varepsilon)y_n(c))}.$$

In the theorem we defined

$$M_{ij}(y_n) = \sum_{c \in C} w_i(c)\,w_j(c)/y_n(c).$$

With this notation, we get the equations

$$\sum_{c \in C} \left(\sum_{i=1}^{J} \gamma_i w_i(c)\right)^{2}\!\Big/ y_n(c) = \sum_{c \in C} \sum_{i=1}^{J} \sum_{j=1}^{J} \gamma_i w_i(c)\,\gamma_j w_j(c)/y_n(c) = \sum_{i=1}^{J} \sum_{j=1}^{J} \gamma_i \gamma_j M_{ij}(y_n).$$

So the above inequalities become

$$e^{-\varepsilon(\Omega+1)\#C}\,e^{-\sum_{i=1}^{J}\sum_{j=1}^{J}\gamma_i\gamma_j M_{ij}(y_n)/(2(1+\varepsilon))} \le \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} \le e^{\varepsilon(\Omega+1)\#C}\,e^{-\sum_{i=1}^{J}\sum_{j=1}^{J}\gamma_i\gamma_j M_{ij}(y_n)/(2(1-\varepsilon))}, \qquad (7.5)$$

when $x_n = y_n + \gamma_1 w_1 + \cdots + \gamma_J w_J$ is in $\Lambda_n(\varepsilon)$.

As $n \to \infty$, the numbers $y_n(c)$ go to infinity, and so the numbers $M_{ij}(y_n)$ go to zero. But $nM_{ij}(y_n)$ converges to a finite limit that can be different from zero as $n \to \infty$, because

$$nM_{ij}(y_n) = \sum_{c \in C} w_i(c)\,w_j(c)\,n/y_n(c)$$

and $y_n(c)/n$ is assumed to converge to a finite limit as $n \to \infty$. So let

$$\bar{M}_{ij} = \lim_{n\to\infty} nM_{ij}(y_n).$$

To make use of this matrix in the limit of inequalities (7.5), we need a change of variables. Let

$$h_i = \gamma_i/\sqrt{n}, \quad \forall i.$$

So when

$$x_n = y_n + \sum_{i=1}^{J} \sqrt{n}\,h_i w_i \in \Lambda_n(\varepsilon),$$

we get

$$e^{-\varepsilon(\Omega+1)\#C}\,e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j\, nM_{ij}(y_n)/(2(1+\varepsilon))} \le \frac{P(x_n \mid n\tau_n)}{P(y_n \mid n\tau_n)} \le e^{\varepsilon(\Omega+1)\#C}\,e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j\, nM_{ij}(y_n)/(2(1-\varepsilon))}. \qquad (7.6)$$

Let $K_n(\varepsilon)$ denote the set of all vectors $h = (h_1, \ldots, h_J)$ such that each $h_i$ is an integer multiple of $1/\sqrt{n}$, and

$$-\varepsilon y_n(c) < \sum_{i=1}^{J} \sqrt{n}\,h_i w_i(c) < \varepsilon y_n(c) - 1, \quad \forall c \in C.$$

Then summing (7.6) over all $x_n$ in $\Lambda_n(\varepsilon)$ and multiplying through by $n^{-J/2}$, we get

$$e^{-\varepsilon(\Omega+1)\#C} \sum_{h \in K_n(\varepsilon)} e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j\, nM_{ij}(y_n)/(2(1+\varepsilon))} \left(\frac{1}{\sqrt{n}}\right)^{J} \le \frac{P(\Lambda_n(\varepsilon) \mid n\tau_n)}{n^{J/2}\,P(y_n \mid n\tau_n)} \le e^{\varepsilon(\Omega+1)\#C} \sum_{h \in K_n(\varepsilon)} e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j\, nM_{ij}(y_n)/(2(1-\varepsilon))} \left(\frac{1}{\sqrt{n}}\right)^{J}.$$

For any fixed $\varepsilon$, $K_n(\varepsilon)$ grows as $n \to \infty$ to include all vectors $(h_1, \ldots, h_J)$ in which the components are (positive and negative) integer multiples of $1/\sqrt{n}$, and the above sums converge to integrals, giving us the inequalities

$$e^{-\varepsilon(\Omega+1)\#C} \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j \bar{M}_{ij}/(2(1+\varepsilon))}\,dh_1 \cdots dh_J \le \lim_{n\to\infty} \frac{P(\Lambda_n(\varepsilon) \mid n\tau_n)}{n^{J/2}\,P(y_n \mid n\tau_n)} \le e^{\varepsilon(\Omega+1)\#C} \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j \bar{M}_{ij}/(2(1-\varepsilon))}\,dh_1 \cdots dh_J.$$

We can replace $P(\Lambda_n(\varepsilon) \mid n\tau_n)$ by $P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)$ in the above limit, because the ratio of these two probabilities goes to 1 as $n \to \infty$. Then taking $\varepsilon$ to zero, we get

$$\lim_{n\to\infty} \frac{P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)}{n^{J/2}\,P(y_n \mid n\tau_n)} = \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} e^{-\sum_{i=1}^{J}\sum_{j=1}^{J} h_i h_j \bar{M}_{ij}/2}\,dh_1 \cdots dh_J.$$

Multiplying the above integrand by $\det(\bar{M})^{0.5}/(2\pi)^{J/2}$ yields a multivariate normal probability density with mean vector $0$ and covariance matrix $\bar{M}^{-1}$, and the integral of such a density is 1. Thus,

$$\lim_{n\to\infty} \frac{P(H(y_n, w_1, \ldots, w_J) \mid n\tau_n)}{(2\pi)^{J/2}\det(\bar{M})^{-0.5}\,n^{J/2}\,P(y_n \mid n\tau_n)} = 1.$$

To complete the proof of Eq. (3.4) in the theorem, notice that

$$\lim_{n\to\infty} \frac{\det(\bar{M})^{-0.5}\,n^{J/2}}{\det(M(y_n))^{-0.5}} = 1,$$

because the $J \times J$ matrix $\bar{M}$ is the limit of $nM(y_n)$. Q.E.D
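As a final numerical illustration of Eq. (3.4) (again only a sketch with arbitrary numbers, assuming independent Poisson vote totals, and assuming that $H(y_n, w_1)$ here can be identified with the full set of tied outcomes): take $J = 1$, $w_1 = (1, -1)$, and the tie hyperplane in the binary-election example above, with near-maximizer $y_n = (k^*, k^*)$ where $k^* = \mathrm{round}(n\sqrt{\tau(a)\tau(b)})$; then $M_{11}(y_n) = 2/k^*$, and the theorem predicts $P(\mathrm{tie} \mid n\tau)/P(y_n \mid n\tau) \to (2\pi)^{1/2} M_{11}(y_n)^{-1/2} = \sqrt{\pi k^*}$.

```python
import math

def log_pmf(k, lam):
    return -lam + k * math.log(lam) - math.lgamma(k + 1)

tau_a, tau_b = 0.6, 0.4                                   # arbitrary expected vote shares
for n in (100, 1000, 10000):
    la, lb = n * tau_a, n * tau_b
    k_star = round(n * math.sqrt(tau_a * tau_b))          # near-maximizer y_n = (k*, k*) on the tie hyperplane
    log_terms = [log_pmf(k, la) + log_pmf(k, lb) for k in range(2 * n)]
    m = max(log_terms)
    log_tie = m + math.log(sum(math.exp(t - m) for t in log_terms))
    lhs = math.exp(log_tie - (log_pmf(k_star, la) + log_pmf(k_star, lb)))   # P(H|n tau)/P(y_n|n tau)
    rhs = math.sqrt(2.0 * math.pi / (2.0 / k_star))       # (2 pi)^{J/2} det(M(y_n))^{-1/2} with M_11 = 2/k*
    print(n, lhs, rhs, lhs / rhs)                         # the last column approaches 1
```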

ACKNOWLEDGMENTS

Support from the National Science Foundation Grant SES-9308139 and from the Dispute Resolution Research Center is gratefully acknowledged.

REFERENCES

1. M. Abramowitz and I. Stegun, ``Handbook of Mathematical Functions,'' Dover, New York, 1965.
2. P. Billingsley, ``Convergence of Probability Measures,'' Wiley, New York, 1968.
3. F. A. Haight, ``Handbook of the Poisson Distribution,'' Wiley, New York, 1967.
4. N. L. Johnson and S. Kotz, ``Discrete Distributions,'' Wiley, New York, 1969.
5. J. Ledyard, The pure theory of large two-candidate elections, Public Choice 44 (1984), 7-41.
6. I. Milchtaich, ``Random-Player Games,'' Northwestern University Math Center, Discussion Paper No. 1178, 1997.
7. P. Milgrom and R. Weber, Distributional strategies for games with incomplete information, Math. Oper. Res. 10 (1985), 619-632.
8. R. Myerson, Population uncertainty and Poisson games, Int. J. Game Theory, in press.
9. R. Myerson, Extended Poisson games and the Condorcet jury theorem, Games Econ. Behav., in press.
10. A. Shwartz and A. Weiss, ``Large Deviations for Performance Analysis,'' Chapman & Hall, London/New York, 1995.
11. D. W. Stroock, ``An Introduction to the Theory of Large Deviations,'' Springer-Verlag, New York, 1984.
