
Subgame Perfect Equilibria in Infinite Stage Games: an

alternative existence proof

Alejandro M. Manelli∗

Department of Economics

Arizona State University

Tempe, Az 85287-3806

Version: January 2003

Abstract

Any stage-game with infinite choice sets can be approximated by finite games obtained

as increasingly finer discretizations of the infinite game. The subgame perfect equilibrium

outcomes of the finite games converge to a limit distribution. We prove that (i) if the limit

distribution is feasible in the limit game, then it is also a subgame perfect equilibrium

outcome of the limit game; and (ii) if the limit distribution prescribes sufficiently diffused

behavior for first-stage players, then it is a subgame perfect equilibrium outcome of the

limit game. These results are potentially useful in determining the existence of subgame

perfect equilibria in applications.

∗I am grateful to Kim Border for helpful discussions and for bringing to my attention the Arsenin-Kunugui

Selection Theorem. I also benefited from conversations with Andreas Blume, Hector Chade, Arthur Robson,

Edward Schlee, and Jeroen Swinkels. Financial support from the National Science Foundation under Grant

SBR-9810840 is gratefully acknowledged.


1 Introduction

Games where players have a continuum of choices (henceforth infinite games) are commonly

used to model economic phenomena. Despite their numerous applications, infinite games

with incomplete or imperfect information, even if very well behaved, may have no equilib-

rium. Indeed van Damme (1987) constructs a signaling game with no sequential equilibrium,

and Harris, Reny, and Robson (1995) construct a two-stage game with no subgame per-

fect equilibrium. (We provide a brief summary of the latter example toward the end of the

Introduction for the benefit of the reader unfamiliar with the subject.)

General existence results that don’t rely on particular payoff functions, partial effects, or

distributional assumptions are few. Still certain similarities have been identified in seemingly

different infinite games: correlation in various forms “restores” the existence of equilibrium.

Cotter (1991) proves the existence of a correlated equilibrium in simultaneous-move games

of incomplete information. (Whether those games admit a Bayesian Nash equilibrium is

an open question.1) In addition to their non-existence example Harris, Reny, and Robson

(1995) prove the existence of a subgame perfect equilibrium in stage-games that have a public randomization device in each stage. In previous work, we have shown that infinite signaling

games have a sequential equilibrium when cheap talk is allowed.

Footnote 1: Milgrom and Weber (1985), Balder (1988) and more recently Al-Najjar and Solan (1999) prove existence of Bayesian Nash equilibrium using restrictions on the distribution of types. Khan and Sun (1995) extend the purification theorems in Radner and Rosenthal (1982) to countable action spaces.

In this essay we provide an alternative proof of the existence of a subgame perfect equi-

librium in stage games with public randomization devices. Our approach may prove useful

in analyzing other classes of infinite games.

Throughout the paper, we consider games with two stages. In each stage finitely many

players simultaneously and independently select their actions (from a compact set) after

observing choices made in previous stages. Payoffs depend continuously on all players’ choices.

Harris, Reny, and Robson’s non-existence example is within the class of games that we study.

The approach we follow has three main steps.

1. Fix an infinite game and consider a sequence of finite games constructed by taking

increasingly finer discretizations of the infinite game. Each finite game has a subgame

perfect outcome, the outcome of a subgame perfect equilibrium. Outcomes are distri-

butions and converge in a subsequence to a limit probability distribution.

2. We prove that the limit distribution is a subgame perfect outcome of the infinite game

if and only if it is feasible, i.e., if there are some strategies, though not necessarily equilibrium strategies, of the infinite game that generate the limit distribution (Corollary 1 of Theorem 1).

3. We identify conditions on or variations of the infinite game under which either the limit

distribution is always feasible (and therefore a subgame perfect equilibrium exists), or

a variation of the limit distribution is a subgame perfect equilibrium (Theorem 2).

The first step is a straightforward observation that holds generally.2

Footnote 2: Provided action sets are compact metric spaces, any sequence of outcome distributions has a convergent subsequence with respect to the topology of weak* convergence.

The second step contains a result that holds in the stage games we consider as well as in

signaling games (Manelli (1996), Theorem 1). We believe that a similar result—that limit

distributions when feasible are sequentially rational outcomes of the limit game—may hold

for other types of infinite games. If the limit distribution is infeasible, the reason is typically

that some sort of correlation has been added to it through the limiting process. Note that

in Harris, Reny, and Robson’s non-existence example the limit distribution is infeasible;

it requires that second-stage players correlate their actions, and since players must move

independently, no strategies of the infinite game can generate the necessary correlation. In

van Damme’s non-existence example the limit distribution requires that the receiver in the

signaling game correlate her actions with the sender’s private information; since the sender’s

signals are uninformative, it is again impossible to attain the necessary correlation. Still a

similar phenomenon occurs in some examples due to Jackson, Simon, Swinkels, and Zame

(2001) of non-existence of Bayesian Nash equilibrium in auctions. (In their games, however,

payoff functions are not continuous on actions.)

The third step identifies conditions to transform the second step into an existence result.

One may envision three types of conditions. First, one may impose restrictions so that the

limit distribution does not exhibit correlation. This is hard to accomplish; spurious correlation is easily generated, especially when the approximating games have multiple equilibria.

(We expand on this point in Section 3 when presenting our formal results.)

Second, one may add “features” to the infinite game (or identify a class of games) such that

strategies can reproduce any potential correlation of the limit distribution. For instance, in

signaling games the ability of the sender to transmit cheap-talk messages is one such feature.

Public randomization devices in stage games is another example. A similar “addition or

extension” restores existence in the auctions studied by Jackson, Simon, Swinkels, and Zame

(2001).

Finally, it is sometimes possible to alter the limit distribution to identify an equilibrium

distribution. This is accomplished by Theorem 2, the main technical contribution of the essay, for the games we study. As noted earlier, the limit distribution is infeasible whenever it

requires second-stage correlation, i.e., a “mixture” of second-stage Nash equilibria. Theorem 2

provides, to some extent, a “purification” of this mixture: if according to the limit distribution

first-stage players choose non-atomic strategies, then even if the limit distribution is infeasible

there is a subgame perfect equilibrium of the limit game. (Of course the identified equilibrium

outcome may be different from the limit distribution.)

The combination of Theorems 1, Corollary 1 and Theorem 2 may be useful in determining

existence in applications. For instance, it follows from those results that if the limit distri-

bution prescribes non-atomic behavior for first-stage players, then the infinite game has a

subgame perfect equilibrium (Corollary 2). Similarly consider extensions of the game that

incorporate cheap talk—the ability of first-stage players to send payoff-irrelevant messages

to second-stage players. Equilibrium outcomes of approximating games can be made into

equilibrium outcomes of the cheap-talk extension of those games by letting players send com-

pletely uninformative messages—say all cheap-talk messages are sent with equal probability.

With a sufficiently rich cheap-talk space, the limit outcome will prescribe non-atomic be-

havior for first-stage players. Hence, infinite games with sufficiently rich cheap talk have a

subgame perfect equilibrium (Corollary 3). The existence of subgame perfect equilibria in

stage games with cheap talk was first established by Harris, Reny, and Robson (1995) (see

their discussion following their Theorem 45). Corollary 2 can be derived directly from Harris,

Reny, and Robson’s results (Manelli (2001)).

Our proofs differ from those in Harris, Reny, and Robson (1995). These authors de-

rive their existence result from a general upper hemi-continuity property. The subgame

perfect outcomes of games with correlation devices are distributions on spaces that include

payoff-irrelevant variables, the realization of the public randomization devices. Consider the

projection of those distributions (i.e., their marginals) on the payoff-relevant variables. Har-

ris, Reny, and Robson prove the upper hemi-continuity of the correspondence that maps

games with public randomization devices into the projection of subgame perfect outcomes on

payoff-relevant variables. Their proof uses an argument reminiscent of backward induction.

Proceeding from the last to the first stage, a backward step identifies for each history the

combinations of actions and continuation-payoffs that are optimal in that stage. Proceeding

from the first to the last stage, a forward induction step shows that there are equilibrium

strategies that realize them.3

Footnote 3: The interested reader should consult Harris, Reny, and Robson's paper for precise statements, important additional results, and an accurate description of their method of proof.

The proof of Theorem 1 and Corollary 1 shows that certain equilibrium properties of the approximating outcomes are inherited by the limit distribution. Essentially, given any history,

the expected payoff of the continuation outcome converges to the expected payoff derived from

the limit distribution. Equilibrium strategies are then constructed, when possible (i.e. when

the limit distribution is feasible), from the marginal and conditional probabilities implicit in

the limit distribution. This procedure identifies subgame perfect equilibrium strategies on the

equilibrium path. The procedure is completed by constructing strategies off the equilibrium

path.

The proof of Theorem 2 has a geometric quality. The theorem is, technically, a bang-

bang result; it permits the “purification” of the mixing implicit in the limit distribution. The

limit distribution implies certain behavior on the part of first-stage players. The implicit

continuation, however, may involve correlation. The idea behind Theorem 2 is to identify

an alternative continuation that does not involve correlation but that still provides the cor-

rect incentives for first-stage players. The set of possible continuations that provide correct

incentives for first-stage players is a convex, compact set of functions, and must therefore

have extreme points. It is then shown that extreme points of that set do not involve correla-

tion. Therefore, there exists a continuation that provides the correct incentives for first-stage

players and does not prescribe correlation.

We conclude the introduction by listing some salient features of Harris, Reny, and Rob-

son’s non-existence example.

First, there are two stages and two players in each stage. Only one player, say player 1

in the first stage, has a continuum of choices. If player 1’s choices are discretized, the game

has a subgame perfect equilibrium. Suppose that in the infinite game player 1’s choice set

is the interval [−1, 1]. Imagine finite games indexed by the integer n where player 1’s choice

set is replaced by the finite set {−1, . . . , −2/n, −1/n, 0, 1/n, 2/n, . . . , 1} but where payoff

functions and other players’ choice sets remain unchanged.

Second, the example is constructed so that in the subgame perfect equilibrium of the

nth game player 1 randomizes between −1/n and 1/n with equal probability. Second-stage

players respond by choosing “Up” when they observe 1/n and “Down” when they observe

−1/n. Thus, second-stage players use the randomization in the first stage to coordinate their

actions between two Nash equilibria of the second-stage game, (Up,Up) and (Down, Down).

Third, as n increases, the nth game and the infinite game become closer. The subgame-

perfect-equilibrium outcomes of the finite games specify, in the limit, that player 1 choose

zero with certainty and that second stage players continue to coordinate their actions, playing

(Up,Up) and (Down,Down) with equal probability. This, however, is impossible. Second-

stage players choose their actions independently and player 1’s choice, always zero, does not


help them correlate second-stage actions. Were each player independently to select Up and

Down with equal probability, the outcome of the second stage would include (Up,Down) and

(Down, Up).

Finally, payoffs in the example are specified so that no subgame perfect equilibrium exists:

in order to induce second-stage correlation—which player 1 prefers to no correlation—player

1 is willing to mix between some positive and some negative action. Using non-zero actions,

however, is costly to player 1 and such cost increases as the selected actions move away from

zero. Player 1 would therefore prefer to randomize between two actions, a positive and a

negative one, as close to zero as possible. Since player 1’s choice set is an interval, there is

no strictly positive or strictly negative action closest to zero. There is no “least expensive”

way to ensure second-stage correlation.
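To make the role of the missing correlation concrete, the following sketch (not part of the paper; it only encodes the behavior described above and omits payoffs, which the text does not specify) tabulates the equilibrium outcome of the nth discretized game and checks numerically that the limiting second-stage behavior, a half-half mixture of (Up, Up) and (Down, Down), cannot be written as a product of two independent mixtures.

```python
from fractions import Fraction

def spe_outcome_n(n):
    """Outcome of the nth finite game as described in the text: player 1 mixes 50/50
    between -1/n and 1/n; both second-stage players play Up after 1/n and Down after
    -1/n.  Represented as a dict (a1, (b1, b2)) -> probability."""
    half = Fraction(1, 2)
    return {(Fraction(1, n), ("Up", "Up")): half,
            (Fraction(-1, n), ("Down", "Down")): half}

# As n grows, the first-stage marginal concentrates on 0 while the joint second-stage
# behavior stays perfectly correlated: (Up, Up) and (Down, Down) with probability 1/2 each.
limit_joint = {("Up", "Up"): 0.5, ("Down", "Down"): 0.5}

# No pair of independent mixtures (p, q) over {Up, Down} reproduces this joint law:
# independence forces P(Up, Down) = p(1 - q) = 0 and P(Down, Up) = (1 - p)q = 0,
# hence p, q in {0, 1}, contradicting P(Up, Up) = P(Down, Down) = 1/2.
def best_product_error(joint, steps=200):
    return min(
        max(abs(joint.get((x, y), 0.0) -
                (p if x == "Up" else 1 - p) * (q if y == "Up" else 1 - q))
            for x in ("Up", "Down") for y in ("Up", "Down"))
        for p in (i / steps for i in range(steps + 1))
        for q in (j / steps for j in range(steps + 1)))

print(spe_outcome_n(10))
print(best_product_error(limit_joint))  # bounded away from zero (about 0.25)
```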

2 Definitions and Notation

We consider the following game. In the first stage, each player i in a finite set I selects in

isolation an action ai from a set Ai. After observing the actions selected in the first stage,

each player j in a finite set J selects also in isolation an action bj from a set Bj .4 Action

spaces are compact metric spaces. All payoff functions {U_ℓ}_{ℓ∈I∪J} are continuous, real valued,

and have as arguments the actions selected by all players. The game described is summarized

by Γ = [I, J, {A_i}_{i∈I}, {B_j}_{j∈J}, {U_ℓ}_{ℓ∈I∪J}].

Footnote 4: For notational simplicity, a player that moves in both stages is split into two players, one per stage, each with the payoff function of the original player. Any subgame perfect equilibrium of the game so obtained will also be a subgame perfect equilibrium of the original game. Thus, we assume that no player moves in both stages.

For convenience, A denotes the Cartesian product of the action spaces {A_i}_{i∈I}, a is a typical element of A, and a_{−i} is the element obtained by removing the ith component from a. The same notational convention applies to other objects; for instance, B = ∏_{j∈J} B_j, B_{−ℓ} = ∏_{j∈J\{ℓ}} B_j, b ∈ B, and b_{−j} = (b_1, . . . , b_{j−1}, b_{j+1}, . . . ).

Given any metric space X, M(X) denotes the set of probability distributions on X. Prob-

ability measures are defined on the corresponding Borel σ-fields. Unless otherwise specified,

all sets and functions mentioned are measurable.

For any first-stage player i ∈ I a typical (mixed) strategy is a distribution over i’s available

actions, αi ∈ M(Ai). For any second-stage player j ∈ J a typical (behavior) strategy is a

measurable function that assigns to each first-stage realization a ∈ A a distribution over j’s

actions, βj : A −→M(Bj).

Let α be the collection (α_1, α_2, . . . , α_I) of first-stage strategies and β the collection (β_1, β_2, . . . , β_J) of second-stage strategies. Then, β(a) = (β_1(a), β_2(a), . . . , β_J(a)) indicates the choices made by second-stage players after observing the realization a. Sometimes, especially when integrating, we will write β_j(db_j|a) and β(db|a) = β_1(db_1|a) × β_2(db_2|a) × . . . × β_J(db_J|a).

Player i’s expected payoff when players move according to (α, β) is E[Ui|α, β]. Similarly,

E[Ui|ai, α−i, β] is player i’s expected payoff when i chooses ai and the other players move

according to (α−i, β). The same notational convention will be used to represent expected

payoffs given other combinations of play.

A profile (α, β) is a subgame perfect equilibrium of Γ if

∀i ∈ I, E[U_i|α, β] ≥ E[U_i|a_i, α_{−i}, β], ∀a_i ∈ A_i, and (1)

∀a ∈ A, ∀j ∈ J, E[U_j|a, β(a)] ≥ E[U_j|a, b_j, β_{−j}(a)], ∀b_j ∈ B_j. (2)

Condition (1) states that every first-stage player selects a best response. Condition (2)

requires that for any realization a in the first stage, β(a) be a Nash equilibrium of the

ensuing second-stage game. For any a ∈ A, let NE(a) be the set of Nash equilibria of the

second-stage game defined after a is realized. Thus, β(a) ∈ NE(a) if and only if (2) holds for

β(a).
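For a finite two-stage game, conditions (1) and (2) can be verified by direct enumeration. The sketch below is a minimal illustration, not code from the paper: the array layout U[a1, a2, b1, b2], the two-players-per-stage restriction, and the tolerance are hypothetical choices made for the example.

```python
import itertools
import numpy as np

def is_subgame_perfect(U_first, U_second, alpha, beta, tol=1e-9):
    """Check (1) and (2) for a finite game with two players per stage.
    U_first[i], U_second[j]: payoff arrays indexed as U[a1, a2, b1, b2].
    alpha = (alpha_1, alpha_2): first-stage mixtures (numpy vectors).
    beta: dict mapping (a1, a2) to a pair of second-stage mixtures."""
    n1, n2 = len(alpha[0]), len(alpha[1])
    m1, m2 = U_second[0].shape[2], U_second[0].shape[3]

    # Condition (2): after every first-stage realization a, beta(a) is a Nash
    # equilibrium of the second-stage game that follows a.
    for a1, a2 in itertools.product(range(n1), range(n2)):
        e1, e2 = beta[(a1, a2)]
        v1 = e1 @ U_second[0][a1, a2] @ e2
        v2 = e1 @ U_second[1][a1, a2] @ e2
        if any(U_second[0][a1, a2][b1] @ e2 > v1 + tol for b1 in range(m1)):
            return False
        if any(e1 @ U_second[1][a1, a2][:, b2] > v2 + tol for b2 in range(m2)):
            return False

    # Condition (1): each first-stage mixture alpha_i is a best response given
    # alpha_{-i} and the continuation beta.
    def value(i, a_i):  # E[U_i | a_i, alpha_{-i}, beta]
        total = 0.0
        for a_other in range(n2 if i == 0 else n1):
            a = (a_i, a_other) if i == 0 else (a_other, a_i)
            e1, e2 = beta[a]
            total += alpha[1 - i][a_other] * (e1 @ U_first[i][a[0], a[1]] @ e2)
        return total

    for i in range(2):
        eq_value = sum(alpha[i][a_i] * value(i, a_i) for a_i in range(len(alpha[i])))
        if any(value(i, a_i) > eq_value + tol for a_i in range(len(alpha[i]))):
            return False
    return True

# Trivial sanity check: a game in which every player has a single action.
U = np.zeros((1, 1, 1, 1))
one = np.array([1.0])
print(is_subgame_perfect([U, U], [U, U], [one, one], {(0, 0): (one, one)}))  # True
```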

The outcomes of approximating finite games will be used to establish equilibrium prop-

erties of limit infinite games. Typically, given a strategy profile β the outcome of a game is

defined as the distribution on terminal nodes (i.e., an element of M(A × B)) generated by

the given profile. It is useful to represent the outcome as a distribution on a slightly different

space: for any history a ∈ A (i.e., any realization of the first stage game), the strategies β

determine a continuation path, i.e., an element η of ∏_{j∈J} M(B_j). By considering outcomes as elements of M[A × ∏_{j∈J} M(B_j)], any element of the support specifies a first-stage realization and its continuation path. Formally, define the outcome of playing the profile (α, β) as

λ = (∏_{i∈I} α_i) ∘ f^{−1}, where f(a) = (a, β(a)),

and ∏_{i∈I} α_i is the product distribution obtained from first-stage strategies.5

Finally, η denotes a typical element of ∏_{j∈J} M(B_j); λ_A and λ_{A_i} denote the marginal distributions of λ on A and A_i respectively.

Footnote 5: The symbol ∏ is used to indicate products of measures as well as Cartesian products. Abusing notation, we will sometimes write β(a) = (β_1(a), β_2(a), . . . , β_J(a)) for ∏_{j∈J} β_j(a) and vice versa.
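For finite games the outcome is a finite pushforward and can be tabulated directly. A minimal sketch, assuming a hypothetical dictionary representation of strategies (not the paper's notation):

```python
import itertools

def outcome(alpha, beta):
    """lambda = (prod_i alpha_i) o f^{-1}, where f(a) = (a, beta(a)).
    alpha: list of dicts a_i -> probability; beta: dict mapping a to a tuple of
    (hashable) second-stage mixtures, one per second-stage player."""
    lam = {}
    for a in itertools.product(*[list(a_i) for a_i in alpha]):
        p = 1.0
        for i, a_i in enumerate(a):
            p *= alpha[i][a_i]
        if p > 0.0:
            point = (a, beta[a])  # support point: realization and its continuation path
            lam[point] = lam.get(point, 0.0) + p
    return lam

# Hypothetical example: each continuation is a pair of mixtures over {Up, Down},
# written as probability tuples.
alpha = [{"L": 0.5, "R": 0.5}, {"l": 1.0, "r": 0.0}]
beta = {(a1, a2): ((1.0, 0.0), (0.0, 1.0)) for a1 in "LR" for a2 in "lr"}
lam = outcome(alpha, beta)

# The marginal of lambda on A_1 recovers alpha_1, as in the text (alpha_i = lambda_{A_i}).
marginal_1 = {}
for (a, eta), p in lam.items():
    marginal_1[a[0]] = marginal_1.get(a[0], 0.0) + p
print(marginal_1)  # {'L': 0.5, 'R': 0.5}
```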


3 Results

Fix a game Γ = [I, J, {A_i}_{i∈I}, {B_j}_{j∈J}, {U_ℓ}_{ℓ∈I∪J}] for the remainder of the paper. Finite games, i.e., games with finite action spaces, that approximate Γ can be constructed by discretizing Γ. We say that a sequence of finite games Γ^n = [I, J, {A^n_i}_{i∈I}, {B^n_j}_{j∈J}, {U_ℓ}_{ℓ∈I∪J}], n = 1, 2, . . . , converges to the limit game Γ if for all players i ∈ I, j ∈ J,

(i) A^n_i ⊂ A_i and B^n_j ⊂ B_j ∀n, and

(ii) lim inf_{n→∞} A^n_i = A_i and lim inf_{n→∞} B^n_j = B_j.

The last condition requires that feasible actions in the limit game be approximated by feasible

actions of the approximating games. Thus, finite approximating games are simply increasingly

finer discretizations of the infinite game.
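For example (an assumption for illustration, not the paper's construction), if A_i = [0, 1], the uniform grids A^n_i = {k/n : k = 0, . . . , n} satisfy both conditions: they are subsets of A_i, and every action is approximated by grid points, so lim inf_n A^n_i = A_i. A short numerical check of the approximation property:

```python
# Grids A_i^n = {k/n : k = 0, ..., n} inside A_i = [0, 1] (a hypothetical choice of
# action set): dist(a, A_i^n) <= 1/(2n), so every limit action is approximated.
def dist_to_grid(a, n):
    return min(abs(a - k / n) for k in range(n + 1))

for a in (0.0, 1 / 3, 0.7071, 1.0):
    print(a, [round(dist_to_grid(a, n), 5) for n in (2, 10, 100, 1000)])
```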

Since finite games always have a subgame perfect equilibrium, any sequence Γn of fi-

nite approximating games yields a sequence of equilibrium outcomes λn. These outcomes

are distributions (on a compact space) and therefore converge in a subsequence to a limit

distribution λ.

Our first results are Theorem 1 and Corollary 1. Corollary 1 states that if there are

strategies of the limit game that implement the limit distribution λ, then there are subgame

perfect equilibrium strategies that also implement λ. Theorem 1 offers somewhat weaker

conditions: if there are strategies in Γ that yield to first-stage players the same expected

payoffs they would obtain under λ, and the second-stage strategies are sequentially rational

for second-stage players, then a subgame perfect equilibrium exists. A more informative

commentary follows the formal presentation of the theorem.

Theorem 1 Let Γ^n, n = 1, 2, . . . , be a sequence of finite games converging to a limit game Γ, and let λ^n be a subgame perfect equilibrium outcome of Γ^n. Through a subsequence, λ^n converges to a limit distribution λ. Let γ(·|a) be a version of the regular conditional distribution of η ∈ ∏_{j∈J} M(B_j) given a derived from λ. If there are any second-stage strategies β′ = (β′_1, β′_2, . . . , β′_J) such that

(i) ∀i ∈ I, E[U_i|a_i, λ_{A_{−i}}, β′] = E[U_i|a_i, λ_{A_{−i}}, γ] λ_{A_i}–a.e., and

(ii) β′(a) ∈ NE(a) λ_A–a.e.,

then there exists a subgame perfect equilibrium (α, β) where α_i = λ_{A_i} for all i ∈ I, and β_j(a) = β′_j(a) λ_A–a.e. for all j ∈ J.


The limit distribution λ implicitly specifies certain behavior on the limit game: first-stage

players must move according to their corresponding marginal distribution λAi ; second-stage

players must move according to the conditional distribution γ(·|a) derived from λ. Thus,

E[Ui|ai, λA−i , γ] represents player i’s expected payoff of choosing the action ai when all other

players move according to λ. Condition (i) states that there are strategies that give first-

stage players the same payoffs they would obtain under λ. Condition (ii) simply states that

second-stage players choose best responses almost always.

The proof of Theorem 1 (and Corollary 1) constructs equilibrium strategies of the limit

game (on the equilibrium path) from the marginal and conditional probabilities of the limit

distribution λ. The limit distribution λ, however, conveys little or no information about

behavior off the equilibrium path. As previously noted, any element in the support of an

outcome λn specifies a first-stage realization an ∈ An and a continuation ηn ∈∏j∈JM(Bj).

Any element in the support of the limit distribution can be approximated by elements in

the support of λn. This property and the continuity of expected payoffs with respect to

outcomes are used repeatedly in the proof to show that the equilibrium inequalities of the

approximating equilibria hold in the limit. Although the details are different, we used the

same ideas elsewhere to prove the analogue of Corollary 1 for signaling games with infinite

type and action spaces. Therefore, we relegate the proof to the appendix. We believe that a

similar relationship between equilibrium outcomes of finite approximating games and those

of infinite games might hold in other classes of infinite games. Our proof of Theorem 1 may

help elucidate that relationship.

Proof of Theorem 1: provided in the Appendix.

It follows as a corollary to Theorem 1 that the limit distribution is a subgame perfect

equilibrium outcome of the limit game if and only if it is feasible. Thus, the non-existence

example briefly discussed in the introduction captures the essence of the problem.

Corollary 1 The limit distribution λ is a subgame perfect equilibrium outcome of the limit

game Γ if and only if there are strategies in Γ that generate λ.

Proof: If the strategy profile (α, β′) generates λ, then α_i = λ_{A_i} for all i ∈ I. Second-stage strategies β′ are equivalent to γ: note that γ : A −→ M[∏_{j∈J} M(B_j)] and β′ : A −→ ∏_{j∈J} M(B_j). Both (α, γ) and (α, β′) generate λ. Therefore, γ(a) = δ_{β′(a)} α–a.e. (where δ_η ∈ M[∏_{j∈J} M(B_j)] represents the degenerate distribution at η ∈ ∏_{j∈J} M(B_j)). Hence, (i) is satisfied. The outcomes λ^n belong to M[graph(NE^n)], where NE^n(a) is the set of second-stage Nash equilibria in Γ^n after the realization a ∈ A. Since lim sup_n graph(NE^n) ⊂ graph(NE) (Lemma 3 in the Appendix), λ belongs to M[graph(NE)]. Hence, (ii) is also satisfied.

Q.E.D.

Corollary 1 implies, albeit for a much smaller class of games, Harris, Reny, and Robson’s

existence result for games with public randomization devices: adding such a device to Γ allows second-stage players to coordinate their actions, thus making the limit distribution λ

feasible. When λ is feasible (without altering the game Γ), then randomization devices are

not necessary to obtain a subgame perfect equilibrium.

Corollary 1 does not follow immediately from Harris, Reny, and Robson’s upper hemi-

continuity theorem. To see this, consider a sequence of finite games that converge to Γ, and

the corresponding subgame perfect outcomes λn that converge to a feasible limit distribution

λ. Extend Γ with a randomization device whose realizations z come from some set Z. The

upper hemi-continuity result in Harris, Reny, and Robson (1995) identifies a subgame perfect

equilibrium such that (a) each first-stage player i moves according to λAi ; (b) every second-

stage player j uses a strategy β∗(a, z); and (c) the outcome of the profile (λA, β∗) is a

distribution over A× Z ×∏j∈JM(Bj) whose marginal distribution over the payoff-relevant

variables A ×∏j∈JM(Bj) is λ. Subgame perfection implies that β∗(a, z) is a second-stage

Nash equilibrium for any (a, z). To obtain Corollary 1, one must find second-stage strategies

β′(a) that depend only on first-stage realizations, that generate λ, and that constitute a

second-stage Nash equilibrium, i.e., β′(a) ∈ NE(a) for all a. Any such strategies β′ must

coincide (almost everywhere) with the strategies obtained by integrating β∗(a, z) over z (by

(c) above). It is not immediate that the strategies β′(a), obtained by integrating out z, are

a second-stage Nash equilibrium.

Even if the limit distribution λ is not feasible, Γ may have a subgame perfect equilibrium.

This is our next result. Theorem 2, the main technical contribution in the essay, establishes

that whenever the limit distribution exhibits diffused behavior there are strategies in Γ that

satisfy the conditions of Theorem 1, and thus an equilibrium exists.

Given a correspondence ϕ, a measure λ, and a set of continuous functions, Theorem 2

establishes the existence of a measurable selection β′ = (β′1, . . . , β′J) that satisfies certain

properties.

Theorem 2 Let {A_i}_{i∈I} and {B_j}_{j∈J} be finite families of compact metric spaces, let {U_i}_{i∈I} be continuous real-valued functions defined on A × B, let ϕ : A −→ ∏_{j∈J} M(B_j) be a correspondence with closed graph, and let λ be any probability measure on graph(ϕ(·)) with regular conditional distribution of η given a denoted by γ(·|a). If at least two marginal distributions λ_{A_{i′}} and λ_{A_{i″}}, i′, i″ ∈ I, i′ ≠ i″, are non-atomic, then there are measurable functions β′_j : A −→ M(B_j), ∀j ∈ J, such that

(i) ∀i ∈ I, E[U_i|a_i, λ_{A_{−i}}, β′] = E[U_i|a_i, λ_{A_{−i}}, γ] λ_{A_i}–a.e., and

(ii) β′(a) ∈ ϕ(a) λ_A–a.e.

In our application ϕ(a) is the second-stage Nash correspondence NE(a), λ is the limit

distribution, and β′ is a second-stage strategy-profile. Condition (ii) states that β′(·) is an

a.e. selection, i.e. a Nash equilibrium of the second stage almost always; condition (i) states

that for almost any action a_i, player i receives the same expected payoff under β′ as under the limit distribution λ.

A few comments before the proof of Theorem 2 are in order. The limit distribution λ

can be infeasible in two main cases. The first case is illustrated by the example described

in the introduction: all through the sequence of approximating games a first-stage player

uses a mixed strategy. Second-stage players focus on first-stage realizations to correlate their

actions. Although the support of the mixed strategy collapses in the limit rendering second-

stage coordination impossible, the limit distribution still requires second-stage players to

correlate their moves: the conditional distribution of η given a derived from λ prescribes

mixing among different Nash equilibria of the second stage.

The second case in which the limit distribution λ is infeasible occurs when second-stage

strategies oscillate at an increasing rate along the sequence of approximating games. As

a result of such oscillation, the limit outcome prescribes correlation among second-stage

players. To see this, imagine for instance a game with two players in each stage. In the nth approximating game, player 1 plays all her available actions (A^n_1 = {k/n, k = 1, . . . , n}) with equal probability (α^n_1(a_1) = 1/n, ∀a_1 ∈ A^n_1). (For simplicity, player 2, a dummy player, uses a similar strategy.) Second-stage players coordinate their actions on player 1's choices: β^n_1(a_1, a_2) = β^n_2(a_1, a_2) = Up if a_1 = k/n for k even, and β^n_1(a_1, a_2) = β^n_2(a_1, a_2) = Down if a_1 = k/n for k odd. Thus second-stage strategies jointly oscillate between (Up, Up) and

(Down, Down) as the finite games approach the limit game. The limit distribution λ stipulates

that for any given (a1, a2), the second-stage outcome be (Up, Up) or (Down, Down) with

equal probability. Hence, the limit distribution is infeasible.6

Footnote 6: One may define all payoffs to be identically one for all actions. Then the strategies of the approximating games form a subgame perfect equilibrium but the limit distribution is still infeasible.
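A short simulation of this example (illustrative only; the uniform dummy behavior of player 2 is the one assumed in the text, and payoffs play no role) shows the conditional joint law of the second-stage actions, given first-stage actions in any fixed window, converging to the correlated half-half mixture that independent second-stage strategies cannot reproduce.

```python
from fractions import Fraction

def outcome_n(n):
    """Outcome of the nth approximating game described in the text: a1 uniform on
    {k/n}, a2 uniform (dummy), both second-stage players play Up iff k is even."""
    out = {}
    p = Fraction(1, n * n)
    for k1 in range(1, n + 1):
        for k2 in range(1, n + 1):
            b = ("Up", "Up") if k1 % 2 == 0 else ("Down", "Down")
            out[(Fraction(k1, n), Fraction(k2, n), b)] = p
    return out

def joint_second_stage_near(out, lo, hi):
    """Conditional joint law of (b1, b2) given a1 in [lo, hi]."""
    mass, total = {}, Fraction(0)
    for (a1, a2, b), p in out.items():
        if lo <= a1 <= hi:
            mass[b] = mass.get(b, Fraction(0)) + p
            total += p
    return {b: m / total for b, m in mass.items()}

for n in (10, 100, 400):
    print(n, joint_second_stage_near(outcome_n(n), Fraction(2, 5), Fraction(3, 5)))
# The conditional law tends to {(Up, Up): 1/2, (Down, Down): 1/2}: the limit
# distribution asks the two second-stage players to correlate their independent
# choices, which is infeasible without a correlation device.
```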

A stage-correlation device “solves” this problem by making the limit distribution feasible.

Theorem 2 finds, instead, that the limit distribution can be “purified.” A limit distribution λ

is infeasible only when its conditional distribution γ(·|a) randomizes among different second-

stage Nash equilibria. Theorem 2 establishes the existence of a second-stage strategy profile β′ that avoids the randomization in γ(·|a), prescribes equilibrium behavior for second-stage

players (i.e., it is a selection of the Nash correspondence), and guarantees first-stage players

the same expected payoffs they obtained under γ.

Theorem 2 extends Liapunov’s Theorem in the following sense. Expected payoffs under

λ for first-stage players are (E[U_1|λ], . . . , E[U_I|λ]) = (E[U_1|λ_A, γ], . . . , E[U_I|λ_A, γ]). This I-tuple belongs to the convex hull of ∫ ϕ dλ_A. Richter's version of Liapunov's Theorem (see for instance Theorem 3, D.II.4, page 62, Hildenbrand (1974)) implies the existence of a measurable selection β′ of ϕ such that

E[U_i|λ_A, β′] = E[U_i|λ_A, γ], ∀i ∈ I. (3)

This equivalence, however, is not sufficient to prove our results: even though β′ yields i's expected payoffs under λ, player i may have an incentive to choose a strategy different from λ_{A_i}. Player i will not mix according to λ_{A_i} if i's expected payoff E[U_i|a_i, λ_{A_{−i}}, β′] for different actions a_i (in the support of λ_{A_i}) is not the same. To guarantee that player i moves according to λ_{A_i}, β′ must satisfy a stronger requirement such as condition (i). Integrating both sides of (i) with respect to λ_{A_i}, (3) obtains.
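Written out, the integration step referred to in the last sentence is (notation as above; this only unpacks the text):

```latex
E[U_i \mid \lambda_A, \beta']
  = \int_{A_i} E[U_i \mid a_i, \lambda_{A_{-i}}, \beta'] \, d\lambda_{A_i}(a_i)
  \overset{\text{(i)}}{=} \int_{A_i} E[U_i \mid a_i, \lambda_{A_{-i}}, \gamma] \, d\lambda_{A_i}(a_i)
  = E[U_i \mid \lambda_A, \gamma], \qquad \forall i \in I,
```

which is exactly (3); condition (i) is therefore the stronger, action-by-action requirement.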

The proof of Theorem 2 involves three main parts. The first one uses ideas in Dvoretzky,

Wald, and Wolfowitz (1951). The result is first proved for the case where the range of the

correspondence NE(·) is finite, and then it is extended to the continuum case by a limiting

argument.

The second part uses ideas in Lindenstrauss (1966). It is shown that the set W of

candidate solutions, i.e., all second-stage continuations f that yield the desired payoffs

(E[U_i|a_i, λ_{A_{−i}}, f] = E[U_i|a_i, λ_{A_{−i}}, γ] for most a_i and all i), is convex and compact and there-

fore has an extreme point. Demonstrating that any member of W that prescribes correlation

cannot be an extreme point is the last step of the proof.

An illustrative (but not entirely accurate) description of the third part follows. The objec-

tive is to find a non-trivial continuation f′ ≠ 0 that yields expected payoff zero (E[U_i|a_i, λ_{A_{−i}}, f′] = 0 for λ_{A_i}-almost all a_i and all i). Armed with such a continuation, it is possible to show that

any candidate f ∈W that prescribes correlation cannot be an extreme point: the linearity of

expected payoffs implies that both f + f ′ and f − f ′ belong to W . To see how the existence

of such f ′ is established, suppose there are only two first-stage players. Using non-atomicity,

their choice spaces are divided into two sets of equal measure, thus partitioning A1×A2 into

four sets. As indicated in the graph below, we construct a function f that takes values 1 and

−1 in alternating sets.


[Figure: the product A_1 × A_2 is partitioned into four rectangles; the function f equals 1 on the two diagonal rectangles and −1 on the two off-diagonal rectangles.]

If the payoff functions Ui were constant, f would be the desired continuation f ′. The

continuation f and an application of the Banach Inverse Theorem (Lemma 2) are used to

prove the existence of the desired f ′ with zero expected payoff.

Proof of Theorem 2: Suppose first that the set ⋃_{a∈A} ϕ(a) (which in our application is the set of Nash equilibria of the second-stage game) is finite, and represent its elements by η^k, k = 1, 2, . . . , K. Given a first-stage realization a ∈ A, each η^k is played with certain probability γ_k(a). Thus, γ is represented by measurable functions γ_k : A −→ [0, 1], k = 1, 2, . . . , K, with ∑_{k=1}^{K} γ_k(a) = 1 almost everywhere in A.

Let E_k = {a ∈ A : γ_k(a) > 0}. For any g = (g_1, . . . , g_K) ∈ (L^∞(A, λ_A))^K, define

T_i g = ∑_{k=1}^{K} ∫_{A_{−i}} 1_{E_k} [ ∫_B U_i(a_i, a_{−i}, b) dη^k ] g_k(a_i, a_{−i}) dλ_{A_{−i}}, ∀i ∈ I, and

Tg = (T_1 g, . . . , T_I g).

The expression T_i γ(a_i) (i.e., T_i γ evaluated at a_i ∈ supp[λ_{A_i}]) represents, in our application, player i's expected payoff when i selects a_i and i's opponents play according to γ.

The operators Ti and T have several continuity properties. Let

‖g‖_∞ = max_{1≤k≤K} ‖g_k‖_∞ and ‖Tg‖_∞ = max_{i∈I} ‖T_i g‖_∞. (4)

The operators T_i, T are linear and continuous with respect to the described norms; they are also weak∗ continuous.7

Footnote 7: For n = 1, 2, . . . , let g_n ∈ (L^∞(A, λ_A))^K. Weak∗ continuity is understood in the following sense: [g_n →_{w∗} g ⇒ Tg_n →_{w∗} Tg], where g_n →_{w∗} g if for k = 1, . . . , K, g_{kn} converges to g_k with respect to the weak∗ topology (denoted g_{kn} →_{w∗} g_k). Similarly, Tg_n →_{w∗} Tg if for all i ∈ I, T_i g_n →_{w∗} T_i g.

Let

W = T^{−1}(Tγ) ∩ {g : ∑_{k=1}^{K} g_k(a) = 1 λ_A–a.e., and 0 ≤ g_k ≤ 1, k = 1, . . . , K}. (5)

The set W contains potential continuations of the game that give first-stage players the same payoff as γ; indeed γ belongs to W. The set T^{−1}(Tγ) is weak∗ closed and convex by continuity and linearity. The second set in the definition of W is a weak∗ closed, convex subset of the unit ball. Hence W is weak∗ compact (Banach-Alaoglu Theorem), and since it is convex it has an extreme point (Krein-Milman Theorem).

To complete the proof in the case where ⋃_{a∈A} ϕ(a) is finite, it suffices to show that there is an element g ∈ W all of whose coordinate functions g_k are indicator functions. We now prove that all extreme points of W have this property.

Suppose then that g ∈ W has some component that is not an indicator function. Relabeling coordinates if necessary, suppose such component is g_1. To demonstrate that g is not an extreme point of W we must identify a function f ≠ 0 such that g ± f ∈ W. It follows from the definition of W that the candidate function f must satisfy three conditions:

Tf = 0, (6)

0 ≤ g_k ± f_k ≤ 1, for k = 1, . . . , K, and (7)

∑_{k=1}^{K} f_k = 0 λ_A–a.e. (8)

Given (7) and (8), g ± f belong to the second set in (5); (6) implies that they also belong to the first.

Lemma 1 below establishes some properties of any g ∈W that are used to show that such

an f exists.

For any two elements η^{k′}, η^{k″} of ⋃_{a∈A} ϕ(a), define

∆U_i(a, η^{k′}, η^{k″}) = ∫_B U_i(a, b) dη^{k′} − ∫_B U_i(a, b) dη^{k″}.

Lemma 1 Let g = (g_1, . . . , g_K) be any element of W. Suppose that for some k′, λ_A({a : 0 < g_{k′}(a) < 1}) > 0. Then there is an ε > 0, a coordinate g_{k″}, k″ ≠ k′, and a set D = ∏_{i∈I} D_i with D_i ⊂ A_i ∀i ∈ I, such that

(a) λ_{A_i}(D_i) > 0 ∀i ∈ I;

(b) ∀a, a′ ∈ D, ‖∆U_i(a, η^{k′}, η^{k″}) − ∆U_i(a′, η^{k′}, η^{k″})‖ < ε ∀i ∈ I;

(c) ∀a ∈ D, ε < g_{k′}(a) < 1 − ε and ε < g_{k″}(a) < 1 − ε.

The proof of the lemma is provided in the Appendix; only a brief intuition is offered

here. Suppose g is an element of W and g_{k′} is not an indicator function. Then there must be some set where g_{k′}(a) takes values strictly below one and strictly above zero. Since the different coordinate functions must add up to one, there must be another function g_{k″}, and a set of positive measure on which both functions are bounded away from zero and one. This reasoning leads to (a) and (c). Uniform continuity of the payoff functions yields condition (b).

We proceed with the proof of Theorem 2. Relabeling coordinates if necessary, let g_{k′} = g_1 and g_{k″} = g_2 (Lemma 1 (c)).

Let X be the subspace of (L^∞(A, λ_A))^K defined by

X = {f : f_2 = −f_1, f_1(a) = f_2(a) = 0 ∀a ∉ D, and f_k = 0 for 3 ≤ k ≤ K}.

Note that X is a closed linear subspace and therefore a Banach space. Define Y = TX. Recall that we must find f ∈ X, f ≠ 0, with Tf = 0. Suppose f = 0 is the unique solution of Tf = 0 in X. Then T^{−1} : Y −→ X is a well-defined function. The space Y with the norm ‖·‖_T defined below is a Banach space:

∀y ∈ Y, ‖y‖_T = (1/2)‖y‖_∞ + (1/2)‖T^{−1}y‖_∞. (9)

Lemma 2 Let T be a bounded linear operator from a Banach space X onto a Banach space

Y . Then, the equation Tf = 0 has f = 0 as its unique solution if and only if ∃C > 0 such

that ∀f ∈ X, ‖f‖ ≤ C‖Tf‖.

The proof of the lemma is provided in the Appendix.

We now show that

∀C > 0, ∃fc ∈ X such that ‖fc‖∞ > C‖Tfc‖T ,

and then use Lemma 2.

By hypothesis, at least two marginal distributions λ_{A_{i′}} and λ_{A_{i″}} are non-atomic. Relabeling first-stage agents if necessary, suppose that λ_{A_1} and λ_{A_2} are the non-atomic distributions. For ℓ = 1, 2 partition D_ℓ (identified in Lemma 1) into D′_ℓ and D″_ℓ so that

D′_ℓ ∩ D″_ℓ = ∅, D′_ℓ ∪ D″_ℓ = D_ℓ, and λ_{A_ℓ}(D′_ℓ) = λ_{A_ℓ}(D″_ℓ). (10)

Such a partition exists because λ_{A_ℓ} is non-atomic. Let

D^+ = [(D′_1 × D′_2) ∪ (D″_1 × D″_2)] × ∏_{i=3}^{I} D_i,

D^− = [(D′_1 × D″_2) ∪ (D″_1 × D′_2)] × ∏_{i=3}^{I} D_i,

and let

f = ((1_{D^+} − 1_{D^−}), −(1_{D^+} − 1_{D^−}), 0, · · · , 0).

Then, for any i ∈ I,

T_i f = ∫_{D_{−i}} ∫_B U_i(a, b) dη^1 (1_{D^+} − 1_{D^−}) dλ_{A_{−i}} − ∫_{D_{−i}} ∫_B U_i(a, b) dη^2 (1_{D^+} − 1_{D^−}) dλ_{A_{−i}}

= ∫_{D_{−i}} ( ∫_B U_i(a, b) dη^1 − ∫_B U_i(a, b) dη^2 ) (1_{D^+} − 1_{D^−}) dλ_{A_{−i}}

= ∫_{D_{−i}} ∆U_i(a, η^1, η^2) (1_{D^+} − 1_{D^−}) dλ_{A_{−i}}

= ∫_{D_{−i}} ∆U_i(a, η^1, η^2) 1_{D^+} dλ_{A_{−i}} − ∫_{D_{−i}} ∆U_i(a, η^1, η^2) 1_{D^−} dλ_{A_{−i}}

≤ ( sup_{a∈D} ∆U_i(a, η^1, η^2) ) ∫_{D_{−i}} 1_{D^+} dλ_{A_{−i}} − ( inf_{a∈D} ∆U_i(a, η^1, η^2) ) ∫_{D_{−i}} 1_{D^−} dλ_{A_{−i}}

= ( sup_{a∈D} ∆U_i(a, η^1, η^2) − inf_{a∈D} ∆U_i(a, η^1, η^2) ) ∫_{D_{−i}} 1_{D^+} dλ_{A_{−i}}

≤ ε (1/2) λ_{A_{−i}}(D_{−i}), (11)

where the first four equations follow using the definition of T_i, rearranging terms, and using the definition of ∆U_i(a, η^1, η^2). The first inequality follows by taking extreme values over D, which includes the area of integration. The last two lines follow from the fact that

∫_{D_{−i}} 1_{D^+} dλ_{A_{−i}} = ∫_{D_{−i}} 1_{D^−} dλ_{A_{−i}}, ∀i ∈ I.

To see this, note that (D′_1 × D′_2) ∩ (D″_1 × D″_2) = ∅ and therefore, using (10),

λ_{A_1} × λ_{A_2} [(D′_1 × D′_2) ∪ (D″_1 × D″_2)] = λ_{A_1}(D′_1)λ_{A_2}(D′_2) + λ_{A_1}(D″_1)λ_{A_2}(D″_2)

= (λ_{A_1}(D′_1) + λ_{A_1}(D″_1)) λ_{A_2}(D′_2)

= λ_{A_1}(D_1) λ_{A_2}(D′_2)

= λ_{A_1}(D_1) λ_{A_2}(D_2) / 2.

Thus, λ_A(D^+) = (1/2) λ_A(D). Hence ∫_{D_{−i}} 1_{D^+} dλ_{A_{−i}} = (1/2) λ_{A_{−i}}(D_{−i}). The same argument applies to D^−.

For C > 0, let f_c = Cf. Then, using the definitions of ‖·‖_T and ‖·‖_∞ ((9) and (4) respectively), the definition of f_c, and the inequality (11),

‖Tf_c‖_T = (1/2) max_{i∈I} ‖T_i f_c‖_∞ + (1/2)‖f_c‖_∞

= (1/2) C max_{i∈I} ‖T_i f‖_∞ + (1/2) C

≤ (1/2) C ε (1/2) max_{i∈I} λ_{A_{−i}}(D_{−i}) + (1/2) C

< C = ‖f_c‖_∞,

where the final strict inequality uses ε < 1/2 (which follows from Lemma 1 (c)) and λ_{A_{−i}}(D_{−i}) ≤ 1.

By Lemma 2 there exists f′ ∈ X, f′ ≠ 0, such that Tf′ = 0. Let

f = εf′ / (2‖f′‖_∞).

It is immediate that f satisfies (6). For k = 1, 2, ‖f_k‖_∞ ≤ ε/2, and for k ≥ 3, f_k = 0. Thus f satisfies (7) (by Lemma 1 (c), ε < g_k < 1 − ε on D for k = 1, 2). By construction f also satisfies (8). Then g ± f ∈ W and therefore g cannot be an extreme point.

This completes the proof when ⋃_{a∈A} ϕ(a) is finite.

Using the argument in Dvoretzky, Wald, and Wolfowitz (Section 4, 1951), it follows that

the theorem holds when ⋃_{a∈A} ϕ(a) is compact, which is the case here. Roughly, a sequence of

increasingly finer partitions of B is constructed. Fix a partition. Expected payoffs are defined

so as to be constant on each partition element. Our theorem for the finite case then applies

because each partition has finitely many elements. As the partitions become increasingly

finer, a limiting argument yields the desired result. Q.E.D.

Note that the proof of Theorem 2 could be applied directly to the case of public random-

ization devices; no extreme point involves correlation.

To conclude this section, we illustrate one potential application of our results with two

corollaries that follow from combining Theorems 1 and 2.

According to the limit distribution λ, any first-stage player i must choose the strategy

λAi . Corollary 2 states that if the limit distribution stipulates sufficiently diffused behavior

for at least two first-stage players, then Γ has a subgame perfect equilibrium. We emphasize,

however, that λ itself need not be a subgame perfect equilibrium outcome.

Corollary 2 Let Γn, n = 1, 2, . . . , be a sequence of finite games converging to a limit game

Γ, and let λn be a subgame perfect equilibrium outcome of Γn. Through a subsequence, λn

converges to a limit distribution λ. If for two first-stage agents i′, i″ ∈ I the marginal distributions λ_{A_{i′}} and λ_{A_{i″}} are non-atomic, then Γ has a subgame perfect equilibrium.


The diffuse behavior in the first stage is used in place of a public randomization device

by second-stage players to coordinate their actions. Having at least two first-stage players

with non-atomic behavior guarantees that no first-stage player has, individually, the ability

to manipulate to her advantage the continuation of the game. Corollary 2 can be obtained

directly from Harris, Reny, and Robson’s results (Manelli (2001)). As an existence result,

Corollary 2 is not entirely satisfactory because it imposes conditions on limit distributions, not

on primitives. Also, it might be difficult to check the non-atomicity of the limit distribution

in applications.

Corollary 3 states that if the infinite game Γ is extended to incorporate cheap talk, i.e.,

the first-stage players' ability to send payoff-irrelevant messages to second-stage players, then the

new game has a subgame perfect equilibrium. Equilibrium outcomes of finite approximating

games without cheap talk can be made into equilibrium outcomes of games with cheap talk:

players use completely uninformative messages such as randomization with equal probabil-

ity over all their cheap-talk possibilities. If the cheap-talk spaces are sufficiently rich, the

unit interval for instance, then the limit distribution obtained from the approximating finite

outcomes prescribes non-atomic behavior for first-stage players who randomize over all their

cheap-talk messages. This argument leads to the following corollary.

Corollary 3 was first established by Harris, Reny, and Robson (1995) in their discussion

following their Theorem 45.

Corollary 3 Let Γn, n = 1, 2, . . . , be a sequence of finite games converging to a limit game

Γ, and let λn be a subgame perfect equilibrium outcome of Γn. Through a subsequence,

λn converges to a limit distribution λ. Extend Γ so that first-stage players can send to

second-stage players payoff-irrelevant messages from the unit interval or any other compact

metric space with a continuum of elements. Then the extended game has a subgame perfect

equilibrium. Furthermore, there is a subgame perfect equilibrium of the extended game that

generates the distribution λ over the payoff-relevant variables.

4 Conclusions

We conclude with a few remarks and comments on related literature.

1. Harris, Reny, and Robson (1995) prove many interesting results that we have not men-

tioned. They also consider more general games than those studied in this essay, mainly

games with arbitrarily many stages. We believe that our approach would extend to

those games but the details are by no means trivial. It is likely that the extension


would be as lengthy as Harris, Reny, and Robson’s original treatment. Our proof em-

phasizes certain regularities observed in different types of infinite games. It may prove

useful in analyzing other infinite games with imperfect or incomplete information.

2. Formally the games considered by Simon and Zame (1990) are not stage games. An

interpretation that Simon and Zame favor, however, is that their model is a reduced

form of a two-stage game where the second-stage players are not explicitly modeled.

Instead, second-stage players are replaced by a payoff correspondence specifying feasible

first-stage payoffs for any realization in the first stage. (The payoff correspondence is

obtained from the second-stage Nash equilibrium correspondence by associating to ev-

ery Nash equilibrium its corresponding payoffs for first-stage players.) Simon and Zame

prove that if the payoff correspondence is upper hemi-continuous and convex-valued,

then it has a measurable selection for which an equilibrium exists. Our results comple-

ment Simon and Zame’s. We replace the assumption of a convex payoff-correspondence

with the non-atomicity of first-stage behavior.

3. Reny and Robson (1995) provide an alternative proof of the main existence theorem in

Harris, Reny, and Robson (1995) using Simon and Zame’s result. Their proof also uses

a backward and forward step. It is shorter than the proof in Harris, Reny, and Robson’s

paper but focuses entirely on equilibrium payoffs rather than on equilibrium paths.

4. As noted, the addition of cheap talk “solves” the non-existence problem in stage games

and in signaling games with infinite choice-sets. It would be of interest to identify

the class of games in which this property holds. Harris, Stinchcombe, and Zame (1999)

report an example of an incomplete-information game in which cheap talk fails to restore

existence. They also use non-standard analysis to study in a single framework cheap

talk, public randomization devices, and sharing rules.

5. Chakrabarti (1999) proves the existence of epsilon-perfect equilibria for a general class

of dynamic games that includes the games we study.

6. Borgers (1991) defines a notion of approximation for a class of games and proves that

the map from games to pure subgame perfect equilibrium outcomes is upper hemi-

continuous. Borgers’ result only applies to outcomes generated by pure strategies.

Since finite games may not have pure strategy equilibria, his result is not enough to

yield existence. Our proof of Theorem 1 and Corollary 1 uses the outcomes of behavior

strategies.


7. There are certain similarities between the games considered in this essay and games of

incomplete information. Radner and Rosenthal (1982) and Milgrom and Weber (1985)

study games in which players observe some private information (from a continuum of

alternatives) and then simultaneously and independently choose an action.8 Under cer-

tain distributional assumptions on information, they show that there exist equilibria in

behavior strategies. If in addition action spaces are finite, they prove that pure strategy

equilibria exist (or equivalently, that mixed-strategy equilibria can be “purified”). This

is not the case, however, when there is a continuum of actions. Khan, Rath, and Sun

(1999) have constructed a well-behaved game with intervals as choice sets and with no

pure strategy equilibrium.

Footnote 8: A strategy maps a player's types into actions. In stage games, a second-stage strategy maps first-stage realizations into actions. Thus, the vector of first-stage realizations is the analogue of the vector of types.

5 References

Al-Najjar, N. and E. Solan, "Equilibrium Existence in Incomplete Information Games with Atomic Posteriors," working paper, MEDS, Northwestern University, May 1999.

Balder, E. J., "Generalized Equilibrium Results for Games of Incomplete Information," Mathematics of Operations Research 13 (1988): 265-276.

Billingsley, P., Convergence of Probability Measures. New York: John Wiley, 1968.

Billingsley, P., Probability and Measure. New York: John Wiley, 1979.

Borgers, T., "The Upper Hemi-Continuity of the Correspondence of Subgame Perfect Equilibrium Outcomes," Journal of Mathematical Economics 20 (1991): 89-106.

Chakrabarti, S., "Finite and infinite action dynamic games with imperfect information," Journal of Mathematical Economics 32 (1999): 243-266.

Dvoretzky, A., A. Wald, and J. Wolfowitz, "Elimination of Randomization in Certain Statistical Decision Procedures and Zero-Sum Two-Person Games," Annals of Mathematical Statistics 22 (1951): 1-21.

Harris, C. J., P. J. Reny, and A. J. Robson, "The Existence of Subgame Perfect Equilibrium in Continuous Games with Almost Perfect Information: A Case for Extensive-Form Correlation," Econometrica 63 (1995): 507-544.

Harris, C. J., M. B. Stinchcombe, and W. Zame, "The Finitistic Theory of Infinite Games," photocopy, 1999.

Hildenbrand, W., Core and Equilibria of a Large Economy. Princeton: Princeton University Press, 1974.

Jackson, M., L. Simon, J. Swinkels, and W. Zame, "Communication and Equilibrium in Discontinuous Games of Incomplete Information," photocopy, 2001.

Kechris, A. S., Classical Descriptive Set Theory. New York: Springer-Verlag, 1995.

Khan, M. Ali and Y. Sun, "Pure Strategies in Games with Private Information," Journal of Mathematical Economics 24 (1995): 633-653.

Khan, M. Ali, K. P. Rath, and Y. Sun, "On a private information game without pure strategy equilibria," Journal of Mathematical Economics 31 (1999): 341-359.

Lindenstrauss, J., "A Short Proof of Liapunov's Convexity Theorem," Journal of Mathematics and Mechanics 15, 6 (1966): 971-972.

Manelli, A. M., "Cheap Talk and Sequential Equilibria in Signaling Games," Econometrica 64 (1996): 917-942.

Manelli, A. M., "Subgame Perfect Equilibria in Stage Games," forthcoming in Journal of Economic Theory.

Milgrom, P., and R. J. Weber, "Distributional Strategies for Games with Incomplete Information," Mathematics of Operations Research 10 (1985): 619-632.

Parthasarathy, K., Probability Measures on Metric Spaces. New York: Academic Press, 1967.

Radner, R., and R. W. Rosenthal, "Private Information and Pure Strategy Equilibria," Mathematics of Operations Research 7 (1982): 401-409.

Reny, P. J. and A. J. Robson, "A Short Proof of Existence of Subgame Perfect Equilibrium in Continuous Stage Games With Public Randomization," photocopy, 1995.

Simon, L., and W. Zame, "Discontinuous Games and Endogenous Sharing Rules," Econometrica 58 (1990): 861-872.

Van Damme, E., "Equilibria in Non-Cooperative Games," in Surveys in Game Theory and Related Topics (H. Peters and O. Vrieze, Eds.), C.W.I. Tract 39, Amsterdam, 1987.

6 Appendix

Lemma 3 is used in the proof of Corollary 1 and in the proof of Theorem 1 step three. Recall

that NE(a) is the set of second-stage Nash Equilibria in Γ after the first-stage realization a.

Lemma 3 Let Γ^n, n = 1, 2, . . . , be a sequence of finite games and NE^n be the corresponding sequence of second-stage Nash equilibrium correspondences. Suppose Γ^n converges to Γ. Then

lim sup_{n→∞} graph(NE^n) ⊂ graph(NE).

As a consequence, NE(a) is non-empty ∀a ∈ A, and if λ^n is a subgame perfect equilibrium outcome of Γ^n and λ^n ⇒ λ, then supp[λ] ⊂ graph(NE).

Proof of Lemma 3: If (a, η_1, . . . , η_J) belongs to lim sup graph(NE^n), then there is a sequence (a^n, η^n_1, . . . , η^n_J) ∈ graph(NE^n), n = 1, 2, . . . , such that (a^n, η^n_1, . . . , η^n_J) converges to (a, η_1, . . . , η_J). For any j ∈ J,

E[U_j|a^n, η^n_1, . . . , η^n_J] ≥ E[U_j|a^n, b^n_j, η^n_{−j}], ∀b^n_j ∈ B^n_j. (12)

For any b_j ∈ B_j, there is b^n_j ∈ B^n_j such that b^n_j converges to b_j. Taking limits in (12), it follows that (a, η_1, . . . , η_J) is in graph(NE).

Q.E.D.

Proof of Theorem 1: Let αi = λAi for all i ∈ I. The proof constructs second-stage

strategies β and shows that (α, β) satisfies (1) and (2). The equilibrium strategies βj for

second-stage players are constructed in successive steps by defining strategies that satisfy (2)

and then refining them to satisfy (1).

Note for later use that there is a subgame perfect equilibrium (αn, βn) in Γn that generates

λn.

The proof proceeds in four steps.

[ 1.] Second-stage equilibrium.

By hypothesis (ii), there is a set A′ ⊂ A, λ_A(A′) = 1, such that for any a ∈ A′, β′(a) ∈ NE(a). Let β″ be any measurable everywhere selection from the Nash equilibrium correspondence NE(·). Such a selection exists because NE(·) is upper hemi-continuous and compact valued (see, for instance, the Arsenin-Kunugui Theorem, Kechris (1995), Theorem 35.46, page 297). For each j ∈ J, define

β̄_j(a) = β′_j(a) if a ∈ A′, and β̄_j(a) = β″_j(a) otherwise.

Note that β̄(a) ∈ NE(a) ∀a ∈ A and β̄(a) = β′(a) λ_A–a.e.

[ 2.] Second-stage equilibrium strategies.

The second-stage strategies β̄_j are now modified on a set of λ_A-measure zero to ensure that (1) holds.

For each i ∈ I consider the set of least-preferred second-stage Nash equilibrium outcomes:

ψ_i(a) = arg min_{η∈NE(a)} E[U_i|a, η].

The correspondence ψ_i(·) is compact valued (which follows from verifying the definitions) and has a measurable graph (which follows, for instance, from Proposition 3, D.II.3, page 60, in Hildenbrand (1974) and the fact that NE(·) is upper hemi-continuous). Therefore ψ_i(·) has a measurable everywhere selection β^i = (β^i_1, β^i_2, · · · , β^i_J) (Arsenin-Kunugui Theorem, Kechris (1995), Theorem 35.46, page 297).

Let

Â_i = {a_i ∈ supp[λ_{A_i}] : E[U_i|α, β̄] ≥ E[U_i|a_i, α_{−i}, β̄]},

and Â = ∏_{i∈I} Â_i. Then, for every j ∈ J define

β_j(a) = β^i_j(a) if a_i ∉ Â_i and a_{−i} ∈ Â_{−i}, and β_j(a) = β̄_j(a) otherwise.

Assume temporarily that α_i(Â_i) = 1 ∀i ∈ I. This will be proved in step four. We refer to Â as the equilibrium path.

The strategy β_j has two components. The first component, β^i_j(a), specifies j's equilibrium response to a deviation by player i (when all other first-stage players remain on the equilibrium path). The corresponding continuation β^i(a) = (β^i_1(a), β^i_2(a), . . . , β^i_J(a)) is the least-preferred second-stage Nash equilibrium of the deviant player. The second component, β̄_j, defines player j's response on the equilibrium path, or after multiple first-stage deviations.

Under the assumption that α_i(Â_i) = 1 ∀i ∈ I, β = β′ λ_A–a.e.

[ 3.] Subgame perfect equilibrium.

It is now verified that (α, β) is a subgame perfect equilibrium.

First, we show that β(a) ∈ NE(a) ∀a ∈ A. Note that β̄(a) ∈ NE(a), and β(a) differs from β̄(a) only when a ∈ ⋃_{i∈I}(Â^c_i × Â_{−i}), in which case β(a) is also a second-stage Nash equilibrium.

Second, we show that (α, β) satisfies (1).

Suppose that a_i is in Â_i. Under the assumption that α_i(Â_i) = 1 ∀i ∈ I, the strategy β_j differs from β̄_j only on a set of measure zero. Therefore the left side of (1) becomes E[U_i|α, β] = E[U_i|α, β̄]. Since a_i ∈ Â_i, and the integration may take place over Â_{−i}, we have, by definition of β, that

E[U_i|a_i, α_{−i}, β] = ∫_{A_{−i}} U_i(a_i, a_{−i}, b) β̄_1(db_1|a) . . . β̄_J(db_J|a) dλ_{A_{−i}} = E[U_i|a_i, α_{−i}, β̄].

Hence, by definition of Â_i, (1) is satisfied.

Suppose, alternatively, that a_i is not in Â_i. Let a^n_i ∈ A^n_i for all n, with a^n_i −→ a_i. (Such a sequence exists because Γ^n converges to Γ.) In Γ^n, when i chooses a^n_i and all other players move according to their equilibrium strategies (α^n_{−i}, β^n), i can expect the following continuation:

µ^n(a^n_i) = α^n_{−i} ∘ f^{−1}_n, where f_n(a_{−i}) = (a_{−i}, β^n(a^n_i, a_{−i})).

In a subsequence, (a^n_i, µ^n(a^n_i)) −→ (a_i, µ) for some µ ∈ M(A_{−i} × ∏_{j∈J} M(B_j)). Since (α^n, β^n) is a subgame perfect equilibrium of Γ^n,

E[U_i|α^n, β^n] ≥ E[U_i|a^n_i, α^n_{−i}, β^n] = E[U_i|a^n_i, µ^n(a^n_i)].

As n −→ ∞, E[U_i|α, β] ≥ E[U_i|a_i, µ].

We now show that for a_i ∉ Â_i, E[U_i|a_i, µ] ≥ E[U_i|a_i, α_{−i}, β^i], which establishes (1).

Note that µ_{A_{−i}} = λ_{A_{−i}}. Then, by Theorem V.8.1 in Parthasarathy (1967), there exists a version of the regular conditional distribution of η given a_{−i} derived from µ, γ^0 : A_{−i} −→ M(∏_{j∈J} M(B_j)), and a set A^0 ⊂ A_{−i} with λ_{A_{−i}}(A^0) = 1 such that

(a_{−i} ∈ A^0 and η ∈ supp[γ^0(a_{−i})]) =⇒ (a_{−i}, η) ∈ supp[µ].

If (a_{−i}, η) belongs to supp[µ], there is (a^n_{−i}, η^n) ∈ supp[µ^n(a^n_i)] such that (a^n_{−i}, η^n) −→ (a_{−i}, η). Since η^n belongs to NE^n(a^n_i, a^n_{−i}), η must belong to NE(a_i, a_{−i}) (Lemma 3). Hence,

supp[γ^0(a_{−i})] ⊂ NE(a_i, a_{−i}) for all a_{−i} ∈ A^0. (13)

Note that by construction of β^i, for any a_{−i} ∈ (A^0 ∩ Â_{−i}) and η ∈ NE(a_i, a_{−i}),

∫_B U_i(a_i, a_{−i}, b) β^i_1(db_1|a_i, a_{−i}) . . . β^i_J(db_J|a_i, a_{−i}) ≤ ∫_B U_i(a_i, a_{−i}, b) η(db).

Using (13), we obtain

∫_B U_i(a_i, a_{−i}, b) β^i_1(db_1|a_i, a_{−i}) . . . β^i_J(db_J|a_i, a_{−i}) ≤ ∫_{∏_{j∈J} M(B_j)} ∫_B U_i(a_i, a_{−i}, b) η(db) γ^0(dη|a_{−i}).

Since λ_{A_{−i}}(A^0 ∩ Â_{−i}) = 1, integrating both sides of the inequality above with respect to λ_{A_{−i}} we have

E[U_i|a_i, α_{−i}, β^i] = ∫_{A_{−i}} ∫_B U_i(a_i, a_{−i}, b) β^i_1(db_1|a_i, a_{−i}) . . . β^i_J(db_J|a_i, a_{−i}) dλ_{A_{−i}}

≤ ∫_{A_{−i}} ∫_{∏_{j∈J} M(B_j)} ∫_B U_i(a_i, a_{−i}, b) η(db) γ^0(dη|a_{−i}) dλ_{A_{−i}}

= E[U_i|a_i, µ].

This establishes that (α, β) satisfies (1) under the assumption that Â_i has λ_{A_i}-measure one for all i.

[ 4.] The set Â_i has λ_{A_i}-measure one.

Pick any agent i ∈ I. We now prove that α_i(Â_i) = 1 by contradiction. To this effect, let Â^c_i be the complement of Â_i, and suppose that α_i(Â^c_i) > 0. Define

g(a_i) = (1/α_i(Â^c_i)) 1_{Â^c_i}(a_i).

Note for later reference that ∫ g(a_i) dα_i = 1, and that 0 ≤ g ≤ 1/α_i(Â^c_i). The density function g defines a new strategy α_i g ∈ M(A_i):

α_i g(E) = ∫_E g(a_i) dα_i, for all measurable E ⊂ A_i.

By construction, α_i g is a profitable deviation for player i: E[U_i|α_i g, α_{−i}, β̄] > E[U_i|α, β̄].

We now show that there is a continuous density ḡ such that α_i ḡ is also a profitable deviation for player i, i.e.,

E[U_i|α_i ḡ, α_{−i}, β̄] > E[U_i|α, β̄]. (14)

As a consequence of Lusin's Theorem, for any ε > 0 there is a continuous function g_ε and a compact set K_ε with α_i(K_ε) > 1 − ε such that g_ε(a_i) = g(a_i) ∀a_i ∈ K_ε; furthermore, inf g(a_i) ≤ g_ε ≤ sup g(a_i). Let

ḡ(a_i) = g_ε(a_i) / ∫ g_ε(a_i) dα_i.

For later reference we prove that

∫ g_ε(a_i) dα_i → 1 as ε → 0. (15)

To see this, note that

|∫ g_ε(a_i) dα_i − 1| = |∫ g_ε(a_i) dα_i − ∫ g(a_i) dα_i| ≤ ∫_{K^c_ε} |g_ε(a_i) − g(a_i)| dα_i ≤ α_i(K^c_ε) ‖g_ε − g‖_∞ ≤ α_i(K^c_ε) (1/α_i(Â^c_i)),

which converges to zero as ε → 0 since α_i(K^c_ε) < ε.

We now compare i's payoff under α_i g and α_i ḡ:

E[U_i|α_i ḡ, α_{−i}, β̄] − E[U_i|α_i g, α_{−i}, β̄] = ∫_{A_i} E[U_i|a_i, α_{−i}, β̄] [ḡ(a_i) − g(a_i)] dα_i

= ∫_{K_ε} E[U_i|a_i, α_{−i}, β̄] ( g(a_i)/∫ g_ε(a_i) dα_i − g(a_i) ) dα_i + ∫_{K^c_ε} E[U_i|a_i, α_{−i}, β̄] ( g_ε(a_i)/∫ g_ε(a_i) dα_i − g(a_i) ) dα_i.

We now prove successively that both terms in the last expression converge to zero as ε tends to zero. The first term equals

∫_{K_ε} E[U_i|a_i, α_{−i}, β̄] g(a_i) ( 1/∫ g_ε(a_i) dα_i − 1 ) dα_i,

and given (15), it converges to zero.

The second term is less than

α_i(K^c_ε) × sup_{a_i∈K^c_ε} | E[U_i|a_i, α_{−i}, β̄] ( g_ε(a_i)/∫ g_ε(a_i) dα_i − g(a_i) ) |.

Given (15), all functions over which the supremum is applied are bounded; thus, the expression above converges to zero. Hence, (14) is established.

Define

g_n(a_i) = ḡ(a_i) / ∫_{A_i} ḡ(a_i) dα^n_i if ∫_{A_i} ḡ(a_i) dα^n_i > 0, and g_n(a_i) = 0 otherwise.

Each g_n is a continuous density and, as n → ∞, g_n converges uniformly to ḡ. Since (α^n, β^n) is a subgame perfect equilibrium of Γ^n,

E[U_i|α^n, β^n] ≥ E[U_i|α^n_i g_n, α^n_{−i}, β^n].

As n → ∞, the left side of the inequality converges to E[U_i|α, β̄] (since (α^n, β^n) generates λ^n and λ^n ⇒ λ). The right side of the inequality converges to E[U_i|α_i ḡ, α_{−i}, β̄] (Theorem 5.5, page 34, Billingsley (1968)). This contradicts (14). Hence, Â_i must have α_i-measure one.

Q.E.D.

Proof of Lemma 1: Relabeling coordinates if necessary, let k′ = 1 and k″ = 2. The proof proceeds in three steps.

1. We prove that there is an ε > 0 and a set E with λ_A(E) > 0 such that ε < g_k(a) < 1 − ε a.e. in E, for k = 1, 2.

Suppose g_{k′} is not an indicator function. Let Ē^{k′} = {a ∈ E_{k′} : 0 < g_{k′}(a) < 1}. For k ≠ k′, let Ē^k = {a ∈ Ē^{k′} : 0 < g_k(a) < 1}. Since ∑_{k=1}^{K} g_k = 1 a.e., then λ_A(Ē^{k′}) = λ_A[Ē^{k′} ∩ (⋃_{k≠k′} Ē^k)]. Since λ_A(Ē^1) > 0, there must exist k″ ≠ k′ such that λ_A({a : 0 < g_k < 1, k = k′, k″}) > 0. Hence, there must be an ε > 0 with the desired property.

2. We prove that there exist D̄_i ⊂ A_i for all i ∈ I such that λ_A(∏_{i∈I} D̄_i) > 0 and, for k = 1, 2, ε < g_k(a) < 1 − ε a.e. in ∏_{i∈I} D̄_i.

Let F be the set of measurable rectangles in A on which ε < g_k < 1 − ε a.e. for k = 1, 2. The set F is a semiring (i.e., F is closed under finite intersections, contains the empty set, and if B, C ∈ F and B ⊂ C, then there are disjoint sets C_1, C_2, . . . , C_N such that C \ B = ⋃_{n=1}^{N} C_n).

Let G = {G : G = E ∩ E′, for some Borel set E′ ⊂ A}. G is a σ-field. Clearly, F ⊂ G and the restriction of λ_A to G is a finite measure. By Dynkin's π−λ Theorem, σ(F) = G.

By Theorem 11.4 in Billingsley (1979), page 140, ∀ε′ > 0, there is a disjoint sequence of sets {B_n}_{n=1}^{∞}, B_n ∈ F ∀n, such that E ⊂ ⋃_{n=1}^{∞} B_n and λ_A(⋃_{n=1}^{∞} B_n \ E) < ε′. If all elements of F had λ_A–measure zero, then E would have measure zero. Thus, there is some B_n with positive measure. Since B_n ∈ F, the desired result is established.

3. The previous steps prove that there are sets D̄_i satisfying (a) and (c). We now prove that for each i ∈ I there is a set D_i ⊂ D̄_i also satisfying (b).

Let

∆U(a, η^1, η^2) = (∆U_1(a, η^1, η^2), ∆U_2(a, η^1, η^2), . . . , ∆U_I(a, η^1, η^2)).

The function ∆U is continuous on a compact set and hence uniformly continuous. Therefore,

∀ε > 0, ∃δ > 0 such that [ ‖a − a′‖ < δ ⇒ ‖∆U(a, η^1, η^2) − ∆U(a′, η^1, η^2)‖ < ε ].

For each i ∈ I identify a subset D_i ⊂ D̄_i with λ_{A_i}(D_i) > 0 such that ∀a_i, a′_i ∈ D_i, ‖a_i − a′_i‖ < δ. If λ_{A_i} is non-atomic such a subset exists. If λ_{A_i} is atomic, D_i could be any atom in D̄_i.

Q.E.D.

Proof of Lemma 2: If Tf = 0 has f = 0 as its unique solution, T^{−1} : TX → X is well defined: ∀y ∈ TX, there is a unique f ∈ X such that Tf = y. From the Banach Inverse Theorem, T^{−1} is bounded. Then, ∃C such that ‖T^{−1}y‖ ≤ C‖y‖ ∀y ∈ TX. Since f = T^{−1}Tf and y = Tf, then ∀f ∈ X, ‖f‖ ≤ C‖Tf‖.

Conversely, if ∀f ∈ X, ‖f‖ ≤ C‖Tf‖, then Tf = 0 implies ‖f‖ = 0 and hence f = 0.

Q.E.D.
