
Supermodular Games

Jonathan Levin
April 2006

These notes develop the theory of supermodular games. Supermodular games are those characterized by strategic complementarities: roughly, this means that when one player takes a higher action, the others want to do the same. Supermodular games are interesting for several reasons. First, they encompass many applied models. Second, they have the remarkable property that many solution concepts yield the same predictions. Finally, they tend to be analytically appealing: they have nice comparative statics properties and behave well under various learning rules. Much of the theory is due to Topkis (1979), Vives (1990) and Milgrom and Roberts (1990).

1 Monotone Comparative Statics

We take a brief detour to review monotone comparative statics, starting with the property of increasing differences (or supermodularity). For this, suppose X ⊆ R and T is some partially ordered set.

Definition 1 A function f : X × T → R has increasing differences in (x, t) if for all x′ ≥ x and t′ ≥ t,

   f(x′, t′) − f(x, t′) ≥ f(x′, t) − f(x, t).

What does this mean? If f has increasing differences in (x, t), then the incremental gain to choosing a higher x (i.e. x′ rather than x) is greater when t is higher. That is, f(x′, t) − f(x, t) is nondecreasing in t. You can check that increasing differences is symmetric: an equivalent statement is that if t′ > t, then f(x, t′) − f(x, t) is nondecreasing in x.

Note that f need not be nicely behaved, nor do X and T need to be intervals. For instance, we could have X = {0, 1} and just a few parameter values T = {0, 1, 2}. If, however, f is nicely behaved, we can re-write increasing differences in terms of derivatives.

Lemma 1 If f is twice continuously differentiable, then f has increasing differences in (x, t) if and only if t′ ≥ t implies that ∂f/∂x(x, t′) ≥ ∂f/∂x(x, t) for all x, or alternatively that ∂²f/∂x∂t(x, t) ≥ 0 for all x, t.
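Both characterizations are easy to check numerically. The following sketch (my own illustration, not from the notes) tests the increasing-differences inequality on a grid for the hypothetical function f(x, t) = xt − x², whose cross-partial equals 1 ≥ 0.

```python
import itertools

def has_increasing_differences(f, xs, ts, tol=1e-12):
    """Check f(x', t') - f(x, t') >= f(x', t) - f(x, t) for all x' > x, t' > t on a grid."""
    for x, xp in itertools.combinations(sorted(xs), 2):       # x < xp
        for t, tp in itertools.combinations(sorted(ts), 2):   # t < tp
            if f(xp, tp) - f(x, tp) < f(xp, t) - f(x, t) - tol:
                return False
    return True

# Hypothetical example: f(x, t) = x*t - x**2 has cross-partial 1 >= 0, so it passes.
f = lambda x, t: x * t - x**2
xs = [i / 10 for i in range(21)]   # grid on [0, 2]
ts = [i / 10 for i in range(11)]   # grid on [0, 1]
print(has_increasing_differences(f, xs, ts))  # True
```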
A central question in monotone comparative statics is to identify when

   x*(t) = arg max_{x ∈ X} f(x, t)

will be non-decreasing (or increasing) in t. The main result we will use is due to Topkis (1968).
Theorem 1 Let X ⊆ R be compact and T a partially ordered set. Suppose f : X × T → R has increasing differences in (x, t) and is upper semi-continuous in x.¹ Then (i) for all t, x*(t) exists and has a greatest and least element, x̄*(t) and x̲*(t); and (ii) if t′ ≥ t, then x*(t′) ≥ x*(t) in the sense that x̄*(t′) ≥ x̄*(t) and x̲*(t′) ≥ x̲*(t).
Proof. (i) Fix t, and pick an increasing sequence x1 ≤ x2 ≤ ..., with each xk ∈ x*(t), and let x̂ = lim_{k→∞} xk. (The set x*(t) is non-empty because an upper semi-continuous function on a compact set attains its maximum.) Then for all x ∈ X,

   f(xk, t) ≥ f(x, t)  ⟹  f(x̂, t) ≥ f(x, t)

by upper semi-continuity. Thus x̂ ∈ x*(t). It follows that x*(t) must have a largest (and, by the same argument, smallest) element.
(ii) Fix t′ ≥ t, and let x ∈ x*(t) and x′ ∈ x*(t′) be two maximizers. By the fact that x maximizes f(·, t),

   f(x, t) − f(min(x, x′), t) ≥ 0.

This implies (check the two cases x ≥ x′ and x ≤ x′) that:

   f(max(x, x′), t) − f(x′, t) ≥ 0,

so by increasing differences

   f(max(x, x′), t′) − f(x′, t′) ≥ 0.

Thus max(x, x′) maximizes f(·, t′). Now if we pick x = x̄*(t) and x′ = x̄*(t′), an immediate implication is that x′ ≥ x, since max(x, x′) ∈ x*(t′) and x′ is the largest element of x*(t′). A similar argument applies to the lowest maximizers.   Q.E.D.
Topkis' Theorem says that if f has increasing differences, then the set of maximizers x*(t) is increasing in t, in the sense that both the highest and lowest maximizers will not decrease if t increases.
¹ Recall that a function f : X → R is upper semi-continuous at x0 if for any ε > 0 there exists a neighborhood U(x0) such that x ∈ U(x0) implies that f(x) < f(x0) + ε. The function f is upper semi-continuous if it is upper semi-continuous at each x0 ∈ X.
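To make Theorem 1 concrete, here is a small numerical sketch (my own, using a hypothetical objective): discretize X, compute the largest and smallest maximizers for several values of t, and check that both are nondecreasing. With f(x, t) = −x² + tx the maximizer is t/2, truncated to [0, 1].

```python
def argmax_set(f, xs, t, tol=1e-9):
    """All grid points attaining the maximum of f(., t) over xs."""
    best = max(f(x, t) for x in xs)
    return [x for x in xs if f(x, t) >= best - tol]

# Hypothetical objective with increasing differences (cross-partial = 1).
f = lambda x, t: -x**2 + t * x
xs = [i / 100 for i in range(101)]   # X discretized as {0, 0.01, ..., 1}

prev_hi = prev_lo = None
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    maximizers = argmax_set(f, xs, t)
    hi, lo = max(maximizers), min(maximizers)
    if prev_hi is not None:
        assert hi >= prev_hi and lo >= prev_lo   # Topkis: monotone in t
    prev_hi, prev_lo = hi, lo
    print(t, lo, hi)
```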

2 Supermodular Games

We now introduce the notion of a supermodular game, or game with strategic complementarities.

Definition 2 The game (S1, ..., SI; u1, ..., uI) is a supermodular game if for all i:

   • Si is a compact subset of R;
   • ui is upper semi-continuous in (si, s−i);
   • ui has increasing differences in (si, s−i).
Applying Topkis' Theorem in this context shows immediately that each player's best response is increasing in the actions of the other players.

Corollary 1 Suppose (S, u) is a supermodular game, and let

   BRi(s−i) = arg max_{si ∈ Si} ui(si, s−i).

Then
(i) BRi(s−i) has a greatest and least element, B̄Ri(s−i) and B̲Ri(s−i);
(ii) if s′−i ≥ s−i, then B̄Ri(s′−i) ≥ B̄Ri(s−i) and B̲Ri(s′−i) ≥ B̲Ri(s−i).

2.1 Examples

1. (Investment Game) Suppose firms 1, ..., I simultaneously make investments si ∈ {0, 1} and payoffs are:

      ui(si, s−i) = π(Σ_{j=1}^I sj) − k   if si = 1
      ui(si, s−i) = 0                     if si = 0

   where π is increasing in aggregate investment.
2. (Bertrand Competition) Suppose firms 1, ..., I simultaneously choose prices, and that

      Di(pi, p−i) = ai − bi pi + Σ_{j≠i} dij pj

   where bi, dij ≥ 0. Then Si = R+ and πi(pi, p−i) = (pi − ci) Di(pi, p−i) has ∂²πi/∂pi ∂pj = dij ≥ 0. So the game is supermodular.

3. (Cournot Competition) Cournot duopoly is supermodular if we take

      s1 = Firm 1's quantity,
      s2 = Negative of Firm 2's quantity

   (a numerical check of this relabeling trick is sketched just after this list).

4. (Diamond Search Model) Consider a simplified version of Diamond's (1982) search model (suggested by Milgrom and Roberts, 1990). There are I agents who exert effort looking for trading partners. Let ei denote the effort of agent i, and c(ei) the cost of this effort, where c is increasing and continuous. The probability of finding a partner is ei Σ_{j≠i} ej and the cost is c(ei). Then:

      ui(ei, e−i) = ei Σ_{j≠i} ej − c(ei)

   has increasing differences in (ei, e−i), so the game is supermodular.
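As a quick check of the relabeling trick in Example 3, the sketch below (my own, with a hypothetical linear inverse demand P = a − q1 − q2 and constant marginal cost c) sets s2 = −q2 and verifies numerically that firm 1's profit then has increasing differences in (s1, s2).

```python
import itertools

a, c = 10.0, 1.0   # hypothetical demand intercept and marginal cost

def profit1(q1, q2):
    """Firm 1's Cournot profit with inverse demand P = a - q1 - q2."""
    return (a - q1 - q2 - c) * q1

def u1(s1, s2):
    """Firm 1's payoff after the relabeling s2 = -q2 from Example 3."""
    return profit1(s1, -s2)

quantities = [0.5 * i for i in range(11)]      # quantities in {0, 0.5, ..., 5}
s2_grid = sorted(-q for q in quantities)       # relabeled strategies for firm 2
ok = all(
    u1(s1p, s2p) - u1(s1, s2p) >= u1(s1p, s2) - u1(s1, s2) - 1e-9
    for s1, s1p in itertools.combinations(quantities, 2)   # s1 < s1p
    for s2, s2p in itertools.combinations(s2_grid, 2)      # s2 < s2p
)
print(ok)  # True: u1 has increasing differences in (s1, s2)
```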

2.2 Solving the Bertrand Game

Consider the Bertrand game from above, where firms 1 and 2 choose prices p1, p2. Suppose they have zero marginal costs, and that Di(pi, pj) = 1 − 2pi + pj. Then

   πi(pi, pj) = pi [1 − 2pi + pj].

Note that

   ∂πi/∂pi (pi, pj) = 1 − 4pi + pj.

Let's apply iterated strict dominance to this game.

   • Set Si^0 = [0, 1].

   • If pi < 1/4, then ∂πi/∂pi > 1 − 4·(1/4) + pj ≥ 0 ⟹ any pi < 1/4 is strictly dominated.

   • If pi > 1/2, then ∂πi/∂pi < 1 − 4·(1/2) + pj ≤ 0 ⟹ any pi > 1/2 is strictly dominated.

   • So Si^1 = [1/4, 1/2]. Note that this is the interval of best responses: BRi(pj) ∈ [1/4, 1/2].

   • Let Si^k = [s̲_k, s̄_k], where

        s̄_k = 1/4 + s̄_{k−1}/4 = 1/4 + 1/16 + s̄_{k−2}/16 = ... = 1/4 + 1/16 + ... + 1/4^k + s̄_0/4^k

        s̲_k = 1/4 + s̲_{k−1}/4 = 1/4 + 1/16 + s̲_{k−2}/16 = ... = 1/4 + 1/16 + ... + 1/4^k + s̲_0/4^k

   • So

        lim_{k→∞} s̲_k = lim_{k→∞} s̄_k = 1/3.

   • So (1/3, 1/3) is the only Nash equilibrium, and the unique rationalizable profile.
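A short sketch of this iteration (my own illustration): starting from S^0 = [0, 1], repeatedly apply the best-response map BR(p) = (1 + p)/4 to both endpoints; they converge to 1/3 from below and above.

```python
def best_response(p_other):
    """Best response in the Bertrand example: maximize p * (1 - 2p + p_other)."""
    return (1.0 + p_other) / 4.0

lo, hi = 0.0, 1.0   # S^0 = [0, 1]
for k in range(1, 11):
    lo, hi = best_response(lo), best_response(hi)
    print(f"k={k}: S^k = [{lo:.6f}, {hi:.6f}]")
# Both endpoints approach 1/3, the unique rationalizable price.
```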

3 Main Result

We now use the properties of supermodular games to show that the correspondence between rationalizable and Nash strategies in the Bertrand example is significantly more general than might appear at first glance.

Theorem 2 Let (S, u) be a supermodular game. Then the set of strategies surviving iterated strict dominance has greatest and least elements s̄ and s̲, and s̄, s̲ are both Nash equilibria.
Corollary 2 This implies the following.

1. Pure strategy NE exist in supermodular games.

2. The largest and smallest strategies compatible with iterated strict dominance, rationalizability, correlated equilibrium and Nash equilibrium are the same.

3. If a supermodular game has a unique NE, then it is dominance solvable (and lots of learning or adjustment rules, e.g. best-response dynamics, will converge to it).
Proof. As in the example, we iterate the best response mapping. Let S^0 = S, and let s^0 = (s1^0, ..., sI^0) be the largest element of S. Let si^1 = B̄Ri(s−i^0), and Si^1 = {si ∈ Si^0 : si ≤ si^1}. If si ∉ Si^1, i.e. si > si^1, then it is dominated by si^1 when s−i ∈ S−i^0, because (by increasing differences and the fact that si^1 is the biggest maximizer against s−i^0)

   ui(si, s−i) − ui(si^1, s−i) ≤ ui(si, s−i^0) − ui(si^1, s−i^0) < 0.

Note that si^1 = B̄Ri(s−i^0) and si^1 ≤ si^0.

Iterating this argument, define

   si^{k+1} = B̄Ri(s−i^k)

and

   Si^{k+1} = {si ∈ Si : si ≤ si^{k+1}}.

Now, if s^k ≤ s^{k−1}, this implies that si^{k+1} = B̄Ri(s−i^k) ≤ B̄Ri(s−i^{k−1}) = si^k. So by induction, si^k is a decreasing sequence for each i. Define:

   s̄i = lim_{k→∞} si^k.

This limit exists, and only strategies si ≤ s̄i are undominated.

Similarly, we can start with s^0 = (s1^0, ..., sI^0) equal to the smallest element of S and identify s̲.

To complete the proof, we need to show that s̄ = (s̄1, ..., s̄I) is a Nash equilibrium. Note that for all i and all si,

   ui(si^{k+1}, s−i^k) ≥ ui(si, s−i^k).

Taking limits as k → ∞,

   ui(s̄i, s̄−i) ≥ ui(si, s̄−i).   Q.E.D.
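The proof is constructive, and the iteration is easy to run numerically. The sketch below (my own, reusing the two-player Bertrand example of Section 2.2 on a discretized price grid) iterates the largest best response from the top of the strategy space and the smallest best response from the bottom; both converge to approximately (1/3, 1/3), so here the largest and smallest equilibria coincide.

```python
def br_set(payoff, grid, s_other, tol=1e-9):
    """All grid strategies maximizing payoff(own, other)."""
    best = max(payoff(s, s_other) for s in grid)
    return [s for s in grid if payoff(s, s_other) >= best - tol]

payoff = lambda p, q: p * (1.0 - 2.0 * p + q)   # Bertrand profit from Section 2.2
grid = [i / 300 for i in range(301)]            # prices discretized on [0, 1]

hi = [max(grid), max(grid)]   # start from the largest strategy profile
lo = [min(grid), min(grid)]   # start from the smallest strategy profile
for _ in range(50):
    hi = [max(br_set(payoff, grid, hi[1 - i])) for i in range(2)]
    lo = [min(br_set(payoff, grid, lo[1 - i])) for i in range(2)]
print(hi, lo)   # both close to (1/3, 1/3): the largest and smallest equilibria
```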

4 Properties of Supermodular Games

A useful property of supermodular games is that we can use monotonicity to prove comparative statics results. Our first result shows how changes in parameters that affect the marginal returns to actions shift the equilibria of a supermodular game.

A supermodular game (S, u) is indexed by t if each player's payoff function is indexed by t ∈ T, some ordered set, and for all i, ui(si, s−i, t) has increasing differences in (si, t).
Proposition 1 Suppose (S, u) is a supermodular game indexed by t. The largest and smallest Nash equilibria are increasing in t.

Proof. Let B̄R(s, t) : S × T → S be the largest best response function as defined above for the game with parameter t. Then B̄Ri(s, t) is i's largest best response to s−i given parameter value t, and it is nondecreasing in s and t by Topkis' Theorem. Thus B̄R(s, t) is nondecreasing. Every Nash equilibrium satisfies B̄R(s, t) ≥ s, and moreover s̄(t) = sup{s : B̄R(s, t) ≥ s} is the largest fixed point of B̄R(·, t) and hence the largest Nash equilibrium (formally this follows from Tarski's Fixed Point Theorem). Since B̄R(s, ·) is nondecreasing, s̄ is nondecreasing. A similar argument proves the result for the smallest Nash equilibrium.   Q.E.D.
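A small numerical sketch of Proposition 1 (my own, again using a discretized Bertrand game but now indexing demand by a hypothetical intercept t, so that profits have increasing differences in (pi, t)): the largest equilibrium, computed by iterating the largest best response from the top, is nondecreasing in t.

```python
def largest_equilibrium(t, grid, iters=60):
    """Iterate the largest best response from the top of the grid, for intercept t."""
    payoff = lambda p, q: p * (t - 2.0 * p + q)   # demand t - 2*p_i + p_j, zero cost
    def br_max(q):
        best = max(payoff(p, q) for p in grid)
        return max(p for p in grid if payoff(p, q) >= best - 1e-9)
    s = [max(grid), max(grid)]
    for _ in range(iters):
        s = [br_max(s[1]), br_max(s[0])]
    return s

grid = [i / 200 for i in range(401)]   # prices discretized on [0, 2]
prev = None
for t in [0.8, 1.0, 1.2, 1.4]:
    eq = largest_equilibrium(t, grid)
    if prev is not None:
        assert all(a >= b for a, b in zip(eq, prev))   # equilibrium rises with t
    prev = eq
    print(t, eq)
```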
Because there is positive feedback between the strategic choices of different players in a supermodular game, there are often multiple equilibria. The second property we consider is a welfare theorem that is particularly useful when considering such games.

A supermodular game (S, u) has positive spillovers if for all i, ui(si, s−i) is increasing in s−i.

Proposition 2 Suppose (S, u) is a supermodular game with positive spillovers. Then the Nash equilibria are ordered in accordance with Pareto preference.

This result implies that the largest Nash equilibrium is Pareto-preferred among the set of all Nash equilibria. Nevertheless, it need not be Pareto optimal among the set of all strategy profiles.
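The investment game of Example 1 illustrates Proposition 2. The sketch below (my own, with the hypothetical choices π(m) = m, k = 1.5 and I = 3) enumerates the pure equilibria: everyone investing and no one investing are both equilibria, and the larger one gives every player a strictly higher payoff.

```python
from itertools import product

I, k = 3, 1.5
pi = lambda total: float(total)   # hypothetical increasing return to aggregate investment

def payoff(i, profile):
    return pi(sum(profile)) - k if profile[i] == 1 else 0.0

def is_nash(profile):
    for i in range(I):
        for dev in (0, 1):
            alt = list(profile)
            alt[i] = dev
            if payoff(i, tuple(alt)) > payoff(i, profile) + 1e-12:
                return False
    return True

equilibria = [p for p in product((0, 1), repeat=I) if is_nash(p)]
for eq in equilibria:
    print(eq, [payoff(i, eq) for i in range(I)])
# (0, 0, 0) gives everyone 0 and (1, 1, 1) gives everyone 1.5:
# the equilibria are Pareto ranked, with the largest one on top.
```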
We have now seen that the greatest and least equilibria in a supermodular game are pure strategy Nash equilibria, and that it is possible to obtain nice comparative statics results for these equilibria. But what about mixed strategy equilibria? Echenique and Edlin (2003) show that when a supermodular game has mixed strategy equilibria, these equilibria are always unstable under a variety of dynamic adjustment processes, thus justifying a focus on pure strategy equilibria.

Their idea can be seen using Battle of the Sexes as an example.
            B         F
   B      2, 1      0, 0
   F      0, 0      1, 2

Recall that Battle of the Sexes has two pure Nash equilibria, (B, B) and (F, F), and a mixed equilibrium (2/3 B + 1/3 F, 1/3 B + 2/3 F). To make this a supermodular game, we need to define an order on the strategy sets. Let F >i B for both players. Then ui(si, s−i) has increasing differences in (si, s−i).

In the mixed equilibrium it is crucial that player 1 believes that player 2 is playing exactly 1/3 B + 2/3 F. If player 1 believes player 2 will play F with probability 2/3 + ε, even for ε > 0 small, then player 1 will strictly prefer F. Similarly, if player 2 believes 1 will play F with any probability above 1/3, player 2 will strictly prefer F.
Now, imagine the players play repeatedly, with player 1 initially believing 2 will play F with probability 2/3 + ε and player 2 initially believing 1 will play F with probability 1/3 + ε. Both will play F. If they then adjust their beliefs so they put more weight on their opponents playing F (I'm purposely being a little loose about the dynamic adjustment process here), they will play F again in the next period, and so on until they always play F and have moved away from mixed strategy beliefs.

This situation is not contrived. The more general point is that so long as player i adjusts his beliefs toward j playing F when j does play F, and so long as i's response to this change is to play F more often, then any move toward (F, F) (or toward (B, B)) and away from the mixed equilibrium is self-reinforcing, and many reasonable dynamic processes will move away from the mixed equilibrium toward a pure equilibrium.²
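The instability is easy to see in a simulation. Here is a minimal sketch (my own; a simple fictitious-play-style belief adjustment, one of many reasonable dynamics) started with beliefs slightly above the mixed-equilibrium probabilities: both players play F in every period and their beliefs drift toward (F, F), never back to the mixed profile.

```python
# Battle of the Sexes payoffs, U[own action][other's action], with actions 0 = B, 1 = F.
U1 = [[2.0, 0.0], [0.0, 1.0]]   # player 1 (row)
U2 = [[1.0, 0.0], [0.0, 2.0]]   # player 2 (column)

def best_reply(U, prob_other_F):
    """Best action given the believed probability that the opponent plays F."""
    exp_B = U[0][0] * (1 - prob_other_F) + U[0][1] * prob_other_F
    exp_F = U[1][0] * (1 - prob_other_F) + U[1][1] * prob_other_F
    return 1 if exp_F > exp_B else 0

eps = 0.01
belief1 = 2 / 3 + eps   # player 1's belief that player 2 plays F
belief2 = 1 / 3 + eps   # player 2's belief that player 1 plays F
plays_F = [0, 0]
for t in range(1, 201):
    a1, a2 = best_reply(U1, belief1), best_reply(U2, belief2)
    plays_F[0] += a1
    plays_F[1] += a2
    # Belief updating toward observed play (fictitious-play style).
    belief1 = (belief1 * t + a2) / (t + 1)
    belief2 = (belief2 * t + a1) / (t + 1)
print(plays_F, belief1, belief2)   # F is played every period; beliefs move toward 1
```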

5 Comments
1. (Extensions) These results extend to games where players have multidimensional strategy spaces. If Si ⊆ R^n, we need two further assumptions. First, for all i, Si must be a complete sublattice; second, for all i, ui must be supermodular in si as well as having increasing differences in (si, s−i). For precise definitions, see the Monotone Comparative Statics handout. The results also extend to the case where ui satisfies the single crossing property in (si, s−i), as opposed to the stronger assumption of increasing differences (see Milgrom and Shannon, 1994).
2. (Comparing Fixed Points) Milgrom and Roberts (1994) use similar arguments to derive comparative statics for models where equilibria are the solutions to some equation f(x, t) = 0. Roughly, they show that if f is increasing in t and continuous (in a weak sense) in x, then the largest solution of f(x, t) = 0 is increasing in t. Thus their results provide analogues of Proposition 1 for another class of models.

² Because our comparative statics results refer to the highest and lowest equilibria, you might also ask what we should make of interior equilibria. Echenique (2002) uses a related stability idea to argue that under reasonable dynamic adjustment processes, our comparative statics predictions should carry through even if players don't always end up at the lowest (or highest) equilibrium.

References

[1] Bulow, J., J. Geanakoplos and P. Klemperer (1985), "Multimarket Oligopoly: Strategic Substitutes and Complements," J. Pol. Econ., 93, 488–511.

[2] Diamond, P. (1982), "Aggregate Demand Management in Search Equilibrium," J. Pol. Econ.

[3] Echenique, F. (2002), "Comparative Statics by Adaptive Dynamics and the Correspondence Principle," Econometrica, 70(2).

[4] Echenique, F. and A. Edlin (2003), "Mixed Strategy Equilibria in Games of Strategic Complements are Unstable," Caltech Working Paper.

[5] Milgrom, P. and J. Roberts (1990), "Rationalizability and Learning in Games with Strategic Complementarities," Econometrica, 58, 1255–1278.

[6] Milgrom, P. and J. Roberts (1994), "Comparing Equilibria," American Economic Review.

[7] Milgrom, P. and C. Shannon (1994), "Monotone Comparative Statics," Econometrica.

[8] Topkis, D. (1968), Ordered Optimal Decisions, Ph.D. Dissertation, Stanford University.

[9] Topkis, D. (1979), "Equilibrium Points in Non-Zero Sum n-Person Submodular Games," SIAM J. of Control and Optimization, 17(6), 773–787.

[10] Topkis, D. (1998), Supermodularity and Complementarity, Princeton University Press.

[11] Vives, X. (1990), "Nash Equilibrium with Strategic Complementarities," J. Math. Econ., 19, 305–321.
