Introduction
June 30, 1993
History
Not long ago, the scoffer could say that econometrics and game theory were like Japan
and Argentina. In the late 1940s both disciplines and both economies were full of promise,
poised for rapid growth and ready to make a profound impact on the world. We all know what
happened to the economies of Japan and Argentina. Of the disciplines, econometrics became
an inseparable part of economics, while game theory languished as a subdiscipline,
interesting to its specialists but ignored by the profession as a whole. The specialists
in game theory were generally mathematicians, who cared about definitions and proofs
rather than applying the methods to economic problems. Game theorists took pride in the
diversity of disciplines to which their theory could be applied, but in none had it become
indispensable.
In the 1970s, the analogy with Argentina broke down. At the same time
that Argentina was inviting back Juan Peron, economists were beginning to discover what
they could achieve by combining game theory with the structure of complex economic
situations. Innovation in theory and application was especially useful for situations with
asymmetric information and a temporal sequence of actions, the two major themes of this
book. During the 1980s, game theory became dramatically more important to mainstream
economics. Indeed, it seems to be swallowing up microeconomics, just as econometrics has
swallowed up empirical economics.
Game theory is generally considered to have begun with the publication
of von Neumann & Morgenstern's The Theory of Games and Economic Behaviour in
1944. Although very little of the game theory in that thick volume is relevant to the
present book, it introduced the idea that conflict could be mathematically analyzed and
provided the terminology with which to do it. The development of "The Prisoner's
Dilemma" (Tucker [unpub]) and Nash's papers on the definition and existence of
equilibrium (Nash [1950b, 1951]) laid the foundations for modern noncooperative game
theory. At the same time, cooperative game theory reached important results in papers by
Nash (1950a) and Shapley (1953b) on bargaining games and Gillies (1953) and Shapley
(1953a) on the core.
By 1953 virtually all the game theory that was to be used by economists
for the next 20 years had been developed. Until the mid-1970s, game theory remained an
autonomous field with little relevance to mainstream economics, important exceptions being
Schelling's 1960 book, The Strategy of Conflict, which introduced the focal
point, and a series of papers (of which Debreu & Scarf [1963] is typical) that showed
the relationship of the core of a game to the general equilibrium of an economy.
In the 1970s, information became the focus of many models as economists
started to put emphasis on individuals who act rationally but with limited information.
When attention was given to individual agents, the time ordering in which they carried out
actions began to be explicitly incorporated. With this addition, games had enough
structure to reach interesting and non-obvious results. Important "toolbox"
references include the earlier but long unapplied articles of Selten (1965) (on
perfectness) and Harsanyi (1967) (on incomplete information), the papers by Selten (1975)
and Kreps & Wilson (1982b) extending perfectness, and the article by Kreps, Milgrom,
Roberts & Wilson (1982) on incomplete information in repeated games. Most of the
applications in the present book were developed after 1975, and the flow of research shows
no sign of diminishing.
Game Theory's Method
Game theory has been successful in recent years because it fits so well into the new
methodology of economics. In the past, macroeconomists started with broad behavioral
relationships like the consumption function, and microeconomists often started with
precise but irrational behavioral assumptions such as sales maximization. Now all
economists start with primitive assumptions about the utility functions, production
functions, and endowments of the actors in the models (to which must often be added the
available information). The reason is that it is usually easier to judge whether primitive
assumptions are sensible than to evaluate high-level assumptions about behavior. Having
accepted the primitive assumptions, the modeller figures out what happens when the actors
maximize their utility subject to the constraints imposed by their information,
endowments, and production functions. This is exactly the paradigm of game theory: the
modeller assigns payoff functions and strategy sets to his players, and sees what happens
when they pick strategies to maximize their payoffs. The approach is a combination of the
"Maximization Subject to Constraints" of MIT and the "No Free Lunch"
of Chicago. We shall see, however, that game theory relies only on the spirit of these two
approaches: it has moved away from maximization by calculus, and inefficient allocations
are common. The players act rationally, but the consequences are often bizarre, which
makes application to a world of intelligent men and ludicrous outcomes appropriate.
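This paradigm can be illustrated with a small computation. The sketch below is not from the text: it assumes illustrative payoff numbers for the Prisoner's Dilemma mentioned earlier, assigns payoff functions and strategy sets to two players, and checks which strategy profiles survive unilateral deviation, which is the equilibrium idea developed in later chapters.

```python
from itertools import product

# Illustrative Prisoner's Dilemma payoffs (numbers are assumptions,
# not from the text). Strategies are Deny and Confess;
# payoffs[(s1, s2)] = (payoff to player 1, payoff to player 2).
DENY, CONFESS = 0, 1
payoffs = {
    (DENY, DENY): (-1, -1),
    (DENY, CONFESS): (-10, 0),
    (CONFESS, DENY): (0, -10),
    (CONFESS, CONFESS): (-8, -8),
}

def pure_nash_equilibria(payoffs, strategies=(DENY, CONFESS)):
    """Return the strategy profiles from which no player gains
    by deviating alone, holding the other's strategy fixed."""
    equilibria = []
    for s1, s2 in product(strategies, repeat=2):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)
        best2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

print(pure_nash_equilibria(payoffs))  # → [(1, 1)], i.e. both Confess
```

Note that the unique equilibrium, mutual confession, is inefficient for the players: rational action, bizarre consequence, as the paragraph above says.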
Exemplifying Theory
Along with the trend towards primitive assumptions and maximizing behavior has been a
trend toward simplicity. I called this "no-fat modelling" in the first edition,
but the term "exemplifying theory" from Fisher (1989) is more apt. This has also
been called "modelling by example" or "MIT-style theory". A more
smoothly flowing name, but immodest in its double meaning, is "exemplary
theory". The heart of the approach is to discover the simplest assumptions needed to
generate an interesting conclusion--- the starkest, barest model that has the desired
result. This desired result is the answer to some relatively narrow question. Could
education be just a signal of ability? Why might bid-ask spreads exist? Is predatory
pricing ever rational?
The modeller starts with a vague idea such as "People go to
college to show they're smart." He then models the idea formally in a simple way. The
idea might survive intact, it might be found formally meaningless, it might survive with
qualifications, or its opposite might turn out to be true. The modeller then uses the
model to come up with precise propositions, whose proofs may tell him still more about the
idea. After the proofs, he goes back to thinking in words, trying to understand more than
whether the proofs are mathematically correct.
Good theory of any kind uses Occam's razor, which cuts out superfluous
explanations, and the ceteris paribus assumption, which restricts attention to
one issue at a time. Exemplifying theory goes a step further by providing, in the theory,
only a narrow answer to the question. As Fisher says, "Exemplifying theory does not
tell us what must happen. Rather it tells us what can happen." In
the same vein, at Chicago I have heard the style called "Stories that Might be
True." This is not destructive criticism if the modeller is modest, since there are
also a great many "Stories that Can't be True". The aim should be to come up
with one or more stories that might apply to a particular situation, and then try to sort
out which story gives the best explanation. In this, economics combines the deductive
reasoning of mathematics with the analogical reasoning of law.
A critic of the mathematical approach in biology has compared it to an
hourglass (Slatkin [1980]). First, a broad and important problem is introduced. Second, it
is reduced to a very special but tractable model that hopes to capture its essence.
Finally, in the most perilous part of the process, the results are expanded to apply to
the original problem. Exemplifying theory does the same thing.
The process is one of setting up "If-Then" statements,
whether in words or symbols. To apply such statements, their premises and conclusions need
to be verified, either by casual or careful empiricism. If the required assumptions seem
contrived or the assumptions and implications contradict reality, the idea should be
discarded. If "reality" is not immediately obvious and data is available,
econometric tests may help show whether the model is valid. Predictions can be made about
future events, but that is not usually the primary motivation: most of us are more
interested in explaining and understanding than predicting.
The method just described is close to how, according to Lakatos (1976),
mathematical theorems are developed. It contrasts sharply with the common view that the
researcher starts with a hypothesis and proves or disproves it. Instead, the process of
proof helps show how the hypothesis should be formulated.
An important part of exemplifying theory is what Kreps & Spence
(1984) have called "blackboxing": treating unimportant subcomponents of a model
in a cursory way. The game "Entry for Buyout" of Section 14.4, for example, asks
whether a new entrant would be bought out by the industry's incumbent producer, something
that depends on duopoly pricing and bargaining. Both pricing and bargaining are
complicated games in themselves, but if the modeller does not wish to deflect attention to
those topics, he can use the simple Nash and Cournot solutions to those games and go on to
analyze buyout. If the entire focus of the model were duopoly pricing, then using the
Cournot solution would be open to attack, but as a simplifying assumption, rather than one
that "drives" the model, it is acceptable.
Despite the style's drive towards simplicity, a certain amount of
formalism and mathematics is required to pin down the modeller's thoughts. Exemplifying
theory treads a middle path between mathematical generality and nonmathematical vagueness.
Practitioners of both alternatives will complain that exemplifying theory is too narrow. But beware of
calls for more "rich", "complex", or "textured"
descriptions; these often lead to theory which is either too incoherent or too
incomprehensible to be applied to real situations.
Some readers will think that exemplifying theory uses too little
mathematical technique, but others, especially non-economists, will think it uses too
much. Intelligent laymen have objected to the amount of mathematics in economics since at
least the 1880s, when George Bernard Shaw said that as a boy he (1) let someone assume
that a = b, (2) permitted several steps of algebra, and (3) found he had
accepted a proof that 1 = 2. Forever after, Shaw distrusted assumptions and algebra.
Despite the effort to achieve simplicity (or perhaps because of it), mathematics is
essential to exemplifying theory. The conclusions can be retranslated into words, but
rarely can they be found by verbal reasoning. The economist Wicksteed put this nicely in
his reply to Shaw's criticism:
Mr Shaw arrived at the sapient conclusion that there "was a screw loose
somewhere"--- not in his own reasoning powers, but---"in the algebraic
art"; and thenceforth renounced mathematical reasoning in favour of the literary
method which enables a clever man to follow equally fallacious arguments to equally absurd
conclusions without seeing that they are absurd. This is the exact difference
between the mathematical and literary treatment of the pure theory of political economy.
(Wicksteed [1885] p. 732)
In exemplifying theory, one can still rig a model to achieve a wide
range of results, but it must be rigged by making strange primitive assumptions. Everyone
familiar with the style knows that the place to look for the source of suspicious results
is the description at the start of the model. If that description is not clear, the reader
deduces that the model's counterintuitive results arise from bad assumptions concealed in
poor writing. Clarity is therefore important, and the somewhat inelegant
Players-Actions-Payoffs presentation used in this book is useful not only for helping the
writer, but for persuading the reader.
This Book's Style
Substance and style are closely related. The difference between a good model and a bad
one lies not just in whether the essence of the situation is captured, but in how much froth
covers the essence. In this book, I have tried to make the games as simple as possible.
They often, for example, allow each player a choice of only two actions. Our intuition
works best with such models, and continuous actions are technically more troublesome.
Other assumptions, such as zero production costs, rely on trained intuition. To the
layman, the assumption that output is costless seems very strong, but a little experience
with these models teaches that it is the constancy of the marginal cost that usually
matters, not its level.
What matters more than what a model says is what we understand it to
say. Just as an article written in Sanskrit is useless to me, so is one that is
excessively mathematical or poorly written, no matter how rigorous it seems to the author.
Such an article leaves me with some new belief about its subject, but that belief is not
sharp, or precisely correct. Overprecision in sending a message creates imprecision when
it is received, because precision is not clarity. The result of an attempt to be
mathematically precise is sometimes to overwhelm the reader, in the same way that someone
who requests the answer to a simple question in the discovery process of a lawsuit is
overwhelmed when the other side responds with seventy boxes of tangentially related
documents. The quality of the author's input should be judged not by some abstract
standard but by the output in terms of reader processing cost and understanding.
In this spirit, I have tried to simplify the structure and notation of
models while giving credit to their original authors, but I must ask pardon of anyone
whose model has been oversimplified or distorted, or whose model I have inadvertently
replicated without crediting them. In trying to be understandable, I have taken risks with
respect to accuracy. My hope is that the impression left in the readers' minds will be
more accurate than if a style more cautious and obscure had left them to devise their own
errors.
Readers may be surprised to find occasional references to newspaper and
magazine articles in this book. I hope these references will be reminders that models
ought eventually to be applied to specific facts, and that a great many interesting
situations are waiting for our analysis. The principal-agent problem is not found only in
back issues of Econometrica: it can be found on the front page of today's Wall
Street Journal if one knows what to look for.
I make the occasional joke here and there, and game theory is a subject
intrinsically full of paradox and surprise. I want to emphasize, though, that I take game
theory seriously, in the same way that Chicago economists like to say that they take price
theory seriously. It is not just an academic artform: people do choose actions
deliberately and trade off one good against another, and game theory will help you
understand how they do that. If it did not, I would not advise you to study such a
difficult subject; there are much more elegant fields in mathematics, from an aesthetic
point of view. As it is, I think it is important that every educated person have some
contact with the ideas in this book, just as they should have some idea of the basic
principles of price theory.
I have been forced to exercise more discretion over definitions than I
had hoped. Many concepts have been defined on an article-by-article basis in the
literature, with no consistency and little attention to euphony or usefulness. Other
concepts, such as "asymmetric information" and "incomplete
information," have been considered so basic as to not need definition, and hence have
been used in contradictory ways. I use existing terms whenever possible, and synonyms are
listed.
I have often named the players Smith and Jones so that the reader's
memory will be less taxed in remembering which is a player and which is a time period. I
hope also to reinforce the idea that a model is a story made precise; we begin with Smith
and Jones, even if we quickly descend to s and j. Keeping this in mind,
the modeller is less likely to build mathematically correct models with absurd action
sets, and his descriptions are more pleasant to read. In the same vein, labelling a curve
"U = 83," sacrifices no generality: the phrase "U = 83
and U = 66" has virtually the same content as "U = a and U = b,
where a > b,"
but uses less short-term memory.
A danger of this approach is that readers may not appreciate the
complexity of some of the material. While journal articles make the material seem harder
than it is, this approach makes it seem easier (a statement that can be true even if
readers find this book difficult). The better the author does his job, the worse this
problem becomes. Keynes (1933) says of Alfred Marshall's Principles,
The lack of emphasis and of strong light and shade, the sedulous rubbing away of rough
edges and salients and projections, until what is most novel can appear as trite, allows
the reader to pass too easily through. Like a duck leaving water, he can escape from this
douche of ideas with scarce a wetting. The difficulties are concealed; the most ticklish
problems are solved in footnotes; a pregnant and original judgement is dressed up as a
platitude.
This book may well be subject to the same criticism, but I have tried to face up to
difficult points, and the problems at the end of each chapter will help to avoid making
the reader's progress too easy. Only a certain amount of understanding can be expected
from a book, however. The efficient way to learn how to do research is to start doing it,
not to read about it, and after reading this book, if not before, many readers will want
to build their own models. My purpose here is to show them the big picture, to help them
understand the models intuitively and give them a feel for the modelling process.
NOTES
Perhaps the most important contribution of von Neumann & Morgenstern (1944)
is the theory of expected utility (see Section 2.3). Although they developed the theory
because they needed it to find the equilibria of games, it is today heavily used in all
branches of economics. In game theory proper, they contributed the framework to describe
games, and the concept of mixed strategies (see Section 3.1).
On method, see the dialogue by Lakatos (1976), or Davis & Hersh (1981),
Chapter 6 of which is a shorter dialogue in the same style. M. Friedman (1953) is the
classic essay on a different methodology: evaluating a model by testing its predictions.
Kreps & Spence (1984) is a discussion of exemplifying theory.
Because style and substance are so closely linked, how one writes is important.
For advice on writing, see McCloskey (1985, 1987) (on economics), Basil Blackwell (1985)
(on books), Bowersock (1985) (on footnotes), Fowler (1965), Fowler & Fowler (1949),
Halmos (1970) (on mathematical writing), Strunk & White (1959), Weiner (1984), and
Wydick (1978).
A Fallacious Proof that 1 = 2. Suppose that a = b.
Then ab = b² and ab - b² = a² - b². Factoring the last equation gives us b(a - b)
= (a + b)(a - b), which can be simplified to b = a + b. But then, using our initial
assumption, b = 2b and 1 = 2. (The fallacy is division by zero.)