          \documentstyle[12pt,epsf] {article}
\parskip 10pt
\reversemarginpar
   \topmargin  -.4in
  \oddsidemargin .25in
  \textheight  8.7in
 \textwidth 6in  
    
         \begin{document}
 
  \parindent 24pt
\parskip 10pt



\setcounter{section}{13}
 \setcounter{page}{386} 
  
\section*{  13 Pricing }  
\noindent
 June 28, 1993. April 18, 1999.

 

 \subsection{Quantities as Strategies:   Cournot Equilibrium
Revisited} %13.1

 \noindent
 Chapter 13 is about how firms with market power set prices.  Section  13.1 generalizes the Cournot Game of section 3.5, in which   two firms choose the
quantities they sell, while section 13.2 sets out the Bertrand model
of firms choosing   prices.  Both the Bertrand and Cournot models are then expanded to allow for differentiated products.   
  Section
13.3 goes back to the origins of product differentiation, and  develops two Hotelling location models. Section 13.4 shows how to do comparative statics in games, using the  differentiated Bertrand  model  as an example  and  supermodularity and the implicit function theorem  as tools.    Section 13.5  shows that even if a firm is a
monopolist, if it sells a durable good it suffers competition from
its future self.


\bigskip 
 \noindent 
 {\bf Cournot Behavior with General Cost and Demand Functions}

\noindent
 In the next few sections, sellers compete against each other  while moving simultaneously.  We will start by generalizing the  Cournot
Game of section 3.5 from linear demand and zero costs to a
wider class of functions. The two players are firms Apex and
Brydox, and their strategies are their choices of the  quantities $q_a$ and $q_b$.  The
payoffs are based on the total cost functions, $c(q_a)$ and $c(q_b)$,
and the demand function, $p(q)$, where $q=q_a + q_b$.  This
specification  says that only the sum of the outputs  affects 
the price. The implication is that the firms produce an identical
product, because whether it is Apex or Brydox that produces an
extra unit, the effect on  the price is the same.

 Let us take the point of view of Apex. In the Cournot-Nash
analysis, Apex chooses its output of $q_a$ for a given level of
$q_b$ as if its choice did not affect $q_b$. From its point of view,
$q_a$ is a function of $q_b$, but $q_b$ is exogenous. Apex sees the
effect of its output on price as
 \begin{equation} \label{e13.1}
  \frac{ \partial p}{\partial q_a}= \frac{dp}{ d q}
\frac{\partial q}{ \partial q_a}= \frac{dp}{ d q}. 
  \end{equation} 
  Apex's payoff function is 
 \begin{equation} \label{e13.2}
\pi_a = p(q) q_a - c(q_a).
  \end{equation}
  To find Apex' reaction function, we  differentiate with
respect to its strategy to obtain
  \begin{equation} \label{e13.3}
 \frac{d\pi_a}{dq_a} = p + \frac{dp}{dq} q_a - \frac{dc}{d q_a} = 0,
  \end{equation}
which implies
 \begin{equation} \label{e13.4}
 q_a = \frac{\frac{dc}{dq_a}  - p}{\frac{dp}{dq}},
  \end{equation}
or, simplifying the notation,
 \begin{equation} \label{e13.5}
 q_a = \frac{c'  - p}{p'}. 
  \end{equation}
 If particular functional forms for $p(q)$ and $c(q_a)$ are
available, equation (\ref{e13.5}) can be solved to find $q_a$ as a
function of $q_b$.  More generally, to find the change in Apex'
best response for an exogenous change in Brydox's output, 
differentiate (\ref{e13.5}) with respect to $q_b$, remembering that
$q_b$ exerts not only a direct effect, but possibly an indirect
effect on $q_a$. 
  \begin{equation} \label{e13.6}
  \frac{d q_a}{d q_b} = \frac{(p-c')(p'' + p'' \frac{d q_a}{d
q_b})}{p'^2} + \frac{ c'' \frac{d q_a}{d q_b} - p' - p' \frac{d q_a}{d
q_b}}{p'}. 
  \end{equation}
 Equation (\ref{e13.6}) can be solved for $\frac{d q_a}{d q_b}$ to obtain
the slope of the reaction function, 
  \begin{equation} \label{e13.7}
  \frac{d q_a}{d q_b} = \frac{(p-c')p''- p'^2} {2p'^2 - c''p' - (p-c')p''}.
  \end{equation}
 If both costs and demand are linear, as   in section 3.5,
then $c''=0$ and $p''=0$, so equation (\ref{e13.7}) becomes 
  \begin{equation} \label{e13.8}
  \frac{d q_a}{d q_b} = - \frac{p'^2 }{2p'^2 } = -\frac{1}{2}.
  \end{equation}
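The linear case can be checked numerically. The sketch below is my own illustration, not part of the text: it recovers Apex's best response by brute-force search under $p = 120 - q$ and zero costs, and confirms the slope of $-\frac{1}{2}$ in equation (\ref{e13.8}).

```python
# A numerical sketch (not from the text): recover Apex's best response by
# grid search under p = 120 - q and zero costs, then check that the
# reaction function has slope -1/2, as in equation (13.8).

def best_response(q_b):
    """Apex's profit-maximizing q_a against a fixed q_b, by grid search."""
    best_q, best_profit = 0.0, float("-inf")
    for i in range(12001):            # q_a from 0 to 120 in steps of 0.01
        q_a = i / 100
        price = max(120.0 - q_a - q_b, 0.0)
        profit = price * q_a          # zero production costs
        if profit > best_profit:
            best_q, best_profit = q_a, profit
    return best_q

# Slope of the reaction function between q_b = 20 and q_b = 40:
slope = (best_response(40.0) - best_response(20.0)) / (40.0 - 20.0)
print(slope)   # -0.5
```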
 The general model faces two problems that did not arise in the
linear model: nonuniqueness and nonexistence. If demand is concave
and costs are convex, which implies that $p'' < 0$ and $c''> 0$, then all is
well as far as existence goes. Since price is greater than marginal
cost ($p >c'$), equation (\ref{e13.7}) tells us that the reaction
functions are downward sloping, because $2p'^2 - c''p' - (p-c')p''$
is positive and both $(p-c')p''$ and $-p'^2$ are negative.  If the
reaction curves are downward sloping, they cross and an equilibrium
exists, as was shown in figure 3.1 for the linear case represented by
equation (\ref{e13.8}).    We usually do assume that costs are at least weakly convex, since that is the result of diminishing or constant returns, but there is no reason to believe that demand is either concave or convex.  If the demand curves are not linear, the
contorted reaction functions of equation (\ref{e13.7}) might give rise
to multiple Cournot equilibria as in figure 13.1. 


\begin{center}
  {\bf Figure 13.1}  Multiple Cournot-Nash Equilibria 
\end{center} 

\epsfysize=3in

 
\epsffile{/Users/erasmuse/AAANewChapters/Figures/f_12.1.eps}



  If demand is convex or costs are concave, so   $p'' > 0$ or
$c''<0$, the reaction functions can be upward sloping, in which case
they might never cross and no equilibrium would exist.  The problem
can also be seen from Apex' payoff function, equation (\ref{e13.2}).
If $p(q)$ is convex, the payoff function might not be concave, in
which case standard maximization techniques break down.  The problems
of the general Cournot model teach a lesson to modellers: sometimes simple assumptions  such as linearity  generate atypical results.

 
\bigskip
\noindent
{\bf Many Oligopolists}

\noindent
 Let us return to the simpler game in which production costs are zero
and demand is linear. For concreteness, we will use the particular
inverse demand function
 \begin{equation} \label{e13.9}
  p(q) = 120 - q. 
 \end{equation}
 Using (\ref{e13.9}), the  payoff function (\ref{e13.2}) becomes
 \begin{equation} \label{e13.10}
\pi_a  = 120q_a - q_a^2 - q_b q_a.
  \end{equation}
  In section 3.5,  firms picked outputs of 40 apiece
given demand function (\ref{e13.9}). This  generated a price of 40.  With
$n$ firms instead of two, the demand function is
  \begin{equation} \label{e13.11}
 p \left(\sum_{i=1}^n q_i \right) = 120 - \sum_{i=1}^n q_i,
  \end{equation}
and  firm $j$'s payoff function is
 \begin{equation} \label{e13.12}
\pi_j  = 120q_j - q_j^2 - q_j\sum_{i \neq j} q_i.
  \end{equation}
  Differentiating $j$'s payoff function with respect to $q_j$  yields
 \begin{equation} \label{e13.13}
\frac{d\pi_j}{dq_j}  = 120 - 2q_j - \sum_{i \neq j} q_i= 0.
  \end{equation}
The first step in finding  the equilibrium is to  guess that it is symmetric, so
that $q_j = q_i$ for $i = 1,\ldots, n$. This is  an educated guess, since every
player faces a first-order condition like (\ref{e13.13}).  By 
symmetry, equation (\ref{e13.13}) becomes $120 - (n+1)q_j = 0$, so that
   \begin{equation}\label {e13.14}
q_j = \frac{120}{n+1}.
\end{equation}
   Consider several different values for $n$. If $n=1$, then $q_j =
60$, the monopoly optimum; and if $n=2$ then $q_j = 40$, the
Cournot output found in section 3.5.  If $n = 5$, $q_j = 20$;
and as $n$ rises, individual output shrinks to zero. Moreover, total
output, $ nq_j =\frac{120n}{n+1}$, gradually approaches 120, the
competitive output, and the market price falls to zero, the marginal
cost of production.  As the number of firms increases, profits fall.
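These comparative results are easy to tabulate. The following sketch is my own illustration, not the text's: it evaluates equation (\ref{e13.14}) for several values of $n$.

```python
# Tabulating the symmetric n-firm Cournot outcome of equation (13.14):
# q_j = 120/(n+1), with inverse demand p = 120 - q and zero costs.

def cournot_symmetric(n):
    q_j = 120.0 / (n + 1)    # per-firm output
    total = n * q_j          # industry output, 120n/(n+1)
    price = 120.0 - total    # market price
    profit = price * q_j     # per-firm profit, given zero costs
    return q_j, total, price, profit

for n in (1, 2, 5, 100):
    q_j, total, price, profit = cournot_symmetric(n)
    print(f"n={n:3d}  q_j={q_j:6.2f}  total={total:6.2f}  p={price:6.2f}  profit={profit:8.2f}")
```

The monopoly ($n=1$), duopoly ($n=2$), and five-firm rows reproduce the outputs of 60, 40, and 20 given in the text, and the $n=100$ row shows price nearly at marginal cost.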

 
\bigskip
\noindent
{\bf Conjectural Variation} 

\noindent
 Conjectural variation, an equilibrium concept different in flavor
from any that has yet appeared in this book, is a way to quantify
the degree of cooperation between oligopolists.  Let us continue to
specify the strategies as quantities.  In a  Nash
equilibrium, no player wants to deviate, and his beliefs
about how the other players would behave are confirmed whatever
nodes are reached.  Under conjectural variation, a player believes,
for  reasons outside the model, that if he deviated, the other players would
deviate in specified ways. This should seem quite an unnatural idea to anyone who has read this far in the book, since it violates  the basic assumptions of Bayesian games and it is rather hazy about what is happening in this simultaneous-move game. The idea  may be clearer in an example.
Returning to the two-player model, we can use equation (\ref{e13.3}) to
write Apex' self-perceived first order condition as 
 \begin{equation}\label{e13.15}
  \frac{d\pi_a}{dq_a} = 
 p + \left( \frac{dp}{dq} \right) \left( \frac{dq }{d q_a} \right)
q_a - \frac{dc}{dq_a} = 0.
  \end{equation}
 The difference between the first-order conditions (\ref{e13.3}) and
(\ref{e13.15}) is that (\ref{e13.15}) contains
 \begin{equation}\label{e13.16}
 \frac{dq }{d q_a} = 1 + \frac{d q_b }{d q_a}.
  \end{equation}
  Equation (\ref{e13.16}) says that the expected effect on industry
output of an increase in $q_a$ by one unit has two components: a
direct increase of one unit, and an indirect increase from Brydox
increasing his output in response. The first-order condition (\ref{e13.15})
must be qualified by ``self-perceived'' because Apex might be
mistaken in his beliefs about Brydox' response.  The belief implicit
in Nash equilibrium, that Apex' deviation is not followed by a
response from Brydox, is the only belief that supports an
equilibrium in which one player or the other is not mistaken. But if
consistency of beliefs is not required, other beliefs are possible
that lead to different behavior.

 \noindent
{\it Firm $i$'s {\bf conjectural variation} is the rate $\frac{d
q_{-i} }{d q_i}$ at which he conjectures that the output of other
firms would change if $i$'s own output changed.}

\noindent
  {\it CV} = 0\\
  In a Cournot-Nash equilibrium, Apex believes that if he deviated
by producing more, Brydox would not deviate, so the conjectural
variation equals 0.

 \noindent
{\it CV} = $-1$\\
 If Apex believes that an increase in his output is matched by a
decrease in Brydox' output, so the total industry output is left
unchanged, the conjectural variation is $-1$.  If both firms use this
conjectural variation, the industry output is the competitive level;
firms ignore the effect of their output in depressing the price. Of
course, if both firms use a negative value, their beliefs are
inconsistent. 

 \noindent
  {\it CV} = 1\\
  If Apex believes that Brydox would exactly match his output
changes, the conjectural variation is 1.  With two firms, with identical cost curves, industry
output is at the cartel level, though an $n$-player game would need
$CV = n-1$ to achieve that level.

     In Stackelberg equilibrium (section 3.5), the conjectural
variation of the Stackelberg leader is the slope of the follower's
reaction function, which takes the value given by an equation like
(\ref{e13.7}) and lies between $-1$ and 0 in the linear case.

  In the world oil market, fringe producers like Britain face the
OPEC cartel.  If Britain's conjectural variation equals $-1$, Britain
believes that producing more would make OPEC cut back an equal
amount; if 0, that OPEC would ignore Britain; if 0.5, that OPEC would
follow with a smaller increase; if 1, that OPEC would match every
increase; and if 10, that OPEC would respond by flooding the market.
Setting up equations with the appropriate value for the conjectural
variations of all the players, we could solve for the equilibrium
output.  The idea is useful for organizing different models of
duopoly and it is simple enough to be empirically estimated. Even
without knowing the correct theory, an estimate could be made of how
much OPEC actually does respond to Britain.
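To illustrate such a calculation, take the linear model with $p = 120 - q$ and zero costs. If each of $n$ symmetric firms holds conjectural variation $v$, the first-order condition $120 - nq_j - (1+v)q_j = 0$ gives $q_j = \frac{120}{n+1+v}$. The closed form and the code below are my own sketch for illustration, not the text's.

```python
# Symmetric equilibrium output when each of n firms holds conjectural
# variation v, in the linear model p = 120 - q with zero costs.  The
# first-order condition 120 - n*q - (1+v)*q = 0 gives q = 120/(n+1+v).
# (An illustrative derivation, not taken from the text.)

def cv_output(n, v):
    return 120.0 / (n + 1 + v)

print(cv_output(2, 0))        # 40.0  -- CV = 0: Cournot-Nash output per firm
print(2 * cv_output(2, -1))   # 120.0 -- CV = -1: competitive industry output
print(2 * cv_output(2, 1))    # 60.0  -- CV = 1: cartel (monopoly) industry output
print(5 * cv_output(5, 4))    # 60.0  -- with n = 5, CV = n-1 = 4 gives the cartel level
```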

 
 \subsection{Prices as Strategies:   Bertrand Equilibrium} %13.2

\noindent
    The Bertrand (1883) duopoly model seems to be only slightly
different from the Cournot model, but it reaches radically different
conclusions. The Bertrand solution is nothing more than a Nash
equilibrium in prices rather than quantities. We will use the same
two-player, zero-cost, linear-demand world as before, but now the
strategy spaces will be the prices, not the quantities.  We will also use
the same demand function, equation (\ref{e13.9}), which implies that if
$p$ is the lowest price, $q = 120 - p$.  In the Cournot model, firms
chose quantities but allowed the market price to vary freely; in the
Bertrand model, they choose prices and sell as much as they can.  The
strategies for Apex and Brydox are $p_a$ and $p_b$. The payoff
function for Apex (and analogously for Brydox) is

\begin{tabular}{ll}
$   \;\;\;\;\;\;\;\;  \pi_a =$ & $\left\{
\begin{tabular}{ll}
$ p_a (120 - p_a)$ &if $p_a < p_b$  \\ 
  $ \frac{p_a(120 - p_a)}{2}$& if $p_a = p_b $ \\
                    0        & if $ p_a > p_b$ \\
\end{tabular}
\right.$ 
\end{tabular}
 
\noindent
  The Bertrand game has a unique Nash equilibrium: $p_a = p_b = 0$.
No other pair of prices could be an equilibrium, because one firm
could capture the entire market by slightly undercutting the other's
price. The only pair of prices where undercutting is not a temptation
is (0,0). Duopoly profits are not just less than monopoly profits,
they are zero.
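The undercutting logic is mechanical enough to check directly. Here is a small sketch of Apex's payoff function (my own illustration, not from the text), confirming that any common positive price invites undercutting while (0,0) does not.

```python
# Apex's payoff in the basic Bertrand game: demand q = 120 - p, zero costs,
# the market split at equal prices.  An illustrative sketch, not the text's code.

def bertrand_payoff(p_own, p_other):
    if p_own < p_other:
        return p_own * (120 - p_own)        # capture the whole market
    if p_own == p_other:
        return p_own * (120 - p_own) / 2    # split the market
    return 0.0                              # sell nothing

# At equal prices of 30, a penny of undercutting nearly doubles profit:
print(bertrand_payoff(30, 30))      # 1350.0
print(bertrand_payoff(29.99, 30))   # about 2699.4
# At (0, 0), no deviation is profitable: raising the price sells nothing.
print(bertrand_payoff(10, 0))       # 0.0
```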


Like the surprising outcome of the Prisoner's Dilemma, the Bertrand
equilibrium is less puzzling once one thinks about the
   model's limitations. What it shows is that duopoly profits do
not arise  just because there are   two firms. Profits
arise from something else, such as multiple periods, incomplete
information, or differentiated products.

   Both the Bertrand and   Cournot models are
in common use. The Bertrand model can be awkward mathematically
because of the discontinuous jump from a market share of 0 to 100
percent after a slight price cut. The Cournot model is useful
as a simple model that avoids this problem  and which predicts that  the price will 
fall  gradually as more firms enter the market.   There are also ways to modify the Bertrand
model to obtain   intermediate prices and gradual effects of entry, and we will proceed to look at  some of these modifications. 
 
\bigskip
\noindent
{\bf Capacity Constraints: The Edgeworth Paradox}

\noindent
    Let us start by altering the Bertrand model by constraining each
firm to sell no more than $K = 70$ units.  The industry capacity of
140 exceeds the competitive output, but do profits continue to be
zero? 

 When capacities are limited we require additional assumptions
because of the new possibility that a firm with a lower price might attract more
customers than it can supply. We need to specify a {\bf rationing
rule} telling which customers are served at the low price and which
must buy from the high-price firm. The rationing rule is unimportant
to the payoff of the low-price firm, but crucial to the high-price
firm.  One possible rule is

\noindent
 {\bf Intensity rationing.} { \it The customers that value the
product most buy from the firm with the lower price.}

 The inverse demand function from equation (\ref{e13.9}) is $p = 120-
q$, and under intensity rationing, the $K$ customers with the
strongest demand buy from the low-price firm. Suppose that Brydox is
the low-price firm, charging a price of 30 so that 90 consumers wish
to buy from it. The residual demand facing Apex is then 
 \begin{equation}\label{e13.18}
 q_a = 120 - p_a - K. 
 \end{equation}
  The demand curve  is shown in figure 13.2a.

\begin{center}
{\bf Figure 13.2     Rationing Rules} (a) intensity rationing if K=70; (b) proportional rationing 
 \end{center}

\epsfysize=3in

 
\epsffile{/Users/erasmuse/AAANewChapters/Figures/f_12.2.eps}



\noindent
  Under intensity rationing, the payoff functions are, given that $K=70$,  
\begin{equation}\label{e13.19}
\begin{array}{ll}
   \pi_a = & \left\{
\begin{array}{llr}
 p_a \cdot  Min \{ 120 - p_a, 70 \} &{\rm if} \;p_a < p_b  & (a) \\ 
  \frac{p_a(120 - p_a)}{2}  & {\rm if }\; p_a = p_b & (b)  \\
                    0        & {\rm if }\; p_a > p_b, p_b \geq 50 & (c)\\
           p_a (120 - p_a- 70) & {\rm if}\; p_a > p_b, p_b < 50  & (d) \\ 
 \end{array}
 \right.
  \end{array}
 \end{equation}
  


 The appropriate rationing rule depends on what is being modelled.
Intensity rationing is appropriate if buyers with more intense demand
make greater efforts to obtain low prices. If the intense buyers are
wealthy people who are unwilling to wait in line, the least intense
buyers might end up buying from the low-price firm, the case of {\bf
inverse-intensity rationing}.  An intermediate rule is proportional
rationing, under which every type of consumer is equally likely to be
able to buy at the low price.

\noindent
  {\bf Proportional rationing.} {\it Each customer has the same
probability of being able to buy from the low-price firm.}

  Under proportional rationing, if $K= 70$ and 90 customers wanted to
buy from Brydox, 2/9 ($=\frac{q(p_b)-K}{q(p_b)}$) of each type of customer  will be forced to buy from Apex (for example, 2/9 of the type
willing to pay 120).  The residual
demand curve facing Apex, shown in figure 13.2b and equation
(\ref{e13.20}), intercepts the price axis at 120, but slopes down at a
rate $\frac{9}{2}$ times as fast as market demand because only 2/9
of the customers of each type remain.
 \begin{equation}\label{e13.20}
 q_a = (120 - p_a) \left(\frac{120- p_b - K}{120 - p_b} \right)  
 \end{equation}

     The capacity constraint has a very important effect: (0,0) is no
longer a Nash equilibrium in prices. Consider Apex' best response
when Brydox charges a price of zero. If Apex raises his price above
zero, he retains most of his customers (because Brydox is already
producing at capacity), but his profits rise from zero to some
positive number, regardless of the rationing rule.  In any
equilibrium, both players must charge prices within some small amount
$\epsilon$ of each other, or the one with the lower price would
deviate by raising his price.  But if the prices are equal, then both
players have unused capacity, and each has an incentive to undercut
the other. No pure-strategy   equilibrium exists under either rationing rule.  This is
known as the {\bf Edgeworth paradox}, after Edgeworth (1897).


 Suppose that demand is linear, with the highest reservation price being $P=100$ and the maximum market quantity $Q=100$ at $P=0$.  Suppose also that  there are two firms, Apex and Brydox,  each having a  constant marginal cost of  0 up to capacity of $Q=80$ and  infinity thereafter.  We will  assume intensity   rationing  of  buyers. 

Note  that  industry capacity of 160  exceeds market demand of 100 if price equals marginal cost.    Note also that the  monopoly price is  50, which with quantity of  50 yields industry profit of 2500.  But what will be the equilibrium? 

  Prices of $(P_a=0, P_b=0)$  are not an equilibrium.   Apex's profit would be zero in that strategy combination.   If Apex increased its price to  5, what would happen?  Brydox would immediately sell $Q=80$, and to the most intense  80 percent of buyers.  Apex would be left with all the buyers between  $P=20$ and $P= 5$ on the demand curve,  for $Q_a=15$ and profit of  $\pi_a= (5) (15) = 75$.   So deviation by Apex is profitable. (Of course, $P=5$ is not necessarily  the most profitable deviation-- but we do not need to check that; I looked for an {\it easy} deviation.) 
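That arithmetic can be verified mechanically. The sketch below is my own check, not the text's: it computes the residual demand left for Apex under intensity rationing when Brydox prices at 0 and sells its capacity of 80.

```python
# Intensity rationing with demand q = 100 - p and capacity K for the
# low-price firm: the K most intense buyers are served first, so the
# high-price firm faces residual demand q = 100 - p - K (when positive).
# An illustrative check of the deviation in the text, not the text's own code.

def residual_demand(p_high, p_low, K, intercept=100.0):
    if intercept - p_low <= K:             # low-price firm can serve everyone
        return 0.0
    return max(intercept - p_high - K, 0.0)

q_a = residual_demand(5.0, 0.0, 80.0)
print(q_a, 5.0 * q_a)   # 15.0 75.0 -- Apex's profitable deviation to P_a = 5
```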



  Equal prices of  $(P_a, P_b)$ with $P_a = P_b>0$ are not an equilibrium.    Even if the price is close to 0, Apex would sell at most 50 units as its half of the market, which is less than its capacity of 80.    Apex could deviate to  just below $P_b$     and have a discontinuous jump in   sales  for an increase in profit, just as in the basic Bertrand game.    
  
\newpage

   Unequal prices of  $(P_a, P_b)$  are not an equilibrium. Without loss of generality, suppose  $P_a > P_b$.   So long as $P_b$ is less than the monopoly price of 50,  Brydox would deviate to a new price close to, but not exceeding, $P_a$. And this is not {\it just} the open-set problem. Once  Brydox is close enough  to Apex, Apex would deviate by jumping to  a price just below Brydox's.  

 
If capacities are large enough, the Edgeworth Paradox disappears.  Consider capacities of 150 per firm, for example.  The argument made above for why a common price of 0 is not an equilibrium fails, because if Apex were to deviate to a positive price, Brydox would be fully capable of serving the entire market, leaving Apex with no customers. 

If capacities are small enough, the Edgeworth Paradox also disappears, but so does the Bertrand Paradox.    Suppose each firm has a capacity of  20. They each will choose to  sell at a price of  60, in which case they will each sell 20 units, their entire capacities. Apex will have a payoff of 1200.  If Apex deviates to a lower price, it will not sell any more, so that would be unprofitable.  If Apex deviates to a higher price, it will sell  fewer,  and since the monopoly price is 50,  its profit will be lower; note that  a price of  61 and a quantity of  19 yields profits of  1159, for example. \footnote{Inverse intensity rationing  might change this result. Think about it.}   
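The small-capacity case can also be checked numerically. In this sketch (illustrative; the function is mine, not the text's), Brydox is held at $P_b = 60$ with $K = 20$, and Apex's profit is evaluated at the candidate price and at deviations.

```python
# Apex's profit with capacity K = 20 under intensity rationing, demand
# q = 100 - p, Brydox fixed at P_b = 60.  A sketch for checking the
# candidate equilibrium (60, 60) discussed in the text.

def apex_profit(p_a, p_b=60.0, K=20.0, intercept=100.0):
    if p_a < p_b:
        q = min(intercept - p_a, K)           # capacity caps sales
    elif p_a == p_b:
        q = min((intercept - p_a) / 2, K)     # split demand, still capped
    else:
        q = max(intercept - p_a - K, 0.0)     # residual demand only
    return p_a * q

print(apex_profit(60.0))   # 1200.0 -- the candidate equilibrium payoff
print(apex_profit(61.0))   # 1159.0 -- raising price loses more than it gains
print(apex_profit(59.0))   # 1180.0 -- cutting price cannot raise sales above K
```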
 
  We could have  expanded the model to explain why the firms have small capacities by adding a prior move in which they choose capacity  subject to a cost per unit of capacity, foreseeing what will happen later in the game.  

 

 A mixed strategy   equilibrium does exist, calculated using
intensity rationing by Levitan \& Shubik (1972) and analyzed in
Dasgupta \& Maskin (1986b).  Expected profits are positive, because
the firms charge positive prices.  Under proportional rationing, as
under intensity rationing, profits are positive in equilibrium, but
the high-price firm does better with proportional rationing. The
high-price firm would do best with {\bf inverse-intensity rationing}, under
which the customers with the least intense demand are served at the
low-price firm, leaving the ones willing to pay more at the mercy of
the high-price firm.


  Even if capacity were made endogenous, the outcome would be
inefficient, either because firms would charge prices higher than
marginal cost (if their capacity were low), or they would invest in
excess capacity (even though they price at marginal cost).

\bigskip
\noindent
 {\bf Product Differentiation}

\noindent
     The Bertrand model without capacity constraints generates zero
profits because only slight price discounts are needed to bid away
customers.  The assumption behind this is that the two firms sell
identical goods, so   if Apex' price is slightly higher than
Brydox' all the customers go to Brydox.  If customers have brand
loyalty or poor price information, the equilibrium is different
and the demand curves facing Apex and Brydox might be
 \begin{equation} \label{e13.21} 
   q_a = 24 - 2p_a + p_b 
\end{equation}
and 
 \begin{equation} \label{e13.22} 
 q_b = 24 - 2p_b + p_a.
 \end{equation}
 The greater the difference in the coefficients on prices in demand
curves like these, the less substitutable are the products.  As with
standard demand curves like (\ref{e13.9}), we have made implicit
assumptions about the extreme points of (\ref{e13.21}) and
(\ref{e13.22}). These equations only apply if the quantities demanded
turn out to be nonnegative, and we might also want to restrict them
to prices below some ceiling, since otherwise the demand facing one
firm becomes infinite as the other's price rises to infinity. With
those restrictions, the payoffs are
  \begin{equation} \label{e13.22a} 
   \pi_a = p_a (24 - 2p_a + p_b)
 \end{equation}
 and
  \begin{equation} \label{e13.23} 
  \pi_b = p_b (24 - 2p_b + p_a).
\end{equation}
 Maximizing Apex' payoff, we obtain the first-order condition
 \begin{equation} \label{e13.24} 
\frac{ d\pi_a}{d p_a} = 24 - 4p_a + p_b = 0,
\end{equation}
 and the reaction function
 \begin{equation} \label{e13.25} 
p_a = 6 + p_b/4.
\end{equation}

    Since Brydox has a parallel first-order condition, the
equilibrium occurs where $p_a = p_b = 8.$ The quantity each firm produces
is 16, which is below the 24 each would produce at prices of zero.
Figure 13.3 shows that the reaction functions intersect. Apex'
demand curve has the elasticity
 \begin{equation} \label{e13.26} 
  \left( \frac{\partial q_a}{\partial p_a} \right) \cdot
\left( \frac{p_a}{q_a} \right) = - 2 \left(
\frac{p_a}{q_a} \right),
  \end{equation}
  which is  finite even when $p_a = p_b$, unlike the case of the
standard Bertrand model.
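The fixed point of the two reaction functions can also be found by iterating best responses. The sketch below is my own illustration, not the text's; the convergence relies on the reaction slopes being less than one in absolute value.

```python
# Iterating the differentiated-Bertrand reaction functions p_a = 6 + p_b/4
# and p_b = 6 + p_a/4 from equation (13.25).  Because each slope is 1/4,
# best-response iteration contracts to the fixed point p_a = p_b = 8.

def iterate_prices(p_a=0.0, p_b=0.0, rounds=60):
    for _ in range(rounds):
        p_a, p_b = 6 + p_b / 4, 6 + p_a / 4
    return p_a, p_b

p_a, p_b = iterate_prices()
q_a = 24 - 2 * p_a + p_b          # Apex's quantity at those prices
print(round(p_a, 6), round(p_b, 6), round(q_a, 6))   # 8.0 8.0 16.0
```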


\begin{center}
  {\bf Figure 13.3}  Bertrand Reaction Functions with Differentiated
Products 
 \end{center}

\epsfysize=3in

 
\epsffile{/Users/erasmuse/AAANewChapters/Figures/f_12.3.eps}



\bigskip
\noindent
{\bf Cournot Equilibrium with Differentiated Products}

\noindent
  We can also work out the Cournot equilibrium for demand functions
(\ref{e13.21}) and (\ref{e13.22}), but product differentiation does not affect it by
much. Start by expressing the price in terms of quantities alone,
obtaining 
  \begin{equation} \label{e13.28} 
    p_a = 12 - \frac{1}{2}q_a +\frac{1}{2} p_b 
 \end{equation}
and 
 \begin{equation} \label{e13.29} 
  p_b = 12 - \frac{1}{2}q_b + \frac{1}{2}p_a.
 \end{equation}
  After substituting from (\ref{e13.29}) into (\ref{e13.28}) and solving
for $p_a$, we obtain
  \begin{equation} \label{e13.30} 
   p_a = 24 - 2q_a/3 - q_b/3.
 \end{equation}
   The first-order condition for Apex' maximization problem is
  \begin{equation} \label{e13.30a} 
\frac{d \pi_a}{dq_a} = 24 - 4q_a/3 - q_b/3 = 0,
 \end{equation}
 which gives rise to the reaction function
  \begin{equation} \label{e13.31} 
q_a = 18 - q_b/4. 
 \end{equation}
  We can guess that $q_a = q_b$. It follows from (\ref{e13.31}) that
$q_a = 14.4$ and the market price is 9.6. On checking, you would find
that this is indeed a Nash equilibrium.
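Checking is quick by hand or by machine. This sketch (mine, not the text's) confirms the symmetric fixed point of the reaction function and the implied price.

```python
# Verifying the differentiated-products Cournot equilibrium: reaction
# q_a = 18 - q_b/4, inverse demand p_a = 24 - 2q_a/3 - q_b/3 (zero costs).
# An illustrative check, not part of the text.

def reaction(q_other):
    return 18 - q_other / 4

q = 18 / 1.25                 # symmetric fixed point of q = 18 - q/4
price = 24 - 2 * q / 3 - q / 3
print(round(q, 6), round(price, 6))   # 14.4 9.6
```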

 
\subsection{Location Models }%13.3

\noindent
       In section 13.2 we analyzed the Bertrand model with
differentiated products using demand functions whose arguments were
the prices of both firms. Such a model is suspect because it is not
based on primitive assumptions.  In particular, the demand functions
might not be generated by maximizing any possible utility function. A
demand curve with a constant elasticity less than one, for example,
is impossible because as the price goes to zero, the amount spent on
the commodity goes to infinity. Also, demand curves (\ref{e13.21}) and
(\ref{e13.22}) were  restricted to prices below a certain level, and
it would be good to be able to justify that restriction.

   Location models construct demand functions like (\ref{e13.21}) and
(\ref{e13.22}) from primitive assumptions.  In location models, a
differentiated product's characteristics are points in a space. If
cars differ only in their mileage, the space is a one-dimensional
line.  If acceleration is also important, the space is a
two-dimensional plane.  An easy way to think about this
approach is to consider the location where a product is sold.  The product ``gasoline
sold at the corner of Wilshire and Westwood,'' is different from
``gasoline sold at the corner of Wilshire and Fourth.''  Depending on
where consumers live, they have different preferences over the two,
but, if prices diverge enough, they will be willing to switch from one gas
station to the other.

        Location models form a literature in themselves. We will look
at the first two models analyzed in the classic article of Hotelling
(1929), a model of price choice and a model of location choice.
Figure 13.4 shows what is common to both. Two firms are located at
points $x_a$ and $x_b$ along a line running from zero to one, with a
constant density of consumers throughout. In the Hotelling Pricing
Game, firms choose prices for given locations.  In the Hotelling
Location Game, prices are fixed and the firms choose the locations.

\begin{center}
{\bf  Figure 13.4:} Location Models 
\end{center}

\epsfysize=3in

 
\epsffile{/Users/erasmuse/AAANewChapters/Figures/f_12.4.eps}




 \begin{center} 
{\bf  The Hotelling Pricing Game}\\
 (Hotelling [1929]) 
 \end{center}
{\bf Players}\\
 Sellers Apex and Brydox, located at $x_a$ and $x_b,$ where $x_a <
x_b$, and a continuum of buyers indexed by location $x \in [0,1]$.

 
\noindent
{\bf Order of Play }\\
 (1)   The sellers simultaneously choose  prices $p_a$ and $p_b$.\\
 (2) Each buyer chooses a seller.

\noindent
 {\bf Payoffs}\\ 
 Demand is uniformly distributed on the interval [0,1] with a density
equal to one (think of each consumer as buying one unit). Production
costs are zero. Each consumer always buys, so his problem is to
minimize the sum of the price plus the linear transport cost, which
is $\theta$ per unit distance travelled. 
 \begin{equation} \label{e13.33} 
   \pi_{buyer \;at \;x} = -Min\{ \theta |x_a -x| + p_a, \; \theta
|x_b - x| + p_b \}.
 \end{equation}

  \begin{tabular}{ll}
 $   \pi_a = \left\{
\begin{tabular}{llr}
 0 & if $p_a - p_b > \theta (x_b - x_a)$ & (33a)\\
 &    (Brydox captures entire market) & \\
 & & \\
 $p_a$ & if $p_b - p_a  > \theta (x_b - x_a)$ & (33b)\\
  & (Apex captures entire market)  & \\
 & & \\
  $p_a ( \frac{1}{2\theta} \left[ (p_b - p_a) + \theta(x_a + x_b)
\right] )$ & otherwise (market is divided)& (33c) \\
 \end{tabular}
\right. $ 
\end{tabular} 


 
\noindent
Brydox has analogous payoffs.

 
 The payoffs result from buyer behavior.   A buyer's utility depends on the
price he pays and the distance he travels.  Price aside, Apex is
most attractive to the customer at $x=0$ (``Customer 0'')  and least attractive to the
customer at $x = 1 $ (``Customer 1'').  Customer   0  will buy from Apex so
long as 
 \begin{equation}\label{e13.34}
 \theta x_a + p_a < \theta x_b + p_b,
 \end{equation}
 which implies that 
  \begin{equation} \label{e13.35} 
 p_a - p_b  < \theta (x_b - x_a),
 \end{equation}
  If this inequality is reversed, Brydox captures the entire market,
yielding payoff (33a).  Customer 1 will buy from
Brydox if
 \begin{equation}\label{e13.36}
 \theta(1- x_a) +p_a >  \theta (1-x_b) + p_b,
 \end{equation}
 which implies that 
 \begin{equation}\label{e13.37} 
  p_b - p_a  < \theta (x_b - x_a),
 \end{equation}
If this inequality is reversed, Apex captures the entire market, yielding payoff (33b).

  Very likely, inequalities (\ref{e13.35}) and (\ref{e13.37}) are both
satisfied, in which case Customer 0 goes to Apex and Customer 1
goes to Brydox. This is the case represented by payoff (33c), and the next task is to find the location of    Customer $x^*$,   defined as the customer who is 
  at the boundary between the two markets,
indifferent between  Apex and Brydox.  First, notice that if Apex
attracts Customer $x_b$, he also attracts all $x > x_b$, because
beyond $x_b$ the customers' distances from both sellers
increase at the same rate. So we know that if there is an indifferent
consumer he is between $x_a$ and $x_b$. Knowing this, (\ref{e13.33})
tells us that
 \begin{equation}\label{e13.38} 
 \theta(x^*- x_a) + p_a =  \theta (x_b -x^*) + p_b,
 \end{equation}
so that
 \begin{equation}\label{e13.39} 
 p_b - p_a = \theta (2x^*- x_a - x_b ), 
 \end{equation}
and
 \begin{equation}\label{e13.40}
 x^* = \frac{1}{2\theta} \left[ (p_b - p_a) + \theta(x_a + x_b)
\right].
 \end{equation}
 Keep in mind that equation (\ref{e13.40}) is valid only if there really does exist a consumer who is indifferent-- if such a consumer does not exist, equation (\ref{e13.40}) will generate a number for $x^*$, but that number is meaningless. 

 Since Apex keeps all the customers between 0 and $x^*$, equation
(\ref{e13.40}) is the demand function facing Apex so long as he does
not set his price so far above Brydox's that he loses even Customer
0. The demand facing Brydox equals $(1 - x^*)$.  Note that if $p_b =
p_a$, then from (\ref{e13.40}), $x^* = \frac{x_a + x_b}{2}$, independent
of $\theta$, which is just what we would expect. Demand is linear in
the prices of both firms, and looks similar to demand curves (\ref{e13.21})
and (\ref{e13.22}), which were used in section 13.2 for the Bertrand game with
differentiated products.
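The boundary consumer translates directly into a demand calculation. The sketch below is my own illustration (the parameter values are arbitrary, not from the text).

```python
# The indifferent consumer of equation (13.40):
# x* = [(p_b - p_a) + theta*(x_a + x_b)] / (2*theta).
# Meaningful only when x* actually lies between x_a and x_b.  The numbers
# below are arbitrary illustrations, not taken from the text.

def x_star(p_a, p_b, x_a, x_b, theta):
    return ((p_b - p_a) + theta * (x_a + x_b)) / (2 * theta)

# Equal prices put the boundary at the midpoint of the two locations,
# whatever the transport cost:
print(x_star(10, 10, 0.2, 0.8, 5))   # 0.5
# A lower price for Apex moves the boundary toward Brydox:
print(x_star(9, 10, 0.2, 0.8, 5))    # 0.6
```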


\begin{large}
 \newpage
\noindent
  {\bf The Hotelling Pricing Game: Interior Equilibrium} 

 Now that we have found the demand functions, the Nash equilibrium
can be calculated in the same way as in section 13.2, by setting up
the profit functions for each firm, differentiating with respect to
the price of each, and solving the two first-order conditions for the
two prices. If there exists an equilibrium in which      
the firms are willing to pick prices to satisfy inequalities
(35) and (37),  then it  is 
 \begin{equation}\label{e13.41}
 p_a = \frac{(2 + x_a + x_b)\theta}{3}, \;\;p_b = \frac{(4 - x_a -
x_b)\theta}{3}. 
 \end{equation}
   From (41) one can see that Apex charges a higher price if a
large $x_a$ gives it more safe customers or a large $x_b$ makes the
number of contestable customers greater. The simplest case is when
$x_a = 0$ and $x_b =1$, when (41) tells us that both firms charge
a price equal to $\theta$. Profits are positive and increasing in the
transportation cost.

  We cannot rest satisfied with the neat equilibrium of
equation (41), however,  because the assumption that  there exists an equilibrium in which the firms choose prices so as to split the market  on each side of some boundary  consumer $x^*$ is often violated.
Hotelling did not notice this, and fell into a common  trap  of  game theory. Economists are used to
models in which the calculus approach gives an answer that is both
the local optimum and the global optimum. In games like this one, however, the local optimum is not global, because of the discontinuity in the
objective function. Vickrey (1964) and  D'Aspremont, Gabszewicz, \& Thisse (1979) have
shown that if $x_a$ and $x_b$ are close together, no pure-strategy
equilibrium exists, for reasons similar to why none exists in the
Bertrand model with capacity constraints.  If both firms charge
non-random prices, neither would deviate to a slightly different
price, but one might deviate to a much lower price that would capture
every single customer.      But if both firms charged that low price, each
would deviate by raising his price slightly.  It turns out that if
Apex and Brydox are located symmetrically around the center of the
interval, then, if $x_a \geq 0.25$ and $x_b \leq 0.75$, no pure-strategy equilibrium exists.


\newpage

For reference in the numerical cases below, recall equations (41) and (40):

$$
 p_a = \frac{(2 + x_a + x_b)\theta}{3}, \;\;p_b = \frac{(4 - x_a -
x_b)\theta}{3}. 
$$

$$
   x^* = \frac{1}{2\theta} \left[ (p_b - p_a) + \theta(x_a + x_b)
\right].
$$

Case 1:  Try    $x_a = 0, x_b = .7$ and $\theta =.5$.  Then equation (41) says 
 $ p_a= (2+0+.7).5/3 =0.45 $ and     $p_b= (4-0-.7).5/3 =  0.55$. Equation (40) says that $  x^* = \frac{1}{2*0.5} \left[ (0.55-0.45) + 0.5(0+.7)\right]= .45$. That works out just fine. 


Case 2: Try  $x_a = .9, x_b = .9$ and $\theta =.5$.  Then equation (41) says 
 $ p_a= (2+.9+.9).5/3 \approx .63$ and     $p_b= (4-.9-.9).5/3 \approx   .37$. But that means Brydox would capture the entire market! This result is nonsense, because the derivation of equation (41) relied on the assumption that $x_a < x_b$, which is false here.  

 
Case 3: Try  $x_a = .7, x_b = .9$ and $\theta =.5$.     Then equation (41) says 
 $ p_a= (2+.7+.9).5/3 =.6$ and     $p_b= (4-.7-.9).5/3 =.4$.  But what about the split-up of the consumers?  Equation (40) says that $  
 x^* = \frac{1}{2*.5} \left[ (.4-.6) + .5(.7+.9)
\right]= .6$. But that is less than  $x_a$, violating  our implicit assumption that the players split the market!   Equation (40) is based on the premise that there does exist some indifferent consumer, and when that is a false premise,  equation (40) will still  spit out a value of $x^*$, but it will not mean anything.    And, in fact, consumer $x=.6$ is not really indifferent between Apex and Brydox. He   could buy from Apex at  a total cost of   .6 + .1(.5) = .65 or from Brydox, at a total cost of .4 + .3 (.5) = .55.     There are no consumers who strictly prefer Apex, in fact.      Even Apex's `home' consumer at $x= .7$ would have a total cost of buying from Brydox of  $.4 + .5 (.9-.7) =0.5$, and would prefer Brydox.   Similarly, the consumer at $x=0$ would have a total cost of buying from Brydox of  $.4 + .5 (.9-0) =  .85$, compared to a total cost of buying from Apex of  $.6 + .5(.7-0)= .95$, and would prefer Brydox.   
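These three cases are easy to check numerically. Here is a quick sketch in Python (the helper names are mine) that reproduces the arithmetic of equations (40) and (41) and shows why Case 3 breaks down:

```python
def candidate_prices(xa, xb, theta):
    """Equation (41): the would-be interior-equilibrium prices."""
    pa = (2 + xa + xb) * theta / 3
    pb = (4 - xa - xb) * theta / 3
    return pa, pb

def boundary_consumer(pa, pb, xa, xb, theta):
    """Equation (40): the candidate indifferent consumer x*."""
    return ((pb - pa) + theta * (xa + xb)) / (2 * theta)

def total_cost(x, price, location, theta):
    """A consumer at x pays the price plus linear transport cost."""
    return price + theta * abs(x - location)

# Case 1: x_a = 0, x_b = 0.7 -- everything is consistent.
pa, pb = candidate_prices(0.0, 0.7, 0.5)
xstar = boundary_consumer(pa, pb, 0.0, 0.7, 0.5)
print(round(pa, 2), round(pb, 2), round(xstar, 2))   # 0.45 0.55 0.45

# Case 3: x_a = 0.7, x_b = 0.9 -- equation (40) spits out x* = 0.6,
# but the consumer there strictly prefers Brydox, so (40) is invalid.
pa, pb = candidate_prices(0.7, 0.9, 0.5)
xstar = boundary_consumer(pa, pb, 0.7, 0.9, 0.5)
print(round(xstar, 2))                                # 0.6
print(round(total_cost(xstar, pa, 0.7, 0.5), 2))      # 0.65 buying from Apex
print(round(total_cost(xstar, pb, 0.9, 0.5), 2))      # 0.55 buying from Brydox
```

The last two lines confirm that the "indifferent" consumer of Case 3 strictly prefers Brydox, so equation (40) was applied outside its domain of validity.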
   
\newpage 

 The problem in both Cases 2 and 3 is that the firm with the higher price would do better to deviate with a discontinuous price cut to just below the  other firm's price.  Equation (41) was derived by calculus,  with the implicit assumption that  a local profit  maximum was also a global profit maximum, or, put differently, that  if no small change could raise a firm's payoff, then it had found the optimal strategy.  Sometimes a big change will increase a player's payoff even though a small change would not. Perhaps this is what they mean in business by the importance of  ``nonlinear thinking'' or ``thinking out of the envelope''.  The  everyday manager  or  scientist as described by Joseph  Schumpeter   and  Thomas Kuhn    concentrates on  analyzing      incremental  changes  and  only the entrepreneur or  genius  breaks through with a  discontinuously  new idea, the  profit source or  paradigm shift.\footnote{See Schumpeter, Joseph (1911/1934)  {\it Theory of Economic Development}, translated from the  German 3rd Edition  by Redvers Opie,  Cambridge, Mass: Harvard University Press, 1934.  and Kuhn, Thomas (1970) {\it The Structure of Scientific Revolutions}, Chicago:  University of Chicago Press, 1970.} 

 Hotelling should  have done some numerical examples.  And he should have thought carefully about the comparative statics.   Equation (41) implies  that Apex should choose a higher price if both $x_a$ and $x_b$ increase, but it is odd that if the firms locate closer together, say at .9 and .91, Apex should be able to charge a higher price rather than suffering from more intense competition. An odd result like this is a typical clue that the analysis has a logical flaw somewhere. Until  the modeller can figure out an intuitive reason for his odd result, he should suspect an error.  

\newpage
   
 So what is the equilibrium in  Case 3? (In Case 2 it is simple: $p_a=p_b=0$.)  First note that in Case 3,  $p_a=p_b=0$ is not an equilibrium.   Suppose Apex  deviated to  $p_a=0.05$.  If there exists an indifferent consumer, equation (40) tells us where he is, and in this case it yields $x^* = \frac{1}{2*.5} \left[ (0-.05) + .5(.7+.9)\right]= 0.75$. Consumer $x = .75$ has a cost of buying from Apex of  $.05+.5(.75-.7) =  .075$, and a cost of buying from Brydox of  $0+ .5 (.9-.75) = .075$,  so he is truly indifferent,  an indifferent consumer does exist, and equation (40) is valid.  Since Apex will now sell to all consumers in the interval $[0, 0.75]$ at the positive price  of $p_a =.05$, Apex's deviation is profitable, and  $p_a=p_b=0$ is not an equilibrium. 

    In starting to look for the mixed strategy equilibrium,  think about the support of the mixing  distribution.    In the equilibrium,  Apex will mix using density $f_a(p)$ on the support   $[L_a, U_a]$.  Notice that it cannot be that  $L_a=0$, because  at that price, Apex would make zero profit, and at higher prices it can earn positive expected profit. Thus, the lower bound for mixing is strictly greater than marginal cost.  How about the upper bound?  $U_a = 10$, equal to the reservation price, is a good guess.  

   I think it is not worth going into the calculations further, though, because this is a particularly tricky mixed-strategy equilibrium to calculate. That is because  the payoff to a particular price $p_a$ takes one of three forms.  First, if $p_b$ is low enough, Apex is shut out of the market and earns zero. Second, if $p_b$ is moderately greater, we would calculate an indifferent consumer $x^*(p_a,p_b)$ and find Apex's profit. Third, if $p_b$ is much greater, Apex gets the entire market.  The expected profit from Apex charging the pure strategy $p_a$ thus has three components, depending on the part of the density $f_b(p_b)$ that  results in each possibility.  This gets computationally intricate. 


\newpage
 
 

 \begin{center} 
{\bf The  Hotelling Location Game}\\
 (Hotelling [1929])
 \end{center}
  {\bf Players}\\
 $n$ Sellers. 

 
 \noindent
 {\bf Order of Play }\\
  The sellers simultaneously choose locations $x_i \in [0,1].$

 \noindent
 {\bf Payoffs}\\
  Consumers are distributed along the interval [0,1] with a uniform
density equal to one. The price equals one, and production costs are
zero.  The sellers are ordered by their location so $x_1 \leq x_2
\leq \ldots \leq x_n$, $x_0 \equiv 0$ and $x_{n+1} \equiv 1.$ Seller
$i$ attracts half the customers from the gaps on each side of him, so
that his payoff is 
\begin{equation}\label{e13.42}
   \pi_1 = x_1 + \frac{x_2 - x_1}{2},  
 \end{equation}
\begin{equation}\label{e13.43}
   \pi_n =    \frac{x_n - x_{n-1}}{2} + 1 - x_n, 
 \end{equation}
or, for $i = 2, \ldots n-1$, 
 \begin{equation}\label{e13.44}
   \pi_i = \frac{x_i - x_{i-1}}{2} + \frac{x_{i+1} - x_i}{2}.  
 \end{equation}
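The payoff rule in equations (42)-(44) can be written as a short function. This is a sketch under my own naming, with a tie at a shared location split by the ordering convention:

```python
def location_payoffs(locations):
    """Each seller keeps any captive segment at his end of [0,1]
    and half of each gap between himself and his neighbors."""
    xs = sorted(locations)
    n = len(xs)
    payoffs = []
    for i in range(n):
        left = xs[i] if i == 0 else (xs[i] - xs[i - 1]) / 2
        right = (1 - xs[i]) if i == n - 1 else (xs[i + 1] - xs[i]) / 2
        payoffs.append(left + right)
    return payoffs

# Two sellers at the center split the market:
print(location_payoffs([0.5, 0.5]))        # [0.5, 0.5]

# With three separated sellers, the middle one is squeezed:
print([round(p, 2) for p in location_payoffs([0.2, 0.5, 0.8])])   # [0.35, 0.3, 0.35]
```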

\bigskip

  With {\bf one seller}, the location does not matter in this model,
since the customers are captive. If price were a choice variable and
demand were elastic, we would expect the monopolist to locate at
$x=0.5$.

  With {\bf two sellers}, both firms locate at $x= 0.5$, regardless of
whether or not demand is elastic.  This is a stable Nash equilibrium, as
can be seen by inspecting figure 13.4 and imagining best responses to
each other's   location. The best response is always to locate
$\varepsilon$ closer to the center of the interval than one's rival. When
both firms do this, they both end up exactly at the center and split the
market.

\newpage
 

     With {\bf three sellers} the model does not have a Nash
equilibrium in pure strategies.  Consider any strategy profile in
which each player locates at a separate point. Such a strategy
profile is not an equilibrium, because the two players nearest
the ends would edge in to squeeze the middle player's market share.
But if a strategy profile has any two players at the same point,
the third player would be able to acquire a share of at least
$(0.5 -\epsilon)$ by moving next to them; and if the third player's
share is that large, one of the doubled-up players would deviate by
jumping to his other side and capturing his entire market share.  The
only equilibrium is in mixed strategies. Suppose all three players use the same mixing density,  with $m(x)$ the probability density for location $x$ and positive density on the support $[g,h]$. 

   Firm 2 has location $x$ with density  $m(x)$, and Firm 3's location is greater than that with probability  $1-M(x)$, so the density for Firm 2 having location  $x$ and it being   smaller is  $m(x)[1-M(x)]$. The  density for either Firm 2 or Firm 3 choosing $x$ and it being smaller than the other firm's location is then    $2m(x)[1-M(x)]$.

 Firm 2 has location $x$ with density  $m(x)$, and Firm 3's location is less than that with probability  $ M(x)$,   so the density for Firm 2 having location  $x$ and it being   larger is  $m(x) M(x)$. The  density for either Firm 2 or Firm 3 choosing $x$ and it being larger than the other firm's location is then    $2m(x)M(x)$.

If Player 1 chooses $x=g$, then his  expected payoff is 
 $$
  \pi_1(x_1=g) =  g + \int_g^h   2m(x)[1-M(x)]   \left( \frac{  x-g }{2} \right) dx,  
 $$
 where   $g$ is  the  safe set of customers to his left, $2m(x)[1-M(x)]$ is the density for $x$ being the next biggest firm location, and $\frac{  x-g }{2}$ is Firm 1's share of the customers between his own location of $g$ and the next biggest location of $x$.  

If Player 1 chooses $x=h$, then his  expected payoff is, similarly, 
 $$
  \pi_1(x_1=h) =  (1-h) + \int_g^h   2m(x) M(x)   \left( \frac{  h-x }{2} \right) dx,  
 $$
where $(1-h)$ is the safe set of customers to his right.
   In a mixed strategy equilibrium, Player 1's payoffs from  these two pure strategies must be equal, and they are also equal to his payoff from  a location of 0.5, which we can plausibly guess is in the support of his mixing distribution.  
 Going on from this point, the algebra and calculus start  to become fierce.    Shaked (1982) has computed
the symmetric mixing probability density $m(x)$ to be
 \begin{equation}\label{e13.45}
 m(x)=  \left\{
\begin{array}{ll}
  2 &{\rm if}\;\; \frac{1}{4} \leq x \leq \frac{3}{4}  \\ 
  0 &{\rm  otherwise}.\\
 \end{array}
 \right. 
 \end{equation}
 I do not know how Shaked came to his answer, but I would tackle the problem by  guessing that  $M(x)$   was the cdf of a uniform distribution and seeing if it worked, which was perhaps his method too.  
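Shaked's density can be checked numerically. The sketch below (names mine) simulates two rivals mixing uniformly on $[1/4, 3/4]$, which is $m(x)=2$, and estimates the third player's expected payoff at several locations in the support; each should earn roughly the same amount, about $1/3$:

```python
import random

random.seed(0)

def payoff(me, rivals):
    """Equations (42)-(44) from one player's point of view."""
    xs = sorted(rivals + [me])
    i = xs.index(me)
    left = xs[i] if i == 0 else (xs[i] - xs[i - 1]) / 2
    right = (1 - xs[i]) if i == len(xs) - 1 else (xs[i + 1] - xs[i]) / 2
    return left + right

def expected_payoff(me, trials=200_000):
    """Two rivals mix uniformly on [1/4, 3/4], i.e. Shaked's m(x) = 2."""
    total = 0.0
    for _ in range(trials):
        total += payoff(me, [random.uniform(0.25, 0.75) for _ in range(2)])
    return total / trials

for loc in (0.25, 0.5, 0.75):
    print(loc, round(expected_payoff(loc), 3))
```

The indifference across the support is exactly what a mixed-strategy equilibrium requires.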

 \newpage
 
  Strangely enough, three is a special number. With {\bf more than
three sellers}, an equilibrium in pure strategies does exist (Eaton
\& Lipsey [1975]). Dasgupta \& Maskin (1986b), as amended by Simon
(1987), have also shown that an equilibrium, possibly in mixed
strategies, exists for any number of players $n$ in a space of any
dimension $m$.

  Since prices are inflexible, the competitive market does not
achieve efficiency.  A benevolent social planner or a monopolist who
could charge higher prices if he located his outlets closer to more
consumers would choose different locations than competing firms.  In
particular, when two competing firms both locate in the center of the
line, consumers are no better off than if there were just one firm.
The average distance of a consumer from a seller would be minimized
by setting $x_1 = 0.25$ and $x_2 = 0.75$, the locations that would be
chosen either by the social planner or the monopolist.

    The Hotelling Location Model, however,  is very well suited to politics.  Often there is just one dimension of importance in political races, and  voters will vote for the candidate closest to their own position, so there is no analog to price.  The Hotelling Location Model predicts that the two candidates will both choose the same position, right on top of the median voter.   This seems descriptively realistic; it accords with the common complaint that  all   politicians are pretty much the same. 

  One way to modify the model is by  looking at two-dimensional location. It turns out that this is difficult to analyze and generally has mixed-strategy solutions, even with just two firms or politicians. 



\newpage


 \subsection{Comparative Statics and   Supermodular Games}   %13.4 section. 

   Comparative statics is the analysis of what happens to the endogenous variables in a model when an exogenous variable changes. This is a central part of economics. When the wage rises, for example,  we wish to know how the price of steel will change in response.    Game theory presents special problems for comparative statics, because when a parameter changes, not only does  Smith's equilibrium strategy change in response, but Jones's strategy changes as a result of Smith's change as well.  A small change in the parameter might produce a large change in the equilibrium   because of feedback between the different players' strategies. 
 

Let us use a differentiated Bertrand game as an example. Suppose there are $N$ firms, and for firm $n$ the demand curve is
    \begin{equation} \label{e13.46} 
Q_n = Max \{ \alpha -  \beta_n p_n + \sum_{m \neq n} \gamma_m p_m, 0\}, 
\end{equation}
with $\alpha \in (0, \infty)$ , $\beta_n \in (0, \infty)$,  and  $\gamma_n \in (0, \infty)$ for all $n$. 
 Assume that   the effect of $p_n$ on firm $n$'s sales is larger than the effect of   the other firms'  prices,  so that
   \begin{equation} \label{e.47}
   \beta_n > \sum_{m \neq n} \gamma_m. 
   \end{equation}
  Let firm $n$ have constant marginal cost $\kappa   c_n$, where $\kappa \in \{1,2\}$ and  $c_n \in (0, \infty)$, and let us assume that firm $n$'s costs are low enough that it does operate in equilibrium.   The shift variable $\kappa$ represents the effect of the political regime on costs. 
  The payoff function for firm $n$ is  
 \begin{equation} \label{e13.48}
\pi_n = (p_n  - \kappa c_n)(\alpha  - \beta_n p_n + \sum_{m \neq n}\gamma_m p_m).
\end{equation}
Firms choose prices simultaneously.

   Does this game have an equilibrium? Does it have several equilibria? What happens to the equilibrium price if a parameter such as $c_n$ or $\kappa$ changes? These are difficult questions because if $c_n$ increases, the immediate effect is to change firm $n$'s price, but the other firms will react to the price change, which in turn will affect $n$'s price. Moreover, this is not a symmetric game--- the costs and demand curves differ from firm to firm, which could make algebraic solution of the Nash equilibrium quite messy. It is not even clear whether the equilibrium is unique.  

 Two approaches to comparative statics can be used here: the implicit function theorem, and supermodularity. We will look at each in turn.  

 
 
\noindent
 {\bf  The Implicit Function Theorem} 

The implicit-function theorem says that if $f(x,y) = 0$, then  
   \begin{equation} \label{e13.49}
  \frac{ \partial x} {  \partial y } = - \left(
  \frac{
\frac{\partial f}{ \partial y} 
   } 
 {  
 \frac{\partial f}{   \partial x} 
}  
  \right).  
  \end{equation}
  This is especially useful   if $x$ is a choice variable and $y$ a parameter, because then the  first-order condition  takes the form $f(x, y) =0$, and   the second-order condition determines the sign of $\frac{\partial f}{   \partial x}$. One only   has to make certain that the solution is an interior solution, so the first- and second-order conditions are valid.  
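As a toy illustration of equation (49) (my own example, not connected to the Bertrand game): the circle $f(x,y)=x^2+y^2-1=0$ implicitly defines $x(y)$ near the point $(0.6, 0.8)$, and the theorem's ratio matches a direct finite-difference derivative of the explicit branch:

```python
import math

def implicit_slope(df_dy, df_dx):
    """Equation (49): dx/dy = -(df/dy) / (df/dx)."""
    return -df_dy / df_dx

x, y = 0.6, 0.8
theorem = implicit_slope(2 * y, 2 * x)        # -(2y)/(2x) = -4/3

# Compare with a finite difference on the explicit branch x = sqrt(1 - y^2):
h = 1e-7
direct = (math.sqrt(1 - (y + h) ** 2) - math.sqrt(1 - y ** 2)) / h

print(round(theorem, 4), round(direct, 4))    # -1.3333 -1.3333
```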
   




In  the differentiated Bertrand game,   equilibrium 
   prices  will  lie inside the interval  ($c_n, \overline {p}$) for some large number $\overline {p}$, 
  because a price of  $c_n$ would yield zero profits, rather than the positive profits of a slightly higher price,  and    $\overline{p}$ can be chosen to    yield zero  quantity demanded  and hence zero profits. The equilibrium  or equilibria are, therefore,   interior solutions, in which case in this well-behaved problem  they satisfy the first-order condition,
   \begin{equation} \label{e.50}
   \frac{\partial  \pi_n }{ \partial p_n  } =  \alpha -  2\beta_n p_n+ \sum_{m\neq n} \gamma_m p_m + \kappa c_n \beta_n  = 0,  
     \end{equation}
 and the second-order condition,  
 \begin{equation} \label{e.51}
  \frac{\partial^2  \pi_n }{ \partial p_n^2  } = -2 \beta_n   < 0.
   \end{equation}
   
 
We can apply the implicit function theorem by  letting $\frac{\partial  \pi_n(p_n, c_n) }{ \partial p_n  } = 0$ from   equation (50)  be our $f(x,y) = 0$ and applying equation (49).   Then 
  \begin{equation} \label{e.52}
 \begin{array}{ll}
  \frac{ \partial  p_n} {  \partial c_n } &=    - \left(
  \frac{
\frac{\partial^2 \pi_n}{ \partial p_n  \partial c_n} 
   } 
 {  
 \frac{\partial^2 \pi_n}{   \partial p_n^2} 
}  
  \right)  \\
 & \\
   & =   - \left(
  \frac{
 \kappa \beta_n
   } 
 {  
  - 2  \beta_n    } \right)\\
  & \\
 & = \frac{\kappa}{2}. 
  \end{array}
  \end{equation}
 Thus, an increase in $n$'s individual cost parameter increases its price at a rate of $\frac{\kappa}{2}$. Keep in mind,  however, that  the implicit-function theorem only tells about infinitesimal changes, not finite changes. If $c_n$ increases enough, the nature of the equilibrium changes drastically, because firm $n$ goes out of business. 
  
 We cannot go on to discover the effect of changing $\kappa$ on $p_n$,  because $\kappa$ is a discrete variable, and the implicit-function theorem only applies to continuous variables.  The implicit-function theorem is nonetheless very useful when it does apply. This is a simple example, but the approach can be used even when the functions involved are very complicated. In complicated cases, knowing that the second-order condition  holds allows the modeller to avoid having to determine the sign of the denominator if all that interests him is the sign of the relationship between the two variables. 
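The rate $\frac{\kappa}{2}$ can be checked numerically on firm $n$'s best-response function, holding the other firms' prices fixed. The parameter values below are mine, chosen only so that condition (47) holds:

```python
# Hypothetical parameter values; CROSS stands in for the fixed sum of
# gamma_m * p_m over the other firms.
ALPHA, BETA, KAPPA = 10.0, 2.0, 2
CROSS = 3.0

def best_response(c_n):
    """Solve the first-order condition (50) for p_n:
    alpha - 2*beta*p_n + cross + kappa*c_n*beta = 0."""
    return (ALPHA + CROSS + KAPPA * c_n * BETA) / (2 * BETA)

# Finite-difference derivative of the best response with respect to c_n:
h = 1e-6
slope = (best_response(1.0 + h) - best_response(1.0)) / h
print(round(slope, 6))    # kappa/2 = 1.0
```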



\noindent
 {\bf Supermodularity}

The second approach uses the idea of the supermodular game, an idea related to that of strategic complements.  Suppose that  there are $N$ players in a game, subscripted by $m$ and $n$, and  that player $n$  has a strategy consisting of $k_n$  elements, subscripted by $i$ and $j$,  so his strategy is the vector  $ x_n = (x_{n1}, \ldots, x_{nk_n})$. Let his strategy set be $S_n$ and his payoff function be $\pi_n(x_n, x_{-n};  \tau)$, where $\tau$   represents a fixed parameter.  We say that the game is a {\bf smooth supermodular game} if the following four conditions are satisfied:

(A1$'$) The strategy set is an interval in ${\bf R}^{k_n}$: 
 \begin{equation} \label{e13.53}
 S_n = [\underline{x_n}, \overline{x_n}]. 
\end{equation}

 (A2$'$) $\pi_n$  is twice continuously differentiable on $S_n$.
 
(A3$'$)  (supermodularity) Increasing one component of  player $n$'s strategy does not decrease the net marginal benefit of any other component: for all $n$, and  all $i$  and $j$  such that $ 1 \leq i < j  \leq k_n$, 
\begin{equation} \label{e13.54}
 \frac{\partial^2 \pi_n}{ \partial x_{ni} \partial x_{nj}} \geq 0 .
  \end{equation}

(A4$'$) (increasing differences in one's own and other strategies) Increasing one component of $n$'s strategy does not decrease the net marginal  benefit of increasing any component of player $m$'s strategy: for all $n \neq m$, and all   $i$ and $j$ such that $1  \leq i \leq k_n$ and  $1  \leq   j \leq k_m$, 
\begin{equation} \label{e13.55}
 \frac{\partial^2 \pi_n}{ \partial x_{ni} \partial x_{mj}} \geq 0 .
  \end{equation}


In addition, we will be able to talk about the comparative statics of smooth supermodular games if a fifth condition is satisfied, the  increasing differences condition,   (A5$'$). 



(A5$'$) (increasing differences in one's own strategies and parameters) Increasing parameter $\tau$  does not decrease the net marginal benefit to player $n$ of any   component of his own strategy: for all $n$, and  all $i$    such that $ 1 \leq i  \leq k_n$, 
\begin{equation} \label{e13.56}
 \frac{\partial^2 \pi_n}{ \partial x_{ni} \partial \tau } \geq 0 .
  \end{equation}

The heart of supermodularity is in assumptions $(A3')$ and $(A4')$.  Assumption $(A3')$  says that the components of player $n$'s strategies are all {\bf complementary inputs};  when one component increases, it is worth increasing the other components too. This means that  even if a strategy is a complicated one, one can still arrive at qualitative results about the strategy,  because all the components of the optimal strategy will move in the same direction together.  Assumption (A4$'$) says that the strategies of players $m$ and $n$ are {\bf strategic complements}; when  player $m$ increases a component of his strategy, player $n$ will want to do so also. When the strategies of the players reinforce each other in this way, the feedback between them is less tangled than if they  undermined each other. 
 

I have put primes on the assumptions because they are the special cases, for smooth games, of the general definition of supermodular games (see list in Appendix B). Smooth games use differentiable functions, but the  supermodularity theorems apply more generally.    One condition that is relevant here is condition (A5): 

(A5) $\pi_n$ has increasing differences in $x_n$ and $\tau$, for fixed $x_{-n}$; 
 for all $x_n \geq x_n'$, the difference $\pi_n( x_n, x_{-n},\tau  ) - \pi_n( x_n', x_{-n}, \tau ) $ is nondecreasing with respect to $\tau$.

\bigskip
\noindent
 Is the differentiated Bertrand game  supermodular?   
  The strategy set  can be restricted to  [$c_n$, $\overline{p}$] for   player $n$, so  (A1$'$) is satisfied. 
   $\pi_n$ is twice continuously differentiable on the interval $[c_n,  \overline{p}]$, so (A2$'$) is satisfied.   
 A player's strategy has just one component, $p_n$, so (A3$'$)   is  immediately  satisfied. The following inequality is true, 
 \begin{equation} \label{e13.57}
\frac {\partial^2 \pi_n} {\partial p_n \partial p_m} = \gamma_m >0, 
\end{equation}
 so (A4$'$) is satisfied. And it is also true that
  \begin{equation} \label{e13.58}
\frac {\partial^2 \pi_n} {\partial p_n \partial c_n} = \kappa \beta_n > 0,
\end{equation}
so (A5$'$) is satisfied  for $c_n$. 

From equation (50), $\frac{\partial \pi_n}{\partial p_n}$ is increasing in $\kappa$,  so    $\pi_n(p_n, p_{-n}, \kappa  ) - \pi_n( p_n', p_{-n},  \kappa)  $ is nondecreasing in $\kappa$ for $p_n> p_n'$, and (A5) is satisfied for $\kappa$. 

Thus, all the assumptions are satisfied.  This being the case,   a number of theorems   can be applied.  Two of them are Theorems 13.1 and 13.2. 

{\bf Theorem 13.1}\\
{\it  If the game is supermodular, there exists a largest and a smallest Nash equilibrium in pure strategies.}

 
{\bf Theorem 13.2}\\
 {\it   If the game is supermodular and assumption  (A5) or (A5$'$)  is satisfied, then the largest and smallest equilibrium are nondecreasing functions of the parameter $\tau$.}

Applying Theorems 13.1 and 13.2 yields the following results for the differentiated Bertrand game:     


(1)    There exists a largest and a smallest Nash equilibrium in pure strategies (Theorem 13.1).

 (2)  The largest and smallest equilibrium prices for   firm $n$ are nondecreasing functions of the cost parameters $c_n$  and $\kappa$ (Theorem 13.2).

Note that supermodularity has yielded comparative statics on $\kappa$, unlike the implicit function theorem. It yields weaker comparative statics on $c_n$, however, because it just finds the effect of $c_n$ on $p_n^*$ to be nondecreasing, rather than telling us its value. 

Theorem 13.2 is also useful in proving that the 
  equilibrium here is, in fact, unique--- the largest and smallest equilibrium are one and the same.   
  Since
  \begin{equation} \label{e13.59}
   \frac{\partial^2  \pi_n }{ \partial p_n \partial p_m  } =  \gamma_m, 
   \end{equation}
 it will be true that 
 \begin{equation} \label{e13.60}
  - \left(
 \frac{\partial^2  \pi_n }{ \partial p_n^2  }  \right)   >  \sum_{m \neq n}  \frac{\partial^2  \pi_n }{ \partial p_n \partial p_m  } .
   \end{equation}
   Condition (60) is what is commonly called a {\bf dominant-diagonal condition}. It says that direct effects on profits are more important than all the indirect effects, so   if one expresses the second derivatives in matrix form, the main diagonal would have the largest elements. For a three-firm case that matrix would be
   \begin{equation} \label{e13.61}
 \left[  \begin{array}{lll}
\frac{\partial^2  \pi_1 }{ \partial p_1^2    } & \frac{\partial^2  \pi_1 }{        \partial p_1 \partial p_2  } & \frac{\partial^2  \pi_1 }{ \partial p_1 \partial p_3  }\\
       \frac{\partial^2  \pi_2 }{ \partial p_2 \partial p_1   } & \frac{\partial^2  \pi_2 }{ \partial p_2^2  } & \frac{\partial^2  \pi_2 }{ \partial p_2 \partial p_3  }\\
    \frac{\partial^2  \pi_3 }{ \partial p_3 \partial p_1    } & \frac{\partial^2  \pi_3 }{ \partial p_3 \partial p_2  } & \frac{\partial^2  \pi_3 }{ \partial p_3^2  }\\
 \end{array} \right]
 \end{equation}


    Suppose there were two equilibrium price  profiles, $p$ and $\hat{p}$.   Theorem 13.1 says that the largest and smallest equilibria can be ranked, so for every  strategy in the strategy profile, it would be  true that $\hat{p} \geq p$. But because the first-order condition applies at both equilibria, we know that 
   \begin{equation} \label{e13.62}
  \frac{\partial  \pi_n(p) }{ \partial p_n  } -   \frac{\partial  \pi_n(\hat{p}) }{ \partial p_n  } = 0. 
   \end{equation}
 
One can rewrite equation (62) differently. Starting at equilibrium $p$ and moving to $\hat{p}$,  the first derivative would change as all the components of $p$ changed. If we use  $t$ to index the slow changes in the components, we can write these changes as  
  \begin{equation} \label{e13.63}
  \int_0^1  \left\{ \left( (\hat{p_n} - p_n)  \cdot \frac{\partial^2  \pi_n [t \hat{p } + (1-t) p ]  }{ \partial p_n^2  }  \right)  +  
  \left( \sum_{m \neq n} (\hat{p_m} - p_m)  \cdot \frac{\partial^2  \pi_n [t \hat{p } + (1-t) p ]  }{ \partial p_n \partial p_m  }
 \right) \right\} dt. 
 \end{equation}
   Expression (63) equals the difference in (62), and so must equal zero. But if $\hat{p} \neq p$ and we pick $n$ to be the firm whose price difference $\hat{p}_n - p_n$ is largest, then the dominant-diagonal condition (60) implies that expression (63) is strictly negative. This is a contradiction, so there cannot really be two different equilibria. The biggest and smallest equilibria are one and the same, and the equilibrium is unique. 
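The uniqueness can also be illustrated numerically: under the dominant-diagonal condition, iterating best responses behaves like a contraction and converges to the same price profile from any starting point. The parameter values below are mine, chosen so that each $\beta_n$ exceeds the sum of the other firms' $\gamma$'s:

```python
# Hypothetical parameters for a three-firm differentiated Bertrand game.
ALPHAS = [10.0, 12.0, 8.0]
BETAS  = [3.0, 4.0, 3.5]
GAMMAS = [0.8, 0.9, 0.7]
COSTS  = [1.0, 1.5, 0.5]     # kappa * c_n for each firm

def best_responses(p):
    """Solve first-order condition (50) for each firm, given the others."""
    out = []
    for n in range(3):
        cross = sum(GAMMAS[m] * p[m] for m in range(3) if m != n)
        out.append((ALPHAS[n] + cross + COSTS[n] * BETAS[n]) / (2 * BETAS[n]))
    return out

def solve(start, rounds=500):
    """Iterate best responses until the prices settle down."""
    p = start
    for _ in range(rounds):
        p = best_responses(p)
    return p

low  = solve([0.0, 0.0, 0.0])
high = solve([100.0, 100.0, 100.0])
print([round(x, 6) for x in low] == [round(x, 6) for x in high])   # True
```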


    
  
\subsection{Durable Monopoly}   % sctn 13.5. 

 \noindent
 Introductory economics  courses are quite vague on the issue of the time period over which transactions take place. When a   diagram    shows the supply and demand for widgets, the $x$-axis is labelled ``widgets,'' not ``widgets per week'' or ``widgets per year.''  Also, the diagram splits off one time period from future time periods, using the implicit assumption that supply and demand in one period is unaffected by events of future periods. One problem with this on the demand side is that the purchase of a good which lasts for more than one use  is an investment;  although the price is paid now, the utility from the good continues into the future. If Smith buys a house, he is buying not just the  right to live in the house tomorrow, but the right to live in it for many years to come, or even to  live in it for a few years and then sell the remaining years to someone else.  The continuing  utility he receives from this durable good is called its {\bf service flow}.    Even though he may not intend to rent out the house, it is an investment decision for him because it trades off present  expenditure  for future utility.    Since  even a shirt produces a service flow over more than an instant of time, the durability of goods presents difficult definitional problems for national income accounts. Houses are counted as part of national  investment (and an estimate of their service flow as part of services consumption),  automobiles as durable goods consumption, and shirts as nondurable goods consumption, but all are  to some extent durable investments.   

  In microeconomic theory,    
``durable monopoly'' refers not to monopolies that last a long time, but to monopolies that sell durable goods.  These present a curious problem.     
  When a monopolist sells
something like a  refrigerator to a consumer, that consumer drops out
of the market   until the refrigerator wears out. The
demand curve is, therefore, changing over time as a result of the
monopolist's choice of price, which means that the modeller cannot analyze one period in isolation and ignore future periods.  Demand is not {\bf time separable},
because a rise in price at time $t_1$ affects the quantity demanded
at time $t_2$.

  The  durable monopolist has a special problem because in a sense he does have a   competitor--- himself in the later periods. If he were to set  a
high price in the first period, thereby removing   high-demand buyers from
the market, he would be tempted to set a lower price in the next
period to take advantage of the remaining consumers. But if it were
known he would lower the price, the high-demand buyers would not buy
at a high price in the first period.  The threat of the future low
price forces the monopolist to keep his current price low.

 
 To formalize this situation,  let  the   seller have a
monopoly on a durable good  which lasts two periods. He must set a price for each period, and the    buyer must decide
what quantity  to buy in each period.  Because this one
buyer is meant to represent the entire market demand, the moves are
ordered so that he has no market power, as in the principal-agent
models in section 7.3.  Alternatively, the buyer can be viewed as
representing a continuum of consumers (see Coase [1972] and Bulow
[1982]).  In this interpretation,  instead of  ``the buyer'' buying  $q_1$ in  the  first period, $q_1$ of the buyers each buy one unit in the first period. 
  

\begin{center}
{\bf Durable Monopoly}
\end{center}
  {\bf Players}\\
  A buyer and a seller.

 
\noindent
 {\bf Order of Play }\\
 (1) The seller picks  the first-period    price, $p_1$.\\
 (2) The buyer buys quantity  $q_1$ and consumes service flow $q_1$.\\
 (3) The seller picks  the second-period price, $p_2$.\\
 (4) The buyer buys   additional quantity  $q_2$ and consumes service flow $(q_1+q_2)$.

\noindent
  {\bf Payoffs}\\
 Production cost is zero and there is no discounting. The seller's
payoff is his revenue, and the buyer's payoff is the sum across periods of
his benefits  from consumption minus his expenditure.  His
benefits arise from his being willing to pay as much as
 \begin{equation} \label{e13.64}
 B(q_t) = 60 - \frac{q_t}{2} 
 \end{equation}
  for the marginal unit  service flow  consumed in period $t$, as  shown in figure 13.5. The payoffs are therefore
    \begin{equation} \label{e13.65}
  \begin{array}{lllr}
 \pi_{seller} & = & q_1 p_1 + q_2p_2  &  \\
\end{array}
\end{equation}
 and 
\begin{equation}\label{e13.66}
   \begin{array}{lllr}
    \pi_{buyer} & =& [consumer \;surplus_1] + [consumer \;surplus_2]\\ 
 & & & \\
  & =& [total \;benefit_1 - expenditure_1]+  [total \;benefit_2 - expenditure_2] &\\
 & & & \\
  & =& \left[\frac{(60-B(q_1))q_1}{2}   + B(q_1)q_1 - p_1q_1 \right] &\\
 & & & \\
  & & + \left[\frac{60-B(q_1 + q_2)}{2} \left( q_1 + q_2 \right) +
B(q_1+q_2)(q_1+q_2) - p_2q_2 \right] &  
\end{array}
  \end{equation}

 Thinking about durable monopoly is hard   because we are used to one-period models in which the demand curve, which relates the price to the quantity demanded, is identical to  the marginal-benefit curve, which relates the marginal benefit to the quantity consumed.  Here, the two curves are different. 
The marginal benefit curve is the same each period,  since it is part of the rules of the game, relating consumption to utility.  
The demand curve changes over time and depends on the equilibrium strategies, since it depends on the number of periods left in which to consume the good's services, on expected future prices, and on the quantity already owned. Marginal benefit is a given for the buyer; quantity demanded is his strategy.    
 

   The  buyer's  total benefit  in period 1 is the dollar value of his utility from his purchase of $q_1$, which equals the  amount he would have
been willing to pay to rent $q_1$.  This is composed of the two areas shown in figure 13.5a: the upper triangle of area $ \left(\frac{1}{2} \right) q_1 \left( 60-B(q_1) \right)  $ and the lower rectangle of area  $q_1 B(q_1)$. From this must be subtracted his expenditure in period 1,   $p_1q_1$,  to obtain what we might call his consumer surplus in the first period. Note that $p_1 q_1$ will not equal the lower rectangle, except by some strange accident, and the ``consumer   surplus'' might easily  be negative, since the expenditure in period  1 also yields utility in period 2 because the good is durable. 

\begin{center}
 {\bf Figure 13.5 Buyer's Marginal Benefit per Period in the Game of Durable Monopoly }
\end{center}

\epsfysize=3in

 
\epsffile{/Users/erasmuse/AAANewChapters/Figures/f_12.5.eps}

   To find the equilibrium price path one cannot simply differentiate the seller's utility with respect to $p_1$ and $p_2$, because that would ignore  the sequential rationality of the seller and the rational response  of the buyer.  Instead, one must look for a subgame perfect equilibrium, which means starting in the second period and discovering how much the buyer would purchase given his first-period purchase of $q_1$, and what  second-period price the seller would charge given the buyer's second-period demand function. 


   In the first period, the marginal unit  consumed was the $q_1$-th. In the second period, it will be the $(q_1+q_2)$-th.   The residual demand curve    after the first
period's purchases is shown in figure 13.5a.  It is a demand curve very much like the demand curve resulting from intensity rationing in the capacity-constrained Bertrand game of section 13.2, as shown in figure 13.2a. The most intense portion of the buyer's demand, up to $q_1$ units, has already been satisfied, and what is left begins with a marginal benefit of $B(q_1)$,  and falls at the same slope as the original marginal benefit curve. The equation for the residual demand is    
therefore, using  equation (64), 
 \begin{equation}\label{e13.67}
 p_2    = B(q_1) - \frac{q_2}{2}= 60 - \frac{(q_1 )}{2}- \frac{(q_2 )}{2}.
\end{equation}
   To find the monopoly quantity, $q_2^*$, note that the seller  maximizes
$q_2p_2$, solving the problem
 \begin{equation}\label{e13.68}
 \stackrel{Maximize}{q_2}  q_2\left(60 - \frac{q_1 + q_2 }{2} \right),
 \end{equation}
 which generates the first-order condition
 \begin{equation}\label{e13.69}
60 - q_2 - q_1/2 = 0,
\end{equation}
 so that
 \begin{equation}\label{e13.70}
q_2^* = 60 - q_1/2. 
 \end{equation}
From equations (64) and (70),  it can be seen that $ p_2^* = 30 -q_1/4$.
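As a numerical check on the derivation in equations (67)--(70), the following Python sketch (not part of the original text; the function names are invented for illustration) brute-forces the seller's second-period best response and compares it with the closed forms:

```python
# Numerical check of the seller's second-period best response.
# Residual demand: p2 = 60 - (q1 + q2)/2  (equation 67).
def p2(q1, q2):
    return 60 - (q1 + q2) / 2

def best_q2(q1, step=0.001):
    # Brute-force search for the q2 that maximizes second-period revenue q2 * p2.
    grid = [i * step for i in range(int(120 / step))]
    return max(grid, key=lambda q2: q2 * p2(q1, q2))

for q1 in (0, 24, 48, 60):
    q2_star = best_q2(q1)
    assert abs(q2_star - (60 - q1 / 2)) < 0.01          # equation (70)
    assert abs(p2(q1, q2_star) - (30 - q1 / 4)) < 0.01  # p2* = 30 - q1/4
```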


   We must now find $q_1^*$.  In period one,  the buyer looks ahead to
the possibility of buying in period two at a lower price. Buying in the first period has two benefits: consumption of the service flow in the first period and consumption of the service flow in the second period.  The price
he would pay for a unit in period one cannot exceed  the marginal benefit from the first-period service flow in
period one plus the foreseen value of $p_2$, which from (70)
is $30-q_1/4$. If the seller chooses to sell $q_1$ in the first period, therefore,   he can do so at the price 
   \begin{equation}\label{e13.71}
\begin{array}{ll}
  p_1 (q_1)&= B(q_1) + p_2  \\
 & = ( 60 - q_1/2) +( 30 - q_1/4),\\
  &= 90 - \frac{3}{4}q_1.\\
\end{array}
\end{equation}
 Knowing that in the second period he will choose $q_2$ according to
(70), the seller combines (70) with (71) to  give the maximand in the   problem of choosing $q_1$ to maximize profit over the two periods, which is 
  \begin{equation}\label{e13.72}
\begin{array}{ll}
     \left(p_1 q_1 + p_2q_2 \right) & = (90 - \frac{3}{4}q_1)q_1 +
( 30 - q_1/4) ( 60 - q_1/2)\\
 & = 1800 + 60q_1 - \frac{5}{8}q_1^2,
 \end{array}
  \end{equation}
 which has the first-order condition 
 \begin{equation}\label{e13.73}
60 - \frac{5}{4} q_1 = 0,
\end{equation}
 so that   
 \begin{equation}\label{e13.74}
q_1^* = 48 
\end{equation}
  and, making use of (71), $p_1^* = 54$.

  It follows from (70) that $q_2^*=36$ and $p_2^* = 18$. The
seller's profits over the two periods are  $\pi_s = 3,240$ ($= 54(48) + 18(36)$).
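These equilibrium values can be verified by working the backward induction numerically. The sketch below (illustrative only; the names are invented) uses the closed forms (70) and (71) and searches over $q_1$:

```python
# Backward induction for the two-period durable monopoly game.
# Second period: given q1, the seller's best response (equation 70).
def q2_star(q1):
    return 60 - q1 / 2

def p2_star(q1):
    return 30 - q1 / 4

# First period: the price the buyer will pay for q1 (equation 71),
# p1 = B(q1) + p2* = (60 - q1/2) + (30 - q1/4).
def p1(q1):
    return 90 - 0.75 * q1

def total_profit(q1):
    return p1(q1) * q1 + p2_star(q1) * q2_star(q1)

# Brute-force the profit-maximizing q1 and compare with the text's solution.
grid = [i * 0.001 for i in range(120000)]
q1_best = max(grid, key=total_profit)

assert abs(q1_best - 48) < 0.01             # q1* = 48
assert abs(p1(q1_best) - 54) < 0.01         # p1* = 54
assert abs(q2_star(48) - 36) < 1e-9         # q2* = 36
assert abs(p2_star(48) - 18) < 1e-9         # p2* = 18
assert abs(total_profit(48) - 3240) < 1e-6  # seller's two-period profit
```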

    The purpose of these calculations is to compare the situation
with three other market structures: a competitive market, a
monopolist who rents instead of selling, and a monopolist who commits
to selling only in the first period. 


A {\it competitive market} bids
down the price to the marginal cost of zero. Then,  $p_1 = 0$ and $q_1 =
120$ from (64), and profits equal zero.

  If the monopolist {\it rents } instead of selling, then equation (66)
is like an ordinary demand equation, because the monopolist is
effectively selling the good's services separately each period. He
could rent a quantity of 60 each period at a rental fee of $30$ and
his profits would sum to $\pi_s = 3,600$.  That is higher than 3,240,
so profits are higher from renting than from selling outright.  
 The problem  with selling outright  is that the first-period price cannot be very high or  the buyer knows that the seller will be
tempted to lower the price once the buyer has bought in the first
period.  Renting avoids this problem. 
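The renting outcome can also be checked numerically. A sketch (not part of the original text), assuming the per-period rental demand $p = 60 - q/2$ implied by equation (64):

```python
# Renting: the seller solves a fresh one-period monopoly problem each period.
def rental_profit(q):
    return q * (60 - q / 2)  # rental fee times quantity rented

# Brute-force the per-period optimum.
grid = [i * 0.001 for i in range(120000)]
q_rent = max(grid, key=rental_profit)

assert abs(q_rent - 60) < 0.01                       # rent 60 units each period
assert abs((60 - q_rent / 2) - 30) < 0.01            # at a rental fee of 30
assert abs(2 * rental_profit(q_rent) - 3600) < 0.1   # two-period profit 3,600 > 3,240
```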
 

  If the monopolist can {\it commit to not producing in the second period},
he will do just as well as the monopolist who rents, since he can
sell a quantity of 60 at a price of 60, the sum of the rents for the
two periods.   An example is the artist who breaks the plates for his
engravings after a production run of announced size.   We must also
assume that the artist can convince the market that he has broken the
plates.
 People joke that the best way an artist can increase the value of his work is by dying, and that, too,  fits the  model. 


 If the modeller ignored sequential rationality and simply looked for  the Nash equilibrium that maximized the payoff of the seller by his choice of $p_1$ and $p_2$, he would come to the commitment result. An example of such an equilibrium is ($p_1=60$, $p_2=200$, {\it Buyer purchases according to $q_1 = 120-p_1$, and $q_2=0$}). This is Nash because neither player has an incentive to deviate given the other's strategy, but it fails to be subgame perfect, because the  seller should realize that if he deviates and  chooses a  lower price once the second period is reached, the buyer will respond by  deviating from $q_2=0$ and will buy more units. 


 With more than two periods, the difficulties of the durable-goods
monopolist become even more striking. In an infinite-period model
without discounting, if the marginal cost of production is zero,
the equilibrium price for outright sale instead of renting is
constant--- at zero!  Think about this in the context of a model with
many buyers.  Early consumers foresee that the monopolist has an
incentive to cut the price after they buy, in order to sell to the
remaining consumers who value the product less. In fact, the
monopolist would continue to cut the price and sell more and more
units to consumers with weaker and weaker demand until the price fell
to marginal cost.  Without discounting, even the high-valuation
consumers refuse to buy at a high price, because they know they could
wait until the price falls to zero. And this is not a trick of
infinity: a large number of periods generates a price close to zero. 

  We can also use the durable monopoly model to think about the
durability of the product.  If the seller can develop a product so
flimsy that it only lasts one period, that is equivalent to renting.
A consumer is willing to pay the same price to own a one-hoss shay
that he knows will break down in one year as he would pay to rent it
for a year.  Low durability leads to the same output and profits as
renting, which explains why a firm with market power might produce
goods that wear out quickly. The explanation is not that the
monopolist can use his market power to inflict lower quality on
consumers--- after all, the price he receives is lower too--- but
that the lower durability makes it credible to high-valuation buyers
that the seller expects their business in the future and will not
lower his price.
  
 

 
\begin{small}


\bigskip 
 \noindent
  {\bf Notes}

\noindent
 {\bf N13.1} {\bf Quantities as Strategies: the Cournot Equilibrium
Revisited} 
\begin{itemize}
\item
 Articles on the existence of a pure-strategy equilibrium in the
Cournot model include Novshek (1985) and Roberts \& Sonnenschein
(1976).

 \item
 {\bf Merger in a Cournot Model.} A problem with the Cournot model is
that a firm's best policy is often to split up into separate firms.
Apex gets half the industry profits in a duopoly game.  If Apex
split into firms $Apex_1$ and $Apex_2$, it would get two thirds
of the profit in the Cournot triopoly game, even though industry
profit falls.   

$\;\;\;$ This point was made by Salant, Switzer, \& Reynolds (1983)  and is the subject of problem 13.2.  
It is interesting that nobody noted this earlier, given the intense
interest in Cournot models. The insight comes from asking whether a
player could improve his lot if his strategy space were expanded in
reasonable ways.

\item
 An ingenious look at how the number of firms in a market affects the price is Bresnahan \& Reiss (1991), which looks  empirically at a number of  very small markets with one, two, three or more competing firms. They find a big decline in the price from one to two firms, a smaller decline from two to three, and not much change thereafter. 

Exemplifying theory, as discussed in the Introduction to this book, lends itself to explaining particular cases, but it is much less useful for making generalizations across industries.  Empirical work associated with exemplifying theory tends to  consist of historical anecdote rather than the linear regressions to which economics has become accustomed.  Generalization and econometrics are still often useful in industrial organization, however, as  Bresnahan \& Reiss (1991)   shows. The most ambitious attempt to  connect general data with the modern theory of industrial organization is Sutton's 1991 book, {\it Sunk Costs and Market Structure}, which is an extraordinarily well-balanced mix of theory, history, and numerical data.  
 
\item
  The idea of conjectural variation is attributed to Bowley (1924)
and is discussed in Jacquemin (1985) and  Varian (1992, p. 302). 

\item
   Do not confuse a conjectural variation of $-1$ with perfect
competition, even though both may lead to the efficient output. In
perfect competition, individuals do not believe that they affect the
rest of the market, but if $CV = -1$, a firm believes that other firms
will cut back when it produces more. Perfect competition is more like
a game with players so small relative to the market that even though
$CV = 0$, as in Nash equilibrium, each player correctly believes that
his actions have a trivial effect on the market price.

 
  \end{itemize}
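The splitting-up point in the merger note above can be illustrated with symmetric $n$-firm Cournot competition under the linear demand $P = 1 - Q$ and zero costs of problem 13.2 (the demand specification is an assumption for this sketch, not part of the note):

```python
from fractions import Fraction

def cournot_profit_per_firm(n):
    # Symmetric Cournot with inverse demand P = 1 - Q and zero marginal cost:
    # each firm produces 1/(n+1), so price = 1/(n+1) and profit = 1/(n+1)^2.
    return Fraction(1, (n + 1) ** 2)

apex_as_one_firm = cournot_profit_per_firm(2)       # duopoly: Apex earns 1/9
apex_split_in_two = 2 * cournot_profit_per_firm(3)  # triopoly: Apex_1 + Apex_2 earn 2/16 = 1/8

assert apex_as_one_firm == Fraction(1, 9)
assert apex_split_in_two == Fraction(1, 8)
assert apex_split_in_two > apex_as_one_firm  # splitting raises Apex's total profit,
# even though industry profit falls: 3/16 < 2/9.
assert 3 * cournot_profit_per_firm(3) < 2 * cournot_profit_per_firm(2)
```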

 

\bigskip

\noindent 
 {\bf N13.2} {\bf Prices as Strategies: the Bertrand Equilibrium} %10.3
\begin{itemize}
 \item
  Intensity rationing has also been called {\bf efficient rationing}.
Sometimes, however, as in section 13.2, this rationing rule is
inefficient. Some low-intensity customers left facing the high price
decide not to buy the product even though their benefit is greater
than its marginal cost. The reason intensity rationing has been
thought to be efficient is that it is efficient if the rationed-out
customers are unable to buy at any price.

 \item
  OPEC has tried both price and quantity controls (``OPEC, Seeking
Flexibility, May Choose Not to Set Oil Prices, but to Fix Output,''
{\it Wall Street Journal},  October 8, 1987, p. 2,29; ``Saudi King Fahd is
Urged by Aides To Link Oil Prices to Spot Markets,'' {\it Wall Street
Journal},  October 7, 1987, p. 2).  Weitzman (1974) is the classic
reference on price vs. quantity control by regulators, although he
does not use the context of oligopoly.  The decision rests partly on
enforceability, and OPEC has also hired accounting firms to monitor
prices (``Dutch Accountants Take On a Formidable Task: Ferreting Out
`Cheaters' in the Ranks of OPEC,'' {\it Wall Street Journal}, 
February 26, 1985, p.  39).


\item
  Kreps \& Scheinkman (1983) show how capacity choice and Bertrand
pricing can lead to a Cournot outcome.  Two firms face
downward-sloping market demand.  In the first stage of the game, they
simultaneously choose capacities, and in the second stage they
simultaneously choose prices (possibly by mixed strategies).  If a
firm cannot satisfy the demand facing it in the second stage (because
of the capacity limit), it uses intensity rationing. (The results
depend on this.)  The unique subgame perfect equilibrium is for each
firm to choose the Cournot capacity and price. 


 
   \item
     Haltiwanger \& Waldman (unpub) have suggested a dichotomy
applicable to many different games between players who are {\bf
responders}, choosing their actions flexibly, and those who are {\bf
nonresponders}, who are inflexible. A player might be a nonresponder
because he is irrational, because he moves first, or simply because
his strategy set is small. The categories are used in a second
dichotomy, between games exhibiting {\bf synergism}, in which
responders choose to do whatever the majority do (upward sloping
reaction curves), and games exhibiting {\bf congestion}, in which
responders want to join the minority (downward sloping reaction
curves).  Under synergism, the equilibrium is more like what it would
be if all the players were nonresponders; under congestion, the
responders have more influence.  Haltiwanger \& Waldman apply the
dichotomies to network externalities, efficiency wages, and
reputation.  

$\;\;\;$ If the reaction functions of two firms are upward sloping, it has
been said that the actions are {\bf strategic complements}, and if
they are downward sloping, {\bf strategic substitutes} (Bulow,
Geanakoplos, \& Klemperer [1985]).  Gal-Or (1985) notes that if
reaction curves slope down (as in Cournot) there is a first-mover
advantage, while if they slope upwards (as in differentiated Bertrand)
there is a second-mover advantage. 

\item
  Section 13.3 shows how to generate demand curves (20) and
(21) using a location model, but they can also be generated
directly by a quadratic utility function. Dixit (1979) shows that for three goods 0, 1, and 2, the utility function
 \begin{equation} \label{e13.75}
 U = q_0 + \alpha_1 q_1 + \alpha_2 q_2 - \frac{1}{2} \left(\beta_1
q_1^2 + 2 \gamma q_1 q_2 + \beta_2 q_2^2 \right)
  \end{equation}
 (where the constants $\alpha_1,\alpha_2, \beta_1$, and $\beta_2$ are
positive and $\gamma^2 \leq \beta_1 \beta_2$) generates the inverse
demand functions
  \begin{equation} \label{e13.76}
 p_1 = \alpha_1 - \beta_1 q_1 - \gamma q_2
\end{equation}
 and 
\begin{equation} \label{e13.77}
 p_2 = \alpha_2 - \beta_2 q_2 - \gamma q_1.
\end{equation}
   \end{itemize}
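As a check on the last note above, differentiating the utility function (75) with respect to $q_1$ and $q_2$ (treating good 0 as the numeraire, so that $p_i = \partial U/\partial q_i$) should recover the inverse demands (76) and (77). A sketch using finite differences at arbitrary illustrative parameter values (chosen so that $\gamma^2 \leq \beta_1 \beta_2$ holds):

```python
# Numeric check: partial derivatives of the quadratic utility (75)
# recover the inverse demands (76) and (77).
def U(q1, q2, a1=3.0, a2=4.0, b1=2.0, b2=2.5, g=1.0, q0=0.0):
    return q0 + a1*q1 + a2*q2 - 0.5*(b1*q1**2 + 2*g*q1*q2 + b2*q2**2)

def partial(f, i, point, h=1e-6):
    # Central finite-difference approximation to the i-th partial derivative.
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

q = (0.7, 1.3)
p1 = partial(U, 0, q)  # dU/dq1
p2 = partial(U, 1, q)  # dU/dq2

a1, a2, b1, b2, g = 3.0, 4.0, 2.0, 2.5, 1.0
assert abs(p1 - (a1 - b1*q[0] - g*q[1])) < 1e-5  # equation (76)
assert abs(p2 - (a2 - b2*q[1] - g*q[0])) < 1e-5  # equation (77)
```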

\bigskip
\noindent
{\bf N13.3} {\bf Location Models}
\begin{itemize}
 \item
 For a booklength treatment of location models, see Greenhut \& Ohta (1975).  

 \item
 Vickrey notes the possible absence of a pure-strategy equilibrium in Hotelling's model on pp. 323-24 of his 1964 book {\it Microstatics}, but does not go on to consider the mixed-strategy equilibrium that can be found in  D'Aspremont et al. (1979).     

  
  \item
 Location models and switching cost models are attempts to go beyond
the notion of a market price.  Antitrust cases are good sources for
descriptions of the complexities of pricing in particular markets.
See, for example, Sultan's 1974 book on electrical equipment in the
1950's, or antitrust opinions such as {\it US v. Addyston Pipe
\& Steel Co.  et al.}, 85 Fed 271.

 

\item
 It is important in location models whether the positions of the
players on the line are moveable. See, for example, Lane (1980).

\item
   The location games in this chapter use a one-dimensional
space with end points, i.e., a line segment. Another kind of
one-dimensional space is a circle (not to be confused with a disk).
The difference is that no point on a circle is distinctive, so no
consumer preference can be called extreme.  It is, if you like,
Peoria versus Berkeley.  The circle might be used for modelling
convenience or because it fits a situation: e.g., airline flights
spread over the 24 hours of the day.  With two players, the Hotelling
location game on a circle has a continuum of pure-strategy equilibria that are one 
of two types: both players locating at the same spot, versus
players separated from each other by 180$^\circ$. The three-player
model also has a continuum of pure-strategy equilibria, each player
separated from another by 120$^\circ$, in contrast to the
nonexistence of a pure-strategy equilibrium when the game is played
on a line segment.

 \item
 Characteristics such as the color of cars could be modelled as
location, but only on a player-by-player basis, because they have no
natural ordering. While Smith's ranking of (red=1, yellow=2, blue=10)
could be depicted on a line, if Brown's ranking is (red=1, blue=5,
yellow=6) we cannot use the same line for him. In the text, the
characteristic was something like physical location, about which
people may have different preferences but agree on what positions are
close to what other positions.
  \end{itemize}

 
\bigskip
\noindent
{\bf N13.5} {\bf Durable Monopoly}
\begin{itemize}
 \item
 The proposition that price falls to marginal cost in a durable
monopoly with no discounting and infinite time is called the ``Coase
Conjecture,'' after Coase (1972).  It is really a proposition and not a conjecture,
but alliteration was too strong to resist.

\item
  Gaskins (1974) has written a well-known article on the problem of
the durable monopolist who foresees that he will be creating his own
future competition because his product can be recycled, using the
context of the aluminum market. 

\item
  Leasing by a durable monopoly was the main issue in the antitrust
case {\it US v. United Shoe Machinery Corporation}, 110 F. Supp. 295
(1953), but not because it increased monopoly profits. The complaint
was rather that long-term leasing impeded entry by new sellers of
shoe machinery, a curious idea when the proposed alternative was
outright sale.  More likely, leasing was used as a form of financing
for the machinery customers; by leasing, they did not need to borrow
as they would have had to do to finance a purchase.  See Wiley {\it et al.} (1990).


\item
   Another way out of the durable monopolist's problem is to give
best-price guarantees to customers, promising to refund part of the
purchase price if any future customer gets a lower price. Perversely,
this hurts consumers, because it stops the seller from being tempted
to lower his price.  The ``most-favored-nation'' contract, which is
the analogous contract in markets with several sellers, is analyzed
by Holt \& Scheffman (1987), for example, who demonstrate how it can
maintain high prices, and Png \& D. Hirshleifer (1987), who show how it
can be used to price discriminate between different types of buyers.

\item
 The durable monopoly model should remind you of bargaining under incomplete information.  Both situations can be modelled using two periods, and in both situations the problem for the seller is that he is tempted to offer a low price in the second period after having offered  a high price in the first period.  In the durable monopoly model this would happen if the high-valuation buyers bought in the first period and thus were absent from consideration by the second period. In  the bargaining model  it would happen if the buyer rejected the first-period offer and the seller could conclude that  he must have a low valuation and act accordingly in the second period. With a rational buyer, neither of these things can happen in equilibrium,  and the models' complications arise from the seller's attempts to get around the problem.  



For further discussion, see the survey by Kennan \& R. Wilson (1993).



 \end{itemize}

{\bf Problems}
\bigskip

 {\bf 13.1: Differentiated Bertrand with Advertising.}\\

 Two firms that
produce substitutes are competing with demand curves
 \begin{equation} \label{e13.78}
 q_1= 10 - \alpha p_1 + \beta p_2
 \end{equation}
 and
 \begin{equation} \label{e13.79}
 q_2= 10 - \alpha p_2 + \beta p_1.
 \end{equation} 
 Marginal cost is constant at $c=3$.  A player's strategy is his
price. Assume that $ \alpha > \beta/2.$

 \hspace*{16pt}(13.1a) What is the reaction function for Firm 1? Draw
the reaction curves for both firms.  


   \hspace*{16pt} (13.1b) What is the equilibrium? What is the
equilibrium quantity for Firm 1?   

   \hspace*{16pt} (13.1c) Show how Firm 2's reaction function changes
when $\beta$ increases. What happens to the reaction curves in the
diagram?  
  
 
      \hspace*{16pt} (13.1d) Suppose that an advertising campaign
could increase the value of $\beta$ by one, and that this would
increase the profits of each firm by more than the cost of the
campaign. What does this mean? If either firm could pay for this
campaign, what game would result between them?  



\bigskip
 {\bf 13.2: Cournot Mergers.}\footnote{ See Salant, Switzer, \& Reynolds (1983).}\\


  There are three identical firms in an industry with demand given by
$P = 1-Q$, where $Q = q_1+q_2+q_3$.  The marginal cost is zero. 
 
       \hspace*{16pt}(13.2a) Compute the Cournot equilibrium price
and quantities.  

       \hspace*{16pt} (13.2b) How do you know that there are no
asymmetric Cournot equilibria, in which one firm produces a different
amount than the others?  


      \hspace*{16pt} (13.2c) Show that if two of the firms merge,
their shareholders are worse off.   
    

 
  {\bf 13.3: Differentiated Bertrand.}\\

 Two firms that produce
substitutes  have the   demand curves 
   \begin{equation} \label{e13.80}
 q_1=  1 - \alpha p_1 + \beta (p_2-p_1)    
 \end{equation}
 and
 \begin{equation} \label{e13.81}
 q_2= 1 -   \alpha p_2 + \beta (p_1-p_2),
 \end{equation}
  where $\alpha > \beta$.  Marginal cost is constant at $c$, where
$c < 1/\alpha$.  A player's strategy is his price. 


 \hspace*{16pt} (13.3a) What are the equations for the reaction
curves $p_1(p_2)$ and $p_2(p_1)$? Draw them. 

\hspace*{16pt} (13.3b) What is the pure-strategy equilibrium for this
game?  
 
   \hspace*{16pt}(13.3c) What happens to prices if $\alpha$, $\beta$,
or $c$ increase?   

  \hspace*{16pt}(13.3d) What happens to each firm's price if $\alpha$
increases, but only Firm 2 realizes it (and Firm 2 knows that Firm 1
is uninformed)?  Would Firm 2 reveal the change to Firm 1?  

 






\bigskip
\noindent
 {\bf 13.4: Asymmetric Cournot Duopoly.}\\

    Apex has variable costs of $q_a^2$ and a fixed cost of 1000, while Brydox has variable costs of $2q_b^2$ and no fixed cost.   Demand is $p = 115 - q_a - q_b$.

       \hspace*{16pt}(13.4a) What is the equation for Apex' Cournot
reaction function? 

\hspace*{16pt}(13.4b) What is the equation for Brydox' Cournot
reaction function?

      \hspace*{16pt}(13.4c) What are the outputs and profits in the
Cournot equilibrium? 



\end{small}


\end{document}



