     \documentstyle[12pt,epsf] {article}
\parskip 10pt
\reversemarginpar
   \topmargin  -.4in
  \oddsidemargin .25in
  \textheight  8.7in
 \textwidth 6in  
    
         \begin{document}
    \parindent 24pt
\parskip 10pt

\setcounter{section}{8}
 \setcounter{page}{249}

\noindent
June 29, 1993.april 20, 1999
  
\begin{large} 



\section*{ 9   Mechanism Design in Adverse Selection and in Moral Hazard with Hidden Information } 


\noindent
 {\bf  9.1 Moral Hazard with Hidden Knowledge.}
  
\noindent
    In Chapter 9  we will  look at mechanism design. Section 9.1  introduces hidden knowledge
and distinguishes between pooling and
separating   equilibria.    It also discusses a modelling simplification called
the Revelation Principle.  Section 9.2 uses diagrams to apply the
model to the selection of a sales strategy.

  
   Information is complete in moral hazard games, but  in  moral hazard with hidden knowledge,   the agent, but not the principal,  observes a move of Nature  after the game begins.   Information is symmetric at the time of contracting, but becomes asymmetric later.    From the principal's point of view, agents are identical at the beginning of the game  but   develop  private types midway through it,  
  depending on what they have seen. His  chief concern is to
 give them incentives to disclose   their types later,  which gives games with hidden knowledge   a flavor close to that of the adverse-selection models   in chapter 8.  The agent may exert effort, but effort's 
contractibility is  less important when the principal does not know which effort is appropriate because he is  ignorant of the
state of the world chosen by Nature.   The main difference  in technical  analysis between moral hazard with hidden knowledge and adverse selection is that if the game begins with symmetric information and   only becomes asymmetric after a contract has been agreed upon,  the contract must satisfy a participation constraint which takes into account the fact that the agent's type is not yet known to him.

 There is more hope for obtaining efficient outcomes in  
moral hazard with hidden knowledge  than in the  other two kinds of games of asymmetric information. The advantage over adverse selection  is that information is symmetric at the time of contracting, so  neither player can use    private information to extract surplus from the other by choosing inefficient contract terms. The advantage over   hidden actions is that the post-contractual asymmetry is with respect to knowledge only,  which is neutral in itself, rather than over whether the  agent exerted high effort, which  causes direct disutility to him. 

  

 For a comparison  between the two types of moral hazard, let us modify Production Game V from section 7.2 to
turn it into a game of hidden knowledge.




\begin{center}
 {\bf Production Game VI: Hidden Knowledge}
 \end{center}
  {\bf Players}\\
  The principal and the agent.

 \noindent
 {\bf Order of Play}\\
 (1) The principal offers the agent a wage contract of the form
$w(q,m)$.\\
 (2) The agent accepts or rejects the principal's offer.\\
 (3) Nature chooses the state of the world $\theta$, according to
probability distribution $F(\theta)$.  The agent observes $\theta$,
but the principal does not.\\
  (4) If the agent accepts, he exerts effort $e$ and sends a message
$m$, both observed by the principal.\\
 (5) Output is $q(e, \theta)$.

\noindent
 {\bf Payoffs}\\
 If the agent rejects the contract,   $\pi_{agent} = \bar{U}$ and
$ \pi_{principal} = 0$.\\
 If the agent accepts the contract, $\pi_{agent}= U(e,w,\theta)$
and $\pi_{principal}= V(q - w)$.

 \bigskip
 
   The principal would like to know $\theta$ so he can tell which effort level is    appropriate. In an ideal world he would employ 
  an honest agent who always chose $m=\theta$,   but in 
noncooperative games, talk is cheap.  Since the
agent's words are worthless, the principal must try to design a
contract that either provides incentives for truthfulness or takes
lying into account.    The {\bf mechanism design} literature addresses   this question. It says that the principal  {\bf implements} a {\bf mechanism} to
extract the agent's information. 


\bigskip
\noindent
{\bf Pooling and Separating Equilibria} 

\noindent
   In hidden-action models, the principal tries to construct a
contract which will induce the agent to take the single appropriate
action. In hidden-knowledge models, the principal tries to make
different actions attractive under different states of the world, so
the agent's choice depends on the hidden state.

 \noindent
  {\it If all types of agents pick the same strategy in all states,
the equilibrium is {\bf pooling}.  Otherwise, it is {\bf
separating}.}

 The distinction between pooling and separating is different from the distinction between equilibrium concepts. 
  A model might have multiple Nash equilibria,
some pooling and some separating. Moreover, a single equilibrium---
even a pooling one--- can include several contracts, but if it is
pooling the agent always uses the same strategy, regardless of type.
If the agent's equilibrium strategy is mixed, the equilibrium is
pooling if the agent always picks the same mixed strategy, even
though the messages and efforts would differ across realizations of
the game.

 These two terms came up in section 6.2 in the game of PhD
Admissions.   Neither type  of student applied in the pooling
equilibrium, but   one type did in the separating equilibrium. In a principal-agent model, the principal tries to design the contract to achieve separation unless the incentives turn out to be too costly. 


   A separating contract need not be fully separating. If agents who
observe $\theta \leq 4$ accept contract $C_1$  but other agents
accept $C_2$, then the equilibrium is separating but it does not
separate out every type. We say that the equilibrium is {\bf fully
revealing} if the agent's choice of contract always conveys his
private information to the principal.  Between pooling and fully
revealing equilibria are the {\bf imperfectly separating} equilibria
synonymously called {\bf semi-separating}, {\bf partially
separating}, {\bf partially revealing}, or {\bf partially pooling } equilibria.

  The principal's problem, as in Production Game V, is to maximize
his profits subject to 

\noindent 
 (1) {\bf Incentive compatibility} (the agent picks the desired contract
and actions). 

\noindent
  (2) {\bf Participation} (the agent prefers the contract to his
reservation utility). 
   
In a model with hidden knowledge, the incentive compatibility constraint is customarily  called the {\bf self-selection constraint}, because it
induces the different types of agents to pick different contracts.
As with hidden actions, if principals compete in offering contracts,
a {\bf competition constraint} is added: the equilibrium contract
must be as attractive as possible to the agent, since otherwise
another principal could profitably lure him away.
    An equilibrium  may also need to  satisfy a part of the competition
constraint not found in hidden actions models: either a {\bf
nonpooling constraint} or a {\bf nonseparating constraint}.   If one of several competing principals wishes to
construct a pair of separating contracts $C_1$ and $C_2$, he must
construct it so that not only do agents choose $C_1$ and $C_2$
depending on the state of the world (to satisfy incentive compatibility), but
also  they prefer ($C_1, C_2$) to a pooling contract $C_3$
(to satisfy nonpooling).

\bigskip
\noindent
 {\bf Unravelling the Truth when Silence is the Only Alternative}

 \noindent
 Before going on to look at  a self-selection contract,  let us consider a special case in which hidden knowledge paradoxically makes no difference. 
      The usual hidden knowledge model has no penalty for lying, but
let us briefly consider what happens if the agent cannot lie but he
can be silent or tell half-truths. 
  Suppose that Nature uses the
uniform distribution to assign the variable $\theta$ some value in
the interval $[0, 10]$, and the agent's payoff is increasing in the
principal's estimate of $\theta$.  Usually we assume that the agent
can lie freely, sending a message $m$ taking any value in $[0,10]$,
but let us assume instead that he cannot lie, although he is free to
conceal information. Thus, if $\theta = 2$, he can send the
uninformative message $m \geq 0$ (equivalent to no message), or $m
\geq 1$, or $m=2$, but not the lie $m \geq 4$. 

 When $\theta=2$ the agent might as well send a message that  is the exact truth: ``$m=2$.''  If he were to choose ``$m \geq 1$,''
for example, the principal's first thought might be to estimate
$\theta$ as the average value of the interval $[1,10]$, which is 5.5.
But the principal would realize that no agent with a value of
$\theta$ greater than 5.5 would want to send that message in a Nash equilibrium. This
realization restricts the possible interval to [1, 5.5], which in
turn has an average of 3.25. But then no agent with $\theta > 3.25$ would
send the message ``$m \geq 1$.''  The principal can continue this process of
logical {\bf unravelling} to conclude that $\theta = 1$.  The message
``$m \geq 0$'' would be even worse, making the principal believe that
$\theta = 0$. In this model, ``No news is bad news.''  The agent
would therefore not send the message ``$m \geq 1$''  and  he would be indifferent between ``$m=2$''  and   ``$m \geq 2$'' because the 
principal would make the same deduction from either message.
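The unravelling logic can be checked numerically. A minimal sketch, using the uniform distribution on $[0,10]$ from the text: the principal starts from the naive estimate of 5.5 for the message ``$m \geq 1$,'' then repeatedly discards the types who would not have sent that message.

```python
# Unravelling for the message "m >= 1", with theta uniform on [0, 10].
# The principal's naive estimate is the midpoint of [1, 10] = 5.5, but
# no agent with theta above the current estimate would send the message,
# so the upper bound of the surviving interval shrinks each round.
lo, hi = 1.0, 10.0
estimates = []
for _ in range(50):
    est = (lo + hi) / 2   # mean of the surviving interval [lo, hi]
    estimates.append(est)
    hi = est              # types above est would rather stay silent

# 5.5, 3.25, 2.125, ... converging to theta = 1
assert abs(estimates[0] - 5.5) < 1e-9
assert abs(estimates[-1] - 1.0) < 1e-6
```

The fixed point of $x \mapsto (1+x)/2$ is 1, so the only consistent inference from ``$m \geq 1$'' is $\theta = 1$: no news is bad news.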

   Perfect revelation seems paradoxical, but only because the assumptions just described are rarely satisfied in the real
world. In particular, unpunishable lying and genuine ignorance allow
information to be concealed.
information to be concealed.  If the seller is free to lie without
punishment, then, in the absence of other incentives, he always
pretends that his information is extremely favorable, so nothing he
says conveys any information, favorable or unfavorable. If he really
is ignorant in some states of the world, then his silence could mean
either that he has nothing to say or that he has nothing he wants to say.
The unravelling argument fails because if he sends an uninformative
message the buyers will attach some probability to ``no news'' instead of
``bad news.''  Problem 8.3 at the end of this chapter explores unravelling further. 


\bigskip
\noindent
{\bf The Revelation Principle}

 \noindent
  The principal might choose to offer a contract that induces the
agent to lie in equilibrium, since he can take lying into account
when he designs the contract, but this complicates the analysis. Each
state of the world has a single truth, but a continuum of lies:
generically speaking, almost everything is false.  The revelation
principle helps us simplify.

 \noindent
	{\bf The Revelation Principle.} {\it For every contract
$w(q,m)$ that leads to lying (that is, to $m \neq \theta$), there is
a contract $w^*(q,m)$ with the same outcome for every $\theta$ but no
incentive for the agent to lie.}  

     In many  possible contracts, sending false messages is profitable for the   agent in that when the state of the world is $a$ he receives a reward of $x_1$ for the true report of $a$ and  $x_2>x_1$ for the false report of $b$.   A contract which gives the agent a reward of $x_2$ regardless of whether he reported $a$ or $b$ would lead to exactly the same payoffs for each player while giving  the agent no incentive to lie. The revelation principle  notes that a   contract with no lying can always be found by imitating the relation between states of the  world and payoffs in the equilibrium  of the contract with lying. This idea can also be applied to games in which both players must make reports to each other.  
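The two-state argument can be made concrete in a few lines. In this sketch the rewards $x_1 = 1$ and $x_2 = 2$ are illustrative numbers, not from the text: the lying contract pays $x_1$ for report $a$ and $x_2$ for report $b$, so the agent reports $b$ in both states; the imitating contract pays $x_2$ for either report, and truthful reporting then yields the same payoffs.

```python
# Revelation principle in miniature: two states, two possible reports.
# Illustrative rewards: x1 = 1 for report "a", x2 = 2 for report "b".
w_lying = {"a": 1, "b": 2}

def best_report(contract):
    # The agent's equilibrium report maximizes his reward.
    return max(contract, key=contract.get)

# Under w_lying the agent reports "b" even when the state is "a".
payoff_lying = {state: w_lying[best_report(w_lying)] for state in ("a", "b")}

# The truthful contract imitates the equilibrium payoff in every state.
w_truthful = {state: payoff_lying[state] for state in ("a", "b")}

# Truth-telling is now (weakly) optimal, and payoffs are unchanged.
for state in ("a", "b"):
    assert w_truthful[state] == max(w_truthful.values())
assert w_truthful == payoff_lying == {"a": 2, "b": 2}
```

The truthful contract changes no player's payoff in any state; it merely relabels the equilibrium so that the message matches the state.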


 Applied to concrete examples, the revelation principle may seem
obvious.  Suppose we are concerned with the effect on the moral
climate of cheating on income taxes, but anyone who makes \$70,000
a year can claim he makes \$50,000 and the government
does not have the resources to  catch him.  The revelation principle says that we can rewrite
the tax code to set the tax to be the same for taxpayers earning \$70,000
and  for those earning \$50,000, and the same amount of taxes will be collected without anyone having  the incentive to lie.   Applied to moral education, the
principle says that the mother who agrees never to punish her
daughter if she tells her  all her escapades will never hear any untruths.
Clearly, the principle's usefulness is not  so much to improve outcomes as  
to simplify contracts. The principal (and the modeller) need only look at
contracts which induce truthtelling, so the relevant strategy space
is shrunk and we can add a third constraint  to  the incentive compatibility and participation constraints to help calculate the
equilibrium:

\noindent 
 (3) {\bf Truthtelling.} The equilibrium contract makes the    agent willing
to choose $m =\theta$.

The revelation principle says that a truthtelling equilibrium exists, but not that it is unique.  It may well happen that the equilibrium is a weak Nash equilibrium in which the optimal contract gives the agent no incentive to lie but also no incentive to tell the truth. This is similar to the open-set problem discussed in section 4.3; the optimal contract may satisfy the agent's participation constraint but make him indifferent between accepting and rejecting the contract. If agents derive the slightest utility from telling the truth, of course, then truthtelling becomes a strong equilibrium, but if their utility from telling the truth is really significant, it should be made an explicit part of the model. If the utility of truthtelling is strong enough, in fact, agency problems and the costs associated with them disappear. This is one reason why morality is useful to business.
 


\vspace{1in} 
\noindent
 {\bf 9.2 An Example of   Moral Hazard with Hidden Knowledge: The Salesman Game}  

 \noindent
 The next game illustrates the differences between pooling and
separating equilibria. The manager of a company has told his salesman
to investigate a potential customer, who is either a $Pushover$ or a
$Bonanza$.  If he is a $Pushover$, the efficient sales effort is low
and sales should be moderate.  If he is a $Bonanza$, the effort and
sales should be higher.


\begin{center}
 {\bf The  Salesman Game}
 \end{center}
  {\bf Players}\\
  A manager and a salesman.

 
\noindent
 {\bf Order of Play}\\
 (1) The manager offers the salesman a contract of the form $w(q,m)$,
where $q$ is sales and $m$ is a message.\\
 (2) The salesman decides whether or not to accept the contract.\\
 (3) Nature chooses whether the customer is a $Bonanza$ or a
$Pushover$ with probabilities 0.2 and 0.8. Denote the state variable
``customer status'' by $\theta$.  The salesman observes the state,
but the manager does not.\\
 (4) If the salesman has accepted the contract, he chooses his sales
level $q$, which implicitly measures his effort.

\noindent
 {\bf Payoffs}\\
 The manager is risk neutral and the salesman is risk averse. 
 If the salesman rejects the contract, his payoff is $\bar{U}= 8$ and
the manager's is zero. If he accepts the contract, then\\
 \begin{tabular}{ll}
  $ \pi_{manager}$ & $= q - w$.\\ 
 $\pi_{salesman}$ &$ = U(q, w, \theta)$, where $\frac{\partial
U}{\partial q} < 0, \frac{\partial^2 U}{\partial q^2} < 0,
\frac{\partial U}{\partial w} > 0, \frac{\partial^2 U}{\partial w^2} <
0.$
  \end{tabular}
  \bigskip

   Figure 9.1 shows the indifference curves of manager and salesman,
labelled with numerical values for exposition.  The manager's
indifference curves are straight lines with slope $1$  because he is
acting on behalf of a risk-neutral company.  If the wage and the
quantity both rise by a dollar, profits are unchanged, and the
profits do not depend directly on whether $\theta$ takes the value
$Pushover$ or the value $Bonanza$.

  The salesman's indifference curves also slope upwards, because he
must receive a higher wage to compensate for the extra effort that
makes $q$ greater. They are convex because the marginal
utility of dollars is decreasing and the marginal disutility of effort
is increasing. As Figure 9.1 shows, the salesman has two sets of
indifference curves, solid for $Pushovers$ and dashed for $Bonanzas$,
since the effort that secures a given level of sales depends on the
state.

\begin{center} 
 {\bf Figure 9.1  The  Salesman Game  with  Curves for Pooling Equilibrium}
\end{center}
\epsfysize=3in

    \epsffile{/Users/erasmuse/AAANewChapters/Figures/f7.1.eps}



  Because of the participation constraint, the manager must provide
the salesman with a contract giving him at least his reservation
utility of 8, which is the same in both states.  If the true state is
that the customer is a $Bonanza$, the manager would like to offer a
contract that leaves the salesman on the dashed indifference curve
$\tilde{U_S}=8$, and the efficient outcome is ($q_2$,$w_2$), the point
at which the salesman's indifference curve is tangent to one of the
manager's indifference curves.  At that point, if the salesman sells
an extra dollar he requires an extra dollar of
compensation. 

   If it were common knowledge that the customer was a $Bonanza$, the
principal could choose $w_2$ so that $U(q_2, w_2, Bonanza)= 8$ and
offer the forcing contract
 \begin{equation} \label{e1}
 \begin{array}{ll}
   w =& \left\{
\begin{array}{ll}
 0   & {\rm if} \; q < q_2.\\
  w_2 & {\rm if} \;q \geq q_2.\\
\end{array}
 \right. 
\end{array}
\end{equation}
    The salesman would accept the contract and choose $q = q_2$. But
if the customer were actually a $Pushover$, the salesman would still
choose $q = q_2$, an inefficient outcome that does not maximize
profits.  High sales would be inefficient because the salesman would
be willing to give up more than a dollar of wages to escape having to
make his last dollar of sales.  Profits would  not be  maximized, because
the salesman achieves a utility of 17, and he would have been willing
to work for less.

    The revelation principle says that in searching for the optimal
contract we need only look at contracts that induce the agent to
truthfully reveal what kind of customer he faces. If it required more
effort to sell any quantity to the $Bonanza$, as shown in Figure 9.1,
the salesman would always want the manager to believe that he
faced a $Bonanza$, so he could extract the extra pay necessary to achieve a
utility of 8 selling to $Bonanzas$.  The only optimal truth-telling contract is the
pooling contract that pays the intermediate wage of $w_3$ for the
intermediate quantity of $q_3$, and zero for any other quantity,
regardless of the message.  The pooling contract is a second-best
contract, a compromise between the optimum for $Pushovers$ and the
optimum for $Bonanzas$. The point $(q_3,w_3)$ is closer to $(q_1,w_1)$
than to $(q_2,w_2)$, because the probability of a $Pushover$ is higher
and the contract must satisfy the participation constraint
 \begin{equation}\label{e2}
 0.8 U(q_3, w_3, Pushover) + 0.2 U(q_3, w_3, Bonanza)   \geq 8. 
 \end{equation}
 The nature of the equilibrium depends on the shapes of the
indifference curves. If they are shaped as in Figure 9.2, the
equilibrium is separating, not pooling, and there does exist a
first-best, fully revealing  contract.

\begin{center}
 {\bf Figure 9.2     Indifference Curves for a Separating Equilibrium }
  \end{center}
\epsfysize=3in

    \epsffile{/Users/erasmuse/AAANewChapters/Figures/f7.2.eps}



\begin{equation} \label{e3}
 {\rm Separating\;\; Contract } \left\{
\begin{array}{ll}
{\rm Agent\;\; announces}\;\; {\it Pushover}:&
\begin{array}{ll}
   w =& \left\{
\begin{array}{l}
 0 \;  {\rm if}\; q < q_1.\\
  w_1 \; {\rm if }\; q \geq q_1.\\
\end{array}
 \right. \\
\end{array}
\\
{\rm Agent\;\; announces\;\;} { \it Bonanza}: &
\begin{array}{ll}
 w = & \left\{
\begin{array}{l}
 0 \; {\rm if}\; q < q_2.\\ 
 w_2\; {\rm if} \; q \geq q_2.
   \end{array}
 \right. 
  \end{array}
 \end{array}
 \right.
\end{equation}

  Again, we know from the revelation principle that we can narrow
attention to contracts that induce the salesman to tell the truth.
With the indifference curves of Figure 9.2, contract (\ref{e3})
induces the salesman to be truthful and the incentive compatibility
constraint is satisfied. If the customer is a $Bonanza$,  but the
salesman claims to observe a $Pushover$ and chooses $q_1$, his utility
is less than 8 because the point $(q_1,w_1)$ lies below the
$\tilde{U_S}=8$ indifference curve. If the customer is a $Pushover$ and
the salesman claims to observe a $Bonanza$, then although $(q_2,w_2)$
does yield the salesman a higher wage than $(q_1,w_1)$, the extra
income is not worth the extra effort, because $(q_2,w_2)$ is far
below the indifference curve $\hat{U_S}=8$.

Another way to phrase the description  of a separating equilibrium  is to say that it gives the    salesman a choice of contracts, rather than saying that it gives him a single contract that specifies different wages for different outputs. He agrees to work with the manager, and after he discovers what type the customer is he chooses either the contract $(q_1,w_1)$ or the contract $(q_2,w_2)$, where each is a forcing contract and he receives 0 if after choosing  the contract $(q_i,w_i)$ he produces output of $q \neq q_i$. In this interpretation, we say that the manager offers a {\bf menu of contracts} and the salesman selects one of them after learning his type. This is simply a different way of describing the same equilibrium. 
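A numerical sketch of the menu interpretation may help. The utility function, effort costs, and contract points below are invented for illustration and are not the book's: take $U = \sqrt{w} - c_\theta(q)$ with $c_P(q) = q^2/10$ for a $Pushover$ and $c_B(q) = 12 + q^2/40$ for a $Bonanza$, so the indifference curves cross as in Figure 9.2.

```python
import math

# Hypothetical effort costs chosen so the curves cross as in Figure 9.2:
# the Pushover is cheap to sell small quantities to but very costly at
# high q; the Bonanza has a higher base cost but scales better.
def utility(q, w, theta):
    cost = q**2 / 10 if theta == "Pushover" else 12 + q**2 / 40
    return math.sqrt(w) - cost

RESERVATION = 8
menu = {"Pushover": (10, 324), "Bonanza": (40, 3600)}  # (q_i, w_i), invented

for true_type in ("Pushover", "Bonanza"):
    own = utility(*menu[true_type], true_type)
    other = [t for t in menu if t != true_type][0]
    mimic = utility(*menu[other], true_type)
    assert own >= mimic          # self-selection: no gain from lying
    assert own >= RESERVATION    # the contract chosen is worth accepting
```

With these numbers each type selects its own forcing contract, each earns exactly the reservation utility of 8, and the equilibrium is fully revealing.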

 Sales contracts in the real world  are often complicated, because it is easy to measure the major component of output, sales, but hard to measure the inputs of workers who are out in the field away from direct supervision. The Salesman Game describes a real problem. 
 Gonik (1978) describes hidden knowledge contracts used by IBM's
subsidiary in Brazil.  Salesmen were first assigned quotas. They then
announced their own sales forecast as a percentage of quota and chose
from among a set of contracts, one for each possible forecast.
Inventing some numbers for illustration, if Smith were assigned a
quota of 400 and he announced 100 percent, he might get $w=70$ if he
sold 400 and $w=80$ if he sold 450; but if he had announced 120
percent, he would have gotten $w= 60$ for 400 and $w=90$ for 450. The
contract encourages extra effort when the extra effort is worth the
extra sales.  The idea here, as in the Salesman Game, is to reward salesmen not just for high effort, but for appropriate effort. 



   The Salesman Game illustrates a number of ideas.  It can have
either a pooling or a separating equilibrium, depending on the
utility function of the salesman. The revelation principle can be
applied to avoid having to consider contracts in which the manager
must interpret the salesman's lies.  It also shows how to use diagrams when
the algebraic functions are intractable or unspecified, a  problem that does not arise in most of the 
 two-valued numerical examples in this book. 



  
\vspace{1in}
\noindent
 {\bf  9.3  Tournaments} 

 \noindent
 Games in which relative performance is important are called {\bf
tournaments}. Tournaments are similar to auctions, the difference
being that in a tournament the actions of the losers matter
directly. Like auctions, they are especially useful when the
principal wants to elicit information from the agents. A
principal-designed tournament is sometimes called a {\bf yardstick
competition} because the agents provide the measure for their wages.

   Farrell (unpublished) uses a tournament to explain how ``slack'' might be
the major source of welfare loss from monopoly, an old idea usually
prompted by faulty reasoning.  The usual claim is that monopolists
are inefficient because, unlike competitive firms, they do not have
to maximize profits to survive. This relies on the dubious assumption that firms
care about survival, not profits.  Farrell makes a subtler point:
although the shareholders of a monopoly maximize profit, the managers
maximize their own utility, and moral hazard is severe without the
benchmark of other firms' performances. 

     Let firm Apex have two possible production techniques, $Fast$
and $Careful$. Independently for each technique, Nature chooses
production cost $c=1$ with probability $\theta$ and $c=2$ with
probability $1-\theta$. The manager can either choose a technique at
random or investigate the costs of both techniques at a utility cost to himself of 
$\alpha$.  The shareholders can observe the resulting production
cost, but not whether the manager investigates.  If they see the
manager pick $Fast$ and a cost of $c=2$, they do not know whether he
chose it without investigating, or investigated both techniques and
found they were both costly.  The wage contract is based on what the
shareholders can observe, so it takes the form $(w_1,w_2)$, where
$w_1$ is the wage if $c=1$ and $w_2$ if $c=2$.  The manager's utility
  is log $w$ if he does not investigate, log $w$ $-\alpha$ if
he does, and the reservation utility of log $\bar{w}$ if he quits.

   If the shareholders want the manager to investigate, the contract
must satisfy the self-selection constraint 
 \begin{equation} \label{e4}
  U({\rm  not   investigate }) \leq U ({\rm   investigate }).
  \end{equation}
 If the manager investigates, he still fails to find a low-cost
technique with probability $(1-\theta)^2$, so (\ref{e4}) is equivalent to
 \begin{equation} \label{e5}
  \theta {\rm log}\;w_1 + (1-\theta) {\rm log}\; w_2 \leq [1- (1-\theta)^2]
{\rm log} \;w_1 + (1-\theta)^2 {\rm log} \;w_2 - \alpha. 
  \end{equation} 
 The self-selection constraint is binding, since the shareholders want to keep the manager's compensation to a minimum. Turning inequality (\ref{e5}) into an equality and simplifying yields
 \begin{equation} \label{e6}
 \theta (1-\theta) {\rm log}\; \frac{w_1}{w_2}  =  \alpha.
  \end{equation} 
   The participation constraint, which is also binding,  is $ U(\bar{w}) = U ({\rm
investigate})$, or
 \begin{equation} \label{e7}
 {\rm log}\; \bar{w} =[1- (1-\theta)^2] {\rm log}\; w_1 + (1-\theta)^2
{\rm log}\; w_2 - \alpha. 
  \end{equation} 
  Solving equations (\ref{e6}) and (\ref{e7}) together for $w_1$ and $w_2$ yields
  \begin{equation} \label{e8}
\begin{array}{rl}
w_1 = & \bar{w}e^{\alpha/\theta}.\\
w_2 = & \bar{w}e^{-\alpha/(1-\theta)}.\\
 \end{array}
  \end{equation} 
The expected cost to the firm is
 \begin{equation} \label{e9}
 [1- (1-\theta)^2] \bar{w}e^{\alpha/\theta} + (1-\theta)^2
\bar{w}e^{-\alpha/(1-\theta)}.
   \end{equation}
        If the parameters are $\theta = 0.1$, $\alpha = 1$, and
$\bar{w} = 1$, the rounded values are $w_1 =22,026$ and $w_2 = 0.33$,
and the expected cost is $4,185$. Quite possibly, the shareholders
decide it is not worth making the manager investigate.

  But suppose that Apex has a competitor, Brydox, in the same
situation.  The shareholders of Apex can threaten to boil their
manager in oil if Brydox adopts a low-cost technology and Apex does
not. If Brydox does the same, the two managers are in a prisoner's
dilemma, both wishing not to investigate, but each investigating from
fear of the other. The forcing contract for Apex specifies
$w_1=w_2$ to fully insure the manager, and boiling-in-oil if Brydox
has lower costs than Apex.  The contract need satisfy only the
participation constraint that log $w - \alpha =$ log $\bar{w}$, so $
w = 2.72$ and the cost of learning to Apex is only $2.72$, not
$4,185$.  Competition raises efficiency,  not through the threat
of firms going bankrupt  but through the threat of managers being
fired.
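The closed forms in equation (\ref{e8}) and the numbers above can be verified directly. A sketch in Python, using the parameter values $\theta = 0.1$, $\alpha = 1$, and $\bar{w} = 1$ given in the text:

```python
import math

theta, alpha, w_bar = 0.1, 1.0, 1.0

# Closed-form wages derived in the text.
w1 = w_bar * math.exp(alpha / theta)           # about 22,026
w2 = w_bar * math.exp(-alpha / (1 - theta))    # about 0.33

# The self-selection constraint holds with equality.
assert abs(theta * (1 - theta) * math.log(w1 / w2) - alpha) < 1e-9

# The participation constraint holds with equality.
p_low = 1 - (1 - theta)**2   # probability of finding a low-cost technique
assert abs(p_low * math.log(w1) + (1 - p_low) * math.log(w2)
           - alpha - math.log(w_bar)) < 1e-9

# Expected cost versus the yardstick contract with w1 = w2.
expected_cost = p_low * w1 + (1 - p_low) * w2   # about 4,185
yardstick_cost = w_bar * math.exp(alpha)        # about 2.72
assert round(w1) == 22026
assert 4185 < expected_cost < 4186
```

Both binding constraints check out, and the yardstick contract cuts the cost of inducing investigation from about 4,185 to $e \approx 2.72$.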
 





  
  
\newpage

\noindent
 {\bf 9.4 Rate of Return Regulation and Government Procurement}

 
 \begin{center}
{\bf ``Government Procurement''}
\end{center}
 {\bf Players}\\
  The government and the firm. 

 

\noindent
 {\bf Order of Play }\\
 (0) Nature assigns the firm a cost parameter $\beta$. The low cost, $\beta = L$, has probability $\theta$ and the high cost, $\beta = H$, has probability $(1-\theta)$.  \\
 (1) The government offers a contract $s(c)$ agreeing to cover the firm's costs of producing a space station and specifying the subsidy for each cost level that the firm might report.\\
 (2) The firm accepts or rejects the contract.\\
 (3) If the firm accepts, it chooses an effort level $e$. \\
 (4) The firm finishes the space station at a cost  of  $c = \beta - e$. The government reimburses the cost and pays the appropriate subsidy.  


\noindent
 {\bf Payoffs}\\
Both firm and government are risk neutral, and both receive payoffs of zero if the firm rejects the contract.  
 If the firm accepts, its   payoff is 
 \begin{equation} \label{e15.1}
 \pi_{firm} =s - f(e), 
\end{equation}
 where $f(e)$, the disutility of effort, is increasing and convex, so $f'>0$ and $f''>0$, and, for technical convenience, it is  increasingly convex, so $f'''>0$.\footnote{The assumption that  $f'''>0$  allows the use of first order conditions   by making the maximand in (9.\ref{e15.13})--- a difference of two concave functions--- concave. See p. 58 of  Laffont \& Tirole (1993).} The government's payoff is  
   \begin{equation} \label{e15.2}
   \pi_{government}  = B -  (1+\lambda) c - \lambda s - f(e),
  \end{equation}
  where $B$ is the benefit from the space station and $\lambda$ is the deadweight loss from the taxation needed for government spending.\footnote{This loss is estimated to be around \$0.30 for each \$1 of tax revenue raised at the margin for the United States (Hausman \& Poterba [1987]).} 

\newpage
\noindent
ASSUMPTIONS: 

Assume for the moment that $B$ is large enough that the government definitely wishes to build the station. (How large is large enough will become apparent later.) Cost, not output, is the focus of this model. The optimal output is one space station regardless of agency problems, but the government wants to minimize the cost of producing the station. 


  This model differs from  most    principal-agent models in this  book in that the government  is altruistic towards the agent. If the government were   selfish, its payoff would be $B -  (1+\lambda) c - (1+\lambda) s$. Instead, it maximizes social welfare, which includes the welfare of both  the citizenry and   the firm. The welfare of the citizenry is 
 $B -  (1+\lambda) c - (1+\lambda) s$ and that of the firm is $s - f(e)$.   Summing   these  yields    equation (9.\ref{e15.2}). 

  Note  that the $Low$ type of firm is good, not bad, unlike in previous models, because here the type refers to cost, not to ability or effort.  

\newpage

\bigskip
\noindent
 {\bf ``Government Procurement I: Symmetric Information'' } 


   In the first version of the game, the cost parameter $\beta$ is observed by the government, which can assign different contracts to the two types of firms.
 The government pays subsidies of $s_1$ to a low-cost firm of type $\beta = L$ (``Firm $L$'') for the low cost $\underline{c}$, $s_2$ to a firm of type $\beta = H$ (``Firm $H$'') for the high cost $\overline{c}$, and a boiling-in-oil subsidy of $s=-\infty$ to a firm that does not choose the contract designed for it.  

  The participation constraints will   be binding for both types of firms, and to make a firm's  payoffs zero  the   government will provide subsidies that exactly 
cover the firm's disutility of effort. Since there is no uncertainty, we can invert the cost equation and write it as $e = \beta - c$. The subsidies will be $s_1 = f(L - \underline{c})$ and $s_2 = f(H - \overline{c})$. Substituting these into the government's payoff function yields  
\begin{equation} \label{e15.3}
   \pi_{government}  = B -  (1+\lambda) \underline{c} - \lambda  f( L  -\underline{c}) - f( L  -  \underline{c}   ) 
  \end{equation}
 for firm $ L  $. Since $f''>0$, the government's payoff function is concave, and the  standard optimization technique  can be used.  The first order condition for the optimal level of $\underline{c}$ is
 \begin{equation} \label{e15.4}
  \frac{ \partial \pi_{government}}{\partial \underline{c}} =   -(1+\lambda)   + \lambda  f'( L  -\underline{c}) + f'( L  -  \underline{c}   ) = 0,
  \end{equation}
so
 \begin{equation} \label{e15.5}
         f'( L  -\underline{c})   =  1.
  \end{equation}
 Equation (9.\ref{e15.5}) says that at the efficient effort level, the marginal disutility of effort equals the marginal reduction in cost because of effort. 
 Exactly the same is true for firm $ H   $, so $ L  -  \underline{c}   =   H   - \overline{c} $ and it follows that $s_1= s_2$. The two firms exert the same efficient effort level, which we will call $e^*$, and are paid the same positive subsidy, $s^* = f(e^*)$, as compensation for their disutility of effort. The cost targets assigned to the two firms are  $\underline{c}  = L - e^*$ and $\overline{c}  = H - e^*$.  The assumption that $B$ is sufficiently large can now be made more specific: it is that $B - (1+\lambda) ( H    - e^*  + f(e^*)) \geq 0$. 
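As a numerical check on this benchmark, we can assume a quadratic disutility of effort, $f(e) = e^2/2$, together with hypothetical values for $L$ and $H$; neither the functional form nor the particular numbers come from the model itself.

```python
# Sketch of the "Government Procurement I" benchmark under the assumed
# quadratic disutility f(e) = e**2/2, so f'(e) = e and the condition
# f'(L - c) = 1 pins down the efficient effort e* = 1.
# The cost parameters below are hypothetical.
f = lambda e: e**2 / 2
L, H = 2.0, 2.5        # cost parameters of the two types, L < H

e_star = 1.0           # efficient effort, from f'(e*) = 1
c_low = L - e_star     # cost target assigned to firm L
c_high = H - e_star    # cost target assigned to firm H
s_star = f(e_star)     # common subsidy s* = s_1 = s_2, covering f(e*)

print(c_low, c_high, s_star)   # 1.0 1.5 0.5
```

Both types exert the same effort and receive the same subsidy; only their assigned cost targets differ.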


\newpage
 
\noindent
{\bf ``Government Procurement II: Asymmetric Information'' } 

  In the second variant of the game, $\beta$ is not observed by the government, which must therefore provide incentives for a firm to reveal its type if firm $L$ is to produce at lower cost than firm $H$. 


 The government could use a pooling contract, offering a subsidy of $f(e^*)$ for a cost of $ H   - e^*$, just enough to compensate firm $H$ for its effort, and an infinitely negative subsidy for any other cost. Both types would accept this contract, but firm $L$ could then exert less than the efficient effort $e^*$ and still receive the subsidy. 

 Let us find the optimal contract   with values ($\underline{c}, s_1)$ and ($\overline{c}, s_2$)   and heavy punishments for other cost levels.   

 The participation constraint  for     firm $ L  $ is
\begin{equation} \label{e15.6}
  s_1 - f( L  -  \underline{c}  ) \geq  0
\end{equation}
 and  for firm $ H   $ it is
  \begin{equation} \label{e15.7}
    s_2 - f(  H   - \overline{c} ) \geq  0.
 \end{equation}

  The incentive compatibility constraint for firm $ L  $ is 
 \begin{equation} \label{e15.8}
  s_1 - f( L  -  \underline{c}  ) \geq  s_2 - f( L  -\overline{c}  )
\end{equation}
 and  for firm $ H   $ it is
  \begin{equation} \label{e15.9}
    s_2 - f(  H   - \overline{c} ) \geq  s_1- f( H   -\underline{c}  ).
 \end{equation}
   
\newpage


 The participation constraint  for     firm $ L  $ is
$$  s_1 - f( L  -  \underline{c}  ) \geq  0
$$
 and  for firm $ H   $ it is
$$    s_2 - f(  H   - \overline{c} ) \geq  0.
$$

  The incentive compatibility constraint for firm $ L  $ is 
$$
  s_1 - f( L  -  \underline{c}  ) \geq  s_2 - f( L  -\overline{c}  )
$$
 and  for firm $ H   $ it is
 $$
    s_2 - f(  H   - \overline{c} ) \geq  s_1- f( H   -\underline{c}  ).
$$

  
Since firm $ L  $ can imitate firm $ H   $ if it wishes, constraint (9.\ref{e15.6}) is satisfied whenever (9.\ref{e15.7}) and (9.\ref{e15.8}) are.  Constraint (9.\ref{e15.7}) will be binding  (and therefore satisfied as an equality), because  the government will  reduce the subsidy  as much as possible in order to avoid the deadweight loss  of taxation that exists because $\lambda >0$.   The incentive compatibility constraint for firm $L$  must also  be binding, because  if the   pair ($\underline{c}, s_1$) were strictly more attractive for firm $L$, the government could reduce the subsidy $s_1$.  Constraint (9.\ref{e15.8}) is  therefore satisfied as an equality. (The same argument does not hold for firm $H$: if $s_2$ were reduced, the participation constraint would be violated.)    Knowing that these two constraints are binding, we can write
      \begin{equation} \label{e15.11}
    s_2 = f(  H   - \overline{c} )  
 \end{equation}
 and, making use of both (9.\ref{e15.8}) and (9.\ref{e15.11}),   
\begin{equation} \label{e15.12}
    s_1 = f(  L  - \underline{c} )   + f(  H   - \overline{c} ) - f( L  -\overline{c}  ).
 \end{equation}
  
\newpage

 
 From (9.\ref{e15.2}), 
the government's maximization problem under incomplete information is 
   \begin{equation} \label{e15.10}
 \stackrel{ Maximize}{\underline{c}  , \overline{c}  , s_1, s_2} \;\;\; \theta \left[  B -  (1+\lambda) \underline{c}   - \lambda s_1 - f( L     - \underline{c}  )  \right] + \left[ 1- 
\theta \right] \left[  B -  (1+\lambda) \overline{c}   - \lambda s_2 - f( H      - \overline{c}  )  \right] . 
       \end{equation}
 Substituting  for $s_1$ and $s_2$  from (9.\ref{e15.11}) and (9.\ref{e15.12}) simplifies the problem to 
\begin{equation} \label{e15.13}
 \begin{array}{ll}
 \stackrel{ Maximize}{\underline{c}  , \overline{c}  }  & \theta \left[  B -  (1+\lambda) \underline{c}   - \lambda \left(  f(  L  - \underline{c} )   + f(  H   - \overline{c}   ) - f( L  -\overline{c} ) \right) - f( L     - \underline{c}  )  \right] +\\
   &  \left[ 1- 
\theta \right] \left[  B -  (1+\lambda) \overline{c}   - \lambda f(  H   - \overline{c} ) - f( H      - \overline{c}  )  \right] . 
 \end{array}
       \end{equation}
 The first order condition with respect to $\underline{c}  $ is 
\begin{equation} \label{e15.14}
   \theta \left[   -  (1+\lambda)  + \lambda      f'( L  -\underline{c}  ) + f'( L     - \underline{c}  )  \right] =0,  
       \end{equation}
 which simplifies to 
\begin{equation} \label{e15.15}
     f'( L  -\underline{c}   )  =1.      
                \end{equation}
 Thus, firm $ L    $ chooses the efficient effort level $e^*$ in equilibrium, and   $\underline{c}  $ takes the same value as it did in ``Government Procurement I''.  From the definition of $s^*= f(e^*)$ in that game,  equation (9.\ref{e15.12}) can be rewritten as 
\begin{equation} \label{e15.15a}
    s_1 = s^*   + f(  H   - \overline{c} ) - f( L  -\overline{c}  ).
 \end{equation}
 Because $ f( H   - \overline{c} ) > f( L  -\overline{c})$, equation (9.\ref{e15.15a}) shows that $s_1 > s^*$. Incomplete information increases the subsidy to the low-cost firm, which  earns more than its reservation utility in the game with incomplete information. Since the high-cost firm will earn exactly its reservation utility, this means that the government is on average providing its supplier with an above-market rate of return, not because of corruption or political influence, but because that is the way to induce low-cost suppliers to reveal that their costs are low.  This should be kept in mind as an alternative to the product quality model of Chapter 5 and the efficiency wage model of Chapter 7 as an explanation for why above-average rates of return persist. 
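To put a number on the rent, take the same assumed quadratic disutility $f(e) = e^2/2$ and a hypothetical cost target $\overline{c}$ for firm $H$; the particular values in equation (9.\ref{e15.15a}) are for illustration only.

```python
# The informational rent in the subsidy formula above, under the assumed
# disutility f(e) = e**2/2; all parameter values are hypothetical.
f = lambda e: e**2 / 2
L, H = 2.0, 2.5
e_star = 1.0                 # efficient effort, from f'(e*) = 1
s_star = f(e_star)           # full-information subsidy, 0.5
c_high = H - 0.8             # a hypothetical cost target for firm H

# What firm L saves by producing at firm H's cost target:
rent = f(H - c_high) - f(L - c_high)
s_1 = s_star + rent
print(s_1 > s_star)          # True: the low-cost firm earns a positive rent
```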




\newpage


 Turning now to the contract alternative to be chosen by  the high-cost firm,  the 
 first order condition for maximizing the government payoff  (9.\ref{e15.13}) with respect to $\overline{c}  $ is 
  \begin{equation} \label{e15.16}
 \theta \left[    - \lambda \left(-f'( H  - \overline{c} )     + f'( L     - \overline{c}) \right)   \right] + \left[ 1- 
\theta \right] \left[  -  (1+\lambda)   +\lambda f'(  H   - \overline{c} ) + f'( H      - \overline{c}  )  \right] =0. 
       \end{equation}
 This can be rewritten as
   \begin{equation} \label{e15.17}
f'(  H   - \overline{c} )=1 - \left(\frac{\lambda}{1+ \lambda} \right)  \left(\frac{\theta}{1-\theta} \right)      \left[
f'(  H   - \overline{c} ) - f'(  L  - \overline{c} ) \right].
        \end{equation}
  Since the right-hand side of equation (9.\ref{e15.17}) is less than one, firm $ H     $ has a lower level of $f'$ than firm $ L    $, and must be exerting effort less than $e^*$, since $f''>0$. Perhaps this explains the expression ``good enough for government work''.  Also, since the participation constraint  (9.\ref{e15.11}) is satisfied as an equality, it must be true that $s_2 < s^*$.  The high-cost firm's subsidy is lower than under full information, although since its effort is also lower, its payoff stays the same. 
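The distortion can be computed explicitly under the assumed quadratic disutility $f(e) = e^2/2$, since $f'(e) = e$ makes the first order condition (9.\ref{e15.16}) linear in firm $H$'s effort; all parameter values below are hypothetical.

```python
# Firm H's distorted effort under the assumed disutility f(e) = e**2/2,
# for which f'(e) = e and the first order condition reduces to
#   e_H = 1 - (lam/(1+lam)) * (theta/(1-theta)) * (H - L).
# All parameter values are hypothetical.
f = lambda e: e**2 / 2
L, H = 2.0, 2.5
theta, lam = 0.5, 0.3          # prob. of the low-cost type; deadweight loss
e_star, s_star = 1.0, f(1.0)   # full-information benchmark

k = (lam / (1 + lam)) * (theta / (1 - theta))
e_H = 1 - k * (H - L)          # effort assigned to the high-cost firm
c_high = H - e_H               # its cost target
s_2 = f(e_H)                   # its subsidy, from binding participation

print(e_H < e_star and s_2 < s_star)   # True: both distorted downward
```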


  We must also check that the incentive compatibility constraint for firm $ H     $ is satisfied as a weak inequality; the high-cost firm does not come close to being tempted to pick the low-cost firm's contract. This is a bit subtle. Setting the left-hand side of the incentive compatibility constraint (9.\ref{e15.9}) equal to zero because the participation constraint is binding for firm $ H     $, substituting in for $s_1$ from equation (9.\ref{e15.12}), and rearranging yields 
  \begin{equation} \label{e15.18}
  f(  H   - \underline{c} )-    f(  L  - \underline{c} ) \geq    f(  H   - \overline{c} )-   f(  L  - \overline{c} ) .
 \end{equation}
 This is true, and true as a strict inequality, because $f''>0$ and the arguments of $f$ on the left-hand side of equation (9.\ref{e15.18}) take larger values than on the right-hand side. 
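A quick numerical check of the strict inequality (9.\ref{e15.18}), again under the assumed quadratic disutility and hypothetical cost targets with $\underline{c} < \overline{c}$:

```python
# Check the inequality above under the assumed f(e) = e**2/2 with
# hypothetical values satisfying c_low < c_high.
f = lambda e: e**2 / 2
L, H = 2.0, 2.5
c_low, c_high = 1.0, 1.7

lhs = f(H - c_low) - f(L - c_low)    # arguments of f are larger here...
rhs = f(H - c_high) - f(L - c_high)  # ...than here, and f is convex
print(lhs > rhs)   # True: the inequality holds strictly
```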

\newpage

``Government Procurement''  illustrates that there is a tradeoff between the government's two objectives of inducing the correct amount of effort and minimizing the subsidy to the firm.  Even under complete information, the government cannot provide a subsidy of zero, or the firms will refuse to build the space station.  Under incomplete information, not only must the subsidies be positive but the low-cost firm earns {\bf informational rents}; the government offers a contract that pays the low-cost firm more than under complete information to prevent it from mimicking the high-cost firm by choosing an inefficiently low effort. The high-cost firm, however, does choose an inefficiently low effort, because if it were assigned greater effort it would have to be paid a greater subsidy, which would tempt the low-cost firm to imitate it. In equilibrium, the government compromises, accepting some probability of an inefficiently high subsidy ex post and some probability of inefficiently low effort.   
  

A little reflection will suggest a host of ways to alter this model. What if the firm only discovers its costs after accepting the contract?   What if two firms bid against each other for the contract? What if the firm can bribe the government?  What if the firm and the government bargain over the gains from the project, instead of the government being able to make a take-it-or-leave-it contract offer? What if the game is repeated, so the government can use the information it acquires in the second period? If it is repeated, can the government commit to long-term contracts? Can it commit not to renegotiate? See Spulber (1989) and Laffont \& Tirole (1993) if these are the kinds of questions that interest you.   


\newpage

\vspace{1in}
\noindent
{\bf  9.5 The Groves Mechanism}  

  \noindent
   Hidden knowledge is particularly important in public economics,
the study of government spending and taxation.  Government policy
involves moral hazard (remember the Welfare Game and the
Auditing Game), but often the government's task is simply to
extract information from the citizens in order to maximize welfare.
The optimal taxation literature starting with Mirrlees (1971) is an
example: citizens differ in their income-producing ability, and the
government wishes to demand higher taxes from the more able citizens.
An even purer problem of hidden knowledge is the problem of public
goods with private preferences. The government must decide whether it
is worthwhile to buy a public good based on the combined preferences
of all the citizens, but it needs to discover those preferences.  Unlike in the previous games in this chapter, a group of agents
is involved, not just a single agent. Moreover, the government is an
altruistic principal who cares directly about the utility of the
agents, rather than a car buyer or an insurance seller who cares
about the agents' utility only in order to satisfy self-selection and
participation constraints.  

 The next example is adapted from p. 426 of Varian (1992).  The mayor
of a town is considering installing a streetlight costing \$100.
Each of the five houses near the light would be taxed exactly \$20,
but the mayor will only install it if he decides that the sum of the
residents' valuations for it is greater than the cost.  The problem
is to discover the valuations.  If the mayor simply asks,
householder Smith could say that his valuation is \$5,000, and Brown
could say that he likes the dark and would pay \$5,000 to keep the street dark,
but all the mayor could conclude would be that Smith's valuation
exceeds \$20 and Brown's does not.  Talk is cheap, and each resident's dominant
strategy is to overreport or underreport. 

 The flawed mechanism just described can be denoted by 
  \begin{equation} \label{e18}
M_1: \;\;\; \left(  20, \sum_{i=1}^5 m_i \geq 100 \right),
 \end{equation}
  which means that each resident  pays $20$,  and the light is
installed if the sum of the messages is at least 100. 


An alternative mechanism is to make 
 resident $i$ pay the amount of his message, or pay zero if it is
negative.  This mechanism is
  \begin{equation} \label{e19}
M_2: \;\;\; \left( Max\{ m_i, 0\}, \sum_{j=1}^5 m_j \geq 100 \right),
 \end{equation}
 in which case there is no dominant strategy.  Player $i$ would
announce $m_i=0$ if he thought the project would go through without
his support, but he would announce up to his valuation if necessary.
There is a continuum of Nash equilibria that attain the efficient
result. Most of these are asymmetric, and there is a problem of how
the equilibrium to be played out becomes common knowledge.  This is a
simple mechanism, however, and it already teaches a lesson: that people are more likely to report their true political preferences if they must bear part of the costs themselves. 
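One such asymmetric equilibrium can be checked directly. With a hypothetical valuation profile summing to \$110 (the individual numbers are invented for illustration), the message profile below, in which one resident free-rides completely, is a Nash equilibrium of $M_2$:

```python
# One of the many (mostly asymmetric) Nash equilibria of M_2, with a
# hypothetical valuation profile. Each resident pays max(m_i, 0) and
# the light is built if the messages sum to at least 100.
vals = [40, 25, 20, 15, 10]
msgs = [40, 25, 20, 15, 0]        # resident 5 free-rides completely

def payoff(i, m_i, msgs):
    reports = msgs[:i] + [m_i] + msgs[i + 1:]
    built = sum(reports) >= 100
    return vals[i] - max(m_i, 0) if built else 0

# No resident can gain by deviating to any message on a coarse grid:
for i in range(5):
    eq = payoff(i, msgs[i], msgs)
    assert all(payoff(i, m, msgs) <= eq for m in range(-50, 151, 5))
print("the profile", msgs, "is a Nash equilibrium")
```

Any resident who cuts his message kills the project and drops to a payoff of zero, so the efficient outcome survives, but only because this particular division of the cost happens to be common knowledge.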
 

  Instead of just ensuring that the correct decision is
made in a Nash equilibrium,   it may be possible to design a
mechanism under which truth-telling is a dominant strategy, a {\bf
dominant-strategy mechanism}. Consider the mechanism
  \begin{equation} \label{e6} M_3: \;\;\; \left( 100-\sum_{j\neq i}
m_j , \sum_{j=1}^5 m_j \geq 100 \right).  \end{equation}
 Under mechanism (\ref{e6}), player $i$'s message does not affect
his tax bill except by its effect on whether or not the streetlight is
installed.  If player $i$'s valuation is $v_i$, his full payoff is
$v_i- 100 + \sum_{j\neq i} m_j $ if $m_i + \sum_{j\neq i} m_j \geq
100$, and zero otherwise. It is not hard to see that he will be
truthful in a Nash equilibrium in which the other players are
truthful, but we can go further. Truthfulness is weakly
dominant. Moreover, the players will tell the truth whenever lying
would alter the mayor's decision.  
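These claims can be verified numerically from Smith's point of view, taking the \$40 valuation used in the example below and letting the other players' reported sum and Smith's own report range over a grid:

```python
# Check (numerically, for one player) that truthful reporting is weakly
# dominant under mechanism M_3. The valuation and grids are hypothetical.
v_i = 40                      # Smith's valuation
others = range(0, 201, 10)    # possible sums of the other players' reports

def payoff(m_i, m_others, v=v_i, cost=100):
    # Built if reports cover the cost; Smith's tax is cost minus
    # the others' reports, unaffected by his own message.
    return v - (cost - m_others) if m_i + m_others >= cost else 0

for m_others in others:
    truthful = payoff(v_i, m_others)
    assert all(payoff(m, m_others) <= truthful for m in range(0, 201, 5))
print("truth-telling is weakly dominant on the grid")
```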

   Consider a numerical example.  Suppose that Smith's valuation
is 40 and the sum of the valuations is 110, so the project is indeed
efficient. If the other players report  their truthful sum of 70,
Smith's payoff from truthful reporting is his valuation of 40 minus
his tax of 30. Reporting more would not change his payoff, while
reporting less than 30 would reduce it to 0.

  If we are wondering whether Smith's strategy is dominant, we must
also consider his best response when the other players lie.  If they
underreported, announcing 50 instead of the truthful 70, then Smith
could make up the difference by overreporting 60, but his payoff
would be $-10$ ($=40 + 50 -100 $) so he would do better to report the
truthful 40, killing the project and leaving him with a payoff of 0.
If the other players overreported, announcing 80 instead of the
truthful 70, then Smith benefits if the project goes through, and he
should report at least 20 to obtain his payoff of 40 minus 20. He is
willing to report exactly 40, so there is an equilibrium with
truth-telling. 

 The problem with a dominant-strategy mechanism like the one facing
Smith is that it is not budget balancing. The government raises less
in taxes than it spends on the project; in the example above, truthful
reports summing to \$110 would raise only \$60 toward the \$100 cost.
Lack of budget balancing is a crucial feature of dominant-strategy
mechanisms. While the government deficit can be made either positive
or negative, it cannot be made zero, unlike in the case of Nash
mechanisms.
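The deficit is easy to compute for a hypothetical truthful profile of valuations summing to \$110 (the individual numbers are invented for illustration):

```python
# Budget of mechanism M_3 with a hypothetical valuation profile.
vals = [40, 25, 20, 15, 10]                    # sums to 110 >= 100
taxes = [100 - (sum(vals) - v) for v in vals]  # tax_i = 100 - others' reports
print(taxes, sum(taxes))   # [30, 15, 10, 5, 0] 60: short of the $100 cost
```

Had the valuations summed to more than \$125, total taxes would have been negative; either way, the budget does not balance.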





 



 \end{large}

\end{document}
 
