     \documentstyle[12pt,epsf]{msart}   
         \begin{document}   
   \textheight 9in 


\setcounter{section}{8}
 \setcounter{page}{249}

\noindent
March 1998
  
\section*{ 8  TOPICS IN MORAL HAZARD } 

      
 
\noindent
 {\bf Unravelling the Truth when Silence is the Only Alternative}

 
 
  Suppose that Nature uses the
uniform distribution to assign the variable $\theta$ some value in
the interval $[0, 10]$ and the agent's payoff is increasing in the
principal's estimate of $\theta$.  Usually we assume that the agent
can lie freely, sending a message $m$ taking any value in $[0,10]$,
but let us assume instead that he cannot lie, although he is free to
conceal information. Thus, if $\theta = 2$, he can send the
uninformative message $m \geq 0$ (equivalent to no message), or $m
\geq 1$, or $m=2$, but not the lie $m \geq 4$. 

 When $\theta=2$ the agent might as well send a message that  is the exact truth: ``$m=2$''.  If he were to choose ``$m \geq 1$'',
for example, the principal's first thought might be to estimate
$\theta$ as the average value of the interval $[1,10]$, which is 5.5.
But the principal would realize that no agent with a value of
$\theta$ greater than 5.5 would want to send that message in a Nash equilibrium. This
realization restricts the possible interval to [1, 5.5], which in
turn has an average of 3.25. But then no agent with $\theta > 3.25$ would
send the message ``$m \geq 1$''.  The principal can continue this process of
logical {\bf unravelling} to conclude that $\theta = 1$.  The message
``$m \geq 0$'' would be even worse, making the principal believe that
$\theta = 0$. In this model, ``No News is Bad News.''  The agent
would therefore not send the message ``$m \geq 1$''  and  he would be indifferent between ``$m=2$''  and   ``$m \geq 2$'' because the 
principal would make the same deduction from either message.
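The unravelling argument is a fixed-point computation: the principal's estimate, given the message ``$m \geq 1$,'' must equal the upper bound of the set of types willing to send it. A minimal sketch of the iteration (the function name and the tolerance are ours, not part of the model):

```python
# The principal's inference after the message "m >= lower" iterates:
# estimate theta as the mean of the remaining interval, then note that no
# agent above that estimate would send so uninformative a message, and cut
# the interval accordingly.

def unravel(lower, upper, tol=1e-9):
    """Iterate the principal's estimate until no further unravelling occurs."""
    while True:
        estimate = (lower + upper) / 2    # mean of the uniform on [lower, upper]
        if upper - estimate < tol:        # no type left to peel off
            return estimate
        upper = estimate                  # types above the estimate would not send this message

print(unravel(1, 10))   # 5.5, 3.25, ... converging to theta = 1
print(unravel(0, 10))   # "No News is Bad News": converges to theta = 0
```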

    \newpage

\bigskip
\noindent
{\bf The Revelation Principle}

 \noindent
  The principal might choose to offer a contract that induces the
agent to lie in equilibrium, since he can take lying into account
when he designs the contract, but this complicates the analysis. Each
state of the world has a single truth, but a continuum of lies:
generically speaking, almost everything is false.  The revelation
principle helps us simplify.

 \noindent
	{\bf The Revelation Principle.} {\it For every contract
$w(q,m)$ that leads to lying (that is, to $m \neq \theta$), there is
a contract $w^*(q,m)$ with the same outcome for every $\theta$ but no
incentive for the agent to lie.}  

      Applied to concrete examples, the revelation principle may seem
obvious.  Suppose we are concerned with the effect on the moral
climate of cheating on income taxes, but anyone who makes \$70,000
a year can claim he makes \$50,000 and the government
does not have the resources to  catch him.  The Revelation Principle says that we can rewrite
the tax code to set the tax to be equal for taxpayers earning \$70,000
and for those earning \$50,000, and the same amount of taxes will be collected without anyone having an incentive to lie.   Applied to moral education, the
principle says that the mother who agrees never to punish her
daughter if she tells her of all her escapades will never hear any untruths.
Clearly, the principle's usefulness is not  so much to improve outcomes as  
to simplify contracts. The principal (and the modeller) need only look at
contracts which induce truthtelling, so the relevant strategy space
is shrunk, and we can add a third constraint, truthtelling, to the incentive compatibility and participation constraints when calculating the
equilibrium.
 

\newpage

 \subsection{ An Example of   Moral Hazard with Hidden Knowledge:  ``The Salesman Game''} %8.2
 
\begin{center}
 {\bf ``The  Salesman Game''}
 \end{center}
  {\bf Players}\\
  A manager and a salesman.

 
\noindent
 {\bf Order of Play}\\
 (1) The manager offers the salesman a contract of the form $w(q,m)$,
where $q$ is sales and $m$ is a message.\\
 (2) The salesman decides whether or not to accept the contract.\\
 (3) Nature chooses whether the customer is a $Bonanza$ or a
$Pushover$ with probabilities 0.2 and 0.8. Denote the state variable
``customer status'' by $\theta$.  The salesman observes the state,
but the manager does not.\\
 (4) If the salesman has accepted the contract, he chooses his sales
level $q$, which implicitly measures his effort.

\noindent
 {\bf Payoffs}\\
 The manager is risk neutral and the salesman is risk averse. 
 If the salesman rejects the contract, his payoff is $\bar{U}= 8$ and
the manager's is zero. If he accepts the contract, then\\
 \begin{tabular}{ll}
  $ \pi_{manager}$ & $= q - w$.\\ 
 $\pi_{salesman}$ &$ = U(q, w, \theta)$, where $\frac{\partial
U}{\partial q} < 0, \frac{\partial^2 U}{\partial q^2} < 0,
\frac{\partial U}{\partial w} > 0, \frac{\partial^2 U}{\partial w^2} <
0.$
  \end{tabular}
  \bigskip

    
\newpage

\begin{equation} \label{e3}
 {\rm Separating\;\; Contract } \left\{
\begin{array}{ll}
{\rm Agent\;\; announces}\;\; {\it Pushover}:&
\begin{array}{ll}
   w =& \left\{
\begin{array}{l}
 0 \;  {\rm if}\; q < q_1.\\
  w_1 \; {\rm if }\; q \geq q_1.\\
\end{array}
 \right. \\
\end{array}
\\
{\rm Agent\;\; announces\;\;} { \it Bonanza}: &
\begin{array}{ll}
 w = & \left\{
\begin{array}{l}
 0 \; {\rm if}\; q < q_2.\\ 
 w_2\; {\rm if} \; q \geq q_2.
   \end{array}
 \right. 
  \end{array}
 \end{array}
 \right.
\end{equation}
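Contract (8.\ref{e3}) is a pair of forcing contracts selected by the salesman's announcement. A minimal sketch; the quotas and wages in the example are hypothetical, since the game has not yet pinned down numbers:

```python
def wage(q, announcement, q1, w1, q2, w2):
    """Separating contract: a flat wage if the salesman meets the quota
    corresponding to his announcement, and nothing otherwise."""
    quota, pay = (q1, w1) if announcement == "Pushover" else (q2, w2)
    return pay if q >= quota else 0

# Hypothetical numbers: the Pushover quota is 10 for a wage of 5,
# and the Bonanza quota is 40 for a wage of 9.
assert wage(12, "Pushover", 10, 5, 40, 9) == 5   # meets the Pushover quota
assert wage(12, "Bonanza", 10, 5, 40, 9) == 0    # falls short of the Bonanza quota
```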

      \newpage
 
\begin{center}
 {\bf ``The Repossession Game''}
 \end{center}
  {\bf Players}\\
  A  bank and a consumer.

 
\noindent
 {\bf Order of Play}\\
 (1) The bank can do nothing or can  offer the consumer an auto loan    which allows  him to buy a car that costs 11, but requires him to pay back $L$ or lose possession of the car to the bank. \\
 (2) The consumer accepts or rejects the loan.\\
 (3) The consumer chooses to $Work$, for an income of 15, or $Play$, for an income of 8. The disutility of work is 5. \\ 
 (4) The consumer repays the loan or defaults. \\
 (4a)  In one version of the game, the   bank offers to settle for amount $S$ and  leave possession of the car to the consumer.\\
 (4b) The consumer accepts or rejects the settlement $S$. \\
 (5) If the bank has not been paid $L$ or $S$, it repossesses the car. 

    
\noindent
 {\bf Payoffs}\\
 If the bank does not make any loan or the consumer rejects it, both players' payoffs are zero. The value of the car is 12 to the consumer and 7 to the bank, so  the bank's payoff  if a loan is made is 
 

$ \pi_{bank}=$  $\left\{
\begin{tabular}{ll}
    $L-11$ & if the original loan is repaid\\ 
 $S-11$ & if a settlement is made\\
  $7-11$ & if the car is repossessed.
 \end{tabular}
 \right.$

 If the consumer chooses $Work$, his income $W$ is 15 and his disutility of effort $D$ is 5. If he chooses $Play$, then $W=8$ and $D=0$.  His payoff is  
 
 $\pi_{consumer}=$  $\left\{
\begin{tabular}{ll}
    $W+12-L-D$ & if the original loan is repaid\\ 
 $W+12-S-D$ & if a settlement is made\\
   $W-D$ & if the car is repossessed.
 \end{tabular}
 \right.$

  
\newpage



\noindent
   {\bf ``Repossession Game I''}

The first version of the game 
 does not allow renegotiation, so moves (4a) and (4b) are dropped from the game.  In equilibrium, the bank will make the loan at a rate of $L=12$, and the consumer will choose $Work$ and repay the loan. Working back from the end of the game in accordance with sequential rationality, the consumer is willing to repay because by repaying 12 he receives a car worth 12.\footnote{As usual, we could change the model slightly to make the consumer strongly desire to repay the loan, by substituting a bargaining subgame that splits the gains from trade between bank and consumer rather than specifying that the bank make a take-it-or-leave-it offer. See Section 4.3. }  He will choose $Work$ because he can then repay the loan and his payoff will be 10 $(= 15 + 12-12 -5)$, but if he chooses $Play$ he will not be able to repay the loan and the bank will repossess the car, reducing his payoff to 8 $(= 8 - 0)$.  The bank will offer a loan at $L=12$ because the consumer will repay it and that is the maximum repayment to which the consumer will agree.  The bank's equilibrium payoff is 1 $(=12-11)$. This is an efficient outcome because the consumer does buy the car, which he values at more than its cost to the car dealer, although it is the bank rather than the consumer that gains the surplus, because of the bank's bargaining power over the terms of the loan.
 
 \newpage



\noindent
  {\bf ``Repossession Game II''}

 The second version of the game 
 does   allow renegotiation, so   moves (4a) and (4b) are  added back into the game.  Renegotiation turns out to be harmful, because it results in an equilibrium in which the bank refuses to make a loan,  reducing  the payoffs of bank and consumer to  (0,10) instead of (1,10); the   gains from trade are lost. 

 The equilibrium in ``Repossession Game I'' breaks down because the consumer would deviate by choosing $Play$. In ``Repossession Game I'',  this would result in the bank repossessing the car, and in ``Repossession Game II'', the bank still has the right to do this, for a payoff of  $-4\; (=7-11)$. If the bank chooses to renegotiate and offer $S=8$, however, this settlement will be accepted by the consumer, since in exchange he gets to keep a car worth 12, and the  payoffs of bank and consumer are    $-3\; (=8-11)$ and 12  $(= 8 + 12-8  )$. Thus, the bank will renegotiate, and the consumer will have increased his payoff from 10 to 12 by choosing $Play$. Looking ahead to this from move (1), however, the bank will see that it can do better by refusing to make the loan, resulting in the payoffs (0,10).   The bank cannot even break even by raising the loan rate $L$. If $L=30$, for instance, the consumer will still happily accept, knowing that when he chooses $Play$ and defaults the ultimate amount he will pay will be just $S=8$. 
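The backward induction in the two versions is easy to check numerically. A sketch using the payoff functions above (the variable names are ours, not part of the model):

```python
# Backward-induction check of the two Repossession Games, using the payoffs
# given above: the car is worth 12 to the consumer and 7 to the bank, it
# costs 11, Work yields income 15 at disutility 5, Play yields 8 at disutility 0.

def consumer_payoff(W, D, repaid=None, settled=None):
    """Consumer's payoff: he keeps the car worth 12 unless it is repossessed."""
    if repaid is not None:
        return W + 12 - repaid - D
    if settled is not None:
        return W + 12 - settled - D
    return W - D                      # car repossessed

# Repossession Game I (no renegotiation), loan rate L = 12:
work_repay = consumer_payoff(15, 5, repaid=12)   # 10
play_default = consumer_payoff(8, 0)             # 8, since the car is repossessed
assert work_repay > play_default                 # the consumer works and repays
bank_I = 12 - 11                                 # bank's payoff: 1

# Repossession Game II (renegotiation): after Play, the bank prefers
# settling at S = 8 (payoff 8 - 11 = -3) to repossessing (7 - 11 = -4),
# so the consumer anticipates a settlement rather than repossession.
play_settle = consumer_payoff(8, 0, settled=8)   # 12
assert play_settle > work_repay                  # the consumer deviates to Play
bank_II = 8 - 11                                 # -3, so the bank refuses to lend
```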






Renegotiation has a paradoxical effect. In the subgame starting with consumer default it increases efficiency, by allowing the players to make a Pareto improvement over an inefficient punishment. In the game as a whole, however, it reduces efficiency by preventing players from using punishments to deter inefficient actions. This is true of any situation in which punishment imposes a deadweight loss instead of being simply a transfer from the punished to the punisher. This may be why American judges are less willing than the general public to impose punishments on criminals. By the time a criminal reaches the courtroom, extra years in jail have no beneficial effect (incapacitation aside) but impose real costs on both criminal and society, so judges are unwilling to impose sentences which in each particular case are inefficient.  



 This game also illustrates the difficulty of deciding what ``bargaining power'' means.  The term is important to how many people think about law and public policy, but most have only hazy definitions of it in mind. 
 Chapter 11 will analyze bargaining in great detail, using the paradigm of splitting a pie, and the natural way to think of bargaining power is as the ability to get a bigger share of the pie. Here, the pie to be split is the surplus of 1 from the consumer's purchase of a car at cost 11 which will yield him 12 in utility. Both versions of 
``The Repossession Game'' give  all the bargaining power to the bank  in the sense that where there is a surplus to be split, the bank gets 100 percent of it.   But this does not help the bank in ``Repossession Game II'', because the consumer can put himself in a position where  the bank ends up a loser from the transaction despite its bargaining power. 

  

\newpage

   
 \subsection{Efficiency Wages} 

 \noindent
   The next three sections are about remedies for moral hazard that
can be applied to either hidden actions or hidden knowledge.  Most
of the illustrations will be of hidden actions, but usually ``low
effort'' can be replaced by ``lying about the hidden knowledge.''

    Shapiro \& Stiglitz (1984) show how involuntary unemployment can
be explained by a principal-agent model.  When all workers are
employed at the market wage, a worker who is caught shirking and
fired can immediately find another job just as good. Firing is then
an ineffective punishment, and effective penalties like boiling in oil
are excluded from the strategy spaces of legal businesses.  Becker \& Stigler
(1974) have suggested that workers post performance bonds, but if
workers are poor this is impractical.  Without bonds or boiling in
oil, the worker chooses low effort and receives a low wage.

   To induce a worker not to shirk, the firm can offer to pay him a
premium over the market-clearing wage, which he loses if he is caught
shirking and fired.  If one firm finds it profitable to raise its
wage, however, so do all firms. One might think that after the
wages equalized, the incentive not to shirk would disappear.  But
when a firm raises its wages, its demand for labor falls, and when all
firms raise their wages, the market demand for labor falls, creating
unemployment.  Even if all firms pay the same wages, a worker has an
incentive not to shirk, because if he were fired he would stay
unemployed. Even if there is a random chance of leaving the
unemployment pool, the unemployment rate rises sufficiently high that
workers choose not to risk being caught shirking. The equilibrium is
not first-best efficient, because even though the marginal revenue of
labor equals the wage, it exceeds the marginal disutility of effort,
but it is efficient in a second-best sense.  By deterring shirking,
the hungry workers hanging around the factory gates are performing a
socially valuable function (but they mustn't be paid for it!).

 The idea of paying high wages to increase the threat of dismissal is
old, and can even be found in {\it The Wealth of Nations} (Smith
[1776] p. 207).  What is new in Shapiro \& Stiglitz is the
observation that unemployment is generated by these ``efficiency
wages.''  These firms behave paradoxically.  They pay workers more
than necessary to attract them, and outsiders who offer to work for
less are turned away.  Can this explain why ``overqualified''
jobseekers are unsuccessful and mediocre managers are retained?
Employers are unwilling to hire someone talented, because he could
find another job after being fired for shirking, and trustworthiness
matters more than talent in some jobs.

   This discussion should remind you of the product quality game of Section 5.4. There too, purchasers   paid more than the reservation price in order to give the seller an incentive to behave properly,  because a seller who misbehaved  could be punished by termination of the relationship. The key characteristic  of  such models is that there is a constraint on the amount of  contractual punishment for misbehavior and that the participation constraint is not binding in equilibrium. 





  

\subsection{ Tournaments} 

 \noindent
 Games in which relative performance is important are called {\bf
tournaments}. Tournaments are similar to auctions, the difference
being that the actions of the losers matter directly, unlike in
auctions. Like auctions, they are especially useful when the
principal wants to elicit information from the agents. A
principal-designed tournament is sometimes called {\bf yardstick
competition} because the agents provide the measure for their wages.

   Farrell (unpublished) uses a tournament to explain how ``slack'' might be
the major source of welfare loss from monopoly, an old idea usually
prompted by faulty reasoning.  The usual claim is that monopolists
are inefficient because they, unlike competitive firms, do not have
to maximize profits to survive. This   relies on the dubious assumption  that firms
care about survival, not profits.  Farrell makes a subtler point:
although the shareholders of a monopoly maximize profit, the managers
maximize their own utility, and moral hazard is severe without the
benchmark of other firms' performances. 

     Let firm Apex have two possible production techniques, $Fast$
and $Careful$. Independently for each technique, Nature chooses
production cost $c=1$ with probability $\theta$ and $c=2$ with
probability $1-\theta$. The manager can either choose a technique at
random or investigate the costs of both techniques at a utility cost to himself of 
$\alpha$.  The shareholders can observe the resulting production
cost, but not whether the manager investigates.  If they see the
manager pick $Fast$ and a cost of $c=2$, they do not know whether he
chose it without investigating, or investigated both techniques and
found they were both costly.  The wage contract is based on what the
shareholders can observe, so it takes the form $(w_1,w_2)$, where
$w_1$ is the wage if $c=1$ and $w_2$ if $c=2$.  The manager's utility
  is log $w$ if he does not investigate, log $w - \alpha$ if
he does, and log $\bar{w}$, his reservation utility, if he quits.

   If the shareholders want the manager to investigate, the contract
must satisfy the self selection constraint 
 \begin{equation} \label{e4}
  U({\rm not\; investigate}) \leq U ({\rm investigate}).
  \end{equation}
 If the manager investigates, he still fails to find a low cost
technique with probability $(1-\theta)^2$, so (8.\ref{e4}) is equivalent to
 \begin{equation} \label{e5}
  \theta {\rm log}\;w_1 + (1-\theta) {\rm log}\; w_2 \leq [1- (1-\theta)^2]
{\rm log} \;w_1 + (1-\theta)^2 {\rm log} \;w_2 - \alpha. 
  \end{equation} 
 The self selection constraint is binding, since the shareholders want to keep the manager's compensation to a minimum. Turning inequality (8.\ref{e5}) into an equality and simplifying yields
 \begin{equation} \label{e6}
 \theta (1-\theta) {\rm log}\; \frac{w_1}{w_2}  =  \alpha.
  \end{equation} 
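The simplification is worth recording. Subtracting the left side of the incentive constraint from the right side gives
 \begin{displaymath}
 \left[ 1 - (1-\theta)^2 - \theta \right] {\rm log}\; w_1 + \left[ (1-\theta)^2 - (1-\theta) \right] {\rm log}\; w_2 = \alpha,
 \end{displaymath}
 and since $1 - (1-\theta)^2 - \theta = \theta(1-\theta)$ while $(1-\theta)^2 - (1-\theta) = -\theta(1-\theta)$, the coefficients on ${\rm log}\; w_1$ and ${\rm log}\; w_2$ are $\theta(1-\theta)$ and $-\theta(1-\theta)$, which yields equation (8.\ref{e6}).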
   The participation constraint, which is also binding,  is $ U(\bar{w}) = U ({\rm
investigate})$, or
 \begin{equation} \label{e7}
 {\rm log}\; \bar{w} =[1- (1-\theta)^2] {\rm log}\; w_1 + (1-\theta)^2
{\rm log}\; w_2 - \alpha. 
  \end{equation} 
  Solving equations (8.\ref{e6}) and (8.\ref{e7}) together for $w_1$ and $w_2$ yields
  \begin{equation} \label{e8}
\begin{array}{rl}
w_1 = & \bar{w}e^{\alpha/\theta}.\\
w_2 = & \bar{w}e^{-\alpha/(1-\theta)}.\\
 \end{array}
  \end{equation} 
The expected cost to the firm is
 \begin{equation} \label{e9}
 [1- (1-\theta)^2] \bar{w}e^{\alpha/\theta} + (1-\theta)^2
\bar{w}e^{-\alpha/(1-\theta)}.
   \end{equation}
        If the parameters are $\theta = 0.1$, $\alpha = 1$, and
$\bar{w} = 1$, the rounded values are $w_1 =22,026$ and $w_2 = 0.33$,
and the expected cost is $4,185$. Quite possibly, the shareholders
decide it is not worth making the manager investigate.

  But suppose that Apex has a competitor, Brydox, in the same
situation.  The shareholders of Apex can threaten to boil their
manager in oil if Brydox adopts a low cost technology and Apex does
not. If Brydox does the same, the two managers are in a prisoner's
dilemma, both wishing not to investigate, but each investigating from
fear of the other. The forcing contract for Apex specifies
$w_1=w_2$ to fully insure the manager, and boiling in oil if Brydox
has lower costs than Apex.  The contract need satisfy only the
participation constraint that log $w - \alpha =$ log $\bar{w}$, so $
w = 2.72$ and the cost of learning to Apex is only $2.72$, not
$4,185$.  Competition raises efficiency,  not through the threat
of firms going bankrupt  but through the threat of managers being
fired.
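The numbers above are easy to verify. A sketch that recomputes the wages in (8.\ref{e8}), the expected cost (8.\ref{e9}), and the tournament wage:

```python
import math

# Numerical check of the manager's wages, the expected cost, and the
# tournament wage, with theta = 0.1, alpha = 1, and w_bar = 1.
theta, alpha, w_bar = 0.1, 1.0, 1.0

w1 = w_bar * math.exp(alpha / theta)           # wage if c = 1
w2 = w_bar * math.exp(-alpha / (1 - theta))    # wage if c = 2
expected_cost = (1 - (1 - theta) ** 2) * w1 + (1 - theta) ** 2 * w2

# Both binding constraints hold at these wages:
assert abs(theta * (1 - theta) * math.log(w1 / w2) - alpha) < 1e-9
assert abs((1 - (1 - theta) ** 2) * math.log(w1)
           + (1 - theta) ** 2 * math.log(w2) - alpha - math.log(w_bar)) < 1e-9

# Tournament: full insurance, so only the participation constraint binds
# and log w - alpha = log w_bar, giving w = w_bar * e^alpha.
w_tournament = w_bar * math.exp(alpha)

print(round(w1), round(w2, 2), round(expected_cost), round(w_tournament, 2))
```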
 



\subsection{  Institutions and Agency Problems}

\noindent
 {\bf Ways to Alleviate   Agency Problems}

 \noindent
 Usually when agents are risk averse, the first-best cannot be
achieved,  because some tradeoff must be made between providing the
agent with incentives and keeping his compensation from varying too
much between states of the world, or  because it is not possible to punish him sufficiently.  We have looked at a number of different ways to solve the problem, and at this point a listing might be useful.   Each method is illustrated by application to the particular problem of
executive compensation, which is empirically important, and
interesting both because explicit incentive contracts are used and
because they are not used more often (see Baker, Jensen \& Murphy
[1988]).


 
 

\noindent
 (1) {\bf  Reputation} (5.3, 5.4, 6.4, 15.1)   \\
   Managers are promoted on the basis of past effort or truthfulness.

\noindent
 (2)  {\bf  Risk-Sharing Contracts} (7.3, 7.4, 7.5 ) \\
   The executive receives not only a salary, but call options on the
firm's stock. If he reduces the stock value, his options fall in
value.

 \noindent
 (3) {\bf Boiling in Oil}  (7.4)  \\
    If the firm becomes unable to pay dividends only when the
executive shirks and is unlucky, the threat of firing him
when the firm skips a dividend will keep him working hard. 

\noindent
(4) {\bf  Selling the Store} (7.4)        \\
 The managers buy the firm in a leveraged buyout.

 \noindent 
 (5) {\bf Efficiency Wages} (8.4)\\
  To make him fear losing his job, the executive is paid a higher
salary than his ability warrants (cf. Rasmusen [1988b] on mutual banks). 

 \noindent
 (6) {\bf  Tournaments} (8.5)         \\
   Several vice presidents compete and the winner succeeds the
president.

\noindent
(7) {\bf  Monitoring} (3.4)  \\
   The directors hire a consultant to evaluate the executive's
performance.

 \noindent
 (8) {\bf  Repetition}        \\
  Managers are paid less than their marginal products for most of
their career, but are rewarded later with higher salaries or generous
pensions if their career record has been good.   

 \noindent 
  (9) {\bf Changing the Type of the Agent}\\
  Older executives encourage the younger by praising ambition and
hard work.


  We have   talked about all but the last two solutions.
Repetition   enables the contract to come closer to the
first-best if the discount rate is low (Radner [1985]). ``Production
Game V''   failed to attain the first-best   in Section 7.2 because output
depended on both the agent's effort and random noise.  If the game
were repeated 50 times with independent drawings of the noise, the
randomness would average out and the principal could form an
accurate estimate of the agent's effort. This is really begging
the question,  by saying that in the long run effort can be observed after all.

      Changing the agent's type by increasing the   direct utility from desirable   or decreasing that from undesirable behavior  is a solution that has received little attention from economists,  who have focussed on changing the utility by changing monetary rewards. Akerlof (1983), one of the few papers   on the subject
of changing type, points out that the moral education of children, not just their
  intellectual education,  affects their productivity and
success. The attitude of economics, however, has been that while virtuous agents exist, the rules of an organization need to be designed with the unvirtuous agents in mind. As the Chinese thinker Han Fei said some two thousand years ago, 
 \begin{quotation}
 \begin{small}
``Hardly ten men of true integrity and good faith can be
found today, and yet the offices of the state number in the hundreds.
If they must be filled by men of integrity and good faith, then there
will never be enough men to go around; and if the offices are left
unfilled, then those whose business it is to govern will dwindle in
numbers while disorderly men increase.  Therefore the way of the
enlightened ruler is to unify the laws instead of seeking for wise men,
to lay down firm policies instead of longing for men of good faith.'' (Han Fei [1964], p. 109  
 from his chapter, ``The Five Vermin'')
 \end{small}
\end{quotation}
    The number of men of true integrity has probably not increased as fast as the size of government, so Han Fei's observation remains valid, but  it should be kept in mind that  honest men do exist and honesty can enter into rational models. There are tradeoffs between spending to foster honesty and spending  for other purposes, and there may be tradeoffs between using  the  second-best contracts designed for agents indifferent about the truth  and using the simpler contracts appropriate for honest agents. 
 






\bigskip
 \noindent
 {\bf Government Institutions and Agency Problems}


  The field of law is well-suited to analysis by principal-agent
models.  Even in the 19th century, Holmes (1881, p. 31) conjectured
in {\it The Common Law} that the reason why sailors at one time
received no wages if their ship was wrecked was to discourage them
from taking to the lifeboats too early instead of trying to save it.  The reason why
such a legal rule may have been suboptimal is not that it was unfair---
presumably sailors knew the risk before they set out--- but that
incentive compatibility and insurance work in opposite directions.  If sailors are
more risk averse than ship owners, and pecuniary advantage would not
add much to their effort during storms, then the owner ought to
provide insurance to the sailors by guaranteeing them wages whether
the voyage succeeds or not.

 Another legal question is who should bear the cost of an accident:
the victim (for example, a pedestrian hit by a car) or the person who
caused it (the driver). The economist's answer is that it depends on
who has the most severe moral hazard. If the pedestrian could have
prevented the accident at the lowest cost, he should pay; otherwise,
the driver. This idea of the {\bf least-cost avoider} is extremely useful in the economic analysis of law, and is  a major theme of  Posner's  treatise on law and economics (Posner [1992]).  Insurance or wealth transfer may also enter as
considerations. If pedestrians are more risk averse, drivers should
bear the cost, and according to some political views, if pedestrians
are poorer, drivers should bear the cost. Note that this last
consideration--- wealth transfer--- is not relevant to private
contracts. If a principal earning zero profits is required to bear
the cost of work accidents, the agent's wage will be lower than if he
bore them instead.

    Criminal law is also concerned with tradeoffs between incentives
and insurance.  Holmes (1881, p.  40) also notes, approvingly, that
Macaulay's draft of the Indian Penal Code made breach of contract for
the carriage of passengers a criminal offense. The reason is that the
palanquin-bearers were too poor to pay damages for abandoning their
passengers in desolate regions, so the power of the state was needed
to provide for heavier punishments than bankruptcy.  In general,
however, the legal rules actually used seem to diverge more from
optimality in criminal law than civil law.  If, for example, there is
no chance that an innocent man can be convicted of embezzlement,
boiling embezzlers in oil might be good policy, but most countries
would not allow this. Taking the example a step further, if the
evidence for murder is usually less convincing than for embezzling,
our analysis could easily indicate that the penalty for murder should
be less, but such reasoning offends the common notion of matching the
severity of punishment with the crime.

\bigskip
\noindent
 {\bf  Private Institutions and Agency Problems}


    While agency theory can be used to explain and perhaps improve
government policy, it also helps  explain the development of
many curious private institutions.  Agency problems are an important
hindrance to economic development, and may explain a number of
apparently irrational practices.  Popkin (1979, pp. 66, 73, 157)
notes a variety of these.  In Vietnam, for example, absentee
landlords were more lenient than local landlords, but improved the
land less, as one would expect of principals who suffer from
informational disadvantages {\it vis \`{a} vis} their agents. Along
the pathways in the fields, a farmer would plant early-harvesting rice
that his family could harvest by itself in advance of the
regular crop, so that hired labor could not grab handfuls as they
travelled.  In 13th-century England, beans were seldom grown, despite
their nutritional advantages, because they were too easy to steal.
Some villages tried to solve the problem by prohibiting anyone from
entering the beanfields except during certain hours marked by the
priest's ringing the church bell, so everyone could tend and watch
their beans at the same official time.

  In less exotic settings, moral hazard provides another reason
besides tax benefits why employees take some of their wages in fringe
benefits.  Professors are granted some of their wages in university
computer time because this induces them to do more research.  Having
a zero marginal cost of computer time is a way around the moral
hazard of slacking on research, despite being a source of moral
hazard in wasting computer time. A less typical but more imaginative
example is that of the bank in Minnesota which, concerned about its
image, gave each employee \$100 in credit at certain clothing stores
to upgrade their style of dress.  By compromising between paying cash
and issuing uniforms the bank could hope to raise both its profits
and the utility of its employees.  (``The \$100 Sounds Good, but what
do they Wear on the Second Day?''  {\it Wall Street Journal}, 16
October 1987, p. 17.)

    Longterm contracts are an important occasion for moral hazard,
since so many variables are unforeseen, and hence noncontractible.
The term {\bf opportunism} has been used to describe the behavior of
agents who take advantage of noncontractibility to increase their
payoff at the expense of the principal (see Williamson [1975] and
Tirole [1986]).  Smith may be able to extract a greater payment from
Jones than was agreed in their contract, because when a contract is
incomplete, Smith can threaten to harm Jones in some way. This is
called {\bf hold-up potential} (Klein, Crawford, \& Alchian [1978]).
Hold-up potential can even make an agent introduce competing agents
into the game, if competition is not so extreme as to drive rents to
zero.  Michael Granfield tells me that the Fairchild company once
obtained a patent on a new component of electronic fuel injection
systems, which it sought to sell to another firm, TRW.  TRW offered a much higher price if Fairchild
would license its patent to other producers, fearing the
hold-up potential of buying from just one supplier.  TRW could have
tried writing a contract to prevent hold-up, but knew that it would
be difficult to prespecify all the ways that Fairchild could cause
harm, including not only slow delivery, poor service, and low
quality, but also sins of omission like failing to sufficiently guard
the plant from shutdown due to accidents and strikes.

  It should be clear from the variety of these examples that moral
hazard is a common problem. Now that the first flurry of research on
the principal-agent problem has finished, researchers are beginning
to use the new theory to study institutions that were formerly
relegated to descriptive ``soft'' scholarly work.




  \subsection { Teams}  

 \noindent
 To conclude this chapter, let us switch our focus from the individual agent to a group of agents. We have already looked at
  tournaments, which need more than one agent to work, but a tournament still takes place in a situation where each agent's output is distinct.  The tournament is a solution to the standard problem, and the principal could always fall back on other solutions such as individual risk-sharing contracts. In this section, the effect of there being a group of agents is to destroy the effectiveness of the individual risk-sharing contracts, because observed output is a joint function of the unobserved effort of many agents. Even though there is a group, a tournament is impossible, because only one output is observed.  The situation has much of the flavor of ``The Civic Duty Game'' of Chapter 3: the actions of a group of players produce a joint output, and each player wishes that the others would carry out the costly actions.  A teams model is defined as follows. 


\noindent
  {\it A {\bf team} is a group of agents who independently choose
effort levels that result in a single output for the entire group.}

  We will look at teams using the following game.


\begin{center}
 {\bf ``Teams''}\\
 (Holmstrom [1982])
 \end{center}
  {\bf Players}\\
  A principal and $n$ agents.

 
\noindent
 {\bf Order of Play}\\
 (1) The principal offers a contract to each agent $i$ of the form $w_i(q)$,
where $q$ is total output.\\
 (2) The agents decide whether or not to accept the contract.\\
 (3) The agents simultaneously pick effort levels $e_i$, ($i =
1,\dots,n$).\\
 (4)  Output is $q(e_1,\ldots, e_n)$. 

 \noindent
 {\bf Payoffs}\\
 If any agent rejects the contract, all payoffs equal zero.
Otherwise,\\
 \begin{tabular}{ll}
  $ \pi_{principal}$ & $= q - \sum_{i=1}^n w_i$;\\ 
 $\pi_{i}$ & $=  w_i - v_i(e_i)$, where $v'_i> 0$ and $v''_i > 0$.
\end{tabular}
   

 Despite the risk neutrality of the agents, ``selling the store''
fails to work here, because the team of agents still has the same
problem as the employer did. The team's problem is cooperation between
agents, and the principal is peripheral.

\noindent
 Denote the efficient vector of actions by $e^*$. An efficient
contract is 
  \begin{equation} \label{e10}  
w_i(q) = \left\{
 \begin{array}{ll}           
 b_i& {\rm if}\; q \geq q(e^*).\\
 0 & {\rm if} \;q < q(e^*).\\       
 \end{array}
 \right.         
\end{equation}     
     where $\sum_{i=1}^n b_i = q(e^*)$ and $b_i > v_i(e^*_i)$.  
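
  For a concrete illustration (the functional forms here are chosen
only for this example), let $n=2$, $q(e_1,e_2) = e_1 + e_2$, and
$v_i(e_i) = e_i^2/2$.  The efficient efforts maximize
$e_1 + e_2 - e_1^2/2 - e_2^2/2$, so $e^*_i = 1$ and $q(e^*) = 2$.
Setting $b_1 = b_2 = 1$ satisfies both conditions, since
$b_1 + b_2 = 2 = q(e^*)$ and $b_i = 1 > v_i(1) = 1/2$.  Given that
the other agent chooses $e^*_j = 1$, agent $i$ earns $1 - 1/2 = 1/2$
by choosing $e_i = 1$, but at most $0 - v_i(e_i) \leq 0$ by shirking,
since any $e_i < 1$ makes $q < q(e^*)$ and his wage zero.  Thus $e^*$
is a Nash equilibrium under the contract.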
 
 Contract (8.\ref{e10}) gives agent $i$ the wage $b_i$ if all agents
pick the efficient effort, and nothing if any of them shirks, in
which case the principal keeps the output.  The teams model gives one
reason to have a principal: he is the residual claimant who keeps the
forfeited output. Without him, it is questionable whether the agents
would carry out the threat to discard the output if, say, it were 99
instead of the efficient 100. There is a   problem of dynamic consistency. The 
agents would like to commit in advance to throw away output, but only
because they never have to do so in equilibrium.  If the modeller
wishes to disallow discarding output, he imposes the {\bf
budget-balancing} constraint that the sum of the wages equals exactly
the output, no more and no less. But budget balancing creates a
problem for the team that is summarized in Proposition 8.1.

\noindent
 {\bf Proposition 8.1.}   { \it If there is a budget-balancing constraint, no differentiable
wage contract $w_i(q)$ generates an efficient Nash equilibrium.}

\noindent
{\bf Proof}\\
     Agent $i$'s problem is     
\begin{equation}\label{e11} 
\stackrel{Maximize}{e_i} \;\;\;  w_i(q(e)) - v_i(e_i).
  \end{equation}
 His first order condition is
  \begin{equation}\label{e12}
 \left( \frac{dw_i}{dq} \right) \left( \frac{dq}{de_i} \right) -
\frac{dv_i}{de_i} = 0.
 \end{equation}     
 With budget balancing and a linear utility function, the Pareto
optimum maximizes the sum of utilities (something not generally
true), so the optimum solves
 \begin{equation} \label{e13}
 \begin{array}{cl}           
  Maximize & q(e) - \sum_{i=1}^n v_i(e_i)\\
   e_1,\ldots, e_n & \\
  \end{array}      
\end{equation}

  The first order condition is that the marginal dollar contribution
to output equal the marginal disutility of effort: 
 \begin{equation} \label{e14}
 \frac{dq}{d{e_i}} - \frac{d{v_i}}{d{e_i}} = 0.
\end{equation}     
    Equation (8.\ref{e14}) contradicts (8.\ref{e12}), the agent's first
order condition, because $\frac{dw_i}{dq}$ is not equal to one.  If
it were, agent $i$ would be the residual claimant and receive the
entire marginal increase in output--- but under budget balancing, not
every agent can do that. Because each agent bears the entire burden
of his marginal effort but receives only part of the benefit, the contract
does not achieve the first-best.  Without budget balancing, on the
other hand, if the agent shirked a little he would gain the entire
leisure benefit from shirking, but he would lose his entire wage
under the optimal contract.
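
The logic of the proof can be checked numerically. The following sketch uses illustrative functional forms that are assumptions, not part of the model above: $q(e) = \sum_i e_i$, $v_i(e_i) = e_i^2/2$, and the budget-balancing equal-shares rule $w_i(q) = q/n$. It iterates best responses to find the symmetric Nash effort and compares it with the first-best:

```python
# Numerical sketch of Proposition 8.1 (illustrative assumptions, not from
# the text): q(e) = e_1 + ... + e_n, v_i(e_i) = e_i^2 / 2, and the
# budget-balancing equal-shares contract w_i(q) = q/n.

n = 3
grid = [k / 1000.0 for k in range(2001)]      # candidate efforts in [0, 2]

def payoff(share, e_i, e_others_sum):
    q = e_i + e_others_sum                    # q(e) = sum of efforts
    return share * q - e_i ** 2 / 2.0         # w_i(q) - v_i(e_i)

def best_response(share, e_others_sum):
    return max(grid, key=lambda e: payoff(share, e, e_others_sum))

# Under w_i = q/n each agent keeps only the share 1/n of his marginal
# product, so his first order condition is (1/n)(1) = e_i.
e = 0.5
for _ in range(50):                           # iterate to a fixed point
    e = best_response(1.0 / n, (n - 1) * e)
e_nash = e                                    # converges to 1/n

# A residual claimant (share = 1) satisfies dq/de_i = v'_i, so e_i = 1.
e_star = best_response(1.0, 0.0)
```

With these forms each agent supplies only $1/n$ of the efficient effort, confirming that budget balancing blunts marginal incentives.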


\noindent
 {\bf Discontinuities in Public Good Payoffs}

 \noindent
 Ordinarily, there is a free rider problem if several players each
pick a level of effort which increases the level of some public good
whose benefits they share. Noncooperatively, they choose effort
levels lower than if they could make binding promises.
Mathematically, let identical risk-neutral players indexed by $i$
choose effort levels $e_i$ to produce amount $q(e_1,\ldots,e_n)$ of
the public good, where $q$ is a continuous function.  Player $i$'s
problem is
 \begin{equation} \label{e15}
 \stackrel{Maximize}{e_i} q(e_1,\ldots,e_n) - e_i,
 \end{equation}
 which has first order condition
 \begin{equation} \label{e16}
  \frac{\partial q}{\partial e_i} - 1 = 0,
 \end{equation}
 whereas the greater, first-best effort $n$-vector $e^*$ is
characterized by 
 \begin{equation} \label{e17}
 \sum_{i=1}^n \frac{\partial q}{\partial e_i} - 1 = 0.
 \end{equation}
  If the function $q$ is discontinuous at $e^*$ (for example, $q=0$
if $e_i < e^*_i$ for any $i$), the strategy profile $e^*$ can be
a Nash equilibrium.  In  ``Teams''  the same effect is at work.
Although the ``Teams'' function is not discontinuous, contract (8.\ref{e10}) is
constructed to obtain the same incentives as if it were.
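
The parenthetical example can be made concrete with a small numerical sketch (the particular numbers are illustrative assumptions): let each of $n$ players receive the public good $q$, where $q = n$ if every $e_i \geq 1$ and $q = 0$ otherwise. Checking unilateral deviations confirms that the efficient profile is a Nash equilibrium:

```python
# Sketch of a discontinuous public good game (illustrative numbers, not
# from the text): q = n if all e_i >= e*, else q = 0; player i's payoff
# is q - e_i, so every player is decisive at the efficient profile.

n = 4
e_star = 1.0

def q(efforts):
    return float(n) if all(e >= e_star for e in efforts) else 0.0

def payoff(i, efforts):
    return q(efforts) - efforts[i]

# At (1, 1, ..., 1) each player earns n - 1 = 3.
efficient = [e_star] * n
eq_payoff = payoff(0, efficient)

# A unilateral shirker destroys the whole output with certainty, so no
# downward deviation pays.
deviation_payoffs = [payoff(0, [e] + [e_star] * (n - 1))
                     for e in (0.0, 0.5, 0.99)]
is_nash = all(d <= eq_payoff for d in deviation_payoffs)
```

Shaving effort even slightly costs the shirker the entire output, which is why the discontinuity sustains efficiency.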

   The first-best  can be achieved  because the discontinuity at $e^*$ makes every player the marginal,
decisive player: if he shirks a little, output falls drastically and
with certainty.  Either of two modifications restores the free rider
problem and induces shirking:

\noindent
  (1) Let $q$ be a function not only of effort but of random noise---
Nature moves after the players.  Uncertainty makes the {\it expected}
output  a continuous function of effort.\\
 (2) Let players have incomplete information about the critical
value---Nature moves before the players and chooses $e^*$. Incomplete
information makes the {\it estimated} output a continuous function of
effort.

 The discontinuity phenomenon is common. Examples, not all of which
note the problem, include:

 \noindent
 (1) Effort in teams (Holmstrom [1982], Rasmusen [1987]).\\
 (2) Entry deterrence by an oligopoly (Bernheim [1984], Waldman
[1987]). \\
 (3) Output in oligopolies with trigger strategies (Porter
[1983a]).\\
 (4) Patent races (Section 14.1).\\
 (5) Tendering shares in a takeover (Grossman \& Hart [1980], Section
13.5).\\
 (6) Preferences for levels of a  public good.








 
\begin{small}

\bigskip
\noindent
 {\bf NOTES}

 \noindent
 {\bf N8.1} {\bf Pooling vs. Separating Equilibrium, and the
Revelation Principle}  
 \begin{itemize}
  \item
  The books by  Fudenberg \& Tirole (1991a),  Laffont \& Tirole  (1993), and Spulber (1989),  and Baron's chapter in the {\it Handbook of Industrial Organization}
edited by Schmalensee and Willig are good places to look for more on mechanism design.  

\item
   Levmore
(1982) discusses hidden knowledge problems in tort damages,
corporate freezeouts, and property taxes in a law review article.

\item
 In moral hazard with hidden knowledge, the contract must ordinarily satisfy only one participation constraint, whereas in adverse selection problems there is a different participation constraint for each type of agent. An exception arises when constraints limit how much an agent can be punished in different states of the world. If, for example, there are bankruptcy constraints and the agent's wealth differs across the $N$ possible states of the world, there will be $N$ constraints on how negative his wage can be, in addition to the one participation constraint. These can be viewed as {\bf interim} participation constraints, since they represent the idea that the agent wants to get out of the contract once he observes the state of the world midway through the game. 
 
\item
 The revelation principle was named by Myerson (1979) and
can be traced back to Gibbard (1973). A further reference is Dasgupta,
Hammond \& Maskin (1979). Myerson's game theory book is, as one might expect, a good place to look for further details (Myerson [1991, pp. 258-63, 294-99]).  


 \item
 Moral hazard frequently occurs in public policy. Should the doctors
who prescribe drugs also be allowed to sell them? The question trades
off the likelihood of overprescription against the potentially lower
cost and greater convenience of doctor-dispensed drugs. See ``Doctors
as Druggists: Good Rx for Consumers?''  {\it Wall Street Journal}, 25
June 1987, p. 24.

\item
 For a careful discussion of the unravelling argument for information
revelation, see Milgrom (1981b).

  \item
 A hidden knowledge game requires that the state of the world
matter to one of the players' payoffs, but not necessarily in the
same way as in ``Production Game VI''. ``The Salesman Game'' of Section 8.2
effectively uses the utility function $U(e,w,\theta)$ for the agent
and $V(q-w)$ for the principal. The state of the world matters
because the agent's disutility of effort varies across states. In
other problems, his utility of money might vary across states.
 \end{itemize}



\bigskip
\noindent
{\bf N8.2} { \bf An Example of   Moral Hazard with Hidden Knowledge:  ``The Salesman Game''}
 \begin{itemize}
  

\item
 Sometimes students know more about their class rankings than the
professor does.  One professor of labor economics  
used a mechanism of the following kind for grading class discussion.
Each student $i$ reports a numerical evaluation of each other student
in the class.  Student $i$'s grade is an increasing function of the
evaluations given $i$ by other students and of the correlation
between $i$'s evaluations and the other students'. There are many
Nash equilibria, but telling the truth is a focal point.

\item
 In dynamic games of moral hazard with hidden knowledge, the {\bf
ratchet effect} is important: the agent takes into account that his
information-revealing choice of contract this period will affect the
principal's offerings next period. A principal might allow high
prices to a public utility in the first period to discover that its
costs are lower than expected, but in the next period the prices
would be lowered. The contract is ratcheted irreversibly to be more
severe. Hence, the company might not choose a contract which reveals
its costs in the first period.  This is modelled in Freixas,
Guesnerie \& Tirole (1985).

$\;\;\;$ Baron (1989) notes that the principal might purposely design the
equilibrium to be pooling in the first period so that self-selection does
not occur. Having learned nothing, he can offer a more effective
separating contract in the second period. 

 \item
   We can only find weak Nash equilibria for most hidden knowledge
models, because many contracts can achieve the same outcome by
specifying different out-of-equilibrium punishments. In the context
of ``The Salesman Game'', the contract could specify either ($w=w_2$ if
$q > q_2$) or ($w =0$ for $q > q_2$). The salesman would choose $q =
q_2$ in either case.
  \end{itemize}



\bigskip
\noindent
{\bf N8.4} { \bf Efficiency Wages}
 \begin{itemize}
 \item
  For surveys of the efficiency wage literature, see the article by
L. Katz (1986), the book of articles edited by  Akerlof \& Yellen (1986), and the book-length survey by Weiss (1990).

\item
  While the efficiency wage model does explain involuntary
unemployment, it does not explain cyclical changes in unemployment.

\item
 The efficiency wage idea is based on the same idea as the Klein \&
Leffler (1981) model of product quality formalized in Section
5.3. If no punishment is available for a player who is tempted to misbehave, a punishment can be created by giving him something that can be taken away. This something can be a high-paying job or a loyal customer. It is also similar to the idea, familiar in politics and university administration, of {\bf co-opting} opponents: to tame the radical student association, give them an office of their own, which can be taken away if they seize the dean's office. Yet another field of application is moral hazard with hidden knowledge:
Rasmusen (1988b) shows that when depositors do not know which investments are risky and which are safe, mutual bank managers can be paid highly to deter them from making risky investments that might cost them their jobs (see the article for details on what happened after 1980).    

\item
 Adverse selection can also drive an efficiency wage model. We will see in Chapter 9 that a customer might be willing to pay a high price to attract sellers of high-quality cars when he cannot detect quality directly.   
  \end{itemize}



\bigskip
\noindent
{\bf N8.5} { \bf Tournaments}
 \begin{itemize}
 \item
  An article that stimulated much interest in tournaments
is Lazear \& Rosen (1981), which discusses in detail the importance
of risk aversion and adverse selection.

 \item
 One example of a tournament is the two-year, three-man contest by
which Citicorp chose its new chairman. The firm named three candidates
as vice-chairmen: the head of consumer banking, the head of corporate
banking, and the legal counsel.  Earnings reports were even split
into three components, two of which were corporate and consumer
banking (the third was the ``investment'' bank, irrelevant to the
tournament).  See ``What Made Reed Wriston's Choice at Citicorp,''
{\it Business Week}, 2 July 1984, p.  25.

\item
     General Motors has tried a tournament among its production
workers. During a depressed year, management credibly threatened to
close down the auto plant with the lowest productivity. Reportedly,
this did raise productivity. Such a tournament is interesting because
it helps explain why a firm's supply curve could be upward sloping
even if all its plants are identical, and why it might hold excess
capacity.  Should information on a plant's current performance have
been released to other plants? See ``Unions Say Auto Firms Use
Interplant Rivalry to Raise Work Quotas,'' {\it Wall Street Journal},
8 November 1983, p.  1.

 \item
  Under adverse selection, tournaments must be used differently than
under moral hazard, because output depends on the agents' fixed
abilities rather than on effort they can control.
Instead, tournaments are used to deter agents from accepting
contracts in which they must compete for a prize with other agents of
higher ability.

\item
  Interfirm management tournaments run into difficulties when
shareholders want managers to cooperate in some arenas.  If managers
collude in setting prices, for example, they can also collude to make
life easier for each other.



\item
 Antle \& Smith (1986) is an empirical study of tournaments in
managers' compensation.  Rosen (1986) is a theoretical model of a
labor tournament in which the prize is promotion.

  \item
  Suppose a firm conducts a tournament in which the best-performing
of its vice-presidents becomes the next president.  Should the firm
fire the most talented vice-president before it starts the
tournament? The answer is not obvious. Maybe in the tournament's
equilibrium,  Mr Talent works less hard because of his initial
advantage, so that all of the vice-presidents retain the incentive to
work hard.



\item
  A tournament can reward the winner, or shoot the loser. Which is
better? Nalebuff \& Stiglitz (1983) say to shoot the loser, and
Rasmusen (1987) finds a similar result for teams, but for a
different reason.  Nalebuff \& Stiglitz's result depends on
uncertainty and a large number of agents in the tournament, while
Rasmusen's depends on risk aversion.  If a utility function is concave
because the agent is risk averse, the agent is hurt more by losing a
given sum than he would benefit by gaining it. Hence, for incentive
purposes the carrot is inferior to the stick, a result unfortunate
for efficiency since penalties are often bounded by bankruptcy or
legal constraints.

\item
 Using a tournament, the equilibrium effort might be greater in a
second-best contract than in the first-best, even though the
second-best is contrived to get around the problem of inducing
sufficient effort.  Also, a pure tournament, in which the prizes are
distributed solely according to the ordinal ranking of output by the
agents, is often inferior to a tournament in which an agent must
achieve a significant margin of superiority over his fellows in order
to win (Nalebuff \& Stiglitz [1983]). Companies using sales
tournaments sometimes have prizes for record yearly sales besides
ordinary prizes, and some long distance athletic races have
non-ordinal prizes to avoid dull events in which the best racers run
``tactical races.''

\item
 Organizational slack of the kind described in the Farrell model has important practical implications. In dealing with bureaucrats, one must keep in mind that they are usually less concerned with the organization's prosperity than with their own.  In complaining about bureaucratic ineptitude, it may be much more useful to name particular bureaucrats and send them copies of the  complaint than to stick to the abstract issues at hand. Private firms, at least, are well aware that customers help monitor agents.  



  \end{itemize}


 \bigskip
\noindent
 {\bf N8.6} { \bf  Institutions and Agency Problems} 
  \begin{itemize}
 \item
  Gaver \& Zimmerman (1977) describes how a performance bond of 100
percent was required for contractors building the BART subway system
in San Francisco. ``Surety companies'' generally bond a contractor
for five to 20 times his net worth, at a charge of 0.6 percent of the
bond per year, and absorption of their bonding capacity is a serious
concern for contractors in accepting jobs.


\item
 Even if a product's quality need not meet government standards, the
seller may wish to bind himself to them voluntarily. Stroh's {\it
Erlanger} beer proudly announces on every bottle that although it is
American, ``Erlanger is a special beer brewed to meet the stringent
requirements of Reinheitsgebot, a German brewing purity law
established in 1516.'' Inspection of household electrical appliances
by an independent lab to get the ``UL'' listing is a similarly
voluntary adherence to standards.

\item
  The stock price is a way of using outside analysts to monitor an
executive's performance.  When General Motors bought EDS, it
created a special class of stock, GM-E, which varied with EDS
performance and could be used to monitor it.
 \end{itemize}



 

\bigskip
\noindent
 {\bf N8.7} {\bf Teams } 
 \begin{itemize}
\item
 {\bf Team theory}, as developed by Marschak \& Radner (1972), is an
older mathematical approach to organization. In the old usage of
``team'' (different from the current, Holmstrom [1982] usage),
several agents who have different information but cannot communicate
it must pick decision rules. The payoff is the same for each agent,
and their problem is coordination, not motivation.

 \item
  The efficient contract (8.\ref{e10}) supports the efficient Nash
equilibrium, but it also supports a continuum of inefficient Nash
equilibria.  Suppose that in the efficient equilibrium all workers
work equally hard.  Another Nash equilibrium is for one worker to do
no work and the others to work inefficiently hard to make up for him.

\item 
 {\bf A Teams contract with hidden knowledge.} In the 1920s,
National City Co.  assigned 20 percent of profits to compensate
management as a group. A management committee decided how to share
it, after each officer submitted an unsigned ballot suggesting the
share of the fund that Chairman Mitchell should have, and a signed
ballot giving his estimate of the worth of each of the other eligible
officers, himself excluded.  (Galbraith [1954], p. 157)



 \item
 {\bf A First-Best, Budget-Balancing Contract when Agents are Risk
Averse}

 Proposition 8.1 can be shown to hold for any contract, not just for
differentiable sharing rules, but it does depend on risk neutrality
and separability of the utility function.  Consider the following
contract from Rasmusen (1987):\\
  \begin{equation}\label{e21}   
w_i = \left\{ \begin{array}{ll}
b_i&  {\rm if}\; q \geq q(e^*).\\
  0&   {\rm with\; probability}\; (n-1)/n \;{\rm if}\; q < q(e^*),\\
  q&   {\rm with\; probability}\; 1/n \;{\rm if}\; q < q(e^*).\\
 \end{array}
  \right. 
\end{equation} 
  If the worker shirks, he enters a lottery.  If his risk aversion is
strong enough, he prefers the certain return $b_i$, so he does not
shirk.  If agents' wealth is unlimited, then for any positive risk
aversion we could construct such a contract, by making the losers in
the lottery accept negative pay.
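
$\;\;\;$ As an illustration (with numbers and a utility function
assumed only for this example), let $n = 2$, $q(e^*) = 100$,
$b_i = 50$, and $u_i(w) = \sqrt{w}$. The lottery triggered by
shirking has the same expected {\it wage} as working, since
$(1/2)(0) + (1/2)(100) = 50 = b_i$, but a lower expected utility:
$(1/2)\sqrt{0} + (1/2)\sqrt{100} = 5 < \sqrt{50} \approx 7.07$.
Since a small amount of shirking barely reduces the effort cost, the
risk-averse agent prefers the certain $b_i$ and does not shirk.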


\item 
 A teams contract like (8.\ref{e10}) is not a tournament.
Only absolute performance matters, even though the level of absolute
performance depends on what all the players do.

\item
  {\bf The budget-balancing constraint.} The legal doctrine of
``consideration'' makes it difficult to make binding,
Pareto-suboptimal promises. An agreement is not a legal contract
unless it is more than a promise: both parties have to receive
something valuable for the courts to enforce the agreement.  

 \item
 Adverse selection can be incorporated into a teams model. A team of
workers who may differ in ability produce a joint output, and the
principal tries to ensure that only high-ability workers join the
team. See Rasmusen \& Zenger (1990).
\end{itemize}



  
\end{document}
 