Eric Rasmusen
26 September 2006

Notes on Risk (revised after class)

1. The idea of risk aversion is that people like to keep their consumption at a steady level rather than bouncing up and down. They prefer to consume $50,000 per year rather than $10,000 one year and $90,000 the next.

The conventional way to represent this is with a strictly concave utility function, such as

(A) U = log(C)

If the utility function is concave like this, then from any starting point a loss of $1 hurts more than a gain of $1 helps.
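To see it in numbers, here is a minimal check in Python, using the log utility in (A); the dollar figures are the ones from the example above.

    import math

    def u(c):
        return math.log(c)          # utility function (A)

    c = 50_000
    print(u(c) - u(c - 1))          # utility lost from a $1 loss, about 2.0000e-05
    print(u(c + 1) - u(c))          # utility gained from a $1 gain, slightly smaller

    # Steady consumption beats a 50-50 gamble with the same mean:
    print(u(50_000))                           # about 10.82
    print(0.5 * u(10_000) + 0.5 * u(90_000))   # about 10.31

The loss hurts more than the gain helps, and steady consumption of $50,000 beats the $10,000/$90,000 gamble.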

For decades it has troubled many economists that this theory doesn't perfectly fit the facts of people's behavior. No other theory of risk aversion has proven useful enough to be remembered, though, despite the hundreds of articles written on how to improve the theory.

Usually you can make small modifications to the theory when you need something fancier to fit your particular situation. One such modification is to add adjustment costs.

Suppose someone is currently earning enough to barely make his $500/month payments for his car. If he makes $10/month less, he is in trouble. There is a big adjustment cost, since he must give up his car and buy a smaller one instead. If he makes $20/month less, that isn't much worse: the adjustment cost is a fixed cost. Hence, his behavior will seem to violate the assumption of concave utility. Suppose he has a 5% chance of losing $10 and a 5% chance of losing $20. He would prefer instead to have a 1% chance of losing $10 and an 8% chance of losing $20, because what matters to him is that the chance of having to give up the car at all falls from 10% to 9%, even though the expected dollar loss is larger. He is positively risk loving!
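One way to make that concrete (a sketch only, with my own assumptions): give him linear utility in dollars plus a fixed adjustment cost K that he pays whenever income falls at all, because any drop forces him to give up the car.

    def expected_utility(lottery, K):
        # lottery is a list of (probability, dollar loss) pairs;
        # the fixed cost K is paid whenever there is any loss at all
        return sum(p * (-loss - (K if loss > 0 else 0)) for p, loss in lottery)

    K = 50   # assumed adjustment cost in dollars; any K above 20 gives the same ranking
    lottery_1 = [(0.05, 10), (0.05, 20), (0.90, 0)]   # 5% and 5%
    lottery_2 = [(0.01, 10), (0.08, 20), (0.91, 0)]   # 1% and 8%

    print(expected_utility(lottery_1, K))   # -1.5 - 0.10*K = -6.5
    print(expected_utility(lottery_2, K))   # -1.7 - 0.09*K = -6.2, so he prefers lottery 2

For any K above $20, the smaller chance of paying the fixed cost outweighs the larger expected dollar loss.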

On the other hand, suppose this person starts to make $10/month more. There is an adjustment cost to trading in his current car for a slightly better one, but this adjustment is optional, so what may happen is that he doesn't change his consumption behavior at all in the short run. In the long run, though, when his car needs replacing, he will smoothly change to owning one that costs $510/month.

This idea that any change in behavior incurs adjustment costs, and that negative but not positive changes in wealth require changes in behavior, is powerful. Think of it whenever someone brings up a behavior anomaly involving the decisionmaker's original position being important.

2. Diversification reduces risk. Suppose you could invest in the stock of Apex, which yields either $0 or $3 with equal probability, or the stock of Brydox, which also yields either $0 or $3 with equal probability, but with independent probability. A share of stock costs $1, and you have $6 to invest.

If you invest in 6 shares of Brydox, your end wealth is either $0 or $18, an expected value of $9.

If you invest in 3 shares of Apex and 3 of Brydox, your end wealth is either $0, $9, $9, or $18, with equal probabilities, an expected value of $9.

The diversified strategy is less risky. The Brydox strategy's wealth pattern is reached by a mean preserving spread from the diversified strategy. (What *is* the MPS that gets there?)
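Here is a quick Python check of those two wealth patterns (the $1 share price and $0-or-$3 payoffs are the ones above; the variance is just one convenient measure of the spread):

    from itertools import product

    def distribution(apex_shares, brydox_shares):
        # four equally likely states: each firm's share pays $0 or $3
        outcomes = {}
        for a, b in product([0, 3], [0, 3]):
            wealth = apex_shares * a + brydox_shares * b
            outcomes[wealth] = outcomes.get(wealth, 0) + 0.25
        return outcomes

    all_brydox  = distribution(0, 6)   # {0: 0.5, 18: 0.5}
    diversified = distribution(3, 3)   # {0: 0.25, 9: 0.5, 18: 0.25}

    def mean(d):
        return sum(w * p for w, p in d.items())

    def variance(d):
        m = mean(d)
        return sum(p * (w - m) ** 2 for w, p in d.items())

    print(mean(all_brydox), variance(all_brydox))      # 9.0 81.0
    print(mean(diversified), variance(diversified))    # 9.0 40.5

Both strategies have an expected value of $9, but the diversified one has half the variance.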

Thus, if there are lots of independent assets, you should spread your investment across all of them. The more you diversify, the lower your risk.

If you bought shares of 100 different companies (via a mutual fund that held all of them), then the risk from any one company's behavior would be trivial. Thus, you would be happy for that company to ignore risk and simply to maximize its expected value.
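To see how fast the risk shrinks, here is a small sketch: spread the same $6 evenly over n independent companies, each with the same $0-or-$3 per-share payoff as above. Because the payoffs are independent, their variances add, and the portfolio variance falls like 1/n.

    def portfolio_variance(n, budget=6.0):
        share_var = 0.5 * (0 - 1.5) ** 2 + 0.5 * (3 - 1.5) ** 2   # variance of one $1 share: 2.25
        return n * (budget / n) ** 2 * share_var                   # independent payoffs: variances add

    for n in (1, 2, 10, 100):
        print(n, portfolio_variance(n))    # 81.0, 40.5, 8.1, 0.81

With 100 companies the portfolio variance is about 1% of what it was with one, which is why any single company's risk barely matters to you.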

3. Combination of risk does not reduce risk. Only division does. Suppose you could invest in the stock of Apex, which yields either $0 or $3 with equal probability, or the stock of Brydox, which also yields either $0 or $3 with equal probability, but with independent probability. A share of stock costs $1, and you have $6 to invest.

All this is the same as in my previous example. There, you were going to invest either $6 in Brydox or $3 in Brydox and $3 in Apex. Now, however, you are thinking of investing either $3 in Apex and $3 in cash, or $3 in Apex and $3 in Brydox. Which is less risky? If you invest in both companies, will the risks tend to cancel out?

The risks will indeed *tend* to cancel out, but the risk is neither higher nor lower--- the change in wealth pattern is ambiguous.

(A) One strategy is to invest $3 in Apex and $3 in cash. This will have a return of $3 or $12, with equal probability. The expected return will be $7.50.

(B) The other strategy is to invest $3 in Apex and $3 in Brydox. This will have a return of $0, $9, $9, or $18, with equal probabilities, as we found earlier. The expected return will be $9.

If you were risk neutral, you'd choose strategy (B) because of the higher expected return. But its risk is neither higher nor lower than A's, so which strategy you would prefer if you were risk averse depends on your utility function-- on the third derivative of it, actually. See Douglas W. Diamond, "Financial Intermediation and Delegated Monitoring," The Review of Economic Studies, Vol. 51, No. 3. (July 1984), pp. 393-414; Paul Samuelson (1963) "Risk and Uncertainty: A Fallacy of Large Numbers," Scientia, 98: 108-113.

It's easy to see why. First, if you were close to risk neutral, you'd still prefer (B), because of its higher expected value. So all we have to do is understand why anyone would ever prefer strategy (A). The reason is that somebody might really hate a return of $0. Suppose your utility function were the following function, which is increasing and concave on each piece:

U = 100 + log(C) if C is greater than 2

U = log(C) if C is less than or equal to 2

That's discontinuous, but there's nothing wrong with that.

The main thing for this person is to avoid consumption of less than 2. Thus, he would prefer strategy (A).
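A quick check with those numbers, treating log(0) as minus infinity (which is exactly what makes a $0 outcome so bad here):

    import math

    def u(c):
        if c == 0:
            return float("-inf")                    # log(0)
        return (100 if c > 2 else 0) + math.log(c)  # the utility function above

    strategy_a = [(0.5, 3), (0.5, 12)]              # $3 in Apex, $3 in cash
    strategy_b = [(0.25, 0), (0.5, 9), (0.25, 18)]  # $3 in Apex, $3 in Brydox

    def expected(pairs, f=lambda x: x):
        return sum(p * f(c) for p, c in pairs)

    print(expected(strategy_a), expected(strategy_b))        # 7.5 and 9.0, the expected returns
    print(expected(strategy_a, u), expected(strategy_b, u))  # about 101.79 versus -inf

Strategy (B) has the higher expected return, but this person's expected utility from it is dragged down by the 1-in-4 chance of ending with $0, so he takes strategy (A).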

This idea also applies to adding risks across time. Is it less risky to invest $100 in the stock market for one year, or for ten? Over ten years, you are more certain to get the average historical return on the stock market--- say, 7%. But you also are more likely to get a return of -95% or of +200% than if you just invested for one year and then put your money into cash. It is *not* strictly accurate to say that investing for the long haul has lower risk.
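A small calculation makes the comparison concrete. The return process below (each year the market goes up 30% or down 10%, with equal probability) is invented purely for illustration; the point is the shape of the result, not the particular numbers.

    from math import comb

    def wealth_distribution(years, start=100.0):
        # (probability, terminal wealth) for each possible number of "up" years
        return [(comb(years, k) / 2 ** years,
                 start * 1.30 ** k * 0.90 ** (years - k))
                for k in range(years + 1)]

    def sd(pairs):
        m = sum(p * x for p, x in pairs)
        return sum(p * (x - m) ** 2 for p, x in pairs) ** 0.5

    for years in (1, 10):
        dist = wealth_distribution(years)
        annualized = [(p, (w / 100.0) ** (1 / years) - 1) for p, w in dist]
        print(years,
              round(min(w for _, w in dist), 2),    # worst terminal wealth
              round(max(w for _, w in dist), 2),    # best terminal wealth
              round(sd(annualized), 3))             # spread of the average annual return
    # 1 year:   wealth between 90.0 and 130.0;          s.d. of annualized return 0.2
    # 10 years: wealth between about 34.87 and 1378.58; s.d. of annualized return about 0.063

The average annual return becomes more predictable over ten years, but the range of terminal wealth gets much wider, which is the sense in which the long horizon does not reduce risk.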

4. Uninsurable Dangers. Some sources of utility don't depend on monetary wealth, or depend on it in surprising ways. Consider your leg. Suppose you are a soldier with two legs and you earn $50,000 per year. With probability 0.5, your leg will be shot off in the Iraq war, but you will survive the injury. You don't need hospital insurance, because the army will pay for that. Let us suppose, too, that your civilian job is such that you will suffer no loss of income from having only one leg. Do you want to buy insurance against the loss of a year's worth of utility from losing the leg? No.

Without insurance, the two possible future states of the world are:

A1: Two legs and $50,000

A2: One leg and $50,000

There exists some amount of dollar compensation that will make you indifferent about losing your leg. Suppose it is $80,000. If you wanted to even out your utility across the two states, you could pay a premium of $40,000 for a policy that pays $80,000 if you lose the leg, so the two possible outcomes would be

B1: Two legs and $10,000

B2: One leg and $90,000 (remember, you paid $40,000 for the insurance)

Which would you prefer, (A1, A2) or (B1, B2)? Most people would prefer (A1, A2). The reason is that what we really want to do with insurance is *not* to equate utility levels across states of the world, but to equate *the marginal utility of money* across states. At a first cut, the marginal utility of money would be no different with one leg than with two. With only $10,000 you would be eating dog food, and with $90,000 you would be eating caviar, but you'd prefer $50,000 and hamburger regardless of how many legs you have. It isn't worth trying to equate utilities; better just to accept that you will have lower utility when you lose your leg and that money is an inefficient way to make up for its loss.

In fact, you may have *lower* marginal utility of money if you lose your leg, because you can no longer do all the things with money that you used to, such as skiing trips. Thus, you would want to arrange things to have *more* money in the state of the world in which, with two legs, you can make better use of it.

Let's translate this into equations. A realistic utility function is

(C) U = f(legs) + g(dollars), with f and g concave

Utility function (C) is separable in legs and dollars, so losing a leg does not change the marginal utility of dollars, and leg insurance is inefficient.

Our usual utility function is something like

(D) U = g(80,000*legs + dollars)

Utility function (D) is not separable in legs and dollars, and the marginal utility of dollars falls with the number of legs the person has, so leg insurance is efficient for this person.
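A sketch with a concrete functional form (g = log is my own choice of concave function; the $50,000 and $80,000 figures are from the example above):

    import math

    def mu_separable(legs, dollars):
        # marginal utility of a dollar under (C): U = f(legs) + g(dollars);
        # f(legs) cancels out of the difference, whatever f is
        return math.log(dollars + 1) - math.log(dollars)

    def mu_composite(legs, dollars):
        # marginal utility of a dollar under (D): U = g(80,000*legs + dollars)
        return math.log(80_000 * legs + dollars + 1) - math.log(80_000 * legs + dollars)

    for legs in (2, 1):
        print(legs, mu_separable(legs, 50_000), mu_composite(legs, 50_000))
    # Under (C) the marginal utility of a dollar is the same with one leg or two,
    # so shifting money toward the one-leg state buys nothing.  Under (D) it is
    # higher with one leg, so insurance that moves money into that state is efficient.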