04.03.01a. Kripke's Paradox; The Two-Armed Bandit. Revised March 2. Crooked Timbers has an interesting post which mentions in passing the following paradox:
Assume p is something I know. So any evidence against p is evidence for something false. Evidence for something false is misleading evidence. It’s bad to attend to misleading evidence. So I shouldn’t attend to evidence against p. So more generally I should ignore evidence that tells against things I know.
I found this interesting, so I looked a bit further and found this description of a similar paradox:
The following is an excerpt from my analysis of one of the most debated paradoxes in the philosophy of language. In A Puzzle About Belief (1979) Saul Kripke proposed the following paradox (and I briefly summarize): If a Frenchman, Pierre, who knows not a word in any language other than French, hears nice things about 'Londres' (French for London), and says sincerely "Londres est jolie" ("London is pretty"), we can conclude that: Pierre believes London is pretty. Pierre then travels to London without knowing where he is; in particular, he reaches a sector where not a soul speaks French. He learns English by the direct method (by pointing to objects and being told the referring word). A local points to the surroundings and says "London". Pierre disapproves of his surroundings and with the little English he has learned he sincerely asserts, "London is not pretty". We can therefore conclude that Pierre believes London is not pretty. But the Frenchman would still assent to the French statement he made while at home, for he cannot translate between "London" and "Londres". The question then is, does Pierre believe London is pretty?

I find the Crooked Timbers version better, actually, even though concrete examples are usually best. That is partly because it shifts the question away from Kripke's, which is apparently about what people mean by the terms they use. The Crooked Timbers question is rather: "Should I bother to listen to data that goes against my beliefs?"

This is the same as the Two-Armed Bandit Problem in decision theory, I think. Here is my version of that paradox.

I am in a casino with two slot machines. I know that Machine 1 has a payout ratio of 1.2 (1.2 dollars comes out for each dollar I put in). I don't know the payout ratio of Machine 2. I can switch back and forth as much as I want. I don't care about the timing of payoffs, and I live forever. What should I do?

The right answer is that I should start by playing only Machine 2. I will lose sometimes, and win sometimes. As time passes, I will change whatever initial ("prior") opinion I had about the payout ratio of Machine 2, revising it up when I win and down when I lose. I will also revise my certainty about that opinion. Other things equal, I will become more certain as time passes, because a win-loss record of 2-2 leaves me less certain than a win-loss record of 2000-2000. At some point, I may become so certain that the payout ratio of Machine 2 is less than 1.2 that I will switch to Machine 1.
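To make the updating concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that Machine 2 pays out 2 dollars with some unknown probability p per dollar played (so its payout ratio is 2p), and that my opinion about p is kept as a Beta distribution that gets revised after every play:

```python
import random

# Illustrative assumptions (mine, not part of the original problem): Machine 2
# pays $2 with some unknown probability p each time I put in $1, so its payout
# ratio is 2p.  Machine 1's known ratio is 1.2, so it is better exactly when
# p < 0.6.  My opinion about p is a Beta(a, b) distribution.

WIN_PAYOUT = 2.0   # dollars returned by Machine 2 on a win (assumed)

def play_machine_2(p_true):
    """One $1 play of Machine 2: True on a $2 win, False on a loss."""
    return random.random() < p_true

def updated_belief(a, b, won):
    """Beta(a, b) posterior after observing one more play."""
    return (a + 1, b) if won else (a, b + 1)

def believed_ratio(a, b):
    """Expected payout ratio of Machine 2 under the Beta(a, b) belief."""
    return WIN_PAYOUT * a / (a + b)

# A short run: start from a flat prior Beta(1, 1) and watch the opinion move.
random.seed(0)
p_true = 0.65      # in this run Machine 2 really is the better machine
a, b = 1, 1
for t in range(1, 21):
    won = play_machine_2(p_true)
    a, b = updated_belief(a, b, won)
    print(f"play {t:2d}: {'win ' if won else 'loss'}   "
          f"believed payout ratio = {believed_ratio(a, b):.2f}")
```

The Beta prior is just one convenient way to represent both the opinion and the certainty attached to it; the point is only that wins push the believed ratio up, losses push it down, and the belief tightens as plays accumulate.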

If I am unusually lucky and keep playing Machine 2 even though its payout ratio is less than 1.2, eventually, "with probability one" (a term of art in probability), my luck will run out and I will discover the truth and switch to Machine 1. That makes sense-- no paradox yet.

If, however, I switch to Machine 1, I will stay there forever. I already know its payout ratio is 1.2, so as I play it, nothing changes in my beliefs, and if nothing changes my beliefs, there is no reason to switch back to Machine 2.

The paradox is that this last paragraph is true whether or not Machine 1 actually is better! If I have a long enough run of bad luck on Machine 2, I will switch to Machine 1 and retain my mistaken beliefs forever. Moreover, this is not irrational. I know that I take that risk when I switch to Machine 1. It is rational to take that risk because it is only a risk-- with high probability, what I saw was not just a run of unusually bad luck on Machine 2 but a typical pattern of losses. In fact, when the mathematically optimal strategy is calculated, the necessary length of a switch-inducing run of losses on Machine 2 is lengthened to make up for the complete loss of learning that occurs if I switch to Machine 1.
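Here is a rough simulation of that lock-in effect, under the same illustrative assumptions as above and with a fixed confidence threshold standing in for the true optimal strategy. Machine 2 is actually the better machine in every run, yet in some fraction of runs an early streak of losses pushes the believed ratio below 1.2, the player switches to Machine 1, and, since Machine 1 teaches him nothing, he stays there forever:

```python
import random

# Same illustrative setup as the sketch above (my assumptions, not the post's):
# Machine 2 pays $2 with probability P_TRUE = 0.65, so its true ratio is
# 1.3 -- better than Machine 1's known 1.2.  The player switches to Machine 1
# for good once his believed ratio for Machine 2 drops below 1.2 after at
# least MIN_PLAYS observations: a crude stand-in for the optimal rule, which
# would demand an even longer losing streak before abandoning Machine 2.

P_TRUE = 0.65
KNOWN_RATIO = 1.2
WIN_PAYOUT = 2.0
MIN_PLAYS = 30
HORIZON = 10_000

def locks_in_on_machine_1():
    """Simulate one player; True if he abandons Machine 2 before the horizon."""
    a, b = 1, 1
    for _ in range(HORIZON):
        if random.random() < P_TRUE:
            a += 1
        else:
            b += 1
        plays_so_far = a + b - 2
        if plays_so_far >= MIN_PLAYS and WIN_PAYOUT * a / (a + b) < KNOWN_RATIO:
            return True   # switches to Machine 1 and, learning nothing there, never returns
    return False

random.seed(1)
runs = 2000
mistakes = sum(locks_in_on_machine_1() for _ in range(runs))
print(f"locked in on the worse machine in {mistakes} of {runs} runs "
      f"({100 * mistakes / runs:.1f}%)")
```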

It nonetheless remains true that the optimal strategy involves the possibility of making a mistake that reduces my payoff for each of an infinite number of periods, as I play Machine 1 when I should be playing Machine 2. And this is despite the lack of time discounting.

It is also possible that I make this mistake "with probability one", because I am hit with a version of the "Gambler's Ruin" paradox. This would happen if, even when Machine 2 is better, I nonetheless have probability one of eventually-- after billions of plays-- encountering a long enough span of bad luck that I switch to Machine 1. I don't know how the mathematics works out on this, though, and am doing this from memory and re-derivation, so this claim might well be wrong.
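One crude way to probe that question numerically, using the same simplified threshold player as above (so this is suggestive evidence at best, and says nothing about the genuinely optimal strategy), is to lengthen the horizon and see whether the frequency of wrongly locking in on Machine 1 keeps climbing toward one, as a gambler's-ruin story would require, or instead levels off:

```python
import random

# Probe of the "probability one?" question under my simplified threshold rule:
# run the same player at several horizons and compare how often he wrongly
# abandons the better Machine 2.  Illustrative assumptions as in the earlier
# sketches; this is not a proof either way.

P_TRUE, KNOWN_RATIO, WIN_PAYOUT, MIN_PLAYS = 0.65, 1.2, 2.0, 30

def locks_in(horizon):
    """True if the player abandons Machine 2 at some point within the horizon."""
    a, b = 1, 1
    for _ in range(horizon):
        if random.random() < P_TRUE:
            a += 1
        else:
            b += 1
        if (a + b - 2) >= MIN_PLAYS and WIN_PAYOUT * a / (a + b) < KNOWN_RATIO:
            return True
    return False

random.seed(2)
runs = 300
for horizon in (1_000, 10_000, 100_000):
    freq = sum(locks_in(horizon) for _ in range(runs)) / runs
    print(f"horizon {horizon:>7,}: locked in wrongly in {freq:.0%} of runs")
```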

Anyway, let's return to political opinions. Suppose I am a young person trying to decide whether I favor abortion. Suppose I initially favor abortion and know all the pro-abortion arguments cold, but not the anti-abortion arguments. What I should do is skip all the pro-abortion articles and just read anti-abortion arguments. At some point, I might switch to being anti-abortion. I should continue studying anti-abortion arguments until I know them completely, because I might discover they are wrong and switch back to being pro-abortion. On the other hand, if I ever switch back, I should stop reading about abortion altogether and remain pro-abortion for the rest of my life. Or, if I come to know both sides well enough, I should also stop reading and thinking, and stick with my final opinion.

Of course, it might be that a lifetime is not enough time to read and understand all the relevant data on either side, in which case I might switch back and forth and keep reading till I die. It is entirely rational, however, to stop reading once one has learned enough.

[in full at 04.03.01a.htm. Erasmusen@yahoo.com.]

UPDATE: Later on March 1, a professional philosopher wrote me the following:

...you've got two distinct puzzles up on your latest blog entry, but the text there seems to imply that they are one and the same. The first one -- and the one that you're really interested in -- is actually I think owed to Kaplan, not Kripke (I'm going to go over and see whether anyone's noted this on the CT site.) The second is most often referred to as a 'puzzle about belief', because the locus classicus for it is a paper entitled, well, "A Puzzle about Belief". (It's in a volume entitled Meaning and Use, edited by, um ... [googles for a minute]... Margalit.)

There are indeed many puzzles & paradoxes that are owed to Kripke, such as those raised in his book on Wittgenstein, and he wrestled significantly with some old classics like the liar paradox. But I actually think that this one is not his. If it is something he worked on -- which wouldn't totally surprise me; I'm always open to new evidence to contradict my putative knowledge ;-) -- then at a minimum it's not the same as the puzzle about belief.

and then more on the provenance on March 2:
... believes that Princeton philosopher Gil Harman writes about it somewhere, probably in his book _Thought_, and attributes it to Kripke there.

So it seems that it is _a_ Kripkean paradox indeed, even if not _the only_ Kripkean paradox.

There are economies of scale in being a scholar. If you have already produced a lot of good ideas, there will be a natural tendency for other people to mistakenly attribute even more ideas to you.

In this case, I envy Kripke his name. Something can be wonderfully Kripkean, but nobody will be tempted to call anything Rasmusenian (I know this was a straight line-- there's a reason besides the name!). It looks too much like Ruthenian, but without even that nationality's limited euphoniousness. (And I know that should be "euphony", but the word just doesn't sound right.)
