Friday, July 11, 2003

JUDICIAL SUBVERSION of the Constitution is old news, I guess, but Eugene Volokh points out a particularly egregious decision by the Nevada Supreme Court. The Nevada state legislature was deadlocked and hadn't authorized state spending-- a not uncommon problem in legislative brinkmanship, and one that has happened before in Nevada. Also, the state constitution (a) requires the state to fund public schools, and (b) imposes a 2/3 supermajority requirement for enacting tax increases. What the Court did in Guinn v. The Legislature, with just one dissent, was to order the Legislature to fund the state government immediately rather than wait until the start of the new fiscal year, and to do it by ignoring the 2/3 requirement for new taxes. The Court did this quite baldly. Rather than hint that it would not object if the Legislature violated the 2/3 requirement, it commanded the Legislature to ignore it-- a command presumably enforceable by jailing the legislators indefinitely for contempt until they obey the Court's order:

Therefore we grant the petition in part and order the clerk of this court to issue a writ of mandamus directing the Legislature of the State of Nevada to proceed expeditiously with the 20th Special Session under simple majority rule.
As Volokh points out, this is totally outrageous. The Court could more plausibly have ordered the Legislature to pass a budget, or even have ordered the Governor to continue spending on education at the previous year's level. But to order a tax increase is simply judicial lawlessness. The Court might just as well have said that a 30% vote was enough to pass a tax increase.

I am glad that Volokh proposes impeachment for the judges. It is surprising that this remedy is not used for judges who break their oaths of office, and, indeed, that law professors are so shocked by the idea that judges might need to be disciplined for decisions that purposely ignore the law.

WHICH DO WE REMEMBER BETTER, good events or bad events? Thomas Gilovich notes in How We Know What Isn't So that although one might think the problem with gamblers is that they remember their wins but not their losses, the opposite seems to be true. In his experiments, gamblers remembered their losses better-- but only because they wanted to find excuses for them.

By carefully scrutinizing and explaining away their losses, while accepting their winnings at face value, gamblers do indeed rewrite their personal histories of success and failure. Losses are counted not as losses, but as "near wins" (p. 55).
[ http://php.indiana.edu/~erasmuse/w/03.07.11a.htm ]

 


Thursday, July 10, 2003

DEFINING "HUMAN" is not easy and it matters. Whose death can be murder? Who counts in the utilitarian or Pareto calculus? Who has a soul that can be lost or saved? I just came across a 1997 posting by Dean Sherwood that introduces a new angle: the human as property-owner and contract-maker.

...our laws and customs will have to change to accommodate machine intelligence, enhanced animals, brainless human clones (organ banks), bio-gadgets using human brain tissue cultures, experimental human-animal chimeras, human minds copied into other media (and then altered or excerpted)... ... If nothing else, imagine the shopkeeper who, before accepting money for a box of crackers, must demand documentary evidence that the purchaser is "human" or risk prosecution (or forfeiture) for, in effect, receiving stolen money (because only a human is a moral agent capable of making a valid transaction).
I'm not sure what to say on the big question, but I can turn with relief to the law-and-economics one.

Suppose I build a robot to do some of my work. This is not a menial robot. Rather, I want it to prioritize the tasks I give it, and to figure out how to do them, in a way complicated enough that I can't be sure at the end whether the robot did everything right unless I check its work so thoroughly that I lose the benefit of having a robot do it.

I would want to motivate the robot by giving it a utility function-- a distaste for effort and a taste for the things that the wages of effort can buy. Maybe I could simply make my robot a miser, paying him wages in toy money and making him want to maximize his toy-money bank account. But there might be reasons I'd want to pay him in real money, which he would spend on real goods. In fact, setting aside the question of motivation, I might want him to pay for his own maintenance and upgrades, delegating to him the decisions of when and how, and this would require giving him control over some money.
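
For those who like this formalized, here is a minimal agency-theory sketch of what "giving the robot a utility function" might mean. The notation is mine, purely illustrative, not anything from the original posting:

    % A minimal sketch (hypothetical notation): the robot expends
    % effort e and receives money wages w(e); v is increasing and
    % concave, c is increasing and convex, so wages pull effort up
    % and distaste for effort pulls it down, as for a human agent.
    \[
      \max_{e \ge 0} \quad U(e) = v\bigl(w(e)\bigr) - c(e)
    \]
    % Paying in toy money makes the argument of v a toy-money balance;
    % paying in real money lets the robot buy its own maintenance and
    % upgrades out of the same budget.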

The robot is probably not like a corporation, a trust, or a government, because in this situation I don't want to have it acting under human control. Those three legal institutions are all "persons" in the law, I think, but they have fiduciary duties to humans and can take no action without human control.

My robot is more like a human slave. Slaves in Classical times and in the American South often owned property in effect (I don't know about legally). Their masters wanted to motivate them, and so paid them extra-- "tips" of a sort-- or allowed them to go off and earn money on their own. Morality would constrain a master from seizing the slave's money even if the law did not, and, just as important, a master who paid wages and then took them back would not be able to motivate that slave or others in the future.

So I pay my robot money wages. Legally, I still own the money, in the Indiana of 2003. But what happens when the robot goes to the store and buys a book? Can I go and get my money back from the store?

First consider a couple of analogies. What would happen if I lost some money on the street and the storekeeper found it? I don't know, but my guess is that the storekeeper would have to give it back. That is probably the right legal analogy here.

But what if a burglar stole my money and spent it at the store? Then the storekeeper keeps it, I think. (That is special to money and perhaps fungible securities-- I can get back my heirlooms if they're stolen-- but that's fine for our purposes.)

At any rate, I think I'd have to give back the book if I wanted the money back. So the storekeeper could feel pretty safe in selling books to robots.

The storekeeper would have an additional argument against me, especially if (a) the robot was indistinguishable from a human, or (b) it had become common custom to let robots buy and keep books: that I had caused the loss of the money by entrusting it to the robot. Some doctrine such as "unclean hands" or "estoppel" might apply (or might not-- don't trust me on that point). In case (b), business custom might establish that I had in effect made a valid contract purchasing the book myself. I could take the book away from the robot, but not get my money back from the storekeeper. In fact, even if the robot had merely ordered the book and not yet bought it, I might be bound to pay and accept delivery.

In this second case, the robot is acting very much like what is called an "agent" in the law-- someone to whom I have given authority to act for me. Agents do not have to be human-- corporations or trusts can be agents. A machine is not a legal person, at present, but it could perhaps be made one.

Even if the robot does not rise to the dignity of an agent, it could still be considered a method of placing an order. Suppose I write a macro for my computer that dials up Amazon every week, chooses a book at random, types in my credit card number, and orders the book delivered to my mother-in-law. Surely I am legally bound by that transaction. My robot is more intelligent, but it is still a machine to whom I have given the ability to make transactions with what is legally my money.
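
To make the macro concrete, here is a minimal sketch in Python. This is my construction, purely illustrative: place_order stands in for whatever the real purchase step would be (driving the retailer's order form), not any real Amazon interface, and the book list is invented.

    import random
    import time

    # Hypothetical weekly book-ordering macro. place_order() is a
    # stand-in for the actual purchase step, not a real retailer API.
    BOOKS = ["Book A", "Book B", "Book C"]  # invented catalog

    def place_order(title, ship_to, card_number):
        # Placeholder: a real macro would fill in the order form here.
        print("Ordered %r for %s, card ending %s"
              % (title, ship_to, card_number[-4:]))

    def weekly_order_loop(card_number):
        while True:
            title = random.choice(BOOKS)  # the machine, not I, chooses
            place_order(title, "my mother-in-law", card_number)
            time.sleep(7 * 24 * 60 * 60)  # wait one week

    # weekly_order_loop("4111111111111111")  # runs indefinitely

However the title is chosen, the standing instruction and the credit card number both come from me-- which is why the resulting order binds me.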

I don't think I've exhausted the subject. I'll bring it up at my law-and-economics lunch today.

[ http://php.indiana.edu/~erasmuse/w/03.07.10a.htm ]

 

To return to Eric Rasmusen's weblog, click http://php.indiana.edu/~erasmuse/w/0.rasmusen.htm.