Saturday, December 08, 2007

Traveler's Dilemma

The Traveler's Dilemma, unless I'm missing something, is another example of Ivory Tower logic gone off in the wrong direction.

UPDATE: Really, the Traveler's Dilemma could be viewed as a good critique of Nash equilibrium, and for that it works well.

5 comments:

ADHR said...

I get that you like bashing the "ivory tower" (which doesn't really exist, but never mind), but this is twisted. First, it's not a logic problem; it's a game-theoretic problem. Second, game theory studies these sorts of problems as a way of approaching rationality in small, manageable doses. (AI does much the same sort of thing.) As with the PD, I suspect TDs would come up in a number of real-world scenarios.

undergroundman said...

The ivory tower does exist in that there are a great many impractical people who have been sucked into a "logical" system, and are incapable of analyzing things from outside that system. I've seen this a lot in economics and philosophy; I'm not sure about the extent to which it exists elsewhere.

I should rephrase the post. I see nothing wrong with presenting the Traveler's Dilemma; it's the "rational" solution that I have a problem with. Imagine you and I are presented with this case. I choose $100. I reason that if you also choose $100, then I could instead choose $99 and get $101. But then I realize that you'll realize this, and so on. So, "rationally", we both end up choosing $2.

No. We both rationally realize that if we're both going to choose $2, we might as well both choose $100 instead. In fact, we don't even have to go through that iterative process to know what's really correct: $100. But (100, 100) is not a Nash equilibrium.
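To make that iterative process concrete, here's a minimal sketch in Python, assuming the usual statement of the game (claims from $2 to $100, with a $2 bonus for the lower claimant and a $2 penalty for the higher one when the claims differ):

```python
# Traveler's Dilemma, assuming the usual statement: claims run from $2 to
# $100; equal claims are paid in full; otherwise both are paid the lower
# claim, with a $2 bonus to the low claimant and a $2 penalty to the high one.

LOW, HIGH, BONUS = 2, 100, 2

def payoff(mine, theirs):
    """My payoff given my claim and the other traveler's claim."""
    if mine == theirs:
        return mine
    if mine < theirs:
        return mine + BONUS   # I undercut, so I collect the bonus
    return theirs - BONUS     # I overclaimed, so I pay the penalty

def best_response(theirs):
    """The claim that maximizes my payoff against a fixed claim by the other."""
    return max(range(LOW, HIGH + 1), key=lambda mine: payoff(mine, theirs))

# (100, 100) is not a Nash equilibrium: undercutting to 99 pays 101.
assert payoff(99, 100) > payoff(100, 100)

# The "and so on" above: each round of undercutting invites another,
# all the way down to the only mutual best response, (2, 2).
claim = HIGH
while best_response(claim) != claim:
    claim = best_response(claim)
print(claim)                 # -> 2
print(payoff(claim, claim))  # -> 2, versus 100 each at (100, 100)
```

Run it and the "rational" analysis bottoms out at $2 for both of us, even though $100 each was sitting right there.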

Scroll down two-thirds of the page and read what it says: "Game theory predicts that the Nash equilibrium will occur when Traveler's Dilemma is played rationally."

The article goes on to say that people who choose $100 are illogical. Excuse me? If I assume that you will make the logical choice, and you assume that I will, that should lead us both to choose $100, from my perspective.

Humans operate with a complex set of logical presumptions in their pursuit of a "rational" action, and this "Nash equilibrium" should not pretend to be "rational." I first noticed that the Nash equilibrium had flaws when my professor insisted that, in the game-theoretic treatment of the PD, players derive no benefit when their opponents lose. Economics, of all disciplines, should recognize that one's relative position is really what matters most. In corporate strategy (which is what we were applying it to), hurting your competitors' profits is especially useful because it cripples them.
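Just to illustrate what I mean by relative position (the payoff numbers and the weighting below are invented for the example, not anything from class), you can fold a rival's payoff directly into your own utility, and then the payoffs the players actually care about are no longer the textbook ones:

```python
# Hypothetical illustration: score each Prisoner's Dilemma outcome as
# (my payoff) - alpha * (opponent's payoff), so a rival's loss counts
# for something. The base matrix and alpha are made-up example values.

BASE = {                     # (my payoff, opponent's payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def relative_utility(my_move, their_move, alpha=0.5):
    """Utility when I also care about how far ahead of my rival I end up."""
    mine, theirs = BASE[(my_move, their_move)]
    return mine - alpha * theirs

for profile, (mine, theirs) in BASE.items():
    print(profile, "absolute:", mine, "relative:", relative_utility(*profile))
```

Nothing deep, just one way of writing down the claim that a competitor's loss is itself worth something to you, which the treatment we were given rules out from the start.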

AI?

What I'm trying to get at is that there are different senses of logic, and you have to make clear what sort of logical presumptions you're using. Normal mathematics conforms to simple logical rules of the natural world and attempts to create a consistent system (a symbolic modeling system) based upon these rules. Philosophy has its own set of (in some cases rather strange) logical rules. Economics assumes that maximizing one's own benefits is "logical" (and even then, it often does not delve deeply into what truly maximizing those benefits would mean). That's all I can think of off the top of my head.

The foundations of logic are basically the same, I suppose. You have premises and you attempt to derive conclusions from these premises. The problem, it seems to me, is that premises and conclusions are often confused in economics and philosophy.

ADHR said...

If that's what you mean by "ivory tower", then, yes, it seems like it's a real thing. I'm not sure it's confined to just those domains, though -- I've seen psychologists, academic and clinical, with the same problem, as well as lawyers and accountants. Pretty much any group of people that has an orthodoxy is going to qualify -- so we should probably lump the religious in, too. So I think my concern is best put as a worry about calling this an "ivory tower", which is a term usually reserved for academia-bashing. If it applies more broadly than that, then I think we need another word. "Dogmatism" is a good one we can steal from the Pyrrhonians.

I can't find the original post this is a comment to -- thanks to Blogger for changing things around to make that impossible -- so I'm not sure exactly the context of the original discussion. (Take that as a disclaimer, in case I now go on to say something horribly uninformed.) This means I'm not sure what my "AI" remark was in reply to.

It seems right to me that, insofar as we are able to presume we're dealing with someone more or less like ourselves (and presuming, of course, that we can make that prediction effectively), we won't go through the iterative process at all. Both are pretty reasonable presumptions of threshold rationality -- if we were dealing with someone who couldn't manage that, it should be obvious. (They'd have their underpants on their head or something.) Davidson makes the point that we have to presume this in order to treat the other person as a person -- minimally rational, a lover of the good, etc. Presuming otherwise is a coded way of implying that we think we're dealing with a subhuman.

Now, "logic" is a slippery word, unfortunately. It's got at least two senses -- a descriptive sense, and an evaluative sense. Or, if you like, it's either a straightforward factual word, or a term of praise/blame. "Rational" is the same. If you're doing, say, formal logic, you're interested mostly in the former sense -- what in fact follows from what, under what scheme of rules of inference. If you're doing, however, argumentation theory, you're interested mostly in the latter sense -- given what in fact follows from what, under what scheme of rules of inference, which is the better scheme.

ADHR said...

In the evaluative sense of "logical", it might be illogical to choose $100 -- but that depends, as you note, on the standards we're using to draw the evaluation. Different standards can obviously lead to different evaluations. I think that's probably the sense in play in the claim that it's illogical to choose $100, because it seems to presume that we all should -- a normative rather than a descriptive claim -- live up to what is described by a Nash equilibrium. Which is weird, as I always thought the Nash equilibrium was supposed to be a descriptive claim about how humans actually reason. It might be false that we actually reason according to Nash equilibria, but I always took it that this is what Nash himself, at least, was trying to fumble towards -- a better account of what's going on in dilemma situations than was afforded by other models of reasoning.

In the descriptive sense of "logical", we wouldn't face the problem of standards -- no standards are in play, as we're not drawing an evaluation -- but I can't see how one could conclude it's "illogical" to choose $100. Without an evaluative component to the term, all that calling something "logical" or "illogical" amounts to is saying that it, as a matter of fact, obeys or fails to obey a certain set of rules of inference. So, I suppose, the case works out as underdescribed -- we need to know which system of logical inference we're supposed to be working with. If the system is one that includes the Nash equilibrium as a rule -- so an inference is in fact correct if it obeys the terms of the equilibrium, incorrect otherwise -- then it is illogical to choose $100, but the claim is trivial. Choosing $100 violates the Nash equilibrium, so if the Nash equilibrium is one of the rules, choosing $100 is illogical by definition.

Generally, I'm getting more and more suspicious of theories which try to isolate human rationality in the spirit of the early modern conception of the atomistic agent. Rationality has a huge social component, and this is where things like relative advantage are going to not only come into play, but be visible. If your conception of rationality doesn't have a social component at all, it's impossible to grasp what relative advantage even is.

undergroundman said...

Wow. It sounds like we sorta agree here! :)

Maybe I'm missing something in this whole Nash equilibrium thought experiment, and if I ever figure it out, I'll let you know.