Tuesday, January 6, 2009
Posted by Jeff Lipshaw
Two colleagues walked into my office yesterday and told me they had thought of me when reading the Michael Lewis/David Einhorn essay in the Sunday New York Times that purports to explain exactly what went wrong with the financial markets. David Zaring and Frank Pasquale have weighed in on the Lewis/Einhorn piece (Zaring = "kneejerk/post-hoc moralizing"; Pasquale = "smart commentary"); like Fred Tung, I think I have to go with David's balloon-popping of the "I-Told-You-So" School, and I thank David as well for reminding me to go back to Joe Nocera's article in the Sunday NYT Magazine about Nassim Taleb and the "Value at Risk" algorithm. (The problem, for me, with Sunday Magazine articles is that that part of our paper gets delivered on Saturday, and the magazine gets stored in a hermetically sealed container for twenty-four hours so that I have a pristine crossword puzzle to do on Sunday morning. I jump right to the crossword (do not read The Ethicist or William Safire, do not pass GO).)
It's not enough to say "see, I was right," because some lucky bastard always manages to win the long-shot bet. My prime example is Edward Yardeni. He's still in business, despite having predicted the end of the world as a result of the Y2K crisis. The link is to a CNET article dated January 4, 2000, when it appeared that, indeed, the world had not collapsed - not the U.S., which had spent billions on remediation, and not even Italy, which showed up on most people's charts as completely unprepared for the turn of the clock! I love this - here's what it looks like when you've bet the farm on doomsday, and doomsday doesn't happen:
In a statement posted Sunday on his own Web site, Yardeni.com, one of the more outspoken doomsayers on the Y2K problem, said he is "impressed and pleased by the smooth transition into 2000 so far." He also said the risk of disruptions to global supply chains, which was his No. 1 concern, now seems less likely to occur.
He also said that if no "significant" problems occur by the end of this month, he will admit he was wrong about a Y2K global downturn.
In the statement, Yardeni credited the IT community for a successful century date change as well as Y2K preparedness efforts by John Koskinen, the U.S. government's leading Y2K man.
Of course, this doesn't account for Italy. (Per the CIA in October 1999: "Russia, Ukraine, China and Indonesia are among the major countries most likely to experience significant Y2K-related failures. Countries in Western Europe are generally better prepared, although we see the chance of some significant failures in countries such as Italy.") But who cares when you've made a bundle in consulting and appearance fees?
The lead-in to Nocera's piece, a quote from Peter Bernstein's introduction to his work on risk, Against the Gods, encapsulates this nicely:
The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved.
Books are starting (again) to stack up on my desk, but they all seem to tie back into my thesis about the irreducibility of judgment (and rule-following). Take Harold Schulweis's book, Conscience: The Duty to Obey and the Duty to Disobey. It's an argument, drawing on several Jewish sources (e.g., Abraham's argument with God over sparing Sodom and Gomorrah), that our own conscience can override what seems to be God's dictate. How can that be? And if it's so (and, by the way, it seems that way to me!), how do you decide when to obey and when not to obey? If you are allowed to argue with God, then you probably ought to be able to argue with the results of the Value at Risk algorithm. But when God or the entire financial community is telling you X, it's really hard to do Y! And when you do Y, and it turns out you were right, were you wise, or insane but lucky?
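Since the Value at Risk algorithm keeps coming up, it may help to see just how mechanical the thing being argued with is. Here is a minimal sketch of the historical-simulation flavor of VaR in Python - the daily return series and the 95% confidence level are invented for illustration, and this is not the specific model Nocera or Taleb discusses - which makes plain that the number is determined entirely by the patterns of the past:

```python
# Minimal historical-simulation Value at Risk (VaR) sketch.
# The return data below are hypothetical, purely for illustration.

def historical_var(returns, confidence=0.95):
    """One-day VaR: the loss threshold exceeded only (1 - confidence)
    of the time, judging solely by the pattern of past returns."""
    ordered = sorted(returns)                 # worst day first
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]                    # report loss as a positive number

# Fourteen hypothetical daily portfolio returns.
daily_returns = [0.01, -0.02, 0.005, -0.01, 0.015, -0.03, 0.02,
                 -0.005, 0.0, 0.01, -0.015, 0.025, -0.02, 0.01]

print(f"95% one-day VaR: {historical_var(daily_returns):.1%} of portfolio value")
```

Note what the sketch takes for granted: that tomorrow's risk is captured by yesterday's distribution. That assumption is exactly the first side of Bernstein's tension - decisions "determined by the patterns of the past" - and it is what a Taleb-style skeptic, arguing with the algorithm's results, refuses to concede.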
I'm also reading Mark Turner's Cognitive Dimensions of Social Science. Turner is a leading theoretician in cognitive science. This is fascinating stuff - he's taking apart Clifford Geertz's iconic article "Deep Play" on the Balinese cockfight, not from an anthropological standpoint, but from the standpoint of trying to theorize how human beings evolve new meanings. The key here is culturally developed categories (how our minds classify data) and the metaphors or analogies that disturb them. Moreover, we blend meanings from separate "influencing spaces" into a new meaning. The hypothesis is that the so-called "double-scope blend," in which two wholly separate influencing spaces determine a new meaning, is the critical evolutionary development that makes us human. I think of it this way. My dog associates my putting on my coat with going for a walk. My understanding is that this is a single-scope blend of meaning. A double-scope blend, on the other hand, is the metonymy (or is it synecdoche?) of, say, the cockfight. Natural cockfights and Balinese social structures don't have much to do with each other until they are blended into a new meaning in which the victorious chicken says something about its owner. (Think about your affiliation with your favorite sports team.) Modern humans do double-scope blends; no other creatures do. (That sounds like a testable hypothesis to me, by the way.)
In short, if the sensory data of the world takes on meaning to a human being through a process of blending - of metaphor and analogy - what does that say about the tension Bernstein identifies? And is being right in your prediction of the future any evidence of the superiority of the mental processes that produced it?