Sunday, September 23, 2007
Posted by Jeff Lipshaw
It's always interesting when something you are studying is the subject of popular commentary. The New York Times "Week in Review" today has an article on algorithms and the interactions between humans and computers that turns on the Turing test (developed by the British mathematician and computing pioneer Alan Turing, who helped break the German Enigma code at Bletchley Park during World War II). The ultimate test of artificial intelligence is whether a tester corresponding with a computer cannot tell if her correspondent is a computer or a human. (The article points out that the word "algorithm" comes from the name of the ancient Baghdad scholar ibn Musa al-Khwarizmi, pictured on a Russian stamp at left.)
I happened to be reading an essay, "Analogies Hard and Soft" by Joseph Agassi (Tel Aviv University, Philosophy, right) (in David Helman's 1988 anthology, Analogical Reasoning: Perspectives of Artificial Intelligence, Cognitive Science, and Philosophy, which I picked up from Cass Sunstein's citation to it in his 1993 Harvard Law Review article on analogical reasoning). The essay also addresses the Turing test and the limits of artificial intelligence. To summarize, analogies fascinate because they waver between "hard" copies or forgeries of something else, at one extreme (e.g. a computer simulation so good that it cannot be distinguished from a human correspondent, or a piece of expert art forgery), and wispy soft comparisons at the other: "they are vague in the limits of their applicability, they are suggestive, they are not simply vague and indefinite, they stimulate one's thinking, they offer possibilities which scintillate between promise and disappointment." Here is the metaphysical issue Agassi raises: is it possible to write a program so powerful that it replicates all possible human (i.e. brain) programming? The problem is one of self-reference and infinite regress. The Turing test requires a tester. If the tester concluded that the program was the ultimate copy, then the program should also be able to replicate what the tester just did. But that would mean there had to be a "meta-program," to be tested now by a "meta-tester." And so on.
Indeed, the natural hypothesis here, that no program will ever be able wholly to replicate the power of the brain, particularly as it relates to creation or inventiveness, is, as Agassi points out, an inductive and not a deductive conclusion. Showing that a computer cannot replicate some particular capacity, such as the image or sound recognition noted in the Times article (try posting a comment to this blog for an example of that!), merely supports but does not prove the thesis.
I suppose I ought to conclude this with a couple of implications that make this somehow relevant to lawyers. The point of the Times story is not that one: it is about systems that make up for the absence of a perfect algorithm by incorporating human brains into cybersystems, thus taking advantage of what human brains (creativity? inventiveness? analogy?) and computers (speed of processing) each do best. But Agassi does make a point that applies: the vagueness of analogy finds its practical frustration in determinations of patentability. "[H]ow inventive should a forgery be in order to make it patentable? . . . Michael Polanyi, the famous philosopher of science who took expertise as axiomatic and undefined, has claimed that no formula can be given to justify the patent-tester's decision." Indeed, Agassi notes the infinite regress in any attempt to use algorithms in the patent office; once a court states the formula, "competitors forge new forgeries by what is technically known as going around the patent, i.e. varying it trivially but with sufficient significance using formulas accepted by courts. Courts may take notice and improve their formulas, but not in retrospect!"
That's a lot of food for thought on a Sunday morning.