Sunday, September 23, 2007

Algorithms and Analogies

Posted by Jeff Lipshaw

It's always interesting when something you are studying is the subject of popular commentary.  The New York Times "Week in Review" today has an article on algorithms and the interactions between humans and computers that turns on the Turing test (developed by the British code-breaking and computing genius Alan Turing, the breaker of the German Enigma code at Bletchley Park during World War II).  The ultimate test of artificial intelligence is whether a tester corresponding with a computer is unable to tell whether her correspondent is a computer or a human.  (The article points out that the word "algorithm" comes from the name of the ninth-century Baghdad scholar ibn Musa al-Khwarizmi, pictured on a Russian stamp.)
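For the curious, the mechanics of the test are simple enough to sketch in a few lines of Python.  The canned replies and the five-round limit below are my own illustrative assumptions, not anything prescribed by Turing:

import random

def machine_respond(prompt):
    # Stand-in for the program under test; a serious entrant goes here.
    return random.choice(["Interesting question.", "Could you rephrase that?", "Yes, I believe so."])

def human_respond(prompt):
    # Stand-in for a human correspondent typing at a hidden terminal.
    return input("[hidden human] " + prompt + "\n> ")

def imitation_game(rounds=5):
    """The tester interrogates a hidden correspondent, then must guess which it is."""
    is_machine = random.choice([True, False])
    respond = machine_respond if is_machine else human_respond
    for i in range(rounds):
        question = input("Round %d, your question:\n> " % (i + 1))
        print("Reply:", respond(question))
    guess_machine = input("Machine or human? ").strip().lower() == "machine"
    print("The tester was", "fooled." if guess_machine != is_machine else "right.")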

I happened to be reading an essay, "Analogies Hard and Soft," by Joseph Agassi (Tel Aviv University, Philosophy) in David Helman's 1988 anthology, Analogical Reasoning: Perspectives of Artificial Intelligence, Cognitive Science, and Philosophy, which I picked up from Cass Sunstein's citation to it in his 1993 Harvard Law Review article on analogical reasoning.  The essay also addresses the Turing test and the limits of artificial intelligence.  To summarize, analogies fascinate because they waver between "hard" copies or forgeries of something else, at one extreme (e.g. a computer simulation so good that it cannot be distinguished from a human correspondent, or a piece of expert art forgery), and wispy soft comparisons:  "they are vague in the limits of their applicability, they are suggestive, they are not simply vague and indefinite, they stimulate one's thinking, they offer possibilities which scintillate between promise and disappointment."  Here is the metaphysical issue Agassi raises:  is it possible to program so powerfully that the program replicates all possible human (i.e. brain) programming?  The problem is one of self-reference and infinite regress.  The Turing test requires a tester.  If the tester concluded that the program was the ultimate copy, then the program should also be able to replicate what the tester just did.  But that would mean that there had to be a "meta-program" to be tested now by a "meta-tester."  And so on.
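For the programmers in the audience, Agassi's regress has a familiar shape: a certification procedure whose success branch only generates a new thing to certify.  Here is a toy Python sketch of the structure (the class, the "absorb" step, and the always-passing test are my illustrative assumptions, not Agassi's):

class Program:
    """A stand-in for the candidate program; purely illustrative."""
    def __init__(self, depth=0):
        self.depth = depth

    def absorb_tester(self):
        # A perfect copy must also reproduce the tester's act of judging,
        # which yields a "meta-program" one level up.
        return Program(self.depth + 1)

def passes_test(program):
    # Assume, for the sake of the regress, that the copy always passes.
    return True

def certify(program):
    # Success at one level only creates a new object to test: the
    # recursion has no base case on the success branch.
    if passes_test(program):
        return certify(program.absorb_tester())
    return False

# certify(Program())  # would recurse until Python's stack limit, which is the point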

Indeed, the natural hypothesis here, that no program will ever be able wholly to replicate the power of the brain, particularly as it relates to creation or inventiveness, is, as Agassi points out, an inductive and not a deductive conclusion.  Showing that a computer cannot replicate some particular capacity, such as the image or sound recognition noted in the Times article (try posting a comment to this blog for an example of that!), merely supports the thesis but does not prove it.

I suppose I ought to conclude this with a couple of implications that make this somehow relevant to lawyers.  The point being made by the Times story is not one of them:  it is about systems that make up for the failure of the perfect algorithm by incorporating human brains into cybersystems, thus taking advantage of what human brains (creativity?  inventiveness?  analogy?) and computers (speed of processing) each do best.  But a point made by Agassi is:  the vagueness of analogy finds its practical frustration in the determination of patentability.  "[H]ow inventive should a forgery be in order to make it patentable? . . . Michael Polanyi, the famous philosopher of science who took expertise as axiomatic and undefined, has claimed that no formula can be given to justify the patent-tester's decision."  Indeed, Agassi notes the infinite regress in any attempt to use algorithms in the patent office; once a court states the formula, "competitors forge new forgeries by what is technically known as going around the patent, i.e. varying it trivially but with sufficient significance using formulas accepted by courts.  Courts may take notice and improve their formulas, but not in retrospect!"

That's a lot of food for thought on a Sunday morning.

http://lawprofessors.typepad.com/legal_profession/2007/09/algorithms-and-.html



Comments

Nice post. I have to say that I enjoyed Judge Posner's rather swift dismissal of the potential grant of personhood to computers in his review of Wise's book on animal rights.

As for the Turing Test: I think such a test elides what is truly essential about persons (i.e., consciousness/a soul) and instead elevates a simulacrum of it. It is a classic example of models of verifiability/measurement developed in the natural world being misused in order to displace subjectively experienced (moral) judgment.

Posted by: Frank | Sep 23, 2007 8:51:22 PM

"But that would mean that there had to be a "meta-program" to be tested now by a "meta-tester." And so on." How so? No it doesn't. It only posits the possibility of developing AI good enough to be able to conduct Touring tests of its own. Implicitly, anyone or anything conducting a Touring test has a certain set of expectations of real people or adequately good fakes. So indeed, a model, for example an AI, can have a model, for examples a set of expectations guiding the conduct of a Touring test. But not all models automatically have models of their own. Plainly, the expectations of a Touring tester, human or AI, do not in and of themselves constitute a running program or simulation. Now, a more sophisticated Touring test might including having the subject conduct a Touring test of his/her/it's own upon yet another subject. And that subject in turn, might be set to conduct a Touring test on yet another subject. But here there is no regress, only iteration, more test subjects, not simulations simulating more simulations.

Moreover, a programmer creating an AI runs that simulation on a computer, not in the programmer's own mind. The computer can have as much capacity as needed, and the programmer need only think of one smaller part or aspect of the whole of his/her design at a time while writing the code. Never need the programmer encompass the totality of being of the AI in his or her own mind like unto God. Even larger cognitive overviews only include certain details at any time. Also, we can be sure that a great deal of trial and error will ensue until the AI runs properly. And the very success of trial and error demonstrates that the perfection required for cognitive simulation in totality is not necessary. Indeed, a lesser intelligence could create a more powerful intellect than itself. All too often do all manner of creations surprise their creators.

An artificially intelligent computer might conceivably be able actually to run a simulation of itself in its own mind. And that does raise the possibility of recursion, limited only by processing resources. But recursion, incidentally, is also allowed. Indeed, such an AI could conceivably study and model itself better than its own human creator could. And by appealing to general principle in understanding the particular, meta-modeling enters, but no differently than usual.
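That resource limit is easy to make concrete: a self-modeling process pays for each level of nesting, so the recursion bottoms out at a budget rather than regressing forever. A small Python sketch (the doubling cost per level and the budget of 100 are made-up assumptions):

def self_model(budget, level=0):
    """Each nested self-simulation costs resources; recursion halts at the budget."""
    cost = 2 ** level              # assume each level is costlier than the last
    if budget < cost:
        return level               # resources exhausted: deepest level reached
    return self_model(budget - cost, level + 1)

print("deepest level reached:", self_model(budget=100))   # prints 6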

But can we ever truly understand creativity well enough for analytic simulation, as opposed to bringing creativity about experimentally, even via only partial understanding? Probably understanding in abstract principle must advance before any understanding in comprehensive minutiae and formula can come. The reverse would stink of Induction, and that is absurd!

Posted by: Aaron Agassi | Aug 3, 2009 1:01:41 PM
