
June 7, 2008

The Transposition Fallacy in the Los Angeles Times

In an earlier posting, I noted a story in the Los Angeles Times about the perceived need to adjust the probability for a random match when an individual emerges as a suspect because of a trawl through a database of DNA profiles. The reporters suggested that there was a grave injustice because "the prosecutor told the jury that the chance of such a coincidence was 1 in 1.1 million," but "jurors were not told the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett's case, it was 1 in 3." They added that "the case is emblematic of a national problem."

The Times received some flak for this reporting. Not only do many leading statisticians dispute the claim that an adjustment for the size of the database searched produces the most significant statistic, but, it was said, the description of "1 in 3" as "the probability that the database search had hit upon an innocent person" was wrong. The critical readers complained that, at best, 1/3 was the chance of a match to someone in the database if neither Puckett nor anyone else in the database were the source of the DNA in the bedroom of the murdered woman. It is not the chance that Puckett is not the source given that his DNA matches.

To equate the two probabilities is to slip into the transposition fallacy of treating P(A given B) as if it were P(B given A). Conditional probabilities do not work this way. For instance, the chance that a card randomly drawn from an ordinary deck of playing cards is a picture card given that it is red is not the chance that it is red given that it is a picture card. The former probability is P(picture if red) = 6/26; the latter is P(red if picture) = 6/12.
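For readers who like to check the arithmetic, here is a minimal sketch (in Python, my own illustration rather than anything from the exchange) that enumerates a standard 52-card deck and computes the two conditional probabilities directly:

```python
# Minimal sketch: enumerate a standard deck and compute the two
# conditional probabilities from the card example.
from itertools import product
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))

def is_red(card):
    return card[1] in ("hearts", "diamonds")

def is_picture(card):
    return card[0] in ("J", "Q", "K")

red = [c for c in deck if is_red(c)]
picture = [c for c in deck if is_picture(c)]

# P(picture | red): restrict the sample space to the 26 red cards.
p_picture_given_red = Fraction(sum(is_picture(c) for c in red), len(red))
# P(red | picture): restrict the sample space to the 12 picture cards.
p_red_given_picture = Fraction(sum(is_red(c) for c in picture), len(picture))

print(p_picture_given_red)   # 3/13, i.e., 6/26
print(p_red_given_picture)   # 1/2,  i.e., 6/12
```

The two numbers differ because each conditions on a different sample space, which is exactly why transposing them is a fallacy.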

The reporters responded with the following defense:

In our story, we did not write that there was a 1 in 3 chance that Puckett was innocent, which would be a clear example of the prosecutor's fallacy. Rather, we wrote: "Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett's case, it was 1 in 3." The difference is subtle, but real.

Interestingly, when the question of whether there is any real difference was posed on a listserv of evidence professors, two professors described the statement as ambiguous, while four saw it as a clear instance of transposition.

My view is that the following two statements are true:

1. IF THE DATABASE WERE INNOCENT (meaning that it does not contain the source of the crime-scene DNA and everyone in it is unrelated), then (prior to the trawl) the probability that SOMEONE (regardless of his or her name) would match is roughly 1/3.

2. IF THE DATABASE WERE INNOCENT, then (prior to the trawl) the probability that a man named Puckett would match is 1/N = 1/1,100,000.

But neither (1) nor (2) is equivalent to

3. The probability that the database search hit upon an innocent person named Puckett was 1/3.
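To make the contrast between (1) and (2) concrete, here is a small numerical sketch. The only figure taken from the post itself is the 1-in-1.1-million random-match probability; the database size of roughly 338,000 profiles is an illustrative assumption drawn from press accounts of the case, not from this post:

```python
# Sketch of where a "1 in 3" figure can come from (illustrative assumptions only).
p = 1 / 1_100_000   # statement (2): chance a named, unrelated innocent person matches
N = 338_000         # assumed size of the innocent database being trawled

# Statement (1): chance that SOMEONE in an innocent database matches.
p_someone = 1 - (1 - p) ** N   # assuming independent profiles
np_approx = N * p              # the simpler N*p adjustment

print(f"P(named person matches) = {p:.2e}")        # ~9.1e-07, i.e., 1 in 1.1 million
print(f"P(someone matches)      = {p_someone:.3f}")  # ~0.265
print(f"N*p approximation       = {np_approx:.3f}")  # ~0.307, roughly 1/3
```

Both calculations give something close to 1/3 for the whole database, while the probability attached to any one named individual remains 1 in 1.1 million, which is why (1) and (2) are compatible yet neither supports (3).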

--DHK


June 5, 2008

fMRI, Lie Detection, and Statistics

I'm blogging from the AALS Mid-Year Conference on Evidence in Cleveland, where I just moderated a discussion this morning on fMRI and Lie Detection featuring Steve Laken (Cephos Corp.) and Mike Pardo (Alabama).  Although the studies on fMRI lie detection have their limitations, the results so far are quite impressive, with accuracy rates in the 90% range.  One wonders how soon they will make their way into court, where admissibility questions loom large.  Even if the technology is in fact sufficiently reliable for Daubert (and what I saw this morning suggests that it is), the inherent conservatism of the legal system, coupled with the bias against analogs to polygraphs, will make admissibility a tough hurdle.  (For more on the bias against mind-reading devices, see this Note written by my student Leo Kittay.)

One striking aspect of the various discussions on fMRI during and after the session was the focus that people had on mechanism.  Many people are concerned that researchers have not yet pinpointed specific areas of the brain associated with lying, or have not determined specific pathways for deception.  Often, they are similarly concerned that other brain activities may "light up" the same regions.  I'm skeptical, however, that these concerns really matter.  While it may be desirable and interesting to know the specific mechanisms associated with deception, we really don't need to make such discoveries to have a practically useful lie detection machine.  All that matters is that some model exists (here, presumably using brain scans) that can separate liars from non-liars with reasonable accuracy.  How the model does that is in many ways beside the point.  As Laken pointed out during the discussion, medical researchers often have little or no idea about the specific mechanism behind a drug's success, yet that limitation never prevents us from relying on its therapeutic benefits as proven through statistical/epidemiological studies.
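To make the "accuracy without mechanism" point concrete, here is a minimal sketch on entirely synthetic data (my own illustration, not Cephos's method or any real fMRI protocol): a black-box classifier is judged solely by how well it separates the two classes on held-out data, and no claim about the underlying neural mechanism ever enters the evaluation.

```python
# Illustration only: a black-box classifier evaluated purely on held-out accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: 200 sessions, 50 scan-derived features each,
# labeled 1 = deceptive, 0 = truthful.  Entirely synthetic.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 1.0   # give "deceptive" sessions a detectable but unexplained signal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The only question asked of the model: how well does it separate the classes
# on data it has never seen?  The mechanism behind the signal never comes up.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same logic underlies the drug analogy: what gets validated statistically is predictive performance, not a causal story about why the signal is there.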

--EKC
