Sunday, June 21, 2009

Osborne and the Right to Post-conviction DNA Testing (II)

On November 7, 2008, I outlined the issues in the Osborne case that the Supreme Court decided a few days ago (June 18, 2009). The Court avoided the core issue of whether a prisoner has a right to be released upon a showing that he is probably innocent of the crime for which he was convicted after a fair trial. It did so in a 5-4 decision by reasoning that even if this right exists, a prisoner has no due process right to test the DNA from the scene of a rape after the conviction when (1) the convicted offender did not seek extensive DNA testing before trial even though it was available, (2) he had other opportunities to prove his innocence after a final conviction based on substantial evidence against him, (3) he had no new evidence of innocence (only the hope that more extensive DNA testing than that done before the trial would exonerate him), and (4) even a finding that he was not the source of the DNA would not conclusively demonstrate his innocence (obviously, a tough standard to meet). Unless the Court overrules itself, later courts will have to figure out which combination of these factors should be dispositive in future cases.

Chief Justice Roberts' opinion for the majority begins with the observation that "DNA testing has an unparalleled ability both to exonerate the wrongly convicted and to identify the guilty." Sure, DNA evidence is highly probative in certain types of cases, but is it truly "unparalleled"? What happened to fingerprints as a biometric identifier? Is this another example of "DNA worship"?

Another oddity in the case is Justice Alito's remark, in a concurring opinion, that the DNA sample here might be so small, degraded, and contaminated (because the condom sat outside for 24 hours) that a failure to find STR alleles matching Osborne would not mean much. To support this speculation, Justice Alito relied on some law review articles that noted that some fraction of DNA samples have these problems. Yet, there was enough undegraded material in the condom for HLA DQA testing (which linked Osborne to the sample) and, apparently, for RFLP testing (which Osborne's counsel chose not to pursue before trial). At a minimum, it would seem that the case could have been remanded for a determination of whether the DNA here was as degraded, contaminated, and limited as Justice Alito thought it could have been.

Obviously, these are minor points in the greater scheme of things. The majority opinion has been widely condemned (usually on the basis of very general statements about DNA testing and false convictions). Inspired, one surmises, by recent Justice-confirmation politics, the New York Times depicted the result as the work of a conservative bloc of Justices insensitive to the plight of real human beings.

--DHK

June 21, 2009 | Permalink | Comments (0) | TrackBack (0)

Saturday, April 18, 2009

Taking Liberties with the Numbers

This month's issue of the California Lawyer perpetuates the confusion in the media about DNA database trawls. In an article entitled "Guilt by the Numbers: How Fuzzy is the Math that Makes DNA Evidence Look So Compelling to Jurors?," award-winning journalist Edward Humes discusses the unusual case of People v. Puckett (No. A121368, Cal. Ct. App., 1st Dist., May 1, 2008). John Puckett, now an elderly man, is appealing his recent conviction for the 1972 murder of Diane Sylvester, a San Francisco nurse. The conviction rests on a cold hit in California’s convicted-offender database at a small number of STR loci (genetic locations). Humes writes that in Puckett, "the prosecution's expert estimated that the chances of a coincidental match between the defendant's DNA and the biological evidence found at the crime scene were 1 in 1.1 million." Id. at 22. Then he adds that "there's another way to run the numbers," which shows that "the odds of a coincidental match in Puckett's case are a whopping 1 in 3." Id. "Both calculations," he maintains, "are accurate. The problem is that they answer different questions." Id. The explanation, he believes, lies in "a classic statistical puzzle known as the 'birthday problem.'" Id.

Surely the probability of "a coincidental match" cannot have such fantastically different "accurate" values. Moreover, the birthday problem has almost nothing to do with these numbers. The fuzziness is in the words of the article, not in the math. Only if we define "a coincidental match" can we begin to see what its probability would be and how unlike the birthday problem it is.

Definition 1. The probability of a coincidental match is the chance that Mr. Puckett is innocent and the match to him is just a coincidence.

The average reader might think that a coincidental match means that Mr. Puckett is innocent and the match to him is just a coincidence.  If this is what it means, however, its probability is neither 1 in 1.1 million nor 1 in 3.  The former figure is the probability that Puckett's DNA would match if he were the only one whose DNA had been checked and if he were unrelated to the killer. The latter figure is the probability that at least one profile in the California database -- not necessarily Puckett's -- would match if no one in the database were the killer.  Notice that both probabilities are conditional -- they depend on assumptions about who the real killer is or is not.  They cannot readily be inverted or transposed into the probability of who the real killer is. Under Definition 1, therefore, neither number is an "accurate" statement of the probability of a coincidental match.  Neither one expresses the chance that the match to Mr. Puckett is just a coincidence.

A technical note: This description of the probabilities of 1 in 1.1 million and 1 in 3 assumes, for simplicity, that it was the killer's DNA that was found near the victim and later typed and that there was no possibility of error in the DNA typing, no ambiguity in the test results, and no selectivity in presenting them. Statisticians will immediately recognize that Bayes' rule could be used to arrive at the posterior probability of Puckett's innocence.
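For readers who want to see the mechanics, here is a minimal Python sketch of that Bayes' rule calculation. The prior probabilities are assumptions chosen purely for illustration; nothing in the case fixes them, which is precisely why neither reported number can be read as the probability of innocence.

```python
# A minimal sketch of the Bayes' rule point in the technical note.
# The priors are illustrative assumptions, not facts from the case.

def posterior_source_prob(prior_source, match_prob):
    """P(source | match), assuming the true source matches with
    certainty and a non-source matches with probability match_prob."""
    p_match = prior_source + (1 - prior_source) * match_prob
    return prior_source / p_match

rmp = 1 / 1_100_000  # random-match probability reported in Puckett

# The posterior swings with the prior, which is the point: the
# random-match probability alone is not the probability of guilt.
for prior in (0.5, 1e-3, 1e-6):
    print(f"prior {prior:>8}: P(source | match) = {posterior_source_prob(prior, rmp):.6f}")
```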

Definition 2. The probability of a coincidental match means the chance that Mr. Puckett's DNA would match (and no other DNA in the database would) if he were not the killer and if he were unrelated to the killer.

This definition refers to the probability of the DNA evidence given the hypothesis of coincidence. Again, neither 1 in 1.1 million nor 1 in 3 expresses this value, but 1 in 1.1 million is a far closer estimate than is 1 in 3. The reason is that the DNA evidence includes not merely the datum that Puckett's DNA matches, but the additional information that no one else's does. If Puckett were the only one tested (a database of size 1) and if he were innocent, then the chance that he would match would be 1 in 1.1 million. Now we test an unrelated second person. The chance that this individual would match if he were innocent also is 1 in 1.1 million, and the chance that he would match if he were the killer is 1. The chance that Puckett matches and the other man does not is therefore either (1/1,100,000) x (1 - 1/1,100,000), which is just under 1/1,100,000 (if both men are innocent), or zero (if the other man is the killer, since his DNA would then surely match). In other words, the probability that Puckett matches just by coincidence (he matches if he is innocent) in a search of a database of size 2 is, at most, 1 in 1.1 million. Searching the database and finding that only Puckett matches is better evidence than testing only Puckett.  (This reasoning is developed more fully, for a database of any size, in, e.g., David H. Kaye, Rounding Up the Usual Suspects: A Legal and Logical Analysis of DNA Database Trawls, 87 N. Car. L. Rev. 425 (2009).)
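The same reasoning extends to a database of any size, as the following Python sketch illustrates. The database sizes are arbitrary choices of mine, and the calculation assumes unrelated profiles and the 1-in-1.1-million random-match probability; the output never exceeds that figure.

```python
# A minimal sketch of the Definition 2 upper bound for a database of
# size n. Profile independence is assumed; the sizes are illustrative.

p = 1 / 1_100_000  # random-match probability for an innocent, unrelated person

def prob_only_suspect_matches(n, p):
    """Upper bound on P(suspect matches and none of the other n - 1
    profiles do | suspect is not the source). The bound assumes the
    true source is outside the database; if he were among the others,
    he would match, and this event would have probability zero."""
    return p * (1 - p) ** (n - 1)

for n in (1, 2, 100_000, 1_000_000):
    print(f"database of {n:>9}: at most {prob_only_suspect_matches(n, p):.3e}")
```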

Definition 3. The probability of a coincidental match means the chance that one or more DNA profiles in the database would match if no one in the database is the killer.

This definition refers to the probability of one or more hits in the database given that the database is innocent. This probability is approximately 1 in 3. What it has to do with the probability that the DNA in the bedroom was Mr. Puckett's is obscure.  It is not even the expected rate at which searches of innocent databases would lead to prosecutions. After all, the 1 in 3 figure includes people who were not even born in 1972, when Puckett allegedly killed Diane Sylvester. If the probability that applies under Definition 3 were to be admitted, it should be adjusted so that it is not so misleadingly large. See id.; David H. Kaye, People v. Nelson: A Tale of Two Statistics, 7 L., Probability, & Risk 247 (2008).
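For the arithmetic behind a figure like 1 in 3, consider the sketch below. The database sizes are my assumptions for illustration (the actual 1-in-3 figure reflects the size of the database that was searched); the point is that trawling a few hundred thousand innocent profiles makes at least one coincidental hit somewhere fairly likely.

```python
# A minimal sketch of the Definition 3 probability: at least one hit
# somewhere in a database of innocent, unrelated profiles. The sizes
# are assumed for illustration.

p = 1 / 1_100_000

def prob_at_least_one_innocent_hit(n, p):
    # complement of "none of the n innocent profiles matches"
    return 1 - (1 - p) ** n

for n in (10_000, 100_000, 338_000):
    print(f"{n:>7} innocent profiles: {prob_at_least_one_innocent_hit(n, p):.2f}")
```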

The Birthday Problem

Also contrary to the claim in the California Lawyer, the birthday problem is not involved in Puckett. The birthday problem, in its simplest form, asks for the smallest number of people in a room such that the probability that at least two of them will have birthdays on the same day of the same month exceeds one-half. The answer (23) is surprisingly small because no particular birthday is specified. In the Puckett search, however, a particular DNA profile -- the one from the crime scene -- is specified. Finding that this particular profile matches at least one in the database is much less likely than finding at least one match between all pairs of profiles in the database. The latter event is the kind that is at issue in the birthday problem.  See David H. Kaye, DNA Database Woes: What Is the FBI Afraid Of? (under review). It is not involved in a cold hit to a crime-scene profile.
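A short computation makes the contrast vivid. The sketch below, using ordinary birthdays, compares the probability of at least one match between all pairs (the birthday problem) with the probability of a match to one pre-specified birthday (the analogue of a cold hit to a crime-scene profile).

```python
# A minimal sketch contrasting the two events, assuming 365 equally
# likely birthdays.

def prob_any_shared_birthday(k):
    """P(at least two of k people share a birthday): the classic
    birthday problem, with no particular day specified."""
    prob_all_distinct = 1.0
    for i in range(k):
        prob_all_distinct *= (365 - i) / 365
    return 1 - prob_all_distinct

def prob_match_to_specific_day(k):
    """P(at least one of k people has one pre-specified birthday):
    the analogue of a cold hit to a crime-scene profile."""
    return 1 - (364 / 365) ** k

print(prob_any_shared_birthday(23))    # about 0.507: exceeds 1/2 at k = 23
print(prob_match_to_specific_day(23))  # about 0.061: far smaller
```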

There are other errors in the California Lawyer article, but I hope I have said enough to caution readers to be wary. The media portrait of the database-trawl issue bears but a faint resemblance to the peer-reviewed statistical literature on the subject.

--DHK

References

Edward Humes, Guilt by the Numbers: How Fuzzy is the Math that Makes DNA Evidence Look So Compelling to Jurors?, California Lawyer, Apr. 2009, at 21-24.

This blog --
The Birthday Problem in Las Vegas, Aug. 11, 2008
DNA Database Woes and the Birthday Problem, July 20, 2008
Rounding Up the Usual Suspects III: People v. Nelson, June 22, 2008
The Transposition Fallacy in the Los Angeles Times, June 8, 2008
The Transposition Fallacy in Brown v. Farwell, May 3, 2008
Rounding Up the Usual Suspects II, May 5, 2008
Rounding Up the Usual Suspects, April 5, 2008

Recent law review articles

David H. Kaye, People v. Nelson: A Tale of Two Statistics, 7 L., Probability, & Risk 247 (2008)
David H. Kaye, Rounding Up the Usual Suspects: A Legal and Logical Analysis of DNA Database Trawls, 87 N. Car. L. Rev. 425 (2009)

April 18, 2009 | Permalink | Comments (0) | TrackBack (0)

Thursday, April 16, 2009

Two Cases on Multiple Chemical Sensitivity

A diagnosis that is presented in courts with some regularity is "multiple chemical sensitivity." Wikipedia provides the following links and remarks about its dubious scientific status:

"Because of the lack of scientific evidence based on well-controlled clinical trials that supports a cause-and-effect relationship between exposure to very low levels of chemicals and the myriad symptoms reported by clinical ecologists, MCS is not recognized as an established organic disease by the American Academy of Allergy, Asthma, and Immunology, the American Medical Association (AMA), the California Medical Association, the American College of Physicians, and the International Society of Regulatory Toxicology and Pharmacology."

Case law therefore generally rejects expert medical testimony of MCS. A recent Kansas court of appeals case, Kuxhausen v. Tillman Partners, 197 P.3d 859 (Kan. Ct. App. 2008), is illustrative. "When Stacy Kuxhausen reported for work at an accounting firm on a Monday morning in Manhattan, Kansas, she smelled paint and began to feel ill within minutes of entering the building. She said that her eyes burned, that she started to get a sore throat, and that she had to take deep breaths to get enough air. She later learned that epoxy-based paints had been applied in the basement of the building on the preceding Friday and Saturday. Kuxhausen came back to the building twice more over the next few days but stayed for only a few hours each time. . . ." She sued the building owners for about $2.5 million.

She found a member of the American Academy of Allergy, Asthma, and Immunology (not board certified), Dr. Henry Kanarek, who has diagnosed more than 100 patients with MCS and who concluded that Ms. Kuxhausen was suffering from this condition. The trial judge barred the diagnosis on the ground that MCS is not an ailment that is generally recognized in the medical community.

The court of appeals affirmed. There is nothing odd about that, but the court had to distinguish Kuhn v. Sandoz Pharmaceuticals Corp., 14 P.3d 1170 (Kan. 2000).  Although Kansas follows Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), in requiring general acceptance of scientific propositions that are the basis of expert testimony, Kuhn tosses this test out the window when the expert gives "pure opinion" based on personal experience. Thus, had Dr. Kanarek simply testified that in his 13 years of practice, he had encountered more than 100 cases of MCS, the trial judge might have had to admit the diagnosis.  But "[w]hen asked his basis for multiple-chemical sensitivity as a valid diagnosis," Dr. Kanarek cited "information that has appeared in various articles written in the publications that I've read as well as lectures or discussions."  Because he "has relied upon articles and lectures by others as support for the validity of the diagnosis," the court of appeals concluded that his testimony was inadmissible, Kuhn notwithstanding.

There is something ironic about a legal doctrine that excludes scientific evidence when the expert cites the scientific literature yet admits it when the expert relies on much more limited (and much less reliable) personal experience of a single physician. Kuhn makes little sense.

The Kansas Court of Appeals had to get around bad law to reach a reasonable result. Moving from Kansas to Oregon, the Oregon Court of Appeals misapplied good law (in the form of a state test for scientific evidence that anticipated the approach in Daubert v. Merrell Dow Pharmaceuticals) to deem it an abuse of discretion for a trial court to exclude such theories as that dental fillings cause chemical sensitivity. In Kennedy v. Eden Advanced Pest Technologies, 193 P.3d 1030 (Or. Ct. App. 2008), the court of appeals thought that theories and publications within the subculture of clinical ecology were just as valid and established as mainstream medicine. It was unfazed by a toxicologist's testimony that within "the recognized medical community" there was zero acceptance of the field "because it hasn't been substantiated [as] a scientific method." Id. at 1037.

Plaintiff relied on a diagnosis of "Dr. William Rea . . . who founded the Environmental Health Center in Dallas," id. at 1035, and whose methods, the court conceded, had been rejected by "virtually all courts that have considered the issue." Id. at 1041. But a physician who testified for the defendant dismissed the diagnosis as resting on "novel tests * * * published in obscure journals for which we don't know anything about peer review or other aspects of the testing procedure." Id. at 1037. He explained that Dr. Rea was "the mouthpiece, so to speak, for the clinical ecology movement. But the—the difficulty with—with this concept is that it's never had any scientific underpinnings. [T]he condition [cannot] be defined in such a way that anybody can properly diagnose it. [W]e continue to see a number of physicians who . . . use diagnostic tests that are not validated. They continue to make the diagnosis of multiple chemical sensitiv[ity], or MCS, or chemical sensitivity or sometimes it's been renamed to idiopathic environmental intolerance. None of these are legitimate diagnosable medical conditions for which criteria exist." He insisted that Dr. Rea is "practicing something that is not mainstream medicine, for sure. That, I can tell you."

In essence, the Oregon court substituted a simple credentials test for the requirement of a scientific foundation for scientific testimony, observing that "Rea is a medical doctor who has practiced for a long period of time, belongs to relevant professional organizations, and has examined over 30,000 patients." Id. at 1039. Apparently thinking that the possession of an M.D. makes a physician a scientist, the court stated that "there exists a legitimate debate within the scientific community between two groups of scientists." Id. at 1040. It concluded that "the most that can be said is that there is a controversy in the medical community about whether chemical sensitivity or MCS is a valid diagnosis." Id. at 1039.

The question, of course, is not just whether there is a controversy among individuals with advanced degrees. It is the nature, quality, and extent of the data that might confirm or refute the beliefs of these individuals. The learned professions are not immune to quackery. Some physicians entertain unvalidated -- and sometimes implausible -- theories. These believers may organize themselves into professional societies, issue certificates to their members, and publish their own peer-reviewed journals (that are ignored by the larger medical community). Courts dealing with medical testimony therefore may have to probe more deeply than the Oregon court did into the substance of the dispute if they are to reach sound decisions about the admissibility of scientific evidence.

--DHK

Thanks to David Bernstein for calling the Oregon Court of Appeals opinion to my attention. Further discussion of the admissibility of medical testimony in light of modern tests for scientific evidence can be found in The New Wigmore, A Treatise on Evidence: Expert Evidence (2004).


April 16, 2009 in Science | Permalink | Comments (0) | TrackBack (0)

Saturday, March 7, 2009

Genetic Datasets to Stay Closed

"The National Human Genome Research Institute is sticking with a decision, made last summer, to remove free-access, pooled genomics data [from] the Internet." An article in the American Scientist implies that one reason for the decision is that "law enforcement is expanding the way it uses DNA to track suspects, even innocent relatives of suspects in crime cases." The worry seems to be that the police will take a DNA sample from a crime scene, perform a genome-wide scan of SNPs, use a statistical procedure described in a research paper published last year to determine whether the sample could have come from someone who contributed DNA to the research database, then subpoena the research institution for the name of the donor. For references to the report that triggered the worries at NHGRI and private genetics research centers that have adopted the same restrictive polocy, see the report from September 5, 2008, posted on this blog.

--DHK

Reference: Catherine Clabby, "DNA Research Commons Scaled Back," American Scientist 97(2): 113, Mar.-Apr. 2009.

March 7, 2009 | Permalink | Comments (0) | TrackBack (0)

McDaniel v. Brown: The Supreme Court, Bayes' Theorem, Five Brothers, and Two Errors in DNA Probabilities

At the end of January, the Supreme Court granted a petition for the writ of certiorari in McDaniel v. Brown.  I noted this case back in May 2008.  In Brown v. Farwell, 525 F.3d 787 (9th Cir. 2008), as the case then was known, the Ninth Circuit discussed Bayes' Theorem and granted habeas corpus relief. The May blog explained how the majority of the panel may have misportrayed the implications of the theorem in this case.

The Supreme Court will consider two procedural issues: (1) What is the standard of review for a federal habeas court analyzing a sufficiency-of-evidence claim under the Antiterrorism and Effective Death Penalty Act? (2) Does analysis of a sufficiency-of-evidence claim pursuant to Jackson v. Virginia under 28 U.S.C. § 2254(d)(1) permit a federal habeas court to expand the record or consider nonrecord evidence to determine the reliability of testimony and evidence given at trial? The "nonrecord evidence" is a report prepared by Dr. Larry Mueller pointing out two errors in the trial testimony of the DNA analyst, Renee Romero: the "prosecutor's fallacy" and a miscalculation of the chance of a match to one of the defendant's brothers. Mueller is correct on both counts. First, Romero, at the behest of the prosecutor, transposed the conditional probability that an unrelated individual's DNA would match, given that the defendant was not the source of the DNA, transforming it into "a 99.99967 percent chance that Troy's DNA was the same as the DNA discovered in Jane's underwear." Because the uncontested random-match probability was 1/3,000,000, however, a correct application of Bayes' Theorem easily could give approximately the same result. (This point is explained in the earlier blog.) Second, Romero miscomputed the chance of a match to one of Troy's untested brothers as 1/6500. As Mueller found, the correct number is closer to 1/66.
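The following Python sketch illustrates that last point. The prior probabilities are my assumptions for illustration only; the sketch shows that, over a wide range of priors, the posterior probability that the defendant was the source lands near the figure Romero reported.

```python
# A minimal sketch of how Bayes' Theorem, properly applied, could
# approximate Romero's figure. The priors are illustrative assumptions.

rmp = 1 / 3_000_000  # uncontested random-match probability

def posterior_source(prior, rmp):
    """P(defendant is the source | match), assuming the true source
    matches with certainty and an unrelated non-source matches with
    probability rmp."""
    return prior / (prior + (1 - prior) * rmp)

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: posterior = {posterior_source(prior, rmp):.7f}")
```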

If the Court agrees that Mueller's report should have been ignored, then an interesting evidence question arises. Can a court take judicial notice that Romero's testimony about the probabilities was wrong? I think so. The transposition error has been discussed ad nauseam in the scientific-evidence literature. See, e.g., The New Wigmore: A Treatise on Evidence: Expert Evidence (2004). Mueller's figure of 1/66 is trickier, though, because it rests not only on a well-accepted formula in genetics but also on sample data about allele frequencies. Still, the proposition that Romero's figure of 1/6500 is too small is indisputable. As noted in the May blog, the correct number cannot be much smaller than 1/512 for the two brothers considered by Romero (or 1/256 for these two plus another living across the state line). Of course, whether these errors -- errors that were not brought out at trial -- can be shoehorned into grounds for habeas relief is another question that I shall leave for others to address.
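As for the brothers, a rough lower bound shows why 1/6500 must be too small. At each locus, full siblings inherit both alleles identical by descent with probability 1/4, whatever the allele frequencies. The sketch below uses that floor; the five-locus assumption is mine, chosen only because it reproduces the 1/512 figure mentioned above.

```python
# A minimal sketch of the sibling-match floor. The five-locus figure
# is an assumption made to reproduce the 1/512 bound in the text.

def sibling_match_floor(n_loci):
    # each locus: both alleles identical by descent with prob 1/4
    return 0.25 ** n_loci

one_brother = sibling_match_floor(5)        # 1/1024
two_brothers = 1 - (1 - one_brother) ** 2   # at least one of two brothers
print(f"one brother:  at least 1 in {1 / one_brother:.0f}")
print(f"two brothers: at least 1 in {1 / two_brothers:.0f}")
```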

--DHK

Acknowledgements: Thanks to Carissa Hessick for calling the grant of certiorari to my attention and to Larry Mueller for explaining his analysis of the probability for a match to a brother.

March 7, 2009 | Permalink | Comments (0) | TrackBack (0)

Thursday, February 19, 2009

Viewing the National Academy Report on Forensic Science

The National Academy of Sciences has been generating a lot of reports recently on forensic science topics. Its latest, long-delayed, long-awaited effort is the most ambitious. It surveys all fields of forensic science and calls for dramatic reforms in the system of producing forensic science evidence in the U.S. We hope to address specific parts of the report later.

To read most NRC reports, you can go to the National Academy Press website. The Academy advertises that the prepublication forensic science report also is available online for free.

The tab at http://www.nap.edu/catalog.php?record_id=12589#toc promises "Read this book online, free!" It leads to a table of contents. The links there take you to the book's contents, but sometimes they lead to a page that reports that the book is not available online free. I presume that this is a programming glitch and that online access is or soon will be fully functional.

--DHK

February 19, 2009 | Permalink | Comments (0) | TrackBack (0)

Saturday, November 22, 2008

Simpley neurology

In the 1950s, psychiatry and law were in vogue. Today, we are hearing a lot about neurology and law. In the spirit of Jay Leno's "Headlines," here is an advertisement from the Oxford University Press catalog:

Neurology
Second Edition
Michael Donaghy
paper, ISBN13: 9780198526360, ISBN10: 0198526369

Praised by both lecturers and students alike for successfully breaking down the barriers which conventionally surround the subject of neurology and for making it simpleý

--DHK

November 22, 2008 | Permalink | Comments (1) | TrackBack (0)

Friday, November 7, 2008

Osborne and the Right to Post-conviction DNA Testing

The Supreme Court will consider whether an individual convicted of a crime has a constitutional right to obtain a DNA sample that might exonerate him. The case that raises this issue has produced four appellate opinions so far. The one that the Supreme Court will review is Osborne v. District Attorney's Office for Third Judicial District, 521 F.3d 1118 (9th Cir. 2008).

The case began in 1993 with a vicious attack on a prostitute. The evidence that led to William Osborne’s conviction included semen from a condom that was analyzed with a relatively unrevealing form of DNA testing. While pursuing other avenues of relief, Osborne filed an action in federal district court under a civil rights statute, 42 U.S.C. § 1983, to force state officials to give him the biological material for more modern DNA testing. Unlike most other states, Alaska has no statute specifically prescribing the conditions under which prisoners can obtain post-conviction DNA testing. After some twists and turns, the district court decided that Osborne had a “limited” due process right to the sample.

The Ninth Circuit affirmed, emphasizing that the crime-scene DNA sample had been introduced at trial as evidence against him, that more definitive testing now is available at no cost to the state, and that Osborne could use an exculpatory finding to obtain post-conviction relief. Although the state of Alaska contends that the Ninth Circuit “created from whole cloth” a new constitutional right, other courts have found that such a constitutional right exists. E.g., Savory v. Lyons, 469 F.3d 667 (7th Cir. 2006); McKithen v. Brown, 565 F.Supp.2d 440 (E.D.N.Y. 2008).

Although Osborne has been pursuing state habeas corpus relief, he has yet to seek federal post-conviction relief. Given the many limitations of federal habeas corpus and the state court decisions to date, it may be that the only avenues that remain open to him are executive clemency and state or federal relief on the theory that a prisoner has a “freestanding” right to be released because he can show that, despite a fair trial unblemished by any prejudicial errors, he is actually innocent. In House v. Bell, 547 U.S. 518 (2006), the Supreme Court recognized that such a right might exist, but the Court determined that even if it did, the proof of actual innocence in that case did not satisfy the “extraordinarily high ... threshold for any hypothetical freestanding innocence claim.” Id. at 555.

Although a new DNA finding excluding Osborne as the source of the crime-scene DNA would be powerful evidence of actual innocence, the Ninth Circuit conceded that it would not be “conclusive.” As in House, therefore, the Court conceivably could avoid resolving the “freestanding right” issue by characterizing the evidence here as falling short of the vague, “extraordinarily high” threshold. It also could avoid this question by finding a right of access to DNA evidence just because the evidence would provide support for a petition for a pardon—an issue that the Ninth Circuit chose not to consider. Finally, it could avoid all these issues by holding, as Alaska argued, that a federal court should not issue what is, in effect, a discovery order against the state outside of an actual post-conviction proceeding.

In contrast, unless the Court were to pin the right of access to new testing to executive clemency rather than a judicial proceeding, Osborne cannot prevail unless he establishes two things. First, the Court must agree that regardless of how fair Osborne’s trial may have been, he has a constitutional right to be released if he is factually innocent. Second, he must persuade the Justices that access to the DNA samples would be reasonably likely to produce sufficient evidence of his actual innocence.

In short, this complex case will produce new law about federal rights to DNA testing, but the opinion that emerges could be anything from a narrow, procedural loss to a sweeping constitutional victory for Osborne, with plenty of permutations in the middle.

Postscript: Facts, Procedure, and the Methods of DNA Testing in Osborne

The Crime

In March 1993, two men paid a female prostitute (K.G.) to perform fellatio. They then drove her to “a service road ... in an isolated area on the outskirts of Anchorage ... near Earthquake Park.” Osborne v. District Attorney's Office for Third Judicial Dist., 521 F.3d 1118, 1137 (9th Cir. 2008). Judge Melvin Brunetti, writing for himself and judges Alfred T. Goodwin and William A. Fletcher, described a brutal attack in which the driver hit K.G. in the head with a gun and the passenger, wearing a blue condom, “vaginally penetrated her” (id. at 1122), choked and shot at her, and both of them beat her with an axe handle. They left her, half buried in the snow, for dead.
    Incredibly, K.G. “got up, walked to the main road, flagged down a passing car, told its occupants what had happened, and—hoping to avoid the police—asked only for a ride home.” Id. After a neighbor of one of the car's occupants notified the police, an uncooperative K.G. “eventually described the incident.” Id. During the medical examination, a “vaginal examination was not performed, however, because the passenger-rapist had worn a condom and K.G. had bathed repeatedly since the attack. At the crime scene, Anchorage police recovered from the snow a used blue condom, part of a condom wrapper, a spent shell casing, and two pairs of K.G.'s grey knit pants stained with blood.” Id.

The Suspects and the DNA

“A week later, military police stopped Dexter Jackson for a traffic infraction.” Id. Because Jackson and his car “resembled ... sketches that had been circulated after the assault,” the military police contacted the Anchorage Police. Jackson confessed and identified Osborne as the passenger on the night of the assault. K.G. identified him and his car. And, the police collected considerable circumstantial evidence establishing that K.G. had been in the car and that the car had been at the location she had described.

The case against Osborne was weaker. K.G. picked photos of him and another person from a photo spread, and she thought “Osborne was ‘most likely’ to have been the passenger who raped and shot her.” Id.  Although she pointed to Osborne at trial, she originally described the passenger as older and heavier than Osborne, and as clean shaven rather than mustached (as Osborne was). Osborne had been to an arcade some time before the attack, and there were paper tickets from there in the car. Id. at 1124. Some witnesses saw Osborne get into the car before the crime. Others saw Osborne and Jackson together after the attack, and they saw blood on Osborne’s clothing. Two pubic hairs from the blue condom and another one from K.G.’s sweatshirt (which was beneath her in the car) were microscopically similar to Osborne’s. And, there was DNA evidence.

The DNA

Along with the two pubic hairs, the blue condom contained sperm. In 1993, the most revealing DNA tests detected VNTR (variable number tandem repeat) types. VNTRs are like freight trains with lots of cars. The DNA boxcars are sequences of DNA about 15-35 base pairs long. At a location (a “locus”) on a chromosome, a particular one of these sequences is repeated many times. Just as different trains have different numbers of boxcars, different people usually have different numbers of repeat units, and this causes the lengths of the VNTRs to be quite variable in the population.
    But VNTR testing was not done in this case. The crime lab “felt that the sample was [too] degraded” for VNTR testing to work. Id. at 1123. (As an aside, it is not obvious why the DNA in the condom would have been very degraded. DNA is a rather stable molecule. Bacterial enzymes will degrade it by cutting it into small pieces. That is why DNA should not be stored in warm, moist conditions, but the condom was recovered from the snow within 24 hours.)

Osborne’s lawyer knew about DNA testing. She met with the crime lab analyst, reviewed some “research articles, and conferred with a Fairbanks public defender who was litigating the scientific basis of DNA testing.” Id. In a post-conviction affidavit, she stated that she did not press for VNTR testing because she did not believe her client was actually innocent and concluded that he “was in a strategically better position without RFLP [VNTR] testing.” Id. at 1124.
    This is not to say that Osborne’s strategic position was great. The laboratory had performed a less-discriminating DQ-alpha test. DQ-alpha is a gene in the major histocompatibility complex that produces the genetic markers that constitute individual tissue types. The DQ-alpha type in the sperm was the same as Osborne’s. That is incriminating, but not terribly so, for “one in every 6 or 7 black men” has the same type. Thus, defense counsel had decided it was better to deal with this limited DNA information at trial than to risk a match to VNTR types that could produce figures like one in a million.

The Alaska Proceedings

At a joint trial with Jackson, Osborne was convicted of kidnapping, assault, and sexual assault, and was sentenced to 26 years’ imprisonment. The Alaska Court of Appeals affirmed, and Osborne did not appeal further.

Years later, he brought an action for post-conviction relief in Alaska Superior Court. He contended that his lawyer’s decision not to pursue VNTR testing amounted to ineffective assistance of counsel and that he had “a due process right, under either the state or federal constitution” to have the DNA tested with more modern procedures. Id. After this court rejected his claims in 2002, Osborne appealed to the Alaska Court of Appeals. In 2004, he also applied to the parole board. He confessed to the attack and provided details. The board denied his application. The court of appeals held open the possibility of relief. It remanded the case to the superior court to decide if the original conviction rested primarily on eyewitness identifications, if “demonstrable doubt” as to that identification existed, and if DNA testing could “be conclusively exculpatory.” Id. at 1125. The superior court determined that these stringent conditions had not been met, the court of appeals affirmed, and the Alaska Supreme Court denied review.

The Federal Proceedings

While Osborne was unsuccessfully pursuing the post-conviction remedies in the Alaska courts, he was in federal court asserting a due process right to test the DNA at his own expense. Specifically, he filed an action against Alaska officials under 42 U.S.C. § 1983, a civil rights statute, alleging that by refusing to give him the hairs and the semen, Alaska officials were violating his federal rights under the due process, equal protection, confrontation, compulsory process, and cruel and unusual punishment clauses. The federal district court dismissed the action on procedural grounds. It reasoned that the only way to obtain federal court-ordered post-conviction access to an old DNA sample for the purpose of overturning a conviction was through a petition for a writ of habeas corpus.

Osborne appealed this ruling, and the Ninth Circuit reversed the district court. Osborne v. District Attorney’s Office, 423 F.3d 1050 (9th Cir. 2005) (Osborne I). It remanded the case, instructing the district court to decide whether Osborne had a federal right to obtain the DNA samples. After some additional procedural skirmishes, the district court ruled for Osborne. This time the state appealed to the Ninth Circuit. Osborne v. District Attorney's Office for Third Judicial Dist., 521 F.3d 1118 (9th Cir. 2008) (Osborne II). The panel affirmed the district court, and that is the ruling that the United States Supreme Court will review.

The Osborne II opinion

In Osborne II, the Ninth Circuit relied on Brady v. Maryland, 373 U.S. 83 (1963), to find a federal right to access DNA evidence based on “only a reasonable probability that with favorable DNA test results he could affirmatively prove that he is probably innocent.” 521 F.3d at 1131. In Brady, Maryland prosecuted Brady and a companion, Boblit, for murder. Brady claimed Boblit had done the actual killing. The prosecution had withheld a written statement by Boblit confessing that he had performed the act of killing by himself. The Court held that for the government to conduct a trial while withholding material, exculpatory evidence violates due process.
    The Ninth Circuit extended the Brady right to receive exculpatory evidence at or before trial to the post-conviction context and to evidence that might not, in the end, prove to be exculpatory. It did so even though there was no case seeking post-conviction relief before it. It is tempting to describe the case as a free-floating discovery claim predicated on a freestanding actual innocence theory. The state can be expected to challenge both aspects of the claim.
    Having determined that Brady applies in the § 1983 context, the Ninth Circuit addressed the standard of materiality: How great must the potential exculpatory value of DNA testing be to require the state to turn over the sample? There are two parts to this question. First, how clear is it that DNA testing would produce usable results? The opinion provides little information with which to answer this question. Since the state is in possession of the DNA evidence, however, it would be unreasonable to require the prisoner to prove that the sample is of a sufficient quality and quantity for successful DNA testing. Often, this will not be known until the laboratory does the testing.

Second, given the other evidence in the case, what would an exclusion of the defendant imply about his actual innocence? In this regard, the court rejected “the extraordinarily high standard of proof that applies to freestanding claims of actual innocence.” Id. at 1132. The Ninth Circuit, having previously allowed freestanding claims, requires that a prisoner “go beyond demonstrating doubt about his guilt, and must affirmatively prove that he is probably innocent.” Id. (quoting Carriger v. Stewart, 132 F.3d 463, 476 (9th Cir. 1997) (en banc)). If the due process theory is that the discovery right is parasitic on a later claim of actual innocence (which seems to be the only route left open to Osborne for judicial relief from imprisonment), one would think that the prisoner must show that a DNA test that excludes him as the source of the crime-scene DNA demonstrates, in the context of the case, that “he is probably innocent.” Invoking Brady, however, the court suggested that a weaker standard should apply—a “showing that the favorable evidence could reasonably be taken to put the whole case in such a different light as to undermine confidence in the verdict” (id. at 1133 (quoting Kyles v. Whitley, 514 U.S. 419, 435 (1995))) or that, as Osborne framed it, there is “a reasonable probability that, had the evidence been disclosed to the defense, the result of his trial would have been different.” Id. at 1134.

In the end, the court split the difference. It left “to another day” the possibility that it would adopt Osborne’s version of the Brady standard in the post-conviction context. In this case at least, the court thought it sufficient to conclude that there was “a reasonable probability that, if exculpatory DNA evidence were disclosed to Osborne, he could prevail in an action for post-conviction relief.” Id. We seem to be left with a reasonable probability of probable innocence.

In fulfilling this standard, or some variation on it, further questions arise. Are we to consider the possibility that, if the DNA were successfully typed and a prisoner excluded, the crime-scene DNA profile would match a record on a state or federal DNA database? Once another suspect is identified in this way, would an investigation of this individual show his guilt and the prisoner’s innocence?

A final quirk in the Ninth Circuit opinion is the court’s insistence that it “need not decide the open questions surrounding freestanding actual innocence claims” and that it merely needed to “assume for the sake of argument that such claims are cognizable in federal habeas proceedings” to conclude that Osborne had a post-Brady right to secure evidence that might help him in a federal habeas proceeding. Id. at 1131. This approach seems incoherent. New DNA testing cannot show that any error occurred at Osborne’s trial. Its only relevance lies in proving his actual innocence, either in a pardon application, a state post-conviction case, or a federal habeas proceeding. For a court to derive the discovery right from the right to prevail in a federal habeas case, as the Ninth Circuit tried to do, there must be a freestanding due process right to be released because of actual innocence. The discovery right is parasitic. It cannot survive without its host.

What Might New Testing Prove?

The Ninth Circuit wrote that the DNA tests that Osborne is seeking can distinguish “one in a billion people, rather than one in 6 or 7” because there are 13 STR loci to test rather than one DQ-alpha locus. Id. at 1126. The actual situation is more complex. STRs are “short tandem repeats.” They are similar to VNTRs, but they are more like small toy trains than the full-scale VNTRs. The STR boxcars are only four base pairs long, and the STR trains consist of between three and 50 such boxcars. “Degradation” refers to long DNA molecules being broken up into shorter ones. Because the STR segments of DNA are much shorter than the VNTRs, even if the DNA in the semen is too degraded for VNTR analysis, it might well be long enough for STR analysis. But it is hard to say how many loci will be typeable if the DNA is degraded. The court’s opinion reads as if the test will work at either 13 loci or none. In fact, the number of typeable loci could be considerably more than 13, or it might be less than 13 but still more than zero.

If the 13 loci that are routinely analyzed are all typeable, then the test will be even more powerful than the court suggested. Because some STR alleles are more common than others, different alleles give rise to different random-match probabilities. Ignoring close relatives and population structure, the random-match probability ranges from about 1 in 160 billion for the most common alleles to an unbelievably small 1 in 10^50 (a one followed by 50 zeroes). This means that if the sample is amenable to STR analysis at a goodly number of loci, testing is almost certain to exclude Osborne — if the DNA in the condom is not his.
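The arithmetic behind such figures is just the product rule, as in the sketch below. The per-locus genotype frequencies are hypothetical stand-ins rather than real CODIS values; the point is only that multiplying 13 small numbers produces astronomically small results.

```python
# A minimal sketch of the product rule. The frequencies are hypothetical.

import math

common_genotypes = [0.14] * 13  # assumed frequencies of common genotypes
rare_genotypes = [1e-4] * 13    # assumed frequencies of rare genotypes

def random_match_probability(freqs):
    # independence across loci (no close relatives, no population structure)
    return math.prod(freqs)

for label, freqs in (("common", common_genotypes), ("rare", rare_genotypes)):
    print(f"{label}: about 1 in {1 / random_match_probability(freqs):.1e}")
```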

For reasons that I won’t go into here, even if there is too little DNA in the condom and the pubic hairs to give any STR results, mitochondrial DNA testing might exclude an innocent Osborne. Mitochondrial-DNA sequencing is nowhere near as powerful as STR testing typically is, but it is better than (and independent of) the DQ-alpha testing that was done in 1993. Therefore, even if the STR testing fails, there is a good chance that mitochondrial testing will exclude Osborne — again, if he is innocent.

--DHK

Thanks to Andrew Hessick for looking at a draft of this posting and to Ira Ellman, Carissa Hessick, and Carrie Sperling for listening to me blabber about some of the issues noted here.

November 7, 2008 | Permalink | Comments (2) | TrackBack (0)

Friday, September 5, 2008

Genetics Datasets Closed Due to Forensic DNA Discovery

Until last Friday, the National Institutes of Health (NIH) and other groups had posted large amounts of aggregate human DNA data for easy access by researchers around the world. On Aug. 25, however, NIH removed the aggregate files of individual Genome Wide Association Studies (GWAS). The files, which include the Database of Genotypes and Phenotypes (dbGaP), run by the National Center for Biotechnology Information, and the Cancer Genetic Markers of Susceptibility database, run by the National Cancer Institute, remain available for use by researchers who apply for access and who agree to protect confidentiality using the same approach they do for individual-level study data. The Wellcome Trust Case Control Consortium and the Broad Institute of MIT and Harvard also withdrew aggregate data.

The reason? The data keepers fear that police or other curious organizations or individuals might deduce whose DNA is reflected in the aggregated data, and hence, who participated in a research study. These data consist of SNPs -- Single Nucleotide Polymorphisms. These are differences in the base-pair sequences from different people at particular points in their genomes. Many SNPs are neutral -- they do not have any impact on gene expression. Nonetheless, they can be helpful in determining the locations of nearby disease-related mutations.

The event that prompted the data keepers to act was the discovery at the Translational Genomics Research Institute (TGen) of a new way to check whether an individual's DNA is a part of a complex mixture of DNA (possibly from hundreds of people). According to the  TGen report, Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays, a statistic applied to intensity data from SNP microarrays (chips that detect tens of thousands of SNPs simultaneously) reveals whether the signals from an individual's many SNPs are consistent with the possibility that the individual is not in the mixture. (Sorry for the wordiness, but the article uses hypothesis testing, and "not in the mixture" is the null hypothesis.)
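For the statistically inclined, here is a rough sketch, in Python, of the kind of test I understand the paper to describe. The details only loosely follow Homer et al., and the simulated data are entirely invented; the idea is that, summed over tens of thousands of SNPs, even a small contribution to a mixture shifts the statistic detectably.

```python
# A rough sketch of a Homer-style membership test; it follows the paper
# only loosely, with invented data. y: one person's allele frequencies
# (0, 0.5, or 1); pop: reference-population frequencies; mix: frequencies
# estimated from the mixture's microarray intensities.

import random
import statistics

def membership_statistic(y, pop, mix):
    """Mean and standard error of D_j = |y_j - pop_j| - |y_j - mix_j|.
    Under the null hypothesis (person not in the mixture) the mean is
    near zero; markedly positive values mean the mixture sits closer
    to the person than the reference population does."""
    d = [abs(a - b) - abs(a - c) for a, b, c in zip(y, pop, mix)]
    return statistics.mean(d), statistics.stdev(d) / len(d) ** 0.5

random.seed(1)
n_snps = 20_000
pop = [random.uniform(0.05, 0.95) for _ in range(n_snps)]
y = [random.choice([0.0, 0.5, 1.0]) for _ in range(n_snps)]
# a 100-person mixture that includes this person, plus measurement noise
mix = [(99 * p + g) / 100 + random.gauss(0, 0.01) for p, g in zip(pop, y)]
mean_d, se = membership_statistic(y, pop, mix)
print(f"z statistic: about {mean_d / se:.0f}")  # large z rejects the null
```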

How could this compromise the research databases? As best as I understand it, the scenario is that someone first would acquire a sample from somewhere. Your neighbor might check your garbage, isolate some of your DNA, get a SNP-chip readout, and check it against the public database to see if you were a research subject who donated DNA. Or, the police might have a crime-scene sample. Then they would use a SNP-chip to get a profile to compare to the record on the public database to see if the profile probably is part of the mixture data there. Finally, if they got a match, the police would approach the researchers to get the matching individual's name.

Kathy Hudson, a public policy analyst at Johns Hopkins University, stated in an email that “While a fairly remote concern, and there are some protections even against subpoena, NIH did the right thing in acting to protect research participants.” However, scientists such as David Balding in the U.K. are complaining that the restrictions on the databases are an overreaction. Indeed, an author of the TGen study is quoted as stating that the new policy is "a bit premature." See http://www.nature.com/news/2008/080904/full/news.2008.1083.html.

It seems doubtful that anonymity of the research databases has been breached, or will be in the immediate future, by this convoluted procedure. Of course, the longer-term implications remain to be seen, and the technique has obvious applications in forensic science. If the technique works as advertised, police will be able to take a given suspect and determine whether his DNA is part of a mixture from a large number of individuals that was recovered at a crime scene. Analyzing complex mixtures for identity is difficult to do with standard (STR-based) technology.

--DHK

References

Homer N, Szelinger S, Redman M, Duggan D, Tembe W, et al., Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays, PLoS Genetics (2008). 4(8):e1000167. doi:10.1371/journal.pgen.1000167

DNA databases shut after identities compromised, Nature 455:13. Sept. 3, 2008

Natasha Gilbert, Researchers criticize genetic data restrictions, Nature Sept. 4, 2008, <http://www.nature.com/news/2008/080904/full/news.2008.1083.html>

September 5, 2008 | Permalink | Comments (1) | TrackBack (0)

Monday, August 11, 2008

Hot Tubbing: Old Wine in New Bottles for Expert Witnesses

The New York Times has discovered that expert witnesses retained by parties often are partisan. This certainly is fit to print, but is it news? Not to anyone who has been reading law reviews and opinions written during the past century or two. (For a good recent analysis, see Bernstein (2008).)

Still, the Times revealed that the Australians have discovered a way to improve expert testimony. They call it "hot tubbing":

In that procedure, also called concurrent evidence, experts are still chosen by the parties, but they testify together at trial — discussing the case, asking each other questions, responding to inquiries from the judge and the lawyers, finding common ground and sharpening the open issues.

Interestingly, "Australian judges have embraced hot tubbing." According to UCLA law professor Jennifer Mnookin, "[t]he future ... may belong to Australia. 'Hot tubbing,' she said, 'is much more interesting than neutral experts.'”

If so, the movement will resemble the breakthrough of the Beatles "from Hamburg." (In 1961, when the band returned to Liverpool from Germany and made an appearance at The Cavern Club, some in the audience thought they were watching a German band.) Fifteen years ago, when I gave a traveling series of seminars to federal judges under the auspices of the Federal Judicial Center, I suggested that the judges experiment with this format. At least one judge was intrigued, saying that it sounded like the McLaughlin Show, but might be worth a try.

The idea certainly did not originate with me. I got it from a 1989 report of a blue-ribbon panel on statistics in the courtroom. The panel observed that

F.R.E. [Federal Rule of Evidence] 611 allows the trial judge to exercise power over the presentation of evidence to make it more effective and efficient. Many judges have used that authority in innovative ways to modify the traditional sequencing of evidence. For statistical matters, there are a variety of approaches that might be attempted. When the reports of witnesses go together, the judge might allow their presentations to be combined and the witnesses to be questioned as a panel discussion ... .

Panel on Statistical Assessments as Evidence in the Courts (1989, 174). But "hot tubbing" is a lot catchier than "panel discussion," and the right packaging sells a product.

--DHK

References

David E. Bernstein, Expert Witnesses, Adversarial Bias, and the (Partial) Failure of the Daubert Revolution, 93 Iowa L. Rev. 451–489 (2008)

Adam Liptak, American Exception: In U.S., Partisan Expert Witnesses Frustrate Many, N.Y. Times, Aug. 11, 2008, available at http://www.nytimes.com/2008/08/12/us/12experts.html?_r=1&8au&emc=au&oref=slogin

Panel on Statistical Assessments as Evidence in the Courts, The Evolving Role of Statistical Assessments as Evidence in the Courts (Stephen E. Fienberg ed. 1989)

August 11, 2008 | Permalink | Comments (1) | TrackBack (0)

The Birthday Problem in Las Vegas

The other week, an editorial in the Las Vegas Review-Journal misconstrued the now infamous 2001 findings of partial matches in the Arizona DNA database. The study was discussed on our blog on July 20, and I won't repeat the explanation of the birthday problem. You might think that people in Las Vegas would know more about winning combinations, but the editor presented the Arizona findings as proof that hundreds of thousands of Americans would be falsely incriminated by DNA profiling in criminal cases. His conclusion: "the odds of a 'coincidental match' with an innocent party -- the realistic odds, based on searches such as [the one in the Arizona database], not something out of [an] astronomy book -- should be carefully explained."

The last remark got me thinking. Could the results of the Arizona database study give an estimate of a random-match probability?

Mathematician Charles Brenner did some simplified calculations that can be adapted to this end. A standard DNA profile consists of 13 pairs of numbers. The numbers have to do with the lengths of various fragments of DNA at particular points (loci) on certain chromosomes. If a suspect and a crime-scene sample have the same fragment lengths at all 13 loci, then the match is strong evidence that the suspect (or an identical twin) is the source of the DNA. A partial match excludes the suspect as the source, and there are many ways for two 13-locus profiles to match in part but not in full. For example, Brenner pointed out that there are 715 ways to select 9 loci from the full 13. In the Arizona study, the analyst looked at all distinct pairings of the 65,493 people in the 2001 Arizona database.  (This is where the birthday problem, with its combinatorial explosion, comes in.) Brenner reported that 65,493 x 65,492/2, or approximately 2,140,000,000 pairs, were compared. Since each pair of genotypes was checked for all 715 ways to get a 9-locus partial match, some 715 x 2.14 x 10^9 = 1.5 x 10^12 nine-locus comparisons were made. Only 122 yielded matches. The empirically determined proportion is therefore about 8 x 10^-11, or roughly 1 in 12.5 billion.

Let's compare this number with a theoretical estimate of the random-match probability -- one that assumes statistical independence of DNA alleles and loci. Brenner presents 1/13.66 as the probability of a random match at a single locus. Assuming independence, the probability of an exact match at 9 out of 9 such loci would be (1/13.66)^9, or about 6 x 10^-11.* This agrees rather well with the empirical value of 8 x 10^-11 in Arizona.
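The whole calculation fits in a few lines of Python, for anyone who wants to check it. The 1/13.66 single-locus figure is Brenner's; everything else follows from the counts reported above.

```python
# A minimal sketch reproducing the Arizona arithmetic described above.

import math

n_profiles = 65_493
pairs = n_profiles * (n_profiles - 1) // 2  # about 2.14 x 10^9 distinct pairs
ways = math.comb(13, 9)                     # 715 nine-locus subsets
comparisons = ways * pairs                  # about 1.5 x 10^12

empirical = 122 / comparisons               # observed 9-locus match rate
theoretical = (1 / 13.66) ** 9              # independence assumption

print(f"comparisons:      {comparisons:.2e}")
print(f"empirical rate:   {empirical:.1e}")   # about 8 x 10^-11
print(f"theoretical rate: {theoretical:.1e}") # about 6 x 10^-11
```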

If this numerical exercise is any indication, the approach favored by the Las Vegas editor will not change things. The "realistic" probabilities that can be quoted in court on the basis of the number of 9-locus matches still will be astronomically small.

--DHK

Note

* The probability of a partial match, that is, of a match at 9 loci and a mismatch at the remaining 4 loci would be 715 x (1/13.66)^9 x (1 - 1/13.66)^4 = 1/(3.1 x 10^7). Of course, nobody would introduce this number in a real case because such a partial match excludes the suspect as the source of the crime-scene DNA. Partial matches like the ones in the Arizona database are not used to convict anyone. Rather, they are of interest because they raise a question as to whether there is an excess of partial matches compared to the numbers that would be expected if the usual random-match probabilities are accurate. If there is a surprising excess -- something that is not yet clear -- then perhaps the standard calculation of random-match probabilities needs to be altered.

References:

Charles Brenner, Arizona DNA Database Matches, Jan. 8, 2007, http://dna-view.com/ArizonaMatch.htm.

Editorial, DNA Evidence: What Are the Real Chances of Mistakes?, Las Vegas Review-Journal, Jul. 29, 2008, available at http://www.lvrj.com/opinion/26025944.html

D.H. Kaye, Letter, The Math Behind DNA Matching, Las Vegas Review-Journal, Aug. 01, 2008, available at http://www.lvrj.com/opinion/26171924.html

August 11, 2008 | Permalink | Comments (0) | TrackBack (0)

Friday, August 8, 2008

Fingerprints' Chemical "Footprints"?

Today's New York Times reports a story that appears in this week's Science.  According to the Times, "With a new analytical technique, a fingerprint can now reveal much more than the identity of a person. It can now also identify what the person has been touching: drugs, explosives or poisons, for example."  See full story HERE. In short, scientists (Demian R. Ifa, et al.) have used mass spectrometry to identify fingerprints after subjects' fingers were applied with various solutions, including drugs and explosives residue.  The researchers suggest that this technology might have several uses, including identifying what substances particular people might have handled recently and being able to distinguish overlapping fingerprints, by tracing the chemical "footprint" of the individual fingerprints.

Although there may yet be much value in this research, this single report hardly demonstrates its value for forensic purposes.  The researchers essentially identified the true-positive rate for this technology, and, so far as either the Times or the original article in Science report, the researchers have provided no data on false positives, true negatives, or false negatives.  Moreover, this study was a highly controlled laboratory study, so we don't know whether the technology might confront excessive "noise" when applied to the general population.  Indeed, given the reported amount of drug residue on United States currency, mass spectrometry that is too sensitive is likely to produce large numbers of false positives.

Hence, while the research results reported here are interesting and noteworthy, without considerable more work in this area, they appear a long way from daily forensic use.
--DLF

August 8, 2008 | Permalink | Comments (1) | TrackBack (0)

Sunday, July 20, 2008

DNA Database Woes and the Birthday Problem

The Los Angeles Times has reported that "A discovery leads to questions about whether the odds of people sharing genetic profiles are sometimes higher than portrayed. Calling the finding meaningless, the FBI has sought to block such inquiry." Actually, the discovery is not new, but the story is still unfolding.

According to the article,

State crime lab analyst Kathryn Troyer was running tests on Arizona's DNA database when she stumbled across two felons with remarkably similar genetic profiles.

The men matched at nine of the 13 locations on chromosomes, or loci, commonly used to distinguish people.

The FBI estimated the odds of unrelated people sharing those genetic markers to be as remote as 1 in 113 billion. But the mug shots of the two felons suggested that they were not related: One was black, the other white.

In the years after her 2001 discovery, Troyer found dozens of similar matches -- each seeming to defy impossible odds.

The key word here is "seeming." This is not the first time partial or even complete matches have appeared in a search of all pairs of DNA profiles in a law-enforcement database. Eight years ago, the National Commission on the Future of DNA Evidence (2000, 25 n.13) reported that

Although brothers and twins are rare in databases, they can be common among those pairs that are found by profile matching. John Buckleton (2000 personal communication) found that, among ten 6-locus matches in a New Zealand database of 10,907 records, all but 2 were brothers (including twins). This shows that the possibility of sibs cannot be ignored in database searches. We should note, however, that these could usually be identified as brothers, either by further investigation or by testing additional loci.

So close relatives are one possible explanation for a seeming surplus of partial matches.

A second consideration is statistical. The random-match probability of 1 in 113 billion quoted in the Times applies to a single comparison between a particular profile and a randomly selected, unrelated individual. It is not the probability that a search through all pairs of profiles in a database composed entirely of records from unrelated people will show a match. Because there are so many pairs to compare, that probability is much greater.

Suppose that there are 500,000 profiles in the database. How many possible pairs can be formed? The answer: 500,000 x 500,000 = 2.5 x 10^11 = 250 billion. How many of these are from different individuals? Answer: Subtract the 500,000 pairs [(1,1), (2,2), ... , (500,000, 500,000)]. That hardly changes anything, since 500,000 is nothing compared to 250,000,000,000. How many are from distinct pairs of people? Answer: Half, since the pair (1,2) is the same as (2,1), etc.  Conclusion: There are almost 125 billion pairs to search.

How many comparisons would be expected to match if, for every comparison, the chance of a match is 1 in 113 billion? Answer: About 1. Even without relatives, the observation of a partial match in such a database would not be so surprising.
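In code, the whole calculation takes a few lines (a sketch; the 500,000 figure is the hypothetical database size used above):

    n = 500_000                 # hypothetical database size
    p = 1 / 113e9               # random-match probability quoted in the Times
    pairs = n * (n - 1) // 2    # distinct pairs of different people
    expected = pairs * p        # expected number of matching pairs
    print(f"{pairs:.3e} pairs, {expected:.1f} expected matches")   # ~1.25e11 pairs, ~1.1 matches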

Of course these numbers do not pertain to the Arizona database. I do not know how large it was, and the chance of a match in each comparison was not constant.  But the example shows why the random-match probability grossly understates the chance of a partial match in an all-pairs trawl of a large database.

In probability theory, this situation is known as a birthday problem. The chance that one randomly selected person has the same birthday as mine is about 1/365. The chance that at least two people in a room full of people have the same birthday (whatever it might be) is much, much larger.
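The standard illustration is easy to verify (a minimal sketch):

    def p_shared_birthday(k: int) -> float:
        """Probability that at least two of k people share a birthday (365-day year)."""
        p_all_distinct = 1.0
        for i in range(k):
            p_all_distinct *= (365 - i) / 365
        return 1 - p_all_distinct

    print(p_shared_birthday(23))   # ~0.507 -- better than a coin flip with only 23 people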

We can expect further studies of the databases for consistency with the estimated random-match probabilities. The article reports on several that have taken place so far. My prediction is that when the dust settles, the results will be inconclusive. Judges will struggle a bit with the birthday problem, and it will be difficult or impossible to determine all the close relatives in the database. Scientists who accept the existing random-match probabilities as reasonable estimates won't change their minds. Well, maybe they'll give up a power of ten or so. Individuals who distrust the estimates will continue to distrust them.

--DHK

References

Felch, Jason, and Maura Dolan. 2008. "How Reliable Is DNA in Identifying Suspects?" Los Angeles Times: July 20, 2008. <http://www.latimes.com/news/local/la-me-dna20-2008jul20,0,5133446.story>

National Commission on the Future of DNA Evidence. 2000. The Future of Forensic DNA Testing: Predictions of the Research and Development Working Group. Washington, DC: National Institute of Justice.

Wikipedia. "Birthday Problem." http://en.wikipedia.org/wiki/Birthday_paradox

July 20, 2008 | Permalink | Comments (13) | TrackBack (0)

Wednesday, June 25, 2008

The Psychology of Fuel Efficiency

A recent discussion started by John Lynch on the Society for Judgment and Decision Making listserv focuses on an interesting new article by Larrick and Soll in Science, entitled "The MPG Illusion."  The paper reemphasizes the point that statistical metrics matter.  It argues that the traditional miles-per-gallon metric leads people to make inaccurate judgments about the benefits of more efficient cars.

For example, Richard Larrick in his podcast makes an argument along the following lines.  Say you have the ability to trade in a 10 MPG SUV for a 20 MPG crossover, or a 25 MPG car for a 50 MPG hybrid.  Which switch is better for the environment?  As it turns out, the former, even though one might be tempted to say that the former only improves efficiency by 10 MPG while the latter improves it by 25.  Assume a 100-mile trip.  The SUV will consume 10 gallons versus 5 gallons for the crossover, for a net savings of 5 gallons.  The car will consume 4 gallons versus 2 gallons for the hybrid, for a net savings of only 2 gallons.

Because we drive set distances (e.g., 100 miles) rather than burning set amounts of fuel, MPG is a misleading measure of efficiency.  Small gains in efficiency at the low end of the MPG scale save far more fuel than equal gains at the high end.  Larrick and Soll argue that the inverse ratio, gallons per 10,000 miles, would be a more useful measure, as the sketch below illustrates.
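A few lines of arithmetic confirm the comparison (a sketch using the vehicles from the example above):

    def gallons_per_10k_miles(mpg: float) -> float:
        return 10_000 / mpg

    for old_mpg, new_mpg, label in [(10, 20, "SUV -> crossover"), (25, 50, "car -> hybrid")]:
        saved = gallons_per_10k_miles(old_mpg) - gallons_per_10k_miles(new_mpg)
        print(f"{label}: saves {saved:.0f} gallons per 10,000 miles")
    # SUV -> crossover saves 500 gallons; car -> hybrid saves only 200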

More information is available on Larrick's website, which has links to the Science article, podcast, and supplemental materials.

--EKC

June 25, 2008 | Permalink | Comments (1) | TrackBack (0)

Tuesday, June 24, 2008

The persuasive power of neuroscience

The March issue of the Journal of Cognitive Neuroscience contains an article stimulated by the frequent appearance of news stories announcing the latest brain signature -- for love, aggression, greed, lying, etc. A group of researchers at Yale decided to investigate whether people can distinguish solid claims about these associations from poorly substantiated ones. The researchers wrote explanations for well-documented psychological phenomena. Some versions presented scientifically accepted rationales and sound reasoning. Other explanations were circular. People with no training in psychology or neuroscience distinguished the good from the bad -- until an utterly irrelevant mention of the physical brain was added. The bad explanations became far more believable when they included a mention of neuroscience, while the good accounts got only a slight boost. People with advanced training in cognitive science were immune to this "seductive allure of neuroscience."

Does this finding have some bearing on the law's demand for validation of scientific evidence? Does it support a distinction between "soft" psychological testimony and testimony about brain imaging results?

--DHK

References

Weisberg, D. S.; Keil, F. C.; Goodstein, J.; Rawson, E.; & Gray, J. (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.

The description of the study is adapted from the May/June 2008 issue of the Yale Alumni Magazine, p. 38.

June 24, 2008 | Permalink | Comments (0) | TrackBack (0)

Sunday, June 22, 2008

Rounding Up the Usual Suspects III: People v. Nelson

On April 5, 2008, I mentioned People v. Nelson, 48 Cal.Rptr.3d 399 (Ct. App. 3 Dist. 2006), rev. granted, 147 P.3d 1011 (Cal. 2006), as a leading case on the admissibility of the various probabilities associated with cold hits in DNA databases. Last week, the California Supreme Court affirmed.

The case arose from the rape and murder of a nineteen-year-old college student in 1976. Dennis Nelson was a suspect, but the evidence was inconclusive, and the case grew cold. Later, Nelson was convicted of a different rape. His DNA profile was entered into the state convicted-offender databank. In 2001, investigators discovered that this profile matched those derived from stains from the 1976 rape. At that point, there were 184,000 profiles in the database. According to the state, the match would occur “at random among unrelated individuals in about one in 950 sextillion African-Americans, one in 130 septillion Caucasians, and one in 930 sextillion Hispanics.” As the court adds, “[t]here are 21 zeros in a sextillion and 24 zeros in a septillion.”

Nelson moved to dismiss the resulting charges on the ground that the delay between the 1976 crime and the charges filed in 2002 deprived him of his right to a speedy trial. The superior court denied the motion. At trial, Nelson conceded that he had intercourse with the victim but claimed that it was consensual -- somebody else must have murdered her and left her body in the mud. That did not work either. The jury convicted Nelson of first degree murder, and the Court of Appeal affirmed.

The California Supreme Court reviewed two claims. First, with respect to the speedy-trial issue, it held that the 26-year delay between the offense and the prosecution caused only slight prejudice and was justified.

Second, the court considered whether the vanishingly small random-match probabilities should have been admitted. The court correctly held that inasmuch as the procedure underlying this calculation was generally accepted and uncontested, the only real issue was the relevance of a random-match probability in a database-trawl case.

At this point, however, the opinion unravels. It contains but a single, short paragraph to show why the statistic is relevant:

        In a non-cold-hit case, we said that “[i]t is relevant for the jury to know that most persons of at least major portions of the general population could not have left the evidence samples.” (People v. Wilson, supra, 38 Cal.4th at p. 1245.) We agree with other courts that have considered the question (the Court of Appeal in this case; People v. Johnson, supra, 139 Cal.App.4th 1135; and Jenkins, supra, 887 A.2d 1013) that this remains true even when the suspect is first located through a database search. The database match probability ascertains the probability of a match from a given database. “But the database is not on trial. Only the defendant is.” (Modern Scientific Evidence, supra, § 32:11, pp. 118-119.) Thus, the question of how probable it is that the defendant, not the database, is the source of the crime scene DNA remains relevant. (Id. at p. 119.) The rarity statistic addresses this question.

As the co-author of the text of the treatise being quoted, I fear that these words are inconsistent with the portion of the court's opinion (in note 3) suggesting that the database-match probability also is relevant. If the issue is simply “how probable it is that the defendant, not the database, is the source of the crime scene DNA,” then the database-match probability is irrelevant. Unlike the “rarity statistic,” it does not figure into the probability that the named defendant is the source. The formulas are given and explained in a forthcoming article, Rounding Up the Usual Suspects: A Logical and Legal Analysis of DNA Trawling Cases. The Nelson court's theory that a variety of statistics are admissible in a database-trawl case does not withstand analysis. Or, if it does, it will take more analysis than this court has provided to explain why. The opportunity for such clarification may well arise, as there will be some cases in which defense counsel will be interested in introducing the database-match probability, which can be orders of magnitude larger than the random-match probability.

In this particular case, however, the demand for an adjustment to the random-match probability is much ado about nothing. So what if the probability is 10^-19 rather than 10^-24? Having rejected the defense argument about general acceptance, the court could simply have observed that the choice of a statistic could not have affected the outcome of the case. The court realized this, but it endorsed the 10^-24 figure anyway.
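The arithmetic behind those exponents is straightforward (a sketch using the figures reported in the opinion; the database-size multiplication is the style of adjustment the NAS committees proposed):

    rmp = 1 / 930e21           # roughly 1 in 930 sextillion, i.e., ~1e-24
    database_size = 184_000    # profiles in the California database at the time
    dmp = database_size * rmp  # database-match probability
    print(f"RMP ~ {rmp:.0e}; DMP ~ {dmp:.0e}")   # ~1e-24 versus ~2e-19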

-- DHK

References

People v. Nelson, No. S147051 (Cal. June 16, 2008), slip opinion available at http://www.courtinfo.ca.gov/opinions/documents/S147051.pdf

Dolan, Maura, and Jason Felch. 2008. "California Supreme Court Ruling Allows 'Rarity' Statistic in DNA Cases." Los Angeles Times: June 17, available at http://www.latimes.com/news/science/la-me-dna17-2008jun17,0,3313471.story

Kaye, David H. 2009. "Rounding Up the Usual Suspects: A Logical and Legal Analysis of DNA Trawling Cases". North Carolina Law Review: in press. Prepublication draft available at SSRN: http://ssrn.com/abstract=1134205

June 22, 2008 | Permalink | Comments (0) | TrackBack (0)

Saturday, June 7, 2008

The Transposition Fallacy in the Los Angeles Times

In an earlier posting, I noted a story in the Los Angeles Times about the perceived need to adjust the probability for a random match when an individual emerges as a suspect because of a trawl through a database of DNA profiles. The reporters suggested that there was a grave injustice because "the prosecutor told the jury that the chance of such a coincidence was 1 in 1.1 million," but "jurors were not told the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett's case, it was 1 in 3." They added that "the case is emblematic of a national problem."

The Times received some flak for this reporting. Not only do many leading statisticians dispute the claim that an adjustment for the size of the database searched produces the most significant statistic, but, it was said, the description of "1 in 3" as "the probability that the database had hit upon an innocent person" was wrong. The critical readers complained that, at best, 1/3 was the chance of a match to someone in the database if neither Puckett nor anyone else in the database were the source of the DNA in the bedroom of the murdered woman. It is not the chance that Puckett is not the source given that his DNA matches.

To equate the two probabilities is to slip into the transposition fallacy that P(A given B) = P(B given A). Conditional probabilities do not work this way. For instance, the chance that a card randomly drawn from a deck of ordinary playing cards is a picture card given that it is red is not the chance that it is red given that it is a picture card. The former probability is P(picture if red) = 6/26. The latter is P(red if picture) = 6/12.
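A brute-force enumeration of a 52-card deck confirms the card example (a minimal sketch):

    from itertools import product

    ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
    suits = ["hearts", "diamonds", "clubs", "spades"]
    deck = set(product(ranks, suits))

    red = {c for c in deck if c[1] in ("hearts", "diamonds")}
    picture = {c for c in deck if c[0] in ("J", "Q", "K")}

    print(len(red & picture) / len(red))       # P(picture | red) = 6/26 ~ 0.23
    print(len(red & picture) / len(picture))   # P(red | picture) = 6/12 = 0.50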

The reporters responded with the following defense:

In our story, we did not write that there was a 1 in 3 chance that Puckett was innocent, which would be a clear example of the prosecutor's fallacy. Rather, we wrote: "Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett's case, it was 1 in 3." The difference is subtle, but real.

Interestingly, when the question whether there is any difference was put to a listserv of evidence professors, two professors described the statement as ambiguous, while four saw it as a clear instance of transposition.

My view is that the following two statements are true:

1. IF THE DATABASE WERE INNOCENT (meaning that it does not contain the source of the crime-scene DNA and everyone in it is unrelated), then (prior to the trawl) the probability that SOMEONE (regardless of his or her name) would match is roughly 1/3.

2. IF THE DATABASE WERE INNOCENT, then (prior to the trawl) the probability that a man named Puckett would match is 1/N = 1/1,100,000.

But neither (1) nor (2) is equivalent to

3. The probability that the database search hit upon an innocent person named Puckett was 1/3.
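Statement (1) is easy to check once a database size is assumed. The sketch below uses 338,000 profiles, a figure press accounts gave for the database searched in Puckett; treat it as an assumption for illustration:

    p = 1 / 1_100_000   # random-match probability quoted in Puckett
    d = 338_000         # assumed size of the database searched

    expected_matches = d * p               # ~0.31 -- apparently the source of the "1 in 3"
    p_at_least_one = 1 - (1 - p) ** d      # ~0.26 -- chance an "innocent" database yields a match
    print(f"{expected_matches:.2f} expected matches; P(at least one) = {p_at_least_one:.2f}")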

--DHK

June 7, 2008 | Permalink | Comments (0) | TrackBack (0)

Thursday, June 5, 2008

fMRI, Lie Detection, and Statistics

I'm blogging from the AALS Mid-Year Conference on Evidence in Cleveland, where I just moderated a discussion this morning on fMRI and Lie Detection featuring Steve Laken (Cephos Corp.) and Mike Pardo (Alabama).  Although the studies on fMRI lie detection have their limitations, the results so far are quite impressive, with accuracy rates in the 90% range.  One wonders how soon they will make their way into court, where admissibility questions loom large.  Even if the technology is in fact sufficiently reliable for Daubert (and what I saw this morning suggests that this is true), the inherent conservatism of the legal system, coupled with the bias against analogs to polygraphs, will make admissibility a tough hurdle for the technology.  (For more on the bias against mind-reading devices, see this Note written by my student Leo Kittay.)

One striking aspect of the various discussions on fMRI during and after the session was the focus that people had on mechanism.  Many people are concerned that researchers have not yet pinpointed specific areas of the brain associated with lying, or have not determined specific pathways for deception.  Often, they are similarly concerned that other brain activities may "light up" the same regions.  I'm skeptical, however, that these concerns really matter.  While it may be desirable and interesting to know the specific mechanisms associated with deception, we really don't need to make such discoveries to have a practically useful lie detection machine.  All that matters is that some model exists (here, presumably using brain scans) that can with reasonable accuracy separate liars from non-liars.  How the model does that is in many ways beside the point.  As Laken pointed out during the discussion, medical researchers often have little or no idea about the specific mechanism for a drug's success, yet such a limitation never prevents us from using its therapeutic benefits as proven through statistical/epidemiological studies.

--EKC

June 5, 2008 | Permalink | Comments (2) | TrackBack (0)

Friday, May 30, 2008

The Transposition Fallacy in Brown v. Farwell

Earlier this month, the Ninth Circuit held in Brown v. Farwell, No. 07-15592 (9th Cir. May 5, 2008), that a prisoner was denied due process of law because of a mistake involving DNA evidence. Troy Brown had been tried and convicted in Carlin, Nevada, for sexual assault. A federal district court granted Brown's petition for habeas corpus relief because the evidence against him was insufficient to prove guilt beyond a reasonable doubt. The court of appeals affirmed. As the majority opinion by Judge Wardlaw summarizes the case, the state's "DNA expert Renee Romero of the Washoe County Sheriff's Office Crime Lab ... provided critical testimony that was later proved to be inaccurate and misleading. ... [A]bsent this faulty DNA testimony, there was not sufficient evidence to sustain Troy's conviction."

     The court identified two faults in Romero's testimony. One involves the chance that one of Brown's four brothers would have the same DNA profile. The other is the transposition fallacy that has been recognized in court opinions at least since the California Supreme Court's famous opinion in People v. Collins, 438 P.2d 33 (Cal. 1968). Forty years after Collins, the Brown court wrote:

Here, Romero initially testified that Troy's DNA matched the DNA found in Jane's underwear, and that 1 in 3,000,000 people randomly selected from the population would also match the DNA found in Jane's underwear (random match probability). After the prosecutor pressed her to put this another way, Romero testified that there was a 99.99967 percent chance that the DNA found in Jane's underwear was from Troy's blood (source probability). This testimony was misleading, as it improperly conflated random match probability with source probability. In fact, the former testimony (1 in 3,000,000) is the probability of a match between an innocent person selected randomly from the population; this is not the same as the probability that Troy's DNA was the same as the DNA found in Jane's underwear, which would prove his guilt. Statistically, the probability of guilt given a DNA match is based on a complicated formula known as Bayes's Theorem, ... and the 1 in 3,000,000 probability described by Romero is but one of the factors in this formula. Significantly, another factor is the strength of the non-DNA evidence. Here, Romero improperly conflated random match and source probability, an error that is especially profound given the weakness of the remaining evidence against Troy. In sum, Romero's testimony that Troy was 99.99967 percent likely to be guilty was based on her scientifically flawed DNA analysis, which means that Troy was most probably convicted based on the jury's consideration of false, but highly persuasive, evidence.

     This analysis is less than convincing because it misportrays Bayes' theorem -- which is not particularly complicated here -- and it fails to consider what the theorem means. The formula can be written compactly with a few symbols. Let ST stand for the proposition that Troy Brown is the source and MT stand for the fact that his DNA matches that of semen from the victim's clothing. Let S0 be the hypothesis that some individual unrelated to Troy is the source. The theorem simply states that

Odds of ST given MT =
     (Odds of ST without considering MT) x
     (Probability of MT given ST / Probability of MT given S0)

The formula could be extended to deal with the hypothesis that one of Troy's brothers is the source, but that is not a feature of the transposition fallacy that worried the court. Transposition consists of equating (a) the probability that the DNA would match if it had come from an unrelated individual with (b) the probability that the DNA came from an unrelated individual given that it matched. Conceptually, these are quite different, though the probabilities often are approximately equal.

     Bayes' Theorem is essentially a recipe for going from (a) to (b). In Brown, the formula indicates that the statement that "there was a 99.99967 percent chance that the DNA found in Jane's underwear was from Troy's blood" may not be so far off — if the only alternative worth considering is S0. Suppose, for the sake of illustration, that the other evidence was so weak that, before considering the DNA match, it was 1000 times more likely that an unrelated individual was the source. Let us also assume that the DNA samples from the semen and the defendant have the profiles reported by the laboratory. The formula now shows that the odds that Troy is the source are (1/1000) x [1/(1/3,000,000)], or 3000 to 1. The corresponding probability is 99.96667%. This is smaller than the 99.99967% reported in the case, but is the discrepancy a violation of due process?
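The computation is mechanical (a sketch of the illustration above; the 1,000-to-1 prior odds are the assumed figure):

    prior_odds = 1 / 1000          # assumed: 1000-to-1 that an unrelated person is the source
    likelihood_ratio = 3_000_000   # 1 / (random-match probability of 1 in 3,000,000)

    posterior_odds = prior_odds * likelihood_ratio          # 3000 to 1
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"{posterior_prob:.5%}")   # ~99.96668% -- just shy of the 99.99967% quoted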

     One might argue that the error in the second decimal place or beyond rises to this level because the witness's description of the “chance that the DNA ... was from Troy” invites a more serious error. It encourages the jury to think that this chance is 99.9+% even though the figure ignores both the possibility that one of Troy's four brothers was the rapist and the other evidence in the case. This seems to be the basis of the majority's concern that the "complicated formula" requires other "factors" to be considered.

   Once the argument is framed this way, however, it no longer is an argument about the sufficiency of the evidence. It is an argument about prejudice in the manner in which sufficient evidence is presented, and it has to confront the fact that the jury was given a separate number for the chance that Troy and any one of his brothers would match. Romero testified that the chance of a match between two full siblings at any locus was 1/4. The DNA match in the case involved five independent loci, so the chance of a brother's having the same DNA profile would be 1/4 to the fifth power, or 1/1024. Somehow Romero came up with a value of 1/6500 instead. (If anything, the true figure would be somewhat larger than 1/1024 because 1/4 is only the chance of identical alleles at each locus by descent. It does not consider the possibility that both parents might share some alleles; however, for rare alleles, this won't make much difference.)

     All this provides ample material for cross-examination to bring out the fact that the criminalist's testimony could not be trusted because: (1) the 99.9+% figure ignores the possibility that close relatives such as brothers would match; (2) the chance that a single brother picked at random would match was not 1/6500; and (3) the chance that one or more of Troy's brothers would match would be larger still. The last probability is approximately 1/512 for the two brothers living in Carlin and 1/256 if we toss in another two brothers living across the state line in Utah. (In the habeas proceedings, Troy supplied a report from a geneticist, Larry Mueller, that gave the figure of 1/66 for the chance that at least one of the other four brothers would match.)
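The brother calculations are quick to verify (a sketch; the 1/4-per-locus figure is the identity-by-descent value discussed above):

    p_locus = 1 / 4             # chance that full siblings match at a single locus by descent
    p_brother = p_locus ** 5    # five independent loci -> 1/1024

    for k in (2, 4):            # the two Carlin brothers; then adding the two in Utah
        p_any = 1 - (1 - p_brother) ** k
        print(f"P(at least 1 of {k} brothers matches) ~ 1/{1 / p_any:.0f}")
    # ~1/512 and ~1/256, as stated in the text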

     Judge O'Scannlain grasped what all these numbers really proved. His dissenting opinion maintained that “[t]hus, it was extremely unlikely that a random person committed the crime, and of the brothers, it was extremely unlikely that the specimen DNA would match not only Troy—as it did—but another brother. These probabilities put together still constitute overwhelming DNA evidence against Troy which the jury was entitled to consider.” (Well, if Mueller's perplexing figure of 1/66 were correct, the DNA match, standing alone, might not seem so overwhelming. The whole inquiry into probabilities concerning siblings could have been avoided had a recommendation of the 1996 National Research Council report on DNA evidence been followed. The NAS committee recommended testing close relatives whenever it is suggested that they might have committed the crime. I would bet that none of the four brothers matched, but I should not have to guess.)

     Nevertheless, even Judge O'Scannlain's more perceptive discussion of the hypotheses about unrelated people and brothers does not necessarily dispose of the case. It means, as I indicated earlier, that the real issue is not the sufficiency of the evidence. He shows that a rational jury that understood the evidence as he did could have been persuaded beyond a reasonable doubt. The problem is that the evidence, as it was actually presented, was garbled by the failure of the prosecution, the criminalist, and the defense attorney to deal with a few simple probabilities intelligently. The jury might well have been confused about what the DNA actually proved and given it undue weight. As such, the system might have failed, as it often does, because of the inadequacies of its participants.

    At last, we arrive at the fundamental issue in Brown. Does this kind of system failure amount to a deprivation of due process, given that the defendant had the opportunity to correct the prosecution's overreaching? The majority thought that it could pretermit this issue. It wrote that “[b]ecause we affirm the district court's grant of Troy Brown's habeas petition on due process grounds, we need not reach his arguments regarding ineffective assistance of counsel.” But because the sufficiency theory of the majority is doubtful, the case leaves unresolved the basic questions -- Did the prosecutor's (apparently) negligent presentation of scientific evidence in this case deprive the defendant of due process? Did the defense attorney's (apparent) failure to challenge the prosecution's presentation deprive Troy Brown of a constitutional right?

--DHK

May 30, 2008 | Permalink | Comments (0) | TrackBack (0)

Friday, May 23, 2008

False Confession Testimony

I participated on an interesting panel on the admissibility of psychological testimony concerning false confessions at the New York City Bar last night.  Conventional wisdom, of course, is that no one would ever confess to a crime that they did not commit absent overwhelming coercion -- e.g., torture, physical threats, etc.  However, recent wrongful convictions have shown that police interrogation tactics in certain circumstances may be sufficient to cause an innocent suspect to confess.  The question then becomes: can the defendant introduce a psychological expert at trial to describe the false-confession phenomenon, the various risk factors, and perhaps even whether the confession obtained in the case at hand was likely to be false?

It seemed to me in reading the case law that there are a number of issues to consider in this area, some more basic than others.
i) Whether the theories about false confessions are sufficiently developed to pass Daubert/Frye
ii) Whether false confession testimony can be analogized to other psychological testimony that has been admitted about counterintuitive phenomena (e.g., eyewitness reliability, rape trauma syndrome, and the suggestibility of children in sex abuse cases)
iii) The link between false confession testimony and the right to present a defense as developed in Chambers v. Mississippi and more recently Holmes v. South Carolina, in particular, whether false confession testimony must be admissible in cases without corroborating evidence
iv) The specificity of the expert testimony -- is the expert merely providing social context testimony as described by Monahan & Walker, or is the expert reaching a conclusion about the confession
v) Whether it makes a difference that the expert is a clinical psychiatrist testifying about a specific medical issue or mental deficiency present in this defendant, or a psychologist testifying about the phenomenon of false confessions more generally
vi) Whether the expert is testifying to the jury, where all the usual Daubert/Frye strictures apply, or whether the expert is presenting to the judge at a suppression hearing, where in many cases, they do not.  (See Federal Rule 104(a)).

--EKC

May 23, 2008 | Permalink | Comments (0) | TrackBack (0)

Saturday, May 17, 2008

Reproducible Analyses

John Cook has an interesting discussion about the problem of reproducing statistical analyses here.  (Thanks to Andrew Gelman for the link). 

The problem is this: even given the same dataset, statistical analyses are often difficult to replicate for a variety of reasons.  The process of analysis involves many (even if perfectly legitimate) data manipulations, and it is nearly impossible to document all of these in an expert report.   The opposing party then has to guess at what the expert did, an inefficient and imperfect exercise in reverse engineering.  This practice arguably exacerbates the battle of the experts problem.  Not only do the opposing experts reach different conclusions, but why they do so is unnecessarily hidden from view.  I recall seeing this problem regularly with economic analyses when I clerked -- it was nearly impossible to reconstruct how the expert reached his/her conclusions.

To the extent that Daubert emphasizes transparency and reasoned decisionmaking, it seems appropriate to require the disclosure of scripts and other code as part of the process.  That way, opposing experts can zero in on methodological differences.  This transparency by no means guarantees that legal actors will understand the differences, but it's a start.

Cook notes that the Sweave software package helps address this issue by allowing users to mix text (in LaTeX) and code (in R).  Unfortunately, the fact that most attorneys have no contact with LaTeX or R suggests that we have a long way to go on this issue.

--EKC

May 17, 2008 | Permalink | Comments (1) | TrackBack (0)

Friday, May 16, 2008

The Trouble with Forensics

Roger Koppl has a very nice piece in Forbes discussing the gap between the perceived power of forensic science and its reality.  See Here.   He sketches potential reforms, all of which are sensible and relatively easily accomplished.  The value of the piece, however, lies in its placement in Forbes, thus giving the topic the popular attention it deserves.
--DLF

May 16, 2008 | Permalink | Comments (0) | TrackBack (0)

Tuesday, May 6, 2008

My brother's DNA: Near-miss DNA searching

California has adopted an aggressive policy toward near-miss DNA searching -- something discussed in this blog before. The state is going to compare DNA profiles recovered from crime scenes to those in its offender database (1) to see if there are any "cold hits" to convicted offenders and arrestees, and (2) to see if there are any almost-matching profiles that are likely to have come from a very close relative.

The first procedure has been upheld in case after case challenging its constitutionality (in the context of convicted offenders). Why would the second procedure be constitutionally defective? According to a Los Angeles Times article of April 26 on the California policy, some lawyers think it is an unreasonable search that might run afoul of the Fourth Amendment. The paper also quotes "Tania Simoncelli, science advisor to the American Civil Liberties Union," as asserting that "The fact that my brother committed a crime doesn't mean I should have to give up my privacy!"

This cri de coeur surely is sincere, and it may not be meant as a constitutional argument, but it is interesting to ask whether it supplies a plausible principle for applying the Fourth Amendment. Consider the following case: You have an identical twin brother. He robs a bank, is locked away in prison, and his DNA profile is put in an offender database. This can happen even though his DNA was not evidence in the bank robbery case and had nothing to do with that crime.

While your brother is out of circulation, you break into a house, cutting your hand on the glass of a window that you shattered to gain entry. A tiny bloodstain with your DNA on it is analyzed. The profile is compared to those in the database. It matches the one that is on file perfectly -- your brother's -- because identical twins have the same DNA sequences. But the police know that your brother was in prison when the house was burgled. They scratch their heads until they realize that he might have an identical twin with identical DNA.

So the police investigate you and find plenty of other evidence against you. Now you are facing trial. You move to exclude evidence that your DNA matches that in the bloodstain on the ground that this discovery is the result of an unreasonable search, arguing that "the fact that my brother committed a crime doesn't mean I should have to give up my privacy!" Not only that, you contend that the rest of the evidence must be dismissed because all of it is the fruit of this illegal search.

I do not see how anyone (who agrees that convicted-offender databases that include bank robbers are constitutional) can argue that this search infringes the Fourth Amendment. It is too bad that you and your brother share the same DNA profile, but the police have not forced you to surrender your DNA, and you have no right to stop them from checking your brother's DNA to see if he might be responsible. By checking him, they learn something about you. You might not like it, but let's face it, this probably is not the first time that your brother got you into trouble.

Counter-arguments, anyone?

Of course, the California policy is not limited to identical twins. Furthermore, it involves partial matches and less complete information. All that I have tried to show is that the slogan that "the fact that my brother committed a crime doesn't mean I should have to give up my privacy!" does not settle any constitutional question. It states the conclusion of what must be a rather complex argument about (1) the privacy of information that identifies a class of individuals and (2) the power of the state to investigate one individual on the basis of information it legitimately obtains from another individual.

--DHK

May 6, 2008 | Permalink | Comments (1) | TrackBack (0)

Monday, May 5, 2008

Rounding Up the Usual Suspects II

Not long ago, I mentioned the DNA-database-trawl issue that has led to several confused court opinions. The evidentiary issue is whether a complete search through a database of DNA profiles that produces one and only one match is less probative than a simple match to a known suspect. Some researchers in the U.K. tendentiously call the former use of the database “speculative searching.” (Kaye 2006, 18).

Now, an article in the May 3 Los Angeles Times claims to have uncovered a national scandal of sorts. The reporters describe a recent “cold hit” case that they say

is emblematic of a national problem, The Times has found. [¶] Prosecutors and crime labs across the country routinely use numbers that exaggerate the significance of DNA matches in "cold hit" cases, in which a suspect is identified through a database search. [¶] Jurors are often told that the odds of a coincidental match are hundreds of thousands of times more remote than they actually are, according to a review of scientific literature and interviews with leading authorities in the field.

The article maintains that

[I]n cold hit cases, the investigation starts with a DNA match found by searching thousands, or even millions, of genetic profiles in an offender database. Each individual comparison increases the chance of a match to an innocent person. [¶] Nevertheless, police labs and prosecutors almost always calculate the odds as if the suspect had been selected randomly from the general population in a single try. [¶] The problem will only grow as the nation's criminal DNA databases expand. They already contain 6 million profiles.

This description portrays one approach to the issue as if it were the consensus in the scientific literature. It is not. There is disagreement about the need to adjust a random-match probability. Furthermore, if one counts the number of peer-reviewed articles on the subject, the dominant view is that adjustment is not necessary.

I won't present a full-blown analysis here, but I will offer a thought on the statement that “[e]ach individual comparison increases the chance of a match to an innocent person.” It is true that if one searches a database of a million innocent people, all of whom are unrelated to the source of the crime-scene DNA, there are more opportunities for a match to an innocent person than if one searches a database of half a million innocent people, or a database of only one innocent person (i.e., the suspect). So sooner or later, searches of innocent databases will produce a false positive. Indeed, they already have.

But is the probability that an innocent database will contain a matching type the right question to ask? The probative value of a match depends on how much it shifts the odds in favor of the prosecution's claim that the matcher is the source of the crime-scene DNA. The enhancement in the odds grows progressively larger as the size of the database increases. The reason is simple. More and more people are definitively excluded as possible sources of the crime-scene DNA. This raises the probability that someone else in the population — including the matcher — is the source. In the limiting case of a database that includes every person on earth, the evidence of a single match in the database becomes conclusive (ignoring scenarios involving fraud or laboratory error).

It can be shown (and has been) that, due to this “exclusion effect,” the single match in the database raises the odds even more (at least slightly) than does testing a single person at random and finding that he matches. (E.g., Donnelly and Friedman 1999; Kaye 2008). Therefore, if there is any prejudice in the existing practice of reporting the random-match probability in the “cold hit” case, it is not because a cold hit in a large database is less probative than a cold hit in a small one!

In sum, searching large databases gives more information than searching small ones, and searching small ones is better than limiting a search to a single individual. The DNA evidence has more, not less, probative value in a database-search case than in a single-suspect case.
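A stylized Bayesian model makes the exclusion effect concrete. The sketch below assumes a uniform prior over a population of possible sources, ignores relatives and laboratory error, and supposes the trawl yields exactly one match -- simplifying assumptions of mine, not the full Donnelly-Friedman analysis:

    # Posterior odds that the lone matcher is the source, given a unique match in a
    # database of size d drawn from a population of n possible sources, each with
    # random-match probability p: odds = 1 / (p * (n - d)). Excluding the rest of
    # the database shrinks the pool of alternative sources.
    p = 1e-9          # assumed random-match probability
    n = 10_000_000    # assumed population of possible sources

    for d in (1, 1_000_000, 5_000_000):
        odds = 1 / (p * (n - d))
        print(f"database of {d:>9,}: odds the matcher is the source ~ {odds:.0f} to 1")
    # 100, 111, and 200 to 1: the bigger the trawl, the more probative the match.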

References:

Donnelly, Peter, and Richard D. Friedman. 1999. “DNA Database Searches and the Legal Consumption of Scientific Evidence.” Michigan Law Review 97: 931–984.

Kaye, D.H. 2008. “Rounding Up the Usual Suspects: A Legal and Logical Analysis of DNA Trawling Cases.” (submitted for publication).

Kaye, Jane. 2006. "Police Collection and Access to DNA Samples." Genomics, Society and Policy. 2: 16–27.

--DHK

May 5, 2008 | Permalink | Comments (0) | TrackBack (0)

Thursday, April 17, 2008

Ghostwriting in Medical Journals

The Journal of the American Medical Assocation (JAMA) recently published a study discussing the disturbing practice of ghostwriting in medical journal articles -- in this case, involving Vioxx.  The JAMA article is here.   A related New York Times article can be found here.

--EKC

April 17, 2008 | Permalink | Comments (0) | TrackBack (0)

Friday, April 11, 2008

Low copy number DNA vindicated?

In January, we noted the unusual opinion of the trial court in the Omagh bombing case. Now, a second review of the Forensic Science Service's procedure for amplifying small quantities of DNA has appeared. According to an article in today's Guardian, a report undertaken for the Crown Prosecution Service concludes that "[t]iny samples of DNA evidence are safe to use in criminal prosecutions." However, the report also contains "21 recommendations to standardise procedures, including ensuring that police evidence-gathering kits are "DNA-clean", to avoid contamination with someone else's genetic profile, a national agreement on how to interpret the results from low-template DNA, and clear guidance on how courts should interpret the evidence."

--DHK

April 11, 2008 | Permalink | Comments (2) | TrackBack (0)

Saturday, April 5, 2008

Rounding Up the Usual Suspects

Countries around the world have established databases consisting of the DNA profiles of suspected or convicted offenders. In the United States, state and federal databases combined in the FBI's National DNA Index System hold over five million convicted offender DNA profiles as well as those of some people who are merely arrested or detained. These identification databases have helped solve cases that have baffled investigators for decades. In one case, a federal database search linked a 58-year-old man suspected of raping at least 25 women in three states to semen on underwear from a 1973 rape.

When the DNA profile from a crime-scene stain matches one of those on file, the person identified by this “cold hit” will become a target of the investigation. A fresh sample will be taken from the suspect to verify the DNA match, and other evidence normally will reinforce the investigatory lead. In rare cases, prosecutors will even proceed with no other evidence. In one such case, a San Francisco jury convicted John Davis, already behind bars for robbery and other crimes, of the murder of his neighbor, Barbara Martz, nearly 22 years earlier. The database match was all the jurors had to go on. This was enough for a conviction, at least where the probability that a randomly selected, unrelated individual would match the crime-scene DNA sample — the “random-match probability” — was said to be “quadrillions-to-one.”

Cases like Davis that emanate from cold hits have been called “trawl cases” because “the DNA match itself made the defendant a suspect, and the match was discovered only by searching through a database of previously obtained DNA samples.” Peter Donnelly & Richard D. Friedman, DNA Database Searches and the Legal Consumption of Scientific Evidence, 97 Mich. L. Rev. 931, 932 (1999). These database-trawl cases can be contrasted with traditional “confirmation cases” in which “other evidence has made the defendant a suspect and so warranted testing his DNA.” Id.

In terms of this dichotomy, we must ask whether the fact that the defendant was selected for prosecution by trawling requires some adjustment to the random-match probability. Two committees of the National Academy of Sciences (NAS) thought so. In their influential reports on “DNA Forensic Science,” they reasoned that a match coming from a trawl is much less impressive than a match in a confirmation case — just as finding a tasty apple on the very first bite is more impressive than pawing through the whole barrel of apples to locate a succulent one. To account for the extra bites at the apples, they described approaches that would inflate the normal random-match probability.

The response has been disputation and litigation. Two early commentators, Bill Thompson and Simon Ford, gave “a Bayesian analysis” to suggest that “this evidence has no probative value.” William C. Thompson & Simon Ford, DNA Typing: Acceptance and Weight of the New Genetic Identification Tests, 75 Va. L. Rev. 45, 100 (1989). Ten years later, Donnelly and Friedman reached precisely the opposite conclusion. They applied Bayes' rule in more detail -- and correctly -- to show that the trawl actually increases the probative value of the match.

Recently, three appellate opinions on the issue have emerged -- United States v. Jenkins, 887 A.2d 1013 (D.C. 2005), People v. Johnson, 43 Cal.Rptr.3d 587 (Ct. App. 2006), and People v. Nelson, 48 Cal.Rptr.3d 399 (Ct. App. 3 Dist. 2006), rev. granted, 147 P.3d 1011 (Cal. 2006). In these cases, defendants argued that until the scientific community can agree on a single statistic to characterize the import of a database trawl, even the fact of a match should not be admitted. Even though the dispute in the scientific community is limited to the question of whether there is any reason to bother with the NAS adjustments to the probability figure, the trial court in Jenkins felt compelled to exclude the DNA evidence in its entirety.

The appellate courts all rejected the defense challenges, but their opinions fail to address the dispute in the scientific and legal literature. The avoidance mechanisms they employ are singularly unimpressive. In Nelson, for example, the court of appeal claimed that California's general acceptance standard for scientific evidence does not apply because after the database trawl identifies the suspect, a fresh sample from the suspect is typed. If the fresh sample matches, only this match is introduced at trial. In the court's view, it is as if the database trawl never took place.

To a statistician, this is a jaw-dropping claim. The challenge is not to the use of a convicted-offender DNA database as an investigatory tool. The objection is to the use of the random-match probability at trial to gauge the power of the later match when the defendant has not been selected for DNA testing “at random” — that is to say, on the basis of factors that are uncorrelated with his DNA profile. When the defendant is selected for a later test precisely because of his known DNA profile, the replication adds no new information about the hypothesis that the defendant is unrelated to the actual perpetrator and just happens to have the matching DNA profile. It adds no information because the datum — a matching profile in the new sample — is just as probable when this hypothesis is true as when it is false. Replication helps eliminate the risk of a laboratory error in determining or reporting the DNA profile, but it has no further value in probing the possibility of a coincidental match.

Because the rationales presented in the three cases to date are unconvincing, the emerging case law needs to be reoriented to confront directly the competing statistical arguments about the meaning of a database match. Recent statistical literature seems to favor the view that no adjustment to the random-match probability is necessary, but this may just reflect the fact that most statisticians writing about forensic science are Bayesians rather than frequentists. Although Donnelly and Friedman have presented the Bayesian perspective forcefully and simply, it appears that it will take more to convince the courts that they need to think more deeply -- and more clearly -- about the subject.

--DHK. These comments are adapted from a forthcoming book, The Double Helix and the Law of Evidence: Controversies over the Admissibility of Genetic Evidence of Identity (Harvard Univ. Press).

April 5, 2008 | Permalink | Comments (0) | TrackBack (0)

Monday, January 14, 2008

Low Copy Number DNA Dealt a Low Blow?

Twenty-nine people died and 200 were wounded when a 255-kilogram car bomb exploded in a busy shopping area. No, this was not Baghdad or Jerusalem. It was Omagh, Northern Ireland, in 1998. The bomb was the work of a splinter group calling itself the Real IRA. It was Northern Ireland's worst single terrorist atrocity.

Nearly ten years later, Sean Hoey, a 38-year-old electrician, was on trial for the murders. It was Britain's biggest murder trial.  Justice Reg Weir, sitting under Northern Ireland's Diplock non-jury system for terrorism trials, announced his verdict last month -- not guilty! Hoey waved to applauding family members as relatives of the victims gasped in shock. Families of the dead said they were stunned that Mr Hoey had been acquitted but pledged to press ahead with a civil action for 14 million pounds in compensation against five men who they claim were responsible for the attack.

Shades of O.J. Simpson? A lynchpin in the case was DNA evidence. The prosecution maintained that bombs used in various attacks, including this one, had distinctive similarities in their construction and that Hoey had been involved in constructing them. His DNA, it claimed, had been found in connection with four of them (not including the Omagh bomb). However, the judge had harsh words for the treatment of this vital DNA evidence, saying that the recording, packaging, storage and transmission of some of the items was "thoughtless" and "slapdash." He found that the police and forensic laboratory did not take "appropriate DNA protective precautions" and that the police had engaged in "mendacious attempts to retrospectively ... alter the evidence" to hide this fact.

For good measure, the court added a discussion of the validity of Low Copy Number (LCN) DNA typing. Introduced by Britain's Forensic Science Service (FSS) in 1999, LCN is a term for one of several related methods for increasing the sensitivity of ordinary DNA testing. All the procedures start by "amplifying" a sample of DNA, that is, by producing a huge number of copies of the original molecules for analysis.  LCN pushes the amplification step (known as PCR because it is based on the Polymerase Chain Reaction for copying strands of DNA) to its limits. It permits the duplication of just a few original molecules.

The defense experts criticized LCN for want of adequate validation. They reached this conclusion because: (1) LCN results were admitted as evidence in only three countries; (2) the US (which only used it for investigative purposes) employed "a different and much more stringent operating system"; (3) it lacked "an international agreement on validation"; and (4) only two scientific papers on the technique were published, and those were written by its inventors. The judge endorsed these criticisms.

This part of the opinion seems odd for a country that does not subject scientific evidence to special scrutiny -- that has no precedent comparable to the Frye or Daubert cases in the United States. The opinion justifies its discussion of these matters by quoting from a House of Commons committee report calling for a " 'gate-keeping' test for expert evidence [building] on the US Daubert test."

The Crown Prosecution Service responded by carrying out a "precautionary internal review of current cases involving the FSS use of LCN DNA analysis." On 14 January 2008, it reported that it could not find

anything to suggest that any current problems exist with LCN. Accordingly we conclude that LCN DNA analysis provided by the FSS should remain available as potentially admissible evidence. Of course, the strength and weight such evidence is given in any individual case remains a matter to be considered, presented, and tested in the light of all the other evidence

--DHK

Sources:

Queen v. Hoey, [2007] NICC 49, WE17021

Crown Prosecution Service, Press Release, Review of the use of Low Copy Number DNA Analysis in Current Cases: CPS Statement, Jan. 14, 2008, http://www.cps.gov.uk/news/pressreleases/101_08.html

Duncan Campbell and Vikram Dodd, Police Suspend Use of Discredited DNA Test after Omagh Acquittal, The Guardian, December 22, 2007, http://www.guardian.co.uk/Northern_Ireland/Story/0,,2231403,00.html

Anne Cadwallader, Omagh Bombing Suspect Acquitted, Reuters UK, Dec 20, 2007, http://today.reuters.co.uk/news/articlenews.aspx?type=topNews&storyid=2007-12-20T201408Z_01_L2062777_RTRUKOC_0_UK-IRISH-OMAGH.xml

January 14, 2008 | Permalink | Comments (0) | TrackBack (0)

Friday, November 30, 2007

"Predictive Medical Information" in DNA Databases for Law Enforcement

One objection to amassing databases of DNA profiles for law enforcement purposes is that the profiles themselves contain "predictive medical information." E.g., Joh (2006). The average person might take this to mean that the profiles used for identification also reveal whether a person is at risk for particular diseases. Yet, no one knows how to make such predictions, and no physician would be interested in the allegedly "predictive medical information." Nevertheless, Professor Simon Cole (2007) of the University of California at Irvine suggests that such statements do not contradict the claim of forensic scientists that no one can use the set of numbers in the standard CODIS profiles to predict whether anyone represented in the database will develop any disease. In Cole's view, it is just a way of saying that someday, somehow, meaningful predictive value might be discovered.

So construed, the assertion of "predictive information" is irrefutable.  To give the claim some real meaning, however, one must show that the possibility that the identifying features will turn out to be predictive of disease is more than idle speculation. In this regard, Cole makes the following argument:

Presumably, Professor Kaye would respond that his extrapolation of the future is more defensible than others because it is “based on current knowledge and practice.” It may be more defensible, but that does not mean it is any more likely to be correct. Would the current capability of genetics have been predictable from the state of knowledge and practice in 1960? If not, there is no reason to assume that the capabilities of genetics in 2050—when the law enforcement DNA databases we are building today will likely still be in place and encompass a large portion of the population—must be wholly predictable from the current state of theory and knowledge.

Cole is referring to a paper (Kaye 2007a) that explains why, in light of basic principles of statistics and genetics that date back to Bayes and Galton, the alleged predictive power of the profiles is likely to remain too slight to permit useful inferences about disease status or propensity. To reach this conclusion, the paper discusses all the known ways in which the profiles in a database might be used to predict future diseases.

This is, of course, quite different from blithely assuming that the future will resemble the past. And, I think, it is better than assuming that just because we know more about molecular biology and medical genetics than we did in 1960, we will be able to accomplish this particular feat by 2050 (Kaye 2007b). Perhaps we will -- such a development would not violate any known laws of physics. But do any readers have a more specific reason to suspect that the CODIS STR profiles will turn out to be powerful predictors of any medical conditions?

--DH Kaye

* On CODIS and STR profiles, see, for example, FBI brochure, NIJ webpage, NIST technical information

November 30, 2007 | Permalink | Comments (0) | TrackBack (0)

Friday, October 26, 2007

Additional Thoughts on Maryland v. Rose (Fingerprints)

Having just read the opinion in Maryland v. Rose, my initial reaction is an odd mixture of yawns and gasps.  On the unreliability of fingerprints, the opinion is in many ways completely unremarkable.  The arguments offered against fingerprints have now been around for quite some time.  What is remarkable is how long it has taken for courts to begin acknowledging the problems with fingerprints.

The broader aspects of the opinion, however, are arguably more fascinating.  First, the judge invokes the "death is different" concept almost like an incantation, and then says nothing more about it.  Is the judge suggesting that the holding be limited to death penalty cases?  While one could develop a theory by which the Constitution influences the interpretation of Rule 702 in certain contexts, that conclusion is not immediately obvious.  In addition, the criticisms of fingerprint identification seem sufficiently serious that the problem cannot be confined to death cases.

Second, I am astonished at how the court almost cavalierly sidesteps the issue of being in a Frye jurisdiction and subject to a "general acceptance" standard.  While I have previously argued that Frye and Daubert operate similarly in practice, never did I expect that an opinion from a Frye jurisdiction would feel so extraordinarily "Daubertesque." 

Finally, the opinion is a testament to how influential the Daubert criteria and mindset have become.  The court emphasizes the use of objective standards, testing, and error rates.  One gets the sense that expert intuition was summarily shown the door. 

--EKC

October 26, 2007 | Permalink | Comments (1) | TrackBack (0)

And more on Fingerprints....

Here's a follow-up story on Judge Souder's exclusion of fingerprints for not being based on "a reliable factual foundation."  See Here.   There are some insightful comments from Sandy Zabell and some not-terribly-insightful comments from Thomas P. Mauriello, an adjunct professor at the University of Maryland.  Indeed, a comparison of the comments of Zabell (a statistician) and Mauriello (a criminologist) nicely illustrates much of what is wrong with forensic science as it is practiced today.  The former offers the critical perspective of the scientist; the latter, that of the true believer.  Evidence, not faith and anecdotal experience, should be the currency by which the forensic identification sciences are measured.
-- DLF

October 26, 2007 | Permalink | Comments (0) | TrackBack (0)

Thursday, October 25, 2007

Exclusion of Fingerprint Evidence

A Maryland trial court excluded the State's proffer of partial latent fingerprint evidence last Friday.  The judge based her ruling on the State's failure to demonstrate that the technology produced valid results.  Although the court used the Frye test, the opinion reads like a primer on Daubert.  The Baltimore Sun has had a few articles on the case, and an editorial.  One article and the PDF of the judge's decision can be found at the following link: Here.   The site is fully searchable, so you should be able to find the other articles fairly easily, if you are interested.
--DLF

October 25, 2007 | Permalink | Comments (4) | TrackBack (0)

Monday, July 16, 2007

Wall Street Journal law blog blooper on court size

The author of the Wall Street Journal law blog brought up an old hobby horse -- splitting the Ninth Circuit. Ashby Jones, reporting on some calculations (of dubious applicability) in a recent Los Angeles Times op-ed piece by law professor Brian Fitzpatrick, wrote that "as a court grows larger, it is increasingly likely to issue extreme decisions."

Larger courts are more likely to issue extreme rulings than smaller ones? Not in any meaningful sense. The effect that Professor Fitzpatrick identifies in his numerical example (which I won't describe here) is a subtle consequence of the finite size of the population (the judges) from which the three-judge panels that decide the appeals are drawn.  Let N be the size of the full court, and let s be the number of "extreme" judges. It should suffice to consider the probability of selecting three extreme judges at random. This probability is

             [s(s–1)(s–2)] / [N(N–1)(N–2)].           (1)

As N grows larger (and s grows proportionately, as Fitzpatrick posits), the subtractions matter less and less. In the limit, the chance of an extreme panel is just (s/N)³. Because each factor (s–k)/(N–k) is smaller than s/N for k ≥ 1, expression (1) is always less than this limiting value: the chance of an all-extreme panel creeps upward with court size, but only toward the fixed ceiling (s/N)³ set by the proportion of extreme judges. Contrary to the impression left by the Journal's blog, then, enlarging a court does not make extreme panels ever more likely in any unbounded way; the increase is a small, bounded artifact of the finite population of judges. The proof of the inequality is left as an exercise for the reader.
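For readers who would rather compute than prove, here is a minimal sketch (the court sizes are invented for illustration) of how expression (1) behaves when the proportion of extreme judges is held fixed at one third while the court grows:

```python
from fractions import Fraction

def extreme_panel_prob(n_judges, n_extreme, panel_size=3):
    """Expression (1): the probability that a panel drawn at random,
    without replacement, consists entirely of 'extreme' judges."""
    p = Fraction(1)
    for k in range(panel_size):
        p *= Fraction(n_extreme - k, n_judges - k)
    return p

# Hold the proportion of extreme judges fixed at 1/3 and grow the court.
for n in (9, 18, 27, 270):
    print(n, float(extreme_panel_prob(n, n // 3)))
# The output rises from about 0.012 (N = 9) toward, but never above,
# the limiting value (1/3)**3, about 0.037.
```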

--DHK

July 16, 2007 | Permalink | Comments (2) | TrackBack (0)

Wednesday, July 11, 2007

Bush Science

Former Surgeon General Richard Carmona told a congressional committee that the Bush administration pressured him to change, modify, or omit information regarding public health matters based on political considerations.  See Here.   Nothing new, perhaps, but worth adding to the list.  History will not be kind to this president, and the chronicle of his ineptitude, inanity, and corruption grows longer by the day.
--- DLF

July 11, 2007 | Permalink | Comments (0) | TrackBack (0)

Thursday, May 17, 2007

The "prosecutor's fallacy" in the Netherlands

A New York Times blog by Mark Buchanan on "The Prosecutor's Fallacy" provides a report on a conviction involving flawed statistical evidence. (The author also described the case in January in Nature. A more detailed analysis is at Richard Gill's website.)

Dutch nurse Lucia Isabella Quirina de Berk is serving a life sentence for seven murders and three attempted murders. Mr. Buchanan writes that:

Following a tip-off from hospital administrators, investigators looked into a series of “suspicious” deaths or near deaths in hospital wards where de Berk had worked from 1999 to 2001, and they found that Lucia had been physically present when many of them took place. A statistical expert calculated that the odds were only 1 in 342 million that it could have been mere coincidence.

After noting that the data collection was badly flawed and that "a more accurate number is something like 1 in 50," he adds that:

More seriously still – and here’s where the human mind really begins to struggle – the court, and pretty much everyone else involved in the case, appears to have committed a serious but subtle error of logic known as the prosecutor’s fallacy.

The big number reported to the court was an estimate (possibly greatly inflated) of the chance that so many suspicious events could have occurred with Lucia present if she was in fact innocent. Mathematically speaking, however, this just isn’t at all the same as the chance that Lucia is innocent, given the evidence, which is what the court really wants to know.

Now, there are a great many instances of this fallacy of naively transposing P(E|H), the probability of the evidence E given the hypothesis H, into P(H|E), the probability of H given E. But surely this is not a more serious error than being off by a factor of some 10,000,000 in the estimate of P(E|H)! A p-value of 1/342,000,000 will be associated with a likelihood function that swamps any plausible prior probability distribution. See M.H. DeGroot, Doing What Comes Naturally: Interpreting a Tail Area as a Posterior Probability or a Likelihood Ratio, 68 J. Am. Stat. Ass'n 966 (1973).
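To see why, consider a minimal sketch of the Bayesian arithmetic. It treats the reported figure as though it were P(E|innocence), takes the probability of the evidence given guilt to be 1, and adopts a deliberately skeptical prior; every number is illustrative rather than an analysis of the actual case:

```python
def posterior_prob_guilt(prior_guilt, p_evidence_given_innocent,
                         p_evidence_given_guilty=1.0):
    """Bayes' rule in odds form: prior odds times the likelihood
    ratio gives the posterior odds, converted back to a probability."""
    prior_odds = prior_guilt / (1 - prior_guilt)
    likelihood_ratio = p_evidence_given_guilty / p_evidence_given_innocent
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 1e-4  # a deliberately skeptical prior probability of guilt
print(posterior_prob_guilt(prior, 1 / 342_000_000))  # about 0.99997
print(posterior_prob_guilt(prior, 1 / 50))           # about 0.005
```

With the inflated figure, even a 1-in-10,000 prior yields near-certain guilt; with the corrected figure, the same prior leaves guilt wildly improbable. The factor-of-millions error in P(E|H), not the transposition, does the damage.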

Of course, Mr. Buchanan is right about one thing: the fallacy is subtle. He makes it himself when he describes the p-value as follows: "the odds were only 1 in 342 million that it could have been mere coincidence." The kind of statistics that appear to have been used in the case do not permit any quantitative statement about the probability that coincidence is the explanation for the nurse's presence. At best, they allow one to say that if that explanation is correct, it is exceedingly improbable that she would be present as often as (or even more often than) she was. Thus, if one wants to avoid the transposition fallacy, one must say something like this: assuming the nurse's presence was mere coincidence, the probability of her being present at least as often as she was is only 1 in 342 million.

DHK

May 17, 2007 | Permalink | Comments (1) | TrackBack (0)

Tuesday, April 24, 2007

The Meaning of Error

I am sitting at a meeting of a National Academy of Sciences Committee on Identifying the Needs of the Forensic Science Community. Forensic scientists addressing the committee have said that it is not a "false positive" or an "error" when a test (such as microscopic hair comparison or ABO blood typing) is "correct" in the sense that it gives the best result it can (the two hairs really are indistinguishable under the microscope; the blood really is type A). Judge Harry Edwards, the co-chair of the committee, disputed this terminology, saying that if more discriminating mitochondrial DNA testing correctly establishes that the hairs actually come from different people, then the microscopic comparison was an error.

Who is right? Well, both. The laboratory has not erred in the sense that it has applied the test correctly. This is an internal perspective on the process. Judge Edwards also is right. The test has erred from an external perspective. If a court convicts an innocent man because the microscopic features of his hair match those of the crime-scene hair, that is a substantive error. Would the forensic scientists insist that an eyewitness who looks carefully and has a good memory but nevertheless misidentifies an assailant has not erred?

The point is that there are different sources of possible error. If we are interested in the error rates of a properly performed test, then the external perspective is appropriate. Such a test has a measurable sensitivity and specificity, and we need to know these statistics to evaluate its validity and utility. If we are interested in proficiency or reliability, however, the internal perspective applies.
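A toy calculation may help fix the external perspective. The counts below are invented; the point is only that a test performed flawlessly still has measurable error rates when scored against ground truth:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """External-perspective error rates for a properly performed test:
    sensitivity = P(inclusion reported | same source),
    specificity = P(exclusion reported | different sources)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical validation-study counts for a comparison method.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=950, fp=50)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# Here 1 - specificity = 0.05: one flawlessly executed comparison in
# twenty declares a "match" for samples from different sources.
```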

DHK

April 24, 2007 | Permalink | Comments (5) | TrackBack (0)

Friday, April 20, 2007

Predicting Dangerousness and Flipping Coins

On April 20, NPR's Morning Edition broadcast an interview with Phillip Merideth, "a forensic psychiatrist and the chief medical officer for Brentwood Behavioral Healthcare in Mississippi [and] a lawyer who teaches about mental health and the law." According to Dr. Merideth, "[p]sychiatric literature in the past has shown that efforts to predict — and I'm using the word predict in quotes — is no better than flipping a coin."

One wonders what Dr. Merideth knows about coins and probability, since he also says, "you can predict who is at risk of committing violence. But you can't actually say who will." So which is it? Predictions based on flipping a coin are totally worthless; they bear no relationship to the actual risk. Are the psychiatric predictions equally worthless, or are they at least slightly informative? You cannot have it both ways. Either risks can be identified or they cannot be. If they can be, the question then becomes how much better mental health professionals do than coins.

The issue confused Justice Blackmun in Barefoot v. Estelle, 463 U.S. 880 (1983). Quoting Ennis & Litwack, Psychiatry and the Presumption of Expertise: Flipping Coins in the Courtroom, 62 Calif. L. Rev. 693, 737 (1974), his dissenting opinion insisted that "[i]t is inconceivable that a judgment could be considered an `expert' judgment when it is less accurate than the flip of a coin." Consistently less accurate? A few seconds of reflection should reveal that this is impossible: any predictor that was consistently worse than a coin could be turned into one that beats the coin simply by reversing its predictions. How can anything be consistently less accurate than flipping a coin?

The studies cited in the American Psychiatric Association's amicus brief in Barefoot actually demonstrate that the best predictions of mental health experts at the time were substantially better than coin tossing (a likelihood ratio of about 8 rather than 1). See, e.g., Christopher Slobogin, Dangerousness and Expertise, 133 U. Pa. L. Rev. 97 (1984).
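To put a likelihood ratio of 8 in perspective (the numbers that follow are purely illustrative), suppose the base rate of violence in the relevant population is 20%, i.e., prior odds of 1:4. A positive prediction with a likelihood ratio of 8 moves the odds to 8:4, or 2:1, a probability of about 67%. A coin flip, with a likelihood ratio of 1, leaves the probability exactly where it started, at 20%. Hardly a crystal ball, but hardly a coin either.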

Of course, I am not arguing that psychiatric predictions are extremely accurate. In general, they are not. My point is simply that Dr. Merideth's statements about coins and psychiatry are plainly false, and that debates over mental health law should be informed by more accurate information. Such information was provided on a different NPR program, Talk of the Nation, just two days ago. John Monahan, Professor of Law and Professor of Psychology and Psychiatric Medicine at the University of Virginia Law School, pointed out that although "[i]t's been known for a very long time that psychologists and psychiatrists are not very good at predicting violence, ... they are better than chance." They may not be "very much better than chance," but "in recent years the science has improved considerably." The bottom line is that "our crystal balls are very murky, and we have a long way to go."

DHK

April 20, 2007 | Permalink | Comments (0) | TrackBack (0)

Sunday, April 8, 2007

Near-miss DNA Searching

"Familial searching" is back in the news.  60 Minutes had a segment on it last week called "A Not So Perfect Match: How Near-DNA Matches Can Incriminate Relatives Of Criminals," and the LA Times ran an editorial by UCLA Professor Jennifer Mnookin entitled "The Problem with Expanding DNA Searches: They Could Locate Not Just Convicted Criminals But Also Relatives -- Violating Privacy."

The phrase "familial searching" is slightly misleading. As Mnookin notes, when a DNA sample from a crime scene is almost -- but not quite -- a match to a particular individual in the convicted-offender database, it could well come from a full sibling or a parent or child. As one moves farther out on the family tree, however, it is difficult to distinguish relatives from unrelated individuals with the DNA types listed in the database.

Although one cannot expect too much from short editorials and TV clips, it may be worth noting and commenting on some of the arguments against near-miss searching floated in these media. The major argument offered on the 60 Minutes show was that looking for leads to relatives is "genetic surveillance." Of course, this is more of a slogan than an argument. Calling the practice "genetic" or "surveillance" does not make it wrong. People would prefer not to come to the attention of the authorities, but what is the underlying right that following these leads violates? Or is the argument not about rights, but policy? Is the unarticulated premise that the police should not have a way of tracking the whereabouts of large numbers of people who are not (yet) known to have done anything wrong? Perhaps, but don't people become suspects for all kinds of reasons beyond their control all the time?

Professor Mnookin formulates the point somewhat differently when she writes that “[p]ut plainly, it is discriminatory. If I have the bad luck to have a close relative who has been convicted of a violent crime, authorities could find me using familial search techniques. If my neighbor, who has the good fortune to lack felonious relatives, left a biological sample at a crime scene, the DNA database would not offer any information that could lead to her.” The “discrimination” here is that people whose parents, children, or siblings are convicted criminals can be caught. But why is this under-inclusiveness such a serious concern? By this logic, wouldn't it be equally discriminatory to seek or follow up on leads by interrogating friends of a criminal? To paraphrase the editorial, “If I have the bad luck to have a friend who is willing to talk to the police, authorities could find me using interrogation techniques. If my neighbor, who has the good fortune to lack loose-lipped friends, committed the same crime, the interrogation would not offer any information that could lead to her.” “Discrimination” that arises from “bad luck” is not generally a concern. Something else must be doing the work here.

Another less-than-obvious claim cast in terms of "discrimination" or "fairness" is that “those people who just happen to be related to criminals have not given up their privacy rights as a consequence of their actions. To use a search technique that targets them simply because of who their relatives are is simply not fair.” But it is not apparent that there is any fundamental “privacy right” to be free from becoming the target of an investigation because of one’s associations with individuals who come to the attention of the police. Suppose that I commit a crime all by myself but I have a nosy neighbor who shadowed me. He gets caught committing a totally unrelated crime, and he bargains for a lower sentence by offering to rat on me. Would we say that “those people like me, who just happen to be living next to nosy criminals have not given up their privacy rights as a consequence of their actions. To use a search technique that targets them simply because of who their neighbors are is simply not fair.”?

A more troubling point is that near-miss searching will have a disparate racial and economic impact because racial minorities and less affluent individuals are overrepresented among convicted offenders. Is the disparate impact acceptable for the convicts but not for their closest relatives? Mnookin points out that in upholding the constitutionality of convicted-offender databases, courts have suggested that offenders lose privacy rights by virtue of their offenses. I am skeptical of this "forfeiture of rights" argument as the ground for upholding convicted-offender databases, but it is a common intuition, and many courts have relied on it to overcome the Fourth Amendment claims of convicted offenders. Notice, however, that the right to be free from bodily invasion asserted in those cases has no application to near-miss searching. Under current Fourth Amendment doctrine, no "search" occurs in looking at validly obtained DNA profiles to determine if there are any near matches. That said, the disparate-impact concern remains, at least as a policy matter. The inequity exists with or without near-miss searching, but more people are affected if near-miss searching is performed.

The editorial tosses in a practical argument: "the broader the parameters for partial match searches, the more likely false positives become." But what is a "false positive" here? It is not a false conviction. If a close relative did not deposit the crime-scene DNA, then it is improbable that DNA testing of this individual will establish a total match. Testing a falsely identified relative thus will exculpate him. This is not to denigrate the individual's interest in not becoming a "person of interest" to the authorities, even if the interest is temporary, but such false leads are also a concern for the police because they waste time and resources. If the parameters are set so wide as to include large numbers of false leads, then the police will find the technique frustrating, and it will not be used very often. Furthermore, even if the "parameters" were grossly overinclusive, producing many bad near-matches, most of the false leads could be detected in the laboratory with the existing samples from the crime and the nearly matching convicted offenders. If a brother, son, or father of an actual rapist is in the offender database, then he will have the same Y chromosome as the rapist. If the samples do not match at loci on the Y chromosome, then the near-miss offender can be crossed off the list.  In this way, false leads to close relatives of an individual in the database can be largely eliminated by testing at Y-STRs or Y-SNPs in rape cases (or others with male offenders).
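The Y-chromosome screen described above is simple enough to sketch. The loci and repeat numbers here are invented, and a real protocol would have to tolerate the occasional Y-STR mutation, but the core logic is just filtering on haplotype identity:

```python
def screen_near_misses(crime_scene_y, candidates):
    """Retain only those near-miss candidates whose Y-STR haplotype
    matches the crime-scene sample; a brother, father, or son of the
    true source should match, so non-matches can be crossed off."""
    return [name for name, y_hap in candidates.items()
            if y_hap == crime_scene_y]

# Hypothetical haplotypes: tuples of repeat counts at a few Y-STR loci.
crime_scene = (14, 30, 24, 11)
near_misses = {
    "offender A": (14, 30, 24, 11),  # same paternal line; keep the lead
    "offender B": (15, 29, 24, 10),  # different paternal line; cross off
}
print(screen_near_misses(crime_scene, near_misses))  # ['offender A']
```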

Professor Mnookin concludes that "as a matter of fairness, it ought to be all or nothing." Does this mean that (1) either everybody should be in the law-enforcement identification databases or nobody should be, or rather that (2) either everybody should be in the law-enforcement identification databases or only convicted offenders should be? Whichever is intended, she is right about one thing -- near-miss searching is a step in the direction of a more universal database.

DHK

April 8, 2007 | Permalink | Comments (2) | TrackBack (0)

Monday, April 2, 2007

The Supreme Court and Global Warming

The United States Supreme Court ruled this morning that the EPA has jurisdiction to regulate carbon dioxide and other greenhouse gases from cars.  See Story Here.   Score one for the environment.
---DLF

April 2, 2007 | Permalink | Comments (0) | TrackBack (0)

Wednesday, March 21, 2007

Unholy Alliance?: Docs and Drug Companies

The New York Times has an excellent article today on the many connections between doctors and drug companies.  See Article Here.   Certainly, doctors who receive consulting and speaking fees from large drug companies do not invariably favor those companies, but full disclosure of such ties is probably a very good idea.  The same is true for company supported scientific research.  As in other contexts, it's good policy to follow the money trail.
--- DLF

March 21, 2007 | Permalink | Comments (0) | TrackBack (0)

Tuesday, March 13, 2007

New Mexico debates Pluto's planetary status

Never mind the International Astronomical Union and its "dwarf planet" controversy.  The New Mexico legislature takes up a bill today, the anniversary of Pluto's discovery, to declare Pluto a planet and to designate the day "Pluto Planet Day."

The text of the resolution is available here.

--EKC

UPDATE: The measure passed 70-0.

March 13, 2007 | Permalink | Comments (0) | TrackBack (0)

Wednesday, February 28, 2007

IRBs

Very interesting and informative article in today's Times regarding Institutional Review Boards.  Anybody who has had any sustained experience with these bodies will recognize the IRB phenomenon described in the article.  See article here.
---DLF

February 28, 2007 | Permalink | Comments (0) | TrackBack (0)

Wednesday, February 14, 2007

Evolution a Kabbalist Plot?

The Dallas Morning News reported today that the second ranking member of the Texas legislature, Warren Chisum, circulated a memorandum from Georgia State Rep. Ben Bridges calling for the end of teaching evolution in the schools.  Bridges apparently believes that evolution is a Kabbalist plot.  Really.  According to the newspaper,

"Indisputable evidence – long hidden but now available to everyone – demonstrates conclusively that so-called'secular evolution science' is the Big Bang, 15-billion-year, alternate 'creation scenario' of the Pharisee Religion," writes Mr. Bridges, a Republican from Cleveland, Ga. He has argued against teaching of evolutionin Georgia schools for several years.

Mr. Bridges also supplies a link to a document that describes scientists Carl Sagan and Albert Einstein as "Kabbalists" and laments "Hollywood's unrelenting role in flooding the movie theaters with explicit or implicit endorsement of evolutionism." 

See full story here.   How did this guy get elected?  Scary stuff.
--- DLF

February 14, 2007 | Permalink | Comments (0) | TrackBack (1)

Kansas Evolving

The New York Times reports today that Kansas has removed its anti-evolution guidelines:

The State Board of Education repealed science guidelines questioning evolution, putting into effect new ones that reflect mainstream scientific views. The move was a political defeat for advocates of “intelligent design” who had helped write the standards being repealed. The intelligent design concept holds that life is so complex that it must have been created by a higher power. The board removed language suggesting that basic evolutionary concepts were controversial and being challenged by new research. It also approved a new definition of science, limiting it to the search for natural explanations of what is observed in the universe. The state has had five sets of science standards in eight years, each affected by the seesawing fortunes of socially conservative Republicans and a coalition of Democrats and moderate Republicans.

Perhaps there is hope for us yet.
--- DLF

February 14, 2007 | Permalink | Comments (0) | TrackBack (0)

Saturday, February 10, 2007

Princeton ESP Lab to Close

The New York Times reports today that Princeton's laboratory on Extra Sensory Perception will be closing.  See story here.   Perhaps Princeton should consider opening a forensic science laboratory.  Much of that science operates in the same spirit as its former laboratory, but a forensic lab would have greater upside potential.
--- DLF

February 10, 2007 | Permalink | Comments (0) | TrackBack (0)

Wednesday, January 31, 2007

New Hampshire Superior Court Excludes Fingerprint Testimony

A New Hampshire Superior Court has provisionally excluded the testimony of a fingerprint examiner, citing the examiner's failure to maintain proper documentation and her failure to use a blind verification procedure.  The opinion by Justice Coffey can be found  here.  (Many thanks to Simon Cole for finding the link).
--EKC

January 31, 2007 | Permalink | Comments (1) | TrackBack (1)

Thursday, January 25, 2007

On Gay Sheep

The New York Times has an interesting article today concerning Dr. Charles Roselli's research on "what makes some sheep gay."  Roselli is looking at the physiological mechanisms that might be involved in this phenomenon.  This appears to be pretty routine stuff.  But when news of the research hit the 24-hour news circuit, it led to an explosion of protest against Roselli's presumed desire to identify the biological causes of "gayness" with the apparent intention of someday applying this knowledge to humans.  Many figured that the research might eventually be used either to make gay people straight or to give parents the ability to avoid having gay children in the first place.  In a somewhat odd take on the story, Dr. Paul Wolpe, of the University of Pennsylvania, suggested that "[i]f the mechanisms underlying sexual orientation can be discovered and manipulated ..., then the argument that sexual orientation is based in biology and is immutable 'evaporates.'"  In the law, of course, if sexual orientation is biologically based, this gives it an added boost toward being considered an "immutable characteristic," and thus potentially worthy of heightened judicial protection under the Equal Protection Clause of the Constitution.  The suggestion that such biological characteristics might lose their "immutable" status once technology can manipulate them is a new one, and not one that the law is likely to countenance any time soon.  After all, gender itself is not immutable under this definition, and it receives heightened scrutiny.  Interesting stuff. See the Story Here.
--- DLF

January 25, 2007 | Permalink | Comments (0) | TrackBack (0)

Tuesday, January 16, 2007

The Dire Consequences of Global Warming

Excellent, albeit disturbing, article about the receding ice sheets of Greenland and the dire consequences for our world.  See Article Here.   I suspect that generations in the not too distant future will count this as one of the many sorrowful legacies of the Bush administration.
--- DLF

January 16, 2007 | Permalink | Comments (3) | TrackBack (0)

Monday, January 15, 2007

Death Penalty and Mental Illness

Martin Magnusson, over at the ACS blog, has an excellent discussion of the Supreme Court's decision to grant cert. in Panetti v. Quarterman.  See Here.   The Question Presented in the case is as follows:

Does the Eighth Amendment permit the execution of a death row inmate who has a factual awareness of the reason for his execution but who, because of severe mental illness, has a delusional belief as to why the state is executing him, and thus does not appreciate that his execution is intended to seek retribution for his capital crime?

-- DLF

January 15, 2007 | Permalink | Comments (0) | TrackBack (0)

Tuesday, January 2, 2007

Worth Reading!

A couple of recent articles in The New York Times are worth reading for those interested in science policy. 

In today's Times, Dennis Overbye discusses the illusion of "free will."  See Full Article Here.   It is a terrific overview of the subject, and is very cleverly written.

And in this past Sunday's Times, Simon Cole wrote on the illusion of "the science of partial latent fingerprint identification."  See Full Article Here.   Simon offers a wonderfully erudite discussion of the art of fingerprint identification.
--- DLF

January 2, 2007 | Permalink | Comments (0) | TrackBack (0)