Sunday, April 22, 2012
Cadaver dogs are back in the news with the revelation that a cadaver-sniffing dog detected the odor of human remains in a basement near the SoHo home of Etan Patz, a 6-year-old who disappeared in 1979 en route to a New York City bus stop. The Patz case again raises the issue of the reliability of cadaver dogs, which I previously addressed on this blog (here and here) during the Casey Anthony trial. Two problems with cadaver dogs are that:
(1) Sometimes they get waylaid by decaying organic matter of any kind (e.g., a rotten log), and similar chemical signatures make it impossible for them to distinguish between humans and pigs. Thus, handlers are taught always to be on the alert for false positives; and
(2) No one knows exactly what dogs are smelling when they indicate the possible presence of remains.
Of course, the Patz case raises the separate question of whether dogs can really smell 33-year-old remains. According to an article,
Researchers from the University of Alabama, hoping to zero in on how long the scent of death might linger at a crime scene, designed a test for the state police’s cadaver dogs. A single human vertebra, more than 30 years old, was buried 12 inches deep. The dogs were let loose across a 300-by-150-foot plot, and several succeeded in sniffing out the dry bone fragment. So it’s certainly possible that the canines recruited for Etan Patz’s search could detect parts of a 33-year-old body hidden in the basement on Prince Street. A variety of factors, however, mediate the strength of the death odor and how quickly it dissipates. Temperature, humidity, the softness or hardness of the ground, and the amount of degrading matter all play a role, as does the physiology of the dog. (A heavily panting pooch can’t scent very well.)
What this reveals is that there is at least some research out there regarding the reliability of cadaver dogs, and yet, I haven't seen any evidence of courts really using this research to create a rigorous and scientifically valid test for when cadaver-sniffing dog detection creates probable cause or is admissible at trial. Indeed, in the case on the issue that I previously cited, the court tried to retrofit the test that it applies for scent lineup dogs to cadaver dogs while acknowledging that what the two types of dogs do is very different.
In a recent post, Goldberg, a Visiting Assistant Professor at the Penn State University Dickinson School of Law, addresses the Supreme Court's grant of certiorari in Florida v. Harris, where the issue is: "Whether an alert by a well-trained narcotics detection dog certified to detect illegal contraband is insufficient to establish probable cause for the search of a vehicle." In the post, Goldberg argues:
Courts consistently and expressly eschew technical conceptions of probable cause in order to provide police officers with flexibility to exercise their judgment in unfolding situations. In addition, courts focus on whether an officer has a reasonable belief that a suspect has committed or is committing a crime. This metric allows for probable cause to be found in situations where one reasonable officer might assess an 80% likelihood that a suspect is driving drunk, for example, even if another reasonable officer might think there is only a 40% likelihood. We might be tempted to assume the courts require that a reasonable officer be able to believe a crime has been committed by greater than a 50% likelihood, but this has not been made explicit. All an officer must prove to a court assessing a vehicle search is a reasonable ground for belief of guilt. Further, when a court is making a probable cause determination for itself in determining if a warrant should issue, it must decide only if there is a "fair probability that contraband or evidence of a crime will be found in a particular place." What is a fair probability?
In the context of drug detection dogs, where we have actual data on reliability, assigning a numerical value to probable cause — or at least to the maximum false positive percentage upon which an officer can rely — would add much needed clarity to Fourth Amendment law. It also does not undermine police officers’ ability to use their intuition, because the event precipitating a search is not an officer’s informed judgment, but the alert from a dog.
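To see why a cap on false-positive rates matters so much, consider a back-of-the-envelope Bayes' rule calculation. All of the numbers below are hypothetical illustrations, not data about any actual dog: even a dog that alerts on 90% of cars carrying contraband can produce a surprisingly low probability that contraband is actually present, because innocent cars far outnumber guilty ones.

```python
def posterior_given_alert(base_rate, true_positive_rate, false_positive_rate):
    """P(contraband | alert) via Bayes' rule.

    base_rate: fraction of stopped cars actually carrying contraband
    true_positive_rate: P(alert | contraband present)
    false_positive_rate: P(alert | no contraband)
    """
    p_alert = (true_positive_rate * base_rate
               + false_positive_rate * (1 - base_rate))
    return (true_positive_rate * base_rate) / p_alert

# Hypothetical numbers: 10% of stopped cars carry contraband; the dog
# alerts on 90% of cars that do and 20% of cars that do not.
p = posterior_given_alert(0.10, 0.90, 0.20)
print(round(p, 2))  # 0.33
```

Under these assumed numbers, an alert implies only about a one-in-three chance that contraband is present. Whether one-in-three is a "fair probability" is, of course, precisely the question the courts have declined to answer.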
Professor Orin Kerr takes the opposite position. In a recent essay, he argues:

Probable cause is one of the fundamental concepts of Fourth Amendment law, but the Supreme Court has refused to quantify it. The Court has described probable cause as a “fair probability,” but it has declined to explain just how likely a “fair” probability might be. Does a “fair probability” mean a 50% likelihood? A 40% likelihood? And why won’t the Justices say? Are they just afraid of math?
This essay argues that courts should not quantify probable cause because quantification would produce less accurate probable cause determinations. The core problem is that information critical to probable cause is often left out of affidavits in support of warrants: Although affidavits say what techniques police tried that added to cause, they generally leave out both what the police tried that did not add to cause and what techniques the police never tried. Determining probable cause accurately often requires this information, however. By leaving probable cause unquantified, current law enables judges to use their intuition and situation-sense to recognize when missing information is likely important to assessing probable cause. Quantification would lead to less accurate probable cause determinations by disabling those intuitions, creating the false impression that the information provided in the affidavit is the only relevant information. Cognitive biases such as the representativeness heuristic and anchoring effects would allow the government to create the false impression that a low-probability event was actually a high-probability event. To ensure accurate probable cause determinations, then, probable cause should remain unquantified. The result is counter-intuitive but true: Knowing less about probable cause improves how the standard is applied.
Professor Kerr's essay led me to think about that other heavy hitter of American criminal law that courts have refused to quantify: guilt beyond a reasonable doubt. The position that American courts take with regard to quantifying guilt beyond a reasonable doubt was succinctly stated in McCullough v. State, 657 P.2d 1157 (Nev. 1983): "The concept of reasonable doubt is inherently qualitative. Any attempt to quantify it may impermissibly lower the prosecution's burden of proof, and is likely to confuse rather than clarify."
The classic example of what I tell students not to do in this regard can be found in State v. Casey, 2004 WL 405738 (Ohio App. 2 Dist. 2004), in which the prosecutor told the jury:
"All right. I like to make it kind of like a football field where you start at one end and you go to the other. If you go all the way and make a touchdown, that's like a hundred percent. That's beyond no doubt. I like to say reasonable doubt is kind of like 75 percent. Somewhere between 75 and 90. Now, you're not going to hold me to going all the way for a touchdown, are you?"
I think that we can all see why such a comment would be confusing to the jury and could impermissibly lower the burden of proof (So, beyond a reasonable doubt is like kicking a field goal?). But what if the prosecutor's comments were not directed toward the jury but were instead directed toward the judge in a bench trial?
Case law is replete with statements that different standards apply to bench trials than apply to jury trials, with judges allegedly not being subject to the same biases and weaknesses as jurors. The one that I've focused upon the most is the Bruton doctrine, which precludes the admission, at a joint jury trial, of a co-defendant's confession that facially incriminates the other defendant unless the co-defendant testifies at trial. The theory here is that if Carl confesses, "Dan and I robbed the bank," the jury, upon hearing this statement at trial, will use it as evidence of Dan's guilt even if given a jury instruction to use the confession only as evidence of Carl's guilt.
The cognitive bias at play here is ironic process theory, the theory that if I tell you not to think of a polar bear, you will inevitably think of a polar bear. Similarly, if I tell a juror not to use Carl's confession as evidence of Dan's guilt, the juror will inevitably use the confession as evidence of Dan's guilt. But, despite evidence to the contrary (such as Lee v. Illinois), courts consistently hold that the Bruton doctrine doesn't apply to bench trials because judges can compartmentalize and are not subject to ironic process theory (holdings that are likely caused by the overestimation effect).
We don't quantify guilt beyond a reasonable doubt for jurors because such quantification could confuse jurors and impermissibly lower the burden of proof for a prosecutor. But it is a judge, not a jury, that determines whether an alert by a narcotics detection dog creates probable cause. Should we be worried that a judge, rather than a juror, would be confused by quantification of probable cause, and that such quantification would impermissibly lower the burden of proof for the prosecutor making arguments to the judge? Should we, as Professor Kerr argues, be concerned about the cognitive biases that quantification would create for judges?
The answers to these questions might very well be "yes," and they might very well be "no," but if they are "yes," shouldn't they also be "yes" in the Bruton context? I have no problem trusting judges more than jurors, and I also have no problem recognizing that judges are subject to the same biases and fallacies that plague jurors and us all. But shouldn't courts be consistent in answering these questions?