
April 24, 2007

The Meaning of Error

I am sitting at a meeting of a National Academy of Sciences Committee on Identifying the Needs of the Forensic Science Community. Forensic scientists addressing the committee have said that it is not a "false positive" or an "error" when a test (such as microscopic hair comparison or ABO blood typing) is "correct" in the sense that it gives the best result it can (two hairs really are indistinguishable under the microscope, the blood really is type A). Judge Harry Edwards, the co-chair of the committee, disputed this terminology, saying that if more discriminating mitochondrial DNA testing correctly establishes that the hairs actually come from different people, then the microscopic comparison was an error.

Who is right? Well, both. The laboratory has not erred in the sense that it has applied the test correctly. This is an internal perspective on the process. Judge Edwards also is right. The test has erred from an external perspective. If a court convicts an innocent man because the microscopic features of his hair match, that is a substantive error. Would the forensic scientists insist that an eyewitness who looks carefully and has a good memory but nevertheless misidentifies an assailant has not erred?

The point is that there are different sources of possible error. If we are interested in the error rates of a properly performed test, then the external perspective is appropriate. Such a test has a measurable sensitivity and specificity, and we need to know these statistics to evaluate its validity and utility. If we are interested in proficiency or reliability, however, the internal perspective applies.
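
To make those two external-perspective statistics concrete, here is a minimal sketch in Python with purely invented counts (they are not drawn from any study of hair comparison). Sensitivity is the proportion of true same-source pairs the test declares indistinguishable; specificity is the proportion of true different-source pairs it declares distinguishable.

# Minimal illustration with invented counts; not data from any real study.
# "Positive" means the test declares two hairs indistinguishable;
# ground truth is whether the hairs actually share a common source.
true_positives = 90     # declared indistinguishable, same source
false_negatives = 10    # declared distinguishable, same source
true_negatives = 880    # declared distinguishable, different sources
false_positives = 20    # declared indistinguishable, different sources

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90
specificity = true_negatives / (true_negatives + false_positives)   # about 0.98

print(f"sensitivity = {sensitivity:.2f}")
print(f"specificity = {specificity:.2f}")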

DHK


Comments

One source of error not discussed in this post is non-forensic scientists' (more commonly social scientists', attorneys', and judges') lack of understanding of the forensic discipline, including its limitations and terminology. In this post, "two hairs are indistinguishable" seems to be accepted by the committee as saying they are the same and came from the same source. From the forensic scientist's view, the hairs may be indistinguishable (providing this is a term used by hair/fiber experts), but this is not a positive identification. The error occurs when attorneys and courts interpret this statement as an identification.
An integral part of the examination methodology is to recognize any limitations inherent in the evidence. The result may be limited to "indistinguishable," "consistent," or "similar." What do these terms mean? It would be appropriate for opposing counsel or the presiding judge to ask for clarification when these terms are used.
I'm not a hair and fiber expert. I do not know this discipline's terminology or conclusion scale. From reading this post, I do not know whether the committee members are using a term from the discipline or whether semantics are in play.
I am responding to this post because I have read a great many articles by social scientists and legal scholars. The majority of these writings reflect a lack of understanding of the limitations of evidence, the terminology, and the weight given in a discipline's conclusion scale.
The attacks on proficiency test results also reflect this lack of understanding. Meaningful commentary benefits both forensic and social scientists. However, it is apparent from the articles and the discussion in this post that foundational research into the tenets of the discipline, its methodology, terminology, and limitations may not have been done.
Lacking this fundamental knowledge base prevents an accurate assessment of errors. It has also created an abyss between social and forensic scientists. Accusations of denying the existence of errors have been lodged against forensic scientists. Any perceived resistance by the forensic community is not to proficiency testing itself; rather, it is to testing overseen by those who lack the foundational knowledge of a discipline, as evidenced by publications of social scientists espousing error rates that are not based on the methodology or framework that the forensic scientist employs in the testing and/or examination process. Real-world application (which involves the limitations of some evidence) has to be the foundation for any study regarding error rates.
I certainly am not criticizing the committee or Mr. Kaye. Healthy discussion by all those involved needs to take place. I am taking the opportunity to explain that errors can be caused by another external perspective: failure of the attorneys, the judges, and social scientists to understand the foundational tenets of the forensic discipline and the meaning of its terminology, or disregard of the discipline's terminology, and, finally, failure to recognize that there are limitations to some evidence.
As a forensic scientist, it is important to me that errors be identified and remedied. I really do not know any scientist who wants to put an innocent person in prison or deny a beneficiary what he or she is entitled to. I also personally feel the perspectives of the legal scholars and social scientists can benefit the forensic community. However, the work of the National Academy of Sciences Committee on Identifying the Needs of the Forensic Science Community, and published articles by the scholars and social scientists, should reflect the reality of the discipline discussed. I truly believe we all are striving for the same goal: to protect society from criminals and ensure the innocent are not convicted.

Posted by: Jan Kelly | Apr 30, 2007 9:45:09 AM

Criticism is welcome. To be clear, let me note that the committee has not taken any position on anything (yet). It is a large committee -- its members include forensic scientists -- and it seems to be in data-gathering mode. See http://www8.nationalacademies.org/cp/projectview.aspx?key=48741.

As for the definition of "false positive error," the committee's co-chair, Constantine Gatsonis, a leading authority on the design and analysis of clinical trials of diagnostic and screening modalities, pointed out that the issue is one of terminology. If "the hairs are indistinguishable" means that and nothing more, then the fact that two indistinguishable hairs come from different individuals does not make the statement an error. However, if the statement is used to conclude that the hairs come from the same individual, then it is a false positive (when in fact the hairs come from different people).

Of course, the only reason the proponent of the evidence is introducing it is to induce the judge or jury to draw the inference of identity (that happens to be false in this case). So what is the responsibility of the hair examiner in the courtroom? To testify that "the hairs are indistinguishable" and then sit down (if not cross-examined)? Or to explain that the similarity in the hairs does not mean that they originated from the same individual? I am inclined to think that full disclosure should be required -- even when counsel would prefer to leave things ambiguous.

Posted by: DHK | May 2, 2007 8:12:11 PM

Prof. Kaye:
I agree with your comments, and I would like to make one more comment regarding full disclosure. A forensic expert is only one participant in our adversarial court system. The testimony a witness offers depends upon the questions asked. Lack of full disclosure is due more to the questions posed by the prosecutor and/or defense attorney, as well as by the court. When the witness is forced to answer only "yes" or "no," or the questions are limited in scope, then only part of the conclusion is entered into court. When testimony is allowed in narrative form, the examiner has the opportunity to explain what "consistent" means, the limitations, and so on.
In court, I am not in the driver's seat; I'm not even in the front seat. Those positions are occupied by the judge and the attorneys. Therefore, it does not make much sense to blame the person in the back seat when the car hits a tree.

I personally feel all conclusions given during testimony should be explained in detail; then the presiding judge can determine whether the testimony meets Rule 702. In most cases, the testimony will still meet Rule 702 and Daubert. However, full disclosure allows the court and the jury to have a better understanding of the meaning of the conclusion and the limitations of the evidence, and to be in a much better position to determine the weight or significance of the testimony.


Posted by: Jan Kelly | May 3, 2007 8:50:40 AM

Professor:

You said, "Such a test has a measurable sensitivity and specificity, and we need to know these statistics to evaluate its validity and utility."

The problem with this statement is that, in many cases, we need to look behind the sensitivity and specificity numbers to see how they were obtained. Many so-called tests have published sensitivities and specificities; however, in some instances, the "gold standard" against which the test was compared for criterion validity purposes may be of dubious validity itself. This is a major concern with the many tests that purport to assess malingering. Consider the forensic psychology malingering test, the M-FAST (Miller Forensic Assessment of Symptoms Test). Studies of the accuracy of this test often use the SIRS (Structured Interview of Reported Symptoms) as the criterion against which it is measured. Well, we all know that there is no single instrument that can tell whether people are malingering, and there is no population of admitted malingerers that can be recruited for malingering studies. So any sensitivity/specificity numbers for the M-FAST based on studies using the SIRS as the sole criterion measure are of questionable value.

Look too at the studies of fMRI in the detection of deception. They use "fake" liars (usually college students asked to participate in the study for credit). Surely, these persons differ substantially from real-life criminals and suspects, so even if the fMRI has impressive accuracy numbers in these analog studies, there are serious concerns about the external validity of such studies.

--------

Response: Good point. Both statistics (or variants on them) are crucial to measuring "accuracy," but if the estimates of these quantities are biased, they can be misleading. DHK 5/17/07
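
A minimal numerical sketch, with invented accuracy figures and assuming (purely for illustration) that the two instruments err independently, shows how an imperfect reference standard can make the apparent sensitivity of a new test differ from its true sensitivity:

# Illustrative numbers only; not estimates for any actual test.
true_prevalence = 0.30               # fraction of subjects who are truly positive
test_sens, test_spec = 0.90, 0.90    # true accuracy of the new test
ref_sens, ref_spec = 0.80, 0.95      # accuracy of the imperfect reference standard

p_pos, p_neg = true_prevalence, 1 - true_prevalence

# Probability that both the new test and the reference call a subject positive,
# and probability that the reference calls a subject positive (errors independent).
p_both_pos = p_pos * test_sens * ref_sens + p_neg * (1 - test_spec) * (1 - ref_spec)
p_ref_pos = p_pos * ref_sens + p_neg * (1 - ref_spec)

apparent_sensitivity = p_both_pos / p_ref_pos   # about 0.80, versus a true 0.90

print(f"true sensitivity     = {test_sens:.2f}")
print(f"apparent sensitivity = {apparent_sensitivity:.2f}")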

Posted by: Kevin | May 9, 2007 2:28:02 PM

I totally agree with Jan Kelly on the issue of attorneys' lack of understanding of the forensic discipline. Lacking this fundamental knowledge base prevents an accurate assessment of errors.

Posted by: kenya masai | Sep 15, 2007 3:06:02 PM
