The Yale Law Journal
Volume 126 (2016-2017), Forum

Prosecutors Respond to Calls for Forensic Science Reform: More Sharks in Dirty Water

18 Jan 2017
Adam B. Shniderman

Initial reactions to the PCAST report from the law enforcement community leave little hope that it will inspire any more reform than the NRC Report has. In the wake of the PCAST report, several law enforcement officials and organizations have commented on the findings and recommendations. Attorney General Loretta Lynch had the following to say:

We remain confident that, when used properly, forensic science evidence helps juries identify the guilty and clear the innocent, and the Department believes that the current legal standards regarding the admissibility of forensic evidence are based on sound science and sound legal reasoning . . . . While we appreciate [the PCAST report’s] contribution to the field of scientific inquiry, the department will not be adopting the recommendations related to the admissibility of forensic science evidence.6

The National District Attorneys Association (NDAA) released a similarly critical statement. It suggested that adequate safeguards exist in the criminal justice system to prevent flawed forensic science from entering the courtroom or convicting the innocent. In doing so, the NDAA relied on the supposed strength of judges as evidentiary gatekeepers and the ability of defense attorneys to conduct effective cross-examinations.7 The NDAA statement concluded that, “Adopting any of [PCAST’s] recommendations would have a devastating effect on the ability of law enforcement, prosecutors and the defense bar, to fully investigate their cases, exclude innocent suspects, implicate the guilty, and achieve true justice at trial.”8

These statements by Attorney General Lynch and the NDAA highlight several fundamental issues facing the criminal justice system in its use of forensic science. First, both Attorney General Lynch and the NDAA ignore the realities of the criminal justice system. Wrongful convictions and miscarriages of justice occur more often than one might expect,9 and neither the judiciary nor vigorous cross-examination is sufficient to prevent flawed science from convicting innocent persons. Second, these statements and the underlying attitudes and practices suggest a misplaced emphasis on convictions over truth and fairness in the criminal justice system.

The actual number of wrongful convictions is a figure that is unlikely to ever be known. However, exonerations by the Innocence Project and those contained in the National Registry of Exonerations give us a sense of the problem. To date, 344 people have been exonerated by DNA evidence for crimes they did not commit10 and more than 1,500 have been exonerated by other means.11 The Innocence Project notes that of the first 225 exonerations, through 2009, more than fifty percent involved unvalidated or improper forensic science as a contributing factor in wrongfully convicting the defendant.12 Attorney General Lynch’s response to the PCAST report would suggest that these, and any subsequent cases of wrongful conviction in which flawed forensics is a contributor, are the result of either incompetence or individual human error, such as improper collection, processing, or interpretation of evidence. This belief has been referred to as the “Bad Apples” explanation.13 However, as the 2009 NRC Report, the 2016 PCAST Report, and substantial academic research over the past several decades make clear, the “Bad Apples” view is a simplistic mischaracterization of complex, serious, and systemic problems.14 Furthermore, both reports and the existing academic literature highlight the difficulty, if not impossibility, of meeting the conditions necessary to satisfy Attorney General Lynch’s assertion regarding “proper use.” Both the NRC Report and the PCAST Report cast serious doubts on the “foundational validity”15 and “validity as applied”16 of several feature comparison methods, including bite mark, shoe print, fingerprint, and firearm/toolmark analysis.17 Given this, how can we, as Attorney General Lynch suggests, “proper[ly] use” unvalidated techniques and unsubstantiated testimony?

This Essay argues that, contrary to Attorney General Lynch’s statements, there is no “proper” way to use flawed and unvalidated forensic science evidence. It begins by exploring how lack of scientific knowledge and pro-prosecution biases undermine the judiciary’s ability to act as effective gatekeepers of scientific evidence. It also contends that lack of scientific knowledge renders cross-examination by defense counsel unlikely to address the weaknesses and flaws in some scientific evidence. This Essay then turns its attention to the NDAA’s exaggerated concerns about the criminal justice system’s ability to function effectively in the absence of certain forensic techniques. Examining the history of forensic DNA typing, this Essay demonstrates that the legal system can continue to function while rendering inadmissible flawed scientific evidence and exaggerated claims by forensic examiners. Finally, this Essay concludes that despite the efforts of academics and the possibility of improving forensic science through research and collaboration, key law enforcement officials’ attitudes render it unlikely that meaningful reform can happen. This Essay calls for a more open-minded approach and willingness to work with academics and researchers to improve the criminal justice system and reduce miscarriages of justice.

I. forensic science, judges, and lawyers

Attorney General Lynch has misplaced faith in judges as arbiters of the quality of scientific evidence. In Daubert v. Merrell Dow Pharmaceuticals, Inc., the Supreme Court offered its interpretation of the application of the Federal Rules of Evidence on the admissibility of scientific evidence.18 The Court developed tests to assess the relevance, reliability, and admissibility of scientific evidence. Its test for reliability recommends assessing five factors pertaining to the evidence at issue.19 This standard makes judges the “gatekeepers” for determining the admissibility of testimony related to scientific evidence. Yet, this procedural safeguard has proven ineffective. Two possible explanations exist for this phenomenon: lack of judicial scientific aptitude and systemic pro-prosecution bias.

Judges are generalists who often have little training in the sciences.20 The 2009 NRC Report noted:

The adversarial process relating to the admission and exclusion of scientific evidence is not suited to the task of finding “scientific truth.” The judicial system is encumbered by, among other things, judges and lawyers who generally lack the scientific expertise necessary to comprehend and evaluate forensic evidence in an informed manner . . . .21

The NRC Report’s assertion came as no surprise. Several years before the NRC Report was released, scholars had already begun to question whether trial court judges were equipped to assess highly technical scientific claims.22 Indeed, trial judges themselves have admitted their inability to handle complex scientific issues.23 In a 2001 study, Sophia Gatowski and her colleagues surveyed 400 state trial court judges, 191 (48%) of whom said they felt they had been inadequately prepared in their education to handle the types of scientific evidence they faced on the bench.24 Gatowski and Richardson found that an overwhelming majority of the judges surveyed could not correctly demonstrate a basic understanding of two of the Daubert criteria: falsifiability and error rates.25

With respect to falsifiability (also known as the testability of the technique), several responses showed an alarming lack of familiarity. Judges said, for example, “I would want to know if the evidence was falsified,” and “I would look at the results and determine if they are false.”26 Chief Justice Rehnquist foreshadowed these problems in his opinion in Daubert, where he observed: “I defer to no one in my confidence in federal judges; but I am at a loss to know what is meant when it is said that the scientific status of a theory depends on its ‘falsifiability,’ and I suspect some of them will be, too.”27 Falsifiability, drawn from Karl Popper’s work on the philosophy of science, simply requires that a theory or hypothesis be capable of being proven false through empirical testing.28 Thus, in deciding admissibility, judges should focus on whether the underlying theory or method of the forensic discipline can be and has been tested, rather than on whether the results in a specific case are incorrect or have been altered. The judges surveyed exhibited a similar lack of understanding regarding error rates. Only 15 of 364 judges demonstrated even a basic understanding of error rates (e.g., that a technique with too high an error rate should be rejected because of the high risk of being wrong or making a mistake).29 Few understood the notion that an error rate has two components—false negatives (when a test identifies a true positive as a negative) and false positives (when a test identifies a true negative as a positive).30 The study further suggested that judges’ inability to operationalize and implement the Daubert criteria could create inconsistent decisions regarding admissibility, meaning a technique that passes muster in one judge’s court could very well fail the test in a different court.31

Lack of scientific aptitude may not be the only factor at play when explaining the judiciary’s failure to keep bad science out of courtrooms. We should also carefully consider the possibility of a systemic pro-prosecution bias on the bench. This bias may stem from the fact that a significant number of judges are former prosecutors. For example, forty-three percent of President Obama’s nominees to federal trial courts were previously state or federal prosecutors, while only fifteen percent were public defenders.32 The disparity is seen in state courts as well. A 2009 study found that fifteen percent of state supreme court justices had experience as public defenders, while thirty-three percent of the justices had experience as prosecutors.33 In Cook County, Illinois, seventy-five percent of judges hearing felony cases had served as prosecutors, and many of them had served only as prosecutors before becoming judges.34

Other factors may also contribute to a pro-prosecution bias and tough-on-crime approach, including judges’ desire to be re-elected in those states that hold judicial elections. In a 2015 report, the Brennan Center for Justice synthesized the research from a number of studies examining the impact of judicial elections on criminal cases.35 The report found “that re-election and retention pressures systematically disadvantage criminal defendants.”36 While the mere fact that many judges were previously prosecutors and/or seek to be re-elected does not guarantee bias, empirical research suggests that bias against defendants does contribute to admissibility rulings.37

With respect to allowing flawed evidence, at least one scholar has noted that, while courts rigorously engage in gatekeeping in civil cases, there is no parallel approach in criminal cases.38 As a result, criminal defendants tend to lose admissibility challenges to forensic evidence.39 This systemic bias also manifests in judges’ efforts to exclude defense experts from court. For example, in one case, a judge excluded the testimony of the defendant’s expert, who was an expert in the sociology and history of science and technology.40 The defense proposed that the expert, Dr. Cole, testify to the validity and reliability issues associated with latent fingerprint evidence. Although the admissibility of scientific evidence in New York is governed by Frye v. United States,41 which differs from Daubert, the court noted that “[e]ven applying the Federal Courts Daubert Standard what Dr. Cole has offered here is ‘junk science’ . . . . [It is] interesting but too lacking in scientific method to even bloody the field of fingerprint analysis as a generally accepted scientific discipline.”42 The court’s exclusion of Dr. Cole’s testimony was, at best, hypocritical. First, the claims at the heart of Dr. Cole’s work and his expert testimony are that latent fingerprint identification’s reliability, accuracy, and validity are largely unknown.43 One could reasonably conclude from this that latent print identification has largely lacked scientific method. Second, Dr. Cole’s criticisms are echoed by the NRC Report and the PCAST report, clearly indicating that he is not a rogue, contrarian academic, but rather one of many academics who have raised concerns about latent fingerprint identification.44 Courts have exhibited similar resistance to defense-offered expert evidence regarding the reliability of human memory and eyewitness testimony, excluding testimony on a variety of grounds.45 In several cases, courts have held that expert testimony on eyewitness identification is not sufficiently scientifically reliable to be admissible.46 Others have found that it is within the court’s province to instruct the jury on the reliability of eyewitness identification, making the admission of expert testimony improper and unnecessary.47

We do not live in a perfect world in which judges are universally capable of using Daubert to distinguish good science from bad; personal bias, insufficient scientific knowledge, or both stand in the way. As such, Attorney General Lynch’s faith in the judiciary is both misinformed and misplaced.

Cross-examination is not without its faults either. Although the NDAA places great faith in cross-examination as an effective means of highlighting weaknesses in evidence, it is unlikely that defense lawyers are any more adept at addressing the shortcomings of forensic evidence than judges are. As Professor David Faigman notes, lawyers generally lack training in scientific methods and usually struggle to articulate scientific concepts.48 Half-jokingly, Faigman comments that nothing puts law students to sleep faster than putting numbers on the board.49 Given these facts, how can we expect defense lawyers, many of whom are overburdened with larger-than-recommended caseloads,50 to subject forensic experts to meaningful cross-examination that would highlight the potential methodological flaws, lack of scientific validity, and possibility for procedural errors? Indeed, even if lawyers could accomplish such a feat, serious doubts would remain about the jury’s ability to understand the significance of these examinations and the subtleties of these attorneys’ challenges. As fictional trial consultant Rankin Fitch points out, the average juror isn’t King Solomon.51

In an ideal world, Daubert may be sufficient to protect criminal defendants from the perils of flawed forensic science. However, a lack of scientific aptitude and pro-prosecution bias render judges ineffective at appropriately admitting and excluding forensic science evidence under Daubert. Additionally, lack of scientific familiarity among lawyers and jurors makes cross-examination unlikely to adequately highlight the flaws in some forensic science disciplines. The best path forward for the criminal justice system involves scientific reform outside of the courtroom.

II. scientific reform can proceed

The NDAA’s hyperbolic response to the PCAST Report borders on contempt for truth and justice. The NDAA implies that the criminal justice system will come to a screeching halt and the guilty will roam free if forensic science disciplines are held to the standards in the PCAST report and forced to reform their practices and procedures. History tells us this is not the case. Evidence can be meaningfully challenged and excluded, scientific disciplines reformed, and eventually evidence from the discipline admitted again without the Four Horsemen roaming the streets of Anytown, USA. Forensic DNA typing, now seen by many as the “Gold Standard” of forensic evidence, faced significant challenges in the late 1980s and early 1990s. These challenges echo many of the same problems faced by disciplines discussed in both the PCAST and NRC reports today—most notably latent fingerprint analysis. And yet, somehow, the criminal justice system remained operational even as forensic DNA underwent a radical transformation.

In fact, DNA profiling is an excellent starting point for discussing how best to reform scientific evidence.52 Forensic DNA as we know it is the product of the Anglo-American legal system interacting with science and technology over the course of a decade.53 The “DNA Wars” of the late 1980s and early 1990s played an essential role in the development and refinement of forensic DNA testing.54 Following Jeffreys’s discovery of DNA testing procedures in 1984, the technique was quickly implemented by law enforcement officials. First used in a U.S. courtroom in 1987, DNA evidence was accepted with little challenge in jurisdictions across the nation shortly after.55 By the end of 1988, forensic DNA evidence had been admitted in nearly 200 cases.56 As Justice Wilkins of the Supreme Judicial Court of Massachusetts noted, DNA acquired an “aura of infallibility,”57 much like fingerprint and other forensic disciplines that have now come under fire in the NRC and PCAST reports.58 Yet, DNA soon came under criticism from a series of lawyers, academics, and expert witnesses. In an article in the Virginia Law Review, Professor William Thompson and Simon Ford aptly framed the admissibility debate. In addition to noting several issues that needed to be resolved,59 they concluded that the stakes were high: courts had to balance the danger that excessive caution would keep valuable evidence from being admitted in a timely manner against the risk that evidence accepted quickly and uncritically might prove less reliable than promised.60 In other words, courts must strike a balance between the risk of letting the guilty go free and convicting the innocent.

In 1989, Barry Scheck and Peter Neufeld, who would later co-found the Innocence Project, mounted the first serious challenge to the validity and admissibility of DNA evidence in People v. Castro.61 Subsequent challenges followed in State v. Schwartz62 and United States v. Yee.63 While the defense largely lost the battle in these cases, this series of challenges led to the 1992 National Research Council (NRC) report “DNA Technology in Forensic Science.”64 The report expressed many of the same concerns about DNA evidence that have since been expressed about other disciplines in the 2009 NRC report and the 2016 PCAST report, namely concerns about the reliability and validity of the processes.65 Specifically, the 1992 NRC report noted the potential for errors arising from improperly maintained equipment, reagents, and specimen contamination.66 The report made several recommendations, including calls for scientifically reliable and precise procedures, proficiency testing and audits, lab accreditation, duplicate testing of samples, and further exploration of the issue of population substructure.67 It also recognized the lack of, and need for, standardization in laboratory procedures.68 Following the 1992 NRC report, several jurisdictions ruled DNA evidence inadmissible, including California and Massachusetts.69

In People v. Barney,70 a California court held that the statistical significance of a match between the defendant’s DNA and the sample taken from the crime scene did not meet the standard for admissibility.71 In Commonwealth v. Lanigan, the Supreme Judicial Court of Massachusetts issued an opinion that highlighted the debate surrounding DNA evidence, particularly with regard to population substructure, and held that the evidence failed to meet the Frye standard, which Massachusetts used at the time.72 These cases helped move the debate from the courtroom into scientific journals, which focused on how to create lab standards and understand population substructure.73 Following changes in lab standards, accreditation, and additional research into subpopulations, the debate was laid to rest.74 At this point, courts were once again prepared to admit DNA evidence, its scientific reliability having been enhanced and its evidentiary status fortified. Certainly, if the criminal justice system can survive the challenge and exclusion of what is likely to be the most conclusive forensic feature comparison discipline, it can survive the exclusion of less certain and reliable forensic science disciplines.

Finally, the NDAA’s hostile attitude toward reform suggests an emphasis on convictions and a belief that the criminal justice system’s current error rate is “good enough.” No longer can we deny that the system makes mistakes—wrongful convictions happen. At a minimum, we have more than 300 pieces of proof that the system isn’t perfect. Surely, any system that relies on human judgment (e.g., juror judgment) will make mistakes. It would be idealistic and naïve to hope that the criminal justice system would never, in practice, convict an innocent person or free a guilty person. However, settling for a system reliant upon unvalidated and flawed forensic science that holds such persuasive power over juries is antithetical to the concepts of justice and fairness.

Conclusion

While academics and some practitioners work to validate and better understand some forensic science disciplines, such as fingerprint identification, those in positions of power seem content to take a steadfast, obstructionist approach that will likely lead to further miscarriages of justice. Ultimately, the responses to the PCAST report from Attorney General Lynch, the NDAA, and others75 demonstrate a disturbing attitude towards justice and a lack of appreciation for the realities of the criminal justice system and the scope of the problems facing forensic science today. The PCAST report offered a number of suggestions for restructuring and reforming forensic science to ensure the scientific validity of forensic feature-comparison disciplines. Until law enforcement officials and forensic science organizations and practitioners are open to engaging in meaningful reform, little progress will be made and miscarriages of justice are likely to continue as a result of flawed and unvalidated forensic evidence. For now, the path forward for forensic science seems littered with obstacles and hazards.

Adam B. Shniderman is an Assistant Professor, Department of Criminal Justice, Texas Christian University. B.A., Amherst College, cum laude, Law, Jurisprudence, and Social Thought; Ph.D., University of California – Irvine, Criminology, Law and Society. The author would like to thank Simon Cole for his helpful comments on this article.

Preferred Citation: Adam B. Shniderman, Prosecutors Respond to Calls for Forensic Science Reform: More Sharks in Dirty Water, 126 Yale L.J. F. 348 (2016), http://www.yalelawjournal.org/forum/prosecutors-respond-to-calls-for-forensic-science-reform.