Initial Effort Shows Less Than Half of Psychological Studies are Reproducible

This past summer, the Open Science Collaboration published a piece in Science detailing its effort to reproduce key findings from articles in three top psychology journals. Putting aside the nuances of actually measuring reproducibility, the group was able to replicate fewer than half of the 100 studies attempted (roughly 40%). To many, this low number seems damning. How can a field like psychology, which claims to be grounded in the scientific method, fare so poorly on a purportedly basic scientific requirement? To others, the low number represents the scientific method in action. For them, the real question is why anyone would take a solitary publication as established fact. Only after many trials and replications do preliminary scientific findings become reliable. Reproducibility is not vetted at publication; it is vetted over years of additional work in the field.

Science Gains Acceptance Over Time

For psychologists and scientists generally, these findings crystallize something they deal with every day: uncertainty in data and its interpretation. Indeed, most researchers are quite comfortable with the fuzziness of cutting-edge research. For litigators, however, the findings only exacerbate the awkwardness of science in the courtroom. Justice demands certainty; now psychology (and likely many other fields) provides even less of it than once thought. At first the two paradigms seem irreconcilable. Some might go so far as to dismiss science altogether as a “special” category of evidence with explicit admissibility rules. That reasoning overlooks an important fact: the vast majority of established science HAS passed the reproducibility test and has become knowledge grounded in reliable empirical findings (e.g., evolution and climate change). Expressing surprise that new studies cannot be reproduced is, in some sense, like expressing surprise that cases of first impression are sometimes overturned.

Effects on Expert Challenges

Nevertheless, changes can be expected in expert challenges. Whether under Frye or Daubert, peer-reviewed publications have given judges the background needed to assess the state of the science in a particular field. Given these recent developments, psychological studies, at least, will now require a different kind of scrutiny. First, the difference between general acceptance and mere appearance in the peer-reviewed literature will become more apparent: the legal value of a solitary peer-reviewed study will diminish, and general acceptance will become harder to measure as advocates emphasize the uncertainty in publication quality. Second, arguments from publication and methodological bias will gain renewed force in refuting studies that otherwise show robust findings.

Given this most recent revelation about the state of psychology (and perhaps science in general), litigators should be thinking of ways to bolster their expert opinions and formulating new strategies for attacking opposing experts. Reliance on publications and high-quality experts may no longer be enough to get past an expert challenge. Cross-examination, a venerated tool for checking expert opinion, will get further muddled as both sides use it in an increasingly confusing zero-sum game. Judges and juries alike might just throw up their hands in despair and flip a coin. The battles of the experts will become more uncertain.

Maximizing the Expert Challenge

Yet there is light at the end of the tunnel. Litigators on both sides should fully utilize expert challenges to force a close examination of expert methodologies. Daubert and Frye are tools to cripple the opponent's case and gain significant settlement leverage. But the bottom line is that expert challenges now demand a high level of sophistication. Adhere to the following in your expert challenges:

  • Distinctions should be drawn between the subject matter in an expert challenge and the fodder that will be used at trial. They are not the same thing. The expert challenge requires more detailed methodological analysis and overview of the scientific field than will be drawn out during trial.
  • Expert challenges should focus on general methods and principles, not case-specific facts. Before submitting a challenge, make sure you separate these issues. Litigators who pile case-specific complaints into expert challenges do so at their own risk: confusing the two will ultimately make judges nervous about impinging on the jury’s fact-finding role.
  • Don’t make expert challenges a “routine” part of litigation. Expert challenges have been called “low cost, high yield,” but don’t squander the opportunity to end your case favorably. If you plan on filing the same boilerplate motions, it is better to save the money and time and not file at all.
  • Vet and analyze the issues raised by your own experts for the expert challenge. First, not all issues are worthy of inclusion. Second, you’ll need to know the material to explain it to the judge.

All expert challenges are risky, but there are ways to minimize the risk. Ideally, the best expert challenge would come from someone other than you (this is why courts love amici). If the case is sufficiently important, a neutral, blind evaluation of the expert testimony and a corresponding affidavit of findings could be an interesting solution (this is what JuriLytics offers). If not, remember that expert challenges are “low cost, high yield.” Use them frequently, but use them effectively.
