Episode 7: Shaky Science in the Courtroom

Eyewitness testimony and forensic science are forms of evidence frequently relied upon in criminal cases. But over the past few decades DNA analysis—and the exonerations it has prompted—has revealed how flawed these types of evidence can be. According to the Innocence Project, mistaken eyewitness identifications played a role in about 70% of convictions that were ultimately overturned through DNA testing, and misapplied forensic science was found in nearly half of these cases.

In this episode we speak with Jed Rakoff, senior US district judge for the Southern District of New York. Judge Rakoff discusses the weaknesses in eyewitness identification and forensic science and offers thoughts on how judges, policymakers, and others can reform the use of these methods and get stronger science into the courtroom.

Rakoff is the author of the 2021 book Why the Innocent Plead Guilty and the Guilty Go Free: and Other Paradoxes of Our Broken Legal System. He cochaired the National Academies committee that wrote the 2014 report Identifying the Culprit: Assessing Eyewitness Identification, and served on the National Commission on Forensic Science from 2013 to 2017.

Transcript

Frueh: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University. I’m Sara Frueh, consulting editor for the journal. I’m joined today by Judge Jed Rakoff to talk about flaws in forensic science and eyewitness identification, their use in the courtroom, and what can be done to address these problems. Judge Rakoff is a senior US district judge for the Southern District of New York. He co-chaired a National Academies committee that issued a landmark report on eyewitness identification in 2014, and his recent book is called Why the Innocent Plead Guilty and the Guilty Go Free: And Other Paradoxes of Our Broken Legal System.

Frueh: Welcome, Judge Rakoff, it’s great to have you with us.

Rakoff: It’s my pleasure, and thank you so much for inviting me.

Frueh: I’d like to start by talking about eyewitness identifications. People and juries often assume that if a witness in a criminal case says, “I was there, I saw that person commit the crime, I’ll never forget his face,” that it’s reliable. But several years ago, you led a study that found that these identifications often aren’t as reliable as people think. Why is that?

Rakoff: First of all, in the words of the Innocence Project, inaccurate eyewitness identification is so common that it’s the single greatest factor in the wrongful convictions that, through DNA, have led to exonerations. The Innocence Project has exonerated, to date, almost 400 people. These were people who were convicted beyond a reasonable doubt of very serious crimes—murder, rape, things like that—and yet were eventually proven to be absolutely innocent. And in over 70% of those cases, eyewitness identification evidence was introduced. In one of the earliest cases, the case involving Kirk Bloodsworth, a murder-rape case—terrible crime—no fewer than five eyewitnesses said they had seen him commit the crime or had seen him fleeing from the scene of the crime. And they were all wrong. Eventually, the DNA testing of the semen from the victim proved that it was someone else, who later confessed, but not until Mr. Bloodsworth had spent nine years in prison, [two of those years] on death row, I might add.

So it is a real conundrum for the legal system because, as you point out, this is powerful evidence. It’s not only evidence that’s frequent, particularly in state crimes, but it’s quite powerful. The eyewitness typically has no motive to lie. The eyewitness is usually a responsible citizen. The eyewitness is someone who, by the time a case goes to trial, or even when it just goes to a hearing before a judge, or even before that, when he or she is talking to the prosecutor, has become quite sure that he has identified the right guy. Juries naturally believe them. So why are they wrong? Well, sometimes they’re wrong for reasons that are understandable to a jury: bad lighting, the fact that the guy was carrying a weapon so that the eyes of the witness were focused on it, the fact that the eyewitness’s own view is obscured.

Unfortunately, that’s just the tip of the iceberg. There are all sorts of things going on in the human perception equipment and the human memory that jurors are not aware of, and that can make for false identification. Just to give you two examples: one is the racial effect. People of one race are much better at perceiving and remembering the fine facial features of someone of their own race than someone of a different race. There is some controversy over why this is so, but there’s no doubt that it is so. Another more subtle problem, and one I think you see frequently in the exoneration cases, is where two different memories have merged together—unconsciously, but for real. And the way this works, for example, is you see the culprit committing the crime, and maybe you see it for a minute or two. And if one could take a photo of the inside of your brain at that point, one would see that you only had a very fuzzy perception of what the guy looked like.

But three hours later, you’re shown a photo array, seven photos. And if it’s done properly, you’re told, “No one here may be the guy, but if there is someone who is the guy, let us know.” You’re a conscientious citizen. You pore over those photographs very carefully. And finally you say, “Well, I’m not sure, but the person who looks most to me like the guy that I saw commit the crime is number two.” And in studying number two, you notice, among other things, that he has a scar over his right eyebrow. Now you did not, in fact, perceive that at the time. But over the next few weeks and months, those two memories will merge. And by the time you come to testify, you will say, “I am really absolutely sure this is the guy, because I will never forget that I saw this scar over his right eyebrow.” And what you’re really remembering is the photo, not the guy you saw, but the two memories that merged unconsciously and just reinforced your wrongful identification.

Frueh: In your book, you discuss those fundamental problems with our vision and memory, and you suggest that one way to limit the damage they can cause would be to educate prosecutors about those flaws and how they can lead to mistaken identifications. How would that education work, and how would it help solve the problem?

Rakoff: You have to understand that because of laws, many of which I think are unfortunate, that were passed in the 1960s, seventies, eighties, and nineties, 97% of all criminal cases are plea bargained. It used to be no more than 80%, sometimes 75% in some jurisdictions. Now it’s overwhelmingly the case. And so the prosecutor is really the key player. To be frank, he or she wields much more power than the judge in determining who gets charged, who goes to jail, and so forth. And most prosecutors are very well intentioned, but very young. I was a prosecutor for seven years. I was three years out of law school, two years out of a clerkship. I knew nothing about most of this stuff. I learned it on the job, so to speak. And what I regret is that I didn’t have what I had when I became a judge.

When I became a judge, like all federal judges, I was sent to something called baby judge school. It’s a week-long program in Richmond, Virginia, where you’re taught things that you probably don’t know about being a judge. Well, I would like to see the same kind of thing for prosecutors. It doesn’t necessarily have to be a whole week, but several days. It could be done, ideally, in person, but it could be done by video and pre-packaged programs and so forth in which, among other things, prosecutors would be made aware of how often eyewitness identifications are wrong and why this is so. And you see the importance of this in the cases where there were exonerations, because almost always the guy who they then discover really was guilty was one of the suspects originally. But then the eyewitness came along and said, “Oh no, it’s Jones.” And so the police stopped looking at Smith and they started focusing exclusively on Jones.

But the prosecutor, if he knows of that danger, could say to the police at that point, “Before we charge this guy, have you looked into Smith? Let’s follow up a little bit more with Smith.” Because they would know that this testimony is not as reliable, in many cases, as it appears. So that’s the thing I have in mind. Now, I have another suggestion, which I feel will probably not be adopted in the United States, but it’s based on the British system. And my idea would be that for every three years that a prosecutor is a prosecutor, he has to spend six months of those three years as a criminal defense lawyer, defending indigent people. This would have to be in another district so there wouldn’t be conflicts of interest. And that would give him a much greater feel for where weaknesses in his approach may lie. Not just in eyewitness identification, although that would be certainly a key one, but also in areas of doubtful forensic science, for example.

Frueh: Speaking of that, I’d like to switch gears and talk a little about forensic science, which over the last decade or more has faced similar revelations that it is not as reliable as many people think. In your book, you talk about how the emergence of DNA analysis in the 1990s revealed the weaknesses in some other forensic science methods, like hair analysis and bite mark analysis and other techniques. In other words, that they’re not really grounded in science. What does it mean to say that a forensic science method is truly scientific?

Rakoff: So the great report—done by the National Academy of Sciences, let’s hear it for the National Academy of Sciences, and published in 2009, and the co-chair of that was the very great Judge Harry Edwards from the DC circuit—that report went through virtually all the major forms of forensic science and concluded, as you say, that most, if not all, were not really grounded in science. And what do we mean by grounded in science? Well, the Supreme Court has spoken to this in the Daubert case. We mean first, that the theory has been tested, tested in scientifically sound studies, blind studies, studies that meet all the requirements of a good scientific study, that [it] has then been peer-reviewed in publications that are well-known as being the publications that monitor developments in the relevant areas, that it has then been used sufficiently so that we can calculate an error rate, and if the error rate is too large, say more than 10%, that would cast great doubt on the reliability of the method—and finally, that it’s generally accepted, not just in the narrow community of people who administer these tests, but in the broader scientific community.

The National Academy report found that DNA was really the only one that met all four of those requirements, probably because it was developed by scientists for other reasons. Most of what we think of as forensic science was originally developed by police labs as a way of developing leads. And that’s perfectly legitimate. You don’t have to have rigorous science in order to have something that may help you identify another lead to follow. But then, beginning in the early 1900s, it began to be introduced in the federal and then the state courts. And a lot of it proved to be very unreliable, although we didn’t know that until things like the Innocence Project came along. There’s something called The National Registry of Exonerations, which is put together by several law schools, which records all court-ordered exonerations since 1989. And there are now about 2,500 of those. They found that in 40% of their cases, there was inaccurate forensic science testimony introduced.

I’ll give you an even more extreme example. It used to be that the FBI and various local police people would do what’s called a microscopic hair analysis, where the theory was that everyone’s hair is unique, just like everyone’s fingerprint is unique. I think many barbers would disagree. In any event, just taking that—without ever having tested it to see if that was true or not—the theory that developed [was] that if you look carefully enough at a blown-up slide of the hair found at the scene of the crime and compare it with the hair of the suspect, you can make a match. And in the lingo of forensic science, this got introduced as, “I am sure to a reasonable degree of scientific certainty, that the hair found at the scene of the crime matches the hair of the defendant and the likelihood that it could be anyone else is extremely remote.”

Well, that turned out to be wrong. It was bad science and it wasn’t true. And finally, the FBI, to its credit, did its own study of the 3,000 cases in which its own experts had testified to the effect that I just indicated. And they concluded that in 95% of those, what the experts said was either flatly inaccurate or, more often, considerably overstated. Many courts now preclude the admission of this bogus forensic science. Now, the bad news is there are still states that allow it.

There are a lot of problems with forensic science. Not all DNA is good. Fingerprint analysis is better than things like bite marks and microscopic hair analysis, but it still has a degree of subjectivity, and that’s the last point I’ll mention.

In a really scientific test, subjectivity plays no role, but as the National Academy found in their report, even in things like fingerprints, subjectivity plays a big role. So you have two fingerprints that you’re comparing, and it’s a subjective decision based on “experience,” which things to look at for comparison and which not. And so you may say, “You see that little swirl over there and the one that’s in the other slide, and that only occurs when it’s the same person.” Another examiner will say, “No, the swirls are really not central, and you have to look at something else.” There have been cases in which fingerprint experts have gotten it flat wrong, but it is better than some of the others.

Frueh: Is there a way to make some of the subjective forensic approaches more objective, or are they just intrinsically subjective?

Rakoff: Well, I think the biggest suggestion from the National Academy of Sciences’ 2009 report—one that I really do not know why it hasn’t been followed up, but it hasn’t—was to create a National Institute of Forensic Science staffed by high-level scientists. And they would look at each form of forensic science and they would say, “This one’s good, but it could be made even better in the following ways. This one’s so bad, we don’t think it’s salvageable. This one’s bad, but it could be made salvageable if the following steps were taken.” And they would be in a position, as scientists, to bring to bear a kind of rigor that no judge is capable of doing and no party is capable of doing. So that would be, I think, the ideal solution—it was the proposal from the National Academy. I’m not sure why it was never followed up. There is some opposition to all of this from local police labs and the like. They have understandable biases in favor of what they’re doing, which I can fully appreciate. But I’m not sure why it didn’t attract greater national attention.

Frueh: Given that that hasn’t happened so far, have you seen any areas of forensic science where there has been progress made, either to put disciplines on more solid scientific footing or to make courtroom testimony more accurate and to reflect more fully the limitations of these methods? Have we made any progress at all?

Rakoff: Yes. It varies a lot from state to state. And please remember that the criminal justice system is mostly a matter of state prosecutions. Something like 90% of all criminal prosecutions are brought by the state, not by the federal government, so that’s where the action is. I think 37 of the states have now adopted Daubert and therefore, at least in theory, could subject forensic science, so-called forensic science, to the four-part analysis that I mentioned previously. And some cases have done that. There was an early case in Oklahoma, for example, where a federal judge threw out a microscopic hair analysis because after a Daubert hearing, he realized that this was just not good science at all.

A number of judges, and I would have to admit including myself, have also experimented with putting limits on what the expert can say. So I had a case a few years ago involving ballistic testimony. The theory of ballistics is that when you fire a bullet, it makes certain markings on the cartridge, and more importantly, on the barrel of the gun that are unique because the physical situation is never totally identical—it could be in a room with more pressure or less pressure, whatever. There has been doubt cast on that theory. But in the particular case that I had, and some other judges have done similar things, I said, OK, I will allow the expert to show two blowups, one of the bullet and one of the barrel of the gun, and to point out what he thinks are the commonalities of the two. But he can’t say there’s a match. And the most he can say is that it’s more likely than not, a much lower standard than proof beyond a reasonable doubt, more likely than not that this bullet came from this gun.

A number of judges have taken that approach, and I did. I’m not so sure in the end that’s the approach I would take today. I think I might throw it out altogether. But that is the approach that’s been taken. One of the reasons is judges are, I think, reluctant in a criminal case to exclude evidence that they think might make the difference between a determination of guilt or innocence. And they sort of feel, “Gee, the jury should see it all and then make their decision.” But seeing it all is not really meaningful when the jury has not the slightest scientific clue as to whether this evidence is good science or bad science. But nevertheless, that is, I think, deeply ingrained in many judges: when in doubt, let it in. And I think that happens too often in these cases.

Frueh: What about the role of the federal government in all of this? It sounds like a lot of these decisions are made on a court-by-court basis, on a state-by-state basis. Is there a role where the administration or Congress could take a more active part in driving change beyond the founding of the National Institute of Forensic Science, which I know you said was your first choice? Short of that, are there other steps that they could take to improve forensic science and its use?

Rakoff: The government, to its credit, in the last term of President Obama, created a National Council of Forensic Science [National Commission on Forensic Science] and they brought together all the players. So there were defense lawyers, there were government prosecutors, there were hard scientists, so to speak, and scientists who had devoted themselves to particular forensic disciplines. There were lab technicians and there were some judges. There was even one federal judge—they were desperate. Anyway, over the course of the four years that that existed, the group made no fewer than 59 recommendations. Now, these were technically recommendations being made to the Department of Justice, but they were written in such a way that we hoped they would also impact local police and prosecutors as well.

To give you one example, we came down very strongly—this was almost unanimous—against the formulation [of] reasonable degree of scientific certainty, saying A) that’s not good science—science deals in probabilities, not certainties—and B) it conveys completely the wrong view. It basically says to the jury, “Don’t be skeptical of this, it’s certain.” So we recommended against it, and the Department of Justice adopted that recommendation. I think some states have followed suit as well. It would be not as good as the so-called National Institute, but I think a useful step would be to revive the National [Commission on] Forensic Science. It had a four-year term, and in the early days of the Trump administration it was allowed to lapse and was not renewed. But a majority of the members felt there was still work to be done and asked that it be renewed. I think that would still be a helpful step.

Frueh: OK. As a final question—we’ve talked about some areas where progress has been made, sometimes piecemeal, sometimes slow, and where progress hasn’t been made—do you have hope for the future of forensic science and for fundamental change in it? And if so, where does it lie, and what part of the system do you think is most fertile ground for change?

Rakoff: Well, being American, of course I’m optimistic. I’m always optimistic. I do think—and this goes not just to forensic science, this goes more across the board—movements for reform in the criminal justice area will go up and down depending on crime rates. So crime rates, until the last year or so, were tending down quite dramatically, and that made people more willing to consider reforms. But with the increase in violent crimes in the last year or so, I think that consensus may be broken up. The other thing though that leads to reform is something that is continuing and has been quite well publicized, which is the exoneration of innocent people. You can pick up a newspaper every week, practically, and find another case where someone who’s been in prison for a substantial period of time is found to be totally innocent and the court says, “Oops, sorry about that, you can go free.”

And the American people, I do think, are fundamentally a very fair people, and they are affected, as they should be, by that knowledge. So against that big background, what needs to be done is educating the public that it is things like eyewitness testimony and bad forensic science that are such big factors in these wrongful convictions, so that they can see the connection. They’re more used to seeing the CSI type of approach on TV—“Oh, it’s brilliant”—where they solve the case that could not otherwise be solved, and so forth. But when you read these newspaper accounts of exonerations, rarely do they get into the weeds of why the person was wrongfully convicted. Usually it will be either wrongful eyewitness identification or bad forensic science. There are other factors as well, but those are the two big ones.

So I think the focus needs to be there. Where reform will ultimately come from, I think the courts can’t escape responsibility. These wrongful convictions occur on our turf and sure, Congress sets a lot of the rules, the president exercises a lot of discretion, but, when all is said and done, it’s in our courtrooms that the wrong occurs. And so if we don’t start doing something about it, we will have evaded our responsibility.

Frueh: Thank you, Judge Rakoff. I want to give you a hearty thanks for being with us today and just for all of the efforts that you’ve made to educate people about the importance of science in the courtroom.

Rakoff: My pleasure and thank you so much for inviting me.

Frueh: Thank you for joining us for this episode of The Ongoing Transformation. To learn more about forensic science and eyewitness identification, check out Judge Jed Rakoff’s book, Why the Innocent Plead Guilty and the Guilty Go Free, and also the National Academies report, Identifying the Culprit: Assessing Eyewitness Identification. Check our show notes to find links to these publications and more. Also, please subscribe to The Ongoing Transformation wherever you get your podcasts. Email us at [email protected] with any comments or suggestions. And if you enjoy conversations like this one, visit us at issues.org and subscribe to our print magazine. I’m Sara Frueh, consulting editor for Issues in Science and Technology. See you next time.