Fingerprints: Not a Gold Standard

A few judges are showing signs of skepticism, and it’s about time.

In January 2002, Judge Louis Pollak made headlines with a surprising ruling on the admissibility of fingerprints. In United States v. Llera Plaza, the distinguished judge and former academic issued a lengthy opinion that concluded, essentially, that fingerprint identification was not a legitimate form of scientific evidence. Fingerprints not scientific? The conclusions of fingerprint examiners not admissible in court? It was a shocking thought. After all, fingerprints have been used as evidence in the U.S. courtroom for nearly 100 years. They have long been considered the gold standard of forensic science and are widely thought to be an especially powerful and indisputable form of evidence. What could Judge Pollak have been thinking?

About six weeks later, Judge Pollak changed his mind. In an even longer opinion, he bluntly wrote, “I disagree with myself.” After a second evidentiary hearing, he had decided that despite fingerprinting’s latent defects, the opinions of fingerprint identification experts should nonetheless be admissible evidence. With this second opinion, Pollak became yet another in a long line of judges to preserve the status quo by rejecting challenges to fingerprinting’s admissibility. Since 1999, nearly 40 judges have considered whether fingerprint evidence meets the Daubert test, the Supreme Court’s standard for the admissibility of expert evidence in federal court, or the equivalent state standard. With Pollak’s about-face, every single judge who has considered the issue has determined that fingerprinting passes the test.

And yet, Judge Pollak’s first opinion was the better one. In that opinion, after surveying the evidence, he concluded that “fingerprint identification techniques have not been tested in a manner that could be properly characterized as scientific.” All in all, he found fingerprint identification techniques “hard to square” with Daubert, which asks judges to serve as gatekeepers to ensure that the expert evidence used in court is sufficiently valid and reliable. Daubert invites judges to examine whether the proffered expert evidence has been adequately tested, whether it has a known error rate, whether it has standards and techniques that control its operation, whether it has been subject to meaningful peer review, and whether it is generally accepted by the relevant community of experts. Pollak found that fingerprinting flunked the Daubert test, meeting only one of the criteria, that of general acceptance. Surprising though it may sound, Pollak’s judgment was correct. Although fingerprinting retains considerable cultural authority, there has been woefully little careful empirical examination of the key claims made by fingerprint examiners. Despite nearly 100 years of routine use by police and prosecutors, the central assertions of fingerprint examiners simply have not yet been verified or tested in a number of important ways.

Consider the following:

Fingerprint examiners lack objective standards for evaluating whether two prints “match.” There is simply no uniform approach to deciding what counts as a sufficient basis for making an identification. Some fingerprint examiners use a “point-counting” method that entails counting the number of similar ridge characteristics on the prints, but there is no fixed requirement about how many points of similarity are needed. Six points, nine, twelve? Local practices vary, and no established minimum or norm exists. Others reject point-counting for a more holistic approach. Either way, there is no generally agreed-on standard for determining precisely when to declare a match. Although fingerprint experts insist that a qualified expert can infallibly know when two fingerprints match, there is, in fact, no carefully articulated protocol for ensuring that different experts reach the same conclusion.
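
The practical consequence of this missing standard is easy to sketch in code. The following few lines of Python are purely illustrative: the point thresholds and the number of corresponding ridge characteristics are hypothetical, not the rules of any actual agency, but they show how the very same comparison can yield opposite conclusions depending on which local convention an examiner happens to follow.

```python
# Illustrative sketch only: hypothetical thresholds and counts, not the
# standards of any real laboratory or agency.

def declare_match(points_in_agreement: int, local_threshold: int) -> bool:
    """Return True if the corresponding ridge characteristics meet or
    exceed the examiner's local point threshold."""
    return points_in_agreement >= local_threshold

observed_points = 10  # hypothetical: ten corresponding characteristics found

for threshold in (8, 12, 16):  # hypothetical local standards
    verdict = "match" if declare_match(observed_points, threshold) else "no match"
    print(f"Examiner using a {threshold}-point standard would declare: {verdict}")
```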

Although it is known that different individuals can share certain ridge characteristics, the chance of two individuals sharing any given number of identifying characteristics is not known. How likely is it that two people could have four points of resemblance, or five, or eight? Are the odds of two partial prints from different people matching one in a thousand, one in a hundred thousand, or one in a billion? No fingerprint examiner can honestly answer such questions, even though the answers are critical to evaluating the probative value of the evidence of a match. Moreover, with the partial, potentially smudged fingerprints typical of forensic identification, the chance that two prints will appear to share similar characteristics remains equally uncertain.
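
To see what is missing, consider a hedged back-of-the-envelope sketch. It assumes, purely for illustration, that the population frequency of each shared characteristic were known and that the characteristics occurred independently; neither assumption has been validated, and the frequencies below are invented. The point is that this is the calculation an examiner would need to perform to state the odds of a coincidental correspondence, and the data needed to perform it honestly do not exist.

```python
# Hypothetical illustration only: invented frequencies and an unvalidated
# independence assumption. No validated frequency data of this kind exist
# for latent fingerprints.

hypothetical_frequencies = [0.1, 0.2, 0.05, 0.15]  # invented rates for four shared features

random_match_probability = 1.0
for freq in hypothetical_frequencies:
    random_match_probability *= freq  # independence assumed, purely for illustration

print("Chance an unrelated person would share all four features: "
      f"{random_match_probability:.6f}")
# With these invented numbers, about 1.5 in 10,000 -- but the real
# frequencies, and hence the real answer, are unknown.
```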

The potential error rate for fingerprint identification in actual practice has received virtually no systematic study. How often do real-life fingerprint examiners find a match when none exists? How often do experts erroneously declare two prints to come from a common source? We lack credible answers to these questions. Although some FBI proficiency tests show examiners making few or no errors, these tests have been criticized, even by other fingerprint examiners, as unrealistically easy. Other proficiency tests show more disturbing results: In one 1995 test, 34 percent of test-takers made an erroneous identification. Especially when an examiner evaluates a partial latent print—a print that may be smudged, distorted, and incomplete—it is impossible on the basis of our current knowledge to have any real idea of how likely she is to make an honest mistake. The real-world error rate might be low or might be high; we just don’t know.
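
A small statistical sketch suggests why easy proficiency tests prove so little. If an examiner makes no errors on n test comparisons, the standard “rule of three” puts an approximate 95 percent upper confidence bound of 3/n on the true error rate; the test sizes below are hypothetical, chosen only to show how wide the remaining uncertainty is.

```python
# Hedged illustration using the "rule of three": when zero errors are
# observed in n independent trials, the true error rate could still be as
# high as roughly 3/n with 95% confidence. Test sizes are hypothetical.

def rule_of_three_upper_bound(n_trials: int) -> float:
    """Approximate 95% upper confidence bound on the error rate when no
    errors are observed in n_trials comparisons."""
    return 3.0 / n_trials

for n in (20, 50, 200):
    print(f"{n} error-free comparisons: true error rate could still be "
          f"as high as about {rule_of_three_upper_bound(n):.1%}")
```

An error-free score on a small or unrealistically easy test, in other words, is entirely consistent with an error rate far higher than jurors would likely imagine.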

Fingerprint examiners routinely testify in court that they have “absolute certainty” about a match. Indeed, it is a violation of their professional norms to testify about a match in probabilistic terms. This is truly strange, for fingerprint identification must inherently be probabilistic. The right question for fingerprint examiners to answer is: How likely is it that any two people might share a given number of fingerprint characteristics? However, a valid statistical model of fingerprint variation does not exist. Without either a plausible statistical model of fingerprinting or careful empirical testing of the frequency of different ridge characteristics, a satisfying answer to this question is simply not possible. Thus, when fingerprint experts claim certainty, they are clearly overreaching, making a claim that is not scientifically grounded. Even if we assume that all people have unique fingerprints (an inductive claim, impossible itself to prove), this does not mean that the partial fragments on which identifications are based cannot sometimes be, or appear to be, identical.
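
One conventional way to express what such a statistical model would have to supply, drawn from the general statistics of identification evidence rather than from any validated model of fingerprints, is a likelihood ratio:

\[
\mathrm{LR} \;=\; \frac{P(\text{observed agreement} \mid \text{prints come from the same finger})}{P(\text{observed agreement} \mid \text{prints come from different fingers})}
\]

The denominator is precisely the unmeasured quantity discussed above: the chance that prints from different people would display the observed degree of correspondence. Because no one has measured it, the ratio cannot honestly be computed, and a claim of “absolute certainty” amounts to asserting, without evidence, that the denominator is zero.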

Defenders of fingerprint identification emphasize that the technique has been used, to all appearances successfully, for nearly 100 years by police and prosecutors alike. If it did not work, how could it have done so well in court? Even if certain kinds of scientific testing have never been done, the technique has been subject to a full century of adversarial testing in the courtroom. Doesn’t this continuous, seemingly effective use provide persuasive evidence about the technique’s validity? This argument has a certain degree of merit; obviously, fingerprinting often does “work.” For example, when prints found at a crime scene lead the police to a suspect, and other independent evidence confirms the suspect’s presence at the scene, this corroboration indicates that the fingerprint expert has made a correct identification.

However, although the routine and successful police use of fingerprints certainly does suggest that they can offer a powerful form of identification, there are two problems with the argument that fingerprint identification’s courtroom success proves its merit. First, until very recently fingerprinting was challenged in court very infrequently. Though adversarial testing was available in theory, in practice, defense experts in fingerprint identification were almost never used. Most of the time, experts did not even receive vigorous cross-examination; instead, the accuracy of the identification was typically taken for granted by prosecutor and defendant alike. So although adversarial testing might prove something if it had truly existed, the century of courtroom use should not be seen as a century’s worth of testing. Second, as Judge Pollak recognizes in his first opinion in Llera Plaza, adversarial testing through cross-examination is not the right criterion for judges to use in deciding whether a technique has been tested under Daubert. As Pollak writes, “If ‘adversarial’ testing were the benchmark—that is if the validity of a technique were submitted to the jury in each instance—then the preliminary role of the judge in determining the scientific validity of a technique would never come into play.”

So what’s the bottom line: Is fingerprinting reliable or isn’t it? The point is that we cannot answer that question on the basis of what is presently known, except to say that its reliability is surprisingly untested. It is possible, perhaps even probable, that the pursuit of meaningful proficiency tests that actually challenge examiners with difficult identifications, more sophisticated efforts to develop a sound statistical basis for fingerprinting, and additional empirical study will combine to reveal that latent fingerprinting is indeed a reliable identification method. But until this careful study is done, we ought, at a minimum, to treat fingerprint identification with greater skepticism, for the gold standard could turn out to be tarnished brass.

Recognizing how much we simply do not know about the reliability of fingerprint identification raises a number of additional questions. First, given the lack of information about the validity of fingerprint identification, why and how did it come to be accepted as a form of legal evidence? Second, why is it being challenged now? And finally, why aren’t the courts (with the exception of Judge Pollak the first time around) taking these challenges seriously?

A long history

Fingerprint evidence was accepted as a legitimate form of legal evidence very rapidly, and with strikingly little careful scrutiny. Consider, for example, the first case in the United States in which fingerprints were introduced in evidence: the 1910 trial of Thomas Jennings for the murder of Clarence Hiller. The defendant was linked to the crime by some suspicious circumstantial evidence, but there was nothing definitive against him. However, the Hiller family had just finished painting their house, and on the railing of their back porch, four fingers of a left hand had been imprinted in the still-wet paint. The prosecution wanted to introduce expert testimony concluding that these fingerprints belonged to none other than Thomas Jennings.

Four witnesses from various bureaus of identification testified for the prosecution, and all concluded that the fingerprints on the rail were made by the defendant’s hand. The judge allowed their testimony, and Jennings was convicted. The defendant argued unsuccessfully on appeal that the prints were improperly admitted. Citing authorities such as the Encyclopedia Britannica and a treatise on handwriting identification, the court emphasized that “standard authorities on scientific subjects discuss the use of fingerprints as a system of identification, concluding that experience has shown it to be reliable.” On the basis of these sources and the witnesses’ testimony, the court concluded that fingerprinting had a scientific basis and admitted it into evidence.

What is striking in Jennings, as well as in the cases that followed it, is that courts largely failed to ask any difficult questions about the new identification technique. Just how confident could fingerprint identification experts be that no two fingerprints were really alike? How often might examiners make mistakes? How reliable was their technique for determining whether two prints actually matched? How was forensic use of fingerprints different from police use? The judge did not analyze in detail either the technique or the experts’ claims to knowledge; instead, he believed that the new technique worked flawlessly based only on interested participants’ say-so. The Jennings decision proved quite influential. In the years following, courts in other states admitted fingerprints without any substantial analysis at all, relying instead on Jennings and other cases as precedent.

From the beginning, fingerprinting greatly impressed judges and jurors alike. Experts showed juries blown-up visual representations of the fingerprints themselves, carefully marked to emphasize the points of similarity, inviting jurors to look down at the ridges of their own fingers with new-found respect. The jurors saw, or at least seemed to see, nature speaking directly. Moreover, even in the very first cases, fingerprint experts attempted to distinguish their knowledge from other forms of expert testimony by declaring that they offered not opinion but fact, claiming that their knowledge was special, more certain than other claims of knowledge. But they never established conclusively that all fingerprints are unique or that their technique was infallible even with less-than-perfect fingerprints found at crime scenes.

In any event, just a few years after Jennings was decided, the evidential legitimacy of fingerprints was deeply entrenched, taken for granted as accepted doctrine. Judges were as confident about fingerprinting as was Pudd’nhead Wilson, a character in an 1894 Mark Twain novella, who believed that “‘God’s finger print language,’ that voiceless speech and the indelible writing,” could provide “unquestionable evidence of identity in all cases.” Occasionally, Pudd’nhead Wilson itself was cited as an authority by judges.

Why was fingerprinting accepted so rapidly and with so little skepticism? In part, early 20th-century courts simply weren’t in the habit of rigorously scrutinizing scientific evidence. Moreover, the judicial habit of relying on precedent created a snowballing effect: Once a number of courts accepted fingerprinting as evidence, later courts simply followed their lead rather than investigating the merits of the technique for themselves. But there are additional explanations for the new technique’s easy acceptance. First, fingerprinting and its claims that individual distinctiveness was marked on the tips of the fingers had inherent cultural plausibility. The notion that identity and even character could be read from the physical body was widely shared, both in popular culture and in certain more professional and scientific arenas as well. Bertillonage, for example, the measurement system widely used by police departments across the globe, was based on the notion that if people’s bodies were measured carefully, they inevitably differed one from the other. Similarly, Lombrosian criminology and criminal anthropology, influential around the turn of the century, had as a basic tenet that born criminals differed from normal law-abiding citizens in physically identifiable ways. The widespread belief in nature’s infinite variety meant that just as every person was different, just as every snowflake was unique, every fingerprint must be distinctive too, if only it was examined in sufficient detail. The idea that upon the tips of fingers were minute patterns, fixed from birth and unique to the carrier, made cultural sense; it fit with the order of things.

One could argue, from the vantage point of 100 years of experience, that the reason fingerprinting seemed so plausible at the time was because its claims were true, rather than because it fit within a particular cultural paradigm or ideology. But this would be the worst form of Whig history. Many of the other circulating beliefs of the period, such as criminal anthropology, are now quite discredited. The reason fingerprinting was not subject to scrutiny by judges was not because it obviously worked; in fact, it may have become obvious that it worked in part precisely because it was not subject to careful scrutiny.

Moreover, fingerprint examiners’ strong claim of certain, incontestable knowledge made fingerprinting appealing not only to prosecutors but to judges as well. In fact, there was an especially powerful fit between fingerprinting and what the legal system hoped science could provide. In the late 19th century, legal commentators and judges saw in expert testimony the potential for a particularly authoritative mode of evidence, a kind of knowledge that could be, and should be, far superior to that of mere eyewitnesses, whose weaknesses and limitations were beginning to be better understood.

Expert evidence held out the promise of offering a superior method of proof—rigorous, disinterested, and objective. But in practice, scientific evidence almost never lived up to these hopes. Instead, at the turn of the century, as one lawyer griped, the testimony of experts had become “the subject of everybody’s sneer and the object of everybody’s derision. It has become a newspaper jest. The public has no confidence in expert testimony.” Experts perpetually disagreed. Too often, experts were quacks or partisans, and even when they were respected members of their profession, their evidence was usually inconsistent and conflicting. Judges and commentators were angry and disillusioned by the actual use of expert evidence in court, and often said so in their opinions. (In this respect, there are noticeable similarities between the 19th-century reaction to expert testimony and present-day responses.)

Even if experts did not become zealous partisans, the very fact of disagreement was a problem. It forced juries to choose between competing experts, even though the whole reason for the expert in the first place was that the jury lacked the expertise to make a determination for itself. Given this context, fingerprinting seemed to offer something astonishing. Fingerprinting—unlike the evidence of physicians, chemists, handwriting experts, surveyors, or engineers—seemed to offer precisely the kind of scientific certainty that judges and commentators, weary of the perpetual battles of the experts, yearned for. Writers on fingerprinting routinely emphasized that fingerprint identification could not be erroneous. Unlike so much other expert evidence, which could be and generally was disputed by other qualified experts, fingerprint examiners seemed always to agree. Generally, the defendants in fingerprinting cases did not offer fingerprint experts of their own. Because no one challenged fingerprinting in court, either its theoretical foundations or, for the most part, the operation of the technique in the particular instance, it seemed especially powerful.

The idea that fingerprints could provide definite matches was not contested in court. In the early trials in which fingerprints were introduced, some defendants argued that fingerprinting was not a legitimate form of evidence, but typically defendants did not introduce fingerprint experts of their own. Fingerprinting thus avoided the spectacle of clashing experts on both sides of a case, whose contradictory testimony befuddled jurors and frustrated judges. The evidence that a defendant’s fingerprints matched those found at the crime scene was very rarely challenged. Fingerprinting grew to have cultural authority that far surpassed that of any other forensic science, and the experts’ claims of infallibility came to be believed.

Although some present-day defendants do retain a fingerprint expert of their own, what is striking, even astonishing, is that no serious effort to challenge either the weight or the admissibility of fingerprint evidence emerged until just a few years ago. One of the many consequences of DNA profiling and its admissibility into court is that it has opened the door to challenges to fingerprinting. Ironically, DNA profiling—initially called “DNA fingerprinting” by its supporters to enhance its appeal—could turn out to have planted the seeds for fingerprinting’s downfall as legal evidence.

In the earliest cases, DNA profiling was accepted almost as breathlessly and enthusiastically as fingerprinting had been 75 years earlier. But after a few courts had admitted the new DNA identification technique, defendants began to mount significant challenges to its admissibility. They began to point out numerous weaknesses and uncertainties. First, how exactly was a DNA match defined? What were the objective criteria for declaring that two similar-looking samples matched? Second, how accurate were the assumptions about population genetics that underlay the numbers used to define the probability of a mistaken DNA match? Third, however marvelous DNA typing appeared in theory, how often did actual laboratories making actual identifications make mistakes? These are the very questions that fingerprinting has ducked for a century. In the case of DNA, defense experts succeeded in persuading judges that there were serious concerns within each of these areas, and a number of courts even excluded DNA evidence in particular cases as a result. Eventually, the proponents of DNA were able to satisfy the courts’ concerns, but there is no doubt that the so-called “DNA wars” forced the new technique’s proponents to pay greater attention both to laboratory procedures and to the scientific basis for their statistical claims than they had done at first.

Current challenges

These challenges to DNA profiling, along with the increasing focus on judicial gatekeeping and reliability that grew out of the Supreme Court’s Daubert opinion, opened the door to contemporary challenges to fingerprinting. Together, they created a climate in which fingerprinting’s limitations became more visible, an environment in which legal challenge to a well-established, long-accepted form of scientific proof was doctrinally imaginable.

First, the move toward focusing on the reliability and validity of expert evidence made fingerprinting a more plausible target. Before Daubert, the dominant standard for assessing expert evidence was the Frye test, which focused on whether a novel technique was generally accepted by the relevant scientific community. Under Frye’s approach, it would have been extremely difficult to question the long-standing technique. Of course, fingerprinting was accepted by the relevant scientific community, especially if that community was defined as fingerprint examiners. Even if the community were defined more broadly (perhaps as forensic scientists in general), it would have been nearly impossible to argue that fingerprinting was not generally accepted. After all, fingerprinting was not just generally accepted; it was universally accepted as forensic science’s gold standard. Unlike Frye, Daubert made clear that judges were supposed to make a genuine assessment of whether the substance of the expert evidence was adequately reliable.

The Daubert approach offers two significant doctrinal advantages for anyone attempting to launch a challenge to fingerprint evidence. First, the views of the relevant community are no longer dispositive but are just one factor among many. We would hardly expect polygraph examiners to be the most objective or critical observers of the polygraph, or those who practice hair identification to argue that their science is insufficiently reliable. When there is a challenge to the fundamental reliability of a technique through which the practitioners make their living, there is good reason to be especially dubious about general acceptance as a proxy for reliability. For a debate about which of two methods within a field is superior, the views of the practitioners might well be a useful proxy for reliability, but when the field’s very adequacy is under attack, the participants’ perspective should be no more than a starting point.

The second advantage of the Daubert approach is that it offers no safe harbor for techniques with a long history. Frye itself referenced novel scientific techniques, and many jurisdictions found that it indeed applied only to new forms of expert knowledge, not to those with a long history of use. Under Frye, this limitation made sense: If a form of evidence had been in use as legal evidence for a long while, that provided at least prima facie evidence of general acceptance. Although judges need not reexamine a form of expertise under Daubert each time it is used, if there are new arguments that a well-established form of evidence is unreliable, judges should not dismiss these arguments with a nod to history.

Daubert, then, made it imaginable that courts would revisit a long-accepted technique that was clearly generally accepted by the community of practitioners. But it was the controversies over DNA profiling that made the weaknesses in fingerprinting significantly more visible to critics, legal commentators, and defense lawyers alike. The debates over DNA raised issues that had never been resolved with fingerprinting; indeed, they practically provided a blueprint to show what a challenge to fingerprinting would look like. And the metaphoric link between the two identification techniques made the parallels only more obvious. Together, these debates and parallels helped defense attorneys to recognize that fingerprinting might not fare so well if subjected to a particular kind of scientific scrutiny.

Of course, so far, fingerprinting has fared all right. Those several dozen judges who have considered the issue continue to allow fingerprint evidence even in cases involving smudged and distorted prints. What is most striking about the judicial response to date is that with the exception of Judge Pollak, trial judges faced with challenges to the admissibility of fingerprinting have not confronted the issue in any serious way. Appellate courts have also avoided the issue—with the notable exception of the 4th Circuit’s Judge Michael.

The cases reveal a striking reluctance even to admit that assessing fingerprinting under Daubert raises tricky issues. One judge, for example, wrote that “latent print identification is the very archetype of reliable expert testimony.” Although it may be arguable that fingerprinting should be admissible under the legal standard, to argue that it is the “archetype of reliable expert testimony” is to misunderstand either the defense’s critique of fingerprinting, Daubert, or both.

I suggest that what is driving these opinions is the concern that if fingerprinting does not survive Daubert scrutiny, neither will a great deal of other evidence that we currently allow. Rejecting fingerprinting would, judges fear, tear down the citadel. It would simply place too many forms of expert evidence in jeopardy. Even though the validity of difficult fingerprint identifications may be woefully untested, fingerprint identification is almost certainly more probative than many other sorts of nonexpert evidence, including, perhaps, eyewitness testimony. But it may also be more probative than other forms of expert evidence that continue to be routinely permitted, such as physicians’ diagnostic testimony, psychological evidence, and other forms of forensic science evidence. As one judge wrote in his opinion permitting fingerprints, the error rate “is certainly far lower than the error rate for other types of opinions that courts routinely allow, such as opinions about the diagnosis of a disease, the cause of an accident or disease, whether a fire was accidental or deliberate in origin, or whether a particular industrial facility was the likely source of a contaminant in groundwater.” (Of course, this is just a hunch, for we lack empirical data about the error rates for many of these enterprises, including fingerprint identification.) A similar notion seems to have influenced Judge Pollak in his reconsideration in Llera Plaza. He emphasizes in his second opinion permitting the testimony that fingerprint evidence, although “subjective,” is no more and perhaps less subjective than many other permitted opinions by experts in court.

In addition, the judges who are assessing fingerprinting most likely believe deeply in fingerprinting. Rightly or wrongly, the technique continues to have enormous cultural authority. Dislodging such a strong prior belief will require, at a minimum, a great deal of evidence, more than the quantity needed to generate doubt about a technique in which people have less faith. One could certainly criticize these judges for burying their heads in the sand instead of executing their duties under Daubert in a responsible way. However, their reluctance to strictly apply Daubert to fingerprinting reflects a deeper and quite problematic issue that pervades assessments of expert evidence more generally. Daubert provides one vision of how to assess scientific expert evidence: with the standards of the scientific method. But surely this idealized version of the scientific method cannot be the only way to generate legitimate expert knowledge in court. If fingerprinting fails Daubert, does this suggest the limits of fingerprinting or the limits of Daubert? When judges refuse to rule on fingerprinting in careful Daubert terms, perhaps they are, knowingly or not, enacting a rebellion against the notion that a certain vision of science provides the only legitimate way to provide reliable knowledge.

Whether such a rebellion is to be admired or criticized is beyond the scope of this article. But it does suggest that the legal rule we ask judges to apply to expert evidence will not, in and of itself, control outcomes. Determinations of admissibility, no matter what the formal legal rule, will end up incorporating broader beliefs about the reliability of the particular form of evidence and about the legitimacy of various ways of knowing. Scrutiny of expert evidence does not take place in a cultural vacuum. What seems obvious, what needs to be proven, what can be taken for granted, and what is viewed as problematic all depend on cultural assumptions and shared beliefs, and these can change over time in noticeable and dramatic ways. Whatever the ostensible legal standard used, it is filtered through these shared beliefs and common practices. When forms of evidence comport with broader understanding of what is plausible, they may be especially likely to escape careful analysis as legal evidence, no matter what the formal legal standard ostensibly used to evaluate them. Although commentators have often criticized the legal system for being too conservative in admitting expert evidence, the problem may be the reverse: The quick and widespread acceptance of a new technique may lead to its deep and permanent entrenchment without sufficient scrutiny.

The second lesson that can be drawn from the state of fingerprint identification evidence is that there may be a productive aspect to the battles of expert witnesses in court. A constant leitmotif in the history of expert evidence has been the call for the use of neutral experts. Such neutral experts could prevent the jury from having to decide between different views on matters about which it lacks knowledge and could ensure that valid science comes before the tribunal. Neutral experts have been recommended so frequently as a cure for the problems of expert evidence that the only wonder is that we have in practice budged so little from our adversarial approach to expert testimony. Those who have advocated neutral experts as a solution to the difficulties of expert evidence in a lay jury system should therefore take heed from the history of fingerprinting. Early fingerprint experts were not neutral experts, in the sense that they were called by a party rather than appointed by the court, but they do provide one of our only examples of a category of expert scientific knowledge in which the typical adversarial battles were largely absent. And the history of fingerprinting suggests that without adversarial testing, limitations in research and problematic assumptions may long escape the notice of experts and judges alike. Although it is easy to disparage battles of the experts as expensive, misleading, and confusing to the factfinder, these battles may also reveal genuine weaknesses. It is, perhaps, precisely because of the lack of these challenges that fingerprinting was seen to provide secure and incontestable knowledge. Ironically, had defense experts in fingerprinting emerged from the beginning, fingerprint evidence might have lower cultural status but in fact be even more trustworthy than it is today.

Finally, to return to the practical dimension: Given fingerprinting’s weaknesses, what should be done? Clearly, more research is necessary. There should be serious efforts to test and validate fingerprinting methodologies and to develop difficult and meaningful proficiency tests for practitioners. Even in his second opinion, Judge Pollak recognizes that fingerprinting has not been adequately tested; he simply decides that he will admit it nonetheless. But until such testing proceeds, what should judges do? Should they follow Pollak’s lead and admit it in the name of not “let[ting] the best be the enemy of the good”? Especially given fingerprinting’s widespread authority, this seems highly problematic, for jurors are unlikely to understand that fingerprinting is far less tested than they assume. Should judges instead exclude it as failing Daubert? This would have the valuable side effect of spurring more research into fingerprint identification’s reliability and is perhaps the most intellectually honest solution, for under Daubert’s criteria, fingerprinting does not fare well. The problem with exclusion is that fingerprinting, although problematic, is still probably far more probative than much evidence that we do permit, both expert and nonexpert; so it seems somewhat perverse to exclude fingerprinting while permitting, say, eyewitness testimony. Perhaps courts ought therefore to forge intermediate and temporary compromises, limiting the testimony to some degree but not excluding it completely. Judges could permit experts to testify about similarities but exclude their conclusions (this is, in fact, what Pollak proposed in his first opinion). Or they could admit fingerprinting but add a cautionary instruction. They could admit fingerprints in cases where the prints are exceptionally clear but exclude them when the specimens are poor. Any of these compromise solutions would also signal to the community of fingerprint examiners that business as usual cannot continue indefinitely; if more research and testing are not forthcoming, exclusion could be expected.

Frankly, reasonable people and reasonable judges can disagree about whether fingerprinting should be admissible, given our current state of knowledge. The key point is that it is truly a difficult question. At a minimum, judges ought to feel an obligation to take these challenges seriously, more seriously than most of them, with the exception of Judge Pollak and Judge Michael, have to date. They should grapple explicitly and transparently with the difficult question of what to do with a technique that clearly has great power as an identification tool but whose claims have not been sufficiently tested according to the tenets of science.
