Forum – Winter 2004

Forensic science, no consensus

In “A House with No Foundation” (Issues, Fall 2003), Michael Risinger and Michael Saks raise what they perceive to be serious questions regarding the reliability of forensic science research conducted by law enforcement organizations and, in particular, by the Federal Bureau of Investigation (FBI) Laboratory. They make sweeping, unsupported statements about scientists’ bias, manufactured data, and overstated interpretations. Although it is not possible to address each of these ill-founded remarks in this brief response, it is apparent that the authors are unaware of, or at least misinformed about, the FBI Laboratory R&D programs, which foster an environment quite different from the one the authors portray. I appreciate this opportunity to inform your readers about the strong foundation that forensic science research provides for the scientific community.

The FBI Laboratory’s Counterterrorism and Forensic Science Research Unit (CTFSRU) is responsible for the R&D within the FBI Laboratory and provides technical leadership in counterterrorism and forensic science for federal, state, and local laboratories. The CTFSRU focuses its research activities on the development and delivery of new technologies and methodologies that advance forensic science and fight terrorism. Our R&D efforts range from fundamental studies of microbial genomes to the development of new forensic and counterterrorism technologies.

In 2003, the CTFSRU had 115 active R&D projects, with a budget of $29 million. Our CTFSRU research staff is composed of 15 Ph.D.-level and 7 M.S./B.S.-level permanent staff scientists, supported by approximately 30 visiting scientists: academic faculty, postdoctoral fellows, and graduate and undergraduate students from accredited universities. In addition, to leverage our R&D activities and allow state and local laboratories to participate directly in FBI research efforts, the FBI Laboratory’s Research Partnership Program was initiated in 2002. Since its inception, partnership opportunities have expanded beyond research collaborations to include creation and maintenance of databases, method development, testing and validation, and technology assessment and transfer. Today, scientists from approximately 40 laboratories, including state, local, federal, and international laboratories, are research partners. The results of completed research projects are published in peer-reviewed scientific journals, findings are presented at scientific meetings, and advanced technical training is provided to the forensic community via formal classes and training symposia.

The FBI Laboratory R&D collaborations involve prestigious researchers from academia, national laboratories, and private industry, as well as forensic science laboratories worldwide. With this diversity of input and review of our research projects, it is difficult to comprehend how one could perceive that research scientists are biased simply because the law enforcement community is providing funding for their research.

A more in-depth discussion regarding the specific issues that have been raised in this and other articles related to the forensic sciences will be published in the April 2004 issue of Forensic Science Communications.

DWIGHT E. ADAMS

Director

FBI Laboratory

Quantico, Virginia


In their fine article, Michael Risinger and Michael Saks present an all-too-accurate critique of the so-called forensic identification sciences. Some, they argue, are so deeply entrenched in the litigation system that they have been able to escape scrutiny for decades despite their having no scientific foundation at all. Experts simply testify based on their experience, and courts accept their opinions without any serious inquiry into such matters as validity, reliability, and observer bias. In some instances, testing of practitioners and methodologies has begun, but the research designs and analyses are so biased as to be of little scientific value.

There are some exceptions, however, which Risinger and Saks do not discuss. Examining areas in which forensically oriented research is being conducted with scientific rigor can help us to identify the conditions under which such research can more generally be made to comply with the ordinary standards of scientific inquiry.

Ironically, a good example of the development of sound forensic science has grown out of the legal community’s premature acceptance of some questionable methods. In the 1960s, a scientist from Bell Labs, which had earlier developed the sound spectrograph, claimed that human voices, like fingerprints, are unique and that spectrograms (popularly called “voiceprints”) could be used to identify people with great accuracy. Studies published in the 1970s showed low error rates, and judges began to permit voiceprint examiners (often people who had received brief training in police labs) to testify as experts in court.

Unlike the forensic sciences on which Risinger and Saks focus, however, acoustic engineering, phonetics, and linguistics are robust fields that exist independently of the forensic setting. Prominent phoneticians and engineers spoke up against using voiceprints in court. Then, in 1979, a committee of the National Research Council issued a report finding that voiceprints had not been shown to be sufficiently accurate in forensic situations, where signals are often distorted or degraded. Although some courts continue to admit voiceprint analysis, its use in the courtroom has declined.

At the same time, however, new research into automatic speaker recognition technology has been making significant progress. Laboratories around the world compete annually in an evaluation sponsored by the National Institute of Standards and Technology, which is intended to simulate real-life situations. Rates of both misses and false alarms are plotted for each team, and the results of the evaluation are published. How can this happen even with government-sponsored research? First, the researchers are trained in fields that exist independently of courtroom application. Second, reliable methods are established in advance and are adhered to openly and on a worldwide basis.
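To make concrete what such an evaluation measures, the sketch below (with hypothetical scores; this is not the NIST evaluation software) computes the two error rates from a set of scored speaker-verification trials at a chosen decision threshold:

```python
# Illustrative sketch: compute the miss rate (true speaker rejected) and the
# false-alarm rate (impostor accepted) at a given threshold, the two error
# rates that speaker-recognition evaluations plot against each other.

def error_rates(trials, threshold):
    """trials: list of (score, is_target) pairs; is_target is True when the
    test speaker really is the claimed speaker."""
    target_scores = [s for s, is_target in trials if is_target]
    impostor_scores = [s for s, is_target in trials if not is_target]
    misses = sum(1 for s in target_scores if s < threshold)
    false_alarms = sum(1 for s in impostor_scores if s >= threshold)
    return misses / len(target_scores), false_alarms / len(impostor_scores)

# Hypothetical scores: higher means "more likely the same speaker."
trials = [(2.3, True), (1.1, True), (0.4, True),
          (-0.2, False), (0.9, False), (-1.5, False)]
print(error_rates(trials, threshold=0.5))  # (0.333..., 0.333...)
```

Sweeping the threshold traces out the full trade-off curve that each participating team reports.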

Not every forensic field has both the will and the scientific prowess to engage in such serious research. But at least it appears to be possible when the conditions are right.

LAWRENCE M. SOLAN

Professor of Law

Director, Center for the Study of Law,

Language and Cognition

Brooklyn Law School

Brooklyn, New York

PETER M. TIERSMA

Professor of Law

Joseph Scott Fellow

Loyola Law School

Los Angeles, California

Lawrence M. Solan and Peter M. Tiersma are the authors of the forthcoming book Language on Trial: Linguistics and the Criminal Law.


Michael Risinger and Michael Saks make several unfortunate remarks about the forensic sciences in general and about me and a colleague in particular. But we do agree on one thing at least.

First, Risinger and Saks state that “(m)any of the forensic techniques used in courtroom proceedings . . . rest on a foundation of very weak science, and virtually no rigorous research to strengthen this foundation is being done.” They list “hair analysis” as one of the weakly founded techniques in need of more research. By this phrase, I take it that the authors mean “microscopical hair comparisons,” because they discuss it as such later in their paper, but “hair analysis” is often used to describe the chemical analysis of hairs for drugs or toxins–a very different technique. The forensic comparison of human hairs is based on histology, anthropology, anatomy, dermatology, and, of course, microscopy. An extensive literature exists for the forensic hair examiner to rely on, and the method appears in textbooks, reference books, and (most importantly) peer-reviewed journal articles. Although DNA has been held up by many as the model for all forensic sciences to aspire to, not every forensic science can or should be assessed with the DNA template. Forensic hair comparisons do not lend themselves to statistical interpretation as DNA does, and therefore other approaches, such as mitochondrial DNA analysis, should be used to complement the information derived from them. A lack of statistics does not invalidate microscopical hair comparisons; a hammer is a wonderful tool for a nail but a lousy one for a screw, and yet the screw is still useful given a proper tool.

Second, the authors use a publication I coauthored with Bruce Budowle on the correlation of microscopical and mitochondrial analyses of human hairs as an example of exclusivity in research (they describe it as an example of a “friends-only regime”). I worked at the FBI Laboratory, as did Budowle, during the compilation of the data found in that paper, and the paper was also submitted for publication while I worked there. Subsequently, I left the FBI and took my current position; the paper was published after that. Regardless of my employer, I have routinely conducted research and published with “outside” colleagues; Budowle certainly has. The data from this research were presented at the American Academy of Forensic Sciences annual meeting and appeared in the peer-reviewed Journal of Forensic Sciences. How our paper constitutes a “friends-only regime” is beyond me.

Third, Budowle and I are accused of having “buried” a critical result “in a single paragraph in the middle of the paper.” This result, that only 9 of the 80 hairs positively associated by microscopical comparisons were excluded by mitochondrial DNA, appeared in the abstract, in Table 2 (the central data of the paper), and in the “Results” and “Discussion” sections. Furthermore, these 9 hairs were detailed as to their characteristics (ancestry, color, body location, etc.) in Tables 3 and 4, as well as in the corresponding discussion. “Buried,” I feel, is not an accurate description.

Finally, Risinger and Saks suggest that my coauthor and I equate the value of the two methods, placing neither one above the other. As far as it goes, that much is true: Microscopical and mitochondrial DNA analyses of human hairs yield very different but complementary results, and one method should not be seen as “screening for” or “confirming” the other. As an example, examining manufactured fibers by polarized light microscopy (PLM) and infrared spectroscopy (IR) yields different but complementary results, and the two methods are routinely applied in tandem for a comprehensive analysis. PLM cannot distinguish among subspecies of polymers, but IR provides no information on optical properties, such as birefringence, that help to exclude many otherwise similar fibers. In the same way, microscopy and mitochondrial DNA methods provide more information about hairs together than separately. To quote from our paper, “(t)he mtDNA sequences provide information about the genotype of the source individual, while the microscopic examination evaluates physical characteristics of an individual’s hair in his/her environment (phenotype).” In our paper, Budowle and I concur with other researchers in the field that “there will be little if any reduction in the level of microscopic examination as it will be both necessary and desirable to eliminate as many questioned hairs as possible and concentrate mtDNA analysis on only key hairs.”

This does not, however, mean that we feel, as Risinger and Saks intimate, that “all techniques are equal, and no study should have any bearing on our evaluation of future cases in court.” The sample in this one study was not representative in a statistical or demographic sense; the study was a review of cases submitted within a certain time frame that had been analyzed by both methods in question. Had Budowle and I extrapolated the results of this one study to the population at large or to all forensic hair examiners, we would have been outside the bounds of acceptable science. I’m sure the authors would have taken us to task for that, as well.

I strongly agree with Risinger and Saks’ statement that “any efforts that bring more independent researchers . . . into the forensic knowledge-testing process should be encouraged” and their call for more independent funding of forensic science research. More independent forensic researchers lead to more demands for funding from public and private agencies interested in improving science, such as the National Science Foundation and the National Institutes of Health. Forensic science has a history of treatment as an “also-ran” science. That perception needs to change for the betterment of the discipline and, more importantly, the betterment of the justice system. It can’t happen without additional money, and it’s time that the science applied to our justice system was funded as if it mattered.

MAX M. HOUCK

Director

Forensic Science Initiative

West Virginia University

Morgantown, West Virginia


In defense of crime labs

In “Crime Labs Need Improvement” (Issues, Fall 2003), Paul C. Giannelli makes a few valid points about the need to improve the capacity and capabilities of forensic science laboratories in the United States. He makes several invalid points too.

Giannelli states that “the forensics profession lacks a truly scientific culture–one with sufficient written protocols and an empirical basis for the most basic procedures.” This claim is without merit. Giannelli cites as proof significant problems in West Virginia and the FBI Crime Laboratory that occurred years ago. A few aged examples should not be used to draw the conclusion that “the quality of the labs is criminal.” One need only walk through an accredited crime laboratory, such as the Los Angeles Police Department (LAPD) Crime Laboratory, to observe that forensic science is the embodiment of scientific culture. Since 1989, at least 10 scientific working groups have been formed, each with a specific scientific focus such as DNA, firearms, or controlled substances. These groups are federally sponsored and have broad representation from the forensic science community. They have worked to establish sound scientific guidelines for analytical methods, requirements for scientist training and education, and laboratory quality-assurance standards. The LAPD Crime Laboratory has not only accepted these scientific guidelines but has participated in their development.

Giannelli claims that accreditation rates for crime laboratories and certification rates for scientists are too low. He cites a lack of funding as contributing to these low rates and to a “staggering backlog of cases.” A review of the situation in California in a recent publication, Under the Microscope, California Attorney General Bill Lockyer’s Task Force Report on Forensic Services (August 2003), provides a different view of the state of forensic crime laboratories. Some of its findings: 26 of California’s 33 public crime laboratories are accredited by the American Society of Crime Laboratory Directors-Laboratory Accreditation Board. The seven nonaccredited labs all intend to apply for accreditation in the near future. The LAPD Crime Laboratory was accredited in 1998.

Certification of criminalists, questioned document examiners, latent fingerprint specialists, and other forensic specialists is not mandatory in California. Many have voluntarily undergone examination and certification by the American Board of Criminalistics, the International Association for Identification, and other certification boards, but most have not. Most forensic specialists do work in accredited laboratories that follow established standards that include annual proficiency testing of staff.

More than 450,000 casework requests are completed in California crime laboratories each year. A relatively low number of requests, 18,000, are backlogged. Of the backlogged requests, most require labor-intensive services such as analysis of firearms, trace evidence, fire debris, latent fingerprints, and DNA.

To reduce backlogs and improve analysis turnaround times, the state and local agencies need to increase permanent staffing levels.

Giannelli’s claim that more funding is needed to reduce backlogs and improve analysis turnaround times is valid and welcome. We also agree with his urging that the nation’s crime laboratories must be accredited and examiners certified. But his questioning of whether forensic science is “truly a scientific endeavor” is clearly invalid.

WILLIAM J. BRATTON

Chief of Police

Los Angeles, California


Polygraph fails test

For several years now, the widespread use of the polygraph as a screening tool in our country’s national laboratories has been a concern of mine, and I am glad that David L. Faigman, Stephen E. Fienberg, and Paul C. Stern have raised the issue (“The Limits of the Polygraph,” Issues, Fall 2003). Although the polygraph is not completely without merit in investigating specific incidents, I have yet to see any scientific evidence that this method can reliably detect persons who are trained to counter such techniques, or that it can deter malicious behavior. In fact, based on my relationship with the national laboratories in the state of New Mexico, I am certain that these tests have the effect of reducing morale and creating a false sense of security in those who are responsible for safeguarding our nation’s greatest secrets.

There are now three studies spanning 50 years that validate my concerns. In 1952, the Atomic Energy Commission (AEC) created a five-person panel of polygraph-friendly scientists to review the tool’s merit in a screening program, and in the following year the AEC issued a statement withdrawing the program as a result of congressional inquiry and serious concerns expressed by these scientists. In 1983, the Office of Technology Assessment concluded that, “the available research evidence does not establish the scientific validity of the polygraph test for personnel security screening.” And in 2003, a comprehensive study by the National Research Council of the National Academy of Sciences concluded essentially the same thing, even using accuracy levels well above the known state of the art.

Although I am encouraged that the Department of Energy is reducing its use of the polygraph in screening its employees, approximately 5,000 people with no record of misbehavior at all will still be subjected to this test every five years. I believe that polygraph use should be limited to highly directed investigations on a one-on-one basis, combined with other data about an individual. I am also troubled by the Department of Defense’s recently authorized expansion of its use of polygraphs. From all accounts, neither the technology nor our understanding of it has advanced much–if at all–in half a century, which leaves its results a highly dubious indicator at best, and a boon to our nation’s enemies at worst. In using polygraphs as an open screening tool, I believe that the Department of Defense is making the same mistake that the Department of Energy is now trying to correct.

SEN. JEFF BINGAMAN

Democrat of New Mexico


The fingerprint controversy

Many of the issues in Jennifer L. Mnookin’s “Fingerprints: Not a Gold Standard” (Issues, Fall 2003) have been discussed at great length during the past few years by many professionals in both the legal and forensic communities. Her contention that friction ridge identification has “not been sufficiently tested according to the tenets of science” raises this question: To what degree should these undefined “tenets of science” determine the evidentiary value of fingerprints?

In Daubert v. Merrell Dow Pharmaceuticals, the U.S. Supreme Court made general observations (more commonly referred to as “Daubert criteria”) that it deemed appropriate to assist trial judges in deciding whether “the reasoning or methodology underlying the testimony is scientifically valid.” The court also stated, “The inquiry envisioned by Rule 702 is, we emphasize, a flexible one. Its overarching subject is the scientific validity–and thus the evidentiary relevance and reliability–of the principles that underlie a proposed submission. The focus, of course, must be solely on principles and methodology, not on the conclusions they generate.”

Many years of scientific testing and validation of fingerprint uniqueness and permanency–the primary premises of friction ridge identification–were significant in a British court’s decision in 1902 to allow fingerprint evidence. Since that time, independent research, articles, and books have documented the extensive genetic, biological, and random environmental occurrences that take place during fetal growth to support the premises that friction ridge skin is unique and permanent. The Automated Fingerprint Identification System (AFIS) today makes it possible to electronically search and compare millions of fingerprints daily. To the best of my knowledge, fingerprint comparisons using AFIS systems worldwide have never revealed a single case of two fingerprints from two different sources having identical friction ridge detail in whole or in part. These findings attest to the uniqueness of fingerprints. Although this cannot be construed as “true” scientific research, it should not be discounted, in my opinion, when evaluating the probative value of fingerprint evidence. Mnookin also acknowledges that, “fingerprint identification . . . may also be more probative than other forms of expert evidence that continue to be routinely permitted, such as physician’s diagnostic testimony, psychological evidence, and other forms of forensic evidence.”

Although I support further scientific research to determine statistically “how likely it is that any two people might share a given number of fingerprint characteristics,” one must be extremely careful when bringing a statistical model into the courtroom. Misinterpretations of DNA statistical probability rates have been reported.

I believe that the identification philosophy and scientific methodology together create a solid foundation for a reliable and scientifically valid friction ridge identification process. S/Sgt. David Ashbaugh of the Royal Canadian Mounted Police describes the identification philosophy as follows: “An identification is established through the agreement of friction ridge formations, in sequence, having sufficient [observed] uniqueness to individualize.” The scientific methodology involves the analysis, comparison, and evaluation of the unknown and known friction ridge impressions for each case by at least two different friction ridge identification specialists. This approach to friction ridge identification follows an extremely uniform, logical, and meticulous process.

As long as properly trained and competent friction ridge identification specialists correctly apply the scientific methodology, the errors will be minimal, if any. The primary issue here, I believe, is training. Is the friction ridge identification specialist properly trained, and does he/she have the knowledge and experience to determine that a small, distorted friction ridge impression came from the same source as the known exemplar? This issue can certainly be addressed on a case-by-case basis, once fingerprint evidence is deemed admissible by the presiding judge, by qualifying the expertise of the friction ridge identification specialist before any fingerprint evidence is given.

Shortcomings do exist in some areas of friction ridge identification, and they should be addressed: specifically, the need for standardized training and continuing research. The evidentiary value, however, of friction ridge identification is significant and should not be excluded from the courtroom on the basis that it does not fit into one individual’s interpretation of the “tenets of science.”

MARY BEETON

1st Vice President

Canadian Identification Society

Orangeville, Ontario, Canada

www.cis-sci.ca


I was gratified to read Jennifer Mnookin’s superb article because it shows that yet another eminent scholar agrees with what I have been arguing for some years now: that courts admitted fingerprint evidence nearly a century ago without demanding evidence that fingerprint examiners could do what they claimed to be able to do. What’s more alarming, however, is that courts are now poised to do exactly the same thing again.

Since 1999, courts have been revisiting the question of the validity of forensic fingerprint identification. But no court has yet managed to muster a rational argument in defense of fingerprint identification. Instead, courts have emphasized the uniqueness of all human fingerprints (which is irrelevant to the question of how accurately fingerprint examiners can attribute latent prints to their sources) and, as Mnookin points out, “adversarial testing.” But any scientist can easily see that a criminal trial is not a scientific experiment.

Additionally, most scientists will probably be shocked to learn that fingerprint examiners claim that the error rate for fingerprint identification can be parsed into “methodological” and “human” error rates and that the former is said to be zero, despite the fact that known cases of misidentification have been exposed. (Of course, there is no concept of a “methodological error rate” in any area of science other than forensic fingerprint identification. Try typing it into Google.) It’s like saying that airplanes have a zero “theoretical” crash rate. But courts have accepted the zero “methodological error rate” and, by fiat, declared the “human error rate” to be “vanishingly small,” “essentially zero,” or “negligible.”

Such arguments, which would be laughed out of any scientific forum, have found a receptive audience in our courts. As a result of this, fingerprint examiners have answered demands for scientific validation by invoking the fact that courts accept fingerprint identification. Since fingerprint examiners do not answer to any academic scientific community, legal opinions have come to substitute for the scientific validation that Mnookin, like virtually every disinterested scholar who has examined the evidence, agrees is lacking. Legal scholars, psychologists, and scientists–the weight of scholarly opinion is clear. The lone holdouts are the ones that matter most: the courts, which are inching ever closer to declaring the issue closed and to treating questions about the validity of fingerprint identification as absurd on their face.

Science has encountered a roadblock in the courts. But the scientific community, which has remained largely silent on the issue, can break the impasse. Fingerprint identification may or may not be reliable, but in either case “adversarial testing” and “zero methodological error rates” are not good science. Will the scientific community allow these notions to enjoy the imprimatur of our courts, or will it demand that scientists and technicians (whichever fingerprint examiners are) provide real evidence to support their claims? This is one issue where the scientific community needs to serve as the court of last resort.

SIMON A. COLE

Assistant Professor of Criminology, Law,

and Society

University of California, Irvine

Simon A. Cole is the author of Suspect Identities: A History of Fingerprinting and Criminal Identification (Harvard University Press, 2001).


For most of us, fingerprints do indeed conjure up the image of a gold standard in personal identification. In her provocative article, Jennifer Mnookin questions that view and raises a number of issues about the scientific evidence supporting the admissibility of fingerprints in criminal proceedings. I will address only one of these issues: the historical question she asks about why fingerprinting was accepted so rapidly and with so little skepticism. The answer is simple: Fingerprints were adopted initially because they enjoyed a very strong scientific foundation attesting to their efficiency and accuracy in personal identification. With widespread adoption and successful use, this early scientific foundation receded from view, to the point where it has been all but invisible, including in several recent and otherwise scholarly books on the subject.

The scientific foundation of fingerprints as a means of accurate personal identification dates from the work of Francis Galton, particularly in three books he published in 1892-1895. In his 1892 book Fingerprints, he presented evidence for their permanence (through the examination of several series of prints taken over a long span of time) and developed a filing scheme that permitted storage and rapid sifting through a large number of prints. The filing scheme, as further developed by E. R. Henry in 1900, spread to police bureaus around the world and made the widespread use of the system feasible. That portion of Galton’s work is reasonably well known today. But more to the point of present concerns, Galton also gave a detailed quantitative demonstration of the essential uniqueness of an individual’s fingerprints, a demonstration that remains statistically sound by today’s standards.

The quantification of the rarity of fingerprint patterns is much more difficult than is the similar study of DNA patterns. Strands of DNA can be taken apart, and the pieces to be subjected to forensic study may be treated as nearly statistically independent, using principles of population genetics to allow for the known slight dependencies and for differences in pattern among population subgroups. The (nearly) independent pieces of evidence, no one of which has overwhelming force, may then be combined to produce quantitative measures of uniqueness that can, when carefully presented, overwhelm any reasonable doubt.
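To make this combination step concrete, here is a minimal sketch of the standard product rule, in which per-locus genotype frequencies are multiplied to give an overall random-match probability. The allele frequencies are hypothetical, chosen only for illustration; real casework uses population databases and corrections for population substructure.

```python
# Product-rule sketch: treat typed loci as (nearly) independent and multiply
# their genotype frequencies to get a random-match probability.

def genotype_frequency(p, q=None):
    """Hardy-Weinberg genotype frequency: p^2 for a homozygote, 2pq for a heterozygote."""
    return p * p if q is None else 2 * p * q

# Hypothetical profile typed at three loci: two heterozygous, one homozygous.
per_locus = [
    genotype_frequency(0.10, 0.20),  # 2 * 0.10 * 0.20 = 0.04
    genotype_frequency(0.05, 0.15),  # 0.015
    genotype_frequency(0.08),        # 0.0064
]

random_match_probability = 1.0
for f in per_locus:
    random_match_probability *= f

print(random_match_probability)  # about 3.84e-06, i.e., roughly 1 in 260,000
```

With a dozen or more loci, the same multiplication drives the figure to the vanishingly small probabilities familiar from courtroom DNA testimony.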

Fingerprints present a different set of challenges. They have one advantage over DNA: the fine details of fingerprints are developmental characteristics, and even identical twins sharing the same DNA have distinguishably different fingerprints (as Galton in fact showed). But a fingerprint exhibits widespread correlation over the parts of its pattern and is not so easily disaggregated into elementary components, as is a DNA strand. How then did Galton overcome this obstacle?

In his 1892 book, Galton presented an ingenious argument that disaggregated a full fingerprint into 24 regions and then gave for each region a conservative assessment of the conditional rarity of the regional pattern, taking account of all detailed structure outside of that region. This conditional assessment successfully swept aside worries about the interrelations across the whole of the pattern and led him to assert a conservative bound on a match probability: Given a full and well-registered fingerprint, the probability that another randomly selected fingerprint would match it in all minutiae was less than 1 in 2^36 (about 1 in 69 billion, although Galton’s calculation was not equal to his ingenuity, and he stated this was 1 in 64 billion). If two prints were matched, this would be squared, and so forth. I give a more detailed account of Galton’s investigation in a chapter of my book Statistics on the Table (Harvard University Press, 1999).
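For readers who want the arithmetic behind these figures (a reconstruction of the numbers only, not of Galton’s region-by-region reasoning), the bound and its square work out, in modern notation, to

$$
\left(\tfrac{1}{2}\right)^{36} = \frac{1}{2^{36}} = \frac{1}{68{,}719{,}476{,}736} \approx \frac{1}{6.9 \times 10^{10}},
\qquad
\left(2^{-36}\right)^{2} = 2^{-72} \approx 2 \times 10^{-22},
$$

that is, about 1 in 68.7 billion for a single print, slightly larger than the 1 in 64 billion Galton quoted.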

If Galton’s investigation withstands modern scrutiny, and it does, why should we not accept it today? There are two basic reasons why we might not. First, he assumed that the print in question was a full and accurately registered print, and he made no allowance for partial or blurred prints, the use that produces most current debate. Galton himself would accept such prints, but cautiously, and subject to careful (but unspecified) deliberation by the court. And second, his quantitative assessment of the rarity of regions was made on the basis of narrow empirical experience; namely, his own experimentation. He proceeded carefully and his results deserve our respect, but we require more today.

Galton’s investigation was crucial to the British adoption of fingerprints, and his figure of 1 in 64 billion was quoted frequently in the decades after his book, as fingerprints gained widespread acceptance. But as time passed and success (and no significant embarrassments) accumulated, this argument receded from view. Nonetheless, for most of the first century of their use, fingerprints enjoyed a scientific foundation that exceeded that of any other method of forensic identification of individuals. Although more study would surely be beneficial for its modern use, we should not wonder about the initial acceptance of this important forensic tool.

STEPHEN M. STIGLER

Department of Statistics

University of Chicago

Chicago, Illinois


In addition to the problems with fingerprint identification discussed by Jennifer Mnookin, a new source of potential error is being introduced by the expanded use of digital technology. The availability of inexpensive digital equipment such as digital cameras, printers, and personal computers is providing even the smallest police departments with their own digital forensic laboratories. Although this would appear to be a sign of progress, in practice it is introducing a new realm of errors through misuse and misunderstanding of the technology. Until police receive better training and use higher-quality equipment, the use of digital technology should be challenged in court, if not eliminated.

Consider what happens in fingerprinting. Crime investigations often involve the recovery of latent fingerprints, the typically small, distorted, and smudged fragments of a fingerprint found at crime scenes. Police often use digital cameras to photograph these fingerprints, instead of traditional cameras with 35-mm forensic film. Although the digital cameras are convenient for providing images for computer storage, they are not nearly as accurate as the analog cameras they are replacing. When shooting in color, a two-megapixel digital camera produces an image with only 800 dots per inch, whereas a 35-mm camera provides 4,000-plus dots per inch. As a result, critical details including pores and bifurcations can be lost. Colors are similarly consolidated and very light dots are lost.
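The arithmetic behind such resolution figures is simple: effective dots per inch is the pixel count across the captured area divided by that area’s width in inches. The sketch below uses hypothetical sensor and field-of-view numbers, chosen only to illustrate the calculation, not to describe any particular camera or film scanner.

```python
# Rough sketch of where effective-resolution figures come from.
# All numbers below are illustrative assumptions.

def effective_dpi(pixels_across, field_of_view_inches):
    """Pixels available per inch of the photographed area."""
    return pixels_across / field_of_view_inches

# A two-megapixel sensor is roughly 1600 x 1200 pixels. Filling the frame with
# a 2-inch-wide evidence area leaves about 800 dpi across that area:
print(effective_dpi(1600, 2.0))   # 800.0

# Film digitized at several thousand pixels across the same 2-inch area keeps
# far more ridge detail (hypothetical scan width shown):
print(effective_dpi(8000, 2.0))   # 4000.0
```

The point is that the same camera gives fewer effective dots per inch the wider the area it must cover, which is why fine detail such as pores can drop out of a latent-print photograph.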

Distortion occurs whenever digital images are displayed. Every monitor and every printer displays color differently. The fact that the resulting image looks crisp and clear does not mean that it is accurate.

Once a latent fingerprint is entered into a computer, commercial software is often used to enhance its quality. Adobe Photoshop, an image-creation and editing product, can be combined with other software products to improve the latent image by removing patterns such as the background printing on a check, the dot pattern on newsprint, or the weave pattern on material that blurs the image of a fingerprint. The software can then be used to enhance the image of the remaining fingerprint.
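For readers unfamiliar with how such pattern removal works, the usual approach is frequency-domain filtering: a periodic background (halftone dots, a fabric weave) shows up as isolated peaks in the image’s two-dimensional Fourier spectrum, which can be notched out before the rest of the enhancement. The sketch below is a minimal, generic illustration in Python/NumPy; it is not the workflow of Photoshop or of any particular forensic package.

```python
import numpy as np

def suppress_periodic_pattern(image, peak_ratio=0.2, dc_radius=10):
    """Zero out strong off-center spectral peaks (e.g., a halftone dot pattern
    or a fabric weave) and invert the transform. `image` is a 2-D float array."""
    f = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.abs(f)
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Keep the low-frequency content near DC; look for peaks outside it.
    outside_dc = (yy - cy) ** 2 + (xx - cx) ** 2 > dc_radius ** 2
    peaks = (magnitude > peak_ratio * magnitude[cy, cx]) & outside_dc
    f[peaks] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

Whatever tool performs it, the output of such a step is a processed image rather than the original capture, which is precisely why its use needs to be documented.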

The problems do not end with the image itself. Once information is computerized, it needs to be protected from hackers. Many police departments allow their computer vendors to install remote access software such as PC Anywhere to facilitate maintenance. Unfortunately, this also makes the computer particularly vulnerable to unauthorized attack and alteration of the computer’s files. Most police departments have no rapid means of determining whether their digital information was modified.

By the time such a digital fingerprint image reaches a courtroom, there is no easy way of verifying it. Even the police department’s own fingerprint examiners who take the stand may not realize that they are working with a digital picture on printer paper and not an original photograph of a fingerprint. Of the 40-plus Daubert challenges to fingerprints in court, none have been based on the inaccuracy or loss of detail associated with the use of digital technology or the possibility of unauthorized manipulation of computer images. In most instances, the defense attorney is not aware that the fingerprint image is digital. Indeed, how could the defense know it is dealing with a digital image if the fingerprint examiner does not?

It might be futile to forbid the police to use digital technology in their work, but it is clear that before this technology can be used successfully, we must develop rigorous standards for the quality and reliability of digital images, extensive training for police personnel, and improved computer security.

MICHAEL CHERRY

Woodcliff Lake, New Jersey

LARRY MEYER

Towanda, Illinois


Radiological terrorism

“Securing U.S. Radioactive Sources” by Charles D. Ferguson and Joel O. Lubenau (Issues, Fall 2003) identifies noteworthy issues concerning the potential malevolent use of high-risk radioactive material. These issues are not new, however. The Nuclear Regulatory Commission (NRC), its Agreement States, and our licensees have taken, or intend to take, measures beyond those mentioned in the article to address these matters–measures that I believe ensure the continued safe and secure use of radioactive material. For this reason, I would like to provide a summary of the progress made to date in ensuring the security of high-risk radioactive material. I also will discuss Ferguson and Lubenau’s recommendation that NRC advocate alternative technologies.

The U.S. government has responded effectively and in a coordinated manner to address the potential for radioactive material to be used in a radiological dispersal device (RDD). The NRC has worked with other federal agencies; federal, state, and local law enforcement officials; NRC Agreement States; and licensees to develop security measures that ensure the safe and secure use and transport of radioactive materials. In addition to the measures taken immediately after the events of September 11, 2001, we have, in cooperation with other federal agencies and the International Atomic Energy Agency (IAEA), established risk-informed thresholds for a limited number of radionuclides of concern that establish the basis for a graded application of any additional security measures. This approach ensures that security requirements are commensurate with the level of health risk posed by the radioactive material. Using these thresholds, the NRC has imposed additional security measures on licensees who possess the largest quantities of certain radionuclides of concern and will address other high-risk material in the near future. At the international level, we have worked closely with the IAEA to develop the “Code of Conduct on the Safety and Security of Radioactive Sources,” which will help ensure that other countries also will work to improve the global safety and security of radioactive materials. Taken together, these national and international cooperative efforts have achieved measurable progress in ensuring adequate protection of public health and safety from the potential malevolent use of high-risk radioactive material.

Concerning the advocacy of alternative technologies that was raised in the Ferguson/Lubenau article, I would note that the NRC does not have regulatory authority to evaluate non-nuclear technologies or, as a general matter, to require applicants to propose and evaluate alternatives to radioactive material. Moreover, such an evaluation may be significantly outside the scope of existing NRC expertise, given that it would need to consider, on a case-specific basis, not only the relative risks of the various non-nuclear technologies that could be applied, but also the potential benefits, including the societal benefits of using each technology.

NILS J. DIAZ

Chairman, Nuclear Regulatory

Commission

Washington, D.C.


The issue of radiological terrorism is one of the most serious homeland security threats we face today. Dirty bombs, although nowhere near as devastating as nuclear bombs, can still cause massive damage, major health problems, and intense psychological harm. Incredibly, action by the Bush administration is making things worse. Two successful and inexpensive radiological security programs in the Department of Energy (DOE)–the Off-Site Source Recovery (OSR) Project and the Nuclear Materials Stewardship Program (NMSP)–are under attack and may be terminated within a year.

There are more than two million radioactive sources in the United States, which are used for everything from research to medical treatment to industry. The Nuclear Regulatory Commission has admitted that of the 1,700 such sources that have been reported lost or stolen over the past five years, more than half are still missing. There is also strong evidence that Al Qaeda is actively seeking radioactive materials within North America for a dirty bomb (see “Al Qaeda pursued a ‘dirty bomb,'” Washington Times, October 17, 2003). I have been working hard to improve the security of nuclear material and reduce the threat of terrorist dirty bombs, both by introducing the Dirty Bomb Prevention Act of 2003 (H.R. 891) and by pursuing vigorous oversight of DOE’s radiological source security programs.

In their excellent article, Charles Ferguson and Joel O. Lubenau point to the OSR Project as an example of a successful federal effort to improve radioactive source security. Although the program retrieved and secured nearly 8,000 unwanted and unneeded radioactive sources from hospitals and universities between 1997 and 2003, its funding and DOE management support will be in jeopardy as of April 2004. Even more troubling is the case of the NMSP, established five years ago to help DOE sites inventory and dispose of surplus radioactive sources. At a cost of only $9 million, the program has recovered surplus plutonium, uranium, thorium, cesium, strontium, and cobalt. By collecting and storing these sources in a single secure facility, the NMSP increased safety and saved $2.6 million in fiscal year 2002. The NMSP is now prepared to assist other federal agencies, hospitals, universities, and other users of radioactive sources. However, in June 2002, DOE Assistant Secretary for Environmental Management Jessie Roberson announced that the NMSP should prepare to finish its activities and shut down in FY2003.

The Bush administration’s failure to energetically support these programs is particularly appalling in light of the May 2003 Group of Eight (G8) summit in Evian, France. With U.S. encouragement, the G8 launched a major new international radiological security initiative involving many of the tasks performed domestically by the OSR Project and the NMSP. If we can’t support these programs at home, how can we expect radioactive responsibility from others?

REP. EDWARD J. MARKEY

Democrat of Massachusetts


Charles D. Ferguson and Joel O. Lubenau provide an excellent review of the problems with the current laws and regulations governing the acquisition and disposal of radiological sources. We’d like to add two more urgent issues to the list.

First, the Group of Eight (G8) leaders detailed a new initiative last May to improve the security of radioactive sources in order to reduce the threat of dirty bombs. This important initiative includes efforts to track sources and recover orphaned sources, improve export controls, and ensure the safe disposal of spent sources. Given this unanimous guidance from the G8, it is astonishing that the U.S. Department of Energy (DOE) has recently moved in a completely opposite direction: The United States is canceling the Nuclear Materials Stewardship Program (NMSP) and closing the Off-Site Source Recovery (OSR) Project. These programs should be strengthened, not eliminated. The NMSP has just completed a multiyear program to catalog radioactive sources at U.S. national labs. It has also assisted in the recovery of unwanted radioactive sources, including plutonium, uranium, thorium, cesium, strontium, and cobalt, from these labs and is prepared to expand its efforts overseas. The OSR Project has recovered 8,000 unwanted radioactive sources from U.S. universities and hospitals and could recover at least 7,000 additional unwanted and unsecured sources. Neither of these programs is expensive to operate. They support the recommendations of the G8 and meet many of the needs detailed by Ferguson and Lubenau.

Second, the United States has no clear protocol for responding to the detonation of a radiological dispersal device (RDD). Given the public’s fear of anything related to radiation, it can’t be assumed that procedures that work well when a chlorine tanker is involved in an accident will be effective in the case of a radiological incident. Clear guidelines for evacuation, sheltering in place, and post-event cleanup must be determined and disseminated before, rather than after, the detonation of an RDD. Preparatory training and materials to help public officials and the press communicate with the public during an incident are essential for ensuring that the public understands the risks and trusts local government and first responders. Radiation, dispersal patterns, and evacuation techniques are all well understood; what is needed is a clear plan of action. Furthermore, debates over the difficult issue of what is “clean” after an RDD event should be undertaken now, not after an incident. The protocol needs to reflect the fact that cleaning procedures to reduce cancer risks from radioactive particles bonded to concrete may not need to be as stringent as the Superfund and other applicable standards would suggest. Contaminated food and water supplies may present a far more urgent danger, and detailed plans should be in place for measuring the danger and establishing procedures before an incident takes place.

In short, Congress should act to fully fund DOE source control programs and to require that the Department of Homeland Security create response guidelines for an RDD.

HENRY KELLY

President

BENN TANNENBAUM

Senior Research Associate

Federation of American Scientists

Washington, D.C.

www.fas.org


Preventing forest fires

Policy related to dealing with fire on the public lands is certainly a hot topic (pun fully intended) in political circles. The development of a national fire policy and the pending passage of the Healthy Forest Restoration Act (H.R. 1904) are the two activities in this political season that are drawing the most attention. Unfortunately, debates surrounding these efforts in some circles have portrayed Republicans (in general) as trying to circumvent environmental laws by using growing concerns about “forest health” and fires “outside the range of historical variability” to speed up “treatments” (which involve cutting trees). Democrats (in general) have responded to environmentalists’ concerns about “shortcutting” public participation and appeals processes by opposing such actions.

The House of Representatives passed H.R. 1904 by a large margin. Things were blocked in the Senate until the dramatic wildfires in southern California in mid-October of 2003–hard on the heels of severe fire seasons in the Northwest in 2000, 2002, and 2003–broke the resistance, and the political response to mounting public concern carried the day. In late October, the compromise legislation passed the Senate by a huge margin. As of this writing, the bill was in conference and considered certain to pass.

No matter what forms the general policy and the Healthy Forest Restoration Act take, it will be essential for natural resources management professionals to construct a more complete framework for action. At this point, “A Science-Based National Forest Fire Policy” by Jerry F. Franklin and James K. Agee (Issues, Fall 2003) can be considered fortuitous in its timing and prescient in its usefulness as a guide for discussion and as a platform for developing a science-based national wildfire policy.

If I were asked to lead an interagency group to take on this task, I would tell the administration that the Franklin/Agee paper would be a most excellent place to start. They correctly note that although H.R. 1904 provides impetus by addressing procedural bottlenecks to action, it does not answer the inevitable questions about “the appropriate decisions about where, how, and why.” Answering those questions would go far toward toning down opposition based on the suspicion that such activities are merely a charade to cover the accelerated wholesale cutting of timber.

To their credit, Franklin and Agee provide a generalized blueprint for doing just that. The land management agencies that will be tasked with carrying out the underlying intent of the legislation would be well advised to consult that blueprint in developing both the underlying foundation and the detailed approaches.

JACK WARD THOMAS

Boone and Crockett Professor of

Conservation

University of Montana

Missoula, Montana

Jack Ward Thomas is Chief Emeritus of the U.S. Forest Service.


Jerry F. Franklin and James K. Agee make an invaluable contribution to the debate over developing ecologically and economically balanced fire management policies. For eons, fire has played an essential role in maintaining the natural processes dictating the function, integrity, and resiliency of many wildland ecosystems. However, decades of fire suppression and past management practices have interrupted fire regimes and other natural processes, thereby compromising natural systems. Meanwhile, drought and development patterns have complicated the responses of land management agencies to “wildfire.” The result is a dire need for effective ecological restoration and community fire protection.

Franklin and Agee describe one possible trail that can be followed in crafting fire management policies tailored to current physical and ecological realities throughout the West. They make a strong case for community protection and ecosystem restoration, as well as for the role of appropriate active vegetation management in achieving those goals. The key to success lies in tailoring that management to the land. Initial, though not exclusive, priority should be placed on community protection. Once significant progress has been made in this all-important zone, efforts can shift more toward ecological restoration where needed across the landscape. Active management should logically be more intensive in the community zone and less intrusive and intensive in the backcountry. In some places, a cessation of management may be enough to free an area to resume natural processes. In other areas, prescribed burning, wildland fire use, strategic thinning, and other forms of mechanical treatment will be both necessary and appropriate. The key in virtually every case will be doing something about ground, surface, and ladder fuels.

The reintroduction of fire can play a vital role in this endeavor. Restoring fire to the ecosystem is of elemental importance both to the ecological health of Western landscapes and to the safety of Western communities that are currently at risk from fire. And yet there are very real social and political obstacles to reintroducing fire to wildland landscapes, even if doing so will have ecological and social benefits, including a reduced risk of future catastrophic fire. The combined effect of these factors is today’s challenge: how to better manage and live with fire so that people and communities are safe, while ecosystems are allowed to benefit from annual seasons of flame. Franklin and Agee make one thing clear: It is time to break the cycle of inaction and get to work in the right places with the right tools.

JAY THOMAS WATSON

Director, Wildland Fire Program

The Wilderness Society

San Francisco, California


Oil and water

Nancy Rabalais’ “Oil in the Sea” (Issues, Fall 2003) makes the recommendation that “the EPA should continue its phase-out efforts directed at two-stroke engines.” However, by not differentiating between old and new technology, the article exaggerates the impact of recreational boating and fails to recognize the effect of current regulations on the engine population.

Since 1997, the Environmental Protection Agency (EPA) has regulated outboard marine engines and personal watercraft. Since the mid-1990s, marine engine manufacturers have been transitioning their product lines toward low-emission technologies, including four-stroke and direct-injection two-stroke engines. Although the EPA regulations are designed to reduce emissions to the air, they will clearly also result in reduced fuel emissions to water, as conventional crankcase scavenged two-stroke (or existing technology two-stroke) engines are replaced by four-stroke engines and direct injection two-stroke engines. In fact, the engine technology jump has produced engines that exceed the early expectations that were used to develop the EPA rule. California adopted a rule in 2000 accelerating the impact of the EPA national rule by five years. All outboard manufacturers had products in place meeting an 80 percent hydrocarbon emission reduction in 2001.

It should also be pointed out that the EPA does not typically promulgate rules that preclude certain technologies such as the two-stroke engine; in fact, direct injection two-stroke engines are considerably cleaner than existing technology two-stroke engines, and they meet or exceed EPA limits for 2006 and California limits for 2004. Current, new-technology, direct injection two-stroke engines reduce hydrocarbon emissions by as much as 80 percent from the emissions produced by conventional two-strokes that are characterized in “Oil in the Sea.” Marine engine manufacturers continue to invest in new technologies to reduce emissions. Marine engine users operate their products in water, and they all want clean water, whether for business or recreation.

SUE BUCHEGER

Manager, Engine Test Services

DAVID OUGHTON

Manager, Regulatory Compliance

Mercury Marine

Fond du Lac, Wisconsin


A humanities policy?

More money for the humanities? By all means! As Robert Frodeman, Carl Mitcham, and Roger Pielke, Jr. point out in “Humanities for Policy–and a Policy for the Humanities” (Issues, Fall 2003), there has been a shocking decline in the U.S. government’s commitment to the humanities during the past few decades, diminishing our ability to deal creatively with important social problems.

An obvious example is nuclear waste. Regardless of one’s views on the virtues or vices of nuclear power, current arrangements for waste storage are clearly inadequate. Yet local opposition to developing long-term storage facilities or even to transporting waste to a facility elsewhere is often fierce. Scientists and engineers characteristically dismiss this opposition as irrational, but regardless of its source, the opposition is as real as the waste that engenders it. If more humanists and social scientists had been involved in plans for waste disposal, some of these concerns might have been anticipated and addressed.

Moreover, the humanities don’t just offer perspective on the human dimensions of the waste disposal issue; they also offer insight into the science itself. After all, the science supporting the proposed repository at Yucca Mountain has been seriously criticized from within the scientific community, despite decades of work and billions of dollars spent. Could this money have been spent more effectively? Very likely. Historians who have studied large-scale science and engineering projects in diverse settings could have offered relevant insights and advice, both at the outset of the project and as difficulties arose along the way. Few scientists and engineers think of the humanities and social sciences as resources that could help them do their job better, but they should.

That said, a federal policy for the humanities is another story, for federal support is a two-edged sword. It is easy to congratulate ourselves on the way in which the U.S. government has managed the tremendous growth of federal science, but a closer look reveals a more complex and less gratifying picture. Although federal support was obviously good for physics (and more recently for molecular biology), other areas of science have not been so fortunate. A credible case can be made that significant areas have been starved of support, yet the question of how federal funding has affected U.S. science in both good and bad ways has scarcely even been posed.

Even more worrisome is the risk of deliberate politicization of research. Consider the recently released congressional report Politics and Science in the Bush Administration, compiled by the House Committee on Government Reform on behalf of Rep. Henry A. Waxman. This report “assesses the treatment of science and scientists by the Bush Administration [and] finds numerous instances where the Administration has manipulated the scientific process and distorted or suppressed scientific findings.” (Executive Summary, p. i). Waxman is a Democrat, but the report includes complaints from former Republican officials as well. If the sciences have been subject to intrusion and interference, consider the far greater vulnerability of disciplines whose topics are often explicitly political and whose methods and evidential standards are admittedly subjective and interpretive.

Policy for the humanities and humanities for policy? Perhaps, but there will be a price, and it might just be too high.

NAOMI ORESKES

Associate Professor of History

Director, Science Studies Program

University of California, San Diego


Confronting nuclear threats

Wolfgang K. H. Panofsky’s claim (“Nuclear Proliferation Risks, New and Old,” Issues, Summer 2003) that the United States “has failed to take constructive leadership” in countering the threat of nuclear terrorism is baffling, given this country’s record of counterproliferation initiatives. A couple of examples will suffice. The Nunn-Lugar Cooperative Threat Reduction and Nonproliferation Program, created by Congress in 1991, has been making significant progress in preventing the nuclear weapons in Russia and the former Soviet republics from being acquired by terrorists and “states of concern,” such as Iran. Since the tragic events of September 11, 2001, the United States was one of the first to recognize that, as the 2002 National Security Strategy puts it, “the gravest danger our Nation faces lies at the crossroads of radicalism and technology.” In May 2003, President Bush launched the Proliferation Security Initiative (PSI), which is expected to enable the United States and an 11-nation “coalition of the willing” to intercept transfers of weapons of mass destruction to states of concern. I believe Panofsky would agree that such preventive action against the proliferation of weapons, including nuclear materials, can be one of the first lines of defense against what he calls a nuclear catastrophe.

The real problem with the Bush administration’s approach to countering nuclear threats–both conventional and asymmetrical–has more to do with style than with actions, as Panofsky would have it. This administration has been steadfastly unwilling to rein in its global counterproliferation initiatives under the structures of international law and security. The U.S. strategy since September 11, 2001, places emphasis on freedom of individual action (unilateral, if necessary) rather than on the constructive constraints of international institutions, such as the United Nations (UN) and NATO. Notwithstanding public statements to the contrary, the U.S. actions, including its counterproliferation posture, largely conform to this strategic vision. The PSI is no exception: The United States did not consider institutionalizing the initiative within the UN Security Council or NATO, which would have given it vital legitimacy and increased its long-term effectiveness. Instead, the Bush administration preferred to retain PSI’s image as a U.S.-made counterproliferation hit squad.

The administration’s scornful view of international norms (during its first months in office) has now largely been replaced by a paternalistic one, according to which leadership through international organizations is an act of charity, not an expression of self-interest. This worldview is problematic for two reasons. First, it is unsustainable in the long run, and the “forced multilateralism” that has found expression in the U.S. approach to the North Korean crisis attests to this. Second, it decreases the effectiveness and capabilities of our global initiatives. Organizing PSI under the UN Security Council or a NATO mandate would give the initiative unique clout by demonstrating the acceptance of this counterproliferation approach by a wide variety of countries and not just by an American-made coalition of the willing. It could also prompt more of our allies to take an active counterproliferation approach. Channeling the U.S. counterproliferation efforts through multilateral structures can significantly increase the effectiveness of these initiatives and, in turn, make this country and the rest of the world more secure from nuclear threats, both old and new.

EUGENE B. KOGAN

Research Intern

Center for Nonproliferation Studies

Monterey Institute of International Studies

Washington, D.C.
