Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists, by Joel Best. Berkeley and Los Angeles: University of California Press, 2001, 190 pp.
It Ain’t Necessarily So: How Media Make and Unmake the Scientific Picture of Reality, by David Murray, Joel Schwartz, and S. Robert Lichter. Lanham, Maryland: Rowman & Littlefield Publishers, 2001, 248 pp.
David S. Moore
“Report deplores science-media gap,” declared a 1998 headline in Science magazine. The article noted a sample survey disclosing that journalists think that scientists are arrogant, whereas scientists think that journalists are ignorant. One result relevant to the books under review: 82 percent of the scientists agreed that the “media do not understand statistics well enough to explain new findings” in medicine and other fields.
Here are two more books, both competent, that explore for general readers the interactions among science, social activism, the media, and the loose numbers that often result from statistical studies of complex and vaguely defined problems. They have much common ground. They overlap in their discussion of prevalent abuses of data: vague and varying definitions, imperfect measures, partial results reported without adequate context, and so on. Both abound in amusing or infuriating examples. Both point to the way in which “good causes” tend to attract bad statistics. Neither is systematic in describing the weaknesses that occur in the use of statistics in public discourse or in the standards needed for good practice. Despite these similarities, they are quite different books by authors with different backgrounds. I found it remarkable how few of the same examples appear in both books.
Joel Best is a sociologist, and he concentrates on social statistics, or more precisely bad social statistics and why they won’t go away. The greatest strength of Damned Lies and Statistics is its consistent presentation of the sociological context of bad statistics. Best notes that social problems are “constructed” in the sense of being singled out for attention and promoted as serious until the public, previously indifferent, comes to regard hate crimes or child abuse, for example, as self-evidently major problems requiring action. Statistics, even the professional products of the Census Bureau and the Bureau of Labor Statistics, also reflect social construction. Those who doubt this should ponder the official definition of “unemployed,” which bears little relation to the everyday meaning of the word.
Best describes the sociology of activist groups, reminding us that dedication to a just cause insufficiently acknowledged by the public at large justifies (in the eyes of the dedicated) what skeptics regard as abuses. What is more, the activists have a point: Social phenomena have “dark figures,” unrecorded occurrences that may outweigh reported cases. But when activists believe that their cause is just and are certain that the dark figure must be large, the slippery slope awaits: Overly broad definitions expand the problem, and shocking examples suggest without actually asserting that all the cases covered by the broad definition are as horrifying as the examples. Because activists believe that no one understands the problem as well as they do, they are entitled to give “estimates” when reporters want numbers. Even sound data give rise to “mutant statistics,” which are simpler or more compelling than their ancestors and so well adapted to the ecology of the press and the public that they drive out more accurate but less dramatic numbers. These themes are nicely explained and even more nicely illustrated by numerous examples. We see, for instance, how “an estimate that perhaps 6 percent of priests in treatment were at some point attracted to young people was transformed into the ‘fact’ that 6 percent of all priests had had sex with children.”
Damned Lies and Statistics alludes in passing to the needs and practices of the media. It Ain’t Necessarily So, which cites Best’s other work several times, focuses on media reporting of research studies that are judged to be of public interest. Although the book is not ostensibly about statistics, statistical issues are pervasive. Indeed, roughly half the book is devoted to demonstrating that press reports regularly overlook statistical weaknesses in research.
Like Best, David Murray, Joel Schwartz, and S. Robert Lichter bring a social science perspective to their subject: Two were trained as political scientists, one as a social anthropologist. Unlike Best, they inform their readers at length about the culture of the press and more briefly about the culture of science. They show how scientists seeking an audience for their work collaborate with the media in oversimplifying conclusions, omitting the context of other studies, and emphasizing the interest of the findings while neglecting weaknesses in the evidence. Although activists are prominent in the book, scientists and especially journalists occupy center stage.
The great strength of It Ain’t Necessarily So lies in the many carefully documented case studies of exactly what different media outlets reported or failed to report about specific issues and of the quality of follow-up reporting in instances in which a scientific consensus eventually emerged. The authors place these case studies in an explanatory framework that emphasizes that “news” is a manufactured product, and they attempt to clarify the process by which events become news. Most of their examples are drawn from the print media, where it is easier to document exactly what was reported. It will surprise no one that the New York Times and the Wall Street Journal often differ in what their staffs consider newsworthy, as well as in the details they choose to publish or omit. That the authors’ own judgments can sometimes be criticized is no argument against their theme.
The authors’ portrait of journalists has something in common with Best’s picture of activists. Journalists are motivated by a noble desire to right wrongs, unmask hidden evils, and uncover ulterior motives. They are often partial to a simplified “villain, victim, hero” narrative and to an adversarial style that leads, in the case of science reporting, to the Food Marketing Institute’s Tim Hammonds’s wry dictum that “for every Ph.D., there is an equal and opposite Ph.D.” Journalists like and respect data, which seem solid, but dislike the qualifications and uncertainties that scientists accept and expect. Of course, journalists face a difficult task in summarizing a complex world in a finite number of words. The authors are careful not to cast them as villains or incompetents and appear to be trying to clarify journalistic practices for the rest of us. I would guess that most scientists and all scientific societies have already heard the authors’ warning that “other players will shape and construe the results and carry them to the public in partial form. Moreover, all findings will be apprehended in terms of cultural understandings that the media bring to bear on any individual story. It follows that developing a more sophisticated appreciation of the media’s interaction with public policy should be a high priority for scientists.” Neither Murray et al. nor Best offer suggestions for change that go beyond “be aware and be critical.” As social scientists, they describe rather than prescribe.
When I was a program director at the National Science Foundation long ago, we were urged to be on the lookout for “nuggets” in the work of the investigators we supported: stories that could help justify research to Congress and the public. These books abound in nuggets for those who think, write, or teach about social science, statistics, or the media. Moreover, each places the nuggets in a matrix of sorts. Both books are informative. Neither is highly original or provocative.
What do I miss in these books? Examples of sound statistics wisely used, for one thing. Both note that some data are much more reliable than others, and Murray et al. point to and applaud examples of accurate reporting, but both share the predilection of journalists for picturesquely negative stories. Not all data-based reasoning in messy settings is misleading; accounts of successes would sharpen the condemnation of shoddy work. For example, they could have mentioned the Tennessee STAR project, a four-year randomized comparative experiment that clearly demonstrated the beneficial effects of small class sizes on learning in the primary grades. Most of all, I think that any account of the many and dire misuses of statistics should also support the superiority of data, even partial and imperfect data, over anecdotes. No news story, it seems, can begin without a human interest anecdote, and many go no further.
Let us suppose that we wanted, in 1996, to make a case that corporate downsizing was demoralizing the American worker, even to the point of encouraging political extremism. This might seem like a tough assignment at the beginning of the great economic boom. Fear not. With anecdotes, anything is possible. Visit Dayton, Ohio, where the woes of NCR, the former National Cash Register, are eroding the community’s prosperity. Avoid Redmond, Washington, where Microsoft is hiring and passing out stock options. Poll the members of the class of 1970 at an expensive private university to learn that they worry that their children will not do as well as they. Don’t interview Korean immigrants, who would have a different estimate of their children’s future. The New York Times had no difficulty pointing to the devastation caused by downsizing in a seven-part series in March 1996. Dayton, Ohio, and the Bucknell class of 1970 each received full-page treatment. As the Harvard statistician Frederick Mosteller has said, “It is easy to lie with statistics. But it is easier to lie without them.”
David S. Moore (email@example.com) is Shanti S. Gupta Distinguished Professor of Statistics at Purdue University and the author of Statistics: Concepts and Controversies (5th edition, Freeman, New York, 2001).