Flying Blind on Drug Control Policy

The axing of a key data collection program is a major setback for effective policymaking.

Not knowing about the actual patterns of illicit drug abuse and drug distribution cripples policymaking. As the subtitle of a National Academies report put it four years ago, “What We Don’t Know Keeps Hurting Us.” (Currently, we don’t even know whether the total dollar volume of illicit drug sales is going up or down from one year to the next.) It hurts more when the most cost-effective data collection programs are killed, as happened recently to the Arrestee Drug Abuse Monitoring (ADAM) program of the National Institute of Justice (NIJ).

Determining the actual patterns of illicit drug abuse is difficult because the people chiefly involved aren’t lining up to be interviewed. Heavy users consume the great bulk of illicit drugs, and the vast majority of them are criminally active. About three-quarters of heavy cocaine users are arrested for felonies in the course of any given year. But somehow these criminally active heavy users don’t show up much in the big national surveys.

In the largest and most expensive of our drug data collection efforts, the household-based National Survey on Drug Use and Health, only a tiny proportion of those who report using cocaine frequently report ever having been arrested. Most of the criminally active heavy drug users are somehow missed: Either they’re in jail or homeless and therefore not part of the “household” population, or they’re not home when the interviewer comes, or they refuse to be interviewed. (The total “nonresponse” rate, not-homes plus refusals, for the household survey is about 20 percent. Because the true prevalence of heavy cocaine use in the adult household population is on the order of 1 percent, that’s devastating: heavy users need be only modestly overrepresented among the missing fifth of households for most of them to escape the count entirely.) An estimate of total cocaine consumption derived from the household survey would account for only about 10 percent of actual consumption, or about 30 metric tons out of about 300.
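To make the arithmetic concrete, here is a rough back-of-the-envelope sketch. The 20 percent nonresponse rate, the 1 percent prevalence, and the 30-of-300-ton figure come from the discussion above; the assumption that 80 percent of heavy users go uncounted is purely illustrative.

```python
# Back-of-the-envelope: why modest nonresponse wrecks estimates of a rare,
# concentrated behavior. Only the ~20% nonresponse rate, ~1% prevalence, and
# 30-of-300-ton figures come from the text; the rest are illustrative guesses.

adults = 200_000_000        # adult household population, rough order of magnitude
heavy_prevalence = 0.01     # ~1% are heavy cocaine users
nonresponse = 0.20          # ~20% of households are not-home or refuse

heavy_users = adults * heavy_prevalence                    # 2,000,000

# Illustrative assumption: heavy users, disproportionately jailed, homeless,
# not home, or unwilling to talk, are concentrated among those the survey
# misses. Suppose 80% of them go uncounted.
heavy_uncounted_share = 0.80
heavy_counted = heavy_users * (1 - heavy_uncounted_share)  # 400,000

respondents = adults * (1 - nonresponse)                   # 160,000,000
measured_prevalence = heavy_counted / respondents

print(f"true prevalence:     {heavy_prevalence:.2%}")      # 1.00%
print(f"measured prevalence: {measured_prevalence:.2%}")   # 0.25%: a 4x undercount

# The undercount in quantity is worse still, because the missing heavy users
# consume far more per capita: roughly 30 of 300 metric tons is captured.
print(f"consumption coverage: {30 / 300:.0%}")             # 10%
```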

So when the National Drug Control Strategy issued by the Office of National Drug Control Policy (often referred to as “the drug czar’s office”) bases its quantitative goals for reducing the size of the drug problem on changes in self-reported drug use in the household survey (or in the Monitoring the Future survey aimed at middle-school and high-school students), it’s mostly aiming at the wrong things: not the number of people with diagnosable substance abuse disorders, or the volume of drugs consumed, or the revenues of the illicit markets, or the crime associated with drug abuse and drug dealing. All of those things are arguably more important than the mere numbers of people who use one or another illicit drug in the course of a month, but none of them is measured by the household survey or the Monitoring the Future survey.

If most drugs are consumed by heavy users and most heavy users are criminally active, then to understand what’s happening on the demand side of the illicit drug business we need to study criminally active heavy users. The broad surveys can provide valuable insight into future drug trends; in particular, the “incidence” measure in the household survey, which picks up the number of first-time users of any given drug, is a useful forecasting tool. But because the vast majority of casual users never become heavy users, and because the rate at which casual users develop addictive disorders is neither constant nor well understood, spending a lot of money figuring out precisely how many once-a-month cocaine users there are isn’t really cost-effective.

The obvious places to look for criminals are the jails and police lockups where they are taken immediately after arrest. So if most of the cocaine and heroin in the country is being used by criminals, why not conduct another survey specifically focused on arrestees?

That was the question that led to the data collection effort first called Drug Use Forecasting and then renamed Arrestee Drug Abuse Monitoring (ADAM). Because ADAM was done in a few concentrated locations, it was able to incorporate what neither of the big surveys has ever had: “ground truth” in the form of drug testing results (more than 90 percent of interviewees agreed to provide urine specimens) as a check on the possible inaccuracy of self-reported data on sensitive questions.

The good news about ADAM was that it was cheap ($8 million per year, or about a fifth of the cost of the household survey) and produced lots of useful information. The bad news is that the program has now been cancelled.

The proximate cause of the cancellation was the budget crunch at the sponsoring agency, the NIJ. The NIJ budget, at about $50 million per year, is about 5 percent of the budget of the National Institute on Drug Abuse (NIDA), which sponsors Monitoring the Future, and about 10 percent of the budget of the Center for Substance Abuse Treatment, which funds the household survey. That’s part of a pattern commented on by Peter Reuter of the University of Maryland: More than 80 percent of the actual public spending on drug abuse control goes for law enforcement, but almost all of the research money is for prevention and treatment. (Private and foundation donors are even less generous sponsors of research into the illicit markets.) Thus, the research effort has very little to say about the effectiveness of most of the money actually spent on drug abuse control.

But the picture is even worse than that, because most of the NIJ budget is earmarked by Congress for “science and technology” projects (mostly developing new equipment for police). When Congress cut the NIJ budget from $60 million in fiscal year (FY) 2003 to $47.5 million in FY 2004, it also reduced the amount available for NIJ’s behavioral sciences research from $20 million to $10 million. Although the $8 million spent on ADAM seems like a pittance compared to the household survey, NIJ clearly couldn’t spend four-fifths of its total crime research budget on a single data collection effort.

However, the NIJ could have continued to fund a smaller effort, involving fewer cities and perhaps annual rather than quarterly sampling. For whatever reason, ADAM seems to have been unpopular at the top management level ever since Sarah Hart replaced Jeremy Travis and his interim successor Julie Samuels as the NIJ director in August 2001.

Unconventional sampling

In addition to its budgetary problems, ADAM had a problem of inadequate scientific respectability because of its unconventional sampling process. ADAM was a sample of events (arrests) rather than a sample of people. The frequency of arrest among a population of heavy drug users varies from time to time in unknown ways and for causes that may be extraneous to the phenomenon of drug abuse. For example, if the police in some city cut back on prostitution enforcement to increase enforcement against bad-check passers, and if the drug use patterns of bad-check passers differ from those of prostitutes, the ADAM numbers in that city might show a drop in the use of a certain drug that didn’t reflect any change in the underlying drug market. So it isn’t possible to make straightforward generalizations from ADAM results to the population of heavy drug users or even to the population of criminally active drug users.

Moreover, because arrest practices and the catchment areas of the lockups where ADAM took its samples varied from one jurisdiction to another (Manhattan and Boston, for example, are purely big-city jurisdictions, whereas the central lockup in Indianapolis gets its arrestees from all of Marion County, which is largely suburban), the numbers aren’t strictly comparable from one jurisdiction to another. (If Indianapolis arrestees are less likely to be cocaine-positive than Manhattan arrestees, the difference can’t be taken as an estimate of the difference in drug use between Indianapolis-area criminals and New York-area criminals.) The effects of these variations turned out to be small, but that didn’t entirely placate some of the statistical high priests.
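A toy calculation makes the prostitution-versus-bad-checks point concrete. Every number below (the per-offense positive rates and the arrest counts) is invented for illustration; the point is only that the measured rate moves even though no one’s drug use changes.

```python
# Toy illustration of sampling events (arrests) rather than people: shifting
# enforcement between offense types moves the measured drug-positive rate
# even though no individual's drug use changes. All numbers are invented.

def measured_positive_rate(arrests, positive_rates):
    """Share of all arrestees testing positive, given the arrest mix."""
    total = sum(arrests.values())
    positives = sum(arrests[o] * positive_rates[o] for o in arrests)
    return positives / total

# Hypothetical per-offense cocaine-positive rates, held fixed across years.
positive_rates = {"prostitution": 0.60, "bad_checks": 0.20, "other": 0.35}

year1 = {"prostitution": 1_000, "bad_checks": 200, "other": 5_000}
# Year 2: police shift effort from prostitution to bad-check enforcement.
year2 = {"prostitution": 300, "bad_checks": 900, "other": 5_000}

for label, mix in (("year 1", year1), ("year 2", year2)):
    print(f"{label}: {measured_positive_rate(mix, positive_rates):.1%} positive")
# year 1: 38.5% positive; year 2: 34.0% positive. The "drop" reflects only
# the changed arrest mix, not any change in the underlying drug market.
```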

In the real world, these are manageable problems, at least when you consider the fact that the big surveys miss most of what’s actually going on. But in the world of classical statisticians and survey research experts, the absence of a known sampling frame (and consequently of well-defined standard errors of estimate) is a scandal too horrible to contemplate. “ADAM,” sniffed one of them in my presence, “isn’t actually a sample of anything. So it doesn’t actually tell you anything.” (In my mind’s ear, I hear the voice of my Bayesian statistics professor saying, “Nothing? Surely it doesn’t tell you nothing. The question is: What does it tell you? How does your estimate of what you’re interested in change in the presence of the new data?”)
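My professor’s question can be answered mechanically. The sketch below runs a textbook beta-binomial update, with a made-up prior and made-up test results, to show how even convenience-sample data move a rational estimate; non-random sampling is a reason to widen the error bars and narrow the claim, not to discard the data.

```python
# A minimal Bayesian sketch: data from an imperfect convenience sample still
# move a rational estimate. Beta-binomial update of the cocaine-positive rate
# among arrestees; the prior and the "data" are made-up numbers.

# Prior belief about the positive rate: Beta(a, b), mean a / (a + b) = 40%.
a, b = 4.0, 6.0

# Hypothetical ADAM-style quarter: 130 positives out of 250 urine specimens.
positives, n = 130, 250

# Conjugate update: posterior is Beta(a + positives, b + (n - positives)).
a_post, b_post = a + positives, b + (n - positives)

print(f"prior mean:     {a / (a + b):.1%}")                # 40.0%
print(f"posterior mean: {a_post / (a_post + b_post):.1%}") # ~51.5%

# Because the specimens aren't drawn from a known frame, the posterior
# describes the arrestee stream actually observed, not all criminally active
# users; the right response is wider uncertainty, not "it tells you nothing."
```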

What to the guardians of statistical purity looks like a defense of scientific standards looks to anyone trained in the Bayesian tradition like the man looking for his lost keys under the lamppost, rather than in the dark alley where he lost them, because the light is better under the lamppost. But the argument that a data series isn’t scientifically valid is a powerful one, especially among nonscientists. Even before ADAM was killed, there was pressure to expand the number of cities covered and include some rural areas. The result was to make it much more expensive but not really much more useful. A national probability sample of arrestees would be nice, but it wouldn’t obviously be worth the extra expense.

In an ideal world, of course, we would have a panel of criminally active heavy drug users to interview at regular intervals. But that project is simply not feasible. ADAM was a rough-and-ready substitute and provided useful information about both local and national trends. Outside the classical statistics textbooks, not having a proper sampling frame isn’t really the same thing as not knowing anything.

So ADAM wound up caught in the middle. It was no longer a cheap (and therefore highly cost-effective) quick-and-dirty convenience sample of arrestees in two dozen big cities. It was also not an equal-probability sample from a well-defined sampling frame and therefore not fully respectable scientifically. That made it hard for the drug czar’s office, which very much wanted and still wants some sort of arrestee-testing system, to persuade NIDA to fund it when NIJ couldn’t or wouldn’t maintain it. NIDA regards itself as a health research agency, so anything that looks like law enforcement research naturally takes a back seat there. And to an agency that measures itself by publications in refereed scientific journals rather than information useful to policymakers, ADAM’s sampling issues look large rather than small.

The situation isn’t hopeless. Both the NIJ and the drug czar’s office have expressed their intention to reinstitute some sort of program to measure illicit drug use among arrestees. But no one seems quite sure yet what form that system will take or who will pay for it. I would suggest a hybrid approach: Get quarterly numbers from lockups in the 20 or so biggest drug market cities, and run a separate program to conduct interviews and collect specimens once a year from a much larger and more diverse subset–not necessarily an equal-probability sample–of lockups nationally. That won’t provide a computable standard error of estimate, but it will give us some cheap and highly useful data to use in making and evaluating drug control policies. And at least with that solution, our national drug control effort won’t be flying blind, as it is today.
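To illustrate how the two pieces of that hybrid might be combined (with wholly hypothetical sites, rates, and calibration rule): let the fixed big-city panel supply the quarter-to-quarter movement, and rescale its level once a year against the broader, more diverse collection.

```python
# Hypothetical sketch of the proposed hybrid. A fixed panel of big-city
# lockups supplies the quarterly *trend*; a broader once-a-year collection
# pins down the national *level*. All figures are invented for illustration.

# Quarterly cocaine-positive rates from the sentinel big-city panel.
sentinel_quarterly = [0.42, 0.40, 0.39, 0.37]

# The annual wide collection (many more, more diverse lockups) reads lower,
# since the sentinel cities skew toward the heaviest drug markets.
annual_national_level = 0.30

# Calibration rule: scale the sentinel series so its annual average matches
# the broad collection's level, yielding a quarterly national series.
sentinel_avg = sum(sentinel_quarterly) / len(sentinel_quarterly)
scale = annual_national_level / sentinel_avg

national_quarterly = [round(q * scale, 3) for q in sentinel_quarterly]
print(national_quarterly)   # [0.319, 0.304, 0.296, 0.281]: trend preserved,
                            # level pinned to the national benchmark
```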
