Science policy matters
Daniel Sarewitz asks, “Does Science Policy Matter?” (Issues, Summer 2007). The answer is “absolutely yes.” In a high-tech global economy, science and technology are indispensable to maintaining America’s economic edge. In fact, historically, studies have shown that as much as 85% of the measured growth in per capita income has been due to technological change. In a very real sense, the research we do today is responsible for the prosperity we achieve tomorrow. For that reason, I believe Congress must support low tax rates as a catalyst for innovation.
Ever since President Reagan’s tax cuts went into full effect in 1983, the U.S. economy has almost quintupled in size, the Dow Jones Industrial Average has surged from less than 1,000 to over 13,000, and a host of revolutionary technologies, from cell phones to DVDs, from iPods to the Internet, have enhanced productivity and our quality of life. In many cases, low tax rates enabled dynamic entrepreneurs to secure the private investment they needed to create their own businesses and, in effect, jump-start the information revolution.
But despite our economic gains, Congress needs to play a more active role in shaping science and technology policy with federal funding. Last year, the National Academies released a startling report called Rising Above the Gathering Storm, which showed how unprepared we are to meet future challenges. According to the report, the United States placed near the bottom of 20 nations in advanced math and physics, and ranked 20th among all nations in the proportion of its 24-year-olds with degrees in science or engineering. Right now, we are experiencing a relative decline in the number of scientists and engineers, as compared with other fast-growing countries such as China and India. Within a few years, approximately 90% of all scientists and engineers in the world will live in Asia.
We are starting to see the consequences of our neglect in these fields. In the 1990s, U.S. patent applications grew at an annual rate of 10%, but since 2001, they’ve been advancing at a much slower rate (below 3%). In addition, the U.S. trade balance in high-tech products has changed dramatically, with China overtaking the United States as the world’s largest exporter of information-technology products (and the United States becoming a net importer of those products).
I agree with Sarewitz that “the political case for basic research is both strong and ideologically ecumenical,” as people across the political spectrum view scientific research as an “appropriate area of governmental intervention.” For example, Congress recently passed the America Competes Act. This landmark legislation answered the challenge of the report from the National Academies to increase research, education, and innovation and make the United States more competitive in the global marketplace.
In addition, federal funding for basic research has increased substantially, although I am growing concerned that the emphasis of that funding is starting to shift from hard science to soft science. As government leaders, we have a responsibility to establish priorities for the taxpayers’ money; and in this case, the hard sciences (physical science and engineering) must assume a larger share of federal funding.
The bottom line is that science policy does matter—and I thank you, as leaders of the scientific community, for your efforts to make the United States a better place to live, learn, work, and raise a family.
Daniel Sarewitz’s “Does Science Policy Matter?” continues the tutorial begun with his 1996 Frontiers of Illusion, still one of the most compelling myth-busting texts for teaching science policy. More policy practitioners should read it, or at least this updated article.
Sarewitz carries the mantle of the late Rep. George Brown, for whom he worked through the House Science Committee. I did, too, as study director for the 1991 Office of Technology Assessment report Federally Funded Research: Decisions for a Decade, which Sarewitz cites. In it, and subsequently in a Washington Post editorial titled “How Much Is Enough?” the following questions were raised:
“Is the primary goal of the Federal research system to fund the projects of all deserving investigators . . .? If so, then there will always be a call for more money, because research opportunities will always outstrip the capacity to pursue them.
Is it to educate the research work force or the larger science and engineering work force needed to supply the U.S. economy with skilled labor? If so, then support levels can be gauged by the need for more technically skilled workers. Preparing students throughout the educational pipeline will assure an adequate supply and diversity of talent.
Is it to promote economic activity and build research capacity throughout the United States economy by supplying new ideas for industry and other entrepreneurial interests? If so, then the support should be targeted … to pursue applied research, development, and technology transfer.
Is it all of the above and other goals besides? If so, then some combination of these needs must be considered in allocating federal support.
Indicators of stress and competition in the research system do not address the question of whether science needs more funding to do more science. Rather, they speak to the organization and processes of science and to the competitive foundation on which the system is built and that sustains its rigor” (Federally Funded Research, May 1991, p. 12).
Though a generation old, these words are effectively recast by Sarewitz as an indictment of the science policy community, to wit:
“. . . the annual obsession with marginal changes in the R&D budget tells us something important about the internal politics of science, but little, if anything, that’s useful about the health of the science enterprise as a whole.
“We are mostly engaged in science politics, not science policy…. This is normal science policy, science policy that reinforces the status quo.
“If the benefits of science are broadly and equitably distributed, then ‘how much science can we afford?’ is a reasonable central question for science policy.”
Taken together, these observations require, in Sarewitz’s words, “that unstated agendas and assumptions, diverse perspectives, and the lessons of past experiences be part of the discussion.” That the policy community shuns such self-examination suggests that it is more an echo of the science community than a critical skeptic of it, more a political body than an analytical agent.
To be sure, Sarewitz scolds science policy for being ahistorical and nonaccumulative in building and applying a knowledge base. He proposes a new agenda of questions that starts with the distribution of scarce resources and extends to goals, benefits, and outcomes. He continues to challenge his colleagues to heed the world while reinventing our domestic policy consciousness for the 21st century. He does nothing less than ask science policy to find its soul.
Science education for parents
Rep. Bart Gordon has been a champion of science and math in Congress, and we agree completely that the necessary first step in any competitiveness agenda is to improve science and math education (“U.S. Competitiveness: The Education Imperative,” Issues, Spring 2007). For over two years now, scores of leading policymakers and business leaders have been calling for reforms in science, technology, engineering, and mathematics education and offering a myriad of suggestions on how to “fix the problem.”
Before we can fix the problem, however, we have to do a much better job of explaining what is actually broken. A survey last year of over 1,300 parents by the research firm Public Agenda found that most parents are actually quite content with the science and math education their children receive. Fifty-seven percent of the parents surveyed say that the amount of math and science taught in their child’s public school is “fine.” At the high school level, 70% of parents are satisfied with the amount of science and math education.
Why is there such a disconnect between key leaders and parents? Clearly, we have to get parents to realize that there is, in fact, a crisis in science and math education, and that it exists in their own neighborhoods, too.
With all the stakeholders on board, we can work together to ensure that innovations and programs are at the proper scale to have a significant impact on students. We can ensure that teachers gain a deeper understanding of the science content they are asked to teach, and we can do a much better job of preparing our future teachers. Together we need to overhaul elementary science education and provide all teachers with the support and resources they need to effectively teach science. Our nation’s future depends on it.
Large effects of nanotechnology
Ronald Sandler and Christopher J. Bosso call attention to the opportunity afforded to the National Nanotechnology Initiative (NNI) to address the broad societal effects of what is widely anticipated to be a transformative technology (“Tiny Technology, Enormous Implications,” Issues, Summer 2007).
From the NNI’s beginning, the participating federal agencies have recognized that the program needs to support activities beyond research for advancing nanotechnology, and they have included funding for a program component called Societal Dimensions, which has a funding request of $98 million for fiscal year 2008.
The main emphasis of this program component has been to advance understanding of the environmental, health, and safety (EHS) aspects of the technology. This funding priority is appropriate because nanomaterials are appearing in more and more consumer products, while basic knowledge about which materials may be harmful to human health or damaging to the environment is still largely unavailable. In fact, the NNI has been criticized for devoting too little of its budget to EHS research and for failing to develop a prioritized EHS research plan to inform the development of regulatory guidelines and requirements.
Nevertheless, the article is correct that there are other public policy issues that need to be considered before the technology advances too far. The NNI has made efforts in this direction. A sample of current National Science Foundation grants under its program on ethical, legal, and social implications (ELSI) issues in nanotechnology includes a study on ethical boundaries regarding the use of nanotechnology for human enhancement; a study on societal challenges arising from the movement of particular nanotechnology applications from the laboratory to the marketplace and an assessment of the extent to which existing government and policy have the capacity (resources, expertise, and authority) to deal with such challenges; a study on risk and the development of social action; and a project examining nanoscale science and engineering policymaking to improve understanding of intergovernmental relations in the domain of science policy.
Although the NNI is not ignoring broader societal impact issues, the question the article raises is whether the level of attention given and resources allocated to their examination are adequate. The House Science and Technology Committee, which I chair, will attempt to answer this question, and will examine other aspects of the NNI as part of its reauthorization process for the program that will be carried out during the current Congress.
The article by Ronald Sandler and Christopher J. Bosso raises important issues concerning the potential benefits and impacts of nanotechnology. The authors’ focus on societal implications points to considerations that apply specifically to nanotechnology as well as generally to all new or emerging technologies. In striving to maximize the net societal benefit from nanotechnology, we need to examine how we can minimize any negative impacts and foresee—or at least prepare for—unintended consequences, which are inherent in the application of any new technology.
The U.S. Environmental Protection Agency (EPA) recognizes that nanotechnology holds great promise for creating new materials with enhanced properties and attributes. Already, nanoscale materials are being used or tested in a wide range of products, such as sunscreens, composites, medical devices, and chemical catalysts. In our Nanotechnology White Paper (www.epa.gov/osa/nanotech.htm), we point out that the use of nanomaterials for environmental applications is also promising. For example, nanomaterials are being developed to improve vehicle fuel efficiency, enhance battery function, and remove contaminants from soil and groundwater.
The challenge for environmental protection is to ensure that, as nanomaterials are developed and used, we minimize any unintended consequences from exposures of humans and ecosystems. In addition, we need to understand how to best apply nanotechnology for pollution prevention, detection, monitoring, and cleanup. The key to such understanding is a strong body of scientific information; the sources of such information are the numerous environmental research and development activities being undertaken by government agencies, academia, and the private sector. For example, on September 25 and 26 of this year, the EPA is sponsoring a conference to advance the discussion of the use of nanomaterials to prevent pollution.
The EPA is working with other federal agencies to develop research portfolios that address critical ecological and human health needs. We are also collaborating with industry and academia to obtain needed information and identify knowledge gaps. Nanotechnology has a global reach, and international coordination is crucial. The EPA is playing a leadership role in a multinational effort through the Organization for Economic Cooperation and Development to understand the potential environmental implications of manufactured nanomaterials. Also on the international front, we are coordinating research activities, cosponsoring workshops and symposia, and participating in various nanotechnology standards-setting initiatives.
We are at a point of great opportunity with nanotechnology. From the EPA’s perspective, this opportunity includes using nanomaterials to prevent and solve environmental problems. We also have the challenge, and the responsibility, to identify and apply approaches to produce, use, recycle, and eventually dispose of nanomaterials in a manner that protects public health and safeguards the natural environment. Using nanotechnology for environmental protection and addressing any potential environmental hazard and exposure concern are important steps toward maximizing the benefits that society derives from nanotechnology.
I read with great interest the piece by Ronald Sandler and Christopher J. Bosso on nanotechnology. It is hard to argue with their assertion that the social and environmental implications of nanotechnology will be wide-ranging and deserve the attention of the government. However, their faith in the National Nanotechnology Initiative (NNI) as a mechanism to address these issues seems misplaced.
The NNI’s governance and overall coordination are done through the National Science and Technology Council. To date, the NNI has functioned as an R&D coordination body, not a broader effort to develop innovative regulatory or social policy. It is questionable whether many of the issues that the authors raise, such as environmental justice, could be dealt with effectively by the NNI. Even some of the issues that lie within the NNI’s competency and mandate have not been adequately addressed.
For example, six years after the establishment of the NNI, we lack a robust environmental, health, and safety (EH&S) risk research strategy for nanotechnology that sets clear priorities and backs these with adequate funding. The House Science Committee, at a hearing in September 2006, blasted the administration’s strategy (Rep. Bart Gordon described the work as “juvenile”). A lack of transparency by the NNI prompted the Senate Commerce Committee in May 2006 to request that the Government Accountability Office audit the agencies to find out what they are actually spending on relevant EH&S research and in what areas.
Another issue raised in the article that needs urgent attention is public engagement, which must go beyond the one-way delivery of information on nanotech through museums, government Web sites, and PBS specials. Though this need was clearly articulated in the 21st Century Nanotechnology R&D Act passed in 2003, the NNI has held only one meeting, in May 2006, to explore how to approach public engagement, rather than to actually undertake it.
The authors correctly call for a regulatory approach that goes beyond the reactive incrementalism of the past decades. However, the Environmental Protection Agency’s recent statement that the agency will treat nano-based substances like their bulk counterparts under the Toxic Substances Control Act—ignoring scale and structure-dependent properties that are the primary rationale of much NNI-funded research—hardly gives the impression of a government willing to step “out of the box” in terms of its regulatory thinking and responsibilities.
As more and more nano-based products move into the marketplace, the social and environmental issues will become more complex, the need for public engagement more urgent, and the push for effective oversight more critical. The authors are right in calling for the NNI to step up to these new challenges. The question is whether it can or will.
The importance of community colleges
I appreciate the invitation to respond to James E. Rosenbaum, Julie Redline, and Jennifer L. Stephan’s “Community College: The Unfinished Revolution” (Issues, Summer 2007). I will focus my remarks on how the U.S. Department of Education is assisting community colleges to carry out their critical multifaceted mission.
The Office of Vocational and Adult Education (OVAE), under the leadership of Assistant Secretary Troy Justesen, is committed to serving the needs of community colleges, as evidenced by my appointment as the first Deputy Assistant Secretary with specific responsibility for community colleges. As a former community college president with experience in workforce education, I bring first-hand knowledge to our community college projects and services.
Comprehensive community colleges have a priority to be accessible and affordable to all who desire postsecondary education. They prepare students for transfer to four-year institutions, meet workforce preparation needs, provide developmental education, and offer a myriad of support services needed by students with diverse backgrounds, skills, and educational preparation. Community colleges also have thriving noncredit programs that encompass much of the nation’s delivery of Adult Basic Education and English as a Second Language instruction. Noncredit programs often include customized training for businesses, plus initiatives that range from Kids College to Learning in Retirement. Many community colleges use innovative delivery systems such as distance education, making courses and degrees accessible 24/7 to working students and those with family responsibilities.
In the report A Test of Leadership, the commission appointed by Secretary of Education Margaret Spellings made recommendations to chart the future of higher education in the United States. Accessibility, affordability, and accountability emerged from the report as key themes in the secretary’s plan for higher education. Comprehensive community colleges are well-poised to move on these themes and are doing this work in the context of national and global challenges raised by the commission.
At a Community College Virtual Summit, Education Secretary Spellings said, “you can’t have a serious conversation about higher education without discussing the 11 million Americans (46% of undergraduates) attending community colleges every year.” The Virtual Summit is one of a series of U.S. Department of Education activities related to the secretary’s plan for higher education.
Community college leaders and researchers underscored the importance of accountability during the summit and the need for data-driven decisionmaking. For example, initiatives such as Achieving the Dream and Opening Doors focus on data-directed support services in community colleges. The average age of community college students is 29, reflecting the large number of working adults who attend; however, growing numbers of secondary students are also attending community colleges. These “traditional”-age community college students are well prepared for higher academic challenges. Many of these students transfer before they complete the Associate of Arts or Associate of Science degree and often are not recognized as community college successes. Many students also return to their local community college in the summer and during January terms to complete additional courses. Often overlooked when discussing degree completion results are the data that show that more than 20% of community college students already have degrees.
New OVAE projects focused on community colleges include a research symposium and a showcasing and replication of promising practices that will produce additional information. Moreover, the College and Career Transitions Initiative has developed sequenced career pathways from high school to community college that encompass high academic standards. Outcomes of this project include a decrease in remediation, increases in academic and skill achievement, the attainment of degrees, and successful entry into employment or further education. Community colleges using best practices offer a pathway model with multiple entry points for adults and secondary students; end-point credentials; and “chunking,” which organizes knowledge in shorter modules with credentials of value early in the process to allow for periods of work. The use of chunking with pathway models is a practice recommended in a recent report by the National Council of Workforce Education and Jobs for the Future.
Students of all ages come to community colleges with many different educational goals. Community colleges are vital entry points to postsecondary education for new Americans and for nontraditional and traditional students alike. When comparing the cost of the first two years at a public community college with the cost at a four-year public university, it is apparent why community colleges gained support from the president and state governors as the postsecondary institutions of first choice for millions of Americans.
Data-driven policymaking
In “The Promise of Data-Driven Policymaking” (Issues, Summer 2007), Daniel Esty and Reece Rushing describe the U.S. health system as ripe for the improved use of aggregated information in support of better policy and clinical decisions. Let me highlight two challenges they did not address.
First, much of our thinking about data acquisition and analysis for government decisionmaking reflects a 20th century information paradigm rather than the Web 2.0 model that pervades so much of society now. In many domains, and certainly in health care, we don’t rely on top-down policy development and enforcement. Instead, data for decisionmaking must be widely available and subject to analysis by diverse stakeholders, ranging from the organizations directly subject to regulation to the public interest groups and individuals who wish to learn from or add to society’s evidence base in each area. Similarly, the evidence for decision-making is not determined by a single national authority (often after years of review, sign-off, and political vetting) but represents a dynamic stream of insight built by numerous interested parties engaged in a continuing dialogue. We need both a policy regime and a technology infrastructure that support decentralized and distributed data resources and that protect individual privacy and other public values. This architecture needs to be open and fluid, accommodating new data contributions, new methodologies, and new opinions.
Second, although we certainly lack sound actionable data for health policy decisionmaking, data alone do not affect how decisions are made, and new technology will not change that. In health care, there’s been much furor over the Institute of Medicine (IOM) reports since 1999 revealing both high rates of preventable medical errors and evidence of poor-quality care. But in 1985, the federal government published voluminous data on individual hospital mortality rates, and in the early 1990s a federal research agency developed and published recommended evidence-based practices for conditions such as managing low back pain. In both cases, the affected stakeholders—the hospital associations and the back surgeons—acted politically to crush the federal efforts at publishing relevant data. In neither case was technology the key lubricant of evidence-based policymaking—it was the short-lived will of federal officials to increase transparency, diffuse information, and demand improved quality in our health system.
When political forces favoring transparency can’t be stopped, the industry is often successful at negotiating systems that create the illusion of disclosure by releasing data that poses little risk of disrupting current practice. Over the past decade, federal and state agencies have been determined to publish “performance” data about hospitals, nursing homes, and doctors, but they’ve allowed the industry to decide what measures should be reported. As a result, we are swamped with measures that have no value to consumers or purchasers and do nothing to stimulate innovation, competition, or systemic improvement. These data will never address the issues that Medpac or the IOM or our presidential candidates are talking about: how to best care for people with chronic illnesses such as asthma and diabetes, how to deal with the challenges of obesity, how to provide access to millions of uninsured Americans, or how to provide quality care at the end of life.
Evidence-based policymaking is an important goal, but it becomes important when it allows policymakers and the larger community to discover new ways to solve intractable problems.
Daniel Esty and Reece Rushing’s helpful article notes that the ability to collect, analyze, and synthesize information has never been as promising as it is today. Whether it’s fighting crime or monitoring drinking water, analysts and policymakers of many stripes, not just government leaders, have unprecedented opportunities to obtain information at faster rates and more extensively than ever before.
Current technology can make real-time data collection and analysis possible without regard to geography, and such data can be made publicly available to all. This should enable quicker, smarter decisions. It also means that decisions about priorities, resource allocations, and performance can be made more easily visible to constituents and consumers, allowing for more informed choices and greater accountability of decisionmakers.
What is not new is the ongoing challenge of building an information infrastructure that can enable this data-driven policymaking. Among the many challenges, here are three. First, the public is eager to examine government data, yet the underlying data from government contain many errors that can lead to wrong conclusions. Until this is addressed, data-driven decisions will always be suspect. Second, the strength of newer information technologies is the ability to link disparate data in order to create profiles that previously could not be obtained. But linking such data sets is very difficult because government has no system of common identifiers. Finally, the government has a Janus-faced approach to data collection. In one breath, government calls for benchmarks to assess performance and in the next calls for an annual 5% reduction in information collected. This inconsistency must stop.
Even if we had an ideal information infrastructure, do we want all decisions to be driven by quantifiable data? No. Some decisions are more appropriately driven by rigorous quantitative analysis, others less so. Science should guide policymakers, but human judgment is needed to make decisions. Esty and Rushing note that one promise of data-driven decisionmaking is that it will harpoon one-size-fits-all decision-making in government. That would be good. But this also raises tough questions about what benchmarks to use for performance measurement. Although some benchmarks can be established by statutory mandates, much is left to human discretion and ultimately to politics.
We must also remember that science is not value-free. Relying solely on performance measures to guide decisions may create incentives to manipulate data or cause the complexities of crafting policies to address difficult problems, such as hunger in the United States or balancing civil liberties and security, to be ignored. Moreover, key assumptions used in research may change results exponentially. For example, assessing risk for the general public could have vastly different results, conclusions, and policy decisions than assessing risk for vulnerable subpopulations. Although numbers can help inform and support policy decisions, they should not alone create solutions to policy problems.
Presumably, Esty and Rushing are writing this article because they believe that government is currently not relying on data to make its decisions. Yet the Bush administration would probably disagree, arguing it relies on data, citing, for example, regulatory decisions that are based on cost/benefit analysis. We would assert that the Bush administration has used data to manipulate regulatory and performance outcomes, allowing political goals to trump science. Ultimately, this suggests that the debate is less about use of data for decisionmaking and more about how data is used.
If we expect data-driven decision-making to lead to a broader vision of the policymaking process, as the authors suggest, we will need help from them in deciding how to define what “good” data-driven decisionmaking is, as well as how to build a robust information infrastructure to complement their prescription. Without this help from the authors, we risk elevating expectations about what the tools help us accomplish. And those expectations may well defeat the true promise of these tools.
Daniel Esty and Reece Rushing’s article lays out an ambitious agenda for evidence-based regulation. Data-driven policymaking is an important way to transcend ideological squabbles and focus instead on results.
Esty and Rushing show that regulators could learn a lot from the evidence-based medicine movement. For the past 10 years, there has been an immense effort to grade and catalog the level of evidence that undergirds every treatment option. Some treatments are supported by the highest quality of evidence—multiple randomized trials—while other medical treatments are to this day based on nothing more than expert intuition. Knowing the quality of evidence gives physicians a far sounder basis to advise and treat their patients. Regulations and laws might usefully embrace a similar ranking procedure.
Preliminary randomized trials are a particularly attractive policy tool. Political opponents who can’t agree on substance can sometimes agree on a procedure, a neutral test to see what works. And the results of randomized trials are hard to manipulate; often all you need to do is compare the average result for the treated and untreated groups.
For example, in 1997 Mexico began a randomized experiment called Progresa in more than 24,000 households in 506 villages. In villages randomly assigned to the Progresa program, the mothers of poor families were eligible for three years of cash grants and nutritional supplements if the children made regular visits to health clinics and attended school at least 85% of the time.
The Progresa villages almost immediately showed substantial improvements in education and health. Children from Progresa villages attended school 15% more often and were almost a centimeter taller than their non-Progresa peers. The power of these results caused Mexico in 2001 to expand the program nationwide, where it now helps more than 2 million poor families.
Progresa shows the impact of data-driven policymaking. Because of this randomized experiment, more than 30 countries around the globe now have Progresa-like contingent cash transfers.
Esty and Rushing also emphasize that government can in the future become a provider of information. They point to the disclosure of raw CD-ROMs of data. But government could also crunch numbers on our behalf to make personalized predictions for citizens.
The Internal Revenue Service (IRS) and Department of Motor Vehicles (DMV) are almost universally disliked. But the DMV has tons of data on new car prices and could tell citizens which dealerships give the best deals. The IRS has even more information that could help people if only it would analyze and disseminate the results. Imagine a world where people looked to the IRS as a source for useful information. The IRS could tell a small business that it might be spending too much on advertising or tell an individual that the average taxpayer in her income bracket gave more to charity or made a larger Individual Retirement Account contribution. Heck, the IRS could probably produce fairly accurate estimates about the probability that small businesses (or even marriages) would fail.
Of course, this is all a bit Orwellian. I might not particularly want to get a note from the IRS saying my marriage is at risk. But I might at least have the option of finding out the government’s prediction. Instead of thinking of the IRS as solely a taker, we might also think of it as an information provider. We could even change its name to the “Information & Revenue Service.”
Here in (the other) Washington, we’ve found that citizens expect their state government to be responsive and accountable. The public wants a state government that is responsive to its needs, whether that means investing in high-quality schools, ensuring public safety, or helping our economy remain strong. It also wants a government that is accountable; namely, one that invests taxpayer dollars in the right priorities and in meaningful programs that achieve results.
As described in “The Promise of Data-Driven Policymaking,” new technologies and new ways of thinking about government give us an unprecedented opportunity to bring about the kind of results the public expects.
For example, data collected in Washington state in 2004 and early 2005 revealed that state social workers were responding to reports of child abuse and neglect within 24 hours only around 70% of the time. Governor Gregoire deemed this level completely inadequate and set a goal of a 90% response rate within 24 hours.
By digging further into the data, we were able to determine that there were a variety of reasons for the inadequate response time, including unfilled staff positions, misallocation of resources, and insufficient training on the database program used to record contacts with children in state care.
The data showed us what we needed to do: reallocate resources and speed up the hiring process. And the results are impressive. As of July 2007, social workers are responding to emergency cases of child abuse within 24 hours 94% of the time.
This is just one example of how our state has benefited from data-driven policymaking, and we are looking forward to many more as we further integrate this accountability mechanism into our policy decisions.
Just as businesses must innovate to stay ahead of their competition and keep their customers happy, so must government. Not only do citizens demand it, but it is also the right thing to do.
This article deals with an important and too often neglected issue. Although I agree with the article’s goals, I believe it overreaches. One can support the goal of accountability and the use of information without overpromising their benefits; solutions may actually generate new problems.
What may work in some policy areas is not effective in others. Different policy areas have different attributes (such as the level of agreement on goals, data available, and agreement on what can achieve goals). Many policy areas have competing and conflicting goals and values. Many also involve third parties (such as private-sector players) and, in the case of federal programs, intergovernmental relations with state or local governments.
This article, like too many others, tends to oversimplify the nature of data itself. There are many different types of data and information. Some are already available and useful; cause/effect relationships appear to be known, and it is relatively easy to reach for (although not always attain) objectivity. In those cases, information collected for one purpose can be useful for others. More often, however, data surround programs in which cause/effect relationships are difficult to disentangle and fact/value separations are thorny. If data are collected from third parties, they are susceptible to gaming or outright resistance. Indeed, it is often unclear who should pay for the collection of information.
Complex institutional systems make it problematic to determine who should define data systems and performance measures. Who defines them may or may not be the same institution or person who is expected to use them. In the case of the Bush administration’s Program Assessment Rating Tool, are we talking about measures that are important to the Office of Management and Budget and the White House, congressional authorizing committees, congressional appropriations committees, cabinet or subcabinet officials, executive level managers, or program managers? Each of these players has a different but legitimate perspective on a program or policy.
The technology that is currently available does hold out great promise. But that technology can only be used within the context of institutions that have limited authority and ability to respond to only a part of the problems at hand. These problems often reflect the complexity of values in our democracy, and solutions are not easily crafted on the basis of information. Indeed, multiple sources of information may make decisionmaking more, not less, difficult.
Where’s the water?
If there is a flaw in David Molden, Charlotte De Fraiture, and Frank Rijsberman’s “Water Scarcity: The Food Factor” (Issues, Summer 2007), and in the seminal, encyclopedic Comprehensive Assessment from which it is drawn, it is the pervasive assumption that human behavior can and will change in the right direction. Given the acute and compelling nature of the problems and the overwhelming importance of the subject to every living being on Earth, these prescriptions rest on a confidence that “surely, humans will somehow do the right thing.” Alas, sentences preceded by surely rarely describe a sure thing!
The article could as well be titled “Food Scarcity: the Water Factor.” We know already that many populations in many countries are struggling to feed themselves with currently inadequate amounts of water. We see already what declining precipitation, rising temperatures, and increased flood and drought are doing to their food production. We see in only short decades ahead the acceleration of these trends as the natural dams of glaciers and mountain snow melt and disappear.
The study is compelling, the 700-strong research team awe-inspiring, the argumentation trenchant, and the solutions described neither impossible nor out of reach—if we want them to happen. For the moment, we the relatively better off can, as always, find ways to protect ourselves from this series of emerging threats and nuisances: paying more for food, buying water, building cisterns, digging further underground, and securing property where lakes and rivers are pristine. Water is indeed the divide between poverty and prosperity.
It is one of the ultimate ironies that our whole tradition of governance probably formed around the imperative to manage water: to allocate and protect supplies and tend water infrastructure. Yet the current crises of water seem too difficult, too fraught, and too entrenched in existing power relationships for most governments to be able to take on most of the issues in any meaningful way. Trends in all of these areas are going in the wrong direction. The study does not dwell enough on this. There are hopeful signs: The Australian national government is stepping in to provide the conflict resolution mechanism and fiscal backup for their largest, greatly damaged, essential river basin. A few countries are creating Ministries of Water Resources; more are beginning to write water resource plans.
Plans on paper are a good start. Translating them into action is difficult. Will we stop real estate development in dry areas? Will we stop building in floodplains? Will we really invest in optimizing existing irrigation systems? In removing the worst environmental effects of agricultural intensification? In taking the real steps to stop overfishing? Will we continue to subsidize food, fuel, and fiber production, so that these are not grown where water availability is optimal? The article does seminal service in pointing out that many of these issues are “next-door” questions: essential to, but not necessarily seen at first glance as primarily, water- or food-related, and often not seen as meaningful to our own lives.
We will see these issues play out silently: dry rivers, dead deltas, destocked fisheries, depleted springs and wells. We will also see famine; increased and sometimes violent competition for water, especially within states; more migration; and environmental devastation with fires, dust, and new plagues and blights.
As the world comes to a better understanding of these Earth-threatening issues and the needed directions of change, we must do more than hope for better policy and practice—we must become advocates, involved and persuasive on behalf of rain-fed farming, for a different set of agricultural incentives, and for more transparency about water use and abuse. Surely this will happen before it is indeed too late to prevent such substantial damage to ourselves, to our Earth, and to all living things?
I entirely agree with David Molden, Charlotte De Fraiture, and Frank Rijsberman that every effort should be made to maximize income and production per unit of water. The government of India has launched for this purpose a More Income and Crop per Drop of Water movement throughout the country this year.
Better transparency for a cleaner environment
The United States was once the world forerunner in the development of pollutant release and transfer registers (PRTRs). Its Toxics Release Inventory (TRI), launched in the mid-1980s and continuously upgraded, was the first example of how information on pollutants could be made accessible to the public, and it has been the model for all national and regional PRTRs developed thereafter. The existence of and the first experiences with the TRI were also the basis for governments making commitments to “establish emission inventories” in the Rio Declaration in 1992.
Transparency of (environmental) information has traditionally been one of the major assets of Western, especially Anglo-Saxon, societies. It is therefore astonishing that the United States has now fallen behind other countries by not requiring facility-based reporting of greenhouse gas emissions, and in this case is not following the underlying “right-to-know” principle.
This shortcoming has already had international implications: The United States, contrary to 36 other countries and the European Community, did not sign the United Nations Economic Commission for Europe PRTR Protocol in 2003, a major reason being its reluctance to report greenhouse gases on a facility basis, as the protocol requires.
We owe much to Elena Fagotto and Mary Graham for clearly pointing out this gap in transparency and describing its negative impacts (“Full Disclosure: Using Transparency to Fight Climate Change,” Issues, Summer 2007). The authors do not restrict themselves to analyzing the situation but make pragmatic proposals for how to carefully construct a transparency system as a politically feasible first step.
Despite the advantages of more transparency, even with regard to emission reductions, priority should be given to directly reducing greenhouse gas emissions; for example, by implementing a cap-and-trade approach as soon as possible.
Science’s social effects
As with many policies, fulfilling intent is a matter of enforcement. The National Science Foundation (NSF), with its decentralized directorate/division/program structure, supports the “broader impacts” criterion unevenly at best. This criterion can be traced to NSF’s congressional mandate (the Science and Engineering Equal Opportunities Act of 1980, last amended in December 2002) to increase the participation of underrepresented groups (women, minorities, and persons with disabilities) in STEM (science, technology, engineering, and mathematics). The problem of enforcement resides in the collusion of program officers and reviewers to value participation in STEM and the integration of research and education to transform academic institutions, but not to fund them. As Robert Frodeman and J. Britt Holbrook observe in “Science’s Social Effects” (Issues, Spring 2007), the two criteria are not weighted equally. Broader impacts are unlikely to overshadow intellectual merit in deciding the fate of a proposal, nor arguably should they. And because proposal submission is an independent event, accountability for project promises of broader impacts is never, except for the filing of a final report, systematically considered in the proposer’s next submission. Consequently, the gap between words (commitments in a proposal) and deeds (work performed under the project) continues.
So what to do about it? In an age of overprofessionalization, “education and public outreach (EPO) professionals,” as Frodeman and Holbrook call them, join education evaluators as plug-in experts invoked to reassure reviewers that proposals have covered all bases. Yet these are viewed as add-ons to the “real intellectual work” of the proposed project rather than weighted as “plus factors.” And it is doubtful that “researchers on science,” who collectively are as single-minded and professionally hidebound as the science and engineering communities they seek to analyze, can help. Their criticisms remain at the margins, largely unactionable if not unintelligible. They are also not inclined and are ill-equipped to have the conversation with those they could inform. Most, I suspect, are themselves devising ways of satisfying the broader impacts criterion, much like their natural science brethren, to survive merit review without devoting much project time or money to “social effects.” Most cynically, one could say that these are rational responses to increasing sponsor demands for performance and accountability.
If an NSF program, however, were treated as a true portfolio of activities, including some that pursue broader impacts, then funded projects would be expected to demonstrate social relevance as a desired outcome. It is unrealistic to demand that every NSF principal investigator be responsible in every project for fulfilling the need to educate, communicate, and/or have effect beyond their immediate research community. Diversifying review panels with specialists who can address broader impacts, in addition to the small minority of panelist-scientists who “get it,” would be a better implementation strategy, providing a kind of professional development for other panelists while applying appropriate scrutiny to the proposed work they are asked to judge.
All of this puts program officers, who already exercise considerable discretion in selecting ad hoc mail and panel reviewers, on the spot. Make them accountable for their grantees’ serious engagement of the broader impacts criterion. If they don’t deliver, their program’s funding should suffer. That would distribute the burden to division directors and directorate heads. Without such vertical accountability practiced by the sponsor, what we have is all hand-waving, wishful thinking, and a kind of shell game: Beat the sponsor by either feigning or farming out responsibility instead of proposing how the project will broaden participation, enhance infrastructure, or advance technological understanding.
Frodeman and Holbrook have diagnosed a need. Although I applaud their pragmatic bent, I fear their solution hinges on misplaced trust in rational action and a commitment to promoting science’s social effects by a recalcitrant science community. Trust, but verify.
Universities as innovators
“… a hitter should be measured by his success in that which he is trying to do … create runs. It is startling … how much confusion there is about this. I find it remarkable that, in listing offenses, the league will list first—meaning best—not the team which scored the most runs, but the team with the highest batting average. It should be obvious that the purpose of an offense is not to compile a high batting average.” Bill James, Baseball Abstract, 1979.
In his book Moneyball, Michael Lewis laid out the new knowledge in baseball that was guiding seemingly mediocre teams with small budgets to new heights of success. The key is knowing which metrics are related to winning. Although it sounds simple, it requires ignoring decades of conventional wisdom spouted by baseball announcers and armchair managers.
In “The University as Innovator: Bumps in the Road” (Issues, Summer 2007), Robert E. Litan, Lesa Mitchell, and E. J. Reedy make a similar observation that “scoring runs” in transferring new ideas and innovations from universities to the marketplace has taken a back seat to a “high batting average” measured by revenues per deal. By using the wrong metric, university tech transfer offices are turning the Bayh-Dole Act on its head. The authors point out that the act envisioned accelerating the introduction of innovations into the marketplace by clarifying the intellectual property rules and giving universities and their faculties an incentive to commercialize their discoveries. Instead, universities are focusing on ownership to the detriment of innovation.
This misguided focus on revenue enhancement, when moving ideas out of universities, is matched by an unseemly focus on maximizing revenues when bringing students in. In his book The Price of Admission, Daniel Golden details how top universities pass over better-qualified students in favor of the children of the wealthy, with the knowledge that it will help development. By choking talent on the way in and choking ideas on the way out, universities are not just inefficient, they are violating their educational duty to students and their duty to serve the public interest.
The authors’ solutions to this problem range from market-driven to dreamy. The “free-agency” model builds on a simple idea: Let innovators build their own social networks to develop their inventions rather than forcing them to squeeze through a central chokepoint populated by people who are risk-averse. Regional alliances and Internet-based approaches are variations on the social networking theme and should be included in the commercialization repertoire.
On the dreamy end is the “grateful faculty” approach. This may work at the top end for the biggest successes where shame is a motivator, but loyalty to institutions with 40% overhead rates, rigged admissions processes, and irritating tech transfer offices is not likely to be characteristic of your average innovator.
Measuring the right things—number of innovations licensed versus revenue scored, meritorious students versus family wealth—will help universities score runs. Without the right metrics, we will be on a losing team no matter what our batting average.