Forum – Spring 2001
OTA reconsidered
While not arguing with the accuracy of Daryl E. Chubin’s view of the positive contributions of the Office of Technology Assessment (OTA) (“Filling the Policy Vacuum Created by OTA’s Demise,” Issues, Winter 2001), I would point out that the article fails to deal with the fundamental problem that led to OTA’s demise. The agency was created as a tool for legislative decisionmaking. Its work, therefore, was only as valuable as the timeliness of its reports within the legislative schedule. Too often the OTA process resulted in reports that came well after the decisions had been made. Although it can be argued that even late reports had some intellectual value, they did not help Congress, which funded the agency, do its job. For that reason, I would argue with Chubin’s characterization of OTA as “client-centered.” Its client was Congress, and that client was not satisfied that it was getting the information it needed when the need existed. And so, in 1995, Congress decided to look elsewhere for advice and counsel on matters relating to S&T.
Daryl E. Chubin has provided a timely reminder of a long-standing issue of governance in a technological age: Our elected officials tend not to be schooled or experienced in science and engineering. The kinds of personalities attracted to political life are usually not the same as those drawn to research and technology. But the policy agenda for our citizen governors has come to be heavily influenced, if not dominated, by the development and impacts of science and technology (S&T).
It is therefore imperative that effective means be devised to link specialized knowledge to the needs of the electorate in a way that is authoritative, understandable, fair, and helpful. As James Madison observed in a letter he wrote in 1822: “A popular government, without popular information, or the means of acquiring it, is but a prologue to a farce or tragedy; or, perhaps both. Knowledge will forever govern ignorance, and a people who mean to be their own governors must arm themselves with the power which knowledge gives.”
OTA was created to be Congress’s own. It was governed by a bipartisan board of members selected by the leadership of the House and Senate and served, in the words of Senator Ted Stevens, as a “shared resource” for the committees. Although it directly served Congress, its work was broadly used by U.S. society.
OTA was meant to be a highly skilled catalyst to distill national wisdom, derive findings, narrow differences, and present policy-relevant results that served to elevate the debate. Frequently, it enabled clearer distinctions not only of where things stood but where things were headed–an increasingly vital need for effective public policy in an age of rapid and pervasive technological change.
OTA drew heavily on specialized studies, such as those by the National Academies, university researchers, corporate leaders, and the nongovernmental organization community. Its output, however, went beyond such studies to clarify the reasons for debate about the issues and define alternative public policy positions. Six years after OTA was shut down, many of its analyses stand as remarkably timely, prescient, and on target.
Since 1994, there have been other attempts to make up for OTA’s demise, as Chubin says. Indeed, some of the activities he identifies, such as the National Bioethics Advisory Commission, were devised by OTA alumni, and the National Academies have become more actively engaged in assisting Congress. Other actions have been more haphazard, such as occasional meetings among a few experts and a few members of Congress; and of course an army of “experts” is always ready to give advice–usually one-sided and biased. But the stakes are far higher than that. We can ill afford not to invest in careful, searching appraisals of sociotechnical issues, so that the resulting choices, which can be so profound in their implications, can be made using the most thoughtful and considered wisdom possible. The American people and Congress deserve no less.
Daryl E. Chubin addresses “the policy vacuum created by OTA’s demise” on September 30, 1995, after 23 years of service to Congress and the nation.
OTA was unique, in part because, as Chubin notes, it was blessed with high-quality staff, staff continuity, a capacity for self-analysis and criticism, and a process that encouraged advisors from all perspectives. The result was a constant process of refinement and encouragement of the essentials of balance, completeness, and accuracy in OTA work. In part, this was also because OTA was sited in the U.S. legislative branch, an unusual system that allowed a truly bipartisan (and bicameral) approach to analysis and the selection of policy-relevant, nationally significant subjects requested by committee chairmen and ranking minority members. Of course, OTA also struggled with familiar problems: the need to understand the political environment, the press for timely delivery of work, the constant battle to be even-handed, and the need to be brief and incisive.
After I “turned out the lights” at OTA in early 1996, I reviewed the history of technology assessment and policy analysis in the United States. As so often happens, there have been cycles of varying interest. During the early and mid-20th century, influential political and scientific thinkers increasingly supported developing our capacity to analyze S&T to inform decisionmakers in the executive and legislative branches of government. In Congress, the Science Committee thought long and hard and held many public hearings before legislation to create OTA could be negotiated through both houses in 1972. The evidence points to a devaluing of the work of analytic staffs and agencies in the 1990s through staff cuts, high turnover, and budgetary restrictions. Some have ascribed this policy to the concept of a minimalist national government held by Republicans and, in OTA’s case perhaps, a perception that the agency was captive to the long-time Democratic majority in Congress. It is indisputable that at the time of OTA’s defunding and the cutbacks at other agencies, the Republicans were in the majority in both the House and Senate. However, there was also a strong appreciation of the need for budgetary control at that time.
OTA’s body of work was reproduced on a set of CDs, the OTA Legacy, in the final days of the agency, and that resource has remained valuable. But the ability to address and evaluate issues important to the legislative process from a unique perspective and responsibility within Congress, and to draw the best minds in the United States, as advisors, into the service of congressional technology policy analysis, would be of great value to Congress and the public today.
Whatever the motives of the actors at the time of OTA’s demise, subsequent events lead me to be somewhat more sanguine than Chubin about the vacuum he has identified. Both Republicans and Democrats have given indications in recent years that they share an appreciation of the value of analysis of scientific evidence by continuing to seek the assistance of the National Academies on many problems. Perhaps they may be ready to support a new OTA.
Lamenting the demise of OTA, Daryl E. Chubin asks “Who will help [federal] policymakers understand science and technology well enough to make wise decisions?” There are actually many sources of advice. Senior advisory boards abound, such as the President’s Committee of Advisors on Science and Technology, the National Science Board, and science boards specific to the executive agencies and their missions. The National Academies advise the government, typically using the format of an in-depth study resulting in a book-sized report. The Congressional Research Service will collect published materials on a given topic at the request of a member of Congress.
What is missing is the ability for a member to find, in short order, just the right expert to help that member work through a specific issue. OTA staff performed this function to some extent; they “answered the phone” as well as developed in-depth reports. But no modest-sized staff can be expert in all areas of S&T.
Members of Congress are called on to act on a wide variety of issues, often with tight timelines for action. The percentage of those policy issues in which the problem, the solution, or both have a substantive science or technology component seems to be increasing.
Congress should consider providing itself with “just-in-time” access to experts in S&T. It should consider chartering the National Academies to maintain a small office that connects members of Congress with one or several paid consultants who, at the member’s direction, can create a white paper or meet with the member to discuss the explicit issue at hand. The National Academies already have eminent committees that can identify and vet truly qualified individuals. What is lacking is a “broker office” that is funded to locate and contract with the right expert just in time.
Expert advice on the specific issue at hand delivered on a short fuse could provide Congress with the kind of support that is very difficult for a single member’s office to acquire.
Daryl E. Chubin asks whether sufficient time has passed or enough wounds have healed for OTA to be reconstituted. Chubin concludes that a congressionally mandated resurrection is not in the works and raises troublesome questions about whether an OTA-like capability can be conjured up through existing institutions, governmental or nongovernmental. These questions are timely: Carnegie Mellon University is organizing a conference in Washington this summer to review the strengths and weaknesses of the old OTA, to assess whether it indeed met its legislatively mandated goals, and to look at new structures for filling the policy vacuum.
OTA had its faults, but the participants in the Carnegie review need to understand that in 1995, Congress did not act after an OTA-like deliberative process, nor did its decision somehow confirm that the agency was dysfunctional. Rather, OTA fell prey to a strange and unique alignment of the stars. The first Republican House in 40 years was led by a charismatic, technology-infatuated speaker whose reorganization of the House led to great centralization of power in his office. As OTA’s last director, Roger Herdman, has said, “The speaker had his own ideas about what he wanted to do with science, and he didn’t want anything that would conflict with those.” Other sources of unvarnished data and analysis also suffered in that period. Congress cut professional committee staffs by one-third, and many sources of independent information, including the U.S. Geological Survey and many of the Environmental Protection Agency’s programs, were proposed for elimination. Thoughtful elected officials from both parties who had served long enough to appreciate the value of in-depth S&T policy analysis wanted OTA to continue; three of the longest-serving Republicans in the House–Phil Crane, Ben Gilman, and Henry Hyde–voted to preserve OTA. On the other hand, those who had never been exposed to OTA dutifully followed Gingrich and Bob Walker, the chairman of the Science Committee, and provided the margin to terminate it in the initial House vote; only 2 of the 74 freshman Republicans swept into office by Gingrich’s “Contract with America” voted to keep OTA.
Of course, these circumstances, however unique, cannot be undone. Or can they? It is telling that the number of legislative technology assessment groups throughout the world has grown from 1 in 1982 (OTA) to 15 today, despite the fact that none of the national parliaments currently supporting such institutions have the independence or power of the U.S. Congress. The Carnegie group should look at these international entities for clues to alternative modes of operation for a reformulated OTA. But in the end, renewed funding for a congressionally based OTA is the best option for reinvigorating technology assessment and related policy formulation. The parliamentary basis for these global institutions is not accidental. It simultaneously provides independence, flexibility, breadth of scope, openness, and access to the policymaking process. None of the alternatives, including a huge infusion of cash from a white knight such as Ted Turner, can begin to replicate the impact that OTA made in its heyday.
Many of us in the S&T policy community lamented the demise of OTA in 1995. But as Daryl E. Chubin recognizes, Congress is unlikely to bring the agency back to life any time soon. Chubin is on target in calling attention to the growing list of S&T-intensive policy issues confronting Congress and the nation, but he underestimates the growth in interest and capability in S&T policy that has taken place in recent years.
He states that most of the scientists and engineers who come to Washington to participate in the American Association for the Advancement of Science (AAAS) Fellows Program (actually an umbrella program involving more than 30 science and engineering societies) return to traditional careers after their fellowships end. In fact, about two-thirds make significant career shifts, and more than half of those remain in Washington in policy-related work.
The program has produced many leaders in S&T policy in recent years, including, at one point during the Clinton administration, three of the four associate directors of the White House Office of Science and Technology Policy (OSTP); a substantial number of congressional committee and subcommittee staff directors; more than a few top-level federal agency officials; and one member of Congress, Rep. Rush Holt (D-N.J.). Furthermore, the program continues to grow. AAAS today administers 12 S&T Policy Fellowship Programs, placing highly qualified scientists and engineers not only in Congress but also in a dozen federal agencies, including the Defense Department, Environmental Protection Agency, Food and Drug Administration, the State Department, and even the Justice Department. There are now more than 1,300 alumni of the programs, and the 2000-01 class includes 125 fellows.
Other policy analysis organizations have also begun to take root. The Science and Technology Policy Institute, established within the RAND Corporation in 1992 as the Critical Technologies Institute and renamed in 1998, serves OSTP with a strong analytic capability on a budget that now exceeds $5 million a year. SRI International maintains a similar, though smaller, S&T policy group in its Washington office. Columbia University recently established a Center for Science, Policy and Outcomes in Washington. And the Science Policy Directorate at AAAS has a staff of more than 40 people, about a quarter of whom hold Ph.D.s. Several months ago, these four organizations joined with four Washington-area universities that maintain research and teaching programs in S&T policy to form the Washington Science Policy Alliance, which is sponsoring periodic seminars in S&T policy. When the alliance placed a form on the World Wide Web inviting people to sign up for its mailing list, more than 500 registered in the first two weeks. All of this suggests that the field of S&T policy is alive and well in the seat of the federal government.
Congress should heed Daryl E. Chubin’s plea but, as he suggests, we need to consider ways to fill the gaps left by the demise of OTA that are not dependent on the sudden reenlightenment of Congress.
Chubin alludes to a variety of public and private settings in which technology assessment (TA) and TA-like activities occur. Although public TA activity has diminished, it persists in some agencies working in near obscurity. In public health alone, for example, the Agency for Healthcare Research and Quality, the Office of Medical Applications of Research, and the National Toxicology Program all perform assessments that inform policy decisions.
Obscurity has two pernicious political consequences. First, such TA activities may not develop the broad and active constituencies needed to defend them from political attack. Second, narrow constituencies involved in the assessments may unduly influence the agencies’ processes and decisions–a problem exacerbated when the TA performer is a private organization.
Obscurity also has a pernicious intellectual consequence. Scholars are generally ignorant of how differences in the participation, structure, and procedures of TA activities translate into differences in their outputs. This ignorance leads to trouble not only in evaluating the quality of TA, but also in considering whether there should be greater consistency among various performers. For example, to what extent should the procedural checkpoints of the Administrative Procedure Act apply to TA? And to what extent must assessments be based only on published peer-reviewed research, rather than on research that meets other public criteria or research that is proprietary in nature?
Funding agencies have not done enough to build the intellectual infrastructure necessary for a renaissance of TA. Again, a variety of programs exist, such as the ethical, legal, and social implications set-asides in genome, information technology, and nanotechnology research, and the programs in environmental decisionmaking cosponsored by the National Science Foundation and the Environmental Protection Agency. But there is still too little emphasis on understanding and anticipating the societal implications of new research and too little effort directed at integrating the research outputs from such programs into new and improved decision outcomes. Foundations could take a lead in such efforts, demonstrating to federal funders and decisionmakers that research on TA and decisionmaking can improve the social outcomes derived from R&D.
Finally, the United States continues to lag behind other nations in incorporating the perspectives of the lay public into TA for both analytical and educational purposes. Contemporary technological decisionmaking still frames the public as passive actors or, at best, single-minded consumers, rather than as active citizens capable and even desirous of deliberative participation in technological choice.
A new OTA would make important contributions to governing in this technological age. But TA requires increased political and scholarly attention whether there is one reconstituted office or many offices struggling in obscurity.
Airline safety
In “Improving Air Safety: Long-Term Challenges” (Issues, Winter 2001), Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn have written a stimulating and sophisticated paper about aviation safety, in which they argue that the primary threats to future air travelers might differ from the menaces of the past. I would amend the statement slightly and say that certain long-term dangers that have been quiet in recent years might be poised for a resurgence.
Midair collisions were once quite common, for example, but they did not cause a single death in the 1990s on any of the 100 million jet flights in the First World. But air traffic control is changing: To improve efficiency, the country-by-country airspace systems in Western Europe might be merged into a larger entity. In the United States, a growing emphasis on point-to-point “free-flight” routings could create patterns of unprecedented complexity on the air traffic controller’s screen. Although such changes would surely be introduced with great caution, learning-curve theory tells us that any major policy revision can have unanticipated adverse effects.
Runway collisions caused 30 deaths during the 1990s among the billions of First World jet passengers. That is a far better record than in previous decades (the catastrophe at Tenerife in 1977, for example, took 583 lives). But growing traffic levels at airports create new opportunities for collisions; indeed, there is some empirical evidence that risk grows disproportionately as airport operations increase. As is widely recognized, runway collisions are anything but an extinct danger.
During the 1990s, the chance that a First World air traveler would perish in a criminal terrorist act was 1 in 10 billion. This record is especially striking because sabotage in the late 1980s felled several First World jets, causing hundreds of deaths. Yet it would go too far to suggest that First World security systems are now infallible or that the desire to harm Western air travelers has withered away. It is thus understandable that, two days after the October 2000 bombing of the USS Cole, Reuters reported that “fear of terrorist attacks hammered global airline shares yesterday.”
Reversals in safety are not inevitable, but we do well to ponder the understated but potent warnings of Oster, Strong, and Zorn.
Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn propose that, as low as they are today, commercial aviation accident rates must be reduced. The most significant point in the article is that our aviation system will need to grapple with growth and rapid change if we are to obtain these safety improvements. The authors cite several areas in which it is essential to look beyond the lessons of the past. I agree, but based on my experience as a manager of major airline accident investigations I believe that the lessons of the past–the findings of accident investigations–will probably continue to provide much of the impetus for actual safety improvements.
The authors indicate that valuable safety information is available from the vast majority of flights that arrive safely, in addition to the few that crash. The goal is to prevent accidents by spotting trends in less serious precursor events. There are several such programs in the air-carrier industry worldwide, most falling into two categories: monitoring data from onboard recording systems and self-reporting by pilots of otherwise unreported in-flight events. These are rich data sources for improving safety, and they deserve the support of the public and government. Unfortunately, although it has announced its support for these programs, the Federal Aviation Administration (FAA) also stands in their way by hanging on to outmoded police-like concepts of enforcement and sanctions, which it threatens to impose on those who are being monitored and who come forward voluntarily to provide safety information. As a result, these programs (Flight Operations Quality Assurance and Aviation Safety Action Partnership) are languishing in the United States.
If we could overcome the problem of FAA sanctions, the next challenge would be to extract valid conclusions about air safety from the scattered pings and upticks in the mass data that would be collected from millions of nonaccident flights. Many potential safety issues will be spotted from these data, but it will be hard to separate the issues that might lead to an accident from all the others.
And there is a much more difficult challenge yet. There are plenty of safety issues that are well known and that might occasionally, or just conceivably, be involved in an accident. But often the FAA and private interests will not take action until after an accident has occurred. The key distinction is between knowing about a problem and being willing to pay the money to do something about it. The authors are correct in calling for aviation industries and government to look ahead to prevent accidents. However, the best data collection and analysis systems will be useless unless companies and regulators are willing to act on the findings. Because I don’t see any scientific or political factors that will change the current state of affairs any time soon, in many cases we will still be dependent on learning from accidents to move air safety forward.
Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn note that we may already have taken all the easy avenues to reducing aviation accidents. If we are, in fact, in a position where all that are left are random accidents with no common thread, then an irreducible minimum accident rate combined with increases in aviation activity may well lead to an increased number of accidents each year. The authors suggest that society may find this unacceptable. However, we may also need to think about how society might cope with the reality of such a situation, because it may be where we end up.
As the authors suggest, it may be that we will have enough work to do in keeping the accident rate from growing as the aviation system becomes more congested. Some recent work by Arnold Barnett of MIT and others suggests that the rate of runway incidents increases with increased activity at a location. The idea is that the exposure to risk grows exponentially with increases in activity at a location with fixed capacity. However, even when capacity at an airport grows by addition of runways and taxiways, risk may also increase because of increased complexity. There will be a larger number of crossing points, and the chance that a pilot will take the wrong runway or taxiway also might increase. We may need to increase investment in ground control and monitoring systems irrespective of whether we expand runways at the nation’s busiest airports, just to maintain the current level of safety.
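A back-of-the-envelope calculation illustrates this faster-than-linear growth (an illustrative argument only, not necessarily the form of Barnett’s analysis). If $n$ aircraft movements share a fixed set of runways and taxiways in a given interval, the number of pairs of movements that could potentially conflict is

$$\binom{n}{2} = \frac{n(n-1)}{2},$$

so doubling the number of movements roughly quadruples the pairwise encounter opportunities, even before counting any new crossing points added by expansion.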
The authors also suggest that less experienced human resources will be brought into more sophisticated aviation environments in times of high economic growth. The corollary, I imagine, is that inferior resources will be shed in times of economic downturn. However, we also know that there are concerns about companies reducing safety expenditures when times are tough. Work by my firm shows that increased vigilance is likely to be required when carriers undergo significant changes, such as fleet growth, changing aircraft types, opening new markets, and so forth. I suggest that oversight mechanisms should recognize that an airline’s internal systems can be stressed by either growth or contraction and that safety oversight be increased in either situation.
Finally, the authors note that the institutional structure of how air traffic control (ATC) services are provided in the United States may change in response to what can be characterized as the demand-capacity-delay crisis. They correctly note that there are some inherent conflicts when, as under the current system, the same organization [the Federal Aviation Administration (FAA)] is both the operator and the regulator of the ATC system. With the many proposals to make ATC more businesslike, the authors note that an arm’s-length approach to monitoring the safety of the ATC system will be required. It will be a challenge for the FAA to develop a new institutional framework in a timely manner. Yet both the reality and the perception of independent safety oversight will be crucial to the ultimate success of any restructuring of how the nation provides ATC services.
Civilizing the SUV
John D. Graham’s “Civilizing the Sport Utility Vehicle” (Issues, Winter 2000-01) will hopefully do much to enlighten the debate over SUVs. After all, given the tirades to which these vehicles have been subjected, a more accurate name for them would be “Scapegoat Utility Vehicles.” Yet, as Graham recognizes, SUVs offer some very desirable attributes, ranging from their utility to (shocking as it may be after all we’ve heard to the contrary) their safety. The popularity of SUVs owes less to their image than to their utility.
In fact, a careful reading of Graham’s article suggests that it might more accurately have been titled “Civilizing the Small Car.” As he notes, overall vehicle safety would gain more from upsizing small cars than from downsizing SUVs, and many small car buyers may not fully appreciate the risks inherent in their cars. Political correctness may be the culprit here, given that few things are as politically incorrect nowadays as large cars. The federal government and consumer safety groups have actively highlighted practically every vehicle risk except that of small cars. On that issue, federal fuel economy standards have actually exacerbated the problem through their downsizing impact on new cars. Given Graham’s pioneering research on the lethal effects of those standards, it is refreshing to see his perspective applied to the SUV debate.
I question, however, Graham’s assessment of the “aggressivity” of SUVs. Vehicle incompatibility is not a new problem, and in some collision modes there is a greater mismatch between large and small cars than there is between cars and SUVs. True, more people die in cars than in SUVs when the two collide, and you could make the ratio smaller by making SUVs more vulnerable. It’s far from clear, however, that doing so would improve overall safety.
Graham’s introduction of the global warming issue as a reason for reducing fuel consumption is another matter. The scientific basis for a climatic threat is far more tenuous than is commonly realized. As for the Kyoto Treaty, it has not even been submitted to the Senate, let alone ratified, and Congress has prohibited a number of federal agencies from spending funds to meet its objectives. Finally, whenever a new political program is introduced to limit consumer choice, there is the possibility that it will run disastrously off course. If the history of the federal fuel economy standards can teach us anything, it is that.
I appreciate John D. Graham’s starting a discussion about civilizing the SUV. A number of his suggestions (such as reworking safety ratings and reclassifying station wagons) are worthy of consideration. However, I hope he was not serious about the idea of raising the weight of cars by 900 pounds to compensate for the increasing weight of SUVs. First, it seems wildly inappropriate to suggest that U.S. drivers should consume more of the world’s steel and petroleum resources to decrease their risk of a fatal collision with an SUV. Second, it would take over a decade (cars are driven for an average of at least 12 years) to replace all the cars on the road, unless we require retroactive armor plating for smaller cars. Third, increasing the weight would not address the “bumper override” problem.
I do not think we need a federal study of the road vision problems caused by the inability to see through or around SUVs. Anyone who drives around SUVs knows there’s a problem in seeing roadside signs, merging, or pulling into traffic when there’s an SUV in the way. Research into possible solutions (SUV-only lanes? SUVs treated as trucks for highway and parking purposes? SUVs restricted from parking at corners that obscure the vision of oncoming traffic? SUVs required to be low enough to see through? Periscopes for smaller vehicles?) would be more welcome.
Finally, some of the problems of uncivil SUVs are actually caused by their drivers, not by the vehicles themselves. Rude and inconsiderate drivers are nothing new, but driving a vehicle that allows them to indulge their attitude at a heightened risk to their fellow citizens exacerbates the problem. Again, some suggestions in this area would be most welcome. Requiring special licensing and/or training might help (chauffeur’s license required if the vehicle seats more than six?). Then again, so might making these drivers commute for a week in a Geo Metro.
It is as refreshing as it is unusual in the current media climate to see a reasoned discussion about SUVs such as the one that appeared in your Winter 2000-01 edition. For some time now, Ford Motor Company has been attempting to improve the health, safety, and environmental performance of light trucks and cars. Our engineers and scientists know firsthand the challenges inherent in balancing utility and reasonable cost with customer and societal demands for safety and emissions control.
Our recent efforts, under a “cleaner, safer, sooner” banner, offer new technologies in high volumes at popular prices as soon as feasible and usually years ahead of any regulation. We began in 1998 by voluntarily reducing the emissions levels of all our SUVs and Windstar minivans. A year later, we included all of our pickup trucks. These actions, at no additional charge to the consumer, keep well over 4,000 tons of smog-forming emissions out of the atmosphere each year. In summer 2000, we committed to achieving a 25 percent improvement in the overall fuel economy of our SUVs during the next five years, through a combination of technical innovations in power train efficiency, lightweight materials, new products, and hybrid vehicles.
In terms of safety, we consistently have more top-ranked vehicles in government crash tests than any other automaker. Ford was the first company to depower airbags across all vehicle lines when it became apparent that this would protect smaller-stature people who were using safety belts. And we’re determined to increase the use of safety belts: the single most important safety technology that exists. BeltMinder, now in all of our vehicles, uses both chimes and a warning light for a driver who puts a vehicle in motion without buckling up. Our Boost America! Campaign encourages the use of booster seats for children between 40 and 80 pounds. Today, less than 10 percent of these children are properly restrained.
In 1999, we introduced the industry’s most comprehensive advanced restraints at family car prices. Called the Personal Safety System, it “thinks about” an accident as it is happening and selects the proper combination of airbag deployment and power levels as well as safety belt pretensioning, depending on conditions. This fall we will offer a sport utility rollover curtain protection system that will help reduce the risk of occupant ejection. We will also offer stability control systems for all of our light trucks during the next several years with the performance, but not the cost, of advanced systems now in use.
All of this work is designed to make a real-world difference for our customers and to contribute positively toward addressing the social issues that arise from the use of our products.
John D. Graham argues that SUVs can be made safer, more energy-efficient, and less polluting. As a researcher in these areas, I fully agree and remain dedicated to that proposition.
But I worry that this focus on civilizing the SUV misses an important point: that SUVs, especially the larger ones, are part of an antisocial trend. As Graham notes, they can maim the occupants of smaller vehicles in collisions, and they block views in traffic and at intersections. One might respond that minivans (and some pickups) are also large, obtrusive vehicles. The difference is that minivans and pickups are valued for their functionality and used accordingly. SUVs, in contrast, are rarely driven off road, and their four-wheel-drive capability is rarely used. The question, therefore, that merits more research and debate is: Is the embrace of SUVs another manifestation of the breakdown in civility and community, with concern for self trumping concern for fellow beings?
Especially disconcerting is the fact that as the population of SUVs (and other large vehicles) expands, those not owning such large vehicles feel intimidated. I am one of them. I bought a compact-sized Toyota hybrid electric Prius. I find it spacious and comfortable and highly fuel-efficient. But I fear that I am irresponsibly subjecting my family and myself to heightened danger. I feel pressured to buy an SUV simply for self-preservation. I have many friends who have succumbed to this fear.
The solution? At a minimum, eliminate the anachronistic regulations favoring SUVs (especially the lax fuel economy rules). And require that SUVs have lower bumpers and be designed so that they don’t subject cars to undue risk.
Another response is to encourage multicar households to use vehicles in a more specialized way. They could drive a small car locally and rely on the large SUV mostly for recreational family trips. Better yet, we could encourage car sharing, as is becoming popular in Europe, whereby families gain easy access to locally available SUVs only when they really need them.
The real solution, though, is deeper and more basic–it has to do with our sense of civility and caring for one’s fellow beings. For that there are no simple fixes.
Older drivers
A. James McKnight’s comprehensive and thoughtful article (“Too Old to Drive?” Issues, Winter 2001) describes the complexity of the issues surrounding driving by older people. Though in general the article is correct and complete, there are some areas that need further clarification.
McKnight clearly makes the case that older drivers are safe but fails to indicate that as pedestrians, they are at risk, which complicates transportation solutions for older people. Not only are older pedestrians not safe, but many of the older people who cannot drive are significantly less able to walk or use transportation options. Therefore, providing mobility for people who stop driving is much more complicated than merely providing transportation options and teaching them how to use them. Currently, the most frequent way that older nondrivers get around is by having their spouse or children drive them. When someone is not available to drive the frail older person, providing usable, convenient transportation is extremely costly and difficult. Therefore, ideas such as those that permit “through the door to through the door” capabilities are absolutely necessary.
In the area of available resources, there are several concerns. As the older population grows, significant numbers of older people will come to depend on public transportation, transferring a previously private expense to public coffers. New sources of funding could be required to meet this increased demand. With reference to vehicles, it is unknown whether the future purchasing power of the older population will lead car manufacturers to pay more attention to safety and ease of driving for older drivers. Another big issue is whether land use planners can be motivated to make the changes that would enable older people to more readily age in place.
In the area of technology, McKnight is generally more positive than many in the field. It is not at all clear that there is no downside to what technology can offer older people. They frequently are at their limit in dealing with the information that they currently have to process. He is also much more optimistic about the potential value of computer-based testing in providing a valid test of driving than are many people in the field. Though many older drivers may not do well on tests, they adjust their driving, mainly by slowing down and driving less, to account for their limitations and are able to drive with a lower crash rate per licensed driver. With reference to the crash involvement of older drivers at intersections, it should be pointed out that although older drivers have a higher percentage of their crashes at intersections, they still have fewer crashes at intersections than other age groups. They simply have fewer crashes, in general. Finally, research from the National Institute on Aging and other sources has shown that people with dementia at and above the mild stage have higher crash rates and should not drive and that these individuals can be identified.
We commend McKnight on an excellent issues paper on how we are going to deal with a population in which more than 70 percent of people over 70 years old are drivers. He presents a balanced view on how we need to respond to their transportation needs.
I am pleased to see the Independent Transportation Network (ITN) of Portland, Maine, favorably described by A. James McKnight in his excellent overview of issues relating to safety and mobility for older drivers, especially since he recognizes the development of alternatives to driving as “probably the most daunting issue facing the transportation community.” I would like to distinguish, however, between the ITN’s financial position and that of other senior transportation services, and in this distinction to suggest an important direction for the nation’s policymakers.
Unlike most senior transit services, the ITN is designed to become economically sustainable through user fares and community support, rather than relying on an ongoing operating subsidy from taxpayer dollars. I readily admit, as McKnight points out, that the ITN has yet to achieve this goal. But the logic of our approach and the staggering cost of public funding for senior transit (added to the cost of Social Security and Medicare) militate strongly in favor of research and development of an economically sustainable solution.
The ITN is pleased to have received a $1.2 million Federal Transit Administration grant to develop a nationally connected and coordinated transportation solution for older people. But to put these dollars in perspective, one county in Maryland spends $2 million annually just to subsidize taxicabs for seniors in that county.
The ITN approach is to pay scrupulous attention to the market demands of senior consumers by delivering a service that comes as close as possible to the comfort and convenience of the private automobile. In our consumer-oriented culture, where cars convey symbolic meaning far beyond their transit value, the ITN strives to capture feelings of freedom in a transportation alternative. To this end, the ITN uses cars and both paid and volunteer drivers to deliver rides 7 days a week, 24 hours a day. Seniors become dues-paying members, open prepaid accounts, and receive monthly statements for their rides. These are the characteristics of a service for which seniors are willing to pay reasonable fares.
The ITN vision of a nationally coordinated, community-based, nonprofit transportation service for America’s aging population, a service connected through the Internet and funded by market-driven rather than politically driven choices, is worthy of support, both privately and publicly. Beyond the traditional paths of action–regulation and publicly funded solutions–Congress should encourage socially entrepreneurial endeavors by developing policy incentives for private solutions.
In this vein, the ITN is developing an entity larger than itself–the National Endowment for Transportation for Seniors–as a focus for private resources, from individuals and corporations who share a vision of sustainable, dignified mobility for the aging population. The endowment will support research, policy analysis, education, alternative transportation, and fundraising. It’s a big vision, but big problems warrant big solutions.
Driver behavior and safety
I agree with Alison Smiley’s conclusion that technological innovations intended to improve road safety interact with driver behavior, so that the expected reduction in crashes may not in fact occur (“Auto Safety and Human Adaptation,” Issues, Winter 2001). The research she cites clearly indicates that drivers quickly adapt to the new features by increasing risky behaviors such as driving faster, driving under poor conditions, or being less attentive and expecting the new technology to take care of them. She also points to drivers’ lack of understanding of the limitations of the new assistive systems.
Gerald Wilde has discussed this adaptation phenomenon in his book Target Risk; he defines target risk as “the level of risk a person chooses to accept in order to maximize the overall expected benefit from an activity.” His concept of “risk homeostasis” provides insights into human risk-taking behavior. People set a “risk target” and adjust their behavior accordingly. After the introduction of safer equipment or roads, drivers will adjust their actions to the same level of risk as before, which is why the “three E’s”–enforcement, engineering, and education–do not necessarily improve road safety and crash statistics. The research he cites on human risk-taking indicates that there are large individual differences in risk homeostasis based on personality, attitude, and lifestyle.
It appears that drivers are less interested in reducing risk than they are in optimizing it. All drivers have a preferred level of risk that they maintain as a target. When the level of risk they perceive in a situation goes down, they will adapt by increasing their risky behavior so that the preferred target level remains constant over time. Technological improvements that drivers perceive as lowering the risk are thus followed by a change in behavior that is less cautious and raises the risk to the level before the improvement. The data discussed in Smiley’s article conform to this homeostatic explanation.
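The mechanism can be made concrete with a toy feedback loop. The sketch below is a minimal illustration under assumed values; the target level, adjustment rate, and functional form are inventions for the example, not parameters from Wilde’s model.

```python
# Toy illustration of risk homeostasis: drivers adjust their behavior so that
# perceived risk drifts back toward a preferred target after a safety change.
# All parameters here are illustrative assumptions, not values from Wilde.

TARGET_RISK = 0.5   # the driver's preferred ("target") level of perceived risk
ADJUST_RATE = 0.3   # fraction of the risk gap closed per time step

def simulate(hazard, steps=20):
    """Evolve driving-style riskiness on a road/vehicle with a given hazard."""
    behavior = 1.0   # baseline riskiness of the driving style
    for _ in range(steps):
        perceived_risk = hazard * behavior
        # Drivers partially close the gap between target and perceived risk.
        behavior += ADJUST_RATE * (TARGET_RISK - perceived_risk) / hazard
    return behavior

# A safety technology halves the intrinsic hazard (0.5 -> 0.25) ...
before = simulate(hazard=0.5)
after = simulate(hazard=0.25)

# ... but behavior becomes roughly twice as risky, restoring the original
# perceived risk, so the expected safety gain largely evaporates.
print(round(before * 0.5, 3), round(after * 0.25, 3))  # both come out near 0.5
```

In this toy run, halving the hazard leaves the long-run perceived risk essentially unchanged, which is exactly the homeostatic pattern described above.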
Her solution calling for better driver understanding of the limitations of the new driver-assistive technology may not work for the same reason. Although driver training improves skill, it also increases confidence, which in turn lowers the perception of risk and increases unsafe behavior. What is needed in addition to training is the introduction of increased benefits from safer behavior. When this motive is introduced into the driving equation, it acts in opposition to homeostasis, and many drivers will respond by inhibiting risky behaviors. Drivers need training in two areas: understanding how the new technology works, especially its limitations, as Smiley points out; and understanding the risk compensation effect on their decisions. The latter understanding, reinforced with positive incentives for safe behavior, will make it more likely that society can benefit from the introduction of new driver-assistive technologies.
Climate change
In “Just Say No to Greenhouse Gas Emissions” (Issues, Winter 2001), Frank N. Laird clearly describes political obstacles, implementation problems, and other difficulties associated with the emissions targets and deadlines in the Kyoto Protocol. Focusing on targets and deadlines, however, introduces fundamental conceptual fallacies in addition to those he discusses.
The intricacies of the natural world and the complexity of its interaction with human behavior make full specification of a given “state” or “condition” of the Earth system virtually impossible. Contributions from and consequences for some components must necessarily be omitted in any finite characterization. Furthermore, evidence shows that in prehistoric times, the Earth system exhibited both much higher atmospheric concentrations of carbon dioxide and breathtakingly rapid climate change. Designating a particular atmospheric concentration (or rate of annual emissions) of greenhouse gases to be “acceptable” is doubly dicey because doing so both oversimplifies the complex dynamics involved and relies on a simplistic notion of the consequences of climate change.
This is particularly relevant because alternative, and equally applicable, concepts of fairness imply very different assignments of responsibility for action. The polluter-pays principle implies that countries currently (or even better, historically) emitting the most greenhouse gases should bear the burden of rehabilitation. However, the equally valid principle of functional equivalence says that those who reap the benefits should share the burden; this avoids free riders. Going a step further, the level-playing-field principle calls for burdens to be borne by all, regardless of their respective endowments, in order to avoid shirking. All of these positions can be found in the policy debate, and there is no simple way to choose one over the others.
Assigning explicit, time-constrained emission targets both oversimplifies the interactions between natural and human systems and arbitrarily imposes a single concept of fairness. Emissions targets are appropriate when, as was the case with chlorofluorocarbons, targets can be continually ratcheted downward. Total elimination of carbon dioxide emissions, however, is ludicrous, and the danger of explicit emission targets is that further effort may be viewed as unnecessary when an artificial and inappropriate target is met. The emissions targets in the Kyoto Protocol are both artificial and inappropriate, and I agree with Laird that they should be abandoned.
Categorizing research
I have a small suggestion to get away from the “dichotomy” so much deprecated in “Research Reconsidered” (Issues, Winter 2001) and in Lewis Branscomb’s “The False Dichotomy: Scientific Creativity and Utility” (Issues, Fall 1999). It occurred to me at a conference Roger Revelle assembled in Sausalito about 30 years ago, when he was vice president for research at the University of California. I mention it occasionally; it has not caught on, but I try once again.
The usual picture of basic and applied research has them spread along a single axis, basic toward one direction, applied toward the other. In this picture, research is one or the other or somewhere in between. My suggestion is to draw not one axis but two. “Basicness” could be measured on the y-axis and “appliedness” on the x-axis. In this picture, research can be represented as a mix of somewhat to very basic and somewhat to very applied characteristics. Simple? Helpful?
A matter of curiosity: Can there be research that is neither? Examples are naturally hard to come by, because there would be little motivation for such “research”–if one should even call it research–among scientists. I do have a candidate: research to determine who would win a Joe Louis-Muhammad Ali fight.
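For what it is worth, the two-axis picture is easy to draw. Here is a minimal sketch; the projects and their scores are hypothetical, chosen only to populate the quadrants, including the “neither” candidate above.

```python
# Sketch of the two-axis view: each research effort gets independent scores
# for "appliedness" (x-axis) and "basicness" (y-axis), rather than a single
# position on a basic-to-applied line. Projects and scores are hypothetical.
import matplotlib.pyplot as plt

projects = {
    "particle cosmology": (0.1, 0.9),          # very basic, barely applied
    "transistor physics": (0.8, 0.9),          # very basic AND very applied
    "bridge load testing": (0.9, 0.2),         # applied, little basic content
    "Louis-Ali fight prediction": (0.1, 0.1),  # the "neither" curiosity
}

fig, ax = plt.subplots()
for name, (applied, basic) in projects.items():
    ax.scatter(applied, basic)
    ax.annotate(name, (applied, basic))
ax.set_xlabel("appliedness")
ax.set_ylabel("basicness")
ax.set_xlim(0, 1.2)
ax.set_ylim(0, 1.2)
plt.show()
```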
Anyway, there goes the dichotomy.
Economic development
I applaud Barry Bozeman’s call for an expansion of the mission of state economic development agendas and increased attention to the reduction of income inequality, alleviation of poverty, and closing of racial and class divides (“Expanding the Mission of State Economic Development,” Issues, Winter 2000-01). The problems in state economic development programs are more serious than he describes, and the content of governmental policy in many states is far more politically opportunistic and less farsighted than suggested by his citation of Georgia’s experiences.
State policy agendas are trichotomized, not dichotomized. There is an economic development agenda, a science and technology (S&T) agenda, and a socioeconomic agenda. The split between the economic agenda and the S&T agenda is that the former typically focuses on tax reductions that constrain the state’s ability or willingness to invest in its K-12 and higher education system. The result in many states is a secular privatization of public higher education. Nationally, state government funding for public universities has declined steadily. Universities have turned to tuition increases to maintain core functions and are unable to extend programs to underserved populations or to exploit new research areas, and rising tuition has made higher education less affordable for historically disadvantaged populations.
State S&T programs, in the main, are targeted at selected technologies and industrial sectors. Seldom do they address the national or state workforce development needs noted by Bozeman or the state’s (and nation’s) socioeconomic agenda. In many states, S&T programs constitute economic development on the cheap. They provide highly visible and technologically trendy images of state governors attuned to the “new economy,” while core state functions such as the support of education are kept on lean rations as the state fails to constructively balance its long-term revenue and expenditure activities. Unhappily, we may be about to repeat the same mistake at the national level.