We Need New Rules for Self-Driving Cars

Autonomous vehicles will change the world in ways both anticipated and entirely unexpected. New rules should be flexible while ensuring that self-driving cars are safe, broadly accessible, and avoid the worst unintended consequences.

There is a sign on the campus of the National Transportation Safety Board (NTSB) in Virginia that reads “From tragedy we draw knowledge to improve the safety of us all.” The NTSB is obsessive about learning from mistakes. Its job is to work out the causes of accidents, mostly those that involve airplanes. The lessons it takes from each crash are one reason why air safety has improved so dramatically. In 2017, for the first time since the dawn of the jet age, not a single person was killed in a commercial passenger jet crash.

The NTSB also looks into other major transport malfunctions, particularly those involving new technologies. In June 2016, the board’s chair, Christopher Hart, was invited to speak to the National Press Club in Washington, DC, about self-driving cars. Hart—a lawyer, engineer, and pilot—gave a speech in which he said, “Rather than waiting for accidents to happen with driverless cars, the NTSB has already engaged with the industry and regulatory agencies to help inform how driverless cars can be safely introduced into America’s transportation system.” On finishing the speech, Hart was asked, “Is there a worry that the first fatal crash involving a self-driving car may bring the whole enterprise down?” He replied, “The first fatal crash will certainly get a lot of attention … There will be fatal crashes, that’s for sure.”

Hart didn’t know it at the time, but seven weeks earlier a first fatal crash had already happened. The same day that he gave his speech, the electric carmaker Tesla announced in a blog post that one of its customers had died while his car was using the company’s Autopilot software. Joshua Brown was killed instantly when his car hit a truck that was crossing his lane. The bottom half of Brown’s Tesla passed under the truck, shearing off the car’s roof. The only witness to speak to the NTSB said that the crash looked like “A white cloud, like just a big white explosion … and the car came out from under that trailer and it was bouncing … I didn’t even know … it was a Tesla until the highway patrol lady interviewed me two weeks later … She said it’s a Tesla and it has Autopilot, and I didn’t know they had that in those cars.”

The process of learning from this crash was as messy as the crash itself. The first investigation, by the Florida Highway Patrol, placed the blame squarely on the truck driver, who should not have had his truck across the road. Once it had become clear that the car was in Autopilot mode, a second investigation, this time by the National Highway Traffic Safety Administration, concluded that Brown was also at fault. Had he been looking, he would have seen the truck and been able to react.

It took the NTSB to appreciate the novelty and importance of what had transpired. The board’s initial report was a matter-of-fact reconstruction of events. At 4:40 p.m. on a clear, dry day, a large truck carrying blueberries crossed US Highway 27A in front of the Tesla, which failed to stop. The car hit the truck at 74 mph. The collision cut power to the car’s wheels and it then coasted off the road for 297 feet before hitting and breaking a pole, turning sideways and coming to a stop.

In May 2017, the NTSB released its full docket of reports on the crash. Tesla had by this point switched from coyness to enthusiastic cooperation. A Tesla engineer who had formerly been a crash investigator joined the NTSB team to extract and make sense of the copious data collected by the car’s sensors.

The data revealed that Brown’s 40-minute journey consisted of two and a half minutes of conventional driving followed by 37 and a half minutes on Autopilot, during which his hands were off the steering wheel for 37 minutes. He touched the wheel eight times in response to warnings from the car. The longest time between touches was six minutes.

Despite a mass of information about what the car’s machinery did in the minutes before the crash, the car’s brain remained largely off-limits to investigators. At an NTSB meeting in September 2017, one staff member explained: “The data we obtained was sufficient to let us know the [detection of the truck] did not occur, but it was not sufficient to let us know why.”

Another NTSB slogan is that “Anybody’s accident is everybody’s accident.” It would be easy to treat the death of Joshua Brown as a mere aberration and to claim that, as his car was not a fully self-driving one, we can learn nothing of relevance to truly self-driving cars. This would be convenient for some people, but it would be a mistake. Brown was one of many drivers lured into behaving, if only for a moment, as if their cars were self-driving. The hype surrounding the promise of self-driving cars demands our attention. As well as marveling at the new powers of machine learning to take over driving, we should look to the history of transport to build new rules for these new technologies.

Henry Bliss has the dubious honor of being the first person killed by a car in the United States. In the final weeks of the nineteenth century, an electric taxi hit Bliss as he was getting off a trolley car on the corner of 74th Street and Central Park West in New York City. The report published in the New York Times on September 14, 1899, was brutally frank: “Bliss was knocked to the pavement, and two wheels of the cab passed over his head and body. His skull and chest were crushed … The place where the accident happened is known to the motormen on the trolley line as ‘Dangerous Stretch,’ on account of the many accidents which have occurred there during the past Summer.”

The driver was charged with manslaughter but later acquitted.

Over the twentieth century, as the internal combustion engine replaced the electric motor and car use exploded, the number of US car deaths grew, peaking in the 1970s, when the average was more than 50,000 per year. Individual tragedies stopped being newsworthy as the public grew used to risk as the price of freedom and mobility. Improvements in technology and political pressure from safety campaigners meant that even as the number of miles traveled kept climbing, road deaths declined until the end of the century. Only recently has this trend reversed. The years 2015 and 2016 both saw jumps in fatalities, widely attributed to increased distraction from smartphones.

Proponents of self-driving cars claim that machines will be far better than humans at following the rules of the road. Humans get distracted; they get drunk; they get tired; they drive too fast. The global annual death toll from cars is more than a million people. At least 90% of crashes can be blamed on human error. If fallible human drivers could all be replaced by obedient computers, the public health benefit would surely be enormous.

However, technologies do not just follow rules. They also write new ones. In 1988, the sociologist Brian Wynne, looking back at recent calamities such as the Chernobyl nuclear disaster and the loss of the space shuttle Challenger, argued that the reality of technology was far messier than experts normally assumed. Technology, for Wynne, was “a form of large-scale, real time experiment,” the implications of which could never be fully understood in advance. Technological societies could kid themselves that things were under control, but there would always be moments in which they would need to work things out as they were going along. Even exquisitely complex sociotechnical systems such as nuclear power stations were inherently unruly.

The United States’ early experience with automobiles is a cautionary tale of how, if society does not pay attention, technologies can emerge so that their flaws become apparent only in hindsight. The car did not just alter how we moved. It also reshaped our lives and our cities. Twentieth-century urban development took place at the behest of the internal combustion engine. Cities are still trying to disentangle themselves from this dependence.

It’s a story that the historian Peter Norton has narrated in detail. In the 1920s, as cars were becoming increasingly common, the automotive industry successfully claimed that the extraordinary social benefits of its creations justified the wholesale modernization of US cities. In the name of efficiency and safety, streets were reorganized in favor of cars. Led by the American Automobile Association, children learned the new rules of road safety in school. Ordinary citizens were recast as “pedestrians” or, if they broke the new rules, “jaywalkers.” By the 1930s, people were clear on how the privileges of access to streets were organized. The technology brought huge benefits from increased mobility, but also enormous risks. In addition to what the author J. G. Ballard called the “pandemic cataclysm” of road deaths, the nation’s enthusiasm for cars also made it harder to support alternative modes of transport. The conveniences of cars trumped other concerns and allowed for the reshaping of landscapes. Vast freeways and flyovers were built right into the hearts of cities, while the network of passenger railroads was allowed to wither. Around the cities’ edges, sprawl made possible by two-car families leaked outwards. By the 1950s, the United States—and much of the world—had been reshaped in the car’s image. The car and its new ways of life had created a new set of rules.

In the twenty-first century, the unruliness of technology has become a brand. Silicon Valley sells “disruption,” a social media-friendly remix of what the economist Joseph Schumpeter called “creative destruction.” The idea, established by picking through the wreckage of once-powerful companies such as Kodak and Sears, is that incumbent companies will be brought down by upstarts bearing new technologies that change the rules. Among its banal slogans, the disruptive innovation movement proclaims that “doing the right thing is the wrong thing.” The disruptors are nimble and constantly experimenting. Disruptive innovation is intentionally reckless, looking for opportunities in challenging or evading regulatory rules. Facebook chief executive Mark Zuckerberg’s motto was, until recently, “Move fast and break things.” It’s a message that is easy to live with if you are benefiting from the changes and the stakes are low. But high stakes sometimes emerge unexpectedly. In the past couple of years, we have seen how software systems such as Facebook can challenge not just social interactions but also the functioning of democratic institutions.

For software, constant upgrades and occasional crashes are a fact of life; only rarely are they life-and-death. In the material world, when software is controlling two tons of metal at 70 mph, the collateral damage of disruption becomes more obvious. Software engineers encountering the challenge of driving appreciate the complexity of the task. It is infuriatingly unlike chess, a complicated game that takes human geniuses a lifetime to master but which computers now find rather easy. Driving, as self-driving car engineers regularly point out, doesn’t take a genius. Indeed, cars can be, and are, controlled by flawed and error-prone human brains. It is a process that, like identifying images and recognizing speech, has only recently become amenable to artificial intelligence. The approach to tasks of this complexity is not to try to work out how a human brain does what it does and mimic it, but rather to throw huge quantities of labeled data at a deep neural network (a layered software system) and let the computer work out patterns that, for example, tell a cat from a dog. In “deep learning,” as this approach is called, the aim of the game is to work out the rules. In some areas, the achievements have been remarkable. One such system—AlphaGo Zero, from Google DeepMind—became the world’s best Go player in 40 days in 2017, working out strategy and tactics from first principles through millions of practice games against itself. Deep learning can be extraordinarily powerful, but it is still learning.
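To make the shape of that approach concrete, the following is a minimal sketch in Python, using the PyTorch library, of supervised deep learning on synthetic stand-in data. The network, the data, and the numbers are illustrative assumptions only; they stand in for the vastly larger systems trained on images, speech, or driving.

    # A minimal sketch of supervised deep learning (illustrative only).
    # A small layered network is shown labeled examples and adjusts its weights
    # to reduce its prediction error; the "images" here are random stand-ins
    # for the cat-versus-dog case.
    import torch
    import torch.nn as nn

    inputs = torch.randn(1000, 64)          # 1,000 fake "images" with 64 features each
    labels = torch.randint(0, 2, (1000,))   # label 0 = "cat", 1 = "dog" (pretend)

    # The layers are the only structure we supply; the patterns that separate
    # the classes are worked out during training.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(20):
        optimizer.zero_grad()
        predictions = model(inputs)           # forward pass: the network's guesses
        loss = loss_fn(predictions, labels)   # how wrong were the guesses?
        loss.backward()                       # compute gradients
        optimizer.step()                      # nudge the weights to reduce the error

The point is not the particular layers but the division of labor: humans supply examples and a scoring rule, and the rules connecting input to output are left for the machine to find.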

A Tesla’s software is in a process of constant improvement. Fueled by data from millions of miles of other Teslas’ experiences in a process called “fleet learning,” the brains of these cars are being regularly upgraded. Tesla’s chief executive, Elon Musk, was so optimistic about the speed of this process that when he produced a new generation of the Tesla Model S in October 2016, he described it as having “full self-driving hardware.” All that would be required was for the car’s brain to catch up with its body, and for lawmakers to get out of the way, allowing hands-free driving on US roads.
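Tesla’s pipeline is proprietary, so the loop below is only an assumption about the general shape of a centralized fleet-learning cycle, with hypothetical names: cars report what they experienced, a shared model is retrained on the pooled data, and the result is pushed back out as an over-the-air update.

    # Hypothetical sketch of a fleet-learning cycle (not Tesla's actual system).
    from dataclasses import dataclass

    @dataclass
    class DriveLog:
        car_id: str
        miles: float
        takeovers: int   # times the human driver had to intervene

    @dataclass
    class FleetModel:
        version: int = 1
        training_miles: float = 0.0

        def retrain(self, logs: list[DriveLog]) -> None:
            # Stand-in for retraining on the pooled experience of the whole fleet.
            self.training_miles += sum(log.miles for log in logs)
            self.version += 1

    def push_over_the_air(model: FleetModel, fleet: list[str]) -> None:
        for car_id in fleet:
            print(f"Updating {car_id} to model v{model.version}")

    # One cycle: collect logs from the fleet, retrain, redeploy.
    logs = [DriveLog("car-001", 120.0, 1), DriveLog("car-002", 310.5, 0)]
    model = FleetModel()
    model.retrain(logs)
    push_over_the_air(model, [log.car_id for log in logs])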

The Tesla blog post that brought the May 2016 crash to light referred to the car’s software as being “in a public beta phase.” This was a reminder to Tesla owners that their cars were still not self-driving cars. The software that was driving their cars was not yet artificially intelligent. Its algorithms were not the driving equivalent of AlphaGo. They were, in the words of Toby Walsh, a leading researcher in artificial intelligence, “not very smart.” As the NTSB found, not only was the machine not smart enough to distinguish between a truck and the sky, it was also not smart enough to explain itself.

Elon Musk is relaxed about his car’s brain being a black box. Responding by email to one business journalist’s critical investigation, Musk wrote: “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”
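Taking the quote at face value, the arithmetic behind it can be laid out in a few lines. The figures below are the ones Musk himself cites; the prevention rate is the assumption his claim implies, not an established estimate.

    # The arithmetic implicit in Musk's claim, using only the figures he cites.
    annual_road_deaths_worldwide = 1_000_000   # "over 1M auto deaths per year worldwide"
    claimed_lives_saved = 500_000              # "approximately half a million people"

    implied_prevention_rate = claimed_lives_saved / annual_road_deaths_worldwide
    print(f"Implied share of road deaths Autopilot would prevent: {implied_prevention_rate:.0%}")
    # The claim thus assumes a universally available Autopilot would prevent roughly
    # half of all road deaths, an assumption the averages themselves cannot justify.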

Similarly, when Consumer Reports called for a moratorium on Autopilot, Tesla replied, “While we appreciate well-meaning advice from any individual or group, we make our decisions on the basis of real-world data, not speculation by media.”

For Tesla, “doing the math” means that if self-driving cars end up safer on average than other cars, then citizens have no reason to worry. This relaxed view of disruption has a patina of rationality, but ignores Brian Wynne’s insight that technological change is always a real-time social experiment. Musk’s blithe arithmetic optimism fails to take account of a range of legitimate public concerns about technology.

First, the transition will not be smooth. It is not merely a matter of replacing a human driver with a computer. The futures on offer would involve changing the world as well as the car. The transformations will be unpredictable and intrinsically political.

Second, some risks are qualitatively different from others. When an airplane crashes, we don’t just shrug and say, “Oh well, they’re safer than cars.” We rely on accident investigators to dig out the flight data, work out what went wrong and why, and take steps to prevent it happening again. If passenger jets were anywhere near as deadly as passenger cars, there would be no commercial airline industry. Citizens are legitimately concerned about the risks of complex, centralized technological systems in which they must place their trust. (And not nearly concerned enough about the risks from cars, it has to be said.)

Third, a technology’s effects are not just related to the lives it takes or the lives it saves. Technologies distribute risks and benefits unevenly. They create winners and losers. Autopilot will never become universally available. All the current signs suggest that self-driving car technology is set to benefit the same people who have benefited most from past technological change—people who are already well-off. Traffic, we might think, is the archetypal example of us all being in it together. But poor people typically spend more time commuting in traffic (and often in older, less fuel-efficient cars) than do the well-off who can afford to live closer to their workplaces. Well-designed transport systems can enable social as well as physical mobility to at least partly redress such inequities. Bad ones can be bad for commuters, bad for the environment, and especially bad for those who are already economically marginalized.

The efficiencies of algorithms should not be used as an excuse to disrupt time-honored principles of governance such as the need to explain decisions and hold individuals responsible. Concerns about algorithmic accountability have grown in volume as it has become clear that some advanced decision-making software encodes implicit biases and that, when questioned, its creators have been unable or unwilling to say where those biases came from. ProPublica’s investigation into the use of a risk-assessment algorithm in criminal sentencing revealed an encoded bias against black people. The issue was not just the bias, but also the inscrutability of the algorithm and its owner, a company called Northpointe. With deep learning, these problems are magnified. When Google’s algorithms began mislabeling images, the company’s own engineers could not work out the source of the problem. One Google engineer took to Twitter to explain to an African-American man whose photo of himself and his friend had been tagged “gorillas” that “machine learning is hard.”

The inscrutability of machine learning, like technological inequality, is not inevitable. Controversies such as these are starting to convince machine learning companies that some form of transparency might be important. Toyota is currently working on an algorithmic transparency project called “the car can explain,” but such activities are only recently starting to move in from the fringes. Redressing the balance requires the engagement of governments and civil society as well as scientists. In some places, the lawyers have stolen a march on the innovators. The European General Data Protection Regulation, which comes into force in May 2018, demands what some observers have called a “right to explanation” from automated systems. In the 1990s, the European Union took a similar approach to the regulation of agricultural biotechnology, scrutinizing the processes of genetic modification. The difference of opinion with the United States, which looked only at the products of innovation—the seeds themselves and the traits they exhibited in plants—resulted in a high-profile dispute at the World Trade Organization.

Understanding processes of algorithmic decision-making is vital not just to govern them democratically, but to ensure that they will be able to deal with unfamiliar inputs. When deep learning systems work as designed, they may be exceptionally powerful. When they fail, we may not know why until it is too late. As well as the capabilities of artificial intelligence systems, we must consider their reliability and transparency.

If we expect too much of machine learning for self-driving cars, we will lose sight of everything else that is needed for well-functioning transport systems. The risk is that today’s algorithms become tomorrow’s rules of the road. When Sebastian Thrun ran Google’s self-driving car research, he argued that “the data can make better rules” for driving. As cars start to be tested, we can already see their handlers attempting to write their own rules. In the United States, the federal government has exempted thousands of vehicles from existing safety laws and made few demands in return. Behind the technological dazzle, there is little appreciation of the public cost of upgrading infrastructure to suit the needs of self-driving cars. We can get a sense of how the politics might play out. At the Los Angeles Auto Show in 2015, Volvo executive Lex Kerssemakers took the city’s mayor, Eric Garcetti, on a test drive in a prototype self-driving Volvo XC90. When the car lost its way, Kerssemakers said, “It can’t find the lane markings! … You need to paint the bloody roads here!” He deftly off-loaded responsibility for the failure of his technology onto the public sector. The comment was lighthearted, but the implications for infrastructure will be serious. Our roads have been designed with human perception in mind. When they get rebuilt, at substantial public cost, the pressure will be to do so in ways that suit self-driving cars, and thus benefit those who can afford them. If built without attention to winners and losers, smart infrastructure could easily end up further exacerbating economic and social inequities.

Self-driving cars will change the world. But that doesn’t mean much. The ways in which self-driving cars will change the world are profoundly uncertain. The range of possible sociotechnical futures is vast. All we can say with certainty is that the development of the technology will not be as flawless or as frictionless as the technology’s cheerleaders imagine. A future in which all cars are computer controlled is relatively easy to imagine. The transitions required to get there could be horrendously complex. The current story of self-driving car innovation is that this complexity can be engineered into the system: machine learning will be able to handle any eventuality. Engineers talk about “edge cases,” in which unusual circumstances push a system outside its design parameters. For self-driving cars it is easy to imagine such situations: a confused driver going the wrong way on a freeway, a landslide, a street entertainer, an attempted carjacking. Factoring such things into a system would require extensive training and testing. It would also mean adding sensors, processing power, and cost to the car itself. The temptations to remove such complexities from the system—for example, by forcing pedestrians away from roads or giving self-driving cars their own lanes—could well prove irresistible. The segregation of different forms of traffic may be efficient, but it is controversial. In Europe, for example, the politics of streets are played out in cities every day. City planners everywhere should not let technologies force their hand.

If handled with care, self-driving cars could save thousands of lives, improve disabled people’s access to transport, and dramatically improve lifestyles and landscapes. If they are developed and governed thoughtlessly, the technology could lead to increases in sprawl, congestion, and the withering of mass transit. At the moment, the story is being led by the United States. It is a story that prioritizes freedom—not just citizens’ freedom to move, but also companies’ freedom from regulation. The story of the ideal “autonomous vehicle” is not just about the capabilities of the robot. It is also about unfettered innovation. It is a story in which new technologies come to the rescue, solving a problem of safety that policy-makers have for decades been unwilling to prioritize. This story will, if allowed to continue, exacerbate many of the inequalities created by our dependence on conventional cars. If we are to realize the potential for self-driving car technology, this story needs to change.

The race for self-driving car innovation is currently causing a privatization of learning. The focus is on proprietary artificial intelligence, fueled by proprietary data. The competition this creates leads to fast innovation, but speed can be bad if pointed in the wrong direction or if there are unseen dangers in the road ahead. If we want innovation that benefits citizens rather than just carmakers or machine-learning companies, we urgently need to recognize that the governance of self-driving cars is a problem of democratic decision-making, not technological determinism. Alongside machine learning, we must create mechanisms for social learning.

The first target should be data-sharing. The Tesla crash revealed an immediate need. In this case, Tesla cooperated with the NTSB to extract the data required to work out what went wrong. We should not have to rely on companies’ goodwill. US regulators have for years tried without success to mandate event data recorders in cars that would, like airplane “black boxes,” provide public data to narrate the last moments before a crash. The arrival of automated decision-making in driving makes this more urgent. Marina Jirotka, a social scientist, and Alan Winfield, a roboticist, recently argued that we need to enforce data sharing in robot systems so that people beyond just roboticists can learn from accidents. The challenge is not just to relax companies’ grip on data, but also to improve the accountability of artificial intelligence.
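What such shared data should contain remains an open question. The record below is a hypothetical sketch, in Python, of the kinds of fields an aviation-style recorder for automated driving might carry; the field names are illustrative assumptions, not a published standard or Jirotka and Winfield’s specification.

    # Hypothetical sketch of a shareable crash-data record for automated vehicles.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AutomationEvent:
        timestamp: datetime
        mode: str                      # e.g., "manual", "driver-assist", "automated"
        speed_mph: float
        steering_angle_deg: float
        hands_on_wheel: bool
        obstacles_detected: list[str]  # what the perception system reported
        warnings_issued: list[str]     # alerts shown or sounded to the driver

    @dataclass
    class CrashRecord:
        vehicle_id: str                # pseudonymized before public sharing
        software_version: str
        events: list[AutomationEvent]  # e.g., the final seconds before impact

A common format of this kind would let investigators, and researchers beyond the manufacturer, replay what the car sensed, what it decided, and what it told its driver.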

In September 2016, the National Highway Traffic Safety Administration, at the request of the Obama administration, issued a call for data-sharing, which it justified using the language of “group learning.” The regulator also suggested that companies should collect and analyze data on “near misses and edge cases,” join an “early warning reporting program,” and find ways for their cars to communicate with one another. The NTSB concluded its investigation into the Joshua Brown wreck with a similar recommendation: “We don’t think each manufacturer of these vehicles need to learn the same lessons independently. We think by sharing that data, better learning and less errors along the way will happen.” Data-sharing is not just important when machines go wrong. If self-driving pioneers are prioritizing machine learning, then we should ask why they can’t learn from one another as well as from their own data sources.

Regulators are right to challenge the story of heroic independence that comes from such a heavy emphasis on artificial intelligence. Getting innovators to work together makes an inclusive debate on standards all the more urgent. When pushed, self-driving car engineers admit that, for their cars to work, the vehicles cannot be completely autonomous robots. They must be digitally connected to one another and to their surroundings. We must start to work out the real costs of doing this. Smart cars would require smart infrastructure, which would be expensive. It would also mean that the benefits of self-driving cars would be felt by some people far earlier than others. There is fevered discussion of when self-driving cars will be with us. The question is not when, but where and for whom. Cars will be “geofenced”—prevented from operating outside particular places and particular conditions. The dream of complete automotive autonomy and freedom will likely remain a dream.

Connectivity gets less attention than autonomy, but its challenges are just as great. Connected cars bring new risks: cyberattacks, data breaches, and system failures. Ensuring effective, safe transport across entire cities and countries demands early standards-setting. This process would be an ongoing conversation rather than a once-and-for-all settlement. Technologies for self-driving are fluid, and the future of transport is profoundly unpredictable. Governance must therefore adapt. However, the first step, which requires real political leadership, is for governments to reassert their role in shaping the future of transport. Two philosophers at Carnegie Mellon University who study artificial intelligence ethics, David Danks and Alex John London, recommend a regulatory mechanism analogous to the Food and Drug Administration. New technologies would be systematically tested before release and continually monitored once they are out in the wild. In deciding whether self-driving cars were safe, it would also be necessary to ask, Safe enough for what? Addressing such questions would in turn require democratic discussion of the purposes and benefits of the technology.

Governments in the United States and elsewhere have held back from proactive regulation of self-driving cars. Demanding regulatory approval before these technologies hit the market would be a big shift. It would also force governments to rediscover skills that have been allowed to atrophy, such as those of technology assessment. If these technologies are as new and exciting as their proponents say they are, then we should ask what new rules are needed to ensure that they are safe, broadly accessible, and free from problematic unintended consequences. If the public does not have confidence in the future benefits of self-driving cars, the next Autopilot crash may cause far more damage and controversy, jeopardizing the future of the technology.

Indeed, as this article goes to press, the next accident has already occurred: in Tempe, Arizona, a self-driving Uber test vehicle struck and killed a pedestrian. Details are unclear. The NTSB has begun an investigation. Governance by accident continues.

Vol. XXXIV, No. 3, Spring 2018