Forum – Summer 2018

There’s a strange pattern to popular handwringing about the skills gap. Whether one reviews pre-Great Recession headlines from 2008, when unemployment hovered around 4.5%, or those printed at the height of joblessness in 2010, the refrain is the same: workers’ lack of proper skills left them in the lurch. Today, big business sings the same song, calling on workers to “skill up!” before the robots overtake them.

Thankfully, John Alic’s scholarship has keenly kept track of this pattern, rigorously linking it to something more structural. There’s no empirical evidence that “skill”—distinct from social status and the economic means to access higher education—works as a silver bullet. That means it’s too simple to pin low wages and lack of advancement on individual workers making bad educational choices. Alic’s early work was some of the first scholarship to trace the disconnect between skill attainment and economic security during the United States’ transition from a stable manufacturing economy to the staccato pace of service-driven economies. It illustrated that specific skills don’t lead to economic security. Sound economic policies written with workers’ best interests and rights to organize in mind do. Today, three-quarters of the nation’s workforce is employed in low-paying, dead-end service jobs, not because workers took the wrong classes at school but because labor laws do nothing to push businesses to fully value everyday workers’ economic contributions. In other words, as Alic presciently noted nearly two decades ago, if service work leads to low pay and little advancement for workers, the blame lies with an absence of policy interventions that hold businesses accountable, not with an individual skill mismatch.

Alic’s “What We Mean When We Talk about Workforce Skills” (Issues, Spring 2018) could not be timelier, as the conversation around automation and the future of work heats up for the 2018 midterm elections. It is not uncommon for technologists, policy-makers, and economists across the political spectrum to trot out the old saw that workers’ lack of high-tech skills is the reason US wages and prosperity have stalled. Alic offers a sweeping and compelling argument challenging such unexamined conventional wisdom. As he argues, skills “overlap, combine, and melt together invisibly, defying measurement,” making explicit adjustment to them an ineffective lever for economic change. Just as important, the focus on skills education “deflects attention from broader and deeper institutional problems in the labor market.” For Alic, placing debt and speculation about the “right skills” on the shoulders of young people (or parents helping them with schooling) harms individuals and businesses alike. Human capital, as Alic poignantly puts it, is “a national resource.” One of the most important shifts we must make, he argues, is recognizing this fact. That is the logical conclusion of his case for the immeasurable, unpredictable nature and value of skill as society at large stands at the precipice of the future of work.

Mary L. Gray
Senior Researcher, Microsoft Research
Professor, School of Informatics, Computing, and Engineering, Indiana University
Fellow, Harvard University Berkman Klein Center for Internet & Society


In “Reauthorizing the National Flood Insurance Program” (Issues, Winter 2018), Howard C. Kunreuther raises several points that are germane to the Insurance Information Institute’s mandate to “improve public understanding of insurance—what it does and how it works.” One point in particular goes to the heart of why my organization believes that the pending renewal of the National Flood Insurance Program (NFIP) presents an ideal opportunity to modernize the program. After 50 years of existence, the NFIP has become financially unsustainable (per a report from the US Government Accountability Office). Yet even with the NFIP as a backstop against a total loss, an average of only 14% of the 3.3 million households in counties declared federal disaster areas after Hurricane Irma ravaged Florida in September 2017 had NFIP coverage. Equally frustrating are the reasons why many people decline to purchase coverage, including a lack of awareness that flood-caused damage is not covered under their homeowners, renters, or business policies; an underestimation of flooding risk; and the cost of coverage.

Proposals in Kunreuther’s article include behavioral risk audits and other tools to better understand how homeowners, renters, and business owners view and rationalize risk; a call for Congress to provide increased investment in producing more accurate flood maps; and premising NFIP reauthorization on adherence to “the guiding principles for insurance.” These also happen to be principles long embraced as core values of private insurance carriers, which are stepping up to expand insurance options available to residential and business customers. Recently, the NFIP and the Federal Emergency Management Agency (FEMA) have publicly embraced this sort of competition as a way of improving flood insurance for all. Indeed, the NFIP’s prospective expanded investment in reinsurance and the purchase of its first-ever catastrophe bond to contend with the realities of flood risk are positive steps toward finding private-sector solutions to existing problems.

We applaud Kunreuther’s call to make the NFIP “more transparent, more cost-effective, more equitable, and more appealing to property owners.” Moreover, we share his conclusion that a successful, more evolved NFIP would benefit greatly by building on the “support and interest of real estate and insurance agents, banks and financial institutions, builders, developers, contractors, and local officials concerned with the safety of their communities.”

Sean M. Kevelighan
Chief Executive Officer
Insurance Information Institute


Howard Kunreuther identifies a central challenge when it comes to reducing flood damage: motivating people to take action ahead of time. The systematic biases that he articulates (myopia, amnesia, optimism, inertia, simplification, and herding) apply not only to purchasing insurance but also to making life-safety and property-protection decisions before, during, and after disasters. These biases result in a serious protection gap that stymies efforts to improve the National Flood Insurance Program and hampers mitigation efforts to reduce damage from wind, hail, wildfire, winter storms, and other severe weather.

Kunreuther writes, “A large body of cognitive psychology and behavioral decision research over the past 50 years has revealed that decision-makers (whether homeowners, developers, or community officials) are often guided not by cost-benefit calculations but by emotional reactions and simple rules of thumb that have been acquired by personal experience.”

Intuition will get you only so far, and it’s not far enough to push back against the wiles of Mother Nature. To meet Kunreuther’s challenge we need well-founded, realistic, scientific testing and research on building systems subjected to real-world wind, hail, wind-driven rain, and wildfire conditions.

The Insurance Institute for Business & Home Safety (IBHS), which I lead, not only takes on this challenge of sound building science but also educates consumers on how to put that insight into action. Groundbreaking building science findings need to be translated into improved building codes and standards, as well as superior design, new construction, and retrofitting standards that exceed building code requirements. Yet in translating our work into bricks-and-mortar improvements, we are confronted with the same risk biases that undermine sound NFIP purchase decisions.

The cognitive psychology and behavioral decision research cited by Kunreuther has opened our eyes to the impediments to rational risk management behavior. A discipline once confined to classrooms and conferences now fills Kindles and personal studies in books such as Misbehaving; Nudge; and (my favorite) Thinking, Fast and Slow. That’s progress, but the next step is to translate these ideas into messages and strategies that change the way people think about risk and act to reduce it. In this regard, we support Kunreuther’s idea of conducting behavioral risk audits to promote a better understanding of the systemic biases that block appropriate risk decision-making and set the stage for development of strategies that work under real-world conditions.

While at the Federal Emergency Management Agency, where I worked prior to joining IBHS, we launched two ambitious moonshots: the first, to double the number of policyholders with flood insurance, and the second, to quadruple mitigation investment nationwide. Achieving these goals will require dedicated efforts by government, the private sector, and the nonprofit community. To best cope with rain, flood, hail, wind, wildfire, earthquakes—choose your peril—we need a whole set of players to put in motion actionable strategies that, as Kunreuther suggests, “work with rather than against people’s risk perceptions and natural decision biases.”

Roy Wright
President & Chief Executive Officer
Insurance Institute for Business & Home Safety

He formerly led the Federal Emergency Management Agency’s mitigation and resilience work and served as chief executive of the National Flood Insurance Program.


Debates about autonomous vehicles (AVs) can sometimes become quite polarized, with neither side willing to acknowledge that there are both risks and opportunities. In “We Need New Rules for Self-Driving Cars” (Issues, Spring 2018), Jack Stilgoe provides a much-needed, balanced overview of AVs, calling for increased public debate and governance of the development, not just deployment, of AV technology. We agree with many of his recommendations, particularly about the importance of data sharing with the public and other parts of the industry. We also share his concern that AVs may exacerbate social inequalities, or shape the public environment in ways that disproportionately harm the worse off. At the same time, we raise two issues that are deeply important to our understanding and policy-making, but that receive only passing mention in his article.

First, “reliability” and “safety” are oft-used terms in these discussions, but almost always without reference to the contexts in which AVs are expected to be reliable or safe. Stilgoe notes challenges presented by “edge cases,” but the importance of deployment context is not limited to unusual or anomalous cases. Rather, those contexts shape our whole understanding of AV safety and reliability. In particular, proposals for approval standards based on criteria such as “accident-free miles driven” or “number of human interventions per mile” are misguided. Such criteria are appropriate only if test conditions approximately mirror future deployment contexts, but many technologies used in AV development, such as deep networks, make it difficult to determine whether test and deployment conditions are relevantly similar. We have thus proposed that the approval process for AVs should include disclosure of the models developers use to link AV testing and deployment scenarios, including the validation methodology for those models, along with credible evidence that the developer’s test scenarios include a sufficient range of likely deployment contexts and that the AV performs safely and reliably in those contexts.
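To see why such aggregate criteria are so sensitive to context, consider a minimal numeric sketch; the driving-condition categories and incident rates below are invented purely for illustration and are not drawn from any real testing program.

```python
# Hypothetical illustration: an aggregate safety metric such as "incidents per
# million miles" depends on the mix of driving conditions under which it is
# measured. All condition names and rates here are invented for this sketch.

test_mix = {"clear_highway": 0.70, "urban_daytime": 0.25, "night_rain": 0.05}
deployment_mix = {"clear_highway": 0.30, "urban_daytime": 0.45, "night_rain": 0.25}

# Assumed incidents per million miles in each condition (identical vehicle behavior).
incident_rate = {"clear_highway": 0.2, "urban_daytime": 1.5, "night_rain": 12.0}

def aggregate_rate(mix):
    """Expected incidents per million miles implied by a given mix of conditions."""
    return sum(share * incident_rate[condition] for condition, share in mix.items())

print(f"Test conditions:       {aggregate_rate(test_mix):.2f} incidents per million miles")
print(f"Deployment conditions: {aggregate_rate(deployment_mix):.2f} incidents per million miles")
# With identical per-condition performance, the test figure (about 1.1) understates
# the deployment figure (about 3.7) because hard conditions are underrepresented
# in testing; a headline "accident-free miles driven" number inherits the same
# sensitivity to how conditions are mixed.
```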

Second, developing AVs requires many technological decisions for which different options are acceptable. As one simple example, an AV can be designed to always drive as safely as possible, or to always follow the law, but not to always maximize both values. Driving as safely as possible might involve breaking the law, as when other vehicles are themselves breaking the law (say, by speeding excessively). Moreover, the technology does not dictate which value to prioritize; the developer must decide. Importantly, there is no unique “right” answer: one could defensibly prioritize either value, or even try to balance them in some way. Nonetheless, some choice is required. Technology is not value-neutral, but rather encodes developer decisions about, for example, what counts as “success” at driving. Public discussion should thus go beyond the issues mentioned by Stilgoe, and further include debate about the values that we want AVs to embody and what AV developers should be required to tell us about the values in their particular technologies.

David Danks
L.L. Thurstone Professor of Philosophy and Psychology

Alex John London
Clara L. West Professor of Ethics and Philosophy
Carnegie Mellon University


In a rapidly changing world, communities must deal with a wide range of complex issues. These include adapting to a shifting economy and providing a safe and healthy environment for residents. In rural areas, addressing such issues is made more difficult because communities often lack the capacity to hire professional staff and because elected officials are typically stretched thin. In addition to their work with the community, leaders often have full-time employment, a family, and other responsibilities.

At the same time, many new data and other resources are available to assist communities. Data can help them better understand their current circumstances and prepare for the future. The problem is finding the time and expertise to effectively collect and analyze these data. I am convinced that the Community Learning through Data Driven Discovery (CLD3) model that Sallie Keller, Sarah Nusser, Stephanie Shipp, and Catherine E. Woteki describe in “Helping Communities Use Data to Make Better Decisions” (Issues, Spring 2018) represents an effective way to provide much-needed help to rural communities. The brilliance of the model is that it uses existing entities (Cooperative Extension and the Regional Rural Development Centers) to provide expertise that rural communities often lack. Cooperative Extension has representation in virtually every county in the country. The Regional Rural Development Centers have the capacity to make the necessary connections with Cooperative Extension in all 50 states. As director of one of the Regional Rural Development Centers, I am excited about this program and the potential benefits it provides to rural areas.

Don Albrecht
Executive Director, Western Rural Development Center
Utah State University


In “Reconceptualizing Infrastructure in the Anthropocene” (Issues, Spring 2018), Brad Allenby and Mikhail Chester offer a challenge for large-scale infrastructure designers and managers, as well as for policy-makers. It is certainly true that major infrastructure projects can have significant effects on both human and natural systems. As Allenby and Chester emphasize, we need new tools, education approaches, and management processes to deal with them.

We also need political leadership and extensive institutional partnerships to be effective in tackling major challenges and projects in the Anthropocene era. Advisory and research entities, such as the National Research Council (NRC) of the National Academies of Sciences, Engineering, and Medicine, can also be critical for success. Let me give a few examples of such leadership and partnerships.

First, building the US Interstate Highway System over the past 50 years has provided extraordinary mobility benefits for the nation. The Federal-Aid Highway Act of 1956 provided a vision and a funding mechanism with the Highway Trust Fund. Implementation of the system relied on a partnership among federal agencies, state departments of transportation, and the private contractors who designed and built the system. However, it was only after a period of decades that the system began to deal with the environmental and social impacts of roadbuilding, especially in sensitive natural environments and urban neighborhoods. The American Association of State Highway and Transportation Officials and the NRC’s Transportation Research Board (TRB) became other institutional partners. TRB played a major role in convening forums to incorporate more holistic approaches to roadway development and operation.

Second, the South Florida Everglades Restoration project is an ambitious effort to restore natural systems and alter the overall flows and use of water in an area of 4,000 square miles (10,000 square kilometers). Political leadership came when Congress passed the Water Resources Development Act of 2000. The effort has required active partnership among several federal agencies (Army Corps of Engineers, Fish & Wildlife Service, and National Park Service), Florida state agencies, local agencies, and various private groups. The project is still coping with issues such as legacy pollution (especially nitrates) and climate change (including sea level rise). Again, the NRC is a partner with an interdisciplinary advisory group producing a biennial report on progress and issues.

Finally, the National Academy of Engineering identified the electricity grid (as well as the Interstate Highway System) as ranking among the greatest engineering accomplishments of the twentieth century. Now, the rebuilding of the national electricity grid is under way with goals of efficiency, resiliency, and sustainability, including the incorporation of much more extensive renewable energy and new natural gas generators. Unlike the original grid development that relied on vertically integrated utilities (and utility regulators), this new rebuilding relies on market forces to involve private and public generators (including private individuals with solar panels), transmission line operators, and distribution firms. Though markets play a major role, institutional partnerships among grid operators, state regulators, and federal agencies are essential. Again, study forums such as the Electric Power Research Institute and the NRC have been instrumental in moving the effort forward.

Some Anthropocene challenges have not had effective political leadership or formed these institutional partnerships to accomplish infrastructure change. Allenby and Chester provide a good example with the “dead zone” in the Gulf of Mexico below the Mississippi River. In addition to developing new education, tools, and design approaches, we need both political leadership and new partnerships to make progress coping with large-scale problems in the Anthropocene era.

Chris Hendrickson
Hamerschlag University Professor Emeritus, Civil and Environmental Engineering
Carnegie Mellon University


In “What is ‘Fair’? Algorithms in Criminal Justice” (Issues, Spring 2018), Stephanie Wykstra does a masterful job summarizing challenges faced by algorithmic risk assessments used to inform criminal justice decisions. These are routine decisions made by police officers, magistrates, judges, parole boards, probation officers, and others. There is surely room for improvement. The computer algorithms increasingly being deployed can foster better decisions by improving the accuracy, fairness, and transparency of risk forecasts.

But there are inevitable trade-offs that can be especially vexing for the very groups with legitimate fairness concerns. For example, the leading cause of death among young, male African Americans is homicide. The most likely perpetrators of those homicides are other young, male African Americans. How do we construct algorithms that balance fairness with safety, especially for those most likely to be affected by both?

An initial step is to unpack different sources of unfairness. First, there are the data used to train and evaluate an algorithm. In practice, such data include an individual’s past contacts with the criminal justice system. If those contacts have been significantly shaped by illegitimate factors such as race, the potential for unfairness becomes a feature of the training data. The issues can be subtle. For example, a dominant driver of police activity is citizen calls for service (911 calls). There are typically many more calls for service from high-crime neighborhoods. With more police being called to certain neighborhoods, there will likely be more police-citizen encounters that, in turn, can lead to more arrests. People in those neighborhoods can have longer criminal records as a result. Are those longer criminal records a source of “bias” when higher-density policing is substantially a response to citizen requests for service?

Second is the algorithm itself. I have never known of an algorithm that had unfairness built into its code. The challenge is potential unfairness introduced by the data on which an algorithm trains. Many researchers are working on ways to reduce such unfairness, but given the data, daunting trade-offs follow.

Third are the decisions informed by the algorithmic output. Such output can be misunderstood, misused, or discarded. Unfairness can result not from the algorithm, but from how its output is employed.

Finally, there are actions taken once decisions are made. These can alter fundamentally how a risk assessment is received. For example, a risk assessment used to divert offenders to drug treatment programs rather than prison, especially if prison populations are significantly reduced, is likely to generate fewer concerns than the same algorithm used to determine the length of a prison sentence. Optics matter.

Blaming algorithms for risk assessment unfairness can be misguided. I would start upstream with the training data. There needs to be much better understanding of what is being measured, how the measurement is actually undertaken, and how the data are prepared for analysis. Bias and error can easily be introduced at each step. With better understanding, it should be possible to improve data quality in ways that make algorithmic risk assessments more accurate, fair, and transparent.

Richard Berk
Professor of Criminology and Statistics
University of Pennsylvania


As decision-makers increasingly turn to algorithms to help mete out everything from Facebook ads to public resources, those of us judged and sorted by these tools have little to no recourse to argue against an algorithm’s decree against us, or about us. We might not care that much if an algorithm sells us particular products online (though then again, we might, especially if we are Latanya Sweeney, who found that black-sounding names were up to 25% more likely to provoke arrest-related ad results on a Google search). But when algorithms deny us Medicaid-funded home care or threaten to take away our kids through child protective services, the decisions informed by these tools are dire indeed—and could be a matter of life and death.

Here in Philadelphia, a large group of criminal justice reformers is working to drive down the population of our jails—all clustered on State Road in Philly’s far Northeast. Thousands of people are held on bail they can’t afford or on probation and parole violations tied to previous convictions. Reformers have already been able to decrease the population by more than 30% in two years, and the city’s new district attorney has told his 600 assistant DAs to push to release on their own recognizance people accused of 26 different low-level charges. But in an effort to go further, decision-makers are working to build a risk-assessment algorithm that will sort accused people into categories: those deemed to be at risk of not showing up to court, those at risk of arrest if released pretrial, and those at risk of arrest on a violent charge.

In her compelling and helpful overview of the most urgent debates in risk assessment, Stephanie Wykstra lifts up the importance of separating the scientific questions associated with risk assessment from the moral ones. Who truly decides what “risk” means in our communities? Wykstra includes rich commentary from leaders in diverse fields, including data scientists, philosophers, scholars, economists, and national advocates. But too often, the risk of not appearing in court is conflated with the risk of arrest. We need to understand that communities fighting to unwind centuries of racism in practice define what level of risk their communities might tolerate very differently from how others active in criminal justice reform do.

I would posit that no government should put a risk-assessment tool between a community and its freedom without giving that community extraordinary transparency and power over that tool and the decisions it makes and informs. Robust and longstanding coalitions of community members have prevented Philly legislators from buying land for a new jail, have pushed the city to commit to closing its oldest and most notorious jail, and have since turned their sights on ending the use of money bail. That same community deserves actual power over risk assessment: to ensure that any pretrial tool is independently reviewed and audited, that its data are transparent, that community members help decide what counts as risky, that tools are audited for how actual criminal justice decision-makers use them, and that they are calibrated to antiracist aims in this age of mass incarceration in the United States.

Our communities have understandable fear of risk assessment as an “objective” or “evidence-based intervention” into criminal justice decision-making. Risk-assessment tools have mixed reviews in practice—and have not always been focused on reducing jail populations. Because black and brown Philadelphians are so brutally overpoliced, any algorithm that weights convictions, charges, and other crime data heavily will reproduce the ugly bias in the city and the society. But we also see these tools spreading like wildfire. Our position is that we must end money bail and massively reduce pretrial incarceration, without the use of risk-assessment algorithms. But if Philly builds one as a part of its move to decrease the prison population in the nation’s poorest big city, we have the right to know what these tools say about us—and to ensure that we have power over them for the long haul.

Hannah Sassaman
Policy Director
Media Mobilizing Project


Scholars, activists, and advocates have put an enormous amount of intellectual energy into discussing criminal justice risk-assessment algorithms. Stephanie Wykstra sifts through this literature to provide a lucid summary of some of its most important highlights. She shows that achieving algorithmic fairness is not easy. For one thing, racial and economic disparities are baked into both the inputs and the outputs of the algorithms. Also, fairness is a slippery term: satisfying one definition of algorithmic fairness inherently means violating another definition.
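That conflict is not merely rhetorical; it follows from simple arithmetic. A minimal sketch, using invented base rates and accuracy figures together with the standard confusion-matrix identity, shows that a tool with equal predictive value and equal sensitivity in two groups must produce unequal false positive rates whenever the groups’ base rates differ.

```python
# Hypothetical illustration of the fairness trade-off: if positive predictive
# value (PPV) and sensitivity (TPR) are equal across two groups whose base
# rates differ, the false positive rate cannot also be equal. The relation
# FPR = p/(1-p) * (1-PPV)/PPV * TPR follows from the confusion-matrix
# definitions; the base rates and accuracy figures below are invented.

def false_positive_rate(base_rate, ppv, tpr):
    """FPR implied by a group's base rate and the tool's PPV and sensitivity."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

ppv, tpr = 0.6, 0.7  # the same predictive properties applied to both groups
for group, base_rate in [("Group A", 0.5), ("Group B", 0.3)]:
    fpr = false_positive_rate(base_rate, ppv, tpr)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")
# Output: Group A about 46.7%, Group B about 20.0%. Equalizing predictive value
# across groups forces unequal error rates, and vice versa.
```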

The rich garden of observer commentary on criminal justice algorithms stands in contrast to the indifference, or even disregard, of those who are responsible for using them. As Michael I. Jordan put it in his recent essay on artificial intelligence, “the revolution hasn’t happened yet.” Angele Christin’s research shows significant discrepancies between what managers proclaim about algorithms and how they are actually used. She found that those using risk-assessment tools engage in “foot-dragging, gaming, and open critique” to minimize the impact of algorithms on their daily work. Brandon Garrett and John Monahan show broad skepticism toward risk assessment among Virginia judges. A significant minority even report being unfamiliar with the state’s tool, despite it having been in use for more than 15 years. My own research shows that a Kentucky law making pretrial risk assessment mandatory led to only a small and short-lived increase in release; judges overruled the recommendations of the risk assessments more often than not.

Algorithmic risk assessments are still just a tiny cog in a large, human system. Even if we were to design an exceptionally fair algorithm, it would remain nothing more than a tool in the hands of human decision-makers. It is those decision-makers’ beliefs and incentives, as well as the set of options available within the system, that determine its impact.

Much of the injustice in criminal justice stems from things that are big, basic, and blunt. Poor communication between police and prosecutors (and a rubber-stamp probable cause hearing) means that people can be held in jail for weeks or months before someone realizes that there is no case. Data records are atrociously maintained, so people are hounded for fines that they already paid or kept incarcerated for longer than their sentence. Public defenders are overworked, so defendants have access to only minutes of a lawyer’s time, if that. These are issues of funding and management. They are not sexy twenty-first century topics, but they are important, and they will be changed only when there is political will to do so.

Wykstra’s summary of the debate over fairness in criminal justice algorithms is intelligent and compassionate. I’m glad so many smart people are thinking about criminal justice issues. But as Angele Christin writes in Logic magazine, “Politics, not technology, is the force responsible for creating a highly punitive criminal justice system. And transforming that system is ultimately a political task, not a technical one.”

Megan Stevenson
Assistant Professor of Law
George Mason University
