The Thrill of Discovery

During my first year of college, my organic chemistry professor assigned us Anda Brivin’s book Gun Down the Young, a slim fictional account of professional academic life at an unnamed university. I suspect that she did so because she knew that many of her students were envisioning careers at top-tier research universities and she hoped to cure us of some of our more romantic fantasies of life as academics. Although the book did, indeed, provide some much-needed levity in an otherwise grueling course, Hope Jahren’s Lab Girl might serve future students as a better introduction to the real stories of the ivory tower.

Early in Lab Girl, Jahren recounts late winter evening visits to her father’s laboratory at a community college in rural Minnesota, where she “transformed from a girl into a scientist, just like Peter Parker becoming Spider-Man, only kind of backward.” The rest of the book serves as a reflection on that transformation, an exploration of the forces that drove—and hindered—Jahren’s journey from frosty Minnesota to California, Georgia, Maryland, and Hawaii.

This journey is not often recounted. Scientists from every discipline will recognize Jahren’s lament about the necessity of “distilling ten years of work by five people into six published pages, written in a language that very few people can read and that no one ever speaks.” There is no doubt that Jahren is very, very good at this distillation: among her honors are three Fulbright awards; Young Scientist awards from both the American Geophysical Union and Geological Society of America; and recognition by Popular Science and Time as an influential scientist. But as she tells the reader, “there’s still no journal where I can tell the story of how my science is done with both the heart and the hands.” And so she has given readers Lab Girl, and we are all luckier for it.

Heart drives Jahren’s narrative, an unbounded love of and curiosity about the natural world. She interweaves her story of becoming a geobiologist with stories of how tiny seeds struggle to become (or, more frequently, fail to become) seedlings, then saplings, then towering trees. Jahren uses these passages to inspire a sense of wonder, to illustrate that discovery and mischief are two sides of the same science coin, and to spur readers to ask questions and become scientists. And once we grow to love the plants that Jahren loves, she breaks our hearts, lamenting the reduction of plants to commodities and the rapid pace of deforestation (“One France after another, for decades, has been wiped from the globe”), and in the epilogue concluding that her “job is about making sure there will be some evidence that someone cared about the great tragedy that unfolded during our age.”

In this regard, Jahren is following in the footsteps of other scientists who have found popular success with their writing. Like biologist and author E. O. Wilson, she steps beyond the confines of journals and textbooks, drawing on her experience as a researcher and the work of colleagues to highlight the present environmental crisis. Jahren has a light touch when it comes to the hotly debated issue of where the lines between science, policy advice, and advocacy lie; whether and how scientists might cross those lines; and if by crossing those lines scientists risk their credibility. She focuses on what individuals might do, for example, to replace trees in their own communities, rather than suggesting broader policy change. By focusing on what each reader can do, she overcomes a common critique of scientific reports chronicling environmental problems: that after delivering gloomy results, they do nothing to suggest productive actions, especially responses at the household level.

More than an ode to the natural world, though, Lab Girl is a story of the heights and heartbreak of the practice of science, cataloging the workaday life of a scientist on the path from student to assistant professor to tenured faculty. Jahren’s story is full of reasons why she loves science: the satisfaction of attacking problems (“in science classes we did things instead of just sitting around talking about things”), the thrill of discovery (“the overwhelming sweetness of briefly holding a small secret the universe had earmarked just for me”), and the many ways in which her laboratory is her true home. Most of the chapters illustrate these reasons with anecdotes from Jahren’s research career—exploding glassware, police visits to impromptu fieldwork campsites, lost and ruined samples, experiments that failed to bear fruit, and late-night breakthroughs in isotope chemistry—to fight back against the “disrespectful amnesia” that removes the practice of science from most research reports.

These are the sort of tales that scientists share over pints of beer or plates of food. But as Jahren points out, that sharing is often limited to specific audiences (generally scientists of the same generation, race, and gender), and so she has often “stood alone [at a conference or seminar] … apparently radiating cooties and so excluded from the back-slapping stories of building mass spectrometers during the good old days.” Such storytelling serves a greater purpose than catching up with old friends; it is a lifeline for young scientists who otherwise might fall victim to imposter syndrome, believing that a failed experiment validates their internal doubts about their abilities as a scientist. These stories provide a context for failure by acknowledging that, as Jahren put it, “if there had been a way to get to success without traveling through disaster someone would have already done it and thus rendered the experiments unnecessary.” By excluding graduate students, female scientists, and other minority groups from storytelling, the larger community of scientists does a disservice to the future of the scientific workforce. A growing number of fieldwork blogs, discussions about imposter syndrome and failure at scientific conferences, and books like Lab Girl are helping make those stories more accessible.

Perhaps the least visible (to the public) but most pressing task of an academic scientist is the pursuit of funding. According to Jahren, “I need to do about four wonderful and previously impossible things every year until I fall into the grave in order for the university to break even on me.” And that’s before the cost of lab supplies, paper, and staff. Chasing federal and private grants, and the stagnant or decreasing amounts available from these sources, is a task that Jahren discusses at length. She walks readers through the intricacies of university overhead rates, the budgets of the National Science Foundation and the Department of Homeland Security, and the overabundance of PhD scientists compared with available money. For academic scientists who are all too familiar with the current low success rate of their grant proposals, this discussion will result in either a lot of head nodding or cries of agony from something akin to post-traumatic stress. For non-scientist readers, the discussion will provide a window into what scientists do after the rest of the world has gone to bed. The readers who stand to gain the most from these sections, however, are those who aspire to join the ranks of academia. Like the tales of success and failure in the lab, these stories of 3:00 a.m. grant writing, pinching pennies until the next payment arrives, and not being able to pay staff enough are tales that need to be told.

Jahren’s lab manager, Bill Hagopian, is a central character in Lab Girl. Their scientific partnership began when Jahren was an assistant instructor for a field course in which Hagopian was a student. “I didn’t so much meet Bill. It was more like I identified him,” she says of that fateful day. Their working relationship and friendship are explored from many angles, with Jahren as the one to “cook up a pipe dream” and Hagopian as the one in charge of prototyping and mechanics; Jahren as the lumper and Hagopian as the splitter when it comes to soil classification; Hagopian as the patient teacher long after Jahren has grown frustrated with a student who is progressing slowly. In the end, Lab Girl stands as a testament to the fruitfulness of that collaboration. Yet Jahren notes that the hiring and funding structures at universities that have granted her tenure and provided at least several months of her salary have denied Hagopian the same long-term job security, something she feels guilty about because she is the one who brought him into the lab in the first place.

Lab Girl arose from Jahren’s need to tell the secrets of science to someone, not as advice or caution, but as “one scientist to another.” She does not aim to synthesize all that she knows on one topic or another, or find deeper thematic threads. Instead, she conveys the way in which she “will never stop being ravenously hungry for science” regardless of the obstacles and bureaucracy she finds along the way. In pulling back the curtain on what lies at the heart of science, she challenges readers to find their own passions and to pursue them relentlessly.

Defending Expertise

Science studies scholar Harry Collins sets the stage for this short-form analysis by explaining why so many people have a sense of being experts by default. Over the past half-century, even as science has advanced in numerous ways, scientists have frequently oversold their abilities, made mistaken predictions, or disagreed among themselves with regard to such issues as nuclear power, food, nutrition, safety, medicine, and the economy. These areas of public concern are more immediately important to us than questions concerning the existence of the Higgs boson, whether there is water on Mars, or prospects for quantum computing. Why shouldn’t everyone feel at least as smart as the scientific experts who predicted nuclear energy would be too cheap to meter? Or that it was okay to feed cattle the remains of diseased sheep and then to serve the slaughtered beef to humans, more than 150 of whom eventually died of mad cow-related illness? It is not rocket science to doubt the expertise on display in these instances.

Against this sympathetically sketched background, Collins offers a carefully calibrated defense of scientific expertise. His response to his title question is that, in fact, we are not all scientific experts, although he wants to grant a legitimate role for citizen whistleblowers who can point out when science is being distorted.

Collins’s argument depends on defending science as an ideal, even in the presence of less-than-ideal practices, and on clarifying distinctions among different types of expertise. In the process he calls attention to academic shifts across three waves in science and technology studies (STS). First-wave STS, in the 1940s and 1950s, mostly assumed science was true knowledge and technology the foundation for social progress. Second-wave STS, beginning in the 1960s, challenged such assumptions in the name of more accurate descriptions and led to the science wars, an academic conflict between realists and postmodernists about the nature of scientific knowledge. Our current third-wave STS seeks “to describe science accurately while still admiring it and what it stands for … to treat science as special without telling fairy stories about it.” Third-wave STS is what Collins himself wants to exemplify in this book.

Collins’s defense of science begins by delimiting the scope of default expertise. Although everyone has acquired some level of what he calls ubiquitous expertise (e.g., in speaking a native language, in taking a bath), specialist expertise (e.g., of physicians, concert pianists, mathematicians) depends on dedicated, conscious acquisition of tacit knowledge and explicit procedures within a distinctive group. Specialist expertise, in turn, comes in several forms.

First, “contributory expertise” comes from being a member of some specialist group. Contributory experts can be researchers producing knowledge in a scientific discipline or practitioners such as AIDS activists sharing experiential knowledge of their illness.

Another type of specialist expertise—and one increasingly important in our techno-scientific society—is what Collins terms “interactional expertise.” This is “acquired by engaging in the spoken discourse of an expert community to the point of fluency, but without participating in the practical activities or deliberately contributing to those activities.” One can be an interactional expert without being contributory, but all contributory experts are also interactional experts. The introduction of interactional expertise broadens the boundaries of specialist expertise to include qualified persons outside the core set of scientists or researchers.

Collins’s typology extends further to what he calls “meta-expertise,” or the skillful knowledge for “choosing between experts and their expertises.” Even more than interactional expertise, meta-expertise is critical to public intelligence in a high-tech, advanced scientific world. It is precisely this type of expertise that Collins seeks to engender in his readers.

A false form of meta-expertise is readily found in what Collins calls citizen skeptics. Citizen skeptics of scientific expertise appeal to the exposed dirty inner workings of science without fully understanding how such inner workings are not really as dirty as they might appear to outsiders. Citizen skeptics manufacture counterfeit scientific controversies out of situations in which the science is settled, inappropriately appealing to findings or theories that have been rejected by a strong consensus in the scientific community. Examples include denying the existence of HIV, the safety of vaccines, and the reality of climate change. Because of the limits of interactional expertise alone, “citizens must be careful of being too skeptical when they discover … that scientists … argue with each other.”

Collins nevertheless grants that meta-expertise can sometimes justify citizen whistleblowing. “Non-specialists may [on occasion] have enough knowledge … to understand that the normal scientific process is being distorted.” It does not take super meta-expertise to recognize that the tobacco industry distorted the science with its funding of research to question any linkage between smoking and cancer.

The book is an effort to cultivate a level of meta-expertise that would alter what Collins sees as a popular contemporary belief that “scientists are just ordinary people with no special qualities.” Collins wants to re-elevate scientists and recognize their specialist expertise. Even if they are not always correct, they “share the scientific ethos which may be the most important contribution of science to society.” We may possess a measure of ubiquitous expertise in some practices, but “we are not all scientific experts now” because we do not belong to the scientific community and we do not make our judgments from the platform of the norms and aspirations that drive that community:

If we start to believe we are all scientific experts, society will change: it will be those with the power to enforce their ideas or those with the most media appeal who will make our truths, according to whatever set of interests they are pursuing. The zeitgeist has to change if we want to preserve society as we know it because we have to start raising the value of plain ordinary science in our minds.

Despite Collins’s sober and salutary arguments, however, it remains reasonable for a citizen of the techno-scientific world, even one with a measured trust in science, to maintain a level of skepticism. First, there is a question of how best to distinguish and relate scientific and technological or engineering expertise. Won’t there be important differences between the type of meta-expertise that can properly question claims about the Higgs boson or string theory, and that which can challenge designs for nuclear waste disposal or high-speed rail transport? The more immediate social consequences of engineered systems suggest that skepticism about this infrastructure in the form of meta-expertise can be simultaneously more popular, dangerous, and beneficial than, perhaps, in the realm of theoretical physics.

Second, Collins’s argument fails to give sufficient attention to new problems in the internal practice of scientific knowledge production. As science becomes more and more dependent on new forms of information—from computer processing to data mining—it is experiencing new challenges to its norms and aspirations. This situation has become only more pronounced in post-normal “science on the verge” (of a nervous breakdown), in which decision making must occur under conditions of uncertainty, great urgency, and conflicting values.

Last but not least, Collins’s idea of preserving society as we know it through belief in science is highly problematic. Science itself has been, and promises to continue to be, a radical disturber of the social order: scientific knowledge, for example, challenges such cultural traditions as belief in the truth-value of the Bible, and science contributes to the creative destruction of innovation. Science presents us with a world increasingly divorced from common sense: the sun does not rise, the earth turns; no scientific knowledge can change the fact that we experience weather, not climate. Indeed, the destabilizing power of science is another source of both public and expert uneasiness—a source that deserves at least to be acknowledged in any review of challenges to expertise and argument for reinstating trust in science.

Journalism Under Attack

Several weeks after Election Day, as the final ballot count wound down, it was reported that Hillary Clinton’s lead in the popular vote had surpassed 2 million. On November 27, President-elect Donald Trump declared on Twitter: “I won the popular vote if you deduct the millions of people who voted illegally.” Neither Trump nor his advisers offered any evidence to support his claim; reporters traced its origins to a website known for its promotion of feverish conspiracy theories.

During the 2016 presidential campaign, Trump said many patently false things that were described as such by journalists. (An arbitrary sampler: “Obama founded ISIS”; “Of course there is large-scale voter fraud happening on and before election day.”) None of it eroded his standing in the polls, much less the enthusiasm of his base. He often repeated falsehoods long after they had been debunked. Normally, there would be consequences for a major presidential candidate who behaved this way.

But as Politico’s Susan Glasser recently noted in an essay for the Brookings Institution: “Even fact-checking perhaps the most untruthful candidate of our lifetime didn’t work; the more news outlets did it, the less facts resonated.”

To my perplexed colleagues in the political journalism community: Welcome to the world of science journalism, where with respect to some topics, the more you report facts, the less they seem to matter. Anyone who’s been on the front lines of the climate wars, feel free to nod along. The same goes for you scientists and science communicators who have gotten entangled in the genetically modified organism (GMO) thicket or who have chased anti-vaccine activists down a rabbit hole.

Donald Trump’s improbable march to the White House shocked many, but the tactics that made it possible undoubtedly looked familiar to those of us who have navigated the topsy-turvy landscape of contested science. For Trump’s success was predicated on techniques that are used by advocates across the ideological spectrum to dispute or at least muddy established truths in science. I’ve reported on such cases for Issues in Science & Technology and other publications, which I’ll discuss shortly.

First, it’s important to understand that Trump’s winning strategy centered on demonizing his opponent and delegitimizing his critics, such as those pesky, fact-checking journalists. This required an overarching narrative—of a corrupt, entrenched political establishment, which Hillary Clinton embodied. That narrative already had a foundation (dating from the 1990s) for Trump’s team to build on, using new informational architecture from allies such as Steve Bannon, the former chairman of Breitbart News, who produced anti-Clinton books and documentaries. Bannon later became the chief executive of Trump’s campaign and has since been named the president-elect’s chief strategist and senior counselor.

Outside events (WikiLeaks disclosures, FBI announcements) had a “truthy” feel that bolstered the corruption theme of the narrative frame. (Remember those “lock her up” chants at the Trump rallies?) The comedian Stephen Colbert famously coined the term “truthiness” in 2005, describing “something that seems like truth—the truth we want to exist.” Since then, the rise of social media, such as Facebook and Twitter, has supplied us with a steady diet of news and information from sources that tend to reflect our own biases.

With the ascension of Trump in 2016, have we graduated from truthiness to what some political observers are now calling the post-truth era? Post-truth is defined by Oxford Dictionaries as a state in which “objective facts are less influential in shaping public opinion than appeals to emotion.” But this doesn’t do justice to the bending of reality by Trump en route to the White House. You can’t do that simply with appeals to emotion; you need, as his triumph suggests, a made-for-media narrative, with villains, accomplices, and heroes.

You need to do what has already been proven to work in warping public perceptions and discussion of certain fields of science.

Villainous vaccines

Several years ago, I received a call at home from a famous environmentalist. After introducing himself, Robert Kennedy Jr. cut right to the chase: “I’m trying to figure out if you’re a shill for Big Pharma.”

Kennedy has carved out an admirable, decades-long career as an environmental lawyer and ecological advocate. Although we had never previously met or spoken, he knew I had worked as an editor at Audubon magazine in the 2000s. That he wondered whether I had turned into an industry stooge astounded me, but I knew what prompted him to make that leap.

Kennedy had recently given a fiery keynote speech at a conference organized by two well-known anti-vaccine groups. Like the attendees, Kennedy erroneously believed that early childhood vaccines were responsible for the increased number of children diagnosed with autism (which he has often characterized as a “neurological holocaust”) and that evidence for this had been concealed by the US government, in cahoots with the pharmaceutical industry. Even prominent members of the scientific community were complicit, he asserted in his talk. He referred to specific individuals—such as a pediatric researcher who was a vocal vaccine advocate—as the equivalent of Nazi concentration camp guards and said: “They should be in jail, and the key should be thrown away.”

In my 2013 post on the Discover magazine site, I criticized these scurrilous comments and noted that Kennedy had been down this road before. In the mid-2000s, he caused a huge stir when he first made the argument for a nefarious cover-up of vaccine harm by the Centers for Disease Control and Prevention (CDC) in a controversial, widely publicized Rolling Stone magazine article. The sweeping allegation was thoroughly discredited after being scrutinized by leading science journalists. But Kennedy’s belief hardened over the years, and his rhetoric became inflammatory. I wrote at Discover: “Because of his celebrity status and standing in liberal and environmental circles, it is arguable that Kennedy has done as much as anyone to spread unwarranted fear and crazy conspiracy theories about vaccines.”

This prompted his angry phone call to me and his suspicion that I was on Big Pharma’s payroll. After disabusing him of this notion, Kennedy spent the next hour telling me about the “explosive” book he was soon to publish that would show a connection between vaccines and neurodevelopmental disorders. He also mentioned that he had upcoming meetings with congressional leaders and top federal agency officials to press his case.

These curious developments, a by-product of his zealotry, convinced me to tell the story of Kennedy’s fixation. My profile of him appeared in the Washington Post magazine in 2014. It illustrated that he was at odds with established science, that his meetings in Washington amounted to nothing, that he had alienated lifelong allies in the public health sphere, and perhaps most astoundingly, that he wouldn’t quit his crusade.

The story elicited sharply divergent responses. Those who reacted strongest fell into two very different camps. One side shook its collective head in disgust: They saw Kennedy as a good guy gone bonkers, someone who had “taken a disreputable plunge into the world of anti-science with his new and inexplicable crusade,” as Time’s Jeffrey Kluger wrote. The other side, particularly those who expressed their distrust of government institutions and the medical establishment, lauded Kennedy as a brave “hero.”

Same story, two opposite takeaways. How could that be?

When I researched the sources that fed Kennedy’s obsession, I discovered an alternate universe of “facts” and “science” that had been constructed over the past 10-15 years: a cottage industry of books, documentaries, obscure journal papers, and websites that reinforced Kennedy’s belief in a vaccine-induced “neurological holocaust” that the CDC is covering up. Kennedy was upset after my piece came out because it didn’t delve into the “science” that he shared with me. (Conversely, there were some in the science community who felt I was too easy on him.)

During the time I reported this story, I met with many intelligent people who appear to sincerely believe that the federal government is hiding the truth about vaccines and autism. (It’s not.) I doubt they would have become so certain without an overarching good guys/bad guys narrative: Big Pharma is the villain; CDC is the accomplice; Kennedy is the brave truth-teller.

In this world, Andrew Wakefield, the British doctor and author of a fraudulent study that set off a wave of panic about the measles, mumps, and rubella (MMR) vaccine in 1998, is a rock star. He is lionized by the roughly one-third of American parents who, according to surveys, mistakenly believe there is a link between autism and vaccines. Never mind that Wakefield’s medical license was revoked and that he’s been thoroughly discredited by mainstream science. That’s more proof of the conspiracy!

Just as Trump’s most ardent supporters live in a media bubble with its own set of truths, so too do passionate fans of Kennedy and Wakefield. Both of these bubbles foster disdain for establishment figures and institutions. Objective facts cannot penetrate these enclosed worlds. As Brian Stelter, CNN’s senior media correspondent, recently said: “A big part of the country has opted out of journalism and opted in to an alternate reality.”

It bears mentioning that Wakefield and fellow anti-vaccine activists met privately with Trump for about an hour several months before Election Day. Trump has previously stated his belief in a connection between vaccines and autism. According to Science magazine, which reported the meeting, Wakefield gave Trump a copy of a recent documentary film he directed. It’s called Vaxxed: From Cover-up to Catastrophe.

Monsanto the maleficent

During my career as a journalist, I have been fascinated by the staying power of certain false narratives. In 2014, I explored for this publication the origins and sustained nurturing of one such narrative: that hundreds of thousands of Indian farmers have been driven to suicide by Monsanto and GMOs.

It’s not true, which was easy enough to ascertain after some research, but what shocked me was how the story became embedded in the media and unquestioningly accepted by many smart people as true. I wanted to unravel that.

Just to be clear: The plight of small-scale farmers in India is real; many live on the barest of margins, without access to irrigation, institutional credit, or crop insurance, at the mercy of an increasingly unstable climate. Because of a complex mix of sociopolitical reasons, too many of these farmers end up with crushing debts that lead them to suicide. That is a tragic and real occurrence.

And yes, in the early 2000s, India’s government permitted Monsanto to introduce GMO cotton seeds into the country, which many farmers eagerly embraced. But the “agrarian crisis” in India, as it’s been called, predated the introduction of GMO cotton. And the precarious conditions for small farmers did not change much in the 2000s; what changed was the greater attention agrarian-related suicides (which, by the way, occur at a lower rate than suicides among Indians who are not farmers) suddenly received.

India has deeply rooted, long-standing problems of social and gender inequity that derive from its caste society, as well as systemic violence and sexual attacks against women; nonetheless, it was farmer suicides that global activists latched onto as a cause in the mid-2000s. This drew media coverage and interest from university think tanks.

It was around this time that Monsanto and its GMO cotton seeds were pegged as the main culprits for Indian farmer suicides. As I reported in my story for Issues in Science & Technology, no one has done more to cement and perpetuate this narrative than Vandana Shiva, the famous globe-trotting environmentalist. She and her organization published reports calling Monsanto’s product “seeds of suicide.” Shiva amplified the Monsanto-Indian farmer suicide connection in op-eds, media interviews, and public talks. Her exalted standing in the green world and among influential thought leaders helped legitimize the narrative.

But for it to really take hold, there had to be an existing, well-established frame.

That would be the made-for-media villain: Monsanto, or as its detractors like to refer to the biotechnology company, Monsatan. That meme, in which Monsanto became tagged on the Internet as “the most evil” company in the world because it was hell-bent on taking over the world’s food supply and jamming “frankenfoods” down our throats, was already firmly established when Shiva decided to build on it with the Indian farmer suicide story.

I’ve got a shelf of books that vilify Monsanto for its corruption of agriculture. I’ve seen documentaries on this. Everybody hates Monsanto, right?

Never mind that this image is cartoonish. What matters is that it sounds truthy. So yeah, the demonization of the company had been happening well before Shiva made it culpable for 300,000 Indian farmer suicides. Many were already primed to believe it. Of course it was true! Paul Ehrlich mentioned it, and Bill Moyers nodded along gravely when Shiva told him about it.

Then there was that affecting 2012 documentary called Bitter Seeds, which Shiva helped engineer and Michael Pollan praised. It made the rounds at film festivals. You watch that—or the various YouTube clips featuring tearful, wailing Indian families that lost a relative to suicide because of GMOs—and tell me that Monsanto isn’t evil.

It’s all about the narrative and how forcefully you build it: “Corrupt Hillary” is a “criminal”; a pediatric researcher who pushes back on anti-vaccine scare-mongering is the equivalent of a Nazi concentration camp guard; the scientists at Monsanto have created murderous “seeds of suicide.”

These story lines are real for the people who believe them because they have been reinforced repetitively with new information—books, articles, films, talks, radio segments—from trusted, like-minded sources.

Ideally, journalists and scholars are fearless when it comes to examining assumptions and embedded narratives that influence public policy and scientific debate. But in the real world, where group identity matters and reputations have to be guarded against political attacks, some have calculated that certain narratives are best left alone.

Alan Levinovitz, a professor of religious studies at James Madison University, was someone who never questioned the “Monsanto is evil” narrative until he was accused of being a shill for the company after anti-GMO critics deemed some of his writing too positive about biotechnology. In a 2015 essay, he writes, tongue firmly in cheek:

Like most people, I knew how Monsanto really was, despite not having thought too hard about it…I knew Monsanto sues farmers into oblivion, caused a rash of suicides in India, suppresses negative media coverage, and pays politicians and scientists to lie on its behalf.

But there was one story I didn’t believe, because I knew it wasn’t true: Monsanto hadn’t paid me. So I did what any academic or journalist would do, and started learning more about the company that supposedly had me on its payroll.

Levinovitz talked to scientists at Monsanto and soon a “complicated picture” emerged of a large multinational “that employed a wide variety of people, some of whom cared mainly about making money, and others who cared mainly about doing good science.”

He liked the idea of humanizing a reviled company—and perhaps a field of science—that had been thoroughly demonized. “But then I realized I would never write that story,” Levinovitz recalls in his essay. “It wasn’t worth it. Why risk associating myself, even in passing, with Satan? Other journalists have told me they feel the same way.” He then mentions Nathanael Johnson, a food and agriculture writer for Grist, who agreed, telling him: “I’m not proud of the chilling effect it has on me.”

No journalist likes the viper pit that inevitably awaits him or her when wading into contested sciences. Based on what I’ve experienced in the past couple of years, I now wonder if I should have had the good sense to avoid certain stories myself.

The anti-GMO gang

In 2012, I wrote a piece for Slate that begins this way: “I used to think that nothing rivaled the misinformation spewed by climate change skeptics and spinmeisters. Then I started paying attention to how anti-GMO campaigners have distorted the science on genetically modified foods. You might be surprised at how successful they’ve been and who has helped them pull it off.”

At the time, agricultural biotechnology was barely on my radar. I spent much of the 2000s cocooned as a senior editor of a leading environmental publication, editing and writing stories about wildlife, conservation biology, climate change, and the sins of the fossil fuel industry. I’m proud of my work at Audubon magazine during this period of my career. I’m not someone who woke up one day and questioned his career choices or identity as an environmental journalist. I didn’t have an ideological or political conversion that led me to conclude, as the headline of my Slate piece put it, “GMO Opponents Are the Climate Skeptics of the Left.”

That said, after I became a freelancer in 2009, I sought to carve out a niche for myself, exploring the nuances of environmental and climate issues that were underreported or, in some cases, virtually ignored.

One topic that struck me as inadequately covered by my peers was the GMO debate, which, after flaring up in the 1990s and early 2000s, had simmered for the remainder of the 2000s. Then, after the food movement emerged in the late 2000s, activists began a campaign to label genetically modified foods. This put the science of agricultural biotechnology back in the public eye. I took notice.

I was fortunate in that I came to the issue with few preconceptions or strong feelings. I hadn’t previously paid much attention to the GMO debate. So I first set out to understand the science of agricultural biotechnology.

It was a complicated, industry-driven science, I quickly learned, which no doubt predisposed many to be suspicious of it. (This is understandable, given the long, well-documented history of disinformation and character assassination perpetrated by the chemical, lead, tobacco, and fossil fuel industries, to cite the most infamous examples.)

But I also discovered that the fears of “frankenfoods” that animated early opposition to GMOs never materialized. Prestigious science institutions had by the late 2000s looked closely at the accumulated body of independent research and found crop biotechnology to be safe. There were still thorny questions about some of the environmental trade-offs with respect to particular crops—whether GMOs reduced or exacerbated pesticide use, for example. But overall, the scientific consensus was that the technology was being put to productive use by farmers without harm to society or wildlife.

To my surprise, the same environmental groups and public interest watchdogs that accepted the scientific consensus on global warming rejected the scientific consensus on GMOs. What shocked me further was how they even used the same “merchants of doubt” tactics that the fossil fuel, tobacco, and chemical industries perfected to muddy public debate. For example, there is a small network of “no consensus” scientists and self-proclaimed experts in the anti-GMO sphere that mirrors the one created by climate denialists.

(In recent years, they have produced dubious scientific papers and hefty books with titles such as The GMO Deception and Altered Genes, Twisted Truth: How the Venture to Genetically Engineer Our Food has Subverted Science, Corrupted Government, and Systematically Deceived the Public. The most enduring anti-GMO, anti-vaccine, and climate denialist narratives are powered by a similar conspiracy theme.)

If you want to learn more about how this alternative universe was manufactured, be sure to read Will Saletan’s 2015 deep dive for Slate, which concluded: “The war against genetically modified organisms is full of fearmongering, errors, and fraud.”

This is what my 2012 Slate piece drew attention to when I wrote “that the emotionally charged, politicized discourse on GMOs is mired in the kind of fever swamps that have polluted climate science beyond recognition.”

The responsible parties, I pointed out, were environmental groups, prominent food columnists, and influential progressive writers. At my blog for Discover magazine (discontinued in 2015), I picked up on this theme. I highlighted continuing instances of misrepresentation of the science by green groups and prominent individuals who I thought should have known better. This has not been well received in parts of the progressive sphere (which, I should declare, is my natural habitat).

What do I mean by that? Ask Julia Belluz, a science reporter for Vox who has written hard-hitting pieces on Dr. Oz, alternative medicine, and diet fads; she discusses the blowback she has received in a recent piece titled “Why reporting on health and science is a good way to lose friends and alienate people.”

This has certainly been my experience while reporting on GMOs. It’s even worse when the only friends you make after reporting critically on an eco-saint such as Vandana Shiva are the kind of people who work in labs at Monsanto’s headquarters.

My point being: If the journalism you do is perceived to be aiding the most evil company in the world, trust me, you run the risk of losing more than friends. More on that in a minute.

In the meantime, put yourself in the shoes of food activists and greens who oppose GMOs and who truly believe they are on the side of angels. They wake up every day to fight evil. There are no shades of gray in this black-and-white world, which you should view through their lens:

If industry executives and industry-allied scientists—the faces of evil!—are approvingly sharing on social media all my stories and blog posts about how exalted progressive voices are distorting the GMO debate, that probably indicates I’m a friend of Monsanto, right?

If the Columbia Journalism Review publishes a story in 2013 about how I’ve shined a light on slipshod and slanted media coverage of GMOs, I must be carrying water for the biotech industry, right?

In 2015, if I report for a prominent science publication about public-sector scientists receiving Freedom of Information Act (FOIA) requests from an anti-GMO group—before that group wants the requests to be made public—then I must be industry’s handmaiden, right? If I then follow up with another exclusive about the contents of those FOIA requests—before the anti-GMO group that requested them wants the information to be made public—then I definitely must be working for industry, right?

No, my reporting is not influenced or bought by industry. I learned about the Indian farmer-GMO suicide myth on my own. I never even talked to Monsanto when I was reporting about it for this magazine. When I first wrote about the widespread distortions of crop biotech science for Slate in 2012, I had barely begun to talk to scientists in the field. I could see the upside-down world that anti-GMO activists had created simply by comparing it with the world of actual science and respected literature reviews.

Since then, I’ve come to know many public-sector biotech scientists. They trust me to report on them in a fair, unbiased manner. After a number of these scientists were served with FOIA requests from an anti-GMO group in 2015, they let me know about it. I wrote a straightforward story about this news for Science early that February. In private, sideline conversations, I also told the scientists that, on journalistic principle, I was not opposed to the FOIA requests, even though I could understand why they felt aggrieved by them.

Later that year, scientists alerted me to the release of thousands of e-mails to the anti-GMO group that stemmed from its FOIA request of one researcher. I then reported on the contents of these e-mails for Nature, which blindsided the anti-GMO group, as it had already been working with other journalists it felt would reflect its interests and philosophy in articles to come.

What happened next blindsided me.

Character assassination

It started one day in September when the anti-GMO group posted excerpts of e-mails it had received from a FOIA request that mentioned my name and the names of other journalists, such as Tamar Haspel, a food writer for the Washington Post, and Amy Harmon, the two-time Pulitzer-winning reporter for the New York Times, who had recently published several highly acclaimed feature stories on GMO controversies (stories that upset biotech opponents). The excerpts, followed by commentaries from the group, were published under the title “A Short Report on Journalists Mentioned in our FOIA Requests.”

The e-mail excerpts don’t include anything we said or did. (If a reporter has a specific beat or writes frequently on contentious topics, you can bet that reporter’s name will surface in the e-mails of people with a great interest in those topics.) But the few times our names come up in chitchat among university scientists apparently triggered the anti-GMO group’s suspicion. For example, one of the highlighted e-mails in which my name is mentioned comes from an academic scientist and outspoken GMO advocate. His message, which relayed concerns about a rumored hacking of websites, was sent to various science communicators and biotech industry representatives. I was on a long list of cc’d people. The anti-GMO group’s conclusion: “The e-mail implies that Kloor works closely with the agrichemical industry’s prominent advocates.”

Several days later, the liberal-leaning website Alternet published the anti-GMO group’s “short report” verbatim, but tacked on a catchier headline: “3 Journalists Who Are Disturbingly Cozy with the Agrichemical Industry.” Remember, there aren’t even any e-mails from us! It’s inference piled on inference, a presumed guilt by association based on who mentioned our names.

Shortly after that came out, I received an e-mail from Robert Kennedy Jr., who included me on his response to an anti-vaccine activist who had just informed him that I had been outed as a “shill for industry.” Kennedy’s confirmation bias kicked in: “Makes sense. The first question I ever asked Keith was whether he was shilling for [the pharmaceutical] industry. It just didn’t make sense that the guy who sold himself as a science writer was promoting industry junk science so adamantly.” Within weeks I found myself being described on websites as a “Monsanto prostitute” and “industry sleazebag.”

I wasn’t too bothered by this because on-line flamers usually undermine their own credibility. But then in January 2016, the campaign to tarnish my professional reputation became serious. Greenpeace, which has long been opposed to GMOs (and rejects the scientific consensus that they are safe), created a page for me on its PolluterWatch website. It’s a cunning mix of factually true autobiographical details, half-truths, and outright fabrications, such as this one: “Kloor has repeatedly decried public records requests, some of which include his communications with GMO interests, by organizations exposing conflicts of interest between corporations and scientists.” There is no evidence for this claim, which is also utterly absurd, given that I have used the FOIA myself to uncover industry misdeeds.

Then, days later, a similar post appeared on SourceWatch, an Internet watchdog site that tracks “corporate front groups, people who ‘front’ corporate campaigns, and PR operations.” Today, those who Google my name for whatever reason are likely to come across these sites. If you are unfamiliar with me or my work, you will be unable to distinguish what’s true and false, which is surely by design. That’s disconcerting in the digital age we live in; people are already plenty confused by fake news and slick political propaganda.

Even more disheartening were responses I received from colleagues in the Society of Environmental Journalists (SEJ). After I mentioned the PolluterWatch and SourceWatch pages on the organization’s listserv, one SEJ member said, “seems factual to me.” Another sent me a private e-mail: “Keith, please, tell me who pays your salary? How can you continue to pretend that you have not succumbed to the allure of spin that began with the pork industry paying doctors to extol the merits of eating dead pigs? Do you or do you not tout the technologies that have yet to be proven truly safe? DO TELL. It is time for you to come clean.”

After picking up my jaw from the floor, I thought to myself: With colleagues like this, who needs enemies?

On December 6, as I was wrapping up this essay, I received an e-mail from a scientist at a public university who had just that day received another FOIA request from the same anti-GMO group that has over the past year sent many such requests to dozens of his peers throughout the United States and Canada. This one was different from earlier requests that asked for correspondence between the scientists and anyone connected to the biotech industry.

This time, the request was for correspondence covering the past three years between the scientist and three journalists: myself, Tamar Haspel of the Washington Post, and Grist’s Nathanael Johnson. The three of us have written extensively about GMOs, at times correcting misinformation and debunking myths. In doing so, we have challenged certain false narratives about the science of agricultural biotechnology that have persisted. Perhaps the anti-GMO group that now seeks our e-mail correspondence with one scientist suspects that there are smoking guns to be found that will impugn our names and consequently our reporting on GMOs.

Regardless of what is found in the e-mails, I can already imagine the damning headline: “Science Journalists Found Consorting with Scientists.”

Civil Society’s Role in a Public Health Crisis

When the next major pandemic strikes, it will be accompanied by something never before seen in human history: an explosion of billions of texts, tweets, e-mails, blogs, photos, and videos rocketing across the planet’s computers and mobile devices.

Some of these billions of words and pictures will have useful information, but many will be filled with rumors, innuendo, misinformation, and hyper-sensational claims. Repeated tidal waves of messages and images will quickly overwhelm traditional information sources, including national governments, global news media outlets, and even on-the-ground first responders. As a result, hundreds of millions of people will receive unvetted and incorrect assertions, uncensored images, and unqualified guidance, all of which, if acted on, could endanger their own health, seriously damage their economies, and undermine the stability of their societies.

The impact of technology on pandemics is as old as mankind. When new technologies appear, as jet travel and global mass transit did in the 1950s, we update our thinking and containment strategies.

In terms of pandemics, the consumer information revolution today is just as significant a development as the commercial jet was in the 1950s. Within just the past few years, an entirely new worldwide information architecture has emerged. Modern communications enable uncensored, user-generated words, voices, photos, and videos to be broadcast globally by anyone, anywhere, 24/7, at no real cost and often anonymously.

Relatively new technologies, such as texting, chat, blogging, posting, and voice-over-Internet, as well as media platforms such as Twitter, Facebook, Skype, and YouTube, all began as secondary “backchannels” in a much larger information universe. That is no longer true. In the next major pandemic, when fatalities in a developed country start mounting past a few dozen, these backchannels will outrun and overwhelm the front channels of traditional media and official sources in terms of speed, volume, relevance, and credibility. Twenty billion personalized web and mobile messages a day could easily become the norm.

In addition, in a world of low-cost, ubiquitous mobile web and media, groups with destructive agendas could “infect” the information channel with the kind of intentional lies or highly disruptive bio-terror propaganda that will be specifically designed to be quickly forwarded by millions of people in minutes.

Are we prepared?

As a result of a backchannel explosion of unfiltered communications, entire segments of modern societies could easily slide into a kind of anaphylactic shock. Without some kind of effective counterforce, panic, chaos, and disorder are almost certain to break out during the next major pandemic.

There is a fundamentally unmeasurable but nevertheless high probability that a kind of destructive feedback loop could emerge. If unchecked, it could quickly amplify a pandemic’s biological effects and create a secondary set of consequences that may be equally deadly, if not worse.

Runaway backchannel messages could create immediate and substantial damage in many ways.

Advances in communications are not the only developments that have the potential to magnify health emergencies. A variety of “superforces” are increasing the world’s vulnerability to pandemics. Ironically, many of the same factors that make our modern civilization successful and powerful can also be exploited by deadly pathogens, or by bad actors, leaving the entire world more vulnerable to a fast-moving catastrophe than ever before.

Technology is often a double-edged sword: it can be used to help or harm. In our time, we can identify a number of superforces that have already begun to transform science and technology, and will soon transform civilization itself.

These superforces have the capacity to improve our lives immeasurably by strengthening our health, our knowledge and access to data, our economies, our connectedness, our communications, our mechanical and scientific capabilities, and much more.

Yet these same superforces could—either inadvertently or deliberately—help spread the next global pandemic, causing casualties in the millions, tens of millions, or conceivably in the billions. These superforces include:

Urbanization, which concentrates populations, enabling greater cooperation while making it easier for pathogens to spread far more quickly and to infect far larger populations. Air travel and mass transit systems support and connect these urban environments at the speed and scale needed to keep populations fed and keep cities functioning.

Synthetic biology can design and create useful new biological agents, agricultural products, and even entirely new forms of living things. Synbio can also be used to design and create a pathogen that spreads more easily than a naturally emerging pathogen, kills faster, is more drug-resistant, and is harder to detect. So-called “chimeras” can artificially combine the most dangerous features of known killers.

Connectivity makes it easier for doctors, hospitals, and medical experts to track and support individuals on a 24/7 basis. Yet connectivity also makes it easier to order many existing pathogens from online sources or download digitized genomic blueprints, along with procedures to reassemble, modify, or weaponize them.

3D printing: Current bioreactors are expensive, but 3D printing will soon make it possible to replicate a powerful new customized drug or bio-agent quickly and cheaply, anywhere in the world. This capacity applies equally to life-saving medicines and to deadly pathogens.

Advanced robotics, such as self-flying programmable drones, offer a practical and inexpensive platform for many tasks, ranging from commercial package delivery to real-time traffic studies. Yet drones can also be used to deliver a pathogen to thousands, tens of thousands, or even millions of people at a time, in ways that bypass many traditional defenses.

As deadly as nuclear war

When an outbreak of a communicable disease rapidly exceeds the expected number of patients (for a normal seasonal flu, for example, or some other malady) within a single city, region, or country, scientists call it an epidemic. When cases of that disease then pop up in a second region of the world, the World Health Organization (WHO) classifies it as a pandemic. Pandemics are not classified by their scale in numbers of human victims, but by their geographic breadth. They can start small and grow quickly. WHO has developed scenarios in which a modern global pandemic could rapidly escalate to the point where it might kill 7.4 million people. The organization refuses to speculate on maximum potential fatalities if a completely unanticipated disease appears.

From 1945 through today, the prevention of nuclear war has been the world’s top security priority. At the same time, emerging new technologies have now elevated the threat of pandemics—whether unleashed malevolently, triggered accidentally, or occurring naturally—to levels comparable with nuclear destruction.

In just the past 100 years, pandemics have killed from 1 million (1968 Hong Kong flu) to 40+ million people (WHO estimate, 1918 Spanish flu). The potential of bio-agents to spread widely and rapidly is why governments take them so seriously. The US went to war against Iraq in 2003 based partly on its belief that Iraq possessed chemical and biological weapons, and would use them against the West as it had previously done against its regional enemy, Iran. Just the threat of bio-terror can alter the course of history.

Pandemics have always been a threat. However, as global trade and international travel continue to escalate, as habitats for wild species shrink, and as the world’s population grows and tens of billions of animals are raised for food, the vectors and channels for pandemics have increased dramatically. What’s more, recent technological advances in areas such as synthetic biology have created a disturbing new reality: an attempt to create a bio-weapon that can trigger a man-made pandemic no longer requires large teams of world-class scientists or expensive labs full of advanced equipment. Instead, such attempts can be made using commercially available, affordable equipment—the kind found in a growing number of university and high school labs, or easily ordered online—and require no more expertise than that of a high school biology teacher.

This level of technology and skill has already spread to tens of thousands of individuals worldwide and will soon be within reach of hundreds of thousands. Consequently, bio-weapons are an increasingly attractive option for non-state actors, making prevention of man-made pandemics even more difficult. If perpetrators have no official geographic base, then a target nation cannot deter this kind of attack simply by threatening a counterstrike. This is just one way that bio-weapons bypass many of the highly evolved defenses (such as secure borders and governmental control of the skies) that make us feel safe and invulnerable to “far away” problems.

Bio-weapons, along with the world’s greatly increased vulnerability to naturally occurring pandemics, are a genie that cannot be put back in the bottle. Imaginative new thinking is required to conceive of countermeasures that could contain, and possibly deter, the worst outcomes: fatality rates potentially in the tens of millions, along with possible economic collapse.

With the latest tools of genetic engineering, a unique pathogen can now be designed in a lab to largely or totally bypass our body’s highly evolved natural defenses; such pathogens include lethal variations of diseases that we have already been immunized against and thus feel “safe” about. In the worst case, such a pathogen could make the human species as extinct as the dodo bird or the Dutch elm tree. Our mass numbers are no real defense. In 1860 there were billions of passenger pigeons. Just 50 years later there were none.

Containment and mitigation

How well any civil society addresses pandemics—potential and actual—depends, in part, on that society’s resiliency. Building resiliency means increasing our multi-layered capacity to do two things: containment to help slow or halt the spread of deadly pathogens before outbreaks grow to uncontrollable levels, and mitigation to prevent or reduce non-medical effects of pandemics (such as fear and panic that can paralyze economies and depopulate cities) and to recover as quickly as possible from whatever damage is sustained. Both containment and mitigation can reduce the risk that an epidemic will escalate into a major pandemic and can reduce the total number of fatalities substantially.

Containing pandemics begins with medical actions: identifying the source of a pandemic; tracing exposed individuals and tracking the spread of cases; declaring and imposing quarantines when needed; vaccinating threatened populations; decontaminating people, places, and things; instituting hygienic policies to reduce the spread of the pathogen; and, perhaps, searching for cures, antidotes, and therapies.

These medical efforts to contain actual and potential pandemics are largely the responsibility of government agencies at all levels. Understandably, this focus occupies most of government’s attention during a health crisis. Specifically, in the event of a pandemic, the government of any country will often have its hands full directing the actions of that country’s first responders, medical professionals, and healthcare organizations; tracking exposure; controlling the release of stockpiles of drugs, vaccines, food, water, and other emergency supplies; and quickly rebuilding or restoring everything from inadequate medical facilities to crisis-shattered public confidence.

Equally critical to saving lives is the second response strategy: mitigating the broader social and economic “ripple effects” of a pandemic. Even if initial mass casualties from a pandemic seem to be avoided, the cascading consequences of a potential high-impact pandemic—such as widespread fear and panic as well as economic disruption—can easily undo the positive effects of any medical defenses that authorities put in place and trigger a skyrocketing death toll. For example, if infected people—motivated by fear—travel or flee the contaminated area despite medical advice; if infected people ignore recommended hygienic precautions for dealing with food, water, clothing, burials, animals, and other people; or if potentially infected people continue their normal interactions with non-infected persons, then such actions promote a wider spread of the disease, which threatens far higher casualties and disruptions.

When a potential pandemic emerges, public health professionals typically want entire populations to change many things about their regular day-to-day behaviors—and to do so quickly. For example, the science may indicate that to prevent the spread of disease, people should adopt a series of tactics, both novel and familiar—potentially with different behaviors for different populations in different areas, and often with different specific behavior regimens for work, home, school, retail environments, and so on. Behaviors that work to contain the most virulent and deadly influenza may not work for novel pathogens, whether naturally occurring or man-made. And, behaviors that work for airborne pathogens or those that spread through superficial contact may not work for those that spread through an exchange of bodily fluids.

Large-scale public understanding of the “best” behaviors in each case, for each region and population—added to the need for public understanding of the reasons behind medical recommendations, and to the need to obtain widespread compliance with these behaviors—requires a highly sophisticated, multi-channel communications effort. This “communication inoculation” must happen just as quickly as drug inoculation. Both kinds of inoculation can be critical for saving lives. Put another way, words and images can be as powerful in reducing death rates as drugs and doctors.

It is extremely difficult, however, to achieve this degree of mass behavioral change under the best of circumstances. The task becomes even more difficult in a world where “official” sources, including both governmental agencies and major media outlets, are distrusted by significant sectors of the global population, and where billions of people can access and amplify any unfounded opinion on the Internet.

The life-or-death importance of clear, trusted, scientifically reliable communication in the event of a pandemic was strongly emphasized by WHO scientists in their post-mortem report on how Singapore and the world handled the 2003 SARS outbreak. They recognized the critical human-behavior factor. According to WHO, “Human behaviors nearly always contribute to [the spread of a pandemic, therefore]…information to the public…acquires the status of a control intervention with great potential to reduce or interrupt transmission and thus expedite containment.” To put it more plainly: clear, credible communication of science-vetted messages to the public can make the difference between an epidemic that threatens a limited population confined to a relatively small geographic region and a pandemic that spreads across the planet and indiscriminately threatens hundreds of millions of lives—especially in urbanized, highly mobile societies.

In the face of these many challenges, a new and effective approach to crisis communication and management must be developed, tested, refined, and implemented. This approach cannot and must not wait for government agencies at every level to realize and act upon this need.

Civil society’s role

Before a potential or actual pandemic emerges, the resiliency of any society can always be strengthened. Every country has important, untapped assets in civil society that go beyond public employees and leaders, and these assets can play a critical role in minimizing fatalities from pandemics.

Around the world, such assets include millions of scientists and tens of millions of qualified professionals in all major industries and sectors, especially in the fields of media and communication. The vast majority of these citizens and scientists are people of goodwill who would enthusiastically serve in a volunteer corps if invited to do so.

A global corps of qualified volunteer reservists, drawn from civil society and privately funded to operate independently of any government, could play a critical supporting role to national health ministries and local governments worldwide during a pandemic by enabling prompt communication of clear, credible scientific information to the public.

Among other services, this independent global community would be fast and adept at creating public information materials—from Wikipedia pages to YouTube videos, texts, podcasts, radio spots, and more—that are precisely targeted for many disparate populations and demographic sectors. This includes grade-appropriate content for schoolchildren, multiple-language versions of urgently needed content for population slices at every education level from college graduates to the illiterate, regionally and culturally-targeted content, and materials for special populations such as the hearing- or vision-impaired.

More specifically, volunteer corps communications experts could create, market-test, refine, and stockpile a library of crisis media assets long before likely outbreaks occur. These assets would be ready to go in the event of well-understood threats such as SARS, anthrax, smallpox, virulent influenza, and other highly communicable diseases. To help ensure credibility, all media content would acknowledge that scientific knowledge of any unfolding emergency is necessarily uncertain and subject to continual revision.

Ideally, for internal communication, members of this global volunteer community would use affordable, existing technologies including, but not limited to, smartphones, texting, social media, and well-established global websites. For communication with the public and local agencies, the global volunteer corps would be expert at rapidly leveraging the social media platforms that billions of citizens around the world already regularly use.

To be clear, this is not about giving scientists a direct line to the public through social media, nor is it about trying to turn scientists into bloggers, YouTube video producers, and the like. That is not practical, because many scientists find it challenging to translate their facts, findings, and ideas into language that the public clearly understands. Instead, volunteer communications professionals and skilled amateurs around the world, working in close consultation with qualified volunteer scientists and governmental agencies, would craft media assets that both the public and front-line responders can easily understand, relate to, and find credible and compelling in a potential or actual pandemic or other asymmetric event. The communicators would create the media assets; the scientists would vet their content for accuracy. Separately, some of the scientists among the global volunteers would be available to give interviews to traditional and non-traditional media.

When communicators are identified as part of a civilian science volunteer organization rather than a governmental agency or media organization, their information is far more likely to be perceived by the public as free of political and financial self-interest, and therefore more credible than much of the emergency communication and guidance that comes from standard sources.

Finally, this proposal suggests that the volunteer scientists also be available to act as a grassroots network of local field agents. In this role they would perform non-communications, science-oriented tasks in the event of a crisis, in support of public agencies (police, fire, emergency medical technicians, and so on), as well as health ministries and international agencies.

These science-oriented tasks might include performing local testing of food and water sources, or basic, “on the ground” information gathering. Reliable data could then be passed on from the grassroots network to authorities. Other science-oriented tasks performed by volunteers could include limited, second-tier problem solving when local or national governments, health agencies, or international agencies are overburdened and unable to perform these tasks for themselves.

The size of this global volunteer corps would necessarily be a function of the scope of its mission. The organization would need a distributed membership that is already in place before a crisis strikes.

Important note: the idea here is not that far-flung volunteers would act in perfect coordination worldwide, like a fast-food franchise simultaneously pushing the identical menu on every population in every quarter of the globe. Rather, a global volunteer corps at this scale ensures that in every major region, city, nation, and community, the organization will have local representatives ready and able to advise on—and adapt to—unique local conditions, including public attitudes, values, and communications requirements in case of an epidemic or pandemic.

Volunteers in each geographic region would act only when and as needed. For example, if a pandemic breaks out in three nations, then only the local volunteers in those three nations would need to respond. It would be extremely rare for the entire global volunteer corps to act simultaneously. Even if a massive pandemic required simultaneous, coordinated action across half the world, a widely distributed body of representatives makes it possible for volunteers to shape local communications by taking into account the dramatically varied cultural differences that drive public opinion and behavior.

In a time of crisis, the local volunteers’ input, recommendations, feedback, and experience in customizing health messages for individual communities would often make the difference between success and failure for any public service communications campaign or medical information-gathering effort. With these considerations in mind, the global volunteer corps would probably reach optimum size at between 1 million and 2 million members worldwide. This scale would ensure ready representation in every city of more than 10,000 people.

If this global community were assembled, trained, and tested before a crisis occurs, it would be able to establish credibility with general populations around the world, enabling the volunteers to make a very significant contribution to the containment of potential pandemics and the mitigation of actual pandemics.

When West Africa’s Ebola epidemic escalated into a pandemic in 2014, thousands of victims died across the region in a matter of months. And if Ebola had exploded in Lagos, Nigeria—with its population of 21 million—casualties could have quickly reached hundreds of thousands in that city alone. The spread would have been uncontrollable, and most African nations would likely have suffered enormous economic damage as global trade with the continent collapsed. Fortunately, Nigeria’s government, the CDC, and WHO held the number of cases in Lagos to just 20 infections, with only eight fatalities. Quick volunteer action was an important ingredient of this success. Volunteer media teams, led by private citizens including both media professionals and doctors, created a three-way partnership that made the difference.

The partnership included:

The campaign was pronounced a “spectacular success” by WHO. A major outbreak of Ebola was prevented from becoming an out-of-control mass killer and economy destroyer.

In November 2015, TEDMED and the Skoll Global Threats Fund convened a daylong discussion with 30 leaders from 24 educational, corporate, government, and nongovernmental organizations to discuss the growing threat of global pandemics and to explore strategies for prevention and response.

The thoughtful dialogue that resulted from this gathering clearly revealed a growing consensus about the need for, and willingness to support, a science-driven cooperative of qualified volunteers who can work with governments to help mitigate the next pandemic.

Key points of consensus included:

Acting now to create a global volunteer corps of qualified scientists, communicators, and other experts from civil society will significantly strengthen the world’s “firewall” against mass-scale deaths from pandemics and their aftermaths. And it will strengthen the underlying economies of all nations. In addition, the capabilities of this global volunteer corps could alter the calculus of transnational terrorists who today see very few sources of deterrence and resilience in our otherwise ill-prepared societies.

Foresight, imagination, and willingness to act are our most effective tools to prevent outbreaks and their consequences, and to contain and mitigate those that cannot be prevented. Both the responsive and deterrent capabilities of the proposed corps of global volunteers could ultimately save lives and reduce casualties—quite possibly in the many millions.

Forum – Fall 2016

Manufacturing’s loss, Trump’s gain

Most people in the Washington, DC, trade establishment see the current political backlash driving Donald Trump’s presidential campaign as a result of nativism, ignorance, or simply the whining of those hurt by trade. For them to consider the alternative—that these citizens may be on to something legitimate—is to challenge the core foundations of what can be labeled the “Washington Consensus” on globalization. As Stephen Ezell and I wrote in our book Innovation Economics: The Race for Global Advantage, most in the trade policy establishment share a view about trade that is grounded in at least five core beliefs: 1) the United States is the world’s economic leader because it is the most open, entrepreneurial, and market-driven economy; 2) trade is an unalloyed good for the United States, even if other nations engage in “innovation mercantilism;” 3) mercantilist nations hurt only themselves; 4) the US’s role in the global economy is to be a shining “city on the hill” that, by force of example rather than prosecution, shows misguided nations why mercantilism is bad; and 5) because there is no optimal industrial structure, the nation not only does not need, but should actively avoid, a national manufacturing policy.

In their dogged defense of trade, holders of the Consensus go to great pains to deny that US manufacturing has declined because of trade, because they know that to admit this would open cracks in their position. This is why William B. Bonvillian’s article, “Donald Trump’s Voters and the Decline of American Manufacturing” (Issues, Summer 2016), is so important. Bonvillian rightly points out that many of Trump’s supporters are working-class whites who have been hurt by trade. But as Bonvillian also points out, because so many nations have systematically manipulated the trading system for mercantilist advantage, and because the United States has failed to put in place an innovation-based manufacturing agenda, US manufacturing, and with it the US economy, has been hurt. In other words, the story is more complicated than the holders of the Consensus want us to believe.

In fact, these holders continue to stridently defend the fundamentally wrong story that virtually all of the loss of US manufacturing jobs has been due to higher productivity, not trade deficits, because they know that admitting the nation has lost manufacturing because of trade would mean conceding that not all is right with the government’s trade policy. But as research by the Information Technology and Innovation Foundation has shown, and as Bonvillian argues, the United States has, in fact, lost considerable manufacturing output and jobs to other nations, and not just in so-called low-tech sectors.

Holders of the Consensus see the choice as between “free trade” and protectionism. But Bonvillian rightly points out that the real choice is between the current system of weak trade enforcement coupled with a laissez-faire domestic policy toward manufacturing, and a new system that embraces rules-based globalization, grounded in tough and energetic enforcement against systemic and rampant innovation mercantilism, combined with a coherent domestic national manufacturing strategy grounded in innovation. If the holders of the Washington Consensus want to avoid a new era of protectionist isolationism, they had better embrace the framework Bonvillian lays out, and quickly.

Robert Atkinson

President

Information Technology and Innovation Foundation

Washington, DC

Too much democracy?

Political sentiments among professional observers (e.g., “Voting is not democratic”) and the public are shifting phenomena. At times, the shift is akin to a dramatic quake. Recall the famous promise of the German Chancellor Willy Brandt in his 1969 state of the nation address before the German Parliament in Bonn: “Let’s dare more democracy” (“Wir wollen mehr Demokratie wagen”).

Willy Brandt expressed a sentiment very much in line with the spirit of that age; the student rebellion in many societies and the widespread public protests against the Vietnam War were illustrated and inspired by democratic sentiment. Today, however, we are more likely to encounter skepticism about the democratic political order—not only among would-be dictators, but more generally. In the wake of the Brexit vote in the United Kingdom, for example, we hear that there is “too much democracy.” We hear that a rational nation ruled by science would not be such a terrible idea. Such opposition to democratic governance is often voiced, too, in response to the consequences of climate change. As Daniel Bell postulated as part of his theory of post-industrial society, the political problems of such a society are essentially matters of science policy rather than policy matters in the ordinary sense.

As Colin Macilwain documents very well in “Science and Democracy” (Issues, Summer 2016), despair about democracy is expressed in manifold utterances of scientists, who attempt to resurrect an idea that, we hoped, had been buried years ago. Prominent in the 1970s, the idea that technocratic rule by experts is the solution to whatever ails modern societies is once again reappearing. The psychologist Daniel Kahneman is but one prominent new voice expressing extreme skepticism that existing political arrangements and the worldviews of the public are conducive to coping with the fallout of climate change.

Perhaps an even more realistic threat to democracy than the attitudes and behavior of members of the scientific community toward the public is the growth of social inequality and of a rentier class in many societies that lives off the rent procured by its ownership of capital assets, independent of any labor. The beliefs of the scientific elite are perhaps also driven by what Friedrich Hayek detected among his fellow scholars: namely, that as science advances, it tends to strengthen the conviction that we ought to aim at more deliberate and comprehensive control of all human activities.

The dangers for the scientific community in neglecting the public and its worldviews are, however, significant. For, rather than a greater role and reach for science in policy making, what could very well happen is exactly the opposite. What is to prevent a shift from science directing society to society much more closely scrutinizing, and even controlling, the scientific community? A new political field, knowledge politics, would not promote science but attempt to control, or even outlaw, new scientific discoveries before they enter society.

The political order of a society convinced that its pattern of inequalities is based on merit, and is therefore just, rather than on claims to advanced knowledge, is in serious trouble once leadership falls on the shoulders of those claiming superior knowledge. Scientific rationality, at times, has nothing to do with democratic rationality. Democratic rationality requires its own set of instructions.

Nico Stehr

Karl-Mannheim Chair of Cultural Studies

Founding director, European Center for Sustainability Research

Zeppelin University

Friedrichshafen, Germany

I am appalled that you felt Colin Macilwain’s article on science and democracy merited publication. It purports to be about how the scientific community’s lack of engagement with the public is contributing to the current political situation in the United States. However, it provides no support for that opinion and identifies nothing about how he believes the scientific community ought to engage with the public. Instead, the article is filled with anecdotes and pejorative messaging on a variety of unrelated issues (such as those conflating trickle-down economics, the 2008 financial crisis, and government funding of research). It is reminiscent of what Michael Douglas’s character said in the movie The American President: “whatever your particular problem is, I promise you, [my opponent] is not the least bit interested in solving it. He is interested in two things and two things only: making you afraid of it and telling you who’s to blame for it. That, ladies and gentlemen, is how you win elections.”

I do have to acknowledge that Macilwain’s writings are an excellent example of what has worked so effectively in public engagement recently—that is, extrapolating anecdotal information and using it to villainize—and would be quite appealing to the “insurgent forces, on the left or right” that he feels scientists should engage with more. Good show. But with regard to his complaint that a Trump presidency is tantamount to “democracy [getting] ridden over a cliff” and that scientists would be partly to blame, it is far more plausible that a democratically elected Trump would be a result of antiestablishment public opinion biased by articles such as his.

It is also surprising that someone who is indignant about “increasingly arrogant…public pronouncements” is so confident that his own suppositions and anecdotal observations are representative of the multifaceted scientific community at large. In my own experience, Macilwain’s pejorative references to people “squabbl[ing] haplessly among themselves, each maneuvering into whichever position most elevates them in the eyes of their aristocratic paymaster” and “lazy [people] who crawl around after…elites, massaging their egos, defending their interests, and happy with the [money] thrown their way” are likely to apply to a much greater extent outside the scientific community than within it. Though I am shocked—shocked—to hear that in public forums, scientists sometimes kowtow to those who put the food on their tables.

It would be worrisome indeed if scientific information that ran contrary to the establishment were muzzled; however, it is patently obvious that it isn’t, as even the most cursory search of the internet or social media will show. In fact, it is far more fashionable for people to write provocative pieces that critique the establishment.

There are valid criticisms that can be made about communication, biases associated with funding, recognition of scientific and technical limitations, and the justification of government funding, but they deserve a much better discussion than what Macilwain provides. In fact, this journal has published a number of better examples in recent years. I would also like to recommend Daniel Sarewitz’s Frontiers of Illusion as an excellent book that provides a less politicized discussion of these points.

Todd Tamura

Petaluma, California

Medical crises

In “Civil Society’s Role in a Public Health Crisis” (Issues, Summer 2016), Jay Walker powerfully argues that the increased connectedness of the world, coupled with the growing significance of five “superforces,” threatens to overwhelm social order, policy makers, and public health officials in the event of a future pandemic. As one of the people assembled by Walker at the TEDMED 2015 session discussed in the article, I support his analysis and his recommendations for the vital steps needed to make the world safer.

I would amplify on Walker’s points with two additional considerations. First, although he is right to warn that a future public health crisis could see health systems and communities overwhelmed by massive miscommunication and misinformation, spread rapidly by social media and new technology, we should not forget that the immediate challenge is a lack of communication and lack of information in the places where epidemics are most likely to occur. Yes, the Ebola outbreak in the United States in the fall of 2014 had many of the elements of hyper-communication that Walker describes. But the response to the Ebola epidemic in West Africa was plagued by a deficit of communications, including an inability to educate the public about safe burial practices in regions without radio stations or other mass communication, an inability to pursue contact tracing in large areas where no cell service existed, and an inability to perform laboratory tests in communities cut off by poor roads and infrastructure. For the foreseeable future, epidemic threats are more likely to emerge from under-connected regions than over-connected ones.

Second, although I strongly endorse Walker’s proposal for a volunteer corps of experts to respond to outbreaks and support governments in mitigating the next pandemic, it also will be necessary to strengthen governmental and public-sector response mechanisms, which provide the first and last lines of defense. Reform of the World Health Organization must be pursued with relentless focus and determination. Plans to create an “African CDC,” modeled on the US Centers for Disease Control and Prevention, are vital. A “white helmet” battalion mustered by the European Union to provide a mix of medical and security support in an epidemic must be resourced robustly. And in the United States, bipartisan proposals for a federal Public Health Emergency Fund (and an epidemic response authority for the president on par with the current authorities to respond to other “natural disasters”) should be passed and generously funded. Civil society plays a crucial role in epidemic response, and that role can be increased. But government alone can provide critical services that are a necessary platform without which civil society cannot stand.

Readers within and outside of the science community should take Walker’s words of warning to heart: it is only a question of when, not if, the day of global reckoning with a pandemic will come. As he says, we must strengthen the ability of civil society to help respond to that challenge, and to tap the unique talents and resources of civil society to save lives when the crisis comes.

Ronald A. Klain

White House Ebola Response Coordinator, 2014–15

This is a very important article by a thoughtful and worried citizen. No ordinary citizen, Jay Walker is curator of TEDMED, founder of Priceline, and holder of several hundred patents. He has agonized for years over the threat of the next pandemic caused by some bug jumping species, and this article is his best explanation of how we should reframe our thinking.

Social media, webizens, civil society, and public health workers need to pay attention to Walker’s prophetic voice about how the tools of modernity may either amplify or blunt the biological effects of a pandemic, to humanity’s benefit or peril. Epidemiology must stretch the envelope to include mastery and understanding of the forces Walker articulately describes if we are to keep the world as safe and healthy as possible when the next Zika or Ebola or swine flu or SARS hits.

Larry Brilliant

Mill Valley, California

Author of Sometimes Brilliant, a memoir about working to eradicate smallpox

Pricing an ecosystem

In “Putting a Price on Ecosystem Services” (Issues, Summer 2016), R. David Simpson does an excellent job of articulating some of the limitations to using economic valuations as the basis for decisions about preserving and protecting ecosystems. Clearly, there are some situations in which quantitative benefit-cost assessment is an appropriate way to frame and consider how to manage natural systems. At the same time, recent decades have witnessed a growing application of quantitative anthropocentric framings to ever wider aspects of environmental decision making. This philosophical perspective is certainly not new.

The King James Version of the Bible instructs humans to “Be fruitful, and multiply, and replenish the earth, and subdue it: and have dominion over the fish of the sea, and over the fowl of the air, and over every living thing that moveth upon the earth.” In what some observers argue is a more accurate rendering of the original Hebrew, the New International Version reads “Rule over the fish in the sea and the birds in the sky and over every living creature that moves on the ground.” Over the centuries, many have interpreted this Judeo-Christian mandate to “subdue” and “rule” as basically saying that nature exists solely for the benefit of humans. That is very much the framing adopted when the preservation of nature is valued solely in terms of people’s willingness to pay or in terms of quantitative measures of “ecosystem services.”

The past 150 years have witnessed growing numbers of dissenting voices. Writing in 1875, John Muir challenged “the enormous conceit” of the doctrine that “the world was made especially for the uses of men.” Ninety years later, Rachel Carson argued: “The control of nature is a phrase conceived in arrogance, born of the Neanderthal age of biology and philosophy, when it was supposed that nature exists for the convenience of man.” And in 2015, Pope Francis argued that Christians have misinterpreted scripture and “must forcefully reject the notion that our being created in God’s image and given dominion over the earth justifies absolute domination over other creatures.”

Today, most people in the United States, including all but the most doctrinaire economists, share the view that there are limits to a strictly anthropocentric framing. The challenge now is to better articulate a philosophical framework for judging where those limits should lie; work to create a broadly shared social consensus about those limits; and develop norms that give decision makers more space to adopt decision rules that, when appropriate, go beyond quantitative anthropocentric benefit-cost analysis.

M. Granger Morgan

Hamerschlag University Professor

Department of Engineering and Public Policy

Carnegie Mellon University

In this article, R. David Simpson asks “What is driving the interest in ecosystem services?” Why has the language of ecosystem services become “standard jargon for environmental policy makers?”

As Simpson correctly observes, the ecosystem services bandwagon is not riding on the old Malthusian rails. Overpopulation, resource depletion, and mass starvation are so fin de siècle. Ecosystem collapse has also come and gone. As Simpson points out, the ecosystem services gestalt does not give people Malthusian fantods. But it also affronts the “conservation-for-conservation’s-sake ethic,” as Simpson says. So it is not driven by left-wing environmental ideology; indeed, it resists both Malthusianism and deep ecology.

It is not driven by right-wing free market ideology, either. The idea of using scientistic calculations to shore up regulations would not bowl over anyone at the Chamber of Commerce.

The idea of pricing ecosystem services is also infra dig in many academic circles. An effort to “value” anything outside markets is not consistent with received economic theory, and Simpson explains why.

So what is driving the interest in ecosystem services, given that it lacks both ideological bandwidth and disciplinary cred?

A Community on Ecosystem Services holds a biennial conference that (according to its program) extends over five days and includes “15 pre-conference workshops, six plenary sessions, 14 town halls, as well as hundreds of oral presentations and posters.” These posters and presentations bear out what Simpson observes: They do not ventilate on how ecosystem services “are essential to our very existence or that we have a fundamental moral obligation to preserve the habitats that provide them,” nor do they edify (not as much as you would expect, anyway) on the abstract theoretical foundations of the methodology. On the contrary, attention is paid to “the practical, and often local, value of ecosystems—on services such as pollination, pollution treatment, flood protection, and groundwater recharge.”

In this context, ecosystem services jargon provides a lingua franca—perhaps no more than a façon de parler—in which conservationists and industrialists can talk to each other without being limited by ideological or scientistic constraints. Both the left and the right—both environmentalists and industrialists—are equally able to manipulate this language; this draws them into playing the same language game. Because the pricing of ecosystems services is so open to manipulation, the extreme left (the Malthusians and the deep ecologists) and the extreme right (those who oppose any regulation outside of common law) are tempted to develop relevant language skills. This means they can be caught other than dead at the same conference.

So again, what is driving the interest in ecosystem services? The language of ecosystem services allows both environmentalists and industrialists—the left and the right—to coopt or, failing that, to tamp down their ideological fringes so that they can make sense to each other. The interest in ecosystem services is driven by centrism, incrementalism, rationalism, statism, moderation, and willingness to compromise. It may succeed, however absurdly.

Mark Sagoff

Professor of Philosophy

Senior Fellow, Institute for Philosophy and Public Policy

George Mason University

Energy efficiency

In “The Potential of More Efficient Buildings” (Issues, Summer 2016), Henry Kelly makes a critical and well-articulated case for a comprehensive research agenda for building energy efficiency. Building on this foundation, I would like to emphasize the potential of data analytics and the critical role of cities in achieving a more energy-efficient future.

Reducing energy use and greenhouse gas emissions in the urban built environment has emerged as one of the primary grand challenges facing society in the twenty-first century. Cities have been at the forefront of policy innovation to address climate change mitigation and long-term sustainability, as buildings account for the vast majority of urban energy use and carbon emissions. New York City has established an aggressive mandate to reduce greenhouse gas emissions by 80% from 2005 levels by 2050, primarily from its existing building stock. Other cities in the United States and internationally have adopted similar goals, including Austin, Boston, Chicago, London, Los Angeles, Philadelphia, and Tokyo.

As part of these efforts, the proliferation of city energy use disclosure laws (such as Local Law 84 in New York City) represents one of the most promising public policy tools to accelerate market transformation around building energy efficiency. Research has shown that similar disclosure requirements in other industries, such as fuel efficiency in the auto sector and nutrition labels for food served by chain restaurants, have led to changes in behavior by both producers/suppliers and consumers/end-users.

It is important to emphasize that building efficiency is not just a technological problem, but a behavioral one. In many cases, the technology already exists to achieve energy use reductions of up to 50% in buildings. Yet despite numerous initiatives to deploy proven technologies over the past several decades, persistent information asymmetries, rooted in an inability to measure and benchmark key sustainability indicators, continue to constrain investor decision making. They also limit capital market responses that could significantly lower the cost of capital for energy improvements and dramatically shift the calculus of building energy efficiency. Programs such as the US Environmental Protection Agency’s Energy Star label and the Green Building Council’s LEED rating system have raised market awareness for individual buildings, but they are not sufficient to reach the significant reduction targets being set at the city scale.

To achieve these goals, new data-driven methodologies are needed to identify and target efficiency opportunities at the building, neighborhood, and city scale. The emergence of large data streams from an array of policy initiatives and sensor deployments now enables researchers and policy makers to characterize the energy usage of buildings across building types, neighborhood socioeconomic and demographic conditions, and urban form and morphology. Greater access to, and reliability of, information can lead to shifts in market demand by creating competitive markets around energy efficiency and, potentially, water, waste, and other resource efficiencies. In fact, these non-energy quality-of-life measures represent a significant new frontier for research in urban energy systems.
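As a simple illustration of what such data-driven benchmarking can look like, here is a minimal sketch in Python; the buildings, values, and 25% threshold are entirely hypothetical, and a real analysis would also normalize for weather, occupancy, and building use, much as the Energy Star score attempts to do.

    from statistics import median

    # Hypothetical disclosure records: (building_id, building_type,
    # annual_energy_kbtu, floor_area_sqft). Real disclosure datasets,
    # such as those filed under New York City's Local Law 84, carry
    # many more fields.
    disclosures = [
        ("B1", "office",      9_500_000, 100_000),
        ("B2", "office",      6_000_000,  80_000),
        ("B3", "office",     14_000_000, 110_000),
        ("B4", "multifamily", 4_200_000,  60_000),
        ("B5", "multifamily", 7_500_000,  70_000),
    ]

    # Energy use intensity (EUI): annual energy per square foot.
    eui = {bid: energy / area for bid, _, energy, area in disclosures}

    # Group EUIs by building type to form peer groups.
    peers = {}
    for bid, btype, _, _ in disclosures:
        peers.setdefault(btype, []).append(eui[bid])

    # Flag buildings whose EUI exceeds their peer-group median by more
    # than 25% (an arbitrary illustrative threshold) as candidates for
    # targeted efficiency programs.
    for bid, btype, _, _ in disclosures:
        benchmark = median(peers[btype])
        if eui[bid] > 1.25 * benchmark:
            print(f"{bid} ({btype}): EUI {eui[bid]:.1f} vs. peer median "
                  f"{benchmark:.1f} kBtu/sqft -- efficiency candidate")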

Harnessing, integrating, and interpreting these urban data streams to address the societal challenge of improving energy efficiency will require an interdisciplinary effort that combines innovative analytical and modeling methods from engineering, data science, social science, and urban planning, elevating the place of social and behavioral sciences in the study of energy dynamics. It will necessitate a coordinated data collection and analysis effort that bridges the public, private, and academic sectors, across federal, state, and local jurisdictions, while shifting the focus from individual building systems to the dynamic interplay of social, economic, and political phenomena.

Constantine E. Kontokosta

Assistant Professor of Urban Informatics

Center for Urban Science and Progress

New York University

I fully support Henry Kelly’s promotion of new and more advanced building energy technologies as a means to improve building performance and reduce energy use and related carbon emissions. However, to fully realize these aims, we cannot rely on advancements in building technology alone.

For one thing, only a fraction of a building’s total energy use is related to the efficiency of the technology within it; the remainder is a function of the operations-related management practices and the behavior of the people who use the building (although technology can be used to help in these areas, too). Furthermore, as Kelly acknowledges, deep reductions to energy use and carbon emissions cannot be achieved by focusing only on new technology for new buildings. Existing buildings have to be contended with. It is estimated that about half of the world’s building stock in 2050 will consist of buildings that already exist today; for cities such as New York and London, the share made up by the existing building stock is even higher (75% and 80%, respectively).

What’s needed, then, is for building energy technology to complement other related policies and initiatives to reduce building-related energy use and carbon emissions. There is one solution in particular that is as promising as it is simple: data transparency. Mandatory disclosure of building performance data is a relatively new practice that requires owners of large properties, often commercial and multifamily residential buildings, to publicly disclose key environmental data, such as carbon emissions and energy and water use. Disclosure programs have been (or are in the process of being) implemented in dozens of jurisdictions across Europe, Australia, and North America, including in such cities as New York City, San Francisco, Seattle, Toronto, and Washington, DC.

Jurisdictions that have implemented mandatory benchmarking policies have reported energy savings and subsequent financial benefits in a relatively short time. Early research conducted by the US Environmental Protection Agency estimated that requiring owners to disclose their building’s energy performance alone can lead to energy savings of 7% over four years. In the first three years that New York City implemented its Local Law 84, which requires owners of large buildings to disclose energy and water performance, the median commercial building energy use intensity was reduced by approximately 10%. Building owners are also required to complete energy audits and undergo retro-commissioning every five to 10 years to help achieve these savings.

There are other environmental and financial benefits of mandatory public disclosure of building performance. It provides building owners and managers with an effective and consistent measure of their building’s performance and enables them to see how they rank compared with other similar buildings, thereby offering a benchmark for setting performance targets. In addition, utilities and service providers can use the data collected to create and tailor more effective energy conservation programs, policies, and incentives. As more energy data becomes available, it becomes easier to identify which buildings (and which building types) are underperforming. Energy conservation programs can then specifically target these buildings, making the task of improving their performance more manageable. To complement their building performance disclosure programs, some cities have created engaging and data-rich visualization and mapping tools available for free online. Notable examples include Philadelphia’s Energy Benchmarking map (http://visualization.phillybuildingbenchmarking.com/) and New York City’s Energy & Water Performance Map (http://benchmarking.cityofnewyork.us/). Most importantly, disclosure of building performance data allows the market to introduce mechanisms that value environmental efficiency in purchasing, leasing, and lending decisions.

Although I agree with Kelly’s conclusion that “building energy technologies are among the most important and intellectually exciting technology challenges facing the world today,” I would go even one step further: some of the most intellectually exciting challenges facing the world today are provided by the built environment as a whole.

Mark Bessoudo

Research Manager (Sustainability & Energy)

WSP Canada Inc.

Toronto, Ontario, Canada

Eugenics warning

In “The History of Eugenics” (Issues, Spring 2016), Daniel J. Kevles rightly reminds us that many biotechnologies initially viewed as Frankensteinian, such as in vitro fertilization, eventually become commonplace. Whether or not CRISPR/Cas9 will follow an analogous path of routinization remains to be seen, and will depend on complex issues of cost, feasibility, safety, accessibility, and social acceptability. To date, the ethical brakes on human germline modification have slowed but not stopped the process. Much like the Human Genome Project in the 1990s, CRISPR/Cas9 has sparked a strong current of optimism about the promise of genomics to ameliorate and possibly eliminate certain diseases. But what are the potential ugly undersides of such advances?

The eugenic past can be a useful compass when considering present and future uses of genetic technologies. The lessons are clear-cut when it comes to the overreach of governments, including the implementation of policies that compelled individuals to relinquish their reproductive autonomy. Compulsory sterilization laws, which were passed in 32 states from 1907 to 1937, and resulted in more than 60,000 reproductive surgeries, overwhelmingly performed on poor, undereducated, and minority women and men, epitomize this egregious facet of eugenics.

Human germline modification seems quite distant from the coercive practices that enacted discrimination against socially vulnerable groups based on ostensible genetic inferiority. But tethering definitions of eugenics too closely to the state can obfuscate the extent to which presumptions of fitness and unfitness can insinuate themselves into broader social value systems.

Responding to Kevles’s interest in the increasing commercialization of biomedicine, we might ask how lessons from government-driven eugenics can be applied to the ethical quandaries associated with twenty-first century genetic and reproductive technologies, many of which arise as for-profit biomedical ventures. On one hand, when most couples incorporate genetic selection into their reproductive journey—via in vitro fertilization or preimplantation genetic diagnosis—these choices can be filed under the categories of personal privacy and reproductive rights. However, when genetic selection is approached from the perspective of disability rights and awareness, the slope becomes much slipperier and continuities between the past and present more visible.

The past few years have witnessed the dramatic growth of non-invasive prenatal screening, which allows cell-free fetal DNA in the maternal blood, extracted through a simple blood draw, to be screened for chromosomal anomalies such as trisomy 21 or trisomy 13. Unlike the development of amniocentesis in the 1970s, which was built on partnerships between federal health agencies and academic medical centers, non-invasive prenatal screening entered the clinical domain as a commercial product, marketed by a small number of rival biotech companies. Disability studies scholars have criticized how glossy advertisements for Sequenom’s MaterniT21 PLUS include only fair-skinned, picture-perfect babies, reinforcing the presumption that babies deemed unfit, those with physical or intellectual disabilities, are undesirable humans. A 2015 scientific study estimates that from 1974 to 2007, pregnancy terminations following a trisomy 21 diagnosis (made using amniocentesis or chorionic villus sampling) reduced the population of individuals living with Down syndrome in the United States by approximately 30%.

If we want to ensure that history does not repeat itself, then we need to recognize how people with disabilities experienced discrimination based on eugenic ideas of human worth and contemplate how such prejudices might manifest themselves in contemporary public policy and medical marketing.

Alexandra Minna Stern

Professor of American Culture, Women’s Studies, History, and Obstetrics and Gynecology

University of Michigan

Future of space

In “Reshaping Space Policies to Meet Global Trends” (Issues, Summer 2016), Bhavya Lal provides an insightful and comprehensive view of the changing landscape of space. The article is drawn from the findings of a report produced by Lal and her colleagues at the Science and Technology Policy Institute for the National Aeronautics and Space Administration (NASA) and the Office of the Director of National Intelligence (ODNI). It outlines both the opportunities and challenges afforded by a myriad of traditional and new space actors—including those privately funded and dubbed NewSpace, as well as domestic and foreign government agencies heretofore at best peripherally involved with space activity—who are offering products, services, and capabilities previously afforded to only a few nations.

Following her appraisal of this changing landscape, Lal offers three policy options to US agencies involved in space activities: “They can accept the trends as inevitable, get ahead of them, and pre-position the nation to benefit from them. They can actively fight the trends, by attempting to reverse the situation by starving the emerging private sector to keep the government as the key hub in the space ecosystem. Or they can drift, doing nothing at all, and react on a one-off basis to the trends that immediately affect a particular agency.”

I suspect that NASA would be eager to pursue option No. 1. Having the private sector take over some of the space activities it has been charged with by default over the years would free up NASA money to pursue new, or already approved but underfunded, programs. Other agencies, however, including ODNI and those primarily concerned with security issues, might be less accepting. And therein lies the rub.

Consequent to the 2013 Chinese suborbital launch that nearly reached geosynchronous orbit, where many exquisite-class US satellites owned by the intelligence and military communities reside, those communities began to see the “sanctuary” of this space region as jeopardized. The security communities conducted a Strategic Space Portfolio Review in 2014, and talk of the inevitability of space war, of space control, and of the US needing to dominate space ramped up to the level of the George W. Bush years. It is important to note that other space players, those Lal suggests are going to be key in the future, were not included in that review. If the security communities had accepted the reality of the changing space landscape that Lal describes so well, they would have been. But there are forces of inertia and vested interests within the security-related communities that keep them mired in the status quo.

Space is, indeed, “congested, contested, and competitive,” as Lal references. NASA, other government agencies, and the private sector are interested in managing the congested (more stuff) and competitive (more actors) aspects, with competition sometimes even driving innovation. But the security communities are focused—almost exclusively—on the contested (development of counterspace capabilities by other countries) aspect of the space environment. Whereas strategic restraint had been the policy intended to deal with that issue, it now appears discarded in favor of returning to the eternal and unreachable quest for “control.”

But it is not just the United States that can favor the security aspects of space over the development, exploration, and commerce aspects. Europe is increasing its military space spending. China and India are pursuing across-the-board space security capabilities. Russia intermittently plays the spoiler in United Nations-led efforts to agree on voluntary best-practice guidelines for maintaining the long-term sustainability of the space environment. For real change to occur, all spacefaring nations need to prioritize inclusive diplomacy over chest-thumping military “control” efforts.

It is heartening that ODNI commissioned the study by Lal and her colleagues. ODNI, and its Pentagon cohorts, need to read it carefully and, as Lal suggests, get ahead of the curve rather than trying to turn back time.

Joan Johnson-Freese

Providence, Rhode Island

Author of the forthcoming book Space Warfare in the 21st Century: Arming the Heavens

I enjoyed Bhavya Lal’s article on US space policy. However, I was struck by two omissions, which are related and, in my view, will have fundamental implications for space policy making moving forward in the United States and elsewhere.

The first omission is any explicit observation that the primary impetus for space research and development is no longer military. As a result of this shift, there has been an increase in unilateral private participation and international collaboration in the space sector (and, before that, an increase in contracting-out during the shuttle program). The second omission is the high probability that increased unilateral private-sector participation and international collaboration will mean a shift in US space policy from a predominantly distributive function to a predominantly regulatory function, just as occurred with the demilitarization of the Internet.

This said, some key problem areas for space policy makers moving forward may include one or more of the following: how space itself is used by private companies, who these companies’ clientele are, how foreign companies and governments use space, how space services can be made safe for workers and customers, and how property rights (intellectual and otherwise) will be distributed across international, multi-sector, and multi-industry space endeavors.

Craig Boardman

CenterScience

Atlanta, Georgia

Chemical safety

Beth Rosenberg’s sorry tale related in “Seventeen Months on the Chemical Safety Board” (Issues, Summer 2016) raises a lot of questions about how society regulates potentially dangerous industries, what it means to believe in a democratic process, and the impact of politics and power in regulatory and investigative agencies of government. At one level, this is a tale all too familiar: a plant blows up, people are killed and hurt, investigations are done, fingers are pointed, explanations are given. There are calls for reform—do this and it won’t happen again, adopt these protocols and we can lessen the likelihood of mayhem. At the very least, we might expect the best from the experts. In the case recounted by Rosenberg, things didn’t happen the way they should have.

The Chemical Safety Board (CSB) has a professional staff and a mandate to investigate incidents and disasters in the petrochemical industry—a whole slew of chemists, public health professionals, engineers, and safety specialists who work under the direction of the board, whose members are appointed by the president and confirmed by the Senate. What could possibly go wrong? As Rosenberg points out in this case: a lot. The CSB can only recommend; the enforcement agencies (the Occupational Safety and Health Administration and the Environmental Protection Agency) have far too few inspectors; punishments and fines are often symbolic or puny; and the industry is critical to the economy and politically powerful. Even given these constraints, the initial response by the CSB to the Chevron explosion at the heart of Rosenberg’s tale was wrong and wrongheaded. Why? Because something was rotten at the CSB. Its advocacy of the “safety case” as a solution to the problems of aging plants and poor management in petrochemicals paid little attention to the true causes of the problem and effectively ignored calls for greater oversight or community input. Despite the objections by Rosenberg and others to this approach, this is what the CSB initially proposed. Changing that hit all manner of obstacles.

For years, the literature on regulatory agencies has stressed that such agencies end up being captured by the constituents they are supposed to regulate. Companies get to be pals of the regulators; understandings are arrived at; regulations and protections for workers, consumers, and the public are weakened. But at the CSB something else was going on. The agency became captured by its own culture and interests: the staff (particularly senior staff) were more dedicated to maintaining their jobs and reputations than to doing the right thing, and the continued existence of the CSB became more important than truth or action. This capture was compounded by a failure of leadership, a lack of democracy at the uppermost level of the board, and political maneuvering that ensured critical voices were silenced.

The lesson? Society needs to pay more attention to the politics of organizations within federal agencies. These issues deserve renewed study. We need a better understanding of what the political scientist Graham Allison has called bureaucratic politics—that is, how cultures develop in agencies, what values are promoted (or not), how where you sit becomes more important than where you stand.

If the nation wants to have safer industries and to protect workers and communities, we need good regulations, strongly enforced. Of course, this means the support of Congress—bigger budgets for more inspectors, and fines and punishments for transgressions that actually hurt. It also requires open discussion involving workers and the community, and creative approaches to understanding all aspects of what leads to explosions, fires, and disasters.

But what this tale of the CSB also shows is that we should turn our attention to the politics of bureaucracy and the way power and culture develop in places such as the CSB. We need to understand how agencies can capture themselves. This is more than calling for greater public accountability; it is, as Rosenberg notes, a call for a staff culture that puts public service before private interest, leadership that is committed to fairness and integrity, and training and more training to ensure that scientists and professionals understand their public responsibility.

John Wooding

Professor, Department of Political Science

University of Massachusetts, Lowell

I work in the chemical manufacturing industry. Our companies manage a great deal of risk in the production of the goods we sell. External checks and inputs on our operations in the interest of public safety and health are necessities of doing business. Nevertheless, an adversarial tension will always exist in the relationships between the industry and regulatory agencies. Thus, it is vitally important that all stakeholders view as unimpeachably objective any independent government body that is charged with investigating and analyzing the root causes of significant petrochemical accidents and making recommendations for future prevention not only to regulators, but also to industry, unions, and state and local governments. In this, the CSB faces problems.

Compared with the National Transportation Safety Board, the CSB has struggled throughout its relatively short history to achieve the same degree of credibility. It has never been adequately funded. And it has rarely been able to get along with itself. Internal conflicts and disagreements between board members and chairs date back to 2000 when, to settle an internal power dispute, the Department of Justice’s Office of Legal Counsel had to issue a ruling (known as the “Moss Opinion”) clarifying the roles, responsibilities, and boundaries between the chair and the members of the board. Thirteen years later, the CSB found itself going back down the path of poor leadership and management dysfunction, as Rosenberg clearly describes.

The problem with political appointments is that they are, well, political. Rosenberg calls for a more effective process for identifying and vetting leaders and preparing new board members for their roles. This is much-needed reform, but it is a crapshoot to implement when ideology is allowed to trump knowledge and leadership experience in the selection of public servants. The nation got lucky with Rosenberg, who demonstrated integrity, courage, and personal sacrifice for public service at a time when it was greatly needed.

Scott Vincent

Butler, Pennsylvania

The nation has too often seen the toll that explosions, leaks, and other accidents within the petrochemical industry have taken on the health, safety, and livelihoods of workers, first responders, neighboring communities, and, at times, the industry itself. The Environmental Protection Agency (EPA) estimates that approximately 150 catastrophic accidents occur each year in regulated industrial facilities. Less severe accidents occur regularly: in the little more than two years between April 2013, when an explosion at a fertilizer facility in West, Texas, killed 15 people, and August 2015, 425 chemical accidents were reported to the EPA. Others likely went unreported.

Rosenberg has provided an accessible, candid, and thoughtful chronicle of her service at the CSB, a small but important arm of the government’s efforts to protect people working in and living near petrochemical facilities. She efficiently summarizes the context, systems, and landscape in which the CSB—and other health and safety organizations—operate.

As a government body with no regulatory authority, the CSB has the mission and authority to investigate root causes of accidents in the petrochemical industry and make evidence-based recommendations to the full range of stakeholders that are in a position to take some action that might prevent similar situations in the future. “Root cause” is the important operative phrase here. As Rosenberg eloquently points out, a root cause may go well beyond a technical issue and extend to a failure of or flaw in the facility’s management structures, practices, and communications.

Given the size of the petrochemical industry, its workforce and geographic spread, and the sheer potential for significant harm, the CSB has a challenging job, especially with its small budget and staff. It may fly under the radar for most people, but it has unique reach and deserves broad support.

The immense rewards of public service, along with its many challenges, are well known to people who have dedicated their time and talent to working on behalf of the public interest within government agencies. Rosenberg highlights her experience with internal politics, staff and board relationships, and a work environment that she believes hampered her effectiveness and that of the agency in fulfilling its mission. But her commentary offers more than an inside look at the challenges she and others faced at the CSB. She offers insights and suggestions that can not only enhance the functioning of the CSB, but also help to better prepare the scientists entering public service at the federal agencies on which we rely to protect our health, safety, and environment.

Kathleen Rest

Executive Director, Union of Concerned Scientists

Former Acting Director, National Institute for Occupational Safety and Health

Celebrity science

In March 2014, Toronto Public Health launched a social media campaign to pressure the popular ABC daytime television talk show, The View, into firing outspoken US celebrity Jenny McCarthy as one of its hosts. Health experts worried that McCarthy’s strident anti-vaccine views would influence Canadian parents nervous about vaccination and undermine the efforts of medical officials to increase vaccine uptake at a time when preventable diseases have been making a comeback.

McCarthy isn’t the only celebrity to publicly espouse views at odds with mainstream science. In the vaccine wars, we also have Jim Carrey and Bill Maher; on homeopathy and New Age thinking, there’s Oprah Winfrey; Gwyneth Paltrow has the cleansing market cornered; and the increasingly fraught world of activism opposing genetically modified organisms is occupied by a range of stars, including Michael J. Fox, Gisele Bündchen, and Neil Young.

In “Science, Celebrities, and Public Engagement” (Issues, Summer 2016), Timothy Caulfield and Declan Fahy explore the mixed influence of celebrity culture on the public’s understanding of science and the potential benefits of celebrity engagement for science policy, literacy, and communication. Do celebrities influence what citizens believe about science? If so, how can these effects be determined and measured? Can celebrity influence be mobilized to advance a pro-science agenda? These are important empirical, normative, and strategic questions.

The authors’ observation that celebrities have achieved a state of pervasive cultural influence is consistent with decades of critical social science research. Yet, most academic observers, echoing the arguments of mainstream cultural critics, are often too quick to dismiss contemporary celebrity culture as thin, vacuous, and fleeting (as opposed to some make-believe bygone era when public discourse was deep, durable, and robust). For many observers, the celebrity embodies the artifice of contemporary pop culture, while celebrity power is linked to the decline of modernity and its faith in reason, knowledge, and expertise. As the Toronto Public Health campaign suggests, celebrities may also pose a clear and present danger to the public’s health.

Caulfield and Fahy present a more complicated picture. They acknowledge the worrisome effects that celebrities can have on our health. Yet, elements of the argument are also measured and pragmatic. They decry and mock the influence of some celebrities, but point to examples of others who have made important contributions to public science and health literacy. Gwyneth Paltrow (about whom Caulfield has written extensively) can be ridiculed for her goofy advice to women about diet and sexual health, yet the normalization of popular discourse about HIV and AIDS, particularly among men, could not have occurred without the determined advocacy of Earvin “Magic” Johnson. They also discuss examples of scientists (e.g., Neil deGrasse Tyson and David Suzuki) who have fashioned themselves into media stars with celebrity-like appeal of their own. The boundaries between celebrity and science are thus becoming increasingly blurred, and Caulfield and Fahy capture this fuzziness well.

The British cultural theorist Raymond Williams famously remarked that “we live in an expanding culture, yet we spend much of our energy regretting the fact, rather than seeking to understand its nature and conditions.” Like Williams, Caulfield and Fahy suggest that celebrity culture is better viewed as a phenomenon to be examined and understood than as a problem at which we should sneer (sneering, after all, doesn’t make what is there go away). Their call for more research into the nature and impact of celebrity culture on public understanding of science is important if we are to better know how citizens make sense of complex scientific and public health issues in our increasingly media-saturated environment. Although this requires attention to the influence of celebrities on science and health, it must also include examining the myriad ways in which celebrity media culture is transforming science and science communication.

Josh Greenberg

Director, School of Journalism and Communication

Carleton University

Ottawa, Ontario, Canada

Politics of the platform economy

Not so long ago, the study of economics was referred to as “political economy.” The term reflected scholars’ recognition that economic behavior and activity could not be studied without consideration of the political context. It was understood that politics and economics shaped each other in intricate ways, and that choices in one area influence choices in the other. Subsequently, however, the economics profession chose the pathway of methodological individualism, and the study of economic behavior became mostly dissociated from consideration of politics.

In “The Rise of the Platform Economy” (Issues, Spring 2016), Martin Kenney and John Zysman remind us that political economy never really went away. And the emerging economic reality of platform economies that they describe is closely connected to political choices. In particular, the choices that citizens, policy makers, and business leaders make about technological change, risk, benefits, and costs will affect the economic activity that occurs under the auspices of the platform economy.

The authors highlight social policy as one example. In adjusting to platform economy effects, different countries will make different political choices about the relationship between social protection and economic dynamism. If greater individual initiative and more entrepreneurship are expected as a result of the platform economy, what evolving role should social policy play?

There is evidence that enhancing social protections (e.g., unemployment insurance, family and medical leave, health care, food stamps) can actually raise the level and quality of entrepreneurship. Research has found such effects across different countries. How will the United States, Europe, and other regions make political decisions about social policy and economic dynamism? The answer will shape the effects of the platform economy.

Competition policy is another area in which politics will be critical for determining what the platform economy ultimately looks like and how it affects economic activity. Decisions about where to draw regulatory lines, how to even define those lines, and how to balance different types of competition are inherently political.

The political fight over the platform economy is already occurring—mostly at the local level, where firms such as Uber and Airbnb often act as if they are somehow post-political businesses. Google and Facebook, too, are often treated as beyond politics. Criticism of the European Union’s antitrust approach to Google, for example, focuses not on the political economy question of Google’s competitive stance, but rather on the shortcomings of European economies in failing to produce a company as wonderful as Google. Of course, firms such as Uber are intensely political and often erect competitive barriers around themselves.

As a result, the political debate on the platform economy is sometimes framed as new versus old, disruptors versus incumbents, with one prima facie better than the other. Framing the debate as what it really is—a political choice, a question of the distribution of costs and benefits—might be unrealistic, but could be more fruitful in resolving the issues raised by Kenney and Zysman.

This is not to suggest that the platform economy is wholly negative. There will undoubtedly be positive effects from the rise of the platform economy. The nature of those positive effects—and how they balance the negative implications—is entirely a political question.

Dane Stangler

Vice President, Research & Policy

Kauffman Foundation

Crime and Immigration: New Forms of Exclusion and Discrimination

Mary C. Waters gives a lecture on The War on Crime and the War on Immigrants: New Forms of Legal Exclusion and Discrimination in the United States

In the United States today, two trends are intertwining in ways that affect both society and the research community. The first trend is the nationwide increase in the number of people incarcerated. This was the subject of a 2014 National Research Council (NRC) report titled, appropriately, The Growth of Incarceration in the United States: Exploring Causes and Consequences. The second trend is the increased inflow and integration of immigrants nationally. This was the subject of a 2015 NRC report titled, again appropriately, The Integration of Immigrants into American Society (which I co-edited).

Despite the importance of these trends, however, most academics and policy makers have remained largely “siloed” in how they think about them. There are experts on incarceration, which is often understood in terms of its disparate impact on African Americans, and there are experts on immigration, where the problem of undocumented status is often associated with Latinos. The two camps seldom meet. Yet I argue that these issues are intimately linked, and it is imperative for stakeholders in all quarters to more fully examine how they are separating people across society—and how we can make things better.

Let’s start by thinking about African Americans and Latinos in terms of what has happened since the civil rights movement. Since the de jure end of discrimination in 1965, there has been a rapid growth of a black middle class, ushering in a lot of social mobility for African Americans and inclusion in the larger society. But since 1980, there has also been a rapid growth of mass incarceration of African Americans. Blacks are now 12% of the nation’s total population, but they represent 38% of the prison population. In 2010, Michelle Alexander wrote a very influential book, The New Jim Crow, in which she argued that the more things change, the more they stay the same, and that the incarceration of African Americans was a continuation of the kind of discrimination and racism that characterized Jim Crow in the South.

There is also a parallel debate about Latinos, race, and ethnic discrimination. A key question for scholars has been whether Latinos are making it in America or are being racialized and excluded as they are incorporated. The situation of undocumented immigrants does disproportionately affect Latinos: an estimated 77% of undocumented people in the United States are from Mexico, Central America, or South America, although there are undocumented people from around the world.

What I want to argue is that social scientists have been using a racial lens when examining changes in mass incarceration and immigration. By lens I mean the variables that they—we—put front and center, the ways in which we argue about these topics, and the ways in which we think about them. Instead, I think we need to return to what was a prescient argument that sociologist William Julius Wilson made in 1978 when he talked about how we should think about race and class—and now we should add legal status—and how they intersect.

Today, it seems, legal exclusion is coming to replace racial exclusion as the major axis of differentiation in society. I think this legal exclusion is racially tinged, of course, but it is fundamentally class-based. It allows for the full inclusion of African Americans and Latinos in the upper reaches of society and it excludes vast numbers of people at the bottom level of society. So I think when we focus on race, we miss the growth of widespread legal discrimination.

This message was captured by Wilson when he wrote about economic changes in US society. He said, “A new set of obstacles has emerged from basic structural shifts in the economy. These obstacles are, therefore, impersonal but may prove to be even more formidable for certain segments of the black population. Specifically, whereas the previous barriers were usually designed to control and restrict the black population, the new barriers create hardships essentially for the black underclass; whereas the old barriers were based explicitly on racial motivations derived from intergroup contact, the new barriers have racial significance only in their consequences, not in their origins.”

Prison outbreaks

When Wilson was writing about the growth of the underclass and the economic hardships its members were facing, the nation had not yet experienced mass incarceration, which really took off in the 1980s, accelerated through the 1990s and 2000s, and continues still. Today, approximately 7 million people are under supervision in the United States—2.23 million in jail or prison, another 851,000 on parole, and almost 4 million on probation. That adds up to one in 33 adults.
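As a quick check of that arithmetic (the adult-population figure is my own approximation, not from the lecture; the US population aged 18 and over was roughly 235 million around 2010):

$$2.23\,\text{million} + 0.85\,\text{million} + 4.0\,\text{million} \approx 7.1\,\text{million}, \qquad \frac{7.1\,\text{million}}{235\,\text{million}} \approx \frac{1}{33}.$$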

In this regard, the United States is an outlier within the international community. The United States has 707 prisoners per 100,000 residents, compared with 148 in the United Kingdom, for example. This is true, in part, because the United States has so many African Americans in prison: there are 2,300 black prisoners per 100,000 African Americans. But even looking only at the incarceration rate for whites, the United States would still be an international outlier. The nation has 400 white prisoners per 100,000 whites in the population, still more than double the figure for leading European comparisons.

The other way to think about this is the risk of imprisonment. The risk is extraordinary for young, less-educated black men. For example, 68% of those born between 1975 and 1979 who are high school dropouts have been imprisoned or can expect to be imprisoned during their lifetimes. Since 1990, the disparity between blacks and whites in prison has started to decline, not because fewer blacks are being imprisoned, but because the white imprisonment rate has been rising faster than the black imprisonment rate.

There is another aspect of incarceration that greatly affects society: the growing number of individuals who leave prison as ex-felons and return to their communities. Collectively, they represent a significant portion of the nation’s population. In fact, the total number of ex-felons is equal to the combined populations of Arkansas, Idaho, Nebraska, Nevada, New Mexico, Utah, and West Virginia. These people are in many ways legally excluded from rights that the rest of US society enjoys.

As these developments were unfolding for African Americans, undocumented migration into the United States also grew very rapidly, especially during the 1990s, with the inflow hitting its peak right before the recession. The inflow has leveled off since 2008; net migration has been essentially zero in recent years, and the total number of undocumented immigrants has declined from its highest levels. There are now 11.3 million undocumented immigrants, the equivalent of the combined populations of Alaska, Delaware, the District of Columbia, Hawaii, Maine, Montana, New Hampshire, North Dakota, Rhode Island, South Dakota, Vermont, and Wyoming.

In general, immigrants enter societies that have developed institutions and ideologies to accommodate past conflicts and demographic realities. In the United States, this means immigrants encounter the legacies of the nation’s past failures in dealing with race and successes in dealing with immigration. The usual story about immigration is that before 1965, the United States was successful in integrating waves of immigration from all over the world. But since then, the thinking goes, the legacies of racial discrimination remain, and non-white immigrants are believed to have a harder time integrating. Basically, scholars have been worried about the racialization of post-1965 immigrants and the ways in which discrimination might prevent their inclusion in society. We have tended to worry about immigrants since 1965 because of their race, not because of their immigrant status.

The recent NRC report on the integration of immigrants into American society examined this issue across many dimensions. We looked at socioeconomic, political, sociocultural, and spatial integration, as well as family and health, among other variables. And on every measure, we found progress over time for the first generation, and progress between the first and the second generation: the immigrants were becoming more integrated into society, more like the native-born. So, over time, the immigrants were seeing a lot of progress in terms of education, occupations, the neighborhoods they live in, and rates of intermarriage. All of this is very much in line with how the nation has integrated immigrants in the past.

The one exception we found centers on undocumented immigrants and their children. We found, in essence, that the nation actually has what might be termed a non-integration policy directed toward them. It’s also a failed policy, because the nation doesn’t completely keep them from integrating, but it stops them from full integration. We have people who are living in the United States forming families, working, going to church, and sending their children to school, and yet they are prevented from integrating on very many dimensions, especially politically.

Right now, about one-third of immigrants are citizens, one-third can become citizens, and one-third are undocumented and have no path to citizenship under current law. Inequality between the undocumented and others has grown a great deal. Inequality between citizens and legal immigrants has also grown in a legal sense, so that a bright line has essentially been drawn between green card holders and citizens.

The Obama administration has deported almost 2 million people, more than any other administration, and the numbers being detained for possible deportation have been growing rapidly. But as work by Douglas S. Massey, a professor of sociology and public affairs at Princeton University and co-director of the Mexican Migration Project, makes clear, these efforts have not really reduced the rate of initial migration. What pouring money into border enforcement did accomplish is that it reduced the rate of return migration, changing a circular migration pattern into a permanent migration pattern. Migrants also began to cross at different parts of the border where they had never crossed before, especially in the deserts of Arizona, making it much more dangerous and leading to a national distribution of undocumented immigrants. So, a population that used to be concentrated in places such as Texas and California is now in all 50 states.

Another development as the government cracked down on immigration is that people who have successfully made their way to the United States have not gone back to Mexico or other Latin American countries, which has resulted in a growth in settled immigration. People who might want to come to the United States only for temporary or seasonal work now stay permanently because it is so difficult to leave and return. Sixty-one percent of immigrants have now been in the United States for 10 years or more, and only 16% have been here less than five years. So the undocumented population is becoming more and more settled: people with families who are here in this kind of liminal status of non-integration.

There are 5.2 million children who have an undocumented parent; 4.5 million of those are citizens. Children with undocumented parents make up one-third of all immigrant-origin children in the United States, about 8% of all US-born children and about 7% of all kids in the K-12 system. This is a large number of children in a very uncomfortable predicament.

Much of the data on immigration has come from a wave of new research focusing on what the mass incarceration literature has called collateral consequences—the effects that reverberate out from imprisonment. Researchers have found many collateral consequences that flow from parents’ undocumented status onto their families. For instance, children of the undocumented have lower levels of cognitive development in early and middle childhood, as well as greater mental health issues in adolescence. Frank D. Bean, a professor of sociology and director of the Center for Research on International Migration at the University of California, Irvine, and his colleagues found that adult children of the undocumented are less likely to finish high school, and overall they achieve 1.25 fewer years of schooling than comparable children whose parents have achieved legal status. In sum, this research is revealing that growing numbers of families of the undocumented are experiencing negative consequences of the lack of integration.

Legal penalties expand

There are also rising legal penalties and increased involvement of the legal system in dealing with the undocumented. Overstaying a visa is a civil violation, not a criminal act; people in this category make up about half of those who are undocumented. Entering without inspection—that is, sneaking over the border—is a misdemeanor. Since 1996, if one is caught, sent back, and then re-enters and is caught again, that is a felony. The government has significantly increased the consequences for being undocumented.

Also since 1996, the government has mandated automatic deportation for any noncitizen convicted of an “aggravated felony”—even people with a green card who have been in the United States for a long time. The provision is retroactive, and there is no recourse for false convictions. So, if you’re convicted of any one of 50 different crimes—a list decided by Congress, which adds to it over time—you will be deported. And these need not be actual felonies; the list includes minor transgressions such as filing false tax returns, simple battery, and failing to appear in court.

Or consider this case: prior to 1996, someone caught trying to sneak across the border would simply be returned to the other side. The process was called voluntary departure and there was no criminal charge. If one was caught inside the country, judges had discretion about what to do. The chances of being caught in the country were very low, and in 1990 there were only about 30,000 deportations.

After 1996, the government introduced a series of laws that effectively integrated the local criminal justice system with immigration enforcement. Now, whenever local police make an arrest—or for some other reason collect someone’s fingerprints or name or ID—that information is sent to the Interstate Identification Index maintained by the Federal Bureau of Investigation (FBI). If the person is found to be undocumented, Immigration and Customs Enforcement (ICE) will issue a two-day hold and then commence deportation proceedings.

Scholars have dubbed this intersection between crime and immigration “crimmigration.” Since 1986, the United States has spent $187 billion on immigration enforcement; in 2012 alone, the nation spent $18 billion. That 2012 total is approximately 24% higher than the combined spending that year for all other principal federal criminal law enforcement, including the FBI, the Secret Service, the Marshals Service, the Bureau of Alcohol, Tobacco, Firearms, and Explosives, and various other functions related to drug enforcement.

More people are detained each year in the immigration detention system than are serving sentences in Federal Bureau of Prisons facilities for all other federal crimes. The Border Patrol and ICE together refer more cases for prosecution than all Department of Justice law enforcement agencies combined; the Border Patrol alone refers more cases than the FBI. Immigration now accounts for over half of the federal criminal workload, and immigration laws empower criminal prosecutions without the usual constitutional protections.

Someone detained for an immigration violation is not entitled to the same criminal-procedure protections that an arrested citizen would have. For example, ICE puts a hold on any undocumented person arrested, which means the person is not eligible for bond and is therefore detained until trial. Moreover, one can be arrested without probable cause of a crime, because the government can simply allege immigration violations, and one can be sentenced without hope of probation, because ICE automatically considers undocumented persons a flight risk. And someone convicted of a crime must serve the full sentence and then be deported.

ICE now has a mandated quota of 34,000 beds per day, meaning the agency is obligated by law to have at least 34,000 people in detention every day of the year. In 2013, the United States detained—not imprisoned, but detained—441,000 people in private detention facilities and in local jails, state prisons, and federal facilities. The federal government pays state and local prisons a set amount per night for every person detained. Even as the nation is supposedly trying to reduce mass incarceration and shrink the population of state and local prisons, the federal government is filling those beds. States such as Louisiana regularly take in large numbers of immigration detainees from across the country to help cover the cost of maintaining their prisons. This is an attractive option for states that are releasing prisoners to save money: they receive federal money for every immigrant detained in their state and local prisons.

The total number of removals (the legal term for deportations) has skyrocketed under the Obama administration. In 2013, more than 400,000 people were deported. The risk of deportation is felt widely in many communities, especially those with large Latino populations. As reported in the New York Times: “It can be risky, for example, simply to live in an immigrant neighborhood in a house or apartment where a previous tenant may have had an old deportation order. Immigration agents may show up at the door with a photograph of someone who hasn’t lived there for years, roust people from bed to demand papers and take away in handcuffs anyone who cannot produce the right documents. In the aftermath of such raids, relatives, employers, even lawyers have to struggle to find out where those being detained are being held.”

The threat of deportation also extends to children in indirect ways. Researchers have conducted a number of studies on the prevalence and effects of childhood stress from growing up in a poor neighborhood exposed to extensive violence and uncertainty. The studies have demonstrated that there are many lifelong consequences of growing up under those conditions, and even intergenerational epigenetic changes in children who have lived with this. I think that there is going to be a wave of research showing some of these same effects on kids growing up with the uncertainty of having undocumented parents.

The undocumented young people, largely Mexican Americans, who organized for the Dream Act adopted the model of the civil rights movement, demanding civil rights for themselves and their families. But I think that using this racial lens to understand things obscures the way in which race can be a resource rather than an impediment for legal immigrants and citizens of Mexican origin, and it can lead to more pessimistic predictions about their future. In addition, taking this view obscures the shifting line of repression in US society from racial phenotype to legal exclusion.

Legal aid

US citizens who are not white do have actual legal protection through the Justice Department’s Civil Rights Division, even though some of that protection has been undermined in recent years. The nation still has strong norms against racial hate speech (recent political rhetoric notwithstanding). There are still affirmative action and diversity programs, and there is still—although it is being watered down—the Voting Rights Act to assure representation.

These protections have been resources for immigrants as well. The illegality of racial profiling puts limits on some of the most draconian state anti-immigrant laws. And the Supreme Court, when it heard the challenge to Arizona’s latest immigration laws, made clear that racial profiling is the line that could not be crossed. This is partly why anti-immigrant rhetoric tries so hard to avoid charges of racism. But anyone who listens to the evening news knows that it is no holds barred when it comes to demonizing undocumented people. There have been cases in which the Department of Justice’s Civil Rights Division has investigated anti-immigrant actions by local governments. One example occurred when Somalis in Maine called for the Justice Department to come in, not because they were being mistreated as refugees or immigrants, but because they were black (for example, when the mayor of Lewiston told them publicly to leave). Justice came through. This may be a limited resource, but in some ways being non-white allows people to tap into laws that protect against racial discrimination.

Let’s now return to the issue of ex-felons and people who have experienced incarceration. When some politicians or laypeople discuss immigration, they often say: “What part of illegal don’t you understand? These people broke the law; they’re responsible for what happened to them, and we should be very strict with them.” The same ideas often swirl around the experience of mass incarceration, where legal distinctions are key to life chances and access to mainstream society. Just like the undocumented, ex-felons have no mobility across the legal divide. They still carry a prison record; they still carry the prohibitions that are built into many laws. Thus, both ex-felons and undocumented immigrants are defined out of civil society. They are defined as not eligible for the same rights as everyone else.

There are 19 million people who have been convicted of a felony. That is 6.5% of the adult population and includes approximately 25% of the African American male adult population. Through a mix of federal and state laws, ex-felons can legally be denied a variety of privileges and opportunities. Disenfranchisement from voting is probably the most well-known consequence. There are 5.8 million people who are denied the right to vote. Of these, only 25% are in prison; the rest are on probation, parole, or permanently disenfranchised. There are only two states—Maine and Vermont—that have no restrictions on voting for people who have been convicted of a crime. Other states place varying limits on voting, ranging up to lifelong prohibition.

Among other consequences, ex-felons may be barred from federal jury service for life, ineligible for a security clearance, not allowed to own firearms, not eligible to enlist in the military, prohibited from adoption or foster parenting, and excluded from public housing. For drug violations, ex-felons are excluded from receiving aid through Temporary Assistance for Needy Families (TANF), most federal health care (although the situation under the Affordable Care Act remains unclear), federal education aid, and 750 listed benefits that include any grant, contract, loan, or professional license provided by a federal agency. In the private sector, ex-felons are not eligible to work for the airlines.

Moreover, as states are trying to reduce the number of people in prison, there has been a growth of civil and other policies that are intended to control people who have run afoul of the law, but stop short of putting them in prison. One new area comprises what are called spatial exclusions, imposed in addition to or in lieu of arrest. One can be issued a stay-out-of-drug-areas order—called a SODA—or a stay-out-of-areas-of-prostitution order. One can be excluded from public parks. Many parties involved in public or private housing now use trespass laws to keep anyone who has been convicted of a drug crime from being allowed on the premises. The number of people affected by such restrictions is significant: one study found that 30% of felony drug offenders now have SODA orders. In Seattle, for instance, almost the entire downtown area is defined as a drug area, so someone who has received a SODA is banned from entering most of the city’s downtown.

If one compares ethnographies of people who have been in trouble with the law, are ex-felons, or are on the run from the law with ethnographies of people who are undocumented, some striking similarities emerge, such as a reluctance to interact with mainstream institutions. This can mean avoiding going to the hospital, avoiding conventional employment, or avoiding going to any place where one might encounter agents of the police or become visible to government officials. These all stem from the constant threat of apprehension, which is part of what drives many undocumented immigrants and ex-felons into casual day labor jobs and the underground economy.

Shifting currents

So let’s now step back and do a thought experiment. Let’s substitute race for either ex-felon or undocumented status and then list all of the consequences that people live with for the rest of their lives—consequences that are written in laws and legally enforced, such as the various forms of legal discrimination. It begins to look like legal apartheid, but an apartheid based not on race, but on legal status. If these laws used race to define people, the laws would be unconstitutional discrimination. But, in fact, society does have these punishing laws based in legal status but with large racial disparities in their effects. That is in large measure because the people suffering this exclusion are seen as responsible, morally responsible, for their own fate. The rationale is that they broke the law—some law—and must pay the price.

What I would like to propose is that the nation may be moving from a system based on phenotype to a system based on legal status. The end result would be a way to continue to exclude brown- and black-skinned people and to exploit them in many ways, but with a legal system to enforce the new conditions and no obvious way to challenge them, all the while allowing diversity and access to the mainstream for many non-whites. And importantly, it does not lead to the complaint of racial discrimination because the people being excluded are seen as morally responsible for their own fate.

Deportation and post-entry social control are legal forms of oppression with far-reaching consequences that are not widely recognized. These populations are predominantly non-white, but formally they are not being excluded because of race. Nevertheless, the negative effects fall primarily on certain racial groups, just as William Julius Wilson described for the economic changes of the 1970s.

My bottom line is that we need clear thinking about the ways in which laws at many levels in the United States do not reflect our basic values. In turn, we need a model of a social movement that is not based in civil rights, because we have defined millions of people living in this country as being outside of civil society. We need to think about whether that is something we want our laws to do, whether that’s something that is consistent with our moral values, and whether there might be a better way of thinking about the human rights of people and the right to social inclusion in society. In short, we need a different way of thinking about these very critical issues that have remained misunderstood and insufficiently explored for too long.

Donald Trump’s Voters and the Decline of American Manufacturing

Who are Donald Trump’s voters? What do they want? Observers call them angry, but anger has root causes and grievances. A December 2015 Washington Post-ABC News poll tells us what we already sensed—his supporters tilt toward male, white, and poor. Other polls tell us the most important single predictor for Trump voters is that they didn’t go to college. A study from the Brookings Institution’s Hamilton Project informs this picture: full-year employment of men with a high school diploma but without a college degree dropped from 76% in 1990 to 68% in 2013; the share of these men who did not work at all rose from 11% to 18%. Although real wages have grown for men and women with college degrees, they have fallen for men without college degrees: the median income of these men fell by 20% between 1990 and 2013. This is not the American Dream. A Rand survey unearths another key feature: voters who agreed with the statement “voters like me don’t have any say about what the government does” were 86% more likely to vote for Trump than were voters who disagreed. They feel they have no voice and no power. These voters also resent trade agreements and immigrants competing for jobs, and more of them come from areas where racism historically has been more prevalent.

So there are a number of strands to Trump’s support, but the economic elements tell an evolving story that hasn’t been fully faced. Americans in the postwar era developed a myth of classlessness—almost everyone was middle class. The development during this period of an innovation-based growth model and the expansion of mass higher education made the nation rich, enabling everyone to rise and fostering expectations and a dream of egalitarian democracy. Now Donald Trump wakes up society to see a class of Americans cut adrift from the middle class and in tough economic straits.

Part of the story is education. Since the industrial revolution, higher education has become increasingly tied to economic well-being. Economists Claudia Goldin, Lawrence Katz, and David Autor argue that continuing technological advances in industry require an ever-increasing level of technological skill in the workforce. The United States created a system for public mass higher education through the Land Grant College Act of 1862, then enlarged access after World War II with the GI Bill, perhaps the nation’s most important social legislation ever. For more than a hundred years, the education curve stayed ahead of the technology implementation curve, but starting in the 1970s, the higher education graduation rate began to stagnate while the required workforce skills continued to rise. These economists argue that these divergent trends are a major cause of growing US income disparity. Whereas the US upper-middle class was able to keep ahead of the technological skill curve, increasing its graduation rate, the lower-middle and lower classes were not. The upper-middle class rode this technological advance, earning a wage premium and leaving the other classes behind. Education is thus an important part of the story of growing economic inequality and the disenchantment among Trump supporters.

But lurking behind this trend is a story about US manufacturing. The United States didn’t take manufacturing seriously in recent decades because a series of well-established economic views reassured us that declines in manufacturing were more than offset by gains elsewhere in the economy: the nation was losing manufacturing jobs because of major productivity gains; the production economy would naturally be replaced by a services economy; low-wage, low-cost producers must inevitably displace higher-cost ones; the country need not worry about losing commodity production because it would retain the lead in producing high-value advanced technologies; the benefits of free trade always outweigh any short-term adverse effects; and innovation is distinct from production, so that innovation capacity remains even if production is distributed worldwide. Unfortunately, none of these arguments is correct.

Lost decade in manufacturing

The US manufacturing sector had a devastating decade between 2000 and 2010, from which it has only partially recovered. The decline is illustrated by four measures: employment, investment, output, and productivity.

Employment: Over the past 50 years manufacturing’s share of gross domestic product (GDP) shrank from 27% to 12%. For most of this period (1965-2000), manufacturing employment remained constant at 17 million; in the decade from 2000 to 2010 it fell precipitously by almost one-third, to under 12 million, recovering by 2015 to only 12.3 million. All manufacturing sectors saw job losses between 2000 and 2010, with sectors most prone to globalization, led by textiles and furniture, suffering massive job losses.

Investment: Manufacturing fixed capital investment—plant, equipment, and information technology (IT)—actually declined 1.8% in the 2000s when adjusted for cost, the first such decade since data collection began in 1947. Investment declined in 15 of 19 industrial sectors.

Output: US manufacturing output grew only 0.5% per year between 2000 and 2007, and during the Great Recession of 2007 to 2009, it fell by a dramatic 10.3%. Even as GDP began to slowly grow again (in what has been the slowest economic recovery in total GDP in 60 years), manufacturing output remained flat.

Productivity: Recent analysis shows that although the productivity growth rate in manufacturing averaged 4.1% per year between 1989 and 2000, while the sector was absorbing the gains of the IT revolution, it fell to only 1.7% per year between 2007 and 2014. Because productivity and output are tied together, the decline and stagnation in output noted previously is a major cause of the decline in productivity growth in that period. Compared with 19 other leading manufacturing nations, the United States was 10th in productivity growth and 17th in net output growth. So productivity increases are not the significant cause of the decline in manufacturing employment. Political economist Suzanne Berger has noted that economists thought manufacturing was like agriculture, where relentless productivity gains allowed an ever smaller workforce to achieve ever greater output. She found the agriculture analogy was simply incorrect. This conclusion means that the reasons manufacturing lost nearly one-third of its workforce in a decade must be sought in an overall decline of the sector itself.
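To make the logic explicit, here is the standard growth-accounting identity behind that claim (my gloss, not drawn from Berger or the analysis cited above). Labor productivity is output per hour worked, so its growth rate is approximately output growth minus growth in hours:

$$P = \frac{Y}{H} \quad\Longrightarrow\quad g_P \approx g_Y - g_H.$$

With manufacturing output growth $g_Y$ near zero in the 2000s, any measured productivity growth $g_P > 0$ had to come from falling hours ($g_H < 0$), that is, from shrinking employment rather than from the agriculture-style pattern of efficiency gains amid rising output.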

Part of the story of manufacturing’s decline can be found in the nation’s performance in the global market. Success in a highly competitive world rewards nations and regions that produce complex, value-added goods and sell them in international trade. Although world trade in services is growing, world trade in goods is four times trade in services. Complex, high-value goods such as energy, communication, and medical technologies make up over 80% of US exports and a significant majority of imports. The currency of world trade is in such high-value goods and will remain so indefinitely. Yet the United States in 2015 ran a trade deficit of $832 billion in manufactured goods. That total included a $92 billion deficit in advanced technology products, a deficit that has been growing since 2002. The theory that the United States could keep moving up the production food chain (losing commodity production while keeping the lead in advanced technology goods) is undermined by these data. Gradual growth in the services trade surplus ($227 billion in 2015) is dwarfed by the size and continuing growth of the deficit in goods; the former will not offset the latter anytime in the foreseeable future. A services economy does not allow the United States to dispense with a production economy.

Why is the country faring so poorly in high-tech trade? Part of the story is that US policymakers, under the influence of standard macroeconomic theory, were largely content to allow US manufacturing capacity to erode and shift offshore because they were confident that the knowledge and service economy would readily replace lost jobs and salaries from lost manufacturing. But it hasn’t worked.

Recent decades have seen extended periods (1982-1987, 1998-2004, 2014-2016) when the dollar was strong against leading foreign currencies, with Treasury secretaries and Federal Reserve chairs generally supportive of a strong dollar. US manufactured goods correspondingly became less competitive in foreign markets. In parallel, from 1981 on, US consumption as a share of GDP rose, reaching 69% in 2011, higher than in other developed economies. The strong dollar also helped push the country toward what many consider over-consumption relative to saving and investment; a production/consumption imbalance grew. The combination of an open trading regime, a generally strong dollar, high consumption rates, and open financial markets created advantages for competitor nations’ exports.

The China challenge

The trade imbalance between China and the United States substantiates this economic policy point: the United States has had deficit-ridden, effectively import-oriented economic policies, whereas China has been able to force savings rates and investment up to record levels and to subsidize and grow exports. This contrast exposes as a dangerous myth the idea that advanced economies are subject to an inherent and inevitable decline of the manufacturing sector and manufacturing employment. Germany’s continuing strong manufacturing sector is the obvious counterexample. Its manufacturing workers are much more highly paid than their US equivalents, it employs 20% of its workforce in manufacturing, and it runs a major manufacturing trade surplus, including with China. A high-cost, high-wage production sector does not have to lose out to a low-cost one.

Meanwhile, China, after a three-decade effort, is now the largest manufacturing economy in the world. What led to this rapid shift in a field that the United States dominated for a century? Part of the story is the macroeconomic one told previously. Part of the story is China’s neo-mercantilist policies to mandate technology shifts and to dominate markets with below-cost goods. Intellectual property theft has also played a role, and recent fiscal and market policies show how the West has underestimated China’s nationalistic government economic controls. A seriously underappreciated factor has been Chinese innovation. Most experts have assumed China’s rise is predominantly due to low production costs from cheap labor and cheap parts. There is also an assumption in the United States that manufacturing must naturally migrate to low-cost producers and that the knowledge required for production processes is relatively trivial and readily replicable. Neither is true. As Jonas Nahm and Edward Steinfeld argue, China has forged a new link between process innovation and manufacturing by specializing in rapid scale-up and cost reduction. It has joined together skills in the simultaneous management of tempo, production volume, and cost, which enables production to scale up quickly and with major reductions in unit cost. This capability has allowed China to expand even in industries that are highly automated or not on governmental priority and support lists. The key to this ability to innovate new production processes has been the capacity of Chinese firms to accumulate firm-specific expertise in manufacturing through extensive, multidirectional inter-firm learning, taking advantage of international know-how from multinational corporations with manufacturing facilities in China, and building on it.

Economists long held that free trade gains always offset losses as trading partners played to their comparative advantage. Economists David Autor, David Dorn, and Gordon Hanson find that the trade relationship between the United States and China came with a heavy cost to US workers and their communities. They conclude that adverse consequences of trade are enduring. Traditional economic assumptions about the ultimate gains of trade are contradicted by the reality that the United States hasn’t yet been able to get past the shock of the loss of millions of jobs in so many communities.

Autor and colleagues examined the direct impact of Chinese industry on incomes in some 700 US urban areas, comparing workers in heavily impacted areas (at the 75th percentile of exposure to Chinese competition) with workers in less-affected areas (at the 25th percentile). They found that income loss per adult was $549 greater in the most-impacted areas and that federal assistance in those areas increased income by only $58 per capita. The growth of trade with China, they find, has tended to make lower-skilled workers worse off on a sustained basis. There was no “frictionless” economic adjustment to other industries. Little offsetting growth was found in industries not affected by this “China shock.” Instead, workers did not make up lost wages and their communities entered a slow, continuing decline.

As economics Nobelist A. Michael Spence has noted, "Globalization hurts some subgroups within some countries, including the advanced economies … The result is growing disparities in income and employment across the US economy, with highly educated workers enjoying more opportunities and workers with less education facing declining employment prospects and stagnant incomes." Just as manufacturing employment was a key to enabling less-educated workers to enter the middle class after World War II, the loss of manufacturing jobs is correspondingly a key element in the decline in real income for a significant part of the American middle class in the past few decades. Obviously, the 2008-09 Great Recession, in which manufacturing (along with construction) was the leading victim, played a role, but there appears to be no getting around the trade effects, which have been longer lasting.

The weakening innovation system

But macroeconomic and trade factors alone don't provide an adequate explanation; the US innovation system is also implicated in today's socioeconomic climate. In the face of growing competition, the United States still retains the world's strongest early-stage innovation system, from university research through a culture that favors entrepreneurial risk-taking. Any manufacturing strategy must seek leverage from this comparative innovation advantage. However, federal research has historically had only a very limited focus on the advanced technologies and processes needed for production leadership. This is in sharp contrast to the approach taken by Germany, Japan, Korea, Taiwan, and now China, which maintain a strong focus on "manufacturing-led" innovation. The United States has simply not focused research investments, education, or incentives on what turns out to be a crucial innovation stage: production, particularly initial production of complex, high-value technologies. This stage involves creative engineering and design, and often entails rethinking the underlying science and invention as a technology moves from proof of concept to production. Innovation is not just research and development (R&D). Production innovation is an integral part of the innovation process, not an afterthought. Lack of attention to production innovation created a major gap in the US innovation system.

This gap is especially acute for the nation's 250,000 small and mid-sized manufacturing firms, which represent 86% of US manufacturing establishments, employ more than half of the manufacturing workforce, and produce close to half of non-farm GDP. These smaller firms tend to lack the capacity to keep up with production innovation. They have been at the heart of the nation's economy, but they largely lie outside the innovation system.

Weakness in innovation for production is compounded by changes in the links between early-stage innovation and the production process. Since World War II, the US economy has been organized around leading the world in technology advances. It developed a comparative advantage over other nations in innovation, and as a result it led all but one of the significant innovation waves of the twentieth century (aviation, electronics, space, computing, the Internet, and biotech), playing catch-up only to Japan on quality manufacturing. Its operating assumption was that it would innovate and translate those innovations into products. By innovating here and producing here, it would realize the full spectrum of economic gains from innovation at every stage: from R&D, to demonstration and testbeds, to initial market creation, to production at scale, and on to the follow-on life cycle of the product. This full spectrum worked, and the United States became the world's richest economy.

Well, it worked in a world with limited global competition. But in recent years, with the advent of a truly global economy, the innovate here/produce here model no longer holds. In some industrial sectors, firms can now sever R&D and design from production. The development of computer-driven manufacturing systems has made it easier to distribute manufacturing to other locations across the globe. Today, firms using the distributed model can innovate here/produce there. This distributed model appears to work well for many IT products, as well as for commodity products. Apple is the standard-bearer for this model, continuing to lead in dramatic IT innovations but distributing virtually all its production to Asia. But this approach has an inherent problem: because production is part of the innovation system, distributed production may shift innovation capability offshore, cutting the US comparative advantage in innovation. Produce there may lead, over time, to innovate there.

Manufacturing still matters—a lot

Manufacturing is approximately 12.1% of US GDP, contributing $2.09 trillion to the nation’s $17.3-trillion economy and employing 12.3 million people in a total employed workforce of some 150 million. Manufacturing workers are paid 20% more than service sector workers. Growth economists tell us that 60% or more of historic US economic growth comes from technological and related innovation. And as the dominant implementation stage for innovation, manufacturing is a critical element in the innovation system, although the United States hasn’t understood it this way. Industrial firms employ 64% of US scientists and engineers, and this sector performs 70% of industrial R&D. Thus US manufacturing strength and the strength of the nation’s innovation system are directly linked.
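The aggregate figures cited here imply that the sector punches above its weight: its share of output runs well ahead of its share of employment, consistent with the wage premium noted above.

$$\frac{\$2.09\ \text{trillion}}{\$17.3\ \text{trillion}} \approx 12.1\%\ \text{of GDP}, \qquad \frac{12.3\ \text{million}}{150\ \text{million}} \approx 8.2\%\ \text{of employment}$$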

Despite the decline in the manufacturing employment base, manufacturing remains a major source of employment for the economy, measured largely by workers at the production stage. But the official data are collected at the establishment level, not the firm level. Should the view of manufacturing be limited to the production moment? Why is the economic reach of manufacturing measured at the factory? This narrow lens provides only a partial perspective on the role of the sector.

The manufacturing sector, instead, can be viewed as an hourglass. At the center, the narrow point of the hourglass, is the production moment. But pouring into the production moment is the output of a much larger employment base, which includes those working in natural resource extraction, those employed by a wide range of suppliers and component makers, and the innovation workforce comprising the very large percentage of scientists and engineers employed by industrial firms. Flowing out of the production moment is another host of jobs, those working in the distribution system, retail and sales, and on the life cycle of the product. The employment base at the top and bottom of the hourglass is far bigger than the production moment itself.

Arranged throughout the hourglass are lengthy and complex value chains of firms involved in the production of goods—from resources to suppliers of components to innovation, through production, to distribution, retail, and life cycle. This great array of skills and firms is now largely counted as services, but in fact it is tied to manufacturing. If the production element is removed, the value chains of connected companies snap and face significant disruption. Although the lower base of the hourglass, the output end, may be partially restored if a foreign good is substituted for a domestic good, the particular firms involved will be disrupted. The upper part of the hourglass, the input end, with its firms and their employees, is not restored.

When these complex value chains are disrupted, it is very difficult to put them back together. A new Manufacturers Alliance for Productivity and Innovation study says the manufactured-goods value chain plus manufacturing for other industries’ supply chains accounts for about one-third of GDP and employment in the United States. For every dollar of added value from domestic manufacturing (in technical terms, value-added destined for manufactured goods for final demand), another $3.60 of value is added elsewhere in the economy. For each full-time equivalent job in manufacturing dedicated to producing value for final demand, there are 3.4 full-time equivalent jobs created in nonmanufacturing industries, far higher than in any other sector. Higher-value-added production industries appear to have even higher multipliers.
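Taken at face value, the multipliers reported in the study mean that each unit of manufacturing activity anchors several more units of activity elsewhere in the hourglass:

$$\$1.00 + \$3.60 = \$4.60\ \text{of total value added per manufacturing dollar}, \qquad 1 + 3.4 = 4.4\ \text{jobs per manufacturing FTE}$$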

These factors make it clear why, historically, once the value chains snap and the United States loses an economic sector, it is so hard to reassemble. Understanding manufacturing in terms of the hourglass and the value chains within it may provide part of the explanation for the economy's current predicament over job loss, job creation, and declining middle-class median income. If one-third of the economy is being sacrificed to problematic economic views, it is no wonder that Trump is ascendant.

Manufacturing and democracy

New work by Autor and coauthors bears out the relationship between disruption in the manufacturing sector and disruption in the political process. Analyzing congressional elections between 2002 and 2010, they found that increased exposure of local labor markets to foreign competition, particularly from China, tended to push both political parties toward candidates at their ideological extremes, polarizing the political process. The Trump candidacy is an extension of this development.

The Trump voters identified at the outset of this essay have now thoroughly disrupted one of the nation's two major political parties, with potential long-term consequences for a political system that is indeed being pushed to its ideological edges. These voters appear stuck in their declining industrial communities strewn across the Midwest, the Northeast, and parts of the industrial South. Where could they move? To produce software in Silicon Valley? Biotech in Boston? As a number of economists are now grasping, these economically injured cities and towns can fall into failure mode. But their citizens have latched onto a new voice, Trump's, a profoundly disturbing voice to many. This voice is as disruptive as a month of massive protests on the Capitol Mall; its confrontational messages dominate night after night of evening news. US manufacturing workers were the historic base of Roosevelt's Democratic Party; they backed President Kennedy, began to shift parties in the Reagan era, and now that their prospects have been significantly eroded, they have blown up the Republican Party—the party of Main Street and Wall Street, of Lincoln and Taft, of country club and corner store, even of Rand Paul and the Kochs. It is now clear that this disenfranchised group is so sizable that neither party can afford to ignore it. The parties must find a way to address grievances that have long been ignored, as if this community were invisible. Both parties have embraced or tolerated a series of economic views and policies that fail to take into account the plight of these people. Will the political system be flexible enough to accommodate these recent outcasts without forfeiting democratic ideals? What would such a policy accommodation look like?

The Obama administration promised in 2012 to deliver 1 million new manufacturing jobs by 2016; only a third have materialized. But the president made manufacturing innovation the centerpiece of his technology agenda, hoping to have 15 advanced manufacturing institutes in place by the administration's end. These institutes are organized around advanced production technologies that promise dramatic efficiencies, which can help offset higher US wage levels and restore manufacturing competitiveness. They aim to reconnect the innovation system to the production system, rebuilding a manufacturing ecosystem that better links small and large production firms with university engineering and science. It is a promising start, but more is required. The R&D system could do much more to focus on new manufacturing technologies and processes. Startups that could manufacture high-value goods are still shifting to contract manufacturers in places such as Shenzhen. Could there be new technology- and know-how-rich spaces in the United States where startups could test and launch pilot production projects? The administration has been trying hard to increase college graduation rates, grow community college attendance, and improve workforce training. More is needed, including new online and blended learning systems that can radically expand access. New thinking on macro fiscal, tax, and trade policies and adjustments, and on longstanding economic assumptions, is still required. Trade-affected community assistance and job retraining must be rethought. The current political denouement tells us that more will be needed from the next administration.

The nation can continue to ignore the manufacturing sector and let it erode still further, but the consequences for the innovation system, and therefore for economic growth, are significant. It now appears that there are also consequences for democracy and the nation's social ideals. If the nation's political leaders continue to write off this neglected working-class community, the turmoil of this year's presidential primaries may be just the beginning of a period of social and political disruption. A strong manufacturing sector is a crucial element of an inclusive economy that can undergird a better future. Whoever voters send to the White House this November needs to recognize this reality and address it with energy.

Reshaping Space Policies to Meet Global Trends

The space sector is undergoing a major transformation. Fifty years ago, the United States and the Soviet Union conducted the only significant national space programs, and only a small number of commercial entities were involved in space activities. Since then, the space sector has grown to include more countries, and it has diversified to integrate technologies and innovations from other sectors. Private funding for space-based ventures has increased dramatically over the past decade, and there has been a rapid growth of a private space sector, which now includes familiar companies such as SpaceX and Blue Origin, as well as less familiar but equally innovative ones, such as Planet Labs, Mapbox, and Spire, among others. As a result, major parts of the space sector are changing, from being largely driven by government and several large commercial enterprises to being more segmented—and therefore more open to participants—and globally integrated. These changes are electrifying for many people, raising new hope that the vision of incorporating the solar system into the economic sphere may finally be feasible. But the changes are wrenching for the old guard that created and nurtured the first government-led wave of the space enterprise.

What do these trends mean for the US government agencies and departments that spend in excess of $43 billion annually on space-based activities? At the request of the National Aeronautics and Space Administration (NASA) and the Office of the Director of National Intelligence, my colleagues and I at the Science and Technology Policy Institute analyzed this question, and our report, Global Trends in Space, explores some of the implications.

A first message is that the space enterprise is not an island. Most developments afoot in the sector are driven by external factors. In the early years of the space age, technologies were developed in and for the space sector and spun out into other sectors, photovoltaics and thermionic conversion technologies being principal examples. Increasingly, though, the reverse is occurring, and technologies are spinning into the space sector from others, principally from information technology (IT), and often in the form of commercial off-the-shelf products. Falling costs and dramatic improvements in areas such as processing power, data storage, camera technology, solar array efficiency, and micro-propulsion have fed into a variety of space-related areas, including Earth observations, telecommunications, and even space science and technology and exploration.

As technological advancements outside the space sector feed into the space sector and business model innovations occur, newer and lower-cost applications of space are emerging, making investing in space more beneficial and lucrative. Smaller, lighter, and more capable satellites are able to perform Earth observation and remote sensing functions, and are within the reach of countries, corporations, and even individuals. SkyNode, a service provided by the Google subsidiary firm Terra Bella, for example, will enable customers to directly task a satellite to download imagery within 20 minutes.

These innovations aren’t occurring in just the Earth-observation sector. In the satellite communications sector, for example, use of high-throughput satellites can provide high-speed data communication that is 20 times faster than with traditional satellites, fast enough to match the data rates obtained using terrestrial fiber optic networks at comparable prices. In the launch sector, using newer technologies such as three-dimensional printing and new business models, the company SpaceX is offering lower-priced launch services that are disrupting the market originally controlled by heavyweights such as United Launch Alliance and Arianespace. Firms such as AGI are providing space “situational awareness” (SSA) services that are improving our ability to view, understand, and predict the physical location of objects in space, with the goal of avoiding collisions such as the one that occurred between two communication satellites in 2009. Such services previously were firmly in the domain of governments, especially defense-related agencies. The trend toward smaller satellites is even enabling space science and technology and exploration, with CubeSats developed by private firms being used to conduct heliophysics, planetary science, and astrophysics research, with even more activities planned.

Low-cost or new technology is core to these developments. For example, a single OneWeb satellite weighs 330 pounds, compared with older telecommunications satellites, such as those used by Dish Network and HughesNet, that weighed more than 13,000 pounds. The new applications also have lower price tags. According to Surrey Satellite Technology, a global leader in satellite manufacturing, the cost of launching, insuring, and operating a satellite that provides images with 1-meter resolution is now around $160 million, almost an order of magnitude lower than it used to be.
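The order-of-magnitude claim, taken at face value, implies that a comparable capability once cost something approaching $1.6 billion:

$$10 \times \$160\ \text{million} \approx \$1.6\ \text{billion}$$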

A final driver of emerging developments in space is government funding and policies. Government agencies in the United States and around the world are under pressure to re-examine policies restricting the commercial development and sale of space goods and services, as illustrated by the debate in the US Congress on the use of commercial rockets or space-based imagery. There is also pressure on agencies to begin to view and regulate space as a mainstream economic endeavor, and not see it solely as a strategic national security-relevant sector. This shift in emphasis is especially evident in the United States and Europe, where commercial solutions are increasingly being used to meet government needs, technology export controls are being liberalized, and regulations are being relaxed to allow the private sector to provide services such as high-resolution imagery and SSA that were previously restricted to the government.

Signposts of change

That space is changing is evident in many measurable ways. Although there have been government cutbacks in the United States, globally there has been an increase in funding of space activity. Global investment in space-based activities increased at an annual rate of 6% between 2009 and 2013. In the broadest picture, almost 170 countries have some level of financial interest in satellites, up from 20 in the 1970s. And of the more than 80 countries engaged in space-based activities in 2015 (that number has grown from two in the 1950s and 20 in the 1970s), 60 have invested $10 million or more in space-related applications and technologies, twice as many as in 2004. The increase has been especially noteworthy in countries such as Saudi Arabia (60% increase since 2009) and Brazil (40% increase). Overall, global activity in space is expected to almost double in the next 10 years (Figure 1).

[Figure 1. Global space budgets.]
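For context, the reported growth rates compound consistently: 6% annual growth over 2009-2013 implies roughly a 26% cumulative rise over those four years, and an almost-doubling over 10 years corresponds to about 7% a year, only slightly above the recent pace.

$$(1.06)^{4} \approx 1.26, \qquad (1.07)^{10} \approx 1.97$$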

It is not just the major space-faring nations that have ambitious plans. Countries such as India have entered the fray, and they are showing growing expertise in space exploration as well as technology development. Others, such as Israel, Singapore, South Korea, and the United Kingdom, have begun to specialize in niche areas, such as avionics, alternative approaches to launch, and data analytics, among others. South Korea has aggressively pursued satellite manufacturing despite beginning its government space program only in the late 1980s. The United Arab Emirates has plans to build a Mars probe and the first space research center in the Middle East, all the more impressive since the nation began its space activities only in the 1990s. Leveraging commercial products and services, including those from the United States, these and other countries stand poised to become major space players, and within several years they may well rival more established countries, particularly given the enduring perception that a presence in space brings prestige, geopolitical advantages, and economic opportunities.

The private sector is not far behind governments. It is important to note that the presence of private companies in space is not a new phenomenon. Companies such as North American Aviation and McDonnell Aircraft Corporation were heavily involved in Project Mercury in the 1950s and Apollo in the 1960s. But they operated under a different model, in which investments went into a long-term, capital-intensive, monolithic industry dominated by government contracts, legacy fixed satellite services, and big-iron hardware. Now the investments are less capital intensive, with new investors spurred by the new markets they see emerging, and the sector has attracted new companies, especially from the IT sector. In fact, if services such as DirecTV are included, private revenues exceed government expenditures threefold.

In total, there was almost twice as much venture investment in space in 2015 as in the previous 15 years combined. At last count, at least 124 new space-related firms had started just since 2000. Private firms are also taking on a greater proportion of space-based activity. Between 2009 and 2013, for example, businesses accounted for less than 8% of the 127 nano/microsatellites launched. In comparison, they are expected to be responsible for 56% of nano/microsatellites launched between 2014 and 2016. An interesting phenomenon in the space sector is that some of the new actors are not driven solely by a profit motive. Companies such as SpaceX, Blue Origin, and Bigelow no doubt intend to make money. However, their founders seem to be driven by a zeal—and a time horizon—that transcends that of a typical venture investor. These are the "lost children of Apollo": individuals who, watching the space program as children in the 1960s, thought there would be moon colonies by the time they were adults. Perhaps feeling let down by their leaders, they decided to invest their personal wealth in their dreams. It is also worth noting that most of the billionaire philanthropists investing in space are from the United States.

Another new stakeholder has also appeared in the space sector—one that did not exist when space was a strategic, government-owned sector—and that is consumers, who are showing a growing demand and willingness to pay for ubiquitous Internet access and near real-time situational awareness. Twenty years ago, less than 3% of the world's population had a mobile phone and less than 1% had access to the Internet. Today, more than two-thirds of the population has access to a mobile phone, and one-third can communicate on the Internet. These consumers are now contributors to the growing private sector. Add to this the emergence of crowdfunding and citizen-led space activities, and the number of stakeholders in the space sector is dramatically higher than a decade ago.

These diverse actors, within governments, academia, and the private sector, are following a varied set of approaches to developing their space enterprises. Governments in less-industrialized countries, as would be expected with a commoditized technology, are increasingly not investing in developing indigenous systems, but rather are using technology transfer and partnerships to build capabilities in specific areas of interest. It is no longer necessary to build a satellite in order to operate one or even just get data from one. Not only is there no need to develop indigenous technology, there is also a shift from buying technology and products to buying services.

The private sector, especially what is often called NewSpace, is increasingly engaging in novel approaches. Participants in this sector are focusing on cost innovation, following a philosophy of developing products that are good enough rather than perfect, and prioritizing low cost over performance or reliability. This approach is reflected in the increased use of streamlined and simpler processes, cheaper components (non-radiation-hardened components can be a hundred to a thousand times cheaper than the traditionally used radiation-hardened ones), open-source hardware (such as microcontrollers or 3D printing) and software (the Android operating system), agile manufacturing, and a production-line model (as distinct from the production of one-off products). The trend is most evident in the small satellite sector. The CubeSats developed by Planet Labs, an Earth-observation startup, have gone through more than 12 generations of design since the firm was established in 2010. The company, like others in this sector, also conceptualizes risk and reliability differently than does the mainstream aerospace sector: for Planet Labs, a fifth of its CubeSats can fail in orbit without the constellation losing a meaningful amount of imaging capacity. Most interesting of all, in a break from older space-sector firms, many of these firms see space as "just another place" where data are collected. Firms such as Spire see and pitch themselves not as aerospace firms but as IT or media companies, and investment in these firms is viewed as being in data products and services, not space. They are takeover targets not of traditional aerospace firms such as Lockheed Martin or Boeing, but of technology giants such as Google or Facebook.

Transitioning into the mainstream

A country’s capabilities in space used to be based on indigenous technology development. This dependence on indigenous capability meant that countries followed a reasonably predictable trajectory toward advancing their level of participation in the space sector. This is now changing. As already noted, a country can now own space assets simply by purchasing them from turnkey solution providers. They can also buy data and services rather than manufacturing and operating satellites, or partner with countries willing to provide assistance. For example, in 2015 the Turkmenistan Ministry of Communications contracted with the Italian/French multinational company Thales Alenia Space to build a satellite called TurkmenÄlem52E/MonacoSat. It was launched by the US-based firm SpaceX, and is operated by the Monaco-based satellite operator Space System International-Monaco.

Developing nations are also not locked into the traditional aerospace industry structure and may even have advantages, in part because they can adopt the best historical practices while allowing their industries to develop under current conditions rather than refocusing legacy institutions and companies. A modern analogy is African telephony: many nations skipped straight to wireless communication rather than duplicating a landline infrastructure first.

Developments such as these reveal that many of the subsectors of space, including Earth observation, space science and technology, exploration, and even space situational awareness, are beginning to diverge into two segments. The first is a government-driven segment that develops devices with exquisite capabilities (the James Webb Space Telescope or the Space Launch System rockets) that require hundreds of millions if not billions of investment dollars. The second is a less-capable but also less-expensive consumer-oriented segment, exemplified by small launch companies such as Rocket Lab and small-satellite developers such as Planet Labs and Spire. This latter segment is often underestimated, as often happens with disruptive innovations. It is, however, globalizing rapidly, making space assets a commodity, focusing on services (as distinct from products and technology), and turning space into just another sector. As more subsectors of space go mainstream, there will be growing numbers of global enterprises, supply chains, partnerships, and competition. This change will have some very specific implications for the United States.

The entrance of new players into space brings an assortment of new challenges that were unimaginable at the height of the Cold War. These challenges are both domestic and international.

Domestically, for example, the emergence of a host of new Earth-observation companies presents unprecedented challenges to organizations such as the National Oceanic and Atmospheric Administration, which implements the Commercial Remote Sensing Licensing Regime developed when the United States was launching only a small number of satellites annually. Now, with SpaceX, OneWeb, and other companies planning to launch constellations comprising thousands of satellites, the same system will not work.

Globally, the challenges will be even more complex. The reality of almost 1,400 satellites orbiting the Earth and an increasing launch rate translates into an increasingly "congested, contested, and competitive" space environment that comprises not just the satellites but also hundreds of thousands of debris objects larger than one centimeter and several million that are smaller. The guidelines surrounding space debris are currently nonbinding, though they are interpreted as legitimate standards despite the difficulty of enforcement. Further compounding the challenge are the cost of debris mitigation, potentially staggering for a less-wealthy nation, and concerns about sharing debris-mitigation technologies with other nations.

Today’s space community must also address the hitherto unknown challenges of the loss of electromagnetic spectrum and its effect on rapid data transmission; the lack of global standards and regulations for activities related to serving satellites or other objects on-orbit; the development of deep space mining or in-situ resource utilization; the rise of cyber terrorism; and the legacy of pollution from launches. These challenges, only some of the many now confronting global space powers, require an appropriate response.

Looking further ahead, the sheer number of actors and the diversity of approaches in the sector will ensure that the pace of innovation accelerates. With the ready commercial availability of an array of hardware and software, and the expansion of satellite manufacturing industries globally, the private space sector is poised for growth. This will make it more difficult for most governments, not just in the United States, to manage the space sector. With countries following varying pathways for space-based activities, common metrics used for assessing a country's capabilities, such as investment in developing indigenous capabilities, may lose some meaning. As a result, it will be more difficult than ever to assess or predict national capabilities. With more countries and private-sector firms operating in space and seeking to take on additional roles by participating in international space organizations, both the domestic and global governance landscapes are becoming more complex. Consequently, not only will the United States and other traditional space-faring countries have diminishing control of global decisions related to space activities, but there will likely be greater pressure on them to accommodate the needs of the private sector and emerging space-capable countries.

Policy prescriptions

Assuming that current trends hold—and they may not—within the next 10 to 15 years the US government may no longer be the principal hub of the space community. Given the pace of innovation and its geographic diversity, government may not always be the owner of the most innovative technology, approach, or architecture. There is already evidence of innovation not just abroad (more advanced radar satellites and optical remote sensing in Europe), but also within the US private sector (commercial firms developing SSA algorithms that rival government ones). In addition, as capabilities, resources, and ambitions outside the United States continue to grow, the US government’s traditional approaches to working with partners might have less traction, with fewer partners agreeable to following the US lead unquestioningly. Indeed, they may choose instead to partner with other nations that have begun to develop capabilities comparable to those of the United States. The overarching implication here is that although the United States will likely keep its innovative edge in some areas, in many others it will find its role transitioning. It will sometimes be a customer, at other times a consumer, and almost always a partner.

In light of these implications, US agencies involved in space activities have three choices. They can accept the trends as inevitable, get ahead of them, and pre-position the nation to benefit from them. They can actively fight the trends, attempting to reverse the situation by starving the emerging private sector so as to keep the government the key hub in the space ecosystem. Or they can drift, doing nothing at all and reacting on a one-off basis to the trends that immediately affect a particular agency.

The first option is preferable: US policymakers should embrace these trends and thus position the nation to benefit maximally from the transformation under way. In this scenario, the United States would be well served by pursuing three goals: better leveraging the growing and independently capable private sector and markets outside government; better integrating the growing capabilities and demand outside the United States with US plans; and taking a leadership role in addressing the emerging challenges. The US government and its space agencies can best reach these goals by making a subtle but important shift in posture. Instead of seeing themselves as the leaders in space with all other entities as followers, space agencies would be well served to see themselves as catalysts rather than performers: seeding innovation in-house or outside, growing the space economy, and then leveraging the outcomes to meet their objectives.

It is important to note that the US government already works extensively with commercial and global entities outside its realm—and always has. The change that the emerging global trends are prompting is that the government would not just be engaging these outside entities as contractors to help achieve its own goals; it would also be working with them more collaboratively to meet everyone's goals. This means that the outside entities come to the table with resources and that they have more power and influence over the future direction of the space enterprise. Many departments and agencies, of course, already leverage external capabilities. NASA, for example, already uses creative contracting mechanisms to engage the private sector in a cost-effective manner to carry cargo to the International Space Station. The challenge is to balance legacy and emerging practices, and to scale up practices that are working well but in isolated "silos," in order to make them mainstream approaches.

Choosing when and how to leverage outside players is not a trivial matter, and each agency involved in such experimentation would need to establish an overarching rationale for any outsourcing or partnering. Government agencies could, for example, with certain exceptions for strategic reasons, identify external capabilities to do what is more efficient to do outside, such as launching crew and cargo into low-Earth orbit. They could also use outside capabilities to do what is difficult to do in-house, such as employing a private firm to use a high-risk modular architecture for a journey to Mars. It goes without saying that both would be difficult shifts at agencies with a strong culture of resisting ideas "not invented here."

As government agencies take on the role of catalysts and leverage outside skills, it is possible that many in-house skills would become redundant when superior skills are found outside, whereas others would become more desirable. Many agencies would need to evolve from being technology-driven to acquisition-driven, becoming entities in which the skill set required is that of scouting for talent and ideas rather than doing the work. This would make them similar to various other mission agencies, such as the Defense Advanced Research Projects Agency and the National Institutes of Health. As a result, many of the agencies would need to evaluate how to best reorganize their structures to fit with their new roles.

To be effective catalysts and force multipliers, space agencies will also need to change the way they find outside entities and work with them. To find them, the agencies may need to look in new places, turn to new performers, and invent new mechanisms for working with them. To partner effectively, now that the other parties hold more power, partnerships would need to be on a more equal footing, with government recognizing that the other entities (whether cost-sharing or not, companies or governments) will insist on playing roles that fall on the critical path. Agencies would also not be able to create and dissolve partnerships as easily as they could in the past.

In addition to these general insights, we have identified three specific opportunities that would have relevance for all domains and priority areas. First, to leverage funds outside the government, a space agency or department may wish to establish a global space technology incubator (loosely, a focused area of attention) with multiple units. These would include a program for early-stage technology development; an arm to develop innovative systems solutions; a unit to develop and conduct prize competitions and grand challenges to spur innovative efforts; and a venture unit to seed early-stage, emerging space companies. What would make this organization—or set of organizations, if they are created separately—different from current efforts is that it would include contributions and participants from countries, academia, and private sectors outside the United States. Designing such an organization, which may even be virtual, may raise challenges in areas such as intellectual-property sharing, export-control regulations, and taxpayer value, but such challenges have been identified and addressed by other government organizations and can be resolved.

Second, and in the same vein, given the growing availability of outside resources and the core role that the government plays, and would continue to play, in providing infrastructure, space agencies or departments should consider developing partnership and funding models that attract the best participants in the worldwide space community to come to the United States, work at its world-class facilities and centers, and participate collaboratively in the design and conduct of science and technology activities. A model for such an organization is the European Organization for Nuclear Research, or CERN, an institute funded by member states that provides particle accelerators and other infrastructure needed for high-energy physics research.

Finally, in perhaps the simplest action to initiate, the US government should scale up its space agencies' and departments' current ad hoc global technology-monitoring functions. Two priorities stand out. As innovation grows around the world, the United States should consider engaging in more automated "horizon scanning," similar to efforts by the Department of Defense (DOD) and other groups within the intelligence community, to identify where technologies are emerging and how the nation might adapt them for space applications. The government should also undertake more "technology forecasting," especially in specific high-priority areas such as solar or nuclear electric or thermal propulsion. To aid such efforts, the government should look to automation for help. Human experts will still be expected to stay cognizant of the leading efforts in their fields. But given the expanding diversity of expertise globally and the likelihood that future missions will need to combine expertise in varying and ever-shifting ways, it will be increasingly necessary to develop and use automated tools to identify emerging capabilities and how they might be combined to fulfill mission requirements. Such tools will supplement the deep knowledge resident in government personnel. A community of practice has already emerged, and space agencies and departments should be core members of it. One model is the DOD's Office of Technical Intelligence, which was created to develop and apply automated analytic tools for analyzing global science and technology activities to inform the department's investments in workforce, laboratories, and research funding.

Learning from the past, ready for wildcards

Taking the steps outlined here would lead to a radical evolution of the space sector. There is a popular pearl of wisdom, attributed to various sources, that predictions are difficult to make, especially about the future. In the 1990s, for example, there was much clamor about the development of constellations of satellites to provide telephone and data services. However, beyond a couple of demonstration satellites, none of these systems came to fruition, killed by the telecom bust at the end of the 1990s that sent most of the companies involved into bankruptcy protection and reorganization. Predictions have gone the other way, too. In 1980, the consulting firm McKinsey & Company predicted that cellular phones would have only 900,000 users by 2000. The prediction was low by a factor of more than 100.
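The arithmetic underscores the magnitude of the miss: a factor of 100 on 900,000 users is 90 million, so the claim that the prediction was off by more than 100-fold implies that actual adoption by 2000 exceeded even that figure.

$$900{,}000 \times 100 = 9 \times 10^{7}$$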

As efforts to develop and implement policy changes proceed, it will be useful to ensure that any changes are alert to "wildcards" that might overturn these trends. Wildcards could be related to technology developments. A dramatic breakthrough, such as perfecting the ability to reliably and cheaply reuse multiple stages of rocket engines or developing specialized carbon nanofibers that make technologies such as space elevators feasible, could dramatically reduce the cost of access to space. Wildcards can also emerge from geopolitical developments. Drastic changes or responses to the Outer Space Treaty or other international rules governing space, or an aggressive weaponization of space, could affect how liberal the US government will be with respect to international collaborations. Other wildcards could come into play. A debilitating space-weather disaster or cyber-event that cripples space-based services for an extended period, a cascading space-debris event that degrades the use of space, or even the discovery of a large asteroid or comet headed toward Earth could upend the current trajectory.

Even if no wildcards enter the picture in the short term, or policies are designed to be responsive to multiple alternative futures, quick change is not advised. Just because capabilities become available outside the government does not mean that the government must use them. Policymakers need to assess which capabilities are so important that they should not be outsourced, procured, or purchased from the outside regardless of their availability. This strategic thinking would also include examining and planning for the consequences of these decisions. Decisions about whether to "make" something in-house (or with outsiders using traditional contracting mechanisms) or to "buy" a service from outside are likely to be complex and contentious, and will probably have workforce-reduction consequences and hence political implications.

Change in the wind

Even if the future isn’t totally predictable, some things are clear. The space sector will continue to undergo transformation as it increasingly, if gradually, breaks free from the confines of the military/government sector restricted to a few space-faring countries. More governments worldwide can be expected to act on their space aspirations by participating in space activities in different ways, and a globalized private sector (even if mostly centered in the United States) will want to provide more space-based products and services. As the number of actors increases, the space sector will likely see increased competition and overcrowding, both literally and metaphorically. This, in turn, will serve as a driver for more products, services, and governance structures that can support the needs of the ever-expanding sector.

It is also clear that this tectonic shift will require the US government to adapt—in particular, by reshaping its space departments and agencies and by leveraging developments beyond their conventional boundaries. Toward this end, the government will need to harness its vision, openness, agility, and risk tolerance; incorporate a well-matched mix of centralized planning and decentralized execution; and expend the required resources to implement these changes.

Luckily, other sectors, such as computing and information technology, have undergone similar changes and may offer some lessons. The integrated circuit, for example, was developed in the private sector (at Texas Instruments) with military backing to guide the Minuteman ICBM. Today, the commercial sector controls the integrated circuit market, and the military is, in large part, happy to buy products commercially at low cost and high volume. The space sector would be well served if lessons from these sectors could be incorporated into its future development.

Science, Celebrities, and Public Engagement

In April 2015, two prominent public scientists demonstrated, in radically different ways, the rising power of celebrity in science. That month, the talk show host and surgeon Mehmet Oz stood at the center of a maelstrom of controversy after a group of doctors told Columbia's dean of medicine they were dismayed that Oz remained on the medical school's faculty. They accused Oz of promulgating unproven health advice for personal gain, arguing that the public was being "misled and endangered." The news site Vox in the same month called Oz "arguably the most influential health professional in America," but lamented that over his career he had "turned away from science and embraced fame." As a consequence, on health advice, Oz was "leading America adrift."

The same month, astrophysicist Neil deGrasse Tyson debuted his StarTalk show on the National Geographic Channel. Reviewing the show, the Los Angeles Times called Tyson—head of New York’s Hayden Planetarium, popular science writer, and experienced host of TV science shows—“the American face of science.” The show gave him a platform to interview celebrities, such as film director Christopher Nolan and media mogul Arianna Huffington, about the influence of science in their lives. Having a scientist’s perspective on late-night television was, the paper argued, “an idea whose time has seemingly come and, in a time when many people with influence believe that established facts are things to be voted on, an idea that can’t come too soon.”

Oz and Tyson are emblematic of a cultural trend: the larger role now played by celebrities in public discussions of science and science policy. To make sense of this development, we have written books that examine the ways celebrities have influenced public understanding of science and science policy. Is Gwyneth Paltrow Wrong About Everything? When Celebrity Culture and Science Clash examines how celebrities from outside science—such as the actress Paltrow, Jenny McCarthy, Jessica Alba, and the self-styled Food Babe Vani Hari—make science-based claims and provide advice, much of which has little evidence to back it up. The New Celebrity Scientists examines a set of prominent scientists—including Tyson—to show how they have become famous and use their celebrity to spread scientific ideas through society. Based on our research, we argue that the influence wielded by such celebrities is unlikely to recede, and the pervasive nature of fame in modern life means scientists should view celebrity culture as an opportunity to engage citizens in science policy debates.

Researchers in science communication have long known that once citizens leave formal education, the primary source for information about science becomes the media, with audiences now seeking out information from Internet search engines and social media, such as Facebook, Twitter, and Instagram. Traditional and new media celebrity culture has become increasingly influential, as a seemingly endless stream of celebrities endorses products (books, shampoos), lifestyles (vegan, gluten-free), and ideas (pro-labeling of genetically modified organisms [GMOs], anti-vaccination).

At the same time, celebrities can use their visibility to inject scientific ideas into contested political discussions, drawing attention to the issue and mobilizing public action. As physicist Lawrence Krauss argued in the Bulletin of the Atomic Scientists, scientists with a public audience have an opportunity, and often a responsibility, to use their public profile to help set the terms of debate on science policy. Those scientists, he wrote, can “help combat scientific nonsense” and also “help steer public policy discussions toward decision making based on empirical evidence and sound theory.”

Celebrities have more than just a strong cultural presence. According to a comprehensive analysis in the British Medical Journal, they also have substantial impact, exerting significant influence on a range of health behaviors (for example, cosmetic surgery, cancer screening, tanning, and smoking) and even on suicide rates. Moreover, studies have shown that celebrities have helped promote the marketing of unproven therapies and products, such as questionable stem cell treatments. Although the effect is difficult to quantify, there seems little doubt that celebrities have played a role in the growth of trends such as colonics, gluten-free diets, juicing, and a range of other largely evidence-free health practices, such as cleansing and detox diets, all of which currently support markets worth billions of dollars.

Exploring the role and impact of celebrities on public engagement with science and science policy, therefore, is not a frivolous endeavor. Celebrity statements about science can mislead or harm. But many instances do not have clear-cut outcomes, producing instead cultural effects that are complex, contradictory, and contentious. Yet all of these examples of celebrity science create opportunities for the scientific community to engage with citizens.

Assessing effects

There are innumerable examples of celebrity culture having a clearly harmful impact on popular views of science, particularly in the realm of health. This can be seen most sharply in the anti-vaccination movement. Vaccination rates in many developed countries have been falling. Although there are diverse reasons for this decline, Seth Mnookin argued in The Panic Virus that the circulation of scientifically inaccurate information—facilitated by celebrities such as Jenny McCarthy and her former romantic partner, actor Jim Carrey—has played a significant role in this clearly harmful social trend.

Research studies have shown that the celebrity voice has hurt public discourse in other areas as well, including the perceived health value of gluten-free diets and high-dose vitamins and the supposed health risks of GMOs. In all of these cases, celebrities with scientifically unsupported views have played a prominent role, clouding public debates and fostering health practices that are not supported by the available evidence.

Yet celebrities have also had beneficial effects on public discourse about science and health. Earvin "Magic" Johnson's 1991 revelation that he was HIV positive, for example, had an effect on public attitudes toward the condition. Adult men surveyed after the announcement, according to one study, were more concerned about an acquaintance getting HIV, were interested in getting more information about HIV and AIDS, and were more likely to discuss HIV and AIDS with friends. The wider public response, though, was varied and not always clear-cut: adolescents had a mixed reaction to Johnson's announcement. A study found that a majority of adolescents said Johnson's announcement increased their awareness of the condition, but fewer than half said the revelation had changed their perception of their own risk of contracting HIV.

Celebrities who advocate on environmental issues also have impact. When he collected his 2016 Oscar for best actor, Leonardo DiCaprio used his acceptance speech to describe climate change as "the most urgent threat facing our entire species." As described by the Washington Post, the actor had long been outspoken about environmental concerns, such as ocean conservation. Such interventions matter: according to one study of celebrities and climate change news, stars such as DiCaprio serve as a hook for journalists, allowing mass-market publications such as People, Variety, and Cosmopolitan to take environmental issues to a wider audience. Celebrity conservationists also have complex, and often contentious, public impact. David Suzuki, for example, used his fame to bring public attention to conservation biology. A household name in Canada, he argued over decades of media work for environmental protection of species and ecosystems, notably through his work as anchor of the long-running television show The Nature of Things. A book on conservationists who became celebrities, Nature's Saviours, found that in the early 1990s, Suzuki used his fame and influence to establish the Vancouver-based David Suzuki Foundation, a nonprofit that works with business and government to, among other goals, protect the climate and nature.

Even when celebrities engage in public discussions that are thoughtful, relatively informed, and conducted with the best of intentions, the social impact can be complex, multifactorial, and less than ideal. The prominent US journalist Katie Couric, for example, used her cultural capital to raise awareness of colon cancer. In 2000, Couric, who lost her husband to the disease, underwent a live colonoscopy on the Today show as part of a week-long series promoting colon cancer awareness. Health researchers in one study found her campaign had a significant impact on people's use of the procedure: the number of colonoscopies performed by each of 4,000 gastrointestinal endoscopists increased from 15 per month before the campaign to 18 per month after it, an increase that lasted for nine months. The researchers called the behavior change the "Katie Couric effect." Although the impact of Couric's actions has largely been praised, other scholars raised concerns that the campaign increased worry and may have led to some expensive and inappropriate medical interventions.
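In proportional terms, the reported shift amounts to a 20% rise in procedure volume per endoscopist, sustained for the better part of a year:

$$\frac{18 - 15}{15} = 20\%$$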

A similarly complex case was actress Angelina Jolie’s decision to get genetic testing and prophylactic surgery for breast cancer. Although Jolie’s disclosures have, in general, been viewed in a positive light, their actual impact on public health, cancer prevention, and knowledge appears to be mixed. One study, for example, found the “Jolie effect” did increase awareness and information-seeking about cancer, but another found that her announcement was “not associated with improved understanding.” This kind of complex response, one scholar noted in the British Medical Journal, is evident after other celebrity disclosures of a cancer diagnosis or cancer screening. Indeed, public health scholars writing in the Journal of the National Cancer Institute suggested that celebrities should refrain from involvement in complex topics, such as cancer screening, and limit their health advice and advocacy to topics that are clear and relatively uncontroversial, such as advising citizens not to smoke or not to drive while intoxicated.

Scientists have also consciously adapted to celebrity culture to make themselves and their ideas part of public discussion. Tyson, for example, has been a career-long advocate for scientific literacy. But he has also used his media profile to advocate for his particular scientific interests. In 2012, he lobbied in various venues for an increase in funding for the National Aeronautics and Space Administration. The head of New York’s Hayden Planetarium made the case for space in congressional testimony, in his book Space Chronicles, in a long feature in Foreign Affairs, in media interviews, and on Late Night with Jimmy Fallon. Tyson, who sat on the space agency’s advisory board, used his fame to engage citizens with his arguments about the social, political, and economic value of space exploration.

Celebrity and public engagement

As these examples demonstrate, celebrities and celebrity culture play an influential role in shaping how citizens and policy makers encounter and make sense of complicated, often health-related science. Scientific institutions, therefore, should not dismiss celebrity or merely mourn its claimed harmful effects on scientific understanding. The scientific enterprise, instead, should view occasions where celebrity and science meet as opportunities to engage audiences deeply with science and scientific thinking.

There are several ways to do this. First, the scientific community needs to speak out when a celebrity issues an inaccurate or misleading pronouncement on a scientific issue. But decades of research have shown that communicating facts on their own has little impact on audience understanding. Science communication researchers have long identified a fundamental principle: to have a meaningful impact, communication must also address an audience’s values, ideologies, and motivations. With this knowledge as a foundation, scientists should not see their work as mere fact-correction or fact-checking. They should view each piece of communication as an opportunity to engage in a discussion of the issues, with the long-term aim of building trust with citizens. An alliance with celebrities who have large followings and are motivated by a similar commitment to rigor and evidence has the potential to assist in this effort.

As an example of the benefits of active engagement, Vox reported in October 2015 that Oz met with health professionals to understand how his show affects public health. According to Vox, the new season of the show featured more skepticism, more debunking of dubious products, and more explanations of the scientific process. The reporter concluded that The Dr. Oz Show was “hardly an exemplar of scientific thinking,” but its host “does seem to be taking seriously his promise to improve the rigor of his show.”

Second, the scientific community should invest in scientists who can become a trusted public voice. In its Nov. 26, 2008, edition, Discover magazine named Tyson as one of its 10 most influential people in science. That didn’t happen by accident. His reputation was forged over decades, and the scientific community invested in his success. For example, the National Science Foundation awarded him more than $1 million to develop the StarTalk radio show that became the basis for his 2015 National Geographic TV show.

Third, those of us who study and write about the role of celebrities, and celebrity scientists, need to be clear about how uncertainties, cognitive biases, and self-interest can blur the boundaries between science and politics. Experts often disagree over issues such as breast cancer screening, the severity of and best ways to address climate change, or the most important priorities for public investment in science. Moreover, social science research on scientific controversies, such as those surrounding GMOs or vaccines, shows that people’s reasons for their beliefs may have less to do with not understanding science than with cultural or political factors. Celebrities and scientists alike who link scientific facts to prescribed ways of acting may thus be using, consciously or unconsciously, the mantle of science to advance agendas that have a subjective element. In addition, celebrities can play a powerful role in framing what a particular position on a scientific issue—such as GMOs or climate change—signals to the world. Being for or against GMO labeling, for example, can help to define the type of individual you are. It becomes part of your personal brand. This is one reason the same individual can reject the scientific consensus on GMOs while accepting it on climate change.

Finally, we need to continue to research the nature and impact of celebrity in public engagement with science. Celebrity culture isn’t going away. Indeed, social media will likely intensify its sway and ubiquity. According to one recent study of science communication, audiences are motivated to look for scientific information when they feel strongly about an issue, when they notice it covered in the media, and when they have to address it at school or work. The rise in celebrity culture will doubtless create multiple and regular opportunities for scientists and clinicians to spread evidence-based and credible scientific ideas as part of an ongoing public discussion of science—if the scientific community is prepared and willing to engage.

Surviving the Techstorm

In A Dangerous Master, Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics, tells the story of modern society’s struggle to cope with technology’s complexity and uncertainty. In the course of telling this story, Wallach questions the terms of the social contract under which we as a society predict, weigh, and manage the potential harms and benefits that result from the adoption of emerging technologies. In urgent prose, he argues that we need different epistemic, cultural, and political approaches to adequately assess emerging technologies’ risks and their broader social impacts. Wallach promotes public deliberation as one such approach, one that gives citizens and experts the opportunity to distinguish technological hype from hope and speculation from reality, and in so doing to shape their technological futures.

a-dangerous-master

There is a whiff of science fiction in Wallach’s prose—maybe more fiction than science. He envisions a future where autonomous robots, designer babies, homemade pathogenic organisms, and mindclones confront our hopes, fears, and inner conflicts. An ethicist by training, Wallach uses these future scenarios to explore normative questions born of the pervasive entanglements of technology and society. One of the principal questions at the heart of A Dangerous Master is whether we will find ourselves governed by transformative technologies whose ramifications we did not envision and whose development we did not examine. This question is one that always emerges when we fail to anticipate or develop adequate controls for complex technological systems. To preemptively counter such questions, scientists and engineers usually reach for concepts and metaphors that portray their artifacts—physical and biological systems—as reliable and controllable.

Wallach pulls a powerful yet simple question from the sidelines of the global conversation on science policy: What is our purpose in developing a particular technology, and is that purpose one we ought to pursue? Wallach warns us that without adequate reflection, we risk being swept into an incessant storm of groundbreaking scientific discoveries and technological applications that outpace society’s capacity for control. What Wallach refers to as a “techstorm” is essentially a period of overzealous technological innovation that can have destructive consequences. Wallach offers a useful historical analog for a techstorm in the tools and technologies that gave rise to the Industrial Revolution. Disruptive technologies, including steam power, the cotton gin, and more efficient iron production, radically transformed society, upending traditional hierarchies, reshaping economies, and even modifying relationships with the natural world. He reminds us that the benefits reaped from the new manufacturing era were preceded by a period that immiserated much of the workforce and included among its harms child labor and unsafe working conditions.

In the wake of the manufacturing era came the rise of industrial capitalism, based on individual rights, private ownership, and free markets. Capitalism and the techstorm that enabled it brought with them growing disparities in wealth and opportunity, both among and within countries. In the current context of four decades of wage stagnation and wealth inequality approaching levels not seen since the early twentieth century, Wallach imagines what would happen if technology permanently replaced a great deal of human work, as many digital-age observers have predicted, and suggests that rethinking capital distribution will become necessary if society wishes to avoid a crisis.

Wallach identifies another potential fissure in the social contract in the form of a genetically stratified future, where a select few are capable of further distancing themselves from the majority through the use of genetic enhancements. He also analyzes transhumanist philosophy, which idealizes a post-human future in which humans master their own evolution using technological enhancements: “The best of what it means to be a feeling, caring, and self-aware person will be fully expressed in technologically enhanced cyborgs and techno sapiens.” Wallach characterizes the critique of transhumanism as “buy[ing] into a representation of what it means to be human that is incomplete and mechanizes, biologizes, and pathologizes human nature.”

I think Wallach’s perspective on this debate is unduly Manichean, but I share the notion that we need to be wary of reductionist discourses. These conversations imperceptibly close rather than open the prospect for us to decide what we want to become and what we want our futures to be. Such discourses also obscure rather than illuminate the deepest sources of social ills, which shape the evolution of our genes, bodies, and identities. Our biological existences are profoundly influenced by where we live and how much money we have to better ourselves through education and access to care. Reductionist discourses that ignore social ills and contingencies will tend to crystallize our genetic and digital divides and, in turn, limit our opportunities to bridge them.

Techstorms are enabled by rhetoric that hypes immediate benefits while downplaying risks. Such rhetoric allows technology creators, such as those in the biomedical industry and the military-industrial complex, to evade scrutiny and quickly entrench themselves in society. Interestingly, Wallach’s emphasis on security and defense research as a driver of the techstorm is in line with what we see happening in cutting-edge biotech, where accelerated development outpaces the glacial pace of regulatory agencies and courts, and with it society’s ability to adequately evaluate the technologies’ effects. This paradigm of fast-paced innovation, driven by economic competition and security imperatives, incorporates little input from the public beyond consumers’ market preferences. Given that these technologies have the potential to disrupt many aspects of society, should we not look to uphold the much-espoused principle of democracy?

Wallach’s purpose in critiquing the techstorm is not to stifle innovation but to slow the speed of innovation to a “humanly manageable pace.” This pace is described as one that’s in line with informed decision making by individuals, institutions, governments, and humanity as a whole. As Richard Denison of the Environmental Defense Fund stated in his blog, “The real danger is continuation of a government oversight system that is too antiquated, resource-starved, legally shackled, and industry-stymied to provide to the public and the marketplace any assurance of the safety of these new materials as they flood into commerce.” A manageable pace that incorporates broad social values while ensuring human safety begins with the need to discover inflection points, which are historical junctures in technological innovation that are followed by either positive or negative consequences. Secondary inflection points can be thought of as rate adjustments in the technology’s research trajectory.

Again, Wallach’s analysis is enlightening, as our global society currently grapples with the benefits and risks of genomic editing and artificial intelligence. When it comes to mastering the human genome, a first inflection point is our will to question who should own our genes and their secrets. The Supreme Court’s unanimous 2013 ruling barring the patenting of human genes was a wise and balanced decision that cleared a major barrier to innovation in biotechnology, drug development, and medical research. But as with any inflection point, the court’s decision was only a first step toward finding the right balance between protecting legitimate intellectual property and securing an open future for personalized medicine based on genomics.

By 2015, the scientific community was confronted with one of the most important disruptions in genomics research since the 1975 Asilomar Conference on Recombinant DNA. The advent of CRISPR technology, which allows gene editing at specific loci on the genome, drastically accelerates the potential for engineering human and non-human biology. I welcome Wallach’s critique of scientific entrepreneurship and its tendency toward simplistic boosterism, which saturates genomics research and policy discussions. For example, those who support gene editing often describe it as a pair of molecular scissors that cut out harmful DNA sequences on a chromosome, thus “editing out” disease. Images such as this make the gene-editing process seem easier and cleaner than it really is, and assume a control over our germline that we do not yet have. They gloss over the potential for off-target edits, which can create unintended mutations in the genome. Another characteristic of the momentum around gene editing is the lack of clarity about the role citizens are invited to play. As Wallach might suggest, experts’ calls for a moratorium on germline gene editing are no substitute for more inclusive public debates on the promises and risks of our biotech futures.

Inflection points such as those apparent with CRISPR are opportunities that allow society to exert a degree of control of the future we create. Once this window of opportunity passes, it becomes extremely difficult to overcome the ensuing technological lock-in. To avoid this fate, Wallach posits that it is necessary for oversight to comprise a combination of hard and soft regulations. In other words, effective oversight requires both nimble governance (industry standards, codes of conduct, statements of principle, and so on), and the authority of government to enforce appropriate research practices. To create this kind of oversight, Wallach advocates for the creation of governance coordinating committees (GCCs).

A GCC would act as a good-faith broker searching for gaps, overlaps, conflicts, inconsistencies, and synergistic opportunities among the various public and private entities already involved in shaping the course of an emerging field. Its primary task would be to forge a robust set of mechanisms that comprehensively address challenges arising from new innovations, while remaining flexible enough to adapt those mechanisms as an industry develops and changes.

The GCCs would be led by “accomplished elders” who have achieved wide respect (Wallach doesn’t specify their exact qualifications) and would work together with all interested stakeholders to monitor development and propose solutions to perceived problems.

Public engagement is another area Wallach addresses; he highlights citizen panels as a way to involve a representative cross section of citizens, tap their knowledge, and provide input to lawmakers. Such a panel would allow citizens to receive information and opinions different from those traditionally offered by politicians, experts, and interest groups. It would also provide a better way to generate informed attitudes that can be clearly expressed to decision makers.

Without proper citizen engagement and a means to contain the power of minority interests, technological development will proceed unhindered, for better or, quite possibly, worse. The sheer speed of change will assuredly result not just in people who surrender their lives to gadgets and machines they may not want, but in vast disruptions that society cannot mitigate. Although such a view may mark me as a Luddite, it’s appropriate to remember that the purpose of technology is to benefit our quality of life. Technological progress alone is not a means of transcending the human condition or a goal in itself; it is a tool for improving the human lot, and like any tool, it can cause serious harm with improper use. In the end, the most important message that Wallach shares—and I appreciate his effort to do so with elegance and perspicacity throughout the book—is that we need to harness the full force of our democracy to shape technological progress according to our values. Otherwise, we will end up controlled by technologies whose ramifications we did not foresee and whose path we neglected.

Nobody Knows Anything

In a memoir, the screenwriter William Goldman reflected on his moviemaking career, wondering why some films caught the public imagination and soared, while others flopped. His depressing conclusion was that “nobody knows anything.” Although Hollywood projects an image of brash self-confidence, behind the scenes the actors, producers, and writers are in perpetual panic over the uncertain future—blockbuster or bomb?—of their fragile experiments.

nowatny-cover

Strictly speaking, some people do know some things. Experience, expertise, and data can, of course, help us when faced with difficult decisions. But it’s important to remember that uncertainty remains our default state and that it drives the scientific enterprise. The search for scientific certainty resembles the trial of the Danaids of Greek myth, whose fate in the Underworld was to spend eternity pouring water into an unfillable vessel. If we believe we have found an inviolable scientific certainty, or demand such certainty before acting, we are only kidding ourselves.

Science, Helga Nowotny tells us, “thrives on the cusp of uncertainty,” using what is known to probe what is not. Her brilliant new book, The Cunning of Uncertainty, argues that we should not just understand what it means to be uncertain, we—scientists, society, politicians, and all—should learn to embrace uncertainty. Nowotny provides dispatches from the frontiers of current research, taking in psychology, history of science, digital humanities, genomics, planetary science, and much more. She resists the temptation to report the latest surprising finding or pop fact; her aim is, instead, to ask about the nature of scientific exploration. In doing so, the book constructs a compelling case for free academic inquiry and the value of serendipity.

In some respects, her argument should come as no surprise. Nowotny was, until recently, president of the European Research Council, a funding institution set up to allow Europe’s leading scientists to conduct their research free from bureaucratic pressures. But, having read and enjoyed much of her previous work, I couldn’t shake the sense that, here, I didn’t know quite where she stood. Over the last three decades, Nowotny has done as much as anyone to add color to the often black-and-white depictions of the relationship between science and society. In this book, some of that color has drained away. Perhaps this is a result of the book trying to do too much. Nowotny wants to be philosophically consistent while also sociologically nuanced, historically accurate while also relevant for policy making. Inevitably, this produces unresolved tensions.

In the 1990s, Nowotny was part of a group of scholars in Science and Technology Studies (STS) who influentially described the changing relationship between science and society. “Mode 1” science, as they described it, was characterized by a social contract in which society gave science autonomy and funding in exchange for its downstream social benefits, whatever those might end up being. These STS scholars depicted contemporary science as “Mode 2,” in which the social contract is in flux; science is no longer done for its own sake; scientists are expected to imagine new economic possibilities, work with colleagues in other disciplines, and deal with public scrutiny. Nowotny and her colleagues made an argument that was at once descriptive and prescriptive: Mode 2 is the way science is going and the way it should go.

It is ironic, therefore, that Nowotny ended up running the European Research Council, a resolutely Mode 1 organization, a bulwark against the “impact” agenda that is creeping through other science funders. The sole criterion for funding research is “excellence,” which is code for scientists deciding what counts as worthwhile science. Nowotny cites education reformer Abraham Flexner’s 1930s call for an appreciation of the “usefulness of useless knowledge.” Her case for excellence is far more coherent and robust than most of what passes for contemporary science policy. But, in reading her book, I kept wanting to know whether its Mode 1-oriented conclusions had emerged from a lifetime of scholarship or a sequence of compromises. Is this Nowotny the STS scholar or Nowotny the policy maker?

She is certainly critical of some recent and powerful trends in science. For example, she draws attention to the tendency, through the collusion of policy makers and researchers, to hype the impact of research. As a funder, she knows all too well that “grant applications tend to be replete with over-promising rhetoric.”

One of the fields steeped in this kind of rhetoric is the emerging science of big data. Nowotny describes well, without resorting to algorithm alarmism, how the quest for big data leads to unintended consequences. “Data,” she points out, is the great misnomer. Despite the term’s Latin roots, data is (or are, if you insist) made, not given, and we should keep track of its (okay, their) social roots. Nowotny describes big data’s potential for moving from a science that is driven by “why” to one driven by “what,” in which surprising findings are allowed to emerge from data. But she would be as critical as anyone of overblown talk such as appeared on the cover of Wired magazine, which declared the “end of theory” in 2008. Excitement about “what” must not be allowed to occlude the discussion of “what for.”

Big data is one of many areas where research is intimately implicated in the creation of future worlds. The world of big data is not just a world as understood by big data, but a world created from big data. As Nowotny puts it, “when Google sought to gauge what people were thinking, it became what people were thinking. When Facebook sought to map the social graph it became the social graph.” Knowledge cannot be so neatly split from action.

In 2002, Donald Rumsfeld (then US Secretary of Defense) was much mocked for an obfuscatory attempt to justify an invasion of Iraq despite the lack of evidence that the country possessed weapons of mass destruction. He said: “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” (Nowotny uses the neatened-up version that appeared in Rumsfeld’s memoir, which he titled Known and Unknown.) Those of us who are interested in the interplay of knowledge and politics were embarrassed to admit that he was right. When imagining uncertain futures, it is comfortable to think in terms of calculable risks. But there are areas of uncertainty or ignorance in which we cannot calculate probabilities and we cannot predict consequences.

The attempt to exert total control over uncertainty (Rumsfeld’s “unknown unknowns”) by domesticating it as risk (the “known unknowns”) may be part of the problem. One need only look at financial services and the crisis of 2007-8 to see how an industry that claims control over risk creates and fails to understand its own systemic risks, with the result that, when surprises hit, they are felt harder. Nowotny argues that “the vast majority of economists failed to foresee the 2008 financial crisis,” but this is to overlook the economic Cassandras who were ignored because it was in nobody’s interests to worry about the big uncertainties. More thoughtful economists are now realizing their mistakes. In The End of Alchemy, Mervyn King’s recent book-length attack on financial chicanery, the ex-governor of the Bank of England draws attention to what he calls “radical uncertainty,” by which he means uncertainty that can’t be put into numbers.

When Nowotny writes, regarding the financial crisis, that the “cunning of uncertainty transforms promises into probabilities,” we are left wondering who is claiming control over our futures—who is generating these probabilities?—and why. It is no accident that governments portray uncertainties in finance as under control while those from terrorism are presented as sufficiently troubling to justify our current security apparatus. Reading Nowotny’s book, I was left wondering: where is the democracy in uncertainty? What would a genuinely radical approach to uncertainty look like?

The possibility of using under-scrutinized uncertainties for particular ends has not gone unnoticed. If uncertainty has politics that affect decision making, the question, as Nowotny’s STS colleague Erik Millstone has posed it, is “who gets the benefit of the doubt?” Rumsfeld’s speech reminds us that uncertainty is not simply a certainty deficit. It can be defined, manufactured, ignored, or exaggerated for political or economic ends. Naomi Oreskes, whose work examines corporate attempts to inflate uncertainties in the sciences of climate change and public health, uses a term coined by the historian of science Robert Proctor to describe the study of culturally induced uncertainty: agnotology. Agnotologists seek answers to Millstone’s question.

In a lecture to the London School of Economics about this book, Nowotny asked “if science can thrive at the cusp of uncertainty, why can’t society?” Her rhetorical question doesn’t want a response, but, if we were to force one upon it, it would be that, as with earlier calls for an “experimental society,” we need to be extremely careful about who controls the experiment. Yes, we need to get used to trying and erring rather than planning and predicting; but we also need to connect our academics to the real world rather than trying to free them from it.

The Human Factor

Science Policy Up Close traces the high points of John Marburger’s career from president of Stony Brook University to director of Brookhaven National Laboratory and, later, science advisor to President George W. Bush. The book, focused, as Marburger put it, on “policy in action,” is part retrospective on science policy and part memoir laced with biographical information and speeches. It is, by necessity, inconsistent in structure and content: Marburger died in 2011, having completed only the first two chapters. At Marburger’s request, his longtime colleague Robert Crease, professor of philosophy at Stony Brook University, completed the book by ably editing materials from Marburger’s speeches, writings, and interviews, and providing context and commentary through introductions to the material in the remaining four chapters.

science-policy-up-close

The book is indeed about policy in action—the challenges of dealing with complex and controversial issues, with various advocates pursuing diverse, often strongly held, positions and outcomes. In that respect, the book provides a glimpse of science policy making, administration of science and technology programs, and conflict resolution at the highest levels. But for those looking for insights into thought processes, perspectives on interpersonal interactions, or glimpses of internal White House advice and deliberations, Science Policy Up Close will seem somewhat distant. However, through the course of his career, John Marburger was an up-close participant and influenced major science and technology policy decisions involving everything from the decommissioning of the Shoreham nuclear power plant, to funding for the proposed Superconducting Super Collider particle accelerator, to the future of Brookhaven National Laboratory, and, more broadly, to the distribution of funding for the federal research and development (R&D) enterprise. The book portrays the diverse experiences of an accomplished physicist, administrator of complex scientific and engineering programs and institutions, and manager of people—and there is much to glean from Marburger’s personal accounts and Crease’s compilations and insightful commentary.

In the preface, Marburger points to his “ability to deal with people in an objective and productive way,” and his skill at resolving conflicts involving science and technology programs and projects. In the book’s initial chapters, he underscores the value he places in working to understand diverse perspectives and identifying a path forward. For example, in the first chapter, he describes his efforts to resolve the dispute over the future of the Shoreham nuclear power plant on Long Island as head of the commission established by New York Governor Mario Cuomo. He proved to be an effective mediator, in part due to his respect for others, patient listening skills, and belief that the range of views on an issue should be given a full hearing.

In the second chapter, Marburger provides a detailed accounting of the factors leading to the demise of the Superconducting Super Collider from his vantage as chair of the board of trustees of the University Research Association, the group managing the project. He points to what he calls the Principle of Assurance—the need to ensure public accountability on large scientific facilities and projects through transparency on costs, scheduling, and performance—as a general objective that he pursued in later high-level positions.

The third chapter continues with the theme of conflict resolution and consensus building as Marburger, the new director of Brookhaven National Laboratory, responds to public uproar over a leak of radioactive water from a reactor at the laboratory. As in other controversies he dealt with throughout his career, Marburger’s strategy for resolution began with developing a narrative in which all sides could see themselves. Then he worked to craft a resolution that reflected elements of multiple perspectives. Chapters four and five are dedicated to his activities as science advisor to President George W. Bush and as director of the White House Office of Science and Technology Policy (OSTP) in each of the president’s two terms.

Early in his tenure, Marburger made organizational changes at OSTP in an effort to focus the office on a smaller number of policy issues. Critics viewed his streamlining of objectives skeptically, believing it would limit the influence of the office. Indeed, he did focus on a more limited policy agenda. He argued that such controversial issues as climate change and embryonic stem cell research were not primarily science policy matters and therefore were not OSTP priorities. He believed climate change policy was largely driven by economic considerations that were not an area of analytical strength at OSTP. Similarly, he argued that the debate over embryonic stem cell research involved ethical matters beyond the purview of his office.

But many in the scientific community believed Marburger was not actively pursuing these issues and not using his position to ensure that an ideological White House was making decisions informed by objective science. He responded to critics by emphasizing his role as an advisor and not an advocate, and pointing to the need to work in a manner that maintained the trust and confidence of the president and his other senior advisors. Marburger focused on budget priorities and R&D funding, communications technologies, bioterrorism, and the many technical issues that OSTP is called on to address. Unlike his predecessors, he was not given the title of Special Assistant to the President, which fueled the perception that he was not fully engaged in White House policy-making processes.

A controversy that dogged Marburger’s tenure at OSTP moved into the public spotlight in 2004, when more than 60 prominent scientists, including 20 Nobel laureates, signed a statement charging that the Bush administration was “suppressing, distorting, or manipulating the work done by scientists at federal agencies.” The statement, organized by the Union of Concerned Scientists, called for restoring scientific integrity and cited examples of alleged manipulation of science to further political objectives and the appointment of individuals to science advisory panels based on views favoring administration policy objectives. Marburger argued that the claims lacked a basis in fact and that the statement was counterproductive. He steadfastly defended the Bush administration’s record related to science, pointing to sustained support of R&D, including substantial funding increases for basic research.

Marburger was a determined leader and administrator who focused on priorities, acted deliberately, and cultivated an image of evenhanded propriety. As Crease points out in his introduction to the book, Marburger “learned to keep his image under his own control,” quoting him as saying that in order to maintain credibility and authority, an administrator involved in controversy should be “as bland as possible.” Marburger was a complex, highly accomplished individual who behind the scenes played piano, meticulously maintained an MGB sports car, and early in his career spent his spare time over three years building a harpsichord. He was chair of physics at the University of Southern California at the age of 32, dean at 35, and president of Stony Brook University at 39.

Marburger learned from his early accomplishments and attempted to apply his skills and experiences as science advisor to the president. But he, like other science advisors, found the complex and chaotic nature of political processes to be perplexing, and science policy making at the highest levels to be challenging. He was, as Crease put it, “a lightning rod for attack,” in part because prominent scientists believed the Bush administration was misusing science. Yet Marburger believed his job was to put politics and opinions aside and provide the president independent scientific advice.

What actually goes on in White House policy-making deliberations often remains a mystery because advisors generally adhere to the principle that conversations with and advice to the president are private matters. In addition, many of the issues involve classified information and considerations that do not allow for the degree of transparency required for objective evaluations of the roles and actions of individuals. Marburger’s book is no exception to this general rule.

But autobiographies and memoirs such as Science Policy Up Close are useful in providing a perspective on history from the standpoint of those who participated in historical events. In essence, they provide a public figure an opportunity to present his or her actions in the context of events as he or she perceived them—or would like others to perceive them. One may argue that a more balanced assessment can come from those positioned to evaluate events critically from multiple perspectives, rather than from someone defending or advocating for his or her own actions in those events. Science Policy Up Close provides an important, albeit somewhat opaque, glimpse of the inner workings of OSTP under particularly challenging political circumstances. It presents a clearer accounting of the complexities of operating major scientific facilities; the challenges of securing and maintaining support for large science and technology enterprises; and the difficulty of not just crafting, but effectively implementing, policy positions.

For those interested in national science policy, particularly those who might choose to participate in and guide the policy-making process as public servants, Science Policy Up Close is a valuable resource. Whether or not a reader agrees with the opinions and actions described in memoir or biography, there is much to learn from the way an individual assesses and responds to the issues with which he or she is confronted.

In an essay excerpted in the book’s final chapter, Marburger states that “science must continually justify itself, explain itself, and proselytize through its charismatic practitioners to gain influence on social events.” Readers of Science Policy Up Close and observers of actions and events during the presidency of George W. Bush will debate the degree to which Marburger accomplished this goal. Regardless of one’s perspective, the book provides a thoughtful accounting of the challenges associated with science policy making at the highest level, and of an accomplished individual’s persistence and dedication to science and public service under difficult circumstances.

Forum – Summer 2016

Challenges raised by gene editing

In “Why We Need a Summit on Human Gene Editing” (Issues, Spring 2016), David Baltimore describes how the planning committee chose the main theme and diverse topics of presentations for the International Summit on Human Gene Editing, held in December 2015. I appreciate the committee’s dedication and effort to make the global forum a memorable and significant event. Dr. Baltimore also expressed hope that the discussions would “serve as a foundation for a meaningful and ongoing global dialogue,” and with that in mind, I would like to share what has been happening in Japan and offer some thoughts for the future.

In Japan, the Expert Panel on Bioethics of the Council for Science, Technology, and Innovation in the national government’s Cabinet Office has been considering the issue since June 2015. The panel decided to take action because of the publication of the first research paper about gene editing on human embryos, and was also prompted by statements by the US government and the International Society for Stem Cell Research in spring 2015. The panel held four hearings with experts in medicine and ethics, and delivered an interim report in April 2016.

The panel concluded that clinical usage of gene editing techniques on human embryos that would lead to heritable genetic changes in future generations should not be allowed at this time, owing to safety concerns, as well as other ethical, social, and philosophical issues. The panel’s report also refers to the technical, ethical, and social issues described in the Statement of the International Summit. Regarding basic research on human embryos, the panel judged that it might be possible to justify some areas of research, such as research into the function of genes during the development of human embryos. All such research, however, would need to undergo strict ethical reviews and—whatever the case—gene-edited embryos should never be implanted in the uterus.

The Japan Society of Gene and Cell Therapy (JSGCT) issued a joint statement with the American Society of Gene and Cell Therapy in August 2015 (Molecular Therapy, 23:1282). Furthermore, in April 2016, the JSGCT, in collaboration with three other academic societies in Japan, issued a proposal for the prohibition of clinical application of germline editing and urged the government to establish appropriate national guidelines for basic research.

As a participant in both international and national activities, I can confidently say that the international summit has had a positive influence on the discussion of gene editing in Japan. Two of the members of the Expert Panel on Bioethics who participated in the summit—including myself—presented reports at one of the panel’s meetings, as well as at several academic societies. The challenge now is how to make the dialogue truly global. As a result of the summit, there are surely many discussions taking place all around the world. I hope that those local discussions—particularly those in non-English speaking countries and regions such as Asia, Africa, and Latin America—will be welcomed into these global discussions, since the challenge of how to handle gene editing technology is one that concerns all of humanity.

Kazuto Kato

Professor of Biomedical Ethics and Public Policy

Graduate School of Medicine

Osaka University, Japan

I had the privilege of attending the International Summit on Human Gene Editing, and the take-home message for me was that from an experimental perspective, human somatic and germline gene editing are acceptable within local (and global) regulatory and ethical/moral frameworks. In addition, from a therapeutic perspective, somatic gene editing provides a very exciting and globally acceptable opportunity. In contrast, editing the human germline for therapeutic (or preventative) purposes raises many important questions for which there are currently no answers. These questions are complex and touch on issues such as altering the course of natural evolution (with unpredictable consequences) and eugenics, among many others. All present at the summit shared a strong commitment that the scientific community should not proceed in the direction of therapeutic/preventative human germline gene editing.

Having studied and worked in the “North” and now located in the “South,” I have often been asked whether technological advances such as gene editing are indeed relevant to emerging economies, given the need to focus on more pressing priorities such as basic education, health, and food security. I live in a country—South Africa—that has one of the highest rates of HIV infection in the world, with most of the affected individuals in the economically active segment of the population. My own research would see the implementation of advanced technologies in the genomics field (including gene therapy) in a country in which the proportion of HIV-positive individuals on antiretroviral therapy is far below 100%. Can one justify advanced technologies in the face of an inability to meet basic needs?

The answer does not appear to lie exclusively in the notion of distributive justice, but perhaps in the principles of health economics: if it makes sense from an economic perspective, then everyone stands to benefit. Too little has been done, in my opinion, to accurately estimate the benefits that would be derived from implementing the fruits of the genomics era (including gene therapy and gene editing) on a large scale in the developing world. This approach would require calculating the costs, for instance, of lifelong therapy for communicable diseases such as HIV and for genetic disorders such as cystic fibrosis, sickle cell disease, and familial hypercholesterolemia, and then contrasting these with the cost of a one-off diagnostic test or therapeutic procedure. In practice, this would include, for example, the institution of newborn screening programs in the public sector (currently available only to the privileged minority in the private sector) and the application of gene therapy and gene editing for diseases including those mentioned above, bearing in mind the significant cost reduction that would occur with economies of scale.
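As a minimal sketch of the kind of comparison described above, one might weigh the discounted lifetime cost of chronic therapy against the one-off cost of a curative intervention. All figures and the discount rate below are hypothetical placeholders chosen purely for illustration, not estimates drawn from this letter:

```python
# Illustrative sketch only: every number here is a hypothetical placeholder.
def discounted_lifetime_cost(annual_cost, years, discount_rate):
    """Present value of an annual therapy cost paid over `years` years."""
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(years))

# Hypothetical inputs: $1,000 per year of chronic therapy over 40 years,
# discounted at 3%, versus a $15,000 one-off curative procedure.
lifelong = discounted_lifetime_cost(annual_cost=1_000, years=40, discount_rate=0.03)
one_off = 15_000

print(f"Discounted lifelong therapy: ${lifelong:,.0f}")  # roughly $23,800
print(f"One-off intervention:        ${one_off:,.0f}")
# The one-off intervention is cost-saving whenever one_off < lifelong,
# even before counting the economies of scale that would lower one_off further.
```

A real analysis would, of course, also need to account for screening costs, treatment adherence, and health outcomes, but the basic arithmetic of the contrast is of this form.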

I would welcome an opportunity to work with like-minded individuals on the health economics of the large-scale implementation in the South of the fruits of the genomics era (including gene therapy and gene editing), where the need paradoxically is as great, if not greater, than in the North, where most of the attention appears currently to be focused. The hope is that armed with an objective appraisal, it will be possible to approach leaders in government and business to convince them of the urgency to act.

Michael S. Pepper

Professor of Immunology, Faculty of Health Sciences

Director, Institute for Cellular and Molecular Medicine

Director, South African Medical Research Council Extramural Unit for Stem Cell Research and Therapy

University of Pretoria, South Africa

The emerging power of biotechnology is promising an unprecedented ability to alter human structure and function. To foster global dialogue on those powerful possibilities, members of the international community gathered in Atlanta in May 2015 for Biotechnology and the Ethical Imagination: A Global Summit (BEINGS). Leaders from science, business, philosophy, ethics, law, social science, religious disciplines, and the arts and humanities convened to propose a set of ethical principles and policy standards for biotechnologies that impact the human species. The results will be published in the coming months.

The potential of biotechnological advances demands many conversations such as BEINGS, and so I applaud the sponsors of the International Summit on Human Gene Editing. But the key question for such conversations is: who should be at the table?

Human gene editing challenges our definitions of what it means to be human, as well as the proper limits of scientific interventions. But it also demands an examination of how our desire to define and heal disease is conflated with aesthetic definitions and desires, and strongly challenges us to examine our socially and culturally situated definitions of concepts such as “normal functioning” and “disability.”

How we approach such questions is historically and socially contingent. The impact of human gene editing will be felt beyond the biotechnologically advanced countries, and, as participants in the collective human experience, the world community deserves a voice. In the past, technologically advanced societies made decisions that had tremendous impact on the social progress and physical environments of other societies. We must learn from that history and solicit the combined wisdom of different cultures with different experiences and perspectives to thoroughly and transparently debate the implications of this technology. Different kinds of insights lie in the collective experience, the science and philosophy and art and literature of our species, including that of tribal and indigenous populations.

We must invest in those conversations now. It is not only the power of these technologies that challenges us, but their simplicity. It is challenging enough to determine how the scientists represented at the summit should handle human gene editing; it will become nearly impossible when the tools can be mastered by anyone with basic technical skills in genetics. The do-it-yourself, garage genetics lab may not be quite ready for human gene editing, but the ability to alter the genomes of plants and microorganisms is becoming routine. As these technologies become increasingly accessible, so will the potential for creating accidental (or, unfortunately, intentional) pathogens or environmentally destructive species.

How do we confront such challenges as a world community? I am not sure of all the solutions, but I am sure of the process: we need collective innovative thought from as many different fields, cultures, philosophies, and perspectives as possible. And that is going to happen only if we also invite critics, opponents to the technologies, and those whose disciplines or fields may at first seem irrelevant to the conversation, as we tried to do in BEINGS. Time is not on our side.

Paul Root Wolpe

Director, Emory Center for Ethics

Emory University

Regulating genetic research and applications

R. Alta Charo’s “The Legal and Regulatory Context for Human Gene Editing” (Issues, Spring 2016) provides an excellent broad overview of the current status and challenges surrounding biotechnology governance around the globe. The article also touches, if briefly, on current oversight issues in biosecurity, mentioning the self-governance models of the National Science Advisory Board for Biosecurity (NSABB) that emerged from recommendations in the 2003 National Academies’ report Biotechnology Research in an Age of Terrorism.

When thinking about biotechnology from a security governance perspective, it will be necessary to anticipate the types of security threats that may emerge as science and technology advance, the potential consequences of those threats, the probability that adversaries will obtain or pursue them, adversarial intent, and the potential effect on strategic stability.

The CRISPR/Cas9 system, and emerging variants on the system, enable unprecedented control and ease in editing the genome. The approach is somewhat analogous to remote “command and control” of the genome, and this is what makes the tools novel and different from earlier gene editing methods. Potential impacts on biomedicine and human health are vast, including beneficial applications for enabling gene and cell replacement therapies, drug discovery through functional genomic screens, and simplified production of disease models, which permits validation of therapeutic targets and more efficient testing of drug efficacy. But the future challenges and pitfalls associated with CRISPR/Cas9, especially pitfalls with implications for international security, are still to be determined. Governance to address uncertainties, while not hindering research, is tough.

The broader biosecurity and nonproliferation communities (as well as congressional committee findings) have recognized that in the twenty-first century biological weapons are sometimes (but not always) cheaper, easier to produce, more widely available, within the capabilities of an increasingly large number of people with minimal technical skills and equipment, more concealable, and inherently exploitive of more dual-use technologies. The potential synergies between biotechnology and other emerging technologies, such as nanotechnology, big data analytics, and the cognitive neurosciences, not only suggest tremendous promise for advances in consumer and defense technology, but also raise new concerns. A driving concern is that within this century, both nation-states and non-state actors may have access to new and potentially devastating dual-use technology.

Reducing the risk of state-based misuse of biotechnology for biological proliferation will mean taking account of the highly transnational nature of biotechnology research and development. Traditional and innovative new approaches to nonproliferation and counterproliferation are important policy elements for reducing the risk of malfeasant application of technology. Robust international agreements lower the risk of terrorist applications by eliminating legal routes for states and terrorists to obtain agents, precursors, or weaponization materials, and by minimizing possible transfers from state to non-state actors through theft, deception, or other means. Efforts to strengthen the international regime controlling transfers of dual-use materials and equipment are also important.

Margaret E. Kosal

Associate Professor

Sam Nunn School of International Affairs

Georgia Institute of Technology

In her article about the regulation of human gene editing, R. Alta Charo gestures toward a regulatory model that has much in common with the “learning health system,” a recently described model in which a health care system is constantly engaged in a process of policy monitoring, rapid feedback, and quality improvement. Analogously, as new technologies emerge, a “learning regulatory agency” would apply the assessment methods and standards that it knows well, but would always be on the lookout for systematic biases, flaws, or new domains of innovation that are poorly served by its current tools. The agency would then adapt, developing new methods and standards accordingly.

We think that as policy makers contemplate the promise and potential pitfalls of gene editing technologies, it will be valuable for a “learning agency” to take some lessons from another recent innovation in biotechnology—genomics and precision medicine. Following on the success of the Human Genome Project, precision medicine initiatives arrived with the potential to read patients’ health and response to therapy off their genetic profiles. In that spirit, biotechnology companies raced forward, searching for meaningful genetic associations and then developing diagnostic technologies to detect these genes. But lacking a structured system for market entry, companies began offering tests for sale directly to consumers, without having to demonstrate that these associations were clinically meaningful predictors of patient outcomes. Unfortunately, an insufficient evidence base about how to use and interpret genetic tests—as would be demanded by stricter regulation—often left both clinicians and patients confused. In part as a result, precision medicine has failed, thus far, to live up to its high expectations.

In response to this problem, the Food and Drug Administration is now proposing to oversee more closely certain aspects of clinical diagnostics. However, we think that one lesson for a learning agency to draw from the precision medicine case is to be more proactive in requiring evidence of clinical utility for new biotechnologies. Applying this lesson to gene editing could mean that developers will need to go beyond merely showing that their technologies can successfully edit their targets in controlled conditions—they must also show that this intervention translates into reliable, cost-effective, and clinically meaningful benefit.

This experience also calls into question the regulatory policy narrative that policy makers must choose between protecting the public through strict regulation and promoting innovation through loose regulation. Indeed, the loose regulatory environment surrounding diagnostic technologies in precision medicine seems to have failed by both measures: There have been few genuine therapeutic breakthroughs emerging from this space, and the public has been made more vulnerable to scientifically dubious claims surrounding the utility of genetic testing. Both of these outcomes can erode public trust in science and regulation. Thus, although the ideal of an adaptive learning agency is a good one, it is critical to be clear about what should be driving this adaptation. An ideal learning agency would adapt and evolve alongside the science (with all of its messy uncertainty), not to the perception of what the science could be.

Spencer Phillips Hey

Harvard Center for Bioethics

Aaron S. Kesselheim

Department of Medicine

Brigham and Women’s Hospital

Harvard Medical School

There is currently a three-speed Europe for the biotechnology industry, with each of its three applications—health care, industrial, and agricultural—operating under different regulatory approval processes.

From 2012 to 2017, the annual growth rate for biotechnology products in the health care field (medicines and diagnostics) is estimated at 5.5%, with 80% of that growth due to small- and medium-size enterprises. Europe has seen a greater number of approvals of biosimilar products (22 over the past 10 years) than the United States, and more than 40 monoclonal antibodies active in different therapeutic areas, but especially cancer therapy, have received regulatory approval. This has been helped by several imaginative European Early Access Approval schemes that are hedged around by the requirement for strictly enforced post-marketing safety studies. These are monitored under new safety regulations introduced in 2013, including the establishment of a new European safety committee, the Pharmacovigilance Risk Assessment Committee. Health technology assessments, more highly developed in Europe than in the United States, and pressure on health care budgets from cash-strapped payers limit the availability of costly biotechnology products, even after regulatory approval.

Other new European regulatory initiatives relevant to health care biotechnology products include introducing in 2014 improved regulations for approving clinical trials; replacing the inconsistent implementation of the previous European Clinical Trials directive; developing in 2007 a new product classification called Advanced Therapy Medicinal Products, which incorporated gene therapy, cell therapy, and tissue engineered products; and creating a new office within the European Medicines Agency to provide regulatory advice to small- and medium-size enterprises at a favorable financial rate.

Sir Alasdair Breckenridge

Chairman

United Kingdom (UK) Medicines and Healthcare Products Regulatory Agency

London, UK

Assessing the platform economy

In “The Rise of the Platform Economy” (Issues, Spring 2016), Martin Kenney and John Zysman offer a fresh and comprehensive look at one of the most urgent topics in industrial economics. The article advances our understanding because the authors reject any form of technological determinism. Their observation that “technologies—the cloud, big data, algorithms, and platforms—will not dictate our future. How we deploy and use these technologies will” is an inspiration. Of course, once we move beyond a techno-deterministic understanding of the platform economy, we realize we don’t know much, and many doubts about the development of the high-tech industry come to light. This article therefore leaves the reader with many unanswered questions—a very welcome and healthy result!

A logical consequence of Kenney and Zysman’s reasoning is the following: If a debate over policy on the rise of the platform economy is not going to be straightforward or simple, what are the dimensions that could guide industrial decisions and national innovation strategies to steer the economic transformation? As I was reading, I scribbled in the margins five issues that should be kept in mind to characterize various aspects of the platform economy.

First, who are the actors that will guide a transformation in the platform economy? Will incumbents resist the pressure of new entrants? Different sectors and different stages of a technology life cycle will provide different answers to these questions. Certain layers (or sectors) of the platform economy will be characterized by stronger barriers to entry and will assign a dominant position to platform leaders. To maintain leadership in these layers, scale will matter, and industrial policy should be very much aware of that. On the other hand, other layers (or sectors) of the platform economy will be characterized by significant opportunities for disruption by new entrants and entrepreneurs. Flexible specialization will be key to achieving competitiveness in these areas.

Second, what will the recombination of the factors of production look like? The platform economy will change the role of labor. As Kenney and Zysman suggest, we can imagine a reorganization of work that will favor a looser relationship between employer and employee. We also need to consider the role of users and their relevance as codevelopers of the services and products exchanged in the platform economy. How can we motivate these users? How can we protect their contributions? Should the development of users become one of the objectives of innovation policy?

Third, what will a new social contract look like? As a consequence of redesigned labor practices, a social welfare system founded on formal and stable participation of individuals in the labor force might not set the right incentives to get people to contribute to a platform economy. Can policy redesign the welfare state around different criteria?

Fourth, how can we determine the most sustainable position for companies within the platform economy? Firms can compete to become platform leaders, but they could also very well position themselves in a niche or module of a platform. They could also serve platform leaders and niche players by providing important complementary assets.

Finally, what is the right approach to take toward the development of a new technology? Once we move past the truism of technological determinism, pursuing technological leadership might not be the only strategy; there are many alternatives to consider. Countries and companies that are leading the transformation could, indeed, choose to stay ahead and to lead others on the road to technology adoption. Alternatively, they could follow rather than lead the adoption of certain individual technological trends. Or they could withhold a decision and strategically prevent the adoption of certain technologies.

Countries as well as companies are trying to position themselves within the platform economy. The debate worldwide on Industry 4.0 is active and urgent. Kenney and Zysman encourage managers and policy makers to take a savvy, informed, and disenchanted approach so as not to end up in the digital peripheries of the future.

Alberto Di Minin

Associate Professor

Istituto Di Management

Scuola Superiore Sant’Anna

Pisa, Italy

Martin Kenney and John Zysman raise an important set of observations about the rise of the platform economy. Few issues elicit more passion with fewer facts than the gig economy and its impact on the lives of workers (especially at ground zero in the Bay Area of California, where I live). The authors provide a very helpful and dispassionate overview and framework for understanding the phenomenon.

I would suggest four areas for near-term attention to push the discussion forward:

First, gather real data, not a set of anecdotes. Government data on employment in the platform economy is poor, and what is collected is dated. The JPMorgan Chase Institute’s research is the most comprehensive to date (https://www.jpmorganchase.com/corporate/institute/report-paychecks-paydays-and-the-online-platform-economy.htm), but much more is needed, including ethnographic research on the workers themselves.

Second, rethink the definition of employee and contractor. The US classification system no longer reflects the reality of the marketplace. Too often companies are winning via regulatory arbitrage and “catch me if you can” approaches to how they classify their workforce. Seth Harris and Alan Krueger have started the debate (http://www.brookings.edu/research/papers/2015/12/09-modernizing-labor-laws-for-the-independent-worker-krueger-harris).

Third, reimagine how benefits are provided independent of an employer. The discussion around portable benefits has sparked a very constructive set of conversations around how to deliver benefits that stay with the employee as they collect wages (https://medium.com/the-wtf-economy/common-ground-for-independent-workers-83f3fbcf548f#.a1rsc9p22). We should expect to see a series of experiments around the country on how to make this idea real.

Fourth, engage a broader conversation around the social contract. The Great Recession exposed the fragility of the post-WWII contract, and the expansion of the platform economy is laying it bare. New America and others are helping frame a dialog that reflects the reality of today’s economy (https://www.newamerica.org/economic-growth/the-next-social-contract-an-american-agenda-for-reform/).

It will be next to impossible to put the platform economy genie back in the bottle—and we shouldn’t try. Now that it is here, we need to adapt, not pine for a nostalgic view of the workplace.

Lenny Mendonca

Lecturer in economics

Stanford Business School

Digitalization is, indeed, changing industries and societies in fundamental ways, as Kenney and Zysman so accurately note. Digital platforms are the key change agents in this development. It is really difficult to find any business or area of life that would remain untouched as digitalization goes forward. The authors touch on some of these changes. They make a number of references to the Nordic countries, which in many respects have been the forerunners in digitalization. There is also a wide public debate on the challenges that digitalization brings to society.

Intelligent information and communications technology solutions could, according to some observers, reduce global greenhouse gas emissions by 16%, providing a huge opportunity to solve the problem of global warming. Payment systems are changing. Some developing countries have shown the way. Kenya is a country where more than 50% of all financial transactions are made with mobile devices using a system called M-Pesa, provided by Vodafone and a Kenyan operator, Safaricom. A high-level media advisory group for the European Union concluded in 2013 that the media industry is now dominated by a platform game controlled by foreign players. These are just a few examples of the fundamental changes digitalization can bring to society.

Kenney and Zysman concentrate in their article on working life, which will change as more and more people do work made available through platforms. A number of platforms distribute tasks that people can perform and be compensated for once the task has been completed and approved by the customer. This is, indeed, a new issue not only for the people involved but also for the traditional institutions that create and control labor market rules and legislation. Labor market rules are structured primarily to cover conventional employment situations in which an employee and an employer have clear identities and certain rights and obligations. National tax revenues depend on these arrangements.

Nordic countries are widely recognized as welfare states with a high level of public expenditure and heavy taxation of income and consumption. In principle, these welfare societies have been able to make structural reforms in the economy because people feel safe even if they lose their jobs for a short period. Now this situation is changing, and it is going to be extremely interesting to see how the Nordic welfare societies can adapt to the new conditions created by digitalization. Some suggestions have been presented in the public debate: tax environmentally harmful consumption more heavily, or pay people permanent financial compensation without any work obligations. None of these suggestions, however, has offered a sustainable way to adapt to the new conditions. It remains to be seen how different countries will be able to adjust. As a general conclusion, we can say that doing nothing is not an option. Countries will have to change to be able to benefit from the platform economy and digitalization. They are here to stay, and it is up to us whether we will be able to create a sustainable future for ourselves and our children.

Erkki Ormala

Professor of Practice, Innovation Management

Department of Management and International Business

Aalto University School of Business

Aalto, Finland

In their review of the platform economy, Kenney and Zysman offer a number of interesting observations, but they also fall short in several ways. They take a very US-centric view that lacks global perspective, focus narrowly on labor issues, and give little attention to how platforms are influencing innovation by harnessing third-party complements.

There is a more interesting global story to tell about the variation in platform creation among North America, Asia, and Europe. Analysis I’ve done suggests that the top US platforms earned $1.3 trillion in profits over the past five years. Asia comes in second at $217 billion. Europe, by contrast, earned only $86 billion, since it has relatively few platform companies, and the ones it does have are transaction-based rather than integrated platforms that bring together both matching functions and accelerated third-party innovation.

A chunk of the $1.3 trillion in profits that US platform companies have garnered has gone back to shareholders. However, a good portion is being invested in artificial intelligence, automation, and a range of other next-generation technologies that we have been reading about from companies such as Facebook, Amazon, and Apple. This suggests that platform companies have the potential to influence national systems of innovation, with potentially long-term consequences for national competitiveness. Whereas they have a marginal impact on employment relations, they are golden geese from the standpoint of innovation.

Thus, there is much more to say about the global status of platforms, including their significance in terms of rent/value accumulation and influence on national competitiveness.

Peter Evans

Vice President

Center for Global Enterprise

New York, NY

Data-driven science policy

Is it time for the science policy/funder communities to be more scientific about how to invest in science? This is the question posed and persuasively addressed by Katy Börner in “Perspective: Data-Driven Science Policy” (Issues, Spring 2016). The opportunity to gather, analyze, and visualize data from myriad resources (publications, grants, patents, social media, and so on) in unprecedented volumes provides a powerful argument, even a compelling rationale, for putting data-generated knowledge to work informing governments and private funders how to “most productively” (in Börner’s words) invest in research.

I agree that funders should be willing to become more scientific when it comes to investing in science. Cultivating a willingness to pose thoughtful questions about the purpose and the nature of investing in knowledge generation and use, developing robust ways to acquire and study the available data, and being honest and transparent about our determinations of success (or not) against meaningful outcomes strikes me as the right way to consider how best to deploy limited resources. On the flip side, data-driven science policy benefits from the same skepticism, caution, and debate that surround other “big data” initiatives: how “right” are the questions asked, how good are the data gathered, and what values are represented in the outcome measures? Science and scholarship are, fundamentally, human enterprises in service to the common good. Attempts to make inquiry too efficient or optimal risk pushing us away from investing in research that is heterodox to prevailing wisdom, orthogonal to reigning dogma, or skewed to the interests of particular stakeholders. It may just be that some inefficiency is necessary to allow space for novelty to emerge.

Data-driven investment poses positive opportunities and some tricky challenges for nonprofit private funders. Foundations, charities, and individual donors typically rely on eminence-based rather than evidence-based decision making. Like government funders, foundations, charities, and wealthy individuals assemble panels of experts to shape initiatives, provide merit reviews of proposals, and make funding recommendations. Unlike government funders, private funders typically have limited resources and invest in science with small numbers of grants on more targeted topics with short time scales, limiting the amount of data available for analysis. The James S. McDonnell Foundation, for example, makes about 30 new grants a year via investment strategies including: identifying where modest research investments could help fill gaps in scientific knowledge; looking for emerging areas of research at the intersections of traditional disciplines; and identifying questions early in their inception.

It is easy to see how the data-driven approaches Börner describes can help us more systematically “map” knowledge gaps or target emerging lines of research that could get a boost from targeted, albeit modest, funding. Visualization approaches that dynamically monitor how ideas, theories, and tools are crossing disciplinary boundaries or merging into novel hybrid fields allow us to see how influential our grantees and their publications are in the broader scientific community. Importantly, data-driven approaches can temper our expectations (and claims!) as to what can be achieved with limited investment, guide how funding strategies might need to be altered or adjusted to better match our goals, and identify new research directions for the future. In my view, the challenges for private philanthropic supporters of science are philosophical: in our enthusiasm for data, how do we maintain our core characteristic of independent and diverse decision making?

Susan M. Fitzpatrick

President

James S. McDonnell Foundation

St. Louis, MO

Katy Börner’s excellent article highlights the impact of enormous computing power for models and simulations, the use of big data to parameterize those models, and increasing capabilities for interactive visualization. Policy makers can now be immersed in the complexity of their domains and empowered to interactively explore a wide range of “what if” questions.

These capabilities enable decision makers from many disciplines beyond science and technology to engage in such explorations to inform their positions on a diverse array of policy issues, including education, energy, health care, and security. This process of “informing” decision making is rather different from IBM’s Watson simply telling decision makers the answer. In fact, the key insights usually come from small, multidisciplinary groups using the technology to investigate and debate alternative futures.

Our experience is that senior decision makers readily adapt to such interactive environments, often wanting to “take the controls” and pursue their own ideas and questions. The biggest adoption hurdle involves policy analysts who are reluctant to abandon PowerPoint to create interactive environments that enable such explorations. Taking this leap successfully involves understanding decision makers’ “use cases” and paying attention to their “user experiences.” This is often best approached by analyst-designer teams.

We have also found that it is best to stick with commercial off-the-shelf tools—for example, AnyLogic, D3, Excel, R, Simio, Tableau, and Vensim—that allow embedding custom code (Java, for instance) rather than writing software from scratch. We often use combinations of these tools. This practice can enable creating a prototype interactive environment within a week or two, which, in turn, allows rapid user feedback and easy mid-course corrections.
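
To make the rapid-prototyping idea concrete, here is a minimal sketch of an interactive “what if” environment, written in Python with matplotlib rather than the commercial tools named above; the exponential cost model and every number in it are hypothetical placeholders, not drawn from any actual policy analysis.

```python
# A toy interactive "what if" environment: a decision maker drags a slider
# and watches a 20-year cost projection update in real time. The model and
# all numbers are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

years = np.linspace(0, 20, 200)
initial_rate = 0.05  # 5% annual growth, an arbitrary starting assumption

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.25)  # leave room for the slider
line, = ax.plot(years, 100 * np.exp(initial_rate * years))
ax.set_xlabel("Years from today")
ax.set_ylabel("Projected cost index (today = 100)")

slider_ax = fig.add_axes([0.15, 0.08, 0.7, 0.04])
rate_slider = Slider(slider_ax, "Growth rate", 0.0, 0.15, valinit=initial_rate)

def update(_):
    # Recompute the projection whenever the user moves the slider.
    line.set_ydata(100 * np.exp(rate_slider.val * years))
    ax.relim()
    ax.autoscale_view()
    fig.canvas.draw_idle()

rate_slider.on_changed(update)
plt.show()
```

Even a toy like this illustrates the “user experience” point: the decision maker takes the controls directly instead of paging through static slides.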

The goal of all this is evidence-based policy, rather than policies based largely on political positions or ideologies. Indeed, we have experienced many instances of stakeholders’ ardent positions dissolving in the face of evidence, with them at the controls. Decision making is a lot more efficient and effective when you can get rid of bad ideas quickly.

William B. Rouse

Alexander Crombie Humphreys Chair in Economics of Engineering

Systems Engineering Research Center

Stevens Institute of Technology

Gamers Abroad

Defining video games involves a bit of fuzzy science these days. Arcades of the 1980s or home consoles such as the Nintendo Wii and Sony PlayStation may immediately spring to mind, but games on smartphones, social networks, and (soon) a new wave of virtual reality devices are constantly evolving the medium and expanding the player population to just about everyone you know. There is no one element that makes a video game a video game, but Pac-Man creator Toru Iwatani attempts to capture the medium’s essence: “Video games differ from other manufactured goods in that they are the rare product to simultaneously be electrical, mechanical, and works of art; they aggregate a range of ideas from the fields of engineering to literature to art to psychology, and they provide society with a necessary cultural tool of ‘play.’”

video-games-around-the-world

As the possibilities of those fields change and flourish, so grow video games, and so grow the world and economy around them. Video Games Around the World, a new collection of essays edited by Mark J. P. Wolf of Concordia University, arrives at an ideal time for surveying the remarkably dynamic—and, importantly, global—landscape of digital gaming.

The Internet has dominated video game discourse with so-called “new games journalism,” a term coined by writer Kieron Gillen (derived from Tom Wolfe’s essay collection The New Journalism) to describe the non-academic, very personal style of journalism where game-playing experiences are described and embraced almost entirely through the first person. Recent book-length examples include Extra Lives by Tom Bissell and Gamelife: A Memoir by Michael Clune. This kind of writing about video games is perfectly entertaining, often informative, and nostalgic, befitting a medium that spread across the monoculture of suburban American life over the past four decades.

As good as this new games journalism often is, its omnipresence tempts fatigue, with similar themes and memes appearing again and again, and all of this shared individualism ultimately raising the question: What about everyone and everywhere else? Although video games have managed to find a home most places on the planet, the English-speaking continents of the Internet inevitably remember and recount only their own limited experiences. Video Games Around the World provides a glimpse beyond this bubble by commissioning writers from Africa to Asia to South America to offer an objective retelling of their homeland’s gaming past. To provide a comprehensive picture, the usual suspects such as Japan and the United States are included as well, but the allure of this book is the experience of less-traveled digital landscapes.

Both the pitfalls and the value of this impersonal investigation are evident in the first chapter and its singular attempt to cover the entirety of Africa. Popular regional games whose names have never before been uttered by Mario Bros.-filled mouths here get a chance to at least present themselves as existing. It is difficult to understand a game purely in words—especially when those words are directed toward the volume’s more overarching archival aims—but it is almost enough to know that they are out there. The gaming community often complains that developers overtread the same battlegrounds—World War II being a particularly popular setting—yet there already exist virtual worlds with fresh environments, if not always fresh ideas. Do you want to relive the early Islamic conquests of North Africa or the 1814 Peruvian Rebellion in Cusco? These games have already been created on the periphery and could perhaps succeed in English-speaking markets with the slogan, “It’s still war, but different!”

Many games documented in the book do offer truly different experiences that extend beyond warfare, in part simply due to their having been created outside dominant markets. It may not always be possible to play them due to technical or bureaucratic barriers, but we’re nearing a world in which such unavailability is anachronistic. Although a lack of industry access is a factor in countries less versed in traditional consumer delivery methods, the PC and smartphone ecosystems are developing to a point where established games publishers hold significantly less influence. A game put on a smartphone app store by a guy in Vietnam can make tens of thousands of dollars per day, as with the Flappy Bird phenomenon, which is revolutionary regardless of your opinion on the merits of Flappy Bird. It remains to be seen whether a new dynasty of publishing conglomerates capitalizes on this situation, but for now these peripheral video game markets are a dynamic “Wild West” of game development and distribution.

That said, there are still significant gaps in the global accessibility of game development. This is a technical field that requires up-to-date training. Even when skilled outsiders do overcome the odds to create a deliverable product, marketing skills are required to navigate the “all-access” environment. Perhaps the greatest accomplishment of Wolf’s book is compiling into a single source the various support pipelines available to aspiring game makers. Many countries subsidize game development, and many more feature trade organizations akin to medieval guilds, as well as conferences aimed at professional progress. The transitioning landscape makes this a confusing time, to be sure, but the book’s individual authors typically conclude their chapters optimistic about this “Gilded Age.”

The chapters of Video Games Around the World are arranged alphabetically, which is appropriate, benign, and concedes the fact that this is ultimately an encyclopedia in form and function. This leads to flow issues for those seeking to read the work cover to cover, but the arbitrariness serves complex issues such as piracy well. There are few moral conclusions offered (as perhaps none exist), so the piracy discussion takes a disjointed path through international history. Corporations will be happy with cases such as Mexico, where longtime video game fans fostered during eras of piracy are now likely to spend money on their favorite franchises via legitimate pathways. However, traditional pay structures are being undermined by online subscriptions and microtransactions in any case, so piracy itself is morphing into shapes this tome cannot and does not seek to predict.

The individual authors are both the strengths and weaknesses of this book. Excellent introductions by the editor and the above-quoted Iwatani aside, the writing consists of (mostly) local voices describing local game history. Yet how many different voices can a reader entertain before intrigue becomes exhaustion? All authors were given the same assignment by the editor (to describe the past, present, and future of video games in their countries, with added focus on local game companies and academic endeavors), and although there is some stylistic variety to the response, the book as a whole feels a bit rote. All of the authors are qualified and knowledgeable, and all certainly care about and are personally invested in the topic at hand, but writing skill is sometimes lacking or has been lost in translation.

The chapters with personality inevitably become the most memorable. Hungarian contributor Tamás Beregi provides a couple of interesting anecdotes, one involving Yegor Ligachev, an official of the former Soviet Union who requested underground Commodore 64 software for his grandson, and another describing a hungover gonzo journalist in a bomber jacket wandering the halls of Hungary’s Ministry of Education, demanding permission to publish a game magazine. Thomas Apperley does an excellent job giving life to the Venezuela chapter as well, but here we have the case of a non-native ethnographer telling the national tale. The book never promises exclusively local authors, and it would be naïve to study a single nonfiction essay with the expectation of understanding the entire history of a country’s gaming past, so credentials providing the author’s bona fides should satisfy. However, these credentials are aggregated in the back of the book rather than alongside the numerous other appendices contained within the chapters—a legitimate formatting choice, but one that makes the reader work harder to discover the qualifications of their local tour guides.

Is it wrong to ask for more personality in a work like this? The opposite complaint might be leveled against new games journalism and all its self-centered style, so studiously avoided by the experts of Video Games Around the World. The truth is, though, that traveling the world is tiring without a good companion. The breadth of coverage amassed by Wolf is impressive, but the format makes a murderers’ row of virtuoso writers unlikely. Too often the games are presented in facts and lists rather than an engaging discussion of the tangible, vibrant, important entertainment video games offer. The book accomplishes its mission of delivering history. However, in so doing, it surrenders readability for completionism, transmuting along the way into a repetitive experience akin to playing too many first-person shooters, and making it unlikely many readers will see the mission through.

Harry Brammer is a video game tester turned international public policy researcher, who currently works for a research ethics board in Milwaukee, WI.

The Rise of the Platform Economy

A digital platform economy is emerging. Companies such as Amazon, Etsy, Facebook, Google, Salesforce, and Uber are creating online structures that enable a wide range of human activities. This opens the way for radical changes in how we work, socialize, create value in the economy, and compete for the resulting profits. The effects of these platforms are distinct and identifiable, though they are certainly not the only force in the rapidly reorganizing global economy. As the work by Michael Cusumano, Annabelle Gawer, and Peter Evans has shown, these digital platforms are multisided digital frameworks that shape the terms on which participants interact with one another. The initial powerful information technology (IT) transformation of services emerged with the Internet and was, in part, a strategic response to intense price-based competition among producers of relatively similar products. The IT-enabled services transformation, as our colleagues Stuart Feldman, Kenji Kushida, Jonathan Murray, and Niels Christian Nielsen have argued in other venues, was based on the application of an array of computable algorithms to myriad activities, from consumption and leisure to services and manufacturing. The movement of these algorithms to the cloud, where they can be easily accessed, created the infrastructure on which, and out of which, entire platform-based markets and ecosystems operate. Platforms and the cloud, an essential part of what has been called the “third globalization,” reconfigure globalization itself.

These digital platforms are diverse in function and structure. Google and Facebook are digital platforms that offer search and social media, but they also provide an infrastructure on which other platforms are built. Amazon is a marketplace, as are Etsy and eBay. Amazon Web Services provides infrastructure and tools with which others can build yet more platforms. Airbnb and Uber use these newly available cloud tools to force deep changes in a variety of incumbent businesses. Together they are provoking reorganization of a wide variety of markets, work arrangements, and ultimately value creation and capture.

This digitally based new economy has been given a variety of names derived from some of its perceived attributes. How we label this transformation matters because the labels influence how we study, use, and regulate these digital platforms. Its boosters have called it the Creative Economy or the Sharing Economy, whereas those less convinced of its beneficence have dubbed it the Gig Economy, the Precariat, or the 1099 Economy, focusing on its impact on workers and how they are compensated. And there are wide variations within these labels. Consider the Sharing Economy. Examples include Uber and Airbnb, which are very distant from the visions of Wikipedia, with its communal construction of knowledge; from Napster, which shared music regardless of whether it was legal; or from open source software creations such as Linux and Apache. Despite the attractive label and the entrepreneurial successes, Uber, Airbnb, and Facebook are not based on “sharing”; rather, they monetize human effort and consumer assets. Indeed, the advantage of platform-based companies often rests on an arbitrage between the practices adopted by platform firms and the rules by which established companies operate, which are intended to protect customers, communities, workers, and markets. Lyft and Airbnb are entrepreneurial initiatives that facilitate the conversion of consumption goods such as automobiles and apartments into goods that are monetized. This “sharing” has a more than passing resemblance to the putting-out economy that existed before factories, when companies would ship materials to people to assemble items such as shoes, clothing, or firearms in their homes. In the current manifestation of putting out, the platform operator has unprecedented control over the compensation for and organization of work, while still claiming to be only an intermediary. On the other hand, the rapidly growing mobile phone app stores and user-generated content platforms such as YouTube and Instagram are structured as digital consignment industries, borrowing from the way artists sell their work through galleries.

We prefer the term “platform economy,” or “digital platform economy,” a more neutral term that encompasses a growing number of digitally enabled activities in business, politics, and social interaction. If the industrial revolution was organized around the factory, today’s changes are organized around these digital platforms, loosely defined. Indeed, we are in the midst of a reorganization of our economy in which the platform owners are seemingly developing power that may be even more formidable than was that of the factory owners in the early industrial revolution. The proliferation of labels is simply a reflection of the recognition that platforms are already having powerful consequences for society, markets, and firms, and that we are unclear about their dynamics and directions. Whatever we call the transformation, the consequences are dramatic.

Utopia or dystopia

The debate about the impact of the platform economy is an extension of a discussion that began in the early days of the IT revolution, when figures such as Robert Noyce, Bill Gates, and Steve Jobs claimed that they were creating a future that would open the world to new possibilities and prospects. Optimists still abound, and San Francisco is now experiencing what may be its biggest gold rush yet, with investors, entrepreneurs, and data scientists working furiously to create “disruptive” new businesses. For investors, who are inherently optimists, the question is how to build platforms, attract users, and then capture the value that is generated from the emerging ecosystem. Regardless of the specific platform, all are based on mobilizing human beings to contribute. Whether it is Google monetizing our searches, Facebook monetizing our social networks, LinkedIn monetizing our professional networks, or Uber monetizing our cars, they all depend on the digitization of value-creating human activities.

The optimistic version of the emerging techno-economic system suggests that society can be reconstituted with producers becoming proto-entrepreneurs able to work on flexible schedules and benefit from these platforms. And this certainly will be the case for many. Similarly, the utopians argue that platforms, such as the car-sharing services Uber and Lyft, can unlock the commercial value in underused personal assets; other platforms, such as Airbnb, promote the notion that vacant rooms in one’s house or apartment can become sources of income whether technically hotel rooms or not. Advocates believe that all of this can occur for the greater social good without negative consequences. But can we really foresee all the repercussions of these new economic arrangements? For example, platform businesses matching workers and tasks may make labor markets more efficient, but if they become pervasive and organize a significant portion of the work, they are at the same time likely to generate fragmented work schedules and increasing levels of part-time work without the employment-related benefits that previously characterized much employer-based full-time work. For now, it is not clear whether these digital platforms are simply introducing digital intermediaries or actually increasing the extent of gig or contract work.

Even as the digital era unfolded in its utopian phase in the 1970s, there were skeptics who feared that the new technologies would result in unanticipated and undesirable consequences. Perhaps most prescient was Kurt Vonnegut’s 1952 novel Player Piano, which even gave a bit part to the great mathematician Norbert Wiener. Vonnegut envisioned a digital future of material abundance—albeit a digital future of machines built with tubes, not semiconductors—with a radical social division between a highly credentialed and creatively employed elite and an underclass. His dystopian vision is now finding full expression in the fear that digital machines, artificial intelligence, robots, and the like will displace work for a vast swath of the population. Bill Davidow, once at Intel and then at his own Silicon Valley venture capital firm, expressed this in his Harvard Business Review article “What Happens to Society When Robots Replace Workers?” The MIT economists Erik Brynjolfsson and Andrew McAfee explore this trend in more detail in their book The Second Machine Age.

The impact on employment and the character of work is certainly one element in assessing whether we will have a utopia or dystopia. In our view, that outcome is yet to be determined. As a society we will have to make choices about how to deploy new technologies, choices that will be critical in shaping the ultimate impact. The real questions are: what balance will there be between jobs created and jobs eliminated as the digital wave flows through our economy and society, and which workers will be displaced? Certainly it is feasible to catalogue existing work, particularly work that is routine, as likely to be replaced or reconfigured by digital tools, and perhaps, as some have tried, to estimate the numbers of such existing jobs that will be digitized away. By contrast, the new kinds of work that are now being created and the existing jobs that will be redefined and reorganized in the future are more difficult to forecast, so we can only speculate. Algorithms and databases are automating some kinds of work, but even as this occurs other value-creating opportunities are appearing. There will be new products and services as well as new production and service processes, which are likely to be design and creativity intensive, as well as algorithm-enabled. Some of the early indicators of the new or transformed work can be enumerated, but certainly not exhaustively counted.

Moreover, existing jobs will be redefined and reorganized in the future. The character of some existing work—how much or how little, we cannot know—will be reframed but not eliminated by digital technology. Uber, Airbnb, TaskRabbit, Handy, and other platform firms are transforming industries by connecting “producers” with customers in new ways. In some cases, this is displacing or threatening existing, often regulated, service providers, such as taxis and hotels. In other cases, it is formalizing previously less organized or locally organized work. Still other platforms, such as app stores and YouTube, are creating entirely new value-creating activities that are formalizing into what can be seen as precarious careers, such as a YouTube producer or smartphone app developer. Finally, existing organizations are creating new digital and social media marketing departments and jobs. The question in these cases is what system of control and value capture will be in place. Our sense is not necessarily that there will be less work, but that for a growing number of jobs, the relationship with an employer will be more tenuous than ever. These changes are not likely to result in the workerless society. One possibility is a society in which the preponderance of the work and value creation is more dispersed than ever before, even as the platform owners centralize the transactions and capture their value.

Indeed, we may, unless policy rules lock in the position of the emerging incumbent, see another round of innovation and job creation. The use of digital automation presents a classic dilemma: anything that can be characterized sufficiently to become computable can be copied, as our colleague Niels Christian Nielsen has argued elsewhere. At that point, another round of innovation and imagination will be required. Can automation innovate itself? More likely, teams of people and digital tools working together will be required to be competitive. The Turing test might be able to establish that a digital machine can imitate intelligence, but the test does not establish consciousness or consider whether human consciousness in all its diversity differs in fundamental ways from current algorithmic tools.

The debate over jobs created or destroyed is useful and worth continuing, but we should be clear that it has no end, and there will be no definitive answer. For now, there are only indicators and traces to suggest an outcome. And that outcome, we emphasize and repeat, will be shaped by choices about technology deployment that turn on entrepreneurial initiative, corporate strategies, and public policies. As in the discussion of what is being called the Internet of Things or the digitally based reorganization of manufacturing, in our research with colleagues at the Research Institute for the Finnish Economy we find significant differences in national emphases and investments. German policy is directed toward maintaining the country’s competitive position in manufacturing, built on a base of skills and a fabric of small and mid-sized companies, even as the foundations of production evolve. The U.S. emphasis seems to be on developing and applying high-end sophisticated tooling for aerospace and military applications. On the consumer side, some communities have simply banned Uber and Lyft, whereas others have welcomed them. Which communities, this leads us to ask, are most likely to be the sources and beneficiaries of the emerging platform economy? Which are most likely to be discomfited?

Although technologies may not dictate our future, they frame the choices to be made and the questions to be answered. Will the platform economy, and the reorganization it portends, catalyze economic growth and a surge in productivity driven by a new generation of entrepreneurs? Or will the algorithmically driven reorganization concentrate substantially all of the gains in the hands of those who build the platforms? Will it spark a wave of entrepreneurial possibilities, unleash unimagined creativity, free workers from oppressive work schedules, or unleash an avalanche of dispossessed workers who are trying to make a living with gigs and temporary contracts? If we do not interrogate these technological trajectories, we risk becoming unwitting victims of their outcomes. What questions should we be asking?

The key technologies

The algorithmic revolution and cloud computing are the foundations of the platform economy. But computing power is only the beginning of the story. That computing power is converted into economic tools using algorithms operating on the raw material of data. The software layer that stretches across and is interwoven with the economy is a fabric of algorithms. That software layer, that algorithmic fabric, is being extended to cover manufacturing, giving birth to the Internet of Things, the Internet of Everything, or the Industrial Internet, with its implied webs of sensor networks. It is no exaggeration to say that software was formerly embedded in things, but now things—services as well as physical objects—are woven into software-based network fabrics. This software layer extends the availability and lowers the cost of access to digital tools and traditional tools accessed by and controlled by digital processes. Moreover, costs drop through the use of open-source software, cloud storage and computing, and physical spaces such as those provided by TechShop that enable individuals to work with advanced industrial-scale equipment. Among other consequences, this certainly lowers the cost of entry for newcomers.

Cloud computing rests on the virtualization and abstraction of computing processes. One of us (Zysman) has examined the character, emergence, and deployment of cloud computing in work with Jonathan Murray, Kenji Kushida, Patrick Scaglia, and Rick McGeer. Although the details of how it works do not matter for this essay, the consequences do. For the providers of cloud services, scale matters enormously. For users—individuals, small- and mid-size enterprises, startups, and corporations—the consequence is a radical reduction in the cost of computing resources and information and communication technology tools, a radical reduction in barriers to usage. Users can rent resources as they require them rather than having to own or build entire computing systems. Computing and the applications and platforms it facilitates are now available as an operating expense rather than a capital expense.
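
To see why the shift from capital expense to operating expense matters, consider a hypothetical back-of-the-envelope comparison; every price below is invented for illustration and reflects no actual vendor’s rates.

```python
# Hypothetical comparison of owning a server (capital expense) versus renting
# comparable cloud capacity (operating expense). All prices are invented
# placeholders, not actual vendor quotes.
PURCHASE_PRICE = 12_000.00    # buy one server up front
USEFUL_LIFE_YEARS = 4         # straight-line amortization period
HOURLY_RENT = 0.45            # rent a comparable cloud instance, per hour
HOURS_PER_YEAR = 24 * 365

own_per_year = PURCHASE_PRICE / USEFUL_LIFE_YEARS   # $3,000/yr, used or not
rent_always_on = HOURLY_RENT * HOURS_PER_YEAR       # ~$3,942/yr around the clock
rent_bursty = rent_always_on * 0.10                 # ~$394/yr at a 10% duty cycle

print(f"own (amortized):        ${own_per_year:,.0f}/yr")
print(f"rent around the clock:  ${rent_always_on:,.0f}/yr")
print(f"rent at 10% duty cycle: ${rent_bursty:,.0f}/yr")
```

Under these made-up numbers, a startup that needs capacity only a tenth of the time pays roughly a tenth as much by renting, which is precisely the kind of reduction in barriers to usage described above.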

Digital platforms are complicated mixtures of software, hardware, operations, and networks. The key aspect is that they provide a set of shared techniques, technologies, and interfaces to a broad set of users who can build what they want on a stable substrate. Android and iOS are platforms. Although they somewhat restrict the applications that one can build or sell, they are, in general, open to app builders. Android is also a platform for hardware (handset and other device makers) because the code is open, not just the interfaces. Indeed, platforms can grow on platforms. Many of the current Internet platform firms use Amazon Web Services. Many of these platforms attract a myriad of other contributors that, when sufficiently rich, can result in the formation of an ecosystem. For example, in the case of the app stores, complementary businesses are emerging. AppAnnie is a firm that ranks the revenue generated by apps; there are advertising agencies that analyze YouTube ad buying; TubeMogul classifies YouTube “stars” and measures their reach; and there has been a proliferation of agencies that cultivate new YouTubers. These “complementors” are powerful allies in building and maintaining the lock-in for the master platform. Of course, building a platform is work, but platforms themselves then generate or organize the work of others by providing the digital locations for the connections that organize work and other activities.

A looser definition of a platform, as noted previously, is one in which social and economic interactions are mediated online, often by apps. For example, Uber, so far as we know, does not yet provide a platform upon which others can establish businesses, such as organizing a pizza delivery fleet. Nevertheless, as an algorithmic structure providing a digital market and potentially an ecosystem, albeit one it controls, Uber should be considered as a firm operating a platform.

Digital platforms facilitated by key technologies such as the cloud, including digital marketplaces such as Amazon and Internet firms such as Google and Facebook, are restructuring ever more parts of the economy. The discussion is complicated because, as noted, there is not yet a clear definition of digital platforms that allows us to specify precisely what is in and out of the category. The term “platform” simply points to a set of online digital arrangements whose algorithms serve to organize and structure economic and social activity. In the IT world, the term means a set of shared techniques, technologies, and interfaces that are open to a broad set of users who can build what they want on a stable substrate. As used more widely, and by us in this essay, the term also points to a set of digital frameworks for social and marketplace interactions.

Speculations aside, while there is a rich and emerging literature, at the moment there is no real theory of the effect of these diverse platforms on the overall economy. To sense the scope of the market and regulatory impact of the loosely labeled platform economy, let us consider some of the most salient types of digital platforms.

In all these examples, across all the categories, the algorithmic underpinnings of the online activity are most evident. For example, Lyft connects drivers with customers algorithmically. The algorithms integrate mapping software, real-time road conditions, and the availability of drivers to provide a price estimate. Drivers are vetted through online checks, which, of course, work only as well as the data they have. Payment is made by credit card information that is on file.
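
As a concrete illustration of such algorithmic underpinnings, here is a deliberately toy sketch of matching and pricing logic in Python; the fare weights, the surge rule, and the flat grid standing in for map data are all invented, and real platforms rely on far richer data and proprietary models.

```python
# A toy version of the matching-and-pricing logic described above. Weights,
# the surge rule, and the simplified grid are invented for illustration.
from dataclasses import dataclass
from math import hypot

@dataclass
class Driver:
    name: str
    x: float          # position on a flat grid, standing in for map data
    y: float
    available: bool

def nearest_driver(rider_x, rider_y, drivers):
    """Match the rider to the closest available driver, if any."""
    free = [d for d in drivers if d.available]
    return min(free, key=lambda d: hypot(d.x - rider_x, d.y - rider_y),
               default=None)

def estimate_fare(distance_km, minutes, riders_waiting, drivers_free):
    """Base fare plus per-km and per-minute charges, scaled by a crude
    supply/demand multiplier standing in for "surge" pricing."""
    base, per_km, per_min = 2.00, 1.10, 0.35
    surge = max(1.0, riders_waiting / max(drivers_free, 1))
    return round(surge * (base + per_km * distance_km + per_min * minutes), 2)

drivers = [Driver("a", 0.0, 1.0, True), Driver("b", 4.0, 4.0, True)]
print(nearest_driver(0.5, 0.5, drivers).name)                       # -> a
print(estimate_fare(8.0, 18.0, riders_waiting=12, drivers_free=4))  # -> 51.3
```

The point of the sketch is not verisimilitude but the essay’s observation that every step (matching, pricing, vetting, payment) is an algorithmic decision embedded in code.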

Economic consequences

What we do know is that these platforms are in many cases disrupting the existing organization of economic activity by resetting entry barriers, changing the logic of value creation and value capture, playing regulatory arbitrage, repackaging work, or repositioning power in the economic system. As a starting place for discussion, we might ask the following questions about each platform or type of platform.

How is value created? The platform economy comprises a distinctly new set of economic relations that depend on the Internet, computation, and data. The ecosystem created by each platform is a source of value and sets the terms by which users can participate.

Who captures the value? Indeed, what is the distribution of risks and rewards for the platform users? There are a variety of mechanisms with various implications for gains distribution. Some platforms allow the owner to “tax” all transactions, whereas others monetize their services through advertising. Platforms can transform work previously done by traditional employees into tasks performed by contractors, consigners, or quid pro quo workers—or create entirely new categories of work. There are also what Gina Neff calls “venture laborers,” that is, the people who work at the platform firms. They receive high wages, and if the firm is successful, the value of the platform is capitalized in the stock market, resulting in remarkable amounts of wealth for the firm’s direct employees and entrepreneurs. If the firm falters or fails, these individuals must find new employment.

There is also a growing cohort of what some call “mini-entrepreneurs” and others call “consignment workers,” who provide goods—usually but not necessarily “virtually”—for platforms such as app stores, YouTube, or Amazon Self-Publishing. Although the vast majority of them are unsuccessful or marginally profitable, some can be enormously successful, and despite the fact that this phenomenon is as yet unmeasured, it is clearly creating many new opportunities for entrepreneurship. In certain cases, particularly in apps, those in the consignment economy sometimes grow so large that venture capitalists will invest in the entrepreneur/firm, and the employees become venture labor. Some of these apps can become platforms themselves. Put differently, the consignment model has significant upside for participants, but it is accompanied by high risk.

Who owns or controls the platform? The answer varies by platform, and the differences are important. The distribution of benefits differs considerably, for example, at these platforms: Wikipedia, where the network is managed by a consensus set of rules; the Danish Agricultural Cooperative platform, in which participant owners know one another and there are clear boundaries between inside owners and others; and Uber, in which the platform is owned by a small group of entrepreneurs and their venture capitalists and where the value will eventually be capitalized by the sale of a controlling interest through either acquisition or a stock offering.

How is work packaged and value created, and what percentage of work is now organized in these radically new ways? What happens to the organizational forms of work? Certainly, some workers, such as those employed by Microsoft, Google, LinkedIn, and Facebook, retain traditional employment relationships. Although these firms expect long working hours, they also provide considerable scheduling flexibility as well as a variety of free food, drinks, transportation, and other benefits that can make them appear to be corporate paradises. By comparison, those who obtain work as gigs, consignments, or contracts through digital platforms have radically different experiences. Although they have control of their work hours, they rarely receive any other employee benefits. Conceptually, if not literally, Uber converts taxi company employees or former medallion owners into contractors, whose access to income is through the Uber platform, while removing government from the rate-setting equation. Are these contractors mini-entrepreneurs, employees in all but name, or yet something else? Further, what is the proper employment category for individuals who hope to be one of the winners by producing apps, YouTube videos, or self-published books on Amazon? In these activities, there is a power law of returns by which a few big winners are remunerated by advertising, product placement payments, personal appearance fees, and even crowd-funding campaigns, while a very long tail of producers are creating the vast bulk of consigned content without monetary return.
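
A toy calculation shows how sharply such a power law concentrates returns; the exponent, the number of producers, and the payout pool below are arbitrary assumptions, not measurements of any actual platform.

```python
# Illustrative only: a Zipf-like toy distribution of creator earnings, to show
# how a "power law of returns" concentrates income in a few big winners.
N = 100_000            # number of content producers (assumed)
EXPONENT = 1.1         # steepness of the power law (assumed)
POOL = 10_000_000.0    # total annual payout to all producers, in dollars (assumed)

weights = [1 / rank**EXPONENT for rank in range(1, N + 1)]
total = sum(weights)
earnings = [POOL * w / total for w in weights]

top_10 = sum(earnings[:10])           # roughly a third of the whole pool
bottom_half = sum(earnings[N // 2:])  # only a few percent, split 50,000 ways
print(f"top 10 producers:        ${top_10:,.0f}")
print(f"bottom 50,000 producers: ${bottom_half:,.0f}")
```

Under these assumptions, ten producers capture on the order of a third of all payouts, while the bottom half of the distribution splits a few percent among fifty thousand people: the long tail creates the bulk of the content and collects almost none of the money.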

Many are now concerned that rather than creating a new source of productivity we are legitimating a new form of putting out. Can Uber drivers be self-supporting contractors in a 1099 economy rather than stable workers in an employment economy, or are they just extremely vulnerable gig workers? And, more broadly, as Ruth Collier asks, what will be the consequences for mass politics and political structures? Are we generating labor market flexibility, or a precariat that resembles a cyberized Downton Abbey replete with a small elite composed of the platform owners and a new and sizable underclass?

Making choices

What sort of an economy and society will we create in the transition to digital platforms and the accompanying reorganization of significant portions of the global economy? And importantly, what choices will we have?

Before we turn to the long list of issues, with each issue opening an array of questions and debates, two points need to be made. First, Larry Lessig famously claimed that code is law; that is, code represents binding restrictions on behavior. Algorithms and platforms structure and constrain behavior; the law on the books is often difficult to apply or enforce in the digital world, where action is possible only if it conforms to frameworks expressed in the code that shapes and directs behavior. Consider the fight between the Justice Department and Apple: the warrant has no meaning if it cannot be executed in code; for the warrant to be implemented, the code would have to be modified.

Platform entrepreneurs increasingly believe that if they possess a first-mover advantage, they can, in fact, remake existing law by creating new practices on their platforms that essentially establish new norms of behavior. It is often said in Silicon Valley, “Don’t ask permission; ask forgiveness,” as, perhaps, was the case with Volkswagen’s “fix” for “clean” diesel engines. Of course, this forces two sets of questions. First, who writes the code, and whose values are expressed in code? The code writers, taking Uber as an example, have already reshaped social behavior. Government rules will influence how the new technologies are deployed and their consequences, but in a platform economy, government decisions may be constrained by the “facts” in the software.

Second, although public policies are obviously important, corporate strategies also have far-reaching effects. Do companies view workers only as costs to be contained or as assets—even in an era of algorithms, data, and robots—to be developed and promoted? And equally important, are those assets directly tied to the firm? Who should bear the costs of their retention and upgrading?

Acknowledging the constraints of code and the centrality of company choice in shaping outcomes, our platform future, the character of market, and the social logic established will depend on an array of policy choices. What market and social rules are appropriate for a platform economy and society?

Our old labels and categories, not just old rules, are being thrown into disarray. To begin sorting this out, let us start with the firm. In the late nineteenth century, the corporation emerged as a means of orchestrating economic activity and organizing markets. In the twenty-first century, we speculate that these functions will be taken on by the platform in the cloud. Take Google, the platform economy giant, which, despite its 2014 revenues of $66 billion, has only 50,000 employees. Uber has only about 1,500 employees and is already a global business. What policy and political issues arise when the orchestrators of economic activity are relatively small firms, rather than organizations as large as Ford Motor Company, General Electric, or General Motors—all of whom also require sophisticated supplier and distribution networks?

It is evident that platforms open up many entrepreneurial opportunities. Some entrepreneurs, such as Robin Chase at Zipcar, envisioned an alternative social, not just economic, model because digitally enabled car sharing could dramatically reduce the incentive to own a car. If that model spread widely, it might result in a drop in overall demand for auto production. This may or may not disrupt Hertz (Zipcar was sold to Avis), but it might dramatically affect automakers. Indeed, automakers responded by developing partnerships with Uber and Lyft. In other words, such “sharing” solutions could have unforeseen ripple effects on entire market ecosystems, as encyclopedia and book publishers are discovering to their dismay.

But many platforms by their very nature prove to be winner-take-all markets, in which only one or two companies survive, and the platform owner is able to appropriate a generous portion of the entire value created by all the users on the platform. More important, however, is that as the power is centralized, the platform owner can become a virtual monopolist. In that case, the platform owner can squeeze the platform community—the drivers or customers on Lyft or Uber, the content providers, the consigners, the customers, essentially any of the participants in the ecosystem who are instrumental in creating the value in the first place. Perhaps competition among platforms in a similar domain, Uber and Lyft for example, might mute the consequences of the power inside the platform. In any case, a monopoly position or even a strong oligopoly might inhibit, or sharply constrain, further entrepreneurial efforts.

Indeed, the appropriate market rules (for competition and antitrust, labor markets, and intellectual property, among many others) are becoming increasingly difficult to specify and legislate. Policy and political interests among the players, even among the winners, are far from uniform. Consider such domains as antitrust policy, where the European Commission has done battle with U.S. tech companies; intellectual property, where the interests of information and communications technology firms and platform firms are less consistent than it might seem at first glance; network policy, where carriers such as AT&T have radically different interests from Netflix or Google; and labor market policies. Indeed, the wireless carriers have announced they will start blocking advertisements on smartphones, thereby directly attacking the Google and Facebook business models. As we have shown with Bryan Pon, the turbulent environment in the smartphone ecosystem is leading to complex competitive strategies that have technical, social, and political ramifications.

The question of outcomes goes beyond the question of whether digital platforms spawn entrepreneurs or monopolists. We need to ask whether a society organized around platform owners servicing mini-entrepreneurs, contractors, and gig workers portends an even more unequal society. Does the answer depend on the character of platforms or on the policies and politics of the platform economy?

The issues of entrepreneurship and those of work organization that we discussed earlier are tightly interwoven. The policies that we adopt now might determine the balances achieved later. If we want an entrepreneurial spirit to infuse the platform world, then we want risk-taking entrepreneurs, whether they are forming the platforms or seeking advantage as contractors or consigners within them. But what encourages risk? Fear, or a safety-net certainty that if a gamble fails, one can always play again? Similarly, if we want workers to accept the new arrangements, how do we assure them that if they accept the flexibility, they will not be the victims but rather the beneficiaries of the ever-greater social value and wealth that is being created? Studies of technology adoption have consistently shown that those who believe they will be victims will resist, whereas those who believe they will be beneficiaries may help facilitate the shift. Of course, the largest group consists of those in the middle who are joining the platform economy because they have no choice and do not feel empowered to resist.

Balancing the need to sustain initiative while cushioning the consequences of significant socioeconomic transformation leads us to a focus on social policy, not just market policy. Social policy, sometimes called welfare, shapes the risks that workers and entrepreneurs take and their evaluation of whether to support or resist change. In the United States, benefits such as pensions and health-care coverage (the latter, until the passage of the Affordable Care Act) have been tightly tied to employment. Lose your employment, lose the protections. The U.S. debate often assumes that expanded welfare protections dampen initiative, pointing to Europe as an example of how investing in social protections limits economic dynamism. Aside from whether this was, in fact, ever the case in Europe, the question is whether social protection will inherently discourage initiative now. In our view, the real issue is never the fact of protections themselves—and indeed we believe that facilitating social and economic adjustment by easing the burdens of those dislocated is both a social obligation and an economic necessity—but how social policy is paid for and organized.

The emerging platform economy, with expanding contract work and gig employment, has encouraged many to look at the Nordic social policy model. The Danish “flexicurity” model suggests that social protections can lubricate the engines of change. Simply put, many social benefits in that model are associated with citizenship, and flexicurity gives employers extensive rights to adjust their workforce as needed while still providing workers with social protections in the form of training, job placement, and basic income. Certainly, this is no panacea, and indeed the Nordics are themselves reopening and intensely debating the character of their social policies. But we must consider whether cushioning the downside risks of entrepreneurial effort, and pairing workforce flexibility with broader social safety nets granted as social rights, can make the platform economy a source of sustainable growth. What assurances of social safety do we want to give to risk takers?

The debate over policy will not be straightforward or simple. As with all economic transformations, the disruptions will create winners and losers. Who will decide how the results of increased productivity are distributed? The reality is that the winners and losers in markets depend on who can participate and on what terms. There are no markets, and no market platforms, without rules; but what happens to politics if important market rules are made, unchallenged, by the platform owners? Many political struggles will be waged over these rules, and those fights will be part of defining the market and society in a platform era. Political fights will break out over protections for communities, clients, and workers as markets are disrupted. Some of those fights will be about business models that are playing a game of policy arbitrage, whereas others may be about rules for the consignment platforms. In any case, how many instances of disruption will there be? Should we view these disruptions as creating a flood of viable entrepreneurial possibilities or as destroying the security of employment relations for many? Can they create new sources of income and reasonably compensated work? Can policy encourage labor market arrangements that facilitate innovation, provide protection for workers, are efficient, and promote decent, sustainable lives for citizens? In the platform world, is there a Henry Ford who recognizes that everyone in the ecosystem requires a reasonable income in order to buy his products? This will not be a straightforward process. The reorganization of the economy around platforms will inevitably change the very configuration of the interest groups that influence how the law tries to shape the code. In sum, these battles, often engaged in isolation from each other, will interweave to reshape our communities and social life, as José van Dijck has shown, as well as the character of markets and market competition.

In the era of the platform, the future remains open. Answers to crucial questions are for the moment unknowable. The answers depend on our choices, not just on the technology. For example, will cloud technologies and the platform-driven economic reorganization they enable drive the productivity growth on which sustained improvement in real incomes depends? Will these reorganizations destroy jobs or reduce the required skill levels?

The technologies—the cloud, big data, algorithms, and platforms—will not dictate our future. How we deploy and use these technologies will. When we look at the history of innovations such as electric utility grids, call centers, and the adoption of technology standards, we find that the market and social outcomes of using new technologies vary across countries. Once we start down a technology path, that path frames our choices, but the technology itself does not determine which trajectory we will follow.

We will be making choices in an inherently fluid and ever-changing environment shaped to some degree by unpredictable technical change and social reactions to these changes. Ultimately, the results will depend on how we believe markets should be structured—who gains and who can compete; how we innovate; what we value in society; how we protect our communities, our workers, and the clients and users of these technologies; and how we channel the enormous opportunities created by these sociotechnical changes. It is up to us to sidestep a dystopia and to create, if not a utopia, at least a world of ever greater benefit for communities and citizens.

Germline Gene Therapy: Don’t Let Good Intentions Spawn Bad Policy

The proposed moratorium on clinical applications of gene editing technology reveals ignorance about how innovation works, and callousness about human suffering.

“Human gene therapy” has been one of the most ambitious goals of biotechnology since the advent of molecular techniques for genetic modification in the 1970s. There are two distinct approaches, which present different kinds of benefits, risks, and controversies. Somatic cell human gene therapy (SHGT) alters a patient’s genes—either by the editing of existing genes or the insertion of new ones—to correct conditions that are either inherited or acquired later in life. Somatic cells are any cells in the body except eggs or sperm, so modifications in them are not heritable—that is, not passed on to offspring.

Since a four-year-old with a genetic defect called severe combined immunodeficiency, or “bubble boy disease,” was first successfully treated at the National Institutes of Health (NIH) in 1990, SHGT has achieved several other successes, including the correction of rare genetic abnormalities that cause conditions such as recurring pancreatitis and blindness from degeneration of the retina.

Up to now, gene therapy has been of a type that affects only the patient being treated but does not create a heritable change and affect future generations; that is, it does not modify sperm, eggs, or embryos in a way that would constitute “germline gene therapy” (GLGT). However, using a gene-editing system called CRISPR/Cas9, Chinese researchers reported in May 2015 an unsuccessful proof-of-principle experiment that attempted germline gene therapy on embryos that were nonviable (and going to be discarded in any case).

The Chinese experiment precipitated a firestorm in the scientific community, with some researchers and bioethicists calling for an absolute ban on attempts to treat even imminently lethal diseases with gene-editing techniques that would affect germ cells. The move toward prohibition gained ground at a conference held in Washington, DC, in December 2015 under the auspices of national academies of science and medicine of the United States, China, and the United Kingdom. The attendees called for what amounts to a moratorium on the clinical use of germline editing, concluding that it would be “irresponsible to proceed” until the risks were better understood and until there was “broad societal consensus” about such clinical research. They did not, however, recommend a prohibition on basic or preclinical research.

Those recommendations—coming mainly from people who don’t actually treat patients—were the result of the kind of group-think that dismisses conflicting minority opinions and produces poorly reasoned outcomes.

A return to the bad old days of Asilomar?

Many of those currently opposed to germline gene therapy wax nostalgic about a historic 1975 meeting of scientists, ethicists, and members of the press held in Asilomar, California, which resulted in a temporary moratorium on recombinant DNA, or gene-splicing, research; and, ultimately, in the creation of highly restrictive, unnecessary regulation. They appear to be on the verge of repeating that blunder.

The 1974 article in Science that led to the Asilomar meeting urged that “scientists throughout the world join with the members of this committee” in halting recombinant DNA experiments “until attempts have been made to evaluate the hazards and some resolution of the outstanding questions has been achieved.” And the official “Summary Statement of the Asilomar Conference on Recombinant DNA Molecules” concluded: “Even in the present, more limited conduct of research in this field, the evaluation of potential biohazards has proved to be extremely difficult,” because “[t]he new techniques, which permit combination of genetic information from very different organisms, place us in an area of biology with many unknowns.”

Stanford University biochemistry professor Paul Berg, a prime mover in the current initiative to ban germline gene therapy, was (and remains) one of the staunchest defenders of the Asilomar undertaking. In a 2008 Nature essay modestly titled, “Meetings that changed the world: Asilomar 1975: DNA modification secured,” he recalled that at the time the greatest concerns were “that introduced genes could change normally innocuous microbes into cancer-causing agents or into human pathogens, resistant to antibiotics or able to produce dangerous toxins.” But what many have forgotten is that the research community was far from a consensus on the question of whether a moratorium was necessary at the time; indeed, many in the scientific community did not regard it as a success, either scientific or intellectual.

In fact, the Asilomar cabal misunderstood and exaggerated the potential risks of recombinant DNA technology, modern biotechnology’s core technique; gave rise to a lengthy, damaging research moratorium; and induced the U.S. National Institutes of Health (NIH) to draft and promulgate overly restrictive “biosafety” guidelines.

During the Asilomar conference, Stanley Cohen, James D. Watson, and Joshua Lederberg argued publicly (and others privately) “against the forming of any official guidelines that spelled out how we should work with recombinant DNA.” In the words of science historian José van Dijck, “In the politicized mood of the 1970s, genetics got annexed as an environmental issue; this new configuration manifested itself in changed images of genetics, genes, and geneticists,” which were no longer altogether altruistic, or even benign.

By 1978, the regulatory obstacles slowing research in many fields and labs would induce Watson to dismiss the handwringing as “senseless hysteria” and to observe that “everyone I know who works with DNA now feels the same and the mere mention of ‘DNA Guidelines’ or ‘Memorandums of Understanding’ makes our mouths froth.” (As a laboratory scientist at NIH at the time, I shared this sentiment.)

Those process-based NIH guidelines, which were and remain focused on the use of a single technique instead of on the actual risks of experiments, have plagued genetic engineering research ever since. By assuming from the beginning that recombinant DNA-modified organisms—which have come to be commonly known as “genetically modified organisms” or GMOs—were a high-risk category that needed to have sui generis regulation, the NIH guidelines created significant duplication of oversight for many already-regulated products. Worst of all, they reinforced the misconception that recombinant DNA-modified organisms are a meaningful “category.” Although NIH gradually pared back the stringency of its guidelines, stultifying process-based approaches to regulation of this noncategory have remained at other federal agencies, including the Environmental Protection Agency, the Food and Drug Administration (FDA), and the Department of Agriculture, and in many foreign countries.

Returning to germline gene therapy: It is unethical to modify normal embryos, but nobody is proposing to do that. For diseases that are genetically dominant, which means an abnormal gene from either parent causes the disease—examples include Huntington’s disease, familial hypercholesterolemia, polycystic kidney disease, and neurofibromatosis type 1 (the last three of which are relatively common)—one could simply perform pre-implantation genetic diagnosis to identify a normal embryo (the parents’ eggs and sperm would produce both affected and unaffected embryos), and then implant it in the uterus. There is no need to manipulate normal embryos. In fact, as explained later in this article, it may not even be necessary to manipulate abnormal embryos to perform germline gene therapy, because alternative in vitro approaches are available.

According to New York Times science reporter Nicholas Wade, some participants at the December Washington conclave “noted there was no pressing medical demand now for making heritable changes to the human genome because diseases caused by a single errant gene were rare.” Well, those participants need a refresher in human genetics. An appropriate—and, indeed, compelling—application of GLGT would be to correct debilitating and ultimately lethal sickle-cell anemia, the most common inherited blood disorder in the United States, which affects more than 100,000 patients. It is marked by the presence of atypical hemoglobin molecules that distort red blood cells into a crescent, or sickle, shape. These “sickle cells” obstruct small blood vessels, causing frequent infections, pain in the limbs, and damage to various organs, including the lungs, kidneys, spleen, and brain.

In genetics terms, sickle-cell anemia is an autosomal recessive disease, which means that an affected individual has inherited a defective hemoglobin gene from both parents, so both of his or her copies of the gene are defective. (The defect results in a single aberrant amino acid being inserted into the hemoglobin protein.) Particularly significant is that every offspring of two patients with sickle-cell disease will be afflicted with the disease. Repair of this sort of molecular lesion has been performed successfully in monkeys with new, highly precise gene-editing techniques.

News from the innovation front

However, as discussed by Matthew Porteus of Stanford and Christina Dann of Indiana University in the June 2015 issue of Molecular Therapy, several technical obstacles may preclude successful zygote injection in humans, including the fact that “only a fraction of injected zygotes give rise to viable offspring. Tens to hundreds of zygotes would need to be injected and implanted into several surrogate mothers to generate viable, genetically modified offspring.” With current technology, such an approach would be neither ethical nor feasible in humans.

Porteus and Dann also warned that the editing of genomes to correct a disease-causing mutation must not create mutations at other sites. They suggest possible alternative approaches to zygote injection that would avoid both of those pitfalls. In contrast to the zygote-injection strategy, editing stem cells that can be propagated in vitro enables characterization of the modified stem cells before their use in therapy. Recent developments in animal models have shown that spermatogonial stem cells (SSCs), which ultimately give rise to haploid sperm, can be grown as clones in culture and then transplanted back into the testis to generate sperm. Thus, a potential strategy is to isolate SSCs, use genome editing to precisely correct a disease-causing mutation, perform whole-genome sequencing of clones that have undergone gene correction, and use only the clones that are free from off-target mutations. A related strategy would be to generate sperm directly in vitro from edited SSCs to be used for in vitro fertilization.

Therefore, even if the current state of technology does not permit the therapeutic correction of genetic diseases by means of editing via zygote injection, the two approaches suggested by Porteus and Dann could be attempted, even for genetically dominant diseases. Certainly, further proof-of-concept research should proceed, even if gene editing of SSCs isn’t successful immediately.

Progress is exceedingly rapid in this field. In December 2015, scientists at the annual American Society of Hematology conference announced a stunning, apparently successful attempt to treat leukemia in an 11-month-old girl using off-the-shelf T-cells (a subset of blood lymphocytes) that had been ingeniously gene-edited. The cells were modified to enable them to attack the leukemic cells; to delete a gene that codes for a receptor on certain white blood cells, preventing the donor cells from recognizing the recipient’s body as foreign and attacking it; and to survive the intense therapy the girl was receiving. The patient’s physicians (in London) opted to push the boundaries of clinical research to save her life.

In December 2015, an article in Science showed that an even higher degree of precision and specificity in gene editing is possible, and another article in Nature Biotechnology reported that the frequency of erroneous (off-target) cuts in DNA made by CRISPR/Cas9 can now be reduced to fewer than one per 3 trillion base pairs of DNA. (The human genome is about 3 billion base pairs in length.) In January 2016, additional refinements that reduce even further the frequency of off-target cuts were reported in Nature.

As Harvard University professor and molecular geneticist George Church wrote in a Nature commentary about gene editing in December 2015, “Many of these technologies are improving so fast it’s hard to measure.” Therefore, he said, a ban doesn’t make sense, and the prohibition of human germline editing “could put a damper on the best medical research and instead drive the practice underground to black markets and uncontrolled medical tourism.”

The constant improvements serve as a reminder that technologies are seldom successful right out of the gate; as they’re applied and refined, they improve, sometimes with astonishing rapidity. The first mobile phones and mainframe computers were large, clunky, inefficient, and temperamental. When I was a medical student during the 1970s, bone marrow transplantation was being performed in only a few institutions and as a last resort, and the success rate was abysmal. But the discovery of potent immunosuppressants and other technical advances improved the success rate markedly, and bone marrow transplants are now routine in many institutions. Some leukemias that were once a death sentence now have cure rates of around 90%. There are many similar stories in medicine, including open-heart surgery and organ transplants, which were remarkably primitive in their earliest incarnations but are usually uneventful now. The reality is that successful innovation is impossible without continual learning and incremental improvements by users, and in medicine, it is in clinical settings that this process must occur.

The idea that a medical technology cannot be “perfected” without innovation and learning in the clinical setting seems to have eluded Harvard stem-cell researcher George Q. Daley, who said last year about germline gene therapy, “This is an unsafe procedure and should not be practiced at this time, and perhaps never.” Never? Maybe it has been a while since Dr. Daley has seen a patient like one I remember well—a 20-year-old with sickle-cell anemia who had suffered three strokes, been crippled by repeated bone and joint infarctions, and had become a Demerol addict due to the unrelenting pain from the arthritis that resulted.

The over-regulated gene

Interventions that involve germline gene therapy should be used with great care, as is the case with all new therapeutic interventions, but we don’t need a moratorium on clinical research. (And at the very least, we must not let skepticism about potential applications that would modify humans interfere with research-based germ cell editing.) It may be ethically warranted to intervene with as yet unproven therapies in dire situations when there are no alternatives.

Ironically, much of the controversy about moratoriums may be moot because, in effect, Appendix M of the NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules already creates a near-absolute moratorium on germline gene therapy in the United States:

RAC [the NIH’s Recombinant DNA Advisory Committee] will not at present entertain proposals for germline alterations but will consider proposals involving somatic cell gene transfer. The purpose of somatic cell gene transfer is to treat an individual patient, for example, by inserting a properly functioning gene into the subject’s somatic cells. Germline alteration involves a specific attempt to introduce genetic changes into the germ (reproductive) cells of an individual, with the aim of changing the set of genes passed on to the individual’s offspring.

As to the scope of its applicability, “Appendix M applies to research conducted at or sponsored by an institution that receives any support for recombinant or synthetic nucleic acid molecule research from NIH,” which would appear to rule out germline gene therapy experiments by researchers at any U.S. academic institution. Nor is there likely to be much interest in germline therapeutic interventions by companies, given the uncertainty about how the FDA might regulate these technologies, concerns about opposition from the public and from the academic scientific community, and uncertain economic prospects. In effect, a moratorium is already in place.

Appendix M’s prohibition is both puzzling and disturbing. Given that the committee can reject any proposal for any reason, its unwillingness even to consider an entire category of clinical studies seems unnecessarily intransigent and arbitrary. It’s also cruel, because children will die while potentially life-saving therapies go untested.

Sound and humane public policy would see the NIH RAC repeal Appendix M and announce its intention to consider carefully crafted human germline gene therapy proposals that meet community standards for risk-benefit. Better still, NIH should get out of the business entirely, since FDA and local institutional review boards—not the RAC or NIH officials—have experience with that standard, and they will necessarily be involved whether or not NIH has a role.

Interventions that involve germline gene therapy should be used sparingly and with scrutiny, but if we don’t take the first step of clinical application, the one certainty is that we’ll never reach the goal of applying gene editing to the reduction of human suffering.

Henry I. Miller, a physician and molecular biologist, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution and a fellow at the Competitive Enterprise Institute. He was the founding director of the FDA’s Office of Biotechnology.

Radical by Design

The term “Big Science” is attributed to Alvin Weinberg, the former head of the Oak Ridge National Laboratory (ORNL). The Clinton Laboratories, ORNL’s original name during the Manhattan Project, had produced materials for the first atomic bombs; later, the renamed ORNL supplied the materials for the growing nuclear arsenal. Despite his role at the helm of the massive lab, Weinberg was conflicted about Big Science, a term that encapsulated the trend toward larger-than-life projects staffed by thousands of scientists and requiring large government investments. The pyramids of ancient Egypt and the cathedrals of medieval Europe had proved shining symbols of their eras, according to remarks he made in 1961, but Big Science would serve merely to “placate what ex-President Eisenhower suggested could become a dominant scientific caste.”

Michael Hiltzik’s outlook on Big Science in his book by that name is more upbeat, placing physicist Ernest O. Lawrence at the epicenter of a new way of doing science. Born in South Dakota in 1901 to Norwegian immigrants, Lawrence showed a clear talent for science. After earning his Ph.D. in physics from Yale in 1925, he was hired as an associate professor of physics at the University of California (UC) at Berkeley in 1928, becoming the youngest full professor there in 1930. Lawrence’s rapid advancement was attributable to his formidable intellect—he was one of the “best experimental physicists of his age in the country,” as Hiltzik recounts. His rise also well served the ambitions of the University of California, which became a leading center of physics and a counterweight to the elite East Coast institutions.

Hiltzik details the fascinating story of how Lawrence built the foundation for Big Science. The series of atom-smashing machines that Lawrence created blazed the path, starting with the 11-inch accelerator (later dubbed the cyclotron) and moving up in power to ever-larger scientific instruments. Lawrence’s work set a precedent for science as it is done today, in such diverse projects as the Hubble Space Telescope and the Human Genome Project. Hiltzik draws us into Lawrence’s genius as he builds the political framework and lays the cultural foundations in support of large scientific instruments and a way of doing science that persists to this day.

The creation of massive scientific instruments for probing the heart of matter and exploring the origins of the universe—while also revealing the secrets of the materials that would arm humanity with its most destructive weapons—began in a clapboard building on the UC Berkeley campus in 1932. In an early demonstration of his ability to amass the support and resources he needed to advance his quest for bigger and better science, Lawrence pulled off an “administrative, financial, and intellectual break,” and, one might even say, a stroke of bureaucratic genius, in separating himself from the Berkeley physics department soon after being recruited to found what was then known as the Radiation Laboratory, or Rad Lab. What was to become the Lawrence Berkeley National Laboratory (LBNL) started off modestly enough, but the ambitions of its creator were not modest, and Lawrence built his small kingdom into a model envied, admired, and emulated across the country and around the world by scientists, governments, patrons, and industry.

To its critics, the term Big Science is shorthand for what might be dubbed Big Money, and certainly funding caused friction between Big Science and so-called Little Science over the decades. Big Science was also variously described as wasteful, elitist, militarily driven, scientifically perverted, and bureaucratic. Scientists felt displaced in a system that favored those who could play to national interests and successfully lobby for big grants, charm would-be philanthropic supporters, and persuade government patrons that their work connected to the needs of the ever-growing military-industrial complex.

But Big Science also pioneered a different way of engaging scientists and of doing science through the deployment of multidisciplinary teams tackling scientific problems that could, for the most part, be solved only using large scientific instruments. Lawrence opened the door to scientists and engineers of all types, and specialized in bringing experimentalists and theoretical scientists together. In doing so, he created a unique approach to science in which experiment sometimes preceded theory and in which intuition played a role. His powerful instruments were often put together and taken apart in the lab’s signature “cut and try” approach.

One of Lawrence’s most consequential legacies was in his own backyard and bears his name. The creation in 1952 of what is today called the Lawrence Livermore National Laboratory literally doubled the nation’s nuclear weapons design and development capacity. It began when a restless Edward Teller, later dubbed the “father of the hydrogen bomb,” complained bitterly—as he had since the early days of the Manhattan Project—that the Los Alamos National Laboratory was not giving enough time and attention to the development of the H-bomb. The Atomic Energy Commission (AEC) was concerned that establishing a second nuclear weapons laboratory to compete with Los Alamos would demoralize that lab and drain talent from the already strained and underfunded postwar facility. The AEC turned to the trusted Lawrence for help. Lawrence devised a brilliant solution to the dilemma, creating the Livermore lab as a branch effort of his own Rad Lab to provide weapons diagnostics support to Los Alamos.

A formidable figure in his own right, Teller insisted that the fledgling effort at the Livermore facility, 40 miles east of Berkeley, be allowed to run some nuclear weapons tests of its own. Teller won the day, but it was Lawrence’s imprint on the Livermore lab that created its unique culture and scientific approach. Herbert York, one of Lawrence’s most promising graduate students, became the first leader of the Livermore lab. In contrast to the more academic structure and design of Los Alamos, which was organized around scientific disciplines, the Livermore lab worked more like Lawrence’s Rad Lab, bringing a variety of scientists and experimentalists together to tackle scientific problems. The lab culture that grew up around this original design shaped Livermore’s trajectory in important ways. The intense competition between Livermore and Los Alamos fed the ambitions of the armed services for more and better weapons, driving the number and diversity of weapons in the U.S. nuclear arsenal. Hiltzik is correct in his assessment (and in agreement with most other commentators) that Lawrence’s widow, Molly Lawrence, was wrong in asserting that her husband, always ready to serve his country and the needs of the military, would have disapproved of what the other lab bearing his name became.

Hiltzik, a Pulitzer Prize-winning author, columnist, and reporter for the Los Angeles Times, has written extensively over the past three decades on business, technology, and public policy, including books on the building of the Hoover Dam and Xerox PARC. In subtle but important ways, Hiltzik’s background informs his work and distinguishes it from that of writers whose primary interest is in the history of nuclear weapons. He draws much of his material from historians of the weapons program, such as Richard Rhodes, Greg Herken, and Barton J. Bernstein, but he is most interested in the intersection of industry, science, and public policy, and how these came together to create Big Science.

The big techno-scientific-social challenges of today, such as adequately addressing global climate change, dwarf those faced by Lawrence and his peers in scale, complexity, scope, and cost. Lawrence’s skills in designing a new way of doing science—including attracting and managing a diverse group of scientists and technical staff, and translating complicated scientific ideas for nonscientists—served the purposes of the nation at the time. Lawrence’s success depended on igniting the imaginations of his philanthropic, government, and industry sponsors and helping them feel connected to something important that was bigger than themselves. The outcomes he sought were different from those we seek today, but his methods may still be relevant.

A Tangled Web

The Internet certainly seemed a good idea at the time. What’s wrong with linking top academics through their computers so they can share big thoughts? And in these democratic times, extending those messaging facilities to the hoi polloi so they can share their tedious little thoughts, too? Yet one thing has led to another, and another, and the Internet of 2016 is a vast and ever-vaster entity. We don’t need to be techno-skeptics such as Andrew Keen and Sherry Turkle to wonder if one day we may question our wisdom. Because we didn’t just sit back and watch the Internet infiltrate every aspect of our lives; we cheered it on like Blackhawks fans at the Stanley Cup Final. The results have been highly mixed, leading grumpy academics to organize another seminar on Collingridge’s famous dilemma (we can’t accurately predict the impact of a new technology until it is fully developed, and by then it is too late to do anything about it); heady Silicon Valley optimists to conjure Ned Ludd and tell us to sit down and shut up while technology solves its own problems; and those who are thinking hardest about cybersecurity to pour another Scotch.

These were among my preliminary thoughts as I approached these two very readable books, both addressed to generalists. Samuel Greengard’s The Internet of Things comes from MIT Press’s Essential Knowledge series—small books on big subjects. P. W. Singer and Allan Friedman, in Cybersecurity and Cyberwar, write out of a mission to alert both specialists and the public to rising and disturbingly underestimated threats, a theme that is also reaching a broad audience through Ted Koppel’s new book.

Cybersecurity is without doubt a threat inadequately recognized, perhaps vastly so, as I am reminded every time Microsoft Word flags it as a misspelling. (Word does not recognize what may yet prove the most consequential word of the twenty-first century; germline, which with CRISPR technology may prove the biggest boost or threat to Homo sapiens, qua species, since fire, also continues to evade it.) Similarly, the Internet of Things must be the least-noted of all developments in history at the human-machine interface. Its obscure name has certainly helped maintain an almost unbelievable opaqueness in a transparency-driven digital world.

Even generally well-informed people (including many tech people) have little notion what the Internet of Things signifies, let alone how revolutionary it promises to be. An Internet that has created its own new (and often bizarre) economy of cat pics and unicorns, elbowing its improbable way into the Fortune 500, is suddenly refurbished as the new driver of value for General Electric and John Deere? It’s almost as if Beanie Babies had kept on going up in value until they underpinned the global economy in place of gold. How on earth can this be? My homespun summary: It’s where the new economy buttons itself to the old.

Greengard’s Internet of Things is a Dummies book without the horrible design that for some reason the Dummies people find sells books. An MIT Press watermark sits curiously on a book intended to be a primer—and one dragooned from a writer rather than a technology specialist. (To be fair, although Greengard’s website pushes his American Association of Retired Persons book on finding the job you love in later life, he has long and expertly followed technology.) That itself is interesting, and perhaps a positive indication that the IoT (as we call it, partly because it sounds less silly) is, despite its unfamiliar name, moving mainstream. On the other hand, for all its handiness as a Dummies guide, this little book does somewhat lack the “here’s the scoop” enthusiasm that insiders tend to evince, and feels a little like reading Wikipedia.

Singer and Friedman, by contrast, are content gurus commandeered by a publisher to dumb down cybersecurity issues. Their blow-by-blow, Dummies-level prose is the result of gargantuan effort, and succeeds. If you really don’t know more about the Internet than that it is a system of pipes akin to the Paris petit bleu system of compressed-air telegrams that served late nineteenth century elites rather well (which is actually not a bad starting point), then there is no better place to begin your education. The downside, of course, is that if you do know your Internet onions, the Dummies lingo can get tedious. But it’s worth plowing through because the book is full of insight.

In short: the IoT is linking every object of any economic significance with every other such object. And increasingly, it’s doing so with standards that enable interoperability and Internet connectivity. Whereas some of this goes way back (RFID was the pioneer technology), some is already rather prosaic (those home heating systems you can operate from your phone if you really want to), and some is densely engineering-focused (think drilling, pumps, airplane engines, etc.), the frontiers include sensors in your bloodstream and vital organs, and of course, the end of driving.

The history of cybersecurity is well illustrated by the old saw, trotted out by intelligence chiefs when they want new money or are caught red-faced, that the manifold successes of cybersecurity remain a secret, while the bad guys need to score only one time to hit the headlines. The deceptive and self-serving nature of this rhetoric is plain when one considers that, as with safety in an elevator, only 100% success is actually acceptable; anything less brands the entire system a disaster.

The history of efforts in cybersecurity, laid out in extraordinarily helpful detail by Singer and Friedman, amounts to millions of successfully resisted attacks punctuated by a succession of enormous disasters—from the hacking of Target stores to the federal government’s Office of Personnel Management (OPM). There have been dozens of major consumer (and some government) hacks, many releasing tens of millions of records either to the public domain or the private databases of criminals, foreign powers, or others whom we have begun to call “bad actors.”

Information hacks are not merely annoying and potentially costly in dollar terms; they can have all manner of real-world implications. The OPM hack stands out because it included not merely up to 20 million federal employee personnel records, but umpteen thousands of security clearance applications. If this was the work of the Chinese or some other foreign power, as has been suggested, then those individuals could be subject to phishing attacks for the rest of their careers. The ridiculous fact that the intelligence community permitted those records to remain with clunky old OPM in the first place has excited little scrutiny. But who expects actual career-losing accountability among federal employees?

What neither of these books addresses is the step change in the significance of cyberattacks that is heralded by the incessant rise of the IoT. Not many months back, the news media was briefly fixated on the tale of the hacked Jeep Cherokee. In fact, it was a friendly hack: “white hat” hackers, who seek to help plug rather than exploit security holes, had prearranged to take over the car of a Wired magazine journalist. They broke in through the entertainment system, messed with the music options, then moved to the power controls—and ended up nearly killing the journalist, who had a truck on his tail. Fiat-Chrysler, under the leadership of its larger-than-life boss Sergio Marchionne, responded vigorously and fast, and all’s well that ends well for the future of the auto industry.

Except that it’s not. For a start, the fact that one of the world’s leading engineering companies could have its signature product compromised by a couple of guys in a basement should have sent shivers down the spines of every global executive—and every shareholder. It’s one thing to compromise financial data, the stigma of which has diminished with every gross breach and the harm of which is papered over with companies’ letters of apology to their 10 million or 20 million affected customers offering free credit reports, identity theft insurance, and other feel-good benefits. It is quite another to compromise cyber-physical systems that tie the Internet to real-world activity.

As the Jeep hack demonstrated, in the IoT age, a major hack to a connected car system could direct a million vehicles to stop at 5:30 p.m. eastern time. Or turn left. Or speed up. As I stated recently when I was chairing the annual information technology expo GITEX in Dubai, all it will take is two or three cyber-attacks on cyber-physical systems that cause large numbers of casualties, and the IoT will be a dead letter. One hundred dead here, 1,000 there, and the rush hour will be back to analog.

Singer and Friedman list a series of developments that are needed to raise the status and effectiveness of cybersecurity in corporate and government contexts. It’s plain that we don’t have a moment to lose. After the Jeep hack, I speculated that the famously hands-on Marchionne would have the company’s cybersecurity guy reporting directly to him the day after the breach. If top companies don’t elevate this role to an office reporting directly to the CEO, giving proper priority to the integrity of code and the protection of valuable databases, it’s hard to see how the promised IoT revolution will safely succeed. Can this happen when so few corporate leaders (and board members) can follow, let alone lead, a conversation such as this one? We need to attend to the cybersecurity sirens or we shall be swiftly moving back to an analog future. They say the Kremlin is buying typewriters.

Forum – Spring 2016

Purposeful science

In “Fact Check: Scientific Research in the National Interest Act” (Issues, Winter 2016), Congressman Lamar Smith critiques the concerns presented by Democrats, scientists, and a number of social science associations about his bill, H.R. 3293. As proposed, the bill confuses the nature of basic science and adds bureaucracy that would impose a layer of political review on the National Science Foundation’s (NSF) gold-standard merit review system. This bill will hurt the nation’s premier basic research agency, and ultimately leave America less competitive.

Many in the legislative majority have been clear in their belief that, according to their own subjective definitions of “worthy,” numerous grants that have successfully passed merit review are not worthy of federal funding.

As the ranking Democratic member of the Committee on Science, Space, and Technology, I feel it is neither our job nor my intent to defend every NSF grant. Most members of Congress lack the relevant expertise to fairly evaluate the merits or value of any particular grant. If we do not trust the nation’s scientific experts to judge whether a scientific grant is worthy of funding, then whom are we to trust to make those judgments? The clear intent of this bill is to change how NSF makes funding decisions, according to what some majority members believe should or shouldn’t be funded.

H.R. 3293 restricts scientists and students from asking questions within science and technology fields for which a direct application may not be known. However, as Maria Zuber, vice president for research at the Massachusetts Institute of Technology, said in a tweet, “Outstanding science in any field is in the national interest.” I fear that an unintended consequence of this bill will be to inhibit high-risk, high-reward research in all fields. We’ve heard from many scientists who are concerned that NSF, because of political pressures and budgetary constraints, is already pushing scientists to justify everything according to short-term return. If not corrected, this will inevitably reduce the ability of NSF and U.S. scientists to conduct truly transformative research.

NSF’s mission “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense; and for other purposes” is applied throughout the merit review process for each award. The notion that every research project must, by itself, be justified by a congressionally mandated “national interest” criterion is antithetical to how basic research works.

I simply cannot support an effort that politicizes NSF-funded science and undermines the very notion of basic research. I sincerely hope that this bill will never become law.

Rep. Eddie Bernice Johnson

Democrat of Texas

The ongoing debate over what science is worthy of federal funding has the potential to result in a general loss of support for scientific research. Of note, passage of the Scientific Research in the National Interest Act, by a vote of 236-178, closely followed party lines.

The United States economy, and all that it supports, depends on scientific research. A number of studies, including one that formed the basis of a Nobel Prize, indicated that 50-85% of the growth in the nation’s gross domestic product can be attributed to advancements in science and technology. But the debate over a tiny fraction of research is endangering the whole.

Is it unreasonable that taxpayers should expect that their funds be devoted to endeavors that serve their interests? It would seem not. Is it in the interest of science—and therefore the taxpayers—that the choice of what science should be pursued be politically influenced? It would seem not. Researchers in the Soviet Union and China, not to mention Galileo, might suggest that such influences hinder, not promote, scientific progress.

The problem, of course, is how to determine what science is in the public interest—and who should make that determination. Past studies of butterfly wings, Weddell seals, and the Pacific yew tree produced important medical discoveries. Roentgen was not searching for a way to see inside the human body; he was studying streams of electrons. Or in Fleming’s words, “When I woke up just after dawn on September 28, 1928, I certainly didn’t plan to revolutionize medicine by discovering the world’s first antibiotic…”

The problem is that there is no bright line between science that produces “useful” outcomes and science that does not. The utility spectrum spans from the sequencing of the human genome partially in support of cancer research to the study of black holes, to the study of early human-set fires in New Zealand. The more difficult questions seemingly arise in the social sciences. Even there, arguably the greatest challenge in the fight against Ebola was not the remarkable biomedical research, but the cultural resistance to inoculations that had to be overcome among local tribes. Similarly, U.S. troops in Afghanistan were well served by the understanding that they were provided of local cultures derived from earlier research.

The greatness of U.S. science and its research universities, the latter generally agreed to represent about 18 of the world’s top 25, is heavily dependent upon freedom of inquiry and peer review. It would seem to be time to seek common ground rather than stark differences. Perhaps there is a tiny segment of research that is better funded by foundations. Perhaps peer review committees should include one or two highly regarded nonscientist members, as is not uncommonly the case when ethical issues are addressed. Or perhaps a commission is needed to address conflicting views on the above issues held by well-meaning individuals, as was done in guiding stem cell research. But to continue on the present course seems to assure that a bad outcome will not be left to chance.

Norman R. Augustine

Chair of the National Academies committee that produced Rising Above the Gathering Storm

The Scientific Research in the National Interest Act is a bad idea, but not for exactly the reasons its opponents most often cite.

Critics of the legislation contend that it would interfere with peer review or, more broadly, that it represents inappropriate congressional interference with scientific grant making. Yet the measure leaves the peer review system intact, and there’s nothing inherently out of bounds about Congress determining what categories of research can be supported with federal funds.

The problem with the bill is that it provides poor direction to the National Science Foundation (NSF) on what sort of research to back, in two ways: many of the bill’s clearest provisions seem wrong-headed and unnecessary, and the parts that are muddy leave the agency in a dangerous kind of limbo.

The requirement that research use “established and widely accepted scientific methods,” for example, is clear enough. But why put that prescription in law when it could create a bias against the most innovative research? No one has accused NSF of funding work that isn’t well grounded in scientific methodology.

But the bigger problem is that most of the bill is quite unclear in indicating what Congress wants—except that it wants the agency to walk on eggshells. The bill’s backers may have objections to social science research or to proposals related to climate change or to something else, but they apparently lacked the political courage, the focus, or enough votes to have an open debate on those issues and to draft specific language. As a result, the measure effectively codifies a vague threat—Congress saying you’d better be good at divining what we think is a proper grant—and leaves it to NSF to guess what battle will come next, perhaps in the hope that the agency will timidly avoid it.

This might be justifiable if there were some new, systemic problems at NSF, but there aren’t. Rather, there’s the age-old, periodic tussling over this or that grant. Congress has all of the oversight authority it needs (and then some) to expose questionable or controversial grants and, if need be, to bring the agency to heel. Yet this bill puts in place the kind of sign-off requirement Congress imposed on corporate CEOs to address the falsification of financial statements—an effort to get a handle on a real and systemic and sometimes criminal problem. And no one was left guessing what constituted a proper financial statement.

The Scientific Research in the National Interest Act is overkill, and dangerous overkill at that. Congress should, and regularly does, debate what kinds of research to fund. This bill is not a healthy addition to that ongoing discussion. It should not become law.

Sherwood Boehlert

Former Republican member of Congress from New York (1983-2007) and former chair of the House Science and Technology Committee

When scientists on February 11, 2016, played the chirping song of gravitational waves—ripples in space-time that confirmed a prediction of Einstein’s theory of general relativity—listeners worldwide leaned toward the sound, amazed by the music of our universe. Scientist Gabriela González choked up a day later, when she again played the sound of two black holes colliding 1.3 billion years ago. “Isn’t it amazing?” she asked a spellbound audience at the annual meeting of the American Association for the Advancement of Science. “I can’t stop playing it.”

The discovery of gravitational waves—through research supported by the National Science Foundation (NSF) via the Laser Interferometer Gravitational-Wave Observatory (LIGO) project—will inspire a generation of scientists and engineers. Gravity’s “chirp” may well be every millennial’s Sputnik moment, powerful enough to trigger an explosion of creative thinking.

Yet, except for more accurate scientific measurements, the potential practical applications of the LIGO team’s discovery remain unknown for now. Gravitational waves will not carry voices or data securely between continents. They will not sharpen human medical images. Colliding black holes will not provide power at any person’s electric meter. Still, the chirps were not the frivolous amusements of scientists. Einstein wasn’t thinking about everyday uses for general relativity in 1916, either, but his ideas have profoundly guided our understanding of the natural world. The Scientific Research in the National Interest Act would limit discovery by requiring all NSF grants to pass a “national interest” test. The bill’s sponsor, Rep. Lamar Smith (R-TX), has said that the proposal would not undermine basic science, but his emphasis on “improving cybersecurity, discovering new energy sources, and creating new advanced materials” to “create millions of new jobs” seems to overlook the real value of long-term, fundamental scientific investigation.

Basic science expands human knowledge. Often, attempts to understand curious phenomena fail, and sometimes they pay off in unpredictable ways: in 1887, Heinrich Hertz, seeking to confirm James Clerk Maxwell’s predictions, produced the first radio waves. Similarly, Wilhelm Röntgen’s experiments with cathode ray tubes unexpectedly gave rise to X-rays, which are now an indispensable diagnostic tool. Important breakthroughs have emerged from many basic research projects that may have sounded silly at first. The “marshmallow test,” in which children are given a chance to eat one marshmallow right away or two later, ultimately revealed that self-control can be learned—an insight that can improve education, human health, and even retirement savings. There is a long list of studies in the social sciences, physical sciences, environmental sciences, and mathematics that at first may have seemed esoteric and hardly in the public interest, but ultimately advanced human quality of life. I suspect many would not have passed a strict “national interest” test.

Tremendous, unforeseen benefits spring from basic science. If past research grants had only supported “national interests,” would an investigation of glowing jellyfish have given rise to medical advances that won the Nobel Prize for Chemistry in 2008? Would Google founders Larry Page and Sergey Brin have used NSF grants to follow their curiosity about an algorithm for ranking Web pages? Grants focused exclusively on national interests could also discourage international research collaboration, which builds bridges between nations and stimulates new ideas by applying many different perspectives to a shared problem.

The United States is already underinvesting in science and technology. As a share of our economy, our investments in research and development put the nation in 10th place among developed countries. The recent omnibus spending bill provided a much-needed boost for U.S. science, after a decade of declining investments. We are poised to seize the “LIGO moment” and unleash a renaissance in American innovation. The National Interest Act would work against that goal, by devaluing the potentially huge future benefits of sustained, long-term investment in basic science.

Maybe the authors of the “national interest” legislation would say they do not intend for it to be applied in a strict way, hampering research that would actually benefit people in the future. Why have it at all, then? The possible damage is apparent; any possible positive effect is very hard to see.

Rush D. Holt

Chief Executive Officer

American Association for the Advancement of Science

Most discussion in the science community about the Scientific Research in the National Interest Act has centered on whether the bill torques peer review—that is, whether it changes the way decisions to fund science are made. I want to address a different but related point, agreeing with the bill’s sponsor, Lamar Smith, that it is incumbent on the publicly funded science community—just as it is on elected and appointed government officials—to say and convey to the nonscience public: “I work for you—and I look forward to telling you how, and to answering your questions.” Smith writes that “…researchers should embrace the opportunity to better explain to the American people the potential value of their work.” (I do quibble with his use of the word “potential”: the work has current value to the public, who, surveys tell us, agree in high percentages that basic research is important and should be funded by the federal government.)

But let’s face it. For all that decades of surveys tell us about how our fellow citizens trust and admire scientists, and want them to succeed, scientists are essentially invisible in this country. It’s a problem when a member of Congress rarely if ever hears from his or her constituents that federally funded science is at risk; it’s a problem when scientists in that state or district don’t engage in the political conversation and even proudly say they wouldn’t recognize their own member of Congress. Embedded in the culture of science is the tacit understanding that scientists must eschew the public eye and disdain the political; this has led to a lack of interest in public engagement. My observation, however, is that young scientists are different; they want to run toward, not away from, engaging the public and political actors; they rightly believe that what they are doing is making a difference for the nation right now, not just when or if they receive a Nobel Prize. They want to tell their story and convey their excitement. But they don’t know how to engage, and, worse, they are continually discouraged from doing so. There is a constant stream of anecdotes about how “department chair X” or “mentor Y” actively discourages scientists from any activity other than science. Since the academic reward structure defines “community service” as service to the science community, we shouldn’t expect change from that quarter any time soon. More’s the pity.

It’s time for the science community to meet congressional and public expectations by making public outreach and engagement part of the culture and expectations of all scientists. In a few hardy places, such training and experience are being incorporated into graduate science education, but this is far from the norm. Perhaps all training grants awarded by federal science agencies should include this expectation. Scientists who are equipped during their training to explain their work effectively will do themselves, science at large, and the general public a great service over the course of their careers.

Mary Woolley

President

Research!America

Climate and democracy

It is understandable that climate scientists are disheartened, distressed, even alarmed in the face of climate change. Scientists predicted more than half a century ago that increased atmospheric greenhouse gases could disrupt Earth’s climate, and that prediction has sadly come true. As an empirical matter, none of our current forms of governance—democratic or authoritarian—has made sufficient progress in controlling the greenhouse gas emissions that drive climate change. But in their alarm, some scientists are making questionable assertions about both technology and democracy, grasping at utopian visions of miracle technologies and benevolent autocracy. In “Exceptional Circumstances: Does Climate Change Trump Democracy?” (Issues, Winter 2016), Nico Stehr is right to critique this.

The worst impacts of climate change can be avoided if we pursue solutions grounded in technological and political realities. Several recent studies by credible researchers suggest that, even allowing for growth, the United States can produce most, if not all, of its electricity from renewables, provided we do a few key things: put a price on carbon, adopt demand-response pricing, and integrate the electricity grid. (Similar results have been found for other countries.) None of these requires a “breakthrough” technology or form of governance that does not already exist, but all of them do require governance. A price on carbon is the most obvious example: it takes a government to set and collect a carbon tax, or to establish emissions trading as a legal requirement. Rethinking regulation is another example. As we saw in the recent U.S. Supreme Court case Federal Energy Regulatory Commission v. Electric Power Supply Association, public utility commissions must be empowered (no pun intended) to adapt to new conditions. Demand-response pricing requires changes in the ways utilities operate, which requires reform of our regulatory structures, or at least changes in our interpretation and implementation of them. An integrated grid could in theory be built by the private sector, but it took the federal government to build a nationwide system of electricity delivery, and it will likely take the same to update that system to maximize the use of renewables.

However, 30 years of antigovernment rhetoric have persuaded many citizens that government agencies are necessarily ineffective (if not inept) and erased our collective memory of the many domains in which democratic governance has worked well. The demonization of government, coupled with the opposing romanticization of the “magic of the marketplace,” has so dominated our discourse that many people now find it difficult to imagine an alternative analysis. Yet history offers many refutations of the rhetoric of government incapacity, and gives us grounds for reasoned belief in the capacity of democratic governance to address climate change.

Placing the demands of democracy at the center of our thought also helps us to sort through the various options available to address climate change. One potent argument for carbon pricing is that market-based mechanisms help to preserve democracy by maximizing individual choice. The political right wing has said many untrue things about climate change, but conservatives are correct when they stress that properly functioning markets are intrinsically democratic, and top-down decision making is not. To the extent that we can address climate change in bottom-up rather than top-down ways, we should make every effort to do so. Where we can’t, democratically elected government can (and should) step in to build infrastructure, to foster research and development, to create reasonable incentives and eliminate perverse ones, and to adopt appropriate regulatory structures. This will require leadership, but of a kind that is completely compatible with democracy. Professor Stehr is correct: now is not the time to abandon democracy; it is the time to recommit to it.

Naomi Oreskes

Professor of the History of Science

Harvard University

Nico Stehr has done a service in bringing out the tensions between what passes for democracy in much of the world and the need to take resolute, concerted, and sustained action on climate change. Stehr overdoes the contrast a little, perhaps in an effort to bring the tension forcefully to our attention. Most of those whom he cites (including me) articulate the tensions between democracy and addressing climate change, but they do not actively advocate the overthrow of democracy. Stehr’s solution is also a bit of a letdown. Science studies as a field was born in the fear of antidemocratic technocracies, which were seen as a specter haunting North America, having already consolidated their grip on much of Europe. With populism on the rise and expertise in retreat, Stehr’s call to “enhance” democracy sounds like a one-size-fits-all solution drawn from an earlier playbook. The tension that he highlights is deep and real, and threatens to explode old concepts and disrupt existing divisions. It is worthy of our most serious thought.

Climate change is the vanguard of a new kind of problem. What is characteristic of such problems of the Anthropocene is that they are caused by humans engaged in acts that would traditionally be viewed as innocent and inconsequential. When amplified by technology and indefinite iteration, these acts produce outcomes that can be devastating, though no one desires them or intends to bring them about. This contributes to the crisis of agency that is salient in the contemporary world. Never before have people been powerful enough to transform the entire planet. Yet never have people individually felt so powerless to obtain collective outcomes regarding nature (at least since the rise of modernity, but that is another story). For these reasons, the Anthropocene threatens our sense of agency, erodes the public/private distinction that is foundational to liberalism (since many of the acts that contribute to climate change would traditionally have been construed as private), and drains legitimacy from existing states (which are increasingly seen as unconsented to, failing to deliver beneficial consequences, and not in accord with Kantian or Rawlsian public reason). Climate change is not a one-off event, such as a war; in so many ways it is “the new normal,” and we’re going to have to figure out how to live with it and whatever else the Anthropocene will bring.

The democracies that Stehr thinks can cope with such problems find their own foundations threatened by them. Nor is it clear (if it ever was) exactly what democracy is supposed to consist of. What is clear is that the concept does not simplistically apply to countries such as the United States and the United Kingdom. Regarding the United States, it is enough to note that the popular vote is often at serious variance with the make-up of Congress and even the person of the presidency (Al Gore, not George W. Bush, won the popular vote in 2000). Add to this the power of money in political campaigns and the fact that voter suppression in its various forms is a major political tactic employed by at least one major political party. As for the United Kingdom, it is enough to observe that in the 2015 general election there was a swing toward Labour of 1.5% and toward the Conservatives of 0.8%, which resulted in Labour losing 26 seats and the Conservatives gaining 24. The Conservatives won less than 37% of the popular vote but more than half of the parliamentary seats, and the resulting media storm soon hardened into the dogma that the Conservatives had won an overwhelming mandate. Democracy? I don’t think so.
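To put a number on the disproportion (an editorial gloss using the widely reported official totals, not figures from Jamieson’s letter): the Conservatives’ 36.9% of the popular vote translated into 330 of the 650 seats in the House of Commons, that is,

\[ \frac{330}{650} \approx 50.8\% \]

of the seats from just over a third of the votes.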

Yes, climate change puts pressure on democracy. But before we pick sides, let’s try to better understand the kind of problem that climate change is and what we mean by democracy. Even more radically, we might even try democracy before giving up on it.

Dale Jamieson

Professor of Environmental Studies, Philosophy

New York University

Nico Stehr raises many valid objections to those climate change activists who think a shortcut to the political process is needed to avoid global disaster. Stehr’s argument is that not less, but more democracy will make us better off and help us in confronting existential threats to society. I agree with this general argument in principle, above all because democracy keeps our options open in case we have taken the wrong path to salvation, so to speak. Democracy is better at incorporating new insights, as it allows the election of new governments over time. The learning potential is better compared with nondemocratic political systems, and the danger of irreversibility of decisions is lower.

Having said this, I wonder if Stehr has not granted the skeptics of democracy too much. Talk about democracy, whether in a positive or a skeptical mood, tends to sidestep at least two important questions: what does democracy mean, and how can we achieve climate policies that are effective?

Regarding the first question, there is a range of political constellations that count as “forms of democracy.” Nearly every sovereign state that follows some principles of democracy has a different political system, with a different constitution, different rules of the political game, different rights for specific social groups, different legal systems, and so on. There is not one democracy, but many, and these different forms may have different potentials with regard to effective environmental governance. We have systems of proportional representation and of majority rule, federal and unitary states, and more or less centralized states. We have systems with checks and balances and an independent judiciary, but also systems of parliamentary monarchy and systems where the separation of powers is less pronounced.

It is an open question which of these yields the most in terms of political effectiveness. On paper, two of the alleged frontrunners in climate policy, Germany and the United Kingdom, have contrasting political systems. Arguably, the German political system allows for the representation of environmental interests in a more effective way than does the British system. It makes it easier for small parties to gain representation (through seats in parliament, and the administrative positions that follow), provided a party wins at least 5% of the vote. The German system also allows for a grand coalition, an arrangement in which the two largest parties govern together. This nearly eclipses the opposition and gives the government much more leverage to devise and implement policies, including progressive climate policies. We do not see this possibility in countries such as the United Kingdom or the United States. But the United Kingdom, in contrast to the United States, has used its majoritarian government and centralized system to impose a climate change policy, the Climate Change Act of 2008, that is celebrated as peerless in the world. It is another matter how successful the implementation of this legal instrument has been.

This leads to the second point, which is really about political instruments. We can distinguish between markets, hierarchies, and voluntary associations. The antidemocratic impetus of climate activists seems to narrowly focus on hierarchies as tools for change. A critique of the climate activist argument should not throw the baby out with the bath water. There is a place for all three modes of social coordination, and it is an open question which constellation, or mix of them, is most promising in which political system.

Reiner Grundmann

Professor of Science and Technology Studies

University of Nottingham

United Kingdom

Reporting on climate change

As a longtime climate policy wonk, I have for many years viewed the New York Times environmental writer Andrew C. Revkin as the gold standard for insightful reporting on climate change. His presence became even more important with the shrinking commitment to environmental reporting by the mainstream media. What I therefore found most affecting in reading Revkin’s journey recounted in “My Climate Change” (Issues, Winter 2016) is the small chance that it can be repeated. As the economics of newspapers continues to decline, there are fewer and fewer dedicated environmental writers who will have a similar opportunity to learn over time, to assess the merits of different opinions, and to contribute significantly to public discourse. This is a loss to be mourned by all of those who view an informed electorate as key to good environmental policy—particularly for an issue as complex and subject to partisan debate as climate change.

The challenge we face is highlighted by a recent survey reported in the journal Science on how climate change is being taught in high school science classes. A major finding was that among teachers addressing the subject, “31% report sending explicitly contradictory messages, emphasizing both the scientific consensus that recent global warming is due to human activity and that many scientists believe recent increases in temperature are due to natural causes.” And even among teachers who agree that human activities are the main cause of global warming, a bare majority know that a high percentage of scientists share their view. The basic problem with the discourse on climate change in the United States is thus not that the issues are complex but that even the basics are not widely understood—and that for a significant minority, the facts may be unwelcome.

At a more fundamental level, I also find Revkin’s equanimity in contemplating climate change (“change what can be changed, accept what can’t”) astonishing. As the National Research Council warned in a 2013 report, “…the scientific community has been paying increasing attention to the possibility that at least some changes will be abrupt, perhaps crossing a threshold or ‘tipping point’ to change so quickly that there will be little time to react.” Although a completely workable—and politically acceptable—solution has yet to be identified, many respectable sources, including the International Energy Agency, have concluded that (contrary to Revkin’s assertion) practical solutions are technically feasible—if we act soon. Yet Revkin seems to be saying: what else can we do but roll the dice and hope the outcome isn’t as bad as (or worse than!) what peer-reviewed science has identified? Not “my” climate change!

Finally, Revkin’s “existential” search for rational explanations of irrational behavior seems to avoid the obvious. While evaluating academic research on status quo bias, confirmation bias, and motivated reasoning, he neglects the power of massive industry lobbying and calculated obfuscation—the strategy made famous by the tobacco industry but arguably elevated to new heights by fossil fuel interests. Such efforts have arguably become even more effective at a time of diverse news sources and selective receipt of information. No fancy theory required.

Alan Miller

Climate Change Policy and Finance Officer (retired)

International Finance Corporation

The science journalist Andrew Revkin has been narrating the climate change story since it first came to public notice in the mid-1980s, and his wide-ranging and insightful article here offers an insider’s perspective on the art of shaping coherent narratives out of complex and uncertain science. In this context, imagery is everything. Revkin recalls that his first published climate feature in 1984 (on the Cold War prospect of a “nuclear winter”) was illustrated by a graphic image of Earth frozen in an ice-cube, although, tellingly, only four years later, his report from Toronto’s World Conference on the Changing Atmosphere was illustrated by an image of Earth melting on a hot plate. Scientists tend to be suspicious of such emotive editorializing, but if there’s one thing we’ve learned about policy makers (and the publics they serve), it’s that they respond more readily to images than to evidence.

This is well illustrated by the Montreal Protocol of 1987, which secured a global ban on the manufacture of ozone-depleting chlorofluorocarbons (CFCs). The ban represented humanity’s first victory over a major environmental threat, but Revkin warns us against seeing the episode as a template for curbing atmospheric carbon dioxide. As he rightly points out, eliminating CFCs was a relatively simple matter, given their niche industrial usage, whereas the reduction of carbon dioxide emissions requires root-and-branch amendments to every industrial process on the planet. But I think Revkin tells only part of the story here: the victory over CFCs was, in large part, won by imagery, in the form of the striking, false-color graphics that showed the ever-widening “hole” in the ozone layer above Antarctica. First published in 1985 by the National Aeronautics and Space Administration’s Scientific Visualization Studio, these now-iconic artifacts succeeded in visualizing an otherwise invisible atmospheric process, bringing a looming environmental crisis to the world’s attention. Response to those images was swift and decisive, and only two years after their publication, effective legislation was in place.

Atmospheric warming is, of course, a different process than ozone depletion, with different environmental and economic implications, but it is just as invisible. Yet so far, no equivalent to an ozone hole visualization has been found for global warming. We are stuck with polar bears and melting ice caps, aging poster children who have lost any impact they may once have had. “I had long assumed the solution to global warming was, basically, clearer communication,” writes Revkin, who goes on to list some of the failed climate metaphors that he has put to rhetorical work over the years, including “carbon dioxide added to the atmosphere is like water flowing into a tub faster than the drain can remove it,” or “the greenhouse effect is building like unpaid credit card debt.” To write about climate change is to be in the metaphor business, but so far—with the possible exception of the Keeling curve, with its dramatically rising, saw-toothed blade—no clinching image has been found. But then how do you visualize something that has become too big to see?

Richard Hamblyn

Lecturer in Creative Writing

Birkbeck, University of London

Author of The Invention of Clouds

As someone who has greatly admired Andrew Revkin’s work over the years, I very much enjoyed reading his story about his life’s journey in the world of journalism and science communication. However, I took issue with one of the claims he makes about science.

Revkin claims, as if it were self-evident, that a major hurdle in our response to climate change is that “science doesn’t tell you what to do.” He then invokes the “is-ought” problem identified by the eighteenth-century philosopher David Hume, which holds that no description of the way the world is (facts) can tell us what we ought to do (values). I would argue, however, that this separation between facts and values is a myth. Values are reducible to specific kinds of facts: facts related to the experience and well-being of conscious creatures. There are, in fact, scientific truths to be known about human values (a view defended most notably by the philosopher and neuroscientist Sam Harris in his book The Moral Landscape: How Science Can Determine Human Values).

I agree with Revkin that environmental, economic, and cultural forces influence the values adopted by individuals and societies, but they do so by changing our brains and shaping our experience of the world. These changes can be understood in the context of psychology, neuroscience, and other domains related to the science of the mind. Human well-being is ultimately related, at some level, to the human brain.

Similarly, climate change is so worrying to us because of the consequences it will ultimately have for our well-being. Whether we realize it or not, our concerns for the environment are ultimately reducible to its impact on the conscious creatures in it (both human and non-human).

Revkin is by no means alone on this. Most people, scientists included, seem to agree not only that ethics is a domain that lies outside the purview of science, but that it is taboo to even suggest otherwise. But perpetuating this myth has consequences. Our failure to recognize the relationship between facts and values will have wider implications for public policy related to many rapidly emerging technologies and systems, from artificial intelligence to agricultural technology to stem cell research to driverless cars.

It’s important to note that in this context, “science” isn’t merely synonymous with data, models, and experiments; these are merely its tools. We must recognize that science is actually more comprehensive than this: the boundaries between science, philosophy, and the rest of rational thought cannot be sharply drawn. When considered in this way, it’s clear that science can answer moral questions, at least in principle. And, as Sam Harris puts it, “Just admitting this will change the way we talk about morality, and will change our expectations of human cooperation in the future.”

Mark Bessoudo

Licensed Professional Engineer

Toronto, Canada

Reviving nuclear power

In “A Roadmap for U.S. Nuclear Energy Innovation” (Issues, Winter 2016), Richard K. Lester outlines in a thought-provoking manner the significant obstacles to and absolute necessity of innovation in the nuclear industry in the United States, and he provides well-founded recommendations for how the federal government can be more supportive of nuclear innovation. That being said, we need to think more creatively about policies to support nuclear energy, based on the federal and state policies that are currently leading to a boom in both natural gas and renewable energy across the nation.

Natural gas has benefited from 30 years of federal support, not just through research and development, but through public-private partnerships and a 20-year production tax credit for unconventional gas exploration and hydraulic fracturing in shale. These investments have made shale gas so cheap today that it is disrupting the energy market, producing more electricity than coal for the first time ever. Similarly, a suite of federal and state policies have been implemented to both drive down the cost of renewable energy and incentivize deployment.

Federal support for nuclear energy could level the playing field and help expand all clean energy options as the United States tries to meet its greenhouse gas reduction commitments under the 2015 United Nations Climate Change Conference, or COP 21. Lester is correct that a carbon price could help make existing nuclear plants more profitable in merchant markets subject to unstable wholesale prices, but more direct and tactical support for low-carbon baseload power is needed. Such policies could include capacity payments, priority grid access for low-carbon baseload power, a production tax credit for re-licensed plants, inclusion of nuclear power in state low-carbon power mandates, or an investment tax credit for plants that perform upgrades or up-rates.

For the second phase of nuclear development—what Lester calls Nuclear 2.0, projected to stretch from 2030 to 2050—he is correct that we should focus on innovation for advanced nuclear reactors. But we also need to ensure there is market demand for these significantly safer, cheaper, and more sustainable reactors. As we learned from the aircraft industry, Boeing needs to sell over 1,000 of its innovative 787 aircraft before the company breaks even on its substantial research and development costs. It is hard to see how a single nuclear reactor design will ever reach those economies of scale. Federal policies that could stimulate demand include procurement of reactors for federal sites such as military bases or national laboratories, an investment tax credit for new builds, or fast-tracked licensing for new reactors at existing sites.
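The break-even logic here is simple fixed-cost amortization; as a rough sketch (the notation and the illustrative figures are an editorial gloss, not Lovering’s):

\[ N_{\text{break-even}} = \frac{F}{p - c}, \]

where F is the up-front research, development, and licensing cost, p the price per unit, and c the marginal cost of building one more unit. A vendor facing, hypothetically, $2 billion in development costs and earning a $100 million margin per plant would need 20 plants just to recoup the investment, which is why demand-aggregating policies such as federal procurement and fast-tracked licensing matter so much.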

From 2050 onward, in what Lester calls Nuclear 3.0, international collaboration will be crucial for large-scale projects such as fusion. But the government should be more supportive of advanced reactor development collaborations in emerging economies today. Russia, China, and South Korea are investing heavily not only in their domestic reactor fleets but also in building advanced reactors around the world. Rather than compete with them, it is more feasible and advantageous to develop partnerships that match our long-standing experience in the nuclear industry with the rapidly growing demand for energy in these countries.

Jessica R. Lovering

Director of Energy Research

The Breakthrough Institute

Oakland, California

Fusion entrepreneurship

Ray Rothrock makes several critical points in “What’s the Big Idea?”(Issues, Winter 2016). The first is about venture capital and its role in providing patient capital to promising ventures. Rothrock describes the birth of the modern professional venture capital industry, with pioneers such as Rockefeller, Doriot, Schmidt, and Kleiner, but the fact is that venture capital has existed from the beginning of purposeful economic activity. What changed in the 1960s and beyond is the development of a profession in which men and women made their living investing in and helping early-stage companies.

I believe that two of the greatest inventions of the twentieth century were the startup and venture capital. At accelerating rates through the second half of the century, men and women started new high-potential ventures. These companies created new products or services or displaced incumbent players (or both) by out-executing them or by using novel business models. That is the essence of a dynamic, competitive economy.

These companies—think Intel, Federal Express, Genentech, Apple, Starbucks, Amazon, Google, Facebook, Tesla, and Uber—needed capital. Professional venture capitalists invested in these enterprises in a highly structured way. They gave modest amounts of money to enable the entrepreneurial team to perform experiments and test hypotheses. If the tests yielded positive results, they invested more. If the team couldn’t execute, they replaced members of the team. If the original idea wasn’t quite right, they changed strategy to reflect market needs. Jointly, the entrepreneurs and the investors created enduring companies—or they shut them down.

Indeed, a critical element of this process is failure. By some estimates, almost two-thirds of the investments that venture capitalists make fail to return capital. The returns on the remaining investments more than offset those losses, which enables venture capital funds to gain access to more money. In the United States, nonmoral failure is an acceptable outcome. Team members get redeployed in the economy and capital gets reallocated without enormous social, economic, legal, or political consequences.

Rothrock’s description of Tri Alpha Energy is a perfect example of the model. People have ideas that if proved correct can result in valuable companies. In the earliest stage of Tri Alpha, experienced angels invested to support various experiments. When those tests worked, the team was able to attract more financial and human capital. That process will be repeated many times before we know if Tri Alpha will succeed or fail.

What is distinctive about Tri Alpha is how audacious the plan is, how much time will pass before commercialization, and the large amounts of capital that will inevitably be needed. On the other hand, if the plan works, the implications for the world will be stunning. We need low-cost, safe, reliable, secure, zero-carbon sources of power everywhere. More pointedly, if leaders keep deploying coal-fired plants in countries such as India and China, the implications for global warming and air pollution are alarming.

From a societal perspective, the public returns from success at ventures such as Tri Alpha can be far greater than the private returns. The same would be true of ventures focused on curing diabetes or Alzheimer’s disease. What is remarkable is that people are willing to devote their lives to pursuing seemingly impossible goals, and investors are willing to invest even when confronted with a high likelihood of failure.

In the domain of energy, there are currently hundreds, if not thousands, of well-financed teams working on new technologies and business models, in such diverse areas as solar power and energy conservation. But for many years, particularly after the incident at Three Mile Island, essentially no private nuclear ventures were started. Even the largest players in the industry were unprepared to build new capacity given public safety concerns and costs. Universities scaled back research and teaching efforts as the nuclear power industry atrophied.

Today, there are at least 20 significant nuclear ventures trying to do for nuclear power what teams have accomplished in semiconductors, communications, and life sciences. They are focused on addressing each of the concerns that have plagued the industry (cost, safety, waste, and so on down the list). A company such as TerraPower intends to create a “traveling wave” reactor that uses spent fuel rods as its energy source. ThorCon has devised a “walk-away-safe” molten salt reactor. UPower is designing small reactors that can effectively be mass-produced and deployed off-grid. Transatomic has a different design for turning nuclear waste into safe, low-cost, zero-emissions power.

No one can predict which, if any, of these teams will crack the nut—and indeed, many will fail. Investors such as Rothrock deal with the risk of failure for a single venture by investing in many different companies simultaneously. The same is true of institutional investors who give money to firms such as Venrock but also diversify across many asset classes. Team members are less diversified but know that they can find jobs at successful ventures in the same industry or can apply their skills elsewhere.

In summary, Rothrock has shed light on a remarkably efficient and effective model for promoting economic progress. It’s a messy model that often results in failure. Without failure, as Edison discovered, you can’t find success. Although Tri Alpha hopes to create a large, valuable company, society is a major beneficiary of the process. We all need smart people and patient capital to address major global challenges in almost every domain, from water to education to health. That is the promise and reality of the modern startup and venture capital.

William A. Sahlman

Dimitri V. D’Arbeloff – MBA Class of 1955 Professor of Business Administration

Senior Associate Dean for External Relations

Harvard Business School

Ray Rothrock clearly describes one of the great “inventions” of the twentieth century, namely the modern venture capital process. Tri Alpha’s pursuit of fusion is a perfect example of that process in action. It is remarkable what challenges entrepreneurs and private capital will undertake when the reward is seen as great enough.

The venture capital process couples the scientific method’s idea of staged experiments with finance’s idea of the “option value” of milestone-based review. As the results of each experiment are reviewed at the milestone, each of the parties that make up a new venture—the employees, the investors, the essential partners, and even the potential customers—has the real option to make a clear, informed decision to continue to pursue the venture, to abandon it, or to renegotiate the various “contracts” that hold the parties together. In the latter regard, such renegotiation might include division of ownership, changes in future milestone dates or objectives, and changes in direction based on the need to attract new investors, new partners, or even new employees. The twists and turns of an evolving venture are nail-biting, and the failure rate along the way is known to be high. But as long as there is shared belief in the reward and in the venture’s ability to capture the reward, it will hold together to attempt the next milestone.

Rothrock also points out another tremendous virtue of the venture capital process. It allows “outlier” ideas to be pursued as long as the entrepreneur leading the charge can attract that necessary coalition of employees, investors, essential partners, and forward-looking customers. The idea conceived by university researcher Norman Rostoker was such an outlier, passed over by most and even dismissed by a prominent few in the nuclear field. Fortunately, Glenn Seaborg and George Sealy thought Rostoker’s idea was worth the bet, putting their reputations and their endorsement into the creation of Tri Alpha. The company launched and is now on the hunt for a challenging but potentially stunning target: limitless, low-cost energy.

At this point, Tri Alpha still might fail technically or economically or even competitively. There are a number of privately funded fusion and fission ventures on the horizon that promise limitless, low-cost energy. And at least some of those have been launched because of Tri Alpha’s own success.

Rothrock reminds us all that the “Big Idea” is encouraging the world’s entrepreneurs to imagine the world as they would have it be and then to do everything they can to make it so. Imagine the unimaginable: a world with limitless, low-cost energy. Let’s all hope that Tri Alpha makes it so.

Joseph B. Lassiter III

Senator John Heinz Professor of Management Practice in Environmental Management, Retired

Harvard Business School

Ray Rothrock provides a valuable description of how venture capitalists evaluate investment proposals, as well as how his organization decided to invest in the Tri Alpha fusion research and development project. As he noted, his firm’s interest was influenced at the outset by the fact that the company had secured encouraging evaluations from a number of world-class research luminaries.

With significant investment forthcoming, Tri Alpha recruited a team of talented physicists and technologists, and according to friends who visited the company, impressive experiments were built, yielding promising early results.

My understanding of the Tri Alpha effort is limited to Internet information and an impressive presentation by the company’s chief technology officer at the December 2015 annual meeting of Fusion Power Associates. Accordingly, I am in no position to evaluate the company’s status or outlook. However, on the basis of my two recent publications on fusion research, I can comment as follows.

First, the mainline international fusion research program is focused on the tokamak plasma confinement concept, which is the basis of the extremely expensive International Thermonuclear Experimental Reactor (ITER)—with a price tag on the order of $50 billion—that is currently under construction in France, with international funding. As I noted in “Fusion Research: Time to Set a New Path” in the Summer 2015 issue of this journal, a fusion power plant based on the ITER-tokamak concept will be totally unacceptable commercially, for reasons explicitly enumerated there. And as I noted in a follow-on article titled “Revamping Fusion Research” in the April 2016 issue of the Journal of Fusion Energy, the lessons learned from the extrapolated ITER-tokamak commercial failure, along with other factors, point to important considerations for successful future fusion research. A cursory look at the Tri Alpha concept indicates that its approach seems to be on a better track for success. Two important Tri Alpha positives are its goal of using the proton-boron 11 fuel cycle, with its extremely low neutron production, and its target of high plasma beta (the ratio of plasma pressure to magnetic field pressure).
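Two quick glosses for readers outside the field (the definitions are standard ones; the gloss is editorial, not Hirsch’s). Plasma beta compares the plasma’s thermal pressure with the pressure of the confining magnetic field:

\[ \beta = \frac{p_{\text{plasma}}}{B^{2}/2\mu_{0}}, \]

so a high-beta concept extracts more fusion output from a given (and expensive) magnetic field; tokamaks typically operate at a beta of only a few percent. And the proton-boron 11 reaction,

\[ p + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}, \]

yields charged helium nuclei rather than neutrons, which is why its neutron production is so low.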

Second, it is readily apparent that any new fusion power concept will involve a large number of engineering questions that must be addressed on a timely basis. Does the concept appear to extrapolate to an attractive power plant, meeting established utility criteria? How close to acceptable is it likely to be? What potential commercial problems must researchers address during project development, and are these issues being properly addressed on a timely basis? Have any showstoppers been identified, and if so, how will they be addressed?

In this regard, it is imperative that commercially experienced engineers who are independently managed (this is essential) be given free rein to design a power plant based on a proposed fusion power concept, and that those engineers provide guidance gleaned from their analyses to the researchers. Without that continuing, independent input and related interactions, physicists can veer off the track that is necessary for ultimate project success. The absence of such evaluations and guidance is one major reason why the ITER-tokamak debacle occurred and billions of dollars have been and are being squandered. It is hoped that Tri Alpha has benefited from that very painful experience and is acting accordingly.

Robert L. Hirsch

Senior Energy Adviser, Management Information Systems Inc.

Former Head of the U.S. Fusion Program, 1972 to 1976

Genetic goose chase

In “The Search for Schizophrenia Genes” (Issues, Winter 2016), Jonathan Leo notes that genetic searches at best have revealed a set of small-effect genomic variants that together explain less than 4% of schizophrenia liability. He adds that these susceptibility variants are close to equally common in the general population, with the differences appearing significant only when enormous sample sizes are assembled.

Although a 4% explained variance in schizophrenia liability has no practical utility, as Leo asserts, it is also questionable whether even this small number is reliable. In a study central to the field—the 108 single nucleotide polymorphism (SNP) study conducted by the Schizophrenia Working Group—results from the replication sample are telling, if one makes the effort to find them in the extensive online supplementary material. In the replication sample, as compared with the discovery sample, 13 of the 108 SNPs differed in the opposite direction between cases and controls; 87 failed to reach an uncorrected significance level of p ≤ 0.05; and only three reached the adequate Bonferroni-corrected significance level. Replication failures plague psychiatric genetics in general and indicate that the null hypothesis of no effect remains to be adequately rejected.
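To make the multiple-testing bar concrete (the arithmetic is an editorial gloss, not Fosse’s): a Bonferroni correction for 108 tests at a familywise error rate of 0.05 requires each SNP to reach

\[ p \leq \frac{0.05}{108} \approx 4.6 \times 10^{-4}, \]

a threshold roughly 100 times stricter than the uncorrected p ≤ 0.05 that, by Fosse’s count, 87 of the 108 replication-sample SNPs already failed to meet.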

The validity of genomic sequence alterations in schizophrenia would be strengthened by demonstrating associations with the well-documented volumetric changes (usually shrinkage) that are seen in prefrontal cortical and subcortical brain regions in this disorder. In the hitherto largest attempt to do this, the Schizophrenia Working Group recently published a mega-analysis (Franke et al., Nature Neuroscience) of SNP associations with brain volumes in 11,840 patients and controls from 35 countries. No single SNPs or “polygenic risk scores” from the 108 study were significantly associated with the size of the amygdala, hippocampus, thalamus, nucleus accumbens, or any other subcortical region. Hence, this study comes close to falsifying the hypothesis that SNPs contribute to the most commonly observed neurobiological changes in schizophrenia. Instead, the results strengthen the hypothesis that any schizophrenia risk alleles have been eliminated from the population by natural selection, because people with the disorder have reduced fecundity, reproductive disadvantages, and increased mortality at early ages.

Studies of person-environment interactions suggest that the genetic hypothesis is redundant. As noted by Leo, childhood psychosocial adversity is highly prevalent in patients with psychosis, with repeated demonstrations of dose-response associations. The pathway from stress exposure to psychotic symptoms is likely to include the impact of stress on epigenetic processes, such as cytosine methylation, that alter gene transcription rates and neural architecture. Accordingly, childhood psychosocial adversities lead to the same neurobiological changes as those seen in schizophrenia, including volume loss in subcortical and prefrontal cortical brain regions, changed morphology and functioning in interneurons and pyramidal cells, and altered functional connectivity patterns in neural networks. The behavioral consequences of early life adversities include increased stress reactions to daily life events, cognitive difficulties, and suspiciousness of others, all of which are also characteristic of psychosis.

As indicated by Leo, future preventive and treatment efforts for schizophrenia will benefit from the endorsement of perspectives other than genetics.

Roar Fosse

Division of Mental Health and Addiction

Vestre Viken Hospital Trust, Norway

Seeing through the smoke

As Lynn Kozlowski correctly observes in “A Policy Experiment Is Worth a Million Lives” (Issues, Winter 2016), not all tobacco products present equal health risks, and the nation’s regulatory response should take account of these differences.

The Family Smoking Prevention and Tobacco Control Act of 2009 empowers the Food and Drug Administration (FDA) to permit companies to market novel tobacco products as presenting less risk than traditional products when doing so “is appropriate for the protection of the public health,” and FDA is currently reviewing an application by Swedish Match seeking permission to claim that its “snus” products are less harmful than other tobacco products. Moreover, FDA has indicated its intention to assert regulatory jurisdiction over electronic nicotine delivery systems (ENDS), and in so doing has acknowledged that “[e]merging technologies, such as the e-cigarette, may have the potential to reduce the death and disease toll from overall tobacco product use . . .” The key regulatory challenge for FDA—and for the states within their own residual authority—is to craft policies that reduce initiation, promote cessation, and shift tobacco users toward less harmful patterns of use.

However, a “harm reduction” strategy of product regulation entails a serious risk of regulatory mistake if use patterns diverge significantly from those projected by policy analysts. As FDA observed in its ENDS proposal, if use of these products results “in minimal initiation by children and adolescents while significant numbers of smokers quit,” the net impact at the population level could be positive. If, on the other hand, “there is significant initiation by young people, minimal quitting, or significant dual use of combustible and non-combustible products, then the public health impact could be negative.” In such a delicate behavioral context, a cautious approach is imperative, especially in relation to initiation. The public health gains from shifting currently addicted smokers to ENDS could be completely offset by a new wave of tobacco-related morbidity and mortality attributable to a significant increase in initiation of ENDS by young people who would not otherwise have used tobacco.
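The population-level trade-off that FDA describes can be sketched in a single line (a toy accounting added for illustration, not Bonnie’s or FDA’s):

\[ \Delta H \approx N_{\text{switch}}\,(h_{\text{cig}} - h_{\text{ENDS}}) \;-\; N_{\text{init}}\,h_{\text{ENDS}}, \]

where ΔH is the net reduction in population harm, N_switch the number of smokers who fully switch, N_init the number of never-users who take up ENDS, and h the lifetime harm per user of each product. Even if h_ENDS is a small fraction of h_cig, a large enough N_init (or widespread dual use, which keeps h_cig in play) can drive ΔH negative.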

This is why Kozlowski’s proposal that states “consider selectively and differentially raising the purchase age of tobacco and nicotine products” is worrisome. Risk-based policy making is a potentially sensible strategy for shifting users from combustible products to non-combustible ones, but it is not a sensible way of thinking about tobacco use initiation, which occurs almost entirely during adolescence and young adulthood. The marked decline in initiation over the past 15 years is attributable to the emergence of a strong tobacco-free social norm among adolescents and young adults. It would be a huge mistake to choose, as a matter of policy, to send young people an equivocal message (e.g., “you’re not old enough to use alcohol or cigarettes, but it’s OK to use e-cigarettes”). This is especially dangerous when promotional expenditures for these products are escalating exponentially, largely targeted at young people; a significant proportion of teenagers are already using e-cigarettes; and the likely trajectories of use are unknown.

It is true, of course, that any legal age of purchase is arbitrary, but a line must be drawn somewhere, and where we draw the line matters. When the voting age was lowered to 18, most states unthinkingly lowered the “age of majority,” including the minimum legal age for the purchase of alcohol, from 21 to 18. This turned out to be a major public health error, leading to a substantial increase in alcohol-related driving fatalities among young adults. In 1984, Congress leveraged threatened decreases in highway funds to induce states to restore the legal drinking age to 21. The Institute of Medicine’s recent study on the minimum legal age for tobacco products concluded that raising the age from 18 to 21 would significantly reduce rates of initiation, especially among 15-to-17-year-olds, by reducing their access to social networks of smokers.

Although ENDS and reduced-risk products such as snus should not be treated the same as cigarettes for all regulatory purposes, the minimum legal age should be raised to 21, without equivocation, for all tobacco products. If every state raised the minimum legal age to 21 today, there would be 3 million fewer adult smokers in 2060. Initiation of ENDS use by teens and young adults might save even more lives by reducing the likelihood that they will ever initiate more harmful forms of tobacco use, but it is just as plausible to believe that it would have the opposite effect. In short, doing anything to encourage use of ENDS by teens is not a prudent public policy.

Richard J. Bonnie

Harrison Foundation Professor of Medicine and Law

University of Virginia

Lynn Kozlowski correctly suggests that public health would further improve if states raised their minimum legal age for cigarette sales to a higher age than for sales of far less harmful vapor products and smokeless tobacco products. The scientific and empirical evidence consistently indicates that cigarettes account for more than 99% of all tobacco morbidity, disability, mortality, and health care costs; that noncombustible vapor and smokeless tobacco products are 99% (plus or minus 1%) less harmful than cigarettes; and that these low-risk smoke-free alternatives have helped millions of smokers quit smoking or sharply reduce their cigarette consumption, or both.

Cigars and pipe tobacco pose significantly lower risks than cigarettes, but those products are not effective risk reduction alternatives for cigarette smokers.

Public policies that treat all tobacco and vapor products the same not only protect cigarettes and threaten public health, but also mislead the public into inaccurately believing that all of these products pose similar risks.

By setting a higher minimum sales age for cigarettes (e.g., to 19 or 21 years) than for smoke-free alternatives, states not only would help prevent teen smoking (by banning cigarette sales to high school seniors), but also would inform the public that smoke-free alternatives are less harmful than cigarettes.

Similarly, public health and tobacco risk knowledge would be greatly enhanced if states taxed cigarettes at a significantly higher rate than low-risk smoke-free alternatives, and if state and municipal governments rejected lobbying efforts to ban the use of smoke-free alternatives everywhere smoking is banned.

Unfortunately, in their zeal to attain an unattainable “tobacco-free world,” those who lobby to regulate vapor products, smokeless tobacco products, and what are known in government-speak as “Other Tobacco Products” in the same manner as cigarettes want the public to inaccurately believe that all tobacco and vapor products are just as harmful as cigarettes. This is partly why surveys consistently find that 90% of people in the United States inaccurately believe that smokeless tobacco is as harmful as cigarette smoking, and why a growing share of them inaccurately believe that vaping is as harmful as smoking.

One of the worst offenders in this unethical risk deception has been the U.S. Centers for Disease Control and Prevention, which over the past decade replaced its long-standing statement that “cigarette smoking is the leading cause of disease and death” with the claim that “tobacco use is the leading cause of disease and death” in virtually all of its tobacco-related research and reports.

Smokers have a human right to be truthfully informed that smoke-free tobacco and nicotine alternatives are far less harmful than cigarettes. Therefore, public health officials and professionals have an ethical duty to truthfully and consistently inform smokers that smoke-free alternatives are far less harmful than cigarettes, and to advocate for stricter regulations for cigarettes than for lower risk alternatives.

Bill Godshall

Executive Director

Smokefree Pennsylvania

Leveraging global R&D

“How to Bring Global R&D into Latin America: Lessons from Chile” (Issues, Winter 2016) is a highly illuminating and timely article that may well hit a nerve with policy makers and institutional managers. There are three reasons for making this claim:

First, as authors José Guimón, Laurens Klerkx, and Thierry de Saint Pierre rightly suggest, research and development has essentially become a global affair—not only for companies, eminent research universities, and individual researchers, but also (or maybe especially) for countries that are trying to close the gap with leading science nations. Many countries are pursuing increasingly sophisticated and innovative policy strategies to tie global knowledge networks to local needs and development priorities. Yet these internationally oriented capacity-building efforts have received relatively little systematic attention from policy analysts and commentators, particularly in certain regions of the world. As the authors point out, Chile—a member of the Organisation for Economic Co-operation and Development since 2010—represents an interesting case study, both for its paragon role within South America and for its ambitious and innovative partnership program.

Second, large-scale capacity-building partnerships of the type described in Chile are on the rise globally. In what colleagues and I have called Complex International Science, Technology, and Innovation Partnerships (or CISTIPs for short), countries are increasingly looking to team up with foreign expert partners to build institutional capacity in specific scientific and technological domains. Although CISTIPs have long existed in certain technological sectors (e.g., nuclear power or space technology, where countries have typically built their first power plant or satellite in collaboration with established nuclear or space nations), upstream capacity-building centered on universities and other research organizations is a much more recent phenomenon, though not without precedent. Initiatives such as the Chilean International Centers of Excellence (ICE) program differ from traditional research collaborations and cross-border higher education activities in that they seek to address simultaneous concerns in human resource development, research capability, translational capability, institution-building, and regulatory frameworks, among others, and they often involve hybrid multinational, multi-institutional architectures. Here, too, the case of Chile provides valuable insights into how CISTIPs might fare more broadly within a Latin American context.

Third, research and development has to a certain extent become a branding game. For countries to put themselves on the map and benefit from globally mobile talent, knowledge, and capital, they need to demonstrate credibility in, and long-term commitment to, science and technology. Conversely, leading research organizations around the world increasingly depend on brand recognition for access to interesting research opportunities and additional funding sources. If successful, the Chilean case can provide guidance for emerging science nations on how to nimbly position themselves and exploit international collaboration opportunities to their advantage. At the same time, it could provide a model for how eminent research universities might engage with emerging science nations. The growing popularity of CISTIPs indicates that research universities should consider international capacity-building efforts more seriously as part of their core mission, going beyond the traditional trifecta of education, research, and (local) economic development.

The article thus provides a laudable stepping-stone for further inquiry into the Chilean partnership program and beyond. Among the questions raised are: Where did the Chilean government look for inspiration for the ICE program? How did earlier capacity-building efforts in Chile and elsewhere in Latin America (e.g., the satellite development partnership between the Chilean Air Force and the British company SSTL, the telescope cluster in the Atacama Desert, or the Massachusetts Institute of Technology’s institution-building efforts in the Argentinian province of Mendoza) affect the choices made by the Chilean government—if at all? How do the various ICEs differ in terms of goals, partnership architecture, and actual performance—and how does this compare with similar initiatives in other countries? Indeed, it would be very illuminating to contrast the case of Chile with similar initiatives in Singapore, Portugal, and various Middle Eastern countries in terms of visions, architectures, and partner selection, as well as politics and social uptake.

Sebastian M. Pfotenhauer

Assistant Professor of Innovation Research

TUM School of Management

Technische Universität München

Munich, Germany

Should other countries imitate what Chile has been doing and try to attract the research and development (R&D) departments of foreign universities, as José Guimón, Laurens Klerkx, and Thierry de Saint Pierre seem to imply? Should the government of Honduras, for example, try to convince the University of California, Davis, a global leader in agricultural research, to open a research facility in the country to improve Honduran agricultural production? Should Peru continue with its Formula C program?

The answer crucially depends on one factor: the capacity to absorb the knowledge and innovation that spill over from the foreign research facility. It is true that an R&D center may help strengthen the national innovation system—through, for example, scientists and engineers finding jobs in local firms and universities and transferring with them all of their knowledge and experience, or new startup firms spinning off from the center—but this is not likely to happen in the absence of strong local scientific and technological capabilities. Research has shown that a certain level of research activity is indeed needed simply to absorb and make efficient use of scientific research and technologies developed elsewhere, and not only to create new knowledge. The question is not whether foreign R&D centers can replace local R&D: both are needed, and the former does not easily lead to the latter.

Moreover, it is worth remembering that knowledge creation increasingly requires a combination of different elements that cannot all be created, or are not all available, in a single country. Having foreign R&D centers in the country can thus represent an advantage, just as having national universities and firms located abroad, with strong linkages to foreign institutions, can.

Therefore, a policy such as the one adopted in Chile can serve as a lesson to other developing countries only if their firms, universities, and government institutions also invest in research, development, and innovation. Some form of innovation system, even if only fragile and precarious, must be in place and in the process of developing. This includes networks of research institutions; firms demanding research from them and investing themselves in research and innovation; and the government orchestrating and regulating efforts, fostering coordination, and addressing market failures.

The results of the Chilean program can be fully assessed only in the long run. However, its success will need to be measured not so much on the basis of the nationality of the owners of the R&D centers, but on the deep linkages and interactions that they will have generated with local firms, universities, and the economy. Chile’s lessons for developing countries can only be partial: foreign R&D centers cannot substitute for, but rather complement, a national innovation system.

Carlo Pietrobelli

Lead Economist, Competitiveness and Innovation Division

Inter-American Development Bank

Professor of economics, University Roma Tre, Italy

Transforming the Active Orientation

Our ambitions are high. We have a long list of desiderata that, in effect, entail re-engineering much of the physical and social world around us, even the self. We Americans are keen to prevent the alarming deterioration of the environment, to attain higher rates of economic growth, to reduce inequality, to curtail the number of people incarcerated for nonviolent offenses while simultaneously reducing drug abuse, to end war and genocide, to foster human rights, to reform campaign finance, and to improve our knowledge, skills, and self-awareness (most recently through “mindfulness”). When then-President Bill Clinton was asked, on the eve of a new year, what he wished for his fellow Americans in the year to come, he responded: “Have all your dreams come true.”

Sadly, a deep gap exists between our aspirations and our capabilities. We aspire to re-engineer much of the world, but we are often buffeted by forces we neither understand nor control. Revisit the list of desiderata introduced above; it soon becomes clear that we have made little progress on most of these fronts, and we actually have fallen back on quite a few of them. All too often, we do not even agree about how to tackle these issues, nor do we command the know-how, resources, political will, or personal dedication to proceed successfully.

After reviewing the history of how we acquired these ambitions, and the role of science and technology in fostering them, I ask where we go from here. It seems obvious that either we must greatly scale back our ambitions, or we must double down and find more effective ways to proceed. Actually, it is likely that we will have to do both. Science and technology are seen by some as the most promising sources for finding ways to catch up, for controlling history rather than being subject to its vicissitudes. Others see science and technology as exacerbating the problem. Here, too, there might be a third way.

A brief history of the active orientation

At the beginning, humans were passive. They largely accepted nature around them as a given, rather than seeking to recast it. They accepted their place in the social world as fixed rather than seeking to move up within or change their social structures or themselves. The Stoics of Ancient Greece, for example, held that our actions were orchestrated by forces beyond humans’ control and that all events followed inherently from prior events dating back to the beginning of the universe. The philosophers Democritus, Heraclitus, and Aristotle held that “everything occurred by fate.” People accepted changes or lack thereof as God-given, or as being the result of spirits. True, passivity was not complete. People did make sacrifices and prayed, appealing to deities to intervene against natural disasters, illness, and war, but they did not believe that human beings could marshal the powers necessary to change the future.

When facing the social world, first Aristotle and then the Church told people to be the best they could be in whatever role they found themselves playing. Aristotle’s virtue ethics dismissed hedonism and instead held that the good life was one that fully achieved or “perfected” the “final cause” or purpose inherent in one’s nature. This purpose he referred to as a “telos.” For example, the telos of the flute is to be played well, not to be used as a knife. Each person should work to serve his or her assigned role rather than seek to serve in some other capacity—that is, people should not strive to be socially mobile or to become change agents. Passivity was the order of the day—for centuries.

The Catholic Church substantially incorporated Aristotelian teleology into its theology. Thomas Aquinas stated, “The Church [was] to teach the truth of God and to assist the faithful in fulfilling their God-given telos, individually and collectively.” Thus, the nobleman should strive to be the best of his kind, and the serf should be the best possible serf. There was no place here for an active orientation toward the self or toward society by seeking reform.

This position dominated political thinking in Western societies in the Middle Ages, but it was not limited to the West. The Indian caste system, too, reflected a passive orientation toward society, what sociologists refer to as “status acceptance.” Thus, Hinduism holds that “if an individual respectfully accepts and carries out the duties of his or her caste, then he or she will live the next life in an elevated caste of society.” Many other societies embraced similar values that promoted social and political passivity.

The active orientation, or the presumption that humans can re-engineer the world, was born as part of the Age of Reason, which originated in the 1600s and was characterized by according high value to rationality, science, and technology. (Some locate the origins of the relationship between ideas of progress and technology earlier, in the Renaissance.) Rationality presumed that individuals have clear ends; are able to collect, process, and interpret information about ways of achieving these ends in an empirical and logical manner; and then act according to those means they judge to be the most efficient. The “rational man” was free of the bondage of superstition, prejudice, bias, and social status that dominated earlier ages. Bigotry, belief, magic, and religion were treated by rational people as obsolete burdens to be replaced by rational thinking and science, which were held to be the mainstays of an enlightened society.

The era is widely credited with the birth of the modern approach to the natural sciences, which, in turn, opened the world to be marshaled and used, allowing humanity to understand its place in the solar system, harvest electricity and radio waves, and create a host of innovations that enabled the Industrial Revolution. At the same time, technological breakthroughs and feats of engineering, such as the power loom and the steam engine, often played a much greater role than science, and science itself benefited from technological developments such as the microscope, the telescope, and (eventually) computers. This article treats science and engineering as two forces that together empowered human beings to marshal nature and make it work for us. From here on, references to technology should be read as if referring to technology and science.

Initially, the shift from a passive to an active orientation was associated with great benefit: the rise of affluence, the advent of modern health care and education, and the free flow of information—essential for democratic politics, and celebrated by America’s founders and by nineteenth-century Americans, who typically viewed technical, human, and moral progress as strongly aligned.

Francis Bacon was one of the first philosophers to postulate that human beings could re-engineer the world around them and achieve mastery over nature. In his The New Atlantis, he foresaw a utopia in which technology would make life much less taxing and would empower humans to overcome natural limitations. Henry Brooks Adams, though famously ambivalent about technology, nevertheless stated in 1906 that “the new American—the child of incalculable coal power, chemical power, electric power, and radiating energy, as well as new forces yet undetermined—must be a sort of God compared with any former creation of nature. At the rate of progress since 1800, every American who lived in the year 2000 would know how to control unlimited power.” A thread of “technological utopianism,” of which Edward Bellamy and Horatio Alger Jr. were perhaps the greatest proponents, ran through much of public culture from the late 1800s to the early 1900s.

Karl Marx extended the active orientation to re-engineering society. He envisioned a “classless society” brought about by the establishment of full communism, which would be characterized by “a cooperative union of free producers, who would be both owners of the means of production and workers.” All people would labor together equally to satisfy the economic needs of all of the members of the community. Such a classless society would represent the fulfillment of what Marx saw as humanity’s “capacity for harmonious society with others and the capacity for free, conscious, and universal labor.” Technological developments were key to this societal transformation and to the achievement of a future utopia.

Sigmund Freud extended the active orientation to re-engineering ourselves. Freud (and other psychoanalysts) held that man is able, if only with great effort and pain, not only to understand himself but also to transform himself, to free himself from his own past, and to chart a new course of his life. One’s natural urges could be sublimated in favor of a more civilized social world.

The idea that humanity was progressing dominated. Robert Nisbet, a leading sociologist, observed, “it is a notion of the European Enlightenment that thanks to scientific advances, [in the future] all people would be united in an egalitarian commonwealth, freed by machines from poverty and the necessity of toil, from disease and even death by scientific medicine, and ennobled by the heights of civilizational achievement.” The idea that “civilization has moved, is moving, and will move in a desirable direction” also was incorporated into major segments of the social sciences, whose core assumption is that we can recast the social world in line with our values and ambitions. Thus, according to Keynesian economics, if one correctly sets interest rates and the rates at which people spend and save, one can achieve high economic growth. Sociologists in the post-World War II era held that Head Start, Medicaid, the negative income tax, Social Security, and half a dozen other such federal programs would allow us to close the gap between the races and the classes. These were indeed heady, optimistic ages, captured in such mottos as “where there is a will there is a way” and in assertions that “the richest nation of the world should be able” to accomplish whatever was needed.

Rising doubts

Historians of technology disagree about when people first noted that the active orientation had serious, negative side effects that people neither anticipated nor could readily handle. Some hold that the idea that humanity inexorably progressed was “dethroned” by the Great Depression and two world wars, which, as described by historian Dorothy Ross, collectively “destroyed the sense of cumulative gain in civilization on which progress depended.” Others point to the dropping of the atomic bombs on Hiroshima and Nagasaki as a major turning point. Literary scholar M. K. Booker finds that “the atomic bombing of Hiroshima and Nagasaki was not an entirely new departure so much as it was a final straw that finally broke the back of the American national narrative [about the merits of technology].” The world had to face the fact that technological developments had brought about a tool that incinerated cities along with their hundreds of thousands of inhabitants and that threatened the whole world. Many, but far from all, scientists soon recognized the danger of the monster they had helped create. President Dwight Eisenhower used his farewell address to warn of the existential and political dangers created by the military-industrial complex. But there was no way to turn back. Once knowledge was forged, it could not be unmade.

Soon, other developments gave the champions of reason, science, and technology additional pause. Malthusian fears were awakened by the prospect that improvements in health care could lead to overpopulation and then to mass starvation. The invention of the birth control pill raised fears of sexual promiscuity. And there are very familiar, growing concerns that humans’ expanding economic activities will exhaust the world’s resources; that the degradation of the environment will threaten human survival; and that climate change will subject us to a whole series of calamities.

In the social realm, there are growing doubts that we can actually manage the economy and an increasing sense that we are instead doomed to suffer recurring major recessions beyond our control. Grave doubts emerged about the effectiveness of the talk therapy championed by Freud and other psychotherapy gurus. Marxian ideas about fashioning a better social world through central planning and command-and-control economies, and of reordering political life through a working class revolution, have been discredited.

A debate erupted in the United States in the late 1960s over the expansion of liberal social programs based on social science. It turned out that many of these programs failed to achieve their transformational goals, as neoconservatives stressed, while liberals held that given more money and more time, these programs could succeed. Most recently, the rise of artificial intelligence and robots has raised great concerns about massive human unemployment and machines’ domination of people.

Meanwhile, social scientists began to recognize that human beings are much less capable of the kind of rational thought that the active orientation takes for granted. In contrast to the innate rationality of behavior assumed by economics, other social sciences have demonstrated the limits of rationality for individuals and for organizations. Decision scientists showed that more information did not add up to better decisions. Psychologists have demonstrated that human beings are constrained by innate, hardwired cognitive biases and that human intellectual capabilities are much more limited than they were previously assumed to be.

“Muddling through” characterizes much of public policy, while failure at “encompassing planning,” especially of the central command-and-control kind, is well established. We increasingly give up on finding basic solutions for many of the major challenges we face, and instead seek to cope with the latest crisis—a much less active orientation. The terms arrogance and hubris versus humility begin to capture the difference between the sanguine active approach and the more accepting passive one. The political systems of most nations seem unable to cope with the growing list of problems societies and the international system face, raising the question of whether our aspirations are hyperactive and completely out of line with what we can achieve.

In short, it has become clear that the active orientation is not the panacea it once seemed to be—and, indeed, some hold that our hubris will destroy us. Increasingly, the whole idea of progress, which was a reflection of the active orientation, has been cast into doubt.

A fork in the road?

Nowhere is the question of whether humans should greatly scale back their ambitions more acute than in the debate between the advocates of greater economic growth (and the affluent society) and those advocating for scaling back economic activities and reliance on most technologies. The slow growth (“less is more”) camp holds that without scaling back our activities, the world will run out of resources; the environment will be degraded; and climate change will endanger humanity. The growth and antigrowth positions come in radical and moderate versions, accompanied by very different views on the role of technologies in our future. The pro-growth champions hold that technological developments can empower humans to deal with the challenges that face humanity on the path to ever higher levels of affluence; the anti- (or at least slower-) growth champions hold that focusing on technological solutions exacerbates the challenges humans face rather than offering a cure.

I turn now to the differences between the techno-optimists and the techno-pessimists. Given that these positions are familiar, I treat them briefly, and close by outlining a third way.

According to a host of scientists and public leaders, technological progress can help us to end the ills that plague the human condition. For example, Bill Gates is convinced that “technology can fix everything.” Gates thus announced plans to spend up to $2 billion on green technologies in the next five years. Strong technological optimists believe technology “paves a clear and unyielding path to progress and the good life,” and technology is “the means of bringing about utopia.” Historian of technology Carroll Pursell describes as very widespread “the notion … that a kind of invisible hand guides technology ever onward and upward, using individuals and organizations as vessels for its purposes but guided by a sort of divine plan for bringing the greatest good to the greatest number.”

Technological optimists differ in how strongly they hold this position. Many recognize the magnitude of the challenges humanity faces and our resource limitations, including those on funding and political will. Some, though, are quite optimistic, believing that technology could make energy “free, much like unmetered air” (John von Neumann) and eliminate the need for human labor (Jeremy Rifkin). Other optimists claim that technological innovation itself is speeding up and becoming less costly, which will usher in a new era of prosperity and innovation. Technological utopians even hold that society itself is akin to a large, exceptionally complex machine that scientists can engineer into perfection. Others are less sanguine about technology as a total panacea. Nonetheless, all technological optimists hold that the main way forward is to increase our investments in technology. This optimism is embraced by two-thirds of Americans who, polls show, believe that technology will bring about a future in which people’s lives are better than they are today.

Technological optimism takes continued economic growth as a sacred cow. University of Oregon political scientist Ronald B. Mitchell notes, “Mainstream policy and scholarly discussions of climate change accept growth in population and affluence as a given and view technological innovation as the only available policy lever.” Such optimists point to technological “fixes” such as geoengineering, seeding the ocean with iron to stimulate phytoplankton, or even “sending a fleet of planes into the sky and spraying the atmosphere with sulfate-based aerosols” to cool the planet.

Technological pessimists, by contrast, refer to “the sense of disappointment, anxiety, even menace” engendered by technology. According to them, technology frequently, if not always, has unintended negative side effects that are worse than its contributions to dealing with the problem it purports to solve. The negative consequences of technology may be delayed, but never avoided. Other scholars hold that such negative effects are inherent to the very nature of technology. Economist Robert Gordon argues that most, if not all, truly revolutionary technological innovations have already been made. Scholars such as Nick Carr, Jonathan Zittrain, Sherry Turkle, and Jaron Lanier hold that technology—especially technology associated with the media and with the Internet—has had negative impacts on the ways human beings think and interact with each other.

Techno-pessimists rarely see a technological fix that passes muster. For instance, they fear that geoengineering will increase acid rain, and is likely to reduce the urgency that is critical to mustering the political will to permanently address climate change. (Early critics of geoengineering were so vehemently opposed to the idea that they left death threats on the answering machine of one of its most notable advocates, David Keith.) Others point out that although “only nuclear power can satisfy humanity’s long-term energy needs while preserving the environment,” nuclear reactors generate highly radioactive waste that is dangerous if not stored with the utmost care, and reprocessing this waste is expensive and increases the possibility of the waste being accessed and used for malicious purposes.

Radical techno-pessimists urge us to leave the high-growth pathway needed for the affluent society, a path that presumes ever-greater reliance on technological innovation, in favor of returning to a simpler life. Such a life would entail adapting to nature rather than seeking to exploit it. Less radical technological pessimists instead believe that we should focus on activities that add less to the triple challenge discussed below. Technological innovation, these moderate techno-pessimists point out, has its place—as long as it first and foremost helps to ameliorate the harms already inflicted upon the earth by humans. For some, this entails greatly increased reliance on “alternative” sources of energy such as solar and wind; for others, it means increasing the energy efficiency of our buildings and cars. All, in effect, favor a less active, more adaptive world.

The post-affluent society: a third way

I see great merit in shifting the focus of our actions from seeking ever-greater wealth to investing more of our time and resources in social lives, public action, and spiritual and intellectual activities—in communitarian pursuits. Shifting to what we consider a good society leapfrogs the growth/antigrowth debate by suggesting that a slow- or no-growth society is not merely one with a greatly reduced environmental footprint, but also one that redefines well-being. It is a society in which those who have their basic material needs well sated find contentment in nonmaterialistic activities—and in helping others to catch up. We will be active, but our activity will be neither labor- nor capital-intensive; it will be socially and spiritually rich. Before I spell out this argument further, I note that those who think such a vision is utopian should recall that some version of it is found in all major religions.

The preponderance of the relevant evidence shows that as societies grow more affluent, the contentment of their members does not much increase. For example, between 1962 and 1987, Japanese per capita income more than tripled, yet Japan’s overall happiness remained constant over that period. Similarly, in 1970, the average American income could buy over 60% more than it could in the 1940s, yet average happiness did not increase. Gaining a good life through ever-higher levels of consumption is a Sisyphean activity. Only finding new sources of meaning in social and spiritual life can bring higher levels of contentment.

A person who meditates does not feel unfulfilled unless she meditates more than someone else, nor must she find each year a new or richer way to meditate. The same is true of those who enjoy a sunset, a walk on the beach, or making pottery (not as a business but as a form of self-expression). A person who volunteers for one kind of community service or another may wish he could find more time for service, but does not feel flawed if he merely gives as much as last year and the year before. A person who makes ceramics, paints, joins a book club, keeps a journal, or spends more time with his or her children, spouse, and friends (and less time at work)—evidence shows—leads a longer, healthier, and happier life (as well as one that taxes the world around us less) than those seeking ever more wealth. This is what I mean by being active in a communitarian way, in “working” for a more communitarian society.

At first blush such a major cultural shift is hard to imagine, but one needs to recall that for most of history, work and commerce were not valorized; instead, devotion, learning, chivalry, and involvement in public affairs were. True, these pursuits were historically accessible to only a sliver of the population, while the poor were shut out from them and forced to work for those who led the chosen life. However, we can recognize that even though technology has created our global environmental challenges, it has also created the capacity, on a global basis, to eliminate degrading toil and generate enough wealth so that all can participate in the pursuit of fuller lives as citizens and individuals. Self-capping consumption now makes it possible for the whole population to lead a less active economic life and a more active social, communal, and spiritual life—a communitarian life.

Abraham Maslow pointed out that humans have a hierarchy of needs. At the bottom are basic human necessities; once these are sated, affection and self-esteem are next in line, leading finally to “self-actualization.” It follows that as long as the acquisition and consumption of goods satisfy basic creature comforts—safety, shelter, food, clothing, health care, and education—expanding the reach of those goods contributes to genuine human contentment. However, once consumption is used to satisfy Maslow’s higher needs, it turns into consumerism—and consumerism becomes a social disease. Indeed, more and more consumption in affluent societies serves artificial needs manufactured by those who market the products in question. For instance, first women and then men were taught that they smelled bad and needed to purchase deodorants. Men, who used to wear white shirts and grey flannel suits, learned that they “had to” purchase a variety of shirts and suits, and that last year’s clothing was not proper in the year that followed. Soon, it was not just suits but also cars, ties, handbags, sunglasses, watches, and numerous other products that had to be constantly replaced to keep up with the latest trends. The new post-affluence society would liberate people from these obsessions and encourage them to fulfill their higher needs once their basic needs have been satisfied. None of this entails dropping wholly out of the economic or technological world. The shift to a less consumeristic, more communitarian society should not be used to call on the poor to enjoy their misery; everyone is entitled to a secure provision of their basic needs. Instead, those who have already “made it” would cap their focus on economic activities.

The triple challenge and social justice

A society in which people combine capping their consumption and work with dedication to communitarian pursuits would obviously be much less taxing on the environment and material resources than consumerism and the level of work that paying for it requires. Social activities (such as spending more time with one’s children) require time and personal energy, but do not mandate large material or financial outlays. The same holds true for cultural and spiritual activities such as prayer, meditation, enjoying and making music and art, playing sports, and adult education. Playing chess with plastic pieces is as enjoyable as playing it with mahogany pieces. Reading Shakespeare in a paperbound edition made of recycled paper is as enlightening as reading his work in a leather-bound edition. And the Lord does not listen more to prayers from those who wear expensive garments than from those who wear a sack.

Less obvious are the ways a socially active society is more likely to advance social justice than the affluent society. Social justice entails transferring wealth from those disproportionately endowed to those who are underprivileged. A major reason such reallocation of wealth has been very limited in affluent societies is that those who command the “extra” assets tend also to be those who are politically powerful. Promoting social justice by organizing those with less and forcing those in power to yield has had limited success in democratic countries and led to massive bloodshed in others. However, if those in power embrace the capped culture and economy, they will have less reason to refuse to share their “surplus.” This thesis is supported by the behavior of people who are committed to the values of giving and attending to the least among us—values prescribed by many religions. The same holds for secular liberalism. Many of my students are white and middle class. Their economic interests might well lie in lower taxes and less government regulation and spending. But of those students who are liberal, most are very agitated about social injustice and inequality. True, as they grow older, they are likely to focus more on their careers, but the billions of dollars Americans donate whenever there is a crisis (in New Orleans, Haiti, or some other faraway place) and the very large amounts of time they spend volunteering show that doing good as a major source of meaning is far from a naïve vision.

Technology policy for the communitarian society

In shifting the active orientation from a society that seeks ever more affluence to one whose members cap their economic ambitions but are socially (in a communitarian sense) more active, technologies have three roles to play.

First, keep the economy humming at a level that makes it possible to satisfy all members’ basic needs, for instance by making health care safer, higher quality, and lower cost. Many of the greatest advances in health care have been achieved by the provision of public goods with population-wide benefits, such as clean water and air, or by relatively cheap technologies, such as vaccines. We do not need speed trading on Wall Street, and we can do without the trivial redesign of medications to extend the period during which corporations can charge high prices for them. More generally, deciding what is essential and what is not requires an ongoing societal discussion that puts the quality of life and what makes a good society, rather than growth, at the center of our agenda.

Second, ameliorate the environmental consequences of industrialism. Although often divisive and painful, the debates over climate and energy are slowly putting societies on a technological path toward cleaner, affordable energy. A much less-polluting economy is a critical element of a post-affluent world, and there is much to be learned from our continuing experience in moving toward this goal through the selection of technological innovations. To take one very simple example that can stand for many others, the Nest Learning Thermostat first observes the preferred settings of those who live in the residence. It then uses a sensor to determine whether anyone is home; if no one is, it lowers the setting until it senses movement again. And it displays a green leaf to reward those who adjust their setting by two degrees or more away from their initial comfort zone, to save energy. Such technology is not merely environmentally friendly; it also provides a new and wholesome source of pride and self-esteem. The post-affluence society needs scores of innovations like this, on a much larger scale, and they can play a major role as we come to use ever more smart instruments and shift to the “Internet of Things.”
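For readers who want a concrete picture of the logic just described, here is a minimal sketch in Python of a simplified occupancy-driven setback scheme. It is a toy model for illustration only, not Nest’s actual implementation; all class, method, and variable names are invented.

# A toy sketch of the setback logic described above. Not Nest's actual
# implementation; all names here are invented for illustration.

class SetbackThermostat:
    def __init__(self, preferred_temp_f, setback_step_f=1.0, min_temp_f=60.0):
        self.preferred = preferred_temp_f  # learned from the occupants' habits
        self.step = setback_step_f         # degrees to drop per idle interval
        self.floor = min_temp_f            # never set back below this
        self.setpoint = preferred_temp_f

    def update(self, motion_detected):
        """Lower the setpoint while the home seems empty; restore it on motion."""
        if motion_detected:
            self.setpoint = self.preferred
        else:
            self.setpoint = max(self.floor, self.setpoint - self.step)
        return self.setpoint

    def earns_green_leaf(self):
        """The 'leaf' rewards settings two or more degrees below the comfort zone."""
        return (self.preferred - self.setpoint) >= 2.0

thermostat = SetbackThermostat(preferred_temp_f=70.0)
for _ in range(3):                        # three idle intervals, no movement sensed
    thermostat.update(motion_detected=False)
print(thermostat.setpoint)                # 67.0
print(thermostat.earns_green_leaf())      # True: three degrees below the comfort zone

The design choice worth noting is that the device never asks the occupants to sacrifice comfort permanently: the preferred setting is restored the moment movement is detected, while the “leaf” turns energy saving into a small, visible reward.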

Third, allow for a more active communitarian life, for instance through technologies that facilitate group interactions rather than isolate people, technologies that make voting easier while helping to prevent fraud, and technologies that enable parents to monitor the whereabouts of young children. We are all familiar with these technologies—conference calling and telecommuting that reduce the need for travel, email instead of home delivery of mail, and nanny cams. Most measures that make various commonly used technologies, from refrigerators to cars, smarter through artificial intelligence and the coming Internet of Things also qualify. True, each of these technologies can be abused. Children can become addicted to screen time and avoid the outdoors and face-to-face social life. The Internet of Things can lead us to delegate too many choices to algorithms. But my argument is that we can and must assess the social and human impact of such technologies, and seek to modify them to serve the shift to the post-affluence society. Indeed, these and many other technologies will achieve their full potential only when we embrace a major culture shift that recognizes the contributions of innovation not merely to reducing our environmental footprint, but also to fostering a better life measured by goals other than working more and consuming more.

In the words of Pope Francis during his 2015 visit to Washington, DC, “we have the freedom needed to limit and direct technology to devise intelligent ways of developing and limiting our power, and to put technology at the service of another type of progress, one which is healthier, more human, more social, more integral.”

Recommended reading

Richard Easterlin, “Diminishing Marginal Utility of Income? Caveat Emptor,” Social Indicators Research 70 (2005): 243-255.

Richard Easterlin, “Does Money Buy Happiness?” The Public Interest 30 (1973): 3-10.

Amitai Etzioni, “Normative-Affective Factors: Toward a New Decision-Making Model,” Journal of Economic Psychology 9, no. 1 (1988).

Alexander Charles Oliver Hall, “‘A Way of Revealing’: Technology and Utopianism in Contemporary Culture,” The Journal of Technology Studies 35, no. 1 (2009), available online: http://scholar.lib.vt.edu/ejournals/JOTS/v35/v35n1/hall.html.

Alexander Howard, “Pope Wants Technology to Make Us Better Humans,” The Huffington Post (September 24, 2015), available online: http://www.huffingtonpost.com/entry/pope-wants-technology-to-make-us-better-humans_5604141ce4b0fde8b0d17ea6.

Michael Huesemann and Joyce Huesemann, Techno-Fix: Why Technology Won’t Save Us or the Environment (Gabriola Island, Canada: New Society Publishers, 2011).

Nick Visser, “Bill Gates to Help Fight Climate Change by Investing up to $2 billion in Green Technology,” The Huffington Post (June 29, 2015), available online: http://www.huffingtonpost.com/2015/06/29/bill-gates-renewable-energy_n_7690418.html.

Amitai Etzioni ([email protected]) is University Professor at The George Washington University in Washington, DC and author of The New Normal and The Active Society.

Chemical Solutions

Geiser - Chemicals Without Harm

Ken Geiser, professor emeritus at the University of Massachusetts, Lowell, and founder of the Lowell Center for Sustainable Production, has written another excellent book, Chemicals Without Harm: Policies for a Sustainable World, which follows and extends a book he wrote 14 years ago. That book, Materials Matter: Toward a Sustainable Materials Policy, was framed by the Bhopal chemical plant disaster in India and the lessons learned from that experience. Geiser said that the chief lesson of Bhopal was that the materials used were highly toxic and hazardous and the production process itself was inherently problematic.

The earlier book had a foreword by the ecologist Barry Commoner in which he described a hypothetical business decision about whether to produce acrylic grass for use on athletic fields; the company in this example decided not to produce the grass because of the injuries athletes suffered when they fell or slid on artificial turf. The example could now be updated to include the potential hazards of crumb rubber from recycled tires that is spread on artificial turf to cushion falls, and the growing concerns about health problems the new materials may be causing. Paul Anastas, the director of the Center for Green Chemistry and Green Engineering at Yale University, said recently in response to these concerns, “Tires were not designed to be playgrounds. They were designed to be tires.” The point, as before, is that materials matter.

Chemicals Without Harm takes the reader beyond the previous lessons and proposes a re-framing of the dilemmas created by manufacturing products made of hazardous materials. To get to the re-framing, however, Geiser takes us on a deep dive into current federal and state policies, current synthetic chemical science, industrial infrastructure, advocacy organizations, and politics regarding chemicals production. He focuses primarily on the United States, although he references important developments in the European Union, East Asia, and international organizations, such as the United Nations Environment Program. He states the central argument of the book at the outset: “We can develop and use safer alternatives to the chemicals that threaten our health and environment; however, this will require a new chemical strategy focused on broad changes in science, the chemical economy, and government policy.”

In the early chapters, Geiser traces the origins of the current U.S. legal and regulatory framework, and the gaps, loopholes, and weaknesses that have reduced the effectiveness of even the most well-intentioned chemical policies. On a more fundamental level, he notes that by reducing the problem to the regulation of “a few bad actors,” this system avoided addressing the production and consumption systems that created the hazards in the first place. He also notes that the U.S. regulatory apparatus tends to require proof of harm, and takes a “risk-based” approach to controlling chemical hazards. In contrast, the current European Union approach, called REACH (Registration, Evaluation, Authorisation, and Restriction of Chemicals), takes a “hazard-based” approach. The latter requires reasonable proof of safety before chemicals are allowed on the market. In other words, the REACH regulation operationalizes the “precautionary principle,” which is quite different from the U.S. approach, which has been characterized as embodying the “reactionary principle” for requiring incontrovertible evidence of harm before taking preventive action.

In the middle chapters, Geiser goes on to describe how a re-framing of U.S. chemicals policy around “green chemistry” would focus on producing safer chemicals rather than controlling hazardous ones. He outlines the vast scale and monetary value of the chemical economy, including a useful summary of the principal steps in chemical production, from primary petrochemicals such as ethylene and benzene, through intermediate chemicals such as vinyl chloride and styrene, to various end-use products. This is the platform on which the current chemicals economy is built and that will have to be transformed to produce useful end products with less hazardous materials. Green chemistry will play a crucial role in this transformation, which Geiser emphasizes at various points throughout the book, and the principles guiding this new way of thinking about chemicals and chemical production will take decades to reach fruition.

The Massachusetts Toxics Use Reduction (TUR) Act exemplifies the type of innovation that will need to become more widespread in the United States. This law, which was passed by the state legislature in 1989, set in place procedures to help companies develop plans to reduce the use and release of toxic chemicals in Massachusetts. Geiser outlines the key elements of the TUR process and alternatives assessment. He is extremely well positioned to explain how this might become a model for other states. In the 1980s, he helped draft and garner support for the original TUR Act, and in the 1990s, he became the first director of the Toxics Use Reduction Institute (TURI) at the University of Massachusetts, Lowell, and guided its work for over a decade. I had the opportunity to witness the development of this groundbreaking program, first as a member of the TUR Advisory Board and later as a member and chair of the TURI Science Advisory Board. I can attest to the success of the program over the past 25 years and to the inspiration and leadership provided by Ken Geiser.

Alternatives assessment is a key tool in the transformation to safer chemicals. Although Geiser describes alternatives assessment as “a conventional decision making process,” he shows how it can be focused on finding safe alternatives to hazardous materials currently being used. As implemented by staff in the Massachusetts TURI program and the companion Lowell Center for Sustainable Production, assessment includes consideration of not just economic feasibility and technical performance, but also social justice, human health, and environmental impacts.

As an example, in 2005 the state legislature provided funds to evaluate alternatives for formaldehyde, lead, perchloroethylene, hexavalent chromium, and di(2-ethylhexyl) phthalate. The resulting report by TURI staff, with input from a multi-stakeholder advisory committee, identified at least one equivalent but less hazardous alternative for each of the five targeted chemicals. A particularly compelling analysis of perchloroethylene, used in dry cleaning clothes and fabrics, concluded that professional “wet-cleaning” was a suitable alternative. This finding has led to a concerted effort to promote adoption of this safer method in Massachusetts communities, in parallel with similar efforts in other states.

The final chapters of Chemicals Without Harm focus on engaging national and international support for a transformative chemicals policy. Here again, Geiser is well positioned to comment on existing policies and propose new ones. In addition to his groundbreaking work in Massachusetts, over the past three decades Geiser has inspired and advised proponents of state chemicals policy reforms in Maine, California, Oregon, and Minnesota, among other states. He also participated directly in developing the Strategic Approach to International Chemicals Management (SAICM), which is a voluntary program that aims to achieve sound management of chemicals globally so that adverse health and environmental effects are minimized. SAICM’s Global Action Plan, with funds from the Global Environment Facility, includes a wide range of goals and activities that Geiser has promoted over the years. Similarly, nongovernmental organizations such as Health Care Without Harm and the International Campaign for Responsible Technology have, in consultation with Geiser, promoted preventive chemicals policies in specific industries.

Geiser ends his book with intelligent recommendations, and with a call to build a broad-based movement for chemicals policy reform in the United States. Geiser is quite realistic about what this will require and resists the temptation to despair of the possibilities in the current political climate: “A change as broad as what is imagined here cannot be realized through private negotiations or by a small group of elite individuals. It takes large and influential social movements to achieve major policy change in the United States.” But he points out that major policy changes have occurred in the past, with the passage of environmental protection laws in the 1970s and more recently with the Affordable Care Act, even in the face of determined political opposition. If equivalent national legislation for chemicals policy reform is not likely in the current U.S. Congress, there are other strategies that can succeed at the state level and within specific markets. Furthermore, Geiser points out that there are leaders, including chemists such as John Warner, business spokespeople such as Roger McFadden, and advocates such as Mike Belliveau and Charlotte Brody, coming up through the ranks who can carry these strategies forward over the coming years.

In short, this latest book by Ken Geiser is a penetrating and invaluable guide to those who wish to promote sustainable chemistries, and who want useful and effective products that do not jeopardize the health of future generations and the integrity of the global environment. This book will inspire those who wish to pursue such goals.

Super-muscly Pigs

Animal research is moving rapidly in two divergent directions. Research on animal cognition, behavior, and welfare is teaching us that many animal species have complex cognitive and emotional lives and needs, insights that reasonably increase our empathy for and understanding of species other than our own. At the same time, new gene editing technologies are allowing scientists to design animals in ways that maximize their economic value as food sources. These technologies include programmable nuclease-based genome editing tools, such as zinc-finger nucleases, transcription activator-like effector nucleases (TALENs), and the CRISPR/Cas9 system. They permit the direct manipulation of virtually any gene of a living organism more easily, cheaply, and accurately than has ever been possible before. In the last five years, these technologies have been used to edit the germline of more than 300 pigs, cattle, sheep, and goats. In June 2015, a team of scientists from South Korea announced the creation of super-muscly pigs using the single-gene editing technology TALENs. Whereas the debates on the ethical and social aspects of genome editing in human embryos and crops have triggered public, political, and media firestorms, genome editing in animals has received virtually no ethical scrutiny. Yet the genetic modification of farm animals like super-muscly pigs raises complex ethical questions about animal welfare, about who benefits from these technologies, and about the evolving, contradictory relationship between humans and animals. These questions have been ignored so far, but our growing awareness of the rich inner lives of many animal species makes such neglect increasingly troublesome.

The animals’ perspective

Concerns about the welfare of genetically engineered animals start with the very process of creating them. Sperm and egg donors and surrogate mothers are normally killed if they are not “re-usable” for other purposes (such as in other animal experiments). Furthermore, animals whose modifications end up being undesirable either die because their health is severely compromised, or are killed because they are neither commercially valuable nor usable for scientific purposes. Due partly to the novelty of the methods, few data are available on the costs of the new engineering techniques in terms of the total numbers of animals used, the unintended suffering created, and the effects they are having on the phenotypes of various species. For the super-muscly pigs, the large size of the newborn piglets leads to birthing difficulties; only 13 of the 32 piglets created by the South Korean scientists survived as long as eight months, and only one survived considerably longer in a healthy state.

For sheep, goats, cattle, and pigs, genome editing techniques are used to modify primary cells, which are then transferred to the recipient mother via somatic cell nuclear transfer (cloning). This causes a range of animal welfare problems, such as very low live birth rates in some species; abnormal sizes, which render the animals incapable of natural movement; and respiratory and cardiac problems. Although genome editing techniques are expected to offer more precise modification of the genome, they, too, generate many more animals than are actually used for experimentation. For example, a 2016 paper by Wenfang Tan and others in Transgenic Research surveyed the published literature and determined that out of 23,216 pig embryos, which were implanted in 112 pigs and generated 62 pregnancies, 237 pigs were born alive. Of these, 179 (76%) were properly modified or “edited,” whereas the remaining 58 were not usable for the experiments. Scientists working in the field concentrate on the 76% of the pigs born alive and properly edited, which represents a success with respect to previous technologies. However, if we go back to the number of embryos needed, to the pigs involved in the pregnancies, and to the individuals born without the desired modification, we can easily see that any evaluation of the “efficiency” of these procedures depends on whether and how one counts the lives of the animals involved at every stage of the process. Among other considerations, as noted in a 2015 paper by Goetz Laible, Jingwei Wei, and Stefan Wagner, the “high efficiency” of CRISPR/Cas9 gene editing comes at the price of unintended mutations elsewhere in the genome.
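To make the dependence on the denominator concrete, the arithmetic can be written out directly. This is a minimal illustration in Python using only the figures quoted above; the variable names are our own, chosen for clarity.

# Figures reported above from the Tan et al. survey (Transgenic Research, 2016).
embryos_implanted = 23216
piglets_born_alive = 237
piglets_properly_edited = 179

# Counting only live births, the procedure looks efficient:
print(piglets_properly_edited / piglets_born_alive)   # ~0.76, the 76% cited above

# Counting every implanted embryo, the same experiment looks very different:
print(piglets_properly_edited / embryos_implanted)    # ~0.008, well under 1%

Both numbers describe the same experiment; which one counts as the “efficiency” depends entirely on which animals are included in the denominator.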

For the scientists working on animal biotechnology in agriculture, the “efficiency” of gene editing provides a path to “solutions to securing food security for a rapidly growing human population under constraints of decreasing resources and a changing world climate,” as Laible and colleagues explain. However, if we move the focus away from the perception of animals as sources of food products that need to be optimized, and instead consider the new kinds of costs that animals must pay for being created through these technologies, the picture becomes very different.

Threats to the animals’ welfare do not end with their creation in the lab. Pigs, like many other animals used in agriculture, are increasingly recognized by scientists as having complex abilities and needs in both the cognitive and social domains. As sentient beings, they have what biologists and veterinarians call “ethological needs,” comprising, for example, the need to explore their surroundings and to engage in meaningful social interaction with others of their species. These needs are an innate and important part of the animals’ behavioral repertoire. For example, animal welfare researchers have shown that if sows are not permitted to build nests prior to giving birth, welfare problems appear, such as abnormal repetitive behaviors including bar-biting (chewing the metal bars of their crates), tail-biting, head-weaving, and vacuum chewing (chewing when nothing is present). Farm animal suffering also manifests in diseases, lesions, and injuries (sometimes linked to high stocking densities or the quality of flooring), and arises from lack of space and behavioral stimuli, malnutrition, stress during handling, isolation, transportation, and, ultimately, killing methods.

We mention these well-known signs of animal discomfort because the gene editing of pigs and other farm animals is not being developed with consideration of how it might contribute to or even exacerbate such suffering. Especially in countries with minimal or no animal welfare regulations, increased productivity due to genetic engineering could simply lead to more inhumane breeding, raising, and slaughter of ever-greater numbers of animals. Scientists who view genetically modified (GM) animals only in terms of food production fail to take into account the ethical costs of their one-dimensional perspective. The rationale for such a narrow view is obvious: increased yields in the mass production of meat directly translate to increased profitability. But who is accounting for the increased animal suffering that gene editing technologies enable? Our concern is that the value gained from the higher precision of modifications compared with selective breeding, not to mention the economic interests involved in the meat market, creates a powerful disincentive to consider the welfare concerns connected to GM animals.

The global context

The ambition behind the genetic modification of pigs and other animals bred for human consumption is to increase economic productivity. Scientists developing GM animals insist that their engineered status poses no threat to the environment, but such arguments cannot address larger questions of a globally sustainable food policy.

The livestock sector accounts for 14.5% of total global greenhouse gas emissions, which is more than the entire transport sector. It is the largest global source of the greenhouse gases methane (from ruminant digestive processes) and nitrous oxide (from manure and fertilizers used in the production of animal feedstuffs). The 2011 European Nitrogen Assessment estimated that in Europe, 85% of harvested nitrogen is used to feed livestock, with only 15% feeding people directly—even as the average European Union (EU) citizen consumes 70% more protein than needed for a healthy diet. Animal production creates substantial water and land pollution and requires vast amounts of territory—an estimated 45% of the global land surface area. Meat consumption is also linked to increased health risks such as cancer, ischemic heart disease, stroke, and diabetes mellitus. A report of the United Nations Environment Program (UNEP) concludes that both human and global environmental health would benefit from a substantial diet change away from animal products on a global scale. Yet demand for meat and dairy products continues to rise worldwide, driven especially by expected rising standards of living in China, India, and Russia. According to the Organisation for Economic Co-operation and Development (OECD)/Food and Agriculture Organization (FAO) Agricultural Outlook 2015, global meat production rose by almost 20% over the last decade and is expected to expand further through 2024. Pig meat production is expected to expand by 12% and poultry by 24% relative to 2012–14. The expansion is driven, in part, by increased profitability, particularly in these two sectors. Developing countries will account for approximately 75% of the additional output. To the extent that genetic modification of livestock increases meat production, it is also likely to lead to a “rebound effect,” driving prices down and further increasing the demand for animal products—just as making roads wider or paving more parking lots tends to make traffic problems worse. Thus, economic incentives, changing demographics and dietary habits, and advances in gene editing technology are all pushing in the same direction, toward increased stress on global environmental and food production systems.

Given these concerns, there is profound unresolved tension between, on the one hand, the plea by an increasing number of scientists and institutions like the OECD/FAO and the UNEP for a substantial reduction in agriculture’s adverse impact through a decrease in the use of animals for food, and, on the other hand, biotechnologists’ support for ever-enhanced forms of animals for meat and dairy production. This tension is fundamentally political in nature, yet it is also a problem of ethics and values, not only because these technologies have negative impacts on animal welfare, but also because they profoundly shape the way in which we think about animals.

The changing human-animal relationship

Increased knowledge about animal sentience, cognition, and behavior is changing the human-animal relationship. Decades of ethical debates and a tightening of animal welfare regulations in many countries around the world contributed to higher awareness of the needs of some animals—predominantly the ones we take into our homes. We increasingly attribute emotions, intelligence, needs, and even some rights to our companion animals. We usually do not kill and eat them. Our perception of farm animals, like pigs, is substantially different, of course. We only pet the ones we love and tend to forget about the ones we eat—a cognitive dissonance that, to us, is one of the most striking and ethically problematic features of today’s human-animal relationship. Psychological studies show, unsurprisingly, that we tend to resolve our cognitive dissonance by denying the suffering and complex cognition of the species we want to eat. Meanwhile, scientific evidence increasingly shows that farm animals share capacities like sentience and cognition with humans, making it ever harder to exempt them from moral consideration.

Although science aimed at understanding the cognitive capacities of animals gives us reason to appreciate and empathize with dogs and pigs alike, technological advances like gene editing accelerate the drift of farm animals’ moral status away from living beings that have inherent value (a moral standing in themselves) toward products that have only instrumental value and a market price. Gene-edited animals like super-muscly pigs are brought to life and designed to meet human desires. Scientists speak of eliminating undesired traits to enhance productivity. In the process, these designed animals may even suffer more than their non-GM relatives because of their extreme physiological traits. They are a technological success exactly because they are optimized—for example, via an increase in body mass to match their instrumental purpose—but what about their suffering?

The scientific creation of super-muscly pigs thus drives a problematic understanding of animals. It does so at a time when we are increasingly aware that animals, at least sentient animals, have needs and interests of their own and objective attributes of individual welfare, and thus a moral standing that is independent of their instrumental value as objects for our consumption.

As such, the application of gene editing to animals, which to some is a step forward for technology, may be a step backward for human moral development, as it conceals and heightens conflicts and dilemmas about which a truly reflective society should be openly deliberating. The continued pursuit of our scientific capacity to engineer animals for consumption, without commensurate attention to the ethical issues at the center of global meat production, is at best naive, and at worst irresponsible. Recent decades have seen significant strides in confronting the ethical challenges of our relationships with animals, progress that is supported by our increasing scientific knowledge about animal cognition and sentience. Concern for animal welfare is rising. Meanwhile, from the overuse of antibiotics to the pollution of waterways from feedlots, the environmental consequences of the global food system add another dimension of concern to the globalizing market for meat. In all, our growing understanding of the human-animal relationship suggests that we need to revise our view of sentient animals, such as pigs, to recognize that far from being merely editable genetic material and edible flesh, they are also living individuals that merit our serious moral consideration.

From the Hill – Spring 2016

Congress advances spending bills for NSF, NASA, Energy, and USDA

In mid-May, the House Appropriations Committee approved FY 2017 spending bills covering the Department of Energy (DOE) and the Department of Agriculture, while Senate appropriators passed their transportation, commerce, justice, and science bills providing funding for the National Science Foundation (NSF), National Aeronautics and Space Administration (NASA), and the commerce agencies.

The House Committee’s energy-water bill (H.R. 2028) would increase DOE’s Office of Science budget by $53 million, or 1%, above FY 2016 levels, whereas the president sought a 4.2% funding boost. DOE’s Office of Energy Efficiency and Renewable Energy would see a large reduction of $248 million, or 12%, below FY 2016 levels, though House appropriators did provide substantial funding increases for grid-related research and development (R&D) and for the Advanced Research Projects Agency-Energy (ARPA-E). The energy bill now awaits action on the House floor. The Senate approved its energy and water appropriations bill by an overwhelming 90-8 vote. Like the House bill, it provides only a 1% increase for the Office of Science. The bill does not include the administration’s request for a major funding increase for low-carbon energy technology as part of the Mission Innovation Initiative.

The House Appropriations Committee also approved its FY 2017 agriculture spending bill. Under the bill, USDA’s R&D funding would drop to a total of $2.3 billion in FY 2017, 3.7% below FY 2016 levels and 1.4% below the president’s request. The agriculture bill now heads to the House floor for consideration.

The Senate’s commerce, justice, science bill (S. 2837), approved by the Senate Appropriations Committee, would grant NASA a small $21 million increase above FY 2016 levels, as compared to the president’s proposed reduction of $1 billion in the NASA discretionary budget. Funding for the Space Launch System (SLS) and Orion would receive increases rather than the administration’s proposed cuts, and the Science Mission Directorate (SMD) would also see funding above the request, though still 3.5% below FY 2016 levels. Elsewhere in the bill, NSF would be essentially flat-funded at FY 2016 levels, compared with the 1.3% discretionary increase sought by the administration. The National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology would both see very modest increases in their overall budgets.

The Senate transportation, housing and urban development (THUD) bill (S. 2844) was also approved by committee. Surface transportation funding in the bill is consistent with the Fixing America’s Surface Transportation Act reauthorization reached last winter, according to the committee. The THUD bill now heads to the Senate floor.

Senate hearing on leveraging US federal investments in science and technology

The Senate Commerce, Science, and Transportation Committee held a May hearing to explore how a reauthorized America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act can improve the US science and technology enterprise. The committee heard from Rob Atkinson, president of the Information Technology and Innovation Foundation; Kelvin Droegemeier, vice chair of the National Science Board; David Munson, Robert J. Vlasic Dean of Engineering at the University of Michigan; and Jeannette Wing, corporate vice president for research at Microsoft. The hearing featured questions from members on regional innovation programs that leverage science and technological advances to improve local economies; ways to improve US education; and ideas for improving coordination across federal agencies and the academic and private sectors. No specific timetable was set for introducing the legislation currently being drafted by the committee, but chairman John Thune (R-SD) stated in his opening remarks that he is “hopeful the bill will be ready in the coming days.”

Bill to improve understanding of space weather introduced

Sen. Gary Peters (D-MI) introduced legislation that follows on the White House’s recent Space Weather Strategy and would codify responsibilities of federal agencies with oversight of space weather research and forecasting, including DOD, NASA, NOAA, NSF, and the Department of Homeland Security. The legislation covers everything from clarifying that NSF and NASA should pursue the basic scientific research needed to better understand and, ultimately, predict space weather events, to other agencies’ responsibilities to provide forecasting services and assess space and ground-based infrastructure vulnerabilities to space weather events. The bipartisan bill has received praise from members of the scientific community and is moving quickly toward a markup by the Senate Commerce, Science, and Transportation Committee.

Senate committee passes SBIR/STTR reauthorization

The Senate Small Business and Entrepreneurship Committee passed a bipartisan reauthorization of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, which receive a guaranteed percentage of federal agencies’ extramural research and development budgets. Participating agencies are those that spend more than $100 million on extramural research, and the reauthorization bill would make the programs permanent rather than requiring reauthorization every five or six years. In addition, it would increase the percentage of budgets devoted to the programs from the current 3% to 6% for non-defense agencies and 5% for the Department of Defense (DOD) by 2028, and institute a suite of reforms. The bill now awaits action by the full chamber. The House Small Business Committee has also passed reauthorization legislation, which includes smaller increases over a shorter time period and fewer program reforms. That bill is awaiting action by the House Science, Space, and Technology Committee before it can move forward. Finally, in its markup of the National Defense Authorization Act for FY 2017, the Senate Armed Services Committee included language that would make the DOD SBIR and STTR programs permanent.

Senate passes bipartisan energy policy modernization act

On April 20, the Senate passed, by an overwhelming vote of 85-12, a bipartisan comprehensive energy bill that touches on many aspects of federal energy policy—from efficiency standards and programs to natural gas export authority—and includes provisions that reauthorize the DOE Office of Science and ARPA-E. The bill authorizes increased funding targets for DOE-Science and ARPA-E for the next five years and rescinds unused or unneeded program authorities initiated by the prior two America COMPETES Acts of 2007 and 2010. A companion bill passed by the House late last year started on the same bipartisan track, but the final House bill is considered decidedly more partisan than the newly passed Senate bill. The House bill also does not contain research-related provisions, owing to the differing jurisdictions of the relevant House and Senate committees; energy research provisions are instead included in the separate House-passed version of COMPETES and differ from those in the Senate comprehensive bill. The House and Senate bills will now need to be reconciled, which legislators plan to do via a conference committee charged with producing a final compromise bill to send to the president for signature.

Senator Flake releases science-focused “Wastebook”

Sen. Jeff Flake (R-AZ) released his new report at a press conference on May 10, followed by a speech on the Senate floor two days later, and several press appearances. Whereas past “wastebooks” have included scientific research among other federal spending the senator considers wasteful, this report focuses solely on scientific research, including references to several grants that are no longer active. In releasing the report, the senator argues that the government should pay more attention to how its research funds are distributed, particularly when there are pressing priorities in areas such as medical science. Citing a current lack of transparency, Sen. Flake released companion legislation that would require more specific public accounting of funds spent on each individual project supported under a grant, a proposal that runs contrary to the current bipartisan push to lessen administrative burdens on researchers.

Hill addendum

OSTP announces National Microbiome Initiative

The White House Office of Science and Technology Policy announced a collaboration between federal agencies and the private sector to take a more cooperative approach to studying the microbiome of a range of ecosystems. The National Microbiome Initiative will focus on three specific goals: support interdisciplinary basic research; develop platform technologies to share information and knowledge; and expand participation through citizen science and public engagement.

NSF releases future vision for research

NSF director France Córdova has published a list of nine ideas intended to shape the foundation’s investments in the future. The list includes six research ideas:

• Harnessing data for twenty-first century science and engineering

• Shaping the new human-technology frontier

• Understanding the rules of life: predicting phenotype

• The quantum leap: leading the next quantum revolution

• Navigating the new Arctic

• Windows on the universe: the era of multi-messenger astrophysics

NSF reports increase in US graduate enrollment in science and engineering

NSF’s National Center for Science and Engineering Statistics (NCSES) released an updated report showing that the number of science and engineering (S&E) graduate students increased by 5.5% between 2013 and 2014, rising from 570,300 to 601,883. NCSES attributes much of this growth to a continuing increase in the enrollment of foreign graduate students on temporary visas, which grew by 7.4% between 2012 and 2013, and by 16.0% between 2013 and 2014. The report also finds that the number of S&E graduate students primarily supported by federal sources declined by 8.2% between 2009 and 2014, while those primarily on self-support increased by 26.7% during the same time period.

Posthumans’ Inhumanity to Man

In 1963, when the still-new science of molecular biology was reaping its first big harvest, the great bacterial geneticist Joshua Lederberg looked ahead to what the field might bring in the coming years. Technology was advancing so rapidly, he wrote in Nature, that to keep pace with it, humans would need to adapt faster than natural evolution could take us. Threats such as pollution and nuclear war, he believed, demanded that we take absolute control over our genetics, development, and environment. “In short,” he concluded, “Man, unless he grows less ‘human,’ may destroy himself.”

Lederberg could wax melodramatic, but his awe toward science and technology was real and literal: the future both fascinated and terrified him. He was neither titillated nor repulsed by the thought of mankind transcending itself. “Growing less human” to him required courage, ample respect for the forces involved, and a rational, scientific head.

Today’s techno-futurists are both more sensational and more political than he. They over-predict and tend to fracture along lines they characterize as “conservative” and “liberal.” Conservatives are disgusted by the idea of becoming no longer human. Improve humanity by all means, they say, but people we are born and people we should die. In contrast, the self-described liberals—such as Ray Kurzweil, Nick Bostrom, and John Harris—thrill to the idea of transhumanism. They would use the tools of modern technology to enhance our powers, extend our lives, and eliminate gender, race, disease, suffering, stupidity, cruelty, and ultimately, limits of any kind on what “we” can be.

Eclipse of Man

Charles T. Rubin, a political scientist at Duquesne, a Catholic university in Pittsburgh, sits squarely in the conservative camp. He joins authors such as Leon Kass, Francis Fukuyama, and Bill McKibben in opposing liberal eugenics and transhumanism on moral grounds. Kass’s 1997 argument regarding the “wisdom of repugnance” is perhaps the best-known such perspective. Some have mocked it as the “yuck factor,” though that label is a bit of a caricature. Listen to your gut, Kass writes: if it clenches at the thought of a new innovation, that’s reason enough to believe that innovation is morally wrong. Rubin argues that transhumanism implies not human progress but human extinction. Yuck.

He builds his case in a peculiarly haphazard way. Eschewing both a broad survey of transhumanism and an in-depth critique of the key tenets of transhumanist thought, he takes a muddy middle ground, cherry-picking a handful of novels and creative nonfiction publications to serve as case studies. Some of these works are dark and dystopian, others boundlessly sunny and idealistic. Rubin does not defend his selections as quintessential or landmark works; indeed, some he acknowledges as minor or ignored. One imagines him running a finger down his own bookshelves and, with an “Aha!,” pulling down, say, a favorite by Arthur C. Clarke. A good third of the book is devoted to lovingly detailed but digressive plot summaries. He also indulges in fictional opening vignettes intended to illustrate his central points. In the introduction, he notes that great fiction often conveys moral complexity better than exposition. His parables, though, have none of the character or plot development that gives fiction its moral power. Rubin clearly believes that this recourse to art strengthens the book, but he is at his best when he drops the literary pretense and simply makes his argument.

Rubin anchors the book traditionally, in the Enlightenment. He opens with a look back at several historical futurists, starting with the French Enlightenment thinker Condorcet. He moves on to the Progressive era, building a theme of human advancement through science and the hazards of overestimating the power of human reason. (Near the book’s end, the myth of Icarus serves as an emblem.) What Rubin does not build on, though, is the Progressive—and progressive—commitment to feminism. The contemporary transhumanist movement shares with the Great Books canon a disproportionate representation of white men. Rubin goes with the flow. The lack of women or people of color in the book does not invalidate his argument, but it does mark it as conservative in the more traditional sense. Among the many risks of transhumanism that Rubin discusses, one will not find the loss of diversity.

For many of his chosen authors, the transcendence of limitation and weakness is achieved by advancing beyond what is recognizably human. The crystallographer J.D. Bernal predicted humanity calving a race of cyborgian posthuman rulers who oversee and control the blissful, ignorant “Melanesians,” who remain human in a world that has been engineered to provide and protect. In the vision of the pioneering geneticist and embryologist J.B.S. Haldane, intrinsic selfish individualism leads to the destruction of the earth; only through making man into one of the works of man do we stand a chance of survival. Although Rubin gives no indication that these works are representative of transhumanism, they do establish his own major themes in what follows.

The remaining chapters develop an argument based on roughly equating nonhuman with inhuman and amoral with immoral. Rubin begins, curiously, with Carl Sagan’s Search for Extraterrestrial Intelligence (SETI) project, whose researchers scan the heavens for signs of life. Key to the project’s ideology is the “assumption of mediocrity”: since we are barely able to reach out beyond our own planet, any civilization with which we make contact will almost certainly be more advanced. SETI is rooted in Vietnam-era apocalypticism and a tie-dyed faith that extraterrestrials will come in peace, because harmony is surely more advanced than bellicosity. But idealism is almost all SETI shares with transhumanism. Altering humanity is not integral to the SETI project. Further, the assumption of mediocrity is a form of modesty—a quality that lies at the heart of Rubin’s alternative to transhumanism. SETI thus seems a poor case study to support Rubin’s argument.

Transhumanism can take many forms. Rubin chooses to focus on nanotechnology as the means of “enabling” inhumanity. Nano-visionaries promise that molecular machines will end disease, permit immortality, provide us with superhuman powers, and enable us to redesign our bodies. Although Rubin is not explicit about why he examines nanotech, as opposed to, say, genomics or neuroscience, hints of a larger critique emerge in his synopses of Eric Drexler’s classic speculative exposition, Engines of Creation (1986), and Neal Stephenson’s novel The Diamond Age (1995). Key to Drexler’s argument is the notion of choice: the freedom to choose how and why we modify ourselves. Stephenson pushes this idea to dark extremes. The world he creates is governed by unfettered selfishness and driven by the fallacy that technological advancement leads inevitably to social improvement. It begins to seem that nanotech is but a foil. Rubin’s real “yuck” is technophilic libertarianism.

Transhumanist aspirations for brain hacks, indeterminate lifespans, and sensory superpowers, Rubin argues, have what he calls a moral lacuna at their center. If infinite possibility becomes our goal, he insists, we lose our moral compass. He never puts it quite this way, but death is what gives life meaning. Memento mori. Finitude is the only thing that raises any stakes for our decisions: what work we do, whom we love, how we live, why we raise children. But in the transhumanist utopia anything is possible, so nothing matters.

Ultimately, then, Rubin’s main objection to transhumanism is its empty materialism, its lack of a spiritual core. This is both trivial and profound. Obviously, materialism is the essence of all modern science, not just transhumanism. And yet, transhumanism could be construed as a reductio ad absurdum of scientism, the belief that science can solve all problems. The logical flaw in scientism is that precisely because science and technology are amoral, they can be applied with equal force to good or evil. The techno-fix does not absolve society of the need for moral choice.

But perhaps because Rubin never quite takes his critique this far, his solution to transhumanism is weak and unsatisfying. It is a tepid version of an argument from hubris. As he moves into his conclusion, his writing becomes more academic, the syntax pulled like taffy into strings of passive-voice clauses:

Surely it is not as if the only future that is worth looking forward to, and working hard for, is one in which we can achieve anything we can imagine, where everything will be permitted. If it were, what we are left with is mere pride in novelty and superlatives, a constant one-upsmanship of imaginative possibilities that diminish the worth of human beings as we actually know them. With no clear goal, direction, or purpose, with willful freedom of choice as the guiding light, how could it be otherwise?

As noted, Rubin’s solution is modesty. If we simply scale back our aspirations, he writes, if we strive to improve humanity while remaining human, we retain the humanitarian impulse to relieve suffering and better ourselves without sacrificing the self. But this proposal is far too subjective to be a realistic alternative to the technological juggernaut. Scale back to where? Are we to establish a federal modesty commission to enforce a humble red line that techno-liberals must not cross? Rubin simply defends his own morality against the deficiency inherent in transhumanism. In short, Rubin’s argument for modesty is less a solution to transhumanist hubris than an expression of his own moral identity—his persona, his ego.

A more muscular response would be an argument from humility. Transhumanists and eugenicists have always maintained that we are on the verge of knowing enough to take the reins of our own evolution and steer ourselves toward perfection. This argument conflates technical prowess with scientific understanding. Technology is the acquisition of power; science, perhaps most accurately, is a process of discovering our ignorance. Thus, the more we can do, the less we appreciate the implications of what we do.

Every major technical advance or scientific insight opens up a vast world of undreamed-of complexity that mocks the understanding we thought we’d achieved and dwarfs the power we hoped we’d acquired. For example, in 1910, Mendelian genetics appeared to present a complete explanation of heredity and was used to justify an appalling program of segregation, marriage restriction, and sexual sterilization. It is easy, from our vantage, to tut at their ignorance. Yet today, transhumanists, so-called liberal eugenicists, and other techno-visionaries make the same argument—adding that individual choice (the selection by parents rather than the government of desirable traits in their children) solves the moral problems of eugenics.

Meanwhile, most of the dogmas of modern biology are being drastically revised, if not overthrown. The gene itself is in danger of vanishing, a single bit of plankton in a sea of interacting genomes and genetic and developmental regulation. Further, these vast networks warp and rearrange themselves in different environments. Consciousness—to say nothing of memory—is an emergent property of all this and more.

Consider Lederberg again. To knowledgeably direct our own evolution, we would need something close to a complete explanation of genetics, epigenetics, the microbiome, development, evolution, and ecology. The strong argument from humility, then, is to acknowledge that were we to control our own evolution, we would almost surely muck it up. In short, we can paraphrase Rubin by recasting Lederberg: man, if he tries to grow less human, may destroy himself.

On Human Gene Editing: International Summit Statement by the Organizing Committee

Scientific advances in molecular biology over the past 50 years have produced remarkable progress in medicine. Some of these advances have also raised important ethical and societal issues—for example, about the use of recombinant DNA technologies or embryonic stem cells. The scientific community has consistently recognized its responsibility to identify and confront these issues. In these cases, engagement by a range of stakeholders has led to solutions that have made it possible to obtain major benefits for human health while appropriately addressing societal issues.

Fundamental research into the ways by which bacteria defend themselves against viruses has recently led to the development of powerful new techniques that make it possible to perform gene editing—that is, precisely altering genetic sequences—in living cells, including those of humans, at much higher accuracy and efficiency than ever before possible. These techniques are already in broad use in biomedical research. They may also enable wide-ranging clinical applications in medicine. At the same time, the prospect of human genome editing raises many important scientific, ethical, and societal questions.

After three days of thoughtful discussion of these issues, the members of the Organizing Committee for the International Summit on Human Gene Editing have reached the following conclusions:

1. Basic and Preclinical Research. Intensive basic and preclinical research is clearly needed and should proceed, subject to appropriate legal and ethical rules and oversight, on (i) technologies for editing genetic sequences in human cells, (ii) the potential benefits and risks of proposed clinical uses, and (iii) understanding the biology of human embryos and germline cells. If, in the process of research, early human embryos or germline cells undergo gene editing, the modified cells should not be used to establish a pregnancy.

2. Clinical Use: Somatic. Many promising and valuable clinical applications of gene editing are directed at altering genetic sequences only in somatic cells—that is, cells whose genomes are not transmitted to the next generation. Examples that have been proposed include editing genes for sickle-cell anemia in blood cells or for improving the ability of immune cells to target cancer. There is a need to understand the risks, such as inaccurate editing, and the potential benefits of each proposed genetic modification. Because proposed clinical uses are intended to affect only the individual who receives them, they can be appropriately and rigorously evaluated within existing and evolving regulatory frameworks for gene therapy, and regulators can weigh risks and potential benefits in approving clinical trials and therapies.

3. Clinical Use: Germline. Gene editing might also be used, in principle, to make genetic alterations in gametes or embryos, which will be carried by all of the cells of a resulting child and will be passed on to subsequent generations as part of the human gene pool. Examples that have been proposed range from avoidance of severe inherited diseases to ‘enhancement’ of human capabilities. Such modifications of human genomes might include the introduction of naturally occurring variants or totally novel genetic changes thought to be beneficial.

Germline editing poses many important issues, including: (i) the risks of inaccurate editing (such as off-target mutations) and incomplete editing of the cells of early-stage embryos (mosaicism); (ii) the difficulty of predicting harmful effects that genetic changes may have under the wide range of circumstances experienced by the human population, including interactions with other genetic variants and with the environment; (iii) the obligation to consider implications for both the individual and the future generations who will carry the genetic alterations; (iv) the fact that, once introduced into the human population, genetic alterations would be difficult to remove and would not remain within any single community or country; (v) the possibility that permanent genetic ‘enhancements’ to subsets of the population could exacerbate social inequities or be used coercively; and (vi) the moral and ethical considerations in purposefully altering human evolution using this technology.

It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application. Moreover, any clinical use should proceed only under appropriate regulatory oversight. At present, these criteria have not been met for any proposed clinical use: the safety issues have not yet been adequately explored; the cases of most compelling benefit are limited; and many nations have legislative or regulatory bans on germline modification. However, as scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis.

4. Need for an Ongoing Forum. While each nation ultimately has the authority to regulate activities under its jurisdiction, the human genome is shared among all nations. The international community should strive to establish norms concerning acceptable uses of human germline editing and to harmonize regulations, in order to discourage unacceptable activities while advancing human health and welfare.

We therefore call upon the national academies that co-hosted the summit—the U.S. National Academy of Sciences and U.S. National Academy of Medicine; the Royal Society; and the Chinese Academy of Sciences—to take the lead in creating an ongoing international forum to discuss potential clinical uses of gene editing; help inform decisions by national policymakers and others; formulate recommendations and guidelines; and promote coordination among nations.

The forum should be inclusive among nations and engage a wide range of perspectives and expertise—including from biomedical scientists, social scientists, ethicists, health care providers, patients and their families, people with disabilities, policymakers, regulators, research funders, faith leaders, public interest advocates, industry representatives, and members of the general public.

The History of Eugenics

The human race today stands at a threshold unlike any in the past: it now possesses tools to reshape its own hereditary capacities, perhaps even to realize the dream of eugenicists that human beings might take charge of their own evolution. Over the long term, CRISPR could change the future of humanity, but no one is rushing into it. As President Barack Obama’s science adviser John Holdren has said, human germline editing “is a line that should not be crossed at this time.” The question is, will anyone be able to police that line? We are living in the age of biocapitalism, and it is entirely possible that commercial and consumer interests could find a way around the current commitments and controls of governments.

That would be an ironic outcome. As anyone who lived in the twentieth century knows, “eugenics” is a dirty word largely because of its association with abusive governments, particularly the Nazis, but also as a result of race-improvement policies in the United States. Politically, it’s an untouchable third rail. But scientifically, it’s now far more plausible than it ever was. With the advent of a new way to modify humans—by transforming their genes, rather than through breeding and extermination—it’s not overly alarmist to say that eugenics, or whatever we call it this time, could come back, only in a new, private form shaped by the dynamics of democratic consumer culture.

What could happen now is likely to be far more bottom-up than the top-down, state-directed racial programs of the past. We could see individuals and families choosing to edit their genes, whether to prevent illness or improve capacity or looks, and finding themselves encouraged to do so by what was absent in the era of eugenics: the biotechnology industry. Politicians are largely unaware of this possibility, but before long they’re going to have to take notice, especially if public demand starts to produce gene-editing services willy-nilly, perhaps at offshore clinics.

Examining why the dream of human biological improvement foundered in the past may help us understand why it may gain support in the future. The dream originated a century and a half ago with the British scientist and explorer Francis Galton, a younger first cousin of Charles Darwin’s. It was Galton who dubbed the idea “eugenics,” a word he took from the Greek root meaning “good in birth” or “noble in heredity.” It was well known that by careful selection, farmers and flower fanciers could obtain permanent breeds of plants and animals strong in particular traits. Galton, who believed that not only physical features but mental and moral capacities were inherited, wondered, “Could not the race of men be similarly improved?”

After the turn of the twentieth century, Galton’s ideas coalesced into a broadly popular movement that enlisted the new science of genetics and attracted the support of such luminaries as Teddy Roosevelt and Supreme Court Justice Oliver Wendell Holmes. They aimed, as Galton had said, to multiply society’s “desirables” and get rid of its “undesirables.”

A key problem was the difficulty of finding non-coercive means of multiplying the desirables. Galton proposed that the state sponsor competitive examinations in hereditary merit, celebrate the blushing winners in a public ceremony, foster wedded unions among them at Westminster Abbey, and encourage, by postnatal grants, the spawning of numerous eugenically golden offspring. But only the Nazis were willing, in practice, to enlist the state, establishing subsidies to racially meritorious couples in proportion to the number of children they bore. Heinrich Himmler urged members of the SS to father numerous children with racially preferred women, and in 1936 he instituted the Lebensborn—spa-like homes where SS mothers, married and unmarried, might receive the best medical care during their confinements.

Human improvers in the United States and Britain followed the route of voluntarism. Eugenics sympathizers such as Teddy Roosevelt, worried by the declining birth rate among their class, urged its women to bear more children for the good of the race. During the 1920s, taking a leaf from Galton’s book, they sponsored Fitter Family competitions in the “human stock” section of state agricultural fairs. At the 1924 Kansas Free Fair, winning families in the three categories—small, average, and large—were awarded a Governor’s Fitter Family Trophy. It is hard to know what made these families stand out as fit, but an indicator is supplied by the fact that all entrants had to take an IQ test—and the Wassermann test for syphilis.

Yet social-radical eugenicists, of whom there were a number on both sides of the Atlantic, were impatient with measures that sought to achieve human improvement within the constraints of conventional marriage and conception. A towering figure among them was J.B.S. Haldane, a brilliant British geneticist and evolutionary theorist. In 1924, in a slim book titled Daedalus, he laid out a method for producing human biological improvement that went far beyond urging high-class people to have more babies and behave well. The method centered on “ectogenesis”—the conception and nurturing of fetuses in glass vessels using gametes selected from a small number of superior men and women. Haldane predicted that the resulting offspring would be “so undoubtedly superior to the average that the advance in each generation in any single respect, from the increased output of first-class music to the decreased convictions for theft, is very startling.”

Aldous Huxley brilliantly spelled out the dystopian potential of Haldane’s scheme in Brave New World. But Hermann J. Muller joined with a collaborator in Britain named Herbert Brewer to agitate for the realization of Haldane’s goal by the use of artificial insemination.

Brewer was a scientifically self-educated letter carrier and Muller an innovative experimental geneticist who would eventually win a Nobel Prize. Both men held, as Brewer put it, that if the salvation of the human species required socialism “to make a better world to live in,” it also required eugenics “to make better men to live in the world.” Both men fastened on artificial insemination to achieve that purpose because, although it was an imperfectly reliable technology, it was being used successfully with animals, was making headway among women, and took advantage of the fact that men produced millions of times more sperm than women produced eggs. It would thus enable a small number of superior men annually to father thousands of comparable children.

In his 1935 book, Out of the Night, Muller declared that “in the course of a paltry century or two…it would be possible for the majority of the population to become of the innate quality of such men as Lenin, Newton, Leonardo, Pasteur, Beethoven, Omar Khayyám, Pushkin, Sun Yat-sen…or even to possess their varied faculties combined.” Would thousands of women willingly make themselves vessels for the sperm of great men? Assuredly yes, both Muller and Brewer predicted. Muller confidently explained: “How many women, in an enlightened community devoid of superstitious taboos and of sex slavery, would be eager and proud to bear and rear a child of Lenin or of Darwin! Is it not obvious that restraint, rather than compulsion, would be called for?”

What proved obvious was the opposite. Muller and Brewer were naïve in assuming that thousands of women would break out of the day’s conventional child-bearing practices and standards.

Ultimately, the dreams of all the eugenicists went awry for a variety of reasons—not least because of increasingly controversial efforts by governments to get rid of the undesirables from the top down. Many U.S. states enacted laws authorizing compulsory sterilization of people considered unworthy and sterilized some 36,000 hapless victims by 1941. The Nazis went much further, subjecting several hundred thousand people to the gonadal knife and eventually herding some 6 million Jews—their ultimate undesirables—into the death camps.

Postwar developments

After World War II, eugenics became a dirty word. Muller, now an anti-eugenicist, revived a version of his and Brewer’s idea in 1959, calling it Germinal Choice. Despite Muller’s disapproval, a wealthy plastic-eyeglass maker established a sperm bank for Germinal Choice in Southern California to make the gametes of Nobel laureates available to women eager to improve the quality of the gene pool. Few women—only 15 by the mid-1980s—availed themselves of the opportunity.

The voluntarist multiplication of desirables, whether socially conventional or radical, was also problematic for technical and moral reasons. The aim of producing more desirables called on people to invest their reproductive resources in the service of a public good—the quality of what they called “the race” or, as we would say, the population or the gene pool. But, by and large, people have children to satisfy themselves, not to fuel some brave new world. Moreover, it was—to say the least—uncertain that the sperm of one of Muller’s heroes would produce offspring of comparable powers. And at the time, Haldane’s ectogenesis was technically unrealizable; no one knew how to produce test-tube babies. The reliance on artificial insemination was a vexed strategy. It was offensive under prevailing moral standards, which counted artificial insemination by a donor who was not the woman’s husband a form of adultery and which stigmatized single women who bore children.

But now, just about all sexual and reproductive practices among consenting adults are acceptable, and although no one knows what genes may contribute to exceptional talent, biologists possess precise and increasing knowledge of which ones figure in numerous diseases and disorders. And CRISPR offers the prospect of biological improvement not for the sake of the gene pool, but for whatever advantages it offers to consumers. Indeed, perhaps the most potent force driving its use will be consumer demand aimed at restoring the health of individuals ill with a genetic disease or at improving the genetic profile of succeeding generations.

During the first third of the twentieth century, hundreds of men and women wrote to the Eugenics Record Office, in Cold Spring Harbor, New York, asking for advice about what kind of children they might produce. In offering advice, eugenic experts had nothing to go on except analyses of family pedigrees for deleterious traits, a strategy fraught with epistemological and prejudicial pitfalls. Still, the demand for advice continued after the post-World War II decline of the eugenics movement, providing a clientele for the increasingly medically oriented service of genetic counseling. The demand was multiplied in the latter half of the century by a series of technical advances that enabled prenatal diagnosis for flaws in a fetus’s genes and that, coupled with Roe v. Wade, permitted prospective parents to abort a troubled fetus.

The ability to have a healthy child—or, for infertile couples, to have a child at all—was further amplified by the advent in the late 1970s of in vitro fertilization (IVF)—that is, the joining of sperm and egg in a petri dish. Here was Haldane’s ectogenesis, only with the insertion of the resulting embryo into a woman’s womb. The method was pioneered by the British scientists Patrick Steptoe and Robert Edwards, who had first conducted fundamental research—work that eventually won a Nobel Prize—on conception and early gestation. At the time, they faced moral condemnation from scientists and ethicists for experimenting on an eventual child without its consent and for bringing about, in the vein of Haldane, a test-tube-baby eugenics.

They effectively rebutted the warnings of their critics with the birth, on July 25, 1978, of Louise Brown, the world’s first test-tube baby, perfectly formed and healthy, a joy to her hitherto infertile mother. But Edwards had predicted that IVF could also be used to check embryos fertilized in a petri dish for genetic or chromosomal flaws with the aim of implanting one free of them. IVF is now used for that purpose as well as for assisting infertile couples. It is not hard to imagine couples taking the next step—exploiting IVF to modify pre-implantation embryos by replacing a disease gene with a healthy one.

What seemed like a moral or technical issue in the past is—in this society—very likely to become a consumer question of who can afford it. Will parents want to use germline modification to enhance a child’s genetic endowment? Will they be willing to insert into their embryonic offspring a set of genes—should any such set ever be identified—associated with extraordinary mental, physical, or artistic capacities? Conceivably, yes, given what they already do, if they can afford it, to advantage their children through environmental encouragements such as good schools or biomedical interventions such as the administration of human growth hormone. They might readily cross the line between germline medical treatment and enhancement if today’s enhancement—say, the ability to do complex computing—turns into an essential capacity, like language, tomorrow.

Whatever purpose they might choose for germline editing, the contemporary right to reproductive freedom would assist their pursuit of it. The offspring would not be test-tube products of Huxley’s fascist, anti-family reproductive technology. They would be babies born of women, not conditioned but nurtured as much or as little as any other child. As early as 1989, at the beginning of the Human Genome Project, the journal Trends in Biotechnology pointedly noted: “’Human improvement’ is a fact of life, not because of the state eugenics committee, but because of consumer demand. How can we expect to deal responsibly with human genetic information in such a culture?”

How indeed, we might further ask, amid the increasing commercialization of biomedicine. Biotechnology companies have rapidly embraced CRISPR/Cas9, exploring new ways to treat patients with genetic diseases. If they find methods of safely editing human germlines for medical or enhancement aims, they would likely pressure regulators to permit their use and, as they do with drugs, heavily advertise their availability to consumers.

As Haldane observed in Daedalus, biological innovations initially regarded as repugnant tend eventually to become commonplace. Just as it occurred with artificial insemination, so it may happen in the age of biocapitalism with human germline editing.

Daniel J. Kevles, a former professor of history at Caltech and Yale University, is an interdisciplinary fellow at the New York University School of Law. A longer version of this article was published in Politico.

Interrogating Equity: A Disability Justice Approach to Genetic Engineering

My approach to human genetic engineering draws on 10 years of research on the social impact and meaning of emerging biotechnologies, in particular regenerative medicine and genomics, in which I have examined the relationship between innovation and equity as it connects to socioeconomic class, gender, race and ethnicity, citizenship, and disability. In what follows, I will focus primarily on disability with the understanding that these forms of social stratification, and their intersection with science and technology, are inextricably connected. With that, my intervention is twofold.

First, I would like to highlight that the distinction commonly made between genetic therapy and enhancement is not at all straightforward or stable. The bright line we may wish to draw between laudable and questionable uses of gene editing techniques is more porous than we realize. Many practices that were optional yesterday are medicalized today. Likewise, traits and behaviors that we may regard as “enhancement” today may very well find a therapeutic justification tomorrow. As the disability studies scholar Tom Shakespeare commented, “To fix a genetic variation that causes a rare disease may seem an obvious act of beneficence. But such intervention assumes that there is robust consensus about the boundaries between normal variation and disability.” Indeed, there is not, even though that distinction has become ubiquitous in reporting on gene editing.

The second point is this: Questions of equity and justice as they relate to human gene editing and related fields should not be mistaken as a kind of “special interest,” simply another angle from which to approach these topics, or even solely a “problem” to be overcome. Rather, the work of interrogating equity serves as a vital framework for democratizing science more broadly, because of the way it causes us to wrestle with some of the foundational assumptions of biotechnology, to the extent that we take up the challenge. I will briefly elaborate on these two points below, but first some background on the empirical basis of my comments.

In 2005, I began researching the passage and implementation of California’s Stem Cell Research and Cures Initiative. Proposition 71, as it was commonly known, passed in November 2004 and became the largest single source of stem cell funding in the world, authorizing the sale of $3 billion in state bonds to be managed by a new stem cell agency and governed by the Independent Citizens’ Oversight Committee. This unprecedented state investment is protected by a new constitutional “right to research” amendment that requires a 70% legislative supermajority to modify, and it is this context of a political right to scientific inquiry that I used as a window to analyze the relationship between innovation and equity more broadly. I conducted a two-year, mixed-method study of the initiative; through a formal affiliation with the state agency as part of its first cohort of training fellows, I interviewed key proponents and opponents of the initiative, as well as people affected by conditions that could potentially be treated by stem cell therapies. I also produced a mixed archive of documents and media that allowed me to analyze the contours of social inclusion and exclusion.

One of my observations throughout this process was that, to the extent that nonscientists were involved, a particular subset of patient advocates were positioned as the “default public” to whom the new state apparatus was most accountable. And although patient advocates hold a wide variety of perspectives on these issues, those who were most vocal in the California context framed their demands in terms of medical consumer rights, or what scholars have dubbed an “upwardly tilted public agenda” that appeals to middle-class supporters. Such advocacy is unlikely to represent the vast majority of disabled people, for whom dismantling policies and prejudices that cast them as second class is often more vital than access to “miracle cures.” The fact is that innovation and inequity too often go hand-in-hand. A substantial body of social science research shows that as we develop the capacity to control disease and death, the benefits go disproportionately to those who already monopolize resources. So we either decide to prioritize issues of equity and justice early and often, or we ensure a world in which the health and longevity of some are predicated on the disposability of others.

To fully “interrogate equity,” we must foster deliberation that moves beyond questions of access to treatment, however important, and think very seriously about the design of research—who does it and with what guiding questions and assumptions—because how research is framed is never neutral, universal, or inevitable. Gene editing techniques are seeded with values and interests—economic as well as social—and without careful examination, they will easily reproduce existing hierarchies, including assumptions about which lives are worth living and which are worth “editing” out of existence.

In the words of geneticist James Watson, “From this perspective seeing the bright side of being handicapped is like praising the virtues of extreme poverty. To be sure there are many individuals who rise out of its inherently degrading states. But we perhaps most realistically should see it as the major origin of asocial behavior.” This statement reflects the default setting of much biotechnology—a benevolent medical missionary ethos that says essentially: “We know what you need better than you do.” For this reason, it is crucial that we take the disability justice refrain “Nothing About Us, Without Us” seriously, noting that there is substantial stratification among disabled people. And in the same way we do not expect scientists from a single field to address all the technical complexity associated with gene editing, surely we need to be equally attentive to social complexity, so that white middle-class patient advocates do not continue to serve as the default public to whom science and technology are accountable.

These were among the issues discussed at the National Convening on Disability Rights and Genetic Technologies, where participants noted that, of course, “Some people with disabilities eagerly await gene therapies. But many people are concerned that the increasing use of genetic technologies in this context reflects and reinforces societal assumptions that disability is always harmful and should be prevented.” The concern here is that people with disabilities would be less valued at a societal level as genetic technologies become more common, especially in the absence of public education and media campaigns on disability and genetics. In a similar vein, commenting on the 2015 International Summit on Human Gene Editing, biochemist and disability scholar Gregor Wolbring explained: “The disability-rights community has a history of disagreement with scientific and clinical experts over their perception of people with disabilities. This is summarized as ableism, a view that disability is an abnormality instead of a feature of human diversity. It can lead to flawed ‘solutions’ and disempower those affected.”

So then, how do we reflect carefully on ableist norms that are often embedded in genetic technologies? I will briefly flag five ways we routinely constrict what counts as relevant and meaningful to scientific innovation.

The first is an ahistorical fallacy, which is the tendency to project forward in time without the temporal corollary—a careful reflection on historical precedents and processes. Too often the contours of our thinking mirror the hyperbolic rhetoric of science—“breakthrough,” “cutting edge,” “breathtaking,” and “miraculous”—leading us to overlook continuities as we train our attention on all that appears novel. My observation at a number of meetings such as this Summit is that those seeking to dismiss the need to interrogate equity do so by assuming a hard break between past harms and future possibilities.

The second is a legalistic fallacy, which assumes that reforming policies and laws is sufficient to shape the context of science for the greater good. The passage of the Genetic Information Nondiscrimination Act, for example, was necessary but not sufficient to ensure that genetic predisposition to illness will not result in employer or insurance bias. That is, legal change must go hand-in-hand with public engagement and deliberation well beyond the staging of a single summit.

The third way we routinely constrict our ethical imagination is an informed fallacy, which presumes that standard approaches to informed consent are sufficient in arenas characterized by so much scientific and medical uncertainty. The best that researchers can really promise is a partially informed consent, so we urgently need to re-think and re-invest in technologies of trust and reciprocity that address the many uncertainties involved.

The fourth is a fixed fallacy, which is the tendency to assume that scientific harms will be enacted in the present in the same way they were in the past, rather than mutating with the times. This fallacy has us look for examples of state-sponsored eugenics, for instance, overlooking the way that market logic puts responsibility for “racial fitness” in the hands of the consumer. In this way, the fixed fallacy serves as a counterweight to the ahistorical fallacy, alerting us to the mercurial and often “liberal” context in which individual choices reinforce oppressive hierarchies.

The fifth and final way we may inadvertently constrict our ethical imagination with respect to genetic engineering is the euphemistic fallacy, which is the tendency to adopt language that is already seeded with a particular ethical perspective on the techniques in question. The word “editing” itself sounds benign and even beneficial. For those struggling against the many forms of stigma and marginalization that grow out of ableist norms, however, editing may feel more akin to being pushed through a shredding machine.

In moving forward, then, there are many ways to expand our scientific and ethical imagination. First, we need to remain watchful of how safeguarding “medical consumer freedom” displaces many other concerns. It is not coincidental that this notion of medical choice goes hand-in-hand with competitive chants of winning a global scientific race. As renowned legal scholar Patricia Williams noted with respect to CRISPR: “What’s going on now is also a rat race to beat out others in the charge to the patent office. Hence, much of this has an urgency to its framing that exploits our anxiety about mortality itself. Hurry up, or you’ll die of an ugly disease! And do it so that ‘we’ win the race—for everything’s a race. A race against time. A race to file patents. A race to market. A race to better babies, better boobs. There is never enough glory or gain, there is always the moving goal post.” The rhetoric of urgency, in other words, is not neutral or inherently good.

An expansive approach to genetic technologies, one that avoids the many fallacious constrictions I outlined earlier in this article, is one that includes disabled people “at the table and not just on the table of the life sciences.” The insights and expertise of those who have been harmed and exploited in the name of progress offer us a more rigorous foundation by which to democratize science than the current model in which citizens are imagined to be “We, the patients” waiting for the fruits of science to ripen. To begin this shift, we must become just as inventive about addressing social complexity as we are about biological complexity. If our bodies can regenerate, let us not imagine our body politic as so utterly fixed.

The Legal and Regulatory Context for Human Gene Editing

The potential use of human gene editing is stimulating discussions and responses in every country. I will attempt to provide an overview of legal and regulatory initiatives around the globe. But I should note that when we talk about law, regulation, and biotechnology, we are talking about more than government. We are really talking about an ecosystem made up of government, the public, and private industry, which produces innovative products based on the basic science and applied research coming out of our universities.

In this ecosystem, many legal and policy issues combine to affect whether biotechnology is promoted or hindered in any particular country. These range from intellectual property rights, reflected in patent policy, to international trade laws, which will have a huge effect on whether new products can cross borders easily and under what conditions. The regulatory framework will determine the speed at which biotechnology moves from laboratory to development to marketed product.

Consumer demand will also be a profoundly important factor in determining which products are developed, because many discoveries do not lead to something that the public wants or needs, or knows it wants and needs. Demand will in turn be affected by variables such as stigma and cultural attitudes.

Last, of course, but certainly not least, is public research and investment. All of these factors combine into a vision of how a particular country moves, or does not move, biotechnology forward. The categories proposed by other scholars range from promotional, in which a country actively pushes the innovation; to a more neutral stance, in which innovation simply proceeds, or not, with as little government direction as possible; to precautionary; to an absolutely prohibitive system that either defunds the technology entirely or even criminalizes it.

It is worth keeping in mind that within a country, one can have very different attitudes about different aspects of biotechnology. For example, the United States has a fairly permissive approach to biotechnology applied to genetically engineered animals and plants in the agricultural sector, whereas it has a much more cautious approach when it comes to the use of biotechnology in the context of human clinical care and therapies. There does not have to be a single approach to biotechnology across all application areas. There can be differences among countries and even within a country.

One can also look at how different areas of policy can be tied to one or another of these visions of an overall biotechnology direction. For example, strong patent protection can be viewed as promotional because it gives industry the greatest possible financial incentive to pursue particular application areas. However, from the basic science and research community point of view, strong patent protection can sometimes be perceived as slowing the ability to collaborate or take advantage of one another’s work.

In the area of biosafety, we see more case-by-case evaluation of biotechnology products, where everything really begins to hinge simply on the presumption about risk. One can take a precautionary approach that presumes it is dangerous until it is proven safe, or a permissive approach that presumes it is safe until it is proven dangerous. Since it is often impossible to prove either danger or safety, where that presumption falls will often be more determinative than anything else in deciding how quickly technologies move from the basic science laboratory to clinical research to application.

Finally, in the area of public information, there is a very lively debate going on, particularly in the United States, about the labeling of foods that have some component that involves modern biotechnology. For example, now that the Food and Drug Administration (FDA) has approved the sale of a genetically modified farmed salmon, there is a debate about whether that salmon has to be identified for consumers.

If we have systems that carefully distinguish between those things that are the products of modern biotechnology and those that aren’t, we could be setting ourselves up for a more precautionary regulatory approach because it will tie into public attitudes that are often based on concern about either the corporate influence or the actual underlying science. On the other hand, if regulation is mandated only when there is evidence of a higher level of risk, products will reach the market more quickly, reflecting a more promotional stance.

To implement any one of these approaches, we have a variety of mechanisms that range from the least to the most enforceable. Public consultation is the least enforceable approach, and there is a spectrum of regulatory and legislative measures that can strengthen the level of control.

In the area of public consultation, we have numerous examples from around the world. In the United States, the National Environmental Policy Act is unusual among environmental laws because rather than telling individuals or companies what they can and cannot do, it simply provides that when the government makes a particular decision, it must be subjected to a higher degree of public scrutiny than is typical. The catchword for this approach is that “sunlight is the best disinfectant.” By incorporating public comment, it creates political pressure that can drive decisions in one way or another, and it allows for some interplay between government expertise and public consultation. We see other examples of it in the approval process for products such as engineered salmon, which required a number of public hearings.

Canada, when it looked at assisted reproduction, formed the Royal Commission on New Reproductive Technologies, which held hearings on the topic across the country. In the European Union (EU), genetically engineered foods, or GMOs as they are usually referred to there, are of special concern. There is actually an EU directive requiring a degree of public access to information whenever a product potentially affects biodiversity or other environmental elements.

Public consultation is considered an alternative to a centralized directive form of governance. One simply creates the situation in which the public can, through its own decentralized processes, exert pressure on government or on industry and thereby alter the direction or the speed of biotechnology innovation.

Next in this hierarchy of enforceability comes voluntary self-regulation. The 1975 Asilomar conference on recombinant DNA technology was one of the more notable examples of voluntary self-regulation by the scientific community when it recognized that there were certain risks that needed to be investigated before it pushed forward at full speed. The research community voluntarily imposed on itself moratoria on certain applications and implemented a series of precautionary measures having to do with containment of possibly dangerous materials. A more recent example is the set of guidelines for human embryonic stem cell research, which were developed by the U.S. National Academies and the International Society for Stem Cell Research.

What is interesting about these instances of self-regulation is that, unlike government-imposed rules, they were truly self-imposed and seriously constraining in many ways. They often called for prohibiting payment for certain materials and services in ways that limited the ability of the scientific community to move as quickly as it might want. For example, the stem cell guidelines limited the use of chimeras and established strict rules on the distribution of the gametes and embryos needed for research.

The approach was a success in the sense that it forestalled what might have been truly onerous government action at the state or federal levels, and it demonstrated that self-regulation could be flexible and nuanced without sacrificing reliability. The self-regulatory approach has also been used in the case of “gain of function” research, a very awkward name for research that increases the pathogenicity, transmissibility, or resistance to countermeasures of known pathogens.

Interestingly, these kinds of voluntary self-regulatory activities often lead directly into some government adoption, by proxy, of much of the content of the self-imposed rules. For example, in the gain-of-function area, some of the self-imposed rules led to a National Academies report, which then led, in turn, to the creation of the National Science Advisory Board for Biosecurity, which collaborates with its counterparts around the world to manage situations where there is fear that publishing key data will facilitate the transformation of useful biotechnology into bioterrorism.

There are government guidelines in other areas as well. These provisions technically are not enforceable, and yet they are very strongly persuasive because complying with them creates what essentially is a safe haven for companies. They know that if they stay within these guidelines, they are not going to run afoul of some actual regulation or law. These guidelines also create strong social norms.

At the international level, there is the Council for International Organizations of Medical Sciences (CIOMS), which is very influential in creating global standards for research on human subjects. Its guidelines refer back specifically to the Nuremberg Code and can be more restrictive than any particular national set of rules.

That does not mean that national laws will necessarily follow, but it establishes a norm from which nations feel free to deviate only when they can justify that doing so is necessary to achieve some public benefit. Therefore, CIOMS guidance becomes extremely influential, even if not enforceable.

At the far end of the spectrum, of course, we have regulation and legislation. For example, many nations have laws that specifically ban human cloning, although the United States is not one of them. That is not to say that human cloning actually happens in the United States; it is just that no U.S. legislation explicitly bans it. The U.S. regulatory system could, in theory, approve it, but it has never indicated any willingness to do so. Effectively, it is impossible to clone a human legally in the United States, even though there is no formal ban.

We should keep in mind that legislation has the advantage of being more politically credible, particularly in more or less functioning democracies, because it is seen as a product of elected representatives. On the other hand, legislation is extremely rigid and difficult to change. Once it is in place, it can be impossible to remove it, and it is often resistant to nuance. Therefore, it can be a very blunt instrument.

Regulation—that is, the detailed administrative rules adopted pursuant to legislative direction and authority—has the ability to be much more responsive and detailed, and is influenced to a greater extent by expert information. Yet, it also begins to become somewhat more divorced from public sentiment and begins to move into the world of the administrative state where there is rule by expert, which has its own challenges for democratic systems.

Focus on human gene therapy

Looking specifically at regulation of human germline modification, a 2014 survey of 39 countries by Motoko Araki and Tetsuya Ishii found a variety of regulatory approaches. Many European countries legally prohibit any intervention in the germline. Other countries have advisory guidelines. The United States has a complicated regulatory scheme that would make it very difficult to perform any germline modification. There are also funding restrictions on embryo research that might have a very strong effect on the underlying basic science needed to even get to the point of regulatory approval. And many countries have simply not considered the possibility.

There are international instruments that have been written at various levels to address aspects of genetics. For example, the Council of Europe’s Oviedo Convention says that predictive genetic tests should be used only for medical purposes. It specifically calls for a prohibition on germline genetic engineering, that is, on changing the genetic makeup of later generations. It builds on earlier European conventions.

But like many international instruments, it is not ratified by every member country and, even when ratified, has not necessarily been implemented with concrete legislation. It has great normative value and can occasionally have enforcement-level value, but it is often lacking in the latter.

In the United States, gene therapy is handled in a regulatory system that treats it as a biological drug or a device, depending on its mode of operation. It comes under the comprehensive regulation of the FDA and under multiple laws focusing on infection control, efficacy, and safety.

The United States also seeks guidance from advisory bodies such as the Recombinant DNA Advisory Committee and the local research subjects review bodies that help to make sure that human clinical trials are managed in a way that agrees with the country’s norms and regulations.

But what is perhaps distinctive about the United States is that although it has very strong controls in the pre-market stage of these technologies, once a drug, device, or biologic is on the market, the control becomes much weaker. That is, the United States regulates the products, but not the physicians who actually use those products. Physicians have the discretion to take a product that was approved for one purpose and use it for a different purpose, population, or dosage. There are some post-market mechanisms to track the quality of this work and to dial it back, but they are not as strong as in other countries.

Gene therapy in South Korea has a pathway very similar to the one in the United States. Interestingly, South Korea has come to have a focus on innovation, with expanded access to investigational drugs. It is also developing a system of conditional approval, which would allow for some use of a product prior to the accumulation of the level of evidence that is required in systems such as that in the United States.

Again, there are different versions of this. Even in the United States, regulators sometimes accept evidence from surrogate markers of effectiveness, which allows for a faster path to the market. Many other countries are also considering adopting some form of conditional approval.

The United Kingdom’s (U.K.) system is a little different because not only is it operating within the context of the EU and its directives, but it has its own very strong pre-market review process. In addition, it has very strong post-market regulation of any procedures involving embryos or human fertilization. Thus, U.K. regulations cover not just the product, but also where the product can be used and by whom.

The EU has also added special provisions for advanced therapy medicinal products. Gene therapy is almost certainly going to be among them, so that there is an extra layer of EU review for quality control at a centralized level.

Japan has a regulatory pathway that tries to identify prospectively those things that are going to be high, medium, or low risk, and to regulate them accordingly. The United States follows a similar process in its regulation of medical devices.

But for drug regulation, the United States treats everything from the beginning as equally dangerous and runs every proposed drug through the same paces of testing for safety and efficacy. By contrast, in Japan, one will see an initial determination about the level of risk that is likely to be present for each proposed drug and the degree of stringency that the regulatory process must apply as a result.

Japan also has recently added a conditional approval pathway specifically for regenerative medicine and gene therapy products. It will be very interesting to see how this operates. It is still new, so the experience is limited.

There is certainly some concern that if new products are put into use too early in controversial fields such as embryonic stem cell research or gene therapy, a single high-profile failure might set back the entire field. In the United States, the death in 1999 of Jesse Gelsinger in a gene therapy trial at the University of Pennsylvania set back the field by years.

One of the challenges with the conditional approval pathway is to balance the desire to move forward as quickly as possible against the need to avoid the kinds of adverse outcomes that not only injure individuals, but could slow progress so significantly that many people who might have benefited are denied the technology.

Singapore has a risk-based approach similar to Japan’s. What is interesting in Singapore is that it actually tries to figure out what would be a high- versus low-risk intervention in the area of cell therapy. The variables that are used include whether the manipulation is substantial or minimal, whether the intended use is homologous or non-homologous, and whether it will be combined with a drug, a device, or another biologic.

The only consideration one might add is autologous versus non-autologous use. In Singapore, these distinctions are used to classify the level of risk. In the United States, they are used to determine whether the FDA has jurisdiction to regulate a particular product.

Finally, Brazil provides an example of regulation and governance by accretion. It recently approved laws related specifically to genetically engineered foods, stem cell research, and cell therapy, but they are layered on top of earlier, more general rules: constitutional prohibitions on the sale of any kind of human tissue and 1996 laws on the patenting of human biological materials. Together they are creating confusion, and the result is paralysis while people try to figure out how the laws will interact. It is a cautionary tale about proceeding with legislation against the backdrop of older decisions that were made with very different scenarios in mind.

Product or process?

There is a fundamental divide in the world about how we regulate biotechnology that goes beyond the categories of promotional, permissive, or prohibitive. It is whether we think of biotechnology as a thing unto itself, or whether we think of it simply as one more tool that goes into making various products.

If one regulates the technology, one regulates everything about the technology in a comprehensive way. An example is the EU’s community strategy, which takes a global approach to the technology and makes it easier for the public to understand the so-called “laws on biotechnology.” One can focus on key aspects of the science that raise key questions about the effects of a particular kind of innovation. It also makes it possible to have consistent and overarching approaches to questions of great philosophical significance, such as what we mean when we say “human dignity” or “genetic heritage of mankind.”

But this approach has the problem of needing much more specific legislation to address individual products. The contrasting system regulates the product and not the technology, as is the case in the United States. The rationale is that the technology itself is neither inherently dangerous nor safe; it is dangerous in some contexts and safe in others. For some products, its effects are easy to predict; for others, much less so. Some products may have environmental impacts, whereas for others the impact will be confined to a single individual or a single animal.

Regulating by product gives one the advantage of being much more specific about the degree of risk that is feared or anticipated and the degree of caution needed, as well as being able to draw on the mature expertise embedded in the regulatory pathways for drugs, foods, and pesticides, and on the people who have been implementing those pathways for years.

The trouble is that it can be confusing to the public. If someone asks what the “law on biotechnology” is, the answer is that there are 19 different laws that cover drugs, devices, agricultural products, livestock, and so on. To many people, this sounds as if the country is not regulating biotechnology, and it creates the possibility of unintended or even unnoticed gaps among these laws, or conflicts among them.

Whenever we talk about this, whether in human or non-human applications, but particularly in the human, it is important to think about where in the R&D process we want to exercise control. Pre-market control is truly important to avoid the devastating adverse events that can occur if we move too quickly. But if pre-market control is too strong, not only does it slow the technology, but at a business level it creates a barrier to market entry for smaller players. Mature companies with large staffs know how to maneuver through the regulatory system. A small company with very little capital and a high burn rate is not necessarily going to survive long enough to get through a long and difficult pre-market process.

The AquAdvantage salmon that I mentioned earlier is made by a company that has reportedly been on the verge of bankruptcy during the 20-some years that the product was undergoing review. Another company in Canada that was trying to produce a pig that would be less environmentally damaging wound up abandoning this project, in part because that pathway was so long, slow, and expensive. There is a cost to pre-market controls that are so strong that they drive out the small, and often very creative, innovators.

One thing we have learned is that conditions on research grants, whether from government or philanthropies, can also serve as a strong regulator, but one that is much more responsive and much easier to adapt quickly to changing circumstances and changing levels of knowledge.

Finally, harmonization across national borders is crucial. If we want scientists to be able to use one another’s materials, they have to have confidence that the materials were derived and managed in a way that meets everybody’s common expectations of both ethical and biomedically safe levels of care.

We want to have uniformly high standards for research and therapy. We want to be able to reduce conflicts and redundancies in review procedures if we want the science to proceed in a way that is efficient as well as responsible. We learned this lesson with the many conflicts among jurisdictions in the area of embryonic stem cell research.

The more that we have effective systems for responsible oversight in the development and deployment of a technology, the more we can take chances. We can move a technology quickly because we have a chance to back up at the end and change course.

Innovation is not something that is in conflict with precaution. They are complementary strategies in which precaution will facilitate innovation and give us the confidence we need to support these new and risk-taking technologies.

R. Alta Charo is Warren P. Knowles Professor of Law and Bioethics at the University of Wisconsin.

Responding to CRISPR/Cas9

The prospect of influencing the course of human evolution through technological intervention has been contemplated for a long time, but usually in an abstract or theoretical way. That possibility has become an impending reality at a breathtaking pace in the past few years. Jennifer Doudna and Emmanuelle Charpentier published a paper in Science in June 2012 that demonstrated that CRISPR/Cas9 (if you must know, clustered regularly interspaced short palindromic repeats with CRISPR-associated protein 9) is a remarkably accurate and relatively easy-to-use tool for editing genes. In January 2013, Feng Zhang of the Broad Institute published a paper in Science demonstrating that CRISPR could be used to edit mammalian genes; in the same issue, George Church of Harvard published a paper demonstrating the use of CRISPR in human cells. Excitement spread quickly through the scientific community as researchers realized that this new capability opened doors to a mind-boggling array of new directions for research.

With the thrill of new possibilities came a chill of recognition that there is no guarantee that all the new uses of this technology would be benign. A group of scientists, including leaders in the field such as Jennifer Doudna and a few veterans of the 1975 Asilomar Conference, at which scientists had debated the wisdom of pursuing the possibilities opened by recombinant DNA technology, met in January 2015 to discuss the potential risks associated with this new gene-editing technology. In March 2015 they published an article in Science that asked whether it would be wise to place voluntary restrictions on the use of CRISPR/Cas9 until we had a better understanding of how it might be used. They recommended that leading thinkers in science, medicine, law, ethics, and policy come together to discuss how to proceed.

The members of this group approached a number of institutions to see who would be interested in convening this discussion. Not surprisingly, the National Academy of Sciences (NAS) and National Academy of Medicine (NAM) were among those who were approached. After a frantic round of discussions among leaders of the scientific community and a number of institutions, there was agreement that the Academies were in the best position to organize the event, and NAS president Ralph Cicerone and NAM president Victor Dzau formed an advisory group to guide the effort.

Everyone understood from the outset that this must be an international discussion, and the U.S. academies’ leaders reached out to engage their counterparts at academies in other countries. The advisory group included representatives of the Royal Society of the United Kingdom and the Chinese Academy of Sciences. The advisory group decided that two types of activities were needed. There had to be a rigorous study by an expert committee to collect as much information as possible about the technology and to develop a well-considered assessment of the risks as well as the opportunities.

In addition, the advisory group recognized that news of this technology was spreading fast and raising understandable public concern. The U.S. House of Representatives’ Science Committee organized a hearing so that members could learn from experts about the possible implications. The advisory group recognized that the public and policy makers would not want to wait a year or more for an expert committee to deliberate and then announce its conclusions. The repercussions of this technology are potentially so powerful and so widespread that it was necessary to include a much wider range of perspectives, and to do so as quickly as possible.

NAS and NAM decided to host a large meeting at their headquarters in Washington, DC. The Royal Society and Chinese Academy of Sciences agreed to cosponsor the event. The advisory committee then appointed a planning group to organize what would become the Summit on Human Gene Editing. They chose David Baltimore—Nobel laureate, Asilomar veteran, participant in the January Napa meeting, and lead author of the Science article—to chair the planning committee. Other members included scientists, physicians, and experts in law, ethics, regulation, and policy from several countries. Although gene-editing advances will have a powerful impact throughout the life sciences and will be applied to plants and animals, the advisory committee decided to focus its attention on the use of the technology with human somatic and germline cells, both because of the broad public interest in this aspect and to keep the boundaries of discussion manageable.

The committee began meeting in August 2015 to put together the Summit, which would be held December 1-3, 2015. They designed an agenda that included an overview of the science explained by the leading researchers in the world, but that devoted most of its attention to the social, legal, ethical, and policy questions that are essential to understanding how to use or limit this technology. There were speakers from about 20 countries and representatives of many of the world’s scientific academies. Roughly 75 reporters attended the meeting. Participation was open to the public, and registration quickly reached the maximum of 400 people. The entire meeting was webcast and attracted viewers from 70 countries.

The event was recorded and is available for viewing on the National Academies website. To provide a glimpse of the meeting, Issues is publishing the text of presentations by David Baltimore, Alta Charo, Daniel Kevles, and Ruha Benjamin that were made at the Summit. They provide a taste of the quality of the speakers and the remarkable range of topics and perspectives that were circulating during the Summit. On the website, one can find the text of additional presentations plus a statement from the organizing committee on what it learned during the Summit.

There was never any presumption that the Summit would resolve any of the debates. Its purpose was to illustrate the importance of the subject, the variety of voices that need to be at the table, and the need to stimulate discussions across disciplines, cultural and ethical traditions, and national boundaries. We are just at the beginning of coming to terms with a new generation of genetic technology and knowledge that will continue to advance and open new doors.

As a first step in extending the discussion, we include an article by Henry Miller, who argues that the Summit was an unnecessary impediment to the progress of the science and its ultimate use to treat human disease. No doubt there are others who will argue that scientific hubris has already exceeded the boundaries of what society can countenance, that the Summit was a ploy to enable scientists to control the discussion and the ultimate fate of the technology.

The reality is that nothing is decided yet. The study committee organized by the U.S. National Academies is hard at work; similar committees are meeting in other countries; public discussions are taking place across the globe; and we can expect to see future summits that assemble participants from around the world. In its starkest and most dramatic form, new genetic technology offers the prospect of humanity taking control of the direction of its own evolution. If that doesn’t give you something to think about, nothing will.

Why We Need a Summit on Human Gene Editing

In 1981, Matthew Meselson pointed out that the puzzle brought to light by Darwin, of what constitutes heredity, was solved in two tranches. The first began in 1900, when Mendel’s work of the last half of the nineteenth century came into the consciousness of the scientific community, and lasted until 1950 or so, when the rules of genetic inheritance had been firmly established.

We then entered a new world of molecular genetics, learning first the chemistry of the underlying molecules of inheritance. Once we knew the chemistry and the topology of the DNA molecule, we learned how to cut it and how to paste it. That resulted in the recombinant DNA revolution of the mid-1970s.

We also learned how to modify DNA in the chromosomes of experimental animals. Those methods remained cumbersome and imperfect, and extending them to human beings was initially unthinkable. Over the years, however, the unthinkable has become conceivable. Today, we sense that we are close to being able to alter human heredity. Now we must face the questions that arise. How, if at all, do we as a society want to use this capability?

Thus, we are part of a historical process that dates from Darwin and Mendel’s work in the nineteenth century. We in the scientific community are taking on a heavy responsibility for our society because we understand that we could be on the cusp of a new era in human history. Although gene editing is in its infancy today, it is likely that the pressure to use gene editing will increase with time, and the actions we take now will guide us into the future.

We should remember that there is a larger context for our deliberations. Aldous Huxley, in his book Brave New World, imagined a society built on selection of people to fill particular roles, with environmental manipulation to control the social mobility and behavior of the population. That book was written in 1932. He couldn’t have conceived of gene editing, but the warning implicit in his book is one that we should take to heart as we face the prospect of this new and powerful means to control the nature of the human population.

Thus, we are initiating a process of taking responsibility for a technology with far-ranging implications. The process of accepting this challenge began in January 2015, when concerns about the consequences of modifying human genomes prompted a small group of scientists and ethicists to convene a meeting in Napa, California. That group recognized the opportunity that genome engineering technology presented to cure genetic disease in humans. It also realized that these methods could reshape elements of the biosphere, to the benefit of the environment and human society.

Although these new technologies offer unprecedented opportunities for advancing science and treating disease, the group recognized that they might be used prematurely or in ways that might be viewed as inappropriate. Because of these concerns, those at the Napa meeting offered a number of recommendations and called for an international dialogue to further consider the attendant ethical, social, and legal implications of using germline modification techniques.

The National Academies of Sciences and Medicine agreed to convene an International Summit on Human Gene Editing and asked me to chair the planning committee. When the committee began its preparations, initial deliberations focused on defining the parameters of the discussion. We recognized that the application of gene-editing techniques is not limited to humans. Such technologies can be, and already are being, used to make genetic modifications in non-human organisms. The use of gene-editing technologies to alter plants and animals raises many ethical and societal issues that are in and of themselves worthy of careful consideration.

We decided that, to maintain focus and avoid the discussion becoming too diffuse, we needed to limit the conversation to whether and when to proceed with conscious modification of the human genome. We believed that the technical, clinical, ethical, legal, and social issues relating to the potential to make genetic changes that can be passed on to future generations were sufficiently complex to be a worthy target for a three-day meeting.

The committee was also aware that there are numerous relevant concurrent projects under way, both within the U.S. National Academies and in the larger community of stakeholders. These include two U.S. National Academies studies, one on gene drive in non-human organisms and the other on genetic modification of eggs and zygotes for the prevention of mitochondrial disease.

The planning committee believed that the key was to develop an agenda that gave voice to perspectives not represented in these other activities. The organizing committee recognized from the start that modern science is a global enterprise and that gene editing technologies are available to and are in use by researchers around the world. Furthermore, different cultures are likely to approach the question of human genome editing from different perspectives. The voices of diverse cultures should be heard.

Equally important, consideration of the path forward is not solely the responsibility of scientific researchers. The conversation must incorporate a broad range of stakeholders, including individuals from the bioethics community and social science community, along with specialists in medicine, regulatory affairs, and public policy, as well as, of course, the lay public.

The Summit should be seen as an opportunity to launch a much broader public discussion. It is part of a larger effort to inform policy makers and the public about recent advances. Although powerful new gene editing technologies, such as CRISPR-Cas9, hold great promise, they also raise concerns and present complex challenges.

We are saying that this is something to which all people should pay attention. Some might consider that to be fear mongering, but we hope that most will see it as the responsible acceptance of the National Academies’ role as expert advisers to the public.

In 1975, I had the privilege of participating in the Asilomar conference on recombinant DNA. That meeting was organized to “review scientific progress in research on recombinant DNA molecules and to discuss appropriate ways to deal with the potential biohazards of this work.”

In 1975, as today, we believed that it was prudent to consider the implications of a particular remarkable achievement in science. Then, as now, we recognized that we had a responsibility to include a broad community in our discussion. A lot has changed since 1975.

Science has become an increasingly global enterprise. The public has become ever more aware of the power of science and has seen the remarkable rate of societal change that can be brought on by the application of new science.

The public has witnessed the huge benefits of basic and medical research, but it is questioning whether these benefits bring attendant modifications of nature that require controls. The public also has become more engaged in debates about science and scientific progress. The new modes of rapid communication have provided novel platforms for these discussions.

At Asilomar, the press participated with the understanding that nothing would be written about what was said until the meeting was concluded. At the Summit, individuals sent blogs, tweets, and retweets as the discussion was taking place. The entire event was webcast around the world, and the video is available online for all to see.

This Summit incorporated many themes and many perspectives, but the overriding question was: when, if ever, will we want to use this gene-editing technology? When will it be safe to use it? When will it be therapeutically justified? And, a more difficult question, when will we be prepared to say that we are allowed to use editing for genetic enhancement?

These are deep and disturbing questions, and the Summit will not be the last word on human gene editing. Rather, we hope that our discussions will serve as a foundation for a meaningful and ongoing global dialogue.

David Baltimore is president emeritus and Robert Andrews Millikan Professor of Biology at Caltech. He chaired the planning committee for the International Summit on Human Gene Editing.