Forum – Winter 2020
Combatting tech hype
We enjoyed Jeffrey Funk’s “What’s Behind Technological Hype?” (Issues, Fall 2019). However, as authors of a recent book on financial speculation arising from the commercialization of new technology, Bubbles and Crashes: The Boom and Bust of Technological Innovation (Stanford, 2019), we take issue with a few of Funk’s interpretations.
First, Funk points in several instances to the “lack of good economic analysis” as a critical factor leading to hype. However, what’s really needed is different economic analysis. Understanding technology calls for economic analysis that engages what Robert Shiller called “narrative economics.” Traditional economic approaches are not much help when confronting the fundamental uncertainty that arises from the introduction and potential adoption of a new technology or system. To understand choices at that margin, we need to be sensitive to the sources and impacts of narratives. Narratives are a double-edged sword: carefully deployed, they can coordinate collective action and funnel resources into risky but ultimately profitable ventures, but narratives can also lead to hype, speculation, and damaging bubbles. Unfortunately, in spite of Shiller’s call to action, most economists would not recognize the study of narratives as central to the study of booms and busts, so we need more than “good” economic analysis.
Second, once we accept the intractability of uncertainty, it is not realistic to expect to be able to entirely soften the blow of failure. Indeed, failure may be good, and not in some milquetoast, learning-from-failure way. Awful, terrible, value-destroying failure is good because it signals that our local instance of late-entrepreneurial capitalism is still capable of taking big risks. The implications of this logic are far-reaching: what if the risk that we stop failing (because we stop placing big, transformational bets) is more dangerous than the cost of a little too much hype? This is a hard question to answer, partly because the costs and benefits are incommensurable and partly because they accumulate across time in messy, discontinuous ways. From our perspective, the critical issue is not minimizing failure, but maximizing the categories and numbers of people who can afford to fail. Unfortunately, recent macroeconomic developments suggest that we are doing little to redress the “Lost Einsteins” problem, thereby losing even more of the bold, if risky, ideas that we need in order to support meaningful economic experimentation.
A more critical, narrative economic analysis would focus on how much of the imagined future builds on only imagined elements of the new technological system. Knowing how much is imagined might counteract hype that glosses over these elements.
Brent Goldfarb
Associate Professor
David A. Kirsch
Associate Professor
Robert H. Smith School of Business
University of Maryland
Jeffrey Funk has written a provocative and wide-ranging treatise on the impacts of technological hype in society. Many of his observations are trenchant. The article is particularly timely in the current environment within which profitless firms such as WeWork, Uber, Lyft, Peloton, and so many others have been hyped as “technology” firms, rather than as more prosaic office space rentals, algorithmically dispatched taxis, and home athletic equipment suppliers. As my colleague John Zysman and I have shown, we are again in a technological bubble of seemingly limitless capital and paroxysms of hyperbole regarding “disruption.” It is also clear that platform firms such as Apple, Amazon, Facebook, Google, Microsoft, and numerous start-ups are investing enormous sums and hiring many PhD data scientists in a bid to build better algorithms and even “quantum computers” to transform global society. In an earlier article in Issues (“The Rise of the Platform Economy,” Spring 2016), we concluded that digital technologies were driving a transition to what we termed a “platform economy.” Three years later, we are more convinced that such a transformation is taking place—and this is a direct result of technological improvement.
What Funk very effectively points out is that the potential of so many newly developing fields of science is overstated far beyond all reason. These propaganda campaigns are often directed at naïve actors, including politicians and investors, but may fool the public as well. Scientists and firms exaggerate the potentials of their inventions because there will be no consequences if, after getting the funds, their supposed sure thing fails. This exaggeration has led federal and state governments to invest huge sums into the newest hyped technology, be it nanotechnology, superconductivity, embryonic stem cells, or, most recently, artificial intelligence. Almost invariably, these excesses lead to inefficient and, unfortunately, wasteful investments in research and development. Is the situation today worse than in the dot-com bubble? Perhaps. There is more capital than ever, and ever greater promises to disrupt this industry or that. However, technological hype is not new and has always been a way of separating the naïve from their funds. Yet sometimes, to be a successful innovator it is important to have unreasoning and blind faith—it is necessary to overcome those people who say something cannot be done. The line between fool and visionary can be fine.
Funk laments the fact that companies or inventors make hyperbolic claims about their particular technology. But companies’ overblown assertions should not be surprising. Firm founders and executives are seeking more funds to realize their vision or build their careers and thus exaggerate or even prevaricate—there is no punishment for wild exaggeration. In the interests of selling their consulting services, McKinsey and other such companies likewise pursue this strategy. For them, new “disruptive” technologies, whether ultimately disruptive or not, provide the entrée for their sales teams. Companies and consultants can also recruit the media to assist, as they too stand to profit from the hype. The interests of each of these actors are aligned for hyperbole.
Funk’s wide-ranging article provides much to ponder and is a much-needed caution about the unrestrained science and technology hype that sometimes appears to overwhelm reason. It is important to maintain healthy skepticism regarding the claims of scientists, venture capitalists, university administrators, and consultants, while also recognizing that enthusiasm is an important contributor to technological development.
Martin Kenney
Distinguished Professor of Community and Regional Development
University of California, Davis
Jeffrey Funk is right in worrying about the hype and the lack of serious innovation—but perhaps not about what’s behind it.
The main source of the hype is the massive distortion of incentives created by the financial casino that has resulted from market fundamentalism and the resistance to state action. New technologies and new products have become a financial end in themselves rather than a way to profitably provide products and services for a population capable of paying for them. Profound inequality has deprived the majority of consumers of the means to satisfy their needs, while cheap products from China have kept the consumerism of the “American Way of Life” in place as the defining pattern of consumption.
The socially useful products and the productivity increases that could come from new technologies will materialize only when government provides a set of policies that involve a clear directionality and a win-win game between business and society. Information technologies are capable of transforming most products into services, as they have been doing with films, music, and books, and as they could do with durables, from automobiles to refrigerators and furniture, by renting them as a service (and reviving maintenance) rather than selling them outright.
Other potential productivity increases involve materials and energy. If these categories were taxed, together with transport, there would be an incentive to innovate in new biomaterials, reduced material content, turning products into services, sustainable housing, and many other serious new ways of fulfilling needs, many of them produced nearer to the consumer to avoid the transport costs (and the climate consequences).
But the current tax system and the accompanying regulations were designed for the world of mass production, mass consumption, and waste, and the globalized economy has been superimposed on the old regime, making it worse.
Funk is right to worry about the hype. But the solution does not lie in improving forecasting, measures of organizational success, or the education of scientists and decision-makers, although all those things would be useful. The real solution lies in radically changing the policy context, beginning with the incentive system for finance, which facilitates a betting-casino world and taxes short-term capital gains at far less than half of what hard-earned incomes must pay. Once the playing field is leveled, serious innovation will follow and the hype will end.
Carlota Perez
Honorary Professor
Institute for Innovation and Public Purpose, University College London
Science Policy Research Unit, University of Sussex
Science, sport, and sex
In “Science, Sport, Sex, and the Case of Caster Semenya” (Issues, Fall 2019), Roger Pielke Jr. and Madeleine Pape systematically misrepresent the International Association of Athletics Federations’ position on female athletes with differences of sex development (DSDs). The IAAF is neither targeting athletes whose appearance is “insufficiently feminine” nor trying to “question and reclassify the sex of such athletes.” It does not use testosterone levels “as the basis for … sex classification.” In fact, its regulations expressly state that they are “not intended as any kind of judgement on or questioning of the sex or the gender identity of any athlete.”
On average, men have physiological advantages (including bigger and stronger muscles and bones, and more hemoglobin) that give them an insurmountable performance advantage over women. Therefore, women can excel at sport only if they compete in a separate category. The primary driver of the sex difference in sport performance is the testosterone levels that men’s testes produce post-puberty (7.7-29.4 nmol/L) compared with the levels produced by women’s ovaries and adrenal glands (0.06-1.68 nmol/L). The IAAF therefore has to address, rationally and fairly, the two categories of athletes with a female gender identity (46 XY DSD, and trans-female) who have testes producing testosterone in the normal male range, which gives them the same physiological advantages.
Biological sex is not a spectrum. In 99.98% of cases, all aspects of sex (genetic, gonadal, hormonal, anatomical) are aligned, making classification as male or female straightforward. Complete alignment is lacking in only an estimated 0.02% of births. For example, XY babies with 5-alpha reductase deficiency (5-ARD) lack the hormone responsible for normal genital development, and so are born with undescended testes and ambiguous external genitalia. Endocrinology nosology classifies them as undervirilized males, but in some countries they may be assigned a female legal sex at birth. On puberty, however, their testes start producing normal male levels of testosterone, causing the same androgenization of their bodies as XY individuals reared male. A reported 50%–60% of 5-ARD individuals reared female therefore transition upon puberty to a male gender identity, which is why current medical advice is to assign a male sex to 5-ARD babies.
Whether given a legal female sex from birth (some XY 5-ARD) or later (trans-female), such individuals have enormous physiological advantages over XX females, making competition between them unfair. The IAAF nevertheless permits 46 XY DSD and trans-female athletes to compete in the female category, provided they reduce their testosterone levels below 5 nmol/L, whether by surgery or hormone therapy (the recognized standard of care in such cases). This is because a woman with ovaries, even if she has the hormonal disorder polycystic ovary syndrome, will not have testosterone above 5 nmol/L unless she is doping or has a serious medical disorder. The data gathered by the researcher Richard V. Clark confirm this. The fact that 5-ARD individuals have testosterone below 5 nmol/L before puberty is entirely expected, and Pielke and Pape’s attempt to discredit Clark’s paper is irrelevant.
On the sports field, biological sex must trump gender identity. References by Pielke and Pape to “biological sex as assigned and maintained at birth” serve only to confuse the two and distract from the facts that have to be addressed if competition in the female category is to remain fair.
Stéphane Bermon
Director, Health and Science Department
World Athletics (formerly known as the International Association of Athletics Federations)
Jonathan Taylor
Partner, Bird & Bird
(Both authors appeared for the IAAF in Court of Arbitration for Sport cases: C. Semenya and ASA vs IAAF, and D. Chand vs AFI and IAAF)
Richard Clark and his coauthors have already responded to Roger Pielke Jr. and Madeleine Pape’s criticism of their paper elsewhere. Here I want to address Pielke and Pape’s decision to play the race card as part of their disinformation campaign against the International Association of Athletics Federations’ regulation regarding differences of sex development (DSDs). Their campaign is based on two related arguments, neither of which has legs. The first is that the regulation discriminates against women of color based on their appearance. The second is that it mostly targets women of color.
Stripped of its misleading vocabulary and veneer, the argument that the regulation discriminates against women of color based on their appearance goes like this: athletes with testes, male testosterone levels, and male secondary sex characteristics look male only to those of us who privilege white female features and have a racist view of the black female body. Racism exists and should always be exposed for the scourge that it is, but calling someone a racist because they can distinguish an androgenized body from a nonandrogenized one is wrong. We shouldn’t have to spill ink making the obvious point that discriminating on the basis of race and distinguishing on the basis of sex aren’t the same thing. Racism and sexism sometimes intersect, and this results in special burdens for black women, but that’s not what’s going on here.
Male secondary sex characteristics develop in early adolescence when testes—but not ovaries—begin to produce increasing amounts of testosterone that, in turn, cause the androgen-sensitive human body to go through male rather than female puberty.
Caster Semenya’s own experience is illustrative. When she was growing up and competing in local competitions, opposing teams would contest and then test her sex in the crudest sense of that term. Her principal has said that he didn’t realize she was a girl for most of the years she attended his school. Her father has said that she sounds like a man, and an early coach tended to use the male pronoun when referring to her in conversation. She has been described by African people in African publications as having a “masculine phenotype” and “man-like physical features.” Perhaps most telling, the first time they met in the girls’ locker room, the woman who eventually became Semenya’s wife thought she was a boy and questioned her presence in that space. The suggestion that these reactions are anything but universal is both offensive and destructive.
It’s also wrong to suggest that the IAAF’s DSD regulation mostly targets women of color from the Global South. As the African National Congress itself has conceded, albeit incompletely, it affects athletes from “East Europe, Asia and the African continent.” These are the facts: The regulated conditions, including 5-alpha reductase deficiency, exist in all populations. Sometimes differences that have a genetic basis cluster in particular geographical areas. And because different cultures place different value on the appearance of boys’ external genitals, when a male child presents ambiguously at birth, preferences for their legal sex assignment may vary based on those norms. Finally, because individuals in different regions have different access to medicine and medicine itself has a cultural basis, DSDs are sometimes addressed surgically in infancy and sometimes children are left to grow up naturally. But neither the conditions themselves nor the approaches to their management are racially bound.
The Olympic Movement has had an eligibility rule for the women’s category for decades. Across those decades, affected athletes have come from different countries and continents, including from Western Europe and the Americas. The rule was not put in place for Semenya.
It’s important not to be blinded to the facts by politics and incendiary rhetoric. In the sport of track and field, black and brown women from around the world and across the tonal spectrum are not only ubiquitous, they are properly celebrated as beautiful, as strong, as winners. As is often the case in our sport, many of these women grew up in economically challenged circumstances, and they beat many who were more privileged. The IAAF’s DSD regulation hasn’t changed this. Since it’s been in place, as expected, all 12 of the medals in the affected events have gone to women of color. These athletes are also deserving of consideration.
I agree with Pielke and Pape that Caster Semenya is perfect just the way she is. This doesn’t change the fact that she and others with 46 XY DSD have the primary and secondary sex characteristics the women’s category was designed specifically to exclude.
Doriane Coleman
Professor of Law
Duke Law School
I was both pleased and disturbed to see the article by Roger Pielke Jr. and Madeleine Pape. I was pleased because it adds further argument to the worldwide campaign to have the most recent version of the IAAF’s sex test, the “Eligibility Regulations for Female Classification” (aka the “Caster Semenya Rules”), abolished on the grounds that they lack scientific merit, violate human rights and medical ethics, and unfairly target women from the Global South.
But I was disturbed because the article further exposes the extent to which the IAAF architects of the Caster Semenya Rules manipulated the categories of analysis to justify the regulations. As an Olympian in athletics myself, I have always hoped that my sport would be governed in the spirit of fairness and best practice in policy-making, drawing on impartial, widely vetted science. But as Pielke and Pape make clear, the drivers of the Caster Semenya Rules chose categories that would give them the nonoverlapping distribution of testosterone that they used to construct the regulations, presenting “what (they felt) ought to be (as) what is.” Pielke and Pape offer a much more realistic and inclusive approach to classification for the purposes of organizing sports competition, but there is nothing in the IAAF’s behavior to suggest that it would ever be open to such an approach. It has been that way for more than 50 years: leading geneticists and ethicists have urged the IAAF to abolish the test, only to be ignored.
It’s hard to know how to right this wrong. In the decision that upheld the IAAF regulations, the Court of Arbitration for Sport (which serves as the highest appeal body in international sport) said that it did not have to consider human rights, medical ethics, or the quality of the science. It is impossible to argue today that sport is not in the public realm or that it should be free from the obligation to enforce the protections that exist in other spheres of society, especially human rights. In every country in the world, sport is enabled by public funding and publicly created facilities, and the best athletes and teams are held up as exemplars of universities, cities, and even entire countries. If the international sports bodies do not voluntarily accept responsibility for human rights and accountable policies, I am confident that a strategy to bring them under such standards will soon be found.
In the meantime, Pielke and Pape do us all a favor by exposing the manipulation of data that the IAAF employs to justify the persecution of Caster Semenya and other outstanding athletes. Their article is one more step in the campaign for the complete abolition of the sex test.
Bruce Kidd
Professor, Faculty of Kinesiology and Physical Education
University of Toronto
In their attempt to distinguish transgender athletes from those labeled as intersex/DSD, Pielke and Pape err in suggesting that the appearance of external genitalia would be an acceptable basis for determining sex for athletic purposes.
Although gender identity should be used to categorize people in social situations, in sport the biology that matters for performance, including hormone exposure and its consequences, should also be considered. To date, testosterone level is the most well-established correlate of athletic performance and the best basis for distinguishing individuals in sex-based categories. In the Semenya case, the Court of Arbitration for Sport panel of judges all agreed that “on the basis of the scientific evidence presented by the parties, the Panel unanimously finds that endogenous testosterone is the primary driver of the sex difference in sports performance between males and females.” Unless and until data suggest a better method, we advocate for continued use of testosterone to separate male athletes from female ones.
Joanna Harper
Loughborough University
Joshua D. Safer
Mount Sinai Hospital, New York
(Harper was a witness for the IAAF at the Semenya and Chand trials, and Safer has served as an adviser to the IAAF)
Mindfulness muddle
In “Mindfulness Inc.” (Issues, Fall 2019), Matthew C. Nisbet presents an important critique of the state of popular mindfulness programs. Building on my book The Mindful Elite (2019) and Ron Purser’s book McMindfulness (2019), he argues that teaching people to cope with stress and other problems through introspective meditation depoliticizes and privatizes stressors. Instead, a “social mindfulness,” as Purser terms it, is needed: this would not only hone individual self-reflection but also channel attention back to direct action for structural social reform, holding institutions more accountable for the pressures they place upon individuals. I agree with all these points of Nisbet’s argument.
It is important, however, to bring attention to the process of how the field of mindfulness developed. As human beings, leaders of the mindfulness movement were not omniscient, and I do not want to over-rationalize their decision-making. In popularizing mindfulness, mindful leaders sought first to bring meditation into mainstream society. They correctly surmised that by introducing it into powerful organizations, where they knew people who might be sympathetic to meditation, they would have the most impact. This is an effective strategy many people in their (affluent) shoes would have taken.
However, as mainly white, highly educated members of the middle and upper classes, these mindfulness advocates had some blind spots. Their affluent networks aided their expedient—and successful—promotion of mindfulness, but handicapped them in other respects. Mindful leaders seemed unaware of how their manifold minor alterations to make meditation appealing in new institutions increasingly came to support the structures and cultures of the organizations they inhabited. Over time, their initial commitments to reform society more broadly seemed to fall by the wayside.
Staggering inequality and major cultural fissures pervade not only the United States but the globe. These fissures also permeate the mindfulness community. The movement is not centralized or well regulated. As a result, a wide variety of mindfulness programs has proliferated. Of these, programs in business tend to be the most instrumentally inclined—seeking productivity and career advancement for practitioners—thereby deviating from mindfulness’s Buddhist roots. Some mindfulness programs in nonprofits, education, health care, religious organizations, and other sectors maintain more of their Buddhist ethical roots.
Programs also vary widely within sectors. Some mindfulness leaders are working to address the critiques leveled against them, to diversify the movement, face structural inequalities, and enact social reform. I hope they succeed in their efforts to advance social mindfulness.
Jaime Kucinskas
Department of Sociology
Hamilton College
Matthew Nisbet has provided an accurate description of the lucrative mindfulness industry, illustrating how market forces have co-opted mindfulness to further a neoliberal agenda. Market data forecast that the industry will grow to $2.08 billion by 2022. Clearly, there are huge financial interests driving this growth. The fact that mindfulness has become a fashionable commodity easily accommodated into the dictates of the marketplace should give us pause. But why should we be concerned? Isn’t this just a natural consequence of a capitalist economy that responds to the needs of Western consumers?
Contemporary mindfulness is a modern invention. Mindful merchants claim that their products are derived from Buddhism, but that is only a slick marketing move used to exploit Buddhism for its exotic cultural cachet. The former Tibetan monk Thupten Jinpa, a Buddhist scholar and frequent translator for the Dalai Lama, suggests that it would be more helpful to view contemporary mindfulness as loosely inspired by Buddhism, rather than as a secularized derivative. In fact, as Jinpa points out, contemporary mindfulness practices bear little resemblance or equivalence to Buddhist mindfulness teachings, which have always been integrated with ethical and soteriological aims. Nevertheless, clinical and therapeutic mindfulness programs have offered thousands of people modest benefits in reducing stress and anxiety and in improving mental health. These salutary outcomes from mindfulness-based interventions are laudable and beneficial, and are not the direct target of the McMindfulness critique.
So what really is at stake? As I so often get asked, “what’s the harm if mindfulness provides modest benefits to individuals?” As the French philosopher Jacques Derrida notes, when the poison is in the cure, that harm is hard to see. Despite the potential health benefits, mindfulness practices have become co-opted and instrumentalized for furthering a neoliberal agenda. For example, the popularity of corporate mindfulness programs can be explained in part by how they shift the burden of responsibility for reducing stress to individuals, despite hard evidence that workplace stressors are tied to a range of systemic and structural issues such as a lack of health insurance, job insecurities, unrealistic work demands and long hours, and lack of employee discretion and autonomy. Mindfulness practices have been retooled for productivity improvement and for muting employee dissent. Moreover, by insourcing the causes of stress to individual employees, corporations are absolved of taking responsibility for the very conditions generating the need for such therapeutic interventions. Corporate mindfulness programs are hyped as “humanistic,” cloaking the fact that they ideologically function as the latest capitalist spirituality, yoking the psyche of the worker to corporate goals.
Taking our cue from Derrida’s problematizing of pharmaceuticals as the “pharmakon,” where a drug can be both beneficial and detrimental, the “mindfulcon” is an apt term for denoting the risks that arise when mindfulness is hijacked and corrupted by commercial interests and profit-making enterprises.
Ronald E. Purser
Professor of Management, San Francisco State University
Author of McMindfulness: How Mindfulness Became the New Capitalist Spirituality (Repeater Books)
Reassessing Roundup
Geoffrey Kabat’s illuminating essay “Who’s Afraid of Roundup?” (Issues, Fall 2019) should stir the reformational zeal of all who care about how scientific causal judgments are reached and implemented. As Kabat notes, the problems with the evaluations by the International Agency for Research on Cancer (IARC) are both substantive and procedural. The IARC’s disqualification of manufacturers’ consulting experts from its decisional process has silenced important voices, while permitting consulting experts of the plaintiffs’ bar, the lawsuit industry, to populate IARC working groups. The plaintiffs’ bar may be the largest rent-seeking group in the United States, and the IARC’s asymmetrical conflicts policy has allowed plaintiffs’ lawyers and other advocacy groups to have an undue influence on its evaluations.
Kabat points to the confusing distinction between “hazard” and “risk,” a distinction further obscured by extrapolations from high-exposure animal studies to humans with low exposures. Because of human defensive mechanisms, which must be overcome to induce cancer, high-exposure animal studies may have limited relevance to lower human exposures.
IARC evaluations are often undermined by undue methodological flexibility in the observational studies relied on by its working groups. Contrary to statistical best practices, many relied-upon epidemiologic studies analyze multiple agents, at various exposure levels, against dozens of outcomes. Similarly, the animal studies used in the glyphosate case, and in many others, lack prespecified endpoints, but recklessly declare “effects” whenever different rates yield p-values below 5%, regardless of how many statistical tests are conducted.
Although the IARC once enjoyed tremendous prestige and authority, its evaluations are now seen as suspect. Given its dubious methods and the rise of rigorous systematic review methodology, we should question the current role of IARC in our thinking about carcinogenesis.
Nathan A. Schachtman
Attorney, Schachtman Law
Former Lecturer in Law, Columbia Law School
Geoffrey Kabat has done an excellent job presenting IARC’s disregard for the scientific process and its arrogance toward the scientific community. The failures in IARC have undermined the public trust in regulatory science and provided ample kerosene for activist groups to pour on a chemophobic public. IARC, as Kabat rightly illustrated, has become a threat to the scientific method. He was, however, measured in his critique of the relationship that has been established between IARC and a group of scientists working for US tort law firms.
These American regulatory scientists, who frequently work with IARC, have been implementing an alternative to the democratic risk assessment process that they call “adversarial regulation.” They feel it is more effective to ban a substance or process by suing a company into compliance or bankruptcy via relentless lawsuits and activist campaigns. Unlike the risk assessment process, this approach requires little evidence or scrutiny of data.
Kabat referred to the large number of lawsuits on Roundup as well as talc, both based solely on controversial evidence provided by IARC monographs. He indicated the important role IARC monographs play as (at times the only) evidence in these lawsuits. What needs more attention, though, is how the law firms suing these companies are combining forces with a large number of scientists who had worked on IARC monographs related to these substances (often, to IARC’s knowledge, acting as litigation consultants for these law firms during IARC Working Group meetings). In the case of benzene, IARC even convened a working group and produced a monograph solely on the basis that US tort law firms needed a direct link between non-Hodgkin’s lymphoma and benzene exposure. It appears that glyphosate was also rushed through to meet the tort lawyers’ objectives.
David Zaruk
The Risk-Monger
Geoffrey Kabat properly takes the IARC to the woodshed for multiple offenses against science, reason, and the public good in its fallacious classification of glyphosate as a possible carcinogen. But he is far too kind and understates the case.
Regulatory agencies around the world have examined whether glyphosate is carcinogenic, and without exception concluded it is not. (IARC is not a regulatory agency.) The US Environmental Protection Agency’s position is representative of this consensus, and directly relevant to the consumer safety issues that are the entire raison d’être of the litigation over Roundup. The issue is not, in fact, a matter of scientific dispute, despite the claims of the plaintiffs’ attorneys.
Kabat makes all this clear, and demonstrates that “what is at stake is society’s ability to rely on the best scientific evidence on questions that are entangled with competing interests and deeply held worldviews.”
If farmers lose the freedom to use glyphosate, which has been shown to be safer than table salt, baking soda, coffee, chocolate, and beer, they will be compelled to revert to obsolete weed control measures that are less efficient and less sustainable, increasing agriculture’s greenhouse gas emissions at a time when we can least afford it. If this comes to pass, IARC will have been instrumental in perpetrating grave harm against the common good. It is deserving not of polite correction but of frank and forthright condemnation. It’s long past time for funders to withdraw support from IARC and for the United Nations Food and Agriculture Organization to clean house.
L. Val Giddings
Senior Fellow
Information Technology and Innovation Foundation
The apparent acceptance by so many in the scientific community of the classification by the IARC Monographs Program of glyphosate as a “probable carcinogen” is evidence of the failure of the vaunted self-correction mechanism of science. It is scandalous that despite the absence of credible evidence that glyphosate is associated with cancer risk from animal, human, or laboratory studies, the IARC classification still stands almost five years after it was announced. It is incumbent upon the Monographs Program either to refute the demonstration of serious scientific flaws in the Monograph 112 Working Group deliberations on glyphosate, or to retract its erroneous classification of glyphosate. My published critiques of the IARC glyphosate deliberations accepted that IARC was evaluating hazard not risk, that the IARC criteria for identifying a carcinogenic hazard were appropriate, and that the body of studies relied on by IARC was sufficient to come to a valid conclusion regarding the potential carcinogenicity of glyphosate. The IARC criteria do not support the conclusion that glyphosate is a probable carcinogen based on an honest, rigorous, and complete synthesis of all of the evidence in studies relied upon by IARC.
In his article, Geoffrey Kabat notes procedural irregularities in addition to serious misrepresentations of scientific evidence in the IARC glyphosate report. The failure of the Monographs Program to follow strictly its published protocol is disturbing. In March 2014 an advisory group chaired by Christopher Portier added glyphosate to the list of agents given “medium priority” for evaluation in future Monographs. A call for experts for Monograph 112 in July 2014 included an agenda that mentioned only organophosphate insecticides. In an October 2014 IARC announcement of upcoming meetings, glyphosate had been added to the agenda for Monograph 112. This was after the call for experts period had closed. The Monographs Program should disclose when the addition of glyphosate to the Monograph 112 agenda was first publicly acknowledged, and explain the rush to evaluate a medium priority agent so quickly.
It is past time for the IARC Monographs Program to be held accountable for its erroneous glyphosate classification. The absence of such accountability puts the long-term scientific value of the Monographs Program at risk.
Robert E. Tarone
Retired; National Cancer Institute (28 years) and International Epidemiology Institute (14 years)
(Dr. Tarone participated as an unpaid fact witness in two California trials involving Monsanto, the maker of Roundup, in January 2020)
Geoffrey Kabat has misrepresented the difference between hazard assessment (HA) and risk assessment (RA). This is important to the Roundup story because the IARC, which conducts HA, has classified glyphosate, the active ingredient in Roundup, as a “probable carcinogen,” while several national and international regulatory agencies cited by Kabat conduct RA, and their conclusions support the safety of the herbicide. The evaluations conducted by IARC and by the regulatory agencies are designed to answer different questions, and so it is not surprising that they reach different answers.
In the Preamble to its Monograph series in which agent evaluations are published, IARC states: “A cancer ‘hazard’ is an agent capable of causing cancer, while a cancer ‘risk’ is an estimate of the probability that cancer will occur given some level of exposure to a cancer hazard.” HA tries to answer a fundamental question about whether a substance can cause cancer, explicitly separating that question from a second important question: how large is the risk of cancer from a substance in a specific exposure scenario? RA, which addresses this second question, is conducted somewhat differently by different agencies, but fundamentally requires two critical additional pieces of information that are not needed for HA: first, a judgment about how large a risk is “acceptable” or “unacceptable,” and second, an assumption about a specific exposure scenario.
There are uncertainties inherent in both HA and RA, but Kabat’s characterization of the difference is not helpful to readers wishing to understand the debates surrounding the safety of glyphosate. He attempts to trivialize the HA process by saying that “IARC considers any scientific evidence of possible carcinogenicity, no matter how difficult to interpret or how irrelevant to actual human exposure.” Setting aside the pejorative tone (according to Kabat we should use only evidence that is easy to interpret?), one can read between the lines a correct distinction—that IARC does not consider human exposure scenarios because these are not relevant to the fundamental question of whether a substance has the capacity to cause cancer.
He also fails to point out that along with those additional assumptions needed for RA come many additional opportunities for error. Because of its complexity and the number of required assumptions, it can be difficult for policy-makers or the public to understand how dependent the results may be on biases and limitations in the data. About 20 years ago, a now-famous experiment was conducted in which four different RA teams were provided with the same set of data with which to evaluate the cancer risk to children wearing pajamas impregnated with a flame-retardant chemical known as tris. Their calculations of the numbers of additional lifetime kidney cancers per million exposed children differed by three orders of magnitude—from 7 to 17,000 additional cancers, or perhaps you might say, from reassuring to not so much.
Kabat provides a long list of regulatory agencies that have declared glyphosate to be safe. I went to the public website of the first one on his list, Health Canada. Here’s what it says: “products containing glyphosate do not present unacceptable risks to human health or the environment when used according to the revised product label directions.” I do not doubt the integrity and rigor of Health Canada’s review, but I find these words only modestly reassuring: What is unacceptable risk? And what happens when the pesticide is not used as directed? IARC’s hazard assessment does not depend on assumptions about acceptable risk and about how the pesticide is used or misused. These are the kinds of uncertainties and easily hidden assumptions that probably underlay the thousandfold differences among the RA calculations in the tris experiment mentioned above.
There will always be substantial uncertainties about the safety of newly invented chemicals such as glyphosate. And there is urgent need to know whether to allow their widespread use. Under these conditions, I think it is useful for the public to know that a group of experts has evaluated all the evidence and decided that glyphosate is probably a carcinogen—a statement that acknowledges the uncertainties in the evidence. It doesn’t say what to do. But that’s someone else’s job.
David Kriebel
Professor and Director
Lowell Center for Sustainable Production
University of Massachusetts Lowell
The glyphosate affair and subsequent rulings exemplify two critical issues. One is how science is generated and communicated to the public and how that may be co-opted by single-issue advocates. The other is the role of corporate power in manipulating scientific processes, institutions, and public opinion when profits are at risk. Geoffrey Kabat focuses on the former while ignoring the latter. Both are of importance to science and society.
The glyphosate court rulings showed how Monsanto, as a company that perceived the IARC report as a threat to its profits, reacted in ways that were “anti-scientific,” much as the tobacco industry did before it. These tactics included trying to engineer journal retractions from unhelpful studies, ghostwriting scientific and journalistic articles, and coordinating efforts to criticize not just the IARC determination on glyphosate, but IARC as an institution and its role more generally.
Third-party groups joined in with these tactics, which ironically included social media and meme-ification campaigns. For example, a group called the Campaign for Accuracy in Public Health Research, which arose from the American Chemistry Council, at the time also funded by Monsanto, posted graphics on social media with statements such as “IARC’s cancer assessments are nothing but smoke and mirrors.”
Kabat is right to point out that hazard determinations should not become unscientifically weaponized on social media. However, the use of such tactics by commercial interests underlines how important it is to also ensure the integrity and independence of the institutions that assess toxicity. What’s concerning is that court proceedings also revealed evidence that Monsanto funded the American Council on Science and Health, where Kabat serves as an adviser, with a view to helping discredit IARC. There are evidence-based and commonsense reasons for disclosing relationships of this nature.
In making the moral and empirical case for science, it is more important than ever that we view risk clear-mindedly. To focus solely on single-issue advocates as barriers to progress, while ignoring the challenge that corporate conflicts of interest pose to scientific evidence, processes, and institutions, is a risk we cannot afford to take.
Nason Maani Hessari
2019–2020 Harkness Fellow in Health Care Policy and Practice
School of Public Health
Boston University
Retrofitting social science
In “Retrofitting Social Science for the Practical & Moral” (Issues, Fall 2019), Kenneth Prewitt’s case for reestablishing “a social science for the sake of society” is timely and compelling. Those of us involved in the American Political Science Association (APSA) have seen that appeals to the “usefulness of useless knowledge” now often fail to persuade state and national governmental funders, private foundations, and tuition-paying parents of the value of our research and teaching.
APSA has responded with initiatives to translate technical political science scholarship into brief, accessible presentations and to disseminate them widely, and to strengthen teaching. We have begun “Research Partnerships on Critical Issues” that link academic scholars with bipartisan experts outside academia, beginning with a project on congressional reform. We have created an Institute for Civically Engaged Research (ICER) and established an award for distinguished civically engaged scholarship in order to encourage work that does not just study political problems but tries to do something about them. The ICER curriculum gives great attention to the ethical issues of avoiding either exploitation of, or co-optation by, the community partners in such engagement. As APSA president, I also urged scholars and journal editors to strive harder to ensure that our publications connect specific research findings with the “big pictures” of politics and the world that show their substantive importance; and I called for more work that synthesizes disparate research endeavors to explore how they collectively provide guidance on real-world problems.
I also second Prewitt’s suggestion that “retrofitting” social science means devoting more time within our universities to making our work more intrinsically as well as more visibly valuable. Institutions should not hire simply with a view to raising rankings, which can homogenize the issues we address and the methods we apply to them. Instead, they should hire scholars in diverse fields who share interests in specific substantive problems, as some political science departments have begun to do. We must also get past seeing civically engaged research as second-class or suspect. Finally, institutions must embrace the challenges of improving teaching, and they must honor the colleagues who are finding ways to do so as much as they honor research excellence.
Precisely because the social sciences need to do more work of value and to communicate that value more clearly, I would not term this direction a “Fourth Purpose,” as Prewitt does. That term is unclear, and it sounds low priority. Like Prewitt, I do not have a ready alternative. “Civic Purpose” or “Public Purpose” might work better, though both have drawbacks. In truth, as Prewitt recognizes, what we are really talking about is restoring the “Original Purpose” to the social sciences.
Rogers M. Smith
Christopher H. Browne Distinguished Professor of Political Science
University of Pennsylvania
Immediate past president, American Political Science Association
Kenneth Prewitt argues that social scientists should redirect their research to respond to social problems and that universities must “firmly institutionalize” this work to “reestablish a social science for the sake of society.” This argument is compelling and justified—for reasons beyond those that Prewitt outlines. Reorienting research to be more responsive to today’s challenges would help impart the value of the social sciences, as Prewitt argues, and would also invigorate universities, whose worth is currently under attack from many sides. As I explained in “The Future of Higher Education is Social Impact,” published in the May 18, 2018, Stanford Social Innovation Review, harnessing the talents of social scientists to pursue the public good can change the perception of higher education from biased and wasteful to innovative and essential.
According to Prewitt, the largest obstacle to meeting this aim is convincing universities to accept the challenge. In my view, this obstacle can be surmounted by following five steps:
- Recognize that the current basis for evaluating the research performance of social science faculty—the extent to which their research contributes to further research—is arbitrary. There is nothing intrinsic to this criterion that would prevent it from being equally counterbalanced by another criterion: the contribution of a research product, or a line of research, to solving social problems.
- Establish new metrics that reflect this reorientation. Contributions of research to the public good could be measured by tallying the number of public forums in which evidence informs policy or practice decisions. In addition to such quantitative markers, qualitative testimonials could attest to the extent to which research has informed policy decisions. Similar to testimonials currently used to assess contributions to research, these policy testimonials would be compelling only insofar as they indicate how specific research conveyed knowledge that changed the thinking of those reflecting upon it.
- Link formal reward mechanisms to research contributions that offer social impact. Promotion guidelines could give credit to research contributions to public welfare as well as research contributions to subsequent research. Social impact could be included in merit-based compensation adjustments. And competitive awards, such as named chairs, could be available for those whose research contributes to solving social problems.
- Incentivize faculty through opportunities such as summer salary and teaching release for those who present compelling proposals for social impact research. Faculty in professional fields such as education and social work may be most naturally positioned toward working with public agencies and nonprofit organizations, but discipline-based social sciences faculty could also be encouraged to pursue these opportunities for “boundary-spanning.”
- Create structures that support social science research in the public good. Arrangements such as boundary-spanning partnerships, interdisciplinary hubs that connect with local communities, and institutes that provide centralized resources to facilitate social impact research can lower the barriers that make it difficult for faculty who wish to reorient their work.
Importantly, retrofitting social science means broadening social science, not overturning it. Doing so would not only orient social research toward addressing twenty-first century problems; it would strengthen the case for the modern university.
Adam Gamoran
President, William T. Grant Foundation
Industry and policy stakeholders increasingly acknowledge that the social sciences and humanities (SSH) contribute to innovation, economic growth, and social progress in democratic societies. This is not surprising: SSH researchers untiringly engineer conceptual resources that go on to permeate all sectors of human activity, and their expertise informs the shaping of governments’ ethical, legal, and political decisions as well as the mechanisms through which science knowledge is translated into social and economic progress.
Kenneth Prewitt leverages this insight to make a twofold point. On one hand, the better part of the reason why the usefulness and value of SSH are still not widely recognized is that we lack proper impact models—and, I would add, assessment frameworks. On the other hand, SSH impact cannot happen without scholars engaging, practically and morally. This position is one that many SSH advocates subscribe to and that I fully embrace.
Yet I am also discontented with the article. I think it does not muster the level of care these concerns need to receive in order to yield adequate solutions. Understanding the way that SSH create change, and how change is optimized through partnerships and collaboration with stakeholder groups outside of academia, requires a theory of change that is informed by a solid grasp of the history and sociology of knowledge. At the very least, it should be clear that current disciplinary and subdisciplinary labels and divisions cannot be projected back without complications. Our conceptions of the precise scope and methods of the disciplines we associate with SSH, and in particular of their boundaries, continue to be in flux, and they are anything but hermetic. But more important, we cannot naively ignore that our understanding of disciplinary boundaries unsurprisingly depends on the kinds of conceptual resources and investigative methods we consider to be adequate given our purposes as researchers. In SSH, the latter are spread out and dissonant, and they continue to evolve.
The task of offering workable models for impact and engagement in social science and humanities, in my opinion, cannot be fulfilled without a rigorous understanding of the nature and structure of social institutions. It requires careful analyses and adequate data: it needs to mobilize appropriate conceptual tools. I hope to see more work along those lines published in Issues in the future.
Sandra Lapointe
Board of Directors, Canadian Federation for the Humanities and Social Sciences
Associate Professor of Philosophy, McMaster University
Director, The Collaborative
Ethics in technology assessment
“Incorporating Ethics into Technology Assessment,” by Zach Graves and Robert Cook-Deegan (Issues, Fall 2019), very crisply illustrates a key issue facing American government today, especially Congress. Though the authors were too polite in their quasi-dismissal of Peter Thiel’s controversial hypothesis of a technology “innovation slowdown” (the evidence is anecdotal at best), even with a reduced pace of innovation, the technologies already advancing quickly—such as artificial intelligence, blockchain technology, the internet of things, quantum computing, autonomous vehicles, big data mining, hypersonic weapons, hydraulic fracturing, and gene editing technology, to name but a few—present formidable challenges both in reaping their rewards to society and in coping with their often complicated and value-challenging consequences.
As the authors predict, there are many risky ways to build “ethical analysis into technology assessment.” But the fundamental design by which the Office of Technology Assessment (OTA) accomplished this, through an increasingly effective process that evolved over decades (as the authors illustrate with several examples), avoids much of that risk by “informing the debate” rather than dictating a solution that hinges on value choices and trade-offs. The authors conclude that the original OTA model, subject to modernization after a quarter of a century, is still a very robust one, as illustrated by its replication in nations worldwide. Modernization of a restored OTA could accelerate the agency’s response time, improve the efficiency of assembling the most essential and current information for technology assessments, expand the ability to convene the best external and staff experts to participate in the agency’s work, and improve outreach and access to the agency’s services and badly needed expertise for members and staff across Capitol Hill.
Restoring an OTA is but a first step, albeit an important one. The capability envisioned for the Government Accountability Office’s Science, Technology Assessment, and Analytics team could play an important role as well, focusing performance audits on evaluating the management of the ever-growing part of the nation’s science and technology enterprise that the federal government oversees. Reinvesting in the expertise resident in the congressional committees of jurisdiction, in the capabilities of the readily accessible Congressional Research Service, and in more widely utilizing the fresh insights of the American Association for the Advancement of Science’s science and technology fellows can all play important roles in restoring Congress’s capacity for understanding and shaping the accelerating role of science and technology in virtually all aspects of modern life.
Peter D. Blair
Distinguished Senior Fellow
Schar School of Policy and Government
George Mason University
Zach Graves and Robert Cook-Deegan rightly call attention to Congress’s need for more technical expertise and for the need to incorporate ethics into the practice of technology assessment (TA)—a historically important vehicle for providing such expertise. Both needs are evident, even if the role that ethics plays in TA, and technical expertise in general, too often goes ignored in policy circles.
The challenges and opportunities posed by technological and scientific developments today are widely discussed, as is, increasingly, Congress’s lack of preparedness to grapple with them. Meanwhile, the social and ethical implications of such developments, in such diverse areas as autonomous vehicles and gene editing, provide ample fodder for popular and academic discourse. All the more surprising, then, is the relative absence of such considerations among TA advocates.
TA has always been understood in at least two distinct ways. First, TA is construed as expert advice—providing lawmakers with technically sound information to inform the policy-making process. Understood in this way, TA need not incorporate ethical considerations.
Second, TA is a means of shoring up democratic control of science and technology. TA arose at a time of increasing awareness of the social and ethical—especially environmental—implications of scientific and technological change. The creation of the Office of Technology Assessment was partly a response to the growing sense that citizens and their representatives—as opposed to executive agencies—must be better positioned to wrestle with the scientific and technological challenges and opportunities facing society. Understood in this way, TA is inherently value-laden, since its purpose is to respond to ethical and even political imperatives.
These two views of TA are not incompatible; after all, consideration of the ethics of science and technology requires expert knowledge. But, taken together, they are incompatible with a “linear” view of expertise, according to which technical knowledge is formulated in a value-free context and then transferred over into the value-laden realm of politics. The historical origins and practice of OTA belie this view.
Graves and Cook-Deegan are right to insist that ethics should play a more prominent role in discussions about—and in the practice of—TA. And they are also right to insist that incorporating ethics, or values generally, need not require abandoning TA’s classical commitment to disinterestedness and nonpartisanship. By bringing value judgments and ethical disagreements more clearly to the fore, TA may facilitate a fairer and more transparent kind of deliberation about scientific and technical problems. In so doing, technology assessment would not be importing an alien practice so much as recognizing the ineliminable role that values play in the formulation of technical expertise.
M. Anthony Mills
Director of Science Policy
R Street Institute
Graves and Cook-Deegan pin the deterioration of Congress’s ability to grapple with science and technology (S&T) issues on deep staff cuts and the elimination of the Office of Technology Assessment. This thinned-out legislative workforce, they argue, leaves Congress ill prepared to grasp the implications of innovations and trailing executive branch and private sector capacities, posing risks for the country. They call for upgrading Congress’s S&T advice and internalizing ethical inquiry into technology assessment. While more in-house expertise can undoubtedly help Congress conduct legislative matters more effectively and with greater insight, internalizing ethical inquiry into technology assessment needs more scrutiny on institutional, ethical, and political grounds.
First, is technological assessment better suited than other deliberative processes for navigating ethical considerations? Negotiating legislative priorities in a pluralistic society can be challenging, even when conducted in good faith. Congress represents a diverse society, making critical policy decisions involving multiple values, forms of knowledge, and constituencies. Public trust in Congress’s role provides the impetus for democratic pressure that it live up to constitutional aspirations. Asserting the “dysfunction” of Congress and describing the closure of OTA as a “lobotomy,” as Graves and Cook-Deegan do, promotes cynicism and devalues nontechnical kinds of knowledge that legislative processes legitimately include. Cynicism can be corrosive, fueling a vicious cycle of low expectations and poor performance that frays the fabric of governance in ways that no S&T advice can offset. The benefits of better S&T advice depend on a functional legislative body and public trust in the value of demanding that it be such.
Second, does incorporating ethics into technology assessment necessarily improve the ethics of S&T decisions? “Ethics” refers to systematic ways of sorting through good and bad or right and wrong; there are multiple coherent systems for working through such questions. For example, aiming to harm the fewest people is an ethical stance, but not the only one. Whose or which ethics should prevail? Offering what the authors call an “explicit framing of the value choices” or “fairly presenting different sides of an ethical dilemma” could be an aspect of technological assessment, but systematic methods of ethical inquiry that align with scientific assessment run the risk of incrementally leading to a kind of tailoring that becomes mutually reinforcing.
Third, is a broader remit for technology assessment feasible given tight budgets and political support for targeted S&T advice? Technology assessments of broad scope and depth require experts from many fields, and that takes additional time and managerial sophistication. Comprehensive assessments of issues before Congress won OTA staunch supporters, but critics claimed that the agency strayed from a technology focus and that reports took too long to produce. Support for new and improved S&T advice within Congress seems oriented toward short- and medium-term efforts.
Augmenting S&T capacity for Congress points to the value of “looking beyond the technical aspects,” in Graves and Cook-Deegan’s words, and consultative mechanisms that illuminate ethical dimensions of policy options seem likely to be extremely beneficial. Internalizing ethical inquiry into assessment methods, however, risks a variety of perverse institutional and ethical outcomes and conflicts with current political tolerances.
E. A. Graffy
Professor of Practice, School for the Future of Innovation in Society
Arizona State University
Graves and Cook-Deegan do an excellent job of explaining the importance of including ethical considerations within technology assessment (TA). I would like to add several points.
First, considering ethical impacts is essential but insufficient for grappling with the normative dimensions of technological innovation. Ethical analysis tends to focus on the impact of innovation on individual people or groups, while overlooking impacts on the basic structure of society. An example: an anticipatory ethical analysis of the interstate highway system might have weighed the value of speedy personal transport against the danger of fatal vehicle crashes. But would it have considered that the voracious demand for gasoline would provide a rationale for expanding US military capabilities in the Middle East, contributing to establishing the politically powerful military-industrial complex of which President Dwight Eisenhower warned in 1961? Ethical analysis is crucial, but so is analysis of technologies’ structural social impacts.
Second, TA as we have known it focuses on the social impacts of individual innovations, such as driverless cars or smartphones. But the effects of technologies on the basic texture and structure of society are typically a product of synergistic interactions among complexes of seemingly unrelated technologies. An example: face-to-face community life in the United States has been attenuated over time by the combination of air conditioners and TVs that lure people off their front stoops on hot summer days, suburbs built without sidewalks, smartphones that keep people’s eyes glued to their small screens, and so on. Studying the ethical and social impacts of individual technologies is important, but so is assessing the synergistic effects of technological complexes.
Finally, Graves and Cook-Deegan mention the value of enrolling stakeholder representatives in TA, but they overlook the importance of also involving laypeople who are not members of organized stakeholder groups. Stakeholders such as an environmentalist, a corporate chief executive, and a labor organizer will each bring a crucial value orientation to the table, but experience shows that neither individually nor collectively will they call attention to the types of structural social impacts that I have been highlighting. In contrast, methods of participatory technology assessment that have been pioneered in Europe over the past three decades—such as citizen-based consensus conferences—tend to do a better job in this regard. Such methods have now been implemented many times in the United States, including by the nongovernmental Expert and Citizen Assessment of Science and Technology (ECAST) network.
Experts and stakeholders bring along a robust base of technical knowledge and well-honed analytical capabilities. But lay participants in a well-structured TA process often add heart-and-mind human, ethical, and political-power considerations from which the experts shy away or in which they are simply inexpert.
To be fair, Graves and Cook-Deegan are considering the real-world political challenges involved in reestablishing a national TA capability. Incorporating structural social analysis within TA might (or might not) pose risks to the enterprise—but omitting such analysis guarantees that Congress will remain poorly informed about some of technologies’ most profound social repercussions. That said, even a nonideal technology assessment agency would be far better than none.
Richard Sclove
Cofounder of ECAST
Author of Reinventing Technology Assessment: A 21st Century Model (Woodrow Wilson International Center for Scholars, 2010)
Kids online—and alright
Are the kids alright? I welcome Camille Crittenden’s article, “The Kids Are Online—and Alright” (Issues, Fall 2019), as it explains to an often panicky and increasingly dystopia-minded public that children and young people are alright in a digital world.
Nonetheless, it is hard to point to robust social science evidence that permits us to weigh confidently the overall risks and opportunities of internet use. Children differ, as do their circumstances and the values that parents and the wider society place on the multiple outcomes of internet use. Moreover, contexts are crucial. For the child who has run about all day, a couple of hours streaming video may provide vital downtime, but for the child avoiding physical activity, or lacking access to safe outside play spaces, extended video viewing may compound a preexisting problem.
No wonder that social scientists are recognizing that what matters is less the time spent online than the nature of online content and how children engage with it. Crittenden highlights the benefits of internet access for children’s reproductive and mental health, professional development and economic security, and civic engagement. But our Global Kids Online research shows that only a minority of children actually attain these benefits.
Through the metaphor of the “ladder of online participation,” we have called on adult society to celebrate not only the beneficial outcomes identified by Crittenden but all the steps of the ladder. This means supporting rather than deploring children’s mundane game playing, chatting, image sharing, and video viewing online because it is precisely thus that they take the first steps toward gaining the skills and efficacy needed for more advanced health, civic, and workplace benefits.
With this in mind, I am puzzled by Crittenden’s focus on provision of broadband to the exclusion of other dimensions of digital inclusion. Surely meaningful access, which enables children to develop in a digital world to their full potential, requires not just broadband but also safe spaces for using technology, diverse service provision, media literacy education, constructive parental and educator support, and more.
However, I do appreciate Crittenden’s effort to reveal the confounding and contextual factors that undermine the temptation to blame the internet for children’s difficulties. But it would be a mistake to counter such a naïve inference by simply asserting that the kids are alright. For many are not: some are excluded or abused or living in extreme situations, and many struggle with academic and peer pressures, family tensions, or an uncertain future. Though undoubtedly use of the internet intensifies the opportunities—and risks—that children experience, it is the deeper issues of socioeconomic inequality, discrimination, and violence, among other factors, that have long blighted children’s lives and doubtless will continue to do so.
Sonia Livingstone
Department of Media and Communications
London School of Economics and Political Science
Tackling tough decarbonization
In “An Innovation Agenda for Hard-to-Decarbonize Energy Sectors” (Issues, Fall 2019), Colin Cunliff outlines the toughest technical problems that will be faced in the transition to net zero emissions. I fear, however, that the US Department of Energy in its current form would struggle to overcome these problems, even with increased funding for research, development, and demonstration (RD&D).
My first concern relates to the applied technology offices of DOE, such as the Office of Energy Efficiency and Renewable Energy and the Office of Nuclear Energy. Cunliff recommends six particular areas for expanding public RD&D investments, some of which fit reasonably well into these existing applied offices. Other areas could be tackled as cross-cutting initiatives involving multiple offices.
Sending all the new funding for these activities to the DOE applied technology offices as they exist today would be the easiest solution, but it would also be a mistake.
The applied offices are oriented toward making incremental progress along established technology pathways. They face pressure to meet specific targets for technology improvement to justify their appropriations, and they are therefore less likely to pursue ideas that are uncertain or high-risk. But we should not limit our thinking to established technology pathways in these crucial areas. For example, there are several possibilities for long-duration grid energy storage, such as thermal storage, batteries, or hydrogen production, each with multiple competing designs or approaches. When the state of a technology is relatively immature, there is significant uncertainty around which approach will ultimately be most competitive, and overinvestment in a particular approach runs the risk of locking the nation into a higher cost pathway.
Fortunately, DOE already has one solution to this problem: the Advanced Research Projects Agency–Energy (ARPA-E), with its reputation for risk-taking and a management style that allows it to pursue many approaches in parallel and see what sticks. Congress should recognize the opportunity that ARPA-E provides as a source of potential breakthrough ideas that can transform the view of available technology pathways. Any funding boost for the applied offices in these six challenges should come with a commensurate boost for ARPA-E.
My second concern is the basic science function of DOE, which is currently funded through the Office of Science. Cunliff recommends increasing the supply of scientific research that will underpin technology advancement. Unfortunately, the Office of Science is limited in its ability to produce translatable research, due in part to the organizing principle of DOE’s RD&D activities. A sharp separation between so-called basic and applied research has led the Office of Science to avoid connections to technology, lest it be perceived as doing applied research. But the logic of separating these two activities is built on a faulty premise: the linear model of innovation. Major innovations tend to involve collaboration between researchers pursuing fundamental discoveries and those pursuing useful inventions—a kind of collaboration that is impossible if there is a defensive wall around basic research.
The first step toward addressing this problem would be to revive the position of undersecretary of science and energy, so that these RD&D activities can be overseen by a single administrator. A stronger step would be to change how funding is allocated across DOE. If a fraction of the basic research budget were distributed through the technology offices, could those offices seed new lines of research in response to the needs of the technology? If part of the Office of Science budget came from the applied research funding stream, could it use that money to explore potential applications of novel scientific research? These questions are worth considering.
Congress should scale up investments in innovation for decarbonization, and it should also use this moment as an opportunity to improve on the status quo funding mechanisms. Internal reforms to take advantage of the synergy between science and technology could greatly enhance the impact of DOE’s research, development, and demonstration budget.
Anna Goldstein
Senior Research Fellow
University of Massachusetts Amherst
Climate emergency hazard
As Mike Hulme makes clear in “Climate Emergency Politics Is Dangerous” (Issues, Fall 2019), declaring a climate emergency shortcuts democratic policy-making for expedience. Society may thus adopt a singular focus on reducing greenhouse gas emissions—at the expense of other pressing social concerns such as inequality and public health threats, as well as others sure to arise. These risks are serious, but they are worth facing because the climate problem is both pervasive and urgent.
First, the pervasiveness of the challenge makes it difficult to imagine the world retaining an exclusive focus on reducing emissions over the long term. Climate change is a truly global public goods problem, involving millions of decisions under deep uncertainty about future impacts and costs to address them, with diverse perspectives about risk tolerance and time preferences, all sustained over the course of multiple decades. It is difficult to imagine nine billion people—or even an undemocratic climate elite—decarbonizing the economy over decades without taking into account competing social priorities given that carbon-intensive activities affect all aspects of the economy.
Moreover, historical evidence suggests that issues such as public health scares, recessions, and military conflicts have a habit of foisting themselves on us. Surely emerging ones such as artificial intelligence would do the same. Political theories such as the issue-attention cycle indicate that the more likely path is that such crises divert our attention from long-term goals, not that we stoically suffer through them out of an obsession with emission reductions. The Cold War, the 1970s oil crises, and the War on Terror drew in tremendous resources, but they were never the only social objective; they also created spillover benefits.
Second, the primary motivation for declaring a climate emergency is not just the dire future consequences, which are indeed prone to exaggeration, but the languid pace at which the world has addressed the climate problem over the past 30 years. The decades-long lifetimes of both carbon-intensive infrastructure and atmospheric carbon are inertial forces that make decarbonization of the world economy urgent, as well as daunting.
A gradual approach that positions climate change as one of many social objectives is likely to see the next 30 years match the progress of the previous 30, resulting in proliferation of policies without stringent commitment, peaking of global emissions but not deep reductions, and improvements in low-carbon technology without widespread adoption. That would bring us to 2050 facing a set of unappealing, even if not existential, choices. How do we rapidly adapt to a disrupted climate? How do we manage the resettlement of millions of climate refugees? Who decides how much sunlight-blocking planetary cooling we deploy? None of these decisions seems particularly amenable to a democratic process, as they are likely to be made from a reactive position while resolving a crisis.
Developing a broad consensus to commit to addressing climate change during what may be a nascent policy window entails risks—but they are likely more manageable than those we would corner ourselves into via a gradual approach, however democratic its virtues. A climate emergency is a way to start getting serious.
Gregory F. Nemet
Professor, La Follette School of Public Affairs
University of Wisconsin-Madison
Mike Hulme is rightly concerned about what is being excluded from climate emergency politics. Poor and otherwise marginalized people do not necessarily have the luxury of singling out climate change as an emergency because they face a host of other, perhaps equally significant risks.
Hulme is also right to point out that some of these risks may be systemically exacerbated in the name of decarbonization. In the global finance sector, where climate change is being taken seriously by regulators (as evidenced in the fostering of voluntary measures to disclose climate change risk), decarbonization efforts are thus far unfolding without significant consideration of social risk. It is important to note that financial disclosure of climate change risk is not being fostered by the finance industry because of the climate emergency per se. Rather, climate system instability is seen to be creating a second-order, global-scale risk to the global financial system that may occur, for example, if the insurance sector is hit hard by multiple disasters or if fossil fuel companies collapse.
The financial sector is most fundamentally concerned about stability risk to the financial system, and is thus undertaking new measures to prevent financial instability associated with climate change. Decarbonization is only a part of this effort. Led by global financial industry regulators, the aim is to stabilize the climate-finance “meta-system” and prevent a global financial crisis arising from systemic effects of the bio-physical impacts of climate change and the transition to decarbonization. The solution being proposed is the circulation of better information about climate-related risk through markets, believed to enable better analyses of interactions between the climate and finance systems. This data collection and analysis is intended to provide the basis on which both stabilization and decarbonization of the global economy can be achieved.
Although this may appear to be good news for the “climate emergency,” for those concerned with equity, justice, human rights, and sustainable development, the changes to the global financial system warrant closer scrutiny. Enabling systemic decarbonization through financial markets may, if not implemented with careful social analyses and new social policy, also introduce new systemic risks for the world’s poor of being even further marginalized by the global financial market’s climate change “solutions.”
This can occur in multiple ways: through the individualizing of climate risk among the poor instead of providing more systemic redress through institutional reforms and access to justice that alleviate poverty and vulnerability; through the transfer of financial and climate risk to those without good access to risk information and adaptive capacity; and through the use of private, proprietary climate-finance models, which leads to a lack of access to data and modeling outputs, and ultimately to the privatization of financial decision-making on climate change risk. If decarbonization is achieved by the systemic financialization of climate risk, itself a highly complex task, the poor may nevertheless face an unprecedented new form of systemic inequity.
What is needed, as Hulme suggests, is to expand attempts to decarbonize so that they always include the ultimately more sustainable and just task of holistically addressing the range of social, economic, and environmental challenges facing the world’s poor and marginalized.
Carol Farbotko
Commonwealth Scientific and Industrial Research Organization (CSIRO)
A growing social media problem
Noam Chomsky and Edward Herman dedicated their 1988 book, Manufacturing Consent, to the late Alex Carey, an Australian sheep farmer turned lecturer whose research at the University of New South Wales focused on what he called “industrial psychology.” They credit Carey as inspiring the core idea of their book, the “propaganda model of communication,” which they argue is used by the corporate media to manipulate public opinion and manufacture consent for policy. Chomsky has also called Carey a “pioneer” of research on industrial propaganda. So, if Carey were here today, what would he say went wrong in the United States, when and why? His answers, I suggest, would be very different from those offered by Cailin O’Connor and James Owen Weatherall in “The Social Media Problem Is Worse Than You Think” (Issues, Fall 2019).
For starters, O’Connor and Weatherall proclaim that the theory of truth that grounds their analysis and their remedies is a variety of pragmatism. But Carey was deeply suspicious of pragmatism, perceiving it to be intimately related to propaganda. He wrote: “One general point should not escape notice. There is a remarkable correspondence in attitude to truth between pragmatists and propagandists. Both justify the promotion of false beliefs wherever it is supposed that false beliefs have socially useful consequences.”
Thus the first of two key issues I think Carey would have with O’Connor and Weatherall’s analysis and proposal is its confidence that we could make decisions about matters of truth and falsity without becoming propagandists ourselves, when our understanding of these concepts is grounded in a variety of pragmatism.
His second complaint would take aim at the assumption that social media companies and legislators and bureaucrats—“elites”—are who should hold the power in a democratic society to make decisions about the management of misinformation. In a retrospective on the propaganda model 10 years on, Herman argued that it provided a good explanation of how the media covered the North American Free Trade Agreement (NAFTA). In their coverage, he argued, the “selection of ‘experts’, and opinion columns were skewed accordingly; their judgement was that the benefits of NAFTA were obvious, were agreed to by all qualified authorities, and that only demagogues and ‘special interests’ were opposed. The effort of labor to influence the outcome of the NAFTA debates was harshly criticized in both the New York Times and the Washington Post, with no comparable criticism of corporate or governmental (US and Mexican) lobbying and propaganda.”
It’s interesting to ponder, in the causal chain of events that put President Trump in office in 2016, whether homegrown managerial propaganda in the 1990s or recent Russian social media trolling was the weightier cause. Such historical matters should serve as a warning, at least, of the potential long-term consequences that could plausibly emerge when those who control the means of information production and distribution form a consensus of opinion about what’s “right,” and then go about systematically shutting out or deprioritizing the voices of “ordinary” people on that basis.
However, I do think that O’Connor and Weatherall are right that algorithmic decision rules are a worthy target for intervention. That they are presently constructed to maximize profits, often at the expense of other values, such as social media users’ autonomy and the functioning of a democracy, would undoubtedly disturb Carey. His solution, however, would put the many, not the few, properly in control of the “mass mind.”
Erin J. Nash
School of Humanities and Languages
University of New South Wales (Sydney)
US Census set for changes
Of the topics carefully reviewed by Constance Citro in “Protecting the Accuracy of the 2020 Census” (Issues, Summer 2019), I select one for elaboration: she writes that every census is a lesson in how to improve the next one. This will be so for 2020, but with an unprecedented outcome: the 2030 census will look less like the 2020 census than the 2020 resembles the 1790 census.
This rash statement is not predicated on the serious threat posed by the current treatment of the census as a tool to gain partisan advantage. That threat is being resisted just as seriously at the state and local levels and by commercial and advocacy groups, thus far with success, as evidenced by events surrounding the proposed addition of a citizenship question. The American people do not want to lose what the Constitution gave them—a reliable census that, among many other benefits, allows them to hold politicians accountable for their often-exaggerated election promises.
The turning point that will follow 2020 is not in politics but in a new data science designed for twenty-first century conditions. The Census Bureau will still anchor the nation’s information platform, as it did across the nineteenth and twentieth centuries, but it will provide greater demographic and geographic granularity; it will have the capacity to continuously update key variables in areas such as health, economics, transport, and agriculture; and it will offer stronger privacy protection than currently available.
The American Community Survey will remain, now joined by two new data flows: one created by linkage across federal and state administrative records, and the other by arrangements to draw on third-party data—especially commercial transactions and social media. We will learn from 2020 how soon, how effectively, and how accurately we can begin to replace (expensive) survey data with (already paid for) administrative record data. (The nasty, wasteful legal battle in 2019 over adding a citizenship question to the 2020 form did, after all, end with an agreement to use administrative data, as the Bureau had strongly recommended from the outset.)
Third-party data is less ready, first needing extensive scientific attention to barriers and challenges regarding such areas as privacy, proprietary constraints, standardization, data security, the protection of trend lines, and, of course, public trust. In some areas, progress will be quick and impressive, as is true for the census address file and today’s economic statistics; in other areas, slow and frustrating. But the overall picture points to a future in which third-party data will provide much of what census and survey data now make available.
This transforms the Census Bureau’s role, adding to its traditional focus on data collection the task of curating data—assessing the accuracy, coverage, privacy, and costs of commercially provided data products, and deciding which can be safely incorporated into the nation’s statistical system. And this is why the 2030 census will look less like the 2020 census than the 2020 resembles the 1790 census.
Kenneth Prewitt
Columbia University
Director, US Census Bureau (1998–2001)
Future of artificial intelligence
In the midst of all the hype around artificial intelligence and the danger it may or may not pose, Justine Cassell’s article, “Artificial Intelligence for a Social World” (Issues, Summer 2019), clearly and intelligently makes the important point that AI will be what we choose it to be, and that autonomy, rather than being the only possible goal, is actually a distraction.
This fixation on autonomy has never made sense to me as an AI researcher focused on conversation. My colleagues and I work toward creating systems that can talk with us, carry on sensible conversations, provide easier access to information, and help with tasks when it is not practical to have another human collaborate. Examples of this type of work include the NASA Clarissa Procedure Navigator, designed to act like a human assistant so that astronauts on the understaffed International Space Station can perform procedures more efficiently, and the virtual coach for the Oakley RADAR Pace sports glasses, which provides interactive conversational coaching to distance cyclists and runners. These types of systems are sophisticated AI but are focused on collaboration rather than autonomy. Why are our conversational AI systems helpful, friendly, and collaborative? Because we chose to apply our scientific investigation and technology to problems that required collaboration. We were interested in interaction rather than autonomy, as are Cassell and her colleagues.
In her article, Cassell shows what a different, and I think better, kind of AI we can have. She demonstrates, through her fascinating research on social interaction, how AI can be a scientific tool to answer social science questions and how social science results can feed back into the AI technology.
By using virtual humans, Cassell and colleagues are able to control variables in conversation that would be impossible to control with human subjects. They were, for example, able to create virtual children that differed only in whether they spoke a marginalized dialect or the “standard school dialect.” With these controlled virtual children, they investigated the effect of brainstorming about a science project in the children’s home dialect compared with the standard school dialect. The children who brainstormed with the virtual child in their home dialect had better discussions. This is an example of using AI technology, the virtual human, to answer a scientific question about children’s learning outcomes.
Then Cassell and colleagues discovered that it was not the dialect itself that mattered but the rapport that the dialect fostered. Studies of this difficult concept of rapport have led to data that are used to build a predictive model, and to algorithms that can be used in a system that attempts to build rapport with its human interlocutor—thus feeding the social science back into the technology.
Cassell’s article is filled with examples of solid and creative research, and she makes important points about the nature of artificial intelligence. It is recommended reading for anyone looking for a path to a positive and constructive AI future.
Beth Ann Hockey
Chief Technology Officer
BAHRC Language Tech Consulting