
Forum – Winter 2019

Alzheimer’s dilemmas

In “Progress Against Alzheimer’s Disease?” (Issues, Fall 2018), Robert Cook-Deegan recaps the remarkable confluence of policy, advocacy, and practice that helped transition care and caregiver respite from an isolated family endeavor to an expansive community-based approach. The article provides valuable insights that should help guide further progress that is urgently needed.

In the mid-1980s, the Robert Wood Johnson Foundation (RWJF) identified dementia care and respite as a gaping void. As telling evidence, the neurologist David Drachman, an early researcher in Alzheimer’s disease who was cited in Cook-Deegan’s article, has said that he and his colleagues typically had two patients: the person with dementia and the caregiver. Similarly, Jerry Stone, who in 1980 helped form the Alzheimer’s Disease and Related Disorders Association, has said that he often wondered how most families could afford the care his wife received for 15 years.

In early efforts to address dementia-related care and respite needs, the Office of Technology Assessment (OTA) staff, along with the group’s advisers and contractors, were a hub of expertise and advice spanning science, finance, policy, and practice—the latter provided primarily by OTA staffers Nancy Mace and Katie Maslow. RWJF also identified the 20 specialized adult day care centers then known nationally from a list developed by the National Council on Aging. This nexus of federal, nonprofit, patient advocacy, and clinical resources, much of it orchestrated by the OTA, was invaluable.

RWJF, in cooperation with the Alzheimer’s Association and the federal Administration on Aging, established a program office at Wake Forest University codirected by Burton Reifler, the school’s chair of psychiatry (and a former RWJF Clinical Scholar), and Rona Henry. Many involved in the effort participated in the site visit and recommendation process. Visits revealed that centers had secured community-based space and were generally providing quality care, but their financial viability was always at risk, dependent primarily on charitable giving and bake sales.

The program sought to determine if centers could expand dementia-specific services to participants and other community and in-home respite groups, provide services appropriate to various levels of disease severity, and become financially viable. Behavioral challenges meant that centers needed to keep their clients safe by preventing wandering, and also to provide meaningful and engaging care such as music, art, and cooking that drew on previously learned skills.

Rather than provide grant funds and expect sites to become self-sustaining, RWJF used a deficit-financing model: the original 17 sites received funds that gradually declined as site revenues increased. With continuous technical assistance, each site conducted marketing research to find out what people wanted and would be willing to pay for, and then expanded services accordingly. Center innovations included in-home, overnight, and weekend respite, with centers providing care daily, several days per week, or intermittently for part of a day. Grantees determined unit costs and pricing, and because public financing was absent they offered sliding-scale fees and “scholarship” opportunities. Grant funding for transportation expanded participation and was a predictor of success. The program also demonstrated that providing emotional and practical support to caregivers enabled many families to continue to care for members at home rather than turning to nursing homes.
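To illustrate the model with purely hypothetical numbers: a center with $100,000 in annual operating costs and $40,000 in first-year fee revenue would receive a grant covering the $60,000 deficit; if revenue grew to $90,000 by the final grant year, the award would shrink to $10,000, with the site expected to close the remaining gap on its own thereafter.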

Additional RWJF programs extended this effort, and more than 5,000 community-based services are now in operation. However, financing strategies and continued innovative expansion have lagged. They are acutely needed for participants and caregivers to benefit.

Those benefits are sometimes unexpected. At a national program meeting in Utah, where staffer David Sundwall had initially stressed to OTA the need for expanded services, participants with dementia performed in a band on stage, playing instruments they had learned years ago. The performance was jaw-dropping.

Carolyn Asbury

Senior Program Officer
Robert Wood Johnson Foundation


In “Asking the Right Questions in Alzheimer’s Research” (Issues, Fall 2018), Susan Fitzpatrick identifies a range of serious challenges for neurodegenerative disease research—indeed, for brain research more generally. Two in particular deserve further elaboration. The first is her exploration of animal models in neuroscience. The second is her observation that researchers have failed to embrace the prospects of low-tech but potentially high-yield sources of data and insight from human patients.

Fitzpatrick correctly notes the unyielding focus on a relatively small number of animal species, primarily mice and rats, in neuroscience laboratories. Efforts key in on what can be asked of these animals, what can be modeled in them, and not necessarily on what needs to be asked or modeled in order to actually understand or intervene in the human disorder of interest. She introduces a perfectly damning metaphor: “It has not just been the problem of looking for the keys under the street light, but of finding in that light a bent nail and declaring that the key has been found.” I would, in fact, go further: we whack at these bent nails with broken hammers housed in jewel-encrusted toolboxes paid for with taxpayer dollars and/or profits from me-too drugs that have failed in any real way to alter disease trajectories for the millions of people who suffer from brain (and many other) disorders.

It need not be this way, of course. But scientific research, when not entirely creative, is just the opposite: self-reinforcing and self-perpetuating. Entire research enterprises can become so narrowly channeled and pathway-dependent that alternatives are not just not feasible but unimaginable. Except, perhaps, from the outside, or at least from the periphery. Issuing from the president of a small but strategically important neuroscience philanthropy, Fitzpatrick’s insights are terrifically important.

Looking at the second challenge she identifies, Fitzpatrick notes that “the revolution in technologies that track our every word and move as whole humans [has been] rarely integrated into the biomedical approach.” She has in mind in-body and on-body devices, surveillance techniques, and a whole suite of behavioral and cognitive assays, all of which take a distant second (or eighth or eightieth) to more reductionistic approaches focused on nonhuman putative models. Clinical phenomenology is now remarkably robust and could shed important light on the human phenotypes that could eventually be better modeled and assayed in other systems. Moreover, as Fitzpatrick notes, “A richer understanding and characterization of the human disease in the full context of how an organism does or does not accomplish the behaviors needed to thrive in its environment might suggest very different targets for therapeutic interventions.”

We are better off, in both the short and the long term, with minds wide open to novel approaches, with a diverse range of tools and strategies, and with a willingness to reconsider whether we’ve been asking any of the right questions all along.

Jason Scott Robert

School of Life Sciences and Lincoln Center for Applied Ethics 
Arizona State University


Brains are complex, and they engage with complex (social, biological, physical, technological) environments. Susan Fitzpatrick’s essay is a necessary call to action, stressing the importance of embracing this reality if we hope to make significant progress in understanding aging brain function and its breakdown in age-related disease. Central to her discussion is a consideration of the individual.

With the introduction of new methods and tools for measuring and studying the complex organization of the brain and its interacting components, efforts to understand brain function through the lens of networks have been reenergized. Descriptions of Alzheimer’s disease, and brain aging more generally, have frequently invoked network-related concepts to explain patterns of brain structure and function. However, it is only recently that this research has been able to apply a formal science of networks to characterize the observations.

As a cognitive neuroscientist who studies aging brain networks, I am a part of this effort, and I share Fitzpatrick’s enthusiasm about studying the brain as a complex and adaptive system. It’s important to emphasize how a network-based approach has particular appeal when considering the person-to-person variability that accompanies aging. It’s easy to recognize that some individuals maintain good cognitive health well into their later adult years, while others are more vulnerable to rapidly declining cognitive ability and disease, evident even from middle age. However, although there are multiple risk factors for Alzheimer’s disease and other forms of dementia, these factors are rarely determinative. Individual variability has often been described with hand-wavy explanations, and the absence of a neural substrate that adequately explains it has prevented significant progress in our understanding of the causes and consequences of cognitive decline and disease. Framing the problem and observations in the context of changes in functional and anatomical brain networks that are defined and measured using formal methods provides an opportunity to understand the individual differences in resilience and vulnerability that accompany aging.

This approach has benefited other domains of science, revealing that variability in both vulnerability to network degradation and the consequences of that damage is evident across many real-world networks and can account for different observed outcomes. For example, understanding differences in citywide public transportation networks has led to a deeper appreciation of why certain cities exhibit greater fault tolerance to short- and long-term interruptions in metro or bus service, and of what is needed to support and revitalize preexisting infrastructures given changing transit demands. The value of applying this type of framework to understanding brain network variability across individuals is clear.

A second central idea in Fitzpatrick’s essay considers the complexity of an individual’s environment as he or she ages. In neuroscience research, features of an individual’s environment are rarely measured. When they are, they are often coarse summaries, and are primarily treated as sources of group sampling “noise” that must be statistically accounted for. However, even broad characterization of environments reveals robust associations with age-related illness. For example, economic disadvantage is related to greater incidence of dementia and age-related decline. If certain environmental factors promote brain resilience or expose brain vulnerability, it is important to know what they are and how they operate. The determinants of successful brain aging can’t be limited solely to substrates of the brain but must also include the environment with which the brain engages.

Research in cancer is quickly revealing that an effective path toward successful detection, treatment, and prediction of disease progression involves understanding not only cancerous cells but also the local ecosystems in which they flourish or fail—both the “seed” and the “soil” (as first described by the nineteenth-century surgeon Stephen Paget). Progress in research on Alzheimer’s disease and other age-related diseases will benefit when we embrace a similar perspective. It’s time to study brain complexity, in terms of both the brain networks themselves and the environments in which individual brain networks develop and mature.

Gagan S. Wig

Assistant Professor of Behavioral and Brain Sciences
Center for Vital Longevity
The University of Texas at Dallas

Workforce preparation

Educators and policy-makers should carefully consider William B. Bonvillian and Sanjay E. Sarma’s article, “The Quest for Quality Jobs” (Issues, Fall 2018). Despite unemployment below 4% and billions of dollars invested in educational institutions, the United States has increasing income inequality and a declining middle class. Technology and trade make economic prosperity increasingly dependent on postsecondary education, as reflected in disproportionately high unemployment and underemployment of workers lacking credentials beyond high school. Still, the authors are optimistic. They argue that the upskilling of the workforce required to maintain competitiveness and drive upward mobility is already happening, albeit slowly, and that the will to change is there among the public, the private sector, and policy-makers. So, what are the critical points of leverage?

One barrier slowing the pace of change is that college and career advising is siloed within educational institutions, with counselors who can be responsible for a thousand or more students. Hundreds of websites offer information about colleges and careers, including data about job openings and salaries, but awareness and use of these sites among educators and students are low. Preparation programs focus on the social and emotional supports counselors provide, with little emphasis on career advising. More professional development for counselors and advisers is the typical policy response, but high turnover rates and the limited impact of short-term training without follow-up undermine its efficacy.

In Texas, we are taking a new approach with promising early results. In 2015, the state legislature charged the University of Texas at Austin to work with agencies, nonprofits, and other colleges to leverage digital learning to improve college and career advising. The centerpiece of the Texas OnCourse initiative is a competency-based academy, developed with input from more than 2,500 educators and dozens of employers, partner organizations, and colleges. Today, more than 10,000 secondary counselors participate in the OnCourse Academy, and they serve more than 95% of the state’s secondary students. Counselors use online modules and tools for just-in-time support, and they are linked with one another and supported by a network of senior fellows (practicing counselors with deep expertise who serve as advisers) and implementation coaches. More than 95% of users report that this infrastructure enables them to serve more students.

The Texas OnCourse approach tackles one of the structural barriers presented by the fragmentation of educational systems. However, this work also illustrates that training and connecting counselors and advisers isn’t sufficient. Educators need better tools that few institutions, systems, or states have developed. These include sequences of courses mapped not only to credentials but also to job opportunities. As students acquire credits from multiple institutions, they need the ability to look across institutions and see which credits are likely to apply. The most important questions students and advisers are asking are less about counting job postings or individual colleges’ graduation rates, and more about “how do I get there from here?”

A few institutions, systems, and states are working on these issues today. Imagine what might be possible if the will to change were channeled into a serious, networked effort among educators, employers, policy-makers, and the nation’s great colleges.

Harrison Keller

Deputy to the President for Strategy and Policy
The University of Texas at Austin

Labor market information gaps

In “Fixing an Imperfect Labor Market Information System” (Issues, Fall 2018), Sanjay E. Sarma and William B. Bonvillian offer a compelling argument, and they are right that the US labor market is one of the most decentralized and dynamic labor markets in the world. This is a strength, not a weakness.

Even so, this decentralization poses significant challenges for all major stakeholders, including employers trying to signal changing job requirements, credentialing organizations seeking to align with those changing requirements, and learners trying to connect what they know and are able to do to career opportunities. Although these problems may have seemed daunting in the past, new advancements under way will help us sync our signals, even in a dynamic and changing labor market.

Take the Job Data Exchange (JDX) as an example. The US Chamber of Commerce Foundation is working with partners to transform how employers signal changing competency, skill, and credentialing requirements directly from their hiring systems. If successful, the JDX will provide employers with open data tools that enable them and their human resource technology partners to send better, faster, clearer signals to preferred and trusted education and workforce partners. Such a dynamic labor market system was unthinkable not long ago, but recent advancements in data standards, linked data on the web, and artificial intelligence now allow us to address this challenge.

All the while, we must keep in mind that though technology can solve many problems, it is a tool, not a silver bullet. Even with a tool such as the JDX, it is still up to the human users to adopt and utilize these tools properly lest we be sharing bad data in new ways.

Therefore, we need a twofold solution. The tools are one part, but we must also cultivate new skills, practices, and behaviors in our human resource and talent management systems to organize hiring requirements at the competency and skill level, and we must update those requirements frequently based on best-in-class job analysis. Only then will these new data and technology tools be effective in helping employers more clearly and effectively signal their needs.

The article also lays out a vision for a new job navigator that will make use of all the data on skills, job opportunities, and learning to empower individuals in a complex labor market.

Here we need to keep in mind that the navigator of the future will likely come not as a single product or service, but will instead be made up of hundreds of new tools, applications, innovations, and digital solutions. The promise of a job navigator requires a new kind of data ecosystem that is open, decentralized, distributed, and public-private. This ecosystem is eminently achievable, but it will require new rules of the road that will challenge business models and mindsets. The T3 Innovation Network was recently formed as a partnership of more than 150 public and private organizations to do just that, and it is actively working to develop the open data infrastructure of the future for a more equitable talent marketplace.

Jason A. Tyszko

Vice President
Center for Education and Workforce
US Chamber of Commerce Foundation

Skills for future workers

In “Artificial Intelligence, Robots, and Work: Is This Time Different?” (Issues, Fall 2018), Stuart W. Elliott provides a thoughtful analysis of whether artificial intelligence (AI) will have a disruptive impact on the types of jobs available to people during the next few decades. On international assessments of literacy, approximately half of adults can comfortably handle tasks only up through Level 2, whereas three out of eight can handle Level 3, and one out of eight can handle Levels 4 and 5. Experts similarly project that AI systems will handle Level 2 tasks, and possibly Level 3 tasks, in the next decade or so, but not many tasks at Levels 4 and 5. Elliott claims that the race between computers and humans in handling tasks at the more difficult levels will require input from experts in psychology, education, and testing in addition to computer science and economics. The analyses he presents are quite compelling to me as a professor who has conducted interdisciplinary research in all these areas (except for economics) and who has developed computerized learning environments with AI components.
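In shorthand (my arithmetic, and assuming the three groups are mutually exclusive), these fractions partition the adult population: 1/2 + 3/8 + 1/8 = 4/8 + 3/8 + 1/8 = 1.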

There are reasons to be optimistic that humans will maintain the lead on tasks at Levels 3–5. First, AI can perform quite impressively on very specialized tasks at these levels, but the performance does not generalize beyond each specialized task. A program that interprets problems in biology will not transfer to medicine because the corpora of knowledge and the reasoning strategies are quite different. Humans are also not particularly impressive at generalization and transfer, but they hold the lead, and such capacities are essential for nonroutine problem solving at Levels 4 and 5. Second, humans hold the lead in social interaction and collaborative problem solving, which are critical twenty-first-century skills. Computer products such as Siri, Alexa, and Echo can handle short exchanges on generic topics but not lengthier coherent conversations, nonliteral language, humor, deictic expressions (here, there, this, that), and other dimensions of discourse that are contextually specific—and that will always be in high demand in the workforce.

There are also reasons to believe that computers will take the lead in performing both routine and nonroutine tasks within specific domains of application, where there is little need for generalization and transfer. This has already been demonstrated in law for information retrieval tasks and in finance for report generation tasks. Intelligent computer tutors are approximately on par with human tutors in helping students learn about specific topics, such as math, physics, or biology. Computer tutors are likely to exceed human tutors as they acquire new content and strategies through larger corpora, data mining, and machine learning. Human memories cannot handle such rich volumes of information with precise detail and discriminations.

Elliott persuasively argues that it is difficult to predict how disruptive AI will be to the workforce. Markets and enterprises can evolve swiftly and require a large workforce even when they have dubious value. The intrinsic value of agriculture, manufacturing, and education is indisputable, whereas sports, games, and social media have value on very different dimensions.

Arthur C. Graesser

Professor
Department of Psychology and Institute for Intelligent Systems
University of Memphis
Former chair of the Collaborative Problem Solving Expert Group for the Programme for International Student Assessment 2015


“After the invention of the steam engine, even if horses had worked for free, nobody would have hired them.” This saying, often attributed to the Nobel Prize winner Robert Solow, describes in drastic terms what happened to the major source of work power during and after the industrial revolution. Just as steam engines were able to replace dozens or more horses, computers and robots are now able to replace the cognitive work of hundreds of humans in routinized tasks. In the past, automation and the use of machines were seen as signs of progress because most of the work in which machines substituted for humans was unpleasant, dangerous, and low-paid.

Today, it’s different. Computers and artificial intelligence (AI) are replacing humans in tasks that demand higher cognitive skills and longer education—and therefore those jobs are better paid and take place in nicer working environments. This gives rise to fear, and this is precisely where Stuart W. Elliott’s article comes in, showing that the majority of the adult US workforce reaches only levels of cognitive competence that computers could already replace relatively easily today.

For those facing the threat of being replaced by machines and AI, there are three possible escape routes, two of which Elliott describes. Of the three, one is unlikely, one undesirable, and one difficult and uncertain but nevertheless the only way forward. The first and unlikely way to get around the threat of being replaced by machines is a massive upskilling of the current adult population and of generations to come. I fully agree with Elliott’s assessment that although the differences in levels of cognitive competencies between adults of different countries show some potential for the United States, it is very unlikely that education alone could protect a high share of people from the menace of being replaced by AI.

The second strategy, sometimes put forward by the developers of AI themselves, is reducing the cognitive demand of activities with the help of computers, such that humans with lower competencies are able to execute more demanding tasks. Although this sounds appealing at first, what share of the profit from producing goods and services would companies want to give to humans who are up to their jobs only because of the use of expensive AI? This leaves us with the third option, which is doing things that computers are not (yet) able to do. But as Elliott correctly points out, many of the tasks that computers are not (yet) able to perform are those that also demand high levels of cognitive skill, such as creativity or critical thinking.

But the question to ask is not which skill or competence computers are least likely to master in the near future. Rather, the question that the nation’s educational authorities have to think about when designing curricula is: what skills are necessary to produce services and goods that humans in the future would rather pay another human for, and not a machine? Horses, by the way, have found their new role. In many countries, there are more horses than at any time since World War II—not to plough land anymore, but as leisure companions.

Stefan C. Wolter

Department of Economics
Centre for Research in Economics of Education
University of Bern
Switzerland


The future of education is of course closely connected with the future of jobs and the labor market. Both the article by Stuart W. Elliott and another in the same issue by Phillip Brown and Ewart Keep, “Rethinking the Race Between Education & Technology,” underline this, albeit in different ways.

Brown and Keep show that substantial differences exist in readings of today’s labor market and in predictions about its future. The latter may be very confusing; as a well-known Dutch joke holds, “predicting is difficult, especially when it is about the future.” What they do show, however, is that each of the three contrasting views they describe of the future of the labor market and workers would, if borne out, have a severe impact on education’s main objectives and organization. In Elliott’s article, the search is for what skills, and what level of skills, future education systems should prioritize in order to secure a match between the demand for and supply of skills and to foster labor market success—which also relates to the future of education.

Brown and Keep more than capably present their projected contrasting futures and summarize the different claims, but they only briefly touch on the underlying differences in perspective and the concomitant emphasis on different forces that lead to these different views. Unfortunately, merely presenting the three different views next to one another without offering a critical evaluation of them is, we think, a missed opportunity.

Elliott tries to move beyond the unsettled dispute between those who believe that the new technology (artificial intelligence, robots) differs strongly from previous spurts of technological improvement and those who believe that this time is no different from previous times. He does so in a very interesting, practical, and innovative way: by looking into the ability of “machines” to perform human tasks. In his approach, he applies standardized test items developed in the context of PIAAC, the Organization for Economic Cooperation and Development’s (OECD) survey of adult skills, which assesses the literacy, numeracy, and problem-solving skills of people between the ages of 16 and 64. He convincingly concludes that new technologies will probably pose a serious challenge for education systems seeking to equip all people with the right type and level of skills needed to be competitive with the “machine.”

Another stimulating feature of Elliott’s contribution is the critical assessment of the somewhat lazy allegation that humans have a great advantage over machines in other skill domains, such as social skills. His call to look more closely at these skills to arrive at a more balanced assessment of the position of humans is much needed.

Finally, we’d like to underline an important aspect that is touched on in both articles, though only briefly. Our reading of the impact of today’s and possibly future technology is that it will probably lead to a very different organization of work. In the words of Andreas Schleicher, the OECD’s most distinguished spokesperson on education and education policy, “in the future everybody is creating his own job.” This of course refers to the world of the platform and gig economy. We find that both articles fall short in fully grasping the implications of this aspect of the future of work for the future of education.

Tim Schokker
Ted Reininga

Dutch Ministry of Education, Culture, and Science


Policy articles about “the future of work” are often based on little more than speculation. So Stuart W. Elliott is to be commended for consulting with artificial intelligence (AI) experts on how computers are likely to perform over the next few years on reading proficiency tests. His experts predict that machines will soon score at least as well as most US adults. Barring educational advances that he finds implausible, Elliott concludes that “employers will automate many literacy-related tasks over the next few decades” and so, with such jobs gone, people will either have to develop other skills or face unemployment.

Elliott’s argument rests on two assertions and a deduction. He asserts first that people who can do only tasks that machines can soon perform will not have jobs, and second that people with skill X can do tasks that machines will not be able to perform any time soon. He then deduces that people with skill X will therefore have jobs.

The first assertion reflects what economists call the “lump of labor fallacy,” a zero-sum view of how new jobs displace old ones that doesn’t necessarily hold up in practice. For example, the economist James Bessen found in a study reported in 2015 that demand for bank tellers actually increased after ATMs were introduced. Rather than assuming a fixed amount of work that people or machines will compete to do (and that machines will always get the job if they can perform it at all), it is better to concentrate on new ways that humans and AI can complement one another, as Ajay Agrawal, Joshua Gans, and Avi Goldfarb suggest in their 2018 book, Prediction Machines: The Simple Economics of Artificial Intelligence. Perhaps, for example, AI could help provide the education people need to read better than machines?

The second assertion is backed up by judgments that Elliott has compiled from experts in the case where X = “can perform at Level 3 or higher on a PIAAC literacy test.” He also calls for similar research on X = “can perform high levels of social interaction.”

Elliott’s deduction, however, does not follow from his two assertions even if we accept them as true. The deduction would follow if, for example, the first assertion said instead that “people who can do tasks that machines cannot perform will have jobs.” But this is different from his assertion, and the two versions are not logically equivalent. Saying they are represents what is called the inverse fallacy. In fact, the alternative statement itself seems hard to believe. Machines are not likely to have human empathy any time soon. Does that mean we can all count on having jobs?
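To restate the gap in shorthand of my own: let M(x) mean “x can do only tasks that machines will soon perform” and J(x) mean “x will have a job.” Elliott’s first assertion is then M(x) → ¬J(x), which is equivalent only to its contrapositive, J(x) → ¬M(x). The deduction instead requires the inverse, ¬M(x) → J(x), which does not follow.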

Even harder to believe is the related claim that if more people received education enabling them to score higher than Level 3 on PIAAC, then their jobs would be safe. This is a matter of causal inference about counterfactuals. That kind of reasoning is truly an example of a task that no machine learning system is on a path to perform, since all such algorithms are ultimately based only on statistical correlations. By Elliott’s argument, perhaps this means there will eventually be sustained demand for humans who can carry out such reasoning.

Daniel L. Goroff

New York, NY

Amping up research universities

In “Research Universities and the Future of Work” (Issues, Fall 2018), Mitchell Stevens channels his inner Clark Kerr and perceptively diagnoses an opportunity and imperative for research universities to more boldly apply their expertise to pressing questions about the future of work and workers. As he observes, considerable political attention during the past presidential administration focused on ways in which community colleges might be better engaged in this challenge, but research universities—despite their considerable federal funding—were largely left out of the conversation.

I agree that they should be brought back in. For the nation once again finds itself in a “multiversity” kind of moment: just as the flood of Cold War, post-Sputnik federal spending spurred Kerr to reimagine the political and economic role of the university in American life, so does today’s politically fractured, software-saturated, and gig-and-flex-shift economic landscape point toward the need for better application of what research universities do well—and a finer understanding of where and how they might do better.

But just as in the age of Master Plans and moon shots, there are internal hurdles that must be recognized and surmounted in order for the research university to take on an even more productive and expansive role. Some stem from institutional history; others from the current political moment. And any prescriptions for engagement must reckon with these structural challenges.

One hurdle: the diversity of teaching and research disciplines within the research university. Stevens correctly identifies a crucial need to better map educational experience onto job skills, something made difficult by both the nineteenth-century-born disciplinary verticals around which the research university remains arranged, and the twentieth-century administrative structures that govern university operations. He moves beyond the tired break-down-those-silos arguments and proposes that instead of integrating disciplines we must integrate data in a more finely grained and responsive manner. Certainly, given how far artificial intelligence and machine learning have leapt in recent years, it is high time for research universities to apply deeper data science and data integration to better measure learning goals and outcomes.

But we should be careful not to further push the already sharp vocationalism that is driving so much of higher education right now, nor to further commoditize learning toward a “product comparison” model for students and their families. Yes, better information is good, but research universities and their allies must also make a qualitative, ethics-driven case for a broad-based liberal education as foundational and essential to workforce preparation. This goes beyond a better database: as universities already have discovered, quantitative measures that map well onto the STEM disciplines (science, technology, engineering, and mathematics) do not necessarily capture the return on investment delivered by humanistic disciplines and the attainment of soft skills such as critical thinking, analytic writing, and evidence-based argumentation. Yet those skills are more valuable than ever in an era of data-privacy breaches, misinformation campaigns, and software-driven automation that drives up the value of creative thinking and tacit knowledge.

A second challenge with which to reckon: the sharpening inequities and economic insecurities consuming the modern workforce are also happening within higher education itself. The sharp disinvestment of state resources in higher education since the Great Recession has left most public research institutions, both flagships and non, reeling from budget cuts and turning increasingly to nontenured and temporary faculty appointments to address a still-surging demand for undergraduate teaching. The future of work is on stark display inside the research university itself, particularly the non-elite publics, and it is an ill-paid, insecure, and not at all pretty picture. It also is unsustainable, particularly if the research universities are to take a bolder role in redefining the terms on which the future of work might occur.

As Stevens reminds us, universities are unlike any other social institutions—and they also are products of the society that made them. Federal monies still flow into research university coffers, but they chiefly go for research and administration, not teaching, and they go disproportionately toward knowledge production in certain disciplines, not all. Reckoning with these disparities is a critical job for university faculty and leaders; remedying them is a critical challenge for policy-makers.

Margaret O’Mara

Howard & Frances Keller Endowed Professor of History
University of Washington

Growing entrepreneurs

In “Investing in Entrepreneurs Everywhere” (Issues, Fall 2018), Tracy Van Grack emphasizes many of the right considerations that policy-makers and business leaders should have in mind as they seek to encourage entrepreneurship. Supporting young firms and searching for overlooked sources of talent are essential to restoring and broadening the United States’ economic vitality.

These challenges have taken on a new urgency. Declining entrepreneurship since the 1970s is part of a broader story of declining business dynamism, as I have summarized in a 2018 paper with coauthors at The Hamilton Project: young firms are less numerous and employ fewer workers, highly educated individuals are less likely to be entrepreneurs, and business formation is taking longer than it used to. At the same time, economic gaps between the best- and worst-performing places have become disturbingly large. These patterns have implications for both national economic growth and the availability of economic opportunity outside the main clusters of innovative activity.

Van Grack’s focus on start-up support, rather than subsidies for large, incumbent businesses, is particularly welcome. As Aaron Chatterji, a professor of business and public policy at Duke University, has described, states’ use of subsidies creates a playing field that is implicitly tilted against start-ups and young firms. A powerful political logic makes it difficult for any one state to stop favoring incumbent businesses; large, mobile firms can play one state off against another, securing large subsidies in the process. As Chatterji proposes, one solution is to provide a countervailing federal incentive to minimize these subsidies.

To address the geographic gaps that Van Grack discusses, we should be thinking about a range of ways to improve the productive potential of struggling areas. Two promising ideas are found in work by Abigail Wozniak and by E. Jason Baron, Shawn Kantor, and Alexander Whalley. Wozniak would allow college graduates to delay their loan repayment while they search for employment in labor markets farther from their postsecondary institutions. Baron, Kantor, and Whalley would enhance linkages between research universities and struggling areas, increasing the spillover of innovations that support local economic activity.

One point of partial disagreement concerns the “mercenary culture” of Silicon Valley workers that Van Grack mentions (and implicitly disfavors). For the US labor market as a whole, the opposite problem is much more apparent: workers often have weak bargaining power relative to employers, are moving across places and jobs at a falling rate, and are generally struggling to achieve robust wage growth. Removing impediments to worker mobility is a core part of the solution to stagnant wages and productivity, and policy-makers should pay heed to the proposals contained in The Hamilton Project’s volume Revitalizing Wage Growth: Policies to Get American Workers a Raise, particularly those that address noncompete and no-poach agreements.

Ryan Nunn

Policy Director, The Hamilton Project
Fellow, Economic Studies at the Brookings Institution

Facts and fears

In “Fear Mongering and Fact Mongering” (Issues, Fall 2018), Adam Briggle notes that he was a graduate student in one of my classes some 15 years ago. In fact, he was much more than that; he remains one of the most talented thinkers and writers whom I have had the pleasure to teach in my career. So, when he writes, I read with interest.

Briggle argues that we should consider augmenting the standard definition of research misconduct (falsification, fabrication, and plagiarism, or FFP) with a new category of misconduct, which he calls the “responsible rhetoric of research” (RRR). To illustrate violations of RRR, he cites two researchers whose work, in his view, illustrates irresponsibility.

The first is Bjorn Lomborg, whose “sins” are well-documented. His book The Skeptical Environmentalist, published in English in 2001, argued that the state of the environment was better than often portrayed. Like Julian Simon before him and Hans Rosling since, Lomborg is part of a longstanding academic and political tradition. Lomborg’s book met with fierce opposition, including demands from other academics and scientists that his publisher drop the book, an investigation by the Danish Committees on Scientific Dishonesty, and demonization by many of his peers that continues today. Briggle joins with Lomborg’s many critics to express concern about the political implications of his writing, worrying that Lomborg’s views could “cause irrational calm and complacency.”

As his second example of irresponsibility, Briggle targets me. I’ve long argued that the world has seen a dramatic drop in lives lost to disasters, and that as poverty around the world has been reduced, the economic toll of disasters has not increased as fast as global wealth. This is indeed good news. These are hardly controversial views: they are also conclusions of the Intergovernmental Panel on Climate Change, which produces periodic assessments of climate science, impacts, and economics, and they serve as indicators of progress under the United Nations’ Sustainable Development Goals.

Briggle notes that my work is, from his position, “logically, or empirically, flawless.” He then offers a cartoonish characterization of my political views: “Pielke is not a climate denier. In fact, he advocates for a carbon tax.” Despite these apparent virtues, Briggle views my writing as presenting a “danger” because political actors who he apparently opposes might “make hay” with “good news facts.”

As with Lomborg, here as well Briggle is late to the party. In 2014, my research related to disasters led a group of climate scientists and journalists to campaign (successfully) to have me removed as a writer for Nate Silver’s website FiveThirtyEight. The following year a member of the US Congress suggested that I might be taking secret money from fossil fuel companies and had me formally investigated by my university. Of course, the investigation cleared me of the baseless smear, but even so, severe damage was done to my career. Nonetheless, I continue to write and publish in the peer-reviewed literature, in popular outlets, and in policy settings.

I welcome Briggle’s disagreement with the substance, focus, or rhetoric of my writing. His sharp mind and incisive writing can help us all to become smarter. However, Briggle’s suggestion to equate judgments of RRR with FFP represents yet another effort from within the academy to silence others whose views are deemed politically unwelcome or unacceptable. At most research institutions, the penalties for researchers who engage in FFP are severe, and often include termination of employment. Of course, Briggle is not alone in sending a powerful and chilling message about which views are deemed acceptable and which are not.

Briggle says these are “vexed matters, and not amenable to easy answers.” To the contrary, the answer here is simple.

Let me simply point to an article by Lomborg’s editor, Chris Harrison of Cambridge University Press, published in 2004 in Environmental Science & Policy. In it, Harrison cites an editorial in the March 8, 2002, issue of Science by its editor, Donald Kennedy, on responsible behavior when it comes to publishing views that some may object to: “I have been asked, Why are you going forward with a paper attached to so much controversy? Well, that’s what we do; our mission is to put interesting, potentially important science into public view after ensuring its quality as best as we possibly can. After that, efforts at repetition and reinterpretation can take place out in the open. That’s where it belongs, not in an alternative universe in which anonymity prevails, rumor leaks out, and facts stay inside. It goes without saying that we cannot publish papers with a guarantee that every result is right. We’re not that smart. That is why we are prepared for occasional disappointment when our internal judgments and our processes of external review turn out to be wrong, and a provocative result is not fully confirmed. What we ARE very sure of is that publication is the right option, even—and perhaps especially—when there is some controversy.”

Amen.

Roger Pielke

University of Colorado


Adam Briggle calls our attention to the rhetorical choices scientists face when they make public arguments about climate change. He asks, “has the climate science community hid behind neutral facts and insufficiently scared the public?” Using examples from Roger Pielke’s rhetoric about climate change, he suggests that sticking to the facts and remaining neutral may mislead publics.

As a rhetorician who studies scientific and medical arguments, I applaud Briggle’s attention to the integral roles that rhetoric plays in science. I agree that facts alone may not compel change, and that sometimes a healthy dose of fear can promote a much-needed sense of urgency. Yet I am compelled to point out the sometimes counterproductive and unethical uses of appeals to fear, particularly when it comes to talking about climate change.

The field of communication has produced a body of research about the efficacy of using fear appeals. Results about the effectiveness of fear appeals related to climate change are mixed, with some studies showing fear appeals to be effective catalysts for attitudinal or behavioral change and others finding fear appeals ineffective. Other research reveals a “boomerang” effect of fear appeals that produces the opposite of the intended effect. Still other research suggests that a saturation of fear appeals can also diminish the public’s engagement with climate change issues, leading to apathy and issue fatigue.

We might instead ask whether fear is an ethical and appropriate way for scientists to communicate the risks of climate change to varied publics.

Fear appeals attract attention, to be sure, but from an ethical standpoint, their corrosive effects on public life are readily apparent. As Ted Nordhaus and Michael Shellenberger aptly summarized in the New York Times in 2014, “More than a decade’s worth of research suggests that fear-based appeals about climate change inspire denial, fatalism, and polarization.” Fear appeals can further induce a sense of helplessness in the face of “catastrophic” and “unprecedented” climate change, which can seem to be so large as to exist outside the realm of human intervention. In short, scholarship from communication and related fields repeatedly suggests that appeals to fear are inadequate to sustain long-term engagement with climate change.

Our democracy, not to mention the future of life on this planet, depends on robust engagement with scientific and technical matters. To facilitate such engagement, scientists must find ways to communicate with publics and politicians in ways that connect data with the everyday experiences and lived concerns of a variety of people and groups, ways that create both a sense of concern and a belief that effective change can be made. Focusing on productive solutions to the challenges posed by climate change is one means of fostering engagement. Narrative, imagery, metaphor, and making active links to daily life are other rhetorical tools besides fear appeals that can engender climate change concern and encourage attitudinal and behavioral change. Together, these provide productive platforms on which to build and sustain both public—and planetary—life.

Lisa Keränen

Chair, Department of Communication
University of Colorado Denver


One of the great ironies of the national conversation on climate change is that it forces scientists, who are supposed to seek truth about the world, to become experts in lies. When we speak in public, we confront a vast landscape of untruth for which we were never prepared. These lies come in different flavors, each requiring a different strategy to counteract. There are brazen falsehoods (“coal ash is good for human health”) designed to shock you into silence. There are willful misinterpretations where something true (“water vapor is a strong greenhouse gas”) is used to support something patently false (“so carbon dioxide cannot be responsible for global warming”). Attaching a lie to an incontrovertible fact forces you to concede part of the point, and you risk looking pedantic or desperate in countering it. There are lies that pretend to accept the facts but deny their consequences (“the worst-case warming scenarios won’t be so bad”) and lies that hide in the uncertainty that always accompanies discovery. The continued existence of science can always be used as an argument against science by those who maintain that if we don’t know everything, we must know nothing.

Then there are the criticisms leveled at our own truthfulness. Speak in precise technical language, and you must be concealing something from the nonexpert public. But adopt a metaphor to explain something complicated, and the inexactness of the comparison means you must be lying. Scrub your communications of all emotion, and lose human connection. But reveal anger or fear, and destroy your credibility as an objective observer. In our scientific training we learn to write for our peers and, sometimes, to clearly state the broader impacts of our work. But at no point do we learn to anticipate the half-lives of public statements, how our words can be turned against us, taken out of context, and passed around to justify attacks on us, our colleagues, or science itself.

The truth, to paraphrase the poet Walt Whitman, contains multitudes. But untruth contains more. For every true story that can be spun from existing facts, there are thousands more lies that use, ignore, or twist the facts in myriad ways. We learn quickly not to engage with the blatant lies, because when one is cut down, many more arise to take its place. Once facts are added to the mix, though, we struggle. Where is the boundary between true and false? When does incompleteness become misunderstanding, and when does misunderstanding become obfuscation? We’re not taught to know the differences.

The best thing about science is that it doesn’t have all the answers. Scientific expertise and training can’t prepare us for the landscape we face. We need help—from our colleagues in the humanities and social sciences, from artists and writers, from members of communities on the front lines of climate change. Despite our lack of training, scientists are beginning to find our voices and to speak out. We need to remember to listen too.

Kate Marvel

Associate Research Scientist
NASA Goddard Institute for Space Studies
Department of Applied Physics and Mathematics
Columbia University


In response to Adam Briggle, I’ll note that the ancient Greek sophist Gorgias, in the dialogue Plato named after him, claims that his art of rhetoric allows practitioners to captivate audiences with their eloquent speech. Gorgias asserts, for example, that he persuades patients better than his brother, the doctor. And in debates about infrastructure the rhetors, not the engineers (he says), are the ones who get their proposals adopted.

Even if Gorgias’s claim to persuasive omnipotence is true—and given his lackluster performance in the dialogue, it likely isn’t—the power to displace experts and enslave citizens is not one welcomed in any well-constituted republic. A Gorgianic rhetoric of effectiveness inevitably arouses resistance in those subjected to its control; a focus on persuasion destroys the conditions of trust that make persuasion possible.

Later theorists learned from Gorgias’s embarrassment. Rhetoric, the art of civic discourse, must provide norms for conduct, not just tips for effectiveness. Rhetoric is bound in some way to be good. But good how? One leading proposal is that the art of rhetoric provides a framework of officia, Latin for “offices”: roles in civic decision-making that rhetors can take on, each with its own set of responsibilities. The persuasive effect of fulfilling any role can vary widely depending on circumstances far beyond a rhetor’s control. But rhetors will be judged successful if they do what they can to fulfill their responsibilities.

The expansion of expertise and the broadening of democratic participation have opened new roles for experts beyond those available to Gorgias’s doctors and engineers. Advisers take responsibility for helping decision-makers make the right decision. Advocates take responsibility for justifying the right decision—right according to them. Reporters (that is, report writers) take responsibility for empowering audiences to make well-founded assessments of complex topics. Arbiters take responsibility for issuing authoritative pronouncements on matters submitted to them. Translators take responsibility for making scientific results truly available to publics. Prophets take responsibility for calling the people back to the righteous way. In all these roles, and others, scientists contribute to a flourishing ecology of civic discourse.

There is no rhetoric without responsibilities. Briggle’s call for wider recognition of what he calls the “responsible rhetoric of research” is therefore welcome, especially in the current rush to develop a science of effective science communication. And his newly coined category for judging misconduct should both help scientist-rhetors become more aware of the responsibilities they can undertake and provide them with the resources they need to meet them—the traditional rhetorical tools of personal character, reasoned argument, and, always, appropriate emotion.

Given the variety of available roles, there can be no single standard of fact or fear regulating scientists’ rhetorical performances. What we can say is that no scientist in any role is responsible for resolving our civic controversies. Demanding that scientists be able to captivate a congressional committee means focusing not on their responsibilities but on their persuasiveness. And as Gorgias found, the single-minded pursuit of effective science communication self-destructs.

Jean Goodwin

SAS Institute Distinguished Professor of Rhetoric & Technical Communication
Department of Communication
North Carolina State University

Talking about gene-edited crops

In “Regulating Gene-Edited Crops” (Issues, Fall 2018), Jennifer Kuzma identifies four patterns of behavior that undercut the public’s confidence in first-generation crop biotechnology, and she argues that despite scientists’ best intentions, these patterns are being replicated as targeted modification and gene editing usher in food biotechnology 2.0. She maintains that attempting to find more strategic language, refusing to label, avoiding an overhaul of the regulatory process, and insisting on a science-based “product not process” approach to risk all serve to reduce the public’s confidence in the institutions for governance of food biotechnology. These behaviors communicate a negative image of scientific ethics that itself contributes to the perception that products of agricultural biotechnology—including gene-edited biotechnology—are themselves risky.

Kuzma’s four behaviors contribute to a vicious circle of reasoning that amplifies the perception of risk. They suggest that as a group, scientists developing gene-edited crops are self-seeking and less than forthcoming when others raise any question about their methods or motives. They are, to use Kuzma’s phrase, judged to be untrustworthy, whether this judgment is warranted or not. But the cycle of reasoning does not end there. Common sense says that it is risky to rely on untrustworthy people or institutions, and this implies caution when assessing whatever product or service they might be offering. But this is translated into the judgment that the product itself is risky, and, of course, anyone who offers a risky product is not to be trusted. So, the cycle begins again.

This is the virtue-risk feedback loop that I described in my 2015 book, From Field to Fork: Food Ethics for Everyone, and it is important to recognize that it is perfectly rational. Although regulatory risk assessment appropriately quantifies hazards and assesses their likelihood, daily life requires judgments in which uncertainty about the quality of one’s information can create peril. When the person or group supplying that information has something to gain, assessing their behavior may be a more reliable way to judge the riskiness of one’s situation than listening to what they say. As the economist George A. Akerlof explained as far back as 1970, in an article titled “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism” in the Quarterly Journal of Economics, the sleazy used-car dealer became a paradigm example of the untrusted source of information in US culture, and the product he was selling was highly discounted as a result.

This means that it is entirely rational to be suspicious about the safety of food biotechnology. The evidence for this judgment does not reside in the hazards of using rDNA techniques or in the remote possibility that some hazard will materialize. The evasive and unethical behavior is reason enough for the average person to be circumspect. Although I do not personally have grave concerns about gene-edited crops (or about first-generation genetically modified organisms, for that matter), I will vigorously defend the reasonableness of those who do, given the patterns that Kuzma has identified. Calling nonscientists who raise questions about bioengineered crops irrational then emerges as yet another mistake that scientists need to learn not to make.

Paul B. Thompson

Michigan State University
