Carolina Oneto, "Imaginary Places IV," 2023, cotton fabrics, cotton batting, threads for piecing and quilting, 56 x 55 inches.

Second-Order Effects of Artificial Intelligence

In “Governing AI With Intelligence” (Issues, Summer 2024), Urs Gasser provides an insightful survey of approaches to regulating artificial intelligence at a time of rapid development of a technology that has both tremendous upside potential and serious downside risk. His article should prove especially valuable for policymakers faced with making critical decisions in a rapidly changing and complex technological landscape. And while it is difficult enough to make decisions based on the direct consequences of AI technologies, we’re now beginning to understand and experience some second-order effects of AI that will need to be considered.

Two examples may prove illustrative. Focusing on generative AI, we’ve witnessed over the past decade or so the rapid development and scaling of the transformer architecture and diffusion models, which have revolutionized how we generate content—text, images, software, and more. Applications based on these developments (e.g., ChatGPT, Copilot, Midjourney, Stable Diffusion) have become commonplace, used by millions of people every day. Much has been observed about increases in worker productivity as a consequence of using generative AI, and indeed there are now numerous careful empirical studies demonstrating positive effects on productivity in, for example, writing, software development, and customer service. But as worker productivity goes up, will there be reduced need for today’s quantity of workers? Indeed, the investment firm Goldman Sachs has estimated that 300 million jobs could be lost or diminished by AI technology. The company goes on to estimate that 25% of current work tasks could be automated by AI, with particularly high exposure in administrative and legal positions. Still, the company also points out that workforce displacement due to automation has historically been offset by the creation of new jobs following technological innovation, and that it is these new jobs that actually account for employment growth in the long run.


A second example relates to AI energy consumption. As generative AI technologies and applications scale, with more and more content being generated, we are learning more about the energy consumed in training the models and in generating new content. From a global perspective, one estimate holds that by 2027 the AI sector could consume as much energy as a small country (e.g., the Netherlands), potentially representing half a percent of global energy consumption by then. Taking a more granular view, researchers have reported that generating a single image with a powerful AI model uses about as much energy as charging an iPhone, and that a single ChatGPT query consumes nearly as much energy as 10 Google searches. Here again there may be some good news: it may well be possible to use AI to find ways to reduce global energy usage that more than make up for the increased energy needed to power modern AI.

As use of AI expands, these and other second-order (and higher) effects will likely prove increasingly important to consider as we work to develop policies that lead to responsible governance of this critical technology.

Professor Emeritus, Department of Computer Science

Southern Methodist University

There is much wisdom in Urs Gasser’s essay: verbal and visual maps of the emerging variety of governance approaches across the globe and some cross-cutting insights. Especially important is the call for capacity-building to equip more people and sectors of society to contribute to the governance and development of what seems to be the most significant technological development since the invention of the book. Apple’s Steve Jobs once said, “Technology alone is not enough. It’s technology married with the liberal arts, married with the humanities, which yields us the results that make our hearts sing.” Ensuring that the “us” here includes people from varied backgrounds, communities, and perspectives is not only a matter of equity and fairness, but also important to the quality and trustworthiness of the tools and their uses.


In mapping governance approaches, Gasser includes developing constraining and enabling norms; efforts that seek to “level the playing field” through AI literacy and workforce training; and transparency or disclosure requirements that “seek to bridge gaps in information between tech companies and users and societies at large.” Here the challenge is not simply unequal knowledge and resources, but also altering who is “at the table” where vital decisions about purposes, design choices, risk levels, and even data sources are made.

Individual “users” are not organized, and governments and companies are more likely to be hearing from particularly powerful and informed sectors in designing governance approaches. What would it take to ensure the active involvement of civil society organizations—workers’ groups, Indigenous groups, charitable organizations, faith-based organizations, professional associations, and scholars—in not only governance but also design efforts?

Experiments in drawing in such groups and individuals would be a worthy priority for governance initiatives. Meanwhile, Gasser’s guide offers an effective place for people from many different sectors to identify where to try to take part.

300th Anniversary University Professor

Harvard University

Much talk about governing artificial intelligence is about a problematic balance. On the one hand, there are those who caution that regulation (not only overregulation) will slow innovation. This position rests on two assumptions, each requiring substantiation rather than assertion: that regulation retards frontier thinking, and that innovation brings wider social benefit beyond profit for the information economy. On the other hand, there are those who fear the risks of AI, some already apparent and many yet to be realized. Whether or not the balance debate is grounded in more than supposition, it raises fundamental questions about how we value and prioritize AI.

Urs Gasser is confident of a growing consensus around the challenges and benefits attendant on AI, but less so about the “guardrails” necessary to ensure its safe and satisfying application. He holds that norms are galvanizing in ways that might give form to governing AI intelligently. No doubt there have been decades of deliberation over formulating an “ethics” to influence the attribution and distribution of responsibility, yet we have come no closer to agreement on what degree of risk we are willing to tolerate for what benefits, no matter the principles applied to either. National, industrial, and global energies directed at governance exhibit a diversity of strategies and languages that is as much evidence of a failure to achieve a common intelligence for governing AI as of an emerging consensus. This is not surprising when politically, economically, commercially, and scientifically so much hope is invested in an AI-led recovery from degrowth and in AI answers to impending global crises.

Are we advancing toward a more informed governance future for AI by concentrating on similarities and divergences in systems, means, aims, and purposes across a tapestry of regulatory styles? Even if patterns can be distilled, do they indicate anything beyond Gasser’s “islands of cooperation in the oceans of AI governance”? He is correct in arguing that guardrails forming boundaries of permission, within which a healthy alliance between human decisionmaking and AI probabilities can operate, are essential if even a viable focus for AI governance is to be determined. However, with governance following what he calls “the narrow passage allowed by realpolitik, the dominant political economy, and the influence of particular political and industrial incumbents,” the need for innovation in AI governance is pressing.


So, we have the call to arms. Now, what to do about it in practical policy terms? Recently, when asked what the greatest danger posed by AI was, a renowned data scientist immediately responded “dependency.” Our digital universe has enveloped us in a culture of convenience, making it almost impossible to determine whether the ways we depend on AI-assisted technology are good or bad. Beyond this crucial governance question, it is imperative to reposition how we prioritize intelligence. Why should generative AI predominate over the natural processes of human deliberation? From the Enlightenment to the present age of techno-humanism, scientific managerialism has come to dominate reason and rationality. It is time for governance to show courage in valuing human reasoning when measuring the benefits of scientific rationality.

Distinguished Fellow, British Institute of International and Comparative Law

Honorary Professorial Fellow of the Law School, University of Edinburgh
