[Artwork: Amy Karle, "BioAI-Formed Mycelium" (2023)]

The Question Isn’t Asset or Threat; It’s Oversight

As part of a research group studying generative AI with France’s Académie Nationale de Médecine, I was surprised by some clinicians’ technological determinism—their immediate assumption that this technology would, on its own, act against humans’ wishes. The anxiety is not limited to physicians. In spring 2023, thousands of individuals, including tech luminaries such as Elon Musk and Steve Wozniak, signed a call to “pause giant AI experiments” to deal with “profound risks to society.”

But the question is more complex than restraint versus unfettered technological development. It is about different ways to articulate ethical values and, above all, different visions of what society should be.

A double interview in the French newspaper Le Monde illustrates the distinction. The interviewees, Yoshua Bengio and Yann Le Cun, are friends and collaborators who both received the 2018 Turing Award for their contributions to computer science. But they have radically different views on the future of generative AI.

Bengio, who works at a nonprofit AI think tank in Montreal, believes ChatGPT is revolutionary. That’s why he sees it as dangerous. ChatGPT and other generative AI systems work in ways that cannot be fully understood and often produce results that are simultaneously wrong and credible, which threatens news and information sources and democracy at large. His argument mirrors philosopher Hans Jonas’s precautionary principle: since humanity is better at producing new technological tools than foreseeing their future consequences, extreme caution about what AI can do to humanity is warranted. The solution is to establish ethical guidelines for generative AI, a task that the European Group on Ethics, the Organisation for Economic Co-operation and Development, UNESCO, and other global entities have already embraced.

Le Cun, who works for Meta, does not consider ChatGPT revolutionary. It depends on neural networks trained on very large databases—all technologies that are several years old. Yes, it can produce fake news, but dissemination—not production—is the real risk. Techniques can be developed to flag AI-generated outputs and reveal which texts and images have been manipulated, creating something akin to today's antispam software. For Le Cun, quashing the dangers of generative AI will itself rely on AI. It is not the problem but the solution—a tool humanity can use to make better decisions. But who defines what counts as a "better decision"? Which set of values will prevail? Here I see parallels between Le Cun's arguments and those of the economist and innovation scholar Joseph Schumpeter, who argued that within a democracy, the tools humans use to institutionalize values are the law and government. In other words, regulation of AI is essential.

These radically disparate views land on solutions that are similar in at least one respect: whether or not generative AI is seen as a technological revolution, it is always embedded within a wider set of values. When it is seen as a danger to humanity, ethics are mobilized. When social values are threatened, the law is brought in. Either way, the solution is oversight of the corporations building AI.

This opens a door for the public to weigh in on future developments of generative AI. A first step is to identify the interests and stakeholders clustered around each position and draw them into discussions about how to better inform the development and regulation of AI. As with every other technological advance, it remains for humans to decide what to make of it.

Cite this Article

Didier, Emmanuel. "The Question Isn't Asset or Threat; It's Oversight." Issues in Science and Technology: 84–85. https://doi.org/10.58875/XHGV3050