“The Complexity of Technology’s Consequences Is Going Up Exponentially, But Our Wisdom and Awareness Are Not.”

Tristan Harris talks about the challenge of online misinformation, ways to govern artificial intelligence, and a vision of technology that strengthens democracy.

Tristan Harris is a technology ethicist and the cofounder of the Center for Humane Technology. He’ll be speaking at the Nobel Prize Summit 2023: Truth, Trust, and Hope at the National Academy of Sciences on May 24–26. In advance of the summit, Harris talked with Issues editor Sara Frueh about the challenge of online misinformation, ways to govern artificial intelligence, and a vision of technology that strengthens democracy.

Tristan Harris. Photo courtesy of Abby Hall.

Science aspires to be accurate and impartial and nuanced. In the current online environment, is it even possible to convey scientific information well? If so, how should scientists and journalists approach that?

Harris: Oftentimes people think, “Well, I’ll do it personally in a way that is nuanced and offers context, and if I can be one of the good actors, maybe I can set an example, and maybe other people will follow me, and then maybe it’ll be a race to the top for who does that nuanced, context-driven thing better.”

But the fundamental problem is an engagement-based design paradigm in which, for example, social media algorithms rank which content we see, and the design choices reward shorter, bite-sized bits of information that favor a lack of context and nuance. This is a side effect of sorting for what is engaging—and engaging is sticky and mimetic and simple.

So it’s not a content moderation problem; it’s a design and paradigm and business model problem. The system in place rewards values that are not congruent with the scientific process and that kind of epistemic practice.

We at the Center for Humane Technology focus all our time on systemic change. Short of that, there are certainly better practices and good examples out there. Instead of debates, can we have antidebates where people come to the table to discuss any controversial issue? Could participants come to the table starting with what they agree with from the other side’s perspective and how they would build on top of it? Could there be good faith turn-taking? Could all the key stakeholders of a major debate be represented? There are all these subtle qualities that we call good faith dialogue that we’re not used to encoding explicitly. But if you were to encode them, you’d start to see that there are ways of doing collective sense-making of an issue.

It’s not that I don’t think good faith journalism or other forms of media can exist online—it’s just that we want a world that rewards that systematically.

When you think about how AI—especially the large language models/chatbots—is likely to affect misinformation and its impacts, what worries you most? And can we do anything about it?

Harris: It is an enormous problem. One of the open questions people have is: Why haven’t GPT-2 and GPT-3 already resulted in the mass proliferation of AI-generated accounts spreading misinformation on the internet? Of course, one of the answers could be that it has, and we just haven’t found out yet, because the output really does pass the Turing test—it won’t carry clear marks that it’s machine generated. I think that presents a real problem.

I think we’ll need to start looking at—everybody’s been saying this—human authenticated real ID-type systems that verify that I am indeed a unique human being; I can’t register multiple accounts. And then create headwinds so that people—individual accounts that represent real human beings—have an increasing price for posting more frequently.

It’s almost like insurance. You can’t just have 20 things happen to your life and use your insurance over and over and over again. Your premium goes up. There’s a kind of a headwind that disincentivizes people from overexploiting a channel or a commons. There’s a commons of attention, and if people are posting frequently, introducing some amount of scarcity into that might also help—equalizing the playing field and putting some limits and constraints there. Wouldn’t we do better if we had a slower, more thoughtful information sharing economy?
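To make that headwind concrete, here is a minimal sketch of what a rising posting price for a verified account could look like. It is purely illustrative: no platform has specified such a scheme, and the function name, the free-post allowance, and the cost curve are all invented for this example.

```python
def posting_cost(posts_this_week: int, base_cost: float = 0.0,
                 step: float = 0.10, free_posts: int = 10) -> float:
    """Friction (money, delay, or reduced reach) attached to the next post."""
    excess = max(0, posts_this_week - free_posts)
    # Quadratic growth keeps the headwind gentle at first, then steep
    # for accounts that flood the shared attention commons.
    return base_cost + step * excess ** 2

for n in (5, 10, 20, 50, 100):
    print(f"{n:>3} posts this week -> next post costs {posting_cost(n):.2f}")
```

The shape mirrors the insurance analogy: the first posts cost nothing, and the price climbs steeply once an account exceeds its allowance, so ordinary participation is untouched while overexploiting the commons becomes expensive.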

We really are left with some conundrums. There’s a great paper by Tim Wu, a law professor who spent a couple of years working at the White House. In 2017 he wrote “Is the First Amendment Obsolete?” One of the things I had to reckon with is that a lot of our ideas about freedom of speech originated in a time when speech was expensive and listening was cheap. Now that’s been reversed: listening is expensive and speech is cheap, because now I can automate and dump speech into the commons ad nauseam. Listening has gotten expensive because everything is now an opportunity cost. The attention economy is maxed out.

And so we have to recognize when there’s a limit, when the moral and philosophical guiding values are out of date for the new reality. To the degree that there are economic incentives for moving attention around in the world, we should structure them to help our information system respect attentional boundaries.

You’re talking about one of the problems being the quantity of all these things vying for our attention. What about problems with the quality of information—for example, people who are just playing around with ChatGPT have noticed that it spits out wrong information once in a while, from what may feel like an authoritative source?

Harris: Yeah, hallucination is a problem. There’s the malicious use of a new technology, which could be flooding the zone with media generated by bad actors, or there could just be a political motive. The 2024 presidential campaign—will that be a human campaign, or will it basically be information bots flooding the zone, all run by AI?

We have to recognize when there’s a limit, when the moral and philosophical guiding values are out of date for the new reality.

And that is different from what you’re talking about, which is that even when people are trying to use these tools wisely—to rely on large language models (LLMs) for information, kind of like we rely on Wikipedia now—they can end up getting bad information. We saw this actually happen with Wikipedia 20 years ago. People used to say, “Don’t use Wikipedia as a source,” and now people have just social-normed their way to, “Well, it’s OK, everyone’s doing it.” There’s probably going to be a similar thing with hallucinations by LLMs.

There’s evidence that if you simply prompt GPT-4 to reflect on the possible mistakes it might have made in its answer, and then ask it to correct them—without specifying the answer you think it should be—it can correct some of those mistakes. No one who shaped GPT-4 knew that was possible. The ability to self-correct was only studied later, after the model had already been released.
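The self-correction behavior described above can be sketched as a simple reflect-then-revise prompting loop. This is a minimal illustration of the general pattern, not any lab’s specific method; `ask_llm` is a hypothetical stand-in for whatever chat-completion call a developer actually uses.

```python
from typing import Callable

def answer_with_self_correction(question: str,
                                ask_llm: Callable[[str], str]) -> str:
    # 1. Get a first draft answer.
    draft = ask_llm(question)

    # 2. Ask the model to look for mistakes in its own draft,
    #    without telling it what the "right" answer should be.
    critique = ask_llm(
        f"Question: {question}\n\nDraft answer: {draft}\n\n"
        "List any factual or reasoning mistakes in the draft answer."
    )

    # 3. Ask it to produce a corrected answer in light of its own critique.
    return ask_llm(
        f"Question: {question}\n\nDraft answer: {draft}\n\n"
        f"Possible mistakes: {critique}\n\n"
        "Write a corrected final answer."
    )
```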

I’m not saying that as an AI optimist. The point is that we really don’t know what alien mind we are interacting with. We are deploying this stuff far faster than we understand what the capabilities are. It’s very reminiscent of social media. At first, YouTube’s recommendations looked pretty great. And yet, if you walk away from YouTube and come back 10 years later, suddenly everyone believes in a different fragment of a broken reality. You can’t detect the unpredictability of a system like this up front.

What are your thoughts on how governance of AI should operate going forward?

Harris: It’s such a huge conversation, because there are major for-profit centralized AI labs like OpenAI, like Anthropic, like Google, that are building large AI systems trained at massive data centers and computer farms of GPUs. Governing those actors is different than governing sites like GitHub (an open online platform where developers create and share code).

Facebook’s model leaked onto the open internet, and now anyone can use their 65-billion-parameter model. I can download it and run it on my MacBook M2. If I can do dangerous things with Facebook’s leaked model, how are you going to govern me? I’m a citizen. I have free speech. I can do anything on my laptop.

I think open societies have to reckon with the capabilities that now exist in anyone’s hands: a 15-year-old who’s just playing around on GitHub can download and experiment with dangerous AI capabilities. That’s one of the governance challenges that we have.

The point is that we really don’t know what alien mind we are interacting with.

When we got into this conversation 10 years ago, we were thinking about, how are we going to do AI ethics and design ethics for technology? The most obvious thought is we need a Hippocratic oath for technologists—a “do no harm” oath. We need the equivalent of a white lab coat. But ethical training doesn’t get you all the way to the level of responsibility we would need.

We need to try to apply the principle that power has to be matched with commensurate responsibility and awareness and wisdom. One of the meta problems is that the complexity of technology’s consequences for society is a line that’s going up exponentially. But our wisdom and awareness at institutions and in culture are not. There’s a gap, and in that gap is where externalities and risk and fragility grow.

To make this concrete, if AI has unpredictable capabilities and consequences—capabilities like theory of mind, or knowing chemistry, or being able to compel TaskRabbits with bank accounts to do things in the physical world—then there are really dangerous things that can happen. If those things exceed what the governing institutions can anticipate and preempt, then existential risk grows in that gap.

The real question is, how do we bring those lines into alignment? That either means slowing down some of the complexity of the technology development, as the “Pause Giant AI Experiments” letter called for, or it means increasing the complexity of governance and getting to consensus and agreement way faster. Structurally speaking, governance of tech has to have as fast of an evolutionary update loop as tech’s evolutionary update loop. Are our existing institutions adequate or able to move at that pace? They’re not.

The best example of that kind of governance that we have is the work done by Audrey Tang, Taiwan’s digital minister. She built a digital governance system that can leverage the collective intelligence of the people.

To that point, what is the role for policymakers and regulators in this? In terms of closing the gap between our technological capabilities and our wisdom, it seems like policymakers and citizens might have stronger incentives for wisdom than companies do.

Harris: There’s no incentive for wisdom for for-profit actors who see themselves as acting in an arms race, where the driving ethos is, “If I don’t race to deploy, I’ll lose to the companies that do.” We definitely need to govern it and change the incentives. How do we get the actors competing toward a healthier, positive outcome? That has to be done with governance and regulation of some kind.

Structurally speaking, governance of tech has to have as fast of an evolutionary update loop as tech’s evolutionary update loop.

But the governance question is not just about, “Should Congress pass this law?” The answer to nuclear weapons wasn’t “let’s just pass this law on nukes.” It required international cooperation and new institutions—the Nuclear Test-Ban Treaty and the United Nations. There’s this whole set of things that have to exist. AI’s going to be very similar to that.

Given all the pieces that have to combine and coordinate to do governance for AI, how can it start? Who should take the first step?

Harris: It has to start with people and citizens, and I don’t say that in a populist political movement sort of sense. Social media’s corrosion of the social fabric and social trust means that we now live in a legitimacy crisis that puts our institutions in a double bind.

For example, the Biden administration launched a Disinformation Governance Board to deal with the problem of disinformation. The critique that went most viral on social media was, “It’s a Ministry of Truth.” That got everybody outraged, and the board was disbanded shortly after, because the most radical and cynical take is what went the most viral. When institutions respond in good faith to a crisis, social media will always reward the most bad-faith and cynical interpretation of their motives, because that interpretation is what spreads furthest. That’s a good example of what happens to institutions operating without legitimacy in our current social media environment.

Social media’s corrosion of the social fabric and social trust means that we now live in a legitimacy crisis that puts our institutions in a double bind.

Then you have institutional failures—the government making the wrong calls, such as wasting billions of dollars in misallocated handouts during the pandemic—and people are going to be upset about that and rightly call it out. Social media amplifies both. Regulation and governance are almost impossible in this kind of legitimacy crisis.

So the first step is to get citizens together in an Audrey Tang-style process where people take a narrow, AI-related issue, like intellectual property: How do we deal with compensating artists? They come up with an answer, like an attribution scheme that pays artists a certain amount based on how much of their work the system was trained on.
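As a toy illustration of what such an attribution scheme might compute, the sketch below splits a revenue pool in proportion to per-artist attribution shares. The artist names, shares, and pool size are hypothetical, and estimating the shares themselves (how much of an output traces back to whose work) is the genuinely hard, unsolved part.

```python
def attribution_payouts(shares: dict[str, float],
                        revenue_pool: float) -> dict[str, float]:
    """Split a revenue pool proportionally to each artist's attribution share."""
    total = sum(shares.values())
    if total == 0:
        return {artist: 0.0 for artist in shares}
    return {artist: revenue_pool * share / total
            for artist, share in shares.items()}

shares = {"artist_a": 0.040, "artist_b": 0.010, "artist_c": 0.002}
for artist, payout in attribution_payouts(shares, revenue_pool=100_000).items():
    print(f"{artist}: ${payout:,.2f}")
```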

This kind of approach could help repair social trust and legitimacy. Let’s say that you, as a citizen, know that some other representative group of 100 citizens came together and sat in a room for three days, with top experts informing the process, and then they came out with, “This is a proposal of what we want to do.” You can’t disagree with the legitimacy of that proposal. It’s not a special interest. It’s not a lobby. It’s not a corporation. It’s people debating the issues.

That gets you legitimacy, which then gets you governance that doesn’t just die 10 days later because social media amplified bad faith takes. Citizens coming together in a process like I described creates the legitimacy for the legislation or the law to be deployed. And then that creates guardrails that are actually meaningful.

Are there ways to use technologies like social media and AI that would strengthen democracy rather than weaken it? What would that look like?

Harris: First, just to say—that is the goal. The goal is not a no-technology society or a less-toxic technology society, because we’re currently in a downward spiral where tech erodes culture and breaks shared reality. And that then creates a culture that is oblivious and doesn’t legitimize institutional responses, which means tech goes unregulated, which keeps the downward spiral going. So 10% less-toxic social media doesn’t get you to a different world; it just gets you to a slower downward spiral.

Imagine a world where AI understands the natural regenerative capacities or pathways of a social fabric.

We need to get to: tech plus democracy equals stronger democracy; tech plus families equals stronger families; tech plus mental health equals stronger mental health. Those are the equations that we’re looking for. How could it work? I think it is totally possible. It would require a really radical set of different incentives—we have to look at what regenerates the health of the social fabric.

Ask yourself, for example: What helps cultivate the strength and health of the social fabric? What builds trust between people? What builds shared reality between people? What helps people’s mental health improve? Then you identify the things that synergistically satisfy those conditions—shared activities, group events, these kinds of things. For example, when people spend time with people they disagree with, and the other side is a real person in front of them, that heals a lot of polarization.

Imagine a world where AI understands the natural regenerative capacities or pathways of a social fabric—AI that creates and routes human beings to the subjective experiences that satisfy more of those underlying conditions that regenerate the social contract. Instead of a better AI-powered dating app that routes you to the perfect match, which is going to be hard to do anyway, the AI routes us to more community experiences. It’s more like we are just in abundant social connection all the time, which has many other positive cascading benefits.

Specifically for improving democracy—I think we can use AI to find what different groups might agree on in a seemingly otherwise polarized debate, and use large language models to find and generate novel policy ideas that multiple groups might agree with. Audrey Tang is actually implementing that right now in her Taiwan system. Her next version is using GPT-4 and other large language models to accelerate the process by which consensus can occur. That’s a very optimistic and positive vision for how democracy can work with AI.
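A rough sketch of that consensus-finding step, surfacing the statements that members of otherwise-opposed groups all tend to agree with, might look like the following. This illustrates the general idea rather than Audrey Tang’s or any platform’s actual implementation; the groups, votes, and agreement threshold are invented, and in a real deployment the groups would be discovered by clustering and an LLM might draft the candidate statements.

```python
from statistics import mean

# votes[statement][group] = list of votes from that group (1 = agree, 0 = disagree)
Votes = dict[str, dict[str, list[int]]]

def bridging_statements(votes: Votes, threshold: float = 0.6) -> list[str]:
    """Return statements whose approval rate meets the threshold in every group."""
    return [
        statement
        for statement, by_group in votes.items()
        if all(mean(group_votes) >= threshold for group_votes in by_group.values())
    ]

votes = {
    "Label AI-generated political ads": {"group_a": [1, 1, 1], "group_b": [1, 1, 0]},
    "Ban all training on artists' work": {"group_a": [1, 0, 1], "group_b": [0, 0, 0]},
}
print(bridging_statements(votes))  # -> ['Label AI-generated political ads']
```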


Cite this Article

Harris, Tristan, and Sara Frueh. “The Complexity of Technology’s Consequences Is Going Up Exponentially, But Our Wisdom and Awareness Are Not.” Issues in Science and Technology (May 16, 2023). https://doi.org/10.58875/TQKW5953
