Episode 29: To Solve the AI Problem, Rely on Policy, Not Technology
Artificial intelligence is everywhere, growing increasingly accessible and pervasive. Conversations about AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate Crawford has long drawn attention to the potential harms AI poses for society: exploitation, discrimination, and more. She argues that minimizing risks depends on civil society, not technology.
The ability of people to govern AI is overlooked because many people approach new technologies with what Crawford calls “enchanted determinism,” seeing them as both magical and more accurate and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive policy approaches around the social consequences of AI. Across her work in industry, academia, and elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections into product development and deployment.
Resources
- Learn more about Kate Crawford’s work by visiting her website and the AI Now Institute.
- Read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
- Visit the Anatomy of an AI System artwork at the Museum of Modern Art, or see and learn about it virtually online.
- Working with machine learning datasets? Check out Crawford’s critical field guide to think about how to best work with these data.
Transcript
Monya Baker: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and by Arizona State University.
Artificial intelligence has become a pervasive part of our society. It helps screen job applicants, advise law enforcement, conduct research, even finish our sentences. But what role should it play? We often marvel at technical accomplishments and overlook how AI shapes society.
My name is Monya Baker, senior editor of Issues. On this episode, I’m joined by Kate Crawford, a leading scholar of the social and political implications of AI. Her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, examines the hidden cost of AI.
Welcome, Kate.
Kate Crawford: It is lovely to be here with you, Monya.
Baker: In your book, you call AI an extractive industry. What does that mean?
Crawford: Well, it’s one of those words that we really associate, of course, with mining. Now, of course, mining is part of the backbone of artificial intelligence as well, but the extractions that make AI work go all the way through data. So we can think about the way that large language models and ChatGPT are trained on datasets that are basically the size of the internet, which is entirely extracted and then harvested. We can think about the labor that is behind so many of these systems, from the clickworkers who are there labeling the datasets all the way through to the people who are actually doing the grunt work, often pretending to be AI systems themselves.
It’s very extractive of human labor while also hiding that human labor. And then finally, it’s very extractive in terms of environmental resources. Here we’re talking about the energy and water used in all of those data centers and, of course, the mineralogical layer: all of those rare earth minerals, lithium, cobalt, all of the things that make planetary computation work at scale. Across all three of those vectors, we see that it is actually the extractive industry of the twenty-first century—building on prior extractive industries, but with a new focus specifically around data and labor.
Baker: What conversations do you think are really missing around AI? What are some things people aren’t talking about that they need to be talking about or maybe things that they are just now starting to talk about?
Crawford: There are so many, Monya, it’s hard to know where to begin. I mean, certainly, look, I’ve been in this research space for a long time. One of the big traps I think people fall into is assuming that AI is a very abstract, immaterial, and objective set of mathematical functions or algorithms in the cloud. But it’s a profoundly material technology that’s leaving an enormous footprint on the planet, so contextualizing how AI is made, and I mean that in the fullest sense, is really core to the work that I do. It’s really showing people that if we pull the curtain away, just like in The Wizard of Oz, you’ll actually see that there are a few men holding the levers, creating these systems that require such enormous resources to make work.
The other thing, of course, that’s very timely at the moment is the way in which large language models like GPT have really been focused on as a type of almost superintelligence. There is a lot of attention on the fact that we might be seeing the creation of so-called AGI, or artificial general intelligence, and I think this is an enormous distraction from how these systems actually work and from the very real problems that they have, both in terms of their technical errors, but also in terms of the social harms that they could actually cause because of the ways in which they’re being applied so widely, from healthcare to education to criminal justice to training, you name it. In that sense, I think part of what we need to do is really to demystify these types of magical discourses that get built up around artificial intelligence.
Baker: Another thing that you’ve talked about, which I think plays into this really well, is the epistemological assumptions that AI can entrench: that there are only six emotions, that there are only five races. I wondered if you could just talk about some of those conceptual errors. I was quite intrigued by your term enchanted determinism.
Crawford: Certainly. Well, enchanted determinism is a concept that comes from a paper that I wrote with the historian of science Alex Campolo, and we tracked this phenomenon, which has really been emerging since the 1970s. When people are presented with an artificial intelligence system, the system is often framed as being both magical and enchanted, but also deterministic, which is to say that it can make startlingly accurate predictions about what’s going on or about potential future events.
I think this phenomenon, this idea of seeing something as both magical but deterministic, creates a set of secondary effects such as assuming that we simply cannot understand how they work—they’re beyond us—and that we simply cannot regulate them because they are so magical and we have to allow them to do their thing. I think both of these phenomena are interrelated, but actually very dangerous, particularly because in so many ways, we can see how these systems are constructed. While it’s difficult to say exactly how a neural net achieves the answer, we know how we train them, we know what the algorithms are designed to do, and we know that they are in dire need of regulation, particularly when they’re actually causing serious harms, infringing on basic fundamental rights, et cetera.
Part of what I think we need to do as part of this demystification is really to see these as we would see any other technology that is both very powerful but also has very real downsides. We could think here of nuclear power, for example. We could think of pharmaceuticals, which have very rigorous regulation and policy regimes. And frankly, in the United States, we’re very far behind that.
To answer the first part of your question around the politics of classification, we most commonly hear about the phenomenon of bias in AI. We can see that these systems quite frequently produce discriminatory results. They can represent people in highly stereotypical ways, and they have these core assumptions which are really problematic, like six emotions or two genders or five races—ideas that have been thoroughly scientifically discredited, yet live on, hard-coded into AI systems.
Part of what I think is really useful here is looking at these classificatory logics and tracing their histories. Where does this idea come from, that we could track and detect and predict emotion using AI systems? Certainly, that’s one of the things that I studied in Atlas of AI, and I have a chapter just looking at the strange history of emotion recognition. In looking at that, we see that these ideas have long been regarded as deeply problematic and have been critiqued since their very inception, yet they get built into systems because they give us a false sense of quantifiability: that you can look at a person, see a facial expression, and then somehow quantify what they’re feeling on the inside. There is literally no scientific evidence that this is possible. In fact, there’s considerable evidence that it is not possible.
A lot of what I’m doing here is really testing the scientific precepts of how these systems work, and often we find that they fail, and they have baked within their logics these really retrograde notions that have been thrown out of the scientific discourse and certainly seen as very socially and ethically problematic. That’s why keeping that skepticism, keeping that investigative focus on these systems is so needed.
Baker: Even though that scientific basis has been thrown out, the systems are making consequential decisions about other people’s lives and livelihoods. What do you see as some of the consequences that people need to be paying more attention to?
Crawford: Well, it’s interesting because, of course, there are so many AI systems that are part of our everyday lives, that we interact with on our phones and on social media, now using large language models. And there’s a whole lot of AI systems that we never see, sitting in the back end of other systems, producing recommendations that may only be machine-readable but may ultimately influence whether you get a loan, whether you get bail, or what sort of class you’re accepted into. When we look at these systems individually, it’s also really important to look at the interconnections between them: if you start to get red-flagged by one system, that flag will start to cross into another, and we start to see a whole range of really negative impacts.
I think, in the end, I’ve really learned a lot from working with practitioners who are based in very specific areas of expertise—be that medicine, be that education, be that hiring—and then listening to their concerns and seeing where these systems are actually making bad recommendations that can materially impact people’s lives. That’s certainly where I think we need to put our focus as researchers.
But now we’re also facing a series of new and emergent, quite dispersed harms from these new chatbots and large language models that are being built into search, into word processing, and into so many business and entertainment systems that we use every day. Because these large language models are still so new, the risks are emerging as we use them. We could think about the ways in which large language models frequently make mistakes and frequently share forms of disinformation. They also have a tendency to bake in a particular worldview; they come with their own ideological position, which is stated as though it were fact. And obviously they give rise to a range of potential defamation and privacy infractions that are occurring all the time.
With every one of these new systems, we start to see taxonomies of harm emerge and then expand. And this is another reason why we urgently need more people from the social sciences and humanities, disciplines trained to study societies, interrelational communication, and the way in which complex social institutions work, looking at these issues and asking these questions before an AI system is deployed, rather than waiting until it’s beta tested on millions or billions of people and we see those downsides in real time, which, of course, is far more dangerous.
Baker: How can society be deliberate about how AI affects it, given that technological advancement outpaces regulation?
Crawford: Well, I’ll be honest with you, Monya, this is one of the issues that keeps me up at night. Certainly, societies have dealt with emergent, powerful technologies in the past—we could think about the car, we could think about the player piano. These were technologies that mechanized particular functions, such as playing music or getting from point A to point B, and what has always happened is that forms of regulation have grown up around them. There have been forms of law, forms of policy that were set around these technologies to increase their safety, reduce harms, and, in the case of the player piano, protect the incomes of the creatives whose work was being drawn on to produce the technology.
Again, we’re seeing that now with AI systems that can produce images or sound or video with just a text prompt. Of course, all of those systems have been trained on the work of people—creatives over decades and centuries—who are not receiving recompense for the use of their work in these models. These are the traditional questions of law and regulation. But in so many ways, as you’ve pointed out, regulation moves slowly, and in some cases, that’s by design. It’s because it’s deliberative. It’s about allowing people to make decisions about the pluses and the minuses of particular technologies.
But we are really in a moment now where there’s a step-function difference between the speed of innovation and the pace of regulation, which is slow not just because the process itself is slow, but also because a lot of vested interests have been preventing regulations from passing that would make sure people are better protected. Right now, the thing that worries me is that we already have a regulation debt, if you will, in the United States. We still don’t have strong omnibus federal privacy protections, let alone specific AI regulation like what is currently being developed in the EU with its draft (but soon to be real) AI Act.
The other thing that concerns me, honestly, is the way that these very powerful AI models are being produced by fewer than a half-dozen companies. That is a profoundly concentrated industry. They don’t release how these systems work, and we’re simply expected to trust them and to hope that they’re not causing harm. That’s a very concerning position, and in many ways regulators could be demanding greater transparency, greater auditability and, ultimately, greater accountability from these companies. We’re not seeing that yet, and that’s something I find very troubling.
Baker: How do you think that accountability and transparency can be demanded given the powerful vested interests? Who would do it?
Crawford: I think there are ways. There are good examples internationally that we can turn to. One that I’ve worked on, and that is now being used in countries like Canada and Australia, is algorithmic impact assessments. These apply particularly to any service that a government is using. We could think about this in procurement, where, if a company is selling a particular AI system to government, you can say, “We are going to go through an algorithmic impact assessment process, not only to test and see how this works, but also to get community feedback: Do we actually want to see this system being used in, say, welfare, or in the provision of healthcare?” There are so many ways in which we can start to place more guardrails around these systems.
We could also think about trade-secret exceptions, particularly when something is going to be deployed in high-stakes domains. Without having to share the source code publicly, companies can certainly offer to give people access to the systems to test them in highly controlled circumstances—at least so that people can have a sense of what the error rates might be or what the potential downsides of these systems could be.
These are just two of many proposals that have been circulated for how we could actually start to do more rigorous testing and auditing, and certainly we’re starting to see in the EU’s AI Act more attempts to build this into the lifecycle of technology. We’ve really seen this legacy over the last 20 years of “move fast and break things,” this idea that you just build a system and throw it out there and see what happens. Frankly, these systems are now too powerful, affect too many people, and sit in social institutions that are too sensitive for us to let that stand. I think this has ultimately become a democratic question, particularly when these tools are likely to have a direct impact on the ability to trust information in the public sphere and on making sure that people are not being manipulated by the tools that they use. These really are very high-level risks.
Baker: I think it’s so important that the societal implications get baked into the development. What are the best ways to make sure that that happens really concretely?
Crawford: Well, this is something that, as a researcher, I’ve worked on both in the theoretical domain (writing papers and showing different ways we could create systems such as algorithmic impact assessments) and at the practical interface: how do we share, across communities, better ways of working with, say, machine learning datasets, ways that will actually raise these sorts of concerns early?
One of the things I’ve been working on is a project I head up called Knowing Machines, which looks at how we come to know these machines and also at how they become machines of knowing in themselves. One of the things this project has done, which I’ve really enjoyed, is bring together a team that’s very interdisciplinary—including people who sit outside traditional academic disciplines. We have machine learners; we have sociologists; but we also have law professors, investigative journalists, and artists. It’s a really fantastic group of people.
In working with this group, we realized that one of the things that would be really helpful would be something like a critical guide for engineers, or for people coming to AI for the first time: how they could actually start asking some of these kinds of questions around social harms, around what a dataset actually represents, and around how it can bake in particular assumptions that could be problematic down the road. And so we created this critical field guide to working with datasets, which we’ve now released.
I’ve been delighted to see it already being used in a whole range of courses in the US and the UK as a way of giving people a different way of working with AI systems from the outset: how do we ask these questions that in so many ways have been split off from computer science? We have ethics and policy questions all the way over there, and then we have technology on the other side. I think it’s really important to bring those together, to interleave them even at the very early stages when people first start to be trained in how to build these systems.
Baker: You know, when you were talking about interleaving and bringing insights into different parts of the process, I was struck by how interdisciplinary your background is. You’re used to thinking this way. You’re an academic. You’ve co-founded many interdisciplinary research groups. You’ve worked in industry, and you’re a musician. You’ve collaborated with artists and have exhibits in some of the world’s leading art museums. Tell me, how do you see these different lenses shaping the way you see society and your work?
Crawford: Well, I certainly wouldn’t say that I’ve followed a traditional path into these questions by any means, but part of what I have found really useful in having an academic background, a creative background, and a background in seeing how these systems get built by industry is that it’s by using these different lenses that we can get a rich sense not just of how they work, but of how they might more widely affect how we live.
One thing that I’ve really enjoyed in my creative practice and in collaboration with artists has been asking not just how a technology should be made better, but what sort of world we want to live in—and then how we get there, rather than letting technology set the agenda. In some ways, I think it’s helped me see that just because something is possible doesn’t mean it has to be done. In some cases, systems present such significant downsides that we can actually say no, that we can have a real politics of refusal, which is, in some ways, a muscle that we have forgotten to exercise over the tech-utopian decades when everything new and shiny was exciting and had to be used.
I think, by building up that atrophied muscle, we will start to see that actually these systems can be very useful in some contexts and extremely harmful in others. And the only way we’re going to know is by doing a far more forensic, investigative study of what these systems do, what they contain, and then what happens when we apply them to complex social systems. I call this a social-systems analysis, which was an approach I really developed with the law professor Ryan Calo. That should give you a sense of why I think interdisciplinarity is relevant at every stage.
But certainly, in doing this, I think we are facing a couple of big challenges, and the first one I’ll say is what we’re seeing right now with GPT-4, which is that we’re being told by OpenAI that we cannot know anything about the training data that was really used to create this system. We don’t know about the algorithms or the different layers of processing or post-processing that they’re doing to create this particular large language model. That should really concern us because it means we can’t have that forensic testing. We can’t have those investigative questions that are needed in order to assess if something will ultimately be harmful as well as being potentially helpful in some domains. We have to be able to actually ask those questions scientifically and rigorously before we just blindly accept that we should be using something.
Baker: Do you have examples of other industries where society has made these demands successfully?
Crawford: Oh, absolutely. I mean, we could think about airplanes, for example, as something that goes through very rigorous safety testing. We could think about the FDA’s role in relation to pharmaceuticals: you have to present really rigorous forms of testing. Now, obviously, these are systems that haven’t always worked. There’ve been occasional failures. We could certainly think of what happened with OxyContin as an example of a real failure at the FDA level. But in sum, it’s better to have these processes than not to have them, and they’ve certainly saved us in many instances where unsafe drugs could have been unleashed on the public and really threatened people’s lives.
In this sense, it certainly looks like we’ve reached a point where we’re going to need a specialized regulatory agency just to keep up with what’s happening in artificial intelligence, and that is something that many researchers and policymakers have called for over the years. Frankly, I’ve sometimes been on the fence and thought, well, perhaps the existing agencies will be enough; they can just expand their remits to look at how AI and machine learning are affecting their domains. But the pace of what’s happened even just in the last year is such that I’m really starting to come around to the belief that we may need a dedicated agency to do that.
Baker: One of my questions was whether you thought the focus of regulating or controlling AI needed to be on the application (hiring, education, healthcare) or on the technology itself, and it sounds like you’re leaning toward the latter.
Crawford: Honestly, Monya, I think it’s both. In the end, you have to be able to assess and test the technologies to see how they’re working—but then you really need to see how they’re being used in practice, and that means looking at the domain-specific application: how a system is being used in a hospital, in a schoolroom, in a courtroom. These are spaces where these systems, if not already deployed, are certainly being discussed as ready to be deployed, and I have very real concerns about that.
I’ve been doing this for a long time. I’ve spent a lot of time over the last five years in particular studying training datasets in very close-up, granular specificity, and I can tell you I just keep finding horrors in there. These are vast, very indiscriminately collected training datasets that are meant to be ground truth, and it’s an extremely slippery ground indeed. Beyond the horrors that we’ve published about in the past, it’s really this idea that the internet is somehow a reflection of all of the complexity of the world and of reality that I think we have to be far more critical of. And that really matters if such a system is going to be used to assess whether someone gets a job or whether someone is released on bail.
I think these are the moments where these systems should be put under the highest levels of scrutiny. That requires both technical expertise and deep sociotechnical expertise, and I can certainly see that if we were, for example, designing an agency like that, you would want to have all of those forms of expertise at the table to really assess whether a system is ready for primetime.
Baker: This is a little bit of a tangent, but I remember learning about a registry for clinical trials, ClinicalTrials.gov, and being struck by the realization that it was a human invention. Mandatory registration was put into place after a group of medical journal editors realized that pharmaceutical companies were only publishing results that were favorable to the drugs they were testing, and the registry says, “No. When you start the trial, you register it, and then people can check up on you.” It’s now so much a part of clinical trials that it’s hard to imagine it had not always been there, and I can imagine a world with similar protections for AI.
Crawford: I would absolutely agree with you. I think that’s an excellent example of how regulatory and policy regimes can iterate and evolve, but we really have to start from somewhere. Certainly, one of the things that I worked on a few years ago, “Datasheets for Datasets,” was an attempt to encode what datasets contain. It amazed us that these datasets were being applied off the shelf without any information about their provenance, their potential risks, or what they were even originally intended for. That project, which was led by Timnit Gebru, has been surprisingly successful: it’s now being used within certain companies, and it’s also being built into things like the AI Act as a way of at least creating some sort of provenance information.
I’ll be honest with you: when we started that project, it shocked me, and it continues to shock me, that this didn’t already exist. There wasn’t any type of consistent recordkeeping, and it’s still not an industry standard. It’s slowly emerging, but we still don’t have an enforced documentation standard for datasets, which certainly strikes me as essential at this moment in history, when these datasets are being used for so many applications.
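For readers who work with machine learning data, here is a minimal, hypothetical sketch in Python of what machine-readable dataset provenance could look like, in the spirit of the datasheets Crawford describes. The field names loosely follow the question categories of the published “Datasheets for Datasets” framework (motivation, composition, collection, preprocessing, uses, maintenance), but the framework itself is a set of prose questions rather than a fixed schema, and the dataset, values, and contact address below are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetDatasheet:
    """A minimal, hypothetical provenance record for a training dataset.

    Field names loosely follow the question categories of "Datasheets for
    Datasets" (motivation, composition, collection, preprocessing, uses,
    maintenance); the real framework is prose questions, not a fixed schema.
    """
    name: str
    motivation: str                # Why was the dataset created, and by whom?
    composition: str               # What do the instances represent? Known gaps?
    collection_process: str        # How was it gathered, and with what consent?
    preprocessing: str             # Cleaning, labeling, and filtering applied
    intended_uses: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)  # Uses the creators warn against
    known_risks: list[str] = field(default_factory=list)        # Bias, privacy, or other concerns
    maintainer: str = "unknown"


if __name__ == "__main__":
    # Example: documenting a fictional web-scraped image dataset before reuse.
    sheet = DatasetDatasheet(
        name="example-web-images",
        motivation="Collected for a research prototype on scene classification.",
        composition="12M images scraped from public web pages; crowd-sourced labels.",
        collection_process="Automated crawl; no consent from pictured individuals.",
        preprocessing="Deduplicated; coarse NSFW filter; labels not audited.",
        intended_uses=["benchmarking scene classifiers"],
        out_of_scope_uses=["identity verification", "emotion recognition"],
        known_risks=["stereotyped labels", "images of identifiable people"],
        maintainer="data-team@example.org",
    )
    # Shipping this JSON alongside the data gives downstream users the
    # provenance information that is so often missing today.
    print(json.dumps(asdict(sheet), indent=2))
```

Requiring a record like this to travel with a dataset, whether in procurement or in publication, is one concrete way the documentation gap described above could start to be closed.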
Baker: Thank you for joining us, Kate.
Crawford: It was lovely to talk with you, Monya.
Baker: To find more of Kate’s work, visit her website at katecrawford.net and read her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Find links to these resources and more in our show notes. Email us at [email protected] with any comments or suggestions, and I encourage you to subscribe to The Ongoing Transformation wherever you get your podcasts. Thanks to our podcast producer, Kimberly Quach, and audio engineer, Shannon Lynch.
I’m Monya Baker, senior editor of Issues in Science and Technology. Thank you for joining us.