How Is AI Shaping the Future of Work?

For as long as people have speculated about the development of artificial intelligence, they have debated its potential impacts on the labor market. Today, several years into widespread use of large language models, those questions are more urgent, but the answers are less clear. Is AI already taking jobs away? Could human beings flourish in a world in which they no longer have to perform economically valuable work?

On this episode, Massachusetts Institute of Technology labor economist David Autor joins host Sara Frueh to discuss the possible impacts of AI on the future of work, what that means on an economic and human level, and what policies may be able to shape AI in a way that works for humans.

Transcript

Sara Frueh: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academy of Sciences and Arizona State University.

As artificial intelligence becomes ever more advanced and capable, it’s being harnessed more frequently in American workplaces, and it’s raising thorny questions about how it will impact the nature and number of human jobs in the future.

I’m Sara Frueh, an editor at Issues. I’m joined today by David Autor, a professor at the Massachusetts Institute of Technology and one of the world’s leading labor economists. He’s known for his research on the effects of globalization, automation, and technological change on the labor market. He served on the National Academies committee that produced the 2024 report Artificial Intelligence and the Future of Work, and he’s written extensively on the subject.

He joins us today to talk about the possible impacts of AI on the future of work, what they mean on an economic and human level, and the policies that may be able to shape the future of AI in a way that works for humans. David, welcome. Thanks for joining us.

David Autor: Thanks very much. It’s a pleasure to be here.

Frueh: I want to spend most of our time talking about AI and the future of work. But first, I’d like to hear your thoughts on how AI is impacting work now. There are a lot of news articles, for example, on how a lot of entry-level coding jobs are going away and speculation about how AI is impacting entry-level work in general. Do we have good big-picture data on how many workers are using AI in their jobs and the degree to which workers are being replaced by AI so far?

Autor: At least half of workers, at this point, are using it in their jobs, and probably more. In fact, more workers use it on the job than are provided it by their employers, because many people use it without their employer’s knowledge. So it’s caught on incredibly quickly. It’s used at home, it’s used at work, it’s used by people of all ages, and it’s now used equally by men and women and across education groups. So it’s pretty broadly used.

In terms of how many jobs have been lost to AI, we really don’t know. What makes this difficult is that the labor market, in general, is slowing substantially. Employment growth among young people is down in every sector, among non-college and college workers alike. And even some of the reported decline among software developers starts before the release of ChatGPT: the peak is in April of 2022, and ChatGPT is released in November of 2022. So it’s not even clear that we should attribute that decline to AI.

There are certain occupations that I think are much more threatened than software development, like language translation or medical transcription. And I’m worried for illustrators, especially because we don’t have good intellectual property laws around creative work as used by AI. A lot of people’s creative work is being sucked into machines and then recycled, and they’re not compensated for that.

So the only thing that’s unambiguously positive, without any of that sort of “on the one hand, on the other hand,” is new activities that come into existence because of the technology, requiring new expertise that’s valuable. And that’s a very uncertain process.

I’ll give you a great example of something that has come into being, though not solely because of AI. In 1980, the Census Bureau had never heard of the occupation of data scientist; it only recognized it, I believe, in 2018. There are a quarter million data scientists in the United States at this point, and that’s a job that didn’t really exist prior to the era of big data and computation. It’s a big new category of work that results from new technology, and it requires specialized expertise. But of course, it also requires high levels of education.

Frueh: Part of the anxiety around this moment is that we can envision jobs and areas of expertise that will go away because of AI, but we’re still at the point where it’s unclear what and how many jobs will be brought into being because of this technology. Is that your sense of things?

Autor: Yeah, absolutely. But even more fundamentally, let’s say a million jobs are destroyed and a million are created. The people taking the new work are not usually the people displaced from the old work. So even if we said, “Look, the labor market will be 5% better on average,” it might be 90% worse for some people and 95% better for others, and no one experiences the 5%.

Frueh: Yeah.

Autor: There are many reasons to recognize that there is real risk to people’s livelihoods. People are paid not for their education, not for just showing up, but because they have expertise in something. It could be coding an app, baking a loaf of bread, diagnosing a patient, or replacing a rusty water heater. When technology automates something that you were doing, in general, the expertise you had invested in all of a sudden doesn’t have much market value.

It used to be worth a lot to know streets and routes. As a taxi driver, that was specialized knowledge, but now that’s all on a phone. Language translation is a very high-level cognitive skill, but now we have machines that can do it pretty well. They won’t do all of it, but they can do a lot of it. That’s a specialized form of knowledge in which someone has made a substantial investment, and that’s where the value of their work comes from. When they’re displaced, it’s not that they can’t find other work; it’s that they’re unlikely to find work that’s as well paid.

And so my concern is not about us running out of jobs per se. In fact, we’re running out of workers. The concern is about devaluation of expertise. And even if we’re transitioning to something “better,” the transition is always costly unless it happens quite slowly. That’s because change in people’s occupations is usually generational. You don’t go from being a lawyer to a computer scientist, or a production worker to a graphic artist, or a food service worker to a lawyer in the course of a career. Most people aren’t going to make that transition because there are huge educational requirements for those types of changes. So it’s quite possible their kids will decide, “Well, I’m not going to go into translation, but I will go into data science,” but that doesn’t directly help the people who are displaced.

And so, really rapid transitions in the labor market are scarring. We saw this during the China trade shock, especially in the period between 2000 and 2007. More than a million manufacturing jobs were lost. That’s not a large number at the scale of the US economy, but it was very, very regionally concentrated in the South Atlantic and the Deep South: a relatively small handful of counties where the lifeblood industries were wiped out. If those workers were going to retain employment, many of them had to change locations or at least change careers within their location. And of course, they had specialized expertise and experience in the manufacturing work they were doing.

And so the next thing that was available to them would much more likely be an inexpert job. They could do food service, cleaning, janitorial service, home health aide, security, and so on, but that’s not going to pay nearly as well because it’s inexpert work. Most people can do it without training or certification. It doesn’t mean it’s not socially valuable. Some of those things are life and death activities, like being a crossing guard, or driving a school bus or being a daycare teacher. The stakes are incredibly high. But because adults of sound mind and body and character can learn to do that work relatively quickly, it doesn’t tend to be highly paid.

Frueh: So you mentioned China shock and all the harms that resulted to people and communities over that. Do you think that we are potentially facing something of that magnitude again, but this time mostly with knowledge workers rather than manufacturing jobs?

Autor: So the most important similarity, if there is one, is the speed: certain things could change very quickly. Certain occupations, like language translation, have changed very quickly. Already, the medical transcriptionist occupation has pretty much disappeared. And those things can happen if machines gain capabilities quickly.

The greatest similarity is that this could happen quickly in certain areas, in certain activities, and it’ll be extremely disruptive and scarring for the people who lose that work and have to do something that’s lower paid and less consistent with their skillset. There are important differences, though. One is that the China shock was very regionally concentrated. It was, as I mentioned, in the South Atlantic and the Deep South, in places that made textiles and clothing and commodity furniture and did doll and toy assembly and things like that. It’s unlikely that the impacts of AI will be nearly as regionally concentrated. And that makes them less painful, because they don’t knock out an entire community all at once.

We’ve lost millions of clerical and administrative support jobs over the last few decades, but nobody talks about the great clerical shock. Why don’t they? Well, one reason is there was never a clerical capital of the United States where all the clerical work was done. It was done in offices around the country. So it’s not nearly as salient or visible. And it’s also not nearly as devastating because it’s a relatively small number of people in a large set of places. So that’s one difference.

The other is that AI will mostly affect specific occupations and roles and tasks rather than entire industries. We don’t expect entire industries to just go away. And so that, again, distributes the pain, as well as the benefits, more broadly.

And then the third is that with the China trade shock, from the point of view of most manufacturers, it was pure pain. There was no upside. It was just, “Wow, prices are at a level we can’t compete at. We can’t stay in business.” Whereas many firms will perceive AI as a productivity increase. Now, that doesn’t mean they won’t lay off workers and so on. I’m not saying that. But it will not feel the same, for that reason. So I think there really will be important differences.

But one thing we can learn from the China trade shock was how disruptive and damaging and long-lasting the consequences were of job loss. And so we should have better safety net systems for dealing with those. I’m happy to talk about how we could do that better.

Frueh: Yeah, that would be great. Could you talk a little about the steps policymakers can take to soften the blow of this transition and help workers acclimate to it?

Autor: Sure. And I should say, it’s not just a matter of acclimating. We should be directing where we want to go. We shouldn’t treat the technology as autonomous. My friend, the philosopher Josh Cohen, once said, “The future’s not just a prediction exercise, it’s a design exercise.” So we should be thinking about shaping where we want to direct the technology, not just adapting to what happens.

But in terms of adapting to what happens, there was actually a really interesting policy experiment during the Obama administration. Within the trade adjustment assistance system, they set up something called wage insurance. Basically, if you lost your job because of trade, you qualified for a special kind of unemployment insurance. Let’s say you were working a $25-an-hour job and you lost it. The wage insurance would pay you half the difference between your old wage and your new one, up to $5,000, over up to two years.

So if you took a $15-an-hour job, you’d be making effectively $20 an hour until the subsidy ran out. And this proved really effective at getting people to take work again quickly; it more than paid for itself. Why is that? Part of what makes job loss so scarring is that if you’re making 25 bucks an hour, you may think it’s beneath your dignity to take a $15-an-hour job. That’s a concession you don’t want to make. And this policy at least says, “Hey, we get that. While you search for something better, we’ll make up half the difference.”
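[The arithmetic of the scheme Autor describes can be sketched in a few lines. The 50% rate and $5,000 cap are the figures he cites in conversation; the function itself, its name, and its structure are purely illustrative, not an actual program’s rules.]

```python
def wage_insurance(old_wage: float, new_wage: float,
                   rate: float = 0.5, cap: float = 5000.0):
    """Effective hourly wage under the subsidy, and hours the cap covers.

    Illustrative sketch of the scheme described in the conversation:
    the program tops up `rate` times the gap between the old and new
    hourly wage, until cumulative payouts reach `cap`.
    """
    hourly_topup = max(0.0, rate * (old_wage - new_wage))
    effective_wage = new_wage + hourly_topup
    # How many subsidized hours the cap buys before the subsidy runs out
    hours_covered = cap / hourly_topup if hourly_topup > 0 else float("inf")
    return effective_wage, hours_covered

# Autor's example: a $25/hour worker takes a $15/hour job.
effective, hours = wage_insurance(25.0, 15.0)
print(effective)  # 20.0 -- the worker effectively earns $20/hour
print(hours)      # 1000.0 -- the $5,000 cap covers 1,000 subsidized hours
```

[At roughly 2,000 working hours a year, the cap in this example would run out in about half a year of full-time work, which is why the benefit lasts only “until the subsidy ran out.”]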

And this doesn’t have to be limited to trade, or even to technology. It just has to be limited to people who lose work involuntarily and take a lower-paid job. So it could be administered through the unemployment insurance system; you already only qualify for UI if you lose work involuntarily. This would be a natural extension to make, and it could be quite cost-effective. It’s very scalable, it makes sense, and the evidence strongly suggests it worked quite well. We’d need more trials, but I would love to see those trials happen.

Frueh: What you just described is more of a reactive step: this is happening, and this is how we should react to it. When you talk about steering it, what policies could actually direct this in ways that work for workers?

Autor: The reason I started with the safety net is we need a kind of belt-and-suspenders approach. No matter how much we steer people, there’s going to be displacement. In fact, there’s always job displacement, and we should have a system to help people with that.

So what does steering it mean? It means using AI in ways that collaborate with people, to make their expertise more valuable and more useful. Where are the opportunities to do that? They’re dispersed throughout the economy. One place where this could be very impactful is healthcare. Healthcare accounts for about one out of every five US dollars at this point and employs a ton of people. It’s broadly the fastest-growing employment sector, and there’s expertise all up and down the line. Using these tools, we could enable people who are not medical doctors (nurses or nurse practitioners or nurse’s aides, for example, or x-ray techs) to do more skilled work, to offer a broader variety or depth of services with better tools. And the tools are not just about automating paperwork; they’re about supporting judgment, because professional expert work is really about decision making where the stakes are high and there’s not usually one correct answer, but it matters whether you get it approximately right or approximately wrong.

And so I think that’s a huge opportunity. Part of the reason it’s such a large opportunity, in addition to its scale and expense, is that at least half of healthcare dollars are public dollars anyway. They’re coming through Medicare or Medicaid or through subsidies to the private sector, like the Affordable Care Act. So either way, the public has a lot of leverage here to say, “Hey, let’s do a moonshot on redesigning how we deliver some healthcare services and how we define the scope of practice of these different medical specialties.” That’s one area.

Another is how we educate. We could educate more effectively, help teachers be more effective by providing better tools, and provide better learning environments using these tools.

Another is in areas like skilled repair or construction or interior design or contracting, where there’s a lot of expertise involved. Giving people tools to supplement their work could make them more effective at doing more ambitious projects and more complex repairs, or even at design and engineering tasks that would otherwise require higher certification.

Frueh: To reach the future you describe, where we help people adapt and acquire new skills and use AI as an advantage rather than something that just harms the workforce, what do we need to do policy-wise? What kinds of retraining, what sorts of policies would you like to see to help make that happen, to help steer it?

Autor: Exactly. And there’s not going to be one policy that does all of this. Some of it has to be very sectoral, like a moonshot in healthcare. Some of it has to be asking where the new opportunities are emerging and what people should invest in. Some of those will actually be old opportunities, like in the trades.

And we have a lot of need there. Our workforce in those areas is too small and is getting older, and we need a lot of new entry. Hopefully there’ll be a lot of investment; that was certainly something the Biden administration was planning, though a lot of it has been rolled back. And some of it is going to have to be developing better educational tools themselves to support adult learning. Adults learn much more effectively in the real world; they don’t want to go back to school.

So, for example, there’s evidence that people using virtual reality to learn manufacturing skills do better than those taking the same classes in a classroom. We could develop more tools like that. Historically, we know simulation is a very effective way to learn, but it’s very expensive. So we have flight simulators, and we have medical simulators: these robotic patients that bleed and scream when you do procedures on them. Why do we do this? Because we can’t practice on real patients or real airplanes. But as the cost comes down, a lot more things could look like that. So I think there will be advances there, and I think that’s really important.

The challenge is things are moving quickly, and we have a lot of uncertainty. So that’s why we have to have multiple mechanisms in place, both insurance as well as investment, as well as technology creation.

It’s unfortunately the case that if you asked, “What would have been the optimal moment in human history to gain these vast new powers? Right now, or, let’s say, 1950?” the answer surely would have been 1950, because our institutions were in better shape, we were able to make better collective decisions, we had stronger guardrails, and the federal government pulled off lots of successful, quite literal moonshots in that period. I think this could have been harnessed well in that way as well.

But at present, it’s much, much more decentralized. The institutions are much weaker and much more divided. And additionally, unlike almost all major technologies of the 20th century, this one is mostly private sector. The internet, integrated circuits, and all kinds of transportation and communication capabilities came largely from the military, from universities, and from government grants, whereas AI is really a private sector activity.

So it’s very unusual, and we don’t really have a close precedent for it. But that means it’s much, much more steered by market incentives than by some notion of the public interest.

Frueh: Actually, I want to build on that a little bit. And this may seem kind of random, but I was looking at OpenAI’s mission statement, which is, “To ensure that artificial general intelligence, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.” Is that mission statement made up of two mutually exclusive goals?

I guess I want to talk about the purpose of work and what replacement of at least some, maybe a lot, of human jobs could mean for people and their sense of meaning, their sense of purpose. Can you build something that takes away the jobs of most humans—if that’s possible—and still benefit all humans? Is economically valuable work so tangential to our well-being that you could take it away and still have flourishing human beings?

Autor: Or even should that be your goal? Actually, I don’t even like that definition of artificial general intelligence. The goal of machines should not be to just do what people do slightly better. Our tools are valuable to us because they allow us to do things we can’t do. So many technologies enable capabilities that we simply don’t possess. Powered flight didn’t automate the way we used to fly. We just didn’t fly.

And that’s true of most modern technologies. They’re important and consequential not because they do the same old thing better, cheaper, faster, but because they enable us to do things we couldn’t do: telecommunication, flight, fighting disease with penicillin, seeing the interiors of subatomic particles, computing things that we could never compute in a lifetime.

So, actually, I find their mission statement an amazing bait-and-switch: artificial intelligence, by which we mean a machine that outcompetes humans in every domain. I’m reminded of a tweet I once saw that said, “We’re a modest company with modest goals. One, sell a quality product at a fair price. Two, drain the world’s ocean so we can find and kill God.” That’s what I feel like when I read the OpenAI mission statement.

But I do think there’s a race for AGI that is not at all powered by some notion of human welfare; it’s a competition among wealthy investors to reach this holy grail they’ve held in their heads since the 1940s, when artificial intelligence sort of got started. I don’t think that’s the most valuable goal, and I don’t know why it’s the one they’re pursuing. And I don’t think that automating everything or making machines better is the most useful application of this technology.

Like if you said, “Oh, I want to accelerate the rate of science,” I’m good with that. If you said, “I want to make crops, agriculture more efficient, I want to make power generation less environmentally costly,” those are all great uses, and that’s not competing with humans in any way, actually. None of us is a power generator; that’s not an occupation. The labor market, in my opinion, is the most important social institution because it serves a lot of purposes simultaneously that are all incredibly important.

One is that it’s a means of income distribution that’s very equitable, because in a country without slavery and labor coercion, no one owns more than one worker: themselves. That inherently makes labor income much more evenly distributed than capital, which people are not born with and which has very concentrated ownership. So one, the labor market gives rise to a relatively egalitarian distribution of income all on its own, if it’s working.

Two, it organizes people’s lives. It gives people identity, meaning, purpose, a reward structure, social status, friends. I actually think most people, myself included, don’t function well without those.

Third, I think it’s very central to the functioning of democracies, because the truth is, and you see this most in the United States, a person’s legitimacy as a citizen is very much tied to what they contribute economically. The US is extremely punitive toward people who are not working for a living, unless they’re retired. If they’re retired, of course, you say, “Well, you’ve earned this.” If they’re working age but not working, they’re not treated with much respect. And if they’re children, we under-invest in them as well.

And you could ask, “Why is that?” Well, everybody who’s working is seen as both a contributor and a claimant. As a contributor, I’m paying taxes, I’m making stuff. As a claimant, I expect national defense, I expect security, I expect roads, I expect civil liberties, et cetera. So in a country without work, even one with lots of income, we have a democracy problem, because who would be seen as a legitimate stakeholder? You could imagine that if all the money was coming from a couple of companies, they’d say, “Well, we’re the makers here. We’ll give you all something, I guess, but what gives you the right to claim our resources?” So I worry about the political viability of a system that doesn’t have a well-functioning labor market undergirding it.

Even if we thought, “Okay, this would be great, because although we won’t need to work, we’ll have all this income, and everyone will benefit.” I don’t believe the “everyone will benefit” part. But even if you believed it, even if you said, “Everyone in the United States will benefit even though no one needs to work,” are you also going to extend that generosity beyond our borders? Does it also go to Kenya? Does it go to India? If they don’t need to work either, are we going to support them too? I think it would create an enormous democracy problem.

Frueh: When I think about this, I remember talking to Anne Case, an economist who did work on deaths of despair. She said that part of what was driving those deaths was being thwarted in your ability to contribute. So it’s hard for me to imagine that even if all of these great ideas, universal basic income and so on, took up some of the financial slack, there wouldn’t be a huge loss of purpose if people lose their jobs on a massive scale.

Autor: Yeah. So I like to contrast what I call WALL-E world with Mad Max world. I don’t know if you or your listeners have seen the movie WALL-E, but it depicts a future where the robots do all the work and people sit around in levitating armchairs, 300 pounds overweight, watching holographic TVs and drinking Big Gulps. It looks totally dystopian, but I think that’s the good scenario, because we’re wealthy and yet somehow we’ve solved the income distribution problem so everyone can sit around purposelessly.

But I think a much more likely scenario is one that looks more like Mad Max: Fury Road, where there’s resources, but they’re controlled by a few people, and everyone drives around in their war trucks saying, “Compute, compute. I must get compute.” So even if you said, “Well, we eliminate the work, and we eliminate the purpose, but we still have wealth,” I don’t even see the second part working very well, let alone the first.

Frueh: So what are the most important things we have to do now, policy-wise, society-wise, to avoid that Mad Max scenario?

Autor: We need to make the labor market work. I think that’s the most important thing. The other thing we can do is hedge and diversify a bit. Although I do not at all like the idea of universal basic income, because I don’t think it solves many problems and I think it creates others, I do like the idea of what some people call universal basic wealth, where people are granted an endowment of capital at birth: ownership of stuff.

And this is not a crazy idea. Even the Trump administration is talking about MAGA bonds, where people will be issued bonds. What does that do? Well, one, it’s a one-time transfer that grows in value. But it also diversifies people: let’s say labor becomes less valuable but capital becomes more valuable. Well, you’re hedged. And over the longer run, it broadly distributes the ownership of capital, so people would have a say over the use of capital, rather than only through some other channel.

With a 20-year lead time, a lot can happen with that, and I do think we have a 20-year lead time. So: wage insurance, universal basic wealth. But then a lot of it is about investing to shape the way we use technology and the skills people have to use it. I think there’s real room to use technology to collaborate with people, to make them more effective and to open up opportunity. It’s not just a matter of automating things away. And automation is not only overrated, it’s also over-promised: it usually doesn’t happen nearly as fast as the people designing the tools think it will.

Frueh: So before we go, since we’re almost out of time, I just want to ask a question that I often ask people about new technologies. How hopeful are you, or aren’t you, that we can navigate AI and its impacts on work in a way that helps most of humanity rather than hurts it? And if you have optimism, what is it grounded in?

Autor: So I think one should distinguish between what I think is possible and what I think is likely. What’s possible is a lot; there’s enormous opportunity. What’s likely is a subset of that, because of the issues of governance and shaping that we’re talking about. I would say one reason for optimism is that we in the West lose sight of the fact that these have been the best four decades in human history for improvements in human welfare.

Four decades ago, a very large proportion of the world was poor; over 90% of China was living in extreme poverty. Now, in China, it’s effectively zero percent, and in the rest of the world it has fallen by 20 or 30 percentage points. A lot of that is because we didn’t have a world middle class four decades ago, and now we do. Industrialization has led to a massive rise in prosperity, and not just in China but in Sub-Saharan Africa and Central and South America. Many, many places are much better off.

And I think AI actually has the potential to help more low-income countries become more autonomous and more effective, because they can have access to the equivalent of more expertise: for medicine, for engineering, for computer science, for roads and bridges, for schools. So I think there’s reason for hope there.

But I think the right attitude is to be both optimistic and pessimistic simultaneously: to recognize there’s great opportunity and be seeking it, and to recognize there’s significant risk and be trying to put systems in place that help us adjust to those risks. Hence those labor market programs I was speaking of, and more broadly distributed ownership of capital. Because even if things improve, even if they improve a fair amount, the improvement will be uneven, and the losses will generally be very concentrated among people whose expertise is devalued, who lose their jobs, who lose their livelihoods. They may not experience the average. No one might experience the average.

So we really want to be aware and putting systems in place now. And you could say, “Well, isn’t the world just a tough place? Get used to it.” But we all pay the price of the failure to do so. The China trade shock wasn’t just harmful to the people who directly experienced it. It roiled our politics, it made a lot of people extremely angry, and I think it has done lasting damage to our national psyche, even though it was all of us who got the lower-priced products and maintained our livelihoods.

So I think we have a collective interest in managing this transition well. And if successful, we will be more affluent. We will have more possibilities. But it does not follow that those will be at all evenly distributed unless we take actions in that direction.

Frueh: Thank you so much, Dr. Autor, for being with us and for sharing your insights on this.

Autor: Oh, thank you, Sara. It was really, really a pleasure to speak. Great questions. I really enjoyed the conversation.

Frueh: To learn more about AI and the future of work, check the show notes. Subscribe to The Ongoing Transformation wherever you get your podcasts. You can email us at podcast@issues.org with any comments or suggestions. And if you enjoy conversations like this one, go to issues.org, where you can also subscribe to our print magazine. Thanks to our podcast producer, Kimberly Quatch, and audio engineer, Shannon Lynch. I’m Sara Frueh, an editor for Issues. Thanks for joining us.

Cite this Article

Autor, David and Sara Frueh. “How Is AI Shaping the Future of Work?” Issues in Science and Technology (January 13, 2026).