The AI Expertise Paradox

A DISCUSSION OF

How Is AI Shaping the Future of Work?

On the Issues podcast episode “How Is AI Shaping the Future of Work?” David Autor argues that while artificial intelligence may devalue routine knowledge work, high-level judgment and leadership remain essential. We agree. Scientific principal investigators, chief clinicians, judges, and senior policy leaders still require deep expertise, even if routine knowledge work is automated. Our concern is that the erosion Autor describes may propagate upward by undermining the developmental pipeline through which deep understanding and judgment have historically been forged.

Expertise is built through what we term productive struggle. Research shows that durable understanding emerges through active learning: sustained engagement with tasks that demand explanation, error correction, and reasoning under uncertainty. Early-career professionals historically developed judgment by wrestling with methods, data, and cases; when AI substitutes for that effort, it short-circuits the apprenticeship necessary for mastery.

The concern is not that AI eliminates the need for expertise. If anything, as Autor has argued, advancing technology increases the need for leaders capable of intellectual agency. As AI reduces the effort required to generate answers, the human role must shift toward problem framing, cross-domain synthesis, and the critical skepticism required to validate output and guide automated systems.

This creates what we call the AI expertise paradox: The early-career professionals whose output benefits most from AI today may be the least prepared to lead their fields in an AI-driven future. In essence, the tools that enable novices to perform more like experts simultaneously make them less likely to become experts. The paradox arises because AI creates an illusion of understanding by decoupling early-career output from the development of underlying mastery. High output is increasingly achievable without the cognitive labor that once built the understanding and judgment of those who advanced in their fields. Yet fields will continue to require experts who can make decisions under uncertainty and provide strategic direction when AI tools reach their epistemic limits.
This shift alters individual incentives: Learners who invest time mastering material risk falling behind peers who optimize for AI-assisted production. Even the most capable students and workers may rationally choose to sidestep the learning process. At the same time, many systems of promotion rely heavily on early productivity as a signal of long-run potential, including publication counts in academia, briefs written in law, and patients treated in medicine. When productivity no longer reliably reflects developing expertise, these signals become distorted. Together, these forces raise the possibility of a future leadership gap in knowledge fields, where those at the top lack the foundational experience to interrogate results or set new directions.

The institutional response is not to resist AI but to realign incentives so that educational and career advancement reflects understanding rather than output alone. Educational and professional environments must prioritize observable reasoning in their assessment practices: in-person examinations, oral defenses, and structured decision reviews. Research institutions can reserve space for “slow science,” prioritizing the process of discovery over the velocity of output. Without such adjustments, rational workers will optimize for output speed early in their careers, bypassing the productive struggle through which mastery is formed. The result would not be a shortage of tools, but of leadership.

Jarislowsky-Deutsch Chair in Economic & Financial Policy

Professor, Department of Economics and School of Medicine

PhD candidate in Higher Education, School of Education

Queen’s University, Ontario, Canada

“The AI Expertise Paradox.” Issues in Science and Technology ().