How AI Sets Limits for Human Aspiration
We are watching “intelligence” being redefined as the tasks that an artificial intelligence can do. Time and again, generative AI is pitted against human counterparts, its textual and visual outputs measured against human abilities, standards, and exemplars. AI is asked to mimic, and then to better, human performance on law and graduate school admission tests, Advanced Placement exams, and more—even as those tests are being abandoned because they perpetuate inequality and are inadequate to the task of truly measuring human capacity.
The narratives trumpeting AI’s progress obscure an underlying logic requiring that everything be translated into the technology’s terms. If it is not addressed, that hegemonic logic will continue to narrow viewpoints, hamper human aspirations, and foreclose possible futures by condemning us to repeat—rather than learn from—past mistakes.
The problem has deep roots. As AI evolved in the 1950s and ’60s, researchers often drew comparisons with humans. Some suggested that computers would become “mentors” and “colleagues”; others, “assistants,” “servants,” or “slaves.” As science and technology scholars Neda Atanasoski, Kalindi Vora, and Ron Eglash have shown, these comparisons shaped the perceived value not only of AI but also of human labor. Those relegating AI to the latter categories usually did so because they believed computers would be limited to menial, repetitive, and mindless labor—and in doing so, they reproduced the fiction that human assistants are merely mechanical, menial, and mindless. Those celebrating potential mentors and colleagues, on the other hand, tacitly assumed that human counterparts could be stripped of everything beyond efficient reasoning.
Comparisons between AI and human performance often correlate with social hierarchy. As science and technology scholars Janet Abbate, Mar Hicks, and Alison Adam have shown, in the 1960s and 1970s, women and minorities were encouraged to advance in society by learning to code—but those skills were then devalued, while domains dominated by white men were seen as the realm of the truly technically skilled. OpenAI’s more recent practice of measuring its models against standardized exams endorses a positivist, adversarial, and bureaucratic understanding of human intelligence and potential. Similarly, AI-generated “case interviews” and artworks encode mimicry as the definition of intelligence. For a result from generative AI to be validated as true—or to shock others as “true”—it has to be plausible; that is, recognizable in terms of past values or experiences. But looking backward and smoothing out outliers forecloses the rich wellsprings of humanity’s imagination for the future.
Such practices will ultimately affect who and what is perceived as intelligent, and that will profoundly change society, discourse, politics, and power. For example, in “AI ethics,” complex concepts such as “fairness” and “equality” are reconfigured as mathematical constraints on predictions, collapsed onto the underlying logic of machine learning. Likewise, the development of machine learning systems for game-playing has led to a reductive redefinition of “play” as simply making permissible moves in search of victory. Anyone who has played Go or chess or poker against another person knows that, for humans, “play” includes so much more.
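To see how that collapse works in practice, consider demographic parity, one common formalization from the machine-learning fairness literature (an illustrative example of our choosing, not one drawn from the cases above). It recasts “fairness” as the requirement that a classifier’s rate of positive predictions differ between groups by no more than a small tolerance:

$$\bigl|\,\Pr(\hat{Y}=1 \mid A=a) \;-\; \Pr(\hat{Y}=1 \mid A=b)\,\bigr| \;\le\; \epsilon$$

Here $\hat{Y}$ is the model’s prediction, $A$ a protected attribute such as gender or race, and $\epsilon$ the permitted gap. Whatever its value as an engineering target, a constraint of this form captures only a narrow statistical slice of what “fairness” means in moral and political life.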
The portrayal of AI’s history is usually one of progress, in which constellations of algorithms attain humanlike general intelligence and creativity. But that narrative might more accurately be inverted: the definition of intelligence is shrinking to exclude many human capabilities, narrowing the horizon of intelligence to tasks that can be accomplished with pattern recognition, prediction from data, and the like. We fear this could set limits for human aspirations and for core ideals like knowledge, creativity, imagination, and democracy—making for a poorer, more constrained human future.