Chesley Bonestell, “The Exploration of Mars” (1953), oil on board, 14⅜ x 28 inches, gift of William Estler, Smithsonian National Air and Space Museum. Reproduced courtesy of Bonestell LLC.

A Tool With Limitations

A discussion of “An AI Society”

In the Winter 2024 issue of Issues, the essays collectively titled “An AI Society” offer valuable insight into how artificial intelligence can benefit society—and also caution about potential harms. As many other observers have pointed out, AI is a tool, like so many that have come before, and humans use tools to increase their productivity. Here, I want to concentrate on generative AI, as do many of the essays. Generative AI is a special kind of tool designed to improve human productivity, but like all tools it has limitations. Growth, innovation, and progress in AI are inevitable, and the essays provide an opportunity to invite professionals in the social sciences and humanities to work with computer scientists and AI developers to better understand and address the limitations of these tools.

The rise of generative AI, and public awareness of it, have been nothing short of remarkable. The generative AI-powered ChatGPT took only five days to reach 1 million users. Compare that with Instagram, which took about 2.5 months to reach that mark, or Netflix, which took about 3.5 years. ChatGPT then took only about two months to reach 100 million users, while Facebook took about 4.5 years and Twitter just under 5.5 years to hit that milestone.

Why has the uptake of generative AI been so explosive? Certainly one reason is that it improves productivity. There is of course plenty of anecdotal evidence to this effect, but there is a growing body of empirical evidence as well. To cite a few examples: in a study involving professional writing skills, people who used ChatGPT decreased writing time by 40% and increased writing quality by 18%. In a study of nearly 5,200 customer service representatives, generative AI increased productivity by 14% while also improving customer sentiment and employee retention. And in a study of software developers, those who were paired with a generative AI developer tool completed a coding task 55.8% faster than those who were not. That said, we are also beginning to understand which tasks and which users benefit most from generative AI, and which see no benefit or may even experience a loss of productivity. Knowing when and why it doesn’t work is as important as knowing when and why it does.

Unfortunately, one downside of today’s class of generative AI tools is that they are prone to what are called “hallucinations”—they output information that is not always correct. The large language model technology on which these systems are based is good at producing fluent and coherent text, but not necessarily factual text. While it is hard to know how frequently hallucinations occur, one estimate puts the figure at between 3% and 27%. Indeed, there currently seems to be an inherent trade-off between creativity and accuracy.

So we have a situation today where generative AI tools are extremely popular and demonstrably effective. At the same time, they are far from perfect, with many problems identified. Driving cars and using the internet also carry risks, but we use those tools anyway because we judge that the benefits outweigh the risks. Apparently people are making a similar judgment in deciding to use generative AI tools. With that said, it is critically important that users be well informed about the potential risks of these tools. It is also critical that policymakers—with public input—work to ensure that AI safety and user protection are given the utmost priority.

Professor Emeritus

Department of Computer Science

Southern Methodist University

The writer chaired a National Academies of Sciences, Engineering, and Medicine workshop in 2019 on the implications of artificial intelligence for cybersecurity.

The essays on artificial intelligence provide interesting and informative insights into this emerging technology. All new technologies bring both positive and negative results—what I have called the yin and yang of new technologies. AI will be no exception. Advocates for a new technology usually emphasize its advantages and dismiss consideration of possible adverse effects. It is only later, when the technology has been allowed to operate widely, that its actual positive and negative effects become apparent. As Emmanuel Didier points out in his essay, “Humanity is better at producing new technological tools than foreseeing their future consequences.” The more disruptive the new technology, the greater its effects of both kinds will be.

With AI, the concerns go beyond bias and machine learning run amok, the criticisms currently levied against the technology. AI’s influences can extend far beyond what we envision at this time. For example, users who rely on AI to produce outputs reduce their opportunities to grow the creative abilities, social skills, and other functional capabilities that we normally associate with well-adjusted human adults. A graphic example of what I mean can be seen in a recent entry in the comic strip Zits, in which a teenager named Jeremy is talking with his friend. He says, “If using a chatbot to do homework is cheating … but AI technology is something we should learn to use … how do we know what’s right or wrong?” And his buddy responds, “Let’s ask the chatbot!” By relying on the AI program to answer their ethical quandary, the boys lose the opportunity to think through the issue at hand and develop their own ethos. It is not hard to imagine similar experiences for AI users in the real world who are otherwise expected to grow in wisdom and social abilities.

It will probably not be the use of AI in individual circumstances that becomes problematic, but the overreliance on AI that is almost bound to develop. Similarly, social media are not, by themselves, a bad thing. But social media have now overtaken a whole generation of users and fostered antisocial and asocial behaviors. The potential for similar negative outcomes when AI use becomes widespread is very strong.

Back when genetic modification was a new and potentially disruptive technology, it was foreseen as possibly dangerous to society and to the environment. In response, policymakers and concerned scientists put safeguards in place to prohibit the unfettered release of gene-edited organisms into the environment, as well as the editing of human germline cells, which transmit genetic traits from one generation to the next. Most of these restrictions are still in effect. AI could well be just as disruptive as genetic modification, but no similar safeguards are in place to give us time to better understand the extent of AI’s influences. And it is not very likely that the do-nothing Congress we have now would be able to handle an issue as complex as this.

Professor Emeritus

Fischell Department of Bioengineering

University of Maryland

Cite this Article

“A Tool With Limitations.” Issues in Science and Technology 40, no. 3 (Spring 2024).
