Yulia Pinkusevich, “Nuclear Sun Series” (2010), charcoal on paper. Courtesy of the artist and Rob Campodonico, © Yulia Pinkusevich.

AI and Copyright Law

A DISCUSSION OF

Who Is Responsible for AI Copyright Infringement?

In “Who Is Responsible for AI Copyright Infringement?” (Issues, Fall 2024), Michael P. Goodyear presents a novel argument for treating AI systems as legal persons capable of direct liability for copyright infringement. While I appreciate the creativity and forward-thinking nature of his thesis, I believe that reassigning responsibility to the AI system itself, as a fictitious legal person, introduces unnecessary complexities. I propose an alternative perspective: the responsibility for preventing AI-generated copyright infringement should remain with the user. This approach leverages existing legal frameworks and practical tools, sidestepping the need to reimagine intellectual property law or assign legal personhood to machines.

Generative AI systems, like any other creative tool, are extensions of human agency. They are powerful and complex, but they are ultimately governed by the prompts and decisions of their users. If users employ AI tools to create content—such as text, music, or images—then they have a responsibility to ensure that their outputs comply with copyright laws before sharing or commercializing them. For example, Goodyear references the case of Shane, a user who inadvertently created song lyrics resembling Taylor Swift’s copyrighted work. I contend that this infringement, while unintentional, was entirely avoidable had the user exercised basic due diligence.

Preventing AI-generated infringement does not require new legal theories or personhood for machines. Established tools and techniques already exist to check whether AI outputs align with copyright laws. Plagiarism detection software can identify textual overlaps with existing works. Platforms such as YouTube use copyright-detection algorithms to flag infringing content. Moreover, advanced approaches such as generative adversarial networks, in which two neural networks compete so that one learns to generate convincing data while the other learns to detect it, could be adapted to test originality by comparing AI outputs against databases of copyrighted material. In my view, users should leverage these resources to evaluate AI-generated outputs before using them publicly.
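To make the kind of textual-overlap check described above concrete, here is a minimal sketch in Python. The five-word window, the 20 percent threshold, and the reference corpus are illustrative assumptions chosen for brevity, not features of any actual plagiarism-detection product.

```python
# Minimal illustration only: a user-side due-diligence check that flags heavy
# n-gram overlap between an AI-generated text and a set of known works.
# The reference corpus, 5-word window, and 20% threshold are assumptions.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of n-word phrases appearing in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, reference: str, n: int = 5) -> float:
    """Fraction of the candidate's n-grams that also appear in the reference."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(reference, n)) / len(cand) if cand else 0.0

def flag_possible_overlap(output: str, reference_works: dict, threshold: float = 0.2) -> list:
    """List titles of reference works the output overlaps with beyond the threshold."""
    return [title for title, text in reference_works.items()
            if overlap_ratio(output, text) >= threshold]

# Hypothetical usage: check AI-generated lyrics against known lyrics before publishing.
# hits = flag_possible_overlap(ai_lyrics, {"Known Song": known_lyrics_text})
# if hits:
#     print("Review these works before sharing:", hits)
```

A check of this sort cannot settle a substantial-similarity question under copyright law, but it gives a user a cheap first signal that closer review is warranted before an output is shared or sold.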

Rather than holding AI itself responsible, it is more pragmatic to reinforce user accountability. Assigning liability to a tool rather than the individual operating it would create legal and ethical complexities while undermining personal responsibility. For example, in copyright law, tools such as photocopiers, musical instruments, or word processors have never been held liable for their misuse; the individual or organization using the tool has always been accountable. Extending this principle to generative AI ensures continuity in legal reasoning while promoting informed and skillful use of technology.

I also recognize the role that AI developers can play in fostering responsible use. Developers could embed copyright-detection mechanisms into their platforms, such as automated alerts for outputs that closely resemble copyrighted works or built-in “copyright alignment checks.” These features would serve as a safeguard for users while reinforcing the importance of compliance. However, such tools should complement rather than replace the user’s primary responsibility for ensuring legal and ethical use of AI.
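As a rough sketch of how such an embedded safeguard might be wired into a generation pipeline, the following Python fragment wraps a hypothetical model call with an automated alert. Here generate_text, similarity_score, and the threshold are stand-ins introduced for illustration, not the API of any existing generative AI product.

```python
# Illustrative sketch only: a platform-side "copyright alignment check" that wraps
# generation with an automated alert. The callables and threshold are hypothetical
# stand-ins, not any vendor's actual API.

from typing import Callable, Dict, List

ALERT_THRESHOLD = 0.2  # assumed cutoff; a real platform would tune this empirically

def generate_with_alert(prompt: str,
                        generate_text: Callable[[str], str],
                        similarity_score: Callable[[str, str], float],
                        reference_works: Dict[str, str]) -> Dict[str, object]:
    """Generate output, then attach warnings naming reference works it closely resembles."""
    output = generate_text(prompt)
    warnings: List[str] = [title for title, text in reference_works.items()
                           if similarity_score(output, text) >= ALERT_THRESHOLD]
    return {"output": output, "copyright_warnings": warnings}

# The platform would surface result["copyright_warnings"] to the user before the
# output is shared or commercialized, reinforcing rather than replacing user diligence.
```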

In sum, by emphasizing user responsibility and leveraging current resources, we can foster a culture of responsible AI use while maintaining continuity in copyright law. This solution acknowledges the transformative power of generative AI without sacrificing the principles that underpin accountability and creativity in the digital age.

Clinical Associate Professor of Management and Analytics

NYU School of Professional Studies

Michael P. Goodyear puts his finger on an important question. Fierce battles are currently raging in courts over expansive claims of AI copyright infringement, including assertions that any training on copyrighted works, irrespective of output, is infringing. In contrast, many see one scenario as easy. When an AI system generates output similar to copyrighted works in its dataset—whether lyrics or news articles—it is often taken for granted that the system’s producer infringes copyright. However, the question of which actor should bear liability for infringing output in AI’s complex technological setting is much more intricate as a matter of copyright law and policy.

Goodyear’s own answer is creative, but ultimately wanting. He suggests escaping the Scylla and Charybdis of user or producer liability via a third alternative: make the AI itself, endowed for this purpose with artificial legal personality, liable. The goal is not a pointless attempt to change the AI’s behavior or reach its nonexistent purse for compensation. The legal maneuver is designed to exempt both users and producers, while opening the door to limited secondary liability for producers under a “notice and revise” standard. This third way is unnecessarily tortuous and possibly self-defeating. Unlike in the case of corporations or pets, artificial legal personality serves no useful purpose here, neither as a way of pooling resources nor as a mechanism for representing otherwise ignored interests. Nor does AI liability have any independent significance. Its sole purpose is to serve as a clever conceptual scarecrow that restricts actual liability to the proposed notice-and-revise standard imposed on producers. Lawmakers are unlikely to use the sledgehammer of AI legal personality to achieve this limited goal. More importantly, rather than hiding behind a fictitious legal personality, it seems more productive to openly discuss the appropriate standard to govern AI producers.

What should we expect of AI producers whose systems generate infringing output? Like previous generations of digital platforms, such producers are gatekeepers of dual-use technology. They make general design choices with respect to technology that has harmful as well as beneficial uses. The former impose a widespread social risk of copyright infringement. Among the latter are not only consumptive benefits, but also the productive empowerment of a new generation of democratized user creativity. Copyright’s treatment of AI gatekeepers is trapped between its all-or-(almost)-nothing liability standards. Direct producer liability for all infringing output will over-impair the social benefits of AI. The design of AI systems is “lumpy”: design measures that limit infringement also restrict legitimate beneficial uses. Since the private value producers capture from the beneficial consumptive and productive uses of AI is much smaller than their social value, all-encompassing producer liability will result in over-blocking of beneficial uses. In contrast, a mere duty on producers to respond to notices of individual infringement is too little. Such a thin standard misses producers’ ability to shape their systems through initial design choices and the systematic character of the social risk their products impose.

The alternative is simple and, surprisingly, nonexistent in current copyright law. To escape legal liability for infringing output, producers should be required to take reasonable design precautions to reduce the risk of infringement on their systems. In legalese we call this standard “negligence.”

Professor of Law

The University of Texas School of Law

Cite this Article

“AI and Copyright Law.” Issues in Science and Technology 41, no. 2 (Winter 2025).
