Eyes on AI

A DISCUSSION OF

Should Artificial Intelligence Be Regulated?

In “Should Artificial Intelligence Be Regulated?” (Issues, Summer 2017), Amitai Etzioni and Oren Etzioni focus on three issues in the public eye: existential risks, lethal autonomous weapons, and the decimation of jobs. But their discussion creates the false impression that artificial intelligence (AI) will require very little regulation or governance. When one considers that AI will alter nearly every facet of contemporary life, the ethical and legal challenges it poses are myriad. The authors are correct that the futuristic fear of existential risks does not justify overall regulation of AI development. This, however, does not obviate the need to monitor scientific discovery and to determine which innovations should be deployed. There are broad questions as to which present-day and future AI systems can be deployed safely, whether the decisions they make are transparent, and whether their impact can be effectively controlled. Current learning systems are black boxes whose output can be biased, whose reasoning cannot be explained, and whose impact cannot always be controlled.

Though supporting a “pause” on the development of lethal autonomous weapons, the authors sound out of touch with the ongoing debate. They fail to mention international humanitarian law. Furthermore, their examples of “human-in-the-loop” and “human-on-the-loop” systems—Israel’s Iron Dome and South Korea’s sentries posted near the demilitarized zone bordering North Korea—are existing systems that have a defensive posture. Proposals to ban lethal autonomous weapons do not focus on defensive systems. However, by using these examples, the authors create the illusion that the debate is primarily about banning fully autonomous weapons. The central debate is about what kind of “meaningful human control” should be required before the killing of humans is delegated to machines, even machines “in” or “on” the loop of human decision making. To make matters worse, they suggest that a ban would interfere with the use of machines for “clearing mines and IEDs, dragging wounded soldiers out of the line of fire and civilians from burning buildings.” No one has argued against such activities. The paramount issue is whether lethal autonomous weapons might violate international humanitarian law, initiate new conflicts, or escalate existing hostilities.

The authors are strong on the anticipated decimation of many forms of work by AI. But to date, political leaders have not argued that this requires regulating AI research, let alone relinquishing it. Technological unemployment is not an issue of AI governance; it is a political and economic challenge: how should we organize our political economy in light of widespread automation and rapid job loss?

From cybersecurity to algorithmic bias, from transparency to controllability, and from the protection of data rights and human autonomy to privacy, advances in AI will require governance in the form of standards, testing and verification, oversight and regulation, and investment in research to ensure safety. Existing governmental approaches, dependent on laws, regulations, and regulatory authorities, are sadly inadequate for the task. Governance will increasingly rely on industry standards and oversight, and on engineering means to mitigate risks and dangers. An enforcement regime will also be required to ensure that industry acts responsibly and that critical standards are followed.

Scholar

The Hastings Center

Yale Interdisciplinary Center for Bioethics

“Eyes on AI.” Issues in Science and Technology 34, no. 1 (Fall 2017).