Maintaining Control Over AI

A DISCUSSION OF

Human-Centered AI

A pioneer in the field of human-computer interaction, Ben Shneiderman continues to make a compelling case that humans must always maintain control over the technologies they create. In “Human-Centered AI” (Issues, Winter 2021), he argues for AI that will “amplify, rather than erode, human agency.” And he calls for “AI empiricism” over “AI rationalism,” by which he means we should gather evidence and engage in constant assessment.

In many respects, the current efforts to develop the field of AI policy reflect Shneiderman’s intuition. “Human-centric” is a core goal in the OECD AI Principles and the G20 AI Guidelines, the two foremost global frameworks for AI policy. At present, more than 50 countries have endorsed these guidelines. Related policy goals seek to “keep a human in the loop,” particularly in such crucial areas as criminal justice and weapons. And the call for “algorithmic transparency” is simultaneously an effort to ensure human accountability for automated decisionmaking.

There is also growing awareness of the need to assess the implementation of AI policies. While countries are moving quickly to adopt national strategies for AI, there has been little focus on how to measure success in the AI field, particularly in the areas of accountability, fairness, privacy, and transparency. In my organization’s report Artificial Intelligence and Democratic Values, we undertook the first formal assessment of AI policies, taking the characteristics associated with democratic societies as key metrics. Our methodology provides a basis to compare national AI policies and practices today, and it will provide an opportunity to evaluate progress, as well as setbacks, in the years ahead.


Information should also be gathered at the organizational level. Algorithmic Impact Assessments, similar to data protection impact assessments, require organizations to conduct a formal review before deploying new systems, particularly those with direct consequences for individuals’ opportunities, such as hiring, education, and the administration of public services. These assessments should be considered best practice, and they should be supplemented with public reporting that makes meaningful independent assessment possible.

In the early days of law and technology, when the US Congress first authorized the use of electronic surveillance for criminal investigations, it also required law enforcement agencies to produce detailed annual reports assessing the effectiveness of those new techniques. Fifty years later, those reports continue to provide useful information to law enforcement agencies, congressional oversight committees, and the public as new issues arise.

AI policy is still in the early days, but the deployment of AI techniques is accelerating rapidly. Governments and the people they represent are facing extraordinary challenges as they seek to maximize the benefits for economic growth and minimize the risks to public safety and fundamental rights during this period of rapid technological transformation.

Socio-technical imaginaries have never been more important. We concur with Ben Shneiderman in his future vision for artificial intelligence (AI): humans first. But we would go further with a vision for socio-technical systems: human values first. This means appreciating the role of users in design processes, followed by the identification and involvement of additional stakeholders in a given, evolving socio-technical ecosystem. Technological considerations can then ensue. This approach allows us to design and build meaningful technological innovations that support human hopes, aspirations, and causes. Our goal is the pursuit of human empowerment, as opposed to the diminishment of self-determination, and the use of technology to create the material conditions for human flourishing in the Digital Society.

We must set our sights on creating infrastructures that bridge the social, technical, and environmental dimensions and that support human safety, protection, and constitutive human capacities, while maintaining justice, human rights, civic dignity, civic participation, legitimacy, equity, access, trust, privacy, and security. The aim should be human-centered, value-sensitive socio-technical systems, offered in response to local community-based challenges and designed, through participatory and co-design processes, for reliability, safety, and trustworthiness. The designer’s ultimate hope is not only to leave the outward physical world a better place but also to ensure that multiple digital worlds and the inner selves can be freely explored together.

With these ideas in mind, we declare the following statements, affirming shared commitments to meeting common standards of behavior, decency, and social justice in the process of systems design, development, and implementation:

As a designer:

  1. I will acknowledge the importance of approaching design from a user-centered perspective.
  2. I will recognize the significance of lived experience as complementary to my technical expertise as a designer, engineer, technologist, or solutions architect.
  3. I will endeavor to incorporate user values and aspirations and appropriately engage and empower all stakeholders through inclusive, consultative, participatory practices.
  4. I will incorporate design elements that acknowledge the role of individuals and groups within complex socio-technical networks and that are sensitive to the relative importance of community.
  5. I will appreciate and design for evolving scenarios, lifelong learning and intelligence, and wicked social problems that do not necessarily have a terminating condition (e.g., sustainability).
  6. I will contribute to the development of a culture of safety to ensure the physical, mental, emotional, and spiritual well-being of the end user, and in recognition of the societal and environmental implications of my designs.
  7. I will seek to implement designs that maintain human agency and oversight, promote the conditions for human flourishing, and support empowerment of individuals as opposed to replacement.
  8. I will grant human users ultimate control and decisionmaking capabilities, allowing for meaningful consent and providing redress.
  9. I will seek continuous improvement and refinement of the given socio-technical system using accountability (i.e., auditability, answerability, enforceability) as a crucial mechanism for systemic improvement.
  10. I will build responsibly with empathy, humility, integrity, honor, and probity, and I will not bring my profession into disrepute.

As a stakeholder:

  1. You will have an active role and responsibility in engaging in the design of socio-technical systems and contributing to future developments in this space.
  2. You will collaborate with, and respect the diverse opinions of, others in your community and those involved in the design process.
  3. You will acknowledge that your perspectives and beliefs continually evolve and are refined over time in response to changing realities and real-world contexts.
  4. You will be responsible for your individual actions and interactions throughout the design process, and beyond, with respect to socio-technical systems.
  5. You will aspire to be curious, creative, and open to developing and refining your experience and expertise as applied to socio-technical systems design.
  6. You will appreciate the potentially supportive role of technology in society.

As a regulator:

  1. You will recognize the strengths and limitations of both machines and people.
  2. You will consider the public interest and the environment in all your interactions.
  3. You will recognize that good design requires diverse voices to reach consensus and compromise through dialogue and deliberation over the lifetime of a project.
  4. You will strive to curate knowledge and to distinguish between truth and meaning, and you will not deliberately propagate false narratives.
  5. You will act with care to anticipate new requirements based on changing circumstances.
  6. You will be objective and reflexive in your practice, examining your own beliefs, and acting on the knowledge available to you.
  7. You will acknowledge the need for human oversight and provide mechanisms by which to satisfy this requirement.
  8. You will not collude with designers to embed bias or to avoid accountability and responsibility.
  9. You will introduce appropriate enforceable technical standards, codes of conduct and practice, policies, regulations, and laws to encourage a culture of safety.
  10. You will take into account stakeholders who have little or no voice of their own.

Professor, School for the Future of Innovation in Society and School of Computing, Informatics, and Decision Systems Engineering

Arizona State University

Director of the Society Policy Engineering Collective and founding Editor in Chief of IEEE Transactions on Technology and Society

Lecturer, School of Business, Faculty of Business and Law

University of Wollongong, Australia

Coeditor of IEEE Transactions on Technology and Society

Professor of Intelligent and Self-Organising Systems, Department of Electrical & Electronic Engineering

Imperial College London

Editor in Chief of IEEE Technology and Society Magazine

Cite this Article

“Maintaining Control Over AI.” Issues in Science and Technology 37, no. 3 (Spring 2021).
