Making Space for Technology Governance
A DISCUSSION OF Imagining Governance for Emerging Technologies
In “Imagining Governance of Emerging Technologies” (Issues, Spring 2022), Debra Mathews, Rachel Fabi, and Anaeze Offodile II outline the approach of the National Academy of Medicine’s Committee on Emerging Science, Technology, and Innovation (CESTI) for anticipating broad societal impacts from emerging technologies prior to their large-scale deployment and use. In full disclosure, I was involved in developing the figure, “Principles for the Governance of Emerging Technologies,” used in the article. I helped draft its first iteration for CESTI review and further development. I believe it provides a useful guide for the more holistic assessment of emerging technologies, their potential societal impacts, and procedural and substantive policy dimensions for technological governance. I am also impressed by CESTI’s use of narratives and scenarios to explore impacts and normative dimensions upstream of technology design, deployment, and use. I commend the committee for its efforts.
Unfortunately, as a society, we are far behind in using such frameworks and approaches for responsible research and innovation on emerging technologies, many of which carry significant uncertainties about their impacts as well as important normative questions. The case the authors describe, transcranial direct current stimulation, which is used for both medicine and enhancement without US regulatory approval or any governance of its social and ethical issues, illustrates the inadequacies of current technology governance. The elephant in the room is: how can we remedy this deficit? On that question, Mathews and coauthors offer limited discussion. Models such as those proposed by CESTI are prevalent in the social science, science and technology studies, and policy literatures, but they will not have impact until policy spaces exist in which to implement them.
In my observation, the root cause of our inattention to the ethical and societal dimensions of emerging technologies, and of the lack of policy spaces in which to consider them, is a lack of political will. We live in a technologically optimistic society dominated by a capitalistic paradigm for technology governance. The predominant narrative equates social good with technology development and adoption: with development come capital, jobs, and economic growth. Regulations for safety, let alone consideration of social and ethical dimensions or engagement with diverse publics, are seen as getting in the way of these primary goals. Decisionmaking power is concentrated in the industries developing the technology and in regulators whose hands are tied by limited legal authorities and by pressure from the US Congress, driven in turn by industry lobbying, not to overregulate. Technology governance thus takes place at the stage of regulation, largely (almost exclusively) between the product developer and these constrained regulators.
Currently, spaces for the broader analysis and governance proposed by CESTI, and reported by Mathews and coauthors, are woefully lacking. It is imperative that these policy spaces be independently run; include voices of diverse and affected publics; and come with teeth—the ability to constrain or enable technologies—in order to execute the vision for more robust, responsible, and equitable futures with technology. We should turn our attention toward the creation of those spaces, including ways to overcome the political and economic forces, power structures, and strong techno-optimistic narratives that prevent their existence. Yet this is no easy task.
Goodnight-NCGSK Foundation Distinguished Professor in the Social Sciences
Codirector, Genetic Engineering and Society Center
North Carolina State University
Debra Mathews, Rachel Fabi, and Anaeze Offodile II outline a systematic methodology developed by the National Academy of Medicine to inform a novel governance framework for disruptive technologies. The authors note that “fostering the socially beneficial development of such technologies while also mitigating risks will require a governance ecosystem that cuts across sectors and disciplinary silos and solicits and addresses the concerns of many stakeholders.”
The governance framework, however, does not adequately address the role of risk mitigation in the governance process. I propose that the Academy’s Committee on Emerging Science, Technology, and Innovation include risk mitigation in its next round of study of policy tools for governing emerging technologies, so that innovation risks are identified and managed to ensure high-quality and safe patient care.
Advances in patient care involve a learning curve with new potential sources of harm and unintended consequences. For example, data used to “train” technology enabled by artificial intelligence may not be sufficiently diverse, resulting in algorithmic bias that adversely affects certain patients. Risk assessment and mitigation thus should begin in the innovation sandbox and continue through each stage of the product lifecycle to identify, analyze, and control risks. A health care organization’s innovation risk appetite (the amount of risk an organization is willing to accept to achieve its objectives) and risk tolerance (the acceptable deviation from the organization’s risk appetite) should be incorporated into its enterprise risk management program. Since the introduction of emerging technologies presents strategic, operational, and patient safety exposures to the health system, innovation risk also should be included in the governing board’s risk agenda, consistent with the board’s oversight responsibility.
Critical risks relating to emerging technologies are many and varied, including:
- Lack of integration with the patient’s electronic health record resulting in gaps in clinician documentation that could negatively affect diagnosis and treatment decisions;
- Injury to patients and medical malpractice liability exposure resulting from an evolving standard of care;
- Problems with the accuracy, integrity, and/or completeness of data or information utilized in the development of technologies;
- Vulnerabilities that may result in data privacy breaches and/or security incidents;
- Failure to appropriately determine data ownership, rights, and permitted uses in codeveloped intellectual property that adversely affects a codeveloper’s right to use, share, or sell the technology or information generated by it;
- Disproportionate allocation of contractual rights, responsibilities, and liabilities among all stakeholders throughout the entire development and deployment lifecycle;
- Inadequate insurance coverage for a product’s deficiencies when used by the organization during the development period and when marketed to third parties;
- Violations of federal and/or state fraud and abuse laws, such as when the technology could influence a provider’s referral decisions;
- Uncertain and inconsistent legal and regulatory requirements that may result in litigation, imposition of monetary penalties, or administrative agency action;
- Financial risk due to unclear reimbursement rules and policies that may affect achievement of economic objectives; and
- Reputational risk, such as ethical violations.
Since the dynamic nature and accelerated pace of tech-driven innovation carries inherent risks throughout the entire product lifecycle, it is prudent to include risk mitigation as a policy tool for the governance of emerging technologies.