Challenges raised by gene editing
In “Why We Need a Summit on Human Gene Editing” (Issues, Spring 2016), David Baltimore describes how the planning committee chose the main theme and diverse topics of presentations for the International Summit on Human Gene Editing, held in December 2015. I appreciate the committee’s dedication and effort to make the global forum a memorable and significant event. Dr. Baltimore also expressed hope that the discussions would “serve as a foundation for a meaningful and ongoing global dialogue,” and with that in mind, I would like to share what has been happening in Japan and offer some thoughts for the future.
In Japan, the Expert Panel on Bioethics of the Council for Science, Technology, and Innovation in the national government’s Cabinet Office has been considering the issue since June 2015. The panel decided to take action because of the publication of the first research paper about gene editing on human embryos, and was also prompted by statements by the US government and the International Society for Stem Cell Research in spring 2015. The panel held four hearings with experts in medicine and ethics, and delivered an interim report in April 2016.
The panel concluded that clinical usage of gene editing techniques on human embryos that would lead to heritable genetic changes in future generations should not be allowed at this time, owing to safety concerns, as well as other ethical, social, and philosophical issues. The panel’s report also refers to the technical, ethical, and social issues described in the Statement of the International Summit. Regarding basic research on human embryos, the panel judged that it might be possible to justify some areas of research, such as research into the function of genes during the development of human embryos. All such research, however, would need to undergo strict ethical reviews and—whatever the case—gene-edited embryos should never be implanted in the uterus.
The Japan Society of Gene and Cell Therapy (JSGCT) issued a joint statement with the American Society of Gene and Cell Therapy in August 2015 (Molecular Therapy, 23:1282). Furthermore, in April 2016, the JSGCT, in collaboration with three other academic societies in Japan, issued a proposal for the prohibition of clinical application of germline editing and urged the government to establish appropriate national guidelines for basic research.
As a participant in both international and national activities, I can confidently say that the international summit has had a positive influence on the discussion of gene editing in Japan. Two of the members of the Expert Panel on Bioethics who participated in the summit—including myself—presented reports at one of the panel’s meetings, as well as at several academic societies. The challenge now is how to make the dialogue truly global. As a result of the summit, there are surely many discussions taking place all around the world. I hope that those local discussions—particularly those in non-English speaking countries and regions such as Asia, Africa, and Latin America—will be welcomed into these global discussions, since the challenge of how to handle gene editing technology is one that concerns all of humanity.
I had the privilege of attending the International Summit on Human Gene Editing, and the take-home message for me was that from an experimental perspective, human somatic and germline gene editing are acceptable within local (and global) regulatory and ethical/moral frameworks. In addition, from a therapeutic perspective, somatic gene editing provides a very exciting and globally acceptable opportunity. In contrast, editing the human germline for therapeutic (or preventative) purposes raises many important questions for which there are currently no answers. These questions are complex and touch on issues such as altering the course of natural evolution (with unpredictable consequences) and eugenics, among many others. All present at the summit shared a strong commitment that the scientific community should not proceed in the direction of therapeutic/preventative human germline gene editing.
Having studied and worked in the “North” and now located in the “South,” I have often been asked whether technological advances such as gene editing are indeed relevant to emerging economies, given the need to focus on more pressing priorities such as basic education, health, and food security. I live in a country—South Africa—that has one of the highest HIV prevalence rates in the world, with most affected individuals in the economically active segment of the population. My own research envisions implementing advanced genomic technologies (including gene therapy) in a country in which far fewer than 100% of HIV-positive individuals are on antiretroviral therapy. Can one justify advanced technologies in the face of an inability to meet basic needs?
The answer does not appear to lie exclusively in the notion of distributive justice, but perhaps in the principles of health economics: if it makes sense from an economic perspective, then everyone stands to benefit. Too little has been done, in my opinion, to accurately estimate the benefits that would be derived from implementing the fruits of the genomics era (including gene therapy and gene editing) on a large scale in the developing world. This approach would require a calculation of the costs, for instance, of lifelong therapy for communicable diseases such as HIV, genetic disorders such as cystic fibrosis, sickle cell disease, and familial hypercholesterolemia, and then contrasting this to the cost of a one-off diagnostic test or therapeutic procedure. In practice, this would include, for example, the institution of newborn screening programs in the public sector (currently available only to the privileged minority in the private sector) and the application of gene therapy and gene editing for diseases including those mentioned above, bearing in mind the significant cost reduction that would occur with economies of scale.
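The cost comparison sketched above can be made concrete with a simple break-even calculation. The following Python sketch is illustrative only: the therapy and procedure costs are hypothetical placeholder figures, not estimates drawn from the letter, and a real health-economics analysis would add quality-of-life weights, adherence rates, and scale effects.

```python
# Hedged sketch: comparing lifelong therapy for a chronic condition with a
# one-off curative procedure. All cost figures below are hypothetical
# placeholders for illustration, not real estimates.

def breakeven_years(annual_therapy_cost: float, one_off_cost: float) -> float:
    """Years of chronic therapy after which a one-off cure pays for itself."""
    return one_off_cost / annual_therapy_cost

def discounted_lifetime_cost(annual_cost: float, years: int,
                             rate: float = 0.03) -> float:
    """Present value of an annual cost stream, discounted at `rate` per year."""
    return sum(annual_cost / (1 + rate) ** t for t in range(years))

# Hypothetical numbers for illustration only.
therapy_per_year = 300.0      # assumed annual cost of lifelong therapy (USD)
procedure_once = 5_000.0      # assumed one-off diagnostic/therapeutic cost (USD)

years = breakeven_years(therapy_per_year, procedure_once)
lifetime = discounted_lifetime_cost(therapy_per_year, years=40)
print(f"One-off procedure breaks even after {years:.1f} years of therapy")
print(f"Discounted 40-year therapy cost: ${lifetime:,.0f}")
```

Even with these toy numbers, the discounted cost of four decades of therapy exceeds the one-off cost, which is the shape of argument the letter suggests policy makers have yet to work through at scale.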
I would welcome an opportunity to work with like-minded individuals on the health economics of the large-scale implementation in the South of the fruits of the genomics era (including gene therapy and gene editing), where the need paradoxically is as great, if not greater, than in the North, where most of the attention appears currently to be focused. The hope is that armed with an objective appraisal, it will be possible to approach leaders in government and business to convince them of the urgency to act.
The emerging power of biotechnology is promising an unprecedented ability to alter human structure and function. To foster global dialogue on those powerful possibilities, members of the international community gathered in Atlanta in May 2015 for Biotechnology and the Ethical Imagination: A Global Summit (BEINGS). Leaders from science, business, philosophy, ethics, law, social science, religious disciplines, and the arts and humanities convened to propose a set of ethical principles and policy standards for biotechnologies that impact the human species. The results will be published in the coming months.
The potential of biotechnological advances demands many such conversations as BEINGS, and so I applaud the sponsors of the International Summit on Human Gene Editing. But the key question for such conversations is: who should be at the table?
Human gene editing challenges our definitions of what it means to be human, as well as the proper limits of scientific interventions. But it also demands an examination of how our desire to define and heal disease is conflated with aesthetic definitions and desires, and strongly challenges us to examine our socially and culturally situated definitions of concepts such as “normal functioning” and “disability.”
How we approach such questions is historically and socially contingent. The impact of human gene editing will be felt beyond the biotechnologically advanced countries, and, as participants in the collective human experience, the world community deserves a voice. In the past, technologically advanced societies made decisions that had tremendous impact on the social progress and physical environments of other societies. We must learn from that history and solicit the combined wisdom of different cultures with different experiences and perspectives to thoroughly and transparently debate the implications of this technology. Different kinds of insights lie in the collective experience, the science and philosophy and art and literature of our species, including that of tribal and indigenous populations.
We must invest in those conversations now. It is not only the power of these technologies that challenges us, but also their simplicity. It is challenging enough to determine how the scientists represented at the summit should handle human gene editing; it will become nearly impossible when the tools can be mastered by anyone with basic technical skills in genetics. The do-it-yourself garage genetics lab may not be quite ready for human gene editing, but the ability to alter the genomes of plants and microorganisms is becoming routine. As these technologies become increasingly accessible, so will the potential for creating accidental (or, unfortunately, intentional) pathogens or environmentally destructive species.
How do we confront such challenges as a world community? I am not sure of all the solutions, but I am sure of the process: we need collective innovative thought from as many different fields, cultures, philosophies, and perspectives as possible. And that is going to happen only if we also invite critics, opponents to the technologies, and those whose disciplines or fields may at first seem irrelevant to the conversation, as we tried to do in BEINGS. Time is not on our side.
Regulating genetic research and applications
R. Alta Charo’s “The Legal and Regulatory Context for Human Gene Editing” (Issues, Spring 2016) provides an excellent broad overview of the current status and challenges of biotechnology governance around the globe. The article also touches, if briefly, on current oversight issues in biosecurity, mentioning the self-governance models of the National Science Advisory Board for Biosecurity (NSABB) that emerged from recommendations in the 2003 National Academies’ report Biotechnology Research in an Age of Terrorism.
When thinking about biotechnology from a security governance perspective, it will be necessary to anticipate the types of security threats that may emerge as science and technology advance, the potential consequences of those threats, the probability that adversaries will obtain or pursue them, adversarial intent, and the potential effect on strategic stability.
The CRISPR/Cas9 system, and emerging variants on it, enables unprecedented control and ease in editing the genome. It is somewhat analogous to remote “command and control” of the genome, which is what makes these tools novel and different from earlier gene editing methods. The potential impacts on biomedicine and human health are vast, including beneficial applications in gene and cell replacement therapies, drug discovery through functional genomic screens, and simplified production of disease models that enable validation of therapeutic targets and more efficient testing of drug efficacy. But the future challenges and pitfalls associated with CRISPR/Cas9, especially those with implications for international security, are still to be determined. Governance that addresses these uncertainties without hindering research is a tough task.
The broader biosecurity and nonproliferation communities (as well as congressional committee findings) have recognized that in the twenty-first century biological weapons are sometimes (but not always) cheaper, easier to produce, more widely available, within the capabilities of an increasingly large number of people with only minimal technical skills and equipment, more concealable, and more exploitive of dual-use technologies. The potential synergies between biotechnology and other emerging technologies, such as nanotechnology, big data analytics, and the cognitive neurosciences, not only suggest tremendous promise for consumer and defense applications, but also raise new concerns. A driving concern is that within this century, both nation-states and non-state actors may have access to new and potentially devastating dual-use technology.
Reducing the risk of state-based misuse of biotechnology for weapons proliferation will require grappling with the highly transnational nature of biotechnology research and development. Traditional and innovative approaches to nonproliferation and counterproliferation are important policy elements for reducing the risk of malfeasant application of technology. Robust international agreements lower the risk of terrorist applications by eliminating legal routes for states and terrorists to obtain agents, precursors, or weaponization materials, and by minimizing possible transfers from state to non-state actors through theft, deception, or other means. Efforts to strengthen the international regime controlling transfers of dual-use materials and equipment are also important.
In R. Alta Charo’s article about human gene editing regulation, she gestures toward a regulatory model that has a lot in common with the “learning health system,” a recently described model in which a health care system is constantly engaged in a process of policy monitoring, rapid feedback, and quality improvement. Analogously, as new technologies emerge, a “learning regulatory agency” would apply the assessment methods and standards that it knows well, but would always be on the lookout for systematic biases, flaws, or new domains of innovation that are poorly served by its current tools. The agency would then adapt, developing new methods and standards accordingly.
We think that as policy makers contemplate the promise and potential pitfalls of gene editing technologies, it will be valuable for a “learning agency” to take some lessons from another recent innovation in biotechnology—genomics and precision medicine. Following on the success of the Human Genome Project, precision medicine initiatives arrived with the potential to read patients’ health and response to therapy off their genetic profiles. In that spirit, biotechnology companies raced forward, searching for meaningful genetic associations and then developing diagnostic technologies to detect these genes. But lacking a structured system for market entry, companies began offering tests for sale directly to consumers, without having to demonstrate that these associations were clinically meaningful predictors of patient outcomes. Unfortunately, an insufficient evidence base about how to use and interpret genetic tests—as would be demanded by stricter regulation—often left both clinicians and patients confused. In part as a result, precision medicine has failed, thus far, to live up to its high expectations.
In response to this problem, the Food and Drug Administration is now proposing to oversee more closely certain aspects of clinical diagnostics. However, we think that one lesson for a learning agency to draw from the precision medicine case is to be more proactive in requiring evidence of clinical utility for new biotechnologies. Applying this lesson to gene editing could mean that developers will need to go beyond merely showing that their technologies can successfully edit their targets in controlled conditions—they must also show that this intervention translates into reliable, cost-effective, and clinically meaningful benefit.
This experience also calls into question the regulatory policy narrative that policy makers must choose between protecting the public through strict regulation and promoting innovation through loose regulation. Indeed, the loose regulatory environment surrounding diagnostic technologies in precision medicine seems to have failed by both measures: There have been few genuine therapeutic breakthroughs emerging from this space, and the public has been made more vulnerable to scientifically dubious claims surrounding the utility of genetic testing. Both of these outcomes can erode public trust in science and regulation. Thus, although the ideal of an adaptive learning agency is a good one, it is critical to be clear about what should be driving this adaptation. An ideal learning agency would adapt and evolve alongside the science (with all of its messy uncertainty), not to the perception of what the science could be.
There is currently a three-speed Europe for the biotechnology industry, with each of its three applications—health care, industrial, and agricultural—operating under different regulatory approval processes.
From 2012 to 2017, the annual growth rate for biotechnology products in the health care field (medicines and diagnostics) is estimated at 5.5%, with 80% of that growth attributable to small- and medium-size enterprises. Europe has approved more biosimilar products (22 over the past 10 years) than the United States, and more than 40 monoclonal antibodies active in different therapeutic areas, especially cancer therapy, have received regulatory approval. This has been helped by several imaginative European Early Access Approval schemes, hedged around by the requirement for strictly enforced post-marketing safety studies. These studies are monitored under new safety regulations introduced in 2013, including the establishment of a new European safety committee, the Pharmacovigilance Risk Assessment Committee. Health technology assessments, more highly developed in Europe than in the United States, and pressure on health care budgets from cash-strapped payers limit the availability of costly biotechnology products, even after regulatory approval.
Other new European regulatory initiatives relevant to health care biotechnology products include introducing in 2014 improved regulations for approving clinical trials; replacing the inconsistent implementation of the previous European Clinical Trials directive; developing in 2007 a new product classification called Advanced Therapy Medicinal Products, which incorporated gene therapy, cell therapy, and tissue engineered products; and creating a new office within the European Medicines Agency to provide regulatory advice to small- and medium-size enterprises at a favorable financial rate.
Assessing the platform economy
In “The Rise of the Platform Economy” (Issues, Spring 2016), Martin Kenney and John Zysman offer a fresh and comprehensive look at one of the most urgent topics in industrial economics. The article deepens our understanding because the authors reject any form of technological determinism. Their phrase “technologies—the cloud, big data, algorithms, and platforms—will not dictate our future. How we deploy and use these technologies will” is an inspiration. Of course, once we set aside a techno-deterministic understanding of the platform economy, we realize we don’t know much, and many doubts about the development of the high-tech industry come to light. This article therefore leaves the reader with many unanswered questions—a very welcome and healthy result!
A logical consequence of Kenney and Zysman’s reasoning is the following: If a debate over policy on the rise of the platform economy is not going to be straightforward or simple, what are the dimensions that could guide industrial decisions and national innovation strategies to steer the economic transformation? As I was reading, I scribbled in the margins five issues that should be kept in mind to characterize various aspects of the platform economy.
First, who are the actors that will guide a transformation in the platform economy? Will incumbents resist the pressure of new entrants? Different sectors and different stages of a technology life cycle will provide different answers to these questions. Certain layers (or sectors) of the platform economy will be characterized by stronger barriers to entry and will assign a dominant position to platform leaders. To maintain leadership in these layers, scale will matter, and industrial policy should be very much aware of that. Other layers (or sectors), by contrast, will offer significant opportunities for disruption by new entrants and entrepreneurs. Flexible specialization will be key to achieving competitiveness in these areas.
Second, what will the recombination of the factors of production look like? The platform economy will change the role of labor. As Kenney and Zysman suggest, we can imagine a reorganization of work that will prefer a looser relationship between employer and employee. Also, we need to consider the role of users, and their relevance as codevelopers for parts of the offering for services and products exchanged on the platform economy. How can we motivate these users? How can we protect their contributions? Should the development of users become one of the objectives of innovation policy?
Third, what will a new social contract look like? As a consequence of redesigned labor practices, a social welfare that is founded on formal and stable participation of individuals in the labor force might not set the right incentives to get people to contribute to a platform economy. Can policy redesign a new welfare state based on different criteria?
Fourth, how can we determine the most sustainable position for companies within the platform economy? Firms can compete to become platform leaders, but they could also position themselves in a niche or module of a platform. Alternatively, they could serve platform leaders and niche players by providing important complementary assets.
Finally, what approach should be taken toward the development of a new technology? Once we move beyond technological determinism, pursuing technological leadership is no longer the only strategy; there are many alternatives to consider. Countries and companies that are leading the transformation could, indeed, choose to stay ahead and to lead others on the road to technology adoption. Alternatively, they could follow rather than lead the adoption of certain technological trends. Or they could withhold a decision and strategically prevent the adoption of certain technologies.
Countries as well as companies are trying to position themselves within the platform economy. The debate worldwide on Industry 4.0 is active and urgent. Kenney and Zysman encourage managers and policy makers to take a savvy, informed, and disenchanted approach in order to not end up in the digital peripheries of the future.
Martin Kenney and John Zysman raise an important set of observations about the rise of the platform economy. Few issues elicit more passion with fewer facts than the gig economy and its impact on the lives of workers (especially at ground zero in the Bay Area of California, where I live). The authors provide a very helpful and dispassionate overview and framework for understanding the phenomenon.
I would suggest four areas for near-term attention to push the discussion forward:
First, gather real data, not a set of anecdotes. Government data on employment in the platform economy is poor, and what is collected is dated. The JPMorgan Chase Institute’s study is the most comprehensive to date (https://www.jpmorganchase.com/corporate/institute/report-paychecks-paydays-and-the-online-platform-economy.htm), but much more is needed, including ethnographic research on the workers themselves.
Second, rethink the definition of employee and contractor. The US classification system no longer reflects the reality of the marketplace. Too often companies are winning via regulatory arbitrage and “catch me if you can” approaches to how they classify their workforce. Seth Harris and Alan Krueger have started the debate (http://www.brookings.edu/research/papers/2015/12/09-modernizing-labor-laws-for-the-independent-worker-krueger-harris).
Third, reimagine how benefits are provided independent of an employer. The discussion around portable benefits has sparked a very constructive set of conversations about how to deliver benefits that stay with workers as they earn wages across multiple employers (https://medium.com/the-wtf-economy/common-ground-for-independent-workers-83f3fbcf548f#.a1rsc9p22). We should expect to see a series of experiments around the country on how to make this idea real.
Fourth, engage a broader conversation around the social contract. The great recession exposed the fragility of the post-WWII contract, and the expansion of the platform economy is laying it bare. New America and others are helping frame a dialog that reflects the reality of today’s economy (https://www.newamerica.org/economic-growth/the-next-social-contract-an-american-agenda-for-reform/).
It will be next to impossible to put the platform economy genie back in the bottle—and we shouldn’t try. Now that it is here, we need to adapt, not pine for a nostalgic view of the workplace.
Digitalization is, indeed, changing industries and societies in fundamental ways, as Kenney and Zysman so accurately note. Digital platforms are the key change agents in this development. It is difficult to find any business or area of life that will remain untouched as digitalization goes forward. The authors touch on some of these changes and make a number of references to the Nordic countries, which in many respects have been forerunners in digitalization and where a wide public debate is under way on the challenges that digitalization brings to society.
Intelligent information and communications technology solutions could, according to some observers, reduce global greenhouse gas emissions by 16%, providing a huge opportunity to solve the problem of global warming. Payment systems are changing. Some developing countries have shown the way. Kenya is a country where more than 50% of all financial transactions are made with mobile devices using a system called M-Pesa, provided by Vodafone and a Kenyan operator, Safaricom. A high-level media advisory group for the European Union concluded in 2013 that the media industry is now dominated by a platform game controlled by foreign players. These are just a few examples of the fundamental changes digitalization can bring to society.
Kenney and Zysman concentrate in their article on working life, which will change as more and more people do work made available through platforms. A number of platforms distribute tasks that people can perform and be compensated for once the customer has approved the completed work. This is, indeed, a new issue not only for the people involved but also for the traditional institutions that create and enforce labor market rules and legislation. Those rules are structured primarily to cover conventional employment situations in which an employee and an employer have clear identities and certain rights and obligations. National tax revenues depend on these arrangements.
Nordic countries are widely recognized as welfare states with a high level of public expenditure and heavy taxation of income and consumption. In principle, these welfare societies have been able to make structural reforms in the economy because people feel safe even if they lose their jobs for a short period. Now this situation is changing, and it will be extremely interesting to see how the Nordic welfare societies adapt to the new conditions created by digitalization. Some suggestions have surfaced in the public debate: tax environmentally harmful consumption more heavily, or pay people permanent financial compensation without any work obligations. None of these suggestions, however, has yet offered a sustainable way to adapt to the new conditions. It remains to be seen how different countries will adjust. As a general conclusion, we can say that doing nothing is not an option. Countries will have to change to benefit from the platform economy and digitalization. They are here to stay, and it is up to us whether we will be able to create a sustainable future for ourselves and our children.
In their review of the platform economy, Kenney and Zysman offer a number of interesting observations, but they also fall short in several ways. They take a very US-centric view that lacks global perspective, and they focus narrowly on labor issues and give little attention to how platforms are influencing innovation through harnessing third-party complements.
There is a more interesting global story to tell about the variation in platform creation between North America, Asia, and Europe. Analysis I’ve done suggests that the top US platforms earned $1.3 trillion in profits over the past five years. Asia comes in second at $217 billion. Europe, by contrast, earned only $86 billion, since it has relatively few platform companies, and the ones that it does have are transaction-based rather than integrated platforms that bring together both the matching functions and accelerated third-party innovation.
A chunk of the $1.3 trillion in profits that US platform companies have garnered has gone back to shareholders. However, a good portion is being invested in artificial intelligence, automation, and other next-generation technologies being developed by companies such as Facebook, Amazon, and Apple. This suggests that platform companies have the potential to influence national systems of innovation, with potentially long-term consequences for national competitiveness. While their impact on employment relations may be marginal, they are golden geese from the standpoint of innovation.
Thus, there is much more to say about the global status of platforms, including their significance in terms of rent/value accumulation and influence on national competitiveness.
Is it time for the science policy/funder communities to be more scientific about how to invest in science? This is the question posed and persuasively addressed by Katy Börner in “Perspective: Data-Driven Science Policy” (Issues, Spring 2016). The opportunity to gather, analyze, and visualize data from myriad resources (publications, grants, patents, social media, and so on) in unprecedented volumes provides a powerful argument, even a compelling rationale, for putting data-generated knowledge to work informing governments and private funders how to “most productively” (in Börner’s words) invest in research.
I agree that funders should be willing to become more scientific when it comes to investing in science. Cultivating a willingness to pose thoughtful questions about the purpose and the nature of investing in knowledge generation and use, developing robust ways to acquire and study the data available, and being honest and transparent about our determinations of success (or not) against meaningful outcomes strikes me as the right way to consider how best to deploy limited resources. On the flip side, data-driven science policy benefits from the same skepticism, caution, and debate that surround other “big data” initiatives—how “right” are the questions asked, how good are the data gathered, and what values are represented in outcome measures? Science and scholarship are, fundamentally, human enterprises in service to the common good. Attempts to make inquiry too efficient or optimal risk pushing us away from investing in research that is heterodox to prevailing wisdom, orthogonal to reigning dogma, or skewed to the interests of particular stakeholders. It may just be that some inefficiency is necessary to allow space for novelty to emerge.
Data-driven investment poses positive opportunities and some tricky challenges for nonprofit private funders. Foundations, charities, and individual donors typically rely on eminence-based rather than evidence-based decision making. Like government funders, foundations, charities, and wealthy individuals assemble panels of experts to shape initiatives, provide merit reviews of proposals, and make funding recommendations. Unlike government funders, private funders typically have limited resources and invest in science with small numbers of grants on more targeted topics with short time scales, limiting the amount of data available for analysis. The James S. McDonnell Foundation, for example, makes about 30 new grants a year via investment strategies including: identifying where modest research investments could help fill gaps in scientific knowledge; looking for emerging areas of research at the intersections of traditional disciplines; and identifying questions early in their inception.
It is easy to see how the data-driven approaches Börner describes can help us to more systematically "map" knowledge gaps or target emerging lines of research that could get a boost from targeted, albeit modest, funding. Visualization approaches that dynamically monitor how ideas, theories, and tools are crossing disciplinary boundaries or merging into novel hybrid fields allow us to see how influential our grantees and their publications are in the broader scientific community. Importantly, data-driven approaches can temper our expectations (and claims!) as to what can be achieved with limited investment, guide how funding strategies might need to be altered or adjusted to better match our goals, and identify new research directions for the future. In my view, the challenges for private philanthropic supporters of science are philosophical: in our enthusiasm for data, how do we maintain our core characteristic of independent and diverse decision making?
Katy Börner’s excellent article highlights the impact of enormous computer power for models and simulations, using big data to parameterize models, and increasing capabilities for interactive visualizations. Policy makers can now be immersed in the complexity of their domains and empowered to interactively explore a wide range of “what if” questions.
These capabilities enable decision makers from many disciplines beyond science and technology to engage in such explorations to inform their positions on a diverse array of policy issues, including education, energy, health care, and security. The process of "informing" decision making is rather different from having a system such as IBM's Watson simply deliver the answer. In fact, the key insights usually come from small, multidisciplinary groups using the technology to investigate and debate alternative futures.
Our experience is that senior decision makers readily adapt to such interactive environments, often wanting to "take the controls" and pursue their own ideas and questions. The biggest adoption hurdle involves policy analysts who are reluctant to abandon PowerPoint to create interactive environments that enable such explorations. Taking this leap successfully involves understanding decision makers' "use cases" and paying attention to their "user experiences." This is often best approached by analyst-designer teams.
We have also found that it is best to stick with commercial off-the-shelf tools—for example, AnyLogic, D3, Excel, R, Simio, Tableau, and Vensim—that allow embedding custom code such as Java, rather than writing software from scratch. We often use combinations of these tools. This practice can enable creating a prototype interactive environment within a week or two, which, in turn, allows rapid user feedback and easy mid-course corrections.
The goal of all this is evidence-based policy, rather than policies based largely on political positions or ideologies. Indeed, we have experienced many instances of stakeholders’ ardent positions dissolving in the face of evidence, with them at the controls. Decision making is a lot more efficient and effective when you can get rid of bad ideas quickly.