Forum – Spring 2014
Wet drones
Bruce Berkowitz’s “Seapower in the Robotic Age” (Issues, Winter 2014) is a timely piece. He makes the astute observation that the revolution in unmanned aerial systems is but the first in a wave of robots, the next of which will likely appear in the maritime domain. He provides a historical perspective on past innovations at sea, a surprising number of which involved semi-automatic systems, beginning with the century-old torpedo. He identifies possible applications of robots at sea and discusses a range of pitfalls and problems.
But Berkowitz misses or underemphasizes two key issues that may complicate the deployment of robots at sea: growing cyber insecurity and the legal uncertainty of robot self-defense.
Unmanned aerial systems, whose recent use began to unfold in the early 2000s, were deployed against fairly unsophisticated enemies. Nevertheless, at least a handful of expensive drones have been lost to “hacking,” compelled to defect, so to speak, to foreign air space. How much more will maritime systems be subject to cyber attack? I suspect significantly more.
Maritime systems move more slowly and often “loiter.” They can be intercepted by both manned and unmanned sea or air systems. Additionally, unlike air systems, which are difficult to capture because hacking them typically risks losing the airframe and cargo to the powers of gravity and crash landings, a floating, unmanned maritime system is relatively easy to capture. It could be argued that an expensive, state-of-the-art unmanned maritime system would present an especially appealing target to rival powers. Moreover, it might prove unnecessary to hack the system at all. Rather, any means that disables the propulsion system could result in capture by other drones, by high-speed manned vessels, or by low-tech means such as nets and a couple of strong fishermen.
What would follow in the wake of human capture of our unmanned systems? I submit that matters will be complicated by a host of issues that Berkowitz, to be fair, did not have the time or space to address. For example, would unmanned maritime machines have the right to fight in self-defense to avoid capture by other machines or by human captors? If not, would it be necessary for human combatants to remain nearby in order to defend otherwise vulnerable maritime drones? To those who believe such a conundrum unlikely, remember that in fall 2012 the Iranians made several attempts to intercept, and by some reports fired at, unmanned U.S. drones operating in the Persian Gulf. The attacks were met not with armed drones but with manned aircraft sent to accompany the unmanned systems. Yet to this day, the rules governing drone self-defense remain unclear.
Berkowitz is certainly correct in his broad predictions: A revolution at sea is coming, and it will involve unmanned systems. But with this wave of machines will come confusion, ambiguity, legal wrangling—a proverbial storm. No doubt a fog of uncertainty will accompany the issue of robot self-defense, providing proof that at least some of Clausewitz’s 19th-century observations are indeed timeless, even in the robotic age.
Bruce Berkowitz offers a reasonably comprehensive and balanced assessment of the current state of play regarding unmanned maritime systems (UMS). However, one of his statements—that the Navy cannot, as a matter of DoD policy, deploy UMS that automatically identify and destroy vessels that meet their criteria for being hostile—needs some additional explanation. Modern mines are a form of lethal UMS. They await a predetermined signature along with other criteria, which, if met, causes them to explode. Some mines, such as the old encapsulated torpedo (CAPTOR), would release a Mk-46 homing torpedo if a contact set them off. Others, such as the U.S. Navy’s submarine-launched mobile mine (SLMM), can be launched at a distance and navigate to a position where they will lie in wait. In each of these cases there is movement involved: in the first instance to kill, and in the second to arrive at the ambush position. The only movement not involved is movement to search. The SLMM, as well as stationary mines, is available for Navy use, so the restrictions in the DoD policy document are rather narrow and technical. Even these might not be viable much longer as warheads become ever more discriminating. Unmanned systems have so much promise for maintaining U.S. dominance in the undersea environment that progress will occur. Berkowitz is right: Unmanned systems will reduce risk to sailors and will allow the Navy to maintain certain types of presence with a smaller fleet.
One class of system that Berkowitz does not mention is the amphibian, a vehicle that is let loose in the water but crawls up on land. There are any number of potential uses for this type of system in a complex littoral, especially one featuring offshore islands. Berkowitz also gives short shrift to unmanned aircraft launched from underwater. He shouldn’t, because this concept has considerable potential, especially for small flyers. One can imagine encapsulated UAVs lying on the sea floor waiting to receive a signal to cut loose from their ballast and float to the surface, each releasing a UAV that performs a search pattern and broadcasts its findings, or perhaps radiates a deceptive signal to confuse an enemy. It might also serve as a communications relay, retransmitting low-power transmissions between a submarine and forces over the horizon. Again, the number of potential uses for this kind of undersea/airborne robotic vehicle marriage is almost unlimited, especially if we think in terms of air vehicles that can be folded into a torpedo-sized canister.
Reduced fleet size and evolving threats virtually guarantee that the Navy will invest in all manner of robotic systems. Just as mechanization and automation allow a single Midwest farmer to tend over a thousand acres with perhaps one part-time assistant, the advent of robotics will permit fewer sailors on fewer ships to conduct missions that formerly required many more.
Useful models
Andrea Saltelli and Silvio Funtowicz (“When All Models Are Wrong,” Issues, Winter 2014) provide a checklist to aid in responsible development and use of models. I agree with most, but not all, of their comments and suggestions. Their discussion deals with models in all fields, many of which are empirical, and many deal with the capricious nature of human actions. However, some models have a strong foundation anchored in the physical laws of nature. The best example, perhaps, is the models used for numerical weather prediction (NWP), which do not follow some of the rules proposed in the checklist. Predicting weather entails dealing with odds, owing to the innate lack of predictability associated with what mathematicians now call “chaos”: the tremendous sensitivity of results to small perturbations whose importance grows over time, eventually rendering a weather forecast useless after 10 to 14 days. But weather prediction has advanced enormously by using complex numerical models built around the physical laws expressed as equations (Newton’s laws of motion, conservation of mass, conservation of energy and the thermodynamic equation, the equation of state). The reliability achieved by 3-day forecasts in the 1970s is now achieved by 6-day forecasts. Model complexity continues to grow, and the world’s largest supercomputers are used to carry out the computations. Saltelli and Funtowicz’s recommendation that stakeholders be able to replicate the results is absurd in this case. The validity of the models is constantly tested as the weather develops, and the feedback is used to refine the models.
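For readers who want the flavor of those physical laws, a minimal textbook sketch of the governing equations of a dry atmosphere (the notation and simplifications here are mine, not anything from the article) is:

\begin{align*}
\frac{D\mathbf{v}}{Dt} &= -\frac{1}{\rho}\nabla p - 2\,\boldsymbol{\Omega}\times\mathbf{v} + \mathbf{g} + \mathbf{F} && \text{(momentum: Newton's second law)}\\
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{v}) &= 0 && \text{(conservation of mass)}\\
c_p\,\frac{DT}{Dt} - \frac{1}{\rho}\,\frac{Dp}{Dt} &= Q && \text{(thermodynamic energy equation)}\\
p &= \rho R T && \text{(equation of state for dry air)}
\end{align*}

An operational NWP model discretizes these equations on a global grid and marches them forward in time, which is precisely why forecast skill decays as chaos amplifies small initial errors.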
This is not so much the case for climate models. These models are used for future projections that cannot be verified for decades, and so they are policy instruments. They are built on the NWP atmospheric models but with the inclusion of other parts of the climate system, such as the land surface, oceans, and ice masses. Many aspects of the climate problem hinge on how well these interactions are represented, and in this case the physical laws are not known or are very complex. Processes not explicitly represented by the basic dynamical and thermodynamic variables in the equations on the grid of the model need to be included by parameterizations. These include processes on scales smaller than the grid, such as convection and boundary-layer friction and turbulence; processes that contribute to internal heating, such as radiative transfer and precipitation, both of which require cloud prediction; and missing processes such as the land surface, the carbon cycle, chemistry, and aerosols. As our knowledge of certain factors increases, so does our awareness of factors we previously did not account for or even recognize, and hence uncertainty is apt to increase.
The current practice with climate models is to continue to build them to include as much complexity as possible in order to replicate the real world. The process of model development never ends. In general, each new generation of such models does show improvements. Older versions of the models, which, it can be argued, are better evaluated in the literature and somewhat understood, are cast aside for the latest and greatest. However, it can be argued that predictions or projections that correspond to a given “what-if” emissions scenario should be based on a known model whose results are reproducible. Yet new versions of climate models are created, and runs made with them are immediately made available to the community for use in Intergovernmental Panel on Climate Change (IPCC) reports without adequate testing or evaluation. Although some IPCC models deliberately have modest evolutions, some are “bleeding edge” models that are not yet tried and tested. The practice violates many of the principles outlined by Saltelli and Funtowicz. The question is whether the balance is right between building the next-generation model and exploiting the known model.
Transparency is a desirable goal but one that is easily undermined. Another difficulty not discussed by Saltelli and Funtowicz is that in climate science there are vested interests and deniers of climate change whose goal, it seems, is to undermine the science and projections by any means possible. Many of the denier arguments have been proven wrong time and again, but they keep reappearing.
Models are useful for many purposes, but they can easily be abused and should not be used as black boxes without full understanding of their approximations, assumptions, limitations, and strengths. Models are tools and can be extremely valuable if used appropriately.
As a researcher in uncertainty quantification for environmental models, I heartily agree with Saltelli and Funtowicz that we should be accountable, transparent, and critical of our own results and those of others. Open-access journals, particularly those accepting technical topics (e.g., Geoscientific Model Development) and replications (e.g., PLOS One), would seem key, as would routine archiving of preprints (e.g., arXiv.org) and of (ideally non-proprietary) code and datasets (e.g., FigShare.com). Yet academic promotion and funding structures directly or indirectly penalize these activities, even though they would improve the robustness of scientific findings.
However, I found parts of the article somewhat uncritical themselves. The statement “the number of retractions of published scientific work continues to rise” is not particularly meaningful. Even the fraction of retraction notices is difficult to interpret, because an increase could be due to changes in time lag (retraction of older papers), detection (greater scrutiny through efforts such as RetractionWatch.com), or relevance (obsolete papers not retracted). It is not currently possible to reliably compare retraction notices across disciplines. But in a study by Daniele Fanelli of scientific bias, measured by fraction of null results, geosciences and environment/ecology were ranked second only to space science in their objectivity. It is not clear that we can assert there are “increasing problems with the reliability of scientific knowledge.”
There was also little acknowledgement of existing research, such as the climate projections used in UK adaptation planning, on the question of which of the uncertainties has the largest impact on the result. Much of this research goes beyond sensitivity analysis, which is part of the audit proposed by the authors, because it explores not only uncertain parameters but also inadequately represented processes. Without an attempt to quantify structural uncertainty, a modeller implicitly assumes that errors can be tuned away. While this is, unfortunately, common in the literature, the community is making strides in estimating structural uncertainties for climate models.
The authors make strong statements about the political motivation of scientists. Does a partial assessment of uncertainty really indicate nefarious aims? Or might scientists be limited by resources (computing, staff, or project time) or, admittedly less satisfactorily, by statistical expertise or imagination (the infamous “unknown unknowns”)? In my experience, modellers may be resistant enough to detuning models and broadening uncertainty ranges without added accusations about their motivation. It would be better simply to argue for the benefits of uncertainty quantification. By showing that sensitivity analysis helps us understand complex models and highlights where effort should be concentrated, we can be motivated by better model development. And by showing where we have been “surprised” by too-small uncertainty ranges in the past, we can be motivated by the greater longevity of our results.
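To make that benefit concrete, here is a minimal sketch of a variance-based sensitivity analysis in Python. Everything in it is illustrative rather than drawn from Saltelli and Funtowicz’s article: it estimates first-order Sobol indices for the standard Ishigami test function using a pick-freeze estimator, ranking which uncertain inputs contribute most to the output variance.

# Minimal variance-based sensitivity analysis: first-order Sobol indices
# for the Ishigami test function, via the pick-freeze sampling scheme.
# Illustrative sketch only; the function, sample size, and estimator
# are standard textbook choices, not anything from the article.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard Ishigami test function; x has shape (n_samples, 3)."""
    return (np.sin(x[:, 0])
            + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(seed=0)
n, d = 100_000, 3

# Two independent sample matrices over the input domain [-pi, pi]^3.
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = ishigami(A), ishigami(B)
total_var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    # "Freeze" all inputs from A except column i, taken from B.
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    # Pick-freeze estimator of the first-order index S_i.
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / total_var
    print(f"S_{i + 1} ~= {S_i:.3f}")

With enough samples, the estimates approach the analytic values (about 0.31, 0.44, and 0.00), revealing at a glance that the third input’s direct effect on the variance is negligible: exactly the kind of insight that tells modellers where effort should be concentrated.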
Green skies
In “Greenhouse Gas Emissions from International Transport” (Issues, Winter 2014), Parth Vaishnav addresses a concern we share: climate change. Before commenting on “market-based measures” to reduce greenhouse gas emissions, I want to offer some context about the aviation industry, Boeing, and the environment.
Aviation is an essential part of modern life, with about 3 billion people boarding commercial airplanes every year. Even with increasingly sophisticated digital technologies and social networks, airplanes retain a unique ability to bring people together. Commercial air travel also helps foster economic development and trade. Our industry generates about 5% of global GDP and supports an estimated 56.6 million jobs, including about 170,000 at Boeing.
And as an industry, we understand that environmental responsibility plays a crucial role in our long-term license to grow.
Since the late 1950s, Boeing has improved the fuel efficiency of our airplanes by 70%, which is essential to our customers not only because of environmental impact but also because of the rising cost of fuel. On a per-passenger-mile basis, airplanes today are more efficient than cars and many other forms of transportation. Today, commercial air travel produces about 2% of global manmade CO2 emissions, a share projected to increase to 3% by 2030. This is why Boeing and our industry continue to take action to reduce emissions and improve efficiency.
The aviation industry was the first sector to set ambitious targets for CO2 emissions reduction, including industry-wide carbon-neutral growth beginning in 2020 and a 50% reduction in net CO2 emissions by 2050 compared to 2005 levels.
Boeing R&D investments focus on innovations in propulsion, lightweight materials, and avionics that improve the environmental performance of our products. These innovations are among the reasons why our 787 Dreamliner is 20% more fuel-efficient—and produces 20% less CO2—than the airplanes it replaces. In addition, we work aggressively with global partners to commercialize sustainable aviation biofuel, and we engage research institutions around the world to improve the efficiency of flight. We also advocate for modernized air traffic management systems, which would cut carbon emissions for all airplanes in service by an estimated 12%.
It’s also important to note that our industry has agreed that global market-based measures may play a role in bridging a short-term emissions gap before these new technologies reach their potential. We believe that any money generated from such measures should be put to use finding innovative ways to further reduce emissions.
Innovation and new technology will always be at the heart of aerospace. At Boeing, we are actively testing lower-emission aircraft, including a blended wing-body design and hydrogen-powered propulsion. We are also working with the National Aeronautics and Space Administration to explore hybrid, solar, and electric-powered airplanes to create cleaner modes of flight in the decades to come.
Our industry is building on its demonstrated progress, and we are holding ourselves accountable for supporting continued global economic growth while creating a more sustainable future.
Farmer suicides
Keith Kloor’s article (“The GMO-Suicide Myth,” Issues, Winter 2014) does a disservice to its scientific audience, and I take issue with it. Not with its thesis that Bt cotton is not directly causing Indian farmer suicides: that is obvious, and could be shown simply by noting that the biggest spike in farmer suicides occurred in Andhra Pradesh in 1998, four years before Bt cotton was even on the market. (The 1998 suicides were publicized in the Wall Street Journal and other newspapers, and received international attention; how short our memory is.) Or one could simply summarize the 2011 Gruère and Sengupta article in Journal of Development Studies showing that suicides have not climbed as Bt cotton has been almost universally adopted in India.
What I take issue with is the use of human tragedy in rural India simply to land a few blows in the relentless genetically modified organism (GMO) brawl. As I have pointed out before, both sides in the brawl claim the suicide epidemic bolsters their case, and neither shows concern for actually understanding what is behind it. Despite a headline invoking “the real reasons why Indian farmers take their own lives,” this article mentions almost none of the serious scholarship on the topic, omitting even A. R. Vasavi’s insightful and widely read Shadow Spaces: Suicides and the Predicament of Rural India.
Farmer suicide is a complex problem that can hardly be blamed on a bank policy change that Kloor heard about at a conference. Most small farmers don’t even borrow from banks, and in any case this raises the question of why cotton farmers’ need for credit has risen. State-encouraged, pesticide-intensive hybrid cotton spread during the 1990s, contributing to intractable problems in ecology, farm economics, and farmer decisionmaking. There were social effects as well, as risk and debt became increasingly individualized, unmooring farmers from sources of support.
But Kloor’s goal was not to understand the problem of farmer suicide, but rather to use it to whip up hatred toward Vandana Shiva and “liberal and environmentalist circles,” where GMOs are unpopular. The intent was to turn a complex social science question into a moral fable. Moral fables need villains (as Kloor himself notes), and egged on by Ron Herring, he uses the plight of Indian peasants to villainize Shiva, just as Shiva uses the peasants to villainize Monsanto.
Of course Shiva is wrong that Bt cotton is killing farmers, just as Patrick Moore is wrong in his hysterical charge that Golden Rice critics are murdering Asian kids. For GMO brawlers like Kloor and Moore and Shiva, the aim is to inflame the like-minded and, hopefully, to spread “motivated reasoning” to the undecided. Motivated reasoners use low standards of proof for claims they like and high standards for ones they don’t, and they fixate on trashing opponents’ weakest arguments instead of actually considering their strongest. Villainization encourages motivated reasoning, and then charges of murder by the likes of Shiva and Moore really clear the benches.
In other writing Kloor calls GMO opponents unscientific. However, I would suggest that it is articles like this, which bash one side’s irresponsible claims but not the other’s, and which aim to create exasperation rather than insight, that are the real impediments to the scientific understanding of our world.