Cybersecurity: Who’s Watching the Store?

Government is not doing all it could to research the problem or to exercise its proper regulatory role.

With information technology (IT) permeating every niche of the economy and society, the public has become familiar with the dark side of the information revolution–information warfare, cybercrime, and other potential ways nefarious parties might try to do harm by attacking computers, communications systems, or electronic databases. The threats people fear range from nuisance pranksters abusing the World Wide Web, to theft or fraud, to a cataclysmic meltdown of the information infrastructure and everything that depends on it. As IT becomes more tightly woven into all aspects of everyday life, the public is developing an understanding that disruption of this electronic infrastructure could have dire–conceivably even catastrophic–consequences. During the past decade, government officials, technology specialists, policy analysts, industry leaders, and the general public have all become more concerned about “cybersecurity”–the challenge of protecting information systems. Prodigious efforts have been expended during this time to make information systems more secure, but a close examination of what has been achieved reveals that we still have work to do.

The threat to information systems potentially takes many forms. Experts generally identify four “attack modes”: denial, deception, destruction, and exploitation. Or, to put it another way, someone can break into an information system to stop it from operating, insert bogus data or malicious code to generate faulty results, physically or electronically destroy the system, or tap into the system to steal data. Experts also agree that such threats can come from a variety of sources: foreign governments, criminals, terrorists, rival businesses, or simply individual pranksters and vandals.

Many people associate cybersecurity with the Internet revolution of the 1990s. In fact, the idea of information warfare directed at computer networks dates back to 1976, to a paper written in the depths of the Cold War by Thomas Rona, a staff scientist at the Boeing Company. Rona’s work was an outgrowth of electronic warfare in World War II and the introduction of practical computers and networks. He speculated that in the emerging computer age, the most effective means to attack an adversary would be to focus on its information systems.

Rona’s research came at a propitious moment, because the Department of Defense was itself just beginning to consider whether such tactics might be a silver bullet for defeating the Soviet Union. This interest had been triggered, ironically enough, by Soviet military writings. The Soviets believed the United States was preparing for radioelektronnaya bor’ba–“radio electronic combat.” As it turned out, U.S. capabilities were not nearly as far along as the Soviet writers feared. But once U.S. officials discovered that their Soviet counterparts were concerned about computer attacks, they began to look into the possibilities more closely.

The payoff occurred in the 1991 Gulf War, the first conflict in which U.S. commanders systematically targeted an adversary’s command and control systems. These efforts were an important reason for the U.S.-led coalition’s lopsided victory. After the war, when U.S. officials realized how important this “information edge” had been, they started to worry more about the vulnerability of their own electronic networks.

Throughout the early 1990s the Defense Department examined this threat more closely. The closer officials looked, the more worried they became. They were especially concerned about the vulnerability of U.S. commercial systems, which carry the vast majority of military communications. One of the first unclassified studies was the Report of the Defense Science Board Task Force on Information Warfare-Defense (IW-D), which the Defense Department released in November 1996. This report was followed by other studies that reached similar conclusions about the cybersecurity threat.

Largely as a result of these studies, in May 1998 the Clinton administration issued Presidential Decision Directive 63, which directed federal agencies to take steps that would make their computers and communications networks (in addition to other critical infrastructure) less vulnerable to attack. It also led to the establishment of several measures intended to address the threat to the commercial sector. These included:

  • appointment of a national coordinator for security, infrastructure protection and counter-terrorism on the National Security Council staff with responsibility for overseeing the development of cybersecurity policy;
  • establishment of the National Infrastructure Protection Center in the Federal Bureau of Investigation (FBI), which was responsible for coordinating reports of computer crime and attacks so that the federal government could respond effectively; and
  • establishment of the Critical Infrastructure Assurance Office (CIAO) to coordinate the government’s efforts to protect its own vital infrastructure, integrate federal efforts with those of local government, and promote the public’s understanding of threats.

Computer security received even more attention in the late 1990s because of the “Y2K problem”–the possibility that at least some computers would fail when their software and internal clocks mistook the year 2000 for 1900. A series of high-profile viruses such as Melissa and Love Bug and computer hacking cases such as that of Kevin Mitnick also caught the attention of the press. Combined with the fact that millions of Americans were becoming personally familiar with computers–and everything that could go wrong with them–these events transformed cybersecurity from an esoteric topic familiar only to computer specialists and military thinkers into a public policy issue of widespread concern.

The Clinton administration appointed Richard Clarke, a National Security Council staff member, to the post of national coordinator for security, infrastructure protection and counterterrorism. Clarke directed the development of the National Plan for Information Systems Protection Version 1.0: An Invitation to a Dialogue, a 199-page document released in January 2000. According to the CIAO, the plan “addressed a complex interagency process for approaching critical infrastructure and cyber-related issues in the federal government.”

The new Bush administration agreed with the Clinton administration about the importance of cybersecurity and began a review of cybersecurity policy when it entered office in January 2001. In October 2001, the administration issued Executive Order 13231, which established a new effort for protecting information systems related to “critical infrastructure,” including communications for emergency response.

The Bush administration also began an effort to develop a new cybersecurity strategy. It retained Clarke as special adviser for cyberspace security within the National Security Council. A draft of the new strategy was released to the public in September 2002. The administration held a series of public hearings with representatives from government, industry, and public interest groups over the next several months and released The National Strategy to Secure Cyberspace in February 2003.

How effective?

The new cybersecurity strategy has five components:

  • a cyberspace security response system–a network through which private sector and government organizations can pool information about vulnerabilities, threats, and attacks in order to facilitate timely joint action;
  • a cyberspace security threat and vulnerability reduction program, consisting of various initiatives to identify people and organizations that might attack U.S. information systems and to take appropriate action in response;
  • a cyberspace security awareness and training program, consisting of several initiatives to make the public more vigilant against cyberthreats and to train personnel skilled in taking preventive measures;
  • an initiative to secure the governments’ cyberspace, which includes programs that state and federal government agencies will take to protect their own information systems; and
  • national security and international cyberspace security cooperation–initiatives to ensure that federal government agencies work effectively together and that the U.S. government works effectively with foreign governments.

Although many of these individual initiatives are probably valuable, the approach of the current plan, like that of its 2000 predecessor, lacks at least three features taken for granted in most other areas of public policy. This may be the most fundamental shortcoming of U.S. policy for cybersecurity up to now.

First, the assessment of the threat, and thus the strategy’s estimates of the potential costs of inaction, is largely anecdotal. The strategy also lacks a systematic analysis of alternative courses of action. As a result, the new strategy cannot provide a clear comparison of the costs and possible benefits of the various policies it proposes.

Second, the strategy lacks a clear link between objectives and incentives. Economic theory holds no opinion on whether people are inherently well-meaning or evil; rather, it simply assumes that people respond to incentives. That is why a clear, rational incentive structure is the cornerstone of any effective public policy. Unfortunately, the cybersecurity strategy lacks incentives. It also lacks a closely related component: accountability. There is no mechanism in the policy that holds public officials, business executives, or managers responsible for their performance in ensuring cybersecurity.

Third, the strategy rejects in toto regulation, government standards, and the use of liability laws to improve cybersecurity. These are all basic building blocks of most public policies designed to shape public behavior, so one must wonder why they are avoided in this case.

The rationale for the current strategy is that it avoids regulation and government-imposed standards to ensure that U.S. companies can continue to innovate, remain productive, and compete in world markets. This statement, however, overlooks another basic fact about public policy: Such policies always must reconcile individual profitability and economic efficiency with security, which has some of the characteristics of a public good. It is precisely because there are competing interests that policymakers must strike the right balance–not reject such measures completely, as the current strategy does.

Ideally, the new cybersecurity strategy would have established an analytical framework that explained how it selected some options and rejected others. Instead, the current strategy merely gives a laundry list of activities that may be excellent ideas or a total waste of effort but that bear no relationship to the severity of the threat and provide no link between proposals and priorities.

What is the need?

The most important question to ask in addressing any public policy issue is: What problem needs to be solved? Yet despite all the attention that cyberattacks receive in the media, there is little hard data for estimating the size of the cybersecurity threat or for calculating how much money is already being spent to counter it.

The data gap begins with the government. According to the General Accounting Office, the federal government spent $938 million on IT security in 2000, just over $1 billion in 2001, and $2.71 billion in 2002. However, the data do not tell us how much is being spent on different kinds of security measures. Moreover, there is no way to determine from the data whether all government agencies keep track of IT security spending in the same way.

Publicly available data on private IT security spending is, if anything, even less reliable and harder to come by. According to the Gartner Group, a leading IT consulting firm, worldwide spending on security software alone totaled $2.5 billion in 1999, $3.3 billion in 2000, and $3.6 billion in 2001. Once spending on personnel, training, and other aspects of information security is considered, total IT security spending could be substantially more. But the bottom line is that neither government nor private sector statistics on IT security spending are terribly useful for the kind of analysis that is common in most other policy sectors.

The most often cited source for IT security data is probably the FBI-sponsored survey published by the Computer Security Institute (CSI), a San Francisco-based membership organization for information security professionals. The 2002 CSI survey drew responses from 503 computer security practitioners in U.S. corporations, government agencies, and financial institutions. The survey asked respondents about the security technology they use, the types and frequency of attacks they had experienced, and the losses associated with these attacks.

Needless to say, this is a very small percentage of all computer networks and hardly a scientific sample. Yet the greatest shortcoming of the CSI survey is that it lacked reliable procedures for uniformity and quality control. Each respondent decided for itself how to respond. One company might estimate financial damages from cybercrime with data from its accounting department, using insurance claims and actual write-offs on its balance sheets. Another might provide a gut estimate from a systems operator who monitors network intrusions. In either case, CSI did not require substantiation.

Moreover, fewer than half the respondents in the 2002 CSI survey (44 percent) were willing or able to quantify financial losses due to attacks, which means that the data that were provided are almost certainly statistically biased. The survey results should raise questions even at face value. For example, survey responses from 1997 to 2002 indicate that the number of attacks in some categories was constant or falling, even though the number of potential targets grew exponentially during this period. Meanwhile, the total cost of these attacks soared, despite the fact that companies were more aware of the cyberthreat and were spending more to protect themselves.

In recent years CSI has conceded weaknesses in its approach and has suggested that its survey may be more illustrative than systematic. Nevertheless, government officials and media experts alike freely cite these and other statistics on the supposed costs of cybercrime, even when the estimates fail the test of basic plausibility. For example, in May 2000 Jeri Clausing of the New York Times reported that the Love Letter virus caused $15 billion in damage. Yet the most costly natural disaster in U.S. history–Hurricane Andrew, which in 1992 swept across Florida and the Gulf Coast–caused $19 billion in damages. Moreover, this figure reflected 750,000 documented insurance claims, plus tangible evidence of 26 lost lives along with the near-total destruction of Homestead Air Force Base and an F-16 fighter aircraft. Are we to believe that one virus had almost the same destructive power as one of the most destructive hurricanes in U.S. history? Similarly, during a February 2002 committee hearing, Sen. Charles Schumer (D-N.Y.) cited a report claiming that the four most recent viruses had caused $12 billion in damage. By comparison, the Boeing 757 that crashed into the Pentagon on September 11 caused $800 million in damages. Could four viruses really have caused 15 times as much damage?
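
The arithmetic behind these comparisons is easy to check. A few lines of Python (using only the figures quoted above) make the implied ratios explicit:

```python
# Sanity-check the damage claims quoted above (all figures in U.S. dollars).
love_letter_claim = 15e9   # reported Love Letter virus damage
hurricane_andrew = 19e9    # Hurricane Andrew damages, per the text
four_viruses_claim = 12e9  # damage attributed to the four viruses
pentagon_757_crash = 8e8   # damages from the September 11 Pentagon crash

# One virus would have caused roughly four-fifths the damage of Andrew...
print(love_letter_claim / hurricane_andrew)     # ~0.79
# ...and four viruses 15 times the damage of the Pentagon attack.
print(four_viruses_claim / pentagon_757_crash)  # 15.0
```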

In reality, analyzing the damage of most network intrusions is time-consuming and expensive, which is why it has rarely been done on a large scale. To analyze an attack on a computer network, someone must review logs and recreate the event. Even then, sophisticated attackers are likely to stretch their attacks over time, use multiple cutouts so a series of probes cannot be traced to a single attacker, or leave agents that can reside in a system for an extended time–all making analysis harder. Logically, the trivial attackers are the ones most likely to be detected and the sophisticated ones are most likely to go unrecorded.
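
To see why sophisticated intrusions so often go unrecorded, consider a minimal sketch of the kind of threshold detector that log reviews commonly rely on. The log format, window, and threshold here are hypothetical, not drawn from any particular product; the point is only that an attacker who stretches probes over time and across cutouts slips under the rule:

```python
from collections import defaultdict

# Hypothetical rule: flag any source address with more than 10 failed
# logins inside a one-hour window. Entries are (timestamp_seconds, ip).
WINDOW = 3600
THRESHOLD = 10

def flag_bruteforce(entries):
    by_ip = defaultdict(list)
    for ts, ip in entries:
        by_ip[ip].append(ts)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            while t - times[start] > WINDOW:
                start += 1          # slide the window forward
            if end - start + 1 > THRESHOLD:
                flagged.add(ip)
    return flagged

# A noisy attacker: 50 attempts in five minutes from one address -- caught.
noisy = [(i * 6, "203.0.113.5") for i in range(50)]
# A patient attacker: one attempt every two hours, rotated through ten
# cutout addresses -- invisible to the rule.
patient = [(i * 7200, f"198.51.100.{i % 10}") for i in range(50)]

print(flag_bruteforce(noisy))    # {'203.0.113.5'}
print(flag_bruteforce(patient))  # set()
```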

No exhaustive research program has been carried out, so the exact scope and nature of the cyberthreat remain enormously uncertain. The current strategy proposes to identify threats, but it does not propose to collect reliable data that would define the threat. This lack of data is not an argument for ignoring cyberthreats. However, when the available data contain this much uncertainty, dealing with that uncertainty must be an integral part of the strategy. A prudent policy will focus on the threats with high probability and high potential costs. It will hedge against the less certain, less dire threats, and include mechanisms that direct efforts toward the areas with the greatest payoff while limiting the resources that might inadvertently be spent on wild goose chases.

In our view, the greatest threat may simply be the economic harm that would result if the public loses confidence in the security of information technology in general. Only slightly less pressing is the possibility that a foreign military power or terrorist group might exploit the vulnerability of an information system to facilitate a conventional attack. A purely electronic attack that causes a widespread collapse of information systems for a prolonged period, with large costs and mayhem, is possible but a second-order concern, if only because potential attackers have other alternatives that are easier to use, cheaper, and more likely to be effective in wreaking havoc.

The government role

The most significant feature of the role set forth for government in the current cybersecurity strategy is that it arbitrarily precludes action common in other regulatory domains. It also defines a role for government that is dubious at best. For example, the strategy states, “In general, the private sector is best equipped and structured to respond to an evolving cyberthreat.” This is also true in other regulatory domains such as occupational safety; no government body is responsible for issuing grounding straps and safety goggles. Unfortunately, the strategy ignores a basic fact of regulation: Although implementation is left to the private sector, the government has a large role in setting standards, designing regulations, and enforcing these measures.

The strategy goes on to say that the federal government should concentrate on “ensuring the safety of its own cyberinfrastructure and those assets required for supporting its essential missions and services.” It also says that the federal government should focus on “cases where high transaction costs or legal barriers lead to significant coordination problems; cases in which governments operate in the absence of private sector forces; resolution of incentive problems…and raising awareness.” Alas, the government itself has a dubious record. As recently as February 2002 the Office of Management and Budget identified six common government-wide security gaps. These weaknesses included lack of senior management attention, lack of performance measurement, poor security education and awareness, failure to fully fund and integrate security into capital planning, failure to ensure that contractor services are secure, and failure to detect and share information on vulnerabilities.

In other words, more than six years after the Defense Science Board’s IW-D study and three years after the government’s first cybersecurity plan, most government agencies have yet to take effective action. This is hardly an argument for making government the trailblazer in security.

The reality of the situation is that the government is poorly suited for providing a model for the private sector. Government bureaucracies (not necessarily through any fault of their own) have too much inertia to act decisively and quickly, which is what acting as a model requires. Because of civil service tenure, government agencies lack an important engine of change found in the private sector, namely the ability to replace people inclined to act one way with people who are inclined to act another. Also, government agencies are locked into a budget cycle. In most cases, a year is required for an agency to formulate its plan, another year is needed for Congress to pass an appropriation, and a third year is required for an agency to implement the plan–at a minimum. This is why government agencies today are rarely at the leading edge of information technology. There is no reason to believe cybersecurity will be an exception to the rule.

The bulk of the responsibility for “securing the nets” will inevitably fall to the private sector because it designs, builds, and operates most of the hardware and software that form the nation’s information infrastructure. This is why the strategy’s determined avoidance of regulation and incentives is so misguided.

Organizations such as the information sharing and analysis centers that the government has encouraged industries to establish are valuable for coordinating action against common threats, such as viruses and software holes. Larger response centers such as the Computer Emergency Response Team Coordination Center (CERT/CC) at Carnegie Mellon University can play a similar role for the information infrastructure as a whole. However, ownership and operation of the information infrastructure are simply too diffuse for real-time hacking and more serious cyberthreats to be handled by any kind of centralized organization. Cybersecurity is a problem requiring the active participation of scores of companies, hundreds of service providers, thousands of operating technicians, and millions of individual users.

The most effective way to shape the behavior of this many people is to set broad ground rules and make sure people play by them. Anything else amounts to trying to micromanage a significant portion of the national economy via central control. Two central questions must be addressed. First, what kinds of incentives will be effective at providing additional security? Second, how can we begin to design systems that provide an efficient level of security–that is, a level at which the difference between benefits and costs is maximized?

Policy options

A number of options should be on the table for designing more effective cybersecurity.

Better use of standards by the government and the private sector. The government may consider developing more secure software protocols and standards for the future Internet. These could include, for example, software that limits anonymity or requires “trust relationships” among multiple components of a network. (A trust relationship is one in which a user must identify herself and demonstrate compliance with technical standards before, say, she can gain entry to a database or use software.)
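
As a rough illustration (the field names and checks below are our own invention, not drawn from any existing standard), a trust relationship amounts to gating access on identity and demonstrated compliance at the same time:

```python
from dataclasses import dataclass

@dataclass
class Client:
    identity_verified: bool   # user has authenticated with a vetted credential
    software_compliant: bool  # client software meets the technical standard
    config_attested: bool     # machine attests to an approved configuration

def grant_access(client: Client) -> bool:
    """Admit a client only if every element of the trust relationship holds."""
    return (client.identity_verified
            and client.software_compliant
            and client.config_attested)

# Identity alone is not enough; the network also checks compliance.
print(grant_access(Client(True, True, True)))   # True
print(grant_access(Client(True, False, True)))  # False
```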

At a minimum, the government should consider playing a more active role than it does now in setting standards. Current government policy is biased against intervening in the standard-setting process. Yet that is exactly what government should be doing when market forces left to themselves do not provide sufficient security to the country as a whole, which many experts believe is the case at present. Despite the claims of critics who want to “keep the information frontier free,” the government was, in fact, a main contributor to the development of the current Internet, including the processes that resulted in current standards.

In addition to developing more secure software standards, the IT industry should also consider developing more rigorous security standards for operations and software development. These should address both outside threats, such as hackers, and inside threats, such as sabotage and vandalism within a company. After the rise in virus attacks and hacking incidents in 2000-01, some companies (most notably, Microsoft) announced that they would make security a higher priority in the design of their products. Critics have complained that these efforts were inadequate and often disguises for marketing strategies designed to impede competition. Whatever the merits of such criticisms, they illustrate how government could serve as an honest broker–if it takes a more active role. Such standards could be voluntary or enforced through regulations. The more important point is to ensure that someone establishes “best practices” for industry and government that are flexible enough for a variety of users but still provide a legal hook for liability.

Better use of regulation. In some cases the government may want to issue regulations establishing minimal acceptable security standards for operators and products. These would be cases in which the market has clearly failed because inadequate incentives or other factors prevent the private sector from developing such standards by itself, so that government action is required. The government may also want to develop an approach that requires firms to certify in their annual reports that they have complied with industry best practices.

Liability. Computer and software makers have generally fought changes in the liability laws. A key argument is that an increase in liability has the potential to reduce innovation in the fast-moving IT sector: true enough, but only if the changes are poorly crafted or go too far. There is a good economic rationale for changes in liability law that would give software and hardware companies some responsibility, so that they have an incentive to pay more attention to security.

Liability represents a big step beyond many of the voluntary measures being advocated now, which we doubt will be adequate to the problem in most cases. The strength of liability is that it is a market mechanism, and one far more efficient at shaping the behavior of millions.

Reforming IT liability is, in effect, a market-style measure to promote better security by providing those best positioned to take action with the incentive to do so. In this same vein, the government should consider measures that would require corporations to tell their stockholders whether there are significant cybersecurity risks in their business and to certify that they are complying with industry standards and best practices to address them.

Clearly, one size does not fit all when it comes to cybersecurity. The kinds of measures appropriate for a Fortune 500 corporation are probably inappropriate for a start-up company operating out of a garage. The best approach is probably to let the market, combined with reasonably defined roles of legal responsibility, tailor an optimal solution. But for this to happen, government will need to remove the obstacles that currently prevent the market from doing so, and to play a role in those cases in which a “public goods” problem dissuades companies and consumers from acting.

Research. Most important, we need to recognize that, in a public policy sense, we are largely flying blind, because we have such a limited understanding of the costs of cybersecurity attacks and the benefits of preventive measures. The government should sponsor research on this subject–research that, up to this point anyway, the private sector has been unwilling or unable to conduct. It should also develop mechanisms for systematically collecting information from firms (with appropriate privacy protections) that would allow the government to develop a better strategy for addressing cybersecurity in the future.

Security and privacy. Finally, public officials must learn how to balance privacy and security, and public policy analysts must do a better job of explaining the balance between these two goals. Simply put, technology often leaves no practical means to reconcile privacy and security.

For example, a trusted IT architecture, in which only identified or identifiable users can gain access to parts of a computer or network, inherently comes at the expense of privacy. A user must provide a unique identifier to gain access in such a system, and this naturally compromises privacy. Even worse, the data that a network uses to recognize a trusted user often can be used to identify and track the user in many other situations.

On the other hand, technology that guarantees privacy usually presents insurmountable problems for security. The classic example is strong encryption. Because strong encryption is, for all practical purposes, impossible to break, a person using it can conceal his communications, thus ensuring privacy. But such protection also can make it impossible to trace criminals, terrorists, hostile military forces, or others who would attack computer networks.
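
The point can be demonstrated in a few lines. The sketch below uses the third-party Python cryptography package (our choice for illustration; any strong cipher behaves the same way): without the key, the ciphertext is equally opaque to an eavesdropper and to an investigator.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # random symmetric key
cipher = Fernet(key)
token = cipher.encrypt(b"meet at the usual place")

print(cipher.decrypt(token))           # b'meet at the usual place'

# Anyone holding a different key -- criminal investigator or otherwise --
# gets nothing but an authentication failure.
wrong = Fernet(Fernet.generate_key())
try:
    wrong.decrypt(token)
except InvalidToken:
    print("undecipherable without the key")
```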

One way of addressing this problem is to concede the technical threat to privacy and use strict laws and regulation to compensate. This, of course, was the idea behind key escrow, which government authorities proposed in the mid-1990s as an alternative to completely eliminating restrictions on encryption. Under the proposed system, third parties would hold the “keys” to a cipher (actually, the means to break the cipher via a back door). Under certain specified conditions, the third parties could be ordered to provide the keys.
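
A toy version of the escrow idea (ours, for illustration; the actual mid-1990s proposals were far more elaborate) splits a decryption key between two escrow agents, so that neither can read anything alone but a court order compelling both recovers the key:

```python
import os

def split_key(key: bytes):
    """Split a key into two XOR shares, one per escrow agent."""
    share1 = os.urandom(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """Recombine the shares; requires cooperation of both agents."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(16)          # the cipher key to be escrowed
s1, s2 = split_key(key)       # deposited with two separate third parties

print(recover_key(s1, s2) == key)  # True: both shares together restore the key
# Either share alone is a uniformly random string, statistically independent
# of the key, so no single agent (or thief) gains a back door.
```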

The U.S. government (in particular, its law enforcement and intelligence organizations) took an imperious approach to the issue, which proved foolhardy because, in fact, it could not control the spread of encryption. At the same time, the IT industry adamantly resisted the proposal, arguing that foreign customers would not buy “crippled” U.S. software or hardware, and thus opposed any restrictions. In the end, the technology did prove beyond control, and the net result was soured relations between government and industry that continue to this day.

Rather than focus on whether or not to control a particular technology, society would often be better off addressing the consequences of abusing that technology. There are numerous precedents for such an approach. For example, technology makes it possible to track individuals’ rental records at the local Blockbuster, but laws ensure that misusing those records carries substantial penalties. Similarly, trusted systems could be required in specific applications (e.g., financial institutions, critical infrastructure), and people could then choose whether they wished to use those networks. Other systems (e.g., ordinary e-mail) could remain unregulated. Laws could ensure that privacy was protected–and that users who tried to enter a system without complying with disclosure requirements were criminally liable. Such regulations should be enforced in a way that engenders public support. One approach is to have a nonpolitical, bipartisan governing body that makes sure government enforces these standards and does not abuse its own access to personal data.

A leadership role

Designing a cybersecurity policy is not simple. There is very little good information on the costs of cybersecurity attacks and the benefits of proposed policy measures. The problem is extremely complicated because of the complexity of the IT infrastructure, the large number of users, and the diverse nature of potential attackers.

Addressing this problem will take economic insight and political courage. Given the complexity of the problem, we think a variety of policy instruments should be used, including voluntary standards, regulation, and liability. The challenge for policy research is to develop deeper insights about the precise nature of the cybersecurity problem and the costs and possible benefits of different policy interventions.

The challenge for politicians is to give more than lip service to this issue. That means taking a leadership role in communicating the importance of the problem and defining a mix of government and private-sector strategies for dealing with it in a manner comparable to that found routinely in other areas of public safety and homeland security.

At a minimum, funding serious, comprehensive research on the size of the problem and the benefits and costs of policy measures should be relatively noncontroversial and beneficial. Somewhat harder is educating the public about the difficult trade-offs that need to be faced. At some point, we are likely to find, for example, that security cannot be enhanced without sacrificing other features, such as ease of use or total assurance of privacy and anonymity. Such trade-offs should not be swept under the rug but rather discussed as part of a continuing dialogue over the best way to approach this difficult problem.

Berkowitz, Bruce, and Robert W. Hahn. “Cybersecurity: Who’s Watching the Store?” Issues in Science and Technology 19, no. 3 (Spring 2003).