
Cybersecurity

Updated: 22 March 2017

Just because it is possible to hack something does not mean that hacking has political utility. Not all actors have the ability to inflict great harm through computer network operations, and those that do often lack the motivation. This page summarizes some theoretical and empirical findings from my own research. Since these articles and chapters are scattered all around, I thought it might be helpful to gather them in one place as a sort of cybersecurity manifesto. (See also the bibliography by Hannes Ebert and Tim Maurer on cybersecurity research.)

Stuxnet undermines conventional wisdom about cyberwar. The most famous example of cyber warfare (the use of hacking to cause serious physical damage) is the American-Israeli hack of Iran's nuclear infrastructure. This case provides an opportunity to test popular beliefs that cyberspace provides asymmetric advantages for offense over defense, for weak over strong actors, and for revision over deterrence. With Stuxnet, the opposite seems to have held: offense was difficult, the stronger attacked the weaker, and deterrence shaped the choice of cyber means. The supposed cyber revolution appears to be more of an evolution in the art of covert action.

Cybersecurity is fundamentally an intelligence problem. Cyber conflict is more like a contest between intelligence and counterintelligence than a military battle. The modern intelligence enterprise is a product of technological innovation, and the contemporary concern about cybersecurity is just the latest development in this trend. There are more ways and means available for deceptive collection and influence than ever before, and actors beyond state intelligence agencies are increasingly involved. The fact that militaries worry about cyber power owes more to an increasing military dependence on intelligence than to the emergence of a new weapon system. My chapter in the forthcoming Oxford Handbook of Cybersecurity examines the targeting, collection, analysis and application, covert action, and counterintelligence dimensions of cybersecurity, highlighting the continuities with traditional problems.

Cybersecurity relies on deception, which is self-limiting. Cyberspace is built on trust, and attackers use deception to exploit it, but deception is more likely to fail against more complex targets, and deception can be used for defense as well. Offensive and defensive advantages depend on relative skill at deception rather than categorical properties of the technology. Erik Gartzke and I examine the logic of deception in cyberspace, which is distinct from strategies of disarmament, defense, and deterrence, and explore the ways in which it can be used to reinforce network protection. Computer security engineers have been using active defense and deceptive methods for a long time, but the importance of deception for defense has been overlooked in the cyber debate. Elsewhere we explore the utility of cyber deception in naval operations.

Cyber espionage alone does not create competitive advantage. Like all forms of intelligence, stolen data must be processed, analyzed, and disseminated to decision makers who know how to use it and actually decide to use it to make a difference. For national security and economic espionage alike, the problems of absorption and application can be more difficult than acquisition, which is what most media accounts focus on. In the chapter "From Exploitation to Innovation," Tai Ming Cheung and I provide a conceptual model for this process and use it to examine Chinese industrial espionage. China has conducted a major cyber espionage campaign, but there is less evidence that it has benefited. I was skeptical that the September 2015 agreement between the United States and China would curtail espionage; Chinese threat activity does seem to have decreased, but the drop in observed activity may be due as much to professionalization and reform in the PLA and MSS as to the agreement itself.

The United States has advantages over China in cyberspace. China has an active cyber program, but the threat it poses is often exaggerated. In the arenas of military, intelligence, political, and governance competition, the United States enjoys some important advantages in the cyber domain, but China is able to create friction in the relationship. Cybersecurity tends to exacerbate the ambiguous relationship, both cooperative and competitive, that exists between the two great powers. This article and this talk draw from my volume with Tai Ming Cheung and Derek Reveron, which also demonstrates the importance of bringing an area studies perspective to bear for understanding cybersecurity. We are cautiously optimistic that cyber conflict between America and China will not escalate out of control. Joel Brenner and I debate my article here.

Coercion is difficult in cyberspace, but not impossible. Erik Gartzke and I offer typologies of cyber threats in terms of their operational costs and their political benefits, and of the coercive strategies that use cyber means. Coercion is difficult because signaling and deception do not mix well, but combining cyberspace with other domains can improve coercion, both deterrence and compellence. Strategy in cyberspace is constrained by a logic similar to what Cold War strategists described as the stability-instability paradox, but aggression is constrained not only by deterrence by punishment but also by the mutual benefits of interconnection.

The Sony hack illustrates the ambiguity of cyber coercion. Stephan Haggard and I examine the North Korean hack of Sony through the lens of the stability-instability paradox: effective deterrence at a high level can incentivize exploitation at a lower level. We find that the attempt to coerce backfired in this case because North Korea overplayed its hand in the context of these dynamics.

The attribution problem, and thus deterrence, gets easier at scale. Offense dominance is not a categorical property of cyberspace. Rather, it depends on the relative costs of attack and defense, which change as targets become more complex and valuable. Problems that are hard for defenders to solve for low-value targets get easier to solve for high-value targets. I show how rationalist ideas about war can be readily applied to cybersecurity and offer a simple formal model to illustrate the effects of changing assumptions about the scaling of costs.
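To convey the intuition, here is a toy version of the scaling logic (purely illustrative; it is not the model in the article, and all of its symbols and functional forms are assumptions of this sketch):

```latex
% Attacker's expected payoff from striking a target of value V:
%   c_a(V): attacker's cost of compromising the target
%   q(V):   probability of attribution, nondecreasing in V
%   r:      cost of retaliation if attributed
\[
U_A(V) \;=\; V \;-\; c_a(V) \;-\; q(V)\,r,
\qquad \text{attack iff } U_A(V) > 0.
\]
% If complex, valuable targets require disproportionately tailored
% access, e.g. c_a(V) = k V^{\alpha} with \alpha > 1, then
\[
U_A(V) \;=\; V - kV^{\alpha} - q(V)\,r \;<\; 0
\quad \text{for sufficiently large } V,
\]
% so offense pays only against low-value targets, while high-value
% targets become easier to defend and deter as attack costs and
% attribution both scale up.
```

The sketch simply encodes the prose claim above: if attack costs grow faster than target value and attribution improves with scale, offense dominance holds only at the low end.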

Cybersecurity motivates concerns about cross-domain deterrence (CDD). Cyberspace is not just the fifth domain of warfare, as the Pentagon defines it; it is also the means to command and control anything in the other domains. Moreover, it blurs the boundaries between peace and war, and between public and private, to a great degree. The cyber domain is fundamentally cross-domain. Likewise, cyber vulnerabilities and the rise of cyber commands have brought problems of interaction with nuclear, space, and conventional capabilities to the fore. Both CDD and cybersecurity reflect the increasing complexity of security affairs.

Hacking nuclear command and control is bad for strategic stability. While Erik Gartzke and I have both together and separately argued that cyberwar tends to be overhyped, there is one important exception. Offensive cyber operations against nuclear command and control have the potential to be extremely destabilizing in a nuclear brinksmanship crisis because of the asymmetry of knowledge about the true balance of power. This extreme example highlights the logic of cross-domain deterrence, whereby different means support different ends and can be in tension. The deception of cyber, which is useful for warfighting should deterrence fail, is in tension with the transparency of nuclear forces, which is useful for ensuring that deterrence does not fail. The US "left of launch" activity against North Korea is a concerning development by this logic.

Hacking can be a source of military innovation. The cybersecurity debate focuses on actors hacking each other to degrade their performance but overlooks the prevalence of hacking within states to improve their own performance. Commercial information technology lowers the barriers to innovation, for better or worse, and military users often end up adapting software in surprising ways. The case of the FalconView mission planning system, developed by reservists and guardsmen and widely adopted and adapted throughout the U.S. military, exemplifies the potential of user innovation. Military users hack their own C4ISR to cope with the fog and friction of war. This activity can be a source of security vulnerabilities but also of new military functionality, and defense policy must be careful not to undermine the latter in addressing the former.

Political scientists can (and should) study cybersecurity. Cybersecurity research must chart a course between the Scylla of threat inflation (Lucas Kello and I debate this) and the Charybdis of cynical complacency (Brandon Valeriano and Ryan Maness debate this). Although the danger and destruction of cyber conflict pales by comparison with even minor historical wars, cyber conflict continues to unfold in surprising ways. The study of cybersecurity is part of a rising tide of research on the role of intelligence, secrecy, and unconventional war in statecraft. I am working on an extended review of the emerging international relations literature in this area. 

Cyberspace is an institution, which restrains the severity of conflict within it. How would I synthesize these insights about the cybersecurity problem and its implications for international security? My book Shifting the Fog of War puts cybersecurity into historical context as just the latest incarnation of a timeless control paradox, whereby efforts to improve information processing can create new potential for confusion. The chapter "The Fog of Cyberwar" argues that we should understand cyberspace as the most sophisticated sociotechnical institution ever built, not simply a material infrastructure. Conflict within this institution differs from traditional conflict in anarchy, since the competitors depend upon the voluntary adoption of shared standards and protocols. As a result, cybersecurity is "restrained by design" (here is a talk on this theme). Attackers and defenders both use stealth and stratagem to learn about and influence the other in a conspiracy of restraint that avoids open warfare.



