Cybersecurity Stovepiping

25 Pages. Posted: 7 Apr 2015. Last revised: 5 Dec 2017.

David Thaw

University of Pittsburgh - School of Law; University of Pittsburgh - School of Information Sciences; Yale University - Information Society Project; University of Pittsburgh - Graduate School of Public & International Affairs; National Defense University - College of Information and Cyberspace

Date Written: December 5, 2017


Most readers of this Article probably have encountered – and been frustrated by – password complexity requirements. Such requirements have become a mainstream part of contemporary culture: "the more complex your password is, the more secure you are, right?" So the cybersecurity experts tell us… and policymakers have accepted this "expertise" and even adopted such requirements into law and regulation.

This Article asks two questions. First, do complex passwords actually achieve the goals many experts claim? Does using the password "Tr0ub4dor&3" or the passphrase "correcthorsebatterystaple" actually protect your account? Second, if not, then why did such requirements become so widespread?
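The contrast between those two examples can be made concrete with a back-of-the-envelope entropy calculation. The sketch below is illustrative only: the 72-symbol alphabet and the 2,048-word list are assumptions chosen for the arithmetic (echoing the well-known comparison the Article's examples allude to), not figures from the Article, and the "naive" number overstates real-world strength because humans do not choose passwords uniformly at random.

```python
import math

def charset_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy (bits) of a password drawn uniformly from an alphabet."""
    return length * math.log2(alphabet_size)

def passphrase_entropy_bits(num_words: int, wordlist_size: int) -> float:
    """Entropy (bits) of a passphrase of words drawn uniformly from a wordlist."""
    return num_words * math.log2(wordlist_size)

# "Tr0ub4dor&3": 11 characters over an ASSUMED ~72-symbol alphabet
# (upper, lower, digits, common punctuation). This is a naive upper bound:
# an attacker who models the "dictionary word + leet substitutions + suffix"
# pattern faces a far smaller effective search space.
naive_complex = charset_entropy_bits(11, 72)

# "correcthorsebatterystaple": 4 words from an ASSUMED 2,048-word list,
# giving exactly 4 * 11 = 44 bits even when the attacker knows the scheme.
passphrase = passphrase_entropy_bits(4, 2048)

print(f"naive complex-password bound: {naive_complex:.1f} bits")
print(f"four-word random passphrase:  {passphrase:.1f} bits")
```

The point of the arithmetic is the asymmetry: the passphrase's 44 bits hold up even against an attacker who knows the generation scheme, while the complex password's nominal bound collapses once the attacker guesses the pattern behind it.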

Through analysis of historical computer science and related literature, this Article reveals a fundamental disconnect between the best available scientific knowledge and the application of that knowledge to password policy development. Discussions with leading computer scientists during this period suggest that this disconnect cannot be fully explained by a simple failure to identify the shortcomings of complex passwords. Nor can it be fully explained by a failure of computer science research to consider the user design implications of password complexity and associated research in psychology. Rather, this Article proposes that the disconnect resulted from a "stovepiping" failure of a different type – the failure to connect the results of scientific knowledge to a characterization which could drive a shift in policy direction.

The result is that what was required was not merely new computer science evidence, but the characterization of that evidence within a framework demonstrating that continuing the original course of action would actually result in a worse condition than originally existed. This type of net benefit/loss economic framing was largely missing from the discourse regarding authentication at the time and, indeed, remains deeply undertheorized in contemporary discourse regarding cybersecurity policy.

The implications of these results are compelling. If the assertions in this Article are correct, the technical complexity of society has vastly outstripped our policymaking processes' ability to keep pace. A dystopian view of this result suggests we are headed toward technocracy. (How did you feel the last time Facebook or Google implemented a major overhaul?) A perhaps more optimistic view, however, suggests that such technical complexity is not a new concept in relative terms, and that historical context can provide some guidance as to how to adapt.

The optimistic view suggests that the processes developed for regulating the practice of medicine, aviation, and other technologies which, in their time, vastly outpaced the knowledge of policymakers can offer guidance as to how policymakers should proceed in the Information Age. Developing a science of cybersecurity and requiring evidence-based policymaking provide solutions applicable not only to the specific problems presented in this Article, but also potentially to other highly technical subjects faced by an increasingly complex society.

Simply put, cybersecurity policymaking must, as with other technical fields, move towards requiring evidence-based policymaking in the first instance. To do otherwise in such a highly-technical and rapidly-evolving field undermines the very purposes of the regulatory process itself, particularly in the context of delegation to “expert” administrative agencies. This Article examines that concept through the lens of the specific problem of password complexity, and offers a policymaking prescription by way of example: the myth of "risk prevention" must be replaced with the empirically-founded calculus of "risk management." And the primary question to be addressed must not be "is your system secure?", but rather "do your risk mitigation techniques match your risk tolerance?"

Note: ** This is an *extremely* early draft of a very controversial project. Feedback is most welcome.

Keywords: cybersecurity, privacy, data security, data breach, security breach, technology, risk management, risk tolerance, evidence-based regulation, rulemaking, policymaking

Suggested Citation

Thaw, David, Cybersecurity Stovepiping (December 5, 2017). 96 Nebraska Law Review 339 (2017), U. of Pittsburgh Legal Studies Research Paper No. 2016-07. Available at SSRN.

David Thaw (Contact Author)

University of Pittsburgh - School of Law

3900 Forbes Ave.
Pittsburgh, PA 15260
United States


University of Pittsburgh - School of Information Sciences

Pittsburgh, PA 15260
United States

Yale University - Information Society Project

P.O. Box 208215
New Haven, CT 06520-8215
United States

University of Pittsburgh - Graduate School of Public & International Affairs

Pittsburgh, PA 15260-0001
United States

National Defense University - College of Information and Cyberspace

300 5th Ave
Ft McNair
Washington, DC 20319
United States
