A Theory of Disclosure for Security and Competitive Reasons: Open Source, Proprietary Software, and Government Agencies
48 Pages · Posted: 13 Nov 2005 · Last revised: 10 Jun 2017
Date Written: January 1, 2006
A previous article proposed a model for when disclosure helps or hurts security, and provided reasons why computer security often differs in this respect from physical security. This paper provides a general approach for describing the incentives of actors to disclose information about their software or systems. A chief point of this paper is that the incentives for disclosure depend on two largely independent assessments: the degree to which disclosure helps or hurts security, and the degree to which disclosure creates competitive advantages or disadvantages for the organization.
The paper presents a 2x3 matrix, in which disclosure for security and for competition is assessed for three types of systems or software: Open Source software, proprietary software, and government systems. The paper finds greater convergence on disclosure between Open Source and proprietary software than most commentators have believed. For instance, Open Source security experts use secrecy in stealth firewalls and in other ways. Open Source programmers also often rely on gaps in Open Source licenses to gain competitive advantage by keeping key information secret. Meanwhile, proprietary software often involves more disclosure than is commonly assumed. For security, large purchasers and market forces often lead to disclosure about proprietary software. For competitive reasons, proprietary software companies often disclose a great deal when seeking to become a standard in an area, among other reasons.
Despite this greater-than-expected convergence of practice between Open Source and proprietary software, there are strong reasons to believe that less-than-optimal disclosure happens for government systems. The tradition of military secrecy, and the concern about tipping off attackers, lead to a culture of secrecy for government security. Market mechanisms that force disclosure are less likely to operate on agencies than on private companies. Competition for turf, such as the FBI's reputation for not sharing with local law enforcement, further reduces agency incentives to share information about vulnerabilities.
Part I of the paper briefly recaps the relevant portions of the Security Disclosure model from the prior paper. Part II shows the incentive problems that exist when large databases are breached and individuals' data is leaked. This sort of breach appears to be accompanied by significant externalities, so breach notification statutes or similar measures are likely appropriate. Part III examines the six cells of the matrix, analyzing the incentives for disclosure or secrecy, for both security and competitive reasons, for Open Source software, proprietary software, and government systems.
This research provides a general approach for determining when disclosure is societally efficient (the first paper) and for describing the incentives actors face to disclose or not (this paper). The actual decision of whether to disclose in a given instance will depend on an assessment of the empirical magnitude of the factors set forth in the papers. The research provides, however, the first theoretical structure for assessing these issues, which are so important to the design of systems and software in our information-rich age.
Keywords: computer security, trade secrets, Open Source, software