Open-source and the “security through obscurity” fallacy

The security of open source software is a key concern for organizations planning to adopt it as part of their software stack, particularly if it will play a major role. There is an ongoing debate on whether opening the source code strengthens security or undermines it, and there are reasonable arguments on both sides.

The main concern is that because free and open source software (FOSS) is built by communities of developers with the source code publicly available, that access is also open to hackers and malicious users. This leads some to assume that FOSS is less secure than proprietary applications. The assumption has a name, "security through obscurity": the attempt to use secrecy of design or implementation to provide security. Unfortunately, security through obscurity can give you a false sense of security and ultimately lead to an insecure system.

Security through obscurity has never achieved engineering acceptance as a good way to secure a system. The United States National Institute of Standards and Technology (NIST) specifically recommends against relying on closed source to secure software (i.e. "security through obscurity"), stating that "system security should not depend on the secrecy of the implementation or its components" [1].

Too often people assume that secrecy equals security [2]. Nothing could be further from the truth. Today's strong cryptography is based on the assumption that an "adversary" will know both that something is encrypted and what the encryption scheme is. The notion that hiding the means of encryption will somehow make the data more secure has been obsolete since World War II. Strong crypto assumes instead that even though the encryption algorithm is public knowledge, the data in question will remain encrypted and secure; in cryptography this is known as Kerckhoffs's principle.
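To make this concrete, here is a minimal sketch in Python using the cryptography library's Fernet recipe. The algorithm is entirely public (Fernet is AES in CBC mode combined with an HMAC), yet the data stays secure as long as the randomly generated key does:

    from cryptography.fernet import Fernet

    # The encryption scheme is public knowledge; security rests entirely
    # on keeping this randomly generated key secret.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    token = cipher.encrypt(b"exam answers: top secret")
    print(cipher.decrypt(token))  # only a holder of the key can do this

Publishing the ciphertext, or even the full source of the cryptography library itself, costs the defender nothing; losing the key costs everything.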

Open source software is based on a similar notion of security. Assuming you will achieve security by hiding source code is a bad bet, because even a powerful and highly proprietary company can't guarantee that its source code won't leak. Instead, security should be based on the worst-case scenario: assume your "adversary" has access to the source code, and deal with it.
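One everyday consequence of that worst case, sketched here in Python: credentials live outside the code, so publishing or leaking the source reveals nothing sensitive. (The variable name SERVICE_API_KEY is invented for this example.)

    import os

    # Written as if the source code were public: no credentials are
    # embedded in it. The secret arrives through the environment, so the
    # code can leak without the key leaking.
    API_KEY = os.environ.get("SERVICE_API_KEY")
    if API_KEY is None:
        raise RuntimeError("SERVICE_API_KEY is not set")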

For example, the "security by design" [3] principle advocates that software should be designed from the ground up to be secure. Malicious use is taken for granted, and care is taken to minimize the impact of a newly discovered vulnerability or of invalid user input. In other words, good engineering practice is what makes a system secure, not whether or not the source code is open.
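As a small sketch of that practice, consider handling untrusted user input with Python's standard sqlite3 module (the table and data are invented for illustration). The query stays safe even against an attacker who has read this exact source code, because the input is bound as data rather than spliced into the SQL:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users (name) VALUES ('alice')")

    def find_user(username: str):
        # Treat every input as hostile: the parameterized query binds the
        # value as data, so a payload like "'; DROP TABLE users; --"
        # remains a harmless string instead of rewriting the statement.
        return db.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchone()

    print(find_user("alice"))                    # (1, 'alice')
    print(find_user("'; DROP TABLE users; --"))  # None, and the table survives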


[1] http://csrc.nist.gov/publications/nistpubs/800-123/SP800-123.pdf

[2] http://onlamp.com/pub/wlg/4436

[3] http://en.wikipedia.org/wiki/Security_by_design

