Dana Epp

Shattering the crystal and poking holes in the black box


Let's shatter the crystal and poke holes in the black box.


There has been some banter online about whether Open Source Software is secure, or whether it is fertile ground for foul play. Both sides have some compelling points, and yet both are flawed. It is impossible to defend either side of the equation when both camps are entrenched in "grass roots" style feelings that blur fiction with fact. FUD seems to be a shield for everyone nowadays, and quite frankly it never aids honest and unbiased point/counterpoint discussion.


Rather than regurgitate the strong points of either argument, allow me to put this into context from a secure coding perspective. Although it is a tangent from the original baseline discussion, I think if you read through my thoughts here, you will see what I am getting at.


The reality is that both sides miss the fact that THEORY and REALITY don't mix when it comes to today's software engineering, especially when talking about the crystal box approach to secure code versus the black box approach.


Whenever OSS is discussed in the context of security, the discussion always ends up leading toward its "golden child": strong cryptography. Since the days of Bletchley Park in World War II, encryption ciphers have typically been reviewed for years by experts in the field. The source is available so cryptographers can audit the entire algorithm and build proofs to show its strengths and weaknesses. When NIST decided to build a new encryption standard for Federal Information Processing (as part of the FIPS standards) back in 1997, it intelligently turned to the crypto community and had the entire process reviewed. AES has undergone a thorough audit process. It took over a year for the 15 original algorithms to be reviewed and submitted for consideration. From there, after rigorous testing and cryptanalysis, 5 ciphers survived analysis by experts. Finally, in October 2000, 3 years after the project began, NIST announced the selection of Rijndael as the proposed algorithm for the Advanced Encryption Standard.


Sit back for a second and take that in. This open process, from the original design specification onward, took YEARS of audit and evaluation by EXPERTS in the crypto field. Keep that in mind as we discuss OSS in general.


Indeed, OSS as it relates to crypto is a good thing. But this was because there was stakeholder responsibility involved. Cryptanalysts put their credibility, expertise and jobs on the line in this process. This is not always the case when OSS is written. Many projects are written by CS students in college who like the ideals of the open source movement and want to hack; typically for experience, sometimes for fame. There is a sense of accomplishment, but typically not one of responsibility. This of course is not ALWAYS the case, and there is plenty of great OSS, like the Linux kernel, Apache, Samba and OpenOffice, that doesn't follow this at all. Which gets me to my point.


Gene Spafford once said that "when the code is incorrect, you can't really talk about security. When the code is faulty, it cannot be safe." I have used this quote before in other entries because I really think it gets to the heart of the major problem with secure programming today. Coding for coding's sake is one thing, but designing safe and secure software that our critical infrastructure and businesses use is a totally different beast. And the development methodologies built around the expectation of developer responsibility fall into different categories, depending on the programmers involved.


It is true that with OSS, anyone can review the code and audit it. In REALITY, how many people ACTUALLY do this? Be honest with yourself. When was the last time you went through every line of the Linux kernel? When a security patch is released for Apache, how many of you perform a significant code review? How many of you ACTUALLY just run apt-get or emerge and suck down the latest binaries? How many of you launch Red Carpet and download the RPMs? How is this any different from running "Windows Update"?


You see, THEORY and REALITY have no place mixing when arguing points about either crystal or black boxed security. Assuming you follow best practices for patch management, you grab the latest fix and apply it to your systems. You trust your package source and simply install it. Hey, you might even compile it. There is nothing wrong with that. But you are placing that trust in the source. The same source you don't look at when you type "make".


In past years we have seen the compromise of the GNU FTP server, the compromise of Debian's development servers, the attempted compromise of the Linux kernel tree, and the release of parts of the Microsoft Windows 2000 master sources. These are ALL vendors we trust. We rely on their best practices to protect us. Be it crystal or black... none are perfect.


Coming back from that tangent for a second, let's reflect on actual projects and products. Typically (but not always) black boxed software comes from a commercial vendor that has a business interest in seeing it succeed. They are trying to protect their intellectual property and may even try to rely on security by obscurity (which rarely works, by the way). Yet they typically have a sense of responsibility in maintaining their software; they have a financial interest in doing so. When looking at this from a secure programming perspective, however, history has shown these vendors fall flat on their faces.


Why? Building secure software has been seen as an impractical goal, because the business has other pressing objectives. Even though secure programming increases efficiency and cuts costs over the development lifecycle in the long run, the pressure of company growth has them writing software cheaper and faster, which typically isn't of the best quality. But that's changing.


Although history is riddled with flagrant disregard for secure code quality in the operating systems and applications we use, it is changing, because the very industry that accepted this behavior in the past now requires safer and more secure software. In my past entries I have pointed out examples of how Microsoft's security push continues to build a developer environment that fuses secure coding practices into developers' daily work. This fundamental shift continues to strengthen the design practices of black box software, which in time should result in a safer and more secure computing environment. We now see companies building commercial black boxed software with proper functional design specs, threat models, and test plans. Code goes through a strict source code management system and gets audited at various levels of development, testing and release. These are all major components of building better software.


With OSS, you rarely see this sort of design thinking go into the project. Developers have an itch to scratch and they go do it. They make it work for their needs and hope others get on board so it can be refactored, and hopefully audited. SourceForge and GitHub are full of such projects that rarely get off the starting blocks. More importantly, there are examples of OSS that are used by many in the open source community but don't have a strong developer following. Don't get me wrong. There are amazingly talented OSS developers out there. I am friends with many of them, and I spent years being part of that community and writing my fair share of code. However, you can't wave the "OSS is better because it's audited" flag when no one cares to get involved with it. Many projects die because no one is responsible for their growth. Without corporate backing and fiscal responsibility, the code is rarely maintained. Successful projects like the Linux kernel, Apache and Samba got there because there was a great developer following... many with corporate backing (in developer time, money or both). And even then, many of these projects have taken YEARS to build a system with some sort of respectable code audit facility, which I don't think we can blindly trust. A good example of this was the huge PGP vulnerability that sat in "open sourced" code for years before being detected... even though the code went through various audit processes. Further to this, we have seen the DARPA-funded 'Sardonix' security auditing project fall by the wayside, as the security researchers who were part of it were unable to get it going. That's really too bad, as I have huge respect for Crispin Cowan (who led the project), and would have liked to see him succeed with it.


In the end, code quality and the "correctness" of software are determined relative to the specifications and the design put in. Arguing that "open source can be audited" is futile when people DON'T do it. And many of those who do have no real experience in secure coding practices to do it effectively. There are great examples where I am wrong in that statement (FreeBSD's information handling policies come to mind), but if you look at the entire OSS landscape, I am more often right than wrong. Although it CAN be audited... it rarely is. And when it is, it's rarely done by professionals who know what they are doing. (My apologies to the numerous secure programming developers and test engineers I do know who take pride in their work in this area. I am generalizing here, and not referring to you.)


Education is a key culprit here. Developers are coming out of school with no secure coding experience. They don't know how to write defensive code, nor do they know how to audit code for such quality. Much of the code quality we expect in software doesn't exist because the quick time-to-market turnaround of new software sacrifices quality for quantity. And this problem plagues both camps. It's just viewed differently.
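To make "defensive code" a little more concrete, here is a minimal C sketch contrasting a naive copy routine with a bounds-checked one. The function names are hypothetical, invented purely for illustration; treat it as a sketch of the mindset, not a prescription.

    #include <stdio.h>
    #include <string.h>

    /* Naive version: trusts the caller completely and will overflow
       'dest' if 'src' is longer than the destination buffer. */
    void copy_name_unsafe(char *dest, const char *src) {
        strcpy(dest, src);
    }

    /* Defensive version: validates its inputs, enforces the stated
       buffer size, and reports failure instead of corrupting memory. */
    int copy_name_safe(char *dest, size_t dest_len, const char *src) {
        if (dest == NULL || src == NULL || dest_len == 0)
            return -1;                      /* reject bad arguments up front */
        if (strlen(src) >= dest_len)
            return -1;                      /* input would not fit, so refuse it */
        memcpy(dest, src, strlen(src) + 1); /* copy, including the terminator */
        return 0;
    }

    int main(void) {
        char buf[8];
        if (copy_name_safe(buf, sizeof(buf), "a name that is far too long") != 0)
            puts("rejected oversized input");
        return 0;
    }

The difference isn't cleverness; it's that the safe version treats every input as untrusted and makes the failure path explicit, which is exactly the habit most graduates were never taught.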


Knowing OSS is rarely audited on a routine basis, let's get back to basics here. Any vendor CAN have their source code audited. OSS offers free and open access to source code trees, which makes this easy. Black box vendors such as Microsoft use Shared Source initiatives and pay third parties to audit their code. An example of this was the .NET Security framework audit completed before its release. (Note that years later, they open sourced it anyway.) Some vendors, when selling to organizations such as the government and military, are required to undergo code audits and correctness testing against standards such as the Common Criteria, or by in-house code audit teams. The CSE and NSA have entire teams whose function is to do this. Stating that OSS is better for code audits is a fallacy when you look at those responsible for the code. Code quality is going to depend on the designers and the engineers, who are typically PAID to do it right. You don't always get that from OSS projects. You can. But you rarely do.


I think both camps are going to see a paradigm shift in the coming years, especially as more vendors adopt OSS. We see examples of Microsoft, IBM, Apple, Novell and Sun (to name just a few) embracing OSS and putting significant assets, including financial resources, into projects. If they do this correctly and don't muddy the development process with business politics, we might begin to see projects with a more focused design structure in the software. The result should be better secure programming practices, which will bring better auditing and improved quality as it relates to security. Hell, I can't wait to actually see functional specs, threat models and test plans for many of the open source projects out there. I would love to read these design docs and learn how they would approach such development and testing. We could learn a lot from the practical experience behind these successful projects.


Yet as I say this, I look over at Redmond and notice the significant investment it is putting on the table for its own processes, and for those of its third-party developers. I can point to several examples where tools like PREfast, PREfix, AppVerifier and FxCop are being integrated into our tools, helping us make more secure software. They are investing in the training of outside developers and generally are building a strong foundation for the "next generation" of black box software.
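For a rough sense of what a tool like PREfast works with, here is a small hedged sketch using SAL-style buffer annotations. It assumes the Windows SDK's sal.h is on the include path (with a no-op fallback elsewhere so it still compiles for illustration); the function is hypothetical, and the deliberate off-by-one is the kind of contract violation such static analysis tools are designed to flag at build time rather than at run time.

    /* sal.h ships with the Windows SDK; fall back to a no-op macro elsewhere. */
    #ifdef _MSC_VER
    #include <sal.h>
    #else
    #define _Out_writes_(n)
    #endif
    #include <stddef.h>

    /* The annotation states the contract: 'buf' has exactly 'len' writable
       elements. An analyzer compares the loop below against that contract. */
    void fill_buffer(_Out_writes_(len) char *buf, size_t len)
    {
        for (size_t i = 0; i <= len; i++) {  /* off-by-one: writes len + 1 bytes */
            buf[i] = 'A';
        }
    }

The value here is that the contract lives next to the code, so the tool (or a human reviewer) doesn't have to guess what the author intended.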


In THEORY, code quality and code correctness are enhanced by access to source code. In REALITY, that is only the case if code audits are actually done. And done by those who know what they are doing.


Before I end this, I need to take a moment to go on a tangent and discuss responsible disclosure as it relates to crystal and black box security. This is a totally different aspect, and one where OSS is traditionally MUCH better suited. The incentive for full disclosure when new vulnerabilities are found is much stronger in OSS because people everywhere can see it. In black boxed systems that use closed source, this isn't always the case. The time from vulnerability report to fix is much longer in black boxed systems because there doesn't always seem to be the same sense of urgency to fix issues. You can see proof of this in the announcements presented on lists such as Bugtraq. Companies like Microsoft may take up to 6 months to fix issues, whereas in OSS the turnaround time is rarely more than a few days. Then again, with so much Microsoft OSS stuff now on GitHub... this is changing.


Tangent aside, you will note I am trying not to take sides in this debate. That's because I don't think it matters. The point shouldn't be whether there is access to source code. It should be about the design and audit practices applied to the code base. When the code is incorrect, you can't really talk about security. When the code is faulty, it cannot be safe. When code isn't audited, you will never be able to know the difference.

