
Flaws vs Bugs

DevOps is certainly the buzzword of the year. Everywhere you turn, people are talking about DevOps and Continuous Delivery, as though the final frontier of developer productivity has arrived. The reality that large organizations face day to day, though, is the same as with every development methodology before it: the devil is in the details. There is no one-size-fits-all model. Development practices vary between organizations building SaaS products, embedded systems, and industrial SCADA systems. From a security perspective, the way we introduce security into the development process must accommodate how applications or services are actually being developed. It's "different strokes for different folks"; there is no one-size-fits-all security model either. Not every organization is Microsoft - just because the SDL program works for them doesn't mean it will work for you.

I was glad to hear Gary McGraw mention a lot of these things in his interview on the Trusted Software Alliance site. A couple of points, which he has been making for years, really stood out. Gary noted that there is a difference between flaws and bugs. Although this is not a new distinction, the point often gets lost in translation. We spend a lot of time and energy finding and fixing security bugs because the tools make bugs easy to find, and they are the easiest things to fix. Conversely, we aren't doing enough to find and fix security flaws: design-level weaknesses introduced when software that is free of security bugs is used in a way that undermines security. For example, unsalted password hashes are more of a flaw than a SQL Injection bug (although the more cases of password theft I see due to inadequate hashing, the more I am inclined to consider these bugs - but that is an argument for another day). The first sketch below shows what a design-level fix for this flaw looks like.

We see the same thing in the consumption of vulnerable components. It should be obvious that you should avoid components with known security bugs, but the inverse is not always true: just because you use components without known security bugs, your application is not necessarily secure. The way you use the components can itself introduce a security flaw. The Apache Axis2 library provides a good example (https://issues.apache.org/jira/browse/AXIS2-4318). The library was using a vulnerable version of Apache's HttpClient library to implement SSL, and there was no way to configure Axis2 with the more recent version of HttpClient (the one without the vulnerability). The interesting thing about the fix the Axis2 team put in place was that all they did was make the underlying SSL transport configurable; the default still relied on the vulnerable version of HttpClient. To me, the fix may be even more dangerous. A novice programmer may not understand the subtle details of the fix (a flaw vs. a bug) and will likely download the newer version of the Axis2 library without realizing that a configuration change is also required. In this case, it's not as simple as just using the latest "patched" library. The second sketch below illustrates the trap.
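To make the hashing example concrete, here is a minimal sketch of what the design-level fix looks like: a per-user random salt and a deliberately slow derivation, using the PBKDF2 implementation that ships with the standard Java runtime. The parameter values are illustrative assumptions, not a drop-in recommendation; the point is that no single line here fixes a "bug" - the security comes from the overall design.

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.security.spec.KeySpec;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PasswordHashing {

        // Illustrative parameters; real deployments should tune the
        // iteration count to their own hardware and threat model.
        private static final int SALT_BYTES = 16;
        private static final int HASH_BITS = 256;
        private static final int ITERATIONS = 100_000;

        // A fresh random salt per user. Reusing one salt everywhere is
        // exactly the kind of flaw discussed above: each call is
        // "correct" in isolation, but the design is broken.
        static byte[] newSalt() {
            byte[] salt = new byte[SALT_BYTES];
            new SecureRandom().nextBytes(salt);
            return salt;
        }

        // Derive a slow, salted hash. Store the salt alongside the hash.
        static byte[] hash(char[] password, byte[] salt) throws Exception {
            KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, HASH_BITS);
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            return f.generateSecret(spec).getEncoded();
        }

        // Verify by re-deriving and comparing in constant time.
        static boolean verify(char[] password, byte[] salt, byte[] expected)
                throws Exception {
            return MessageDigest.isEqual(hash(password, salt), expected);
        }
    }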
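The Axis2 episode illustrates a general trap: a library that is secure only when explicitly configured. The sketch below is entirely hypothetical - the class and enum names are invented for illustration and are not Axis2's actual API - but it captures the shape of the problem: both clients compile and run against the "patched" library, yet only the one that opts in to the new transport is actually safe.

    // Hypothetical, self-contained sketch; none of these names are
    // Axis2's real API. It models a patched library that made its
    // transport configurable but kept the vulnerable default.
    public class ConfigurableTransportDemo {

        enum Transport { LEGACY_HTTP_CLIENT, MODERN_HTTP_CLIENT }

        static class SoapClient {
            // The "fix": the transport is configurable, but the default
            // is still the old, vulnerable implementation, so merely
            // upgrading the library version changes nothing.
            private Transport transport = Transport.LEGACY_HTTP_CLIENT;

            void setTransport(Transport t) { this.transport = t; }

            void call(String endpoint) {
                if (transport == Transport.LEGACY_HTTP_CLIENT) {
                    System.out.println(endpoint + ": vulnerable default transport");
                } else {
                    System.out.println(endpoint + ": explicitly configured fixed transport");
                }
            }
        }

        public static void main(String[] args) {
            // The developer who only bumped the version number:
            SoapClient upgradedButUnsafe = new SoapClient();
            upgradedButUnsafe.call("https://example.org/service");

            // The developer who read the fix notes and opted in:
            SoapClient upgradedAndConfigured = new SoapClient();
            upgradedAndConfigured.setTransport(Transport.MODERN_HTTP_CLIENT);
            upgradedAndConfigured.call("https://example.org/service");
        }
    }

This is why the fix worried me: the compiler and the build both accept the unsafe variant, so nothing short of reading the release notes (or an architectural review) separates the novice programmer from the careful one.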

As Gary pointed out in the Trusted Software Alliance interview, using good components in a bad way can be just as bad as, or worse than, using flawed components in the first place, because you believe you are secure when you are not. Hopefully we can continue to raise the bar so that we don't lose the forest for the trees. A secure development practice requires some level of architectural risk analysis; without it, you may be bug free, but you won't be flaw free, and flaws can be equally damaging.


Written by Ryan Berg

Ryan is the former Chief Security Officer at Sonatype. He is now the Chief Scientist/Director of Research and ML at Barkly.