DevOps is certainly the buzzword of the year. Everywhere you turn, people refer to DevOps and continuous delivery. It seems the final frontier of developer productivity has arrived. The reality, which large organizations deal with daily, is the same as with every development methodology before it: the devil is in the details.
There is no one-size-fits-all model. Development practices vary between organizations building SaaS products, embedded systems, or industrial SCADA systems. From a security perspective, the way we introduce security into the development process must accommodate how the applications or services are actually being developed. It's a "different strokes for different folks" approach - there is no such thing as a one-size-fits-all security model. Not every organization is like Microsoft; just because their SDL program works for them doesn't mean it will work for you.
I was glad to hear Gary McGraw mention many of these things in his interview on the Trusted Software Alliance site. A few points, which he has been making for years, stood out. Gary noted that there is a difference between flaws and bugs. Although this is not a new discovery, the point often gets lost in translation. We spend a lot of time and energy finding and fixing security bugs because the tools make bugs easy to find, and they are the easiest things to fix. Conversely, we aren't doing enough to find and fix security flaws: flaws are introduced when software that has no security bugs is used in a way that nonetheless undermines security.
For example, unsalted password hashes are more of a flaw than a SQL Injection bug (although the more I see cases of password theft enabled by inadequate hashing, the more inclined I am to consider them bugs - but that is an argument for another day). We see the same thing in the consumption of vulnerable components. It should be obvious that you should avoid components with known security bugs, but the converse does not hold: using only components without known security bugs does not make your application secure. The way you use those components can itself introduce a security flaw.

The Apache Axis2 library provides a good example (https://issues.apache.org/jira/browse/AXIS2-4318). The library used a vulnerable version of Apache's HttpClient library to implement SSL, and there was no way to configure Axis2 to use the more recent HttpClient release without the vulnerability. The interesting thing about the fix the Axis2 team put in place is that they only made the underlying SSL transport configurable; the default still relied on the vulnerable version of HttpClient. To me, that fix may be even more dangerous than the original problem. A novice programmer is unlikely to grasp the subtle details of the fix (a flaw rather than a bug) and will probably download the newer Axis2 library without realizing a configuration change is also required. In this case, it's not as simple as just using the latest "patched" library.
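To make the flaw-versus-bug distinction concrete, here is a minimal Java sketch built only on standard JDK crypto APIs (the surrounding class and method names are my own illustrative choices, not code from Axis2 or any library discussed above). Both methods call perfectly bug-free JDK code; the first uses it in a way that recreates the unsalted-hash flaw mentioned earlier, while the second uses the same kind of component with a per-user salt and an iterated key-derivation function.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashingSketch {

    // Flawed usage: MessageDigest has no bug, but a single unsalted, fast hash
    // of a password is a design flaw - identical passwords hash identically,
    // and precomputed (rainbow) tables apply.
    static String flawedHash(String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    // Better usage of the same bug-free JDK: a random per-user salt plus an
    // iterated key-derivation function (PBKDF2) makes precomputation useless
    // and brute force far more expensive.
    static String saltedHash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec).getEncoded();
        // Store the salt alongside the hash so the check can be repeated at login.
        return Base64.getEncoder().encodeToString(salt) + ":"
             + Base64.getEncoder().encodeToString(hash);
    }
}
```

Neither version contains a bug a scanner would flag; the difference lies entirely in how a sound component is used, which is exactly the kind of issue that architectural review, rather than bug hunting, is meant to catch.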
As Gary pointed out in the TSA interview, using good components can be just as bad as, or worse than, using flawed ones - because you think you are secure when you are not. Hopefully, we can continue to raise the bar so that we don't lose the forest for the trees. We need to recognize that a secure development practice requires architectural risk analysis. Without it, you may be bug-free, but you won't be flaw-free, and flaws can be just as damaging.
Ryan is the former Chief Security Officer at Sonatype. He is now the Chief Scientist/Director of Research and ML at Barkly.