Five or so years ago, when I began my own DevOps journey, I knew the potential gains we could make, but there weren't any success stories or data from companies like ours at the time. I understood the what and the why, although my why back then was to gain the trust of our business partners, because I felt it had been badly damaged after years of under-delivering on our promises. As a servant leader, I understood the cultural aspects, and our team, after some initial heavy lifting, started to see tremendous gains in speeding up our daily work through automation, which in turn freed us up to be more productive. We were even partnering with Security back then, as it seemed obvious that we had to get them on board.
Today, there are numerous success stories from all sorts of companies, including old, highly regulated, Fortune 100 companies that look like worst-case scenarios for DevSecOps. Their stories, and the personal journeys of DevSecOps practitioners, give us concrete examples of the kinds of gains you can expect on your own journey. The numbers alone are a compelling reason to embark on that journey if you haven't already.
Some of my favorite examples come from our own customers, whom I've witnessed launch and roll out DevSecOps initiatives over the last few years.
One financial services company had 800 open source components under management in a manual approval process that took three weeks. They adopted our Sonatype Repository Firewall solution to automate that process.
They deleted all the components in their Sonatype Nexus Repository and started rerunning their build with Sonatype Repository Firewall turned on, and 30 days later discovered:
They actually had 19,000 open source components flowing into those builds, 850 of which were automatically quarantined by an approval process that now took less than three seconds, saving them 54,000 man-hours along the way.
More broadly, their DevSecOps initiative saw software quality go up over 40%, while developer productivity rose by roughly 30%, thanks to shifting feedback left so developers received it earlier.
There is so much focus on shifting left and empowering developers that it is easy to overlook the impact Operations and Security teams are enjoying all the way to the right, in production. Putting a high-quality, secure application in production is a great start, but what happens when a zero-day vulnerability is announced? The last couple of years have seen several nasty vulnerabilities announced that have led directly to well-known data breaches.
When I talk to our customers about how our solutions help their Product Security Incident Response Teams (PSIRT), the answers all tend to be the same:
We were able to immediately answer the first question we're faced with: are we impacted? And if so, which apps and what data are at risk? Before we could track and trace open source components like this, we would have relied on endpoint server scanning, which can take weeks to complete that discovery.
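To make that concrete, here is a minimal sketch of what that kind of impact lookup can look like, assuming each application publishes a CycloneDX-style JSON SBOM to a shared directory. The directory layout, component name, and version list below are illustrative assumptions, not details taken from the customer stories above.

```python
# Minimal sketch: answering "are we impacted?" from per-application SBOMs.
# Assumptions (illustrative only): each app has a CycloneDX-style JSON SBOM
# in ./sboms/, and we're checking for a hypothetical vulnerable component.
import json
from pathlib import Path

VULNERABLE_NAME = "struts2-core"              # hypothetical component of interest
VULNERABLE_VERSIONS = {"2.3.30", "2.3.31"}    # hypothetical affected versions


def impacted_apps(sbom_dir: str = "sboms") -> dict[str, list[str]]:
    """Return {app_name: [matching component@version, ...]} for affected apps."""
    hits: dict[str, list[str]] = {}
    for sbom_file in Path(sbom_dir).glob("*.json"):
        bom = json.loads(sbom_file.read_text())
        matches = [
            f"{c.get('name')}@{c.get('version')}"
            for c in bom.get("components", [])
            if c.get("name") == VULNERABLE_NAME
            and c.get("version") in VULNERABLE_VERSIONS
        ]
        if matches:
            hits[sbom_file.stem] = matches
    return hits


if __name__ == "__main__":
    for app, components in impacted_apps().items():
        print(f"{app}: {', '.join(components)}")
```

The point is the shape of the query, not the script: when component data is already inventoried per application, "are we impacted?" becomes a lookup measured in seconds rather than an endpoint scanning exercise measured in weeks.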
One particularly noteworthy example was a CISO who, upon learning that an app sitting on particularly sensitive data was impacted by one of last year's Struts vulnerabilities, decided to pull the plug on it. This was the same vulnerability that impacted Equifax, the Canada Revenue Agency, and several others.
The above examples are real-world proof of the kinds of gains teams across industries are seeing. The stories have been rolling in for years now and should be a big enough carrot to have organizations running toward DevSecOps if they aren't already. If they're still not enough, let me offer you this stick.
Depending on which account you read, it took only two or three days after that Struts vulnerability was announced for an exploit to appear. That was a sharp reduction in the time it takes attackers to weaponize a newly disclosed vulnerability. The chart below shows the average time for such weaponization over the years.
I recall from my earlier days working under the assumption that we had roughly 30 days before vulnerabilities with CVSS scores of 9 or 10 were turned into exploits. In 2015, that number was down to 15 days, and then, in 2017, the exploit was out in three days. Initially I was skeptical and assumed this was a one-off that couldn't be the new norm, until I saw this article.
It appears that attackers are accelerating, shortening delivery times and increasing productivity just like everyone else.
At this year's RSA conference, David Hogue, a technical director at the NSA, shared their best practice for preventing these zero-day attacks. David said that keeping systems as up to date as possible, as quickly as possible, is one of the best defensive practices.
If the carrot of productivity gains isn't enough to make you run toward DevSecOps, then perhaps the stick of a breach is something you'll want to run from. You can choose the carrot or the stick, but either way, you'll want to change if you haven't begun already.
I'll leave you with a Deming quote: "It is not necessary to change. Survival is not mandatory."