|
A number of security predictions have been doing the rounds over the last few weeks, so I decided to put pen to paper and write a list of my own.
However, I have quite a few predictions, so I have spread them over several blog posts (see part 1 & part 2). After all, I didn’t want to bombard you with too much information in one go!
Part three examines the threats associated with data breaches.
Data Breaches
What would an annual collection of predictions be if it didn’t include a perspective on data breaches? The biggest news stories of 2014 were all about brand-name organizations being breached and the tens of millions of credit card numbers and personal records that were stolen.
Of course there’ll be more breaches in 2015, and they’ll be bigger and more sophisticated; the stars have foretold it! Well, actually, no star-gazing was required: a summary analysis of public breach disclosure statistics for the last decade would have been sufficient.
While it doesn’t take a soothsayer to predict the increase in data breaches for 2015, it does raise another important question. Are we being breached more often?
In many ways there is a large disparity between the breach statistics being recited and the volume of associated hacking activity. Most certainly the number and sophistication of hacks has been increasing year-on-year since the birth of the Internet, but the increase in hacking frequency most likely parallels the like-for-like growth of the Internet in general—while the growth in data breach disclosures looks more like a scary hockey-stick projection.
I think that there are a handful of critical factors as to why the metrics for data breaches are on a dramatic incline:
All of the above, when combined, makes for a case of observational bias. It bears a striking similarity to another story, where a number of scientific papers released in the middle of the twentieth century argued that the increased annual count of tornadoes in the USA was due to increased settlement of the West, farming techniques, and global warming, only to be later debunked.
The reality of the situation was that more people were settling in the West and communication channels and alerting mechanisms had advanced, which meant that there were more people capable of observing tornadoes and reporting them.
Vulnerabilities
Going hand-in-hand with data breaches is of course the discussion on vulnerabilities and vulnerability disclosure. Unlike previous years—where the mainstay of predictions had been vulnerability specific—very few vendors voiced their predictions for the growth of vulnerabilities.
This may be because these projections have been relegated to annual threat reports (of which there are many)—rather than summaries of the public’s top-ten things to worry about in 2015.
Vulnerability landscape
It is, however, interesting to see why so few people commented on what the vulnerability landscape will look like for the year. With notable pan-Internet threats such as Heartbleed and Shellshock making huge splashes in 2014, only a handful of commentators projected more of these big vulnerabilities.
When vulnerability predictions were made for 2015 they often took the form of changes in attack vector or emphasized a particular category of technology. For example, pointing out that point-of-sale (PoS) systems are vulnerable to attack and, as companies hardened those systems in the wake of big breaches in 2014, that attackers would move to exploit vulnerabilities in the payment processors instead.
Vulnerability disclosure has changed radically in the last five years. What was once largely the realm of security vendors, who invested in research teams to find and categorize new bugs, or who set up purchase programs to entice third-party researchers to disclose to them first (each seeking an edge over competitors by covering a vulnerability before anyone else), is rapidly becoming a standalone business.
Bug bounty programs—often funded by the vulnerable software vendors themselves—pay the researchers directly for their discoveries. The surprise for many is how well this new arrangement is working out.
Bug bounty frameworks
With a direct line between the researcher and the vulnerable vendor, a legal framework allowing them to hunt without fear of prosecution, and assurances of hassle-free payment, more bugs are being found and disclosed this way. In effect, a chunk of the security quality assurance process has been conveniently outsourced in a pay-for-results model. The grey and underground channels for researchers to sell and disclose newfound vulnerabilities will continue to exist, but it would seem that less talent is following that path now with the commercialization of bug bounty frameworks.
The elephant in the room is, however, the software and code not owned by any particular organization, but used by many. Unfortunately, the biggest vulnerabilities of 2014 lay undiscovered for years in some of the most popular open source software powering the Internet. The question on the lips of many is which new open source vulnerabilities will surprise us in 2015.
Conclusion
It is likely that 2015 will see a sea-change in the way open source code is viewed and managed. The ferocious media attention to Shellshock and Heartbleed has already initiated a renewed vigour in bug hunting within open source projects. I think that, over the next couple of years, the key outcomes will follow something like this trajectory:
Vendors of automated code analysis and bug hunting tools will take the lead in analysing popular open source projects. By uncovering new bugs they’ll initially harness the media to extol the virtues of their advanced technology and, once the media tires of bug overload, they’ll shift to publishing statistical reports and cite academic papers as competitive differentiators.
I’d add a fourth conclusion there:
* To the consternation of vendors, purchasers will require that proprietary code be subject to the same analysis and assessment as the open source code. Pressure will increase as problems are found to originate more often in the proprietary code than in the OSS code.
Why do I predict that? Because many OSS projects (e.g. the Linux kernel itself, the PostgreSQL database, the Apache projects) are already being scanned and analyzed for errors by Coverity and others, and to date Coverity has found that the error density for OSS is significantly better than for the proprietary projects it also scans (an average of 0.59 errors per thousand lines for OSS vs. 0.72 for proprietary in 2013). My experience as a developer is that most proprietary software outside of a few industries isn’t routinely scanned for errors; it’s seen as a cost with little benefit to sales, and so it gets the same treatment as most other QA (i.e. it’s first on the chopping block when time has to be made for the latest new feature Marketing’s asked for). With OSS having gotten a head start and eliminated most of the low-hanging and even a lot of the high-hanging fruit, it’s not hard to predict that the most errors are going to surface in the codebase that hasn’t yet been subject to that analysis on a regular basis.
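The defect-density figures quoted above are simple arithmetic: defects found divided by thousands of lines of code scanned. A minimal sketch of that calculation (the 0.59 and 0.72 averages for 2013 come from the comment above; the per-project defect and line counts below are purely hypothetical for illustration):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (defects/KLOC),
    the metric Coverity reports in its scan summaries."""
    return defects / (lines_of_code / 1000.0)

# Hypothetical project: 425 defects found in 600,000 lines of code.
density = defect_density(425, 600_000)
print(round(density, 2))  # 0.71 defects/KLOC, close to the 2013 proprietary average
```

By this measure, a lower number is better; the comment’s point is that regularly scanned OSS codebases averaged 0.59 defects/KLOC while proprietary codebases averaged 0.72.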
Wasn’t the world going to collapse due to lack of IPv4 addresses?
;-)