
Software Insecurity: The Problem with the White House Cybersecurity Proposals

The White House has announced a new proposal to fix cybersecurity. Unfortunately, the positive effects will be minor at best; the real issue is not addressed. This is a serious missed opportunity by the Obama administration; it will expend a lot of political capital to no real effect. (There may also be privacy issues; while those are very important, I won’t discuss them in this post.) The proposals focus on two things: improvements to the Computer Fraud and Abuse Act (CFAA) and provisions intended to encourage information sharing. At most, these will help at the margins; they’ll do little to fix the underlying problems.

The CFAA has long been problematic; the concept of computer use in “excess of authorization” has been abused by prosecutors. The new proposal does amend that language, though the implications of the change are not obvious to me. Fundamentally, though, the increased penalties in the new CFAA matter only if the bad guys are caught, and that rarely happens. Increased penalties won’t deter attackers who don’t think those penalties will ever actually come into play. It has often been noted that it is certainty of punishment, not severity of punishment, that actually deters.

The new reporting rules may have some beneficial effect, but only a minor one. Some sites, especially the large, sophisticated ones, can be helped by knowing what attackers can do; arguably, this will let them tweak their monitoring and/or firewall rules. Some government agencies will get a broader picture of attack patterns; this may let them improve attribution or perhaps issue better advisories. Most sites, though, aren’t helped by this; they have to wait for vendors to fix the problem. And therein lies the rub: most security problems are due to buggy code.

Certainly, there are other factors contributing to security problems, such as horrible usability; however, a very high percentage of system penetrations are due to the very mundane-sounding problem of flawed code. This specifically includes all “drive-by downloads” and “privilege escalation” attacks following some user-level penetration. The only way we will significantly improve our overall security posture is if we can make progress on the buggy code issue. The White House proposals do nothing whatsoever to address this—and that’s bad.
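
To make that mundane-sounding problem concrete, here is a minimal sketch in C (a hypothetical handler, not drawn from any real incident) of the kind of flaw behind many such penetrations: an unchecked copy of attacker-controlled input into a fixed-size buffer.

    #include <string.h>

    /* Hypothetical request handler: copies attacker-controlled input
       into a fixed-size stack buffer with no length check. Input longer
       than 64 bytes overwrites adjacent stack memory, including the
       return address; that is the raw material of drive-by download
       and privilege escalation exploits. */
    void handle_request(const char *input)
    {
        char buf[64];
        strcpy(buf, input);   /* the bug: no bounds check */
        /* ... process buf ... */
    }

    /* The fix is equally mundane: check the length before copying. */
    void handle_request_fixed(const char *input)
    {
        char buf[64];
        if (strlen(input) >= sizeof(buf))
            return;           /* reject oversized input */
        memcpy(buf, input, strlen(input) + 1);
    }

A one-line oversight of this sort is all it takes, and nothing in the proposal touches it.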

To be sure, it’s not an easy problem to solve. Microsoft, despite a tremendous (and admirable) effort, still has buggy code to deal with. Passing a law banning bugs is, shall we say, preposterous. But would changes to liability law help, perhaps by banning the disclaimers in EULAs? How about tax breaks for certain kinds of software development practices? Limiting the ability of companies to write off expenses incurred by dealing with breaches? The equivalent of letters of marque for bug hunters, who would be paid a bounty by the vendor for each security hole they find and report? All of these are at least somewhat problematic (and I’m not even serious about the last one), but at least they attempt to address the real issue.

Deterrence won’t suffice, even for ordinary criminals; it won’t matter at all to the more serious state-sponsored attackers, despite the indictment of some alleged Chinese military hackers. The goal should be prevention of attacks, not punishment after the bad guys have succeeded. This proposal doesn’t even try to achieve that.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.


Comments

Need to be more specific, by Phillip Hallam-Baker  –  Jan 16, 2015 3:25 AM

The problem is not the buggy code.

The problem is a security architecture in which the security of the system depends on the security of several million lines of code that we cannot possibly audit, rather than on a much smaller number of lines that we might have a chance of auditing.

Another problem is a software engineering approach that encouraged people to use unsafe languages such as C, which lack array bounds checking, ‘carefully’, rather than building checks for overflow errors into the compilers. Sure, we can’t be absolutely sure that a compiler is correct either. But it is much easier to audit one compiler than several hundred thousand apps.
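
Hallam-Baker’s point about moving checks into the compiler can be illustrated with today’s sanitizers. The sketch below (my example, not his) contains an off-by-one write that a plain C compile accepts and typically runs silently; building it with the AddressSanitizer instrumentation available in both GCC and Clang (cc -fsanitize=address) makes the compiler emit bounds checks, so the instrumented binary aborts at the bad store instead of quietly corrupting memory.

    #include <stdio.h>

    int main(void)
    {
        int a[4];
        /* Off-by-one: valid indices are 0..3, but the loop writes a[4].
           An ordinary build silently corrupts whatever follows the array;
           a -fsanitize=address build halts with a stack-buffer-overflow
           report at this write. */
        for (int i = 0; i <= 4; i++)
            a[i] = i;
        printf("%d\n", a[0]);
        return 0;
    }

This is exactly the trade he describes: rather than trusting several hundred thousand applications to index arrays ‘carefully’, we audit one compiler (and its runtime) to enforce the check everywhere.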
