Restricting Anti-Virus Won’t Work

In a blog post, Stewart Baker proposed restricting access to sophisticated anti-virus software as a way to limit the development of sophisticated malware. It won’t work, for many different and independent reasons. To understand why, though, it’s necessary to understand how AV programs work.

The most important technology used today is the “signature”—a set of patterns of bytes—of each virus. Every commercial AV program on the market operates on a subscription model; the reason for the annual payment is that the programs download a fresh set of signatures from the vendor on a regular basis. If you pay your annual renewal fee, you’ll automatically have very up-to-date detection capabilities. It has nothing to do with being a sophisticated defender or attacker.
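To make that concrete, here is a minimal sketch, in Python, of what signature matching amounts to: the scanner holds a table of known byte patterns and flags any file that contains one of them. The names and patterns below are invented for illustration; real engines use far richer pattern languages and much faster matching algorithms.

```python
# Minimal sketch of signature-based scanning. The "database" here is just a
# dict of invented byte patterns; a real product ships a far larger database
# and refreshes it via the subscription updates described above.
SIGNATURES = {
    "Example.Virus.A": b"\xde\xad\xbe\xef\x13\x37",   # made-up pattern
    "Example.Virus.B": b"THIS-IS-NOT-A-REAL-SIGNATURE",
}

def scan_file(path):
    """Return the names of any known signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        hits = scan_file(path)
        print(f"{path}: {', '.join(hits) if hits else 'clean'}")
```

The interesting part isn't the matching loop; it's keeping the signature table current, which is exactly what the subscription download does.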

Yes, there are new versions of the programs themselves. These are developed for many reasons beyond simply increasing vendor profits. They may be designed for new versions of the operating system (e.g., Windows Vista versus Windows XP), have a better user interface, be more efficient, etc. There may also be some new functionality to cope with new concealment techniques by the virus writers, who haven’t stayed still. To give one simple example, if detection is based on looking for certain patterns of bytes, the virus might try to evade detection by replacing byte patterns with equivalent ones. Suppose that the standard antivirus test file were trying that. A simple variant might be to change the case of the letters printed. An AV program could try to cope by having more patterns, but that doesn’t work very well. In the test program, the message printed contains 30 letters, which means that there are 2^30 (1,073,741,824) variations. You don’t want to list all of them; instead, you have a more sophisticated pattern definition which can say things like “this string of bytes is composed of letters; ignore case when matching it against the suspect file”. But that in turn means that the AV program has to have the newer pattern-matcher; at some point, that can’t be done in a weekly update, so you have to get newer code. To that extent, the suggestion almost makes sense, save for two problems: first, the overwhelming majority of folks with the newest versions are simply folks who’ve just purchased a new computer; second, updates that can’t be handled by today’s pattern-matching engines are comparatively rare. The real essence of “updated” is the weekly download of a new signature database, but any responsible computer owner or system administrator has that enabled; certainly, the software is shipped that way by the vendors.
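A toy illustration of that trade-off, again as a hedged sketch rather than anyone's real engine: the 30-letter string below is made up (it is not the actual test-file text), but it shows why enumerating case variants is hopeless while a slightly smarter pattern language covers them all at once.

```python
import re

# A made-up 30-letter string standing in for the message in the test file.
SIGNATURE = "THISISAHYPOTHETICALVIRUSSTRING"
assert len(SIGNATURE) == 30

# Listing every upper/lower-case variant as a separate signature would take
# 2**30 entries, over a billion patterns for one virus:
print(2 ** len(SIGNATURE))   # 1073741824

# A richer pattern definition ("ignore case when matching") covers all of
# them with a single entry:
pattern = re.compile(re.escape(SIGNATURE), re.IGNORECASE)
print(bool(pattern.search("junk ThisIsAHypotheticalVirusString junk")))  # True
```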

The reliance on patterns, though, explains one reason why things like Stuxnet and Flame weren’t detected: they were rare enough that the vendors either didn’t have samples, or didn’t have enough information to include them in the signature database. Note carefully that a single instance of malware isn’t good enough: because of the variation problem, the vendors have to analyze the viruses enough to understand how they change themselves as they spread. This may require multiple samples, since of course the virus writers try to make their code hard to analyze.

You might say that instead of everyone downloading new signatures constantly, the programs should simply upload suspect files to some central server. Again, there are problems. First, that would create a tremendous bottleneck; you’d need many very large servers. Second, most companies don’t want internal documents sent to outside companies for scanning, but such documents can be, and have been, infected. (I’ve seen many.) IBM has even banned Siri internally because they don’t want possibly proprietary information to be sent to Apple. Third, client machines have limited bandwidth, too (technologies like DSL and cable modems are designed for asymmetric speeds, with much more capacity downstream than upstream); they can’t send out everything they’re trying to work with. Fourth, although the primary defense is the AV program checking the file when it’s first imported, the weekly scan of an entire disk will pick up viruses that are matched by newly-installed signatures. Thus, the first machines that had Stuxnet weren’t protected by antivirus software. However, the signatures are now common, which means that its presence can be detected after the fact. Fifth, you want to have virus checking even when you’re running without net access, perhaps when you’re visiting a client site but you’re not on their network. Sixth—I’ll stop here; that model just doesn’t work.

There’s a political reason why restricting AV vendors won’t work, too: it’s a multinational industry, and there are vendors that simply won’t listen to the US, NATO, etc. The New York Times ran an article that did more than speculate on possible links between Kaspersky Lab and Russian interests: “But the company has been noticeably silent on viruses perpetrated in its own backyard, where Russian-speaking criminal syndicates controlled a third of the estimated $12 billion global cybercrime market last year, according to the Russian security firm Group-IB.”

On a technical level, the rise of ever-smaller and cheaper chips and devices has led to decentralization, a move away from large, shared computers and towards smaller ones. Decades ago, the ratio of users to computers—timesharing mainframes—was much greater than one; now, it’s less than one, as people use smart phones and tablets to supplement their laptops and desktops. Trying to move back towards centralization of security-checking is probably not going to work unless a countervailing technological trend, thin clients and cloud-based everything, gains more traction than it has thus far.

There’s another technological wrinkle that suggests that restricting state-of-the-art antivirus might be counterproductive. Some AV programs use “anomaly detection”—looking for software that somehow isn’t “normal”—and upload unusual files to the vendor for correlation with behavior on other computers. (Yes, I know I said that companies won’t like that. I doubt they know; this is fairly new technology, and not nearly as mature as signature matching.) I wonder if this is one way that Kaspersky and others got their old samples of Flame:

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

If so, barring suspect sites from advanced AV technology would deny vendors early looks at some rare malware. (I won’t even go into the question of whether or not random computers can be authenticated adequately: they can’t.)
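To give a flavor of what “unusual” might mean in practice, here is a deliberately crude sketch (not any vendor’s actual method): flag files whose byte distribution looks close to random, as packed or encrypted executables often do, and queue them for the kind of automated reporting described in the quote above. The threshold and function names are assumptions made for this example.

```python
import math
from collections import Counter

# Deliberately crude anomaly heuristic: high byte entropy is one rough signal
# that a file is packed or encrypted, and therefore worth a second look.
# Real products combine many statistical and behavioral signals; the
# threshold below is arbitrary and chosen only for this sketch.
ENTROPY_THRESHOLD = 7.5   # bits per byte; 8.0 would be perfectly random

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def queue_for_upload(path: str) -> bool:
    """Return True if the file looks unusual enough to report to the vendor."""
    with open(path, "rb") as f:
        data = f.read()
    return byte_entropy(data) > ENTROPY_THRESHOLD
```

Samples swept up by a mechanism like this could sit in a vendor’s archive unexamined for years, which is consistent with the account quoted above.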

I’ve oversimplified the technology and left out some very important details (and some more of my arguments); the overall thrust, though, remains unchanged: trying to limit AV won’t work. However…

Kaspersky wrote: “As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn’t be detected.” There are two ways to do that. First, you—or the bad guys—could buy lots of different AV programs. That’s not entirely unreasonable; no one vendor has a perfect signature database; there are many viruses caught by some reputable products but missed by other, equally reputable ones. The other way, though, is to use one of the many free multi-scanner sites. There may be some opportunity for leverage there.

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.
