
What Must We Trust?

My Twitter feed has exploded with the release of the Kaspersky report on the “Equation Group”, an entity behind a very advanced family of malware. (Naturally, everyone is blaming the NSA. I don’t know who wrote that code, so I’ll just say it was beings from the Andromeda galaxy.)

The Equation Group has used a variety of advanced techniques, including injecting malware into disk drive firmware, planting attack code on “photo” CDs sent to conference attendees, encrypting payloads using details specific to particular target machines as the keys (which in turn implies prior knowledge of these machines’ configurations), and more. There are all sorts of implications of this report, including the policy question of whether or not the Andromedans should have risked their commercial market by doing such things. For now, though, I want to discuss one particular, deep technical question: what should a conceptual security architecture look like?
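To make the target-keyed encryption concrete, here is a toy sketch of the idea (not code from the report; the configuration string, iteration count, and the throwaway XOR cipher are all invented for illustration). The key is derived by repeatedly hashing strings specific to one machine, so the payload is inert everywhere else:

```python
import hashlib

def derive_key(machine_config, rounds=10000):
    """Iteratively hash machine-specific strings into a 32-byte key."""
    digest = machine_config.encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

def xor_stream(data, key):
    """Toy XOR cipher driven by a SHA-256 counter stream (illustration only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

MAGIC = b"PAYLOAD!"   # known plaintext prefix used to recognize a correct key

def try_decrypt(blob, machine_config):
    """Return the payload only if this machine's configuration yields the key."""
    plaintext = xor_stream(blob, derive_key(machine_config))
    return plaintext[len(MAGIC):] if plaintext.startswith(MAGIC) else None

# The attacker encrypts against the victim's known configuration...
blob = xor_stream(MAGIC + b"attack code", derive_key("TARGET-HOST|C:\\Program Files"))
# ...so the payload stays opaque on any machine configured differently.
assert try_decrypt(blob, "some-other-host") is None
assert try_decrypt(blob, "TARGET-HOST|C:\\Program Files") == b"attack code"
```

Note what this buys the attacker: an analyst who captures the encrypted blob but not the victim's exact configuration cannot recover the payload.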

For more than 50 years, all computer security has been based on the separation between the trusted portion and the untrusted portion of the system. Once it was “kernel” (or “supervisor”) versus “user” mode, on a single computer. The Orange Book recognized that the concept had to be broader, since there were all sorts of files executed or relied on by privileged portions of the system. Its newer, larger category was dubbed the “Trusted Computing Base” (TCB). When networking came along, we adopted firewalls; the TCB still existed on single computers, but we trusted “inside” computers and networks more than external ones.

There was a danger sign there, though few people recognized it: our networked systems depended on other systems for critical files. In a workstation environment, for example, the file server was crucial, but as an entity it wasn’t seen as part of the TCB. It should have been. (I used to refer to our network of Sun workstations as a single multiprocessor with a long, thin, yellow backplane—and if you’re old enough to know what a backplane was back then, you’re old enough to know why I said “yellow”...) The 1988 Internet Worm spread with very little use of privileged code; it was primarily a user-level phenomenon. The concept of the TCB didn’t seem particularly relevant. (Should sendmail have been considered part of the TCB? It ran as root, so technically it was, but very little of it actually needed root privileges. That it had privileges was more a sign of poor modularization than of an inherent need for a mailer to be fully trusted.)
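To make the sendmail point concrete: the genuinely privileged part of a mailer can be a handful of lines. Here is a minimal sketch (modern Python rather than 1988 C, with the port and account purely illustrative) of binding the SMTP port as root and then irrevocably dropping privilege before touching any mail:

```python
import os
import pwd
import socket

def start_mailer(port=25, run_as="nobody"):
    """Bind the privileged SMTP port as root, then drop privileges for good.
    Unix-only; only the bind() needs root, so nothing after it runs trusted."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))          # ports below 1024 require privilege
    listener.listen(5)

    unpriv = pwd.getpwnam(run_as)      # e.g. the traditional 'nobody' account
    os.setgroups([])                   # shed supplementary groups first
    os.setgid(unpriv.pw_gid)           # then the group ID...
    os.setuid(unpriv.pw_uid)           # ...then the user ID; root is now gone

    return listener                    # parsing and delivery run unprivileged
```

That split, a tiny privileged setup step followed by an unprivileged long-running body, is the privilege-separation pattern later made famous by OpenSSH.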

The National Academies report Trust in Cyberspace recognized that the old TCB concept no longer made sense. (Disclaimer: I was on the committee.) Too many threats, such as Word macro viruses, lived purely at user level. Obviously, one could have arbitrarily classified word processors, spreadsheets, etc., as part of the TCB, but that would have been worse than useless; these things were too large and had no need for privileges.

In the 15+ years since then, no satisfactory replacement for the TCB model has been proposed. In retrospect, the concept was not very satisfactory even when the Orange Book was new. The compiler, for example, had to be trusted, even though it was too huge to be trustworthy. (The manual page for gcc is itself about 90,000 words, almost as long as a short novel—and that’s just the man page; the code base is far larger.) The limitations have become painfully clear in recent years, with attacks demonstrated against the embedded computers in batteries, webcams, USB devices, IPMI controllers, and now disk drives. We no longer have a simple concentric trust model of firewall, TCB, kernel, firmware, hardware. Do we have to trust something? What? Where do we get these trusted objects from? How do we assure ourselves that they haven’t been tampered with?
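Even the obvious partial answer, comparing an object against a known-good digest, merely relocates the question. A minimal sketch (the file name and digest here are hypothetical) shows the regress: the digest's source, the hashing tool, and the machine running the check must all themselves be trusted.

```python
import hashlib

def verify_firmware(image_path, known_good_sha256):
    """Compare a firmware image against a vendor-published digest.
    Note what this does NOT solve: the digest, the tool computing it,
    and the machine it runs on must all themselves be trusted."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == known_good_sha256
```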

I’m not looking for concrete answers right now. (Some of the work in secure multiparty computation suggests that we need not trust anything, if we’re willing to accept a very significant performance penalty.) Rather, I want to know how to think about the problem. Other than the now-conceptual term TCB, which has been redefined as “that stuff we have to trust, even if we don’t know what it is”, we don’t even have the right words. Is there still such a thing? If so, how do we define it when we no longer recognize the perimeter of even a single computer? If not, what should replace it? We can’t make our systems Andromedan-proof if we don’t know what we need to protect against them.
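To give a flavor of that multiparty work, here is a toy sketch of additive secret sharing, one of its standard building blocks (the prime modulus and three-party split are arbitrary choices): each party holds a meaningless-looking share, yet together the parties can compute a sum without any of them seeing the inputs.

```python
import secrets

P = 2**127 - 1  # a large prime; all arithmetic is modulo P

def share(value, n_parties=3):
    """Split a secret into n additive shares; any n-1 of them reveal nothing."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Each party holds one share of each input; summing shares pointwise
# computes x + y without any single party ever seeing x or y.
x_shares, y_shares = share(41), share(1)
sum_shares = [(a + b) % P for a, b in zip(x_shares, y_shares)]
assert reconstruct(sum_shares) == 42
```

Real protocols must also handle multiplication and actively malicious parties, which is where the very significant performance penalty comes in.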

By Steven Bellovin, Professor of Computer Science at Columbia University

Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and holds several patents on cryptographic and network protocols. He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs.


Comments

Todd Knarr – Feb 18, 2015 11:09 PM

One way of getting around the trust problem is to not trust any single thing. Aircraft and spacecraft have used this approach for ages: 3 independent units perform the same calculations in parallel, and they should all produce the same result if they’re operating correctly. If 2 out of 3 agree on a result, it’s probably right, and the disagreeing unit is assumed faulty and taken off-line. If all of them give different results, you don’t know which ones are faulty, but at least you know you can’t trust them. For instance, for the hard-drive-firmware malware I’d use drives from 3 different manufacturers, purchased at different times from different vendors.
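In code, that 2-out-of-3 scheme is just a majority vote that also flags the dissenter. A minimal sketch (the result values are placeholders for, say, reads returned by three drives):

```python
from collections import Counter

def vote(results):
    """Majority-vote over independent replicas. Returns (value, suspects),
    where suspects are the indices of replicas that disagreed."""
    tally = Counter(results)
    value, count = tally.most_common(1)[0]
    if count * 2 <= len(results):      # no strict majority: trust nothing
        return None, list(range(len(results)))
    suspects = [i for i, r in enumerate(results) if r != value]
    return value, suspects

assert vote(["ok", "ok", "ok"]) == ("ok", [])
assert vote(["ok", "BAD", "ok"]) == ("ok", [1])
assert vote(["a", "b", "c"]) == (None, [0, 1, 2])
```

The catch, of course, is that the voter itself then becomes the thing you must trust.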
