A report, “Securing Cyberspace for the 44th Presidency,” has just been released. While I don’t agree with everything it says (and in fact I strongly disagree with some parts of it), I regard it as required reading for anyone interested in cybersecurity and public policy.
The analysis of the threat environment is, in my opinion, superb; I don’t think I’ve seen it explicated better. Briefly, the US is facing threats at all levels, from individual cybercriminals to actions perpetrated by nation-states. The report pulls no punches (p. 11):
America’s failure to protect cyberspace is one of the most urgent national security problems facing the new administration that will take office in January 2009. It is, like Ultra and Enigma, a battle fought mainly in the shadows. It is a battle we are losing.
The discussion of the organizational and bureaucratic problems hampering the government’s responses strikes me as equally trenchant, though given that my own organizational skills are meager at best, and that I have limited experience navigating the maze of the Federal bureaucracy, I can’t be sure of that… (Aside: although there were some very notable technologists on the committee, it seems to me to have been dominated by political and management types. A strong presence of policy people on this committee was, of course, necessary, but perhaps there should have been more balance.)
The report noted that the US lacks any coherent strategy or military doctrine for response. To be sure, the government does seem to have some offensive cyberspace capability, but this is largely classified. (The report cites NSPD-16 from 2002 (p. 23), but as best I can determine the directive itself is classified.) As noted (p. 26), the “deterrent effect of an unknown doctrine is extremely limited”. (I was pleased that I wasn’t the only one who thought of Dr. Strangelove when reading that sentence; the report itself has a footnote that makes the same point.)
The report is, perhaps, too gentle in condemning the market-oriented approach to cybersecurity of the last few years. That may reflect political realities; that said, when the authors write (p. 50):
In pursuing the laudable goal of avoiding overregulation, the strategy essentially abandoned cyber defense to ad hoc market forces. We believe it is time to change this. In no other area of national security do we depend on private, voluntary efforts. Companies have little incentive to spend on national defense as they bear all of the cost but do not reap all of the return. National defense is a public good. We should not expect companies, which must earn a profit to survive, to supply this public good in adequate amounts.
they were too polite. How could anyone have ever conceived that it would work? The field wasn’t “essentially abandoned” to market forces; rather, the government appears to have engaged in an excess of ideology over reality and completely abdicated its responsibilities; it pretended that the problem simply didn’t exist.
I was rather surprised that there was no mention of a liability-based approach to security. That is, computer owners would be liable for attacks emanating from their machines; they in turn would have recourse (including class action suits) against suppliers. While there are many difficulties and disadvantages to such an approach, it should at least be explored.
The most important technical point in this report, in my opinion, is its realization that one cannot achieve cybersecurity solely by protecting individual components: “there is no way to determine what happens when NIAP-reviewed products are all combined into a composite IT system” (p. 58). Quite right, and too little appreciated; security is a systems property. The report also notes that “security is, in fact, part of the entire design-and-build process”.
The discussion of using Federal market powers to “remedy the lack of demand for secure protocols” is too terse, perhaps by intent. As I read that section (p. 58), it is calling for BGP and DNS security. These are indeed important, and were called out by name in the 2002 National Strategy to Secure Cyberspace. However, I fear that simply saying that the Federal government should only buy Internet services from ISPs that support these will do too little. DNSSEC to protect .gov and .mil does not require ISP involvement; in fact, the process is already underway within the government itself. Secured BGP is another matter; that can only be done by ISPs. However, another recent Federal cybersecurity initiative—the Trusted Internet Connection program—has ironically reduced the potential for impact, by limiting the government to a very small number of links to ISPs. Furthermore, given how many vital government dealings are with the consumer and private sectors, and given that secured BGP doesn’t work very well without widespread adoption, US cybersecurity really needs mass adoption. This is a clear case where regulation is necessary; furthermore, it must be done in conjunction with other governments.
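To make the DNSSEC point concrete: whether a zone’s answers validated is observable from the client side, with no ISP involvement at all. Here is a minimal sketch, assuming the third-party dnspython package, a validating resolver at 8.8.8.8, and nasa.gov as the test name (all three are my assumptions, not anything from the report):

```python
# A minimal sketch: ask a validating resolver whether a name's answer
# passed DNSSEC validation, as signaled by the AD (authenticated data)
# flag. Requires the third-party dnspython package.
import dns.flags
import dns.message
import dns.query

def dnssec_validated(name: str, resolver: str = "8.8.8.8") -> bool:
    # want_dnssec=True sets the EDNS DO bit, signaling that we understand
    # DNSSEC records and want validation information back.
    query = dns.message.make_query(name, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver, timeout=5)
    # A validating resolver sets the AD flag only if the signature chain
    # from the root down to this zone checked out.
    return bool(response.flags & dns.flags.AD)

if __name__ == "__main__":
    print(dnssec_validated("nasa.gov"))  # assumed test name; any signed zone works
```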
The scariest part of the report is the call for mandatory strong authentication for “critical cyber infrastructures (ICT [information and communications technology], energy, finance, government services)” (p. 61). I’m not sure I know what that means. It is perhaps reasonable to demand that employees in these sectors use strong authentication; indeed, many of us have advocated abolishing passwords for many years. But does this mean that I have to use “strong government-issued credentials” to sign up with an ISP or with an email provider? Must I use these to do online banking? The report does call for FTC regulations barring businesses from requiring such for “all online activities by requiring businesses to adopt a risk-based approach to credentialing”. But what does that do? What if a business decides that risks are high? Is it then allowed to require strong authentication?
For that matter, the report is quite unclear on just what the goals are for strong authentication. It notes that “intrusion into DOD networks fell by more than 50 percent when it implemented Common Access Card” [sic] (p. 62). But why was that? Because people couldn’t share passwords? Because there were no longer guessable passwords? Because keystroke loggers have nothing to capture? Because there is accountability, rather than deniability, for certain actions? There is no guidance here. The benefits of “in-person proofing” are lauded because it “greatly reduces the possibility that a criminal can masquerade as someone else simply by knowing some private details”. Quite true—but is the goal accountability or prevention of electronic identity theft? (It’s also worth noting that there are a variety of attacks—some already seen in the wild against online banking—that completely evade the nominal protections of strong authentication schemes. I don’t have space to discuss them here, but at a minimum you need secure operating systems (of which we have none), proper cryptography (possible but hard), automatic bilateral authentication with continuity-checking (relatively easy, but done all too rarely), and a well-trained user population (probably impossible) to deflect such attacks. That is not to say there are no benefits to strong authentication, but one should be cautious about looking for a panacea or rushing too quickly to cast blame on apparently-guilty parties without a lot more investigation.)
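Of those defenses, continuity-checking is the easiest to illustrate: remember the peer’s credential from earlier contacts and raise an alarm when it changes. Here is a minimal trust-on-first-use sketch, with a hypothetical pin-store path and none of the care a real deployment would need (legitimate key rotation, expiry, revocation):

```python
# A minimal sketch of continuity-checking: pin the server certificate
# seen on first contact, and flag any later change. The pin-store path
# is hypothetical; real deployments need much more care.
import hashlib
import json
import ssl
from pathlib import Path

STORE = Path("pins.json")  # hypothetical local pin store

def check_continuity(host: str, port: int = 443) -> bool:
    pem = ssl.get_server_certificate((host, port))
    fingerprint = hashlib.sha256(pem.encode()).hexdigest()
    pins = json.loads(STORE.read_text()) if STORE.exists() else {}
    if host not in pins:
        pins[host] = fingerprint           # first contact: pin it
        STORE.write_text(json.dumps(pins))
        return True
    return pins[host] == fingerprint       # later contacts: must match
```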
There are cryptographic technologies that permit multiple unlinkable credentials to be derived from a single master credential. This would allow for strong authentication and protect privacy (the latter an explicit goal of the report), but would perhaps do little for accountability. Should such technologies be adopted? Without more rationale, it’s impossible to say what the committee thinks. That said, the general thrust seems to be that centralized, strong credentials are what is needed. That directly contradicts a National Academies study (disclaimer: I was on that committee) that called for multiple, separate, unlinkable credentials, since they are better both for security and privacy.
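For intuition about the unlinkability part (this is a deliberate simplification; the real schemes use blind signatures or zero-knowledge proofs so that a third party can certify each credential without seeing the linkage): a pseudorandom function keyed by a master secret yields per-service identifiers that look independent to anyone without the key. A toy sketch:

```python
# A toy sketch of deriving per-service pseudonyms from one master secret.
# The derived values are unlinkable without the master key; this shows
# only derivation, not the certification that real credential schemes add.
import hashlib
import hmac
import secrets

master = secrets.token_bytes(32)  # the single master credential (kept private)

def pseudonym(service: str) -> str:
    # HMAC acts as a pseudorandom function: outputs for different
    # services look independent to anyone lacking the master key.
    return hmac.new(master, service.encode(), hashlib.sha256).hexdigest()

print(pseudonym("bank.example"))   # differs from, and is unlinkable to...
print(pseudonym("email.example"))  # ...this one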
This report calls for protecting privacy. It offers no guidance on how to do that; it instead advocates policies that will compromise privacy. And instead of describing as a legitimate concern “the spread of European-style data privacy rules that restrict commercial uses of data pertaining to individuals” (p. 68), it should have endorsed such rules. There are only two ways to protect privacy in the large: technical and legal. If the technical incentives are going to push one way, i.e., towards a single authenticator and identity, the legal requirements must push the other. It is not enough to say that “government must be careful not to inhibit or preclude anonymous transactions in cases where privacy is paramount” (p. 64), when technologies such as third-party cookies can be used to track people. This can include the government; indeed, www.change.gov itself uses YouTube, a subsidiary of Google, one of the biggest purveyors of such cookies. Perhaps a medical information site would not require strong authentication, but absent regulation the fact of an individual’s visit there is almost certainly ascertainable.
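To see why, consider the tracker’s view: every page that embeds its content sends back the same cookie, along with the address of the embedding page. A toy model of the resulting dossier (purely illustrative, not any particular company’s system):

```python
# A toy model of third-party cookie tracking: one cookie ID accumulates
# a browsing history across unrelated sites that embed the tracker.
from collections import defaultdict

profiles = defaultdict(list)

def tracker_request(cookie_id: str, embedding_page: str) -> None:
    # The browser attaches the tracker's cookie automatically on every
    # embedding site; the Referer header reveals which page was visited.
    profiles[cookie_id].append(embedding_page)

tracker_request("abc123", "https://news.example/article")
tracker_request("abc123", "https://medical-info.example/condition")
print(profiles["abc123"])  # one ID, visits across unrelated sites
```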
It is worth stressing that government violations of privacy are not the only issue. The government, at least, is accountable; the private sector is not, and the dossiers compiled by marketers are at least as offensive. Sometimes, in fact, government agencies buy data from the private sector, an activity that has been described as “an end run around the Privacy Act”.
Will social networking sites require this sort of ID, in the wake of the Lori Drew case and the push to protect children online? If so, what will that do to social privacy? What will it do to, say, the rate of stalking and in-person harassment?
Make no mistake about it, this “voluntary” authentication credential is a digitally-enabled national identity card. Perhaps such a card is a good idea, perhaps not; that said, there are many questions that need to be asked and answered before we adopt one.
There’s another scary idea in the report: it suggests that the U.S. might need rules for “remote online execution of a data warrant” (p. 68). As I noted the other day, that is a thoroughly bad idea that can only hurt cybersecurity. More precisely, having rules for such a thing is a good idea (if for no other reason than because insecure computers will be with us for many years to come), but wanting an ongoing capability to actually use such things in practice is very, very dangerous.
This brings up the report’s biggest omission: there is no mention whatsoever of the buggy software problem. Quite simply, most security problems are due to buggy code. The hundreds of millions of “botted” computers around the world are not infected because the attacker stole a password for them; rather, there was some sort of flaw in their mailers, browsers, web servers, social networking software, operating systems, or what have you. Ignoring this when talking about cybersecurity is ignoring the 800—nay, 8000—pound gorilla in the room.
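To be concrete about what “buggy code” means here, a toy example (mine, purely illustrative): a program that passes untrusted input to a shell can be made to run arbitrary commands, and no stolen password is needed.

```python
# A toy illustration of a classic flaw: shell injection. An attacker who
# supplies a "filename" like "x; rm -rf ~" gets arbitrary command
# execution. Do not run the unsafe version with untrusted input.
import subprocess

def fetch_unsafe(filename: str) -> None:
    subprocess.call("cat " + filename, shell=True)   # vulnerable: input reaches the shell

def fetch_safe(filename: str) -> None:
    subprocess.call(["cat", filename])               # argument list, no shell involved
```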
The buggy software issue is also the problem with the discussion of acquisitions and regulation (p. 55). There are certainly some things that regulations can mandate, such as default secure configurations. Given how long the technical security community has called for such things, it is shameful that vendors still haven’t listened. But what else should be done to ensure that “providers of IT products and systems are accountable and ... certify that they have adhered to security and configuration guidelines”? Will we end up with more meaningless checklists demanding anti-virus software on machines that shouldn’t need it?
Of course, I can’t propose better wording. Quite simply, we don’t know what makes a system secure unless it’s been designed for security from the start. It is quite clear to me that today’s systems are not secure and cannot be made secure. The report should have acknowledged this, and added it to the call for more research (p. 74).
There’s another dynamic that any new government network security organization needs to address: the tendency within government itself to procure insecure systems. The usual priorities are basic functionality, cost, and speed of deployment; security isn’t on the radar. Unless Federal programs—and Federal program managers—are evaluated on the inherent security of their projects (and of course by that I do not mean the usual checklists), the effort will not succeed. The report should have acknowledged this explicitly: more security, from the ground up, will almost certainly cost more time and money. It will require more custom-built products; fewer COTS products will pass muster. To be sure, I think it will save money in the long run, but when budgets are tight will it be security that gets short shrift?
On balance, I think the report is an excellent first step. That said, some of the specific points are at best questionable and probably wrong. We need public debate—a lot of it.