In the last few weeks we’ve seen two very different approaches to the full disclosure of security flaws in large-scale computer systems.
Problems in the domain name system have been kept quiet long enough for vendors to find and fix their software, while details of how to hack Transport for London’s Oyster card will soon be available to anyone with a laptop computer and a desire to break the law.
These two cases highlight a major problem facing the computing industry, one that goes back many years and is still far from being resolved. Given that there are inevitably bugs, flaws and unexpected interactions in complex systems, how much information about them should be made public by researchers when the details could be helpful to criminals or malicious hackers?
When Dan Kaminsky discovered a major security flaw in DNS he kept it quiet. DNS is the service that translates domain names like ‘www.bbc.co.uk’ into internet protocol addresses like 212.58.253.67 that can be used by computers, and the flaw he found affected almost every internet-connected computer because it could be used to fool our computers into believing IP addresses provided by malicious DNS servers.
As a result someone trying to visit the BBC website, their bank or a webmail account could be sent to a fake site without knowing it.
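The consequence described above can be sketched in a few lines: an application simply trusts whatever address its resolver's cache holds, so a poisoned entry silently redirects it. This is a toy illustration only (the attacker address is a made-up example from a reserved documentation range; no real resolver works this simply):

```python
# Toy model of a resolver cache: applications look up a name and
# trust whatever address comes back.
cache = {"www.bbc.co.uk": "212.58.253.67"}

def resolve(name):
    """Return the cached address for a name, exactly as supplied."""
    return cache[name]

# A successful cache-poisoning attack overwrites the entry...
cache["www.bbc.co.uk"] = "203.0.113.66"  # attacker-controlled (example address)

# ...and the application connects to the attacker's server while
# believing it has reached the BBC.
print(resolve("www.bbc.co.uk"))  # 203.0.113.66
```

The point of the sketch is that the application has no independent way to check the answer; whatever the cache says is what it uses.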
Instead of publicising what he had found Kaminsky told vendors like Microsoft and Sun and for the past few months they have been working on a co-ordinated solution that involves updates to much of the core software that makes the internet work. The idea was that the problem would have been resolved before Kaminsky published details at the upcoming Black Hat security conference.
Unfortunately the plan has gone awry in the last few days after another researcher, Halvar Flake, kicked off a discussion about the flaw that prompted Matasano Security to post full details on their own blog. That post has been taken down, but is of course around in the Google cache and the details have circulated widely. [NB this is a correction of the original, which said that “Halvar Flake, apparently pinpointed the details in a blog post of his own.”]
As a result Kaminsky and others are advising any systems administrator who has not yet applied the update to their servers to patch them “Today. Now. Yes, stay late.” It’s sound advice (and if you’re reading this but have unpatched DNS servers then stop now and go and fix your systems).
Kaminsky’s caution would seem to contrast starkly with the decision by Professor Bart Jacobs to publish details of the security vulnerabilities his research team has found in one of the world’s most popular contactless smartcards, the MIFARE Classic, which is used in London’s Oyster card, because they remain unfixed.
After his team from Holland’s Radboud University announced that they planned to publish details of how to copy cards and change their contents at will the manufacturer, NXP Semiconductors, went to court and were granted a preliminary injunction forbidding publication.
Now a full hearing has overturned the injunction, so the papers will be released as planned, and we will soon know how to add extra money to the balance on our Oyster cards because of the poor security of the system.
However this is not a case of a maverick academic simply publishing without considering the economic or social impact. Jacobs told NXP about his findings in 2007, and even informed the Dutch government so that they could take steps to secure government buildings that used smartcards to control access, while the papers concerned won’t be published until October this year.
But instead of using the time to fix the problems NXP has tried to stop publication, arguing that necessary changes will take ‘up to a number of years’, and ignoring the fact that the necessary skills are probably already in the hands of criminal groups.
The DNS vendors did not head off to court to try to stop Kaminsky speaking at Black Hat, perhaps because DNS is not owned by anyone while NXP Semiconductors own MIFARE and make a lot of money out of it.
DNS is a community good, and we all benefit from its safe and reliable operation, while smartcards generally serve the interests of private companies or those wanting to manage our lives in various ways.
And because NXP was trying to protect its commercial interests rather than those of the wider community, it failed to get the injunction it wanted. The judge even noted that ‘Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings’, a remarkably sensible thing for a judge to say in a case about computer security.
So who is right? Dan Kaminsky for keeping things quiet, or Bart Jacobs for pushing ahead with publication? I think both are.
We can have general principles and decide to override them if circumstances allow, and indeed we do this in many areas of our daily lives so should not expect the politics of technology to be different. Full disclosure is, in most situations and for most problems, the best way to ensure that those at risk can protect themselves and those responsible for flawed software have an incentive to fix it.
But sometimes, as with Dan Kaminsky’s discovery about DNS, a more cautious approach is called for. Kaminsky is not planning to keep his findings secret, but the public interest is best served by allowing those who provide DNS servers the time they need to ensure a smooth transition to updated versions instead of causing a panic.
NXP went to court to protect themselves from the painful reality that their chip is flawed, instead of doing all they could to resolve the problem, and as a result many of their users find themselves having to review their security procedures.
The similarities to arguments about free expression are not mere coincidence, of course. Shouting ‘bug’ on a crowded internet is just as dangerous as shouting ‘fire’ in a crowded theatre, even in societies where free speech is valued and protected by law, and we should not assume that full disclosure is always the right way forward.
You have a vast misunderstanding of the facts.
1) DNS impacts the entire Internet population.
2) The attack is trivial.
Those two things set this apart from your comparison by ORDERS of magnitude. Also, Dan included a wide circle of operators who could get changes out to a large number of organizations in a swift manner, and more importantly, ensure vendors had a solution for when it was disclosed.
Non-disclosure was never being considered. You clearly fail to grasp that.
Ouch. A vast misunderstanding sounds pretty bad, but I’d argue that a smartcard attack that affects tens of millions of users, could compromise access to many secure sites and prompted the Dutch government to rethink their plans for their transport system is pretty significant too. I realise that those directly involved in fixing the DNS mess, as OpenDNS is, see it as really important, but transport systems in the real world matter too.
I’m not sure what your point about non-disclosure is - I note that Dan Kaminsky was planning to give full details after the problem had been sorted out, and contrast that with a decision to publish an exploitable hack despite the vendor’s opposition, and go on to discuss the issue in broader terms.
Having OpenDNS to act as a safe server now that the details are apparently in the wild also means that the consequences should be minimised for those who haven’t been able to patch their systems, which is great: it’s a pity there isn’t that level of support for MIFARE users.
Don’t overlook the fact that the ethics of shouting “fire” in a crowded theatre vary greatly depending on whether and to what degree the theatre is actually on fire. The “shouting fire” analogy, cliché though it is, gives rise to another angle on this issue, however.
Most of our societies have developed to the stage where buildings are constructed with fires in mind. Buildings have measures for fire prevention, detection, containment, and eradication, as well as alarms and designated evacuation paths to aid those at risk. Because of this “design with fire in mind” approach, it’s actually not all that dangerous to shout “fire” (or sound the fire alarm) in a crowded theatre these days. In fact, I recommend without reservation that you shout “fire” in a crowded theatre if there is a fire, because that is designed to be the path of least damage.
I concede that immediate, full, public disclosure may not be the path of least damage as regards bugs in our technological infrastructure. That is largely because we don’t design with the failure of those systems in mind. We have enough difficulty getting them to work at all without planning for failure, and we are primarily interested in the kinds of bells and whistles which aren’t alarms. Consequently, we find ourselves in a bit of a pickle when anyone does actually notice a major problem. On the one hand, disclosure notifies the bad guys that there is a weakness to be exploited—and giving them a rough idea where to look is basically as bad as giving them the details. On the other hand, the bad guys don’t tell us when they discover exploits, so it’s not safe to assume they are unaware of the weakness, and we may not know how to detect exploitation without knowing the details.
The poor security researcher is always the bearer of bad tidings—bad tidings that we aren’t equipped to handle well. It’s hard to determine the path of least damage when we haven’t designed one.
I agree entirely, Brett, and I think your point that it’s a good idea to shout ‘fire’ if there really is a fire is one that it’s easy to overlook. We need systems that are designed to cope when problems are uncovered, that offer users a safe route out of danger and that minimise the likelihood of panic, but until we get them then the dilemma for security researchers is a very real one.
Bill’s piece dumbs down the issue just a bit too far. When presenting tech to the general public (as I’ve heard Bill do for many, many years), there is a constant danger of over-using metaphors and avoiding explaining the special features of a particular issue.
In this case, even within the realm of network protocols, I would say that this DNS vulnerability is sui generis. Given the particular facts of the case, my personal opinion is that Dan has carefully plotted out the most ethical and noble path possible. He has clearly analyzed the responses of both the good guys and the bad guys to the revelations. He even anticipated the reactions of those out there who know enough to figure out the bug and show off their reverse engineering skills. He even gave an estimate (I don’t remember where) that a fair number of people would reverse engineer the thing in about 3 weeks, i.e. before his talk on the issue. Given the uncertainties of anticipating the behaviour of a whole planet, I think he’s done pretty well.
In my own case, I was able to patch all of my old systems from BIND source, which took 7 days because of the strange configuration choices of my old linux distributions. A further delay occurred due to the need to reconfigure routers. I mention these details because the 13 day tip-off after patched BIND source was released gave me just the safety margin I needed to fix things. I imagine that many other net admins had similar multi-day delays in getting their systems patched. In the 13 days, I was also able to tip off net admins for other sites as to the real seriousness of the issue.
All in all, anyone who trusted Dan Kaminsky has benefited from the whole process of (1) secret development of solutions, (2) public announcement of the urgent need to patch, and (3) admonition not to reveal the vulnerability for a month.
In my opinion, Dan’s course of action should go down in the technology ethics textbooks as a prime example of the right way to do it.
Anyone who resorts to bus and cinema metaphors is just missing the point that modern technology is different. People who communicate tech through simplistic metaphors do more harm than good. It would be better to explain to the public what is different about tech instead of reaching for 19th century metaphors to avoid the hard task of explaining tech to the public.
I hope this response will not be regarded as inflammatory or ad hominem. I do appreciate the difficulties that anyone experiences in explaining tech to the public. Bill just happens to do this as a career. He is one among many thousands who grapple with this difficult task.
Alan’s right that metaphors and simplified analogies create a danger that those you want to understand an issue think they have a much better grasp of what’s going on than they really do. I’m not going to get defensive or claim that he doesn’t appreciate the brilliance of my insights: in this job it’s often about doing the least worst job of explaining something to a general audience, and any journalist needs to listen carefully to those who take the time to criticise and review what they say.
Remember though that this article was written for the BBC Technology site, with a very general audience and a senior editorial team who wouldn’t know a poisoned cache if it bit them on the leg, so it was inevitably going to appear too simple. Remember too that the point (apart from reminding anyone with a server to patch it) was to talk about disclosure policies rather than Dan’s findings in detail.
Even so, there is a real need for more detailed but still accessible explanation, and re-reading the piece I could probably have spent less time pontificating and more time explaining - though I worry I’d end up losing a lot of non-tech readers.
I’ll think about this next time and try to get a better balance. Good to know that there are people out there who’ll offer feedback, too.
I don’t want to wind this out into a long thread, but perhaps I could venture a few points which I do think are particular to the recent DNS protocol vulnerability. These “morals” of the story can be communicated to a general audience, but they are quite particular to this vulnerability.
1. It is difficult for a researcher to hold on to a discovery and keep it quiet. It is every researcher’s instinct to publish, and publish quickly. Nowadays some people even create crypto signatures of discoveries and publish those instead, just to establish priority. At least one person actually did that on a blog in the case of this vulnerability. Dan Kaminsky held on to a big discovery from about March to July. That’s a huge long time in the Internet era. Looking back, it’s difficult to realize that the secrecy of the vulnerability over that time was not at all assured in advance.
2. Many people in many big organizations kept their mouths shut for months, despite the availability of ample means to publish anonymously in this Internet era. Those people could easily have let the cat out of the bag. Those people were not under military-style secrecy contracts which would land them in several years of prison for disclosing even the number of paper clips in their office. This is the modern era where everyone has one or more blogs to feed.
3. Dan stood the dual risk of saying too much and causing panic, or saying too little and not being taken seriously enough. As things turned out, he did say enough for people who understand protocols and the industry to take him seriously. It took a certain amount of courage to cry wolf when no one would see the size of the wolf for a few weeks.
4. From my personal perspective, I’m rather amazed that despite the pressure on Dan, he answered a couple of e-mails from a complete stranger (namely myself) within about an hour. He didn’t have to answer e-mails from a random stranger in Australia. But that is another odd thing about the Internet era. One day you hear about a big network infrastructure vulnerability on a national radio station, and a little later, you’re discussing the matter with the principal figure in the matter.
5. After the July 8 announcement, many protocol researchers kept quiet about what they knew. I know from my own work in protocols research that once you know the basic characteristics of an issue, it doesn’t take too long to find it. We all read the same specs, after all. Dan gave out ample hints for professional protocols researchers to find the vulnerability in a jiffy. And yet, the great majority of those kept quiet. Once again, in the Internet era, that’s remarkable. These days you can hardly walk out the door without your picture appearing on some web site. You discuss something over dinner and next day it’s on a blog, and the day after that it’s in the Google cache.
6. When you “do the right thing” about a vulnerability, you don’t become very famous because the problem is solved before the world can descend into chaos. If you quietly avert a disaster, most people won’t believe that you did anything wonderful. People get more credit for a cure than they do for prevention. If Dan had announced the vulnerability in the way some people think he should have, there would have been chaos, and he would have been more famous, and there would perhaps have been more gratitude for the cure. And yet he chose prevention rather than the infect-then-cure path.
7. During the last 2 weeks, I have often thought about the folly of making the whole world rely on a network infrastructure which was developed for very much more humble applications. It made me think of what the world would be like if the DNS fell in a heap. Maybe there is another super-bug out there, maybe even bigger. Maybe the world is a little foolish to depend so much on the Internet.
Anyways, that’s my take on some of the “morals” to come out of this story. And I didn’t even have to mention randomization of UDP source ports!
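For readers curious about the port randomisation Alan alludes to: the fix, as widely documented at the time, works by enlarging the space of values an off-path attacker must guess before a forged reply is accepted. A rough back-of-the-envelope sketch in Python (the figures are the standard ones for a 16-bit DNS transaction ID plus roughly 16 bits of usable source-port entropy; they are not taken from this article, and the exact usable port range varies by operating system):

```python
# Why randomising UDP source ports helps: a forged DNS reply is only
# accepted if it matches the query's 16-bit transaction ID. With a
# fixed source port, that ID is all an off-path attacker must guess.
txid_space = 2 ** 16  # 65,536 possible transaction IDs

# Randomising the source port means a forged packet must also hit the
# right port, adding roughly another 16 bits to the guessing space.
port_space = 2 ** 16

print(txid_space)               # guesses needed before the fix: 65536
print(txid_space * port_space)  # guesses needed after: 4294967296
```

The attack doesn’t become impossible, just vastly more expensive: the attacker’s expected number of forged packets grows by a factor of tens of thousands.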
Bill,
I think that you are missing the critical aspects of both attacks. The widespread implementation of a smartcard based attack is more complex than implementing a software attack that uses the internet as its delivery path. The software attack is almost a zero cost attack. The smartcard attack requires hardware. It also requires technological knowledge in addition to software knowledge. The “weaponisation” of the smartcard attack is quite different to that of a purely software based attack. Unlike a software based attack (to a large extent), it requires someone to go out and physically use the device and risk being caught. With smartcard attacks, there is always the possibility that others can do it and will do it. To be effective, a smartcard based attack where the device has to be used in public has to be almost physically the same as the legitimate card.