|
The NANOG list yesterday was the virtual equivalent of a nearby nocturnal car alarm: “panix.com has been hijacked!” (whoo-WEE, whoo-WEE); “those jerks at VeriSign!” (duhhhhh-WHEEP, duhhhh-WHEEP); “no one’s home at Melbourne IT!” (HANK, HANK, HANK, HANK).
Finally, on Monday morning in Australia, the always-competent and helpful Bruce Tonkin calmly fixed the situation. So the rest of us can get some sleep now.
But as we nod off in the quietness, let’s consider just exactly what happened here. A savvy NY ISP had its domain name hijacked and moved from Dotster to Melbourne. Somehow this happened without either Dotster or Melbourne getting any official notifications. As a result, email directed to panix.com domains was redirected to Canada. VeriSign said that the registrars in question would have to get involved in order for the situation to be reversed. Melbourne IT was closed and seemed to have no emergency contact information.
This was a very bad day for panix.com. And, I think, a bad day for Melbourne IT (but thank goodness for Bruce).
Panix.com should have had its names locked down so that changing their nameserver information required a login to the registrar. (Panix said they did do this, but somehow this status was changed by someone—maybe the hijacker.) Melbourne IT should have been reachable.
Some on the NANOG list have suggested that ISPs should cooperate to point to the right information without waiting for registrars/registries to tell them what to do. This is, of course, the secret to the DNS: ICANN isn’t in charge; the ISPs are. If they decided not to point to whatever ICANN denominates as “authoritative” information, no one could say Boo to them. It’s up to the ISPs what to do.
On the other hand, I think the real lesson here is that customer service (24/7, someone answering the phone and dealing with nocturnal virtual car alarms) is everything. It is, in fact, law.
This may seem like an overstatement to you. But look at it this way: how often do you see a cop on the streets online, walking by and noticing that something’s amiss? You don’t. You’re online anyway, because the law of customer service is taking care of you. Most things get resolved without the involvement of any central authority, whether it’s ICANN, the FTC, the FCC, or a federal judge.
I hope that panix users are feeling confident in panix today—they should, because panix did everything humanly possible to fix this situation. And peaceful acclamation to all customer service people out there. You’re the law and we believe in you.
I am surprised that we don’t see more of these incidents, frankly.
The new ICANN transfer policy opened the door to this, but thankfully contains some forms of recovery in the event it is misused (as we see is possible).
I applaud Bruce Tonkin of Melbourne IT on his work to rectify the situation.
ICANN has a comment review period, which opened on 1/12/2005 [link below]; it is the appropriate forum for comments on the Panix experience.
It is unfortunate that Verisign gets mud slung on it for this, as it was not really party to the mix-up, other than to accommodate ICANN’s policy and the requests of registrars.
There are two key culprits here:
1] The customer of the gaining registrar who made the request.
2] The newly enacted ICANN registrar transfer policy, which makes such hijacks technically possible.
Panix could have mitigated this proactively by flagging the domain’s status with their registrar so that it would be ineligible for transfer. Still, I feel the consequences of the ICANN policy change on registrar transfers needed better communication to the “At-Large” public.
ICANN has a comment section for comments on their new transfer policy:
Public Comments on Experiences with Inter-Registrar Transfer Policy
If you manage domains personally or for your company, and you have not already, it would be prudent to lock any domains under your management so that they are not susceptible to similar rogue transfer requests.
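For anyone who wants to check this from a script rather than a registrar control panel, here is a minimal sketch, assuming the Verisign .com/.net WHOIS server at whois.verisign-grs.com and the common lock status strings (REGISTRAR-LOCK, clientTransferProhibited). It is a heuristic illustration, not an official tool, and the exact status names your registrar reports may differ.

```python
# Sketch: query WHOIS over port 43 (RFC 3912) and look for a transfer lock.
# The server name and status markers below are assumptions; adjust for your TLD/registrar.
import socket

WHOIS_SERVER = "whois.verisign-grs.com"   # registry WHOIS for .com/.net (assumed)
LOCK_MARKERS = ("REGISTRAR-LOCK", "clientTransferProhibited")

def whois_lookup(domain: str) -> str:
    """Return the raw WHOIS response for a domain."""
    with socket.create_connection((WHOIS_SERVER, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1", errors="replace")

def is_transfer_locked(domain: str) -> bool:
    """Heuristic: True if any known lock status string appears in the record."""
    record = whois_lookup(domain).lower()
    return any(marker.lower() in record for marker in LOCK_MARKERS)

if __name__ == "__main__":
    name = "panix.com"
    print(name, "appears locked" if is_transfer_locked(name) else "shows NO transfer lock")
```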
It’s good news that Bruce Tonkin of MelbourneIT fixed this. But we also need some answers to important questions about what went wrong, some of which need to come from MelbourneIT. The first is why (if VeriSign says the registrars must be involved) there was not someone reachable at MelbourneIT 24x7 who could deal with it (or get someone who could). The second is what actually went wrong at MelbourneIT that put it in the position where it took their intervention to fix.
Supposedly no notices were given and the process wasn’t followed as it should have been. If this is true, then why not? Who did the wrong things? And why is it that the system allows it to happen? We need to know these answers so the system can be fixed (even if the fix is just a people issue ... though I suspect it is broader than that).
What if ... the registrar receiving the domain was one of the several that don’t really have any competent operation taking place? What if they could not be reached at all, or just would not respond? What if they were working with the thief, or actually are the thief? What does it take ... and how long does it take ... for a central authority (e.g. VeriSign and ICANN) to get involved and correct the problems? IMHO, this is something that should be happening within 24 hours.
Still, I do want to have a smooth transfer system working, too. I had to transfer a couple of my domains away from one registrar I had tried for a while because they made a programming change on their web site that simply didn’t work in every browser I tried. Of course, I had to confirm the transfer with a code that was emailed to me, which was not arriving. So two strikes against them. When I called, the people who answered could not help. When I emailed, I got template replies that referred me to the web site that didn’t work (even for the message that merely said the web site was broken). I eventually traced the problem, to some extent, to an error in their own DNS server. But by then it was four strikes against them, and I was not going to stay even if they fixed it. It ended up taking about 45 days to get the move completed, and one of the domains was down during the process because their web server would not process any name server changes properly.
A balance is going to be hard to strike. On the one hand, registrars could be taking domains improperly. On the other, they could be improperly holding on to the ones they have. Ultimately, I believe the decision needs to be taken out of the hands of the individual registrars, at least at some point after the normal process has been tried. But how to make that secure is the big question.
Just to set the record straight. It was Melbourne IT customer service which resolved the problem. I only became involved after the problem was resolved to understand how the problem occurred in the first place, and also look at how to improve our services.
The answer to these problems definitely lies with the registrars, since that is where all the interaction actually occurs. So it is up to the registrars to act responsibly and choose to work with honest customers.
One way of doing this is for the winning registrar to require confirmation from the admin contact before releasing the request for the transfer to the losing registrar. Once this occurs, there is validation that the customer requesting the transfer is the owner of the domain. This policy ensures that whoever is in control of the domain is making the transfer request, and that the transfer should go through.
What is essential here is that the winning registrar be held accountable for doing business with reputable customers. The only experience I have with this system is with GoDaddy.com, and they do validate the customer’s identity by sending an auth email to the admin contact. If this email is not answered, the request is not sent to the losing registrar.
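To make that confirm-before-release flow concrete, here is a tiny sketch in Python using purely hypothetical helper names (no real registrar API is implied): the gaining registrar emails the admin contact a one-time token and only forwards the transfer to the losing registrar once that token comes back.

```python
# Illustrative sketch only -- hypothetical helpers, not any registrar's actual system.
import secrets

pending = {}  # token -> domain awaiting admin-contact confirmation

def send_email(to, subject, body):
    # Stand-in for a real mail gateway.
    print(f"[mail to {to}] {subject}: {body}")

def release_to_losing_registrar(domain):
    # Stand-in for the actual inter-registrar transfer request.
    print(f"[transfer released] {domain}")

def request_transfer(domain, admin_email):
    """Gaining registrar: record the request and ask the admin contact to confirm."""
    token = secrets.token_urlsafe(16)
    pending[token] = domain
    send_email(admin_email, f"Confirm transfer of {domain}",
               f"Use token {token} to approve; do nothing to let the request lapse.")
    return token

def confirm_transfer(token):
    """Only a valid, unused token releases the request to the losing registrar."""
    domain = pending.pop(token, None)
    if domain is None:
        return False
    release_to_losing_registrar(domain)
    return True

if __name__ == "__main__":
    t = request_transfer("example.com", "admin@example.com")
    assert confirm_transfer(t)       # legitimate owner approved
    assert not confirm_transfer(t)   # a replayed or forged token does nothing
```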
Of course, if someone gets into your account at the losing registrar and changes all of this contact info, all the security checks in the world (under the new or old ICANN policy on transfers) are moot.
Thanks for setting the facts straight, Bruce. I sure hope you can let people know about your perspective on what happened, and how this might be avoided in the future. Regardless of any errors at your end, it sure seems to me that the system allows any bad actor (that happens to be a registrar) to take advantage of, or game, the system, or whatever it is that went wrong. It seems to me that something in the system needs updating.
I think everyone is rather quick to look at Melbourne IT and seems to be ignoring Dotster. Although it is the gaining registrar’s responsibility to authenticate the transfer, one should question the validity of Dotster’s claim that they knew nothing, which I find hard to believe. That the registry had a single-transaction failure that happened only to Dotster is almost inconceivable; I have never heard of it occurring, ever. Almost everybody commenting on this list has pointed the finger at VeriSign and Melbourne IT and assumed that their systems are at fault, when in reality it may actually have been Dotster who additionally neglected to send a transfer communication to Panix.
We know that Panix never received any communication from Dotster, which could indicate that Dotster’s systems were compromised; otherwise Panix would have received an auth email, which they would have rejected. Alternatively, Dotster’s transfer mechanism failed. Additionally, what did Dotster actually do to assist their customer through this scenario? A call to Melbourne IT when the problem was recognized might have resolved the issue for Panix earlier.
Finally, you have all forgotten one thing: it is called “due process”. Melbourne IT had to investigate the matter and determine the validity of Panix’s request. They had an obligation to do so, and therefore a 24-hour turnaround time is actually quite acceptable; I doubt any other registrar would dispute that. The only point I concede is that there should be better inter-registrar communications and contact points so that, in the event something like this happens again, the registrars (the people who know best) actually sort the issue through. This should never have fallen on Panix’s shoulders to chase; Dotster should have done this for them.