Most new domain names are malicious.
I am stunned by the simplicity and truth of that observation. Every day lots of new names are added to the global DNS, and most of them belong to scammers, spammers, e-criminals, and speculators. The DNS industry has a lot of highly capable and competitive registrars and registries who have made it possible to reserve or create a new name in just seconds, and to create millions of them per day. Domains are cheap, domains are plentiful, and as a result most of them are dreck or worse.
Society’s bottom feeders have always found ways to use public infrastructure to their own advantage, and the Internet has done what it always does which is to accelerate such misuse and enable it to scale in ways no one could have imagined just a few years ago. Just as organized crime has always required access to the world’s money supply and banking system, so it is that organized e-crime now requires access to the Internet’s resource allocation systems. They are using our own tools against us, while we’re all competing to see which one of us can make our tools most useful.
My thinking when I created the first RBL (now called a DNSBL; mine was the MAPS RBL though and so that’s how I still think of it) back in the mid/late 1990’s, was that universal access between e-mail servers was a greater boon to the bad guys than to the good guys, and so I worked to create a way that cooperating good guys could make their mailers less accessible. While I didn’t reach my objective of stopping spam, I did help establish the “my network, my rules” theory of limited cooperation for Internet resources. Simply put, it’s up to every network owner to decide who they will or won’t cooperate with, and the way to get your traffic accepted by others is to be polite and to spend some effort trying to avoid annoying folks or letting your customers annoy folks.
Here, in 2010, I’ve finally concluded that we have to do the same in DNS. I am just not comfortable having my own resources used against me simply because I have no way to differentiate my service levels based on my estimate of the reputation of a domain or a domain registrant. So, we at ISC have devised a technology called Response Policy Zones (DNS RPZ) that allows cooperating good guys to provide and consume reputation information about domain names. The subscribing agent in this case is a recursive DNS server, whereas in the original RBL it was an e-mail (SMTP) server. But, the basic idea is otherwise the same. If your recursive DNS server has a policy rule which forbids certain domain names from being resolvable, then they will not resolve. And, it’s possible to either create and maintain these rules locally, or, import them from a reputation provider.
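To make that concrete, here is a minimal sketch of how a recursive BIND server might subscribe to a policy zone. The zone name (rpz.example.org) and the provider address (203.0.113.1) are placeholders, and the option syntax shown is the one used in released versions of BIND 9; the 2010 patches may differ in detail.

options {
    recursion yes;
    // apply the policy data in this zone to answers we give our clients
    response-policy { zone "rpz.example.org"; };
};

// receive the provider's policy zone by ordinary zone transfer
zone "rpz.example.org" {
    type slave;
    masters { 203.0.113.1; };   // reputation provider's server (placeholder)
    file "rpz.example.org.db";
    allow-transfer { none; };   // don't republish the feed
};

Locally maintained rules work the same way; the policy zone can just as easily be a master zone edited by the operator.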
ISC is not in the business of identifying good domains or bad domains. We will not be publishing any reputation data. But, we do publish technical information about protocols and formats, and we do publish source code. So our role in DNS RPZ will be to define “the spec” whereby cooperating producers and consumers can exchange reputation data, and to publish a version of BIND that can subscribe to such reputation data feeds. This means we will create a market for DNS reputation but we will not participate directly in that market.
The first public announcement of DNS RPZ was at Black Hat on 29-July-2010 and then at Def Con on 30-July-2010.
The current draft of “the spec” is here. No backward-incompatible changes are expected, and both reputation providers and recursive DNS vendors are encouraged to consider developing products that use this format to express DNS reputations.
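As a rough illustration of what such a feed looks like, reputation data is carried as ordinary DNS records inside the policy zone: the owner name names the domain being judged and the record data encodes the action. The names below are documentation placeholders, and the encodings (CNAME to the root for NXDOMAIN, CNAME to "*." for NODATA, ordinary records for a local redirect) are the ones used in published RPZ implementations and may differ in detail from the current draft.

$TTL 300
$ORIGIN rpz.example.org.
@                     IN SOA ns.example.org. hostmaster.example.org. 1 3600 600 86400 300
                      IN NS  ns.example.org.

; return NXDOMAIN for a known-bad domain and everything beneath it
badguy.example.com    CNAME .
*.badguy.example.com  CNAME .

; return NODATA (an empty answer) for this name
tracker.example.net   CNAME *.

; redirect a phishing name to a local walled-garden address
phish.example.info    A     192.0.2.1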
The current patches for BIND9 are shown below. We expect this functionality to be part of BIND 9.7.3, which is several months off. Customers of ISC’s BIND support should contact ISC before applying these patches or any other patches to their production systems.
Comments and questions can be sent here. I’d especially like to hear from content providers who want to be listed by ISC as having reputation content available in this format, and also recursive DNS vendors whose platforms can subscribe to reputation feeds in this format. An online registry will follow.
We’re about to enter a bold new world where the good guys do not automatically grant the use of their DNS resources to bad guys. I don’t like the need for this but I’m finally pulling my head out of the sand. So, let’s party.
I’m glad to see you’ve come to the same conclusion I have—that people should have control over their DNS experience. I’ve built OpenDNS on this core belief and the fact that 1% of the world’s Internet traffic is today routed through our servers is an indication of widespread support. Pending some further analysis, OpenDNS will probably be a supporter of ingesting these feeds for our customers.
As an aside, there will never be a market for RPZ data, just as there was never a market for MAPS data. Value added companies like Postini and Barracuda emerged instead, providing what the market really wanted, which is a solution—not a technology.
<< As an aside, there will never be a market for RPZ data, just as there was never a market for MAPS data. Value added companies like Postini and Barracuda emerged instead, providing what the market really wanted, which is a solution — not a technology. >> you're sure? five years later, i see this: https://dnsrpz.info/ and it looks like there are sellers, and also buyers. see also: https://www.farsightsecurity.com/Overview/NOD/
Five years later, a very small data intelligence market has emerged. A sign of what's to come? I don't think so. A sign of early adopters seeing a good technology and buying it with fattened cyber security budgets? I think so. But if I still take a long-term view here, do I think this will continue or become a larger market? No. -David
>buying it with fattened cyber security budgets? I think so.

That is worth repeating. It is also worth noting a couple of other issues. Through the New TLD program ICANN has finalized its control of the root monopoly. Splitting the root is now exponentially harder and not likely to ever happen. The fact that we now face ICANN becoming a supranational organization like the UN will only motivate more money to censor and control the masses. I remain pessimistic; I still expect governments to eventually publish RPZ files and demand, in some political way, that they be implemented by internet service providers. The TPP may well be the start. In fact Drudge recently stated he has been told it's near the end for him, that "the votes" are now available to shut him down, making his short text "copyright infringement". To me RPZ has grown and sits in a perfect place to be used to implement global censorship such that "nobody is in control" of it, so everyone involved escapes responsibility for the act. This is far from over ......
Real world example of how to expect RPZ to be used: http://www.wired.com/2012/11/russia-surveillance/

"The new system is modeled on the one that is used to block extremist and terrorist bank accounts. The Roskomnadzor (the Agency for the Supervision of Information Technology, Communications and Mass Media) gathers not only court decisions to outlaw sites or pages, but also data submitted by three government agencies: the Interior Ministry, the Federal Antidrug Agency and the Federal Service for the Supervision of Consumer Rights and Public Welfare. The Agency is in charge of compiling and updating the Register, and also of instructing the host providers to remove the URLs. If no action by the provider follows, the internet service providers (ISPs) should block access to the site in 24 hours. The host providers must also ensure they are not in breach of current law by checking their content against the database of outlawed sites and URLs published in a special password-protected online version of the Register open only to webhosters and ISPs."

True, in the above mentioned article it is suggested that Deep Packet Inspection is used to implement the filtering. It would be "interesting" to obtain the current domain list and then query various Russian DNS servers to see if they return the expected responses ... In other words see if RPZ is currently part of their tool set.

Additionally: "Our elections, especially the presidential election and the situation in the preceding period, revealed the potential of the blogosphere.” Smirnov stated that it was essential to develop ways of reacting adequately to the use of such technologies and confessed openly that “this has not yet happened.” The solution appears to have been found in the summer, when the State Duma approved the amendments, effectively raising the internet-filtering system to a nationwide level, thanks to DPI technologies. Maybe because government officials had, for so many years, claimed that Russia could not adopt the Chinese and Central Asian approach to internet censorship, the solution took the national media, the expert community and the opposition completely by surprise."

The above years old article comes from researching this current article: http://www.telegraph.co.uk/news/worldnews/europe/russia/11934411/Russia-tried-to-cut-off-World-Wide-Web.html
This is an interesting proposal, but I was disappointed to see “speculators” tarred with the same brush as “scammers, spammers, [and] e-criminals.” Registering a descriptive domain name, putting advertisements on a parked page, and hoping to make money on ad revenue or a domain sale isn’t a malicious use of the DNS.
Blocking Spammers and purported criminals by subverting user intent is a road to hell - not necessarily paved with good intentions. Navigation gamesmanship destroys the value of domain names, ICANN, all registries and the DNS because it destroys the reason for owning names: Traffic and hope for future traffic.
Those who would consider such concepts have not thought through the ramifications of their wishes, or what would happen if everyone did the same as they did. Placing power in the hands of hollow platforms, which simply wheedle or game their way to the top of the navigation hierarchy, removes uniformity from the global browsing experience, denigrates the utility of the Web and adds credence to those advocating the splitting of the root.
As an individual with Internet access, I have only encountered David Ulevitch owned OpenDNS pages when I try to type the domain names of my colleagues (to confirm who owns them) and have then unwillingly been redirected to OpenDNS parked pages which have been inserted in lieu of the parked page provider I was actually looking for. It’s a “classic” damming of the river upstream, where the hotel or ISP I am viewing a website through has contracted with OpenDNS to subvert the browsing experience, because it is economically expedient and seemingly consequence free to do so. No better than Gator’s virtual Wallet back in 2001. One advertisement subverted for another, where that advertising revenue is shared with the ISP and OpenDNS or another subverter.
The end-game of such gamesmanship at the DNS level should be plain for anyone with a shred of intelligence to see. The land title office claiming, why don’t we just insert “our” name on the vacant property address… Advertising replaced by other advertising held out to be the correction of an error. An undesirable form of commentary held out as a crime. It is the final slippery slope that will destroy the usefulness of the DNS and the internet you want for your children.
As the owner of many thousands of meaningful domain names which people request via browser type-in, I may not be loved by the DNS community for displaying advertising on my sites. I and those like me are frequently vilified by those who did not economically participate in the great domain boom and latecomers are constantly trying to rewrite the “errors” that allowed speculators to profit on domain names. The important thing to note is that the sites in question “belong” to the name owner. It’s THEIR site. They bought their way to the table when they bought the domain name which people request.
I own many bad names which get no traffic, and no company which steals my traffic helps me pay the renewal/ICANN fees on those. I reserve the right, as the owner of the destination people seek, to display the content I choose - and I live with the legal, economic and ethical consequences of my decision when I stand as the registrant of the name.
Inserting yourself arbitrarily between a consumer who requests a site and the owner of that site because you woke up one morning and felt like you were helping the navigator (or hurting a site owner you arbitrarily labeled as malicious) makes you the bad guy. It is not your decision to make. You are killing with kindness. When you re-characterize the human behavior of requesting a particular site as an “error” because it is economically useful for you, it makes you the criminal.
If nanny state concepts like these take hold it ultimately strengthens Google, Facebook, Twitter and other platforms. It weakens all domain names, all new tlds, positive disruptive DNS related technology, registrar businesses, ICANN and global CC TLD registries.
The very system we enjoy today, the profits at registries and the cash-flow stream which fuels all ICANN matters, root servers, working groups, constituencies and legal fees, comes from the registrants of domain names (big and small). For that registration base to thrive we need freedom of navigation; we need to reinforce that navigation will be authoritative and experiences will be uniform.
Parasitic intermediaries, which would inject themselves, are the undoing of the root.
When I look at the last 20 years of the Web - the Microsoft, Google contest - it is Google who has done more ‘evil’ than Microsoft. It would have been easy for them to do so, but Microsoft has not stopped or shaped users navigating to domain names through its browser. It is Google who has, via its toolbar, browser and exclusive search-index. The true evil is re-characterizing your evil as good. Google could not have become what it has without the kindness of Microsoft, and had Microsoft played dirty, Google would never have been what it is.
Correction and blocking concepts such as these, rhyme with such evil. Calling yourself a good guy by subverting human intent is just evil.
I have long felt that the greatest threat to the Domain Name Industry, domain names, registries and ICANN are those who would renegotiate the browsing standards. I have learned that freedom on the Internet is provided by freedom of navigation.
Cheering half-baked concepts such as these to a tipping point where everybody is subverting any experience they don’t like will make the modern Internet, which gave us the luxury of sitting on our high-horse to criticize it, look like a twisted upside-down Planet of The Apes. Only less talking monkeys and more big-corporation, Big-brother structure.
Frank: Paul was talking about malicious domains, not parked websites. And he wasn't equating the quality of new domains, in that first sentence, just the source of new domain names. It's easy to get off on a tangent here and get excited about parked domains and OpenDNS' implementation(s), but that's not the driving reason for ISC's work here. They're looking to provide a hook into BIND so that responding behavior can be controlled based on feeds. There will be a variety of feeds likely created -- one for sites that are hosting malware, another for sites that host various vices, another for new domains registered with eNom, etc.
The desire to insert hooks and control levers into BIND really raises an eyebrow. The beginning of the end, never looks obvious at the time. You have to think about what continued precedents such as these amount to, and what happens to the broader Web if everyone operating a root-server gets creative with this. Your children's Internet will not be as free as the one which brought you and I prosperity.
The person who is going to rely on the names decides who they choose to advise them on which are malicious and which are not. Having ICANN do this for everyone would be very bad. Everyone choosing a filtering service of their choice is a totally different issue. People may not want Vixie to be doing this for them either, maybe they choose another company. Google already filters out malware sites from its search service. As the person who cleans up the computers in the house, I don't want the kids connecting to those sites in the first place. A filtering service may not stop 100% of the compromises, but if it reduces the number of infections from 5/year to one it means I spend 1/5th the amount of time on cleanup.
Few people know that they can change their DNS, even fewer know how. For years some ISPs have been capturing all port 53 queries and forcing them into their DNS server. Those customers have no choice even if they do know how to change their DNS settings. This “feature” is now in BIND, with 85% market share of DNS server software globally. www.isc.org/community/blog/201005/dnsbind-canards-redux Is it reasonable to believe this feature will eventually be disabled by default? I think not.
I don't see how this decreases choice. An ISP could provide two flavors of DNS, one that's filtered, one that's not. Most consumers don't care -- they just want to surf the Internet. If an ISP can keep their customers malware-free then it's good for both parties. If an ISP already captures all port 53 traffic then the subscriber is already "stuck". But they can choose another ISP.
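As an aside, the "two flavors" idea maps naturally onto BIND views, so an ISP would not even need two sets of servers. A hypothetical sketch, with all names and addresses as placeholders:

acl filtered-clients { 192.0.2.0/24; };   // subscribers who opted in to filtering

view "filtered" {
    match-clients { filtered-clients; };
    recursion yes;
    response-policy { zone "rpz.example.org"; };
    zone "rpz.example.org" {
        type slave;
        masters { 203.0.113.1; };         // reputation feed provider (placeholder)
        file "rpz-filtered.db";
    };
};

view "unfiltered" {
    match-clients { any; };
    recursion yes;                        // plain recursion, no policy applied
};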
“One man's trash is another man's treasure.” It is simply not reasonable to assume this system will err on the side of fewer dezonings rather than be overly broad in removing sites from the internet. Those that are involved effectively get a proxy vote from those not involved. You assume the few represent the masses; they do not. The person visiting the site was not the one that voted. As far as what ISPs could do, that has no relationship to what they will do. It also has no relationship to what is legally possible when someone else makes the choice “at arms length”. The concept of Network Neutrality is simple: all packets treated the same. Freedom and choice are lost when we allow 3rd parties we have no association with to step in and decide which packets we can and can’t see.
There is no particular reason that DNS has to be run over port 53. One (legitimate) reason that some ISPs intercept DNS queries is that it is a way to stop DDoS attacks slamming the root or .com or whatever. I have Comcast, and I only use the Google DNS on my machines.
I am struck by this implication. Google, perhaps the largest monopoly that ever came into being besides Ma Bell (and Ma never told us who we could and couldn't dial), is your preferred choice for Open DNS over Comcast? Okay, I understand that Comcast wanted to meter your speeds at one point and still does, but what do you think Google wants? Open-ness? No, I say Google laid out the plan already, and this whole change to bind is just feeding into it. 1) Call for Open-DNS 2) Change DNS to self 3) Profit
Yes, I was pointing to the fact that people can work round port 53 blocking. The difference between Ma Bell and Google is accountability. Google has huge market share, but so did Yahoo before Google came along. The only way that Google can keep that share is if they are constantly worried that they might lose it. Google understands that fact, Facebook does not.
Which people will work around port 53? The 99.9999% that have no clue what we are talking about? A DNS fingerprint is more than obvious, port agnostic filtering will result. None of this has anything to do with techs. In the extreme this has to do with the likes of say a single working mom trying to transition to the information economy and doing so. A person with a lot to say and a lot of people wanting to listen ... And an empowered few able to shut her up forever. We the people that understand the technology need to protect those that do not understand and are never likely to be able to protect themselves. And some day each one of us is going to need that help, with proposals like that I fear that time might be sooner than I’d ever thought possible!
this post starts with a bold assertion:
“Most new domain names are malicious.”
and the whole remainder of the post, and launch of this idea, is based on that statement. is it true?
this is quite inconsistent with our view of the data. we have a reasonable, and certainly statistically significant set of data. I am very interested in whether i) other registrars have a fundamentally different registration experience than we do, or ii) the statement is hyperbole.
I would love a deeper dive into the data, whether here or offline.
and @frank bulk, the author brought frank s in with this comment:
“Every day lots of new names are added to the global DNS, and most of them belong to scammers, spammers, e-criminals, and SPECULATORS (emphasis added).”
again, I wonder if that was a turn of a phrase or a thought out statement. if thought out, then there are real problems with this approach.
"I am stunned by the simplicity and truth of that observation." I am stunned by the lack of supporting data.
The DNS cannot serve every purpose for everyone.
The ICANN DNS is based on the principle of making access as easy and as cheap as possible. There are no checks made when someone registers a domain name today and it is implausible to expect that they can be established in the future.
There are many benefits to open-ness. But there are also disadvantages. The fact that someone wants to talk to me does not mean that I want to listen or that I have an obligation to try. It is entirely reasonable for individuals to choose to move to a smaller Internet of their own choice.
What is not reasonable is to expect (or for that matter allow) ICANN to do that job.
Internet crime is a consequence of an accountability-free approach to Internet architecture. Nobody sent spam when the consequence of doing so was being kicked off the computer system that was necessary for access. Spam only appeared when accountability-free Internet access started to emerge.
You don’t need the same degree of accountability for every purpose, however. If I am browsing a Web site I probably don’t need a very high degree of accountability, unless that Web site wants to use javascript, which is code that is going to run on my computer, in which case the degree of accountability is a little higher. If they want me to buy something from them or run their software, the degree of accountability required becomes much higher; I want criminal sanctions in case of default.
And we are going to need even more accountability still if we are ever going to get to the ‘deep e-commerce’ that we all thought was round the corner in 2000, but kind of died with the dotcom bust and 9/11 and everything.
Paul’s suggestion is a positive one in my view. But I think to make it really work we have to be willing to adjust the DNS specs to suit. Using TSIG is OK, but I would like to have a more practical set up scheme than shared keys.
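For what it's worth, wiring a TSIG key into the transfer of a policy zone is already straightforward; the awkward part is distributing the shared secret in the first place. A hypothetical named.conf fragment, with the key name, secret and addresses as placeholders (a real secret would be generated with dnssec-keygen or a similar tool):

key "rpz-feed-key" {
    algorithm hmac-sha256;
    secret "c2VjcmV0c2VjcmV0c2VjcmV0c2VjcmV0";   // placeholder, not a real secret
};

// present the key when transferring the policy zone from the provider
server 203.0.113.1 {
    keys { "rpz-feed-key"; };
};

zone "rpz.example.org" {
    type slave;
    masters { 203.0.113.1; };
    file "rpz.example.org.db";
};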
In the original Internet architecture, IP and packets were primary and the DNS was an afterthought. The architecture that has evolved makes DNS primary and the packets are just a means to an end. The DNS is the one part of the architecture that we can expect to be around in 200 years time. The packet layer will almost certainly be different, the DNS protocol itself will probably have been replaced a couple of times. But the DNS system will have survived.
At the moment the Internet has two identity infrastructures, DNS and PKI. The DNS is open registration, the PKI has tiered access. Domain Validated certificates can be obtained with minimal validation, but EV certificates require a demonstration of accountability. What we really need is a single system and the way to do that is to work out how we can leverage both existing infrastructures to build the infrastructure we need.
[Removed as per CircleID Codes of Conduct]
That is where this will lead, deeper and deeper down the DNS “stack”.
And if most all the largest ISPs dezone, for all intents and purposes the domain is not published. That will eventually be the excuse to move the ban to the registry entry. And what reasonable person will conclude ISPs will not “share” data?
As for email we choose our spam filters. We have no choice regarding the zone record “votes” others will make for our domains and those of our customers. The two are not comparable.
This “electronic democracy” is two wolves and a sheep deciding what is for dinner.
Sure, if a domain is being used to control a botnet then most selective DNS providers are going to block it. That is what they are being asked to do by their subscribers. Is that meant to be a 'bad thing' somehow? What I don't think very likely is that this is going to allow China or Iran or the Moral Majority or any other group to extend their control over the Internet any more than has happened already.
A legal chain is formed in your example. You blocked "my" site, and "I'm" not likely to entertain a transparent review of what "I" did to cause you to implement that filter. "I" will not win and you'll likely come after "me" for damages. DNS RPZ has ZERO accountability. THOSE ADDING RECORDS TO THE TABLE ARE NOT ACCOUNTABLE TO THOSE AFFECTED BY THOSE ENTRIES. THOSE AFFECTED BY THE ENTRIES HAVE NO VOTE. And the danger of being that pointed is having those most creative wrap logical fallacies around the basic spec to make it appear those issues can and will be addressed. They can't. “Censorship ends in logical completeness when nobody is allowed to read any books except the books that nobody reads.” - George Bernard Shaw
The same was true of the MAPS spam blacklist at first. Then this became very clear with blacklists falling over themselves to be as unaccountable as possible. Eventually the consequences of having no accountability became clear and sensible ISPs dropped the blacklists causing the trouble. If I decide that I don't like your Internet traffic, I don't have to accept it. That's tough for you. You don't get a vote in my decision on which traffic to accept and you never will. Deal with it. Because at the end of the day, if people want this type of Internet service they are going to get it. Nobody needs to ask your permission or anyone else's to deploy this. Just as nobody needed to ask permission to start filtering their email, they don't have to ask permission to filter their Web. We have had content filtering services aimed at excluding pornography and certain political content for over a decade. The world has not come to an end. All that is likely to happen here is applying the same idea to excluding a different set of sites where the problem is malware.
You use the word "service" as if to suggest it is optional. It's not, and that is the issue, as you say here:

>If I decide that I don't like your Internet traffic,
>I don't have to accept it. That's tough for you. You
>don't get a vote in my decision on which traffic to
>accept and you never will. Deal with it.

Exactly. And that works both ways. Thus the title of my original post: DNS RPZ = Empowering Electronic Balkanization.

If malware really is the issue you wish to address, then rather than carving up navigation, demand ICANN end privacy Whois. Simple and to the point. There already exists the ability to delete a domain for bad whois, but with privacy whois that can’t be done. So let's cut to the heart of the matter and hold the registrant responsible, not some electronic manifestation of them. Further, demand centralization of the COM/NET whois, harmonizing them with the other gTLDs. Then, since the registries must store all domain update transactions, require ALL registries to implement a WhoWas function based on those stored records, for a fee of say not more than $10 per query. As DomainTools has shown, the profits here would be very large for the registries, and they already have the data (except Verisign). Since the registry is the authority, the WhoIs and WhoWas provided is authoritative. DomainTools is not authoritative and is filled with bad data.

Such a Registry level WhoWas service would provide a decent audit trail for the behavior of individuals. This is a real solution to hold the guilty accountable without creating collateral damage and other liabilities for unrelated people, organizations, and their domains. But no, it’s not particularly sexy nor easy to tamper with.
Better identification won't fully resolve the problem -- I wish it was. There's lots of spammers and bad websites where the owner and ISP won't/can't do anything about it. And getting ICANN to enforce even the policies it has on the books is not realistic. RPZ gives the DNS server admin a level of control and responsiveness that they didn't have before.
Agreed, nothing is perfect; to me it's just better to catch fewer than to unfairly harm honest folks. As for enforcement, as I've stated below, Afilias implemented a pretty serious policy on these matters. For some reason people think of ICANN and don't think of talking to the registries. Not to mention the registrars, where the TOS will likely give them the ability to do whatever they want.

>RPZ gives the DNS server admin a level of control
>and responsiveness that they didn't have before.

Absolute power corrupts ... Look, I'm all for making such a thankless job easier. Which is why the more pain you can focus squarely on the bad guys, the more control you will REALLY have!
I don't plan to wait on ICANN for anything. There are really good reasons why ICANN cannot and will not do anything to 'fix' whois. It is only actually in charge of some of the DNS for a start. Last I saw, Nominet still does not acknowledge ICANN as having any authority over it whatsoever (they exchanged letters, so what). And I would expect the same to be true of the other European CC TLDs. There is no way that ICANN can force the EU registries to breach their understanding of EU privacy requirements. Nor is it going to push on the registrars for the same reason. And even if this was a possibility, the accountability would be that the registrant's WHOIS data would be published. Which might enable a cancellation of the registration eventually if found to be fake but otherwise would have no consequences whatsoever since much of the Internet crime is being committed with government protection from prosecution. Compare and contrast the non-accountability of that procedure and the months long delays it would entail with the fact that a selective DNS can disable malicious zones in seconds. If a domain is being used for botnet control, fast flux phishing or malware distribution, shut them down until they clean up their act. If a selective DNS provider is being abusive, then make the fact public and people can decide if they want to change.
>There are really good reasons why ICANN cannot
>and will not do anything to 'fix' whois. It is
>only actually in charge of some of the DNS for a start.

That’s not how our contracts read. Nor the ICANN emails when someone complains about a whois record. Nor the ICANN required whois data escrow deposits.

>And I would expect the same to be true of the other European CC TLDs.

Yes, I'm aware that ICANN failed to get as many ccTLDs to sign redelegation contracts as it wanted. So the issue of ccTLDs is more difficult. One registry showing a profit from a WhoWas service strikes me as creating more of a ripple than you might think. As you say (next) that does not necessarily give you the legal teeth to do anything, but nobody is hiding (similar to your selective DNS provider misuse comment). I also see lots of problems far beyond the ones specified that would be solved with an end to privacy whois.

>but otherwise would have no consequences whatsoever since
>much of the Internet crime is being committed with
>government protection from prosecution.

Understood.

>Compare and contrast the non-accountability of that
>procedure and the months long delays it would entail
>with the fact that a selective DNS can disable
>malicious zones in seconds.

What I read about registries rarely seems to reconcile with my personal experiences. Registries take their "brand" pretty seriously and never seem happy about misuse of their domains (broad statement since I don't know them all). In fact, if memory serves, Afilias for example has stated fast flux is forbidden without first obtaining permission. Yes, it only takes one bad registry in the root to cause a mess.

Circling back to the government issues, it still seems that we're back to the domain being given more value than the registrant places on it. It will be replaced very quickly (domain fast flux if you will). And the issue of false selective DNS triggers remains. I foresee some real wars in this regard. Losing email is one thing (queuing), but lost sales etc. from a down website is incomprehensible to me. You'll have IT teams running around with no clue why their customers have no access. You'll have registrants calling hosting companies, getting billed for the calls, and the hosting companies scratching their heads while the meter runs. The business owner will be totally powerless and losing money. The smaller they are, the more devastating this will be. And it’s worth the reminder that they might have had their server taken over. This will easily destroy small businesses if affected. Large businesses are not going to have a great time either, but they will likely isolate the problem quickly and sterilize it just as quickly.

Abuse and exploits are admittedly my biggest fear. Some kind of spoof attack to sucker the system into deleting a specifically targeted zone, for example.
>So most really don't choose their own spam filter.

For personal users I generally agree. For business this would not be the case at all. Email hosting and spam filtering would both be selected, perhaps from the same provider, perhaps not.

>The point I'm trying to make is that the zone record holder
>is equivalent to the operator of an e-mail server.

No argument from me, except for the usefulness of that relationship and the problems that assumption causes. Every time a registry tries to pump its reg count, or a registrar does, with some discounting program, who regs most of the domains? Spammers, malware, etc. - all the folks stated as the targets (speculators are generally excluded here). Look at the first Afilias .INFO promotion: the problem was so bad there was a massive industry wide “filter bias” against the .INFO TLD itself, in spam filters and the search engines! In general Afilias did enjoy some stickiness of regs, but the total damage from spammers and malware was huge. In fact Afilias implemented a rather heavy handed policy regarding its ability to delete such domains in the future ... We’ve seen this enough times for the details to be clear.

Simply put, the “bad guys” do not give a damn about a unique domain name. So while your juiced up servers go to war tracking these people down, they will query your servers (or have a parallel setup to monitor your ban lists), see they were detected, and move on to the next domain name and free hosting account, ALL BEING DONE BY AUTOMATION. They keep making money and you keep wasting time, money and effort chasing after them. But there are lots of pretty flashing lights in the mean time, a never ending game of Whack-A-Mole. But you feel like you are doing something, as DNS RPZ helps you chase them.

The bad guys place no value in their domain names, none. That is why the DNS/Email relationship is meaningless for the intended application. As for those poor people that have their servers hijacked, you’ll get them, and how do they have themselves removed from the ban list after you’ve shut down their business?
Your point about .info matches my (little) understanding of the events that transpired. It's true that the "bad guys" will likely cycle through more domains more quickly, but I see DNS RPZ as one tool in the toolchest, and I don't think anyone is suggesting it's a silver bullet. "Poor" people who have their server hijacked will need to do a better job of securing it. Just like an e-mail server that winds up on a DNSBL because someone is relaying e-mail through it, they'll need to do their best to remove the malicious content and request a delisting. Frank
Ed:
I’m not following what you’re saying. AFAIK, Google does not use BIND for their own DNS services.
Frank
But Google does use their own DNS for putting ads on pages that don't resolve (the eventual end of all things Google). Is this what in the end all unplusgood DNS RPZs will resolve to? More ads more spam and more ISPs earning from non-resolution?
That is precisely the problem. We are creating an incentive for sites NOT to resolve.
And never be able to know why, or who did it.
Google's DNS does not return a page with ads when pages don't resolve. Their DNS was a response, in part, to Comcast trying to turn such ad pages into an RFC. Moreover, I'd be happy to have a DNS service that didn't resolve spammers, malicious pages and even content-less pages from speculators and domainers. The latter add nothing of value to the internet community, even if they do create value for the speculators. Instead, they fill the net with useless pages and create an artificial scarcity in domain names. Having a DNS service that helps drive the economic incentive out of parked pages sounds great to me -- so long as it's a service I get to choose, a la OpenDNS.
"Google's DNS does not return a page with ads when pages don't resolve"

Not yet. What do you think they want one day? Peace, joy and love? Or money?

"so long as it's a service I get to choose"

RPZ DNS offers no choices.
Google's likely to decide that simply faster DNS, getting people to more pages faster, is more in their interest than crappy ads on pages that don't resolve. Luckily for the greater internet community, this puts their interests and Google's in line. If and when Google starts to follow the low-grade economic model of domainers and ISPs like Comcast, I'll find another DNS provider. And the same with RPZ DNS, I can choose whether to use it or not. All I'm hearing here is a bunch of whining from domainers that their bottom-feeder economic model is in peril from a DNS that actually serves people and which would be totally voluntary. Parked pages are parasites that found an ecological niche thanks to a very open internet policy. Finding a way to squeeze them out without restricting anyone's ability to buy a domain name sounds ideal to me.
"And the same with RPZ DNS, I can choose whether to use it or not."

Actually, you can't. Did you like it when you could only choose Microsoft Internet Explorer 4 or Microsoft Internet Explorer 5?

"All I'm hearing here is a bunch of whining from domainers that their bottom-feeder economic..."

I suspected you were flamebait, but thanks for proving it. You may now go forth and propagate with others of your ilk.
And thanks for proving that when you are called out for FUD -- that Google DNS shows ads -- you'll avoid the substantive questions. I suspected you were just another bottom-feeding domainer who refuses to acknowledge there's a damn good reason for DNS services for people who have no use for parked domains and artificial domain name shortages. Thanks for proving it. You may continue to go forth and propagate the net with your crappy pages -- whining about the idea of creatively using DNS to make the net better, instead of actually building something people want.
I'm curious what will replace the parked page that will make your experience better/faster/safer ? "I’ve no doubt you’ll make the main stream media your friend" - Charles Christopher ;)
The use of Google's DNS is optional. If Google used RPZ for their own financial advantage through ad-filled pages, Google DNS users could choose another DNS server.
The advertisers placing advertisements on those parking pages might have a different opinion, as you just deleted a large amount of advertising space. Some of it likely being their most profitable insertions. Google would not care, as their auction system will price the available inventory upwards for the scarcity you just created for them. ISPs would also love you for getting rid of the “speculators”, as that frees up more domains for them to offer up via wildcarding. The more typeins you can force into a deleted state the better. In other words the “uselessness” of speculators’ parking pages gets transferred and monetized to the ISP, and likely they will eventually register those domains, since they are uniquely positioned with DNS logs that can be mined for keywords. If such pages are so useless, one should ponder why so many people obtain such profits from those DNS entries (AKA domain names), if they exist, and even if they do not. But of course we’re not likely to hear much debate about ISP wildcarding, that’s ok. And it’s also ok for browsers to intercept the dezoned domains so the browser author can monetize the error traffic you just created. So much uselessness, and so much effort to be part of it. Wonder, wonder, wonder ....

As for “selfish speculators”, yeah, I ignorantly thought that too once upon a time. Then I watched as registry after registry used them as their UNPAID MARKETING FORCE for their domains, thus directly driving interest in New TLDs as they deploy. One need only look in the main stream media for the countless opportunities where registries have promoted a high dollar sale of one of their domains in order to pump interest. And often it was an evil speculator that gave the registry the opportunity to promote itself. The domain arbitrage opportunity very efficiently converts to a motivated sales force that has to pay renewal fees each year to cover the costs of domains that can’t be monetized. Through the years registries have happily given up that arbitrage opportunity to speculators to promote their domain space for the registry. Of course, after years of watching them do it, registries now reserve domains and auction others both at launch and afterwards; now they know how to duplicate the promotion and get the auction revenue themselves. Of course only speculators have the understanding to properly price the domains at the landrush auction, and everybody scratches their head over the prices and calls the speculators nuts (speculators seeming incapable of ever doing anything right) ... But registries are never evil for doing this; the speculators are considered stupid for paying so much, then they get criticized for building a business they expect to profit from when someone approaches them for one of their domains. Everybody wants everything for free and gets mad at others when they demonstrate the gonads to accept risk and successfully turn that risk into profit. Same old ignorance, different decade. I’m sure we all expected it.

Now let’s get back to DNS RPZ, and how it will affect those that will never be able to protect themselves from it.
>This means we will create a market for DNS reputation

That says it all. The goal here is to use BIND to legitimize this specification and do what they can to fully deploy it into ALL DNS SERVERS. There is no other reasonable inference from that quote, none. As for others following their lead, I'd rather avoid it from the start.
How do you see Network Neutrality interacting with established ISP practices of spam filtering and selective port filtering?
I understand your concern that an ISP who uses the RPZ functionality in BIND would be providing a selective experience of which the subscriber may or may not be aware, but that doesn’t square well with the fact that ISPs (typically) do their best to provide a good and safe Internet browsing experience for their customers.
If as an ISP operator (I do that in my $DAYJOB) I can prevent most of my customers from visiting most $BAD sites that will infect their PCs for a minimal cost on my part, lowering my helpdesk support costs and reducing customer frustration and cost, how is that bad for either party?
I’ve neglected to mention that the ISP should be disclosing that they’re doing DNS filtering. Customers know we do that with their e-mail, and I readily tell them if they ask why port 445 doesn’t work across the internet.
Frank
>spam filtering and selective port filtering?

That is why I no longer use an ISP email account, and fortunately there are plenty of options. Port filtering, such as my previous ISP intentionally killing my Vonage service, bothers me greatly. Fortunately I had another ISP available, and up until recently I've seen zero filtering of any kind by them. Yes, this is a huge problem, especially when the ISP does it to kill competing services like VOIP and video on demand in order to force the customer back to their paid services. But that is very different; as I said, we have no choice regarding DNS RPZ, directly or indirectly. DNS is the foundation of the internet, and DNS RPZ is going to empower unrelated 3rd parties to make decisions for the rest of us, without our vote. Just as with email, these lists will be shared.

I just received a call from a friend whose email has been banned by a local company. He runs a promotional business, never spams; he only sends emails to a large list of customers that he’s built over many decades. What happens if someone decides to dezone his domain name? It’s bad enough to lose revenue due to his email not meeting some arbitrary metric, but killing his website does nothing but test whether he has the time and bank account to try to get the entry returned before he’s bankrupted.

That power of DNS RPZ will be more than obvious to those with power and resources as well. If someone pays you to delete someone else’s site from your network, there will be legal recourse if you are caught, or if you do it without a solid legal reason. There will be no practical recourse for DNS RPZ. Taking this to a limit, I can see bored smart people no longer hacking websites but getting together to see if they can get ebay or Google into the DNS RPZ tables. If DNS RPZ implements an exception table of any kind, then admission of its foundational flaws is made, but not before permanent damage is done, likely forever and impossible to reverse.

Furthermore, when an ISP implements filtering, that information gets out into the market place and people can use their money to reward or punish that ISP - assuming a non filtering ISP is available. To this day the filtering by my former ISP is costing them money, due to my encouraging others never to use them, and when they don’t listen and find out I was right I encourage them to tell others. With DNS RPZ there will only be some mysterious consciousness out in the ether, which is totally unaccountable to anybody! This should be obvious to all. This is the latest in technology seduction, one destined to leave a very bad mark.
With a DNS RPZ the website wouldn’t be down, it would just be inaccessible to those using a DNS server with an RPZ feed that marks that site as undesirable. Not unlike outgoing e-mail servers which sometimes (accidentally) get on a blacklist, that may happen with a zone. No doubt there’s risk with implementing an RPZ, but the painful reality of botnets and malware have brought us (well, at least the ISC) to this point.
Frank
>down / inaccessible

I think that's unfair hair splitting. To the world that can't access the server, the website is "down". To the small business that knows little about IT, the website is "down". Made worse by the fact that all the internally known administrative points have NOTHING to do with why the site is "down" or "inaccessible". This is a debug nightmare.
Look, nobody should be creating or centralizing (functionally or procedurally) an "Internet Off Switch". If you build an internet off switch to go to war with the attackers the attackers do what? The attackers attack the switch. It's what they do, it's their nature. We can all easily intuit how this is going to play out. Is ISC willing to go on record and say that is impossible?
There's no "internet off switch" and this new DNS RPZ functionality is not that. There are lots of DNS server vendors and they're not implemented uniformly. And there are likely to be several DNS RPZ feeds. Your concerns are overstated.
Yes there is. The controller of the DNS server with 85% market share (the subject of this thread) said this in the above:

>This means we will create a market for DNS reputation [...]

Last I knew, most called 85% a "monopoly".
85% seems high, and I'm not sure how ISC counts it. But even if that's the case, the ability to use DNS RPZ does not mean that everyone will implement it, and people are free to use other DNS servers like PowerDNS, Nominum, etc.
If a Web site is valuable, then perhaps there is an onus on the Web site provider to help distinguish their legitimate content from malicious copies. That is precisely the problem we set out to address with Extended Validation certificates. An EV cert provides a quantifiable indication of accountability and accountability is a pretty good predictor for trustworthiness. There is a well defined procedure and a competitive market for EV cert provision. Of course an EV cert is not going to guarantee that people want your content. But they have the right to refuse to deal with you. Just as many banks have now ceased to provide any banking services to Nigeria as a result of the number of scams coming from that country and the complicity of the government in the scams. Of course the filtering services must also be accountable. I really did not like Vixie's refusal to be accountable for his MAPS activities. But the market sorted itself out fairly quickly. Nobody uses raw blacklists as the sole blocking criteria any more. There are feedback loops and blacklists that have false positives have their merit scores rapidly downgraded.
Most small businesses rely on others for all this and trust that their server will be secure. They simply are incapable of handling this issue. There are more than enough public reports of the largest websites in the world being successfully attacked and manipulated. As I said, the large corporations will know what happened; no doubt they will subscribe to the DNS RPZ monitoring services that spring up. So with the largest corporations having their servers successfully hacked, are we to assume that small businesses with no IT staff are more competent? Not likely. This is going to turn into a mess, and with no legal recourse when it happens. There is no accountability of those building the lists.
No more a mess than e-mail. And there shouldn't be any legal recourse. You're forgetting the principle "my network, my rules, your network, your rules". The ISP operating the recursive DNS server for their customers is electing to use a certain RPZ feed, and those who are generating the RPZ feed aren't forcing anyone to use it. That small business, when they can't visit microsoft.com, will ask their ISP for help. Their ISP will let them know what happened and stop using that RPZ feed or put in safeguards (aka whitelists) for certain domains. Remember, the ISPs have an incentive to keep their false positives low, otherwise there's no net benefit. Also, those who generate the RPZ feeds will mature and have safeguards in place for sites like google.com and microsoft.com. Frank
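A whitelist of that sort can itself be expressed in the RPZ format. Later revisions of the spec define a PASSTHRU action, encoded as a CNAME to the special name rpz-passthru., and a locally maintained policy zone listed ahead of the provider's feed would shield those names. The zone and domain names below are only an illustrative fragment:

; hypothetical local whitelist zone, consulted before the provider feed
$ORIGIN rpz-local.example.org.
google.com        CNAME rpz-passthru.
*.google.com      CNAME rpz-passthru.
microsoft.com     CNAME rpz-passthru.
*.microsoft.com   CNAME rpz-passthru.

If policy zones are consulted in the order they are listed in the response-policy statement, as the draft describes, the local whitelist would simply be listed first.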
>And there shouldn't be any legal recourse.

I can't top that statement.
It's not a debug nightmare. They can contact the website holder to find out what the problem is, they can contact their ISP, they can attempt to manually resolve the DNS record, or they can check the DNS RPZ feeds. There are probably more things I haven't thought of. More likely is that they'll move on to the next website, unless that website belongs to a partner/vendor/supplier.
You reverse what I said. I said the "website holder", a small business with no IT staff, was told their website is down by a customer or peer. No call they make will likely be to anybody involved or empowered. In the mean time their business is shut down, their livelihood threatened. The posts I keep reading for this case assume the website holder is intentionally doing something wrong; the posts also refuse to acknowledge the dependency that website holder has on their service provider and how others might exploit DNS RPZ to attack a competitor. In each case you folks demand the website holder accept accountability for your rules (which I don't even see defined, as you'd incur legal consequences if you CLEARLY did), while refusing to believe you should have any when shutting them down. What a deal! THAT is a debug nightmare, since someone totally outside their policies, procedures, and expectations was empowered to shut down their site to large portions of the internet. I think that's more than clear.
My apologies if I haven't been clear. I definitely had the "website holder" in mind when I wrote my response. I see what you mean by lack of empowerment -- if customer A can't access company B's website, company B likely won't get very far with customer A's ISP. That said, there are umpteen DNSBLs today and e-mail still flows. There are occasional false positives, but they get worked out. Why would customer A's ISP have legal consequences for using a DNS RPZ feed? If "large portions of the internet" refers to sites hosting malware, then I'm OK with that. But I think you're concerned that a sizable percentage of good sites will be inaccessible to a large portion of the Internet, and if that's the case, I think you're overly concerned.
>I think you're overly concerned.

Let's hope you are right, and that I am wrong. However, from where I'm standing, I see far too many changes taking place that support my concerns.
“Most new domain names are malicious.”
No surprise there. By July 1996, the entire .net TLD was a “sewer”.
I can’t imagine what it has become since then, but I’m puzzled as to why the stock footage business at footage.net is your poster child for abuse.
Newsgroups: comp.protocols.tcp-ip.domains
From: Paul A Vixie
Date: 1996/07/06
Subject: Re: .com versus .net
> I am wondering what is the the real difference between a .com domain and a
> .net domain if you are registering for a commercial organization???
Generally this depends on your mood. If you happen to feel like being under
the “NET.” top level domain, then you definitely ought to indulge yourself.
While once reserved for Network Infrastructure purposes, “NET.” has become
quite a sewer. FOOTAGE.NET is my poster child for this kind of childishness,
but there are quite a few others which have nothing to do with infrastructure:
APPLETON-BUSINESS.NET
HOUSTON-BUSINESS.NET
ALBANY-MARKETPLACE.NET
FAMILY.NET
WATER.NET
CATALINA-INTER.NET
AIRPORT.NET
The list goes on. My ability to peruse it does not. The point is, there are
no rules, “NET.” is the next “COM.” and you’d all better get your domains up
and running before somebody else beats you to it. More, bigger, better, and
faster. Never mind what it was intended for. The InterNIC is not allowed to
turn you down, no matter what your “business description” says. So why not?
(PS., Maybe if we pollute everything to ruin, folks will head to deeper water
and use domain names that make sense but have more dots in them, sooner. So,
send that “NET.” domain application in TODAY! Don’t delay!)
—
Paul Vixie
La Honda, CA “Illegitimibus non carborundum.”
pacbell!vixie!paul
Who’s talking absolute power? We’re just talking about DNS resolution here. Let’s not blow this out of proportion.
That’s what the DNS RPZ does, is focus on the bad guys.
>That's what the DNS RPZ does, is focus on the bad guys.

No, it's a tool you use to block those YOU define as bad guys.
Good point, well said.
This is a sweeping statement. However, had a reputation-based DNS option been available a few years ago, when ICANN essentially facilitated Domain Tasting, it could have solved a lot of the problems long before ICANN ever got around to dealing with them. But the current situation is not that Domain Tasting mess. There is still a level of malicious domain name registration, but it would be unfair to say that most new domain names registered on a daily basis are malicious.
Many newly registered domains are automatically parked on the registrar’s PPC websites before they are used or developed. For some, they will stay that way until they are dropped without being renewed. Others are registered for the purposes of PPC and speculation but this does not mean that they are malicious registrations. Significant percentages of various TLDs are on PPC parking. In some cases, the percentages on PPC parking will come close to or exceed the numbers of actively developed domain names in that TLD.
This proposal has some very interesting possibilities, but it may also have some unintended consequences. If it is implemented by ISPs, they will become the gatekeepers for their users. They would, in effect, be a multitude of little versions of the “Great Firewall of China”. What if they start forcing users to use their DNS too?
The browser plugins would probably be the earliest users of such a blacklist. A whole section of the web could fade out of view in just a few months. The Direct Navigation model could take another serious kicking as a reputation based DNS could replicate the principle of Google’s reputation based link algorithm. The unintended consequence is that this could end up taking away the decision about what site to visit from the user and giving it to the ISP or someone else.
Yes, but providing cybersquatters with a minimum 30 days during a UDRP proceeding to "blackhole" a domain name, so that the trademark owner winds up with a domain that won't route, will be interesting to watch.
Did you mean "route" or "resolve"?
I meant "work", but the aggressive iPhone spell check and my thumbs apparently had other ideas.
hi john, it is entirely conceivable, given the discussion of reputation of address blocks, and therefore of routing, on nanog, involving paul and others, that a domain may "resolve" yet not "route" by policy. this in addition to a domain not being resolved by policy for some set of resolvers. and isn't that spell checker a pita?
Can you point me to the URL on the NANOG archive? I don't recall that thread. Thanks.
Speculation is how you bring future prices into the present. Speculation is good. More speculation is better. Bad speculation isn’t a problem because the speculator runs out of money. Unfortunately, there is substantial overlap between the field of trademarks and domain names, and the rules differ between them. Trademarks have had hundreds of years of case law; domain names hundreds of months.
For example, you can’t get a trademark on a product you aren’t selling, yet you can get a domain name for a product you aren’t selling. For example, you can’t sell a competing product under a similar name by trademark law, but that works fine under domain names. For example, trademarks have fields, but domain names don’t.
And unfortunately, a lot of what is called “speculating” is infringement, plain and simple.
I like your points. And it works the other way too. Creating an "off switch" that costs nothing to use (no accountability) makes it all too easy to use in cases where it should not be. The renewal fees make the "speculator" accountable. And what makes DNS RPZ accountable? The direct quote above said it all:

> And there shouldn't be any legal recourse.
The ability of members of Congress to manipulate the financial industry destroys the theory by removing the precondition that customers, not politicians, make the rules. Once you let politicians frick with things, they can and will be corrupted by the frickees. The only solution is separation of state and markets. That includes, of course, DNS services.
It’s been fascinating reading through the posts that Paul’s announcement here has created. There is a lot of passion coming throughout the threads from many posters debating the desirability of the mere “existence” of a mechanism like an RPZ. Problem is, RPZ’s of some form have been with us for years and that debate is long over. SURBL, Spamhaus, OpenDNS, my own company, and many, many others have been providing domain reputation services that can be used at the DNS resolver level (if desired) for years now. A large number of enterprises (and even ISP’s) create their own custom block lists at the domain level, and implement them either in DNS and/or at the firewall to protect their networks. Sometimes that even extends to the TLD level - for instance, a lot of enterprises block all of a TLD like .RU or .CN as a matter of policy. Heck, some governments routinely block domains to “protect” their citizens.
All Paul has done is create a standard way for people running resolvers to incorporate data from sources of their choosing. He’s not “running” a list, just making it possible for anyone using BIND to easily choose a list (or lists) they like and use it to protect the users of their resolvers. Those users could be many, and vary depending upon the nature of the networks that use those resolvers. For example, for a highly sensitive critical infrastructure environment, a network security officer may want to implement a policy that blocks entire TLDs, any domain reported as malicious or compromised, and domains on hosts they’ve tracked trying to penetrate their networks, from being resolved by machines on that network. That same officer may choose a wide-open resolver for the public access network they run, and a “malware domains blocked” resolver for their employee network. That’s just standard risk assessment and implementation of security policies.

And yes, this kind of activity happens TODAY! Lots of folks out there are using DNS-level blocking in proprietary resolver solutions to stop malware communications, prevent employees from visiting phishing sites, and implement other aspects of their security policies. Interestingly, the debate here has largely concentrated on port 80 (web) resolution. The beauty of using the DNS as a filtering tool is that malware and other undesirable activities happen over almost any port, so you can address all such attacks at this level, if appropriate. That way your employees’ machines can’t leak data to a keylogger C&C, be part of the Conficker network, or do anything else that can be quantified and blocked via the DNS. Of course, if you want to just block port 80 traffic, products/services like WebSense, NetNanny and others routinely block large swaths of domains based on their content type (yep, just like the anti-spam vendors blocking port 25 traffic!), so there’s yet another example of why the “existence of domain reputation lists” issue is moot.
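For readers who haven’t seen it in practice, here is roughly what subscribing a recursive BIND server to a reputation feed looks like. This is only a sketch based on the draft spec: the zone name “rpz.reputation.example”, the feed address 192.0.2.53, and the blocked names are all hypothetical, and the exact statement names may vary between BIND releases.

// named.conf on the recursive server (sketch)
options {
    // apply the policy zone to answers this resolver gives its clients
    response-policy { zone "rpz.reputation.example"; };
};

// the reputation feed is an ordinary DNS zone, kept current via NOTIFY and AXFR/IXFR
zone "rpz.reputation.example" {
    type slave;
    masters { 192.0.2.53; };           // hypothetical feed publisher
    file "rpz.reputation.example.db";
    allow-query { localhost; };        // policy data is not meant for the public
};

; rpz.reputation.example.db (sketch of the zone contents)
$TTL 300
@                     IN SOA ns.reputation.example. hostmaster.reputation.example. (
                             1 3600 600 604800 300 )
                      IN NS  ns.reputation.example.
; a CNAME to the root (".") means "answer NXDOMAIN for this name"
badguy.example.com    IN CNAME .
*.badguy.example.com  IN CNAME .
; a CNAME to a real name instead redirects the lookup, e.g. to a walled garden
phish.example.net     IN CNAME walled-garden.isp.example.

Removing the zone (or the response-policy statement) restores normal resolution, so the policy decision stays entirely with whoever operates the resolver.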
So given the existence of these kinds of lists as a fact, the thing that’s really at issue here in my opinion is whether ISC’s solution is a good one, both technically and as a way to create a better, more open marketplace for reputation services. The current solution set of proprietary ratings, kludged extensions, custom resolvers, and non-standard reporting by vendors, internal sensors, and data sharing partners makes it difficult for people running resolvers to evaluate reputation data quality and efficacy for their own purposes. I applaud ISC for attempting to create a standard we can all look at and evaluate for whether or not it works, and potentially brings order to a rather chaotic marketplace. In fact, I’d venture to say that by creating such a standard, you’re much more likely to see better accountability out of folks supplying such data. Whenever you can compare things side-by-side, you get a better understanding of who’s doing a better job. That can also lead to far better transparency by the way…
I do have some concerns/questions about the scalability of the ISC approach and would like to see some standardizations of how reputation is actually reported. I think that’s a far more fruitful topic for debate here rather than should such lists exist at all. As I said in the title to this post, that ship has sailed. Deciding what this marketplace should look like, standards employed, implementation issues, and how it operates are of great interest to me. I think talks about standards, best practices, feedback, and “governance” would be beneficial.
> I think talks about standards, best practices, feedback, and "governance" would be beneficial.

I think we should ask why such important issues were not addressed before implementation and release. From the last line of ISC's History page: "We have a global constituency." Yet when I look at the dates of a Google search of "DNS RPZ", I find no evidence of a "global constituency" discussion, or any suggestion of one. I find no search results before those starting on or about the time of this CircleID post.
Hello,
I believe the ISC’s threat model for DNS blacklisting does not address all adversaries, and the proposal actually nullifies the distributed, fault-tolerant nature of the DNS.
Consider: under the current system, a government controlling a DNS operator (or operators) must approach each operator with its [secret] request. Under the proposed scheme, the distributed, fault-tolerant nature of the DNS is nullified. That is, a government only needs to poison the database of one cooperating operator, and other cooperating DNS operators will dutifully incorporate the changes. To make matters worse, the poisoning will cross national/political boundaries - something governments don’t fully enjoy under the current system.
Email blacklists (and the small number of associated failures) are generally acceptable. Email is only one service, even if communications is a high-value one. DNS, however, is the ‘one ring to rule them all’, and I believe the risk associated with governmental subversion and tampering is not acceptable.
If history is an indicator of future expectations, the US will probably be the country to most abuse the proposed system, and the abuses will occur under the guise of national security. For those who claim otherwise, I still remember the NSAKEY incident, and note that US government is abusing both US citizens and foreign nationals under the gestapo legislation known as the PATRIOT Act.
I would bet the proponents of the ‘Internet Kill Switch’ are salivating like Pavlov’s dog….
Jeffrey Walton,
Baltimore, MD, US
Jeffrey: You're presuming that all ISPs and all DNS operators are using DNS RPZ, are using ISC's BIND, and have configured it to use ISC's new feature. Since that's not the case, your concern is vastly overstated.
Hi Frank,
Just to play devil’s advocate here: Is the ISC proposing a system they expect no one to use?
I claim the ISC’s proposal will enjoy popularity. It appears others, such as Ulevitch and his OpenDNS, agree [http://www.circleid.com/posts/20100728_taking_back_the_dns/#6806]. This initiative will probably gain a sizeable share almost immediately.
Jeff
Truthfully, I’m not concerned that this particular situation is the big bad or the undoing of browsing. I don’t think it will get traction. But this trend is nobody’s friend. Looking at a domainer’s site and saying “I want to block or hijack that traffic because they are only showing advertising, and I can do that” is the classic road to hell, because it disrupts the uniformity of the browsing experience. Say tomorrow, as the site’s owner, I develop the site, and one person sees the content and the next person doesn’t. It’s bad because the lack of uniformity screws up the utility of the net. And it sets a horrible precedent for those wishing to limit free speech or subvert other forms of commentary. I may not like the content at http://www.lemonparty.org but I respect the right of those folks to gross me out. Long live free expression on the net. Advertising included.
@Frank I remember a thread on Sevenmile.com that discussed the possibility of search engines and browser developers reacting against the high numbers of parked/PPC domains by effectively working to block these sites. The browser developer or search engine would have provided a solid target. However, this goes well beyond that in that it provides the tools necessary to implement blocking. Some ISPs are already monetising NXDOMAIN traffic, but this would allow them to monetise direct navigation traffic at the expense of those keyword domain owners. Conceptually, it would be easy to set up the initial blocks for such an operation by simply flagging all domain names on the major PPC nameservers. The JavaScript/IFRAME inclusions would be the next layer. However, once such a system is initiated, it would be very difficult to limit. And if PPC and domaining are hit first, what's to say that webmaster revenue streams like AdSense and other such advertising would remain untouched? The wildcard in all this is not how easily this can be implemented from the DNS angle but rather how the end users react.
ISPs provide a service to their customers. If customers don't want to visit parked/PPC sites, would it be wrong if the ISP's DNS server prevented them from going there? As mentioned elsewhere in this thread, ISP transparency is important, and customers should be able to use alternatives (IOW, an ISP forcing the use of its RPZ-enabled DNS servers would not be something I support). Frank
> If customers don't want to visit parked/PPC sites, would it be wrong if the ISP's DNS server prevented them from going there?

Mission creep. Historically, the only way self-regulation works is if nobody takes the first step; not taking that step is the regulation. As I keep saying, DNS RPZ is not giving the people you speak of the control you suggest. Unrelated third parties are totally in control, and are defining the experience.

> ISP forcing the use of their RPZ-enabled DNS servers would not be something I support

Some ISPs are already doing just that: all DNS queries are forced into their server, with no ability to perform direct queries, NONE. So these networks deny all use of alternatives. If they implement DNS RPZ, their customers have ZERO options to access the blocked content. A peer in New York City is “blind” to his DNS updates when using this ISP and is forced to remote to other locations to make realtime lookups / debug. Yes, they do eventually propagate out, but the point is this ISP’s behavior shows how it’s positioned to block website access to a major metro area using DNS RPZ. That’s not a trivial issue. And there are other ISPs doing it as well.
This is how it could be presented by the ISPs: a value-added service. The end users wouldn’t have to use it, but the selling point for ISPs would be a “more user friendly” and “safer” web. The problem is that most end users are not highly technical, and such a service would build on their fears. The ISPs should be more transparent on this issue (and also on the NXDOMAIN monetisation), but would the customers care? Would they even know? If the ISPs were not transparent, then the decision would no longer be the customer’s.
It’s about reaching the destination you intend.
No Internet-browsing individual wants a walled garden. That was the excuse Prodigy and CompuServe gave for not opening up to the Web in 1994!! My God, is this how far we’ve come in 20 years??? That we do exactly the same thing?? I know I’m getting old when I hear the exact same types of comments at age 40 that I did at age 20. This is the same discussion as 20 years ago. Let’s just hope the outcome is the same, with free and unfettered access for all for the next 20 years - driven by customers who “can’t get to the same site as their neighbor”. I can see the customer service folks in different call centers left to defend the ill-intended actions of their employer: “It’s just a different version of the ad site you’re looking for!”
Heaven forbid we stop serving stick men with shovels saying “this page is under construction” .. because one day soon, somebody is going to put up something IMMEDIATE and LIVE that they think the world should see at that inactive website and it would be a pity if some lumbering ISP is too busy forwarding other people’s visits directly to Yahoo!, to allow the new message of the site’s actual owner to resolve in a timely fashion.
“Advertising”, “information” or “protest” .. a website’s content is the province of the site owner, and ill-intended intermediaries inject themselves at their peril. I look forward to the congressional testimony from ne’er-do-well ISPs and ad marketplaces explaining why this is OKAY to do to the public they should be serving.
If an ISP or keyword marketplace wants to take traffic coming to any given website for its own, they should dig deep (as thousands have before them) and BUY the destination site their users seek. To take what is intended for others, regardless of the content there, is to give rise to a cause of action to take their right to operate their ISP away from them. I would be glad to stand as the plaintiff for such a class action - and I suspect that some version of that may come at some point. There is a great deal of sympathy for net neutrality, and individuals have a right to stand tall and say “I get to choose my destination, regardless of the content my ISP thinks I’m entitled to.”
If after 20 years we’ve decided “we’re tired of ad pages, so let’s let the ISP decide what can resolve”, then the net as we know it is done. To paraphrase Franklin: those who would trade freedom for comfort deserve neither.
Paul, apparently you want to redo for the DNS what you did for SMTP 15 years ago.
SMTP RBLs were useful for quite a few years. Let’s face it: they are not anymore. Having been on both sides (I’ve been both a provider and a user of RBLs), my opinion is that today, SMTP RBLs are more a problem than a solution. They are only used now by people who don’t know better. They complicate mail server administration and are known to reduce reliability. There are tons of anecdotes about abusive blacklisting by RBLs, and everyone can cite scores of examples of being unable to deliver email to some misguided big site using a bad RBL. RBLs tend to favor big sites over small sites.
Did the RBLs solve the spam problem? Absolutely not. At best, they can be used for spam scoring. On the other hand, RBLs helped a lot in making SMTP delivery more unreliable than ever; many other antispam methods are much more effective today. RBLs are mostly dead for antispam.
So I really, really don’t think it would be a good idea to try and apply the same ideas to the DNS. I can see nothing good coming of that.
A lot of good came from SMTP RBLs. It wasn't a silver bullet - and Paul isn't suggesting that DNS RBLs are a silver bullet, either. It's just one more tool at our disposal. I don't know of one anti-spam product in the service provider space that doesn't make significant use of RBLs. Over time most people have shifted away from bad RBLs and RBL management practices have improved. That's good news for those looking to start DNS RBLs, because they can build on those practices. Frank
There was a real problem with Paul's solution. He was demanding accountability of everyone but himself. A system like that is bound to fail in the end. But he did realize that spam was going to be a major problem long before it was a crisis. Anyone who looked at what Paul was doing with MAPS and copied him a year or so later with 50% less dogma could be a very rich person today if they had wanted.
I’m torn on the idea. As someone who filters spam, I love it. As someone who protects free speech, it worries me. As many of you know, the US is trying to censor the Internet by passing COICA - taking domain names offline that are merely accused of copyright violations. So when someone creates a “sucks” site to expose bad products or bad business practices, they are accused of intellectual property crimes.
My point is to make sure the design of the system cannot be exploited to suppress free speech as a side effect of stopping spam. Keep that concern in mind as you design your system.
Also - I would like to see some sort of DNS lookup to determine the age of a domain and its expiration date through DNS (high speed) as opposed to whois. That way domains that are very new can be distinguished from those that are established.
And so it begins, just 6 months later:
http://isoc.org/wp/newsletter/?p=3091
http://news.cnet.com/8301-31921_3-20029282-281.html
Perhaps ISP acceptance of government approved RPZ lists will be the “compromise” for not using the “off switch”.
And so it continues
“Algeria shuts down internet and Facebook as protest mounts”
http://www.telegraph.co.uk/news/worldnews/africaandindianocean/algeria/8320772/Algeria-shuts-down-internet-and-Facebook-as-protest-mounts.html
I don’t believe that’s been substantiated:
http://www.renesys.com/blog/2011/02/watching-algeria.shtml
http://www.piracynetwork.com/tag/wataniya-telecom-algeria
“A few that we checked were unreachable, including the telecommunications regulatory authority (http://www.arpt.dz), the Prime Minister’s office (http://www.cg.gov.dz), and other sites hosted at Djaweb (Telecom Algeria’s hosting brand).”
At this moment: http://www.algerietelecom.dz & AT.DZ
- Not resolving
- No response from name servers (DNS-*.DJAWEB.DZ, per article)
Probably a good idea to hold off drawing conclusions about the cause for the moment. It may be censorship or it may be hacktivism. Some of the regimes in the region have drawn the lesson that their youth demand access to Facebook, and have unblocked it.
http://freedns.afraid.org/news/
“2011-02-13 19:16:12
It looks like service to mooo.com has just been restored.
As a reminder, it may take as long as 3 days to fully be restored across the entire internet, (172800 glue TTL + 86400 A TTL).
2011-02-12 07:07:07
Last night on Friday February 11th at around 9:30 PM PST mooo.com (the most popular shared domain at afraid.org) was suspended at the registrar level.
There is no ETA at this time. Due to the way propagation works it will take 3 days for a restoration to take effect.
freedns.afraid.org has never allowed this type of abuse of its DNS service.
We are working to get the issue sorted as quickly as possible.”
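For anyone puzzling over the “3 days” figure in that notice: it is simply the sum of the two TTLs quoted. The 172800-second glue TTL is 2 days and the 86400-second A-record TTL is 1 day, so a resolver that refreshed both records just before the change could keep serving stale answers for up to 172800 + 86400 = 259200 seconds, i.e. 3 days, in the worst case.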
Reminder of a previous comment:
>And there shouldn’t be any legal recourse.
I think some comments on this page cut directly to the heart of my original comments in this very thread regarding “reputation”:
http://www.dotweekly.com/when-mooo-com-was-seized-by-ice-80k-subdomains-affected
“After about 26 hours they decided to unseize mooo.com (due to TTL it could take effect +3 days later in some parts of the world). But the damage was already done. Some people had to explain to visitors of their blogs that they weren’t pedophiles. ICE totally screwed up this time and risked peoples reputations and possibly lives. Vigilante justice is often acted upon accused pedophiles.
The child pornography warning graphic that was displayed”
This wasn’t about DNS reputation; it was about the suspension of someone’s registration. It wouldn’t be accurate to conflate the two.
afraid.org is a DNS service whose customers now have a reputation of being pedophiles, because ICE said so.
This is about recognizing unaccountable internet censorship.
As afraid.org says, they had no opportunity to face their accuser before their domain was rezoned. A third party came in and did what it thought was right, with no discussion. Obviously someone able to override registrar and registry contracts felt mooo.com had a bad “reputation”, decided we all needed to be saved from its current zone settings, and used their ability to shut down that domain name.
We’re all smart enough to clearly see the analogy here.
It’s different. The implementation of DNS RPZ support would allow ISPs to voluntarily use the DNS blacklist of their choice. No gov’t intervention at all. In afraid.org’s case, a gov’t suspended their domain at the registrar level. The only correlation is if the gov’t mandated the use of DNS RPZ with certain blacklists. I don’t anticipate that happening.
Proposed, but it didn't happen, did it. If something like COICA passed, it would pass regardless of the status of DNS RPZ support in BIND. Just because a tool can be used with mal-intent doesn't mean we can't build the tool at all.
“DNS blacklist of their choice. No gov’t intervention at all.”
Why, after what we see ICE doing, should we believe governments around the world will NOT be publishing their own “recommended” DNS blacklists? And that laws demanding their use will NOT eventually follow?
“If you are on the no fly list, because you are known as maybe a possible terrorist”.
Rahm Emanuel, White House Chief of Staff
When I was in school, if I said “you are known as maybe” in an English class, I’m pretty sure I’d have gotten an F that day; the trivium education is now dead .... Now such fallacy-riddled statements become law.
We are moving in a very dangerous direction: internet censorship.
“What Libya Learned [shutdown] from Egypt”
http://www.renesys.com/blog/2011/03/what-libya-learned-from-egypt.shtml
http://www.japanfocus.org/-Makiko-Segawa/3516
”The measures include erasing any information from internet sites that the authorities deem harmful to public order and morality.”
And about a year later, I believe the final link is about to be put in place. Having censorship-list features built into BIND should make this very easy to implement now that ISPs will have a “legal requirement” for implementation.
http://torrentfreak.com/anti-piracy-censorship-bill-passes-senate-committee-110526/
“order ISPs to block the website,”
http://www.pcmag.com/article2/0,2817,2385307,00.asp
“order ISPs and other DNS providers to blacklist certain Web sites,”
Or alternatively, Congress will order ICANN to block the domains at source. ICANN and VeriSign are both 100% within the power of the US government. Until now there has been no mechanism for ICANN to really enforce restrictions on the country code TLDs. But with DNSSEC that all changes. If people start relying on the ICANN root signature for DNSSEC the US govt is going to be able to block domains through ICANN. Without DNSSEC they can order ICANN to attempt to block TLDs that refuse to comply with their requirements but they are going to fail and wreck ICANN in the process.
Can you clarify what you mean by "within the power of the US government"? Did you mean jurisdiction?
US government influence over ICANN goes way beyond mere jurisdiction. This is something of a concern in government circles as the executive has to consider the possibility of being forced into some action by the legislative branch. Yes, I know what the 'experts' say on the matter. The fact that someone considers themselves an expert on DNS protocol does not in any way make them an expert on the policy issues. What I read in the original article was a group of self-identified experts in DNS protocol trying to lecture policy makers in what is the policy makers' area of expertise. That does not seem like a winning argument to me.
First you wrote "ICANN and VeriSign are both 100% within the power of the US government" and now it's "influence"?!? Based on ICANN's (frustrating) policy development history, the USG clearly hasn't exercised the power you say it has. It would be a mistake for the technical people not to provide expert feedback on policy issues. Perhaps they don't/won't listen, but at least it's on the record. I don't think DNS-OARC is about to hire lobbyists just yet.
Depending where you are in the USG, the ability of the USG to control ICANN may be a bug or a feature. I have been in rooms with people of the 'hell yes we gonna use control of the DNS as a political tool' approach. They were the same people who see the possibility of a critical infrastructure attack more as an opportunity to declare martial law and suspend the constitution forthwith than a problem to be solved by anticipating it in advance. Fortunately the USG has been able to prevent those people getting their way to date by pointing out that while USG controls ICANN, trying to control the DNS through ICANN is like using a toothpick as a tire lever, the instrument is just not strong enough for the intended use. That is why DNSSEC changes the equation, the lever goes from being a tooth pick to being an unbreakable titanium girder. But the other reason has been that USG Internet policy has been focused on encouraging use of the Internet and encouraging Internet openness. Which is of course the reason that China and Russia are so upset with the fact of US control and the scheming to get the DNS under the control of ITU. I don't think that US control has been a bad thing. In fact as long as it is ensuring the openness of the DNS and Internet it is good. But the likes of Putin etc. oppose US control for the same reason. And the whole situation becomes very different when the character of US control is changed and that creates opportunities for the authoritarian and anti-democratic forces in the US polity.
Yes, there is little in the USG's perspective that's consistent through and through. =) While the USG may have the "controlling interest" in ICANN, I encourage greater worldwide participation in its governance. The less any one nation controls this worldwide resource we call the Internet, the more representative it will be of international needs and the less subject it will be to any one country's policies. That said, there's the risk that policies, in an attempt to achieve consensus, are developed to the lowest common denominator. Can you explain this statement: "That is why DNSSEC changes the equation, the lever goes from being a tooth pick to being an unbreakable titanium girder." Doesn't DNSSEC reduce DNS response manipulation?
To understand how DNSSEC changes the equation, consider the following scenario. An opportunistic Congressman decides to grandstand to his constituents by proposing to drop Cuba and/or Palestine out of the root zone. To do this he proposes a bill that forces ICANN to drop the TLDs or face criminal sanctions. Blocking such a bill will extract a cost from the administration. There are many Congressmen who will automatically back any bill that is endorsed by certain interests. The cost to the administration depends on what tools they have to discourage passage.

Case 1: No DNSSEC. If there is no DNSSEC deployment, the situation for the administration is easy. The grandstanding senator and his supporters are told that pushing the bill will inevitably lead to the breakup of ICANN and a loss of US influence. It is easy to see the outcome here. Non-US ISPs will simply ignore ICANN's (coerced) dictats, and the root effectively moves to a new set of root servers that are under ITU rather than ICANN control. So blocking such a plot is easy: the Secretary of State calls up the director of AIPAC and explains the situation in words of one or two syllables. Support for the plot collapses and the bill is forgotten.

Case 2: DNSSEC deployed with devices embedding the ICANN root. Deployment of DNSSEC changes everything, because the US now has the ability to make good on its coercive threats. The US now has the power to drop country code TLDs out of the root. So even if the administration wants to avoid doing so, the forces of ignorance and bigotry can probably muster a veto-proof majority in each house. US members of Congress are hardly known for their willingness to resist pressure to perform counterproductive actions. Witness the situation in Cuba, where the regime could be dismantled in a couple of years by simply dropping the sanctions regime entirely, but that can't happen because the Cubanista lobby is only interested in a change in the Cuban government that leads to a restoration of the previous regime and the property their families owned under the corrupt prior regime. In this case the administration probably loses the argument. There is still a revolt, but it is much more costly and difficult to make the transition. Instead of control transferring to the ITU in a couple of months, as in the prior case, the transition takes a couple of years to complete.

This is not a theoretical case. It is a concern that Steve Crocker himself has admitted to me was raised by the Russian government. But Crocker is not concerned about it, so obviously the Russians are not going to take pre-emptive measures in deference to him. Of course, in the real world Russia has become a gangster kleptocracy. The government is engaged in the murder of opponents in ways that are clearly intended to demonstrate its willingness to do such things. Why else use polonium for a murder? At one of the cyber-security meetings between the US and Russia, one of the Russian negotiating team even made a death threat against a member of the US delegation.

So people who are discounting the seriousness of the issue are pretty much fooling themselves. Russia and China are going to block the ICANN DNSSEC root because the survival of their despotic regimes depends on doing so. That is precisely why the US is maintaining control of the root in the first place. That is why I am planning a rather different approach to root management for embedded devices, so as to defuse this issue.
Or another "checkmate" angle: New TLDs.

It's actually easy to split the root now; not so easy as DNSSEC becomes more established. Right now it's just changing one's DNS servers or adding a hosts file. Yes, I've oversimplified a little, but not much in real terms. Again, recall what I said about my ISP having implemented New.Net zones in their resolvers. There is a direct analog to the New TLD deployment. As ICANN adds New TLDs under its scope, nobody will try to compete. It's reasonable to expect the more potentially successful TLDs to go first. This has the practical effect of reducing the potential reward of going it alone via alternative roots for those TLDs.

Look at what ICANN did to .BIZ: they said screw off, we WILL DELETE YOU, and they did. It's easier to split the root sooner than later. I see the New TLD deployment as a considerable bookend on that possibility from a market standpoint. Nobody will deploy with the threat of ICANN pulling a .BIZ on them, but split the root and ICANN will never do that again .... I think Phillip's point makes the case for DNSSEC being a bookend from the tech standpoint.

Again, I don't WANT the root split. I want a good solid leash placed on the arrogance I see increasing right now, and I just don't see any other options. Just as DNSSEC and ICANN releasing New TLDs will forever solidify its monopolistic control of the root, I've no fear that anybody would be stupid enough to try to deploy a competing .RU or .CN, lest they like pissing away money having their infrastructure attacked and never run reliably. A split root creates some real competition and the accountability that can come with competition. The USG wants to censor .COM & .NET? Go for it, and watch the VeriSign reg rate and stock price go down significantly - BTW, anybody who thinks their reg rate is real needs to go through their zone file BY EYEBALL as I have personally done.

People need to take very seriously the increasing censorship that results from the control we have given up to an undeserving central authority. Choices become fewer, and harder to implement, with time. The door is closing quickly. If somebody has a better option than splitting, I'd love to hear it. China and many other countries have already done it, since ICANN refused their requests for IDN TLD support. Then many years later, ICANN finally started to deploy IDN TLDs. Root splitting worked, didn't it. It produced the intended result: accountability after years of being ignored.
> To understand how DNSSEC changes the equation, consider the following scenario.

DNSSEC SIGNED:
FLU.GOV
HEALTHCARE.GOV
SEC.GOV
WEATHER.GOV
WHITEHOUSE.GOV
GSA.GOV

NOT SIGNED:
CIA.GOV <-------
DEFENSE.GOV
FEMA.GOV
IRS.GOV
PENTAGON.GOV
>If people start relying on the ICANN root signature for DNSSEC
>the US govt is going to be able to block domains through ICANN.
Centralization has a lot of management benefits. I prefer centralization for this reason.
However the power of the internet is too precious to humanity to allow manipulation by a single entity.
I’ve been watching for something like this for many, many years; I knew this was coming - it was fairly obvious. This is why DNS RPZ has infuriated me from the beginning.
Likewise, I’ve pondered solutions, and root splitting addresses the issue you mention. I see no way out of this except root splitting. China did this back in 2006 when they implemented IDN TLDs of .COM and .NET through I-DNS. The “monopoly” of the current root is an illusion; the root could be split overnight with minimal disruption. In fact this occurred as far back as New.Net, when my ISP at the time zoned New.Net in their DNS, and thus all domains resolved without any plugin or other change to my PCs.
I think it’s time to split the root and eliminate root centralization.
It’s very unfortunate that it’s come to this, but I can’t see any other solution. Under that system, if the US gov wants to censor .COMs, go for it. They will destroy VeriSign as people move to TLDs that are out of reach of the US gov. If the US gov then requires ISPs to filter the roots of other countries, the reaction will be the same: commerce will be affected, and so will large US companies, who will have something to say about this.
From the internet user’s point of view, there really is no need for a centralized root. A TLD root table on people’s PCs is trivial to implement and store. When ISPs then force all DNS requests back into their DNS RPZ, or other filtering mechanism, the next step will be to deviate from port 53 and plaintext.
Once the root is split at the TLD level, there will be no way to stop the technical evolution of DNS “censor free zones”. It’s the only way I’ve ever been able to come up with to end the arrogance.
It's amazing how you're proposing that we split the root even before the threats (facilitated by DNS RPZ) to open Internet access have occurred! It seems that such action would cause more damage than something like DNS RPZ. Just in case you think that Paul Vixie and other DNS experts aren't concerned about DNS RPZ misuse, it's worth reading this whitepaper which is their response to the PROTECT IP Act of 2011.
>It’s amazing how you’re proposing that we split the root even before the
>threats (facilitated by DNS RPZ) to open Internet access have occurred!
They are occurring, and have been occurring.
The idea that they have not started is a statement that can only be made by someone who has not actually had to deal with ISP blocking and manipulation issues.
In fact, at this very moment I’m dealing with yet ANOTHER block of domains that an ISP is blocking, as I work to see exactly what they are doing and why.
And for those reading this, the simplest proof of my statements is the most common manipulation I’ve seen done for some time now: ISPs are ignoring TTL values and greatly extending them into the multi-hour range. They are also watching SOA records, and if they don’t like them, they will mark the domain as having no record and block the domain.
And I’ve compared my notes with others managing tens of THOUSANDS of domains; they are seeing the same things I am.
I’m long past giving anybody the benefit of the doubt. I had to switch ISPs to get a clear, uncorrupted view of the internet for management reasons. Over the past few months my current ISP has started manipulations as well. I’m now out of options. The only way to get a clear view of the internet is to set up a VPN to one of our colo racks and do my debugging out from there. That’s insane!
A peer in New York City has had these issues for years. I’ve sent him my custom-written tools and they are useless: his ISP forces all DNS queries into their resolver, thus denying direct DNS queries! This guy manages a lot of customers, and this makes him totally blind to DNS debugging from his office and home.
And you want me to believe the problem does not exist? Sorry, long past that, Frank ... Long past that by many YEARS ....
I’ve been involved in engineering for over 30 years. When it comes to white papers, talk is cheap, and that too comes from experience ....
If you would be willing to document these blocks on a web page the broader community would definitely benefit from your research and work.
>If you would be willing to document these blocks on a web page the
>broader community would definitely benefit from your research and work.
I think a better solution would be involving “domainers” in these discussions.
To date, due to their managing truly massive blocks of domains, they tend to be far more aware of these issues than most. And yet they are often demonized anytime they are mentioned.
This knowledge exists and is available.
While I appreciate your words, I’m nobody special. I’d rather see the various groups start working with domainers and tap the knowledge and experience of their collective real-world battle scars. The groups working together is a real solution, and one that will not easily succumb to number 33:
http://www.nizkor.org/features/fallacies/poisoning-the-well.html
So is that DNS-OARC, the EFF, or another group? This issue of blocking is enough of an issue for five people to collaborate on a white paper.
>So is that DNS-OARC, EFF? or another group?
There is no organized “group” representing domainers that I know of. That has been another major issue needing to be addressed, in my mind. It’s been attempted in the past, and then the group gets destroyed from within, and I think my comments above suggest what I feel is going on.
However there are some top people that post on CircleID, and to this very thread.
>This issue of blocking is enough of an issue for five
>persons to collaborate on a white paper.
It’s a truly massive issue, long past needing confrontation.
I’m glad to finally see more people concerned, and perhaps just plain pissed off ...
First step is Knowledge, next step is Action. Waiting any longer is NOT AN OPTION!
Just the threat of root splitting, so long as it’s REAL (no joking! no BS! MEAN IT!), is a useful tool at this point.
A 6 year old CircleID thread worth considering:
http://www.circleid.com/posts/splitting_the_root_its_too_late/
Splitting the Root: It’s Too Late
Dec 02, 2005
“One of the consistent chants we’ve always heard from ICANN is that there has to be a single DNS root, so everyone sees the same set of names on the net, a sentiment with which I agree. Unfortunately, I discovered at this week’s ICANN meeting that due to ICANN’s inaction, it’s already too late. “
“A friend who traveled to Arabic countries reported that ISPs simply reroute traffic for the public routes to their own root servers,
and most people are none the wiser except that Arabic domain names work.
He only realized what was going on when he tried to reach the Red Cross web site and kept getting the local Red Crescent instead, and tracked it down to the DNS returning different answers from what he’d expected to get from the usual DNS. “
Bill Clinton on “Reputation Management”.
http://www.politico.com/news/stories/0511/54951.html
I tried to find a video with just the discussion at issue and could not. The only one I could find was this one. The section of interest runs from time index 00:55 to 04:26
http://www.youtube.com/watch?v=iJgyfXIDAHk
does anybody know how the patent referred to here relates to DNS RPZ?
Given the complete lack of any explanation, patent name, or patent number, I gotta suggest that no, nobody knows.
But I will say that they sound like another Intellectual Ventures sock puppet. Certainly an NPE.
I wonder what the date on the patent is, and whether my ignoreip patch is prior art: http://tinydns.org/djbdns-1.05-ignoreip.patch
>Post #17
>Phillip Hallam-Baker
>Yes, I was pointing to the fact that people can work round port 53 blocking.
>Post #32
>Ryan Singel
>I’ll find another DNS provider.
Now see this thread:
http://www.circleid.com/posts/20120327_dns_changer/
and this provided link:
http://dns-ok.us/
and this text on that page:
“Please note, however, that if your ISP is redirecting DNS traffic for its customers you would have reached this site even though you are infected.”
No, you can’t work around the intercepts / “redirections”; that is why they are there. And I’ve experienced them first hand. Any DNS query gets captured and FORCED into the ISP’s resolver.
Query to [any IP]:53 is routed to the ISP’s Resolver.
The march of censorship continues:
https://www.eff.org/deeplinks/2012/04/eff-joins-two-coalition-letters-opposing-cispa
“Crovitz: The U.N.‘s Internet Power Grab
Leaked documents show a real threat to the international flow of information.”
“Another proposal would give the U.N. authority over allocating Internet addresses.”
http://online.wsj.com/article/SB10001424052702303822204577470532859210296.html
http://www.circleid.com/posts/20120619_proposed_ietf_standard_creates_nationally_partitioned_internet/#8985
The proposal, entitled, “DNS Extension for Autonomous Internet (AIP),”
describes a way to give each nation, which the proposal cleverly calls an AIP, “its own independent domain name hierarchy and root DNS servers.” That would allow them to create their own top level domains without any need to coordinate them with ICANN or any other global entity. In other words, each country runs its own domain name space and decides for itself what TLDs exist and which domain names from outside will resolve in that space.
http://online.wsj.com/article/SB10001424052702303822204577470532859210296.html
The broadest proposal in the draft materials is an initiative by China to give countries authority over “the information and communication infrastructure within their state” and require that online companies “operating in their territory” use the Internet “in a rational way”—in short, to legitimize full government control. The Internet Society, which represents the engineers around the world who keep the Internet functioning, says this proposal “would require member states to take on a very active and inappropriate role in patrolling” the Internet.
http://www.ft.com/cms/s/0/9b122512-cb71-11e1-911e-00144feabdc0.html#axzz20UIa2J19
“Russia’s ‘internet blacklist’ sparks fears”
“Russia’s parliament has passed a law to create an “internet blacklist” in a move both internet and civil rights groups warn could be used to curtail internet freedoms in Russia.”
“Brett Solomon, executive director of human rights organisation Access, said: “The creation of a website blacklist looks like a power grab by the government to exert greater control over its citizens, silence opposition and win the ongoing political debate playing out on Russia’s internet.””
“By the end of the week, the Duma is likely to pass a law forcing non-governmental organisations with foreign funding to register as “foreign agents” and submit to greater regulation, and it is considering another bill, aimed at the press, that would make libel a criminal offence. Last month, the Duma passed a law sharply increasing fines for protest violations, which opposition leaders say is a blatant attempt to intimidate demonstrators.”
Thousands protest in Japan against new state secrets bill
http://rt.com/news/japan-secrets-bill-protests-133/
http://hosted.ap.org/dynamic/stories/E/EU_TURKEY_INTERNET_RESTRICTIONS?SITE=AP&SECTION=HOME&TEMPLATE=DEFAULT&CTIME=2014-02-18-14-34-59
“The legislation, approved by Parliament earlier this month, allows the telecommunications authority to block websites for privacy violations without a court decision. It also forces Internet providers to keep records of users’ activities for two years and make them available to authorities.”
http://news.yahoo.com/brazil-presses-eu-undersea-cable-skirt-u-links-115123537—finance.html;_ylt=AwrTWfz.cAtT4nwACq3QtDMD
“At a summit in Brussels, Brazilian President Dilma Rousseff said the $185 million cable project was central to “guarantee the neutrality” of the Internet, signaling her desire to shield Brazil’s Internet traffic from U.S. surveillance.”
“The Internet is one of the best things man has ever invented. So we agreed for the need to guarantee ... the neutrality of the network, a democratic area where we can protect freedom of expression,” Rousseff said.”
“Brussels is threatening the suspension of EU-U.S. agreements for data transfers unless Washington increases guarantees for the protection of EU citizens’ data.”
From the OP:
“I did help establish the “my network, my rules”“
what goes around comes around .....
next up, the EU should set up their own root servers. no doubt the privileged access the NSA has to the A and B roots provides traffic / tracking tap points for them. when you have log files for the root servers, you have the beginnings of all internet communications ....
“when you have log files for the root servers, you have the beginnings of all internet communications”
this is wrong, and displays a common misunderstanding of the nature, meaning, and purpose of root name servers. they periodically tell each recursive name server on the internet how to find COM, NET, ORG, and around 300 other top level names. they see about 500K queries per second in the aggregate. there are likely 10B queries per second being answered at “the beginnings of all internet communications”.
i wish brazil well in their endeavours. if they ask my advice i shall give my best. i am not proud of the U.S.‘s snooping, nor its second class treatment of non-citizens and data belonging to non-citizens.
“this is wrong”
Strictly speaking, yes I am wrong. Functionally speaking I am correct.
Your background on this issue is known; mine is not. So let me fill in that blank. Many years ago I was dealing with DDoS attacks on my servers. My background is engineering, and I’ve never feared setting a long block of time aside to focus on solving nasty problems. I decided to understand how DNS really does work. I found, as usual, the RFCs to be useless. So I decided to hand-code my own DNS server and implement it across tens of thousands of domain names. I did that over a decade ago, and the server does log every query. Analysis of the logs revealed many fingerprints that DDoS attacks have in common, resulting in code that detects the attacks and ignores the false queries. In over ten years there has never been a successful DDoS on my server (knocking on wood here!). And given that I’m not that smart nor funded, this has always raised the question in my mind of how ISPs can detect malware and stop providing customers service until they clean their PCs (I’ve seen this first hand), and yet FAIL to do the same for DNS queries that have source IPs different from their originating IP on their network ..... Hmmmmm ... Like insurance companies, I’ve come to the cynical view that ISPs like “problems”, as they can then justify higher fees and justify other behavior (Netflix caving to Comcast for traffic transport payments).
After hand-coding the DNS server, I hand-coded a router, HTTP server, email server, etc. Having been a coder for almost 40 years, I’ve noted how people tend to stay “in their box” and often miss the big picture. I certainly don’t claim to have the detailed level of understanding that many have, but I feel mine is pretty formidable at this point. And it’s my nature to look at systems and entertain myself trying to figure out how to break them, and thus harden them ...
Now as I recall reading the details of the ISC root server, there was something like 30+ GBS of traffic in its system. That’s a bit different than “500K queries per second in the aggregate”. Of course, as I know from looking at my own DNS logs, some days being as large as a gigabit or more, most DNS queries are junk at best, and at worst attacks or probes.
Yes, the root servers are supposed to just resolve the string to the left of the rightmost dot (the dot representing “root”). However, the root server sees the ENTIRE string for any query for which there was no cache higher up. So to start with, individual service queries DO make it to the root server fully intact.
While my use of the idea of log files was an attempt to be brief, and wrong, I will now use this as an opportunity to make a point I never see discussed. When queries make it to the root server, the identity of the originating server is known, as well as the ISP (IP ownership info). Sadly, most treat the 13 root servers as the 12 apostles sitting with Christ, who thus can do no wrong .... The root servers DO NOT deserve such trust. Before the Snowden docs, what I’m about to say would be flatly rejected, but now I bet most will ponder this well .....
The A and B root servers are controlled by the US gov. If the US gov wants to respond to a query WITH A LIE (and a short cache time), it can do so, and a user on an ISP CAN NOT KNOW THIS. The ISP CAN know this IF they actually looked, but likely they NEVER WOULD. If they did look, they could be bullied to shut up about it .... You may have noticed in the news lately there has been a rash of banker suicides, one having committed suicide with a nail gun, multiple nails in his torso and his head ..... Hmmmmmm
http://theweek.com/article/index/256560/whats-up-with-the-recent-rash-of-banker-suicides
Funny how they seem to have some involvement with the LIBOR scam, considered to be the greatest financial scam in the history of mankind ...
The so-called hacker group “Anonymous” is a good thought experiment here. They get way too much credit for what they do. From what I have seen, everything they have done can be accomplished by a person with a packet sniffer and access to a primary backbone of the internet ... And there are LOTS of such people. You see, we have been trained to think that compromised PCs are the primary threat to the internet, not the folks involved “on the inside”, so to speak. The internet is “private wires” being stitched together, and each is given implicit trust it does not deserve .... So too, such trust in the root servers is UNDESERVED.
Paul, I sit here in Salt Lake looking across the valley at a billion dollar NSA facility that I, and you, paid for, with no idea what its purpose is. With this post I think you understand I know DNS operations far better than most. And you know as well as I do, control of the responses of a root server, properly crafted, can allow communications to be intercepted WITH EASE. Yes Paul, you can’t intercept a specific transaction with ease, but you can eventually gain access to ANY PC or server you wish .... If you take your time.
While I can’t prove the root servers are being used in this way, we have plenty of evidence that communications pathways ARE being intercepted:
http://www.zdnet.com/nsa-malware-infected-over-50000-computer-networks-worldwide-7000023537/
Paul, it would be very foolish to think the US gov has missed the opportunity to use well crafted responses at the root to intercept communications AROUND THE WORLD. Very foolish indeed ....
Sorry for using the idea of log files to convey a simple idea to lay people ..... However THANK YOU for giving me the opportunity to go into more detail about the issues I was getting at!
Sounds like all the more reason to use DNSSEC.
“Now as I recall reading the details of the ISC root server, there was something like 30+ GBS of traffic in its system.”
i was the founder of ISC and was its president for 16 years. f-root has never seen that amount of traffic except during DDoS events, and i’ll be very surprised if ISC ever went on record saying what you recall reading.
“Yes, the root servers are supposed to just resolve the string to the left of the rightmost dot (the dot representing “root”). However, the root servers see the ENTIRE string for any query for which there was no cache higher up. So to start with, individual service queries DO make it to the root server fully intact.”
this still represents a misunderstanding on your part, perhaps two misunderstandings. yes, some individual queries do make it to the root server fully intact. for each recursive name server to learn the TLD NS RRset, it must forward one query per DNS TTL for every TLD. once this has been done and the TLD NS RRs are all cached, that recursive name server will send zero queries to root name servers. exceptions are typos in the input names, such that the TLD in the query is garbage and not in cache.
seeing one query per-recursive per-TLD per-TTL is not the same as what you claimed, nor is it a data leak that i’m particularly worried about. those who are worried about it can operate a stealth slave server for the root zone inside their network and avoid sending any queries to the 13 public root name servers. both ISC and ICANN make the root zone available via zone transfer to facilitate that configuration.
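As an illustration of the stealth-slave option described above, here is a minimal sketch (assuming the Python dnspython library, version 2.x) that pulls the root zone by AXFR. The transfer host lax.xfr.dns.icann.org is one host that has historically offered root-zone transfers; substitute whichever transfer source you actually use:

```python
# A minimal sketch, assuming dnspython 2.x, of fetching the root zone by
# zone transfer so a resolver can answer TLD referrals locally instead of
# querying the 13 public root servers. The transfer host named here is an
# assumption; use whichever AXFR source you actually rely on.
import dns.query
import dns.resolver
import dns.zone

xfr_host = "lax.xfr.dns.icann.org"
xfr_addr = dns.resolver.resolve(xfr_host, "A")[0].to_text()

root_zone = dns.zone.from_xfr(dns.query.xfr(xfr_addr, "."))

print(len(root_zone.nodes), "names in the root zone")
for name in list(root_zone.nodes)[:5]:           # a few sample delegations
    print(name)
```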
“The A and B root servers are controlled by the US gov. If the US gov wants to respond to a query WITH A LIE (and short cache time), it can do so and a user on an ISP CAN NOT KNOW THIS. The ISP CAN know this IF they actually looked, but likely they NEVER WOULD.”
protocol designers and implementers cannot take responsibility for ISPs not caring. what we can, and have, taken responsibility for is whether lies of the form you’re describing are detectable. i’m proud to say that with DNSSEC, yes they are. no root name server can answer with untrue (unsigned or mis-signed) data without detection. i wasn’t worried about the root server operators lying, but i’m glad that that concern is now “completely off the table”.
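To make the detectability claim concrete, here is a minimal sketch (again assuming dnspython, and a.root-servers.net at 198.41.0.4 as an illustrative target) that asks a root server for signed data and shows the RRSIGs arriving. Full validation would chain these signatures to the IANA root trust anchor, which this sketch does not attempt:

```python
# A minimal sketch, assuming dnspython, of the property described above:
# root answers carry RRSIGs, so an unsigned or mis-signed answer stands out.
# Full validation requires the root trust anchor; this only shows that the
# signatures are present in the response.
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query(".", dns.rdatatype.SOA, want_dnssec=True)
response = dns.query.udp(query, "198.41.0.4", timeout=5)

rrsigs = [rrset for rrset in response.answer
          if rrset.rdtype == dns.rdatatype.RRSIG]
print("answer RRsets:", len(response.answer), "of which RRSIG:", len(rrsigs))
# A validating resolver would check these RRSIGs against the root DNSKEY
# RRset, chained to the published trust anchor; a forged answer fails there.
```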
“And you know as well as I do, control of the responses of a root server, properly crafted, can allow communications to be intercepted WITH EASE.”
i also know that the root name servers are among the most closely watched parts of the infrastructure, and that there are safer and quieter ways to perform interception than to modify these responses. for example, nation-state created X.509 certificates having correct or wildcard distinguished names, and TCP/443 policy routing.
“Paul, it would be very foolish to think the US gov has missed the opportunity to use well crafted responses at the root to intercept communications AROUND THE WORLD. Very foolish indeed ....”
then colour me a fool, since my company (farsight security) operates the largest Passive DNS network in the world, and we see pretty much everything that the root name servers say, and we would know if the behaviour you’re describing was occurring—ever, not just rarely.
vixie
As for the bandwidth number, I have no time to dig up what I read. But the text was not just about that bandwidth; it was the basis for people creating filters within the system to help others process the data in real time (or close to it) for enemy discovery (spam, malware, etc). Seemed like a brilliant design to me, a great use of critical infrastructure data.

Paul, I have owned a typo of a root server’s name for about 12 years now. I obtained it in hopes of getting a hint of what goes on at the root level of the hierarchy. Interestingly, *I* see the *FULL* strings being passed in the queries by the misconfigured servers that pop up from time to time. Like you, I’ve heard *MANY* say the full original query never makes it down that far, yet I see those queries ..... Furthermore, *I* pass the full query during all steps of my server’s recursion and *NO SERVER* has *EVER* had a problem receiving substrings for which it has no authority. Is my server "behaving badly"? Perhaps, but if so, perhaps that is why I have a "unique view": I did not previously know what is possible and what is not, and now I do .... Perhaps those with DNS bibles on their desks might try "breaking" their code to see what *REALLY* happens, and what does not ..... Might give them some useful insights as well .... As I found out when I hand-coded my DNS server, what should happen and what does happen are two different things.

As for your comment about what you see and what you don’t on your network: due to DNS spoofing, there will always be ambiguities in which unfounded trust can be justified. Further, I would never expect such an exploit to be used carelessly or often. But I *DO* expect it’s used .... There are certain methods one treats with great respect ..... Also note the Snowden archive has been reported as having a million-plus pages. This is why we keep hearing more slowly over time. Would you like to bet the most juicy stuff is being left for last?
I just did a back-of-the-napkin calculation and I think I see my error. Your number suggests 43.2 billion root server queries per day (500K per second * 60 * 60 * 24). So I think I remembered a number correctly, but not its "units". Sorry for the error. But the spirit in which the number was offered remains intact: at 43 billion samples per day, the root servers DO provide an ideal point through which to statistically intercept targeted communications, as well as MONITOR the world’s traffic IN REAL TIME. To put the number into perspective, Internet World Stats currently counts 2.4 billion internet users as of June 30, 2012. That is 18 root server queries per day for each possible internet user .....
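Spelling the arithmetic out with explicit units (the figures are the ones quoted above):

```python
# The back-of-the-napkin numbers above, with the units written out.
queries_per_second = 500_000                     # aggregate across the root servers
seconds_per_day = 60 * 60 * 24                   # 86,400
queries_per_day = queries_per_second * seconds_per_day

internet_users = 2_400_000_000                   # Internet World Stats, June 30, 2012

print(f"{queries_per_day:,} root queries per day")                 # 43,200,000,000
print(f"{queries_per_day / internet_users:.0f} per user per day")  # ~18
```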
Today’s post on CircleID:
http://www.circleid.com/posts/20140316_if_the_stakeholders_already_control_the_internet_netmundial_iana/
Go to Time Index 26:41 http://www.c-span.org/video/?317453-1/communicators-fadi-chehad
“How do we govern what is on the internet?” He makes that comment in regard to the question “what is next”.
http://news.yahoo.com/domain-name-revolution-could-hit-trademark-defence-un-231011348.html
“Under international rules, Web registration firms must void the registration of losers in WIPO cybersquatting cases. The UN body is already hearing its first case concerning a new gTLD, filed in February and pitting a German company against the Dutch registree of the still-inactive website canyon.bike.”
The case was lost over a GENERIC keyword!
http://www.thedomains.com/2014/03/17/canyon-bike-lost-in-1st-udrp-decision-on-a-new-gtld-domain-name/comment-page-1/
It’s here: goodbye old internet, welcome new internet ....
http://news.yahoo.com/putin-foes-fear-internet-crackdown-blogger-law-sails-152919266—sector.html
MOSCOW (Reuters) - Russia’s upper house of parliament approved a law on Tuesday that will impose stricter rules on bloggers and is seen by critics as an attempt by President Vladimir Putin to stifle dissent on the Internet.
...
“The new policy is to restrict free information exchange, restrict expression of opinion, be it in written text, speech or video. They want to restrict everything because they’re headed towards the ‘glorious past’,” Anton Nosik, a prominent Russian blogger and online media expert, told Reuters.
“China is much more liberal than what Russia wants to achieve,” he said, describing the move as unconstitutional.
http://www.usatoday.com/story/news/world/2014/06/04/tiananmen-anniversary/9946783/
“BEIJING – Tight security around Beijing’s Tiananmen Square Wednesday combined with a months-long crackdown against dissidents and a quarter century of enforced amnesia to prevent public commemoration of the crushing of the 1989 pro-democracy movement.”
“What hasn’t shifted is official silence on the protests, backed by a massive security apparatus and strict control of the Internet. The entire “Beijing Spring” of 1989 is not taught in schools. Most Chinese are unaware of the sensitive date, called “6/4” in Chinese, which sealed in blood the Party’s approach to controlling a billion people: allow economic freedoms and punish those seeking political rights.”
http://www.campusreform.org/?ID=5696
“Andrew Lampart, a student at Nonnewaug High School, was assigned a debate on gun control.
When he attempted to perform research on the issue, he discovered that many conservative websites and news outlets were blocked but their liberal counterparts were not.”
“I used my study hall to research gun control facts and statistics. That is when I noticed that most of the pro-second amendment websites were blocked, while the sites that were in favor of gun control generally were not,” Lampart told Campus Reform.
“As FoxCT reports, “Lampart claims he alerted the Superintendent of Woodbury schools to the situation but, after a week, nothing was done. He took his concerns to the Board of Education on Monday.”
It is unclear if the Nonnewaug High School plans on adjusting the Political/Advocacy Groups section of restricted web pages. A request for comment from Nonnewaug High School was not returned in time for publication.”
A question I have is: how many tax dollars went into setting up such a biased filtering system? What is the extent to which such filters have been designed and implemented, and across how many subject areas? Perhaps this would be a good college thesis for Andrew to research .....
http://www.circleid.com/posts/20140829_radical_shift_of_power_proposed_at_icann_govts_in_primary_role/
“Why ICANN would voluntarily choose to empower non-democratic governments with an even greater say over global Internet policies as this bylaws change would do is anyone’s guess.”
Based on my posts to this thread over the past four years, I would suggest the reason is to allow greater access to the internet “off switch”.
https://www.techdirt.com/articles/20151119/19003432869/france-responds-to-paris-attacks-rushing-through-internet-censorship-law.shtml
“France Responds To Paris Attacks By Rushing Through Internet Censorship Law”
...
“It’s difficult to see how it does any good, and instead it opens up the possibility of widespread government censorship and the abuse of such a power.”
http://online.wsj.com/community/groups/censorship-america-1369/topics/french-government-plans-extend-internet
“French Government Plans to Extend Internet Censorship”
...
“PC Inpact and digital rights organization La Quadrature du Net have therefore argued that the bill puts “the entire Internet” under government censorship.”