Is the Internet Dying?

There are indications that the Internet, at least the Internet as we know it today, is dying.

I am always amazed, and appalled, when I fire up a packet monitor and watch the continuous flow of useless junk that arrives at my demarcation routers’ interfaces.

That background traffic has increased to the point where it makes noticeable lines on my MRTG graphs. And I have little reason for optimism that this increase will cease. Quite the contrary, I find more reason to be pessimistic and believe that this background noise will become a Niagara-like roar that drowns the usability of the Internet.

Between viruses and spammers and just plain old bad code, the net is now subject to a heavy, and increasing, level of background packet radiation. And the net has a very long memory - I still get DNS queries sent to IP addresses that haven't hosted a DNS server - or even an active computer - in nearly a decade. Search engines still come around sniffing for web sites that disappeared (along with the computer that hosted them, and the IP address on which that computer was found) long ago.
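A minimal sketch, assuming the scapy packet library, a placeholder capture interface named "eth0", and stand-in 192.0.2.x addresses for long-retired machines, of how such stray DNS queries can be tallied:

```python
# Sketch: count unsolicited DNS queries aimed at addresses that no longer
# host a name server. Requires scapy and capture privileges; "eth0" and the
# 192.0.2.x addresses below are placeholders, not real infrastructure.
from collections import Counter
from scapy.all import sniff, IP, DNS

DEAD_DNS_ADDRS = {"192.0.2.10", "192.0.2.11"}  # hypothetical retired name-server addresses
hits = Counter()

def tally(pkt):
    # qr == 0 marks a DNS query (as opposed to a response).
    if pkt.haslayer(IP) and pkt.haslayer(DNS) and pkt[DNS].qr == 0 \
            and pkt[IP].dst in DEAD_DNS_ADDRS:
        hits[pkt[IP].src] += 1

# Watch for ten minutes, then report the most persistent sources.
sniff(iface="eth0", filter="udp dst port 53", prn=tally, timeout=600)
print(hits.most_common(10))
```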

Sure, most of this stuff never makes it past the filters in my demarcation routers, much less past my inner firewalls. But it does burn a lot of resources. Not only do those useless packets burn bits on my access links, but they also waste bits, routing cycles, and buffers on every hop they traverse.

It will not take long before the cumulative weight of this garbage traffic starts to poison the net. Already it is quite common for individual IP addresses to be contaminated by prior use. I am aware of people who are continuously bombarded by file-access queries because a prior user of their IP address once shared files from it. Entire blocks of IP addresses are also contaminated, perhaps permanently, because they once hosted spammers, causing those blocks to be entombed in the memories of an unknown number of anti-spam filters - not merely at the end-user level but also deep in the routing infrastructure of the net. And a denial-of-service virus, once out on the net, can only be quieted, not eliminated; such viruses remain virulent and ready to spring back to life.
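A minimal sketch of the DNSBL-style lookup that many anti-spam filters use to decide whether an address is still listed (the blacklist zone and the sample address are placeholders; real lists have their own query policies and return codes):

```python
# Sketch: check whether an IPv4 address appears on a DNS-based blacklist.
# The convention: reverse the octets, append the blacklist zone, and query;
# any A-record answer means "listed", NXDOMAIN means "not listed".
import socket

def is_blacklisted(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any answer at all means the address is listed
        return True
    except socket.gaierror:
        return False                  # NXDOMAIN: not currently listed

# Hypothetical previously-contaminated address from the documentation range.
print(is_blacklisted("192.0.2.25"))
```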

The net does not have infinite resources - even if IPv6 is deployed the contamination of IP address space will merely be slowed, not stopped.

Better security measures, particularly on the sources of traffic, will help, but again, unless something radical happens, the contamination will merely be slowed, not stopped.

I believe that something radical will happen: We may see the rapid end to the “end-to-end” principle on the Internet.

We are already observing the balkanization of the net for political and commercial reasons. Self-defense against the rising tide of the net’s background packet radiation may be another compelling reason (or excuse) for net communities to isolate themselves and permit traffic to enter (and exit) only through a few well-protected portals.

This balkanization may be given additional impetus by a desire to escape from the ill effects of poorly designed regulatory systems, such as ICANN.

So, between spam, anti-spam blacklists, rogue packets, never-forgetting search engines, viruses, old machines, bad regulatory bodies, and bad implementations, I fear that the open Internet is going to die sooner than I would have expected. In its place I expect to see a more fragmented network - one in which only “approved” end-to-end communications will be permitted.

The loss of open end-to-end communications will, in itself, be a great loss.

But of even more concern will be the fact that these portals, or gates, will require gatekeepers, which is merely a polite word for censors. Our experience with ICANN has shown us how easy it is for focused and well-financed interests to capture a gatekeeper. In the present political climate, in which government powers are conferred onto private bodies without a counterbalancing obligation of accountability, the loss will be much greater.

---
Posted with permission from the CaveBear Blog

By Karl Auerbach, Chief Technical Officer at InterWorking Labs

Comments

rhyolitic  –  Aug 25, 2003 9:00 PM

When our hardware and protocols mature in speed, bandwidth, and technique, the technical problems will disappear. Yet I share your concern about the ultimate control mechanisms on the net. The consumer-based infrastructure of the U.S.A. clearly serves business first and liberty second. If our liberties undercut market potential, they will be curtailed. That has a whole range of implications for the future of the net.

But as far as I'm concerned, the "approved end-to-end" connections already exist in the form of VPNs. VPN clients have the nasty habit of preventing communication between networks. You may find yourself with multiple VPN clients that all contend to control your access. This model fails miserably. Segmented networks should be seen as an obvious threat to government and business as well.

Allen Smith  –  Aug 29, 2003 7:44 PM

I suggest that there are means for enforcing accountability on software companies (e.g., Microsoft) that are not currently being taken advantage of. EULAs (shrinkwrap licenses) are _End User_ License Agreements. If a software company goofs up and the results have negative effects on people who _aren’t_ end users of that particular product (e.g. recipients of virus-laden emails), that company is not protected from lawsuits by the EULA, as far as I can see. (Worried about open-source software, freeware, etcetera? First, lawsuits target those with resources. Second, even if the purpose of the lawsuit is to shut someone down, an open-source foundation should be able to simply transfer its copyright back to the original authors, who only gave it up on condition of the software continuing to be available.)

Similarly, if people start holding an ISP properly responsible for failing to cut off connections from its users when those connections are causing problems for others (sending viruses or worms, for instance) - whether by cutting off _all_ connections from that ISP (blacklists do that, and more selectively than the whitelist approach, which is the "approved end-to-end" problem he pointed out), by lawsuits against the ISP, or by other means - that will likewise help quite a bit.

VoxDomains.com CEO  –  Aug 31, 2003 6:54 AM

The Internet is not dying, if only because there is a growing number of people who cannot imagine living in a world without it. The Internet is just changing. I am quite familiar with the problem featured in this article. There is nothing wrong with commercialization. Look around: every portal site - and not only portals - is trying to make its users "more loyal" by creating different types of communities and offering different bonuses to its members. This is why "virtual states" will have more and more physical (commercial) boundaries. The Balkans are like the rest of the world - simply a small copy of it. The problem with search engine indexes is closely related to the problem of creating Artificial Intelligence (AI). Too many researchers are trying to make a system "learn intelligently", but very few of them are trying to make it "forget intelligently". These processes are two sides of the same coin. This is how Nature solved the need/resource trade-off.

Dustin  –  Oct 11, 2003 12:39 AM

Vendors will soon be beating down the doors of service providers with new backbone and access gear that will allow a level of control over traffic never before thought possible. This new gear will slide into place to help prevent the flood of traffic generated by worm propagation, and for other mom-and-apple-pie reasons. Then, later, the service providers will realize that this fine degree of control over all traffic gives them power, which they will then abuse. Expect the end shortly after that. The first exchange in this battle has already occurred: the call for neutral networks has resulted in a resounding report of self-regulation from the cable broadband providers.

An anecdote: recently, while visiting DC, I spoke with a big national ILEC policy person who told me they supported network neutrality. I asked if they would ever discriminate against traffic that offered a service competing with a similar offering of their own. "Certainly not" was the immediate and believable answer. I asked if they would make access to traffic-enhancement technology like QoS part of this strictly neutral regime. You guessed it - they did not feel that was needed. I did not bother to explain to him how that would be a form of traffic discrimination. You can draw your own conclusions on the meaning of this dialog.

