P2P: Boon, Boondoggle, or Bandwidth Hog?

Depending on whom you ask, peer-to-peer (P2P) services may be the best thing that ever happened to the Internet or a diabolical arbitrage scheme that will ruin all ISPs and bring an end to the Internet as we know it. Famous P2P services include ICQ, Skype, Napster, and BitTorrent. Currently, a new P2P service from the BBC called iPlayer is causing some consternation and eliciting threatening growls from British ISPs.

P2P explanation for non-nerds: a P2P service is one in which transactions take place directly between users’ computers rather than on some central server somewhere in cyberspace. Google search is NOT a P2P service; when you make a query, a Google-owned server somewhere searches a Google database and then returns the answers to your computer. Napster IS (or WAS) a P2P service; the music you downloaded from it wasn’t stored at any central site; it sat on the computers of the people who contributed it and was transferred directly from their computers to yours without passing through any central server.
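
For the code-inclined, here is a toy sketch in Python of the idea (purely illustrative; this is not how Napster or Skype actually worked): every peer is both a server that shares and a client that fetches, and the file travels directly between the two machines.

# Toy peer-to-peer transfer: peer A shares a file, peer B pulls it
# directly. No central server ever touches the data. Illustrative only;
# the address and port are arbitrary choices for the demo.
import socket
import threading
import time

def share(path, host="127.0.0.1", port=9000):
    # The "server" half of a peer: hand the file to whoever connects.
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, open(path, "rb") as f:
            conn.sendall(f.read())

def fetch(host, port, dest):
    # The "client" half of a peer: pull the file from another peer.
    with socket.socket() as cli:
        cli.connect((host, port))
        with open(dest, "wb") as f:
            while chunk := cli.recv(4096):
                f.write(chunk)

if __name__ == "__main__":
    with open("shared.txt", "w") as f:
        f.write("hello from peer A\n")
    threading.Thread(target=share, args=("shared.txt",)).start()
    time.sleep(0.2)                 # give the sharing peer time to listen
    fetch("127.0.0.1", 9000, "copy.txt")
    print(open("copy.txt").read())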

Advantages of P2P

Scalability: P2P services are inherently scalable. If each user shares part of the load, more users mean not only more demand but also more capacity. By contrast, if a service runs on a central host, more users eventually mean that more resources must be added at the host. If new host resources aren’t added, the service breaks, slows to a crawl, or degrades in some other way. A toy model after this paragraph makes the contrast concrete.
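
Here is that toy model in Python (all numbers are made up for illustration, not measurements): in the client-server case capacity is fixed while demand grows with users; in the P2P case every new user brings capacity along with demand.

# Toy capacity model; all numbers are illustrative, not measurements.
def central_headroom(users, server_capacity, demand_per_user):
    # Client-server: capacity is fixed, so headroom shrinks as users grow.
    return server_capacity - users * demand_per_user

def p2p_headroom(users, upload_per_user, demand_per_user):
    # P2P: each new user adds upload capacity as well as demand.
    return users * (upload_per_user - demand_per_user)

for n in (1_000, 5_000, 10_000):
    # Assume each user demands 1 unit; the P2P peer also uploads 2 units
    # (a generous figure, but this is a toy).
    print(n, central_headroom(n, 5_000, 1), p2p_headroom(n, 2, 1))
# At 10,000 users the fixed server is 5,000 units short, while the P2P
# pool still has headroom, as long as peers upload more than they take.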

Survivability: If you don’t have a central server, you’re not vulnerable to central failure—nor can terrorists target a service whose elements are widely dispersed. Related post: America’s Antiterrorism Network—Distributed Data Storage. The Internet itself can be considered a network of peers since it has no central site; it was designed to be survivable and its headless nature was an essential element in its survivability.

Hardware Economics: ICQ, an early chat service, was one of the earliest free Internet services to net a small fortune for its founders. The founders could afford to make the service free even as it attracted hordes of users because of its P2P architecture. They didn’t need revenue to buy lots of hardware because the work of making connections, and even storing the directory, was done cooperatively on the computers of their users. Making a service free is a good way to get lots of users in a hurry. But, if a service is free and not ad-supported, lots of users can mean a big unfunded hardware bill (even though hardware is much, much cheaper now than it was in the ICQ days). P2P is a resolution to this quandary.

Bandwidth Economics: Here’s where the controversy begins! Suppose that all Skype calls had to pass through central servers; those servers would need huge pipes to connect them to the Internet, and eBay, Skype’s owner, would have to pay ISPs huge sums for those pipes. That would make ISPs happy, but Skype doesn’t work that way. Calls go “directly” over the Internet from one Skype user to another; even call setup is done using the shared resources of online Skype users rather than a centralized resource (see here if you didn’t know you agreed to help connect other people’s calls when you accepted the Skype TOS). So the bandwidth needed for both the calls and the call setup is provided by the users. If eBay had to provide all this bandwidth, Skype-to-Skype calls probably wouldn’t be free.
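
A back-of-envelope calculation shows why. Every figure below is a hypothetical placeholder, not a real Skype or eBay number; the $10/Mbps/month transit price is the bulk figure a commenter cites later in this thread.

# What would relaying every Skype call through central servers cost?
# All figures are hypothetical placeholders, not eBay's or Skype's numbers.
concurrent_calls = 1_000_000    # calls in progress at peak, say
kbps_per_call = 80              # ~40 kbps each way through a central relay
price_per_mbps_month = 10       # assumed bulk transit price, $/Mbps/month

peak_mbps = concurrent_calls * kbps_per_call / 1_000
monthly_bill = peak_mbps * price_per_mbps_month
print(f"{peak_mbps:,.0f} Mbps at peak -> about ${monthly_bill:,.0f}/month")
# 80,000 Mbps at peak -> about $800,000/month, every month, before
# servers and staff. Push that load onto the users and the calls stay free.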

The BBC is planning to make most of its content available free over the Internet for a limited time after broadcast (remember, it is funded differently from American TV). It says the system is P2P, meaning that the shows will mostly travel from one user’s machine to another over those users’ own Internet connections rather than being served directly from the BBC to each user. “Foul!” cry the British ISPs. “The BBC isn’t going to have to buy more bandwidth to offer this service; it’s going to use the bandwidth users already have. Usage’ll go up. We won’t get any more revenue from anyone. Customers’ll complain that their Internet connections are getting slow.”

Who’s right? More in an upcoming post.

By Tom Evslin, Nerd, Author, Inventor

His personal blog ‘Fractals of Change’ is at blog.tomevslin.com.

Comments

Matthew Elvey  –  Aug 23, 2007 9:15 PM

P2P services are inherently scalable?  No, they’re not, though they are much easier to provide cheaply in high volume, and many are scalable.  “Some early P2P implementations of Gnutella had scaling issues.” - http://en.wikipedia.org/wiki/Scalability

You’re conflating ‘inexpensive’ and ‘scalable’.

If a service runs on a central host, and adding resources allows it to scale in a linear or better fashion, then it’s scalable.

But you’re right about the economics and you’re right about survivability, provided you’re talking about a P2P system that has no central server; most P2P systems do rely on a central server for some things.

ISPs will deal with the increased traffic in order to prevent severe congestion.  Some will just try to ban, block or throttle it (with tools like these) while others will add capacity, and most will do some of both.  If you’re an ISP, one relatively cheap way to add capacity is to set up P2P content distribution nodes on your ISP network to feed your users.

Let’s consider a user on a typical 1.5Mbps downstream DSL link paying $30/mo. Consider the worst case: they’re pulling down 1.2Mbps 24/7, AND none of the content is coming from the ISP’s own network. The ISP is only paying about $10/Mbps/month for peering since it’s buying in bulk. So, sure, that’s significant, but it’s not completely unsupportable. A 6Mbps cable customer paying $50/mo and using it 100% 24/7 is going to be more of a strain; plus, the shared medium of a cable network means there will be congestion if enough customers in a neighborhood try to do this.
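
Making that arithmetic explicit (a rough sketch in Python; the prices and rates are the assumed worst-case figures above, with nothing served on-net):

# Worst case from above: sustained 24/7 usage, nothing served on-net,
# bulk transit/peering at an assumed $10/Mbps/month.
def transit_cost(mbps_sustained, dollars_per_mbps=10):
    return mbps_sustained * dollars_per_mbps

dsl_revenue, dsl_cost = 30, transit_cost(1.2)      # $30/mo plan, 1.2 Mbps 24/7
cable_revenue, cable_cost = 50, transit_cost(6.0)  # $50/mo plan, 6 Mbps 24/7

print(f"DSL:   ${dsl_cost:.0f} transit vs ${dsl_revenue} revenue")
print(f"Cable: ${cable_cost:.0f} transit vs ${cable_revenue} revenue")
# DSL:   $12 vs $30 -- significant but supportable.
# Cable: $60 vs $50 -- underwater before any other costs, hence the strain.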

Not surprisingly, given my quick calculations, Comcast is known to use Sandvine gear to block, e.g., BitTorrent seeding, while I don’t think AT&T has done that yet.

I remember when I first got DSL (circa ‘98) and called up the PacBell NOC because a 56kbps video stream I had bought access to was hitting extreme congestion on their backbone; it took them a couple of weeks to fix the problem by adding capacity. Having to constantly add capacity isn’t new; it’s been the rule since the ‘net was born.
