Knowing Less

The announcement yesterday morning in the Times that New York State AG Andrew Cuomo had reached an agreement with three US network operators (Verizon, Sprint, and Time Warner) about blocking child pornography was both less and more important than it appeared.

It’s less important in that part of the agreement covers something ISPs already do, which is to react swiftly when they get information about child porn on the sites they actually host. Federal law requires hosts to promptly report such material, once they learn of it, to the federally sponsored National Center for Missing and Exploited Children (NCMEC), or risk fines of $50,000 per image. ISPs also routinely (and promptly) take down this material from their servers once they know about it. (Here’s a code of conduct suggesting this.)

It’s more important in that it takes a practice that had been implicit, voluntary, and quiet—the sharing of information between law enforcement authorities and ISPs about child pornography, including the sharing of lists of sites that law enforcement authorities want blocked—and makes it into an enforceable, mandatory agreement. It looks as if the explicit agreement that is the subject of the Times story concerns only Usenet. But there is no particular limiting principle here, and the agreement could extend to anything else that law enforcement believes to be illegal.

Coupled with the story that French ISPs have agreed to block a wide variety of distasteful content, including hate speech as well as child pornography, today is a day to reflect on possible negative consequences of such an approach. (I’ve written about this in the past, here.)

What’s on the list? Once AGs move beyond Usenet, and they will, what more will be added? Will innocent speech be mistakenly identified as unsavory, and blacklisted? How does a site get off the list? Movie houses can stop showing adult movies; a URL can likewise stop hosting indecent material, but will anyone take it off the blacklist when it does?

Who makes the list? The story about the French approach indicates that users can nominate sites to be blocked. In our country, NCMEC has enormous authority. What’s the clearance process for these techniques? Could they be abused?

What does it take to carry out the desires behind the list? If an ISP is handed a list of sites or communications to block, what does it have to do to implement the blockade? Four years ago blocking IP addresses (the then-common technique) was found to block too much innocent speech to be constitutional. (Opinion here) If ISPs have figured out how to do more sensitive filtering, does that sensitivity require detailed information about what each user is up to (deep packet inspection)? Could that sensitivity be misused for the ISPs’ own commercial reasons (targeted advertising)? Also—could the blocking of categories (note the reference to all Usenet postings) result in the blocking of innocent speech?
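
To make the overbreadth worry concrete, here is a minimal sketch (all hostnames and addresses are hypothetical, and this is not any ISP's actual system) of why IP-level blocking sweeps in bystanders: when many unrelated sites are virtually hosted on a single address, blacklisting that address takes them all down.

```python
# Minimal sketch of IP-level blocking (hypothetical data throughout).
# Many unrelated hostnames are virtually hosted on one shared address.
hosting = {
    "203.0.113.7": [                 # documentation-range address, purely illustrative
        "offending-site.example",
        "church-newsletter.example",
        "small-business.example",
    ],
}

ip_blacklist = {"203.0.113.7"}       # the list names only the first site, but blocking works by IP

def is_blocked(hostname: str) -> bool:
    """IP-level filtering: drop a request if the host's address is blacklisted."""
    for ip, hosts in hosting.items():
        if hostname in hosts:
            return ip in ip_blacklist
    return False

for site in ["offending-site.example", "church-newsletter.example"]:
    print(site, "blocked?", is_blocked(site))
# Both print True: the innocent site is collateral damage, which is the
# overbreadth problem the court identified in the Pennsylvania case.
```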

What does this mean for the free flow of information online? Governments around the world have an interest in shielding their citizens from illegal speech—in the case of child pornography, the animating idea is protecting children from being abused. We react immediately to child pornography, and its creation and dissemination are widely prosecuted. (We’re even going after the dissemination of virtual child porn these days—but see Justice Souter’s stirring dissent.)

Going beyond the prosecution of the people involved in the production of this unspeakable material to requiring network operators to affirmatively monitor for child pornography is a substantial step—and one that U.S. federal law has so far avoided, as far as I know. Will this interest in going after intermediaries, reported on yesterday, always trump the social value of having an interconnected, un-inspected, fast-flowing internet?

By Susan Crawford, Professor, Cardozo Law School in New York City

Comments

Content filtering is not uncommon Dan Campbell  –  Jun 12, 2008 1:51 PM

Content filtering is not uncommon on private enterprise / business networks. Proxy devices, for example Blue Coat, can be configured to be pretty specific, down to the URL level and user level, or can be configured with general rules that end up being catch-alls in certain categories, e.g., pornography, gambling, financial, etc. These are prone to errors, catching things not necessarily intended. For example, PowerBall often gets categorized as a gambling site (maybe it kind of is, but it’s not exactly the offshore online gambling site that the generic “gambling site” rule configured by an organization was intending to limit). Similarly, workplaces want to restrict people from doing stock trading during the day, but invariably catch other financial sites and end up restricting staff from doing basic banking. The more general the configuration, the more likely it will filter too much; the more specific it is, the more likely it will miss what it intends to catch, or a web site will simply change a bit and bypass the filter.
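
As a rough sketch of the kind of rule engine described above (this is not Blue Coat's actual configuration; every category, keyword, and URL here is made up), the following shows how broad category rules overblock while narrow URL rules are easy to sidestep:

```python
# Rough sketch of a category-based proxy filter (hypothetical rules and URLs).

CATEGORY_KEYWORDS = {                       # broad "catch-all" style rules
    "gambling":  ["poker", "bet", "ball"],  # "ball" is far too greedy
    "financial": ["bank", "trade", "stock"],
}

URL_BLACKLIST = {"poker-palace.example/tables"}   # narrow, URL-level rule

def categorize(url: str) -> list:
    """Return every category whose keywords appear anywhere in the URL."""
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(w in url for w in words)]

def blocked(url: str) -> bool:
    return url in URL_BLACKLIST or bool(categorize(url))

print(blocked("powerball.example"))            # True:  a lottery site swept up by "ball"
print(blocked("mybank.example/login"))         # True:  basic banking caught along with trading
print(blocked("p0ker-palace.example/tables"))  # False: a trivially renamed site slips past both rules
```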

Many ISPs are in the game of just IP transport and have pretty open networks, just providing the channel for passing packets. Many large backbone providers don’t want to insert firewalls or other devices in the data path for many reasons. Inserting more intelligent devices in the data path can really complicate the architecture and negate other benefits such as features, performance, and redundancy. Some devices rely on being inline (which, due to cost, creates a dilemma over where to put the devices and how many to deploy); others sit out of path with traffic redirected to them, such as through WCCP, which presents a whole other set of issues; and some are stateful (as most firewall implementations are), which runs counter to the connectionless nature of IP. And this is before we talk about the effort and staff resources needed to configure and operate the devices (as well as whatever network changes are required).

It is inherently dangerous to have ISPs making decisions on what gets blocked. The decisions will end up varying across the industry and will seem arbitrary, or even malicious or driven by some hidden agenda, perhaps angering customers (see the Comcast issue!), even if the intentions are good.

There are entities on the Internet with no governing authority whatsoever that publish blacklists based just on IP address (such as the address of someone who is spamming). I’ve seen cases where they publish an ENTIRE IP address block aggregate, catching a large portion of an ISP’s subscriber base when the intent was to stop what may be just one individual subscriber who is spamming or hacking. Many other ISPs or organizations pick up the address blocks off these lists and block traffic without any further discretion, even for the entire address block. It causes major issues, yet it still happens. Any form of traffic filtering will end up with issues or cause concern, regardless of how awful or harmless the content, or how noble the original intent.
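
As a toy illustration of that point (the addresses are made up; only the scale matters), here is what publishing a whole /16 aggregate to stop one spammer does to everyone else numbered out of that block:

```python
# Toy illustration of aggregate blacklisting (hypothetical addresses).
import ipaddress

spammer = ipaddress.ip_address("198.51.100.23")           # the one bad actor
published_entry = ipaddress.ip_network("198.51.0.0/16")   # the aggregate the blacklist publishes instead

# Hypothetical fellow subscribers of the same ISP, numbered from the same block.
innocent = [ipaddress.ip_address("198.51.7.14"),
            ipaddress.ip_address("198.51.200.5")]

print(spammer in published_entry)                        # True: the intended target
print([addr in published_entry for addr in innocent])    # [True, True]: collateral blocking
print(published_entry.num_addresses)                     # 65536 addresses swept up by one listing
```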

Nevertheless, we have to find a middle ground and allow some restrictions, whether because of the content itself or for other reasons (such as to protect performance; again, see the Comcast throttling case). Where we draw the line and who gets to make the decisions will always be open to debate and differences of opinion.
