When one person transmits the speech of another, there have been three legal models, which I would characterize as Magazine, Bookstore, and Railroad.
The Magazine model makes the transmitting party a publisher who is entirely responsible for whatever the material says. The publisher selects and reviews all the material it publishes. If users contribute content such as letters to the editor, the publisher reviews them and decides which to publish. The publishing process usually involves some kind of broadcast, so that many copies of the material go to different people.
In general, if the material is defamatory, the publisher is responsible even if someone else wrote it. In New York Times Co. v. Sullivan, the Supreme Court carved out a significant exception: a public official suing for defamation must show “actual malice” by the speaker, a standard later extended to public figures that makes successful defamation suits by either group very rare.
The Bookstore model makes the transmitting party partly responsible for material. In the 1959 case Smith v. California, Smith was a Los Angeles bookstore owner convicted of selling an obscene book. The court found the law unconstitutional because it did not require “scienter,” knowledge of the book’s contents, on the owner’s part, since bookstore owners cannot be expected to know what is in every book they sell. (It waved away the objection that one could evade the law by claiming not to know what was in any book one sold, saying that it is usually not hard to tell from the circumstances whether a seller was aware of a book’s contents.) In practice, this has worked well, allowing bookstores and newsstands to sell material while still having to deal with defamatory or illegal material if told about it. Bookstores and newsstands engage in limited distribution, offering a variety of material but typically selling one copy of a book or magazine at a time to each customer.
The third is the Railroad model, or common carriage. Originally this applied to the transport of people or goods: the carrier agrees to carry any person or any goods, providing the same service under the same terms to everyone. In the US, telephone companies are also common carriers, providing the same communication service to everyone under the same terms. As part of the deal, carriers are generally not responsible for the contents of the packages or messages they carry. If I send a box of illegal drugs or make an illegal, threatening phone call, I am responsible, but UPS or the phone company is not.
Common carriage has always been point to point, or at most among a set of known points. A railroad takes a passenger or a box of goods from one point to another. A telephone company connects a call from one person to another, or at most to a set of other places determined in advance (a multipoint channel). This is nothing like a publisher, which broadcasts a message to a potentially large set of people who usually do not know each other.
How does this apply to the Internet? Back in 1991 in Cubby v. CompuServe, a case in which a person was defamed by material hosted on CompuServe, a federal court applied the bookstore standard, citing the Smith case as a model. Unfortunately, shortly after that in Stratton Oakmont v. Prodigy, a New York state court misread Cubby and decided that an online service must be either a publisher or a common carrier, and since Prodigy moderated its forum posts, it was a publisher.
In response, Congress passed Section 230, which in effect provided the railroad level of immunity without otherwise making providers act like common carriers.
There are not many situations where one party broadcasts other people’s material without going through a publisher’s editorial process. The only one I can think of is public access cable channels, which unsurprisingly have a contentious history, mostly of people using them to broadcast bad pornography. The case law is thin, but the most relevant case is Manhattan Community Access Corp. v. Halleck, where the Supreme Court ruled 5-4 that even though a New York City public access channel was franchised by the state, it was run by a private entity so the First Amendment didn’t apply. These channels are not a great analogy to social networks because they have a limited scope of one city or cable system, and users need to sign up, so it is always clear who is responsible for the content.
Hence Section 230 creates a legal chimera, splicing common-carrier liability treatment onto a broad range of providers that are otherwise nothing like common carriers. This is a very peculiar situation, and perhaps one reason why Section 230 is so widely misunderstood.
Does this mean that the current situation is the best possible outcome? To put it mildly, a lot of people don’t think so. Even disregarding those who have no idea what Section 230 actually does (e.g., imagining that without 230, their Twitter posts would never be deleted), there are some reasonable options.
The magazine model, treating every platform as a publisher, won’t work for reasons that I hope are obvious—the amount of user-contributed material, even on small sites, is far more than any group of humans could possibly review. (On my own server, I host a bunch of web sites for friends and relatives, and even that would be impossibly risky if I were potentially liable for any dumb thing they or their commenters might say.)
The bookstore model, on the other hand, worked when the Cubby case applied it to CompuServe, and it could work now. Sites are immune for material they haven’t looked at or been told about, but they have to do something when notified. Getting the details right is important. The DMCA’s notice-and-takedown rules for copyright sort of work, but they are widely abused by people sending bogus complaints in the (often correct) hope that sites will take the material down without reviewing the complaint or allowing the party that posted the material to respond. There has to be a balance between what counts as a valid notice and what counts as a reasonable response, but that doesn’t seem impossible to figure out.
John -
A very nice article (as usual)... Regarding this proposition -
I can’t help wondering if the attribution of the content should be a factor in required handling - since content without attribution clearly requires intervention by the online platform, but content with clear attribution has a party that should bear foremost responsibility for their remarks.
For content that has clear attribution, shouldn’t the author/speaker be the first stop in addressing issues, with recourse to the online platform if the speaker is not responsive in a timely manner?
(As someone who has experience in the multiple paradigms involved, I would welcome your thoughts on whether attribution should play any role in handling of content on the Internet.)
I don't think attribution makes much difference. If someone posts "let's meet up on Thursday and burn down John C's house", attribution might give you more recourse, but it doesn't affect whether the material needs to come down. Also, for the most part, attribution of user-generated content is pretty weak. Even if people register, typically all you've got is an e-mail address and an unverified name. I suppose that if you're hosting paying customers, are confident you know who they are, and have recourse (i.e., they're in the US, not Russia or Brazil), they could indemnify you and you could be more likely to leave stuff up.
Agreed that for some content (e.g., that likely to produce imminent lawless action, violent extremism, child exploitation, etc.) express removal is called for, but not all content issues are of a similar nature with the same clarity of tradeoffs. In many cases, the harm of the content is more diffuse and there may be countervailing arguments that warrant consideration - e.g., violent content that might be important to keep available so that public reporting can be done regarding the event, incorrect statements in the political scope that similarly need reporting coverage despite being factually incorrect, or content that might be marked for removal for violating copyright or state-secrecy requirements but has potentially valid whistleblower merit. Traditionally these arguments about suppressing potentially undesirable or harmful content would be heard in an actual judicial setting with due process applied, hence my question about doing exactly that when the attribution is well known and there is no imminent harm involved.
In case it wasn’t clear, I’m not saying that you have to remove stuff as soon as someone tells you about it. You have to do something reasonable, and we need to figure out principles for what’s reasonable.
In some cases it might be immediate removal, or it might be to ignore the complaint, or anything in between. I still don’t think attribution is a big part of that because most sites only weakly know who their contributors are.