|
Mark Zuckerberg shocked a lot of people by promising a new focus on privacy for Facebook. There are many skeptics; Zuckerberg himself noted that the company doesn’t “currently have a strong reputation for building privacy protective services.” And there are issues that his blog post doesn’t address; Zeynep Tufekci discusses many of them. While I share many of her concerns, I think there are some other issues—and risks.
The Velocity of Content
Facebook has been criticized for being a channel where bad stuff—anti-vaxxer nonsense, fake news (in the original sense of the phrase…), bigotry, and more—can spread very easily. Tufekci called this out explicitly:
At the moment, critics can (and have) held Facebook accountable for its failure to adequately moderate the content it disseminates—allowing for hate speech, vaccine misinformation, fake news and so on. Once end-to-end encryption is put in place, Facebook can wash its hands of the content. We don’t want to end up with all the same problems we now have with viral content online—only with less visibility and nobody to hold responsible for it.
Some critics have called for Facebook to do more to curb such ideas. The company itself has announced that it will stop recommending anti-vaccination content. Free speech advocates, though, worry about this a lot. It’s not that anti-vaxxer content is valuable (or even coherent…); rather, it’s that encouraging such a huge, influential company to censor communications is very dangerous. Besides, it doesn’t scale: automated algorithms make mistakes and can be biased, and human moderators not only make mistakes but also find the work extremely stressful. As someone who is pretty much a free speech absolutist myself, I really dislike censorship. That said, as a scientist, I prefer not to close my eyes to unpleasant facts. What if Facebook really is different enough that a different paradigm is needed?
Is Facebook that different? I confess that I don’t know. That is, it has certain inherent differences, but I don’t know if they’re great enough in effect to matter, and if so, if the net benefit is more or less than the net harm. Still, it’s worth taking a look at what these differences are.
Before Gutenberg, there was essentially no mass communication: everything was one person speaking or writing to a few others. Yes, the powerful—kings, popes, and the like—could order their subordinates to pass on certain messages, and this could have widespread effect. Indeed, this phenomenon was even recognized in the Biblical Book of Esther:
3:12 Then were the king’s scribes called on the thirteenth day of the first month, and there was written according to all that Haman had commanded unto the king’s lieutenants, and to the governors that were over every province, and to the rulers of every people of every province according to the writing thereof, and to every people after their language; in the name of king Ahasuerus was it written, and sealed with the king’s ring.
3:13 And the letters were sent by posts into all the king’s provinces, to destroy, to kill, and to cause to perish, all Jews, both young and old, little children and women, in one day, even upon the thirteenth day of the twelfth month, which is the month Adar, and to take the spoil of them for a prey.
3:14 The copy of the writing for a commandment to be given in every province was published unto all people, that they should be ready against that day.
3:15 The posts went out, being hastened by the king’s commandment, and the decree was given in Shushan the palace. And the king and Haman sat down to drink; but the city Shushan was perplexed.
By and large, though, this was rare.
Gutenberg’s printing press made life a lot easier. People other than potentates could produce and distribute fliers, pamphlets, newspapers, books, and the like. Information became much more democratic, though, as has often been observed, “freedom of the press belongs to those who own printing presses”. There was mass communication, but there were still gatekeepers: most people could not in practice reach a large audience without the permission of a comparative few. Radio and television did not change this dynamic.
Enter the Internet. There was suddenly easy, cheap, many-to-many communication. A U.S. court recognized this. All parties to the case (on government-mandated censorship of content accessible to children) stipulated, among other things:
79. Because of the different forms of Internet communication, a user of the Internet may speak or listen interchangeably, blurring the distinction between “speakers” and “listeners” on the Internet. Chat rooms, e-mail, and newsgroups are interactive forms of communication, providing the user with the opportunity both to speak and to listen.
80. It follows that unlike traditional media, the barriers to entry as a speaker on the Internet do not differ significantly from the barriers to entry as a listener. Once one has entered cyberspace, one may engage in the dialogue that occurs there. In the argot of the medium, the receiver can and does become the content provider, and vice-versa.
81. The Internet is therefore a unique and wholly new medium of worldwide human communication.
The judges recognized the implications:
It is no exaggeration to conclude that the Internet has achieved, and continues to achieve, the most participatory marketplace of mass speech that this country—and indeed the world—has yet seen. The plaintiffs in these actions correctly describe the “democratizing” effects of Internet communication: individual citizens of limited means can speak to a worldwide audience on issues of concern to them. Federalists and Anti-Federalists may debate the structure of their government nightly, but these debates occur in newsgroups or chat rooms rather than in pamphlets. Modern-day Luthers still post their theses but to electronic bulletin boards rather than the door of the Wittenberg Schlosskirche. More mundane (but from a constitutional perspective, equally important) dialogue occurs between aspiring artists, or French cooks, or dog lovers, or fly fishermen.
Indeed, the Government’s asserted “failure” of the Internet rests on the implicit premise that too much speech occurs in that medium, and that speech there is too available to the participants. This is exactly the benefit of Internet communication, however. The Government, therefore, implicitly asks this court to limit both the amount of speech on the Internet and the availability of that speech. This argument is profoundly repugnant to First Amendment principles.
But what if this is the problem? What if this new, many-to-many communication is precisely what is causing trouble? More precisely, what if the problem is the velocity of communication, in units of people per day?
High-velocity propagation appears to be exacerbated by automation, either explicitly or as a side-effect. YouTube’s recommendation algorithm appears to favor extremist content. Facebook has a similar problem:
Contrast this, however, with another question from Ms. Harris, in which she asked Ms. Sandberg how Facebook can “reconcile an incentive to create and increase your user engagement when the content that generates a lot of engagement is often inflammatory and hateful.” That astute question Ms. Sandberg completely sidestepped, which was no surprise: No statistic can paper over the fact that this is a real problem.
Facebook, Twitter and YouTube have business models that thrive on the outrageous, the incendiary and the eye-catching, because such content generates “engagement” and captures our attention, which the platforms then sell to advertisers, paired with extensive data on users that allow advertisers (and propagandists) to “microtarget” us at an individual level.
The velocity, in these cases, appears to be a side-effect of this algorithmic desire for engagement. Sometimes, though, bots appear to be designed to maximize the spread of malicious content. Either way, information spreads far more quickly than it used to, and on a many-to-many basis.
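The notion of a velocity limit can be made concrete. A content-neutral mechanism, in the spirit of WhatsApp’s later forwarding caps, would budget how many times any user can forward a message per unit time, regardless of what the message says. The sketch below is purely illustrative; the class and parameter names are my own assumptions, not any platform’s actual design.

```python
import time

class ForwardBudget:
    """Hypothetical content-neutral velocity limit: each user may
    forward at most `capacity` messages per `window` seconds,
    no matter what the messages contain."""

    def __init__(self, capacity=5, window=3600.0):
        self.capacity = capacity
        self.window = window
        self.state = {}  # user -> (tokens remaining, last timestamp)

    def allow_forward(self, user, now=None):
        """Return True if the forward is within budget, else False."""
        now = time.monotonic() if now is None else now
        tokens, ts = self.state.get(user, (self.capacity, now))
        # Refill tokens in proportion to elapsed time (token bucket).
        tokens = min(self.capacity,
                     tokens + (now - ts) * self.capacity / self.window)
        if tokens >= 1.0:
            self.state[user] = (tokens - 1.0, now)
            return True
        self.state[user] = (tokens, now)
        return False
```

Note that such a limiter never inspects content, which is exactly why it might survive the content-neutrality test for speech restrictions discussed below; whether that makes it desirable is a separate question.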
Zuckerberg suggests that Facebook wants to focus on smaller-scale communications:
This is different from broader social networks, where people can accumulate friends or followers until the services feel more public. This is well-suited to many important uses—telling all your friends about something, using your voice on important topics, finding communities of people with similar interests, following creators and media, buying and selling things, organizing fundraisers, growing businesses, or many other things that benefit from having everyone you know in one place. Still, when you see all these experiences together, it feels more like a town square than a more intimate space like a living room.
There is an opportunity to build a platform that focuses on all of the ways people want to interact privately. This sense of privacy and intimacy is not just about technical features—it is designed deeply into the feel of the service overall. In WhatsApp, for example, our team is obsessed with creating an intimate environment in every aspect of the product. Even where we’ve built features that allow for broader sharing, it’s still a less public experience. When the team built groups, they put in a size limit to make sure every interaction felt private. When we shipped stories on WhatsApp, we limited public content because we worried it might erode the feeling of privacy to see lots of public content—even if it didn’t actually change who you’re sharing with.
What if Facebook evolves that way, and moves more towards small-group communication rather than being a digital town square? What will be the effect? Will smaller-scale many-to-many communication behave the same way?
I personally like being able to share my thoughts with the world. I was, after all, one of the creators of Usenet; I still spend far too much time on Twitter. But what if this velocity is bad for the world? I don’t know if it is, and I hope it isn’t—but what if it is?
One final thought on this… In democracies, restrictions on speech are more likely to pass legal scrutiny if they’re content-neutral. For example, a loudspeaker truck advocating some controversial position can be banned under anti-noise regulations, regardless of what it is saying. It is quite possible that a velocity limit would be accepted—and it’s not at all clear that this would be desirable. Authoritarian governments are well aware of the power of mass communications:
The use of big-character-posters did not end with the Cultural Revolution. Posters appeared in 1976, during student movements in the mid-1980s, and were central to the Democracy Wall movement in 1978. The most famous poster of this period was Wei Jingsheng’s call for democracy as a “fifth modernization.” The state responded by eliminating the clause in the Constitution that allowed people the right to write big-character-posters, and the People’s Daily condemned them for their responsibility in the “ten years of turmoil” and as a threat to socialist democracy. Nonetheless, the spirit of the big-character-poster remains a part of protest repertoire, whether in the form of the flyers and notes put up by students in Hong Kong’s Umbrella Movement or as ephemeral posts on the Chinese internet.
As the court noted, “Federalists and Anti-Federalists may debate the structure of their government nightly, but these debates occur in newsgroups or chat rooms rather than in pamphlets.” Is it good if we give up high-velocity, many-to-many communications?
Certainly, there are other channels than Facebook. But it’s unique: with 2.32 billion users, it reaches about 30% of the world’s population. Any change it makes will have worldwide implications. I wonder if they’ll be for the best.
Possible Risks
Zuckerberg spoke of much more encryption, but he also noted the risks of encrypted content: “Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion. We have a responsibility to work with law enforcement and to help prevent these wherever we can”. What does this imply?
One possibility, of course, is that Facebook might rely more on metadata for analysis: “We are working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity.” But he also spoke of analysis “through other means”. What might they be? Doing client-side analysis? About 75% of Facebook users employ mobile devices to access the service; Facebook clients can look at all sorts of things. Content analysis can happen that way, too; though Facebook doesn’t use content to target ads, might it use it for censorship, good or bad?
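To see what client-side content analysis could look like, consider a minimal sketch: before a message is encrypted and sent, the client hashes it and checks the digest against a locally stored blocklist. Everything here is an assumption for illustration; it is not Facebook’s actual mechanism, and real deployments use perceptual hashes (e.g., PhotoDNA-style) rather than cryptographic ones, since the latter only catch exact copies.

```python
import hashlib

# Hypothetical local blocklist of digests of known-bad content.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example prohibited payload").hexdigest(),
}

def check_before_encrypt(plaintext: bytes) -> bool:
    """Client-side check run before encryption: return True if the
    message may be sent, False if it matches the blocklist."""
    digest = hashlib.sha256(plaintext).hexdigest()
    return digest not in KNOWN_BAD_HASHES
```

The key point of the sketch is architectural: the check happens on the user’s device, before end-to-end encryption, so it works even when the server can never read the traffic. That is also why client-side analysis raises its own censorship and surveillance questions.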
Encryption also annoys many governments. Governments disliking encryption is not new, of course, but the more people use it, the more upset they will get. This will be exacerbated if encrypted messaging is used for mass communications; Tufekci is specifically concerned about that: “Once end-to-end encryption is put in place, Facebook can wash its hands of the content. We don’t want to end up with all the same problems we now have with viral content online—only with less visibility and nobody to hold responsible for it.” We can expect pressure for back doors to increase—but they’ll still be a dangerous idea, for all of the reasons we’ve outlined. (And of course, that interacts with the free speech issue.)
I’m not even convinced that Facebook can actually pull this off. Here’s the problem with encryption: who has the keys? Note carefully: you need the key to read the content—but that implies that if the authorized user loses her key, she herself has lost access to her content and messages. The challenge for Facebook, then, is protecting keys against unauthorized parties—Zuckerberg specifically calls out “heavy-handed government intervention in many countries” as a threat—but also making them available to authorized users who have suffered some mishap. Matt Green calls this the “mud puddle test”: if you drop your device in a mud puddle and forget your password, how do you recover your keys?
Apple has gone to great lengths to lock themselves out of your data. Facebook could adopt a similar strategy—but that could mean that a forgotten password means loss of all encrypted content. Facebook, of course, has a way to recover from a forgotten password—but will that recover a lost key? Should it? So-called secondary authentication is notoriously weak. Perhaps it’s an acceptable tradeoff to regain access to your account but lose access to older content—indeed, Zuckerberg explicitly spoke of the desirability of evanescent content. But even if that’s a good tradeoff—Zuckerberg says “you’d have the ability to change the timeframe or turn off auto-deletion for your threads if you wanted”—if someone else (including a government) took control of your account, it would violate another principle Facebook holds dear: “there must never be any doubt about who you are communicating with”.
How Facebook handles this dilemma will be very important. Key recovery will make many users very happy, but it will allow the “heavy-handed government intervention” Zuckerberg decries. A user-settable option on key recovery? The usability of any such option is open to serious question; beyond that, most users will go with the default, and will thus inherit the risks of that default.
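A small sketch shows why the mud puddle test bites. If the content key is derived solely from the user’s password, as below, then a forgotten password means the key, and hence the content, is gone; any recovery path requires storing a second copy of the key under something other than the password, which is exactly the kind of back door that worries cryptographers. The parameters here are illustrative assumptions, not any product’s actual scheme.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte content key from a password via PBKDF2-HMAC-SHA256.
    The iteration count is an illustrative choice, not a recommendation."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)  # stored alongside the ciphertext, not secret
key_original = derive_key("correct horse battery staple", salt)
key_remembered = derive_key("correct horse battery staple", salt)
key_guessed = derive_key("some forgotten guess", salt)

assert key_original == key_remembered  # same password -> same key
assert key_original != key_guessed     # wrong password -> content is lost
```

In this design nobody, including the service provider, can recover the key without the password; that is the Apple-style tradeoff described above, with the mud puddle as its failure mode.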