Trump’s victory left the world perplexed, and it did not take long for texts blaming Facebook and its “bubble” for the unexpected result to appear. The “bubble”, a topic that had circulated mostly in academic and technical circles, gained unusual popularity in recent days; never have so many texts on the subject been published in such a short time, and in the most recognized spaces of global journalism.
But after all, what are these bubbles, and how can they decide elections?
First of all, you need to understand one thing: the bubble is not something you “enter”, much less something collective; the bubble is personal, and you build it around yourself with the help of algorithms.
What are bubbles and algorithms?
It is possible that you do not know what a bubble, or filter bubble, is, or even what an algorithm is, let alone how the two combine to build something around you. Do not worry; let’s explain.
Let’s start from the beginning. Eli Pariser, a cyber-activist who researched the subject, wrote an excellent book, “The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think”, in which he describes how these invisible filters appeared and gives them the name “filter bubble”, or simply “bubble”. According to Pariser, one of technology’s visionaries, Nicholas Negroponte, an MIT researcher, imagined back in 1994 what he called “smart agents”: algorithms that would work as content curators. Early experiments with smart agents were a disaster, but Jeff Bezos developed Amazon’s version of the concept, modeled on the bookseller who knows you and recommends the books he believes you would find interesting. To reach this goal, the algorithm began recording everything that happened on the site: who viewed or bought each book, and which other books those people also found interesting. It quickly built a relational logic capable of presenting, with a reasonable rate of success, reading suggestions based on your searches, where you access the site from, and your purchase and browsing history. The first efficient bubble had been created.
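To make the idea concrete, here is a minimal sketch of item-to-item co-occurrence recommendation, the general family of technique Amazon popularized. The data, names, and scoring are invented for illustration; this is not Amazon’s actual algorithm.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories: user -> set of books bought (invented data).
purchases = {
    "ana":   {"book_a", "book_b", "book_c"},
    "bruno": {"book_a", "book_b"},
    "carla": {"book_b", "book_c"},
}

# Count how often each pair of books appears in the same basket.
co_counts = defaultdict(int)
for basket in purchases.values():
    for x, y in combinations(sorted(basket), 2):
        co_counts[(x, y)] += 1
        co_counts[(y, x)] += 1

def recommend(book, top_n=3):
    """Suggest the books most often bought together with `book`."""
    scores = {y: n for (x, y), n in co_counts.items() if x == book}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("book_a"))  # -> ['book_b', 'book_c']
```

The more you buy and browse, the more pairs the system records, and the more tightly its suggestions wrap around your history.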
Two people with identical devices accessing Google on the same network with the same keywords will never get the same results.
Google has also adopted an algorithm whose criteria define the relevance of results and how well they match what it believes you are looking for, based on countless variables that range from your search history to the location and device from which the search is being made. Two people with identical devices on the same network, searching the same keywords, will never get the same results. The same goes for Netflix, Facebook, and many other networks and services you use, which are always recording and comparing a growing amount of information about you, your relationships, and your interests, trying to match your desires efficiently. That is what happens with the Facebook timeline, which in practice is not ordered chronologically but according to what Facebook’s algorithm thinks is relevant to you; you do not realize it until you pay attention to the details.
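As an illustration of how such personalization could work, the sketch below combines a base query-relevance score with invented per-user signals (topic affinity and location). The signal names and weights are assumptions for the example, not Google’s real ranking system.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    base_relevance: float  # query/document match, identical for every user
    topic: str

# Hypothetical per-user signals; real systems use far more variables.
user_profile = {
    "topic_affinity": {"politics": 0.9, "sports": 0.1},
    "near_user": {"local-news.example"},
}

def personalized_score(r: Result) -> float:
    score = r.base_relevance
    # Boost topics this user has searched for before.
    score *= 1 + user_profile["topic_affinity"].get(r.topic, 0.0)
    # Boost results tied to the user's location.
    if r.url in user_profile["near_user"]:
        score *= 1.2
    return score

results = [
    Result("local-news.example", 0.6, "politics"),
    Result("global-sports.example", 0.7, "sports"),
]
for r in sorted(results, key=personalized_score, reverse=True):
    print(r.url, round(personalized_score(r), 2))
```

Swap in a different profile and the ordering flips, which is precisely why two people never see the same results.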
Problems with bubbles
Laziness is part of human nature; having an algorithm offer content and information tailored to you is almost a dream, but it can quickly become a nightmare.
Bubbles are invisible
The first problem with bubbles is that they are invisible. If you do not know they exist, you will hardly notice them, and often even those who are aware of them fail to. Bubbles are algorithms built to offer you the content and information they deem most appropriate to your interests.
Bubbles are algorithms
Bubbles are algorithms: mathematical models, based on logic, and for this reason unable to perceive subjective elements such as emotions, body language, and the contexts in which they are triggered... yet...
Cathy O’Neil, a data scientist, presents in her book Weapons of Math Destruction a series of cases and raises ethical and logical questions about various mathematical models applied to big data, and about how they can increase inequality and threaten democracy. According to O’Neil, mathematical models are the engine of our digital economy, and, as she demonstrates, they are neither neutral nor perfect.
You build your bubble
Bubbles do not exist without you. The bubbles that present content to you on Facebook, Google, Amazon, Netflix, and elsewhere are exclusively yours, and you have built them. Many algorithms compare your habits with those of people who think like you, but they still need to learn about you. On Facebook, for example, the algorithm known as EdgeRank records everything you do: which posts you like, comment on, and share, which you click, which you spend more or less time on, and in this way it “learns” how to satisfy you.
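Facebook never published EdgeRank’s internals, but the general scheme it described is affinity × edge weight × time decay. The sketch below implements that scheme with invented numbers, purely to illustrate the idea.

```python
import math

# Edge weights in the spirit of EdgeRank: interactions that cost the
# user more effort count more. All numbers here are illustrative.
EDGE_WEIGHT = {"like": 1.0, "comment": 4.0, "share": 6.0}

def edge_score(affinity: float, edge_type: str, age_hours: float) -> float:
    """affinity (0..1): closeness to the post's author, built up from
    your history of likes, comments, clicks, and time spent."""
    time_decay = math.exp(-age_hours / 24)  # newer edges count more
    return affinity * EDGE_WEIGHT[edge_type] * time_decay

def post_rank(post: dict) -> float:
    # A post's rank is the sum of the scores of all its edges.
    return sum(edge_score(a, t, h) for (a, t, h) in post["edges"])

feed = [
    {"id": "close_friend_post", "edges": [(0.9, "comment", 2), (0.9, "like", 1)]},
    {"id": "distant_page_post", "edges": [(0.2, "share", 30)]},
]
for post in sorted(feed, key=post_rank, reverse=True):
    print(post["id"], round(post_rank(post), 2))
```

Notice that nothing in the formula asks whether a post is true or important; it only asks how likely you are to engage with it.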
The problem, as we have seen, is that bubbles are algorithms: recursive mathematical models that are always improving and learning from you. When you do not pay attention to a post, or when you reject posts, block people, or simply ignore them, EdgeRank understands that you no longer want that content and removes it from your timeline, creating a distortion of reality that Eli Pariser calls the “Good World Syndrome”. The Good World Syndrome occurs when your timeline is so “purified” that it presents only content that pleases you and aligns with you ideologically; the “good world” lies precisely in this distorted perception of reality, in which “everyone” seems to think the same as you.
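A toy simulation makes the feedback loop visible. Assume, for illustration only, that the ranker resamples tomorrow’s feed in proportion to what you engaged with today, and that you engage only with content you agree with:

```python
import random

random.seed(1)  # reproducible toy run

# The feed starts balanced between two viewpoints; engagement
# pseudo-counts start equal.
engagement = {"aligned": 1, "opposed": 1}

def next_feed(size: int = 10) -> list:
    total = sum(engagement.values())
    weights = [engagement["aligned"] / total, engagement["opposed"] / total]
    return random.choices(["aligned", "opposed"], weights=weights, k=size)

for day in range(6):
    feed = next_feed()
    share = feed.count("aligned") / len(feed)
    print(f"day {day}: {share:.0%} of the feed agrees with you")
    for post in feed:
        if post == "aligned":
            engagement["aligned"] += 1  # you engage only with agreeable posts
```

Within a handful of iterations the “opposed” share collapses toward zero: the purified timeline emerges from the loop itself, with no editorial decision anywhere.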
Bubbles are your common sense
The way we perceive the world is complex; the brain uses shortcuts to make decisions, most of them intuitive. Leonard Mlodinow, in the book “Subliminal: How Your Unconscious Mind Rules Your Behavior”, demonstrates how much our decisions rest on subjective factors, so much so that, according to the author, some scientists estimate that we are aware of only about 5% of our cognitive activity. The other 95% operates beyond our consciousness and exerts enormous influence on our lives. We may not realize it, but we form our understanding of common sense in all our social circles, including our bubbles. Common sense formed inside a purified bubble, one that has reached the Good World Syndrome, is completely detached from reality and can lead to radicalism.
We are part of each other’s bubbles
Just as we construct our bubbles with content produced by other people, they construct their bubbles with content that may include yours. Bubbles are not reciprocal: the author of the content that interests you does not always have an interest in your content. And because we are responsible for content that interests someone, we are, at some level, influencing those people.
Bubbles are our Matrix
We like to believe that on social networks, especially Facebook, we are publishing to the world, but in practice we are publishing to a very restricted audience, some 3% to 6% of our “friends” and followers. Your publications only achieve greater reach when someone shares them. That is, we are almost always receiving information from the same people who interest us, and sharing with the same people who are interested in us. It feels like talking to a crowd of millions, but in practice your listeners would, most of the time, fit in your living room.
If we made a graphical representation of your bubble, we would have something like a Venn diagram, but in three dimensions, showing how your bubble is built from its connections with other bubbles. The size of each intersection would be proportional to your level of interest in that particular bubble. Even so, it must be stressed that the bubble relations of the content you consume and of the content you produce can be totally different.
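One simple way to put numbers on those intersections is to model each bubble as a set of interests and measure overlap with the Jaccard index. A toy sketch, with invented names and interests:

```python
# Toy model of the "three-dimensional Venn diagram": each bubble is a
# set of interests; the size of the overlap between two bubbles is
# measured with the Jaccard index. All data here is invented.
bubbles = {
    "you":   {"politics", "technology", "music"},
    "alice": {"politics", "cooking"},
    "bob":   {"sports", "cars"},
}

def overlap(a: set, b: set) -> float:
    """Jaccard index: |A intersect B| / |A union B|.
    0 means disjoint bubbles, 1 means identical ones."""
    return len(a & b) / len(a | b)

for name, interests in bubbles.items():
    if name != "you":
        print(name, overlap(bubbles["you"], interests))
# alice shares "politics" -> 0.25; bob shares nothing -> 0.0
```

Two such matrices, one for what you consume and one for what you produce, could look entirely different, which is the asymmetry described above.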
How can bubbles influence an election?
Now that you know what algorithms and bubbles are, and the problems they bring, it is easier to understand how they can influence not just an election but other important decisions, like the coup suffered in Brazil this year. Let’s focus initially on Facebook, a social network of a size never before imagined, a “media vehicle” with two BILLION users worldwide, representing roughly two-thirds of all people with Internet access in the world. At that scale, it is impossible to leave it out of any equation that evaluates global changes in social behavior, such as the rise of the far right and the wave of hatred and polarization.
Earlier this year I made public part of my research project, through which I am pursuing a master’s degree, “The political and ideological power of the Filter Bubble” (in Portuguese). It contains some relevant cases of how Facebook can influence its users, affecting both the result of an election and their behavior itself:
Influencing an Election
For Zittrain (2014), Facebook could decide an election without anyone realizing it. In his text, he shows that simply prioritizing a candidate on the timeline would be enough, especially with undecided users. To support this thesis, Zittrain cites an experiment run on November 2, 2010, in which a publication that helped users find their polling place in the United States featured a button they could click to announce they had voted, along with six friends who had already done so. This produced an increase in the number of voters in the experiment region.
Influencing user behavior
The controversy over the filter bubble gained a significant dimension, drawing the attention not only of researchers but above all of activists, lawyers, and politicians, when a study by researchers linked to Facebook concluded that it was possible to change users’ moods through emotional contagion on the social network. The experiment consisted of transferring emotions by contagion without the knowledge of those involved, and it was successful (Kramer et al., 2014, p. 8788):
In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the timeline. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.
Subjugating subjectivity
Here is where the big point comes in: it is possible to influence people massively using Facebook, provided one knows the network and how it operates. John Rendon (RAMPTON; STAUBER, 2003), for example, defines himself as “an information warrior and a perception manager”; for him, the key to changing public opinion lies in finding different ways of saying the same thing. This pattern fits perfectly within the Good World Syndrome.
Purifying the timeline would be enough to change users’ perceptions and make them believe in a given version of the facts. But although many people build their common sense through Facebook, the narrative one wants to establish needs to be reinforced; otherwise, it would be enough for the user to step away from the social network for a few days to rebuild his common sense and enter into cognitive dissonance with the narrative.
As in any good fiction film in which the protagonist is led to believe in an induced reality, many elements around the person must correspond to the narrative one wants to induce. The great secret lies in applying radical communication techniques, the same ones we used in cyber-activism in Brazil in 2008, when we still had absolute mastery of communication techniques on the network.
At the time we used collective blogging: on a specific date, dozens of blogs published texts against the object of our cause. In the end there were hundreds of blogs, which radically changed Google’s results for anyone researching the subject, presenting several links against the object at the top of the results. The Facebook user typically checks Google for topics that interest him, and a user already inside the Good World Syndrome is predisposed to share any information that corroborates his point of view, regardless of the reliability of the source. Notice how many fake blogs have appeared here in the last two years with texts against the deposed Brazilian government. The same happened in the American elections: a multitude of fake texts was shared, many by fake users (robots), but a significant part by users predisposed to do so.
Another technique was to fragment information in order to direct it to different groups of interlocutors. In the bill mentioned above (our “object”), for example, one article proposed a penalty of two years of detention for violating software. We passed the information to gamer groups that using bots in games could mean two years in jail if the law were approved. Fragments like this engage more people. Today, with WhatsApp, fragmentation is much easier; one technique is what O’Neil calls predatory marketing: speaking to the interlocutor’s point of pain and offering relief. It is enough to identify the interlocutors’ pains and offer them fragments that serve the narrative while bringing relief. In the 2014 elections, and in this year’s municipal elections, the volume of false information against opponents was enormous, and I believe the same was done in the American elections.
These are two of the various radical communication techniques that can reinforce the narrative one wants to construct. They gain consistency when they return to Facebook and are shared by ordinary users, reinforcing their beliefs and producing a large bubble that is, in fact, the sum of thousands of ideologically aligned individual bubbles, thereby subjugating the subjectivity of a large mass of society.
Finally, this text aims to add one more reflection to the debate, alongside the many others that have been published.
Thanks for writing this, including the consideration of remedies.
Jacob Weisberg’s fascinating article “Bubble Trouble”, published in Slate in 2011 on the occasion of Pariser’s book’s publication, looks like it foretold what has unfolded over the past year, especially the observation about the Web turning into everybody’s narcissistic “Daily Me” feed. It turns out it wasn’t the Web but social media that seem to have done the trick, coupled with the television news media, who have emulated the same bubble-promoting architectures.
It should be noted that it was the Apple spinout General Magic, in the early 1990s, that pioneered some of the early mobile-agent information gathering with its Telescript platform coupled with its Magic Cap mobile personal communicator. Indeed, there was a crossover into Nick’s Media Lab crew via researcher Pattie Maes, who joined with the General Magic staff to form The Agent Society to promote the technology. General Magic subsequently failed for multiple reasons, but many of the people and ideas scattered across Silicon Valley.