Filter bubbles
Let’s not research the bubble in a bubble. The “Filter Bubble Problem” is usually presented as a technological phenomenon, wherein major Internet venues (search engines like Google, merchandising sites like Amazon, social media sites like Facebook) try to use what they have learned about us - our likes and dislikes - to anticipate what we want to see and present it to us.
History
In 1996, MIT researchers Marshall Van Alstyne and Erik Brynjolfsson warned of a potential dark side to our newly interconnected world: Individuals empowered to screen out material that does not conform to their existing preferences may form virtual cliques, insulate themselves from opposing points of view, and reinforce their biases. Internet users can seek out interactions with like-minded individuals who have similar values, and thus become less likely to trust important decisions to people whose values differ from their own.
The phrase “filter bubble” was coined by Eli Pariser. Our social circles are filter bubbles. Neighbourhoods and classrooms are filter bubbles. And we all have confirmation biases. When bubbles are small, and we do not participate in other bubbles, they can be a real problem, both for ourselves and for society. In his book (The Filter Bubble: What the Internet Is Hiding from You, Eli Pariser, 2011), Pariser explained how Google searches bring up vastly differing results depending on the history of the user. He cites an example in which two people searched for “BP” (British Petroleum): one user saw news related to investing in the company, while the other received information about a recent oil spill. Pariser describes how the internet tends to give us what we want: your computer monitor is a kind of one-way mirror, reflecting your own interests while algorithmic observers watch what you click.
Pariser terms this reflection a filter bubble, a “personal ecosystem of information”. It “protects” us from any sort of cognitive dissonance by limiting what we see. But it is algorithms that are doing the screening out, not empowered users, and virtually everything we search for online is being monitored, for someone else’s benefit.
The problem
A personalised news website or search result listing could assign more importance to specific items, based on the (assumed) interests of a user. As a result, users may encounter only a limited range of ideas. On the surface, this may seem natural.
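To make the mechanism concrete, here is a minimal sketch of score-based personalisation. The data, the scoring rule and the personalised_ranking helper are all hypothetical, invented for illustration; this is not any real provider’s algorithm. Items matching a user’s inferred interests get a boost, so two users issuing the same query can see different results first, much as in Pariser’s BP example.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topics: set           # e.g. {"finance", "oil"}
    base_relevance: float  # e.g. keyword-match quality, higher is better

def personalised_ranking(items, inferred_interests, boost=2.0):
    """Rank items by base relevance plus a boost for topics the user is
    assumed (from click history) to be interested in."""
    def score(item):
        overlap = len(item.topics & inferred_interests)
        return item.base_relevance + boost * overlap
    return sorted(items, key=score, reverse=True)

results = [
    Item("BP investment outlook", {"finance", "oil"}, 0.8),
    Item("BP oil spill report", {"environment", "oil"}, 0.8),
]

# Same query, same base relevance, different inferred interests:
print(personalised_ranking(results, {"finance"})[0].title)      # -> BP investment outlook
print(personalised_ranking(results, {"environment"})[0].title)  # -> BP oil spill report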
Research has shown there is little difference between the effects of self-selected personalisation, where people actively choose which content they receive, and pre-selected personalisation, where algorithms personalise content for users without any deliberate user choice (Should we worry about filter bubbles?, Frederik J. Zuiderveen Borgesius, Damian Trilling, Judith Möller, Balázs Bodó, Claes H. de Vreese, Natali Helberger, 2016). This may suggest that there is nothing to worry about, and that the real problem lies not in the technology but in human nature. Even if that is true, it does not mean there is no problem, nor that the problem is not aggravated by our increased use of the internet as an information resource. Many of our problems are global, and our natural keep-it-local, stay-in-the-bubble inclinations are no longer universally beneficial.
A filter bubble is a state of intellectual isolation: people living in a filter bubble are more likely to find ample support for their views, and less likely to encounter arguments that go against them. Information is no longer put into perspective by different points of view, creating a perception of the world that is determined more by opinions than by facts. This causes:
An increased vulnerability to accepting fake news (Exposure to ideologically diverse news and opinion on Facebook, Eytan Bakshy, Solomon Messing, Lada A. Adamic, 2015).
Reinforcement of our own biases, by providing the false sense that our own particular views are universally held (Confirmation Bias: We interpret facts to confirm our beliefs, VeryWell Mind, 2018).
Discouragement of serious consideration of opposing views.
Sites and applications using these algorithms want us to want them, and take advantage of our natural tendencies by rewarding us with things (venues, products, opinions) we find comfortable, yet that are not always useful or in our best interests. This filtering happens without our knowledge of how it works. We are giving away our power of self-determination to service providers. Executives at Google could in principle nudge elections to their liking with no one having any idea it was happening, and Cambridge Analytica actually set out to do exactly that on Facebook.
Known mitigations
Almost no one clicks down through many pages of search engine listings to find alternative content, so alternatives need to surface on the first page.
Any ranking algorithm can be adapted to produce a first-page listing that contains more than just the most popular items (see the sketch after this list).
Alternatively, provide results based only on keywords, without any automated “personalisation” or “tailoring” of results, and offer users a “DIY tailoring of search results” menu instead.
Algorithms can also be used to guard content quality (although that carries some dangers of its own).
Key to reducing the negative impact of the filter bubble is transparency: more public awareness of how the algorithms work.
Education: perhaps we can demonstrate the effects of bubbles and the resulting bias, so that people learn to recognise them?
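As a rough illustration of the first-page adaptation mentioned above, the sketch below re-ranks results so that no single topic dominates the first page. It is a minimal, hypothetical example (invented diversified_first_page helper, arbitrary page size and cap values), not a description of how any real search engine works.

```python
from collections import defaultdict

def diversified_first_page(ranked_items, page_size=10, per_topic_cap=3):
    """Fill the first page greedily, capping how many results any single
    topic may contribute; items skipped by the cap are kept as overflow."""
    page, overflow = [], []
    topic_counts = defaultdict(int)
    for item in ranked_items:
        topic = item["topic"]
        if len(page) < page_size and topic_counts[topic] < per_topic_cap:
            page.append(item)
            topic_counts[topic] += 1
        else:
            overflow.append(item)
    # If the cap left the page short, top it up with overflow items.
    while len(page) < page_size and overflow:
        page.append(overflow.pop(0))
    return page

# Example: a popularity-ranked list dominated by one topic.
results = [{"title": f"story {i}", "topic": "finance"} for i in range(8)]
results += [{"title": "spill report", "topic": "environment"},
            {"title": "safety audit", "topic": "regulation"}]
print([r["topic"] for r in diversified_first_page(results, page_size=5)])
# -> ['finance', 'finance', 'finance', 'environment', 'regulation']
```

The same shape could serve the keyword-only option as well: the topic cap and any interest boosts simply become user-visible controls in a “DIY tailoring” menu rather than hidden defaults.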