Living in online filter bubbles
On December 4, 2009, Google’s corporate blog published a post that received little fanfare at the time but made sweeping changes to the landscape of the internet, bringing about a paradigm shift in its nature. The post was titled ‘Personalized Search for Everyone’. It meant that, from the very next day, Google would use 57 signals to guess who we are and produce search results on that basis.
A race to know as much as possible about you has commenced, not just at Google but among all the internet giants. Behind the sites we visit, a huge new information market is growing, fed by the data collected through popular sites and applications. According to a study conducted by the Wall Street Journal, the top 50 internet sites each install an average of 64 data-laden cookies and personal tracking beacons. This intense struggle for relevance and a customised landscape has produced a new kind of internet in which algorithms are beginning to organise our lives. Inventiveness is sparked by the collision of concepts from different contexts and disciplines; personalisation, as narrow and particular as it has become, forecloses the shattering of preconceived notions that is essential for human growth.
A locus shift:
China’s censorship has been widely reported. Whether it is banning the word ‘democracy’ from the Chinese web or running the Great Firewall, which inspects every packet of data entering or exiting the country, China’s censorship is harsh.
The point being: even in an age largely dominated by technology, the government still has the authority to curate the truth. This goes to show how the internet, initially proposed as a decentralising agent, is now becoming a centralising power in the hands of a few major corporate actors, who represent the new loci of control.
A salient feature of filtering is the Friendly World Syndrome. Inside the filter bubble, the public realm, where grave problems and issues are discussed, is largely ignored. This is what the theorist Dean Eckles calls the Friendly World Syndrome: the biggest problems fail to reach us at all. Facebook, for instance, runs on likes, and the stories at its forefront are the ones that attract the most likes and are generally likable. Depressing, boring and complicated problems don't make the cut. As a result, whole dimensions of reality are obscured.
This makes perfect sense for the handful of wealthy individuals controlling our online landscape; they know exactly which content will generate pleasant feelings and positive responses, and grant them what they desire: an ever-larger social media presence. The deeper implication, though, is that we cling to a false optimism derived from exposure to tightly curated social media.
Take the Amazon fires as a fairly recent example. It was around three weeks after the fire had started, and had already destroyed a significant portion of the vast forest, that the story began to appear on social media. Until that point, there was no discourse, let alone concern, on any online platform. And why would there be? The moderators of our online spaces know very well that memes bring them more traffic than posts about climate disasters and tragedies.
Sites build a prototype of our identities in their databases. Ivy League students, for instance, are targeted with career advertisements that students at state schools never encounter, so the latter's prospects end up shaped solely by the interplay of media and identity. The faux identities these sites create are not only highly reductionist but also incomplete, stereotypical and generalising. Google’s filtering, for instance, works on click signals, while Facebook’s works primarily on the things we share. Since clicking is almost always done in a private setting, with no one around to scrutinise the topics we click on, it is more candid and more revealing of who we are. Sharing, on the other hand, is subject to fear of judgement and peer pressure. Consequently, two different identities emerge: Facebook’s is more performative, Google’s more behaviourist. Unfortunately, both are incomplete depictions of us.
These sites also tend to play on what behavioural economists call the present bias, failing to maintain a balance between our present selves and our future selves. The present bias refers to the conflict between our current behaviour and our aspirations: we may aspire to be highly successful in the future yet still want to be unproductive and waste time right now. Algorithm-based personalisation does not strike a balance between the two. Because it is our present self supplying the click signals, the results lean towards our short-term desires, at the expense of our long-term personal growth.
To begin with, we can be mindful when filling in personal-details forms and give only what is required. We can also at least skim the terms and conditions that pop up on our screens before we create accounts in mobile applications; this does the bare minimum of keeping us aware of what data we are unknowingly giving away to be used against us. If a particular application asks for too much, we can look for an alternative service where possible, or leave a review clearly stating our problem. At the very least, this will spark some conversation about the topic and let these faceless giants know that we do not exist solely to feed them bytes. We can also switch to search engines that do not track us, such as DuckDuckGo, which has recently been gaining prominence.
To escape residing in our own happy, carefree bubbles, we need to consciously avoid getting entangled in them in the first place. The issues that will threaten human lives a few years from now, terrorism, energy shortages, climate change and so on, can only be solved if we acknowledge their existence and talk about them today.