Are you stuck in an echo chamber online? Read this to escape

The social media algorithm might have boxed you in, but scientists have been studying the phenomenon for a while.

Written by: Ashutosh Shukla
Illustration of a person sitting inside a bubble, using a laptop with their phone nearby.

Science is an ongoing effort to understand the world we live in. As the world changes and new, unexplained classes of questions emerge, scientists attempt to understand them – sometimes by developing a whole new branch of science.

One such branch of this ever-growing tree is opinion dynamics. The question it addresses is this: if a bunch of people have certain opinions on a topic and are allowed to interact among themselves by some method, how does the group’s opinion evolve over time? Mathematical and physical models and computational tools are used to answer this question and to find insights into how opinions form among large groups of humans.

This field existed before the big revolution in the exchange of opinions – also known as social media – became commonplace. When they create accounts on social media, ordinary people are quickly flooded with the opinions of hundreds of others. It is easier now to influence an ordinary person with ideas than it would have been 20 years ago.

This liberalisation of opinion implies many more interactions between people, making it more difficult for a researcher to track individual opinions. Statistical physicists convert this problem into one of interacting multi-agent systems and study it on computers; they have been studying similar problems in different contexts for quite a while.

Statistical physicists build models of many interacting agents (on computers) and set the rules for their interactions. They then observe how the agents’ opinions evolve. By suitably defining the interactions, they can simulate various models to study the diffusion and evolution of opinions in humans. Tracking opinions at the individual level may not be possible, but this analysis yields insights into the group’s thinking.
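
To make this concrete, here is a minimal sketch of one classic model of this kind – the Deffuant ‘bounded confidence’ model – in Python. The parameter values are illustrative, not drawn from any particular study: each agent holds an opinion between 0 and 1, and randomly chosen pairs move closer together only if they already roughly agree.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

N = 1000         # number of agents
EPSILON = 0.2    # confidence bound: ignore opinions further away than this
MU = 0.3         # convergence rate: how far agents move towards each other
STEPS = 100_000  # number of random pairwise encounters

# Each agent holds one opinion on a continuous spectrum from 0 to 1.
opinions = rng.uniform(0.0, 1.0, size=N)

for _ in range(STEPS):
    # Pick two distinct agents at random to interact.
    i, j = rng.choice(N, size=2, replace=False)
    diff = opinions[j] - opinions[i]
    # Only sufficiently similar agents influence each other.
    if abs(diff) < EPSILON:
        opinions[i] += MU * diff
        opinions[j] -= MU * diff

# With a small confidence bound, opinions settle into separate clusters.
hist, _ = np.histogram(opinions, bins=10, range=(0.0, 1.0))
print("opinions per bin:", hist)
```

No single agent’s trajectory is of interest here; the histogram at the end is the kind of group-level observable the researchers actually study.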

And here’s what they’ve found.

Echo chambers fuelled by the algorithm

The internet and social networks allow people with opinions on unpopular topics to find support. People can also post opinions on popular topics and expect other thinkers currently online to either validate their claims with supporting arguments and evidence, or to disagree with counter-arguments and evidence.

In this ocean of discourse, where ideas are accessible at minimal cost, we perhaps wouldn’t expect people to live isolated lives with outdated beliefs. On the contrary, they’re easy to find – post an unpopular opinion and we’ll find them in the comments.

Many online debates now involve a side intimately connected to the issue. Take gender rights, for example: those directly affected have a better understanding of the issues at stake, such as legal equality for same-sex marriages. But there will always be another side bent on negating everything they say. These are the people with whom one can argue for hours, who – despite any amount of scientific and historical data – will not budge from their positions, more so when those positions are categorically wrong.

The uninitiated might say these people live in bubbles. Scientists say they live in echo chambers.

Echo chambers are online groups where people with similar opinions on any topic share their voices and validate each other. A newcomer on the internet can be pulled into any echo chamber and learn the ‘facts’ of any side. This newbie might think everyone they follow online is in support of those ‘facts’.

Researchers of opinion dynamics understand this phenomenon as an unintentional consequence of the human need to feel liked.

Our beliefs need consistency. When we approach a ‘fact’, we happily engage with it if the ‘fact’ seems to support our opinion. But if the ‘fact’ contradicts our opinion, just processing it in our brain leaves us in a bad mood. This uneasiness is called cognitive dissonance, which roughly translates to “I don’t like that and I don’t know why”.

Thus, we’re inclined to talk more with people who think like us – a concept called homophily – and avoid people who don’t.

The next ingredient in this khichdi is social media companies’ profit-seeking ‘algorithm’. The job of many engineers in Big Tech is to maximise the time spent by users on their platforms. It’s why YouTube has to guess what we might want to watch next, or Amazon has to guess what category of products we would like to buy today. Similarly, social networks have to offer up posts on our feeds that are pleasant to us. 

The ‘algorithm’ has become more proficient at this. It can’t understand the meaning of what person A or person B has posted online, but it can guess that A and B would not only understand each other but might also like each other. This implies that when a person with a slight ideological leaning joins a social media platform, they end up interacting only with like-minded people until they’re deep inside an echo chamber.
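
As an illustration of this pairing logic – not any platform’s actual code – here is a hypothetical sketch of a recommender that ranks posts purely by the similarity of users’ past likes, without ever reading the posts themselves.

```python
import numpy as np

# Hypothetical engagement matrix: rows are users, columns are posts,
# 1 means the user liked that post. The recommender never reads the posts.
engagement = np.array([
    [1, 1, 0, 0, 1],  # user A
    [1, 1, 1, 0, 0],  # user B: likes overlap heavily with A's
    [0, 0, 1, 1, 0],  # user C: barely overlaps with A
], dtype=float)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Similarity of two engagement histories, ignoring post content."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm else 0.0

def recommend(user: int) -> np.ndarray:
    """Rank unseen posts by how much similar users liked them."""
    weights = np.array([
        cosine_similarity(engagement[user], engagement[other])
        for other in range(len(engagement))
    ])
    weights[user] = 0.0                     # don't count the user themselves
    scores = weights @ engagement           # similar users' likes add up
    scores[engagement[user] > 0] = -np.inf  # hide posts already seen
    return np.argsort(scores)[::-1]

# User A is mostly shown posts liked by the similar user B,
# which reinforces the loop described above.
print("ranked posts for user A:", recommend(0))
```

Real recommenders are vastly more sophisticated, but the feedback loop is the same: similar histories beget similar feeds, which beget even more similar histories.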

If social media were not optimised this way, a user would go online, find people telling them they’re wrong, and be discouraged from ever reopening the app. After all, the real world is hard enough. Why shouldn’t everybody like us in the digital world?

The formation of echo chambers was modelled on the basis of these interactions in this paper. The model qualitatively reproduced various features observed in real online echo chambers, which validates it.
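
The paper’s exact equations aren’t reproduced here, but a common ingredient in such models is homophilic rewiring: agents unfollow those who disagree with them and follow someone new instead. Below is a toy sketch of that mechanism, with illustrative parameters, building on the bounded-confidence dynamics from earlier.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

N, K = 200, 8           # agents, and accounts each agent follows
EPSILON, MU = 0.3, 0.2  # confidence bound and convergence rate, as before
STEPS = 50_000

opinions = rng.uniform(0.0, 1.0, size=N)
# Everyone starts by following a random set of accounts.
neighbours = [list(rng.choice(N, size=K, replace=False)) for _ in range(N)]

for _ in range(STEPS):
    i = rng.integers(N)
    j = rng.choice(neighbours[i])  # read a post from a followed account
    if abs(opinions[j] - opinions[i]) < EPSILON:
        # Agreeable content: move a little closer to it.
        opinions[i] += MU * (opinions[j] - opinions[i])
    else:
        # Disagreeable content: unfollow, and follow someone new at random.
        neighbours[i].remove(j)
        neighbours[i].append(int(rng.integers(N)))

# Echo chamber check: how far is each agent from those they follow?
gaps = [np.mean([abs(opinions[i] - opinions[j]) for j in neighbours[i]])
        for i in range(N)]
print("mean opinion gap to followed accounts:", round(float(np.mean(gaps)), 3))
```

Over time, each agent ends up following mostly accounts that agree with it – the network itself reorganises into chambers.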

So, the internet pairs us with a community where everyone has similar opinions. Our thoughts are echoes from inside this community, giving us a sense of belonging. From then on, the truth is whatever the whole pack says.

Looking at comment sections online, we know that any argument has two sides – so there are at least two echo chambers. For any topic, these two groups can hold opposing views, and the distribution of opinions is then said to be polarised. The discourse becomes fragmented, as there is little meaningful cross-talk between groups of opposing ideologies. Reports further indicate that echo chambers harbour misinformation and fake news, which accelerate the antagonising of the other group.

Ideally, the distribution of opinions in a group should be ‘normal’: most people would be indifferent to the topic, holding no strong views, and only some would have extreme views in favour or against.

But the echo chamber phenomenon makes people choose sides, splitting this distribution into two camps – making the opinions ‘polarised’.
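
The difference can be quantified. The sketch below compares synthetic ‘normal’ and polarised opinion samples using Sarle’s bimodality coefficient, one common heuristic for detecting a two-peaked distribution; values above roughly 0.555 hint at two peaks. The sample sizes and peak positions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def bimodality_coefficient(x: np.ndarray) -> float:
    """Sarle's coefficient: (skewness^2 + 1) / kurtosis."""
    x = x - x.mean()
    m2 = np.mean(x**2)
    skew = np.mean(x**3) / m2**1.5
    kurt = np.mean(x**4) / m2**2  # non-excess kurtosis (3 for a Gaussian)
    return (skew**2 + 1) / kurt

# 'Healthy' group: most people near indifference, a few extremists.
normal = rng.normal(loc=0.0, scale=0.2, size=10_000)

# Polarised group: two camps at opposite ends of the spectrum.
polarised = np.concatenate([
    rng.normal(loc=-0.8, scale=0.1, size=5_000),
    rng.normal(loc=+0.8, scale=0.1, size=5_000),
])

print("normal    :", round(bimodality_coefficient(normal), 3))     # ~0.33
print("polarised :", round(bimodality_coefficient(polarised), 3))  # ~0.94
```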


Depolarisation of opinions

A depolarised world is better at democratic deliberation and at defusing misinformation. And now that we have a model for how opinions are polarised online, we can prescribe ways to depolarise this distribution.

One class of solutions modifies recommender algorithms to discourage the formation of echo chambers. A person’s digital environment is ‘nudged’ just enough to moderately expose them to ‘good’ content from across the spectrum over an extended period.

Researchers from IISER Pune showed that the distribution of opinions can be restored to normal over time by exposing people to ideas outside their box. They took a model that forms echo chambers and added extra interactions: each agent is now exposed, in minute quantities, to content drawn randomly from all over the available spectrum of ideas, instead of only content from inside its echo chamber. The method works even without the algorithm knowing which echo chamber the agent is in, making it non-intrusive.
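
The published model isn’t reproduced here, but the flavour of the intervention can be grafted onto the toy bounded-confidence simulation from earlier: with small probability, an agent is also gently pulled towards a uniformly random point on the opinion spectrum, with no knowledge of which chamber it sits in. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N, EPSILON, MU = 1000, 0.2, 0.3
NUDGE_PROB = 0.1       # how often an agent sees 'outside' content
NUDGE_STRENGTH = 0.05  # how gently that content pulls on them
STEPS = 500_000        # enough encounters for the camps to drift together

# Start from an already polarised population: two opposing camps.
opinions = np.concatenate([
    rng.normal(0.2, 0.05, size=N // 2),
    rng.normal(0.8, 0.05, size=N // 2),
])
opinions = np.clip(opinions, 0.0, 1.0)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    diff = opinions[j] - opinions[i]
    if abs(diff) < EPSILON:
        # Ordinary in-chamber interaction, as before.
        opinions[i] += MU * diff
        opinions[j] -= MU * diff
    if rng.random() < NUDGE_PROB:
        # The nudge: a tiny pull towards a random point on the spectrum,
        # applied without knowing which chamber agent i belongs to.
        opinions[i] += NUDGE_STRENGTH * (rng.uniform(0.0, 1.0) - opinions[i])

# The nudges slowly drag both camps towards the middle; once they come
# within the confidence bound of each other, ordinary interactions merge them.
hist, _ = np.histogram(opinions, bins=10, range=(0.0, 1.0))
print("opinions per bin after nudging:", hist)
```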

This method tries not to go against Big Tech’s plan to maximise engagement because only minimal nudging is required to depolarise an echo chamber.

A variant of the strategy, with similar results, was shown by other researchers in this report. The group is depolarised towards a neutral mass consensus, though the number of people with extreme opinions increases slightly.

But simply changing someone’s social media feed does not instantaneously change their opinions. The models assume that the agents engage positively with these new opinions. Nonetheless, the studies show not only the power of science to understand the online behaviour of a large mass of people, but also the power of algorithms to control the distribution of those people’s online opinions.

This seems dangerous, as Big Tech can decide the distribution of a group’s opinions with slight modifications to its algorithms.

A more complete solution to the problem, and one less prone to external interference, can be reached from the other side. As we know, the problem has two contributing elements: the human element and the profit motive. To have a truly depolarised world, human agents have to decide they do not want their opinions to be a see-saw in someone’s office. They can then choose to engage with all arguments before forming a belief – and, when faced with a person of opposing beliefs, engage in a logical and informative discussion, passionately but non-violently, to reach a consensus or an impasse.

A deliberate attempt to engage with opposing ideologies is the way to peace.

The Science Desk is a collaborative effort between Newslaundry’s subscribers and its editorial team. 
