Opinion

When algorithms have biases: Can political AI be tamed?

With the ubiquity of AI-based generative tools such as ChatGPT, the debate around algorithmic biases in artificial intelligence has intensified.

Previous research has documented that AI tools can favour some demographics while excluding others in recruitment, criminal justice, social media content moderation, and education. In response, major AI tech companies have attempted to build fair and unbiased systems by imposing algorithmic guardrails that produce diverse outputs from various political and ideological standpoints. Guardrails are post-hoc rules or boundaries technically imposed on a model’s responses to align them with societal norms and to promote a diversity of ideas, people and content.
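For readers curious about the mechanics, the sketch below shows in simplified Python what “post-hoc rules imposed on a model’s responses” can look like in practice. It is a minimal, hypothetical illustration: the rule set, function names and prompt rewrite are our own assumptions, not the actual implementation of Gemini or any other system.

```python
# A minimal, hypothetical sketch of a post-hoc guardrail. The rules and
# names here are illustrative assumptions, not any company's real system.

BLOCKED_TOPICS = ["build a weapon"]  # illustrative refusal rules
DIVERSITY_HINT = "Depict a diverse range of people."  # silent prompt rewrite

def apply_guardrails(prompt: str, raw_output: str) -> str:
    """Check the finished output against rules imposed after generation."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."
    return raw_output

def guarded_generate(model, prompt: str) -> str:
    """Wrap a model: quietly rewrite the prompt, then filter the response."""
    # Guardrails often append instructions to the user's prompt without
    # telling them; this is roughly the mechanism critics pointed to in
    # the Gemini image controversy discussed below.
    raw = model(f"{prompt} {DIVERSITY_HINT}")
    return apply_guardrails(prompt, raw)
```

The point of the sketch is that the guardrail sits outside the model: it neither changes what the model has learned nor makes it neutral, it only constrains what the user sees.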

This guardrail approach addresses the issue from a technological standpoint. But the root of the problem is political, which puts the spotlight on the demand for a politically neutral AI system. And so far, the technological route to political neutrality in AI does not seem to be working.

Take last month’s controversy on social media, when Google’s Gemini generated images depicting historically white figures, such as the US founding fathers and Nazi-era German soldiers, as people of colour. It was, in the words of The Verge, perhaps an “overcorrection to long-standing racial bias problems in AI” – but it backfired spectacularly.

Some critics labelled such instances as the tech industry’s blatant attempt to appear “woke”. Elon Musk, who launched an “anti-woke” AI named Grok, criticised text produced by Gemini as “super racist and sexist” for suggesting that white people “acknowledge their white privilege”. Google CEO Sundar Pichai eventually said his team was working around the clock to fix Gemini and that it was already seeing “substantial improvements”.

Google’s Gemini was on the receiving end of India’s ire last month too, when it responded to a query by saying Prime Minister Narendra Modi’s policies are “fascist”. Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said Gemini had “directly” violated India’s technology law and “several provisions of the criminal code”.

The (im)possibility of apolitical AI

Yann LeCun, chief AI scientist at Meta, recently said on a podcast that the biases in AI systems are not due to technical deficiencies; rather, “the biases lie in the eyes of the beholder”.

People find AI-generated content biased when they disagree with it, interpellated as they are by particular ideological and political frameworks. For example, a left-leaning individual may view diversity, equity and inclusion policies as steps towards social justice, while a right-leaning individual may see them as compromises on meritocracy. Although both sides may agree on the principles of equality and justice, their interpretations remain disputed.

Such disagreements are contestations over the meanings of certain ideas, like social justice, and over the means to manifest them in society, such as DEI policies.

Secondly, AI systems reflect the narratives and perspectives present in their training data – social media, books, research, news articles – often reinforcing existing viewpoints and presenting them as common sense. Disagreements arise when these hegemonic beliefs are exposed and challenged by differing opinions. For instance, researchers discovered that Amazon’s AI recruitment model favoured male candidates, penalising language like “women’s chess club captain” for “not matching closely enough the successful male job applicants of the past”.

Political theorist Chantal Mouffe argues that political contention is an inevitable and unavoidable aspect of human existence. She says the cultural domain has become an arena for political demands due to the weakening of electoral politics – what she calls the “post-political condition”, in which citizens feel increasingly powerless to effect change due to a lack of political choices across the spectrum.

AI experts such as LeCun and political theorists such as Mouffe agree on the infeasibility of techno-solutionism in resolving the question of AI’s politics. In other words, an apolitical AI is an impossibility. This marks a larger challenge to the very foundation of liberal democracy, which believes in deliberative consensus as a means to resolve political conflict.

Liberal democracies, influenced by German theorist Jürgen Habermas’ concept of deliberative democracy, aim to achieve political consensus through public debate and discussion. However, ensuring that this process remains productive, inclusive and rational has proven elusive. Habermas’ approach struggles in today’s polarised and populist world, marked by an unprecedented rise in ethno-nationalism and xenophobia. Instead of reaching consensus, political camps are increasingly antagonistic, emotionally charged and intolerant of each other.

In this inflamed context, politicised AI systems will only degrade democratic dialogue, perhaps doing far worse damage than social media has done to politics.

Manufacturing consent

In the future, every form of cultural production is expected to be mediated, and even produced, by politicised AI systems. We are yet to grasp the impact of such AI on cultural productions such as songs, stories, poetry, movies, journalism and art.

Unlike Photoshop, which acts exactly as per a user’s command, AI tools are political, subjective and non-deterministic. If social media, which is merely a medium of exchange, can impact society and undermine democracy, AI’s impact could be unimaginable. It is difficult to predict how it will affect society once it starts creating content on a large scale. In this situation, what is dangerous is not the politics itself but the concentration of consent-manufacturing tools in a few hands.

We are at the cusp of an unprecedented time when a handful of AI tech companies can shape opinions through AI-mediated content, such as news, books, stories and images, even against a user’s own views. For instance, in our interaction with ChatGPT, it suggested to us that “OpenAI's efforts to make GPT-3 accessible through APIs is a step towards democratisation” – a conclusion that disregards our understanding that genuine democratisation requires open-source models. In this context, the question of safeguarding the freedoms of cultural production becomes paramount.

The answer lies not in suppressing the politics in AI but in radically decentralising AI. For a healthy democracy, the approach should be to accept the inevitability of politics, enhance plurality and contest the centralisation of power. AI technology should not be the monopoly of a handful of powerful tech giants; it should encompass a wide array of players, particularly the marginalised and underrepresented.

The first step towards this is to make AI technology open-source. When non-state and non-corporate actors own the means of cultural production, it becomes possible to challenge the consent-manufacturing effects of AI. AI-mediated cultural productions, too, need to exhibit various political leanings, which is only possible when the technology is open-source and accessible to everyone. This will lead to the expression of counter-hegemonic and plural viewpoints, retaining political agency in cultural production.

In other words, keeping AI-mediated cultural production decentralised and contested is the primary political antidote to the homogenising effects of politicised AI systems.

