
Ahead of AI summit, report raises concerns about India’s AI governance, ‘democratic backsliding’

Ahead of the India AI Impact Summit 2026, the Internet Freedom Foundation and the Center for the Study of Organized Hate have released a report raising concerns about gaps in AI governance and the consequences for Indian democracy.

Titled AI Governance at the Edge of Democratic Backsliding, the report highlights what AI regulatory gaps can mean for the media, from the recent three-hour window for platforms to disable access to synthetic content deemed “unlawful” to the increased potential for censorship.

The summit is scheduled to take place from February 16 to 20 in Delhi and is being organised by the Ministry of Electronics & Information Technology (MeitY) in collaboration with a wide range of national and international partners.

According to the report, the India AI Governance Guidelines issued by the Centre in November 2025 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 on synthetically generated information (SGI) are expected to affect journalists.

Referring to the 2026 rules that impose a three-hour timeline for platforms to disable access to “unlawful SGI” after receiving a court or an executive order, the report states that platforms could be encouraged to proactively monitor content, leading to the automatic blocking of journalistic pieces that use AI tools for clarity or translation.

It points to a ‘collateral censorship’ that stems from the lack of clarity in the 2025 guidelines, which initially proposed a broad, ambiguous definition of SGI that could encompass a range of common filtering and editing tools. 

Although the subsequent 2026 rules narrowed the definition of SGI and introduced exemptions for educational and research content, “there is no explicit reference to exemption for journalistic, artistic, or satirical content,” the report notes.

The report also raises privacy concerns, particularly the requirement for intermediaries to embed permanent metadata or provenance information in SGI and the prohibition on users removing these labels. These requirements, the report notes, lack safeguards for privacy and anonymity. For journalists working with sensitive information or whistleblowers, this could prove problematic.

When whistleblowers use AI to protect their identities by redacting a document, the software may leave a permanent digital trace. This can make it difficult for journalists to keep the source anonymous. One of the recommendations made in the report is to “implement a robust framework for whistleblower protection and legal protections for researchers.”

Moreover, platforms may be required to identify and disclose the identity of a user violating SGI rules to government authorities without a prior judicial order. This creates a significant risk to the privacy and safety of journalists and their sources.

The government introduced these rules on SGI earlier this year with the “stated aim to address harms from deepfakes, misinformation, and other unlawful SGI that can infringe the privacy of citizens or undermine the national security and integrity of the nation,” according to the report.

“However, these amendments have raised concerns around both the efficacy of the proposed measures to address harms and the possibility of being misused to harass, intimidate, and retaliate against innocuous users, thereby creating significant risks to privacy and freedom of expression,” it adds. 

Key concerns beyond journalistic work

Furthermore, the report highlights the increasing use of AI for predictive policing and facial recognition. “The resultant feedback loop reinforces biases of police officers and institutionalises and legitimises discrimination as data-driven scientific policing.” To address this, the report recommends: “Prohibit the use of predictive policing and the use of biometric and facial recognition systems for mass surveillance.”

India aims to democratise AI through public investments, such as the India AI Mission it launched in March 2024, with a budget of Rs 10,300 crore spread over five years. “However, the recent budget saw cuts to the India AI Mission, possibly pointing towards greater interest in attracting private investment to build infrastructure. This was also reflected in tax incentives for data centres, including a tax break until 2047 for foreign cloud providers using Indian data centres,” notes the report. 

“This raises questions about whether India’s vision to democratise access will challenge global monopolies and power concentration, or whether it will instead create new monopolies domestically while still relying on foreign investment.”

To safeguard local communities, the report recommends: “Mandate meaningful Environmental and Social Impact Assessments, with local community participation, before establishing data centres.”

The report also flags the deployment of opaque algorithms in electoral processes, such as e-voting applications and electoral roll revisions.

“Recently, opaque algorithmic systems are being increasingly deployed in elections. This can impact the right to vote of citizens, especially those belonging to marginalised communities,” the report notes. “Without adequate safeguards, such an application can not only undermine the secrecy of voting but also lead to fraudulent voting and hamper the sanctity of elections.”

To maintain the integrity of elections, the report recommends: “All procurement, development, and deployment of AI systems by state authorities... or law enforcement agencies must be transparent and subject to independent oversight, risk assessments, and robust monitoring.”

