
Facebook found ‘no problem’ even as staff memos flagged hate speech in India: Indian Express report

Internal reports and memos exchanged within Facebook between 2018 and 2020 have raised questions about the social media giant’s approach towards hate speech and misinformation in India, according to a report in the Indian Express.

Despite many red flags, there were glaring gaps in the company’s response, as revealed in documents that are part of the disclosures made to the US Securities and Exchange Commission and submitted to the US Congress by the legal counsel of former Facebook employee and whistleblower Frances Haugen.

A “constant barrage of polarising nationalistic content”, “fake or inauthentic” messaging, “misinformation” and content “denigrating” minority communities were among issues raised in explicit alerts by staff mandated to undertake oversight functions, according to the report.

But an internal review meeting in 2019 with Chris Cox, then vice president of Facebook, found a “comparatively low prevalence of problem content (hate speech, etc)” on the platform. The review noted: “Survey tells us that people generally feel safe. Experts tell us that the country is relatively stable.”

Two reports flagging hate speech and “problem content” were presented in January-February 2019, ahead of the Lok Sabha elections. Another report, in August 2020, mentioned that the platform’s artificial intelligence tools had failed to identify problematic content as they were unable to “identify vernacular languages”.

The review meetings with Cox took place a month before the Lok Sabha poll schedule was announced.

Facebook did not respond to the Indian Express’s requests for comment on Cox’s meeting and these memos.

While the first report pointed out that as much as 40 per cent of sampled top VPV (viewport views, a Facebook metric that measures how often content is viewed) postings in West Bengal were either fake or inauthentic, the second report raised concerns about the experience of a test account.

Written in February 2019, the second report noted that the test user followed only the content recommended by the algorithm and did not add any friends.

Over the next two weeks, and especially following the February 14 Pulwama terror attack, the algorithm began suggesting groups and pages centred mostly on politics and military content.

As per the disclosures, employees also asked how Facebook planned to “earn back” the trust of workers from minority communities, especially after a senior Indian executive shared a post which many felt “denigrated” Muslims.
