Of the total “borderline content” ignored by moderators at social media giant Meta, formerly known as Facebook, nearly 40 per cent was problematic and included “hate”, “nudity” and “violence”, according to a report in The Indian Express citing an internal company memo dated April 15, 2019.
“Borderline” content is defined by Facebook as content that “does not violate the community standards” but is identifiably problematic in an area covered by these “community standards”. The data was calculated with Facebook’s internal algorithms that counted views and interactions.
The memo revealed that such “borderline” content was viewed 10 times more than content flagged as outright violations of Facebook’s guidelines.
The memo came almost a year after Mark Zuckerberg, Chief Executive Officer of Meta Platforms Inc, shared a detailed note on how the platform planned to handle such content.
The report pointed out that the company underlined the “need for intervention to counteract perverse incentive” to reach more views, as “borderline” content “gets more interaction than benign posts”.
The Indian Express quoted a Meta spokesperson on the conflict between high engagement with borderline posts and the need to remove such posts.
“We train AI systems to detect borderline content so we can distribute the content less,” he said.
The spokesperson said that content identified as “hate speech or violence and incitement” by those technologies is distributed “significantly” less, to decrease the risk of “problematic content going viral and potentially inciting violence ahead of or during the election”.
This comes after a series of internal documents and a Facebook whistleblower recently revealed that Facebook did not take significant action despite hate speech and “problematic content” being flagged in India in the company’s internal reports.
This isn’t the first time the company’s internal memos have been made public. A report based on an internal strategy note dated August 6, 2019 revealed earlier this month that Facebook staff tasked with reviewing hate speech on the platform had faced cost cuts even as divisive content increased across most markets, including India.