Board points to ‘public interest’
The board acknowledged that depicting non-consensual sexual touching carries a significant risk of harm, both to the victim and by emboldening perpetrators and increasing acceptance of violence. However, it noted that the post documented violence and discrimination against Dalit and Adivasi communities and was shared to raise awareness.
“The post therefore has significant public interest value and enjoys a high degree of protection under international human rights standards,” the board observed.
According to the board, since the woman is not identifiable in the video and the post was restored with a warning label, its public interest value “outweigh[s] the risk of harm”.
Is the ‘newsworthiness allowance’ inadequate?
The board said the “newsworthiness allowance” is “inadequate” for dealing with cases of sexual abuse at scale. It noted that the exception is rarely used: it was applied only 68 times globally between June 2021 and June this year, and “only a small portion” of those were issued under the adult sexual exploitation community standard.
It said the allowance is vague, permits too much discretion, cannot be applied consistently at scale, and offers no clear criteria for assessing the potential harm of content that violates the sexual abuse policy.
The board wants Meta to provide clearer standards under the adult sexual exploitation policy that clearly indicate how posts shared to raise awareness can be distinguished from those perpetuating sexual violence.
Under this exception, the board said, Meta should consider the context of a post and allow it to remain on its platforms if the company judges that it poses minimal risk of harm to the victim, taking into account whether the victim is identifiable, whether the content involves nudity, and whether it has been shared in a sensationalised context. The exception, according to the board, should be applied at “escalation” only, that is, for restoring user-reported or algorithmically flagged posts, not for letting them remain in the first instance.
Earlier this year, the board had upheld the company’s decision to restore a Facebook post depicting violence against a civilian in Sudan for similar reasons. At the time, it had recommended that the company similarly amend its violent and graphic content community standards to clearly define how such content meant to document human rights abuses can be distinguished from similar content that is meant to provoke.
Update at 4.34 pm, Dec 14: This report has been republished after a change in embargo timeline.