“I’m asking you to intervene in this instance because the President of the United States has taken something that does not belong to him — the memory of my dead wife — and perverted it for perceived political gain...Please delete those tweets.”
On May 25, Kara Swisher of the New York Times reported on this moving letter written by Timothy Klausutis to Twitter CEO Jack Dorsey. Who is Klausutis, and how does US President Donald Trump’s tweet affect him?
In 2001, Timothy Klausutis’ wife Lori was working as an aide to then US Representative (and now MSNBC host) Joe Scarborough when she was found dead in his district office.
Over the last few days, Trump has fuelled conspiracy theories, insinuating in tweets that Scarborough was responsible for or involved in Lori Klausutis’ death. She in fact died after an undiagnosed heart condition caused her to faint and hit her head.
Twitter, while apologising for the pain caused by the tweets, refused to delete them.
But then this happened:
On May 26, Twitter added a fact-check label to Trump’s tweets
And then, this:
On May 29, Twitter flagged and hid Trump’s tweets that "glorified violence", a move that led Trump to sign an executive order seeking to limit the legal protection platforms like Facebook, Google and Twitter have over the content they host.
Trump has also tweeted that Section 230 of the Communications Decency Act should be revoked. What is Section 230, you ask? TL;DR: a 1996 law that protects platforms from being liable for the content posted by their users. It is central to the platform-not-publisher narrative. Here’s a primer and a conversation with experts in case you are interested in a deep dive.
The what and why of content moderation
The intense criticism of how platforms handle misinformation, hate speech and fake news has pushed them to adopt comprehensive content moderation guidelines.
YouTube has its Community Guidelines. Twitter’s got the Twitter Rules, while Facebook calls them Community Standards. Facebook also has a new oversight board, but that’s a different beast to break down altogether. These guidelines are essentially a rulebook that platform content moderators (humans and AI) follow to block, flag, hide, demote, and remove content. From fake news (via third-party fact-checkers) and misinformation to child pornography, the ambit of moderation covers pretty much everything.
Enter the newsworthiness exception
Both Facebook and Twitter have a "newsworthiness" exception to their moderation guidelines: in the name of public interest, content that would otherwise be taken down is allowed to stay up if it comes from a politician or a world leader.
Flagging and hiding Trump’s tweets is a move away from this exception. In fact, it is the first time Twitter has attached a warning label to a public figure’s tweet. It also opens a Pandora’s box: should a platform be liable for the content that goes on it? But before that:
Didn’t Trump put the same content on Facebook? How did Facebook respond?
For starters, Facebook has not removed Trump’s Facebook and Instagram posts with the same content that have more than a million likes.
In addition to the newsworthiness exception, Facebook has decided to exempt politicians and political speech from its third-party fact-checking process. Why? It says it doesn’t want to “referee political debates”. It doesn’t fact-check political advertisements either, even ones containing false information. Twitter, on the other hand, has announced a ban on all political advertising, with a detailed policy currently in the works.
“I just believe strongly that Facebook shouldn’t be the arbiter of the truth of everything that people say online.” Mark Zuckerberg, CEO of Facebook, appeared on Fox News before any other news outlet to defend Facebook’s decision not to fact-check Trump’s posts. The choice of channel makes it clear who the message was for.
Meanwhile, some Facebook employees clearly don’t agree with Zuckerberg
Who defines and draws the line?
"Arbiter of truth" is a phrase that both Zuckerberg and Jack Dorsey, the CEO of Twitter, used in their defence this week. Both Facebook and Twitter believe that even though millions of pieces of content are published every day on their platforms — they are, by definition and nature, tech platforms and not publishers.
To misquote Spider-Man’s Uncle Ben: with the publisher tag comes liability and accountability for content.
The responsibility for the content on their sites and apps was, and remains, a Pandora’s box. It ties not only to how their algorithms are coded but also to how they make money. That is why platforms want to stay as far as they can from being seen to police any content. The phrase "not an arbiter of truth" comes in handy here. Facebook’s decision to have third-party fact-checkers, and now an independent oversight board, fits into that narrative.
There’s little doubt that platforms need to do a lot more and take more accountability. Governments too are taking notice. France has just passed a law that forces social media companies to delete certain content within an hour.
Plus, this extends beyond moderation. Algorithms need to be tweaked to disincentivise and stop the spread of misinformation.
And transparency. We need a whole lot of transparency about the guidelines, the moderation process, and who defines them. Dorsey, at least, seems to agree on that bit:
“It is ironic that the print media, which otherwise demands transparency and accountability from the rest of the observable universe, would act so opaque and tight-lipped about its own internal rumblings.” That’s from Ayush Tiwari’s report on Hindustan Times laying off more than 100 people.
From our ongoing series that looks into media ownership patterns.
Abhinandan Sekhri spoke to Sudhir Krishnaswamy, the only Indian on the newly constituted Facebook oversight board.
Ali wrote on the role of technology in facilitating the rise of a “Hindu vigilante” in Uttar Pradesh.
“While most TV news professionals have scoffed at the idea of running Amazon-provided content as news, at least nine stations across the country ran some form of the package on their news broadcasts.”
The definitive guide to understanding the rise and unmatched success of TikTok and its parent company, ByteDance
YouTube said it’s an error and it’s working on the issue. Error, huh? 🤔
Chug it out
“From Mexico to Malta, attacks on journalists and publishers have proved deadly to individuals and chilling to broader freedoms. And now Covid-19 is being used as an excuse to silence more voices.”
This will go a long way in helping reader-supported publishers power recurring subscriptions via UPI apps like Google Pay. To give you an idea of how big this could be: India has 900 million debit cardholders, against 40 million credit cardholders.
“Reading and typesetting these 1,000 names was brutal, but these are extraordinary stories that require commensurate design”: Tom Bodkin, NYT’s design director since 1987.
A video from Ronit Roy is helping protesters in the US make balaclava masks out of t-shirts. It’s got four million-plus views already!
This was the fourth edition of Stop Press. Navigating the intersection of big tech with media is a complex and messy subject. In India, there’s barely any mention of big tech’s impact and role in media, let alone analysis.
I want to use Stop Press to explain and contextualize questions at the heart of the big tech-media puzzle. Do let me know your thoughts and what you want to get covered more under Stop Press. I’m at firstname.lastname@example.org
If you liked what you read, do share it.