By Danielle Keats Citron and Hany Farid
As the U.S. Congress gathered at the Capitol Building in Washington to certify the 2020 presidential election, then-President Trump spoke at a nearby rally where he called upon a thousands-strong mob to “fight like hell.” After Trump told the mob, “we’re going to the Capitol,” they did. In the aftermath of the riots, five lay dead, hundreds of officers were seriously injured, and our democracy was shaken to the core.
As the mob ransacked the Capitol and lawmakers hid behind barricaded doors, Trump turned to social media, repeating the baseless election fraud claims. Facebook removed the posts and placed a ban on Trump’s Facebook and Instagram accounts until the “peaceful transition of power is complete.” It attributed its decision to Trump’s use of its platform “to incite violent insurrection against a democratically elected government.” TikTok, Twitter, Snapchat and others followed suit.
Facebook’s Oversight Board will soon consider whether the platform properly revoked Trump’s access, and whether a permanent ban accords with Facebook’s commitment to “Voice” and “Safety.” Other social media outlets may soon consider the same (though Twitter appears to have made its earlier suspension permanent).
In our view, Trump should not be allowed back on the outlets that he routinely abused as a public official. While in office, Trump violated companies’ terms of service by harassing individuals, inciting violence, and spreading harmful disinformation about public health. He was an Olympic-level policy violator: every day brought a new violation, to the detriment of our democracy, national security, and safety. He should not be allowed back.
The debate about the Trump ban has been muddied by cries of censorship. Tech companies, however, are not the U.S. government, so the First Amendment does not prevent them from setting rules of the road for their services. Facebook, Twitter, and others have long prohibited many forms of speech, including nudity, bullying, calls for violence, and fraudulent material. Congress wanted to incentivize this kind of content moderation when it adopted Section 230 of the Communications Decency Act. Section 230(c)(2)(A) says, “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material … whether or not such material is constitutionally protected.” Platforms are squarely within their rights to ban Trump for repeatedly violating their rules.
Trump has, for example, violated Facebook’s terms of service again and again by posting hate speech (too many examples to count), COVID-19 disinformation, and bullying. The proverbial last straw was his incitement of violence, resulting in the assault on the Capitol. Exceptions to these rules have been made for sitting leaders because the public is rightly interested in their views, but it is difficult to justify extending this exception to former public officials—especially a former official like Trump, whose online activity has posed a danger to the public in the past and who has done nothing to suggest that he would change his ways in the future.
Whether Trump should be welcomed back is a tough question for companies’ public relations teams. Many will not be happy with the decision, but, in our view, precedent supports a permanent ban for Trump. In May 2019, Facebook banned several high-profile individuals, including Nation of Islam leader Louis Farrakhan, for promoting “violence and hate.” Farrakhan had a long history of inflammatory anti-Semitic and homophobic remarks. Trump’s vast and sundry violations far eclipse the activities leading to Farrakhan’s ban. In this regard, banning Trump would be in line with other high-profile bans.
Morality is on the side of a permanent ban. In the past, social media companies have waited too long to remove accounts inciting death and destruction. Starting in 2013, the Myanmar military created fake Facebook profiles that directed vicious hatred at the country’s Muslim Rohingya minority. The online campaign incited genocide and the rape of tens of thousands of women, resulting in the largest forced human migration in recent history. In 2018, Facebook admitted that it did “not do enough to prevent the platform from being used to foment violence.”
Social media companies should learn from this mistake to say enough is enough. More than 450,000 people have died in the U.S. due to COVID-19, in part because of Trump’s downplaying and politicizing of the deadly pandemic. Five people died at the Capitol, in part due to Trump’s incitement. Trump has promoted hateful, racist, and xenophobic views, and supported white supremacists with a wink and a nod. We should not have to wait until more health disinformation is spread, more hate is sown, and more people die before we acknowledge the clear and present danger that Trump poses. Everything tells us that Trump’s past is nothing other than a prologue.
Trump, of course, is not the world’s only demagogue. The decision facing Facebook and others will recur across the globe as dangerous authoritarians wreak havoc on individuals, societies, and democracies, while cowardly hiding behind their screens. Trump is not sui generis, and companies should be ready to ban or block officials whose policy violations pose a threat to the public, as Twitter did when it suspended Rep. Greene for promoting the baseless QAnon conspiracy, and when it blocked COVID-related misinformation posted by Brazil’s President Bolsonaro, Venezuela’s President Maduro, and Iran’s Supreme Leader Khamenei.
The decision around Trump’s ban will be among the first the Facebook Oversight Board will make, but it is hard to imagine a more consequential case. The world is watching to see if the board is capable of speaking truth to power, to both Zuckerberg and Trump. In saying enough is enough, the board will show that certain lines cannot be crossed. It is a privilege to use these online platforms—they don’t owe us their service. Serial violations that cause lasting, widespread harm to public health and the body politic warrant the permanent revocation of that privilege.
Originally published as The Case for Trump’s Permanent Ban From Social Media; by SLATE on February 5, 2021. Reprinted with the authors’ permission.
Hany Farid is the Head of School and Associate Dean of the UC Berkeley School of Information and a professor with a joint appointment in the School of Information and the Department of Electrical Engineering and Computer Sciences (EECS). He focuses on digital forensics and misinformation.
Danielle Keats Citron is the Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law, a 2019 MacArthur Fellow, vice president of the Cyber Civil Rights Initiative, and author of Hate Crimes in Cyberspace.