Apparently, “Vox Populi” has spoken, and they want former President Trump back on Twitter. A thin majority of 51.8 percent of 15 million poll respondents (including, most likely, bots) voted in favor of Trump’s return.
Since purchasing Twitter, Musk has moved at light speed with little care for consequences. Immediately after elevating himself to self-proclaimed chief twit, Musk shoved out chief legal officer Vijaya Gadde, the leader of all things trust and safety. Musk then fired 3,000 people who put Gadde’s policies into practice: the contractors “behind the screen” who dealt with reports of hate speech, harassment, stalking, threats, nonconsensual intimate imagery, spam, and other violations of Twitter’s rules. In one fell swoop, content moderation at Twitter was eviscerated.
Before getting into what it might mean if Trump starts tweeting again, it is helpful to appreciate what Musk has disassembled and what he will likely try to reassemble once perils become clear and advertisers flee.
Over the years, Twitter invested significant resources into addressing online harms. This effort, however, began frustratingly slowly. In 2009, Twitter banned only spam, impersonation, and copyright violations. Then Del Harvey, the company’s lone safety employee, recruited one of us (Citron) to write a memo about threats, cyberstalking, and harms suffered by people under assault. Harvey wanted to tackle those harms, but the C-suite resisted in the name of being the “free speech wing of the free speech party.”
Twitter largely stuck to this script until 2014, when cyber mobs began shoving women off the platform in a campaign known as Gamergate. At that point advertisers decided they did not want their products appearing next to rape and death threats and nonconsensual pornography. Gadde built an impressive trust and safety team, tripling the number of people on it. Harvey, Sarah Hoyle, and John Starr designed policies banning cyberstalking, threats, hate speech, and nonconsensual pornography. On the product side, Michelle Haq put those policies into practice. Thousands of moderators were hired; product managers worked on making reporting processes more efficient and responsive.
That was just the beginning. In 2015, Gadde, Harvey, and Hoyle created a Trust and Safety Council, made up of global civil rights and civil liberties groups. (We have served on that council ever since on behalf of the Cyber Civil Rights Initiative, where we hold board and leadership positions.) That same year, Jack Dorsey returned as CEO and made trust and safety a priority. This was especially evident after the 2016 election. In response to the disinformation and hate speech that plagued the platform during the election season, Dorsey and Gadde gathered a small kitchen cabinet (Citron, former New York Times editor Bill Keller, and Berkeley journalism school dean Tom Goldstein) to map a path forward to ensure that the platform would enhance public discourse rather than destroy it.
On Dec. 2, 2016, Dorsey—along with Gadde and Harvey—sat down with this group to talk about how Twitter should best tackle disinformation that was leading to the erosion of trust in democracies across the globe. The assembled did not have all the answers, but it was clear that the company was on high alert and would dedicate resources to dealing with destructive online behavior.
For the next two years, the council met to provide advice about new products and services. After Harvey and Hoyle moved on in 2018, Gadde brought Nick Pickles on board. That team tackled new problems, including deepfakes and other digitally manipulated imagery. They worked on the “Healthy Conversations” initiative that sought feedback on fostering civil dialogue. Gadde’s team updated hate speech policy to ban “dehumanizing speech.” (This, of course, is an abridged history of Twitter’s work on content moderation.)
Crucial to note is Twitter’s blind spot when it came to rule-breaking: public officials. Trump (and others) were given free rein to spout hate speech, harassment, election lies, and health disinformation in violation of the company’s rules. Twitter and others stuck to the position that public officials “were different,” as opposed to our mantra that “with great power, comes greater—not less—responsibility.”
On Jan. 6, 2021, as a mob descended on the US Capitol, many called for Trump’s long-overdue removal. Gadde convinced Dorsey to act, and Trump’s account was temporarily suspended.
On Feb. 6, 2021, we wrote together to make the case for Trump’s permanent ban from social media. In our view, “enough was enough”: Trump had used his social media presence to downplay and politicize the deadly pandemic; he also used it to incite a mob that left five dead, countless seriously injured, and the nation and world shaken. Better late than never, Twitter and other social media companies took away Trump’s online megaphone.
Now, Musk has invited him back, but the former president has suggested he is not interested. He has a new place to post where the rules are literally made for him: In February 2022, Trump set up Truth Social. With fewer than 2 million users, his reach is anemic, but that hasn’t stopped him from using it to spread conspiracy theories, election lies, hate, and antisemitic tropes.
Despite his protests, Trump will surely be tempted to return to Twitter to reconnect with his 86 million followers. But the platform he may return to is very different from the one he left in January 2021, when we opposed his return. That would have been bad enough. Now, Trump will return to a platform with a decimated trust and safety team. What could possibly go wrong?
Since taking over, Musk has been bulldozing and backtracking on content moderation in real time. He adopted a verification scheme unmoored from its original purpose: protecting against impersonation those most likely to face it. Now, he is trying to fix that goof. He will soon discover that ripping down an entire edifice of trust and safety will lead to real harm for users and will scare away (more) advertisers. Unlike some of his earlier mistakes, however, this one will not be easy to undo: a team that took more than a decade to build cannot simply be reconstructed.
Musk likely reads “Vox populi, vox Dei” to mean that the people are always right, but one of the earliest references to the phrase, in a letter from Alcuin to Charlemagne in 798, cuts the other way: “And those people should not be listened to who keep saying the voice of the people is the voice of God, since the riotousness of the crowd is always very close to madness.”
With the guardrails removed, these words ring true.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.