Musk, Twitter and Misinformation in a Time of War
To say that Elon Musk has failed as the head of Twitter would be a colossal understatement. Musk has destroyed almost everything that once made using the platform worthwhile.
We at Free Press have been covering this trainwreck since Musk took over nearly a year ago, but Twitter’s mishandling of the platform in the aftermath of Hamas’ attack in October and Israel’s subsequent military response marks a new low. The platform’s algorithms have boosted violent and disturbing images — some real, some faked — and disinformation about the conflict has spread across Twitter, and even jumped the digital median to appear in mainstream news outlets.
Since Musk took over at Twitter, Free Press has co-led a coalition that has successfully called on the site’s top advertisers to stop spending money on the platform. These blue-chip brands have fled because of his decisions to let hate and disinformation overrun the site.
That Twitter is now so inundated with grisly content and lies about the Middle East conflict is a natural result of Musk’s decision to lay off most of the company’s Trust and Safety team and gut the platform’s content-moderation rules. With few left at the company to vet questionable and violent content, posts are often left unchecked to spread like wildfire.
Musk’s decision to give special prominence to subscribers’ posts — without adequately verifying users’ identities — has given a soapbox to all sorts of grifters, conspiracy theorists and propagandists seeking to drown public discourse in lies coming from both sides of the conflict and many points in between.
Setting a bad example for other platforms
Whether it’s in times of crisis or calm, people should be able to rely on the social-media platforms they use to find accurate and even lifesaving information. As other tech platforms decide on ways to address this and other global crises, they must look to Twitter as an example of what not to do.
As soon as Musk took over Twitter, many suspected that his reckless decisions would harm people in the real world. There were reasons to believe this from the outset: Use of the N-word surged immediately after Musk’s purchase last October, as bad actors tested the limits of the platform’s moderation systems. Musk then reinstated the accounts of thousands of these hate merchants, announcing a “general amnesty” for those who had been banned during Jack Dorsey’s prior leadership.
Musk not only decimated the teams charged with vetting disinformation; he has also pushed the burden of fact-checking onto Twitter users. Community Notes, a Musk-favored tool that allows platform users to add context to controversial posts, has been completely overwhelmed since Hamas’ attack. Twitter claimed there were more than 50 million posts about the conflict immediately following the outbreak of violence, including many spreading disinformation. That sheer volume stretched well beyond the reach of any user-powered fact-checking.
It’s gotten so bad that former Twitter insiders who watchdog the feature told Wired that Community Notes itself has become a vehicle for spreading lies about the conflict. “A reliance on Community Notes is not good,” one of them said. “It’s not a replacement for proper content moderation.”
Soon after the outbreak of violence, the European Union’s industry chief, Thierry Breton, urged Musk to provide evidence that Twitter was addressing the rampant spread of disinformation, in accordance with new EU online content rules. Failure to do this could result in massive fines.
Twitter CEO Linda Yaccarino responded to Breton’s inquiry by noting that the platform had removed or labeled “tens of thousands” of posts — just a drop in the bucket given the glut of disinformation and inauthentic content.
According to Free Press’ own analysis, “there is no sign that Twitter has implemented any content-moderation rules to specifically cope with this crisis nor brought back critical content moderators to mitigate the deluge of harmful posts.”
And while it’s not just Twitter — platforms like Meta and YouTube have also failed to adequately and swiftly remove disinformation about the conflict — Twitter’s failure stands out due to Musk’s adamant unwillingness to prioritize content moderation and safety. Bad decision after bad decision, made over the course of 12 months, has come home to roost on Twitter in these recent two weeks, amplifying existing political discord and eroding the platform’s once-reliable ability to connect people during crises.
Abandoning content moderation is disastrous
Content moderation matters for the platform in multiple ways. First, the failure to vet and remove violative content alienates users. Second, it drives away major brands: As a result of Free Press’ work with allies, hundreds of prominent companies have pulled their advertising from Twitter, resulting in a more-than-50-percent drop in ad revenues. And third, it exposes a platform like Twitter to billions of dollars in potential fines, compounding the company’s already dire prospects as a business.
But stemming the flood of hate and disinformation on platforms like Twitter also matters for reasons far beyond the platform itself. Failure to moderate content inevitably lets platform lies migrate into mainstream media. Already, news outlets like CNN, the Los Angeles Times and others have retracted poorly vetted coverage of stories that originated and went viral on social media.
Twitter continues to exemplify the disastrous human implications that come from abandoned platform-integrity commitments. The company’s downward spiral should serve as a wake-up call for other platforms. With its traffic and revenues in decline, Twitter is edging even closer to bankruptcy, incapable of making interest payments on the $13-billion debt Musk incurred when he bought the platform.
Disinformation about the Israel-Hamas conflict is a horrific example of why we so desperately need better moderation and tech executives who put platform integrity — and saving lives — over their hunger for profits. It is also the reason Free Press and our allies have been calling for stronger vetting by platforms year-round, not just in moments of crisis.
Musk doesn’t care enough to fix things, but it’s not too late for other tech platforms to do more to protect their users and promote peace.