Civil-Society Leaders Assess Big Tech Election-Integrity Efforts in Momentous Election Year
WASHINGTON — On Tuesday, leaders of a diverse global coalition of more than 200 civil-society organizations, researchers and journalists reiterated their demand that Big Tech platforms adopt six interventions to safeguard election integrity in 2024.
Earlier this month, the coalition sent a letter to executives at 12 platforms, giving the companies until April 22 to respond and indicate whether they would adopt the interventions. During a briefing today, coalition leaders discussed the responses from the eight platforms that replied. Four platforms (Twitter, Discord, Rumble and Twitch) didn't respond, continuing to evade public transparency and accountability for how they will manage the spread of disinformation this election year.
In the coming days, Free Press will release specifics from the companies that responded and will share analysis based on years of monitoring Big Tech platforms and vetting their commitments. Preliminary findings appear below.
“Social-media companies must redouble their election-integrity efforts as generative AI has the potential to supercharge hate and lies that endanger democracy and public safety,” said Jessica J. González, co-CEO of Free Press, which organized the coalition behind the platform demands. “Instead, many companies have re-platformed white supremacists and election conspiracists, refused to flag Big Lie content as false even though it erodes trust in democratic institutions, and laid off large swaths of staff responsible for reining in abusive AI and moderating content across languages. That platforms like Twitter, Rumble, Twitch and Discord failed to respond to our demands altogether shows their utter disregard for the precarious state of democracies around the globe.”
“2024 is the tipping point for democracy,” said Maria Ressa, Nobel Peace Prize Laureate, Rappler CEO and member of the Real Facebook Oversight Board. “There is a concerted effort to silence facts and evidence-based reasoning … There is so much at risk because it’s not just companies themselves; geopolitical warfare is exploiting the weaknesses that have … never been addressed. We appeal for enlightened self-interest from these tech companies: The way to protect your business is to make sure the public-information ecosystem stays healthy — that voters can actually have agency and that we have the facts to make the right choices.”
“Working at Twitch showed me what is really possible, especially across the industry, and so I know that companies have the ability to implement and enforce election-integrity efforts,” said Anika Collier Navaroli, former senior Twitter and Twitch policy official. “My experiences over the past two elections have also shown me how important it is that the civil-rights community, researchers and journalists work together with technology companies, especially in the midst of these historic 2024 global elections. We need to meaningfully work together on interventions like those proposed in the coalition letter.”
“We saw the proliferation of disinformation starting in 2014, targeting disproportionately the Latino community,” said María Teresa Kumar, president and CEO of Voto Latino. “We recognize again that nefarious activity oftentimes starts and incubates targeting the Latino community and then proliferates everywhere else … In Spanish and Spanglish, a lot of fact-checkers aren’t taking information down fast enough so it’s infecting key voters … and we’re seeing this in key swing states where the disinformation is real and the fear-mongering is real … That is why we’re still calling on these platforms to strengthen their non-English content-moderation and enforcement practices.”
“No moderation failure has been more important to the rise of authoritarian movements, extremism, hate speech and its concomitant real-world violence than the loophole [tech companies have] given the politically powerful due to things like ‘newsworthiness,’ or excuses that their commentary is in the public interest,” said Heidi Beirich, co-founder of the Global Project Against Hate and Extremism. “Their political ads are also not moderated in any way and are another place where this kind of abuse can be sown … There’s a large body of research that suggests that the incendiary rhetoric of political leaders can make political violence more likely.… That’s why it’s critical that tech companies treat all users the same and enforce content-moderation rules for public and political figures.”
“Responses from these companies should not be taken as fulfillment of their own accountability,” said Nora Benavidez, Free Press director of digital justice and civil rights. “In previous election cycles, platforms have made commitments to protect election and platform integrity — only to turn off essential functions and interventions that then lead to real-world harms. Coalitions like ours are essential, to serve as a watchdog when platforms evade responsibility to users around the world.”
Free Press Takeaways from Platform Responses:
- Eight of 12 platforms responded in writing to coalition demands.
- Four platforms (Twitter, Discord, Rumble and Twitch) offered no substantive response to the coalition.
- None of the platforms committed to adopting all six of the coalition’s demands.
- None of the platforms provided a timeline to implement their commitments.
- None of the platforms committed to moderating Big Lie content.
- Only TikTok committed to hiring more human reviewers. Meta, Twitter, YouTube and other platforms have laid off tens of thousands of essential staff in the last 18 months.
- Most of the eight responding platforms committed to moderating hate and lies in non-English languages:
- TikTok said it will invest $2 billion in trust and safety in 2024 and will moderate in 50 languages.
- Google and YouTube said they would moderate content in English, Spanish, and EU and Indian languages. They did not commit to reviewing content in other Asian languages or in African languages.
- All of the responding platforms, with the notable exception of TikTok, claim they will hold VIP accounts to the same standards as other users. Speakers on the call doubted that this is happening now, and Free Press will continue to monitor these practices.
- Some but not all of the platforms committed to prohibiting deepfakes and other false information in political ads:
- Meta said it would label AI in political ads, but did not commit to human review of political ads.
- Reddit and Snapchat committed to prohibiting lies in election ads and to using human review of all political ads.
- More information is needed about political ads on Google and YouTube; the companies' commitments there appear weak.