WASHINGTON — On Friday, a group of the world’s largest technology platforms and artificial-intelligence companies signed an accord committing them to better detect and label AI-generated disinformation posted on their networks to manipulate public opinion during a busy election year that is already well underway.

In the voluntary commitment, 20 companies, including Adobe, Google, Meta, Microsoft, OpenAI, TikTok and Twitter, agreed to cooperate on creating new tools to detect, label and debunk “deceptive AI election content,” including faked images and audio of political candidates and other prominent figures. They also pledged to provide “swift and proportionate responses” and to share more information about “ways citizens can protect themselves from being manipulated or deceived.”

In December, Free Press documented the retreat of Meta, Twitter and Google-owned YouTube from prior commitments to protect election integrity. Between November 2022 and November 2023, these companies eliminated a total of 17 critical policies across their platforms. This backslide included rolling back election-misinformation policies designed to limit “Big Lie” content about the 2020 vote and weakening user protections around political ads. In that time, Meta, Twitter and YouTube collectively laid off more than 40,000 employees, with significant cuts to their content-moderation and trust-and-safety teams.

Free Press Senior Counsel and Director of Digital Justice and Civil Rights Nora Benavidez said:

“Voluntary promises like the one announced today simply aren’t good enough to meet the global challenges facing democracy. Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises. To address the real harms that AI poses in a busy election year, these companies must do more than develop technology to combat technology. You can’t simply tech around this problem. We need robust content moderation that involves human review, labeling and enforcement. Free Press has put forward concrete recommendations for years, and we need reinvestments in critical staffing and policies, as well as robust human-centered and in-language moderation processes.

“Whether these tech giants actually do the work of better detecting, labeling and debunking AI-generated disinformation remains to be seen. Lofty principles and promises like those expressed in this accord are nice, but the rubber meets the road when it comes to implementation. There are deeply complex questions about how to moderate AI-generated content. But we know one thing for sure: Proper and extensive auditing of automated tools requires human interaction and oversight. These companies cannot expect automated tools — absent the adequate staffing to train and review AI processes — to effectively maintain platform health and protect election integrity.

“These companies must do more than what is outlined in the accord: They must share more details about their processes for reviewing, labeling and up-or-downranking AI-generated content. They must commit to a human-centered review process for all moderation decisions. This includes staffing content-moderation processes with real humans to oversee, audit and train automated tools, eliminating biased results that could impact free expression. They also need to bring back and strengthen political-ads policies, including ensuring that human review is part of every political ad buy. And they must mitigate false and extremist content in political ads, such as ads that contain generative and synthetic content, by increasing friction and downranking prior to and during review of all advertising requests.

“Free Press is also calling for more in-language human- and automated-review processes to examine and enforce policies with local, cultural context. AI has opened the doors to supercharged and tailor-made content that polarizes unique user groups. Absent local and in-language understanding of content reaching users around the globe, content moderation won’t effectively address the most potent artificial content. And while the accord briefly touts AI’s beneficial ability to flag deceptive content across multiple languages, the signing companies don’t commit to staffing up the in-language teams needed to ensure that the technology isn’t flagging false positives or missing important nuances that only a human moderator could detect.

“This is not a drill: We are in one of the most consequential election years in recent memory. Social-media companies need to show the public that their policy guardrails are back on the books, that critical election-integrity teams are staffed and trained up, and that well-trained and resourced people are overseeing the robots. As we learned on January 6, 2021, there are dangerous repercussions when platform companies retreat from commitments to protect election integrity. With the global proliferation of AI technology, these dangers are more present today than they were four years ago.”
