Major Tech Platforms Fall Short as Compliance Deadline for EU's New Rules Passes
WASHINGTON — As the deadline for compliance with the European Union’s Digital Services Act (DSA) passed on Friday, Free Press found that the major platforms’ policies have yet to fully align with the law’s requirements, potentially exposing companies like Google and Meta to tens of billions of dollars in EU fines.
The DSA, adopted in 2022, requires greater transparency and accountability from major platform companies including Google, Meta, TikTok and Twitter (X). Under the DSA, platforms must act “expeditiously” to stop the spread of harmful content; give users information about how ads are targeted to them; and offer users clear instructions on how to opt out of such targeting. Platforms must also stop placing targeted ads that rely on sensitive personal data, including a user’s sexual preferences, health information, political beliefs and religion; and be more transparent about their operations — sharing algorithmic information with researchers and disclosing their content-moderation practices in annual reports.
A platform company that fails to comply with these DSA requirements by the EU’s Aug. 25 deadline could be fined up to six percent of its annual global revenue — with repeat offenders facing the prospect of a complete ban from operating in Europe.
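To put those penalties in rough perspective, using publicly reported 2022 revenues: six percent of Alphabet’s roughly $283 billion would approach $17 billion, and six percent of Meta’s roughly $117 billion would be about $7 billion, which is how potential fines across several companies could reach the tens of billions of dollars cited above.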
Companies are failing to rein in disinformation and extremism
In 2022, Free Press released the report Empty Promises: Inside Big Tech’s Weak Effort to Fight Hate and Lies, which revealed the failures of Meta, TikTok, Twitter and YouTube to curb the spread of election disinformation and extremism across their networks. Free Press has continued this analysis in 2023, reviewing the policies of the four major social-media platforms to assess how prepared the companies are, both on paper and in practice, to manage the spread of disinformation, protect their users and be transparent about their policies and operations.
At the time of the report’s 2022 release, Free Press found that the companies had created a labyrinth of commitments, announcements and policies that makes it exceedingly difficult to assess what, if anything, they’re really doing to protect users. This lack of clarity and disclosure alone may be enough to warrant EU enforcement under the DSA’s public-reporting requirements.
Twitter failed a voluntary stress test of its DSA readiness earlier this year, prompting EU officials to direct the company to hire more content moderators. In the months since, there has been no sign that the company has added people with these skills to its staff. Since taking over the company in October 2022, Elon Musk has cut Twitter’s staff from roughly 8,000 employees to about 1,500, laying off many members of the company’s trust-and-safety and human-rights teams.
Over the summer, other platforms ran similar tests of their DSA-compliance systems and fell short of the law’s standards, according to Reuters. Free Press has found that since the beginning of the year, platforms have been in steady retreat from accountability as they’ve laid off moderation staff and loosened standards that would better protect users from exposure to disinformation and abuse.
Nora Benavidez, Free Press’ senior counsel and director of digital justice and civil rights, said:
“We welcome greater pressure on the platforms to be more accountable to their users and open about the ways they address the spread of hate and disinformation across their networks. Unfortunately, however, we’re still hearing empty promises from the executives at Google, Meta and Twitter, who’ve pledged to improve their content moderation and reporting systems, but have very little to show for it.
“And while some of the platforms have announced last-minute changes to their policies and practices — including Meta’s plan to let European users turn off algorithmic prioritization of posts in their feed — these are too little to meet the DSA’s full requirements and too late to meet today’s deadline for compliance.
“Our ongoing monitoring of platforms includes examining what companies commit to do in terms of content moderation, election integrity, researcher access and transparency — and then analyzing how they’re performing against those policies. Platforms like Meta, TikTok, Twitter and YouTube too often make verbal and written commitments to only the most basic online protections to limit the spread of disinformation and hate. But in practice, our research shows ongoing gaps in companies’ enforcement of these meager safety policies.
“These systemic failures across all of the major social-media companies show how little the companies care about fighting extremism and lies and safeguarding election integrity on their platforms — even when faced with potential EU fines that could amount to billions of dollars. The failures we continue to document are alarming, and the blame for the risk online disinformation poses to users and our democracy lies squarely at the feet of platform executives who choose empty promises over protecting people.”