Facebook claims to have 40,000 people working to combat hate speech. But internal documents now show that the company takes action on as little as 3–5% of hateful content and removes only 0.6% of violent and inciting content.

These were some of the revelations recently unveiled by Frances Haugen, the former Facebook data scientist and whistleblower behind The Wall Street Journal’s “Facebook Files” series — an exposé that shed light on a range of problems at a company that consistently puts growth and profits ahead of keeping people safe.

The “Facebook Files” are damning and tell the story of a company that has exempted high-profile users from its rules, enabled products and features that it knows are harmful to millions of young users, used algorithms designed to amplify disinformation about COVID-19 vaccines and facilitated violence against ethnic minorities around the world.

And despite what Facebook wants you to believe, this is not happening because the social-media platform is merely a mirror of society. The problem is Facebook’s hate-and-lie-for-profit business model.

Defective by design

The platform’s algorithms are designed to optimize engagement, a key component of increasing profits for its ad-driven business model. The result is weaponized algorithms that profile who each of us is and push the worst content, including hate, disinformation and calls for violence, to us all based on the data the company’s AI extracts about us. And this functionality is a core feature, not a bug.
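To make that mechanism concrete, here is a minimal sketch of an engagement-optimized feed ranker. Everything in it is hypothetical: the post fields, the weights and the function names are invented for illustration and are not drawn from Facebook’s actual ranking systems.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's estimate of click-through
    predicted_comments: float  # comment threads spike on divisive posts
    predicted_reshares: float  # reshares spread content the fastest

def engagement_score(post: Post) -> float:
    # Hypothetical weights: a ranker tuned purely for engagement
    # rewards whatever provokes reactions. Nothing in this score
    # penalizes falsehood, hate or incitement.
    return (1.0 * post.predicted_clicks
            + 5.0 * post.predicted_comments
            + 30.0 * post.predicted_reshares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts surface first.
    return sorted(posts, key=engagement_score, reverse=True)
```

A score built this way has no term for truth or safety; if outrage reliably drives comments and reshares, outrage is what rises to the top of the feed.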

In the run-up to the 2020 election, Facebook changed its algorithms to provide safety features that downgraded misinformation. But as soon as the election was over, the company flipped the switch back. A few weeks later, it dissolved the entire team devoted to handling problems like disinformation and election risks.

Haugen’s work here is important. By turning over tens of thousands of pages of documents to federal regulators, she is providing evidence for what advocates and organizers have been saying for years. “As a publicly traded company, Facebook is required to not lie to its investors or even withhold material information,” John Tye of Whistleblower Aid told 60 Minutes. “So, the SEC regularly brings enforcement actions, alleging that companies like Facebook and others are making material misstatements and omissions that affect investors adversely.”

But the SEC isn’t the only regulator that can take action against Facebook. For the past year, the company has been running an aggressive PR campaign practically begging policymakers to step in. But as The Markup detailed last month, many of the “regulations” Facebook claims to be open to are things the company already does or is already compelled to do under the European Union’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA).

So what would real, meaningful regulation for Facebook look like? What would actually address the grave harms Haugen and others have been sounding the alarm over?

Here’s where we can start:

1. Passing comprehensive federal privacy legislation.

  • Congress must pass a data-privacy law that limits the collection of our data, protects our civil rights, stops abusive practices, prohibits algorithmic discrimination and provides for proper oversight and enforcement, including regular mandated transparency reports from the platforms.

2. Passing legislation to tax online advertising and direct those monies to support high-quality noncommercial and local journalism.

  • A 2-percent tax on the targeted-advertising revenues of the top-10 online platforms would yield more than $2 billion for a national endowment to support journalism that meets the needs of diverse communities. (A rough check of that figure appears after this list.)

3. Using existing authorities at various government agencies to regulate data collection and algorithmic decision-making in a coordinated fashion.

  • The Federal Trade Commission should use its existing authorities to investigate and enforce against harms caused by abusive commercial data practices, and launch a rulemaking proceeding to minimize data collection and discriminatory algorithmic practices.
  • The White House should develop interagency leadership to identify harmful data practices by social-media companies and take action against companies’ civil-rights violations.
  • All agencies with civil-rights authority and responsibilities should regulate data collection and algorithmic decision-making within their scope of authority and competence. This must be done in a coordinated fashion across the government.
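On the revenue estimate in item 2, the arithmetic checks out under a plausible assumption. The combined-revenue figure below is illustrative, not a number taken from the proposal; the largest platforms together report well over $100 billion a year in advertising revenue.

```python
# Rough check of the 2-percent ad-tax estimate. The revenue figure is
# an assumption for illustration, not a number from the proposal.
assumed_combined_targeted_ad_revenue = 110e9  # dollars per year, assumed
tax_rate = 0.02

annual_endowment = assumed_combined_targeted_ad_revenue * tax_rate
print(f"Annual endowment: ${annual_endowment / 1e9:.1f} billion")
# -> Annual endowment: $2.2 billion, consistent with "more than $2 billion"
```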

Facebook has shown us repeatedly that the company cannot and will not regulate itself. The Biden administration and Congress need to take action.
