What Congress & the FTC Can Do Now to Protect Against AI Abuses
AI has taken over Washington, D.C.
There have been congressional hearings, closed-door sessions and multiple other events where lawmakers have asked about the best approach to regulating this technology.
Reading The Washington Post’s latest reporting, the answer seems to be: “Who knows?” Following a three-hour Wednesday meeting between lawmakers and several Big-Tech CEOs, including Elon Musk and Mark Zuckerberg, the Post concluded that the legislative forecast for AI was “cloudy” and that Congress still had “a long way to go” before it could craft rules to rein in the worst possible AI outcomes.
In reality, there are very useful things that Congress can do right now. It just needs to do them. Lawmakers can start by passing already-introduced legislation that safeguards against two problems plaguing algorithmic technology: algorithmic discrimination and data-privacy violations.
Eliminate automated discrimination
Free Press Action has been pushing for reforms for years, including introducing model legislation that frames algorithmic discrimination and data privacy as civil-rights concerns. We’ve also worked with the Disinfo Defense League to help mobilize dozens of grassroots organizations behind a more just and democratic tech-policy platform.
We already know that companies that use any digital automation often train their algorithms using untold amounts of data gleaned from internet users. Even the most careful internet users have had their personal information scraped by often unscrupulous social-media companies, internet service providers, retailers, government agencies and more. This data can include users’ names, addresses, purchasing histories and financial information, as well as sensitive data such as Social Security numbers, medical records and even people’s biometric data like fingerprints and iris scans. Third-party data brokers often repackage and sell this information without our knowledge or consent.
Tech companies that deal in sophisticated algorithms often say they’re gathering this information to deliver hyper-personalized experiences for people, including easier online shopping, accurate rideshare locations and tailored streaming entertainment suggestions. But dangerous consequences flow from having advanced algorithms analyze our data. Companies can use this data processing to exclude specific users from receiving critical information, violating our civil rights online and off.
Algorithms make inferences about us based on our actual or perceived identities and preferences. Because they learn from our language and patterns of behavior, data shaped by decades of institutionalized discrimination and segregation, they inject offline biases into the online world.
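To see how this happens even when a system never looks at race or another protected trait directly, consider a minimal sketch. It uses synthetic data and hypothetical feature names, not any real company’s system: a model trained on biased historical decisions learns the bias anyway, through a correlated proxy.

```python
# A toy model of proxy discrimination: the protected attribute is never
# an input, but a correlated feature (ZIP code) carries the bias anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (never shown to the model)
zip_code = np.where(rng.random(n) < 0.9,   # residential segregation: ZIP code
                    group, 1 - group)      # tracks group membership 90% of the time
qualified = rng.random(n) < 0.5            # equally qualified across both groups

# Historical decisions were biased: qualified members of group 1
# were approved only 60% of the time.
approved = qualified & ((group == 0) | (rng.random(n) < 0.6))

# Train only on "neutral" features: ZIP code plus random noise.
X = np.column_stack([zip_code, rng.random(n)])
model = LogisticRegression().fit(X, approved)

# The learned scores reproduce the historical gap between equally
# qualified people, because ZIP code acts as a stand-in for group.
scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    mask = (group == g) & qualified
    print(f"group {g}: mean predicted approval among the qualified = {scores[mask].mean():.3f}")
```

Simply excluding the protected attribute from a model’s inputs, an approach sometimes called “fairness through unawareness,” does nothing here: the ZIP-code feature smuggles the bias back in.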
For example, an algorithm widely used in U.S. hospitals to allocate health care to patients systematically discriminated against Black people, who were less likely than their equally sick white counterparts to be referred for extra care. In 2022, Meta settled a lawsuit with the Department of Justice over its housing-advertising system, which resulted in Black users seeing fewer or no ads for available housing on Facebook. And during the 2020 elections, Black, Indigenous and Latinx users were subjected to sophisticated microtargeting based on data collected about them, including deceptive social-media content about the voting process.
These companies must eliminate algorithmic discrimination or face enforcement actions and liability. Congress and the Federal Trade Commission should investigate voter suppression and other civil-rights violations flowing from algorithmic discrimination and other abusive data practices.
Focus on data privacy and transparency
Using massive data sets of personal information to train next-generation AI could magnify these abuses. Lawmakers and regulators should protect users’ online privacy from the worst of what AI might bring.
One powerful way to disrupt AI’s misuse of our data is to limit the amount and types of data that the companies training these systems can collect and store. We may want to share our data to receive services we sign up for, but companies shouldn’t be allowed to collect more information than they need from us, or to retain it longer than they need it to provide the requested goods or services. Nor should they be allowed to modify their terms of service after the fact and repurpose data collected for one service to train generative-AI models.
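As one illustration of what a data-minimization rule might require in practice, here’s a minimal sketch of purpose-bound retention, where every record is tied to a service the user actually requested and is deleted once that purpose expires. The purposes, field names and retention windows are hypothetical, not drawn from any statute.

```python
# A minimal sketch of purpose-bound retention: each record is tied to the
# service it was collected for, and is purged once that purpose no longer
# justifies keeping it. Purposes and windows here are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = {
    "order_fulfillment": timedelta(days=90),
    "fraud_review": timedelta(days=365),
}

@dataclass
class Record:
    purpose: str            # the service the user actually requested
    collected_at: datetime
    payload: dict           # only the fields needed for that purpose

def purge_expired(records: list[Record], now: datetime) -> list[Record]:
    # Records with no recognized purpose default to a zero-day window,
    # i.e. "if you can't say why you're keeping it, don't."
    return [r for r in records
            if now - r.collected_at <= RETENTION.get(r.purpose, timedelta(0))]

now = datetime.now(timezone.utc)
records = [
    Record("order_fulfillment", now - timedelta(days=10), {"address": "..."}),
    Record("order_fulfillment", now - timedelta(days=200), {"address": "..."}),
]
print(len(purge_expired(records, now)))  # 1: the stale record is dropped
```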
We can also limit tech companies’ ability to sell this data to third-party buyers, including data brokers and AI companies. Our data should be ours; we should be able to easily opt out of companies’ collection and retention of our data. And we should have the right to easily access, correct, delete or download the personal information companies do gather about and from us.
Companies that process data should disclose not just what information they collect, but the sources of that information. They must also be transparent about how our information is used, how they make decisions about what content to show us and how they secure our data.
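To make those obligations concrete, here’s a minimal sketch of what a machine-readable disclosure could look like for a single category of collected data. The field names are hypothetical; no current bill mandates this exact format.

```python
# A minimal machine-readable disclosure for one category of collected data,
# covering what was collected, where it came from and how it is used.
import json

disclosure = {
    "category": "location_history",
    "sources": ["mobile_app_gps", "third_party_broker"],  # where the data came from
    "purposes": ["ride_matching"],                        # what it is used for
    "used_in_content_ranking": False,                     # does it shape what we see?
    "shared_with": [],                                    # downstream recipients
    "retention_days": 30,
    "security": "encrypted_at_rest",
}

print(json.dumps(disclosure, indent=2))
```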
Your toolkit for change and accountability
The Algorithmic Justice and Online Platform Transparency Act (AJOPTA) focuses on preventing discriminatory algorithmic processes that disproportionately impact communities of color. The legislation would require tech platforms to annually disclose how they collect and use each user’s personal information in their algorithmic processes. It would also make it unlawful for these companies to process user data in a manner that segregates or discriminates against users, or that otherwise makes goods, services, facilities, privileges or other advantages unavailable, on the basis of an individual’s race, ethnicity, gender identity, religious belief, sexual orientation, disability status or immigration status, among other categories.
Other legislation, including the Platform Accountability and Transparency Act and the American Data Privacy and Protection Act, could ensure that AI technology doesn’t violate our digital civil rights. Meanwhile, the Fourth Amendment Is Not For Sale Act would stop the harmful and unconstitutional sale of personal information to government authorities that lack a warrant. Right now, data brokers are exploiting a privacy loophole, helping law-enforcement agencies sidestep our Fourth Amendment rights by letting them purchase location-tracking data on people all over the United States.
Aside from Congress, the FTC has the rulemaking authority to prevent data abuses and other unfair or deceptive practices, and to take enforcement action against them. Last November, Free Press called on the agency to define blatantly discriminatory data practices as unfair and therefore unlawful acts, and we’ve invited people to tell their own stories of algorithmic abuse so the FTC understands the personal scope of the problem.
Other federal agencies like the Consumer Financial Protection Bureau, Department of Education, Department of Justice and Department of Labor should study how personal information is used in their fields, identify disparities and risks for discrimination, and issue public reports to Congress on a regular basis, with a special focus on the potentially discriminatory effects on people of color and non-English-speaking communities.
People should be able to control their online experiences. We should be able to go online confident that our every click and keystroke isn’t being monitored and recorded, without our approval, for some unknown purpose.
While Congress searches for new ways to rein in AI, it should know that it already has a toolkit full of useful legislation to help prevent the worst abuses. Contrary to what The Washington Post claimed, the forecast for protecting digital civil rights is not cloudy. Congress just needs to act.