Consumer Protection in the Age of AI: The FTC's Approach to AI Regulation

Brenda Leong
Ellie Graeden
Ekene Chuks-Okeke
Blair Robinson
July 2, 2024

Artificial intelligence is revolutionizing our daily lives in health care, shopping, and travel, but, as we become increasingly aware, it is also introducing serious risks for consumers. The U.S. Federal Trade Commission recently banned Rite Aid from using facial-recognition technology for five years because the company deployed a biased system that falsely accused many women and people of color of shoplifting. The FTC also required Rite Aid to delete all collected biometric data.

Unfortunately for Rite Aid, that’s not as simple as deleting spreadsheets. Once an AI model has been trained on data, that data is baked in. Deleting the data requires deleting the model or proving that it has "forgotten" the data, a technical feat that is not well established and is unlikely to satisfy the FTC in the near future. Rite Aid learned its lesson the hard way; other companies using AI should put responsible governance principles and engineering practices in place up front.

The Rite Aid case underscores a crucial lesson: companies cannot expect to get away with treating AI as a mysterious black box. They need to understand and anticipate the potential impact of the models they develop and use. The FTC is leveraging all of its authority to police the rapidly growing consumer AI market, relying heavily on the FTC Act, data-focused regulations such as the Children's Online Privacy Protection Act, and sector-specific legislation such as the Fair Credit Reporting Act. If comprehensive U.S. privacy or AI legislation ever passes, the FTC will likely be one of the lead enforcement agencies.

The FTC has been proactive and transparent in its approach to the emerging consumer AI industry. In recent remarks at the FTC’s Annual Technology Summit, FTC Chair Lina Khan highlighted the importance of tackling bottlenecks in the AI market that dominant companies could use to limit competition; indeed, the FTC is investigating several AI companies for monopolistic practices. In addition to concerns about the competitive marketplace for AI, the commission is increasing its focus on deceptive methods and invasive data practices powered by AI tools.

Drawing on findings from its recent study of the data-brokering market, which found companies collecting personal information aggressively and sometimes deceptively, and applying long-standing rules mandating fair business practices, the FTC is paying attention. Regulators are concerned these methods are reaching new levels of invasiveness when powered by AI tools.

In short, the FTC has a plan to address AI regulation. Do you?

They speak clearly and carry a big stick

The FTC has clearly and often signaled that AI companies must continue to take data protection and fair business practices seriously and follow existing FTC guidelines in regulated industries. The message is clear: The "old" rules still apply.

AI doesn't change the rules

Companies may be tempted to tap their user base for training data because the cost is attractive: namely, "free." However, they should exercise caution when gathering new data types or using existing data for new purposes. The FTC considers privacy policies a contract with users, one that can't be changed without proper notification. Companies also can't use dark patterns to obtain consent deceptively. By the same token, price fixing by an AI is still price fixing. Even in this brave new consumer AI world, the existing FTC rules carry on.

Fines are bad; losing your model is worse

The FTC's order mandates that Rite Aid stop using facial recognition and other biometric tracking technologies and delete all collected data, images, algorithms, and products developed from that data. Once an AI model like facial recognition has been trained on data, that information is permanently encoded into the model's weights in an opaque fashion data scientists are only beginning to understand. Since deleting, or disgorging, the entire model is currently the only reliable way to remove training data, Rite Aid lost an expensive investment: models like theirs typically cost millions to develop. Loss of the product, indefinitely if not forever, hurts more than the fines.

Security still matters

AI developers are responsible for securing both their data and their models. Security best practices remain critical: implement robust authentication mechanisms, such as multi-factor authentication, and define user roles with specific permissions and limits. Recognize and mitigate risk from both external adversaries and insider threats, and regularly update and monitor these controls to keep pace with evolving security threats. The FTC recognizes that unauthorized access, data breaches, and misuse are even more dangerous to consumers when AI and sensitive information are involved, and it is holding companies accountable.
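To make those controls concrete, here is a minimal sketch of deny-by-default, role-based authorization with a multi-factor check for a hypothetical internal model-serving service. The role names, permissions, and request fields are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of role-based access control for a hypothetical internal
# model-serving service. Roles, permissions, and the Request type are
# illustrative assumptions.
from dataclasses import dataclass

# Each role is granted an explicit, limited set of permissions.
ROLE_PERMISSIONS = {
    "analyst": {"query_model"},
    "ml_engineer": {"query_model", "retrain_model"},
    "admin": {"query_model", "retrain_model", "export_training_data"},
}

@dataclass
class Request:
    user_id: str
    role: str
    mfa_verified: bool  # set by the authentication layer after a second factor
    action: str

def authorize(request: Request) -> bool:
    """Deny by default: require MFA and an explicit role grant for the action."""
    if not request.mfa_verified:
        return False
    allowed = ROLE_PERMISSIONS.get(request.role, set())
    return request.action in allowed

# An analyst with MFA may query the model but cannot export training data.
assert authorize(Request("u1", "analyst", True, "query_model"))
assert not authorize(Request("u1", "analyst", True, "export_training_data"))
```

The design choice worth noting is the default: any role or action not explicitly listed is refused, which is easier to audit and explain to a regulator than a permissive default with exceptions.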

Sights are set on sensitive data

The FTC has been targeting companies that misuse sensitive data, such as children's voices, video and facial biometrics, and DNA. Sensitive data poses a high risk to individuals, and de-identification or partial anonymization doesn't solve all the problems. AI companies need approaches tailored to each data source, since high-risk data such as DNA and biometrics requires more than general pseudonymization. Each category of sensitive data collected and used in AI development requires specific privacy and safety measures.
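To illustrate why general pseudonymization falls short, the sketch below replaces a direct identifier with a keyed hash; the field names and key handling are illustrative assumptions. Note that this protects only the identifier column, while a biometric template or DNA sequence remains identifying on its own and needs stronger, data-specific controls.

```python
# Minimal sketch of pseudonymizing a direct identifier with a keyed hash.
import hashlib
import hmac
import os

# In production the key would come from a secrets manager, not a default value.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "example-key-rotate-me").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "face_embedding": [0.12, 0.98, 0.33]}
record["email"] = pseudonymize(record["email"])
# The embedding itself is still biometric data; hashing the email is not enough.
print(record["email"][:16])
```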

Don't stretch the AI truth

The FTC cares whether companies do what they say they do, so keep claims in check. In advertising and branding, do not exaggerate what your model can do. Don't claim products rely on AI if they don't. Don't claim AI has capabilities it doesn't have, or that your AI product is better than a non-AI product unless you can back it up. The FTC has called out several services making false claims, including those claiming they can reliably detect AI-generated content. Companies using AI must carefully consider their public claims around AI – those claims must be honest and accurate. Reputation – and profits – are on the line.

Separate fact from fiction

The rise of AI-powered technologies such as chatbots, deep fakes, and voice cloning has made it easier than ever to spread misinformation. These technologies erode trust and create false confidence in the information presented, making it difficult for consumers to discern the truth. The FTC is keenly interested in developing the tools necessary for consumers to tell reality from AI fantasy.

So, what should companies using AI do?

Companies relying on AI must accurately inventory their systems, document their development and procurement processes, establish robust governance protocols, perform risk assessments, and enforce long-term oversight practices. They should also map their data, review privacy and security policies, and establish incident response plans.
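An inventory doesn't need to be elaborate to be useful. The sketch below shows one hypothetical shape such a record might take, with illustrative field names; the point is simply to capture purpose, data sources, sensitivity, risk level, and ownership so that later risk assessments and incident response have something concrete to work from.

```python
# Minimal sketch of an AI system inventory entry; field names are illustrative.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str          # the specific use case the system supports
    data_sources: list[str]        # where training and inference data come from
    contains_sensitive_data: bool  # biometrics, health, children's data, etc.
    risk_level: str                # e.g., "low", "medium", "high"
    owner: str                     # accountable team or individual
    last_risk_assessment: str      # date of the most recent documented review

inventory = [
    AISystemRecord(
        name="store-security-vision",
        business_purpose="Flag suspected theft for human review",
        data_sources=["in-store camera feeds"],
        contains_sensitive_data=True,  # facial images are biometric data
        risk_level="high",
        owner="loss-prevention",
        last_risk_assessment="2024-05-15",
    ),
]
```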

Implement robust governance

AI companies must genuinely do what their AI policies claim, ensuring the policies are more than boilerplate designed to check a box. The FTC advises organizations to adopt self-governance around compliance, ethics, fairness, and nondiscrimination to avoid regulatory scrutiny and promote integrity. It also recommends independent audits to meet standards, maintain transparency, and build trust with stakeholders and the public.

Conduct and act on risk assessments

Bridging the internal gap between policy and technical operations is crucial. Companies must conduct — and document — AI model risk assessments; integrate evaluation and mitigation measures directly into design specs; and layer internal and external oversight roles that match the level of risk identified. This includes considering the system's foreseeable downstream uses, accounting for active and passive monitoring, and ultimately assessing its actual performance and impact once deployed.

An ounce of corporate prevention is worth a pound of FTC cure

Making informed choices at the initial design stage of software development is essential to prevent future compliance and security problems. Implementing principles inspired by Privacy by Design and Secure by Design ensures privacy, security, and ethics are integrated into every development phase, ultimately saving time and resources while preventing potential legal issues. We'll echo the FTC in citing the great Dr. Ian Malcolm: "Can I make this?" and "Should I make this?" are two very different questions.

In addition, companies should thoroughly inventory and map their data. It is challenging for a company to defend against a bias claim if it is unaware of the data its model uses. Along the same lines, sensitive data and high-risk use cases require extra care. Systems using sensitive data such as health, biometrics, and children's information need special security, privacy, and risk-assessment measures – which are impossible without a thorough data inventory.

Say what you do and do what you say

Companies integrating AI should define a clear and detailed use case for their systems to avoid scope creep, which can lead to loss of focus and overly ambitious projects. The FTC has warned against overpromising and underdelivering to consumers, so an effective AI tool should be designed to address specific issues rather than being a universal solution. Tying each product to a specific business use case also positions it for more effective data governance, ensuring that only the data needed is used and limiting potential impact or liability from breaches.

This list reflects what we know today, based on past FTC practices, recent FTC AI-specific guidance and activity, and projected enforcement of evolving applications. It provides a useful baseline for companies moving forward with AI, no matter where it is used in their business operations. But one last point: every AI system is built on data. Being able to say what an AI product, service, or feature does means knowing the data it's built on and keeping track of that data over time. Companies cannot let shiny new AI options distract them from the long-standing requirement for general good governance and risk management practices; letting that happen is what will get them in trouble with the FTC, and with their customers too.

Originally published on the IAPP website on June 26, 2024.