FTC Bureau of Consumer Protection Director Andrew Smith this week published some helpful pointers for companies that are developing or using AI to support consumer-facing services. These pointers are drawn from past FTC enforcement actions, reports, and workshops. They boil down to one overarching message: Companies shouldn’t surprise consumers – or themselves – in how they develop or use AI.
Taking care with AI can bring benefits beyond helping to avoid FTC scrutiny. It can also help avoid frayed relationships with consumers and business partners. In addition, paying attention to AI now may leave companies better prepared to deal with future regulations, such as the profiling and automated decision-making provisions of the California Privacy Rights Act ballot initiative, aka CCPA 2.0.
Director Smith’s recommendations fall under four main categories:
- Be transparent;
- Explain your decision to the consumer;
- Ensure that your decisions are fair; and
- Ensure that your data and models are robust and empirically sound.
Although many of these messages relate to sector-specific laws that the FTC enforces, such as the Fair Credit Reporting Act (FCRA) and Equal Credit Opportunity Act (ECOA), they have broader applicability. This post takes a closer look at some of the wider implications of the FTC’s AI guidance.
Keep an Eye on Sectoral Privacy Lines. Long-established laws such as the FCRA, ECOA, and Title VII of the Civil Rights Act of 1964 apply to uses of AI in the areas of consumer reporting, consumer credit, and employment, respectively. Meeting the obligations of these laws depends on recognizing whether and when they apply. However, the laws discussed in the FTC’s blog post are far from exhaustive. One important law to add to those flagged in the blog post: HIPAA. Although it is usually clear when an entity is acting as a healthcare provider, health plan, or healthcare clearinghouse, it may be more challenging to determine when a company becomes a “business associate” of a covered entity. The response to COVID-19 has accelerated the race to develop health-related AI applications, which makes it more urgent for companies to recognize when they are acting as business associates and to understand their responsibilities under HIPAA.
Comprehensively Evaluate Data and AI Models. According to the blog post, the FTC has developed legal and economic criteria for evaluating the presence of illegal discrimination in AI systems – at least in the ECOA context. Specifically, the agency will look at inputs to determine whether they include “ethnically-based factors, or proxies for such factors, such as census tract” as well as outcomes, “such as the price consumers pay for credit, to determine whether a model appears to have a disparate impact on people in a protected class.”
The post strongly suggests that the FTC’s attention to potential discrimination through uses of AI is not so limited: “Companies using AI and algorithmic tools should consider whether they should engage in self-testing of AI outcomes, to manage the consumer protection risks inherent in such models” (emphasis added). The FTC, however, does not offer a framework for these evaluations, nor has it indicated more generally what kinds of AI discrimination risks might be actionable under Section 5. Still, making good-faith efforts to identify and mitigate such risks could help companies stay ahead of the enforcement curve.
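To make the idea of “self-testing of AI outcomes” concrete, the sketch below applies one common benchmark borrowed from employment law, the “four-fifths rule,” under which a favorable-outcome rate for one group below 80% of the most-favored group’s rate is a conventional flag for possible disparate impact. This is purely illustrative: the FTC post does not prescribe any testing method, the group labels and data here are hypothetical, and a real evaluation would be far more rigorous.

```python
def approval_rate(outcomes):
    """Fraction of favorable outcomes (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_flags(outcomes_by_group, threshold=0.8):
    """Compare each group's approval rate to that of the most-favored group.

    outcomes_by_group: dict mapping group label -> list of 0/1 outcomes.
    Returns dict mapping group label -> (ratio, flagged), where flagged is
    True when the ratio falls below the four-fifths threshold.
    """
    rates = {g: approval_rate(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

# Hypothetical model outcomes for two groups (illustrative data only):
results = disparate_impact_flags({
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% approved
})
# group_b's ratio is 0.4 / 0.8 = 0.5, below the 0.8 threshold, so it is flagged.
```

A company might run a check like this on each protected class before deployment and periodically thereafter, documenting the results as part of its good-faith mitigation record.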
Conduct Due Diligence on Vendors, and Constrain Downstream Users. Another theme runs throughout the FTC’s AI guidance (as it does through the agency’s privacy, data security, telemarketing, and other consumer protection policy): companies should carefully assess how upstream providers of AI-related data and analytics comply with their legal obligations, and they should impose appropriate constraints to prevent their own customers from using AI services in inappropriate or illegal ways. Although the FTC focuses on upstream and downstream requirements under the FCRA and ECOA, these considerations are equally important when Section 5 is the main concern and the goal is to stay clear of more highly regulated activities. Conducting due diligence before entering into an AI-related business relationship, requiring contract terms that spell out permissible uses of AI systems and data inputs, and monitoring the performance of business partners are all critical to achieving these ends.
The FTC’s role in overseeing AI uses in the economy is in its infancy and will continue to evolve. We will keep a close watch on further developments on this front, and for a broader and more detailed review of AI-related data issues, the ABA Section of Antitrust Law’s report on Artificial Intelligence and Machine Learning: Emerging Legal and Self-Regulatory Considerations provides a helpful resource.