FTC, CFPB, DOJ, and EEOC Pledge Increased Focus on Discrimination in AI

Artificial intelligence and algorithmic processes remain at the top of federal law enforcement agencies’ agendas. Yesterday, the FTC, CFPB, DOJ, and EEOC issued a joint statement pledging to use their respective tools to protect the public from “bias in automated systems and artificial intelligence.”

While these agencies’ general commitment to monitoring AI processes is not new (for example, see here, here, and here for recently published guidance on the use of AI in various contexts), the joint statement shows they are now approaching AI enforcement in a methodical and coordinated manner. The statement summarizes the agencies’ existing legal authority and prior work on AI issues, along with three main areas of concern:

  • Bad data: Datasets used to train algorithms may be unrepresentative, imbalanced, biased, or contain other errors that could produce discriminatory outcomes. Similar outcomes could occur if automated systems rely on data that is correlated with protected classes (a basic check along these lines is sketched after this list).
  • Lack of transparency: Many algorithmic models are “black boxes” whose internal workings may not be clear even to their developers. The lack of transparency makes it difficult to evaluate whether the systems are acting fairly.
  • Unanticipated uses: Automated systems designed with one purpose in mind may be appropriated for other uses. In such cases, the repurposed algorithm may produce improper results because the system’s design is based on flawed assumptions about its users, relevant context, or underlying practices or procedures it might replace.
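
Purely as an illustration (not something drawn from the agencies’ statement), the sketch below shows the sort of basic representation and proxy check a team might run on training data to surface the “bad data” concern. The column names (`protected_class`, `income`, `approved`), the sample values, and the pandas-based approach are all assumptions made for the example.

```python
import pandas as pd

# Hypothetical training data; the column names and values are illustrative
# assumptions, not drawn from the agencies' joint statement.
df = pd.DataFrame({
    "protected_class": ["A", "A", "B", "B", "B", "A", "B", "A"],
    "income":          [40,  55,  38,  90,  72,  61,  45,  33],
    "approved":        [0,   1,   0,   1,   1,   1,   0,   0],
})

# 1. How is each group represented in the training data?
#    A first pass at spotting imbalance or missing populations.
print(df["protected_class"].value_counts(normalize=True))

# 2. Does an ostensibly neutral feature move with the protected attribute?
#    A rough proxy check, not a legal disparate-impact analysis.
print(df.groupby("protected_class")["income"].mean())
```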

In addition to discrimination harms, the joint statement also points to other possible AI-related harms, such as companies overstating AI capabilities and using improperly collected data to train AI systems.

The joint statement doesn’t necessarily break new ground, but it communicates a level of urgency, prioritization, and cross-agency collaboration that should not be overlooked. Companies using automated systems to make decisions that could affect consumers should carefully monitor outcomes for discriminatory impact and take steps to control for bias in training datasets.
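
As a minimal sketch of what outcome monitoring can look like in practice, the example below compares selection rates across groups and computes a disparate impact ratio, using the EEOC’s “four-fifths” rule of thumb as a rough screening threshold. The column names and data are hypothetical, and the check is a screening heuristic, not a legal standard.

```python
import pandas as pd

# Hypothetical decision log from an automated system; the column names
# ("group", "selected") are illustrative assumptions.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
selection_rates = outcomes.groupby("group")["selected"].mean()
impact_ratio = selection_rates.min() / selection_rates.max()

print(selection_rates)
print(f"Disparate impact ratio: {impact_ratio:.2f}")

# The 0.8 threshold reflects the EEOC's "four-fifths" rule of thumb;
# falling below it is a flag for further review, not a legal conclusion.
if impact_ratio < 0.8:
    print("Flag for review: selection rates differ materially across groups.")
```

In practice, a check like this would run on real decision logs on an ongoing basis and would be only one input into a broader review of how the system was trained, deployed, and repurposed.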