This week, State AGs and their staff gathered for the annual National Association of Attorneys General AG Symposium, where they discussed topics such as leadership, relationships with prosecutors, and Supreme Court updates. One of the most topical panels was a discussion of “Regulating Algorithms – The How and Why,” moderated by Natalie Hanlon Leh, Chief Deputy AG of the Colorado AG’s Office, and featuring several academics in law and technology, including Professors Ellen P. Goodman, Michael Kearns, and Beth Simone Noveck.
The panelists first noted that, with the increasing proliferation of generative AI (e.g., ChatGPT), algorithms have become more complex than ever before. Creators have more insight into the training inputs of traditional machine learning models than into those of generative models. But panelists noted that while AI can produce bias stemming from its training inputs, it can also be a tool for reducing or identifying bias. AG offices will likely need to advise their client agencies on the potential risks of AI technology as states consider using it to enhance their own services, so AGs were encouraged to become more knowledgeable. Professor Kearns noted that it is much more difficult to identify risks without a better understanding of the specific use case for a given application of AI. Professor Goodman described certain algorithmic claims as potentially deceptive, which is hard to evaluate without understanding how machine learning works. Professor Noveck pointed to the importance of how humans would be incorporated into the ultimate algorithm workflow. She also stressed that when AI is used, its outputs should be constantly reassessed to determine whether that use leads to discrimination and bias.
Specific legislation regarding the use of algorithms in decision making has been enacted in the EU and New York City and proposed in Colorado, but panelists raised questions about its effectiveness and enforcement.
The AGs were engaged in the discussion, although they admitted it was complicated even to formulate questions. North Carolina Attorney General Josh Stein asked what State AGs could do, whether through enforcement or by supporting legislative efforts, to ensure that AI is used as a positive tool rather than one that harms consumers. He also asked whether there are rogue AI companies they should be pursuing. While no panelist took the bait in identifying “rogue companies,” it is clear from General Stein’s line of questioning that AGs are interested in how they can shape the AI space and find potential new areas for enforcement. As we have previously noted, State AGs are already scrutinizing big tech companies in part for how their algorithms affect children. Given regulator interest and evolving technology, this is a critical time to help educate enforcers on AI technology and its use, and businesses should be prepared for increased questions about their use of new technologies in the services they provide.