When a disclosure is necessary to prevent an ad from being misleading, the disclosure must be presented in a “clear and conspicuous” manner. Exactly what that means depends a lot on the context, but one question we get regularly is whether disclosures can be presented through a hyperlink. In a recent decision involving ads for HelloFresh, NAD looked at FTC guidance and considered just how much of a disclosure can appear on a separate page.

HelloFresh advertised that consumers could “get 16 free meals with your purchase + free shipping.” A link invited consumers to “learn more.” If a consumer were to click on that link, she’d find a detailed disclosure explaining the material terms of the offer. That disclosure included a lot of information and ran for 194 words. That may be too long to include in some ads, so it raises the question of whether presenting it through a link is sufficient.

NAD started by looking at the FTC's .com Disclosures guidelines. Among other things, those guidelines state that "disclosures that are an integral part of a claim or inseparable from it should not be communicated through a hyperlink." However, the FTC also acknowledges that "hyperlinks can provide a useful means to access disclosures that are not integral to the triggering claim," provided that the link is obvious, near the claim, and conveys the nature of the information on the landing page.

In the past, NAD has held that “hyperlinks may not adequately alert consumers to the nature of the material limitations associated with the continuity plan that are material to purchasing decisions.” In this case, NAD parsed through the 194-word disclosure and determined that some terms were so integral to the claim that they should “be clearly and conspicuously disclosed in close proximity to the free claims.” Other terms could be provided via a link.

Advertisers will have to parse their own offer terms to separate those that are "an integral part of a claim or inseparable from it" from those that are "not integral to the triggering claim." The former should generally be included near the claim, while the latter may be provided through a link. Unfortunately, there often isn't a clear answer to this question, and companies will often have to make difficult decisions.

Last June, the FTC announced that it was looking for input on ways to modernize the .com Disclosures guidelines, including by providing new guidance on the use of hyperlinks. The updated guidance may provide some answers. However, based on the tone of the press release – which, among other things, complains that some companies are "burying disclosures behind hyperlinks" – it's likely that not everyone will like those answers.

This week, the FTC held its "Talking Trash at the FTC" workshop, a four-hour event intended to examine "recyclable" claims in ads. We've sifted through some of the trash and pulled out a few things worth noting.

  • Substantial Majority Test:  The Green Guides state that a company can make an unqualified "recyclable" claim, as long as a substantial majority – defined as 60% – of communities or consumers where a product is marketed have access to recycling facilities that will accept the material. We learned right up front that the FTC is particularly interested in whether it should take another look at the "substantial majority" test. No one recommended that the FTC change that threshold – and one audience member noted that doing so would cause confusion – so we don't expect the FTC to do so unless there is compelling survey data suggesting it is necessary. As we heard during the workshop, whether something will be accepted for recycling can vary from state to state and town to town, so imposing a stricter standard would make "recyclable" claims harder to manage.
  • Capability v. Actuality:  Some panelists suggested that advertisers shouldn't be able to make "recyclable" claims unless they have evidence that a product is actually recycled into something new. Plaintiffs have advanced similar theories in lawsuits, but some courts – like this one – have rejected them, noting that "no reasonable consumer would understand '100% recyclable' to mean that the entire product will always be recycled." Instead, that court held that "recyclable" simply means that a product is capable of being recycled. We agree, but we'll see how the FTC comes out. Again, we think there would need to be pretty compelling consumer perception survey data suggesting consumers understand recycling to mean that the entire product will always be recycled in order for the FTC to change its view in the Green Guides.
  • Resin Identification Codes:  Resin Identification Codes or “RICs” –  little numbers at the bottom of a container enclosed within a chasing arrow triangle, like this – were also a hot topic. Because consumers may interpret a RIC to mean a package is recyclable, the Guides advise marketers to place it in an inconspicuous location, such as on the bottom of the container. Panelists generally agreed that consumers continue to be confused, and some suggested that for plastics that are generally not recyclable, companies should be required to include a disclosure stating that a product is not recyclable. Currently, if a product may be recycled by only a few consumers, companies must include a strong qualifier, such as “this product is recyclable only in the few communities that have appropriate recycling programs.” Will the FTC require another disclosure saying “this product is not recyclable” for a product that is not recyclable, and doesn’t claim to be recyclable, but has the RIC number at the bottom of the package? We’ll have to wait and see. In the meantime, remember that a new law in California will also impact this issue.
  • Chemical Recycling:  Another hot topic at the workshop involved chemical recycling. Although FTC staff made it clear at the outset that the workshop was not for discussing environmental policy, that's exactly what happened when panelists and attendees debated chemical recycling. Currently, the primary technology for plastic recycling is mechanical recycling, which uses physical processes – such as sorting, grinding, and washing – to recover used plastics. GAO reports that mechanical recycling technology is expensive and labor intensive and generally results in lower-quality plastics. Chemical recycling (or advanced recycling) uses heat, chemical reactions, or both to break plastic down from the polymer into its monomers and additives. The industry says that through advanced recycling, a "circular" plastics economy can be created that reduces the need to tap virgin fossil fuels to make its products. Some chemical recycling is used to break down plastic into fuel, which was not favored at the workshop because the fuel will eventually be burned and end up in the atmosphere. Whether claims around chemical recycling resulting in new plastics will be permitted is an open question and one that the FTC will assess during its review of the Guides. Since this is a new and emerging area of interest with open questions around the environmental benefits versus trade-offs, we think it is unlikely that the FTC will provide much concrete guidance on these types of claims.
  • Circular Economy:  Many of the panelists brought up the term "circular economy," which is about reusing products rather than scrapping them and then extracting new resources. We heard that there has never been more momentum around circularity and that products should be designed with recycling and end use in mind. To this end, a panelist from The Recycling Partnership noted that the organization provides helpful guidance for product design. While not discussed at the workshop, we expect the Green Guides might provide new examples of how marketers may substantiate claims touting that their products promote a circular economy.
  • Rulemaking:  FTC staff was very interested in hearing from panelists on whether the FTC should engage in rulemaking or if the Guides are working. In the wake of the Supreme Court’s decision in AMG (holding that the FTC can’t obtain monetary relief under Section 13(b)), the FTC is increasingly relying on other legal tools to get money – notably, alleging rule violations wherever possible, which enables the FTC to seek civil penalties and/or consumer redress. This has resulted in a long list of proposed rules. Our colleague Jessica Rich covered many of the pending rules here.  Panelists were split on whether rulemaking should be initiated, and we’ll have to wait to see if the FTC adds this topic to its growing list.

While the comment period for the Green Guides is now closed, the Commission is still accepting comments until June 13, 2023 for those who wish to provide input on the topics discussed at the workshop.

Google updated its privacy terms earlier this month, shifting away from offering many of its advertising services on a “service provider” basis.  With the change, Google states that its Customer Match, Audience Partner API, and certain audience-building services no longer meet the CCPA’s strict new requirements to be offered on a “service provider” basis.  The implication of this change is that companies leveraging these services are “selling” or “sharing” personal information and will need to offer consumers an opportunity to opt out.

“Restricted Data Processing” Under the CCPA

Since 2019, Google has offered a number of its services on a "restricted data processing" basis.  Where a service is configured for restricted data processing, Google acts as a service provider with respect to personal information (e.g., names, email addresses, online identifiers) that Google collects from advertisers, publishers, and other partners.

Under the California Consumer Privacy Act (CCPA), which first took effect in 2020, a service provider is not permitted to use personal information other than for business purposes associated with offering services.  For example, the CCPA does not permit a service provider to resell personal information processed on behalf of a business or to use the information to build profiles about individual consumers for its own commercial benefit.

In documentation available at https://business.safety.google/rdp/, Google explains that when restricted data processing applies, Google will use personal information for business purposes such as ad delivery, reporting and measurement, security and fraud detection, debugging, and to improve and develop product features.  Google cites these policies to support its position that it is a “service provider” for many of its advertising-related services, such as Google Ads, Google Analytics, Tag Manager, and Display & Video 360.
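To make the mechanics a bit more concrete, here is a minimal sketch of how an advertiser's site might signal restricted data processing to Google's tags. It assumes the standard gtag.js snippet is already on the page and uses the restricted_data_processing parameter described in Google's documentation; the applyCcpaOptOut helper and the opt-out state are hypothetical illustrations, and the parameter's behavior should be confirmed against Google's current guidance.

```typescript
// Minimal sketch (not from the post): signaling Google's restricted data
// processing mode via gtag.js for a visitor who has opted out of the
// "sale"/"sharing" of personal information under the CCPA.
// Assumes the standard gtag.js snippet is already loaded on the page.
declare function gtag(...args: unknown[]): void;

// Hypothetical helper: pass in your own record of the visitor's opt-out choice.
function applyCcpaOptOut(hasOptedOut: boolean): void {
  // When true, Google's ads tags are asked to operate in restricted data
  // processing mode (i.e., as a service provider) for this user.
  gtag('set', 'restricted_data_processing', hasOptedOut);
}

// Example usage: a California visitor who clicked "Do Not Sell or Share".
applyCcpaOptOut(true);
```

In practice, a business would wire this flag (or an equivalent server-side signal) to whatever mechanism it uses to record consumer opt-outs, rather than hard-coding it as in this sketch.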

What’s changing?

Starting July 1, 2023 – the day that the California Privacy Rights Act (CPRA) amendments to the CCPA become enforceable – Google will no longer offer restricted data processing for the following services in California:

  • Any feature that entails uploading customer data for purposes of matching with Google or other data for personalized advertising (e.g., Customer Match)
  • Any feature that entails targeting user lists obtained from a third party (e.g., Audience Partner API)
  • Any feature that entails creating, adding to, or updating user lists using first-party customer data (e.g., audience building with floodlight tags and audience-expansion features in DV360)

These changes reflect key amendments to the CCPA.  In particular, the CPRA amendments define “cross-context behavioral advertising” to mean “targeting of advertising to a consumer based on the consumer’s personal information obtained from the consumer’s activity across” the internet, and prohibit service providers from offering services that involve “sharing” personal information for purposes of “cross-context behavioral advertising.”

The clear but unstated message behind these changes is that Customer Match involves cross-context behavioral advertising.  When an advertiser uses the Customer Match service, the advertiser provides Google with a target audience, and Google displays ads to that audience in its search results.  Because the service involves targeting ads to consumers on Google based on those consumers' interactions with the advertiser, Google's apparent position is that Customer Match is a cross-context behavioral advertising service.

As noted above, advertisers, publishers, and other businesses that share personal information with third parties (such as Google) for cross-context behavioral advertising must offer consumers an opportunity to opt out of the "sale" and "sharing" of their personal information.  In addition, as described in the latest CCPA regulations, these businesses are required to enter into a contract for the "sale" or "sharing" of personal information that requires the third-party recipient to comply with the CCPA and provide the same level of privacy protection for consumer data as any business subject to the CCPA.

Where can I find the restricted data processing contract?

Google publishes its restricted data processing contract for US state privacy laws at https://business.safety.google/usaprivacyaddendum/.

What about Google Analytics?

Google Analytics is a popular service that allows businesses to gain insights into who visits their digital properties.  Google states that it will act as a service provider for Google Analytics as long as the business disables sharing with other Google products and services.

Google offers a variety of privacy-related tools for Google Analytics, including support for deletion requests, here.

What about real-time bidding?

Google also offers services like Display & Video 360 and Authorized Buyers that enable advertisers to bid in real time on ad inventory across the web.  Google indicates that these services continue to operate using restricted data processing but also makes clear that restricted data processing "does not extend to the sending or disclosure of data to third parties that you may have enabled in our products and services."  As a result, publishers issuing bid requests and advertisers responding to publisher bid requests should understand that personal information conveyed to third parties for bidding purposes may not be covered by Google's restricted data processing terms.

During the Federal Trade Commission’s (FTC) Open Meeting on May 18, the Commissioners unanimously voted to adopt the Policy Statement on Biometric Information and Section 5 of the FTC Act. The Policy Statement broadly defines biometric data, catalogues the risks the Commission believes are posed by technology that utilizes biometric information, and imposes substantive requirements on companies employing these technologies.

The Policy Statement refers to biometric information as data in the form of depictions, images, descriptions, or recordings of "physical, biological or behavioral traits, characteristics, or measurements of or relating to an identified or identifiable person's body." The scope of such information includes what typically comes to mind when consumers think of biometric data (e.g., facial recognition, iris or retina, fingerprints or handprints, genetics, voice), but also characteristics of movement or gesture like gait or typing pattern. Such information also includes data derived from the depictions, images, recordings, etc., to the extent it would be "reasonably possible" to identify the consumer from whom the original information was derived. The Policy Statement provides an example of biometric information: a facial recognition template, derived from a consumer's photograph, that encodes measurements or characteristics of her face.

It also highlights risks posed to consumers by biometric technologies, including: revealing sensitive information about individuals (e.g., attendance at political events), fraud (e.g., using consumers' images in deepfakes), and, most importantly, bias that leads to harmful or illegal discrimination. Large databases of biometric information may also be an attractive target for other illicit uses by malicious actors. The FTC broke down the practices it will examine for potential violations of Section 5 of the FTC Act (see below), but a major development is that the FTC is essentially requiring companies using biometric information to undertake risk assessments before they collect or use biometric information or deploy biometric information technology.

The Statement specifies the practices the Commission will scrutinize in determining whether companies are compliant with Section 5:

  • False or unsubstantiated claims relating to the validity, reliability, accuracy, performance, fairness, or efficacy of technologies using biometric information.
  • Deceptive statements about the collection and use of biometric information.
  • Failing to assess foreseeable harms to consumers before collecting biometric information.
  • Failing to promptly address known or foreseeable risks.
  • Engaging in surreptitious and unexpected collection or use of biometric information.
  • Failing to evaluate the practices and capabilities of third parties.
  • Failing to provide appropriate training for employees or contractors.
  • Failing to conduct ongoing monitoring of technologies that the business develops, offers for sale, or uses in connection with biometric information.

Takeaways

In many ways, this Policy Statement follows the approach that the FTC has taken in privacy and security cases for decades. However, the Policy Statement does attempt to impose substantive requirements on companies (i.e., risk assessments) and further evidences the FTC's commitment to leveraging Section 5 as a disparate impact antidiscrimination statute. The Commission is clear that companies' risk assessments should consider whether any algorithms or technical components of the system have been tested for disparate impact. The strong implication is that companies must not use algorithms or technology that have not been tested for disparate impact. Commissioner Bedoya, in his statement at the Open Meeting, emphasized his belief that companies cannot use technology until they have thought about how bias could affect consumers and have proactively addressed such harms.

The Statement also provides insight into how the Commission will evaluate the reasonableness of companies' use of biometric data. Under Section 5 of the FTC Act, in order for the Commission to find a practice unfair, it must determine that the practice causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and is not outweighed by countervailing benefits to consumers or competition. In the privacy and data security space, the FTC's unfairness determination has been guided by an assessment of whether a company's overall practices were reasonable. Traditionally, the FTC has steered away from bringing cases that ultimately require it to argue that a business made a decision that, in hindsight, was not optimal. In the Policy Statement, the FTC states that if it believes a company employing biometric information in its technology could have used a "less risky alternative," it will weigh that more heavily against any arguments that the technology was more convenient, efficient, or profitable.  The FTC seems skeptical that companies could present evidence of benefits to consumers or competition that could outweigh what it views as serious risk of injury to consumers.

 The Policy Statement is another reminder that regulators are focused on the use and collection of sensitive data, and businesses collecting or using biometric data should review their practices to determine whether they comport with this latest guidance.

As we have previously reported, State Attorneys General have joined other enforcers in addressing the latest AI technology. At the recent 2023 NAAG Consumer Protection Spring Conference, two separate panels discussed how the AGs are focusing on AI.

When asked about concerns with AI, New Hampshire Attorney General Formella explained that technology often moves faster than the government. He is working to engage with the private sector to understand better what emerging technologies are doing, and encourages an open line of communication. New York’s First Assistant Attorney General, Jennifer Levy, noted that her office has brought recent actions involving algorithmic decision-making, including: 1) working with the state education department to put guardrails around a contract with a vendor using facial recognition for school discipline, given potential algorithmic bias, 2) bringing litigation with the CFPB against Credit Acceptance Group, alleging they used algorithms to skew the principal and interest ratio, and 3) settling with lead generators of fake comments regarding the repeal of net neutrality. She echoed that laws don’t always catch up to practices.

Later in the day, attendees were treated to a panel on "Artificial Intelligence & Deep Fakes: The Good, The Bad & The Ugly." Kashif Chand, Chief of the New Jersey Division of Law's Data Privacy & Cybersecurity Section, moderated with Patrice Malloy, Chief of the Multistate and Privacy Bureau of the Florida Attorney General's Office, and they were joined by panelists Santiago Lyon, Head of Advocacy and Education for the Adobe-led Content Authenticity Initiative, and Serge Jorgensen, Founding Partner & CTO of the Sylint Group. Chand began by explaining that years ago states relied on general UDAP laws to address new technologies, and now many states have technologists and additional laws to handle privacy and technology issues. He noted that to deal with deep fake issues, for instance, states can use misrepresentation and deception claims as well as unfairness and unconscionability. Turning to AI, Chand focused on whether consumers are being told what the intended use of the AI is. Specifically, there may be significant omissions by creators that would lead consumers to think something is going to happen when it is not, which could give rise to an unfairness claim. Chand pointed to Italy's block of ChatGPT over potential data processing issues and children's access, which relied not on new laws but on the GDPR generally. But even states without specific data privacy laws can still rely on UDAP theories to address these same concerns.

Lyon described the importance of provenance to the future of AI: the Internet must allow for transparency and labeling of content's origins to determine authenticity. Jorgensen echoed that one issue is that consumers may not even know when AI is in use, such as meeting software transcribing notes or AI making hiring decisions. Malloy raised the question as to how consumers can consent if they don't even know the technology is being used. Jorgensen said developers can consider security and privacy by design, and that the industry will have to think more about this.

Lyon and Jorgensen both raised concerns that data training sets could become tainted with either copyrighted or illicitly gained data. However, as panelists pointed out, if more limits are put in place over data sets, it is an open question how certain AI models can gain enough data to generate output. Chand emphasized that transparency is key for consumers to understand what they are giving up and what they are getting in return. He also noted that once a company makes claims about its data practices, they are hard to verify other than through white hat hackers and researchers, and that as AI learns more, businesses need to monitor how it is being used to ensure they do not engage in deceptive trade practices.

With misinformation becoming tougher to spot, panelists emphasized the need for increased transparency and consumer education and information. Chand noted that future generations will continue to have a better understanding of the use of technology and controls over privacy as they benefit from today’s regulations and education.

Based on this panel, adopters of AI in their business should consider the following:

  • How will you disclose the use of AI technology?
  • How will you educate consumers about the potential risks, benefits, and limitations?
  • How can you consider consumer choice when training AI?
  • How will you monitor how your AI is evolving?
  • How will you prevent potential algorithmic bias?
  • How will you protect children’s data?
  • How will you protect proprietary or copyrighted data?

While the answers to these questions may differ depending on each business's specific situation, remember that transparency with consumers and the public is key to staying off the radar of enforcers.

Abraham Lizama purchased a turquoise sweater from H&M's "Conscious Choice" collection, a line of clothing "created with a little extra consideration for the planet" whose garments generally include "at least 50% of more sustainable materials." Although we imagine that Lizama looked quite handsome in his sweater, he soon regretted his purchase and filed a class action against the retailer, accusing it of greenwashing because the sweater did not meet his view of what's good for the environment.

Lizama argued that H&M misled consumers into thinking that its Conscious Choice collection was “environmentally friendly.” (By our count, that phrase appears more than 100 times in the complaint.) The court pointed out, though, that H&M never actually uses that phrase to describe its garments. Moreover, H&M does not represent that its Conscious Choice products are “sustainable” – only that the line includes “more sustainable materials” and its “most sustainable products,” which the court said are obvious comparisons to H&M’s regular materials.

Lizama argued that the garments were not sustainable because recycling PET plastics into clothing isn’t as good for the environment as recycling those plastics into plastic bottles. Even if that’s the case, the court noted that the comparison wasn’t relevant in determining whether H&M’s claims were misleading. “Instead, the relevant comparison is whether one garment using recycled polyester is more sustainable than another garment using non-recycled (also known as virgin) polyester.”


Earlier this week, FDA issued draft guidance for staff updating the agency's existing enforcement policy regarding major food allergen labeling and cross-contact prevention. The updated guidance reflects the addition of sesame as a major allergen, discusses how allergens must be disclosed when used as an ingredient in packaged food, and details the preventive controls provisions in 21 CFR § 117 applicable to preventing allergen cross-contact.  The updated guidance also details the circumstances in which failure to properly declare allergens or prevent cross-contact renders a food misbranded or adulterated.  Stakeholders have until July 17th to submit comments.

Although not included in the guidance, FDA’s press release also makes clear the agency’s position regarding recent industry trends of adding sesame to products and declaring it as an allergen rather than taking steps to remove it from products and facilities. The press release states:

The FDA is aware that some manufacturers are intentionally adding sesame to products that previously did not contain sesame and are labeling the products to indicate its presence. While the draft CPG does not specifically address the issue of industry adding sesame to products that did not previously contain it, the draft CPG does address the FDA's enforcement policy for labeling and cross-contact controls for major food allergens, including sesame. The FDA is engaged with various stakeholders on this issue. The FDA recognizes that this practice may make it more difficult for sesame-allergic consumers to find foods that are safe for them to consume – an outcome that the FDA does not support. (emphasis added)

So, what’s the takeaway? Although the draft guidance is intended as direction to FDA staff, it also provides helpful clues for industry.  The updated guidance is far more detailed than the prior version, reflecting a heightened concern about food allergens and, likely, increased enforcement focus. 

To prepare for their next FDA facility inspection, packaged food manufacturers should review labels to ensure that ingredient lists are up to date and allergens are declared as required.  In addition, reviewing and updating cross-contact prevention controls, employee training, and recall policies will help the next facility inspection go as well as possible.

For food retailers such as supermarkets and restaurants, although local health authorities are the primary inspectors, the updated guidance also serves as a helpful guide for preparation, particularly in light of the allergen updates to the Model Food Code, which articulate best practices for retail food safety. 

Last week, State AG executives and consumer protection staff gathered for the 2023 NAAG Consumer Protection Spring Conference. After a warm welcome to Florida by John Guard, Chief Deputy Attorney General in Florida, first on the agenda was the much-anticipated discussion with Attorney General John Formella of New Hampshire and AG executives Lacey Mase, Chief Deputy in Tennessee, Jennifer Levy, First Assistant Attorney General in New York, and Nathan Blake, Colorado Deputy Attorney General for Consumer Protection. The panel was moderated by prominent Consumer Protection figures Jeff Hill, Executive Counsel in Tennessee and Susan Ellis, Division Chief of the Consumer Protection Division in Illinois.

Priorities

Attorney General Formella explained that his priorities include elder abuse and financial exploitation given the state's aging population, consolidation of the healthcare industry and the resulting lack of services and rising prices, privacy, and social media platforms' impact on youth.

New York’s Levy noted that most exceptionally, this bipartisan group is able to talk about priorities together, because ultimately consumer protection attorneys’ top priority is protecting the most vulnerable. AG James prioritizes obtaining restitution over penalties and using funds for abatement rather than to supplant the general fund. New York also focuses on threats to public health broadly, including their opioid work, JUUL, lead’s effect on children, COVID-19 issues, charity care in hospitals, and mental health resources. 

Blake’s position, echoed Levy in terms of collaboration across party lines and regions. He underscored the importance of tackling technology in his office, including social media’s effect on teens and AI bias and potential use in scams, as well as potential deceptive advertising in the newly regulated marijuana industry in his state.

Tennessee’s Mase said AG Skrmetti prioritizes consumer protection and some of his current focuses include the solar industry and protecting veterans.

Challenges

The discussion turned next to issues the offices are contending with currently. Several stated that they continue to wrestle with the right fit of outside counsel and local government assistance with consumer protection cases.  The panel also discussed enforcement efforts to ensure that companies comply with CIDs and properly preserve documents when under investigation.   

Reading Tea Leaves: 10 Years From Now

The panel was then asked what would be the biggest issues in 10 years for their offices. Several talked about the perpetual scams (like the grandparent scam) that will never really go away but will change with the times. UDAP statutes are flexible enough to address most future situations, they explained, with the caveat that some specific areas, such as technology, may require legislative fixes to avoid the law falling behind. Tennessee made a more specific prediction that VR shopping and biometric wallets may create consumer protection issues in the future.

How to Approach AG Offices

Audience questions led to a discussion regarding the best way to approach an AG office. The panelists noted that for informational meetings, they like to hear from concerned companies, but had several suggestions for businesses approaching their offices:

  • The presentation needs to be tailored and specific.
  • Know your audience, and be prepared and transparent to the greatest extent possible.
  • It is important for both businesses and AGs to build relationships and know who to talk with if future issues arise.
  • Know what the AG’s mission is, and focus your presentation on how you will contribute to or align with that mission.
  • Most importantly, don’t waste an AG’s time. (And know how to spell the AG’s name!)

Takeaways

We heard from the panelists that their offices are focused on several key areas. Keep an eye out for more developments on these hot topics:

  • Healthcare and Mental Health
  • Elder Scams
  • Social Media and Children
  • Privacy and AI

In the past couple of years, the Federal Trade Commission has gone 0 for 2 before the Supreme Court. In AMG, the Court found that Section 13(b) of the FTC Act does not provide the Commission with the authority to obtain equitable monetary relief. Last month, in Axon, the Court held that parties need not wait until the conclusion of administrative proceedings before challenging the constitutionality of the FTC's structure, but may bring their complaints to district courts. Given this recent track record, the Commission probably wasn't thrilled to find itself before the Fifth Circuit, defending against constitutional challenges raised by Traffic Jam Events and its owner, David Jeansonne.

The Fifth Circuit is probably not the venue administrative agencies would choose to hear these issues, having recently handed down decisions in Jarkesy (vacating an SEC administrative order on the ground that it was unconstitutional because: the petitioners were deprived of the right to a jury trial where the agency sought penalties; the congressional delegation of power permitting the SEC to decide whether to bring cases administratively or in district court lacked an intelligible principle; and the ALJ's removal restrictions were unconstitutional) and Community Financial Services Association of America (finding the funding structure of the CFPB unconstitutional). But, nonetheless, on May 3, 2023, a three-judge panel of the Fifth Circuit heard oral argument in Traffic Jam Events LLC v. FTC.

The road to the Fifth Circuit was circuitous and began in June 2020, when the FTC filed a complaint in the U.S. District Court for the Eastern District of Louisiana against the marketing services company and its owner. Traffic Jam specialized in providing marketing materials for auto dealerships. The complaint alleged that the company misled consumers by sending deceptive mailings suggesting that the company was affiliated with a government COVID-19 stimulus program and indicating that consumers had won specific, valuable prizes that they could collect once they visited a car dealership. The District Court denied the FTC's Motion for a Temporary Restraining Order.

Subsequently, in August 2020, the FTC filed an administrative complaint mirroring the prior federal court complaint. Traffic Jam challenged the filing on the grounds that the injunctive order was not based on substantial evidence, that the alleged acts or practices did not involve interstate commerce, that it was not a creditor subject to the Truth in Lending Act, and finally, that the Commission lacked substantial evidence to hold Jeansonne individually liable. Traffic Jam lost before the Commission on summary decision, and the Commission entered an order that, among other things, banned Traffic Jam and Jeansonne from virtually all commercial activity pertaining to the auto industry.

Traffic Jam then appealed to the Fifth Circuit, challenging the constitutionality of the FTC’s entire administrative process, as well as the validity of the broad-sweeping injunctive order. Traffic Jam argued that the FTC’s administrative process denied petitioners their due process rights. In addition, Traffic Jam alleged that the ALJ and Commissioners enjoy unconstitutional removal protections (i.e., they can only be removed for cause). Finally, in the wake of Jarkesy, Traffic Jam argued that the order’s ban on allowing the company or Jeansonne to operate in the auto industry was akin to the issuance of a civil penalty and, therefore, the administrative process deprived them of their Seventh Amendment right to a jury trial.

The oral argument focused on whether Traffic Jam had waived these constitutional challenges, as well as its challenge to the order, by failing to raise them below in the administrative proceeding. The panel was relatively quiet, and its limited questions to Traffic Jam primarily focused on whether the court even had jurisdiction over such claims.

Although there’s always some danger in making predictions based on oral arguments, the judges did not seem particularly keen to use this case to further weaken the FTC. While the FTC may prevail in this matter, the case itself is just the latest in a long (and growing) chain of cases challenging the authority of administrative agencies writ large. We’ve written about the recent judicial chipping away at what some call the administrative state here, here, and here. We’ll continue to watch this case closely and post updates here.

In January 2022, the Texas Attorney General filed a lawsuit against Google alleging that the company engaged iHeartMedia DJs to provide endorsements for its Pixel 4 phone, even though they had never used it. In November 2022, the FTC and several state attorneys general announced settlements with Google and iHeartMedia over the same conduct. Although Texas settled with iHeartMedia, it continued to separately pursue its case against Google. Last week, the parties agreed to a settlement.

According to the FTC and states, Google hired iHeartMedia and other radio networks in 2019 to have DJs read ads for the Pixel 4 phone. Google provided scripts that included endorsements written in the first person. For example: "It's my favorite phone camera out there, especially in low light, thanks to Night Sight Mode;" "I've been taking studio-like photos of everything;" and "It's also great at helping me get stuff done, thanks to the new voice activated Google Assistant that can handle multiple tasks at once."

Despite the first-person endorsements, the AG alleged that most of the DJs who made these statements had never used a Pixel 4 phone. Apparently, iHeartMedia recognized the problem and asked Google to provide phones for its DJs. According to the AG’s press release, “When confronted with the reality that Google’s ad campaign violated the law, rather than take corrective action, Google continued its deceptive advertising, prioritizing profits over truthfulness.” The ads were played more than 2,400 times in Dallas, Fort Worth, and Houston.

A notable aspect of the Texas settlement is the monetary payment by Google – $8 million is going to Texas alone, compared to the combined $9 million Google paid when settling with the FTC and the states of Arizona, California, Georgia, Illinois, Massachusetts, and New York last November. Given the coordination of all the states and the FTC on the settlement with iHeartMedia, it is notable that when it came to Google, Texas went it alone, first by filing suit separately from the multistate effort and then by reaching a separate settlement that included significantly more money for Texas than the other states received.

The phenomenon of a state holding out from a multistate settlement to obtain a better result, especially when that state is currently in litigation, is not new. In rare situations (most recently in opioid and vaping settlements), Attorneys General have been persuaded to use "most favored nation" clauses in their multistate settlements to help discourage a state from holding out for more money later, but these have been few and far between. This is because ultimately each state is a sovereign entity – a fact that can sometimes be easily overlooked during the course of a multistate negotiation where a small group of states may seemingly be speaking for the whole. At the end of the day, each state will do what is in the best interest of its office and its constituents, which may vary.

Here, Texas' motivation for holding out may be part of AG Paxton's efforts to rein in "Big Tech."  While all states are actively engaged in the space, Texas has been a leader in these efforts.  Indeed, the Texas AG website has a page devoted to "Big Tech," which explains why people should be concerned about big technology companies and what the AG is doing about it. And although it settled with Google over these endorsements, Texas still has at least three other pending lawsuits against Google – challenging its dominance in the adtech chain, accusing it of violating the state's biometrics laws, and alleging deceptive practices related to location tracking (another example where Texas sat out a multistate settlement).

When faced with an investigation by multiple Attorneys General, it is critical to understand the objectives and priorities of each office involved and to recognize that they may differ from state to state. While getting true global peace may be an impossible challenge in some instances, having a deep understanding of each office involved is the best way to find a path to resolution.