If you’ve been shopping lately, it’s likely that you’ve encountered empty shelves and shortages of items, such as (for inexplicable reasons) toilet paper. This tends to happen whenever a disaster – whether that’s a hurricane or COVID-19 – strikes. In some cases, retailers respond to these shortages by increasing prices. Although there may be legitimate reasons for doing that, retailers should keep in mind that price gouging laws in many states can impact their ability to increase prices. State regulators are likely to give increased attention to price gouging issues in the coming weeks and months. For example, California Attorney General Xavier Becerra announced today that his office sent “several letters calling on large online marketplaces to intensify their efforts to combat price gouging related to novel coronavirus—or COVID-19—on their platforms.”

More than half the states have laws that prohibit charging excessive prices for certain products after a triggering event, such as a declaration of a state of emergency. What constitutes an excessive price varies by state, but many laws look at the price that had been charged for an item over a specific period prior to the emergency. Some statutes have specific thresholds. For instance, a 10% price increase is presumed to be excessive in New Jersey, while Pennsylvania presumes that a 20% increase is excessive.

Most laws have exceptions to these prohibitions. For example, in many states, a price increase is not unlawful if a company can prove that the increase was directly attributable to additional costs imposed on it by the supplier of the goods, or directly attributable to additional costs for labor or materials used to provide services. If you are a retailer and need to increase prices as a result of increases from your suppliers, make sure to check the relevant state laws and to document those increases.  While generally enforced against retailers, some laws expressly apply to suppliers, distributors, and/or wholesalers – and others are broad enough to arguably apply to these entities. At minimum, this means that retailers may have leverage to request documentation to support the increase.

Civil penalties for violations generally range from $99 to $250,000 per violation. (The high end of that range applies in Texas if an impacted consumer is at least 65 years old.) Some states also have criminal penalties for violations. If you’re going to increase prices on goods, make sure you take a look at these laws first.

For other helpful information during this pandemic, visit our COVID-19 Resource Center.

As the novel coronavirus (COVID-19) has reached pandemic levels, companies of all sizes and in all industries face myriad impacts to business operations and the health and well-being of employees.

To help clients navigate these new challenges, including the unpredictability of any outbreak-related business disruption, Kelley Drye has compiled a free resource center to help businesses navigate this uncertain environment.

Check it out for articles, webinars, and blog posts that cover a range of topics, including the following:

  • Legal exposure due to business interruptions and unsatisfied contracts, including counsel on contractual obligations, especially for significant business concerns.
  • Supply chain disruptions that are impacting the manufacture of consumer goods, forcing manufacturers to seek alternative product sources that meet U.S. consumer regulations.
  • All types of employment issues, including how to communicate with your employees, managing affected employees, remote work policies, privacy of records, and employee travel, among other pressing issues.
  • Evaluating disruptions to trading and markets, M&A/corporate transactions, commercial contracts, corporate governance (contingency planning for annual meetings) and disclosures for publicly traded companies.
  • Monitoring the federal government’s efforts to address these issues, as well as emerging issues that businesses may face.

We are updating the COVID-19 Resource Center as events unfold, so check back regularly.

Facial Recognition Tech Targeted by Vermont AG Under State Privacy & Data Broker Laws

Vermont Attorney General Thomas Donovan Jr. has ratcheted up ongoing scrutiny of facial recognition technology.  On March 10, the Vermont AG sued facial recognition technology provider Clearview AI and moved for a preliminary injunction against the company.  Clearview drew wide attention in January following the publication of a New York Times story that detailed how the company reportedly collected approximately three billion digital photographs, primarily by scraping them from social networks and websites.  The Times also reported that Clearview’s customers include more than 600 law enforcement agencies, which apparently may use the service to connect facial images with individuals’ names.

Citing the Times’s story and several other public sources, the Vermont AG’s complaint accuses Clearview of a wide variety of unfair and deceptive practices under Vermont’s Consumer Protection Act.  The AG also alleges that Clearview violated Vermont’s data broker law, which went into effect in 2019, by obtaining “brokered personal information” through the “fraudulent means” of unauthorized screen-scraping.  The AG is seeking broad relief against Clearview, including an injunction ordering Clearview to delete photos of Vermont residents from its database and to refrain from collecting their images going forward, restitution, disgorgement, and civil penalties of $10,000 for each image collected in violation of the Consumer Protection Act.

This post takes a closer look at the complaint’s view of the privacy harms that Clearview allegedly caused and how these harms inform the Vermont AG’s legal claims against the company.  A key takeaway is that businesses would be well served by performing privacy due diligence and a risk assessment when exploring the use of data-driven services – from data acquisition and modeling, to marketing claims.

A Dark View of Facial Recognition’s Surveillance Applications

Aside from challenging Clearview’s business and data practices, the Vermont AG’s complaint raises more general concerns about facial recognition technology and describes the harms caused by Clearview’s alleged conduct in sweeping terms.

Two aspects of the complaint’s focus on surveillance-related harms are particularly noteworthy.

  1. Critical Take on Law Enforcement’s Use.  The complaint is openly critical of law enforcement agencies’ use of Clearview’s database – a position that law enforcement agencies rarely take against their counterparts.  But the Vermont AG’s message is unmistakable: “Law enforcement’s use of a massive facial recognition database, like the one described [in the complaint], essentially puts every individual in that database, whether they had ever done anything wrong or not, into a permanent, inescapable virtual line-up or ‘rogue’s gallery’ accessible for any reason at any time.” (Complaint paragraph 20)
  2. Naming Customers. The complaint calls out several major companies for using Clearview’s service.  Although the complaint does not suggest that any of these companies acted improperly, the AG’s attention is a reminder that merely using a controversial technology can create negative publicity.

The complaint goes on to portray Clearview as rushing headlong into an area that policymakers and other companies have treated with great caution.  Asserting that “[o]nce entered into a facial recognition database, the individual loses an enormous amount of anonymity, privacy, and freedom,” the complaint states that “businesses and policymakers have been particularly cautious regarding the implementation of facial recognition technology because the potential for misuse and the consequences of such misuse are so dire.”  The PI motion states that Clearview developed a “dystopian surveillance database.”

The complaint also asserts, “leading-edge companies with large caches of photographic data such as Facebook and Google have declined to make a facial recognition tool available, though they have the capability to do so.”  According to the complaint, platforms’ forbearance from developing such tools contributed to “strong social norms against the type of mass-collection and facial recognition implemented by Clearview”; and Clearview violated the reasonable expectations of consumers that were based on this norm.

Alleged Privacy Violations Under Vermont’s UDAP Law

The complaint alleges that the following practices are “immoral, unethical, and unscrupulous”:

  • Collecting facial images by screen-scraping on third-party sites, without the consent of the image owners and in violation of the sites’ terms of service.
  • Collecting minors’ images without parental consent.
  • Invading consumers’ privacy.
  • Exposing “sensitive personal data to theft by foreign actors and criminals.”
  • Violating consumers’ civil rights and chilling First Amendment interests in assembly and political expression.

In a separate deception count, the Vermont AG focuses on several of Clearview’s claims about its privacy and data security protections, including alleged misrepresentations about consumers’ ability to opt out of the database and that Clearview’s processing “does not unduly affect” consumers’ “interests or fundamental rights and freedoms.”  This count also alleges that Clearview misrepresented the accuracy of its facial recognition matching capabilities as well as the company’s success in assisting law enforcement investigations.

Alleged Violation of Vermont’s Data Broker Law

Finally, the Vermont AG charges Clearview with violating Vermont’s data broker law.  One prohibition under the law is against acquiring “brokered personal information through fraudulent means.”  (For an overview of Vermont’s law, including its registration requirements, see this post.)  Facial images posted on social networks and other sites are, according to the complaint, “brokered personal information” because they meet the requirement of being “categorized or organized for dissemination to third parties.”  Clearview’s use of screen scraping to acquire these images constitutes the allegedly “fraudulent means” of acquiring brokered information.  The complaint, however, does not allege that the platforms that host these images are themselves data brokers.

Advertising and Privacy Law Resource Center

Coronavirus Advertising-Related Enforcement Ongoing

This post updates an earlier post relating to marketing around the coronavirus.

We noted a couple news items this week that help add context to the pervasiveness of and risks related to price gouging enforcement.  In this story, the New York Times reported on a merchant who was selling hand sanitizer and related protective gear on Amazon, at profit levels that corresponded with the growing public concern. Amazon removed his listing along with hundreds of thousands of others and suspended thousands of sellers’ accounts for price gouging.  He’s now left with 17,700 bottles of hand sanitizer.

The California, Washington, and New York attorneys general offices are investigating price gouging complaints.  The New York AG issued multiple cease and desist letters last week relating to exorbitant prices on hand sanitizer and disinfectant spray.  The California AG issued a consumer alert regarding price gouging following announcement of a state of emergency.  The Washington AG issued a similar alert calling on consumers to report price gouging and scam products.

On the advertising claims front, the New York AG announced enforcement against Alex Jones, who operates the InfoWars website. The AG alleged that Jones was marketing and selling toothpaste, dietary supplements, and creams as treatments to prevent and cure the coronavirus.  The NY AG also issued cease and desist letters to Dr. Sherill Sellman, who was selling colloidal silver as a coronavirus cure, and to disgraced televangelist Jim Bakker for featuring claims that Sellman’s colloidal silver product could “eliminate [coronavirus] within 12 hours.”  The State of Missouri has also brought an enforcement action against Mr. Bakker.

So, what’s the lesson?  In our prior coronavirus marketing post, the lessons were to know and understand the pricing laws and to avoid overstating the benefits of any product.  The follow-on issue is one of ethics and brand management:  We’re in a public health crisis.  Brands and platforms that demonstrate that they are working to comply with the law and take proactive consumer protection measures may forgo short-term profits, but they stand to gain long-term consumer trust and maybe even generate some goodwill with regulators.

In addition to retail platforms, advertising and social media platforms may want to take note.  CDA Section 230 is alive and well, but does any platform want to go to bat for advertising allegedly scam products?  The Washington AG’s office stated that it will use the state’s consumer protection laws to sue platforms or sellers even if they aren’t in Washington, as long as they were trying to sell to Washington residents.  Every other state AG undoubtedly agrees with this approach.

And finally for some comic relief…for some insightful advice from John Oliver, check out this link at the 17-minute mark.


Join us for our next webinar, covering influencer issues, on March 24 by signing up here.



California Attorney General Releases Third Draft of Proposed CCPA Regulations

On Wednesday, the California Attorney General (AG) released a third draft of proposed CCPA regulations for public comment.  The draft contains a series of technical corrections, along with a handful of substantive incremental modifications to the prior draft.  The limited number of changes signals that the rulemaking process is reaching an end.

The following is a summary of key modifications the AG is proposing in the latest draft:

  • Service Providers – The AG revised the exemptions to the general rule that service providers may not retain, use, or disclose personal information obtained in the course of providing services.

First, the AG removed an exemption allowing service providers to perform the services specified in the written contract with the business that provided the personal information.  In its place, the AG added a new exemption: “to process or maintain personal information on behalf of the business that provided the personal information, or that directed the service provider to collect the personal information, and in compliance with the written contract for services required by the CCPA.”  This new exemption significantly narrows the ability of a service provider to use personal information to perform services generally, now requiring that the service provider limit the use of personal information “on behalf of the business that provided the personal information.”

Second, the AG edited a clause that allowed a service provider to use personal information for internal purposes to build or improve the quality of its services.  The AG clarified that the exemption does not allow a service provider to build or modify consumer profiles to use in providing services to another business, or to correct or augment data acquired from another source.  These clarifications indicate that the AG seeks to limit a service provider from using personal information it obtains through providing a service to develop consumer profiles that it can resell.

  • Removal of Opt Out Button – In the prior draft of the regulations, the AG proposed a standard opt out button and logo for the industry to adopt.  But the opt out button came under scrutiny in comments submitted by Lorrie Cranor of Carnegie Mellon University, which highlighted usability issues presented by the color and appearance of the AG’s proposed button.  Cranor’s team noted that the icon looked deceptively like an actual toggle switch, and when combined with its red color, could be misinterpreted as indicating an off-state.  “[A] consumer may misinterpret the [AG] toggle icon as an indication that they have already opted-out of the sale of their personal information,” Cranor’s team wrote.  In the latest version, the AG removes all reference to the opt out button.
  • Exemption from Notice at Point of Collection – A business that does not collect PI directly from a consumer is not required to provide a notice at the point of collection if that business will not sell the consumer’s personal information.
  • Guidance on IP Addresses – The AG removed guidance indicating that an IP address that does not link to a particular consumer or household would not be “personal information.”  The new draft does not offer replacement guidance, however, so the now-withdrawn language remains the only interpretation the AG has issued on whether IP addresses are “personal information.”
  • Privacy Policy Disclosures – The AG restored language from the first draft of the regulations requiring a business to identify the categories of sources from which personal information is collected and the business/commercial purpose for collecting or selling personal information, both in a manner that provides consumers a meaningful understanding of the information disclosed.  The new language does not require these disclosures “for each” category of personal information.
  • Sensitive Data Disclosures – The AG proposes that even if a business withholds sensitive data in response to a request to know, the business must still provide a description of the information withheld.  For example, a business should not provide an actual social security number, but should state that it holds the consumer’s social security number.
  • Denial of Deletion Request – When a business that sells personal information denies a deletion request, the business must ask the consumer if the consumer wants to opt out of the sale of their personal information.
  • Definition of a Financial Incentive – The AG removed a confusing element of the definition of a financial incentive that had previously indicated that a program, benefit, or other offering, including payments to consumers, would be a “financial incentive” where a company provided compensation for the disclosure, deletion, or sale of personal information.  The AG clarified that a financial incentive relates instead to the collection, retention, or sale of personal information.
  • Annual Privacy Policy Disclosures – The requirement to disclose metrics when a business buys, receives, sells, or shares personal information of more than 10 million consumers in a calendar year will now only apply to businesses that know or should reasonably know that they meet the threshold for such a disclosure.

The deadline to submit written comments to the proposed modifications is March 27, 2020. Our firm will continue to review the draft regulations as we work with clients to develop practical guidance on complying with the CCPA. If you have questions on how the regulations may impact your business, or if you would like assistance in submitting a written comment, please contact Alysa Hutnik, Aaron Burstein, Katie Townley, Carmen Hinebaugh, or Alex Schneider.



Over the past few weeks, a number of organizations have announced their plans to cancel conferences, festivals, and other events over fears about spreading the coronavirus. Undoubtedly, the companies who’ve paid to sponsor these events have by now pulled out their sponsorship agreements to see what those agreements say about what happens next.

When companies start to negotiate a sponsorship for an event, it’s common to focus on the benefits of the partnership and to ignore the possibility that the event won’t run as planned. After all, we’re lucky enough to live in a world where that rarely happens. But when it does, it serves as an important reminder that companies sometimes need to plan for these contingencies.

Make sure your sponsorship agreement addresses what will happen if an event is cancelled or you don’t otherwise get the benefits you paid for. For example, do you have a right to cancel the agreement? If so, what will happen to the money you’ve already paid? Or do you have the ability to get make-good benefits? If so, how will those be determined?

Keep in mind that even well-drafted cancellation and make-good clauses may not make you completely whole. For example, even if you can get a pro-rata refund of your sponsorship fees, or even if your benefits will roll over into a future event, you may still lose money that you’ve invested in marketing assets or other activations.

Event organizers frequently purchase insurance to cover the financial risk of an unexpected cancellation of their event or a reduction in attendance. Event cancellation insurance is typically designed to pay the event organizer for its lost profits or to reimburse it for refunds it has to pay to attendees or sponsors if the event is cancelled for reasons covered by the policy.

When drafting your agreement, in addition to standard forms of coverage, consider exploring whether the event organizer has event cancellation insurance. For more details on this insurance, please see this advisory from our Insurance Recovery team. And if you’re thinking about strategies to mitigate business interruptions to your own company and ensure employee safety, please see this post from our Labor and Employment team.

For other helpful information during this pandemic, visit our COVID-19 Resource Center.



Before You Market Around Coronavirus

Until recently, most consumers likely associated anything starting with “Corona” with a sunny beach and a lime wedge.

Not anymore.

The public is rightly concerned about coronavirus and how to avoid catching it.  And where the public has questions, marketers will have answers.  Here are a couple things to think about before rushing that next campaign out the door.

State and Local Laws Prohibit Price Gouging

As hand sanitizer has become scarce, some who have it have sought to capitalize on consumer demand and no small amount of fear.  We noticed this story about Amazon cracking down on third-party merchants selling coronavirus products at inflated prices.

Many states have laws governing price gouging.  New York’s law prohibits merchants from taking unfair advantage of consumers by selling goods or services that are “vital to the health, safety or welfare of consumers” for an “unconscionably excessive price” during an abnormal disruption of the market place or state of emergency.

New York’s price gouging law does not specifically define what constitutes an “unconscionably excessive price.”  However, per the NY AG, the statute provides that a price may be “unconscionably excessive” if the amount charged represents a “gross disparity” from the price at which such goods or services were sold or offered for sale immediately prior to the onset of the abnormal disruption of the market.  Merchants may provide evidence that their higher prices were justified by increased costs beyond their control.

California’s law is more prescriptive.  California’s anti-price gouging statute, Penal Code Section 396, prohibits raising the price of many consumer goods and services by more than 10% after an emergency has been declared.  There may also be local laws that prohibit price gouging.
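Because California’s statute turns on a fixed percentage, the basic trigger reduces to simple arithmetic. As a rough, illustrative sketch only – not legal advice, and the function name, default threshold, and simplifications here are our own – the 10% cap can be checked like this (the statute’s exceptions, such as increases directly attributable to higher supplier or labor costs, are not modeled):

```python
# Illustrative sketch of California's 10% price-gouging cap (Penal Code § 396).
# Simplified for illustration; the statute contains exceptions (e.g., increases
# directly attributable to higher supplier or labor costs) not modeled here.

def exceeds_ca_cap(pre_emergency_price: float, new_price: float,
                   cap: float = 0.10) -> bool:
    """Return True if new_price exceeds pre_emergency_price by more
    than the statutory cap (default 10%)."""
    return new_price > pre_emergency_price * (1 + cap)

# Example: hand sanitizer sold at $4.00 before the emergency declaration.
print(exceeds_ca_cap(4.00, 4.25))  # ~6% increase  -> False
print(exceeds_ca_cap(4.00, 8.00))  # 100% increase -> True
```

The point of the sketch is only that California’s rule is mechanical in a way New York’s “gross disparity” standard is not; whether an increase is lawful still depends on the statutory exceptions and the facts.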

State attorneys general and CA district attorneys have reported receiving price gouging complaints.  Companies that fail to comply will risk being enforcement targets.

Be Careful Not To Oversell

The FTC and FDA issued warning letters to seven companies allegedly selling unapproved products that may violate federal law by making deceptive or scientifically unsupported claims about their ability to treat coronavirus.  Both agencies issued statements indicating that they are prepared to take further enforcement action to prevent the public from being misled.

An equally concerning scenario is the marketer who sees an opportunity to market around coronavirus with a product that has value, but not to the degree that it would be an effective prevention tool.  For example, dust masks are not the same as N95 face masks.  Hand wipes without alcohol will not kill the same germs as those with alcohol.  Tito’s Handmade Vodka is not hand sanitizer.  It would be potentially misleading and deceptive to market dust masks, hand wipes without an effective sanitizer, or hand sanitizer made with Tito’s Handmade Vodka as effective coronavirus prevention tools.  It’s also a waste of good vodka.  But, we digress.

The lesson is this:  The rush to meet consumer demand should not overcome the legal clearance process or common sense.  Rules still apply even in – and maybe especially in – times of public health emergency.

Stay tuned.  We’ll update this post as the situation evolves.



The FTC announced a settlement with NeuroMetrix, Inc., and its CEO, Shai Gozani, resolving allegations that the marketers made deceptive pain relief claims about a medical device called Quell.  Quell is an FDA-cleared transcutaneous electrical nerve stimulation (TENS) device, which provides pain relief through the use of mild electrical signals.

The FTC alleged that the defendants marketed Quell, intended to be worn just below the knee, as “clinically proven” to provide “widespread chronic pain relief” throughout the body.  The advertising included claims that the device sends neural pulses to the brain, triggering a natural response that blocks pain signals in the body.  Some ads also referenced users being able to discontinue use of pain medications because of the benefit they received from Quell.

The FTC alleged that the defendants lacked scientific evidence to support widespread chronic pain relief claims.  Further, although the product is FDA-cleared to provide localized pain relief, the FTC alleged that it was not FDA-cleared to provide pain relief throughout the body.

In addition to injunctive relief, the settlement requires the defendants to pay $4 million and requires them to turn over up to an additional $4.5 million in future foreign licensing payments.  Ouch.

So, what’s the takeaway?  There is no question that Quell is an FDA-cleared medical device based on recognized pain relief technology.  Further, it appears that the defendants conducted clinical studies to test the effectiveness of their product.  But the core of the FTC’s complaint is that, by claiming that the device could be worn below the knee yet provide relief from chronic pain throughout the body, the defendants simply went beyond their FDA clearance and beyond the limits of their scientific substantiation.

This is a departure from the more recent health claims enforcement we’ve seen from the FTC over the past few years.  Quell has a recognized benefit and is supported by clinical research.  Commissioner Wilson’s statement acknowledges this and questions whether the Commission has gone too far in this case, potentially stifling innovation.  However, as the country struggles to recover from the opioid crisis and consumers search for effective alternatives, marketers of pain relief products should take note that the FTC is watching not just for products that provide no relief, but also for those that promise relief beyond what they can actually provide.

This morning, the FTC announced that Teami – a company that sells teas and skincare products – agreed to settle charges that it promoted its products using deceptive health claims and endorsements by influencers who failed to clearly disclose that they were being paid for their posts. In addition, the FTC sent letters to ten influencers detailing their obligations.

As we’ve described in previous posts, influencers need to clearly disclose any connection they have to the companies whose products they endorse. Although many of Teami’s influencers – which include celebrities such as Cardi B and Jordin Sparks – did include those disclosures, consumers wouldn’t see them unless they clicked the “more” option on Instagram below the posts.

Notably, the social media policy the company provided to its influencers specifically requires the disclosures “to be seen in the first part of your post without clicking anything else.” Despite this – and even though many of the agreements required influencers to get approval before posting – many of the posts did not comply with this policy.

The settlement addresses the company’s obligations to monitor influencers in more detail than we’ve seen in past settlements. For example, the company must communicate disclosure obligations to influencers and obtain signed acknowledgements from them. The company must also monitor compliance with its policies and take specific steps to address failures.

Although the settlement applies to all influencer campaigns, the requirements are more relaxed for campaigns involving a large number of influencers who don’t earn more than $20 per month. For example, the company doesn’t have to review every single post from influencers in this category. Instead, it is enough to review posts from the top 50 influencers generating the highest level of sales in the previous month.

The letters the FTC sent to influencers highlight specific posts that failed to comply with the FTC guidelines and explain what the influencers should have done. For example, the FTC notes that the disclosures should be above the “more” button, and that they shouldn’t be buried among other hashtags. In addition, the FTC reminds influencers that they could be held personally liable for non-compliance. The recipients are required to respond to the FTC describing what they will do to comply with the law.

For more on this settlement and other influencer issues, please sign up for our 30-minute webinar on March 24, 2020 at 12:00 PM ET.


The Electronic Privacy Information Center (EPIC) has filed a complaint with the Federal Trade Commission (FTC) alleging that Airbnb is violating the FTC Act and the Fair Credit Reporting Act (FCRA) by assigning “secret ratings to prospective renters, based on behavior traits using an opaque, proprietary algorithm.”  EPIC is a non-profit that seeks to advance consumer privacy rights and has successfully filed complaints resulting in FTC enforcement actions related to a number of major companies and their privacy practices, including Facebook, Snapchat, Google, and WhatsApp.

The complaint filed last week targets Airbnb’s “risk assessment score,” which EPIC alleges is based on an algorithm that uses personal information obtained from third parties, including “personal data collected from web pages, information from databases, posts on the person’s social network account, posts on a blog or a microblog account of the person, a comment made by the person on a website, or a directory listing for a company or association.”  To support its allegations, EPIC cites a patent obtained by the company Trooly, which purports to “determin[e] trustworthiness and compatibility of a person.”  Airbnb acquired Trooly in 2017.

According to the patent, the algorithm uses information collected from third parties to try to identify “negative traits” and assign “trustworthiness” scores based on personality and behavior traits that “predict the likelihood of the person being a positive actor in an online or offline person-to-person interaction.”  EPIC alleges that the system is “biased, unprovable, and not replicable” because an algorithm cannot assess a particular individual’s relative “goodness” or “badness.”  Because these algorithms are likely to cause substantial injury to consumers (i.e., prospective renters who are denied short-term rental properties as a result of a negative score) that is not reasonably avoidable and not outweighed by countervailing benefits, EPIC alleges that Airbnb has committed unfair practices in violation of the FTC Act.  EPIC further alleges that Airbnb violated FCRA because the “trustworthiness scores” are consumer reports under FCRA that bear on the individuals’ “character,” “general reputation,” and “personal characteristics,” but lack reasonable procedures to ensure accuracy.

The FTC does not generally respond publicly to complaints from third parties like EPIC.  The complaint is an important reminder for companies that use, aggregate, or disclose data to consider the legal and regulatory implications under the host of relevant federal and state laws.  The FTC’s 2016 Big Data Report provides a helpful overview of what the agency views as the benefits and risks of data aggregation, along with legal considerations for companies using big data.