Last week, in Cline v. Touchtunes Music Corp., No. 18-1756,  the Second Circuit Court of Appeals upheld a Manhattan district judge’s decision to approve a low-cost class action settlement in what the judge termed a “nuisance” case, while basically zeroing out the $100,000 fee requested by the plaintiffs’ class counsel.

Defendants who have faced silly but not entirely motionable class actions can momentarily enjoy the schadenfreude of watching a plaintiff’s law firm come away with nothing for its efforts.  The problem with decisions like Cline, however, is that they may make plaintiffs’ counsel more hesitant to settle a cheap case on cheap terms.  For the class action defense bar, raising potential settlement costs is nothing to celebrate.

The facts of Cline are almost too silly to merit repeating.  The defendant provided a digital jukebox application to (for example) bars and restaurants.  Patrons could pay money to play songs, and the app’s terms told the patrons clearly that their songs weren’t guaranteed to play and that no refunds would be provided under any circumstances.  What the terms didn’t disclose, however, is that the venue’s manager had the ability to manually skip songs.  The plaintiff, ostensibly on behalf of a class of people whose songs were skipped, sued for the lost value—about 40 cents each—of not hearing the songs played.

The defendant, after two tries, couldn’t quite get the whole suit dismissed.  A highly experienced district judge left alive a false advertising claim for the non-disclosure.  At that point, early last year, the parties agreed to a settlement.  About 166,000 patrons whose songs weren’t played, and for whom the defendant had contact information, received a code by email good for one free song play on any of the defendant’s jukeboxes.  Other people could file claims for codes, and 2,200 people did so.

The plaintiff’s counsel sought a $100,000 fee for themselves and a $2,000 incentive fee for the plaintiff.  The judge approved the settlement but not these fees.  He rejected the incentive award and, in place of the $100,000 fee, which plaintiff’s counsel contended was their “lodestar” of hours worked, granted a fee of 20 cents per song code that class members actually redeem within the one-year expiration period.  That fee is likely to be less than $1,000.  The plaintiff’s counsel appealed that reduction, but the Second Circuit upheld it in a summary order, finding that the district judge acted within his discretion, especially in a “coupon”-type settlement.

What should not be lost in any analysis of Cline is this key statement in the Second Circuit’s opinion:  “[C]lass counsel’s lodestar fee application was not supported by contemporaneous billing records, and…no substantial explanation had been provided for a $10,000 ‘consulting fee’ for which reimbursement was sought.”  The Second Circuit thus reinforced that plaintiffs’ counsel absolutely can still seek lodestar-based fees even when settling for coupons or in-kind goods, provided that they support those fees with appropriate billing detail.

If plaintiffs’ counsel try to tell you that they don’t want to enter into a coupon or in-kind settlement because Cline makes them fearful of receiving no fee in the case, therefore, remind them that the problem in Cline wasn’t the settlement structure; it was the law firm’s failure to document its fees.  Don’t make that mistake, and Cline shouldn’t be an issue.  Low-value cases like Cline still should be able to settle on low-value terms.


Businesses often include mandatory arbitration clauses in their pre-dispute dealings with customers to prevent costly consumer class actions in favor of streamlined (often individual) arbitration.  The Federal Arbitration Act (“FAA”) makes such arbitration agreements “valid, irrevocable, and enforceable, save upon such grounds as exist at law or in equity for the revocation of any contract.”  Relying on the FAA, the Supreme Court has defended business enforcement of such clauses against state- and judge-made exceptions.  For example, the Supreme Court has held that the FAA preempts state laws that pose obstacles to its enforcement, prevents courts from invalidating an arbitration agreement on the basis of the cost to arbitrate exceeding the potential recovery, and requires courts to enforce contractual provisions that delegate to an arbitrator the determination of whether an arbitration agreement applies to a dispute.  As a result, the existence and enforcement of mandatory, individual arbitration agreements have become more commonplace in consumer-facing industries.

Democratic senators are seeking to change this by introducing a bill that would narrow the FAA by prospectively barring pre-dispute arbitration agreements and class-action waivers in consumer, employment, antitrust, and civil rights disputes.  In these four areas, the proposed legislation, entitled The Forced Arbitration Injustice Repeal Act of 2019 (the “FAIR Act”), S. 635, H.R. 1423, would also override agreements to have arbitrators determine arbitrability or the FAIR Act’s applicability to the dispute, opting instead for courts to determine these issues under federal law.

The FAIR Act likely faces the same Republican opposition that has defeated similar proposals, including the versions of the “Arbitration Fairness Act” rejected from 2007 through 2018.  Although the FAIR Act may garner attention in the current political climate, in large part due to its employment-related provisions, it likely faces an uphill battle given the current Republican-controlled Senate and White House. But it’s a bill that’s worth keeping an eye on.  We’ll continue to post updates on any key developments.

The FTC recently announced a $5.7 million settlement with app developer Musical.ly for COPPA violations associated with its app (now known as TikTok)—the agency’s largest COPPA fine since the enactment of the statute. The agency charged the company, whose app allows users to create and share videos of themselves lip-syncing to music, with unlawfully collecting personal information from children.

To create a TikTok profile, users must provide contact information, a short bio, and a profile picture. According to the FTC, between December 2015 and October 2016, the company also collected geolocation information from app users. In 2017, the app started requiring users to provide their age, although it did not require existing users to update their accounts with their age. By default, accounts were “public,” allowing users to see each other’s bios (which included their grade or age). The app also allowed users to see a list of other users within a 50-mile radius, and gave users the ability to direct message other users. Many of the songs available on the app were popular with children under 13.

The FTC further alleged that Musical.ly received thousands of complaints from parents asserting that their child had created an app account without their knowledge (and noted one two-week period in which the company received more than 300 such complaints). The agency also noted that while the company closed the children’s accounts in response, it did not delete the users’ videos or profile information from its servers.

The FTC’s Complaint focused on practices spanning from 2014 through 2017. Musical.ly was acquired by ByteDance Ltd. in December 2017, and its app was merged with the TikTok app in August 2018.

COPPA identifies specific requirements for operators who collect personal information from children under 13, including obtaining consent from parents prior to collection and providing information about collection practices for children’s data. Online services subject to the rule generally fall into two categories: (1) sites that are directed to children and collect personal information from them; and (2) general audience sites that have actual knowledge that they are collecting personal information from children. Civil penalties for violations of COPPA can be up to $41,484 per violation.

According to the FTC, Musical.ly’s app fell into both categories:

  1. The company included music and other content appealing to children on the app. For example, many of the songs included on the app were popular with children under 13, and the app used “colorful and bright emoji characters” that could appeal to children.
  2. Once the company began collecting the ages of its users, it had actual knowledge that some of its users were under the age of 13. In spite of this, the company did not obtain consent from the parents of users under the age of 13, or comply with other COPPA requirements.

FTC Commissioners Chopra and Slaughter issued a joint statement on the settlement, pointing out that FTC staff had uncovered disturbing practices of a company willing to pursue growth at the expense of endangering children. They also noted that previously, FTC investigations typically focused on individual accountability in limited circumstances, rather than pursuing broader enforcement against company leaders for widespread company practices. The Commissioners further indicated that as the FTC continues to pursue legal violations going forward, it is time to “prioritize uncovering the role of corporate officers and directors” and to “hold accountable everyone who broke the law.”

This settlement indicates that the FTC continues to prioritize privacy enforcement—particularly where vulnerable audiences, such as children, are involved. Future FTC enforcement actions could signal an expanded approach to individual liability, including with respect to larger companies.

The case is also a good reminder of the value of performing robust privacy due diligence when considering acquiring an entity, and of meaningfully assessing the risk of a company’s data practices before adding that company to the portfolio. A widely popular business with significant data assets may not look as attractive once civil penalties and injunctive terms are added to the mix.

The Federal Trade Commission (FTC) announced this week that it is seeking comments on proposed amendments to the Privacy Rule and Safeguards Rule under the Gramm-Leach-Bliley Act (GLBA).  These two rules outline obligations for financial institutions to protect the privacy and security of customer data in their control.  While the proposed changes to the Privacy Rule are modest, the expansive list of specific cyber controls proposed for the Safeguards Rule is material and could impose a new de facto minimum security standard that implicates many businesses, including those outside the coverage of the Rule.

Privacy Rule

The Privacy Rule, which went into effect in 2000, requires a financial institution to inform customers about its information-sharing practices and allow customers to opt out of having their information shared with certain third parties. Changes to the Dodd-Frank Act in 2010 transferred the majority of the FTC’s rulemaking authority for the Privacy Rule to the Consumer Financial Protection Bureau.  Only certain motor vehicle dealers are still subject to FTC rulemaking under the Privacy Rule.  To address these changes, the proposed amendments would remove from the Rule examples of financial institutions that are no longer subject to FTC rulemaking authority, and provide clarification to motor vehicle dealers regarding the annual privacy notices.

Safeguards Rule

The Safeguards Rule, which went into effect in 2003, requires financial institutions to develop, implement, and maintain comprehensive information security programs to protect their customers’ personal information. Currently, the Safeguards Rule emphasizes a process-based approach that is flexible in how the program is implemented so long as it meaningfully addresses core components, and where the safeguards address foreseeable internal and external cyber risks to customer information.

The proposed amendments to the Safeguards Rule would still follow a process-based approach but add significantly more specific requirements that must be addressed as part of the company’s information security program. These include, for example:

  • Appointing a Chief Information Security Officer (CISO) (i.e., a qualified individual responsible for overseeing, implementing, and enforcing the information security program). The CISO can be an employee, an affiliate, or a service provider, but if the latter, additional requirements apply;
  • More specificity in what the required information security program’s risk assessments involve;
  • More specificity in what is required as part of a company’s access controls for their information systems;
  • Updating risk assessments and resulting safeguards concerning a company’s data and system identification and mapping;
  • Encrypting all customer information stored or transmitted over external networks, or implementing alternative compensating controls that are reviewed and approved by the company’s CISO;
  • Adopting secure development practices for in-house developed applications that handle customer information;
  • Implementing multi-factor authentication for any individual with access to customer information or internal networks that contain customer information (unless the CISO approves a compensating control);
  • Including audit trails that detect and respond to security events;
  • Implementing change management procedures;
  • Implementing safeguards that both monitor authorized activity and detect unauthorized activity involving customer information;
  • Regular testing of the effectiveness of the information security program’s key controls, systems, and procedures, including continuous monitoring or annual penetration testing and biannual vulnerability assessments;
  • Establishing a written incident response plan that addresses goals, outlines the internal processes for incident response, defines clear roles, responsibilities, and levels of decision-making authority, identifies external and internal communications and information sharing, identifies requirements for the remediation of identified weaknesses in information systems and controls, addresses the documentation and reporting of security events and related incident response activities, and provides for post-incident evaluation and revision of the plan as needed;
  • Requiring the CISO to at least annually report to the board of directors or equivalent governing body on the status of the information security program, the company’s compliance with the Safeguards Rule, and material matters related to the information security program.

The proposed modifications would exempt small businesses (financial institutions that maintain customer information concerning fewer than five thousand consumers) from some of the Safeguards Rule’s requirements.

In addition, the proposed modifications would expand the definition of “financial institution” to include entities engaged in activities that the Federal Reserve Board determines to be incidental to financial activities (e.g., “finders” that bring together buyers and sellers of a product or service), and incorporate the definition of this term directly in the Safeguards Rule, instead of by reference based on the Privacy Rule.

Two Republican-appointed Commissioners, Noah Phillips and Christine Wilson, dissented from the proposed amendments, noting that it may not be appropriate to mandate such prescriptive standards for all market participants. They maintained that producing guidance for companies would be a better approach than one-size-fits-all amendments that all companies would have to follow. The Commissioners also argued that the proposed amendments are based on the New York State Department of Financial Services cyber regulations, which are too new for the FTC to evaluate for impact or efficacy.  They also expressed concerns about the rigidity these new requirements would impose on what is now a flexible approach, and about whether the amendments would substitute the Commission’s judgment for a company’s own governance decisions on the level of board engagement, hiring and training, and accountability design, among other controls.


While the proposed amendments are limited to financial institutions subject to the GLBA Privacy Rule and Safeguards Rule, if adopted, the specificity of the proposed cyber controls is likely to factor into contract terms that financial institutions impose on their partners and service providers, as well as serve as a potential model for other industries. Indeed, these would be the most explicit cyber regulations in the United States to date.  At the same time, it is notable that the agency declined to adopt a safe harbor based on a showing of compliance with an industry standard, such as NIST or PCI DSS.  In other words, the proposed changes suggest a potential new minimum standard for enterprise security programs that warrants close consideration.  Given the influential role that the Safeguards Rule played in developing information security programs outside of the financial sector, these new proposed requirements may well become the de facto industry standard if history is a guide.

The deadline to submit written comments will be 60 days after the notice is published in the Federal Register. We will continue to monitor these developments.


The National Institute of Standards and Technology (NIST) released a preview of its plans for a standard Privacy Framework this past week.  The purpose of the Framework is to help organizations better manage privacy risks.

The Privacy Framework would break down privacy functions into five categories: identifying the context of processing, protecting private data, controlling data through data management, informing individuals about data processing, and responding to adverse breach events.

Also, organizations would be able to reference the Privacy Framework when deciding how to tailor compliance to the organization’s risk tolerance, privacy objectives, and financial resources.

NIST enters a crowded privacy policy-making field.  The NTIA has solicited comments on developing an approach to consumer privacy, Congress is considering competing legislative options for federal privacy legislation, and California is gearing up this year for the 2020 implementation of the CCPA.

But as NIST explains on its website, the NIST framework is intended to complement statutory and regulatory rules, not replace them: “the NIST framework is envisioned as an enterprise-level privacy risk management tool that can be compatible with and support organizations’ ability to operate under applicable domestic and international legal or regulatory regimes.”

Throughout the process of developing the Privacy Framework, NIST has emphasized that it will leverage its 2014 Cybersecurity Framework – both as a template and as an example of the value of standards documents.  The agency celebrated the five-year anniversary of the Cybersecurity Framework in February, touting the fact that the Framework has been downloaded more than half a million times.

Kelley Drye will continue to track developments at NIST on the development of a Privacy Framework.  If you have questions about the Privacy Framework or are interested in submitting comments, please contact Alysa Hutnik or Alex Schneider at Kelley Drye.

Asserting the authority to oversee the Consumer Product Safety Commission, Frank Pallone, Jr. (D-NJ), Chairman of the Committee on Energy and Commerce, and Jan Schakowsky (D-IL), Chair of the Subcommittee on Consumer Protection and Commerce, have requested information from the Commission concerning the CPSC’s workload and its dealings with the public with regard to consumer complaints and FOIA requests. In a letter to Acting Chairman Ann Marie Buerkle, the Committee has requested information such as:

  • A list of rulemakings, petitions, applications, complaints, requests, and other items pending before the CPSC, including the length of time the matter has been pending and associated staff;
  • The total number of reports of unsafe products received from FY 2016-2019;
  • Information pertaining to the number of investigations opened and closed by the Office of Compliance & Field Operations from FY 2016-2019;
  • Details about the CPSC’s involvement in voluntary standards development;
  • A list of all FOIA requests from FY 2016-2019;
  • A list of civil penalties, including lists of internal “referrals” for civil penalties; and
  • A list of all matters from which CPSC leadership or staff has been recused from FY 2016-2019 and the reason for each recusal.

The Committee has requested a complete written response to these questions by March 22, 2019. We expect that an oversight or similar hearing will likely follow the CPSC’s response, and we will continue to monitor developments.

The current and future definition of what qualifies as an automatic telephone dialing system (ATDS or autodialer) remains a hotly debated issue for every company placing calls and texts or designing dialer technology, as well as for the litigants and jurists already mired in litigation under the Telephone Consumer Protection Act (TCPA).  Last year, the D.C. Circuit struck down the FCC’s ATDS definition in ACA International v. FCC, Case No. 15-1211 (D.C. Cir. 2018).  Courts since have diverged in their approaches to interpreting the ATDS term.  See, e.g., prior discussions of Marks and Dominguez.  All eyes thus remain fixed on the FCC for clarification.

In this post, we revisit the relevant details of the D.C. Circuit’s decision in ACA International and prior statements of FCC Chairman Ajit Pai concerning the ATDS definition to assess how history may guide the FCC’s approach to this issue.


The draft National E-Commerce Policy (“Draft Policy”), released by the Government of India on February 23, 2019 for stakeholder comments, has left the e-commerce sector jittery. For global market players, the protectionist construct of the Draft Policy seems to signal a shift in India’s focus from ‘Ease of Doing Business in India’ to ‘Make in India’. If implemented in its present form, the Draft Policy may have a serious impact, demanding drastic changes in internal strategies, policies, and cost allocations for foreign companies with an e-commerce presence in India. The Draft Policy is open for stakeholder comments through March 9, 2019.

The Draft Policy focuses on: (i) restriction on cross-border flow of data; (ii) local presence and taxability of foreign entities having significant economic presence in India; (iii) creating a robust digital infrastructure for e-commerce, from online custom clearance to online resolution of consumer complaints; (iv) promoting exports from India with a boost to start-ups and small firms; and (v) regulatory changes to augment economic growth in e-commerce.


In a decision that will limit the Federal Trade Commission’s (FTC) ability in both consumer protection and antitrust matters to bring certain claims in federal court, the Third Circuit Court of Appeals held in FTC v. Shire ViroPharma, Inc. that the FTC may only bring a case under Section 13(b) of the FTC Act when the FTC can articulate specific facts that a defendant “is violating” or “is about to violate” the law.

Since the 1980s, the FTC has filed most of its cases challenging deceptive or unfair practices under Section 5 of the FTC Act in federal court, instead of administratively. The FTC’s authority to file these types of cases in federal court is found in Section 13(b) of the Act, added to the Act in 1973, which permits the FTC to seek an injunction in federal court “[w]henever the Commission has reason to believe . . . that any person, partnership, or corporation is violating, or is about to violate, any provision of law enforced by the [FTC].” While in cases of pending acquisitions or ongoing fraud it may be clear that the FTC has reason to believe someone “is violating” or “is about to violate” the law, the FTC has also brought cases under Section 13(b) for claims arising from abandoned conduct. The Shire decision addressed the FTC’s authority to bring an action in federal court under Section 13(b) in these circumstances.

In Shire, the FTC alleged that Shire abused the U.S. Food and Drug Administration’s citizen petition process to maintain its monopoly on a drug it manufactured. The complaint alleged that Shire filed forty-six citizen petitions between 2006 and 2012. In 2017, the Commission filed its complaint, which alleged, inter alia, that “[a]bsent an injunction, there is a cognizable danger that Shire will engage in similar conduct” and “[Shire] has the incentive and opportunity to continue to engage in similar conduct in the future. At all relevant times, [Shire] marketed and developed drug products for commercial sale in the United States, and it could do so in the future.”

Shire filed a motion to dismiss, arguing that Section 13(b) only allowed the Commission to pursue injunctive relief where the violation is occurring or is about to occur. After considering the text of the statute and the legislative history, the court agreed. Because the FTC failed to “plausibly suggest [Shire] is ‘about to violate’ any law enforced by the FTC, particularly when the alleged misconduct ceased almost five years before filing of the complaint,” the court dismissed the case.

On appeal, the FTC argued that a “likelihood of recurrence” standard, borrowed from the common law standard for injunctive relief, should govern when the FTC may bring an action in federal court under Section 13(b). The FTC also advanced a “parade of horribles” argument that crafty defendants could flout the FTC’s authority by swiftly shutting down their operations at the outset of an FTC investigation to immunize themselves from a federal court action.

The Third Circuit rejected these arguments. It concluded that the statutory text under Section 13(b) requiring that the FTC have reason to believe a wrongdoer “is violating” or “is about to violate” the law unambiguously prohibits only existing or impending conduct. The Court also rejected the FTC’s arguments that its decision would hamper its law enforcement efforts, noting that Section 5 of the FTC Act would continue to allow the FTC to bring administrative actions based on past conduct. The Court further noted that if the FTC determined during the pendency of an administrative action that a respondent was violating or about to violate the law, it could then seek injunctive relief in federal court under Section 13(b). Having determined the appropriate legal standard, the Court of Appeals upheld the district court’s holding that the FTC failed to allege in its complaint that the defendant “is violating” or “is about to violate” the law.

The FTC is likely to appeal the decision in Shire, but there is no guarantee that the Supreme Court will grant certiorari given the plain language of the statute and the lack of any contrary circuit authority. In the meantime, the same issue in the context of a consumer protection action is likely headed to the Eleventh Circuit Court of Appeals in FTC v. Hornbeam Special Situations, LLC, No. 1:17-cv-3094 (N.D. Ga.), where the FTC sued a variety of defendants, including the estates of deceased individuals, for allegedly billing consumers without their authorization.

While the FTC continues to have the option to bring cases against past violations administratively under Section 5, including to seek a cease and desist order, it may decide to exercise more restraint in bringing cases involving abandoned conduct. This is especially true for claims subject to statutes of limitations. Where the FTC does decide to pursue conduct that has ceased, it may seek tolling agreements during the investigational phase.

The FTC may consider bringing more administrative actions under its Part 3 authority. As former Commissioner Maureen Ohlhausen has observed, “[t]he FTC’s Part 3 authority is a powerful tool for developing or clarifying the law.” Yet, over time, the FTC has brought far fewer Part 3 cases – 94 cases during the period 1977 to 1986 compared to 12 during the period 2007 to 2016. Shire, and quite possibly Hornbeam, should cause the Commission to assess the reasons behind this trend and to take steps to ensure the Part 3 process fulfills the role intended by Congress when it was created. That could very well mean that cases that would have been brought in federal court may instead be brought before administrative law judges.

This week, the FTC announced its first case involving fake reviews on an independent website.

Cure Encapsulations sells a weight-loss product exclusively on Amazon. When the company wanted to boost its sales, its owner turned to Amazon Verified Reviews (or “AVR,” for short), a website that offers Amazon sellers services designed to “push your product towards the top” using “verified” product reviews that will “help your product rank better in the internal search engine.”

In an e-mail exchange, the owner of the company told AVR that he needed “real positive reviews from real aged accounts” to boost the product’s ratings to a 4.3. AVR responded by posting a series of five-star reviews with weight loss claims. Although the reviews appeared to come from consumers, they were actually “fabricated by one or more third parties who were paid to generate reviews.”

The proposed order prohibits the defendants from misrepresenting that an endorsement is from an actual consumer, when it’s not. The order also addresses the substantiation required for various types of claims. Lastly, the order imposes a judgment of $12.8 million, which will be suspended upon payment of $50,000 and the payment of certain income tax obligations.

Although this is the first FTC case involving fake reviews on an independent site, this isn’t the first case dealing with this issue. For example, the New York Attorney General has entered settlements with companies in the “reputation management” industry that have used fake reviews to enhance their clients’ sales. So both buyers and sellers of fake reviews can face scrutiny.


After we published this post, an Amazon spokesperson contacted us with the following statement: “We welcome the FTC’s work in this area. Amazon invests significant resources to protect the integrity of reviews in our store because we know customers value the insights and experiences shared by fellow shoppers. Even one inauthentic review is one too many. We have clear participation guidelines for both reviewers and selling partners and we suspend, ban, and take legal action on those who violate our policies.”