FTC Chairman Joe Simons recently acknowledged the Commission’s plan to use its authority under Section 6(b) of the FTC Act to examine the data practices of large technology companies.  In written responses to questions from members of the U.S. Senate Commerce Committee following in-person testimony in November 2018, Chairman Simons confirmed that plans were underway to gather information from tech companies, though the specific targets or areas of focus remained under consideration.

As described by the FTC, Section 6(b) of the FTC Act “plays a critical role in protecting consumers,” and broadly authorizes the Commission to obtain information – or “special reports” – about certain aspects of a company’s business or industry sector.  Companies that are the focus of an FTC study pursuant to Section 6(b) must respond to a formal order issued by the Commission that, similar to a civil investigative demand, can include a series of information and document requests.  The information obtained through the order may then be the basis for FTC studies and subsequent industry guidance or rulemaking.

The revelation of the pending 6(b) orders comes amid concerns from federal and state lawmakers and regulators about transparency relating to “Big Data” practices and online data collection, and the use of artificial intelligence and machine-learning algorithms in decision-making.  In remarks this week to attendees of an Association of National Advertisers conference, Chairman Simons noted a potential lack of transparency in the online behavioral advertising context and “the fact that many of the companies at the heart of this ecosystem operate behind the scenes and without much consumer awareness.”

This week, President Trump signed an executive order outlining a national plan to promote the development and adoption of artificial intelligence (AI) technologies.  The order serves as the official launch of the “American AI Initiative,” which includes five areas of focus:

  • Invest in AI R&D – Prioritize AI investment in Federal agencies’ R&D missions
  • Unleash AI Resources – Enhance availability of Federal data, models, and computing resources to America’s AI research and development experts
  • Set AI Governance Standards – Led by the National Institute of Standards & Technology (NIST), develop technical standards for reliable, secure, trustworthy, and interoperable AI systems
  • Build the AI Workforce – Prioritize fellowships and training with Federal agencies to cultivate AI-focused skills and education
  • International Engagement and Protecting the U.S. AI Advantage – Implement an action plan to protect U.S. AI intellectual property

The order does not include a timeline or allocate specific funding for AI initiatives, though the Administration has indicated that a detailed plan to further the goals in the order will be released this year.

The order comes a day after remarks by FTC Commissioner Rohit Chopra on the potential negative outcomes of AI technology. In a speech at the Silicon Flatirons Conference in Colorado, Commissioner Chopra raised concerns about bias, including potential inequality based on gender or race, that can result from “black box” decision-making technology combining AI algorithms with massive data collection. He noted that the current consumer protection laws that address human bias in the marketplace must similarly be structured to account for AI-generated biases, echoing sentiments raised by participants at the FTC’s AI-focused competition and consumer protection hearing held last year.

Last week, five advertising and marketing trade associations jointly filed comments with the California Attorney General seeking clarification on provisions within the California Consumer Privacy Act (CCPA).

While expressing “strong support” for the CCPA’s intent, and noting the online ad industry’s longstanding consumer privacy efforts such as the DAA’s YourAdChoices Program, the group proposed the following three clarifications to CCPA provisions that, it believes, could otherwise reduce consumer choice and privacy:

  • Notice relating to a sale of consumer data: A company’s written assurance of CCPA compliance should satisfy the requirement to provide a consumer with “explicit notice” (under 1798.115(d)) when a company sells a consumer’s personal data that the company did not receive directly from that consumer;
  • Partial opt-out from the sale of consumer data: When responding to a consumer’s request to opt out of the sale of personal data, companies can present consumers with choices on the types of “sales” from which to opt out, the types of data to be deleted, or whether to opt out completely, rather than simply offering an all-or-nothing opt-out; and
  • No individualized privacy policies: Businesses should not be required to create individualized privacy policies for each consumer to satisfy the requirement that a privacy policy disclose to consumers the specific pieces of personal data the business has collected about them.

The associations signing on to the comments include the Association of National Advertisers, American Advertising Federation, Interactive Advertising Bureau, American Association of Advertising Agencies, and the Network Advertising Initiative. The comments represent an “initial” submission intended to raise the proposals above and, more broadly, highlight to the California AG the importance of the online-ad supported ecosystem and its impact on the economy.  The associations plan to submit more detailed comments in the coming weeks.

The comments coincide with a series of public forums that the California AG is hosting to provide interested parties with an initial opportunity to comment on CCPA requirements and the corresponding regulations that the Attorney General must adopt on or before July 1, 2020.

On Monday, France’s data protection authority announced that it had levied a €50 million ($56.8 million) fine against Google for violating the EU’s General Data Protection Regulation (GDPR).  The precedent-setting fine by the Commission Nationale de l’Informatique et des Libertés (“CNIL”) is the largest yet imposed since the law took effect in May 2018.

How Does Google Violate GDPR, According to CNIL?

  • Lack of Transparency: GDPR Articles 12-13 require a data controller to provide data subjects with transparent, intelligible, and easily accessible information relating to the scope and purpose of the personal data processing, and the lawful basis for such processing. CNIL asserts that Google fails to meet the required level of transparency based on the following:
    • Information is not intelligible: Google’s description of its personal data processing and associated personal data categories is “too generic and vague.”
    • Information is not easily accessible: Data subjects must access multiple Google documents or pages and take a number of distinct actions (“5 or 6”) to obtain complete information on the personal data that Google collects for personalization purposes and geo-tracking.
    • Lawful basis for processing is unclear: Data subjects may mistakenly view the legal basis for Google’s processing as legitimate interests (which does not require consent) rather than individual consent.
    • Data retention period is not specified: Google fails to provide information on the period that it retains certain personal data.
  • Invalid Consent: Per GDPR Articles 5-7, a data controller relying on consent as the lawful basis for processing of personal data must be able to demonstrate that the data subject’s consent is informed, specific, and unambiguous. CNIL claims that Google fails to capture valid consent from data subjects as follows:
    • Consent is not “informed”: Google’s description of data processing for its advertising personalization services is spread across several documents and does not clearly describe the scope of processing across multiple Google services, the amount of data processed, or the manner in which the data is combined.
    • Consent is not unambiguous: Consent for advertising personalization appears as pre-checked boxes.
    • Consent is not specific: Consent across all Google services is captured via consent to the Google Terms of Service and Privacy Policy, rather than the user providing distinct consent for each Google personal data use case.


Last month, CTIA, the wireless industry association, launched an initiative through which wireless-connected Internet of Things (“IoT”) devices can be certified for cybersecurity readiness.  According to the CTIA announcement, the CTIA Cybersecurity Certification Program (the “Program”) is intended to protect both consumers and wireless infrastructure by creating a more secure foundation for IoT applications that support “smart” cities, connected cars, mobile health apps, home appliances, and other IoT-enabled environments.

The Program was developed in collaboration with the nationwide wireless carriers, along with technology companies, security experts and test laboratories, and builds upon IoT security recommendations from the National Telecommunications and Information Administration (NTIA) and the National Institute of Standards and Technology (NIST).  According to the Program Test Plan, devices eligible for certification include those that contain an IoT application layer that provides identity and authentication functionality and at least one communications module supporting either LTE or Wi-Fi networks.

A device submitted for certification will undergo a series of tests at a CTIA-authorized lab.  The testing will assess the device for one of three certification levels or “categories.” To obtain a Category 1 certification, the device will be reviewed for the presence of “core” IoT device security elements, including a Terms of Service and a customer-facing privacy policy, along with technical elements including password management, authentication and access controls.  A Category 2 certification includes the Category 1 elements, in addition to enhanced security features, such as an audit log, multi-factor authentication, remote deactivation, and threat monitoring. A Category 3 certification features the most comprehensive level of cybersecurity threat testing, and covers elements such as encryption of data at rest, digital signature validation, and tamper reporting, in addition to the elements under Categories 1 and 2.
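Because each tier builds on the one below it, the Program’s structure can be pictured as nested requirement sets. The Python sketch below illustrates only that cumulative structure; the element names are informal shorthand for the items described above, not the identifiers or test procedures in the actual CTIA Test Plan.

```python
# Hypothetical shorthand for the security elements described above; the
# actual CTIA Test Plan defines its own identifiers and test procedures.
CATEGORY_1 = {
    "terms_of_service",
    "privacy_policy",
    "password_management",
    "authentication",
    "access_controls",
}

# Category 2 includes all Category 1 elements plus enhanced features.
CATEGORY_2 = CATEGORY_1 | {
    "audit_log",
    "multi_factor_authentication",
    "remote_deactivation",
    "threat_monitoring",
}

# Category 3 adds the most comprehensive elements on top of 1 and 2.
CATEGORY_3 = CATEGORY_2 | {
    "encryption_of_data_at_rest",
    "digital_signature_validation",
    "tamper_reporting",
}

def highest_category(passed: set) -> int:
    """Return the highest certification category whose full element set passed."""
    for level, required in ((3, CATEGORY_3), (2, CATEGORY_2), (1, CATEGORY_1)):
        if required <= passed:  # subset test: every required element passed
            return level
    return 0  # no certification

# Example: a device passing all Category 1 and 2 elements, but nothing more
print(highest_category(CATEGORY_2))  # -> 2
```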

The Program comes at a time of rapid growth for IoT devices.  According to the latest Ericsson Mobility Report, the global IoT market will expand to 3.5 billion cellular-connected devices in the next five years.  Much of this growth is expected to be driven by the anticipated deployment of 5G technology and enhanced mobile broadband.

The Program will begin accepting devices for certification testing in October 2018.  Details on how to participate in the Program are available on the CTIA website.

On April 8, 2015, the Federal Communications Commission (FCC) Enforcement Bureau announced that AT&T has agreed to a $25 million consent decree to resolve an FCC investigation into alleged consumer privacy violations at AT&T call centers in Mexico, Colombia, and the Philippines. According to the FCC, AT&T violated Section 222 of the Communications Act (the “Act”) by failing to reasonably secure its customers’ personal information, including customers’ names and at least the last four digits of their Social Security numbers, as well as account-related data known as customer proprietary network information (CPNI). The agency further alleged that AT&T’s data security practices at the three call centers were unjust and unreasonable in violation of Section 201 of the Act. The settlement is the FCC’s largest data security enforcement action to date.

The FCC launched its investigation into AT&T in May 2014 after AT&T reported a data breach to the Commission’s CPNI Data Breach Portal. The breach occurred between November 2013 and April 2014 at a third-party call center facility in Mexico under contract with AT&T. According to the FCC, while AT&T did not operate the call center where the breach occurred, AT&T maintained and operated the systems that certain employees at the Mexico call center used to access AT&T customer records, and such systems were governed by AT&T’s data security measures. The FCC asserted that AT&T’s measures failed to prevent or timely detect the breach, which lasted 168 days and resulted in unauthorized access to more than 68,000 customer accounts. The employees at issue sold the data from the customer accounts to an unauthorized third party, who used the information to submit up to 290,000 handset unlock requests through AT&T’s website as part of what appeared to be a fraudulent operation trafficking in used or stolen phones. AT&T terminated its relationship with the Mexico call center in September 2014.

In March 2015, AT&T disclosed to the FCC that it was investigating separate data breaches at call centers in Colombia and the Philippines, in which call center employees accessed account data for at least 211,000 customer accounts to obtain unlock codes for AT&T mobile phones. The unauthorized access exposed certain customer CPNI, including bill amount and rate plan information, though AT&T’s investigation found no evidence that the CPNI was used or sold to third parties.

To read more about the terms of the FCC consent decree with AT&T, visit our sister blog here.

The consent decree with AT&T comes six months after the FCC’s first data security enforcement action. In that case, the FCC issued a Notice of Apparent Liability (NAL) seeking to impose $10 million in fines against TerraCom, Inc. and YourTel America, Inc. for allegedly violating Sections 222 and 201 of the Act by maintaining the sensitive personal data of 300,000 consumers on unencrypted Internet servers. These actions underscore the FCC’s growing emphasis on consumer privacy and data security, areas that traditionally have been the focus of the Federal Trade Commission, which has brought more than 50 privacy and data security actions across a number of industries during the past 10 years.

Last month, we reported on a bill that would amend a key provision in New Jersey’s restrictive telemarketing law, which prohibited nearly all telemarketing calls to mobile devices, even when the telemarketer had the consent of the mobile device user.  At the end of January, New Jersey Governor Chris Christie signed the bill, S1382.  The amended law prohibits only unsolicited telemarketing calls to mobile devices.  As a result, telemarketing companies can now make sales calls to mobile devices when the call is either (1) made to a customer with whom there is an existing business relationship, or (2) made in response to the customer’s written request.

The amended law became effective upon signing by Governor Christie.

Last week, the FTC expressed support for the National Highway Traffic Safety Administration’s (“NHTSA’s”) approach to privacy and data security in the NHTSA’s proposed rule on vehicle-to-vehicle (“V2V”) communications. The proposed rule, which would incorporate V2V technology into passenger cars and light trucks by 2019, is intended to enhance driver safety by aggregating and sharing data (such as a vehicle’s speed) from surrounding vehicles to generate safety warnings for drivers.

In a comment responding to the NHTSA’s proposed rule, the FTC noted three primary concerns relating to V2V communications, as described during the FTC’s “Internet of Things” workshop in November 2013:

  • The ability of connected car technology to track consumers’ precise geolocation over time;
  • Information about driving habits used to price insurance premiums or set prices for other auto-related products, without drivers’ knowledge or consent; and
  • The security of connected cars, including the ability for third parties to remotely access a car’s internal computer network.

According to the FTC, the NHTSA’s V2V proposed rulemaking appropriately addressed these concerns through a deliberative, process-based approach that included collaboration with multiple industry and consumer stakeholders. The FTC also noted that the NHTSA designed the proposed V2V system to limit the data collected and stored to that which serves the intended safety purposes, and to ensure that the collected data cannot be used to identify a particular individual or vehicle. Lastly, with respect to the security of the collected data, the FTC supports the NHTSA’s decision to help mitigate the potential for unauthorized access to data by keeping the V2V device separate from other onboard computers.
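As a rough illustration of that data-minimization design, the sketch below models a safety broadcast limited to motion data, with a short-lived rotating token in place of any persistent vehicle identifier. The field names and rotation scheme are hypothetical, not drawn from the NHTSA rule text.

```python
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyBroadcast:
    """Hypothetical data-minimized V2V message: motion data only.

    Deliberately omits persistent identifiers (VIN, plate, owner account)
    so a message cannot be tied to a particular vehicle or individual.
    """
    temp_id: str       # short-lived random token, rotated periodically
    speed_mps: float   # vehicle speed, meters per second
    heading_deg: float
    latitude: float
    longitude: float

def new_temp_id() -> str:
    """Rotate the temporary identifier so broadcasts cannot be linked over time."""
    return secrets.token_hex(4)

msg = SafetyBroadcast(new_temp_id(), speed_mps=27.5, heading_deg=92.0,
                      latitude=38.8895, longitude=-77.0353)
print(msg)
```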

On February 19, 2014, the FTC hosted a public seminar on mobile device tracking, the first event in the FTC’s Spring Privacy Series on emerging consumer privacy issues.  The seminar included a tutorial on how retail tracking technology works, along with a panel featuring representatives from consumer groups, and the retail, marketing, and technology industries, who discussed the risks and benefits, consumer awareness and perceptions, and the future of mobile device tracking.

The tutorial on mobile device tracking provided a technical overview of how retail tracking technology collects information from consumers’ mobile devices.  The discussion also covered the practice of “hashing,” which renders the collected information non-personally identifiable, but not completely anonymous.
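To see why a hashed identifier is pseudonymous rather than truly anonymous, consider a minimal sketch, assuming a generic unsalted SHA-256 hash of a device’s Wi-Fi MAC address (actual retail-analytics implementations vary in algorithm, salting, and truncation):

```python
import hashlib

def hashed_id(mac_address: str) -> str:
    """Hash a device's MAC address into a tracking identifier."""
    return hashlib.sha256(mac_address.encode("utf-8")).hexdigest()

# The raw MAC address is hidden, but the function is deterministic:
# the same device always maps to the same identifier, so repeat visits
# can still be linked. And because the space of possible MAC addresses
# is finite and enumerable, an unsalted hash can in principle be
# reversed by brute force. Hence "non-personally identifiable, but not
# completely anonymous."
print(hashed_id("a4:5e:60:c2:19:7f"))  # identical output on every visit
```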

Following the technical overview, the panel discussed the consumer benefits and privacy concerns of mobile device tracking, mainly in the context of brick-and-mortar retailers.  The panel agreed that while the technology has the potential to improve consumers’ shopping experience and help businesses identify how best to display popular products and reduce wait times at registers, the collection of data via mobile devices is invisible and passive, and it is difficult for consumers to opt out of mobile device tracking.

For a more detailed overview of the seminar, please click here.

On February 6, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) hosted the first of eight planned multi-stakeholder meetings aimed at creating a voluntary code of conduct to address the growing commercial and government use of facial recognition technology. The meeting included a primer on how facial recognition technology works, current applications, technical privacy safeguards, and gaps in privacy protections that should be addressed during the upcoming sessions. Meeting attendees included government stakeholders, technology industry representatives from companies including Microsoft and FaceFirst, and consumer groups such as Consumer Action and the Center for Democracy and Technology. The second meeting is scheduled for February 25.

Notably, the meeting was held one day after Senator Al Franken (D-Minn.) sent a letter to the head of FacialNetwork.com, the developer of the NameTag facial recognition app for Google Glass users. The letter cited deep concerns with NameTag’s ability to identify individuals from a distance without their knowledge and consent, the lack of federal law governing the use of facial recognition technology, and the potential for abuse by “bad actors.” In the letter, Sen. Franken “strongly urged” the developer to (1) postpone NameTag’s launch until after NTIA establishes its code of conduct; and (2) limit the app’s facial recognition feature to individuals who have given their affirmative consent to be identified.