It has been almost two years since Lina Khan was designated chair of the Federal Trade Commission (FTC), and her tenure has been an eventful one. One of the many questions being asked is “Where do things stand at the FTC on privacy?” Congress has yet to pass comprehensive privacy legislation, and the FTC continues to rely largely on its decades-old statute to bring privacy cases that are not always well suited to such an antiquated law.

This challenge became readily apparent in a recent FTC case against Kochava, a data broker that allegedly, among other things, sold to or shared with third parties precise geolocation data that, when associated with unique persistent identifiers, could reveal consumers’ visits to sensitive locations, such as reproductive health facilities or houses of worship. The FTC alleged that selling or sharing this data was an unfair practice. To establish that a practice is unfair, the agency must show that it 1) causes or is likely to cause substantial injury to consumers; 2) is not reasonably avoidable by consumers themselves; and 3) is not outweighed by countervailing benefits to consumers or to competition. In addition to the statutory language, a 1980 FTC Policy Statement on Unfairness provides a detailed analysis of each of the three prongs of the unfairness test and how the agency interprets the legal standard. Establishing that something is unfair is not simple; it requires extensive proof and a detailed factual analysis.

And just last week, a federal district court dismissed the FTC’s complaint in Kochava, finding that although the concerns about the practices at issue might be “legitimate,” the FTC had not pled sufficient facts to demonstrate that the practices cause or are likely to cause substantial injury to consumers. The FTC also argued that the practices create a significant risk of concrete harm, but the court found that theory inadequately pled as well. Among the shortcomings the court cited was that the data at issue allows only sensitive “inferences” to be drawn, which “lessen[s] the severity of the alleged privacy injury.” Probably more significant to the decision, however, was the finding that the FTC “claims only that third parties could tie the data back to device users; not that they have done so or are likely to do so.”

It remains to be seen whether the FTC will take the court up on its invitation to amend the Kochava complaint. But the case serves as an important reminder that despite all the statements made by commissioners about the broad range of privacy harms that they want to challenge, the very real limits to the agency’s statutory authority can make a lot of this a steep uphill battle, particularly when scrutinized closely by federal court judges. It is, however, unlikely that one decision like this will stop the agency from bringing comparable unfairness claims going forward.

That said, there have been some notable settlements that deserve recognition. Earlier this year, two health-related privacy cases sent a strong message that the FTC is deeply concerned about the privacy of health information, particularly when that information is shared with third parties for advertising purposes. These cases relied on both deception and unfairness theories, and the agency has made very clear that health privacy remains a top priority.

With respect to the Children’s Online Privacy Protection Act (COPPA), Congress continues to consider modifications and expansions to the statute, while the FTC remains conspicuously silent on the rulemaking that has been pending since 2019. In its recent case against Epic, the agency obtained its largest COPPA settlement to date, at $275 million. The case also included an interesting unfairness count, in which the FTC alleged that default settings harmed kids and teens by allowing personal interactions with strangers that sometimes led to threats, bullying, and sexual harassment.

The potential for harm to kids and teens will continue to be a major issue at the agency, and Commissioner Alvaro Bedoya has been speaking frequently on the subject. In a recent speech at the National Academies of Sciences, Engineering, and Medicine, Commissioner Bedoya emphasized the need to ensure that legal tools address how social media and other technology can affect the mental health of teens. Although his concerns go beyond traditional privacy issues, how data is collected and used with respect to teens is certainly a significant part of the discussion.

Interestingly, Commissioner Bedoya has emphasized that the agency should explore hiring psychologists to help agency staff better understand the potential harms to children and teens that may be caused by certain online practices or tools. Notably, in recent testimony to Congress, the FTC emphasized the risks that online services can pose to children and teens and stated that:

The FTC is considering steps to deepen this work, including retaining psychologists and youth development experts to allow the agency to analyze conduct, assess harms and remedies, and pursue studies with an interdisciplinary approach, including conduct affecting children.

Teen and child privacy is also one of the many issues being examined in the FTC’s privacy rulemaking, more formally known as the Commercial Surveillance and Data Security rulemaking. The comment period has closed for the first stage of the rulemaking, and it remains to be seen whether or when the agency will come forward with any proposed rules. This rulemaking will continue for quite some time, but it remains an important FTC issue to closely follow.

And finally, we have seen the FTC make a wide variety of statements about artificial intelligence (AI), although thus far those statements have focused more on broad FTC Act compliance issues than on privacy specifically. In a recent joint statement, the FTC and several other agencies emphasized that existing laws do apply to the use of AI and reiterated their concern that AI be developed and used in ways that do not have discriminatory impacts or otherwise violate federal law. In a series of blog posts, the FTC provided a deeper dive into some of its AI-related concerns. Although we have yet to see FTC enforcement involving the new wave of AI tools available to consumers, it is quite clear that such tools are being closely scrutinized by agency staff.