Yesterday, the Federal Trade Commission announced a settlement with Snapchat, the young mobile messaging company. The complaint alleges misrepresentations about functionality and related security, as well as privacy violations, including misrepresenting the amount of data Snapchat collected from users and its use of location data for analytics purposes. Notably, some of Snapchat’s troubles flow from unauthorized third party applications that exploited issues in its non-public API.
First, a bit about Snapchat. The company is among the fastest growing social networking services ever, processing more than 350 million messages, or “snaps,” each day. Users can send text, photos, or videos to other users of the service and, before sending them, select the time when that message will self-destruct. Users are offered the option of choosing a time window between 1 and 10 seconds, after which the message will automatically delete. Recipients will “have that long to view your message and then it disappears forever,” claimed the company.
Unfortunately for Snapchat, the FTC concluded this statement was not accurate for a number of use cases. For example, when a recipient received a video message, Snapchat stored the file in a location outside of the app’s “sandbox.” Because the file was available outside the app, the recipient could connect their device to a computer and use simple file-browsing tools to locate and save the video. That method was widely publicized as early as December 2012, but the FTC says Snapchat didn’t begin encrypting video (thereby fixing the flaw) for nearly a year. The FTC’s complaint further alleges that recipients could download third party applications to log into the Snapchat service and avoid automatic deletion—Snapchat’s most prominent feature. A security researcher warned Snapchat about this flaw, but the company did not change its core product messaging assuring users that recipients could not keep messages indefinitely. Related to these concerns, the FTC also found problems with (i) representations about Snapchat’s ability to detect when a recipient captured a screenshot of a message and (ii) the failure to take straightforward technical steps to verify that the phone number entered by a user matched that of the device (notwithstanding user complaints), which allowed users to send inappropriate and offensive “snaps.”
It is worth noting that the enforcement action against Snapchat arose at least in part due to third-party exploitation of its proprietary application programming interface (API). A trio of Australian security researchers reverse engineered the code and published it online in August 2013. This action (which was apparently contrary to Snapchat’s terms of service) exposed the app (and vulnerabilities in its coding) to the world. Those vulnerabilities were in turn exploited to create apps that defeated the essential self-destruct feature of the service, as well as by a group calling itself SnapchatDB that collected and released the usernames and phone numbers of 4.6 million Snapchat users—another failure noted by the FTC. Ultimately, because these third party exploits of the core product claim (that snaps “disappear[ed] forever”) were deemed notorious (i.e., available in app stores or posted and discussed on the internet), the FTC considered Snapchat itself responsible for correcting or amending its own statements about the functionality and security of its product.
The settlement thus highlights the importance of accurate (as opposed to hopeful) disclosures about security as well as a prompt response when vulnerabilities are exposed—even more so where security is touted as a product feature. It is also a good reminder about the risks of relying on security by obscurity (the unpublished API).
- The FTC continues to conduct a thorough review of privacy and security representations as compared to practices, following public breaches (here the leak of millions of names by SnapchatDB to the internet). Indeed, the FTC notes that the case is part of “a multi-national enforcement sweep on mobile app privacy by members of the Global Privacy Enforcement Network” and “also coordinated with the Asia Pacific Privacy Priorities forum’s Privacy Awareness Week.”
- After the fact, privacy and security practices are often judged against whatever standard of care or security happens to have been articulated, and found wanting; here, that standard was “reasonable” steps or measures and “best security.”
- App developers and online businesses need to track the accuracy of consumer-facing disclosures and claims holistically. If there is a known vulnerability, even one arising from user fraud or unauthorized third party applications, communicate with consumers about the risks, including by updating disclosures and warning against risky practices.
- Claims about security seem particularly susceptible to overstatement. Perfect data security appears impossible (at least on any device connected to the Internet), so be current and precise about what you claim your service provides.
- Phrases such as “reasonable steps,” “commercially reasonable steps,” or “standard industry practices” abound in online privacy policies, but these phrases are not only unhelpful to consumers as a practical matter; they also in effect lay the legal framework for post-breach investigations and enforcement actions under unfair and deceptive practice laws.
- If there’s a published vulnerability that contradicts privacy/security assurances or puts consumers at risk, correct it promptly and conform public-facing statements accordingly.
- When the FTC comes knocking about privacy and security practices, even discontinued or corrected practices are fair game for enforcement.