Big Tech says no. The law says maybe.
Over the last ten years, social media companies have become ubiquitous. Many users of these platforms—Facebook, Instagram, Snapchat, Twitter, and others—use them to harass, bully, stalk, and sexually exploit other users. For example, users of Horizon Worlds, Meta’s virtual reality platform, have reported that other users virtually raped their avatars while other avatars watched and cheered. Omegle, a free online chat tool that lets users socialize without registering, randomly pairs strangers in one-on-one chat sessions. It currently faces multiple lawsuits brought by minors who allege the site paired them with sexual predators who groomed and sexually exploited them. It’s like Westworld in bitmoji form, and it is playing out across social media.
In other instances, the apps themselves encourage criminal or discriminatory behavior. You may remember when Snapchat offered a filter that displayed users’ driving speeds and, according to plaintiffs, encouraged reckless driving. Lemmon v. Snap was filed after a 20-year-old Snapchat user crashed his car while using the filter, having driven over 120 miles per hour at one point. The 2017 crash killed the driver and two teenage passengers. Two of the victims’ parents sued Snap for wrongful death, alleging its speed filter enticed users to drive at unsafe speeds.
In Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, Roommates.com was sued for directing users to fill out a questionnaire that allegedly violated the Fair Housing Act by allowing users to discriminate against housing applicants based on race, sexual orientation, and gender identity.
And despite such reports and trends, Big Tech largely refuses to act to protect users, even when they report conduct that violates the platforms’ own user agreements. Thanks to Section 230 of the Communications Decency Act of 1996, Big Tech is actually emboldened not to act. Section 230 provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). In other words, even though the act was written and passed before anyone could contemplate social media, these massive corporations claim Section 230 shields them from a wide range of laws that would otherwise hold them legally responsible for what others say and do on their platforms.
In recent years, borrowing a strategy once used against tobacco companies, lawyers across the country have begun bringing product liability claims alleging that social media companies purposefully designed their platforms to addict users. The theory is that social media apps are defectively designed products that intentionally harm users, a framing that allows many of these cases to circumvent Section 230’s broad immunity.
Some lawsuits allege that excessive exposure to platforms like Facebook and Instagram has led to attempted or completed suicides, eating disorders, and sleeplessness, among other harms. The plaintiffs allege the apps were designed to aggressively addict young people for corporate profit. As these cases proceed through early motion practice before the courts, we will see whether this new strategy is effective or whether Big Tech will continue to enjoy protections afforded to no other industry.