On Sept. 13, California Gov. Gavin Newsom signed into law AB 587, which requires social media companies to publicly post their content moderation policies and semiannually report data on their enforcement of the policies to the attorney general. The first part of this article will discuss the requirements imposed by AB 587 on social media companies. The second part will discuss other state laws that similarly moderate social media content and how they compare to AB 587. The last part of this article will examine the litigation history of content moderation laws and the potential implications of possible Supreme Court intervention on these state laws.

For California, the terms of service report, covering the third quarter of 2023, will need to be submitted by Jan. 1, 2024. The second terms of service report, covering activity within the fourth quarter of 2023, will be due to the attorney general no later than April 1, 2024. A third report will be due no later than Oct. 1, 2024, and subsequently, social media companies will be required to submit the reports on April 1 and Oct. 1 of each year.

There is no template or format social media companies must use for preparing the terms of service reports, but the law has specific requirements for what the reports should include.

The California law will only apply to companies that generated more than $100 million in gross revenue during the preceding calendar year. For California law pertaining to online safety and privacy as it relates to children under 18, please also see the article California’s Landmark Age-Appropriate Design Code Act: What You Need to Know.

Requirements Imposed by AB 587

Terms of Service Requirement. AB 587 requires social media companies to post terms of service that include:

  1. Contact information to allow users to ask questions about the terms of service.
  2. A description of the process that users must follow to flag content, groups or other users that they believe violate the terms of service and of the social media company’s commitments on response and resolution time.
  3. A list of potential actions the social media company may take against an item of content or a user, including but not limited to removing, demonetizing, deprioritizing or banning.

Terms of Service Report. On a semiannual basis, a social media company must submit a terms of service report to the attorney general. The terms of service report should include:

  1. The current version of the terms of service of the social media platform.
  2. A complete and detailed description of any changes to the terms of service since the previous report.
  3. A statement of whether the current version of the terms of service defines each of the following categories of content and, if so, the definitions of those categories, including any subcategories:
    • Hate speech or racism.
    • Extremism or radicalization.
    • Disinformation or misinformation.
    • Harassment.
    • Foreign political interference.
  4. A detailed description of content moderation practices used by the social media company for that platform, including but not limited to all of the following:
    • Any existing policies intended to address the categories of content described above in 3.
    • How automated content moderation systems enforce the terms of service of the social media platform, and when these systems involve human review.
    • How the social media company responds to user reports of violations of the terms of service.
    • How the social media company would remove individual pieces of content, users or groups that violate the terms of service or would take broader action against individual users or groups of users that violate the terms of service.
    • The languages in which the social media platform does not make terms of service available but does offer product features, including but not limited to menus and prompts.
  5. Information on content that was flagged by the social media company as content belonging to any of the categories described above in 3, including all of the following:
    • The total number of flagged items of content.
    • The total number of actioned items of content.
    • The total number of actioned items of content that resulted in action taken by the social media company against the user or group of users responsible for the content.
    • The total number of actioned items of content that were removed, demonetized or deprioritized by the social media company.
    • The number of times actioned items of content were viewed by users.
    • The number of times actioned items of content were shared, and the number of users who viewed the content before it was actioned.
    • The number of times users appealed social media company actions taken on that platform and the number of reversals of social media company actions on appeal, disaggregated by each type of action.
  6. All information required in 5 must be disaggregated into the following categories (a rough sketch of one way to organize these disaggregated metrics follows this list):
    • The category of content, including any relevant categories described above in 3.
    • The type of content, including but not limited to posts, comments, messages, or profiles of users or groups of users.
    • The type of media of the content, including but not limited to text, images and videos.
    • How the content was flagged, including but not limited to being flagged by company employees or contractors, artificial intelligence software, community moderators, civil society partners and users.
    • How the content was actioned, including but not limited to being actioned by company employees or contractors, artificial intelligence software, community moderators, civil society partners and users.
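Taken together, items 5 and 6 describe, in effect, a data structure: a set of per-item counts reported for every combination of content category, content type, media type, flag source and action source. For illustration only, below is a minimal sketch in Python of how a compliance team might organize those metrics. AB 587 prescribes what the report must contain, not its format, and every name in this sketch is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Hypothetical labels for the item 6 disaggregation axes; the statute
# specifies the required information, not how it must be structured.
CONTENT_CATEGORIES = (
    "hate_speech_or_racism", "extremism_or_radicalization",
    "disinformation_or_misinformation", "harassment",
    "foreign_political_interference",
)
FLAG_AND_ACTION_SOURCES = (
    "employees_or_contractors", "ai_software",
    "community_moderators", "civil_society_partners", "users",
)

@dataclass
class FlaggedContentMetrics:
    """Counts required by item 5 for one combination of the item 6 axes."""
    flagged: int = 0
    actioned: int = 0
    actioned_with_user_or_group_action: int = 0
    removed_demonetized_or_deprioritized: int = 0
    views_of_actioned_content: int = 0
    shares_of_actioned_content: int = 0
    appeals: int = 0
    reversals_on_appeal: int = 0

@dataclass
class TermsOfServiceReport:
    """One semiannual submission. The metrics mapping is keyed by a tuple of
    (category, content type, media type, flag source, action source)."""
    current_terms_of_service: str
    changes_since_last_report: str
    category_definitions: Dict[str, str]      # item 3
    moderation_practices_description: str     # item 4
    metrics: Dict[Tuple[str, str, str, str, str], FlaggedContentMetrics] = field(
        default_factory=dict
    )
```

Any actual submission would, of course, track the statute's own wording and whatever format the attorney general ultimately accepts.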

Enforcement Provisions. A social media company that violates AB 587 will be liable for a civil penalty not to exceed $15,000 per violation per day, and it may be enjoined in any court of competent jurisdiction.

A social media company is considered to be in violation of AB 587 for each day the social media company does any of the following:

  1. Fails to post terms of service in accordance with Section 22676.
  2. Fails to timely submit to the attorney general a report required pursuant to Section 22677.
  3. Materially omits or misrepresents required information in a report submitted pursuant to Section 22677.

In assessing civil penalty amounts, the court will consider whether the social media company has made a reasonable, good-faith attempt to comply with AB 587.

Moreover, actions for relief pursuant to AB 587 will be prosecuted exclusively in a court of competent jurisdiction by the attorney general, by a city attorney of a city having a population in excess of 750,000, or by a city attorney in a city and county in the name of the people of the state of California upon their own complaint or upon the complaint of a board, officer, person, corporation or association.

If an action pursuant to AB 587 is brought by the attorney general, half the penalty collected will be paid to the treasurer of the county in which the judgment was entered, and the other half will go to the General Fund.

If an action pursuant to AB 587 is brought by the city attorney, half the penalty will be paid to the treasurer of the city in which the judgment was entered and the other half to the treasurer of the county in which the judgment was entered.

Similar State Laws

Other states, namely Florida, Texas, and New York, have also enacted social media content moderation laws.

Florida. Florida’s SB 7072 was signed into law in May 2021 to “hold Big Tech accountable by driving transparency and safeguarding Floridians’ ability to access and participate in online platforms.” Similar to AB 587, SB 7072 requires social media platforms to respond to user requests regarding the number of platform participants who were provided or shown the user’s content as well as to publish their standards used for determining whether to “censor, deplatform, and shadowban.” These standards cannot be changed “more than once every 30 days,” and social media platforms are required to apply their content moderation standards in a “consistent manner.” SB 7072 also prohibits social media platforms from removing accounts of any “journalistic enterprise” as well as any political candidate during an election. Social media platforms are also required to allow users to opt out of algorithmic sorting systems that “allow sequential or chronological posts and content.”

Under SB 7072, the attorney general of Florida can bring enforcement actions against social media companies under the state’s Unfair and Deceptive Trade Practices Act; companies found in violation will be placed on an “antitrust violator vendor list.” SB 7072 also creates a private right of action for monetary damages.

Texas. Texas’s HB 20 was signed into law in September 2021 to “protect[] the free exchange of ideas and information in [Texas].” HB 20 prohibits social media platforms from “censor[ing] a user, a user’s expression or a user’s ability to receive the expression of another person” based on “viewpoint,” with carve-outs for federal law.

Like AB 587 and SB 7072, HB 20 also requires social media platforms to publicly disclose their practices related to “content management, data management, and business practices,” as well as to publish a biannual transparency report containing metrics on user complaints of potential terms of service violations, how such complaints are handled and the outcomes of those complaints.

HB 20 gives the attorney general of Texas the authority to bring enforcement actions against social media platforms for violations of HB 20. Like SB 7072, HB 20 also creates a private right of action, though users bringing suit under HB 20 would only be entitled to recover declaratory and/or injunctive relief.

New York. S4511A was signed into law in June 2022 as part of a package of laws created in the aftermath of the Buffalo and Uvalde mass shootings. While the majority of the package focused on strengthening gun regulation, S4511A “[r]equires social media networks to provide and maintain mechanisms for reporting hateful conduct on their platform.” The law defines “hateful conduct” as the use of a social media platform to “vilify, humiliate, or incite violence against a group, or a class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression.” Similar to the laws in Florida and Texas, S4511A requires social media platforms conducting business in New York to provide and maintain a mechanism for users to report hateful conduct, and platforms must also publish a policy on how they respond to and address reported incidents of hateful conduct. S4511A does not specify what those policies should contain, and it leaves enforcement solely to the attorney general, with no private right of action.

State Comparisons in Light of Judicial Intervention

State Comparisons. While politically California and New York are typically contrasted with Florida and Texas, the effect these states’ social media content laws have on First Amendment rights may not be so different. State laws for social media content moderation appear to be on a spectrum of increasing First Amendment implications, with California on the less-burdensome end, Florida on the more-burdensome end, and Texas and New York in the middle.

All four state laws require social media platforms to publish the policies the social media platform uses to moderate content. While both California and New York require social media platforms to provide mechanisms for users to report content, California does not explicitly prohibit certain types of content – though, arguably, requiring a social media platform to specify and publish whether it has specific definitions for things like “hate speech or racism” has the same effect as New York’s explicit ban on content that “vilifies.” New York, Texas and Florida all compel social media platforms to either ban certain speech or host speech that may go against their terms of service, with Florida placing the most burdens on what social media platforms can and cannot do regarding their content.

This gradient is particularly important when considering the judicial intervention surrounding content moderation laws, which appears to have stalled the conversation across different states. California’s law will likely survive this challenge to state content moderation laws because, unlike Florida and Texas (and New York, though New York’s law has not been explicitly named in these suits), California does not require social media platforms to host, or prohibit them from hosting, third-party speech; it merely requires transparency so that users can understand how content is moderated. Below is a more in-depth discussion of the procedural history.

Judicial Intervention. Both the Texas and Florida laws have been subject to extensive litigation. Almost immediately after it was signed into law, SB 7072 was challenged by NetChoice and the Computer & Communications Industry Association (CCIA) in NetChoice v. Moody. Ultimately, the Eleventh Circuit largely affirmed the district court’s preliminary injunction, holding that SB 7072’s content moderation restrictions likely violated the First Amendment because they amounted to government-compelled speech. The state of Florida petitioned for a writ of certiorari on Sept. 21. The response was filed Oct. 24.

In the interim, HB 20 was subject to a similar lawsuit, NetChoice v. Paxton. The district court preliminarily enjoined HB 20, finding that the moderation actions it prohibits are exercises of editorial discretion protected by the First Amendment. The state of Texas appealed, and in May 2022, the Fifth Circuit issued a one-sentence order staying the injunction. Plaintiffs NetChoice and CCIA immediately petitioned the Supreme Court to vacate the stay, arguing that the unreasoned one-sentence order deprived them of “careful review and meaningful decision” while HB 20’s constitutionality was being litigated. The Supreme Court vacated the stay later that month in a 5-4 vote. The dissent argued that HB 20 was “novel,” that it was not clear how the court’s precedent should apply, and that the Supreme Court therefore should not have intervened. The Fifth Circuit then held, on Sept. 16, that the district court erred in issuing the injunction because a platform’s “censorship is not speech,” and it remanded the case, creating a circuit split between the Fifth and Eleventh circuits. That circuit split is the basis on which the Paxton plaintiffs sought a stay of the Paxton ruling in light of the Moody petition before the Supreme Court; the stay was granted on Oct. 12.

It is very possible that a Supreme Court ruling on social media content moderation is on the horizon, and it would not be the first time the Supreme Court has considered the legal status of social media platforms in relation to speech. In reviewing SB 7072, the Eleventh Circuit relied on Zauderer v. Office of Disciplinary Counsel, a precedent concerning compelled commercial speech. Curiously, although the Eleventh Circuit acknowledged that Zauderer arose in the advertising context and rests on the government’s interest in preventing consumer deception, it nonetheless held that Zauderer “is broad enough to cover SB 7072’s disclosure requirements,” without otherwise grappling with the Zauderer jurisprudence. Moreover, in April 2021, Justice Thomas opined in Biden v. Knight on whether social media platforms might be compelled, as common carriers, to carry speech against their will. While Knight was later dismissed as moot, it does not seem a stretch to expect some clarification of the legal status of social media platforms that, in turn, will affect the constitutionality of content moderation laws that require more than just transparency.