During the 2016 election, Russian operatives used fake social media profiles to influence voters and created bot accounts to like and share posts across the internet.  More recently, in January 2019, the New York Attorney General and the Office of the Florida Attorney General announced settlements with certain entities that sold fake social media engagement, such as followers, likes and views.  Many of the social media platforms have also recently purged millions of fake accounts.  Thus, it’s clear that bots and automated activity on social media platforms have been on everyone’s radar, including state legislators’.

Indeed, California passed a chatbot disclosure law (SB-1001) last September that makes it unlawful, in certain circumstances, for persons to mislead users about a bot’s artificial identity, and the law comes into effect on July 1st.  In essence, the purpose of the law is to inform users when they are interacting with a virtual assistant, chatbot or automated social media account so that they can adjust their behavior or expectations accordingly.  Entities that interact with customers online or via mobile applications regarding commercial transactions, whether through a chatbot on their own website or an automated account on another platform, should certainly take note of the new California law’s disclosure requirements.

The law, with certain exceptions, makes it unlawful for any person to use a “bot” to “communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.”  The statute is thus limited to online commercial interactions and those related to influencing an election.  Virtual assistants that handle online customer service inquiries would presumably fall outside the law’s scope, though depending on the interaction or how an algorithm is programmed, simple customer service (“How can I help you with your prior order?”) might turn into something “incentivizing a purchase or sale of goods” (“Perhaps I can help you find a matching outfit?”), as the sketch below illustrates.
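To illustrate how thin that line can be, here is a minimal sketch, in Python, of one way a compliance review might flag automated replies that drift toward sales language.  The keyword list and function name are purely hypothetical assumptions for illustration; nothing here is drawn from the statute, and a real-world analysis would be far more nuanced than keyword matching.

```python
# Hypothetical sketch: flag automated replies that drift from plain
# customer service into language that could "incentivize a purchase or
# sale of goods."  The cue list below is an illustrative assumption,
# not statutory language.

SALES_CUES = ("buy", "purchase", "discount", "offer", "matching", "recommend")

def may_incentivize_sale(reply: str) -> bool:
    """Return True if a reply contains language suggestive of a sales pitch."""
    lowered = reply.lower()
    return any(cue in lowered for cue in SALES_CUES)

print(may_incentivize_sale("How can I help you with your prior order?"))
# False -- plain customer service
print(may_incentivize_sale("Perhaps I can help you find a matching outfit?"))
# True -- the reply now suggests a purchase
```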

Under the statute, a “bot” means “an automated ‘online’ account where all or substantially all of the actions or posts of that account are not the result of a person,” and “online” means “appearing on any public-facing Internet Web site, Web application, or digital application, including a social network or publication.”  Importantly, a person using a bot shall not be liable if the person discloses that it is a bot, though under the law, such disclosure must be “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.”  (“Clear and conspicuous” is not defined by the statute, but has been explained by the FTC.)

The law expressly states that it does not impose a duty on the service providers of “online platforms,” such as web hosting providers or ISPs.  (Note: the law defines “online platform” as “any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.”)  Notably, the law does not contain a private right of action, though plaintiffs could perhaps try to attach claims under California’s unfair competition law, and state officials with enforcement powers under that law could bring an action.
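To make the disclosure safe harbor concrete, here is a minimal, hypothetical sketch of a chat session that delivers the bot disclosure before any other message.  The class name, message text and structure are illustrative assumptions, not statutory language or any real chatbot framework.

```python
# Hypothetical sketch: a chat session that leads with a clear,
# conspicuous bot disclosure before any substantive message.

DISCLOSURE = (
    "Hi! I'm an automated assistant (a bot), not a human agent. "
    "How can I help you today?"
)

class ChatSession:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.disclosed = False
        self.transcript: list[tuple[str, str]] = []

    def send(self, text: str) -> None:
        # Deliver the disclosure up front, before anything else, so it
        # is not buried later in the conversation.
        if not self.disclosed:
            self.transcript.append(("bot", DISCLOSURE))
            self.disclosed = True
        self.transcript.append(("bot", text))

session = ChatSession(user_id="user-123")
session.send("I can help you track your prior order.")
for speaker, line in session.transcript:
    print(f"{speaker}: {line}")
```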

With the law’s effective date rapidly approaching, companies that employ virtual assistants or bots in the online environment should ensure that the nature of such bots is clearly and conspicuously disclosed to the user.  While the law only applies to interactions with people in California, it may be safer to adopt a state-specific requirement such as this one on a national basis, given that identifying a user’s location may not be practical in many situations.
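One conservative way to operationalize that advice, sketched below under the assumption that IP-based geolocation can fail or be wrong (VPNs, proxies, mobile carriers), is simply to default to disclosure whenever a user’s location is uncertain.  The function names and stub here are hypothetical.

```python
# Hypothetical sketch: default to the bot disclosure everywhere rather
# than relying on geolocation to single out California users.
from typing import Optional

def locate_user_state(ip_address: str) -> Optional[str]:
    # Stub: a real implementation would call a geolocation service,
    # which may return nothing or be inaccurate.
    return None

def requires_disclosure(ip_address: str) -> bool:
    state = locate_user_state(ip_address)
    # Err on the side of disclosure when location is unknown,
    # effectively applying the California rule nationally.
    return state == "CA" or state is None

print(requires_disclosure("203.0.113.7"))  # True -- location unknown
```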


Jeffrey Neuburger is a partner, co-head of the Technology, Media & Telecommunications Group, a member of the Privacy & Cybersecurity Group and editor of the firm’s New Media and Technology Law blog.

Jeff’s practice focuses on technology, media and advertising-related business transactions and counseling, including the utilization of emerging technology and distribution methods in business. For example, Jeff represents clients in online strategies associated with advertising, products, services and content commercialized on the Internet through broadband channels, mobile platforms, broadcast and cable television distribution and print publishing. He also represents many organizations in large infrastructure-related projects, such as outsourcing, technology acquisitions, cloud computing initiatives and related services agreements.

Serving as a collaborative business partner through our clients’ biggest challenges, Jeff is part of the Firm’s cross-disciplinary, cross-jurisdictional Coronavirus Response Team helping to shape the guidance and next steps for clients impacted by the pandemic.