Social media companies may be feeling relief after Indiana lawmakers dropped a transparency requirement from a proposed state consumer privacy bill, SB358, which in its introduced form contained the following potentially controversial disclosure requirement (relevant portion shown):
Chapter 2. Disclosure of Social Media Administrative Procedures
Sec. 1. (a) The owner or operator of a social media service shall publish on the social media service’s Internet web site the procedures, standards, policies, algorithms, or other mechanisms used by the owner or operator for the following purposes with regard to the social media service:
(1) To determine how content is selected for dissemination to users, including: (A) any attribute of a registered user, or of a registered user’s account or profile; and (B) any attribute of an individual piece of content; that is used to determine whether the content is disseminated to the user and how the content, if disseminated to the user, is presented, prioritized, categorized, or ranked as compared to other content disseminated to the user.
* * *
This language is directed primarily at consumer-facing recommender systems: systems that collect and process user data in order to target ads and other content to specific users. Recommender systems powered by machine learning and other techniques decide what content you see on apps and websites like TikTok, Instagram, Netflix, Amazon, Twitter, and Facebook (e.g., News Feed ranking). They are practical and useful tools for the companies that deploy them and the users who interact with them, not least because they surface content that is predictably relevant and of interest to users and thus can improve a company’s advertising revenues. But they can also cause problems. Federal lawmakers, for example, heard testimony about leaked Meta (formerly Facebook) documents that reportedly revealed Meta’s awareness that Instagram’s recommender system may have contributed to teenage girls’ mental health issues.
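To make the ranking step concrete, the sketch below shows how attributes of a user's profile and attributes of individual pieces of content, the very inputs the bill's disclosure requirement targeted, can combine to prioritize one post over another. This is a deliberately simplified, hypothetical example; real platforms use trained machine-learning models over far richer signals, and every attribute name and weight here is invented.

```python
# Hypothetical, highly simplified sketch of how a recommender system
# might rank content for a user. All names and weights are illustrative;
# real systems learn these signals with machine-learning models.

# A user's inferred interest profile (topic -> interest weight).
user_profile = {"sports": 0.9, "cooking": 0.1, "politics": 0.4}

# Candidate posts, each tagged with topic attributes.
posts = [
    {"id": 1, "topics": {"sports": 1.0}},
    {"id": 2, "topics": {"cooking": 0.8, "politics": 0.2}},
    {"id": 3, "topics": {"politics": 1.0}},
]

def relevance(post, profile):
    """Score a post by the overlap between its topics and the user's interests."""
    return sum(weight * profile.get(topic, 0.0)
               for topic, weight in post["topics"].items())

# Rank posts from most to least relevant for this user.
ranked = sorted(posts, key=lambda p: relevance(p, user_profile), reverse=True)
print([p["id"] for p in ranked])  # → [1, 3, 2]
```

Even this toy version hints at the disclosure question the bill raised: describing the scoring idea in a sentence is easy, but publishing the actual "algorithm" of a production system would mean far more.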
Notably, the disclosure requirement did not include a private right of action, but instead would have given the state’s attorney general enforcement powers to go after owners or operators of social media companies who knowingly or intentionally violated the requirement, an act the law would have considered “deceptive.” The bill’s current language, however, does not contain the disclosure requirement after lawmakers adopted an amendment removing it (along with other provisions not discussed here).
Though relief may have been social media’s initial reaction to lawmakers’ action, those companies also know that transparency measures like Indiana’s won’t be the last they see, perhaps not in Indiana, but certainly in other, more progressive jurisdictions. That means lawmakers and stakeholders will continue debating whether improved transparency can fix the worst aspects of black-box AI systems like consumer-facing recommender systems. That debate will focus on a number of important questions, including: if transparency is intended to improve trust in the data-based systems used by social media companies, does providing more technical information about recommender systems actually help consumers?
Several studies have shown that website and app privacy policies and terms of service are so dense with information that most consumers simply click past them and move on. Transparency measures likely won’t cause users to stop and read the fine print. And without training in machine learning, a consumer probably has little use for detailed information about how a site ranks posts on their home page or app timeline. On the other hand, it seems reasonable to require publication of general procedures, standards, policies, algorithms, or other mechanisms, as the disclosure requirement above would have required, especially where an enforcement authority (or individual users) would benefit from having such information. For more about the benefits and potential drawbacks of transparency measures for recommender and other AI systems, see this author’s post (April 2021) and also this post (December 2021) by the researchers at the Institute for Futures Studies (Stockholm), who propose ways to make recommender system disclosures work.
As for Indiana’s specific proposal, lawmakers there might consider a future disclosure measure that is far less ambiguous than the one shown above, so that enforcement efforts can be better managed and courts are not put in the all-too-familiar role of law interpreters. For example, how much of an “algorithm” must be disclosed? Is it sufficient to describe the elements of a recommender system, such as the use of collaborative filtering techniques and a neural network, or would compliance require more? And what do “standards and policies” for determining how content is selected for dissemination to users mean? Does that refer to how training data are selected, the logic and design of the algorithms, or the results of accuracy and other performance testing? Arguably, some of those disclosures could require revealing proprietary information and trade secrets, which are protected by other state and federal laws.
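To see why the required depth of disclosure matters, consider collaborative filtering, one of the techniques mentioned above. A compliant disclosure could stop at naming the technique in a sentence, or it could extend to logic like the following. This is a toy, hypothetical sketch with invented usernames and ratings; production systems are vastly more complex, and disclosing their actual logic is where trade-secret concerns arise.

```python
# Toy user-based collaborative filtering: recommend items liked by
# the most similar other user. All data here is invented.

ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 5, "B": 3, "C": 5, "D": 2},
    "carol": {"A": 1, "B": 5, "D": 4},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    du = sum(ratings[u][i] ** 2 for i in common) ** 0.5
    dv = sum(ratings[v][i] ** 2 for i in common) ** 0.5
    return num / (du * dv)

def recommend(user):
    """Suggest items the most similar other user has rated that `user` hasn't seen."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return [item for item in ratings[nearest] if item not in ratings[user]]

print(recommend("alice"))  # → ['D']
```

A law that does not say whether "disclosure" means naming the technique, summarizing the similarity logic, or publishing parameters and training data leaves both regulators and companies guessing.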
At bottom, more transparency is expected to help users make informed choices about how they engage with the online world and, in turn, could improve trust in AI generally. More information would also arm government authorities and, where permitted, individual users with what they need to litigate and seek remedies when statutory violations occur. Beyond transparency and trust, however, lies the broader and more systemic problem of the asymmetric power social media and other AI technology companies hold over users when it comes to collecting and processing user data, whether to power their recommender systems or for other purposes. It is not clear that Indiana’s disclosure requirement would have altered that imbalance in favor of consumers.
The post Transparency, Recommender Systems, and a Missed Opportunity for Lawmakers first appeared on ARTIFICIAL INTELLIGENCE TECHNOLOGY AND THE LAW.