As 2021 comes to a close, we will be sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy this month. Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy, as well as appropriated funds to study developments in these emerging spaces. In addition, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes, federal agencies have promulgated new rules and issued guidance to promote consumer awareness and safety. We are providing this year-end roundup in four parts. In this post, we detail AI updates in Congress, state legislatures, and federal agencies.
Part I: Artificial Intelligence
While various AI legislative proposals have been introduced in Congress, the United States has not embraced a broad, horizontal approach to AI regulation like that proposed by the European Commission, focusing instead on investing in infrastructure to promote the growth of AI.
Most notably, the National Defense Authorization Act for Fiscal Year 2021 (“NDAA”) (H.R. 6395), which became law in January 2021 when Congress overrode a presidential veto, represents the most substantial federal U.S. legislation on AI to date. The NDAA established the National AI Initiative to coordinate ongoing AI research, development, and demonstration activities among stakeholders. To implement the AI Initiative, the NDAA mandates the creation of a National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (“OSTP”) to undertake the AI Initiative activities, as well as an interagency National Artificial Intelligence Advisory Committee to coordinate related federal activity. The White House also launched AI.gov and the National AI Research Resource Task Force to coordinate and accelerate AI research across all scientific disciplines. In addition, the NDAA:
- Directs the National Institute of Standards and Technology (“NIST”) to support the development of relevant standards and best practices pertaining to AI and appropriates $400 million to NIST through FY 2025;
- Requires an assessment and report on whether AI technology acquired by the Department of Defense (“DOD”) is developed in an ethical and responsible manner, including steps taken or resources required to mitigate any deficiencies;
- Includes a number of other provisions expanding research, development, and deployment of AI, including authorizing $1.2 billion through FY 2025 for a Department of Energy (“DOE”) AI research program.
A growing body of state and federal proposals address algorithmic accountability and mitigation of unwanted bias and discrimination. For example, the Mind Your Own Business Act of 2021 (S. 1444), introduced by Senator Ron Wyden (D-OR), would authorize the FTC to promulgate regulations that would require covered entities to conduct impact assessments of “high-risk automated decision systems,” such as AI and machine learning techniques, as well as “high-risk information systems” that “pose a significant risk to the privacy or security” of consumers’ personal information. Other federal bills, like the Algorithmic Justice and Online Platform Transparency Act of 2021 (S. 1896), introduced by Senator Ed Markey (D-MA), would subject online platforms to transparency requirements such as describing to users the types of algorithmic processes they employ and the information they collect to power them.
States are considering their own slates of related proposals. For example, the California State Assembly is considering the Automated Decision Systems Accountability Act of 2021 (AB-13), which would require monitoring and impact assessments for California businesses that provide “automated decision systems,” defined as products or services using AI or other computational techniques to make decisions. A Washington state bill (SB 5116) would direct the state’s chief privacy officer to adopt rules regarding the development, procurement, and use of automated decision systems by public agencies. More broadly, facial recognition technology has attracted renewed attention from state lawmakers, with wholesale bans on state and local government agencies’ use of facial recognition gaining steam.
Agencies are also focusing on AI, particularly in the enforcement context. For example, the Federal Trade Commission (“FTC”) investigated and settled with Everalbum, Inc. in January 2021 in relation to its “Ever App,” a photo and video storage app that used facial recognition technology to automatically sort and “tag” users’ photographs. Pursuant to the settlement agreement, Everalbum was required to delete models and algorithms that it developed using users’ uploaded photos and videos and obtain express consent from its users prior to applying facial recognition technology. Enforcement activity by the FTC to regulate AI may become even more common, as legislative efforts seek to create a new privacy-focused bureau within the FTC and expand the agency’s civil penalty authority.