As 2021 comes to a close, this month we are sharing the key legislative and regulatory updates for artificial intelligence (“AI”), the Internet of Things (“IoT”), connected and automated vehicles (“CAVs”), and privacy.  Lawmakers introduced a range of proposals to regulate AI, IoT, CAVs, and privacy, and appropriated funds to study developments in these emerging spaces.  In addition, federal agencies promulgated new rules and issued guidance to promote consumer awareness and safety, from developing a consumer labeling program for IoT devices to requiring the manufacturers and operators of CAVs to report crashes.  We are providing this year-end round-up in four parts.  In this post, we detail AI updates in Congress, state legislatures, and federal agencies.

Part I:  Artificial Intelligence

While various AI legislative proposals have been introduced in Congress, the United States has not embraced a broad, horizontal approach to AI regulation like that proposed by the European Commission, focusing instead on investing in infrastructure to promote the growth of AI.

Most notably, the National Defense Authorization Act for Fiscal Year 2021 (“NDAA”) (H.R. 6395), enacted in January 2021, represents the most substantial federal U.S. legislation on AI to date.  The NDAA establishes the National AI Initiative to coordinate ongoing AI research, development, and demonstration activities among stakeholders.  To implement the AI Initiative, the NDAA mandates the creation of a National Artificial Intelligence Initiative Office under the White House Office of Science and Technology Policy (“OSTP”) to undertake the AI Initiative activities, as well as an interagency National Artificial Intelligence Advisory Committee to coordinate related federal activity.  The White House also launched AI.gov and the National AI Research Resource Task Force to coordinate and accelerate AI research across all scientific disciplines.  In addition, the NDAA:

  • Directs the National Institute of Standards and Technology (“NIST”) to support the development of relevant standards and best practices pertaining to AI and appropriates $400 million to NIST through FY 2025;
  • Requires an assessment and report on whether AI technology acquired by the Department of Defense (“DOD”) is developed and sourced in an ethical and responsible manner, including steps taken or resources required to mitigate any deficiencies;
  • Includes a number of other provisions expanding research, development and deployment of AI, including authorizing $1.2 billion through FY 2025 for a Department of Energy (“DOE”) AI research program.

A growing body of state and federal proposals address algorithmic accountability and mitigation of unwanted bias and discrimination.  For example, the Mind Your Own Business Act of 2021 (S. 1444), introduced by Senator Ron Wyden (D-OR), would authorize the FTC to promulgate regulations that would require covered entities to conduct impact assessments of “high-risk automated decision systems,” such as AI and machine learning techniques, as well as “high-risk information systems” that “pose a significant risk to the privacy or security” of consumers’ personal information.  Other federal bills, like the Algorithmic Justice and Online Platform Transparency Act of 2021 (S. 1896), introduced by Senator Ed Markey (D-MA), would subject online platforms to transparency requirements such as describing to users the types of algorithmic processes they employ and the information they collect to power them.

States are considering their own slates of related proposals.  For example, the California State Assembly is considering the Automated Decision Systems Accountability Act of 2021 (AB-13), which would require monitoring and impact assessments for California businesses that provide “automated decision systems,” defined as products or services using AI or other computational techniques to make decisions.  A Washington state bill (SB 5116) would direct the state’s chief privacy officer to adopt rules regarding the development, procurement, and use of automated decision systems by public agencies.  More broadly, facial recognition technology has attracted renewed attention from state lawmakers, with wholesale bans on state and local government agencies’ use of facial recognition gaining steam.

Agencies are also focusing on AI, particularly in the enforcement context.  For example, the Federal Trade Commission (“FTC”) investigated and settled with Everalbum, Inc. in January 2021 in relation to its “Ever App,” a photo and video storage app that used facial recognition technology to automatically sort and “tag” users’ photographs.  Pursuant to the settlement agreement, Everalbum was required to delete models and algorithms that it developed using users’ uploaded photos and videos and obtain express consent from its users prior to applying facial recognition technology.  Enforcement activity by the FTC to regulate AI may become even more common, as legislative efforts seek to create a new privacy-focused bureau within the FTC and expand the agency’s civil penalty authority.

Jennifer Johnson

Jennifer Johnson is a partner specializing in communications, media and technology matters who serves as Co-Chair of Covington’s Technology Industry Group and its global and multi-disciplinary Artificial Intelligence (AI) and Internet of Things (IoT) Groups. She represents and advises technology companies, content distributors, television companies, trade associations, and other entities on a wide range of media and technology matters. Jennifer has almost three decades of experience advising clients in the communications, media and technology sectors, and has held leadership roles in these practices for almost twenty years. On technology issues, she collaborates with Covington’s global, multi-disciplinary team to assist companies navigating the complex statutory and regulatory constructs surrounding this evolving area, including product counseling and technology transactions related to connected and autonomous vehicles, internet connected devices, artificial intelligence, smart ecosystems, and other IoT products and services. Jennifer serves on the Board of Editors of The Journal of Robotics, Artificial Intelligence & Law.

Jennifer assists clients in developing and pursuing strategic business and policy objectives before the Federal Communications Commission (FCC) and Congress and through transactions and other business arrangements. She regularly advises clients on FCC regulatory matters and advocates frequently before the FCC. Jennifer has extensive experience negotiating content acquisition and distribution agreements for media and technology companies, including program distribution agreements, network affiliation and other program rights agreements, and agreements providing for the aggregation and distribution of content on over-the-top app-based platforms. She also assists investment clients in structuring, evaluating, and pursuing potential investments in media and technology companies.

Jayne Ponder

Jayne Ponder is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity Practice Group. Jayne’s practice focuses on a broad range of privacy, data security, and technology issues. She provides ongoing privacy and data protection counsel to companies, including on topics related to privacy policies and data practices, the California Consumer Privacy Act, and cyber and data security incident response and preparedness.

Lindsay Brewer

Lindsay advises clients on environmental, human rights, product safety, and public policy matters.

She counsels clients seeking to set sustainability goals; track their progress on environmental, social, and governance topics; and communicate their achievements to external stakeholders in a manner that mitigates legal risk. She also advises clients seeking to engage with regulators and policymakers on environmental policy. Lindsay has extensive experience advising clients on making environmental disclosures and public marketing claims related to their products and services, including under the FTC’s Green Guides and state consumer protection laws.

Lindsay’s legal and regulatory advice spans a range of topics, including climate, air, water, human rights, environmental justice, and product safety and stewardship. She has experience with a wide range of environmental and safety regimes, including the Federal Trade Commission Act, the Clean Air Act, the Consumer Product Safety Act, the Federal Motor Vehicle Safety Standards, and the Occupational Safety and Health Act. Lindsay works with companies of various sizes and across multiple sectors, including technology, energy, financial services, and consumer products.

Nira Pandya

Nira Pandya advises private and public companies on venture capital financings, mergers and acquisitions, joint ventures, strategic investments, and other corporate transactions. She also represents emerging companies in general corporate matters, including entity formation, corporate governance, and securities law compliance.

Andrew Longhi

Andrew Longhi is an associate in the firm’s Washington, DC office and a member of the Data Privacy and Cybersecurity and Technology and Communications Regulation Practice Groups.

Andrew advises clients on a broad range of privacy and cybersecurity issues, including compliance obligations, commercial transactions involving personal information and cybersecurity risk, and responses to regulatory inquiries.

Andrew is admitted to the Bar under DC App. R. 46-A (Emergency Examination Waiver); practice supervised by DC Bar members.