The year 2020 may be remembered for its pandemic and presidential election, but it also marked a turning point in efforts to regulate artificial intelligence (AI) technologies and the systems that embody them. Lawmakers in two states joined Illinois in enacting laws directed at AI-generated biometric data, and federal lawmakers introduced their own measure. In January, the White House began exploring frameworks for governing AI. Still, the AI legal landscape remains uncertain, especially for stakeholders who develop and use AI systems and want more predictability so they can properly manage legal liability risks. In this post, I explore a time frame for possible new regulations.
In normal times, making predictions about the legal landscape for highly fluid and disruptive technologies like AI is difficult. But at the beginning of 2020, the White House took a big step toward possible new regulations when it issued its “Guidance for Regulation of Artificial Intelligence Applications” memo to the heads of federal government agencies, seeking agency and public input on a general regulatory framework for AI and a set of principles applicable to agency use of AI technologies. That seemed like a positive development at the time, though it came more than three years after the Obama Administration’s October 2016 plan, Preparing for the Future of Artificial Intelligence.
A month later, in February 2020, the European Union (EU) Commission issued its own AI regulatory framework document for EU member countries. If the EU implements its framework, U.S. lawmakers and regulators might mirror aspects of it in this country, much as some states have passed legislation mimicking aspects of the EU’s General Data Protection Regulation (GDPR) (click here to learn more about the EU’s plan and how it could affect U.S. companies).
By August 2020, the U.S. National Institute of Standards and Technology (NIST) had begun publicly working with stakeholders inside and outside the government to develop technical and non-technical standards for AI technologies. Although that effort seems to have slowed due to the pandemic, once standards are issued, federal agencies will be better positioned to promulgate specific rules that could apply to private AI businesses.
In November 2020, the Office of Management and Budget (OMB) issued its long-awaited guidance memorandum, “Guidance for Regulation of Artificial Intelligence Applications,” finalizing the effort the White House began at the start of the year to articulate principles and goals for an AI governance strategy.
Also, the number of legislative bills proposed by members of Congress mentioning AI (and machine learning specifically) has been increasing, signaling Congress’s intent to act on constituent concerns about AI technologies.
Given all that, I estimate that 2021 will be the year for significant new federal legislation or regulations specifically targeting AI technologies. My confidence in this prediction would be higher with a Democratic-controlled Congress, given the outgoing Trump administration’s expressed preference for minimizing regulations that might stymie American innovation and prevent the U.S. from maintaining (or achieving) AI dominance. Then again, the incoming Biden administration has not yet detailed what it might do about AI (a point this New York Times article by David McCabe makes clear), so there is no telling what may happen under a new White House.
That said, change seems needed on the regulatory front. AI technologies continue to disrupt sectors of the U.S. economy at unprecedented levels, including highly sensitive ones like banking, healthcare, transportation, manufacturing, and legal services. While AI has had many positive impacts on society, the speed of its adoption has also led to significant problems. Calls for regulation have grown out of concerns over the surveillance-driven collection and use of personal data and biometrics; the continuing presence of bias in automated decision-making systems (disproportionately impacting minorities, as I discussed here); the inability of even the best-staffed AI companies to clearly explain how their AI systems make decisions or take actions; the consolidation of big data by a few tech companies; and the rise of nefarious uses of AI such as fake videos, cyber-intrusion, misinformation bots, and certain lethal applications.
My prediction does not mean there are no existing federal laws and regulations that apply to AI technologies. Private commercial businesses and individuals who make, sell, and/or use AI-based products and services may be subject to, or protected by, one or more of the following laws or regulations that broadly affect hardware/software systems and thus may directly or indirectly apply to an AI system and related activities: consumer protection laws; data and biometric privacy laws; civil rights laws; intellectual property laws (including patent, trademark, and right of publicity laws); labor and employment laws; export control regulations; securities regulations; autonomous systems rules; Federal Acquisition Regulations (FARs); Federal Aviation Regulations (also FARs); and safety-related technical requirements (automotive safety, for example).
So, while we wait to see what approach the Biden administration and the 117th Congress will take on AI, I’ll be exploring in future posts how lawyers in the above practice areas can help AI technology clients navigate the current legal landscape and plan for the future.