Recent FTC Guidance on the Use of Artificial Intelligence and Algorithms in the Age of COVID-19

By Monty Cooper, Jodi G. Daniel, Kate M. Growley, CIPP/G, CIPP/US & Natalie O. Ludaway on June 2, 2020

On April 8, 2020, the Federal Trade Commission (FTC) published a blog post titled “Using Artificial Intelligence and Algorithms,” which offers important lessons about the use of AI and algorithms in automated decision-making.

The post begins by noting that headlines today tout rapid improvements in AI technology and that more advanced AI has enormous potential to improve welfare and productivity. But more sophisticated AI also presents risks, such as the potential for unfair or discriminatory outcomes. This tension between benefits and risks is a particular concern in “Health AI,” and it will continue as AI technologies are deployed to tackle the current COVID-19 crisis.

The FTC post reminds companies that, while the sophistication of AI is new, automated decision-making is not, and the FTC has a long history of dealing with the challenges presented by the use of data and algorithms to make decisions about consumers.

Based on prior FTC enforcement actions and other guidance, the FTC post outlines five principles that companies should follow to use AI and algorithms while adequately managing consumer-protection risks. According to the post, and as expanded upon below, companies should (1) be transparent with consumers about their interaction with AI tools; (2) clearly explain decisions that result from the AI; (3) ensure that decisions are fair; (4) ensure that the data and models being used are robust and empirically sound; and (5) hold themselves accountable for compliance, ethics, fairness, and nondiscrimination.

It should be noted that the FTC and state attorneys general already look at whether companies give clear and conspicuous disclosures to consumers when evaluating regulatory compliance, and some state attorneys general are already examining issues related to AI. This FTC guidance places even further emphasis on transparency. The guidance’s principles – especially those related to ensuring data accuracy and preventing discriminatory outcomes – will also be important as companies deploy AI to respond to COVID-19. For example, a recent article in The Hill highlighted how companies are working with hospitals to establish patient-monitoring programs that use AI-powered wearables, like smart shirts, that continuously measure patients’ biometrics (e.g., cardiac activity) so that hospital staff can better monitor patients and possibly limit the number of required visits to infected patients. Although these wearables are promising, companies must make sure that the devices are effective, that they satisfy the principles outlined in this guidance, and that they address patient-privacy and HIPAA concerns. Thus, companies will want to pay attention to the FTC’s actions as these AI technologies are created and improved.

Summary of the FTC’s Guidance

(1) Be transparent: Companies should be clear about how they are using AI to interact with customers. For instance, if a company uses an AI chatbot to interact with consumers, the company should make clear to consumers that they are interacting with a chatbot, not a person. If the company’s use of such a technology misleads consumers, the company could risk FTC enforcement. Companies must also be careful about how they collect sensitive data that will be used in their algorithms: for example, secretly collecting audio or visual data to feed an algorithm could also give rise to an FTC action.

Further, using AI-modeled information may lead to obligations under the Fair Credit Reporting Act (FCRA). That is, if a company makes automated decisions based on information from an AI-enabled third-party vendor (e.g., denies someone an apartment based on a credit report generated with an AI model), the company may need to provide the consumer with an “adverse action” notice, which explains the consumer’s right to review the credit report and correct any mistakes.

(2) Clearly explain decisions: Companies using AI tools that deny customers access to credit should explain the reasons for the denial. It may not be good enough for companies to give general reasons for the rejection (e.g., simply telling the customer “you don’t meet our criteria”); instead, companies should be specific (e.g., explaining that “you have an insufficient number of credit references”). As a result, companies using AI should know what data is used in their models and how that data is used to arrive at a decision. And if a company changes the terms of a credit agreement based on an automated tool (e.g., reducing a consumer’s credit limit based on his or her purchasing habits), the company should tell the consumer that the terms have changed. Failing to do so can lead to enforcement.
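
By way of illustration only, the following Python sketch shows how a lender using a simple linear scoring model might surface the specific factors that most hurt an applicant’s score, rather than issuing a generic rejection. Every feature name, weight, and threshold here is hypothetical, and real adverse-action reason codes involve far more legal and statistical nuance.

```python
# Illustrative sketch only -- a toy linear credit-scoring model whose
# denial decisions can be explained with specific reasons instead of a
# generic "you don't meet our criteria." All feature names, weights,
# the cutoff, and the baseline profile are hypothetical.

WEIGHTS = {
    "num_credit_references": 4.0,
    "years_of_history": 2.5,
    "recent_delinquencies": -6.0,
}
APPROVAL_CUTOFF = 50.0

def score(applicant):
    """Linear score: weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def denial_reasons(applicant, baseline, top_n=2):
    """Rank the features that pushed this applicant's score furthest
    below a baseline profile (e.g., a typical approved applicant), so
    an adverse-action notice can cite specific factors."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - baseline[f]) for f in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [f for f in worst if contributions[f] < 0]

applicant = {"num_credit_references": 1, "years_of_history": 2,
             "recent_delinquencies": 3}
baseline = {"num_credit_references": 4, "years_of_history": 8,
            "recent_delinquencies": 0}

if score(applicant) < APPROVAL_CUTOFF:
    # Prints: ['recent_delinquencies', 'years_of_history']
    print("Denied; principal reasons:", denial_reasons(applicant, baseline))
```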

The FTC could potentially apply this guidance to Health AI as well. For example, as mentioned above, AI-powered wearables would allow clinicians to make better decisions regarding which COVID-19 patients to visit or not visit. This FTC guidance suggests that health-care providers utilizing these wearables must be able to explain why the devices indicated that one patient should be seen but not another. Further, these devices could be subject to U.S. Food and Drug Administration (FDA) enforcement. Thus, enabling the user to independently validate the recommendations made by these devices is an important factor in considering whether FDA oversight is required as well.

(3) Ensure that decisions are fair: Companies should make sure that their algorithms do not result in discrimination against protected classes. For example, the FTC enforces civil-rights laws like the Equal Credit Opportunity Act (ECOA), which prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance. Thus, if a company makes a credit decision based on consumers’ zip codes, resulting in a “disparate impact” on particular groups, the FTC may challenge that practice under the ECOA. Further, when evaluating an algorithm for illegal discrimination, the FTC will analyze the AI tool’s inputs (e.g., whether the model includes ethnically based factors) and outputs (e.g., whether the model resulted in discrimination on a prohibited basis). Consequently, companies should rigorously test their algorithms, both before using them and periodically afterwards, to ensure that their AI tools do not discriminate against people.
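
To illustrate what such testing might look like in practice, the sketch below applies one common screening heuristic – the “four-fifths” ratio – to a binary approval model’s outputs. This is an assumption-laden toy example, not a test the FTC guidance prescribes:

```python
# Illustrative sketch only -- a minimal disparate-impact screen for a
# binary approval model, using the "four-fifths" ratio as a screening
# heuristic. The heuristic and threshold are common in practice but
# are not a test prescribed by the FTC or the ECOA.

from collections import defaultdict

def approval_rates(groups, decisions):
    """Approval rate per group, where each decision is 0 (deny) or 1."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        approvals[group] += decision
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(groups, decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    highest group's rate and therefore warrants closer review."""
    rates = approval_rates(groups, decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: group A is approved 60% of the time and group B 40%, a
# ratio of about 0.67 -- below 0.8, so group B is flagged for review.
groups = ["A"] * 5 + ["B"] * 5
decisions = [1, 1, 1, 0, 0] + [1, 1, 0, 0, 0]
print(disparate_impact_flags(groups, decisions))
```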

(4) Ensure that the data and models being used are robust and empirically sound: As previewed above, companies that compile and sell consumer information used for credit, employment, or insurance decisions may be subject to the FCRA. Under the FCRA, companies have an obligation to implement reasonable procedures to ensure the accuracy of consumer reports and to provide consumers with access to their own information.

In the health-care context, there could be significant liability risk if algorithms relying on inaccurate data lead to improper health-care decision-making. For example, Stanford Health researchers focused on AI-assisted in-home elder care are designing AI technology that could potentially be used in patients’ homes to monitor COVID-19 symptoms. Researchers are investigating the use of devices that collect data (e.g., a patient’s body temperature or mobility) that can be analyzed to monitor up to seventeen activities of clinical relevance, including eating, sleeping, fluid intake, and immobility. Clinicians can then review this information in order to make decisions to help patients. While this research is promising, companies and clinicians that eventually create and use this technology must make sure that the technology’s models are sound and patients’ data are accurate to ensure that the best health-care decisions are made.

To ensure accuracy and soundness, the FTC advises that AI models must be validated and revalidated – using accepted and appropriate statistical principles and methodology – to confirm that they work as intended.
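
One hypothetical way to operationalize that revalidation cycle is sketched below: re-score the deployed model against fresh, labeled outcomes and flag it when performance drifts below the originally validated baseline. The metric and tolerance are placeholders, not anything the FTC specifies:

```python
# Illustrative sketch only -- periodic revalidation of a deployed
# model against fresh, labeled outcomes. The accuracy metric and the
# 5% degradation tolerance are hypothetical placeholders.

def accuracy(model, rows, labels):
    """Fraction of fresh labeled rows the model classifies correctly."""
    predictions = [model(row) for row in rows]
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def revalidate(model, baseline_accuracy, fresh_rows, fresh_labels,
               tolerance=0.05):
    """Signal whether current performance has drifted materially below
    the accuracy established when the model was first validated."""
    current = accuracy(model, fresh_rows, fresh_labels)
    # In practice a failed revalidation should trigger review, rollback,
    # and documentation -- not just a boolean.
    return current >= baseline_accuracy - tolerance, current

# Example: a toy model thresholding a single score.
model = lambda row: 1 if row["score"] > 0.5 else 0
rows = [{"score": 0.9}, {"score": 0.2}, {"score": 0.7}, {"score": 0.4}]
labels = [1, 0, 0, 0]
print(revalidate(model, baseline_accuracy=0.95,
                 fresh_rows=rows, fresh_labels=labels))  # (False, 0.75)
```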

(5) Be accountable for compliance, ethics, fairness, and nondiscrimination: Companies should also hold themselves accountable to be compliant, ethical, fair, and nondiscriminatory when analyzing large amounts of data. To do this, algorithm operators should ask four questions:

  • How representative is our data set? (See the illustrative sketch following this list.)
  • Does our data model account for biases?
  • How accurate are our predictions based on big data?
  • Does our reliance on big data raise ethical or fairness concerns?
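
As a purely illustrative starting point for the first question, the sketch below compares each group’s share of a training sample against an assumed reference population distribution. The group labels, reference shares, and 20% relative tolerance are hypothetical:

```python
# Illustrative sketch only -- a crude representativeness check that
# compares each group's share of the training sample against a
# reference population distribution. Group labels, reference shares,
# and the 20% relative tolerance are hypothetical.

def representativeness_gaps(sample_counts, population_shares,
                            rel_tolerance=0.20):
    """Return groups whose sample share deviates from the reference
    population share by more than `rel_tolerance` (relative)."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = sample_counts.get(group, 0) / total
        if abs(observed - expected) > rel_tolerance * expected:
            gaps[group] = {"observed": observed, "expected": expected}
    return gaps

# Example: the "65+" group is 6% of the sample but 16% of the
# reference population, so it is flagged as underrepresented.
print(representativeness_gaps(
    {"18-34": 440, "35-64": 500, "65+": 60},
    {"18-34": 0.40, "35-64": 0.44, "65+": 0.16},
))
```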

Further, companies that develop AI to sell to other businesses should ensure that appropriate controls are put in place in order to prevent the misuse and abuse of sensitive data. Finally, companies should consider using third-party experts as objective observers to ensure that their AI tools do not unintentionally discriminate against classes of people.

Conclusion

This FTC guidance is significant because it is broad and will impact a number of legal areas (e.g., consumer protection and privacy) as AI continues to be used in so many different kinds of products – from credit-reporting services to online-retail sites to health-care products. And as mentioned above, since more sophisticated algorithms are being used in Health AI to address COVID-19, the FTC’s guidance will take on even greater importance. Thus, companies should take heed of these principles and pay attention to how the agency applies them in the years ahead.

Posted in: Corporate & Commercial
Blog: Retail & Consumer Products Law Observer
Organization: Crowell & Moring LLP
