Digital health technologies, including algorithms for use in health care, are being developed to aid healthcare providers and serve patients, from administrative tasks and workflow management to diagnostics and decision support. The use of artificial intelligence (“AI”) and machine learning algorithms in health care holds great promise, with the potential to streamline care and improve patient outcomes. At the same time, algorithms can introduce bias if they are developed and trained on historical datasets that harbor existing prejudices. Both state and federal governments have taken steps to address the potential for racial and ethnic disparities in the use of algorithms by healthcare facilities, demonstrating that this remains a top priority as new technologies are deployed in health care.
California Attorney General Rob Bonta recently sent letters to 30 hospital CEOs across the state requesting information about how healthcare facilities and other providers are identifying and addressing racial and ethnic disparities in software they use to help make decisions about patient care or hospital administration. The press release stressed the importance of identifying and combatting racial health disparities in healthcare algorithms. The AG’s letter seeks information including: a list of all decision-making tools or algorithms the hospitals use for clinical decision support, health management, operational optimization, or payment management; the purposes for which these tools are currently used and how they inform decisions; and the names of the persons responsible for ensuring they do not have a disparate impact based on race. Responses are due to the AG by October 15.
The federal government also has made disparities in health care a top priority. For example, the Department of Health and Human Services (HHS) recently issued a proposed rule regarding nondiscrimination in health programs and activities. Among other proposals aimed at combatting discrimination, HHS proposed provisions related to nondiscrimination in the use of clinical algorithms in healthcare decision-making and in telehealth services. Proposed § 92.210 states that “a covered entity must not discriminate against any individual on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decision-making.” The proposed rule notes that a covered entity would not be liable for clinical algorithms it did not develop, but HHS proposes to impose liability for any decisions made in reliance on clinical algorithms if those decisions rest upon or result in discrimination. The proposed rule also notes that the Department “believes it is critical to address this issue explicitly in this rulemaking given recent research demonstrating the prevalence of clinical algorithms that may result in discrimination.” Comments are due to HHS by October 3. HHS specifically seeks input on whether the provision should cover additional forms of automated decision-making tools beyond clinical algorithms; whether the provision should identify actions covered entities should take to mitigate discriminatory outcomes; and recommendations on how to identify and mitigate discrimination resulting from the use of clinical algorithms.
These state and federal actions, and the responses they generate, could inform the ongoing dialogue about how to advance the use of digital health technologies while simultaneously making progress on inequities in health care.