In the spring of 2021, Mary Louis, a Black woman in Massachusetts, found herself on the wrong side of an algorithm. Eager to move into a new apartment, Louis submitted her application and waited for approval. Instead, she received a rejection email saying that a “third-party service” had denied her tenancy. That service was SafeRent Solutions, a company using an algorithm to score rental applicants.
For Louis, the decision was baffling and deeply frustrating. She had 16 years of flawless rental history and even submitted references from previous landlords after her initial rejection. But the response was cold and final: “We do not accept appeals and cannot override the outcome of the Tenant Screening.”
“Everything is based on numbers,” Louis later said. “You don’t get the individual empathy from them. There is no beating the system. The system is always going to beat us.”
Her experience became the foundation of a groundbreaking class-action lawsuit against SafeRent, one of the first to challenge how algorithms can perpetuate discrimination.
The Settlement and Its Implications
This week, a federal judge approved a $2.2 million settlement in the case, which alleged that SafeRent’s algorithm discriminated against rental applicants based on race and income. Louis’ attorneys argued that the system failed to account for housing vouchers, a vital tool for many low-income renters. This omission disproportionately affected Black and Hispanic applicants, who are statistically more likely to rely on such assistance.
“Just because an algorithm or AI is not programmed to discriminate, the data an algorithm uses or weights could have the same effect as if you told it to discriminate intentionally,” said Christine Webber, one of the plaintiff’s attorneys.
The lawsuit also claimed that SafeRent’s heavy reliance on credit scores further disadvantaged minority applicants, as historical inequities have led to lower average credit scores among these groups.
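To make that point concrete, the sketch below is a minimal, hypothetical illustration in Python. It is not SafeRent's actual model, whose inputs and weights are not public; the weights, thresholds, and applicant figures are invented for illustration. It shows how a score that leans heavily on credit history and leaves voucher income out of the calculation can rank a voucher holder far below another applicant, even when both can cover the same rent in full.

```python
# Illustrative sketch only: NOT SafeRent's actual model. All weights,
# thresholds, and applicant figures below are hypothetical, chosen to show
# how excluding voucher income from a credit-heavy score can penalize
# applicants who can reliably pay the rent.

def screening_score(credit_score, monthly_income, voucher_amount, count_voucher=False):
    """Toy tenant-screening score: weighted credit component plus an income-to-rent cushion."""
    income = monthly_income + (voucher_amount if count_voucher else 0)
    # Hypothetical weighting: credit history dominates (80%), income adds up to 20 points.
    return 0.8 * (credit_score / 850) * 100 + 0.2 * min(income / 2000, 1.0) * 100

# Two hypothetical applicants, each able to cover a $2,000 monthly rent.
applicant_a = dict(credit_score=760, monthly_income=2500, voucher_amount=0)
applicant_b = dict(credit_score=580, monthly_income=600, voucher_amount=1900)

for name, applicant in [("A", applicant_a), ("B", applicant_b)]:
    ignored = screening_score(**applicant)                       # voucher not counted
    counted = screening_score(**applicant, count_voucher=True)   # voucher counted
    print(f"Applicant {name}: score ignoring voucher = {ignored:.1f}, "
          f"score counting voucher = {counted:.1f}")
```

In this toy setup, applicant B's ability to pay barely registers unless the voucher is counted, so a fixed approval cutoff screens out voucher holders disproportionately without any rule that mentions race or income.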
While SafeRent admitted no wrongdoing, the settlement requires the company to stop using its scoring feature in cases involving housing vouchers. Future systems must undergo third-party validation to ensure fairness and compliance with anti-discrimination standards.
SafeRent defended its practices in a statement, saying, “Litigation is time-consuming and expensive. While we continue to believe the SRS Scores comply with all applicable laws, we have chosen to settle to avoid the cost and distraction of prolonged litigation.”
A Broader Issue of AI Accountability
Louis’ case is part of a growing wave of legal challenges targeting algorithms and AI systems that make consequential decisions in areas like housing, employment, and healthcare. These systems, often viewed as neutral tools, can perpetuate systemic biases hidden within the data they are trained on.
The U.S. Department of Justice supported Louis’ case, arguing that tenant-screening algorithms play a critical role in determining access to housing and that the companies deploying them should be held accountable.
“Management companies and landlords need to know that they’re now on notice, that these systems that they are assuming are reliable and good are going to be challenged,” said Todd Kaplan, one of Louis’ attorneys.
This case is a wake-up call for regulators, who have so far struggled to keep pace with the rapid adoption of AI-driven systems. While some states have proposed aggressive regulations to address these issues, few have been enacted, leaving the courts as the primary avenue for holding companies accountable.
Personal Impact
For Louis, the settlement brings some validation, but it doesn’t undo the challenges she faced. After SafeRent’s algorithm rejected her application, her son found her an apartment through Facebook Marketplace. It was $200 more expensive and in a less desirable neighborhood, but it was a place to live.
“I’m not optimistic that I’m going to catch a break, but I have to keep on keeping, that’s it,” she said. “I have too many people who rely on me.”
Her story highlights the human cost of algorithmic decision-making. While AI can streamline processes and reduce human error, it can also strip away nuance and empathy, reducing people to mere data points.
Moving Forward
The settlement in Louis’ case represents a significant step toward greater accountability in the use of AI and algorithms. It sets a precedent for challenging systems that unfairly disadvantage marginalized groups and underscores the need for transparency and fairness in how these tools are designed and deployed.
As AI becomes increasingly integrated into everyday life, this case serves as a reminder of the importance of safeguarding equity and justice in a rapidly evolving digital landscape. Behind every algorithm are real people like Mary Louis, whose lives can be profoundly affected by decisions they have no power to appeal.