Danya Sherbini, MPP ’24, University of Chicago Harris School of Public Policy

Many parts of society are becoming increasingly technology-driven, and housing is no exception. A wave of new companies is using artificial intelligence and machine learning tools to modernize the housing industry. Yet the proliferation of algorithmic decision-making in housing poses a serious risk: encoding historical and systemic biases that further marginalize low-income individuals and people of color.

The growing use of technology in the housing industry is already affecting marginalized groups, from tenants to aspiring homeowners. Facebook’s ad algorithm used personal data to deploy targeted housing advertisements that discriminated against Black, Hispanic, and Asian users. Landlords increasingly rely on private equity-backed AI tenant-screening programs that, according to the Consumer Financial Protection Bureau, generate misleading and inaccurate information, raising the costs of and barriers to high-quality housing for people of color. Meanwhile, a study from the University of California, Berkeley showed that an AI mortgage system routinely charged Black and Latinx borrowers higher rates than their white counterparts for the same loans. In the public sector, the Los Angeles Homeless Services Authority has employed an algorithmic scoring system found to give lower priority scores to Black and Latinx people experiencing homelessness.

From exclusionary zoning practices to rejected mortgage applications, rental and homeownership processes have historically discriminated against Black Americans, the formerly incarcerated, low-income individuals, and other people of color. It’s tempting to think that data-driven tools will eradicate the human biases that exist in the housing sector, but the algorithms deciding who gets housing are often trained on data that reflects those very biases. A model trained on flawed historical decisions learns to reproduce them, generating inaccurate predictions. And in the context of housing, a wrong prediction can mean the difference between having a place to live and not.
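To make that mechanism concrete, the following is a minimal, purely illustrative sketch in Python. It uses synthetic data and hypothetical variable names (no real lending system works this simply): a model trained on historically biased approval decisions assigns lower approval odds to an equally qualified applicant from the disadvantaged group.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
# Demonstrates how a model trained on biased historical decisions can
# reproduce that bias for equally qualified applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of applicants with identical underlying creditworthiness.
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
credit = rng.normal(650, 50, n)     # same score distribution for both

# Biased historical labels: at the same credit score, group B was
# approved less often than group A.
logit = 0.02 * (credit - 650) - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on that history; group membership leaks in as a feature
# (in real data it often enters through proxies such as zip code).
X = np.column_stack([credit, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with the same credit score, different groups.
same_score = [[650, 0], [650, 1]]
probs = model.predict_proba(same_score)[:, 1]
print(f"P(approve | group A) = {probs[0]:.2f}")  # ~0.50
print(f"P(approve | group B) = {probs[1]:.2f}")  # ~0.18
# The model has learned the historical disparity: group B receives a
# much lower approval probability despite identical qualifications.
```

Nothing in the training step is malicious; the model simply optimizes for matching past decisions, so past discrimination becomes future policy.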

To combat this issue, some advocacy organizations have called for comprehensive data protections that safeguard sensitive information and ensure anonymity. There has also been a push to establish standards governing the development and use of artificial intelligence, including the creation of the first-ever AI Safety Institute Consortium. While these developments are a promising start, it will take a more targeted approach to define how these tools should and shouldn’t be used in the housing sector.

Housing is a basic necessity. Access to affordable, stable housing has positive effects on health, employment, and overall well-being. The current housing crisis disproportionately affects Black, Latinx, Indigenous, and low-income households: the same groups that are most affected by biased algorithmic decision-making tools. These tools promise to modernize slow and inefficient rental and homeownership processes, but in doing so, they’re making housing discrimination easier than ever before.