As AI seeps into every aspect of daily life, we’re seeing how algorithms embedded in critical decisions about people’s lives leave the door open to AI-powered discrimination.
Algorithms decide who gets access to housing, healthcare, and financial aid every day. Algorithms dictate whether you get an interview for a job, what your productivity rating at work will be, whether or not your health insurer denies your medical claims, and more. Left unchecked, these algorithms can produce biased results, leaving everyday people vulnerable to automated discrimination, often without them even knowing it.
That’s why Assemblymember Rebecca Bauer-Kahan authored AB 1018, the Automated Decisions Safety Act.
AB 1018 ensures that Automated Decision Systems (ADS) are vetted, that everyday people understand how the decisions that affect them are made, and that people know what to do if they suspect discrimination. The Automated Decisions Safety Act is sponsored by SEIU California and TechEquity.
Want to stay in the loop on how the Automated Decisions Safety Act makes its way through the California Legislature? Sign up for our newsletter to get updates on this campaign, as well as other key initiatives to address tech’s impact on our lives.
Sign up for our mailing list to get weekly updates on how you can get plugged in.
"*" indicates required fields
AB 1018 is a common-sense approach to reducing the harm that ADS can cause. For the nitty-gritty details, you can check out the full bill language here.
The Automated Decisions Safety Act does four main things:
Requires the people who make and use these tools to test them before they are sold or used on the public, making sure they do not cause harm and that they comply with our existing rights to non-discrimination. It also ensures that these tests are verified by an independent third party.
Provides people the information they need to understand where these tools are showing up in their lives and how they’ll be used to determine their housing, healthcare, and job outcomes.
Provides people an explanation of what the tool did, what personal information it used about them to make the decision, and what role the tool played in making the decision.
Through this bill, people will have the right to opt out of the use of an ADS tool in a critical decision about them; they will be able to correct information that the tool used to make the decision if it is inaccurate; and they will have the right to appeal the decision.
“People deserve transparency into the tools that are making decisions about their lives and the opportunity to change these decisions if they’re wrong.”
Catherine Bracy, Founder and CEO of TechEquity
VI-SPDAT, an ADS deployed in Los Angeles to determine which unhoused people have the highest need for housing, has been found to have racial bias. The tool favored white applicants for affordable housing, scoring 67% of unhoused white young adults into the highest-priority group, compared with 56% of Latino young adults and 46% of Black young adults.
In Pennsylvania, the Allegheny Family Screening Tool provides recommendations to social workers about which families should be investigated for neglect. The ADS flagged families with low incomes and drew on data such as race, zip code, disability, and use of public welfare benefits. It was found to have racial bias as well as bias against families with disabled parents or children. The Department of Justice is currently investigating the use of the Allegheny Family Screening Tool, but child welfare agencies in at least 26 states and Washington, D.C., have considered using algorithmic tools, and jurisdictions in at least 11 states have deployed them, according to the ACLU.
In Florida, judges use the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool to predict whether defendants should be detained or released on bail pending trial. The ADS was found to be biased against Black Americans, according to a report from ProPublica. COMPAS assigns each defendant a risk score for their likelihood of committing a future offense, relying on the voluminous data available on arrest records, defendant demographics, and other variables. Compared with white defendants who were equally likely to re-offend, Black defendants were more often assigned higher risk scores, resulting in longer periods of detention while awaiting trial.
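To make the kind of disparity ProPublica measured concrete, here is a minimal illustrative sketch of one bias check an independent auditor might run: comparing how often people who did not go on to re-offend were nevertheless labeled high risk, broken out by race. The records, field names, and threshold below are hypothetical; this is not ProPublica’s analysis or any methodology prescribed by AB 1018.

```python
# Illustrative sketch only: hypothetical records and threshold, not real COMPAS data.
from collections import defaultdict

# Each record: (race, risk score on a 1-10 scale, whether the person re-offended)
records = [
    ("Black", 8, False), ("Black", 7, True),  ("Black", 9, False),
    ("White", 3, False), ("White", 8, True),  ("White", 4, False),
    ("Black", 6, False), ("White", 2, False), ("Black", 5, True),
    ("White", 7, False),
]

HIGH_RISK_THRESHOLD = 7  # scores at or above this count as "high risk"

# For each group, count people who did NOT re-offend but were still
# labeled high risk (a false positive).
false_positives = defaultdict(int)
non_reoffenders = defaultdict(int)

for race, score, re_offended in records:
    if not re_offended:
        non_reoffenders[race] += 1
        if score >= HIGH_RISK_THRESHOLD:
            false_positives[race] += 1

for race in sorted(non_reoffenders):
    rate = false_positives[race] / non_reoffenders[race]
    print(f"{race}: {rate:.0%} of non-re-offenders labeled high risk")
```

A large gap between those rates is exactly the kind of harm that the pre-deployment testing required by AB 1018 is designed to surface before a tool is used on the public.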
Have you been on the receiving end of an AI-driven decision that you believe was biased or discriminatory? We want to hear your story.
"*" indicates required fields