To the California Privacy Protection Agency: Workers and renters need transparency in the tech making decisions about them
AI is currently making decisions that impact every facet of people’s lives, from workplaces to hospitals to homes. We know that Automated Decision-Making Technologies (ADMTs), though already widely deployed across sectors, can be rife with bias and hard-coded discrimination.
Moreover, this tech often operates as a black box with no way for impacted people to understand, change, or protest the algorithm’s results, which have massive repercussions on their lives.
It’s simple: the tech that is automating human decisions needs to be guided by the humans about whom these decisions are being made.
That’s why earlier this month, our SVP of Labor Programs, Tim Newman, submitted comments to the California Privacy Protection Agency on its draft rulemaking for ADMTs. Read their comment below:
Tim Newman’s Comment
Good afternoon. My name is Tim Newman, and I am sharing these comments on behalf of TechEquity. We have conducted participatory research with contract workers and surveyed renters in California about the impact of Automated Decision-Making Technologies (ADMTs). The use of these technologies by employers and landlords is one of the most important issues already shaping the lives of California’s workers and renters, with profound equity implications.
The workers we spoke to reported how ADMTs control their workload, performance evaluations, and, at times, their pay. Workers described how their work was often reviewed and assessed by an algorithmic or automated process that sometimes denied their submissions, deemed their work insufficient or low quality, and set unsustainable productivity quotas based on information unknown to them. Workers experienced physical and mental stress as they struggled with a lack of transparency in the factors determining their working conditions and livelihoods throughout the entire employment process.
We found a similar lack of transparency in the use of tenant screening algorithms. While ADMTs are used to make recommendations to landlords about whether to approve or deny applicants, our California tenant survey of 1,100 respondents found that renters are largely unaware of how these decisions are made, or even whether the technology was used at all. Landlords overseeing small portfolios or renting to lower-income tenants are more likely to follow screening recommendations without additional due diligence, highlighting the increased vulnerability of under-protected renters. Black and Latinx renters in our survey were nearly twice as likely as white respondents to have their applications denied.
These findings show that ADMTs, trained on massive troves of personal data, are likely to compound and perpetuate biases, and often lack the context required by law to ensure equitable treatment. We spoke to vulnerable tenants and contract workers who have little insight into these decision-making processes and few options to challenge their outcomes.
These examples underscore three key principles for rulemaking:
- Full transparency, explainability, and disclosure are necessary given the opaque nature of these systems and their ability to make critical decisions.
- Impact assessments should be conducted before and throughout the use of these technologies to determine likely harms and identify measures to mitigate them.
- Workers and renters should receive an explanation of critical decisions, including what personal data was collected about them and how it was used, so that they have the information to enforce existing rights and to identify when a decision made by an ADMT is inaccurate, discriminatory, or otherwise harmful.
We believe that through this rulemaking, the CPPA has a historic opportunity to enact a clear, common-sense foundation for the use of ADMTs and to ensure that workers and renters have the appropriate information, rights, and protections.
Thank you to the CPPA director, staff, and board for your work on these important regulations and the opportunity to provide comments today.
–
Read our Guiding Policy Principles for Responsible AI to learn more about how we can build guardrails around the industry’s fastest-growing technology.