How do we tackle AI?
When it comes to AI, most people think of job automation and the ensuing threat of job loss. We’ve seen it a lot lately, with companies like Duolingo cutting headcount in favor of AI. However, AI has also crept into many other parts of daily life.
AI-backed tools are impacting where we live, the healthcare treatments we receive, and how we’re monitored and tracked as we go about our days. We can already see how a lack of oversight and policy around AI is exacerbating inequities worldwide.
But this doesn’t have to be our future. That’s why we developed the Guiding Policy Principles for Responsible AI and brought them to Sacramento’s policymakers, staffers, and community leaders on March 6, 2024.
How to Turn the Tide
We wrote these principles because we believe that the outcomes of technological advancement aren’t set in stone. Ultimately, much of technology’s impact is determined by the context it’s introduced into. When technology enters an unequal society (like ours), inequities don’t disappear; instead, emerging technologies can entrench them even more deeply.
To turn the tide, it’s going to take all of us. Tackling the scale, speed, and adoption of emerging technology takes a connected, coordinated movement of advocates rowing together toward a more equitable future. It also takes everyday people, not just deep technical experts, engaging fully in conversations about how technology impacts them.
Our Policy Principles
We developed three guiding principles for AI policy that reflect our approach, with an emphasis on our two key points of intersection with tech: housing and labor.
1. People who are impacted by AI must have agency to shape the technology that dictates their access to critical needs like employment, housing, and healthcare.
After all, technology is most useful when it’s designed around real people’s needs. We need to make sure that both the design and regulation of AI and digital technologies take a human-centered approach that addresses current and potential harms.
2. It should be on the developers, vendors, and deployers of AI to demonstrate that their tools do not create harm—and regulators, as well as private citizens, should be empowered to hold them accountable.
This isn’t the time for “innocent until proven guilty,” especially when millions of people’s safety and security are at stake. Evaluating the potential harms of a new technology shouldn’t be optional, and we need to invest in growing enforcement capacity to make sure it isn’t.
3. Concentrated power and information asymmetries must be addressed in order to effectively regulate the technology.
Right now, the dominant business model in the tech industry relies heavily on collecting, storing, and selling our data. It relies on power imbalances, on monopolies. These elements inevitably lead to harm, inequities, and a lack of accountability.
Where Do We Go From Here?
There is a better AI future that we have the chance to build. All of these principles can be built into policy—and we’re helping make that happen at TechEquity.
We are researching and providing public policy recommendations to address algorithmic bias in automated tenant screening tools as part of our Tech, Bias, and Housing Initiative. Through our Contract Worker Disparity Project, we are partnering with community organizations and global advocacy groups to shine a light on the contract workers who make up the predominant workforce training, developing, and moderating AI models.
And through the launch of our Tech, Bias, and Labor Initiative in 2024, we will investigate technology that automates decisions or makes predictions at key stages of the workplace cycle: hiring, performance management, productivity tracking, discipline, and firing.
Check out our AI Policy Principles to learn more, and sign on to stay up to date with our work around AI.