The emergence of generative AI has captured the attention of people around the world; its impacts are already rippling throughout our economy and society, at times with devastating effects. This attention has prompted policymakers, advocates, and technologists to adapt their thinking about what’s possible, and what we should do about it.
TechEquity’s mission is to ensure that the products and practices of the tech industry advance human flourishing rather than undermine it. Given the growing role of artificial intelligence (and other digital technologies) in defining the economic prospects of everyday people, and the scale of the technology’s potential impact, these tools are squarely in our focus across all of our programs and issue areas. We will work to ensure that proper guardrails are in place; that companies are accountable for implementing practices that ensure equity in the design, development, deployment, and oversight of these tools; and that on-the-ground civil and human rights organizations feel equipped to advocate for solutions that mitigate the impacts of these tools in the communities they serve.
The AI Policy Principles outline how we plan to go about that work.
People’s experience of technology can differ based on how much power and agency they have to weather the changes these emergent technologies may bring. It is therefore critically important that policy solutions address these power dynamics, in addition to the functionality of the tools themselves. Ensuring that AI enables human flourishing rather than undermining it will require clear guardrails, strong safety nets, and robust methods for the participation of impacted people in the design and deployment of technology.
Technology’s growth does not follow a linear, formulaic path. Technology is shaped by people, by companies, by governments, and by society. We have the opportunity to direct where the technology goes and to build power to ensure that it benefits everyone in our economy rather than leaving the most vulnerable behind.
Though our work on AI falls into two specific issue areas, housing and labor, we understand it to be interconnected with a wide range of other issues. We work in collaboration with partners across the economic equity, civil rights, labor, democracy, and privacy movements, supporting their efforts and staying in conversation at the intersection of our issues so that the whole is greater than the sum of our parts. This coalition must row together toward the ultimate goal of mitigating the harms of AI and maximizing its benefits for all members of society.
Because AI systems are complex, the voices of technologists tend to be elevated in conversations about how to implement effective guardrails. While we need people with skills in designing and developing AI in these conversations, the values and perspectives of those who understand the on-the-ground impacts of AI systems are equally important. Expertise in the technical development of AI should not be a gating criterion for participation in these important conversations, and policy discussions about AI should recognize that AI developers have as much to learn from the experiences of people impacted by the technology as they have to teach about it.
The principles below reflect our organizational approach to the intersection of these technologies and our economy, with an emphasis on our two focus areas: housing and labor.
Specifically, our principles are born out of our existing research on tenant screening algorithms; the intersections of privacy, civil rights, and technology in the housing sector; the impact of automated workplace management on contract workers; and the rise of ghost work in the tech industry. Our recommendations are also informed by the incredible work of our organizational partners and experts in the field. In particular, we would like to acknowledge the work of the UC Berkeley Labor Center’s Technology and Work Program, AI Now, Data & Society, the ACLU, the National Fair Housing Alliance, the Economic Security Project, the Ada Lovelace Institute, the Electronic Frontier Foundation, Upturn, and the Electronic Privacy Information Center (EPIC).
Below we outline three major policy principles and accompanying recommendations to ensure responsible AI design, development, deployment, maintenance, and monitoring. We believe that a collective approach centering people at each step of the AI lifecycle, paired with a series of thoughtful, intentional choices now, will not only prevent the potential harms these tools could inflict but also create the conditions under which this promising new technology can enable human flourishing.
Technology is most useful when it is designed based on real people’s needs. In that same spirit, the regulation of AI and digital technologies must take a human-centered approach that addresses current and potential harm, allows people to exercise their existing rights, and prioritizes accountability around AI tools as they are developed and deployed. These accountability structures should allow for meaningful, public control over the technology and its use in our communities, while at the same time allowing developers to experiment and innovate in ways that are guided by the input and expressed needs of the people who use and are impacted by the technology.
Policy proposals that reflect this principle should:
Policy frameworks should create strong protocols for the review of tools before their deployment and at regular intervals afterward. Evaluating the extent to which tools impact people’s civil and human rights should not be a voluntary activity but one required and enforced by regulators. Individuals should have access to information about how these tools affect their lives, but the onus of holding companies accountable should not fall solely on their shoulders. This will require transparency and disclosure on the part of AI developers, vendors, and deployers. Additionally, we must equip regulators and individuals with the capacity and resources they need to ensure that these tools do not threaten people’s safety or security, foster discrimination, or further systemic inequities in our economy.
Policy proposals that reflect this principle should:
Power imbalances in our economy enable exploitative applications of technology and AI; to effectively regulate the companies building and deploying these tools, those imbalances must be addressed. This means stronger antitrust regulation and enforcement; increased attention to the incentive structures around the investment capital that fuels the industry’s growth; stronger controls on companies’ ability to collect, store, and sell data; and a deeper understanding of the business models employed by the companies building the technology.
Policy proposals that reflect this principle should:
As outlined in our strategic plan, we are advancing three initiatives at the intersection of technology and economic equity across housing, labor, and the AI supply chain.
Sign on here to show your support for the equitable AI policy principles
"*" indicates required fields
Download the full AI Policy Principles here.