What exactly is Artificial Intelligence? And other questions, answered.
Despite concerns around artificial intelligence (AI)—both material and existential—the hype around AI shows no signs of slowing down. ChatGPT took the world by storm in 2023. OpenAI and Meta say that they’re ready to release AI models capable of “reasoning.” Intel is throwing down against Nvidia in a race to dominate the AI chip sphere.
As AI increasingly touches every aspect of daily life, we’re working to keep California legislators in the know about recent developments and their consequences so that policy can keep up. But we all need to be informed to ensure an equitable AI economy.
Here are our answers to some questions you might have about AI.
What is AI—according to TechEquity?
“AI” is often used as a catch-all for a variety of existing and emerging technologies—applied very broadly and not always correctly.
According to the Organisation for Economic Co-operation and Development (OECD), “artificial intelligence means a machine-based system that can, for explicit or implicit objectives, infer, from the input it receives, how to generate outputs such as predictions, recommendations, or decisions that can influence real or virtual environments.” Or, as our Chief Programs Officer Samantha Gordon puts it, AI includes just about anything that has to do with machine learning and algorithms.
For the purposes of our work, we sometimes use ‘AI system’ as an umbrella term encompassing an array of technologies like:
- algorithms: a procedure used for solving a problem or performing a computation
- generative AI: a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music
- large language models (LLMs): a type of AI that has been trained on vast amounts of text to understand existing content and generate original content
- machine learning: techniques that let a machine learn patterns from data and improve at a task without being explicitly programmed for it
These technologies—alone or in combination—are paired with hardware to produce autonomously operating machines like self-driving vehicles and robots.
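To make those terms concrete, here’s a tiny, invented sketch (in Python, not drawn from any real product) contrasting an algorithm whose rules a programmer writes by hand with a machine-learning approach that infers its rules from example data:

```python
from collections import Counter

# A hand-written algorithm: a human spells out every rule in advance.
def is_spam_rule_based(message: str) -> bool:
    """Flag a message as spam if it contains any hard-coded phrase."""
    phrases = {"free money", "act now", "winner"}
    return any(p in message.lower() for p in phrases)

# Machine learning: the rules are inferred from labeled examples instead.
# (Toy training data, purely illustrative.)
examples = [
    ("claim your free money now", 1),   # 1 = spam
    ("winner winner act now", 1),
    ("lunch at noon tomorrow", 0),      # 0 = not spam
    ("see you at the meeting", 0),
]

# A bare-bones "training" step: count how often each word shows up
# in spam versus non-spam messages.
spam_words, ham_words = Counter(), Counter()
for text, label in examples:
    (spam_words if label else ham_words).update(text.lower().split())

def is_spam_learned(message: str) -> bool:
    """Score a message using the word counts learned from the examples."""
    words = message.lower().split()
    return sum(spam_words[w] for w in words) > sum(ham_words[w] for w in words)

print(is_spam_rule_based("You are a WINNER, act now"))  # True
print(is_spam_learned("free money if you act now"))     # True
```

Real machine-learning systems use far more data and far more sophisticated statistics, but the core idea is the same: the behavior comes from patterns in the data rather than from hand-written rules.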
How does AI work?
Now that we have some shared vocabulary, let’s back up a bit. AI works by combining large sets of data with processing algorithms that can run through many tasks extremely quickly.
These data sets are different depending on the aim of the AI tool. Chatbots might be fed public conversations on Reddit, for instance. With companies making billions of dollars a year harvesting, selling, and using data, your data has likely been fed into the algorithms behind AI.
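To see what “feeding” data to an algorithm actually looks like, here is a heavily simplified, invented sketch: a toy model that “learns” from a few sentences by counting which word tends to follow which, then generates new text from those counts. (Real LLMs use neural networks trained on billions of documents, but the patterns-from-data idea is the same.)

```python
import random
from collections import Counter, defaultdict

# Toy "training data": a stand-in for the public conversations a real
# chatbot would be trained on (real systems ingest vastly more text).
corpus = (
    "the rent is too high . the rent keeps going up . "
    "workers keep the system running . workers deserve a voice ."
)

# "Training": count which word tends to follow which. This bigram model
# is a drastically simplified stand-in for how LLMs learn patterns.
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        out.append(random.choices(candidates, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the rent keeps going up . the rent is"
```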
Algorithms, though, can’t feed themselves. Behind the veneer of automation is the reality that thousands of people around the world are working to deliver words, images, videos, and sounds into the mouth of the machine. This can be brutal work, and the people doing it usually earn very little.
Many AI companies use third-party staffing agencies to sidestep responsibility for these workers. Then these same AI tools are turned against the very workers who code and feed the algorithms.
What are some of the harms being caused by AI?
AI is being used in a variety of ways—from supporting medical diagnosis to creating disinformation. Within that spectrum, you’ll find applications that fuel creativity and efficiency alongside applications that cause harm, with devastating effects—effects we’re already seeing across our issue areas: housing and labor.
In the summer of 2023, we sent a letter to the White House about the harm of AI-backed employee-tracking software systems, centering on the experience of contract workers who are especially vulnerable.
These tools aren’t used exclusively on the contract workforce, though. Productivity management systems dehumanize warehouse workers and push them to the edge of their physical capabilities. Employers can monitor workers’ off-duty activity with social listening software—and even feed that data into an algorithm to predict whether a job candidate will become a whistleblower.
The Writers Guild of America went on strike, in part, because of disputes over AI. While they won their battle, this is only the beginning.
AI isn’t just at work; it’s also at home—determining whether or not people even have access to a home. For instance, automated mortgage approval and tenant selection software—fed the data of our racist historical record—is disproportionately denying Black and brown people access to housing.
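A deliberately simplified, invented sketch shows how that happens: if a model is trained to reproduce historical decisions, and those decisions were discriminatory, the bias becomes the rule. (Real underwriting and screening models are far more complex, but the failure mode is the same.)

```python
from collections import defaultdict

# Invented records for illustration: (income, lived_in_redlined_area, approved).
# The historical pattern is biased: applicants from redlined neighborhoods
# were denied regardless of income.
history = [
    (80, 0, 1), (60, 0, 1), (55, 0, 1),
    (90, 1, 0), (80, 1, 0), (60, 1, 0),
]

# "Training" by memorizing the past: compute the historical approval
# rate for each group and reuse it as the decision rule.
outcomes = defaultdict(list)
for income, redlined, approved in history:
    outcomes[redlined].append(approved)

approval_rate = {group: sum(v) / len(v) for group, v in outcomes.items()}
print(approval_rate)  # {0: 1.0, 1: 0.0}

def approve(income: int, redlined: int) -> bool:
    """The 'model' reproduces the historical pattern it was fed."""
    return approval_rate[redlined] > 0.5

print(approve(income=90, redlined=1))  # False: yesterday's bias, automated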
Large corporations are also using AI tools to target vulnerable communities where they can purchase single-family homes en masse at low prices and then rent them out using automated management systems. So AI is fueled by everyone’s data, and AI-backed tools are being used on everyone in almost every sector of everyday life.
Ultimately, intentional checks and guardrails around AI not only prevent these harms—current and potential—but also create the conditions under which this promising new technology can enable human flourishing.
What protections are being put in place to address the harms of AI?
Right now, though, policy protections to address the harms of AI are scarce.
In October of 2023, Senator Schumer convened the second bipartisan AI Insight Forum—to which we, alongside many other worker and civil society groups, responded that the workers behind AI ought to be included in these conversations. That same month, President Biden issued the Executive Order on Safe, Secure, and Trustworthy AI, which was encouraging in its focus on increased protections for workers and renters.
We also sent a letter to the Federal Trade Commission and Consumer Financial Protection Bureau to share our knowledge as they investigate algorithmic tenant screenings.
However, industry is keeping pace, making its positions known, influencing research, and even crafting policies that contain loopholes for itself under the guise of promoting real reform. That’s why it’s so important that we’re all educated on the basics of AI.
“We’ve got to change everything up […] Workers need to be involved in the development of technology, in the deployment of technology, and in the way it is used.”
—Sara Flocks, Legislative and Strategic Campaigns Director, California Labor Federation
Who should be involved in conversations around AI?
Everyone should be involved in conversations around AI. This is one of the key operating assumptions in our AI Policy Principles.
Because AI systems are complex, the voices of technologists tend to be elevated in discussions on the development and deployment of AI. While their insights and knowledge are valuable, equally important are the values and perspectives of those who understand the impacts of AI systems on the ground.
As for our part, TechEquity will continue to watch closely as AI evolves, particularly within our two focus areas: housing and labor. Our work will be guided by our AI Policy Principles and a desire to address the scale, speed, and adoption of AI—answering your questions and working in community to create a more equitable future.
Want to help? Sign on in support of these principles!