
By Andrew Cohen
Two recent events brought top experts to UC Berkeley Law to address known concerns and worrisome uncertainties regarding the impact of artificial intelligence (AI).
The school’s annual Race & Tech Symposium, presented by the Berkeley Center for Law & Technology and Berkeley Technology Law Journal, focused on centering racial justice in AI policymaking. Four days earlier, a panel sponsored by the law school’s Edley Center on Law & Democracy and the Goldman School of Public Policy probed whether AI will deepen inequality or help advance a more inclusive and participatory democracy.
At the symposium, UC Berkeley Law professors Daniel A. Farber, Andrea Roth, Colleen Chien, and Osagie K. Obasogie moderated panels about AI’s impact on racial equity in environmental policy, the criminal legal system, labor justice, and healthcare, respectively.

Samuelson Law, Technology & Public Policy Clinic staff attorney Juliana DeVries ’17 described some perils of using machine- and AI-generated evidence in criminal cases, and how clinic students are addressing them in the context of parole and probation. Noting that nearly 25% of U.S. prisoners are incarcerated for violating a condition of probation or parole — and that 500,000 adults in the United States are subjected to electronic monitoring — she outlined concerns about the reliability of the technologies that often flag such violations.
“These are complex technologies used against people who have little chance to defend against these allegations,” she said. “In some states, people aren’t even entitled to a lawyer to challenge them. Smartphone apps increasingly being used are shown to have serious accuracy and bias issues, and facial recognition technology is shown to have false and biased results, often with the highest error rates for Black women.”
Calling for more transparency in AI product development, DeVries said more resources to better understand technical disclosures, such as an in-house engineer at public defender offices, could help, but that this is “quite far from reality of our criminal legal system.”
Nicole Ozer ’03, a key figure in crafting California’s landmark Electronic Communications Privacy Act and Reader Privacy Act, discussed a range of legal and strategic work to defend and advance rights and safety. The ACLU of Northern California’s founding technology and civil liberties director, she has worked for more than two decades to bolster rights, justice, and democracy in the digital age, including developing the organization’s national online privacy campaign and designing innovative local surveillance reform strategies now used nationwide.
Ozer discussed a case against a face surveillance company brought on behalf of racial justice and immigrants’ rights activists that is now moving through the California state courts (she worked on an amicus brief for the case focused on California’s constitutional right to privacy). She also highlighted an ACLU case brought in Illinois against the same company, which claimed to have captured more than 10 billion faceprints from people’s online photos worldwide, and the resulting settlement, which permanently bars the company from making its faceprint database available to most businesses and other private entities.
“At the core of all this, really, is power,” said Ozer, now executive director of UC Law San Francisco’s new Center for Constitutional Democracy. “We’re most often up against the most powerful forces, the biggest companies, the government; and the rights and interests of people are generally the underdog in these fights. So we have to work smarter, more strategically, and more collaboratively if we want to ensure that AI and other new technology works for the people and advances rights, equity, and justice.”
Changing Work as We Know It
The labor-focused panel explored the uncertainty of AI’s impact on employment and how Europe has taken a more stringent approach to regulating it than the U.S. UC Berkeley Law Professor Diana S. Reddy noted that while 70% of American workers worry about AI potentially replacing them, roughly the same share say they would welcome AI making parts of their jobs easier.

“In the past, automation has been limited for specific uses and designed for specific tasks,” she said. “But AI is capable of responding in real time to changing inputs and addressing a wide variety of roles, and employers have a history of using tech innovations to reduce labor costs.”
While union membership among American workers has dropped from about 20% in 1983 to just 10% today, Reddy sees a potential revival of union interest that could help give workers a say in how AI is used in their jobs. She noted, however, that current labor laws apply only to traditional employees, and that many companies have used technology to classify workers as contractors rather than employees. This, she said, is “a fundamental end run around our existing legal infrastructure” that disproportionately harms people of color and other marginalized groups.
“If AI dramatically displaces human workers, it’s not just about short-term job loss — it’s potentially a permanent replacement,” Reddy said. “That would allow a greater concentration of wealth accruing to businesses rather than working people, fueling skyrocketing inequality. So it’s not just unemployment; the risk here is major social and economic change.”
She also conveyed concern about AI regulation decisions falling to the states instead of the federal government. Historically, Reddy added, piecemeal regulations within labor and employment law have led to employers fleeing to places with fewer protections for workers.
“Corporations have long branded themselves as job creators, arguing that they deserve handouts” because helping corporations means helping workers, she said. “But to the extent that companies use technology to replace workers, to the extent they stop being job creators, that could prompt radical changes in how we think about regulating them if they’re no longer participating in building wealth for all of us.”
Intersecting Technology and Democracy
At the Tech Policy for a Just Future: AI, Racial Equity, and Democracy event, moderated by Edley Center on Law & Democracy Executive Director Catherine E. Lhamon, George Washington Law Professor Spencer Overton and Brennan Center for Justice Vice President of Elections and Government Lawrence Norden — a visiting lecturer at UC Berkeley Law this year — cited areas of both concern and optimism.
Drawing on his forthcoming article “Ethnonationalism by Algorithm,” Overton argued that AI is central to American identity and democracy, and is designed by and used to benefit “members of the dominant racial, ethnic, or cultural group while attempting to exclude or assimilate others.”

Citing reports of AI producing racially skewed results in areas from healthcare decisions to criminal justice to mortgage loans, he framed them as part of a backlash against America’s shift from 15% people of color in 1965 to over 40% today.
“This backlash is not unique to America; we’re seeing a growing nativism in Brazil and India and other places too,” he said. “Racial diversity is no longer considered a public good, and I believe this approach also shapes our government’s approach to AI governance.”
Overton explained how the Trump administration rescinded a Biden administration order requiring federal agencies to limit discrimination resulting from AI algorithms, and described a current mandate to eliminate slowdowns in AI development and proposals to withhold federal funding from states with burdensome AI regulations.
“If you fine-tune your AI to reduce bias, the federal government won’t buy it,” he said. “This prevents innovation, and prevents people from trying to make AI better and stronger. These policies are operationalized throughout the government.”
Norden called transparency from AI developers “incredibly important” and data privacy “critical.” He said AI could help draw fairer congressional district maps, find polling places closer to public transportation, and achieve similar goals, but that growing executive branch power emboldened by the Supreme Court, along with the outsized power of certain tech companies, has hampered such progress.
“Social media has had a huge impact on how we see the world, and I think AI will be many fold of that,” Norden said. “We’re seeing how companies move with the political winds, and a lot of AI companies and social media companies that were talking about wanting to protect our elections have gotten rid of positions [that previously existed to help with that work].”
Lamenting that people working to protect democracy and civil rights are often too siloed, he stressed that technology is integral to democracy’s future — and that AI must be better understood.
“There’s a lot of potential for AI to be a great equalizer,” Norden said. “But it’s not going to just happen if companies don’t have incentives to make that a priority.”