By Gwyneth K. Shaw
Artificial intelligence — essentially, the art and science of using machines to mimic the outputs of the human mind — is increasingly a part of our everyday lives. Alexa and Siri help us with tasks and anticipate our choices; chatbots pop in to assist with everything from customer service to health care questions.
This burgeoning technology, however, has already moved far beyond our households and workplaces, with hype and fear competing for the dominant public narrative. As AI spreads around the world, challenging conventional borders and classifications, what are the best ways to promote responsible innovation?
Two technology powerhouses at the University of California are teaming up to help answer that difficult question: The Berkeley Center for Law & Technology (BCLT), based at Berkeley Law, and the CITRIS Policy Lab at the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS), which draws from expertise on the UC campuses at Berkeley, Davis, Merced, and Santa Cruz.
The Artificial Intelligence, Platforms, and Society Project, co-led by CITRIS Policy Lab Director and Goldman School of Public Policy Associate Research Professor Brandie Nonnecke and Berkeley Law Professor Tejas N. Narechania, will also work with researchers from Goldman and Berkeley’s School of Information and College of Engineering.
“As emerging technologies, such as AI, become more pervasive in society, it is critical that we identify effective technical and governance strategies to maximize benefits and mitigate risks before it’s too late,” Nonnecke says. “Our project is targeting its work on AI, platforms, and technologies used in law — all three of which simultaneously promise great benefits and pose great risks to society.”
The project will be an independent forum for students, academics, practitioners, and technology companies to explore the best ways to support responsible development and use of AI, including the role of the private sector as well as potential state, federal, and international regulation. It will offer a community for practicing attorneys to better understand current issues, as well as support research, training, and a fellowship program.
Expansions and intersections
BCLT Executive Director Wayne Stacy says the AI project complements and broadens the reach of two other initiatives the center has developed in recent years: The Asia IP & Technology Law Project and the Life Sciences Project.
“This is a major extension of what we’re doing as a law school,” Stacy says. “These three projects together really lead us to overlapping circles, so you can look at AI and drug development, or what’s happening with AI and China. This combination gives us the ability to find real intersections.”
The project aims to go beyond the rhetoric and theory and focus on three main areas: General AI governance, how platforms can and should use the technology, and how these new tools, in concert with public data, can be used to ethically address pressing problems in law.
“We cannot, as a nation, keep any AI technology from being developed. We just can’t,” Stacy says. “So it really comes down to regulations, implementation, and the ability to follow those regulations.”
The new AI emphasis will also feed BCLT’s growing online B-CLE platform for earning Continuing Legal Education (CLE) credits anywhere, anytime.
Stacy, Nonnecke, and Narechania emphasize that CITRIS, which focuses on creating technology solutions for society’s most challenging problems, is the perfect partner for BCLT, which has defined the intellectual property and technology field for more than a quarter-century.
“The law school brings deep expertise about questions of law, policy, and regulation,” says Narechania, a BCLT faculty co-director. “And CITRIS, itself an interdisciplinary engineering and policy hub, brings technical expertise that is indispensable to addressing the sorts of challenges we’re hoping to tackle.”
Narechania, whose scholarship examines questions of technology, law, and policy from an institutional perspective, is focused on how the internet’s biggest platforms are and should be handling the rapidly changing AI landscape.
With new technologies, he says, there’s always a concern that the hottest developments might fade quickly. But AI and the accompanying questions about platform governance are not just here to stay — they are growing more complex.
“It’s really important that we get a handle on the governance questions ahead of time, and start to think about how we’re going to answer them,” Narechania says. “How, if at all, might we regulate speech on platforms with significant market power? How can we best address bias and discrimination by machine-learning-based algorithms? How should we balance individual privacy interests in information against the collective gains we might discern from aggregated data? These are big questions that scholars, practitioners, and policymakers will all need to confront.”
Narechania says he’s particularly thrilled to be working with Nonnecke, whom he calls a great and energetic collaborator who’s already immersed in the technological and policy details of the AI sector. She co-directs the UC Berkeley AI Policy Hub, an interdisciplinary initiative training researchers to develop effective AI governance and policy frameworks, and directs Our Better Web, a collaboration by CITRIS, Berkeley Law, the Berkeley Graduate School of Journalism, and the Goldman School to fight online disinformation and algorithmic bias and promote child safety online.
“It’s a unique opportunity to combine two strengths of the university: Its world-class engineering institution with the best public policy and law schools in the country,” Narechania says. “And the public-oriented mindset of the university is important here, too — we’re really going to focus on the public’s interests, and think through, for example, questions of democratic governance and public accountability for these large-scale systems that have such wide effects.”
The project also further cements BCLT’s position at the top of the technology law and policy landscape. Already, Stacy says, one research fellow is in residence, studying the development and regulation of biometric technology. She’s hosting a two-day virtual symposium on Feb. 22 and 23 that will bring together experts in law, computer science, and social science to outline the current international landscape.
With the European Union and California already experimenting with policy approaches to AI governance, Narechania and Nonnecke say the time is ripe for this new effort.
“This is an exciting opportunity to strengthen evidence-based technical and governance strategies that support responsible technology development and use,” Nonnecke says. “CITRIS’ mission is to develop information technologies that benefit society. By collaborating with BCLT, we’re able to tap into its extensive legal expertise and conduct much-needed research at the intersection of technology and law.”