Agenda

23rd Annual BCLT/BTLJ Symposium
Governing Machines:
Defining and Enforcing Public Policy Values in AI Systems

Thursday, April 4, 2019
2:00 p.m.

Welcoming and Framing Remarks
Jennifer Urban and Ken Bamberger

2:15 p.m. – 3:15 p.m.

Fairness and/or/vs. Privacy and Dignity

Humans may aim to be fair, but cognitive biases and other factors skew human decision-making. So we turn to algorithms. Yet algorithms encode biases of their own and can be unfair too. More data can help reduce unfairness, but assembling and repurposing that data can run counter to privacy. This panel will consider the tensions between fairness and privacy, explore the challenges regulators and practitioners face in managing those tensions, and offer perspectives on how to accommodate both values going forward.

Speakers:
Radha Chandra, FICO
Sheryl Falk, Winston & Strawn
Meg Leta Jones, Assistant Professor, Communication, Culture & Technology Department, Georgetown University
Lindsey Tonsager, Covington

3:15 p.m. – 3:30 p.m. Break
3:30 p.m. – 5:00 p.m.

Safety and Humans in the Loop

In what circumstances can we rely on machines? On what basis (substantive or procedural) do we determine that an AI/ML system is “good enough”? When must there be a human in the loop, or at least on the loop? When is it better to leave humans off the loop? Policymakers, regulators, companies, and professional associations are taking different approaches to overseeing the transfer of tasks from humans to machines. What have we learned from these ongoing experiments? What institutional and legal choices would move us in the right direction?

Moderator: Henry Welch, Haynes & Boone

Speakers:
Justin Erlich, Voyage
Ece Kamar, Microsoft
Karen Levy, Department of Information Science, Cornell University, and Cornell Law School
Rob Merges, UC Berkeley Law School
Liane Randolph, Commissioner, California Public Utilities Commission

Presentations:
Ece Kamar
Rob Merges

5:00 p.m. Reception

Friday, April 5, 2019
9:00 a.m. – 10:00 a.m.

Machines of Manipulation

Russia’s use of profiling tools and algorithmically driven content delivery to manipulate US voters in the 2016 election, along with Cambridge Analytica’s collection and use of personal data, has brought renewed attention to issues of targeting and personalization. Where do the answers lie: in transparency, equal protection, consumer protection, data protection, election law, or other areas?

Moderator: Brandie M. Nonnecke, CITRIS, UC Berkeley

Speakers:
Chris Hoofnagle, UC Berkeley
Peter Menell and Uri Hacohen, UC Berkeley
Michael Carl Tschantz, International Computer Science Institute, UC Berkeley

Presentations:
Chris Hoofnagle
Michael Carl Tschantz

10:00 a.m. – 10:30 a.m. Break
10:30 a.m. – 11:00 a.m.

Keynote

Yeong Zee Kin, Assistant Chief Executive of the Infocomm Media Development Authority of Singapore and Deputy Commissioner of the Singapore Personal Data Protection Commission

11:00 a.m. – 12:30 p.m.

CLE: Legal Ethics

Ethical Machines – Professional Knowledge and Ethics

AI/ML systems may affect traditional professional roles and responsibilities, requiring re-examination of the construction of professional knowledge, “skill fade,” and ethical norms. How do we reap the benefits of AI/ML without displacing or distorting professional knowledge?

Moderator: Anna Remis, Sidley

Speakers:
Chris Mammen, Womble Bond Dickinson
Deirdre Mulligan, UC Berkeley
Dr. Michael Hodgkins, Chief Medical Information Officer, American Medical Association
Pilar Ossorio, School of Law, University of Wisconsin-Madison

Presentations:
Deirdre Mulligan/Daniel Kluttz
Dr. Michael Hodgkins
Pilar Ossorio

12:30 p.m. – 1:00 p.m. Lunch

1:00 p.m. – 1:30 p.m.

David E. Nelson Memorial Lecture

Andrea Jelinek, Chair, European Data Protection Board

1:30 p.m. – 1:45 p.m. Break
1:45 p.m. – 3:00 p.m.

Trust but Verify – Validating and Defending Against Machine Decisions

Algorithms are used to “prosecute” a variety of goals: to track crime and allocate police resources, to remove offensive or infringing content, to create evidence in court proceedings, to predict creditworthiness or future criminality, and to diagnose disease. What adversarial processes (legal or algorithmic) can vet or challenge these predictions and decisions? How do we know whether AI/ML systems measure up to the legal standard of care (whatever that is)? Transparency and explainability are the tools currently on the table in the regulatory context. Are they sufficient, or even practical? What alternatives could there be for testing and validating AI/ML systems?

Moderator: Boris Segalis, Cooley

Speakers:
Amit Elazari, Intel
Jen Gennai, Google
Helen Nissenbaum, Cornell Tech
Jennifer Urban, UC Berkeley Law School

Presentations:
Jen Gennai
Helen Nissenbaum

3:00 p.m.

Closing Remarks
Jennifer Urban and Deirdre Mulligan