23rd Annual BCLT/BTLJ Symposium
Governing Machines:
Defining and Enforcing Public Policy Values in AI Systems
Thursday, April 4, 2019
2:00 p.m. | Welcoming and Framing Remarks
2:15 p.m. – 3:15 p.m. | Fairness and/or/vs. Privacy and Dignity
Humans may aim to be fair, but we know that cognitive biases and other factors skew human decision-making. So we turn to algorithms. Yet any algorithm contains its own biases and can be unfair too. More data can help avoid unfairness, but assembling and repurposing that data runs counter to privacy. This panel will consider the tensions between fairness and privacy, explore challenges for regulators and practitioners in managing those tensions, and provide perspectives on how to accommodate both values moving forward.
Speakers:
3:15 p.m. – 3:30 p.m. | Break |
3:30 p.m. – 5:00 p.m. | Safety and Humans in the Loop
In what circumstances can we rely on machines? On what basis (substantive/procedural) do we determine that an AI/ML system is “good enough”? When must there be a human in the loop, or at least on the loop? When is it better to leave humans off the loop? Policymakers, regulators, companies, and professional associations are taking different approaches to overseeing the transfer of tasks from humans to machines. What do we know based on these ongoing experiments? What institutional and legal choices would move us in the right direction?
Moderator: Henry Welch, Haynes & Boone
Speakers:
Presentations:
5:00 p.m. | Reception |
Friday, April 5, 2019
9:00 a.m. – 10:00 a.m. | Machines of Manipulation
Russia’s use of profiling tools and algorithmically driven content delivery in efforts to manipulate US voters in the 2016 election, exemplified by the collection and use of data by Cambridge Analytica, has brought renewed attention to issues of targeting and personalization. Where do the answers lie: in transparency, equal protection, consumer protection, data protection, election law, or other areas?
Moderator:
Speakers:
Presentations:
10:00 a.m. – 10:30 a.m. | Break |
10:30 a.m. – 11:00 a.m. | Keynote
Yeong Zee Kin, Assistant Chief Executive of the Infocomm Media Development Authority of Singapore and Deputy Commissioner of the Singapore Personal Data Protection Commission
11:00 a.m. – 12:30 p.m. (CLE: Legal Ethics) | Ethical Machines – Professional Knowledge and Ethics
AI/ML systems may affect traditional professional roles and responsibilities, requiring re-examination of the construction of professional knowledge, “skill fade,” and ethical norms. How do we reap the benefits of AI/ML without displacing or distorting professional knowledge?
Moderator:
Speakers:
Presentations:
12:30 p.m. – 1:00 p.m. | Lunch
1:00 p.m. – 1:30 p.m. | David E. Nelson Memorial Lecture
Andrea Jelinek, Chair, European Data Protection Board
1:30 p.m. – 1:45 p.m. | Break
1:45 p.m. – 3:00 p.m. | Trust but Verify – Validating and Defending Against Machine Decisions
Algorithms are used to “prosecute” a variety of goals: to track crime and allocate police resources, to remove offensive or infringing content, to create evidence in court proceedings, to predict creditworthiness or future criminality, and to diagnose disease. What adversarial processes (legal or algorithmic) can vet or challenge these predictions and decisions? How do we know whether AI/ML systems measure up to the legal standard of care (whatever that is)? Transparency and explainability are the tools currently on the table in the regulatory context. Are these sufficient (or even practical)? What alternatives could there be for testing and validating AI/ML systems?
Moderator: Boris Segalis, Cooley
Speakers:
Presentations:
3:00 p.m. | Closing Remarks