23rd Annual BCLT/BTLJ Symposium
Defining and Enforcing Public Policy Values in AI Systems
Thursday, April 4, 2019

2:15 p.m. – 3:30 p.m.
Fairness and/or/vs. Privacy and Dignity
Humans may aim to be fair, but we know that cognitive biases and other factors skew human decision-making. So we turn to algorithms. Yet algorithms embed biases of their own and can be unfair too. More data can help reduce unfairness, but assembling and repurposing that data runs counter to privacy. This panel will consider the tensions between fairness and privacy, explore the challenges regulators and practitioners face in managing those tensions, and offer perspectives on how to accommodate both values going forward.
3:30 p.m. – 4:00 p.m.
Break
4:00 p.m. – 5:30 p.m.
Safety and Humans in the Loop
In what circumstances can we rely on machines? On what basis (substantive or procedural) do we determine that an AI/ML system is “good enough”? When must there be a human in the loop, or at least on the loop? When is it better to leave humans off the loop entirely? Policymakers, regulators, companies, and professional associations are taking different approaches to overseeing the transfer of tasks from humans to machines. What do we know from these ongoing experiments? What institutional and legal choices would move us in the right direction?
Friday, April 5, 2019

9:00 a.m. – 10:15 a.m.
Machines of Manipulation
Russia’s use of profiling tools and algorithmically driven content delivery in efforts to manipulate US voters in the 2016 election, together with Cambridge Analytica’s collection and use of voter data, has brought renewed attention to issues of targeting and personalization. Where do the answers lie: in transparency, equal protection, consumer protection, data protection, election law, or elsewhere?
10:15 a.m. – 10:30 a.m.
Break

10:30 a.m. – 11:00 a.m.

11:00 a.m. – 12:30 p.m.
Ethical Machines – Professional Knowledge and Ethics
AI/ML systems may affect traditional professional roles and responsibilities, requiring re-examination of how professional knowledge is constructed, the risk of “skill fade,” and ethical norms. How do we reap the benefits of AI/ML without displacing or distorting professional knowledge?
Chris Mammen, Womble Bond Dickinson
12:30 p.m. – 1:30 p.m.
Lunch

1:30 p.m. – 3:00 p.m.
Trust but Verify – Validating and Defending Against Machine Decisions
Algorithms are used to “prosecute” a variety of goals: to track crime and allocate police resources, to remove offensive or infringing content, to create evidence in court proceedings, to predict creditworthiness or future criminality, to diagnose disease. What adversarial processes (legal or algorithmic) can vet or challenge these predictions and decisions? How do we know if AI/ML systems measure up to the legal standard of care (whatever that is)? Transparency and explainability are the tools currently on the table in the regulatory context. Are these sufficient (or even practical)? What alternatives could there be for testing and validating AI/ML systems?