Virtual NeurIPS 2020 Workshop
Algorithmic Fairness through the Lens of Causality and Interpretability
December 12, 2020, 09:00 AM GMT - 08:10 PM GMT
The Algorithmic Fairness through the Lens of Causality and Interpretability (AFCI) workshop aims to spark discussions on how open questions in algorithmic fairness can be addressed with Causality and Interpretability.
Black-box machine learning models are now widely deployed in decision-making settings across many parts of society, from sentencing to medical diagnostics to lending. However, many of these models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these measures were an important first step toward addressing fairness concerns, they came with immediate challenges. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting with the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, making it easier to align them with our societal values on fairness? In this way, interpretability can sometimes be more actionable than causality.
Given these initial successes, this workshop aims to investigate more deeply how open questions in algorithmic fairness can be addressed with causality and interpretability. For example: What improvements can causal definitions provide over existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? What tools for interpretability are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?
Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University
Associate Professor at the Department of Statistical Science at University College London and Adjunct Faculty of the Gatsby Computational Neuroscience Unit
PhD candidate in Applied Mathematics and Philosophy at Harvard University and Fellow at the Jain Family Institute
Moritz Hardt (Tutorial)
Assistant Professor in the Department of Electrical Engineering and Computer Sciences at University of California, Berkeley
4-8 pages, NeurIPS format
Submissions to the Papers track should describe new projects aimed at using causality and/or interpretability to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and should specify how the project fills a gap in the current literature.
We welcome submissions of novel work in the area of fairness, with special interest in (but not limited to):
- Failure modes of all current fairness definitions (statistical, causal, and otherwise)
- Methods to encode domain-specific fairness knowledge into causal models
- New causal definitions of fairness
- Novel, application-specific formalizations of fairness
- New techniques to interpret the fairness of data and models
- Interpretability methods for evaluating and/or mitigating bias
See here for more details
Oct 05, 2020, extended to Oct 09, 2020 AoE
Oct 09, 2020, extended to Oct 12, 2020 AoE
1 page (max), PDF format
Submissions for Breakout sessions should describe open problems relevant to algorithmic fairness where causality and/or interpretability can provide interesting viewpoints. These should be motivated by a real-world context, and authors should clearly state why current work does not solve the proposed open problem.
The accepted proposals will be used to lead discussions during the workshop. We strongly encourage researchers at all seniority levels to submit a proposal to lead a topical breakout session. In collaboration with session leaders, facilitators, and potentially attendees, we aim to write up the results of these discussions in a white paper after the workshop, to be published alongside the proceedings.
See here for more details
Oct 09, 2020, extended to Oct 16, 2020, 11:59PM AoE
- Awa Dieng (Google Brain)
- Jessica Schrouff (Google Brain)
- Matt Kusner (University College London, Alan Turing Institute)
- Golnoosh Farnadi (University of Montreal, MILA)
- Fernando Diaz (Google Brain)