Virtual NeurIPS 2020 Workshop

Algorithmic Fairness through the Lens of Causality and Interpretability

December 12, 2020, 09:00 AM GMT - 08:10 PM GMT

The Algorithmic Fairness through the Lens of Causality and Interpretability (AFCI) workshop aims to spark discussions on how open questions in algorithmic fairness can be addressed with Causality and Interpretability.

 

Black-box machine learning models are now widely deployed in decision-making settings across many parts of society, from sentencing and medical diagnosis to loan approval. However, many of these models have been found to be biased against certain demographic groups. Initial work on algorithmic fairness focused on formalizing statistical measures of fairness that could be used to train new classifiers. While these measures were an important first step towards addressing fairness concerns, they faced immediate challenges: for instance, different statistical criteria can be mutually incompatible, and they say little about the mechanisms by which bias arises. Causality has recently emerged as a powerful tool to address these shortcomings. Causality can be seen as a model-first approach: starting from the language of structural causal models or potential outcomes, the idea is to frame, and then solve, questions of algorithmic fairness in this language. Such causal definitions of fairness can have far-reaching impact, especially in high-risk domains. Interpretability, on the other hand, can be viewed as a user-first approach: can the ways in which algorithms work be made more transparent, so that they are easier to align with our societal values on fairness? In this way, Interpretability can sometimes be more actionable than Causality.
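As one concrete illustration of the contrast, consider counterfactual fairness (Kusner et al., 2017), a causal criterion stating that a prediction should not change had the individual's protected attribute been different. In the notation of structural causal models, a predictor $\hat{Y}$ is counterfactually fair if, for every individual with features $X = x$ and protected attribute $A = a$, and for any attainable value $a'$,

$$P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big) \quad \text{for all } y,$$

where $U$ denotes the background variables of the causal model. A purely statistical criterion such as demographic parity, $P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = a')$, constrains only the observed joint distribution; the counterfactual condition instead constrains the mechanism generating the data.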

Given these initial successes, this workshop aims to investigate more deeply how open questions in algorithmic fairness can be addressed with Causality and Interpretability. Questions of interest include: What improvements can causal definitions provide over existing statistical definitions of fairness? How can causally grounded methods help develop more robust fairness algorithms in practice? Which interpretability tools are useful for detecting bias and building fair systems? What are good formalizations of interpretability when addressing fairness questions?

Invited Speakers

Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University

Associate Professor at the Department of Statistical Science at University College London and Adjunct Faculty of the Gatsby Computational Neuroscience Unit

Assistant Professor in the Machine Learning Department at Carnegie Mellon University

PhD candidate in Applied Mathematics and Philosophy at Harvard University and Fellow at the Jain Family Institute

Moritz Hardt (Tutorial)

Assistant Professor in the Department of Electrical Engineering and Computer Sciences at University of California, Berkeley

Papers Track

4-8 pages, NeurIPS format

Submission portal

Submissions to the Papers track should describe new projects that use Causality and/or Interpretability to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and should specify how the project fills a gap in the current literature.

We welcome submissions of novel work in the area of fairness, with a special interest in (but not limited to) the topics listed in the full call.

See here for more details

Deadlines:

Abstract Submission: Oct 09, 2020, AoE (extended from Oct 05)

Full Submission: Oct 12, 2020, AoE (extended from Oct 09)


Breakout Sessions Track

1 page (max), PDF format

Submission portal

Submissions to the Breakout Sessions track should describe open problems relevant to algorithmic fairness on which causality and/or interpretability can offer interesting viewpoints. Proposals should be motivated by a real-world context, and authors should clearly state why current work does not solve the proposed open problem.

The accepted proposals will be used to lead discussions during the workshop. We strongly encourage researchers at all seniority levels to submit a proposal to lead a topical breakout session. In collaboration with the session leaders, facilitators, and potentially attendees, we aim to distill the results of these discussions into a white paper after the workshop, to be published alongside the proceedings.

See here for more details

Submission Deadline: Oct 16, 2020, 11:59 PM AoE (extended from Oct 09)


 

Organizers

Awa Dieng (Google Brain)

Jessica Schrouff (Google Brain)


Matt Kusner (University College London, Alan Turing Institute)

Golnoosh Farnadi (University of Montreal, MILA)


Fernando Diaz (Google Brain)

If you would like to help as a reviewer (papers and breakout sessions), please fill out this form.