Virtual NeurIPS 2021 Workshop

Algorithmic Fairness through the Lens of Causality and Robustness

December 13 or 14, 2021, 09:00 AM GMT - 08:00 PM GMT

The Algorithmic Fairness through the Lens of Causality and Robustness (AFCR) workshop aims to spark discussions on how open questions in algorithmic fairness can be addressed with Causality and Robustness.

Recently, relationships between techniques and metrics used across different fields of trustworthy ML have emerged, leading to interesting work at the intersection of algorithmic fairness, robustness, and causality.

On one hand, causality has been proposed as a powerful tool to address the limitations of initial statistical definitions of fairness. However, questions have emerged regarding 1) the applicability of such approaches due to strong assumptions inherent to causal questions and 2) the suitability of a causal framing for studies of bias and discrimination.

On the other hand, the Robustness literature has surfaced promising approaches to improve fairness in ML models. For instance, parallels can be drawn between individual fairness and local robustness guarantees, or between group fairness metrics and robustness to distribution shift. Beyond these similarities, the interactions between fairness and robustness can help us understand how fairness guarantees hold under distribution shift or adversarial/poisoning attacks, leading to ML models that are both fair and robust.

After a first edition of this workshop that focused on causality and interpretability, we will turn to the intersection of algorithmic fairness and recent techniques in causality and robustness. In this context, we will investigate how these topics relate, and also how they can augment each other to provide better or better-suited definitions and mitigation strategies for algorithmic fairness.

Invited Speakers

Elias Bareinboim

Associate Professor in the Department of Computer Science and Director of the Causal Artificial Intelligence Lab at Columbia University

Silvia Chiappa

Senior Staff Research Scientist in Machine Learning at DeepMind and Honorary Professor at the Computer Science Department of University College London

Rumi Chunara

Associate Professor of Computer Science & Engineering, Biostatistics, and Epidemiology at New York University, Tandon School of Engineering.

Isabel Valera

Professor of Machine Learning at the Department of Computer Science at Saarland University and Adjunct Faculty at the Max Planck Institute for Software Systems.

Hima Lakkaraju

Assistant Professor in the Business School and Department of Computer Science at Harvard University.


Been Kim (Google Brain), Ricardo Silva (UCL), Solon Barocas (Microsoft Research), Luke Stark (University of Western Ontario)

Roundtable Leads:

Issa Kohler-Hausmann (Yale University), Alex D'Amour (Google Research), Nyalleng Moorosi (Google Research), Ulrich Aïvodji (UQAM)

Call for Papers

4-8 pages (anonymized), NeurIPS format, on CMT

Abstract deadline: September 13, Full submission: September 18

Paper submissions should describe new projects aimed at using Causality and/or Robustness to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and specify how the project fills a gap in the current literature. Authors of accepted papers will be required to upload a 10-minute video presentation of their paper. All recorded talks will be made available on the workshop website.

We welcome submissions of novel work in the area of fairness, with a special interest in (but not limited to):

  • Failure modes of all current fairness definitions (statistical, causal, and otherwise)

  • New causal definitions of fairness

  • How can causally grounded fairness methods help develop more robust fairness algorithms in practice?

  • What is an appropriate causal framing in studies of discrimination?

  • How do approaches for adversarial/poisoning attacks target algorithmic fairness?

  • How do fairness guarantees hold under distribution shift?


Reviewer Volunteer Form

If you would like to help as a reviewer, please fill out the form below.