A hybrid NeurIPS 2022 Workshop

Algorithmic Fairness through the Lens of Causality and Privacy

The Algorithmic Fairness through the Lens of Causality and Privacy (AFCP) workshop aims to spark discussions on how open questions in algorithmic fairness can be addressed with causality and privacy.

As machine learning models permeate every aspect of decision-making systems in consequential areas such as healthcare and criminal justice, it has become critical for these models to satisfy trustworthiness desiderata such as fairness, interpretability, accountability, privacy, and security. Initially studied in isolation, these fields have recently seen work emerge at their intersection, leading to interesting questions about how fairness can be achieved using a causal perspective and under privacy constraints.

Indeed, the field of causal fairness has expanded considerably in recent years, notably as a way to counteract the limitations of initial statistical definitions of fairness. While a causal framing provides flexibility in modelling and mitigating sources of bias through a causal model, proposed approaches rely heavily on assumptions about the data-generating process, such as the faithfulness and ignorability assumptions. This leads to open discussions on (1) how to fully characterize causal definitions of fairness, (2) how, if possible, to improve the applicability of such definitions, and (3) what constitutes a suitable causal framing of bias from a sociotechnical perspective.

Additionally, while most existing work on causal fairness assumes that sensitive attribute data are observed, such information is often unavailable due to, for example, data privacy laws or ethical considerations. This observation has motivated initial work on training and evaluating fair algorithms without access to sensitive information, and on studying the compatibility of, and trade-offs between, fairness and privacy. However, such work has largely been limited to statistical definitions of fairness, raising the question of whether these methods can be extended to causal definitions.

Given the interesting questions that emerge at the intersection of these fields, this workshop aims to investigate in depth not only how these topics relate but also how they can augment each other to provide better, or better-suited, definitions and mitigation strategies for algorithmic fairness.

Invited Speakers 

Rollins Assistant Professor in the Department of Biostatistics and Bioinformatics at Emory University


Talk: A causal and counterfactual view of (un)fairness in automated decision making 

Professor in the School of Information at UC Berkeley and faculty Director of the Berkeley Center for Law & Technology


Talk: Contestation and Participation in Model Design

Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Toronto


Talk: Is trustworthy ML possible in decision-making systems requiring fairness?

Director of Research at INRIA


Talk: On the interaction between Privacy, Fairness and Accuracy

Panelists

(London School of Economics)

Rachel Cummings (Columbia U.)

Jake Goldenfein (Melbourne Law School)

Sara Hooker (Cohere For AI)

Online Roundtable Leads

Causality & Fairness

Dhanya Sridhar (U. of Montreal, Mila)

David Madras (U. of Toronto)

Privacy & Fairness

Ulrich Aïvodji (ETS Montreal)

Sikha Pentyala (UW Tacoma)

Ethics & Fairness

Negar Rostamzadeh (Google Research)

Sina Fazelpour (Northeastern U.)

Nyalleng Moroosi (Google Research)

In-person Roundtable Leads

Causality & Fairness

(London School of Economics)

Privacy & Fairness

Rachel Cummings (Columbia U.)

Sikha Pentyala (UW Tacoma)

Interpretability & Fairness

Amir-Hossein Karimi (MPI-IS, ETH Zurich)

4-8 pages (not including references and appendix), NeurIPS format

Submission portal (tba)

Submissions to the Paper track should describe new projects aimed at using Causality and/or Privacy to address fairness in machine learning. Submissions should include theoretical or empirical results demonstrating the approach and specify how the project fills a gap in the current literature. Authors of accepted papers will be required to upload a 10-minute video presentation of their paper. All recorded talks will be made available on the workshop website.

We welcome submissions of novel work in the area of fairness, with special interest in (but not limited to):

Deadlines:

Abstract: Sep 22, 2022 AoE (extended from Sep 15)

Full submission: Sep 26, 2022 AoE (extended from Sep 22)

Format: 4-8 pages, not including references and appendix. The impact statement and checklist are optional and do not count toward the page limit.

1 page (max, anonymized) in pdf format 

Submission portal (tba)

The Extended Abstract track welcomes submissions of 1-page abstracts (including references) that provide new perspectives, discussions, or novel methods that are not yet finalized on the topics of fairness, causality, and/or privacy. Accepted abstracts will be presented as posters at the workshop.


Format (maximum one page pdf, references included). 


Upload a 1-page PDF file on CMT. The PDF should follow a one-column format; the main body text must be at least 11-point font, and page margins must be at least 0.5 inches (all sides).


Deadline: Sep 26, 2022 AoE (extended from Sep 22)

Organizers

Awa Dieng (Google Brain, Mila)

Miriam Rateike (MPI-IS, Saarland University)

Golnoosh Farnadi (HEC Montreal, Mila)

Ferdinando Fioretto (Syracuse University)

Advisory Committee

Jessica Schrouff (DeepMind)

Code of Conduct

The AFCP workshop abides by the NeurIPS code of conduct. Participation in the event requires agreeing to the code of conduct.

Reviewer Volunteer Form
If you would like to help as a reviewer, please fill out the form below. 

To stay updated about the workshop, pre-register using this form and follow us on Twitter at @afciworkshop.