A NeurIPS 2023 Workshop

Algorithmic Fairness through the Lens of Time

December 15, 2023

Pre-registration form: https://forms.gle/YBCwn7L8N5AxExMG7

Virtual NeurIPS portal: https://neurips.cc/virtual/2023/workshop/66502


If you attended the workshop, please share your feedback.

The Algorithmic Fairness through the Lens of Time (AFT) workshop aims to spark discussions on how a long-term perspective can help build more trustworthy algorithms in the era of expressive generative models.

Fairness has been predominantly studied in the static regime, under the assumption of an unchanging data generation process. However, these approaches neglect the dynamic interplay between algorithmic decisions and the individuals they impact, which has been shown to be prevalent in practical settings. This observation highlights the need to study the long-term effects of fairness mitigation strategies and to incorporate dynamical systems into the development of fair algorithms.

Although prior research has identified several impactful scenarios where such dynamics can occur, including bureaucratic processes, social learning, recourse, and strategic behavior, extensive investigation of the long-term effects of fairness methods remains limited. Initial studies have shown that enforcing static fairness constraints in dynamical systems can lead to unfair data distributions and may perpetuate or even amplify biases.

Additionally, the rise of powerful large generative models has brought to the forefront the need to understand fairness in evolving systems. The general capabilities and widespread use of these models raise the critical question of how to assess them for fairness and mitigate observed biases from a long-term perspective. Importantly, mainstream fairness frameworks have been developed around classification and prediction tasks. How can we reconcile these existing techniques (pre-processing, in-processing, and post-processing) with the development of large generative models?

Given these open questions, this workshop aims to investigate in depth how to address fairness concerns in settings where learning occurs sequentially or in evolving environments.

Invited Speakers 

Associate Professor in the Philosophy Department at Carnegie Mellon University

Title: At the Intersection of Algorithmic Fairness and Causal Representation Learning

Research group lead at the ELLIS Institute and the Max Planck Institute for Intelligent Systems in Tübingen

Title: Performativity and Power in Prediction

Senior Research Scientist at IBM T.J. Watson Research Center

Title: Uncovering Hidden Bias: Auditing Language Models with a Social Stigma Lens

Professor in the Department of Computer Science at Columbia University

Title: A Framework for Responsible Deployment of Large Language Models

Panelists

Associate Professor in the Philosophy Department at Carnegie Mellon University

Senior Research Scientist at IBM T.J. Watson Research Center

Professor of Computer Science at the University of Maryland

Assistant Professor of the Politics of AI at Syracuse University

ML and Society team lead at Hugging Face

Roundtable Leads

Long-term Fairness

(Ohio State University)

Fairness in Generative Models

(Hugging Face)

Fairness & Causality

Stephen Pfohl (Google)

Elliot Creager (U. of Waterloo)

Fairness & Ethics

(Syracuse U.)

Call for papers

Paper track

4-9 pages (not including references and appendix), NeurIPS format

Submission portal

Submissions to the Paper track should describe new projects aimed at challenging static definitions of fairness, discussing long-term fairness effects, and integrating fairness with generative models. Submissions should include theoretical or empirical results demonstrating the approach and specify how the project fills a gap in the current literature. Authors of accepted papers will be required to upload a 10-minute video presentation of their paper. All recorded talks will be made available on the workshop website. More details at AFT2023 CFP.

We welcome submissions of novel work in the area of fairness, with a special interest in (but not limited to):

Deadlines:

Abstract: Sep 22, 2023 AoE 

Full submission: Oct 4, 2023 AoE (extended from Sep 29, 2023)

Acceptance Notification: Oct 27, 2023 AoE

Format: 4-9 pages, not including references and appendix. The impact statement and checklist are optional and do not count towards the page limit.

Extended abstract track

1 page (max, anonymized) in PDF format

Submission portal

The Extended abstract track welcomes submissions of 1-page abstracts (including references) that provide new perspectives, discussions, or novel methods that are not yet finalized, on the topics of fairness, long-term fairness, and/or fairness in generative models. Accepted abstracts will be presented as posters at the workshop.


Deadline: Oct 4, 2023 AoE (extended from Sep 29, 2023)

Acceptance Notification: Oct 27, 2023 AoE


Format: maximum one page PDF, references included.


Upload a 1-page PDF file on CMT. The PDF should follow a one-column format; the main body text must be at least 11-point font, and page margins must be at least 0.5 inches on all sides.

Organizers

Awa Dieng (Google DeepMind, Mila)

Miriam Rateike (Saarland University)

Golnoosh Farnadi (McGill University, Mila)

Ferdinando Fioretto (University of Virginia)

Jessica Schrouff (Google DeepMind)

Code of Conduct

The AFT workshop abides by the NeurIPS code of conduct. Participation in the event requires agreeing to the code of conduct.

Reviewer Volunteer Form
If you would like to help as a reviewer, please fill out the form below. 

To stay updated about the workshop, pre-register using this form and follow us on Twitter at @afciworkshop.