Accepted Papers

Please refer to the poster ID and category to navigate gather.town during the poster session.

Full schedule with links to Zoom: https://neurips.cc/virtual/2021/workshop/21850

Information Booklet: https://drive.google.com/file/d/1Bbq2DAnScbUufcwvxwkqCtF1lvc5TD1s/view

Oral Presentations

  • P2: Algorithmic Bias and Data Bias: Understanding the Relation between Distributionally Robust Optimization and Data Curation

Agnieszka Słowik · Leon Bottou

  • P31: On the Impossibility of Fairness-Aware Learning from Corrupted Data

Nikola Konstantinov · Christoph Lampert

  • P33: Achieving Counterfactual Fairness for Causal Bandit

Wen Huang · Lu Zhang · Xintao Wu

  • P15: The Many Roles that Causal Reasoning Plays in Reasoning about Fairness in Machine Learning

Irene Y Chen · Hal Daumé III · Solon Barocas

  • P3: Detecting Bias in the Presence of Spatial Autocorrelation

Subhabrata Majumdar · Cheryl Flynn · Ritwik Mitra

  • P7: Fair Clustering Using Antidote Data

Anshuman Chhabra · Adish Singla · Prasant Mohapatra

Posters

  • P6: Fairness for Robust Learning to Rank

Omid Memarrast · Ashkan Rezaei · Rizal Fathony · Brian Ziebart

  • P10: Cooperative Multi-Agent Fairness and Equivariant Policies

Niko Grupen · Bart Selman · Daniel Lee

  • P12: Fair SA: Sensitivity Analysis for Fairness in Face Recognition

Aparna Joshi · Xavier Suau Cuadros · Nivedha Sivakumar · Luca Zappella · Nicholas Apostoloff

  • P19: Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Networks

Ziliang Zong · Cody Blakeney · Gentry Atkinson · Nathaniel Huish · Yan Yan · Vangelis Metsis

  • P20: Bounded Fairness Transferability subject to Distribution Shift

Reilly Raab · Yatong Chen · Yang Liu

  • P24: Counterfactual Fairness in Mortgage Lending via Matching and Randomization

Sama Ghoba · Nathan Colaner

  • P25: Structural Interventions on Automated Decision Making Systems

Efren Cruz · Sarah Rajtmajer · Debashis Ghosh

  • P27: Balancing Robustness and Fairness via Partial Invariance

Moulik Choraria · Ibtihal Ferwana · Ankur Mani · Lav Varshney

  • P33: Implications of Modeled Beliefs for Algorithmic Fairness in Machine Learning

Ruth Urner · Jeff Edmonds · Karan Singh

  • P29: Fairness Degrading Adversarial Attacks Against Clustering Algorithms

Anshuman Chhabra · Adish Singla · Prasant Mohapatra

Extended Abstracts

  • P22: Fair When Trained, Unfair When Deployed: Observable Fairness Measures are Unstable in Performative Prediction Settings

Alan Mishler · Niccolo Dalmasso