Schedule

December 13, 2021, 09:00 AM - 08:30 PM GMT

All times are in GMT (UTC+00). Please check your corresponding local time on timeanddate.com

Full Schedule with links to Zoom: https://neurips.cc/virtual/2021/workshop/21850

Information Booklet: https://drive.google.com/file/d/1Bbq2DAnScbUufcwvxwkqCtF1lvc5TD1s/view

09:00 AM - 09:10 AM GMT

Opening remarks: Awa Dieng

Invited Talk

09:10 AM - 09:34 AM GMT

Invited Talk: Generalizability, robustness and fairness in machine learning risk prediction models

Rumi Chunara

Abstract: By leveraging principles of health equity, I will discuss the use of causal models and machine learning to address realistic challenges of data collection and model use across environments. Examples include a domain adaptation approach that improves prediction in under-represented population sub-groups by leveraging invariant information across groups when possible, and an algorithmic fairness method which specifically incorporates structural factors to better account for and address sources of bias and disparities.

Q&A

09:34 AM - 09:45 AM GMT

Questions: Invited talk, Rumi Chunara

Short break: Join us on Gathertown

Invited Talk

09:50 AM - 10:20 AM GMT

Invited Talk: Path-specific effects and ML fairness

Silvia Chiappa

Abstract: Arguably, the consideration of causal effects along subsets of causal paths is required for understanding and addressing unfairness in most real-world scenarios. In this talk I will share some of my thoughts on this goal and on the challenges in achieving it.

Q&A

10:20 AM - 10:30 AM GMT

Questions: Invited talk, Silvia Chiappa

Short break: Join us on Gathertown

Invited Talk

10:40 AM - 11:10 AM GMT

Invited Talk: Causality and fairness in ML: promises, challenges & open questions

Isabel Valera

Abstract: In recent years, we have observed an explosion of research approaches at the intersection of causality and fairness in machine learning (ML). These works are often motivated by the promise that causality allows us to reason about the causes of unfairness both in the data and in the ML algorithm. However, the promises of existing causal fair approaches require strong assumptions, which hinder their practical application. In this talk, I will provide a quick overview of both the promises and the technical challenges of causal fair ML frameworks from a theoretical perspective. Finally, I will show how to leverage probabilistic ML to partially relax causal assumptions in order to develop more practical solutions to causal fair ML.

Q&A

11:10 AM - 11:20 AM GMT

Questions: Invited talk, Isabel Valera

Contributed Talks

11:30 AM - 11:40 AM GMT

Nikola Konstantinov · Christoph Lampert

11:40 AM - 11:50 AM GMT

Wen Huang · Lu Zhang · Xintao Wu

Q&A

11:50 AM - 12:00 PM GMT

Questions: Contributed talks 1 and 2

12:00 PM - 01:00 PM GMT

Poster Session 1

Discussions

01:00 PM - 02:00 PM GMT

Roundtables

  • Causality for Fairness

  • Leads: Issa Kohler-Hausmann, Matt Kusner, Maggie Makar, Ioana Bica

  • Robustness for Fairness

  • Leads: Silvia Chiappa, Alex D’Amour, Elliot Creager

  • General Fairness

  • Leads: Isabel Valera, Ulrich Aïvodji, Keziah Naggita, Stephen Pfohl

  • Ethics

  • Leads: Luke Stark, Irene Y. Chen, Lizzie Kumar

Long break: join us on Gathertown

Invited Talk

04:00 PM - 04:30 PM GMT

Talk on Causality

Elias Bareinboim

Q&A

04:30 PM - 04:40 PM GMT

Questions: Invited talk, Elias Bareinboim

Invited Talk

04:40 PM - 05:20 PM GMT

Invited Talk: Towards Reliable and Robust Model Explanations

Hima Lakkaraju

Abstract: As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this talk, I will present some of our recent research that sheds light on the vulnerabilities of popular post hoc explanation techniques such as LIME and SHAP, and also introduce novel methods to address some of these vulnerabilities. More specifically, I will first demonstrate that these methods are brittle, unstable, and vulnerable to a variety of adversarial attacks. Then, I will discuss two solutions to address some of the aforementioned vulnerabilities: (i) a Bayesian framework that captures the uncertainty associated with post hoc explanations and in turn allows us to generate explanations with user-specified levels of confidence, and (ii) a framework based on adversarial training that is designed to make post hoc explanations more stable and robust to shifts in the underlying data. I will conclude the talk by discussing our recent theoretical results, which shed light on the equivalence and robustness of state-of-the-art explanation methods.

Q&A

05:20 PM - 05:30 PM GMT

Questions: Invited Talk, Hima Lakkaraju

Contributed Talks

05:30 PM - 05:40 PM GMT

Irene Y Chen · Hal Daumé III · Solon Barocas

05:40 PM - 05:50 PM GMT

Subhabrata Majumdar · Cheryl Flynn · Ritwik Mitra

05:50 PM - 05:52 PM GMT

Anshuman Chhabra · Adish Singla · Prasant Mohapatra

Q&A

05:52 PM - 06:05 PM GMT

Questions: Contributed talks 4, 5, 6

Short break

Invited Talk

06:15 PM - 06:47 PM GMT

Invited Talk: Lessons from robust machine learning

Aditi Raghunathan

Abstract: Current machine learning (ML) methods are primarily centered around improving in-distribution generalization where models are evaluated on new points drawn from nearly the same distribution as the training data. On the other hand, robustness and fairness involve reasoning about out-of-distribution performance such as accuracy on protected groups or perturbed inputs, and reliability even in the presence of spurious correlations. In this talk, I will describe an important lesson from robustness: in order to improve out-of-distribution performance, we often need to question the common assumptions in ML. In particular, we will see that ‘more data’, ‘bigger models’, or ‘fine-tuning pretrained features’ which improve in-distribution generalization often fail out-of-distribution.

Q&A

06:47 PM - 06:55 PM GMT

Questions: Invited Talk, Aditi Raghunathan

Short break

Discussions

07:00 PM - 07:40 PM GMT

Panel: Been Kim (Google Brain), Solon Barocas (Microsoft Research), Ricardo Silva (UCL), Rich Zemel (U. of Toronto)

07:40 PM - 08:20 PM GMT

Poster Session 2

08:20 PM - 08:30 PM GMT

Closing remarks: Miriam Rateike