Breakout Sessions
VIRTUAL NEURIPS WEBSITE FOR THE WORKSHOP (requires a NeurIPS registration)
Breakout Session 1: 11:55 AM - 12:55 PM GMT
Breakout Session 2: 05:55 PM - 06:55 PM GMT
Each one-hour breakout session will be structured around a list of questions on one aspect of algorithmic fairness. Questions will be compiled and voted on prior to the workshop using onlinequestions.org. Each question will be briefly presented by one of the facilitators, followed by an open discussion.
Breakout Session 1: 11:55 AM - 12:55 PM GMT
The three breakout sessions below will run in parallel. Please find the Zoom link for each session in the schedule on the virtual NeurIPS website.
Algorithmic Fairness in Health
Leads: Natalie Harris, Martin Seneviratne, Berk Ustun
Facilitators: Mayank Daswani, Alan Karthikesalingam
Moderator: Jessica Schrouff
Abstract:
Machine learning presents unique fairness challenges in health and medicine. There is debate around the requirements for fair machine learning in healthcare applications and the applicability of existing techniques for measuring and promoting equitable health outcomes. Demographic attributes such as age, sex, and race encode useful physiological information, but may also reflect biased healthcare practices and inequalities in social determinants of health. In many health applications, datasets contain a limited number of samples and a large number of features, and suffer from missingness. Finally, models will often make continuous predictions (i.e., triggered on a rolling basis over a patient's care journey) for outcomes that are difficult to label. Together, these factors make it difficult to understand or mitigate performance disparities that can perpetuate inequitable care for patients from marginalized communities and underrepresented minorities.
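As a concrete starting point for the measurement side of this discussion, the sketch below shows one common way of quantifying the performance disparities mentioned above: computing error rates separately for each demographic group. It is a minimal sketch; all names and data are illustrative placeholders, not drawn from any real health dataset.

    import numpy as np

    def per_group_rates(y_true, y_pred, groups):
        """True-positive and false-positive rates per demographic group."""
        rates = {}
        for grp in np.unique(groups):
            m = groups == grp
            tpr = y_pred[m & (y_true == 1)].mean()  # P(pred = 1 | y = 1, group)
            fpr = y_pred[m & (y_true == 0)].mean()  # P(pred = 1 | y = 0, group)
            rates[grp] = {"TPR": tpr, "FPR": fpr}
        return rates

    # Toy example with random labels and predictions, standing in for a real model.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = rng.integers(0, 2, size=1000)
    groups = rng.choice(["group_a", "group_b"], size=1000)
    print(per_group_rates(y_true, y_pred, groups))

Even this simple audit runs into the issues the abstract raises: small per-group sample sizes make the estimated rates noisy, and missingness can bias them.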
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122001.
On Prediction, Action and Interference
Lead: Ricardo Silva
Facilitator: Matt Kusner
Abstract:
Ultimately, we want the world to be less unfair by changing it. Making fair passive predictions is not enough: our decisions will eventually have an effect on how a societal system works. We will discuss ways of modelling hypothetical interventions so that particular measures of counterfactual fairness are respected: that is, how do sensitive attributes interact with our actions to cause an unfair distribution of outcomes, and, that being the case, how do we mitigate such uneven impacts within the space of feasible actions? To make matters even harder, interference is likely: what happens to one individual may affect another. We will discuss how to express assumptions about, and consequences of, such causal factors for fair policy making, accepting that this is a daunting task but that we owe the public an explanation of our reasoning.
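For concreteness, one formalization frequently discussed under this heading is the counterfactual fairness criterion of Kusner et al. (2017). In a causal model with sensitive attribute A, features X, and background variables U, a predictor \hat{Y} is counterfactually fair if, for every context X = x, A = a and every alternative attribute value a',

    P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a) \quad \text{for all } y,

where \hat{Y}_{A \leftarrow a} denotes the prediction under an intervention that sets A to a. The interference setting discussed above goes beyond this criterion, since an intervention on one individual may change another individual's outcome.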
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122002.
Fairness through the lens of equality of opportunity and its connection to causality
Lead: Hoda Heidari
Facilitator: Awa Dieng
Abstract:
I begin by presenting a mapping between existing mathematical notions of fairness and economic models of equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. Through this conceptual mapping, many existing definitions of fairness, such as predictive value parity and equalized odds, can be interpreted as special cases of EOP. In this respect, the EOP interpretation serves as a unifying framework for understanding the normative assumptions underlying existing notions of fairness. I will conclude by discussing a causal interpretation of EOP-based notions of fairness and some thoughts on defining counterfactual notions of fairness.
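For reference, the two criteria named above are commonly formalized as follows, for a binary predictor \hat{Y}, outcome Y, and group attribute A:

    Equalized odds:          P(\hat{Y} = 1 \mid A = a, Y = y) is equal across groups a, for each y \in \{0, 1\}
    Predictive value parity: P(Y = 1 \mid \hat{Y} = \hat{y}, A = a) is equal across groups a, for each \hat{y} \in \{0, 1\}

Roughly, the EOP mapping interprets such criteria as differing in which factors individuals are, normatively, held accountable for.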
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122003.
Breakout Session 2: 05:55 PM - 06:55 PM GMT
The three breakout sessions below will run in parallel. Please find the Zoom link for each session on the virtual NeurIPS website.
Can counterfactuals be an effective method for achieving equitable AI?
Leads: Kiana Alikhademi, Emma Drobina, Dr. Juan E. Gilbert
Facilitators: Brianna Richardson, Diandra Prioleau
Moderator: Matt Kusner
Abstract:
Novel applications of machine learning (ML) and artificial intelligence (AI) are increasingly emerging across multiple industries, including healthcare, law enforcement, and human resources (HR). Automation is promising in these domains because it allows for faster and more efficient decision-making. However, many of these ML/AI technologies are under heavy scrutiny for patterns of bias and unfairness that have emerged from their use.
Research in the area of fair, accountable, transparent, and explainable (FATE) AI technologies has risen substantially in the last decade. Many researchers have considered causality, specifically counterfactuals, as a method for achieving fair ML models. The term 'counterfactual' refers to a hypothetical scenario describing how a situation would have unfolded had a given factor been different. In the context of fairness and ML, this most often refers to how the classification an ML model assigns to an instance would change if a feature value were changed. Of particular interest are features that encode demographic and other protected information, as well as features that can serve as proxies for that information. We intend to foster discussion on whether counterfactuals and causality are effective means of achieving equitable AI. For the purposes of this session, equity with respect to AI will refer to the accessibility of AI systems, the distribution of preferred outcomes, the minimization of implicit biases, and the concurrent optimization of accuracy for each algorithm.
During this breakout session, we will work to answer questions about the applicability of counterfactuals for achieving equitable AI and the possible barriers to their use.
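As a concrete illustration of the attribute-flip reading of counterfactuals described above, the sketch below changes a protected feature of one instance and checks whether a model's classification changes. It is a minimal sketch using synthetic data and a scikit-learn classifier; the column index and all values are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    X[:, 0] = rng.integers(0, 2, size=500)         # column 0: binary protected attribute
    y = (X[:, 1] + 0.8 * X[:, 0] > 0).astype(int)  # outcome correlated with that attribute
    model = LogisticRegression().fit(X, y)

    def flip_test(model, x, protected_idx=0):
        """Predict before and after flipping the protected feature of one instance."""
        x_cf = x.copy()
        x_cf[protected_idx] = 1 - x_cf[protected_idx]
        return (model.predict(x.reshape(1, -1))[0],
                model.predict(x_cf.reshape(1, -1))[0])

    original, flipped = flip_test(model, X[0])
    print("prediction:", original, "-> after flip:", flipped)

Note that this naive test changes only the attribute itself: features acting as proxies for the attribute are left untouched, which is precisely the gap between a simple flip and a genuine causal intervention, and one of the barriers this session may discuss.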
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122004.
The Roles of Simplicity and Interpretability in Fairness Guarantees
Lead: Jon Kleinberg
Facilitator: Awa Dieng
Abstract:
We explore two arguments at the interface between the interpretability of algorithms and their fairness properties. We first discuss how a well-regulated algorithm for screening decisions, because it makes notions like feature sets and objective functions explicit, can be audited for evidence of discrimination in ways that would be essentially impossible for human decision-making. We then consider connections to a related fundamental point: as we simplify algorithms, reducing the range of features available to them, there is a precise sense in which we can find ourselves sacrificing accuracy and equity simultaneously.
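Purely as a toy illustration of that last point (our construction under assumed synthetic data, not the talk's model), the sketch below drops a feature that is especially informative for one group; the simplified model can lose overall accuracy and widen the between-group accuracy gap at the same time.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 4000
    g = rng.integers(0, 2, size=n)               # group membership
    x1 = rng.normal(size=n)                      # informative for everyone
    x2 = rng.normal(size=n)                      # informative mainly for group 1
    score = x1 + np.where(g == 1, 2.0, 0.2) * x2
    y = (score + rng.normal(scale=0.5, size=n) > 0).astype(int)

    def accuracy_by_group(features):
        """Fit a logistic model on the given features; report accuracy per group."""
        X = np.column_stack(features)
        pred = LogisticRegression().fit(X, y).predict(X)
        return [float((pred == y)[g == k].mean()) for k in (0, 1)]

    print("full model   (x1, x2):", accuracy_by_group([x1, x2]))
    print("simple model (x1 only):", accuracy_by_group([x1]))

Dropping x2 barely affects group 0 but substantially degrades accuracy for group 1, so simplification costs both overall accuracy and equity in this toy setting.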
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122005.
Does Causal Thinking About Discrimination Assume a Can Opener?
Lead: Lily Hu
Facilitator: Fernando Diaz
Abstract:
Increasingly, proof of whether systems, algorithmic or not, are racially discriminatory takes the form of statistical evidence purporting to show that race causally influenced some outcome. In this talk, I will discuss the relationship between quantitative social-scientific methods for estimating causal effects of race and our normative thinking about racial discrimination. I argue that all causal inference methodologies that seek to quantify causal effects of race embed what amount to substantive views about what race as a social category is and how race produces effects in the world. Though debates among causal inference methodologists are often framed as concerning which practices make for good statistical hygiene, I suggest that quantitative methods are much more straightforwardly normative than most scholars, social scientists and philosophers alike, have previously appreciated.
Please submit questions to be discussed during this breakout session at onlinequestions.org Event ID: 12122006.