Invited Speakers

The Roles of Simplicity and Interpretability in Fairness Guarantees

Abstract: We explore two arguments at the interface between the interpretability of algorithms and their fairness properties. We first discuss how a well-regulated algorithm for screening decisions, because it makes notions like feature sets and objective functions explicit, can be audited for evidence of discrimination in ways that would be essentially impossible for human decision-making. We then consider connections to a related fundamental point -- that as we simplify algorithms, reducing the range of features available to them, there is a precise sense in which we can find ourselves sacrificing accuracy and equity simultaneously. This talk will be based on joint work with Jens Ludwig, Sendhil Mullainathan, Manish Raghavan, and Cass Sunstein.


Bio: Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, and the roles they play in large-scale social and information systems. He is a member of the National Academy of Sciences and the National Academy of Engineering, and the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing.


On Prediction, Action and Interference

Abstract: Ultimately, we want to make the world less unfair by changing it. Making fair passive predictions is not enough, since our decisions will eventually affect how a societal system works. We will discuss ways of modelling hypothetical interventions so that particular measures of counterfactual fairness are respected: that is, how do sensitive attributes interact with our actions to cause an unfair distribution of outcomes, and, that being the case, how do we mitigate such uneven impacts within the space of feasible actions? To make matters even harder, interference is likely: what happens to one individual may affect another. We will discuss how to express assumptions about, and consequences of, such causative factors for fair policy making, accepting that this is a daunting task but that we owe the public an explanation of our reasoning.

Joint work with Matt Kusner, Chris Russell, and Joshua Loftus.


Bio: Ricardo Silva is an Associate Professor in the Department of Statistical Science at University College London and Adjunct Faculty of the Gatsby Computational Neuroscience Unit. He received his PhD from the newly formed Machine Learning Department at Carnegie Mellon University in 2005. Ricardo also spent two years at the Gatsby Computational Neuroscience Unit as a Senior Research Fellow, and one year as a postdoctoral researcher at the Statistical Laboratory in Cambridge. His research focuses on computational approaches for causal inference, graphical latent variable models, and relational models.


Fairness Through the Lens of Equality of Opportunity and Its Connection to Causality

Abstract: I begin by presenting a mapping between existing mathematical notions of fairness and economic models of equality of opportunity (EOP)—an extensively studied ideal of fairness in political philosophy. Through our conceptual mapping, many existing definitions of fairness, such as predictive value parity and equalized odds, can be interpreted as special cases of EOP. In this respect, the EOP interpretation serves as a unifying framework for understanding the normative assumptions underlying existing notions of fairness. I will conclude by discussing a causal interpretation of EOP-based notions of fairness and some thoughts on defining counterfactual notions of fairness.

Bio: Hoda Heidari is currently an Assistant Professor in the Machine Learning Department at Carnegie Mellon University. Her research is broadly concerned with the societal and economic aspects of Artificial Intelligence, and in particular, the issues of fairness and explainability for Machine Learning. She utilizes tools and methods from Computer Science and Social Sciences to quantify the inequities that arise when socially consequential decisions are automated.

Does Causal Thinking About Discrimination Assume a Can Opener?

Abstract: Increasingly, proof of whether systems, algorithmic and not, are racially discriminatory takes the form of statistical evidence purportedly showing race to have causally influenced some outcome. In this talk, I will discuss the relationship between quantitative social-scientific methods for estimating causal effects of race and our normative thinking about racial discrimination. I argue that all causal inference methodologies that seek to quantify causal effects of race embed what amount to substantive views about what race as a social category is and how race produces effects in the world. Though debates among causal inference methodologists are often framed as concerning which practices make for good statistical hygiene, I suggest that quantitative methods are much more straightforwardly normative than most scholars, social scientists and philosophers alike, have previously appreciated.


Thinking causally about race is, I want to suggest, at least as hard as the substantive discrimination question, for answering the question about race and causation in the social world requires answers to substantive normative questions about race, racial discrimination, and racial injustice more broadly. And so thinking about how race acts causally is neither easier than, nor a helpful reduction of, the moral and political question. If we've "solved" the causal problem, we've "solved" the substantive normative questions about race, racial discrimination, and racial injustice more broadly. It reminds one of the following joke:


A physicist, a chemist, and an economist are stranded on a desert island with no implements and a can of food. The physicist and the chemist each devise an ingenious mechanism for getting the can open. The economist says, "Assume we have a can opener!"


My argument is that tackling the racial discrimination problem by assuming we can draw a diagram of how race acts causally in the world is a bit like that: it is to assume we have precisely what we need; it is to assume a can opener!


Bio: Lily Hu is a PhD candidate in Applied Mathematics and Philosophy at Harvard University and a Fellow at the Jain Family Institute. She works on topics in machine learning theory, algorithmic fairness, and philosophy of (social) science, and political philosophy. Her relevant current work is on the metaphysics of causation and causal epistemology, with a particular focus in causal inference methods in the social sciences. She is especially interested in how various statistical frameworks treat and measure the "causal effect" of social categories such as race, and ultimately, how such methods are seen to back normative claims about racial discrimination and inequalities broadly.

Tutorial on Causality for Fairness

Bio: Moritz Hardt is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Hardt investigates algorithms and machine learning with a focus on reliability, validity, and societal impact. After obtaining a PhD in Computer Science from Princeton University, he held positions at IBM Research Almaden, Google Research, and Google Brain. Hardt is a co-founder of the Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and a co-author of the forthcoming textbook "Fairness and Machine Learning". He has received an NSF CAREER award, a Sloan fellowship, and best paper awards at ICML 2018 and ICLR 2017.