Speakers

Bio: Razieh Nabi is an endowed Rollins Assistant Professor in the Department of Biostatistics and Bioinformatics at the Emory Rollins School of Public Health. Her research sits at the intersection of machine learning and statistics, focusing on causal inference and its applications in healthcare and social justice. More broadly, her work spans causal inference, mediation analysis, algorithmic fairness, semiparametric inference, graphical models, and missing data. She received her PhD (2021) in Computer Science from Johns Hopkins University.

A causal and counterfactual view of (un)fairness in automated decision making 

Abstract: Despite the illusion of objectivity, algorithms rely on the subjective judgements of human beings at every step of their development. A particular worry in the context of automated decision making is perpetuating injustice, i.e., when maximizing “utility” maintains, reinforces, or even introduces unfair dependencies between sensitive features (e.g., race, gender, age, sexual orientation), decisions, and outcomes. It is therefore essential that automated decisions respect principles of fairness, particularly in socially impactful settings such as healthcare, social welfare, and criminal justice. In this talk, we show how to use methods from causal inference and constrained optimization to make optimal but fair decisions that would “break the cycle of injustice” by correcting for the unfair dependence of both decisions and outcomes on sensitive features.
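
One way to make the constrained-optimization framing concrete is the following illustrative sketch (assumed notation, not necessarily the exact formulation used in the talk): choose a decision policy that maximizes expected utility while forcing an impermissible (e.g., path-specific) causal effect of the sensitive feature on the outcome to be close to zero.

```latex
% Illustrative only (assumed notation): policy \pi in a class \Pi, outcome/utility Y,
% sensitive feature A, and an impermissible path-specific effect PSE(A -> Y)
% that fairness requires to be near zero, within a tolerance \epsilon.
\begin{aligned}
\max_{\pi \in \Pi} \quad & \mathbb{E}_{\pi}[Y] \\
\text{subject to} \quad & -\epsilon \;\le\; \mathrm{PSE}_{\pi}(A \to Y) \;\le\; \epsilon .
\end{aligned}
```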

Bio: Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Some of his group’s recent projects include proof-of-learning, collaborative learning beyond federation, dataset inference, and machine unlearning.  Nicolas is an Alfred P. Sloan Research Fellow in Computer Science. His work on differentially private machine learning was awarded an outstanding paper at ICLR 2022 and a best paper at ICLR 2017. He serves as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and an area chair of NeurIPS. He co-created and will co-chair the first IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) in 2023. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain where he still spends some of his time.


Is trustworthy ML possible in decision-making systems requiring fairness?

Abstract: As machine learning is increasingly used for decision making in high-stakes areas like healthcare, it is important that machine learning algorithms exhibit not only fairness but also privacy, interpretability, and related properties. In this talk, we explore the intersection of fairness with these other facets of trustworthy machine learning. First, we consider fairness in privacy-preserving machine learning for healthcare. In such settings, methods for differentially private learning provide a general-purpose approach to learning models with privacy guarantees. The resulting privacy-preserving models neglect information from the tails of a data distribution, resulting in a loss of accuracy that can disproportionately affect small groups. Our results highlight lesser-known limitations of methods for differentially private learning in healthcare: models that exhibit steep trade-offs between privacy and utility, and models whose predictions are disproportionately influenced by large demographic groups in the training data. Second, we consider the intersection of fairness with interpretability in high-stakes decision making. We investigate the issue of fairwashing, in which model explanation techniques are manipulated to rationalize decisions taken by an unfair black-box model using deceptive surrogate models. We prove that fairwashing is difficult to avoid due to an irreducible factor: the unfairness of the black-box model. All in all, this calls for advances in making data representations themselves fair.
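
To make the small-groups point concrete, here is a minimal sketch (not the speakers' code; the privacy budget, group sizes, and per-group rate below are assumed values) of the underlying phenomenon: the same amount of differential-privacy noise, illustrated with the basic Laplace mechanism on a per-group count, is a much larger relative distortion for a rare group than for a common one.

```python
# Minimal illustration (assumed values), not the speakers' experimental setup:
# the same Laplace noise distorts a small group's statistic far more, in relative
# terms, than a large group's.
import numpy as np

rng = np.random.default_rng(0)
epsilon = 1.0                                        # assumed privacy budget
group_sizes = {"majority": 50_000, "minority": 500}  # assumed group sizes
true_rate = 0.7                                      # hypothetical per-group rate to estimate

for group, n in group_sizes.items():
    true_count = true_rate * n
    # Laplace mechanism for a count (sensitivity 1): add Laplace(scale = 1/epsilon) noise.
    noisy_count = true_count + rng.laplace(scale=1.0 / epsilon)
    estimate = noisy_count / n
    noise_std_on_rate = np.sqrt(2) / (epsilon * n)   # std of the noise on the estimated rate
    print(f"{group:8s} n={n:6d}  true rate={true_rate:.3f}  "
          f"private estimate={estimate:.3f}  noise std on rate={noise_std_on_rate:.5f}")
```

The same intuition carries over to differentially private model training: the noise needed for privacy washes out signal that only a small subgroup provides, which is one source of the disparate accuracy loss described above.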

Bio: Deirdre K. Mulligan is a Professor in the School of Information at UC Berkeley, a faculty Director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Mulligan and Bamberger received the 2016 International Association of Privacy Professionals Leadership Award for their research contributions to the field of privacy protection. She is a member of the Defense Advanced Research Projects Agency's Information Science and Technology study group (ISAT) and a member of the National Academy of Sciences Forum on Cyber Resilience. Prior to joining the School of Information, she was a Clinical Professor of Law, founding Director of the Samuelson Law, Technology & Public Policy Clinic, and Director of Clinical Programs at the UC Berkeley School of Law. She co-chaired Microsoft's Trustworthy Computing Academic Advisory Board with Fred B. Schneider from 2003 to 2014. Prior to Berkeley, she served as staff counsel at the Center for Democracy & Technology in Washington, D.C.


Contestation and Participation in Model Design

Abstract: As machine learning models are used to inform or make consequential decisions affecting people’s life chances, it becomes critical for models not only to attend to values such as fairness and privacy but also to support the meaningful participation of domain experts and other stakeholders in determining how to do so. Causal models and differential privacy provide hooks to support such participation and contestation about modeling choices. Considering causal models (particularly causal graphs) and epsilon (along with other aspects of a differential privacy implementation) as boundary objects (Star and Griesemer 1989; Bowker and Star 1999) reveals the important role they can play in supporting collaborative reasoning about contested concepts, facilitating stakeholder participation in decisions about how to meet policy goals within technical systems, and maintaining the publicness of embedded policy choices.

Bio: Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the COMETE team. She was a Full Professor at the University of Genova, Italy (1994-1997) and at Penn State University, USA (1998-2002). Palamidessi's research interests include Privacy, Machine Learning, Fairness, Secure Information Flow, Formal Methods, and Concurrency. In 2019 she obtained an ERC Advanced Grant to conduct research on Privacy and Machine Learning. She has been PC chair of various conferences, including LICS and ICALP, and a PC member of more than 120 international conferences. She is on the editorial boards of several journals, including IEEE Transactions on Dependable and Secure Computing, Mathematical Structures in Computer Science, Theoretics, the Journal of Logical and Algebraic Methods in Programming, and Acta Informatica. She serves on the Executive Committees of ACM SIGLOG, CONCUR, and CSL.


On the interaction between Privacy, Fairness and Accuracy

Abstract: Two of the main concerns about Ethical AI are Privacy and Fairness. In this talk, I will present some preliminary results about the consequences that obfuscating data with local differential privacy mechanisms can have for accuracy and various fairness notions in machine learning. Then, I will focus on the notion of equality of opportunity (EO) proposed by Hardt et al. EO is compatible with maximal accuracy when the target label is deterministic with respect to the input features. In the probabilistic case, however, the issue is more complicated: it has been shown that under differential privacy constraints, there are data sources for which EO can only be achieved at the total detriment of accuracy, in the sense that a classifier that satisfies EO cannot be more accurate than a trivial (i.e., input-independent) classifier. I will show that this result can be strengthened by removing the privacy constraint: namely, for certain data sources, the most accurate classifier that satisfies EO is a trivial classifier. Furthermore, I will discuss the trade-off between accuracy and EO loss (opportunity difference), and provide a sufficient condition on the data source under which EO and non-trivial accuracy are compatible.
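
For reference, equality of opportunity (Hardt et al., 2016) requires equal true-positive rates across groups, and the opportunity difference mentioned above is the gap between them; one standard way to write both (the exact EO loss used in the talk may be defined slightly differently) is:

```latex
% Equality of opportunity (Hardt et al., 2016): equal true-positive rates across groups.
P(\hat{Y} = 1 \mid A = a, Y = 1) \;=\; P(\hat{Y} = 1 \mid A = a', Y = 1)
\qquad \text{for all groups } a, a'.

% Opportunity difference (EO loss) for a binary sensitive feature A:
\mathrm{OD}(\hat{Y}) \;=\; \bigl|\, P(\hat{Y} = 1 \mid A = 1, Y = 1) - P(\hat{Y} = 1 \mid A = 0, Y = 1) \,\bigr| .
```

A trivial (input-independent) classifier satisfies EO automatically, since its true-positive rate is the same constant in every group; the question raised in the abstract is when one can do better than that while keeping the opportunity difference at zero.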

Panelists

Bio: Kristian Lum was previously an Assistant Research Professor in the Department of Computer and Information Science at the University of Pennsylvania and the Lead Statistician at the Human Rights Data Analysis Group. She is a founding member of the ACM Conference on Fairness, Accountability, and Transparency and has served in various leadership roles since its inception, growing this community of scholars and practitioners who care about the responsible use of “AI” systems. Dr. Lum’s research examines the (un)fairness of predictive models, with particular attention to those used in criminal justice settings. She received a PhD in Statistics from Duke University.

Bio: Joshua Loftus is an Assistant Professor of Statistics at the London School of Economics. Joshua’s research aims to improve practices in data science and machine learning to reduce the impact of bias, particularly biases associated with social harms and scientific reproducibility. He is also broadly interested in high-dimensional statistics and causal inference, and in teaching theory, applications, and best practices in ethical data science. Before joining LSE, Joshua earned his PhD in Statistics at Stanford University, was a Research Fellow at the Alan Turing Institute affiliated with the University of Cambridge, and was an Assistant Professor at New York University from 2017 to 2020.

Bio: Dr. Rachel Cummings is an Assistant Professor of Industrial Engineering and Operations Research at Columbia University. Her research interests lie primarily in data privacy, with connections to machine learning, algorithmic economics, optimization, statistics, and public policy. Dr. Cummings received her Ph.D. in Computing and Mathematical Sciences from the California Institute of Technology, her M.S. in Computer Science from Northwestern University, and her B.A. in Mathematics and Economics from the University of Southern California.  She is the recipient of an NSF CAREER award, a DARPA Young Faculty Award, an Apple Privacy-Preserving Machine Learning Award, the ACM SIGecom Doctoral Dissertation Honorable Mention, and the Best Paper Award at the 2014 International Symposium on Distributed Computing. 

Bio: Jake Goldenfein is a Senior Lecturer at Melbourne Law School, University of Melbourne, and an Associate Investigator at the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. He is a law and technology scholar studying platform regulation, surveillance regulation, and the governance of automated decision-making.

Bio: Sara Hooker leads Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. Cohere For AI supports fundamental research that explores the unknown and is focused on creating more points of entry into machine learning research. Prior to Cohere For AI, Sara was a researcher at Google Brain working on training models to fulfill multiple desirable properties (efficiency, fairness, robustness, interpretability).