Bio: Elias Bareinboim is an associate professor in the Department of Computer Science and the director of the Causal Artificial Intelligence Lab at Columbia University. He obtained his Ph.D. in Computer Science at the University of California, Los Angeles, advised by Judea Pearl. His research focuses on causal inference and its applications to data-driven fields (i.e., data science) in the health and social sciences as well as artificial intelligence and machine learning. He is particularly interested in understanding how to make robust and generalizable causal and counterfactual claims in the context of heterogeneous and biased data collections, including issues of confounding bias, selection bias, and external validity (transportability).

Path-specific effects and ML fairness

Abstract: Arguably, the consideration of causal effects along subsets of causal paths is required for understanding and addressing unfairness in most real-world scenarios. In this talk, I will share some of my thoughts on this goal and on the challenges of achieving it.


Bio: Silvia Chiappa is a Senior Staff Research Scientist in Machine Learning at DeepMind London and Honorary Professor at the Computer Science Department of University College London. She received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Machine Learning from École Polytechnique Fédérale de Lausanne (IDIAP Research Institute). Before joining DeepMind, she worked in the Empirical Inference Department at the Max Planck Institute for Intelligent Systems, at Microsoft Research Cambridge, and in the Statistical Laboratory at the University of Cambridge. Her research interests center on Bayesian & causal reasoning, graphical models, variational inference, time-series models, deep learning, and ML fairness and bias.

Causality and fairness in ML: promises, challenges & open questions

Abstract: In recent years, we have observed an explosion of research approaches at the intersection of causality and fairness in machine learning (ML). These works are often motivated by the promise that causality allows us to reason about the causes of unfairness both in the data and in the ML algorithm. However, existing causal fair ML approaches rely on strong assumptions, which hinder their practical application. In this talk, I will provide a quick overview of both the promises and the technical challenges of causal fair ML frameworks from a theoretical perspective. Finally, I will show how to leverage probabilistic ML to partially relax causal assumptions in order to develop more practical solutions to causal fair ML.


Bio: Isabel Valera is a full Professor of Machine Learning at the Department of Computer Science of Saarland University in Saarbrücken (Germany), and Adjunct Faculty at the MPI for Software Systems in Saarbrücken (Germany). She is a fellow of the European Laboratory for Learning and Intelligent Systems (ELLIS), where she is part of the Robust Machine Learning Program and of the Saarbrücken Artificial Intelligence & Machine Learning (SAM) Unit.

She was previously an independent group leader at the MPI for Intelligent Systems in Tübingen (Germany) and held a German Humboldt Post-Doctoral Fellowship and a “Minerva fast track” fellowship from the Max Planck Society. She obtained her PhD in 2014 and MSc degree in 2012 from University Carlos III of Madrid (Spain), and worked as a postdoctoral researcher at the MPI for Software Systems (Germany) and at the University of Cambridge (UK).

Towards Reliable and Robust Model Explanations

Abstract: As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this talk, I will present some of our recent research that sheds light on the vulnerabilities of popular post hoc explanation techniques such as LIME and SHAP, and also introduce novel methods to address some of these vulnerabilities. More specifically, I will first demonstrate that these methods are brittle, unstable, and vulnerable to a variety of adversarial attacks. Then, I will discuss two solutions to address some of the aforementioned vulnerabilities: (i) a Bayesian framework that captures the uncertainty associated with post hoc explanations and, in turn, allows us to generate explanations with user-specified levels of confidence, and (ii) a framework based on adversarial training that is designed to make post hoc explanations more stable and robust to shifts in the underlying data. I will conclude the talk by discussing our recent theoretical results, which shed light on the equivalence and robustness of state-of-the-art explanation methods.


Bio: Hima Lakkaraju is an Assistant Professor at Harvard University focusing on explainability, fairness, and robustness of machine learning models. She has also been working with various domain experts in criminal justice and healthcare to understand the real-world implications of explainable and fair ML. Hima has recently been named one of the 35 innovators under 35 by MIT Tech Review, and has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS. She has given invited workshop talks at ICML, NeurIPS, AAAI, and CVPR, and her research has also been covered by various popular media outlets including the New York Times, MIT Tech Review, TIME, and Forbes. For more information, please visit: https://himalakkaraju.github.io/

Generalizability, robustness and fairness in machine learning risk prediction models

Abstract: By leveraging principles of health equity, I will discuss the use of causal models and machine learning to address realistic challenges of data collection and model use across environments. Examples include a domain adaptation approach that improves prediction in under-represented population sub-groups by leveraging invariant information across groups when possible, and an algorithmic fairness method which specifically incorporates structural factors to better account for and address sources of bias and disparities.


Bio: Rumi Chunara is an Assistant Professor at NYU, in Computer Science & Engineering and the College of Global Public Health. Her research focuses on how we can use unstructured data to illuminate population-level epidemiology. She received her B.S. in Electrical Engineering at Caltech, S.M. in Electrical Engineering and Computer Science at MIT, and Ph.D. in Electrical and Medical Engineering at the Harvard-MIT Division of Health Sciences and Technology. Rumi is a recipient of a Caltech Merit Scholarship and an MIT Presidential Fellowship, and was named one of MIT Technology Review's 35 Innovators Under 35 (2014).

Lessons from robust machine learning

Abstract: Current machine learning (ML) methods are primarily centered around improving in-distribution generalization, where models are evaluated on new points drawn from nearly the same distribution as the training data. On the other hand, robustness and fairness involve reasoning about out-of-distribution performance, such as accuracy on protected groups or perturbed inputs, and reliability even in the presence of spurious correlations. In this talk, I will describe an important lesson from robustness: in order to improve out-of-distribution performance, we often need to question the common assumptions in ML. In particular, we will see that ‘more data’, ‘bigger models’, or ‘fine-tuning pretrained features’, which improve in-distribution generalization, often fail out-of-distribution.


Bio: Aditi Raghunathan is a postdoctoral researcher at UC Berkeley supported by Open Philanthropy. She is an incoming assistant professor at Carnegie Mellon University in Fall 2022. Aditi received her PhD in Computer Science at Stanford University, advised by Percy Liang. Previously, she obtained her B.Tech. (Hons.) in Computer Science from IIT Madras in 2016. She is interested in building robust machine learning systems with guarantees for trustworthy real-world deployment. Her research in robustness has been recognized by a Google Ph.D. Fellowship in Machine Learning and the Open Philanthropy AI Fellowship. Among other honors, she is also the recipient of the Anita Borg Memorial Scholarship and the Stanford School of Engineering Fellowship.

Panelists

Bio: Been Kim is a staff research scientist at Google Brain. Her research focuses on building interpretable machine learning. The vision of her research is to make humans empowered by machine learning, not overwhelmed by it. She has MS and PhD degrees from MIT. Before joining Brain, she was a research scientist at the Allen Institute for Artificial Intelligence (AI2) and an affiliate faculty member in the Department of Computer Science & Engineering at the University of Washington. Been has given tutorials on interpretability at ICML 2017, at the Deep Learning Summer School at the University of Toronto and the Vector Institute in 2018, and at CVPR 2018.

Bio: Solon Barocas is a Principal Researcher in the New York City lab of Microsoft Research, Assistant Professor in the Department of Information Science at Cornell, and Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard. His current research explores ethical and policy issues in artificial intelligence, particularly fairness in machine learning, methods for bringing accountability to automated decision-making, and the privacy implications of inference. Solon co-founded the annual workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) and later established the ACM conference on Fairness, Accountability, and Transparency (FAccT).

Bio: Ricardo Silva is a Professor of Statistical Machine Learning and Data Science at University College London. He received his PhD from the newly formed Machine Learning Department at Carnegie Mellon University in 2005. Ricardo also spent two years at the Gatsby Computational Neuroscience Unit as a Senior Research Fellow, and one year as a postdoctoral researcher at the Statistical Laboratory in Cambridge. His research focuses on computational approaches for causal inference, graphical latent variable models, and relational models.

Bio: Richard Zemel is a Professor of Computer Science at the University of Toronto, where he has been a faculty member since 2000. Prior to that, he was an Assistant Professor in Computer Science and Psychology at the University of Arizona and a Postdoctoral Fellow at the Salk Institute and at Carnegie Mellon University. He received a B.Sc. degree in History & Science from Harvard University in 1984 and a Ph.D. in Computer Science from the University of Toronto in 1993. His research contributions include foundational work on systems that learn useful representations of data without any supervision, methods for learning to rank and recommend items, and machine learning systems for automatic captioning and answering questions about images. His awards include an NVIDIA Pioneers of AI Award, a Young Investigator Award from the Office of Naval Research, a Presidential Scholar Award, two NSERC Discovery Accelerators, and seven Dean’s Excellence Awards at the University of Toronto. He is a Fellow of the Canadian Institute for Advanced Research and is on the Executive Board of the Neural Information Processing Systems Foundation, which runs the premier international machine learning conference.