Abstract: Powerful machine learning (ML) algorithms are trained on large text corpora, and human biases and stereotypes in the text can lead to problematic biases in the algorithms. I will first discuss our work on detecting and removing problematic biases from ML. Then I will turn the question around to explore how we can use ML as a microscope to quantify human and textual biases and address social science questions.
Bio: James Zou is an assistant professor of Biomedical Data Science and, by courtesy, of CS and EE at Stanford. He is also a Chan Zuckerberg investigator. His group works on both foundational questions of machine learning--new algorithms and theory--and applications to biotech and healthcare. He is also very interested in the broader social impacts and economics of AI.