Abstract
With the rise of big data and machine learning, we have seen unprecedented real-world applications, from climate change adaptation and mitigation to human trafficking detection. However, a downside to these advances is that many of these models are not interpretable: we do not know what is going on inside them or how they arrive at their decisions and predictions.
Dubbed "black boxes," these models are dangerous because their decisions can contain unforeseen biases.
In this talk, we discuss the variety of issues that black box machine learning models present and ways in which we can open them up, including conducting in-depth ablation studies (sketched below).
Breaking open black boxes is often time-consuming and unglamorous, but it is necessary for security and equity in machine learning research and deployment: it helps uncover unforeseen biases in the decision-making process, which is especially important in fields where bias and discrimination can cause real harm.
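The talk does not prescribe a specific recipe, but as a rough illustration of what a feature-level ablation study can look like in practice, here is a minimal Python sketch: retrain the model with each input feature removed in turn and measure the drop in held-out accuracy. The dataset, RandomForestClassifier model, and accuracy metric are placeholder assumptions for illustration, not part of the talk.

# Minimal feature-ablation sketch (illustrative, not the speaker's code):
# retrain with each feature dropped and compare held-out accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def fit_and_score(X_tr, X_te):
    # Train a fresh model on the (possibly ablated) features and score it.
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_train)
    return accuracy_score(y_test, model.predict(X_te))

baseline = fit_and_score(X_train, X_test)
for i in range(X.shape[1]):
    # Ablate feature i by dropping its column from both splits.
    ablated = fit_and_score(np.delete(X_train, i, axis=1),
                            np.delete(X_test, i, axis=1))
    print(f"feature {i}: accuracy change {ablated - baseline:+.4f}")

A large accuracy drop when a feature is removed flags that the model leans heavily on it; if that feature is a proxy for a protected attribute, the ablation surfaces exactly the kind of unforeseen bias the talk warns about.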
Where
Track 2
When
Saturday from 16:45 to 17:30
Speaker:
Thomas Y. Chen
Machine Learning Researcher
Bio:
Thomas Chen is an early-career machine learning researcher from New Jersey who is passionate about machine learning, computer vision, and artificial intelligence. He is highly involved in scientific research, especially in applying ML and AI to real-world issues facing society (e.g., deep learning-based computer vision for damage assessment after natural disasters). He has presented his work in workshop sessions at major conferences such as NeurIPS and has been an invited speaker at venues including the IEEE Conference on Technologies for Sustainability and the Energy Anthropology Network.