Dates: 24 May – 21 June 2022
6 sessions, 60 minutes each – LIVE at 12:00-1:00 pm (Central)
This six-part tutorial on risk communication for trustworthy AI begins by putting risk communication research into context and introducing the basics of risk perception and communication. It includes sessions on the role of trust in risk communication and on the design of risk communications and messages, as well as two related site-wide talks by guest speakers whose research addresses formalizing trust in AI and human-AI teaming, respectively. Discussions, polls, and exercises are included to promote active learning. The concluding session focuses on what we can learn from iconic risk communication studies and meta-analyses on two topics: warnings and fear appeals.
Session 1 (May 24): Intro to risk perception, risk communication. Recording; Slides
Session 2 (May 25, Site-wide meeting): Ana Marasović – Prerequisites, Causes, and Goals of Human Trust in AI. Recording
- Abstract: A frequently cited motivation for explainable AI (XAI) is to increase users' trust in AI. Although this statement is common, its specifics – such as what the prerequisites for human trust in AI are, or what goals the cognitive mechanism of trust serves – are not discussed and are often left to individual interpretation. In this talk, I will discuss the questions we address in our FAccT 2021 paper: How is interpersonal trust defined? What is AI being trusted with? What causes a model to be trustworthy? What enables such models to incur trust? What should an evaluation of trust satisfy? I will finish with ideas for how to extend our definitions.
- Bio: Ana Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and an incoming assistant professor at the University of Utah. Her research interests broadly lie in the fields of natural language processing, explainable AI, and vision-and-language learning. Her projects are motivated by a unified goal: improving interaction with, and control of, NLP systems to help people make these systems do what they want with confidence that they're getting exactly what they need. Prior to joining AI2, Ana obtained her PhD from Heidelberg University.
- Abstract: In this talk I will present a recent theoretical perspective on "trusting automation," which posits that the information-processing approach to the study of trust is reaching its limits, and that a relational approach is needed to advance the study and design of trustworthy systems in AI-enabled decision contexts. This perspective integrates past scholarship on trust in automation from an industrial and safety science perspective with more recent scholarship on trust in AI from a broader societal perspective. I will then propose multiple paths forward for adopting this relational trust perspective, using examples from my own lab. I will also share insights from the past ten years of studying trust in complex decision contexts and trust measurement, and discuss how related constructs such as transparency and explainability may also benefit from a relational perspective.
- Bio: Erin K. Chiou is an assistant professor of human systems engineering at Arizona State University, where she directs the Automation Design Advancing People and Technology (ADAPT) Laboratory. Recent work in her lab has focused on the study of social factors in human-automation work systems across defense, security, healthcare, and manufacturing settings. Before moving to Arizona, Erin received her PhD (2016) and MS (2013) in industrial and systems engineering from the University of Wisconsin-Madison, where she was also a National Science Foundation graduate research fellow. She spent several years working in various industries, including a medical device company and an international education startup, before pursuing her graduate degree. She received her BS in psychology and philosophy from the University of Illinois at Urbana-Champaign in 2008.
Recordings and slides will be available here the day after each session. To attend the sessions live, please contact firstname.lastname@example.org to be added to the session invitation.