Focus 3: Foundational Research in AI Risk Communication (RC) for Environmental Science (ES) Hazards
“When [weather forecasters] cannot easily understand the workings of a probabilistic product or evaluate its accuracy, this reduces their trust in information and their willingness to use it.”
From Recommendations for Developing Useful and Usable Convection-Allowing Model Ensemble Information for NWS Forecasters (Demuth et al. 2020)
Leaders: Demuth (NCAR), Bostrom (UW)
Members: Thorncroft (Albany); Ebert-Uphoff, Musgrave, Schumacher (CSU); Hickey (Google); Williams (IBM); Cains, Gagne, Wirz (NCAR); He, Lowe (NCSU); McGovern (OU); King, Tissot, White (TAMUCC); Madlambayan (UW).
1. Increase knowledge and understanding of how transparency, explanation, reproducibility, and representation of uncertainty influence trust in AI for environmental science (ES) for influential user groups.
- Develop trustworthy AI interview and knowledge elicitation protocols to probe the roles of specific types of models and model updates in environmental forecasting decisions, attitudes toward them, and how these are influenced by transparency, explanations, reproducibility, representations of uncertainty, trust in the model(s), model outputs, and modelers, and by other contextual factors in environmental science.
- Develop candidate measures of trust, satisfaction, understanding, and willingness to rely on AI/ML for environmental science.
- Two GRAs conducted a systematic review of the AI/ML literature on trust, trustworthiness, interpretability, and explainability
- Developed initial draft summaries of empirical findings on trust and trustworthiness in AI and on the interpretability and explainability of AI
- Began interdisciplinary reading group meetings
- Held initial discussions and drafted a glossary and taxonomy
- Shared initial literature review findings in site-wide meetings and at the TAI4ES summer school
- An RC tutorial is being developed for Spring 2022
- Planning has begun for an RC workshop in Summer 2022
2. Develop models to estimate how attitudes and perception of AI trustworthiness influence risk perception and use of AI for ES
- Enhance existing and develop new theoretical frameworks for experts’ assessment and uses of trustworthy AI/ML information
- Model and test influence of existing and newly developed XAI and interpretable AI approaches and AI/ML-interactions for environmental science on trust and use of AI/ML for environmental science.
- Predict influence of new XAI approaches and novel AI/ML interactions on trust and use of AI/ML for environmental science.
Forecaster Interviews about XAI for Severe Convection
- Incorporated cognitive think-aloud and interactive methods into the interview instrument, to improve type and quality of the interview data collected
- Conducted 18 interviews with forecasters from NWS and IBM
- For the 3D-CNN storm mode guidance, conducted initial inductive analysis of some parts of the interview data, to provide feedback to developers for guidance refinement for testing in the Hazardous Weather Testbed
- Co-developed semi-structured interview protocol; in doing so, identified the need to determine definitions of explainability, interpretability, trustworthiness, etc.
- Developed, tested, and revised a coding scheme for deductively analyzing interview data and to foster collaborative analysis with AI2ES team members
- Conducted pre-tests of interview protocol, identified gaps, revised accordingly, and finalized the interview protocol
- Developed a sampling frame for interviews and initiated sampling for formal interviews
- Conducting formal interviews and cleaning the interview data
Trustworthiness of ML guidance for Severe Convection
- Co-developed the survey
- Integrated key features of XAI into development of survey items, e.g., performance, interactivity, failure modes
- Collected n=92 completed survey responses (70% response rate) from testbed participants (n=36 forecasters, n=38 researchers, and others) in May-June 2021
- Analyzed survey results
- Presented at NOAA 3rd AI workshop in September 2021
Interviews with Users about Winter and Coastal/Ocean Use Cases
- Guiding Supervised ML for Precipitation from Cameras (New York Mesonet):
- Iteratively developed, tested, and refined the formal coding scheme for the images
- Achieved intercoder reliability with the coding scheme (Krippendorff’s alpha > 0.8)
- Began hand labeling images to train the model
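The intercoder reliability threshold reported above (Krippendorff's alpha > 0.8) can be checked with a short script. The following is a minimal sketch of Krippendorff's alpha for nominal codes with complete data; the category labels are hypothetical, not the project's actual image-coding scheme:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.
    ratings: one list of coder-assigned codes per unit (e.g., per image);
    units coded by fewer than two coders are skipped."""
    o = Counter()  # coincidence matrix: o[(c, k)] = weighted value pairings
    for unit in ratings:
        unit = [v for v in unit if v is not None]
        m = len(unit)
        if m < 2:
            continue
        for c, k in permutations(unit, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()  # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)        # observed disagreement
    d_e = sum(n_c[c] * n_c[k]                                # expected disagreement
              for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Hypothetical codes from two coders on six camera images:
perfect = [["rain", "rain"], ["snow", "snow"], ["rain", "rain"],
           ["none", "none"], ["snow", "snow"], ["rain", "rain"]]
print(krippendorff_alpha_nominal(perfect))  # 1.0 (perfect agreement)
```

Values of 0.8 and above are conventionally treated as acceptable reliability, which is the threshold the coding scheme above was refined to meet.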
- Trustworthy (X)AI for FogNet with user interviews:
- Co-developed a first draft of the think-aloud interview guide for pre-testing interviews.
- Developed a preliminary output format for the FogNet model
- Integrated verification metrics focused on skill and rare events to better understand their importance for forecasters
- Integrated XAI results into the interview guide, which will provide the first empirical evaluation of end users’ perceptions of these techniques.
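The report does not specify which verification metrics were integrated for FogNet; as a sketch, the following computes standard rare-event skill scores (POD, FAR, CSI, and the Peirce skill score) from a 2x2 contingency table, with illustrative counts:

```python
def contingency_counts(forecasts, observations):
    """Tally a 2x2 contingency table from paired binary yes/no series."""
    hits = misses = false_alarms = correct_negs = 0
    for f, o in zip(forecasts, observations):
        if f and o:
            hits += 1
        elif o:
            misses += 1
        elif f:
            false_alarms += 1
        else:
            correct_negs += 1
    return hits, misses, false_alarms, correct_negs

def rare_event_metrics(hits, misses, false_alarms, correct_negs):
    """Skill scores suited to rare events, where raw accuracy misleads
    (a model that never forecasts fog can still look ~90% 'accurate')."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    pofd = false_alarms / (false_alarms + correct_negs)
    pss = pod - pofd                                # Peirce skill score
    return {"POD": round(pod, 3), "FAR": round(far, 3),
            "CSI": round(csi, 3), "PSS": round(pss, 3)}

# Illustrative counts for a rare event (10 fog events in 100 cases):
print(rare_event_metrics(hits=8, misses=2, false_alarms=4, correct_negs=86))
# {'POD': 0.8, 'FAR': 0.333, 'CSI': 0.571, 'PSS': 0.756}
```

Scores like these make a model's behavior on the rare events themselves visible to forecasters, which is the point of the interview questions about verification.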
- Trustworthy AI for ocean eddy prediction with users:
- Established interdisciplinary research team across NCAR, UW, and NCSU.
- Leveraging existing research partnerships to connect with relevant end user groups.
- The Oceans/RC team held an informational interview with a met-ocean specialist from an oil company to better understand the end user decision space, general data needs, and decision timeline.
- Adapting the foundation developed, at considerable effort, for the severe convective use case to the winter and coastal use cases
- Began the risk comm-AI-winter working group and established regular meetings
- Met with DOT partners to learn about user needs
- Currently co-developing a DOT interview protocol and a prototype AI/ML product to use in the interviews
- Began the risk comm-AI-coastal and ocean working groups and established regular meetings
- Initiated two new protocols based on the current coastal team’s AI/ML efforts
3. Develop principled methods of using this knowledge and modeling to inform development of trustworthy AI approaches and content, and the provision of AI-based information to user groups for improved environmental decision making.
- Develop research methodologies to create and evaluate AI/ML environmental science information in users' real-world decision-making environments, including unobtrusive and low-response-burden evaluative approaches for use in operational contexts.
- Develop trustworthy AI/ML information that is deemed useful and is used by different decision-makers across environmental science domains