Educational Resources for AI and Machine Learning


Short Courses and Tutorials

Getting Started With Machine Learning

Online tutorial and resources

Instructor: Tom Beucler, University of Lausanne

A non-exhaustive list of resources to help environmental scientists get started with machine learning, including tutorials with code. 

Link to site

Trustworthy AI for Environmental Science Summer School

Dates: 27 June – 30 June, 2022

Offered by AI2ES and NCAR, with partners LEAP, NCAI, and the Radiant Earth Foundation

Participants will gain an understanding of:

  • the foundations of trustworthiness for AI 
  • explanatory AI (XAI) and how explanations, physics, and robustness can help build trust in AI
  • the relationship between ethics and trustworthiness
  • how machine-learning systems have been developed for a range of environmental science applications

Interactive machine learning Trust-a-thon with beginner and advanced tracks

Course Information

Link to the course’s interactive blog for students, slides, and GitHub.

Summer School flyer

Deep Learning: Four-part Tutorial

Starting 24 May 2022

4 sessions, 90 minutes each 

Session 1 (May 24): Linear regression; logistic regression; basic neural networks and what they can compute
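The Session 1 topics can be sketched in a few lines of NumPy. This is an illustrative sketch, not course material: the toy dataset, learning rate, and iteration count below are invented for the example. It fits a linear regression in closed form and a logistic regression by gradient descent.

```python
import numpy as np

# Toy 1-D dataset: y = 2x + 1 plus small noise (invented for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + 0.05 * rng.normal(size=100)

# Linear regression: closed-form least squares on the design matrix [x, 1]
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]  # recovers roughly slope 2, intercept 1

# Logistic regression: gradient descent on the cross-entropy loss
labels = (y > 1.0).astype(float)              # binary target derived from y
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))    # sigmoid turns scores into probabilities
    w -= 0.5 * np.mean((p - labels) * x)      # gradient step, learning rate 0.5
    b -= 0.5 * np.mean(p - labels)
```

A single sigmoid unit like the logistic model above is also the simplest neural network, which is why the three topics are grouped in one session.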

Session 2 (May 27): First implementation of a deep network; command-line interfaces

Session 3 (TBD): Executing experiments on the supercomputer; convolutional neural networks
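The convolution at the heart of the convolutional networks covered in Session 3 can be sketched in plain NumPy. This is an illustrative sketch, not course code; the tiny image and the (standard) Sobel edge-detector kernel are chosen only for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the building block of a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output pixel is a weighted sum over one local image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a 5x5 image with one bright column
image = np.zeros((5, 5))
image[:, 2] = 1.0
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
response = conv2d(image, sobel_x)
```

In a CNN the kernel weights are learned from data rather than hand-designed, but the sliding-window computation is the same.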

Session 4 (TBD): TBD

Recordings will be available here after each session. 

Risk Communication for Trustworthy AI: Six-part Tutorial

Dates: 24 May – 21 June 2022

6 sessions, 60 minutes each – LIVE at 12:00-1:00 pm (Central)

This six-part tutorial on risk communication for trustworthy AI begins by putting risk communication research into context and introducing the basics of risk perception and communication. It includes sessions on the role of trust in risk communication and on the design of risk communications and messages, as well as two related site-wide talks by guest speakers whose research addresses formalizing trust in AI and human-AI teaming, respectively. Discussions, polls, and exercises promote active learning. The concluding session focuses on what iconic risk communication studies and meta-analyses can teach us about two topics: warnings and fear appeals.

Session 1 (May 24): Intro to risk perception, risk communication. Recording; Slides

Session 2 (May 25, Site-wide meeting): Ana Marasović – Prerequisites, Causes, and Goals of Human Trust in AI. Recording

  • Abstract: A frequently cited motivation for explainable AI (XAI) is to increase users’ trust in AI. Although common, the specifics of this statement – such as what the prerequisites for human trust in AI are, or what goals the cognitive mechanism of trust serves – are rarely discussed and are often left to individual interpretation. In this talk, I will discuss questions we address in our FAccT 2020 paper: How is interpersonal trust defined? What is AI being trusted with? What causes a model to be trustworthy? What enables such models to incur trust? What should an evaluation of trust satisfy? I will finish with ideas for how to extend our definitions.
  • Bio: Ana Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and an incoming assistant professor at the University of Utah. Her research interests broadly lie in natural language processing, explainable AI, and vision-and-language learning. Her projects are motivated by a unified goal: improving interaction with and control of NLP systems so that people can make these systems do what they want, with confidence that they are getting exactly what they need. Prior to joining AI2, Ana obtained her PhD from Heidelberg University.

Session 3 (June 7): Trust in risk communication. Recording; Slides

Session 4 (June 8, Site-wide meeting): Erin Chiou – Relational trust in human-agent teaming. Recording; Slides

  • Abstract: In this talk I will present a recent theoretical perspective on “trusting automation” that posits how the information processing approach to the study of trust is reaching its limits, and that a relational approach is needed to advance the study and design of trustworthy systems in AI-enabled decision contexts. This perspective integrates past scholarship on trust in automation from an industrial and safety science perspective with more recent scholarship on trust in AI from a broader societal perspective. I will then propose multiple paths forward for how to adopt this relational trust perspective using examples from my own lab. I will also share insights learned from the past ten years of studying trust in complex decision contexts, trust measurement, and how related constructs like transparency and explainability may also benefit from a relational perspective.
  • Bio: Erin K. Chiou is an assistant professor of human systems engineering at Arizona State University, where she directs the Automation Design Advancing People and Technology (ADAPT) Laboratory. Recent work in her lab has focused on social factors in human-automation work systems across defense, security, healthcare, and manufacturing settings. Before moving to Arizona, Erin received her PhD (2016) and MS (2013) in industrial and systems engineering from the University of Wisconsin-Madison, where she was also a National Science Foundation graduate research fellow. She spent several years working in industry, including at a medical device company and an international education startup, before pursuing her graduate degrees. She received her BS in psychology and philosophy from the University of Illinois at Urbana-Champaign in 2008.

Session 5 (June 14): Risk communication and message design basics. Recording; Slides

Session 6 (June 21): Iconic studies in risk communication (e.g., warning effectiveness). Recording; Slides

Recordings will be available here the day after each session. To attend the sessions live, please contact susan.dubbs@ou.edu to be added to the session invitation.

Explainable AI Short Course

Dates: 10 May – 21 June 2021

6 sessions, 90 minutes each – LIVE via telecon at 1:30-3:00 pm (Central)

Instructor: Ryan Lagerquist

The short course will consist of six 90-minute lectures, each accompanied by slides and a Colab notebook with open-source Python code. All Python examples will focus on weather prediction.

Course Information and Recordings

AI4ES Summer School 2021

Dates: 26 – 29 July 2021

Offered by AI2ES and NCAR

Topics include Trustworthy AI for Environmental Science; Explainable, Robust, Physics-based AI; AI, Ethics, and Trust; Case Studies; R2O Tips and Tricks.

Course Information

Machine Learning in Python for Environmental Science Problems

Dates: 8 – 9 April 2021, 10:00 AM – 6:00 PM Eastern Time (Virtual)

Host: American Meteorological Society

Course Information 

2nd NOAA Workshop On Leveraging AI In Environmental Sciences

Dates: 30 July 2020 – 25 February 2021

Host: National Oceanic and Atmospheric Administration’s STAR Center for Satellite Applications and Research

Workshop includes presentations, posters, panel discussions, and tutorials. 

Tutorial sessions: 22 September 2020 and 20 October 2020. More tutorial sessions are planned. 

Workshop Information 

CIRA Short Course On Machine Learning For Weather And Climate

Dates: 16 September – 21 October 2020

Instructors: Dr. Ryan Lagerquist (CIRA Boulder and NOAA GSL) and Dr. Imme Ebert-Uphoff (CIRA Fort Collins and Colorado State University)

Links to the recorded video lectures, GitHub, and Jupyter notebooks are available on the CIRA website.

Artificial Intelligence For Earth System Science (AI4ESS) Summer School

Dates: 22 – 26 June 2020

Host: National Center for Atmospheric Research (Boulder, Colorado, USA)

Presentation slides and recordings are available here.