Educational Resources for AI and Machine Learning
Short Courses and Tutorials
Tutorial: Tuning Deep Learning Training & Evaluation on the OU Supercomputer
March 21, 2023
Synopsis: Training and evaluation performance for deep neural network models depends on a variety of factors, including the type, size, and performance of data storage solutions and the type and configuration of computational hardware. These issues are especially acute as models and/or data sets become very large. In this tutorial, we discuss the current OU supercomputer architecture and procedures for monitoring and optimizing the execution performance of deep learning models. Topics include: use of one or more Graphics Processing Units (GPUs), creating data loading and transformation pipelines, caching data to high-speed storage, monitoring CPU/GPU utilization, and procedures for coordinating shared use of multiple GPUs on a single compute node. We assume prior knowledge of Python, machine learning, and deep networks.
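As a concrete illustration of one of the topics above, the sketch below shows the idea behind a data transformation pipeline that caches its output to fast local storage, so that later training epochs skip the slow transform step. This is a minimal, stdlib-only Python sketch, not code from the tutorial; all names (`load_examples`, `normalize`, `cached_pipeline`) are hypothetical stand-ins.

```python
import pickle
import tempfile
from pathlib import Path

def load_examples():
    # Stand-in for reading raw examples from slow, shared storage.
    for i in range(5):
        yield {"x": list(range(i, i + 3)), "y": i % 2}

def normalize(example):
    # Example transformation step: scale features into [0, 1].
    hi = max(example["x"]) or 1
    return {"x": [v / hi for v in example["x"]], "y": example["y"]}

def cached_pipeline(cache_dir):
    """Apply transformations once, write the result to fast local
    storage, and serve it from the cache on subsequent epochs."""
    cache = Path(cache_dir) / "train.pkl"
    if cache.exists():
        return pickle.loads(cache.read_bytes())
    data = [normalize(ex) for ex in load_examples()]
    cache.write_bytes(pickle.dumps(data))
    return data

with tempfile.TemporaryDirectory() as tmp:
    epoch1 = cached_pipeline(tmp)   # transforms and writes the cache
    epoch2 = cached_pipeline(tmp)   # served from the cache
    assert epoch1 == epoch2
```

In practice the same pattern appears in framework APIs (e.g., caching and prefetching stages of a data-loading pipeline); the point is that the expensive transform runs once per dataset, not once per epoch.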
Presenters: Mel Wilson Reyes, Jay Rothenberger, Andrew Fagg (OU)
Transformers Short Course
November 15 – 30, 2022
AI2ES is hosting a virtual short course on transformers.
After viewing the recordings, please complete a brief anonymous survey about the course. The survey is available here.
The sessions are as follows:
- Tuesday, Nov 15, 4-5PM CST – Transformers part I: Recurrent Neural Networks & the Gradient Problem; Solutions using Attention – Andrew H. Fagg. Recording, Slides
- Monday, Nov 21, 3:30-4:30PM CST – Transformers part II: Positional Encodings and the Basic Transformer Architecture – Andrew H. Fagg. Recording, Slides (for Sessions 1 and 2)
- Monday, Nov 28, 3:30-5PM CST – Transformers part III: Transformers in 2D – Andrew H. Fagg. Recording, Slides (for Sessions 1-3)
- Wednesday, Nov 30, 4-5PM CST – Transformer for Meteorological Forecasting Based on Multi-dimensional Input Data – Hamid Kamangir. Recording, Slides
Look for a 4th interactive session in early 2023!
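For readers previewing the material, the core operation introduced in Sessions I and II, scaled dot-product attention, can be sketched in a few lines of NumPy. This is a generic illustration, not course code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each query attends to all keys
    and returns a weighted average of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries, d_k = 4
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 4))   # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (2, 4)
assert np.allclose(w.sum(axis=-1), 1.0)  # each query's weights sum to 1
```

A full transformer adds learned projections for Q, K, and V, multiple attention heads, and positional encodings (Session II), but this weighted-average mechanism is the piece that replaces recurrence.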
Dates: 27 June – 30 June, 2022
Offered by AI2ES and NCAR
with partners LEAP, NCAI, and the Radiant Earth Foundation
Participants will gain an understanding of:
- the foundations of trustworthiness for AI
- explanatory AI (XAI) and how explanations, physics, and robustness can help build trust in AI
- the relationship between ethics and trustworthiness
- how machine-learning systems have been developed for a range of environmental science applications
Interactive machine-learning Trust-a-thon with beginner and advanced tracks
Deep Learning: Four-part Tutorial
Starting 24 May 2022
4 sessions, 90 minutes each
- Session 1 (May 24): Linear regression; logistic regression; basic neural networks and what they can compute
- Session 2 (May 27): First implementation of a deep network; command-line interfaces
- Session 3 (TBD): Executing experiments on the supercomputer; convolutional neural networks
- Session 4 (TBD): Topics TBD
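As a preview of the Session 1 material, here is a minimal sketch (not course code) of logistic regression trained by gradient descent on a toy, linearly separable dataset:

```python
import numpy as np

# Tiny OR-gate dataset: linearly separable, so logistic regression can fit it.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
b = 0.0
lr = 0.5

for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)  # gradient of mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
assert (preds == y).all()  # the OR function is classified correctly
```

A basic neural network stacks several such units with a nonlinearity between layers, which is what lets it compute functions (like XOR) that a single linear decision boundary cannot.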
Recordings will be available here after each session.
Risk Communication for Trustworthy AI: Six-part Tutorial
Dates: 24 May – 21 June 2022
6 sessions, 60 minutes each – LIVE at 12:00-1:00 pm (Central)
This six-part tutorial on risk communication for trustworthy AI begins by putting risk communication research into context and introducing the basics of risk perception and communication. It includes sessions on the role of trust in risk communication and on the design of risk communications and messages, as well as two related site-wide talks by guest speakers conducting research on formalizing trust in AI and on human-AI teaming, respectively. Discussions, polls, and exercises promote active learning. The concluding session focuses on what we can learn from iconic risk communication studies and meta-analyses for two topics: warnings and fear appeals.
Session 2 (May 25, Site-wide meeting): Ana Marasović – Prerequisites, Causes, and Goals of Human Trust in AI. Recording
- Abstract: A frequently cited motivation for explainable AI (XAI) is to increase users' trust in AI. Although common, the specifics of this statement – such as what the prerequisites for human trust in AI are, or for what goals the cognitive mechanism of trust exists – are not discussed and are often left to individual interpretation. In this talk, I will discuss questions we address in our FAccT 2020 paper: How is interpersonal trust defined? What is AI being trusted with? What causes a model to be trustworthy? What enables such models to incur trust? What should an evaluation of trust satisfy? I will finish with ideas for how to extend our definitions.
- Bio: Ana Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and an incoming assistant professor at the University of Utah. Her research interests lie broadly in natural language processing, explainable AI, and vision-and-language learning. Her projects are motivated by a unified goal: improving interaction with and control of NLP systems so that people can make these systems do what they want, with confidence that they are getting exactly what they need. Prior to joining AI2, Ana obtained her PhD from Heidelberg University.
- Abstract: In this talk I will present a recent theoretical perspective on “trusting automation” which posits that the information-processing approach to the study of trust is reaching its limits, and that a relational approach is needed to advance the study and design of trustworthy systems in AI-enabled decision contexts. This perspective integrates past scholarship on trust in automation from an industrial and safety science perspective with more recent scholarship on trust in AI from a broader societal perspective. I will then propose multiple paths forward for adopting this relational trust perspective, using examples from my own lab. I will also share insights from the past ten years of studying trust in complex decision contexts and trust measurement, and discuss how related constructs like transparency and explainability may also benefit from a relational perspective.
- Bio: Erin K. Chiou is an assistant professor of human systems engineering at Arizona State University, and directs the Automation Design Advancing People and Technology (ADAPT) Laboratory. Recent work in her lab has focused on the study of social factors in human-automation work systems across defense, security, healthcare, and manufacturing settings. Before moving to Arizona, Erin received her PhD (2016) and MS (2013) in industrial and systems engineering from the University of Wisconsin-Madison, where she was also a National Science Foundation graduate research fellow. She spent several years working in various industries, including a medical device company and an international education startup, before deciding to pursue her graduate degree. She received her BS in psychology and philosophy from the University of Illinois at Urbana-Champaign in 2008.
Recordings will be available here the day after each session. To attend the sessions live, please contact firstname.lastname@example.org to be added to the session invitation.
Dates: 10 May – 21 June 2021
6 sessions, 90 minutes each – LIVE via telecon at 1:30-3:00 pm (Central)
Instructor: Ryan Lagerquist
The short course will consist of six 90-minute lectures, each accompanied by slides and a Colab notebook with open-source Python code. All Python examples will focus on weather prediction.
Dates: 26 – 29 July, 2021
Offered by AI2ES and NCAR
Topics include Trustworthy AI for Environmental Science; Explainable, Robust, Physics-based AI; AI, Ethics, and Trust; Case Studies; R2O Tips and Tricks.
Dates: 8 – 9 April, 2021, 10:00 AM – 6:00 PM Eastern Time (Virtual)
Host: American Meteorological Society
Dates: 30 July 2020 – 25 February 2021
Host: National Oceanic and Atmospheric Administration’s STAR Center for Satellite Applications and Research
Workshop includes presentations, posters, panel discussions, and tutorials.
Tutorial sessions: 22 September 2020 and 20 October 2020. More tutorial sessions are planned.
Dates: 16 September – 21 October 2020
Instructors: Dr. Ryan Lagerquist (CIRA Boulder and NOAA GSL) and Dr. Imme Ebert-Uphoff (CIRA Fort Collins and Colorado State University)
Links are available for recorded video lectures, GitHub, and Jupyter Notebooks on the CIRA website.
Dates: 22-26 June 2020
Host: National Center for Atmospheric Research (Boulder, Colorado, USA)
Presentation slides and recordings are available here.