AI, Ethics, and Geoethics (CS 5970)
Module 6: Bias in AI
- This module will take two weeks to complete; you also have project work due during these two weeks
- We will have a mixture of readings, videos, and case studies
We have already done some initial digging into AI & bias in our introductory module. In this module, we will dig a lot deeper into how AI can be biased, intentionally or unintentionally.
The picture on the right shows one example of bias that was discussed in the introductory movie, Coded Bias. Many camera systems were calibrated using something called a Shirley card. Kodak only released an updated multiracial calibration card in 1996, even though the problem was well known long before then. Calibrating only to a white woman’s face meant faces with other skin tones were under- or overexposed, a problem that persists today with digital cameras and improperly trained systems. I chose to show the multiracial card rather than the all-white card, but it is shocking how long the fix took and how the issues still persist today.
For this module, we will dig into the issues of AI & bias more deeply than we did in the introductory module. Many of your projects focus on bias issues, and hopefully these readings will also be helpful for your projects.
Multiracial Shirley Card
Resources for this Module
For this module, we will return to the Race After Technology book (we will finish the book in an upcoming module) for one chapter. I know some of you got excited about the book earlier and read ahead and that’s great!
We will also be reading one chapter from Getting Started in Data Science. The chapter is available on canvas for OU students.
Finally, we will make use of two recent online events focusing on Bias in AI:
- The National Institute of Standards and Technology (NIST) recently held a workshop on Bias in AI, and we will watch one of its recorded plenary sessions
- Brookings also recently held an online series on AI & Bias, and we will watch some of their recorded sessions (we will also come back to Brookings when we visit AI & policy)
Since the reading is long, we will split the reading and the case study across two days.
- (90 min) Complete the assignment on “Chapter 1: Engineering Inequity: Are Robots Racist?” from Race After Technology by Ruha Benjamin. The case study will be on day 2.
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 6: Engineered Inequality”
Case Studies and Ethical Principles
- (60 min) Complete the case study on racial bias in health-care algorithms
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 6: case study on racial bias in health care”
We are doing the same split of reading and case study here as we did for days 1 & 2: today you do the reading, and tomorrow a case study.
- (90 min) Complete the reading assignment “Chapter 8: Bias, Fairness, and Accountability” from Getting Started in Data Science by Ayodele Odubela
OU students, don’t forget to turn in your grading declarations on canvas called “Module 6: Assignment on Bias, Fairness, and Accountability.”
Assignment on Bias and Ethical Principles
- (60 min) Complete the assignment on Bias, Fairness, and Accountability
OU students, don’t forget to turn in your grading declarations AND your assignment on canvas! Today’s declaration is called “Module 6: Updated personal goals for responsible AI.”
For the end of the week, we are going to move to video. We are going to attend two workshops virtually, thanks to our online world! Since the videos are longer today, we will move the case studies to another day.
- (90 min) The National Institute of Standards and Technology recently held a workshop focused on Bias in AI. It was a great workshop, and you can find the schedule linked above. I’d like you to pick one of the two plenary sessions and watch it (you are welcome to watch both if you want). They are each about 90 minutes.
- Video of first panel: Foundational Juggernaut: Addressing data bias challenges in AI
- Video of second panel: Algorithmic bias is in the question, not the answer: Measuring and managing bias beyond data
- (15-30 min) In the #general thread, discuss which panel you watched (or both, if you chose both) and what main points you took away from it. Please discuss back and forth with each other!
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 6: NIST Bias in AI.”
We will finish out the week with some shorter videos plus a final case study!
- (60-90 min) Brookings is a really interesting policy think tank in DC that studies a wide variety of issues. They recently held an online series on AI & Bias. For this module, go to the series on AI & Bias and pick a webinar to watch or several articles to read. I will note that some of the issues cross over into policy, which is great. We will come back to Brookings when we visit policy, but with a different set of resources, so feel free to read or watch whichever items from this list sound most interesting to you!
- (15-30 min) In the #general thread, write a short summary (at most one paragraph per item) of what you watched or read and what you learned relevant to AI & ethics and your work. Keep it short, since I would like you to read each other’s summaries!
Case Studies and Ethical Principles
- (45 min) Complete the case study on universities using race in ML models
OU students, don’t forget to turn in your grading declarations on canvas! There are two today: “Module 6: Brookings Bias & AI” and “Module 6: Case study on racial bias in university algorithms.”