AI, Ethics, and Geoethics (CS 5970)
Module 9: AI, transparency, trust, and ethics
- This module will take anywhere from a few days to two weeks depending on how you choose to explore it. See the note below and on the information module for weeks 11-12 to fully understand how the choose-your-own-adventure format works for Modules 8 & 9.
- This module will rely on both papers and videos
Choose your own adventure
Remember those choose-your-own-adventure books? You would read for a bit and then make a choice. If you chose one way, you went to one page and another choice could take you to a different page. Knowing what you know about AI & ML, those books were decision trees turned into a book! We are going to re-create this idea for modules 8 and 9.
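To make the analogy concrete, here is a minimal sketch (in Python, with made-up pages and choices) of a choose-your-own-adventure book stored as a tiny decision tree, where each page is a node and each choice is a branch:

```python
# A choose-your-own-adventure book modeled as a tiny decision tree.
# Pages, choices, and endings are invented for illustration.
# Internal nodes are (text, {choice: next_page}); leaves are ending strings.
book = {
    1: ("You find a locked door and a window.",
        {"pick the lock": 2, "climb through the window": 3}),
    2: ("The lock clicks open...",
        {"enter quietly": 4, "run away": 5}),
    3: "Ending: You sprain an ankle on the ledge. The End.",
    4: "Ending: You discover the treasure. The End.",
    5: "Ending: You live to adventure another day. The End.",
}

def read(page, choices):
    """Follow a fixed list of reader choices down the tree; return the ending."""
    node = book[page]
    for choice in choices:
        if isinstance(node, str):      # already reached an ending (a leaf)
            break
        _, options = node
        node = book[options[choice]]   # descend along the chosen branch
    return node

print(read(1, ["pick the lock", "enter quietly"]))
# → Ending: You discover the treasure. The End.
```

Reading the book is just traversing the tree from the root to a leaf; in modules 8 and 9, your choices of what to explore play the role of the branches.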
Everyone will start with a required assignment in module 8 and two required assignments in module 9. Then you can choose how you want to explore the modules in depth. Each module contains about 2 weeks of material, and you can choose how much of each you want to explore: you can focus entirely on one (aside from the required assignments in both), do half and half, or any other ratio, so long as you do 2 weeks of learning across the two modules.
p.s. note that this same text will be repeated at the top of both module pages in case you forget or get lost in what you are doing.
Transparency, Trust, and Ethics
This module is going to focus on trust and transparency. AI and ML are often viewed as black boxes and our readings in Weapons of Math Destruction have made the very clear argument that lack of transparency is one of the reasons that an algorithm can become a WMD (there are additional criteria about impact as well, but here we focus on transparency). Since transparency is also often seen as related to trust and trust to ethics, we are going to examine all 3 in this module.
Image from Black Boxes
Ethical decision making and transparency
- (30 min) Watch the video “Three Risk Decision Traps for the Ethical AI Geoscientist.” The video should appear at that link, but it was giving me issues, so I have a backup link to a recording on canvas (I will post it again in this module). Remember the video we watched way back in week 2? That talk and this talk were invited keynotes in the same session at AMS, and both were recorded in one file. Go to the canvas link and watch the second video in the recording.
- (20 min) On the #general channel, discuss the three risk decision traps and how they apply to your work as an AI scientist. Think about how you can avoid the issues she mentioned in your work/research (if none of this applies to what you currently do, think about it for your future jobs and post about that).
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 9: Risk Decision Traps.”
- (45 min) Read “Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI” (the pdf is available for free at that link)
- (30 min) On the #case-studies channel, discuss all of the following questions. Don’t forget to use threading and to reply to your classmates!
- How does trust relate to your research, your work, or your project?
- How can you measure intrinsic and extrinsic trust?
- How can you measure if the trust is warranted?
- What is the contract your AI is fulfilling?
- How does examining the issues of trust make your work as an AI scientist more ethical?
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 9: Formalized Trust.”
Weapons of Math Destruction
- (30 min) Read Chapter 3 of Weapons of Math Destruction: Arms Race: Going to College
- (15 min) In the #weapons-of-math-destruction channel, compare the advantages and disadvantages of developing a college ranking system, as discussed in the chapter, versus simply releasing the data as is done now. Did any of you use the data available now when making your college choices? Would an AI ranking system that matched you to colleges have been useful? Would you have trusted its decisions?
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 9: WMD Chapter 3.”
Weapons of Math Destruction
- (30 min) Read Chapter 8 of Weapons of Math Destruction: Collateral Damage
- (15 min) In the #weapons-of-math-destruction channel, identify at least 3 advantages and 3 disadvantages of using an AI based transparent and explainable credit rating system. Would it solve many of the issues listed in this chapter or would/could it continue to perpetuate many of the same issues? How? How could the transparency ensure ethical behavior on the part of the companies creating the algorithm as well as on the part of the end-users being rated?
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 9: WMD Chapter 8.”
- Based on your survey feedback, and the fact that we are all going at full steam constantly, I’m giving you a truly optional day with links to some fun related reading that you can enjoy and learn from without having to discuss it (unless you want to!).
- Does Explainable AI improve Human Decision Making?
- Metrics for Explainable AI: Challenges and Progress
- The Sanction of Authority: Promoting Public Trust in AI
- “How do I fool you?”: Manipulating User Trust via Misleading Black Box Explanations
- This is a link to all the papers from the recent FAccT conference. Lots of good and related stuff here! FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
- Another excellent conference link: AIES ’20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
- (5 min) Tell us what you read and what you learned in the #general channel
OU students, don’t forget to turn in your grading declarations on canvas! Today’s declaration is called “Module 9: Papers.”