"Explore 'Empowering Ethical AI Education,' a comprehensive guide for K-12 teachers on integrating responsible AI and tech justice into the curriculum. Discover practical strategies, interactive resources, and case studies to navigate AI education with an ethical and equitable approach."

Reviewing Responsible AI & Tech Justice: A Guide for K-12 Education, by Shana V. White, Dr. Allison Scott, Dr. Sonia Koshy, and a host of advisors, published by The Kapor Center on January 18, 2024

“If we’re not careful, AI will perpetuate bias in our world. Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does. The computers learn from their creators — us.” – Aylin Caliskan, Computer Scientist

“Responsible AI & Tech Justice: A Guide for K-12 Education” was published by The Kapor Center last week, and I was excited to check it out. The Kapor Center has a long history of thoughtful research around inequity in tech, so I hoped this would be a useful guide for K-12 educators looking to integrate AI teaching and usage into their classrooms.

Written by Shana V. White, Dr. Allison Scott, and Dr. Sonia Koshy from the Kapor Center, with guidance and input from 20 members of an Advisory Committee and a Senior Advisory Board, this 27-page report, dense with links and citations, is meant to help educators equip themselves, and their students, with the skills to critically evaluate AI technologies.

As the authors say on page 7: 

“As a key component of Justice-Centered Computing Education, this guide articulates a vision for Responsible AI and Tech Justice in K-12 Education as a robust and comprehensive course of study that utilizes an explicit racial and social justice lens to equip all students with the knowledge and resources to critically interrogate the ethical and equitable development, deployment, and impacts of AI, while simultaneously challenging, disrupting, and remedying the harms that these technologies can produce within individuals’ lives, communities, and society at large.”

The heart of the guide is practical strategies for educators to assess AI platforms and tools, based on six core components. These components are: 

  • Examine the AI technology creation ecosystem from who designs and develops products and how they are developed, to who invests in their creation and benefits from their adoption.
  • Interrogate the complex relationship between technology and human beings, including human-computer interaction and topics of values, ethics, privacy, and safety.
  • Explore the impacts and implications of AI technologies on society, including positive benefits, negative consequences, and the perpetuation of exclusion, marginalization, and inequality. 
  • Interrogate personal uses of AI technology to become critical consumers of products and address misuse, exploitation, and safety concerns.
  • Build a critical lens in the collection, usage, analysis, interpretation, and reporting of data. 
  • Minimize, mitigate, and eliminate harm and injustice caused by AI technologies through both the responsible and ethical creation process and individual and collective right to refusal. 

To help educators incorporate ethical discussions and justice-focused dialogue into AI lessons, the authors share what they call Sample Interrogation Questions for exploring each component. For the first component (Examine the AI technology creation ecosystem from who designs and develops products and how they are developed, to who invests in their creation and benefits from their adoption), for example, some of the questions are:

  • Who is involved in the ideation stage, research, and design phase of AI technology creation?
  • Why did the individual/group produce this piece of technology or AI tool?
  • What demographic trends exist among AI technology company boards, leadership, and technical workforce?
  • What are the backgrounds, cultures, and values of AI company boards and leadership teams? 

For educators looking for critical-thinking resources on AI, well-vetted vocabulary lists, extensive bibliographies of case studies and research, or a well-annotated Appendix of Existing Frameworks & Guidance for AI and AI in education, this new Kapor guide will be a gold mine.

As a first pass on AI and Tech Justice in the area of K-12 education, “Responsible AI & Tech Justice: A Guide for K-12 Education” will be a key resource for many who wish to foster an ethically informed, equitable, and inclusive AI education approach. The in-depth research, the well-organized core components with their accompanying hard questions, and the comprehensive review and analysis that the citations, listings, and bibliographies suggest can all be building blocks for educators seeking to advance their thinking equitably and thoughtfully.

I could see a district or school technology organizing committee using this report to research how their institution might approach AI and having great success. For a classroom teacher, or a technology specialist charged with figuring these things out for their classroom or school, however, the Kapor Guide offers no easy solutions. I worry about hands-on instructors not having enough time during the school year to absorb and process all this thoughtful work. 

“Empowering students to navigate the AI landscape with ethical awareness and a commitment to equity is not just an educational goal; it’s a societal imperative.” – Kapor Center Guide

This guide is licensed under the Creative Commons Attribution-NoDerivatives license (CC BY-ND), which allows others to copy, distribute, display, and perform the work but not make derivative works based on it. Since others cannot build directly on the text, it would be great for the Kapor Center itself to consider extending this foundation with more tools at a future date.

The development of interactive resources such as lesson plans, project ideas, and discussion prompts, aligned with the guide’s principles, would offer educators ready-to-use materials. These resources, possibly co-created with input from educators and students, could serve as practical tools to facilitate the integration of AI ethics and tech justice into the curriculum.

In addition (and I know that I am dreaming big here), professional development workshops based on the guide’s content could further empower educators by providing a platform for deep engagement with the material. These workshops, envisioned as collaborative forums, could enable educators to share experiences, discuss challenges, and celebrate successes in integrating AI education, cultivating a community of practice dedicated to responsible AI and tech justice.

Pairing these enhancements with “Responsible AI & Tech Justice: A Guide for K-12 Education” would empower a wider range of educators to navigate the complexities of AI with an ethical compass and a commitment to equity. This Guide is a valuable first step in charting the need for ethically informed, equitable, and inclusive AI education, and the authors and The Kapor Center are to be applauded for all their hard work.

To read the Guide or an Executive Summary, download from this page.

 
