Artificial intelligence promises significant efficiency and effectiveness gains but also poses ethical risks if used without thoughtful planning and consideration. As nonprofit leaders exploring how generative AI tools like ChatGPT, Claude, and Bard might impact our organizations, we should start with responsible assessment and piloting of these emerging technologies.

As part of my work on ethical frameworks for integrating AI, I’ve explored several key resources offering practical guidance on AI ethics for nonprofits and philanthropy. Here’s a brief synopsis of the core advice from each of these articles and guides, along with crucial takeaways for your organization.

What to read:

Responsible AI Adoption in Philanthropy (2023): This resource from Project Evident and the Technology Association of Grantmakers (TAG) offers a comprehensive framework for considering organizational, ethical, and technical dimensions when adopting AI. 

Key Takeaways: 

  • Involve a diverse array of stakeholders in AI tool selection and implementation.
  • Establish clear policies for acceptable and unacceptable AI uses.
  • Continuously monitor for emerging issues to adapt policies accordingly.

8 Steps Nonprofits Can Take to Adopt AI Responsibly (2023): This insightful piece from Beth Kanter, Allison Fine, and Philip Deng in the Stanford Social Innovation Review suggests a measured approach for nonprofits integrating AI tools. 

Key Takeaways: 

  • Educate leaders on AI capabilities and limitations to make informed decisions.
  • Commit to human-centered deployment focusing on augmenting rather than replacing staff.
  • Start with narrow pilots to address specific pain points before wider use.

AI and Nonprofits: What Boards Need to Know (2023): This post from Board Effect clarifies complex topics and identifies key AI issues that nonprofit boards should prioritize. 

Key Takeaways: 

  • Boards must inquire about data privacy, security, and ethics.
  • Press for responsible, mission-aligned AI use cases rather than quick returns on investment.
  • Require oversight mechanisms like bias testing and impact audits.

AI x Racial Justice Toolkit (2023): Developed by MMG Earth and Mozilla, this toolkit encourages critical reflection on how AI can perpetuate injustice if diversity and equity are not prioritized and offers practical intervention tactics. 

Key Takeaways: 

  • Approach AI as a mirror of societal biases rather than a neutral technology.
  • Listen to impacted communities; don’t overburden underrepresented groups.
  • Consider carefully whether certain AI tools cause more harm than good.

If you lack time to read them all

If time limits your reading, I particularly recommend the Responsible AI Adoption in Philanthropy guide and the 8 Steps article from SSIR. Both offer clear frameworks and considerations that you can put into practice as you evaluate integrating AI capabilities into your organization in mission-aligned ways. 

The Responsible AI guide gives a helpful structure for addressing organizational culture, ethics, and technical dimensions. The 8 Steps article offers more specificity on piloting AI narrowly at first and not overestimating what it can do. 

Also, if you haven’t read it yet, The Smart Nonprofit: Staying Human-Centered in an Automated World, by Beth Kanter and Allison Fine (2022), is a great read on using and balancing AI and technology in nonprofit organizations. 

What other frameworks can you share?

If you have other frameworks for the ethical use of AI in nonprofit organizations that you’d like to share, especially those with an equity and inclusion perspective, please send them along. This is a rapidly evolving area, and continuous learning and adaptation are crucial. 