Artificial intelligence is everywhere—from the apps we use daily to the workflows reshaping how we work. Tools like ChatGPT, Gemini, and Claude have made Large Language Models (LLMs) widely accessible—sparking excitement and concern across the social impact space.
For nonprofits, this moment presents a critical opportunity not just to adopt AI but to shape how it’s used ethically, equitably, and intentionally. While AI has the potential to amplify nonprofit impact, it also risks reinforcing the same inequities that nonprofits are working to dismantle. As the technology continues to advance, nonprofits must be part of both the conversation about these tools and their creation.
In this edition of Constructively Curated, we’re exploring the promise, the pitfalls, and the practical steps your organization can take to stay grounded in your brand values while experimenting with this technology. To help you navigate the noise, we’ve compiled a list of resources for nonprofits looking to make the most of AI tools while keeping ethics front and center.
1. Develop Artificial Intelligence Literacy
What is AI, anyway? Before diving into tools and strategies, it’s important to make sure everyone on your team shares a baseline understanding of foundational concepts and common terms, and knows how to approach AI with a critical eye. This roundup from Zapier includes 8 AI courses for beginners, and LinkedIn Learning offers free courses to get you (and your team) started.
2. Explore Frameworks for Responsible AI
NTEN’s resource hub is a go-to starting point for nonprofits exploring responsible AI adoption. It offers practical guides and tools to help you surface key questions, assess potential risks, and embed equity and transparency in your AI-related projects. Here you’ll also find a helpful generative AI use policy, do’s and don’ts for chatbots, and example questions for generative AI vendors to help you implement AI with integrity.
3. Build an Ethical AI Culture
Responsible AI isn’t just about tools—it’s about building a culture rooted in trust and shared values. IBM’s 5 pillars of ethical AI—fairness, explainability, robustness, transparency, and data privacy—offer a practical framework for evaluating and implementing AI tools, especially when working with sensitive data or vulnerable communities. For more guidance, explore resources from UNESCO, the Center for Human-Compatible Artificial Intelligence, or watch this video from MIT Sloan on building an ethical AI culture.
4. Establish and Circulate Your AI Governance Policy
Once your team is aligned on values and direction, it’s time to put that alignment into practice. Nonprofits are quick to adopt AI but are still playing catch-up when it comes to governance. A survey by the Technology Association of Grantmakers (TAG) found that while 81% of foundations are experimenting with AI, just 30% have an AI policy in place, and only 9% have an advisory group. AI governance is crucial for mitigating bias, protecting privacy, preventing misuse, and ensuring accountability. To get started, check out this AI policy template from Afua Bruce and Rose Afriyie for NTEN.
5. Acknowledge Bias to Reduce Harm
AI isn’t created in a vacuum—it reflects the people, systems, and biases that exist in the real world. That’s why it’s essential for nonprofits to approach AI-generated content with a critical, ethical storytelling lens. Tools that auto-generate photos or illustrations can reinforce harmful stereotypes or lack representation. Before using AI-generated imagery, ask: Does this image reflect the diversity of the communities we serve? Are we avoiding tokenism or visual clichés? For a deeper look at how this evolving technology can be designed with equity in mind, we recommend The Tech That Comes Next—a powerful read from Afua Bruce and Amy Sample Ward on how changemakers, philanthropists, and technologists can build more inclusive systems. We also recommend listening to this episode of the Talk Justice podcast featuring Afua Bruce, Kevin De Liban, and Keith Porcaro covering how to prevent harm amidst the AI hype cycle.
6. Stay True to Your Brand Values
AI isn’t going anywhere anytime soon. This article from AFP highlights how to leverage AI while keeping your brand mission, vision, and values intact. For example, if inclusivity and diversity are core to your brand, you can “commit to using high-quality, accurately representative data to mitigate bias and reduce the risk of perpetuating discrimination. Only work with vendors that hold themselves to the highest standards.” If your organization values transparency, consider sharing a behind-the-scenes look at how AI is helping improve your nonprofit’s operations and deliver on your mission.
7. Identify the Right Use Cases for Your Organization
Nonprofit teams are often stretched thin—juggling tight budgets, limited staff, and busy schedules. AI can help lighten the administrative load by automating time-consuming tasks, freeing up capacity for your team to focus on strategic, high-impact work. According to a survey from TechSoup, nonprofits that already use AI see benefits in key areas like grant writing and fundraising, marketing, and analytics. Used safely and ethically, AI can boost your nonprofit’s efficiency and free your team to drive even more impact. Classy from GoFundMe shares 12 AI tools for nonprofits to consider.
8. Unlock AI’s Potential for Scaling Social Good
On a larger scale, AI is already helping us solve some of the world’s most pressing problems. As highlighted in The Atlantic, AI is helping the world’s most vulnerable populations stay one step ahead of climate change. Through Google.org’s Generative AI Accelerator, grantees use AI to predict floods, monitor wetland ecosystems, and address environmental threats to agriculture. And a 2023 BCG report found that AI has the potential to unlock insights that could help mitigate 5–10% of global greenhouse gas emissions by 2030. There’s increasing enthusiasm about the impactful role AI can play in meeting the U.N. Sustainable Development Goals (SDGs).
9. Weigh the Climate Cost of Everyday AI Use
While AI has the potential to support large-scale climate initiatives, we also need to consider the environmental cost of everyday use. Researchers estimate that a single ChatGPT prompt consumes about five times more electricity than a simple web search. With over 123 million people using ChatGPT daily, the energy demand is impossible to ignore (see the rough sketch below). And as we noted above, AI models are also being used to mitigate climate risk and avoid disasters, which makes this a genuine tension rather than a simple trade-off. It’s worth asking: How can your organization adopt AI in ways that are both sustainable and intentional? Read more about the environmental impact of GenAI from MIT and the uneven distribution of AI’s environmental impacts in Harvard Business Review.
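To make that scale concrete, here is a rough, hypothetical back-of-envelope calculation in Python. The 123 million daily users and the “about five times a web search” multiplier come from the figures above; the roughly 0.3 Wh per web search and the five prompts per user per day are illustrative assumptions, not measured values.

    # Rough, illustrative estimate of daily energy use from ChatGPT prompts.
    # Figures marked "assumption" are guesses for illustration, not measurements.
    WH_PER_WEB_SEARCH = 0.3        # assumption: commonly cited rough figure, in watt-hours
    PROMPT_MULTIPLIER = 5          # "about five times" a web search, per the estimate above
    DAILY_USERS = 123_000_000      # daily ChatGPT users cited above
    PROMPTS_PER_USER_PER_DAY = 5   # assumption: illustrative usage level

    wh_per_prompt = WH_PER_WEB_SEARCH * PROMPT_MULTIPLIER
    daily_wh = wh_per_prompt * DAILY_USERS * PROMPTS_PER_USER_PER_DAY
    daily_mwh = daily_wh / 1_000_000  # convert watt-hours to megawatt-hours

    print(f"~{wh_per_prompt:.1f} Wh per prompt")
    print(f"~{daily_mwh:,.0f} MWh per day under these assumptions")

Under these assumptions the total lands in the hundreds of megawatt-hours per day. The point is not the exact number, but that small per-prompt costs multiply quickly at this scale.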
Closing Thoughts
In an age of automation, human connection will become more important than ever. As this article from Bloomerang articulates, “There’s no outsourcing the human soul.” While AI can and will automate tasks, it can’t replace the empathy and human-centered thinking that you bring to your projects and organization’s mission. “Sometimes only a human being can deliver on the job to be done.” For more on human-centered AI (HCAI), read the overview from the Interaction Design Foundation.
We’re curious—how is your team using AI to advance your mission? Get in touch and share your approach, lessons learned, or questions you’re exploring.