NIST, ISO 42001 & OECD: AI Governance Guide for Nonprofits | beneAI
AI Governance Frameworks

Three Frameworks That Help Shape Responsible AI Governance

These three respected frameworks offer complementary lenses for thinking about risk, accountability, and values when AI enters your work. None of these require you to be a technologist. They’re designed to help leaders, program staff, and governance teams navigate the complexities of AI governance.

The “Why” Framework
OECD AI Principles

The OECD AI Principles, first adopted in 2019 and updated in 2024, represent the first intergovernmental standard on AI. Endorsed by over 40 countries, they articulate the shared values that should guide the development and use of AI systems. Where NIST gives you a risk framework and ISO gives you a management system, the OECD Principles give you an ethical and values-based foundation, which is often the natural starting point for mission-driven organizations.

The Five Principles

The OECD organizes its AI guidance around five value-based principles that apply to every actor in the AI lifecycle, including organizations that simply use AI tools built by others. Each principle translates directly into questions your organization can ask about the tools already in your environment.

1. Inclusive Growth, Sustainable Development & Well-being

AI should benefit people and the planet. For mission-driven organizations, this principle asks a direct question: Is the AI tool you’re using actually advancing your mission and the well-being of the people you serve, or is it just more efficient? Efficiency that comes at the cost of equity, accessibility, or community trust isn’t aligned with this principle, and it probably isn’t aligned with your values either.

2. Human-Centered Values & Fairness

AI should respect the rule of law, human rights, democratic values, and diversity, and include appropriate safeguards to ensure a fair society. In practice, this means evaluating whether the AI tools you use could treat different groups of people differently, and whether you have adequate safeguards in place. It also means ensuring that AI doesn’t erode human dignity or autonomy, especially in contexts where your organization is serving vulnerable populations or making decisions that affect people’s lives.

3. Transparency & Explainability

People should be able to understand when AI is being used and how it influences outcomes. For off-the-shelf users, this doesn’t mean explaining the model’s architecture. It means being honest with your clients, community, and stakeholders about where and how you’re using AI. It means choosing tools that give you enough visibility into how they work to make informed decisions. And it means being transparent internally. Staff should know which tools use AI and what data those tools process.

4. Robustness, Security & Safety

AI systems should function safely, securely, and as intended throughout their lifecycle. For organizations using commercial tools, this translates to practical vendor evaluation questions: Does the vendor have a track record of reliability? What happens when the tool is wrong? Is client data protected? What are the vendor’s security practices? It also means building internal practices for verifying outputs, maintaining human oversight, and having a plan for what happens when a tool fails or produces harmful results.

5. Accountability

Organizations that develop, deploy, or operate AI systems should be accountable for their proper functioning. This is perhaps the most important principle for off-the-shelf users to internalize: using a vendor’s AI tool does not transfer your accountability to the vendor. If an AI tool produces a biased recommendation, an inaccurate report, or a privacy violation, your organization is still responsible for the outcome. Accountability means having clear ownership, documented decision-making, and mechanisms for redress when things go wrong.

Why It Matters

The OECD AI Principles aren’t a compliance checklist. They’re a values framework. For mission-driven organizations, they serve as the ethical compass that should orient every governance decision, and they often resonate more immediately than risk matrices or management system specifications because they speak to the things these organizations already care about most.

They connect AI governance to your mission.

Most mission-driven organizations already operate from a set of deeply held values. The OECD Principles provide a bridge between those values and the specific challenges that AI introduces. They help you articulate why AI governance matters in terms your board, staff, and community already understand.

They shape the global regulatory landscape.

The OECD Principles have directly influenced the EU AI Act, national AI strategies across dozens of countries, and the Biden administration’s Executive Order on AI. Understanding these principles means understanding the direction of AI regulation worldwide, which matters increasingly as funders and partners incorporate AI governance expectations into their requirements.

They center the people affected by AI.

Where risk frameworks focus on organizational exposure and management systems focus on operational processes, the OECD Principles consistently bring the conversation back to people: their rights, their well-being, their ability to understand and challenge decisions made about them. For organizations whose entire purpose is serving people, this framing is essential.

The “What” Framework
NIST AI Risk Management Framework

Published by the National Institute of Standards and Technology, the NIST AI RMF (AI 100-1) is a voluntary, flexible framework that helps organizations manage AI-related risks throughout the AI lifecycle. It’s U.S.-based, widely referenced, and designed for organizations of any size or sector. For mission-driven organizations using off-the-shelf AI, it provides a practical structure for thinking about how AI tools could introduce risk, and what your team can do about it.

The NIST Four Core Functions

The NIST AI RMF is organized around four core functions that work together as a continuous cycle. They apply whether you’re building AI or, as is more common for mission-driven organizations, selecting, deploying, and overseeing tools built by others.

1. Govern

This is the foundation. Govern is about establishing the culture, structures, and processes for AI risk management across your organization. For a nonprofit using off-the-shelf tools, this means deciding who is responsible for AI oversight, how AI-related decisions get made, and how your organization’s values and mission inform those decisions. It’s less about technical controls and more about organizational clarity: Do you have an AI policy? Does leadership understand how AI is being used? Is there a process for staff to raise concerns?

2. Map

Map is about understanding the context in which your AI tools operate. What data do they use? Who is affected by their outputs? What could go wrong, and for whom? For organizations using commercial AI products, mapping means looking beyond the vendor’s marketing: understanding what the tool actually does with your data, who the tool was designed for (and who it wasn’t), and where the outputs touch decisions that affect real people, whether clients, communities, staff, or partners.

3. Measure

Once you’ve mapped the landscape, Measure is about assessing and tracking the actual risks. For off-the-shelf users, this doesn’t mean running technical audits on someone else’s model. It means asking practical questions: Are we checking the accuracy of AI-generated outputs before acting on them? Do we have a way to notice when a tool starts producing biased or unreliable results? Are we tracking how AI tools are being used across the organization, including informal use by individual staff? Measure turns awareness into evidence.

4. Manage

Manage is about taking action on what you’ve found. It includes deciding how to respond to identified risks, whether that’s setting usage guardrails, choosing a different tool, training staff, adding human review steps, or discontinuing a tool entirely. For mission-driven organizations, this is where your values become operational: what level of risk is acceptable when the people affected by AI decisions are the communities you serve? Manage also covers incident response: knowing what to do when something goes wrong.

Why It Matters

You might think a risk management framework is only for organizations building AI systems. But the NIST AI RMF is explicitly designed for all participants in the AI ecosystem, including organizations that deploy, use, or are affected by AI. When your case management platform adds an AI feature, when your staff uses ChatGPT to draft communications, or when a vendor pitches an analytics tool, the NIST framework gives you a structured way to evaluate whether and how to proceed.

It starts with governance, not technology.

The Govern function recognizes that AI risk management is fundamentally an organizational challenge. You don’t need a data science team to establish who reviews AI purchases, what data can and can’t be entered into AI tools, or how staff should report concerns about AI outputs.

It scales to your context.

A 15-person nonprofit using a handful of AI-powered tools and a 500-person social services agency integrating AI into multiple programs will apply the framework very differently, and that’s by design. NIST doesn’t prescribe specific controls; it gives you a thinking structure you can tailor to your organization’s size, risk tolerance, and mission.

It’s becoming the shared language of AI risk.

Regulators, funders, and partners are increasingly referencing the NIST AI RMF. Familiarizing your organization with its vocabulary and structure positions you to respond to compliance requirements, funder expectations, and stakeholder questions about how you manage AI responsibly.

The “How” Framework
ISO/IEC 42001

ISO/IEC 42001 is the world’s first international standard for AI management systems, published jointly by the International Organization for Standardization and the International Electrotechnical Commission in 2023. While the NIST AI RMF provides a risk-focused thinking framework, ISO 42001 provides a management system specification: a structured, certifiable approach to governing AI across an organization. Think of it as the difference between a guide and a blueprint.

Key Components

ISO 42001 is built on the familiar Plan-Do-Check-Act management system model. For organizations using off-the-shelf AI, these components provide a practical blueprint for moving from informal, ad hoc AI use to documented, accountable governance.

AI Management System (AIMS)

At its core, ISO 42001 asks organizations to build an AI Management System: a set of policies, processes, roles, and documentation that together govern how AI is used. If you’re familiar with ISO 27001 (information security) or ISO 9001 (quality management), it follows the same Plan-Do-Check-Act structure. For organizations using off-the-shelf AI, this means formalizing things many are already doing informally: approving tools, setting usage guidelines, tracking what’s in use, and reviewing whether it’s working as expected.

AI Impact Assessment

ISO 42001 requires organizations to assess the potential impacts of their AI use on individuals, groups, and society. For a mission-driven organization using commercial AI tools, this means asking structured questions before adoption: Could this tool produce discriminatory outcomes? Does it handle sensitive data appropriately? What happens if the tool gives incorrect results, and who is affected? The assessment doesn’t require building the AI yourself; it requires understanding the impact of using it in your specific context.

Continuous Improvement & Documentation

The standard emphasizes that governance isn’t a one-time exercise. It requires ongoing monitoring, internal audits, management reviews, and documented evidence that your AI governance is active and evolving. For off-the-shelf users, this translates to practical habits: reviewing AI tool use periodically, updating policies when tools or regulations change, documenting decisions about why certain tools were approved or rejected, and maintaining a record of how your organization manages AI over time.

Why It Matters

Most mission-driven organizations won’t pursue formal ISO 42001 certification, and they don’t need to. But the standard’s structure is deeply useful as a reference point for building AI governance that’s systematic rather than ad hoc. It’s particularly valuable if your organization is growing its AI use, facing regulatory scrutiny, or accountable to funders and partners who expect documented governance practices.

It makes governance tangible.

While NIST helps you think about risk, ISO 42001 helps you build the operational infrastructure to act on it. It answers questions like: What should our AI policy actually contain? What records should we keep? How do we demonstrate to a board, funder, or regulator that we’re governing AI responsibly?

It’s internationally recognized.

For organizations working internationally, receiving international funding, or partnering with organizations in other countries, ISO 42001 provides a governance language that transcends national boundaries. It aligns with the EU AI Act’s expectations and is referenced in regulatory discussions worldwide.

It complements your existing management systems.

If your organization already works within ISO-style management systems (for information security, quality, or environmental management), ISO 42001 integrates naturally. Even if you don’t, the Plan-Do-Check-Act approach is intuitive and maps well to how most organizations already think about continuous improvement.

Putting It Together

How the Three Frameworks Work Together

These aren’t competing standards. They’re complementary lenses that together give you a complete picture of what responsible AI governance looks like, especially when you’re using tools built by someone else.

Step 1: OECD Principles
Start with the “Why”

Define your organization’s ethical commitments around AI. What does fairness mean in your context? What transparency do your communities deserve? What does accountability look like when you’re using someone else’s tool?


The OECD Principles give you a values-based foundation that connects AI governance to what your organization already cares about. Before evaluating any specific tool or writing any policy, ground yourself in these questions:

  • Mission alignment: Is this AI tool genuinely advancing our mission and the well-being of the people we serve, or is it just faster?
  • Fairness and equity: Could this tool treat different groups of people differently? Do we have safeguards to prevent that?
  • Transparency: Can we tell our clients, staff, and stakeholders where and how we’re using AI? Do we understand enough about the tool to make informed decisions?
  • Accountability: If something goes wrong, who in our organization is responsible? Do we have a process for people to raise concerns or seek redress?

These questions become the ethical foundation that informs everything else you do. Write them down. Share them with leadership. They’re the starting point for your AI governance approach.

Step 2: NIST AI RMF
Assess the “What”

Map the AI tools in your environment, understand who they affect, measure the risks they introduce, and decide how to manage them. This doesn’t require technical expertise. It requires asking the right questions.


The NIST AI RMF gives you a structured way to think about risk. For organizations using off-the-shelf AI, this translates into practical actions:

  • Inventory your AI: Catalog every AI tool in use across your organization, including informal use by individual staff (ChatGPT, Copilot, AI features in existing software). You can’t govern what you don’t know about.
  • Understand who is affected: For each tool, identify who the outputs touch. Clients? Staff? Community members? The higher the stakes, the more scrutiny a tool deserves.
  • Evaluate vendor practices: What data does the tool collect? How is it stored? What are the vendor’s security and privacy commitments? What happens when the tool is wrong?
  • Decide how to respond: Based on what you find, set guardrails: which tools are approved, what data can be entered, where human review is required, and under what circumstances a tool should be discontinued.

Start with your highest-risk tools and work outward. A simple spreadsheet tracking tools, their uses, and their risk levels is a perfectly valid first step.
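The “simple spreadsheet” suggested above can be sketched in a few lines of code. This is an illustration only: the column names, example tools, and low/medium/high risk levels are assumptions for the sketch, not terminology from the NIST AI RMF.

```python
import csv

# Illustrative columns for a first-pass AI tool inventory; adapt to your context.
FIELDS = ["tool", "use_case", "data_entered", "who_is_affected", "risk_level", "approved"]

# Hypothetical example rows; "risk_level" is a simple low/medium/high judgment call.
inventory = [
    {"tool": "ChatGPT", "use_case": "drafting donor emails",
     "data_entered": "no client data", "who_is_affected": "donors, staff",
     "risk_level": "low", "approved": "yes"},
    {"tool": "Case-notes summarizer", "use_case": "summarizing client case notes",
     "data_entered": "sensitive client data", "who_is_affected": "clients",
     "risk_level": "high", "approved": "pending review"},
]

# Write the inventory to a CSV file that anyone on staff can open and update.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)

# Surface the highest-risk tools first, per the "work outward" advice above.
high_risk = [row["tool"] for row in inventory if row["risk_level"] == "high"]
print(high_risk)
```

The point of the sketch is the structure, not the tooling: the same columns work equally well in a shared spreadsheet, which is where most organizations should start.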

Step 3: ISO/IEC 42001
Build the “How”

Formalize your governance with policies, roles, documented decisions, and review cycles. You don’t need certification; you need systems that work and that grow with your organization.


ISO 42001’s management system approach helps you turn values and risk assessments into lasting organizational practices. Focus on these building blocks:

  • Write an AI policy: Document your organization’s position on AI use: what’s encouraged, what’s restricted, and what requires approval. Keep it clear and accessible to all staff.
  • Assign roles: Designate who is responsible for AI oversight. This doesn’t need to be a new hire; it can be an existing leader or a small working group with defined responsibilities.
  • Document decisions: Keep records of which tools were evaluated, why they were approved or rejected, and any conditions placed on their use. This creates institutional memory and accountability.
  • Establish review cycles: Set a regular cadence (quarterly or biannually) to revisit your AI inventory, check whether tools are performing as expected, review any incidents, and update policies as the landscape evolves.

The goal is not bureaucracy. It’s making sure your governance can outlast any single person’s tenure and adapt as your AI use grows.
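To make the “document decisions” and “establish review cycles” building blocks concrete, here is a minimal sketch of a decision record with a built-in review date. The field names and the 90-day cadence are illustrative assumptions, not requirements of ISO 42001.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# A minimal, illustrative decision record; field names are assumptions,
# not terminology drawn from the ISO 42001 standard itself.
@dataclass
class AIToolDecision:
    tool: str
    decision: str               # e.g. "approved", "rejected", "approved with conditions"
    rationale: str
    conditions: list = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
    review_every_days: int = 90  # quarterly cadence, one of the options suggested above

    def next_review(self) -> date:
        # Schedule the next periodic review relative to the decision date.
        return self.decided_on + timedelta(days=self.review_every_days)

# Hypothetical example entry for the decision log.
record = AIToolDecision(
    tool="Grant-writing assistant",
    decision="approved with conditions",
    rationale="No client data entered; outputs reviewed by program staff.",
    conditions=["human review before submission", "no personal data in prompts"],
    decided_on=date(2025, 1, 15),
)
print(record.next_review())  # 90 days after 2025-01-15: 2025-04-15
```

A list of records like this, kept anywhere durable, gives you the institutional memory and audit trail the standard asks for without any special software.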

Remember
  • Keep it proportionate. All three frameworks are designed to scale. The goal isn’t comprehensive compliance with every element. It’s building governance that fits your context and grows with you.
  • You’re still accountable. Using off-the-shelf AI does not transfer your responsibility. Your organization is accountable for how AI affects the people you serve, regardless of who built the tool.
  • Governance is ongoing. AI tools change, regulations evolve, and your use of AI will grow. All three frameworks emphasize continuous improvement, regular reviews, and the flexibility to adapt.
Common Questions

Frequently Asked Questions

Do we need to formally adopt all three frameworks?

No. These frameworks are reference points, not requirements. Most mission-driven organizations will benefit from understanding all three and drawing selectively from each. You might use the OECD Principles to ground your ethical commitments, the NIST AI RMF to structure your risk thinking, and ISO 42001 concepts to organize your policies and documentation, all without formally certifying against any of them.

These seem designed for large tech companies. Are they really relevant to us?

All three frameworks explicitly address organizations that use AI, not just those that build it. The NIST AI RMF specifically includes guidance for “operators” and “users” of AI systems. The OECD Principles apply to all actors in the AI lifecycle. And while ISO 42001 can feel enterprise-oriented, its core concepts (having policies, assessing impact, documenting decisions, reviewing regularly) are practices that benefit organizations of any size.

We only use a few AI tools. Do we really need a governance framework?

A “few tools” is often more than organizations realize. AI features are increasingly embedded in common software: email platforms, CRMs, document editors, analytics tools. Even informal use of tools like ChatGPT by individual staff members constitutes AI use that governance should address. The framework doesn’t need to be heavy, but having clear guidelines and an intentional approach protects your organization, your staff, and the people you serve.

What’s the difference between the NIST AI RMF and ISO 42001?

The NIST AI RMF is a risk management framework: it helps you think about and address AI-related risks. ISO 42001 is a management system standard: it helps you build the organizational infrastructure to govern AI systematically. In simple terms: NIST helps you identify what needs attention; ISO helps you build the systems to address it consistently. They work well together, and many organizations reference both when building their governance approach.

Are these frameworks legally required?

Not directly. The NIST AI RMF and OECD Principles are voluntary. ISO 42001 is a certifiable standard, but certification is optional. However, these frameworks are increasingly referenced in legislation and regulatory guidance. Colorado’s AI Act, the EU AI Act, and various federal agency policies all draw on concepts from these frameworks. Aligning your governance with them positions you well for current and emerging regulatory requirements.

Where should we start if we’re doing this for the first time?

Start with two things: an honest inventory of what AI tools are in use across your organization (including informal use), and a conversation with your leadership about what values should guide your AI adoption. From there, use the NIST AI RMF’s Govern and Map functions to structure your initial assessment, draw on the OECD Principles to define your ethical commitments, and look to ISO 42001’s management system concepts to organize your policies and processes. A phased approach is almost always better than trying to do everything at once.

How do these frameworks address bias in off-the-shelf AI tools?

All three frameworks address bias, though from different angles. The OECD Principles frame it as a fairness and human rights concern. The NIST AI RMF treats it as a risk to be mapped, measured, and managed. ISO 42001 expects impact assessments that consider discriminatory outcomes. For off-the-shelf users, this translates to practical steps: asking vendors about their bias testing, monitoring tool outputs for patterns of unfair treatment, maintaining human review for high-stakes decisions, and having a process for responding when bias is identified.

Ready to Build Your AI Governance Foundation?

Let’s start with a conversation about where your organization stands and where you want to go.