AI Governance Assessment for Government Agencies: Free Maturity Scorecard Aligned with OMB M-25-21 and NIST AI RMF
Quick Answer: This free, interactive assessment by beneAI helps federal, state, and local government agencies evaluate their AI governance maturity across 6 dimensions: AI policies, mission & values alignment, operational oversight, leadership & legislative oversight, risk assessment, and vendor procurement. Each dimension is scored from 1 (Nascent) to 5 (Pioneering). The assessment takes approximately 10 minutes and generates a downloadable PDF report with tailored recommendations. It is designed to support agencies in meeting the governance and maturity assessment requirements outlined in OMB Memorandum M-25-21 (April 2025).
Why This Assessment Matters for Government Agencies
OMB Memorandum M-25-21 (April 2025) requires federal agencies to develop an AI Strategy that includes an assessment of the agency's current state of AI maturity and a plan to achieve AI maturity goals. Many state and local agencies face similar expectations from their own legislatures and oversight bodies. This assessment provides a structured framework for that evaluation, organized around the governance, ethics, and risk management practices that form the foundation of responsible AI adoption in the public sector.
Governance is the foundation of responsible AI adoption. Before an agency can move quickly or use AI strategically, it must establish the guardrails that protect the organization, its public mission, and the constituents it serves. For some agencies, governance begins as a compliance checkbox. For mature agencies, governance is a strategic asset that builds trust internally and externally.
The 6 Dimensions of Government AI Governance Maturity
1.1 AI Policies
Assessment Question: Do you have AI policies? To what extent are they documented, integrated, and enforced?
Most government agencies begin their AI journey by treating governance as a compliance exercise — writing a policy, asking employees to sign it, and filing it away. This is administrative governance that relies entirely on an honor system. Mature agencies move toward technical governance, where safety is a guardrail built into the tools, not a rule employees have to remember. Maturity levels range from no specific policy (Level 1: relying on outdated IT acceptable use policies while employees use shadow AI) to sovereign and adaptive governance systems reviewed quarterly with data sovereignty standards aligned with OMB guidance (Level 5).
1.2 Mission & Values Alignment
Assessment Question: Does your use of AI align with your agency's public mission and values?
A tool can be perfectly secure and still be ethically wrong for your mission. For example, an automated resume screener might be secure (no data leaks) and reliable (consistent outputs), but if it systematically filters out the non-traditional candidates your mission aims to employ, it is an active threat to your purpose. Maturity ranges from individual judgment with no organizational guidance (Level 1) to agency-wide ethical advocacy that influences government-wide norms and refuses to be complicit in requiring constituents to surrender excessive personal data in exchange for public services (Level 5).
1.3 Operational Oversight
Assessment Question: Who within your staff and leadership is responsible for decisions about AI adoption, ethics, and risk?
A common trap is treating AI oversight as a purely technical issue and assigning it entirely to the IT Director. AI should be treated like an intern, not a printer — it needs subject-matter supervision from program directors, not just IT support. Maturity ranges from no designated owner (Level 1) to a fully institutionalized governance structure with AI champions in every department, accountability frameworks documented in job descriptions, and resilience to staff turnover (Level 5).
1.4 Leadership & Legislative Oversight
Assessment Question: How engaged is your agency leadership in providing strategic oversight of AI adoption and risk?
AI governance is not a separate category from leadership's existing responsibilities — it intersects financial stewardship, risk management, and strategic direction. Agency leaders, CIOs, Inspectors General, and legislative oversight committees do not need to become technologists to govern AI effectively. Maturity ranges from complete disengagement where leadership views AI as a tech support issue (Level 1) to fiduciary leadership where AI governance is treated with the same rigor as a financial audit, with external stakeholder engagement including Inspector General reviews and GAO assessments (Level 5).
1.5 Risk Assessment
Assessment Question: How do you decide if an AI tool is safe to use?
Mature agencies do not treat all AI tools equally. They use a tiered risk framework: a chatbot summarizing meeting notes is low risk, while a tool informing regulatory determinations or benefits eligibility decisions is high risk. Unlike traditional software, AI is probabilistic — it can drift (get worse over time) or hallucinate (generate confident fiction). Maturity ranges from ad hoc reactive approaches with no risk protocols (Level 1) to adversarial testing with red teams, continuous monitoring, published risk frameworks aligned with the NIST AI Risk Management Framework, and collaborative stress-testing with peer agencies (Level 5).
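The tiered approach described above can be sketched in a few lines. This is an illustrative triage, not an official NIST AI RMF or agency taxonomy: the tier names, trigger fields, and review requirements are assumptions chosen to mirror the examples in the text (meeting-note summarization as low risk, benefits-eligibility tools as high risk).

```python
# Illustrative sketch of tiered AI risk triage. The field names
# ("affects_rights_or_benefits", "handles_sensitive_data") and the
# review requirements are hypothetical, for illustration only.

def risk_tier(use_case: dict) -> str:
    """Classify an AI use case into a review tier by its stakes."""
    # Tools informing regulatory determinations or benefits eligibility
    # get the most rigorous review (the high-stakes cases above).
    if use_case.get("affects_rights_or_benefits"):
        return "high: full risk assessment, bias audit, human review required"
    # Tools touching sensitive or regulated data need security review.
    if use_case.get("handles_sensitive_data"):
        return "moderate: security and privacy review before deployment"
    # Internal drafting aids (e.g., summarizing meeting notes) are low risk.
    return "low: lightweight review, standard acceptable-use rules apply"
```

The point of the sketch is the shape, not the fields: rigor scales with the stakes of the decision the tool informs, so a chatbot and an eligibility engine never travel the same review path.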
1.6 Vendor & Tool Procurement
Assessment Question: How do you select, vet, and contract with AI-related vendors?
Government agencies face unique procurement requirements under the Federal Acquisition Regulation (FAR) and must consider FedRAMP authorization, Authority to Operate (ATO) requirements, and data sovereignty obligations. Most standard Terms of Service grant vendors broad rights to use your data to train their models. Maturity ranges from no formal process with employees using free browser-based AI tools on government networks (Level 1) to full strategic independence with data ownership, indemnification clauses, exit strategies, and evaluation of on-premises or government-cloud deployment for sensitive use cases (Level 5).
The 5 AI Governance Capability Levels
Level 1 — Nascent: No formal AI governance structures in place. Decisions are ad hoc and reactive.
Level 2 — Emerging: Basic awareness and initial policies exist but are inconsistently applied across the agency.
Level 3 — Developed: Structured governance with documented policies, assigned oversight, and formal review processes.
Level 4 — Optimizing: Governance is operationalized, integrated into daily workflows, and actively enforced through technical controls.
Level 5 — Pioneering: Adaptive, transparent governance that is resilient to staff turnover, influences government-wide standards, and is open to external scrutiny.
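The five levels above can be expressed as a simple scoring sketch. The level names and the 1-to-5 scale come from the scorecard; the aggregation rule (a plain average of the six dimension scores, rounded to the nearest level) is an illustrative assumption, not beneAI's actual scoring logic.

```python
# Hypothetical mapping from six 1-5 dimension scores to an overall
# maturity label. The averaging rule is an assumption for illustration.

LEVELS = {1: "Nascent", 2: "Emerging", 3: "Developed",
          4: "Optimizing", 5: "Pioneering"}

DIMENSIONS = ("policies", "mission_alignment", "operational_oversight",
              "leadership_oversight", "risk_assessment", "procurement")

def maturity_level(scores: dict[str, int]) -> tuple[float, str]:
    """Average the six 1-5 dimension scores and label the result."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("expected one score per governance dimension")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("scores must be between 1 and 5")
    avg = sum(scores.values()) / len(scores)
    return avg, LEVELS[round(avg)]
```

For example, an agency scoring 3 on every dimension would land at "Developed"; one uneven profile (say, 5 on policies but 1 on procurement) averages out, which is exactly why the per-dimension spider chart matters more than the single number.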
Frequently Asked Questions
How do I know if my government agency has adequate AI policies?
AI policy maturity ranges from having no specific guidance (relying on outdated IT policies while employees use shadow AI) to maintaining living governance systems reviewed quarterly with data sovereignty standards aligned with NIST AI RMF and OMB directives like M-25-21. A strong AI policy should be documented, integrated into workflows, technically enforced through tools like Data Loss Prevention (DLP), and reviewed at least annually.
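The "technically enforced" idea above can be made concrete with a minimal sketch: screening text before it leaves the agency for an external AI tool. Real DLP products are far more capable; the two patterns here (SSN format, one common credential format) are illustrative assumptions only.

```python
import re

# Minimal illustration of technical enforcement: block prompts that
# contain sensitive data patterns. Patterns shown are examples only;
# production DLP uses much broader detection than two regexes.

BLOCKED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any blocked data types found in the text."""
    return [name for name, pat in BLOCKED_PATTERNS.items()
            if pat.search(text)]

def allow_send(text: str) -> bool:
    """A guardrail built into the tool, not a rule to remember."""
    return not screen_prompt(text)
```

This is the difference between administrative and technical governance: the policy still exists on paper, but the tool itself refuses to forward a prompt containing an SSN, regardless of whether the employee remembered the rule.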
What is OMB M-25-21 and how does it relate to AI governance?
OMB Memorandum M-25-21 (April 2025) is a federal directive that requires agencies to develop an AI Strategy including an assessment of the agency's current AI maturity, implement risk management practices for evaluating AI systems, and establish governance structures for AI adoption and procurement. It focuses on three key areas: Innovation, Governance, and Public Trust.
Who should oversee AI adoption at a government agency?
Effective AI oversight requires moving beyond a single technical person to a cross-functional AI governance committee with representatives from mission-critical programs, finance, legal, IT, and your agency's CIO/CTO office. This committee should have a formal charter, decision-making authority, clear escalation paths for AI-related incidents, and coordination with oversight bodies like the Inspector General.
How should government agencies assess AI risk?
AI risk assessment should go beyond basic data privacy checks to include operational reliability, bias auditing, mission-alignment evaluation, and regulatory compliance. Mature agencies use structured frameworks aligned with the NIST AI Risk Management Framework that evaluate each AI tool across multiple risk dimensions before deployment. Risk should be tiered to match the rigor of oversight to the stakes of the decision.
How should government agencies procure AI vendors?
Government AI vendor procurement must verify FedRAMP authorization status, require contractual no-training guarantees, ensure ATO (Authority to Operate) compatibility, comply with FAR/DFARS requirements, and use GSA Schedule or government-wide acquisition vehicles. Key questions: Does this vendor train on government data? Can we export our data if we leave? Does the vendor indemnify us if the AI produces harmful content?
What are the five AI governance maturity levels?
Level 1 (Nascent) — no formal structures; Level 2 (Emerging) — basic policies, inconsistently applied; Level 3 (Developed) — structured governance with documented policies; Level 4 (Optimizing) — governance integrated into workflows with technical enforcement; Level 5 (Pioneering) — adaptive governance that influences government-wide standards.
How long does the assessment take?
The assessment takes approximately 10 minutes. It covers 6 questions (one per governance dimension), each scored on a slider from 1 to 5. At the end, you receive a spider chart visualization and a downloadable PDF report with tailored recommendations for your current maturity level.
Is this assessment free?
Yes. The Domain 1 Governance assessment is completely free. For access to the full seven-domain assessment or a free 15-minute debrief of your results, contact beneAI.
Take the free assessment at beneai.co/aica-dom1-gov. Last updated: April 2026.
© 2026 beneAI LLC. All Rights Reserved.