How State Governments Are Approaching AI Governance: A Blueprint for Success

John Rood is the Founder of Proceptual, where he specializes in AI governance and safety. He teaches these subjects at Michigan State University and the University of Chicago, sharing his expertise with the next generation of leaders. As a Certified AI Systems Auditor, John has conducted AI audit, governance, and training projects for a diverse range of organizations, from Global 50 corporations to innovative startups.
As artificial intelligence rapidly transforms how government agencies operate, state leaders face an urgent question: How can we harness AI’s benefits while protecting citizens from its risks? The answer lies in comprehensive AI governance—and states across America are pioneering innovative approaches that offer valuable lessons for public sector employers nationwide.
The Growing Need for AI Governance
The stakes couldn’t be higher. Without proper oversight, AI systems can perpetuate bias in hiring decisions, expose sensitive citizen data to security breaches, infringe on intellectual property rights, and undermine the fundamental principle of fair treatment that citizens expect from their government.
The risks are real and documented. AI systems have been shown to exhibit racial bias in criminal justice applications, make discriminatory decisions in social services, and create privacy vulnerabilities when processing personal information. For government agencies—which must operate with the highest standards of transparency, accountability, and fairness—these risks are simply unacceptable.
Lawmakers in 45 states introduced almost 700 AI-related bills in 2024, demonstrating the urgency with which state governments are addressing these challenges. Many of these measures focus specifically on government use of AI technologies.
Leading by Example: State-Level AI Legislation
States are taking decisive action to regulate internal government use of AI. Kentucky has emerged as a notable leader in this space, passing legislation that requires the state to implement comprehensive AI governance frameworks. This groundbreaking law establishes clear requirements for how Kentucky state agencies must approach AI deployment, setting a precedent for other states to follow.
Similarly, Texas has enacted its own AI governance legislation, recognizing that state government AI use requires specific oversight mechanisms. These laws reflect a growing understanding among state leaders that internal AI governance is not optional—it’s essential for maintaining public trust and ensuring effective government operations.
The legislative momentum continues to build. Some proposals from 2024 include strong, substantive guardrails on the use of AI in the public sector, indicating that states are moving beyond simple acknowledgment of AI risks toward concrete regulatory action.
The NIST Framework: A Foundation for Success
So what should states do to implement effective AI governance? The most widely accepted and practical answer is adopting the NIST AI Risk Management Framework (AI RMF). According to NIST, the framework is intended for voluntary use and aims to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
At its core, the NIST AI RMF is built on four functions: Govern, Map, Measure, and Manage. These components work together to create a comprehensive approach to AI risk management:
Govern: Establishing organizational policies, procedures, and oversight mechanisms for AI use. This includes designating responsible parties, defining risk tolerance, and creating accountability structures.
Map: Identifying and categorizing AI systems and their associated risks. This involves understanding what AI technologies are being used, where they’re deployed, and what potential impacts they might have.
Measure: Assessing and quantifying risks associated with AI systems. This includes ongoing monitoring, testing for bias, and evaluating system performance.
Manage: Taking action to mitigate identified risks through technical, operational, or policy interventions. This involves implementing safeguards, providing training, and continuously improving AI systems.
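As a rough illustration only (the names, risk categories, and threshold below are assumptions, not part of the NIST text), the four functions can be sketched as an iterative cycle applied to a single AI system:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the NIST AI RMF cycle for one AI system.
# Field names, risk categories, and the risk threshold are illustrative.

@dataclass
class AISystem:
    name: str
    owner: str = ""                                        # Govern: accountable party
    risks: dict[str, float] = field(default_factory=dict)  # Map: identified risks
    mitigations: list[str] = field(default_factory=list)   # Manage: actions taken

def govern(system: AISystem, owner: str) -> None:
    """Govern: designate a responsible party for the system."""
    system.owner = owner

def map_risks(system: AISystem, risks: list[str]) -> None:
    """Map: record identified risk categories, initially unmeasured (NaN)."""
    for r in risks:
        system.risks.setdefault(r, float("nan"))

def measure(system: AISystem, risk: str, score: float) -> None:
    """Measure: attach a quantified score (e.g., from bias testing) to a risk."""
    system.risks[risk] = score

def manage(system: AISystem, threshold: float = 0.5) -> list[str]:
    """Manage: flag measured risks above the tolerance threshold for mitigation."""
    flagged = [r for r, s in system.risks.items() if s == s and s > threshold]
    system.mitigations.extend(f"mitigate:{r}" for r in flagged)
    return flagged
```

For example, a hypothetical resume-screening tool would be assigned an owner (Govern), have bias and privacy risks recorded (Map), receive scores from testing (Measure), and have any risk exceeding the agency's tolerance flagged for intervention (Manage), then the cycle repeats as the system and its context change.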
Though not a checklist, the framework's companion NIST Playbook supports flexible adaptation based on sector-specific needs and maturity levels, making it particularly well-suited to the diverse needs of state government agencies.
Navigating the Complexity Challenge
One of the unique challenges of AI governance at the state level is the sheer number and diversity of departments and actors involved. The AI needs of a state university system are vastly different from those of the Department of Motor Vehicles, which in turn differ from those of social services agencies or law enforcement departments.
A university system might use AI for academic research, student services, and administrative functions, requiring governance frameworks that balance academic freedom with ethical AI use. Meanwhile, a DMV might deploy AI for document processing and fraud detection, necessitating different risk management approaches focused on accuracy and privacy protection.
This diversity means that one-size-fits-all solutions simply won’t work. State AI governance must be sophisticated enough to address sector-specific needs while maintaining consistent standards across all government operations. Successful states are developing flexible frameworks that provide clear principles while allowing departments to tailor implementation to their unique circumstances.


Building the Foundation: AI System Registries
The first step in effective AI governance is understanding what AI systems are actually being used across state government. This requires implementing a comprehensive registry of all AI systems in use—a central database that tracks every AI application, from simple automation tools to sophisticated machine learning systems.
An effective AI registry should capture essential information about each system: what it does, what data it uses, who has access to it, what decisions it influences, and what risks it might pose. This inventory serves as the foundation for all other governance activities, enabling state leaders to prioritize oversight efforts and allocate resources effectively.
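A minimal registry record might capture exactly those fields. The sketch below is an assumption for illustration; the class and field names are not drawn from any specific state's schema:

```python
from dataclasses import dataclass

# Illustrative registry record; field names mirror the essentials described
# above (purpose, data, access, decisions influenced, risk) and are assumed.

@dataclass
class RegistryEntry:
    system_name: str
    department: str
    purpose: str                      # what the system does
    data_sources: list[str]           # what data it uses
    access_roles: list[str]           # who has access to it
    decisions_influenced: list[str]   # what decisions it influences
    risk_level: str                   # e.g., "low" / "medium" / "high"

class AIRegistry:
    """Central inventory of AI systems across state government."""

    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, entry: RegistryEntry) -> None:
        self._entries[entry.system_name] = entry

    def by_risk(self, level: str) -> list[RegistryEntry]:
        """Support prioritizing oversight effort by risk level."""
        return [e for e in self._entries.values() if e.risk_level == level]
```

Even a simple inventory like this lets leaders answer the prioritization question directly: querying for high-risk systems tells the governance team where to focus audits first.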
The registry process often reveals surprising insights. Many state agencies discover they’re using more AI than they initially realized, including AI embedded in commercial software solutions. Others find that similar AI tools are being deployed across multiple departments without coordination, creating opportunities for shared best practices and cost savings.
Centralized Oversight: The Role of State AI Governance Committees
Effective AI governance requires centralized coordination, which is why successful states are establishing dedicated AI governance committees at the state level. These committees, typically comprising representatives from multiple agencies along with technical and policy experts, serve as the central hub for AI oversight activities.
The committee’s role extends beyond simple oversight. They develop state-wide AI policies, coordinate training programs, facilitate knowledge sharing between agencies, and serve as the primary point of contact for AI-related issues. They also play a crucial role in ensuring that AI deployments align with state priorities and values.
All AI registry information should report up to this central committee, creating a comprehensive view of AI use across state government. This centralized approach enables better risk management, more efficient resource allocation, and stronger accountability mechanisms.
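The roll-up to the committee can be as simple as aggregating registry rows into a statewide summary. The rows and field names below are hypothetical, sketched only to show the idea:

```python
from collections import Counter

# Hypothetical registry rows rolled up into a committee-level view.
entries = [
    {"system": "doc-ocr", "department": "DMV", "risk": "medium"},
    {"system": "fraud-detect", "department": "DMV", "risk": "high"},
    {"system": "chatbot", "department": "Social Services", "risk": "low"},
]

def committee_summary(rows):
    """Aggregate registry rows into counts per department and per risk level."""
    return {
        "by_department": dict(Counter(r["department"] for r in rows)),
        "by_risk": dict(Counter(r["risk"] for r in rows)),
    }
```

A summary like this gives the committee the comprehensive view described above: which departments run the most AI systems, and how statewide risk is distributed, without requiring each agency to change how it manages its own inventory.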
Real-World Implementation: Lessons from the Field
States implementing AI governance frameworks are learning valuable lessons about what works and what doesn’t. Successful implementations typically start small, focusing on high-risk or high-visibility AI applications before expanding to comprehensive coverage. They also invest heavily in training and education, recognizing that effective governance requires widespread understanding of AI risks and best practices.
Communication emerges as a critical success factor. States that effectively engage stakeholders—including agency staff, citizens, and advocacy groups—throughout the governance development process tend to achieve better outcomes and broader buy-in. Transparency about AI use and governance efforts also helps maintain public trust.
Another key lesson is the importance of ongoing adaptation. AI technology evolves rapidly, and governance frameworks must be flexible enough to address new challenges and opportunities. States that build learning and adjustment mechanisms into their governance structures are better positioned for long-term success.
The Path Forward for State AI Governance
States that act now to implement comprehensive governance frameworks will be better positioned to harness AI’s benefits while protecting their citizens and maintaining public trust.
The combination of legislative action, NIST framework adoption, comprehensive system registries, and centralized oversight provides a proven blueprint for success. However, each state must adapt this blueprint to its unique circumstances, taking into account local priorities, resources, and challenges.
For public sector employers and job seekers, these governance developments signal a growing demand for AI expertise within government. Understanding AI governance principles, risk management frameworks, and implementation best practices will become increasingly valuable skills in the public sector job market.
In the future, state AI governance will likely become more sophisticated and comprehensive. States are beginning to collaborate and share best practices, potentially leading to more standardized approaches while still allowing for local adaptation. The federal government may also provide additional guidance and resources, further supporting state-level governance efforts.
The states leading on AI governance today are not just protecting their citizens—they’re building the foundation for more effective, efficient, and trustworthy government operations in the AI age. Their experiences provide valuable lessons for all levels of government grappling with the challenges and opportunities of artificial intelligence.