Governance Deserves A Bigger Role In The AI Agent Era

Shazia Manus is Chief Data and Analytics Officer for TruStage.
When we imagine the future of AI agents, it can be rather cinematic.
A patient steps into a glowing diagnostic pod and walks out moments later with a diagnosis and a custom treatment plan—all without seeing a practitioner.
Across town, the doctor is briefly alerted to the plan; her agentic colleague is on the job. She goes back to cheering on her daughter’s soccer team under a bright sky, a far cry from the 12-hour shifts she once worked beneath fluorescent lights, eyes heavy.
Like most fiction, this scenario skips over very pertinent, albeit less evocative, details. While patient, doctor and AI agent assume the starring roles, their castmate, governance, is relegated to the bottom of the call sheet. Although essential to the production, governance rarely appears on screen.
As access to AI agents rapidly democratizes, however, this underappreciated actor is about to step into the spotlight.
Governance As Best Supporting Actor
For AI agents to “peacefully coexist” with humans—as anticipated by leaders like Workday CEO Carl Eschenbach—technology integrators will need reassurance. As in past eras of technological revolution, reassurance often comes in the form of standards, frameworks and guidelines. These tools act as governance sentinels, standing guard at the frontier of change.
The tricky thing is that the bodies charged with creating these guardrails often lag well behind the innovation they hope to regulate. This leaves the responsibility to early-adopter firms.
The choices these firms make today will set precedents for how agentic systems are governed tomorrow. Ideally, they will ensure accountability stays on pace with autonomy.
Among the primary duties of governance sentinels is the safeguarding of intentionality. When it comes to AI adoption, firms can become typecast into one of two roles: bystander in a runaway plot or protagonist in an open arc, choosing their adventures deliberately as the story unfolds.
The most progressive firms position themselves in the latter category, finding the right talent to identify strategic use cases and then empowering them to act fast—within guardrails they’ve helped design and endorse.
There’s no one-size-fits-all approach to establishing this framework. Effective methods depend on factors like a firm’s team size and expertise, regulatory requirements, organizational risk appetite and the company’s cultural receptivity to fail-fast transformation. That said, a few foundational practices have emerged as useful starting points.
Tips For Establishing Governance For A Responsible Agentic Future
Seize the AI moment. Governance is about balancing business value with risk. Yet, securing funding for governance is like asking a producer to back a film with no stars and a vague plot. It may be brilliant, but without a compelling story, leaders looking at the investment through a lens of scarcity (and that’s a lot of folks right now) are less inclined to green-light the project. This is where AI teams must carpe diem. AI is having a moment, and spending on AI projects is only going up. In fact, enterprise leaders in the U.S. are planning to increase AI spend by an average of 5.7% this year, even though overall IT budgets will rise by just 1.8%. By framing governance under the AI umbrella, teams can position it not as overhead, but as an essential part of the program.
Imagine pitching an AI agent that flags high-risk customer service complaints and automates compliance tasks. Baking governance into the design keeps it from being written off as a cost center; governance becomes part of what gets the project funded. Best of all, the standards, frameworks and guidelines developed can be reused across future AI projects, speeding development and reinforcing a culture of intentionality.
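To make that concrete, here is a minimal sketch of what governance-by-design can look like, assuming a hypothetical complaint-triage agent; the names (assess_risk, AUDIT_LOG, RISK_THRESHOLD) are illustrative, not a prescribed implementation:

```python
from datetime import datetime, timezone

AUDIT_LOG = []        # every agent decision is recorded for compliance review
RISK_THRESHOLD = 0.8  # complaints scoring above this go to a human (assumed value)

def assess_risk(complaint: str) -> float:
    """Placeholder scorer; a production system would use a trained model."""
    return 0.9 if "fraud" in complaint.lower() else 0.2

def triage(complaint: str) -> str:
    """Route a complaint, logging the decision as part of the workflow."""
    score = assess_risk(complaint)
    decision = "escalate_to_human" if score >= RISK_THRESHOLD else "auto_resolve"
    # Governance is part of the pipeline, not bolted on afterward.
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "complaint": complaint,
        "score": score,
        "decision": decision,
    })
    return decision

print(triage("Possible fraud on my account"))  # escalate_to_human
```

The point of the design is that the audit trail and the escalation rule ship with the agent itself, so the compliance work is funded as part of the same project rather than pitched separately as overhead.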
Back into permissible use cases. Just as artists use white space to control how the viewer moves through a composition, AI teams can use hard boundaries to shape where AI agents operate. By clearly defining where the technology should never have autonomy, AI agent integrators create clarity around where it can. One effective approach is to design a criticality matrix that encodes the firm’s ideological and philosophical commitments and flags the high-impact areas that should remain firmly under human oversight.
Take our imaginary doctor, for example. Her healthcare system might prohibit AI agents from making end-of-life care decisions, deferring those to human clinicians. Such a guardrail limits the risk of an autonomous AI agent making a catastrophic mistake. It also helps direct AI teams toward safe, acceptable use cases, such as autonomously scheduling follow-up visits based on test results.
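A criticality matrix can be as simple as an explicit map from decision domains to autonomy tiers. The sketch below assumes hypothetical domain names and tiers for the healthcare example; a real matrix would be authored and endorsed by the governance team:

```python
# Hypothetical autonomy tiers; actual entries come from the governance team.
CRITICALITY_MATRIX = {
    "end_of_life_care":       "human_only",      # never autonomous
    "diagnosis_confirmation": "human_approved",  # agent drafts, clinician signs off
    "follow_up_scheduling":   "autonomous",      # safe for the agent to act alone
}

def may_act_autonomously(domain: str) -> bool:
    """Deny by default: unknown domains stay under human oversight."""
    return CRITICALITY_MATRIX.get(domain, "human_only") == "autonomous"

print(may_act_autonomously("follow_up_scheduling"))  # True
print(may_act_autonomously("end_of_life_care"))      # False
```

Note the deny-by-default posture: any domain not explicitly cleared for autonomy falls back to human oversight, which is precisely the guardrail the doctor’s healthcare system would want.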
Make learning its own KPI. Good governance thrives on good measurement. It’s critical for early-adopter firms to recognize this while also being willing to get creative about what constitutes success. One of the upsides of being an early adopter is discovery. When leaders treat discovery as a measurable outcome, learning becomes its own form of productivity. Two great learning metrics to track are experimentation frequency and occurrences of adaptation. The first monitors how often new use cases for AI agents are piloted; the second tracks how often those pilots adapt based on insights.
Consider a payments firm looking to deploy AI agents for better fraud detection. It plans to pilot at least three new agent-driven use cases. The first project yields too many false positives, introducing an unacceptable amount of friction into the consumer experience. Rather than scrapping the project, the team adjusts the agent’s thresholds and retrains it with additional data to keep transactions flowing.
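Measured as KPIs, those two learning metrics reduce to simple counts. A minimal sketch, assuming hypothetical pilot records for the payments example:

```python
# Hypothetical pilot records for one quarter (names are illustrative).
pilots = [
    {"name": "fraud_detection_v1", "adapted": True},   # thresholds retuned, model retrained
    {"name": "chargeback_triage",  "adapted": False},
    {"name": "merchant_scoring",   "adapted": True},
]

experimentation_frequency = len(pilots)                     # how often new use cases are piloted
adaptation_occurrences = sum(p["adapted"] for p in pilots)  # how often pilots changed course

print(f"Pilots launched: {experimentation_frequency}")         # 3
print(f"Pilots adapted on insight: {adaptation_occurrences}")  # 2
```

In this telling, the fraud pilot that retuned its thresholds counts as an adaptation, not a failure, which is exactly the reframing the KPI is meant to reward.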
Building An AI Narrative That Lasts
For now, agents are center stage. But it’s the strength of the supporting cast—especially governance—that will determine success. Early-adopter firms have a responsibility to shape this narrative in a way that honors the technology’s potential while also maintaining human agency, human volition and the safety and soundness of their operating principles.
The best technology stories are the ones with purpose and staying power. That starts with the right fundamentals. For most, those fundamentals are sustainable funding, intentional use cases with clear boundaries and principled KPIs that define the plot of AI adoption.