We are moving from AI that generates to AI that acts.
Generative AI helped organisations draft emails, summarise meetings, and write code. Agentic AI goes a step further: it can plan across multiple steps, use tools, and take actions in real systems—updating databases, initiating workflows, or even triggering payments—often at machine speed. That shift in capability is precisely why IMDA’s Model AI Governance Framework for Agentic AI (Version 1.0, Jan 2026) lands at an important moment: it translates familiar responsible AI principles into practical expectations for AI systems that can change the world outside the chat window.
Below are the main updates in the framework, and what they signal for organisations—especially in high-trust, high-stakes sectors like financial services.
What’s new in the Agentic AI framework
1. A clearer “anatomy” of agents, not just models
Earlier governance conversations often treated “AI” as a model that outputs text or scores. This framework insists that agents are systems—and the risk profile depends on their components.
A key visual (the core components diagram on page 3) lays out the moving parts of an agent: model, instructions, memory, planning & reasoning, tools, and protocols. It also explicitly calls out emerging standards for agent interoperability, such as tool-communication protocols and agent-to-agent protocols. That matters because governance needs to cover more than the LLM: the tool layer and protocol layer are where “output” becomes “action.”
Why it matters: Many failures won’t look like a wrong answer; they’ll look like a correct-looking answer that triggered the wrong tool call, with real operational consequences.
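The framework's component view can be pictured as a simple data structure. The field names below are illustrative assumptions, not a schema from the framework, which names the concepts (model, instructions, memory, planning & reasoning, tools, protocols) without prescribing any representation:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Illustrative sketch of the agent 'anatomy' the framework describes.
    All field names are hypothetical."""
    model: str                    # underlying LLM, e.g. an internal model ID
    instructions: str             # system prompt / operating instructions
    memory_store: str             # where state persists between steps
    planner: str                  # planning & reasoning strategy
    tools: list[str] = field(default_factory=list)      # tool layer: where output becomes action
    protocols: list[str] = field(default_factory=list)  # e.g. tool-communication, agent-to-agent

agent = AgentSpec(
    model="example-llm",
    instructions="Resolve customer billing queries; never issue refunds directly.",
    memory_store="session-scoped",
    planner="plan-then-execute",
    tools=["crm.read", "ticket.create"],  # read-mostly, deliberately narrow
    protocols=["tool-protocol", "agent-to-agent"],
)
```

Governance reviews can then interrogate each field separately: the tool and protocol lists, not just the model, define what the agent can actually change.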
2. A more operational way to assess risk: action-space, autonomy, reversibility
The framework introduces a practical lens: an agent’s risk depends heavily on:
- Action-space: what it is allowed to do (systems it can access; read vs write; breadth of tools),
- Autonomy: how much freedom it has to decide when and how to act,
- Reversibility: whether actions can be undone.
This comes through strongly in the risk assessment tables on pages 9–10, which distinguish factors affecting impact (e.g., sensitive data access, external system access, scope of actions, reversibility) and likelihood (e.g., autonomy, task complexity, exposure to untrusted external systems).
Why it matters: This is governance that maps cleanly onto business and control design. It helps teams move from abstract principles (“be accountable”) to concrete decisions (“this agent can read but not write,” “approvals required for irreversible actions,” “limit browser tools in early rollouts”).
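A minimal sketch of how this lens could translate into a triage helper. The numeric scales, weightings, and tier cutoffs are invented for illustration; the framework's tables on pages 9–10 are qualitative:

```python
def assess_agent_risk(action_space: int, autonomy: int, reversibility: int) -> str:
    """Toy risk triage using the framework's three factors.

    Each input is rated 1 (low) to 3 (high); 'reversibility' is rated
    1 = fully reversible, 3 = irreversible. Scale and cutoffs are
    illustrative assumptions, not taken from the framework.
    """
    impact = action_space + reversibility  # how much damage an action could do
    likelihood = autonomy                  # how freely the agent decides to act
    score = impact * likelihood
    if score >= 12:
        return "high: require human approval for write actions"
    if score >= 6:
        return "medium: scoped credentials, enhanced monitoring"
    return "low: standard monitoring"

# A read-only, low-autonomy reporting agent:
print(assess_agent_risk(action_space=1, autonomy=1, reversibility=1))
# An autonomous agent that can trigger payments (irreversible):
print(assess_agent_risk(action_space=3, autonomy=3, reversibility=3))
```

The point is not the arithmetic but the mapping: each factor corresponds to a control lever (narrow the tool list, lower autonomy, or add approval gates for irreversible actions) that demonstrably lowers the tier.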
3. “Bound risks upfront” becomes a design discipline, not a post-hoc checklist
The framework’s first dimension—Assess and bound the risks upfront—puts emphasis on restricting what an agent can do by design:
- least-privilege access to tools and data,
- constraining workflows using SOP-like structures for process-driven tasks,
- isolating high-risk functions via sandboxing and “take offline” procedures.
This is not just caution; it is a pragmatic recognition that once agents operate at scale, continuous human oversight of every micro-step becomes impractical.
Why it matters: It pushes organisations toward “guardrails first” engineering: designing bounded systems early rather than relying on monitoring and incident response later.
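A "guardrails first" posture can start as simply as an explicit tool allowlist checked before every call. This sketch, including the tool and permission names, is a hypothetical illustration of least-privilege bounding:

```python
# Hypothetical least-privilege gate: the agent may only invoke tools on its
# allowlist, and high-risk tools are isolated behind a human approval flag.
ALLOWED_TOOLS = {
    "crm.read": {"write": False},
    "ticket.create": {"write": True, "requires_approval": False},
    "payments.initiate": {"write": True, "requires_approval": True},
}

def authorize_tool_call(tool: str, approved_by_human: bool = False) -> bool:
    """Return True only if the call is within the agent's bounded action-space."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return False  # not on the allowlist: denied by design, not by monitoring
    if policy.get("requires_approval") and not approved_by_human:
        return False  # high-risk function stays gated until a human signs off
    return True

assert authorize_tool_call("crm.read")
assert not authorize_tool_call("database.drop")      # unknown tool: denied
assert not authorize_tool_call("payments.initiate")  # needs approval first
```

Denial is the default here: anything outside the declared action-space fails before execution, rather than being caught afterwards in logs.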
4. A major “update” area: agent identity and permissions
One of the most consequential sections is the explicit discussion of agent identity management and the gap between today’s identity systems and what agentic environments will require.
The framework notes that traditional authorisation is often static, while agents may need fine-grained, context-dependent permissions—and the reality of agents acting on behalf of different humans, or spawning sub-agents, complicates accountability.
It offers interim best practices:
- give agents unique identities,
- record which capacity they’re acting in (independent vs on behalf of a user),
- enforce a “no privilege escalation” rule of thumb: a user should not grant an agent permissions beyond what the user has, and delegations should be recorded.
Why it matters: This is one of the first mainstream governance documents that treats agents like new kinds of “digital employees” who require identity, entitlements, audit trails, and controls—especially when they can transact, modify records, or interact with external systems.
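The "no privilege escalation" rule and delegation recording could be enforced at grant time with a check like the following. All identifiers and permission names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Delegation:
    """Records an agent acting on behalf of a user, echoing the framework's
    interim practices: unique agent identity, capacity, and a recorded grant."""
    agent_id: str        # unique identity for the agent
    user_id: str         # the human principal it acts for
    granted: set         # permissions delegated to the agent
    when: datetime       # audit timestamp for the delegation

def delegate(agent_id: str, user_id: str, user_perms: set, requested: set) -> Delegation:
    """Grant only permissions the user already holds: no privilege escalation."""
    if not requested <= user_perms:
        raise PermissionError(f"escalation attempt: {requested - user_perms}")
    return Delegation(agent_id, user_id, requested, datetime.now(timezone.utc))

user_perms = {"accounts.read", "tickets.write"}
d = delegate("agent-7", "alice", user_perms, {"accounts.read"})

try:
    delegate("agent-7", "alice", user_perms, {"payments.write"})
except PermissionError:
    pass  # correctly blocked: the user cannot grant what they do not have
```

The same record also answers the accountability question after the fact: who delegated what to which agent, and when.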
5. Accountability is expanded across the value chain, not just the deploying team
The framework’s second dimension—Make humans meaningfully accountable—goes beyond the simple “human-in-the-loop” slogan.
First, it shows a simplified agentic AI value chain (the diagram on page 13): model developers, tooling providers, agentic AI system providers, deploying organisations, and end users. It then emphasises that deploying organisations need clear responsibility allocations internally (leaders, product teams, cybersecurity teams, users) and externally (contracts, performance guarantees, security arrangements, observability features).
Why it matters: Agentic deployments often involve multiple vendors and platforms. This framework nudges organisations to build governance that survives outsourced complexity—especially around logging, tool-call visibility, scoped credentials, and incident readiness.
6. Human oversight is reframed to fight automation bias and alert fatigue
The framework recognises two realities:
- agents can act fast and across many steps,
- humans supervising capable systems are vulnerable to automation bias and alert fatigue.
So it recommends designing meaningful approval checkpoints, especially for actions that are high-stakes, irreversible, atypical, or in user-defined risk categories, and presenting approvals in digestible form rather than dumping raw logs on reviewers.
It also recommends auditing oversight effectiveness over time, and complementing human review with automated monitoring.
Why it matters: This is an important shift from “a human somewhere signed off” to “the human oversight mechanism remains effective at scale.”
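Fighting alert fatigue in practice means two things: only surfacing the actions that genuinely warrant review, and condensing each one into a short summary. The flags and fields below are assumptions for illustration; the framework names the risk categories but prescribes no format:

```python
def needs_checkpoint(action: dict) -> bool:
    """Flag actions the framework highlights for human approval: high-stakes,
    irreversible, atypical, or user-defined risk. Keys are illustrative."""
    return (
        not action.get("reversible", True)
        or action.get("high_stakes", False)
        or action.get("atypical", False)
        or action.get("user_flagged", False)
    )

def approval_summary(action: dict) -> str:
    """Condense a proposed agent action into a short, reviewable summary
    instead of a raw log dump."""
    lines = [
        f"Action:     {action['tool']}",
        f"Target:     {action['target']}",
        f"Reversible: {'yes' if action['reversible'] else 'NO'}",
    ]
    if "amount" in action:
        lines.append(f"Amount:     {action['amount']}")
    return "\n".join(lines)

proposed = {
    "tool": "payments.initiate",
    "target": "vendor-4421",
    "amount": "SGD 12,500",
    "reversible": False,
}
if needs_checkpoint(proposed):
    print(approval_summary(proposed))
```

Routine, reversible actions skip the queue entirely, so the approvals a human does see remain rare enough to be read with attention.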
7. Stronger technical expectations: agent testing, monitoring, and staged rollout
The third dimension—Implement technical controls and processes—is where the framework becomes most directly actionable for engineering teams. It:
- lists example controls for planning and tool use (e.g., reflection, strict tool input formats, least privilege),
- calls out protocol security practices (including whitelisting trusted servers where relevant),
- expands testing beyond output quality to include: overall execution, policy compliance, tool-calling correctness, robustness, workflow-level testing, and multi-agent testing,
- recommends gradual rollouts and continuous monitoring post-deployment, with clear alert thresholds and interventions.
Why it matters: For agentic AI, “model evaluation” isn’t enough. The system must be tested like software and like an autonomous workflow operator.
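"Strict tool input formats" can be enforced by validating every tool call against a declared schema before execution. This sketch uses only the standard library; the tool name and fields are invented:

```python
# Minimal schema check for tool inputs: reject any call whose arguments do not
# match the declared types exactly (no missing keys, no extra keys).
TOOL_SCHEMAS = {
    "ticket.create": {"customer_id": str, "summary": str, "priority": int},
}

def validate_tool_call(tool: str, args: dict) -> list[str]:
    """Return a list of violations; an empty list means the call may proceed."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        return [f"unknown tool: {tool}"]
    errors = []
    for key, expected in schema.items():
        if key not in args:
            errors.append(f"missing field: {key}")
        elif not isinstance(args[key], expected):
            errors.append(f"wrong type for {key}: expected {expected.__name__}")
    for key in args:
        if key not in schema:
            errors.append(f"unexpected field: {key}")
    return errors

ok = validate_tool_call(
    "ticket.create",
    {"customer_id": "C-101", "summary": "refund query", "priority": 2},
)
assert ok == []
```

The same validator doubles as a test fixture: "tool-calling correctness" tests can replay recorded agent transcripts through it and assert that no violation ever reaches a live system.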
8. A new governance pillar: end-user responsibility and tradecraft preservation
The fourth dimension—Enable end-user responsibility—is notable because it treats responsible deployment as a socio-technical problem.
The framework distinguishes user archetypes (the user needs diagram on page 22):
- users who interact with agents (often external-facing) → focus on transparency,
- users who integrate agents into work processes (often internal) → add training and education.
It also flags a subtle but real risk: tradecraft erosion—as agents take over entry-level tasks, humans may lose foundational skills.
Why it matters: This pushes organisations to treat training, usage policy, and escalation pathways as governance controls—not optional change management.
Why this is significant
It marks a shift from “AI ethics” to “AI operations”
The framework doesn’t replace principles like transparency or fairness; it operationalises them for a world where AI can act. That’s a different governance problem, closer to cybersecurity, operational risk, and internal controls.
It will shape enterprise expectations for “agent-ready” systems
By emphasising identity, least-privilege permissions, auditability of tool calls, and continuous monitoring, it effectively sets a baseline for what “responsible agentic deployment” should look like in procurement, implementation, and assurance.
It encourages innovation with bounded risk
The message is not “don’t deploy agents.” It is: deploy them in a way that scales trust—through staged rollouts, well-defined checkpoints, and controls proportionate to impact. This is aligned with Singapore’s pragmatic positioning: enable adoption while maintaining confidence.
For financial services and regulated industries, it provides a common language
Agentic AI in finance isn’t hypothetical: it can touch customer data, communications, onboarding, fraud operations, and transaction workflows. The framework’s focus on reversibility, approvals for high-stakes actions, traceability, and vendor accountability maps directly onto how regulated organisations already think about operational risk.
A practical takeaway for industry feedback
If you are contributing to industry consultation, the highest-leverage feedback will likely be concrete case studies around:
- how teams implement agent identity and delegated permissions in real environments,
- what “meaningful checkpoints” look like without killing productivity,
- how to test and monitor multi-step workflows at scale,
- how to train users while preventing tradecraft erosion.
That is where “good governance” becomes reproducible practice.
Source document: IMDA, Model AI Governance Framework for Agentic AI (Version 1.0, Jan 2026).

