Agent Governance Architecture
A four-layer reference architecture defining how AI agents declare intent, obtain authorization, operate within governed boundaries, and submit to immutable audit. Built around a patent-pending enforcement primitive that makes every layer cryptographically real.
Most governance frameworks define policy. Few enforce it at runtime.
ISO standards, NIST frameworks, and EU AI Act compliance guides all define what agents should do, but they stop short of enforcing it at execution time. Intent-Based Authorization is a patent-pending enforcement primitive that operates on every agent action: cryptographic verification, scope enforcement, behavioral drift detection, and a hardware-layer kill switch, with a target enforcement latency under 5 ms. Without IBA at its center, AGA is a policy document. With IBA, it is a machine-enforced architecture.
BEHAVIOR CONTROL PROTOCOL
Layers 3 and 4 define what agents should do. IBA enforces it. Deterministically. Cryptographically.
Intent declared: scope, expiry, and permitted actions
  X-IBA-Resource · X-IBA-Action · X-IBA-Version

TBDE verification:
  · Verify signatures (principal + agent)
  · Check certificate not expired or replayed
  · Evaluate scope envelope against requested action
  · Behavioral drift check against declared intent
  · Commit decision to WitnessBound before execution

Verdict:
  X-IBA-Verdict: ALLOW · Action executes within scope
  X-IBA-Verdict: BLOCK · Action nullified · New cert required

Immutable audit: before execution · Tamper-proof · Permanent record
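The verification sequence above can be sketched in code. This is a minimal illustration under simplifying assumptions, not the AGA implementation: the field names, the symmetric HMAC signatures, and the in-memory replay set are all hypothetical (the real protocol presumably uses asymmetric signatures and durable replay state).

```python
import hashlib
import hmac
import time

# Illustrative shared keys only; a real deployment would use asymmetric signing.
PRINCIPAL_KEY = b"principal-demo-key"
AGENT_KEY = b"agent-demo-key"

def sign(key: bytes, payload: str) -> str:
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def make_cert(resource: str, action: str, ttl: float) -> dict:
    """Dual-signed Intent Certificate covering scope, expiry, and a replay nonce."""
    cert = {
        "resource": resource,
        "action": action,
        "expires": time.time() + ttl,
        "nonce": hashlib.sha256(str(time.time_ns()).encode()).hexdigest()[:16],
    }
    payload = f"{cert['resource']}|{cert['action']}|{cert['expires']}|{cert['nonce']}"
    cert["sig_principal"] = sign(PRINCIPAL_KEY, payload)  # principal signs first
    cert["sig_agent"] = sign(AGENT_KEY, payload)          # agent counter-signs
    return cert

def tbde_verify(cert: dict, requested_action: str, seen_nonces: set) -> str:
    """Run the TBDE checks in order; any failure falls through to BLOCK."""
    payload = f"{cert['resource']}|{cert['action']}|{cert['expires']}|{cert['nonce']}"
    # 1. Verify both signatures (principal + agent).
    if not hmac.compare_digest(cert["sig_principal"], sign(PRINCIPAL_KEY, payload)):
        return "BLOCK"
    if not hmac.compare_digest(cert["sig_agent"], sign(AGENT_KEY, payload)):
        return "BLOCK"
    # 2. Check the certificate is neither expired nor replayed.
    if time.time() > cert["expires"] or cert["nonce"] in seen_nonces:
        return "BLOCK"
    seen_nonces.add(cert["nonce"])
    # 3. Evaluate the scope envelope against the requested action.
    if requested_action != cert["action"]:
        return "BLOCK"
    return "ALLOW"

nonces: set = set()
cert = make_cert("orders/123", "read", ttl=30.0)
print(tbde_verify(cert, "read", nonces))    # → ALLOW
print(tbde_verify(cert, "read", nonces))    # → BLOCK (nonce replay)
print(tbde_verify(cert, "delete", set()))   # → BLOCK (outside scope envelope)
```

Note the order of checks mirrors the flow above: identity first, freshness second, scope last, with BLOCK as the default outcome whenever any check fails.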
The architecture exists because IBA makes it enforceable.
Every governance framework before IBA had the same flaw: at the moment of execution, the agent either complied or it didn’t. There was no mechanism to guarantee compliance cryptographically before the action ran.
IBA closes that gap. The Intent Certificate is signed by both the agent and the principal before any action is taken. The TBDE (Trust-Boundary Decision Engine) verifies it at the transport layer — checking identity, scope, and behavioral consistency before execution proceeds. WitnessBound records every decision to an immutable ledger.
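The "immutable ledger" role that WitnessBound plays can be illustrated with the standard hash-chain pattern for append-only audit logs, where each entry commits to the digest of the previous one so that any retroactive edit invalidates the chain. WitnessBound's actual internals are not specified here; the class and field names below are hypothetical.

```python
import hashlib
import json
import time

class Ledger:
    """Append-only hash chain: each entry commits to the previous entry's digest."""

    def __init__(self):
        self.entries = []          # list of (digest, record) pairs
        self.head = "0" * 64       # genesis digest

    def commit(self, verdict: str, detail: dict) -> str:
        """Append a decision record, linking it to the current chain head."""
        record = {"prev": self.head, "ts": time.time(),
                  "verdict": verdict, "detail": detail}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append((digest, record))
        self.head = digest
        return digest

    def verify_chain(self) -> bool:
        """Recompute every digest; any tampered entry breaks the chain."""
        prev = "0" * 64
        for digest, record in self.entries:
            if record["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True

wb = Ledger()
wb.commit("ALLOW", {"action": "read", "resource": "orders/123"})
wb.commit("BLOCK", {"action": "delete", "resource": "orders/123"})
print(wb.verify_chain())               # → True
wb.entries[0][1]["verdict"] = "BLOCK"  # tampering with a past entry
print(wb.verify_chain())               # → False
```

Because each decision is committed before the action executes, a missing or broken link in the chain is itself evidence that enforcement was bypassed.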
Recent litigation involving AI agents underscores the legal need for verifiable agent authorization. In Amazon v. Perplexity (N.D. Cal., March 2026), a federal court found that user consent alone was insufficient for agent platform access, pointing toward the kind of machine-readable authorization credential that IBA provides.
READ PROTOCOL SPEC →

AGA vs. existing governance frameworks.
| Capability | ISO 42001 | NIST AI RMF | EU AI Act | Zero Trust | ★ AGA / IBA |
|---|---|---|---|---|---|
| Runtime enforcement | — | — | — | ✓ | ✓ Cryptographic |
| Agent-specific authorization | — | — | — | — | ✓ Intent Certificate |
| Scope enforcement at execution | — | — | — | Partial | ✓ DENY_ALL default |
| Behavioral drift detection | — | — | — | — | ✓ Against declared intent |
| Hardware kill switch | — | — | — | — | ✓ Optional research hardware layer |
| Immutable audit chain | — | — | ✓ Partial | — | ✓ WitnessBound |
| Legal signal: agent authorization | — | — | — | — | ✓ Amazon v. Perplexity 2026 |
| Patent-protected core | — | — | — | — | ✓ GB2603013.0 (pending) |
| Sub-5ms latency | — | — | — | — | ✓ O(1) deterministic |
| Model agnostic | ✓ | ✓ | ✓ | ✓ | ✓ Transport layer |
One architecture. Four sites. One enforceable core.
Each site in the AGA ecosystem carries a reference to the full stack. Here is the badge that appears on every AGA component site: