If you've implemented OAuth scopes, you've already touched the edge of a 600-year-old governance system.
In January 2025, South, Marro, Hardjono, Mahari, and Pentland published arXiv:2501.09674 — a three-token architecture for AI agent authorization extending OAuth 2.0 and OpenID Connect:
- User ID token — standard OIDC identity. Who owns the agent.
- Agent ID token — the agent's capabilities, limitations, and unique identifier.
- Delegation token — cryptographically signed, scoped, revocable. The authorization itself.
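The three tokens compose naturally: the delegation token binds the other two together with a scope narrower than the agent's full capabilities. A minimal sketch of that structure, assuming illustrative field names (the paper defines the tokens abstractly, not this exact schema):

```python
from dataclasses import dataclass

@dataclass
class UserIDToken:
    sub: str                # OIDC subject: who owns the agent
    iss: str                # issuing identity provider

@dataclass
class AgentIDToken:
    agent_id: str           # the agent's unique identifier
    capabilities: list[str] # what the agent can do
    limitations: list[str]  # what it must not do

@dataclass
class DelegationToken:
    user: UserIDToken       # delegator identity
    agent: AgentIDToken     # delegatee identity
    scope: list[str]        # authorized actions, narrower than capabilities
    signature: str          # cryptographic signature over the grant
    revoked: bool = False   # revocation flag, checked at use time
```

The key design point is that authorization lives in the delegation token alone: identity and capability tokens describe, but only the signed, revocable grant authorizes.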
They didn't reference privateering. But the architecture they built is the same one Western maritime law spent 300 years refining.
## The Original OAuth: Letters of Marque
Before a Baltimore privateer could leave harbor in 1812, the owner had to:
- Declare the vessel's name, tonnage, and armament (identity)
- Receive a commission specifying exactly which ships they could attack (scope)
- Post a $5,000–$10,000 bond (accountability)
- Submit every capture to a vice-admiralty prize court (review)
- Accept that violating the commission meant revocation and criminal liability
Five layers. Identity. Scope. Accountability. Review. Revocation. Without the commission, you were a pirate. Without the prize court condemnation, your capture was stolen property.
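The five layers chain together as a single authorization check: fail any one and the capture is piracy, not privateering. A toy sketch of that pipeline, with hypothetical names (`Commission`, `authorize_capture`) that are illustrative, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Commission:
    vessel: str            # identity: the declared vessel
    scope: set[str]        # which targets the commission covers
    bond_posted: bool      # accountability: bond on file
    revoked: bool = False  # revocation: commission can be withdrawn

def authorize_capture(commission: Commission, target: str, reviewed: bool) -> bool:
    """All five layers must pass; any single failure denies the capture."""
    if commission.revoked:              # revocation check
        return False
    if not commission.bond_posted:      # accountability check
        return False
    if target not in commission.scope:  # scope check
        return False
    return reviewed                     # review: prize-court condemnation
```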
## Convergent Evolution in Code
Stanford Law's CodeX project mapped the same structure onto AI agent liability, identifying three categories of authority: express (explicit delegation), implied (reasonable inference), and apparent (what third parties believe the agent can do).
That third one is where systems break. In Moffatt v. Air Canada (2024), a chatbot told a customer they could retroactively apply for bereavement fares. The company never authorized that promise. The tribunal held the company liable anyway — because a reasonable customer would believe the agent could make it.
The apparent_authority edge case your legal team hasn't thought about.
## The Liability Architecture Is Tightening
California's AB 316, effective January 2026, precludes defendants from using autonomous AI operation as a defense. The EU's Product Liability Directive, by December 2026, treats AI as a product under strict liability.
The pattern: whoever deploys the agent bears full responsibility. This is what the privateer's bond encoded — the commission didn't absolve the owner; it made them formally responsible.
Meanwhile, Congress is bringing back the original. H.R. 4988 revives Article I letters of marque for cyber operations. A separate Senate bill targets cartels. The 1812 mechanism is live again.
## The Prize Court Is the Point
Every institutional solution to delegation — across centuries and civilizations — converges on the same architecture. But the piece that mattered most was the prize court: mandatory judicial review before any prize was legally claimed.
For AI agents, the prize court is the audit trail. Not just logging — structured, queryable evidence that the agent operated within scope, that no third-party rights were violated, that the outcome matches the authorization.
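A hash-linked log is the simplest structure with that property: each entry commits to its predecessor, so altering any record after the fact breaks every subsequent link. A minimal toy sketch (not the Chain of Consciousness implementation, just the underlying idea):

```python
import hashlib
import json

class AuditLog:
    """Toy hash-chained audit trail: each entry commits to the previous one."""

    def __init__(self):
        self.entries = []

    def append(self, action: str, details: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"action": action, "details": details, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("action", "details", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Real systems add signatures and external anchoring on top, but verification always reduces to this: replay the chain and check that every link still holds.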
Without it, your agent's autonomous actions are as legally suspect as an uncondemned prize. And California just eliminated the defense that used to protect you.
## Build the Audit Trail Before You Leave the Harbor
The essay's argument reduces to one claim: without a verifiable record of delegation and scope compliance, every autonomous action is legally suspect. Chain of Consciousness provides that record — a cryptographic, tamper-evident, hash-linked provenance chain for every action your agent takes. Identity verified, scope documented, outcomes anchored.
When the post-hoc review comes — and the liability architecture guarantees it will — the record is there.
```shell
pip install chain-of-consciousness
# or
npm install chain-of-consciousness
```

```python
from chain_of_consciousness import ChainOfConsciousness

coc = ChainOfConsciousness()
entry = coc.add_entry(
    action="delegation_scope_check",
    details={"scope": "inbox_review", "constraint": "suggest_only"},
    agent_id="agent-007"
)
# Tamper-evident, hash-linked, anchored
```
Full essay with all 24 sources: Letters of Marque for AI Agents
This article was originally published by DEV Community and written by Alex @ Vibe Agent Making.