Advanced

    From Craft to Constitution

    A governance-first paradigm for turning agent development from brittle craft into principled engineering.

    Jay Burgess · 4 min read

    Governance-first agent engineering starts from a blunt observation: powerful agents are not deterministic programs. They are probabilistic systems that reason, select actions, and adapt in ways that cannot be fully controlled by traditional command-style programming. Treating them like ordinary software creates brittle systems that work in demos and fail under mission-critical pressure.

    A constitutional approach changes the center of gravity. Instead of relying on one prompt to command behavior, the system defines principles, policies, roles, permissions, checks, and arbitration mechanisms. The agent operates within a governed environment. Its actions are not trusted because the model sounds confident; they are trusted when they satisfy enforceable rules.
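    The "trusted when they satisfy enforceable rules" idea can be sketched in a few lines. Everything below is illustrative: `Check`, `ProposedAction`, and `enforce` are hypothetical names, and the two rules stand in for a real policy set.

```ts
// Illustrative sketch: a proposed action is accepted only when it passes
// explicit checks, not because the model sounds confident.
interface ProposedAction {
  tool: string;
  target: string;
}

// A check returns null on pass, or a violation message on failure.
type Check = (action: ProposedAction) => string | null;

// Two stand-in rules; a real system would load these from policy config.
const checks: Check[] = [
  (a) => (a.tool === "shell" ? "shell access is not permitted" : null),
  (a) => (a.target.startsWith("prod/") ? "production targets require approval" : null),
];

function enforce(action: ProposedAction): { allowed: boolean; violations: string[] } {
  const violations = checks
    .map((check) => check(action))
    .filter((v): v is string => v !== null);
  return { allowed: violations.length === 0, violations };
}
```

    The point of the shape is that the allow/deny decision is computed from rules, so it can be unit-tested and audited independently of any prompt.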

    This framing is useful for teams building production systems. It pushes design away from prompt tinkering and toward operating law. What goals are permitted? Which tools require approval? How are conflicts resolved? What evidence must be produced before an action is accepted? What policies override model preferences? These questions define the real behavior of the system.

    The "Agentic Computer" metaphor is helpful because it treats the LLM like a probabilistic CPU. A CPU needs an operating system, memory protection, scheduling, permissions, and audit logs. So does an agent. The advanced lesson is that governance cannot be bolted on after autonomy. It must be the environment in which autonomy runs.

    The constitutional framing
    A constitutional approach to agent governance means the agent doesn't just receive instructions — it operates inside a governed environment with roles, permitted actions, arbitration mechanisms, and audit infrastructure. The model's preferences are subordinate to the operating law. That inversion is what makes mission-critical deployment possible.

    What this means in practice

    The practical implementation question is not whether the idea is interesting. It is how a team turns it into a workflow that can be inspected, repeated, and improved. For this topic, the operating focus is direct: treat governance as the operating system around probabilistic agents, with policies, roles, arbitration, logs, and enforceable permissions.

    That means the engineering work starts before the first model call. The team must decide what the agent is allowed to know, what it is allowed to do, what evidence it must produce, and which actions require a human decision. This is the difference between an impressive demo and a system that can survive real users, changing inputs, and production constraints.

    A credible implementation also includes a feedback path. Every agent run should leave behind enough context for another engineer to answer four questions: what goal was attempted, what context was used, which tools were called, and why the system believed the task was complete. If those questions cannot be answered from logs, traces, or structured outputs, the agent is still operating as a black box.
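    Those four questions can be made structural rather than aspirational. The record below is a hedged sketch: the field names are assumptions, but each one maps directly to one of the four questions.

```ts
// Hypothetical run record: enough structure for another engineer to
// answer the four questions from a single object.
interface ToolCall {
  tool: string;
  input: string;
  output: string;
}

interface AgentRunRecord {
  goal: string;                // 1. what goal was attempted
  contextSources: string[];    // 2. what context was used
  toolCalls: ToolCall[];       // 3. which tools were called
  completionRationale: string; // 4. why the system believed it was done
}

// A run is auditable only if the questions can actually be answered.
function isAuditable(run: AgentRunRecord): boolean {
  return (
    run.goal.length > 0 &&
    run.contextSources.length > 0 &&
    run.completionRationale.length > 0
  );
}
```

    An empty `toolCalls` array is still an answer ("no tools were called"); an empty rationale is not, which is why the check treats it as a black-box run.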

    Reference Diagram

    A simple architecture to reason from

    Use this diagram as a starting point, not as a universal blueprint. The important move is to make the stages visible. Once stages are visible, you can assign owners, define contracts, set permissions, measure quality, and decide where human review belongs.

    Workflow Map
    Read left to right: state moves through controlled boundaries.
    1
    Constitution

    Define the principles, permissions, and constraints the agent operates under.

    2
    Policy Engine

    Evaluate each proposed action against enforceable rules.

    3
    Agent Role

    Scope the agent to explicit responsibilities and permitted actions.

    4
    Action Request

    Package the proposed action with its rationale and supporting evidence.

    5
    Arbiter

    Resolve conflicts and issue the final allow-or-deny decision.

    6
    Audit Log

    Return evidence, state, and decision context.
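    The six stages can be sketched as typed boundaries. This is a toy model under stated assumptions, not a framework: `policyEngine`, `arbiter`, and the audit array are illustrative stand-ins for real components.

```ts
// Stage 1: constitution — the constraint boundary.
interface Constitution {
  permittedTools: string[];
}

// Stage 4: an action request carries its own rationale.
interface ActionRequest {
  tool: string;
  rationale: string;
}

type Verdict = { allowed: true } | { allowed: false; reason: string };

// Stage 2: the policy engine checks a request against the constitution.
function policyEngine(c: Constitution, req: ActionRequest): Verdict {
  return c.permittedTools.includes(req.tool)
    ? { allowed: true }
    : { allowed: false, reason: `tool ${req.tool} is not permitted` };
}

// Stage 5: the arbiter is the final decision point. Here it simply
// defers to policy; in a real system it would resolve conflicts.
function arbiter(verdict: Verdict): Verdict {
  return verdict;
}

// Stage 6: every decision leaves evidence behind.
const auditLog: Array<{ req: ActionRequest; verdict: Verdict }> = [];

// One pass through the pipeline: policy, then arbitration, then audit.
function runStage(c: Constitution, req: ActionRequest): Verdict {
  const verdict = arbiter(policyEngine(c, req));
  auditLog.push({ req, verdict });
  return verdict;
}
```

    Even in this toy form, the stages are visible enough to assign owners and tests to each boundary, which is the point the diagram is making.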

    Code Example

    Policy before execution

    The example below is intentionally small. Production agentic systems should start with compact contracts like this because small contracts are testable. Once the boundary is working, you can add richer orchestration without losing control of the core behavior.

    // Ask the policy engine before acting; the agent never executes directly.
    const policyDecision = await authorize({
      actor: "agent:release-manager", // who is requesting the action
      action: "deploy",
      environment: "production",
      evidence: ["tests_passed", "rollback_plan_present"], // proof the policy requires
    });
    
    // Fail closed: nothing executes without an explicit allow.
    if (!policyDecision.allowed) throw new Error(policyDecision.reason);
    Illustrative pattern — not production-ready
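    The example calls `authorize` without defining it. One minimal sketch, assuming a static table of required evidence per action-and-environment pair, might look like this; the table, field names, and decision shape are all assumptions for illustration.

```ts
// Assumed input and output shapes for the authorize() boundary.
interface AuthorizeInput {
  actor: string;
  action: string;
  environment: string;
  evidence: string[];
}

interface PolicyDecision {
  allowed: boolean;
  reason: string;
}

// Illustrative policy table: what evidence each action requires.
const requiredEvidence: Record<string, string[]> = {
  "deploy:production": ["tests_passed", "rollback_plan_present"],
};

// Allow only when every required piece of evidence is present.
async function authorize(input: AuthorizeInput): Promise<PolicyDecision> {
  const required = requiredEvidence[`${input.action}:${input.environment}`] ?? [];
  const missing = required.filter((e) => !input.evidence.includes(e));
  return missing.length === 0
    ? { allowed: true, reason: "all required evidence present" }
    : { allowed: false, reason: `missing evidence: ${missing.join(", ")}` };
}
```

    Notice that the denial reason names the missing evidence, so the audit trail explains the block instead of just recording it.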

    Implementation notes

    Treat these notes as the first design review checklist. They are deliberately concrete because agentic systems fail most often in the gaps between the model, the tools, the data, and the human operating process.

    Design note 1

    Write policies as enforceable runtime checks, not only prompt instructions.

    Design note 2

    Give each agent a role with explicit responsibilities and prohibited actions.
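    A role with explicit responsibilities and prohibitions can be a plain data structure. The `AgentRole` shape and the example role below are illustrative assumptions, not a prescribed schema.

```ts
// Illustrative role definition: explicit responsibilities and explicit
// prohibitions, so accountability for a decision is unambiguous.
interface AgentRole {
  name: string;
  responsibilities: string[]; // what this role is accountable for
  permittedActions: string[]; // actions it may take without escalation
  prohibitedActions: string[]; // actions it must never take
}

const releaseManager: AgentRole = {
  name: "release-manager",
  responsibilities: ["prepare releases", "verify rollback plans"],
  permittedActions: ["run_tests", "tag_release"],
  prohibitedActions: ["modify_policy", "delete_audit_log"],
};

// Prohibitions win over permissions, so a misconfigured overlap fails closed.
function mayPerform(role: AgentRole, action: string): boolean {
  return (
    role.permittedActions.includes(action) &&
    !role.prohibitedActions.includes(action)
  );
}
```

    Keeping prohibitions explicit, rather than implied by omission, is what lets a reviewer see at a glance what the role must never do.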

    Design note 3

    Use arbitration when goals, policies, or agent recommendations conflict.
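    One simple arbitration scheme resolves conflicts by fixed precedence. The order used here, policy over goal over agent preference, is an illustrative assumption, as are the `Signal` and `arbitrate` names.

```ts
// A signal is a verdict from one source of authority.
type Signal = { source: "policy" | "goal" | "agent"; verdict: "allow" | "deny" };

// Illustrative precedence: policy outranks goals, goals outrank the agent.
const precedence: Array<Signal["source"]> = ["policy", "goal", "agent"];

function arbitrate(signals: Signal[]): "allow" | "deny" {
  for (const source of precedence) {
    const s = signals.find((sig) => sig.source === source);
    if (s) return s.verdict; // highest-precedence signal present wins
  }
  return "deny"; // no signal at all: fail closed
}
```

    The agent's own preference only decides when neither policy nor goal has an opinion, which is exactly the subordination the constitutional framing describes.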

    Governance cannot be bolted on
    Every team that has tried to add governance after deploying autonomous agents has learned the same lesson: retrofitting policy enforcement into a live system is significantly harder than building it in from the start. The audit log, the policy engine, and the role system need to exist before the agent has production access — not after the first incident.

    Common failure modes

    The fastest way to make an article useful is to name how the pattern breaks. These are the failure modes to watch for when a team moves from reading about this idea to deploying it inside a real workflow.

    The agent is asked to follow policy but no runtime system enforces it.
    Roles overlap, so nobody can tell which agent is accountable for a decision.
    Audit logs record actions but not the policy decision that allowed them.
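    The third failure mode has a structural fix: record the policy decision in the same entry as the action it allowed. The `AuditEntry` fields below are assumed names, sketched to show the shape.

```ts
// An audit entry that carries the policy decision, not just the action.
interface AuditEntry {
  timestamp: string;
  actor: string;
  action: string;
  policyId: string;       // which policy was evaluated
  decision: "allow" | "deny";
  decisionReason: string; // why the policy allowed or blocked it
}

function record(entry: AuditEntry, log: AuditEntry[]): void {
  log.push(entry);
}

// Entries with no recorded reason are exactly the failure mode above.
function unjustifiedEntries(log: AuditEntry[]): AuditEntry[] {
  return log.filter((e) => e.decisionReason.trim().length === 0);
}
```

    A periodic check for unjustified entries turns the failure mode into an alert instead of a post-incident discovery.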

    Operating checklist

    Before this pattern graduates from experiment to production, require a short operating checklist. The checklist should include the owner of the workflow, the allowed tools, the risk rating for each tool, the data sources the agent can use, the completion criteria, the review path, and the rollback plan. If a team cannot fill out that checklist, the workflow is not ready for higher autonomy.

    The checklist should also define how the system will be evaluated after launch. Useful metrics include task success rate, human correction rate, average iterations per completed task, cost per successful run, escalation rate, and the number of blocked tool calls. These metrics turn agent quality into an engineering conversation instead of an opinion about whether the output felt good.
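    Those metrics are straightforward to compute once each run is recorded as a structured outcome. The `RunOutcome` fields and `summarize` helper below are assumptions, sketched to show how the metrics fall out of the records.

```ts
// Assumed per-run outcome record.
interface RunOutcome {
  succeeded: boolean;
  iterations: number;
  costUsd: number;
  escalated: boolean;
  blockedToolCalls: number;
}

// Aggregate the metrics named in the checklist from recorded runs.
function summarize(runs: RunOutcome[]) {
  const total = runs.length;
  const successes = runs.filter((r) => r.succeeded).length;
  return {
    taskSuccessRate: successes / total,
    escalationRate: runs.filter((r) => r.escalated).length / total,
    avgIterations: runs.reduce((sum, r) => sum + r.iterations, 0) / total,
    costPerSuccessfulRun:
      runs.reduce((sum, r) => sum + r.costUsd, 0) / Math.max(successes, 1),
    blockedToolCalls: runs.reduce((sum, r) => sum + r.blockedToolCalls, 0),
  };
}
```

    Once these numbers exist per workflow, "is the agent getting better" becomes a trend line rather than an opinion about whether the output felt good.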

    Finally, make the learning loop explicit. When the agent fails, decide whether the fix belongs in the prompt, the retrieval layer, the tool contract, the permission model, the evaluation suite, or the human process. Mature agentic engineering is not the absence of failures. It is the ability to classify failures quickly and improve the system without expanding risk.

    Key Takeaways
    Probabilistic agents need governed operating environments.
    Policies and arbitration are stronger than prompt-only control.
    Governance must be designed before mission-critical autonomy.
    Learn the full system

    Build real fluency in agentic engineering.

    The Academy turns these concepts into a full curriculum, AI tutor, templates, and the CAE credential path.

    Start Learning