Beginner

    The Six Levels of Agentic Engineering

    A ladder for understanding how teams move from manual supervision to higher levels of agent delegation.

    Jay Burgess · 4 min read

    The six levels of agentic engineering are a practical ladder for thinking about delegation. At Level 1, the human watches every step. The agent suggests, the human approves, and the workflow feels interactive. This is where most teams should begin because it builds trust while exposing the agent's strengths and mistakes.

    Level 2 adds bounded autonomy. The agent can perform low-risk actions such as reading files, searching documentation, or drafting a plan without constant interruption. Level 3 introduces task ownership: the agent can complete a small assignment, run checks, and return a summary. The human reviews the result rather than steering every move.

    Level 4 is workflow delegation. Agents can coordinate several steps, use tools, and recover from ordinary failures. For example, an agent might open a pull request, address lint errors, and prepare release notes. Level 5 adds specialized agents, where a planner delegates to workers with distinct responsibilities such as research, implementation, review, or testing.

    Level 6 is supervised autonomy across a larger system. The agent network can pursue goals over longer time horizons, but it still operates inside policy, observability, and human approval boundaries. The ladder is not a race to remove humans. It is a way to match autonomy to risk. A team that cannot operate Level 2 safely is not ready for Level 6. Mature agentic engineering means increasing delegation only after the controls, tests, and review loops are strong enough to support it.

    The ladder is a release process
    Treat autonomy levels like staging environments. You wouldn't push untested code straight to production — don't push untested autonomy there either. Level 2 is staging. Level 6 is production. Promotions require evidence, not optimism.

    What this means in practice

    The practical implementation question is not whether the idea is interesting. It is how a team turns the idea into a workflow that can be inspected, repeated, and improved. For this topic, the operating focus is direct: use autonomy levels as a release ladder, and promote agents only after each lower level is reliable, observable, and safe.

    That means the engineering work starts before the first model call. The team must decide what the agent is allowed to know, what it is allowed to do, what evidence it must produce, and which actions require a human decision. This is the difference between an impressive demo and a system that can survive real users, changing inputs, and production constraints.
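    Those four decisions can be written down before the first model call. The sketch below is one hypothetical way to do that in TypeScript; the interface and field names are assumptions for illustration, not a real API.

    ```typescript
    // Hypothetical pre-launch contract: what the agent may know, what it may do,
    // what evidence each run must produce, and which actions need a human.
    interface AgentContract {
      allowedSources: string[];   // what the agent is allowed to know
      allowedTools: string[];     // what the agent is allowed to do
      requiredEvidence: string[]; // artifacts every run must produce
      humanApprovalFor: string[]; // actions that require a human decision
    }

    const reviewBot: AgentContract = {
      allowedSources: ["repo", "issue_tracker"],
      allowedTools: ["read_files", "run_unit_tests"],
      requiredEvidence: ["diff_summary", "test_report"],
      humanApprovalFor: ["edit_files", "deploy"],
    };
    ```

    Writing the contract as data rather than prose makes it reviewable in a pull request and enforceable at runtime.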

    A credible implementation also includes a feedback path. Every agent run should leave behind enough context for another engineer to answer four questions: what goal was attempted, what context was used, which tools were called, and why the system believed the task was complete. If those questions cannot be answered from logs, traces, or structured outputs, the agent is still operating as a black box.
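    The four questions map naturally onto a structured run record. This is a minimal sketch with assumed field names, not a prescribed schema:

    ```typescript
    // Hypothetical run record capturing the four audit questions.
    interface AgentRunRecord {
      goal: string;                                 // what goal was attempted
      contextUsed: string[];                        // what context was used
      toolCalls: { tool: string; args: unknown }[]; // which tools were called
      completionRationale: string;                  // why the task was judged complete
    }

    // A run is auditable only if all four questions can be answered from the record.
    function isAuditable(r: AgentRunRecord): boolean {
      return Boolean(
        r.goal && r.contextUsed.length && r.toolCalls.length && r.completionRationale
      );
    }
    ```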

    Reference Diagram

    A simple architecture to reason from

    Use this diagram as a starting point, not as a universal blueprint. The important move is to make the stages visible. Once stages are visible, you can assign owners, define contracts, set permissions, measure quality, and decide where human review belongs.

    Workflow Map
    Read left to right: state moves through controlled boundaries.
    1. Interactive: Human approves every suggestion.
    2. Bounded Reads: Agent reads freely; writes need approval.
    3. Task Ownership: Agent owns a task; human reviews the result.
    4. Workflow Delegation: Agent handles multi-step workflows.
    5. Specialists: Planner delegates to specialist agents.
    6. Supervised Autonomy: Long-horizon goals inside policy boundaries.
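    The map above can also be captured as data, which is useful when tooling needs to display or enforce the current level. A minimal TypeScript sketch; the names are illustrative:

    ```typescript
    // The six levels as a typed lookup table.
    type AutonomyLevel = 1 | 2 | 3 | 4 | 5 | 6;

    const levelDescriptions: Record<AutonomyLevel, string> = {
      1: "Interactive: human approves every suggestion",
      2: "Bounded reads: agent reads freely, writes need approval",
      3: "Task ownership: agent owns a task, human reviews the result",
      4: "Workflow delegation: agent handles multi-step workflows",
      5: "Specialists: planner delegates to specialist agents",
      6: "Supervised autonomy: long-horizon goals inside policy boundaries",
    };
    ```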

    Use the interactive tool below
    The Autonomy Level Explorer in this article lets you click through each level and see exactly what the agent can do, what still requires approval, and what a policy configuration looks like.
    Code Example

    Autonomy gate configuration

    The example below is intentionally small. Production agentic systems should start with compact contracts like this because small contracts are testable. Once the boundary is working, you can add richer orchestration without losing control of the core behavior.

    const autonomyPolicy = {
      level: 3,
      autoApprove: ["read_files", "run_unit_tests"],
      requireApproval: ["edit_files", "send_email", "deploy"],
      maxIterations: 8,
      maxSpendUsd: 1.5,
    };
    Illustrative pattern — not production-ready
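    One way to enforce a policy of this shape is a small gate function that every tool call passes through. This is a minimal sketch assuming the field names above; note the default-deny stance, where unlisted actions are blocked rather than assumed safe:

    ```typescript
    // Minimal enforcement sketch for the policy shape above (assumed names).
    type GatePolicy = {
      autoApprove: string[];
      requireApproval: string[];
    };

    function gate(policy: GatePolicy, action: string): "allow" | "ask_human" | "block" {
      if (policy.autoApprove.includes(action)) return "allow";
      if (policy.requireApproval.includes(action) || policy.requireApproval.includes("*")) {
        return "ask_human";
      }
      return "block"; // default-deny: unknown actions are blocked, not assumed safe
    }
    ```

    Keeping the gate this small is the point: a function you can unit-test exhaustively is a boundary you can trust when you raise the autonomy level.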

    Implementation notes

    Treat these notes as the first design review checklist. They are deliberately concrete because agentic systems fail most often in the gaps between the model, the tools, the data, and the human operating process.

    Design note 1

    Define what each autonomy level means inside your organization.

    Design note 2

    Promote a workflow only when logs show repeated success at the previous level.

    Design note 3

    Keep higher autonomy for reversible, well-tested, low-blast-radius work.
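    Design note 2 can be made mechanical rather than judgment-based. A hypothetical promotion check, assuming runs are logged with a success flag and the threshold is chosen by the team:

    ```typescript
    // Hypothetical promotion gate: require N consecutive logged successes
    // at the current level before promoting to the next one.
    function readyToPromote(runs: { success: boolean }[], required: number): boolean {
      const recent = runs.slice(-required);
      return recent.length === required && recent.every((r) => r.success);
    }
    ```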

    Common failure modes

    The fastest way to make an article useful is to name how the pattern breaks. These are the failure modes to watch for when a team moves from reading about this idea to deploying it inside a real workflow.

    Teams jump to autonomous execution before they understand normal failure patterns.
    Autonomy is granted by tool category rather than by task risk.
    A workflow has no rollback path even though the agent is allowed to mutate state.

    Operating checklist

    Before this pattern graduates from experiment to production, require a short operating checklist. The checklist should include the owner of the workflow, the allowed tools, the risk rating for each tool, the data sources the agent can use, the completion criteria, the review path, and the rollback plan. If a team cannot fill out that checklist, the workflow is not ready for higher autonomy.
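    The checklist itself can be a typed record, so "cannot fill it out" becomes a mechanical check instead of a debate. A hypothetical sketch; the field names simply mirror the list above:

    ```typescript
    // Hypothetical operating checklist: a workflow is not ready for higher
    // autonomy until every field can be filled in.
    interface OperatingChecklist {
      owner: string;
      allowedTools: { name: string; risk: "low" | "medium" | "high" }[];
      dataSources: string[];
      completionCriteria: string[];
      reviewPath: string;
      rollbackPlan: string;
    }

    function isComplete(c: OperatingChecklist): boolean {
      return Boolean(
        c.owner &&
          c.allowedTools.length &&
          c.dataSources.length &&
          c.completionCriteria.length &&
          c.reviewPath &&
          c.rollbackPlan
      );
    }
    ```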

    The checklist should also define how the system will be evaluated after launch. Useful metrics include task success rate, human correction rate, average iterations per completed task, cost per successful run, escalation rate, and the number of blocked tool calls. These metrics turn agent quality into an engineering conversation instead of an opinion about whether the output felt good.
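    Several of those metrics fall directly out of the run records. A small sketch, assuming each run logs the fields below and that at least one run succeeded:

    ```typescript
    // Sketch of post-launch metrics from logged runs (field names are assumptions).
    interface RunStats {
      succeeded: boolean;
      corrected: boolean; // a human had to fix the output
      iterations: number;
      costUsd: number;
    }

    // Assumes runs is non-empty and contains at least one success.
    function summarize(runs: RunStats[]) {
      const ok = runs.filter((r) => r.succeeded);
      return {
        successRate: ok.length / runs.length,
        correctionRate: runs.filter((r) => r.corrected).length / runs.length,
        avgIterations: ok.reduce((s, r) => s + r.iterations, 0) / ok.length,
        costPerSuccess: runs.reduce((s, r) => s + r.costUsd, 0) / ok.length,
      };
    }
    ```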

    Finally, make the learning loop explicit. When the agent fails, decide whether the fix belongs in the prompt, the retrieval layer, the tool contract, the permission model, the evaluation suite, or the human process. Mature agentic engineering is not the absence of failures. It is the ability to classify failures quickly and improve the system without expanding risk.

    Try it yourself

    Interactive Tool

    Use this tool to apply the concepts from this article to your own situation. Outputs are heuristic recommendations — validate against your actual workload requirements.


    Autonomy Level Explorer

    Level 1

    Interactive

    Minimal risk

    Every suggestion is reviewed before execution.

    Agent can do
    • Suggest code changes
    • Draft plans
    • Explain existing code
    • Answer questions
    Still requires approval
    • Any file write
    • Any tool execution
    • Any external call
    Example policy config (TypeScript)
    const policy = {
      level: 1,
      autoApprove: [],
      requireApproval: ["*"],  // everything
    };
    Ready to promote when

    You can predict what the agent will suggest and explain why.

    Key Takeaways
    Autonomy should increase gradually, not all at once.
    Each level requires stronger boundaries and better observability.
    The goal is calibrated delegation, not unsupervised independence.