
    Design Patterns for Agentic AI Systems

    A pattern-oriented approach to coordinating machines that reason, delegate, and work together.

Jay Burgess · 4 min read

    Agentic AI systems become difficult when multiple machines are allowed to reason, act, and coordinate. Without design patterns, the result is usually a tangle of prompts, tools, retries, and unclear responsibility. Patterns help teams decide who plans, who executes, who reviews, and how information moves through the system.

    The hierarchical pattern is the clearest example. A manager agent receives the high-level objective, breaks it into tasks, and delegates to specialist agents. This works well when the work naturally separates into roles: researcher, implementer, tester, reviewer, or writer. The benefit is control. The cost is that the manager becomes a critical point of failure and must be able to judge progress accurately.
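The manager-and-specialists shape can be sketched in a few lines. This is a minimal illustration with hypothetical names (`decompose`, `runHierarchy`, the fixed role list); in a real system the manager model would produce the plan rather than a hard-coded function:

```typescript
// Hypothetical hierarchical manager: decompose a goal, delegate to specialists.
type Task = { role: "researcher" | "implementer" | "tester"; objective: string };

function decompose(goal: string): Task[] {
  // In a real system the manager agent generates this plan; here it is fixed
  // to keep the control flow visible.
  return [
    { role: "researcher", objective: `collect context for: ${goal}` },
    { role: "implementer", objective: `implement: ${goal}` },
    { role: "tester", objective: `verify: ${goal}` },
  ];
}

function runHierarchy(goal: string, run: (t: Task) => string): string[] {
  // The manager delegates each task in order and collects results to judge progress.
  return decompose(goal).map(run);
}
```

Note that the manager owns both decomposition and judgment, which is exactly why it becomes the critical point of failure the text describes.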

    Sequential patterns are useful when each step depends on the previous one. Collaborative or swarm patterns are useful when the problem benefits from multiple perspectives. Review-and-critique patterns are useful when the first answer is rarely good enough. Tool-use patterns formalize how agents interact with external systems. Memory patterns define what agents carry forward and what they must fetch fresh.
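Of these, the sequential pattern has the smallest possible sketch: a pipeline where each stage consumes the previous stage's output. The stage names below are illustrative, not from any particular framework:

```typescript
// Sequential pattern: each stage depends on the output of the previous one.
type Stage = (input: string) => string;

function runSequential(stages: Stage[], input: string): string {
  // Fold the input through the stages in order; no stage can be skipped.
  return stages.reduce((state, stage) => stage(state), input);
}

// Illustrative stages: a drafting step followed by a normalizing step.
const draft: Stage = (s) => s + "!";
const polish: Stage = (s) => s.toUpperCase();
```

The other patterns differ mainly in topology (fan-out for swarms, a loop for review-and-critique), but the same principle holds: the interface between stages is the contract.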

    The real value of patterns is not terminology. It is making failure visible. If an implementation agent produces bad code, did the planner give bad instructions, did the tool return bad context, did the reviewer miss the problem, or did the system lack an exit condition? Patterns make those questions answerable. They turn distributed intelligence from chaos into an architecture that can be tested, improved, and governed.

    Patterns are about accountability, not naming
    The value of using a pattern name like 'hierarchical' or 'sequential' is not the label — it's the clarity it forces. When you say an agent is a 'critic', you've committed to defining what correctness means, what it checks against, and what happens if it fails. That commitment is what makes systems governable.
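That commitment can be made literal in code. The sketch below (hypothetical names, assumed shapes) shows what calling an agent a 'critic' obligates you to write down: a set of checks, each of which either passes or returns a reason:

```typescript
// A critic's contract: correctness is an explicit list of checks, not a vibe.
interface Verdict {
  pass: boolean;
  reasons: string[]; // why it failed, if it failed
}

// Each criterion returns null on pass, or a human-readable reason on failure.
type Criterion = (output: string) => string | null;

function critique(output: string, criteria: Criterion[]): Verdict {
  const reasons = criteria
    .map((c) => c(output))
    .filter((r): r is string => r !== null);
  return { pass: reasons.length === 0, reasons };
}
```

If you cannot write the `criteria` array, the critic is decorative, and the system is not yet governable.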

    What this means in practice

    The practical implementation question is not whether the idea is interesting. It is how a team turns it into a workflow that can be inspected, repeated, and improved. For this topic, the operating focus is direct: Use design patterns to make responsibility explicit across planners, workers, critics, retrievers, and human reviewers.

    That means the engineering work starts before the first model call. The team must decide what the agent is allowed to know, what it is allowed to do, what evidence it must produce, and which actions require a human decision. This is the difference between an impressive demo and a system that can survive real users, changing inputs, and production constraints.

    A credible implementation also includes a feedback path. Every agent run should leave behind enough context for another engineer to answer four questions: what goal was attempted, what context was used, which tools were called, and why the system believed the task was complete. If those questions cannot be answered from logs, traces, or structured outputs, the agent is still operating as a black box.
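The four questions map directly onto a run record. The field names below are an assumption, one plausible shape rather than a standard schema, but every production variant needs these four slots in some form:

```typescript
// One auditable record per agent run, answering the four questions from the text.
interface RunRecord {
  goal: string;                                 // what goal was attempted
  contextSources: string[];                     // what context was used
  toolCalls: { name: string; args: unknown }[]; // which tools were called
  completionEvidence: string;                   // why the system believed it was done
}

function isAuditable(r: RunRecord): boolean {
  // A run with no stated goal or no completion evidence is a black box.
  return r.goal.length > 0 && r.completionEvidence.length > 0;
}
```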

    Reference Diagram

    A simple architecture to reason from

    Use this diagram as a starting point, not as a universal blueprint. The important move is to make the stages visible. Once stages are visible, you can assign owners, define contracts, set permissions, measure quality, and decide where human review belongs.

    Workflow Map
    Read left to right: state moves through controlled boundaries.
1. Manager: define the input and constraint boundary.
2. Planner: decompose the goal into an ordered task list.
3. Retriever: fetch the context downstream agents will rely on.
4. Worker: produce the candidate output through a controlled interface.
5. Critic: check the output against the acceptance criteria.
6. Final Reviewer: return evidence, state, and decision context.

    Code Example

    Hierarchical handoff

    The example below is intentionally small. Production agentic systems should start with compact contracts like this because small contracts are testable. Once the boundary is working, you can add richer orchestration without losing control of the core behavior.

```ts
// Hierarchical handoff: the manager's plan as an ordered list of delegations.
const plan = [
  { agent: "researcher", task: "collect source context" },
  { agent: "builder", task: "apply code change" },
  { agent: "critic", task: "review against acceptance criteria" },
  { agent: "publisher", task: "prepare final summary" },
];
```

Illustrative pattern, not production-ready.
    Structured handoffs are non-negotiable
    Agents that hand off prose summaries to the next agent create ambiguity at every junction. When the downstream agent fails, you cannot tell whether the plan was wrong, the handoff was lossy, or the worker misunderstood. Typed, structured payloads between agents are as important as typed function signatures in any other system.
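A minimal sketch of what "typed, structured payloads" means in practice. The field names here are assumptions chosen for illustration; the point is that a handoff is validated data, not a paragraph:

```typescript
// A structured handoff: downstream agents receive fields, not prose.
interface Handoff {
  taskId: string;
  artifacts: { path: string; summary: string }[]; // what was produced
  openQuestions: string[];                        // known ambiguity, made explicit
  acceptanceCriteria: string[];                   // what the next agent must satisfy
}

function validateHandoff(h: Handoff): string[] {
  // Reject lossy handoffs at the boundary instead of debugging them downstream.
  const problems: string[] = [];
  if (h.acceptanceCriteria.length === 0) problems.push("no acceptance criteria");
  if (h.artifacts.length === 0) problems.push("no artifacts attached");
  return problems;
}
```

When a downstream failure occurs, a validated handoff lets you rule out the junction and look at the plan or the worker instead.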

    Implementation notes

    Treat these notes as the first design review checklist. They are deliberately concrete because agentic systems fail most often in the gaps between the model, the tools, the data, and the human operating process.

    Design note 1

    Give each agent one responsibility and one success definition.

    Design note 2

    Make handoff payloads structured so downstream agents are not guessing.

    Design note 3

    Use critic or reviewer agents where first-pass quality is predictably weak.

    Common failure modes

    The fastest way to make an article useful is to name how the pattern breaks. These are the failure modes to watch for when a team moves from reading about this idea to deploying it inside a real workflow.

    The manager delegates vague tasks and then cannot evaluate the returned work.
    Agents pass prose summaries instead of structured state, losing critical details.
    A critic agent checks style but not correctness, giving false confidence.

    Operating checklist

    Before this pattern graduates from experiment to production, require a short operating checklist. The checklist should include the owner of the workflow, the allowed tools, the risk rating for each tool, the data sources the agent can use, the completion criteria, the review path, and the rollback plan. If a team cannot fill out that checklist, the workflow is not ready for higher autonomy.
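The checklist itself can live as a typed artifact in the repository, so "cannot fill it out" becomes a failing check rather than an opinion. The shape below is a sketch under the fields named above, not a standard:

```typescript
// The operating checklist as data: unfilled fields block higher autonomy.
interface OperatingChecklist {
  owner: string;
  allowedTools: { name: string; risk: "low" | "medium" | "high" }[];
  dataSources: string[];
  completionCriteria: string[];
  reviewPath: string;
  rollbackPlan: string;
}

function readyForAutonomy(c: OperatingChecklist): boolean {
  return (
    c.owner !== "" &&
    c.allowedTools.length > 0 &&
    c.completionCriteria.length > 0 &&
    c.rollbackPlan !== ""
  );
}
```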

    The checklist should also define how the system will be evaluated after launch. Useful metrics include task success rate, human correction rate, average iterations per completed task, cost per successful run, escalation rate, and the number of blocked tool calls. These metrics turn agent quality into an engineering conversation instead of an opinion about whether the output felt good.
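Several of those metrics fall out of per-run outcome records. A sketch, with assumed field names, of how the conversation becomes arithmetic:

```typescript
// Deriving launch metrics from per-run outcomes (illustrative record shape).
interface RunOutcome {
  success: boolean;
  corrected: boolean;  // a human had to fix the output
  iterations: number;  // loops before the task completed or was abandoned
  costUsd: number;
}

function summarize(runs: RunOutcome[]) {
  const successes = runs.filter((r) => r.success).length;
  return {
    taskSuccessRate: successes / runs.length,
    humanCorrectionRate: runs.filter((r) => r.corrected).length / runs.length,
    avgIterations: runs.reduce((s, r) => s + r.iterations, 0) / runs.length,
    costPerSuccess:
      runs.reduce((s, r) => s + r.costUsd, 0) / Math.max(successes, 1),
  };
}
```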

    Finally, make the learning loop explicit. When the agent fails, decide whether the fix belongs in the prompt, the retrieval layer, the tool contract, the permission model, the evaluation suite, or the human process. Mature agentic engineering is not the absence of failures. It is the ability to classify failures quickly and improve the system without expanding risk.

    Key Takeaways
    Patterns clarify responsibility across agents.
    Hierarchies improve control but create manager-agent risk.
    Good patterns make failures easier to locate.