Chanl
Learning AI

Claude Code subagents and the orchestrator pattern

How to structure Claude Code subagents, write dispatch prompts, and coordinate parallel work across services, SDKs, and frontends in a monorepo.

Dean Grover, Co-founder
April 1, 2026
18 min read
Watercolor illustration of a colorful workspace with a main monitor surrounded by floating screens showing different code, warm plum tones

You're working on a feature that touches three layers. The backend needs a new field on the agent schema. The SDK needs updated types and a React hook. The frontend needs a new column in the data table. You type the request, Claude reads 400 lines of backend rules, implements the schema change, then pivots to the SDK, reads 300 lines of different rules, updates the types, then switches to the frontend, reads another 200 lines of component conventions. By the time it's building the React component, the backend context has faded. The Mongoose gotcha about findByIdAndUpdate that was front-of-mind 20 minutes ago is now buried under 15,000 tokens of SDK and frontend context.

This is the single-agent ceiling. Not a context window limit in the technical sense. You've got 200k tokens. The problem is attention dilution. Claude can hold the backend patterns or the frontend patterns in sharp focus, but holding all of them simultaneously while also tracking cross-layer dependencies is where things start to slip.

Subagents solve this by giving each layer its own focused context window. The backend agent reads only backend rules. The SDK agent reads only SDK conventions. The frontend agent reads only component patterns. And a parent orchestrator coordinates between them, tracking what each one produced and checking that the pieces fit together.

This article goes deep on one specific pattern: using subagents and the orchestrator model to parallelize work across a large codebase. We'll build the pattern from scratch, show real dispatch prompts from a 17-project monorepo, and cover when to reach for Agent Teams instead.

This is Part 5 of the Claude extension stack series. Part 2 covered rules, hooks, and skills. Part 4 showed how they compose in production.

What are subagents and why do they exist?

Subagents are isolated Claude Code sessions spawned by a parent agent. Each one gets a fresh context window, runs a specific task, and returns the result to the parent. The parent never shares its full conversation history with the child. This isolation is the entire point.

Think of it as the difference between doing everything yourself and delegating to specialists. A single Claude Code session is you at a whiteboard, switching between backend code, SDK code, and React components, trying to hold all three mental models simultaneously. Subagents are three specialists, each at their own desk, each focused on one layer, reporting back to a coordinator.

Claude Code's Task tool makes this concrete. When you (or the orchestrating agent) call Task, it spawns a new session with a specific prompt. That session has its own context window, its own tool access, and its own set of instructions. It runs to completion and returns a result. The parent receives the result and continues.

Here's what's happening under the hood:

[Figure: the user asks to "add lastActive field across all layers." The parent agent plans 3 tasks (backend, SDK, UI), then dispatches Task "Add lastActive to agent schema" to a backend subagent in a fresh context window with only backend rules loaded; it reads CLAUDE.md, implements, tests, and returns "schema updated, test passing." The parent then dispatches Task "Add lastActive to SDK types + hook" to an SDK subagent in a fresh context window with only SDK rules loaded, receives "types + hook updated," and verifies the SDK types match the API response.]
Subagent lifecycle: the parent spawns a child session with isolated context

The three properties that make this useful: isolation (each subagent starts clean), focus (it reads only what it needs), and composability (the parent can dispatch many in sequence or parallel).

The context window math

Here's why isolation matters in practice. In our monorepo, scoped rules alone consume significant context:

| Rules File | Lines | Approx. Tokens |
|---|---|---|
| backend-services.md | 280 | ~2,800 |
| api-contracts.md | 200 | ~2,000 |
| sdk-cli.md | 350 | ~3,500 |
| frontend-apps.md | 300 | ~3,000 |
| inter-service.md | 150 | ~1,500 |
| Lessons files (4 total) | 400+ | ~4,000 |

A single agent working across all layers loads ~16,800 tokens of rules before it reads a single line of your code. A backend subagent loads ~4,800 (backend + API contracts). A frontend subagent loads ~3,000 (frontend only). That's a 3-5x reduction in rule overhead depending on the layer, which translates directly to sharper attention on the rules that actually matter for the task.
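As a sanity check, the arithmetic reduces to a few lines. The file list mirrors the table above:

```typescript
// Quick check of the rule-overhead arithmetic from the table above.
const ruleTokens: Record<string, number> = {
  "backend-services.md": 2800,
  "api-contracts.md": 2000,
  "sdk-cli.md": 3500,
  "frontend-apps.md": 3000,
  "inter-service.md": 1500,
  "lessons (4 files)": 4000,
};

// A generalist agent loads every rules file; a backend subagent loads
// only the backend and API-contract rules.
const allLayers = Object.values(ruleTokens).reduce((a, b) => a + b, 0);
const backendOnly = ruleTokens["backend-services.md"] + ruleTokens["api-contracts.md"];

console.log(allLayers);   // 16800
console.log(backendOnly); // 4800
console.log((allLayers / backendOnly).toFixed(1) + "x less rule overhead");
```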

This isn't theoretical. We've measured the difference on real tasks. When a single agent handles a three-layer feature, it occasionally applies backend patterns to frontend code (using Logger instead of console.error in a React component) or frontend patterns to backend code (adding data-testid to a DTO). Subagents don't make these cross-contamination errors because they never see the irrelevant rules.

The three tiers of multi-agent work

Multi-agent work in Claude Code breaks into three tiers: built-in subagents and Agent Teams (Tier 1), external orchestrators like Claude Squad (Tier 2), and cloud agents like Codex Web (Tier 3). Addy Osmani's "Code Agent Orchestra" article introduced this framing, and after a year of using all three, we've found it holds up.

Tier 1: Built-in (subagents and Agent Teams). These operate within a single Claude Code terminal session. No extra tooling, no additional processes. Subagents use the Task tool. Agent Teams use an experimental coordination protocol. This is where 90% of multi-agent work happens.

Tier 2: External orchestrators (3-10 agents). Tools like Claude Squad, Conductor, and Vibe Kanban spawn multiple Claude Code instances in isolated worktrees with dashboards, diff review, and merge control. Useful when you need visual oversight of many parallel agents, or when agents need to work on the same codebase without stepping on each other's files.

Tier 3: Cloud agents (no local terminal). Claude Code Web, GitHub Copilot Coding Agent, Jules by Google, Codex Web by OpenAI. These run in cloud VMs with no local setup. Great for async work (open a PR, go to sleep, review in the morning), but you lose the tight feedback loop of local development.

This article focuses on Tier 1 because it's where most developers should start and where most will stay. The built-in subagent pattern handles the vast majority of multi-project tasks without installing anything or running additional processes.

| Tier | Pattern | Best For | Token Cost |
|---|---|---|---|
| 1a | Single agent | Single-file edits, quick fixes, exploration | 1x (baseline) |
| 1b | Subagents | Multi-project features, 2-5 parallel tasks | 2-3x |
| 1c | Agent Teams | Complex coordination, shared task lists, peer messaging | ~7x |
| 2 | External orchestrators | 5-10 agents, visual dashboards, worktree isolation | Variable |
| 3 | Cloud agents | Async PRs, no local setup needed | Variable |

How do you structure an orchestrator?

The orchestrator pattern has four phases. You've seen this structure if you've managed a team of engineers: clarify requirements, plan the work, assign tasks, review results. The only difference is that your "team" is subagents, and your "assignment" is a carefully crafted prompt.

Phase 0: clarify

Before creating any tasks, the orchestrator loads context for the relevant topic and asks questions. In our codebase, every session starts by identifying affected projects. We maintain a config/projects.yaml registry with all 17 projects, their paths, ports, and conventions. The orchestrator reads it to determine the layer stack.

This is the phase where the orchestrator decides whether subagents are even necessary. A single-file change in one service? Handle it directly. A feature that spans backend, SDK, and frontend? Plan tasks.

The decision threshold we've settled on: if the task touches more than one project with different conventions, use subagents. If it touches multiple files in the same project, a single session is usually fine.

Phase 1: plan inside-out

Task ordering follows what we call the DRY onion. Inner layers first, outer layers last. Backend before SDK. SDK before UI. This matters because each layer depends on the shape of the layer below it.

[Figure: 1. Backend (schema + API) → 2. SDK (types + hooks) → 3. Frontend (components + pages)]
DRY onion: tasks flow from inner layers to outer layers

If you build the frontend first and discover the API returns a different shape than you assumed, you're reworking three layers instead of one. Building inside-out means each layer can rely on the finalized output of the previous layer.

Here's what a real task plan looks like for adding a lastActive timestamp to agents. Each task has a subject, description with acceptance criteria, and dependency wiring.

```markdown
## Task Plan: Add lastActive timestamp to agents

Task #1: [agent-service] Add lastActive field to agent schema
  - Add `lastActive: Date` to agent.schema.ts
  - Update on every PATCH /agents/:id
  - Test: PATCH an agent, verify lastActive changed
  - No dependencies

Task #2: [platform-sdk] Add lastActive to Agent type and hooks
  - Add `lastActive?: string` to Agent interface
  - Update useAgent() hook staleTime to 30s (was 60s)
  - Test: type compiles, hook returns lastActive
  - Blocked by: Task #1 (needs final API shape)

Task #3: [chanl-admin] Show lastActive in agent list table
  - Add "Last active" column to agents DataTable
  - Use relative time format ("2 hours ago")
  - Test: column renders, sorts correctly
  - Blocked by: Task #2 (needs SDK hook)
```

The Blocked by relationships are critical. Task #2 can't start until Task #1 finishes because it needs to know the exact field name and type the API returns. Task #3 can't start until Task #2 finishes because it imports the SDK hook.
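Those blocked-by edges form a small dependency graph, and the dispatch order is just its topological sort. A minimal sketch, using an illustrative task shape (not Claude Code's actual task API):

```typescript
// Minimal topological ordering of tasks by their "blocked by" edges.
// The task shape and ids are illustrative, not Claude Code's actual API.
interface PlannedTask {
  id: number;
  subject: string;
  blockedBy: number[]; // ids that must finish before this task starts
}

function executionOrder(tasks: PlannedTask[]): number[] {
  const done = new Set<number>();
  const order: number[] = [];
  while (order.length < tasks.length) {
    // A task is ready when every blocker has completed.
    const ready = tasks.filter(
      (t) => !done.has(t.id) && t.blockedBy.every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("Cycle in task dependencies");
    for (const t of ready) {
      done.add(t.id);
      order.push(t.id);
    }
  }
  return order;
}

const taskPlan: PlannedTask[] = [
  { id: 1, subject: "[agent-service] add lastActive field", blockedBy: [] },
  { id: 2, subject: "[platform-sdk] add lastActive type + hook", blockedBy: [1] },
  { id: 3, subject: "[chanl-admin] show lastActive column", blockedBy: [2] },
];

console.log(executionOrder(taskPlan)); // [1, 2, 3]
```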

Phase 2: dispatch with context packets

Each subagent receives a context packet: the project path, its CLAUDE.md, the relevant rules file, the task description, and the commands it needs (build, test, health check). This is the most important part of the pattern, because the subagent doesn't see your conversation history. The context packet is everything it knows.

Here's a real dispatch prompt for a backend subagent:

```markdown
You are working on: agent-service
Read first:
  - services/agent-service/CLAUDE.md
  - .claude/rules/backend-services.md
  - .claude/rules/api-contracts.md
Path: services/agent-service/
Commands:
  - Build: pnpm build
  - Test: pnpm test
  - Health: curl localhost:8002/health

TASK: Add lastActive field to agent schema

Add a `lastActive` field (Date type) to the Agent schema in
agent.schema.ts. Update it automatically whenever an agent is
modified via PATCH /agents/:id.

Acceptance criteria:
1. Field exists on schema with type Date, optional, indexed
2. findOneAndUpdate in agents.service.ts sets lastActive: new Date()
3. Field appears in Swagger docs via @ApiPropertyOptional
4. Unit test: PATCH an agent, assert lastActive is recent

RULES:
- Use findByIdAndUpdate, never doc.save()
- Always scope queries by workspaceId
- Use Logger class, never console.log
- Return data directly from controller (ResponseInterceptor wraps)
```

Notice what's included: specific file names to read, exact commands to run, concrete acceptance criteria, and the key rules that apply. The subagent doesn't need to figure any of this out from a 500-line CLAUDE.md. It gets exactly what it needs.

And here's the SDK dispatch that follows, after the backend task completes:

```markdown
You are working on: platform-sdk
Read first:
  - packages/platform-sdk/CLAUDE.md
  - .claude/rules/sdk-cli.md
Path: packages/platform-sdk/
Commands:
  - Build: pnpm build
  - Test: pnpm test

TASK: Add lastActive to Agent type and update hooks

The backend now returns `lastActive: string` (ISO date) on the
Agent response. Update the SDK to surface this.

Acceptance criteria:
1. Agent interface in types/ includes lastActive?: string
2. useAgent() hook in react/use-agent-hooks.ts works unchanged
   (field flows through automatically via unwrapResponse)
3. Agent type re-exported from react/index.ts includes the new field
4. Reduce staleTime to 30000 (30s) for agent detail queries

Context from previous task:
- API returns: { id, name, ..., lastActive: "2026-04-01T..." }
- Field is optional (null for agents never modified after creation)
```

That "Context from previous task" section is key. The SDK subagent wasn't alive when the backend subagent ran. It doesn't know what the API returns unless you tell it. The orchestrator bridges this gap by extracting the relevant output from Task #1 and injecting it into Task #2's prompt.
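A sketch of that bridging step as code, with an illustrative result shape (Claude Code doesn't expose this as an API; the orchestrator does it in prose, but the mechanics are the same):

```typescript
// Sketch: the orchestrator bridges context between dependent tasks by
// injecting the previous task's relevant output into the next prompt.
// The TaskResult shape and template are illustrative assumptions.
interface TaskResult {
  project: string;
  summary: string;
  apiShape?: string; // the response contract the next layer depends on
}

function buildDispatchPrompt(taskBody: string, upstream: TaskResult[]): string {
  const bridge = upstream
    .map((r) => `- [${r.project}] ${r.summary}${r.apiShape ? `\n  API: ${r.apiShape}` : ""}`)
    .join("\n");
  return `${taskBody}\n\nContext from previous task:\n${bridge}`;
}

const backendResult: TaskResult = {
  project: "agent-service",
  summary: "Added lastActive (Date, optional, indexed); set on PATCH /agents/:id",
  apiShape: '{ id, name, ..., lastActive: "2026-04-01T..." | null }',
};

const sdkPrompt = buildDispatchPrompt(
  "TASK: Add lastActive to Agent type and update hooks",
  [backendResult]
);
console.log(sdkPrompt);
```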

Phase 3: verify cross-project consistency

After all subagents complete, the orchestrator reviews results for integration gaps. This is the phase that catches mismatches no individual subagent can see.

Common things to check:

  • Type alignment. Does the SDK's Agent interface match the actual API response shape? If the backend returns lastActive as a Date and the SDK types it as string, there's a mismatch.
  • Import correctness. Does the frontend import from @chanl-ai/platform-sdk/react, not from a local types file?
  • Query key invalidation. When the backend changes an agent, does the SDK's mutation hook invalidate the right query keys?
  • Error handling. If the backend returns 404 for an agent without lastActive, does the frontend handle the null case? Scorecards can automate this kind of quality check across your entire agent fleet.

We encode this as a checklist the orchestrator runs through:

```markdown
## Cross-project verification

- [ ] SDK Agent type matches API response shape
- [ ] Frontend imports from SDK, not local types
- [ ] Mutation hooks invalidate correct query keys
- [ ] Null/undefined cases handled at every layer
- [ ] Build passes in all three projects
- [ ] No cross-layer rule violations
```
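The first checkbox, type alignment, is also the easiest to automate. A hedged sketch of a runtime shape check for the running example (the validator and field list are illustrative, not a real SDK utility):

```typescript
// Sketch of a Phase 3 type-alignment check: verify a live API response
// matches what the SDK's Agent interface expects. Fields follow the
// running example; validateAgentShape is an illustrative helper.
interface SdkAgent {
  id: string;
  name: string;
  lastActive?: string; // ISO date string after JSON serialization
}

function validateAgentShape(raw: unknown): raw is SdkAgent {
  if (typeof raw !== "object" || raw === null) return false;
  const r = raw as Record<string, unknown>;
  if (typeof r.id !== "string" || typeof r.name !== "string") return false;
  // Optional field: absent or null is fine; if present, must parse as a date.
  if (
    r.lastActive != null &&
    (typeof r.lastActive !== "string" || isNaN(Date.parse(r.lastActive)))
  ) {
    return false;
  }
  return true;
}

console.log(validateAgentShape({ id: "a1", name: "bot", lastActive: "2026-04-01T00:00:00Z" })); // true
console.log(validateAgentShape({ id: "a1", name: "bot", lastActive: 1712000000 })); // false: Date leaked through as a number
```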

How should you write dispatch prompts?

The dispatch prompt is the single most important artifact in the orchestrator pattern. A vague prompt produces vague work. A precise prompt produces work that slots into your codebase like it was written by someone who's been on the team for months.

Here are the patterns we've learned from dispatching hundreds of subagents across a 17-project monorepo.

Always include "read first" files

The subagent doesn't know your project conventions unless you point it at the right documentation. We always include the project's CLAUDE.md and the relevant scoped rules file. These files contain patterns like "always use findByIdAndUpdate" or "always export types from react/index.ts" that the subagent needs to follow.

```markdown
Read first:
  - services/agent-service/CLAUDE.md
  - .claude/rules/backend-services.md
```

Without these, the subagent improvises. It might use doc.save() for updates. It might create a custom auth guard instead of using AuthModule.forRoot(). It might define types in the app instead of the SDK. All of these are architectural violations in our codebase, and all of them are avoided by pointing the subagent at the right rules.

Bridge context between dependent tasks

When Task #2 depends on Task #1, the orchestrator must extract the relevant output from Task #1 and include it in Task #2's prompt. The subagent for Task #2 wasn't alive during Task #1. It has zero implicit knowledge of what happened.

```markdown
Context from previous task (backend):
- New field: lastActive (Date, optional, indexed)
- Returned in GET /agents/:id and GET /agents responses
- Format: ISO 8601 string after JSON serialization
- Null for agents created before this change
```

This is where many orchestrator setups fail. They dispatch Task #2 with a reference to Task #1 but don't actually include the relevant details. The subagent then has to guess the API shape, and guesses are wrong often enough to cause rework.

Specify test requirements explicitly

Every task description should include what test to write and what constitutes passing. Not "write tests" but "unit test: PATCH an agent, assert lastActive is a recent Date (within 5 seconds of now)."

```markdown
Acceptance criteria:
1. Schema field exists: lastActive, type Date, optional
2. Service method sets it: findOneAndUpdate includes $set: { lastActive }
3. Test exists: patches agent, asserts lastActive > (now - 5000ms)
4. Swagger: @ApiPropertyOptional on the DTO
```

The more specific the test requirement, the more likely the subagent writes a test that actually validates the behavior instead of a test that merely exists.
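To make that concrete, the "recent Date" assertion might look like this in plain TypeScript. patchAgent here is a hypothetical in-memory stand-in for the real service method, not the actual implementation:

```typescript
// Sketch of the "lastActive is recent" acceptance test. patchAgent is a
// hypothetical stand-in that simulates the update in memory.
interface Agent {
  id: string;
  name: string;
  lastActive?: Date;
}

function patchAgent(agent: Agent, changes: Partial<Agent>): Agent {
  // Real code would use findByIdAndUpdate with $set: { ...changes, lastActive: new Date() }.
  return { ...agent, ...changes, lastActive: new Date() };
}

const before: Agent = { id: "a1", name: "support-bot" };
const after = patchAgent(before, { name: "support-bot-v2" });

// Acceptance: lastActive is set and within 5 seconds of now.
if (!after.lastActive) throw new Error("lastActive not set");
const ageMs = Date.now() - after.lastActive.getTime();
if (ageMs > 5000) throw new Error(`lastActive too stale: ${ageMs}ms`);
console.log("lastActive recency check passed");
```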

Include the rules that matter most

Don't just point at the rules file. Call out the two or three rules most relevant to this specific task. The rules file might be 300 lines. The subagent will follow the ones at the top more reliably than the ones buried in the middle.

```markdown
KEY RULES for this task:
- Use findByIdAndUpdate, NEVER doc.save() (race condition risk)
- Always scope queries by workspaceId (prevents cross-tenant leak)
- Return data directly from controller (ResponseInterceptor wraps it)
```

This redundancy is intentional. Yes, these rules are in backend-services.md. But repeating them in the task description puts them in the subagent's immediate attention, not 200 lines deep in a rules file.

When do you use Agent Teams instead?

Agent Teams are Claude Code's built-in multi-agent orchestrator. They're experimental and disabled by default, but they add coordination primitives that subagents lack: a shared task list with dependency tracking, peer-to-peer messaging between teammates, and file locking to prevent merge conflicts.

Enable them in ~/.claude/settings.json:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```

The mental model shift: subagents are one-way delegation (parent dispatches, child reports back). Agent Teams are collaborative coordination (teammates communicate with each other, not just with a lead).

When Agent Teams help

Iterative cross-project work. If the backend and frontend need to negotiate an API contract through multiple rounds of feedback, Agent Teams let them message each other directly instead of routing everything through the parent.

Shared state management. When multiple agents modify the same configuration file or shared module, the task list's dependency tracking and file locking prevent conflicts.

Long-running sessions. If the work takes hours and you want to interact with individual teammates directly (not just the lead), Agent Teams support that. With subagents, you can only talk to the parent.

When subagents are better

Cost sensitivity. Agent Teams consume roughly 7x more tokens than a standard session. The coordination protocol (task syncing, peer messages, file locks) adds overhead on every operation. Subagents exist only for the duration of their task.

Simple delegation. If each task is independent and doesn't need to communicate with other tasks, the coordination overhead of Agent Teams is pure waste. Dispatch three subagents, collect three results, done.

Reliability. Agent Teams have known limitations. Session resumption (/resume) doesn't restore in-process teammates. Teammates sometimes fail to mark tasks as completed, blocking dependent work. The experimental label is earned.

For our monorepo, we use subagents for 90%+ of multi-agent work. Agent Teams are useful for the occasional complex feature where the backend and SDK need to iterate on an API contract, but those situations are rare enough that we haven't made Agent Teams our default.

How does this look in a real monorepo?

Let me walk through a real example from our codebase. Our platform is a monorepo with 8 NestJS microservices, 3 Next.js frontend apps, a TypeScript SDK, a Python voice bot, and a Vercel-hosted MCP server. When a feature touches multiple projects, the orchestrator pattern is the only way it works reliably.

The /dispatch skill

We've encoded the entire orchestrator pattern as a slash command. When you type /dispatch add lastActive to agents, the skill:

  1. Reads config/projects.yaml to identify affected projects
  2. Creates tasks with dependencies using TaskCreate
  3. Dispatches subagents in layer order (backend first, then SDK, then UI)
  4. Updates task status as each completes
  5. Reviews cross-project consistency
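Steps 1-3 reduce to building a context packet per project from the registry. A sketch under the assumption that each registry entry carries paths and commands (the field names are illustrative, not the actual projects.yaml schema):

```typescript
// Sketch: building a context packet from a project registry entry.
// The ProjectEntry shape mirrors what a config/projects.yaml might
// hold; field names are illustrative.
interface ProjectEntry {
  name: string;
  path: string;
  claudeMd: string;
  rulesFile: string;
  commands: { build: string; test: string; health?: string };
}

function contextPacket(p: ProjectEntry, task: string): string {
  return [
    `You are working on: ${p.name}`,
    `Read first: ${p.claudeMd} and .claude/rules/${p.rulesFile}`,
    `Path: ${p.path}`,
    `Commands: build=${p.commands.build} test=${p.commands.test}` +
      (p.commands.health ? ` health=${p.commands.health}` : ""),
    "",
    `TASK: ${task}`,
  ].join("\n");
}

const agentService: ProjectEntry = {
  name: "agent-service",
  path: "services/agent-service/",
  claudeMd: "services/agent-service/CLAUDE.md",
  rulesFile: "backend-services.md",
  commands: { build: "pnpm build", test: "pnpm test", health: "curl localhost:8002/health" },
};

console.log(contextPacket(agentService, "Add lastActive field to agent schema"));
```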

Here's the skill's dispatch template, which every subagent receives:

```markdown
You are working on: {project name}
Read first: {claude_md path} and .claude/rules/{rules_file}
Path: {project path}
Commands: {build, test, health from projects.yaml}

TASK: {task subject}
{task description with acceptance criteria}

RULES:
- Write the test FIRST (TDD red), then implement to make it pass
- Stay within the project directory
- Use project-specific make/pnpm commands for build/test
- Return: what you did, test results, and any issues found
```

The "stay within the project directory" rule prevents subagents from wandering into other services. Without it, a backend subagent might try to "helpfully" update the SDK types or reconfigure agent tools, creating a conflict when the SDK subagent runs next.

Scope guardrails for the orchestrator

Not every feature should be dispatched to subagents. Some changes are too risky for parallel work. We've encoded blast-radius checks that trigger before the orchestrator creates any tasks.

| Risk Signal | Example | Required Action |
|---|---|---|
| Schema field change on a core entity | Renaming status on Agent | Map ALL downstream consumers first |
| More than 5 tasks | Feature touching 4 services + SDK + 2 apps | Split into multiple PRs |
| Shared module change | Modifying nestjs-common auth guard | Grep all imports, list what breaks |
| API contract change | Changing pagination response shape | Must be backwards-compatible |

These guardrails have prevented several "I renamed a field and broke 12 things" disasters. The orchestrator checks the task plan against these triggers before dispatching. If it hits one, it stops and asks for confirmation. This is the same principle behind testing AI agents before production: catch the blast radius before it happens, not after. Automated scenario testing catches these integration failures at the API level before they reach users.
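A sketch of such a gate, with the table's triggers encoded as a function (the plan shape and flags are illustrative):

```typescript
// Sketch of the blast-radius gate: before dispatching, check the plan
// against risk triggers and stop for confirmation if any fire.
// The Plan shape is an illustrative assumption.
interface Plan {
  tasks: string[];
  touchesCoreSchema: boolean;
  touchesSharedModule: boolean;
  changesApiContract: boolean;
}

function blastRadiusWarnings(plan: Plan): string[] {
  const warnings: string[] = [];
  if (plan.touchesCoreSchema)
    warnings.push("Core schema change: map ALL downstream consumers first");
  if (plan.tasks.length > 5)
    warnings.push("More than 5 tasks: split into multiple PRs");
  if (plan.touchesSharedModule)
    warnings.push("Shared module change: grep all imports, list what breaks");
  if (plan.changesApiContract)
    warnings.push("API contract change: must be backwards-compatible");
  return warnings; // empty array => safe to dispatch without confirmation
}

const featurePlan: Plan = {
  tasks: ["backend", "sdk", "admin"],
  touchesCoreSchema: false,
  touchesSharedModule: false,
  changesApiContract: true,
};
console.log(blastRadiusWarnings(featurePlan)); // one warning: backwards compatibility
```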

Parallel vs sequential dispatch

Independent tasks dispatch in parallel. Dependent tasks wait for their blockers to complete. In practice, this means:

Parallel: Two backend services that don't depend on each other. For example, adding a field to agent-service and adding an endpoint to interactions-service can happen simultaneously if neither depends on the other's output.

Sequential: Anything that follows the DRY onion. SDK waits for backend. Frontend waits for SDK. The shape of each layer is determined by the layer below it.

[Figure: the orchestrator dispatches agent-service (add lastActive field) and interactions-service (add analytics endpoint) in parallel, then platform-sdk (update Agent type + hook), then chanl-admin (new column + analytics card).]
Real dispatch pattern: parallel where possible, sequential where dependent

In this example, the two backend tasks run in parallel (saving time), but the SDK task waits for both to finish (it needs both API shapes). The frontend task waits for the SDK (it imports the hooks).
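That dispatch flow can be sketched as plain async code. dispatch here is a stand-in for spawning a real subagent; it just echoes the task:

```typescript
// Sketch of the dispatch pattern above: independent backend tasks run
// in parallel, then the SDK waits for both, then the frontend waits
// for the SDK. dispatch() is a stand-in for spawning a real subagent.
async function dispatch(task: string): Promise<string> {
  return `done: ${task}`; // real code would spawn a Task-tool subagent here
}

async function runPlan(): Promise<string[]> {
  // Parallel: two independent backend tasks.
  const [backend1, backend2] = await Promise.all([
    dispatch("[agent-service] add lastActive field"),
    dispatch("[interactions-service] add analytics endpoint"),
  ]);
  // Sequential: SDK needs both API shapes; frontend needs the SDK hook.
  const sdk = await dispatch(`[platform-sdk] update types (given: ${backend1}; ${backend2})`);
  const frontend = await dispatch(`[chanl-admin] new column (given: ${sdk})`);
  return [backend1, backend2, sdk, frontend];
}

runPlan().then((results) => console.log(results.length, "tasks completed"));
```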

What goes wrong and how do you fix it?

The most common failures are lossy context hand-offs between subagents, cross-layer type mismatches where the SDK and API disagree on field shapes, and rules buried deep in files getting ignored. We've been running this pattern for about a year, and these are the ones that bit us hardest.

Subagent context hand-off is lossy

The subagent doesn't see the orchestrator's conversation. It gets a task description and file paths, but misses nuance from the discussion. If you spent five minutes explaining to the orchestrator that "lastActive should only update on user-initiated changes, not system events," and that context didn't make it into the task description, the subagent will update lastActive on every change.

Fix: Make task descriptions verbose about constraints. Include the "why" and the "except when." Better to over-specify than under-specify.

Cross-layer type mismatches

The backend subagent returns lastActive as a Date object. The SDK subagent types it as string. The frontend subagent formats it as a relative time. Somewhere in the chain, a mismatch happens. Maybe the backend returns null for agents created before the migration, but the SDK typed it as non-optional. The frontend crashes on .toISOString() of null.

Fix: The orchestrator's Phase 3 verification must explicitly check type alignment. We run through a checklist: "Does the SDK interface match the actual API response? Are optional fields marked optional? Are null cases handled?"

Subagents ignore rules deep in the file

Just like CLAUDE.md, rules files have an attention gradient. Rules near the top get followed more reliably than rules buried 200 lines deep. When a subagent reads backend-services.md, it might miss the anti-pattern about export { Type } vs export type { Type } that's at line 280.

Fix: Repeat the most critical rules in the task description itself. Yes, it's redundant. But the task description is in the subagent's immediate attention. The rules file is supplementary context.

The orchestrator becomes a bottleneck

If you're dispatching 8+ subagents, the orchestrator spends more time coordinating than actual work gets done. Reading results, bridging context, checking alignment, dispatching the next batch. At some point, the coordination overhead exceeds the parallelism benefit.

Fix: Keep sessions under 8 tasks. If a feature plan exceeds that, split it into multiple PRs. "Session 1: backend only. Session 2: SDK + UI." This matches how human engineering teams work, too. A feature that touches every service in a monorepo isn't one PR. It's a phased rollout.
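The session-splitting rule can be sketched as plain chunking (in practice, the batch boundaries should also respect the DRY-onion layer order, which this simple version ignores):

```typescript
// Sketch: split an oversized plan into batches of at most 8 tasks,
// one batch per session/PR. Plain chunking; real batch boundaries
// should also respect layer ordering.
function batchPlan<T>(tasks: T[], maxPerSession = 8): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < tasks.length; i += maxPerSession) {
    batches.push(tasks.slice(i, i + maxPerSession));
  }
  return batches;
}

const tasks = ["t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8", "t9", "t10"];
console.log(batchPlan(tasks).length);    // 2 sessions at the default cap of 8
console.log(batchPlan(tasks, 4).length); // 3 sessions at a cap of 4
```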

Agent Teams fail on session resume

This is a known limitation. If you /resume an Agent Teams session, in-process teammates don't restore. The lead may try to message teammates that no longer exist. Tasks get stuck in "in progress" forever.

Fix: Treat Agent Teams sessions as ephemeral. Don't rely on resume. If a session dies, start fresh. For work that spans multiple days, use subagents with explicit state (task descriptions, not in-memory coordination).

Why does specialization beat generalization?

Three focused agents consistently outperform one generalist working three times as long. That's the pattern every practitioner in the growing Claude Code community converges on. The awesome-claude-code repo curates skills, hooks, and MCP configurations. The awesome-claude-code-toolkit catalogs 135+ agents and 150+ plugins. Tier 2 orchestrators like Claude Squad and Conductor build visual dashboards on top of the same subagent primitives. Across all of them, the conclusion is the same. The benefit isn't just context window efficiency. It's cognitive focus. A subagent that reads only backend rules produces better backend code than a generalist juggling three sets of conventions.

Addy Osmani frames the shift well: from conductor (guiding one agent in real-time) to orchestrator (coordinating an ensemble). The conductor model hits a ceiling when your codebase is big enough that no single context window can hold all the relevant conventions simultaneously. The orchestrator model scales because each ensemble member only needs to hold its own part.

Decision framework: which pattern do you need?

Start with a single agent for single-project work. Move to subagents when the task crosses project boundaries with different conventions. Reserve Agent Teams for the rare cases where agents need to negotiate with each other. Here's the full decision tree we use every day.

Start with a single agent. Most tasks are single-project, single-layer changes. A bug fix in one service. A new column in one table. A refactored hook. Don't reach for subagents when a single session will do. The overhead of planning, dispatching, and verifying is real.

Move to subagents when the task crosses project boundaries. Three or more distinct implementation steps across different projects with different conventions. Backend + SDK + UI. Service A + Service B. TypeScript + Python. The context window dilution from loading multiple rule sets is the signal.

Consider Agent Teams when agents need to coordinate with each other. Not just report to a parent, but actually negotiate. An API contract discussion between backend and frontend. A shared configuration that multiple agents modify in sequence. These situations are uncommon enough that subagents with explicit context bridging handle most of them, but when they arise, Agent Teams' shared task list and peer messaging are genuinely useful.

Reach for external orchestrators (Tier 2) when you need visual oversight of 5+ agents. Dashboards showing what each agent is working on, diff review before merge, worktree isolation for parallel file edits. This is typically team-level infrastructure, not individual developer tooling.

| Signal | Pattern | Why |
|---|---|---|
| Single file, one project | Single agent | No overhead needed |
| Bug fix, one service | Single agent | Context is focused already |
| Feature across 2-3 layers | Subagents | Avoids attention dilution |
| Feature across 4+ services | Subagents with phased PRs | Keep sessions under 8 tasks |
| API contract negotiation | Agent Teams | Agents need peer communication |
| 5+ parallel agents | Tier 2 orchestrator | Need visual oversight and worktree isolation |
| Async overnight PRs | Tier 3 cloud agent | No local terminal needed |

Putting it together

The orchestrator pattern isn't complicated. It's four phases: clarify, plan, dispatch, verify. But the details matter. The dispatch prompt is the most important artifact. The inside-out task ordering prevents rework. The cross-project verification catches integration gaps. And the scope guardrails keep sessions from becoming unmanageable.

If you're working in a monorepo, or any codebase with distinct layers that have different conventions, start with one thing: the next time you have a multi-project feature, resist the urge to do it all in one session. Create a plan with three tasks. Dispatch three subagents with specific context packets. Review the results for consistency.

You'll notice two things immediately. First, each subagent produces cleaner code because it was focused on one set of patterns. Second, the orchestrator catches integration issues that a single agent would have silently introduced. That's the pattern working.

The Part 1 mental model gives you the foundation for where rules, skills, and hooks fit. Part 3 digs into MCP connectors. The Part 4 production walkthrough shows how all 7 extension points compose. This article gives you the subagent playbook.

Remember that three-layer feature from the opening? Backend schema, SDK types, frontend table column. Instead of one agent losing the Mongoose gotcha by token 15,000, you get three agents that each nail their layer and an orchestrator that checks the seams. The tools exist. The patterns are documented. The difference between teams that get 2x from Claude Code and teams that get 10x is whether they treat it as a single agent or an engineering team. Many focused windows, properly coordinated, consistently outperform one overloaded window trying to hold everything at once.

Build AI agents with managed tool orchestration

Chanl equips your AI agents with tools, knowledge, and memory, then tests them with realistic scenarios before production. The same orchestration principles that make subagents work also make agent tool systems reliable.

Explore the platform
Dean Grover, Co-founder. Building the platform for AI agents at Chanl: tools, testing, and observability for customer experience.

