Claude Code subagents: how to run parallel tasks and finish work faster
The single biggest speed improvement in Claude Code is not a faster model or a bigger context window. It is subagents: one main agent delegating independent tasks to separate workers that all run at the same time.
Most people use Claude Code as a single thread: do this, then do that, then do the next thing. That works, but for sessions with multiple independent tasks, you are leaving half your productivity on the table.
What subagents actually are
A subagent is a fresh Claude Code instance spawned by your main session. It gets its own context window, its own tools, and works independently. The main agent sends it a task, continues with other work, and picks up the result when the subagent finishes.
Think of it like this: you are the project manager. Instead of doing every task yourself, you assign independent tasks to team members who work in parallel. You only handle the tasks that depend on each other.
The technical mechanism is the Agent tool. When Claude Code identifies that two or more tasks have no dependencies between them, it can launch multiple subagents in a single response. Each subagent gets a complete brief: what to do, why, and what context it needs.
When subagents make sense
Research and exploration
You need to understand three different parts of a codebase before making a change. Instead of reading one area at a time, spawn three subagents that each explore a different area simultaneously. Results come back together, and you have the full picture in one round instead of three.
Independent file changes
You need to update the API endpoint, the frontend component, and the test file. None of these changes depend on the others being done first. Three subagents, three files, one round of work.
Testing and validation
After making changes, you want to run the test suite, check for linting errors, and verify the build. These are independent checks. Launch all three as subagents and get all results at once.
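The shape of this pattern is the same as backgrounding jobs in a shell: start every independent check, then collect all the results. A minimal sketch, with echo stand-ins where your real test, lint, and build commands would go:

```shell
#!/bin/sh
# Three independent checks run concurrently; we proceed only when all
# succeed. The echo functions are stand-ins for real project commands.
run_tests() { echo "tests: pass"; }
run_lint()  { echo "lint: clean"; }
run_build() { echo "build: ok"; }

run_tests & p1=$!
run_lint  & p2=$!
run_build & p3=$!

# wait <pid> returns that job's exit status, so a failing check
# short-circuits the final message.
wait "$p1" && wait "$p2" && wait "$p3" && echo "all checks passed"
```

Subagents give you this fan-out/fan-in shape without you writing any of the plumbing: the main agent launches the checks and merges the results.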
Content and documentation
Writing three blog posts. Generating API docs for three modules. Creating test data for three features. Any batch of similar but independent writing tasks benefits from parallel execution.
When subagents do not make sense
Not everything should be parallelized:
- Sequential dependencies. If task B needs the output of task A, they must run in order. Forcing parallelism here creates errors or wasted work.
- Same-file edits. Two subagents editing the same file will create conflicts. Use worktrees (more on this below) or sequence these tasks.
- Simple tasks. Spawning a subagent has overhead. For a task that takes 10 seconds, running it directly is faster than delegating.
- Tasks requiring human judgment. If a task needs you to make a decision midway, a subagent cannot pause and ask. Keep these in the main thread.
Git worktrees for isolation
The most powerful subagent pattern uses git worktrees. A worktree gives each subagent its own copy of the repository, so they can make changes to any file without conflicting with each other or with your main working directory.
Here is when to use worktrees:
- Two subagents might touch overlapping files
- You want to review each subagent's changes independently before merging
- The task involves experimental changes you might want to discard
Without worktrees, subagents work directly in your repository. This is fine when they touch completely different files, but risky when there is any overlap.
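Setting this up is plain git, no Claude Code magic involved. A sketch of one-worktree-per-subagent, with illustrative branch and directory names (the temp repo is only there to make the example self-contained):

```shell
#!/bin/sh
set -e
# Self-contained demo repo; in practice you run this in your own repo.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# One worktree per subagent, each on its own branch off HEAD.
# Directory and branch names are our own convention.
git worktree add -q "$repo-agent-api"      -b agent/api
git worktree add -q "$repo-agent-frontend" -b agent/frontend

# Each path is a full checkout; subagents can edit anything inside it
# without touching your main working directory.
git worktree list
```

When a subagent finishes, you review its branch, merge what you want, and remove the worktree with `git worktree remove`.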
How to structure subagent prompts
The quality of subagent output depends almost entirely on the prompt. A subagent starts with a blank context. It does not know what you discussed in the main session. It does not know what other subagents are doing. You need to brief it like a colleague who just walked into the room.
Good subagent prompts include:
- What to accomplish. The concrete deliverable, not a vague goal.
- Why it matters. Context that helps the subagent make judgment calls.
- What you already know. File paths, patterns discovered, things tried.
- What tools to use or avoid. Read-only research? Full editing? Specific directories to focus on?
- Expected output format. Should it write code, return a summary, or create a file?
A bad prompt: "Fix the tests." A good prompt: "The test file at src/auth/login.test.ts has 3 failing tests after we changed the JWT expiry from 1 hour to 15 minutes. Update the test expectations to match the new 15-minute expiry. Run the tests to confirm they pass."
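One way to make the five-part brief a habit is to write it to a file before delegating, so nothing gets dropped. A sketch of that convention (the file layout and section labels are our own, not a Claude Code requirement):

```shell
#!/bin/sh
# Capture all five brief elements in one file the subagent can be
# pointed at. Contents mirror the JWT-expiry example above.
brief=$(mktemp)
cat > "$brief" <<'EOF'
Goal: update src/auth/login.test.ts so its 3 failing tests expect the
new 15-minute JWT expiry (was 1 hour).
Why: token lifetime was shortened; the tests still assert the old value.
Known context: only the expiry constant changed; auth logic is untouched.
Tools: edit the test file and run the test suite; touch no other files.
Output: passing tests plus a one-line summary of what changed.
EOF
cat "$brief"
```

If a brief is hard to write because you cannot state the goal concretely, that is usually a sign the task belongs in the main thread, not a subagent.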
Real workflow examples
Example 1: Feature implementation
You want to add a dark mode toggle to your app. The main agent plans the approach and identifies three independent pieces:
- Subagent 1: Create the toggle component and state management
- Subagent 2: Write the dark mode CSS variables and theme definitions
- Subagent 3: Update the settings page layout to include the toggle
All three run simultaneously. The main agent then integrates the results, handles any overlap, and runs the test suite.
Example 2: Code review
You have a pull request with changes across the API, frontend, and database layers. Instead of reviewing sequentially:
- Subagent 1 (read-only): Review API changes for security issues
- Subagent 2 (read-only): Review frontend changes for UX problems
- Subagent 3 (read-only): Review database migration for data integrity
Each subagent is a specialist. The main agent combines their findings into a unified review.
Example 3: SEO content batch
You need five blog posts for your product. Each post targets a different keyword and has no dependency on the others. Five subagents, five posts, one session. This is exactly how we produce content at Nova Labs, and it turns what used to be a full day of writing into a single session.
Common mistakes
- Launching too many subagents at once. Rate limits apply across all subagents. Four concurrent agents hitting the API hard will trigger limits faster than one agent working steadily.
- Vague delegation. "Research this topic" produces generic results. "Find the three most upvoted Reddit threads in r/ClaudeAI about token costs and summarize the main complaints" produces actionable intelligence.
- Not verifying subagent output. Subagents work independently but are not infallible. Always review their output before building on it, especially for code changes.
- Parallelizing everything. Some tasks benefit from the main agent's accumulated context. A complex debugging session where each step builds on the previous finding should stay in the main thread.
Setting up your project for subagent success
The better your project is organized, the better subagents perform. A few structural choices that help:
- Clear directory structure. Subagents navigate by file paths. A well-organized project means less exploration and faster results.
- A good CLAUDE.md. Subagents inherit your project's CLAUDE.md instructions. Rules about coding style, testing conventions, and file organization apply to subagents automatically. This is why a solid CLAUDE.md file matters so much.
- Modular code. Code with clear boundaries between modules is easier to work on in parallel. Tightly coupled code forces sequential work.
- Git hygiene. Worktree-based subagents need a clean git state. Uncommitted changes and dangling branches make isolation harder.
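The git-hygiene point above is easy to check mechanically. A minimal pre-flight sketch, assuming you run it from inside the repository before delegating worktree tasks:

```shell
#!/bin/sh
# Refuse to spawn worktree subagents from a dirty working tree:
# `git status --porcelain` prints one line per uncommitted or
# untracked change, so empty output means a clean state.
if [ -n "$(git status --porcelain 2>/dev/null)" ]; then
  echo "dirty working tree: commit or stash before delegating" >&2
  exit 1
fi
echo "clean tree: safe to create worktrees"
```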
The productivity math
A typical Claude Code session might involve five tasks that take 2-3 minutes each sequentially: 10-15 minutes total. With subagents handling three of those tasks in parallel, the session drops to roughly 6-9 minutes: two sequential tasks plus one parallel round. That is not revolutionary for one session. But across a full day of development sessions, you are recovering hours.
The real win is not time savings per task. It is the ability to think bigger about what you can accomplish in a single session. When parallel execution is available, you stop breaking work into tiny chunks and start tackling entire features at once.
Want to see how much your current Claude Code workflow costs and where the tokens go? Try our free cost analyzer. Upload a session log and see the full breakdown, including how much you could save by optimizing your approach.
And if you want the complete playbook for setting up Claude Code as a business operating system, with subagent patterns, skill systems, memory architecture, and everything we use to run Nova Labs, check out the AI OS Blueprint.
Want to build your own AI OS?
The AI OS Blueprint gives you the complete system: 53-page playbook, working skills, and a clonable repo. Starting at $47.
30-day money-back guarantee. No subscription.