Domain 1
Task 1.3

Subagent Invocation & Context Passing


What You Need to Know

Task Statement 1.3 is about the mechanics of how a coordinator actually invokes subagents and passes information between them. If 1.2 taught you the architecture, 1.3 teaches you the wiring.

The Task Tool

The Task tool is the mechanism for spawning subagents from a coordinator. It is not a suggestion or a convention — it is the specific API call that makes multi-agent orchestration work in the Claude Agent SDK.

There is a critical configuration requirement: the coordinator's allowedTools must include "Task". Without it, the coordinator simply cannot spawn subagents. This is a binary gate, not a soft preference: if Task is missing from allowedTools, the coordinator has no way to invoke subagents, full stop.

Each subagent is defined by an AgentDefinition that specifies three things:

  1. Description — what the subagent does (used by the coordinator to decide when to invoke it).
  2. System prompt — the instructions the subagent follows.
  3. Tool restrictions — which tools the subagent can access (scoped to its role).

Key Concept

The coordinator's allowedTools must include "Task" to spawn subagents. This is a hard requirement. Without it, the coordinator cannot invoke any subagent regardless of how they are defined.
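As a sketch, the gate can be expressed with plain TypeScript types that mirror the SDK's shape (the interface and helper names here are illustrative mirrors, not the SDK's own exports):

```typescript
// Illustrative local mirror of the SDK's option shapes (names assumed).
interface AgentDefinition {
  description: string; // tells the coordinator when to invoke this subagent
  prompt: string;      // the subagent's system prompt
  tools: string[];     // tool restrictions scoped to the subagent's role
}

interface CoordinatorOptions {
  allowedTools: string[];
  agents: Record<string, AgentDefinition>;
}

// The hard gate: spawning is only possible when "Task" is in allowedTools.
function canSpawnSubagents(options: CoordinatorOptions): boolean {
  return options.allowedTools.includes("Task");
}

const options: CoordinatorOptions = {
  allowedTools: ["Task", "Read"],
  agents: {
    web_search: {
      description: "Searches the web and returns sourced results",
      prompt: "You are a web research agent. Always return source URLs.",
      tools: ["WebSearch"],
    },
  },
};

console.log(canSpawnSubagents(options)); // true
console.log(canSpawnSubagents({ ...options, allowedTools: ["Read"] })); // false
```

The check makes the binary nature of the gate explicit: no amount of careful agent definition compensates for a missing "Task" entry.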

Context Passing: The Make-or-Break Detail

Context passing is where most multi-agent systems fail in practice. The principle from 1.2 applies directly here: subagents have isolated context. They receive only what the coordinator explicitly includes in their prompt.

There are three rules for effective context passing:

Rule 1: Include complete findings from prior agents. If the synthesis subagent needs web search results and document analysis output, the coordinator must pass both — in full — in the synthesis subagent's prompt. Do not assume the synthesis agent can "look up" prior results. It cannot.

Rule 2: Use structured data formats that separate content from metadata. When passing research findings between agents, the data must include both the content (the claim, the fact, the analysis) and the metadata (source URL, document name, page number). If you pass content without metadata, the downstream agent cannot attribute claims to sources.

This is a specific exam pattern: a synthesis agent produces a report with unsourced claims. The web search and document analysis subagents are working correctly. The root cause is that the coordinator passed content without structured metadata — the synthesis agent literally had no source information to include.

Rule 3: Design coordinator prompts that specify goals, not procedures. The coordinator prompt should tell subagents what to achieve and what quality criteria to meet, not step-by-step instructions for how to do it. Goal-oriented prompts enable subagent adaptability. Procedural instructions constrain subagents and prevent them from adjusting their approach when they encounter unexpected situations.

Exam Trap

When a synthesis agent produces unsourced claims, the exam expects you to identify the context passing failure — specifically, missing structured metadata. Do not blame the synthesis agent's prompt or propose giving it direct tool access.

Structured Metadata Format

The structured data format for inter-agent context passing should separate content from metadata cleanly. A practical format looks like this:

```json
{
  "findings": [
    {
      "claim": "Solar panel efficiency has increased 25% in the last decade",
      "source_url": "https://example.com/solar-report",
      "document_name": "Annual Solar Industry Report 2024",
      "page_number": 14,
      "confidence": "high",
      "retrieved_by": "web_search_agent"
    }
  ]
}
```

Each finding carries its source attribution as metadata. When the synthesis agent receives this structured data, it has everything it needs to produce a properly cited report.
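The same format can be captured as a TypeScript type, with a sketch of a prompt builder (buildSynthesisPrompt is an illustrative helper, not an SDK function) that passes complete findings, metadata included, to the synthesis agent:

```typescript
// Finding type matching the JSON format above; field names as in the example.
interface Finding {
  claim: string;
  source_url: string;
  document_name: string;
  page_number: number;
  confidence: "high" | "medium" | "low";
  retrieved_by: string;
}

// Build the synthesis subagent's prompt from complete structured findings.
// Passing the full objects (not just claim text) is what enables citations.
function buildSynthesisPrompt(findings: Finding[]): string {
  return [
    "Synthesise a report from the findings below.",
    "Cite every claim with its source URL, document name, and page number.",
    JSON.stringify({ findings }, null, 2),
  ].join("\n\n");
}

const findings: Finding[] = [
  {
    claim: "Solar panel efficiency has increased 25% in the last decade",
    source_url: "https://example.com/solar-report",
    document_name: "Annual Solar Industry Report 2024",
    page_number: 14,
    confidence: "high",
    retrieved_by: "web_search_agent",
  },
];

const prompt = buildSynthesisPrompt(findings);
```

Because the serialised findings travel intact inside the prompt, every claim the synthesis agent sees arrives with its attribution attached.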

Parallel Spawning

When a coordinator needs to invoke multiple subagents for independent tasks, it should emit multiple Task tool calls in a single response rather than invoking them one at a time across separate turns.

Sequential spawning (one subagent per coordinator turn) introduces unnecessary latency. If the web search agent and document analysis agent can work independently, there is no reason to wait for one to finish before starting the other.

The exam tests latency awareness. When presented with independent subagent tasks, the correct answer involves parallel spawning. Look for answer options that mention "in a single response" or "simultaneously" — these signal the parallel pattern.

Key Concept

Spawn independent subagents in parallel by emitting multiple Task tool calls in a single coordinator response. This reduces latency compared to sequential invocation across separate turns.
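In the SDK, the parallelism comes from the model emitting several Task calls in one response; the latency difference can be simulated with two stand-in async tasks (webSearchAgent and documentAnalysisAgent here are placeholders, not real subagent calls):

```typescript
// Simulation of sequential vs parallel subagent invocation (stand-in tasks;
// in the real SDK the runtime executes multiple Task calls from a single
// coordinator response concurrently).
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function webSearchAgent(): Promise<string> {
  await delay(200); // pretend network latency
  return "web search results";
}

async function documentAnalysisAgent(): Promise<string> {
  await delay(200); // pretend file analysis time
  return "document analysis";
}

async function main() {
  // Sequential: roughly 400ms total, each turn waits for the previous subagent.
  let t0 = Date.now();
  await webSearchAgent();
  await documentAnalysisAgent();
  const sequentialMs = Date.now() - t0;

  // Parallel: roughly 200ms total, both subagents spawned in a single turn.
  t0 = Date.now();
  const [search, docs] = await Promise.all([
    webSearchAgent(),
    documentAnalysisAgent(),
  ]);
  const parallelMs = Date.now() - t0;

  console.log({ sequentialMs, parallelMs, search, docs });
  return { sequentialMs, parallelMs };
}

main();
```

The simulation only illustrates the arithmetic: for n independent subagents of similar duration, sequential spawning costs roughly n times the latency of parallel spawning.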

fork_session

fork_session creates independent branches from a shared analysis baseline. After a coordinator has completed an initial analysis (reading a codebase, understanding a problem), it can fork the session to explore divergent approaches.

Example: after analysing a codebase, the coordinator forks to compare two testing strategies. Each fork operates independently after the branching point — they do not see each other's results, and changes in one fork do not affect the other.

fork_session is not the same as --resume. Resume continues a specific named session. Fork creates a new independent branch. The exam tests this distinction. Use fork when you need divergent exploration from a shared starting point. Use resume when you want to continue the same line of investigation.
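A toy simulation (not the SDK API) of the isolation property: each fork receives an independent copy of the shared baseline, so branches never see each other's changes after the fork point:

```typescript
// Toy model of fork_session semantics: forks are independent deep copies
// of a shared baseline, so mutating one branch never affects the other.
interface SessionState {
  analysis: string;
  notes: string[];
}

function forkSession(baseline: SessionState): SessionState {
  // Deep copy so the branches are fully independent after the fork point.
  return JSON.parse(JSON.stringify(baseline)) as SessionState;
}

const baseline: SessionState = {
  analysis: "Codebase uses Express with no test suite",
  notes: [],
};

const forkA = forkSession(baseline);
const forkB = forkSession(baseline);

forkA.notes.push("Strategy A: unit tests with Vitest");
forkB.notes.push("Strategy B: integration tests with Supertest");

// Each fork sees only its own changes after the branching point.
console.log(forkA.notes); // ["Strategy A: unit tests with Vitest"]
console.log(forkB.notes); // ["Strategy B: integration tests with Supertest"]
```

Contrast this with resume, which would continue mutating one shared line of investigation rather than branching it.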

Practical Example: Attribution Failure

A multi-agent research system has three agents: web search, document analysis, and synthesis. The web search agent returns well-sourced results with URLs and titles. The document analysis agent returns detailed analysis with page references.

The coordinator passes the content from both agents to the synthesis agent but strips the metadata — it sends the claims and analysis text without source URLs, document names, or page numbers. The synthesis agent produces an excellent summary with no source attribution.

The fix is not to modify the synthesis agent's prompt (it cannot cite sources it does not have). The fix is to require the coordinator to pass structured metadata alongside content, preserving the source URL, document name, and page number for every finding.
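The failure and the fix can be shown side by side in a short sketch (field and variable names are illustrative):

```typescript
// The attribution bug and its fix, side by side.
interface Finding {
  claim: string;
  source_url: string;
  document_name: string;
  page_number: number;
}

const findings: Finding[] = [
  {
    claim: "Solar panel efficiency has increased 25% in the last decade",
    source_url: "https://example.com/solar-report",
    document_name: "Annual Solar Industry Report 2024",
    page_number: 14,
  },
];

// Broken: the coordinator strips metadata and passes claim text only.
const brokenContext = findings.map((f) => f.claim).join("\n");

// Fixed: the coordinator passes the complete structured findings.
const fixedContext = JSON.stringify({ findings }, null, 2);

console.log(brokenContext.includes("example.com")); // false: nothing to cite
console.log(fixedContext.includes("example.com"));  // true: citation possible
```

Both contexts carry the same claim text; only the fixed one gives the synthesis agent anything it can cite.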

Exam Traps

EXAM TRAP

Assuming subagents automatically have access to the coordinator's conversation history or other subagents' outputs

Subagents have isolated context. Every piece of information they need must be explicitly included in their prompt by the coordinator. There is no automatic context inheritance.

EXAM TRAP

Blaming the synthesis agent for missing citations when the real issue is context passing without metadata

The synthesis agent can only cite sources it has been given. If the coordinator passes content without source URLs and document names, the synthesis agent literally cannot produce citations.

EXAM TRAP

Proposing sequential subagent invocation for tasks that can run independently

Sequential invocation introduces unnecessary latency. Independent tasks should be spawned in parallel using multiple Task tool calls in a single coordinator response.

EXAM TRAP

Confusing fork_session with --resume

fork_session creates independent branches for divergent exploration. --resume continues a specific named session. They serve different purposes: fork for comparing approaches, resume for continuing work.

Practice Scenario

A synthesis agent produces a report where several claims have no source attribution. The web search subagent correctly returns results with URLs, titles, and snippets. The document analysis subagent correctly returns analysis with page references. Both subagents are verified to be working properly. What is the most likely root cause?

Build Exercise

Implement Context Passing with Structured Metadata

Intermediate
50 minutes

What you'll learn

  • Why the coordinator's allowedTools must include Task to spawn subagents
  • How to design structured metadata that separates content from source attribution
  • Why context passing failures cause attribution errors in downstream agents
  • How to spawn independent subagents in parallel for reduced latency
  • The difference between fork_session and parallel Task invocation
  1. Create a coordinator agent with Task in its allowedTools

    Why: Task is the hard gate for subagent spawning. Without it in allowedTools, the coordinator cannot invoke any subagent. The exam tests this as a binary requirement — it is not optional or configurable at runtime.

    You should see: A coordinator agent definition with allowedTools explicitly including Task alongside any other tools the coordinator needs directly.

  2. Define two subagents: a web search agent that returns results with source URLs and titles, and a document analysis agent that returns analysis with page references

    Why: Each subagent needs scoped tool access matching its role. The exam tests whether you define subagents with proper AgentDefinition fields: description, system prompt, and tool restrictions.

    You should see: Two AgentDefinition objects, each with a description, system prompt, and restricted tool set. The web search agent has search tools only; the document analysis agent has file reading tools only.

  3. Design a structured output format that separates content from metadata: each finding includes claim, source_url, document_name, page_number, and confidence

    Why: The exam specifically tests the attribution failure pattern: when a synthesis agent produces unsourced claims, the root cause is that the coordinator passed content without structured metadata. Separating content from metadata is the fix.

    You should see: A TypeScript interface or JSON schema defining the Finding type with both content fields (claim, analysis) and metadata fields (source_url, document_name, page_number, confidence, retrieved_by).

  4. Pass complete structured results from both subagents to a synthesis subagent, preserving all metadata

    Why: This is the critical step the exam targets. Stripping metadata before passing to the synthesis agent is the root cause of attribution failures. The coordinator must pass the full structured output, not just the claim text.

    You should see: The coordinator passes the complete findings array (with all metadata intact) to the synthesis agent prompt. No metadata fields are stripped or summarised away.

  5. Verify that the synthesis agent can attribute every claim in its output to a specific source with URL and page number

    Why: This verification step confirms the context passing worked. If any claim lacks attribution, trace back to whether the metadata was actually passed — do not blame the synthesis agent prompt.

    You should see: A synthesis report where every factual claim includes a citation with source URL and page number. No orphaned claims without attribution.

  6. Refactor the coordinator to spawn both research subagents in parallel using multiple Task tool calls in a single response

    Why: The exam tests latency awareness. Sequential spawning of independent subagents wastes time. Parallel spawning via multiple Task calls in a single coordinator response is the correct pattern for independent tasks.

    You should see: Both the web search and document analysis subagents invoked simultaneously via parallel Task calls, with the coordinator waiting for both to complete before proceeding to synthesis.
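The verification in step 5 can be sketched as a small check (an illustrative helper, assuming the Finding shape from earlier with optional metadata fields): any finding whose metadata was stripped surfaces as an orphan to trace back to context passing:

```typescript
// Sketch of the step-5 verification: confirm every finding that reached
// the synthesis prompt can be attributed. Helper names are illustrative.
interface Finding {
  claim: string;
  source_url?: string;
  document_name?: string;
  page_number?: number;
}

// A finding is citable only if its source metadata survived context passing.
function unattributed(findings: Finding[]): Finding[] {
  return findings.filter(
    (f) => !f.source_url || !f.document_name || f.page_number === undefined
  );
}

const passed: Finding[] = [
  {
    claim: "Claim A",
    source_url: "https://example.com/a",
    document_name: "Report A",
    page_number: 3,
  },
  { claim: "Claim B" }, // metadata stripped by the coordinator
];

const orphans = unattributed(passed);
console.log(orphans.map((f) => f.claim)); // ["Claim B"]
```

An empty orphan list is the success condition; a non-empty one points at the coordinator's context passing, not at the synthesis agent's prompt.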

Sources