What You Need to Know
The number of tools you give an agent directly affects how reliably it selects the right one. This is not a minor implementation detail — it is a core architectural decision that determines whether your multi-agent system works in production.
The Tool Overload Problem
Giving a single agent 18 tools degrades selection reliability. The model faces increased decision complexity with every additional tool, and error rates climb as the toolkit grows. The optimal range is 4-5 tools per agent, scoped to that agent's specific role.
This is not just about quantity — it is about relevance. A synthesis agent should NOT have web search tools. A web search agent should NOT have document analysis tools. When agents have tools outside their specialisation, they tend to misuse them. A synthesis agent with access to web_search might decide to run its own searches instead of using the search results already provided to it, duplicating work and wasting context.
The principle: each agent gets only the tools it needs for its defined role. Nothing more.
The tool_choice Configuration
The tool_choice parameter controls how the model interacts with available tools. There are three settings, and each serves a distinct purpose.
"auto" (default)
The model decides whether to call a tool or return text. Use this for general operation where the model needs flexibility to respond conversationally when no tool call is appropriate.
```json
{
  "tool_choice": { "type": "auto" }
}
```
"any"
The model MUST call a tool but chooses which one. Use this when you need guaranteed structured output from one of multiple schemas — the model will always produce a tool call, never plain text.
```json
{
  "tool_choice": { "type": "any" }
}
```
This is particularly valuable in extraction pipelines. If you have multiple extraction schemas (one for invoices, one for receipts, one for contracts) and the document type is unknown, "any" guarantees the model picks one and produces structured output rather than returning a conversational response.
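To make the extraction-pipeline case concrete, here is a minimal sketch of a request payload using `"any"` with two competing extraction schemas. The model id, tool names, and schemas are illustrative assumptions, not a fixed API surface; the payload shape follows the tool_choice format shown above.

```python
# Sketch: a request payload where tool_choice "any" forces the model to pick
# one of several extraction schemas. Tool names and schemas are hypothetical.
request = {
    "model": "claude-sonnet-4-5",  # placeholder model id
    "max_tokens": 1024,
    "tool_choice": {"type": "any"},  # guarantees a tool call, never plain text
    "tools": [
        {
            "name": "extract_invoice",
            "description": "Extract structured fields from an invoice.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "vendor": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["vendor", "total"],
            },
        },
        {
            "name": "extract_receipt",
            "description": "Extract structured fields from a receipt.",
            "input_schema": {
                "type": "object",
                "properties": {
                    "merchant": {"type": "string"},
                    "amount": {"type": "number"},
                },
                "required": ["merchant", "amount"],
            },
        },
    ],
    "messages": [
        {"role": "user", "content": "Extract structured data from this document: ..."}
    ],
}

print(request["tool_choice"])
```

Because `tool_choice` is `"any"`, the response is guaranteed to be a call to `extract_invoice` or `extract_receipt`, so downstream code can rely on structured output regardless of document type.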
Forced selection
The model MUST call a specific named tool. Use this to enforce mandatory first steps — the model cannot skip or reorder the required operation.
```json
{
  "tool_choice": { "type": "tool", "name": "extract_metadata" }
}
```
This is the tool for enforcing workflow ordering. If metadata extraction must happen before any enrichment tools run, forced selection guarantees it. The model cannot decide to skip extract_metadata and jump straight to enrichment. After the forced call completes, subsequent turns can use "auto" for the remaining steps.
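The forced-then-auto sequence can be sketched as a small helper that selects the `tool_choice` value per turn. This is an assumed orchestration pattern, not a built-in API feature; only the `tool_choice` payload shapes come from the examples above.

```python
# Sketch: force extract_metadata on the first turn, then relax to "auto"
# so the model can choose freely among the remaining analysis tools.
def tool_choice_for_turn(turn: int) -> dict:
    """Return the tool_choice payload for a given conversation turn."""
    if turn == 0:
        # Mandatory first step: the model cannot skip metadata extraction.
        return {"type": "tool", "name": "extract_metadata"}
    # Subsequent turns: the model decides whether and which tool to call.
    return {"type": "auto"}

print(tool_choice_for_turn(0))  # {'type': 'tool', 'name': 'extract_metadata'}
print(tool_choice_for_turn(1))  # {'type': 'auto'}
```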
Scoped Cross-Role Tools
Sometimes an agent needs occasional access to a capability that belongs to another role. The naive approach is to route every such request through the coordinator. The problem: this adds 2-3 round trips per request and can increase latency by 40% or more.
The solution is a scoped cross-role tool: a constrained version of the capability, given directly to the agent that needs it.
Consider a synthesis agent that frequently needs to verify simple facts during report generation. The naive design routes all verification requests back to the coordinator, which delegates to the search agent, waits for results, and returns them. For 85% of verifications — simple lookups that take milliseconds — this round-trip overhead is wasteful.
The fix: give the synthesis agent a scoped verify_fact tool that handles simple lookups directly. Complex verifications (requiring multiple sources, cross-referencing, or nuanced judgement) still route through the coordinator. The 85% simple case is handled locally; the 15% complex case uses the full pipeline.
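The routing split can be sketched as follows. The `sources_needed` heuristic and both handlers are hypothetical placeholders; the point is the shape of the decision, not the lookup implementation.

```python
# Sketch of the scoped verify_fact routing: simple single-source lookups are
# handled locally; multi-source verifications escalate to the coordinator.
def verify_fact(claim: str, sources_needed: int = 1) -> str:
    if sources_needed <= 1:
        return local_lookup(claim)           # ~85% case: fast, no round trip
    return escalate_to_coordinator(claim)    # ~15% case: full pipeline

def local_lookup(claim: str) -> str:
    # Placeholder for a fast single-source check (e.g. a cached knowledge base).
    return f"verified-locally: {claim}"

def escalate_to_coordinator(claim: str) -> str:
    # Placeholder for the coordinator -> search agent round trip.
    return f"escalated: {claim}"

print(verify_fact("Paris is the capital of France"))
print(verify_fact("Q3 revenue grew 12% year over year", sources_needed=3))
```

The design choice to surface: escalation stays available, so scoping the tool narrows the common path without removing the full verification pipeline.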
This pattern is tested directly in the exam (Q9). Know it cold.
Replacing Generic Tools with Constrained Alternatives
Instead of giving a subagent fetch_url (which can fetch anything from anywhere), give it load_document that validates document URLs only. The constrained tool:
- Prevents misuse (the agent cannot fetch arbitrary URLs)
- Makes the tool's purpose clearer (the description is specific, not generic)
- Reduces the risk of unintended side effects (no fetching of non-document resources)
This is the principle of least privilege applied to tool design. Each tool should do exactly what the agent needs and nothing more.
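A minimal sketch of the constrained tool, assuming an allowlist of document extensions and trusted domains (both illustrative; a real deployment would supply its own policy):

```python
# Sketch: load_document as a constrained replacement for a generic fetch_url.
# The extension and domain allowlists below are hypothetical assumptions.
from urllib.parse import urlparse

ALLOWED_EXTENSIONS = (".pdf", ".docx", ".txt", ".md")
TRUSTED_DOMAINS = {"docs.example.com", "reports.example.com"}  # hypothetical

def load_document(url: str) -> str:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"rejected: only https URLs are allowed ({url})")
    if parsed.netloc not in TRUSTED_DOMAINS:
        raise ValueError(f"rejected: untrusted domain {parsed.netloc!r}")
    if not parsed.path.lower().endswith(ALLOWED_EXTENSIONS):
        raise ValueError(f"rejected: not a document URL ({parsed.path})")
    # Placeholder for the actual fetch; a real tool would download here.
    return f"loaded: {url}"

print(load_document("https://docs.example.com/q3-report.pdf"))
```

The validation happens before any fetch occurs, so the agent physically cannot retrieve arbitrary resources even if it tries to call the tool with an out-of-scope URL.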
Role-Specific Tool Scoping in Practice
Here is how tool distribution looks in a well-designed multi-agent research system:
| Agent | Tools (4-5 each) |
|---|---|
| Web Search | search_web, fetch_page, extract_links, save_snippet |
| Document Analysis | extract_metadata, extract_data_points, summarize_content, verify_claim |
| Synthesis | compile_report, verify_fact (scoped), format_citation, assess_coverage |
| Coordinator | Task (to spawn subagents), review_output, request_revision |
Each agent has exactly the tools it needs. The synthesis agent has a scoped verify_fact for simple lookups. The coordinator controls the workflow without having access to domain-specific tools.
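The table above can be encoded as a config and checked mechanically. This is a sketch of one way to verify the distribution; the per-agent tool count bound and the scoped-tool allowlist are assumptions drawn from the guidelines in this section.

```python
# Sketch: the tool-distribution table as a config dict, plus a check that no
# tool appears in more than one role unless explicitly marked scoped cross-role.
AGENT_TOOLS = {
    "web_search": ["search_web", "fetch_page", "extract_links", "save_snippet"],
    "document_analysis": ["extract_metadata", "extract_data_points",
                          "summarize_content", "verify_claim"],
    "synthesis": ["compile_report", "verify_fact", "format_citation",
                  "assess_coverage"],
    "coordinator": ["Task", "review_output", "request_revision"],
}
SCOPED_CROSS_ROLE = {"verify_fact"}  # may mirror another role's capability

def check_distribution(agent_tools: dict) -> None:
    seen: dict = {}
    for agent, tools in agent_tools.items():
        # Keep toolkits small; assumed bound based on the 4-5 tool guideline.
        assert len(tools) <= 5, f"{agent}: too many tools ({len(tools)})"
        for tool in tools:
            if tool in seen and tool not in SCOPED_CROSS_ROLE:
                raise ValueError(
                    f"{tool} assigned to both {seen[tool]} and {agent}")
            seen[tool] = agent

check_distribution(AGENT_TOOLS)
print("distribution ok")
```

Running this check in CI-style tests catches accidental tool overlap whenever an agent's toolkit is edited.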
Key Concept
The optimal range is 4-5 tools per agent, scoped to its role. For high-frequency simple operations, add a scoped cross-role tool directly to the agent that needs it — this avoids coordinator round-trip latency for the common case.
Exam Traps
Routing all simple verification requests through the coordinator when 85% are simple lookups
Coordinator routing adds 2-3 extra round trips per request. A scoped verify_fact tool on the synthesis agent handles the 85% simple case directly, cutting latency by up to 40%.
Using tool_choice 'auto' when structured output is required
With 'auto', the model may return conversational text instead of calling a tool. Use 'any' to guarantee a tool call, or forced selection to guarantee a specific tool call.
Giving an agent 18 tools and expecting reliable selection
Tool selection reliability degrades as the number of tools increases. The optimal range is 4-5 tools per agent. More tools means more decision complexity and more selection errors.
Giving a subagent a generic fetch_url tool when a constrained load_document would suffice
Generic tools enable misuse. Constrained alternatives (load_document that validates document URLs only) enforce the principle of least privilege and make the tool's purpose clearer.
Practice Scenario
A synthesis agent frequently returns control to the coordinator for simple fact verification, adding 2-3 round trips per task and 40% latency. Analysis shows 85% of verifications are simple lookups. What is the most effective solution?
Build Exercise
Configure Tool Distribution Across a Multi-Agent System
What you'll learn
- Scope tools to agent roles using the 4-5 tools per agent guideline
- Implement scoped cross-role tools to avoid coordinator round-trip latency
- Configure tool_choice modes (auto, any, forced) for different workflow requirements
- Apply least-privilege tool design by replacing generic tools with constrained alternatives
- Verify that tool distribution prevents cross-role misuse in multi-agent systems
Steps
- Design three agent roles (web search, document analysis, synthesis) and assign 4-5 tools to each, scoped to its role
Why: Tool overload degrades selection reliability. The exam tests the principle that each agent should have 4-5 tools scoped to its specific role. Giving a single agent 18 tools is a known anti-pattern that causes misrouting.
You should see: A configuration object or table listing three agents, each with exactly 4-5 tools. No tool appears in more than one agent role (except scoped cross-role tools added later). Tool names clearly indicate their purpose and scope.
- Add a scoped verify_fact tool to the synthesis agent that handles simple lookups directly
Why: Routing every fact verification through the coordinator adds 2-3 round trips and up to 40% latency. The exam tests the scoped cross-role tool pattern — give the agent a constrained version of a capability for the 85% simple case, routing only complex cases to the coordinator.
You should see: A verify_fact tool added to the synthesis agent toolset with a description that explicitly limits it to simple single-source lookups and states that complex multi-source verifications should be escalated to the coordinator.
- Configure tool_choice forced selection on the document analysis agent to ensure extract_metadata runs as the mandatory first step
Why: Forced selection enforces workflow ordering. The exam tests your knowledge of all three tool_choice modes: auto lets the model choose freely, any guarantees a tool call, and forced selection guarantees a specific tool call. This prevents the model from skipping mandatory steps.
You should see: A document analysis agent configuration where the first API call uses tool_choice with type: tool and name: extract_metadata, and subsequent calls switch to tool_choice: auto for the remaining analysis steps.
- Replace a generic fetch_url tool with a constrained load_document that validates document URLs only
Why: This applies the principle of least privilege to tool design. A generic fetch_url tool can fetch anything from anywhere, enabling misuse. A constrained load_document that validates URLs prevents the agent from fetching arbitrary resources. The exam tests this pattern directly.
You should see: A load_document tool definition that includes URL validation logic (checking for document file extensions or trusted domains) and rejects non-document URLs with a clear error message.
- Test with a query that requires all three agents and verify that no cross-role tool misuse occurs
Why: End-to-end testing validates that your tool distribution works in practice. Cross-role misuse — such as a synthesis agent running its own web searches instead of using provided results — is a common failure the exam expects you to prevent through proper scoping.
You should see: A test run log showing: the web search agent using only its tools, the document analysis agent starting with extract_metadata (forced), and the synthesis agent using compile_report plus verify_fact for simple checks. No agent calls a tool outside its assigned set.