Agentic Loop
A control flow where Claude repeatedly receives input, decides on an action (often a tool call), observes the result, and continues until a task is complete. The loop runs until the model returns a stop_reason of end_turn rather than tool_use.
Exam context: Questions test whether you understand the loop termination conditions and how stop_reason values determine whether the loop continues or exits.
See also: 1.1 Agentic Loops
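The loop can be sketched as follows. `fake_model` is a hypothetical stand-in for a real Messages API call (so the control flow runs without network access), but the response shapes mirror the API's `stop_reason` and content-block structure:

```python
def fake_model(messages):
    """Stand-in for client.messages.create(): asks for a tool once, then finishes."""
    seen_tool_result = any(
        block.get("type") == "tool_result"
        for m in messages if isinstance(m.get("content"), list)
        for block in m["content"]
    )
    if not seen_tool_result:
        return {"stop_reason": "tool_use",
                "content": [{"type": "tool_use", "id": "t1",
                             "name": "get_time", "input": {}}]}
    return {"stop_reason": "end_turn",
            "content": [{"type": "text", "text": "It is noon."}]}

def run_tool(name, args):
    return "12:00" if name == "get_time" else "unknown tool"

def agentic_loop(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        response = fake_model(messages)
        if response["stop_reason"] == "end_turn":   # task complete: exit the loop
            return response["content"][0]["text"]
        # stop_reason is tool_use: execute the tool and feed the result back
        messages.append({"role": "assistant", "content": response["content"]})
        tool_call = next(b for b in response["content"] if b["type"] == "tool_use")
        messages.append({"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_call["id"],
            "content": run_tool(tool_call["name"], tool_call["input"]),
        }]})
```

With a real client, the `while True` body is identical; only `fake_model` is replaced by the API call.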
Orchestration Pattern
A design approach for coordinating one or more Claude calls to accomplish a task. Common patterns include prompt chaining (sequential calls), routing (directing to a specialised handler), parallelisation (concurrent calls), and the orchestrator-workers pattern (a central agent delegating to sub-agents).
Exam context: You must match each orchestration pattern to the correct use case. Know when prompt chaining is preferable to a single monolithic prompt, and when parallelisation provides a genuine benefit.
See also: 1.2 Orchestration Patterns
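Prompt chaining, the simplest of these patterns, can be sketched as sequential calls where each output feeds the next prompt. `call_model` below is an invented stub standing in for one Claude call:

```python
def call_model(prompt):
    # Hypothetical stand-in for a single Claude call; echoes for illustration.
    return f"[model output for: {prompt}]"

def prompt_chain(topic):
    """Prompt chaining: each call's output becomes the next call's input."""
    outline = call_model(f"Write an outline about {topic}.")
    draft = call_model(f"Expand this outline into a draft:\n{outline}")
    return call_model(f"Polish this draft:\n{draft}")
```

Each step gets a narrow, checkable job, which is the usual argument for chaining over one monolithic prompt.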
Guardrail
A safety mechanism that constrains Claude's behaviour within acceptable boundaries. Guardrails can be implemented as input validation (checking user messages before sending to Claude), output validation (checking Claude's responses before returning to users), or tool-level restrictions (limiting which tools are available or what parameters they accept).
Exam context: The exam distinguishes between input guardrails, output guardrails, and tool guardrails. Know where each type sits in the agentic loop and what it protects against.
See also: 1.3 Guardrails & Safety
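The placement of input and output guardrails around a model call can be sketched as below; the deny-list terms and the stubbed `model` are illustrative assumptions, not a real policy:

```python
BLOCKED_TERMS = {"drop table", "rm -rf"}      # illustrative deny-list only

def input_guardrail(user_message: str) -> None:
    """Input guardrail: runs before the message ever reaches the model."""
    if any(term in user_message.lower() for term in BLOCKED_TERMS):
        raise ValueError("input rejected by guardrail")

def output_guardrail(response: str) -> str:
    """Output guardrail: runs before the response reaches the user."""
    if "ssn" in response.lower():
        return "[response withheld: possible sensitive data]"
    return response

def guarded_call(user_message: str, model=lambda p: f"echo: {p}") -> str:
    input_guardrail(user_message)      # 1) validate input
    raw = model(user_message)          # 2) call the model (stubbed here)
    return output_guardrail(raw)       # 3) validate output
```

Tool-level guardrails are not shown; they would sit inside the tool-execution step rather than around the call.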
Claude Agent SDK
Anthropic's official framework, available for Python and TypeScript, for building agentic applications. It runs the agentic loop internally and provides built-in tool management, lifecycle hooks, subagent delegation, and session management, so developers focus on configuring agent behaviour rather than writing loop logic.
Exam context: Questions may ask about the SDK's built-in features (hooks, subagents, session management) versus what you must implement yourself. Know the difference between building on the SDK and writing a raw API agentic loop.
See also: 1.4 Claude Agent SDK
Multi-Agent System
An architecture where multiple specialised agents collaborate to complete a task. Each agent has its own system prompt, tools, and responsibilities. Communication between agents happens through handoffs (transferring control) or message-passing (sharing data). The two main topologies are hierarchical (manager delegates to workers) and peer-to-peer (agents communicate directly).
Exam context: Know the trade-offs between single-agent and multi-agent designs. The exam tests whether you can identify when a multi-agent system is justified versus when a simpler pattern suffices.
See also: 1.5 Multi-Agent Systems
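A hierarchical topology can be sketched with plain functions standing in for agents; each function represents a Claude call with its own system prompt and tools, and the names are invented for illustration:

```python
# Orchestrator-workers sketch: a manager decomposes a task and delegates.
def research_agent(task):
    # Worker 1: would have search tools and a research-focused system prompt.
    return f"research notes on {task}"

def writing_agent(task, notes):
    # Worker 2: would have a writing-focused system prompt, no tools.
    return f"report on {task} using ({notes})"

def manager_agent(task):
    """The manager delegates sub-tasks and assembles the final result."""
    notes = research_agent(task)          # delegation 1: gather information
    return writing_agent(task, notes)     # delegation 2: produce the output
```

In a real system each worker call is itself an agentic loop; the manager only sees its final output.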
Human-in-the-Loop
A design pattern where certain agent actions require explicit human approval before execution. This is typically applied to high-impact operations such as sending emails, modifying databases, or making purchases. Implementation involves pausing the agentic loop, presenting the proposed action to a human, and resuming only upon approval.
Exam context: Questions focus on where to insert approval gates in an agentic loop and which types of actions warrant human review. Know the difference between human-in-the-loop and fully autonomous execution.
See also: 1.6 Human-in-the-Loop
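An approval gate can be sketched as a check between the model's proposed action and its execution; the tool names and policy set here are illustrative assumptions:

```python
HIGH_IMPACT_TOOLS = {"send_email", "delete_record"}   # illustrative policy

def requires_approval(tool_name: str) -> bool:
    return tool_name in HIGH_IMPACT_TOOLS

def execute_with_gate(tool_name, args, run_tool, ask_human):
    """Pause before high-impact actions; execute only on human approval."""
    if requires_approval(tool_name):
        if not ask_human(f"Allow {tool_name} with {args}?"):
            return "action rejected by reviewer"
    return run_tool(tool_name, args)      # low-impact or approved: proceed
```

`ask_human` might be a CLI prompt, a ticket, or a UI dialog; the loop resumes with either the tool's result or the rejection message as the tool result.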
Error Recovery
Strategies for handling failures within an agentic loop without crashing the entire workflow. Common approaches include retry with exponential backoff, fallback to a simpler strategy, graceful degradation (returning partial results), and escalation to a human operator.
Exam context: The exam tests whether you can design resilient agentic systems. Know when to retry versus when to fail gracefully, and how to prevent infinite retry loops.
See also: 1.7 Error Recovery & Resilience
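Retry with exponential backoff can be sketched as below; the cap on attempts is what prevents an infinite retry loop, and the final re-raise is the escalation path:

```python
import time

def with_retries(action, max_attempts=3, base_delay=0.01):
    """Retry a failing action with exponential backoff (1x, 2x, 4x, ...)."""
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:
            if attempt == max_attempts - 1:
                raise                    # out of attempts: escalate, don't loop
            time.sleep(base_delay * 2 ** attempt)
```

A fallback or graceful-degradation strategy would catch the final exception at the call site and return a partial result instead.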
stop_reason
A field in the Claude API response that indicates why the model stopped generating. The two key values for agentic loops are end_turn (the model finished its response naturally) and tool_use (the model wants to call a tool); other values include max_tokens (the response hit the output-token limit) and stop_sequence (a custom stop sequence was matched). In an agentic loop, tool_use means the loop should continue; end_turn means the task is complete.
Exam context: This is a frequently tested concept. You must know all stop_reason values and what each one signals to the orchestration layer.
See also: 1.1 Agentic Loops
tool_use
A stop_reason value indicating that Claude wants to invoke a tool. The response will contain a tool_use content block specifying the tool name and input parameters. The orchestrator must execute the tool and return the result as a tool_result content block in the next user message before making the next API call.
Exam context: Understand the full tool-use flow: Claude returns tool_use, the orchestrator executes, and sends back tool_result. Know what happens if the tool result is malformed or missing.
See also: 1.1 Agentic Loops
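The orchestrator's half of the flow can be sketched as below. The block shapes mirror the Messages API's tool_result structure (including the is_error flag for failed executions); the tool registry and tool names are invented for illustration:

```python
def tool_result_block(tool_use_id: str, output, error: bool = False) -> dict:
    """Build the tool_result content block the orchestrator sends back."""
    block = {"type": "tool_result", "tool_use_id": tool_use_id,
             "content": str(output)}
    if error:
        block["is_error"] = True   # tells the model the execution failed
    return block

def handle_tool_call(tool_call: dict, registry: dict) -> dict:
    """Execute a tool_use block; report failures instead of crashing the loop."""
    try:
        fn = registry[tool_call["name"]]
        return tool_result_block(tool_call["id"], fn(**tool_call["input"]))
    except Exception as exc:
        return tool_result_block(tool_call["id"], f"tool failed: {exc}",
                                 error=True)
```

Returning an error-flagged result lets the model retry or recover; omitting the tool_result entirely would make the next API call invalid.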
end_turn
A stop_reason value indicating that Claude has finished its response and does not need to call any more tools. In an agentic loop, this is the signal to exit the loop and return the final response to the user.
Exam context: The exam may present scenarios where you must determine whether to continue looping or terminate. end_turn is always the termination signal.
See also: 1.1 Agentic Loops
Fan-Out/Fan-In
A parallelisation pattern where a task is split into multiple independent sub-tasks (fan-out), each processed concurrently, and the results are aggregated back together (fan-in). This is useful when multiple pieces of information can be gathered or processed independently.
Exam context: Know when fan-out/fan-in is appropriate versus sequential processing. The exam tests whether you can identify tasks that are genuinely independent and can benefit from parallel execution.
See also: 1.2 Orchestration Patterns
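The pattern can be sketched with a thread pool, where `worker` stands in for an independent Claude call per sub-task and the join step is the aggregation:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_fan_in(task_inputs, worker):
    """Fan-out: run independent sub-tasks concurrently; fan-in: aggregate."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(worker, task_inputs))   # fan-out (order kept)
    return "\n".join(results)                           # fan-in: aggregate
```

This only pays off when the sub-tasks are genuinely independent; if one sub-task needs another's output, the pattern degenerates into sequential processing.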
Routing Pattern
An orchestration approach where an initial classification step determines which specialised handler should process a request. The router examines the input, categorises it, and directs it to the appropriate downstream agent or prompt. This avoids overloading a single prompt with too many responsibilities.
Exam context: Questions test whether you can design effective routing logic. Know the difference between LLM-based routing (using Claude to classify) and rule-based routing (using deterministic logic).
See also: 1.2 Orchestration Patterns
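Rule-based routing can be sketched as a deterministic keyword lookup; the routes and handler names below are invented for illustration. An LLM-based router would replace the keyword scan with a classification call to Claude:

```python
# Illustrative routing table: keyword -> specialised handler name.
ROUTES = {
    "refund": "billing_agent",
    "password": "account_agent",
}

def route(request: str, default="general_agent") -> str:
    """Rule-based router: classify the request, pick the handler."""
    text = request.lower()
    for keyword, handler in ROUTES.items():
        if keyword in text:
            return handler
    return default      # no rule matched: fall back to the generalist
```

Rule-based routing is cheap and deterministic but brittle; LLM-based routing handles paraphrase at the cost of an extra model call.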