What You Need to Know
Agent SDK hooks are the mechanism for injecting deterministic behaviour into an otherwise probabilistic system. They sit at the boundary between the model's decisions and the real world, intercepting tool calls and results to enforce business rules and normalise data. This task statement connects directly to the enforcement spectrum from 1.4 — hooks are how you implement programmatic enforcement in practice.
Two Types of Hooks
The Agent SDK provides hooks at two points in the tool execution lifecycle:
PostToolUse hooks run after a tool executes but before the model processes the result. They intercept tool results and transform them before the model sees them. The model receives clean, normalised data regardless of which tool produced it.
Tool call interception hooks (PreToolUse hooks in the SDK) run before a tool executes. They intercept the outgoing tool call and can block it, modify it, or redirect it to an alternative workflow. The tool never runs if the hook decides to block it.
Key Concept
PostToolUse hooks transform data after execution. Tool call interception hooks enforce policy before execution. Know which direction each hook operates in — the exam tests this distinction.
PostToolUse Hooks: Data Normalisation
Different MCP tools return data in different formats. A customer database might return Unix timestamps (1710489600). An order management system might return ISO 8601 dates ("2024-03-15T12:00:00Z"). A status API might return numeric codes (200, 404, 500) while another returns strings ("active", "cancelled", "pending").
Without normalisation, the model must interpret these heterogeneous formats on every iteration. This introduces inconsistency — the model might correctly parse a Unix timestamp one time and misinterpret it the next.
A PostToolUse hook solves this by normalising all formats before the model processes them:
- Unix timestamps → ISO 8601 dates
- Numeric status codes → human-readable strings
- Currency values → consistent decimal format with currency code
- Date strings in various regional formats → a single standard format
The model receives clean, consistent data every time, regardless of which tool or backend system produced it.
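The normalisation step can be sketched as a plain Python function, independent of any particular hook-registration API. This is a minimal sketch: the field names (`created_at`, `status`) and the status-code mapping are hypothetical, and in a real agent this function would be registered as a PostToolUse hook through the SDK's configuration.

```python
from datetime import datetime, timezone

# Hypothetical mapping; real codes depend on the backend system.
STATUS_NAMES = {200: "active", 404: "cancelled", 500: "pending"}

def normalise_result(result: dict) -> dict:
    """Normalise one tool result before the model sees it."""
    out = dict(result)

    ts = out.get("created_at")
    if isinstance(ts, int):
        # Unix timestamp -> ISO 8601 with a trailing Z
        iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
        out["created_at"] = iso.replace("+00:00", "Z")

    status = out.get("status")
    if isinstance(status, int):
        # Numeric status code -> human-readable string
        out["status"] = STATUS_NAMES.get(status, f"unknown({status})")

    return out
```

Because the hook runs on every tool result, the model never has to guess whether an integer is an epoch timestamp or a status code; the guesswork happens once, deterministically, in the hook.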
Tool Call Interception: Policy Enforcement
Tool call interception hooks are the implementation mechanism for the prerequisite gates described in 1.4. They intercept outgoing tool calls before execution and apply business rules:
Use case: Refund threshold enforcement. A hook intercepts all calls to process_refund. If the refund amount exceeds $500, the hook blocks the call and redirects to a human escalation workflow. The refund tool never executes — the hook prevents it before it can run.
Use case: Compliance prerequisite gates. A hook intercepts calls to transfer_funds. If the required anti-money laundering (AML) check has not been completed for this session, the hook blocks the call and returns an error message directing the agent to complete the AML check first.
Use case: Manager approval workflow. A hook intercepts calls to approve_discount for discounts above 20%. The hook pauses execution and routes the request to a manager approval queue. Only after manager approval does the tool execute.
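The refund-threshold use case can be sketched as a pre-execution decision function. This is an illustration, not the SDK's actual hook signature: the return shape (`allow`, `reason`, `redirect`) and the `human_escalation_queue` workflow name are assumptions, and a real implementation would adapt this to the SDK's PreToolUse hook contract.

```python
REFUND_LIMIT = 500  # dollars; threshold taken from the use case above

def intercept_tool_call(tool_name: str, args: dict) -> dict:
    """Pre-execution hook: decide whether a tool call may run at all."""
    if tool_name == "process_refund" and args.get("amount", 0) > REFUND_LIMIT:
        # Block the call before it executes and redirect to a human.
        return {
            "allow": False,
            "reason": f"Refunds above ${REFUND_LIMIT} require human approval",
            "redirect": "human_escalation_queue",  # hypothetical workflow name
        }
    # All other calls proceed unchanged.
    return {"allow": True}
```

The key property is that the decision happens before execution: a blocked `process_refund` call never reaches the refund tool, which is exactly the deterministic guarantee a prompt cannot provide.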
Exam Trap
The exam will present PostToolUse hooks as a solution for blocking policy-violating actions. This is wrong. PostToolUse runs after execution — by the time it fires, the non-compliant action has already occurred. Use tool call interception hooks (pre-execution) to block actions before they happen.
The Decision Framework
This framework is the core mental model for the exam:
| Requirement | Mechanism | Guarantee |
|---|---|---|
| Must be followed 100% of the time | Hooks | Deterministic |
| Preferred but occasional deviation is acceptable | Prompts | Probabilistic |
If the business would lose money from a single failure → use a hook. If the business would face legal risk from a single failure → use a hook. If it is a formatting preference or style guideline → prompt-based guidance is fine.
The exam consistently presents prompt-based solutions as distractors for scenarios requiring deterministic enforcement. The decision is not about whether prompts are "good enough" — it is about whether the consequence of a single failure justifies deterministic guarantees.
Hooks vs Prompts: Side-by-Side Comparison
Scenario: International transfers must pass AML checks.
- Prompt approach: "Always complete AML verification before processing international transfers." Works 95% of the time. The 5% failure rate means some transfers skip AML checks — a regulatory violation.
- Hook approach: Tool call interception hook blocks transfer_funds until aml_check returns a pass. Works 100% of the time. No transfer can execute without AML verification.
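A prerequisite gate like this needs session state: a post-execution step records that aml_check passed, and a pre-execution step blocks transfer_funds until it has. The sketch below is illustrative only; the class name, result shape, and `status == "pass"` convention are assumptions, not SDK definitions.

```python
class AmlGate:
    """Blocks transfer_funds until aml_check has passed in this session."""

    def __init__(self):
        self.aml_passed = False

    def record_result(self, tool_name: str, result: dict) -> None:
        # Post-execution: remember a passing AML check for this session.
        if tool_name == "aml_check" and result.get("status") == "pass":
            self.aml_passed = True

    def intercept(self, tool_name: str, args: dict) -> dict:
        # Pre-execution: refuse transfers until the prerequisite is met.
        if tool_name == "transfer_funds" and not self.aml_passed:
            return {
                "allow": False,
                "reason": "Complete aml_check before calling transfer_funds",
            }
        return {"allow": True}
```

Note that both hook directions cooperate here: the PostToolUse side observes the AML result, and the interception side enforces the gate.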
Scenario: Responses should be formatted in markdown.
- Prompt approach: "Format all responses using markdown with headers and bullet points." Works most of the time. Occasional plain-text responses are not a business risk.
- Hook approach: Unnecessary overhead. Formatting preferences do not require deterministic enforcement.
Scenario: Refunds above $500 require human approval.
- Prompt approach: "For refunds above $500, escalate to a human agent." Works most of the time. A single failure means a large refund processed without approval.
- Hook approach: Intercept process_refund, check the amount, block if above $500 and route to human escalation. Works 100% of the time.
Practical Example: Data Format Chaos
A customer support agent uses three MCP tools:
- get_customer returns dates as Unix timestamps and status as numeric codes.
- lookup_order returns dates as ISO 8601 strings and status as English strings.
- check_shipping returns dates as "DD/MM/YYYY" and status as single-character codes ("S" for shipped, "P" for pending).
Without a PostToolUse hook, the model must interpret three different date formats and three different status representations on every iteration. Sometimes it correctly converts a Unix timestamp; sometimes it confuses the day/month order in "DD/MM/YYYY"; sometimes it misinterprets "P" as "processed" instead of "pending."
With a PostToolUse hook, all tool results are normalised before the model sees them:
- All dates → ISO 8601 ("2024-03-15T12:00:00Z")
- All status codes → human-readable strings ("shipped", "pending", "delivered")
The model always receives consistent data, eliminating interpretation errors entirely.
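For this three-tool scenario, the normalisation hook must detect which format each field is in before converting it. A minimal sketch, assuming the formats described above; the STATUS_MAP entries beyond "S" and "P" are illustrative guesses, and a production hook would need stricter validation:

```python
import re
from datetime import datetime, timezone

# "S" and "P" come from the scenario above; "D" is an illustrative addition.
STATUS_MAP = {"S": "shipped", "P": "pending", "D": "delivered"}

def normalise_date(value):
    """Convert Unix timestamps, DD/MM/YYYY, or ISO 8601 into ISO 8601."""
    if isinstance(value, (int, float)):  # get_customer: epoch seconds
        iso = datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
        return iso.replace("+00:00", "Z")
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", value)
    if m:  # check_shipping: DD/MM/YYYY
        day, month, year = m.groups()
        return f"{year}-{month}-{day}T00:00:00Z"
    return value  # lookup_order: already ISO 8601, pass through

def normalise_status(value):
    """Map single-character codes to full words; pass strings through."""
    return STATUS_MAP.get(value, value)
```

The hook, not the model, owns the knowledge that "15/03/2024" is day-first and that "P" means pending, so that knowledge is applied identically on every iteration.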
Exam Traps
Using PostToolUse hooks to block policy-violating actions
PostToolUse hooks run after tool execution. By the time the hook fires, the non-compliant action has already been processed. Use tool call interception hooks (pre-execution) to block actions before they happen.
Enhanced prompt instructions as the solution for 100% compliance requirements
Prompts provide probabilistic compliance. If the business requires 100% enforcement (financial operations, regulatory compliance, security checks), only hooks provide deterministic guarantees.
Suggesting model-side data transformation instead of PostToolUse hooks for normalisation
Relying on the model to normalise heterogeneous data formats introduces inconsistency. PostToolUse hooks ensure clean, consistent data reaches the model every time, regardless of which tool produced it.
Confusing the direction of hooks — PostToolUse runs after execution, tool call interception runs before
PostToolUse transforms results after a tool runs. Tool call interception blocks or modifies calls before a tool runs. Using the wrong hook direction means either missing the opportunity to prevent an action or unnecessarily blocking completed work.
Practice Scenario
An agent occasionally processes international transfers without required compliance checks. The compliance team requires 100% enforcement of anti-money laundering (AML) checks before any international transfer is executed. The current system uses prompt instructions that work approximately 95% of the time. What is the correct approach?
Build Exercise
Implement Agent SDK Hooks for Normalisation and Policy Enforcement
What you'll learn
- The distinction between PostToolUse hooks (after execution, data normalisation) and tool call interception (before execution, policy enforcement)
- Why hooks provide deterministic guarantees that prompts cannot match
- How to normalise heterogeneous data formats from multiple MCP tools into a consistent schema
- How to implement threshold-based and prerequisite-based policy enforcement using pre-execution hooks
- The decision framework: hooks for 100% requirements, prompts for preferences
Steps
- Create an agent with three MCP tools that return data in different formats: Tool A returns Unix timestamps and numeric status codes, Tool B returns ISO 8601 dates and string statuses, Tool C returns DD/MM/YYYY dates and single-character status codes
Why: This recreates the data format chaos example from the exam. Without normalisation, the model must interpret three different date formats and three different status representations, leading to inconsistent parsing across iterations.
You should see: Three tool implementations that each return data with distinct date and status formats. Tool A uses epoch seconds and numeric codes, Tool B uses ISO strings and English statuses, Tool C uses DD/MM/YYYY and single characters.
- Implement a PostToolUse hook that intercepts all tool results and normalises dates to ISO 8601 format and status codes to human-readable English strings
Why: PostToolUse hooks run after execution but before the model processes the result. This is the correct hook direction for data normalisation — the exam tests whether you know that PostToolUse transforms data after execution, not before.
You should see: A hook function that detects the format of each field and converts it: Unix timestamps to ISO 8601, DD/MM/YYYY to ISO 8601, numeric status codes to English strings, and single-character codes to full words.
- Verify the model receives consistent data by testing with queries that require results from all three tools
Why: Consistent data eliminates interpretation errors. Without normalisation, the model might confuse day/month order in DD/MM/YYYY or misinterpret status code P as processed instead of pending. Verification proves the hook works across all tool outputs.
You should see: Three tool results that all use ISO 8601 dates and English status strings, regardless of which tool produced them. The model response should reference dates and statuses consistently without confusion.
- Add a tool call interception hook that blocks process_refund when the amount exceeds $500 and redirects to a human escalation workflow
Why: Tool call interception runs before execution — the refund never processes. The exam specifically warns against using PostToolUse for blocking, because by that point the action has already occurred. Pre-execution interception is the only correct hook direction for policy enforcement.
You should see: A pre-execution hook that inspects process_refund calls, checks the amount parameter, and blocks the call with a redirect message if the amount exceeds 500. The refund tool never executes for blocked calls.
- Add a second interception hook that blocks transfer_funds until aml_check has returned a pass result in the current session
Why: This is the AML compliance scenario from the exam. Prompt instructions achieve 95% compliance, but regulatory requirements demand 100%. The hook provides deterministic enforcement that no prompt can match — a single missed AML check can result in legal penalties.
You should see: A pre-execution hook that checks session state for a completed AML check before allowing transfer_funds to execute. Without a prior passing aml_check, the transfer is blocked with a descriptive error.
- Test both hooks by attempting to trigger the blocked operations and verify they are prevented before execution
Why: Testing confirms that the hooks provide deterministic enforcement. The key verification is that blocked tools never execute — the hook prevents the call, not just logs a warning after the fact.
You should see: Both blocked operations return interception messages without the underlying tool executing. After satisfying prerequisites (completing AML check, reducing refund amount), the operations succeed.
Sources
- Claude Agent SDK Overview — Anthropic
- Claude Agent SDK Hooks Documentation — Anthropic
- Building with Claude API (Skilljar) — Anthropic