What You Need to Know
Claude Code provides six built-in tools for working with codebases: Read, Write, Edit, Bash, Grep, and Glob. Each has a specific purpose, and using the wrong tool for a task wastes time, context tokens, or both. The exam deliberately presents scenarios where confusing these tools leads to incorrect answers.
Grep vs Glob: The Core Distinction
This is the single most important distinction in this section. Get it wrong and you will lose marks.
Grep searches file CONTENTS for patterns. Use Grep when you need to find text inside files. Function callers. Error messages. Import statements. Variable assignments. Any time you are searching for what files contain, Grep is the tool.
// Find all files that call processLegacyOrder()
Grep: "processLegacyOrder"
// Find all error messages containing "timeout"
Grep: "timeout"
// Find all files that import a specific module
Grep: "import.*from 'utils/auth'"
Glob matches file PATHS by naming patterns. Use Glob when you need to find files by name, extension, or directory structure. Test files. Configuration files. All TypeScript files in a specific directory. Any time you are searching for files based on their path, Glob is the tool.
// Find all test files
Glob: "**/*.test.tsx"
// Find all configuration files
Glob: "**/config.*"
// Find all MDX files in the domains directory
Glob: "content/domains/**/*.mdx"
The distinction in one sentence: Grep finds what is INSIDE files. Glob finds files by their NAMES.
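The same distinction can be demonstrated in plain Python as a mental model: a content search (Grep-like) versus a path match (Glob-like). This is a minimal sketch, not the tools' actual implementation; the file names and contents are invented for illustration.

```python
import re
import tempfile
from pathlib import Path

# Set up a tiny throwaway project tree (names are invented for illustration).
root = Path(tempfile.mkdtemp())
(root / "src").mkdir()
(root / "src" / "orders.py").write_text("processLegacyOrder(order_id)\n")
(root / "src" / "orders.test.py").write_text("# tests for orders\n")

# Grep-like: search file CONTENTS for a pattern.
callers = [p for p in root.rglob("*.py")
           if re.search(r"processLegacyOrder", p.read_text())]

# Glob-like: match file PATHS by naming pattern; contents are never read.
tests = list(root.rglob("*.test.py"))

print([p.name for p in callers])  # content match -> ['orders.py']
print([p.name for p in tests])    # path match    -> ['orders.test.py']
```

Note that the test file never enters the content-search results: it mentions "orders" but not the function name, while the path match finds it without opening it at all.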
The exam presents scenarios where a developer uses the wrong tool. If someone uses Glob to find function callers, it will fail, because Glob matches paths, not contents. If someone uses Grep to find test files, it may appear to work when file contents happen to mention "test", but Grep never matches filenames; Glob is the correct tool, and the exam expects you to identify it.
Read, Write, and Edit
These three tools handle file operations, each optimised for a different use case.
Edit performs targeted modifications using unique text matching. You specify the exact text to find and its replacement. It is fast and precise because it touches only the specific text you identify.
Edit:
old_string: "function processOrder(id: string)"
new_string: "function processOrder(id: string, validate: boolean = true)"
When Edit fails: Edit requires unique text matching. If the text you specify appears in multiple places in the file, Edit cannot determine which occurrence to change and will fail. This is not a bug — it is a safety mechanism preventing unintended modifications.
The fallback: Read + Write. When Edit cannot find unique anchor text, fall back to Read (load the full file contents) followed by Write (write the complete modified file). This is more expensive in terms of context tokens because you load the entire file, but it is reliable when Edit's unique matching constraint cannot be satisfied.
The ordering matters:
- Try Edit first — it is faster and uses less context
- Fall back to Read + Write only when Edit fails due to non-unique text
Do not default to Read + Write for every modification. The exam penalises this because it wastes context tokens unnecessarily.
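The unique-match rule and the Read + Write fallback can be sketched in Python. The `edit` helper below is hypothetical, written only to mirror the described behaviour; it is not Edit's actual implementation.

```python
import tempfile
from pathlib import Path

def edit(path: Path, old: str, new: str) -> None:
    """Edit-style targeted replacement: fails unless `old` is unique."""
    text = path.read_text()
    count = text.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match, found {count}")
    path.write_text(text.replace(old, new))

path = Path(tempfile.mkdtemp()) / "orders.py"
path.write_text("def f():\n    return 1\n\ndef g():\n    return 1\n")

try:
    edit(path, "return 1", "return 2")       # ambiguous: appears twice
except ValueError:
    # Read + Write fallback: load the whole file, rewrite it completely.
    lines = path.read_text().splitlines()
    lines[1] = "    return 2"                # change only f()'s body
    path.write_text("\n".join(lines) + "\n")

print(path.read_text())
```

The failure is deliberate: refusing an ambiguous match is what prevents the tool from silently changing the wrong occurrence.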
Incremental Codebase Understanding
How you explore a codebase matters as much as which tools you use. There is a right way and a wrong way.
Wrong: Read all files upfront. Loading every file into context before you understand what you need is a context-budget killer. If a codebase has 200 files and you read them all, you have consumed your entire context window on files that are mostly irrelevant to your task. This is the single biggest anti-pattern in codebase exploration.
Right: Incremental discovery. Start narrow. Expand only as needed.
- Grep to find entry points. Search for the function name, class name, or error message that anchors your investigation. This tells you which files are relevant.
- Read to follow imports and trace flows. Once you know which files matter, Read them to understand the code structure. Follow import statements to discover related files.
- Grep again to trace usage. If you find a wrapper function or re-export, Grep for that name across the codebase to find all consumers.
- Read only what you need. Each file you read should be justified by what you discovered in the previous step.
This approach uses minimal context for maximum understanding. You discover the codebase's structure progressively, spending tokens only on files that are relevant to your task.
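The search-then-read cycle above can be sketched as follows, with Python standing in for the tools and invented file names. The point is the filtering: only files matched by the search are ever loaded.

```python
import re
import tempfile
from pathlib import Path

# A toy codebase: two relevant files and one that is pure noise.
root = Path(tempfile.mkdtemp())
(root / "a.py").write_text("from b import helper\nhelper()\n")
(root / "b.py").write_text("def helper():\n    pass\n")
(root / "unrelated.py").write_text("print('noise')\n")

# Step 1 (Grep-like): search for the anchor symbol to find entry points.
entry_points = [p for p in root.glob("*.py")
                if re.search(r"\bhelper\b", p.read_text())]

# Step 2 (Read-like): load ONLY those files into "context".
# unrelated.py is never read in full during exploration.
read_files = {p.name: p.read_text() for p in entry_points}

print(sorted(read_files))
```

In a 200-file codebase the same loop would still load only the handful of files the search justified, which is the whole argument against reading everything upfront.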
Tracing Function Usage Across Wrapper Modules
A common codebase pattern: a function is defined in one module, re-exported through a wrapper, and consumed under the wrapper's exported name. A single Grep for the original function name will miss consumers who import through the wrapper.
The correct approach:
- Grep for the function definition to find where it is defined
- Read the defining file to identify exported names
- Grep for each exported name across the codebase to find all consumers
- If the function is re-exported through a barrel file (e.g. index.ts), Grep for the barrel file's module name to find consumers who import from it
This multi-step trace is more thorough than a single Grep and catches indirect consumers that a simple search would miss.
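The multi-step trace can be sketched in Python with invented module names: a first search finds the definition and the re-export, reading the wrapper reveals the new name, and a second search catches the indirect consumers.

```python
import re
import tempfile
from pathlib import Path

# Toy layout: auth.py defines the function, index.py re-exports it
# under a different name, consumer.py imports via the wrapper.
root = Path(tempfile.mkdtemp())
(root / "auth.py").write_text("def validate_token():\n    pass\n")
(root / "index.py").write_text("from auth import validate_token as check_token\n")
(root / "consumer.py").write_text("from index import check_token\ncheck_token()\n")

def grep(pattern: str) -> list[str]:
    """Grep-like content search returning matching file names."""
    return sorted(p.name for p in root.glob("*.py")
                  if re.search(pattern, p.read_text()))

# Step 1: grep for the original name: finds the definition and the
# wrapper, but NOT consumer.py, which never mentions validate_token.
direct = grep(r"validate_token")

# Step 2: read the wrapper (index.py) to learn the re-exported name.
# Step 3: grep for that name to catch the indirect consumers.
indirect = grep(r"check_token")

print(direct, indirect)
```

This is why the single-Grep shortcut is incomplete: `direct` misses `consumer.py` entirely, and only the second search surfaces it.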
The Deprecation Scenario
This scenario appears frequently in exam preparation: find all files that call a deprecated function AND find the test files for those callers. The correct tool sequence is:
- Grep for the function name — finds all files that call the deprecated function (content search)
- Glob for test files — finds test files matching the caller filenames (path matching)
For example, if Grep reveals that OrderProcessor.ts and RefundHandler.ts call the deprecated function, then Glob for **/*.test.tsx (or specifically **/OrderProcessor.test.* and **/RefundHandler.test.*) finds the corresponding test files.
This is Grep then Glob — content search followed by path matching. Not the other way round.
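The Grep-then-Glob sequence can be sketched in Python, reusing the file names from the example above. Step 1 reads contents to find callers; step 2 matches paths only, using each caller's stem.

```python
import re
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
(root / "OrderProcessor.ts").write_text("processLegacyOrder(id);\n")
(root / "RefundHandler.ts").write_text("processLegacyOrder(id);\n")
(root / "OrderProcessor.test.tsx").write_text("// tests\n")
(root / "RefundHandler.test.tsx").write_text("// tests\n")
(root / "Unrelated.ts").write_text("other();\n")

# Step 1 (Grep): content search for the deprecated call.
callers = sorted(p.stem for p in root.glob("*.ts")
                 if re.search(r"processLegacyOrder", p.read_text()))

# Step 2 (Glob): path match for each caller's test file, by name only.
tests = sorted(t.name for stem in callers
               for t in root.glob(f"{stem}.test.*"))

print(callers, tests)
```

Running the steps in the other order cannot work: there is no path pattern that identifies "files calling processLegacyOrder", so the content search has to come first.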
Key Concept
Grep searches file contents. Glob matches file paths. Edit is preferred for modifications; Read + Write is the fallback when Edit fails. Build codebase understanding incrementally — never read all files upfront.
Exam Traps
Using Glob to find function callers (it searches paths, not contents)
Glob matches file paths by naming pattern. It cannot search inside files for function calls. Use Grep to search file contents for function names, import statements, or error messages.
Using Grep to find files by extension or naming pattern
While Grep could technically find filenames mentioned in content, Glob is the purpose-built tool for matching file paths. Use Glob for **/*.test.tsx, **/config.*, and similar path-based searches.
Reading all source files upfront before understanding what is relevant
Loading every file into context is a context-budget killer. The correct approach is incremental: Grep to find entry points, then Read to trace flows from those specific entry points.
Defaulting to Read + Write for every file modification instead of trying Edit first
Edit is faster and uses less context because it only touches the specific text. Read + Write loads the entire file. Try Edit first; fall back to Read + Write only when Edit fails due to non-unique text.
Practice Scenario
A developer needs to find all files that call a deprecated function processLegacyOrder() and also find all test files for those callers. Which tool sequence is correct?
Build Exercise
Trace and Refactor a Deprecated Function Using Built-in Tools
What you'll learn
- Apply Grep for content search and Glob for path matching in the correct sequence
- Use incremental codebase discovery instead of reading all files upfront
- Select Edit as the primary modification tool, with Read + Write as a fallback
- Trace function usage across wrapper modules and barrel files
- Follow the Grep-then-Glob pattern for finding callers and their test files
Steps
- Use Grep to search for all callers of a target function (e.g. processLegacyOrder) across the codebase
Why: Grep searches file contents — it is the correct tool for finding function callers. Using Glob here would fail because Glob matches file paths, not contents. The exam tests this distinction directly and penalises candidates who confuse the two.
You should see: A list of file paths containing calls to processLegacyOrder, with line numbers and matching lines showing the exact call sites. For example: src/OrderProcessor.ts:42: await processLegacyOrder(orderId).
- Use Glob to find test files matching the caller filenames (e.g. **/*.test.tsx)
Why: Glob matches file paths by naming pattern — it is the correct tool for finding test files by extension or naming convention. This completes the Grep-then-Glob pattern: content search to find callers, then path matching to find their tests.
You should see: A list of test file paths matching the pattern, such as src/OrderProcessor.test.tsx and src/RefundHandler.test.tsx. These correspond to the caller files found by Grep in the previous step.
- Use Read to examine each caller file and understand the usage pattern and context
Why: Reading files incrementally — only after Grep identifies which files matter — is the correct approach. Reading all source files upfront is a context-budget killer that the exam explicitly penalises. Each Read should be justified by what you discovered in the previous step.
You should see: The full contents of each caller file, showing how processLegacyOrder is called, what parameters are passed, how the return value is used, and whether the function is imported directly or through a wrapper module.
- Use Edit to replace the deprecated function call with the new API in each caller file
Why: Edit is the preferred modification tool because it targets specific text and uses less context than Read + Write. The exam penalises defaulting to Read + Write for every modification. Always try Edit first — it is faster and more precise.
You should see: Each caller file updated with the new API call replacing the deprecated one. For example, processLegacyOrder(orderId) replaced with processOrder(orderId, { validate: true }). The Edit tool confirms the replacement was made successfully.
- When Edit fails due to non-unique text matching, fall back to Read to load the full file, then Write to output the complete modified version
Why: Edit fails when the target text appears multiple times in the file — this is a safety mechanism, not a bug. The fallback is Read + Write: load the entire file, make modifications in your working memory, and write the complete modified version. This uses more context tokens but is reliable when Edit cannot find a unique match.
You should see: A successful file modification where Edit initially fails with a non-unique match error, followed by a Read that loads the full file contents, then a Write that outputs the entire file with the correct modification applied.