Exam Information

Claude Certified Architect — Foundations.

This certification validates that practitioners can make informed tradeoff decisions when shipping real-world solutions with Claude. It tests foundational knowledge across the Claude API, the Claude Agent SDK, Claude Code, and the Model Context Protocol (MCP) — the technologies used to build production-grade applications with Claude.

Questions are grounded in realistic scenarios drawn from actual customer use cases: agentic customer support, multi-agent research pipelines, Claude Code in CI/CD, developer productivity tools, and structured data extraction.

Format · Multiple choice; 1 correct answer, 3 distractors
Score range · 100–1,000, scaled across forms
Passing score · 720, set by subject-matter experts (SMEs)
Result · Pass / Fail; no penalty for guessing
Target candidate

A solution architect with 6+ months of experience building on Claude.

The ideal candidate designs and operates production applications with Claude and understands both the capabilities and the limitations of large language models.

Claude Agent SDK

Multi-agent orchestration, subagent delegation, tool integration, lifecycle hooks.

Claude Code

CLAUDE.md, Agent Skills, MCP servers, plan mode for team workflows.

Model Context Protocol

Designing tool and resource interfaces for backend system integration.

Structured Output

JSON schemas, few-shot examples, and reliable extraction patterns.

Context Management

Long documents, multi-turn conversations, multi-agent handoffs.

CI/CD Integration

Code review, test generation, and PR feedback in automated pipelines.

Reliability & Escalation

Error handling, human-in-the-loop, self-evaluation patterns.

Content outline

How the 100% is distributed.

Five domains, weighted by their share of the scored content. Architecture and Claude Code workflows together account for nearly half the exam.

01 · Agentic Architecture & Orchestration · 27%
02 · Tool Design & MCP Integration · 18%
03 · Claude Code Configuration & Workflows · 20%
04 · Prompt Engineering & Structured Output · 20%
05 · Context Management & Reliability · 15%
Exam scenarios

Six production contexts. Four shown per exam.

Every exam draws four scenarios at random from this set. Each scenario frames a group of questions that test architectural judgment in a realistic deployment.

01 · Customer Support Resolution Agent

An Agent SDK agent handles returns, billing disputes, and account issues via custom MCP tools (get_customer, lookup_order, process_refund, escalate_to_human). Target: 80%+ first-contact resolution while knowing when to escalate.

Agentic Architecture · Tool Design & MCP · Context & Reliability
02 · Code Generation with Claude Code

Claude Code accelerates day-to-day development — generation, refactoring, debugging, documentation. Integrate it with custom slash commands and CLAUDE.md, and choose between plan mode and direct execution.

Claude Code Workflows · Context & Reliability
03 · Multi-Agent Research System

A coordinator delegates to specialized subagents — web search, document analysis, synthesis, and report generation — to produce comprehensive, cited reports.

Agentic Architecture · Tool Design & MCP · Context & Reliability
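The delegation pattern this scenario describes can be sketched in a few lines. The subagent functions below are illustrative stand-ins, not the Agent SDK's actual interfaces:

```python
# Minimal sketch of a coordinator fanning a research question out to
# specialized subagents and merging their findings into a cited report.
# All function names and return shapes here are assumptions for
# illustration only.

def web_search_agent(query):
    """Stand-in for a web-search subagent."""
    return [{"source": "example.com/a", "finding": f"web result for {query}"}]

def document_agent(query):
    """Stand-in for a document-analysis subagent."""
    return [{"source": "internal-doc-7", "finding": f"doc analysis of {query}"}]

def synthesis_agent(findings):
    """Merge subagent findings and collect their sources as citations."""
    body = "; ".join(f["finding"] for f in findings)
    citations = sorted({f["source"] for f in findings})
    return {"report": body, "citations": citations}

def coordinator(query):
    """Delegate to each subagent, then hand off to synthesis."""
    findings = web_search_agent(query) + document_agent(query)
    return synthesis_agent(findings)
```

The design point the exam probes is the handoff: the coordinator passes structured findings (not raw transcripts) between subagents, so each stage works from a compact, verifiable context.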
04 · Developer Productivity with Claude

Agents help engineers explore unfamiliar codebases, understand legacy systems, and automate repetitive tasks using Read, Write, Bash, Grep, Glob, and MCP servers.

Tool Design & MCP · Claude Code Workflows · Agentic Architecture
05 · Claude Code for CI/CD

Claude Code runs automated reviews, generates test cases, and gives PR feedback. Prompts must produce actionable feedback while minimizing false positives.

Claude Code Workflows · Prompt Engineering
06 · Structured Data Extraction

Extract information from unstructured documents, validate with JSON schemas, and integrate cleanly with downstream systems while handling edge cases gracefully.

Prompt Engineering · Context & Reliability
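The validate-before-downstream step this scenario describes can be sketched with stdlib-only checks. The invoice fields and helper name are illustrative assumptions, standing in for a full JSON Schema validator:

```python
# Sketch of the extraction pattern: parse model-extracted JSON, check it
# against an expected schema, and surface edge cases as structured errors
# rather than passing bad records downstream.

import json

# Hypothetical schema for illustration: field name -> expected type(s).
INVOICE_SCHEMA = {
    "invoice_id": str,
    "total": (int, float),
    "currency": str,
}

def validate_extraction(raw):
    """Parse and check extracted JSON; return (record, errors)."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError as e:
        # Malformed output from the model is an edge case, not a crash.
        return None, [f"invalid JSON: {e.msg}"]
    errors = []
    for field, expected in INVOICE_SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"wrong type for {field}")
    return (record if not errors else None), errors
```

Returning `(record, errors)` rather than raising lets the caller decide whether to retry the extraction, route the document to a human, or drop it.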
Sample question

What the questions actually look like.

Scenario · Customer Support Resolution Agent

Production data shows that in 12% of cases, your agent skips get_customer entirely and calls lookup_order using only the customer's stated name, occasionally leading to misidentified accounts and incorrect refunds. What change would most effectively address this reliability issue?

  A. Add a programmatic prerequisite that blocks lookup_order and process_refund calls until get_customer has returned a verified customer ID.
  B. Enhance the system prompt to state that customer verification via get_customer is mandatory before any order operations.
  C. Add few-shot examples showing the agent always calling get_customer first, even when customers volunteer order details.
  D. Implement a routing classifier that analyzes each request and enables only the subset of tools appropriate for that request type.
Correct answer · A

When a specific tool sequence is required for critical business logic — like verifying customer identity before processing refunds — programmatic enforcement provides deterministic guarantees that prompt-based approaches cannot. Options B and C rely on probabilistic LLM compliance, which is insufficient when errors have financial consequences. Option D addresses tool availability rather than tool ordering, which is not the actual problem.
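Option A's programmatic enforcement can be sketched as a thin gate in the agent harness. The tool names come from the scenario; the `ToolGate` class and its `call` method are illustrative assumptions, not part of the Agent SDK:

```python
# Sketch: enforce the get_customer prerequisite in code, so the ordering
# guarantee is deterministic regardless of what the model decides to call.

class ToolGate:
    """Blocks order/refund tools until get_customer verifies a customer ID."""

    GATED_TOOLS = {"lookup_order", "process_refund"}

    def __init__(self):
        self.verified_customer_id = None

    def call(self, tool_name, handler, **kwargs):
        if tool_name in self.GATED_TOOLS and self.verified_customer_id is None:
            # Deterministic refusal: a structured error the model can act on
            # (call get_customer, then retry) instead of a silent failure.
            return {"error": "customer_not_verified",
                    "required_first": "get_customer"}
        result = handler(**kwargs)
        if tool_name == "get_customer" and "customer_id" in result:
            self.verified_customer_id = result["customer_id"]
        return result
```

Because the check lives in the harness rather than the prompt, the 12% skip rate drops to zero by construction: a skipped verification now returns an error the agent must resolve before any order operation succeeds.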

Ready to dive in?

Start with the domain that carries the most weight, or work through them in order.