Claude Certification Exam Format

The Claude Certified Architect – Foundations (CCA-F) exam is a proctored, scenario-based assessment that evaluates your ability to design and implement production-grade solutions with Claude.

  • 60 multiple-choice questions in 120 minutes
  • Each question presents 1 correct response and 3 distractors
  • Scenario-based delivery: 4 of 6 possible scenarios are randomly selected per exam session
  • Proctored, closed-book assessment — no AI assistance, no external tools, no documentation

Scoring

The exam uses a scaled scoring model to ensure consistency across different scenario combinations.

  • Scaled score range: 100 – 1,000
  • Minimum passing score: 720
  • No penalty for guessing — unanswered questions are scored as incorrect, so answer every question
  • Pass/fail designation is based on a minimum competency standard established by subject matter experts
  • Score report delivered within 2 business days with section-level breakdowns

Tip: Because there is no guessing penalty, always select an answer for every question, even if you are unsure. Eliminating distractors pays off: a blind guess succeeds 25% of the time, but ruling out one distractor raises that to roughly 33%, and ruling out two raises it to 50%.

Five Anthropic Certification Domains

The exam content is organized into five domains, each weighted according to its proportion of the overall exam.

| Domain | Weight |
| --- | --- |
| D1: Agentic Architecture & Orchestration | 27% |
| D2: Tool Design & MCP Integration | 18% |
| D3: Claude Code Configuration & Workflows | 20% |
| D4: Prompt Engineering & Structured Output | 20% |
| D5: Context Management & Reliability | 15% |

Domain 1: Agentic Architecture & Orchestration (27%)

The heaviest domain. Covers designing, building, and managing agentic systems that use Claude as the reasoning engine — from single-agent loops to complex multi-agent orchestrations.

1.1 Agentic Loops — Design agentic loops that use tool results to determine next actions. The loop checks stop_reason: "tool_use" means execute the tool and continue; "end_turn" means the agent is done. Tool results must be appended to conversation history before the next request.
1.2 Multi-Agent Orchestration — Implement orchestration patterns using the Agent SDK: hub-and-spoke (coordinator delegates to specialists), handoff chains, and parallel fan-out. Choose based on whether tasks need centralized control or peer-to-peer delegation.
1.3 Subagent Invocation & Context — Manage subagent context and output for a parent agent. Subagents do NOT inherit the parent’s conversation history — only what the coordinator explicitly passes in the prompt. Output must be structured for the parent to consume.
1.4 Workflow Enforcement & Handoffs — Implement guardrails and enforcement patterns for agent behavior. When business rules require guaranteed compliance (e.g., verification before refund), use programmatic enforcement (hooks) rather than prompt-based instructions.
1.5 SDK Hooks — Configure hooks for monitoring, logging, or enforcing policy. The Agent SDK provides PreToolUse and PostToolUse hooks that can block, modify, or log tool calls deterministically.
1.6 Task Decomposition — Decompose complex tasks into subtasks suitable for delegation. Effective decomposition defines clear boundaries, expected outputs, and success criteria for each subtask.
1.7 Session State & Resumption — Manage session state and conversation history across turns. Includes context window management, progressive summarization, and immutable fact blocks for critical data.
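
The loop described in 1.1 can be sketched as follows. This is a minimal illustration with a stubbed model client and a hypothetical `lookup_order` tool, not the real Anthropic SDK; it shows the `stop_reason` branching and the requirement to append tool results to history before the next request.

```python
def run_tool(name, args):
    # Hypothetical tool registry for illustration.
    tools = {"lookup_order": lambda a: {"status": "shipped", "order_id": a["order_id"]}}
    return tools[name](args)

def agentic_loop(call_model, user_message, max_turns=10):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        response = call_model(history)
        history.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] == "end_turn":
            return response["content"]  # agent is done
        if response["stop_reason"] == "tool_use":
            result = run_tool(response["tool_name"], response["tool_args"])
            # Tool results MUST be appended before the next request.
            history.append({"role": "user",
                            "content": {"type": "tool_result", "result": result}})
    raise RuntimeError("max turns exceeded")
```

The same skeleton scales to the orchestration patterns in 1.2: a coordinator's "tool" can itself be a subagent invocation.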
Full study guide for Domain 1 →

Domain 2: Tool Design & MCP Integration (18%)

Focuses on designing effective tools that Claude can use, writing clear descriptions, handling errors, and integrating via the Model Context Protocol (MCP).

2.1 Effective Tool Interfaces — Write tool descriptions that enable Claude to select the correct tool and provide valid inputs. Descriptions are the primary mechanism for tool selection — include input formats, example queries, edge cases, and boundaries.
2.2 Structured Error Responses — Return structured error information from tools with isError, errorCategory, and isRetryable flags. This lets the agent distinguish transient failures (retry) from permanent ones (escalate).
2.3 Tool Distribution & Choice — Determine how to distribute tools across agents, MCP servers, or inline definitions. Tool selection reliability degrades above ~5 tools per agent — distribute the rest across specialized subagents.
2.4 MCP Server Integration — Configure and deploy MCP servers for Claude Code or agent systems. Use .mcp.json with environment variable expansion (${API_KEY}) to keep secrets out of version control.
2.5 Built-in Tool Selection — Leverage built-in tools (Read, Write, Edit, Bash, Grep, Glob) in agent workflows. These tools provide Claude Code’s core file system and shell interaction capabilities.
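
The structured error envelope from 2.2 might look like the following sketch. The field names follow the guide's `isError` / `errorCategory` / `isRetryable` convention; the handler routing is illustrative.

```python
def tool_error(category, message, retryable):
    # Structured error envelope a tool returns instead of raising.
    return {"isError": True, "errorCategory": category,
            "message": message, "isRetryable": retryable}

def handle_tool_result(result, retry, escalate):
    if not result.get("isError"):
        return result
    if result["isRetryable"]:
        return retry(result)      # transient: rate limit, timeout
    return escalate(result)       # permanent: not found, forbidden
```

Because the flags are machine-readable, the agent (or a coordinator) can decide to retry, fall back, or escalate without parsing free-text error strings.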
Full study guide for Domain 2 →

Domain 3: Claude Code Configuration & Workflows (20%)

Covers practical configuration and day-to-day usage of Claude Code as a development tool, including memory hierarchies, custom commands, and CI/CD integration.

3.1 CLAUDE.md Hierarchy — Configure CLAUDE.md memory hierarchies: project-level (.claude/CLAUDE.md, version-controlled), user-level (~/.claude/CLAUDE.md, personal), and directory-level (scoped to subdirectories). Project-level is shared; user-level is private.
3.2 Commands & Skills — Create custom slash commands in .claude/commands/ (version-controlled, available on clone) and skills in .claude/skills/ with frontmatter options like context: fork for isolated execution.
3.3 Path-Specific Rules — Define path-specific rules in .claude/rules/ with YAML frontmatter glob patterns (e.g., **/*.test.tsx) for automatic conditional application regardless of directory structure.
3.4 Plan Mode — Use plan mode for architectural reasoning before implementation. Ideal for complex tasks with large-scale changes, dependency analysis, and architectural decisions before writing code.
3.5 Iterative Refinement — Apply iterative refinement workflows: generate an initial output, review and critique, then refine. This generate-then-critique cycle improves quality incrementally.
3.6 CI/CD Integration — Integrate Claude Code into CI/CD pipelines using the -p flag for non-interactive mode. Use synchronous API for blocking checks (pre-merge), Batches API for latency-tolerant jobs (overnight reports).
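
A path-specific rule file (3.3) might look like the sketch below. The filename and frontmatter key are illustrative, the glob pattern is the one from the guide; check the Claude Code documentation for the exact frontmatter schema.

```markdown
---
globs: "**/*.test.tsx"
---
- Use React Testing Library role-based queries (`getByRole`) instead of test IDs.
- Keep each test file colocated with the component it covers.
```

Because the glob matches by pattern rather than by location, the rule applies to every matching test file regardless of directory structure.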
Full study guide for Domain 3 →

Domain 4: Prompt Engineering & Structured Output (20%)

Tests your ability to craft effective prompts and extract structured, validated output from Claude — a critical skill for building reliable production systems.

4.1 Explicit Review Criteria — Define explicit criteria and rubrics for evaluation tasks. Avoid vague instructions; provide measurable dimensions (e.g., “rate correctness 1–5 based on these criteria”).
4.2 Few-Shot Prompting — Write few-shot examples to guide behavior in ambiguous scenarios. Examples are especially effective for format compliance, edge-case handling, and consistent tone.
4.3 Structured Output — Use tool_use with JSON Schema to guarantee structured output. Force the model to respond via a tool call whose parameters match your schema — more reliable than asking for JSON in a text response.
4.4 Validation & Retry — Implement validation-retry loops for output correction. On failure, append specific validation errors (which field failed, expected vs actual) to the retry prompt — not generic “try again” messages.
4.5 Batch Processing — Design batch processing workflows using the Message Batches API. Suitable for latency-tolerant workloads (up to 24h processing, 50% cost savings). NOT suitable for blocking workflows like pre-commit hooks.
4.6 Multi-Pass Review — Architect multi-pass review systems (generate-then-critique). Split large inputs (e.g., 14-file PR) into per-file local analysis + a separate cross-file integration pass to avoid attention dilution.
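
The validation-retry pattern from 4.4 can be sketched as follows, with a stubbed model and hand-rolled type checks standing in for a JSON Schema validator. The key point is that the retry prompt names the exact failing field and expected type.

```python
# Hypothetical required fields for an invoice-extraction task.
REQUIRED = {"invoice_id": str, "total": float}

def validate(payload):
    errors = []
    for field, typ in REQUIRED.items():
        if field not in payload:
            errors.append(f"missing field '{field}' (expected {typ.__name__})")
        elif not isinstance(payload[field], typ):
            errors.append(f"field '{field}': expected {typ.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors

def extract_with_retry(call_model, document, max_attempts=3):
    prompt = f"Extract the invoice fields from:\n{document}"
    for _ in range(max_attempts):
        payload = call_model(prompt)
        errors = validate(payload)
        if not errors:
            return payload
        # Feed the specific validation errors back, not a generic "try again".
        prompt += "\nYour previous output was invalid: " + "; ".join(errors)
    raise ValueError("extraction failed after retries: " + "; ".join(errors))
```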
Full study guide for Domain 4 →

Domain 5: Context Management & Reliability (15%)

Addresses keeping agents grounded and reliable over long sessions, handling errors gracefully, and knowing when to involve humans.

5.1 Preserving Context — Preserve and compress context across long-running sessions. Extract critical transactional facts (IDs, amounts, dates) into an immutable “case facts” block at the start of each prompt, outside summarized history.
5.2 Escalation & Ambiguity — Determine when to escalate to a human versus resolve autonomously. Define explicit escalation criteria; LLM self-reported confidence scores are poorly calibrated and unreliable for routing.
5.3 Error Propagation — Design error propagation strategies across multi-agent hierarchies. Return structured error context (failure type, attempted query, partial results, suggested alternatives) to enable intelligent coordinator recovery.
5.4 Codebase Exploration — Use codebase exploration tools effectively (Read, Grep, Glob, Bash). Understand when to use each tool and how to combine them for efficient large-codebase navigation.
5.5 Human Review — Implement confidence-routed human review workflows. Track metrics stratified by category (not just aggregate accuracy) to reveal hidden failures in specific document types or scenarios.
5.6 Provenance — Maintain provenance and traceability of agent-generated outputs. Require structured claim-source mappings from subagents that downstream agents must preserve through synthesis.
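
The immutable case-facts pattern from 5.1 amounts to pinning critical facts verbatim at the top of every prompt, outside the summarized history, so that compression can never distort them. A minimal sketch (section labels are illustrative):

```python
def build_prompt(case_facts, summarized_history, new_message):
    # Critical transactional facts stay verbatim, above the lossy summary.
    facts = "\n".join(f"- {k}: {v}" for k, v in case_facts.items())
    return (
        "CASE FACTS (immutable, do not paraphrase):\n"
        f"{facts}\n\n"
        f"CONVERSATION SUMMARY:\n{summarized_history}\n\n"
        f"CURRENT MESSAGE:\n{new_message}"
    )
```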
Full study guide for Domain 5 →

Six Claude Architect Exam Scenarios

Every question on the exam is anchored to one of six real-world scenarios. During each exam session, 4 of these 6 scenarios are randomly selected. Below is a summary of each scenario and its primary domain coverage.

Scenario 1: Customer Support Resolution Agent

You are building an AI-powered customer support agent that can autonomously resolve common issues — processing refunds, looking up order status, troubleshooting products — while escalating complex or policy-ambiguous cases to human agents. The system must integrate with CRM, order management, and knowledge-base tools.

Primary domains: D1 (Agentic Architecture), D2 (Tool Design), D5 (Context Management & Reliability)

Scenario 2: Code Generation with Claude Code

You are using Claude Code as a development partner on a complex software project. The scenario covers configuring memory hierarchies, defining coding standards, working with plan mode for architectural decisions, and using iterative refinement to ship high-quality code efficiently.

Primary domains: D3 (Claude Code Configuration), D5 (Context Management & Reliability)

Scenario 3: Multi-Agent Research System

You are designing a research system where multiple specialized agents collaborate — one gathers sources, another synthesizes findings, a third performs fact-checking. The orchestrator must manage delegation, context passing between agents, and synthesize a final report with provenance.

Primary domains: D1 (Agentic Architecture), D2 (Tool Design), D5 (Context Management & Reliability)

Scenario 4: Developer Productivity with Claude

You are integrating Claude into a developer tools platform — code review automation, documentation generation, and developer workflow improvements. This scenario emphasizes tool design, MCP integration, and agentic patterns to boost productivity.

Primary domains: D2 (Tool Design), D3 (Claude Code Configuration), D1 (Agentic Architecture)

Scenario 5: Claude Code for Continuous Integration

You are setting up Claude Code within a CI/CD pipeline to automate code review, test generation, and deployment checks. The scenario focuses on headless Claude Code configuration, prompt engineering for consistent automated feedback, and structured output for pipeline integration.

Primary domains: D3 (Claude Code Configuration), D4 (Prompt Engineering & Structured Output)

Scenario 6: Structured Data Extraction

You are building a pipeline to extract structured information from unstructured documents — invoices, contracts, medical records. The scenario covers prompt design for extraction, JSON Schema enforcement, validation-retry loops, and batch processing at scale.

Primary domains: D4 (Prompt Engineering & Structured Output), D5 (Context Management & Reliability)

Core Claude AI Technologies Tested

The exam assumes working familiarity with the following tools and frameworks:

  • Claude Agent SDK — orchestration, handoffs, guardrails, hooks
  • Model Context Protocol (MCP) — server configuration, tool exposure, transport
  • Claude Code — CLAUDE.md, slash commands, skills, plan mode, rules
  • Claude API — messages, tool_use, streaming, system prompts
  • Message Batches API — bulk processing, async result retrieval
  • JSON Schema — type definitions for structured output enforcement
  • Pydantic — validation models for tool inputs and outputs
  • Built-in tools — Read, Write, Edit, Bash, Grep, Glob
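
Tying the tool_use and JSON Schema items together, a tool definition follows the Claude API's `name` / `description` / `input_schema` shape; the invoice fields below are hypothetical:

```json
{
  "name": "record_invoice",
  "description": "Record the extracted invoice fields. Use after reading the full document.",
  "input_schema": {
    "type": "object",
    "properties": {
      "invoice_id": {"type": "string"},
      "total": {"type": "number"}
    },
    "required": ["invoice_id", "total"]
  }
}
```

Forcing Claude to answer via this tool constrains the output to the schema, which is more reliable than asking for JSON in a text response.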

What Is NOT on the Exam

The following topics are explicitly out of scope for the CCA-F certification:

  • Internal implementation details of Claude models (architecture, training data, weights)
  • Fine-tuning or custom model training
  • Pricing calculations or billing administration
  • Non-Claude AI models or competing platforms
  • General machine learning theory (gradient descent, backpropagation, etc.)
  • Infrastructure provisioning (AWS, GCP, Azure setup)
  • Frontend/UI design or implementation
  • Legal compliance details (GDPR, HIPAA specifics) beyond general awareness

Ready to Start Preparing?

Test yourself with scenario-based questions from the official exam guide.

Try Sample Questions →
Practice Bank →