Deep Dive tips and tricks to get certified: Step-by-step tutorials, videos, practice exams.
NOTE: Content here reflects my personal opinions, and is not intended to represent any employer (past or present). “PROTIP:” highlights information I haven’t seen elsewhere on the internet because it is hard-won, little-known but significant, based on my personal research and experience.
This article was completely hand-crafted (for now).
Visit https://anthropic.com/
– the corporate marketing landing page.
Notice “Anthropic is a public benefit corporation dedicated to securing its benefits and mitigating its risks.”
Anthropic’s entry on LinkedIn classifies the company in the “Research Services” industry:
“Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.”
3M followers. 501-1K employees.
Anthropic was founded in 2021 by seven former employees of OpenAI, including now-CEO Dario Amodei, who was OpenAI’s Vice President of Research.
Click “Read more” at https://www.anthropic.com/research about results from Anthropic’s survey of users.
“Claude” on LinkedIn.com says “Claude is an AI assistant built by Anthropic to be safe, accurate, and secure.” in Technology, Information and Internet. 884K followers.
“Brainstorm in Claude, build in Cowork” VIDEO
Claude competes with agentic coding tools (aka coding agent IDEs) that read a codebase, edit files, and run commands:
REMEMBER: Anthropic doesn’t offer phone or live chat support, only chat at support.claude.com. The Uptime status page shows Anthropic’s own production environments:
Meet Claude - Platform - Solutions - Pricing - Resources - Contact sales - Try Claude
platform.claude.com is the user-facing Claude Console: Dashboard, Workbench, Files, Skills, and Documentation (for each organization). Claude also creates the evaluation automation that it runs.
The Claude API refers to the endpoint that serves SDK requests; the claude-agent-sdk is a wrapper around claude -p.
REMEMBER: The -p flag specifies non-interactive (aka “headless”) operation: no prompts, no confirmations. It runs and returns the result. The SDK spawns the Claude Code CLI as a subprocess and communicates over stdin/stdout via JSON-lines. Compare it to the Anthropic Client SDK. Specify --allowedTools and --disallowedTools permissions.
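The stdout side of that JSON-lines handshake can be sketched in a few lines of Python. Everything here is illustrative — the event field names are stand-ins for whatever the actual wire format emits, not the real schema:

```python
import json

# Hypothetical sample of the JSON-lines stream a wrapper reads from the
# CLI subprocess's stdout (field names are illustrative, not the actual
# wire format):
raw_stream = "\n".join([
    '{"type": "message", "content": "Analyzing the codebase..."}',
    '{"type": "tool_use", "tool": "Read", "input": {"path": "app.py"}}',
    '{"type": "result", "content": "Done. 3 files changed."}',
])

def parse_json_lines(stream: str) -> list[dict]:
    """Decode one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in stream.splitlines() if line.strip()]

events = parse_json_lines(raw_stream)
# A headless (-p) run ends with a final result event:
final = [e for e in events if e["type"] == "result"][-1]
print(final["content"])
```

The point of the JSON-lines framing is that each event is independently parseable as it streams in, so the caller can react to tool use before the run finishes.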
Claude Code is “like having a capable teammate who actually does the work”. Instead of hand-coding, human app designers now hold natural-language conversations with Claude Code to write design specs from which both infrastructure creation and programming code are generated.
“AI will soon be writing 90 percent of all code.” — Dario Amodei, Anthropic CEO, March 10 2025
That is why instead of hiring entry-level programmers, companies will be paying for AI tokens.
Claude Cowork can control the macOS mouse, keyboard, and screen, letting Claude operate any app.
The history of restrictions on US Government use of Claude for domestic surveillance or in fully autonomous weapons is summarized at https://en.wikipedia.org/wiki/Anthropic
It says the company is headquartered in San Francisco’s Foundry Square (near the Bay Bridge) at 500 Howard Street at First Street (across from Chipotle and BlackRock, and close to the Salesforce Tower’s BART and bus stops).
REMEMBER: Anthropic does not host its own models but uses AWS, Azure, GCP, etc. Claude is the only frontier AI model available on all three leading cloud providers: AWS, Google Cloud, and Microsoft Azure. Claude is also being integrated into the Databricks Data Intelligence Platform and Snowflake’s Lakehouse databases.
PROTIP: That enables bringing costs down by running a downloaded local foundation model while still using Claude Code/Cowork.
Claude Dispatch enables cross-device workflows where tasks move from the mobile app to a desktop app that stays awake (doing whatever else).
Claude “Computer Use”: Because raw GUI control is powerful, but also brittle, slower, and much harder to govern, the Claude ecosystem is a layered agent system where connectors (with structured contracts) via MCP apps are preferred, browser automation (of forms on websites) is secondary, and raw full-screen (difficult to govern) desktop control is the fallback layer.
References:
Automation provided by AI agents has gone beyond auto-completion of code.
Connectors (under the “Customize” and Settings menu items) enable Claude to interact with external platforms such as GitHub, Gmail, Google Calendar, Google Drive, etc.
GitHub Integration: Deep integration with GitHub for PR reviews, issue management and even CI/CD.
An agentic code harness is what enables an LLM to act agentically: run in sandboxes, accept prompts, use tools, etc.
Memory system: CLAUDE.md and other files that provide persistent context across sessions.
Slash commands: Powerful keywords to control agent behavior. VIDEO
Skills (under the Customize menu item) enable new knowledge to be dynamically obtained by Claude or subagents based on minimal description and the current query as opposed to always taking up room lurking in the context memory. Skills are now integrated with commands.
Subagents: Create specialized subagents for different tasks, each with its own context window. REMEMBER: Subagents operate with isolated context and do NOT share memory with the coordinator. Every piece of information a subagent needs must be passed to it explicitly.
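Because nothing is shared, a coordinator typically serializes whatever the subagent needs straight into its prompt. A minimal sketch of that explicit handoff (the helper name and fields here are hypothetical):

```python
# Sketch: a subagent shares no memory with the coordinator, so every fact
# it needs must be serialized into the prompt itself. The function and
# field names below are illustrative, not part of any real API.
def build_subagent_prompt(task: str, facts: dict) -> str:
    context_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
    return (
        f"Task: {task}\n\n"
        f"Context you must rely on (you have no other memory):\n{context_lines}"
    )

prompt = build_subagent_prompt(
    "Summarize open issues",
    {"repo": "bomonike/claude-templates", "branch": "main"},
)
print(prompt)
```

Anything the coordinator forgets to pass simply does not exist from the subagent's point of view.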
MCP Support: Extend it with any MCP tool to access APIs, databases and other external systems.
Hooks are small scripts (agentic workflows) that run automatically triggered by events (before or after Claude tries to do something). So a hook can block Claude from taking an action unless a specific condition has been met. https://dev.to/gunnargrosch/automating-your-workflow-with-claude-code-hooks-389h
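For example, a hook that vets every Bash command before it runs might be wired up like this in .claude/settings.json — a sketch based on the documented hooks shape; guard.sh is a hypothetical script, and by convention a PreToolUse hook that exits with code 2 blocks the action and feeds its stderr back to Claude:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/guard.sh"
          }
        ]
      }
    ]
  }
}
```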
Plugins (under the Customize menu item) bundle hooks, slash commands, and skills together for sharing with others.
The Claude Agent SDK is used to build agentic AI systems beyond coding assistance.
Rules ???
PROTIP: Improvements in net productivity can be confidently monetized when features are combined and consistently applied:
Customer Support Resolution Agent (Agent SDK + MCP + escalation)
Code Generation with Claude Code (CLAUDE.md + plan mode + slash commands)
Multi-Agent Research System (coordinator-subagent orchestration)
Developer Productivity Tools (built-in tools + MCP servers) See https://github.com/anthropics/courses/blob/master/tool_use/README.md
Claude Code for CI/CD (non-interactive pipelines + structured output)
Structured Data Extraction (JSON schemas + tool_use + validation loops)
CAUTION: Cowork activity is not captured in audit logs or Compliance APIs today, which is why it is not for regulated workloads.
PROTIP: Use merlin.ai’s bulk purchasing at $5/mo ($60/year, with code AZ5) to access several LLMs (Claude Sonnet 4.5, OpenAI GPT-5, etc.) instead of paying for a Claude AI subscription at https://claude.com/pricing:
Anthropic’s own tutorials are at:
Articles:
YouTube videos with no subscription:
YouTube videos peddling subscriptions:
by Brock Mesarich - AI for Non Techies, pitching his $47/mo “AI for Non-Techies” course: “Dispatch” from your phone.
“How to Use Claude Cowork Projects Better Than 99% of People”
Others when you’re through with the above:
https://www.youtube.com/watch?v=uUGfo8QOsW0&pp=ugUEEgJlbg%3D%3D Claude Mythos 5: Most Powerful Model Ever! AGI, GLM 5.1, Claude Code Update & Codex Plugins! AI NEWS
PROTIP: Load my templates repo from GitHub, which contains a curated set from other tutorials.
mkdir -p ~/bomonike
git clone https://github.com/bomonike/claude-templates.git --depth 1
cd claude-templates
PROTIP: Use this as your base project when you install Claude.
alias cl='claude --dangerously-skip-permissions'
alias clc='cl --continue' # resume last session with the context/history from the previous session
# Resume Claude with the context/history from the previous session but still be able to get back to that point later:
alias clf='claude --resume --fork-session'
brew install --cask visual-studio-code
code

VIDEO: Fun fact: 90% of code in Claude Code is written by itself, in TypeScript, React, Ink, Yoga, and Bun. The team works at around 5 releases per engineer each day. AI agents are used for code reviews and tests, test-driven development’s (TDD) renaissance, automating incident response, and cautious use of feature flags.
brew install node
winget install OpenJS.NodeJS.LTS # on Windows
node --version
v20.18.0
curl -fsSL https://claude.ai/install.sh | bash
PROTIP: We do not recommend “brew install” because it can be out of date, even though it’s more convenient since Homebrew installs to /opt/homebrew/bin for all apps.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
claude --version
This should reflect the latest release at https://github.com/anthropics/claude-code/releases which was, at time of this writing:
2.1.86 (Claude Code)
PROTIP: Notice that Claude is updated daily. So end your day with a backup and start your day with an update.
whereis claude
Claude was not installed if you see: bash: claude: command not found. Otherwise you should see this (where ~ is your home directory, /Users/yourusername):
claude: ~/.local/bin/claude
$(whereis -q claude)
That’s the equivalent of:
~/.local/bin/claude
Alternately, more simply since the path is within $PATH:
claude
Alternately: To begin Claude with the context/history from the previous session:
claude --resume
Alternately, to begin Claude with the context/history from the previous session but still be able to get back to that point later:
claude --resume --fork-session
Remember that aliases were setup.
clf
The first time that Claude runs:
???
Press command+V to paste. Click “Authorize”.
PROTIP: Press shift+command and - or + to make fonts larger or smaller. But that adjusts for all panes. So many prefer to view Claude Code standalone rather than within VSCode.
PROTIP: Ideally, use three monitor screens: Terminal for Claude Code, Visual Studio (vertical view), Tutorial screen.
claude auth status
claude auth logout
REMEMBER: Log out of auth before setting up auth for 3rd-party clouds (Amazon, GCP, Microsoft, etc.)
From Google VertexAI after installing gcloud cli:
export ANTHROPIC_???_API_KEY="..."
export CLAUDE_CODE_USE_???=1
From a Microsoft Foundry Project API Key:
export ANTHROPIC_FOUNDRY_API_KEY="..."
export CLAUDE_CODE_USE_FOUNDRY=1
export ANTHROPIC_API_KEY=""
export ANTHROPIC_BASE_URL=http://localhost:11434
VIDEO: Instead of “Download” at
https://lmstudio.ai/blog/claudecode
brew install --cask lm-studio
Alternately, use Ollama VIDEO:
brew install ollama
export ANTHROPIC_AUTH_TOKEN=ollama
ollama signin
ollama pull kimi-k2.5:cloud # runs on Ollama's cloud
OLLAMA_CONTEXT_LENGTH=64000 ollama serve
claude --model "kimi-k2.5:cloud"
WARNING: Kimi (in China) was created (stolen) by distillation of Anthropic’s model.
Set up auth for free use of moonshot.ai’s Kimi model downloaded for running on Ollama via a local relay path. The model features a 1T-parameter Mixture-of-Experts (MoE) Transformer architecture with 32B activated parameters. It supports image, video, PDF, and text inputs up to 256K tokens and excels in benchmarks like MMMU-Pro (78.5), SWE-Bench Verified (76.8), and AIME 2025 (96.1). Trained on approximately 15 trillion mixed visual and text tokens, it enables native multimodality, cross-modal reasoning, and efficient tool use grounded in visual data.
Using a free model means that you can use automatic /loop to iterate through many results, then select the best, like a Monte Carlo simulation.
But LM Studio using the MLX backend can produce 20 to 30 percent faster generation for the same model on the same hardware. And the Apple M3 Max has more bandwidth than the newer M4 Pro.
REMEMBER: Just as within Jupyter Notebook, run shell commands prefixed with the ! modifier. For example, ! pwd will run the pwd command and insert the output right into the conversation.
Click the Toggle sidebar (squarish) icon to collapse and expand the sidebar menu.
REMEMBER: The “Usage” Settings menu item does not appear until you have a paid subscription.
PROTIP: From anywhere in Claude, press shift+command+, (comma) for Claude’s Settings at https://claude.ai/settings/general
But switch off the “AWS Extend Switch Roles” browser extension if that comes up instead.
PROTIP: To chat from any screen, switch to a New Chat prompt by pressing shift+command+O (the letter) and start typing. For the pop-up, press command+K or shift+command+I for incognito (for the prompt to not appear among Recents).
REMEMBER: When your cursor is within the chat box, use these keyboard shortcuts:
References:
REMEMBER: Unless you go incognito, every time you run Claude in a directory, a Claude Code Project is created under ~/.claude/projects. So review and remove them periodically.
Click “Project” on the left menu to provide a way for Claude to remember your preferences and customize its responses to them, so you don’t have to repeat yourself.
PROTIP: If you work with different companies or clients, isolate each by creating a different project containing different information.
Click “+ New Project”
TODO: ???
Team/Enterprise subscribers can share a Project among themselves.
shift+tab cycles through the permission modes, so auto-accept edits is displayed only because I’m currently in bypass-permissions mode. There is one more permission mode, plan, in which Claude Code will discuss and plan but will not make changes to your files.
REMEMBER: The revolution in productivity from AI comes from “Plan Mode”, which uses AI to generate a plan rather than “vibe coding” prompts that generate results directly. Generating code from plans is more repeatable and enables several people to review and collaborate.
⏸ plan mode on (shift+tab to cycle)
Ask Claude to add tests to evaluate whether its solution is complete and valid.
Type your question or command on top of “How can I help you today?”
REMEMBER: there is a cutoff date for the information loaded into the model.

To create your own automations, consider the “Cowork” button at the top of the Claude app.
Cowork and Projects both require a Pro Plan subscription.
Click one Category at a time to see what’s available already: Code, Communication, Data, Design, Development, Financial Services, Health, Life sciences, Productivity, Sales and Marketing.
REMEMBER: Most services at the end of the connector (such as Zapier) charge money.
Setting up Claude Code...
✔ Claude Code successfully installed!
Version: 2.1.81
Location: ~/.local/bin/claude
Next: Run claude --help to get started
⚠ Setup notes:
• Native installation exists but ~/.local/bin is not in your PATH. Run:
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc && source ~/.bashrc
✅ Installation complete!
WARNING: Installing using curl requires adding this line to the $PATH in your ~/.zshrc or ~/.bashrc file:
export PATH="$HOME/.local/bin:$PATH"
RECOMMENDED: In a Terminal, install Claude Code:
brew info claude-code
brew install claude-code
Terminal-based AI coding assistant
install: 170,173 (30 days), 390,990 (90 days), 585,358 (365 days)
==> Moving App 'Claude.app' to '/Users/johndoe/Applications/Claude.app'
tree ~/.claude
folders:
backups cache downloads
REMEMBER: The free Claude.ai plan does not include Claude Code access. Upgrade to a Claude Pro, Max, Teams, Enterprise, or Console account.
Select the subscription level:
Claude Code can be used with your Claude subscription or billed based on API usage through your Console account.
Select login method:
❯ 1. Claude account with subscription · Pro, Max, Team, or Enterprise
2. Anthropic Console account · API usage billing
3. 3rd-party platform · Amazon Bedrock, Microsoft Foundry, or Vertex AI
Documentation:
· Amazon Bedrock: https://code.claude.com/docs/en/amazon-bedrock
· Microsoft Foundry: https://code.claude.com/docs/en/microsoft-foundry
· Vertex AI: https://code.claude.com/docs/en/google-vertex-ai
???
Type just the / slash character for a menu:
/batch
/claude-api
/compact # summarizes the conversation and replaces the current context with the summary
/context # token usage by each system component
/cost # tokens spent
/debug
/extra-usage
/heapdump
/loop
/pr-comments
/release-notes
/review
/security-review
/simplify
/update-config
/schedule
Others:
/help # menu below
/connect # establish connection
/start # begin a new session
/memory #
/statusline # below the prompt, defined in customizable ~/.claude/statusline.sh
/settings # menu
/clear # (aka /reset) is faster than exiting and starting Claude Code again
/search # through the database
/upload # files
exit # from the Claude UI/CLI program
/status # overview of your current Claude Code setup
/config # configuration
/loop
/doctor
Additional slash commands:
/insights # file://$HOME/.claude/usage-data/report.html
/effort # Effort Level Controls https://www.youtube.com/watch?v=brLhhkUqcn4&t=18618s — max for Opus only; high, medium, low, auto
/remote-control #
/batch # Batch Tasks & PRs
/simplify # Code Review
/loop # Schedule Prompts
/btw # side question
REMEMBER: Each session is a 5-hour rolling window (at time of this writing). ???
Models reset ???
/model default # switch back to the default (Sonnet) model
/model haiku # switch to the latest Haiku model
/model Sonnet (1M context) # switch to the latest Sonnet model with a 1M context window
/model Opus (1M context) # switch to the latest Opus model with a 1M context window
/model mythos # new Capybarra March 28, 2026 to Cyber Defenders
/fast # speed up Opus model execution
Install the ccusage utility program to analyze session logs:
https://github.com/ryoppippi/ccusage/
See ccusage.com/guide/session/reports
### /status
Example:
❯ /status
Version: 2.1.3
Session name: /rename to add a name
Session ID: 4eb36de6-c9f2-4c22-8ad3-a8232ea6c078
cwd: /Users/gigi
Auth token: none
API key: /login managed key
Organization: Perplexity AI
Email: gigi.sayfan@perplexity.ai
Model: opus (claude-opus-4-5-20251101)
MCP servers: notion ✔, linear ✔, datadog ✔
Memory: user (.claude/CLAUDE.md)
Setting sources: User settings, Shared project settings, Project local settings
### /config
Configuration choices are stored by Claude in its file .claude/settings.json
❯ /config
Auto-compact true
Show tips true
Thinking mode true
Prompt suggestions true
Rewind code (checkpoints) true
Verbose output false
Terminal progress bar true
Default permission mode Accept edits
Respect .gitignore in file picker true
Auto-update channel latest
Theme Dark mode
Notifications Auto
Output style default
Language Default (English)
Editor mode normal
Model opus
Claude’s context window is 200K, meaning it can ingest up to 200K tokens (about 500 pages of text) when using a paid Claude plan. The Claude API can ingest 1M tokens when using Claude Opus 4.6 or Sonnet 4.6.
PROTIP: Take action when token usage is above 50%. See Rewind Mode (Escape x2)
The first line in the example above, “51k tokens (26%)”, is what is currently used. Users on Claude Code with a Max, Team, or Enterprise plan using Claude Opus 4.6 have a 1M-token context window.
REMEMBER: The Autocompact Buffer: 45k tokens (22.5%) is reserved for autocompaction. When your conversation approaches the context window limit, Claude summarizes earlier messages to make room for new content. Claude Code does this automatically when the context window fills up, but automatic compaction might keep less important material and throw away useful insights. Compaction also takes time and requires working space: the context window limit applies to input + output combined, and when autocompaction triggers, the model needs room to generate the summary. Without reserved space, a full context would leave no room for output. So right off the bat, you only have about half the context window for your actual conversation.
System Overhead: The system prompt and tools reserve almost 20k tokens (~10%).
The more MCP servers are used, the more “MCP Tools” tokens are consumed. Each tool within an MCP server consumes tokens before the conversation even starts, and each MCP server usually contains several tools. For example, Notion has tools for:
* create-pages
* create-comment
* update-page
* update-database
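The reserved chunks above add up fast. A back-of-envelope budget, using the figures cited above plus a hypothetical MCP-tool figure:

```python
# Back-of-envelope context budgeting with the figures cited above; the
# MCP-tool figure is a hypothetical placeholder that grows with each
# connected server.
window = 200_000             # context window (tokens)
autocompact_buffer = 45_000  # reserved so compaction has room to write its summary
system_overhead = 20_000     # system prompt + built-in tool definitions
mcp_tools = 30_000           # hypothetical: schemas for connected MCP servers

usable = window - autocompact_buffer - system_overhead - mcp_tools
print(f"{usable} tokens (~{usable / window:.0%}) left for the conversation")
# roughly half the window, matching the observation above
```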
❯ /cost
⎿ Total cost: $2.69
Total duration (API): 5m 12s
Total duration (wall): 9h 39m 12s
Total code changes: 10 lines added, 1 line removed
Usage by model:
claude-haiku: 42.1k input, 790 output, 0 cache read, 11.9k cache write ($0.0609)
claude-opus-4-5: 3.4k input, 10.7k output, 1.7m cache read, 235.3k cache write, 1 web search ($2.63)
<a name="loop"></a>
## /loop
The <tt>/loop</tt> command parses natural language specifications into three parameters of a <strong>CronCreate call</strong>, which is not just a repetitive <a target="_blank" href="https://ghuntley.com/loop/">"Ralph loop"</a>. It can also <strong>schedule a task</strong> that fires based on a timer, in the current Claude Code session. Close the terminal, exit Claude, or lose your connection, and all scheduled tasks vanish.
<pre>
{
"cron": "*/10 * * * *",
"prompt": "Check the CI status on PR #42 and summarize any failures",
"recurring": true
}
</pre>
<a name="CLAUDE.md"></a>
### CLAUDE.md file
REMEMBER: Claude Code has no memory. On every new single session, it wakes up with <strong>zero context</strong> about your project.
So history and preferences must be added as context.
1. At the <strong>Claude CLI</strong>,
1. Copy in files from ???
* CLAUDE.md referenced by
* state.md — current state of the project
* architecture.md — how everything fits together
* terraform-CLAUDE.md
* python-CLAUDE.md
* MEMORY.md
<br /><br />
1. Integrate from those who shared theirs:
* https://github.com/anthropics/courses/blob/master/tool_use/README.md
* https://github.com/citypaul/.dotfiles/blob/main/claude/.claude/CLAUDE.md
* https://github.com/jarrodwatts/claude-code-config
* https://github.com/centminmod/my-claude-code-setup?tab=readme-ov-file#alternate-read-me-guides
* Git Worktrees (for <a target="_blank" href="https://code.claude.com/docs/en/desktop#work-in-parallel-with-sessions">Parallel Sessions in Claude Code</a> via Claude Desktop apps)
* https://github.com/Piebald-AI/claude-code-system-prompts?tab=readme-ov-file
* etc. ???
<br /><br />
* https://github.com/Piebald-AI/claude-code-system-prompts?tab=readme-ov-file#system-reminders
1. Customize System prompts using https://github.com/Piebald-AI/tweakcc
<a name="init"></a>
1. generate a starter CLAUDE.md as a starting point:
```bash
/init
tree
```
TODO:
```
├── api
├── web
├── .editorconfig
├── .env.example
├── .gitignore
├── CLAUDE.md
├── README.md
└── docker-compose.yml
```
Edit file CLAUDE.md, the long-term memory file.
The file guides Claude Code (claude.ai/code) when working with code in this repository.
REMEMBER: At the start of each agent session, Claude looks for a CLAUDE.md file in each GitHub repository root, in parent directories for monorepo setups, or in your home folder for universal application across all projects. The file must be named with uppercase “CLAUDE”, lowercase “.md” (like GitHub looks for “README.md”). Providing this context up front helps agents avoid running incorrect commands or introducing architectural or stylistic inconsistencies when implementing new features.
Each CLAUDE.md file holds markdown-formatted project-specific context that should be repeated in every prompt: Project context (basic rules), About this project, Key directories, Standards, structure, conventions, workflows, style, domain-specific terminology. Example:
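A minimal sketch of such a file — the project name, directories, and commands here are hypothetical placeholders to adapt to your own repo:

```markdown
# Project: acme-api (hypothetical example)

## About
Python FastAPI service with a React front end; PostgreSQL via SQLAlchemy.

## Key directories
- api/ — route handlers
- web/ — React front end

## Standards
- Run `make test` before committing; all tests must pass.
- Follow Conventional Commits for messages.

## Pointers (route, don't dump)
- Detailed rules: .claude/rules/*.md
- Architecture: docs/architecture.md
```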
PROTIP: Keep CLAUDE.md files to a maximum of 100–200 lines. Long files are a code smell and take up precious context. CLAUDE.md should be a routing file, not a knowledge dump.
Point to .claude/rules/*.md for detailed specs and docs/ for architecture. Otherwise it gets so long that Claude skims it and misses the important stuff.
Delete what you don’t need — deleting is easier than creating from scratch.
Explore Claude Plugin Marketplace of Curated plugins, agent skills, and MCP servers for Claude Code: https://claudemarketplaces.com/learn
/plugin marketplace add <a target="_blank" href="https://github.com/jarrodwatts/claude-hud">jarrodwatts/claude-hud</a>
/plugin install claude-hud
/reload-plugins # to activate
/claude-hud:setup # to ~/.claude/settings.json
/restart Claude Code
code ~/.claude/plugins/claude-hud/config.json
Updates every ~300ms. $80/yr Masterclass
Claude Cowork - “Hand off tasks to Claude and come back to finished work.”
Claude Skills “turn expertise, procedures, and best practices into reusable capabilities.” To ensure output follows proven patterns (rather than guessing) for handling PowerPoint pptx files, pptx/SKILL.md is defined.
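The shape of a SKILL.md is a YAML frontmatter block (name, description) followed by the procedure itself. A sketch — the skill name, steps, and asset paths here are hypothetical:

```markdown
---
name: pptx-builder
description: Build PowerPoint decks following house style. Use when the user asks for a .pptx deliverable.
---

# Creating .pptx files
1. Generate slides with python-pptx.
2. Apply the template in assets/house-template.pptx.
3. Validate that the file opens before returning it.
```

Because only the name and description are loaded up front, a skill costs almost nothing in context until it is actually invoked.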
https://platform.claude.com/workspaces/default/skills handlers for pdf, Microsoft xlsx, pptx, docx,
VIDEO: Chris Raroque runs Claude Code Opus inside a Warp client referencing a [paid] mobbin.com design template. Voice dictates changes. Breaks down generation section by section. No hand edits.
Anthropic provides free tutorials at https://anthropic.skilljar.com/
https://claude.com/partners
“Anthropic invests $100 million into the Claude Partner Network” (announced Mar 12, 2026) mentions “technical” Claude Certified Architect (CCA) Foundations certification.
#CAExamPrep
“A significant proportion of our $100 million investment will go directly to our partners as direct support for training and sales enablement, and for market development (including work to make customer deployments successful) and co-marketing for joint campaigns and events. “
The Partner Portal at https://partnerportal.anthropic.com/s/login/ provides Academy training materials, sales playbooks used by our own go-to-market team, and other co-marketing documentation.
At the Services Partner Directory, enterprise buyers can find firms with Claude implementation experience.
Partners get priority access to new certifications as they roll out.
Additional certifications for sellers, architects, and developers.
Use your personal email to sign up for their newsletter.
Use your personal email to sign In to https://anthropic.skilljar.com
Exam Domains from Anthropic’s Exam Guide.pdf:
The community confirms the exam’s focus areas: fallback loop design, Batch API cost optimization, JSON schema structuring to prevent hallucinations, and MCP tool orchestration.
IBM AI Engineering (Coursera): ML/DL concepts and model deployment; conceptual + hands-on; cloud-agnostic.
Anthropic Academy is at https://www.anthropic.com/learn
https://anthropic.skilljar.com/claude-certified-architect-foundations-access-request
References:
| | Claude Opus | Claude Sonnet | Claude Haiku | Mythos |
|---|---|---|---|---|
| Description | Highest level of intelligence | Balance of quality, speed, cost | Most cost-efficient and latency-optimized model | |
| Capabilities (best used for) | Advanced reasoning | Common coding tasks | Quick code completions and suggestions | |
| Cost | Highest | Medium | Lowest | |
| Input/Output $/MTok | $5/$25 | $3/$15 | $1/$5 | |
| Prompt caching Read/Write $/MTok | $0.50/$6.25 | $0.30/$3.75 | $0.10/$1.25 | |
| max_input_tokens (context window) | 1M tokens | 1M tokens | 200K tokens | |
| max_tokens (max output) | 128K tokens | 64K tokens | 64K tokens | |
| Tokens/min input & output | 30K/8K | 30K/8K | 50K/10K | |
| Comparative latency | Moderate | Fast | Fastest | |
| Supports reasoning & adaptive thinking | Yes | Yes | No | |
REMEMBER: Each model used has a different ID and version on each cloud: See DOCS: API codes for each Claude Model version list or GET https://api.anthropic.com/v1/models
On AWS, the full model_id = “us.anthropic.claude-3-7-sonnet-20250219-v1:0”
| Feature | Claude Opus 4.6 | Claude Sonnet 4.6 | Claude Haiku 4.5 |
|---|---|---|---|
| Claude API ID | claude-opus-4-6 | claude-sonnet-4-6 | claude-haiku-4-5-20251001 |
| Claude API alias used by API calls | claude-opus-4-6 | claude-sonnet-4-6 | claude-haiku-4-5 |
| GCP Vertex AI ID | claude-opus-4-6 | claude-sonnet-4-6 | claude-haiku-4-5@20251001 |
| AWS Bedrock ID | anthropic.claude-opus-4-6-v1 | anthropic.claude-sonnet-4-6 | anthropic.claude-haiku-4-5-20251001-v1:0 |
| Reliable knowledge cutoff: | - | - | February 2025 |
| Training data cutoff: | - | - | July 2025 |
TODO: Microsoft Foundry?
REMEMBER: The Reliable knowledge cutoff is the date through which knowledge is most extensive and reliable.
Training Data Cutoff is the broader range of data used.
export ANTHROPIC_API_KEY='sk...your-api-key-here'
https://platform.claude.com/settings/keys
Run the curl-model-info.sh from https://github.com/bomonike/claude-templates…
curl https://api.anthropic.com/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-opus-4-6",
"max_tokens": 1000,
"messages": [
{
"role": "user",
"content": "What are the capabilities of Claude Opus 4.5 and its Reliable knowledge cutoff date and Training data cutoff dates?"
}
]
}'
An example of a response (this one answers a renewable-energy prompt):
{
"id": "msg_01HCDu5LRGeP2o7s2xGmxyx8",
"type": "message",
"role": "assistant",
"content": [
{
"type": "text",
"text": "Here are some effective search strategies to find the latest renewable energy developments:\n\n## Search Terms to Use:\n- \"renewable energy news 2024\"\n- \"clean energy breakthrough\"\n- \"solar/wind/battery technology advances\"\n- \"green energy innovations\"\n- \"climate tech developments\"\n- \"energy storage solutions\"\n\n## Best Sources to Check:\n\n**News & Industry Sites:**\n- Renewable Energy World\n- GreenTech Media (now Wood Mackenzie)\n- Energy Storage News\n- CleanTechnica\n- PV Magazine (for solar)\n- WindPower Engineering & Development..."
}
],
"model": "claude-opus-4-6",
"stop_reason": "end_turn",
"usage": {
"input_tokens": 21,
"output_tokens": 305
}
}
The “AI-6 framework” appears in the February 2026 Packt BOOK “Design Multi-Agent AI Systems Using MCP and A2A” (on OReilly.com) by Gigi Sayfan, referencing his book’s GitHub repo https://github.com/Sayfan-AI/ai-six.
https://www.anthropic.com/learn/claude-for-you
AI Fluency 11-video playlist on YouTube
01 Introduction to AI Fluency
02 The AI Fluency Framework
03 Deep Dive 1: What is Generative AI?
04 Delegation
05 Applying Delegation
06 Description
07 Deep Dive 2: Effective Prompting Techniques
08 Discernment
09 The Description-Discernment Loop
10 Diligence
https://platform.claude.com/docs/en/get-started
curl https://api.anthropic.com/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-opus-4-6",
"max_tokens": 1000,
"messages": [
{
"role": "user",
"content": "What should I search for to find the latest developments in renewable energy?"
}
]
}'
Making a request
Multi-turn conversations work by you maintaining your own chat history.
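The API itself is stateless, so the client appends every turn to its own list and resends the whole list on each call. A minimal sketch of that pattern — model_reply is a stub standing in for a real API request:

```python
# Sketch: the server keeps no conversation state, so the client owns the
# history and resends it in full on every call. model_reply is a stub
# standing in for a real API request.
def model_reply(messages: list[dict]) -> str:
    return f"(reply to: {messages[-1]['content']})"

history: list[dict] = []

def ask(history: list[dict], text: str) -> str:
    history.append({"role": "user", "content": text})
    reply = model_reply(history)  # full history sent every time
    history.append({"role": "assistant", "content": reply})
    return reply

ask(history, "What is a context window?")
ask(history, "And how large is Claude's?")
# history now alternates user/assistant turns, ready to resend
```

Because the full history is resent each turn, token usage grows with conversation length, which is why compaction and prompt caching matter.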
Chatbot
PROTIP: By default, chat returns a message with code between backticks so that explanation text can accompany it. To retrieve just the code, use “stop sequences”:
import json

# Prefill the assistant's turn with an opening fence so Claude continues
# inside it, then stop generation at the closing fence
# (add_user_message, add_assistant_message, and chat are helpers from
# the tutorial's surrounding code):
messages = []
add_user_message(messages, "Generate a very short event bridge rule as json")
add_assistant_message(messages, "```json")
text = chat(messages, stop_sequences=["```"])

# Parse as JSON to validate and format
parsed_data = json.loads(text.strip())

# Or just strip whitespace for other data types
clean_text = text.strip()
System prompts
Temperature
Streaming
Controlling model output
Structured data
PROTIP: This is a common misconception. MCP Servers and tool use are complementary but different concepts. Tool use is about Claude calling functions to accomplish tasks. MCP is about who provides those functions - instead of you writing them, someone else has already implemented them in an MCP Server.
The key insight is that MCP Servers provide tool schemas and functions already defined for you, while direct tool use requires you to author everything yourself. Both involve Claude using tools, but MCP dramatically reduces the development work required on your end.
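For context, a single hand-authored tool definition looks like the following — get_weather is an illustrative tool, and the name/description/input_schema shape follows the Messages API's tool-use format. An MCP server ships ready-made definitions of this shape so you don't have to write them:

```python
# A tool definition of the shape the Messages API expects for tool use.
# get_weather is illustrative, not a real tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "input_schema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
```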
Instead of sitting around monitoring every prompt like a hall monitor just in case a rogue rm -rf slips by, consider a Code Container that mounts every project into an isolated container, where you can let your harness run loose with full permissions while the actual machine stays untouched.
npm install -g code-container
However, although actions within a container can’t affect your real system, it breaks when it needs network access, host filesystem access, or anything that crosses the sandbox boundary. Most real workflows need at least one of those.
So auto mode lets an AI classifier decide what’s safe, block what isn’t, and ask you only when it’s genuinely unsure. Auto mode runs two separate security systems. One watches what goes into the agent’s context: a server-side detector scans content for prompt injection attempts.
The second line of defense evaluates what the agent wants to do before it does it. Before the agent executes any action with real consequences, the “transcript classifier”, built on Claude Sonnet 4.6, evaluates the action against a set of decision criteria using full chain-of-thought reasoning.
This BLOG by Marco Kotrotsos reports a 17% false-negative rate, which allowed dangerous actions, including a 5.7% data-exfiltration attack success rate. But that’s still better than letting everything through when using the time-saving:
--dangerously-skip-permissions
Auto mode is not a replacement for judgment on high-stakes operations.
claude auto-mode defaults
References:
300ms startup time!
References:
https://medium.com/gitconnected/stop-babysitting-claude-code-get-work-done-10x-faster-with-code-container-fcd515381751
https://medium.com/@the.gigi/claude-code-deep-dive-lock-him-up-ea142fc8246b by Gigi Sayfan CCDD (Claude Code Deep Dive)
https://www.youtube.com/watch?v=IjiaCOt7bP8&pp=ugUHEgVlbi1VUw%3D%3D Agent Skills: Code Beats Markdown (Here’s Why) Sam Witteveen
26-03-28 v019 doc: HUD :anthropic-certs.md created 2026-03-19