What Is Claude Opus 4.7? Features, Benchmarks, Pricing, and Everything You Need to Know

Claude Opus 4.7 is Anthropic's most capable GA model with 1M context, high-res vision (3.75MP), xhigh effort level, and task budgets. Full breakdown of features, benchmarks, pricing, and what changed from Opus 4.6.

Ashley Innocent

16 April 2026


TL;DR

Claude Opus 4.7 is Anthropic’s most capable generally available model, released April 16, 2026. It introduces high-resolution vision (up to 3.75 megapixels), a new xhigh effort level, task budgets for agentic loops, and a new tokenizer. It keeps the 1M token context window and $5/$25 per million token pricing from Opus 4.6 but ships several breaking API changes, including the removal of extended thinking budgets and sampling parameters.

Introduction

Anthropic released Claude Opus 4.7 on April 16, 2026. It replaces Opus 4.6 as the top-tier model in the Claude lineup and targets developers building autonomous agents, knowledge-work assistants, and vision-heavy applications.

The release matters for three reasons. First, it’s the first Claude model with high-resolution image support, more than tripling the pixel budget from 1.15 MP to 3.75 MP. Second, it introduces task budgets, a way to give the model a token allowance for an entire agentic loop rather than a single turn. Third, it ships breaking changes that require code updates if you’re migrating from Opus 4.6.

This guide covers what Opus 4.7 can do, how it compares to its predecessor, what it costs, and what you need to change if you’re upgrading. You’ll also see how to test your Claude API integration with Apidog, which handles the multi-turn conversation format and tool-use payloads that Opus 4.7 excels at.

Core Specifications

| Specification | Value |
| --- | --- |
| API model ID | claude-opus-4-7 |
| Context window | 1,000,000 tokens |
| Max output tokens | 128,000 tokens |
| Input pricing | $5 per million tokens |
| Output pricing | $25 per million tokens |
| Batch input pricing | $2.50 per million tokens |
| Batch output pricing | $12.50 per million tokens |
| Cache read pricing | $0.50 per million tokens |
| 5-min cache write | $6.25 per million tokens |
| 1-hour cache write | $10 per million tokens |
| Release date | April 16, 2026 |
| Availability | Claude API, Amazon Bedrock, Google Vertex AI, Microsoft Foundry |

Opus 4.7 uses a new tokenizer that may produce up to 35% more tokens for the same text compared to Opus 4.6. The per-token price is unchanged, but your effective cost per request may increase depending on the content.

What’s New in Claude Opus 4.7

High-Resolution Image Support

This is the headline addition. Previous Claude models capped image input at 1,568 pixels on the long edge (about 1.15 megapixels). Opus 4.7 raises that to 2,576 pixels on the long edge (about 3.75 megapixels).

The practical impact: screenshots, design mockups, documents, and photographs come through at much higher fidelity. Coordinate mapping is now 1:1 with actual pixels, eliminating the scale-factor math that computer-use workflows previously required.

Opus 4.7 also improves accuracy on specific vision subtasks.

Higher resolution means more tokens per image. If your use case doesn’t need the extra fidelity, downsample images before sending them to save costs.
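Since the new pixel budget is defined by the 2,576-pixel long-edge cap, downsampling is a simple proportional resize. This sketch computes target dimensions for any image; the 2,576 figure comes from this article, and the helper name is illustrative:

```python
def fit_to_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale (width, height) down so the longer side fits max_long_edge.
    Images already within the cap are returned unchanged."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A 4032x3024 phone photo exceeds the 2576 px cap and gets scaled down.
print(fit_to_long_edge(4032, 3024))  # (2576, 1932)
```

Feed the resulting dimensions to whatever image library you already use for the actual resize; the math is the only model-specific part.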

New xhigh Effort Level

The effort parameter controls how much reasoning Claude invests in a response. Opus 4.7 adds xhigh above the existing high, medium, and low levels.

Use xhigh for coding and agentic tasks where quality matters more than latency. At this level, the model spends significantly more tokens on internal reasoning, resulting in better outputs for complex problems. Use high as the minimum for intelligence-sensitive work. Lower levels trade accuracy for speed and cost savings.
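As a minimal sketch, here is a request payload using the new effort level. The article names the parameter and its four values; the exact placement of `effort` as a top-level field is an assumption, so verify it against the official API reference:

```python
def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a Messages API payload with an effort level.
    Top-level "effort" placement is assumed, not confirmed."""
    if effort not in {"low", "medium", "high", "xhigh"}:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Quality-sensitive coding task: pay the latency cost for xhigh.
payload = build_request("Refactor this module for clarity.", effort="xhigh")
```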

Task Budgets (Beta)

Task budgets solve a problem that anyone building agents has hit: how do you prevent a multi-turn agentic loop from consuming an unbounded number of tokens?

With task budgets, you give Claude a rough token target for the entire loop, including thinking, tool calls, tool results, and final output. The model sees a running countdown and uses it to prioritize work, skip low-value steps, and finish gracefully as the budget runs out.

One key detail: for open-ended agentic tasks where quality matters most, skip the task budget and let the model run. Reserve task budgets for workloads where you need to control total spend.
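The article doesn’t give the API parameter name for task budgets, so the following is only a client-side model of the running countdown it describes: a token allowance debited across every turn of the loop, clamped at zero when the last turn overshoots.

```python
class TaskBudget:
    """Client-side countdown over an agentic loop's token spend.
    Illustrative only; the server-side beta parameter is not shown here."""

    def __init__(self, total_tokens: int):
        self.total = total_tokens
        self.used = 0

    def record(self, tokens: int) -> None:
        self.used += tokens

    @property
    def remaining(self) -> int:
        return max(self.total - self.used, 0)

    def exhausted(self) -> bool:
        return self.remaining == 0

budget = TaskBudget(50_000)
for turn_tokens in (12_000, 20_000, 25_000):  # thinking + tool calls + output per turn
    if budget.exhausted():
        break
    budget.record(turn_tokens)

print(budget.remaining)  # 0: the final turn overshot, clamped at zero
```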

Adaptive Thinking as the Only Thinking Mode

Extended thinking (where you set a fixed budget_tokens) is removed. Attempting to set thinking: {"type": "enabled", "budget_tokens": N} returns a 400 error.

Adaptive thinking is the sole thinking-on mode. In Anthropic’s internal evaluations, it consistently outperformed the fixed-budget approach because the model allocates reasoning tokens dynamically based on task difficulty.

Important: adaptive thinking is off by default. You must explicitly set thinking: {"type": "adaptive"} to enable it.

By default, thinking content is also omitted from responses. If you need to see the model’s reasoning (e.g., for streaming progress to users), set display: "summarized" in the thinking config.
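Putting the two settings above together, a minimal sketch of an Opus 4.7 request with adaptive thinking enabled and summarized reasoning surfaced, plus a guard that rejects the removed fixed-budget config before it triggers a 400:

```python
def thinking_config(show_reasoning: bool = False) -> dict:
    """Adaptive thinking is the only thinking-on mode in Opus 4.7."""
    cfg = {"type": "adaptive"}
    if show_reasoning:
        cfg["display"] = "summarized"
    return cfg

def validate_thinking(cfg: dict) -> dict:
    """Catch the removed Opus 4.6 extended-thinking shape client-side."""
    if cfg.get("type") == "enabled" or "budget_tokens" in cfg:
        raise ValueError(
            "extended thinking budgets were removed in Opus 4.7; "
            "use {'type': 'adaptive'} instead"
        )
    return cfg

request = {
    "model": "claude-opus-4-7",
    "max_tokens": 4096,
    "thinking": validate_thinking(thinking_config(show_reasoning=True)),
    "messages": [{"role": "user", "content": "Plan the migration."}],
}
```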

Improved Memory

Opus 4.7 is better at writing to and reading from file-system-based memory. If your agent maintains a scratchpad, notes file, or structured memory store across turns, it will do a better job of updating and referencing those notes.

This matters for long-running coding agents, research assistants, and any workflow where context carries across sessions.
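For concreteness, here is the kind of file-system scratchpad the improved memory behavior targets: a plain notes file the agent reads at the start of a session and appends to as it works. The class and file name are illustrative, not part of any Claude API:

```python
from pathlib import Path
import tempfile

class Scratchpad:
    """Minimal file-backed notes store that persists across agent sessions."""

    def __init__(self, path: Path):
        self.path = path
        self.path.touch(exist_ok=True)

    def read(self) -> str:
        return self.path.read_text()

    def append(self, note: str) -> None:
        with self.path.open("a") as f:
            f.write(note.rstrip() + "\n")

pad = Scratchpad(Path(tempfile.mkdtemp()) / "notes.md")
pad.append("- Migrated thinking config to adaptive")
pad.append("- TODO: re-measure token counts with new tokenizer")
```

The model would typically interact with a store like this through read/write tools you expose; Opus 4.7’s improvement is in how reliably it keeps such notes current and references them later.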

Knowledge Work Improvements

Opus 4.7 posts measurable gains on real-world knowledge tasks, building on the memory and long-context improvements described above.

What Changed from Opus 4.6

Breaking API Changes

These apply to the Messages API. If you use Claude Managed Agents, there are no breaking changes.

| Change | Before (Opus 4.6) | After (Opus 4.7) |
| --- | --- | --- |
| Extended thinking | thinking: {"type": "enabled", "budget_tokens": 32000} | Must use thinking: {"type": "adaptive"} |
| Sampling parameters | temperature, top_p, top_k accepted | Non-default values return 400 error |
| Thinking display | Thinking content included by default | Omitted by default; opt in with display: "summarized" |
| Tokenizer | Standard tokenizer | New tokenizer (up to 35% more tokens for same text) |
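The first two rows of the table are mechanical to apply. A sketch of a one-shot payload migration: swap the model ID, replace the fixed thinking budget with adaptive thinking, and strip the sampling parameters that now return a 400. The helper name is illustrative:

```python
def migrate_payload(old: dict) -> dict:
    """Convert an Opus 4.6 Messages API payload to the Opus 4.7 shape."""
    new = dict(old)
    new["model"] = "claude-opus-4-7"
    # Fixed thinking budgets are gone; adaptive is the only thinking-on mode.
    if new.get("thinking", {}).get("type") == "enabled":
        new["thinking"] = {"type": "adaptive"}
    # Non-default sampling parameters now return a 400, so drop them.
    for param in ("temperature", "top_p", "top_k"):
        new.pop(param, None)
    return new

old = {
    "model": "claude-opus-4-6",
    "max_tokens": 4096,
    "temperature": 0.7,
    "thinking": {"type": "enabled", "budget_tokens": 32000},
    "messages": [{"role": "user", "content": "hi"}],
}
new = migrate_payload(old)
```

Note this only fixes request shape; the tokenizer row still requires re-measuring token counts, which no payload rewrite can do for you.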

Behavior Changes

These aren’t API-breaking, but they may affect your prompts.

If you’ve built prompting scaffolding to force Claude into specific behaviors (like “double-check the slide layout” or “give status updates”), try removing it. Opus 4.7 handles many of these patterns natively.

Pricing Breakdown

Opus 4.7 maintains the same per-token pricing as Opus 4.6 and 4.5:

| Usage type | Cost |
| --- | --- |
| Standard input | $5 / MTok |
| Standard output | $25 / MTok |
| Batch input | $2.50 / MTok |
| Batch output | $12.50 / MTok |
| Cache read | $0.50 / MTok |
| 5-min cache write | $6.25 / MTok |
| 1-hour cache write | $10 / MTok |
| Fast mode input (Opus 4.6 only) | $30 / MTok |
| US data residency | 1.1x multiplier |

The new tokenizer is the cost variable to watch. Because it may produce up to 35% more tokens for the same input text, your effective cost per request could increase even though the per-token price hasn’t changed. Test with the /v1/messages/count_tokens endpoint to measure the impact on your specific prompts.
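A quick way to measure that impact is to count the same prompt against both models. This sketch only builds the requests you would send to the count_tokens endpoint for each model; it makes no network call, and the API key placeholder and header values should be checked against the official API reference:

```python
import json

def count_tokens_request(model: str, prompt: str) -> dict:
    """Assemble a request for /v1/messages/count_tokens (not sent here)."""
    return {
        "url": "https://api.anthropic.com/v1/messages/count_tokens",
        "headers": {
            "x-api-key": "<YOUR_API_KEY>",
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

prompt = "Summarize the attached quarterly report and flag any anomalies."
reqs = [count_tokens_request(m, prompt)
        for m in ("claude-opus-4-6", "claude-opus-4-7")]
```

Send both, compare the returned counts on your real production prompts, and you have the actual tokenizer delta rather than the worst-case 35% figure.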

The 1M context window has no long-context premium. A 900K-token request costs the same per-token rate as a 9K-token request.

Where to Use Opus 4.7

Strong Use Cases

Opus 4.7 is strongest where its new capabilities compound: autonomous agents running long tool-use loops, vision-heavy applications that benefit from the 3.75 MP image budget, and knowledge work that leans on the 1M context window and improved file-based memory.

When Opus 4.7 May Be Overkill

For simple, latency-sensitive, or high-volume tasks that don’t need deep reasoning, a smaller, cheaper Claude model usually serves better. The new tokenizer also makes Opus 4.7 relatively more expensive per request, which compounds at volume.

How to Test Your Claude Opus 4.7 Integration with Apidog

Switching your model ID from claude-opus-4-6 to claude-opus-4-7 is the easy part. The harder part is validating that your existing prompts, tool definitions, and error handling still work correctly after the breaking changes.

Apidog makes this straightforward:

1. Import your API schema. Drop in your OpenAPI spec or manually define your Claude API endpoints. Apidog auto-generates request templates for the Messages API.

2. Create test scenarios. Set up multi-turn conversations that test your specific tool-use patterns. Apidog lets you chain requests, pass context between turns, and validate response schemas.

3. Compare model versions. Run the same test scenarios against claude-opus-4-6 and claude-opus-4-7 side by side. Check for differences in token counts, response structure, and output quality.

4. Validate breaking changes. Confirm that your updated thinking config works, that removed sampling parameters don’t sneak back in, and that the new tokenizer doesn’t blow past your max_tokens limits.

5. Debug tool-use payloads. Inspect the full request and response bodies for multi-turn tool-use conversations. Apidog’s visual interface makes it easy to spot malformed tool results or missing tool_use_id references.

Migration Checklist

If you’re upgrading from Opus 4.6:

- Update your model ID to claude-opus-4-7.
- Replace thinking: {"type": "enabled", "budget_tokens": N} with thinking: {"type": "adaptive"}; the old form now returns a 400.
- Remove temperature, top_p, and top_k from requests; non-default values return a 400.
- Set display: "summarized" in the thinking config if you need to see the model’s reasoning.
- Re-measure prompt token counts with /v1/messages/count_tokens; the new tokenizer may produce up to 35% more tokens.
- Verify outputs still fit within your max_tokens limits after the tokenizer change.
- Try removing prompt scaffolding that forced behaviors Opus 4.7 now handles natively.

Conclusion

Claude Opus 4.7 is Anthropic’s strongest generally available model. The high-resolution vision, task budgets, and xhigh effort level push it further into autonomous agent territory. The breaking changes (no more extended thinking budgets, no sampling parameters) require code updates, but the migration path is clear.

The new tokenizer is the main cost consideration. Per-token prices are flat, but the same prompt may cost more due to higher token counts. Test your workloads before switching production traffic.

For developers building API integrations, Apidog provides the testing and debugging environment you need to validate your migration and compare model performance across versions.

