
Claude Opus 4.7 vs Claude Opus 4.6

Anthropic released Claude Opus 4.7 on April 17, 2026, roughly a month after 4.6. Pricing held steady at $15 input and $75 output per million tokens. The headline is a 5x context upgrade from 200K to 1 million tokens and a solid bump on reasoning and code benchmarks. Here is the full generational diff.
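
Since the rates did not move, per-request cost is purely a function of how much of the new window you fill. A minimal sketch of the arithmetic in Python (the token counts in the example are illustrative, not measured from any real workload):

```python
# Rough cost estimate at the listed Opus 4.7 rates ($15 in / $75 out per 1M tokens).
# Token counts below are illustrative, not measured.

INPUT_PRICE_PER_MTOK = 15.00
OUTPUT_PRICE_PER_MTOK = 75.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return (input_tokens * INPUT_PRICE_PER_MTOK
            + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000

# A request that fills the full 1M window and returns a 4K-token answer:
print(f"${request_cost(1_000_000, 4_000):.2f}")  # $15.30
```

The takeaway: a single maxed-out 1M-token prompt runs about $15 before output, so the new window changes your cost ceiling per request even though the per-token price is identical.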

Head-to-Head Specs

Spec             Claude Opus 4.7                 Claude Opus 4.6
Provider         Anthropic                       Anthropic
Input Price      $15.00 / 1M tokens              $15.00 / 1M tokens
Output Price     $75.00 / 1M tokens              $75.00 / 1M tokens
Context Window   1M tokens                       200K tokens
Released         2026-04                         2026-03
Capabilities     text, vision, tool-use, code    text, vision, tool-use, code

Benchmark Scores

Benchmark        Claude Opus 4.7    Claude Opus 4.6    Winner
MMLU-Pro         93.8               92.4               Claude Opus 4.7
HumanEval        96.2               95.1               Claude Opus 4.7
GPQA Diamond     76.5               74.2               Claude Opus 4.7
MATH             93.1               91.8               Claude Opus 4.7
SWE-bench        65.4               62.3               Claude Opus 4.7

See the full benchmark leaderboard for all models.

Category Breakdown

Context window: Claude Opus 4.7

1M tokens on 4.7 vs 200K on 4.6, a 5x jump

Code generation: Claude Opus 4.7

4.7 scores 96.2 on HumanEval vs 4.6 at 95.1

Reasoning: Claude Opus 4.7

4.7 leads on GPQA Diamond (76.5 vs 74.2) and MMLU-Pro (93.8 vs 92.4)

Math: Claude Opus 4.7

4.7 scores 93.1 on MATH vs 4.6 at 91.8

SWE-bench: Claude Opus 4.7

4.7 posts 65.4 vs 4.6 at 62.3 on real engineering tasks

Pricing: Tie

Both cost $15/$75 per 1M tokens

Choose Claude Opus 4.7 for:

  • Workloads that benefit from 1M context
  • Long-running agent sessions
  • Whole-repository code tasks
  • Multi-document research synthesis
View Claude Opus 4.7 details

Choose Claude Opus 4.6 for:

  • Existing prompts tuned to 4.6 behavior
  • Workloads that fit in 200K and do not need migration
  • Enterprise teams on version-pinned contracts
  • Evaluation and regression testing against 4.6
View Claude Opus 4.6 details
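
Taken together, the two checklists boil down to a short routing rule: stay on 4.6 when you are contractually pinned or the prompt fits with headroom, and take the 1M window on 4.7 otherwise. A sketch of that rule (the model IDs and the 80% headroom factor are our assumptions for illustration, not Anthropic's identifiers or guidance):

```python
# A sketch of the decision rule implied by the two checklists above.
# Model IDs and the headroom constant are assumptions; check Anthropic's
# docs for the real identifiers.

OPUS_46_CONTEXT = 200_000

def pick_model(estimated_prompt_tokens: int, version_pinned: bool) -> str:
    """Route to 4.6 when contractually pinned or the prompt fits comfortably;
    otherwise take the 1M window on 4.7."""
    if version_pinned:
        return "claude-opus-4.6"  # hypothetical ID
    if estimated_prompt_tokens <= OPUS_46_CONTEXT * 0.8:  # leave headroom for output
        return "claude-opus-4.6"  # hypothetical ID
    return "claude-opus-4.7"      # hypothetical ID

print(pick_model(150_000, version_pinned=False))  # claude-opus-4.6
print(pick_model(600_000, version_pinned=False))  # claude-opus-4.7
```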

Frequently Asked Questions

Which is better, Claude Opus 4.7 or Claude Opus 4.6?

It depends on your use case. Claude Opus 4.7 excels at workloads that benefit from its 1M-token context window, while Claude Opus 4.6 remains the safer choice for existing prompts tuned to its behavior. See the full comparison above for detailed benchmarks and pricing.

How much does Claude Opus 4.7 cost compared to Claude Opus 4.6?

Both models cost the same: $15.00 per 1M input tokens and $75.00 per 1M output tokens. The upgrade to 4.7 carries no price change.

What is the context window difference between Claude Opus 4.7 and Claude Opus 4.6?

Claude Opus 4.7 supports 1M tokens, while Claude Opus 4.6 supports 200K tokens.
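
To decide whether a workload actually clears the 200K boundary, a crude character-count pass is often enough before reaching for a real tokenizer. A sketch assuming the common ~4 characters-per-token rule of thumb (an approximation, not Anthropic's tokenizer, so it can be off by 20% or more):

```python
# Rough fit check for the 200K window using the ~4 characters-per-token
# heuristic. This approximates the real tokenizer; use an actual token
# counter for anything close to the limit.
from pathlib import Path

OPUS_46_CONTEXT = 200_000

def rough_token_count(paths: list[Path]) -> int:
    """Sum an approximate token count over a set of text files."""
    chars = sum(len(p.read_text(errors="ignore")) for p in paths)
    return chars // 4

files = list(Path("src").rglob("*.py"))  # hypothetical repo layout
tokens = rough_token_count(files)
print(f"~{tokens:,} tokens; fits in 200K: {tokens <= OPUS_46_CONTEXT}")
```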
