
Claude Opus 4.7

Flagship model by Anthropic

Claude Opus 4.7 is Anthropic's newest flagship, released April 17, 2026. The headline change is a 1 million token context window at the same API pricing as 4.6, alongside incremental gains on reasoning, math, and SWE-bench. It is built for long-context code work, multi-document analysis, and agent workflows that span hours of tool calls.
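
These long-context use cases map onto an ordinary Messages API call. Below is a minimal sketch using the Anthropic Python SDK; the model ID claude-opus-4-7 and the repo_dump.txt input file are illustrative assumptions, not confirmed identifiers.

```python
# Minimal long-context request sketch via the Anthropic Python SDK.
# "claude-opus-4-7" is an assumed model ID for illustration; check the
# official model list for the real identifier.
import anthropic
from pathlib import Path

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# e.g. a concatenated codebase dump that fits inside the 1M-token window
repo_dump = Path("repo_dump.txt").read_text()

response = client.messages.create(
    model="claude-opus-4-7",  # assumed model ID
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": f"Here is our codebase:\n\n{repo_dump}\n\n"
                   "Identify modules that share duplicated logic.",
    }],
)
print(response.content[0].text)
```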

Input Price: $15.00 per 1M tokens
Output Price: $75.00 per 1M tokens
Context Window: 1M tokens
Released: 2026-04 (API access)
Capabilities: text, vision, tool use, code

Key Strengths

  • 1M token context window
  • Leads on HumanEval and SWE-bench
  • Same price as 4.6 with 5x context
  • Strongest agentic tool use to date

Best For

  • Whole-repository refactors
  • Multi-document research synthesis
  • Long-running agent workflows (see the tool-use sketch below)
  • Extended codebase debugging
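
For the agent workflows above, tool use follows the standard Messages API pattern: declare tools with JSON schemas, then loop while the model emits tool_use blocks. A hedged sketch, again assuming the claude-opus-4-7 model ID and a hypothetical run_tests tool:

```python
# Sketch of the agentic tool-use pattern with the Anthropic Python SDK.
# The model ID and the run_tests tool are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "run_tests",  # hypothetical tool defined for this sketch
    "description": "Run the project's test suite and return the output.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}]

messages = [{"role": "user", "content": "Fix the failing tests in src/."}]
response = client.messages.create(
    model="claude-opus-4-7",  # assumed model ID
    max_tokens=2048,
    tools=tools,
    messages=messages,
)

# A full agent would loop: execute each tool_use block, append a
# tool_result message, and call the API again until stop_reason
# is no longer "tool_use".
for block in response.content:
    if block.type == "tool_use":
        print("Model requested tool:", block.name, block.input)
```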

Benchmark Scores

Benchmark       Score   Description
MMLU-Pro        93.8    General knowledge and reasoning across 14 disciplines
HumanEval       96.2    Python code generation and problem solving
GPQA Diamond    76.5    Graduate-level science questions verified by domain experts
MATH            93.1    Competition-level mathematics problems
SWE-bench       65.4    Real-world software engineering tasks from GitHub issues

Scores sourced from public benchmark datasets. See full benchmark leaderboard for all models.

Pricing Details

Input tokens: $15.00 per 1M tokens
Output tokens: $75.00 per 1M tokens
Estimated cost per 1K requests: $52.50 (assuming ~1K input + ~500 output tokens per request)

Prices are subject to change. Check the official documentation for current pricing. See the cost calculator for detailed estimates.
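
As a sanity check on the estimate above, here is a short Python sketch that derives the $52.50 figure from the listed per-million-token prices. It is pure arithmetic with no API calls.

```python
# Reproduces the "$52.50 per 1K requests" estimate from the listed prices.
INPUT_PRICE_PER_MTOK = 15.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_MTOK = 75.00  # USD per 1M output tokens

def estimated_cost(n_requests: int, in_tok: int, out_tok: int) -> float:
    """Estimated USD cost for n_requests averaging in_tok input
    and out_tok output tokens each."""
    input_cost = n_requests * in_tok / 1_000_000 * INPUT_PRICE_PER_MTOK
    output_cost = n_requests * out_tok / 1_000_000 * OUTPUT_PRICE_PER_MTOK
    return input_cost + output_cost

# 1K requests x 1K input tokens  = 1M input tokens   -> $15.00
# 1K requests x 500 output tokens = 0.5M output tokens -> $37.50
print(estimated_cost(1_000, 1_000, 500))  # 52.5
```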
