AI Benchmarks
Compare leading AI models across standardized benchmarks. Last updated 2026-04-17.
How do you know if Claude is smarter than GPT-4? How does the new Llama 4 stack up against Gemini 2.5? Benchmarks provide the answer. These standardized tests measure specific AI capabilities across diverse domains and let us compare models objectively. They're imperfect (benchmarks are often gamed), but they're the only shared language we have for understanding AI progress.
MMLU measures broad knowledge through multiple-choice questions spanning chemistry, history, law, and 50+ other domains. A score of 92 percent means the model answers, on average, 92 of every 100 questions correctly across all topics. MMLU is the closest thing we have to a general intelligence test for AI. HumanEval tests code generation: the model writes functions to solve programming problems that humans created. GPQA (Graduate-Level Google-Proof Q&A) is deliberately hard, asking questions that resist simple lookup and require deep domain expertise. MATH benchmarks competition-level mathematical reasoning. SWE-bench tests software engineering tasks: given a real GitHub issue and its repository, can the model produce a patch that makes the failing tests pass?
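These benchmarks reduce to different scoring rules. Multiple-choice sets like MMLU report plain accuracy; HumanEval reports pass@k, the probability that at least one of k sampled solutions passes the tests, estimated without bias from n generations per problem. A minimal sketch of that estimator (the metric is standard; the sample counts below are hypothetical):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: chance that at least one of k samples,
    drawn from n generations of which c are correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical run: 10 generations per problem, 4 passed the unit tests.
print(pass_at_k(10, 4, 1))  # 0.4 — fraction correct on a single draw
```

Reporting pass@1 this way, rather than from a single sample, reduces variance without biasing the estimate.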
No single benchmark captures everything. A model that excels at MMLU might struggle with code. Benchmark questions leak into training data and get memorized. And real-world performance depends on your specific task, how you prompt, and how you integrate the model into your system. Use this data to narrow the field of candidates, then test the finalists on your actual workloads. We've also collected this data in our model comparison tool for side-by-side analysis.
MMLU-Pro: General knowledge and reasoning across 57 subjects. Max score: 100.
| Rank | Model | Provider | Score ↓ | Released |
|---|---|---|---|---|
| #1 | Claude Opus 4.7 | Anthropic | 93.8 | 2026-04 |
| #2 | Claude Opus 4.6 | Anthropic | 92.4 | 2026-03 |
| #3 | o1 | OpenAI | 91.8 | 2025-09 |
| #4 | Gemini 2.5 Pro | Google | 91.2 | 2026-01 |
| #5 | GPT-4.5 | OpenAI | 90.1 | 2025-12 |
| #6 | Llama 4 Maverick | Meta | 89.3 | 2026-03 |
| #7 | Claude Sonnet 4.6 | Anthropic | 88.7 | 2026-02 |
| #8 | DeepSeek V3 | DeepSeek | 88.1 | 2025-12 |
| #9 | GPT-4o | OpenAI | 87.2 | 2025-05 |
| #10 | Mistral Large | Mistral | 86.8 | 2025-11 |
| #11 | o3-mini | OpenAI | 86.3 | 2025-11 |
| #12 | Llama 4 Scout | Meta | 85.9 | 2026-02 |
| #13 | Gemini 2.0 Flash | Google | 84.5 | 2025-10 |
| #14 | Claude Haiku 4.5 | Anthropic | 82.1 | 2026-01 |
| #15 | Mistral Small | Mistral | 78.4 | 2025-09 |