# ARC-AGI: 2026 AI Leaderboard
Abstract visual reasoning puzzles designed to stay hard for LLMs.
## What it tests
ARC-AGI (Abstraction and Reasoning Corpus) is a set of grid-based visual puzzles: a model sees a few input/output example grids and must infer the transformation rule, then apply it to a new test grid. Each puzzle is designed to require a novel abstraction rather than a pattern the model could recall from its training data.
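To make the task format concrete, here is a minimal sketch (not the official ARC JSON format) of an ARC-style task: grids are lists of lists of color indices, and a solver must find one rule that maps every training input to its output before applying it to a test input. The `mirror` rule and the grids are invented for illustration.

```python
def applies(rule, pairs):
    """Check a candidate rule against all training input/output pairs."""
    return all(rule(inp) == out for inp, out in pairs)

# Toy task: the hidden rule mirrors each grid left-to-right.
train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 4], [0, 5]], [[4, 3], [5, 0]]),
]

def mirror(grid):
    # Reverse each row: a left-right flip of the grid.
    return [row[::-1] for row in grid]

assert applies(mirror, train_pairs)
print(mirror([[7, 0, 0]]))  # rule applied to an unseen test input -> [[0, 0, 7]]
```

The hard part, of course, is that real ARC tasks hide the rule: a solver must search an open-ended space of transformations, not verify one handed to it.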
## How it is scored
Accuracy on a held-out set of puzzles, reported as a percentage. A 50% score is widely treated as a major frontier milestone. ARC-AGI-2 is the current, harder version; the ARC Prize offered $1M for solving it.
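Scoring reduces to the fraction of held-out tasks solved. The sketch below assumes the common ARC convention of allowing a small fixed number of attempts per task; the leaderboard's exact protocol may differ, and the example data is invented.

```python
def score(predictions, solutions, attempts_allowed=2):
    """predictions: one list of attempted grids per task; solutions: one grid per task.
    A task counts as solved if any allowed attempt matches its solution exactly."""
    solved = sum(
        any(attempt == sol for attempt in attempts[:attempts_allowed])
        for attempts, sol in zip(predictions, solutions)
    )
    return 100.0 * solved / len(solutions)

preds = [[[[1]], [[2]]], [[[9]]]]  # task 1: two attempts; task 2: one attempt
sols  = [[[2]], [[0]]]             # task 1 solved on the 2nd try, task 2 missed
print(score(preds, sols))          # -> 50.0
```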
## Why it matters
ARC-AGI is the benchmark explicitly designed to resist scaling. Strong performance suggests genuine abstract-reasoning capability rather than pattern completion, which makes it useful for distinguishing models that are reasoning from models that are retrieving training-data patterns.
## Leaderboard (3 models)
Sorted by ARC-AGI score. The Tier column shows the tool's overall AIToolTier rank, which blends this benchmark with pricing, features, and real-world usability.
| # | Model | Tier | ARC-AGI score | Variant | Overall |
|---|---|---|---|---|---|
| 1 | Gemini (Google) Gemini 3.1 Ultra | A | 77.1% | ARC-AGI | 8.3/10 |
| 2 | Claude (Anthropic) Claude Opus 4.7 (score shown is the 4.6 baseline; 4.7 announced a 13% coding lift and 3x production-task completion) | A | 75.2% | ARC-AGI | 8.5/10 |
| 3 | ChatGPT GPT-5.4 | A | 73.3% | ARC-AGI | 8.8/10 |
## About ARC-AGI
- Creator: François Chollet, 2019 (v2 released 2025)
- Unit: % (max 100)
- Official source: https://arcprize.org/