ARC-AGI: 2026 AI Leaderboard

Abstract visual reasoning puzzles designed to stay hard for LLMs.

What it tests

ARC-AGI (Abstraction and Reasoning Corpus) is a set of grid-based visual puzzles: a model sees a few input/output example grids, must infer the transformation rule, and then apply it to a new input. Each puzzle is designed to require novel abstraction rather than recall of patterns from training data.
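To make the setup concrete, here is a minimal Python sketch of how ARC tasks are represented in the public ARC repository: JSON objects with "train" demonstration pairs and held-out "test" pairs, where each grid is a 2-D list of integers 0-9 encoding colors. The toy rule and the `flip_horizontal` helper below are illustrative, not part of the benchmark itself.

```python
# Minimal sketch of the ARC task format: "train" demonstration pairs
# plus held-out "test" pairs; grids are 2-D lists of ints 0-9 (colors).
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},  # solver must predict the output grid
    ],
}

def flip_horizontal(grid):
    """Mirror each row left-to-right -- the hidden rule in this toy task."""
    return [row[::-1] for row in grid]

# A solver never sees the rule; it must infer it from the train pairs
# alone and then apply it to the test input.
for pair in task["train"]:
    assert flip_horizontal(pair["input"]) == pair["output"]

print(flip_horizontal(task["test"][0]["input"]))  # [[0, 3], [3, 0]]
```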

How it is scored

Accuracy on held-out puzzles: a prediction counts only if the output grid matches the target exactly. A 50% score is considered a major frontier milestone. ARC-AGI-2 is the harder current version, and a $1M prize has been offered for solving it.
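As a concrete reading of that metric, here is a minimal Python sketch of exact-match scoring over held-out tasks. The `solver` interface, `grids_equal`, and `score` are hypothetical names chosen for illustration; the two-attempts-per-test-input allowance follows the ARC Prize convention and is an assumption about how a given leaderboard grades.

```python
def grids_equal(a, b):
    """Exact match: same dimensions and the same value in every cell."""
    return a == b  # nested Python lists compare elementwise

def score(tasks, solver, attempts=2):
    """Fraction of held-out test inputs solved exactly.

    `solver` is any callable mapping (train_pairs, test_input) to a
    list of candidate output grids; an input counts as solved if any
    of the first `attempts` candidates matches the hidden output.
    The two-attempt allowance follows ARC Prize scoring rules.
    """
    solved = total = 0
    for task in tasks:
        for pair in task["test"]:  # the scorer holds the hidden outputs
            total += 1
            candidates = solver(task["train"], pair["input"])[:attempts]
            if any(grids_equal(c, pair["output"]) for c in candidates):
                solved += 1
    return solved / total
```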

Why it matters

ARC-AGI is the benchmark designed to resist scaling. Strong performance suggests genuine abstract-reasoning capability rather than pattern completion, which makes it useful for distinguishing models that are 'thinking' from models that are 'searching training data'.

Leaderboard (3 models)

Sorted by ARC-AGI score. The Tier column shows the tool's overall AIToolTier rank, which blends this benchmark with pricing, features, and real-world usability.

#   Model                          Tier   ARC-AGI score
1   Gemini 3.1 Ultra (Google)      A      77.1%
2   Claude Opus 4.7 (Anthropic)*   A      75.2%
3   GPT-5.4 (ChatGPT)              A      73.3%

* 4.6 baseline score shown; Anthropic announced a 13% coding lift and 3x production task completion for 4.7.

About ARC-AGI

Creator: François Chollet, 2019 (v2 2024)
Unit: % (max 100)
Official source: https://arcprize.org/
