Arcee Trinity-Large-Thinking vs MiniMax M2 / M2.5

Which one should you pick? Here's the full breakdown.

Arcee Trinity-Large-Thinking

A (8.1/10)

Arcee AI's US-made open-weight frontier reasoning model, launched 2026-04-01. 398B total parameters with ~13B active per token, via a sparse MoE with 256 experts and 4 active (1.56% expert routing). Apache 2.0 licensed and trained from scratch. #2 on PinchBench, trailing only Claude 3.5 Opus, and ~96% cheaper than Opus 4.6 on agentic tasks.
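A quick sanity check on those sparsity figures. This is illustrative arithmetic only; the active-parameter count exceeds the pure expert-routing fraction because attention and other shared layers run for every token.

```python
# Expert routing: 4 of 256 experts fire per token.
total_experts = 256
active_experts = 4
routing_fraction = active_experts / total_experts   # 0.015625 -> 1.56%

# Parameter counts quoted above (billions); active includes shared layers.
total_params_b = 398
active_params_b = 13
active_fraction = active_params_b / total_params_b  # ~3.3%

print(f"Expert routing:    {routing_fraction:.2%}")         # 1.56%
print(f"Active parameters: {active_fraction:.1%} of total")  # 3.3%
```

So roughly 97% of the weights sit idle on any given token, which is where the inference-cost advantage over dense frontier models comes from.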

Our Pick

MiniMax M2 / M2.5

A (8.4/10)

MiniMax's open-weights frontier model, and the first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost.

| Category | Arcee Trinity-Large-Thinking | MiniMax M2 / M2.5 |
| --- | --- | --- |
| Ease of Use | 6.0 | 6.5 |
| Output Quality | 9.0 | 9.0 |
| Value | 9.5 | 9.5 |
| Features | 8.0 | 8.5 |
| Overall | 8.1 | 8.4 |

Pricing Comparison

| Feature | Arcee Trinity-Large-Thinking | MiniMax M2 / M2.5 |
| --- | --- | --- |
| Free Tier | Yes | Yes |
| Starting Price | $0 | $0 |

Benchmark Head-to-Head

Scores below are for MiniMax M2.5 (230B total / 10B active MoE). Arcee Trinity-Large-Thinking has no published benchmarks, so a direct head-to-head isn't possible.

| Benchmark | Score |
| --- | --- |
| MMLU-Pro | 82.1% |
| GPQA Diamond | 76.8% |
| SWE-Bench Verified | 80.2% |
| HumanEval | 91% |
| AIME 2025 | 85.3% |

Which Should You Pick?

Pick Arcee Trinity-Large-Thinking if...

You need a US-made, Apache 2.0, frontier-tier open-weight model and can either rent multi-GPU infrastructure or pay OpenRouter API pricing at ~$0.90/M output tokens. It's particularly valuable for US government, defense, or regulated enterprise contexts where country of origin matters for procurement, and for agentic reasoning workloads where the ~96% cost savings vs Claude Opus actually changes what you can build.
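To make that rate concrete, here's what the quoted ~$0.90/M output-token price works out to per request. The request size is a made-up example, not a measured workload.

```python
# OpenRouter rate quoted above: ~$0.90 per 1M output tokens.
price_per_m_output = 0.90

# Hypothetical single agentic step producing 2,000 output tokens.
output_tokens = 2_000

cost = output_tokens / 1_000_000 * price_per_m_output
print(f"${cost:.4f} per request")  # $0.0018
```

At that rate, a million such requests cost $1,800 in output tokens, which is the scale at which the "~96% cheaper" framing starts to matter for what you can afford to build.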

Visit Arcee Trinity-Large-Thinking

Pick MiniMax M2 / M2.5 if...

You run agentic coding and tool-use workflows on a budget. MiniMax offers the best price-to-SWE-Bench ratio of any open-weights model in 2026.

Visit MiniMax M2 / M2.5

Our Verdict

MiniMax M2 / M2.5 edges out Arcee Trinity-Large-Thinking with an 8.4 vs. 8.1 overall score. Both are solid picks, but MiniMax M2 / M2.5 has the advantage in features and ease of use.