Arcee Trinity-Large-Thinking
A Tier · 8.1/10
Arcee AI's US-made open-weight frontier reasoning model, launched 2026-04-01. 398B total parameters, ~13B active. Sparse MoE (256 experts, 4 active ≈ 1.56% routing). Apache 2.0, trained from scratch. #2 on PinchBench, trailing only Claude Opus 4.6. ~96% cheaper than Opus 4.6 on agentic tasks.
The Good and the Bad
What we like
- +Rare US-made frontier-tier open-weight reasoning model -- fills the gap that Reflection AI has been teasing but has not yet shipped. With Llama 5 still unconfirmed, Arcee Trinity is the strongest available US-made open frontier option as of April 2026
- +Trained from scratch (not a fine-tune) at 398B total params with genuinely novel 256-expert MoE architecture. This is real frontier-scale training from a US startup, not a re-distillation -- a meaningful proof point for the US open-weight ecosystem
- +#2 on PinchBench, trailing only Claude Opus 4.6. Beats DeepSeek, Qwen, and most other open-weight competitors on agentic reasoning in that specific evaluation. Third-party benchmarks beyond PinchBench are still landing through Q2 2026
- +~96% cheaper than Claude Opus 4.6 on equivalent agentic tasks (per Arcee's own cost modeling) -- the sparse MoE routing is aggressive enough that per-token economics are closer to a 13B dense model than a 398B one (see the arithmetic sketch after this list)
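A quick back-of-envelope check on that routing claim, using only the figures Arcee has published; the split between shared layers and expert weights is glossed over here, so treat this as illustrative arithmetic rather than the model's actual layer layout:

```python
# Rough arithmetic behind "prices like a ~13B dense model".
# The parameter counts are the published specs; everything else is illustration.
total_params  = 398e9   # all 256 experts must stay resident in memory
active_params = 13e9    # parameters touched per token (4 experts + shared layers)
experts_total, experts_active = 256, 4

print(f"expert routing fraction:   {experts_active / experts_total:.2%}")  # ~1.56%
print(f"active parameter fraction: {active_params / total_params:.2%}")    # ~3.27%

# Per-token compute (and therefore serving cost) scales with the *active*
# parameters, so inference FLOPs resemble a 13B dense model even though the
# full 398B must stay loaded for routing.
```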
What could be better
- −Fresh as of 2026-04-01 -- third-party benchmark verification beyond PinchBench is still lagging. The 'competitive with Opus' claim is plausible but not yet cross-validated by Artificial Analysis, LMArena, or tier-1 press evaluations
- −Arcee AI is a smaller US startup -- production-scale support, fine-tuning ecosystem, and community fine-tunes are thinner than what Llama, Qwen, or DeepSeek offer. Enterprise adoption will be gated by Arcee's ability to grow those resources
- −Requires multi-GPU infrastructure to self-host at full capacity -- 398B total params means even with MoE routing, the inactive experts still need to fit in memory. Realistic self-hosting starts at 4× H100 or equivalent
- −First-release model. Expect rough edges on instruction-following, long-horizon coherence, and multilingual performance versus more iterated families like Qwen 3.6 or GLM-5.1
Pricing
Self-hosted (Apache 2.0)
- ✓Trained from scratch, not a fine-tune of an existing model
- ✓Apache 2.0 license, unrestricted commercial use
- ✓Weights on Hugging Face (a loading sketch follows this list)
- ✓256-expert sparse MoE with 4 experts active (~1.56% routing)
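For the self-hosted path, a minimal loading sketch with Hugging Face transformers is below. The repo id is a guess for illustration (check Arcee's Hugging Face org for the real model card), and a custom MoE architecture may additionally require trust_remote_code=True:

```python
# Minimal self-hosting sketch (hypothetical repo id -- verify on Hugging Face).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Trinity-Large-Thinking"  # assumed name, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full precision: expect multi-GPU (see System Requirements)
    device_map="auto",           # shard the experts across all visible GPUs
)

prompt = "Outline a migration plan from REST to gRPC for an internal service."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```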
API (OpenRouter, Trinity-Large-Thinking)
- ✓Available on OpenRouter for hosted inference (see the request sketch after this list)
- ✓~96% cheaper than Claude Opus 4.6 at the same quality tier on agentic tasks
- ✓Pay-as-you-go
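A hosted-inference sketch against OpenRouter's OpenAI-compatible endpoint follows. The model slug is an assumption for illustration; confirm the exact id on openrouter.ai before relying on it:

```python
# Hosted inference via OpenRouter (model slug is assumed, not confirmed).
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="arcee-ai/trinity-large-thinking",  # hypothetical slug
    messages=[{"role": "user",
               "content": "Break this task into tool calls: audit a repo for flaky tests."}],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```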
System Requirements
Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.
| Model variant | Min | Max |
|---|---|---|
| Arcee Trinity-Large-Thinking (398B total / 13B active MoE), Apache 2.0, trained from scratch by Arcee AI (US) | 4× H100 80 GB or equivalent (256-expert MoE keeps inactive experts in memory too) | 8× H100 or 4× H200 for production serving |
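The back-of-envelope arithmetic behind those rows, assuming typical bits-per-weight for common quantization formats and a rough 1.2× overhead for KV cache, activations, and routing buffers (our assumption, not a measured figure):

```python
# Rough VRAM estimate for holding all 398B parameters (a sketch, not a measurement).
TOTAL_PARAMS = 398e9
OVERHEAD = 1.2  # KV cache + activations + expert-routing buffers (assumed)

formats = {
    "bf16 (full precision)": 16,
    "Q8 / fp8 (typical production serving)": 8,
    "Q5 (reported sweet spot)": 5,
    "Q3 (routing-layer issues reported)": 3,
}

for name, bits in formats.items():
    gib = TOTAL_PARAMS * bits / 8 * OVERHEAD / 2**30
    print(f"{name:>40}: ~{gib:,.0f} GiB (~{gib / 80:.1f}x H100-80GB, ignoring parallelism overhead)")
```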
Known Issues
- Third-party benchmark cross-validation still landing. PinchBench #2 ranking is Arcee's own evaluation -- Artificial Analysis, LMArena, and similar independent leaderboards are still adding Trinity through April-May 2026. Treat the 'Opus-tier' claim as provisional. Source: Arcee launch announcement, VentureBeat coverage · 2026-04
- Community quantizations for the 256-expert MoE routing layers showed issues at Q3 and below during the first week post-launch. Q5 is the practical sweet spot as of mid-April 2026. Source: Reddit r/LocalLLaMA, Hugging Face discussions · 2026-04
Best for
Teams that need a US-made, Apache 2.0, frontier-tier open-weight model and can either rent multi-GPU infrastructure or pay OpenRouter API pricing at ~$0.90/M output tokens. Particularly valuable for US government, defense, or regulated enterprise contexts where country-of-origin matters for procurement. Also good for agentic reasoning workloads where the ~96% cost savings vs Claude Opus actually changes what you can build.
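For concreteness, here is the cost comparison implied by the two figures above; the Opus-tier price is back-solved from the ~96% claim rather than quoted from a price list, and the workload is an invented example:

```python
# Implied price comparison (derived from figures in this review, not quoted prices).
trinity_out_per_m = 0.90   # USD per 1M output tokens (cited above)
savings = 0.96             # the "~96% cheaper" claim

implied_opus_tier_per_m = trinity_out_per_m / (1 - savings)
print(f"implied Opus-tier output price: ~${implied_opus_tier_per_m:.2f}/M")  # ~$22.50/M

# Illustrative agentic workload: 200 runs/day, ~50k output tokens per run.
tokens = 200 * 50_000
print(f"Trinity:   ~${tokens / 1e6 * trinity_out_per_m:,.2f}/day")            # ~$9/day
print(f"Opus-tier: ~${tokens / 1e6 * implied_opus_tier_per_m:,.2f}/day")      # ~$225/day
```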
Not for
Absolute beginners or low-budget experimenters -- the 398B MoE needs real hardware or real API spend. Also not ideal if community ecosystem / fine-tune availability matters to you -- Qwen 3.6 and Llama 4 both have deeper third-party support. And not the right pick for multilingual or non-English use cases -- Arcee Trinity is English-first.
Our Verdict
Arcee Trinity-Large-Thinking is the most consequential US-made open-weight launch since Meta's Llama 4. A tiny US startup shipping a 398B-parameter sparse-MoE frontier reasoning model, trained from scratch, under Apache 2.0, priced ~96% below Claude Opus -- that is genuinely a new category of competitor in the open-weight ecosystem. The third-party benchmark verification is still landing, so treat the 'Opus-tier' positioning as provisional through April 2026. But even if Trinity lands at 80% of the claimed quality, it is the strongest US-made open-weight frontier option available today, and for US procurement / country-of-origin-sensitive deployments it fills a real gap that nobody else had solved.
Sources
- Arcee AI: Trinity-Large-Thinking (accessed 2026-04-17)
- VentureBeat: Arcee's open-source Trinity (accessed 2026-04-17)
- TechCrunch: Arcee AI 400B open-source LLM from scratch (accessed 2026-04-17)
Alternatives to Arcee Trinity-Large-Thinking
Llama 4 (Meta)
Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview
Mistral AI
European AI lab with open and commercial models -- Mistral Small 4 (Mar 2026, 119B MoE Apache 2.0 unified model), Medium 3 (Apr 9 2026), and Voxtral TTS (open-source speech, Mar 2026)
DeepSeek
Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous
Gemma 4 (Google)
Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices
Qwen (Alibaba)
Alibaba's open-weights + API family -- Qwen 3.6-Plus (Mar 30 2026, 1M context + always-on CoT + agentic tool-use), Qwen3.5 Small (2B runs on iPhone, 9B matches 120B-class models), plus Qwen3.5-Omni native multimodal. Apache 2.0 on the open sizes
GLM / Z.ai (Zhipu AI)
Zhipu AI's open-weights family -- GLM-5.1 (launched 2026-04-07) is 744B MoE / 40B active, topped SWE-Bench Pro at 58.4 (beating GPT-5.4 and Claude Opus 4.6), MIT licensed, 200K context. Trained entirely on 100K Huawei Ascend 910B chips -- first frontier model with zero Nvidia in the training stack
Kimi K2.5 (Moonshot)
Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5
Nemotron (Nvidia)
Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware
MiniMax M2 / M2.5
MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost
Falcon (TII)
UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware
gpt-oss (OpenAI)
OpenAI's first open-weight models since GPT-2 -- gpt-oss-120b (single 80GB GPU, near parity with o4-mini on reasoning) and gpt-oss-20b (runs on 16GB edge devices). Apache 2.0. Launched 2025-08-05. gpt-oss-safeguard ships in 2026 as the safety-tuned variant
IBM Granite 4.0
IBM's enterprise-focused open-weight family -- Granite 4.0 hybrid Mamba-2 + transformer architecture (70-80% memory reduction vs pure transformer), 3B to 32B sizes, Apache 2.0. First open model family to secure ISO 42001 certification. Nano 350M runs on CPU with 8-16GB RAM. 3B Vision variant landed 2026-04-01
Olmo 3 (AI2)
Allen Institute for AI's fully-open frontier reasoning models -- Olmo 3 family (2025-11-20) includes 7B and 32B sizes, four variants (Base, Think, Instruct, RLZero). Apache 2.0 with fully open data + checkpoints + training logs. Olmo 3-Think 32B matches Qwen3-32B-Thinking at 6x fewer training tokens
AI21 Jamba2
AI21 Labs' hybrid SSM-Transformer (Mamba-style) open-weight family -- Jamba2 launched 2026-01-08. Two sizes: 3B dense (runs on phones / laptops) and Jamba2 Mini MoE (12B active / 52B total). Apache 2.0, 256K context, mid-trained on 500B tokens
StepFun Step 3.5 Flash
StepFun's (China) agent-focused open-weight model -- Step 3.5 Flash launched 2026-02-01. 196B sparse MoE, ~11B active. Benchmarks slightly ahead of DeepSeek V3.2 at over 3x smaller total size. Step 3 (321B / 38B active, Apache 2.0) and Step3-VL-10B multimodal also in the family
Cohere Command A
Cohere's enterprise-multilingual flagship -- 111B params, 256K context, runs on 2x H100. 23 languages. CC-BY-NC 4.0 on weights (research / non-commercial), commercial requires Cohere enterprise contract. Follow-ups: Command A Reasoning + Command A Vision