Is Arcee Trinity-Large-Thinking Down?

Quickest ways to check the current status of arcee.ai, plus recent known issues and working alternatives if it's down.

Last editorial review: 2026-04-17

How to check right now
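Beyond the provider's own status page, a quick programmatic probe can triage availability from your network. A minimal sketch using only the standard library -- the URL shown is a placeholder, not a real endpoint; substitute the API base or health-check URL from your provider's documentation:

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 5.0) -> int:
    """GET the URL and return the HTTP status code (0 = unreachable)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code          # server answered, but with an error status
    except (urllib.error.URLError, OSError):
        return 0               # DNS failure, refused connection, or timeout

def availability(code: int) -> str:
    """Rough triage of a probe result into up / degraded / down."""
    if 200 <= code < 300:
        return "up"
    if code in (429, 503):
        return "degraded"      # rate-limited or temporarily overloaded
    return "down"

# Placeholder URL -- replace with the real endpoint from the provider's docs:
# print(availability(probe("https://example.com/health")))
```

A 429 or 503 usually means the service is reachable but shedding load, which calls for retry with backoff rather than switching providers.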

Known issues we've tracked

Third-party benchmark cross-validation is still landing. The PinchBench #2 ranking is Arcee's own evaluation -- Artificial Analysis, LMArena, and similar independent leaderboards are still adding Trinity through April-May 2026. Treat the 'Opus-tier' claim as provisional.
Sources (2026-04): Arcee launch announcement, VentureBeat coverage
Community quantizations of the 256-expert MoE routing layers showed issues at Q3 and below during the first week post-launch. Q5 is the practical sweet spot as of mid-April 2026.
Sources (2026-04): Reddit r/LocalLLaMA, Hugging Face discussions

Issues here are sourced from our editorial sweeps, not real-time telemetry. Newer issues may exist.

What to use if Arcee Trinity-Large-Thinking is down

Top AIToolTier-ranked alternatives in the same category, ordered by our overall score.

About Arcee Trinity-Large-Thinking

Tier A (8.1/10). Arcee AI's US-made open-weight frontier reasoning model -- launched 2026-04-01. 398B total params, ~13B active. Sparse MoE (256 experts, 4 active = 1.56% routing). Apache 2.0, trained from scratch. #2 on PinchBench, trailing only Claude 3.5 Opus. ~96% cheaper than Opus-4.6 on agentic tasks.
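The routing figure is easy to verify from the spec above. Note that the quoted ~13B active parameters is larger than the expert-only fraction would give -- plausibly because active params also count the model's shared/dense layers, though that breakdown is an assumption, not something stated in the announcement:

```python
TOTAL_PARAMS = 398e9            # 398B total parameters
EXPERTS, ACTIVE_EXPERTS = 256, 4

routing = ACTIVE_EXPERTS / EXPERTS
print(f"routing fraction: {routing:.2%}")   # 1.56%, matching the spec

# Expert-only share of active params -- well under the quoted ~13B,
# so the remainder would be shared/dense layers (assumption, see above).
print(f"expert-only active: {TOTAL_PARAMS * routing / 1e9:.1f}B")
```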