Best Local & Open-Weight LLMs (2026)
Open-weight and self-hostable large language models from Chinese, American, European, and Middle Eastern labs -- Qwen, DeepSeek, GLM, Kimi, Llama, Gemma, Mistral, Nemotron, MiniMax, Falcon, and more. Benchmarks, pricing, and hardware requirements (min/mid/max) for running each model locally.
17 model families ranked on an S-to-F scale (every entry currently lands in the A or B tier).
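Before the rankings, a rough way to translate the parameter counts below into the min/mid/max hardware figures used in the per-model reviews. This is a back-of-envelope sketch, not a vendor spec: it assumes memory is dominated by the weights at a chosen quantization width, plus a flat ~20% allowance for KV cache and runtime overhead (real figures vary with context length and inference engine).

```python
# Back-of-envelope memory estimate for hosting a model locally.
# Assumption (not a vendor spec): resident memory ~= weight bytes
# at the chosen bit width, plus ~20% for KV cache and runtime.

def est_memory_gb(params_billion: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Estimated resident memory in GB for inference."""
    return params_billion * bits_per_weight / 8 * overhead

# gpt-oss-20b at ~4-bit: ~12 GB, consistent with the "16GB edge
# devices" claim in its row below.
print(f"gpt-oss-20b  @ 4-bit: {est_memory_gb(20, 4):6.0f} GB")
# gpt-oss-120b at ~4-bit: ~72 GB, consistent with "single 80GB GPU".
print(f"gpt-oss-120b @ 4-bit: {est_memory_gb(120, 4):6.0f} GB")
# Kimi K2.5's 1T total params: ~600 GB even at 4-bit. MoE models
# activate few params per token, but every expert must stay
# resident, which is why its Ease score lags.
print(f"Kimi K2.5 1T @ 4-bit: {est_memory_gb(1000, 4):6.0f} GB")
```

Double the bit width (8-bit, or 16 for fp16) and the footprint roughly doubles.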
Full ranking
Sorted by overall score; all sub-scores are out of 10. Click any model for the full review.
| # | Model | Tier | Overall | Ease | Output | Value | Features |
|---|---|---|---|---|---|---|---|
| 1 | Qwen (Alibaba) Alibaba's open-weights + API family -- Qwen3.6-Max-Preview (Apr 20 2026, closed weights, #1 on SWE-bench Pro, Terminal-Bench 2.0, and SciCode), Qwen3.6-35B-A3B (Apr 16, open-weights coding champion), plus the Qwen3.6-Plus API flagship. Apache 2.0 on most sizes, but the best model is now proprietary | A | 8.8 | 7 | 9 | 10 | 9 |
| 2 | MiniMax M2 / M2.5 MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost | A | 8.4 | 6.5 | 9 | 9.5 | 8.5 |
| 3 | Gemma 4 (Google) Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices | A | 8.3 | 7 | 8 | 10 | 8 |
| 4 | IBM Granite 4.0 IBM's enterprise-focused open-weight family -- Granite 4.0 hybrid Mamba-2 + transformer architecture (70-80% memory reduction vs pure transformer), 3B to 32B sizes, Apache 2.0. First open model family to secure ISO 42001 certification. Nano 350M runs on CPU with 8-16GB RAM. 3B Vision variant landed 2026-04-01 | A | 8.2 | 7 | 8 | 9.5 | 8.5 |
| 5 | Kimi K2.5 (Moonshot) Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5 | A | 8.1 | 6 | 9 | 8.5 | 9 |
| 6 | gpt-oss (OpenAI) OpenAI's first open-weight models -- gpt-oss-120b (single 80GB GPU, near parity with o4-mini on reasoning) and gpt-oss-20b (runs on 16GB edge devices; see the local-run sketch after the table). Apache 2.0. Launched 2025-08-05. gpt-oss-safeguard ships in 2026 as the safety-tuned variant | A | 8.1 | 7 | 8.5 | 10 | 7 |
| 7 | Arcee Trinity-Large-Thinking Arcee AI's US-made open-weight frontier reasoning model -- launched 2026-04-01. 398B total params, ~13B active; sparse MoE with 256 experts, 4 active per token (1.56% routing -- the math is worked after the table). Apache 2.0, trained from scratch. #2 on PinchBench, trailing only Claude 3.5 Opus. ~96% cheaper than Claude Opus 4.6 on agentic tasks | A | 8.1 | 6 | 9 | 9.5 | 8 |
| 8 | DeepSeek Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous | A | 8.0 | 7.5 | 8 | 9.5 | 7 |
| 9 | GLM / Z.ai (Zhipu AI) Zhipu AI's open-weights family -- GLM-5.1 (launched 2026-04-07) is a 744B MoE with 40B active; it topped SWE-Bench Pro at 58.4, beating GPT-5.4 and Claude Opus 4.6. MIT licensed, 200K context. Trained entirely on 100K Huawei Ascend 910B chips -- the first frontier model with zero Nvidia in the training stack | A | 8.0 | 6.5 | 8.5 | 9 | 8 |
| 10 | AI21 Jamba2 AI21 Labs' hybrid SSM-Transformer (Mamba-style) open-weight family -- Jamba2 launched 2026-01-08. Two sizes: 3B dense (runs on phones / laptops) and Jamba2 Mini MoE (12B active / 52B total). Apache 2.0, 256K context, mid-trained on 500B tokens | A | 8.0 | 6.5 | 8 | 9 | 8.5 |
| 11 | Llama 4 (Meta) Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview | B | 7.9 | 5 | 8.5 | 9 | 9 |
| 12 | Olmo 3 (AI2) Allen Institute for AI's fully open frontier reasoning models -- the Olmo 3 family (2025-11-20) spans 7B and 32B sizes in four variants (Base, Think, Instruct, RLZero). Apache 2.0 with fully open data, checkpoints, and training logs. Olmo 3-Think 32B matches Qwen3-32B-Thinking with 6x fewer training tokens | B | 7.9 | 6 | 8 | 9.5 | 8 |
| 13 | Nemotron (Nvidia) Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware | B | 7.8 | 6.5 | 8 | 8 | 8.5 |
| 14 | StepFun Step 3.5 Flash StepFun's (China) agent-focused open-weight model -- Step 3.5 Flash launched 2026-02-01. 196B sparse MoE, ~11B active. Benchmarks slightly ahead of DeepSeek V3.2 at less than a third of the total parameter count. Step 3 (321B / 38B active, Apache 2.0) and the Step3-VL-10B multimodal model round out the family | B | 7.8 | 6 | 8 | 9 | 8 |
| 15 | Mistral AI European AI lab with open and commercial models -- Mistral Small 4 (Mar 2026; a unified 119B MoE, Apache 2.0), Medium 3 (Apr 9 2026), and Voxtral TTS (open-source speech, Mar 2026) | B | 7.5 | 6 | 8 | 9 | 7 |
| 16 | Cohere Command A Cohere's enterprise-multilingual flagship -- 111B params, 256K context, runs on 2x H100. 23 languages. CC-BY-NC 4.0 on weights (research / non-commercial); commercial use requires a Cohere enterprise contract. Follow-ups: Command A Reasoning + Command A Vision | B | 7.5 | 6.5 | 8.5 | 7 | 8 |
| 17 | Falcon (TII) UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware | B | 7.1 | 7 | 6.5 | 9 | 6 |
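Several rows above quote sparse-MoE figures: Trinity at 398B total / ~13B active with 4-of-256 expert routing, GLM-5.1 at 744B / 40B active. A quick worked check of that arithmetic, using only numbers from the table -- note that the expert-routing fraction and the active-parameter share are different quantities, since always-on components (attention, embeddings, shared layers) count toward active parameters but sit outside expert routing.

```python
# Worked check of the sparse-MoE figures quoted in the table above.
# Routing fraction = experts activated per token / total experts.
# Active share    = active params / total params.

def pct(x: float) -> str:
    return f"{100 * x:.2f}%"

# Arcee Trinity-Large-Thinking: 256 experts, 4 active per token.
print("Trinity routing fraction:", pct(4 / 256))   # 1.56%, as quoted
# ~13B of 398B total params active per forward pass.
print("Trinity active share:    ", pct(13 / 398))  # ~3.27%
# GLM-5.1: 40B of 744B total active.
print("GLM-5.1 active share:    ", pct(40 / 744))  # ~5.38%
```

Active share is what drives per-token compute (and hence serving cost); total parameters drive the memory footprint sketched before the table.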
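And the local-run sketch promised in the gpt-oss row: a minimal chat via llama-cpp-python. The GGUF filename here is a placeholder for whichever quantization you download, not an official artifact name.

```python
# Minimal local chat with a quantized model via llama-cpp-python
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./gpt-oss-20b-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload every layer to the GPU if present
)

resp = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize MoE routing in two sentences."}]
)
print(resp["choices"][0]["message"]["content"])
```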