Qwen (Alibaba)
A Tier · 8.8/10
Alibaba's open-weights family -- Qwen3.5, Qwen3-Coder-Next, Qwen3-VL -- plus the API-only Qwen3-Max flagship. Apache 2.0 on the open sizes.
Score Breakdown
Benchmark Scores
Benchmarks for Qwen3.5-397B MoE
| Benchmark | Description | Score |
|---|---|---|
| MMLU-Pro | Harder multi-subject reasoning | 83.5% |
| GPQA Diamond | Graduate-level science questions | 78.2% |
| AIME 2025 | Competition mathematics | 87% |
| HumanEval | Python code generation | 92.5% |
| SWE-Bench Verified | Real-world GitHub issue fixes | 69.4% |
Last updated: 2026-04-13
The Good and the Bad
What we like
- +Apache 2.0 license on the open sizes -- genuinely permissive for any commercial use
- +Qwen3-Coder-Next 80B-A3B runs on 8 GB VRAM and still posts top-tier coding benchmarks (sparse MoE activates only ~3B params)
- +Full modality lineup: text (Qwen3), vision (Qwen3-VL), coder (Qwen3-Coder-Next), reasoning (Qwen3-Thinking)
- +Qwen3.5-397B sits in LMArena's top-3 open-source models
- +262K context on Qwen3-Max, 256K on Coder-Next -- competitive long-context performance
- +Massive ecosystem support: Ollama, llama.cpp, vLLM, LM Studio all ship first-class Qwen quants
What could be better
- −Qwen3-Max flagship is API-only -- you can't self-host the best Alibaba model
- −Censorship on politically sensitive topics (PRC regulations apply)
- −English writing style occasionally stilted compared to Claude or Mistral
- −Rapid release cadence means model names (Qwen3, Qwen3.5, Qwen3-Next, Qwen3-Max-Thinking) are confusing
Pricing
Self-hosted (Free)
- ✓Apache 2.0 license on open weights
- ✓Available on Hugging Face, ModelScope, Ollama
- ✓Fine-tuning fully permitted
API (OpenRouter / Alibaba Cloud)
- ✓Qwen3-Coder-Next 80B-A3B: $0.12 in / $0.60 out
- ✓Qwen3.5-397B: $0.40 in / $2.40 out
- ✓Qwen3-Max (API only): $0.78 in / $6.00 out
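To make the per-million-token rates above concrete, here is a minimal cost-estimator sketch. The model keys are placeholders of our own choosing, not official OpenRouter model IDs, and the rates simply mirror the list above -- actual billed prices may differ.

```python
# Rough API cost estimator. Rates are USD per 1M tokens (input, output),
# taken from the pricing list on this page. Keys are illustrative, not
# official OpenRouter model IDs.
RATES = {
    "qwen3-coder-next-80b-a3b": (0.12, 0.60),
    "qwen3.5-397b": (0.40, 2.40),
    "qwen3-max": (0.78, 6.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return estimated USD cost for a single request."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# Example: a 20K-token prompt with a 2K-token completion on Qwen3.5-397B
cost = estimate_cost("qwen3.5-397b", 20_000, 2_000)  # ≈ $0.0128
```

At these rates, even a long 20K-token context on the open flagship costs around a penny per request, which is the main draw of the API tier over self-hosting for bursty workloads.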
System Requirements
Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.
| Model variant | Min | Max |
|---|---|---|
| Qwen3-Coder-Next 80B-A3B (sparse MoE) | 8 GB VRAM Q4 (RTX 3060) | 1× A100 80 GB FP16 |
| Qwen3.5 (397B MoE flagship) | 128 GB RAM + 24 GB GPU (Q3) | 4× H100 FP8 |
| Qwen3-VL (vision flagship) | 24 GB VRAM (Q4) | 1× H100 FP16 |
| Qwen3-Max | API-only -- weights not released | API-only -- weights not released |
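The numbers in the table follow from simple weight-memory arithmetic: bytes of weights ≈ parameter count × bits-per-weight ÷ 8. For a sparse MoE like the 80B-A3B, only the ~3B active parameters need to sit in fast memory per token, which is why most of the weights can live in system RAM while a small GPU serves the active experts. A back-of-envelope sketch (ignoring KV cache, activations, and runtime overhead, which add several GB in practice):

```python
# Approximate weight storage for a model at a given quantization level.
# Ignores KV cache, activations, and runtime overhead.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Weight footprint in GB: params * bits / 8, converted to GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

total_q4 = weight_gb(80, 4)   # all experts at 4-bit: 40 GB (offloadable to RAM)
active_q4 = weight_gb(3, 4)   # ~3B active params at 4-bit: 1.5 GB resident
```

This is why the table's 8 GB VRAM minimum for the 80B-A3B is plausible: the GPU only needs headroom for the active experts plus cache and overhead, not the full 40 GB of quantized weights.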
Known Issues
- Qwen3-Max (API flagship) is not released as open weights -- confusing for users expecting Alibaba's best model to be self-hostable. (Source: Reddit r/LocalLLaMA · 2026-02)
- Refuses discussion of Tiananmen, Taiwan sovereignty, and Xi Jinping -- same PRC content filters as DeepSeek. (Source: Hugging Face discussions · 2026-01)
Best for
Developers who want frontier-tier open weights with Apache 2.0 licensing. Qwen3-Coder-Next is arguably the best local coding model; Qwen3.5-397B is a top-3 open generalist.
Not for
Teams that need the Qwen3-Max flagship self-hostable (it's API-only), or use cases that touch Chinese-government-sensitive topics.
Our Verdict
Qwen is the most complete open-weights family in 2026. Alibaba ships Apache-2.0 weights across text, coding, vision, and reasoning -- every modality has a top-tier entry. Qwen3-Coder-Next is a standout: 3B active params but competitive with Claude Sonnet on coding. The catch is that Qwen3-Max, the absolute flagship, stays closed. If you can live with the PRC content filters and want the best open-weights ecosystem, Qwen is the clear pick.
Sources
- Qwen official blog (accessed 2026-04-13)
- Hugging Face Qwen collection (accessed 2026-04-13)
- OpenRouter pricing (accessed 2026-04-13)
- Artificial Analysis (accessed 2026-04-13)
- Reddit r/LocalLLaMA (accessed 2026-04-13)
Alternatives to Qwen (Alibaba)
Llama 4 (Meta)
Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview
Mistral AI
European AI lab with open and commercial models that punch well above their size
DeepSeek
Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous
Gemma 4 (Google)
Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices
GLM / Z.ai (Zhipu AI)
Zhipu AI's open-weights family -- GLM-4.6 text flagship and GLM-4.6V multimodal, true MIT licensed
Kimi K2.5 (Moonshot)
Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5
Nemotron (Nvidia)
Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware
MiniMax M2 / M2.5
MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost
Falcon (TII)
UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware