
Qwen (Alibaba)

A Tier · 8.8/10

Alibaba's open-weights family -- Qwen3.5, Qwen3-Coder-Next, Qwen3-VL, Qwen3-Max. Apache 2.0 licensing on the open flagship sizes.

Last updated: 2026-04-13 · Free tier available

Score Breakdown

Ease of Use: 7.0
Output Quality: 9.0
Value: 10.0
Features: 9.0

Benchmark Scores

Benchmarks for Qwen3.5-397B MoE

Benchmark             Score
MMLU-Pro              83.5%
GPQA Diamond          78.2%
AIME 2025             87%
HumanEval             92.5%
SWE-Bench Verified    69.4%

Last updated: 2026-04-13

The Good and the Bad

What we like

  • Apache 2.0 license on the open sizes -- genuinely permissive for any commercial use
  • Qwen3-Coder-Next 80B-A3B runs on 8 GB VRAM and still posts top-tier coding benchmarks (sparse MoE activates only ~3B params)
  • Full modality lineup: text (Qwen3), vision (Qwen3-VL), coder (Qwen3-Coder-Next), reasoning (Qwen3-Thinking)
  • Qwen3.5-397B sits in LMArena's top-3 open-source models
  • 262K context on Qwen3-Max, 256K on Coder-Next -- competitive long-context performance
  • Massive ecosystem support: Ollama, llama.cpp, vLLM, and LM Studio all ship first-class Qwen quants

What could be better

  • Qwen3-Max flagship is API-only -- you can't self-host the best Alibaba model
  • Censorship on politically sensitive topics (PRC regulations apply)
  • English writing style occasionally stilted compared to Claude or Mistral
  • Rapid release cadence makes the model names (Qwen3, Qwen3.5, Qwen3-Next, Qwen3-Max-Thinking) hard to keep straight

Pricing

Self-hosted (Free)

$0
  • Apache 2.0 license on open weights
  • Available on Hugging Face, ModelScope, Ollama
  • Fine-tuning fully permitted

API (OpenRouter / Alibaba Cloud)

From $0.12 per 1M input tokens
  • Qwen3-Coder-Next 80B-A3B: $0.12 in / $0.60 out
  • Qwen3.5-397B: $0.40 in / $2.40 out
  • Qwen3-Max (API only): $0.78 in / $6.00 out
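Per-request cost follows directly from the per-million-token rates above. A back-of-envelope sketch (the rates are the ones quoted in this review; the example token counts are arbitrary):

```python
# USD per 1M tokens (input, output), as listed in this review.
RATES = {
    "qwen3-coder-next": (0.12, 0.60),
    "qwen3.5-397b": (0.40, 2.40),
    "qwen3-max": (0.78, 6.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Approximate cost in USD for a single API request."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion on Qwen3-Max:
cost = request_cost("qwen3-max", 10_000, 2_000)  # ≈ $0.0198
```

At these rates even the closed flagship stays around two cents for a sizable request, which is the "Value: 10.0" story in one line of arithmetic.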

System Requirements

Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.

Model variant                          Min                                Max
Qwen3-Coder-Next 80B-A3B (sparse MoE)  8 GB VRAM Q4 (RTX 3060)            1× A100 80 GB FP16
Qwen3.5 (397B MoE flagship)            128 GB RAM + 24 GB GPU (Q3)        4× H100 FP8
Qwen3-VL (vision flagship)             24 GB VRAM (Q4)                    1× H100 FP16
Qwen3-Max                              API-only -- weights not released   API-only -- weights not released
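The Min column tracks a simple rule of thumb: weight memory ≈ parameter count × bits-per-weight / 8, and for a sparse MoE only the *active* parameters need to sit in VRAM per token (inactive experts can be offloaded to system RAM). A rough sketch of that arithmetic, using the parameter counts from this review (the formula ignores KV cache and activation overhead, so treat the numbers as lower bounds):

```python
def weight_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GiB at a given quantization level."""
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Dense view: all 80B weights at Q4 -- far too big for an 8 GB card.
full = weight_gib(80, 4)    # ≈ 37 GiB

# Sparse-MoE view: only ~3B params are active per token, so the hot
# working set at Q4 is tiny; the remaining experts stream from RAM.
active = weight_gib(3, 4)   # ≈ 1.4 GiB
```

This is why the 80B-A3B variant can run on an 8 GB card even though its full weight file never would.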

Known Issues

  • Qwen3-Max (API flagship) is not released as open weights -- confusing for users expecting Alibaba's best model to be self-hostable (Source: Reddit r/LocalLLaMA · 2026-02)
  • Refuses discussion of Tiananmen, Taiwan sovereignty, Xi Jinping -- same PRC content filters as DeepSeek (Source: Hugging Face discussions · 2026-01)

Best for

Developers who want frontier-tier open weights with Apache 2.0 licensing. Qwen3-Coder-Next is arguably the best local coding model; Qwen3.5-397B is a top-3 open generalist.

Not for

Teams that need the Qwen3-Max flagship self-hostable (it's API-only), or use cases that touch Chinese-government-sensitive topics.

Our Verdict

Qwen is the most complete open-weights family in 2026. Alibaba ships Apache-2.0 weights across text, coding, vision, and reasoning -- every modality has a top-tier entry. Qwen3-Coder-Next is a standout: 3B active params but competitive with Claude Sonnet on coding. The catch is that Qwen3-Max, the absolute flagship, stays closed. If you can live with the PRC content filters and want the best open-weights ecosystem, Qwen is the top pick in its tier.

Sources

  • Qwen official blog (accessed 2026-04-13)
  • Hugging Face Qwen collection (accessed 2026-04-13)
  • OpenRouter pricing (accessed 2026-04-13)
  • Artificial Analysis (accessed 2026-04-13)
  • Reddit r/LocalLLaMA (accessed 2026-04-13)

Alternatives to Qwen (Alibaba)

  • Llama 4 (Meta) -- B tier, 7.9/10. Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview. Free tier, from $0. Updated 2026-04-13.
  • Mistral AI -- B tier, 7.5/10. European AI lab with open and commercial models that punch well above their size. Free tier, from $0. Updated 2026-03-26.
  • DeepSeek -- A tier, 8.0/10. Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous. Free tier, from $0. Updated 2026-03-31.
  • Gemma 4 (Google) -- A tier, 8.3/10. Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices. Free tier, from $0. Updated 2026-04-08.
  • GLM / Z.ai (Zhipu AI) -- A tier, 8.0/10. Zhipu AI's open-weights family -- GLM-4.6 text flagship and GLM-4.6V multimodal, true MIT licensed. Free tier, from $0. Updated 2026-04-13.
  • Kimi K2.5 (Moonshot) -- A tier, 8.1/10. Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5. Free tier, from $0. Updated 2026-04-13.
  • Nemotron (Nvidia) -- B tier, 7.8/10. Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware. Free tier, from $0. Updated 2026-04-13.
  • MiniMax M2 / M2.5 -- A tier, 8.4/10. MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost. Free tier, from $0. Updated 2026-04-13.
  • Falcon (TII) -- B tier, 7.1/10. UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware. Free tier, from $0. Updated 2026-04-13.