
GLM / Z.ai (Zhipu AI)

A Tier · 8.0/10

Zhipu AI's open-weights family -- GLM-4.6 text flagship and GLM-4.6V multimodal, truly MIT-licensed

Last updated: 2026-04-13 · Free tier available

Score Breakdown

Ease of Use: 6.5
Output Quality: 8.5
Value: 9.0
Features: 8.0

Benchmark Scores

Benchmarks for GLM-4.6

Benchmark | Score
MMLU-Pro | 81.2%
GPQA Diamond | 74.5%
HumanEval | 89.1%
SWE-Bench Verified | 64.2%
BFCL (function calling) | 88%

Last updated: 2026-04-13

The Good and the Bad

What we like

  • True MIT license -- one of the few frontier-tier open-weights models with zero commercial restrictions
  • GLM-4.6 is SOTA among open models for agentic tool use and function calling
  • GLM-4.6V is the #1 open-source multimodal model on MMBench, MathVista, and OCRBench
  • 200K context window handles long documents reliably
  • Strong Chinese and English performance (unlike DeepSeek, which is English-biased)

What could be better

  • Smaller Western community than Qwen or DeepSeek -- fewer tutorials, quants, fine-tunes
  • English prose is noticeably more stilted than Claude's or Mistral's for creative writing
  • PRC content filters apply to politically sensitive topics
  • Ollama support lags behind Qwen/Llama/Mistral release cycles

Pricing

Self-hosted (Free)

$0
  • MIT license -- truly open, no MAU clauses
  • Full weights on Hugging Face
  • Commercial use fully permitted

API (Z.ai / OpenRouter)

$0.60 per 1M input tokens
  • GLM-4.6: $0.60 in / $2.20 out
  • GLM-4.6V (vision): tiered
  • 200K context
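The per-token rates above translate into request costs as follows. A minimal sketch, with the rates hard-coded from this page's pricing; actual billing (caching discounts, tiered vision pricing) may differ:

```python
# Rough API cost estimator for GLM-4.6 text via Z.ai / OpenRouter.
# Rates copied from the pricing above ($0.60 in / $2.20 out per 1M tokens);
# treat this as an illustration, not a billing reference.

GLM46_INPUT_PER_M = 0.60   # USD per 1M input tokens
GLM46_OUTPUT_PER_M = 2.20  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one GLM-4.6 request."""
    cost = (input_tokens / 1_000_000) * GLM46_INPUT_PER_M
    cost += (output_tokens / 1_000_000) * GLM46_OUTPUT_PER_M
    return round(cost, 6)

# e.g. summarizing a 150K-token document into ~2K tokens of output:
print(estimate_cost(150_000, 2_000))  # 0.0944
```

Even a near-context-limit request stays under a dime, which is the practical upshot of the pricing table.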

System Requirements

Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.

Model variant | Min | Max
GLM-4.6 (355B MoE flagship) | 128 GB RAM + 24 GB GPU (Q3 offload) | 4× H100 FP8
GLM-4.6V (multimodal) | 128 GB RAM + 28 GB GPU (Q3 + vision tower) | 4× H100 FP8
GLM-4-9B (small) | 6 GB VRAM (Q4) | 24 GB VRAM FP16

Note: the GLM-4.6V vision tower adds roughly 4 GB on top of the base GLM-4.6 footprint.
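The Min/Max figures follow from the usual rule of thumb: weight memory ≈ parameter count × bits per weight ÷ 8, with KV cache and activations on top. A hedged sketch; the effective bits-per-weight values are rough community conventions, not measured numbers:

```python
# Back-of-envelope weight-memory estimate for a quantized model.
# ASSUMPTION: effective bits/weight per quant level below are rough
# rules of thumb (quant formats carry per-block overhead), not measurements.
# KV cache and activation memory are NOT included.

BITS_PER_WEIGHT = {"fp16": 16.0, "fp8": 8.0, "q4": 4.5, "q3": 3.5}

def weight_memory_gb(params_billion: float, quant: str) -> float:
    """Estimate GB needed just to hold the weights at a given quant level."""
    bytes_per_weight = BITS_PER_WEIGHT[quant] / 8
    return round(params_billion * bytes_per_weight, 1)

print(weight_memory_gb(9, "q4"))     # 5.1  -> consistent with the 6 GB Q4 row
print(weight_memory_gb(9, "fp16"))   # 18.0 -> fits the 24 GB FP16 row
print(weight_memory_gb(355, "q3"))   # 155.3 -> why the flagship needs RAM + GPU offload
```

The 355B flagship at Q3 lands around 155 GB of weights alone, which is why the Min column splits it across 128 GB of system RAM plus a 24 GB GPU rather than fitting it on any single consumer card.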

Known Issues

  • GLM-4.6 requires a specific tokenizer and chat template -- several community llama.cpp quants initially had broken tool use until fixes landed. (Source: Hugging Face discussions, GitHub issues · 2026-03)
  • Refuses discussion of Tiananmen, Taiwan, and Xi Jinping -- same PRC content filters as DeepSeek and Qwen. (Source: Reddit r/LocalLLaMA · 2026-02)

Best for

Teams that need genuine MIT-licensed frontier open weights with no commercial strings. Especially strong for agentic workflows and vision (GLM-4.6V).

Not for

Consumer-facing English content generation (Mistral or Claude write better), or ultra-low-resource deployment (use Gemma 4 or Phi-4 instead).

Our Verdict

GLM-4.6 is the most under-appreciated frontier open-weights model in 2026. The true MIT license puts it ahead of Llama 4 on licensing, and the agentic tool-use performance beats most of its open-weight peers. GLM-4.6V is legitimately the best open multimodal model on several benchmarks. The weakness is purely ecosystem: fewer Western fine-tunes and less Ollama coverage. If you're building an agent or multimodal product and want clean licensing, GLM is the pick.

Sources

  • Z.ai blog: GLM-4.6 and GLM-4.6V (accessed 2026-04-13)
  • Hugging Face THUDM collection (accessed 2026-04-13)
  • Artificial Analysis open-weights leaderboard (accessed 2026-04-13)
  • OpenRouter pricing (accessed 2026-04-13)

Alternatives to GLM / Z.ai (Zhipu AI)

Llama 4 (Meta)

Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview

B Tier · 7.9/10 · Free tier · From $0
  • Llama 4 Scout has a 10M token context wi...
  • Llama 4 Maverick is natively multimodal ...
Updated 2026-04-13
Mistral AI

European AI lab with open and commercial models that punch well above their size

B Tier · 7.5/10 · Free tier · From $0
  • Extremely competitive API pricing -- Mis...
  • Open-weight models (Mistral 7B, Mixtral)...
Updated 2026-03-26
DeepSeek

Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous

A Tier · 8.0/10 · Free tier · From $0
  • Pricing is absurdly cheap compared to GP...
  • DeepSeek-R1 reasoning model genuinely co...
Updated 2026-03-31
Gemma 4 (Google)

Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices

A Tier · 8.3/10 · Free tier · From $0
  • Apache 2.0 license -- truly permissive, ...
  • Multimodal: handles text + image input (...
Updated 2026-04-08
Qwen (Alibaba)

Alibaba's open-weights family -- Qwen3.5, Qwen3-Coder-Next, Qwen3-VL, Qwen3-Max. Apache 2.0 flagship sizes.

A Tier · 8.8/10 · Free tier · From $0
  • Apache 2.0 license on the open sizes -- ...
  • Qwen3-Coder-Next 80B-A3B runs on 8 GB VR...
Updated 2026-04-13
Kimi K2.5 (Moonshot)

Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5

A Tier · 8.1/10 · Free tier · From $0
  • Frontier-tier performance -- Elo 1309 on...
  • Beats Claude Opus 4.5 on several coding ...
Updated 2026-04-13
Nemotron (Nvidia)

Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware

B Tier · 7.8/10 · Free tier · From $0
  • Hybrid Mamba-Transformer architecture dr...
  • Nemotron 3 Super activates only 3.6B par...
Updated 2026-04-13
MiniMax M2 / M2.5

MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost

A Tier · 8.4/10 · Free tier · From $0
  • First open-weight model to hit 80.2% on ...
  • ~10B active params during inference (out...
Updated 2026-04-13
Falcon (TII)

UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware

B Tier · 7.1/10 · Free tier · From $0
  • Apache 2.0 license -- fully permissive f...
  • Sub-10B sizes run on any consumer GPU or...
Updated 2026-04-13