GLM / Z.ai (Zhipu AI)
A Tier · 8.0/10
Zhipu AI's open-weights family -- the GLM-4.6 text flagship and GLM-4.6V multimodal model, both under a true MIT license
Score Breakdown
Benchmark Scores
Benchmarks for GLM-4.6
| Benchmark | Description | Score |
|---|---|---|
| MMLU-Pro | Harder multi-subject reasoning | 81.2% |
| GPQA Diamond | Graduate-level science questions | 74.5% |
| HumanEval | Python code generation | 89.1% |
| SWE-Bench Verified | Real-world GitHub issue resolution | 64.2% |
| BFCL | Berkeley Function-Calling Leaderboard | 88% |
Last updated: 2026-04-13
The Good and the Bad
What we like
- True MIT license -- one of the few frontier-tier open-weights models with zero commercial restrictions
- GLM-4.6 is SOTA among open models for agentic tool-use and function calling
- GLM-4.6V ranks #1 among open-source multimodal models on MMBench, MathVista, and OCRBench
- 200K context window handles long documents reliably
- Strong Chinese and English performance (unlike DeepSeek, which skews English)
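Since function calling is GLM-4.6's headline strength, here is a minimal sketch of what an agentic request to it looks like over an OpenAI-style chat API. The model id `z-ai/glm-4.6`, the `get_weather` tool, and its schema are all illustrative assumptions -- check your provider's catalog for the real identifier.

```python
import json

# Hypothetical model id -- verify against your provider's model list.
MODEL_ID = "z-ai/glm-4.6"

def build_tool_call_request(user_message: str) -> dict:
    """Build an OpenAI-style chat request exposing one (illustrative) tool."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Illustrative tool, not a real API.
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_call_request("What's the weather in Beijing?")
print(json.dumps(payload, indent=2))
```

A strong BFCL score means the model reliably returns a well-formed `tool_calls` entry (rather than free text) when a request like this warrants it.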
What could be better
- Smaller Western community than Qwen or DeepSeek -- fewer tutorials, quants, and fine-tunes
- English prose is noticeably more stilted than Claude's or Mistral's for creative writing
- PRC content filters apply to politically sensitive topics
- Ollama support lags behind Qwen/Llama/Mistral release cycles
Pricing
Self-hosted (Free)
- MIT license -- truly open, no MAU clauses
- Full weights on Hugging Face
- Commercial use fully permitted
API (Z.ai / OpenRouter)
- GLM-4.6: $0.60 in / $2.20 out
- GLM-4.6V (vision): tiered pricing
- 200K context
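The listed rates make per-request cost easy to estimate. A quick sketch, assuming the usual per-million-token convention for the "$0.60 in / $2.20 out" figures:

```python
# Listed GLM-4.6 rates, assumed to follow the standard
# dollars-per-million-tokens convention.
PRICE_IN_PER_M = 0.60
PRICE_OUT_PER_M = 2.20

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at the listed rates."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# Example: a 150K-token document summarized into a 2K-token answer.
cost = request_cost_usd(150_000, 2_000)
print(f"${cost:.4f}")  # → $0.0944
```

At these rates, even a near-full 200K-context request stays around a dime, which is where the "frontier quality for pennies" positioning of the open-weights Chinese labs comes from.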
System Requirements
Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.
| Model variant | Min | Max |
|---|---|---|
| GLM-4.6 (355B MoE flagship) | 128 GB RAM + 24 GB GPU (Q3 offload) | 4× H100 (FP8) |
| GLM-4.6V (multimodal; vision tower adds ~4 GB over the base footprint) | 128 GB RAM + 28 GB GPU (Q3 + vision tower) | 4× H100 (FP8) |
| GLM-4-9B (small) | 6 GB VRAM (Q4) | 24 GB VRAM FP16 |
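The table's minimums follow from simple arithmetic: parameter count × bits per parameter, plus headroom for KV cache and activations. A back-of-envelope sketch (the ~4.5 effective bits for Q4 quants and the 20% overhead factor are rough assumptions, not measured figures):

```python
def weight_footprint_gb(n_params_b: float, bits_per_param: float,
                        overhead: float = 1.2) -> float:
    """Rough memory footprint: weights plus an assumed ~20% overhead
    for KV cache, activations, and runtime buffers."""
    weight_bytes = n_params_b * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# GLM-4-9B at Q4 (~4.5 effective bits incl. quantization scales):
print(round(weight_footprint_gb(9, 4.5), 1))   # ≈ 6 GB, the table's minimum

# GLM-4-9B at FP16 (16 bits per parameter):
print(round(weight_footprint_gb(9, 16), 1))    # ≈ 21.6 GB, under the 24 GB max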
Known Issues
- GLM-4.6 requires a specific tokenizer and chat template -- several community llama.cpp quants initially shipped with broken tool-use until fixes landed. (Source: Hugging Face discussions, GitHub issues, 2026-03)
- Refuses discussion of Tiananmen, Taiwan, and Xi Jinping -- the same PRC content filters as DeepSeek and Qwen. (Source: Reddit r/LocalLLaMA, 2026-02)
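The chat-template failure mode above is easy to reproduce in miniature: if a quant ships a template that ignores the `tools` field, tool definitions silently never reach the model, and tool-use "breaks" even though the weights are fine. A toy illustration -- the tags here are invented for the sketch and are not GLM's real template:

```python
def render_chat(messages, tools=None, supports_tools=True):
    """Toy chat-template renderer. Tags are illustrative, not GLM's."""
    parts = []
    if tools and supports_tools:
        # A correct template injects the tool schemas into the prompt.
        parts.append("<tools>" + ",".join(t["name"] for t in tools) + "</tools>")
    for m in messages:
        parts.append(f"<{m['role']}>{m['content']}</{m['role']}>")
    return "".join(parts)

msgs = [{"role": "user", "content": "What's 2+2?"}]
tools = [{"name": "calculator"}]

good = render_chat(msgs, tools, supports_tools=True)
bad = render_chat(msgs, tools, supports_tools=False)  # broken quant: tools vanish
print("calculator" in good, "calculator" in bad)  # → True False
```

The practical takeaway: when a GGUF quant's tool calls come back as plain prose, inspect its embedded chat template before blaming the model.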
Best for
Teams that need genuine MIT-licensed frontier open weights with no commercial strings. Especially strong for agentic workflows and vision (GLM-4.6V).
Not for
Consumer-facing English content generation (Mistral or Claude write better), or ultra-low-resource deployment (use Gemma 4 or Phi-4 instead).
Our Verdict
GLM-4.6 is the most under-appreciated frontier open-weights model in 2026. The true MIT license puts it ahead of Llama 4 on licensing, and the agentic tool-use performance beats most of its open-weight peers. GLM-4.6V is legitimately the best open multimodal model on several benchmarks. The weakness is purely ecosystem: fewer Western fine-tunes and less Ollama coverage. If you're building an agent or multimodal product and want clean licensing, GLM is the pick.
Sources
- Z.ai blog: GLM-4.6 and GLM-4.6V (accessed 2026-04-13)
- Hugging Face THUDM collection (accessed 2026-04-13)
- Artificial Analysis open-weights leaderboard (accessed 2026-04-13)
- OpenRouter pricing (accessed 2026-04-13)
Alternatives to GLM / Z.ai (Zhipu AI)
Llama 4 (Meta)
Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview
Mistral AI
European AI lab with open and commercial models that punch well above their size
DeepSeek
Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous
Gemma 4 (Google)
Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices
Qwen (Alibaba)
Alibaba's open-weights family -- Qwen3.5, Qwen3-Coder-Next, Qwen3-VL, Qwen3-Max. Apache 2.0 flagship sizes.
Kimi K2.5 (Moonshot)
Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5
Nemotron (Nvidia)
Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware
MiniMax M2 / M2.5
MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost
Falcon (TII)
UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware