
Llama 4 (Meta)

B Tier · 7.9/10

Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview

Last updated: 2026-04-13
Free tier available

Score Breakdown

  • Ease of Use: 5.0
  • Output Quality: 8.5
  • Value: 9.0
  • Features: 9.0

Benchmark Scores

Benchmarks for Llama 4 Maverick (17B/400B MoE)

  • Chatbot Arena Elo (human preference rating): 1417
  • MMLU-Pro: 80.5%
  • GPQA Diamond: 69.8%
  • HumanEval: 88%
  • MMMU (multimodal): 73.4%


Personality & Tone

The open-weight workhorse

Tone: Plain, helpful, and neutral. Meta's instruction-tuned Llama 4 reads like a sanitized ChatGPT -- useful for general tasks but without a strong persona of its own.

Quirks: The 'real' personality depends on the checkpoint you run. Base Llama 4 is bland by design; the interesting behaviors come from community fine-tunes (Nous Hermes, Dolphin, etc.) that give it different voices and refusal patterns.

The Good and the Bad

What we like

  • +Llama 4 Scout has a 10M-token context window -- the longest of any shipping open-weight model, ideal for RAG
  • +Llama 4 Maverick is natively multimodal (early-fusion) and hit Elo 1417 on an experimental LMArena variant
  • +Permissive-enough license for most commercial use (700M MAU clause rarely binds)
  • +Biggest open-weights ecosystem by far -- Ollama, LM Studio, vLLM, llama.cpp, thousands of fine-tunes
  • +Meta invests heavily -- Behemoth (~2T params) is in preview as the teacher model

What could be better

  • The initial Llama 4 launch underdelivered in practice relative to its benchmark numbers, per r/LocalLLaMA consensus
  • Community License is not Apache/MIT -- the 700M MAU clause and attribution requirement rule out some commercial use
  • Requires serious hardware to run the flagship sizes -- Maverick full-precision needs 4× H100
  • DeepSeek V3.2 and Kimi K2.5 have surpassed Llama on many benchmarks at similar or lower cost

Pricing

Self-hosted (Free)

$0
  • Llama 4 Community License
  • Unlimited use
  • Zero data sharing
  • 700M MAU clause + attribution required

Cloud API (Together.ai, Fireworks, Groq)

$3-8 per 1M input tokens
  • Scout: $3 in / $7.50 out
  • Maverick: $8 in / $20 out
  • No hardware needed
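To make the API tiers concrete, here's a quick cost sketch. The per-million prices come from the list above; the monthly token volumes (50M input, 10M output) are a hypothetical workload, not a recommendation.

```python
# Per-1M-token prices from the Cloud API list above (Together.ai-style tiers).
PRICES = {
    "scout":    {"in": 3.00, "out": 7.50},
    "maverick": {"in": 8.00, "out": 20.00},
}

def monthly_cost(model: str, in_millions: float, out_millions: float) -> float:
    """Cost in USD for a month's traffic, given token volumes in millions."""
    p = PRICES[model]
    return in_millions * p["in"] + out_millions * p["out"]

# Hypothetical workload: 50M input + 10M output tokens per month.
print(monthly_cost("scout", 50, 10))     # 50*3 + 10*7.50 -> 225.0
print(monthly_cost("maverick", 50, 10))  # 50*8 + 10*20   -> 600.0
```

At that volume, the API bill stays well under the cost of renting (let alone buying) the multi-GPU hardware the flagship sizes need, which is the usual argument for starting on a hosted endpoint.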

System Requirements

Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.

  • Llama 4 Scout (109B MoE, 17B active, 10M context) -- Min: 2× RTX 4090, 48 GB total (Q4 quantization); Max: 2× A100 80 GB (FP16). Note: the full 10M context is practically unreachable on consumer hardware due to KV-cache size.
  • Llama 4 Maverick (400B MoE, multimodal) -- Min: Mac Studio M3 Ultra with 128 GB unified RAM (Q3); Max: 4× H100 80 GB or 2× H200 (FP8)
  • Llama 3.3 70B (dense, still popular) -- Min: 1× RTX 3090/4090 24 GB (Q4); Max: 1× H100 80 GB (FP16)

Known Issues

  • Llama 4 Maverick's Elo 1417 was scored by a special 'experimental chat' variant on LMArena -- the released weights feel weaker than that number implies. Source: Reddit r/LocalLLaMA, LMArena notes · 2026-04
  • Quantized versions of Scout at 10M context use enormous KV-cache memory -- the full 10M is practically unreachable on consumer hardware. Source: Hugging Face discussions · 2026-03
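For a sense of why the 10M context is out of reach, here's a back-of-envelope KV-cache estimate. The layer count, KV-head count, and head dimension below are assumed illustrative figures (not Scout's published config); the point is the order of magnitude, not the exact number.

```python
# Rough KV-cache sizing. These architecture numbers are ASSUMED for
# illustration -- check the model card for the real config.
LAYERS = 48     # assumed transformer layer count
KV_HEADS = 8    # assumed GQA key/value heads
HEAD_DIM = 128  # assumed per-head dimension
BYTES = 2       # fp16/bf16 element size

def kv_cache_bytes(tokens: int) -> int:
    # Two cached tensors per layer (K and V), each kv_heads * head_dim
    # elements per token.
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES * tokens

per_token = kv_cache_bytes(1)
full_context = kv_cache_bytes(10_000_000)
print(f"{per_token / 1024:.0f} KiB per token")          # 192 KiB per token
print(f"{full_context / 1024**3:.0f} GiB at 10M tokens")  # 1831 GiB at 10M tokens
```

Even under these modest assumptions, a full 10M-token cache lands in the multi-terabyte range -- far beyond any consumer GPU or unified-memory machine, which is why the headline context figure is mostly theoretical for self-hosters.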

Best for

Developers and teams who need a permissively licensed open-weights model with strong tooling, long context (Scout), or multimodality (Maverick). A safe default choice given the ecosystem.

Not for

Teams chasing the absolute frontier on benchmarks -- DeepSeek V3.2 and Kimi K2.5 score higher. Also not ideal if you need true MIT/Apache licensing (use Qwen, GLM, or MiniMax instead).

Our Verdict

Llama 4 is the safe open-weights default in 2026. It has the biggest ecosystem, the longest context (Scout's 10M), and genuine multimodality (Maverick). But the frontier has moved -- DeepSeek V3.2 and Kimi K2.5 are stronger per-dollar, and the Llama 4 Community License is less permissive than Apache 2.0 alternatives from Alibaba and Z.ai. If you're building on open weights and want maximum compatibility, Llama 4 is still the right pick. If you want best-in-class performance per dollar, look at DeepSeek or Qwen.

Sources

  • Meta Llama official site (accessed 2026-04-13)
  • Meta AI blog: Llama 4 (accessed 2026-04-13)
  • Together.ai pricing (accessed 2026-04-13)
  • LMArena leaderboard (accessed 2026-04-13)
  • Reddit r/LocalLLaMA (accessed 2026-04-13)


Alternatives to Llama 4 (Meta)

Mistral AI -- B · 7.5/10 · Free tier · From $0 · Updated 2026-05-04

European AI lab with open and commercial models -- Mistral Medium 3.5 shipped 2026-04-29 (128B dense, 256k context, 77.6% SWE-Bench Verified), plus Vibe Remote Agents and Le Chat Work Mode. Earlier 2026 line: Small 4 (Mar 2026, 119B MoE, Apache 2.0, unified), Medium 3 (Apr 9 2026), Voxtral TTS (Mar 2026, open-source speech).

DeepSeek -- A · 8.0/10 · Free tier · From $0 · Updated 2026-04-28

DeepSeek V4 shipped 2026-04-24: V4-Pro (1.6T/49B active MoE) and V4-Flash (284B/13B active), 1M native context, Hybrid Attention Architecture, open-source on HF. Trails only Gemini 3.1 Pro on world knowledge.

Gemma 4 (Google) -- A · 8.3/10 · Free tier · From $0 · Updated 2026-04-19

Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices.

Qwen (Alibaba) -- A · 8.8/10 · Free tier · From $0 · Updated 2026-04-27

Alibaba's open-weights + API family -- Qwen3.6-27B dense (Apr 22 2026, Apache 2.0, beats the 397B MoE flagship on coding from a single consumer GPU), Qwen 3.6-Max-Preview (Apr 20 2026, closed-weights, #1 on SWE-bench Pro / Terminal-Bench 2.0 / SciCode), Qwen3.6-35B-A3B (Apr 16, open-weights MoE), plus the Qwen 3.6-Plus API flagship.

GLM / Z.ai (Zhipu AI) -- A · 8.0/10 · Free tier · From $0 · Updated 2026-04-17

Zhipu AI's open-weights family -- GLM-5.1 (launched 2026-04-07) is 744B MoE / 40B active, topped SWE-Bench Pro at 58.4 (beating GPT-5.4 and Claude Opus 4.6), MIT licensed, 200K context. Trained entirely on 100K Huawei Ascend 910B chips -- the first frontier model with zero Nvidia in the training stack.

Kimi K2.6 (Moonshot) -- A · 8.1/10 · Free tier · From $0 · Updated 2026-04-27

Moonshot's 1T-parameter MoE open-weights flagship -- Kimi K2.6 (GA 2026-04-20) is #1 open-weights on Artificial Analysis Intelligence Index v4.0 (score 54, ranked #4 overall). Native video input, 256K context, Modified MIT license.

Nemotron (Nvidia) -- B · 7.8/10 · Free tier · From $0 · Updated 2026-04-19

Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware.

MiniMax M2.7 -- A · 8.4/10 · Free tier · From $0 · Updated 2026-04-27

MiniMax's open-weights self-evolving agent flagship -- M2.7 (released 2026-03-18) scores 56.22% SWE-Pro and 57.0% Terminal Bench 2 from a 229B/10B-active MoE.

Falcon (TII) -- B · 7.1/10 · Free tier · From $0 · Updated 2026-04-13

UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware.

gpt-oss (OpenAI) -- A · 8.1/10 · Free tier · From $0 · Updated 2026-04-17

OpenAI's first open-weight models -- gpt-oss-120b (single 80 GB GPU, near parity with o4-mini on reasoning) and gpt-oss-20b (runs on 16 GB edge devices). Apache 2.0. Launched 2025-08-05. gpt-oss-safeguard ships in 2026 as the safety-tuned variant.

IBM Granite 4.0 -- A · 8.2/10 · Free tier · From $0 · Updated 2026-04-17

IBM's enterprise-focused open-weight family -- Granite 4.0 hybrid Mamba-2 + transformer architecture (70-80% memory reduction vs. pure transformer), 3B to 32B sizes, Apache 2.0. First open model family to secure ISO 42001 certification. Nano 350M runs on CPU with 8-16 GB RAM. A 3B Vision variant landed 2026-04-01.

Arcee Trinity-Large-Thinking -- A · 8.1/10 · Free tier · From $0 · Updated 2026-04-17

Arcee AI's US-made open-weight frontier reasoning model -- launched 2026-04-01. 398B total params, ~13B active. Sparse MoE (256 experts, 4 active = 1.56% routing). Apache 2.0, trained from scratch. #2 on PinchBench, trailing only Claude 3.5 Opus. ~96% cheaper than Opus-4.6 on agentic tasks.

Olmo 3 (AI2) -- B · 7.9/10 · Free tier · From $0 · Updated 2026-04-17

Allen Institute for AI's fully-open frontier reasoning models -- the Olmo 3 family (2025-11-20) includes 7B and 32B sizes and four variants (Base, Think, Instruct, RLZero). Apache 2.0 with fully open data, checkpoints, and training logs. Olmo 3-Think 32B matches Qwen3-32B-Thinking at 6x fewer training tokens.

AI21 Jamba2 -- A · 8.0/10 · Free tier · From $0 · Updated 2026-04-17

AI21 Labs' hybrid SSM-Transformer (Mamba-style) open-weight family -- Jamba2 launched 2026-01-08. Two sizes: 3B dense (runs on phones and laptops) and Jamba2 Mini MoE (12B active / 52B total). Apache 2.0, 256K context, mid-trained on 500B tokens.

StepFun Step 3.5 Flash -- B · 7.8/10 · Free tier · From $0 · Updated 2026-04-17

StepFun's (China) agent-focused open-weight model -- Step 3.5 Flash launched 2026-02-01. 196B sparse MoE, ~11B active. Benchmarks slightly ahead of DeepSeek V3.2 at over 3x smaller total size. Step 3 (321B / 38B active, Apache 2.0) and Step3-VL-10B multimodal are also in the family.

Cohere Command A -- B · 7.5/10 · Free tier · From $0 · Updated 2026-04-17

Cohere's enterprise-multilingual flagship -- 111B params, 256K context, runs on 2× H100. 23 languages. CC-BY-NC 4.0 on weights (research / non-commercial); commercial use requires a Cohere enterprise contract. Follow-ups: Command A Reasoning and Command A Vision.