gpt-oss (OpenAI)
A Tier · 8.1/10
OpenAI's first open-weight models since GPT-2 -- gpt-oss-120b (single 80 GB GPU, near parity with o4-mini on reasoning) and gpt-oss-20b (runs on 16 GB edge devices). Apache 2.0. Launched 2025-08-05. gpt-oss-safeguard ships in 2026 as the safety-tuned variant
Score Breakdown
The Good and the Bad
What we like
- +First OpenAI open-weight release since GPT-2 (2019) -- the brand halo alone makes this a mandatory evaluation for anyone working with open weights. That OpenAI shipped it at all, after years of openness debates, is itself historically significant
- +gpt-oss-120b approaches o4-mini on reasoning benchmarks while running on a single 80GB GPU -- one of the best single-GPU frontier reasoning options available in the open-weight category
- +gpt-oss-20b fits in 16 GB of VRAM (runs on a consumer RTX 4080, Apple Silicon with enough unified memory, or similar) while roughly matching o3-mini -- a uniquely strong edge-deployable reasoning model
- +Apache 2.0 license with no 'acceptable use policy' carve-outs -- genuinely permissive, unlike Llama's terms. Commercial deployments face zero licensing friction
What could be better
- −Released 2025-08-05, so by April 2026 it is no longer the newest open-weight option -- DeepSeek V3.2, Qwen 3.5/3.6, GLM-5.1, and Granite 4 have all shipped later iterations. gpt-oss is still excellent, just no longer front-page news
- −No multimodal inputs -- text only. For vision or audio, pair with Qwen3.5-Omni, Llama 4 multimodal, or a separate vision model
- −OpenAI has not published a follow-up to gpt-oss (no gpt-oss-v2 or similar as of April 2026). Single release, unclear if more is coming -- could be a one-time brand move rather than ongoing commitment
- −gpt-oss-safeguard (the safety-tuned variant, 2026 release) is narrower in capability than the base models -- for research / red-team / full-capability work, use the original releases
Pricing
Self-hosted (Free, Apache 2.0)
- ✓First OpenAI open-weight release since GPT-2
- ✓Weights on Hugging Face + Ollama + llama.cpp + vLLM
- ✓Apache 2.0 license -- unrestricted commercial use
- ✓No telemetry, no phone-home, runs fully offline
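The self-hosting stack above can be sketched in a few commands. This is a deployment sketch, not tested output: the Ollama tag (`gpt-oss:20b`) and Hugging Face repo id (`openai/gpt-oss-120b`) are the commonly used identifiers, but verify them on the Ollama and Hugging Face pages before running.

```shell
# Option A: Ollama -- simplest path, pulls quantized weights automatically.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b "Summarize MoE routing in two sentences."

# Option B: vLLM -- production-grade serving on an 80 GB GPU.
# Exposes an OpenAI-compatible API on localhost:8000.
pip install vllm
vllm serve openai/gpt-oss-120b
```

Everything runs offline once the weights are downloaded; no API key or telemetry is involved.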
API (OpenRouter / Together / Fireworks)
- ✓gpt-oss-120b: ~$0.15 in / $0.60 out per 1M tokens
- ✓gpt-oss-20b: ~$0.07 in / $0.30 out per 1M tokens
- ✓Competitive per-token pricing across hosted providers
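To make the per-token prices concrete, here is a minimal cost estimator, assuming the figures above are USD per million tokens (the usual convention on OpenRouter/Together-style hosts -- check your provider's pricing page):

```python
# USD per 1M tokens: (input, output), taken from the pricing list above.
PRICES = {
    "gpt-oss-120b": (0.15, 0.60),
    "gpt-oss-20b": (0.07, 0.30),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-1M-token rates."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a 10k-token prompt with a 2k-token completion on the 120b model.
cost = estimate_cost("gpt-oss-120b", 10_000, 2_000)
print(f"${cost:.4f}")  # -> $0.0027
```

At these rates, even heavy agentic workloads on the 120b model stay in the cents-per-request range.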
System Requirements
Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.
| Model variant | Min | Max |
|---|---|---|
| gpt-oss-120b (flagship reasoning) · Apache 2.0, unrestricted commercial use | 1× 80 GB GPU (H100 / A100, native MXFP4 weights) | 2× H100 for production serving |
| gpt-oss-20b (edge reasoning) · runs locally on a decent consumer setup; Apple Silicon handles it well | 16 GB VRAM (RTX 4080) or 32 GB Apple Silicon unified memory | 1× A100 40 GB (FP16) |
| gpt-oss-safeguard (2026 safety-tuned variant) · narrower capability than base; tuned for safety-sensitive deployments | Same as base (120b or 20b) | Same as base |
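The table's minimums follow from a simple rule of thumb: weight footprint ≈ parameters × bits-per-weight / 8. The sketch below is a back-of-envelope estimate only -- it ignores KV cache, activations, and runtime overhead, so treat the result as a floor rather than a full VRAM budget:

```python
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk/in-VRAM size of the weights alone, in GB."""
    return params_billion * bits_per_weight / 8

# Why 120b needs heavy quantization to fit one 80 GB card,
# while 20b fits consumer hardware even at higher precision:
for name, params in [("gpt-oss-120b", 120), ("gpt-oss-20b", 20)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weights_gb(params, bits):.0f} GB")
```

At 16-bit, 120B parameters would need ~240 GB for weights alone; only at roughly 4-bit precision does the flagship drop to ~60 GB and fit a single 80 GB GPU with headroom for the KV cache.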
Known Issues
- Initial community quantizations (Q4 and below) showed degradation on reasoning tasks versus FP16 -- the MoE routing layers are sensitive to quantization. Q5/Q6 is the practical sweet spot for gpt-oss-120b. Source: Reddit r/LocalLLaMA, Hugging Face discussions · 2025-09
- gpt-oss did not make the transition to OpenAI's GPT-5.x tokenizer -- it still uses the older GPT-4-era tokenizer. Compatible with existing tooling but slightly less token-efficient than frontier OpenAI API models. Source: OpenAI open-models docs · 2025-08
Best for
Developers who want OpenAI-brand open-weight reasoning models for self-hosting or fine-tuning. Particularly good for single-GPU deployments (gpt-oss-120b on one 80GB card) or edge-device reasoning (gpt-oss-20b on 16GB consumer GPUs / Apple Silicon). Also good as a reliable baseline when comparing newer open-weight releases.
Not for
Anyone needing absolute bleeding-edge quality (DeepSeek V3.2 / V4, GLM-5.1, Qwen 3.6 are all stronger on most benchmarks as of April 2026). Also not for multimodal use cases -- gpt-oss is text-only.
Our Verdict
gpt-oss remains historically important as OpenAI's first open-weight release since GPT-2 (August 2025), and the 120b model on a single 80 GB GPU is still one of the cleanest single-card frontier-reasoning options in the open-weight category. By April 2026 it is no longer the bleeding edge -- DeepSeek V3.2, GLM-5.1, and Qwen 3.6 have all shipped stronger models -- but gpt-oss's combination of OpenAI brand, genuine Apache 2.0 licensing, and single-GPU 120b sizing makes it a durable default in any open-weight evaluation matrix. Worth adding to any shortlist; probably not first pick unless the OpenAI brand association matters for your stack.
Sources
- OpenAI: Introducing gpt-oss (accessed 2026-04-17)
- OpenAI Open Models hub (accessed 2026-04-17)
- Hugging Face: Welcome OpenAI gpt-oss (accessed 2026-04-17)
Alternatives to gpt-oss (OpenAI)
Llama 4 (Meta)
Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview
Mistral AI
European AI lab with open and commercial models -- Mistral Small 4 (Mar 2026, 119B MoE Apache 2.0 unified model), Medium 3 (Apr 9 2026), and Voxtral TTS (open-source speech, Mar 2026)
DeepSeek
Near-frontier reasoning for pennies on the dollar -- the open-source LLM that made Silicon Valley nervous
Gemma 4 (Google)
Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices
Qwen (Alibaba)
Alibaba's open-weights + API family -- Qwen 3.6-Plus (Mar 30 2026, 1M context + always-on CoT + agentic tool-use), Qwen3.5 Small (2B runs on iPhone, 9B matches 120B-class models), plus Qwen3.5-Omni native multimodal. Apache 2.0 on the open sizes
GLM / Z.ai (Zhipu AI)
Zhipu AI's open-weights family -- GLM-5.1 (launched 2026-04-07) is 744B MoE / 40B active, topped SWE-Bench Pro at 58.4 (beating GPT-5.4 and Claude Opus 4.6), MIT licensed, 200K context. Trained entirely on 100K Huawei Ascend 910B chips -- first frontier model with zero Nvidia in the training stack
Kimi K2.5 (Moonshot)
Moonshot's 1T-parameter MoE open-weights flagship -- best open-source agentic coder, rivals Claude Opus 4.5
Nemotron (Nvidia)
Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware
MiniMax M2 / M2.5
MiniMax's open-weights frontier -- first open model to match Claude Opus 4.6 on SWE-Bench at 10-20× lower cost
Falcon (TII)
UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware
IBM Granite 4.0
IBM's enterprise-focused open-weight family -- Granite 4.0 hybrid Mamba-2 + transformer architecture (70-80% memory reduction vs pure transformer), 3B to 32B sizes, Apache 2.0. First open model family to secure ISO 42001 certification. Nano 350M runs on CPU with 8-16GB RAM. 3B Vision variant landed 2026-04-01
Arcee Trinity-Large-Thinking
Arcee AI's US-made open-weight frontier reasoning model -- launched 2026-04-01. 398B total params, ~13B active. Sparse MoE (256 experts, 4 active = 1.56% routing). Apache 2.0, trained from scratch. #2 on PinchBench trailing only Claude 3.5 Opus. ~96% cheaper than Opus-4.6 on agentic tasks
Olmo 3 (AI2)
Allen Institute for AI's fully-open frontier reasoning models -- Olmo 3 family (2025-11-20) includes 7B and 32B sizes, four variants (Base, Think, Instruct, RLZero). Apache 2.0 with fully open data + checkpoints + training logs. Olmo 3-Think 32B matches Qwen3-32B-Thinking at 6x fewer training tokens
AI21 Jamba2
AI21 Labs' hybrid SSM-Transformer (Mamba-style) open-weight family -- Jamba2 launched 2026-01-08. Two sizes: 3B dense (runs on phones / laptops) and Jamba2 Mini MoE (12B active / 52B total). Apache 2.0, 256K context, mid-trained on 500B tokens
StepFun Step 3.5 Flash
StepFun's (China) agent-focused open-weight model -- Step 3.5 Flash launched 2026-02-01. 196B sparse MoE, ~11B active. Benchmarks slightly ahead of DeepSeek V3.2 at over 3x smaller total size. Step 3 (321B / 38B active, Apache 2.0) and Step3-VL-10B multimodal also in the family
Cohere Command A
Cohere's enterprise-multilingual flagship -- 111B params, 256K context, runs on 2x H100. 23 languages. CC-BY-NC 4.0 on weights (research / non-commercial), commercial requires Cohere enterprise contract. Follow-ups: Command A Reasoning + Command A Vision