Mistral AI

B Tier · 7.5/10

European AI lab with open and commercial models -- Mistral Medium 3.5 shipped 2026-04-29 (128B dense, 256k context, 77.6% SWE-Bench Verified), plus Vibe Remote Agents and Le Chat Work Mode. Earlier 2026 line: Small 4 (Mar 2026, 119B MoE, Apache 2.0, unified), Medium 3 (Apr 9, 2026), Voxtral TTS (Mar 2026, open-source speech)

Last updated: 2026-05-04
Free tier available

Score Breakdown

Ease of Use: 6.0
Output Quality: 8.0
Value: 9.0
Features: 7.0

Benchmark Scores

Benchmarks for Mistral Medium 3.5 (vendor-published; third-party verification pending)

Benchmark             Score
MMLU                  86%
HumanEval             92%
MATH                  69%
SWE-Bench Verified    77.6%

Last updated: 2026-04-29

Personality & Tone

The European pragmatist

Tone: Efficient, terse, and slightly blunt. Mistral answers in fewer words than Claude or ChatGPT, especially on factual questions, and rarely hedges or softens its take.

Quirks: Trained on less Anglocentric data than Llama, so it handles French, German, and Spanish notably better than US-origin models. Refusal rates are lower than ChatGPT's or Gemini's on most gray-area prompts.

The Good and the Bad

What we like

  • +Mistral Medium 3.5 (April 29, 2026) is Mistral's first 'flagship merged' model -- 128B dense, 256k context, 77.6% on SWE-Bench Verified, in public preview at $1.50/$7.50 per million tokens. Closes most of the coding-benchmark gap to Claude Opus / GPT-5.5 at materially lower API cost
  • +Vibe Remote Agents (also April 29) lets you launch cloud-based coding sessions that run asynchronously and in parallel via CLI or Le Chat -- file diffs, tool calls, and the ability to teleport a local session to the cloud while preserving history and approval state. Unique in the category as of this writing
  • +Le Chat Work Mode (April 29) is the first agentic mode shipped at the consumer-chat tier -- multi-step task completion, cross-tool workflows, research synthesis, inbox triage, with explicit approval gates for sensitive operations
  • +Mistral Small 4 (March 2026) unifies the previously split Small/Magistral/Pixtral/Devstral lines into one 119B MoE Apache 2.0 model. Voxtral TTS (March 2026) fills the speech gap with a competent open-source 4B-param model that runs on consumer hardware
  • +Extremely competitive API pricing remains the moat -- Small 4 at $0.20 per 1M tokens, Medium 3.5 at $1.50/$7.50 per million tokens, against frontier-class quality (a rough cost sketch follows this list)
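
To put the pricing gap in concrete terms, here is a minimal cost sketch using only the list prices quoted above. The workload volumes are hypothetical, and it assumes Small 4's single $0.20 rate applies to both input and output tokens; real bills also vary with tokenization and caching discounts.

```python
# Illustrative cost arithmetic from the list prices quoted above.
# USD per 1M tokens as (input, output); Small 4's single published
# rate is ASSUMED to apply in both directions.
PRICES = {
    "mistral-small-4": (0.20, 0.20),
    "mistral-medium-3.5": (1.50, 7.50),
}

def monthly_cost(model: str, m_in: float, m_out: float) -> float:
    """Cost in USD for m_in / m_out million input / output tokens."""
    p_in, p_out = PRICES[model]
    return m_in * p_in + m_out * p_out

# Hypothetical workload: 50M input + 10M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50, 10):,.2f}/month")
# mistral-small-4: $12.00/month; mistral-medium-3.5: $150.00/month
```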

What could be better

  • Le Chat web interface is bare-bones compared to ChatGPT or Claude
  • Smaller ecosystem -- fewer integrations and community resources
  • Less brand recognition means less community help when you get stuck
  • Documentation could be better, especially for newer models

Pricing

Le Chat (Free)

$0
  • Web chat interface with Mistral models
  • Mistral Small 4 + Medium 3 available
  • Basic features, rate-limited

API (Mistral Small 4)

$0.20 per 1M tokens
  • 119B MoE, Apache 2.0 open-weight
  • Unifies Small/Magistral/Pixtral/Devstral into one model
  • Fast, efficient, 128K context

API (Mistral Medium 3.5)

$1.50 / $7.50 per 1M tokens (input/output)
  • Public preview shipped 2026-04-29 -- Mistral's first 'flagship merged' model
  • 128B dense, 256k context, 77.6% SWE-Bench Verified
  • Underlies new Vibe Remote Agents + Le Chat Work Mode
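
For orientation, a minimal call against the preview model might look like the sketch below. The endpoint shape follows Mistral's existing v1 chat-completions API; the model ID "mistral-medium-3.5" is an assumption -- check GET /v1/models for the actual preview identifier.

```python
# Minimal chat-completion request to Mistral's REST API.
# The model ID below is a GUESS at the preview identifier;
# list available models via GET /v1/models to confirm.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-medium-3.5",  # hypothetical preview ID
        "messages": [
            {"role": "user", "content": "Summarize the EU AI Act in three bullets."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```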

API (Mistral Medium 3 -- predecessor)

$1 per 1M tokens
  • Launched April 9, 2026
  • EU AI Act compliance metadata
  • Balanced price/performance, superseded by 3.5 for new workloads

API (Mistral Large 3)

$2 per 1M tokens
  • Flagship sparse MoE
  • 256K context
  • MRL license (paid for commercial self-hosting)

Voxtral TTS

$0
  • 4B-param open-source speech model, March 2026
  • 9 languages, runs on consumer hardware
  • Apache 2.0
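
Since Voxtral TTS is open-source and sized for consumer hardware, local inference should be straightforward. The sketch below assumes the weights ship on Hugging Face behind the standard transformers text-to-speech pipeline; the repo ID and that pipeline compatibility are both assumptions, not confirmed details.

```python
# HYPOTHETICAL local Voxtral TTS inference via the transformers
# text-to-speech pipeline. The repo ID is invented for illustration;
# pipeline compatibility is an assumption.
from transformers import pipeline
import scipy.io.wavfile as wavfile

tts = pipeline("text-to-speech", model="mistralai/voxtral-tts-4b")  # hypothetical ID
out = tts("Bonjour, ceci est un test de synthèse vocale.")
# Pipeline returns {"audio": np.ndarray, "sampling_rate": int}
wavfile.write("voxtral_test.wav", out["sampling_rate"], out["audio"].squeeze())
```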

System Requirements

Hardware needed to self-host. Min = smallest viable setup (usually heavy quantization). Max = full-precision / production-grade.

  • Mistral Small 3 / Devstral 2 (24B dense, Apache 2.0) -- Min: 10 GB VRAM (Q4) · Max: 1× A100 40 GB FP16
  • Mistral 14B / 8B / 3B (Apache 2.0) -- Min: 6 / 4 / 2 GB VRAM (Q4) · Max: 24 / 16 / 8 GB VRAM FP16
  • Mixtral 8x22B (legacy) -- Min: 64 GB RAM + 24 GB GPU (Q3) · Max: 2× A100 80 GB FP16
  • Mistral Large 3 (flagship) -- not self-hostable under free terms; the MRL license requires a paid commercial license to self-host
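
As a rough sanity check on these figures, weight memory scales with parameter count times bits per weight. The sketch below is back-of-envelope only: it counts weights alone, while real quantization formats carry metadata overhead and the KV cache and activations add several more GB, which is why it won't exactly match the vendor minimums above.

```python
# Back-of-envelope VRAM estimate for model WEIGHTS only.
# Excludes KV cache, activations, and quantization-format metadata,
# so real-world minimums differ from these numbers.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB (10^9 bytes)."""
    return params_billion * bits_per_weight / 8

for name, params in [("Small 3 (24B)", 24), ("14B", 14), ("8B", 8), ("3B", 3)]:
    print(f"{name:14s} ~{weight_gb(params, 4):5.1f} GB @ Q4"
          f"   ~{weight_gb(params, 16):5.1f} GB @ FP16")
```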

Known Issues

  • ENTERPRISE PRODUCT (2026-04-28 public preview): Mistral Workflows -- a Temporal-powered durable orchestration engine for AI workloads. Built on the same Temporal core that backs Netflix / Stripe / Salesforce, with Mistral-added streaming, payload handling, multi-tenancy, and observability. Python SDK v3.0, Helm-deployable workers, customer-perimeter data residency. Human-in-the-loop approvals via simple Python (wait_for_input(); a hypothetical sketch of the pattern follows this list), full execution tracking in Studio, deploys cloud / on-prem / hybrid. Distinct from Vibe Remote Agents (the consumer-facing async coding sessions); Workflows is the enterprise infra layer that makes them and other AI workloads durable at scale. Live customers cited at preview: ASML, ABANCA, CMA-CGM, France Travail, La Banque Postale, Moeve. Pricing during preview not disclosed. Source: Mistral AI blog (mistral.ai/news/workflows) · 2026-04-28
  • Mistral Medium 3.5 shipped 2026-04-29 in public preview, accompanied by two net-new agentic offerings: Vibe Remote Agents (cloud-based coding sessions, async + parallel, CLI or Le Chat entry) and Le Chat Work Mode (agentic chat for multi-step tasks across tools). The model is 128B dense, 256k context, and posts 77.6% on SWE-Bench Verified. Pricing is $1.50/$7.50 per million tokens (input/output). The 'flagship merged' framing means Medium 3.5 supersedes Medium 3 for new workloads -- existing Medium 3 deployments continue to work. Source: Mistral AI blog (mistral.ai/news/vibe-remote-agents-mistral-medium-3-5) · 2026-04-29
  • Le Chat is occasionally slower than competitors during European business hours. Source: Reddit r/MistralAI · 2026-03
  • Voxtral TTS English output is competent but trails ElevenLabs v3 on expressiveness -- it's positioned as an open-source alternative, not a quality leader. Source: TechCrunch Voxtral coverage · 2026-03
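
Only wait_for_input() is named in Mistral's announcement; everything else in the sketch below (module path, decorator, activity helper) is invented to illustrate what a durable human-in-the-loop gate typically looks like in a Temporal-style engine, and should not be read as the actual SDK surface.

```python
# HYPOTHETICAL sketch of a human-in-the-loop approval gate in a
# Temporal-style durable workflow. Only wait_for_input() appears in
# Mistral's announcement; the import path, decorator, and run_model()
# helper are invented for illustration.
from mistral_workflows import workflow, wait_for_input  # hypothetical import

@workflow.define  # hypothetical decorator
async def refund_workflow(ticket: dict) -> str:
    draft = await workflow.run_model(  # hypothetical model-call activity
        model="mistral-medium-3.5",
        prompt=f"Draft a refund decision for: {ticket['summary']}",
    )
    # Durable pause: the engine persists state here (potentially for days)
    # until a human submits a decision, then resumes where it left off.
    approval = await wait_for_input(name="refund_approval", payload=draft)
    return draft if approval["approved"] else "escalated to support lead"
```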

Best for

Developers who want cheap, high-quality API access. Also strong for multilingual applications and European companies that prefer an EU-based AI provider for data residency.

Not for

Non-technical users looking for a polished chat experience. ChatGPT and Claude are much better as consumer products.

Our Verdict

Mistral is the scrappy underdog that keeps surprising people. Their models are impressively efficient -- near-frontier quality at a fraction of the API cost. But the consumer experience (Le Chat) is rough. This is primarily a developer's tool. If you're building AI applications on a budget, Mistral should be on your shortlist.

Sources

  • Mistral AI: Workflows public preview (2026-04-28) (accessed 2026-05-04)
  • Mistral AI: Vibe Remote Agents + Mistral Medium 3.5 (2026-04-29) (accessed 2026-04-30)
  • Mistral AI official site (accessed 2026-04-30)
  • TechCrunch: Mistral releases Voxtral TTS (accessed 2026-04-16)
  • SiliconANGLE: hardware-efficient language models (accessed 2026-04-16)
  • LMSYS Chatbot Arena rankings (accessed 2026-04-16)
  • API testing (accessed 2026-04-16)

Alternatives to Mistral AI

Llama 4 (Meta)

Meta's open-weights flagship family -- Scout (10M context), Maverick (multimodal 400B MoE), Behemoth in preview

B · 7.9/10
Free tier · From $0
Updated 2026-04-13
DeepSeek

DeepSeek V4 shipped 2026-04-24: V4-Pro (1.6T/49B active MoE) + V4-Flash (284B/13B active), 1M native context, Hybrid Attention Architecture, open-source on HF. Trails only Gemini 3.1 Pro on world knowledge

A · 8.0/10
Free tier · From $0
Updated 2026-04-28
Gemma 4 (Google)

Google DeepMind's open-weights model family -- multimodal, 256K context, runs on edge devices

A · 8.3/10
Free tier · From $0
Updated 2026-04-19
Qwen (Alibaba)

Alibaba's open-weights + API family -- Qwen3.6-27B dense (Apr 22 2026 Apache 2.0, beats the 397B MoE flagship on coding from a single consumer GPU), Qwen 3.6-Max-Preview (Apr 20 2026 closed-weights #1 on SWE-bench Pro/Terminal-Bench 2.0/SciCode), Qwen3.6-35B-A3B (Apr 16 open-weights MoE), plus Qwen 3.6-Plus API flagship

A · 8.8/10
Free tier · From $0
Updated 2026-04-27
GLM / Z.ai (Zhipu AI)

Zhipu AI's open-weights family -- GLM-5.1 (launched 2026-04-07) is 744B MoE / 40B active, topped SWE-Bench Pro at 58.4 (beating GPT-5.4 and Claude Opus 4.6), MIT licensed, 200K context. Trained entirely on 100K Huawei Ascend 910B chips -- first frontier model with zero Nvidia in the training stack

A · 8.0/10
Free tier · From $0
Updated 2026-04-17
Kimi K2.6 (Moonshot)

Moonshot's 1T-parameter MoE open-weights flagship -- Kimi K2.6 (GA 2026-04-20) is #1 open-weights on Artificial Analysis Intelligence Index v4.0 (score 54, ranked #4 overall). Native video input, 256K context, Modified MIT license

A · 8.1/10
Free tier · From $0
Updated 2026-04-27
Nemotron (Nvidia)

Nvidia's open-weights family -- hybrid Mamba-Transformer MoE architecture, optimized for efficient reasoning on Nvidia hardware

B · 7.8/10
Free tier · From $0
Updated 2026-04-19
MiniMax M2.7

MiniMax's open-weights self-evolving agent flagship -- M2.7 (released 2026-03-18) scores 56.22% SWE-Pro and 57.0% Terminal Bench 2 from a 229B/10B-active MoE

A · 8.4/10
Free tier · From $0
Updated 2026-04-27
Falcon (TII)

UAE's Technology Innovation Institute open-weights family -- Falcon 3 optimized for efficient sub-10B deployment on consumer hardware

B · 7.1/10
Free tier · From $0
Updated 2026-04-13
gpt-oss (OpenAI)

OpenAI's FIRST open-weight models -- gpt-oss-120b (single 80GB GPU, near parity with o4-mini on reasoning) and gpt-oss-20b (runs on 16GB edge devices). Apache 2.0. Launched 2025-08-05. gpt-oss-safeguard ships in 2026 as the safety-tuned variant

A · 8.1/10
Free tier · From $0
Updated 2026-04-17
IBM Granite 4.0

IBM's enterprise-focused open-weight family -- Granite 4.0 hybrid Mamba-2 + transformer architecture (70-80% memory reduction vs pure transformer), 3B to 32B sizes, Apache 2.0. First open model family to secure ISO 42001 certification. Nano 350M runs on CPU with 8-16GB RAM. 3B Vision variant landed 2026-04-01

A · 8.2/10
Free tier · From $0
Updated 2026-04-17
Arcee Trinity-Large-Thinking

Arcee AI's US-made open-weight frontier reasoning model -- launched 2026-04-01. 398B total params, ~13B active. Sparse MoE (256 experts, 4 active = 1.56% routing). Apache 2.0, trained from scratch. #2 on PinchBench trailing only Claude 3.5 Opus. ~96% cheaper than Opus-4.6 on agentic tasks

A · 8.1/10
Free tier · From $0
Updated 2026-04-17
Olmo 3 (AI2)

Allen Institute for AI's fully-open frontier reasoning models -- Olmo 3 family (2025-11-20) includes 7B and 32B sizes, four variants (Base, Think, Instruct, RLZero). Apache 2.0 with fully open data + checkpoints + training logs. Olmo 3-Think 32B matches Qwen3-32B-Thinking at 6x fewer training tokens

B · 7.9/10
Free tier · From $0
Updated 2026-04-17
AI21 Jamba2

AI21 Labs' hybrid SSM-Transformer (Mamba-style) open-weight family -- Jamba2 launched 2026-01-08. Two sizes: 3B dense (runs on phones / laptops) and Jamba2 Mini MoE (12B active / 52B total). Apache 2.0, 256K context, mid-trained on 500B tokens

A · 8.0/10
Free tier · From $0
Updated 2026-04-17
StepFun Step 3.5 Flash

StepFun's (China) agent-focused open-weight model -- Step 3.5 Flash launched 2026-02-01. 196B sparse MoE, ~11B active. Benchmarks slightly ahead of DeepSeek V3.2 at over 3x smaller total size. Step 3 (321B / 38B active, Apache 2.0) and Step3-VL-10B multimodal also in the family

B · 7.8/10
Free tier · From $0
Updated 2026-04-17
Cohere Command A

Cohere's enterprise-multilingual flagship -- 111B params, 256K context, runs on 2x H100. 23 languages. CC-BY-NC 4.0 on weights (research / non-commercial), commercial requires Cohere enterprise contract. Follow-ups: Command A Reasoning + Command A Vision

B · 7.5/10
Free tier · From $0
Updated 2026-04-17