GPT-5.4 Mini Hits 54% on SWE-Bench Pro, Runs 2× Faster and Is Now Free on ChatGPT

GigaNectar Team

GitHub Copilot header graphic from the GitHub Blog changelog announcing general availability of GPT-5.4 mini for GitHub Copilot users on March 17, 2026.
GPT-5.4 Mini & Nano: OpenAI’s Speed-First Models Explained

On March 17, 2026, OpenAI released GPT-5.4 mini and GPT-5.4 nano — two compact, faster versions of its flagship GPT-5.4 model, designed for high-volume workloads where response speed is as critical as raw capability. GPT-5.4 mini runs more than twice as fast as its predecessor GPT-5 mini, scores 54.38% on SWE-bench Pro (just 3.3 points behind the flagship’s 57.7%), and is now accessible to ChatGPT Free and Go users via the “Thinking” option in the + menu.

GPT-5.4 nano is OpenAI’s lowest-cost model to date at $0.20 per million input tokens, available exclusively through the API. Both models are built to operate as subagents inside agentic AI systems — where a larger model like GPT-5.4 orchestrates and delegates individual tasks to mini and nano running in parallel. GPT-5.4 mini is also now rolling out in GitHub Copilot for Pro, Pro+, Business, and Enterprise users. Notion’s AI Engineering Lead Abhisek Modi noted: “Until recently, only the most expensive models could reliably navigate agentic tool calling. Today, smaller models like GPT-5.4 mini and nano can easily handle it.”

AI Models · March 17, 2026

Faster, cheaper AI built for subagents — GPT-5.4 mini scores 54.38% on SWE-bench Pro, within 3.3 points of the flagship, at a fraction of the cost and more than 2× the speed.

⚡ 2× faster than GPT-5 mini 💬 ChatGPT Free tier 🤖 Subagent-ready 💰 From $0.20 / 1M tokens

By the numbers

Key figures at a glance

All benchmark scores are from the official OpenAI release, run at high reasoning effort.

54.38%
GPT-5.4 mini on SWE-bench Pro — 3.3 pts behind the flagship’s 57.7%
72.13%
GPT-5.4 mini on OSWorld-Verified — human baseline is 72.4%
$0.75
Per 1M input tokens — GPT-5.4 mini API price
$0.20
Per 1M input tokens — GPT-5.4 nano, OpenAI’s cheapest model

Benchmarks

How the three models compare

SWE-bench Pro tests real software-engineering tasks. OSWorld-Verified measures autonomous desktop navigation via screenshots.

Source: OpenAI official release, March 17, 2026  ·  High reasoning effort  ·  Human OSWorld baseline: 72.4%

Model explorer

Pick a model, see the full specs

Tap any model below to see its specs, benchmark scores, pricing, and where it’s available right now.

GPT-5.4 mini

Fast, capable, and on ChatGPT’s free tier

Speed vs GPT-5 mini: More than 2× faster
Context window: 400,000 tokens
Input modalities: Text + images
API input / output: $0.75 / $4.50 per 1M
ChatGPT Free & Go: ✓ Live via “Thinking” in + menu
GitHub Copilot: ✓ Rolling out to Pro, Pro+, Business, Enterprise
Codex quota usage: 30% of GPT-5.4 quota
SWE-bench Pro: 54.38%
OSWorld-Verified: 72.13%
Best for: Editing, debugging, codebase search, subagent tasks

GPT-5.4 nano

API-only · OpenAI’s lowest price point

API input / output: $0.20 / $1.25 per 1M (cheapest)
Availability: API only — no ChatGPT or Codex UI
SWE-bench Pro: 52.4%
OSWorld-Verified: 39.01%
vs GPT-5 nano: Major upgrade on coding & tool-calling
Best for: Classification, data extraction, ranking, lightweight coding support
Not ideal for: Complex web browsing, multi-step reasoning, long-context tasks

GPT-5.4 flagship

Full capability — plans, coordinates, reviews

Released: March 5, 2026
SWE-bench Pro: 57.7%
OSWorld-Verified: 75.0% — above the human baseline (72.4%)
ChatGPT access: Plus, Team, Pro — as GPT-5.4 Thinking
In Codex: Plans & delegates subtasks to mini/nano subagents
Role in agentic stack: Orchestrator
API input / output: $2.50 / $15.00 per 1M

How agentic AI works

The delegation model, explained

Inside Codex, GPT-5.4 acts as the orchestrator. It decides what needs to be done, then hands individual jobs to mini and nano running in parallel — each handling narrower, faster tasks.

  • Orchestrator — GPT-5.4: plans, coordinates & reviews final output
  • Mini subagent — GPT-5.4 mini: codebase search · file review · front-end generation · edit & debug loops
  • Nano subagent — GPT-5.4 nano: classification · data extraction · ranking · lightweight coding support
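The fan-out pattern above can be sketched in a few lines of Python: an orchestrator hands narrow tasks to subagent workers in parallel, then collects the results for review. This is an illustrative sketch only — the task-routing table and the `run_subagent` stub are placeholders, not OpenAI’s actual Codex internals.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing: nano takes cheap classification-style jobs,
# mini takes the heavier edit/search jobs. Stubbed locally for illustration.
SUBAGENT_FOR_TASK = {
    "classify": "gpt-5.4-nano",
    "extract": "gpt-5.4-nano",
    "edit": "gpt-5.4-mini",
    "search": "gpt-5.4-mini",
}

def run_subagent(task: str, payload: str) -> str:
    """Stand-in for a real model call; reports which model handled what."""
    model = SUBAGENT_FOR_TASK[task]
    return f"{model} handled {task}: {payload}"

def orchestrate(tasks: list[tuple[str, str]]) -> list[str]:
    """The flagship's role: fan tasks out in parallel, collect results in order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_subagent, t, p) for t, p in tasks]
        return [f.result() for f in futures]

results = orchestrate([("classify", "ticket #1"), ("edit", "main.py")])
```

In a real agentic stack the stub would be replaced by an API call, and the orchestrator model itself would decide the routing rather than a static table.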
“For editing pages specifically, it matched and often exceeded GPT-5.2 on handling complex formatting at a fraction of the compute.” — Abhisek Modi, AI Engineering Lead, Notion (via OpenAI’s official release)

Availability

Who can access what, right now

GPT-5.4 mini is the more widely available model. GPT-5.4 nano is restricted to the API and targets developer pipelines.

GPT-5.4 mini

  • ChatGPT Free & Go — “Thinking” in + menu
  • Paid users — fallback when GPT-5.4 Thinking hits rate limit
  • OpenAI API
  • Codex — uses 30% of GPT-5.4 quota
  • GitHub Copilot — Pro, Pro+, Business, Enterprise

GPT-5.4 nano

  • OpenAI API only
  • Not in ChatGPT interface
  • Not in Codex UI
  • For developer pipelines & automated workflows
  • $0.20 per 1M input tokens
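Since nano is API-only, its natural home is a pipeline. Below is a minimal sketch of the kind of classification payload such a pipeline might send — the model identifier `gpt-5.4-nano` is assumed from the announcement, and the message shape follows the standard Chat Completions request format rather than any nano-specific API.

```python
import json

def build_nano_request(prompt: str, labels: list[str]) -> dict:
    """Build a Chat Completions-style payload for a single-label classification call."""
    return {
        "model": "gpt-5.4-nano",  # assumed identifier, per the release
        "messages": [
            {
                "role": "system",
                "content": f"Classify the input as one of: {', '.join(labels)}. "
                           "Reply with the label only.",
            },
            {"role": "user", "content": prompt},
        ],
        "temperature": 0,  # deterministic output suits pipeline use
    }

req = build_nano_request(
    "Refund not received after 10 days",
    ["billing", "shipping", "account"],
)
print(json.dumps(req, indent=2))
```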

Developer tooling

GPT-5.4 mini lands in GitHub Copilot

Enterprise and Business admins must first enable the GPT-5.4 mini policy in Copilot settings before it appears in the model picker for their users.

Available across all major IDEs

Now rolling out per the GitHub Copilot changelog. Select it via the model picker in chat, ask, edit, and agent modes. A 0.33× premium request multiplier applies — pricing is tentative and subject to change.

VS Code · Visual Studio · JetBrains IDEs · Xcode · Eclipse · github.com · GitHub Mobile (iOS/Android) · GitHub CLI
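The 0.33× multiplier means each GPT-5.4 mini request counts as roughly a third of a premium request against a plan’s allowance. A quick sketch of the arithmetic — the 300-request monthly allowance is a placeholder figure, and the multiplier itself is tentative per the changelog:

```python
def premium_requests_used(requests: int, multiplier: float = 0.33) -> float:
    """Premium-request accounting: each call counts as `multiplier` requests."""
    return requests * multiplier

# With a hypothetical 300-premium-request monthly allowance, the 0.33x
# multiplier stretches it to roughly 900 GPT-5.4 mini calls.
allowance = 300
mini_calls = allowance / 0.33
```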

Cost

API pricing at a glance

Prices are per million tokens as listed in the official OpenAI release.

GPT-5.4 mini

Input tokens: $0.75 / 1M
Output tokens: $4.50 / 1M
Context window: 400K tokens
Codex quota: 30% of GPT-5.4

GPT-5.4 nano

Input tokens: $0.20 / 1M
Output tokens: $1.25 / 1M
Availability: API only
OpenAI’s cheapest: ✓ Yes
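Using the list prices above, a small helper makes per-request costs concrete. The rates are the article’s figures; the 8K-in / 1K-out token counts are an illustrative workload, not a published benchmark.

```python
# Prices per 1M tokens (input, output), from the figures listed above.
PRICES = {
    "gpt-5.4-mini": (0.75, 4.50),
    "gpt-5.4-nano": (0.20, 1.25),
    "gpt-5.4": (2.50, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-million-token rates."""
    inp_rate, out_rate = PRICES[model]
    return (input_tokens * inp_rate + output_tokens * out_rate) / 1_000_000

# Illustrative workload: 8K tokens in, 1K tokens out.
mini_cost = request_cost("gpt-5.4-mini", 8_000, 1_000)  # $0.006 + $0.0045
nano_cost = request_cost("gpt-5.4-nano", 8_000, 1_000)  # $0.0016 + $0.00125
```

At these rates a million such nano requests would run about $2,850 versus $10,500 on mini — the gap that makes nano attractive for high-volume classification and extraction.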

Wrap-up

Small Models, Serious Specs

This piece covered the benchmark scores, pricing, availability, and agentic workflow role of GPT-5.4 mini and GPT-5.4 nano, both released by OpenAI on March 17, 2026. GPT-5.4 mini scored 54.38% on SWE-bench Pro and 72.13% on OSWorld-Verified — within striking distance of the flagship’s 57.7% and 75.0%, respectively — while running more than twice as fast as GPT-5 mini. GPT-5.4 nano, at $0.20 per million input tokens, is OpenAI’s most affordable model and is API-only.

GPT-5.4 mini is available in GitHub Copilot for Pro, Pro+, Business, and Enterprise plans, as well as ChatGPT’s free tier, Codex, and the OpenAI API. In Codex, the model uses 30% of the GPT-5.4 quota, making it practical for routine coding workflows. GPT-5.4 nano is aimed at pipelines handling classification, extraction, and ranking at scale. All pricing, benchmark data, IDE support, and access details discussed here were drawn directly from OpenAI and GitHub’s official release materials.

For more recent technology coverage, see: NVIDIA DLSS 5 and neural rendering for RTX 50 series, Google Maps’ Gemini AI navigation update, and AirPods Max 2 specs, pricing, and features.
