OpenAI Launches GPT-5.5 in ChatGPT and Codex
OpenAI has shipped GPT-5.5, now rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT and Codex. The model matches GPT-5.4's per-token latency while completing the same Codex tasks with significantly fewer tokens: reviewers report a ~57% token reduction on Terminal Bench, where GPT-5.5 scores 39.1 using 2,165 output tokens versus GPT-5.4's 34.2 at 4,950. GPT-5.5 Pro is available for harder problems on select tiers. API pricing is $5 per million input tokens and $30 per million output tokens, with a 1M-token context window; the API launch is described as "imminent." Sam Altman framed the release as a pivot: "To a significant degree, we have to become an AI inference company now." OpenAI and NVIDIA have already piloted a whole-company Codex rollout.
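The token and pricing figures above imply a concrete per-task cost difference. A minimal sketch of that arithmetic, using only the numbers reported in the article and assuming (for illustration) that per-task cost is dominated by output tokens at the announced $30/M output rate:

```python
# Effective-cost arithmetic from the reported Terminal Bench numbers.
# Token counts and pricing come from the article; counting output tokens
# only is a simplifying assumption for illustration.

GPT55_OUTPUT_TOKENS = 2_165   # GPT-5.5 output tokens per Terminal Bench task
GPT54_OUTPUT_TOKENS = 4_950   # GPT-5.4 output tokens on the same tasks
OUTPUT_PRICE_PER_M = 30.00    # announced API price, $ per 1M output tokens

reduction = 1 - GPT55_OUTPUT_TOKENS / GPT54_OUTPUT_TOKENS
cost_55 = GPT55_OUTPUT_TOKENS / 1_000_000 * OUTPUT_PRICE_PER_M
cost_54 = GPT54_OUTPUT_TOKENS / 1_000_000 * OUTPUT_PRICE_PER_M

print(f"token reduction: {reduction:.1%}")   # ~56.3%, in line with the reported ~57%
print(f"output cost per task: ${cost_55:.4f} vs ${cost_54:.4f}")
```

On these figures, output-side cost per task drops from roughly $0.15 to $0.065 at the same (or higher) benchmark score, which is the "intelligence-per-token" improvement the announcement emphasizes.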
Why It Matters
Intelligence-per-token is now the defining frontier metric: higher capability at lower effective cost per task changes the deployment economics for every enterprise building on LLMs. GPT-5.5's position as the default agent model in Codex signals that OpenAI is competing on the agentic work surface, not just on benchmark headlines.