GPT-5.1 Codex Max: The AI That Codes for Sustained Periods, Turning Ideas into Apps
Author: Veronika Radoslavskaya
OpenAI has announced the release of GPT-5.1 Codex Max, a specialized agentic coding model that signals a new era for software development. This update is designed not just to write code faster, but to work autonomously on complex projects for sustained periods, positioning it as an indispensable partner for professional programmers and a powerful tool for non-coders looking to build their own software.
The core breakthrough is the model’s ability to work on large-scale problems without losing context. This is achieved through a new process called compaction, which allows the AI to prune its history while coherently working across multiple context windows—effectively managing millions of tokens in a single task. This technological leap enables project-scale refactors and deep debugging sessions, with OpenAI confirming that the agent is designed to remain persistent, autonomously fixing errors and iterating until it delivers a successful result.
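OpenAI has not published the internals of compaction, but the idea resembles a familiar context-management pattern: when the working history approaches the model's context limit, older turns are condensed into a compact summary so the agent can keep going without losing the thread. The sketch below is a simplified illustration of that pattern only, not OpenAI's actual mechanism; the `summarize` helper, the token budget, and the character-based token estimate are all hypothetical stand-ins.

```python
# Illustrative sketch of history "compaction": once the conversation grows past
# a token budget, older turns are condensed into a summary so the agent can
# keep working across what would otherwise exceed a single context window.
# NOT OpenAI's implementation; every helper here is a hypothetical placeholder.

def count_tokens(text: str) -> int:
    # Rough estimate (~4 characters per token); a real system would use a tokenizer.
    return max(1, len(text) // 4)

def summarize(turns: list[str]) -> str:
    # Placeholder: in practice this would be another model call that distills
    # the key decisions, file changes, and open questions from older turns.
    return "SUMMARY OF EARLIER WORK: " + " | ".join(t[:60] for t in turns)

def compact_history(history: list[str], budget: int = 8000, keep_recent: int = 6) -> list[str]:
    """Prune older turns into a single summary once the history exceeds the budget."""
    total = sum(count_tokens(t) for t in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    older, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(older)] + recent

# An agent loop would call compact_history() before each model request, so a
# long-running task never overflows the context window it is currently using.
```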
For developers, this capability translates directly into significant productivity gains: teams using Codex report substantial increases in pull request output and faster project completion. The model is also more token-efficient, using approximately 30% fewer 'thinking tokens' than previous models to reach the same quality of result, which means real-world cost savings.
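To make the efficiency claim concrete, here is a back-of-the-envelope calculation. The 30% figure comes from OpenAI's announcement; the per-token price and baseline token count below are made-up placeholders purely for illustration.

```python
# Back-of-the-envelope cost comparison based on the stated ~30% reduction in
# "thinking" tokens. The price and baseline usage are hypothetical numbers.
PRICE_PER_MILLION_TOKENS = 10.00      # assumed price, for illustration only
baseline_thinking_tokens = 2_000_000  # assumed usage for a large refactor task

max_thinking_tokens = baseline_thinking_tokens * (1 - 0.30)
baseline_cost = baseline_thinking_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
max_cost = max_thinking_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"Baseline: ${baseline_cost:.2f}  Codex Max: ${max_cost:.2f}  Saved: ${baseline_cost - max_cost:.2f}")
# Baseline: $20.00  Codex Max: $14.00  Saved: $6.00
```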
Crucially for the non-programmer audience, the model’s enhanced agentic capabilities and training in Windows environments mean that highly sophisticated software creation is now accessible through natural language. Codex Max is trained to operate across the Command Line Interface (CLI) and IDE extensions. You simply provide the high-level goal, and the agent takes over the heavy lifting of planning, executing, and validating the application.
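What "takes over the heavy lifting" means in practice is a loop of planning, executing, and checking its own work. The sketch below shows that loop in schematic form; it is a conceptual illustration, not Codex's actual architecture, and every function in it (`plan_steps`, `run_step`, `tests_pass`, `fix`) is a hypothetical placeholder rather than a real Codex API.

```python
# Schematic plan-execute-validate loop of the kind an agentic coding tool runs
# on the user's behalf. Purely illustrative: none of these functions correspond
# to real Codex APIs; they stand in for model calls and shell commands.

def plan_steps(goal: str) -> list[str]:
    return [f"step for: {goal}"]          # placeholder for a model-generated plan

def run_step(step: str) -> str:
    return f"diff produced by {step}"     # placeholder for edits made in the repo

def tests_pass(diff: str) -> bool:
    return True                           # placeholder for running the test suite

def fix(diff: str) -> str:
    return diff + " (patched)"            # placeholder for an error-correction pass

def build_app(goal: str, max_attempts: int = 5) -> list[str]:
    """Given a natural-language goal, plan the work, apply each step, and
    keep iterating on failures until the checks pass or attempts run out."""
    results = []
    for step in plan_steps(goal):
        diff = run_step(step)
        attempts = 0
        while not tests_pass(diff) and attempts < max_attempts:
            diff = fix(diff)
            attempts += 1
        results.append(diff)
    return results

# Usage: build_app("a to-do list web app with user accounts")
```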
OpenAI emphasizes the need for caution: while Codex Max is its most capable cybersecurity model to date, it is designed to run in a secure sandbox by default. The company stresses that the AI should be treated as an additional reviewer, and human oversight remains critical before deploying any autonomously generated software to production.