
OpenAI Releases GPT-5.1-Codex-Max, an Agentic Coding Model with Compaction for Long-Horizon, Multi-Window Workflows

Tech | By Gavin Wallace | 20/11/2025 | 5 Mins Read

OpenAI has introduced GPT-5.1-Codex-Max, an agentic coding model designed to handle long software engineering projects that can span many hours and millions of tokens. It is available in Codex today across the CLI, IDE extension, cloud integration and code review surfaces, with API access planned for the near future.

What is GPT-5.1-Codex-Max optimised for?

GPT-5.1-Codex-Max is built on an updated version of OpenAI’s foundational reasoning model, which was trained across various domains including software engineering and math. On top of this base, GPT-5.1-Codex-Max is further trained on real-world software engineering workloads such as PR creation, code review, frontend coding and Q&A.

The model is designed for frontier coding work, not general conversation. GPT-5.1-Codex-Max and the Codex family are recommended exclusively for agentic coding tasks in Codex or Codex-like environments; they are not a replacement for GPT-5.1 in general conversation.

It is also the first Codex model trained to operate in Windows environments. Training included tasks designed to make the model a more effective collaborator in the Codex CLI, with improved behavior when working inside the Codex sandbox and running commands.

Compaction and long-running tasks

GPT-5.1-Codex-Max’s core new capability is compaction. The model still operates over one context window at a time, but it is trained natively to work across several context windows by pruning its interaction history while preserving the information that matters over long horizons.

When a session in Codex approaches the context-window limit, GPT-5.1-Codex-Max compacts it: the model starts a fresh window seeded with the essential task state and continues executing, repeating this process until the task is complete.

OpenAI reports internal evaluations in which GPT-5.1-Codex-Max worked independently on a task for more than 24 hours, fixing failing tests and iterating on its implementation until it produced a successful result.
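The compaction loop described above can be illustrated with a toy simulation. This is not OpenAI's implementation; it is a minimal sketch under the assumption that compaction means "when the window is nearly full, replace the history with a compressed summary and continue in a fresh window." All names and numbers here are invented for illustration.

```python
# Toy illustration of a compaction loop (not OpenAI's implementation):
# when the context nears its limit, the history is replaced by a
# compressed summary and the agent continues in a fresh window.

CONTEXT_LIMIT = 100       # tokens per window (toy number)
COMPACT_THRESHOLD = 0.8   # compact when the window is 80% full

def summarize(history):
    """Stand-in for model-driven compaction: keep only essential state."""
    return [f"summary of {len(history)} prior steps"]

def run_task(steps):
    history = []          # contents of the current context window
    windows = 1
    for step in steps:
        history.append(step)
        used = sum(len(s.split()) for s in history)  # crude token count
        if used >= CONTEXT_LIMIT * COMPACT_THRESHOLD:
            history = summarize(history)  # prune, preserve key info
            windows += 1                  # continue in a fresh window
    return windows

# A long task transparently spans several context windows.
steps = [f"step {i}: edit file, run tests, inspect failure" for i in range(40)]
print(run_task(steps))  # this toy run spans 5 windows
```

The point of the sketch is that the caller never manages windows explicitly: the loop compacts whenever needed, which is why a single task can run for hours across millions of tokens.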

https://openai.com/index/gpt-5-1-codex-max/

Speed, reasoning effort and token efficiency

GPT-5.1-Codex-Max uses the same reasoning-effort control as GPT-5, tuned specifically for agentic work. Reasoning effort controls how many tokens the model spends thinking before it commits to a solution.

At medium reasoning effort on SWE-bench Verified, GPT-5.1-Codex-Max achieves higher accuracy than GPT-5.1-Codex at the same effort while using about 30% fewer tokens. OpenAI is also introducing a new Extra High (xhigh) reasoning effort for tasks that are not latency-sensitive, letting the model think longer to reach better solutions. Medium remains the recommended setting for most workloads.
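The guidance above amounts to a simple decision rule. The helper below is hypothetical (the function name and parameters are my own, not an OpenAI API); it just encodes the article's recommendation: medium by default, xhigh only for the hardest tasks where latency does not matter.

```python
# Hypothetical helper encoding the article's guidance on reasoning effort.
# Not an OpenAI API: the name and parameters are invented for illustration.

def pick_effort(latency_sensitive: bool, hardest_problems: bool = False) -> str:
    """Return a reasoning-effort level per the recommended defaults."""
    if not latency_sensitive and hardest_problems:
        return "xhigh"   # think longest; reserved for non-latency-sensitive work
    return "medium"      # recommended default for most workloads

print(pick_effort(latency_sensitive=True))                          # medium
print(pick_effort(latency_sensitive=False, hardest_problems=True))  # xhigh
```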

Benchmark results reflect these changes. On the 500-problem SWE-bench Verified set, OpenAI reports 73.7% for GPT-5.1-Codex at high reasoning effort versus 77.9% for GPT-5.1-Codex-Max at xhigh. On SWE-Lancer IC SWE the scores are 66.3% versus 79.9%, and on Terminal-Bench 2.0 they are 52.8% versus 58.1%. Compaction is enabled in all evaluations, and Terminal-Bench 2.0 uses Codex CLI within the Laude Harbor harness.

GPT-5.1-Codex-Max also generates high-quality frontend designs similar in functionality and visual appearance to those of GPT-5.1-Codex, but at lower token cost thanks to more efficient reasoning traces.

https://openai.com/index/gpt-5-1-codex-max/

What you need to know

  1. GPT-5.1-Codex-Max is a frontier agentic coding model built on an updated reasoning base and further trained on real software engineering tasks such as PR creation, code review, frontend coding and Q&A. It is available today across Codex CLI, IDE, cloud and code review surfaces, with API access coming later.
  2. The model supports long-running tasks through compaction: it repeatedly compresses its session state to span multiple context windows, enabling runs of more than 24 hours over millions of tokens.
  3. GPT-5.1-Codex-Max has the same reasoning-effort controls as GPT-5.1-Codex, yet at medium effort it is both more accurate and more token-efficient on SWE-bench Verified; the new Extra High (xhigh) effort is reserved for the hardest, non-latency-sensitive tasks.
  4. At xhigh effort, GPT-5.1-Codex-Max improves over GPT-5.1-Codex at high effort: SWE-bench Verified rises from 73.7% to 77.9%, SWE-Lancer IC SWE from 66.3% to 79.9%, and Terminal-Bench 2.0 from 52.8% to 58.1%.

GPT-5.1-Codex-Max makes it clear that OpenAI is investing in long-running agentic coding rather than quick one-shot edits. The model’s compaction mechanism, its results on frontier coding evaluations such as SWE-bench Verified and SWE-Lancer IC SWE, and its explicit reasoning-effort controls make it a meaningful test of scaling test-time compute in real software engineering workflows, not just benchmarks. As this capability moves into production pipelines, OpenAI’s Preparedness Framework and the Codex sandbox will become increasingly important. In short, GPT-5.1-Codex-Max operationalises long-horizon reasoning inside developer tools.

