Mistral AI has quietly built one of the most practical ecosystems for coding agents in the open-source/open-weights AI space, and now it is shipping its biggest infrastructure upgrade yet. Mistral announced a public preview of remote agents in Vibe, its coding agent platform, alongside Mistral Medium 3.5: a new 128B dense model that now serves as the default model in both Vibe and Le Chat, Mistral's consumer assistant.
What Is Vibe and Why Is It Important?
If you haven’t used it yet, Mistral Vibe is a coding agent accessible through a CLI (command-line interface) that lets an AI model work through software tasks on your behalf — writing code, refactoring modules, generating tests, investigating CI failures, and more. It’s like a junior programmer that can work on any codebase.
Vibe sessions were previously local, which meant the agent was tied to your computer and terminal. Today, that changes.
When you step away, the agent keeps running.
Coding sessions can now complete long-running tasks even while you are away. Many tasks can run simultaneously, so you no longer become the bottleneck at every stage.
This shift in behavior is what makes the difference. You no longer need to babysit a coding session in your terminal: start a task and the cloud handles the rest. Cloud agents can be launched from Le Chat or the Mistral CLI, and you can monitor your agent's progress while it runs, including file diffs, tool calls, and progress state.
Developers who have already started a CLI session will find this feature particularly useful: sessions running locally can be transferred to the cloud with all their history and task status. You don't lose your place; you just move the work off your machine.
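Conceptually, moving a live session to the cloud means serializing its state locally and rebuilding it on the remote side. Here is a minimal sketch of that idea; the `SessionState` fields and format are hypothetical, since Vibe's real session format is not public:

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical data model: Vibe's actual session format is not public.
@dataclass
class SessionState:
    session_id: str
    history: list = field(default_factory=list)   # prior user/agent turns
    task_status: str = "in_progress"              # e.g. "pending", "in_progress", "done"

def export_session(state: SessionState) -> str:
    """Serialize a local session so a remote worker can resume it."""
    return json.dumps(asdict(state))

def resume_session(payload: str) -> SessionState:
    """Rebuild the session on the cloud side from the serialized payload."""
    return SessionState(**json.loads(payload))

local = SessionState("sess-42", history=["refactor the auth module"])
remote = resume_session(export_session(local))
```

The key property is round-tripping: the resumed session carries the same history and task status, so the remote agent picks up exactly where the local one left off.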
Every coding session runs in a sandbox that isolates edits, installations, and broad changes. Once the job completes, you are notified and the agent opens a pull request on GitHub, so you review the final result rather than the individual keystrokes.
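The sandbox-then-pull-request pattern can be sketched in a few lines: copy the repository into an isolated directory, apply edits only there, and emit a reviewable diff. This is an illustrative simplification, not Vibe's implementation:

```python
import difflib
import shutil
import tempfile
from pathlib import Path

def run_in_sandbox(repo: Path, filename: str, edit) -> str:
    """Copy the repo into an isolated temp dir, apply the edit there,
    and return a unified diff; the original tree is never touched."""
    sandbox = Path(tempfile.mkdtemp())
    shutil.copytree(repo, sandbox / repo.name)
    target = sandbox / repo.name / filename
    before = target.read_text().splitlines(keepends=True)
    target.write_text(edit(target.read_text()))
    after = target.read_text().splitlines(keepends=True)
    return "".join(difflib.unified_diff(before, after, f"a/{filename}", f"b/{filename}"))

# Demo: set up a tiny repo and apply an edit inside the sandbox.
repo = Path(tempfile.mkdtemp()) / "demo-repo"
repo.mkdir()
(repo / "app.py").write_text("def greet():\n    return 'hi'\n")
patch = run_in_sandbox(repo, "app.py", lambda src: src.replace("'hi'", "'hello'"))
```

The caller reviews `patch` (the role the GitHub pull request plays in Vibe) while the original `repo` stays untouched.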
It also helps to understand how Vibe connects to Le Chat. Mistral brings Vibe into Le Chat using Workflows orchestrated in Mistral Studio: originally built for its own in-house coding environment, then for enterprise customers, and now open to everyone. The remote coding agent in Le Chat is therefore not a standalone feature; it is built on top of Mistral's own orchestration layer, which is useful context if you're thinking about how to architect similar agentic systems yourself.
Vibe integrates with GitHub, Linear, Jira, and Sentry for issues and incidents, and can report into apps such as Slack and Teams.
Mistral Medium 3.5: The Model Behind It All
None of this would be practical without a capable model. The newly released model is Mistral Medium 3.5, which the Mistral team calls its flagship.
It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. For context, a 256k context window means the model can process roughly 200,000 words in a single pass — long enough to reason across an entire large codebase.
The model is also multimodal. The Mistral team trained the vision encoder from scratch to handle variable image sizes and aspect ratios, a notable architectural choice. Most vision-language models rely on pretrained encoders such as CLIP's, so building this component entirely from scratch suggests Mistral placed a high priority on flexibility rather than defaulting to a fixed-resolution assumption.
Mistral Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and other models such as Qwen3.5 397B A17B. SWE-Bench Verified is a standard benchmark that tests whether a model can resolve real-world GitHub issues from popular open-source repositories; it's one of the most reliable proxies for practical software engineering ability. The model also scores 91.4 on τ³-Telecom and has strong agentic capabilities.

The model's reasoning effort is configurable per request, so the same model can handle a quick chat answer or a more complex agentic run. This matters for developers integrating the model via API: you can dial down compute for simple lookups and dial it up for multi-step reasoning tasks, without switching models.
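In practice, per-request effort selection might look like the sketch below. The `reasoning_effort` field, effort level names, and request shape here are hypothetical; check Mistral's API documentation for the actual parameter names:

```python
# Hypothetical request builder: `reasoning_effort`, the effort level
# names, and the request shape are illustrative, not Mistral's
# documented API.
EFFORT_BY_TASK = {
    "chat": "low",            # quick lookups and replies
    "code_review": "medium",  # moderate multi-step reasoning
    "agentic_run": "high",    # long-horizon tool-using tasks
}

def build_request(prompt: str, task_kind: str) -> dict:
    return {
        "model": "mistral-medium-3.5",
        "reasoning_effort": EFFORT_BY_TASK.get(task_kind, "medium"),
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Summarize this stack trace", "chat")
```

The design point is that compute is a per-request dial on one model, rather than a reason to route between a small model and a large one.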
The model was built to handle long-horizon tasks and call multiple tools reliably. It also produces structured output that can be consumed directly by downstream code.
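"Structured output consumed by downstream code" usually means validating the model's JSON before acting on it. A minimal sketch, with an illustrative schema that is not a Mistral format:

```python
import json

# Illustrative tool-call shape; not a documented Mistral output format.
REQUIRED_KEYS = {"tool", "arguments"}

def parse_tool_call(raw: str) -> dict:
    """Validate the agent's structured output before downstream code acts on it."""
    call = json.loads(raw)
    missing = REQUIRED_KEYS - call.keys()
    if missing:
        raise ValueError(f"malformed tool call, missing keys: {sorted(missing)}")
    return call

call = parse_tool_call('{"tool": "run_tests", "arguments": {"path": "tests/"}}')
```

Failing fast on malformed output is what makes structured generation safe to wire into automation: a bad payload raises an error instead of triggering the wrong tool.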
Le Chat Now Has a Work Mode: A New Agentic Layer
Beyond the coding agent upgrades, Mistral is also shipping Work mode in Le Chat: a new agentic mode for complex, multi-step tasks, powered by Mistral Medium 3.5 and a harness new to Le Chat. Agents sit behind the assistant, which means Le Chat can read, write, and use multiple tools simultaneously.
Practically, this means things like cross-tool workflows — catching up across email, messages, and calendar; preparing for a meeting with relevant context pulled from multiple sources; or triaging an inbox and creating Jira issues from team discussions.
Work mode has connectors enabled by default rather than requiring manual selection. This gives the agent access to documents, email, calendars, and other systems, providing the context it needs to take the right action. That is a significant departure from traditional chat assistants, which require you to select your tools manually before every session.
Transparency is built in rather than bolted on: every action the agent takes is visible, including each tool call and the reasoning behind it. Le Chat will ask for explicit approval, based on your permissions, before proceeding with sensitive tasks like sending a message, writing a document, or modifying data.
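The approval model described above boils down to a gate in front of sensitive action types. Here is a minimal sketch; the action names and hook are hypothetical, not Le Chat's internals:

```python
# Illustrative approval gate: sensitive action types require explicit
# user confirmation before the agent proceeds. Action names and the
# `approve` hook are hypothetical, not Le Chat's actual internals.
SENSITIVE = {"send_message", "write_document", "modify_data"}

def execute(action: str, approve) -> str:
    """Run an action, routing sensitive ones through the approval hook."""
    if action in SENSITIVE and not approve(action):
        return "blocked"
    return "executed"

# Read-only actions pass through; sensitive ones hit the approval hook.
reads = execute("read_calendar", approve=lambda a: False)
send = execute("send_message", approve=lambda a: False)
```

Keeping the sensitive set explicit (rather than trusting the model to self-classify) is what makes this kind of gate auditable.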
What you need to know
The key points to remember are:
- Mistral Medium 3.5 is the new default model for both Vibe and Le Chat: a dense 128B model with a 256k context window that scores 77.6% on SWE-Bench Verified, beats Devstral 2 and Qwen3.5 397B A17B, and is available as open weights on Hugging Face.
- Vibe coding agents now run in the cloud: sessions can be spawned from the CLI or Le Chat and run asynchronously in isolated sandboxes, and local sessions can be moved to the cloud without losing session history or task state.
- Le Chat's new Work mode executes multi-step, parallel agentic tasks: powered by Mistral Medium 3.5, it can work across email, calendar, documents, Jira, and Slack simultaneously, with all tool calls and reasoning steps visible and explicit approval required before sensitive actions.
- Mistral Medium 3.5 lets you configure reasoning effort per API request: the same model handles lightweight chat replies and complex long-horizon agentic runs.
Check out the model weights on Hugging Face and the technical details.

