AI-trends.today

Mistral AI Launches Remote Agents for Vibe and Mistral Medium 3.5, Which Scores 77.6% on SWE-Bench Verified

Tech · By Gavin Wallace · 03/05/2026 · 7 Mins Read

Mistral AI has quietly built one of the most practical ecosystems for coding agents in the open-source/open-weights AI space, and now it is shipping its biggest infrastructure upgrade yet. Mistral announced the public preview of remote agents in Vibe, its coding agent platform, alongside Mistral Medium 3.5, a new 128B dense model that now serves as the default model in both Vibe and Le Chat, Mistral's consumer assistant.

What Is Vibe and Why Is It Important?

If you haven’t used it yet, Mistral Vibe is a coding agent accessible through a CLI (command-line interface) that lets an AI model work through software tasks on your behalf — writing code, refactoring modules, generating tests, investigating CI failures, and more. It’s like a junior programmer that can work on any codebase.

Vibe sessions were previously local only, which meant the agent was tied to your computer and your terminal. Today, that changes.

When you step away, the agent runs.

Now, coding sessions can complete long-running tasks even while you are away. Many tasks can run simultaneously, so you are no longer the bottleneck at every stage.

This shift in behavior is what makes the difference. You don't need to babysit a coding session in your terminal: start a task and the cloud handles the rest. Cloud agents can be launched from Le Chat or the Mistral CLI, and you can monitor your agent's progress while it runs, including file diffs, tool calls, and progress state.

Developers who have already started a CLI session will find this feature particularly useful: sessions running locally can be transferred to the cloud with all their history and task status. You don't lose your place; you just move the work off your machine.

Every session runs in isolation: each coding session executes in a sandbox that contains all edits, installations, and sweeping changes. When the job completes, you are notified and the agent opens a pull request on GitHub, so you review the final result rather than the individual keystrokes.
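The session lifecycle described above (local start, optional hand-off to the cloud, isolated execution, then a pull request on completion) can be sketched as a small state machine. Everything here, from the class and state names to the history messages, is illustrative and not Mistral's actual API:

```python
from enum import Enum, auto

class SessionState(Enum):
    # Illustrative lifecycle states for a Vibe-style coding session.
    LOCAL = auto()
    REMOTE_RUNNING = auto()
    PR_OPENED = auto()

class CodingSession:
    """Toy model of a remote coding session; not Mistral's real API."""

    def __init__(self, task: str):
        self.task = task
        self.state = SessionState.LOCAL
        self.history = [f"started: {task}"]

    def teleport_to_cloud(self) -> None:
        # History and task state travel with the session to the cloud sandbox.
        if self.state is SessionState.LOCAL:
            self.state = SessionState.REMOTE_RUNNING
            self.history.append("teleported to cloud sandbox")

    def finish(self) -> str:
        # On completion the agent opens a pull request for review.
        self.state = SessionState.PR_OPENED
        self.history.append("opened pull request")
        return "PR ready for review"

session = CodingSession("fix failing CI job")
session.teleport_to_cloud()
print(session.finish())      # PR ready for review
print(len(session.history))  # 3
```

The point of the sketch is the hand-off: because the session object carries its own history, moving it to the cloud loses nothing, which mirrors the "you just move the work off your machine" behavior described above.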

It also helps to understand how Vibe connects to Le Chat. Mistral brings Vibe into Le Chat using Workflows orchestrated in Mistral Studio, a layer originally built for its own in-house coding environment, then extended to enterprise customers, and now open to everyone. The remote coding agent in Le Chat is therefore not a standalone feature; it sits on top of Mistral's own orchestration layer, which is useful context if you're thinking about how to architect similar agentic systems yourself.
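An orchestration layer of this kind typically chains discrete steps and passes context between them. The toy runner below shows only the general pattern; the step names and wiring are invented for illustration and have nothing to do with Mistral Studio's actual Workflows API:

```python
from typing import Callable

def run_workflow(steps: list[Callable[[str], str]], payload: str) -> str:
    # Feed each step's output into the next, accumulating context.
    for step in steps:
        payload = step(payload)
    return payload

# Hypothetical steps a coding workflow might chain together.
clone = lambda repo: f"cloned {repo}"
plan = lambda ctx: f"{ctx}; planned edits"
apply_edits = lambda ctx: f"{ctx}; edits applied"

print(run_workflow([clone, plan, apply_edits], "acme/service"))
# cloned acme/service; planned edits; edits applied
```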

Vibe integrates with GitHub, Linear, Jira, and Sentry, pulling in issues and incidents, and it can report progress into apps such as Slack and Teams.

Mistral Medium 3.5: The Model Behind It All

None of this would be practical without a capable model. The newly released Mistral Medium 3.5 is what the Mistral team calls its flagship model.

It is a dense 128B model with a 256k context window, handling instruction-following, reasoning, and coding in a single set of weights. For context, a 256k context window means the model can process roughly 200,000 words in a single pass — long enough to reason across an entire large codebase.
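The word estimate above follows from a common rule of thumb that one token corresponds to roughly 0.75 English words (the exact ratio depends on the tokenizer and the text):

```python
CONTEXT_TOKENS = 256_000
WORDS_PER_TOKEN = 0.75  # rough rule of thumb for English prose

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(approx_words)  # 192000
```

That lands near the "roughly 200,000 words" figure; for code, which tokenizes less efficiently than prose, the effective word count is lower.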

The model is also multimodal. The Mistral team trained the vision encoder from scratch to handle variable image sizes and aspect ratios, a notable architectural choice. Since most vision-language models rely on pretrained encoders such as CLIP's, building this component entirely from scratch suggests Mistral prioritized flexibility over defaulting to a fixed-resolution assumption.

Mistral Medium 3.5 scores 77.6% on SWE-Bench Verified, ahead of Devstral 2 and other models such as Qwen3.5 397B A17B. SWE-Bench Verified is a standard benchmark that tests whether a model can resolve real-world GitHub issues from popular open-source repositories; it is one of the most reliable proxies for practical software engineering ability. The model also scores 91.4 on τ³-Telecom and has strong agentic capabilities.

https://mistral.ai/news/vibe-remote-agents-mistral-medium-3-5

The model's reasoning effort is configurable per request, so the same model can handle a quick chat answer or a more complex agentic run. This matters for developers integrating the model via API: you can dial down compute for simple lookups and dial it up for multi-step reasoning tasks without switching models.
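A per-request knob like this is usually exposed as a field on the request body. The sketch below is a generic illustration of that pattern; the field name `reasoning_effort`, its values, and the model identifier are assumptions, so consult Mistral's API reference for the real parameter:

```python
def build_request(prompt: str, complex_task: bool) -> dict:
    """Assemble a chat-completion request, dialing effort up or down.

    `reasoning_effort` and the model id are illustrative placeholders,
    not Mistral's documented parameter names.
    """
    return {
        "model": "mistral-medium-3.5",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": "high" if complex_task else "low",
    }

quick = build_request("What does HTTP 404 mean?", complex_task=False)
deep = build_request("Refactor this module and fix the race condition",
                     complex_task=True)
print(quick["reasoning_effort"], deep["reasoning_effort"])  # low high
```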

The model was built to handle long-horizon tasks and to call multiple tools reliably. It also produces structured output that downstream code can consume.
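Structured output matters because downstream code can parse it directly instead of scraping free text. A minimal example of consuming a JSON-shaped tool result; the schema here is invented purely for illustration:

```python
import json

# Hypothetical structured output from an agent's tool call.
# The field names are illustrative, not a real Mistral schema.
raw = '{"tool": "run_tests", "status": "passed", "failures": []}'

result = json.loads(raw)
if result["status"] == "passed" and not result["failures"]:
    print("merge candidate")  # merge candidate
```

The design win is that the gate condition is a plain boolean check on parsed fields, rather than a brittle regex over prose.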

Le Chat Now Has a Work Mode: A New Agentic Layer

Beyond the coding agent upgrades, Mistral is also shipping Work mode in Le Chat, a new agentic mode for more general, multi-step tasks. It is powered by Mistral Medium 3.5 and a harness new to Le Chat. With agents as the assistant's backend, Le Chat can read, write, and use multiple tools simultaneously.

Practically, this means things like cross-tool workflows — catching up across email, messages, and calendar; preparing for a meeting with relevant context pulled from multiple sources; or triaging an inbox and creating Jira issues from team discussions.

Work mode enables connectors by default rather than requiring them to be selected manually. This lets the agent access documents, email, calendars, and other systems, giving it the context it needs to take the right action. That is a significant interface change from traditional chat assistants, which require you to pick your tools manually before every session.

Transparency is a built-in feature rather than an afterthought: every action the agent takes is visible — you see each tool call and the thinking rationale. Le Chat will ask for explicit approval — based on your permissions — before proceeding with sensitive tasks like sending a message, writing a document, or modifying data.
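The approval flow described above (act freely on read-only steps, pause for explicit sign-off before side effects) is a common agent-safety pattern. A minimal sketch, with an invented action list rather than Le Chat's actual permission model:

```python
# Actions with side effects that should pause for user approval.
# This set is illustrative; the real policy lives in the product.
SENSITIVE_ACTIONS = {"send_message", "write_document", "modify_data"}

def requires_approval(action: str) -> bool:
    """Read-only actions proceed; side-effecting ones need sign-off."""
    return action in SENSITIVE_ACTIONS

plan = ["read_calendar", "summarize_email", "send_message"]
gated = [a for a in plan if requires_approval(a)]
print(gated)  # ['send_message']
```

Gating on an allowlist of sensitive verbs keeps the fast path fast: the agent only interrupts the user at the exact steps where a mistake would be visible to others or destructive.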

What you need to know

The key points to remember are:

  • Mistral Medium 3.5 is the new default model for both Vibe and Le Chat: a dense 128B model with a 256k context window that scores 77.6% on SWE-Bench Verified, beats Devstral 2 and Qwen3.5 397B A17B, and is available as open weights on Hugging Face.
  • Vibe coding agents now run in the cloud: sessions can be spawned from the CLI or Le Chat, run asynchronously in isolated sandboxes, and local sessions can be teleported to the cloud without losing session history or task state.
  • Le Chat's new Work mode executes multi-step, parallel agentic tasks: powered by Mistral Medium 3.5, it can work across email, calendar, documents, Jira, and Slack simultaneously, with all tool calls and reasoning steps visible and explicit approval required before sensitive actions.
  • Mistral Medium 3.5 lets you configure reasoning effort per API request: the same model handles lightweight chat replies and complex long-horizon agentic runs.

Check out the model weights on Hugging Face and the technical details.
