AI Model that Never Stops Learning

AI · By Gavin Wallace · 18/06/2025 · 3 Mins Read

Modern large language models (LLMs) may write elegant sonnets and polished code, but they lack the ability to learn from their own mistakes.

Researchers at the Massachusetts Institute of Technology have now devised a way for LLMs to keep improving by updating their own parameters in response to useful new information.

The work is a step toward building artificial intelligence models that learn continually, a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. It could also give us AI chatbots that better integrate new information, such as a user's preferences and interests.

The MIT scheme, called Self-Adapting Language Models (SEAL), involves having an LLM learn to generate its own synthetic training data and its own update procedure based on the input it receives.

“The initial idea was to explore if tokens [units of text fed to LLMs and generated by them] could cause a powerful update to a model,” says Jyothish Pari, a PhD student at MIT involved in developing SEAL. Pari says the team wanted to see whether a model's own output could be used to train it.

Adam Zweiger, an MIT student involved in building SEAL, adds that while newer models can “reason” their way to better answers at inference time, the model itself does not benefit from that reasoning over the long term.

SEAL, by contrast, generates new insights and then folds them into its own weights, or parameters. Given a statement about, say, the difficulties faced by the Apollo space program, the model generates new passages that try to describe the statement's implications. The researchers compared this to the way a human student writes and revises notes to improve their understanding.
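
To make the note-taking analogy concrete, here is a minimal sketch in Python of how such a “self-edit” might be elicited. The prompt wording and the helper build_self_edit_prompt are illustrative assumptions; the article does not give the actual prompts the MIT team used.

# Hypothetical prompt template for eliciting a "self-edit"; the exact
# wording used by the SEAL team is not given in the article.
SELF_EDIT_PROMPT = (
    "Read the passage below, then write several short passages that "
    "restate its key facts and spell out their implications, as if "
    "you were taking study notes.\n\n"
    "Passage:\n{passage}\n\nNotes:\n"
)

def build_self_edit_prompt(passage: str) -> str:
    """Fill the template with a source passage for the model to digest."""
    return SELF_EDIT_PROMPT.format(passage=passage)

# The Apollo example from the article.
print(build_self_edit_prompt(
    "The Apollo space program faced severe budget and engineering constraints."
))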

The system then updates the model using this data and tests how well the updated model can answer a set of questions. That score finally provides a reinforcement learning signal that guides the model toward updates that improve its overall abilities and help it keep learning.
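
Putting the pieces together, the outer loop might look something like the sketch below. The helpers generate_self_edits, finetune_copy, and eval_qa are hypothetical stand-ins for the real generation, fine-tuning, and evaluation steps, which the article does not detail.

import random

def generate_self_edits(model, passage, n=3):
    """Stub: in SEAL the model itself writes these 'notes'; we fake them."""
    return [f"note {i}: an implication of '{passage}'" for i in range(n)]

def finetune_copy(model, texts):
    """Stub: a real version would take a few gradient steps on the texts
    (for example with LoRA adapters) and return the updated weights."""
    return {"base": model, "absorbed": tuple(texts)}

def eval_qa(updated_model, questions):
    """Stub: a real version would score the updated model's answers to
    held-out questions about the passage; we return a random score."""
    return random.random()

def seal_step(model, passage, questions, n_candidates=4):
    """One SEAL-style iteration: sample candidate self-edits, apply each
    as a weight update, and keep the candidate whose updated model answers
    best. That score doubles as the reinforcement-learning reward that
    teaches the model to write more useful self-edits over time."""
    best_reward, best_model, best_edits = float("-inf"), model, None
    for _ in range(n_candidates):
        edits = generate_self_edits(model, passage)
        updated = finetune_copy(model, edits)
        reward = eval_qa(updated, questions)
        if reward > best_reward:
            best_reward, best_model, best_edits = reward, updated, edits
    return best_reward, best_model, best_edits

reward, new_model, edits = seal_step(
    model="llama-stub",
    passage="The Apollo program faced severe budget constraints.",
    questions=["What constraints did the Apollo program face?"],
)
print(f"kept {len(edits)} self-edits with reward {reward:.2f}")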

The researchers tested the approach on small and medium-sized versions of two open source models, Meta's Llama and Alibaba's Qwen. They say the approach should also work for much larger frontier models.

They tested SEAL on both plain text and on ARC, a benchmark that measures an AI model's ability to solve abstract reasoning problems. In both cases, they found that SEAL allowed the models to keep learning well beyond their initial training.

Pulkit Agrawal, a professor at MIT who oversaw the project, says SEAL touches on important themes in AI, including how to get AI to learn for itself. It could also help make AI models more personalized, according to Agrawal. “LLMs are powerful, but we don't want their knowledge to stop,” he says.

SEAL is not yet a way for AI to improve indefinitely, though. For one thing, as Agrawal notes, the LLMs tested suffer from what is known as “catastrophic forgetting,” a troubling effect in which ingesting new data causes older knowledge to be wiped out. This may point to a fundamental difference between artificial neural networks and their biological counterparts. Pari and Zweiger also note that SEAL requires a lot of computation, and that there is no consensus yet on how best to schedule new periods of learning. One fun idea, Zweiger suggests, is that, like humans, LLMs might experience periods of “sleep” during which new information is consolidated.
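
As a toy illustration of catastrophic forgetting (my own example, not anything from the SEAL work), here is a tiny linear model trained on one task and then a second: plain sequential training overwrites the weights that served the first task, and error on it shoots back up.

import numpy as np

rng = np.random.default_rng(0)

def make_task(seed):
    """Each 'task' is a random linear map y = X @ w_true."""
    r = np.random.default_rng(seed)
    w_true = r.normal(size=5)
    X = r.normal(size=(200, 5))
    return X, X @ w_true

def sgd(w, X, y, steps=2000, lr=0.01):
    """Plain stochastic gradient descent on squared error, no replay."""
    for _ in range(steps):
        i = rng.integers(len(X))
        w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

Xa, ya = make_task(1)   # task A
Xb, yb = make_task(2)   # task B

w = sgd(np.zeros(5), Xa, ya)
print("error on task A after training on A:", round(mse(w, Xa, ya), 4))

w = sgd(w, Xb, yb)      # continue training on task B only
print("error on task A after training on B:", round(mse(w, Xa, ya), 4))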

Still, for all its limitations, SEAL marks an exciting new path for AI research, and it may well find its way into future frontier models.

What do you think about AI that keeps on learning? Let me know by sending an email to hello@wired.com.
