
IBM and ETH Zürich Researchers Unveil Analog Foundation Models to Tackle Noise in In-Memory AI Hardware

Tech · By Gavin Wallace · 21/09/2025 · 4 Mins Read

IBM researchers, together with ETH Zürich, have unveiled a new class of Analog Foundation Models (AFMs) designed to bridge large language models and Analog In-Memory Computing (AIMC) hardware. AIMC has long promised a radical leap in efficiency: running billion-parameter models in a footprint small enough for embedded or edge devices, thanks to dense non-volatile memory (NVM) that combines storage and computation. The technology's Achilles heel has always been noise: matrix-vector multiplications performed directly inside NVM devices produce non-deterministic errors that can cripple standard models.

Why is analog in-memory computing important for LLMs?

AIMC performs matrix-vector multiplication directly within memory arrays, eliminating the von Neumann bottleneck and delivering large gains in both throughput and energy efficiency. Prior studies have shown that combining AIMC with 3D NVM and Mixture-of-Experts (MoE) architectures could, in theory, support trillion-parameter models on compact accelerators, allowing such AI to run on devices outside the data center.
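The in-memory multiply and its noise problem can be sketched in a few lines. The Gaussian read-noise model and its magnitude below are illustrative assumptions, not IBM's measured device behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_std=0.02):
    """Toy model of an AIMC tile: y = W @ x happens inside the memory
    array, but every readout is perturbed by stochastic device noise."""
    y = W @ x                                  # ideal in-memory product
    scale = noise_std * (np.abs(y).max() + 1e-12)
    return y + rng.normal(0.0, scale, size=y.shape)

W = rng.normal(size=(4, 8))                    # weights stored in NVM cells
x = rng.normal(size=8)

exact = W @ x
noisy = analog_matvec(W, x)
max_err = float(np.max(np.abs(noisy - exact)))
print(max_err)  # small, nonzero, and different on every call
```

The multiply itself costs one array operation regardless of matrix size; the price is that `noisy` never exactly equals `exact`.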

https://arxiv.org/pdf/2505.09663

Why is Analog In-Memory Computing (AIMC) so difficult to use in practice?

Noise is the biggest obstacle. AIMC computations are degraded by device variability, DAC/ADC quantization, and runtime fluctuations, all of which reduce model accuracy. Unlike quantization on GPUs, where errors are deterministic and manageable, analog noise is stochastic and unpredictable. Until now, researchers had only found ways to adapt small networks such as CNNs and RNNs; large LLMs remained out of reach.
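The deterministic-versus-stochastic distinction can be made concrete with a toy experiment; the signed 4-bit round-to-nearest scheme and the Gaussian read noise below are illustrative stand-ins for the hardware behaviors described above:

```python
import numpy as np

w = np.random.default_rng(42).normal(size=1000)

# Digital quantization (here, signed 4-bit round-to-nearest) is
# deterministic: the same weights always produce the same error.
scale = np.abs(w).max() / 7                    # map to integer range [-7, 7]
def quantize_rtn(v):
    return np.clip(np.round(v / scale), -7, 7) * scale

same_error = np.array_equal(quantize_rtn(w) - w, quantize_rtn(w) - w)

# Analog readout noise is stochastic: two reads of the same stored
# weights disagree, so the error cannot be calibrated away once.
def analog_read(v, sigma=0.01):
    return v + np.random.default_rng().normal(0.0, sigma, size=v.shape)

same_read = np.array_equal(analog_read(w), analog_read(w))
print(same_error, same_read)  # True False
```

A deterministic error can be characterized and compensated offline; a fresh random error on every read cannot, which is why training must anticipate it.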

How do Analog Foundation Models address noise?

IBM's team introduces Analog Foundation Models, which integrate hardware-aware training to prepare LLMs for analog execution. The recipe uses:

  • Noise injection during training to simulate AIMC randomness
  • Iterative weight clipping to stabilize weight distributions within device limits
  • Learned static input/output quantization ranges aligned to real hardware constraints
  • Distillation from pre-trained LLMs using 20B tokens of synthetic data
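A minimal sketch of the first two ingredients, noise injection and iterative weight clipping, on a toy linear model. All constants here are assumptions; the real recipe trains full LLMs via AIHWKIT-Lightning:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy regression target standing in for a layer we want robust to noise.
W_true = np.array([[1.0, -2.0, 0.5, 3.0], [0.3, 1.0, -1.0, 2.0]])
X = rng.normal(size=(4, 64))
Y = W_true @ X

W = rng.normal(size=W_true.shape)              # trainable weights
lr, noise_std, clip_k = 0.01, 0.05, 3.0

loss0 = np.mean((W @ X - Y) ** 2)
for _ in range(500):
    W_noisy = W + rng.normal(0.0, noise_std, size=W.shape)  # noise injection
    grad = ((W_noisy @ X - Y) @ X.T) / X.shape[1]           # MSE gradient
    W -= lr * grad
    bound = clip_k * W.std()                   # iterative weight clipping:
    W = np.clip(W, -bound, bound)              # keep weights in device range

loss1 = np.mean((W @ X - Y) ** 2)
print(loss0 > loss1)  # training converges despite the injected noise
```

Because every forward pass sees a perturbed copy of the weights, the optimizer settles into solutions that stay accurate under perturbation, which is exactly the property an AIMC deployment needs.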

These methods are implemented with AIHWKIT-Lightning. Models such as Phi-3-mini-4k-instruct and Llama-3.2-1B-Instruct sustain performance under analog noise comparable to 4-bit weight / 8-bit activation baselines. In evaluations on reasoning and factual benchmarks, AFMs outperformed both quantization-aware training (QAT) and post-training quantization (SpinQuant).

Do these models only work on analog hardware?

No. AFMs also perform well on low-precision digital hardware. Because they are trained to tolerate noise, AFMs handle round-to-nearest (RTN) quantization better than other methods. That makes them useful both for AIMC accelerators and for commodity digital inference hardware.

Does performance scale with test-time compute?

Yes. The researchers tested test-time compute scaling on the MATH-500 benchmark: the model generates multiple answers per question and selects the best using a reward model. AFMs showed better scaling behavior than QAT models, and the accuracy gap narrowed as more inference compute was allocated. This plays to AIMC's strengths: low-power, high-throughput inference rather than training.
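The best-of-N selection loop can be sketched with toy stand-ins for the generator and the reward model. The paper's actual setup uses an LLM on MATH-500 with a learned reward model; everything below is assumed for illustration:

```python
import random

def generate(answer_true, rng):
    # Toy "model": returns the true answer plus random error.
    return answer_true + rng.gauss(0.0, 2.0)

def reward(answer_true, candidate):
    # Toy reward: closeness to a trusted check, standing in for a
    # learned reward model that scores candidates without the answer.
    return -abs(candidate - answer_true)

def best_of_n(answer_true, n, rng):
    cands = [generate(answer_true, rng) for _ in range(n)]
    return max(cands, key=lambda c: reward(answer_true, c))

rng = random.Random(0)
mean_err = {}
for n in (1, 4, 16):
    picks = [best_of_n(42.0, n, rng) for _ in range(300)]
    mean_err[n] = sum(abs(p - 42.0) for p in picks) / len(picks)

print({n: round(e, 2) for n, e in mean_err.items()})  # error falls as n grows
```

Spending more inference compute (larger N) buys accuracy without retraining, which is why a low-power, high-throughput substrate like AIMC pairs well with this strategy.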


What is the future of Analog In-Memory Computing?

The research team provides the first systematic demonstration that large LLMs can be adapted to AIMC hardware without catastrophic accuracy loss. Training AFMs is resource-intensive, and accuracy gaps remain on reasoning tasks such as GSM8K, but the results are a significant milestone: by combining cross-platform compatibility, energy efficiency, and robustness to noise, AFMs offer a way to scale models beyond GPU limits.

Summary

Analog Foundation Models are a crucial step toward scaling LLMs past the limits of digital accelerators. By making models resilient to the unpredictable noise of analog in-memory computing, the work turns AIMC into a viable platform. Training costs remain high and gaps persist on reasoning benchmarks, but this research charts a path toward energy-efficient, large-scale models running on compact, low-cost hardware, pushing foundation models closer to edge deployment.


Take a look at the paper and the GitHub page for tutorials, code, and notebooks.

