AI-trends.today

NVIDIA Released Audio Flamingo 3 : A Model Open Source for Advancing General Audio Intelligence

Tech | By Gavin Wallace | 16/07/2025 | 4 min read

You have likely heard of artificial general intelligence (AGI). Meet its auditory counterpart: Audio General Intelligence. With Audio Flamingo 3 (AF3), NVIDIA takes a big leap in machine understanding and reasoning about sound. While past models could transcribe speech or classify audio clips, they lacked the ability to interpret audio in a context-rich, human-like way across speech, ambient sound, and music, and over extended durations. AF3 changes this.

NVIDIA announces Audio Flamingo 3, a fully open-source large audio-language model (LALM) that can not only listen but also understand and reason. Trained through a five-stage curriculum and powered by the AF-Whisper encoder, AF3 supports audio inputs up to 10 minutes long, multi-turn multi-audio conversations, voice-to-voice interaction, and on-demand reasoning. It sets a new standard for how AI systems interact with sound, bringing us a step closer to AGI.
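The article does not spell out AF3's input pipeline, but audio-language models built on Whisper-family encoders generally consume fixed-rate mono waveforms sampled at 16 kHz. The sketch below is a hypothetical preprocessing step in NumPy; the target rate and the linear-interpolation resampling are assumptions for illustration, not AF3's documented pipeline:

```python
import numpy as np

TARGET_SR = 16_000  # assumed target sample rate for a Whisper-style encoder

def to_mono_16k(wave: np.ndarray, sr: int) -> np.ndarray:
    """Downmix to mono and resample to 16 kHz float32 (naive linear interp)."""
    if wave.ndim == 2:                      # (samples, channels) -> mono
        wave = wave.mean(axis=1)
    wave = wave.astype(np.float32)
    if sr != TARGET_SR:
        duration = wave.shape[0] / sr
        n_out = int(round(duration * TARGET_SR))
        t_in = np.linspace(0.0, duration, wave.shape[0], endpoint=False)
        t_out = np.linspace(0.0, duration, n_out, endpoint=False)
        wave = np.interp(t_out, t_in, wave).astype(np.float32)
    return wave

# Example: 2 seconds of stereo noise at 44.1 kHz -> 32,000 mono samples
clip = np.random.randn(88_200, 2)
out = to_mono_16k(clip, 44_100)
```

A production pipeline would use a proper polyphase resampler (e.g. from `scipy.signal` or `librosa`) rather than linear interpolation, which aliases high frequencies.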

Audio Flamingo 3’s core innovations

  1. Unified Audio Encoder (AF-Whisper): AF3 uses AF-Whisper, a novel encoder adapted from Whisper-v3. It processes speech, ambient sound, and music through a single architecture, solving a major limitation of earlier LALMs, which relied on separate encoders and suffered from inconsistent representations. AF-Whisper leverages audio-caption datasets, synthesized metadata, and a 1,280-dimensional embedding space to align with text representations.
  2. On-Demand Reasoning (Audio Chain-of-Thought): Unlike static QA systems, AF3 has explicit 'thinking' capabilities. Trained on the AF-Think dataset (250k examples), the model can perform chain-of-thought reasoning when prompted, explaining its inference steps before arriving at an answer, a key step toward transparent audio AI.
  3. Multi-Turn, Multi-Audio Conversations: Trained on the AF-Chat dataset (75k dialogues), AF3 can hold contextual conversations that reference multiple audio inputs across turns, mimicking how humans refer back to earlier audio cues. A streaming text-to-speech module also enables voice-to-voice interaction.
  4. Long Audio Reasoning: Trained on LongAudio-XL (1.25M examples), AF3 can reason over audio inputs up to ten minutes long, supporting tasks such as meeting summarization, podcast comprehension, sarcasm identification, and temporal grounding.
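The shared embedding space behind the unified encoder can be pictured as a pair of learned projections into one 1,280-dimensional space, where matched audio/text pairs score high cosine similarity. The NumPy toy below is purely illustrative; the feature widths and random projection matrices are assumptions, not AF3's actual alignment objective:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1280  # shared embedding width reported for AF-Whisper

# Stand-ins for modality-specific features; in AF3 these would come from
# the audio encoder and the language model's text embeddings.
audio_feat = rng.standard_normal(640)
text_feat = rng.standard_normal(768)

# Learned projection matrices (random here) map both modalities into one space.
W_audio = rng.standard_normal((DIM, 640)) / np.sqrt(640)
W_text = rng.standard_normal((DIM, 768)) / np.sqrt(768)

def unit(v: np.ndarray) -> np.ndarray:
    """L2-normalize a vector."""
    return v / np.linalg.norm(v)

audio_emb = unit(W_audio @ audio_feat)
text_emb = unit(W_text @ text_feat)

# Cosine similarity in the shared space; training would pull matched
# audio/text pairs together and push mismatched pairs apart.
similarity = float(audio_emb @ text_emb)
```

With untrained random projections the similarity is near zero; alignment training is what makes matched pairs score high.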

State-of-the-Art Benchmarks and Real-World Capabilities

AF3 outperforms both open and closed models on more than 20 benchmarks, including:

  • MMAU (avg.): 73.14% (+2.14% over Qwen2.5-Omni)
  • LongAudioBench: 68.6 (GPT-4o-judged), beating Gemini 2.5 Pro
  • LibriSpeech (ASR): outperforms Phi-4-mm by 1.57% WER
  • ClothoAQA: 91.1% vs. 89.2% for Qwen2.5-Omni

These are not marginal improvements; they redefine what audio-language systems can be expected to do. AF3 also introduces benchmarking for voice chat and speech generation, achieving a generation latency of 5.94 s compared to 14.62 s for Qwen2.5-Omni.
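Latency figures like those above are typically obtained by timing the end-to-end generation call over several runs. The harness below is a generic sketch; `fake_generate` is a hypothetical stand-in, not AF3's actual API:

```python
import time

def mean_latency(generate, prompt, n_runs=3):
    """Average wall-clock latency of a generation call over n_runs."""
    elapsed = []
    for _ in range(n_runs):
        start = time.perf_counter()
        generate(prompt)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

def fake_generate(prompt):
    # Stand-in for a model's speech-generation call.
    time.sleep(0.01)
    return "audio bytes"

avg = mean_latency(fake_generate, "hello")
```

For a fair comparison across models, the same prompts, hardware, and warm-up policy must be used, and `time.perf_counter` (monotonic, high resolution) is preferable to `time.time`.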

The Data Pipeline: Datasets That Teach Audio Reasoning

NVIDIA didn’t just scale compute—they rethought the data:

  • AudioSkills-XL: 8M examples combining ambient sound, music, and speech reasoning.
  • LongAudio-XL: Long-form speech from audiobooks, podcasts, and meetings.
  • AF-Think: Promotes short chain-of-thought-style inference.
  • AF-Chat: Designed for multi-turn, multi-audio conversation.

All of these datasets are open-sourced alongside the training code and recipes, enabling reproducibility and future research.
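The article does not describe how these corpora are mixed during the five-stage curriculum; one common baseline is sampling each training example from a source with probability proportional to its size. The toy below uses the dataset sizes listed above, but the sampling scheme itself is an assumption for illustration:

```python
import random

random.seed(0)

# Dataset sizes (number of examples) from the article.
datasets = {
    "AudioSkills-XL": 8_000_000,
    "LongAudio-XL": 1_250_000,
    "AF-Think": 250_000,
    "AF-Chat": 75_000,
}

names = list(datasets)
weights = [datasets[n] for n in names]

def sample_sources(k):
    """Assign each of k training examples a source, proportional to size."""
    return random.choices(names, weights=weights, k=k)

batch = sample_sources(1000)
```

In practice, curricula often re-weight sources per stage (e.g. emphasizing AF-Chat late in training) rather than using fixed size-proportional weights.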

Open Source

AF3 is more than a model release. NVIDIA has published:

  • Model weights
  • Training recipes
  • Inference code
  • Four open datasets

This transparency makes AF3 the most accessible state-of-the-art audio-language model to date, opening new research directions in auditory reasoning, low-latency audio agents, music comprehension, and multi-modal interaction.

Conclusion: Toward General Audio Intelligence

Audio Flamingo 3 proves that deep audio understanding is not just possible but reproducible and open. By combining diverse data with novel training strategies and scale, NVIDIA delivers a model that listens, understands, and reasons better than previous LALMs.


Check out the Paper, Code, and Model on Hugging Face for more details. All credit for this research goes to the researchers of this project.
