AI-trends.today

Google DeepMind’s Causal Framework: Robust Reward Modeling for LLM Alignment Using Crome

Tech · By Gavin Wallace · 04/07/2025 · 4 Mins Read
Reward models play a fundamental role in aligning LLMs with human feedback, but they are plagued by reward hacking: the models latch onto superficial attributes such as response length or formatting rather than true indicators of quality like factuality and relevance. The root cause is that standard training objectives cannot distinguish spurious correlations in the training data from genuine causal drivers of response quality. This failure yields brittle reward models, which in turn produce policies with misaligned incentives. A causal understanding is therefore needed to build RMs that are sensitive to causal quality attributes and invariant to spurious signals.
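To make the failure mode concrete, here is a minimal sketch (not from the paper) of the standard Bradley-Terry pairwise objective that such reward models are trained with. The loss depends only on the score gap between the chosen and rejected responses, so a gap produced by a spurious cue such as length is indistinguishable from one produced by genuine quality:

```python
import math

def bradley_terry_loss(r_chosen: float, r_rejected: float) -> float:
    """Standard pairwise Bradley-Terry objective for reward models:
    -log(sigmoid(r_chosen - r_rejected)). The objective sees only the
    score gap, so any feature correlated with preference in the data
    (e.g. length) is rewarded as readily as true quality."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A score gap driven by a spurious cue (length) produces exactly the
# same low loss as the same gap driven by factuality: the objective
# cannot tell the two apart.
spurious = bradley_terry_loss(2.0, 0.5)  # gap caused by length alone
causal = bradley_terry_loss(2.0, 0.5)    # identical gap from factuality
```

This is what motivates a causal framework: the fix must come from the data and the loss, not from the ranking objective itself.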

Limitations of Existing RMs and the Need for Causal Robustness

Existing methods try to mitigate reward hacking in standard RLHF systems that rely on Bradley-Terry or pairwise ranking objectives. These include architectural modifications such as Odin, policy-level adjustments, and data-centric techniques involving ensembles or consistency checks. Recent causally inspired methods use MMD regularization against known spurious attributes, or estimate causal effects through corrected rewrites. However, these methods target only predetermined spurious factors and miss unknown correlates. Their augmentation strategies remain coarse, and evaluation-focused methods fail to equip the reward model with robust training mechanisms against diverse spurious variations.

Crome: Causally Robust Reward Modeling for LLMs

Researchers from Google DeepMind, McGill University, and MILA – Quebec AI Institute have proposed Crome (Causally Robust Reward Modeling), a framework built on an explicit causal model of answer generation. Crome trains RMs to distinguish genuine quality drivers from surface cues by adding LLM-generated counterfactual examples to the preference data. It creates two types of synthetic training pairs: (a) causal augmentations, which alter specific causal attributes (such as factuality) to enforce sensitivity to true quality shifts, and (b) neutral augmentations, which enforce invariance to spurious attributes (such as style) using tie labels. Crome improves robustness, raising RewardBench accuracy by up to 4.5%, while also enhancing safety and reasoning performance.
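A schematic of the two pair types might look like the following. Here `degrade_causal` and `rewrite_spurious` are hypothetical stand-ins for the LLM rewrites (the paper uses Gemini 2.0 Flash for this), and the `Pair`/label encoding is illustrative rather than the paper's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pair:
    chosen: str
    rejected: str
    label: float  # 1.0 -> chosen preferred, 0.5 -> tie

def crome_augment(pair: Pair,
                  degrade_causal: Callable[[str], str],
                  rewrite_spurious: Callable[[str], str]) -> list:
    """Sketch of Crome's two synthetic pair types.
    - Causal augmentation: corrupt a causal attribute (e.g. factuality)
      of the chosen answer; the original must still win (label 1.0).
    - Neutral augmentation: rewrite only a spurious attribute (e.g.
      style); the pair is tie-labelled (0.5) to enforce invariance."""
    causal_pair = Pair(pair.chosen, degrade_causal(pair.chosen), 1.0)
    neutral_pair = Pair(pair.chosen, rewrite_spurious(pair.chosen), 0.5)
    return [pair, causal_pair, neutral_pair]
```

In practice the rewrites would be produced by prompting an LLM; the lambdas below in any usage are only placeholders for those calls.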

The Technical Approach: Counterfactual Augmentation and Composite Loss Optimization

Crome operates in two major phases. First, it generates attribute-aware counterfactual data based on a causal model. Second, it trains the reward model on the combined data with a specialized composite loss. The paper also presents a theoretical analysis, under an idealized model, of how causal augmentation separates genuine reward drivers from spurious correlates. Crome uses the UltraFeedback dataset with counterfactuals generated by Gemini 2.0 Flash, and evaluates performance on RewardBench and reWordBench. The experiments span diverse LLMs, including Gemma-2-9B-IT, Qwen2.5-7B, and Gemma-2-2B.
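The exact form of the composite loss is not given in this summary, but one common way to train on a mix of ordinary preference pairs and tie-labelled pairs is soft-label binary cross-entropy on the score difference; the sketch below assumes that formulation and is not the paper's exact loss:

```python
import math

def pair_loss(r_a: float, r_b: float, label: float) -> float:
    """Generalized pairwise loss: binary cross-entropy on
    sigmoid(r_a - r_b) against a soft preference label.
    label=1.0 recovers the Bradley-Terry objective; label=0.5
    (tie-labelled neutral pairs) is minimized when the two scores
    are equal, pushing the RM to be invariant to spurious rewrites."""
    p = 1.0 / (1.0 + math.exp(-(r_a - r_b)))
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))
```

Under this formulation, a single training loop can mix original pairs, causal pairs (label 1.0), and neutral pairs (label 0.5) without any special-casing.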

Performance Gains on RewardBench, reWordBench, and WildGuardTest

Crome achieves significant improvements in RewardBench ranking accuracy over RRM, with gains of up to 13.18% in the Safety category and 7.19% in Reasoning. On reWordBench, Crome achieves aggregate accuracy gains of up to 9.1% with Gemma-2-9B in PairPM settings and outperforms the baseline on 21 of 23 transformations. Moreover, Crome shows a smaller drop in ranking accuracy when moving from RewardBench to reWordBench (19.78% versus 21.54% for the baseline). On WildGuardTest with Best-of-N selection, Crome delivers excellent safety improvements, achieving lower attack success rates on harmful prompts while maintaining similar refusal rates on benign prompts.
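Best-of-N selection itself is simple: sample N candidate responses, score each with the reward model, and keep the argmax. A minimal sketch (the `reward_model` callable is a placeholder for the trained RM):

```python
from typing import Callable, List

def best_of_n(prompt: str,
              candidates: List[str],
              reward_model: Callable[[str, str], float]) -> str:
    """Best-of-N selection: return the candidate the reward model
    scores highest for the given prompt. A more robust RM translates
    directly into fewer harmful or spuriously-favored completions
    being selected, which is what the WildGuardTest results measure."""
    return max(candidates, key=lambda c: reward_model(prompt, c))
```

If the RM is length-hacked, `best_of_n` will systematically pick the longest candidate, which is exactly the failure Crome's neutral augmentations are meant to prevent.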

Conclusion: Causal Data Augmentation and Future Directions

The researchers introduced Crome as a framework to address reward hacking during RM training. It uses two targeted synthetic data augmentation strategies: causal augmentations and neutral augmentations. Crome outperforms strong baselines across multiple base models and reward modeling techniques on RewardBench, and shows superior robustness to spurious correlations on reWordBench. This dataset-centric training method opens new research directions in synthetic data generation for base model training, where causal attribute verification could prove highly useful for the future development of robust language model alignment.


Check out the Paper. All credit for this research goes to the researchers of this project.


Sajjad is in his final year of undergraduate studies at IIT Kharagpur. A tech enthusiast, he is interested in the applications of AI, with an emphasis on their real-world impact and implications. He strives to make complex AI ideas clear and understandable.
