AI-trends.today

OpenAI supports a bill that would limit liability for AI-enabled mass deaths or financial disasters

AI | By Gavin Wallace | 10/04/2026 | 4 Mins Read
OpenAI has thrown its support behind a bill in the Illinois state legislature that would shield AI labs from liability when their AI models are involved in serious harm to society, such as incidents causing 100 or more deaths or injuries or $1 billion in property damage.

The effort marks a shift in OpenAI’s legislative strategy. Until now, OpenAI has mostly played defense, fighting bills that would have held AI labs liable for harms caused by their technology. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than the bills OpenAI has supported in the past.

The bill would shield frontier AI developers from legal liability for “critical harms” caused by their frontier models, so long as the model was created without malicious intent or negligence and the developer has published safety, transparency, and security reports. A frontier model is defined as any AI model trained with more than $100 million in computation costs, a definition that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” OpenAI spokesperson Jamie Radice said in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

The bill defines critical harms as the scenarios most often cited as worst-case risks in the AI sector: for example, a bad actor using an AI model to create a chemical, biological, radiological, or nuclear weapon. A critical harm would also cover an AI model engaging in conduct that, if committed by a human, would constitute a crime and that leads to these outcomes. Under SB 3444, the lab that created the model would not be liable for any of these harms so long as the conduct was not intentional.

Neither federal nor state legislators in the US have passed legislation that settles whether AI developers like OpenAI can be held liable for harms caused by their technology. As AI labs release ever more powerful models that pose new safety and security challenges, such as Anthropic’s Claude Mythos, these questions are becoming increasingly relevant.

Caitlin Niedermeyer of OpenAI Global Affairs also voiced support for SB 3444 in her testimony, arguing that AI needs a federal regulatory framework. Niedermeyer delivered a message in line with the Trump administration’s crackdown on state AI safety laws, warning against “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” Silicon Valley has long argued that AI legislation must not hamper America’s position in the global AI race. Niedermeyer said that state safety legislation such as SB 3444 can only be effective if it helps “reinforce a path toward harmonization with federal systems.”

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer says.

Scott Wisor, policy director of Secure AI, tells WIRED that he thinks the bill has little chance of passing, given Illinois’ reputation as a state with tough technology regulations. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.

Tags: artificial intelligence, ethics, government, openai, politics