OpenAI is throwing its support behind a bill in the Illinois State legislature that would shield AI labs from liability for harms caused by their AI models, even in cases of serious harm to society such as 100 deaths or injuries or $1 billion worth of property damage.
The effort marks a shift in OpenAI’s legislative strategy. Until now, OpenAI has mostly played defense, fighting bills that would have held AI labs liable for the harms caused by their technology. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill would shield frontier AI developers from legal liability for “critical harms” caused by their frontier models, as long as the model was created without malicious intent or negligence and the developer publishes safety, transparency, and security reports. The bill defines a frontier model as any AI model trained at a computation cost of more than $100 million. That definition would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” OpenAI spokesperson Jamie Radice said in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”
The bill defines critical harms as the scenarios that most concern the AI sector: for example, a bad actor using an AI model to create a chemical, biological, radiological, or nuclear weapon, or a model engaging in conduct that would constitute a crime if committed by a human and that leads to those outcomes. Under SB 3444, the AI lab that created the model would not be liable for any of these harms as long as the behavior was not intentional.
Neither federal nor state legislators in the US have passed legislation determining whether AI developers like OpenAI can be held liable for the harms caused by their technology. As AI labs continue to release more powerful models that pose new safety and security challenges, such as Anthropic’s Claude, these questions are becoming increasingly relevant.
In testimony supporting SB 3444, Caitlin Niedermeyer of OpenAI Global Affairs argued that AI needs a federal regulatory framework. Niedermeyer’s message was in line with the Trump administration’s crackdown on state AI safety laws, which it has called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” Silicon Valley has long argued that AI legislation must not hamper America’s position in the global AI race. Niedermeyer said that state safety legislation like SB 3444 can “reinforce a path toward harmonization with federal systems.”
“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer says.
Scott Wisor, policy director at Secure AI, tells WIRED that he thinks the bill has little chance of passing, given Illinois’ reputation as a state with strict regulations on technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.

