OpenAI may have violated California’s new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group.
A violation could expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law's provisions.
An OpenAI spokesperson disputed the watchdog's position, telling Fortune the company was "confident in our compliance with frontier safety laws, including SB 53."
The controversy centers on GPT-5.3-Codex, OpenAI’s newest coding model, which was released last week. The model is part of an effort by OpenAI to reclaim its lead in AI-powered coding and, according to benchmark data OpenAI released, shows markedly higher performance on coding tasks than earlier model versions from both OpenAI and competitors like Anthropic. However, the model has also raised unprecedented cybersecurity concerns.
CEO Sam Altman said the model was the first to hit the "high" risk category for cybersecurity under the company's Preparedness Framework, an internal risk classification system OpenAI uses for model releases. This means OpenAI is essentially classifying the model as capable enough at coding to potentially facilitate significant cyber harm, especially if automated.
