OpenAI’s new model leaps ahead in coding capabilities—but raises unprecedented cybersecurity risks


OpenAI believes it has finally pulled ahead in one of the most closely watched races in artificial intelligence: AI-powered coding. Its newest model, GPT-5.3-Codex, represents a solid advance over rival systems, posting markedly higher scores on coding benchmarks than earlier generations of both OpenAI’s and Anthropic’s models—suggesting a long-sought edge in a category that could reshape how software is built.

But the company is rolling out the model with unusually tight controls and delaying full developer access as it confronts a harder reality: The same capabilities that make GPT-5.3-Codex so effective at writing, testing, and reasoning about code also raise serious cybersecurity concerns. In the race to build the most powerful coding model, OpenAI has run headlong into the risks of releasing it.

GPT-5.3-Codex is available to paid ChatGPT users, who can use the model for everyday software development tasks such as writing, debugging, and testing code through OpenAI’s Codex tools and the ChatGPT interface. But for now, the company is not opening unrestricted access for high-risk cybersecurity uses, and it is not immediately enabling full API access that would allow the model to be automated at scale. Those more sensitive applications are being gated behind additional safeguards, including a new trusted-access program for vetted security professionals, reflecting OpenAI’s view that the model has crossed a new cybersecurity risk threshold.