OpenAI releases its answer to Claude Code, first AI model with “high capability” risk for cybersecurity
AI agents that can write code have quickly become one of the most profitable, and most fiercely contested, applications to emerge from the current crop of AI startups.
Anthropic’s Claude Code is enjoying a moment of popularity among software engineers, and it’s shoring up the startup’s revenue projections as it aims for an IPO this year. Claude Code’s launch, along with Anthropic’s release of Claude Cowork, which is aimed at nontechnical users, has been a key force behind software stocks’ massive recent underperformance.
Today OpenAI released its latest salvo in the AI code war: GPT-5.3-Codex, an “agentic coding” model that takes its name from OpenAI’s Codex coding app.
OpenAI says that GPT-5.3-Codex is the first model that was “instrumental in creating itself.”
According to the announcement, the new model can build complex websites and interactive games, and it achieved a new industry-wide high score on the widely used SWE-Bench Pro software development benchmark.
But the model is also the first OpenAI has released that carries a “high capability” risk rating for cybersecurity, meaning the company’s evaluations showed the tool had the potential to be used for sophisticated cyberattacks. OpenAI says it has added mitigations to prevent such misuse.