OpenAI's GPT-5.3-Codex Faces California AI Safety Law Scrutiny As Watchdog Alleges High-Risk Violations
Snigdha Gairola
Mon, February 16, 2026 at 9:31 PM GMT+9
OpenAI may face significant fines after a watchdog group alleged the company violated California’s new AI safety law with the release of its latest coding model, GPT-5.3-Codex.
High-Risk GPT-5.3-Codex Sparks Safety Concerns
Last week, The Midas Project claimed OpenAI failed to implement the legally required safeguards for models it classifies as high cybersecurity risks under its own safety framework.
GPT-5.3-Codex is part of OpenAI’s effort to reclaim its lead in AI-powered coding and, according to company benchmarks, outperforms previous models and competitors in coding tasks.
Watchdog Challenges OpenAI’s Compliance Claims
OpenAI disputes the allegations.
An OpenAI spokesperson told Fortune, “GPT-5.3-Codex completed our full testing and governance process, as detailed in the publicly released system card, and did not demonstrate long-range autonomy capabilities based on proxy evaluations and confirmed by internal expert judgments, including from our Safety Advisory Group.”
However, Midas Project founder Tyler Johnston criticized the release.
This is “especially embarrassing given how low the floor SB 53 sets is: basically just adopt a voluntary safety plan of your choice and communicate honestly about it, changing it as needed, but not violating or lying about it,” he said.
Safety researchers like Nathan Calvin of Encode also questioned OpenAI’s defense, noting, “From reading the relevant docs … it doesn’t look ambiguous to me.”
OpenAI Growth Surges Amid Anthropic Rivalry
On Monday, CEO Sam Altman moved to reassure employees and investors about OpenAI’s momentum despite mounting competition from Anthropic’s upgraded coding tools.
Altman said in an internal Slack message that ChatGPT had returned to more than 10% monthly growth and that an updated Chat model was set for release that week.
He also noted that Codex usage rose about 50% after the launch of GPT-5.3-Codex and a standalone Mac app.
OpenAI also pushed back against Anthropic’s Super Bowl ads, which criticized the idea of advertising in ChatGPT.
Altman called the ads “deceptive,” though a person familiar with the matter said the company planned to test clearly labeled ads placed at the bottom of responses without influencing answers.
Photo courtesy: Shutterstock
This article OpenAI’s GPT-5.3-Codex Faces California AI Safety Law Scrutiny As Watchdog Alleges High-Risk Violations originally appeared on Benzinga.com