OpenAI Releases CoT Monitoring to Stop Malicious Behavior of Large Models

robot
Abstract generation in progress

Golden Finance reported that OpenAI released the latest research, using CoT (chain of thought) monitoring, it can prevent malicious behaviors such as large models talking nonsense and hiding true intentions, and it is also one of the effective tools for supervising super models. OpenAI uses the newly released cutting-edge model o3-mini as the monitored object, and the weaker GPT-4o model as the monitor. The test environment is a coding task that requires the AI to implement functionality in the codebase to pass unit tests. The results showed that the CoT monitor performed well in detecting systematic "reward hacking" behavior, with a recall rate of up to 95%, far exceeding the 60% of behaviors that were only monitored.

View Original
The content is for reference only, not a solicitation or offer. No investment, tax, or legal advice provided. See Disclaimer for more risks disclosure.
  • Reward
  • Comment
  • Share
Comment
0/400
No comments