Metr, an organization OpenAI frequently partners with to probe the capabilities of its models and evaluate them for safety, suggests that it wasn't given much time to test one of the company's highly capable new releases, o3.
In a blog post published Wednesday, Metr writes that one red teaming benchmark of o3 was "conducted in a relatively short time" compared to the organization's testing of a previous OpenAI flagship model, o1. This is significant, they say, because more testing time can lead to more comprehensive results.
"This evaluation was conducted in a relatively short time, and we only tested the model with simple agent scaffolds," wrote Metr in its blog post. "We expect higher performance [on benchmarks] is possible with more elicitation effort."
Recent reports suggest that OpenAI, spurred by competitive pressure, is rushing independent evaluations. According to the Financial Times, OpenAI gave some testers less than a week to conduct safety checks for an upcoming major release.
In statements, OpenAI has disputed the notion that it's compromising on safety.
Metr says that, based on the information it was able to glean in the time it had, o3 has a "high propensity" to "cheat" or "hack" tests in sophisticated ways in order to maximize its score, even when the model clearly understands its behavior is misaligned with the user's (and OpenAI's) intentions. The organization thinks it's possible o3 will engage in other types of adversarial or "malign" behavior as well, regardless of the model's claims to be aligned, "safe by design," or to have no intentions of its own.
"While we don't think this is especially likely, it seems important to note that this evaluation setup would not catch this type of risk," Metr wrote in its post. "In general, we believe that pre-deployment capability testing is not a sufficient risk management strategy by itself, and we are currently prototyping additional forms of evaluations."
Another of OpenAI's third-party evaluation partners, Apollo Research, also observed deceptive behavior from o3 and another new OpenAI model, o4-mini. In one test, the models, given 100 computing credits for an AI training run and told not to modify the quota, increased the limit to 500 credits and lied about it. In another test, asked to promise not to use a specific tool, the models used the tool anyway when it proved helpful in completing a task.
In its own safety report for o3 and o4-mini, OpenAI acknowledged that the models may cause "smaller real-world harms" without the proper monitoring protocols in place.
"While relatively harmless, it is important for everyday users to be aware of these discrepancies between the models' statements and actions," wrote the company. "[For example, the model may mislead] about [a] mistake resulting in faulty code. This may be further assessed through assessing internal reasoning traces."