A Meta exec on Monday denied a rumor that the company trained its new AI models to present well on specific benchmarks while concealing the models' weaknesses.
The executive, Ahmad Al-Dahle, VP of generative AI at Meta, said in a post on X that it's "simply not true" that Meta trained its Llama 4 Maverick and Llama 4 Scout models on "test sets." In AI benchmarks, a test set is a collection of data used to evaluate a model's performance after it has been trained. Training on a test set could misleadingly inflate a model's benchmark scores, making the model appear more capable than it actually is.
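To illustrate the concern in general terms (this is a minimal sketch with synthetic data and an off-the-shelf classifier, not anything related to Meta's actual training pipeline), the snippet below shows how letting test data leak into training produces a misleadingly high benchmark score compared with an honest train/test split:

```python
# Sketch: test-set contamination inflates evaluation scores.
# Assumes scikit-learn and NumPy; the dataset and model are arbitrary stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Honest setup: the model never sees the test set during training.
honest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Contaminated setup: the test set is mixed into the training data,
# so the model can simply memorize the examples it will be graded on.
leaky = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_train, X_test]),
    np.concatenate([y_train, y_test]),
)

print("honest test accuracy:      ", honest.score(X_test, y_test))
print("contaminated test accuracy:", leaky.score(X_test, y_test))  # inflated
```

Run as-is, the contaminated model scores near-perfect accuracy on the test set it has already seen, while the honest model's score reflects actual generalization.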
Over the weekend, an unsubstantiated rumor that Meta artificially boosted its new models' benchmark results began circulating on X and Reddit. The rumor appears to have originated from a post on a Chinese social media site by a user claiming to have resigned from Meta in protest over the company's benchmarking practices.
Reports that Maverick and Scout perform poorly on certain tasks fueled the rumor, as did Meta's decision to use an experimental, unreleased version of Maverick to achieve better scores on the benchmark LM Arena. Researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena.
Al-Dahle acknowledged that some users are seeing mixed quality from Maverick and Scout across the different providers hosting the models.
"Since we dropped the models as soon as they were ready, we expect it'll take several days for all the public implementations to get dialed in," Al-Dahle said. "We'll keep working through our bug fixes and onboarding partners."