Anthropic Used Pokémon to Benchmark Its Newest AI Model

Anthropic used Pokémon to benchmark its newest AI model. Yes, really.

In a blog post published Monday, Anthropic said that it tested its latest model, Claude 3.7 Sonnet, on the Game Boy classic Pokémon Red. The company equipped the model with basic memory, screen pixel input, and function calls to press buttons and navigate around the screen, allowing it to play Pokémon continuously.
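Anthropic hasn't published the harness itself, but a minimal tool-use loop along those lines is easy to sketch with the company's Python SDK. In the sketch below, the `press_button` tool, the `get_screen_png_base64` and `press_button` emulator helpers, and the model ID are hypothetical placeholders for illustration, not details from the blog post.

```python
import anthropic

# Hypothetical emulator interface; not part of Anthropic's published setup.
from my_emulator import get_screen_png_base64, press_button  # assumption

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A single tool the model can call to press a Game Boy button.
tools = [{
    "name": "press_button",
    "description": "Press one Game Boy button.",
    "input_schema": {
        "type": "object",
        "properties": {
            "button": {"type": "string",
                       "enum": ["a", "b", "start", "select",
                                "up", "down", "left", "right"]},
        },
        "required": ["button"],
    },
}]

messages = []
pending_results = []
for step in range(10):  # short demo loop; the real run involved tens of thousands of actions
    # Send any tool results from the last turn plus the current screen as an image.
    messages.append({
        "role": "user",
        "content": pending_results + [
            {"type": "image", "source": {"type": "base64",
                                         "media_type": "image/png",
                                         "data": get_screen_png_base64()}},
            {"type": "text", "text": "Here is the current screen. Pick the next button to press."},
        ],
    })

    response = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # assumed model ID
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})

    # Execute any button press the model requested and queue the result for the next turn.
    pending_results = []
    for block in response.content:
        if block.type == "tool_use" and block.name == "press_button":
            press_button(block.input["button"])
            pending_results.append({"type": "tool_result",
                                    "tool_use_id": block.id,
                                    "content": "ok"})
```

In a real harness the conversation history would also need to be summarized or truncated periodically, which is presumably where the "basic memory" Anthropic mentions comes in.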

A unique feature of Claude 3.7 Sonnet is its ability to engage in "extended thinking." Like OpenAI's o3-mini and DeepSeek's R1, Claude 3.7 Sonnet can "reason" through challenging problems by applying more computing power and taking more time.
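For context, enabling that mode through Anthropic's Python SDK looks roughly like the sketch below: the caller reserves a "thinking" token budget, and the response interleaves the model's reasoning with its final answer. The model ID, token budgets, and prompt are assumptions for illustration.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",   # assumed model ID
    max_tokens=16000,                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8000},  # reserve room to "reason"
    messages=[{"role": "user",
               "content": "I'm stuck in Mt. Moon. Plan a route to the exit."}],
)

# The response contains "thinking" blocks (the reasoning trace)
# followed by ordinary "text" blocks (the answer itself).
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```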

That came in handy in Pokémon Red, apparently.

Compared to a previous version of Claude, Claude 3.0 Sonnet, which failed to leave the house in Pallet Town where the story begins, Claude 3.7 Sonnet successfully battled three Pokémon gym leaders and won their badges.

Image Credits: Anthropic

Now, it’s not clear how much computing was required for Claude 3.7 Sonnet to reach those milestones, or how long it took. Anthropic only said that the model performed 35,000 actions to reach the last gym leader, Surge.

It surely won’t be long before some enterprising developer finds out.

Pokémon Red is more of a toy benchmark than anything. However, there is a long history of games being used for AI benchmarking purposes. In the past few months alone, a number of new apps and platforms have cropped up to test models’ game-playing abilities on titles ranging from Street Fighter to Pictionary.
