Meta has released a new collection of AI models, Llama 4, in its Llama family – on a Saturday, no less.
There are three models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on “large amounts of unlabeled text, image, and video data” to give them “broad visual understanding,” Meta says.
The success of open models from Chinese AI lab DeepSeek, which perform on par with or better than Meta’s previous flagship Llama models, reportedly kicked Llama development into overdrive. Meta is said to have scrambled war rooms to decipher how DeepSeek lowered the cost of running and deploying models like R1 and V3.
Scout and Maverick are openly available on Llama.com and from Meta’s partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta says that Meta AI, its AI-powered assistant across apps including WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. Multimodal features are limited to the U.S. in English for now.
Some developers may take issue with the Llama 4 license. Users in the EU are prohibited from using or distributing the models, likely the result of governance requirements imposed by the region’s AI and data privacy laws. (In the past, Meta has decried these laws as overly burdensome.) In addition, as with prior Llama releases, companies with more than 700 million monthly active users must request a special license from Meta, which Meta can grant – or deny – at its sole discretion.
“These Llama 4 models mark the beginning of a new era for the Llama ecosystem,” Meta wrote in a blog post. “This is just the beginning for the Llama 4 collection.”
Meta says that Llama 4 is its first cohort of models to use a mixture of experts (MoE) architecture, which is more computationally efficient for training and answering queries. MoE architectures essentially break data processing tasks down into subtasks and then delegate them to smaller, specialized “expert” models.
Maverick, for example, has 400 billion total parameters, but only 17 billion active parameters across 128 “experts.” (Parameters roughly correspond to a model’s problem-solving skills.)
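To make the idea concrete, here is a toy sketch of MoE routing – not Meta’s implementation, and with invented, much smaller dimensions and expert counts. It shows the core trick: a router picks a small subset of experts per token, so only a fraction of the model’s total parameters does work on any given input.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # Maverick reportedly uses 128; kept tiny here
TOP_K = 1         # only the top-scoring expert(s) run per token
DIM = 16          # illustrative hidden dimension

# Each "expert" is just a small weight matrix in this sketch.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
# The router scores how suitable each expert is for a given token.
router = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts only."""
    scores = token @ router                   # one score per expert
    top = np.argsort(scores)[-TOP_K:]         # indices of the best expert(s)
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the selected experts
    # Only the chosen experts execute; the rest stay inactive,
    # which is where the compute savings come from.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.normal(size=DIM))
print(out.shape)  # → (16,)
```

With `TOP_K = 1` out of 8 experts, roughly an eighth of the expert parameters are touched per token – the same principle that lets Maverick activate only 17 billion of its 400 billion parameters.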
According to Meta’s internal testing, Maverick, which the company says is best for “general assistant and chat” use cases like creative writing, exceeds models such as OpenAI’s GPT-4o and Google’s Gemini 2.0 on certain coding, reasoning, multilingual, long-context, and image benchmarks. However, Maverick does not quite measure up to more capable recent models like Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and OpenAI’s GPT-4.5.
Scout’s strengths lie in tasks like document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. (“Tokens” represent bits of raw text – e.g., the word “fantastic” split into “fan,” “tas,” and “tic.”) In plain English, Scout can take in images and up to millions of words, allowing it to work with extremely large documents.
Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system, according to Meta.
Meta’s unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta’s internal benchmarking has Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations measuring STEM skills and math problem solving.
Of note, none of the Llama 4 models is a proper “reasoning” model along the lines of OpenAI’s o1 and o3-mini. Reasoning models fact-check their answers and generally respond to questions more reliably, but as a consequence take longer than traditional, “non-reasoning” models to deliver answers.
Interestingly, Meta says that it tuned all of its Llama 4 models to refuse to answer “contentious” questions less often. According to the company, Llama 4 responds to “debated” political and social topics that the previous crop of Llama models wouldn’t. In addition, the company says, Llama 4 is “dramatically more balanced” with which prompts it flat-out won’t entertain.
“[Y]ou can count on [Llama 4] to provide helpful, factual responses without judgment,” a Meta spokesperson told TechCrunch. “[W]e’re continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints […] and doesn’t favor some views over others.”
Those tweaks come as White House allies accuse AI chatbots of political “wokeness.”
Many of President Donald Trump’s close confidants, including Elon Musk and crypto and AI “czar” David Sacks, have alleged that many AI chatbots censor conservative viewpoints. Sacks has historically singled out OpenAI’s ChatGPT in particular as “programmed to be woke” and untruthful about politically sensitive subjects.
In truth, bias in AI is an intractable technical problem. Musk’s own AI company, xAI, has struggled to create a chatbot that doesn’t endorse some political views over others.
That hasn’t stopped companies, including OpenAI, from adjusting their AI models to answer more questions than they would have previously, in particular questions relating to controversial political subjects.