
Meta: Llama 4 Maverick

by Meta

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input and produces multilingual text and code output in 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. It features early fusion for native multimodality and a 1-million-token context window. The model was trained on a curated mixture of public, licensed, and Meta-platform data totaling roughly 22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications that require advanced multimodal understanding and high throughput.
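For intuition on the active-vs-total parameter split, the sketch below routes each token to a single expert out of 128, so only that expert's weights (plus any shared layers) run on a given forward pass. This is a schematic TypeScript illustration, not Meta's implementation: the expert count and top-1 routing follow the description above, while the toy experts and router are invented for the example.

// Schematic mixture-of-experts routing: each token picks one expert out of
// NUM_EXPERTS, so only a small slice of the total weights is "active" for it.
// Purely illustrative; not Meta's code.

const NUM_EXPERTS = 128; // routed experts, per the description above

type Expert = (x: number[]) => number[];

// Toy experts: each just scales the input by a different factor.
const experts: Expert[] = Array.from({ length: NUM_EXPERTS }, (_, i): Expert =>
  (x) => x.map((v) => v * (1 + i / NUM_EXPERTS))
);

// Toy router: score every expert for this token and keep the best one (top-1).
function route(token: number[]): number {
  let best = 0;
  let bestScore = -Infinity;
  for (let i = 0; i < NUM_EXPERTS; i++) {
    const score = token.reduce((s, v, d) => s + v * Math.sin(i + d), 0); // stand-in for a learned router
    if (score > bestScore) { bestScore = score; best = i; }
  }
  return best;
}

// Forward pass for one token: only the chosen expert runs, which is why
// roughly 17B of the ~400B parameters are active per pass.
function moeLayer(token: number[]): number[] {
  return experts[route(token)](token);
}

console.log(moeLayer([0.1, 0.2, 0.3, 0.4]));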

Avg Score: 76.4% (240 answers)
Avg Latency: 2.6s (15 runs)
Pricing: $0.15 input / $0.60 output per 1M tokens
Context: 1049K tokens
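At the listed rates, per-request cost is simple arithmetic: input tokens at $0.15 per million plus output tokens at $0.60 per million. A minimal sketch (the token counts in the example are hypothetical):

// Estimate the cost of one request at the listed per-1M-token rates.
const INPUT_PER_M = 0.15;  // USD per 1M input tokens (pricing above)
const OUTPUT_PER_M = 0.60; // USD per 1M output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * INPUT_PER_M +
         (outputTokens / 1_000_000) * OUTPUT_PER_M;
}

// Example: a 20,000-token prompt with a 1,000-token reply (hypothetical numbers)
console.log(estimateCostUSD(20_000, 1_000).toFixed(4)); // "0.0036"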

Alternatives

Models with similar or better quality but different tradeoffs

Same Quality, Cheaper

Models with similar or better performance at a lower cost per token.

Same Quality, Faster

Models with similar or better performance but lower latency.

Same Cost, Better

Models at a similar price point with higher benchmark scores.

Other Models from Meta

Compare performance with other models from the same creator

Benchmark Performance

How this model performs across different benchmarks

Price vs Performance

Compare cost efficiency across all models

The chart plots this model against all others; the X-axis uses a log scale to better visualize the price range.

Score Over Time

Performance trends across all benchmark runs

Benchmark Activity

Number of benchmark runs over time

Quickstart

Get started with this model using OpenRouter

import { OpenRouter } from "@openrouter/sdk";

// Create a client authenticated with your OpenRouter API key
const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Request a chat completion from Llama 4 Maverick
const completion = await openrouter.chat.completions.create({
  model: "meta-llama/llama-4-maverick",
  messages: [
    {
      role: "user",
      content: "Hello!"
    }
  ]
});

// Print the model's reply
console.log(completion.choices[0].message.content);

Get your API key at openrouter.ai/keys
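Because Maverick accepts image input, the sketch below sends an image URL alongside a text prompt. It assumes the SDK takes the same OpenAI-style content array that OpenRouter's chat completions endpoint accepts; the image URL is a placeholder.

import { OpenRouter } from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Mixed text + image message; replace the URL with your own image.
const completion = await openrouter.chat.completions.create({
  model: "meta-llama/llama-4-maverick",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What is in this image?" },
        { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } }
      ]
    }
  ]
});

console.log(completion.choices[0].message.content);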