Auto Router

by OpenRouter

Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. Learn more, including how to customize the models for routing, in our [docs](/docs/guides/routing/routers/auto-router).

Requests will be routed to the following models:

- [openai/gpt-5.1](/openai/gpt-5.1)
- [openai/gpt-5](/openai/gpt-5)
- [openai/gpt-5-mini](/openai/gpt-5-mini)
- [openai/gpt-5-nano](/openai/gpt-5-nano)
- [openai/gpt-4.1](/openai/gpt-4.1)
- [openai/gpt-4.1-mini](/openai/gpt-4.1-mini)
- [openai/gpt-4.1-nano](/openai/gpt-4.1-nano)
- [openai/gpt-4o](/openai/gpt-4o)
- [openai/gpt-4o-2024-05-13](/openai/gpt-4o-2024-05-13)
- [openai/gpt-4o-2024-08-06](/openai/gpt-4o-2024-08-06)
- [openai/gpt-4o-2024-11-20](/openai/gpt-4o-2024-11-20)
- [openai/gpt-4o-mini](/openai/gpt-4o-mini)
- [openai/gpt-4o-mini-2024-07-18](/openai/gpt-4o-mini-2024-07-18)
- [openai/gpt-4-turbo](/openai/gpt-4-turbo)
- [openai/gpt-4-turbo-preview](/openai/gpt-4-turbo-preview)
- [openai/gpt-4-1106-preview](/openai/gpt-4-1106-preview)
- [openai/gpt-4](/openai/gpt-4)
- [openai/gpt-3.5-turbo](/openai/gpt-3.5-turbo)
- [openai/gpt-oss-120b](/openai/gpt-oss-120b)
- [anthropic/claude-opus-4.5](/anthropic/claude-opus-4.5)
- [anthropic/claude-opus-4.1](/anthropic/claude-opus-4.1)
- [anthropic/claude-opus-4](/anthropic/claude-opus-4)
- [anthropic/claude-sonnet-4.5](/anthropic/claude-sonnet-4.5)
- [anthropic/claude-sonnet-4](/anthropic/claude-sonnet-4)
- [anthropic/claude-3.7-sonnet](/anthropic/claude-3.7-sonnet)
- [anthropic/claude-haiku-4.5](/anthropic/claude-haiku-4.5)
- [anthropic/claude-3.5-haiku](/anthropic/claude-3.5-haiku)
- [anthropic/claude-3-haiku](/anthropic/claude-3-haiku)
- [google/gemini-3-pro-preview](/google/gemini-3-pro-preview)
- [google/gemini-2.5-pro](/google/gemini-2.5-pro)
- [google/gemini-2.5-flash](/google/gemini-2.5-flash)
- [google/gemini-2.0-flash-001](/google/gemini-2.0-flash-001)
- [mistralai/mistral-large](/mistralai/mistral-large)
- [mistralai/mistral-large-2407](/mistralai/mistral-large-2407)
- [mistralai/mistral-large-2411](/mistralai/mistral-large-2411)
- [mistralai/mistral-medium-3.1](/mistralai/mistral-medium-3.1)
- [mistralai/mistral-nemo](/mistralai/mistral-nemo)
- [mistralai/mistral-7b-instruct](/mistralai/mistral-7b-instruct)
- [mistralai/mixtral-8x7b-instruct](/mistralai/mixtral-8x7b-instruct)
- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)
- [mistralai/codestral-2508](/mistralai/codestral-2508)
- [x-ai/grok-4](/x-ai/grok-4)
- [x-ai/grok-3](/x-ai/grok-3)
- [x-ai/grok-3-mini](/x-ai/grok-3-mini)
- [deepseek/deepseek-r1](/deepseek/deepseek-r1)
- [meta-llama/llama-3.3-70b-instruct](/meta-llama/llama-3.3-70b-instruct)
- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)
- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)
- [meta-llama/llama-3.1-8b-instruct](/meta-llama/llama-3.1-8b-instruct)
- [meta-llama/llama-3-70b-instruct](/meta-llama/llama-3-70b-instruct)
- [meta-llama/llama-3-8b-instruct](/meta-llama/llama-3-8b-instruct)
- [qwen/qwen3-235b-a22b](/qwen/qwen3-235b-a22b)
- [qwen/qwen3-32b](/qwen/qwen3-32b)
- [qwen/qwen3-14b](/qwen/qwen3-14b)
- [cohere/command-r-plus-08-2024](/cohere/command-r-plus-08-2024)
- [cohere/command-r-08-2024](/cohere/command-r-08-2024)
- [moonshotai/kimi-k2-thinking](/moonshotai/kimi-k2-thinking)
- [perplexity/sonar](/perplexity/sonar)
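To illustrate reading the `model` attribute described above, here is a minimal sketch. The response object is hand-built to mirror the OpenAI-compatible shape OpenRouter returns; the routed model ID shown is an example, not a real API result:

```typescript
// Illustrative only: a hand-built response in the OpenAI-compatible shape,
// showing where the routed model's ID appears after an "openrouter/auto" request.
interface ChatCompletion {
  id: string;
  model: string; // the concrete model the Auto Router selected
  choices: { message: { role: string; content: string } }[];
}

// Example response; in practice this comes back from the API.
const completion: ChatCompletion = {
  id: "gen-example", // placeholder ID
  model: "anthropic/claude-sonnet-4.5", // example routed model
  choices: [{ message: { role: "assistant", content: "Hello!" } }],
};

// The request was sent with model "openrouter/auto"; the response
// reports which model actually handled it (and sets the price).
console.log(`Routed to: ${completion.model}`);
```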

Avg Score: 84.2% (510 answers)

Avg Latency: 40.6s (278 runs)

Pricing: $4.83 input / $18.81 output per 1M tokens

Context: 2000K tokens
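The per-1M-token rates above can be turned into a per-request estimate; a minimal sketch using the page's headline rates (note: actual billing follows the routed model's own rate, so this illustrates the listed averages, not a billing guarantee):

```typescript
// Estimate request cost from the Auto Router page's listed rates.
// Real charges use the routed model's rates; these numbers are illustrative.
function estimateCost(inputTokens: number, outputTokens: number): number {
  const INPUT_PER_M = 4.83;   // $ per 1M input tokens (from this page)
  const OUTPUT_PER_M = 18.81; // $ per 1M output tokens (from this page)
  return (
    (inputTokens / 1_000_000) * INPUT_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PER_M
  );
}

// e.g. a request with 2,000 prompt tokens and 500 completion tokens:
const cost = estimateCost(2_000, 500);
console.log(`$${cost.toFixed(4)}`); // roughly two cents
```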

Alternatives

Models with similar or better quality but different tradeoffs

Same Quality, Cheaper

Models with similar or better performance at a lower cost per token.

Same Quality, Faster

Models with similar or better performance but lower latency.

Same Cost, Better

Models at a similar price point with higher benchmark scores.

Other Models from OpenRouter

Compare performance with other models from the same creator

| Model | Score | Latency | Cost/1M |
| --- | --- | --- | --- |
| Body Builder (beta) | 3.1% | 29.1s | Free |

Benchmark Performance

How this model performs across different benchmarks

| Benchmark | Score | Rank |
| --- | --- | --- |
| Venture Capital Terms Benchmark | 100.0% | 1 / 25 |
| Niederstetten Benchmark | 86.0% | 5 / 41 |

Price vs Performance

Compare cost efficiency across all models

[Chart: current model (baseline) vs. other models (relative score). The Y-axis shows score difference on shared benchmarks; the X-axis uses a log scale.]

Score Over Time

Performance trends across all benchmark runs

Benchmark Activity

Number of benchmark runs over time

Quickstart

Get started with this model using OpenRouter

```typescript
import { OpenRouter } from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

const completion = await openrouter.chat.completions.create({
  model: "openrouter/auto",
  messages: [
    {
      role: "user",
      content: "Hello!"
    }
  ]
});

console.log(completion.choices[0].message.content);
```

Get your API key at [openrouter.ai/keys](https://openrouter.ai/keys).