Auto Router by OpenRouter
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit [Activity](/activity), or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. Learn more, including how to customize the models for routing, in our [docs](/docs/guides/routing/routers/auto-router).

Requests will be routed to the following models:

- [openai/gpt-5.1](/openai/gpt-5.1)
- [openai/gpt-5](/openai/gpt-5)
- [openai/gpt-5-mini](/openai/gpt-5-mini)
- [openai/gpt-5-nano](/openai/gpt-5-nano)
- [openai/gpt-4.1](/openai/gpt-4.1)
- [openai/gpt-4.1-mini](/openai/gpt-4.1-mini)
- [openai/gpt-4.1-nano](/openai/gpt-4.1-nano)
- [openai/gpt-4o](/openai/gpt-4o)
- [openai/gpt-4o-2024-05-13](/openai/gpt-4o-2024-05-13)
- [openai/gpt-4o-2024-08-06](/openai/gpt-4o-2024-08-06)
- [openai/gpt-4o-2024-11-20](/openai/gpt-4o-2024-11-20)
- [openai/gpt-4o-mini](/openai/gpt-4o-mini)
- [openai/gpt-4o-mini-2024-07-18](/openai/gpt-4o-mini-2024-07-18)
- [openai/gpt-4-turbo](/openai/gpt-4-turbo)
- [openai/gpt-4-turbo-preview](/openai/gpt-4-turbo-preview)
- [openai/gpt-4-1106-preview](/openai/gpt-4-1106-preview)
- [openai/gpt-4](/openai/gpt-4)
- [openai/gpt-3.5-turbo](/openai/gpt-3.5-turbo)
- [openai/gpt-oss-120b](/openai/gpt-oss-120b)
- [anthropic/claude-opus-4.5](/anthropic/claude-opus-4.5)
- [anthropic/claude-opus-4.1](/anthropic/claude-opus-4.1)
- [anthropic/claude-opus-4](/anthropic/claude-opus-4)
- [anthropic/claude-sonnet-4.5](/anthropic/claude-sonnet-4.5)
- [anthropic/claude-sonnet-4](/anthropic/claude-sonnet-4)
- [anthropic/claude-3.7-sonnet](/anthropic/claude-3.7-sonnet)
- [anthropic/claude-haiku-4.5](/anthropic/claude-haiku-4.5)
- [anthropic/claude-3.5-haiku](/anthropic/claude-3.5-haiku)
- [anthropic/claude-3-haiku](/anthropic/claude-3-haiku)
- [google/gemini-3-pro-preview](/google/gemini-3-pro-preview)
- [google/gemini-2.5-pro](/google/gemini-2.5-pro)
- [google/gemini-2.0-flash-001](/google/gemini-2.0-flash-001)
- [google/gemini-2.5-flash](/google/gemini-2.5-flash)
- [mistralai/mistral-large](/mistralai/mistral-large)
- [mistralai/mistral-large-2407](/mistralai/mistral-large-2407)
- [mistralai/mistral-large-2411](/mistralai/mistral-large-2411)
- [mistralai/mistral-medium-3.1](/mistralai/mistral-medium-3.1)
- [mistralai/mistral-nemo](/mistralai/mistral-nemo)
- [mistralai/mistral-7b-instruct](/mistralai/mistral-7b-instruct)
- [mistralai/mixtral-8x7b-instruct](/mistralai/mixtral-8x7b-instruct)
- [mistralai/mixtral-8x22b-instruct](/mistralai/mixtral-8x22b-instruct)
- [mistralai/codestral-2508](/mistralai/codestral-2508)
- [x-ai/grok-4](/x-ai/grok-4)
- [x-ai/grok-3](/x-ai/grok-3)
- [x-ai/grok-3-mini](/x-ai/grok-3-mini)
- [deepseek/deepseek-r1](/deepseek/deepseek-r1)
- [meta-llama/llama-3.3-70b-instruct](/meta-llama/llama-3.3-70b-instruct)
- [meta-llama/llama-3.1-405b-instruct](/meta-llama/llama-3.1-405b-instruct)
- [meta-llama/llama-3.1-70b-instruct](/meta-llama/llama-3.1-70b-instruct)
- [meta-llama/llama-3.1-8b-instruct](/meta-llama/llama-3.1-8b-instruct)
- [meta-llama/llama-3-70b-instruct](/meta-llama/llama-3-70b-instruct)
- [meta-llama/llama-3-8b-instruct](/meta-llama/llama-3-8b-instruct)
- [qwen/qwen3-235b-a22b](/qwen/qwen3-235b-a22b)
- [qwen/qwen3-32b](/qwen/qwen3-32b)
- [qwen/qwen3-14b](/qwen/qwen3-14b)
- [cohere/command-r-plus-08-2024](/cohere/command-r-plus-08-2024)
- [cohere/command-r-08-2024](/cohere/command-r-08-2024)
- [moonshotai/kimi-k2-thinking](/moonshotai/kimi-k2-thinking)
- [perplexity/sonar](/perplexity/sonar)
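The routing flow described above can be sketched in Python using only the standard library. This is a minimal sketch, not OpenRouter's own client code: the endpoint and payload shape follow OpenRouter's OpenAI-compatible chat-completions API, while the helper names `build_auto_request` and `routed_model` are ours for illustration. The key idea is that you send `"openrouter/auto"` as the model and then read the `model` attribute of the response to learn which concrete model handled the request.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_auto_request(prompt: str) -> dict:
    # Setting the model to "openrouter/auto" lets the meta-model
    # pick one of the concrete models listed above.
    return {
        "model": "openrouter/auto",
        "messages": [{"role": "user", "content": prompt}],
    }


def routed_model(api_key: str, prompt: str) -> str:
    """Send a prompt through the Auto Router and return the model it chose."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_auto_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The response's `model` attribute names the routed model
    # (e.g. an OpenAI, Anthropic, or Google slug from the list above);
    # the request is billed at that model's rate.
    return body["model"]
```

You would call `routed_model(api_key, "Summarize this paragraph...")` with a valid API key; the returned slug matches what appears on the Activity page for the same request.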