
xAI: Grok 4 Fast

by xAI

Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M-token context window. It comes in two flavors: non-reasoning and reasoning. Read more about the model in xAI's [news post](http://x.ai/news/grok-4-fast). Reasoning can be enabled or disabled via the `reasoning.enabled` parameter in the API. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#controlling-reasoning-tokens).
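As a minimal sketch of toggling reasoning, the request body below follows the `reasoning.enabled` shape described in the reasoning-tokens docs linked above; the `buildGrokRequest` helper name is ours, not part of any SDK.

```javascript
// Build a chat-completions request body with reasoning explicitly toggled.
// The `reasoning: { enabled: ... }` shape follows OpenRouter's
// reasoning-tokens docs; this helper is illustrative only.
function buildGrokRequest(prompt, reasoningEnabled) {
  return {
    model: "x-ai/grok-4-fast",
    messages: [{ role: "user", content: prompt }],
    reasoning: { enabled: reasoningEnabled },
  };
}

// POST the body to OpenRouter's chat completions endpoint, e.g.:
// fetch("https://openrouter.ai/api/v1/chat/completions", {
//   method: "POST",
//   headers: {
//     Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
//     "Content-Type": "application/json",
//   },
//   body: JSON.stringify(buildGrokRequest("Hello!", true)),
// });
```

Setting `enabled: false` routes the request to the non-reasoning flavor of the model.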

Avg Score: 79.3% (14 answers)

Avg Latency: 18.3s (10 runs)

Pricing: $0.20 input / $0.50 output per 1M tokens

Context: 2M tokens
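The listed rates can be turned into a per-request cost estimate. The helper below is a sketch using the prices above ($0.20 input, $0.50 output per 1M tokens); the function name is ours.

```javascript
// Grok 4 Fast rates, USD per 1M tokens (from the pricing card above).
const INPUT_PER_M = 0.20;
const OUTPUT_PER_M = 0.50;

// Estimate the USD cost of one request from its token counts.
function estimateCostUSD(inputTokens, outputTokens) {
  return (inputTokens / 1e6) * INPUT_PER_M + (outputTokens / 1e6) * OUTPUT_PER_M;
}

// e.g. a 10K-token prompt with a 1K-token reply ≈ $0.0025
const example = estimateCostUSD(10_000, 1_000);
```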

Alternatives

Models with similar or better quality but different tradeoffs

No alternatives found

Run benchmarks on this model to discover alternatives

Other Models from xAI

Compare performance with other models from the same creator

| Model | Score | Latency | Cost/1M |
| --- | --- | --- | --- |
| xAI: Grok 3 | 93.8% | 24.7s | $9.00 |
| xAI: Grok 3 Mini Beta | 93.8% | 23.3s | $0.40 |
| xAI: Grok 3 Beta | 91.7% | 26.8s | $9.00 |
| xAI: Grok Code Fast 1 | 89.2% | 16.1s | $0.85 |
| xAI: Grok 3 Mini | 87.7% | 22.0s | $0.40 |
| xAI: Grok 4.1 Fast | 81.4% | 14.2s | $0.35 |
| xAI: Grok 4 | 81.2% | 43.0s | $9.00 |

Benchmark Performance

How this model performs across different benchmarks

No benchmark data available

Run benchmarks with this model to see performance breakdown

Price vs Performance

Compare cost efficiency across all models

[Chart: current model as baseline vs. other models' relative scores. Y-axis shows score difference on shared benchmarks; X-axis is cost on a log scale.]

Score Over Time

Performance trends across all benchmark runs

Benchmark Activity

Number of benchmark runs over time

Quickstart

Get started with this model using OpenRouter

import { OpenRouter } from "@openrouter/sdk";

// Create a client with your OpenRouter API key (see openrouter.ai/keys).
const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Send a chat completion request to Grok 4 Fast.
const completion = await openrouter.chat.completions.create({
  model: "x-ai/grok-4-fast",
  messages: [
    {
      role: "user",
      content: "Hello!"
    }
  ]
});

// Print the assistant's reply.
console.log(completion.choices[0].message.content);

Get your API key at openrouter.ai/keys