
OpenAI: Codex Mini

by OpenAI

codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1.

Avg Score: 0.0% (0 answers)

Avg Latency: 0ms (0 runs)

Pricing: $1.50 input / $6.00 output, per 1M tokens

Context: 200K tokens
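At these rates, the cost of a single request is simple to estimate from its token counts; a minimal sketch (the token counts in the example are illustrative, not benchmark data):

```typescript
// Listed codex-mini rates, in USD per 1M tokens.
const INPUT_PER_M = 1.50;
const OUTPUT_PER_M = 6.00;

// Cost in USD of one request, given its input and output token counts.
function requestCostUsd(inputTokens: number, outputTokens: number): number {
  return (inputTokens * INPUT_PER_M + outputTokens * OUTPUT_PER_M) / 1_000_000;
}

// e.g. a 10K-token prompt with a 2K-token reply:
console.log(requestCostUsd(10_000, 2_000).toFixed(4)); // "0.0270"
```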

Alternatives

Models with similar or better quality but different tradeoffs

No alternatives found


Other Models from OpenAI

Compare performance with other models from the same creator

Model | Score | Latency | Cost/1M
OpenAI: o1-pro | 94.2% | 95.3s | $375.00
OpenAI: GPT-5.1-Codex-Max | 93.5% | 25.2s | $5.63
OpenAI: GPT-5 | 92.2% | 54.7s | $5.63
OpenAI: GPT-5 Image | 90.9% | 49.2s | $10.00
OpenAI: GPT-5.1 Chat | 90.6% | 6.1s | $5.63
OpenAI: GPT-5.1-Codex | 90.6% | 16.6s | $5.63
OpenAI: GPT-5.1-Codex-Mini | 90.6% | 12.6s | $1.13
OpenAI: GPT-5.1 | 90.3% | 35.5s | $5.63
OpenAI: o3 | 90.3% | 19.0s | $5.00
OpenAI: gpt-oss-120b | 90.0% | 28.3s | $0.11
OpenAI: o4 Mini Deep Research | 89.7% | 134.5s | $5.00
OpenAI: GPT-5 Pro | 89.4% | 332.1s | $67.50
OpenAI: GPT-5 Image Mini | 89.3% | 31.8s | $2.25
OpenAI: GPT-4o Search Preview | 89.3% | 11.2s | $6.25
OpenAI: GPT-4.1 | 89.2% | 21.8s | $5.00
OpenAI: gpt-oss-safeguard-20b | 88.4% | 2.3s | $0.19
OpenAI: o1 | 88.3% | 24.1s | $37.50
OpenAI: GPT-5.2 | 87.8% | 20.7s | $7.88
OpenAI: o3 Pro | 87.6% | 125.7s | $50.00
OpenAI: GPT-5.2-Codex | 87.2% | 21.7s | $7.88
OpenAI: GPT-5 Codex | 87.2% | 20.5s | $5.63
OpenAI: GPT-5.2 Pro | 86.9% | 47.7s | $94.50
OpenAI: o3 Deep Research | 86.5% | 362.7s | $25.00
OpenAI: GPT-5.2 Chat | 86.4% | 9.5s | $7.88
OpenAI: GPT-5 Chat | 86.2% | 6.8s | $5.63
OpenAI: o4 Mini High | 86.1% | 26.8s | $2.75
OpenAI: o3 Mini High | 86.1% | 13.8s | $2.75
OpenAI: o4 Mini | 85.3% | 18.2s | $2.75
OpenAI: o3 Mini | 85.3% | 16.0s | $2.75
OpenAI: ChatGPT-4o | 82.9% | 7.4s | $10.00
OpenAI: GPT-5 Mini | 82.5% | 24.7s | $1.13
OpenAI: GPT-5 Nano | 77.1% | 34.2s | $0.22
OpenAI: GPT-4o (2024-05-13) | 76.5% | 4.7s | $10.00
OpenAI: GPT-4o | 76.1% | 5.8s | $12.00
OpenAI: GPT-4o (2024-11-20) | 75.6% | 13.0s | $6.25
OpenAI: GPT-4o | 73.8% | 12.5s | $6.25
OpenAI: gpt-oss-20b | 72.8% | 13.3s | $0.06
OpenAI: GPT-4.1 Mini | 71.9% | 13.2s | $1.00
OpenAI: GPT-3.5 Turbo (older v0613) | 70.8% | 13.7s | $1.50
OpenAI: GPT-4o-mini (2024-07-18) | 70.3% | 13.2s | $0.38
OpenAI: GPT-4o (2024-08-06) | 70.0% | 9.5s | $6.25
OpenAI: gpt-oss-120b | 69.6% | 29.0s | $0.11
OpenAI: GPT-4o-mini | 67.4% | 11.5s | $0.38
OpenAI: GPT-4o-mini Search Preview | 66.2% | 6.5s | $0.38
OpenAI: GPT-4 Turbo (older v1106) | 64.7% | 18.1s | $20.00
OpenAI: GPT-4 Turbo Preview | 62.6% | 15.8s | $20.00
OpenAI: GPT-4 | 61.7% | 12.1s | $45.00
OpenAI: GPT-4 Turbo | 61.5% | 26.5s | $20.00
OpenAI: GPT-4 (older v0314) | 59.7% | 16.8s | $45.00
OpenAI: GPT-4.1 Nano | 59.2% | 5.3s | $0.25
OpenAI: GPT-3.5 Turbo 16k | 48.2% | 4.2s | $3.50
OpenAI: GPT-3.5 Turbo | 42.9% | 4.0s | $1.00
OpenAI: GPT-3.5 Turbo Instruct | 27.5% | 4.4s | $1.75
OpenAI: GPT-4o Audio | n/a | n/a | $6.25
OpenAI: gpt-oss-120b | n/a | n/a | Free
OpenAI: gpt-oss-20b | n/a | n/a | Free
OpenAI: GPT Audio | n/a | n/a | $6.25
OpenAI: GPT Audio Mini | n/a | n/a | $1.50

Benchmark Performance

How this model performs across different benchmarks

No benchmark data available


Score Over Time

Performance trends across all benchmark runs

No score trend data


Benchmark Activity

Number of benchmark runs over time

No activity data


Quickstart

Get started with this model using OpenRouter

import { OpenRouter } from "@openrouter/sdk";

const openrouter = new OpenRouter({
  // Replace with your key, or read it from process.env.OPENROUTER_API_KEY
  apiKey: "<OPENROUTER_API_KEY>"
});

// Request a chat completion from Codex Mini (OpenRouter slug: openai/codex-mini)
const completion = await openrouter.chat.completions.create({
  model: "openai/codex-mini",
  messages: [
    {
      role: "user",
      content: "Hello!"
    }
  ]
});

console.log(completion.choices[0].message.content);

Get your API key at openrouter.ai/keys
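OpenRouter also exposes an OpenAI-compatible endpoint, so the official openai npm package can be pointed at it instead of the OpenRouter SDK. A minimal sketch; the base URL and model slug follow OpenRouter's documented conventions, but check the current docs before relying on them:

```typescript
import OpenAI from "openai";

// Point the official OpenAI client at OpenRouter's compatible endpoint.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "openai/codex-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(completion.choices[0].message.content);
```

This route is useful when a project already depends on the openai package and only the base URL and key need to change.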