
OpenAI: GPT-5.2-Codex

by OpenAI

GPT-5.2-Codex is an upgraded version of GPT-5.1-Codex optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5.1-Codex, GPT-5.2-Codex is more steerable, adheres more closely to developer instructions, and produces cleaner, higher-quality code. Reasoning effort can be adjusted with the `reasoning.effort` parameter; read the [docs here](https://openrouter.ai/docs/use-cases/reasoning-tokens#reasoning-effort-level).

Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also accepts multimodal inputs such as images or screenshots for UI development, and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.
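As a minimal sketch of the `reasoning.effort` parameter mentioned above, the request body can carry a `reasoning` object alongside the usual chat fields. The payload shape follows OpenRouter's reasoning-tokens docs; the prompt here is illustrative.

```typescript
// Sketch: a chat-completion request body with adjustable reasoning effort.
// "low" | "medium" | "high" trades speed and cost against deeper reasoning.
const body = {
  model: "openai/gpt-5.2-codex",
  messages: [
    { role: "user", content: "Refactor this function for clarity." },
  ],
  reasoning: { effort: "high" }, // omit to let the model choose dynamically
};

console.log(JSON.stringify(body, null, 2));
```

The same body can be sent through the SDK shown in the Quickstart below or posted directly to the OpenRouter chat completions endpoint.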

- Avg Score: 87.2% (9 answers)
- Avg Latency: 21.7s (9 runs)
- Pricing: $1.75 input / $14.00 output, per 1M tokens
- Context: 400K tokens
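Given the listed per-1M-token prices, a request's cost is simple arithmetic. A quick sketch, using hypothetical token counts:

```typescript
// Estimate request cost from the listed GPT-5.2-Codex prices.
const INPUT_PRICE_USD = 1.75;   // per 1M input tokens
const OUTPUT_PRICE_USD = 14.0;  // per 1M output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_USD +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_USD
  );
}

// Hypothetical request: 20K prompt tokens, 3K completion tokens.
console.log(estimateCostUSD(20_000, 3_000).toFixed(4)); // → "0.0770"
```

Note that reasoning tokens bill as output tokens, so higher `reasoning.effort` settings raise the output-side cost.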

Alternatives

Models with similar or better quality but different tradeoffs

No alternatives found


Other Models from OpenAI

Compare performance with other models from the same creator

| Model | Score | Latency | Cost/1M |
|---|---|---|---|
| OpenAI: o1-pro | 94.2% | 95.3s | $375.00 |
| OpenAI: GPT-5.1-Codex-Max | 93.5% | 25.2s | $5.63 |
| OpenAI: GPT-5 | 92.2% | 54.7s | $5.63 |
| OpenAI: GPT-5 Image | 90.9% | 49.2s | $10.00 |
| OpenAI: GPT-5.1 Chat | 90.6% | 6.1s | $5.63 |
| OpenAI: GPT-5.1-Codex | 90.6% | 16.6s | $5.63 |
| OpenAI: GPT-5.1-Codex-Mini | 90.6% | 12.6s | $1.13 |
| OpenAI: GPT-5.1 | 90.3% | 35.5s | $5.63 |
| OpenAI: o3 | 90.3% | 19.0s | $5.00 |
| OpenAI: gpt-oss-120b | 90.0% | 28.3s | $0.11 |
| OpenAI: o4 Mini Deep Research | 89.7% | 134.5s | $5.00 |
| OpenAI: GPT-5 Pro | 89.4% | 332.1s | $67.50 |
| OpenAI: GPT-5 Image Mini | 89.3% | 31.8s | $2.25 |
| OpenAI: GPT-4o Search Preview | 89.3% | 11.2s | $6.25 |
| OpenAI: GPT-4.1 | 89.2% | 21.8s | $5.00 |
| OpenAI: gpt-oss-safeguard-20b | 88.4% | 2.3s | $0.19 |
| OpenAI: o1 | 88.3% | 24.1s | $37.50 |
| OpenAI: GPT-5.2 | 87.8% | 20.7s | $7.88 |
| OpenAI: o3 Pro | 87.6% | 125.7s | $50.00 |
| OpenAI: GPT-5 Codex | 87.2% | 20.5s | $5.63 |
| OpenAI: GPT-5.2 Pro | 86.9% | 47.7s | $94.50 |
| OpenAI: o3 Deep Research | 86.5% | 362.7s | $25.00 |
| OpenAI: GPT-5.2 Chat | 86.4% | 9.5s | $7.88 |
| OpenAI: GPT-5 Chat | 86.2% | 6.8s | $5.63 |
| OpenAI: o4 Mini High | 86.1% | 26.8s | $2.75 |
| OpenAI: o3 Mini High | 86.1% | 13.8s | $2.75 |
| OpenAI: o4 Mini | 85.3% | 18.2s | $2.75 |
| OpenAI: o3 Mini | 85.3% | 16.0s | $2.75 |
| OpenAI: ChatGPT-4o | 82.9% | 7.4s | $10.00 |
| OpenAI: GPT-5 Mini | 82.5% | 24.7s | $1.13 |
| OpenAI: GPT-5 Nano | 77.1% | 34.2s | $0.22 |
| OpenAI: GPT-4o (2024-05-13) | 76.5% | 4.7s | $10.00 |
| OpenAI: GPT-4o | 76.1% | 5.8s | $12.00 |
| OpenAI: GPT-4o (2024-11-20) | 75.6% | 13.0s | $6.25 |
| OpenAI: GPT-4o | 73.8% | 12.5s | $6.25 |
| OpenAI: gpt-oss-20b | 72.8% | 13.3s | $0.06 |
| OpenAI: GPT-4.1 Mini | 71.9% | 13.2s | $1.00 |
| OpenAI: GPT-3.5 Turbo (older v0613) | 70.8% | 13.7s | $1.50 |
| OpenAI: GPT-4o-mini (2024-07-18) | 70.3% | 13.2s | $0.38 |
| OpenAI: GPT-4o (2024-08-06) | 70.0% | 9.5s | $6.25 |
| OpenAI: gpt-oss-120b | 69.6% | 29.0s | $0.11 |
| OpenAI: GPT-4o-mini | 67.4% | 11.5s | $0.38 |
| OpenAI: GPT-4o-mini Search Preview | 66.2% | 6.5s | $0.38 |
| OpenAI: GPT-4 Turbo (older v1106) | 64.7% | 18.1s | $20.00 |
| OpenAI: GPT-4 Turbo Preview | 62.6% | 15.8s | $20.00 |
| OpenAI: GPT-4 | 61.7% | 12.1s | $45.00 |
| OpenAI: GPT-4 Turbo | 61.5% | 26.5s | $20.00 |
| OpenAI: GPT-4 (older v0314) | 59.7% | 16.8s | $45.00 |
| OpenAI: GPT-4.1 Nano | 59.2% | 5.3s | $0.25 |
| OpenAI: GPT-3.5 Turbo 16k | 48.2% | 4.2s | $3.50 |
| OpenAI: GPT-3.5 Turbo | 42.9% | 4.0s | $1.00 |
| OpenAI: GPT-3.5 Turbo Instruct | 27.5% | 4.4s | $1.75 |
| OpenAI: GPT-4o Audio | - | - | $6.25 |
| OpenAI: gpt-oss-120b | - | - | Free |
| OpenAI: gpt-oss-20b | - | - | Free |
| OpenAI: GPT Audio | - | - | $6.25 |
| OpenAI: GPT Audio Mini | - | - | $1.50 |
| OpenAI: Codex Mini | - | - | $3.75 |

Benchmark Performance

How this model performs across different benchmarks

No benchmark data available


Price vs Performance

Compare cost efficiency across all models

[Chart: score difference from shared benchmarks (Y) vs. price on a log-scale X-axis; the current model is the baseline, other models are plotted relative to it.]

Score Over Time

Performance trends across all benchmark runs

Benchmark Activity

Number of benchmark runs over time

Quickstart

Get started with this model using OpenRouter

```typescript
// Requires the OpenRouter SDK (e.g. npm install @openrouter/sdk).
import { OpenRouter } from "@openrouter/sdk";

// Authenticate with your OpenRouter API key (see openrouter.ai/keys).
const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Send a chat completion request to GPT-5.2-Codex.
const completion = await openrouter.chat.completions.create({
  model: "openai/gpt-5.2-codex",
  messages: [
    {
      role: "user",
      content: "Hello!"
    }
  ]
});

// Print the assistant's reply.
console.log(completion.choices[0].message.content);
```

Get your API key at openrouter.ai/keys