gpt-oss-120b by OpenAI
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
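Since the model exposes OpenAI-compatible function calling and configurable reasoning effort through OpenRouter, a request exercising both might look like the sketch below. This is a minimal illustration rather than official documentation: the `get_weather` tool is hypothetical, and the endpoint URL, `openai/gpt-oss-120b` slug, and `reasoning` object are assumptions based on OpenRouter's published request conventions.

```python
import json
import os

import requests

# Sketch of a tool-use request against OpenRouter's OpenAI-compatible
# chat completions endpoint. Endpoint URL, model slug, and the
# 'reasoning' object are assumptions, not taken from this page.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]

payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "What's the weather in Tokyo right now?"}
    ],
    # OpenAI-style function calling: the model may answer with a
    # tool_calls entry instead of plain text.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Configurable reasoning depth, surfaced by OpenRouter as a
    # request-level reasoning-effort setting on supported models.
    "reasoning": {"effort": "high"},
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
message = resp.json()["choices"][0]["message"]

# If the model chose the tool, its arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```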
Related models: alternatives with similar or better quality but different tradeoffs, such as lower cost per token, lower latency, or higher benchmark scores at a similar price point.
[Charts: performance against other models from the same creator; results across benchmarks; cost efficiency across all models; performance trends and benchmark-run counts over time]
Get started with this model using OpenRouter:
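A minimal sketch, assuming the OpenAI Python SDK (v1+) pointed at OpenRouter's documented base URL; the `openai/gpt-oss-120b` slug and `OPENROUTER_API_KEY` variable are placeholders to adjust for your account.

```python
import os

from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint, so the standard SDK
# works once pointed at OpenRouter's base URL (an assumption based on
# OpenRouter's published quickstart conventions).
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[
        {"role": "user", "content": "Explain MXFP4 quantization in two sentences."}
    ],
)
print(completion.choices[0].message.content)
```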