All Models

o3-mini


OpenAI o3-mini is a cost-efficient language model optimized for STEM reasoning, excelling in science, mathematics, and coding. It supports the `reasoning_effort` parameter, which can be set to "high", "medium" (the default), or "low" to control how long the model thinks. OpenRouter also offers the model slug `openai/o3-mini-high`, which defaults the parameter to "high". The model supports key developer capabilities including function calling, structured outputs, and streaming, though it does not include vision processing. It demonstrates significant improvements over its predecessor: expert testers preferred its responses 56% of the time and noted a 39% reduction in major errors on complex questions. At medium reasoning effort, o3-mini matches the performance of the larger o1 model on challenging reasoning evaluations such as AIME and GPQA, while maintaining lower latency and cost.
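A minimal sketch of how the `reasoning_effort` parameter fits into an OpenAI-style chat-completions request body (the helper function name is illustrative; the JSON schema assumed here is the standard `model`/`messages` shape these providers expose):

```python
import json


def build_o3_mini_request(prompt: str, effort: str = "medium") -> str:
    """Build a chat-completions JSON body for o3-mini.

    `effort` controls the model's thinking time and accepts
    "high", "medium" (the default), or "low".
    """
    if effort not in ("high", "medium", "low"):
        raise ValueError(f"invalid reasoning_effort: {effort!r}")
    body = {
        "model": "openai/o3-mini",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }
    return json.dumps(body)


# Using the slug "openai/o3-mini-high" instead is equivalent to
# pinning reasoning_effort to "high" on "openai/o3-mini".
```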

Providers 8
Released Dec 20, 2024
Input Modalities text
Output Modalities text
Use Case coding

Available Providers (8)

Provider                  Model ID        Input Cost  Output Cost  Context  Max Output
Poe                       openai/o3-mini  $0.99/MTok  $4.00/MTok   200K     100K
Cloudflare AI Gateway     openai/o3-mini  $1.10/MTok  $4.40/MTok   200K     100K
Azure Cognitive Services  o3-mini         $1.10/MTok  $4.40/MTok   200K     100K
Abacus                    o3-mini         $1.10/MTok  $4.40/MTok   200K     100K
Vercel AI Gateway         openai/o3-mini  $1.10/MTok  $4.40/MTok   200K     100K
OpenAI                    o3-mini         $1.10/MTok  $4.40/MTok   200K     100K
Jiekou.AI                 o3-mini         $1.10/MTok  $4.40/MTok   131.1K   131.1K
Azure                     o3-mini         $1.10/MTok  $4.40/MTok   200K     100K
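The per-million-token prices above scale linearly with usage, so estimating the cost of a request is a single expression. A minimal sketch (the function name is illustrative; the default rates are the $1.10/$4.40 per MTok listed for most providers in the table):

```python
def estimate_cost(
    input_tokens: int,
    output_tokens: int,
    input_per_mtok: float = 1.10,
    output_per_mtok: float = 4.40,
) -> float:
    """Estimate USD cost of one o3-mini request at per-MTok rates."""
    return (
        input_tokens * input_per_mtok + output_tokens * output_per_mtok
    ) / 1_000_000


# e.g. a 10K-token prompt with a 2K-token completion:
# estimate_cost(10_000, 2_000) ≈ $0.0198
```

Note that reasoning tokens are billed as output tokens, so higher `reasoning_effort` settings increase the effective output cost of a request.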

Capabilities

Reasoning
Tool Calling
Attachments
Structured Output