OpenAI o4-mini

OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks such as AIME 2025 (99.5% when given Python access) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini achieves high accuracy on STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well suited to high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay, often completing in under a minute.
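To make the structured-output capability concrete, here is a minimal sketch of a request body for an OpenAI-compatible chat completions endpoint. The payload shape follows the public API's `response_format` / `json_schema` convention, but the prompt and schema are illustrative assumptions, and no request is actually sent.

```python
import json

# Illustrative request body for an OpenAI-compatible /chat/completions call.
# The model ID matches the provider table below; prompt and schema are
# assumptions for this sketch only.
request_body = {
    "model": "o4-mini",
    "messages": [
        {
            "role": "user",
            "content": "Extract the city and country from: 'Berlin, Germany'.",
        }
    ],
    # Structured Output: constrain the reply to a JSON Schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        },
    },
}

print(json.dumps(request_body, indent=2))
```

With `strict: True`, a conforming provider guarantees the completion parses as JSON matching the schema, which removes the need for retry-and-reparse loops around free-form output.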

Providers 4
Released Apr 16, 2025
Input Modalities text, image, pdf
Output Modalities text
Task Use coding

Available Providers (4)

Provider        Model ID        Input Cost    Output Cost   Context  Max Output
GitHub Models   openai/o4-mini  $0.00/MTok    $0.00/MTok    200K     100K
Helicone        o4-mini         $1.10/MTok    $4.40/MTok    200K     100K
Kilo Gateway    openai/o4-mini  $1.10/MTok    $4.40/MTok    200K     100K
NanoGPT         openai/o4-mini  $1.10/MTok    $4.40/MTok    200K     100K
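The per-MTok prices above translate directly into a per-request cost estimate. The helper below is a hypothetical sketch (not part of any provider SDK) that applies the $1.10/MTok input and $4.40/MTok output rates listed for the paid providers:

```python
# Hypothetical helper: estimate request cost from the per-MTok prices in the
# provider table (defaults use the $1.10 / $4.40 rates listed above).
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_per_mtok: float = 1.10,
                  output_per_mtok: float = 4.40) -> float:
    """Return the USD cost for a single request given token counts."""
    return (input_tokens * input_per_mtok
            + output_tokens * output_per_mtok) / 1_000_000

# Example: a request with 10K input tokens and 2K output tokens.
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0198
```

Note that output tokens cost 4x as much as input tokens at these rates, so long completions (including reasoning-heavy responses) dominate the bill even when prompts are large.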

Capabilities

Reasoning
Tool Calling
Attachments
Structured Output