
GPT-4.1-nano


For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT‑4.1 series. Despite its small size it delivers strong performance: a 1 million token context window, 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding, all higher than GPT‑4o mini. It is ideal for tasks such as classification and autocompletion.
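As a sketch of the classification use case mentioned above, the snippet below builds a chat-completion request for gpt-4.1-nano using the OpenAI Python SDK's parameter shape. The label set, prompt wording, and `max_tokens`/`temperature` choices are illustrative assumptions, not part of this listing.

```python
# Sketch: sentiment classification with gpt-4.1-nano via the OpenAI SDK.
# The label set and prompt wording here are illustrative assumptions.

LABELS = ["positive", "negative", "neutral"]

def build_classification_request(text: str) -> dict:
    """Build the keyword arguments for client.chat.completions.create()."""
    return {
        "model": "gpt-4.1-nano",
        "messages": [
            {"role": "system",
             "content": f"Classify the user's text as one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
        "max_tokens": 5,   # a single label needs very few tokens
        "temperature": 0,  # deterministic output suits classification
    }

params = build_classification_request("The checkout flow is fast and painless.")
# To send it (requires an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(**params).choices[0].message.content
```

Keeping the request as a plain dict makes the same payload reusable across the providers listed below, most of which expose an OpenAI-compatible endpoint.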

Providers: 10
Released: Apr 14, 2025
Input Modalities: text, image, pdf
Output Modalities: text
Task Use: coding

Available Providers (10)

| Provider | Model ID | Input Cost | Output Cost | Context | Max Output |
|---|---|---|---|---|---|
| GitHub Models | openai/gpt-4.1-nano | $0.00/MTok | $0.00/MTok | 128K | 16.4K |
| Poe | openai/gpt-4.1-nano | $0.09/MTok | $0.36/MTok | 1.0M | 32.8K |
| Azure Cognitive Services | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| AIHubMix | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| Abacus | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| 302.AI | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| Vercel AI Gateway | openai/gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| OpenAI | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| NanoGPT | openai/gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
| Azure | gpt-4.1-nano | $0.10/MTok | $0.40/MTok | 1.0M | 32.8K |
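The per-million-token prices above translate directly into a cost estimate per request. A minimal sketch, assuming the $0.10/MTok input and $0.40/MTok output rates charged by most of the providers listed:

```python
# Sketch: estimating request cost from the standard per-MTok prices above
# ($0.10/MTok input, $0.40/MTok output); other providers' rates differ.

INPUT_PER_MTOK = 0.10   # USD per 1,000,000 input tokens
OUTPUT_PER_MTOK = 0.40  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# A full 1.0M-token context plus a maximum 32.8K-token response:
print(round(estimate_cost(1_000_000, 32_800), 4))  # → 0.1131
```

Even a maxed-out request stays near eleven cents, which is what makes the model practical for high-volume classification workloads.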

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output
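Since structured output is listed as a capability, the sketch below shows one way to request it, following OpenAI's `json_schema` response-format shape. The ticket-triage schema itself is an invented example, not something from this listing.

```python
# Sketch: a structured-output request for gpt-4.1-nano, following OpenAI's
# json_schema response_format shape. The triage schema is an invented example.

response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "ticket_triage",
        "strict": True,  # constrain the model to exactly this schema
        "schema": {
            "type": "object",
            "properties": {
                "category": {"type": "string",
                             "enum": ["bug", "feature", "question"]},
                "priority": {"type": "integer", "minimum": 1, "maximum": 5},
            },
            "required": ["category", "priority"],
            "additionalProperties": False,
        },
    },
}

request = {
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user",
                  "content": "Triage this ticket: app crashes on login"}],
    "response_format": response_format,
}
# With an API key, the model's reply would then parse as JSON matching the schema.
```

Setting `strict` and `additionalProperties: False` keeps the output machine-parseable, which pairs well with a small, cheap model used as a pipeline component.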