Gemini 2.5 Flash Lite

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
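Enabling thinking for a single request can be sketched as follows. This builds an OpenRouter-style chat-completions body with the `reasoning` parameter described in the docs linked above; the exact field names (`enabled`, `max_tokens`) follow OpenRouter's reasoning-tokens documentation and should be checked against the current API reference.

```python
import json

# Sketch of a request body that switches thinking on for Gemini 2.5
# Flash Lite. Thinking is off by default for this model, so it must be
# enabled explicitly; capping the reasoning budget bounds latency/cost.
payload = {
    "model": "google/gemini-2.5-flash-lite",
    "messages": [
        {"role": "user", "content": "Plan a 3-step refactor of a legacy parser."}
    ],
    "reasoning": {
        "enabled": True,      # opt in to thinking
        "max_tokens": 1024,   # reasoning-token budget (assumption: tune per task)
    },
}

body = json.dumps(payload)
print(body)
```

The serialized body would then be POSTed to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <API key>` header.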

Providers: 10
Released: Jun 17, 2025
Input Modalities: pdf, image, text, audio, video
Output Modalities: text
Task Use: coding

Available Providers (10)

| Provider | Model ID | Input Cost | Output Cost | Context | Max Output |
|---|---|---|---|---|---|
| Poe | google/gemini-2.5-flash-lite | $0.07/MTok | $0.28/MTok | 1.0M | 64K |
| Jiekou.AI | gemini-2.5-flash-lite | $0.09/MTok | $0.36/MTok | 1.0M | 65.5K |
| ZenMux | google/gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 64K |
| OpenRouter | google/gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| Vertex | gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| Google | gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| SAP AI Core | gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| Vercel AI Gateway | google/gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| NanoGPT | gemini-2.5-flash-lite | $0.10/MTok | $0.40/MTok | 1.0M | 65.5K |
| Qiniu | gemini-2.5-flash-lite | — | — | 1.0M | 64K |
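A quick way to read the per-MTok pricing above: cost scales linearly with token counts, with output tokens billed at a higher rate than input tokens. A minimal sketch at the $0.10/$0.40 per MTok tier (the rate most providers in the table charge):

```python
# Back-of-the-envelope cost estimate at the $0.10 input / $0.40 output
# per-MTok tier from the provider table above.
INPUT_PER_MTOK = 0.10   # USD per 1M input tokens
OUTPUT_PER_MTOK = 0.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at this pricing tier."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# Example: a 50K-token prompt with a 10K-token completion.
cost = request_cost(50_000, 10_000)
print(f"${cost:.4f}")  # $0.0090
```

At these rates, even reasoning-heavy workloads stay inexpensive, which is the main appeal of the Lite tier.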

Capabilities

Reasoning
Tool Calling
Attachments
Structured Output
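Of these, structured output is the one that changes the request shape. A minimal sketch, assuming OpenRouter forwards an OpenAI-style `response_format` with a JSON Schema (the schema and field names here are illustrative, not taken from this page):

```python
import json

# Sketch of a structured-output request: the model is constrained to
# return JSON matching the supplied schema. Schema contents are a
# made-up example; verify the response_format shape against the docs.
request_body = {
    "model": "google/gemini-2.5-flash-lite",
    "messages": [
        {"role": "user", "content": "Extract the city and country from: 'Berlin, Germany'."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        },
    },
}

print(json.dumps(request_body, indent=2))
```

With `strict` schemas the completion can be parsed with a plain `json.loads` instead of defensive string handling.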