# Mercury 2
Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, it produces and refines multiple tokens in parallel, achieving more than 1,000 tokens/sec on standard GPUs. Mercury 2 is more than 5x faster than leading speed-optimized LLMs such as Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. It supports tunable reasoning levels, a 128K context window, native tool use, and schema-aligned JSON output, and it is OpenAI API compatible. Mercury 2 is built for coding workflows where latency compounds, for real-time voice and search, and for agent loops. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury-2).
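Because the API is OpenAI compatible, existing OpenAI client code can be pointed at Mercury 2 by swapping the base URL and model name. Below is a minimal sketch using the official `openai` Python package; the base URL `https://api.inceptionlabs.ai/v1` and the `INCEPTION_API_KEY` environment variable are assumptions for illustration, while the model ID `mercury-2` comes from the provider table below.

```python
# Minimal sketch: calling Mercury 2 through an OpenAI-compatible endpoint.
# The base URL and the INCEPTION_API_KEY env var are assumptions for
# illustration; check the provider docs for the actual values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.inceptionlabs.ai/v1",  # assumed endpoint
    api_key=os.environ["INCEPTION_API_KEY"],     # assumed env var name
)

response = client.chat.completions.create(
    model="mercury-2",  # model ID from the provider table below
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a one-line Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```

Any client or framework that speaks the OpenAI chat completions protocol should work the same way once the base URL and key are swapped in.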
## Available Providers (3)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | mercury-2 | $0.25/MTok | $0.75/MTok | 128K | 50K | |
| | inception/mercury-2 | $0.25/MTok | $0.75/MTok | 128K | 50K | |
| | inception/mercury-2 | $0.25/MTok | $0.75/MTok | 128K | 128K | |
## Capabilities
- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output (see the sketch after this list)
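The Structured Output capability corresponds to the schema-aligned JSON output mentioned in the description. Below is a hedged sketch using the OpenAI-style `response_format` parameter; whether Mercury 2's endpoint accepts the `json_schema` variant is an assumption, and the schema and field names are invented for illustration.

```python
# Sketch of schema-aligned JSON output via the OpenAI-style response_format
# parameter. Whether the endpoint accepts the json_schema variant (vs. plain
# json_object) is an assumption; the schema itself is invented for this example.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.inceptionlabs.ai/v1",  # assumed endpoint, as above
    api_key=os.environ["INCEPTION_API_KEY"],
)

# Illustrative schema: a structured summary of a code change.
schema = {
    "name": "commit_summary",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "files_touched": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "files_touched"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="mercury-2",
    messages=[{"role": "user", "content": "Summarize this diff as a commit: <diff here>"}],
    response_format={"type": "json_schema", "json_schema": schema},
)

print(json.loads(response.choices[0].message.content))
```

Constraining the output to a schema like this is what makes the model practical inside agent loops, where the response is parsed by code rather than read by a person.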