All Models

Qwen/Qwen3-235B-A22B-Thinking-2507

qwen Reasoning Tool Calling Open Weights Structured Output

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context window of up to 262,144 tokens. This "thinking-only" variant strengthens structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It always operates in a dedicated reasoning mode (its output closes the chain of thought with a </think> tag) and is designed for high-token outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-weight variant in the Qwen3-235B series, surpassing many closed models on structured reasoning use cases.
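Since providers typically expose this model through an OpenAI-compatible chat completions endpoint, a request might be constructed as sketched below. This is a minimal illustration, not provider documentation: the endpoint URL, authentication, and the exact set of supported parameters vary by provider and should be checked against their docs; only the model ID and the 81,920-token output figure come from this page.

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint (endpoint URL and parameter support are provider-specific).
payload = {
    # Model ID as listed for SiliconFlow in the table below.
    "model": "Qwen/Qwen3-235B-A22B-Thinking-2507",
    "messages": [
        {
            "role": "user",
            "content": "Prove that the sum of two even integers is even.",
        }
    ],
    # The thinking-only variant emits its chain of thought before a closing
    # </think> tag, so a generous output budget is advisable; the card cites
    # outputs of up to 81,920 tokens.
    "max_tokens": 81920,
}

# Serialized body that would be POSTed to the provider's endpoint.
body = json.dumps(payload)
```

Because the reasoning trace counts toward output tokens, setting `max_tokens` too low can truncate the answer before the model finishes thinking.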

Providers: 3
Released: Jul 25, 2025
Input Modalities: text
Output Modalities: text
Task Use: coding

Available Providers (3)

| Provider | Model ID | Input Cost | Output Cost | Context | Max Output |
| --- | --- | --- | --- | --- | --- |
| Kilo Gateway | qwen/qwen3-235b-a22b-thinking-2507 | $0.11/MTok | $0.60/MTok | 262.1K | 262.1K |
| SiliconFlow | Qwen/Qwen3-235B-A22B-Thinking-2507 | $0.13/MTok | $0.60/MTok | 262K | 262K |
| SiliconFlow (China) | Qwen/Qwen3-235B-A22B-Thinking-2507 | $0.13/MTok | $0.60/MTok | 262K | 262K |
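The per-million-token prices above translate to per-request cost in a straightforward way. A small sketch of that arithmetic, using the SiliconFlow rates from the table ($0.13/MTok input, $0.60/MTok output); the token counts are made-up illustration values:

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_mtok: float, output_per_mtok: float) -> float:
    """Estimate the USD cost of one request from per-million-token prices."""
    return (input_tokens / 1_000_000) * input_per_mtok \
         + (output_tokens / 1_000_000) * output_per_mtok

# Example: 10,000 input tokens and 2,000 output tokens at SiliconFlow rates.
cost = request_cost_usd(10_000, 2_000, 0.13, 0.60)
print(round(cost, 4))  # → 0.0025
```

Note that for a thinking model, the reasoning trace is billed as output tokens, so output cost often dominates even for short final answers.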

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output