Qwen3-235B-A22B

Tags: qwen · Reasoning · Tool Calling · Open Weights · Structured Output

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
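The thinking/non-thinking switch described above is typically exposed on OpenAI-compatible servers (e.g. vLLM- or SGLang-style deployments) through a `chat_template_kwargs` field. A minimal sketch of building such a request payload; the model ID is provider-specific and the exact field name should be checked against each provider's docs:

```python
# Hedged sketch: toggling Qwen3's thinking mode in an OpenAI-compatible
# chat-completion payload. `chat_template_kwargs` is the mechanism used by
# vLLM/SGLang-style servers; support may vary by provider.

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completion payload with thinking mode on or off."""
    return {
        "model": "qwen3-235b-a22b",  # provider-specific model ID (assumption)
        "messages": [{"role": "user", "content": prompt}],
        # Soft switch: enables or disables the model's reasoning trace.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

fast = build_request("Summarize this paragraph.", thinking=False)
deep = build_request("Prove that sqrt(2) is irrational.", thinking=True)
```

Qwen3 also honors `/think` and `/no_think` directives placed directly in the prompt, which some providers may support even when the template kwarg is not passed through.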

Providers: 8
Released: Dec 1, 2024
Input Modalities: text
Output Modalities: text
Task Use: coding
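The 32K-native to 131K extended context mentioned above corresponds to a RoPE scaling factor of 131072 / 32768 = 4. A sketch of the YaRN configuration fragment, using Hugging Face-style `rope_scaling` field names; verify the exact keys against the model card before use:

```python
# YaRN context extension: the scaling factor is the target context length
# divided by the native context length.
native_ctx = 32_768
target_ctx = 131_072
factor = target_ctx / native_ctx  # 4.0

# Config fragment in the style of Hugging Face `rope_scaling` (assumed field
# names; check the published Qwen3 model card for the authoritative form).
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": native_ctx,
}
```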

Available Providers (8)

Provider        | Model ID                 | Input Cost | Output Cost | Context | Max Output
Nvidia          | qwen/qwen3-235b-a22b     | $0.00/MTok | $0.00/MTok  | 131.1K  | 8.2K
iFlow           | qwen3-235b               | $0.00/MTok | $0.00/MTok  | 128K    | 32K
NovitaAI        | qwen/qwen3-235b-a22b-fp8 | $0.20/MTok | $0.80/MTok  | 41.0K   | 20K
Jiekou.AI       | qwen/qwen3-235b-a22b-fp8 | $0.20/MTok | $0.80/MTok  | 41.0K   | 20K
Alibaba (China) | qwen3-235b-a22b          | $0.29/MTok | $1.15/MTok  | 131.1K  | 16.4K
302.AI          | qwen3-235b-a22b          | $0.29/MTok | $2.86/MTok  | 128K    | 16.4K
Chutes          | Qwen/Qwen3-235B-A22B     | $0.30/MTok | $1.20/MTok  | 41.0K   | 41.0K
Alibaba         | qwen3-235b-a22b          | $0.70/MTok | $2.80/MTok  | 131.1K  | 16.4K
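Since prices above are quoted per million tokens (MTok), the cost of a single request is each token count times its per-MTok rate, divided by 1,000,000. A small worked example using the NovitaAI row ($0.20 in, $0.80 out):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_per_mtok: float, out_per_mtok: float) -> float:
    """Dollar cost of one request at per-million-token prices."""
    return (input_tokens * in_per_mtok + output_tokens * out_per_mtok) / 1_000_000

# NovitaAI pricing from the table: $0.20/MTok input, $0.80/MTok output.
# 10K input + 2K output tokens -> $0.002 + $0.0016 = $0.0036
cost = request_cost(10_000, 2_000, 0.20, 0.80)
```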

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output
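The Structured Output capability is commonly driven through the OpenAI-compatible `response_format` field with a JSON schema. A hedged sketch of such a payload; exact support and field names vary by provider, and the schema shown (`location`, `city`, `country`) is purely illustrative:

```python
# Hedged sketch: requesting schema-constrained JSON output via the
# OpenAI-compatible `response_format` field. Provider support varies.

payload = {
    "model": "qwen3-235b-a22b",  # provider-specific model ID (assumption)
    "messages": [{"role": "user", "content": "Extract the city and country."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",  # illustrative schema name
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
            },
        },
    },
}
```

With this in place, a conforming server constrains decoding so the completion parses as JSON matching the schema, rather than relying on prompt instructions alone.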