Qwen/Qwen3.5-122B-A10B
Qwen3.5-122B-A10B is a natively multimodal vision-language model built on a hybrid architecture that combines linear attention with a sparse mixture-of-experts (MoE) design, improving inference efficiency. In overall performance it is second only to Qwen3.5-397B-A17B: its text capabilities significantly outperform Qwen3-235B-2507, and its visual capabilities surpass Qwen3-VL-235B.
Available Providers (2)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | qwen/qwen3.5-122b-a10b | $0.26/MTok | $2.08/MTok | 262.1K | 65.5K | |
| | Qwen/Qwen3.5-122B-A10B | $0.29/MTok | $2.32/MTok | 262.1K | 65.5K | |
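To make the per-million-token (MTok) rates above concrete, here is a minimal cost-estimate sketch using the first provider row's rates; the token counts in the example are hypothetical, not from this page:

```python
# Rates from the first provider row above: $0.26/MTok input, $2.08/MTok output.
INPUT_RATE_PER_MTOK = 0.26
OUTPUT_RATE_PER_MTOK = 2.08

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_MTOK
            + output_tokens * OUTPUT_RATE_PER_MTOK) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion (hypothetical sizes).
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")  # → $0.006760
```

Note that output tokens dominate the bill at these rates: each output token costs eight times as much as an input token.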
Capabilities

- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output
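The tool-calling and structured-output capabilities listed above are typically exercised through an OpenAI-compatible chat completions request. The sketch below only builds the request payload; the tool name and the assumption that the provider accepts this exact schema are hypothetical, not confirmed by this page:

```python
import json

payload = {
    "model": "qwen/qwen3.5-122b-a10b",  # model ID from the provider table above
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    # Tool calling: declare a function the model may choose to invoke.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name for illustration
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Structured output: ask for a JSON-formatted response.
    "response_format": {"type": "json_object"},
}
print(json.dumps(payload, indent=2))
```

A model that supports these features responds either with a `tool_calls` entry naming the declared function or with JSON conforming to the requested format.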