Qwen: Qwen3.5-Flash
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts design, improving inference efficiency. Compared with the Qwen3 series, these models deliver a marked step up on both pure-text and multimodal tasks while keeping response latency low, balancing inference speed against overall quality.
Available Providers (1)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | qwen/qwen3.5-flash-02-23 | $0.10/MTok | $0.40/MTok | 1M | 65.5K | |
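The listed rates ($0.10 per million input tokens, $0.40 per million output tokens) make per-request cost easy to estimate. A minimal sketch (the function name and the example token counts are illustrative, not from this page):

```python
# Per-token rates derived from the table above:
# $0.10/MTok input, $0.40/MTok output.
INPUT_RATE = 0.10 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.40 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10K-token prompt with a 2K-token completion:
print(round(estimate_cost(10_000, 2_000), 4))  # 0.0018
```

At these rates, even a full 1M-token context costs only $0.10 on the input side, which is the main appeal of a Flash-tier model.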
Capabilities
- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output
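To illustrate how the tool-calling and structured-output capabilities might appear in practice, here is a hypothetical request body, assuming an OpenAI-compatible chat-completions endpoint; the field names (`tools`, `response_format`) and the `lookup_city` function follow that convention and are assumptions, not something this page specifies:

```python
import json

# Illustrative request body for qwen/qwen3.5-flash-02-23, assuming an
# OpenAI-compatible chat-completions API. Field names are conventional,
# not confirmed by this listing.
payload = {
    "model": "qwen/qwen3.5-flash-02-23",
    "messages": [
        {"role": "user",
         "content": "Extract the city from: 'Flights to Osaka on Friday.'"}
    ],
    # Tool Calling: declare a function the model may choose to invoke.
    "tools": [{
        "type": "function",
        "function": {
            "name": "lookup_city",  # hypothetical tool name
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Structured Output: request a JSON-formatted response.
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

The payload is only constructed here, not sent; an actual call would go through whichever provider hosts the model ID above.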