GLM-4.5
GLM-4.5 is our latest flagship foundation model, purpose-built for agent-based applications. It uses a Mixture-of-Experts (MoE) architecture and supports a context length of up to 128K tokens. GLM-4.5 delivers significantly enhanced capabilities in reasoning, code generation, and agent alignment. It supports hybrid inference with two modes: a "thinking mode" for complex reasoning and tool use, and a "non-thinking mode" optimized for instant responses. Users can switch between them with the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
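A minimal sketch of toggling the two modes via a `reasoning` object with an `enabled` boolean in a chat-completions request body. The endpoint URL and model slug follow OpenRouter conventions, but treat the exact payload shape as an assumption to verify against the linked docs:

```python
import json

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload; reasoning.enabled selects
    GLM-4.5's "thinking mode" (True) or "non-thinking mode" (False)."""
    return {
        "model": "z-ai/glm-4.5",  # one provider slug from the table below
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": thinking},
    }

payload = build_request("Plan a three-step refactor of this module.", thinking=True)
print(json.dumps(payload, indent=2))

# To actually send it (requires an API key; sketch only):
# import urllib.request
# req = urllib.request.Request(
#     "https://openrouter.ai/api/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": "Bearer <KEY>",
#              "Content-Type": "application/json"},
# )
```

Setting `"enabled": False` requests the instant-response mode instead; leaving the field out falls back to the provider's default behavior.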
Available Providers (17)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| | ZhipuAI/GLM-4.5 | $0.00/MTok | $0.00/MTok | 131.1K | 98.3K | |
| | glm-4.5 | $0.00/MTok | $0.00/MTok | 131.1K | 98.3K | |
| | glm-4.5 | $0.00/MTok | $0.00/MTok | 131.1K | 98.3K | |
| | glm-4.5 | $0.29/MTok | $1.14/MTok | 128K | 98.3K | |
| | z-ai/glm-4.5 | $0.35/MTok | $1.54/MTok | 128K | 64K | |
| | accounts/fireworks/models/glm-4p5 | $0.55/MTok | $2.19/MTok | 131.1K | 131.1K | |
| | glm-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 98.3K | |
| | zai-org/GLM-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 98.3K | |
| | zai-org/GLM-4.5 | $0.60/MTok | $2.20/MTok | 128K | 4.1K | |
| | zai-org/glm-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 98.3K | |
| | zai-org/glm-4.5 | $0.60/MTok | $2.20/MTok | 128K | 8.2K | |
| | z-ai/glm-4.5 | $0.60/MTok | $2.20/MTok | 128K | 96K | |
| | glm-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 98.3K | |
| | zai/glm-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 131.1K | |
| | zai-org/glm-4.5 | $0.60/MTok | $2.20/MTok | 131.1K | 98.3K | |
| | glm-4.5 | $0.67/MTok | $2.46/MTok | 131.1K | 131.1K | |
| | glm-4.5 | — | — | 131.1K | 98.3K | |
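The prices above are quoted per million tokens (MTok), so the cost of a single call is a straightforward proration. A small sketch using the $0.60-in / $2.20-out rate that several providers in the table share (the token counts are hypothetical):

```python
# Catalog prices are dollars per million tokens (MTok).
MTOK = 1_000_000

def request_cost(input_tokens: int, output_tokens: int,
                 input_per_mtok: float, output_per_mtok: float) -> float:
    """Prorate per-MTok prices down to one request."""
    return (input_tokens * input_per_mtok
            + output_tokens * output_per_mtok) / MTOK

# e.g. 10k input + 2k output tokens at $0.60/$2.20 per MTok:
cost = request_cost(10_000, 2_000, 0.60, 2.20)
print(f"${cost:.4f}")  # $0.0104
```

The same function applied to the cheapest metered row ($0.29/$1.14) gives roughly a third of that, which is the kind of comparison the Input/Output Cost columns are meant to support.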
Capabilities
- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output