GLM-4.6V

GLM-4.6V is a large multimodal model designed for high-fidelity visual understanding and long-context reasoning across images, documents, and mixed media. It supports up to 128K tokens, processes complex page layouts and charts directly as visual inputs, and integrates native multimodal function calling to connect perception with downstream tool execution. The model also enables interleaved image-text generation and UI reconstruction workflows, including screenshot-to-HTML synthesis and iterative visual editing.
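As a sketch of what the multimodal function-calling flow described above can look like, the snippet below assembles an OpenAI-style chat payload that pairs an image input with a callable tool. The tool name (`record_chart_value`) and the exact payload shape are illustrative assumptions, not details taken from this page; check your chosen provider's API docs for the authoritative format.

```python
# Sketch: building a multimodal tool-calling request for GLM-4.6V.
# The tool definition is a hypothetical example; the message/tool layout
# follows the common OpenAI-compatible chat-completions shape that many
# of the providers listed below accept.
import json


def build_request(image_url: str, question: str) -> dict:
    """Assemble a chat payload with one image part, one text part,
    and a single tool the model may choose to call."""
    return {
        "model": "glm-4.6v",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool: record a value the model reads off a chart.
                    "name": "record_chart_value",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "label": {"type": "string"},
                            "value": {"type": "number"},
                        },
                        "required": ["label", "value"],
                    },
                },
            }
        ],
    }


payload = build_request("https://example.com/chart.png", "What is the peak value?")
print(json.dumps(payload, indent=2))
```

The same payload, POSTed to a provider's chat-completions endpoint, is where the model's visual perception (reading the chart) connects to downstream tool execution.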

Providers: 12
Released: Sep 30, 2025
Input Modalities: text, image, video, pdf
Output Modalities: text
Task Use: coding

Available Providers (12)

| Provider | Model ID | Input Cost | Output Cost | Context | Max Output |
|---|---|---|---|---|---|
| Zhipu AI Coding Plan | glm-4.6v | $0.00/MTok | $0.00/MTok | 128K | 32.8K |
| Z.AI Coding Plan | glm-4.6v | $0.00/MTok | $0.00/MTok | 128K | 32.8K |
| ZenMux | z-ai/glm-4.6v | $0.14/MTok | $0.42/MTok | 200K | 64K |
| AIHubMix | glm-4.6v | $0.14/MTok | $0.41/MTok | 128K | 32.8K |
| 302.AI | glm-4.6v | $0.14/MTok | $0.43/MTok | 128K | 32.8K |
| Z.AI | glm-4.6v | $0.30/MTok | $0.90/MTok | 128K | 32.8K |
| Deep Infra | zai-org/GLM-4.6V | $0.30/MTok | $0.90/MTok | 204.8K | 131.1K |
| NovitaAI | zai-org/glm-4.6v | $0.30/MTok | $0.90/MTok | 131.1K | 32.8K |
| Zhipu AI | glm-4.6v | $0.30/MTok | $0.90/MTok | 128K | 32.8K |
| Vercel AI Gateway | zai/glm-4.6v | $0.30/MTok | $0.90/MTok | 128K | 24K |
| Chutes | zai-org/GLM-4.6V | $0.30/MTok | $0.90/MTok | 131.1K | 65.5K |
| Poe | novita/glm-4.6v | — | — | 131K | 32.8K |
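The rates in the table are quoted per million tokens ($/MTok), so a request's cost is each token count times its rate, divided by one million. A small check using the Z.AI row ($0.30 input, $0.90 output):

```python
# Estimate request cost from the per-million-token rates in the table above.
def cost_usd(input_tokens: int, output_tokens: int,
             in_rate: float, out_rate: float) -> float:
    """Rates are in USD per million tokens ($/MTok)."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Z.AI row: $0.30/MTok input, $0.90/MTok output.
# A request with 100K input tokens and 10K output tokens:
print(round(cost_usd(100_000, 10_000, 0.30, 0.90), 4))  # → 0.039
```

At the $0.14/$0.42 tier (e.g. ZenMux), the same request would cost less than half as much, which is the main axis of difference between otherwise identical listings.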

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output