
DeepSeek V3.2 TEE


DeepSeek-V3.2 is a large language model designed to combine high computational efficiency with strong reasoning and agentic tool-use performance. It introduces DeepSeek Sparse Attention (DSA), a fine-grained sparse attention mechanism that reduces training and inference cost while preserving quality in long-context scenarios. A scalable reinforcement learning post-training framework further improves reasoning, with reported performance in the GPT-5 class, and the model has demonstrated gold-medal results on the 2025 IMO and IOI. V3.2 also uses a large-scale agentic task synthesis pipeline to better integrate reasoning into tool-use settings, boosting compliance and generalization in interactive environments. Users can control reasoning behaviour via the `enabled` boolean in the `reasoning` request parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
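As a minimal sketch, a chat-completions request body with reasoning enabled might look like the following. The model slug used here is an assumption for illustration; substitute the model ID from the provider table below, and send the body to the OpenRouter chat completions endpoint with your API key.

```python
import json

# Sketch of a request body enabling reasoning with the default config.
# The `reasoning: {"enabled": true}` shape follows the OpenRouter docs
# linked above; the model slug below is a hypothetical placeholder.
payload = {
    "model": "deepseek/deepseek-v3.2-tee",  # assumed slug for illustration
    "messages": [
        {"role": "user", "content": "Summarize the DSA attention mechanism."}
    ],
    # Enables reasoning tokens with the provider's default configuration.
    "reasoning": {"enabled": True},
}

body = json.dumps(payload)
print(body)
```

The serialized `body` would then be POSTed to `https://openrouter.ai/api/v1/chat/completions` with an `Authorization: Bearer <API_KEY>` header; omitting the `reasoning` object leaves the provider's default behaviour in place.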

Providers: 2
Released: Dec 1, 2025
Input Modalities: text
Output Modalities: text
Task Use: coding

Available Providers (2)

| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| Chutes | deepseek-ai/DeepSeek-V3.2-TEE | $0.28/MTok | $0.42/MTok | 131.1K | 65.5K | |
| NanoGPT | TEE/deepseek-v3.2 | $0.50/MTok | $1.00/MTok | 164K | 65.5K | |

Capabilities

- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output