Mistral Nemo

mistral-nemo · Reasoning · Tool Calling · Open Weights · Structured Output

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.

Providers: 9
Released: Jul 1, 2024
Input Modalities: text, image
Output Modalities: text
Task Use: coding

Available Providers (9)

Provider | Model ID | Input Cost | Output Cost | Context | Max Output
--- | --- | --- | --- | --- | ---
GitHub Models | mistral-ai/mistral-nemo | $0.00/MTok | $0.00/MTok | 128K | 8.2K
NovitaAI | mistralai/mistral-nemo | $0.04/MTok | $0.17/MTok | 60.3K | 16K
Vercel AI Gateway | mistral/mistral-nemo | $0.04/MTok | $0.17/MTok | 60.3K | 16K
NanoGPT | mistralai/Mistral-Nemo-Instruct-2407 | $0.10/MTok | $0.12/MTok | 16.4K | 8.2K
Azure Cognitive Services | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K
Mistral | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K
Azure | mistral-nemo | $0.15/MTok | $0.15/MTok | 128K | 128K
STACKIT | neuralmagic/Mistral-Nemo-Instruct-2407-FP8 | $0.49/MTok | $0.71/MTok | 128K | 8.2K
Helicone | mistral-nemo | $20.00/MTok | $40.00/MTok | 128K | 16.4K
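The per-MTok rates above translate directly into per-request cost: tokens divided by one million, times the listed rate, summed for input and output. A minimal sketch (the function name and the 100k/2k token counts are illustrative; the $0.15/$0.15 rates are Mistral's from the table):

```python
def request_cost(input_tokens, output_tokens, input_per_mtok, output_per_mtok):
    """Estimate the dollar cost of one request from per-million-token rates."""
    return (input_tokens / 1_000_000) * input_per_mtok + \
           (output_tokens / 1_000_000) * output_per_mtok

# Mistral's listed rates: $0.15/MTok input, $0.15/MTok output
cost = request_cost(100_000, 2_000, 0.15, 0.15)
print(f"${cost:.4f}")  # → $0.0153
```

Comparing providers this way matters: the same prompt costs nothing on GitHub Models' free tier and roughly 130x more via Helicone than via Mistral directly.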

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output
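Since the model supports tool calling, a request typically declares the available functions as JSON-schema tool definitions. A minimal sketch of such a request body, assuming an OpenAI-compatible chat-completions schema (which the Mistral API follows for tool calling); the `get_weather` tool and its parameters are illustrative, not part of any real API:

```python
import json

# Build a tool-calling request body; the model may respond with a
# tool_call naming "get_weather" instead of a plain text answer.
payload = {
    "model": "mistral-nemo",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative tool name
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload)
```

The serialized `body` would then be POSTed to whichever provider endpoint you use from the table above, with that provider's model ID substituted in the `model` field.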