Mistral: Mixtral 8x7B Instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model by Mistral AI, fine-tuned by Mistral for chat and instruction following. Each layer incorporates 8 experts (feed-forward networks), for a total of 47 billion parameters, with two experts active per token. #moe
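The sparse routing described above can be sketched as follows. This is a minimal, illustrative top-2 gating function, not Mistral's implementation: the constants mirror Mixtral's 8 experts and 2 active experts per token, but the logits and helper names (`route`, `softmax`) are hypothetical.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # Mixtral has 8 feed-forward experts per layer
TOP_K = 2         # each token is routed to its 2 highest-scoring experts

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits):
    """Pick the top-k experts for one token and renormalize their gate weights."""
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:TOP_K]
    gates = softmax([router_logits[i] for i in chosen])
    return list(zip(chosen, gates))

# One token's router logits (illustrative random values).
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
for expert, gate in route(logits):
    print(f"expert {expert}: gate weight {gate:.3f}")
```

Only the chosen experts run for a given token, which is why a 47B-parameter model has the per-token compute cost of a much smaller dense model.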
Available Providers (1)
| Provider | Model ID | Input Cost | Output Cost | Context (tokens) | Max Output (tokens) | Docs |
|---|---|---|---|---|---|---|
| | mistralai/mixtral-8x7b-instruct | $0.54/MTok | $0.54/MTok | 32.8K | 16.4K | |
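A quick sketch of what the per-million-token pricing means for a single request. The rates come from the table above; the request sizes and the `request_cost` helper are hypothetical, for illustration only.

```python
# $0.54 per million tokens, for both input and output (from the pricing table).
INPUT_RATE = 0.54 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.54 / 1_000_000  # dollars per output token

def request_cost(input_tokens, output_tokens):
    """Dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: 10,000 input tokens, 2,000 output tokens.
cost = request_cost(10_000, 2_000)
print(f"${cost:.6f}")  # → $0.006480
```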
Capabilities
- Reasoning
- Tool Calling
- Attachments
- Open Weights
- Structured Output