# MiniMax M1
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456B total parameters and 45.9B (roughly 10%) activated per token, the model is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.
## Available Providers (6)
| Provider | Model ID | Input Cost | Output Cost | Context | Max Output | Docs |
|---|---|---|---|---|---|---|
| — | MiniMax-M1 | $0.13/MTok | $1.25/MTok | 1M | 128K | — |
| — | MiniMax-M1 | $0.14/MTok | $1.33/MTok | 1M | 131.1K | — |
| — | minimax/minimax-m1 | $0.40/MTok | $2.20/MTok | 1M | 40K | — |
| — | minimaxai/minimax-m1-80k | $0.55/MTok | $2.20/MTok | 1M | 40K | — |
| — | minimaxai/minimax-m1-80k | $0.55/MTok | $2.20/MTok | 1M | 40K | — |
| — | MiniMax-M1 | — | — | 1M | 80K | — |
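Most hosted deployments of M1 expose an OpenAI-compatible chat completions endpoint, so switching providers usually means changing only the base URL, API key, and model ID. The sketch below is a minimal example, not any specific provider's documented setup: the `BASE_URL`, the `PROVIDER_API_KEY` environment variable, and the endpoint itself are assumptions, while the model ID and per-MTok prices are copied from the `minimax/minimax-m1` row above.

```python
import os

from openai import OpenAI  # pip install openai

# Assumed values: substitute your provider's base URL, API key, and
# model ID from the table above. The endpoint URL here is hypothetical;
# the prices mirror the minimax/minimax-m1 row ($0.40 in / $2.20 out per MTok).
BASE_URL = "https://api.example-provider.com/v1"
MODEL_ID = "minimax/minimax-m1"
INPUT_USD_PER_MTOK = 0.40
OUTPUT_USD_PER_MTOK = 2.20

client = OpenAI(base_url=BASE_URL, api_key=os.environ["PROVIDER_API_KEY"])

response = client.chat.completions.create(
    model=MODEL_ID,
    messages=[
        {"role": "user", "content": "Prove that the sum of two odd integers is even."},
    ],
    max_tokens=4096,  # must stay within this row's Max Output limit (40K)
)

print(response.choices[0].message.content)

# Estimate the request cost from the token usage the API reports back.
usage = response.usage
cost_usd = (
    usage.prompt_tokens * INPUT_USD_PER_MTOK
    + usage.completion_tokens * OUTPUT_USD_PER_MTOK
) / 1_000_000
print(f"~${cost_usd:.6f} for {usage.prompt_tokens} in / {usage.completion_tokens} out")
```

Since input pricing for the same weights spans roughly $0.13 to $0.55 per MTok across rows, this kind of per-request cost check is mostly useful for comparing providers on long-context workloads, where the prompt side dominates.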