
LiquidAI: LFM2-8B-A1B

Open Weights

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.
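The total-vs-active parameter split comes from Mixture-of-Experts routing: a router picks a few experts per token, so only that subset of weights participates in each forward pass. A minimal sketch of top-k MoE routing (toy dimensions and NumPy matrices, not the actual LFM2 architecture):

```python
import numpy as np

def moe_forward(x, experts_w, router_w, top_k=2):
    """Route a token vector to top_k experts; only their weights are used.

    With 8 experts and top_k=2, only a quarter of the expert parameters
    are touched per token -- the same idea behind 8.3B total / ~1.5B active.
    """
    logits = x @ router_w                      # one score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the selected experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                       # softmax over selected experts only
    return sum(g * (x @ experts_w[i]) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d = 16
experts = [rng.standard_normal((d, d)) for _ in range(8)]   # 8 toy experts
router = rng.standard_normal((d, 8))                        # toy router
x = rng.standard_normal(d)                                  # one token embedding
y = moe_forward(x, experts, router)
```

The compute saving is what makes the model viable on edge hardware: memory holds all experts, but each token's FLOPs scale with the active subset.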

Providers: 1
Released: Oct 20, 2025
Input Modalities: text
Output Modalities: text
Task Use: coding

Available Providers (1)

Provider: Kilo Gateway
Model ID: liquid/lfm2-8b-a1b
Input Cost: $0.01/MTok
Output Cost: $0.02/MTok
Context: 32.8K
Max Output: 32.8K
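At these rates, request cost is simple arithmetic over token counts. A small sketch (the constants are the listed prices; the helper name and example token counts are illustrative):

```python
# Listed Kilo Gateway rates for liquid/lfm2-8b-a1b, in USD per million tokens.
INPUT_PER_MTOK = 0.01
OUTPUT_PER_MTOK = 0.02

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-MTok rates."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# e.g. a 20K-token prompt with a 2K-token completion:
cost = request_cost(20_000, 2_000)  # 0.0002 + 0.00004 = 0.00024 USD
```

Note that both the prompt and the completion must fit within the 32.8K context window.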

Capabilities

Reasoning
Tool Calling
Attachments
Open Weights
Structured Output
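Gateways like this one typically expose an OpenAI-compatible chat endpoint, where structured output is requested via `response_format`. A hypothetical request payload (the endpoint URL and auth scheme are assumptions and omitted here; only the model ID comes from the table above):

```python
import json

# Hypothetical structured-output request body for an OpenAI-compatible
# chat completions endpoint; only the model ID is taken from this page.
payload = {
    "model": "liquid/lfm2-8b-a1b",
    "messages": [
        {"role": "user", "content": "List three edge devices as JSON."}
    ],
    "response_format": {"type": "json_object"},  # ask for structured output
}
body = json.dumps(payload)
```

The same payload shape extends to tool calling by adding a `tools` array, per the OpenAI-compatible convention.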