
2026 Changelog

Jan 2nd, 2026

Version 4.6.6

Compass API

LFM2-8B-A1B
A high-efficiency 8.3B-parameter hybrid Mixture-of-Experts (MoE) language model designed for fast, memory-efficient inference on phones, laptops, and embedded systems, with support for both CPU- and GPU-based deployments. The model is deployed on the Core42 cloud in the UAE region.
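As a starting point for trying the model, the sketch below builds an OpenAI-style chat-completions request for LFM2-8B-A1B. This is a minimal illustration, not official usage: the base URL, environment-variable names, and the assumption that the Compass API accepts an OpenAI-compatible request shape are all hypothetical; only the model name comes from the entry above. Check the Compass API reference for the actual endpoint and model identifier.

```python
import json
import os
import urllib.request

# Assumed values: base URL and env var names are illustrative, not official.
BASE_URL = os.environ.get("COMPASS_BASE_URL", "https://api.example-compass.ai/v1")
API_KEY = os.environ.get("COMPASS_API_KEY", "")


def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload.

    The model identifier "LFM2-8B-A1B" is taken from the changelog entry;
    the identifier exposed by the API may differ.
    """
    return {
        "model": "LFM2-8B-A1B",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }


payload = build_chat_request("Summarize mixture-of-experts inference in one sentence.")
print(json.dumps(payload, indent=2))

# The network call is only attempted when a key is configured, so the
# payload-building part above can be inspected without credentials.
if API_KEY:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the model is small and MoE-sparse (8.3B total parameters with a much smaller active set per token), the same prompt format also applies to local CPU or GPU deployments; only the base URL would change.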