IBM's Granite team has released two new multilingual embedding models, one with 97 million parameters and one with 311 million. Both are built on the ModernBERT architecture, support over 200 languages, and offer a 32,000-token context window. They target applications such as retrieval, search, and similarity, and are supported out of the box on Hugging Face's Text Embeddings Inference platform.
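The retrieval and similarity use cases mentioned above reduce to embedding texts as vectors and ranking candidates by cosine similarity. A minimal sketch of that ranking step, using toy vectors in place of real model outputs (the summary does not give exact model IDs, so no specific checkpoint is assumed):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec, doc_vecs):
    # Return (index, score) pairs sorted by similarity to the query, best first.
    scores = [(i, cosine_similarity(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 3-dimensional vectors standing in for real embedding outputs.
query = [1.0, 0.0, 0.0]
docs = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.7, 0.7, 0.0]]
ranking = rank_documents(query, docs)
```

In practice the vectors would come from an embedding model served locally or via a Text Embeddings Inference endpoint; the ranking logic stays the same.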
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Expands open-source multilingual embedding options, potentially improving performance for search and retrieval tasks.
RANK_REASON Release of new models from a non-frontier lab with open-source support.