Mistral: Mistral NeMo, multilingual text generation with a 128k context window
A 12B-parameter model with a 128k-token context window, built by Mistral AI in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, and other languages.
Unique: The 12B parameter count combined with a 128k context window hits a sweet spot between inference cost and capability: far smaller than Mistral Large 2 (123B) yet with the same context length, enabling long-context reasoning at lower computational cost. The collaboration with NVIDIA suggests optimization for NVIDIA hardware and inference stacks (CUDA, TensorRT).
vs others: Offers a context window roughly 8x longer than GPT-3.5 Turbo's 16k, matching GPT-4 Turbo's 128k at lower inference cost, while maintaining multilingual support across 9+ languages without model-switching overhead.
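To give a concrete sense of what a 128k-token window accommodates, here is a minimal sketch of a pre-flight budget check. The 4-characters-per-token ratio is an assumed English-text heuristic, not the model's actual tokenizer; real token counts vary by language and content:

```python
# Rough check of whether a document fits Mistral NeMo's 128k-token
# context window while leaving headroom for the generated response.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # assumed average for English prose; actual tokenization differs

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the estimated token count leaves room for generation."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# ~490k characters (roughly a 300-page book) fits; ~600k does not.
print(fits_in_context("x" * 490_000))  # True
print(fits_in_context("x" * 600_000))  # False
```

In practice the exact count should come from the model's own tokenizer rather than a character heuristic, but a cheap estimate like this is useful for routing documents between short- and long-context models.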