OpenAI: gpt-oss-120b (Model): 25/100 via "multilingual understanding and generation"
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized...
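To put the sparsity in perspective, here is a back-of-the-envelope sketch using only the figures above; the ~2 FLOPs per active parameter per token rule of thumb is an assumption, not an OpenAI-published number:

```python
# Rough sketch of why sparse MoE activation cuts per-token compute,
# using the figures quoted above (117B total, 5.1B active).
TOTAL_PARAMS = 117e9   # all experts combined
ACTIVE_PARAMS = 5.1e9  # parameters actually touched per forward pass

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active fraction per token: {active_fraction:.1%}")  # ~4.4%

# Assumed rule of thumb: ~2 FLOPs per active parameter per token.
# A dense 117B model would pay the full cost on every token.
dense_flops = 2 * TOTAL_PARAMS
moe_flops = 2 * ACTIVE_PARAMS
print(f"Approx. per-token compute reduction: {dense_flops / moe_flops:.0f}x")
```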
Unique: Trained on diverse multilingual corpora with a shared, language-agnostic embedding space. MoE routing lets experts specialize, so different experts can come to handle different language families (e.g., one expert leaning toward Romance languages, another toward Sino-Tibetan languages), supporting consistent quality across 50+ languages; see the routing sketch below.
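OpenAI has not published how gpt-oss-120b's router assigns experts, so the language-family split above should be read as a characterization rather than a documented rule. For illustration, here is a minimal sketch of the standard learned top-k gating used in MoE layers; the expert count, k, and dimensions are arbitrary assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    """Minimal top-k MoE gate: each token picks k experts by learned affinity.

    Illustrative only; expert count, k, and any language-family
    specialization are assumptions, not published gpt-oss-120b details.
    """
    def __init__(self, d_model: int = 512, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> per-token expert affinity scores
        logits = self.gate(x)                           # (tokens, num_experts)
        weights, experts = logits.topk(self.k, dim=-1)  # keep the k best experts
        weights = F.softmax(weights, dim=-1)            # normalize among chosen
        return experts, weights                         # routing decision per token

router = TopKRouter()
tokens = torch.randn(4, 512)  # e.g., 4 token embeddings from mixed-language text
experts, weights = router(tokens)
print(experts)  # which experts each token is routed to
```

Any specialization by language family, where it occurs, is an emergent property of training the gate, not a hand-coded assignment.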
vs others: Covers more languages than GPT-3.5 and delivers higher quality than other open-weight multilingual models, while undercutting GPT-4 on cost; sparse activation keeps per-token compute low, making multilingual inference faster.
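For completeness, a minimal multilingual-generation sketch with the open weights, assuming the checkpoint is published as openai/gpt-oss-120b on Hugging Face and enough GPU memory is available; the prompts and parameters are illustrative:

```python
# Sketch of multilingual generation via Hugging Face transformers.
# Assumes the "openai/gpt-oss-120b" checkpoint name and local GPU capacity.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    torch_dtype="auto",
    device_map="auto",  # shard across available GPUs
)

prompts = [
    "Translate to French: The weather is nice today.",
    "用中文解释什么是混合专家模型。",  # Chinese: "Explain what a mixture-of-experts model is."
]
for prompt in prompts:
    out = generate(prompt, max_new_tokens=64)
    print(out[0]["generated_text"])
```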