OpenAI: gpt-oss-120b | Model | 25/100 via "context-aware conversation with multi-turn memory"
gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized...
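As a rough illustration of the sparse activation mentioned above (only ~5.1B of 117B parameters used per forward pass), here is a minimal top-k MoE routing sketch. The expert count, k value, and dimensions are toy assumptions for illustration, not gpt-oss-120b's actual configuration.

```python
# Minimal sketch of sparse MoE routing: a gating network scores all
# experts, but only the top-k run per token, so most parameters stay
# inactive on any given forward pass. All sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical; not gpt-oss-120b's real expert count
TOP_K = 2         # hypothetical number of active experts per token
D_MODEL = 16      # toy hidden size

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w              # gating score for every expert
    top = np.argsort(logits)[-TOP_K:]  # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Only TOP_K of NUM_EXPERTS weight matrices are touched here, which
    # is why active parameters are far fewer than total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (16,)
```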
Unique: Trained on multi-turn conversation data using OpenAI's proprietary RLHF approach; its MoE expert routing specializes in conversation-context tracking and entity resolution, enabling natural multi-turn conversations without an explicit context-management framework (see the usage sketch below).
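Because the weights are open, the model is typically served behind an OpenAI-compatible endpoint (for example, a local vLLM or Ollama server). The sketch below shows the multi-turn pattern the entry describes: context is carried simply by resending the message history each turn, and the model resolves entities across turns. The base URL, API key, and model identifier are placeholder assumptions for a local deployment, not fixed values.

```python
# Sketch of multi-turn use against an OpenAI-compatible server.
# Base URL, API key, and model name depend on your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM/Ollama endpoint
    api_key="not-needed-for-local",       # placeholder; local servers often ignore it
)

history = [{"role": "user", "content": "My dog Rex keeps digging holes. Why?"}]
reply = client.chat.completions.create(model="gpt-oss-120b", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# Follow-up turn: "him" should resolve to Rex from the earlier turn,
# with no context-management framework beyond resending the history.
history.append({"role": "user", "content": "How do I get him to stop?"})
reply = client.chat.completions.create(model="gpt-oss-120b", messages=history)
print(reply.choices[0].message.content)
```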
vs others: Better multi-turn coherence than GPT-3.5 at a lower cost than GPT-4; faster than Claude thanks to sparse activation; more consistent context tracking than other open-source models, owing to supervised fine-tuning on conversation data.