via “multi-task embedding model evaluation across 8+ task types”
Embedding model benchmark — 8 task types, 112 languages; the de facto standard for comparing embedding models.
Unique: Implements a polymorphic task system in which each task type (Retrieval, Classification, etc.) inherits from AbsTask and defines its own evaluation logic, metrics, and dataset handling. This has let MTEB grow from the original 8 task types to 1000+ evaluation tasks across 10+ task types without duplicating evaluation code. Task metadata (language, domain, license) is standardized, enabling filtering and cross-cutting analysis.
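The polymorphic task design can be sketched as follows. This is a minimal illustration, not the real mteb API: only the AbsTask name comes from the source, and the metadata fields, task classes, and `run_benchmark` helper here are hypothetical stand-ins for the library's richer equivalents.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

# Hypothetical metadata record; the real library uses a richer TaskMetadata model.
@dataclass
class TaskMetadata:
    name: str
    task_type: str
    languages: list = field(default_factory=list)
    domain: str = "general"
    license: str = "unknown"

class AbsTask(ABC):
    """Base class: each task type supplies its own evaluation logic and metrics."""
    metadata: TaskMetadata

    @abstractmethod
    def evaluate(self, model) -> dict:
        """Run the task-specific protocol and return {metric_name: score}."""

class RetrievalTask(AbsTask):
    def __init__(self):
        self.metadata = TaskMetadata("ToyRetrieval", "Retrieval", ["eng"])

    def evaluate(self, model) -> dict:
        # A real retrieval task would embed queries and corpus, then score
        # with a ranking metric such as nDCG@10; placeholder score here.
        return {"ndcg_at_10": 0.0}

class ClassificationTask(AbsTask):
    def __init__(self):
        self.metadata = TaskMetadata("ToyClassification", "Classification", ["eng", "deu"])

    def evaluate(self, model) -> dict:
        # A real classification task would fit a probe on embeddings; placeholder here.
        return {"accuracy": 0.0}

def run_benchmark(tasks, model, languages=None):
    """Metadata-driven filtering plus a uniform evaluation loop:
    no per-task-type special cases in the runner itself."""
    results = {}
    for task in tasks:
        if languages and not set(languages) & set(task.metadata.languages):
            continue  # standardized metadata enables cross-cutting filtering
        results[task.metadata.name] = task.evaluate(model)
    return results

tasks = [RetrievalTask(), ClassificationTask()]
# Filtering by language keeps only tasks whose metadata lists "deu".
scores = run_benchmark(tasks, model=None, languages=["deu"])
```

The runner never inspects concrete task types, which is what allows new task types to be added without touching shared evaluation code.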
vs others: Broader task coverage (8+ task types vs. single-task benchmarks like STS or BEIR) and a standardized task interface enable fair comparison across heterogeneous evaluation scenarios, whereas most embedding benchmarks evaluate retrieval only.