lm-evaluation-harness · Framework · 43/100
via “multi-backend language model instantiation with unified interface”
EleutherAI's evaluation framework — 200+ benchmarks; powers the Open LLM Leaderboard.
Unique: Uses a pluggable registry system (lm_eval/api/registry.py) where each backend implements a common LM interface with automatic BOS token handling, tokenizer management, and context window validation. Unlike frameworks that require separate evaluation scripts per backend, this centralizes backend logic while preserving backend-specific optimizations (e.g., vLLM's paged attention).
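A minimal sketch of what plugging into that registry looks like, assuming the `register_model` decorator and `LM` base class exposed by recent harness versions (lm_eval >= 0.4); the `EchoLM` backend and its behavior are hypothetical, for illustration only:

```python
# Hypothetical backend registered under the name "echo". The three methods
# below are the request types the harness dispatches to every backend.
from lm_eval.api.model import LM
from lm_eval.api.registry import register_model


@register_model("echo")  # makes "echo" selectable as a --model name
class EchoLM(LM):
    """Toy backend implementing the common LM interface."""

    def loglikelihood(self, requests):
        # Each request carries (context, continuation) in .args; return one
        # (logprob, is_greedy) pair per request. A constant stands in for a
        # real model's scoring here.
        return [(-1.0, False) for _ in requests]

    def loglikelihood_rolling(self, requests):
        # Perplexity-style scoring over whole documents.
        return [-1.0 for _ in requests]

    def generate_until(self, requests):
        # Free-form generation: each request carries (context, gen_kwargs).
        # Echo the context back instead of sampling from a model.
        return [req.args[0] for req in requests]
```

Because the registry centralizes dispatch, this class becomes usable by every task in the suite without any per-backend evaluation script.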
vs others: Supports more backends (25+) than alternatives like LM-Eval-Lite or custom evaluation scripts, and provides a unified loglikelihood + generation interface that alternatives often split across separate tools.
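To make the unified-interface claim concrete, here is a hedged sketch of driving a loglikelihood-scored task and a generative task through the same entry point, using the harness's `simple_evaluate` API; the model and task names are illustrative, and availability depends on the installed version:

```python
# Both task styles run through one call; the registry resolves "hf" to the
# Hugging Face backend, and each task declares which request type it needs.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                 # backend name from the registry
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai", "gsm8k"],          # loglikelihood + generative tasks
)
print(results["results"])
```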