Multi-size model selection with speed-accuracy tradeoff optimization
Provides six model variants (tiny 39M, base 74M, small 244M, medium 769M, large 1550M, turbo 809M parameters) with different VRAM requirements (~1-10GB) and relative inference speeds (~10x down to 1x versus large). Each size trades accuracy for speed: tiny runs ~10x faster than large but with ~5-10% higher WER (word error rate), while large provides the best accuracy at a ~10GB VRAM cost. The turbo variant (809M params) is an optimized version of large-v3 that runs ~8x faster with minimal accuracy loss, but it is not trained for translation.
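A minimal selection sketch in Python against the openai-whisper package (the spec table mirrors the README's published figures; pick_model, the VRAM-budget heuristic, and the audio.mp3 path are illustrative assumptions, not part of the library):

```python
import whisper  # pip install openai-whisper

# Approximate specs from the Whisper README:
# parameter count (millions), required VRAM (GB), relative speed vs large.
MODEL_SPECS = {
    "tiny":   {"params_m": 39,   "vram_gb": 1,  "rel_speed": 10.0},
    "base":   {"params_m": 74,   "vram_gb": 1,  "rel_speed": 7.0},
    "small":  {"params_m": 244,  "vram_gb": 2,  "rel_speed": 4.0},
    "medium": {"params_m": 769,  "vram_gb": 5,  "rel_speed": 2.0},
    "turbo":  {"params_m": 809,  "vram_gb": 6,  "rel_speed": 8.0},
    "large":  {"params_m": 1550, "vram_gb": 10, "rel_speed": 1.0},
}

def pick_model(vram_budget_gb: float, need_translation: bool = False) -> str:
    """Pick the most accurate model that fits the VRAM budget.

    Uses parameter count as a proxy for accuracy, and skips turbo
    when translation is required (turbo is transcription-only).
    """
    candidates = [
        (spec["params_m"], name)
        for name, spec in MODEL_SPECS.items()
        if spec["vram_gb"] <= vram_budget_gb
        and not (need_translation and name == "turbo")
    ]
    if not candidates:
        raise ValueError(f"No model fits in {vram_budget_gb} GB of VRAM")
    return max(candidates)[1]

name = pick_model(vram_budget_gb=6)           # -> "turbo"
model = whisper.load_model(name)              # downloads weights on first use
print(model.transcribe("audio.mp3")["text"])  # "audio.mp3" is a placeholder
```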
Unique: Discrete model size family with a published speed/accuracy/VRAM tradeoff matrix lets developers make an informed selection based on deployment constraints; the turbo variant is a genuine architectural optimization (large-v3 with the decoder pruned from 32 layers to 4, then fine-tuned) achieving an ~8x speedup with <5% accuracy loss, distinct from simply dropping to a smaller base model
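One way to verify that turbo is a pruned large-v3 rather than a separate small model is to inspect the loaded models' dimensions; a sketch assuming the openai-whisper package, where dims.n_text_layer is the decoder depth (note both downloads total several GB):

```python
import whisper

for name in ("large-v3", "turbo"):
    model = whisper.load_model(name)
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    dims = model.dims
    # The audio encoder depth is unchanged; the text decoder stack is pruned.
    print(f"{name}: {params_m:.0f}M params, "
          f"{dims.n_audio_layer} encoder layers, {dims.n_text_layer} decoder layers")
# Expected: large-v3 reports 32 decoder layers, turbo only 4.
```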
vs alternatives: Offers more transparent tradeoff options than the Whisper API (which exposes a single model) or competitors like Deepgram (proprietary size selection); being open source, it permits local benchmarking on your own hardware rather than relying on vendor performance claims
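A rough local benchmarking sketch for comparing sizes on your own hardware (sample.wav and the size list are placeholders to adjust for your files and VRAM; the warm-up pass keeps one-time initialization out of the timing):

```python
import time
import whisper

AUDIO = "sample.wav"  # placeholder: any local audio file

for name in ("tiny", "small", "turbo"):  # trim to what your VRAM allows
    model = whisper.load_model(name)
    model.transcribe(AUDIO)              # warm-up pass
    start = time.perf_counter()
    result = model.transcribe(AUDIO)
    elapsed = time.perf_counter() - start
    print(f"{name:6s} {elapsed:6.1f}s  {result['text'][:60]!r}")
```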