Batch translation with automatic sequence padding and attention masking
Translation model. 670,292 downloads.
Unique: Marian's encoder-decoder architecture enables efficient batch processing of the encoder stage (all sequences in parallel) while maintaining sequential decoding, a design choice that balances memory efficiency with throughput. Automatic padding and masking are handled transparently by HuggingFace Transformers, abstracting low-level tensor manipulation.
vs others: Batch processing achieves an 8-12x throughput improvement over single-sentence inference on GPU, outperforming API-based services (Google Translate, AWS Translate), which charge per request and add network latency, though it requires upfront infrastructure investment.
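To illustrate what the tokenizer handles transparently, here is a minimal dependency-free sketch of the padding and attention-mask construction described above. The function name `pad_and_mask` and the pad token id are illustrative, not part of the HuggingFace API:

```python
def pad_and_mask(token_batches, pad_id=0):
    """Pad variable-length token-id sequences to a common length and
    build the matching attention mask (1 = real token, 0 = padding).

    This mirrors, in simplified form, what a HuggingFace tokenizer
    does when called with padding=True.
    """
    max_len = max(len(seq) for seq in token_batches)
    input_ids, attention_mask = [], []
    for seq in token_batches:
        n_pad = max_len - len(seq)
        input_ids.append(seq + [pad_id] * n_pad)
        attention_mask.append([1] * len(seq) + [0] * n_pad)
    return input_ids, attention_mask

ids, mask = pad_and_mask([[5, 9, 2], [7, 2]])
# ids  -> [[5, 9, 2], [7, 2, 0]]
# mask -> [[1, 1, 1], [1, 1, 0]]
```

The attention mask is what lets the encoder process all sequences of a batch in parallel without the padding positions influencing the real tokens.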