mDeBERTa-v3-base-mnli-xnli (Model 42/100), via "cross-lingual natural language inference with entailment scoring"
zero-shot-classification model by MoritzLaurer. 237,978 downloads.
Unique: Trained jointly on MNLI (English, 433K examples) and XNLI (15 languages, 75K examples), so it can perform zero-shot cross-lingual entailment without language-specific fine-tuning. DeBERTa-v3's disentangled attention mechanism explicitly separates content and position information, which improves cross-lingual generalization over standard transformer architectures.
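A minimal sketch of the model in use, via the Hugging Face transformers zero-shot pipeline; the repo id is assumed to be MoritzLaurer/mDeBERTa-v3-base-mnli-xnli, and the Spanish input text and English labels are illustrative. Each candidate label is turned into an entailment hypothesis and scored against the input:

```python
from transformers import pipeline

# The zero-shot pipeline wraps the model's NLI head: each candidate label
# becomes a hypothesis ("This example is about {label}.") scored for
# entailment against the input text.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)

# Spanish input, English labels: cross-lingual, no per-language fine-tuning.
text = "Angela Merkel es una política alemana y líder de la CDU."
labels = ["politics", "economy", "entertainment", "environment"]

result = classifier(text, candidate_labels=labels, multi_label=False)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

Because the labels are just hypotheses, they can be written in any of the covered languages and swapped freely at inference time.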
vs others: Achieves 2-5% higher accuracy on the XNLI multilingual benchmark than mBERT and XLM-R, a gain attributed to DeBERTa's attention design, and unlike adapter-based approaches it needs no language-specific adapters, making it faster to deploy to new languages.
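For reference, a sketch of the raw entailment scoring that the pipeline performs under the hood, again assuming the MoritzLaurer/mDeBERTa-v3-base-mnli-xnli repo id; the German premise and English hypothesis are illustrative, and the class labels are read from the model config rather than hardcoded:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Cross-lingual NLI pair: German premise, English hypothesis.
premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU."
hypothesis = "Emmanuel Macron is the President of France."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the NLI classes; label names come from the model's own config.
probs = torch.softmax(logits[0], dim=-1)
labels = [model.config.id2label[i] for i in range(probs.shape[0])]
print(dict(zip(labels, probs.tolist())))
```

Because the premise and hypothesis can be in different languages, the same three-way entailment head serves every covered language pair with no adapters or per-language heads.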