ko-sroberta-multitask (Model 45/100) via "fine-tuning and domain adaptation for Korean-specific tasks"
Sentence-similarity model. 1,763,322 downloads.
Unique: Leverages the sentence-transformers high-level fine-tuning API, which handles loss computation and gradient management automatically, so domain adaptation requires no low-level PyTorch code. Supports multiple loss functions (triplet, contrastive, multi-task) and automatic evaluation on a validation set, substantially reducing fine-tuning complexity compared with fine-tuning raw transformers models directly.
vs others: Requires 50-70% less code than fine-tuning raw Hugging Face transformers models, and includes automatic learning-rate scheduling, validation monitoring, and checkpoint management. When fine-tuned on 10K+ labeled examples, it achieves a 10-20% accuracy improvement on domain-specific Korean tasks over the base model, while being 3-5x faster to implement than custom contrastive-learning loops.