wav2vec2-large-xlsr-53-chinese-zh-cn: Mandarin Chinese speech-to-text transcription with cross-lingual transfer learning
Automatic speech recognition model. 1,993,708 downloads.
Unique: Uses XLSR-53 cross-lingual pretraining (unlabeled audio from 53 languages) rather than monolingual pretraining, enabling effective fine-tuning with limited Chinese labeled data (~50 hours). The wav2vec2 architecture masks spans of latent speech representations and learns them with a contrastive objective, then fine-tunes with a CTC head, generalizing better than traditional acoustic models or end-to-end CTC-only systems trained from scratch.
vs others: Outperforms Baidu DeepSpeech and Kaldi-based Chinese ASR systems on the Common Voice benchmark, owing to its transformer architecture and cross-lingual transfer; unlike commercial APIs (Baidu, iFlytek, Alibaba), it is freely available and deployable on-premise.