Capability
Interactive Model Playground With Parameter Tuning
20 artifacts provide this capability.
Top Matches
via "parameter-efficient fine-tuning with P-Tuning v2"
ChatGLM — Tsinghua's bilingual (Chinese-English) dialogue model.
Unique: Implements P-Tuning v2 as a first-class fine-tuning method, with an integrated training loop in the ptuning/ directory. It supports both discrete and continuous prompt optimization and schedules hyperparameters automatically rather than requiring manual tuning.
vs others: More memory-efficient than LoRA for ChatGLM (roughly 7 GB vs 9 GB of GPU memory) while maintaining comparable task performance. The prompt-based approach is also more interpretable than adapter-based methods when analyzing how fine-tuning changes model behavior.
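The memory savings come from training only a small set of continuous prompt vectors while the backbone stays frozen. A minimal PyTorch sketch of that idea, P-Tuning v2-style prefix vectors reparameterized through an MLP, is below; all names (`PrefixEncoder`, `pre_seq_len`) are illustrative, not ChatGLM's actual API:

```python
import torch
import torch.nn as nn

class PrefixEncoder(nn.Module):
    """Learnable continuous prompts: an embedding table passed through
    a small MLP, producing prefix vectors to prepend to the model input.
    Illustrative sketch only, not the ChatGLM ptuning/ implementation."""

    def __init__(self, pre_seq_len: int, hidden: int):
        super().__init__()
        # Fixed indices 0..pre_seq_len-1 select the learnable prompt rows.
        self.register_buffer("prefix_tokens", torch.arange(pre_seq_len))
        self.embedding = nn.Embedding(pre_seq_len, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        # Returns (batch, pre_seq_len, hidden) continuous prompt vectors.
        idx = self.prefix_tokens.unsqueeze(0).expand(batch_size, -1)
        return self.mlp(self.embedding(idx))

# Freeze a (toy) backbone; only the prefix encoder's parameters train.
backbone = nn.Linear(16, 16)
for p in backbone.parameters():
    p.requires_grad = False

prefix = PrefixEncoder(pre_seq_len=8, hidden=16)
prompts = prefix(batch_size=2)  # shape: (2, 8, 16)
trainable = sum(p.numel() for p in prefix.parameters() if p.requires_grad)
```

Because only `prefix`'s parameters receive gradients, optimizer state and gradient buffers cover a tiny fraction of the model, which is where the memory advantage over full fine-tuning (and, per the claim above, over LoRA for this model) originates.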