Capability
Multi-Task Zero-Shot Task Generalization Evaluation
11 artifacts provide this capability.
via “zero-shot and few-shot task generalization through in-context learning”
01.AI's bilingual 34B model with 200K context option.
Unique: Bilingual in-context learning enables cross-lingual few-shot adaptation: users can provide examples in English and apply the learned pattern to Chinese inputs, or vice versa.
vs. others: Few-shot performance is likely comparable to Llama 2 34B but inferior to GPT-3.5 and Claude, which demonstrate stronger in-context learning and few-shot generalization.
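The cross-lingual few-shot pattern described above can be sketched as a prompt-construction step: labeled English demonstrations are concatenated ahead of an unlabeled Chinese query, and the model is expected to transfer the demonstrated pattern across languages. This is a minimal illustration, not 01.AI's API; the helper name, field labels, and example texts are assumptions for demonstration.

```python
# Sketch: cross-lingual few-shot prompting (hypothetical helper, not a real 01.AI API).
# English labeled examples establish the task pattern; the query is Chinese,
# relying on the model's bilingual in-context learning to transfer the pattern.

def build_few_shot_prompt(examples, query, task="Sentiment"):
    """Concatenate labeled demonstrations followed by an unlabeled query."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\n{task}: {label}")
    # The final block leaves the label slot empty for the model to complete.
    blocks.append(f"Review: {query}\n{task}:")
    return "\n\n".join(blocks)

english_examples = [
    ("The battery life is fantastic.", "positive"),
    ("Screen cracked after one week.", "negative"),
]
chinese_query = "这款手机的摄像头非常出色。"  # "This phone's camera is excellent."

prompt = build_few_shot_prompt(english_examples, chinese_query)
print(prompt)
```

The resulting string would be sent to the model's completion endpoint; under the bilingual in-context learning claim, the model should answer "positive" by applying the pattern taught in English to the Chinese review.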