promptbench: Benchmark 30/100, via “evaluation-metrics-computation-with-task-specific-scoring”
PromptBench is a tool for scrutinizing and analyzing how large language models interact with various prompts. It provides convenient infrastructure for simulating **black-box** adversarial **prompt attacks** on models and evaluating their performance.
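To make the attack-then-evaluate loop concrete, here is a minimal self-contained sketch of a black-box robustness check: a character-swap perturbation (in the spirit of DeepWordBug-style attacks) applied to a prompt template, with accuracy measured before and after. All names here (`char_swap_attack`, `accuracy_under_prompt`, `model_fn`, the dataset shape) are hypothetical illustrations, not PromptBench's own API.

```python
import random

def char_swap_attack(prompt: str, rate: float = 0.1, seed: int = 0) -> str:
    """Perturb a prompt by swapping adjacent characters in randomly
    chosen words, a character-level black-box attack."""
    rng = random.Random(seed)
    words = prompt.split()
    if not words:
        return prompt
    n_attacked = max(1, int(len(words) * rate))
    for i in rng.sample(range(len(words)), n_attacked):
        w = words[i]
        if len(w) > 3:  # leave short words intact
            j = rng.randrange(len(w) - 1)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def accuracy_under_prompt(model_fn, prompt: str, dataset) -> float:
    """Black-box evaluation: only the model's text output is observed."""
    correct = 0
    for example in dataset:  # example: {"content": str, "label": str}
        pred = model_fn(prompt.format(content=example["content"]))
        correct += int(pred.strip().lower() == example["label"])
    return correct / len(dataset)

# Robustness gap = clean accuracy minus accuracy under the attacked prompt:
# clean    = accuracy_under_prompt(model_fn, prompt, dataset)
# attacked = accuracy_under_prompt(model_fn, char_swap_attack(prompt), dataset)
```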
Unique: Implements task-specific metric computation (classification, generation, reasoning) with proper edge-case handling and aggregation across datasets, rather than wrapping generic metrics. Supports both reference-based and reference-free metrics.
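A sketch of what task-specific scoring with edge-case handling can look like, under simplified assumptions; the function names (`cls_accuracy`, `gen_f1`, `reasoning_em`) are illustrative, not PromptBench's actual ones.

```python
from collections import Counter

def cls_accuracy(preds, labels):
    """Classification: exact label match, guarding against empty or
    mismatched inputs instead of raising or dividing by zero."""
    if not preds or len(preds) != len(labels):
        return 0.0
    return sum(p == l for p, l in zip(preds, labels)) / len(preds)

def gen_f1(prediction: str, reference: str) -> float:
    """Generation (reference-based): token-level F1, SQuAD-style."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)  # both empty counts as a match
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def reasoning_em(prediction: str, answer: str) -> float:
    """Reasoning: normalize the final answer, then exact-match it."""
    norm = lambda s: s.strip().lower().rstrip(".")
    return float(norm(prediction) == norm(answer))
```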
vs others: More comprehensive than generic metric libraries because it provides task-specific implementations that honor benchmark-specific requirements (e.g., GLUE metric computation, MMLU scoring). Integrates cleanly with the rest of the evaluation pipeline.
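For the benchmark-specific aggregation, something like the following GLUE-style roll-up, again a hypothetical sketch rather than the library's implementation: average a task's own metrics first (MRPC, for instance, reports both accuracy and F1), then macro-average across tasks.

```python
from statistics import mean

def aggregate_glue_style(per_task_scores: dict) -> dict:
    """Average each task's own metrics, then macro-average across tasks,
    mirroring how GLUE computes its overall score."""
    task_means = {task: mean(scores) for task, scores in per_task_scores.items()}
    return {"per_task": task_means, "overall": mean(task_means.values())}

# e.g. aggregate_glue_style({"sst2": [0.91], "mrpc": [0.85, 0.88]})
# -> {"per_task": {"sst2": 0.91, "mrpc": 0.865}, "overall": 0.8875}
```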