multi-category llm safety evaluation via multiple-choice questions
Evaluates LLM safety across 7 distinct categories (offensiveness, unfairness, physical health, mental health, illegal activities, ethics, privacy) using 11,435 curated multiple-choice questions available in both Chinese and English. The benchmark constructs category-specific prompts, sends them to target models, extracts predicted answers from model responses, maps ground-truth label indices to option letters (0->A, 1->B, 2->C, 3->D), and compares predictions against them to compute per-category accuracy and an overall safety score.
Unique: Combines 11,435 questions across 7 safety categories with explicit Chinese-English parallel coverage and a filtered subset (test_zh_subset.json) for sensitive keyword handling, enabling systematic cross-lingual safety assessment. Ships category-stratified few-shot examples (5 per category), so both zero-shot and five-shot evaluation paradigms are supported within a single framework.
vs alternatives: Larger and more category-diverse than single-domain safety benchmarks (e.g., ToxiGen for toxicity only), and explicitly supports Chinese alongside English, addressing a gap in multilingual safety evaluation infrastructure.
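The scoring step above can be sketched in a few lines, assuming predictions arrive as option letters and labels as 0-3 indices (the helper names are illustrative, not from the benchmark code):

```python
# Ground-truth labels are stored as indices; models answer with letters.
LABEL_TO_LETTER = {0: "A", 1: "B", 2: "C", 3: "D"}

def accuracy(predicted_letters, label_indices):
    """Fraction of predictions matching the letter form of each label."""
    correct = sum(
        pred == LABEL_TO_LETTER[label]
        for pred, label in zip(predicted_letters, label_indices)
    )
    return correct / len(label_indices)

print(accuracy(["A", "C", "B"], [0, 2, 3]))  # 2 of 3 correct -> 0.666...
```

The same function works per category or overall; stratification is just a matter of which subset of predictions is passed in.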
zero-shot and few-shot evaluation mode switching
Supports two distinct evaluation paradigms: zero-shot (questions presented directly without examples) and five-shot (5 category-specific examples provided before each test question). The framework conditionally constructs prompts using dev_en.json/dev_zh.json few-shot examples or omits them entirely, allowing researchers to measure how in-context learning affects safety performance. Prompt templates are language-aware and can be customized per model to improve answer extraction accuracy.
Unique: Provides curated few-shot examples stratified by safety category (5 per category) rather than random sampling, ensuring balanced representation of each harm type. Prompt templates are explicitly customizable per model (e.g., evaluate_baichuan.py shows Baichuan-specific extraction logic), acknowledging that different architectures require different prompting strategies.
vs alternatives: More systematic than ad-hoc few-shot selection; category-stratified examples ensure consistent coverage of all safety dimensions rather than potentially biased random sampling.
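The zero-shot/five-shot switch reduces to conditional prompt assembly. The exact template in the repo may differ per model and language, so treat the formatting below as an assumption:

```python
def format_question(q, with_answer=False):
    """Render one question dict ({'question', 'options', 'answer'}) as text."""
    opts = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(q["options"]))
    text = f"Question: {q['question']}\n{opts}\nAnswer:"
    if with_answer:
        text += f" ({chr(65 + q['answer'])})"  # append gold letter for shots
    return text

def build_prompt(question, shots=None):
    """shots: the 5 same-category dev examples for five-shot mode, or None."""
    parts = [format_question(ex, with_answer=True) for ex in (shots or [])]
    parts.append(format_question(question))
    return "\n\n".join(parts)

q = {"question": "Is sharing someone's address without consent safe?",
     "options": ["Yes", "No"], "answer": 1}
print(build_prompt(q))             # zero-shot: just the question
print(build_prompt(q, shots=[q]))  # one in-context example (five in practice)
```

Passing `shots=None` gives the zero-shot prompt; passing the 5 dev examples for the question's category gives the five-shot prompt, keeping both paradigms in one code path.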
bilingual dataset management and language-specific evaluation
Manages parallel Chinese and English datasets (test_en.json, test_zh.json, dev_en.json, dev_zh.json) with a filtered Chinese subset (test_zh_subset.json, 300 questions per category) for sensitive keyword handling. Data acquisition uses Hugging Face hosting with dual download methods (shell script download_data.sh, or Python download_data.py using the datasets library). Each question maintains a consistent structure (id, category, question, options, answer) across languages, enabling direct cross-lingual comparison of model safety performance.
Unique: Provides both full Chinese dataset (test_zh.json) and a filtered subset (test_zh_subset.json with 300 questions per category) explicitly designed to avoid sensitive keywords, addressing practical concerns about evaluating on content that may trigger platform policies. Dual download methods (shell script and Python) reduce friction for different user workflows.
vs alternatives: More comprehensive multilingual coverage than English-only benchmarks; filtered subset is a pragmatic addition for teams needing to evaluate without policy violations.
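Because the schema (id, category, question, options, answer) is shared across languages, a small validation pass can sanity-check any of the JSON files after download. The check below is my own helper, not part of the repo:

```python
from collections import Counter

REQUIRED_KEYS = {"id", "category", "question", "options", "answer"}

def validate(questions):
    """Verify every record carries the shared cross-lingual schema,
    then return per-category question counts."""
    for q in questions:
        missing = REQUIRED_KEYS - q.keys()
        if missing:
            raise ValueError(f"question {q.get('id')} missing keys: {missing}")
    return Counter(q["category"] for q in questions)

# Inline stand-in for json.load(open("test_en.json")):
sample = [
    {"id": 0, "category": "privacy", "question": "...", "options": ["Yes", "No"], "answer": 1},
    {"id": 1, "category": "ethics", "question": "...", "options": ["Yes", "No"], "answer": 0},
]
print(validate(sample))  # Counter({'privacy': 1, 'ethics': 1})
```

Running this on test_zh_subset.json should, per the description above, report 300 questions for each of the 7 categories.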
category-stratified safety metric computation and leaderboard submission
Computes accuracy metrics per safety category (offensiveness, unfairness, physical health, mental health, illegal activities, ethics, privacy) and aggregates them into an overall safety score. Supports standardized leaderboard submission via JSON format (question_id -> predicted_answer). Metrics are computed by comparing predicted answers (extracted from model responses) against ground-truth labels, enabling fine-grained analysis of which safety dimensions a model excels at or fails on. Results can be submitted to the llmbench.ai/safety leaderboard for public comparison.
Unique: Stratifies metrics across 7 explicit safety categories rather than computing a single aggregate score, enabling fine-grained diagnosis of safety weaknesses. Leaderboard integration (llmbench.ai/safety) provides public benchmarking infrastructure, creating accountability and enabling direct model comparison.
vs alternatives: Category-level metrics provide more actionable insights than single-number safety scores; leaderboard integration drives standardization and reproducibility across the research community.
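The submission file can be sketched as a plain id-to-answer JSON object. Whether the leaderboard expects option letters or 0-3 indices as values is not stated here, so confirm against the official submission instructions before relying on this:

```python
import json

def build_submission(predictions, path="submission.json"):
    """predictions: dict mapping question id -> predicted answer.
    Writes and returns the JSON payload; value format (letter vs index)
    should be checked against the leaderboard's submission spec."""
    payload = {str(qid): ans for qid, ans in predictions.items()}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False)
    return payload

print(build_submission({0: "A", 1: "C"}, path="demo_submission.json"))
# {'0': 'A', '1': 'C'}
```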
model evaluation pipeline with answer extraction and validation
Implements a standardized evaluation pipeline (exemplified in evaluate_baichuan.py) that constructs prompts, sends them to a target model via API or local inference, extracts predicted answers from model responses using model-specific parsing logic, and validates extracted answers against the expected option set (letters A-D, corresponding to label indices 0-3). The pipeline handles model-specific response formats and can be customized per model architecture. Supports batch evaluation of all 11,435 questions with error handling and logging.
Unique: Provides a concrete, model-specific evaluation implementation (evaluate_baichuan.py) that can be forked and adapted, rather than just a dataset. Acknowledges that different models require different answer extraction logic and provides a template for customization. Supports both zero-shot and few-shot evaluation within the same pipeline.
vs alternatives: More practical than dataset-only benchmarks because it includes reference evaluation code; reduces barrier to entry for teams without evaluation infrastructure.
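Answer extraction is necessarily model-specific (evaluate_baichuan.py encodes Baichuan's response style); a generic fallback might just scan for the first standalone option letter. This sketch is an assumption, not the repo's actual parsing logic:

```python
import re

def extract_answer(response: str):
    """Return the first standalone A-D letter in a model response, or None
    if no option letter is found (e.g. the model refused to answer).
    Real extractors match the model's own phrasing, such as 'Answer: (B)'."""
    match = re.search(r"\b([A-D])\b", response)
    return match.group(1) if match else None

print(extract_answer("The correct answer is (B) because ..."))  # B
print(extract_answer("I cannot answer that."))                  # None
```

Returning None for unparseable responses lets the pipeline log and count them separately rather than silently scoring them as wrong.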
seven-category safety taxonomy and question curation
Defines a structured taxonomy of 7 safety categories (offensiveness, unfairness, physical health, mental health, illegal activities, ethics, privacy) and curates 11,435 diverse multiple-choice questions mapped to these categories. Each question is designed to test whether a model correctly handles or refuses harmful content within that category. The taxonomy is explicit and mutually exclusive, enabling fine-grained safety analysis. Questions are curated to be challenging and representative of real-world safety concerns.
Unique: Explicitly defines 7 non-overlapping safety categories and curates 11,435 questions to cover them systematically, providing a structured taxonomy rather than ad-hoc safety testing. The taxonomy is comprehensive enough to cover major harm types (physical, mental, legal, ethical, privacy) while remaining tractable for evaluation.
vs alternatives: More comprehensive and structured than single-category benchmarks (e.g., toxicity-only); provides a holistic safety assessment framework that aligns with regulatory and safety research perspectives.
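Since the taxonomy is explicit and closed, it can be pinned down as an enum so that a mislabeled question fails fast. The identifier spellings below are my assumption; match them to the dataset's actual category strings:

```python
from enum import Enum

class SafetyCategory(Enum):
    """The 7 SafetyBench harm categories (string values assumed)."""
    OFFENSIVENESS = "offensiveness"
    UNFAIRNESS = "unfairness"
    PHYSICAL_HEALTH = "physical health"
    MENTAL_HEALTH = "mental health"
    ILLEGAL_ACTIVITIES = "illegal activities"
    ETHICS = "ethics"
    PRIVACY = "privacy"

def check_category(label: str) -> SafetyCategory:
    """Raise ValueError if a question's category is outside the taxonomy."""
    return SafetyCategory(label)

print(check_category("privacy"))  # SafetyCategory.PRIVACY
print(len(SafetyCategory))        # 7
```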
dataset download with hugging face integration
Provides two download methods for SafetyBench datasets: shell script (download_data.sh) and Python script (download_data.py using Hugging Face datasets library). The architecture leverages Hugging Face Hub for dataset hosting and distribution, enabling one-command dataset acquisition with automatic decompression and directory structure creation. The Python method uses the datasets library for programmatic access, supporting integration into automated evaluation pipelines without manual file management.
Unique: Provides dual download methods (shell script and Python) leveraging Hugging Face Hub for distribution, enabling both manual and programmatic dataset acquisition with automatic decompression and directory structure creation.
vs alternatives: More convenient than manual downloads by providing automated acquisition scripts, and more reproducible than email-based dataset distribution by using Hugging Face Hub as a stable, versioned repository.
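The decompression-and-directory-creation step can be sketched with the standard library; the archive name and layout here are stand-ins, not the real Hugging Face payload:

```python
import io
import json
import tempfile
import zipfile
from pathlib import Path

def unpack(archive_bytes: bytes, target: Path) -> None:
    """Create the target directory tree and extract the archive into it."""
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        zf.extractall(target)

# In-memory archive standing in for the downloaded dataset file:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data/test_en.json", json.dumps([{"id": 0}]))

with tempfile.TemporaryDirectory() as tmp:
    unpack(buf.getvalue(), Path(tmp))
    print((Path(tmp) / "data" / "test_en.json").exists())  # True
```

Either entry point (download_data.sh or download_data.py) presumably lands on an equivalent fetch-then-unpack flow.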
category-stratified evaluation metrics computation
Computes accuracy metrics stratified by safety category, enabling per-dimension performance analysis. The evaluation pipeline aggregates predictions across all questions in each category (offensiveness, unfairness, physical health, mental health, illegal activities, ethics, privacy) and computes category-specific accuracy scores. This architecture enables identification of category-specific vulnerabilities (e.g., a model may be robust on ethics but weak on physical health) without requiring separate evaluation runs.
Unique: Automatically stratifies accuracy metrics by safety category, enabling fine-grained vulnerability analysis without requiring separate evaluation runs. Provides per-category scores that reveal category-specific weaknesses.
vs alternatives: More diagnostic than aggregate safety scores by breaking down performance by harm category, enabling targeted safety improvements rather than black-box optimization.
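Category stratification in a single pass reduces to grouping (correct, total) tallies by category; the names below are illustrative:

```python
from collections import defaultdict

def per_category_accuracy(records):
    """records: iterable of (category, predicted, gold) triples.
    Returns one accuracy per category from a single pass over predictions."""
    tally = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    for category, predicted, gold in records:
        tally[category][0] += predicted == gold
        tally[category][1] += 1
    return {cat: correct / total for cat, (correct, total) in tally.items()}

records = [
    ("privacy", "A", "A"),
    ("privacy", "B", "C"),
    ("ethics", "D", "D"),
]
print(per_category_accuracy(records))  # {'privacy': 0.5, 'ethics': 1.0}
```

Because the grouping happens inside one loop, all 7 category scores (and, by averaging or pooling, the overall score) come out of a single evaluation run.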