adversarial-filtered multiple-choice evaluation
Evaluates language models on 70,000 multiple-choice questions whose incorrect options were generated by language models and adversarially selected to fool machines while remaining obviously wrong to humans. Filtering is two-stage: machine-generated distractors are first ranked by how reliably they confuse models (measured by model accuracy on that specific question), and human annotators then verify that the options that are hard for models stay easy for humans. The result is a dataset in which the gap between model accuracy and the 95.6% human accuracy directly measures commonsense reasoning deficits rather than dataset artifacts; a minimal scoring sketch follows this entry.
Unique: Distractors are selected by measured model confusion rather than by human judgments of plausibility, so the dataset specifically targets machine weaknesses while remaining interpretable to humans. The two-stage pipeline of LLM generation plus human validation scales better than purely human-written distractors while yielding higher-quality negatives than random sampling.
vs alternatives: Harder than its predecessor SWAG because distractors are adversarially selected for model confusion, and more human-aligned than synthetic reasoning datasets because the 95.6% human accuracy confirms that questions hard for models remain easy for humans.
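A minimal sketch of how models are commonly scored on such multiple-choice items: pick the ending with the highest average token log-likelihood given the context. The model name, the field names ctx/endings/label, and the example item are illustrative assumptions rather than details of the benchmark; the same scoring function can also rank candidate distractors by how strongly they compete with the gold ending, in the spirit of the confusion-based filtering described above.

```python
# Minimal sketch: pick the ending with the highest average token log-likelihood
# given the context. Model name, field names, and the example item are
# illustrative assumptions, not taken from the benchmark itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def ending_logprob(context: str, ending: str) -> float:
    """Average log-probability of the ending tokens, conditioned on the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # prediction for each next token
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, ctx_ids.shape[1] - 1:].mean().item()  # ending tokens only

def predict(item: dict) -> int:
    """Index of the highest-scoring ending for one multiple-choice item."""
    scores = [ending_logprob(item["ctx"], e) for e in item["endings"]]
    return max(range(len(scores)), key=scores.__getitem__)

def rank_distractors(context: str, gold: str, candidates: list[str]) -> list[str]:
    """One simple confusion ranking: candidates scored closest to (or above)
    the gold ending come first, i.e. they confuse this model the most."""
    gold_lp = ending_logprob(context, gold)
    return sorted(candidates, key=lambda c: gold_lp - ending_logprob(context, c))

example = {
    "ctx": "A man pours pancake batter into a hot pan. He",
    "endings": [
        "waits for bubbles to form, then flips the pancake.",
        "puts the pan in the freezer so it cooks faster.",
        "eats the empty pan with a fork and knife.",
        "paints the batter onto the ceiling to dry.",
    ],
    "label": 0,
}
print("predicted:", predict(example), "gold:", example["label"])
```

Accuracy over a split is then simply the fraction of items for which the predicted index matches the gold label.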
physical commonsense continuation prediction
Tests a model's ability to predict the next action or outcome in scenarios drawn from videos of physical activities (cooking, sports, repairs, and so on). Each question presents a sequence of events and asks which of four options most plausibly continues the sequence; an illustrative prompt-based way of posing such an item appears after this entry. Because the contexts come from real-world video captions and activities, the commonsense being tested is grounded in concrete physical interactions rather than abstract reasoning: models must understand object physics, tool usage, body mechanics, and temporal causality to select correct continuations.
Unique: Grounds commonsense reasoning in real video captions and activities rather than synthetic scenarios, ensuring that correct answers reflect actual physical outcomes humans observe. The adversarial filtering specifically targets models that fail at physical reasoning while humans succeed, creating a diagnostic tool for embodied understanding gaps.
vs alternatives: More grounded in real-world physics than knowledge-oriented benchmarks like MMLU, and more challenging than straightforward video QA because distractors are adversarially selected to confuse models specifically about physical causality.
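Below is a hedged sketch of an alternative, prompt-based way to pose one such continuation item to an instruction-following model: render the scenario with lettered options and map the reply back to an option index. The scenario, option texts, prompt wording, and helper names are illustrative, not drawn from the dataset.

```python
# Sketch of prompt-based querying for a physical-commonsense continuation item:
# present the scenario with lettered options and map the reply back to an index.
# The scenario, option texts, and prompt wording are illustrative assumptions.
import re
from string import ascii_uppercase

def build_prompt(ctx: str, endings: list[str]) -> str:
    lines = [
        "The following describes a physical activity.",
        "Pick the most plausible continuation and answer with a single letter.",
        "",
        f"Scenario: {ctx}",
        "",
    ]
    lines += [f"{letter}. {ending}" for letter, ending in zip(ascii_uppercase, endings)]
    return "\n".join(lines)

def parse_choice(reply: str, n_options: int) -> int | None:
    """Map a free-text reply such as 'The answer is A.' to an option index."""
    for match in re.finditer(r"\b([A-Z])\b", reply.strip().upper()):
        idx = ascii_uppercase.index(match.group(1))
        if idx < n_options:
            return idx
    return None  # reply did not name a valid option

prompt = build_prompt(
    "A woman is fixing a flat bicycle tire. She levers the tire off the rim and",
    [
        "pulls out the inner tube to look for the puncture.",
        "inflates the bare metal rim with the hand pump.",
        "glues the patch onto the outside of the tread first.",
        "rides away immediately on the empty rim.",
    ],
)
print(prompt)
print(parse_choice("The answer is A.", n_options=4))  # -> 0
```

Likelihood scoring (as in the earlier sketch) and prompt-based querying can give different accuracies, so reported results should state which protocol was used.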
social and temporal reasoning evaluation
Assesses models' understanding of social dynamics, conversational context, and temporal sequences in everyday scenarios. Questions test whether models can reason about social norms (what's appropriate to say/do), emotional reactions, and cause-effect relationships across time. The dataset includes scenarios involving interpersonal interactions, social etiquette, and temporal ordering of events. Adversarial distractors specifically target models that misunderstand social context or temporal logic while remaining obviously wrong to humans.
Unique: Combines social understanding with temporal reasoning in a single benchmark, testing whether models understand not just what happens next but why it happens and how humans would react. Adversarial filtering specifically targets models that fail at social reasoning while humans succeed.
vs alternatives: Broader than social bias benchmarks because it tests positive social understanding (what is appropriate to say or do) rather than only detecting bias, and more grounded in everyday scenarios than abstract reasoning datasets.
machine-vs-human performance gap analysis
Provides a calibrated benchmark with a known human accuracy of 95.6%, where adversarial filtering ensures that questions hard for machines remain easy for humans. This enables precise measurement of the performance gap between models and humans on commonsense reasoning: researchers can use the gap to quantify progress toward human-level understanding and to identify which types of commonsense reasoning (physical, social, temporal) show the largest model-human differences. A small reporting sketch follows this entry.
Unique: Pairs the human-calibrated baseline with adversarial filtering, so questions that are hard for machines are known to be easy for humans; the measured gap therefore reflects genuine commonsense reasoning deficits rather than dataset ambiguity.
vs alternatives: More interpretable than benchmarks without human baselines because the gap directly measures commonsense reasoning deficit, and more reliable than benchmarks where hard questions are hard for both humans and machines.
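A small sketch of how the gap might be reported, overall and per reasoning category, assuming per-item results are logged with a category tag and a correctness flag (both illustrative); the 0.956 constant is the human baseline quoted above.

```python
# Sketch of gap reporting against the 95.6% human baseline, overall and per
# category. The record structure (a category tag plus a correct/incorrect flag)
# is an illustrative assumption about how per-item results might be logged.
from collections import defaultdict

HUMAN_ACCURACY = 0.956  # reported human baseline

def accuracy(records):
    return sum(r["correct"] for r in records) / len(records)

def gap_report(records):
    """Print the model-vs-human gap overall and for each category of item."""
    by_category = defaultdict(list)
    for r in records:
        by_category[r["category"]].append(r)

    overall = accuracy(records)
    print(f"overall: acc={overall:.3f}  gap={HUMAN_ACCURACY - overall:+.3f}")
    for category, items in sorted(by_category.items()):
        acc = accuracy(items)
        print(f"{category:>10}: acc={acc:.3f}  gap={HUMAN_ACCURACY - acc:+.3f}  n={len(items)}")

records = [
    {"category": "physical", "correct": True},
    {"category": "physical", "correct": False},
    {"category": "social",   "correct": True},
    {"category": "temporal", "correct": False},
]
gap_report(records)
```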
dataset versioning and reproducibility
Provides a fixed, versioned dataset of 70,000 examples with consistent train/validation/test splits, enabling reproducible evaluation across models and over time. The dataset is hosted on Hugging Face under version control, so researchers can pin and cite a specific revision and benchmark results remain comparable across papers; a loading sketch follows this entry. Because nothing is generated or augmented dynamically, improvements reflect genuine capability gains rather than dataset variance.
Unique: Explicit, versioned splits on Hugging Face make evaluation reproducible and model comparisons fair, with no adversarial augmentation or regeneration at test time that could confound results.
vs alternatives: More reproducible than dynamically generated benchmarks because the dataset is fixed and versioned, and more comparable than benchmarks with multiple variants because all researchers evaluate on the same set.
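A sketch of reproducible loading with the Hugging Face datasets library, which accepts a revision argument that pins a specific commit or tag; the repository id below is a placeholder and must be replaced with the benchmark's actual identifier.

```python
# Sketch of reproducible loading: pin the dataset to a specific revision so the
# same examples and splits are used across runs and papers. The repository id
# and revision string below are placeholders, not the actual identifiers.
from datasets import load_dataset

DATASET_REPO = "some-org/commonsense-benchmark"  # placeholder repo id
REVISION = "main"  # pin a specific commit hash or tag for strict reproducibility

splits = {
    name: load_dataset(DATASET_REPO, split=name, revision=REVISION)
    for name in ("train", "validation", "test")
}

for name, ds in splits.items():
    print(f"{name}: {len(ds)} examples")

# Report models on the same fixed validation/test splits; never re-split or
# augment, so that score changes reflect model capability rather than data drift.
```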