multi-hop reasoning dataset construction with supporting fact annotation
Provides 113,000 question-answer pairs where each question requires traversing and reasoning across two or more Wikipedia articles to derive the answer. The dataset includes explicit supporting fact annotations identifying which sentences from the source documents are necessary for answering, enabling training of models that can both answer questions and explain their reasoning chains. Built through crowdsourced annotation with quality-control mechanisms that ensure multi-hop reasoning is genuinely required, i.e., that questions cannot be answered from any single document.
Unique: Explicitly annotates supporting facts at sentence-level granularity rather than providing only QA pairs, enabling evaluation of both answer correctness and reasoning transparency. The dataset design enforces the multi-hop requirement through crowdsourced validation that questions cannot be answered from a single document.
vs alternatives: Differs from SQuAD (single-document QA) and MS MARCO (web-scale but less structured) by providing explicit multi-hop reasoning requirements with supporting fact labels, making it uniquely suited for training interpretable reasoning systems rather than just answer extraction.
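For concreteness, a minimal sketch of what one annotated example looks like when loaded from the distributed JSON; field names such as supporting_facts, context, type, and level follow the commonly distributed HotpotQA format, and the filename is illustrative, so verify both against the release you download:

```python
import json

# Minimal sketch: read one example from the (assumed) distributed JSON file.
with open("hotpot_train_v1.1.json") as f:
    examples = json.load(f)

ex = examples[0]
print(ex["question"])            # natural-language question
print(ex["answer"])              # short answer string
print(ex["type"], ex["level"])   # reasoning type (bridge/comparison) and difficulty

# supporting_facts: [paragraph_title, sentence_index] pairs marking the
# sentences a system must cite to justify its answer.
gold = {(title, idx) for title, idx in ex["supporting_facts"]}

# context: [paragraph_title, list_of_sentences] pairs; recover the actual
# supporting sentences by indexing into each paragraph.
for title, sentences in ex["context"]:
    for i, sent in enumerate(sentences):
        if (title, i) in gold:
            print(f"[{title} #{i}] {sent}")
```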
supporting fact prediction evaluation framework
Provides a structured evaluation methodology for assessing whether QA systems can correctly identify which source sentences support their answers. The framework compares predicted supporting facts against human-annotated ground truth using precision, recall, and F1 metrics at both sentence and paragraph levels. This enables measurement of reasoning transparency independent of answer correctness, allowing diagnosis of whether a system found the right answer for the right reasons.
Unique: Decouples supporting fact evaluation from answer correctness, enabling independent assessment of reasoning transparency. Provides both sentence-level and paragraph-level metrics, allowing evaluation at different granularities depending on system architecture.
vs alternatives: Unlike generic QA metrics (EM/F1) that only measure answer correctness, this framework specifically evaluates whether systems can justify their reasoning, addressing the explainability gap in black-box QA systems.
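A minimal sketch of the sentence-level supporting-fact metric described above, treating predicted and gold supporting facts as sets of (paragraph title, sentence index) pairs; the official evaluation script additionally reports exact match and joint answer-plus-supporting-fact metrics, so this is illustrative rather than a drop-in replacement:

```python
def supporting_fact_f1(predicted, gold):
    """Sentence-level precision/recall/F1 over (paragraph_title, sentence_idx) pairs."""
    pred, gold = set(predicted), set(gold)
    tp = len(pred & gold)                      # correctly cited sentences
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: the system cited one correct and one spurious sentence,
# and missed one gold sentence.
pred = [("Article A", 0), ("Article B", 3)]
gold = [("Article A", 0), ("Article C", 1)]
print(supporting_fact_f1(pred, gold))  # (0.5, 0.5, 0.5)
```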
compositional reasoning benchmark with multi-document retrieval requirements
Structures questions to require explicit composition of facts across multiple Wikipedia articles, creating a benchmark where naive single-document retrieval fails. Questions are designed such that the answer cannot be found in any single article; instead, the system must retrieve multiple relevant documents, identify the connecting entity or relationship, and synthesize information across them. This tests whether systems can perform true multi-hop reasoning versus pattern matching on single documents.
Unique: Explicitly validates that questions require multi-hop reasoning through crowdsourced verification that single-document retrieval cannot answer them. Questions are structured around entity linking and relationship composition, forcing systems to perform genuine multi-stage reasoning rather than single-stage retrieval.
vs alternatives: Compared to general QA datasets like Natural Questions (single-hop, web-scale) or SQuAD (single-document), HotpotQA's explicit multi-hop requirement and supporting fact annotations make it uniquely suited for evaluating whether systems perform compositional reasoning vs. pattern matching.
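To make the compositional requirement concrete, a hypothetical two-hop retrieval loop; retrieve and extract_bridge_entities are placeholder interfaces (any BM25 or dense retriever and any entity linker could fill them) and are not part of the dataset or any official baseline:

```python
def two_hop_retrieve(question, retrieve, extract_bridge_entities, k=5):
    # Hop 1: documents matching the surface form of the question.
    first_hop_docs = retrieve(question, top_k=k)

    # Bridge: entities mentioned in hop-1 documents but not in the question
    # itself (e.g., the director of the film the question names).
    bridges = extract_bridge_entities(question, first_hop_docs)

    # Hop 2: re-query with each candidate bridge entity appended, so the
    # second document can be found even though the question never names it.
    second_hop_docs = []
    for entity in bridges:
        second_hop_docs.extend(retrieve(f"{question} {entity}", top_k=k))

    return first_hop_docs + second_hop_docs
```

The point of the sketch is that the second query cannot even be formed until the first hop has been resolved, which is exactly what single-stage retrieval over these questions fails to do.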
distractor document filtering and ranking evaluation
Provides a controlled evaluation setting where systems must distinguish relevant documents from distractors. The dataset includes both supporting documents (necessary for answering) and distractor documents (related to the question but not required for the answer). This tests whether retrieval systems can rank supporting documents above distractors, a critical capability for multi-hop QA where false positives in retrieval compound through reasoning stages. Evaluation measures whether systems retrieve all necessary documents while minimizing false positives.
Unique: Provides explicit distractor documents alongside supporting documents, enabling controlled evaluation of retrieval precision and recall. Distractors are selected to be topically related but not necessary for answering, testing whether systems can distinguish genuine supporting evidence from noise.
vs alternatives: Unlike open-domain QA datasets that evaluate retrieval against the full web, HotpotQA's controlled distractor set enables precise measurement of retrieval quality independent of corpus size, making it easier to diagnose retrieval failures in multi-hop systems.
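A minimal sketch of paragraph-level retrieval evaluation in the distractor setting, assuming the system produces a ranking over the candidate paragraphs and that the gold supporting paragraphs sit among the distractors; this is illustrative, not the official scorer:

```python
def paragraph_retrieval_metrics(ranked_titles, gold_titles, k=2):
    """Check whether a ranker places the gold supporting paragraphs above distractors.

    ranked_titles: paragraph titles ordered by the system's relevance score.
    gold_titles: titles of the paragraphs containing annotated supporting facts.
    """
    gold = set(gold_titles)
    top_k = set(ranked_titles[:k])
    hit = len(top_k & gold)
    precision_at_k = hit / k
    recall_at_k = hit / len(gold) if gold else 0.0
    exact = top_k == gold          # all gold paragraphs outrank every distractor
    return precision_at_k, recall_at_k, exact

ranked = ["Gold Paragraph 1", "Distractor A", "Gold Paragraph 2", "Distractor B"]
print(paragraph_retrieval_metrics(ranked, ["Gold Paragraph 1", "Gold Paragraph 2"]))
# -> (0.5, 0.5, False): one gold paragraph was outranked by a distractor.
```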
question type classification and reasoning pattern analysis
Categorizes questions into distinct reasoning types (e.g., 'bridge' questions requiring entity linking between documents, 'comparison' questions requiring fact synthesis) and provides labels enabling analysis of system performance across reasoning patterns. This allows fine-grained evaluation of which reasoning types systems handle well vs. poorly, and enables targeted training or evaluation on specific compositional reasoning challenges. The taxonomy captures the structural reasoning requirements independent of domain content.
Unique: Provides explicit question type labels capturing the structural reasoning requirements (bridge, comparison, etc.) independent of domain content. Enables analysis of whether systems struggle with specific reasoning patterns vs. general knowledge gaps.
vs alternatives: Unlike generic QA datasets without reasoning type labels, HotpotQA's type taxonomy enables targeted evaluation and debugging of reasoning capabilities, allowing researchers to identify whether failures stem from retrieval, entity linking, or fact composition.
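As an illustration, a small helper that breaks answer accuracy down by the provided question-type labels; it assumes the _id, type, and answer fields of the distributed JSON and uses simple string-equality exact match rather than the official answer normalization:

```python
from collections import defaultdict

def accuracy_by_question_type(examples, predictions):
    """Break answer exact-match accuracy down by reasoning type.

    examples: iterable of dicts with '_id', 'type' ('bridge'/'comparison'), 'answer'.
    predictions: dict mapping example id -> predicted answer string.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        qtype = ex["type"]
        total[qtype] += 1
        if predictions.get(ex["_id"], "").strip().lower() == ex["answer"].strip().lower():
            correct[qtype] += 1
    return {qtype: correct[qtype] / total[qtype] for qtype in total}
```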
wikipedia-grounded question generation for domain-specific reasoning
Questions are generated from Wikipedia articles and require reasoning over real-world entities, relationships, and facts. This grounds reasoning in a concrete knowledge domain (Wikipedia) rather than synthetic or template-based questions, enabling evaluation of whether systems can handle real-world complexity. Questions span diverse topics (people, places, films, organizations) and reasoning patterns (attribute lookup, entity linking, relationship chaining).
Unique: Questions are grounded in real Wikipedia entities and relationships rather than synthetic templates, requiring models to handle actual knowledge base complexity (entity disambiguation, relationship chaining, fact lookup). This makes reasoning evaluation more realistic than template-based datasets.
vs alternatives: Grounds reasoning in a real, large-scale knowledge base (Wikipedia) rather than synthetic examples, enabling evaluation of whether systems can handle real-world entity linking and relationship reasoning.