competition-mathematics problem dataset loading with multi-subject stratification
Loads and preprocesses 12,500 curated competition mathematics problems from AMC 10/12, AIME, and Math Olympiads using the MATHDataset class in MATH.py. The loader supports multiple tokenization strategies and can selectively include or exclude solution steps during preprocessing, enabling researchers to evaluate models on problem-solving without solution hints. Problems are stratified across 7 mathematical subjects (Prealgebra, Algebra, Number Theory, Counting & Probability, Geometry, Intermediate Algebra, Precalculus), with structured JSON metadata including problem statements, solutions, and difficulty levels.
Unique: Curates problems exclusively from high-difficulty mathematical competitions (AMC, AIME, Olympiads) rather than generic math word problems, ensuring evaluation on reasoning-intensive problems that require multi-step derivations and deep mathematical understanding. The MATHDataset class implements subject-aware stratification enabling fine-grained evaluation across mathematical domains.
vs alternatives: More rigorous than generic math QA datasets (e.g., MathQA, SVAMP) because problems require genuine mathematical reasoning rather than simple arithmetic, making it the de facto standard for evaluating LLM mathematical capabilities in research.
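A minimal loading sketch, assuming the public MATH on-disk layout of one JSON file per problem under per-subject directories; the function name and field handling below are illustrative rather than the exact MATHDataset implementation in MATH.py:

```python
import json
from collections import defaultdict
from pathlib import Path

def load_math_problems(root_dir, include_solutions=True):
    """Load MATH-style JSON problems grouped by subject directory.

    Assumes each problem is a JSON file with 'problem', 'solution',
    and 'level' fields, stored under <root_dir>/<subject>/<id>.json.
    """
    problems_by_subject = defaultdict(list)
    for path in sorted(Path(root_dir).glob("*/*.json")):
        subject = path.parent.name
        with open(path, encoding="utf-8") as f:
            record = json.load(f)
        entry = {
            "problem": record["problem"],
            "level": record.get("level"),
            "subject": subject,
        }
        if include_solutions:
            entry["solution"] = record["solution"]
        problems_by_subject[subject].append(entry)
    return problems_by_subject

# Example: count problems per subject in a test split (path is a placeholder)
if __name__ == "__main__":
    data = load_math_problems("MATH/test", include_solutions=False)
    for subject, items in sorted(data.items()):
        print(f"{subject}: {len(items)} problems")
```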
mathematical equivalence checking with latex normalization and algebraic simplification
Implements the is_equiv() function in math_equivalence.py that determines semantic equivalence between two mathematical expressions regardless of syntactic representation. The system applies a multi-stage normalization pipeline that resolves LaTeX formatting differences, fraction representations, and numerical precision issues, and applies algebraic simplification, before performing string-based comparison. This enables accurate answer verification without requiring exact string matching, accommodating equivalent forms like '1/2', '0.5', and '\frac{1}{2}'.
Unique: Implements a multi-stage normalization pipeline specifically designed for competition mathematics rather than generic string comparison. The system handles domain-specific challenges like multiple valid representations of the same answer (fractions vs decimals, different LaTeX encodings) and applies algebraic simplification to catch mathematically equivalent but syntactically different forms.
vs alternatives: More robust than exact string matching or simple numerical comparison because it normalizes across multiple mathematical notations and handles algebraic equivalence, enabling accurate evaluation of LLM answers that are mathematically correct but expressed differently than ground truth.
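A simplified sketch of the normalize-then-compare idea, covering only a handful of representative passes; the actual is_equiv() pipeline in math_equivalence.py handles many more cases, and the helper names below are illustrative:

```python
import re

def _normalize(answer: str) -> str:
    """Apply a few representative normalization passes to a LaTeX answer.

    Illustrative only: the real pipeline also handles units, mixed numbers,
    \left/\right delimiters, and other competition-specific quirks.
    """
    s = answer.strip()
    s = s.replace(r"\left", "").replace(r"\right", "")
    s = s.replace(" ", "")
    # \frac{1}{2} and \frac12 -> 1/2
    s = re.sub(r"\\frac\{([^{}]+)\}\{([^{}]+)\}", r"\1/\2", s)
    s = re.sub(r"\\frac(\d)(\d)", r"\1/\2", s)
    # Strip surrounding $...$ and \boxed{...}
    s = s.strip("$")
    m = re.fullmatch(r"\\boxed\{(.*)\}", s)
    if m:
        s = m.group(1)
    # Canonicalize a simple decimal case (0.5 -> 1/2) as an example
    if re.fullmatch(r"0?\.5", s):
        s = "1/2"
    return s

def is_equiv_sketch(a: str, b: str) -> bool:
    """String comparison after normalization; falls back to raw equality."""
    try:
        return _normalize(a) == _normalize(b)
    except Exception:
        return a == b

print(is_equiv_sketch(r"\frac{1}{2}", "0.5"))   # True under this sketch
print(is_equiv_sketch(r"\boxed{1/2}", "1/2"))   # True
```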
solution step extraction and intermediate reasoning evaluation
Extracts and preserves solution steps from MATH problems, enabling evaluation of intermediate reasoning and chain-of-thought capabilities. The system can optionally include or exclude solution steps during dataset loading, supporting different evaluation methodologies: evaluating final answers only (without hints) or evaluating intermediate reasoning steps. This enables researchers to assess whether models can generate correct reasoning chains or merely guess final answers.
Unique: Preserves solution steps as first-class data throughout the evaluation pipeline, enabling evaluation of intermediate reasoning quality rather than just final answers. This supports emerging research on chain-of-thought prompting and interpretable AI reasoning.
vs alternatives: More comprehensive than final-answer-only evaluation because it assesses reasoning quality and interpretability, but requires more manual annotation and is harder to automate than simple answer verification.
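A hedged sketch of how a solution might be split into steps and how an evaluation prompt could be assembled with or without those steps; the splitting heuristic and function names are illustrative assumptions, not the repository's preprocessing:

```python
import re

def split_solution_steps(solution: str):
    """Split a LaTeX solution string into rough reasoning steps.

    Heuristic sketch: blank lines and sentence-ending periods act as step
    boundaries, avoiding splits inside decimals like 3.14.
    """
    chunks = re.split(r"\n\s*\n|(?<=[^\d])\.\s+", solution.strip())
    return [c.strip() for c in chunks if c.strip()]

def build_prompt(problem: str, steps, include_solution_steps: bool) -> str:
    """Assemble an evaluation prompt with or without solution hints."""
    prompt = f"Problem: {problem}\n"
    if include_solution_steps:
        for i, step in enumerate(steps, 1):
            prompt += f"Step {i}: {step}\n"
    prompt += "Final Answer:"
    return prompt
```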
local gpt-style model evaluation with configurable beam search and sampling
Provides evaluation infrastructure in eval_math_gpt.py that runs local language models (GPT-style architectures) on MATH dataset problems with configurable inference parameters including beam search width, sampling temperature, and top-k/top-p filtering. The run_eval() function orchestrates the evaluation pipeline: loads problems from MATHDataset, generates model responses with specified decoding strategy, extracts final answers from model outputs, and compares against ground truth using mathematical equivalence checking. Supports both greedy decoding and stochastic sampling for exploring model behavior under different inference regimes.
Unique: Integrates configurable beam search and sampling directly into the evaluation loop, enabling researchers to explore how different decoding strategies affect mathematical reasoning performance. The architecture separates inference configuration from evaluation logic, allowing systematic comparison of greedy vs stochastic decoding on the same problem set.
vs alternatives: More flexible than API-based evaluation (e.g., OpenAI GPT-3 API) because it supports arbitrary inference parameters and local model variants, but requires more computational resources and manual infrastructure setup compared to cloud-based alternatives.
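A sketch of such a decoding-configurable generation call using the Hugging Face transformers API; the placeholder model name, prompt, and argument defaults are assumptions rather than the exact internals of run_eval() in eval_math_gpt.py:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_solution(model, tokenizer, prompt, *, num_beams=1,
                      do_sample=False, temperature=1.0, top_k=0, top_p=1.0,
                      max_new_tokens=256):
    """Generate a solution under a configurable decoding strategy."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            num_beams=num_beams,          # >1 enables beam search
            do_sample=do_sample,          # True enables stochastic sampling
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            pad_token_id=tokenizer.eos_token_id,
        )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")           # placeholder model
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    greedy = generate_solution(lm, tok, "Problem: What is 2+3?\nAnswer:")
    sampled = generate_solution(lm, tok, "Problem: What is 2+3?\nAnswer:",
                                do_sample=True, temperature=0.7, top_p=0.95)
```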
openai gpt-3 api-based model evaluation with remote inference
Provides evaluation infrastructure in evaluate_gpt3.py that interfaces with OpenAI's GPT-3 API for remote model evaluation on MATH problems. The system handles API authentication, batches problem submissions to the GPT-3 API, parses structured responses, and aggregates accuracy metrics. This enables evaluation of closed-source models without local compute resources, though with latency and cost considerations inherent to API-based inference.
Unique: Abstracts away OpenAI API complexity by providing a unified evaluation interface that handles authentication, batching, response parsing, and error handling. The system integrates seamlessly with the local evaluation pipeline, enabling side-by-side comparison of API-based and local models using identical evaluation metrics.
vs alternatives: Simpler than local evaluation for closed-source models because it eliminates infrastructure setup, but introduces API dependency, latency, and cost overhead compared to local inference on open-source models.
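A minimal sketch of a rate-limit-aware request against the legacy openai-python (<1.0) Completion interface that GPT-3-era scripts targeted; the engine name, retry policy, and environment-variable authentication are illustrative assumptions, not the exact contents of evaluate_gpt3.py:

```python
import os
import time
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # authentication via env var

def query_gpt3(prompt, engine="davinci", max_tokens=256, retries=3):
    """Submit one MATH problem prompt to the Completions endpoint."""
    for attempt in range(retries):
        try:
            response = openai.Completion.create(
                engine=engine,
                prompt=prompt,
                max_tokens=max_tokens,
                temperature=0.0,
            )
            return response["choices"][0]["text"]
        except openai.error.RateLimitError:
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError("API request failed after retries")
```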
subject-stratified accuracy metrics aggregation and reporting
Aggregates evaluation results across the 12,500 problems and computes accuracy metrics stratified by mathematical subject (Prealgebra, Algebra, Number Theory, Counting & Probability, Geometry, Intermediate Algebra, Precalculus). The reporting system generates per-subject accuracy percentages, overall accuracy, and optional per-difficulty breakdowns. This enables fine-grained analysis of model strengths and weaknesses across mathematical domains, revealing whether models struggle with specific subject areas.
Unique: Implements subject-aware stratification that breaks down accuracy by mathematical domain, revealing whether models have domain-specific weaknesses (e.g., strong on Algebra but weak on Geometry). This granularity is essential for understanding model capabilities beyond aggregate accuracy.
vs alternatives: More informative than single aggregate accuracy metric because subject-stratified results expose domain-specific model limitations, enabling targeted improvement efforts and more nuanced model comparison.
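A compact sketch of the aggregation step, assuming each evaluation record carries 'subject', 'level', and a boolean 'correct' field; the field names and report shape are illustrative:

```python
from collections import defaultdict

def stratified_accuracy(results):
    """Aggregate overall, per-subject, and per-level accuracy."""
    per_subject = defaultdict(lambda: [0, 0])   # subject -> [correct, total]
    per_level = defaultdict(lambda: [0, 0])     # level   -> [correct, total]
    for r in results:
        for table, key in ((per_subject, r["subject"]), (per_level, r["level"])):
            table[key][0] += int(r["correct"])
            table[key][1] += 1
    return {
        "overall": sum(int(r["correct"]) for r in results) / max(len(results), 1),
        "by_subject": {k: c / t for k, (c, t) in per_subject.items()},
        "by_level": {k: c / t for k, (c, t) in per_level.items()},
    }

# Example usage with two hypothetical records:
report = stratified_accuracy([
    {"subject": "Algebra", "level": "Level 3", "correct": True},
    {"subject": "Geometry", "level": "Level 5", "correct": False},
])
print(report["by_subject"])
```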
problem metadata extraction and structured indexing
Extracts and indexes structured metadata from MATH dataset JSON files including problem statement, solution steps, final answer, difficulty level, and mathematical subject. The indexing system enables efficient retrieval of problems by subject, difficulty, or other attributes, and provides structured access to problem components (problem text vs solution vs answer) for different evaluation workflows. Metadata is preserved throughout the evaluation pipeline to enable stratified analysis and filtering.
Unique: Preserves full problem metadata (subject, difficulty, solution steps) throughout the evaluation pipeline, enabling post-hoc analysis of which problem characteristics correlate with model success or failure. The indexing structure supports efficient filtering and stratified evaluation.
vs alternatives: More structured than raw problem files because metadata is parsed and indexed, enabling efficient filtering and analysis; but less flexible than custom metadata systems that could include additional annotations (e.g., required mathematical concepts, solution techniques).
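An illustrative indexing sketch over MATH-style JSON records (with 'problem', 'solution', 'level', and a subject field); the ProblemIndex class and its filter() method are assumptions for exposition, not the repository's actual data structures:

```python
import json
from collections import defaultdict
from pathlib import Path

class ProblemIndex:
    """Index MATH-style JSON records by subject and difficulty level."""

    def __init__(self, root_dir):
        self.records = []
        self.by_subject = defaultdict(list)
        self.by_level = defaultdict(list)
        for path in sorted(Path(root_dir).glob("*/*.json")):
            with open(path, encoding="utf-8") as f:
                rec = json.load(f)
            idx = len(self.records)
            rec["path"] = str(path)
            self.records.append(rec)
            # Fall back to the directory name if no subject field is present.
            self.by_subject[rec.get("type", path.parent.name)].append(idx)
            self.by_level[rec.get("level", "unknown")].append(idx)

    def filter(self, subject=None, level=None):
        """Return records matching an optional subject and/or difficulty."""
        ids = set(range(len(self.records)))
        if subject is not None:
            ids &= set(self.by_subject.get(subject, []))
        if level is not None:
            ids &= set(self.by_level.get(level, []))
        return [self.records[i] for i in sorted(ids)]
```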
answer extraction from model outputs with heuristic parsing
Extracts final numerical or symbolic answers from model-generated text using heuristic pattern matching (e.g., regex patterns for 'Answer: X', 'Final Answer:', or boxed notation). The extraction system handles common answer formats including integers, fractions, decimals, and algebraic expressions. This enables automatic answer verification without requiring models to output structured JSON or follow strict formatting conventions, accommodating natural language model outputs.
Unique: Uses lightweight regex-based heuristics rather than requiring models to output structured JSON, enabling evaluation of base language models without answer format fine-tuning. This pragmatic approach trades robustness for flexibility, accommodating diverse model output styles.
vs alternatives: More flexible than requiring structured output because it works with any model without fine-tuning, but less reliable than models trained to output answers in standardized formats (e.g., JSON with 'answer' field).
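A hedged sketch of heuristic answer extraction, handling \boxed{...} with brace balancing and falling back to 'Final Answer:'/'Answer:' prefixes; the exact patterns and precedence used in the repository may differ:

```python
import re

def extract_final_answer(model_output: str):
    """Pull a final answer out of free-form model text with simple heuristics."""
    # \boxed{...}: find the last occurrence and take its brace-balanced argument.
    start = model_output.rfind(r"\boxed{")
    if start != -1:
        depth = 0
        i = begin = start + len(r"\boxed{")
        while i < len(model_output):
            if model_output[i] == "{":
                depth += 1
            elif model_output[i] == "}":
                if depth == 0:
                    return model_output[begin:i].strip()
                depth -= 1
            i += 1
    # Fall back to the last 'Final Answer: X' or 'Answer: X' occurrence.
    matches = re.findall(r"(?:Final\s+Answer|Answer)\s*[:=]\s*(.+)", model_output)
    if matches:
        return matches[-1].strip().rstrip(".")
    return None

print(extract_final_answer(r"... so the result is \boxed{\frac{3}{4}}."))  # \frac{3}{4}
print(extract_final_answer("Reasoning... Final Answer: 42"))               # 42
```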
+3 more capabilities