extended test case generation with 35x multiplier for python code evaluation
Generates augmented test suites for MBPP problems by creating 35x more test cases than the original benchmark through systematic edge-case and boundary-condition generation. The system maintains structured metadata for each problem including base_input (original tests), plus_input (extended tests), contract (input validation constraints), atol (floating-point tolerance), canonical_solution (ground truth), and entry_point (function name). This architectural separation enables rigorous detection of fragile solutions that pass shallow tests but fail on edge cases, addressing the fundamental limitation that the original MBPP's roughly three tests per task let many incorrect solutions pass undetected.
Unique: Provides 35x test case multiplier specifically for MBPP (378 tasks) with structured metadata separation (base_input vs plus_input) and input validation contracts, enabling systematic edge-case coverage that original MBPP's ~3 tests per task cannot achieve. Uses canonical_solution ground truth execution to dynamically calibrate timeouts and floating-point tolerances per problem.
vs alternatives: Significantly more rigorous than original MBPP (3→105 tests per task average) and HumanEval+ (80x multiplier) while maintaining Python-specific focus; catches correctness issues that shallow benchmarks miss but requires more computational resources for evaluation.
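A minimal sketch of how a problem record with this metadata can be consumed. It assumes a get_mbpp_plus loader in evalplus.data that returns a dict keyed by task ID with the fields listed above; treat the loader name and return shape as assumptions rather than a verified API.

```python
# Sketch (assumed loader name and record shape): iterate one MBPP+ problem
# and inspect its base vs. extended test inputs.
from evalplus.data import get_mbpp_plus  # assumption: loader returning {task_id: problem}

problems = get_mbpp_plus()
task_id, problem = next(iter(problems.items()))

entry_point = problem["entry_point"]      # name of the function under test
base_tests = problem["base_input"]        # original MBPP test inputs
plus_tests = problem["plus_input"]        # EvalPlus-generated edge-case inputs
atol = problem.get("atol", 0)             # per-problem floating-point tolerance

print(f"{task_id}: {entry_point} has {len(base_tests)} base "
      f"and {len(plus_tests)} extended test inputs (atol={atol})")
```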
safe isolated execution of untrusted llm-generated code with multi-layer resource guards
Executes arbitrary Python code generated by LLMs in isolated processes with enforced resource limits and system call restrictions so that malicious or buggy code cannot crash the evaluation framework. The untrusted_check function spawns a separate process via multiprocessing with shared-memory IPC and applies memory limits (default 4GB, configurable via the EVALPLUS_MAX_MEMORY_BYTES environment variable), time limits dynamically calculated from ground-truth execution time, I/O suppression via swallow_io to prevent output pollution, and reliability_guard to disable dangerous system calls. This architecture contains infinite loops, memory exhaustion, and filesystem access attempts, whether accidental or hostile, while maintaining execution fidelity for correctness evaluation.
Unique: Implements multi-layer isolation using process-level separation (multiprocessing), memory limits (EVALPLUS_MAX_MEMORY_BYTES), dynamic timeout calculation from canonical_solution execution, I/O suppression (swallow_io), and system call restrictions (reliability_guard). This combination prevents both accidental crashes and intentional attacks while maintaining execution fidelity for correctness evaluation.
vs alternatives: More robust than simple try-catch approaches because it uses OS-level process isolation rather than Python-level exception handling; prevents infinite loops and memory exhaustion that would crash a single-process evaluator, though with higher latency than in-process execution.
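A simplified sketch of the isolation pattern, not EvalPlus's actual untrusted_check: the child process caps its own address space, the parent enforces a hard timeout, and the verdict travels back through shared memory.

```python
# Minimal process-isolation sketch: memory cap + hard timeout + shared-memory verdict.
import multiprocessing as mp
import resource

def _run_guarded(code: str, entry_point: str, args, result, mem_bytes: int):
    # Cap the child's address space so runaway allocations fail inside the child only.
    resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    try:
        namespace: dict = {}
        exec(code, namespace)              # define the untrusted function
        namespace[entry_point](*args)      # call it on one test input
        result.value = 1                   # pass
    except BaseException:
        result.value = -1                  # any exception counts as a fail

def untrusted_run(code: str, entry_point: str, args, timeout_s: float,
                  mem_bytes: int = 4 * 1024**3) -> str:
    result = mp.Value("i", 0)              # 0 also covers a child killed before reporting
    proc = mp.Process(target=_run_guarded,
                      args=(code, entry_point, args, result, mem_bytes))
    proc.start()
    proc.join(timeout_s)                   # timeout derived from ground-truth runtime
    if proc.is_alive():                    # infinite loop: kill the child, host survives
        proc.kill()
        proc.join()
        return "timeout"
    return "pass" if result.value == 1 else "fail"
```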
code sanitization and normalization for consistent evaluation across llm outputs
Preprocesses LLM-generated code to normalize formatting, remove extraneous content, and extract the target function before execution. The sanitize module (evalplus/sanitize.py) normalizes inconsistent formatting, removes comments and docstrings that may interfere with parsing, extracts the function matching the entry_point name, and validates syntax before execution. This ensures that evaluation results reflect code correctness rather than formatting quirks or extraneous LLM output such as surplus imports, explanatory text, or wrapper code. The sanitization pipeline is essential because different LLMs produce code with different indentation, naming conventions, and structural patterns that would otherwise cause false negatives.
Unique: Implements multi-stage sanitization pipeline that separates formatting normalization (indentation, whitespace) from structural extraction (entry_point function isolation) and validation (syntax checking). Uses AST-based function extraction rather than regex, ensuring robust handling of complex code structures and nested functions.
vs alternatives: More robust than simple regex-based extraction because it uses Python's ast module for structural parsing; handles edge cases like nested functions, decorators, and complex indentation that regex approaches would miss. Enables fair comparison across LLM models with different output conventions.
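A minimal sketch of AST-based entry-point extraction, much reduced from what evalplus/sanitize.py actually does, illustrating why structural parsing handles decorators and nested functions that regex approaches miss.

```python
# Parse the generation, keep module-level imports plus the entry-point function,
# and drop everything else (prose, wrapper code, stray calls).
import ast

def extract_entry_point(source: str, entry_point: str) -> str | None:
    try:
        tree = ast.parse(source)           # syntax validation happens here
    except SyntaxError:
        return None
    kept: list[ast.stmt] = []
    for node in tree.body:
        # Keep imports so the extracted function remains runnable.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            kept.append(node)
        # Keep the target function; decorators and nested defs come along with it.
        elif (isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
              and node.name == entry_point):
            kept.append(node)
    if not any(isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef)) for n in kept):
        return None
    return ast.unparse(ast.Module(body=kept, type_ignores=[]))
```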
multi-backend llm integration for code generation with 8+ provider support
Provides unified interface to generate code from 8+ LLM backends including vLLM, HuggingFace, OpenAI, Anthropic, Google Gemini, AWS Bedrock, and Ollama. The provider architecture (evalplus/provider/) abstracts backend-specific API details behind a common interface, handling authentication, request formatting, response parsing, and error handling for each provider. This enables researchers to benchmark code generation across different models and providers without rewriting evaluation code. The codegen module (evalplus/codegen.py) orchestrates the generation pipeline: problem specification → prompt formatting → LLM call → response extraction → sanitization → evaluation.
Unique: Implements provider abstraction layer that unifies 8+ LLM backends (vLLM, HuggingFace, OpenAI, Anthropic, Gemini, Bedrock, Ollama) behind a common interface, enabling single-codebase evaluation across local and cloud models. Each provider handles authentication, request formatting, and response parsing independently, allowing researchers to swap backends without modifying evaluation logic.
vs alternatives: More comprehensive than single-provider frameworks (e.g., OpenAI-only evaluators) because it supports both cloud APIs and self-hosted models; enables cost-benefit analysis between providers and avoids vendor lock-in. Abstraction layer reduces code duplication compared to implementing each provider separately.
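An illustrative sketch of the provider-abstraction idea. The class and method names here are assumptions for illustration, not the exact classes in evalplus/provider/; only the OpenAI client calls reflect that vendor's published SDK.

```python
# A common generate() interface that concrete backends implement, so the
# evaluation loop never touches provider-specific API details.
from abc import ABC, abstractmethod

class CodeProvider(ABC):
    @abstractmethod
    def generate(self, prompt: str, n: int = 1, temperature: float = 0.0) -> list[str]:
        """Return n code completions for the given problem prompt."""

class OpenAIProvider(CodeProvider):
    def __init__(self, model: str):
        from openai import OpenAI          # backend-specific import kept local
        self.client, self.model = OpenAI(), model

    def generate(self, prompt: str, n: int = 1, temperature: float = 0.0) -> list[str]:
        resp = self.client.chat.completions.create(
            model=self.model, n=n, temperature=temperature,
            messages=[{"role": "user", "content": prompt}],
        )
        return [choice.message.content for choice in resp.choices]

# The pipeline sees only the abstract interface:
#   provider: CodeProvider = OpenAIProvider("gpt-4o-mini")   # model name is illustrative
#   samples = provider.generate(problem_prompt, n=10, temperature=0.8)
```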
pass@k metric calculation with configurable sample aggregation
Computes pass@k metrics by generating multiple code samples per problem and calculating the probability that at least one sample passes all tests. The metric is calculated as pass@k = 1 - C(n-c, k) / C(n, k), where n is the total number of samples generated, c is the number of samples that pass all tests, and k is the number of samples considered. This enables evaluation of model reliability: pass@1 measures single-shot accuracy, while pass@10 or pass@100 measures whether the model can eventually generate correct code. The framework aggregates results across all problems to produce dataset-level pass@k scores, enabling comparison of models' code generation reliability.
Unique: Implements pass@k metric using combinatorial formula (1 - C(n-c,k)/C(n,k)) rather than empirical sampling, enabling exact calculation without Monte Carlo approximation. Supports configurable k values and aggregation across problems, enabling multi-level analysis (per-problem, per-category, dataset-wide).
vs alternatives: More statistically rigorous than simple accuracy metrics because it accounts for sampling variance and model reliability; enables fair comparison between models with different single-shot accuracy but similar pass@k. Combinatorial calculation is faster and more precise than empirical sampling approaches.
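A minimal implementation of the combinatorial estimator in its standard numerically stable form (the version popularized by the Codex paper); evalplus's own code may differ in detail.

```python
# Unbiased pass@k estimator, written as a running product to avoid
# overflow from large binomial coefficients.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = total samples, c = samples passing all tests, k = samples considered."""
    if n - c < k:
        return 1.0                          # every size-k draw contains a passing sample
    # 1 - C(n-c, k) / C(n, k) expanded as a product over the top k terms.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples, 37 correct -> pass@1 = 0.185, pass@10 is roughly 0.88.
print(pass_at_k(200, 37, 1), pass_at_k(200, 37, 10))
```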
performance evaluation via cpu instruction counting with evalperf dataset
Measures code efficiency using CPU instruction counting rather than wall-clock time, enabling reproducible performance evaluation across different hardware. The EvalPerf dataset generates performance-exercising inputs with exponential scaling (2^1 to 2^26 elements) to stress-test algorithmic complexity. The profiling pipeline uses Linux perf counters to measure CPU instructions, filters tasks based on profile size, compute cost, coefficient of variation, and performance clustering to select representative benchmarks. This approach isolates algorithmic efficiency from hardware variance, enabling rigorous comparison of code quality across models and implementations.
Unique: Uses CPU instruction counting via Linux perf counters rather than wall-clock time, enabling reproducible performance evaluation independent of hardware variance. Generates performance-exercising inputs with exponential scaling (2^1 to 2^26) to stress-test algorithmic complexity, and filters tasks based on profile size, compute cost, and coefficient of variation to select representative benchmarks.
vs alternatives: More reproducible than wall-clock timing because instruction counts are hardware-independent; enables fair comparison across different machines and cloud environments. Exponential input scaling reveals algorithmic complexity issues that constant-size inputs would miss, providing deeper insight into code quality.
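A rough sketch of instruction counting by wrapping a command in Linux perf stat. The real EvalPerf profiler differs; this assumes perf is installed and that the kernel allows reading hardware counters (kernel.perf_event_paranoid set low enough).

```python
# Count user-space CPU instructions for a command via `perf stat`.
import subprocess

def count_instructions(cmd: list[str]) -> int:
    """Run cmd under perf and return its user-space instruction count."""
    proc = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", "instructions:u", *cmd],
        capture_output=True, text=True,
    )
    # perf's CSV mode writes rows like "<count>,,instructions:u,..." to stderr;
    # the count may read "<not counted>" if the event is unavailable.
    for line in proc.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and "instructions" in fields[2]:
            return int(fields[0])
    raise RuntimeError("instruction counter not found in perf output")

# Exponentially scaled inputs (2^1 .. 2^26 in the full dataset) make counts like
# these separate, e.g., an O(n log n) solution from an O(n^2) one.
print(count_instructions(["python3", "-c", "sorted(range(1 << 16))"]))
```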
structured dataset management with problem metadata and test case organization
Organizes MBPP+ problems as structured JSON with metadata fields: base_input (original test cases), plus_input (extended test cases), contract (input validation constraints), atol (floating-point tolerance), canonical_solution (ground truth implementation), and entry_point (function name). The dataset management system (evalplus/data/) loads problems from JSON, validates metadata consistency, and provides programmatic access to test cases and solutions. This structured approach enables systematic evaluation: problems can be filtered by category, difficulty, or test coverage; test cases can be aggregated across base and plus inputs; and metadata enables reproducible evaluation across different tools and frameworks.
Unique: Implements structured JSON-based dataset organization with explicit separation of base_input (original tests) and plus_input (extended tests), enabling selective evaluation and test coverage analysis. Metadata includes contract (input validation), atol (floating-point tolerance), canonical_solution, and entry_point, providing complete problem specification for reproducible evaluation.
vs alternatives: More structured than flat test files because metadata is explicitly organized and queryable; enables filtering, aggregation, and analysis that would be difficult with unstructured test data. JSON format is human-readable and tool-agnostic, supporting integration with external evaluation frameworks.
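A hedged sketch of programmatic access when the dataset is stored as JSON Lines: the file name is an assumption, and the field names follow the metadata described above.

```python
# Load problems from a JSONL file, then filter and aggregate on metadata fields.
import json

with open("MbppPlus.jsonl") as f:          # path is illustrative, not the shipped filename
    problems = [json.loads(line) for line in f if line.strip()]

# Filter: tasks whose checks require a floating-point tolerance.
float_tasks = [p["task_id"] for p in problems if p.get("atol", 0) > 0]

# Aggregate: total base vs. extended test inputs across the dataset.
n_base = sum(len(p["base_input"]) for p in problems)
n_plus = sum(len(p["plus_input"]) for p in problems)
print(f"{len(problems)} tasks, {n_base} base inputs, {n_plus} extended inputs, "
      f"{len(float_tasks)} tasks using atol")
```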
command-line evaluation pipeline with end-to-end orchestration
Provides CLI tools (evalplus.evaluate, evalplus.codegen, evalplus.evalperf, evalplus.sanitize) that orchestrate the complete evaluation workflow: code generation → sanitization → correctness evaluation → optional performance evaluation. The evaluate command executes generated code against MBPP+ test suites with configurable timeouts and memory limits, producing pass@k metrics and detailed result logs. The codegen command generates code from specified LLM providers. The evalperf command measures performance via instruction counting. The sanitize command preprocesses code before evaluation. This modular CLI design enables researchers to run evaluation pipelines without writing custom code, supporting reproducible benchmarking and result sharing.
Unique: Implements modular CLI tools (evaluate, codegen, evalperf, sanitize) that can be chained together or run independently, enabling flexible evaluation workflows. Each tool handles a specific stage of the pipeline (generation, sanitization, evaluation, performance measurement), allowing users to customize workflows without writing code.
vs alternatives: More user-friendly than programmatic APIs for researchers who prefer command-line tools; enables reproducible evaluation without custom code. Modular design allows selective use of components (e.g., evaluate without codegen) for flexibility.
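A hedged example of chaining the CLI stages from Python. The flag names below are assumptions based on the tool names above, and the sample path is illustrative; check each command's --help before relying on them.

```python
# Chain generation -> sanitization -> evaluation via subprocess (flags assumed).
import subprocess

samples = "samples.jsonl"                   # illustrative path for generated samples

# 1. Generate MBPP+ completions with a chosen backend/model.
subprocess.run(["evalplus.codegen", "--model", "gpt-4o-mini",
                "--dataset", "mbpp", "--backend", "openai"], check=True)

# 2. Normalize the raw generations and extract entry-point functions.
subprocess.run(["evalplus.sanitize", "--samples", samples], check=True)

# 3. Score the sanitized samples against base + plus tests, producing pass@k.
subprocess.run(["evalplus.evaluate", "--dataset", "mbpp",
                "--samples", samples], check=True)
```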
+2 more capabilities