multilingual code generation benchmarking across 17 languages with execution-based validation
Provides a standardized evaluation framework for code generation models that accepts generated code in 17 programming languages (C, C++, C#, Java, Kotlin, Go, Rust, Python, Ruby, PHP, JavaScript, Perl, Haskell, OCaml, Scala, D, Pascal) and validates correctness through actual execution against unit tests via the ExecEval Docker-based execution engine. Uses a centralized problem definition model with src_uid foreign keys linking generated code to shared problem descriptions and unittest_db.json, enabling consistent evaluation across language variants of the same problem.
Unique: Combines 25M training examples across 7,500 unique problems with an execution-based evaluation pipeline (ExecEval) that actually runs generated code in Docker containers against unit tests, rather than relying on static analysis or string matching. The src_uid linking system creates a normalized data model where problem descriptions and tests are stored once and referenced by all language variants, eliminating duplication and ensuring consistency.
vs alternatives: Larger scale (25M examples vs the typical 10-100K) and true execution-based validation across far more languages (all 17) than HumanEval (Python only) or CodeXGLUE, with explicit support for code translation and repair tasks beyond generation.
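For illustration, a minimal sketch of validating one generated solution against its problem's unit tests through a locally running ExecEval container. The endpoint path, payload fields, and verdict names are assumptions made for this sketch, not the documented ExecEval API.

```python
# Sketch only: the ExecEval endpoint, payload schema, and verdict names below
# are assumptions for illustration; consult the ExecEval repository for the
# actual HTTP interface its Docker container exposes.
import json
import requests

EXEC_EVAL_URL = "http://localhost:5000/api/execute_code"  # hypothetical endpoint

def validate_sample(src_uid: str, lang: str, source_code: str,
                    unittest_db_path: str = "unittest_db.json") -> bool:
    """Run one generated solution against the shared unit tests for its problem."""
    with open(unittest_db_path) as f:
        unittest_db = json.load(f)              # src_uid -> list of unit tests
    response = requests.post(EXEC_EVAL_URL, json={
        "language": lang,                       # language/compiler identifier
        "source_code": source_code,
        "unittests": unittest_db[src_uid],
        "limits": {"time": 2.0, "memory_mb": 256},  # assumed limits schema
    })
    response.raise_for_status()
    verdicts = response.json()                  # assumed: one verdict per test
    return all(v.get("exec_outcome") == "PASSED" for v in verdicts)
```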
src_uid-based cross-task dataset linking and problem normalization
Implements a foreign key linking system where all task-specific datasets (program synthesis, code translation, APR, retrieval) reference shared problem definitions via src_uid identifiers. Problem descriptions and unit tests are stored once in centralized problem_descriptions.jsonl and unittest_db.json files, then linked by src_uid to avoid duplication. The Hugging Face datasets API automatically resolves these links during data loading, returning enriched DatasetDict objects with problem context pre-joined to task examples.
Unique: Uses a normalized relational data model (src_uid as foreign key) for a code benchmark, treating problem definitions as a separate entity layer rather than embedding them in each task dataset. This is more sophisticated than typical flat-file benchmark structures and enables consistent multi-task evaluation on identical problems.
vs alternatives: More efficient than duplicating problem descriptions across 7 task datasets (reduces storage by ~30-40%), and enables automatic link resolution via the Hugging Face API, unlike manual CSV joins in CodeXGLUE or HumanEval variants.
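A minimal sketch of that join, using the documented file names; the field layout of each task record (beyond src_uid) is an assumption for illustration.

```python
# Sketch of the src_uid join: problem descriptions and unit tests are stored
# once and looked up by src_uid when enriching task-specific examples.
import json

def load_problem_index(desc_path: str = "problem_descriptions.jsonl",
                       tests_path: str = "unittest_db.json"):
    problems = {}
    with open(desc_path) as f:
        for line in f:
            record = json.loads(line)
            problems[record["src_uid"]] = record   # one entry per unique problem
    with open(tests_path) as f:
        unittest_db = json.load(f)                 # src_uid -> unit tests
    return problems, unittest_db

def enrich(task_example: dict, problems: dict, unittest_db: dict) -> dict:
    """Join a task-specific example with its shared problem definition."""
    uid = task_example["src_uid"]
    return {
        **task_example,
        "problem": problems[uid],
        "unittests": unittest_db.get(uid, []),
    }
```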
hugging face datasets api integration with automatic src_uid resolution
Provides a Python API for loading xCodeEval datasets from Hugging Face Hub (NTU-NLP-sg/xCodeEval) with automatic src_uid-based linking between task datasets and shared problem definitions. The datasets library handles data downloading, caching, and streaming, while the xCodeEval integration automatically joins task examples with problem_descriptions.jsonl and unittest_db.json using src_uid foreign keys. Returns DatasetDict objects with enriched examples ready for model training or evaluation.
Unique: Integrates xCodeEval with Hugging Face datasets library, providing automatic src_uid resolution and streaming support. Treats data loading as a first-class concern with built-in linking logic, rather than requiring manual JSON parsing.
vs alternatives: More convenient than manual Git LFS downloads because it handles caching and link resolution automatically, and it plugs directly into Hugging Face training pipelines rather than requiring custom data loaders.
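A minimal loading sketch; the dataset id comes from the description above, while the config name "program_synthesis" and the fields accessed below are assumptions, so check the dataset card for the exact configuration names.

```python
# Sketch only: the config name, split name, and field names are assumed.
from datasets import load_dataset

ds = load_dataset(
    "NTU-NLP-sg/xCodeEval",
    "program_synthesis",        # task-specific config (assumed name)
    trust_remote_code=True,     # dataset script performs the src_uid joins
)
print(ds)                       # DatasetDict with the available splits
example = ds["validation"][0]   # split name assumed
print(example["src_uid"])

# For very large splits, streaming avoids downloading everything up front:
stream = load_dataset("NTU-NLP-sg/xCodeEval", "program_synthesis",
                      split="train", streaming=True, trust_remote_code=True)
```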
git lfs manual dataset download with selective file access
Provides an alternative data access method using Git LFS for users who prefer direct file access or need selective dataset downloads. Supports cloning the repository with LFS disabled, then pulling specific task files or problem definitions on demand. Useful for custom processing pipelines or environments where Python or the Hugging Face stack is not available, though it requires manual src_uid linking to join task examples with problem definitions.
Unique: Provides Git LFS-based alternative to Hugging Face API, enabling direct file access and selective downloads. Requires manual src_uid linking but offers more control over data access patterns.
vs alternatives: More flexible than the Hugging Face API for selective downloads and custom pipelines, but requires more manual work for src_uid linking and lacks automatic caching and streaming.
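A sketch of the selective-download flow driven from Python with subprocess; the repository URL and include patterns are illustrative, so substitute the files your pipeline actually needs.

```python
# Sketch only: repository URL and --include patterns are illustrative.
import os
import subprocess

REPO_URL = "https://huggingface.co/datasets/NTU-NLP-sg/xCodeEval"

# Clone with LFS smudging disabled so large data files are not fetched eagerly.
env = {**os.environ, "GIT_LFS_SKIP_SMUDGE": "1"}
subprocess.run(["git", "clone", REPO_URL, "xCodeEval"], env=env, check=True)

# Pull only the shared problem definitions (or any specific task files) on demand.
subprocess.run(
    ["git", "lfs", "pull", "--include",
     "problem_descriptions.jsonl,unittest_db.json"],
    cwd="xCodeEval",
    check=True,
)
```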
multi-task evaluation pipeline with three-phase execution model
Implements a standardized three-phase evaluation pipeline (Phase 1: Generation, Phase 2: Execution, Phase 3: Metrics) that applies consistently across all 7 tasks (program synthesis, code translation, APR, tag classification, code compilation, NL-code retrieval, code-code retrieval). Phase 1 generates or retrieves code, Phase 2 executes it via ExecEval or scores retrieval candidates, and Phase 3 aggregates the results into pass@k, MRR, NDCG, or other task-specific metrics. Enables direct comparison of model performance across tasks.
Unique: Defines a unified three-phase evaluation pipeline that applies to all 7 tasks, treating generation, execution, and metric computation as separate concerns. Enables consistent evaluation methodology across diverse task types (generation, translation, retrieval, classification).
vs alternatives: More comprehensive than task-specific evaluation scripts because it provides a unified framework for all 7 tasks, and enables direct comparison of model performance across different task types.
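A minimal skeleton of that separation of concerns; the three callables are placeholders the caller supplies (a model for Phase 1, ExecEval or a retrieval scorer for Phase 2, a metric aggregator such as pass@k or MRR for Phase 3).

```python
# Sketch of the three-phase structure; all callables are user-supplied placeholders.
from typing import Callable, Iterable

def run_pipeline(examples: Iterable[dict],
                 generate: Callable[[dict], list],           # Phase 1: candidates
                 execute: Callable[[dict, list], list],      # Phase 2: verdicts/scores
                 aggregate: Callable[[list], dict]) -> dict:  # Phase 3: metrics
    per_example = []
    for example in examples:
        candidates = generate(example)           # model outputs for this problem
        outcomes = execute(example, candidates)  # ExecEval verdicts or retrieval scores
        per_example.append(outcomes)
    return aggregate(per_example)                # pass@k, MRR, NDCG, ...
```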
program synthesis task generation and evaluation with pass@k metrics
Evaluates code generation models on the program synthesis task by accepting natural language problem descriptions and generating code solutions in any of 17 languages. The evaluation pipeline (Phase 1: Generation, Phase 2: Execution, Phase 3: Metrics) runs generated code against unit tests via ExecEval, computing pass@k metrics (pass@1, pass@10, etc.) that measure the probability of finding a correct solution within k samples. Supports both single-solution and multi-sample evaluation modes for assessing model reliability.
Unique: Implements a three-phase evaluation pipeline (Generation → Execution → Metrics) with explicit pass@k computation that measures the probability of finding a correct solution within k attempts, rather than just binary pass/fail. Supports multi-sample evaluation across 17 languages with language-specific compiler configurations and timeout handling.
vs alternatives: More rigorous than HumanEval's simple pass@k because it handles language-specific compilation errors and timeouts explicitly, and scales to 25M training examples vs HumanEval's 164 problems.
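The pass@k definition above matches the standard unbiased estimator popularized by the HumanEval/Codex evaluation (Chen et al., 2021); a sketch of how Phase 3 could compute it from per-sample ExecEval verdicts.

```python
# Unbiased pass@k: probability that at least one of k samples, drawn from n
# generated samples of which c pass all unit tests, is correct.
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # every size-k draw necessarily contains a correct sample
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 20 samples for one problem, 3 of which pass all unit tests via ExecEval.
print(round(pass_at_k(n=20, c=3, k=1), 4))    # 0.15
print(round(pass_at_k(n=20, c=3, k=10), 4))   # 0.8947
```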
code translation task evaluation with language-pair validation
Evaluates code translation models by accepting source code in one language and generated translations in a target language, then validating functional equivalence through execution against shared unit tests. The translation evaluation pipeline compiles and executes both source and translated code against the same unittest_db.json test cases, comparing outputs to detect translation errors. Supports translation between any pair of the 17 languages (though not all pairs may have training data) and uses language-specific compiler mappings to handle syntax differences.
Unique: Validates code translation by executing both source and target code against identical unit tests and comparing outputs, ensuring functional equivalence rather than syntactic similarity. Uses language-specific compiler mappings to handle the complexity of 17 different compilation environments and their idiosyncrasies.
vs alternatives: More rigorous than BLEU-score-based translation metrics because it validates actual functional correctness through execution, and covers more languages (17 vs the typical 2-4) with explicit compiler integration.
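A minimal sketch of the equivalence check; run_tests is a placeholder standing in for an ExecEval call, and the "output" field on each unit test record is an assumed name.

```python
# Sketch only: run_tests stands in for ExecEval; test-record fields are assumed.
def run_tests(lang: str, code: str, unittests: list) -> list:
    """Placeholder for an ExecEval run: returns one output string per unit test."""
    raise NotImplementedError

def translation_is_equivalent(src_uid: str, src_lang: str, src_code: str,
                              tgt_lang: str, tgt_code: str,
                              unittest_db: dict) -> bool:
    tests = unittest_db[src_uid]
    expected = [t["output"] for t in tests]          # field name assumed
    src_out = run_tests(src_lang, src_code, tests)
    tgt_out = run_tests(tgt_lang, tgt_code, tests)
    # Correct only if the translation reproduces the expected outputs, and hence
    # the behaviour of the original program, on the shared unit tests.
    return tgt_out == expected and src_out == expected
```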
automatic program repair (apr) task generation and evaluation
Evaluates program repair models by providing buggy code snippets and expecting corrected versions that pass unit tests. The APR evaluation pipeline executes repaired code against unittest_db.json test cases, measuring whether the repair successfully fixes the bug without introducing new failures. Supports repairs across all 17 languages and uses the same execution-based validation as program synthesis, enabling direct comparison of repair quality.
Unique: Treats program repair as an executable task where success is measured by unit test passage, rather than syntactic similarity to reference repairs. Integrates with the same ExecEval pipeline as program synthesis, enabling direct performance comparison between generation and repair models.
vs alternatives: More comprehensive than traditional APR benchmarks (Defects4J, QuixBugs) because it covers 17 languages and 7,500 problems vs 395 Java bugs, and uses consistent execution-based metrics across all repair types.
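A minimal sketch of APR scoring under the same execution-based criterion; passes_all_tests is a placeholder standing in for an ExecEval run, and the record field names are assumed.

```python
# Sketch only: passes_all_tests stands in for ExecEval; field names are assumed.
def passes_all_tests(lang: str, code: str, tests: list) -> bool:
    """Placeholder: submit to ExecEval and check that every verdict is PASSED."""
    raise NotImplementedError

def repair_accuracy(apr_examples: list, repairs: list, unittest_db: dict) -> float:
    """Fraction of buggy programs whose repaired version passes every unit test."""
    fixed = 0
    for example, repaired_code in zip(apr_examples, repairs):
        tests = unittest_db[example["src_uid"]]
        if passes_all_tests(example["lang"], repaired_code, tests):
            fixed += 1
    return fixed / max(len(apr_examples), 1)
```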
+5 more capabilities