permissively licensed source code dataset curation and aggregation
Aggregates 67 TB of source code from the Software Heritage archive, filtering for permissively licensed repositories (MIT, Apache 2.0, BSD, etc.) across 600+ programming languages. Uses automated license detection and validation to ensure legal compliance for model training. Implements a rigorous deduplication pipeline at file and repository levels to eliminate redundant training data and reduce dataset bloat.
Unique: Largest open-source code dataset at 67 TB with automated opt-out governance allowing repository owners to request removal, combined with rigorous deduplication and PII removal pipeline — no other public dataset offers this scale with legal compliance and community control mechanisms
vs alternatives: Larger and more legally compliant than GitHub's CodeSearchNet (14M files) or Google's BigQuery public datasets, with explicit opt-out governance vs. implicit inclusion, and covers 600+ languages vs. Codex training data's undisclosed language distribution
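A minimal sketch of the overall curation flow, assuming a simple record-per-file model; the Record shape, the PERMISSIVE allowlist, and the stage names are illustrative, with each real stage sketched in more detail under its own capability below:

```python
from typing import Iterable, Iterator

# One source file plus metadata, e.g. {"content": "...", "license": "MIT",
# "repository_url": "..."}; this shape is an assumption for the sketch.
Record = dict

PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # illustrative allowlist

def filter_permissive(records: Iterable[Record]) -> Iterator[Record]:
    """Keep only records whose detected license is on the allowlist."""
    for r in records:
        if r.get("license") in PERMISSIVE:
            yield r

def deduplicate(records: Iterable[Record]) -> Iterator[Record]:
    """Drop exact duplicates; see the two-stage dedup sketch further below."""
    seen: set[int] = set()
    for r in records:
        key = hash(r["content"])  # placeholder for SHA-256 + MinHash stages
        if key not in seen:
            seen.add(key)
            yield r

def scrub_pii(records: Iterable[Record]) -> Iterator[Record]:
    """Placeholder; see the PII/secret redaction sketch further below."""
    yield from records

def curate(records: Iterable[Record]) -> Iterator[Record]:
    """License filter -> dedup -> PII scrub, applied lazily."""
    return scrub_pii(deduplicate(filter_permissive(records)))
```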
opt-out governance and repository exclusion management
Implements a community-driven opt-out system where repository owners can request removal of their code from the dataset without resorting to legal takedown notices. Maintains a registry of excluded repositories and re-applies exclusions during dataset updates. Provides transparent governance documentation and a clear submission process for removal requests, balancing open access with creator rights.
Unique: First large-scale code dataset to implement opt-out governance at dataset level rather than relying solely on license compliance, with transparent registry and community submission process — shifts power from dataset creators to code contributors
vs alternatives: More respectful of creator autonomy than GitHub Copilot's training approach (no opt-out) or academic datasets (one-time snapshot), and more scalable than individual DMCA takedowns
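A minimal sketch of how such a registry could be re-applied on every dataset rebuild, assuming a plain-text registry file of one repository URL per line and a repository_url field on each record (both are assumptions):

```python
from pathlib import Path
from typing import Iterable, Iterator

def load_exclusions(registry_path: str) -> set[str]:
    """Read the opt-out registry: one repository URL per line, '#' comments."""
    urls: set[str] = set()
    for line in Path(registry_path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            urls.add(line)
    return urls

def apply_exclusions(records: Iterable[dict], excluded: set[str]) -> Iterator[dict]:
    """Drop every record belonging to an opted-out repository.
    Run on every rebuild so exclusions persist across dataset versions."""
    for record in records:
        if record["repository_url"] not in excluded:
            yield record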
pii and sensitive data removal pipeline
Scans source code for personally identifiable information (email addresses, API keys, SSH keys, credit card patterns, phone numbers) and removes or redacts it before dataset release. Uses regex patterns, entropy-based detection for secrets, and heuristic rules to identify sensitive data. Operates at the file level with configurable sensitivity thresholds to balance data utility against privacy risk.
Unique: Combines regex pattern matching, entropy-based secret detection, and heuristic rules in a unified pipeline with configurable sensitivity — more comprehensive than simple regex-only approaches, but trades off false positive rate against security coverage
vs alternatives: More thorough than GitHub's secret scanning (which only flags known patterns) because it includes entropy-based detection for unknown secret formats, but less accurate than specialized tools like TruffleHog due to its language-agnostic approach
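A sketch of the regex-plus-entropy approach; the token pattern, the 4.5 bits-per-character threshold, and the 20-character minimum length are illustrative assumptions, not the project's actual settings:

```python
import math
import re
from collections import Counter

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
TOKEN_RE = re.compile(r"[A-Za-z0-9+/_=-]{20,}")  # long base64-ish tokens
ENTROPY_THRESHOLD = 4.5                           # bits per character (assumed)

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character; high values suggest secrets."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def redact(line: str) -> str:
    """Replace emails outright; replace long tokens only if high-entropy."""
    line = EMAIL_RE.sub("<EMAIL>", line)
    for token in TOKEN_RE.findall(line):
        if shannon_entropy(token) >= ENTROPY_THRESHOLD:
            line = line.replace(token, "<SECRET>")
    return line

print(redact('API_KEY = "tok_9fQz3xT7bL1mCagR8vNe2K"  # contact: dev@example.com'))
```

Lowering ENTROPY_THRESHOLD catches more secrets at the cost of redacting legitimate identifiers, which is the configurable utility/privacy trade-off described above.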
multi-language source code indexing and retrieval
Indexes 67 TB of source code across 600+ programming languages with language-aware metadata (syntax, file extension, language family). Enables retrieval by language, license, repository, or code patterns. Uses Software Heritage's existing indexing infrastructure as a foundation, augmented with language detection and classification. Supports both bulk download and filtered queries for specific language subsets.
Unique: Leverages Software Heritage's existing language detection and indexing infrastructure, then augments it with BigCode-specific language classification and filtering — avoids reinventing language detection while providing dataset-specific query capabilities
vs alternatives: More comprehensive language coverage (600+ languages) than GitHub's Linguist (500+ languages) and more accessible than Software Heritage's raw API because it's pre-filtered for permissive licenses and deduplicated
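One way to issue a filtered, per-language query, assuming the dataset is published on the Hugging Face Hub as bigcode/the-stack with per-language subdirectories under data/; the repository id, layout, and available fields are assumptions to verify against the dataset card:

```python
from datasets import load_dataset

# Stream only the Python subset instead of downloading all 67 TB.
ds = load_dataset(
    "bigcode/the-stack",     # assumed Hub repository id
    data_dir="data/python",  # assumed per-language directory layout
    split="train",
    streaming=True,
)
for example in ds.take(1):
    print(sorted(example))  # inspect the available metadata fields
```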
content-based deduplication at file and repository levels
Removes duplicate code files and repositories using content hashing (SHA-256 or similar) and fuzzy matching for near-duplicates. Operates in two stages: exact deduplication via hash matching, then fuzzy matching (e.g., Jaccard similarity, typically estimated with MinHash) to catch near-identical code that differs only in formatting or minor edits. Preserves one canonical copy of each unique code pattern while removing redundant training examples.
Unique: Two-stage deduplication combining exact hash matching with fuzzy similarity matching (likely MinHash or Jaccard) to catch both identical and near-identical code — more thorough than single-stage approaches but computationally expensive
vs alternatives: More aggressive deduplication than CodeSearchNet (which uses simple hash matching) because it catches near-duplicates, but less semantic than clone detection tools (which understand code structure) because it's content-based
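A sketch of the two-stage scheme using the datasketch library for MinHash-LSH; the permutation count, the similarity threshold, and the word-level tokenization are assumptions:

```python
import hashlib
import re
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128            # number of MinHash permutations (assumed)
JACCARD_THRESHOLD = 0.85  # near-duplicate cutoff (assumed)

def exact_key(content: str) -> str:
    """Stage 1: content hash for exact duplicate removal."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def minhash_of(content: str) -> MinHash:
    """Stage 2: MinHash over a simple word-token set."""
    m = MinHash(num_perm=NUM_PERM)
    for token in set(re.findall(r"\w+", content.lower())):
        m.update(token.encode("utf-8"))
    return m

def deduplicate(files: dict[str, str]) -> list[str]:
    """Return the paths of files kept after both stages."""
    seen_hashes: set[str] = set()
    lsh = MinHashLSH(threshold=JACCARD_THRESHOLD, num_perm=NUM_PERM)
    kept: list[str] = []
    for path, content in files.items():
        h = exact_key(content)
        if h in seen_hashes:
            continue  # exact duplicate of an earlier file
        seen_hashes.add(h)
        m = minhash_of(content)
        if lsh.query(m):
            continue  # near-duplicate of an earlier file
        lsh.insert(path, m)
        kept.append(path)
    return kept
```

Exact hashing is cheap and removes the bulk of duplicates first, so the expensive LSH index only sees content that survived stage one.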
software heritage archive integration and version control history access
Integrates with Software Heritage's comprehensive archive of 200+ million repositories and their full version control history. Extracts source code snapshots from Software Heritage's Git/Mercurial/SVN repositories, preserving repository metadata (commit history, author info, timestamps). Provides access to code at specific points in time, enabling historical analysis or training on code evolution patterns.
Unique: Leverages Software Heritage's universal code archive (200M+ repositories) as a data source, providing access to code that would be impossible to collect via the GitHub API alone — enables training on archived/deleted repositories and non-GitHub platforms (GitLab, Gitea, etc.)
vs alternatives: More comprehensive than GitHub-only datasets because it includes code from GitLab, Gitea, SourceForge, and other platforms archived by Software Heritage; more legally defensible than web scraping because it uses an established, community-maintained archive
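A hedged sketch of retrieving an origin's most recent archived snapshot via the public Software Heritage REST API; the endpoint path and the date/snapshot response fields follow the public API documentation but should be re-checked, and authentication and rate limiting are ignored:

```python
import requests

API = "https://archive.softwareheritage.org/api/1"

def latest_snapshot(origin_url: str) -> str | None:
    """Return the snapshot id of the most recent archived visit, if any."""
    resp = requests.get(f"{API}/origin/{origin_url}/visits/")
    resp.raise_for_status()
    # Keep only visits that produced a snapshot, then take the newest.
    visits = [v for v in resp.json() if v.get("snapshot")]
    if not visits:
        return None
    return max(visits, key=lambda v: v["date"])["snapshot"]

print(latest_snapshot("https://github.com/python/cpython"))
```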
license compliance and legal metadata tracking
Tracks and validates SPDX license identifiers for each repository, ensuring only permissively licensed code (MIT, Apache 2.0, BSD, etc.) is included. Maintains license metadata alongside code files, enabling downstream users to verify legal compliance. Implements license hierarchy and compatibility checking to handle dual-licensed or complex licensing scenarios.
Unique: Combines automated SPDX detection with manual review and maintains license metadata alongside code, enabling downstream users to verify compliance — more transparent than datasets that simply claim 'permissive licenses' without proof
vs alternatives: More legally rigorous than GitHub's CodeSearchNet (which doesn't validate licenses) and more transparent than Codex training data (which doesn't disclose license filtering at all)
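A simplified sketch of allowlist checking over SPDX expressions: dual licensing (OR) passes if any branch is permissive, while conjunctive licensing (AND) requires every part to be. The allowlist contents are illustrative, and real SPDX expressions (parentheses, WITH clauses) need a proper parser:

```python
# Illustrative allowlist of permissive SPDX identifiers (assumed subset).
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC", "Unlicense"}

def is_permissive(spdx_expr: str) -> bool:
    """Evaluate a flat SPDX expression against the allowlist.
    Toy evaluator: no parentheses or WITH-clause support."""
    expr = spdx_expr.strip()
    if " OR " in expr:   # dual-licensed: any permissive branch suffices
        return any(is_permissive(part) for part in expr.split(" OR "))
    if " AND " in expr:  # conjunctive: every component must be permissive
        return all(is_permissive(part) for part in expr.split(" AND "))
    return expr in PERMISSIVE

assert is_permissive("MIT OR GPL-2.0-only")      # dual-licensed, one branch ok
assert not is_permissive("MIT AND GPL-2.0-only")  # copyleft component blocks it
```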
dataset versioning and reproducibility tracking
Maintains versioned snapshots of the dataset (e.g., v2.0, v2.1) with documented changes between versions (new repositories added, deduplication improvements, PII removal updates). Provides checksums and manifests for reproducibility, enabling researchers to cite specific dataset versions and reproduce results. Tracks dataset lineage and transformation history.
Unique: Maintains semantic versioning and detailed changelogs for dataset releases, enabling researchers to cite specific versions and understand dataset evolution — more rigorous than one-off dataset releases without versioning
vs alternatives: More reproducible than academic datasets that are released once without versioning, and more transparent than commercial datasets (Codex) that don't disclose version history or changes
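A minimal sketch of manifest generation, with one SHA-256 digest per file plus a version tag written as JSON; the field names and file layout are illustrative:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: str, version: str) -> dict:
    """Walk the dataset tree and record a SHA-256 digest per file."""
    entries = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries[str(path.relative_to(root))] = digest
    return {"version": version, "files": entries}

manifest = build_manifest("dataset/", "v2.1")
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```

Researchers can then cite the version tag and verify their local copy by recomputing digests against the published manifest.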