StarCoder Data
Dataset · Free · 783 GB curated code dataset from 86 languages with PII redaction.
Capabilities: 9 decomposed
multi-language code corpus assembly with permissive licensing filtering
Medium confidence: Aggregates 783 GB of source code across 86 programming languages from public repositories, applying automated licensing detection and filtering to retain only permissively licensed code (MIT, Apache 2.0, BSD, etc.). Uses repository metadata parsing and SPDX license identifier matching to exclude GPL and proprietary code at ingestion time, ensuring legal compliance for downstream model training without manual curation.
Implements automated SPDX-based license filtering at scale across 86 languages rather than manual curation, enabling legal compliance without human bottleneck. Combines repository-level metadata with file-level license detection to maximize precision.
More legally defensible than unfiltered generic code scrapes because it enforces permissive licensing constraints upfront, reducing downstream compliance risk for commercial model training.
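The filtering step described above can be sketched as an allow-list check against declared SPDX identifiers. This is an illustrative sketch, not the actual BigCode pipeline; the allow-list contents and the `license_spdx` metadata field name are assumptions.

```python
# Hypothetical sketch of SPDX-based permissive-license filtering.
# The allow-list and metadata field names are assumptions, not the
# real BigCode pipeline configuration.
PERMISSIVE_SPDX = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC", "Unlicense"}

def is_permissive(repo_meta: dict) -> bool:
    """True if the repository's declared SPDX identifier is on the allow-list."""
    return repo_meta.get("license_spdx") in PERMISSIVE_SPDX

repos = [
    {"name": "a", "license_spdx": "MIT"},
    {"name": "b", "license_spdx": "GPL-3.0-only"},   # excluded at ingestion
    {"name": "c", "license_spdx": "Apache-2.0"},
]
kept = [r["name"] for r in repos if is_permissive(r)]
# kept == ["a", "c"]
```

A real pipeline would combine this repository-level check with file-level license header detection, since individual files can carry licenses that differ from the repository's declared one.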
near-deduplication with semantic code similarity detection
Medium confidence: Removes near-duplicate code blocks using a combination of exact string matching and semantic similarity hashing (likely MinHash or a similar probabilistic data structure) to identify functionally equivalent code across the corpus. Operates at multiple granularities: file-level, function-level, and snippet-level, reducing redundant training signal while preserving diverse implementations of the same algorithm.
Applies multi-granularity deduplication (file, function, snippet levels) with semantic hashing rather than exact-match-only, capturing near-duplicates that simple string matching would miss. Likely uses language-aware tokenization to normalize syntax before similarity computation.
More aggressive than exact-match-only deduplication, reducing training data by roughly 15-25% while preserving algorithmic diversity, improving model convergence without sacrificing generalization.
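The MinHash approach mentioned above can be sketched in a few lines: shingle each file into overlapping token windows, compute a fixed-length signature of per-seed minimum hashes, and estimate Jaccard similarity by signature agreement. This is a minimal sketch assuming token-level shingling; the real pipeline's shingle size, hash count, and LSH banding are not specified here.

```python
import hashlib

def shingles(code: str, k: int = 5) -> set:
    """Overlapping k-token windows of a source file (assumed shingle scheme)."""
    tokens = code.split()
    return {" ".join(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

def minhash_signature(items: set, num_perm: int = 64) -> list:
    """One min-hash per seed; equal entries between two signatures
    estimate the Jaccard similarity of the underlying shingle sets."""
    return [
        min(int(hashlib.md5(f"{seed}:{it}".encode()).hexdigest(), 16) for it in items)
        for seed in range(num_perm)
    ]

def jaccard_estimate(sig_a: list, sig_b: list) -> float:
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Files whose estimated similarity exceeds a threshold (commonly around 0.7-0.85 in dedup pipelines) would be clustered and all but one representative dropped.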
personally identifiable information redaction with multi-pattern detection
Medium confidence: Scans the code corpus for PII including email addresses, IP addresses, API keys, AWS credentials, and other secrets using regex-based pattern matching and entropy-based detection heuristics. Redacts or removes identified PII before dataset release, protecting developer privacy and preventing accidental credential leakage into trained models. Operates as a preprocessing pipeline stage with configurable sensitivity thresholds.
Combines multi-pattern regex detection (emails, IPs, API keys) with entropy-based heuristics for unknown credential formats, operating as a preprocessing stage rather than post-hoc filtering. Likely includes language-specific parsers for docstrings and comments where credentials are commonly documented.
More comprehensive than simple regex-only approaches because it detects entropy-based anomalies (e.g., random-looking strings in code) that indicate credentials, reducing false negatives while maintaining reasonable false-positive rates through threshold tuning.
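The two detection modes described above (pattern matching plus entropy heuristics) can be sketched as follows. The regex, length cutoff, and entropy threshold are illustrative assumptions; the actual StarCoder Data redaction rules are not reproduced here.

```python
import math
import re

# Illustrative email pattern; a production redactor would use many such patterns.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def shannon_entropy(s: str) -> float:
    """Bits per character; random-looking credentials score high."""
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def redact_line(line: str, min_len: int = 20, entropy_threshold: float = 4.0) -> str:
    """Replace known patterns, then flag long high-entropy tokens as secrets."""
    line = EMAIL_RE.sub("<EMAIL>", line)
    out = []
    for tok in line.split(" "):
        if len(tok) >= min_len and shannon_entropy(tok) > entropy_threshold:
            out.append("<SECRET>")
        else:
            out.append(tok)
    return " ".join(out)
```

The entropy branch is what catches credential formats no regex anticipates; the threshold is exactly the tunable knob the capability description refers to for trading false negatives against false positives.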
exact deduplication with content-addressable storage indexing
Medium confidence: Removes exact duplicate files and code blocks using cryptographic hashing (SHA-256 or similar) to create a content-addressable index, enabling O(1) duplicate detection across the entire 783 GB corpus. Operates after near-deduplication to catch remaining exact matches, using a distributed hash table or database index to track seen content hashes and eliminate redundant entries before final dataset assembly.
Uses cryptographic content hashing (SHA-256) for O(1) duplicate detection across massive corpus, enabling deterministic, auditable deduplication. Operates as final deduplication stage after semantic near-deduplication, catching exact matches efficiently.
More scalable than in-memory set-based deduplication because hash index can be persisted to disk and queried incrementally, enabling processing of corpora larger than available RAM without sacrificing performance.
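The persisted hash index described above can be sketched with SQLite standing in for a distributed key-value store: the digest column's primary-key constraint makes duplicate detection a single indexed insert. This is an assumed storage choice, not the actual pipeline's backend.

```python
import hashlib
import sqlite3

def make_index(path: str = ":memory:") -> sqlite3.Connection:
    """On-disk (or in-memory) content-addressable index of seen SHA-256 digests."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS seen (digest TEXT PRIMARY KEY)")
    return db

def is_new(db: sqlite3.Connection, content: bytes) -> bool:
    """Insert the content's digest; a primary-key violation means a duplicate."""
    digest = hashlib.sha256(content).hexdigest()
    try:
        db.execute("INSERT INTO seen (digest) VALUES (?)", (digest,))
        return True
    except sqlite3.IntegrityError:
        return False
```

Because the index lives on disk, the scan can be resumed or run incrementally over a corpus far larger than RAM, which is the scalability point the description makes.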
jupyter notebook code-text interleaving extraction
Medium confidence: Parses Jupyter notebook JSON structure to extract code cells and markdown cells as interleaved code-text sequences, preserving the pedagogical context and narrative flow of notebook-based code examples. Converts notebook format to flat code-text pairs suitable for training, handling cell execution order, cell dependencies, and markdown explanations as contextual metadata. Enables models to learn from documented, explained code rather than isolated snippets.
Preserves code-text interleaving from Jupyter notebooks as training data rather than extracting code cells in isolation, enabling models to learn documentation-code alignment patterns. Treats markdown explanations as contextual metadata rather than discarding them.
Captures pedagogical value that pure code corpora miss; models trained on interleaved code-text learn to generate documented code and understand code-explanation relationships, improving downstream code generation quality and interpretability.
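The interleaving extraction can be sketched directly against the .ipynb JSON schema (cells with `cell_type` and `source` fields): markdown cells become comments so the narrative stays adjacent to the code it explains. The comment-prefix convention is an illustrative choice, not necessarily the serialization StarCoder Data uses.

```python
import json

def notebook_to_interleaved(nb_json: str) -> str:
    """Flatten an .ipynb document into interleaved text and code,
    keeping markdown cells as comments next to the code they document."""
    nb = json.loads(nb_json)
    parts = []
    for cell in nb.get("cells", []):
        source = cell.get("source", "")
        if isinstance(source, list):          # .ipynb stores source as line lists
            source = "".join(source)
        if cell.get("cell_type") == "markdown":
            parts.append("\n".join("# " + line for line in source.splitlines()))
        elif cell.get("cell_type") == "code":
            parts.append(source)
    return "\n".join(parts)
```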
developer opt-out mechanism with repository-level exclusion
Medium confidence: Implements a registry system allowing developers to request exclusion of their code from the training dataset, respecting developer autonomy and addressing concerns about AI training on personal projects. Operates via GitHub issue or form submission to BigCode, with opt-out requests matched against repository metadata (owner, URL, commit hash) to identify and remove affected code before dataset release. Enables retroactive removal if requested after initial inclusion.
Provides explicit opt-out mechanism allowing developers to request code exclusion after publication, respecting developer autonomy and addressing ethical concerns about non-consensual AI training. Operates via transparent, developer-facing process rather than hidden curation.
More ethically defensible than datasets with no opt-out because it acknowledges developer agency and provides recourse for those uncomfortable with AI training on their code, though less comprehensive than opt-in approaches.
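The metadata matching described above reduces to a set-membership filter over normalized (owner, repository) pairs. The field names below are assumptions about a generic opt-out registry export, not BigCode's actual schema.

```python
def apply_opt_outs(files: list, opt_out_repos: list) -> list:
    """Drop files whose (owner, repo) matches an opt-out request.
    Matching is case-insensitive; field names are illustrative."""
    excluded = {(r["owner"].lower(), r["name"].lower()) for r in opt_out_repos}
    return [
        f for f in files
        if (f["owner"].lower(), f["repo"].lower()) not in excluded
    ]

files = [
    {"owner": "alice", "repo": "proj", "path": "a.py"},
    {"owner": "bob", "repo": "tool", "path": "b.py"},
]
opt_outs = [{"owner": "Alice", "name": "proj"}]
kept = apply_opt_outs(files, opt_outs)
# only bob's file survives
```

Running this filter before every versioned release is what makes retroactive removal possible: a new opt-out entry simply excludes more files from the next snapshot.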
language-stratified corpus organization with balanced sampling
Medium confidence: Organizes the 783 GB corpus into language-specific subsets (86 languages) with metadata annotations enabling stratified sampling and balanced representation during model training. Tracks language distribution statistics and enables selective dataset construction (e.g., 'give me Python + JavaScript + Go code only') without reprocessing the entire corpus. Supports both language-balanced and language-weighted sampling strategies for different training objectives.
Organizes corpus into 86 language-specific subsets with metadata enabling stratified sampling and selective dataset construction, rather than treating all code as homogeneous. Supports both language-balanced and language-weighted sampling for different training objectives.
Enables fine-grained control over language representation during training, allowing teams to build specialized models (e.g., Python-only) or multilingual models with custom language weights, whereas generic corpora force take-it-or-leave-it language distribution.
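The weighted-sampling strategy above can be sketched as drawing a per-language quota from each subset. The quota-rounding and with-replacement choices here are illustrative assumptions about how such a sampler might be built.

```python
import random

def sample_by_language(corpus: dict, weights: dict, n: int, seed: int = 0) -> list:
    """Draw n examples from language-stratified subsets.
    corpus: {lang: [examples]}; weights: {lang: fraction} summing to ~1.
    Sampling is with replacement, a simplifying assumption."""
    rng = random.Random(seed)
    out = []
    for lang, w in weights.items():
        quota = round(n * w)
        out.extend(rng.choices(corpus[lang], k=quota))
    return out

corpus = {"python": ["py1", "py2", "py3"], "go": ["go1", "go2"]}
batch = sample_by_language(corpus, {"python": 0.7, "go": 0.3}, n=10)
```

Setting equal weights gives the language-balanced regime; weights proportional to subset size reproduce the natural distribution, and anything in between yields a custom mix.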
github issue and commit message corpus inclusion
Medium confidence: Extends the code corpus with GitHub issue descriptions and commit messages as supplementary training data, capturing natural language explanations of code changes, bug reports, and feature requests. Extracts issue titles, descriptions, and commit messages from the GitHub API or repository archives, linking them to corresponding code changes where possible. Enables models to learn code-change-explanation alignment and understand domain-specific terminology from real-world software development discussions.
Includes GitHub issues and commit messages as supplementary training data alongside code, enabling models to learn code-change-explanation alignment and domain-specific terminology from real-world development discussions. Treats natural language explanations as first-class training data rather than discarding them.
Richer training signal than code-only corpora because models learn to associate code changes with natural language explanations, improving downstream code generation quality and enabling models to generate meaningful commit messages and issue descriptions.
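The code-change-explanation alignment can be made concrete by serializing each commit as a before/message/after triple with sentinel tokens. The token names below are illustrative, written in the style of commit-conditioned training formats; they are not verified against StarCoder's actual special tokens.

```python
def format_commit(before: str, message: str, after: str) -> str:
    """Serialize a commit as one training sequence: old code, the natural
    language explanation, then the new code. Sentinel token names are
    illustrative assumptions, not verified StarCoder vocabulary."""
    summary = message.strip().splitlines()[0]  # first line = commit summary
    return f"<commit_before>{before}<commit_msg>{summary}<commit_after>{after}"

example = format_commit(
    before="def area(r): return 3.14 * r * r",
    message="Use math.pi for precision\n\nFixes rounding in area().",
    after="import math\ndef area(r): return math.pi * r * r",
)
```

A model trained on such sequences sees the explanation sandwiched between the two code states, which is what lets it learn to generate commit messages from diffs and vice versa.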
dataset versioning and reproducibility tracking
Medium confidence: Maintains versioned snapshots of the dataset with full provenance tracking, including data processing pipeline parameters, deduplication thresholds, PII redaction patterns, and opt-out exclusions applied to each version. Enables reproducible model training by documenting exact dataset composition, enabling researchers to cite specific dataset versions and understand how dataset changes affect model behavior. Supports rollback to previous versions and comparison of dataset statistics across versions.
Maintains versioned snapshots with full provenance tracking (processing parameters, deduplication thresholds, opt-outs) enabling reproducible model training and dataset auditing. Treats dataset composition as a first-class artifact requiring version control and documentation.
More reproducible than static dataset releases because it documents exact processing parameters and enables version-specific citations, allowing researchers to understand how dataset changes affect model behavior and supporting scientific reproducibility.
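A provenance record of the kind described above can be sketched as a manifest that pins the processing parameters alongside an order-independent content fingerprint over the member files. The schema is an illustrative assumption, not the dataset's actual metadata format.

```python
import hashlib

def version_manifest(version: str, params: dict, file_digests: list) -> dict:
    """Provenance record for one dataset snapshot: the processing parameters
    plus a deterministic fingerprint over all member-file digests.
    Sorting makes the fingerprint independent of file enumeration order."""
    fingerprint = hashlib.sha256("".join(sorted(file_digests)).encode()).hexdigest()
    return {"version": version, "params": params, "fingerprint": fingerprint}

manifest = version_manifest(
    "v1.2",
    {"dedup": "minhash", "jaccard_threshold": 0.85, "pii_entropy": 4.0},
    ["aa11", "bb22", "cc33"],
)
```

Two snapshots with identical parameters and identical file sets produce identical fingerprints, which is what makes a version citable and an accidental composition change detectable.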
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with StarCoder Data, ranked by overlap. Discovered automatically through the match graph.
The Stack v2
67 TB permissively licensed code dataset across 600+ languages.
StarCoderData
250 GB curated code dataset for StarCoder training.
Private AI
Multi-modal PII detection and redaction API for 49 languages.
RedPajama v2
30 trillion token web dataset with 40+ quality signals per document.
xCodeEval
Dataset by NTU-NLP-sg. 696,087 downloads.
CulturaX
6.3T token multilingual dataset across 167 languages.
Best For
- ✓ ML researchers training code LLMs at scale
- ✓ Organizations building proprietary code models with legal compliance requirements
- ✓ Open-source projects needing permissively-licensed training corpora
- ✓ Teams training code models with limited compute budgets
- ✓ Projects optimizing training efficiency and convergence speed
- ✓ Researchers studying code diversity and redundancy in open-source ecosystems
- ✓ Organizations publishing code datasets publicly
- ✓ Teams training models that will be deployed in production environments
Known Limitations
- ⚠ Excludes GPL-licensed code, limiting representation of Linux kernel and GNU ecosystem patterns
- ⚠ License detection relies on repository metadata and SPDX identifiers; misclassified licenses may slip through
- ⚠ No dynamic re-licensing support: the dataset is a static snapshot and doesn't update as upstream licenses change
- ⚠ Language distribution is biased toward popular ecosystems (Python, JavaScript, Java); niche languages are underrepresented
- ⚠ Semantic similarity detection may over-aggressively deduplicate legitimate variations (e.g., different error handling strategies)
- ⚠ Granularity trade-offs: function-level deduplication may miss snippet-level redundancy; file-level may be too coarse
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
BigCode's curated code training dataset containing 783 GB of permissively licensed code from 86 programming languages plus GitHub issues and Git commits. Includes Jupyter notebooks with text-code interleaving. Meticulous data processing: near-deduplication, PII redaction (emails, IP addresses, API keys), and exact deduplication. Used to train the original StarCoder model. Opt-out mechanism respects developers who wish to exclude their code from AI training.
Categories
Alternatives to StarCoder Data
The GitHub for AI — 500K+ models, datasets, Spaces, Inference API, hub for open-source AI.