multi-source pretraining data composition with documented curation rules
Dolma aggregates 3 trillion tokens from 7 heterogeneous sources (Common Crawl, The Stack, peS2o, Project Gutenberg, Wikipedia, Wikibooks, C4) with fully documented filtering criteria, deduplication methods, and mixing ratios. The composition system enables researchers to understand exactly which data proportions and quality thresholds were applied, making training runs reproducible across different teams and hardware configurations. Data is segmented into pretraining, mid-training, and post-training pools to support staged model development.
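The documented mixing ratios can be applied at sampling time. A minimal sketch, assuming hypothetical per-source weights (illustrative stand-ins, not Dolma's published proportions):

```python
import random

# Hypothetical mixing ratios over the 7 sources; illustrative stand-ins,
# not Dolma's documented per-source proportions.
MIXING_RATIOS = {
    "common_crawl": 0.60,
    "the_stack":    0.15,
    "pes2o":        0.10,
    "c4":           0.08,
    "wikipedia":    0.04,
    "gutenberg":    0.02,
    "wikibooks":    0.01,
}

def sample_source(rng: random.Random) -> str:
    """Pick the source of the next training document per the mixing ratios."""
    r = rng.random()
    cumulative = 0.0
    for source, weight in MIXING_RATIOS.items():
        cumulative += weight
        if r < cumulative:
            return source
    return source  # guard against floating-point rounding at the tail

rng = random.Random(0)
counts = {s: 0 for s in MIXING_RATIOS}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
```

Because the ratios are published, any team can reproduce the same effective data mixture regardless of hardware or sharding.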
Unique: Dolma's distinguishing feature is comprehensive documentation of data curation decisions (exact filtering rules, deduplication methods via Duplodocus, mixing ratios) released alongside trained models (OLMo 7B, 32B), enabling full reproducibility. Most pretraining datasets (C4, The Pile, ROOTS) document composition at a high level but not the specific algorithmic rules applied. Dolma's integration with OlmoTrace enables tracing model outputs back to source training documents, providing data provenance that most datasets lack.
vs alternatives: Dolma provides greater transparency and reproducibility than C4 or The Pile through documented filtering rules and deduplication specifications, while offering more diverse source coverage (code + academic + literary) than web-only datasets like C4. At 3T tokens it is also larger than ROOTS (roughly 1.6T tokens), though less frequently updated than continuously refreshed web crawl datasets.
source-specific data filtering and quality control
Dolma implements source-specific filtering pipelines using documented rules applied through tools like Datamap-rs (large-scale data cleaning) and Duplodocus (fuzzy deduplication). Each of the 7 sources undergoes tailored quality filtering appropriate to its characteristics: web crawl data is filtered for language and content quality, code is filtered for license and syntax validity, academic papers are filtered by venue quality, and literary text is filtered for encoding and completeness. Filtering rules are explicitly documented to enable researchers to understand and potentially modify quality thresholds.
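The per-source dispatch pattern described above can be sketched as follows; the individual rules (language/length, license allow-list, venue allow-list) are simplified stand-ins for Dolma's documented criteria:

```python
# Sketch of per-source filter dispatch; the rules below are simplified
# stand-ins for Dolma's documented, source-specific criteria.
APPROVED_VENUES = {"ACL", "NeurIPS", "ICML"}                  # hypothetical allow-list
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-3-clause"}   # hypothetical allow-list

def filter_web(doc: dict) -> bool:
    # Language and minimum-length checks stand in for content-quality rules.
    return doc.get("lang") == "en" and len(doc.get("text", "")) >= 100

def filter_code(doc: dict) -> bool:
    # License allow-list stands in for license-validity filtering.
    return doc.get("license", "").lower() in PERMISSIVE_LICENSES

def filter_academic(doc: dict) -> bool:
    # Venue allow-list stands in for venue-quality filtering.
    return doc.get("venue") in APPROVED_VENUES

FILTERS = {
    "common_crawl": filter_web,
    "the_stack":    filter_code,
    "pes2o":        filter_academic,
}

def keep(source: str, doc: dict) -> bool:
    """Apply the filter tailored to the document's source."""
    return FILTERS[source](doc)
```

Keeping one filter function per source makes each rule set independently auditable and modifiable, which is the property the documentation aims to support.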
Unique: Dolma's filtering approach is distinguished by source-specific quality criteria (e.g., academic papers filtered by venue quality, code filtered by license validity) rather than uniform filtering across all data. The integration of Duplodocus for fuzzy deduplication (vs. exact-match deduplication) is more sophisticated than simple hash-based approaches, enabling detection of near-duplicate content across sources. Documentation of exact filtering rules is rare in published datasets.
vs alternatives: Dolma's documented, source-specific filtering is more transparent than C4's coarsely documented filtering heuristics, and more sophisticated than The Pile's simple language detection, though it relies on external tools (Datamap-rs, Duplodocus) rather than providing integrated filtering infrastructure like some commercial training platforms.
post-training data pipeline integration with open instruct for instruction tuning
Dolma's post-training data pool is designed for use with Open Instruct, Allen AI's instruction tuning framework, enabling seamless transition from pretraining to instruction tuning. The post-training pool contains instruction-formatted data (format unspecified) optimized for alignment and capability refinement. Integration with Open Instruct provides data loading, instruction formatting, and training orchestration for the post-training phase. This integration enables researchers to implement the full training pipeline (pretraining → continued pretraining → instruction tuning) using coordinated Dolma and Open Instruct components.
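Since the post-training format is unspecified, the following is a generic chat-style record shape commonly used for instruction tuning; every field name here is illustrative, not Dolma's or Open Instruct's actual schema:

```python
import json

# A generic chat-style instruction record; Dolma's actual post-training
# format is unspecified, so these field names are illustrative only.
record = {
    "messages": [
        {"role": "user", "content": "Summarize Dolma in one sentence."},
        {"role": "assistant", "content": "Dolma is an openly documented 3T-token corpus."},
    ],
}

def to_training_text(rec: dict) -> str:
    """Flatten a chat record into a single tagged training string."""
    return "\n".join(f"<|{m['role']}|> {m['content']}" for m in rec["messages"])

jsonl_line = json.dumps(record)  # instruction sets are commonly stored as JSONL
```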
Unique: Dolma's post-training data pool with Open Instruct integration provides a coordinated instruction tuning solution that is rare in open-source ecosystems. Most datasets provide pretraining data only; Dolma's inclusion of post-training data and integration with Open Instruct enables end-to-end training without external instruction data curation. The simultaneous release of Dolma, OlmoCore, and Open Instruct provides a complete, reproducible training pipeline.
vs alternatives: Dolma's integrated post-training pipeline is more complete than datasets providing pretraining data only, though it is less flexible than using generic instruction datasets (e.g., Alpaca, ShareGPT) that support multiple training frameworks.
staged training data segmentation for pretraining, mid-training, and post-training phases
Dolma provides three distinct data pools optimized for different training stages: a pretraining pool for initial model training on diverse, general-purpose text; a mid-training pool for continued pretraining with potentially different source ratios or quality thresholds; and a post-training pool for instruction tuning and alignment. This segmentation enables researchers to apply different data compositions at different training phases without managing separate datasets, and allows for staged training strategies where model behavior is refined through targeted data exposure.
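A staged run over the three pools can be expressed as an ordered configuration. The pool names follow the phases described above; the token budgets are hypothetical placeholders:

```python
# Sketch of a staged-training configuration; pool names follow the three
# phases above, while token budgets are hypothetical placeholders.
STAGES = [
    {"name": "pretraining",   "pool": "dolma-pretraining",   "token_budget": 3_000_000_000_000},
    {"name": "mid_training",  "pool": "dolma-mid-training",  "token_budget": 100_000_000_000},
    {"name": "post_training", "pool": "dolma-post-training", "token_budget": 1_000_000_000},
]

def training_plan(stages: list) -> list:
    """Return (pool, budget) pairs in phase order for a staged run."""
    return [(s["pool"], s["token_budget"]) for s in stages]
```

Because each phase points at a separate pool, swapping in different mid-training ratios or quality thresholds requires changing only one entry, not repartitioning a monolithic corpus.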
Unique: Dolma's segmentation into three explicit training phases (pretraining, mid-training, post-training) with separate downloadable pools is uncommon in published datasets. Most datasets provide a single corpus; Dolma's phase-specific segmentation enables researchers to implement sophisticated multi-stage training strategies without custom data partitioning. The integration with Open Instruct for post-training suggests end-to-end training pipeline support.
vs alternatives: Dolma's staged data segmentation is more structured than generic datasets like C4 or The Pile, which provide single corpora; it is comparable to commercial training platforms that offer phase-specific data curation, but with full transparency and reproducibility.
data provenance tracing from trained models back to source documents
Dolma integrates with the OlmoTrace tool, which enables researchers to trace model outputs and behaviors back to the specific source documents in the training dataset that contributed to those outputs. This capability works by preserving document identifiers from preprocessing through training, allowing queries like 'which documents influenced this model's response?' or 'what is the source distribution of training data for this capability?'. This document-level tracking enables post-hoc analysis of model behavior in terms of training data composition.
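One simplified way to support this kind of query is an n-gram inverted index from training text to document identifiers. This is only a sketch of why identifiers must survive preprocessing; OlmoTrace's actual machinery is more sophisticated:

```python
from collections import defaultdict

# Simplified sketch of output-to-document tracing via an n-gram inverted
# index. OlmoTrace's actual implementation is more sophisticated; this only
# illustrates why document identifiers must survive preprocessing.

def ngrams(tokens, n=3):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_index(corpus, n=3):
    """Map every n-gram in the corpus to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for gram in ngrams(text.split(), n):
            index[gram].add(doc_id)
    return index

def trace(output, index, n=3):
    """Return training documents sharing any n-gram with the model output."""
    hits = set()
    for gram in ngrams(output.split(), n):
        hits |= index.get(gram, set())
    return hits
```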
Unique: OlmoTrace's document-level provenance tracing from model outputs back to training data is a rare capability in open-source LLM ecosystems. Most models provide no tracing mechanism; some provide source-level statistics but not output-specific tracing. Dolma's integration of traceability at the dataset level (maintaining document identifiers through preprocessing) enables this capability without post-hoc model modification.
vs alternatives: Dolma's provenance tracing via OlmoTrace provides transparency unavailable in most open models (which provide no tracing) and exceeds the source-level statistics provided by some datasets like C4, though it is less detailed than commercial model cards that sometimes include data attribution.
code-specific data extraction and quality filtering from the stack
Dolma incorporates The Stack, a large-scale source code dataset, with code-specific filtering and quality control. Code data is filtered for license compliance (removing GPL and other restrictive licenses), syntax validity, and repository quality. The Stack integration provides access to diverse programming languages and coding patterns without requiring separate code dataset curation. Code is deduplicated using the same Duplodocus fuzzy deduplication as other sources, enabling detection of near-duplicate code across repositories.
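The two checks named above, license compliance and syntax validity, can be sketched for Python files as follows; the license deny-list is a hypothetical stand-in for Dolma's documented rules, and `ast.parse` is an illustrative syntax check for Python only:

```python
import ast

# Sketch of license-compliance and syntax-validity checks for Python files.
# The deny-list is a hypothetical stand-in for Dolma's documented license
# rules; per-language syntax checks would replace ast.parse for other code.
RESTRICTIVE_LICENSES = {"gpl-2.0", "gpl-3.0", "agpl-3.0", "lgpl-2.1"}

def keep_python_file(source: str, license_id: str) -> bool:
    """Drop restrictively licensed files and files that fail to parse."""
    if license_id.lower() in RESTRICTIVE_LICENSES:
        return False
    try:
        ast.parse(source)
    except SyntaxError:
        return False
    return True
```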
Unique: Dolma's integration of The Stack with explicit license filtering (removing GPL) is distinctive because it enables commercial use of code-trained models while maintaining open-source compliance. Most code datasets (e.g., CodeParrot, GitHub Copilot training data) do not document license filtering or provide GPL-free variants. The combination of license filtering with fuzzy deduplication across code repositories is more sophisticated than simple exact-match deduplication.
vs alternatives: Dolma's code data provides license-compliant code training without GPL restrictions, making it suitable for commercial models, whereas The Pile and other generic datasets either include GPL code or lack code data entirely. However, it is smaller and less frequently updated than GitHub's full code index.
academic paper text extraction and venue-based quality filtering via pes2o
Dolma incorporates peS2o, a large-scale academic paper dataset, with venue-based quality filtering that prioritizes papers from high-impact conferences and journals. Academic papers are filtered by publication venue quality (e.g., top-tier conferences, high-impact journals) rather than citation count or other metrics, ensuring training data includes rigorous, peer-reviewed research. Paper text is extracted from PDFs and structured metadata, enabling models to learn from scientific writing and domain-specific knowledge. Academic data is deduplicated using the same fuzzy deduplication as other sources.
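Venue-based filtering typically requires normalizing venue strings before matching an allow-list. A minimal sketch, assuming a small illustrative allow-list rather than peS2o's actual venue criteria:

```python
import re

# Sketch of venue-based filtering with name normalization; the allow-list
# is a small illustrative stand-in for peS2o's documented venue criteria.
APPROVED_VENUES = {"neurips", "icml", "acl", "nature"}

def normalize_venue(raw: str) -> str:
    """Lowercase and strip years/punctuation: 'NeurIPS 2023' -> 'neurips'."""
    return re.sub(r"[^a-z ]", "", raw.lower()).strip()

def keep_paper(meta: dict) -> bool:
    return normalize_venue(meta.get("venue", "")) in APPROVED_VENUES
```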
Unique: Dolma's use of venue-based quality filtering for academic papers (rather than citation count or other metrics) is distinctive because it prioritizes peer-review rigor over popularity, potentially reducing bias toward highly-cited but potentially flawed work. Integration of peS2o with explicit venue quality criteria is rare in published datasets; most datasets either exclude academic content or include it without quality filtering.
vs alternatives: Dolma's academic data provides peer-reviewed, venue-filtered content that exceeds generic datasets like C4 or The Pile in academic quality, though it is smaller and less frequently updated than full academic paper indices like arXiv or PubMed.
web text filtering and deduplication across common crawl and c4 sources
Dolma integrates web text from both Common Crawl (raw web crawl) and C4 (pre-filtered web text), with documented filtering rules for language detection, content quality, and toxicity. Web data undergoes source-specific filtering appropriate to its characteristics: Common Crawl data is filtered more aggressively due to lower baseline quality, while C4 data benefits from existing filtering. All web data is deduplicated using Duplodocus fuzzy deduplication to remove near-duplicate content across domains. The combination of two web sources with different filtering approaches provides diversity while maintaining quality standards.
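Fuzzy (near-duplicate) detection, as opposed to exact-match hashing, can be sketched with Jaccard similarity over word shingles, a common basis for such tools; Duplodocus's exact algorithm may differ:

```python
# Sketch of near-duplicate detection via Jaccard similarity over word
# shingles, a common basis for fuzzy deduplication; Duplodocus's exact
# algorithm may differ.

def shingles(text: str, k: int = 3) -> set:
    words = text.split()
    if len(words) < k:
        return {tuple(words)}
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(doc_a: str, doc_b: str, threshold: float = 0.7) -> bool:
    """Flag documents whose shingle overlap meets or exceeds the threshold."""
    return jaccard(shingles(doc_a), shingles(doc_b)) >= threshold
```

Unlike exact hashing, this catches documents that differ by a few words, which is why fuzzy deduplication removes near-duplicates that hash-based approaches miss.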
Unique: Dolma's use of two complementary web sources (Common Crawl and C4) with source-specific filtering is distinctive because it balances raw coverage (Common Crawl) with pre-filtered quality (C4), providing diversity while maintaining standards. Most datasets use either raw crawls or pre-filtered sources, but not both. The documented filtering rules (though not detailed in available materials) enable reproducibility that most web datasets lack.
vs alternatives: Dolma's dual-source web data provides greater transparency and reproducibility than C4 alone, while offering broader coverage than C4-only datasets, though it is smaller and less frequently updated than continuously-refreshed web crawl datasets.
+3 more capabilities