via “multi-source pretraining data composition with documented curation rules”
Allen AI's 3T-token dataset for fully reproducible LLM training.
Unique: Dolma's distinguishing feature is comprehensive documentation of its data curation decisions (exact filtering rules, deduplication methods via Duplodocus, mixing ratios), released alongside the trained models (OLMo 7B and 32B) to enable full reproducibility. Most pretraining datasets (C4, The Pile, ROOTS) document their composition at a high level but not the specific algorithmic rules applied. Dolma's integration with OLMoTrace makes it possible to trace model outputs back to source training documents, a level of data provenance most datasets lack.
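The "documented curation rules" idea can be sketched as a tiny deterministic pipeline. This is illustrative only: the rule names, thresholds, and mixing ratios below are invented for the example and are not Dolma's actual published values, and the real toolkit performs fuzzy and URL-level deduplication at scale rather than the exact-hash dedup shown here. The point is that every decision is a recorded constant, so anyone can re-run the pipeline and get the same corpus.

```python
import hashlib

# Hypothetical rules, published alongside the dataset so curation is reproducible.
FILTER_RULES = {
    "min_words": 5,            # drop very short documents
    "max_symbol_ratio": 0.3,   # drop documents dominated by non-alphanumeric chars
}

# Recorded with the release so the source mix can be reproduced (illustrative values).
MIXING_RATIOS = {"web": 0.8, "code": 0.1, "academic": 0.1}

def passes_filters(text: str) -> bool:
    """Apply the documented, deterministic filtering rules."""
    words = text.split()
    if len(words) < FILTER_RULES["min_words"]:
        return False
    symbols = sum(1 for c in text if not (c.isalnum() or c.isspace()))
    return symbols / max(len(text), 1) <= FILTER_RULES["max_symbol_ratio"]

def dedup_key(text: str) -> str:
    """Exact-duplicate key: hash of whitespace/case-normalized text."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def curate(docs):
    """Filter, then drop exact duplicates, in one documented pass."""
    seen, kept = set(), []
    for doc in docs:
        if not passes_filters(doc["text"]):
            continue
        key = dedup_key(doc["text"])
        if key in seen:
            continue
        seen.add(key)
        kept.append(doc)
    return kept
```

Because the rules are plain data rather than ad-hoc judgment calls, publishing `FILTER_RULES` and `MIXING_RATIOS` with the corpus is what makes a downstream rerun bit-for-bit comparable.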
vs others: Dolma offers greater transparency and reproducibility than C4 or The Pile through its documented filtering rules and deduplication specifications, and more diverse source coverage (code + academic + literary) than web-only datasets like C4. It is larger than ROOTS (3T tokens vs ROOTS's roughly 1.6TB of text) but primarily English where ROOTS is multilingual, and it is less frequently updated than continuously refreshed web-crawl datasets.