multi-language web-scale document collection with 40+ quality annotations
Aggregates 100+ billion deduplicated documents (30 trillion tokens) from 84 CommonCrawl dumps across 5 languages (English, German, French, Spanish, Italian). Each document is pre-annotated with 40+ quality signals, including perplexity scores, deduplication hashes, content-classifier labels, and toxicity ratings, all computed with a standardized pipeline. The architecture processes raw CommonCrawl HTML through text extraction, deduplication, and multi-dimensional quality scoring, so downstream users can apply custom filtering strategies without reprocessing the raw data (a loading sketch follows this entry).
Unique: Processes 84 CommonCrawl dumps (claimed to be the most complete coverage vs. C4, RefinedWeb, Dolma, SlimPajama) with 40+ pre-computed quality annotations per document, enabling fine-grained data curation research without requiring users to reprocess raw CommonCrawl. Open-source processing scripts allow reproducibility and custom filtering strategies on a standardized base dataset.
vs alternatives: Larger scale (30 trillion tokens vs. C4's 156B tokens, RedPajama-1T's 1T tokens) with richer quality annotations (40+ signals vs. minimal metadata in competitors) and multilingual coverage, making it superior for comparative curation research and training diverse language models.
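As a concrete starting point, the sketch below pulls a small slice of the corpus and inspects one document's pre-computed annotations. It assumes the dataset is published on the Hugging Face Hub as togethercomputer/RedPajama-Data-V2 with a small "sample" configuration and fields named raw_content and quality_signals; the exact repo id, configuration names, and field names should be checked against the dataset card.

```python
# Minimal sketch: load a small sample of RedPajama-V2 and look at one document's
# pre-computed quality signals. Repo id, config name, and field names are
# assumptions based on the public dataset card and may differ.
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="sample",           # assumed small sample configuration
    split="train",
    trust_remote_code=True,  # the loader script ships with the dataset repo
)

doc = ds[0]
print(doc["raw_content"][:200])   # assumed field: extracted document text
print(doc["quality_signals"])     # assumed field: the 40+ per-document signals
```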
document-level deduplication with hash-based matching
Implements deduplication across 100+ billion documents using hash-based matching to identify and remove duplicate content from CommonCrawl. The pipeline computes a deduplication hash for each document and filters the raw 100+ trillion token corpus down to 30 trillion deduplicated tokens. This approach preserves document boundaries (unlike token-level deduplication) and produces deterministic, reproducible results across reprocessing runs (a minimal hashing sketch follows this entry).
Unique: Uses document-level, hash-based deduplication (preserving document boundaries) rather than token-level or fuzzy matching, enabling reproducible filtering and transparent deduplication hashes that users can inspect and verify. Processes all 84 CommonCrawl dumps with a consistent deduplication methodology.
vs alternatives: Document-level deduplication is more interpretable and reproducible than token-level approaches, and the published deduplication hashes enable users to understand and verify which documents were removed, unlike proprietary datasets that hide deduplication decisions.
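The following is a minimal sketch of the document-level, hash-based idea described above: hash a lightly normalized copy of each document and keep only the first occurrence of each hash. It illustrates the general technique, not the project's actual implementation, which is defined by the published scripts.

```python
# Minimal sketch of document-level exact deduplication via content hashes.
import hashlib

def doc_hash(text: str) -> str:
    """Hash a whitespace-normalized, lowercased document so trivially reformatted copies collide."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def deduplicate(docs):
    seen, unique = set(), []
    for doc in docs:
        h = doc_hash(doc)
        if h not in seen:          # first occurrence wins; later duplicates are dropped
            seen.add(h)
            unique.append(doc)
    return unique

docs = ["Hello   world!", "hello world!", "Something else."]
print(deduplicate(docs))  # ['Hello   world!', 'Something else.']
```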
free and open-source corpus access
Provides the entire 30 trillion token corpus, processing scripts, and quality annotations as free, open-source resources with no licensing restrictions. Users can download, modify, redistribute, and use the data for any purpose including commercial applications. This open approach enables broad research access and community-driven improvements without vendor lock-in.
Unique: Provides the complete 30 trillion token corpus with processing scripts as free, open-source resources with no licensing restrictions, whereas competitors (C4, RefinedWeb) may have usage restrictions or require commercial licensing.
vs alternatives: Eliminates licensing costs and vendor lock-in through open-source distribution, enabling broad access for academic and commercial use versus competitors with restricted access or licensing requirements.
perplexity-based quality scoring for language model fitness
Computes a perplexity score for each document using a reference language model, enabling quantitative assessment of text quality and language-model fitness. Perplexity measures how well the reference model predicts the document; lower perplexity indicates text closer to the reference distribution, which is commonly used as a proxy for quality and coherence. These pre-computed scores let users filter documents by a quality threshold without running inference themselves, and support studies of the relationship between perplexity and downstream model performance (a filtering sketch follows this entry).
Unique: Pre-computes perplexity scores for 100+ billion documents, eliminating the computational cost of running inference for quality assessment. Enables comparative studies of how perplexity thresholds affect training outcomes without requiring users to implement their own scoring pipeline.
vs alternatives: Provides pre-computed perplexity scores (eliminating inference cost) whereas competitors like C4 use heuristic filters (URL patterns, line-ending ratios); perplexity is a more principled, model-based quality metric but requires understanding of the reference model used.
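Below is a minimal sketch of threshold filtering on pre-computed perplexity. The signal name ccnet_perplexity and the cutoff of 300 are illustrative assumptions; real signal names and sensible thresholds come from the published schema and the reference model used.

```python
# Minimal sketch: keep documents whose pre-computed perplexity is below a cutoff.
# "ccnet_perplexity" and the threshold are illustrative assumptions.
def keep_low_perplexity(doc: dict, max_ppl: float = 300.0) -> bool:
    ppl = doc["quality_signals"].get("ccnet_perplexity")
    return ppl is not None and ppl <= max_ppl

docs = [
    {"raw_content": "A coherent, well-formed paragraph about astronomy.",
     "quality_signals": {"ccnet_perplexity": 120.0}},
    {"raw_content": "zx qq zz vv qq zx ...",
     "quality_signals": {"ccnet_perplexity": 950.0}},
]

kept = [d for d in docs if keep_low_perplexity(d)]
print(len(kept))  # 1: only the low-perplexity document survives the filter
```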
content classification and toxicity annotation across documents
Annotates each document with content-classifier labels and toxicity ratings, enabling category-based filtering and safety-aware data curation. The pipeline applies pre-trained classifiers to categorize document content (e.g., news, forums, documentation) and to compute toxicity scores. These annotations are pre-computed and stored with each document, so users can filter by content type or toxicity threshold without running inference themselves (a filtering sketch follows this entry).
Unique: Pre-computes both content classifiers and toxicity ratings for 100+ billion documents, enabling multi-dimensional safety and content-based filtering without requiring users to implement or run their own classifiers. Supports comparative studies of how content filtering affects model behavior.
vs alternatives: Provides pre-computed toxicity and content annotations (eliminating inference cost) whereas most web datasets require downstream filtering; enables safety-aware curation at scale without custom classifier implementation.
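A minimal sketch of safety-aware filtering on these annotations follows. The field names (toxicity, content_category), the allowed categories, and the 0.2 cutoff are illustrative assumptions rather than the dataset's actual schema.

```python
# Minimal sketch: filter on pre-computed toxicity and content-category labels.
# Field names, categories, and the threshold are illustrative assumptions.
ALLOWED_CATEGORIES = {"news", "forums", "documentation"}

def is_safe(doc: dict, max_toxicity: float = 0.2) -> bool:
    signals = doc["quality_signals"]
    return (
        signals.get("toxicity", 1.0) <= max_toxicity      # treat missing scores as unsafe
        and signals.get("content_category") in ALLOWED_CATEGORIES
    )

doc = {"quality_signals": {"toxicity": 0.05, "content_category": "news"}}
print(is_safe(doc))  # True
```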
open-source reproducible data processing pipeline
Publishes end-to-end processing scripts on GitHub that convert raw CommonCrawl HTML into deduplicated, annotated documents. The pipeline is fully open-source, so users can understand, verify, and reproduce the data-processing methodology. Scripts handle HTML-to-text conversion, deduplication, quality-signal computation, and filtering, allowing researchers to reprocess data with custom parameters or apply the same methodology to new CommonCrawl dumps (a schematic sketch of the stages follows this entry).
Unique: Publishes complete, open-source processing scripts enabling full reproducibility and transparency of data processing methodology. Users can inspect, verify, and reapply the pipeline to new data, unlike proprietary datasets where processing is opaque.
vs alternatives: Open-source pipeline enables reproducibility and auditability vs. datasets such as C4 or RefinedWeb whose processing is only partially documented or not released as runnable code; it also enables research on the data processing methodology itself.
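To make the pipeline shape concrete, here is a schematic sketch of the stages described above (HTML-to-text extraction, deduplication, signal computation). The function bodies are placeholders standing in for the real open-source scripts, not a reproduction of them.

```python
# Schematic sketch of the pipeline stages; placeholder logic, not the actual scripts.
import hashlib
import re

def extract_text(html: str) -> str:
    # Placeholder: the real pipeline uses a proper HTML-to-text extractor.
    return " ".join(re.sub(r"<[^>]+>", " ", html).split())

def annotate(text: str) -> dict:
    # Placeholder signals; the real pipeline computes 40+ of them per document.
    return {
        "doc_word_count": len(text.split()),
        "dedup_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

def process_dump(html_docs):
    seen = set()
    for html in html_docs:
        text = extract_text(html)
        signals = annotate(text)
        if signals["dedup_hash"] in seen:   # document-level exact deduplication
            continue
        seen.add(signals["dedup_hash"])
        yield {"raw_content": text, "quality_signals": signals}

for record in process_dump(["<p>Hello, CommonCrawl!</p>", "<p>Hello, CommonCrawl!</p>"]):
    print(record["quality_signals"])  # the duplicate page is dropped, one record prints
```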
fine-grained data curation via quality signal filtering
Enables users to apply custom filtering strategies by combining 40+ pre-computed quality signals (perplexity, toxicity, content-classifier labels, deduplication hashes, etc.). Rather than shipping a pre-filtered, 'ready-to-train' dataset, RedPajama v2 exposes the raw signals and lets users define their own filtering logic. This architecture supports comparative studies of curation strategies and lets organizations apply domain-specific or value-aligned filtering without reprocessing the base dataset (a combined-filter sketch follows this entry).
Unique: Provides 40+ pre-computed quality signals enabling fine-grained, user-defined curation strategies rather than pre-filtered datasets. This architecture supports comparative research on curation methodology and enables organizations to apply custom filtering without reprocessing the base dataset.
vs alternatives: Enables comparative curation research (studying how different filtering strategies affect outcomes) whereas competitors provide pre-filtered datasets; gives users control over filtering logic but requires more implementation effort.
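A minimal sketch of combining several signals into one user-defined filter rule follows; signal names and thresholds are illustrative assumptions, and real curation strategies would be expressed against the published schema.

```python
# Minimal sketch: a user-defined curation rule composed from several pre-computed
# signals. Signal names and thresholds are illustrative assumptions.
RULES = {
    "ccnet_perplexity": lambda v: v <= 300.0,   # keep fluent text
    "toxicity":         lambda v: v <= 0.2,     # keep low-toxicity text
    "doc_word_count":   lambda v: v >= 50,      # drop very short pages
}

def passes(doc: dict) -> bool:
    signals = doc["quality_signals"]
    # A document passes only if every signal it carries satisfies its rule.
    return all(rule(signals[name]) for name, rule in RULES.items() if name in signals)

doc = {"quality_signals": {"ccnet_perplexity": 150.0, "toxicity": 0.01, "doc_word_count": 400}}
print(passes(doc))  # True
```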
multilingual web corpus with consistent annotation across 5 languages
Provides 30 trillion tokens across 5 languages (English, German, French, Spanish, Italian) with quality-signal annotations applied uniformly across all languages. Each language passes through the same deduplication, quality-scoring, and classification pipeline, enabling comparative studies of language-specific data characteristics and training of multilingual models on a standardized base dataset. Language-specific processing details are not documented, but the consistent annotation methodology enables cross-language analysis (a cross-language sketch follows this entry).
Unique: Provides 30 trillion tokens across 5 languages with identical quality signal annotations, enabling comparative studies of language-specific data characteristics and training multilingual models on a standardized base. Consistent annotation methodology across languages enables cross-language analysis.
vs alternatives: Larger multilingual coverage (5 languages, 30 trillion tokens) than RedPajama-1T (English-only, 1 trillion tokens) and most competitors; consistent annotation enables comparative language research, but limited to European languages vs. competitors with broader language coverage.
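Because the annotation schema is the same in every language, a single analysis or filtering function can be reused across all five splits. The sketch below computes a per-language statistic; the language and ccnet_perplexity fields are illustrative assumptions.

```python
# Minimal sketch: one function reused across languages, relying on the uniform
# annotation schema. Field names are illustrative assumptions.
from collections import defaultdict

def mean_perplexity_by_language(docs):
    sums, counts = defaultdict(float), defaultdict(int)
    for doc in docs:
        lang = doc["language"]
        sums[lang] += doc["quality_signals"]["ccnet_perplexity"]
        counts[lang] += 1
    return {lang: sums[lang] / counts[lang] for lang in sums}

docs = [
    {"language": "en", "quality_signals": {"ccnet_perplexity": 120.0}},
    {"language": "de", "quality_signals": {"ccnet_perplexity": 180.0}},
    {"language": "de", "quality_signals": {"ccnet_perplexity": 200.0}},
]
print(mean_perplexity_by_language(docs))  # {'en': 120.0, 'de': 190.0}
```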