C4 (Colossal Clean Crawled Corpus)
Dataset · Free
Google's cleaned Common Crawl corpus used to train T5.
Capabilities (8 decomposed)
large-scale English text corpus filtering and deduplication
Medium confidence
Distills raw Common Crawl data into roughly 750GB of cleaned English text through a multi-stage heuristic filtering pipeline: it keeps only lines that end in terminal punctuation and contain at least five words, discards pages with fewer than three sentences, removes any three-sentence span that occurs more than once in the corpus, filters offensive/adult content via keyword blacklists, and restricts to English-language documents via language detection. The filtering approach uses rule-based heuristics rather than learned classifiers, making it reproducible and auditable but potentially less adaptive to domain-specific quality signals.
Uses transparent, rule-based heuristic filtering (short-page removal, repeated-span deduplication, keyword blacklists) instead of learned classifiers, making the filtering pipeline fully reproducible and auditable; this contrasts with proprietary datasets that use opaque ML-based quality scoring
More transparent and reproducible than closed corpora like OpenAI's WebText, but less adaptive to quality signals than datasets using learned classifiers; widely benchmarked, so downstream model performance is well understood
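As an illustration of how such a rule-based pipeline works, here is a minimal Python sketch of the page-level heuristics described above. The thresholds follow the published C4 recipe, but the `clean_page` helper and the crude regex sentence splitter are illustrative simplifications, not the actual implementation:

```python
import re

# Minimal sketch of C4-style page cleaning: keep lines that end in
# terminal punctuation and have >= 5 words, then discard pages with
# fewer than 3 sentences. The real pipeline has additional rules.
TERMINAL_PUNCT = ('.', '!', '?', '"')

def clean_page(text: str, min_sentences: int = 3, min_words: int = 5):
    """Return cleaned page text, or None if the whole page is discarded."""
    kept = [
        line.strip()
        for line in text.splitlines()
        if line.strip().endswith(TERMINAL_PUNCT)
        and len(line.split()) >= min_words
    ]
    cleaned = "\n".join(kept)
    # Crude sentence count via regex; the real pipeline is more careful.
    if len(re.split(r"[.!?]\s+", cleaned)) < min_sentences:
        return None
    return cleaned
```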
multilingual text corpus with 108-language support
Medium confidence
Provides a multilingual variant of C4 covering 108 languages extracted from Common Crawl using language detection heuristics. Each language subset is independently filtered and deduplicated using the same style of heuristic pipeline as the English version, enabling researchers to train or evaluate multilingual models without manually collecting and cleaning language-specific corpora. Language detection is performed at the document level, so mixed-language documents are assigned to a single language based on the dominant detected language.
Applies consistent heuristic-based filtering across 108 languages using a single pipeline, enabling direct comparability across language subsets; most multilingual corpora either focus on high-resource languages or use language-specific filtering strategies
Broader language coverage than most multilingual alternatives, but language-agnostic filtering may introduce quality inconsistencies across languages compared to language-specific curation approaches
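A hedged loading example using the Hugging Face Datasets library; the per-language config name ("nl" below) is an assumption, so check the allenai/c4 dataset card for the exact config names before relying on it:

```python
from datasets import load_dataset

# Stream one language subset of the multilingual variant from the Hub.
# The "nl" config name is assumed; see the allenai/c4 dataset card.
dutch = load_dataset("allenai/c4", "nl", split="train", streaming=True)
for example in dutch.take(3):
    print(example["text"][:200])
```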
news-domain-specific text corpus with distribution matching
Medium confidence
Provides a 'realnewslike' variant of C4 that restricts the corpus to content from the news domains used in the RealNews dataset, yielding a subset with statistical properties similar to real news corpora. Documents are identified via a news-domain URL whitelist, and the original heuristic-based filtering pipeline is otherwise preserved, so the variant constrains the corpus to a specific domain distribution without changing how individual pages are cleaned.
Applies domain-specific filtering to create a news-aligned corpus variant while preserving the original heuristic-based filtering pipeline; enables researchers to study domain-specific pre-training effects without collecting domain-specific data separately
More accessible than manually curated news corpora, but less precise than corpora built from actual news archives with editorial quality control
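The variant is exposed as a separate config on the Hugging Face Hub; a minimal loading sketch, assuming the "realnewslike" config name from the allenai/c4 dataset card:

```python
from datasets import load_dataset

# Stream the news-aligned variant; each example carries text, url,
# and timestamp fields, so the source domain can be inspected.
news = load_dataset("allenai/c4", "realnewslike", split="train", streaming=True)
first = next(iter(news))
print(first["url"], first["text"][:100])
```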
streaming and batch dataset access with Hugging Face integration
Medium confidence
Provides C4 as a Hugging Face Dataset with native support for both streaming (on-the-fly loading without a full download) and batch downloading via the Hugging Face Datasets library. The dataset ships with pre-defined train and validation splits, supports efficient sampling and shuffling, and integrates with Hugging Face's caching and versioning system. Streaming uses HTTP range requests to fetch only the required data, while batch access downloads and caches locally for repeated use.
Integrates C4 directly into Hugging Face Datasets ecosystem with native streaming support, enabling researchers to use C4 without downloading the full 750GB; most alternative large corpora require manual download and preprocessing
More convenient than manually downloading and preprocessing Common Crawl, but streaming adds latency compared to local SSD access; better for exploratory work, less ideal for production training at scale
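A short sketch of both access modes using the standard Datasets API (streaming iteration with buffer-based shuffling versus a cached local download):

```python
from datasets import load_dataset

# Streaming: iterate over C4 without downloading the full corpus.
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
shuffled = stream.shuffle(seed=42, buffer_size=10_000)  # buffer-based shuffle
for example in shuffled.take(2):
    print(example["text"][:100])

# Batch: download and cache a split locally for repeated epochs.
# The validation split is used here because the train split is ~750GB.
c4_val = load_dataset("allenai/c4", "en", split="validation")
print(len(c4_val))
```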
reproducible dataset versioning and split management
Medium confidence
Manages C4 dataset versions and train/validation splits through Hugging Face's versioning system, enabling reproducible access to specific dataset versions and splits. Each version is immutable and tied to a specific Git commit, so researchers can reproduce results by specifying the exact dataset revision. Splits are pre-defined (train, validation) and deterministic, so every user retrieves identical splits.
Provides immutable, Git-backed versioning for the entire dataset through Hugging Face Hub, ensuring that researchers can pin exact dataset versions in their training code; most large corpora lack this level of version control
Better reproducibility than manually downloaded datasets, but less flexible than custom dataset management systems that support arbitrary splits and transformations
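To pin a version in training code, the `revision` argument of `load_dataset` accepts a branch, tag, or commit SHA; the SHA mentioned in the comment below is a placeholder, not a real commit:

```python
from datasets import load_dataset

# Pin an exact dataset revision for reproducibility. "main" floats with
# the repo; substitute a specific commit SHA to make the data immutable
# across reruns.
c4 = load_dataset(
    "allenai/c4",
    "en",
    split="validation",
    revision="main",  # e.g. revision="<commit-sha>" (placeholder)
)
```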
offensive content filtering with keyword-based heuristics
Medium confidence
Filters documents containing offensive, adult, or inappropriate content using a combination of keyword blacklists, pattern matching, and heuristic rules. The filtering is applied during the initial corpus curation and removes documents that match offensive content patterns, reducing but not eliminating inappropriate content. The approach is transparent and rule-based, making it auditable but potentially less effective than learned classifiers at catching nuanced offensive content.
Uses transparent, rule-based keyword filtering for offensive content instead of learned classifiers, making the filtering auditable but potentially less effective; enables researchers to understand exactly what content was filtered
More transparent than proprietary datasets with opaque filtering, but less effective at catching nuanced offensive content than datasets using learned classifiers or human review
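A tiny, self-contained demonstration of the idea using the Datasets `.filter()` API; the one-entry BLOCKLIST is a hypothetical stand-in for the much longer word list C4 actually uses:

```python
from datasets import Dataset

# Hypothetical one-entry blocklist; the real list has hundreds of terms.
BLOCKLIST = frozenset({"badword"})

def is_clean(example):
    # Reject any document whose tokens intersect the blocklist.
    return frozenset(example["text"].lower().split()).isdisjoint(BLOCKLIST)

docs = Dataset.from_dict(
    {"text": ["a perfectly clean sentence.", "contains badword here."]}
)
print(docs.filter(is_clean)["text"])  # -> ['a perfectly clean sentence.']
```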
three-sentence-span deduplication across the corpus
Medium confidence
Removes duplicate text by discarding all but one occurrence of any three-sentence span that appears more than once anywhere in the corpus. Deduplication operates below the document level, so documents containing repeated spans are edited in place rather than dropped wholesale. This approach reduces data leakage and redundancy in the training corpus, improving model generalization by ensuring that the model sees diverse text rather than repeated content.
Applies span-level deduplication across the entire ~750GB corpus, reducing redundancy while preserving document-level structure; most alternative corpora deduplicate at the document level or not at all
More thorough than document-level deduplication at removing redundancy, but computationally expensive and capable of breaking document coherence by removing spans mid-document
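A minimal in-memory sketch of this span-level deduplication (hashing each three-sentence span and dropping spans already seen); the hashing scheme here is an assumption, and production pipelines would run this as a distributed job:

```python
import hashlib

# Global registry of span hashes seen anywhere in the corpus so far.
seen_spans = set()

def dedupe_page(sentences):
    """Drop sentences belonging to any previously seen three-sentence span."""
    keep = [True] * len(sentences)
    for i in range(len(sentences) - 2):
        span = " ".join(sentences[i:i + 3])
        digest = hashlib.md5(span.encode("utf-8")).hexdigest()
        if digest in seen_spans:
            keep[i:i + 3] = [False] * 3  # remove the repeated span
        else:
            seen_spans.add(digest)
    return [s for s, k in zip(sentences, keep) if k]

page = ["One.", "Two.", "Three.", "One.", "Two.", "Three."]
print(dedupe_page(page))  # -> ['One.', 'Two.', 'Three.']
```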
short-document filtering with length-based heuristics
Medium confidence
Removes pages that fall below minimum length thresholds (in the published C4 recipe, pages with fewer than three sentences are discarded and lines with fewer than five words are dropped) to filter out low-quality, stub, or boilerplate content. This filtering is applied during corpus curation and reduces the proportion of short, low-information-density documents in the training corpus. The approach is simple and transparent but may remove legitimate short-form content like abstracts, summaries, or social media posts.
Uses simple, transparent length-based thresholds to remove low-quality stub content, making the filtering auditable and reproducible; most alternative corpora use more complex quality heuristics
Simpler and more transparent than learned quality classifiers, but less effective at identifying low-quality content that is not simply short
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with C4 (Colossal Clean Crawled Corpus), ranked by overlap. Discovered automatically through the match graph.
CulturaX
6.3T token multilingual dataset across 167 languages.
fineweb
Dataset by HuggingFaceFW. 637,939 downloads.
RedPajama v2
30 trillion token web dataset with 40+ quality signals per document.
mC4
Multilingual web corpus covering 101 languages.
FineFineWeb
Dataset by m-a-p. 555,725 downloads.
c4
Dataset by allenai. 698,456 downloads.
Best For
- ✓ researchers training large language models from scratch
- ✓ teams evaluating pre-training dataset quality and filtering strategies
- ✓ organizations needing reproducible, auditable data pipelines without proprietary ML classifiers
- ✓ researchers training multilingual or cross-lingual models
- ✓ teams building NLP systems for non-English languages
- ✓ organizations studying language representation and transfer learning
- ✓ researchers studying domain-specific pre-training effects
- ✓ teams building NLP systems for news analysis, summarization, or classification
Known Limitations
- ⚠ Heuristic filtering may miss nuanced quality issues that learned classifiers would catch; no adaptive filtering based on downstream task performance
- ⚠ 750GB is large but may be insufficient for training state-of-the-art models (GPT-3 used ~300B tokens; newer models use trillions)
- ⚠ English-only by design; the multilingual variant exists but is a separate dataset with different filtering characteristics
- ⚠ Deduplication removes repeated three-sentence spans, not whole documents, so near-duplicate documents may remain
- ⚠ Offensive content filtering relies on keyword lists, which have known biases and false positive/negative rates
- ⚠ Language detection is document-level only; mixed-language documents are assigned to a single language, losing code-switching and multilingual content
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Google's cleaned version of Common Crawl used to train the original T5 model. 750GB of English text filtered with heuristic rules: removed short pages, deduped sentences, filtered offensive content, and restricted to English. Despite being superseded by newer datasets, C4 remains one of the most studied and benchmarked pre-training datasets. Available in English, multilingual (108 languages), and realnewslike variants on Hugging Face.
Categories
Alternatives to C4 (Colossal Clean Crawled Corpus)
Hugging Face
The GitHub for AI: 500K+ models, datasets, Spaces, Inference API, hub for open-source AI.