large-scale multimodal document-image dataset curation and indexing
Provides a 1-trillion-token-scale dataset of PDF documents paired with extracted images and text, curated from Common Crawl with large-scale deduplication and quality filtering. The dataset uses HuggingFace's distributed dataset infrastructure to enable efficient streaming and sampling of 1M+ document-image pairs without requiring full local storage, with metadata indexing for retrieval by document type, language, and content characteristics (a loading sketch follows this entry).
Unique: Combines PDF-level document structure preservation with extracted image-text pairs at 1T token scale, using Common Crawl's distributed crawl infrastructure and HuggingFace's streaming dataset format to avoid centralized storage bottlenecks — most competitors (e.g., LAION) focus on web images or require full downloads
vs alternatives: Larger and more document-focused than LAION-5B or Conceptual Captions, with native PDF structure metadata enabling document-aware training; more accessible than proprietary datasets like Google's internal document corpora due to CC-BY-4.0 licensing and HuggingFace Hub distribution
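A minimal sketch of how such a dataset could be opened in streaming mode and filtered on its metadata; the Hub repository name "org/pdf-document-images" and the column names "language", "doc_type", and "source_url" are illustrative assumptions, not confirmed by the source:

```python
from datasets import load_dataset

# Hypothetical Hub repository and column names; streaming=True yields records
# lazily instead of downloading the full corpus.
ds = load_dataset("org/pdf-document-images", split="train", streaming=True)

# Keep only English academic papers; filter() on an IterableDataset is lazy,
# so nothing is materialized until iteration begins.
english_papers = ds.filter(
    lambda ex: ex["language"] == "en" and ex["doc_type"] == "academic_paper"
)

for record in english_papers.take(5):
    print(record["source_url"], record["doc_type"])
```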
streaming dataset access with lazy loading and memory-efficient batching
Implements HuggingFace Datasets' streaming protocol to load document-image pairs on demand without downloading the full 1T-token dataset, using the memory-mapped Arrow format and distributed sharding across multiple processes. Batching is handled through configurable DataLoader wrappers that respect image tensor dimensions and text sequence lengths, enabling training on machines with limited VRAM through dynamic batch-size adjustment (see the sketch after this entry).
Unique: Uses HuggingFace's Arrow-based streaming format with automatic shard distribution and epoch-level determinism, enabling true lazy loading without requiring dataset mirroring — most competitors (Petastorm, TFRecord) require pre-sharding or local caching
vs alternatives: More memory-efficient than downloading full datasets and faster to iterate than manual data pipelines; integrates natively with PyTorch/TensorFlow without custom serialization code
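A sketch of the kind of DataLoader wrapper described above, assuming each record carries a preprocessed page-image array and a variable-length token list; the field names "image" and "input_ids" and the batch size are illustrative, not taken from the source:

```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset

def collate(batch, pad_id=0):
    """Pad variable-length token sequences and stack fixed-size page images."""
    max_len = max(len(ex["input_ids"]) for ex in batch)
    input_ids = torch.full((len(batch), max_len), pad_id, dtype=torch.long)
    for i, ex in enumerate(batch):
        ids = torch.tensor(ex["input_ids"], dtype=torch.long)
        input_ids[i, : len(ids)] = ids
    images = torch.stack(
        [torch.as_tensor(ex["image"], dtype=torch.float32) for ex in batch]
    )
    return {"input_ids": input_ids, "images": images}

# A streaming IterableDataset from `datasets` can be passed to a DataLoader;
# batch_size is the knob to turn down on machines with limited VRAM.
stream = load_dataset("org/pdf-document-images", split="train", streaming=True)
loader = DataLoader(stream, batch_size=8, collate_fn=collate)

for step, batch in enumerate(loader):
    print(batch["input_ids"].shape, batch["images"].shape)
    if step == 1:
        break
```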
document-image pair extraction and alignment from pdf sources
Extracts text and images from PDF documents using OCR and layout analysis, then aligns the extracted text with the corresponding page images through spatial coordinate matching and text-region association. The extraction pipeline handles multi-page PDFs, preserves document structure metadata (headers, footers, sections), and deduplicates near-identical documents using perceptual hashing and text-similarity metrics to ensure dataset quality (a minimal extraction sketch follows this entry).
Unique: Combines PDF text extraction with rendered page images and spatial alignment metadata at scale, using perceptual hashing for deduplication — most document datasets (DocVQA, RVL-CDIP) are manually curated or use simpler extraction without alignment preservation
vs alternatives: Preserves document structure and layout information unlike text-only datasets; larger and more diverse than manually-curated document benchmarks; automated extraction enables continuous updates from Common Crawl
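A rough sketch of one page's worth of such a pipeline, using PyMuPDF for word-level boxes and page rendering and `imagehash` for near-duplicate detection; the source does not name the actual toolchain, so these libraries are stand-ins:

```python
import io

import fitz  # PyMuPDF
import imagehash
from PIL import Image

def extract_page(pdf_path: str, page_index: int = 0):
    """Return word boxes, a rendered page image, and its perceptual hash."""
    doc = fitz.open(pdf_path)
    page = doc[page_index]

    # Word-level text with bounding boxes:
    # (x0, y0, x1, y1, word, block_no, line_no, word_no)
    words = page.get_text("words")

    # Render the page to an image for the image half of the pair (~144 dpi).
    pix = page.get_pixmap(matrix=fitz.Matrix(2, 2))
    image = Image.open(io.BytesIO(pix.tobytes("png")))

    # Perceptual hash used for near-duplicate page detection.
    phash = imagehash.phash(image)
    return words, image, phash

words, image, phash = extract_page("sample.pdf")
print(len(words), image.size, str(phash))
```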
common crawl-sourced dataset with quality filtering and language detection
Ingests documents from Common Crawl's WARC archives, applies language detection (likely fastText or similar) to filter for English content, and runs quality heuristics (text-to-image ratio, document length, spam detection) to remove low-quality or malicious PDFs. The filtering pipeline is applied during dataset construction, reducing the raw crawl from billions of documents to 1M+ high-quality document-image pairs with reproducible filtering criteria (an illustrative filter sketch follows this entry).
Unique: Applies reproducible quality filtering to Common Crawl at scale, with transparent filtering criteria and public provenance — most proprietary datasets (Google, OpenAI) do not disclose filtering methods; most academic datasets are manually curated at smaller scale
vs alternatives: Larger and more diverse than manually-curated datasets; more transparent and reproducible than proprietary web-scale datasets; enables research on real-world document distributions
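The source already suggests fastText for language identification; the thresholds and heuristics below (length and text-to-image ratio) are illustrative examples of the kind of filters described, not the project's actual criteria:

```python
import fasttext

# lid.176.bin is fastText's published language-identification model.
lid_model = fasttext.load_model("lid.176.bin")

def keep_document(text: str, num_images: int,
                  min_chars: int = 500,
                  min_text_image_ratio: float = 200.0) -> bool:
    """Toy quality filter: English only, minimum length, enough text per image."""
    if len(text) < min_chars:
        return False
    # fastText predict() expects a single line of text.
    labels, probs = lid_model.predict(text.replace("\n", " "))
    if labels[0] != "__label__en" or probs[0] < 0.8:
        return False
    if num_images and len(text) / num_images < min_text_image_ratio:
        return False
    return True

print(keep_document("A sufficiently long English document. " * 40, num_images=2))
```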
multimodal dataset sampling and stratification for balanced model training
Provides mechanisms to sample subsets of the 1T-token dataset with control over document-type distribution, image-text ratio, and content characteristics. Sampling can be stratified by document category (academic papers, web pages, forms, etc.) or by content properties (text length, image density, language) so that training data reflects desired distributions rather than raw web frequencies, which are heavily skewed toward common document types (a stratified-sampling sketch follows this entry).
Unique: Enables stratified sampling across document types and content properties at scale, allowing researchers to control training data distribution — most large datasets provide raw access without built-in stratification mechanisms
vs alternatives: More flexible than fixed dataset splits; enables targeted evaluation on specific document categories; supports research on dataset bias and distribution effects
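One way to realize stratified sampling over a streaming corpus, assuming a "doc_type" metadata column and using `interleave_datasets` to mix per-category streams at chosen probabilities; the category names and weights are made up for illustration:

```python
from datasets import load_dataset, interleave_datasets

stream = load_dataset("org/pdf-document-images", split="train", streaming=True)

# One lazy sub-stream per document category (assumed metadata field `doc_type`).
categories = ["academic_paper", "web_page", "form"]
per_category = [
    stream.filter(lambda ex, c=c: ex["doc_type"] == c) for c in categories
]

# Draw 50% papers, 30% web pages, 20% forms, regardless of raw web frequencies.
balanced = interleave_datasets(
    per_category, probabilities=[0.5, 0.3, 0.2], seed=42
)

for ex in balanced.take(10):
    print(ex["doc_type"])
```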
metadata-rich document records with source attribution and quality scores
Each dataset record includes rich metadata beyond the image and text: source URL, crawl date, document-type classification, quality score, OCR confidence, text-image alignment score, and deduplication information. Metadata is structured as JSON and queryable, enabling filtering and analysis without loading full images or text, and providing traceability for reproducibility and copyright attribution (a metadata-filtering sketch follows this entry).
Unique: Provides queryable metadata with quality scores and source attribution for every record, enabling transparent dataset analysis and reproducibility — most large datasets provide minimal metadata or require custom extraction
vs alternatives: More transparent than proprietary datasets; enables reproducible research and copyright compliance; supports dataset bias analysis and quality-aware training
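A sketch of quality-aware selection driven by metadata alone; the field names (`quality_score`, `ocr_confidence`, etc.) mirror the metadata listed above, but their exact keys and scales are assumptions:

```python
from datasets import load_dataset

ds = load_dataset("org/pdf-document-images", split="train", streaming=True)

# Restrict the stream to metadata columns so image payloads are not decoded
# while scanning (assumed column names).
meta = ds.select_columns(
    ["source_url", "crawl_date", "doc_type", "quality_score", "ocr_confidence"]
)

high_quality = meta.filter(
    lambda ex: ex["quality_score"] >= 0.8 and ex["ocr_confidence"] >= 0.9
)

for record in high_quality.take(5):
    print(record["source_url"], record["quality_score"])
```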