rank-bm25
Repository · Free
Various BM25 algorithms for document ranking
Capabilities (9 decomposed)
bm25okapi probabilistic document ranking with standard parameters
Medium confidence: Implements the canonical BM25 (Best Matching 25) algorithm using the Okapi variant, which scores document relevance to queries through a probabilistic ranking function that combines term frequency, inverse document frequency, and document length normalization. The implementation accepts pre-tokenized document corpora and queries, computing relevance scores via numpy-based operations on term statistics (document frequencies, per-document term frequencies, corpus-wide IDF values). Initialization computes IDF values across the entire corpus once; get_scores() then applies the BM25 formula with tunable k1 (term saturation) and b (length normalization) parameters to generate per-document relevance scores.
Pure Python implementation with minimal dependencies (numpy only) and a two-line API (initialize with corpus, call get_scores on query), making it the lightest-weight BM25 option for prototyping without external IR infrastructure
Faster to integrate than Elasticsearch/Solr for small-to-medium corpora (< 1M docs) and more transparent than black-box neural rankers, but slower than optimized engines with native-code cores (e.g., Lucene-based systems or Rust implementations such as Tantivy) for large-scale production workloads
bm25l length-normalized document ranking for variable-length documents
Medium confidence: Implements the BM25L variant, which modifies the standard BM25 length normalization to counter its tendency to over-penalize very long documents, a bias that appears when document lengths vary widely. The algorithm shifts the length-normalized term frequency by a small constant before applying saturation, so that matches in long documents keep a guaranteed minimum contribution. Like BM25Okapi, it computes corpus-wide IDF once during initialization and applies the modified scoring formula during get_scores(), but the length normalization parameter b has different semantics and impact compared to the standard variant.
Implements the BM25L variant with a shifted length normalization formula that prevents long-document term contributions from collapsing toward zero, addressing a known limitation of standard BM25 when document lengths vary widely
Better than BM25Okapi for heterogeneous corpora with extreme length variation, but requires empirical evaluation to confirm improvement on specific datasets
bm25+ enhanced term frequency handling with saturation control
Medium confidence: Implements the BM25+ variant, which refines the term frequency component of standard BM25 by adding a constant (delta) to the saturation function, lower-bounding each matched term's contribution. This addresses a documented limitation of BM25Okapi in which aggressive length normalization can shrink a term's contribution in a very long document toward zero, so that a long document containing a query term may score no better than one that lacks it entirely. The implementation maintains the same initialization and scoring interface as the other variants but applies the modified formula during get_scores(), ensuring every term match contributes a nonzero amount.
Implements BM25+ with a lower-bounded term frequency component, addressing a limitation where BM25Okapi's length normalization can effectively zero out a term's contribution in very long documents
More theoretically sound than BM25Okapi for term frequency handling, but empirical gains are often marginal and require dataset-specific tuning to realize benefits
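The variants above differ from Okapi mainly in the per-term saturation component. A standalone sketch of the commonly stated formulas (the delta defaults are typical textbook values, not necessarily the library's; this is not the library's internal code):

```python
def okapi_term(tf, dl, avgdl, k1=1.5, b=0.75):
    """Standard BM25: tf saturates; long documents are penalized via b."""
    return tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))

def bm25l_term(tf, dl, avgdl, k1=1.5, b=0.75, delta=0.5):
    """BM25L: shift the length-normalized tf by delta before saturating,
    guaranteeing a floor contribution for matches in long documents."""
    ctd = tf / (1 - b + b * dl / avgdl)
    return (k1 + 1) * (ctd + delta) / (k1 + ctd + delta)

def bm25plus_term(tf, dl, avgdl, k1=1.5, b=0.75, delta=1.0):
    """BM25+: add a constant delta so any match keeps a nonzero floor."""
    return delta + okapi_term(tf, dl, avgdl, k1, b)

# In a very long document a single match contributes almost nothing
# under Okapi, while BM25+ keeps the delta floor:
assert okapi_term(1, 10_000, 100) < 0.05
assert bm25plus_term(1, 10_000, 100) > 1.0
```

Each variant's full score multiplies these components by the term's IDF and sums over query terms.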
corpus-wide idf computation with one-time initialization
Medium confidence: Computes inverse document frequency (IDF) statistics across the entire tokenized corpus during algorithm initialization, storing term-to-IDF mappings that are reused across all subsequent queries. The implementation iterates through the corpus once to count document frequencies per term, then applies the IDF formula (typically a variant of log(N / df), where N is corpus size and df is document frequency) to generate a lookup table. This one-time computation cost is amortized across multiple queries, but requires that the corpus is static — adding new documents necessitates recomputing IDF values for the entire corpus.
Computes IDF once during initialization and caches it for all queries, making the library stateful and corpus-specific rather than supporting pre-computed or external IDF values
Simpler API than systems requiring external IDF computation, but less flexible than frameworks that accept pre-computed IDF values or support incremental updates
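The one-pass pattern can be sketched as follows (a toy illustration using the simplest log(N/df) form, not the library's exact IDF variant):

```python
import math

def build_idf(tokenized_corpus):
    """Count document frequency per term in one pass, then build an IDF table."""
    N = len(tokenized_corpus)
    df = {}
    for doc in tokenized_corpus:
        for term in set(doc):           # count each term at most once per document
            df[term] = df.get(term, 0) + 1
    return {term: math.log(N / n) for term, n in df.items()}

idf = build_idf([["sun", "rises"], ["sun", "sets"], ["sun"]])
# "sun" appears in every document, so its IDF is 0; rarer terms score higher.
# Adding a document changes N (and possibly df), so the table must be rebuilt.
```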
top-n document retrieval with sorted ranking results
Medium confidence: Provides a get_top_n() method that scores all documents in the corpus against a query and returns the top n results sorted by relevance score in descending order. The caller supplies the tokenized query, the original document collection, and n; the method calls get_scores() internally to compute relevance for all documents, then sorts (e.g., via numpy argsort) to select and return the n highest-scoring documents themselves. This convenience method eliminates the need for users to manually sort and filter results, providing a common retrieval pattern in a single function call.
Provides a convenience method that combines scoring and sorting in a single call, reducing boilerplate for the common pattern of retrieving top-N results
More convenient than manually calling get_scores() and sorting, but less efficient than specialized retrieval systems that can use indices to avoid scoring all documents
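The score-then-sort pattern behind such a convenience method looks roughly like this (a sketch, not the library's code):

```python
def top_n(scores, documents, n=3):
    """Return the n documents with the highest scores, best first."""
    order = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in order[:n]]

docs = ["doc a", "doc b", "doc c"]
top = top_n([0.1, 0.9, 0.5], docs, n=2)
# → ["doc b", "doc c"]
```

Note that every document is scored before sorting; there is no inverted-index pruning of candidates, which is the efficiency caveat mentioned above.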
tunable bm25 parameters (k1, b) for algorithm customization
Medium confidence: Exposes k1 (term saturation parameter) and b (length normalization parameter) as configurable hyperparameters during algorithm initialization, allowing users to customize ranking behavior without modifying library code. The k1 parameter controls how quickly term frequency saturates (higher k1 means slower saturation and more weight on term frequency), while b controls the degree of length normalization (b=0 disables length normalization, b=1 applies full normalization). These parameters are stored as instance variables and applied during get_scores() computation, enabling empirical tuning for specific domains or datasets.
Exposes k1 and b as instance-level parameters that can be set during initialization, enabling per-instance customization without subclassing or code modification
More flexible than fixed-parameter implementations, but less automated than systems with built-in parameter optimization or learning-to-rank approaches
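The effect of both knobs can be checked directly against the standard term-frequency component (a standalone sketch of the formula, not the library's internals):

```python
def tf_component(tf, dl, avgdl, k1=1.5, b=0.75):
    """BM25 term-frequency component with saturation (k1) and length norm (b)."""
    return tf * (k1 + 1) / (tf + k1 * (1 - b + b * dl / avgdl))

# b = 0: document length is ignored, so a 500-token and a 50-token
# document score identically for the same term frequency.
assert tf_component(3, 500, 100, b=0.0) == tf_component(3, 50, 100, b=0.0)

# b = 1: full normalization penalizes the longer document.
assert tf_component(3, 500, 100, b=1.0) < tf_component(3, 50, 100, b=1.0)

# Higher k1 slows saturation: tf = 10 vs tf = 1 differ more at large k1.
gap_small_k1 = tf_component(10, 100, 100, k1=0.5) - tf_component(1, 100, 100, k1=0.5)
gap_large_k1 = tf_component(10, 100, 100, k1=3.0) - tf_component(1, 100, 100, k1=3.0)
assert gap_large_k1 > gap_small_k1
```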
minimal dependency architecture with numpy-only computation
Medium confidence: Implements all BM25 algorithms using only numpy for numerical operations, avoiding heavy dependencies on full IR frameworks (Elasticsearch, Solr) or machine learning libraries (scikit-learn, TensorFlow). The library uses numpy arrays for efficient vector operations (IDF lookups, score computation) and basic Python data structures (lists, dicts) for corpus management. This design choice minimizes installation overhead and allows the library to be embedded in larger systems without dependency conflicts, though it sacrifices some performance optimizations available in specialized IR libraries.
Implements BM25 with only numpy as a dependency, making it the lightest-weight pure-Python option compared to frameworks that require Elasticsearch, Solr, or scikit-learn
Easier to install and embed than Elasticsearch/Solr, but slower and less feature-rich than production IR systems; lighter than scikit-learn but less integrated with ML pipelines
preprocessing-agnostic tokenization interface
Medium confidence: Accepts pre-tokenized documents and queries as input, leaving all text preprocessing (lowercasing, stemming, stopword removal, punctuation handling) to the caller. The library makes no assumptions about tokenization strategy and works with any tokenization scheme the user provides, whether simple whitespace splitting, sophisticated NLP pipelines (spaCy, NLTK), or domain-specific tokenizers. This design maximizes flexibility but requires users to implement preprocessing themselves, making the library a pure ranking algorithm rather than an end-to-end search solution.
Accepts only pre-tokenized input and provides no built-in preprocessing, making it a pure ranking algorithm that delegates all text processing to the caller
More flexible than systems with fixed preprocessing pipelines, but requires more setup than end-to-end search engines that handle preprocessing internally
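Because the library never touches raw text, even a minimal pipeline must tokenize on the caller's side. A deliberately simple sketch (a real pipeline might use spaCy or NLTK instead; the stopword set here is invented):

```python
def tokenize(text, stopwords=frozenset({"the", "is", "a", "it"})):
    """Caller-side preprocessing: lowercase, strip punctuation, drop stopwords."""
    tokens = (t.strip(".,!?;:").lower() for t in text.split())
    return [t for t in tokens if t and t not in stopwords]

tokenized_corpus = [tokenize(d) for d in
                    ["The wind is strong.", "It is a calm day."]]
# tokenized_corpus == [["wind", "strong"], ["calm", "day"]]
```

Whatever scheme is used, the query must be tokenized the same way as the corpus, or terms will never match.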
stateless query scoring with consistent interface across variants
Medium confidence: Provides a uniform get_scores(tokenized_query) method across all BM25 variants (Okapi, L, Plus) that takes a pre-tokenized query and returns a numpy array of relevance scores, one per document in the corpus. The method is stateless — it does not modify internal state or cache results — and produces deterministic scores given the same query and corpus. All variants share this interface, allowing users to swap implementations without changing calling code, though the underlying scoring formulas differ.
Provides identical get_scores() interface across all BM25 variants, enabling drop-in replacement of algorithms without code changes
More consistent than systems where different algorithms have different APIs, but less feature-rich than learning-to-rank systems that provide explanations or personalization
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with rank-bm25, ranked by overlap. Discovered automatically through the match graph.
vectra
A lightweight, file-backed vector database for Node.js and browsers with Pinecone-compatible filtering and hybrid BM25 search.
Turbopuffer
Low-cost vector database — pay-per-query, S3-backed, up to 10x cheaper at scale.
infinity
The AI-native database built for LLM applications, providing incredibly fast hybrid search of dense vector, sparse vector, tensor (multi-vector), and full-text.
Local GPT
Chat with documents without compromising privacy
RediSearch
A query and indexing engine for Redis, providing secondary indexing, full-text search, vector similarity search and aggregations.
lancedb
Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less.
Best For
- ✓ Information retrieval engineers building search systems
- ✓ NLP practitioners prototyping ranking pipelines
- ✓ Teams implementing hybrid search (BM25 + dense retrieval)
- ✓ Developers building lightweight search without Elasticsearch/Solr
- ✓ Search systems over heterogeneous document collections (mixed lengths)
- ✓ Short-form content ranking (social media, abstracts, snippets)
- ✓ Teams comparing multiple BM25 variants to find optimal ranking
- ✓ Ranking systems where term frequency saturation is problematic
Known Limitations
- ⚠ Requires pre-tokenized input — no built-in text preprocessing (stemming, lowercasing, stopword removal)
- ⚠ Stateless scoring — no learned ranking or personalization across queries
- ⚠ Memory scales linearly with corpus size and vocabulary size; corpus must fit in RAM
- ⚠ No support for phrase queries, boolean operators, or field-specific weighting
- ⚠ IDF computation is corpus-specific; adding new documents requires recomputing IDF values
- ⚠ BM25L's shifted length normalization changes scoring relative to BM25Okapi and may over-reward very long documents in some domains