hybrid-search-execution
Execute searches that combine vector embeddings, keyword matching, and structured data filters in a single query. Vespa processes all three search modalities simultaneously and ranks results using unified scoring.
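As a minimal sketch, the three modalities can be combined in one Query API request body; the field names (`embedding`, `price`), the rank-profile name (`hybrid`), and the `userQuery()`-covered text fields are assumptions, not part of any specific schema:

```python
# Sketch of a hybrid-search request body for Vespa's /search/ Query API.
# Field names (embedding, price) and the "hybrid" rank profile are hypothetical.

def hybrid_search_body(query_text, query_vector, max_price, hits=10):
    """Combine vector, keyword, and structured filtering in a single YQL query."""
    yql = (
        "select * from sources * where "
        "(({targetHits: 100}nearestNeighbor(embedding, q)) "  # vector retrieval
        "or userQuery()) "                                    # keyword retrieval
        f"and price < {max_price}"                            # structured filter
    )
    return {
        "yql": yql,
        "query": query_text,             # consumed by userQuery()
        "input.query(q)": query_vector,  # the query embedding
        "ranking": "hybrid",             # a profile fusing both score signals
        "hits": hits,
    }

body = hybrid_search_body("wireless headphones", [0.1, 0.2, 0.3], max_price=200)
```

The outer parentheses matter: without them, YQL's operator precedence would apply the price filter only to the keyword branch.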
ml-model-ranking-integration
Serve machine learning models (ONNX, XGBoost, TensorFlow) directly within ranking pipelines to score and re-rank search results without external inference services. Models are evaluated at query time on the content nodes, close to the indexed data, avoiding a network hop per scored document.
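A hypothetical schema fragment sketches the idea: an ONNX model is declared in the schema and invoked from a rank profile's second phase, so only the best first-phase candidates pay the model's cost. The model file, input/output names, and fields are illustrative assumptions:

```
onnx-model relevance {
    file: models/relevance.onnx
    input  "input_tokens": attribute(tokens)
    output "score": relevance_score
}

rank-profile ml_ranked inherits default {
    first-phase {
        # cheap lexical score computed for every matched document
        expression: nativeRank(title, body)
    }
    second-phase {
        # ONNX model evaluated only for the top first-phase hits per node
        rerank-count: 100
        expression: sum(onnx(relevance).relevance_score)
    }
}
```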
batch-document-processing
Process and index large batches of documents efficiently, supporting bulk updates, deletions, and insertions with optimized throughput.
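Bulk operations are typically expressed as a JSON feed, one operation per line, as consumed by Vespa's feed clients; the `shop` namespace and `product` document type below are hypothetical:

```python
import json

# Sketch of building a batch feed (one JSON operation per line) mixing
# puts, partial updates, and removes. Namespace/doctype names are assumed.

def put_op(doc_id, fields):
    return {"put": f"id:shop:product::{doc_id}", "fields": fields}

def update_op(doc_id, assignments):
    # Partial update: assign new values to individual fields.
    return {"update": f"id:shop:product::{doc_id}",
            "fields": {k: {"assign": v} for k, v in assignments.items()}}

def remove_op(doc_id):
    return {"remove": f"id:shop:product::{doc_id}"}

ops = [
    put_op("p1", {"title": "USB-C cable", "price": 9}),
    update_op("p2", {"price": 19}),
    remove_op("p3"),
]
feed_jsonl = "\n".join(json.dumps(op) for op in ops)
```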
query-language-execution
Execute complex queries using Vespa's YQL (Yahoo! Query Language) to express search logic, filtering, grouping, and result processing in a single declarative statement.
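A single YQL statement can carry matching, filtering, and a grouping pipeline; the sketch below assembles one, with hypothetical field names (`title`, `price`, `brand`):

```python
# Sketch of one declarative YQL statement combining text matching, a
# structured filter, and a grouping aggregation. Field names are assumed.

def grouped_query(term, max_price, hits=20):
    where = f'title contains "{term}" and price < {max_price}'
    # Grouping pipeline: count matching documents per brand.
    grouping = "| all(group(brand) each(output(count())))"
    yql = f"select * from sources * where {where} {grouping}"
    return {"yql": yql, "hits": hits}

body = grouped_query("laptop", 1500)
```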
recommendation-ranking-pipeline
Build recommendation systems by combining collaborative filtering, content-based filtering, and ML models within Vespa's ranking pipeline to produce personalized results.
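One common shape for such a query is a nearest-neighbor search over item embeddings using the user's profile vector (the collaborative signal), constrained by a content-based filter; the field names, the `user_vec` input, and the `recommend` rank profile here are all assumptions:

```python
# Sketch of a recommendation query body: vector similarity to the user's
# profile, restricted to a content category. All names are hypothetical.

def recommend_body(user_vector, category, hits=10):
    yql = (
        "select * from sources * where "
        "({targetHits: 200}nearestNeighbor(embedding, user_vec)) "
        f'and category contains "{category}"'
    )
    return {
        "yql": yql,
        "input.query(user_vec)": user_vector,
        "ranking": "recommend",  # profile blending closeness with content features
        "hits": hits,
    }

body = recommend_body([0.4, 0.1, 0.9], "books")
```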
multi-phase-ranking-execution
Execute multi-phase ranking pipelines where early phases use fast, approximate scoring to shrink the candidate set, and later phases apply expensive ML models only to the surviving candidates.
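In a rank profile this looks roughly like the hypothetical fragment below: a cheap first-phase expression runs over every matched document, and the second phase re-scores only the top `rerank-count` hits per node. Model and field names are illustrative:

```
rank-profile phased inherits default {
    first-phase {
        # cheap text score, evaluated for every matched document
        expression: bm25(title) + bm25(body)
    }
    second-phase {
        # expensive model, applied only to the best first-phase hits
        rerank-count: 100
        expression: sum(onnx(ranker_model))
    }
}
```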
structured-data-filtering
Filter search results using structured data conditions on fields like dates, numbers, categories, and enums. Combine multiple filter conditions with boolean logic.
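Such conditions compose mechanically into a YQL where-clause; the helper below is a sketch, and the field names (`category`, `price`, `timestamp`, `brand`) are assumptions:

```python
# Sketch of composing structured filter conditions into a YQL where-clause
# with boolean logic. Field names are hypothetical.

def and_all(conditions):
    """Join filter conditions with AND, parenthesizing each for precedence."""
    return " and ".join(f"({c})" for c in conditions)

filters = [
    'category contains "electronics"',                      # enum/category
    "price >= 10 and price <= 100",                         # numeric range
    "timestamp > 1700000000",                               # date as epoch seconds
    'brand contains "acme" or brand contains "globex"',     # OR within a group
]
yql = f"select * from sources * where {and_all(filters)}"
```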
real-time-data-indexing
Index new documents and updates to existing documents in real-time with immediate searchability. Supports both streaming updates and batch ingestion while maintaining index consistency.
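For single-document real-time writes, Vespa exposes a REST interface at `/document/v1`, where the URL path encodes namespace, document type, and id; an HTTP POST of the body performs the "put". The host, port, and names below are hypothetical:

```python
# Sketch of a single real-time write against Vespa's /document/v1 API.
# The body is sent with HTTP POST (Vespa's "put" operation); the document
# becomes searchable once the write is acknowledged. Names are assumed.

def doc_v1_put(host, namespace, doctype, doc_id, fields):
    url = f"http://{host}:8080/document/v1/{namespace}/{doctype}/docid/{doc_id}"
    return url, {"fields": fields}

url, payload = doc_v1_put("localhost", "shop", "product", "p42",
                          {"title": "USB-C hub", "price": 39})
```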
+7 more capabilities