swirl-search — Repository 50/100 via "retrieval-augmented generation (RAG) with LLM-powered answer synthesis"
AI Search & RAG Without Moving Your Data. Get instant answers from your company's knowledge across 100+ apps while keeping data secure. Deploy in minutes, not months.
Unique: Implements RAG as a processor in the result processing pipeline (swirl/processors/rag.py), allowing it to be composed with other processors (normalization, ranking, PII removal). Supports multiple LLM providers (OpenAI, Anthropic, Ollama, Azure) through pluggable LLM client abstraction. Streams responses via WebSocket to Galaxy UI for real-time answer generation without waiting for full LLM completion.
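The composable-processor design above can be sketched as follows. This is a minimal illustration, not Swirl's actual API: the class names (`Processor`, `RAGProcessor`, `LLMClient`, `EchoLLM`) and the `process`/`complete` signatures are all hypothetical stand-ins showing how an optional RAG stage can sit in a result pipeline behind a pluggable LLM client.

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """One stage in a result-processing pipeline (hypothetical sketch)."""
    @abstractmethod
    def process(self, results: list[dict]) -> list[dict]: ...

class NormalizeProcessor(Processor):
    """Example non-RAG stage: cleans up result titles."""
    def process(self, results):
        for r in results:
            r["title"] = r.get("title", "").strip()
        return results

class LLMClient(ABC):
    """Pluggable LLM backend abstraction (OpenAI, Anthropic, Ollama, ...)."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoLLM(LLMClient):
    """Stand-in backend so the sketch runs without network access."""
    def complete(self, prompt):
        return f"Answer based on: {prompt}"

class RAGProcessor(Processor):
    """Optional stage: synthesizes an answer from the top-k results."""
    def __init__(self, llm: LLMClient, top_k: int = 3):
        self.llm = llm
        self.top_k = top_k

    def process(self, results):
        context = " | ".join(r["title"] for r in results[: self.top_k])
        answer = self.llm.complete(context)
        # Prepend the synthesized answer; original hits stay intact.
        return [{"title": "RAG answer", "body": answer}] + results

def run_pipeline(results, processors):
    for p in processors:
        results = p.process(results)
    return results

hits = [{"title": "  Quarterly report  "}, {"title": "HR policy"}]
out = run_pipeline(hits, [NormalizeProcessor(), RAGProcessor(EchoLLM())])
print(out[0]["title"])  # RAG answer
```

Because RAG is just another processor, dropping `RAGProcessor` from the list yields plain federated search, and swapping `EchoLLM` for a real client changes the provider without touching the pipeline.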
vs others: More flexible than monolithic RAG systems because RAG is optional and composable with other processors; supports multiple LLM providers unlike single-model solutions; streams responses for better UX compared to batch answer generation.