PrivateGPT vs wicked-brain
Side-by-side comparison to help you choose.
| Feature | PrivateGPT | wicked-brain |
|---|---|---|
| Type | Framework | Repository |
| UnfragileRank | 43/100 | 32/100 |
| Adoption | 1 | 0 |
| Quality | 0 | 0 |
| Ecosystem | 0 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 14 decomposed | 11 decomposed |
| Times Matched | 0 | 0 |
Accepts documents in multiple formats (PDF, DOCX, TXT, etc.), automatically parses and splits them into semantically meaningful chunks using configurable chunk size and overlap parameters, then embeds each chunk using a pluggable embedding model (local or cloud-based). The ingestion pipeline stores the embeddings in a vector database and the raw chunk text/metadata in a node store for later retrieval and context assembly.
Unique: Uses LlamaIndex's pluggable document loader and node parser abstraction, allowing swappable parsing strategies and embedding models without code changes — configured entirely via YAML. Supports both local embedding models (via Ollama) and cloud providers, with automatic fallback and retry logic built into the ingestion service.
vs alternatives: More flexible than LangChain's document loaders because it decouples parsing, chunking, and embedding through dependency injection, allowing teams to swap vector stores or embedding models without rewriting ingestion logic.
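Since PrivateGPT builds on LlamaIndex, the pattern can be sketched with stock LlamaIndex primitives. This is a minimal illustration, not PrivateGPT's actual code; the chunk size, overlap, and directory are placeholder values:

```python
# Sketch of a LlamaIndex-style ingestion pipeline: parse -> chunk -> embed -> store.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter

# Configurable chunking; in PrivateGPT these values come from YAML.
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=64)

# The reader picks a parser per file extension (PDF, DOCX, TXT, ...).
documents = SimpleDirectoryReader("./docs").load_data()

# Embeds each chunk with the configured embedding model, then stores
# vectors plus raw node text/metadata via the index's storage context.
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")
```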
Implements a full RAG pipeline that embeds user queries, retrieves semantically similar chunks from the vector store, optionally reranks retrieved results for relevance, and assembles retrieved context into a prompt template before sending to an LLM. The pipeline supports both synchronous and streaming responses, with configurable retrieval parameters (top-k, similarity threshold) and optional reranking models to improve answer quality.
Unique: Implements RAG as a composable LlamaIndex pipeline with pluggable retriever, reranker, and prompt template components — allows swapping vector stores, embedding models, and LLMs independently without touching the core RAG logic. Supports both sync and async/streaming endpoints via FastAPI, enabling real-time UI updates.
vs alternatives: More modular than LangChain's RAG chains because each component (retriever, reranker, LLM) is independently configurable and testable, and the dependency injection pattern makes it easier to mock components for unit testing.
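The retrieval side can be sketched the same way, reusing the index persisted above; the top-k and similarity cutoff values stand in for PrivateGPT's configurable retrieval parameters:

```python
# Sketch: embed the query, retrieve top-k chunks, filter weak matches,
# and stream the LLM's answer.
from llama_index.core import StorageContext, load_index_from_storage
from llama_index.core.postprocessor import SimilarityPostprocessor

storage = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage)

query_engine = index.as_query_engine(
    similarity_top_k=4,  # retrieval breadth
    node_postprocessors=[SimilarityPostprocessor(similarity_cutoff=0.7)],
    streaming=True,      # token-by-token output for real-time UIs
)
response = query_engine.query("How does chunk overlap affect retrieval?")
response.print_response_stream()
```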
Maintains conversation history across multiple turns, allowing users to ask follow-up questions that reference previous answers. The system assembles context from both the current query and relevant previous turns and passes this to the LLM for coherent multi-turn responses. Chat history is stored in memory (or optionally persisted) and can be cleared or managed per conversation session.
Unique: Manages multi-turn conversations by assembling context from both current query and relevant previous turns, then passing this to the LLM — allows coherent follow-up questions without explicit context re-entry. History is maintained in memory with optional persistence.
vs alternatives: More flexible than stateless Q&A because it maintains conversation context across turns, enabling more natural multi-turn interactions, but requires explicit conversation session management.
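A sketch of the same multi-turn pattern using LlamaIndex's chat memory; the token limit and chat mode are illustrative choices, and `index` is the one built during ingestion:

```python
# Sketch: a memory buffer holds prior turns; the chat engine condenses
# them into each new retrieval query so follow-ups stay coherent.
from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(token_limit=3000)
chat_engine = index.as_chat_engine(
    chat_mode="condense_plus_context",  # rewrite follow-ups using history
    memory=memory,
)
print(chat_engine.chat("What file formats can I ingest?"))
print(chat_engine.chat("Which of those preserve metadata?"))  # follow-up turn
memory.reset()  # clear the conversation session
```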
Extracts and stores metadata from documents (filename, upload date, document type, custom tags) alongside embeddings, enabling metadata-based filtering during retrieval. Users can filter search results by metadata (e.g., 'only search in PDFs from 2024') to improve precision. Metadata is stored in the node store and can be used in hybrid search combining semantic similarity with keyword/metadata filtering.
Unique: Stores document metadata alongside embeddings and supports metadata-based filtering during retrieval — enables hybrid search combining semantic similarity with keyword/metadata filtering. Metadata is extracted during ingestion and can be customized per document type.
vs alternatives: More precise than pure semantic search because metadata filtering reduces the search space before semantic ranking, improving both quality and performance for large collections.
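In LlamaIndex terms, the filtering step looks roughly like this; the metadata keys (`file_type`, `year`) are illustrative, not a fixed schema:

```python
# Sketch: metadata filters narrow the candidate set before semantic ranking.
from llama_index.core.vector_stores import MetadataFilter, MetadataFilters

filters = MetadataFilters(
    filters=[
        MetadataFilter(key="file_type", value="pdf"),  # only PDFs
        MetadataFilter(key="year", value="2024"),      # only 2024 uploads
    ]
)
retriever = index.as_retriever(similarity_top_k=4, filters=filters)
nodes = retriever.retrieve("quarterly revenue summary")
```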
Supports batch ingestion of multiple documents through an asynchronous pipeline that processes documents in parallel without blocking the API. Documents are queued, processed by worker threads/processes, and their ingestion status can be monitored via API endpoints, enabling efficient ingestion of large document collections.
Unique: Implements asynchronous batch ingestion using FastAPI's async support and background task workers — allows processing multiple documents in parallel without blocking the API. Ingestion status can be monitored via API endpoints.
vs alternatives: More efficient than synchronous ingestion because it processes documents in parallel and doesn't block the API, enabling better user experience during large batch uploads.
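A minimal FastAPI sketch of the queue-and-poll shape described here; the route names and in-memory status dict are illustrative, not PrivateGPT's actual API:

```python
# Sketch: files are queued as background tasks; a status dict is polled.
from fastapi import BackgroundTasks, FastAPI, File, UploadFile

app = FastAPI()
ingest_status: dict[str, str] = {}

def ingest_file(name: str, data: bytes) -> None:
    ingest_status[name] = "processing"
    # ... parse, chunk, embed, store (see the ingestion sketch above) ...
    ingest_status[name] = "done"

@app.post("/ingest/batch")
async def ingest_batch(background: BackgroundTasks,
                       files: list[UploadFile] = File(...)):
    for f in files:
        ingest_status[f.filename] = "queued"
        # Tasks run after the response is sent, so the API stays responsive.
        background.add_task(ingest_file, f.filename, await f.read())
    return {"queued": [f.filename for f in files]}

@app.get("/ingest/status")
def status():
    return ingest_status
```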
Provides a templating system for assembling prompts that combine user queries, retrieved context, and system instructions. Developers can customize prompt templates via YAML configuration to control how context is formatted, what instructions are given to the LLM, and how responses are structured. Supports variable substitution (e.g., {query}, {context}, {date}) and conditional sections based on available context.
Unique: Implements prompt templating via YAML configuration with variable substitution — allows customizing how context is formatted and what instructions are given to the LLM without code changes. Supports different templates for different use cases (Q&A, summarization, etc.).
vs alternatives: More flexible than hardcoded prompts because templates are configurable and can be experimented with without code changes, enabling rapid prompt engineering iteration.
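A small sketch of the idea: templates live in YAML and are filled by simple variable substitution. The template text and variable names here are made up for illustration:

```python
# Sketch: a YAML-defined prompt template with {variable} substitution.
import yaml

config = yaml.safe_load("""
qa_prompt: |
  You are a helpful assistant. Today is {date}.
  Answer using only the context below.

  Context:
  {context}

  Question: {query}
""")

retrieved_chunks = ["...chunk one...", "...chunk two..."]  # from the retriever
prompt = config["qa_prompt"].format(
    date="2026-01-15",
    context="\n\n".join(retrieved_chunks),
    query="How is chunk overlap configured?",
)
print(prompt)
```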
Abstracts LLM interactions through LlamaIndex's LLM interface, supporting local models (via Ollama), OpenAI, Anthropic, Hugging Face, and other providers through a unified configuration layer. Developers specify the LLM provider in YAML config without code changes, and the system handles API authentication, request formatting, and response parsing for each provider's unique protocol.
Unique: Uses LlamaIndex's LLM abstraction layer to decouple application code from provider-specific APIs — configuration is entirely YAML-driven, with no code changes needed to swap providers. Supports both streaming and non-streaming responses, with automatic fallback to non-streaming if a provider doesn't support streaming.
vs alternatives: More provider-agnostic than LangChain because LlamaIndex's LLM interface is more consistently implemented across providers, reducing the need for provider-specific branching logic in application code.
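The swap can be sketched as below; in PrivateGPT the branch would be driven by YAML rather than a flag, and the model names are placeholders:

```python
# Sketch: either provider satisfies the same LLM interface, so downstream
# code (query engines, chat engines) is unchanged when swapping.
from llama_index.core import Settings
from llama_index.llms.ollama import Ollama
from llama_index.llms.openai import OpenAI

use_local = True
Settings.llm = (
    Ollama(model="llama3", request_timeout=120.0)  # local via Ollama
    if use_local
    else OpenAI(model="gpt-4o-mini")               # cloud provider
)
print(Settings.llm.complete("Say hello in one word."))
```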
Abstracts vector storage through LlamaIndex's vector store interface, supporting Qdrant, Milvus, Weaviate, Pinecone, and in-memory SimpleVectorStore. Developers configure the vector store backend in YAML, and the system handles connection pooling, index creation, similarity search, and metadata filtering without code changes. Supports both dense vector search and hybrid search (combining vector similarity with keyword matching).
Unique: LlamaIndex's vector store abstraction allows swapping backends (Qdrant, Milvus, Weaviate, Pinecone, SimpleVectorStore) entirely through YAML configuration — no code changes required. Supports both dense vector search and hybrid search combining semantic similarity with keyword/metadata filtering.
vs alternatives: More database-agnostic than LangChain's vector store integrations because the abstraction is more consistently implemented, reducing provider lock-in and making it easier to migrate between vector databases.
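A sketch of pointing the same ingestion code at Qdrant instead of the in-memory store; the URL and collection name are placeholders, and `documents` is the list loaded earlier:

```python
# Sketch: only the storage context changes; indexing code stays the same.
import qdrant_client
from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore

client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="private_docs")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Same from_documents call as the in-memory case; only the backend differs.
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```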
+6 more capabilities
Indexes markdown files containing code skills and knowledge into a local SQLite database with FTS5 (Full-Text Search 5) enabled, allowing keyword matching without vector embeddings or external infrastructure. The system parses markdown structure (headings, code blocks, metadata) and builds inverted indices for fast retrieval of skill documentation from natural language queries. No external vector DB or embedding service required — all indexing and search happens locally.
Unique: Uses SQLite FTS5 for keyword-based retrieval instead of vector embeddings, eliminating dependency on external embedding services (OpenAI, Cohere) and vector databases while maintaining sub-millisecond local search performance
vs alternatives: Simpler and faster to set up than Pinecone/Weaviate RAG stacks for developers who prioritize zero infrastructure over semantic similarity
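The core mechanism fits in a few lines of standard-library Python, assuming the bundled SQLite was compiled with FTS5 (most CPython builds are); the schema here is illustrative, not wicked-brain's actual one:

```python
# Sketch: an FTS5 virtual table gives ranked full-text search with no
# external services.
import sqlite3

db = sqlite3.connect("skills.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS skills USING fts5(title, body, tags)")
db.execute(
    "INSERT INTO skills VALUES (?, ?, ?)",
    ("Async file IO", "Use aiofiles for non-blocking reads...", "python async io"),
)

# MATCH queries the inverted index; bm25() scores lexical relevance
# (smaller is better, so ascending order puts the best match first).
rows = db.execute(
    "SELECT title, bm25(skills) AS score FROM skills "
    "WHERE skills MATCH ? ORDER BY score LIMIT 5",
    ("async AND io",),
).fetchall()
```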
Retrieves indexed skills from the local SQLite database and injects them into the context window of AI coding CLIs (Claude Code, Cursor, Gemini CLI, GitHub Copilot) as formatted markdown or structured prompts. The system acts as a middleware layer that intercepts queries, searches the skill index, and prepends relevant documentation to the AI's input context before sending to the LLM. Supports multiple CLI integrations through adapter patterns.
Unique: Implements RAG-like behavior without vector embeddings by using FTS5 keyword matching and injecting matched skills directly into CLI context windows, designed specifically for AI coding assistants rather than generic LLM applications
vs alternatives: Lighter weight than full RAG pipelines (no embedding model, no vector DB) while still enabling skill-aware code generation in popular AI CLIs
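A sketch of that middleware step, reusing the FTS5 table above; the helper names and prompt layout are hypothetical:

```python
# Sketch: search the local index, then prepend matched skills to the
# prompt before it reaches the coding assistant.
def build_context(query: str, db) -> str:
    rows = db.execute(
        "SELECT title, body FROM skills WHERE skills MATCH ? "
        "ORDER BY bm25(skills) LIMIT 3",
        (query,),
    ).fetchall()
    return "\n\n".join(f"## {title}\n{body}" for title, body in rows)

def inject(query: str, db) -> str:
    skills_md = build_context(query, db)
    # Relevant skill docs appear before the user's actual request.
    return f"Relevant skills:\n{skills_md}\n\nUser request: {query}"
```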
Provides a command-line interface for managing the skill library (add, remove, search, list, export) without requiring programmatic API calls. Commands include `wicked-brain add <file>`, `wicked-brain search <query>`, `wicked-brain list`, `wicked-brain export`, enabling developers to manage skills from the terminal. Supports piping and scripting for automation.
Unique: Provides a full-featured CLI for skill management (add, search, list, export) enabling terminal-based workflows and shell script integration without requiring a GUI or API client
vs alternatives: More scriptable and automation-friendly than GUI-based knowledge management tools
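The command surface could be wired up with argparse along these lines; only `search` is fleshed out, and the wiring is illustrative rather than wicked-brain's actual implementation:

```python
# Sketch: subcommand-style CLI over the local skills database.
import argparse
import sqlite3

def cmd_search(args):
    db = sqlite3.connect("skills.db")
    for (title,) in db.execute(
        "SELECT title FROM skills WHERE skills MATCH ? LIMIT 10", (args.query,)
    ):
        print(title)  # one result per line, pipe-friendly for scripting

parser = argparse.ArgumentParser(prog="wicked-brain")
sub = parser.add_subparsers(dest="command", required=True)
search = sub.add_parser("search")
search.add_argument("query")
search.set_defaults(func=cmd_search)

args = parser.parse_args()
args.func(args)
```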
Provides a structured system for organizing, storing, and versioning coding skills as markdown files with optional metadata (tags, difficulty, language, category). Skills are stored in a flat or hierarchical directory structure and can be edited directly in any text editor. The system tracks which skills are indexed and provides utilities to add, update, and remove skills from the index without requiring a database UI or special tooling.
Unique: Treats skills as first-class markdown files with Git versioning rather than database records, enabling developers to manage their knowledge base using standard text editors and version control workflows
vs alternatives: More portable and version-control-friendly than proprietary knowledge base tools (Notion, Obsidian plugins) while remaining compatible with standard developer workflows
Executes all knowledge indexing and retrieval operations locally on the developer's machine using SQLite FTS5, eliminating the need for external services, API keys, or cloud infrastructure. The entire skill database is stored as a single SQLite file that can be backed up, versioned, or shared via Git. No network calls, no rate limits, no vendor lock-in — all operations complete in milliseconds on local hardware.
Unique: Deliberately avoids external dependencies (vector DBs, embedding APIs, cloud services) by using only SQLite FTS5, making it the only RAG-adjacent system that requires zero infrastructure setup or API credentials
vs alternatives: Eliminates operational complexity and cost of vector database services (Pinecone, Weaviate) while maintaining offline-first privacy guarantees that cloud-based RAG systems cannot provide
Provides an extensible adapter pattern for integrating the skill library with multiple AI coding CLIs through standardized interfaces. Each CLI adapter handles the specific protocol, context format, and API of its target tool (Claude Code's prompt format, Cursor's context injection, Gemini CLI's request structure). New adapters can be added by implementing a simple interface without modifying core indexing logic.
Unique: Uses adapter pattern to abstract CLI-specific integration details, allowing a single skill library to work across Claude Code, Cursor, Gemini CLI, and custom tools without duplicating indexing or retrieval logic
vs alternatives: More flexible than CLI-specific plugins because adapters are decoupled from core indexing, enabling skill library reuse across tools without reimplementing search
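The adapter seam can be sketched with a `Protocol`; the class names and context formats here are hypothetical, since each real adapter would follow its target CLI's own conventions:

```python
# Sketch: one shared search path, one adapter per CLI context format.
from typing import Protocol

class CLIAdapter(Protocol):
    def format_context(self, skills: list[str]) -> str: ...

class ClaudeCodeAdapter:
    def format_context(self, skills: list[str]) -> str:
        # Hypothetical format: skills wrapped in a tagged preamble.
        return "<skills>\n" + "\n---\n".join(skills) + "\n</skills>"

class CursorAdapter:
    def format_context(self, skills: list[str]) -> str:
        # Hypothetical format: skills as inline comments.
        return "\n".join(f"// skill: {s}" for s in skills)

def inject_for(adapter: CLIAdapter, skills: list[str]) -> str:
    return adapter.format_context(skills)  # core search logic is unchanged
```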
Converts natural language queries into FTS5 search expressions by tokenizing, normalizing, and optionally expanding queries with synonyms or related terms. The system handles common query patterns (e.g., 'how do I X' → search for skill tags matching X) and applies FTS5 operators (AND, OR, phrase matching) to improve precision. No machine learning or semantic models — purely lexical matching with heuristic query expansion.
Unique: Implements heuristic-based query expansion for FTS5 to handle natural language variations without semantic embeddings, using rule-based synonym mapping and query pattern recognition
vs alternatives: Simpler and faster than semantic search (no embedding inference latency) while still handling common query variations through configurable synonym expansion
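A sketch of the lexical pipeline: strip question boilerplate, tokenize, expand via a synonym table, then join with FTS5 operators. The stop patterns and synonym map are illustrative:

```python
# Sketch: heuristic natural-language -> FTS5 expression conversion.
import re

SYNONYMS = {"async": ["asyncio", "concurrent"], "http": ["requests", "rest"]}
STOP = re.compile(r"^(how do i|how to|what is)\s+", re.IGNORECASE)

def to_fts5(query: str) -> str:
    core = STOP.sub("", query.lower())          # drop question boilerplate
    tokens = re.findall(r"[a-z0-9]+", core)     # normalize and tokenize
    clauses = []
    for tok in tokens:
        variants = [tok] + SYNONYMS.get(tok, [])  # rule-based expansion
        clauses.append("(" + " OR ".join(variants) + ")")
    return " AND ".join(clauses)

print(to_fts5("How do I async HTTP calls"))
# -> (async OR asyncio OR concurrent) AND (http OR requests OR rest) AND (calls)
```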
Parses markdown skill files to extract structured metadata (title, description, tags, language, difficulty, category) from frontmatter (YAML/TOML) or markdown conventions (heading levels, code fence language tags). Metadata is indexed alongside skill content, enabling filtered searches (e.g., 'find all Python skills tagged with async'). Supports custom metadata fields through configuration.
Unique: Extracts metadata from markdown structure (YAML frontmatter, code fence language tags, heading levels) rather than requiring a separate metadata file, keeping skills self-contained and editable in any text editor
vs alternatives: More portable than database-based metadata (Notion, Obsidian) because metadata lives in the markdown file itself and is version-controllable
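Frontmatter extraction is a short parse, sketched here with PyYAML; the field names match the examples above, though the actual schema may differ:

```python
# Sketch: split YAML frontmatter from markdown body and load it as metadata.
import yaml

def parse_skill(text: str) -> tuple[dict, str]:
    if text.startswith("---"):
        _, fm, body = text.split("---", 2)
        return yaml.safe_load(fm) or {}, body.strip()
    return {}, text  # no frontmatter: fall back to markdown conventions

meta, body = parse_skill("""---
title: Retry with backoff
tags: [python, resilience]
language: python
difficulty: intermediate
---
# Retry with backoff
Use exponential backoff when...""")
print(meta["tags"])  # ['python', 'resilience']
```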
+3 more capabilities

Overall, PrivateGPT scores higher at 43/100 vs wicked-brain's 32/100: PrivateGPT leads on adoption, the two are tied on quality, and wicked-brain is stronger on ecosystem.