Obsidian Copilot
Extension · Free
AI agent for your Obsidian knowledge vault.
Capabilities (14 decomposed)
vault-wide semantic search with hybrid bm25+ and vector retrieval
Medium confidence: Combines lexical BM25+ search with optional embedding-backed vector search (Orama/Miyo) to retrieve semantically similar notes from the entire vault. The system maintains dual indices—one for keyword matching and one for semantic embeddings—allowing users to find notes by meaning rather than exact text matches. Queries are processed through both indices and results are ranked by relevance, enabling natural language question answering over the knowledge base.
Implements dual-index hybrid search (BM25+ + optional vector embeddings) within Obsidian's plugin architecture, allowing users to toggle between lexical and semantic search without leaving the vault. The 'context envelope' system (DeepWiki: Context Sources and Envelope System) abstracts multiple retrieval sources (folders, tags, links, embeddings) into a unified context object passed to the LLM.
Unlike generic RAG tools that require external vector databases, Obsidian Copilot keeps search local-first with optional cloud embeddings, maintaining vault privacy while supporting semantic search without forced vendor lock-in.
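The ranking step can be pictured as a weighted blend over the two indices. A minimal TypeScript sketch, assuming precomputed BM25+ scores and note embeddings; the data shapes and the `alpha` weight are illustrative, not the plugin's actual Orama/Miyo structures:

```typescript
// Minimal sketch of hybrid ranking, assuming precomputed BM25+ scores and
// note embeddings. Shapes and the alpha weight are illustrative.
interface ScoredNote {
  path: string;
  bm25: number;        // lexical score from the BM25+ index
  embedding: number[]; // note embedding from the vector index
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1);
}

// Blend normalized lexical relevance with semantic similarity and sort.
function hybridRank(
  notes: ScoredNote[],
  queryEmbedding: number[],
  alpha = 0.5 // hypothetical lexical/semantic weight
): ScoredNote[] {
  const maxBm25 = Math.max(...notes.map((n) => n.bm25), 1e-9);
  const score = (n: ScoredNote) =>
    alpha * (n.bm25 / maxBm25) +
    (1 - alpha) * cosine(n.embedding, queryEmbedding);
  return [...notes].sort((x, y) => score(y) - score(x));
}
```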
multi-provider llm abstraction with streaming response handling
Medium confidence: Abstracts 15+ LLM providers (OpenAI, Anthropic, Groq, DeepSeek, Ollama, Azure OpenAI, etc.) behind a unified ChatModelProviders enum and chain execution system. Implements provider-agnostic streaming via the Chain Execution System (DeepWiki), allowing responses to stream token-by-token to the UI while maintaining consistent behavior across different model APIs. Each provider's authentication, rate limits, and response formats are normalized through a model management layer.
Implements a ChatModelProviders enum (src/constants.ts 204-441) that unifies 15+ providers with a single Chain Execution System. The streaming architecture decouples provider-specific response handling from UI rendering, allowing token-by-token updates without blocking the chat interface. Supports both cloud and local models in the same abstraction layer.
More provider-agnostic than GitHub Copilot or Claude Desktop, each of which is locked to a single provider. Obsidian Copilot's abstraction layer allows switching providers mid-conversation without losing context, and supports local models (Ollama) for zero-cost inference.
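A rough sketch of what such an abstraction can look like. ChatModelProviders is a real enum name cited above (src/constants.ts); its members and the adapter interface below are guesses, not the plugin's actual types:

```typescript
// ChatModelProviders exists in src/constants.ts; the members shown here
// are guesses, and the adapter below is an illustrative sketch.
enum ChatModelProviders {
  OPENAI = "openai",
  ANTHROPIC = "anthropic",
  OLLAMA = "ollama",
  // ...a dozen more in the actual enum
}

// One interface regardless of provider: each adapter normalizes its wire
// format (SSE chunks, JSON lines, etc.) into a plain token stream.
interface StreamingChatModel {
  provider: ChatModelProviders;
  stream(prompt: string): AsyncIterable<string>;
}

// The UI consumes tokens without knowing which provider produced them,
// so streaming rendering is written once.
async function renderStreaming(
  model: StreamingChatModel,
  prompt: string,
  onToken: (partial: string) => void
): Promise<string> {
  let buffer = "";
  for await (const token of model.stream(prompt)) {
    buffer += token;
    onToken(buffer); // incremental chat-view update
  }
  return buffer;
}
```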
relevant notes sidebar with link-graph and semantic suggestions
Medium confidence: The Relevant Notes sidebar panel (DeepWiki: User Interface) displays notes related to the current conversation using two mechanisms: link-graph analysis (showing notes linked from the current context) and semantic similarity (showing notes with similar embeddings). This provides users with contextual navigation and discovery without requiring explicit search. The panel updates dynamically as the conversation progresses.
Implements a dual-mechanism sidebar (DeepWiki: User Interface) that combines link-graph analysis (explicit connections) with semantic similarity (embedding-based). The sidebar updates dynamically as the conversation progresses, providing contextual navigation without requiring users to leave the chat. Suggestions are ranked by relevance and displayed with preview snippets.
More integrated than external knowledge graph tools because the sidebar operates within Obsidian's UI and updates in real-time. Unlike ChatGPT's file references, Obsidian Copilot's sidebar shows the full knowledge graph context, enabling users to discover unexpected connections.
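A minimal sketch of dual-mechanism ranking, assuming unit-normalized embeddings (so a dot product equals cosine similarity); the fixed link boost is a guessed heuristic, not the plugin's documented weighting:

```typescript
// Sketch: combine explicit link-graph connections with semantic
// similarity. Assumes unit-normalized embeddings; the 0.5 boost is a guess.
interface NoteMeta { path: string; links: string[]; embedding: number[]; }

const dot = (a: number[], b: number[]) =>
  a.reduce((s, v, i) => s + v * b[i], 0);

function suggestRelevant(current: NoteMeta, vault: NoteMeta[], topK = 5) {
  const linked = new Set(current.links);
  return vault
    .filter((n) => n.path !== current.path)
    .map((n) => ({
      path: n.path,
      // notes linked from the current context get a fixed boost on top
      // of embedding similarity
      score:
        (linked.has(n.path) ? 0.5 : 0) + dot(n.embedding, current.embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```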
document parsing with pdf/epub/docx support via hosted conversion
Medium confidence: The PDF/EPUB/DOCX Parsing feature (DeepWiki: Core Features) allows users to upload documents in multiple formats, which are converted to Markdown via Brevilabs-hosted infrastructure. The converted content is then indexed and searchable within the vault. This enables users to incorporate external documents into their knowledge base without manual transcription. Parsing is handled server-side to avoid bloating the Obsidian plugin.
Offloads document conversion to Brevilabs-hosted infrastructure (DeepWiki: Core Features), avoiding bloat in the Obsidian plugin. Supports multiple formats (PDF, EPUB, DOCX) and converts them to Markdown for seamless integration with the vault. Converted content is indexed and searchable like native notes.
More integrated than external document conversion tools because converted content is automatically indexed in the vault. Unlike generic PDF readers, Obsidian Copilot makes document content searchable and referenceable in chat, enabling knowledge synthesis across documents and notes.
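From the plugin's point of view, the upload-convert-index flow reduces to a single round trip. A hypothetical sketch; the endpoint URL, payload shape, and response field are assumptions, not Brevilabs' actual API:

```typescript
// Hypothetical upload -> convert -> index round trip. The endpoint,
// payload shape, and response field are assumptions.
async function convertToMarkdown(
  file: File,
  endpoint = "https://converter.example.com/convert"
): Promise<string> {
  const body = new FormData();
  body.append("document", file, file.name);
  const res = await fetch(endpoint, { method: "POST", body });
  if (!res.ok) throw new Error(`conversion failed: ${res.status}`);
  const { markdown } = (await res.json()) as { markdown: string };
  // The caller would then write `markdown` into the vault and index it
  // like any native note.
  return markdown;
}
```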
self-hosted backend replacement with miyo, firecrawl, and perplexity integration
Medium confidence: The Self-Host Mode (DeepWiki: Core Features) allows users with Copilot Plus (Believer tier) to replace Brevilabs' hosted backend with self-hosted services: Miyo for embeddings, Firecrawl for web scraping, and Perplexity for web search. This enables privacy-conscious users to run the entire Copilot Plus stack without sending data to Brevilabs. Configuration is handled through settings, allowing users to point to their own infrastructure.
Implements a pluggable backend architecture (DeepWiki: Core Features) that allows users to replace Brevilabs' hosted services with self-hosted alternatives (Miyo, Firecrawl, Perplexity). Configuration is handled through settings, enabling users to point to their own infrastructure without modifying code. This maintains feature parity with cloud-hosted Copilot Plus while preserving data privacy.
More flexible than Copilot Plus' cloud-only architecture because users can choose between hosted and self-hosted backends. Unlike generic self-hosted LLM frameworks (Ollama, LocalAI), Obsidian Copilot provides a complete self-hosted stack with embeddings, web search, and document parsing integrated.
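Conceptually, self-host mode amounts to swapping base URLs behind a stable config shape. A sketch with illustrative field names and endpoints, not the plugin's actual settings keys:

```typescript
// Sketch of a pluggable backend config; field names and URLs are
// illustrative, not the plugin's actual settings keys or endpoints.
interface BackendConfig {
  embeddings: { baseUrl: string }; // e.g. a self-hosted Miyo instance
  webScrape: { baseUrl: string };  // e.g. a self-hosted Firecrawl instance
  webSearch: { baseUrl: string; apiKey?: string }; // e.g. Perplexity
}

// Hosted default: everything points at the vendor's infrastructure.
const hosted: BackendConfig = {
  embeddings: { baseUrl: "https://hosted.example.com/embeddings" },
  webScrape: { baseUrl: "https://hosted.example.com/scrape" },
  webSearch: { baseUrl: "https://hosted.example.com/search" },
};

// Self-host mode: same feature code, different base URLs.
const selfHosted: BackendConfig = {
  embeddings: { baseUrl: "http://localhost:8001" },
  webScrape: { baseUrl: "http://localhost:8002" },
  webSearch: { baseUrl: "http://localhost:8003" },
};
```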
settings interface with provider configuration and model selection
Medium confidence: The Settings Interface (DeepWiki: Settings Interface) provides a comprehensive UI for configuring Obsidian Copilot, including provider selection, API key management, model selection, and feature toggles. The Settings and Configuration System (DeepWiki) manages the CopilotSettings interface and DEFAULT_SETTINGS baseline. Users can configure multiple providers, select default models, and enable/disable features without editing configuration files.
Implements a comprehensive Settings Interface (DeepWiki: Settings Interface) that abstracts provider configuration, API key management, and model selection. The Settings and Configuration System manages the CopilotSettings interface with DEFAULT_SETTINGS baseline, enabling users to configure multiple providers and switch between them without code changes.
More user-friendly than configuration files because settings are managed through a dedicated UI. Unlike ChatGPT's settings, Obsidian Copilot allows users to configure multiple providers and switch between them, enabling cost optimization and provider comparison.
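The CopilotSettings/DEFAULT_SETTINGS pattern is a common Obsidian-plugin idiom: merge saved data over a baseline so newly added fields pick up defaults. A sketch; only a few fields are shown and their names are assumptions:

```typescript
// Sketch of the CopilotSettings / DEFAULT_SETTINGS pattern; field names
// are assumptions, not the plugin's actual schema.
interface CopilotSettings {
  defaultProvider: string;
  apiKeys: Record<string, string>;
  defaultModel: string;
  enableVaultSearch: boolean;
}

const DEFAULT_SETTINGS: CopilotSettings = {
  defaultProvider: "openai",
  apiKeys: {},
  defaultModel: "gpt-4o",
  enableVaultSearch: true,
};

// Merging saved data over the baseline means fields added in newer plugin
// versions silently pick up their defaults.
function loadSettings(saved: Partial<CopilotSettings>): CopilotSettings {
  return { ...DEFAULT_SETTINGS, ...saved };
}
```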
context-aware chat with selective note/folder/tag inclusion
Medium confidence: Enables users to explicitly select which notes, folders, or tags should be included as context for each chat message. The Chat Input and Context Control system (DeepWiki) allows users to toggle context sources on/off before sending a message, building a context envelope that's passed to the LLM. This prevents token waste on irrelevant notes while maintaining fine-grained control over what the AI can see.
Implements a context envelope system (DeepWiki: Context Sources and Envelope System) that allows users to dynamically select context sources (notes, folders, tags) per message. The UI provides toggleable context controls in the Chat View (src/components/Chat.tsx), enabling users to see exactly what context will be sent before the message is processed.
Unlike ChatGPT's file upload or Claude's project context, Obsidian Copilot's context selection is granular (folder/tag level), persistent across sessions, and integrated with Obsidian's native organization system. Users don't need to manually upload files—context is pulled from the vault in real-time.
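A sketch of how an envelope might resolve to prompt context. The shape is inferred from the description above, not taken from the plugin's source:

```typescript
// Sketch of a context envelope assembled from user-toggled sources.
interface ContextEnvelope {
  notes: string[];   // explicitly selected note paths
  folders: string[]; // every note under these folders is pulled in
  tags: string[];    // every note carrying these tags is pulled in
}

function resolveEnvelope(
  env: ContextEnvelope,
  vault: { path: string; tags: string[]; content: string }[]
): string {
  const selected = vault.filter(
    (n) =>
      env.notes.includes(n.path) ||
      env.folders.some((f) => n.path.startsWith(f + "/")) ||
      env.tags.some((t) => n.tags.includes(t))
  );
  // Concatenate the selected notes into a single context string that is
  // sent alongside the user's message.
  return selected.map((n) => `## ${n.path}\n${n.content}`).join("\n\n");
}
```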
react-style autonomous agent with tool-calling loop
Medium confidence: Implements a ReAct (Reasoning + Acting) agent loop that iteratively calls tools (vault search, web search, composer edits) based on LLM reasoning. The Tool System and Autonomous Agents subsystem (DeepWiki) manages tool registration, execution, and result feedback. The agent reasons about which tool to use, executes it, observes the result, and decides whether to continue or return a final answer. This enables multi-step problem solving without user intervention.
Implements a ReAct loop within Obsidian's plugin sandbox, managing tool execution (vault search, web search, composer) without leaving the vault. The Tool System (DeepWiki) registers tools as callable functions with schemas, allowing the LLM to reason about which tool to use. Results are fed back into the reasoning loop, enabling iterative refinement.
More integrated than standalone agent frameworks (LangChain, AutoGPT) because tools operate directly on the Obsidian vault without external APIs. Copilot Plus agents can search the vault and web in the same loop, then apply edits directly to notes—a workflow that would require multiple tool integrations in generic agent frameworks.
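The loop itself is compact: ask the model for a step, run the chosen tool, append the observation, repeat. A minimal sketch; the step schema, tool map, and stop condition are illustrative, not the plugin's actual Tool System API:

```typescript
// Minimal ReAct-style loop sketch. The step schema and stop condition
// are illustrative.
interface Tool { name: string; run(input: string): Promise<string>; }
interface Step { tool?: string; input?: string; finalAnswer?: string; }

async function reactLoop(
  ask: (transcript: string) => Promise<Step>, // LLM decides the next step
  tools: Map<string, Tool>,
  question: string,
  maxSteps = 6
): Promise<string> {
  let transcript = `Question: ${question}`;
  for (let i = 0; i < maxSteps; i++) {
    const step = await ask(transcript);
    if (step.finalAnswer !== undefined) return step.finalAnswer;
    const tool = tools.get(step.tool ?? "");
    if (!tool) throw new Error(`unknown tool: ${step.tool}`);
    const observation = await tool.run(step.input ?? "");
    // Feed the observation back so the model can reason about it.
    transcript += `\nAction: ${step.tool}(${step.input})\nObservation: ${observation}`;
  }
  return "Stopped: step limit reached.";
}
```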
ai-assisted note editing with diff preview and one-click application
Medium confidence: The Composer system (DeepWiki: Tool System and Autonomous Agents) allows users to request AI edits to notes, which are previewed as diffs before application. Users can accept, reject, or modify suggested edits. The system integrates with the note editor, applying changes atomically and maintaining undo history. This enables AI-assisted writing (summarization, expansion, tone adjustment) without overwriting original content until explicitly approved.
Implements a Composer tool that generates diffs and previews them in the Obsidian UI before applying changes. The system maintains atomic edits (all-or-nothing application) and integrates with Obsidian's native undo system, ensuring users can always revert AI suggestions. The diff preview is rendered inline in the chat, allowing users to approve/reject without leaving the conversation.
Unlike generic LLM writing assistants (Grammarly, Hemingway), Obsidian Copilot's Composer is vault-aware and can edit notes directly. Unlike VS Code's Copilot, which applies edits immediately, Obsidian Copilot requires explicit approval, reducing the risk of accidental overwrites.
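A sketch of the preview-then-apply pattern: nothing touches the note until the user approves, and the write replaces the whole revised content in one step. The crude positional line diff stands in for the plugin's real inline diff rendering:

```typescript
// Sketch of preview-then-apply editing; names are illustrative.
interface ProposedEdit {
  path: string;
  original: string;
  revised: string;
}

// A crude positional line diff for display purposes only.
function previewDiff(edit: ProposedEdit): string {
  const before = edit.original.split("\n");
  const after = edit.revised.split("\n");
  const lines: string[] = [];
  const max = Math.max(before.length, after.length);
  for (let i = 0; i < max; i++) {
    if (before[i] === after[i]) continue;
    if (before[i] !== undefined) lines.push(`- ${before[i]}`);
    if (after[i] !== undefined) lines.push(`+ ${after[i]}`);
  }
  return lines.join("\n");
}

async function applyIfApproved(
  edit: ProposedEdit,
  approved: boolean,
  write: (path: string, content: string) => Promise<void>
) {
  if (!approved) return; // nothing is written until the user accepts
  await write(edit.path, edit.revised); // atomic: whole revised content
}
```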
custom command system with markdown-based prompt templates and variable substitution
Medium confidence: Allows users to define custom commands as Markdown files with templated prompts that support variable substitution (e.g., {{selectedText}}, {{fileName}}, {{date}}). The Command System (DeepWiki) parses these templates at runtime, substitutes variables from the current context, and executes the resulting prompt. This enables users to create reusable AI workflows (e.g., 'summarize this note', 'generate outline') without writing code.
Implements a Markdown-based command system (DeepWiki: Command System) where users define prompts as Markdown files with {{variable}} placeholders. The system parses these templates, substitutes variables from the current Obsidian context (selected text, file name, date, etc.), and executes the resulting prompt. This allows non-technical users to create custom AI workflows without touching code.
More accessible than LangChain prompt templates or OpenAI's custom GPTs because templates are plain Markdown files stored in the vault. Users can version-control, share, and modify templates using Obsidian's native tools. Unlike ChatGPT's custom instructions, Obsidian Copilot's commands are context-aware and can access vault-specific variables.
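Variable substitution of this kind is a one-liner over the template. A sketch using the placeholder names mentioned above; unknown placeholders are left intact rather than erased:

```typescript
// Sketch of {{variable}} substitution over a Markdown prompt template.
function renderTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match // keep unknown placeholders as-is
  );
}

// Usage: variables are filled from the current Obsidian context.
const prompt = renderTemplate(
  "Summarize the following from {{fileName}} ({{date}}):\n\n{{selectedText}}",
  {
    fileName: "meeting-notes.md",
    date: "2024-05-01",
    selectedText: "the currently selected passage",
  }
);
```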
vision-capable chat with image attachment and understanding
Medium confidence: Allows users to attach images to chat messages and send them to vision-capable LLM models (GPT-4V, Claude 3, Gemini Vision, etc.). The system handles image encoding, provider-specific vision API formatting, and response streaming. Images are embedded inline in the chat history and can be referenced in follow-up messages. This enables users to ask questions about diagrams, screenshots, or visual content within their vault.
Integrates vision capabilities into the multi-provider abstraction layer, allowing users to attach images to chat and have them processed by any vision-capable provider. Images are embedded in the chat history and can be referenced in follow-up messages, maintaining context across multiple turns. The system handles provider-specific vision API formatting (e.g., base64 data URLs for OpenAI versus structured image blocks for Claude).
More integrated than uploading images to ChatGPT or Claude because images are stored in the Obsidian vault and referenced directly. Users can build persistent visual knowledge bases and ask follow-up questions about images without re-uploading. Unlike generic image analysis tools, vision chat is scoped to the vault and can reference other notes for context.
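A sketch of per-provider image payload formatting. The two shapes below follow OpenAI's and Anthropic's public chat APIs; treat the wiring around them as an assumption about how the plugin normalizes attachments:

```typescript
// Sketch: normalize one attachment into each provider's documented
// image payload shape.
type ImageAttachment = { mimeType: string; base64: string };

function formatImageForProvider(provider: string, img: ImageAttachment) {
  switch (provider) {
    case "openai":
      // OpenAI accepts base64 images inline as data URLs.
      return {
        type: "image_url",
        image_url: { url: `data:${img.mimeType};base64,${img.base64}` },
      };
    case "anthropic":
      // Anthropic uses a structured image block with a base64 source.
      return {
        type: "image",
        source: { type: "base64", media_type: img.mimeType, data: img.base64 },
      };
    default:
      throw new Error(`no vision support wired for ${provider}`);
  }
}
```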
persistent chat history with markdown note storage and retrieval
Medium confidence: Stores chat conversations as Markdown notes in the vault, enabling users to review, search, and reference past conversations. The Chat Persistence and History subsystem (DeepWiki) saves each conversation as a timestamped note with metadata (model used, context sources, etc.). Users can search chat history using Obsidian's native search and link to conversations from other notes. This creates a persistent knowledge artifact from AI interactions.
Implements chat persistence by storing conversations as Markdown notes in the vault (DeepWiki: Chat Persistence and History). Each conversation is timestamped, tagged with metadata (model used, context sources), and searchable using Obsidian's native search. This integrates chat history into the vault's knowledge graph, allowing users to link to conversations from other notes.
Unlike ChatGPT or Claude, which store history in proprietary databases, Obsidian Copilot stores chat history as Markdown files in the user's vault. This enables full-text search, version control, and integration with other notes. Users own their conversation data and can export it without vendor lock-in.
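Serializing a conversation to a Markdown note with YAML frontmatter might look like the following; the frontmatter keys are illustrative, not the plugin's actual schema:

```typescript
// Sketch: one conversation becomes one Markdown note with frontmatter
// metadata. Frontmatter keys are assumptions.
interface ChatMessage { role: "user" | "assistant"; content: string; }

function conversationToMarkdown(
  messages: ChatMessage[],
  meta: { model: string; contextSources: string[] }
): string {
  const frontmatter = [
    "---",
    `created: ${new Date().toISOString()}`,
    `model: ${meta.model}`,
    `context: [${meta.contextSources.join(", ")}]`,
    "---",
  ].join("\n");
  const body = messages
    .map((m) => `**${m.role}**: ${m.content}`)
    .join("\n\n");
  return `${frontmatter}\n\n${body}\n`;
}
```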
project-scoped context with folder/tag/url-based boundaries
Medium confidence: The Project System (DeepWiki: Project System) allows users to define scoped contexts from folders, tags, or external URLs. Each project has its own chat history, context sources, and configuration. This enables users to isolate conversations to specific projects (e.g., 'Research Project A', 'Client B Documentation') without mixing context. Projects are persisted and can be switched between without losing state.
Implements a Project System (DeepWiki: Project System) that allows users to define scoped contexts from folders, tags, or external URLs. Each project maintains separate chat history and context sources, enabling users to work on multiple projects without context pollution. Projects are persisted in the vault and can be switched between without losing state.
More integrated than external project management tools because projects are defined within Obsidian using native folder/tag structures. Unlike ChatGPT's conversation threads, Obsidian Copilot projects maintain persistent context sources and configuration, enabling consistent behavior across sessions.
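A sketch of a project definition with scoped boundaries, with field names inferred from the description above rather than taken from the plugin:

```typescript
// Sketch of a project with scoped context boundaries; fields are inferred.
interface Project {
  name: string;
  folders: string[];
  tags: string[];
  urls: string[];          // external sources pulled into the project context
  chatHistoryPath: string; // where this project's conversations are saved
}

const research: Project = {
  name: "Research Project A",
  folders: ["research/project-a"],
  tags: ["#project-a"],
  urls: ["https://example.com/spec"],
  chatHistoryPath: "copilot/history/project-a",
};

// Switching projects swaps the active scope and history without touching
// other projects' state.
function isInScope(project: Project, notePath: string, noteTags: string[]) {
  return (
    project.folders.some((f) => notePath.startsWith(f + "/")) ||
    project.tags.some((t) => noteTags.includes(t))
  );
}
```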
long-term memory with persistent agent-readable/writable memory notes
Medium confidence: The Long-Term Memory feature (DeepWiki: Tool System and Autonomous Agents) allows autonomous agents to read and write dedicated memory notes that persist across conversations. Agents can store facts, decisions, or context in these notes and retrieve them in future conversations. This enables agents to build on previous interactions and maintain continuity across sessions without requiring users to manually provide context.
Implements long-term memory as a tool within the ReAct agent loop, allowing agents to read and write persistent memory notes. Memory notes are stored in the vault as Markdown files and can be referenced in future conversations. This enables agents to build context across sessions without requiring users to manually provide state.
Unlike stateless LLM APIs, Obsidian Copilot agents can maintain persistent memory across conversations. Unlike generic vector databases, memory is stored as human-readable Markdown notes in the vault, enabling users to audit and modify agent memory directly.
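Because memory notes live in the vault, the whole mechanism can be exposed to the agent as two ordinary tools. A sketch; the note path and function names are assumptions:

```typescript
// Sketch of long-term memory as two agent tools over a Markdown note;
// the note path and tool names are assumptions.
const MEMORY_NOTE = "copilot/memory.md";

interface VaultIO {
  read(path: string): Promise<string>;
  append(path: string, text: string): Promise<void>;
}

// Registered in the same tool map the ReAct loop uses: the agent can
// recall prior facts or record new ones, and the user can open and edit
// the same note by hand.
function memoryTools(vault: VaultIO) {
  return {
    readMemory: () => vault.read(MEMORY_NOTE),
    writeMemory: (fact: string) =>
      vault.append(MEMORY_NOTE, `- ${new Date().toISOString()}: ${fact}\n`),
  };
}
```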
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Obsidian Copilot, ranked by overlap. Discovered automatically through the match graph.
obsidian-copilot
THE Copilot in Obsidian
Local GPT
Chat with documents without compromising privacy
LanceDB
Serverless embedded vector DB — Lance format, multimodal, versioning, no server needed.
vectra
A lightweight, file-backed vector database for Node.js and browsers with Pinecone-compatible filtering and hybrid BM25 search.
onyx
Open Source AI Platform - AI Chat with advanced features that works with every LLM
weaviate
Weaviate is an open-source vector database that stores both objects and vectors, allowing for the combination of vector search with structured filtering with the fault tolerance and scalability of a cloud-native database.
Best For
- ✓ knowledge workers with large vaults (100+ notes) who need semantic discovery
- ✓ researchers building on existing notes without remembering exact terminology
- ✓ teams using Obsidian as a knowledge base that need AI-powered QA
- ✓ teams evaluating multiple LLM providers without rewriting integrations
- ✓ privacy-conscious users who want to run local models (Ollama, LM Studio)
- ✓ developers building multi-tenant Obsidian setups with per-user provider selection
- ✓ users exploring their vault's knowledge graph while chatting
- ✓ researchers discovering connections between notes during analysis
Known Limitations
- ⚠ Embedding-backed search requires external API keys (OpenAI, Anthropic, or self-hosted Miyo)
- ⚠ BM25+ lexical search alone may miss semantic variations and synonyms
- ⚠ Vector search adds ~500ms-2s latency per query depending on vault size and embedding provider
- ⚠ No built-in re-ranking by note recency or importance—relies on embedding model's relevance scoring
- ⚠ Streaming implementation adds ~50-100ms latency per token due to UI update overhead
- ⚠ No automatic fallback if primary provider fails—requires manual provider switching
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
AI agent plugin for Obsidian that provides conversational access to your entire vault, enabling semantic search across notes, question answering from your knowledge base, and AI-assisted writing within Obsidian.
Alternatives to Obsidian Copilot
OpenAI Assistants API: OpenAI's managed agent API — persistent assistants with code interpreter, file search, threads.