Obsidian Copilot
Agent · Free — AI agent for the Obsidian knowledge vault.
Capabilities (14 decomposed)
vault-wide semantic search with hybrid bm25+ and embedding-backed retrieval
Medium confidence — Executes dual-path search across the entire Obsidian vault using BM25+ lexical indexing as the default free tier, with optional embedding-backed vector search via Orama or Miyo APIs for semantic similarity. The indexing system maintains an in-memory inverted index of vault contents, while the retrieval layer implements RAG-style context envelope construction that ranks results by relevance and injects top-K documents into LLM prompts. Search results are ranked and formatted as markdown context blocks injected into chat messages.
Implements a hybrid search architecture that defaults to free BM25+ lexical search but allows opt-in embedding-backed vector search via external APIs (Orama/Miyo), avoiding vendor lock-in while maintaining local-first operation. The context envelope system automatically constructs ranked context blocks from search results, injecting them into LLM prompts without manual prompt engineering.
Faster than cloud-only RAG solutions (Notion AI, ChatGPT plugins) because BM25+ indexing runs locally; more semantically aware than simple keyword search because embedding search is available; more flexible than Obsidian's native search because it integrates with LLM reasoning.
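The fusion step described above can be sketched as follows. This is a hypothetical illustration, not the plugin's actual code: the field names, the `alpha` blend weight, and the linear score combination are all assumptions about how a BM25+ lexical score and an embedding cosine similarity might be merged before top-K selection.

```typescript
// Hypothetical hybrid-score fusion: blend a BM25-style lexical score
// with embedding cosine similarity (when vectors are available), then
// keep the top-K documents for the LLM prompt context block.
interface ScoredDoc {
  path: string;
  lexical: number;       // BM25+ score from the inverted index
  embedding?: number[];  // present only when vector search is enabled
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1); // guard against zero vectors
}

function rankHybrid(
  docs: ScoredDoc[],
  queryEmbedding: number[] | null,
  topK = 5,
  alpha = 0.5, // weight of the semantic component (assumed)
): { path: string; score: number }[] {
  return docs
    .map((d) => {
      const semantic =
        queryEmbedding && d.embedding ? cosine(d.embedding, queryEmbedding) : 0;
      return { path: d.path, score: (1 - alpha) * d.lexical + alpha * semantic };
    })
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

With `queryEmbedding = null` (the free tier) the semantic term is zero and the ranking degrades gracefully to pure lexical order.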
multi-provider llm abstraction with streaming response handling
Medium confidence — Abstracts 15+ LLM providers (OpenAI, Anthropic, Google, Groq, Ollama, Azure, etc.) behind a unified ChatModelProviders enum and model management system. The chain execution system streams responses token-by-token from the selected provider's API, with built-in error handling and fallback logic. Supports both cloud-hosted APIs (via API keys) and local models (Ollama, LM Studio) without code changes, enabling users to swap providers without reconfiguring prompts or context handling.
Implements a provider-agnostic abstraction layer (ChatModelProviders enum in src/constants.ts) that supports 15+ providers including local models (Ollama, LM Studio) and cloud APIs, with unified streaming response handling. The model management system allows users to configure multiple providers and switch between them at runtime without code changes, enabling cost/performance optimization and vendor lock-in avoidance.
More flexible than Copilot or ChatGPT plugins (locked to single provider) because it supports local models and 15+ cloud providers; simpler than LangChain for Obsidian users because configuration is UI-driven rather than code-based; faster than batch-only solutions because it streams responses token-by-token.
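A provider-agnostic streaming layer of this kind can be sketched as below. The interface shape, registry, and `MockProvider` are illustrative assumptions; the real plugin's `ChatModelProviders` enum and streaming signatures may differ.

```typescript
// Hypothetical provider abstraction: any provider exposes the same
// streaming interface, so chat code never depends on a specific vendor.
type Token = string;

interface ChatProvider {
  readonly id: string;
  // A real implementation would call the provider's HTTP API and yield
  // tokens as they arrive; this sketch yields from a canned response.
  stream(prompt: string): AsyncGenerator<Token>;
}

class MockProvider implements ChatProvider {
  constructor(readonly id: string, private reply: string) {}
  async *stream(_prompt: string): AsyncGenerator<Token> {
    for (const word of this.reply.split(" ")) yield word + " ";
  }
}

// Runtime registry: the settings UI swaps providers without touching
// prompt construction or context handling.
const registry = new Map<string, ChatProvider>();
function register(p: ChatProvider): void {
  registry.set(p.id, p);
}

async function chat(providerId: string, prompt: string): Promise<string> {
  const provider = registry.get(providerId);
  if (!provider) throw new Error(`unknown provider: ${providerId}`);
  let out = "";
  for await (const tok of provider.stream(prompt)) out += tok; // token-by-token
  return out;
}
```

Swapping `MockProvider` for an Ollama- or OpenAI-backed implementation changes nothing downstream, which is the point of the abstraction.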
document parsing and ingestion for pdf, epub, and docx formats via hosted backend
Medium confidence — The Plus-tier document parsing feature allows users to upload PDF, EPUB, and DOCX files, which are converted to markdown by Brevilabs' hosted backend and ingested into the vault. The conversion process extracts text, preserves structure (headings, lists, tables), and generates markdown files that can be searched and linked like native notes. This is a hosted service; documents are sent to Brevilabs' infrastructure for processing.
Provides hosted document parsing for PDF, EPUB, and DOCX formats, converting them to markdown and ingesting them into the vault. This is differentiated from local parsing tools by the hosted approach (no local dependencies) and integration with the vault knowledge base.
More integrated than external document converters (Pandoc, CloudConvert) because converted files are automatically ingested into the vault; more accessible than local parsing tools because no setup is required; more comprehensive than single-format tools because it supports PDF, EPUB, and DOCX.
self-hosted backend replacement with miyo, firecrawl, and perplexity integration
Medium confidence — 'Self-Host Mode' (Believer tier) allows users to replace Brevilabs' hosted backend with self-hosted services: Miyo for embeddings, Firecrawl for web scraping, and Perplexity for web search. This enables privacy-conscious deployments where all data remains under user control. Configuration is via settings UI, allowing users to point to their own instances of these services. The agent system automatically uses the configured backends for search and web access.
Enables users to replace Brevilabs' hosted backend with self-hosted services (Miyo, Firecrawl, Perplexity), maintaining full data control while retaining agent capabilities. Configuration is UI-driven, allowing non-technical users to point to their own infrastructure.
More flexible than cloud-only solutions (ChatGPT, Copilot) because it supports self-hosted backends; more integrated than manual service integration because configuration is built into the plugin; more privacy-preserving than Brevilabs' managed services because data never leaves the user's infrastructure.
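The fallback behavior described above can be sketched with a small config shape. The key names and the hosted default URL are illustrative assumptions, not the plugin's actual settings schema.

```typescript
// Hypothetical Self-Host Mode settings: each optional URL points to a
// user-operated instance of the corresponding service.
interface SelfHostConfig {
  miyoUrl?: string;       // self-hosted embeddings endpoint
  firecrawlUrl?: string;  // self-hosted web scraping endpoint
  perplexityUrl?: string; // self-hosted web search endpoint
}

// Prefer the user's own instance when configured; otherwise fall back
// to the managed backend, so agent capabilities keep working either way.
function resolveBackend(
  cfg: SelfHostConfig,
  service: keyof SelfHostConfig,
  hostedDefault: string,
): string {
  return cfg[service] ?? hostedDefault;
}
```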
settings interface with multi-provider configuration and model selection
Medium confidence — The settings UI allows users to configure multiple LLM providers (OpenAI, Anthropic, Google, etc.) with API keys, select default models for chat and embeddings, and customize behavior (context size, temperature, streaming, etc.). Settings are stored in Obsidian's plugin data directory and can be exported/imported. The interface supports both simple (API key + model) and advanced (custom endpoints, proxy settings) configuration. Model selection is dynamic; users can switch models without restarting Obsidian.
Provides a comprehensive settings UI for configuring 15+ LLM providers, with support for multiple API keys, model selection, and advanced options (custom endpoints, proxy settings). Settings are stored in Obsidian's plugin data directory and can be exported/imported.
More user-friendly than code-based configuration (LangChain, LLamaIndex) because it uses a UI; more flexible than single-provider solutions because it supports 15+ providers; more portable than cloud-based settings because configuration is stored locally.
plugin lifecycle management with lazy loading and state persistence
Medium confidence — The plugin implements a standard Obsidian plugin lifecycle (onload, onunload) with lazy initialization of expensive components (embeddings, indexing, agent infrastructure). The state management system persists plugin state (settings, conversation history, memory notes) to Obsidian's plugin data directory, enabling recovery after crashes or restarts. The plugin integrates with Obsidian's command palette and ribbon UI for easy access to chat and commands.
Implements standard Obsidian plugin lifecycle with lazy initialization of expensive components and automatic state persistence to the plugin data directory. This enables fast startup and crash recovery without manual intervention.
More efficient than eager loading because expensive components are initialized on-demand; more reliable than in-memory state because state is persisted to disk; more integrated than external state management because it uses Obsidian's native plugin data directory.
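The lazy-initialization pattern described above can be sketched generically. This is a minimal illustration, assuming nothing about the plugin's actual class names; it only shows how an expensive component (such as a search index) is built once, on first access.

```typescript
// Generic lazy-init helper: the factory runs at most once, on first use,
// so plugin startup stays fast and unused components cost nothing.
function lazy<T>(init: () => T): () => T {
  let value: T | undefined;
  let ready = false;
  return () => {
    if (!ready) {
      value = init();
      ready = true;
    }
    return value as T;
  };
}

// Simulate an expensive one-time build (e.g., the vault search index).
let buildCount = 0;
const getIndex = lazy(() => {
  buildCount++;
  return { docs: [] as string[] };
});
```

In an Obsidian plugin, `onload` would register commands immediately and defer index or embedding construction behind a getter like this.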
context-aware chat with selective note/folder/tag injection and project scoping
Medium confidence — Enables conversational chat with fine-grained control over which vault content is included in each message. Users can select specific notes, folders, or tags to inject as context, or use the free 'Vault QA' mode for full-vault search. The context envelope system constructs a ranked context block from selected sources, injecting it into the system prompt. The Plus tier 'Project Mode' allows defining scoped contexts from folders/tags/URLs, enabling multi-project workflows where different conversations operate over different knowledge domains.
Implements a context envelope system that allows users to dynamically select which notes/folders/tags are injected into each chat message, with optional Project Mode (Plus) for persistent scoped contexts. This enables multi-project workflows within a single vault without requiring separate Obsidian instances or manual context switching.
More flexible than ChatGPT's conversation scoping (which is global) because it supports per-message context selection; more granular than Notion AI (which operates on single pages) because it can combine multiple notes and folders; simpler than building custom RAG pipelines because context selection is UI-driven.
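Folder/tag scoping of the kind described above can be sketched as a simple filter plus context-block builder. The `ProjectScope` shape and the markdown layout of the context block are assumptions for illustration.

```typescript
// Hypothetical project scope: a note is in scope if it lives under a
// scoped folder or carries a scoped tag.
interface Note { path: string; tags: string[]; content: string; }
interface ProjectScope { folders: string[]; tags: string[]; }

function inScope(note: Note, scope: ProjectScope): boolean {
  const byFolder = scope.folders.some((f) => note.path.startsWith(f + "/"));
  const byTag = scope.tags.some((t) => note.tags.includes(t));
  return byFolder || byTag;
}

// Assemble the context block that would be injected into the system
// prompt; each note becomes a titled markdown section.
function buildContext(notes: Note[], scope: ProjectScope): string {
  return notes
    .filter((n) => inScope(n, scope))
    .map((n) => `## ${n.path}\n${n.content}`)
    .join("\n\n");
}
```

Different conversations can then hold different `ProjectScope` values, giving per-project context without separate vaults.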
react-style autonomous agent with tool-calling loop and vault/web/composer actions
Medium confidence — Implements a ReAct (Reasoning + Acting) agent loop that enables the LLM to autonomously decide when to search the vault, fetch web content, or apply edits via the Composer tool. The agent maintains an internal reasoning trace, calls tools based on LLM-generated function calls, and iterates until reaching a terminal state (answer found, max steps exceeded, or error). Tools include vault search (BM25+/semantic), web search (via Firecrawl or Perplexity), and note editing (via Composer with diff preview). This is a Plus-tier feature backed by Brevilabs' hosted infrastructure.
Implements a ReAct-style agent loop that orchestrates multiple tools (vault search, web search, Composer edits) based on LLM-generated function calls, with reasoning traces visible to the user. The agent maintains state across iterations and can apply edits back to the vault, enabling autonomous knowledge workflows. This is differentiated from simpler tool-calling by the iterative reasoning loop and multi-step planning.
More autonomous than manual tool-calling (Copilot's function calling) because the agent decides which tools to use and iterates; more integrated than external agents (AutoGPT, LangChain agents) because it operates directly within Obsidian and can edit notes; more transparent than black-box agents because reasoning traces are visible to the user.
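The iterate-until-terminal structure of a ReAct loop can be sketched as below. The `decide` function stands in for the LLM call, and the tool names are illustrative, not the plugin's actual tool set.

```typescript
// Minimal ReAct-style loop: the model either requests a tool call or
// emits a final answer; tool observations accumulate in a visible trace.
type Step =
  | { kind: "tool"; tool: string; input: string }
  | { kind: "answer"; text: string };

type Tool = (input: string) => string;

function runAgent(
  decide: (trace: string[]) => Step, // stand-in for the LLM call
  tools: Record<string, Tool>,
  maxSteps = 5,
): string {
  const trace: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = decide(trace);
    if (step.kind === "answer") return step.text; // terminal state
    const tool = tools[step.tool];
    if (!tool) throw new Error(`unknown tool: ${step.tool}`);
    // Record observation so the next decide() call can reason over it.
    trace.push(`${step.tool}(${step.input}) -> ${tool(step.input)}`);
  }
  return "max steps exceeded"; // terminal state: step budget exhausted
}
```

In the real plugin the tools would be vault search, web search, and Composer edits, and the trace is what makes the agent's reasoning inspectable.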
ai-assisted note editing with diff preview and one-click application
Medium confidence — The Composer tool allows users to request AI-generated edits to notes, with a diff preview interface showing proposed changes before application. Users can accept, reject, or manually edit the diff. The editing system integrates with Obsidian's file API to apply changes atomically, maintaining undo/redo history. Supports both inline edits (modify selected text) and full-note rewrites. The agent system can autonomously apply edits via Composer as part of tool-calling loops.
Implements a diff preview interface for AI-generated edits, allowing users to review and manually adjust changes before application. The Composer tool integrates with Obsidian's file API for atomic application and maintains undo/redo history. This is differentiated from simple text replacement by the explicit diff review step and support for autonomous agent-driven edits.
More transparent than auto-apply editing (ChatGPT plugins) because diffs are previewed before application; more integrated than external editors because changes are applied directly to Obsidian files; more flexible than template-based generation because it supports arbitrary edits to existing content.
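A diff preview ultimately rests on a line diff like the classic LCS-based one sketched below; this is a generic illustration, not the plugin's implementation, rendering unchanged lines with a leading space, removals with `-`, and additions with `+`.

```typescript
// LCS-based line diff of the kind a preview UI would render before the
// user accepts or rejects an AI-proposed edit.
function lineDiff(oldText: string, newText: string): string[] {
  const a = oldText.split("\n");
  const b = newText.split("\n");
  // dp[i][j] = length of the longest common subsequence of a[i..], b[j..]
  const dp: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0),
  );
  for (let i = a.length - 1; i >= 0; i--)
    for (let j = b.length - 1; j >= 0; j--)
      dp[i][j] =
        a[i] === b[j] ? dp[i + 1][j + 1] + 1 : Math.max(dp[i + 1][j], dp[i][j + 1]);
  // Walk the table to emit kept / removed / added lines in order.
  const out: string[] = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) { out.push("  " + a[i]); i++; j++; }
    else if (dp[i + 1][j] >= dp[i][j + 1]) { out.push("- " + a[i]); i++; }
    else { out.push("+ " + b[j]); j++; }
  }
  while (i < a.length) out.push("- " + a[i++]);
  while (j < b.length) out.push("+ " + b[j++]);
  return out;
}
```

Only after the user accepts would the new text be written through the file API, keeping the change in the editor's undo history.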
markdown-based custom command system with variable substitution and prompt templating
Medium confidence — Allows users to define custom commands as markdown files with prompt templates, variable placeholders (e.g., {{selectedText}}, {{fileName}}), and optional YAML frontmatter for configuration. Commands are executed by substituting variables and sending the rendered prompt to the LLM. Supports both simple one-shot prompts and complex multi-step workflows. Commands are stored as markdown files in the vault, enabling version control and sharing. The command system integrates with the chat UI and can be triggered via slash commands or the command palette.
Implements a markdown-based command system with variable substitution and optional YAML configuration, allowing users to define reusable AI workflows without coding. Commands are stored as markdown files in the vault, enabling version control and team sharing. This is differentiated from hardcoded commands by the template-based approach and vault-native storage.
More flexible than built-in commands (Copilot's quick actions) because users can define arbitrary prompts; more accessible than code-based automation (LangChain, Zapier) because it uses markdown syntax; more shareable than UI-based configuration because commands are version-controlled markdown files.
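The variable substitution step can be sketched in a few lines. The placeholder names `{{selectedText}}` and `{{fileName}}` come from the description above; the leave-unknown-placeholders-intact behavior is an assumption.

```typescript
// Render a command template by replacing {{name}} placeholders from a
// variables map; unknown placeholders are left as-is (assumed behavior).
function renderCommand(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match,
  );
}
```

The rendered string is what gets sent to the LLM when the command fires from a slash command or the command palette.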
vision-capable chat with image attachment and multimodal understanding
Medium confidence — Allows users to attach images to chat messages and send them to vision-capable LLM models (GPT-4V, Claude 3, Gemini Vision, etc.). The chat UI handles image upload, encoding, and injection into multimodal prompts. Supports multiple image formats (PNG, JPEG, WebP) and can process images alongside text context from notes. The vision capability is provider-dependent; only models with vision support can process images.
Integrates vision-capable LLM models directly into the chat interface, allowing users to attach images and process them alongside text context from notes. The implementation is provider-agnostic, supporting any LLM with vision capabilities (GPT-4V, Claude 3, Gemini Vision, etc.).
More integrated than external vision tools (ChatGPT, Claude web) because images are processed in context of vault notes; more flexible than single-provider solutions because it supports multiple vision models; more accessible than API-based vision tools because it's UI-driven.
sidebar panel with link-graph-aware note suggestions and semantic relevance ranking
Medium confidence — The 'Relevant Notes' sidebar panel displays notes semantically related to the current chat or selected note, ranked by relevance. The panel combines semantic search results with link-graph analysis (backlinks, forward links, link distance) to surface both directly connected and semantically similar notes. Results are updated in real-time as the user types or selects notes. The panel integrates with the chat context system, allowing users to quickly inject suggested notes into the conversation.
Combines semantic search results with link-graph analysis (backlinks, forward links, link distance) to rank related notes, providing both semantic and structural relevance. The sidebar panel integrates with the chat context system, allowing one-click injection of suggested notes into conversations.
More intelligent than Obsidian's native backlinks panel (which shows only direct links) because it includes semantic similarity; more discoverable than manual search because suggestions are proactive; more integrated than external graph tools because it's built into the chat interface.
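Blending semantic similarity with link-graph proximity can be sketched as a weighted score. The `beta` weight, the `1/distance` decay, and the convention that a direct link has distance 1 are illustrative assumptions.

```typescript
// Hypothetical relevance blend: semantic similarity plus a structural
// boost that decays with link distance (direct link = distance 1;
// Infinity means no path in the link graph and contributes 0).
interface Candidate { path: string; similarity: number; linkDistance: number; }

function relevance(c: Candidate, beta = 0.3): number {
  const structural = Number.isFinite(c.linkDistance) ? 1 / c.linkDistance : 0;
  return (1 - beta) * c.similarity + beta * structural;
}

function suggest(cands: Candidate[], topK = 3): string[] {
  return [...cands]
    .sort((a, b) => relevance(b) - relevance(a))
    .slice(0, topK)
    .map((c) => c.path);
}
```

This is how a directly linked note can outrank a slightly more similar but structurally unrelated one, which pure semantic search would miss.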
persistent chat history storage as markdown notes with full-text search
Medium confidence — Automatically saves chat conversations as markdown notes in a designated vault folder (configurable in settings). Each conversation is stored as a single markdown file with messages formatted as blockquotes or list items, preserving the full conversation history including context injections and tool calls. Conversations are searchable via Obsidian's native full-text search and can be linked to other notes. The system supports exporting conversations and integrates with Obsidian's sync and backup mechanisms.
Stores conversations as markdown notes in the vault, making them searchable via Obsidian's native full-text search and linkable to other notes. This integrates conversations into the knowledge base rather than siloing them in a separate database, enabling serendipitous discovery and knowledge graph building.
More integrated than external conversation storage (ChatGPT history, Slack threads) because conversations are part of the vault; more searchable than UI-based history because it uses Obsidian's full-text search; more portable than proprietary formats because conversations are plain markdown.
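Serializing a conversation to a vault note can be sketched as below; the blockquote-per-message layout follows the description above, but the exact formatting is an assumption.

```typescript
// Render a conversation as a markdown note: a title heading followed by
// one blockquote per message, ready to save into the vault folder.
interface Message { role: "user" | "assistant"; content: string; }

function conversationToMarkdown(title: string, messages: Message[]): string {
  const body = messages
    .map((m) => `> **${m.role}**: ${m.content}`)
    .join("\n\n");
  return `# ${title}\n\n${body}\n`;
}
```

Because the result is plain markdown, Obsidian's full-text search, linking, and sync all work on it with no extra machinery.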
long-term memory agent with persistent memory note reading and writing
Medium confidence — The Plus-tier long-term memory feature allows agents to read and write persistent memory notes, enabling the agent to learn from previous interactions and maintain state across conversations. The agent can autonomously update memory notes with new information, insights, or summaries, and retrieve memory notes as context for future decisions. This is implemented via tool-calling, where the agent can invoke 'read memory' and 'write memory' tools. Memory notes are stored in the vault and can be manually edited by users.
Enables agents to autonomously read and write persistent memory notes stored in the vault, allowing agents to learn from previous interactions and maintain state across conversations. Memory notes are plain markdown, making them human-editable and version-controllable.
More transparent than black-box agent memory (LangChain memory modules) because memory is stored as editable markdown; more persistent than in-memory state because memory survives agent restarts; more flexible than fixed-schema memory because markdown allows arbitrary structure.
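The read/write memory tools can be sketched over an in-memory store standing in for vault files. The tool names and the `Map`-backed store are illustrative; in the plugin the backing store would be markdown files in the vault.

```typescript
// Hypothetical memory tools exposed to the agent's tool-calling loop.
// A Map stands in for vault files here; in practice writeMemory would
// persist a markdown note the user can open and edit by hand.
const memoryStore = new Map<string, string>();

const memoryTools = {
  readMemory(note: string): string {
    return memoryStore.get(note) ?? ""; // empty if the note doesn't exist yet
  },
  writeMemory(note: string, content: string): void {
    memoryStore.set(note, content);
  },
};
```

Because memory lives as markdown rather than an opaque database, users can audit or correct what the agent has remembered between sessions.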
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Obsidian Copilot, ranked by overlap. Discovered automatically through the match graph.
Doclime
Revolutionize research with AI-driven search and PDF...
anything-llm
The all-in-one AI productivity accelerator. On device and privacy first with no annoying setup or configuration.
llmware
Unified framework for building enterprise RAG pipelines with small, specialized models
WeKnora
LLM-powered framework for deep document understanding, semantic retrieval, and context-aware answers using RAG paradigm.
JeecgBoot
An AI-driven low-code platform offering dual "zero-code" and "code generation" modes: zero-code mode builds a system from a single sentence, while code-generation mode automatically outputs front-end and back-end code plus table-creation SQL that runs as generated. The platform ships with an AI chat assistant, AI large models, a knowledge base, AI workflow orchestration, MCP, and a plugin system; it is compatible with mainstream LLMs and supports one-sentence flowchart generation, form design, and chat-driven business operations, eliminating 80% of the repetitive work in Java projects while remaining efficient and flexible.
Open WebUI
Self-hosted ChatGPT-like UI — supports Ollama/OpenAI, RAG, web search, multi-user, plugins.
Best For
- ✓knowledge workers with large, unstructured note collections (100+ notes)
- ✓researchers building on existing literature notes without manual tagging
- ✓teams migrating from traditional wikis to AI-augmented knowledge bases
- ✓developers building multi-tenant AI applications with provider flexibility
- ✓privacy-conscious teams requiring local-only LLM inference
- ✓cost-optimized deployments needing to switch between budget and premium models
- ✓enterprises with existing LLM provider contracts (Azure, AWS Bedrock)
- ✓researchers importing research papers and books into their knowledge base
Known Limitations
- ⚠BM25+ free tier requires exact keyword overlap; semantic search requires external API key (Orama/Miyo)
- ⚠Indexing latency scales with vault size; no incremental indexing mentioned in architecture
- ⚠Vector search quality depends on embedding model quality; no local embedding option documented
- ⚠Search results limited to markdown text; binary files (PDFs, images) require Plus tier document parsing
- ⚠Provider-specific features (vision, function calling) require conditional logic; no unified abstraction for all capabilities
- ⚠Streaming adds ~50-200ms latency per token compared to batch responses
About
AI agent plugin for Obsidian that provides conversational access to your entire vault, enabling semantic search across notes, question answering from your knowledge base, and AI-assisted writing within Obsidian.