memento-mcp
MCP Server · Free
Memento MCP: A Knowledge Graph Memory System for LLMs
Capabilities (14 decomposed)
entity-centric knowledge graph construction with temporal versioning
Medium confidence
Constructs and maintains a Neo4j-backed knowledge graph where entities (persons, organizations, concepts) serve as primary nodes with complete version history and temporal audit trails. Each entity stores name, type classification, observational statements, and vector embeddings. The system automatically tracks all mutations through Neo4jStorageProvider, enabling point-in-time reconstruction of entity state at any historical timestamp and supporting confidence decay calculations over time.
Implements complete temporal versioning at the entity level with automatic confidence decay calculations, rather than treating the knowledge graph as a static snapshot. Uses Neo4j's native graph structure combined with timestamp-aware queries to enable point-in-time reconstruction without separate time-series databases.
Provides temporal awareness and confidence decay that vector-only memory systems (like simple RAG) lack, while maintaining graph structure advantages over flat document stores for relationship reasoning.
semantic relationship management with strength and confidence scoring
Medium confidence
Manages directed relationships between entities with multi-dimensional scoring: strength (0.0-1.0 importance indicator) and confidence (0.0-1.0 certainty level). Relationships are stored as Neo4j edges with relationType classification, metadata fields, and automatic timestamp tracking. The system supports relationship creation, updates, and queries that filter by strength/confidence thresholds, enabling LLMs to reason about relationship reliability and importance.
Decouples strength (importance) from confidence (certainty) as independent dimensions, allowing LLMs to distinguish between 'this relationship is important but uncertain' vs. 'this relationship is unimportant but certain'. Implements automatic confidence decay over time using configurable half-life parameters.
More sophisticated than simple triple stores that treat all relationships equally; enables probabilistic reasoning about relationship reliability without requiring external Bayesian inference systems.
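The two independent dimensions can be sketched as follows. This is a minimal illustration of the idea, not memento-mcp's actual types: the `Relation` shape and the `filterRelations` helper are assumptions for demonstration.

```typescript
// Illustrative shapes only; memento-mcp's real API may differ.
interface Relation {
  from: string;
  to: string;
  relationType: string;
  strength: number;   // 0.0-1.0 importance
  confidence: number; // 0.0-1.0 certainty
  metadata?: Record<string, unknown>; // arbitrary custom fields
}

// Filter by independent strength/confidence thresholds, so a caller
// can keep "important but uncertain" edges while dropping others.
function filterRelations(
  relations: Relation[],
  minStrength = 0.0,
  minConfidence = 0.0,
): Relation[] {
  return relations.filter(
    (r) => r.strength >= minStrength && r.confidence >= minConfidence,
  );
}
```

Because the two scores are orthogonal, a query can select for either dimension alone: a high `minStrength` with no confidence floor surfaces important-but-tentative edges, and vice versa.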
neo4j storage abstraction with pluggable provider pattern
Medium confidence
Abstracts Neo4j database operations through a Neo4jStorageProvider interface, enabling potential future storage backend swaps without changing business logic. The provider handles all graph mutations, queries, vector indexing, and temporal operations. This layered architecture separates storage concerns from knowledge graph management, improving testability and maintainability. The provider implements connection pooling, transaction management, and error handling for Neo4j operations.
Implements storage abstraction through a provider interface pattern, decoupling business logic from Neo4j-specific implementation details. Enables testability through mock providers and future backend flexibility without rewriting core graph operations.
More maintainable than tightly coupled Neo4j code; enables unit testing of business logic without database dependencies through mock providers.
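The provider pattern and its testing benefit can be sketched as below. The interface here is a hypothetical slice for illustration; the real Neo4jStorageProvider surface is much larger.

```typescript
// Hypothetical slice of a storage-provider interface.
interface Entity {
  name: string;
  entityType: string;
  observations: string[];
}

interface StorageProvider {
  createEntity(entity: Entity): Promise<void>;
  getEntity(name: string): Promise<Entity | undefined>;
}

// A mock provider lets business logic be unit-tested without any
// database: same interface, in-memory Map instead of Neo4j.
class InMemoryStorageProvider implements StorageProvider {
  private entities = new Map<string, Entity>();

  async createEntity(entity: Entity): Promise<void> {
    this.entities.set(entity.name, entity);
  }

  async getEntity(name: string): Promise<Entity | undefined> {
    return this.entities.get(name);
  }
}
```

Code written against `StorageProvider` never imports the Neo4j driver directly, so swapping backends (or injecting the mock in tests) is a one-line change at construction time.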
relationship metadata and custom field storage
Medium confidence
Stores arbitrary metadata as key-value pairs on relationships, enabling custom fields beyond standard properties (strength, confidence, relationType). Metadata is unstructured and flexible, allowing LLMs to attach domain-specific information to relationships without schema changes. Metadata is queryable and included in relationship results, supporting rich relationship semantics.
Treats relationship metadata as first-class queryable properties rather than opaque blobs, enabling flexible relationship semantics without schema changes. Metadata is included in all relationship queries and results.
More flexible than fixed-schema relationship properties; enables domain-specific customization without requiring schema migrations.
cli interface for local knowledge graph management
Medium confidence
Provides a command-line interface for managing knowledge graphs locally without requiring MCP client integration. The CLI enables entity creation, relationship management, search, and temporal queries through terminal commands, supporting scripted workflows and local testing. The CLI uses the same underlying KnowledgeGraphManager as the MCP server, ensuring consistent behavior across interfaces.
Provides CLI interface that shares the same KnowledgeGraphManager implementation as the MCP server, ensuring consistent behavior across local and remote access patterns. Enables scripted workflows and testing without MCP client overhead.
More convenient than direct Neo4j Cypher queries for common operations; enables local development without MCP server setup.
configuration management with environment variables and config files
Medium confidence
Manages system configuration through environment variables and optional config files, enabling deployment flexibility without code changes. Configuration includes Neo4j connection details, OpenAI API keys, embedding batch sizes, decay half-life parameters, and MCP server settings. The system loads configuration at startup with environment variable precedence over file-based config, supporting both development and production deployments.
Implements configuration management with environment variable precedence, enabling secure credential handling and environment-specific tuning without code changes. Supports both file-based and environment variable configuration.
More flexible than hardcoded configuration; enables production deployments with proper credential separation.
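Environment-over-file precedence can be sketched as a small merge function. The variable names (`NEO4J_URI`, `DECAY_HALF_LIFE_DAYS`) are illustrative assumptions, not memento-mcp's documented settings.

```typescript
// Sketch of environment-variable precedence over file-based defaults.
interface Config {
  neo4jUri: string;
  decayHalfLifeDays: number;
}

const fileDefaults: Config = {
  neo4jUri: "bolt://localhost:7687",
  decayHalfLifeDays: 30,
};

// Each field prefers the environment value when present, otherwise
// falls back to the file-based default.
function loadConfig(
  env: Record<string, string | undefined>,
  defaults: Config,
): Config {
  return {
    neo4jUri: env.NEO4J_URI ?? defaults.neo4jUri,
    decayHalfLifeDays: env.DECAY_HALF_LIFE_DAYS
      ? Number(env.DECAY_HALF_LIFE_DAYS)
      : defaults.decayHalfLifeDays,
  };
}
```

Keeping credentials in the environment while shipping non-secret defaults in a file is what allows the same build to run in development and production.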
vector embedding generation and caching with async job management
Medium confidence
Generates and caches vector embeddings for entities using OpenAI's text-embedding-3-small model through an EmbeddingJobManager that batches requests and implements exponential backoff retry logic. Embeddings are cached in Neo4j's vector index to enable semantic similarity search. The system queues embedding jobs asynchronously, allowing entity creation to proceed without blocking on embedding generation, while maintaining eventual consistency through background job processing.
Implements asynchronous embedding generation via EmbeddingJobManager with exponential backoff retry logic and in-database caching, decoupling embedding latency from entity creation. Uses Neo4j's native vector index rather than external vector databases, reducing operational complexity.
Faster than synchronous embedding approaches for bulk entity creation; more cost-efficient than naive per-entity API calls through batching; simpler than external vector DB solutions by leveraging Neo4j's built-in vector capabilities.
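The batching and backoff math can be sketched with two pure helpers. The function names and default values here are assumptions for illustration, not the EmbeddingJobManager's actual internals.

```typescript
// Split pending entities into batches for a single embeddings API call,
// amortizing per-request overhead across many entities.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Exponential backoff schedule: the delay doubles with each retry
// attempt, capped at a maximum to avoid unbounded waits.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```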
hybrid semantic and keyword search with adaptive strategy selection
Medium confidence
Implements hybrid search combining vector similarity (via Neo4j vector index) and keyword matching, with an adaptive strategy selector that automatically chooses the optimal search method based on query characteristics. Semantic search uses entity embeddings to find conceptually similar entities; keyword search uses Neo4j full-text indexes for exact term matching. The system evaluates query properties (length, specificity, entity type) to route to the most effective search path.
Implements adaptive strategy selection that automatically routes queries to semantic or keyword search based on query characteristics, rather than requiring explicit user configuration. Combines Neo4j's vector index and full-text index capabilities in a single unified search interface.
More intelligent than single-strategy search systems; avoids the latency overhead of always running both semantic and keyword searches by adaptively selecting the optimal path.
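A strategy selector of this kind can be sketched as a small heuristic. The actual rules memento-mcp applies are not documented here, so the thresholds below are illustrative only.

```typescript
type SearchStrategy = "semantic" | "keyword";

// Toy adaptive routing heuristic: short or identifier-like queries
// match well against a full-text index, while longer natural-language
// queries benefit from vector similarity.
function chooseStrategy(query: string): SearchStrategy {
  const trimmed = query.trim();
  const terms = trimmed.split(/\s+/);
  const looksLikeIdentifier =
    /^["'].*["']$/.test(trimmed) || /[_\-.:]/.test(trimmed);
  if (terms.length <= 2 || looksLikeIdentifier) return "keyword";
  return "semantic";
}
```

Routing up front means only one index is consulted per query, which is where the latency saving over always-run-both hybrid search comes from.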
point-in-time graph reconstruction with temporal queries
Medium confidence
Enables querying the knowledge graph state at any historical timestamp through getGraphAtTime() and related temporal query methods. The system uses Neo4j's timestamp metadata on all mutations to reconstruct the graph as it existed at a specific moment, filtering out entities and relationships created after the target timestamp and excluding relationships whose confidence has decayed below a threshold. This enables LLMs to reason about what was known at a particular point in time.
Reconstructs complete graph state at historical timestamps by filtering mutations and applying confidence decay calculations, enabling temporal reasoning without requiring separate time-series databases. Integrates temporal queries directly into the MCP tool interface for LLM accessibility.
Provides temporal reasoning capabilities that static knowledge graphs lack; simpler than maintaining separate versioned snapshots by computing historical state on-demand from mutation logs.
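The core of on-demand reconstruction is a timestamp filter over versioned records, sketched below. The field names (`createdAt`, `deletedAt`) are assumptions for illustration.

```typescript
// A record visible at time t was created at or before t and either
// never deleted, or deleted strictly after t.
interface Versioned {
  createdAt: number;  // epoch ms
  deletedAt?: number; // epoch ms, if removed
}

function visibleAt<T extends Versioned>(records: T[], t: number): T[] {
  return records.filter(
    (r) => r.createdAt <= t && (r.deletedAt === undefined || r.deletedAt > t),
  );
}
```

Computing historical state this way needs only the mutation log already stored on each node and edge, which is why no separate snapshot store is required.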
confidence decay with configurable half-life parameters
Medium confidence
Automatically reduces relationship confidence scores over time using an exponential decay model with configurable half-life parameters. The system applies decay calculations at query time based on the relationship's creation timestamp and current time, allowing older relationships to gradually lose confidence. This models the real-world phenomenon that information becomes less certain as time passes without reinforcement, enabling LLMs to deprioritize stale information.
Implements exponential confidence decay as a first-class feature in the knowledge graph, automatically reducing relationship certainty over time without requiring explicit updates. Decay is calculated at query time, enabling dynamic confidence adjustment without storing multiple confidence versions.
More sophisticated than binary fresh/stale flags; enables continuous confidence degradation that better models real-world information uncertainty.
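Half-life decay is the standard exponential-decay formula applied at query time; a minimal sketch (parameter names are illustrative):

```typescript
// Exponential decay: after one half-life the confidence is halved,
// after two it is quartered, and so on. Computed from the
// relationship's age at query time, so no intermediate confidence
// values ever need to be stored.
function decayedConfidence(
  initialConfidence: number,
  ageDays: number,
  halfLifeDays: number,
): number {
  return initialConfidence * Math.pow(0.5, ageDays / halfLifeDays);
}
```

With a 30-day half-life, a relationship created at confidence 1.0 reads as 0.5 after 30 days and 0.25 after 60, unless it is reinforced by an update that resets its timestamp.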
mcp tool exposure with schema-based function calling
Medium confidence
Exposes all knowledge graph operations through the Model Context Protocol as MCP tools with JSON schema definitions. The system implements a tool registry that maps natural language requests to underlying KnowledgeGraphManager methods, with automatic schema generation for input validation and type safety. LLM clients invoke tools via standard MCP function-calling protocol, receiving structured responses with embedded metadata for error handling and result interpretation.
Implements complete MCP tool registry with automatic schema generation from TypeScript interfaces, enabling type-safe tool invocation without manual schema maintenance. Integrates directly with Claude Desktop and Cursor via standard MCP protocol.
More integrated than REST API approaches for LLM clients; provides native tool-calling experience without requiring custom API wrappers.
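An MCP tool definition pairs a name and description with a JSON Schema for its input, in the style shown below. The tool name and exact fields are illustrative; memento-mcp's actual tool surface may differ.

```typescript
// Illustrative MCP tool definition in the JSON-Schema style the
// protocol uses for function calling.
const createEntitiesTool = {
  name: "create_entities",
  description: "Create one or more entities in the knowledge graph",
  inputSchema: {
    type: "object",
    properties: {
      entities: {
        type: "array",
        items: {
          type: "object",
          properties: {
            name: { type: "string" },
            entityType: { type: "string" },
            observations: { type: "array", items: { type: "string" } },
          },
          required: ["name", "entityType"],
        },
      },
    },
    required: ["entities"],
  },
} as const;
```

The schema is what lets an MCP client validate arguments before invocation and lets the LLM see typed parameters instead of free-form text.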
batch entity and relationship operations with transactional consistency
Medium confidence
Supports batch creation and updates of multiple entities and relationships in a single transaction, ensuring all-or-nothing semantics. The system queues batch operations through the KnowledgeGraphManager, which coordinates Neo4j transactions to maintain consistency. Batch operations are more efficient than individual mutations, reducing network round-trips and enabling atomic multi-entity updates (e.g., creating an entity and all its relationships simultaneously).
Implements transactional batch operations at the MCP tool level, enabling LLMs to perform multi-entity updates atomically without requiring manual transaction management. Coordinates Neo4j transactions to ensure consistency across entity and relationship mutations.
More efficient than sequential individual mutations; provides ACID guarantees that simple REST APIs without transaction support cannot offer.
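All-or-nothing semantics can be sketched without a database: apply the whole batch to a staged copy of the state and only commit the copy if every operation succeeds. The shapes below are illustrative; in memento-mcp this role is played by a real Neo4j transaction.

```typescript
// Minimal graph state: entity name -> entity data.
type Graph = Map<string, { entityType: string }>;

interface CreateOp {
  name: string;
  entityType: string;
}

function applyBatch(graph: Graph, ops: CreateOp[]): Graph {
  const staged = new Map(graph); // work on a copy, like a transaction
  for (const op of ops) {
    if (staged.has(op.name)) {
      // Any failure aborts the whole batch; the original graph is untouched.
      throw new Error(`duplicate entity: ${op.name}`);
    }
    staged.set(op.name, { entityType: op.entityType });
  }
  return staged; // commit: return the new state atomically
}
```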
entity observation management with incremental updates
Medium confidence
Manages entity observations (descriptive statements about entities) with support for adding, updating, and removing individual observations without replacing the entire observation set. Each observation is tracked independently with timestamps, enabling LLMs to add new facts about entities incrementally. The system maintains observation history and supports querying observations by creation date or content, allowing temporal reasoning about when facts were learned.
Treats observations as first-class mutable elements with individual timestamps and history, rather than immutable entity properties. Enables incremental fact accumulation without requiring full entity replacement, supporting natural conversation flows where new information is learned gradually.
More flexible than immutable entity snapshots; enables natural incremental learning patterns that match how LLMs discover information across conversations.
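Incremental, timestamped observation management can be sketched as below. The field and function names are illustrative assumptions.

```typescript
// Each observation carries its own creation timestamp, so facts can
// be added, removed, and queried by when they were learned.
interface Observation {
  text: string;
  createdAt: number; // epoch ms
}

interface ObservedEntity {
  name: string;
  observations: Observation[];
}

function addObservation(entity: ObservedEntity, text: string, now: number): void {
  entity.observations.push({ text, createdAt: now });
}

function removeObservation(entity: ObservedEntity, text: string): void {
  entity.observations = entity.observations.filter((o) => o.text !== text);
}

// Temporal query: everything learned at or after time t.
function observationsSince(entity: ObservedEntity, t: number): Observation[] {
  return entity.observations.filter((o) => o.createdAt >= t);
}
```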
entity type classification and filtering
Medium confidence
Classifies entities into predefined types (person, organization, concept, event, location) and enables filtering queries by entity type. The system uses entityType as a first-class property on entities, allowing queries to constrain results to specific categories. Type classification enables domain-specific reasoning (e.g., 'find all people related to this organization') and improves search precision by filtering irrelevant entity types.
Implements entity type as a first-class property with built-in filtering support, enabling type-aware queries without requiring external classification systems. Predefined types (person, organization, concept, event, location) cover common knowledge graph use cases.
Simpler than external entity linking systems; provides immediate type-based filtering without requiring NER or entity disambiguation.
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with memento-mcp, ranked by overlap. Discovered automatically through the match graph.
cognee
Knowledge Engine for AI Agent Memory in 6 lines of code
mempalace
The best-benchmarked open-source AI memory system. And it's free.
Memory
Knowledge graph-based persistent memory system
mcp-neo4j
Neo4j Labs Model Context Protocol servers
mcp-memory-service
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
Mem0
Persistent memory layer for AI agents.
Best For
- ✓AI agents requiring long-term memory across multi-turn conversations
- ✓Teams building knowledge management systems for LLM applications
- ✓Developers implementing temporal reasoning in AI systems
- ✓Knowledge graph applications requiring nuanced relationship semantics
- ✓Systems where relationship certainty varies (e.g., inferred vs. directly stated facts)
- ✓Multi-agent systems needing to communicate relationship reliability
- ✓Teams planning long-term storage backend flexibility
- ✓Projects requiring high test coverage of business logic
Known Limitations
- ⚠Version history storage grows linearly with entity mutations — no automatic pruning or archival
- ⚠Temporal queries require full graph traversal; no built-in time-series optimization
- ⚠Entity embedding updates are asynchronous via EmbeddingJobManager — stale embeddings possible during high-volume updates
- ⚠Confidence decay is applied uniformly across all relationship types — no per-type decay curves
- ⚠Strength and confidence are independent dimensions; no built-in correlation analysis
- ⚠Relationship metadata is unstructured key-value pairs — no schema validation
Repository Details
Last commit: Oct 27, 2025