AI memory with biological decay
Most RAG setups fail because they treat memory like a static filing cabinet. When every transient bug fix or abandoned rule is stored forever, the context window eventually chokes on noise, spiking token costs and degrading the agent's reasoning. This implementation experiments with a biologically inspired decay model instead: memories fade over time unless reinforced by use.
Capabilities (7 decomposed)
biological decay-based memory forgetting
Medium confidence

Implements spaced repetition and memory decay using biological forgetting curves (Ebbinghaus-inspired) rather than simple TTL or LRU eviction. Memories degrade probabilistically over time based on access frequency and recency, with recall probability decreasing according to a decay function. The system tracks memory age, access count, and last-accessed timestamp to compute dynamic decay rates, enabling memories to fade naturally while high-value memories remain retrievable longer.
Uses biological forgetting curves (Ebbinghaus decay model) to probabilistically fade memories over time based on recency and frequency, rather than fixed TTL or LRU eviction. Decay is parameterized and continuous, not discrete, allowing smooth degradation of memory confidence.
More cognitively plausible than simple vector DB retrieval + fixed context windows; enables natural forgetting without explicit memory management, but trades determinism and recall accuracy (52%) for more human-like behavior.
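The decay model described above can be sketched as a minimal Ebbinghaus-style forgetting curve. This is an illustrative assumption, not the project's actual API: the function name, the strength formula, and the one-day base strength are all hypothetical.

```python
import math
import time
from typing import Optional

def recall_probability(created_at: float, access_count: int,
                       base_strength: float = 86400.0,  # assumed: 1 day, in seconds
                       now: Optional[float] = None) -> float:
    """Ebbinghaus-style curve: R = exp(-t / S), where t is elapsed time
    and S is memory 'strength'. Each access multiplies strength, so
    frequently used memories decay more slowly."""
    now = time.time() if now is None else now
    age = max(now - created_at, 0.0)
    strength = base_strength * (1 + access_count)
    return math.exp(-age / strength)

# A fresh memory is near 1.0; a week-old, never-accessed one has faded.
fresh = recall_probability(created_at=time.time(), access_count=0)
stale = recall_probability(created_at=time.time() - 7 * 86400, access_count=0)
```

The continuous exponential is what makes degradation smooth rather than a hard TTL cliff: confidence falls off gradually, and any reinforcement shifts the whole curve.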
time-aware memory indexing and retrieval
Medium confidence

Maintains a time-indexed memory store where each memory record includes creation timestamp, last-access timestamp, and access frequency counters. Retrieval queries compute decay scores on-the-fly by evaluating the memory's age against a decay function, then filter/rank results by decay probability. The system supports both semantic similarity search (via embeddings) and temporal filtering, allowing queries like 'retrieve memories from the last week' or 'find facts I've accessed frequently'.
Combines semantic embedding-based retrieval with temporal decay scoring, computing memory confidence dynamically based on age and access patterns. Decay is applied at query time rather than pre-computed, enabling adaptive confidence thresholds.
More sophisticated than simple vector DB retrieval (which ignores time) and simpler than full knowledge graph systems; enables temporal reasoning without requiring explicit memory consolidation or summarization logic.
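A time-indexed record and the two temporal queries mentioned above might look like the sketch below. The record fields match what the description says is tracked (creation time, last access, access count); the class and method names are assumptions for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    text: str
    created_at: float = field(default_factory=time.time)
    last_accessed: float = field(default_factory=time.time)
    access_count: int = 0

class MemoryStore:
    def __init__(self) -> None:
        self.records: list[MemoryRecord] = []

    def add(self, text: str) -> None:
        self.records.append(MemoryRecord(text))

    def recent(self, within_seconds: float) -> list[MemoryRecord]:
        """Temporal filter, e.g. 'memories from the last week'."""
        cutoff = time.time() - within_seconds
        return [r for r in self.records if r.created_at >= cutoff]

    def frequent(self, min_accesses: int) -> list[MemoryRecord]:
        """'Facts I've accessed frequently.'"""
        return [r for r in self.records if r.access_count >= min_accesses]
```

Because decay is evaluated at query time from these timestamps, nothing needs to be rewritten in storage as memories age.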
probabilistic memory filtering by decay threshold
Medium confidence

Implements a confidence-based filtering mechanism where memories are included in the agent's context window only if their decay probability exceeds a configurable threshold. The system computes decay probability as a function of memory age, access frequency, and a parameterized decay curve (e.g., exponential, power-law). Memories below the threshold are excluded from LLM prompts, effectively implementing 'soft forgetting' where low-confidence memories don't influence reasoning but remain in storage for potential recovery.
Uses probabilistic decay scores as a filtering mechanism rather than hard deletion, allowing memories to fade gracefully from context while remaining recoverable. Threshold-based filtering decouples memory storage from context injection.
More nuanced than fixed-size context windows (which discard memories arbitrarily) and simpler than learned importance weighting; enables confidence-aware context selection without training.
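Threshold-based soft forgetting can be sketched as a filter that drops low-confidence memories from the prompt without deleting them. The half-life decay formula and the 0.3 threshold are illustrative assumptions.

```python
import time
from typing import Optional

def filter_for_context(memories, threshold: float = 0.3,
                       half_life: float = 86400.0,
                       now: Optional[float] = None):
    """memories: iterable of (text, created_at, access_count) tuples.
    Returns (text, confidence) pairs whose decay probability clears the
    threshold; everything else stays in storage, just out of the prompt."""
    now = time.time() if now is None else now
    kept = []
    for text, created_at, access_count in memories:
        # Access count stretches the half-life, slowing decay for used memories.
        effective_half_life = half_life * (1 + access_count)
        confidence = 0.5 ** ((now - created_at) / effective_half_life)
        if confidence >= threshold:
            kept.append((text, confidence))
    return kept
```

Because filtering happens at injection time, lowering the threshold later can "recover" memories that had faded below it, which a hard delete cannot.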
access frequency tracking for memory reinforcement
Medium confidence

Tracks how many times each memory has been retrieved or referenced by the agent, using access count as a signal of memory importance. Frequently accessed memories decay more slowly (higher half-life) than rarely accessed ones, implementing a reinforcement mechanism where 'using' a memory strengthens it. The system updates access counts on every retrieval and incorporates them into the decay function, so memories that are repeatedly useful resist forgetting longer.
Uses access frequency as an implicit importance signal, slowing decay for frequently-retrieved memories without requiring explicit user annotation. Access count is incorporated directly into the decay function rather than as a separate ranking signal.
Simpler than learned importance models (no training required) but more sophisticated than uniform decay; enables emergent memory hierarchies based on agent behavior.
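The reinforcement loop above reduces to two small operations: bump the counters on retrieval, and let the counter scale the half-life. The dict shape and the linear `1 + access_count` scaling are assumptions; the project may use a different formula.

```python
import time
from typing import Optional

def retrieve(record: dict, now: Optional[float] = None) -> str:
    """Reinforce on use: every retrieval increments the access counter
    and refreshes last_accessed, which slows this memory's future decay."""
    now = time.time() if now is None else now
    record["access_count"] += 1
    record["last_accessed"] = now
    return record["text"]

def half_life(record: dict, base: float = 86400.0) -> float:
    # Assumed linear scaling: each access extends the half-life by one base unit.
    return base * (1 + record["access_count"])
```

With this coupling, memory hierarchies emerge from agent behavior alone: no one annotates importance, yet the memories the agent keeps reaching for persist longest.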
embedding-based semantic memory retrieval
Medium confidence

Converts memory text to dense vector embeddings (via OpenAI, Anthropic, or local embedding model) and stores them in a vector index. Retrieval queries are also embedded and matched against the index using cosine similarity or other distance metrics, enabling semantic search where 'what did we discuss about budgets' retrieves memories about 'financial planning' even without exact keyword match. The system integrates embedding generation with the decay filtering pipeline, so retrieved memories are ranked by both semantic relevance and decay probability.
Integrates semantic embedding-based retrieval with decay probability scoring, ranking memories by both semantic relevance and temporal confidence. Decay filtering is applied post-retrieval, not pre-computed, allowing dynamic threshold adjustment.
More flexible than keyword-based search (handles paraphrasing and semantic drift) but more expensive and slower than simple BM25; enables natural language queries without requiring structured memory schemas.
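Combining the two ranking signals might look like the sketch below: cosine similarity weighted by decay confidence. The toy vectors stand in for real embeddings (a production system would call an embedding model), and multiplying the two scores is one plausible combination, not necessarily the one this project uses.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query_vec, memories, now: float, half_life: float = 86400.0):
    """memories: list of (text, embedding, created_at, access_count).
    Score = semantic similarity x decay confidence, applied post-retrieval
    so the decay threshold can change without re-indexing."""
    scored = []
    for text, vec, created_at, accesses in memories:
        decay = 0.5 ** ((now - created_at) / (half_life * (1 + accesses)))
        scored.append((cosine(query_vec, vec) * decay, text))
    return sorted(scored, reverse=True)
```

Applying decay after vector search keeps the index static: only the final scores are time-dependent.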
configurable decay function parameterization
Medium confidence

Allows users to specify decay function parameters (half-life, shape, minimum confidence floor) that control how quickly memories fade. The system supports multiple decay models (exponential, power-law, or custom functions) and applies them uniformly across all memories. Parameters can be adjusted globally or per-memory-type, enabling domain-specific tuning (e.g., facts decay slower than opinions). The decay function is evaluated at query time using memory age and access frequency to compute current confidence probability.
Exposes decay function parameters as configuration rather than hardcoding them, enabling users to experiment with different decay models and tune memory persistence without code changes. Supports multiple decay function families (exponential, power-law, custom).
More flexible than fixed decay rates (common in simple TTL systems) but requires manual tuning; enables domain-specific memory policies without requiring ML-based importance learning.
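A configuration object covering the described knobs (model family, half-life, shape, floor) might be sketched as follows. Field names and defaults are assumptions; the point is that decay policy lives in config, not code.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DecayConfig:
    model: str = "exponential"      # "exponential" | "power" | "custom"
    half_life: float = 86400.0      # assumed unit: seconds
    exponent: float = 1.0           # power-law shape parameter
    floor: float = 0.05             # minimum confidence; memories never hit zero
    custom: Optional[Callable[[float], float]] = None

    def confidence(self, age: float) -> float:
        if self.model == "exponential":
            p = 0.5 ** (age / self.half_life)
        elif self.model == "power":
            p = (1 + age / self.half_life) ** -self.exponent
        elif self.model == "custom" and self.custom is not None:
            p = self.custom(age)
        else:
            raise ValueError(f"unknown decay model: {self.model}")
        return max(p, self.floor)
```

Per-memory-type tuning then becomes a mapping like `{"fact": DecayConfig(half_life=7 * 86400), "opinion": DecayConfig()}`, so facts outlive opinions without touching the decay code.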
memory consolidation and summarization (inferred capability)
Medium confidence

Based on the 52% recall metric and biological memory inspiration, the system likely implements or supports memory consolidation where related memories are periodically merged or summarized to reduce storage and improve retrieval efficiency. This would involve identifying semantically similar memories, generating summaries, and replacing clusters with consolidated records. The consolidation process would preserve high-level information while discarding redundant details, mimicking biological memory consolidation during sleep.
unknown — insufficient data on consolidation implementation; inferred from biological memory inspiration and 52% recall metric suggesting information loss through consolidation
More sophisticated than simple TTL-based forgetting; enables long-term memory without unbounded storage growth, but requires careful tuning to avoid losing important details.
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with AI memory with biological decay, ranked by overlap. Discovered automatically through the match graph.
mem0ai
Long-term memory for AI Agents
@kuindji/memory-domain
Domain-driven memory engine with graph storage, embeddings, and semantic search
agent-second-brain
Send voice notes to Telegram → get organized knowledge base, tasks in Todoist, and daily reports. Persistent memory with Ebbinghaus decay, vault health scoring, knowledge graph. Runs on Claude Code + OpenClaw. 5/mo.
@membank/core
Core library for membank — handles storage, embeddings, deduplication, and semantic search.
agent-recall-core
Core memory palace engine for AgentRecall
mcp-memory-service
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
Best For
- ✓ AI agent developers building long-running conversational systems
- ✓ Researchers studying memory mechanisms in LLM-based agents
- ✓ Teams building personal assistant agents that need contextual awareness over weeks/months
- ✓ Developers building conversational agents with multi-turn memory
- ✓ Teams implementing memory-augmented LLM systems with temporal awareness
- ✓ Researchers prototyping memory decay models for cognitive science
- ✓ Long-running agents where context pollution from stale facts is a problem
- ✓ Systems requiring tunable memory confidence thresholds per domain
Known Limitations
- ⚠ Decay function parameters (half-life, decay rate) require tuning per use case; no adaptive learning of optimal decay curves
- ⚠ 52% recall rate suggests significant information loss; may not be suitable for fact-critical applications
- ⚠ No mechanism to distinguish between important and unimportant memories during decay — all memories fade uniformly unless explicitly weighted
- ⚠ Probabilistic forgetting introduces non-determinism; the same query may retrieve different context on different invocations
- ⚠ Requires persistent storage with timestamp indexing; no built-in database abstraction (must implement own storage layer)
- ⚠ Decay computation on every retrieval adds latency; no pre-computed decay scores or caching strategy documented
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Show HN: AI memory with biological decay (52% recall)