Meta-agent: self-improving agent harnesses from live traces
We built meta-agent: an open-source library that automatically and continuously improves agent harnesses from production traces. Point it at an existing agent, a stream of unlabeled production traces, and a small labeled holdout set. An LLM judge scores the unlabeled traces as they stream.
Capabilities (9 decomposed)
live execution trace capture and serialization
Medium confidence
Captures real-time execution traces from agent runs by instrumenting function calls, tool invocations, and LLM interactions into a structured trace format. Uses runtime hooking or decorator patterns to intercept agent behavior without modifying core agent logic, serializing traces as JSON or structured logs that preserve call hierarchy, latency, inputs, outputs, and error states for later analysis and optimization.
Focuses specifically on capturing live traces from agent execution rather than post-hoc logging, enabling real-time analysis and immediate feedback loops for self-improvement without requiring agent code changes
Differs from generic observability tools (Datadog, New Relic) by preserving agent-specific semantics (tool calls, reasoning steps, LLM interactions) in a format directly usable for agent optimization rather than just metrics
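The hook-and-serialize pattern described above can be sketched in a few lines. This is illustrative, not meta-agent's actual API: `TraceRecorder` and `search_tool` are hypothetical names, and a real implementation would also capture nested call hierarchy.

```python
import functools
import json
import time

class TraceRecorder:
    """Records one event per instrumented call: name, inputs,
    output, error state, and latency."""
    def __init__(self):
        self.events = []

    def instrument(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            event = {"name": fn.__name__,
                     "inputs": {"args": args, "kwargs": kwargs}}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                event["output"], event["error"] = result, None
                return result
            except Exception as exc:
                event["output"], event["error"] = None, repr(exc)
                raise
            finally:
                # finally runs even on exceptions, so every call is recorded
                event["latency_ms"] = (time.perf_counter() - start) * 1000
                self.events.append(event)
        return wrapper

    def serialize(self):
        # default=str keeps non-JSON-native values loggable
        return json.dumps(self.events, default=str)

recorder = TraceRecorder()

@recorder.instrument
def search_tool(query):  # hypothetical tool standing in for a real one
    return f"results for {query}"

search_tool("agent harness")
trace = json.loads(recorder.serialize())
```

Wrapping at the decorator level is what lets the core agent logic stay untouched; only the tool registration site changes.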
trace-based agent harness generation
Medium confidence
Automatically synthesizes executable agent harnesses (wrapper code, prompt templates, tool bindings) from captured execution traces by analyzing successful execution patterns and extracting the minimal set of instructions, tools, and context needed to reproduce similar behavior. Uses pattern matching or AST analysis on traces to identify which tool calls were critical, which prompts were effective, and which context was necessary, then generates clean, reusable harness code that can be deployed or further refined.
Generates agent harnesses directly from execution traces rather than from manual specifications, using trace analysis to infer effective prompts, tool selections, and control flow automatically
Unlike prompt engineering tools that require manual iteration, this learns from successful execution patterns, reducing the feedback loop from hours of manual testing to minutes of trace analysis
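One way to read the "minimal set of tools" out of a trace is to keep only the tools that appeared in successful events. The sketch below is deliberately simplified and the names (`generate_harness`) are hypothetical; real harness generation would also emit prompt templates and control flow, not just tool bindings.

```python
def generate_harness(trace_events, available_tools):
    """Keep only tools that appeared in successful trace events and
    record the successful call order as a control-flow hint."""
    used = []
    for ev in trace_events:
        if (ev["error"] is None and ev["name"] in available_tools
                and ev["name"] not in used):
            used.append(ev["name"])
    order = [ev["name"] for ev in trace_events if ev["error"] is None]
    return {"tools": used, "call_order_hint": order}

trace = [
    {"name": "search", "error": None},
    {"name": "scrape", "error": "Timeout"},
    {"name": "summarize", "error": None},
]
harness = generate_harness(trace, {"search", "scrape", "summarize", "translate"})
```

Here `scrape` is excluded because it failed in the trace, and `translate` because it was never called — the harness spec binds only what observably worked.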
self-improving agent loop with trace feedback
Medium confidence
Implements a closed-loop system where generated agent harnesses are executed, their traces are captured, analyzed for success/failure patterns, and used to automatically refine prompts, tool selections, and execution strategies. Uses metrics extracted from traces (success rate, latency, tool call efficiency) to drive iterative improvements, potentially using LLM-based analysis to suggest prompt modifications or tool reordering based on observed failure modes.
Creates a closed-loop system where agents improve themselves by analyzing their own execution traces, using trace-derived insights to automatically refine prompts and tool selections without human intervention
Goes beyond static prompt optimization (like DSPy or PromptOpt) by continuously learning from live execution traces, enabling agents to adapt to changing environments and task distributions in real-time
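The loop structure is simple even though the components (agent, judge, refiner) are not. A minimal sketch with toy stand-ins for the LLM judge and the refinement step — all function names here are hypothetical:

```python
def improvement_loop(run_agent, score, refine, prompt, rounds=3):
    """Run -> score the trace -> refine the prompt; return the
    best-scoring prompt seen across all rounds."""
    best_prompt, best_score = prompt, float("-inf")
    for _ in range(rounds):
        trace = run_agent(prompt)
        s = score(trace)
        if s > best_score:
            best_prompt, best_score = prompt, s
        prompt = refine(prompt, trace)
    return best_prompt, best_score

# toy stand-ins: a real system would call the agent and an LLM judge
def run_agent(prompt):
    return {"success": "step by step" in prompt}

def score(trace):
    return 1.0 if trace["success"] else 0.0

def refine(prompt, trace):
    return prompt if trace["success"] else prompt + " Think step by step."

best, best_s = improvement_loop(run_agent, score, refine, "Answer the question.")
```

Keeping the best prompt seen (rather than the last one) is what guards the loop against a refinement step that makes things worse.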
trace-based failure analysis and diagnosis
Medium confidence
Analyzes execution traces to identify failure modes, bottlenecks, and inefficiencies by comparing successful vs. failed traces, extracting common patterns in tool call sequences, prompt effectiveness, and decision points. Uses diff-based analysis or statistical comparison to highlight which steps diverged between successful and failed runs, then generates diagnostic reports or suggestions for remediation (e.g., 'tool X failed 40% of the time when called after tool Y').
Performs comparative analysis across multiple traces to identify systematic failure patterns rather than analyzing single failures in isolation, enabling root cause identification at scale
More targeted than generic log analysis tools because it understands agent-specific semantics (tool calls, reasoning steps) and can correlate failures with specific prompt or tool configuration choices
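Diagnostics of the form "tool X failed N% of the time when called after tool Y" reduce to counting adjacent tool pairs across traces. A minimal sketch, assuming the trace event format is a list of `{"name", "error"}` dicts:

```python
from collections import Counter

def pairwise_failure_rates(traces):
    """Failure rate of each tool conditioned on the tool called
    immediately before it, across many traces."""
    calls, fails = Counter(), Counter()
    for trace in traces:
        for prev, cur in zip(trace, trace[1:]):
            key = (prev["name"], cur["name"])
            calls[key] += 1
            if cur["error"] is not None:
                fails[key] += 1
    return {pair: fails[pair] / calls[pair] for pair in calls}

traces = [
    [{"name": "fetch", "error": None}, {"name": "parse", "error": "ValueError"}],
    [{"name": "fetch", "error": None}, {"name": "parse", "error": None}],
]
rates = pairwise_failure_rates(traces)
```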
multi-run trace aggregation and statistics
Medium confidence
Collects and aggregates execution traces from multiple agent runs into statistical summaries, computing metrics like tool call frequency, success rates per tool, average latencies, and decision distribution across runs. Enables comparative analysis (e.g., 'prompt A succeeded 85% of the time vs. prompt B at 72%') and identifies performance trends or regressions by tracking metrics over time or across agent variants.
Aggregates agent-specific metrics (tool call patterns, reasoning step counts, decision distributions) rather than generic performance metrics, enabling agent-centric performance analysis
Provides agent-aware statistical analysis compared to generic time-series databases, automatically computing relevant metrics like 'tool success rate' and 'decision tree depth' without manual metric definition
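A sketch of the aggregation step, assuming the same hypothetical event format used for capture (`name`, `error`, `latency_ms` per event):

```python
def aggregate(traces):
    """Per-tool call counts, success rate, and mean latency
    aggregated across runs."""
    stats = {}
    for trace in traces:
        for ev in trace:
            s = stats.setdefault(ev["name"],
                                 {"calls": 0, "ok": 0, "latency_ms": 0.0})
            s["calls"] += 1
            s["ok"] += int(ev["error"] is None)
            s["latency_ms"] += ev["latency_ms"]
    for s in stats.values():
        s["success_rate"] = s["ok"] / s["calls"]
        s["mean_latency_ms"] = s["latency_ms"] / s["calls"]
    return stats

traces = [
    [{"name": "search", "error": None, "latency_ms": 120.0},
     {"name": "scrape", "error": "Timeout", "latency_ms": 5000.0}],
    [{"name": "search", "error": None, "latency_ms": 80.0}],
]
stats = aggregate(traces)
```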
trace-to-prompt synthesis
Medium confidence
Extracts effective prompts from execution traces by analyzing which instructions, context, and framing led to successful agent behavior, then synthesizes new prompts that capture the essential elements. Uses LLM-based analysis or pattern extraction to identify key phrases, instruction structures, and context patterns from successful traces, then generates clean, generalizable prompts that can be applied to new tasks or agent variants.
Learns prompts from successful execution traces rather than requiring manual engineering, using trace analysis to identify effective instruction patterns and context automatically
Faster than manual prompt iteration because it extracts patterns from successful runs rather than requiring trial-and-error testing, reducing prompt engineering time from hours to minutes
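Meta-agent likely does this with LLM-based analysis; a purely statistical stand-in still illustrates the idea of keeping instruction patterns that recur across successful runs. `synthesize_prompt` is a hypothetical name and sentence-splitting on periods is a gross simplification:

```python
from collections import Counter

def synthesize_prompt(successful_prompts, min_frac=0.5):
    """Keep instruction sentences that appear in at least min_frac of
    successful runs' prompts, ordered by how often they appear."""
    counts = Counter()
    for prompt in successful_prompts:
        # set() so a sentence repeated within one prompt counts once
        for sentence in {s.strip() for s in prompt.split(".") if s.strip()}:
            counts[sentence] += 1
    n = len(successful_prompts)
    kept = [s for s, c in counts.most_common() if c / n >= min_frac]
    return ". ".join(kept) + "."

prompts = [
    "Use the tools. Cite sources",
    "Use the tools. Be brief",
    "Use the tools. Cite sources",
]
result = synthesize_prompt(prompts)
```

"Be brief" appears in only one of three successful prompts, so it is dropped as incidental rather than essential.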
trace-based tool selection and optimization
Medium confidence
Analyzes execution traces to identify which tools are most effective for specific task types, then automatically optimizes tool selection and ordering based on observed success patterns. Tracks tool call sequences, success rates per tool, and latency impact, then recommends tool reordering, removal of ineffective tools, or addition of missing tools based on trace analysis.
Optimizes tool selection and ordering based on observed success patterns in traces rather than relying on static tool definitions, enabling data-driven tool configuration
More effective than manual tool selection because it analyzes actual agent behavior across multiple runs, identifying tool combinations and orderings that work in practice rather than in theory
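Given per-tool stats like those aggregated above, the pruning-and-reordering step might look like this sketch (`optimize_toolset` and the thresholds are hypothetical; the `min_calls` guard keeps under-sampled tools from being dropped on noisy estimates):

```python
def optimize_toolset(tool_stats, min_success=0.6, min_calls=5):
    """Drop tools whose observed success rate is below min_success,
    but keep tools with too few calls to trust the estimate.
    Returns remaining tool names ordered by success rate."""
    kept = [(name, s) for name, s in tool_stats.items()
            if s["calls"] < min_calls or s["ok"] / s["calls"] >= min_success]
    kept.sort(key=lambda item: item[1]["ok"] / max(item[1]["calls"], 1),
              reverse=True)
    return [name for name, _ in kept]

stats = {
    "search":   {"calls": 10, "ok": 9},  # 90% success: keep
    "scrape":   {"calls": 10, "ok": 3},  # 30% success: drop
    "new_tool": {"calls": 2,  "ok": 1},  # too few calls: keep for now
}
ranked = optimize_toolset(stats)
```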
trace replay and validation
Medium confidence
Replays execution traces to validate that generated harnesses or refined agents reproduce the same behavior as the original traces, ensuring that optimizations don't introduce regressions. Executes agent harnesses with the same inputs as captured traces, compares outputs and tool call sequences, and flags divergences or unexpected behavior changes.
Validates agent behavior by replaying traces rather than relying on unit tests or manual testing, ensuring that generated harnesses preserve the behavior observed in successful runs
More comprehensive than traditional unit tests because it validates entire agent execution flows including tool interactions and LLM behavior, not just individual functions
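Replay validation is essentially a structured diff between a recorded trace and a re-executed one. A minimal sketch with a deterministic toy harness standing in for a generated one (all names hypothetical):

```python
def replay_and_compare(harness, recorded_trace):
    """Re-run a harness on recorded inputs and flag divergences in
    tool sequence, outputs, or trace length."""
    replayed = harness([ev["inputs"] for ev in recorded_trace])
    divergences = []
    for i, (old, new) in enumerate(zip(recorded_trace, replayed)):
        if old["name"] != new["name"]:
            divergences.append((i, "tool", old["name"], new["name"]))
        elif old["output"] != new["output"]:
            divergences.append((i, "output", old["output"], new["output"]))
    if len(replayed) != len(recorded_trace):
        divergences.append(("length", len(recorded_trace), len(replayed)))
    return divergences

def toy_harness(inputs):
    # deterministic stand-in for a generated harness
    return [{"name": "search", "inputs": i, "output": f"results:{i}"}
            for i in inputs]

recorded = [{"name": "search", "inputs": "q1", "output": "results:q1"}]
stale    = [{"name": "search", "inputs": "q1", "output": "cached"}]

clean = replay_and_compare(toy_harness, recorded)
flagged = replay_and_compare(toy_harness, stale)
```

Non-deterministic LLM output is the hard part in practice; a real replayer would need semantic comparison (or output pinning), not strict equality.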
context and memory extraction from traces
Medium confidence
Extracts relevant context, state, and memory requirements from execution traces by analyzing which variables, context windows, and state information were accessed during successful runs. Identifies minimal context needed to reproduce behavior and generates context initialization code or memory setup instructions that can be embedded in generated harnesses.
Automatically extracts context and memory requirements from traces rather than requiring manual specification, enabling generated harnesses to include necessary state setup automatically
More accurate than manual context specification because it analyzes actual agent behavior, identifying only the context that was actually used rather than guessing at requirements
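"Only the context that was actually used" can be discovered by running the agent against an instrumented context object and recording reads. A sketch under that assumption (`ContextProbe` is hypothetical, and only `[]` access is tracked — `.get()` and iteration would need the same treatment in a real implementation):

```python
class ContextProbe(dict):
    """Dict that records which keys were read via [] access."""
    def __init__(self, data):
        super().__init__(data)
        self.accessed = set()

    def __getitem__(self, key):
        self.accessed.add(key)
        return super().__getitem__(key)

def minimal_context(full_context, run):
    """Run an agent step against a probe and keep only the keys
    it actually touched."""
    probe = ContextProbe(full_context)
    run(probe)
    return {k: full_context[k] for k in probe.accessed}

def step(ctx):  # hypothetical agent step
    return f"hello user {ctx['user_id']} ({ctx['locale']})"

full = {"user_id": 7, "history": ["..."], "locale": "en"}
needed = minimal_context(full, step)
```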
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts sharing capabilities
Artifacts that share capabilities with Meta-agent: self-improving agent harnesses from live traces, ranked by overlap. Discovered automatically through the match graph.
Agent framework that generates its own topology and evolves at runtime
Hi HN, I’m Vincent from Aden. We spent 4 years building ERP automation for construction (PO/invoice reconciliation). We had real enterprise customers but hit a technical wall: chatbots aren't for real work. Accountants don't want to chat; they want the ledger reconciled while they sleep.
smolagents
🤗 smolagents: a barebones library for agents. Agents write Python code to call tools or orchestrate other agents.
TaskWeaver
Microsoft's code-first agent for data analytics.
npi
Action library for AI Agent
Portia AI
Open source framework for building agents that pre-express their planned actions, share their progress and can be interrupted by a human. [#opensource](https://github.com/portiaAI/portia-sdk-python)
Multi-agent coding assistant with a sandboxed Rust execution engine
Show HN: Multi-agent coding assistant with a sandboxed Rust execution engine
Best For
- ✓teams building production agents who need observability into agent behavior
- ✓researchers studying agent decision-making patterns
- ✓developers iterating on agent prompts and tool definitions based on real execution data
- ✓teams wanting to operationalize ad-hoc agent experiments into production harnesses
- ✓developers who want to avoid manual prompt engineering by learning from successful traces
- ✓researchers studying what makes agents effective by examining generated harnesses
- ✓teams running agents in production who want continuous performance improvement
- ✓researchers studying agent self-improvement and meta-learning
Known Limitations
- ⚠trace overhead scales with agent depth and tool call frequency — deep reasoning chains may incur 10-50ms per trace event
- ⚠sensitive data in traces (API keys, user PII) requires explicit filtering or redaction logic
- ⚠trace storage grows linearly with execution volume — no built-in compression or sampling strategies
- ⚠generated harnesses may overfit to specific trace patterns — generalization to new inputs requires validation
- ⚠tool dependencies and API signatures must be stable; harness generation cannot infer breaking changes in downstream tools
- ⚠prompt synthesis from traces may produce verbose or redundant instructions that require manual cleanup
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Show HN: Meta-agent: self-improving agent harnesses from live traces
Alternatives to Meta-agent: self-improving agent harnesses from live traces
Search the Supabase docs for up-to-date guidance and troubleshoot errors quickly. Manage organizations, projects, databases, and Edge Functions, including migrations, SQL, logs, advisors, keys, and type generation, in one flow. Create and manage development branches to iterate safely.