learn-claude-code
Bash is all you need - A nano claude code–like 「agent harness」, built from 0 to 1
Capabilities: 13 decomposed
agent loop orchestration with llm perception-action cycles
Medium confidence: Implements a minimal but complete agent loop pattern where an LLM (Claude) perceives environment state, reasons about next actions, and executes tool calls in a synchronous request-response cycle. The harness captures tool outputs as observations, feeds them back into the next loop iteration, and maintains conversation history across cycles. This is the foundational pattern taught in s01 and reused throughout all 12 sessions.
Explicitly separates the agent (the LLM model) from the harness (tools, state, permissions) as a pedagogical principle, making the loop pattern visible and modifiable without conflating model training with environment design. Most frameworks blur this distinction.
Clearer mental model than frameworks like LangChain or AutoGPT because it isolates the loop pattern and teaches harness engineering as a distinct discipline, not just LLM API wrapping.
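The perception-action cycle described above can be sketched in a few lines. Everything here is illustrative (`stub_llm`, `run_tool`, `agent_loop` are hypothetical names, not the repo's actual API); the "LLM" is a scripted stand-in so the loop runs without an API key.

```python
def stub_llm(history):
    """Scripted stand-in for the model: request one tool call, then finish."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "bash", "input": "echo hi"}
    return {"type": "final", "text": "done"}

def run_tool(name, arg):
    """Stand-in for real dispatch to bash/read_file/etc."""
    return f"[{name}] output for {arg!r}"

def agent_loop(llm, goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = llm(history)                             # perceive + reason
        if action["type"] == "final":
            return action["text"], history
        obs = run_tool(action["tool"], action["input"])   # act
        history.append({"role": "tool", "content": obs})  # observe, feed back
    return "max steps reached", history

answer, trace = agent_loop(stub_llm, "say hi")
```

The key separation the curriculum teaches is visible here: the model (`stub_llm`) only reasons; the harness owns the loop, the tools, and the history.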
tool dispatch with schema-based function calling
Medium confidence: Routes LLM-generated tool calls to concrete implementations (bash, read_file, write_file, edit_file, load_skill, task_* operations) via a schema registry that defines input/output contracts. The harness validates tool schemas against LLM requests, executes the tool in an isolated context, captures output, and returns it to the agent. This is taught in s02 and extended throughout the curriculum.
Implements a two-layer tool injection strategy (s05) where tools are defined as both schema (for LLM awareness) and implementation (for execution), allowing the harness to validate and sandbox tool calls before execution. This decoupling is rarely explicit in other frameworks.
More transparent than OpenAI function calling because the schema and implementation are separately visible, making it easier to audit what tools the agent can actually invoke and how they're constrained.
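A minimal sketch of the two-layer pattern described above: each tool is registered with both a schema (what the LLM sees) and an implementation (what the harness runs), and every call is validated before execution. Names and schema shape are assumptions, not the repo's real registry.

```python
TOOLS = {}

def register(name, schema):
    """Register a tool under both its schema and its implementation."""
    def deco(fn):
        TOOLS[name] = {"schema": schema, "impl": fn}
        return fn
    return deco

@register("read_file", {"required": ["path"]})
def read_file(path):
    return f"<contents of {path}>"   # stand-in for real file I/O

def dispatch(call):
    """Validate an LLM tool request against its schema, then execute."""
    entry = TOOLS.get(call["name"])
    if entry is None:
        return {"error": f"unknown tool: {call['name']}"}
    missing = [k for k in entry["schema"]["required"] if k not in call["args"]]
    if missing:
        return {"error": f"missing required args: {missing}"}
    return {"ok": entry["impl"](**call["args"])}

print(dispatch({"name": "read_file", "args": {"path": "a.txt"}}))
print(dispatch({"name": "read_file", "args": {}}))  # rejected before execution
```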
autonomous task claiming and work distribution
Medium confidence: Implements a task claiming mechanism (s11) where agents autonomously claim tasks from a shared task board based on their capabilities and current workload. Agents can evaluate task requirements, decide whether to claim a task, and update task status. This enables self-organizing agent teams without a central scheduler.
Gives agents agency in task selection rather than assigning tasks from above. Agents evaluate task requirements and decide autonomously, making the system more adaptive to agent capabilities and workload.
More flexible than centralized task assignment because agents can adapt to changing conditions and new capabilities. Requires less coordination overhead but may be less optimal in terms of global load balancing.
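The claiming logic above can be sketched as each agent scanning the shared board and claiming the first open task whose required capability it has, subject to its own workload limit. Data shapes here are assumptions about how s11 might represent tasks.

```python
board = [
    {"id": 1, "needs": "python", "status": "open", "owner": None},
    {"id": 2, "needs": "docs",   "status": "open", "owner": None},
]

def try_claim(agent_name, capabilities, board, max_load=1):
    """Claim one matching open task, or decline if already at capacity."""
    owned = sum(1 for t in board if t["owner"] == agent_name)
    if owned >= max_load:            # respect current workload
        return None
    for task in board:
        if task["status"] == "open" and task["needs"] in capabilities:
            task["status"], task["owner"] = "claimed", agent_name
            return task["id"]        # claim succeeds, no central scheduler
    return None

print(try_claim("coder", {"python"}, board))
print(try_claim("writer", {"docs"}, board))
```

Note that the decision lives in the agent (`try_claim` is called by the claimant), which is exactly what distinguishes this from centralized assignment.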
worktree isolation and filesystem sandboxing
Medium confidence: Implements WorktreeManager (s12) that creates isolated filesystem subtrees for each agent or task, preventing cross-contamination and enabling parallel execution. Each worktree is a separate directory with its own file state, and agents can only access files within their worktree. This is the final session and combines all previous concepts into a complete isolated execution environment.
Combines path validation (s01) with filesystem-level isolation, creating a complete sandbox where agents can safely modify files without affecting other agents or the host system. This is the culmination of all previous security and isolation patterns.
More complete than simple path validation because it provides true isolation at the filesystem level. Agents can be run in parallel without coordination, unlike shared-filesystem approaches that require locks or careful ordering.
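A sketch of per-agent worktree isolation under stated assumptions (the class and method names are illustrative, not the repo's actual WorktreeManager API): each agent gets its own subtree, and every path it touches is resolved and checked against that root.

```python
import tempfile
from pathlib import Path

class Worktree:
    def __init__(self, root: Path, agent: str):
        self.root = (root / agent).resolve()
        self.root.mkdir(parents=True, exist_ok=True)

    def resolve(self, rel: str) -> Path:
        """Resolve a path and refuse anything that escapes this worktree."""
        p = (self.root / rel).resolve()
        if not p.is_relative_to(self.root):
            raise PermissionError(f"{rel!r} escapes worktree {self.root}")
        return p

base = Path(tempfile.mkdtemp())
wt_a = Worktree(base, "agent-a")
wt_b = Worktree(base, "agent-b")

wt_a.resolve("notes.txt").write_text("only agent-a sees this")
print(sorted(p.name for p in wt_b.root.iterdir()))  # agent-b's tree is empty
```

Because escape attempts like `../agent-b/x` resolve outside the root and raise, agents can run in parallel against the same base directory without coordination.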
pedagogical progression through 12 learning sessions
Medium confidence: Structures the entire framework as a 12-session curriculum (s01–s12) where each session introduces exactly one harness mechanism without modifying the core agent loop. Sessions build incrementally: s01 teaches the loop, s02 adds tools, s03 adds planning, s04 adds subagents, s05 adds skills, s06 adds compression, s07 adds tasks, s08 adds background execution, s09 adds teams, s10 adds protocols, s11 adds autonomous claiming, s12 adds worktree isolation. This design makes the framework explicitly educational and modular.
Explicitly designs the framework as a teaching tool with a structured progression, rather than a production system. Each session is a minimal, self-contained example that teaches one concept. This is rare — most frameworks prioritize features over pedagogy.
More educational than production frameworks like LangChain because it isolates concepts and builds understanding incrementally. Trades off feature completeness for clarity and learnability.
safe path validation and dangerous command blocking
Medium confidence: Implements a permission layer that validates file paths against a safe_path whitelist before executing read/write/edit operations, and blocks dangerous bash commands (rm -rf, sudo, etc.) via a blocklist. The harness intercepts tool calls at dispatch time, checks paths and commands against rules, and rejects unsafe operations before they reach the OS. This is a core security mechanism taught in the overview and applied throughout.
Combines filesystem-level path whitelisting with command-pattern blacklisting, creating a two-layer defense that is simple to understand and audit. Most frameworks either omit this entirely or use complex capability-based security models.
Simpler and more transparent than capability-based security (like seccomp or AppArmor) because rules are human-readable and can be inspected without kernel knowledge, making it suitable for educational and small-scale deployments.
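The two-layer defense can be sketched in a few lines: a path whitelist for file tools and a substring blocklist for bash. `SAFE_ROOT` and `BLOCKED` are illustrative rules, not the repo's actual configuration.

```python
from pathlib import Path

SAFE_ROOT = Path("/tmp/agent-sandbox").resolve()
BLOCKED = ("rm -rf", "sudo ", "mkfs", "> /dev/")

def path_allowed(user_path: str) -> bool:
    """Reject any path that resolves outside the whitelisted root."""
    return (SAFE_ROOT / user_path).resolve().is_relative_to(SAFE_ROOT)

def command_allowed(cmd: str) -> bool:
    """Reject commands matching any blocklisted pattern."""
    return not any(bad in cmd for bad in BLOCKED)

print(path_allowed("notes/todo.md"))        # inside the root
print(path_allowed("../../etc/passwd"))     # escapes via ..: rejected
print(command_allowed("ls -la"))
print(command_allowed("sudo rm -rf /"))     # matches blocklist: rejected
```

Resolving before comparing is the important detail: a naive string-prefix check would pass `../../etc/passwd` because the raw string starts under the root.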
planning and task decomposition via todomanager
Medium confidence: Provides a persistent task board (TodoManager) where agents can write, read, and update tasks in a structured format. Tasks are stored as markdown with metadata (status, assignee, priority), and the agent can decompose complex goals into subtasks, track progress, and coordinate with other agents. This is introduced in s03 and extended in s07 (TaskManager) and s09 (multi-agent teams).
Uses markdown as the task storage format, making tasks human-readable and editable outside the agent system. This is unusual — most frameworks use databases or JSON. The design choice prioritizes transparency over performance.
More transparent than database-backed task systems because tasks are plain text and can be inspected, edited, or version-controlled directly. Trades off concurrent write safety for simplicity and auditability.
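A sketch of what a markdown-backed board might look like. The exact format (checkbox lines with inline metadata) is an assumption about how TodoManager could store tasks; the point is the round trip between structured data and human-editable text.

```python
import re

def parse_tasks(md: str):
    """Parse '- [ ] title (assignee, priority)' lines into dicts."""
    tasks = []
    for line in md.splitlines():
        m = re.match(r"- \[( |x)\] (.+?) \((\w+), (\w+)\)", line)
        if m:
            tasks.append({
                "done": m.group(1) == "x",
                "title": m.group(2),
                "assignee": m.group(3),
                "priority": m.group(4),
            })
    return tasks

def render_tasks(tasks):
    """Serialize back to the same markdown checklist format."""
    return "\n".join(
        f"- [{'x' if t['done'] else ' '}] {t['title']} "
        f"({t['assignee']}, {t['priority']})"
        for t in tasks
    )

board = "- [ ] write parser (alice, high)\n- [x] set up repo (bob, low)"
tasks = parse_tasks(board)
tasks[0]["done"] = True            # mark progress
print(render_tasks(tasks))         # still plain, diffable markdown
```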
subagent spawning with context isolation
Medium confidence: Allows a parent agent to spawn child agents (subagents) with isolated context, separate tool access, and independent task boards. Each subagent runs its own agent loop with a subset of the parent's tools and knowledge, and communicates back via message passing. This is taught in s04 and forms the foundation for multi-agent teams in s09.
Implements context isolation as a first-class pattern by giving each subagent its own tool registry and knowledge base, rather than sharing the parent's full context. This makes permission boundaries explicit and teachable.
More explicit about isolation than frameworks like LangChain's SubTask agents, which often share parent context by default. This design forces developers to think about what each agent should know and can do.
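The isolation pattern can be sketched as follows (all names hypothetical): the child receives only an explicitly named tool subset and a fresh context, and reports back as a message rather than mutating the parent.

```python
def make_agent(name, tools, context=None):
    return {"name": name, "tools": dict(tools), "context": list(context or [])}

def spawn_subagent(parent, name, allowed_tools, briefing):
    """Child gets only the named tools and starts from an empty history."""
    subset = {t: parent["tools"][t] for t in allowed_tools}
    return make_agent(name, subset,
                      context=[{"role": "user", "content": briefing}])

def run_subagent(child):
    # Stand-in for running a full agent loop inside the child.
    return {"from": child["name"],
            "result": f"finished using {sorted(child['tools'])}"}

parent = make_agent("parent",
                    {"bash": ..., "read_file": ..., "write_file": ...})
child = spawn_subagent(parent, "researcher", ["read_file"], "summarize docs/")
print(run_subagent(child))
```

Constructing the subset by explicit lookup (`parent["tools"][t]`) is what makes the permission boundary visible: forgetting to grant a tool fails loudly instead of silently inheriting everything.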
dynamic skill loading and knowledge injection
Medium confidence: Implements a SkillLoader that reads markdown files from a skills/ directory and injects them into the agent's context as knowledge. Skills are markdown documents describing tools, APIs, or domain knowledge that the agent can reference during reasoning. The two-layer injection strategy (s05) allows skills to be loaded at runtime without modifying the agent loop or tool registry.
Separates skill definition (markdown documentation) from skill implementation (tool code), allowing non-developers to add agent knowledge by writing markdown. The two-layer injection strategy makes this explicit and composable.
More flexible than static tool registries because skills can be added, updated, or removed without code deployment. More transparent than embedding knowledge in system prompts because skills are separately versioned and auditable.
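A runnable sketch of the loading side, under the assumption that skills are plain `*.md` files in a directory (the tagging format is illustrative):

```python
import tempfile
from pathlib import Path

def load_skills(skills_dir: Path):
    """Read every *.md file and return {skill_name: markdown_body}."""
    return {p.stem: p.read_text() for p in sorted(skills_dir.glob("*.md"))}

def inject(context, skills):
    """Layer skill knowledge into context without touching the tool registry."""
    for name, body in skills.items():
        context.append({"role": "system",
                        "content": f"[skill:{name}]\n{body}"})
    return context

# Simulate a skills/ directory with one markdown skill.
skills_dir = Path(tempfile.mkdtemp())
(skills_dir / "git.md").write_text("# git\nUse feature branches.")

context = inject([], load_skills(skills_dir))
print(context[0]["content"].splitlines()[0])
```

Because the skill never touches the tool registry, updating a skill is a file edit, not a code deployment, which is the flexibility claim made above.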
context compression and token optimization
Medium confidence: Implements a context compression pipeline (s06) that summarizes or prunes conversation history, task boards, and skill content to reduce token usage while preserving semantic meaning. The harness can compress context before sending to the LLM, trading off detail for cost and latency. Compression strategies include summarization, chunking, and selective retention of recent/important messages.
Treats context compression as a pluggable pipeline component that can be inserted between the harness and the LLM, allowing different compression strategies to be tested without modifying the agent loop. Most frameworks don't expose compression as a first-class mechanism.
More explicit about compression trade-offs than frameworks that silently truncate context. Allows developers to choose compression strategy based on their cost/quality requirements.
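One of the strategies named above (selective retention) can be sketched as: keep the first message (the goal) and the last N messages, and replace the middle with a one-line summary marker. A real summarizer would call the LLM; this stand-in just records what was dropped.

```python
def compress(history, keep_recent=3):
    """Keep the goal and the last few messages; summarize the middle."""
    if len(history) <= keep_recent + 1:
        return history                      # nothing worth pruning
    head, tail = history[0], history[-keep_recent:]
    dropped = history[1:-keep_recent]
    summary = {"role": "system",
               "content": f"[{len(dropped)} earlier messages summarized]"}
    return [head, summary, *tail]

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
compressed = compress(history)
print(len(history), "->", len(compressed))  # 10 -> 5
```

Because `compress` is a pure function over the message list, it can sit between the harness and the LLM call as a swappable pipeline stage, which is the "pluggable component" point made above.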
background task execution and async job management
Medium confidence: Provides a BackgroundManager (s08) that allows agents to spawn long-running tasks that execute asynchronously while the agent loop continues. Background tasks are tracked in a job queue, can be polled for status, and can emit events when complete. This enables agents to parallelize work without blocking the main loop.
Exposes background task management as a tool the agent can call, rather than hiding it in the harness. This makes async patterns visible to the agent and allows it to reason about job status and dependencies.
More transparent than frameworks that automatically parallelize tool execution, because the agent explicitly decides which tasks to background and can monitor their progress. Trades off automatic optimization for explicit control.
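A sketch of the job-management surface the agent would drive through tool calls: start a job, poll its status, read its result. Threads are used here for brevity; the actual s08 implementation may differ.

```python
import threading
import time

class BackgroundManager:
    """Track async jobs the agent explicitly starts and polls."""

    def __init__(self):
        self.jobs = {}

    def start(self, job_id, fn, *args):
        record = {"status": "running", "result": None}
        def runner():
            record["result"] = fn(*args)
            record["status"] = "done"
        self.jobs[job_id] = record
        threading.Thread(target=runner, daemon=True).start()
        return job_id

    def poll(self, job_id):
        return self.jobs[job_id]["status"]

    def result(self, job_id):
        return self.jobs[job_id]["result"]

bg = BackgroundManager()
bg.start("slow", lambda: (time.sleep(0.1), "finished")[1])
print(bg.poll("slow"))           # usually still "running" right after start
time.sleep(0.3)
print(bg.poll("slow"), bg.result("slow"))
```

Exposing `start`/`poll`/`result` as agent-callable operations is what keeps the async behavior visible to the agent instead of hidden in the harness.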
multi-agent team coordination with messagebus
Medium confidence: Implements a MessageBus (s10) that allows multiple agents to communicate asynchronously via message passing. Agents can publish messages to topics, subscribe to topics, and react to messages from teammates. This enables team protocols (FSMs, workflows) where agents coordinate work without shared mutable state. Teams are taught in s09, with protocols in s10.
Uses message passing as the primary coordination mechanism instead of shared state or RPC, making agent interactions explicit and auditable. Each agent remains independent and can be reasoned about in isolation.
Decouples agents more cleanly than shared-state approaches because agents don't need to know about each other's internal state. Easier to test and debug because message flows are visible.
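The topic-based bus can be sketched as a subscriber registry: agents attach callbacks to topics and publish without knowing who is listening. Delivery here is synchronous for brevity; the s10 version may deliver asynchronously.

```python
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver to every subscriber of the topic, in order."""
        for cb in self.subscribers[topic]:
            cb(message)

bus = MessageBus()
inbox = []
bus.subscribe("reviews", inbox.append)      # reviewer agent listens
bus.publish("reviews", {"from": "coder", "text": "PR ready"})
print(inbox)  # the reviewer got the message without a direct reference
```

Note that the publisher never holds a reference to the subscriber, which is the decoupling claim above: either side can be replaced or tested in isolation.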
team protocols and finite state machine workflows
Medium confidence: Allows teams of agents to follow structured protocols defined as finite state machines (FSMs) or workflow specifications. Protocols define valid state transitions, which agents can trigger via messages, and enforce rules about who can do what in each state. This is taught in s10 and enables complex multi-agent behaviors like approval workflows, handoffs, and consensus patterns.
Formalizes team interactions as FSMs, making protocol rules explicit and verifiable. Most multi-agent frameworks rely on implicit conventions or natural language descriptions.
More rigorous than convention-based coordination because FSM violations are caught at runtime. Enables formal verification of protocol properties (e.g., no deadlocks) that would be difficult with implicit rules.
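The runtime enforcement described above can be sketched as a transition table: valid moves are declared up front, and any message that triggers an undeclared transition is rejected. The approval-workflow states are an illustrative example, not s10's exact specification.

```python
# (state, event) -> next state; anything absent is a protocol violation.
TRANSITIONS = {
    ("draft", "submit"):      "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"):  "draft",
}

class Protocol:
    def __init__(self, start="draft"):
        self.state = start

    def fire(self, event):
        """Apply an event, or raise if it is invalid in the current state."""
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"invalid: {event!r} in state {self.state!r}")
        self.state = nxt
        return nxt

p = Protocol()
print(p.fire("submit"))    # draft -> in_review
print(p.fire("approve"))   # in_review -> approved
```

Because the table is data, properties like reachability or absence of dead states can be checked by walking `TRANSITIONS` offline, which is what makes formal verification feasible compared with convention-based coordination.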
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts sharing capabilities
Artifacts that share capabilities with learn-claude-code, ranked by overlap. Discovered automatically through the match graph.
@tanstack/ai
Core TanStack AI library - Open source AI SDK
Nerve
Nerve is an open source command line tool designed to be a simple yet powerful platform for creating and executing MCP-integrated LLM-based agents.
@observee/agents
Observee SDK - A TypeScript SDK for MCP tool integration with LLM providers
happy-llm
📚 Build a large language model from scratch
llamaindex
LlamaIndex.TS - Data framework for your LLM application.
mcp-client-for-ollama
A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server, model switching, streaming responses, tool management, human-in-the-loop, thinking mode, model params config, MCP prompts, custom system prompt and saved preferences.
Best For
- ✓educators teaching AI agent architecture
- ✓developers learning harness engineering principles
- ✓teams prototyping agent behavior before scaling
- ✓developers building agent toolkits with safety constraints
- ✓teams implementing tool governance and permission models
- ✓educators demonstrating tool abstraction layers
- ✓developers building self-organizing agent teams
- ✓teams implementing decentralized task distribution
Known Limitations
- ⚠No session persistence — agent state is lost between process restarts
- ⚠Single-threaded synchronous loop — cannot handle concurrent tool execution
- ⚠No built-in error recovery — tool failures propagate directly to LLM without retry logic
- ⚠Context window grows unbounded — no automatic conversation pruning or summarization
- ⚠No parallel tool execution — tools run sequentially, blocking the agent loop
- ⚠Schema validation is basic — no complex type constraints or conditional validation
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Apr 14, 2026
About
Bash is all you need - A nano claude code–like 「agent harness」, built from 0 to 1
Categories
Alternatives to learn-claude-code