provider-agnostic model abstraction with unified interface
Implements a dual sync/async base class hierarchy (Model, AsyncModel, KeyModel, AsyncKeyModel) defined in llm/models.py that abstracts away provider-specific details. Any model, whether from OpenAI, Anthropic, a local runtime, or a plugin, inherits from these base classes and implements an execute() method; the base classes supply the shared prompt() entry point, so identical calling code works across every provider without conditional logic or provider detection.
Unique: Uses inheritance-based polymorphism with separate sync and async class hierarchies (Model vs AsyncModel) rather than wrapper patterns, giving native async/await support instead of sync calls wrapped in thread executors. The plugin system auto-discovers and registers models via entry points, eliminating manual provider registration.
vs alternatives: More flexible than LangChain's BaseLLM because sync and async are both first-class rather than wrapped, and simpler than the Anthropic SDK because basic operations need no provider-specific imports.
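A minimal sketch of the unified interface, following the documented llm Python API; the model ID is an example and depends on which plugins and keys are installed:

```python
import asyncio
import llm

# Same calling code regardless of provider; only the model ID changes.
model = llm.get_model("gpt-4o-mini")
print(model.prompt("Say hello in one word").text())

# The async hierarchy mirrors the sync one.
async def main():
    amodel = llm.get_async_model("gpt-4o-mini")
    response = await amodel.prompt("Say hello in one word")
    print(await response.text())

asyncio.run(main())
```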
persistent conversation history with sqlite logging
Automatically logs all model interactions to a local SQLite database (logs.db) with full conversation state, including prompts, responses, model metadata, tokens used, and timestamps. The Conversation class in llm/models.py maintains multi-turn dialogue state and can be serialized to and restored from the database, enabling conversation resumption, audit trails, and historical analysis without external services.
Unique: Uses SQLite as the primary persistence layer rather than in-memory caches or external services, making conversation history available offline and queryable via SQL. Conversation class encapsulates both state and serialization, allowing seamless round-tripping between Python objects and database records.
vs alternatives: Simpler and more portable than LangChain's memory implementations because it doesn't require Redis or external databases, and more transparent than Anthropic's conversation API because you own and can query the raw data.
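A sketch of multi-turn state plus direct inspection of the log. The conversation() API is as documented; the table and column names in the SQL query are assumptions about the logs.db schema (run `llm logs path` to locate the file):

```python
import sqlite3
import llm

model = llm.get_model("gpt-4o-mini")
conversation = model.conversation()
conversation.prompt("Name three ocean animals").text()
print(conversation.prompt("Which of those is largest?").text())  # sees prior turns

# Interactions run through the CLI are logged automatically; inspect them with
# plain SQL. The `responses` table and its columns are schema assumptions.
db = sqlite3.connect("logs.db")
for prompt, response in db.execute(
    "SELECT prompt, response FROM responses ORDER BY id DESC LIMIT 5"
):
    print(prompt, "->", (response or "")[:60])
```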
python api for programmatic llm access
Exposes a Python library interface (llm module) that allows developers to interact with models programmatically without using the CLI. Core entry points like llm.get_model(), model.prompt(), and model.conversation() provide a simple API for single-turn and multi-turn interactions. The API supports both sync and async patterns, enabling integration into web frameworks, scripts, and applications. Responses are returned as Response objects with methods for accessing text, JSON, and usage statistics.
Unique: Provides both sync and async APIs at the same level of abstraction, allowing developers to choose based on their use case without learning two different libraries. Response objects provide multiple accessors (text(), json(), usage()) that abstract away provider-specific response formats.
vs alternatives: Simpler than OpenAI's SDK because it abstracts away provider-specific details, and more flexible than Anthropic's SDK because a single interface covers many providers rather than one vendor.
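A short sketch of the Response accessors; text() and usage() are documented, while json() exposes underlying response data whose exact shape varies by provider:

```python
import llm

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Give me a haiku about databases")

print(response.text())    # the generated text
print(response.usage())   # token usage statistics
print(response.json())    # underlying response data; shape varies by provider
```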
batch embedding and cost estimation
Supports generating embeddings for large batches of text via embed_multi(), which delegates to each provider's embed_batch() implementation on EmbeddingModel and is far more efficient than calling embed() repeatedly. The system tracks token usage and can estimate costs based on model pricing. Batch operations are optimized to minimize API calls and reduce costs, which is particularly useful for processing large document corpora.
Unique: Batch operations are optimized at the EmbeddingModel level, allowing providers to implement efficient batch APIs (e.g., OpenAI's batch endpoint) without changing the caller's code. Cost estimation is built-in, enabling developers to make informed decisions about batch size and model choice.
vs alternatives: More efficient than calling embed() in a loop because it batches API calls, and more transparent than cloud provider dashboards because cost estimates are available programmatically.
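A hedged sketch of batch embedding; the model ID is an example (use `llm embed-models` to list what is installed), and embed_multi() yields one vector per input:

```python
import llm

texts = ["first document", "second document", "third document"]

# Model ID is an example; any installed embedding model works the same way.
embedding_model = llm.get_embedding_model("3-small")

# embed_multi() batches requests rather than issuing one API call per text.
for text, vector in zip(texts, embedding_model.embed_multi(texts)):
    print(text, len(vector))
```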
model capability introspection and feature detection
Provides methods to query model capabilities at runtime, such as whether a model supports function calling, vision, streaming, or structured output. The Model base class exposes properties and methods that describe supported features, enabling applications to adapt behavior based on model capabilities without hardcoding provider-specific logic. This enables graceful degradation when features are unavailable.
Unique: Capability information is exposed as attributes on the Model class itself, allowing runtime feature detection without external configuration or a separate capability registry.
vs alternatives: More flexible than hardcoding capabilities because they can be queried at runtime, and more reliable than trying features and catching exceptions because capabilities are known upfront.
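A sketch of runtime feature detection; attribute names such as supports_tools, supports_schema, attachment_types, and can_stream match recent versions of llm, and reading them via getattr() keeps the check safe if a model class predates them:

```python
import llm

model = llm.get_model("gpt-4o-mini")

# Fall back to False/empty when an older model class lacks the attribute.
if getattr(model, "supports_tools", False):
    print("function calling available")
if getattr(model, "supports_schema", False):
    print("structured output available")
if getattr(model, "attachment_types", set()):
    print("accepts attachments such as images")
if getattr(model, "can_stream", False):
    print("supports streaming responses")
```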
plugin-based model and tool discovery with entry points
Implements a plugin system built on pluggy and Python entry points (setuptools) that auto-discovers and registers custom models, tools, and templates at runtime. The plugin manager in llm/plugins.py loads every installed package that declares an entry point in the llm group and invokes its hook implementations (register_models(), register_tools(), register_template_loaders(), and so on), dynamically loading extensions without modifying core code. Plugins extend functionality by subclassing base classes such as Model and registering instances from those hooks.
Unique: Uses Python's standard entry_points mechanism rather than a custom plugin loader, making plugins installable via pip and discoverable by any tool that reads entry points. The hook surface is small and base classes like Model require minimal boilerplate, lowering the barrier to contribution.
vs alternatives: More lightweight than LangChain's integration system because it relies on standard Python packaging rather than custom registries, and more discoverable than Anthropic's approach because plugins are installed as regular packages and visible to the Python environment.
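A minimal plugin sketch following the documented register_models hook; EchoModel is hypothetical and exists only to show the shape of a plugin:

```python
# llm_echo.py -- ship as a package whose pyproject.toml declares:
#   [project.entry-points.llm]
#   echo = "llm_echo"
import llm


class EchoModel(llm.Model):
    model_id = "echo"  # hypothetical model that repeats its prompt

    def execute(self, prompt, stream, response, conversation):
        yield prompt.prompt


@llm.hookimpl
def register_models(register):
    register(EchoModel())
```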
tool execution and function calling with schema validation
Enables models to invoke Python functions passed as tools: a plain function's name, signature, and docstring supply the tool schema, or a Tool object can be constructed explicitly. The Toolbox class groups related tools as methods on a single class so they can be handed to a model together. When a model requests a tool call, llm executes the corresponding Python function and returns the result to the model, enabling multi-step reasoning and external action across providers whose function-calling APIs (OpenAI, Anthropic) differ.
Unique: Decouples tool definition (ordinary Python functions) from tool schema (JSON generated from signatures and docstrings), allowing the same function to be used with multiple model providers without rewriting. Provider plugins translate the generated schemas into their own function-calling formats, abstracting away OpenAI vs Anthropic schema differences.
vs alternatives: Simpler than LangChain's tool system because it doesn't require wrapping functions in Tool objects with separate schema definitions, and more portable than Anthropic's native tool_use because it works across providers via the plugin abstraction.
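A sketch of tool execution assuming the documented tools= parameter and chain() loop; lookup_population is a hypothetical stub:

```python
import llm

def lookup_population(country: str) -> int:
    "Return the population of the given country."
    # Hypothetical stub; a real tool would query a database or API.
    return {"France": 68_000_000}.get(country, 0)

model = llm.get_model("gpt-4o-mini")

# chain() keeps prompting until the model stops requesting tool calls,
# executing each requested function and feeding the result back in.
response = model.chain(
    "What is the population of France?",
    tools=[lookup_population],
)
print(response.text())
```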
structured output generation with json schema enforcement
Supports constrained generation where models must return JSON matching a provided schema. model.prompt() accepts a schema parameter (a JSON schema dict or a Pydantic model class), and the resulting text can be parsed with json.loads(response.text()). Some providers (e.g., OpenAI with JSON mode) enforce the schema at the API level; others validate client-side. This enables reliable extraction of structured data (e.g., entities, classifications) from unstructured model outputs.
Unique: Decouples schema definition from model invocation, allowing the same schema to be reused across different models and providers, with provider plugins mapping it onto their own structured-output or JSON-mode implementations behind a single interface.
vs alternatives: More flexible than Anthropic's native structured output because it works across providers via plugins, and simpler than LangChain's output parsers because it doesn't require custom parser classes for each schema.
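A sketch of schema-constrained output using the documented schema= parameter; the model ID and the schema itself are examples:

```python
import json
import llm

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

model = llm.get_model("gpt-4o-mini")
response = model.prompt("Invent a fictional character", schema=schema)

character = json.loads(response.text())  # output is constrained to the schema
print(character["name"], character["age"])
```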
+5 more capabilities