multi-provider session log discovery and parsing
Automatically locates and parses session logs from Claude Code, Cursor, GitHub Copilot, Codex, and other AI coding tools by scanning platform-specific directories (~/.claude, ~/.config, etc.). Implements a provider plugin system with standardized parsers that convert heterogeneous log formats into a unified ParsedTurn and Session object model, enabling downstream analysis across multiple tools without manual configuration.
Unique: Implements a provider plugin architecture that decouples provider-specific parsing logic from the core analysis engine, allowing new providers to be added via standardized interfaces (discoverAllSessions, parseSessionFile) without modifying core code. Uses LiteLLM's pricing database as the canonical source for model cost data across 100+ models.
vs alternatives: Supports 5+ AI coding tools natively with a pluggable architecture, whereas most token trackers are single-tool specific or require API proxies that add latency and privacy concerns.
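A minimal sketch of what directory-based discovery could look like. The provider directory map, the DiscoveredSession shape, and the file-extension filter are all assumptions for illustration, not CodeBurn's actual search list:

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Hypothetical handle for a discovered session file.
interface DiscoveredSession {
  provider: string;
  filePath: string;
}

// Candidate log directories per provider (illustrative paths only).
const PROVIDER_DIRS: Record<string, string> = {
  "claude-code": path.join(os.homedir(), ".claude", "projects"),
  cursor: path.join(os.homedir(), ".config", "cursor", "logs"),
};

function discoverAllSessions(
  dirs: Record<string, string> = PROVIDER_DIRS,
): DiscoveredSession[] {
  const sessions: DiscoveredSession[] = [];
  for (const [provider, dir] of Object.entries(dirs)) {
    if (!fs.existsSync(dir)) continue; // provider not installed: skip quietly
    for (const entry of fs.readdirSync(dir)) {
      if (entry.endsWith(".jsonl") || entry.endsWith(".json")) {
        sessions.push({ provider, filePath: path.join(dir, entry) });
      }
    }
  }
  return sessions;
}
```

Skipping missing directories (rather than erroring) is what makes discovery work with zero manual configuration: only installed tools contribute sessions.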
turn classification and task categorization engine
Analyzes parsed session turns and classifies them into TaskCategory buckets (coding, testing, terminal usage, debugging, etc.) using heuristic rules based on turn content, tool invocations, and file types. Implements a classifyTurn function that examines API calls, file modifications, and context patterns to assign semantic meaning to raw token consumption, enabling cost breakdown by activity type rather than just by model.
Unique: Uses multi-signal heuristic classification (file types, tool invocations, context patterns) rather than simple keyword matching, enabling semantic understanding of turn purpose. Tracks one-shot success rate per task category to identify which activity types benefit most from AI assistance.
vs alternatives: Provides task-level cost visibility that generic token counters cannot offer, allowing developers to optimize by activity type rather than just by model or project.
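The multi-signal idea can be sketched as a priority-ordered rule chain. The TurnLike shape and every rule below are illustrative guesses at the kind of heuristics classifyTurn might use, not the actual implementation:

```typescript
type TaskCategory = "coding" | "testing" | "terminal" | "debugging" | "other";

// Minimal view of a parsed turn; the real ParsedTurn carries more fields.
interface TurnLike {
  toolsUsed: string[];    // e.g. ["Edit", "Bash"]
  filesTouched: string[];
  text: string;           // turn text, used for keyword signals
}

// Rules fire in priority order: file-type signals first, then text cues,
// then tool invocations, falling back to a generic bucket.
function classifyTurn(turn: TurnLike): TaskCategory {
  if (turn.filesTouched.some((f) => /\.(test|spec)\.\w+$|(^|\/)tests?\//.test(f)))
    return "testing";
  if (/\b(error|exception|stack trace|traceback)\b/i.test(turn.text))
    return "debugging";
  if (turn.toolsUsed.includes("Bash") && turn.filesTouched.length === 0)
    return "terminal";
  if (turn.filesTouched.length > 0) return "coding";
  return "other";
}
```

Combining file extensions, tool names, and text patterns is what distinguishes this from keyword matching: a turn that edits `app.test.ts` is classified as testing even if the prompt never says "test".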
status reporting and session metadata inspection
Provides CLI commands (codeburn status, codeburn report) that generate detailed reports on session discovery status, parsing errors, and data quality metrics. Implements metadata inspection capabilities that allow developers to examine individual session files, view parsing errors, and understand data completeness. Generates status summaries showing how many sessions were discovered, parsed successfully, and skipped due to errors.
Unique: Provides transparent visibility into the data ingestion pipeline, showing exactly which sessions were discovered, parsed, and skipped with detailed error messages. Enables developers to audit data quality before relying on cost calculations.
vs alternatives: Offers detailed status and error reporting that helps developers understand data completeness, whereas black-box tools that silently skip sessions make it difficult to detect data quality issues.
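The discovered/parsed/skipped summary could be a simple fold over per-file parse results. The ParseResult and StatusSummary shapes are assumptions modeled on the description above:

```typescript
// Illustrative outcome of attempting to parse one discovered session file.
interface ParseResult {
  filePath: string;
  ok: boolean;
  error?: string;
}

interface StatusSummary {
  discovered: number;
  parsed: number;
  skipped: number;
  errors: { filePath: string; error: string }[];
}

// Fold per-file results into the kind of summary `codeburn status` prints.
function summarize(results: ParseResult[]): StatusSummary {
  const errors = results
    .filter((r) => !r.ok)
    .map((r) => ({ filePath: r.filePath, error: r.error ?? "unknown error" }));
  return {
    discovered: results.length,
    parsed: results.length - errors.length,
    skipped: errors.length,
    errors,
  };
}
```

Keeping the per-file error list alongside the counts is what makes the pipeline auditable: a skipped session is never silent.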
provider plugin system with extensible architecture
Implements a plugin-based architecture that allows new AI coding providers to be added without modifying core CodeBurn code. Each provider plugin implements standardized interfaces (discoverAllSessions, parseSessionFile) that return normalized ParsedTurn and Session objects. Plugins are loaded dynamically at runtime and can be distributed as npm packages, enabling community contributions and custom provider support.
Unique: Defines a minimal, standardized plugin interface (discoverAllSessions, parseSessionFile) that decouples provider-specific logic from the core analysis engine, enabling community contributions without core code changes. Plugins are loaded dynamically at runtime.
vs alternatives: Enables extensibility without forking or modifying core code, whereas monolithic tools that hardcode provider support require core maintainers to add each new provider.
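One plausible rendering of the plugin contract. Field names on ParsedTurn and Session, and the registry helper, are hypothetical; the source only specifies the two interface methods:

```typescript
// Normalized object model every plugin must emit (fields are assumptions).
interface ParsedTurn {
  timestamp: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
}

interface Session {
  id: string;
  provider: string;
  turns: ParsedTurn[];
}

// The standardized plugin contract: discovery plus per-file parsing.
interface ProviderPlugin {
  name: string;
  discoverAllSessions(): Promise<string[]>;
  parseSessionFile(filePath: string): Promise<Session>;
}

// A name-keyed registry lets plugins be loaded at runtime (e.g. from npm
// packages) without the core ever importing provider-specific code.
const registry = new Map<string, ProviderPlugin>();

function registerProvider(plugin: ProviderPlugin): void {
  registry.set(plugin.name, plugin);
}
```

Because the core only depends on ProviderPlugin, a community package can add support for a new tool by shipping one object that satisfies the interface.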
accurate cost calculation with litellm pricing integration
Calculates USD costs for each turn by multiplying token counts (input + output) by model-specific pricing rates sourced from LiteLLM's pricing database, which covers 100+ models across OpenAI, Anthropic, and other providers. Implements a calculateCost function that handles variable pricing tiers, currency conversion, and subscription plan adjustments (e.g., Claude Pro discounts), ensuring accurate financial visibility without requiring API calls to pricing services.
Unique: Integrates LiteLLM's comprehensive pricing database as a built-in data source rather than requiring external API calls, enabling offline cost calculation and eliminating latency. Handles subscription plan adjustments (Claude Pro discounts) and multi-currency support natively.
vs alternatives: Provides accurate, offline cost calculation across 100+ models without API dependencies, whereas most token trackers either hardcode pricing or require cloud lookups that add latency and privacy exposure.
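A stripped-down sketch of the cost arithmetic. The pricing table entries are placeholders (not current prices), and planMultiplier is a simplification of the subscription-plan adjustments described above:

```typescript
// Per-million-token rates in the style of LiteLLM's pricing table.
interface ModelPricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

// Placeholder rates for illustration only.
const PRICING: Record<string, ModelPricing> = {
  "example-model": { inputPerMTok: 3, outputPerMTok: 15 },
};

// planMultiplier of 1 means pay-as-you-go; a flat-rate plan discount
// would pass a smaller factor.
function calculateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
  planMultiplier = 1,
): number {
  const p = PRICING[model];
  if (!p) throw new Error(`no pricing for model: ${model}`);
  const usd =
    (inputTokens / 1e6) * p.inputPerMTok +
    (outputTokens / 1e6) * p.outputPerMTok;
  return usd * planMultiplier;
}
```

Bundling the pricing table with the tool is what makes this fully offline: the lookup is a local map access, never a network call.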
interactive terminal ui dashboard with real-time metrics
Renders a terminal-based interactive dashboard (TUI) using a framework like Ink or Blessed that displays aggregated token usage, costs, and efficiency metrics across multiple time periods (Today, 7 Days, 30 Days, All Time). Implements keyboard-driven navigation, filtering by project/model/task category, and drill-down capabilities that allow developers to explore cost patterns without leaving the terminal. Updates metrics in real time as new session data is discovered.
Unique: Implements a keyboard-driven TUI dashboard that runs entirely in the terminal without external dependencies, enabling cost monitoring in headless environments and SSH sessions. Provides drill-down navigation from aggregate metrics to individual turns without context switching.
vs alternatives: Offers a native terminal experience for developers who live in the CLI, whereas web-based dashboards require browser context switching and are inaccessible in SSH/headless environments.
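The keyboard-driven period switching reduces to a small state transition that a TUI framework's keypress handler would call. The period labels come from the description above; the cyclic left/right behavior is an assumption:

```typescript
const PERIODS = ["Today", "7 Days", "30 Days", "All Time"] as const;
type Period = (typeof PERIODS)[number];

// Advance the selected time period on an arrow keypress, wrapping at the
// ends so navigation is cyclic.
function nextPeriod(current: Period, key: "left" | "right"): Period {
  const i = PERIODS.indexOf(current);
  const delta = key === "right" ? 1 : -1;
  return PERIODS[(i + delta + PERIODS.length) % PERIODS.length];
}
```

Keeping navigation state as pure functions like this makes the dashboard testable without rendering a terminal at all.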
daily aggregation and time-period bucketing with caching
Aggregates parsed session turns into daily buckets and higher-level time periods (7 Days, 30 Days, All Time) using an aggregateProjectsIntoDays function that groups by date, project, and model. Implements a caching layer that stores aggregated results to avoid recomputing statistics on every dashboard load, with cache invalidation triggered by new session data discovery. Supports efficient querying of cost trends across arbitrary time windows.
Unique: Implements a two-level aggregation strategy (daily buckets + period summaries) with intelligent cache invalidation that rebuilds only affected time periods when new sessions are discovered, avoiding full recomputation. Uses immutable daily aggregates as the foundation for all higher-level queries.
vs alternatives: Provides fast metric queries even with large datasets by pre-aggregating and caching, whereas naive approaches that recalculate from raw turns on every query become slow with 1000+ turns.
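The two-level strategy can be sketched as a grouping pass plus period queries that read only the daily buckets. The bucket key format and the TurnRecord shape are guesses at what aggregateProjectsIntoDays produces:

```typescript
interface TurnRecord {
  date: string;    // "YYYY-MM-DD"
  project: string;
  model: string;
  costUsd: number;
}

// Level 1: immutable daily buckets keyed by date|project|model.
type DailyBuckets = Map<string, number>;

function aggregateProjectsIntoDays(turns: TurnRecord[]): DailyBuckets {
  const buckets: DailyBuckets = new Map();
  for (const t of turns) {
    const key = `${t.date}|${t.project}|${t.model}`;
    buckets.set(key, (buckets.get(key) ?? 0) + t.costUsd);
  }
  return buckets;
}

// Level 2: period summaries scan the daily buckets, never the raw turns,
// so new sessions only invalidate the days they actually touch.
function periodTotal(buckets: DailyBuckets, fromDate: string): number {
  let total = 0;
  for (const [key, cost] of buckets) {
    const date = key.split("|")[0];
    if (date >= fromDate) total += cost; // ISO dates compare lexically
  }
  return total;
}
```

Period queries scale with the number of day/project/model buckets rather than the number of turns, which is why dashboard loads stay fast past 1000+ turns.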
token burn pattern detection and optimization recommendations
Scans session history to identify inefficient token usage patterns such as redundant file reads, bloated context windows, unused MCP tool invocations, and low one-shot success rates. Implements an optimization engine (codeburn optimize) that analyzes turn sequences, detects repeated operations on the same files, and generates actionable recommendations to reduce token waste. Uses heuristic rules and statistical analysis to flag anomalies in token consumption.
Unique: Analyzes turn sequences and file access patterns to detect structural inefficiencies (e.g., reading the same file 5 times in a single session) rather than just flagging high token counts. Tracks one-shot success rate as a proxy for efficiency and correlates it with context size and tool usage.
vs alternatives: Provides actionable optimization recommendations based on actual usage patterns, whereas generic cost-cutting advice (e.g., 'use smaller models') ignores the specific inefficiencies in a developer's workflow.
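One of the structural checks, redundant file reads, can be sketched as a frequency count over a session's read events. The threshold, Finding shape, and suggestion text are illustrative, not codeburn optimize's actual rules:

```typescript
// A single detected inefficiency with a human-readable recommendation.
interface Finding {
  file: string;
  reads: number;
  suggestion: string;
}

// Flag files read repeatedly within one session: each repeat re-sends the
// file's contents as input tokens.
function detectRedundantReads(
  fileReads: string[], // one file path per read event, in turn order
  threshold = 3,
): Finding[] {
  const counts = new Map<string, number>();
  for (const f of fileReads) counts.set(f, (counts.get(f) ?? 0) + 1);
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([file, reads]) => ({
      file,
      reads,
      suggestion: `read ${reads}x; consider keeping ${file} in context once`,
    }));
}
```

Counting repeated operations on the same file is what makes the recommendation structural: it points at a specific file and a specific behavior, not a generic "use fewer tokens".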
+4 more capabilities