codeburn vs IntelliCode
Side-by-side comparison to help you choose.
| Feature | codeburn | IntelliCode |
|---|---|---|
| Type | Repository | Extension |
| UnfragileRank | 49/100 | 40/100 |
| Adoption | 1 | 1 |
| Quality | 1 | 0 |
| Ecosystem | 1 | 0 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 12 decomposed | 6 decomposed |
| Times Matched | 0 | 0 |
Automatically locates and parses session logs from Claude Code, Cursor, GitHub Copilot, Codex, and other AI coding tools by scanning platform-specific directories (~/.claude, ~/.config, etc.). Implements a provider plugin system with standardized parsers that convert heterogeneous log formats into a unified ParsedTurn and Session object model, enabling downstream analysis across multiple tools without manual configuration.
Unique: Implements a provider plugin architecture that decouples provider-specific parsing logic from the core analysis engine, allowing new providers to be added via standardized interfaces (discoverAllSessions, parseSessionFile) without modifying core code. Uses LiteLLM's pricing database as the canonical source for model cost data across 100+ models.
vs alternatives: Supports 5+ AI coding tools natively with a pluggable architecture, whereas most token trackers are single-tool specific or require API proxies that add latency and privacy concerns.
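To make the discovery step concrete, here is a minimal TypeScript sketch (not codeburn's actual code) of scanning a few well-known, platform-specific log directories; the directory mapping and the SessionFile shape are assumptions for illustration.

```typescript
import { promises as fs } from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical shape for a discovered-but-not-yet-parsed session file.
interface SessionFile {
  provider: string; // e.g. "claude-code", "cursor"
  filePath: string;
}

// Assumed provider -> log-directory mapping; real tools may use other paths.
const PROVIDER_DIRS: Record<string, string> = {
  "claude-code": path.join(os.homedir(), ".claude", "projects"),
  cursor: path.join(os.homedir(), ".config", "cursor", "logs"),
};

async function discoverSessionFiles(): Promise<SessionFile[]> {
  const found: SessionFile[] = [];
  for (const [provider, dir] of Object.entries(PROVIDER_DIRS)) {
    let entries: string[];
    try {
      entries = await fs.readdir(dir);
    } catch {
      continue; // directory absent: tool not installed, skip silently
    }
    for (const name of entries) {
      if (name.endsWith(".jsonl") || name.endsWith(".json")) {
        found.push({ provider, filePath: path.join(dir, name) });
      }
    }
  }
  return found;
}
```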
Analyzes parsed session turns and classifies them into TaskCategory buckets (coding, testing, terminal usage, debugging, etc.) using heuristic rules based on turn content, tool invocations, and file types. Implements a classifyTurn function that examines API calls, file modifications, and context patterns to assign semantic meaning to raw token consumption, enabling cost breakdown by activity type rather than just by model.
Unique: Uses multi-signal heuristic classification (file types, tool invocations, context patterns) rather than simple keyword matching, enabling semantic understanding of turn purpose. Tracks one-shot success rate per task category to identify which activity types benefit most from AI assistance.
vs alternatives: Provides task-level cost visibility that generic token counters cannot offer, allowing developers to optimize by activity type rather than just by model or project.
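A simplified sketch of what multi-signal classification can look like; the TaskCategory values, TurnSignals fields, and thresholds below are assumptions loosely based on the description above, not codeburn's actual rules.

```typescript
// Assumed category set; the real TaskCategory enum may differ.
type TaskCategory = "coding" | "testing" | "terminal" | "debugging" | "other";

// Minimal stand-in for a parsed turn; the real ParsedTurn carries more fields.
interface TurnSignals {
  toolsInvoked: string[]; // e.g. ["Edit", "Bash"]
  filesTouched: string[]; // e.g. ["src/app.test.ts"]
  text: string;           // user/assistant text for this turn
}

function classifyTurn(turn: TurnSignals): TaskCategory {
  // Signal 1: file types (test/spec files dominate testing work)
  if (turn.filesTouched.some((f) => /\.(test|spec)\.[jt]sx?$/.test(f))) return "testing";
  // Signal 2: tool invocations (shell-only turns count as terminal usage)
  if (turn.toolsInvoked.length > 0 && turn.toolsInvoked.every((t) => t === "Bash")) {
    return "terminal";
  }
  // Signal 3: context patterns (stack traces or error talk suggest debugging)
  if (/stack trace|traceback|exception|error:/i.test(turn.text)) return "debugging";
  // Default: any file modification counts as coding
  if (turn.filesTouched.length > 0) return "coding";
  return "other";
}
```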
Provides CLI commands (codeburn status, codeburn report) that generate detailed reports on session discovery status, parsing errors, and data quality metrics. Implements metadata inspection capabilities that allow developers to examine individual session files, view parsing errors, and understand data completeness. Generates status summaries showing how many sessions were discovered, parsed successfully, and skipped due to errors.
Unique: Provides transparent visibility into the data ingestion pipeline, showing exactly which sessions were discovered, parsed, and skipped with detailed error messages. Enables developers to audit data quality before relying on cost calculations.
vs alternatives: Offers detailed status and error reporting that helps developers understand data completeness, whereas black-box tools that silently skip sessions make it difficult to detect data quality issues.
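As an illustration of the kind of data such a report surfaces, here is a hypothetical summary shape and renderer; the field names and output format are assumptions, not codeburn's actual status output.

```typescript
// Hypothetical summary shape for a `codeburn status`-style report.
interface DiscoveryStatus {
  provider: string;
  discovered: number;
  parsed: number;
  skipped: { filePath: string; error: string }[];
}

function renderStatus(statuses: DiscoveryStatus[]): string {
  return statuses
    .map((s) => {
      const header = `${s.provider}: ${s.parsed}/${s.discovered} sessions parsed`;
      const errors = s.skipped.map((e) => `  skipped ${e.filePath}: ${e.error}`);
      return [header, ...errors].join("\n");
    })
    .join("\n");
}
```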
Implements a plugin-based architecture that allows new AI coding providers to be added without modifying core CodeBurn code. Each provider plugin implements standardized interfaces (discoverAllSessions, parseSessionFile) that return normalized ParsedTurn and Session objects. Plugins are loaded dynamically at runtime and can be distributed as npm packages, enabling community contributions and custom provider support.
Unique: Defines a minimal, standardized plugin interface (discoverAllSessions, parseSessionFile) that decouples provider-specific logic from the core analysis engine, enabling community contributions without core code changes. Plugins are loaded dynamically at runtime.
vs alternatives: Enables extensibility without forking or modifying core code, whereas monolithic tools that hardcode provider support require core maintainers to add each new provider.
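A minimal sketch of what such a plugin contract could look like in TypeScript. Only the two method names (discoverAllSessions, parseSessionFile) and the ParsedTurn/Session model come from the description above; the signatures and field shapes are assumptions.

```typescript
// Assumed normalized shapes; the real ParsedTurn/Session carry more fields.
interface ParsedTurn {
  timestamp: Date;
  model: string;
  inputTokens: number;
  outputTokens: number;
}

interface Session {
  id: string;
  provider: string;
  project: string;
  turns: ParsedTurn[];
}

// Plugin contract: each provider implements discovery and parsing,
// returning the normalized model the core engine understands.
interface ProviderPlugin {
  name: string;
  discoverAllSessions(): Promise<string[]>;             // paths to raw session files
  parseSessionFile(filePath: string): Promise<Session>;  // normalized session
}

// The core analysis engine only ever sees the normalized Session model.
async function ingest(plugins: ProviderPlugin[]): Promise<Session[]> {
  const sessions: Session[] = [];
  for (const plugin of plugins) {
    for (const file of await plugin.discoverAllSessions()) {
      sessions.push(await plugin.parseSessionFile(file));
    }
  }
  return sessions;
}
```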
Calculates USD costs for each turn by multiplying token counts (input + output) by model-specific pricing rates sourced from LiteLLM's pricing database, which covers 100+ models across OpenAI, Anthropic, and other providers. Implements a calculateCost function that handles variable pricing tiers, currency conversion, and subscription plan adjustments (e.g., Claude Pro discounts), ensuring accurate financial visibility without requiring API calls to pricing services.
Unique: Integrates LiteLLM's comprehensive pricing database as a built-in data source rather than requiring external API calls, enabling offline cost calculation and eliminating latency. Handles subscription plan adjustments (Claude Pro discounts) and multi-currency support natively.
vs alternatives: Provides accurate, offline cost calculation across 100+ models without API dependencies, whereas most token trackers either hardcode pricing or require cloud lookups that add latency and privacy exposure.
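A simplified cost-calculation sketch under the assumption of LiteLLM-style per-token USD rates; the example rates are illustrative only, and tier handling, currency conversion, and plan discounts are omitted.

```typescript
// Simplified pricing record modeled on LiteLLM-style per-token USD rates.
interface ModelPricing {
  inputCostPerToken: number;  // USD per input token
  outputCostPerToken: number; // USD per output token
}

// Example rates for illustration only; real values come from the bundled
// pricing database and change over time.
const PRICING: Record<string, ModelPricing> = {
  "claude-3-5-sonnet": { inputCostPerToken: 3e-6, outputCostPerToken: 15e-6 },
};

function calculateCost(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICING[model];
  if (!p) return 0; // unknown model: report zero rather than guessing
  return inputTokens * p.inputCostPerToken + outputTokens * p.outputCostPerToken;
}

// e.g. calculateCost("claude-3-5-sonnet", 12_000, 800) ≈ $0.048
```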
Renders a terminal-based interactive dashboard (TUI) using a framework like Ink or Blessed that displays aggregated token usage, costs, and efficiency metrics across multiple time periods (Today, 7 Days, 30 Days, All Time). Implements keyboard-driven navigation, filtering by project/model/task category, and drill-down capabilities that allow developers to explore cost patterns without leaving the terminal. Updates metrics in real-time as new session data is discovered.
Unique: Implements a keyboard-driven TUI dashboard that runs entirely in the terminal without external dependencies, enabling cost monitoring in headless environments and SSH sessions. Provides drill-down navigation from aggregate metrics to individual turns without context switching.
vs alternatives: Offers a native terminal experience for developers who live in the CLI, whereas web-based dashboards require browser context switching and are inaccessible in SSH/headless environments.
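For a sense of what a keyboard-driven terminal dashboard involves, here is a minimal Ink sketch (not codeburn's actual dashboard); the period list and cost figures are placeholder assumptions.

```tsx
import React, { useState } from "react";
import { render, Box, Text, useInput } from "ink";

// Hypothetical pre-aggregated figures; a real dashboard would read these
// from the aggregation layer described below.
const PERIODS = ["Today", "7 Days", "30 Days", "All Time"] as const;
const COST_USD: Record<(typeof PERIODS)[number], number> = {
  Today: 1.42, "7 Days": 9.87, "30 Days": 41.05, "All Time": 112.6,
};

const Dashboard = () => {
  const [idx, setIdx] = useState(0);
  // Keyboard-driven navigation: left/right arrows switch time periods.
  useInput((_input, key) => {
    if (key.rightArrow) setIdx((i) => (i + 1) % PERIODS.length);
    if (key.leftArrow) setIdx((i) => (i + PERIODS.length - 1) % PERIODS.length);
  });
  const period = PERIODS[idx];
  return (
    <Box flexDirection="column" borderStyle="round" padding={1}>
      <Text bold>{period}</Text>
      <Text>Cost: ${COST_USD[period].toFixed(2)}</Text>
    </Box>
  );
};

render(<Dashboard />);
```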
Aggregates parsed session turns into daily buckets and higher-level time periods (7 Days, 30 Days, All Time) using an aggregateProjectsIntoDays function that groups by date, project, and model. Implements a caching layer that stores aggregated results to avoid recomputing statistics on every dashboard load, with cache invalidation triggered by new session data discovery. Supports efficient querying of cost trends across arbitrary time windows.
Unique: Implements a two-level aggregation strategy (daily buckets + period summaries) with intelligent cache invalidation that rebuilds only affected time periods when new sessions are discovered, avoiding full recomputation. Uses immutable daily aggregates as the foundation for all higher-level queries.
vs alternatives: Provides fast metric queries even with large datasets by pre-aggregating and caching, whereas naive approaches that recalculate from raw turns on every query become slow with 1000+ turns.
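A sketch of the first aggregation level in the spirit of aggregateProjectsIntoDays; the DailyAggregate shape and key format are assumptions, and the caching layer is only described in the trailing comment.

```typescript
// Assumed daily-bucket shape; the real aggregates likely carry more metrics.
interface DailyAggregate {
  date: string; // YYYY-MM-DD
  project: string;
  model: string;
  costUsd: number;
  turns: number;
}

// Group turns into daily buckets keyed by date|project|model.
function aggregateIntoDays(
  turns: { timestamp: Date; project: string; model: string; costUsd: number }[],
): DailyAggregate[] {
  const buckets = new Map<string, DailyAggregate>();
  for (const t of turns) {
    const date = t.timestamp.toISOString().slice(0, 10);
    const key = `${date}|${t.project}|${t.model}`;
    const bucket =
      buckets.get(key) ??
      { date, project: t.project, model: t.model, costUsd: 0, turns: 0 };
    bucket.costUsd += t.costUsd;
    bucket.turns += 1;
    buckets.set(key, bucket);
  }
  return [...buckets.values()];
}

// Period summaries (7 Days, 30 Days, All Time) then just filter and sum these
// buckets; a cache keyed by date can be invalidated only for affected days.
```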
Scans session history to identify inefficient token usage patterns such as redundant file reads, bloated context windows, unused MCP tool invocations, and low one-shot success rates. Implements an optimization engine (codeburn optimize) that analyzes turn sequences, detects repeated operations on the same files, and generates actionable recommendations to reduce token waste. Uses heuristic rules and statistical analysis to flag anomalies in token consumption.
Unique: Analyzes turn sequences and file access patterns to detect structural inefficiencies (e.g., reading the same file 5 times in a single session) rather than just flagging high token counts. Tracks one-shot success rate as a proxy for efficiency and correlates it with context size and tool usage.
vs alternatives: Provides actionable optimization recommendations based on actual usage patterns, whereas generic cost-cutting advice (e.g., 'use smaller models') ignores the specific inefficiencies in a developer's workflow.
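One of the simpler heuristics, flagging files read repeatedly within a session, might look like the following sketch; the event shape, threshold, and advice text are assumptions for illustration.

```typescript
// Minimal sketch of one heuristic: flag files read repeatedly in a session.
interface FileReadEvent {
  sessionId: string;
  filePath: string;
}

interface Recommendation {
  sessionId: string;
  filePath: string;
  reads: number;
  advice: string;
}

function findRedundantReads(events: FileReadEvent[], threshold = 3): Recommendation[] {
  const counts = new Map<string, number>();
  for (const e of events) {
    const key = `${e.sessionId}|${e.filePath}`;
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  const out: Recommendation[] = [];
  for (const [key, reads] of counts) {
    if (reads >= threshold) {
      const [sessionId, filePath] = key.split("|");
      out.push({
        sessionId,
        filePath,
        reads,
        advice: `Read ${reads} times; consider keeping this file in context once.`,
      });
    }
  }
  return out;
}
```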
+4 more capabilities
Provides AI-ranked code completion suggestions with star ratings based on statistical patterns mined from thousands of open-source repositories. Uses machine learning models trained on public code to predict the most contextually relevant completions and surfaces them first in the IntelliSense dropdown, reducing cognitive load by filtering low-probability suggestions.
Unique: Uses statistical ranking trained on thousands of public repositories to surface the most contextually probable completions first, rather than relying on syntax-only or recency-based ordering. The star-rating visualization explicitly communicates confidence derived from aggregate community usage patterns.
vs alternatives: Ranks completions by real-world usage frequency across open-source projects rather than by a generic language model, so suggestions align more closely with idiomatic patterns.
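To illustrate the general technique (this is not IntelliCode's implementation or data), a frequency-style ranker that also maps its score onto a 1-5 star confidence display could look like this:

```typescript
// Illustrative only: rank candidate completions by an assumed corpus
// frequency score and map the score to a 1-5 star confidence display.
interface RankedCompletion {
  label: string;
  score: number; // hypothetical probability from a ranking model
  stars: number; // 1-5, shown next to the suggestion
}

function rankCompletions(
  candidates: string[],
  score: (s: string) => number,
): RankedCompletion[] {
  return candidates
    .map((label) => {
      const s = score(label);
      return { label, score: s, stars: Math.max(1, Math.min(5, Math.round(s * 5))) };
    })
    .sort((a, b) => b.score - a.score);
}
```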
Extends IntelliSense completion across Python, TypeScript, JavaScript, and Java by analyzing the semantic context of the current file (variable types, function signatures, imported modules) and using language-specific AST parsing to understand scope and type information. Completions are contextualized to the current scope and type constraints, not just string-matching.
Unique: Combines language-specific semantic analysis (via language servers) with ML-based ranking to provide completions that are both type-correct and statistically likely based on open-source patterns. The architecture bridges static type checking with probabilistic ranking.
vs alternatives: More accurate than generic LLM completions for typed languages because it enforces type constraints before ranking, and more discoverable than bare language servers because it surfaces the most idiomatic suggestions first.
codeburn scores higher overall at 49/100 vs IntelliCode at 40/100, leading on quality and ecosystem; the two are even on adoption in the table above.
Trains machine learning models on a curated corpus of thousands of open-source repositories to learn statistical patterns about code structure, naming conventions, and API usage. These patterns are encoded into the ranking model that powers starred recommendations, allowing the system to suggest code that aligns with community best practices without requiring explicit rule definition.
Unique: Leverages a proprietary corpus of thousands of open-source repositories to train ranking models that capture statistical patterns in code structure and API usage. The approach is corpus-driven rather than rule-based, allowing patterns to emerge from data rather than being hand-coded.
vs alternatives: More aligned with real-world usage than rule-based linters or generic language models because it learns from actual open-source code at scale, but less customizable than local pattern definitions.
Executes machine learning model inference on Microsoft's cloud infrastructure to rank completion suggestions in real-time. The architecture sends code context (current file, surrounding lines, cursor position) to a remote inference service, which applies pre-trained ranking models and returns scored suggestions. This cloud-based approach enables complex model computation without requiring local GPU resources.
Unique: Centralizes ML inference on Microsoft's cloud infrastructure rather than running models locally, enabling use of large, complex models without local GPU requirements. The architecture trades latency for model sophistication and automatic updates.
vs alternatives: Enables more sophisticated ranking than local models without requiring developer hardware investment, but introduces network latency and privacy concerns compared to fully local alternatives like Copilot's local fallback.
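Conceptually, the client side of such an architecture resembles the sketch below; the endpoint, request fields, and response shape are entirely hypothetical and do not describe Microsoft's actual service or protocol.

```typescript
// Hypothetical request/response shapes; the real service is not public in this form.
interface RankRequest {
  languageId: string;
  precedingLines: string[];
  candidates: string[];
}

interface RankResponse {
  scores: number[]; // one score per candidate, same order
}

async function rankRemotely(req: RankRequest): Promise<RankResponse> {
  const res = await fetch("https://example.invalid/intellisense/rank", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`ranking service returned ${res.status}`);
  return (await res.json()) as RankResponse;
}
```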
Displays star ratings (1-5 stars) next to each completion suggestion in the IntelliSense dropdown to communicate the confidence level derived from the ML ranking model. Stars are a visual encoding of the statistical likelihood that a suggestion is idiomatic and correct based on open-source patterns, making the ranking decision transparent to the developer.
Unique: Uses a simple, intuitive star-rating visualization to communicate ML confidence levels directly in the editor UI, making the ranking decision visible without requiring developers to understand the underlying model.
vs alternatives: More transparent than hidden ranking (like generic Copilot suggestions) but less informative than detailed explanations of why a suggestion was ranked.
Integrates with VS Code's native IntelliSense API to inject ranked suggestions into the standard completion dropdown. The extension hooks into the completion provider interface, intercepts suggestions from language servers, re-ranks them using the ML model, and returns the sorted list to VS Code's UI. This architecture preserves the native IntelliSense UX while augmenting the ranking logic.
Unique: Integrates as a completion provider in VS Code's IntelliSense pipeline, intercepting and re-ranking suggestions from language servers rather than replacing them entirely. This architecture preserves compatibility with existing language extensions and UX.
vs alternatives: More seamless integration with VS Code than standalone tools, but less powerful than language-server-level modifications because it can only re-rank existing suggestions, not generate new ones.
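A simplified sketch of how an extension plugs into VS Code's completion pipeline and influences ordering via sortText; this is not IntelliCode's code, and the candidate list and rankScore function are hypothetical stand-ins for the ML ranking described above.

```typescript
import * as vscode from "vscode";

// Hypothetical stand-in for the ML ranking model described above.
function rankScore(word: string): number {
  const assumedFrequencies: Record<string, number> = { map: 0.9, filter: 0.7, reduce: 0.4 };
  return assumedFrequencies[word] ?? 0.1;
}

export function activate(context: vscode.ExtensionContext) {
  const provider: vscode.CompletionItemProvider = {
    provideCompletionItems(document, position) {
      const candidates = ["map", "filter", "reduce"]; // placeholder candidates
      return candidates.map((word) => {
        const item = new vscode.CompletionItem(word, vscode.CompletionItemKind.Method);
        const score = rankScore(word);
        // Lower sortText sorts first, so invert the score into a padded key.
        item.sortText = String(1000 - Math.round(score * 1000)).padStart(4, "0");
        item.detail = `★ ${score.toFixed(2)}`;
        return item;
      });
    },
  };
  context.subscriptions.push(
    vscode.languages.registerCompletionItemProvider("typescript", provider, ".")
  );
}
```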