production-context-aware code debugging via MCP
Bridges AI agents (Claude Desktop, Cursor, Windsurf) directly to the Last9 observability platform via the Model Context Protocol, letting LLMs query live production logs, metrics, traces, and alerts without context switching. Implements a dual-transport architecture (HTTP for managed mode, STDIO for local/air-gapped deployments) that translates natural-language intent into structured Last9 API calls, with background attribute caching to optimize LLM token usage and reduce round-trip latency.
Unique: The dual-transport MCP server (HTTP + STDIO) pairs background attribute caching with a chunking strategy tuned for LLM token efficiency, so agents can maintain context across multi-turn debugging sessions without exhausting their context windows. Translates natural language to Last9's JSON-pipeline query syntax automatically.
vs alternatives: Unlike generic observability dashboards or REST API clients, Last9 MCP embeds production context directly into the LLM's reasoning loop with zero IDE context-switching, and optimizes for token efficiency through intelligent result chunking and attribute discovery.
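The intent-to-API translation at the heart of this design can be pictured as a thin routing step between an MCP tool invocation and a structured backend request. A minimal sketch, where the tool names, endpoint paths, and field names are illustrative assumptions rather than Last9's actual routes:

```python
def translate_tool_call(tool_name: str, arguments: dict) -> dict:
    """Map an MCP tool invocation onto a hypothetical Last9 API request.

    The route table below is an assumption for illustration; a real server
    would derive it from its registered tool definitions.
    """
    routes = {
        "get_logs": ("POST", "/v1/logs/query"),          # assumed path
        "get_service_summary": ("GET", "/v1/services"),  # assumed path
    }
    method, path = routes[tool_name]
    # Tool arguments pass through as the structured request body
    return {"method": method, "path": path, "body": arguments}
```

The point of the indirection is that the LLM only ever sees tool names and JSON-schema arguments; the server owns the mapping to concrete HTTP calls, so API changes never leak into prompts.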
RED metrics querying with PromQL execution
Exposes high-level service summaries and RED metrics (Rate, Error, Duration) through structured MCP tools that execute PromQL queries against Last9's metrics backend. Abstracts Prometheus query complexity by providing pre-built metric templates while allowing raw PromQL execution for advanced use cases, with automatic time-range normalization and result formatting for LLM consumption.
Unique: Provides both templated RED metric queries (for simplicity) and raw PromQL execution (for flexibility), with automatic time-range normalization and LLM-optimized result formatting. Maintains an internal attribute cache to enable service/metric discovery without requiring users to know exact label names.
vs alternatives: Simpler than direct Prometheus API access (no PromQL expertise required for common queries) but more flexible than static dashboards, allowing LLMs to dynamically construct queries based on incident context.
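Templated RED queries of the kind described above amount to filling a service selector and time window into canonical PromQL shapes. A sketch, assuming conventional metric names (`http_requests_total`, `http_request_duration_seconds_bucket`); real names depend on your instrumentation:

```python
def red_queries(service: str, window: str = "5m") -> dict:
    """Build Rate/Error/Duration PromQL strings for one service.

    Metric and label names here are assumptions, not Last9's schema.
    """
    sel = f'service="{service}"'
    return {
        # Rate: requests per second over the window
        "rate": f"sum(rate(http_requests_total{{{sel}}}[{window}]))",
        # Error: 5xx share of all requests
        "errors": (
            f'sum(rate(http_requests_total{{{sel},status=~"5.."}}[{window}]))'
            f" / sum(rate(http_requests_total{{{sel}}}[{window}]))"
        ),
        # Duration: p99 latency from a histogram
        "duration_p99": (
            f"histogram_quantile(0.99, "
            f"sum(rate(http_request_duration_seconds_bucket{{{sel}}}[{window}])) by (le))"
        ),
    }
```

The raw-PromQL escape hatch then just skips this template layer and forwards the user's query string unchanged.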
deep link generation to Last9 UI for manual investigation
Generates contextual deep links to the Last9 UI that preserve query parameters (service, time range, filters), letting users transition seamlessly from LLM-assisted analysis to manual investigation. Links arrive with pre-filled filters, time ranges, and service selections, so no context has to be re-entered by hand. Supports links to the logs, metrics, traces, and alerts views.
Unique: Generates context-preserving deep links that encode query parameters (service, time range, filters) into Last9 UI URLs, enabling seamless transition from LLM analysis to manual investigation without re-entering context.
vs alternatives: More useful than generic Last9 links (preserves query context) and more maintainable than hard-coded UI paths (parameterized link generation adapts to UI changes).
authentication and credential management (API token vs. refresh token)
Manages two authentication modes: API Token for HTTP mode (long-lived, suitable for service accounts) and Refresh Token for STDIO mode (short-lived, suitable for user sessions). Implements token validation, expiration handling, and secure credential storage, and abstracts the differences between the two modes so that the same tool implementations work with either credential type.
Unique: Implements dual authentication modes (API Token for HTTP, Refresh Token for STDIO) with automatic token refresh and expiration handling, abstracting auth differences while maintaining security best practices.
vs alternatives: More flexible than single-auth systems (supports both service and user authentication) and more secure than hardcoded credentials (supports environment variables and credential rotation).
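The mode split described above can be sketched as a single header-selection function: HTTP mode uses the long-lived token directly, while STDIO mode re-mints a short-lived access token shortly before expiry. Field names (`api_token`, `refresh_token`, `expires_at`) and the 60-second refresh margin are assumptions for illustration:

```python
import time

def auth_header(mode: str, creds: dict, refresh=None, now=time.time) -> dict:
    """Return the Authorization header for the given transport mode.

    `refresh` is a caller-supplied function exchanging a refresh token for
    (access_token, expires_at); credential field names are assumptions.
    """
    if mode == "http":
        # HTTP mode: long-lived API token (service-account style)
        return {"Authorization": f"Bearer {creds['api_token']}"}
    # STDIO mode: short-lived access token; re-mint ~1 minute before
    # expiry so a request never goes out with a token about to die
    if creds.get("expires_at", 0) - now() < 60:
        creds["access_token"], creds["expires_at"] = refresh(creds["refresh_token"])
    return {"Authorization": f"Bearer {creds['access_token']}"}
```

Because callers only ever see a header dict, the tool implementations stay identical across both credential types, which is the abstraction the section claims.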
advanced log filtering and attribute discovery
Enables LLMs to query logs using Last9's JSON-pipeline filter syntax, with automatic attribute discovery that surfaces available log fields and their cardinality. Implements a chunking strategy to handle large result sets, manages drop-rule configuration for sensitive data filtering, and generates deep links to Last9 UI for manual log exploration. Abstracts complex log query DSL through structured tool parameters while exposing raw query capability for advanced filtering.
Unique: Combines templated log queries (for common patterns) with raw JSON-pipeline DSL support, includes automatic attribute discovery to enable dynamic query construction, and implements chunking strategy optimized for LLM token budgets. Manages drop-rule visibility to help teams understand data filtering policies.
vs alternatives: More powerful than simple keyword search (supports complex multi-field filtering) but more accessible than raw Elasticsearch/Loki queries; attribute discovery enables LLMs to construct valid queries without prior knowledge of log schema.
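The chunking strategy for large log result sets can be sketched as greedy packing under a budget. Here the budget is measured in characters as a rough proxy for LLM tokens; the real budgeting logic is an assumption:

```python
def chunk_logs(lines: list, max_chars: int = 4000) -> list:
    """Greedily pack log lines into chunks under a character budget.

    A character count stands in for token count here; actual token-aware
    budgeting would use the model's tokenizer.
    """
    chunks, cur, size = [], [], 0
    for line in lines:
        # Start a new chunk when adding this line would blow the budget
        if cur and size + len(line) > max_chars:
            chunks.append(cur)
            cur, size = [], 0
        cur.append(line)
        size += len(line)
    if cur:
        chunks.append(cur)
    return chunks
```

The agent can then request chunks one at a time across turns, which is what keeps a multi-turn debugging session from exhausting the context window in a single tool result.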
distributed trace retrieval and exception aggregation
Retrieves distributed traces by trace ID or service name, with automatic exception aggregation across trace spans. Implements span-level filtering, service dependency visualization, and correlation of trace data with deployment events. Generates structured trace summaries optimized for LLM analysis, including root cause indicators and latency attribution across service boundaries.
Unique: Automatically aggregates exceptions across trace spans and correlates with deployment events, providing root-cause indicators without requiring manual trace analysis. Implements span-level filtering and service dependency visualization derived from trace topology.
vs alternatives: More structured than raw trace JSON (includes exception aggregation and latency attribution), and integrates deployment context to enable correlation analysis that standalone tracing tools don't provide.
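Exception aggregation across spans reduces to grouping exception events by type and owning service. A sketch that loosely follows OpenTelemetry's span-event conventions (`events`, `exception.type`); treat the field names as assumptions about the trace payload:

```python
from collections import Counter

def aggregate_exceptions(spans: list) -> list:
    """Count exception events across trace spans, grouped by
    (exception type, service). Field names are assumptions modeled
    on OpenTelemetry conventions."""
    counts = Counter()
    for span in spans:
        for event in span.get("events", []):
            if event.get("name") == "exception":
                key = (event.get("attributes", {}).get("exception.type", "unknown"),
                       span.get("service", "unknown"))
                counts[key] += 1
    # Most frequent exception/service pairs first: the root-cause candidates
    return counts.most_common()
```

Surfacing the top pairs first is what turns a raw span dump into the "root cause indicators" the summary mentions: the hottest (type, service) pair is usually where to look first.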
real-time alert status and change event correlation
Exposes firing alerts and system change events (deployments, configuration changes) through structured MCP tools, enabling LLMs to correlate alert triggers with recent infrastructure changes. Implements event timeline visualization and alert metadata enrichment, allowing agents to construct incident narratives by linking alerts to deployment events and metric anomalies.
Unique: Automatically correlates firing alerts with deployment and configuration change events, enabling LLMs to construct incident narratives without manual timeline assembly. Enriches alert metadata with context about what changed recently, surfacing potential root causes.
vs alternatives: More contextual than alert-only systems (includes change events for correlation) and more actionable than change logs alone (links changes to their observable impact via alerts and metrics).
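Alert-to-change correlation of this kind is, at its core, a timestamp-window join: for each firing alert, collect the change events that landed shortly before it. A sketch with epoch-second timestamps, an assumed 30-minute window, and illustrative field names:

```python
def correlate(alerts: list, changes: list, window_s: int = 1800) -> list:
    """For each alert, list change events (deploys, config edits) that
    occurred within `window_s` seconds before it fired.

    Timestamps are epoch seconds; field names are assumptions.
    """
    out = []
    for alert in alerts:
        nearby = [c for c in changes
                  # change must precede the alert, within the window
                  if 0 <= alert["fired_at"] - c["at"] <= window_s]
        out.append({"alert": alert["name"], "suspect_changes": nearby})
    return out
```

The resulting structure is exactly the raw material for an incident narrative: each alert arrives paired with its plausible triggers, so the LLM never has to assemble the timeline by hand.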
MCP tool registration with dynamic attribute caching
Implements the Model Context Protocol tool registration system with a background attribute cache that discovers and maintains available log fields, metric labels, and service names. Dynamically updates tool schemas based on cached attributes, enabling LLMs to construct valid queries without prior knowledge of data structure. Handles tool lifecycle (registration, discovery, invocation) and maintains an internal state machine for cache synchronization.
Unique: Implements background attribute caching with automatic tool schema updates, enabling MCP clients to discover and invoke tools with current data structure without manual configuration. Maintains internal state machine for cache lifecycle and synchronization.
vs alternatives: More dynamic than static tool definitions (adapts to schema changes automatically) and more efficient than querying attributes on every invocation (background caching reduces latency and API calls).
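The attribute cache described above can be sketched as a TTL cache around a discovery call; for brevity this version refreshes lazily on read rather than in a background thread, and the fetch function and TTL are assumptions, not Last9's internals:

```python
import threading
import time

class AttributeCache:
    """TTL cache for discovered attributes (log fields, metric labels,
    service names). Simplified to refresh-on-read; the real server is
    described as refreshing in the background."""

    def __init__(self, fetch, ttl: float = 300.0):
        self._fetch = fetch          # e.g. a call listing log fields
        self._ttl = ttl
        self._value = None
        self._stamp = None           # None = never fetched
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            stale = (self._stamp is None or
                     time.monotonic() - self._stamp > self._ttl)
            if stale:
                self._value = self._fetch()
                self._stamp = time.monotonic()
            return self._value
```

Wiring tool schemas to `get()` is what makes tool definitions track schema changes automatically, while the TTL keeps discovery calls off the hot path of every invocation.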
+4 more capabilities