natural-language log querying with llm interpretation
Translates natural language questions into Axiom Processing Language (APL) by leveraging an LLM to parse user intent, extract filter conditions, aggregations, and time ranges, then executes the generated query against Axiom's event data backend. Uses the MCP protocol to expose Axiom as a tool-callable service, allowing Claude and other LLM clients to invoke queries without users learning APL syntax.
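To make the shape of this concrete, here is a minimal sketch, assuming the TypeScript MCP SDK (@modelcontextprotocol/sdk) and Axiom's public APL query endpoint (/v1/datasets/_apl). The tool name queryApl and its parameter shape are illustrative, not the actual server's interface; the LLM client generates the APL string and invokes the tool with it.

```typescript
// Minimal sketch, assuming the TypeScript MCP SDK and Axiom's APL query
// endpoint; "queryApl" and its parameters are illustrative names.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const AXIOM_URL = "https://api.axiom.co/v1/datasets/_apl?format=legacy";

// Shared helper: POST an APL query to Axiom and return the parsed response.
async function runApl(apl: string, startTime?: string, endTime?: string) {
  const res = await fetch(AXIOM_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ apl, startTime, endTime }),
  });
  if (!res.ok) throw new Error(`Axiom query failed: ${res.status} ${await res.text()}`);
  return res.json();
}

const server = new McpServer({ name: "axiom", version: "0.1.0" });

// The LLM client (e.g. Claude) translates the user's question into APL
// and calls this tool with the generated query string.
server.tool(
  "queryApl",
  "Run an APL query against Axiom event data",
  { apl: z.string(), startTime: z.string().optional(), endTime: z.string().optional() },
  async ({ apl, startTime, endTime }) => ({
    content: [{ type: "text", text: JSON.stringify(await runApl(apl, startTime, endTime)) }],
  })
);

await server.connect(new StdioServerTransport());
```

Keeping the tool contract as "APL string in, JSON results out" pushes all natural-language interpretation to the LLM client, which is what keeps the server itself thin.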
Unique: Exposes Axiom's event query engine as an MCP tool, allowing LLMs to autonomously translate conversational debugging questions into APL without requiring users to learn query syntax or manually construct filters. Uses MCP's standardized tool-calling interface to bridge natural language intent to structured observability queries.
vs alternatives: More accessible than writing raw APL or SQL for log analysis, and integrates directly into LLM chat workflows (vs. separate dashboard tools), but trades query precision and performance for ease of use, since LLM interpretation adds latency and potential misinterpretation.
multi-dataset event correlation and cross-filtering
Enables querying across Axiom datasets (logs, traces, metrics) in a single natural language request by mapping dataset names and field relationships, then executing coordinated queries that correlate events across sources. The MCP server maintains awareness of available datasets and their schemas, allowing the LLM to construct queries that join or filter across multiple event streams.
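A sketch of the client-side correlation step, reusing the runApl helper from the previous snippet. The ['dataset'] | where ... APL shape is standard; the trace_id join key and the matches/data response fields are assumptions about dataset layout and the legacy result format.

```typescript
// Hypothetical helper: run the same filter against several datasets and
// merge rows that share a correlation key (trace_id here), rather than
// relying on a cross-dataset JOIN in the query language itself.
async function correlateByTraceId(datasets: string[], whereClause: string) {
  const rowsByTrace = new Map<string, Array<Record<string, unknown>>>();
  for (const ds of datasets) {
    // One coordinated query per dataset, e.g. ['http-logs'] | where status >= 500
    const result = await runApl(`['${ds}'] | where ${whereClause}`);
    for (const match of result.matches ?? []) {
      const traceId = match.data?.trace_id;
      if (!traceId) continue;
      const bucket = rowsByTrace.get(traceId) ?? [];
      bucket.push({ _dataset: ds, ...match.data });
      rowsByTrace.set(traceId, bucket);
    }
  }
  // Keep only trace ids that appear in more than one dataset, i.e. the
  // events that actually correlate across sources.
  return [...rowsByTrace.entries()].filter(
    ([, rows]) => new Set(rows.map((r) => r._dataset)).size > 1
  );
}
```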
Unique: Axiom's MCP server maintains schema awareness across multiple datasets and enables the LLM to construct correlated queries by mapping field relationships, rather than requiring manual JOIN syntax or separate sequential queries. This allows conversational queries like 'show me traces with errors' to automatically correlate across logs and traces.
vs alternatives: More powerful than single-dataset log viewers because it correlates across event types in one query, but requires more upfront schema documentation and is slower than pre-built dashboards since correlation happens at query-time via LLM interpretation.
time-range-aware contextual querying with relative time expressions
Parses natural language time expressions ('last hour', 'since 3pm', 'past 7 days') and converts them to absolute Axiom query time ranges, maintaining context across multi-turn conversations so follow-up questions inherit the same time window. The MCP server tracks conversation state to avoid re-specifying time ranges in each query.
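One plausible implementation of the time handling, sketched with a hypothetical parser that covers a few common patterns plus a per-session store for stickiness; Axiom's query API accepts absolute RFC3339 startTime/endTime values, so relative expressions are resolved at query time.

```typescript
// Hypothetical relative-time resolution with per-conversation stickiness.
// Parser coverage is deliberately minimal; a real server would handle
// expressions like "since 3pm" and explicit date ranges as well.
type TimeRange = { startTime: string; endTime: string };

const UNIT_MS: Record<string, number> = {
  minute: 60_000,
  hour: 3_600_000,
  day: 86_400_000,
  week: 604_800_000,
};

function parseRelativeTime(expr: string, now = new Date()): TimeRange | null {
  const m = /^(?:last|past)\s+(\d+)?\s*(minute|hour|day|week)s?$/i.exec(expr.trim());
  if (!m) return null;
  const spanMs = Number(m[1] ?? 1) * UNIT_MS[m[2].toLowerCase()];
  return {
    startTime: new Date(now.getTime() - spanMs).toISOString(),
    endTime: now.toISOString(),
  };
}

// Conversation-scoped memory: a follow-up question with no time expression
// inherits the window established earlier in the same session.
const sessionTimeContext = new Map<string, TimeRange>();

function resolveTimeRange(sessionId: string, expr?: string): TimeRange {
  const range =
    (expr && parseRelativeTime(expr)) ||
    sessionTimeContext.get(sessionId) ||
    parseRelativeTime("last hour")!; // fallback default window
  sessionTimeContext.set(sessionId, range);
  return range;
}
```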
Unique: Maintains conversation-level time context so users don't repeat time specifications across multi-turn debugging sessions. Uses relative time parsing to map natural language expressions to Axiom's absolute timestamp ranges, with state tracking to apply context to follow-up queries.
vs alternatives: More conversational than dashboard UIs that require explicit date-picker selections, and faster than manually calculating and re-entering timestamps, but relies on heuristic parsing that may misinterpret ambiguous expressions like 'last week'.
schema-aware field suggestion and auto-completion
Introspects Axiom dataset schemas to provide the LLM with available fields, data types, and common values, enabling intelligent suggestions when users ask vague questions (e.g., 'show me errors' → suggests filtering by 'level=error' or 'status_code>=400'). The MCP server caches schema metadata and exposes it as context to the LLM for better query generation.
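The description doesn't pin down how schemas are introspected; the sketch below infers fields by sampling recent events (using Axiom's built-in _time timestamp field for recency) and caches the summary with a TTL, again reusing the runApl helper. A fields API, if the deployment has one, could replace the sampling step.

```typescript
// Hypothetical schema broker: sample recent events, fold their keys into a
// field summary, cache it, and hand it to the LLM as query-building context.
type FieldInfo = { name: string; type: string; examples: unknown[] };

const SCHEMA_TTL_MS = 10 * 60 * 1000;
const schemaCache = new Map<string, { fields: FieldInfo[]; fetchedAt: number }>();

async function getSchema(dataset: string): Promise<FieldInfo[]> {
  const cached = schemaCache.get(dataset);
  if (cached && Date.now() - cached.fetchedAt < SCHEMA_TTL_MS) return cached.fields;

  // _time is Axiom's built-in timestamp field; sample the newest 100 events.
  const result = await runApl(`['${dataset}'] | sort by _time desc | limit 100`);
  const byName = new Map<string, FieldInfo>();
  for (const match of result.matches ?? []) {
    for (const [name, value] of Object.entries(match.data ?? {})) {
      const info = byName.get(name) ?? { name, type: typeof value, examples: [] };
      if (info.examples.length < 3 && !info.examples.includes(value)) {
        info.examples.push(value); // a few sample values help the LLM pick filters
      }
      byName.set(name, info);
    }
  }
  const fields = [...byName.values()];
  schemaCache.set(dataset, { fields, fetchedAt: Date.now() });
  return fields;
}
```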
Unique: Caches and exposes Axiom dataset schemas to the LLM as context, enabling intelligent field suggestions and auto-completion without requiring users to manually browse schema documentation. The MCP server acts as a schema broker, translating vague user intent into concrete field filters.
vs alternatives: More discoverable than requiring users to memorize field names or consult documentation, and faster than trial-and-error query construction, but adds latency for schema introspection and may suggest incorrect fields if domain semantics are not captured in field names.
trace-aware debugging with span-level filtering and aggregation
Exposes Axiom's trace data (spans, parent-child relationships, duration metrics) to the LLM for querying and analyzing distributed traces. Enables filtering by span attributes, duration thresholds, and error status, then aggregating results to identify slow or failing spans across traces. The MCP server understands trace structure (trace_id, span_id, parent_span_id) and can correlate spans with logs.
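As an illustration, a hypothetical findSlowSpans tool built on the McpServer and runApl pieces from the first sketch. The duration and trace_id field names follow OpenTelemetry conventions, and the Kusto-style duration literal (e.g. 500ms) and the in operator are assumed to be valid APL for the datasets in question.

```typescript
// Hypothetical trace tool: surface the slowest spans above a threshold,
// then correlate them with log lines emitted within the same traces.
server.tool(
  "findSlowSpans",
  "List the slowest spans above a latency threshold and fetch logs from the same traces",
  {
    tracesDataset: z.string(),
    logsDataset: z.string(),
    thresholdMs: z.number(),
    limit: z.number().int().positive().default(10),
  },
  async ({ tracesDataset, logsDataset, thresholdMs, limit }) => {
    // Filter by span duration, worst offenders first.
    const spans = await runApl(
      `['${tracesDataset}'] | where duration > ${thresholdMs}ms | sort by duration desc | limit ${limit}`
    );
    const traceIds = [
      ...new Set(
        (spans.matches ?? [])
          .map((m: { data?: { trace_id?: string } }) => m.data?.trace_id)
          .filter(Boolean)
      ),
    ];
    // Correlate: fetch log lines that share the offending trace ids.
    const logs = traceIds.length
      ? await runApl(
          `['${logsDataset}'] | where trace_id in (${traceIds.map((id) => `'${id}'`).join(", ")})`
        )
      : { matches: [] };
    return { content: [{ type: "text", text: JSON.stringify({ spans, logs }) }] };
  }
);
```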
Unique: Axiom's MCP server understands trace structure (span hierarchies, parent-child relationships) and enables the LLM to query traces by span attributes and duration thresholds, then correlate slow/failed spans with logs. This allows conversational trace debugging without requiring users to navigate trace UIs.
vs alternatives: More accessible than learning Jaeger or Zipkin UIs, and faster than manually clicking through trace waterfalls, but lacks visual span waterfall diagrams and is limited to Axiom's trace schema and indexing capabilities.
mcp-protocol-based tool registration and function calling
Implements the Model Context Protocol (MCP) server specification, exposing Axiom query capabilities as callable tools that LLM clients (Claude, etc.) can invoke with structured arguments. Uses MCP's resource and tool definitions to declare available queries, their parameters, and return types, enabling the LLM to autonomously decide when to query Axiom and how to interpret results.
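The registration mechanics follow the standard low-level TypeScript SDK pattern: a tools/list handler advertises each tool with a JSON Schema for its parameters, and a tools/call handler dispatches invocations. A sketch (the tool surface shown is illustrative, not the actual server's source):

```typescript
// Sketch of MCP tool registration with the low-level TypeScript SDK.
// The client discovers tools via tools/list, then invokes them via
// tools/call; inputSchema tells the LLM what arguments to supply.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "axiom-mcp", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "queryApl",
      description: "Run an APL query against Axiom event data",
      inputSchema: {
        type: "object",
        properties: {
          apl: { type: "string", description: "APL query, e.g. ['logs'] | where level == 'error'" },
          startTime: { type: "string", description: "RFC3339 start of the time range" },
          endTime: { type: "string", description: "RFC3339 end of the time range" },
        },
        required: ["apl"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  if (req.params.name !== "queryApl") {
    throw new Error(`Unknown tool: ${req.params.name}`);
  }
  const { apl, startTime, endTime } = req.params.arguments as {
    apl: string;
    startTime?: string;
    endTime?: string;
  };
  const result = await runApl(apl, startTime, endTime); // helper from the first sketch
  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});
```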
Unique: Implements the MCP server specification to expose Axiom as a first-class tool in LLM applications, using MCP's standardized resource and tool definitions to enable autonomous tool invocation. This allows LLMs to query Axiom without custom integrations or API wrappers.
vs alternatives: More standardized and interoperable than custom REST API wrappers, and enables autonomous LLM tool use without manual function calling, but adds protocol overhead and requires MCP-compatible LLM clients (currently limited to Claude and a few others).
conversational multi-turn debugging with context preservation
Maintains conversation state across multiple turns, preserving query context (selected datasets, time ranges, filters) so follow-up questions can reference previous results without re-specifying parameters. The MCP server tracks conversation history and allows the LLM to refer back to earlier queries (e.g., 'show me more details about the error from the last query').
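A sketch of the state-keeping side, assuming a hypothetical QueryContext merged per session, so that parameters omitted in a follow-up call fall back to values from earlier turns.

```typescript
// Hypothetical per-conversation context store. A tool call may omit
// parameters that were set earlier in the session; the server fills the
// gaps from the saved context and records the latest result so follow-ups
// like "more details about the error from the last query" can resolve.
type QueryContext = {
  dataset?: string;
  timeRange?: { startTime: string; endTime: string };
  filters: string[];    // accumulated APL `where` clauses
  lastResult?: unknown; // kept so follow-ups can reference it
};

const contexts = new Map<string, QueryContext>();

function withContext(sessionId: string, incoming: Partial<QueryContext>): QueryContext {
  const prev = contexts.get(sessionId) ?? { filters: [] };
  const merged: QueryContext = {
    dataset: incoming.dataset ?? prev.dataset,
    timeRange: incoming.timeRange ?? prev.timeRange,
    filters: [...prev.filters, ...(incoming.filters ?? [])],
    lastResult: prev.lastResult,
  };
  contexts.set(sessionId, merged);
  return merged;
}

// Called after each query so the next turn can refer back to its output.
function rememberResult(sessionId: string, result: unknown): void {
  const ctx = contexts.get(sessionId) ?? { filters: [] };
  contexts.set(sessionId, { ...ctx, lastResult: result });
}
```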
Unique: Preserves query context (datasets, time ranges, filters) across multi-turn conversations, allowing follow-up questions to inherit context without re-specification. The MCP server tracks conversation state and enables the LLM to reference previous results.
vs alternatives: More natural than stateless query interfaces where each question requires full context re-specification, but loses state on connection reset and depends on the LLM's context window to track conversation history.