standardized harness api bridging via mcp over the json-rpc 2.0 stdio transport
Exposes Harness platform APIs through a Model Context Protocol (MCP) server that communicates with clients (Claude Desktop, VS Code, Cursor, Windsurf) using JSON-RPC 2.0 over stdio. The server acts as a protocol adapter, translating MCP tool calls into authenticated HTTP requests to Harness backend services and marshaling responses back through the MCP interface. This enables AI assistants and development tools to invoke Harness operations without direct API knowledge.
Unique: Implements dual-mode authentication (API key for external clients via stdio, JWT for internal services) with mode-specific toolset registration, allowing the same MCP server binary to serve both external developers and internal Harness microservices with appropriate access controls and base URLs.
vs alternatives: Provides standardized MCP protocol support across multiple IDEs and AI tools simultaneously, whereas direct REST API clients require tool-specific integration code for each platform.
dual-mode authentication with api key and jwt token providers
The server implements two distinct authentication mechanisms, selected via the config.Internal flag: external stdio mode uses an APIKeyProvider to authenticate requests with Harness API keys passed by clients, while internal mode uses a JWTProvider to authenticate with JWT tokens signed using service-specific secrets. Each provider wraps HTTP client operations, injecting credentials into request headers before forwarding requests to Harness backend services. This architecture enables the same MCP server to serve both external developers and internal microservices with appropriate security boundaries.
Unique: Implements pluggable authentication providers (APIKeyProvider and JWTProvider) that wrap HTTP client creation at initialization time, allowing the same service client code to work with either authentication mechanism without conditional logic throughout the codebase. The InitToolsets orchestrator selects the appropriate provider based on config.Internal flag.
vs alternatives: Supports both external API key and internal JWT authentication in a single binary, whereas most MCP servers require separate deployments or hardcoded authentication mechanisms.
internal ai services access with genai and chatbot integration (internal mode only)
Exposes internal Harness AI services through AIServices toolset available only in internal mode (JWT authentication). This includes genai service for AI-powered code generation and analysis, and chatbot service for conversational AI interactions. The implementation provides internal Harness microservices with direct access to AI capabilities through MCP tools, enabling AI-driven features within the Harness platform itself. These toolsets are not exposed in external stdio mode for security and licensing reasons.
Unique: Implements internal AI services (genai, chatbot) as toolsets that are conditionally registered only in internal mode (config.Internal = true), providing Harness microservices with direct MCP access to AI capabilities while maintaining security boundaries that prevent external client access.
vs alternatives: Provides internal Harness services with standardized MCP access to AI capabilities, whereas direct service-to-service calls require custom integration code and lack the standardized tool interface.
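The conditional gate on internal-only toolsets can be sketched as a flag check at registration time; the types and toolset names below are invented for illustration, but the shape matches the description: when config.Internal is false, the AIServices toolset is simply never registered, so external clients cannot discover or call it.

```go
package main

import "fmt"

// Toolset is a minimal stand-in for the server's toolset type.
type Toolset struct {
	Name         string
	InternalOnly bool
}

// registerToolsets mirrors the config.Internal gate described above:
// internal-only toolsets (genai, chatbot) are skipped in external mode.
func registerToolsets(internal bool) []Toolset {
	all := []Toolset{
		{Name: "pipelines"},
		{Name: "connectors"},
		{Name: "aiservices", InternalOnly: true}, // genai + chatbot
	}
	var registered []Toolset
	for _, ts := range all {
		if ts.InternalOnly && !internal {
			continue // external stdio clients never see internal AI tools
		}
		registered = append(registered, ts)
	}
	return registered
}

func main() {
	fmt.Println(len(registerToolsets(false)), len(registerToolsets(true))) // 2 3
}
```

Gating at registration, rather than rejecting calls at dispatch time, means external clients do not even see the internal tools in `tools/list`, which is the stronger security posture.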
connector management and configuration querying
Exposes connector operations through a Connectors toolset that enables listing configured connectors, retrieving connector details, validating connector connectivity, and managing connector configurations. The implementation provides access to all Harness connector types (Git, artifact registry, cloud, infrastructure) through unified APIs. This enables AI agents to discover available integrations, validate connector health, and manage connector configurations programmatically.
Unique: Implements connector operations through Harness Connector Service, providing unified access to all connector types (Git, artifact, cloud, infrastructure) with consistent APIs for listing, validating, and managing connectors. The Connectors service client abstracts connector-specific details, enabling AI agents to work with any connector type using identical tool signatures.
vs alternatives: Provides unified connector management across all Harness connector types through a single toolset, whereas direct connector APIs require separate implementations for each connector type.
dashboard and metric visualization querying with custom dashboard support
Exposes dashboard operations through a Dashboards toolset that enables listing dashboards, retrieving dashboard definitions, querying dashboard metrics, and analyzing dashboard data. The implementation provides access to both built-in and custom Harness dashboards, allowing AI agents to retrieve metrics and visualizations for analysis. This lets AI agents understand system state through dashboard data, generate reports, and surface insights from dashboard metrics.
Unique: Implements dashboard operations through Harness Dashboard Service, providing unified access to both built-in and custom dashboards with metric querying and analysis capabilities. The Dashboards service client abstracts dashboard-specific details, enabling AI agents to retrieve and analyze dashboard data without understanding dashboard definition formats.
vs alternatives: Provides unified dashboard data retrieval and analysis through Harness, whereas direct dashboard tools (Grafana, Datadog) require separate APIs and metric aggregation logic.
read-only mode enforcement with configurable write operation restrictions
Implements a read-only mode, enabled via the --read-only flag in stdio mode, that blocks write operations (pipeline execution, PR comments, connector modifications) while allowing read operations (querying status, retrieving logs, listing resources). The implementation enforces the restriction at the toolset level by skipping registration of write-capable tools. This enables safe deployment of MCP servers in restricted environments where only query operations are permitted.
Unique: Implements read-only mode as a startup configuration flag that conditionally registers write-capable toolsets, providing a simple but effective mechanism to prevent write operations in restricted environments. The implementation enforces read-only restrictions at the toolset registration level rather than per-operation, reducing complexity.
vs alternatives: Provides simple read-only mode enforcement through startup flags, whereas fine-grained access control systems require complex permission management and per-operation authorization checks.
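The registration-level enforcement can be sketched as a filter applied once at startup. The Tool type and the Write marker are illustrative assumptions; the real server may express writability differently, but the mechanism is the same: tools that mutate state are never registered, so no per-call authorization check is needed.

```go
package main

import "fmt"

// Tool marks whether the operation mutates state.
type Tool struct {
	Name  string
	Write bool
}

// filterTools drops write-capable tools when --read-only is set.
func filterTools(tools []Tool, readOnly bool) []Tool {
	if !readOnly {
		return tools
	}
	var out []Tool
	for _, t := range tools {
		if !t.Write {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	tools := []Tool{
		{Name: "pipelines_execute", Write: true},
		{Name: "pipelines_status"},
		{Name: "pr_comment", Write: true},
	}
	for _, t := range filterTools(tools, true) {
		fmt.Println(t.Name) // only the read-only status tool survives
	}
}
```

Because the filter runs before the toolset group is handed to the MCP layer, a read-only deployment cannot be talked into a write operation: the tool simply does not exist in that process.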
layered toolset registration with service client abstraction
The server uses a layered architecture in which the InitToolsets function orchestrates registration of multiple domain-specific toolsets (Pipeline, PullRequest, Repository, ArtifactRegistry, CloudCost, ChaosEngineering, Logs, AIServices, Connectors, Dashboards). Each toolset follows a consistent registration pattern: create an HTTP client with the appropriate authentication, instantiate a service client that wraps Harness API operations, create a toolset with individual tools, and add it to a toolset group. Service clients abstract HTTP details and provide business logic, while toolsets expose individual operations as MCP tools with standardized parameter schemas.
Unique: Implements a consistent registration pattern across 10+ toolsets where each follows: HTTP client creation → service client instantiation → tool definition → toolset group addition. This pattern is enforced in pkg/harness/tools.go registration functions (lines 125-221), enabling predictable extension points and reducing boilerplate for new toolsets.
vs alternatives: Provides organized, domain-specific toolset grouping with consistent registration patterns, whereas generic MCP servers require flat tool lists or custom registration logic for each new capability.
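The four-step registration pattern can be sketched end to end. All types here are minimal stand-ins for the real ones in pkg/harness, and the tool names are illustrative; the sequence (HTTP client → service client → tool definitions → group addition) is the part that mirrors the description.

```go
package main

import (
	"fmt"
	"net/http"
)

// ServiceClient wraps Harness API calls behind an authenticated HTTP client.
type ServiceClient struct {
	httpClient *http.Client
	baseURL    string
}

// Toolset groups the MCP tools for one domain.
type Toolset struct {
	Name  string
	Tools []string
}

// ToolsetGroup collects every registered toolset for the server.
type ToolsetGroup struct{ Toolsets []Toolset }

func (g *ToolsetGroup) Add(ts Toolset) { g.Toolsets = append(g.Toolsets, ts) }

// registerPipelines follows the four-step pattern described above.
func registerPipelines(group *ToolsetGroup, baseURL string) {
	client := &http.Client{}                                   // 1. authenticated HTTP client
	svc := ServiceClient{httpClient: client, baseURL: baseURL} // 2. service client
	_ = svc                                                    // (tools would close over svc)
	ts := Toolset{ // 3. tool definitions (parameter schemas elided)
		Name:  "pipelines",
		Tools: []string{"pipelines_execute", "pipelines_status", "pipelines_logs"},
	}
	group.Add(ts) // 4. add to the toolset group
}

func main() {
	var group ToolsetGroup
	registerPipelines(&group, "https://app.harness.io")
	fmt.Println(group.Toolsets[0].Name, len(group.Toolsets[0].Tools))
}
```

Adding a new capability then means writing one more register function in this shape, which is why the pattern keeps boilerplate low across 10+ toolsets.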
pipeline execution and status monitoring with real-time log streaming
Exposes Harness pipeline operations through a Pipeline toolset that enables triggering pipeline executions, querying execution status, retrieving execution logs, and monitoring execution stages. The implementation wraps Harness Pipeline Service APIs, allowing clients to start pipelines with input variables, poll execution status with stage-level granularity, and stream execution logs in real-time. This enables AI agents to orchestrate CI/CD workflows and provide developers with execution feedback without manual dashboard navigation.
Unique: Implements pipeline execution as a toolset that combines execution triggering, status polling, and log retrieval into a cohesive workflow abstraction. The Pipeline service client wraps Harness Pipeline Service APIs with business logic for variable injection and stage-level status tracking, enabling AI agents to reason about pipeline state without understanding Harness API details.
vs alternatives: Provides integrated pipeline execution and monitoring through MCP tools, whereas direct Harness API clients require separate calls to trigger, poll, and retrieve logs with manual state management.
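The trigger-then-poll workflow the Pipeline toolset wraps can be sketched as a bounded polling loop. The status strings and the statusFn callback are assumptions standing in for the Harness execution-status API; the real client also retrieves stage-level detail and logs alongside the top-level status.

```go
package main

import (
	"fmt"
	"time"
)

// pollStatus polls until a terminal state or the poll budget is exhausted.
// statusFn stands in for a call to the Harness execution-status API.
func pollStatus(statusFn func() string, interval time.Duration, maxPolls int) string {
	for i := 0; i < maxPolls; i++ {
		status := statusFn()
		switch status {
		case "Success", "Failed", "Aborted": // assumed terminal states
			return status
		}
		time.Sleep(interval)
	}
	return "Timeout"
}

func main() {
	// Fake backend: running twice, then succeeds.
	states := []string{"Running", "Running", "Success"}
	i := 0
	statusFn := func() string {
		s := states[i]
		if i < len(states)-1 {
			i++
		}
		return s
	}
	fmt.Println(pollStatus(statusFn, time.Millisecond, 10)) // Success
}
```

Packaging trigger, poll, and log retrieval as separate MCP tools over one service client is what lets an AI agent drive this loop itself while the server handles authentication and API details.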
+6 more capabilities