Linear MCP Server
MCP Server · Free. Create and manage Linear issues and projects via MCP.
Capabilities (13 decomposed)
mcp protocol-compliant issue creation with team-scoped context
Medium confidence. Creates new Linear issues through MCP tool invocation by translating LLM natural language requests into Linear API mutations. The server validates required parameters (title, teamId) and optional fields (description, priority, status), then queues the request through a rate-limited client that enforces Linear's 1400 requests/hour limit. Returns structured issue metadata including ID, URL, and status for LLM context.
Implements MCP tool schema with Linear-specific parameter validation and rate-limit-aware queueing, ensuring LLM requests respect API quotas without blocking the client. Uses LinearMCPClient abstraction to decouple protocol handling from API integration.
Simpler than building custom Linear integrations because it handles MCP protocol translation and rate limiting automatically, while remaining more flexible than Linear's native Slack/GitHub integrations by supporting any MCP-compatible LLM client.
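The parameter validation described above can be sketched as follows. This is a minimal illustration, not the server's actual code: the interface, schema object, and function names (`CreateIssueArgs`, `createIssueSchema`, `validateCreateIssue`) are hypothetical.

```typescript
// Hypothetical sketch of a create-issue tool schema and its validation step.
interface CreateIssueArgs {
  title?: string;
  teamId?: string;
  description?: string;
  priority?: number; // Linear priorities run 0 (none) through 4 (low)
  status?: string;
}

// JSON Schema fragment an MCP server would expose to describe this tool.
const createIssueSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    teamId: { type: "string" },
    description: { type: "string" },
    priority: { type: "number" },
    status: { type: "string" },
  },
  required: ["title", "teamId"],
};

// Check required fields before the request is queued for the Linear API.
function validateCreateIssue(args: CreateIssueArgs): { ok: boolean; error?: string } {
  if (!args.title) return { ok: false, error: "Missing required parameter: title" };
  if (!args.teamId) return { ok: false, error: "Missing required parameter: teamId" };
  return { ok: true };
}
```

A failed validation would be returned to the LLM client as an error response rather than forwarded to Linear.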
multi-filter issue search with query expansion and team scoping
Medium confidence. Searches Linear issues using a query string combined with optional filters (teamId, status, assigneeId, labels, priority) by translating them into Linear GraphQL queries. The server constructs parameterized queries that filter across multiple dimensions simultaneously, returning paginated results with issue metadata. Supports both full-text search on title/description and structured filtering on issue properties.
Combines full-text search with structured filtering through a single MCP tool, allowing LLMs to express complex queries naturally ('find open bugs assigned to me') without requiring users to learn Linear's filter syntax. Rate limiter ensures search requests don't exhaust API quota.
More flexible than Linear's built-in saved views because it accepts dynamic filter parameters from LLM context, and simpler than building custom GraphQL clients because the MCP server handles query construction and pagination.
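A sketch of how such multi-dimensional filters might be assembled, with filters AND-ed together. The field names mirror Linear's GraphQL `IssueFilter` shape, but the helper itself is illustrative, not the server's actual implementation.

```typescript
// Hypothetical translation of MCP search arguments into a Linear-style
// GraphQL filter object; all supplied dimensions are AND-ed together.
interface SearchArgs {
  query?: string;
  teamId?: string;
  status?: string;
  assigneeId?: string;
  labels?: string[];
  priority?: number;
}

function buildIssueFilter(args: SearchArgs): Record<string, unknown> {
  const filter: Record<string, unknown> = {};
  if (args.teamId) filter.team = { id: { eq: args.teamId } };
  if (args.status) filter.state = { name: { eq: args.status } };
  if (args.assigneeId) filter.assignee = { id: { eq: args.assigneeId } };
  if (args.labels?.length) filter.labels = { name: { in: args.labels } };
  if (args.priority !== undefined) filter.priority = { eq: args.priority };
  return filter;
}
```

Because every supplied dimension becomes another top-level key on one filter object, there is no way to express OR logic between dimensions, which matches the limitation listed for this server.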
mcp protocol translation with stdio transport and tool schema exposure
Medium confidence. Implements the Model Context Protocol (MCP) server specification by handling MCP requests (list resources, read resource, list tools, call tool) from LLM clients via stdio transport. The server translates MCP tool invocations into LinearMCPClient method calls and formats responses back to the protocol format. Exposes tool schemas that describe available operations and their parameters to the LLM client.
Implements full MCP server specification with stdio transport, enabling seamless integration with Claude Desktop and other MCP-compatible clients. Tool schemas are statically defined but cover all major Linear operations.
Simpler than building custom REST APIs because MCP handles protocol translation automatically, and more flexible than Linear's native integrations because it works with any MCP-compatible LLM client.
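The request-dispatch loop at the heart of this can be sketched without the SDK. A real server would use `@modelcontextprotocol/sdk`'s `Server` and `StdioServerTransport`; the method names below follow the MCP specification, while the types and tool list are illustrative.

```typescript
// Minimal sketch of MCP request dispatch over a JSON-RPC-style envelope.
type McpRequest = { id: number; method: string; params?: Record<string, unknown> };
type McpResponse = { id: number; result?: unknown; error?: { code: number; message: string } };

// Statically defined tool schemas, as the description above notes.
const tools = [
  { name: "create_issue", description: "Create a Linear issue" },
  { name: "search_issues", description: "Search Linear issues" },
];

function handleRequest(req: McpRequest): McpResponse {
  switch (req.method) {
    case "tools/list":
      return { id: req.id, result: { tools } };
    case "tools/call":
      // A real server would dispatch to a LinearMCPClient method here.
      return { id: req.id, result: { content: [{ type: "text", text: "ok" }] } };
    default:
      // -32601 is the standard JSON-RPC "method not found" code.
      return { id: req.id, error: { code: -32601, message: `Unknown method: ${req.method}` } };
  }
}
```

Over stdio transport, each request arrives as a line of JSON on stdin and each response is written as a line of JSON to stdout.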
structured error handling and response formatting for llm consumption
Medium confidence. Handles errors from Linear API calls and formats them as MCP-compliant error responses that LLMs can interpret. The server catches API errors (authentication failures, invalid parameters, rate limit errors) and serializes them with descriptive messages and error codes. Ensures that LLM clients receive actionable error information rather than raw API responses.
Translates Linear API errors into MCP-compliant error responses with descriptive messages, enabling LLM clients to understand failures without exposing raw API details. Error handling is transparent to MCP tools.
More user-friendly than raw API errors because it provides MCP-formatted messages, and simpler than building custom error recovery because it delegates retry logic to the LLM client.
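The error translation might look like the following sketch. The mapping, hint strings, and function name are illustrative assumptions, not the server's actual error handler.

```typescript
// Hypothetical mapping from Linear API failures to MCP-style error content.
function formatMcpError(err: { status?: number; message: string }) {
  let hint = "";
  if (err.status === 401) hint = "Check that the Linear API key is set and valid.";
  else if (err.status === 429) hint = "Linear rate limit hit; retry after the quota window resets.";
  else if (err.status === 400) hint = "One or more parameters were rejected by the Linear API.";
  return {
    isError: true,
    content: [{ type: "text", text: hint ? `${err.message} (${hint})` : err.message }],
  };
}
```

Returning `isError: true` with a text payload keeps the failure inside the normal tool-result channel, so the LLM client can read the message and decide whether to retry or rephrase.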
resource template definition for issue context
Medium confidence. Defines MCP resource templates that allow clients to request issue data using URI patterns (e.g., 'linear://issue/{issueId}'), enabling LLMs to reference issues as persistent resources rather than one-off API calls. The server implements resource reading that fetches issue details when a client requests a resource URI, integrating issue context into the LLM's knowledge base.
Implements MCP resource templates for issues, allowing LLMs to treat Linear issues as first-class resources in the conversation context rather than requiring explicit tool calls.
More seamless than tool-based issue fetching because users can paste issue URIs directly; simpler than building a separate context manager because it leverages MCP's native resource protocol.
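Resolving the `linear://issue/{issueId}` template can be sketched as a simple URI matcher; the function name and regex are illustrative.

```typescript
// Sketch of matching the 'linear://issue/{issueId}' resource template.
// A matched URI yields the issueId to fetch; anything else is rejected.
function matchIssueUri(uri: string): { issueId: string } | null {
  const m = /^linear:\/\/issue\/([^/]+)$/.exec(uri);
  return m ? { issueId: m[1] } : null;
}
```

When a client issues a read for a matched URI, the server would fetch the issue by the extracted `issueId` and return its details as resource contents.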
issue update with selective field mutation and conflict avoidance
Medium confidence. Updates existing Linear issues by accepting an issue ID and a set of fields to modify (title, description, priority, status, assignee). The server constructs targeted GraphQL mutations that update only specified fields, avoiding unnecessary API calls or conflicts from partial updates. Returns the updated issue state to confirm changes to the LLM client.
Implements selective field updates through GraphQL mutations rather than full-object replacement, reducing API payload size and avoiding unnecessary field overwrites. Rate limiter queues mutations to respect Linear's request limits.
More granular than Linear's REST API because it updates only specified fields, and safer than direct GraphQL access because the MCP server validates field names and types before submission.
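Selective field mutation boils down to building an input object that omits untouched fields entirely. A minimal sketch, with hypothetical names:

```typescript
// Hypothetical builder for a selective-field update payload: only fields
// the caller actually supplied end up in the GraphQL mutation input, so
// unspecified fields are never overwritten.
interface UpdateIssueArgs {
  title?: string;
  description?: string;
  priority?: number;
  stateId?: string;
  assigneeId?: string;
}

function buildUpdateInput(args: UpdateIssueArgs): Record<string, unknown> {
  const input: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    if (value !== undefined) input[key] = value; // drop untouched fields
  }
  return input;
}
```

Omitting a field (rather than sending `null` or a stale value) is what avoids clobbering concurrent edits to fields the caller never mentioned.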
user-scoped issue retrieval with archive filtering and pagination
Medium confidence. Retrieves all issues assigned to a specific user by querying the Linear API with userId and optional filters (includeArchived, limit). The server constructs a GraphQL query that fetches the user's issue list with metadata, supporting pagination through limit parameters. Returns issues in a format suitable for LLM processing (title, status, priority, team, URL).
Provides a dedicated user-scoped query path that's more efficient than generic search for the common case of 'show me my issues', with built-in archive filtering to distinguish active from historical work. Integrates with rate limiter to queue requests.
Simpler than building custom GraphQL queries because it abstracts away Linear's schema, and more efficient than searching by assigneeId because it's optimized for the single-user case.
issue comment addition with optional user attribution and icon customization
Medium confidence. Adds comments to Linear issues by accepting an issueId, comment body, and optional parameters for user attribution (createAsUser) and display customization (displayIconUrl). The server constructs a GraphQL mutation that appends the comment to the issue's activity stream. Supports both direct comments and comments attributed to specific users or bots with custom icons.
Supports optional user attribution and custom icon URLs, enabling LLM agents to post comments that appear to come from specific users or branded bots. Rate limiter queues comment mutations to avoid API quota exhaustion.
More flexible than Linear's native integrations because it allows custom user attribution and icon customization, and simpler than building custom GraphQL clients because the MCP server handles mutation construction.
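Assembling the comment mutation input follows the same include-only-what-was-given pattern; this helper is illustrative, though `createAsUser` and `displayIconUrl` are the option names the capability itself describes.

```typescript
// Sketch of building a comment-creation input with optional attribution.
interface CommentArgs {
  issueId: string;
  body: string;
  createAsUser?: string;     // display name for attributed bot/user comments
  displayIconUrl?: string;   // custom avatar for the attributed comment
}

function buildCommentInput(args: CommentArgs): Record<string, unknown> {
  const input: Record<string, unknown> = { issueId: args.issueId, body: args.body };
  if (args.createAsUser) input.createAsUser = args.createAsUser;
  if (args.displayIconUrl) input.displayIconUrl = args.displayIconUrl;
  return input;
}
```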
single issue detail retrieval with full metadata expansion
Medium confidence. Retrieves complete metadata for a specific Linear issue by issueId, including title, description, status, priority, assignee, labels, team, created/updated timestamps, and issue URL. The server constructs a GraphQL query that fetches all available issue fields in a single request. Returns structured data suitable for LLM context or display.
Provides a dedicated single-issue query that fetches all standard fields in one GraphQL request, optimized for the common case of 'tell me about issue X'. Integrates with rate limiter to queue requests.
More efficient than generic search for retrieving a known issue, and simpler than building custom GraphQL queries because it abstracts away Linear's schema.
team-scoped issue listing with pagination and filtering
Medium confidence. Retrieves all issues belonging to a specific Linear team by teamId, with optional pagination (limit) and filtering. The server constructs a GraphQL query that fetches the team's issue list with metadata. Returns issues in paginated batches suitable for LLM processing or display.
Provides a dedicated team-scoped query that's more efficient than generic search for the common case of 'show me team X's issues'. Rate limiter queues requests to respect API limits.
Simpler than building custom GraphQL queries because it abstracts away Linear's schema, and more efficient than searching by teamId because it's optimized for the team-scoped case.
authenticated user context retrieval for permission and identity validation
Medium confidence. Retrieves the current authenticated user's information (ID, name, email, team memberships) by querying the Linear API's viewer endpoint. The server uses the provided API key to determine who is making requests and returns user metadata for LLM context. Enables permission checks and user-scoped operations.
Provides a dedicated viewer query that returns the authenticated user's identity and team context, enabling LLM agents to make permission-aware decisions and user-scoped queries. Integrates with rate limiter.
Simpler than building custom GraphQL queries for user validation, and provides a clear identity anchor for multi-user MCP server deployments.
organization-level metadata retrieval for workspace context
Medium confidence. Retrieves organization-level information (name, teams, members, settings) from the Linear workspace by querying the organization endpoint. The server returns workspace metadata that provides context for team and user operations. Enables LLM agents to understand the workspace structure.
Provides a dedicated organization query that returns workspace structure and team list, enabling LLM agents to resolve team references and understand workspace context. Rate limiter queues requests.
Simpler than building custom GraphQL queries for workspace metadata, and provides a clear context anchor for multi-team MCP server deployments.
request rate limiting with queue-based throttling and quota tracking
Medium confidence. Enforces Linear API rate limits (1400 requests per hour) by implementing a queue-based rate limiter that delays requests to stay within quota. The RateLimiter component tracks request statistics, enforces minimum delays between requests, and batches requests when possible. All API calls flow through this limiter to prevent quota exhaustion and API errors.
Implements queue-based rate limiting with request batching to maximize throughput while respecting Linear's 1400 req/hr quota. Transparent to MCP tools — all rate limiting happens in the LinearMCPClient abstraction layer.
More sophisticated than naive request delays because it batches requests and tracks quota, and simpler than implementing per-user rate limiting because it uses a shared quota model suitable for single-workspace deployments.
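The core of a queue-based limiter like the one described can be sketched as follows. This shows only the spacing-and-queueing idea under the stated 1400 req/hr quota; the real RateLimiter's batching and statistics tracking are not reproduced, and the class shape is an assumption.

```typescript
// Minimal queue-based rate limiter sketch: chains each task onto a promise
// queue so calls run one at a time, spaced by a minimum delay derived from
// the hourly quota.
class RateLimiter {
  private queue: Promise<void> = Promise.resolve();
  private lastRun = 0;
  readonly minDelayMs: number;
  requestCount = 0; // simple quota-tracking statistic

  constructor(requestsPerHour = 1400) {
    // 3,600,000 ms per hour / quota = minimum spacing between requests.
    this.minDelayMs = Math.ceil(3_600_000 / requestsPerHour);
  }

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    const run = this.queue.then(async () => {
      const wait = this.lastRun + this.minDelayMs - Date.now();
      if (wait > 0) await new Promise((r) => setTimeout(r, wait));
      this.lastRun = Date.now();
      this.requestCount++;
      return task();
    });
    // Keep the chain alive even if a task rejects.
    this.queue = run.then(() => undefined, () => undefined);
    return run;
  }
}
```

At 1400 requests/hour the minimum spacing works out to roughly 2.6 seconds per request, which is the queueing latency the Known Limitations section warns about during high-volume creation.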
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Linear MCP Server, ranked by overlap. Discovered automatically through the match graph.
@scope-pm/mcp
ScopePM MCP proxy for routing MCP tool calls to the hosted API.
phoenix-ai
GenAI library for RAG, MCP, and Agentic AI
mcpflow-router
MCP tool router with smart-search and on-demand loading
@maz-ui/mcp
Maz-UI ModelContextProtocol Client
fast-filesystem-mcp
Advanced filesystem operations with large file handling capabilities and Claude-optimized features. Provides fast file reading/writing, sequential reading for large files, directory operations, file search, and streaming writes with backup & recovery.
MCP-Connect
A client that enables cloud-based AI services to access local stdio-based MCP servers via HTTP/HTTPS requests.
Best For
- ✓ AI assistant users (Claude Desktop) wanting hands-free issue creation
- ✓ Teams building LLM agents that manage Linear workflows
- ✓ Developers integrating Linear into MCP-compatible AI applications
- ✓ Teams using Claude to query Linear without opening the app
- ✓ LLM agents building issue triage or reporting workflows
- ✓ Developers building AI-powered Linear dashboards or summaries
- ✓ Claude Desktop users wanting native Linear integration
- ✓ Developers building MCP-compatible LLM applications
Known Limitations
- ⚠ Requires explicit teamId parameter — no team auto-detection from context
- ⚠ Single issue creation per tool call — no batch creation in one invocation
- ⚠ Rate limiter queues requests, adding latency during high-volume creation (1400 req/hr cap)
- ⚠ No validation of custom field values — relies on Linear API to reject invalid data
- ⚠ Query string is basic substring matching — no advanced syntax (regex, boolean operators)
- ⚠ Filters are AND-ed together — no OR logic between filter dimensions
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Community MCP server for Linear project management. Provides tools to create and update issues, search across projects, manage cycles and labels, and query team workflows in Linear.