mcp
MCP Server (Free): Official MCP Servers for AWS
Capabilities: 12 decomposed
AWS service tool exposure via standardized MCP protocol
Medium confidence: Exposes 50+ AWS services (Lambda, DynamoDB, S3, ECS, SageMaker, Bedrock, etc.) as callable tools through the Model Context Protocol, using a unified schema-based function registry that translates AWS SDK operations into LLM-compatible tool definitions. Each MCP server wraps AWS service clients and translates their responses into structured JSON that LLMs can reason about and chain together, enabling AI assistants to orchestrate multi-service AWS workflows without custom integration code.
Implements 50+ specialized MCP servers (not a single monolithic wrapper) where each server is independently deployable and focuses on a specific AWS service domain (compute, data, AI/ML, infrastructure), using a standardized MCP server template and design guidelines to ensure consistent tool schema generation and error handling across heterogeneous AWS APIs
Provides deeper AWS service coverage than generic AWS SDK wrappers because each server is purpose-built with domain-specific tool schemas, error handling, and documentation rather than auto-generating tools from SDK method signatures
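A minimal, stdlib-only sketch of the translation step described above: turning an AWS-SDK-style operation description into an MCP tool definition (the function name and parameter encoding here are illustrative, not the project's actual API; the output shape follows MCP's tool listing with `name`, `description`, and `inputSchema`):

```python
import json

def to_mcp_tool(service: str, operation: str, params: dict) -> dict:
    """Translate an AWS-SDK-style operation into an MCP tool definition.

    `params` maps parameter names to (json_type, required) pairs.
    """
    return {
        "name": f"{service}_{operation}",
        "description": f"Call {service}:{operation} via the AWS SDK",
        "inputSchema": {
            "type": "object",
            "properties": {k: {"type": t} for k, (t, _) in params.items()},
            "required": [k for k, (_, req) in params.items() if req],
        },
    }

tool = to_mcp_tool("s3", "list_objects_v2",
                   {"Bucket": ("string", True), "Prefix": ("string", False)})
print(json.dumps(tool, indent=2))
```

The point of the domain-specific servers is that these schemas are hand-curated per service rather than mechanically generated like this sketch.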
Infrastructure-as-code tool generation for Terraform, CloudFormation, and CDK
Medium confidence: Generates specialized MCP servers for Terraform, CloudFormation, and AWS CDK that expose infrastructure-as-code operations as LLM-callable tools. These servers parse IaC configuration files, generate tool schemas for resource creation/modification, and translate LLM tool invocations back into IaC syntax or API calls, enabling AI assistants to author and modify infrastructure definitions without direct file editing.
Implements separate, specialized MCP servers for each IaC framework (Terraform, CloudFormation, CDK) rather than a unified wrapper, allowing each server to leverage framework-specific parsing (HCL parser for Terraform, CloudFormation template introspection, CDK construct APIs) and generate native syntax that preserves framework idioms and best practices
Generates framework-native IaC code with proper syntax and idioms rather than generic resource definitions, because each server understands the specific framework's module system, variable scoping, and composition patterns
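As a toy illustration of translating a tool invocation's arguments back into framework-native syntax, here is a sketch that renders a Terraform resource block (real servers use a proper HCL parser/printer; this string templating only shows the direction of the translation):

```python
def render_terraform(resource_type: str, name: str, attrs: dict) -> str:
    """Render a tool invocation's arguments as a Terraform resource block."""
    lines = [f'resource "{resource_type}" "{name}" {{']
    for key, value in attrs.items():
        # Strings are quoted; booleans map to Terraform's lowercase literals.
        rendered = f'"{value}"' if isinstance(value, str) else str(value).lower()
        lines.append(f"  {key} = {rendered}")
    lines.append("}")
    return "\n".join(lines)

hcl = render_terraform("aws_s3_bucket", "logs",
                       {"bucket": "my-log-bucket", "force_destroy": True})
print(hcl)
```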
Multi-server orchestration and client-side tool aggregation
Medium confidence: Enables MCP clients (Claude Desktop, custom LLM applications) to connect to multiple MCP servers simultaneously and aggregate their tool definitions into a unified tool registry. The client-side orchestration layer handles server lifecycle management, tool schema merging, request routing to appropriate servers, and error handling across heterogeneous servers, enabling LLMs to seamlessly invoke tools across AWS services without awareness of server boundaries.
Implements client-side orchestration that aggregates tools from multiple independent MCP servers and routes invocations to appropriate servers based on tool schema metadata, rather than requiring a centralized server that proxies all AWS service calls, enabling horizontal scaling and independent server deployment
Provides flexible multi-server orchestration without a single point of failure, because each server is independently deployable and the client can route around failed servers, whereas a monolithic proxy server would be a bottleneck and single point of failure
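The aggregation-and-routing idea can be sketched with a plain registry that merges tools from several servers and routes each call to the server that registered it (class and method names are hypothetical; real MCP clients speak JSON-RPC to each server rather than calling Python functions):

```python
class ToolAggregator:
    """Client-side registry that merges tools from several MCP servers
    and routes each invocation to the server that registered the tool."""

    def __init__(self):
        self._route = {}  # qualified tool name -> server-side callable

    def register(self, server_name, tools):
        for name, fn in tools.items():
            # Prefix with the server name to avoid cross-server collisions.
            self._route[f"{server_name}.{name}"] = fn

    def call(self, tool_name, **kwargs):
        if tool_name not in self._route:
            raise KeyError(f"no connected server exposes {tool_name}")
        return self._route[tool_name](**kwargs)

agg = ToolAggregator()
agg.register("s3", {"list_buckets": lambda: ["logs", "assets"]})
agg.register("dynamodb", {"list_tables": lambda: ["users"]})
print(agg.call("s3.list_buckets"))
```

Because routing lives in the client, a failed server only removes its own entries from the registry instead of taking down a central proxy.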
AWS documentation and API reference integration for context-aware LLM assistance
Medium confidence: Provides an MCP server that exposes AWS documentation and API reference materials as searchable context, enabling LLMs to retrieve relevant documentation snippets during tool invocation. The server indexes AWS documentation, performs semantic search over documentation content, and returns relevant sections that provide context for tool usage, error messages, and best practices.
Implements AWS documentation as a searchable MCP tool that provides context-aware documentation retrieval during LLM interactions, rather than requiring LLMs to search documentation independently, enabling seamless integration of AWS knowledge into tool invocation workflows
Provides context-aware documentation retrieval integrated into MCP workflows rather than requiring separate documentation lookups, because the server understands AWS service structure and can return relevant documentation based on tool invocation context
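A rough sketch of the retrieval step, with naive term overlap standing in for the semantic search the description mentions (the index structure and function name are illustrative only):

```python
def search_docs(index, query, top_k=2):
    """Rank documentation snippets by naive term overlap with the query.
    (The real server does semantic search; term overlap stands in here.)"""
    terms = set(query.lower().split())
    scored = [(len(terms & set(text.lower().split())), title)
              for title, text in index.items()]
    return [title for score, title in sorted(scored, reverse=True)[:top_k]
            if score]

index = {
    "s3:ListObjectsV2": "lists objects in an s3 bucket with optional prefix",
    "lambda:Invoke": "invokes a lambda function synchronously or async",
}
print(search_docs(index, "list objects in a bucket"))
```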
Database query and schema introspection with multi-database support
Medium confidence: Provides MCP servers for PostgreSQL, DynamoDB, Neptune, and other databases that expose query execution, schema introspection, and data manipulation as LLM-callable tools. Servers parse database schemas, generate tool definitions for common queries and mutations, and translate LLM tool invocations into SQL/query language commands, enabling AI assistants to explore database structure and execute queries without direct database client access.
Implements database-specific MCP servers (PostgreSQL, DynamoDB, Neptune) that leverage native database drivers and query languages rather than a generic SQL abstraction, enabling each server to expose database-specific features (PostgreSQL JSON operators, DynamoDB secondary indexes, Neptune graph traversal) as first-class tools
Provides database-native query capabilities and schema introspection rather than generic SQL translation, because each server understands the specific database's query language, indexing strategy, and performance characteristics
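The schema-introspection pattern looks roughly like this sketch, with stdlib `sqlite3` standing in for the PostgreSQL/DynamoDB/Neptune drivers the actual servers use:

```python
import sqlite3

def introspect_schema(conn):
    """Return {table: [column names]} so an LLM can see the schema
    before composing a query (sqlite3 stands in for PostgreSQL here)."""
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk).
    return {t: [col[1] for col in conn.execute(f"PRAGMA table_info({t})")]
            for t in tables}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
print(introspect_schema(conn))
```

Returning the schema as a tool result is what lets the model write a valid query on the next turn instead of guessing at column names.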
Container and Kubernetes orchestration tool exposure
Medium confidence: Exposes ECS, EKS, and Kubernetes operations as MCP tools, enabling LLMs to inspect cluster state, deploy containers, manage services, and troubleshoot deployments. Servers integrate with Kubernetes APIs and ECS APIs to translate LLM tool invocations into cluster operations, providing real-time visibility into container workloads and enabling AI-driven deployment automation.
Implements separate MCP servers for EKS (Kubernetes-native) and ECS (AWS-native) rather than a unified abstraction, allowing each server to leverage native APIs (Kubernetes client-go SDK for EKS, boto3 ECS API for ECS) and expose platform-specific operations like Kubernetes resource patching and ECS task placement strategies
Provides platform-native container orchestration capabilities rather than lowest-common-denominator abstractions, because EKS server uses Kubernetes API semantics and ECS server uses AWS-specific concepts like task definitions and service registries
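The "platform-native rather than lowest-common-denominator" point can be sketched by showing how one logical deploy request maps to two different native payloads (a trimmed Kubernetes Deployment spec vs. an ECS-style task definition; both payloads are heavily simplified):

```python
def deploy(platform, image, replicas):
    """Shape one logical 'deploy' into platform-native payloads:
    a Kubernetes Deployment spec for EKS, a task definition for ECS."""
    if platform == "eks":
        return {"kind": "Deployment",
                "spec": {"replicas": replicas,
                         "template": {"spec": {"containers": [{"image": image}]}}}}
    if platform == "ecs":
        return {"taskDefinition": {"containerDefinitions": [{"image": image}]},
                "desiredCount": replicas}
    raise ValueError(f"unknown platform: {platform}")

print(deploy("eks", "nginx:1.27", 3)["kind"])
```

Keeping the payloads separate is exactly why the project ships two servers instead of one abstraction: neither format has to lose platform-specific fields.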
AI and machine learning service integration with Bedrock, SageMaker, and Nova
Medium confidence: Exposes AWS AI/ML services (Bedrock for foundation models, SageMaker for training/inference, Nova Canvas for image generation) as MCP tools, enabling LLMs to invoke other AI models, retrieve knowledge base documents, generate images, and manage ML workflows. Servers translate LLM tool invocations into Bedrock API calls, SageMaker operations, and image generation requests, enabling multi-model AI orchestration and knowledge retrieval augmentation.
Implements specialized MCP servers for different AI/ML service categories (Bedrock for model invocation, Bedrock KB for knowledge retrieval, SageMaker for training/inference, Nova for image generation) rather than a monolithic AI service wrapper, allowing each server to expose service-specific capabilities like Bedrock's model routing and knowledge base filtering, SageMaker's training job management, and Nova's image editing parameters
Provides service-specific AI/ML capabilities rather than generic model invocation, because each server understands the specific service's API semantics, parameter requirements, and response formats (e.g., Bedrock's converse API vs SageMaker's invoke_endpoint)
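To make the Converse-vs-invoke_endpoint contrast concrete, here is a sketch that shapes one prompt into each service's request style (field names follow the Bedrock Converse and SageMaker runtime APIs as documented, but this only builds payloads; no AWS calls are made):

```python
def build_request(service, prompt):
    """Shape one prompt into each service's native request format:
    a Converse-style message list for Bedrock, a raw body for SageMaker."""
    if service == "bedrock":
        return {"messages": [{"role": "user",
                              "content": [{"text": prompt}]}]}
    if service == "sagemaker":
        return {"Body": prompt.encode(), "ContentType": "text/plain"}
    raise ValueError(f"unknown service: {service}")

req = build_request("bedrock", "Summarize my S3 costs")
print(req["messages"][0]["role"])
```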
Cost analysis and billing exploration with AWS Cost Explorer integration
Medium confidence: Exposes AWS Cost Explorer and billing APIs as MCP tools, enabling LLMs to analyze cloud spending patterns, identify cost anomalies, and generate cost optimization recommendations. Servers translate natural language cost analysis requests into Cost Explorer queries, aggregate billing data by service/dimension, and present findings in structured formats that LLMs can reason about and summarize.
Implements Cost Explorer integration as a specialized MCP server that translates natural language cost queries into Cost Explorer API calls with proper dimension filtering and time-series aggregation, rather than exposing raw billing APIs, enabling LLMs to perform sophisticated cost analysis without understanding Cost Explorer's query syntax
Provides cost analysis capabilities tailored to FinOps workflows rather than generic billing data access, because the server understands cost dimensions (service, linked account, region, tag), aggregation strategies, and presents results in formats optimized for LLM reasoning about cost patterns
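The dimension-aware aggregation step can be sketched over raw line items, the way a Cost Explorer group-by query would slice them (the data and function name are made up for illustration):

```python
from collections import defaultdict

def aggregate_costs(line_items, dimension):
    """Group raw billing line items by one cost dimension (service,
    region, tag, ...) the way a Cost Explorer group-by query would."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item[dimension]] += item["cost"]
    return dict(totals)

items = [{"service": "AmazonS3", "region": "us-east-1", "cost": 12.5},
         {"service": "AWSLambda", "region": "us-east-1", "cost": 3.0},
         {"service": "AmazonS3", "region": "eu-west-1", "cost": 4.5}]
print(aggregate_costs(items, "service"))
```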
Monitoring and observability tool exposure via CloudWatch and AWS X-Ray
Medium confidence: Exposes CloudWatch metrics, logs, alarms, and AWS X-Ray tracing as MCP tools, enabling LLMs to query monitoring data, analyze application performance, troubleshoot errors, and recommend alerting strategies. Servers translate LLM requests into CloudWatch Insights queries, metric retrievals, and X-Ray trace analysis, providing real-time visibility into application health and enabling AI-driven incident investigation.
Implements separate MCP servers for CloudWatch (metrics, logs, alarms) and X-Ray (distributed tracing) that leverage service-specific APIs and query languages (CloudWatch Insights for logs, CloudWatch Metrics API for time-series data, X-Ray GetTraceSummaries for trace analysis) rather than a unified monitoring abstraction
Provides observability capabilities tailored to AWS monitoring patterns rather than generic time-series database access, because each server understands CloudWatch's metric dimensions and log query syntax, and X-Ray's service map and trace filtering semantics
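A sketch of the period-and-statistic shape of a CloudWatch-style metric query, using plain epoch-second timestamps and an average statistic (everything here is a local stand-in; a real server would call the CloudWatch APIs):

```python
def bucket_metrics(datapoints, period):
    """Aggregate (timestamp, value) samples into fixed periods with an
    average statistic, mirroring a CloudWatch-style metric query."""
    buckets = {}
    for ts, value in datapoints:
        # Align each sample to the start of its period.
        buckets.setdefault(ts - ts % period, []).append(value)
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}

points = [(0, 10.0), (30, 20.0), (70, 40.0)]
print(bucket_metrics(points, 60))
```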
IAM and security policy analysis with automated permission recommendations
Medium confidence: Exposes AWS IAM APIs as MCP tools, enabling LLMs to analyze IAM policies, identify overly permissive roles, detect security misconfigurations, and recommend least-privilege policy changes. Servers parse IAM policy documents, compare actual permissions against required permissions, and generate policy recommendations in JSON format that LLMs can reason about and present to security teams.
Implements IAM policy analysis as an MCP server that parses IAM policy documents, performs permission comparison logic, and generates least-privilege recommendations rather than exposing raw IAM APIs, enabling LLMs to reason about security posture without understanding IAM policy syntax and permission semantics
Provides security-focused IAM analysis rather than generic policy management, because the server understands IAM policy structure, permission hierarchies, and can identify overly permissive patterns that generic policy tools might miss
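One concrete instance of "overly permissive patterns" is wildcard actions; a minimal check over an IAM policy document (which is standard IAM JSON, though this function and its findings format are illustrative) might look like:

```python
def find_wildcards(policy):
    """Flag Allow statements that grant wildcard actions, the most
    common overly-permissive pattern in IAM policies."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        # IAM allows a single string or a list of actions.
        actions = [actions] if isinstance(actions, str) else actions
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(actions)
    return findings

policy = {"Version": "2012-10-17",
          "Statement": [
              {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
              {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]}
print(find_wildcards(policy))
```

A real analyzer also has to reason about `NotAction`, resource ARNs, conditions, and permission boundaries, which is why this lives in a dedicated server rather than a regex.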
Messaging and event-driven workflow orchestration with SNS, SQS, and Step Functions
Medium confidence: Exposes SNS, SQS, and AWS Step Functions as MCP tools, enabling LLMs to publish messages, inspect queue contents, trigger workflows, and monitor execution status. Servers translate LLM tool invocations into messaging API calls and Step Functions state machine executions, enabling AI-driven event orchestration and asynchronous workflow management without direct service client access.
Implements separate MCP servers for SNS (publish-subscribe), SQS (queuing), and Step Functions (workflow orchestration) that leverage service-specific APIs and semantics rather than a unified messaging abstraction, allowing each server to expose service-specific features like SNS message filtering, SQS visibility timeout management, and Step Functions execution history
Provides event-driven workflow capabilities tailored to AWS messaging patterns rather than generic message queue access, because each server understands the specific service's semantics (SNS topics and subscriptions, SQS FIFO vs standard queues, Step Functions state machine definitions)
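To illustrate the state-machine-definition semantics the Step Functions server works with, here is a toy interpreter over a Step Functions-shaped definition (`StartAt`, `States`, `Next`, `End` follow the Amazon States Language; the `Handler` callables are local stand-ins for real service integrations):

```python
def run_state_machine(definition, state_input):
    """Walk a minimal Step Functions-style definition: each state names
    a handler; Next chains states until a state marked End."""
    state = definition["StartAt"]
    while True:
        spec = definition["States"][state]
        state_input = spec["Handler"](state_input)  # output feeds the next state
        if spec.get("End"):
            return state_input
        state = spec["Next"]

definition = {
    "StartAt": "Validate",
    "States": {
        "Validate": {"Handler": lambda x: x.strip(), "Next": "Publish"},
        "Publish": {"Handler": lambda x: f"published:{x}", "End": True},
    },
}
print(run_state_machine(definition, "  order-42  "))
```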
MCP server framework and standardized server template for custom AWS service integration
Medium confidence: Provides a Python-based MCP server framework and standardized design guidelines that enable developers to build custom MCP servers for AWS services not covered by the official servers. The framework handles MCP protocol mechanics (JSON-RPC 2.0 transport, tool schema generation, error handling), allowing developers to focus on AWS service integration logic. Includes testing patterns, CI/CD workflows, and documentation templates to ensure consistency across custom servers.
Provides a standardized Python MCP server framework with design guidelines, testing patterns, and CI/CD templates that enable rapid custom server development while maintaining consistency with the official AWS MCP servers, rather than requiring developers to implement MCP protocol mechanics from scratch
Accelerates custom MCP server development by providing battle-tested patterns and templates from 50+ official servers, rather than requiring developers to learn MCP protocol details and design patterns independently
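The "protocol mechanics the framework takes off your hands" amount to JSON-RPC 2.0 dispatch like the following sketch (envelope fields and the `-32601` method-not-found code come from the JSON-RPC 2.0 spec; the method table and tool payload are invented for illustration):

```python
import json

def handle_jsonrpc(raw, methods):
    """Dispatch one JSON-RPC 2.0 request to a registered method: the
    transport plumbing an MCP server framework handles for you."""
    req = json.loads(raw)
    if req["method"] not in methods:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    result = methods[req["method"]](**req.get("params", {}))
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

methods = {"tools/list": lambda: [{"name": "s3_list_buckets"}]}
resp = handle_jsonrpc('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}',
                      methods)
print(resp["result"])
```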
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with mcp, ranked by overlap. Discovered automatically through the match graph.
AWS Core
Core AWS MCP server providing prompt understanding and server management capabilities.
Azure MCP Server
Provides Model Context Protocol (MCP) integration and tooling for Azure in Visual Studio Code.
AWS KB Retrieval
Retrieval from AWS Knowledge Base using Bedrock Agent Runtime.
@modelcontextprotocol/server-basic-solid
Basic MCP App Server example using Solid
mcp-hello-world
A simple Hello World MCP server
Best For
- ✓AWS-native development teams building AI-augmented workflows
- ✓DevOps engineers integrating LLMs into infrastructure automation
- ✓Enterprise teams deploying AI assistants that need AWS service access
- ✓Infrastructure teams using Terraform or CloudFormation who want AI-assisted IaC authoring
- ✓CDK-based development teams integrating LLM-driven infrastructure generation
- ✓DevOps engineers building AI-augmented deployment pipelines
- ✓LLM application developers building multi-service AWS orchestration
- ✓Enterprise teams deploying AI assistants with access to 50+ AWS services
Known Limitations
- ⚠Requires AWS credentials and IAM permissions for each service — no built-in credential abstraction across services
- ⚠Each MCP server is a separate process — multi-service workflows require coordinating multiple server instances
- ⚠Response latency depends on AWS API call times; no built-in caching of frequently accessed resources like S3 object listings
- ⚠Tool schemas are static at server startup — dynamic AWS resource discovery requires custom server implementation
- ⚠Terraform server requires local Terraform binary and state file access — no remote state abstraction
- ⚠CloudFormation server is read-only for template inspection; tool invocations require separate AWS API calls to apply changes
Repository Details
Last commit: Apr 22, 2026