SWE-bench
Benchmark · Free
AI coding agent benchmark — real GitHub issues, end-to-end evaluation, the standard for code agents.
Capabilities (9 decomposed)
real-world GitHub issue evaluation dataset construction
Medium confidence
Constructs a curated benchmark of 2,294 task instances by extracting real, resolved GitHub issues from 12 popular Python repositories (Django, Flask, Matplotlib, etc.), preserving full repository context, issue descriptions, and ground-truth patches. Uses automated filtering to ensure issues are solvable and have deterministic test outcomes, creating a reproducible evaluation corpus that mirrors production software engineering workflows rather than synthetic coding tasks.
Uses real, resolved GitHub issues with full repository context and deterministic test outcomes, rather than synthetic coding tasks or isolated code snippets. Preserves the complete software engineering workflow (issue understanding → codebase navigation → patch writing → test validation) that agents must execute end-to-end.
More representative of production software engineering than HumanEval or MBPP (which use isolated functions), and more reproducible than ad-hoc issue evaluation because it provides standardized, versioned task instances with ground-truth solutions.
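For orientation, a minimal sketch of loading the published task instances, assuming the Hugging Face distribution under princeton-nlp/SWE-bench and its documented field names:

```python
# Sketch: load SWE-bench task instances via the `datasets` library.
# Dataset name and fields follow the public release; verify against
# the current distribution before relying on them.
from datasets import load_dataset

ds = load_dataset("princeton-nlp/SWE-bench", split="test")

instance = ds[0]
print(instance["instance_id"])        # e.g. "django__django-11099"
print(instance["repo"])               # source repository
print(instance["base_commit"])        # repo state the agent starts from
print(instance["problem_statement"])  # the GitHub issue text
```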
end-to-end agent execution harness with test validation
Medium confidence
Provides a standardized execution environment that runs AI agents against benchmark tasks, capturing their interactions with the codebase (file reads, edits, command execution), executing generated patches against the repository's test suite, and measuring success via test pass rates. The harness isolates each task execution in a clean repository state, manages dependency installation, and collects detailed execution traces for post-hoc analysis and debugging.
Provides a complete execution sandbox that captures agent interactions at the file system and command execution level, enabling detailed analysis of agent behavior beyond just pass/fail outcomes. Includes automatic repository state reset between tasks and dependency management to ensure reproducible, isolated execution.
More comprehensive than simple test runners because it captures the full agent interaction trace (what files were read, what edits were attempted, what commands were run), enabling detailed failure analysis and agent behavior understanding beyond just test outcomes.
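A minimal sketch of that harness control flow; the function names and the pytest invocation are illustrative assumptions, not the actual SWE-bench API:

```python
# Illustrative harness loop: reset the repo, run the agent, apply its
# patch, execute the tests, record the outcome.
import subprocess

def evaluate(instance, agent, workdir):
    # Reset to the pinned commit so every run starts from a clean state.
    subprocess.run(["git", "checkout", "-f", instance["base_commit"]],
                   cwd=workdir, check=True)
    patch = agent.solve(instance, workdir)  # agent returns a unified diff
    # Apply the generated patch from stdin.
    subprocess.run(["git", "apply", "-"], input=patch.encode(),
                   cwd=workdir, check=True)
    # Run the repo's tests; treat exit code 0 as task success.
    result = subprocess.run(["python", "-m", "pytest"],
                            cwd=workdir, capture_output=True)
    return {"instance_id": instance["instance_id"],
            "resolved": result.returncode == 0}
```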
multi-repository codebase indexing and navigation simulation
Medium confidence
Indexes 12 Python repositories with their full source code, test suites, and dependency metadata, enabling agents to navigate, search, and understand codebases as they would in a real development environment. The indexing preserves repository structure, file relationships, and test discovery information, allowing agents to locate relevant code sections, understand module dependencies, and identify which tests exercise specific functionality.
Provides a standardized, pre-indexed view of 12 real Python repositories with full source code and test metadata, allowing agents to navigate and understand codebases as they would in production. The indexing preserves repository structure and relationships without imposing a specific code understanding format, allowing agents to use their own analysis approaches.
More realistic than synthetic code snippets because it preserves full repository context and structure, but more manageable than requiring agents to index arbitrary repositories because the 12 repositories are pre-selected and standardized.
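As an illustration of the kind of navigation this enables, a small sketch of searching a repository checkout for a symbol (the helper name and example query are hypothetical):

```python
# Hypothetical codebase search an agent might run against a checkout
# to locate candidate edit sites before patching.
from pathlib import Path

def search_repo(root: str, needle: str):
    """Yield (path, line_no, line) for each source line containing `needle`."""
    for path in Path(root).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for i, line in enumerate(text.splitlines(), 1):
            if needle in line:
                yield path, i, line.strip()

for hit in search_repo("django", "get_FOO_display"):
    print("%s:%d: %s" % hit)
```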
issue-to-patch ground-truth mapping with test validation
Medium confidence
Maintains a curated mapping of 2,294 GitHub issues to their ground-truth patches, where each patch has been validated to pass the repository's test suite. The mapping includes issue metadata (title, description, labels), the exact patch that resolves the issue (in unified diff format), and test execution results confirming the patch's correctness. This enables evaluation of agent-generated patches against a known-good solution.
Provides validated ground-truth patches for each issue, ensuring that the benchmark's success criterion (test pass rate) is achievable and that patches have been verified to work. This prevents evaluation against impossible or incorrect ground-truth solutions.
More reliable than inferring correctness from test pass rates alone because it includes human-verified patches that demonstrate a known-good solution path, enabling deeper analysis of agent solution quality.
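A sketch of the shape of one issue-to-patch record; the field names follow the public dataset release, but treat exact names and values as assumptions to verify:

```python
# One issue-to-patch record (values elided).
instance = {
    "instance_id": "astropy__astropy-12907",       # repo + resolving PR
    "repo": "astropy/astropy",
    "base_commit": "<sha before the fix>",
    "problem_statement": "<GitHub issue text>",
    "patch": "<ground-truth fix, unified diff>",
    "test_patch": "<tests added by the fixing PR, unified diff>",
    "FAIL_TO_PASS": ["<tests the patch must make pass>"],
    "PASS_TO_PASS": ["<tests that must keep passing>"],
}
```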
standardized evaluation metrics and reporting
Medium confidence
Computes standardized metrics for evaluating agent performance across the benchmark, including task-level success (test pass rate), repository-level aggregation, and comparative analysis across agent implementations. Metrics include pass@1 (single attempt success), pass@k (success within k attempts), and detailed breakdowns by repository, issue type, and difficulty. Generates structured reports enabling comparison between different agents and tracking performance trends.
Provides standardized, reproducible metrics for comparing agent performance across a large, diverse benchmark. Enables fair comparison by ensuring all agents are evaluated on identical tasks with consistent success criteria.
More rigorous than ad-hoc evaluation because it enforces consistent metrics and reporting formats, making agent comparisons reproducible and enabling tracking of performance trends over time.
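The pass@k numbers are typically computed with the standard unbiased estimator from Chen et al. (2021); a small sketch:

```python
# Unbiased pass@k: with n samples per task of which c are correct,
# pass@k = 1 - C(n-c, k) / C(n, k), averaged over tasks.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:          # fewer than k incorrect samples: guaranteed hit
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 attempts, 2 resolved: pass@1 = 0.20, pass@5 ≈ 0.78
print(pass_at_k(10, 2, 1), pass_at_k(10, 2, 5))
```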
repository-specific test suite execution and result parsing
Medium confidence
Executes each repository's native test suite (pytest, unittest, etc.) against agent-generated patches, parses test output to extract pass/fail results, and determines overall task success based on test outcomes. Handles repository-specific test configurations, environment setup, and dependency installation, normalizing test execution across repositories with different testing frameworks and configurations.
Handles test execution across 12 different Python repositories with varying test frameworks and configurations, normalizing the execution and result parsing to provide consistent success metrics. Manages repository-specific setup and teardown to ensure clean, reproducible test runs.
More comprehensive than simple test runners because it handles repository-specific configurations and dependencies, ensuring tests execute correctly across diverse codebases rather than assuming a standard setup.
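A minimal sketch of test execution and summary parsing, assuming a pytest-style report; a real parser must handle each repository's framework and output format:

```python
# Run a repo's tests and count per-test outcomes from the pytest
# short summary (-rA prints one "PASSED"/"FAILED" line per test).
import re
import subprocess

def run_tests(workdir, test_cmd=("python", "-m", "pytest", "-rA")):
    proc = subprocess.run(test_cmd, cwd=workdir,
                          capture_output=True, text=True)
    passed = len(re.findall(r"^PASSED ", proc.stdout, re.M))
    failed = len(re.findall(r"^FAILED ", proc.stdout, re.M))
    return {"passed": passed, "failed": failed,
            "ok": proc.returncode == 0}
```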
agent interface specification and integration protocol
Medium confidence
Defines a standardized interface that agents must implement to participate in the benchmark, including methods for file I/O (read, write, list), command execution, and task initialization. The interface abstracts away implementation details, allowing agents built with different frameworks or languages to be evaluated on identical tasks. Includes reference implementations and documentation for integrating new agents.
Defines a minimal, language-agnostic interface for agent interaction (file I/O, command execution) that allows agents built with different frameworks to be evaluated on identical tasks. The interface is intentionally simple to minimize integration overhead while capturing the essential agent capabilities.
More flexible than framework-specific evaluation because it allows agents built with different tools (LangChain, AutoGPT, etc.) to be compared on equal footing, but more constrained than unrestricted agent execution because it enforces a standard interaction model.
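A hypothetical rendering of such an interface as a Python abstract base class; this is purely illustrative of the interaction model described above, not the benchmark's actual integration contract:

```python
# Hypothetical minimal agent interface: file I/O, command execution,
# and a solve() entry point that returns a unified diff.
from abc import ABC, abstractmethod

class AgentInterface(ABC):
    @abstractmethod
    def read_file(self, path: str) -> str: ...

    @abstractmethod
    def write_file(self, path: str, content: str) -> None: ...

    @abstractmethod
    def run_command(self, cmd: list[str]) -> tuple[int, str]:
        """Execute a shell command; return (exit_code, output)."""

    @abstractmethod
    def solve(self, issue_text: str) -> str:
        """Return a unified diff that resolves the issue."""
```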
task instance versioning and reproducibility management
Medium confidence
Maintains versioned snapshots of each task instance, including the exact repository state (commit hash), issue description, test command, and expected test results. Enables reproducible evaluation by ensuring agents always operate on identical task versions, preventing drift from repository updates or issue modifications. Includes tooling for creating new task versions and migrating between versions.
Maintains versioned snapshots of task instances with exact repository states (commit hashes), ensuring reproducible evaluation across time and preventing drift from repository updates. Enables tracking of benchmark evolution and comparison across benchmark versions.
More rigorous than ad-hoc task management because it enforces versioning and reproducibility, enabling long-term tracking of agent performance and preventing evaluation drift from repository changes.
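A sketch of what one versioned snapshot might contain; the layout and field names are illustrative assumptions, though some mirror fields in the public dataset:

```python
# Illustrative versioned snapshot: everything needed to re-create
# the evaluation deterministically.
snapshot = {
    "instance_id": "sympy__sympy-20590",
    "version": "1.7",                      # repo version used for env setup
    "base_commit": "<exact sha the agent starts from>",
    "environment_setup_commit": "<sha used to build the test env>",
    "test_cmd": "python -m pytest <test files touched by the fix>",
    "FAIL_TO_PASS": ["<tests that define success>"],
    "PASS_TO_PASS": ["<regression tests>"],
}
```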
issue difficulty and complexity classification
Medium confidence
Classifies each of the 2,294 task instances by difficulty and complexity metrics, including number of files modified, lines of code changed, test coverage, and issue description length. Enables stratified analysis of agent performance across difficulty levels and identification of which types of issues are most challenging. Classification is computed automatically from patch metadata and repository structure.
Automatically classifies task instances by difficulty using heuristic metrics (files modified, lines changed, test coverage), enabling stratified analysis of agent performance across difficulty levels without manual annotation.
More scalable than manual difficulty annotation because it uses automated metrics, but less accurate than human-labeled difficulty because it relies on syntactic rather than semantic complexity measures.
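A sketch of such a heuristic classifier over a unified diff; the thresholds are illustrative assumptions:

```python
# Heuristic difficulty from patch metadata: count touched files and
# changed lines in the ground-truth unified diff.
def classify(patch_text: str) -> str:
    files = patch_text.count("diff --git")
    changed = sum(
        1 for line in patch_text.splitlines()
        if line.startswith(("+", "-"))
        and not line.startswith(("+++", "---"))  # skip file headers
    )
    if files == 1 and changed <= 15:
        return "easy"
    if files <= 3 and changed <= 100:
        return "medium"
    return "hard"
```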
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with SWE-bench, ranked by overlap. Discovered automatically through the match graph.
SWE-bench Verified
Human-verified benchmark for AI coding agents.
"An open source Devin getting 12.29% on 100% of the SWE Bench test set vs Devin's 13.84% on 25% of the test set!"
SWE-agent works by interacting with a specialized terminal, which allows it to:
varies
based on the model used by the agent.
Demo
[Discord](https://discord.com/invite/AVEFbBn2rH)
500-AI-Agents-Projects
The 500 AI Agents Projects is a curated collection of AI agent use cases across various industries. It showcases practical applications and provides links to open-source projects for implementation, illustrating how AI agents are transforming sectors such as healthcare, finance, education, retail, and more.
SWE-agent
Princeton's GitHub issue solver — navigates code, edits files, runs tests, submits patches.
Best For
- ✓ AI research teams evaluating coding agent capabilities
- ✓ LLM providers benchmarking code generation models
- ✓ Teams building autonomous software engineering agents
- ✓ Researchers evaluating coding agent architectures
- ✓ Teams implementing agents that need a reference evaluation framework
- ✓ Organizations benchmarking in-house vs. commercial coding agents
- ✓ Agents that need to understand codebase structure before making edits
- ✓ Evaluating agent ability to navigate unfamiliar codebases
Known Limitations
- ⚠ Limited to 12 Python repositories — may not represent the diversity of languages, frameworks, or domain-specific codebases
- ⚠ Issues are historical (collected at a specific point in time) — may not reflect current repository state or modern dependency versions
- ⚠ Requires full repository clones and test suite execution — computationally expensive for large-scale evaluation runs
- ⚠ Ground-truth patches are human-written solutions — may not capture all valid solution approaches
- ⚠ Requires agents to implement a specific interface (file I/O, command execution) — not all agent frameworks natively support this
- ⚠ Test-based success metric may credit patches that pass the tests without matching the ground-truth patch, even when they don't fully resolve the underlying issue
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Benchmark for evaluating AI coding agents on real GitHub issues. Contains 2,294 task instances from 12 popular Python repos. Tests end-to-end: understanding issue, navigating codebase, writing patch, passing tests. The standard for coding agent evaluation.
Alternatives to SWE-bench
- Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
- Amplication brings order to the chaos of large-scale software development by creating Golden Paths for developers - streamlined workflows that drive consistency, enable high-quality code practices, simplify onboarding, and accelerate standardized delivery across teams.