Cody by Sourcegraph
Agent
Agent that writes code and answers your questions
Capabilities (11 decomposed)
codebase-aware code generation with semantic indexing
Medium confidence: Generates code by leveraging Sourcegraph's semantic code index to understand repository structure, dependencies, and patterns. Uses embeddings-based retrieval to surface relevant code context from the entire codebase, then passes this context to an LLM (Claude, GPT-4, or local models) to generate contextually appropriate code that follows existing patterns and conventions.
Integrates Sourcegraph's semantic code graph (built on SCIP protocol) to retrieve contextually relevant code from the entire repository, not just open files or recent edits. Uses precise symbol resolution and cross-repository dependency tracking to ensure generated code aligns with actual project structure.
Outperforms Copilot and Cursor for large monorepos because it indexes semantic relationships between symbols across the entire codebase rather than relying on file proximity and recency heuristics.
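As a rough illustration of the retrieval step described above, here is a minimal sketch using a toy bag-of-words "embedding" as a stand-in for a learned embedding model; the function names and prompt layout are hypothetical, not Cody's actual internals:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_context(query: str, indexed_snippets: list[str], k: int = 2) -> list[str]:
    # Rank indexed snippets by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(indexed_snippets, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    # Prepend the retrieved repository context to the user's task.
    context = "\n---\n".join(retrieve_context(query, snippets))
    return f"Context from repository:\n{context}\n\nTask: {query}"
```

The point of the sketch is the shape of the pipeline (embed, rank, assemble prompt), not the scoring function itself.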
natural language code explanation and documentation generation
Medium confidence: Analyzes selected code blocks and generates human-readable explanations, docstrings, and documentation by passing code through an LLM with optional codebase context. Can generate explanations at multiple levels of detail (one-liner, paragraph, full documentation) and produce documentation in multiple formats (JSDoc, Python docstrings, Markdown).
Leverages Sourcegraph's symbol resolution to provide context-aware explanations that reference related code, dependencies, and usage patterns across the codebase, not just the isolated code block.
Generates more accurate explanations than generic LLM-based tools because it can resolve symbols and cross-reference actual usage patterns in the indexed codebase.
llm selection and provider abstraction
Medium confidence: Abstracts away LLM provider differences by supporting multiple LLM backends (OpenAI, Anthropic, local models via Ollama, etc.) through a unified interface. Allows users to switch between providers and models without changing code, and supports configuring different models for different tasks (code generation vs. explanation).
Provides a unified abstraction layer over multiple LLM providers and models, allowing users to swap providers without changing Cody configuration or code.
More flexible than tools locked to a single LLM provider because it supports multiple backends and allows switching based on cost, capability, or privacy requirements.
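A minimal sketch of what such a provider abstraction layer could look like, with stubbed-out providers; the class names and registry are illustrative assumptions, not Cody's actual internals:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    # Unified interface: callers depend on this, never on a concrete backend.
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class AnthropicProvider(LLMProvider):
    # Stub; a real implementation would call the Anthropic API here.
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class LocalOllamaProvider(LLMProvider):
    # Stub; a real implementation would call a local Ollama server.
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

PROVIDERS = {"anthropic": AnthropicProvider, "ollama": LocalOllamaProvider}

def get_provider(name: str) -> LLMProvider:
    # Swap backends by configuration key, without touching caller code.
    return PROVIDERS[name]()
```

Because callers hold only an `LLMProvider`, switching from a hosted model to a local one is a one-line configuration change.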
multi-file code refactoring with dependency tracking
Medium confidence: Performs refactoring operations (rename, extract, move, restructure) across multiple files while maintaining referential integrity. Uses Sourcegraph's semantic index to identify all usages of symbols, then generates coordinated changes across the codebase to preserve functionality. Supports both automated refactoring and LLM-assisted refactoring for complex transformations.
Uses Sourcegraph's SCIP-based semantic index to track symbol definitions and usages across the entire codebase, enabling precise multi-file refactoring that accounts for indirect dependencies, transitive imports, and cross-module references that text-based tools miss.
More reliable than IDE-native refactoring tools for large monorepos because it indexes the entire codebase rather than relying on single-workspace symbol tables, and can handle cross-repository dependencies.
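The index-usages-then-rewrite idea can be sketched with a regex-based whole-word rename over an in-memory file map; a real semantic index resolves symbols precisely instead of matching text, and all names here are hypothetical:

```python
import re

def build_symbol_index(files: dict[str, str]) -> dict[str, set[str]]:
    # Map each identifier to the set of files that reference it.
    index: dict[str, set[str]] = {}
    for path, source in files.items():
        for symbol in set(re.findall(r"\b\w+\b", source)):
            index.setdefault(symbol, set()).add(path)
    return index

def rename_symbol(files: dict[str, str], old: str, new: str) -> dict[str, str]:
    # Rewrite only the files that reference `old`, whole-word only,
    # so partial matches like `old_helper` stay untouched.
    index = build_symbol_index(files)
    pattern = re.compile(rf"\b{re.escape(old)}\b")
    return {path: (pattern.sub(new, src) if path in index.get(old, set()) else src)
            for path, src in files.items()}
```

The coordinated-change property comes from building the usage index first, then applying one consistent rewrite everywhere the symbol appears.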
context-aware code completion with repository patterns
Medium confidence: Provides inline code completion suggestions by analyzing the current file context, surrounding code patterns, and repository-wide conventions. Uses a combination of local syntax analysis and Sourcegraph's semantic index to suggest completions that match the project's style, imports, and architectural patterns. Supports multi-line completions and function signature inference.
Combines local syntax analysis with repository-wide semantic indexing to suggest completions that are not only syntactically correct but also follow the project's established patterns, import conventions, and architectural style.
More contextually accurate than Copilot for established codebases because it indexes actual usage patterns in the repository rather than relying on general training data.
intelligent code search with natural language queries
Medium confidence: Enables searching code using natural language descriptions rather than regex or keywords. Converts natural language queries to semantic embeddings and searches Sourcegraph's indexed codebase for matching code patterns, functions, and implementations. Returns ranked results with code snippets and context about where matches are used.
Uses Sourcegraph's semantic code graph and embedding-based search to understand code intent and patterns, not just keyword matching. Ranks results by relevance to the query's semantic meaning.
More powerful than grep or IDE find-in-files for discovering code patterns because it understands semantic meaning rather than relying on exact keyword matches.
bug detection and fix suggestion with codebase context
Medium confidence: Analyzes code for potential bugs by examining patterns, type mismatches, and common error conditions, then suggests fixes based on how similar issues are handled elsewhere in the codebase. Uses static analysis combined with LLM reasoning to identify issues and propose corrections that align with project conventions.
Combines static analysis with LLM reasoning and codebase context to suggest fixes that not only correct the bug but also align with the project's error handling patterns and conventions.
More contextually appropriate fixes than generic linters because it learns from how the codebase handles similar issues.
test generation with coverage-aware suggestions
Medium confidence: Generates unit tests for functions and modules by analyzing code structure, dependencies, and existing test patterns in the codebase. Uses LLM to create test cases covering normal paths, edge cases, and error conditions, then formats them according to the project's testing framework and style conventions.
Analyzes existing test patterns in the codebase to generate tests that match the project's testing style, assertion patterns, and mocking conventions, rather than generating generic tests.
Produces tests that integrate seamlessly with the project's test suite because it learns from existing tests rather than applying generic testing patterns.
cross-repository dependency analysis and impact assessment
Medium confidence: Analyzes how changes in one repository affect dependent repositories by querying Sourcegraph's dependency graph. Identifies all downstream consumers of a module, function, or API, then assesses the impact of proposed changes and suggests migration paths for breaking changes.
Leverages Sourcegraph's multi-repository dependency graph to provide organization-wide impact analysis, not just single-repository dependency tracking.
Provides organization-wide visibility into dependencies that single-repository tools cannot achieve, enabling safer large-scale refactoring.
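Impact assessment over a dependency graph reduces to a transitive traversal; a minimal sketch, assuming the graph maps each module to its direct consumers (the module names are invented for illustration):

```python
from collections import deque

def downstream_consumers(dep_graph: dict[str, list[str]], changed: str) -> set[str]:
    # dep_graph maps a module to the modules that depend on it directly.
    # BFS collects every transitive consumer of the changed module.
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        mod = queue.popleft()
        for consumer in dep_graph.get(mod, []):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen
```

The returned set is exactly the blast radius of a breaking change: everything in it needs review or migration.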
conversational code assistant with multi-turn context
Medium confidence: Provides an interactive chat interface where developers can ask questions about code, request changes, and iterate on solutions. Maintains conversation history and code context across multiple turns, allowing follow-up questions and refinements. Supports both code-specific questions and general development questions with codebase awareness.
Maintains codebase context across multi-turn conversations, allowing developers to reference code, ask follow-up questions, and iterate on solutions without re-establishing context each turn.
More natural and iterative than single-shot code generation tools because it supports conversation-style interaction with persistent codebase context.
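Persistent multi-turn context can be sketched as a session object that replays pinned code context plus the full history into each new prompt; this is a simplification under assumed names, not Cody's actual session format:

```python
class ChatSession:
    # Keeps conversation turns plus pinned code context for the session,
    # so each new prompt carries the full history to the model.
    def __init__(self, code_context: str):
        self.code_context = code_context
        self.turns: list[tuple[str, str]] = []

    def build_prompt(self, user_message: str) -> str:
        # Replay pinned context and prior turns, then append the new message.
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return f"Code context:\n{self.code_context}\n\n{history}\nuser: {user_message}"

    def record(self, user_message: str, assistant_reply: str) -> None:
        self.turns.append(("user", user_message))
        self.turns.append(("assistant", assistant_reply))
```

Because the session replays context on every turn, a follow-up like "rename it" still resolves against the code discussed earlier.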
ide-integrated code review with inline suggestions
Medium confidence: Provides code review feedback directly in the IDE by analyzing code changes (diffs) and suggesting improvements. Uses LLM to identify potential issues, style violations, and optimization opportunities, then displays suggestions inline with the ability to apply fixes directly. Integrates with git to analyze staged changes and pull requests.
Integrates directly into IDE workflows with inline suggestions that can be applied with one click, and uses codebase context to tailor suggestions to project conventions.
More actionable than standalone code review tools because suggestions appear inline during development and can be applied immediately without context switching.
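Scanning the added lines of a unified diff is the mechanical core of inline review; a toy sketch where a hard-coded pattern list stands in for LLM analysis (patterns and output format are invented):

```python
def review_diff(diff: str, banned: tuple[str, ...] = ("print(", "TODO")) -> list[str]:
    # Walk a unified diff and flag added lines that match review patterns.
    findings = []
    for lineno, line in enumerate(diff.splitlines(), 1):
        # "+" marks an added line; "+++" is the file header, not a change.
        if line.startswith("+") and not line.startswith("+++"):
            for pattern in banned:
                if pattern in line:
                    findings.append(f"line {lineno}: avoid {pattern!r}")
    return findings
```

An IDE integration would attach each finding to its diff position so the suggested fix can be applied in place.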
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Cody by Sourcegraph, ranked by overlap. Discovered automatically through the match graph.
Automata
Generate code based on your project context
InternLM
Shanghai AI Lab's multilingual foundation model.
Meta: Llama 3.3 70B Instruct
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in 70B (text in/text out). The Llama 3.3 instruction tuned text only model...
Meta: Llama 3.1 70B Instruct
Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high quality dialogue usecases. It has demonstrated strong...
IBM: Granite 4.0 Micro
Granite-4.0-H-Micro is a 3B parameter from the Granite 4 family of models. These models are the latest in a series of models released by IBM. They are fine-tuned for long...
Best For
- ✓ teams with large, complex codebases (100k+ LOC) where pattern consistency matters
- ✓ developers working in monorepos with multiple interconnected services
- ✓ organizations already using Sourcegraph for code intelligence
- ✓ teams with legacy codebases lacking documentation
- ✓ developers onboarding to unfamiliar projects
- ✓ technical writers generating API docs from source code
- ✓ organizations with LLM provider preferences or cost constraints
- ✓ teams needing to keep code analysis on-premises for security
Known Limitations
- ⚠ Requires a running, indexed Sourcegraph instance — cannot work offline or with unindexed repos
- ⚠ Indexing latency means newly committed code may not be immediately available for context (typically a 5-15 minute delay)
- ⚠ Context window limits mean very large codebases may not surface all relevant patterns
- ⚠ Semantic indexing quality depends on code quality and documentation — poorly structured repos yield weaker context
- ⚠ Explanations may be inaccurate if code is obfuscated, uses non-standard patterns, or has misleading variable names
- ⚠ Generated docstrings may not capture all edge cases or error conditions
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Agent that writes code and answers your questions
Alternatives to Cody by Sourcegraph
Search the Supabase docs for up-to-date guidance and troubleshoot errors quickly. Manage organizations, projects, databases, and Edge Functions, including migrations, SQL, logs, advisors, keys, and type generation, in one flow. Create and manage development branches to iterate safely, confirm costs