Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp
Gigacode is an experimental, just-for-fun project that makes OpenCode's TUI, web UI, and SDK work with Claude Code, Codex, and Amp. It's not a fork of OpenCode. Instead, it implements the OpenCode protocol and simply runs `opencode attach` against a server that converts API calls to the underlying agents.
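A minimal sketch of that translation idea, assuming a hypothetical route and payload shape (the real OpenCode protocol has many more endpoints) and Claude Code's `-p` print mode as the underlying agent command:

```ts
// Hypothetical shim: speak an OpenCode-style HTTP API, forward the
// prompt to an underlying agent CLI, and stream its output back.
// Route, port, and payload shape are illustrative, not the real protocol.
import { createServer } from "node:http";
import { spawn } from "node:child_process";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/session/prompt") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { text } = JSON.parse(body) as { text: string };
      const agent = spawn("claude", ["-p", text]); // or codex / amp
      res.writeHead(200, { "Content-Type": "text/plain" });
      agent.stdout.pipe(res); // stream the agent's output to the caller
    });
  } else {
    res.writeHead(404).end();
  }
});

server.listen(4096, () => console.log("gigacode-like shim on :4096"));
```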
Capabilities (8 decomposed)
multi-model code generation with unified ui abstraction
Medium confidence: Gigacode provides a unified web interface (OpenCode's UI) that abstracts away differences between multiple LLM code generation backends (Claude, Codex, Amp). The framework implements a provider-agnostic adapter pattern that translates UI interactions into model-specific API calls, handling differences in prompt formatting, token limits, and response parsing across heterogeneous model APIs without requiring users to switch interfaces.
Implements a provider adapter pattern that decouples OpenCode's UI from specific LLM backends, allowing seamless switching between Claude, Codex, and Amp without modifying the frontend or requiring users to learn different interfaces for each model.
Unlike single-model IDEs (VS Code + Copilot) or separate tools per model, Gigacode enables side-by-side model comparison and backend swapping within one interface, reducing context switching overhead for multi-model evaluation workflows.
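A minimal sketch of what such an adapter boundary might look like; the names (`CodeGenBackend`, `ClaudeBackend`) are hypothetical and Gigacode's actual internals may differ:

```ts
// Provider-agnostic adapter: every backend implements one interface,
// so the UI layer never touches model-specific details.
interface CodeGenBackend {
  readonly name: "claude" | "codex" | "amp";
  generate(prompt: string, opts?: { maxTokens?: number }): Promise<string>;
}

class ClaudeBackend implements CodeGenBackend {
  readonly name = "claude" as const;
  async generate(prompt: string): Promise<string> {
    // A real implementation would call Anthropic's Messages API here.
    return `// claude completion for: ${prompt}`;
  }
}

// The UI depends only on the interface; swapping backends is one line.
async function complete(backend: CodeGenBackend, prompt: string) {
  return backend.generate(prompt);
}
```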
opencode ui integration with custom llm backends
Medium confidence: Gigacode extends OpenCode's web-based code editor with pluggable LLM backend adapters that implement a standardized interface for code generation requests. The framework uses a request/response translation layer that converts OpenCode's internal API calls into model-specific formats (e.g., Anthropic's Messages API, OpenAI's Completions API) and normalizes responses back into OpenCode's expected schema, enabling any compatible LLM to power the editor.
Decouples OpenCode's frontend from backend LLM selection through a standardized adapter interface, allowing developers to plug in any LLM (Claude, Codex, Amp, or custom models) without forking or modifying the core editor UI.
More flexible than OpenCode's default single-backend setup and more UI-consistent than manually switching between separate tools for different models; trades some model-specific feature exposure for unified interface simplicity.
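A sketch of that translation layer under stated assumptions: the internal request shape is invented, the wire formats loosely mirror Anthropic's Messages API and OpenAI's Responses API, and the model names are placeholders:

```ts
// One internal request shape, translated to two backend wire formats.
type InternalRequest = { system: string; user: string; maxTokens: number };

function toAnthropic(r: InternalRequest) {
  return {
    model: "claude-placeholder",
    system: r.system,
    max_tokens: r.maxTokens,
    messages: [{ role: "user", content: r.user }],
  };
}

function toOpenAI(r: InternalRequest) {
  return {
    model: "codex-placeholder",
    max_output_tokens: r.maxTokens,
    input: [
      { role: "system", content: r.system },
      { role: "user", content: r.user },
    ],
  };
}
```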
code context aggregation and prompt construction
Medium confidence: Gigacode automatically collects relevant code context (open files, selected code, project structure) from the OpenCode editor and constructs optimized prompts for each LLM backend. The framework implements context windowing logic that respects each model's token limits, prioritizes relevant files based on import relationships or edit history, and formats context according to each backend's preferred prompt structure (e.g., system messages for Claude vs. instruction-following for Codex).
Implements model-aware context windowing that respects each backend's token limits and prompt format preferences, automatically selecting and formatting relevant codebase context rather than requiring manual context specification.
More sophisticated than naive context inclusion (which often exceeds token limits) and more flexible than single-model solutions that optimize for one backend's preferences; requires more complex prompt engineering logic but enables better multi-model compatibility.
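A sketch of that windowing logic, assuming a crude chars/4 token estimate and a precomputed relevance score per file (both simplifications):

```ts
// Rank candidate files, then greedily pack them under the active
// backend's token budget.
type ContextFile = { path: string; content: string; score: number };

function packContext(files: ContextFile[], tokenBudget: number): string {
  const estimateTokens = (s: string) => Math.ceil(s.length / 4);
  const parts: string[] = [];
  let used = 0;
  // Highest-relevance files first (e.g. import proximity, edit recency).
  for (const f of [...files].sort((a, b) => b.score - a.score)) {
    const cost = estimateTokens(f.content);
    if (used + cost > tokenBudget) continue; // skip files that don't fit
    used += cost;
    parts.push(`// File: ${f.path}\n${f.content}`);
  }
  return parts.join("\n\n");
}
```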
real-time code generation streaming with multi-backend support
Medium confidence: Gigacode implements streaming response handling for code generation across multiple LLM backends, translating each model's streaming protocol (Anthropic's Server-Sent Events, OpenAI's streaming format, etc.) into a unified stream that updates the OpenCode editor in real-time. The framework buffers partial tokens, handles mid-stream errors, and provides fallback behavior if a backend's stream is interrupted, enabling responsive code completion without waiting for full response generation.
Abstracts away backend-specific streaming protocols (Anthropic SSE vs. OpenAI streaming format) into a unified streaming interface, allowing OpenCode to display incremental code generation regardless of which backend is active.
More responsive than batch-mode code generation and more robust than naive streaming implementations that don't handle backend-specific protocol differences; adds latency overhead for protocol translation but improves perceived performance.
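A sketch of that unification, assuming each backend supplies its own event parser and the editor consumes one normalized async stream (the event shapes are stand-ins, not the real SSE payloads):

```ts
// Backend-specific parsers feed one async generator the editor consumes.
type UnifiedChunk = { text: string };

async function* unifyStream(
  source: AsyncIterable<unknown>,
  parse: (event: unknown) => string | null, // backend-specific extractor
): AsyncGenerator<UnifiedChunk> {
  try {
    for await (const event of source) {
      const text = parse(event);
      if (text !== null) yield { text }; // normalized token delta
    }
  } catch (err) {
    // Mid-stream failure: surface a sentinel the UI can render instead
    // of tearing down the whole session.
    yield { text: `\n[stream interrupted: ${String(err)}]` };
  }
}
```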
model-specific configuration and capability mapping
Medium confidence: Gigacode maintains a configuration layer that maps model-specific capabilities (e.g., Claude supports function calling, Codex supports fine-tuning, Amp supports custom instructions) and constraints (token limits, rate limits, supported languages) to a unified interface. The framework uses capability detection to enable or disable UI features based on the active backend and automatically adjusts request parameters (temperature, max_tokens, top_p) according to each model's supported ranges and defaults.
Maintains explicit capability mappings for each LLM backend, enabling the UI to adapt features and constraints dynamically based on the active model rather than assuming all backends support the same feature set.
More flexible than single-model tools and more maintainable than hardcoded backend-specific logic scattered throughout the codebase; requires upfront configuration effort but enables cleaner separation of concerns.
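A sketch of such a capability map; the limits and flags below are placeholders, not the real models' numbers:

```ts
// Explicit per-backend capability map; the UI toggles features off it.
const capabilities = {
  claude: { streaming: true, toolUse: true, maxOutputTokens: 64_000 },
  codex: { streaming: true, toolUse: true, maxOutputTokens: 32_000 },
  amp: { streaming: true, toolUse: true, maxOutputTokens: 16_000 },
} as const;

type BackendName = keyof typeof capabilities;

// Clamp request parameters to the active model's supported range.
function clampMaxTokens(backend: BackendName, requested: number): number {
  return Math.min(requested, capabilities[backend].maxOutputTokens);
}
```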
backend model switching with state preservation
Medium confidence: Gigacode allows users to switch between LLM backends (Claude, Codex, Amp) mid-session while preserving the current code context, editor state, and generation history. The framework implements a state abstraction layer that decouples the editor's internal state from backend-specific session management, enabling seamless backend switching without losing work or requiring re-authentication.
Implements a state abstraction layer that decouples editor state from backend-specific session management, enabling mid-session backend switching without losing context or requiring manual state re-entry.
More convenient than manually copying context between separate tools for each model and more robust than naive switching that loses state; adds complexity to state management but significantly improves multi-model comparison workflows.
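A sketch of that separation, assuming hypothetical `EditorState` and `Session` shapes:

```ts
// Editor state lives outside any backend session, so switching
// backends re-binds a handle rather than re-entering context.
interface EditorState {
  openFiles: string[];
  history: { role: "user" | "assistant"; text: string }[];
}

interface BackendHandle {
  name: string;
}

class Session {
  constructor(
    private state: EditorState,
    private backend: BackendHandle,
  ) {}

  switchBackend(next: BackendHandle): void {
    this.backend = next; // context and history carry over untouched
  }

  ask(text: string): void {
    this.state.history.push({ role: "user", text });
    // ...dispatch to this.backend, appending the reply to history
  }
}
```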
cost and latency tracking across multiple backends
Medium confidence: Gigacode tracks API usage (tokens, requests, costs) and latency metrics for each LLM backend, aggregating this data to help users understand the cost-performance tradeoff of different models. The framework logs request/response metadata (tokens used, time to first token, total generation time) and provides dashboards or reports showing which models are most cost-effective or fastest for different code generation tasks.
Aggregates cost and latency metrics across multiple LLM backends in a unified dashboard, enabling data-driven backend selection based on actual usage patterns rather than theoretical pricing or performance claims.
More comprehensive than per-model cost tracking and more actionable than generic performance metrics; requires infrastructure investment but provides clear ROI for teams with significant API spending.
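A sketch of that accounting, with placeholder prices rather than real rates:

```ts
// Per-call usage log, aggregated into a per-backend cost/latency report.
type Usage = { outputTokens: number; latencyMs: number };

const pricePer1kOutput: Record<string, number> = {
  claude: 0.015, // placeholder USD figures
  codex: 0.012,
  amp: 0.01,
};

const log: { backend: string; usage: Usage }[] = [];

function record(backend: string, usage: Usage): void {
  log.push({ backend, usage });
}

function report(): void {
  const agg = new Map<string, { cost: number; calls: number; ms: number }>();
  for (const { backend, usage } of log) {
    const row = agg.get(backend) ?? { cost: 0, calls: 0, ms: 0 };
    row.cost += (usage.outputTokens / 1000) * (pricePer1kOutput[backend] ?? 0);
    row.calls += 1;
    row.ms += usage.latencyMs;
    agg.set(backend, row);
  }
  for (const [name, r] of agg) {
    console.log(
      `${name}: $${r.cost.toFixed(4)} over ${r.calls} calls, ` +
        `avg ${Math.round(r.ms / r.calls)}ms`,
    );
  }
}
```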
fallback and retry logic with backend health monitoring
Medium confidence: Gigacode implements automatic fallback and retry mechanisms that detect when an LLM backend is unavailable, rate-limited, or experiencing errors, and automatically retry the request with an alternative backend or with exponential backoff. The framework monitors backend health (API response times, error rates, availability) and can proactively switch to a healthy backend before the user's request fails, improving reliability without requiring manual intervention.
Implements proactive health monitoring and automatic backend fallback, switching to alternative models before user requests fail rather than only retrying after errors occur.
More resilient than single-backend solutions and more sophisticated than naive retry logic; adds operational complexity but significantly improves availability for production code generation workflows.
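A sketch of that pattern, combining exponential backoff with an ordered fallback list (all names hypothetical):

```ts
// Retry the primary with exponential backoff, then fail over to the
// next backend in preference order.
type Backend = { name: string; generate(prompt: string): Promise<string> };

async function generateWithFallback(
  backends: Backend[],
  prompt: string,
  retriesPerBackend = 2,
): Promise<string> {
  for (const backend of backends) {
    for (let attempt = 0; attempt <= retriesPerBackend; attempt++) {
      try {
        return await backend.generate(prompt);
      } catch {
        // Backoff: 500ms, 1s, 2s, ... before the next attempt.
        await new Promise((r) => setTimeout(r, 500 * 2 ** attempt));
      }
    }
    // This backend is exhausted; fall through to the next one.
  }
  throw new Error("all backends failed");
}
```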
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp, ranked by overlap. Discovered automatically through the match graph.
Qwen3-8B
text-generation model by Qwen. 10,018,533 downloads.
OpenAI: o3
o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following....
InternLM
Shanghai AI Lab's multilingual foundation model.
AI/ML API
Unlock AI capabilities easily with 100+ models, serverless, cost-effective, OpenAI...
First Claude Code client for Ollama local models
Just to clarify the background a bit. This project wasn’t planned as a big standalone release at first. On January 16, Ollama added support for an Anthropic-compatible API, and I was curious how far this could be pushed in practice. I decided to try plugging local Ollama models directly into a Claude Code client…
LiquidAI: LFM2-24B-A2B
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per...
Best For
- ✓ developers evaluating multiple LLM code generation models
- ✓ teams building code generation products with multi-model support
- ✓ researchers benchmarking code generation across different backends
- ✓ developers wanting to use OpenCode with non-standard LLM backends
- ✓ organizations running self-hosted or fine-tuned code models
- ✓ teams building custom code generation pipelines with specialized models
- ✓ developers working in large codebases who need context-aware code generation
- ✓ teams using multiple LLM models and needing consistent context handling
Known Limitations
- ⚠ Requires API keys for each backend model (Claude, Codex, Amp); no single unified authentication
- ⚠ Latency varies significantly across models; no built-in response time normalization or timeout handling
- ⚠ Model-specific features (e.g., streaming, function calling) may not be uniformly exposed across all backends
- ⚠ Adapter implementation required for each new backend; no automatic compatibility
- ⚠ OpenCode UI limitations (e.g., max context window, UI responsiveness) still apply regardless of backend
- ⚠ No built-in caching or request deduplication across multiple backend calls
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Show HN: Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp
Categories
Framework, Free
Alternatives to Gigacode – Use OpenCode's UI with Claude Code/Codex/Amp
Search the Supabase docs for up-to-date guidance and troubleshoot errors quickly. Manage organizations, projects, databases, and Edge Functions, including migrations, SQL, logs, advisors, keys, and type generation, in one flow. Create and manage development branches to iterate safely, confirm costs…