Prompt-Engineering-Guide
AgentFree🐙 Guides, papers, lessons, notebooks and resources for prompt engineering, context engineering, RAG, and AI Agents.
Capabilities18 decomposed
multi-language prompt engineering documentation with mdx-based content delivery
Medium confidenceServes comprehensive prompt engineering educational content across 11 languages using Next.js 13 with Nextra 2.13 static site generation. The platform uses MDX files as the source of truth, enabling interactive code examples, embedded notebooks, and dynamic content rendering while maintaining a single source for all language variants through i18n middleware. Content is organized hierarchically across 745+ pages covering foundational to advanced prompting techniques.
Uses Nextra 2.13 framework built on Next.js 13 with MDX-first architecture, enabling single-source-of-truth content that compiles to static HTML while supporting embedded interactive React components and automatic i18n routing through middleware.js without requiring separate content databases or translation management systems
More maintainable than wiki-based platforms (GitHub Wiki, Notion) because content lives in version-controlled MDX files; faster than dynamic CMS platforms because it's pre-built static HTML; more interactive than PDF guides because it supports embedded notebooks and React components
chain-of-thought (cot) prompting technique documentation and examples
Medium confidenceProvides structured educational content explaining Chain-of-Thought prompting methodology, which breaks down complex reasoning tasks into intermediate steps. The guide documents the theoretical foundation, implementation patterns, and practical examples showing how CoT improves LLM accuracy on multi-step reasoning problems. Content includes worked examples demonstrating step-by-step reasoning decomposition.
Provides comprehensive CoT documentation integrated within a larger prompting guide ecosystem, allowing readers to understand CoT in context of other techniques (zero-shot, few-shot, ReAct, ToT) and see how CoT serves as a foundation for more advanced reasoning patterns
More thorough than scattered blog posts because it covers CoT variants, failure modes, and integration with other techniques; more accessible than academic papers because it includes worked examples and practical implementation guidance
adversarial prompting and defense techniques documentation
Medium confidenceDocuments adversarial prompting attacks (prompt injection, jailbreaking, manipulation) and defense strategies to make LLM systems robust. The guide explains attack vectors like instruction override, context confusion, and output manipulation, along with defensive techniques like input validation, output filtering, and prompt hardening.
Integrates adversarial prompting within a broader safety and best practices section, showing how prompt-level attacks relate to system-level security and providing both attack examples and defensive strategies
More practical than academic adversarial ML papers because it focuses on prompt-specific attacks; more comprehensive than security checklists because it explains attack mechanisms and defense rationales
llm model comparison and selection guidance across providers and architectures
Medium confidenceProvides structured documentation comparing LLM capabilities across providers (OpenAI, Anthropic, open-source) and architectures (GPT-4, Claude, Llama, etc.), covering performance characteristics, cost, context window, and specialized capabilities. The guide helps developers select appropriate models for specific use cases based on task requirements and constraints.
Provides vendor-neutral model comparison documentation that covers both closed-source (OpenAI, Anthropic) and open-source models, enabling developers to make informed choices across the full LLM landscape
More comprehensive than individual vendor documentation because it compares across providers; more objective than vendor marketing because it focuses on technical capabilities; more current than academic benchmarks because it tracks rapidly evolving model landscape
function calling and tool integration patterns for llm agents
Medium confidenceDocuments function calling capabilities that enable LLMs to invoke external tools and APIs by generating structured function calls. The guide explains how to define function schemas, parse LLM function call outputs, handle execution results, and integrate function calling into agent loops for tool-augmented reasoning.
Explains function calling as a core capability for building agents, showing how it enables structured tool invocation and integrates with reasoning techniques like ReAct
More structured than free-form tool use because function schemas enforce valid calls; more reliable than natural language tool invocation because it uses structured output; more flexible than hard-coded tool integrations because schemas can be dynamically defined
context engineering for ai agents with memory and state management
Medium confidenceDocuments context engineering practices for building effective AI agents, including how to structure system prompts, manage conversation history, implement memory systems, and handle context window constraints. The guide covers techniques for maintaining agent state, prioritizing relevant context, and designing prompts that enable agents to reason effectively within limited context windows.
Treats context engineering as a first-class concern for agent design, showing how careful context structuring and management is critical for building effective agents that can reason and act over long interactions
More comprehensive than framework-specific context management because it covers principles independent of implementation; more practical than academic papers because it includes concrete strategies and examples
synthetic dataset generation using llms for training and evaluation
Medium confidenceDocuments techniques for using LLMs to generate synthetic training data, evaluation datasets, and test cases. The guide covers prompt engineering for data generation, quality control strategies, and how to use synthetic data for fine-tuning, evaluation, and testing LLM applications.
Presents synthetic data generation as a practical solution for data scarcity in LLM applications, showing how LLMs can be used to bootstrap training and evaluation data
More cost-effective than manual data labeling; more flexible than fixed datasets because generation can be customized; more practical than purely synthetic approaches because it leverages LLM capabilities
fine-tuning guidance for gpt-4o and other models with prompt engineering integration
Medium confidenceDocuments fine-tuning approaches for adapting LLMs to specific tasks, including when to fine-tune vs use prompt engineering, how to prepare training data, and how to combine fine-tuning with advanced prompting techniques. The guide covers fine-tuning for GPT-4o and discusses tradeoffs between fine-tuning and in-context learning.
Integrates fine-tuning guidance within the broader prompt engineering context, showing how fine-tuning and prompting are complementary approaches rather than alternatives
More practical than academic fine-tuning papers because it includes cost-benefit analysis; more comprehensive than vendor documentation because it compares fine-tuning with prompt engineering alternatives
interactive jupyter notebook examples for hands-on prompt engineering practice
Medium confidenceProvides executable Jupyter notebooks demonstrating prompt engineering techniques with runnable code examples. Notebooks cover techniques like CoT, PAL, adversarial prompting, and RAG with actual LLM API calls, enabling learners to experiment and modify examples in real-time.
Provides executable notebooks integrated within the documentation platform, enabling learners to run examples directly from the guide without setting up separate environments
More interactive than static documentation because code is executable; more accessible than academic papers because it includes working examples; more practical than tutorials because learners can modify and experiment
research papers and findings collection on prompt engineering, rag, and agents
Medium confidenceCurates and summarizes research papers on prompt engineering, RAG, LLM agents, and related topics, providing links to original papers and distilled summaries of key findings. The collection helps practitioners stay current with research advances and understand the theoretical foundations of prompting techniques.
Integrates research papers within a practical guide, bridging the gap between academic research and practitioner knowledge by providing both theoretical foundations and practical applications
More curated than raw paper databases because papers are selected and summarized; more accessible than academic conferences because summaries distill key findings; more current than textbooks because it includes recent research
retrieval augmented generation (rag) technique documentation with architecture patterns
Medium confidenceDocuments RAG methodology for augmenting LLM responses with retrieved external knowledge, explaining the three-stage pipeline: retrieval (finding relevant documents), augmentation (injecting context into prompts), and generation (LLM producing grounded responses). The guide covers architectural patterns for building RAG systems, including vector store integration, retrieval ranking strategies, and context window management.
Positions RAG within the broader prompt engineering landscape, showing how it complements other techniques (CoT, few-shot prompting) and contrasts with alternatives (fine-tuning, in-context learning) rather than treating RAG in isolation
More comprehensive than vendor-specific RAG tutorials because it covers architectural principles independent of particular vector databases; more practical than academic RAG papers because it includes implementation patterns and integration strategies
react (reasoning + acting) framework documentation with agent loop patterns
Medium confidenceExplains the ReAct framework that combines reasoning (chain-of-thought) with acting (tool use), enabling LLMs to iteratively think, act, and observe in a loop. The guide documents the ReAct prompt structure, how to integrate external tools/APIs, and how to manage the reasoning-action-observation cycle. Content shows how ReAct enables agents to solve complex tasks requiring multiple tool invocations.
Integrates ReAct documentation within a comprehensive agent framework section that covers agent components, context engineering, and research findings, enabling readers to understand ReAct as one pattern within broader agent architecture design
More foundational than framework-specific agent documentation (LangChain, AutoGPT) because it explains the underlying ReAct pattern independent of implementation; more practical than academic papers because it includes prompt templates and integration examples
tree of thoughts (tot) advanced reasoning technique documentation
Medium confidenceDocuments Tree of Thoughts methodology that explores multiple reasoning paths simultaneously rather than a single linear chain, enabling LLMs to backtrack and explore alternative solutions. The guide explains the ToT search strategy (breadth-first, depth-first, beam search), how to evaluate intermediate reasoning states, and when ToT outperforms simpler techniques like CoT.
Positions ToT as an advanced evolution of CoT within a reasoning technique hierarchy, showing how it builds on simpler techniques and comparing computational tradeoffs with alternatives like self-consistency and beam search
More accessible than the original ToT research paper because it explains the core concept and search strategies in plain language; more comprehensive than framework tutorials because it covers multiple search strategies and evaluation approaches
self-consistency prompting technique for improving reasoning reliability
Medium confidenceExplains Self-Consistency methodology that samples multiple reasoning paths from an LLM and aggregates results through majority voting or weighted consensus, improving accuracy on reasoning tasks without requiring external tools. The technique leverages LLM temperature/sampling to generate diverse reasoning traces, then selects the most consistent answer across samples.
Presents self-consistency as a practical reliability technique that doesn't require external tools or fine-tuning, positioning it as an accessible alternative to more complex methods while acknowledging the cost-accuracy tradeoff
Simpler to implement than Tree of Thoughts because it doesn't require intermediate state evaluation; more cost-effective than fine-tuning for improving accuracy; more practical than ensemble models because it uses a single LLM with sampling
prompt chaining technique for decomposing complex tasks into sequential steps
Medium confidenceDocuments Prompt Chaining methodology that breaks complex tasks into sequential prompts where outputs from one step feed into the next, enabling task decomposition and intermediate validation. The guide explains how to design prompt chains, manage context between steps, and handle errors in multi-step workflows.
Explains prompt chaining as a foundational workflow pattern that complements other techniques (CoT, RAG, ReAct), showing how chaining enables more complex agent behaviors and task automation
More flexible than single-prompt approaches because it enables task decomposition and intermediate validation; simpler than full agent frameworks because it doesn't require tool integration or dynamic decision-making
automatic prompt engineer (ape) technique for optimizing prompts through search
Medium confidenceDocuments Automatic Prompt Engineer methodology that uses LLMs to generate and optimize prompts for specific tasks through iterative search and evaluation. APE treats prompt optimization as a search problem, generating candidate prompts, evaluating them on a task, and iteratively improving based on performance feedback.
Presents APE as a meta-level prompting technique where LLMs are used to optimize prompts for other LLM tasks, showing how prompting techniques can be applied recursively to improve themselves
More scalable than manual prompt engineering for many tasks; more interpretable than black-box fine-tuning because optimized prompts remain human-readable; more automated than human-in-the-loop prompt engineering
zero-shot and few-shot prompting technique documentation with examples
Medium confidenceExplains foundational prompting techniques where zero-shot uses no examples and few-shot provides a small number of examples to guide LLM behavior. The guide documents how examples improve task understanding, the importance of example selection and ordering, and when zero-shot vs few-shot is appropriate.
Positions zero-shot and few-shot as foundational techniques that enable all other prompting methods, showing how they form the basis for more advanced techniques like CoT and ReAct
More accessible than academic papers on in-context learning because it focuses on practical application; more comprehensive than vendor tutorials because it covers both techniques and their tradeoffs
program-aided language models (pal) for code-based reasoning and computation
Medium confidenceDocuments Program-Aided Language Models technique where LLMs generate executable code (Python, etc.) to solve problems rather than reasoning purely in natural language. PAL leverages LLMs' code generation abilities to handle complex math, logic, and computation tasks by writing programs that can be executed for precise results.
Presents PAL as a complementary approach to natural language reasoning, showing how code generation can overcome limitations of pure language-based reasoning for computational tasks
More precise than pure language reasoning because code execution is deterministic; more flexible than symbolic solvers because LLMs can generate code for novel problems; more interpretable than neural approaches because generated code is human-readable
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifactssharing capabilities
Artifacts that share capabilities with Prompt-Engineering-Guide, ranked by overlap. Discovered automatically through the match graph.
Prompt Engineering Guide
Comprehensive prompt engineering techniques and templates.
Prompt Engineering Guide
Guide and resources for prompt engineering.
Prompt Engineering Guide
Guide and resources for prompt...
Learn Prompting
A free, open source course on communicating with artificial intelligence.
Learn Prompting
A free, open-source course on communicating with artificial...
awesome-generative-ai-guide
A one stop repository for generative AI research updates, interview resources, notebooks and much more!
Best For
- ✓AI practitioners and developers learning prompt engineering systematically
- ✓Non-English speaking teams adopting LLM technologies
- ✓Educators building curriculum around LLM capabilities
- ✓Open-source contributors extending prompt engineering knowledge
- ✓Developers building reasoning-heavy LLM applications (math solvers, logic engines)
- ✓Data scientists improving LLM accuracy on complex tasks
- ✓Researchers studying LLM reasoning capabilities
- ✓Teams migrating from simple prompts to structured reasoning workflows
Known Limitations
- ⚠Static site generation means real-time updates require rebuild and redeploy cycle
- ⚠No built-in interactive prompt testing environment — examples are read-only documentation
- ⚠Language translations depend on community contributions; some languages may lag behind English
- ⚠Content organization is fixed by MDX file structure; dynamic content filtering/search is limited to client-side
- ⚠Documentation is educational reference material, not executable code — requires manual implementation
- ⚠CoT adds latency because it forces multi-step token generation; no guidance on latency-accuracy tradeoffs
Requirements
Input / Output
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
Repository Details
Last commit: Mar 11, 2026
About
🐙 Guides, papers, lessons, notebooks and resources for prompt engineering, context engineering, RAG, and AI Agents.
Categories
Alternatives to Prompt-Engineering-Guide
Are you the builder of Prompt-Engineering-Guide?
Claim this artifact to get a verified badge, access match analytics, see which intents users search for, and manage your listing.
Get the weekly brief
New tools, rising stars, and what's actually worth your time. No spam.
Data Sources
Looking for something else?
Search →