search-augmented llm inference with real-time web grounding
Perplexity's Sonar models integrate web search directly into the inference pipeline, automatically retrieving and ranking current web data during response generation. The API supports four model variants (Sonar, Sonar Pro, Sonar Reasoning Pro, Sonar Deep Research) with configurable search context depth (Low/Medium/High), enabling responses grounded in real-time information without requiring separate search orchestration. Search context size directly affects both latency and pricing, allowing builders to trade off comprehensiveness against cost.
Unique: Integrates web search directly into the inference pipeline rather than as a separate tool call, with configurable search context depth (Low/Medium/High) that affects both response quality and pricing. Sonar Deep Research variant includes native citation token generation and reasoning tokens, enabling multi-step research workflows without external citation extraction.
vs alternatives: Unlike OpenAI's GPT-4 with web search plugins or Claude with tool calling, Sonar models have search baked into inference, reducing latency and eliminating separate search orchestration; pricing is transparent per context depth rather than buried in opaque tool-invocation costs.
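A minimal sketch of what selecting a context depth might look like on the request side. The endpoint shape, model identifier, and the `web_search_options`/`search_context_size` parameter names are assumptions based on the description above (Perplexity's chat API follows the OpenAI-compatible request pattern), not verified against current docs; no network call is made here.

```python
import json

def build_sonar_request(query: str, context_size: str = "low") -> dict:
    """Assemble a request body for a search-grounded completion.

    context_size maps to the Low/Medium/High search depth knob described
    above; parameter names are assumed, not confirmed.
    """
    if context_size not in ("low", "medium", "high"):
        raise ValueError(f"unknown search context size: {context_size}")
    return {
        "model": "sonar",  # assumed model identifier
        "messages": [{"role": "user", "content": query}],
        "web_search_options": {  # assumed parameter name
            "search_context_size": context_size,
        },
    }

body = build_sonar_request("What changed in the EU AI Act this week?", "high")
print(json.dumps(body, indent=2))
```

The point is that depth is a per-request field, so an application can vary it call by call rather than committing to one setting globally.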
multi-provider llm access with integrated web search tools
The Agent API provides unified access to third-party LLM models (OpenAI, Anthropic, Google, xAI) through Perplexity's infrastructure, with two built-in web search tools (web_search and fetch_url) available as function calls. Builders invoke third-party models via a single API endpoint, and the models can autonomously call web_search ($0.005/invocation) or fetch_url ($0.0005/invocation) to retrieve current information. Pricing is transparent: model tokens charged at direct provider rates with no markup, plus separate tool invocation fees.
Unique: Provides unified access to multiple LLM providers (OpenAI, Anthropic, Google, xAI) through a single API endpoint with consistent web search tools, eliminating the need to manage separate provider SDKs or search integrations. Tool invocation costs are itemized separately from model token costs, enabling precise cost attribution.
vs alternatives: Simpler than building multi-provider support with individual SDKs and integrating search separately; more transparent pricing than OpenAI's plugin system or Claude's tool calling, which obscure tool invocation costs in token counts.
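Because tool fees are itemized separately from token costs, per-request cost attribution reduces to simple arithmetic. The tool prices below come from the text above; the per-million-token rates are caller-supplied, since they depend on which third-party model is invoked.

```python
# Itemized Agent API cost: model tokens at the provider's direct rate,
# plus flat per-invocation tool fees.
WEB_SEARCH_FEE = 0.005    # $ per web_search invocation (per the text above)
FETCH_URL_FEE = 0.0005    # $ per fetch_url invocation (per the text above)

def request_cost(prompt_tokens: int, completion_tokens: int,
                 input_rate_per_m: float, output_rate_per_m: float,
                 web_searches: int = 0, url_fetches: int = 0) -> dict:
    """Break one request's cost into token and tool components."""
    token_cost = (prompt_tokens * input_rate_per_m
                  + completion_tokens * output_rate_per_m) / 1_000_000
    tool_cost = web_searches * WEB_SEARCH_FEE + url_fetches * FETCH_URL_FEE
    return {"tokens": token_cost,
            "tools": tool_cost,
            "total": token_cost + tool_cost}
```

For example, a request consuming 1,000 prompt and 500 completion tokens at illustrative $3/$15 per-million rates, with two searches and one fetch, splits evenly: $0.0105 in tokens and $0.0105 in tool fees.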
api key-based authentication with key management dashboard
Perplexity API uses API key-based authentication: developers create and manage keys through the API Key Management dashboard and pass them in HTTP requests using the standard header pattern (Authorization: Bearer <api_key>), enabling integration with standard HTTP clients and SDKs. The dashboard provides visibility into key creation, rotation, and usage.
Unique: Standard API key-based authentication with a dedicated Key Management dashboard for creation, rotation, and tracking. No complex OAuth flows or third-party authentication providers required.
vs alternatives: Simpler than OAuth-based authentication (used by some APIs) but less flexible than scoped tokens or role-based access control; standard pattern that integrates easily with existing HTTP clients and SDKs.
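The bearer-token pattern described above needs nothing beyond a standard HTTP client. A minimal sketch using only the Python standard library; the endpoint URL is illustrative and the key is read from an environment variable, a common practice for keeping secrets out of source code.

```python
import os
import urllib.request

def authed_request(url: str, api_key: str) -> urllib.request.Request:
    """Attach the API key as a standard Authorization header."""
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = authed_request("https://api.perplexity.ai/chat/completions",
                     os.environ.get("PPLX_API_KEY", "test-key"))
```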
perplexity sdk with quickstart guides and integration documentation
Perplexity provides an official SDK (language support not specified in documentation) with quickstart guides and integration documentation. The SDK abstracts HTTP request/response handling and provides language-native interfaces for API calls. SDK documentation includes guides for common use cases (e.g., building search assistants, implementing RAG pipelines), enabling developers to get started quickly without building HTTP clients from scratch.
Unique: Official SDK with quickstart guides and integration documentation, reducing time-to-first-API-call. SDK abstracts HTTP details and provides language-native interfaces.
vs alternatives: More convenient than raw HTTP clients (no need to build request/response handling); official documentation ensures best practices and up-to-date API support.
raw web search api with advanced filtering and ranking
The Search API provides direct access to Perplexity's web search infrastructure, returning ranked search results with advanced filtering capabilities. Unlike the Sonar or Agent APIs which generate text responses, the Search API returns raw search results suitable for building custom search UIs, RAG pipelines, or search-augmented applications. Pricing is flat-rate ($5 per 1,000 requests) with no token-based costs, making it cost-predictable for high-volume search workloads.
Unique: Decouples search from text generation, providing raw ranked search results with flat-rate pricing ($5/1K requests) instead of token-based costs. Enables builders to implement custom search UIs, RAG pipelines, or search-augmented workflows without paying for LLM inference.
vs alternatives: Cheaper than Sonar API for search-heavy workloads (flat-rate vs token-based); more flexible than Google Custom Search or Bing Search API for RAG pipelines because results are optimized for relevance rather than ad-serving.
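The flat-rate pricing makes budgeting a one-line calculation, which is the practical contrast with token-based APIs. A sketch comparing the two models; the $5/1K request rate comes from the text above, while the token figures on the other side of the comparison are illustrative assumptions.

```python
# Flat-rate vs token-priced cost for a search-heavy workload.
SEARCH_COST_PER_REQUEST = 5.0 / 1000  # $0.005 flat per request

def search_api_cost(n_requests: int) -> float:
    """Flat-rate cost: independent of result size or query length."""
    return n_requests * SEARCH_COST_PER_REQUEST

def token_priced_cost(n_requests: int, avg_tokens_per_request: int,
                      rate_per_m_tokens: float) -> float:
    """What the same volume would cost under per-token pricing."""
    return n_requests * avg_tokens_per_request * rate_per_m_tokens / 1_000_000
```

At 100,000 requests/month the flat rate is a fixed $500, whereas a token-priced call averaging 2,000 tokens at an illustrative $3/M would run $600 and grow with response length.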
semantic embeddings generation for rag and similarity search
The Embeddings API generates vector embeddings for text, supporting both standard and contextualized embedding variants. Embeddings can be used for semantic search, similarity matching, and RAG (Retrieval-Augmented Generation) pipelines. The API supports two embedding strategies: standard embeddings for general-purpose similarity, and contextualized embeddings that incorporate surrounding context for improved relevance in domain-specific applications.
Unique: Offers both standard and contextualized embedding variants, allowing builders to choose between general-purpose similarity and context-aware embeddings for domain-specific RAG pipelines. Contextualized embeddings incorporate surrounding text context during embedding generation, improving relevance for specialized domains.
vs alternatives: Contextualized embeddings differentiate from OpenAI's text-embedding-3 or Cohere's embed API, which provide only standard embeddings; enables better domain-specific retrieval without fine-tuning.
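Whichever embedding variant is used, the retrieval side of a RAG pipeline is plain vector math. A self-contained sketch that ranks documents by cosine similarity against a query embedding; the vectors here are tiny toy values standing in for real API output.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec: list, doc_vecs: list, k: int = 2) -> list:
    """Return indices of the k documents most similar to the query."""
    scored = sorted(enumerate(doc_vecs),
                    key=lambda iv: cosine(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]
```

In practice the choice between standard and contextualized embeddings changes what the vectors encode, not this retrieval step.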
web search tool invocation with autonomous model decision-making
Within the Agent API, third-party LLM models can autonomously invoke two web search tools (web_search and fetch_url) via function calling. The model decides when to search based on query content, and Perplexity's infrastructure executes the search and returns results to the model for incorporation into its response. This enables agentic workflows where the model acts as a decision-maker: it can choose to use training data, invoke web_search to retrieve current information, or fetch_url to extract content from specific URLs. Each tool invocation is charged separately ($0.005 for web_search, $0.0005 for fetch_url).
Unique: Enables autonomous tool invocation where the LLM model decides when to search based on query content, without requiring explicit tool orchestration from the application layer. Tool invocation costs are itemized separately, enabling precise cost attribution and optimization of agentic workflows.
vs alternatives: More flexible than Sonar's built-in search (which always searches) because the model can choose when to search; simpler than building custom tool calling with OpenAI or Anthropic SDKs because search tools are pre-integrated and optimized.
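On the application side, the loop reduces to dispatching whatever tool call the model emits and feeding the result back as a tool message. The call shape and handler signatures below are assumptions illustrating the pattern, with stubbed implementations in place of real search and fetch calls.

```python
# Hypothetical application-side dispatch for autonomous tool calls.
def run_web_search(query: str) -> str:
    """Stub; a real handler would invoke the web_search tool."""
    return f"[results for: {query}]"

def run_fetch_url(url: str) -> str:
    """Stub; a real handler would invoke the fetch_url tool."""
    return f"[contents of {url}]"

TOOL_HANDLERS = {"web_search": run_web_search, "fetch_url": run_fetch_url}

def dispatch_tool_call(call: dict) -> dict:
    """Execute one model-requested tool call; shape the reply message."""
    handler = TOOL_HANDLERS[call["name"]]
    return {"role": "tool",
            "name": call["name"],
            "content": handler(**call["arguments"])}
```

The key design point from the text survives in the sketch: the model, not the application, decides when a `web_search` or `fetch_url` entry appears in the call stream.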
configurable search context depth for cost-quality tradeoffs
The Sonar API supports three configurable search context depths (Low, Medium, High) that control how comprehensively the model searches the web during inference. Low context (default) performs minimal search for speed and cost; Medium context balances comprehensiveness and cost; High context performs exhaustive search for research-grade responses. Search context depth directly affects both response latency and pricing, with High context costing 2-3x more than Low context per request. This enables builders to implement dynamic pricing and latency strategies based on query complexity or user tier.
Unique: Provides explicit, configurable control over search comprehensiveness (Low/Medium/High) with transparent pricing impact, enabling builders to implement dynamic cost-quality strategies. While Sonar's search itself is always-on, the depth knob lets builders dial search exhaustiveness up or down against cost and latency per request.
vs alternatives: More transparent than OpenAI's web search plugins (which have opaque search behavior) or Claude's tool calling (which requires manual search orchestration); enables cost optimization that's not possible with always-on search models.
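One way to act on this knob is a small routing function that picks a depth per request from user tier and a rough query-complexity signal. The tier names and the word-count heuristic below are illustrative assumptions, not anything defined by the API.

```python
def pick_context_size(user_tier: str, query: str) -> str:
    """Map user tier and query complexity to a search context depth.

    Hypothetical policy: free users always get cheap/fast "low";
    paying users get deeper search for long, complex queries.
    """
    words = len(query.split())
    if user_tier == "free":
        return "low"
    if user_tier == "pro":
        return "high" if words > 20 else "medium"
    return "medium"  # default for unknown tiers
```

The returned string would then be passed as the request's search context size, so cost scales with the queries (and users) that actually warrant deeper search.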
+4 more capabilities