real-time web search with ai-optimized result ranking
Executes real-time web searches and returns clean, relevance-ranked results specifically formatted for LLM consumption rather than human browsing. The API filters out boilerplate, ads, and navigation elements, returning structured content that reduces token waste and improves RAG quality. Achieves 180ms p50 latency through optimized crawling infrastructure and result ranking tuned for semantic relevance to agent queries.
Unique: Specifically optimizes result ranking and content cleaning for LLM consumption (removing ads, boilerplate, and navigation) rather than human readability, paired with a 180ms p50 latency claimed to be the fastest on the market. Integrates directly with OpenAI, Anthropic, and Groq function-calling APIs for seamless agent integration.
vs alternatives: Faster and more LLM-focused than generic search APIs like Google Custom Search; optimized for agent use cases rather than human browsing, reducing token waste in RAG pipelines.
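Since the product claims direct integration with OpenAI-style function calling, the entry above can be sketched as a tool definition an agent framework would register. The tool name and parameter schema below are illustrative assumptions, not documented values.

```python
# Hedged sketch: exposing the search API as an OpenAI-style
# function-calling tool. "web_search" and the parameter names
# are assumptions for illustration, not vendor-documented values.

def search_tool_definition() -> dict:
    """JSON-schema tool spec an agent framework can register."""
    return {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": (
                "Real-time web search returning clean, "
                "relevance-ranked results for LLM consumption."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"},
                    "max_results": {"type": "integer", "default": 5},
                },
                "required": ["query"],
            },
        },
    }

tool = search_tool_definition()
```

The agent's runtime would call the actual API endpoint when the model emits a `web_search` tool call; that dispatch layer is omitted here.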
domain-filtered and depth-controlled search
Restricts search scope to specified domains or domain lists and controls search depth (basic vs. comprehensive) to balance result relevance against latency and cost. Enables agents to search within trusted sources or exclude unreliable domains, and allows tuning between quick shallow searches and exhaustive deep-research modes. Implementation details are not documented, but this is claimed as a core feature for agent control.
Unique: Offers explicit search depth controls and domain filtering as first-class features for agent builders, allowing fine-grained control over source trust and search comprehensiveness. These controls are claimed in the product description, but implementation details are absent from the documentation.
vs alternatives: More agent-centric than generic search APIs; provides explicit depth and domain controls rather than requiring post-processing filtering.
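A request builder for the controls described above might look like the following. Because the implementation is undocumented, the parameter names (`search_depth`, `include_domains`, `exclude_domains`) are assumptions inferred from the feature description.

```python
# Sketch of a depth- and domain-controlled search request body.
# All parameter names here are assumptions, not documented values.

def build_filtered_search(query, depth="basic",
                          include_domains=None, exclude_domains=None):
    """Build a request body restricting scope and search depth."""
    if depth not in ("basic", "comprehensive"):
        raise ValueError("depth must be 'basic' or 'comprehensive'")
    body = {"query": query, "search_depth": depth}
    if include_domains:
        body["include_domains"] = list(include_domains)   # trusted sources only
    if exclude_domains:
        body["exclude_domains"] = list(exclude_domains)   # drop unreliable sources
    return body

req = build_filtered_search(
    "recent CVE advisories",
    depth="comprehensive",
    include_domains=["nvd.nist.gov", "cve.org"],
)
```

Restricting to an allow-list plus a depth flag is the usual way to trade comprehensiveness against latency and per-query cost without post-filtering results client-side.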
enterprise sla and custom deployment
Enterprise tier provides custom SLAs, custom rate limits, and custom pricing. Enables dedicated support, performance guarantees, and potentially on-premises or private deployment options. Details are not documented, but the tier is positioned as a white-glove service for large-scale deployments.
Unique: Offers fully customizable enterprise tier with negotiable SLAs, rate limits, and pricing. Suggests potential for on-premise or private deployment, though not explicitly documented.
vs alternatives: More flexible than fixed enterprise tiers; enables custom terms for large-scale or specialized deployments.
answer extraction and summarization
Extracts direct answers to queries from search results and provides summarized information optimized for LLM consumption. Rather than returning full search results, answer extraction identifies and returns the most relevant answer snippet. Reduces token consumption and improves answer quality by filtering down to relevant information. The implementation mechanism is not documented, but it is claimed as a core feature.
Unique: Provides answer extraction as dedicated capability rather than requiring agents to parse full search results. Optimizes for token efficiency and direct answer retrieval vs. full-page content.
vs alternatives: More efficient than returning full search results; reduces token consumption and improves answer relevance for question-answering tasks.
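On the consuming side, an agent would prefer the direct answer snippet and fall back to the top result. The response shape below (an `answer` field alongside `results`) is an assumption for illustration; the actual schema is not documented.

```python
# Sketch of consuming an answer-extraction response.
# The "answer"/"results" shape is a hypothetical assumption.

def extract_answer(response: dict) -> str:
    """Prefer the direct answer snippet; fall back to the top result."""
    if response.get("answer"):
        return response["answer"]
    results = response.get("results", [])
    return results[0]["content"] if results else ""

mock_response = {
    "answer": "The p50 latency is 180 ms.",
    "results": [{"url": "https://example.com", "content": "full page text"}],
}
print(extract_answer(mock_response))  # → The p50 latency is 180 ms.
```

The token savings come from injecting only the snippet into the model's context instead of every result body.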
pay-as-you-go pricing at $0.008 per credit
Offers flexible pay-as-you-go pricing at $0.008 per API credit, allowing developers to scale usage without committing to monthly plans. Billing is based on actual usage rather than fixed monthly allocations. Exact credit-to-operation mapping and overage handling are not documented, making cost prediction difficult.
Unique: Offers granular pay-as-you-go pricing at $0.008 per credit, providing cost flexibility for variable workloads without requiring monthly commitments, though credit-to-operation mapping is undocumented.
vs alternatives: More flexible than fixed monthly plans because it scales with actual usage, though less predictable than monthly subscriptions due to unclear credit-to-operation mapping.
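The arithmetic is straightforward given the one documented number, $0.008 per credit; only the credit-to-operation mapping is unknown. A minimal cost helper:

```python
# Worked cost arithmetic for pay-as-you-go at $0.008 per credit.
# The per-credit price comes from the pricing description; how many
# credits a single search consumes is not documented.

PRICE_PER_CREDIT = 0.008  # USD

def payg_cost(credits_used: int) -> float:
    """Monthly cost for a given number of credits consumed."""
    return round(credits_used * PRICE_PER_CREDIT, 2)

print(payg_cost(1_000))  # → 8.0
print(payg_cost(4_000))  # → 32.0
```

So 4,000 credits of pay-as-you-go usage would run $32; whether that maps to 4,000 searches or fewer depends on the undocumented credit mapping.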
monthly subscription plans with bundled credits (4,000+ credits)
Offers monthly subscription plans bundling 4,000+ API credits per month at fixed prices, providing better per-credit rates than pay-as-you-go pricing for committed usage. Plans include a 'Project' tier with an adjustable pricing slider and higher rate limits than the free tier. Exact pricing, rate limits, and credit-to-operation mapping are not documented.
Unique: Provides monthly subscription plans with 4,000+ bundled credits and adjustable pricing sliders, offering better per-credit rates than pay-as-you-go for committed usage and access to higher rate limits.
vs alternatives: More cost-effective than pay-as-you-go for high-volume applications because bundled credits provide volume discounts, though less flexible for variable workloads.
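Because subscription prices are undocumented, the break-even point can only be computed against a hypothetical plan price. The $30/month figure below is an illustrative input, not a published price.

```python
# Break-even sketch between pay-as-you-go ($0.008/credit) and a
# bundled monthly plan. The plan price is a hypothetical input;
# actual subscription pricing is not documented.

PAYG_RATE = 0.008  # USD per credit, from the pricing description

def breakeven_credits(monthly_plan_price: float) -> float:
    """Credits/month above which a fixed plan beats pay-as-you-go."""
    return monthly_plan_price / PAYG_RATE

# If a plan cost $30/month (hypothetical), pay-as-you-go is cheaper
# only below this usage level:
print(breakeven_credits(30.0))  # → 3750.0
```

Under that assumed price, workloads steadily above ~3,750 credits/month would favor the bundle, which is consistent with plans starting at 4,000+ bundled credits.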
enterprise custom pricing and sla with 99.99% uptime guarantee
Offers enterprise tier with custom pricing, custom rate limits, and 99.99% uptime SLA for mission-critical applications. Includes dedicated support and customized integration assistance. Exact SLA terms, support response times, and customization options are not documented.
Unique: Provides enterprise tier with custom pricing, custom rate limits, and 99.99% uptime SLA, enabling mission-critical deployments with contractual guarantees and dedicated support.
vs alternatives: More suitable for enterprise deployments than self-service tiers because it provides contractual SLA guarantees, custom rate limits, and dedicated support, though at higher cost.
content extraction and cleaning from web pages
Extracts relevant content from web pages and cleans it for LLM consumption by removing HTML markup, scripts, ads, and boilerplate. Returns structured text optimized for embedding and context injection. Works as a companion to search results, allowing agents to fetch full page content after identifying relevant URLs.
Unique: Provides extraction as a dedicated API endpoint optimized for LLM consumption, with built-in boilerplate removal and content cleaning. Designed as a companion to search results rather than standalone scraping tool.
vs alternatives: Simpler than building custom HTML parsers or using generic scraping libraries; output is pre-optimized for LLM context injection.
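To illustrate the kind of cleaning described, here is a stdlib toy that strips `<script>`, `<style>`, `<nav>`, and `<footer>` content and keeps the remaining text. It is an assumption about what "boilerplate removal" means, not the vendor's implementation.

```python
from html.parser import HTMLParser

# Toy boilerplate stripper: drop text inside script/style/nav/footer
# tags and join the remaining visible text. Illustrative only; the
# actual extraction pipeline is not documented.

class BoilerplateStripper(HTMLParser):
    SKIP = {"script", "style", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0   # >0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def clean_html(html: str) -> str:
    parser = BoilerplateStripper()
    parser.feed(html)
    return " ".join(parser.chunks)

page = "<nav>Home | About</nav><p>Useful article text.</p><script>track()</script>"
print(clean_html(page))  # → Useful article text.
```

A production extractor also handles ad containers, cookie banners, and layout heuristics; the point here is only that the endpoint returns the cleaned text rather than raw HTML, so agents skip this step entirely.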
+7 more capabilities