ultra-low-latency streaming text-to-speech with state-space model architecture
Generates speech from text input using a state-space model (SSM) architecture optimized for real-time streaming, delivering time-to-first-audio of 40-90ms depending on model variant (Sonic-Turbo: 40ms, Sonic-3: 90ms). Streams audio chunks progressively to the client as text is processed, enabling interactive voice agent applications with near-instantaneous speech output. Uses character-level pricing (1 credit per character) with support for 42 languages and dynamic voice control parameters.
Unique: Uses state-space model (SSM) architecture instead of traditional transformer-based TTS, enabling 40-90ms time-to-first-audio with streaming output. This architectural choice allows progressive audio generation without waiting for full sequence completion, critical for interactive applications. Sonic-Turbo variant achieves 40ms latency (claimed as 'twice as fast as the blink of an eye'), positioning it as fastest in category.
vs alternatives: Achieves 2-4x lower latency than transformer-based TTS systems (e.g., Google Cloud TTS, Azure Speech Services) by using an SSM architecture with a streaming-first design, making it one of the few viable options for sub-100ms voice agent interactions.
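The streaming-first design above can be illustrated with a minimal consumer. This is a self-contained sketch, not the Cartesia SDK: `fake_tts_stream` is a hypothetical stand-in for an SSM-style model that emits audio chunks as each slice of text is processed, and `measure_ttfa` shows how time-to-first-audio would be measured against any async chunk source.

```python
import asyncio
import time

async def fake_tts_stream(text: str):
    """Simulated streaming TTS: yields audio chunks progressively as text
    is processed, rather than after the full sequence completes."""
    for _ in range(0, len(text), 16):
        await asyncio.sleep(0.005)  # stand-in for per-chunk model latency
        yield b"\x00" * 320         # stand-in for a 20 ms PCM chunk

async def measure_ttfa(text: str) -> float:
    """Return time-to-first-audio in milliseconds: elapsed time from
    request start until the first audio chunk arrives."""
    start = time.perf_counter()
    async for _chunk in fake_tts_stream(text):
        return (time.perf_counter() - start) * 1000.0
    return float("inf")

ttfa = asyncio.run(measure_ttfa("Hello, world! " * 10))
print(f"TTFA: {ttfa:.1f} ms")
```

The key property is that the first chunk arrives after processing only a small prefix of the input, so TTFA is independent of total utterance length.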
emotion and prosody control in speech synthesis
Enables fine-grained control over emotional tone and prosodic characteristics of generated speech through inline text tokens and voice parameters. Supports explicit emotion markers like '[excited]' and '[sad]' embedded in input text, allowing dynamic emotional expression within a single speech generation request. Works in conjunction with voice selection and voice localization to modulate pitch, pace, and emotional coloring of output audio.
Unique: Implements emotion control through inline text tokens ('[excited]', '[sad]') rather than separate API parameters, allowing emotion changes mid-utterance without multiple API calls. This token-based approach integrates emotion control directly into the text input stream, enabling natural emotional transitions within continuous speech generation.
vs alternatives: Provides more granular, mid-utterance emotion control than cloud TTS systems (Google Cloud, Azure) which typically apply emotion at the request level; token-based approach allows emotional expression to follow narrative flow without API call overhead.
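Because emotion markers are plain inline tokens, building a mid-utterance emotional transition is just string assembly. A minimal sketch, assuming the bracketed-token convention described above (the exact set of supported emotion names is an assumption):

```python
def with_emotions(segments):
    """Build a single TTS input string with inline emotion tokens.

    `segments` is a list of (emotion, text) pairs; emotion may be None
    to leave a segment in the voice's default tone.
    """
    parts = []
    for emotion, text in segments:
        if emotion:
            parts.append(f"[{emotion}]")  # token names like 'excited'/'sad' are assumed
        parts.append(text.strip())
    return " ".join(parts)

text = with_emotions([
    ("excited", "We won the championship!"),
    ("sad", "But our captain is retiring."),
])
print(text)
# [excited] We won the championship! [sad] But our captain is retiring.
```

The emotion shift happens inside one request, with no additional API calls.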
credit-based usage pricing with character-level granularity
Implements credit-based pricing model where TTS generation costs 1 credit per character of input text, with additional credits for advanced features (voice cloning, localization, infilling). Credits are allocated monthly based on subscription tier (Free: 20K, Pro: 100K, Startup: 1.25M, Scale: 8M, Enterprise: custom) and do not roll over between months. This granular pricing model enables transparent cost prediction and prevents surprise bills.
Unique: Uses character-level credit granularity (1 credit per character) rather than per-request or per-minute pricing, enabling precise cost prediction based on input volume. Advanced features have separate credit costs (voice cloning: 1M credits training + 1.5 credits/character; localization: 225 credits; infilling: 300 credits + 1 credit/character).
vs alternatives: Provides more transparent, granular pricing than per-request models; character-level pricing aligns cost with actual usage, unlike per-minute pricing which penalizes longer utterances.
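The character-level model makes cost prediction a pure function of input length. A sketch of a cost estimator using only the figures stated above (1 credit/character base rate, 1.5 credits/character with a cloned PVC voice, monthly non-rollover tier allocations):

```python
# Monthly credit allocations per subscription tier (no rollover).
MONTHLY_CREDITS = {
    "free": 20_000,
    "pro": 100_000,
    "startup": 1_250_000,
    "scale": 8_000_000,
}

def tts_credits(text: str, pvc_voice: bool = False) -> float:
    """Credits for one TTS request: 1 credit per character,
    or 1.5 credits per character when using a PVC-cloned voice."""
    rate = 1.5 if pvc_voice else 1.0
    return len(text) * rate

def tier_covers(tier: str, chars_per_month: int) -> bool:
    """Does a tier's monthly allocation cover a projected character volume
    at the base rate?"""
    return chars_per_month <= MONTHLY_CREDITS[tier]

print(tts_credits("Hello, world!"))   # 13 characters -> 13.0 credits
print(tier_covers("pro", 90_000))     # True: 90K chars fit in 100K credits
```

Because credits do not roll over, sizing a tier against peak monthly volume rather than average volume avoids mid-month exhaustion.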
pre-built integrations with voice agent and rtc platforms
Provides native integrations with popular voice agent frameworks (Pipecat, Rasa), real-time communication platforms (LiveKit, Tencent RTC, Twilio), and specialized voice agent services (Thoughtly, Vision Agents by Stream). Integrations handle authentication, streaming audio transport, and request/response marshaling, enabling developers to use Cartesia TTS/STT without building custom API clients.
Unique: Provides native integrations with multiple voice agent frameworks (Pipecat, Rasa) and RTC platforms (LiveKit, Twilio, Tencent RTC), reducing integration effort compared to building custom API clients. Integrations handle streaming audio transport and request marshaling transparently.
vs alternatives: Reduces integration effort compared to competitors requiring custom API client development; pre-built integrations with popular frameworks enable faster time-to-market for voice agent projects.
agent credit system for voice agent deployments
Provides separate credit allocation for voice agent deployments through 'agent credits' distinct from model credits. Agent credits are prepaid amounts (Free: $1, Pro: $5, Startup: $49, Scale: $299, Enterprise: custom) that fund voice agent operations, enabling separate cost tracking and budget management for agent-based systems vs direct API usage. The mechanism for converting agent credits into API usage is not documented.
Unique: Implements separate agent credit system for voice agent deployments, enabling cost tracking and budget management independent from direct API usage. This architectural choice allows organizations to manage voice agent costs separately from other API usage.
vs alternatives: Provides separate cost tracking for voice agents vs direct API usage, enabling better budget allocation and cost visibility than unified credit systems; prepaid agent credits enable predictable monthly costs.
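A prepaid-balance ledger illustrates the separate-tracking idea. This is a sketch only: the tier amounts come from the text above, but the per-operation charge is a placeholder, since the actual conversion from agent credits to API usage is not documented.

```python
from dataclasses import dataclass

# Prepaid agent credit amounts per tier, in USD (from the pricing above).
AGENT_CREDITS_USD = {"free": 1.0, "pro": 5.0, "startup": 49.0, "scale": 299.0}

@dataclass
class AgentBudget:
    """Tracks prepaid agent credits independently of model credits."""
    tier: str
    spent_usd: float = 0.0

    @property
    def remaining_usd(self) -> float:
        return AGENT_CREDITS_USD[self.tier] - self.spent_usd

    def charge(self, usd: float) -> None:
        """Record an agent operation's cost; the amount is hypothetical."""
        if usd > self.remaining_usd:
            raise RuntimeError("agent credit balance exhausted")
        self.spent_usd += usd

budget = AgentBudget("pro")
budget.charge(1.25)
print(f"${budget.remaining_usd:.2f} remaining")  # $3.75 remaining
```

Keeping agent spend in its own ledger is what enables per-deployment budget caps independent of direct API consumption.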
instant and professional voice cloning with credit-based training
Supports two voice cloning modes: Instant Voice Cloning (IVC) requiring zero training credits, and Professional Voice Cloning (PVC) requiring 1M credits for one-time training plus 1.5 credits per character of generated speech. IVC uses speaker embedding extraction from reference audio to immediately synthesize speech in that voice without training. PVC trains a custom voice model on reference samples for higher quality and consistency, suitable for production voice agent deployments.
Unique: Offers dual voice cloning modes: IVC (zero training cost, immediate) and PVC (1M credit training, higher quality). This two-tier approach allows rapid prototyping with IVC while enabling production-grade voice consistency with PVC. The credit-based pricing for training (1M credits) is transparent and predictable, unlike some competitors offering opaque training processes.
vs alternatives: Provides faster voice cloning than Google Cloud Text-to-Speech custom voices (which require manual training and approval) and more transparent pricing than ElevenLabs (which uses opaque 'voice cloning credits'); IVC mode enables immediate voice cloning for prototyping without training overhead.
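The two-tier trade-off can be quantified directly from the stated pricing (1M credits PVC training, 1.5 credits/character PVC generation). The IVC per-character rate is an assumption here: the text does not state it, so this sketch uses the base 1 credit/character rate.

```python
PVC_TRAINING_CREDITS = 1_000_000
PVC_PER_CHAR = 1.5
IVC_PER_CHAR = 1.0  # ASSUMPTION: IVC generation billed at the base rate

def cloning_cost(chars: int, mode: str) -> float:
    """Total credits to clone a voice and synthesize `chars` characters."""
    if mode == "ivc":
        return chars * IVC_PER_CHAR
    if mode == "pvc":
        return PVC_TRAINING_CREDITS + chars * PVC_PER_CHAR
    raise ValueError(f"unknown mode: {mode}")

# PVC's one-time training cost dominates at prototyping volumes:
print(cloning_cost(10_000, "ivc"))  # 10000.0
print(cloning_cost(10_000, "pvc"))  # 1015000.0
```

Under this assumption IVC is always cheaper per character as well, so PVC is justified by quality and consistency rather than cost, which matches the prototyping-vs-production framing above.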
laughter and non-speech vocalization synthesis
Generates laughter and other non-speech vocalizations (e.g., sighs, gasps) by embedding special tokens like '[laughter]' directly in input text. The synthesis engine recognizes these tokens and generates appropriate audio vocalizations that integrate seamlessly with surrounding speech, enabling natural conversational dynamics in voice agents and interactive media.
Unique: Implements laughter and vocalizations as inline text tokens ('[laughter]') rather than separate API calls or post-processing, allowing vocalizations to be generated as part of continuous streaming speech without latency overhead. This token-based approach treats vocalizations as first-class elements of the speech synthesis pipeline.
vs alternatives: Provides more natural vocalization integration than systems requiring separate API calls for laughter generation; token-based approach ensures vocalizations flow naturally with surrounding speech without timing gaps or synchronization issues.
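Since vocalizations ride in the text stream, client code can preview how they interleave with speech before sending a request. A sketch using a regex splitter; the token set (`laughter`, `sigh`, `gasp`) is assumed from the examples above, not an exhaustive list.

```python
import re

# ASSUMPTION: bracketed token names as shown in the examples above.
TOKEN_RE = re.compile(r"\[(laughter|sigh|gasp)\]")

def segment(text: str):
    """Split TTS input into ('speech', ...) and ('vocalization', ...)
    segments, preserving their order in the stream."""
    out, pos = [], 0
    for m in TOKEN_RE.finditer(text):
        chunk = text[pos:m.start()].strip()
        if chunk:
            out.append(("speech", chunk))
        out.append(("vocalization", m.group(1)))
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        out.append(("speech", tail))
    return out

parts = segment("That was hilarious [laughter] but let's focus.")
print(parts)
# [('speech', 'That was hilarious'), ('vocalization', 'laughter'),
#  ('speech', "but let's focus.")]
```

On the server side, the same ordering property is what lets the engine render the vocalization in-stream with no timing gap.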
voice localization and accent control
Enables regional accent and localization control for synthesized speech through voice localization parameters, allowing the same voice to be rendered with different regional accents or pronunciation patterns. Implemented as a one-time 225-credit cost per localization variant, suggesting a voice model fine-tuning or adaptation approach. Supports 42 languages with localization variants available for each.
Unique: Implements voice localization as a one-time 225-credit training/adaptation cost per variant, suggesting voice model fine-tuning on regional speech data. This approach trades upfront cost for consistent, high-quality accent rendering, rather than real-time accent morphing which would be lower quality.
vs alternatives: Provides more authentic regional accents than real-time accent morphing approaches (which often sound artificial); one-time training cost ensures consistent accent quality across all generations, unlike parameter-based accent control which may degrade voice naturalness.
+5 more capabilities