Voximplant now offers real-time speech generation for voice AI from Inworld, our latest text-to-speech (TTS) partner. Together, we combine state-of-the-art TTS with carrier-grade connectivity so you can build voice agents that sound like your brand, not like a generic robot.
With the addition of speech synthesis support for Microsoft Azure Text-to-Speech, you can now choose from more than 425 voices covering 40 languages and 60 dialects across 6 distinct speech synthesis engine providers.
We recently added IBM Watson™ Text-to-Speech to our list of speech synthesis engine options, expanding the range of voices you can dynamically synthesize as part of phone and web calls.
Learn how a Voice AI Orchestration Platform connects LLMs, STT/TTS, turn‑taking, and telephony (PSTN, SIP, WebRTC) to build reliable real‑time voice agents. See benefits, architecture, and how Voximplant helps.
Voximplant now includes a native Grok module that connects any Voximplant call to xAI’s Grok Voice Agent API for real-time, speech-to-speech conversations. With a single VoxEngine scenario, callers can talk to Grok over phone numbers, SIP trunks and infrastructure, WhatsApp Business, or WebRTC, all without building custom media gateways or WebSocket streaming infrastructure.
Voximplant now includes a native Cartesia Line / Agents connector that links any Voximplant call to a Cartesia Line voice agent for real-time, speech-to-speech conversations over PSTN, SIP, WebRTC, or WhatsApp Business Calling, without building custom media gateways or WebSocket streaming infrastructure.
Check out the latest Voximplant Kit updates: we added chat analytics, improved call history, introduced new tools for supervisors, expanded scenario capabilities, and updated the softphone. Below is a brief overview of the most important enhancements.
Voximplant now includes a native Cartesia module for streaming, low-latency text-to-speech (TTS). You can use a single VoxEngine API to synthesize speech in real time, connect it to any call (PSTN, SIP, WebRTC, WhatsApp), and control playback from a Large Language Model (LLM) or other source, all inside VoxEngine.