
The block allows you to connect an Avatar voice bot to a scenario call for real-time request processing.


You can only use this block after configuring an integration with Avatar in the Integrations section, as shown in this manual.

  1. Connect the block using the Fail and Success ports.
  2. Follow the link at the top to go to the platform control panel and create an avatar.
Link to the platform control panel

Access to the control panel is granted to you under the subuser created during the integration setup and is limited to the Avatar section.

See Available Roles
  3. Click Create.
Click Create
  4. Enter your avatar name, select the language it will speak with customers, define the timezone, and click Create again.
Create an avatar
  5. Now add intents to your avatar and train it as needed. For details on how to create and train an avatar, go here.
  6. Go back to the scenario block and select the avatar you have just created from the drop-down list.
Select your avatar
  7. In the Speech synthesis section, select the Synth language and the desired Voice for your avatar.

Depending on the TTS provider you select, you can also configure the following advanced settings:

  • Voice pitch - Configure the synthesized voice pitch (Google). Available options: x-low, low, medium, high, x-high, default.

  • Speech volume - Set the speech volume (Google). Available options: silent, x-soft, soft, medium, loud, x-loud, default.

  • Speech rate - Set the synthesized speech speed (Google, Yandex). Available options: x-slow, slow, medium, fast, x-fast, default.
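The Google option values listed above match the attribute values of the SSML `<prosody>` element, which is one way to picture what the settings control. The sketch below is illustrative only: `buildProsody` is a hypothetical helper, not a Voximplant Kit API, and the Kit applies these settings for you without any markup on your side.

```javascript
// Illustrative only: the pitch / volume / rate options above correspond to
// the attribute values of the SSML <prosody> element used by Google TTS.
// buildProsody is a hypothetical helper, not part of Voximplant Kit.
function buildProsody(text, { pitch = "default", volume = "default", rate = "default" } = {}) {
  return `<speak><prosody pitch="${pitch}" volume="${volume}" rate="${rate}">` +
         `${text}</prosody></speak>`;
}

const ssml = buildProsody("Hello! How can I help you?", {
  pitch: "x-low",  // x-low, low, medium, high, x-high, default
  volume: "soft",  // silent, x-soft, soft, medium, loud, x-loud, default
  rate: "slow",    // x-slow, slow, medium, fast, x-fast, default
});
```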

Speech synthesis settings
  8. In the Recognition settings section, select the Recognition language.
  9. Enable the Background noise switch to play background sounds to fill in the gaps between the robot’s phrases. Select an audio recording from the drop-down list or upload a media file from your PC.
  10. Enable the Use phrase hints switch if you want the avatar to detect and recognize user inputs based on preset words and phrases. In the Possible response field, enter the required words and word combinations.

If you need to enter several words or utterances, press Enter after each of them.

Use phrase hints
  11. Enable the Single utterance switch if you want the system to detect when a speaker has spoken a single utterance and to automatically end recognition, returning the final ASR result. You typically use this setting for short customer replies: yes/no, service quality assessments, etc.
  12. Enable the Interim results switch if you want the system to return intermediate recognition results (assumptions) that are subject to change while it processes more audio before you receive the final ASR result. When disabled, you only receive the final ASR result, without assumptions. You typically need this setting when you expect longer customer responses.

The Single utterance and Interim results settings are mutually exclusive.

  13. In the End-of-phrase detection timeout limit field, define the time in milliseconds after which, if no new interim results arrive, the system takes the last interim result as the final ASR result.
  14. In the Customer message timeout limit field, specify the time in milliseconds after which the system notifies the avatar that a timeout occurred if ASR produced no input during this period.
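Together, the two timeouts behave roughly as in the sketch below. This is illustrative pseudologic to clarify the settings, not Voximplant Kit code; the event shape and function name are assumptions.

```javascript
// Illustrative sketch of the two ASR timeout settings (not Kit code).
// If no new interim result arrives within endOfPhraseTimeoutMs, the last
// interim result is promoted to the final one; if ASR produces nothing at
// all within customerMessageTimeoutMs, the avatar is notified of a timeout.
function resolveAsrOutcome(events, endOfPhraseTimeoutMs, customerMessageTimeoutMs) {
  if (events.length === 0) {
    // Customer message timeout: no ASR input at all during the period.
    return { type: "timeout", notifyAvatar: true };
  }
  const last = events[events.length - 1];
  if (!last.isFinal && last.silenceAfterMs >= endOfPhraseTimeoutMs) {
    // End-of-phrase timeout: take the last interim result as final.
    return { type: "final", text: last.text, promotedInterim: true };
  }
  return { type: "final", text: last.text, promotedInterim: false };
}
```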
ASR settings
  15. If required, you can monitor what the avatar replies to customers. To do so, in the Response handling settings section, enable the Process response parameters switch and select a function to process the bot responses.
Process avatar response

Keep in mind that you first need to add the required function to Voximplant Kit as follows:

  • Go to the Functions section.

  • Click New function.

  • Name the function, add the required code, and click Create. For an example of a function that gets an avatar response, go here.

New function
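As a rough sketch of what such a function might do, consider the snippet below. The handler name, field names (`message`), and return shape are assumptions for illustration; follow the linked example for the actual Voximplant Kit function code and signature.

```javascript
// Hypothetical sketch of a response-processing function. The field names
// and return shape are assumptions, not the documented Voximplant Kit API.
function processAvatarResponse(response) {
  // response.message is assumed to hold what the avatar is about to say.
  const text = response && response.message ? response.message : "";
  // Example processing: flag unusually long replies for later review.
  const isLong = text.length > 200;
  if (isLong) {
    console.log("Long avatar reply:", text.slice(0, 80) + "...");
  }
  return { message: text, flaggedAsLong: isLong };
}

module.exports = processAvatarResponse;
```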
  16. The function returns an object with the avatar's response message and the custom data the avatar received from Voximplant Kit at initialization (CALL, SKILLS, VARIABLES, HEADERS, TAGS, TOPICS, WORKSPACE_SETTINGS). Use the getCustomData method to get the custom data in your scenario.
  17. Click Save.
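The returned object can be pictured as in the sketch below. The key names (CALL, SKILLS, VARIABLES, HEADERS, TAGS, TOPICS, WORKSPACE_SETTINGS) come from this manual; the exact nesting and value types are assumptions for illustration only.

```javascript
// Hypothetical shape of the object the function returns. The custom-data
// key names are from the documentation; the nesting is an assumption.
const avatarFunctionResult = {
  message: "Thanks, your request has been registered.", // avatar's response message
  customData: {
    CALL: {},
    SKILLS: [],
    VARIABLES: {},
    HEADERS: {},
    TAGS: [],
    TOPICS: [],
    WORKSPACE_SETTINGS: {},
  },
};
// In the scenario, the manual says to read this custom data with the
// getCustomData method; consult the Kit docs for its exact signature.
```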