Receive audio chunks as they are individually processed.
Your authentication credentials. For Basic authentication, set this header to Basic $INWORLD_RUNTIME_BASE64_CREDENTIAL.
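As a rough sketch (Python), the header could be built from that environment variable; this assumes the variable already holds the base64-encoded credential, as its name suggests.

```python
import os

# Assumption: INWORLD_RUNTIME_BASE64_CREDENTIAL already contains the
# base64-encoded credential, so it can be dropped straight into the header.
headers = {
    "Authorization": f"Basic {os.environ['INWORLD_RUNTIME_BASE64_CREDENTIAL']}",
    "Content-Type": "application/json",
}
```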
The text to be synthesized into speech. Maximum input of 2,000 characters.
The ID of the voice to use for synthesizing speech.
Configurations to use when synthesizing speech.
Determines the degree of randomness when sampling audio tokens to generate the response.
Defaults to 1.1. Accepts values between 0 and 2. Higher values will make the output more random and can lead to more expressive results. Lower values will make it more deterministic.
For the most stable results, we recommend using the default value.
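A minimal request sketch using the fields above, in Python with the requests library. The endpoint URL is a placeholder, the voice ID is hypothetical, and the body layout (flat text/voiceId plus a config object holding temperature) is an assumption inferred from this reference rather than a confirmed schema.

```python
import os

import requests

STREAM_URL = "https://api.example.com/tts/v1/voice:stream"  # placeholder, not the real endpoint
headers = {"Authorization": f"Basic {os.environ['INWORLD_RUNTIME_BASE64_CREDENTIAL']}"}

payload = {
    "text": "Hello! This is a short synthesis test.",  # up to 2,000 characters
    "voiceId": "YOUR_VOICE_ID",                        # hypothetical voice ID
    "config": {
        "temperature": 1.1,  # default; 0-2, higher is more expressive, lower more deterministic
    },
}

# stream=True keeps the connection open so audio chunks can be read as they arrive.
response = requests.post(STREAM_URL, headers=headers, json=payload, stream=True)
response.raise_for_status()
```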
Controls timestamp metadata returned with the audio. When enabled, the response includes timing arrays under timestampInfo.wordAlignment (WORD) or timestampInfo.characterAlignment (CHARACTER). Useful for word-highlighting, karaoke-style captions, and lipsync.
Note: Enabling alignment slightly increases latency. Internal experiments show an average ~100 ms increase.
Language support: Timestamp alignment currently supports English only; other languages are experimental.
Response structure: timestampInfo.wordAlignment contains words, wordStartTimeSeconds, and wordEndTimeSeconds; timestampInfo.characterAlignment contains characters, characterStartTimeSeconds, and characterEndTimeSeconds.
Allowed values: TIMESTAMP_TYPE_UNSPECIFIED, WORD, CHARACTER

When enabled, text normalization automatically expands and standardizes things like numbers, dates, times, and abbreviations before converting them to speech. For example, Dr. Smith becomes Doctor Smith, and 3/10/25 is spoken as March tenth, twenty twenty-five. Turning this off may reduce latency, but the speech output will read the text exactly as written. Defaults to automatically deciding whether to apply text normalization.
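A hedged sketch of how these two options might be used together: a config fragment that requests WORD-level timestamps and explicit normalization, and a helper that walks the alignment arrays from a response object (for example, for caption highlighting). Field names and nesting are assumptions inferred from this reference, not a confirmed schema.

```python
def print_word_timings(response_object: dict) -> None:
    """Print each word with its start/end time, e.g. for word highlighting or captions."""
    alignment = response_object.get("timestampInfo", {}).get("wordAlignment", {})
    words = alignment.get("words", [])
    starts = alignment.get("wordStartTimeSeconds", [])
    ends = alignment.get("wordEndTimeSeconds", [])
    for word, start, end in zip(words, starts, ends):
        print(f"{start:6.2f}s - {end:6.2f}s  {word}")


# Hypothetical config fragment enabling the features described above.
config = {
    "timestampType": "WORD",          # or "CHARACTER" for per-character alignment
    "applyTextNormalization": "ON",   # expand numbers, dates, abbreviations, etc.
}
```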
Allowed values: APPLY_TEXT_NORMALIZATION_UNSPECIFIED, ON, OFF

A successful response returns a stream of objects.
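A sketch of consuming that stream in Python. It assumes the server sends newline-delimited JSON objects and that each object carries base64-encoded audio under result.audioContent; both the endpoint URL and that response shape are assumptions, so check the actual response schema before relying on them.

```python
import base64
import json
import os

import requests

STREAM_URL = "https://api.example.com/tts/v1/voice:stream"  # placeholder endpoint
headers = {"Authorization": f"Basic {os.environ['INWORLD_RUNTIME_BASE64_CREDENTIAL']}"}
payload = {"text": "Streaming example.", "voiceId": "YOUR_VOICE_ID"}  # hypothetical voice ID

with requests.post(STREAM_URL, headers=headers, json=payload, stream=True) as response:
    response.raise_for_status()
    with open("output_audio.bin", "wb") as out:
        # Read each streamed object as it arrives and append its decoded audio.
        for line in response.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            audio_b64 = chunk.get("result", {}).get("audioContent")  # assumed field name
            if audio_b64:
                out.write(base64.b64decode(audio_b64))
```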