
Lip syncing

As of Inworld Unity SDK Ver. 2.0.3, lip syncing is automatically included in the package.

All characters fetched via the Inworld Studio Panel automatically include this feature.

Our lip sync periodically receives phoneme data from the server, converts it into viseme data, and then applies the result to the Skinned Mesh Renderer's blend shapes.
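To illustrate the last step of that pipeline, here is a minimal sketch of applying per-viseme weights to a Skinned Mesh Renderer. The class and field names, and the assumption that visemes occupy consecutive blend shape indices, are illustrative; this is not the SDK's actual implementation.

```csharp
using UnityEngine;

// Minimal sketch: apply viseme weights to a Skinned Mesh Renderer's
// blend shapes. Names and layout are assumptions, not the SDK's code.
public class VisemeApplierSketch : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer m_FaceRenderer;
    [SerializeField] int m_VisemeStartIndex; // index of the first viseme on the mesh

    // Called with one normalized weight (0..1) per viseme,
    // e.g. after phoneme data has been converted to viseme data.
    public void ApplyVisemes(float[] visemeWeights)
    {
        for (int i = 0; i < visemeWeights.Length; i++)
        {
            // Unity expresses blend shape weights in percent (0..100).
            m_FaceRenderer.SetBlendShapeWeight(m_VisemeStartIndex + i, visemeWeights[i] * 100f);
        }
    }
}
```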


Enable lip syncing for a custom avatar

If you'd like to enable lip syncing for your own avatar instead of a Ready Player Me one, follow the steps below.

Prerequisites

  1. Your avatar is exported with blend shapes from your 3D modeling software.

  2. After import, the BlendShapes section of the Skinned Mesh Renderer contains entries for the visemes (you can verify this with the sketch after this list).
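If you are unsure whether the viseme entries survived import, a small hypothetical helper like the following can list every blend shape on the mesh; the class name is an assumption, and only the Mesh and SkinnedMeshRenderer calls are standard Unity API.

```csharp
using UnityEngine;

// Hypothetical helper (not part of the SDK): attach to the object holding
// the Skinned Mesh Renderer and check the Console in Play mode to confirm
// the viseme blend shapes exist.
public class BlendShapeListerSketch : MonoBehaviour
{
    void Start()
    {
        var faceRenderer = GetComponent<SkinnedMeshRenderer>();
        Mesh mesh = faceRenderer.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
            Debug.Log($"{i}: {mesh.GetBlendShapeName(i)}");
    }
}
```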

Set the starting index

As of Inworld SDK Ver. 2.1.3, if the name of your first viseme is not viseme_sil, you can change the name in InworldLipAnimation.
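If you are not sure at which blend shape index your visemes start, a sketch like the one below can look it up by name. Only Mesh.GetBlendShapeIndex is standard Unity API; the class and field names are assumptions for illustration.

```csharp
using UnityEngine;

// Hypothetical sketch: find the starting viseme index by blend shape name.
public class VisemeStartIndexSketch : MonoBehaviour
{
    // Replace with the actual name of your avatar's first viseme.
    [SerializeField] string m_FirstVisemeName = "viseme_sil";

    void Start()
    {
        var faceRenderer = GetComponent<SkinnedMeshRenderer>();
        int index = faceRenderer.sharedMesh.GetBlendShapeIndex(m_FirstVisemeName);
        if (index < 0)
            Debug.LogWarning($"Blend shape '{m_FirstVisemeName}' not found on mesh.");
        else
            Debug.Log($"Visemes start at blend shape index {index}.");
    }
}
```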

(Optional) Configure P2V Map

If the viseme indices are consecutive and in the correct order (from sil to U), you can skip this step.

Otherwise, you need to configure the P2V map. You can inspect the data at Resources > Animations > Inworld Face Animations > P2V Map.

If you are unsure how to set the data, you can check the reference here.

⚠️ Note: We recommend duplicating this Inworld Face Animations asset, updating the copy, and then assigning your generated asset to your InworldCharacter's InworldLipAnimation.
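Conceptually, the P2V map resolves each phoneme to the blend shape index of its viseme, which is how non-consecutive or reordered viseme indices can still be addressed correctly. The sketch below shows that idea; the phoneme labels and indices are illustrative assumptions, and the real data lives in the Inworld Face Animations asset, not in code.

```csharp
using System.Collections.Generic;

// Conceptual sketch of what a phoneme-to-viseme (P2V) map expresses.
// Labels and indices are assumptions for illustration only.
public static class P2VMapSketch
{
    static readonly Dictionary<string, int> s_P2V = new Dictionary<string, int>
    {
        { "sil", 0 }, // silence
        { "PP",  1 }, // p, b, m
        { "FF",  2 }, // f, v
        // ...one entry per remaining viseme, ending with U.
    };

    // Falls back to silence for unmapped phonemes.
    public static int GetVisemeIndex(string phoneme) =>
        s_P2V.TryGetValue(phoneme, out int index) ? index : 0;
}
```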


Before 2.1.3

Before Ver. 2.1.3, the lip sync library uses Oculus Lipsync. If you want to integrate your own lip sync solution, replace it under GLTFAvatarLoader > LipAnimations.