Lip syncing
As of Inworld Unity SDK Ver. 2.0.3, lip syncing is embedded in the package.
All characters fetched via the Inworld Studio Panel use this feature automatically.
Our lip sync system receives phoneme data from the server whenever the client receives audio for an interaction. This phoneme data is translated into viseme data, which drives the blend shapes on the Skinned Mesh Renderer of the Inworld character avatar.
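Conceptually, the final step of that pipeline looks like the sketch below. This is not the SDK's internal code; the component and field names are illustrative, and in practice the viseme weights come from Inworld's server-side phoneme data.

```csharp
using UnityEngine;

// Minimal sketch of the final pipeline step: applying viseme weights
// to the avatar's blend shapes. Names here are illustrative.
public class VisemeBlendShapeDriver : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer m_FaceRenderer;
    // Index of the first viseme blend shape on the mesh (e.g. viseme_sil).
    [SerializeField] private int m_VisemeStartIndex;

    // Apply one frame of viseme data: weights[i] in [0, 1], one per viseme,
    // in the same order as the viseme blend shapes on the mesh.
    public void ApplyVisemes(float[] weights)
    {
        for (int i = 0; i < weights.Length; i++)
        {
            // Unity blend shape weights range from 0 to 100.
            m_FaceRenderer.SetBlendShapeWeight(m_VisemeStartIndex + i, weights[i] * 100f);
        }
    }
}
```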
Enable lip syncing for a custom avatar
If you'd like to enable lip syncing for your own avatar instead of a Ready Player Me avatar, please follow the instructions below:
Prerequisites
- Your avatar is exported with blend shapes from your 3D modeling software of choice.
- After importing, the viseme blend shapes appear under BlendShapes on the Skinned Mesh Renderer, each with its own index. You can verify this with a small script like the one after this list.
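As a quick sanity check (this helper is illustrative, not part of the SDK), you can log every blend shape on the renderer to confirm the visemes imported correctly and to note the index where they start:

```csharp
using UnityEngine;

// Illustrative helper, not part of the SDK: logs every blend shape on a
// Skinned Mesh Renderer so you can confirm the viseme shapes imported
// correctly and note the index where they start.
public class BlendShapeLister : MonoBehaviour
{
    private void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        Mesh mesh = smr.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            Debug.Log($"[{i}] {mesh.GetBlendShapeName(i)}");
        }
    }
}
```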
Set the starting index
As of Inworld SDK Ver. 2.1.3, if your avatar's first viseme is not named viseme_sil, you can change the expected name on the InworldFacialAnimation component.
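If you prefer to resolve the starting index in code, Unity's Mesh.GetBlendShapeIndex can look it up by name. The helper below is a sketch under that assumption, not an SDK API:

```csharp
using UnityEngine;

// Illustrative sketch, not an SDK API: resolve the starting viseme index
// by blend shape name instead of hardcoding it.
public static class VisemeIndexFinder
{
    // Returns -1 if the mesh has no blend shape with the given name.
    public static int Find(SkinnedMeshRenderer renderer, string firstVisemeName = "viseme_sil")
    {
        return renderer.sharedMesh.GetBlendShapeIndex(firstVisemeName);
    }
}
```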
(Optional) Configure P2V Map
If your avatar contains all the visemes in the standard continuous order from sil to U (viseme_sil, viseme_PP, viseme_FF, viseme_TH, viseme_DD, viseme_kk, viseme_CH, viseme_SS, viseme_nn, viseme_RR, viseme_aa, viseme_E, viseme_I, viseme_O, viseme_U), you can skip this step.
Otherwise, you will need to configure the P2V (phoneme-to-viseme) map. You can check the data for InworldLipSync at Assets > Inworld > Inworld.Assets > Animations > Facial.
If you do not know how to set the data, please check the reference here.
⚠️ Note: We recommend duplicating this InworldLipSync asset, updating the copy, and then assigning your generated asset to your InworldCharacter's InworldFacialAnimation.
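To make the P2V map concrete, the sketch below shows one way such a mapping could be stored and queried. The real InworldLipSync asset layout may differ; every type, field, and menu name here is an assumption for the example only.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of what a P2V map stores; the actual InworldLipSync
// asset layout may differ, and every name below is an assumption.
[Serializable]
public class PhonemeToVisemeEntry
{
    public string phoneme;   // e.g. "p", "b", "m"
    public int visemeIndex;  // e.g. the index of viseme_PP
}

[CreateAssetMenu(menuName = "Example/P2VMap")]
public class ExampleP2VMap : ScriptableObject
{
    public List<PhonemeToVisemeEntry> entries = new List<PhonemeToVisemeEntry>();

    // Look up the viseme index for a phoneme; fall back to 0 (silence).
    public int GetVisemeIndex(string phoneme)
    {
        foreach (PhonemeToVisemeEntry entry in entries)
        {
            if (entry.phoneme == phoneme)
                return entry.visemeIndex;
        }
        return 0;
    }
}
```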
Before 2.1.2
The lip sync library before Ver. 2.1.2 uses Oculus Lipsync.
If you want to continue using this package and integrate your own lip sync, please replace it under GLTFAvatarLoader > LipAnimations.
Upgrade from 2.1.1 or lower
If you want to upgrade to Ver. 2.1.2 or higher, please note that the legacy package (Ver. 2.1.0 or lower) may overwrite the Oculus Lipsync package if both have been installed in your project. We recommend upgrading Oculus Lipsync as well.