How it works
To set up your own level with your own characters, you need to understand how the example level here works.
Inworld Session Controller
When you click the Assign to Current Map button, a Blueprint of type BP_InworldSessionController is added to the level. It contains the API key you selected in the dropdown menu.
The Blueprint starts the Inworld Session on Begin Play.
When you click the Create Blueprint button, the Blueprint Asset is created together with the Skeletal Mesh, Materials, and Textures.
The Character Blueprint contains an OVRLipSyncPlaybackActor component that provides Visemes for lip sync. It is important that the InworldCharacterComponent is set up with the appropriate Brain Name (Machine Readable Id) for your character. The Inworld Studio Widget tool does this automatically, but don't forget to do it yourself when you set up your own Blueprint in UE. The Brain Name can be copied from the Inworld Studio website. The Blueprint must also have an Audio component to play the character's voice.
The Character Blueprint is set up with a default Animation Blueprint that runs a default Idle animation. Every frame it gets Visemes from the OVRLipSyncPlaybackActor component and updates the lip sync. It also provides simple blinking and head motion.
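One plausible way the per-frame viseme update described above could work is simple interpolation toward the current viseme weights, so mouth shapes change smoothly rather than snapping each frame. This is an illustrative sketch, not the actual Animation Blueprint or OVRLipSync logic; the function and parameter names are assumptions.

```python
def blend_visemes(current, target, delta_seconds, blend_speed=10.0):
    """Move each viseme weight a fraction of the way toward its target,
    so mouth shapes change smoothly rather than snapping per frame.
    Illustrative only; not the SDK's real update code."""
    alpha = min(1.0, blend_speed * delta_seconds)
    return [c + (t - c) * alpha for c, t in zip(current, target)]

NUM_VISEMES = 15  # OVRLipSync exposes 15 viseme channels

current = [0.0] * NUM_VISEMES   # mouth closed
target = [0.0] * NUM_VISEMES
target[1] = 1.0                 # one viseme fully active
for _ in range(60):             # one second at 60 fps
    current = blend_visemes(current, target, 1 / 60)
print(round(current[1], 3))     # prints 1.0 (weight has converged)
```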
When you click Setup as Inworld Player in the Blueprint's context menu, it is set up with two components:
- InworldPlayerTargetingComponent. It has useful settings for character targeting: maximum distance and angle. Set Interaction Dot Threshold to -1 to ignore the character-player facing angle entirely.
- AudioCaptureComponent. It must be set up with DefaultMuteMicSubmix so that mic audio doesn't go to the speakers while recording.
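The distance and angle checks above boil down to a range test plus a dot product between the player's forward vector and the direction to the character. The sketch below is a minimal illustration of that idea, not the component's real code; the function name and defaults are assumptions. Because the dot product of two unit vectors is always at least -1, a threshold of -1 accepts every facing angle, which is why that value disables the angle check.

```python
import math

def can_target(player_pos, player_forward, char_pos,
               max_distance=300.0, interaction_dot_threshold=0.5):
    """Return True if the character is close enough and roughly in front
    of the player. A threshold of -1 accepts any facing angle."""
    dx = char_pos[0] - player_pos[0]
    dy = char_pos[1] - player_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > max_distance:
        return False
    # Dot product of the player's (unit) forward vector with the unit
    # vector toward the character: 1 = straight ahead, -1 = behind.
    dot = (player_forward[0] * dx + player_forward[1] * dy) / dist
    return dot >= interaction_dot_threshold

print(can_target((0, 0), (1, 0), (100, 0)))    # True: ahead, in range
print(can_target((0, 0), (1, 0), (-100, 0)))   # False: behind the player
print(can_target((0, 0), (1, 0), (-100, 0),
                 interaction_dot_threshold=-1))  # True: angle ignored
```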
When you start the level, BP_InworldSessionController starts the Inworld Session and prints the Connection State on screen. The InworldPlayerTargetingComponent on the Player's Pawn searches for a target InworldCharacterComponent; when one is within the configured distance and angle, an Audio Session with that character begins. During the Audio Session, sound from your mic is captured by the InworldPlayerAudioCaptureComponent and sent to the server through the InworldPlayerComponent, and responses come back through the InworldCharacterComponent to the character's Audio component. Animation Visemes are generated from the audio coming from the server, and the character's Animation Blueprint translates them into lip sync animation.
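The targeting step in the flow above, finding the closest character that passes the distance and angle checks, can be sketched roughly as follows. This is a simplified model under assumed names and defaults, not the InworldPlayerTargetingComponent's real implementation.

```python
import math

def best_target(player_pos, player_forward, characters,
                max_distance=300.0, dot_threshold=0.5):
    """Pick the closest character that is within range and in front of
    the player; return its name, or None when nothing qualifies.
    Illustrative sketch only; names and defaults are assumptions."""
    best_name, best_dist = None, float("inf")
    for name, pos in characters.items():
        dx, dy = pos[0] - player_pos[0], pos[1] - player_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > max_distance or dist >= best_dist:
            continue
        dot = (player_forward[0] * dx + player_forward[1] * dy) / dist
        if dot >= dot_threshold:
            best_name, best_dist = name, dist
    return best_name

characters = {
    "innkeeper": (120, 10),   # ahead of the player, in range
    "guard": (-50, 0),        # behind the player: fails the angle check
    "merchant": (900, 0),     # too far away: fails the distance check
}
print(best_target((0, 0), (1, 0), characters))  # prints innkeeper
```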