I am experiencing an issue where my Character Creator 4 (CC4) character receives audio from the Convai service, but neither the talking animation nor the lip-sync (blendshapes) are triggered.
My System Specifications:
Unreal Engine Version: 5.7 (Custom/Latest Build)
Convai_Reallusion_AnimBP Version: 1.0.3
Character Base: Reallusion Character Creator 4 (CC4)
Problem Description:
Audio Output: The character successfully receives and plays audio from the Convai service.
Animation Transition: The character remains in the “Idle Animation” state and does not transition to the “Talk Animation” state machine, even when audio is playing.
Lip-Sync/Blendshapes: Although the logs show that blendshape frames are being received (e.g., 404 or 1180 frames matched), there is no visible mouth movement on the mesh.
Logs & Observations:
Log Warning: ConvaiUtilsLog: Warning: FixCC5LipsyncPostProcessBlendshapes: No post process anim instance found.
Component Setup: Convai Face Sync is set to CC4 Extended Blendshapes with the Neurosync provider.
AnimBP Issue: Despite casting to the character and setting the isTalking variable, the AnimGraph never switches to the “True” pose in the Blend Poses by bool node.
I have already resolved the earlier deprecation warnings about Get Skeletal Mesh by updating to Get Skeletal Mesh Asset, but the core animation-trigger issue persists.
Could you please provide guidance on why the isTalking state might not be correctly recognized by the AnimBP or why the blendshapes are failing to apply to the post-process anim instance?
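To make the failure mode above concrete, here is a minimal sketch (plain Python, not the actual Unreal or Convai API) of what the Blend Poses by bool node does: it re-evaluates a boolean every frame and selects one of two poses. If the Event Graph sets isTalking on a different object than the one the AnimGraph reads (for example, after a failed cast to the wrong character class), the selector only ever sees False and the idle pose wins. The function and pose names are illustrative only.

```python
# Illustrative stand-in for the AnimGraph's "Blend Poses by bool" selector
# (hypothetical names; not Unreal Engine code). The True-pin pose is chosen
# only when the boolean it reads is actually set on the evaluated instance.

def blend_poses_by_bool(is_talking: bool, true_pose: str, false_pose: str) -> str:
    """Return the pose the AnimGraph would output for the given flag."""
    return true_pose if is_talking else false_pose

# If the Event Graph never writes the flag on the SAME anim instance the
# graph evaluates, the output stays on the idle branch forever:
assert blend_poses_by_bool(False, "Talk", "Idle") == "Idle"

# Once the flag is set where the graph reads it, the pose switches:
assert blend_poses_by_bool(True, "Talk", "Idle") == "Talk"
```

The practical check this implies: confirm with a Print String (or breakpoint) inside the AnimBP itself that the isTalking variable the Blend node reads is actually flipping to true, rather than only checking the character Blueprint that sets it.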
Yes.
What Do the Logs Say?
Data Flow: Successful (298 frames received).
Face System: Ready (60 modifications completed).
LipSync: Appears to be working (bIsPlaying: true).
Result: Data is entering Unreal, but because you haven’t made the final connection (Input Pose) that “applies” that data to the character’s face, you’re not seeing any visual movement.
That’s what Gemini said when I shared the logs with it.
In addition to K3’s suggestion to use BP_ConvaiChatbotComponent (rather than reparenting the character), we noticed that the bIsTalking boolean isn’t being set to true correctly.
Please try applying the modification shown in the attached image to the Lipsync node within the Animation Blueprint Event Graph.
I set up the character exactly as in the video and applied all the settings from the tutorial. I also added the bIsTalking edit you sent, but it still doesn’t work. Could it be because I’m using a CC4 character? Did you optimize the face animations for a CC5 character?
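The CC4-vs-CC5 suspicion above has a simple mechanical explanation worth ruling out: if the incoming lip-sync frames use curve names that the mesh’s morph targets don’t expose, the frames can “match” at the data level while driving nothing visible. The sketch below (plain Python; all curve and morph-target names are made up for illustration, not the real Convai or Reallusion sets) shows the kind of name-set comparison that reveals such a mismatch.

```python
# Hypothetical check: which lipsync curve names have no matching morph
# target on the mesh? Every name here is illustrative, not the actual
# CC4/CC5/Convai naming convention.

def unmapped_curves(frame_curves, mesh_morph_targets):
    """Return lipsync curve names with no morph target on the mesh."""
    return sorted(set(frame_curves) - set(mesh_morph_targets))

frame_curves = ["Jaw_Open", "Mouth_Smile_L"]   # names the provider sends
cc4_targets  = ["Jaw_Open", "Mouth_Smile_L"]   # names this mesh exposes
other_targets = ["V_Open", "V_Wide"]           # a differently named set

assert unmapped_curves(frame_curves, cc4_targets) == []   # all curves apply
assert unmapped_curves(frame_curves, other_targets) == ["Jaw_Open", "Mouth_Smile_L"]
```

In the editor, the equivalent check is to open the Skeletal Mesh asset, inspect its Morph Target list, and compare those names against the blendshape set the Face Sync component is configured to send.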
I fixed the lip-sync issue by switching to ARKit Blendshapes. However, the character still doesn’t transition to the talk animation, and it doesn’t look at me while speaking; the idle animation just continues.