Simple Actions not triggering

Hello,

I’m trying to implement some custom actions for my NPC in my UE5 project, but there seems to be an issue parsing the response. I followed a couple of different tutorials and have read the documentation multiple times, but I’m not sure what I missed. The same action with the same character fires in the sample project but not in mine.

You can see in the logs that everything else seems to be working, but the two lines showing the action being recognized are missing.

P.S. I have enabled Actions (Experimental) in Project Settings → Convai.
My NPC is a child of the ‘Convai Base Character’ class.

Any help would be appreciated.


The logs from sample project:

ConvaiGRPCLog: Received Text Observe as I, The Fallen God, grace you with a dance most divine!: | Character ID : 6aaf9646-dfc9-11ef-b528-42010a7be016 | Session ID : c5b9067f4bbe401036bab3b59d1e8351 | IsFinalResponse : False
ConvaiAudioStreamerLog: PlayLipSyncWithPreGeneratedDataSynced: Detected New LipSync Chunk ChunkDuration: 3.849958 ChunkLipSyncFrameRate: 100.000000 FrameIndex:0 ChunkFrameCounter: 482 ExpectedFrameCount:384.995850 ChunkFrameCounter: 482
ConvaiAudioStreamerLog: Play Available Audio and LipSync - Audio Duration: 3.849958 - Audio Chunks: 1 - LipSync Duration: 0.710000 - LipSync Chunks: 1 - Audio Chunks Remaining: 0 - LipSync Chunks Remaining: 0
ConvaiAudioStreamerLog: PlayLipSyncWithPreGeneratedDataSynced: Failed to detect New LipSync Chunk due to insufficent audio chunks NumAudioChunks: 0 NumLipSyncChunks: 0 FrameIndex:0 ChunkFrameCounter: 385 ExpectedFrameCount:384.995850 ChunkFrameCounter: 385
ConvaiAudioStreamerLog: PlayLipSyncWithPreGeneratedDataSynced: Failed to detect New LipSync Chunk due to insufficent audio chunks NumAudioChunks: 0 NumLipSyncChunks: 0 FrameIndex:0 ChunkFrameCounter: 386 ExpectedFrameCount:384.995850 ChunkFrameCounter: 386
ConvaiGRPCLog: Chatbot Total Received Lipsync Responses: 1179 Responses
ConvaiChatbotComponentLog: Chatbot Total Received Audio: 11.759625 seconds
ConvaiGRPCLog: Received Text : | Character ID : 6aaf9646-dfc9-11ef-b528-42010a7be016 | Session ID : c5b9067f4bbe401036bab3b59d1e8351 | IsFinalResponse : True
ConvaiGRPCLog: GetResponse SequenceString: Dances
ConvaiGRPCLog: Action: Dances
ConvaiGRPCLog: GetResponse EmotionResponseDebug: session_id: "c5b9067f4bbe401036bab3b59d1e8351"
emotion_response: "Joy Anticipation Surprise Acceptance Apprehension Pensiveness"


The logs from my project:

ConvaiGRPCLog: Received Text Care to witness the divine attempt a jig?: | Character ID : 6aaf9646-dfc9-11ef-b528-42010a7be016 | Session ID : 14c45c60ed976d08d6c59a24199ca6ec | IsFinalResponse : False
ConvaiAudioStreamerLog: PlayLipSyncWithPreGeneratedDataSynced: Detected New LipSync Chunk ChunkDuration: 2.537458 ChunkLipSyncFrameRate: 100.000000 FrameIndex:0 ChunkFrameCounter: 527 ExpectedFrameCount:253.745850 ChunkFrameCounter: 527
ConvaiAudioStreamerLog: Play Available Audio and LipSync - Audio Duration: 2.537458 - Audio Chunks: 1 - LipSync Duration: 0.710000 - LipSync Chunks: 1 - Audio Chunks Remaining: 0 - LipSync Chunks Remaining: 0
ConvaiAudioStreamerLog: PlayLipSyncWithPreGeneratedDataSynced: Failed to detect New LipSync Chunk due to insufficent audio chunks NumAudioChunks: 0 NumLipSyncChunks: 0 FrameIndex:0 ChunkFrameCounter: 254 ExpectedFrameCount:253.745850 ChunkFrameCounter: 254
ConvaiGRPCLog: Chatbot Total Received Lipsync Responses: 1245 Responses
ConvaiChatbotComponentLog: Chatbot Total Received Audio: 12.420209 seconds
ConvaiGRPCLog: Received Text : | Character ID : 6aaf9646-dfc9-11ef-b528-42010a7be016 | Session ID : 14c45c60ed976d08d6c59a24199ca6ec | IsFinalResponse : True
ConvaiGRPCLog: GetResponse EmotionResponseDebug: session_id: "14c45c60ed976d08d6c59a24199ca6ec"
emotion_response: "Joy Anticipation Acceptance Distraction Apprehension Annoyance"


Hello @1wm5w1ok,

Sorry to hear you’re running into issues.

To help troubleshoot, could you please confirm the version of the Convai SDK and Unreal Engine you’re currently using?

As a first step, we recommend testing your character in the default First Person Template level. Please clear your Output Log before starting. Then:

  1. Add your NPC to the level.

  2. Try the “Follow me” command and check if it works.

  3. If it doesn’t, please share the Output Log so we can investigate.

  4. If it does work, test the “Move to” action by adding a reachable object (e.g., a cube) to your Object list.
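
If it helps, the same thing can be done from C++ instead of the Blueprint “Add Object” node. This is only a rough sketch: the Convai type and member names below (FConvaiObjectEntry, the Environment property on the chatbot component, UConvaiEnvironment::AddObject) are assumptions based on the Blueprint nodes and may differ in your plugin version, so please verify them against the plugin headers (e.g. ConvaiDefinitions.h) first.

// Sketch only -- the Convai names are assumptions, verify against the plugin headers.
#include "ConvaiDefinitions.h"
#include "ConvaiChatbotComponent.h"

static void RegisterCubeForMoveTo(UConvaiChatbotComponent* Chatbot, AActor* Cube)
{
    if (!Chatbot || !Chatbot->Environment || !Cube)
    {
        return;
    }

    FConvaiObjectEntry CubeEntry;
    CubeEntry.Name = TEXT("Cube");                      // the name the NPC is told about
    CubeEntry.Description = TEXT("A small test cube."); // short description for the model
    CubeEntry.Ref = Cube;                               // assumed actor-reference field

    // Assumed C++ equivalent of the "Add Object" Blueprint node.
    Chatbot->Environment->AddObject(CubeEntry);
}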

Please make sure your logs do not include your API key before sharing them. Looking forward to helping you sort this out!

Hey @K3,
Thank you for the quick reply!
Sorry it took me a bit to get back to you; I had modified the template level for other tests.

Engine Version: 5.4.4-35576357+++UE5+Release-5.4
Convai Plugin Version: 3.3.0

This is the full log I get from the template level. The AI can converse, but none of the action commands fire, not “Follow me”, “Dance”, or any of the others.

output logs.txt (26.5 KB)


Please try creating a new Unreal Engine project and follow the steps below using version 3.5.2 of our plugin:

  • Download the ZIP file that matches your Unreal Engine version from our latest release: Convai Unreal SDK 3.5.2

  • In your Unreal project directory, create a Plugins folder if it doesn’t already exist.

  • Move the Convai folder from the ZIP into the Plugins folder.

  • Open your project.

  • Go to Project Settings > Convai and enter your API key.

Let us know if the issue persists after this clean setup!


I created a new project, added the plugin, imported my Metahuman, and added the ‘ConvaiPlayer’ component to the FirstPersonCharacter blueprint. However, the issue still persists.
I can confirm the plugin version is 3.5.2 in the project plugins.

It’s possible there may still be a misstep in the setup process. Could you try creating a new character and testing again? Also, please share your output logs so we can take a closer look at what might be going wrong.

Apologies, I forgot to include the log files.
Here are the logs for a freshly imported Metahuman.
I changed the parent class to ‘ConvaiBaseCharacter’, added the character ID and ConvaiFaceSync, and changed the animations to the Convai animations. I’m also noticing that the NPC is floating in the air.
output logs.txt (32.5 KB)
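
Side note on the floating NPC: in many cases that part is not Convai-specific at all. After reparenting a Metahuman, the skeletal mesh often just needs to be moved down so the feet sit at the bottom of the capsule. A generic, engine-only sketch (the helper function and where you call it from are made up for the example):

// Engine-only helper: pulls the mesh down by the capsule half height so the
// character stands on the ground instead of hovering at the capsule center.
#include "GameFramework/Character.h"
#include "Components/CapsuleComponent.h"
#include "Components/SkeletalMeshComponent.h"

void FixMeshOffset(ACharacter* NPC)
{
    if (!NPC || !NPC->GetMesh() || !NPC->GetCapsuleComponent())
    {
        return;
    }

    const float HalfHeight = NPC->GetCapsuleComponent()->GetScaledCapsuleHalfHeight();
    NPC->GetMesh()->SetRelativeLocation(FVector(0.f, 0.f, -HalfHeight));
}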

Did you enable Experimental Actions? If so, please disable it.

I had, after not being able to get it to work yesterday. I’ve disabled it now 🙂
output logs.txt (37.7 KB)


I tried changing the player BP’s parent to ‘Convai Player Character’. This stops the NPC from floating and makes it follow you from the start, but it does not fix the commands issue. I tried running the project on a different machine, but still no luck 🙁 Could this be an issue with the Metahuman integration?

I migrated ‘Taro’, the Metahuman asset, to my project to see if it changes anything.
If I set the character ID to the one in the sample project, Taro straight up refuses to do any of the actions.
If I set the character ID to mine, it agrees to do the action verbally but has the same issue as before.
I tried toggling the experimental actions, which didn’t help either.

Thanks for the update and sorry to hear you’re still facing issues.

To better understand what might be going wrong, could you please record a video showing how you’re setting up Convai in a new project from scratch? Please include the steps of:

  • Creating a fresh Unreal project
  • Installing the Convai plugin
  • Setting up the Convai character
  • Creating a new character on convai.com and assigning it

You can send the video via DM or upload it as an unlisted video on YouTube and share the link. This will help us troubleshoot more effectively. Looking forward to your video!

Oh I just got a new warning! Check this out:
output logs.txt (1.1 KB)

I’ll start recording the video now 🙂 Thanks for your patience!

Thanks for sending the video via PM!

Please make sure you’re using a Blueprint project, not a C++ one. Also, instead of setting up the player manually, change the Player’s Parent Class to ConvaiBasePlayer. Give it a try with this setup and let us know how it goes!


The good news is that it works in a Blueprint project, without needing to change the Player’s parent class, so I guess we found the culprit. However, this doesn’t solve the main issue, because my project has lots of C++ classes that are necessary for the game 😥 Do you have any ideas on where we can go from here?

Hello @1wm5w1ok,

Could you please share your Player Blueprint setup?

Hey again @K3,
Thanks for following up.
This is the Blueprint:


This is just the basic .cpp class, but I thought I’d share it anyway.
output logs.txt (3.3 KB)

In the Start Talking function, the “Generate Actions” option is disabled. If you enable it, everything should work as expected 🙂
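
For anyone calling this from C++ rather than from the Blueprint node, the same flag has to be passed wherever Start Talking is invoked. A rough sketch, assuming the C++ entry point mirrors the Blueprint node’s pins (the exact signature should be checked in ConvaiPlayerComponent.h for your plugin version):

// Sketch only -- the parameter names/order are assumptions taken from the
// Blueprint node pins, not a verified signature.
#include "ConvaiPlayerComponent.h"
#include "ConvaiChatbotComponent.h"

static void StartTalkingWithActions(UConvaiPlayerComponent* ConvaiPlayer,
                                    UConvaiChatbotComponent* Chatbot)
{
    if (!ConvaiPlayer || !Chatbot)
    {
        return;
    }

    // GenerateActions is the pin that was unticked in the Blueprint.
    // Without it the response carries text/emotion only, and the
    // "GetResponse SequenceString" / "Action:" log lines never appear.
    ConvaiPlayer->StartTalking(
        Chatbot,
        Chatbot->Environment,     // assumed property holding the Convai environment
        /*GenerateActions*/ true,
        /*VoiceResponse*/   true,
        /*RunOnServer*/     false,
        /*StreamPlayerMic*/ false);
}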


Wow, I can’t believe I missed that.
So there is an improvement; however, it still isn’t working as intended.
I can see with my first character that it tries to capture the action, but the ‘Response SequenceString’ is always empty.
output logs.txt (41.9 KB)

So I switched to a test character. This one recognizes the string, here ‘dances’, but the action comes back as ‘None’, even though I have added ‘dances’ to my character’s list of actions and created a custom event for it. Am I missing something?
output logs.txt (8.6 KB)

Edit: It is the same with the default actions.
output logs.txt (13.0 KB)

Your action_config appears to be empty:

action_config { characters { } classification: "multistep" }

I would recommend testing this in a fresh project. This can help rule out any misconfigurations or project-specific issues that might be causing the problem.
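
Once the action_config comes back populated and the log shows a line like “Action: Dances”, the remaining step is mapping that string to gameplay. Purely as an illustration, here is a handler sketch; how it gets wired to the plugin’s action event depends on your setup, and only the engine calls below are guaranteed APIs:

// Hypothetical handler -- call it from whatever action event your character
// Blueprint/class receives. The body uses only standard engine APIs.
#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "Animation/AnimMontage.h"

void HandleConvaiAction(ACharacter* NPC, const FString& ActionName, UAnimMontage* DanceMontage)
{
    if (!NPC)
    {
        return;
    }

    // Action names arrive as plain strings (see "Action: Dances" in the sample
    // project logs), so compare case-insensitively.
    if (ActionName.Equals(TEXT("Dances"), ESearchCase::IgnoreCase))
    {
        if (DanceMontage)
        {
            NPC->PlayAnimMontage(DanceMontage);
        }
    }
    else
    {
        UE_LOG(LogTemp, Warning, TEXT("Unhandled Convai action: %s"), *ActionName);
    }
}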