Using Convai in Unreal Engine with a custom listening system based on audio level thresholds

I created a function where the character starts listening only when the input audio level reaches a specific threshold, and stops listening when the level drops below it. I cannot keep the listening always enabled because I have conditions that control when the character is allowed to listen.

The issue is that by the time the audio level first crosses the threshold, part of the speech has already been lost. Sometimes the character misses the beginning of the sentence, and in some cases it fails to capture the sentence at all. This seems to be related to the node execution order and the delay introduced by enabling listening only after the threshold is reached.
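To make the failure mode concrete, here is a minimal sketch in plain Python (not the actual Blueprint graph, and the threshold value is assumed) of the threshold-gated capture described above. Any frames that arrive before the level crosses the threshold are simply discarded, which is why the onset of speech is lost:

```python
THRESHOLD = 0.3  # assumed amplitude threshold (0..1), hypothetical value

def gate_capture(frames):
    """Keep only frames recorded while the level is at or above THRESHOLD.

    Each frame is a (level, samples) pair. Frames arriving before the
    threshold is first crossed never reach the capture buffer.
    """
    captured = []
    listening = False
    for level, samples in frames:
        if level >= THRESHOLD:
            listening = True       # start listening only now
        elif listening:
            listening = False      # level dropped back below: stop
        if listening:
            captured.append(samples)
    return captured

# A sentence whose volume ramps up: the quiet first frames ("h", "el")
# are below the threshold, so the captured audio starts mid-word.
frames = [
    (0.05, "h"), (0.15, "el"),       # onset, below threshold -> dropped
    (0.45, "lo "), (0.50, "there"),  # above threshold -> captured
    (0.10, ""),                      # trailing silence -> dropped
]
print("".join(gate_capture(frames)))  # prints "lo there", not "hello there"
```

The same thing happens in the Blueprint version: the samples that pushed the level over the threshold (and everything before them) were never recorded, because listening is only enabled after the crossing is detected.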

The current stable Unreal plugin is not designed to work reliably with this kind of start/stop logic, which is why the beginning of the speech is lost when listening is enabled only after the user has already started talking.

Hands Free Conversation is only supported in our new Unreal Beta plugin, and even there some features are still missing or experimental. If you’d like to try it, you can follow the setup here:
https://docs.convai.com/api-docs/plugins-and-integrations/unreal-engine-plugin-beta-overview

In short: this behavior is not really supported in the current stable plugin; for true hands-free / continuous listening, please experiment with the beta plugin.