Hello, everyone!
I’m working on an integration where I need to intercept the user’s input in Convai, send it to a webhook in Make for processing, and then return the response so the avatar can speak the processed message.
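To make the intended flow concrete, here is a rough C++ sketch of the middle step only: take the recognized text (however Convai exposes it), POST it to the Make webhook, and hand the reply to whatever makes the avatar speak. The webhook URL, the JSON payload shape, and the `SpeakText` call are placeholders of my own, not real Convai API; only the `FHttpModule` part is standard Unreal HTTP (requires the "HTTP" module in Build.cs).

```cpp
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

// Placeholder URL for the Make webhook that processes the text.
static const FString MakeWebhookUrl = TEXT("https://hook.make.com/your-webhook-id");

void SendTranscriptToMake(const FString& RecognizedText)
{
    // Standard Unreal HTTP request to the Make webhook.
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(MakeWebhookUrl);
    Request->SetVerb(TEXT("POST"));
    Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));

    // Minimal JSON body; assumes the Make scenario replies with plain text.
    // (Real code should JSON-escape RecognizedText.)
    const FString Body = FString::Printf(TEXT("{\"text\":\"%s\"}"), *RecognizedText);
    Request->SetContentAsString(Body);

    Request->OnProcessRequestComplete().BindLambda(
        [](FHttpRequestPtr Req, FHttpResponsePtr Resp, bool bSucceeded)
        {
            if (bSucceeded && Resp.IsValid())
            {
                const FString ProcessedReply = Resp->GetContentAsString();
                // Hand ProcessedReply to the avatar's TTS here -- placeholder call,
                // not an actual Convai function:
                // MyConvaiCharacter->SpeakText(ProcessedReply);
            }
        });

    Request->ProcessRequest();
}
```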
My main questions are:
- Is there a specific event within Convai to capture the recognized text before it’s sent to TTS?
- Is it possible to modify this flow to send the text to a webhook and return the response for the avatar to speak?
- What Blueprint or configuration would be most suitable for this approach?
I really appreciate any help!