How to get different results for voice and text in the same interaction?

Hi,
I want to use the AI character for navigation. The user will ask it, “Where is the Empire State Building?” The AI voice response could be something like “It is located in New York, USA”, but I should also receive the GPS coordinates as text that I can use as a developer. The AI should not say the coordinates aloud.
Can this be done in one interaction, or at least with only one interaction from the user?
We could maybe send two queries to the AI: the first is the voice input from the user. Once we detect that the user wants to know the location/direction/position of something, then behind the scenes we get the AI to send the GPS coordinates in CSV format that the developer can use.

Is it possible via “Actions” to detect that the user might need directions to a location and respond accordingly?
There could be many ways for the user to ask for directions:
“Where is …”
“How do I reach …”
“Give me directions for …”
etc.
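As a rough illustration of what such a trigger could look like on the client side (this is not a built-in Convai feature; the patterns below are hypothetical examples based on the phrasings above, not an exhaustive list):

```python
import re

# Illustrative location-intent patterns, mirroring the example phrasings
# above. A real system would need a broader list or an NLU model.
LOCATION_PATTERNS = [
    r"\bwhere is\b",
    r"\bhow do i (reach|get to)\b",
    r"\b(give me )?directions (for|to)\b",
]

def wants_directions(transcript: str) -> bool:
    """Return True if the user's transcript looks like a location query."""
    text = transcript.lower()
    return any(re.search(pattern, text) for pattern in LOCATION_PATTERNS)

print(wants_directions("Where is the Empire State Building?"))  # True
print(wants_directions("Tell me a joke"))                       # False
```

When this check fires, the app could issue the hidden follow-up query for coordinates while the character gives its normal spoken reply.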

Kindly let me know how to achieve this.

The actual guidance system will be developed separately. We just need the AI to understand that the user might wish to go somewhere, say something like “Here you go” or “I can help you with that”, and give the developer coordinates to use behind the scenes.

Thank you.

Hello @Aseem,

Welcome to the Convai Developer Forum!

It’s not possible to separate the voice response from the text output; if coordinates are included in the response, they will also be spoken aloud. You could design a flow using Narrative Design to guide the interaction, but any coordinates in the response would still be voiced.

Can I get the text before the character speaks it? Then I could just interrupt its speech once I have the text. Would that work?

No, this is not possible.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.