The character sometimes starts messages with ">response" when using gpt-4o-mini

We’ve been testing with one character set to use gpt-4o-mini as the core LLM. It usually works fine, but sometimes its messages start with “>response”, which is both spoken aloud by the character and shown in the message text.
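Until the root cause is found, one possible client-side guard is to strip the stray marker from the reply before it is displayed or sent to text-to-speech. A minimal sketch (the helper name and regex are ours, not part of any Convai SDK):

```python
import re

# Matches a leading ">response" artifact, with optional surrounding
# whitespace, case-insensitively.
PREFIX_RE = re.compile(r"^\s*>response\s*", re.IGNORECASE)

def clean_reply(text: str) -> str:
    """Remove a leading '>response' marker from a character reply, if present."""
    return PREFIX_RE.sub("", text, count=1)
```

This only masks the symptom on the client; replies without the marker pass through unchanged.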

Hello @heivoll,

Welcome to the Convai Developer Forum!

Thank you for bringing this issue to our attention. To assist you better, could you please provide the following details?

  1. The Character ID of the affected character.
  2. The Session ID where this issue occurred.

This information will help us investigate the problem and provide a resolution. Let us know if you need assistance gathering these details! :blush:

Character ID: 80e1a088-bc8a-11ef-b85d-42010a7be016
Session ID: 661174e00c7a50b02f624d3e83114e2c

Note that we’ve changed the model to GPT-4o again to avoid the issue. But feel free to change it in order to test - this character is not in use in a production setting.

hey!
Thanks for the reply

I am Kamal, from Convai.

We’re having some trouble debugging this because it isn’t easily reproducible. Would it be possible for you to set your character’s temperature to “0” and check whether you get the same error? If so, just share the session ID here and I’ll help you debug it.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.