Very slow response

Hello guys!!

I have the same problem. I hadn't used my account for about a month (although it is still billed every month), and five days ago I started using it again for some new projects, but it also takes about 10 seconds to respond.

Any solution? Thank you.

Hello @Lumi_Convai,

Welcome to the Convai Developer Forum!

Thanks for your message. To better investigate this issue, could you please share a few more details?

  • Character ID
  • Session ID
  • Are you testing this on convai.com or within your own application?
  • If you’re testing within your app, please try testing directly on convai.com and let us know if the issue persists.
  • Also, does the delay still happen when you create a new character?

With this information, we’ll be able to assist you more effectively. Looking forward to your response!

Hello @K3, could I get some feedback and help with my case? Nothing has changed for months.

The information is still the same:

Character ID:
bcd04d60-bdfd-11ee-adad-42010a40000f

Session ID:
Every Session

Problem occurs in the browser and in my app.

It also happens when I create a new character.

The main problem seems to be the Foundation Model, Claude 3.5. With my OpenAI Nova voice, answers usually take more than 10 seconds, which makes it effectively unusable. Changing the Foundation Model or voice is not really an option. (Most voices are slow with Claude.)

Hello @K3,

Sorry for the delay; here is the info:

  • Character ID: 0e1a4762-96d3-11ef-8e96-42010a7be016
  • Session ID: 8f1cba46d0e63d91f7471281309ec61d (created today to test)
  • I am testing through the convai.com website, and the long response times are the same there.
  • The same happens in my UE 5.5 application. I also have a UE 5.4 version that used to work fine with another Character ID (433f2f9e-915f-11ef-a874-42010a7be011), and the same thing is happening there now.
  • I have not yet tested whether this happens when I create a new character; I will test that now.

Thanks !!!

Well, I tested creating a new character and used models other than Claude-Sonnet-3.5, and that improved the response time.

I have noticed that the larger the model, the longer it takes to respond. With Claude-Sonnet-3.5 it used to be fast (1-3 seconds), but now it takes between 8 and 10 seconds.

The other models I have tested take between 2 and 4 seconds. Gemini-Flash is the fastest.

The problem with these models is that they do not answer well from the loaded documents. Although Claude-Sonnet-3.5 is not a top model, it is one of the best at answering from RAG. I had not tried the Gemini models before; Gemini-Flash is not very accurate in its responses, but Gemini-Pro works much better and is more accurate.

Anyway, it’s still not 100%, but something can be done while it’s sorted out, at least on my end.
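In case it helps anyone else comparing combinations, this is roughly how I time the full round trip from outside the panel. It's only a minimal sketch: the endpoint URL, header name, form-field names, and the "-1" new-session value are written from memory of the Convai REST docs, so treat them as assumptions and double-check them against the official documentation before running it.

```python
# Minimal round-trip timing sketch. Assumptions: the endpoint URL, header name,
# and form-field names below are placeholders based on the Convai REST docs --
# verify them in the official documentation before using this.
import time
import statistics
import requests

API_KEY = "YOUR-CONVAI-API-KEY"                              # replace with your key
ENDPOINT = "https://api.convai.com/character/getResponse"    # assumption: chat endpoint

def ask_character(char_id: str, session_id: str, text: str) -> requests.Response:
    """Send one user message to a character and return the raw HTTP response."""
    return requests.post(
        ENDPOINT,
        headers={"CONVAI-API-KEY": API_KEY},                 # assumption: header name
        data={
            "charID": char_id,                               # assumption: field names
            "sessionID": session_id,
            "userText": text,
            "voiceResponse": "True",                         # include TTS so voice latency counts too
        },
        timeout=60,
    )

def measure(char_id: str, prompt: str, trials: int = 5) -> None:
    """Time several full round trips and print per-trial, mean, and worst latency."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        resp = ask_character(char_id, "-1", prompt)          # "-1" = new session (assumption)
        elapsed = time.perf_counter() - start
        resp.raise_for_status()
        timings.append(elapsed)
        print(f"  round trip: {elapsed:.2f} s")
    print(f"mean {statistics.mean(timings):.2f} s, worst {max(timings):.2f} s")

# Compare the same prompt against characters configured with different LLM/voice combos.
measure("0e1a4762-96d3-11ef-8e96-42010a7be016", "Give me a short greeting.")
```

Running the same prompt a few times against characters that only differ in LLM or voice makes it easy to see which part of the combination is adding the seconds.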

Hello @Tim_Guhlke @Lumi_Convai :waving_hand:,

Thank you for sharing the details.

I’ve informed our team, and they’ll investigate the issue further. We’ll get back to you with updates as soon as possible.

Hey @Tim_Guhlke, OpenAI voices have high latency (3-6 s), which is causing the delay. If optimising latency is a priority, then GCP/Azure voices would be better.

Hey @ayush, thanks for your feedback, but I would ask you to look into it further; I have already given some context. If it were just the voice, the OpenAI voice would not be faster with a different LLM.

In addition, it wasn't like that from the beginning. When I started using Claude 3.5 with the OpenAI voice the delay was normal, but as I said, it is now mostly over 10 seconds, not 3-6.

I have been using and paying for Convai for over a year now and would appreciate better support. I wait months for feedback and then you don’t even deal with the error description properly. Besides, I don’t seem to be the only one with this problem.

@Tim_Guhlke All of these factors affect the response time: the size of the instructions (backstory, chat history, etc.), the LLM, the knowledge bank, and the voice provider. I saw that your character is currently using 4o-mini, so my next suggestion is to switch voices. Do you know roughly when your character was working well with the Claude + OpenAI voice combination? I might be able to check the previous config and see what's taking longer.

@ayush I switch between 4o and 4o-mini because with those the delay is normal. I don't change the voice at all in my application.

The problem is the combination with Claude 3.5, but that is exactly the LLM I want to use. So Claude + OpenAI voice.

It's hard to say. I think I switched to Claude sometime in September or October last year and it was still very good, but since the start of this year I can't really use it anymore because of the long delay.

Hi, it's been 14 days… any solution or progress?

I'm also experiencing a delay of around 8 to 9 seconds when getting a response from the avatar, both in the Convai panel and in my own build. I tried adjusting the description, trimming it down to two lines to reduce the information, but the response time is still slow.

Has anyone else faced a similar issue? I wonder if there's something in the settings or another way to optimize it. Any tips or advice would be greatly appreciated!

Hello @Roger_Johnson ,

Could you please share the Character ID?

@Tim_Guhlke @Lumi_Convai

Thanks for your patience, and we truly apologize for the delay.

We’ve been unable to reproduce the issue on our end, which makes it a bit more complex to investigate. Our team is continuing with a deeper technical review, and this may take some additional time. We’ll be sure to update this thread as soon as we have any developments or need further input from your side.

Thanks again for your understanding!

Hello guys,

I asked you a few questions in a private message; could you let me know? Also, is the problem still persisting? Some customers have said that the problem was solved.

I can only speak for myself, but the response times have improved recently. One thing that does still happen is that about 1 in every 8 to 12 responses from the avatar (it varies) fails to arrive at all: I ask a question and there's simply no response. I'll make a video of this soon and post about it separately.

But yeah, as far as latency is concerned it is definitely better, and that's even with a quite detailed Knowledge Bank and OpenAI voices. Sure, I'd like it to be even lower, but it's acceptable. Good work guys (from my perspective, anyway).


Hello,

We’re currently working on an update that will improve response times. While this update isn’t live just yet, there are a few things you can try in the meantime to improve performance:

  • Experiment with different LLM and voice combinations. (Options like GPT-4o paired with Azure or GCP voices typically offer the lowest latency.)
  • Simplify your Character Description, Knowledge Bank, and Narrative Design where possible, as more complex configurations can increase processing time.

Stay tuned for updates; we'll announce improvements as they roll out. Since there are no immediate changes planned for this topic, I'll go ahead and lock the thread for now.

Thanks for your understanding!