Tried to test speaking with the Digital Human, and Error 14 was triggered straight away. What is causing these errors so consistently?
Hi Kaan,
Is there any update or progress on the Error 14 Timeout issue? Why does this keep happening?
I am using Claude 4.1 Opus as the primary Core AI in Convai as that works best for our research.
Looking forward to hearing from you.
Unfortunately, the exact cause of the Error 14 Timeout issue is still unknown, as we haven’t been able to consistently reproduce it on our side, which makes resolving it more challenging.
We’ll share an update as soon as we have any progress. The good news is that we’ll be introducing a new plugin soon, which won’t rely on gRPC, meaning this issue should no longer occur with that version.
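For context on what "Error 14" likely means: assuming the plugin surfaces standard gRPC status codes (which fits the gRPC dependency mentioned above), 14 is UNAVAILABLE, a transient transport/connectivity failure, while the dedicated timeout code is actually 4, DEADLINE_EXCEEDED. A small reference sketch under that assumption (the mapping below is from the gRPC spec; the helper name is illustrative, not part of the Convai SDK):

```python
# Standard gRPC status codes relevant to timeout/connectivity failures,
# per the gRPC spec. "Error 14" in the logs would map to UNAVAILABLE.
GRPC_STATUS = {
    0: "OK",
    4: "DEADLINE_EXCEEDED",  # the actual RPC-timeout code
    8: "RESOURCE_EXHAUSTED",
    14: "UNAVAILABLE",       # server unreachable / transient network failure
}

def describe_grpc_error(code: int) -> str:
    """Map a numeric gRPC status code to its spec-defined name."""
    return GRPC_STATUS.get(code, f"UNKNOWN({code})")
```

So a logged "Error 14" would describe as UNAVAILABLE, which is consistent with an intermittent connection issue that is hard to reproduce.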
Awesome! Can’t wait to start testing out the new plugin and give feedback, Kaan. Count me and my research assistants in for testing, research, and development.
Happy to hop on a call with you, Yash and others to discuss further.
Hi Kaan,
Following up on the new plugin. The research assistants and I are keen to test it out and share results. When do you believe the new plugin would be ready for testing?
We’re planning to release it later this month.
Hi Kaan,
Just following up on the new plugin release. Is there an ETA? My research team is standing by to test the new plugin as soon as we can get access.
Thanks!
Jeasy
The beta documentation and draft walkthrough video are already available here:
https://docs.convai.com/api-docs/plugins-and-integrations/unreal-engine-plugin-beta-overview
Once the public video is finalized and the updated version with several fixes is released, we’ll share it immediately.
Thanks, Kaan. I’ll start some testing with the beta version and try to implement it for the digital humans directly in a sandbox project. I’ll start a new thread for feedback as well.
I recommend using it only for testing at this stage. Please don’t start any full project implementation yet.
Agreed! Will set up a sandbox testing environment for the digital human to test the new plugin!
Quick question about the environment camera (the one that lets the Digital Human / AI see the environment): do we need to attach a camera to the MetaHuman’s head in the blueprint? In the example, the component is added to the first-person player camera. I think it isn’t needed, and that it only needs to be the vision render target in front of the MetaHuman’s face.
Yes, no need for a camera.
Working with Core AI: Claude 4 Opus, just received this error. On Plugin 3.6.8-beta3
API Error: Error code: 429 - {'error': {'message': 'Provider returned error', 'code': 429, 'metadata': {'raw': 'meta-llama/llama-4-maverick is temporarily rate-limited upstream. Please retry shortly, or add your own key to accumulate your rate limits: OpenRouter', 'provider_name': 'Groq'}}, 'user_id': 'user_31TqtEYL4wcZq2Kdyo7Ha518dSz'}
Character ID 60e087de-76af-11ef-80ce-42010a7be011
Session ID c0c01a2ee0c3f3f266e33f3a2d76bc11
Never had this error before!
Vivience20_20251114_151519_Default.log (32.1 KB)
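A 429 like the one above is usually transient; the upstream message itself suggests retrying shortly. A minimal retry-with-exponential-backoff sketch of that advice (the `RateLimitError` type and `request_fn` callable are placeholders, not part of the Convai SDK):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for whatever 429 exception the client raises."""

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn, retrying on rate-limit (429) errors with
    exponential backoff plus a little jitter between attempts."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```

This only papers over upstream rate limiting, of course; if the error persists, it points at provider-side limits rather than anything in the project.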
This LLM is not supported, please select a live model.
I’m confused, Kaan. Claude 4.0 Opus is not supported? We’ve been using it so far with the GCP voice Charon. We only got this error today for the first time.
Which Core AI models are supported, and which voices can be used with the supported Core AI models?
None of the Core AI models show the word “Live”, apart from Gemini 2.5 Flash (beta). How do we know which ones are Live?
The new plugin is still in beta, which means not all models are supported yet. At the moment, the only model that supports Live mode is: Gemini 2.5 Flash Live
Claude 4.0 Opus, GPT models, and others are not yet supported in Live mode under the new backend. This is expected during the beta phase.
Please keep in mind:
- This beta uses an entirely new backend and plugin infrastructure.
- Not everything is fully supported yet, and compatibility will expand over time.
- More models and voice combinations will become available as we complete the rollout.
This is the previous project, using the 3.6.8 Beta 3 plugin, not the new plugin. I haven’t started implementing the Beta 4 SDK yet.
Please create separate posts for different issues.
Ok, I will do that. Or it may be easier to jump on a call to explain.