Hello, I’m having an issue with the responses generated by the avatar.
The avatar is not only answering the question, but it is also including the internal instructions or behavior cues that are supposed to guide how the response should be delivered.
For example, if it is asked, “What is your name?”, instead of replying only with “Hello, my name is Paula,” it says something like:
“Hello, my name is Paula, I pause briefly, I smile shyly, how are you?”
So the answer itself is correct, but it also adds the descriptive directions or performance instructions, and those should not appear in the final response.
I don’t understand why this is happening or how to fix it. We are using Claude 4 Sonnet because it is the model that best understands our needs and adapts well to this use case.
The issue is not related to the character ID; it affects all characters using Claude 4 Sonnet, and occasionally Opus, but especially Sonnet.
I would appreciate it if you could run tests with this model, as it’s the one that best follows instructions. However, with the current issue, it cannot be used properly.
In any case, I’m providing the ID (6c983b60-21eb-11f1-b4b5-42010a7be02c) in case it helps. I hope this can be resolved.
Using the shared character ID, I can see that the first instruction in the Backstory field is: “Every answer you give, you give while showing emotion.”
Prompts like this generally lead to the behaviour you are describing: the model follows the instruction literally and narrates the emotion instead of simply expressing it. It comes down to wording these instructions carefully to shape the kind of responses you receive.
One tip is to dial such instructions down if you are seeing too much noise in the responses.
Also, have you tried Sonnet 4.5? It is more capable than Sonnet 4. It also helps a lot to include negative examples, like the one in your report, directly in the prompt and to explicitly tell the model not to behave that way.
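As a rough sketch of what that could look like, here is one way to restructure the Backstory instruction so the delivery cues stay internal and the unwanted behaviour is shown as an explicit negative example. The variable name and exact wording below are just illustrative assumptions, not the platform's required format; adapt them to your character setup.

```python
# Hypothetical rewrite of the Backstory instruction (illustrative only).
# The idea: keep the "show emotion" intent, but explicitly forbid written
# stage directions, and demonstrate the bad behaviour as a negative example.
BACKSTORY = (
    "You are Paula. Answer warmly and expressively, but never write out "
    "stage directions or performance cues (e.g. 'I pause briefly', "
    "'I smile shyly') in your reply. Emotion must come through word "
    "choice and tone alone.\n"
    "\n"
    "Bad example (do NOT do this):\n"
    "  User: What is your name?\n"
    "  Assistant: Hello, my name is Paula, I pause briefly, I smile "
    "shyly, how are you?\n"
    "\n"
    "Good example:\n"
    "  User: What is your name?\n"
    "  Assistant: Hello, my name is Paula! How are you?\n"
)

print(BACKSTORY)
```

The key points are the explicit prohibition on stage directions and the paired bad/good examples, which tend to be much more effective than a positive instruction alone.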