Original Discord Post by darkladen | 2024-10-08 15:00:44
Hello,
I have managed to set up my project with Pixel Streaming, but the chat widget is not visible and I can’t tell whether the key activation (such as pressing the “T” key to send audio) is working.
Do you have any recommendations or any idea what could be happening?
I have followed the guide but that alone does not work.
I finally got it to work, and I noticed that using a Sequencer as the main camera prevents the Convai functionality (chat, audio, etc.) from loading.
Now I’ve noticed another thing: if more than one person connects, only the first can send audio to the avatar; the other connected users can do everything except send audio.
Is there any way to allow multiple users to send audio?
Hi <@305834434178056192>, glad you figured that issue out. We did not design for multiple users per session, but one way to go about it is to create multiple Convai Player components, each with its own Pixel Streaming Audio Input component. The idea is to manage the Pixel Streaming Audio components so that each one is attached to one of the users.
Hi <@365628745886859267>, I understand the idea, but must these components each be associated with a single character, or can the same character use several Convai Player components?
If so, when a new Convai Player component is created, will it prioritize its audio stream for the newly connected user rather than listening to the one already connected?
Finally, how would I give each connecting user an exclusive connection to the component created for them, so the character listens and responds to that user with audio, and the other users do not hear it?
In effect, it is almost like having a separate instance of the Avatar project for each user connected to the stream.
This is a very custom area, so I’m happy to share some pointers, but the details will be brief, and you’ll need to connect the dots for the full implementation.
Yes, multiple player components can coexist, but only one can interact with the character at a time. If another player tries to interrupt, the character will ignore them. The main challenge is determining which player component should be used to communicate with the character.
To address this, you can use the emitUIInteraction function from the Pixel Streaming JavaScript code. For instance, when a user presses the “T” key or a specific UI button, the page can send that player’s name to Unreal. Based on this, you can ensure the correct player component is used for the interaction.
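A minimal sketch of that idea on the browser side, assuming `stream` is the Pixel Streaming frontend object that exposes `emitUIInteraction`; the `"SetActivePlayer"` descriptor type and `playerName` field are hypothetical names you would define yourself and handle on the Unreal side:

```javascript
// Build the interaction descriptor identifying which player is talking.
// "SetActivePlayer" / "playerName" are arbitrary names; match them in your
// Unreal-side Pixel Streaming input handler.
function buildActivePlayerDescriptor(playerName) {
  return { type: "SetActivePlayer", playerName: playerName };
}

// Wire the descriptor to the "T" key (a UI button would work the same way).
// `stream` is assumed to be the Pixel Streaming object from the frontend library.
function registerPushToTalk(stream, playerName) {
  document.addEventListener("keydown", (event) => {
    if (event.key === "t" || event.key === "T") {
      stream.emitUIInteraction(buildActivePlayerDescriptor(playerName));
    }
  });
}
```

On the Unreal side you would parse the descriptor and route the interaction to the matching Convai Player component.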
I understand the idea perfectly. I will try to recreate what you have described. There are certain points I still need to verify, because ultimately each user should see a character they have their own interaction with, all from the same project. It is surely complicated, but the idea is interesting.
Hello again. Today I ran into the same problem from the beginning of the thread (luckily I have a backup of the project). After making several changes, when I activate the Pixel Streaming transmission, the widgets and the print(“message”) output no longer appear in the stream, only in the desktop execution as a normal app.
Do you know why the widgets are not visible?
I have already tried several things but nothing works.
I am still investigating.
Hello,
What could cause the voice over Pixel Streaming to sound robotic, as if there were two voices playing at the same time, one offset by a few milliseconds?
I have checked everything; my character does have two audio outputs, but the second one is muted.
Please try creating your own widget and use it instead of the Convai chat widget. This way, we can determine whether the issue lies with the Convai chat widget or if there’s something wrong with your project.
I hope I understand what you are saying: as I understand it, I would activate the audio capture with another widget. But does this really solve it? Does the widget influence the audio playback?
If so, what functions should I implement in the new widget to accept the “T” key press to capture the audio, and then play the audio that arrives as a response?
The implementation seems strange to me, but I can still try with a blank project and implement everything again as a test; if the problem is on my side, it should not occur in the new project.
I am waiting for comments, but in the meantime I will keep testing.
Thanks.