I’d been experimenting with the Convai Unity beta package with vision support, trying to get it to see the rendered VR scene on my Quest 3 instead of the live camera feed in MR mode.
So I’d deactivated the [BuildingBlock] Passthrough GameObject in Unity (which disables passthrough mode on the Quest) and set Track Source in the Convai Vision Publisher Meta component of the Vision Publisher GameObject to “Source Screenshare” or “Source Screenshare Audio”, with Capture Camera already set to “CenterEyeAnchor”, hoping this would make the vision functionality process the virtual scene instead.
However, this doesn’t work: the AI no longer responds to voice prompts (it seems to be deactivated).
I’ve finally managed to install the beta UPM package from the Git URL (it was conflicting with my existing Convai package), so it’s now Convai SDK for Unity v4.0.0. I’m actually quite confused, because this seems to be similar to the Convai SDK for Unity v4.0.0 downloadable from the Unity Asset Store?
Anyway, I’ve imported and opened the lip sync sample scene. I don’t see anything that matches your instructions above. Leaving aside the VR side of things, how do I configure the character to see anything (be it the virtual scene or a camera feed), assuming that’s even supported? This seems markedly different from the earlier Convai Vision Unity Beta plug-in.
In the existing setup, you just need to remove the “CameraVisionFrameSource” component and add “WebcamVisionFrameSource” in its place.
By doing this, vision uses a webcam feed instead of the Unity camera feed.
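The swap above is normally done in the Inspector (remove one component, add the other on the same GameObject), but as a rough sketch it could also be scripted. The helper class name `VisionSourceSwap` is hypothetical; the two frame-source component names come from the reply above, and only standard Unity API calls are used:

```csharp
using UnityEngine;

// Hypothetical helper: swaps the Unity-camera frame source for the
// webcam one on the same GameObject, assuming both ship as
// MonoBehaviour components in the Convai SDK.
public class VisionSourceSwap : MonoBehaviour
{
    void Awake()
    {
        // Remove the Unity-camera-based frame source if present.
        var cameraSource = GetComponent<CameraVisionFrameSource>();
        if (cameraSource != null)
            Destroy(cameraSource);

        // Add the webcam-based frame source if it isn't there yet,
        // so vision processes the webcam feed instead.
        if (GetComponent<WebcamVisionFrameSource>() == null)
            gameObject.AddComponent<WebcamVisionFrameSource>();
    }
}
```

Doing it in the Inspector is simpler for a fixed setup; a script like this would only matter if you wanted to switch sources at runtime.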