I’m trying to integrate a custom character modeled in Blender (not from Ready Player Me or CC4) into my Unity → WebGL build, and I have some questions about how lip sync should be configured. I want to make sure I do it correctly from the start.
My Situation

- My character is designed in Blender, then exported (FBX / glTF) into Unity.
- The model does have ARKit blendshapes / facial morph targets in Blender.
- I want this to work in a WebGL build, so the lip sync must be compatible.
- But in Unity / Convai’s inspector, I don’t see options like “Viseme Skin Effector”, “Bone Effector”, or the menu Create → Convai → Expression → Viseme Skin Effector that the documentation mentions.
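To rule out import problems on the answerer’s side: when I say the morph targets are present after import, I mean a throwaway check along these lines comes back with the ARKit names (standard Unity API only; `headRenderer` is just a placeholder for whatever face mesh I drag in from my own character):

```csharp
using UnityEngine;

// Throwaway sanity check: logs every blendshape on the imported face mesh so I can
// confirm the ARKit names (jawOpen, mouthFunnel, ...) actually made it into Unity.
public class BlendshapeLister : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer headRenderer; // assigned in the Inspector

    private void Start()
    {
        Mesh mesh = headRenderer.sharedMesh;
        Debug.Log($"{headRenderer.name}: {mesh.blendShapeCount} blendshapes");
        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            Debug.Log($"  [{i}] {mesh.GetBlendShapeName(i)}");
        }
    }
}
```

So the shapes themselves are there; my questions are about how Convai expects them to be organized and mapped.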
Questions

- How many blendshapes are required?
  - Do I need exactly 140 (or 52 ARKit + extra)?
  - Can I use a smaller set of morph targets / blendshapes (e.g. 40–60) and still get usable lip sync?
  - Is there a minimum required set, or are all ARKit ones mandatory?
- How should I organize them? (See the first sketch at the end of this post for what I mean.)
  - Should face, jaw, and tongue be in separate SkinnedMeshRenderers, or all on one mesh?
  - If there are multiple meshes / sub-meshes, how do I assign them to Convai’s LipSync component (Head / Teeth / Tongue fields)?
- Mapping & Effector setup
  - Since I didn’t see the “Viseme Skin Effector” menu, how can I create or import a viseme → blendshape mapping manually? (The second sketch at the end of this post shows what I’d expect to have to build by hand.)
  - What is the correct format / structure of the mapping?
  - Are there default mapping assets (for ARKit) I can reuse?
- WebGL limitations
  - Are there any limitations or special requirements for lip sync in WebGL (versus standalone)?
  - For example, do I need to bake or import anything differently to ensure the blendshape updates work in WebGL?
- Best practice or working example
  - Does Convai provide a sample project or template using a custom Blender character (with ARKit blendshapes) that works in WebGL?
  - Could someone share settings / screenshots / mapping files that work?
Finally, are all of these components required for lip sync?
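To make question 2 more concrete, here is the mental model I currently have of what assigning separate Head / Teeth / Tongue renderers would amount to: pushing the same weight to whichever sub-mesh actually contains a given shape. This is not Convai code, just plain Unity blendshape calls, and all the field names are mine:

```csharp
using UnityEngine;

// Sketch of what I assume "assigning Head / Teeth / Tongue" boils down to under the hood.
// Only SetBlendShapeWeight / GetBlendShapeIndex are standard Unity API; the rest is my guess.
public class SplitMeshBlendshapeDriver : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer head;    // face mesh carrying the ARKit shapes
    [SerializeField] private SkinnedMeshRenderer teeth;   // jaw / teeth sub-mesh (optional)
    [SerializeField] private SkinnedMeshRenderer tongue;  // tongue sub-mesh (optional)

    public void SetShape(string blendshapeName, float weight01)
    {
        Apply(head, blendshapeName, weight01);
        Apply(teeth, blendshapeName, weight01);
        Apply(tongue, blendshapeName, weight01);
    }

    private static void Apply(SkinnedMeshRenderer renderer, string blendshapeName, float weight01)
    {
        if (renderer == null) return;
        int index = renderer.sharedMesh.GetBlendShapeIndex(blendshapeName);
        if (index < 0) return; // this sub-mesh doesn't carry that shape, which is fine
        renderer.SetBlendShapeWeight(index, Mathf.Clamp01(weight01) * 100f); // Unity weights run 0-100
    }
}
```

If Convai’s LipSync component already does this fan-out internally, I’d much rather configure that than maintain my own driver.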
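And for question 3, since the Create → Convai → Expression → Viseme Skin Effector menu isn’t showing up for me, this is the kind of mapping asset I’d expect to have to author by hand if no default ARKit mapping ships with the SDK. Everything here is hypothetical: the class, menu path, and field names are my own invention, and I don’t know which viseme set Convai actually outputs.

```csharp
using System;
using UnityEngine;

// Hypothetical viseme -> ARKit-blendshape mapping asset; none of these names are Convai's.
[CreateAssetMenu(menuName = "LipSync/Manual Viseme Mapping")]
public class ManualVisemeMapping : ScriptableObject
{
    [Serializable]
    public class Entry
    {
        public string viseme;               // e.g. "sil", "PP", "FF" (assuming an Oculus-style viseme set)
        public string[] blendshapeNames;    // e.g. { "jawOpen", "mouthFunnel" } -- ARKit names on my mesh
        [Range(0f, 1f)] public float influence = 1f;  // how strongly this viseme drives those shapes
    }

    public Entry[] entries;
}
```

If a default ARKit mapping asset already exists in the package, a pointer to it (or to the expected asset format) would answer most of question 3 for me.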

