Using a custom Blender-designed character in WebGL: how many blendshapes are needed and how to set up lip sync?

I’m trying to integrate a custom character modeled in Blender (not from Ready Player Me or CC4) into my Unity → WebGL build, and I have some questions about how lip sync should be configured. I want to make sure I do it correctly from the start.

My Situation

  • My character is designed in Blender, then exported (FBX / glTF) into Unity.

  • The model does have ARKit blendshapes / facial morph targets in Blender.

  • I want this to work in a WebGL build, so the lip sync must be compatible.

  • But in Unity / Convai’s inspector, I don’t see options like “Viseme Skin Effector”, “Bone Effector”, or the Create → Convai → Expression → Viseme Skin Effector menu that the documentation mentions.

Questions

  1. How many blendshapes are required?

    • Do I need exactly 140 (or 52 ARKit + extra)?

    • Can I use a smaller set of morph targets / blendshapes (e.g. 40–60) and still get usable lip sync?

    • Is there a minimum required set, or are all ARKit ones mandatory?

  2. How should I organize them?

    • Should face, jaw, tongue be in separate SkinnedMeshRenderers, or all on one mesh?

    • If multiple meshes / sub-meshes, how do I assign them to Convai’s LipSync component (Head / Teeth / Tongue fields)?

  3. Mapping & Effector setup

    • Since I didn’t see the “Viseme Skin Effector” menu, how can I create or import a viseme → blendshape mapping manually?

    • What is the correct format / structure of the mapping?

    • Are there default mapping assets (for ARKit) I can reuse?

  4. WebGL limitations

    • Are there any limitations or special requirements for lip sync in WebGL (versus standalone)?

    • For example, do I need to bake or import anything differently to ensure the blendshape updates work in WebGL?

  5. Best practice or working example

    • Does Convai provide a sample project or template using a custom Blender character (with ARKit blendshapes) that works in WebGL?

    • Could someone share settings / screenshots / mapping files that work?

Are all these components required for lip sync?

Hello,

Welcome to the Convai Developer Forum!

You are correct that those options are missing. The Convai WebGL plugin for Unity has a different architecture from the Core package on the Unity Asset Store because of WebGL platform constraints. As a result, features like the Viseme Skin Effector, the Bone Effector, and the Create → Convai → Expression → Viseme Skin Effector menu are not present in the WebGL plugin.

For WebGL, the plugin currently supports two blendshape conventions (a checker sketch follows the list):

  • OVR blendshapes

  • Reallusion Plus blendshapes
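For reference, “OVR blendshapes” normally means the standard 15-viseme set from the Oculus/Meta LipSync SDK. The sketch below checks that a face mesh exposes all of them; the exact names the Convai WebGL plugin matches against are my assumption based on that standard set, so confirm them against the demo characters. Note that Blender/FBX exports sometimes prefix shape-key names with the mesh name.

```csharp
using UnityEngine;

// Sanity check: warns about any of the standard OVR/Oculus LipSync
// visemes that are missing from the assigned face mesh.
public class OvrVisemeChecker : MonoBehaviour
{
    // Standard Oculus LipSync viseme names (assumed to be what the
    // Convai WebGL plugin expects). Blender/FBX exports may prefix
    // them with the mesh name, e.g. "Face.viseme_aa"; adjust if so.
    static readonly string[] OvrVisemes =
    {
        "viseme_sil", "viseme_PP", "viseme_FF", "viseme_TH", "viseme_DD",
        "viseme_kk",  "viseme_CH", "viseme_SS", "viseme_nn", "viseme_RR",
        "viseme_aa",  "viseme_E",  "viseme_ih", "viseme_oh", "viseme_ou"
    };

    [SerializeField] SkinnedMeshRenderer faceRenderer;

    void Start()
    {
        Mesh mesh = faceRenderer.sharedMesh;
        foreach (string shape in OvrVisemes)
        {
            // GetBlendShapeIndex returns -1 when the shape is absent.
            if (mesh.GetBlendShapeIndex(shape) < 0)
                Debug.LogWarning($"Missing viseme blendshape: {shape}", this);
        }
    }
}
```

Attach it to the character, assign the face renderer, and enter Play mode; a clean Console means the export carries the full set.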

If your 3D character includes OVR-compatible blendshapes, you can set it up as follows:

  1. Add the Convai LipSync component to your character.

  2. Select the OVR option in the LipSync component.

  3. Assign the SkinnedMeshRenderer that contains your facial blendshapes (if you are not sure which renderer that is, see the helper sketch after these steps).
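If Blender exported the face, teeth, and tongue as separate meshes (your question 2), this plain-Unity helper shows which SkinnedMeshRenderer actually carries the viseme shapes, so you know what to assign in step 3:

```csharp
using UnityEngine;

// Logs every SkinnedMeshRenderer under the character together with how
// many "viseme_*" blendshapes its mesh carries, to identify the renderer
// that should go into the LipSync component.
public class VisemeRendererFinder : MonoBehaviour
{
    void Start()
    {
        foreach (var smr in GetComponentsInChildren<SkinnedMeshRenderer>())
        {
            Mesh mesh = smr.sharedMesh;
            int count = 0;
            for (int i = 0; i < mesh.blendShapeCount; i++)
            {
                if (mesh.GetBlendShapeName(i).Contains("viseme"))
                    count++;
            }
            Debug.Log($"{smr.name}: {count} viseme blendshapes", smr);
        }
    }
}
```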

For examples, check the demo scene characters included in the WebGL package, such as the Ready Player Me and Reallusion samples.

If your character does not use OVR or Reallusion Plus blendshapes, consider integrating an alternative lip sync pipeline that drives your morph targets at runtime in WebGL, for example an AudioSource-driven system that maps audio output to your custom blendshapes (a minimal sketch follows). You can then keep Convai for voice and text while your custom system handles the facial animation.
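As a starting point for such a fallback, here is a rough sketch (my own, not Convai code) that drives one jaw-open morph target from the AudioSource's output loudness. `jawOpen` is a placeholder for your own Blender shape-key name, and WebGL support for reading audio samples has varied across Unity versions, so verify GetOutputData in an actual WebGL build:

```csharp
using UnityEngine;

// Amplitude-driven fallback lip sync: samples the AudioSource output each
// frame, converts it to RMS loudness, and maps that onto a single
// "jaw open" blendshape. Crude next to true viseme mapping, but enough
// to make the mouth track speech.
[RequireComponent(typeof(AudioSource))]
public class AmplitudeLipSync : MonoBehaviour
{
    [SerializeField] SkinnedMeshRenderer faceRenderer;
    [SerializeField] string jawOpenShapeName = "jawOpen"; // placeholder: your shape-key name
    [SerializeField] float gain = 400f;     // scales RMS into the 0-100 weight range
    [SerializeField] float smoothing = 12f; // higher = snappier mouth movement

    AudioSource source;
    readonly float[] samples = new float[256];
    int shapeIndex;
    float weight;

    void Start()
    {
        source = GetComponent<AudioSource>();
        shapeIndex = faceRenderer.sharedMesh.GetBlendShapeIndex(jawOpenShapeName);
    }

    void Update()
    {
        if (shapeIndex < 0) return;

        // RMS of the latest output buffer approximates current loudness.
        source.GetOutputData(samples, 0);
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Smooth toward the target weight to avoid jittery mouth motion.
        float target = Mathf.Clamp(rms * gain, 0f, 100f);
        weight = Mathf.Lerp(weight, target, smoothing * Time.deltaTime);
        faceRenderer.SetBlendShapeWeight(shapeIndex, weight);
    }
}
```

Tune gain and smoothing by ear; this will not articulate consonants, but it keeps the character from talking with a frozen mouth while you build out a proper viseme mapping.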