Hey folks — I’m using NeuroSync in a UE game. I ended up on this forum after failing to find contact info in the NeuroSync GitHub repo.
We’re doing audio → NeuroSync → LiveLink → MetaHuman and packaging for Shipping/offline use.
Because the ARKit LiveLink / default paths can get messy in a packaged build, we built our own UDP LiveLink source + curve publishing, and we’re running the model via ONNX Runtime (more deployment-friendly than PyTorch).
In our integration the ONNX output is y with shape (B, T, 68), and we want to make sure we’re not guessing the channel semantics.
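For concreteness, here is how we currently slice the output. The split points and group names are purely our assumption (blendshapes 0..51, rotations 52..60, emotions 61..67) — i.e. exactly the layout we’re asking to have confirmed:

```python
import numpy as np

# Assumed layout -- unconfirmed; this split is exactly what we're asking about.
BLEND, ROT, EMO = slice(0, 52), slice(52, 61), slice(61, 68)

def split_frame(y, b=0, t=0):
    """Split one frame of the (B, T, 68) model output into the three assumed groups."""
    frame = y[b, t]                # shape (68,)
    blendshapes = frame[BLEND]     # 52 values, assumed ARKit blendshape order
    rotations = frame[ROT]         # 9 values, assumed head + eye rotations
    emotions = frame[EMO]          # 7 values, assumed emotion channels
    return blendshapes, rotations, emotions

# Dummy tensor with the shape we see from ONNX Runtime: (B=1, T=4, 68)
y = np.zeros((1, 4, 68), dtype=np.float32)
bs, rot, emo = split_frame(y)
assert bs.shape == (52,) and rot.shape == (9,) and emo.shape == (7,)
```

If the real layout differs (different split points, or an interleaved order), everything downstream of this function changes, which is why we’d rather not guess.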
Questions

- Index → meaning mapping for all 68 dims
  - Is `y[..., 0:52]` in the standard ARKit 52-blendshape order (LiveLinkFace / Apple order)?
    - If yes: can you confirm it’s exactly the same ordering?
    - If no: do you have a mapping table / list (index → ARKit name)?
- What are dims 52..60?
  - Are these head + eye rotations? What’s the exact order?
  - What units/scaling? (normalized [-1, 1] vs. radians/degrees)
- What are dims 61..67?
  - Are these emotion outputs (Angry/Disgust/Fear/Happy/Neutral/Sad/Surprise)?
  - If yes: what’s the exact order, and should we read the values as probabilities or logits?
  - Are these trained signals from audio prosody, or more heuristic/auxiliary?
- Expected ranges
  - For blendshape channels: should we assume [0, 1]? Can values exceed 1? Can they go negative?
  - Any recommended clamps/gains for brows/eyes to keep expressions lively but stable?
-
Why we need this
We can “make it look okay” by trial and error, but for full MetaHuman expressiveness (brows/eyes/cheeks + emotion while talking) we need the authoritative mapping and value ranges, so we don’t accidentally mix channels.
If someone can share a mapping table or point me to docs/source where this is defined, that would save a ton of time.
Thanks!