Improving Facial Rig Semantics for Tracking and Retargeting

1Stanford University, 2Epic Games

Our face calibration methods improve the semantic correctness of facial performance retargeting.

Abstract

In this paper, we consider retargeting a tracked facial performance to other people or virtual characters. We utilize the same rig framework for both tracking and animation to remove the difficulties associated with retargeting the semantics of one framework to another. Our carefully designed set of Simon-Says expressions and regularizers is used to calibrate each rig to the motion signatures of the relevant performer or target. Although a uniform set of Simon-Says expressions can likely be used for all person-to-person retargeting, we argue that person-to-virtual-character retargeting benefits from an expression set that captures the distinct motion signature of the virtual character rig. The Simon-Says calibrated rigs tend to produce the desired expressions when exercising animation controls. Unfortunately, these well-calibrated rigs still lead to undesirable controls when tracking a performance, even though they generally produce acceptable geometry reconstructions. Thus, we propose a fine-tuning approach that modifies the rig used by the tracker to promote the output of more semantically meaningful animation controls, facilitating high-efficacy retargeting. To better address real-world scenarios, the fine-tuning relies on implicit differentiation so that the tracker can be treated as a potentially non-differentiable black box. Experiments demonstrate the benefits of our calibration methods on high-fidelity expressive performance retargeting for different capture conditions, trackers, and rig frameworks.

Overview


We propose a suite of methods to calibrate facial rigs for improved tracking and retargeting. Our method consists of two stages: Simon-Says calibration and fine-tuning. The Simon-Says calibration stage uses a carefully designed set of expressions and regularizers to calibrate the rig to the motion signatures of the relevant performer or target. The fine-tuning stage modifies the rig used by the tracker to promote the output of more semantically meaningful animation controls, facilitating high-efficacy retargeting.
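Because the tracker is treated as a (potentially non-differentiable) black box, the fine-tuning stage can only see the animation controls it outputs; gradients with respect to the rig parameters are recovered via implicit differentiation of the tracker's optimality condition. The toy sketch below illustrates this idea on a stand-in problem of our own construction, not the paper's actual rig or tracker: a linear blendshape rig whose basis vectors are scaled by calibration parameters `theta`, a black-box tracker that least-squares fits control weights to target geometry, and a fine-tuning loss that pulls the tracked controls toward hypothetical "semantic" reference controls `w_ref`. All names and the energy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in (assumed, not the paper's rig): linear blendshape basis B,
# with the j-th blendshape scaled by a calibration parameter theta_j.
n_verts, n_shapes = 30, 4
B = rng.standard_normal((n_verts, n_shapes))   # blendshape basis
target = rng.standard_normal(n_verts)          # tracked target geometry (stand-in)
w_ref = rng.standard_normal(n_shapes)          # desired "semantic" controls (stand-in)

def track(theta):
    """Black-box tracker: w* = argmin_w 0.5 * ||B diag(theta) w - target||^2."""
    A = B * theta                              # scale column j by theta_j
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return w

def loss(theta):
    """Fine-tuning loss on the tracker's output controls."""
    w = track(theta)
    return 0.5 * np.sum((w - w_ref) ** 2)

def grad_loss(theta):
    """dL/dtheta via implicit differentiation of the tracker's first-order
    optimality condition F(w, theta) = A^T (A w - target) = 0, so the
    tracker itself never needs to be differentiated."""
    A = B * theta
    w = track(theta)
    r = A @ w - target
    H = A.T @ A                                # dF/dw
    dF = np.zeros((n_shapes, n_shapes))        # dF/dtheta, column by column
    for j in range(n_shapes):
        dA = np.zeros_like(A)                  # only column j of A depends
        dA[:, j] = B[:, j]                     # on theta_j
        dF[:, j] = dA.T @ r + A.T @ (dA @ w)
    dw_dtheta = -np.linalg.solve(H, dF)        # implicit function theorem
    return dw_dtheta.T @ (w - w_ref)           # chain rule through w*(theta)

theta0 = np.ones(n_shapes)
g = grad_loss(theta0)

# Sanity check against central finite differences of the black-box pipeline.
eps = 1e-6
g_fd = np.array([
    (loss(theta0 + eps * np.eye(n_shapes)[j]) -
     loss(theta0 - eps * np.eye(n_shapes)[j])) / (2 * eps)
    for j in range(n_shapes)
])
print("max gradient error:", np.max(np.abs(g - g_fd)))
```

The implicit gradient matches the finite-difference gradient of the full tracker-plus-loss pipeline, while only requiring one forward tracker call plus a small linear solve; with a real tracker, the analogous optimality residual and Jacobians would replace the least-squares terms used in this sketch.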


Supplementary Video

Our supplementary video contains example animations before and after calibration, as well as a high-level overview of the method.

BibTeX

@article{omens2026improving,
  author    = {Omens, Dalton and Thurman, Allise and Yu, Jihun and Fedkiw, Ron},
  title     = {Improving Facial Rig Semantics for Tracking and Retargeting},
  journal   = {Computer Graphics Forum},
  year      = {2026},
}