Abstract: Facial expression transfer has been actively researched in recent years. Existing methods either suffer from depth ambiguity or require special hardware. We present a novel marker-less, real-time facial transfer method that requires only a single video camera. We develop a robust model that adapts to user-specific facial data, computes expression variations in real time, and rapidly transfers them onto a target character from either images or videos. Our method can be applied to videos without prior camera calibration or focal adjustment. It enables realistic online facial expression editing and performance transfer in many scenarios, such as video conferencing, news broadcasting, and lip-syncing for song performances. With low computational cost and hardware requirements, our method tracks a single user at an average of 38 fps and runs smoothly even in web browsers. Copyright © 2016 John Wiley & Sons, Ltd.