
THE PIPELINE TO MAKE REAL-TIME FACIAL CAPTURE MUCH MORE ACCURATE

We set up a real-time facial capture system in UE5 for animated series production, and facial performance is the most important part of our animation. Children in particular are drawn to exaggerated expressions and even micro-expressions, which means we need a much more accurate workflow for the facial capture system.


The Epic Games platform offers FaceAR solutions that, combined with MetaHuman, are already good enough for V-Tuber live streaming. But they sometimes suffer from unpredictable freezes or problems with the mouth not closing. For animated performance, and especially for animal characters, that is not enough: we need a much faster reaction time and more accurate capture data.




OFFICIAL FACE CAPTURE
OUR ANIMAL FACIAL CAPTURE WITH 3 MILLION STRANDS OF FUR




SIMPLIFY THE MODELS

Based on Epic's official sample file, you need a large set of blend shapes combined with pose assets (the joint/bone movements) in UE5. That makes things complicated, so we removed the jaw joint and the other driving joints to keep the setup as simple as possible. Those joints were generated in Maya for making the face animation, but in UE5 we don't need that kind of redundant joint; instead, we make blend shapes for JawOpen, the teeth, the tongue, and even eye rolling. Only one head bone joint is kept for the head, which makes the data calculation a lot faster.
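As a rough illustration of this setup, here is a minimal Maya Python sketch (the mesh and target names are hypothetical placeholders, not our production names): it gathers the sculpted targets onto a single blendShape node so the face is driven purely by capture curves, with only the head joint left on the skeleton.

```python
import maya.cmds as cmds

BASE_MESH = "camelHead_geo"        # hypothetical base head mesh
TARGETS = [                        # sculpted target meshes, one per capture curve
    "jawOpen", "mouthClose", "tongueOut",
    "eyeLookUpLeft", "eyeLookDownLeft",
    # ... plus the rest of the ~51 expression targets
]

def build_face_blendshapes(base=BASE_MESH, targets=TARGETS):
    """Put every target on one blendShape node so the face is driven by
    curves only, with a single head joint kept for the head transform."""
    existing = [t for t in targets if cmds.objExists(t)]
    return cmds.blendShape(existing + [base], name="faceCapture_BS")[0]
```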


On the other hand, if the eyeball is not a perfect sphere (the camel's eyes, for example, are somewhat spheroid), a blend shape for eye movement doesn't quite fit: there is a slight shift in position. So we keep the eye joint and use a PoseAsset to store the eye's animation curves. This required exporting the eye animation from Maya and keying each of the different looking directions, keyframe by keyframe.
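A minimal sketch of that keying step in Maya Python, assuming a hypothetical eye joint name and placeholder angles (the real directions and limits come from the rig):

```python
import maya.cmds as cmds

EYE_JOINT = "L_eye_jnt"    # hypothetical joint kept for the spheroid eye
LOOK_POSES = [             # (frame, rotateX, rotateY), one pose per frame
    (1, 0.0, 0.0),         # neutral
    (2, 0.0, -30.0),       # look left
    (3, 0.0, 30.0),        # look right
    (4, -20.0, 0.0),       # look up
    (5, 25.0, 0.0),        # look down
]

for frame, rx, ry in LOOK_POSES:
    cmds.currentTime(frame)
    cmds.setAttr(EYE_JOINT + ".rotateX", rx)
    cmds.setAttr(EYE_JOINT + ".rotateY", ry)
    cmds.setKeyframe(EYE_JOINT, attribute=["rotateX", "rotateY"], t=frame)

# Export this short clip to UE5, build a PoseAsset from it, and let the
# eyeLook* capture curves drive the pose weights.
```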


MODIFY THE BLEND SHAPES ITERATIVELY

Some people have tried MetaHuman or FaceAR and their faces can also drive a character's head, but the result is not as lively as the iPhone's emoji. That is because their blend shapes don't have the right shapes, even though Apple offers base meshes as a reference for users.

To get better blend shapes, we first treat the head as a normal animated character and apply all the joints, weights, and controls to the face, down to very detailed bones on the cheeks. We then use these joints to pose the face into specific expressions based on the Apple reference meshes, such as "puff mouth".
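One way to turn such a posed expression into a target, sketched in Maya Python with hypothetical names: duplicate the deformed head and use the copy as the editable blend shape target.

```python
import maya.cmds as cmds

def extract_target(base_mesh="camelHead_geo", target_name="cheekPuff"):
    """Duplicate the currently posed head; the copy becomes the target
    mesh ("cheekPuff" here) that is later added to the blendShape node."""
    dup = cmds.duplicate(base_mesh, name=target_name)[0]
    # unlock the transform so the copy can be moved aside for sculpting
    for attr in ("tx", "ty", "tz", "rx", "ry", "rz"):
        cmds.setAttr(dup + "." + attr, lock=False)
    return dup
```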



After that, you can sculpt the blend shape or modify it in further detail.



The official expression set contains about 51 blend shapes, but you can extend it beyond that. Our dog character has two movable ears, and we hook that parameter to the eye and mouth curves to make the dog much more emotional with its ears.
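A hedged sketch of that kind of hookup, using Maya set-driven keys with hypothetical curve names; the same mapping could equally live in the UE5 Anim Blueprint instead.

```python
import maya.cmds as cmds

BS = "faceCapture_BS"   # blendShape node from the earlier sketch

def link_ears(driver, driven, pairs):
    """Drive an ear target from a captured face curve with set-driven keys."""
    for driver_val, driven_val in pairs:
        cmds.setDrivenKeyframe(
            "%s.%s" % (BS, driven),
            currentDriver="%s.%s" % (BS, driver),
            driverValue=driver_val,
            value=driven_val,
        )

# sad inner brows fold the ears back; a wide-open mouth perks them up
link_ears("browInnerUp", "earFoldBack", [(0.0, 0.0), (1.0, 0.8)])
link_ears("jawOpen",     "earPerkUp",   [(0.0, 0.0), (1.0, 1.0)])
```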


Here is the key point for making the facial expressions accurate: you have to dial up two or three blend shapes at once and modify their shapes together, otherwise you won't get a satisfactory result. For example, you need to open "JawOpen" and "MouthClose" together and adjust them to get a correct zipped-lip shape when the character is chewing something, and you also need to consider the influence of "mouthFrown".
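A small sketch of how such a combination can be inspected in Maya (node and target names carried over from the earlier sketches, weights assumed): dial the curves up together, then duplicate the blended head as the starting point for the corrective sculpt.

```python
import maya.cmds as cmds

BS = "faceCapture_BS"
COMBO = {"jawOpen": 1.0, "mouthClose": 1.0, "mouthFrownLeft": 0.3}

# dial the curves up together to see how the shapes fight each other
for target, weight in COMBO.items():
    cmds.setAttr("%s.%s" % (BS, target), weight)

# duplicate the blended head so the "chewing with lips zipped" pose can be
# sculpted and the fix fed back into the individual targets
cmds.duplicate("camelHead_geo", name="corrective_jawOpen_mouthClose")
```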



Finally, after you finish the blend shape models, you need to double-check every parameter's data from the iPhone app and feed the results back into the Maya blend shapes. By modifying them iteratively you will get a perfect result.
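A minimal sketch of that check, assuming the take has been exported as a CSV with one column per capture curve and one row per frame: it reports each curve's peak value so weak or clipped channels can be traced back to the Maya blend shapes.

```python
import csv
from collections import defaultdict

def curve_peaks(csv_path):
    """Return the highest value each capture curve reaches during the take."""
    peaks = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for name, value in row.items():
                try:
                    peaks[name] = max(peaks[name], float(value))
                except (TypeError, ValueError):
                    pass  # skip non-numeric columns such as the timecode
    return peaks

for name, peak in sorted(curve_peaks("take_001.csv").items()):
    print("%-24s %.3f" % (name, peak))
```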



