About MF3D

The Macaque Face 3D (MF3D) project is an effort to develop a parametrically controlled, anatomically accurate, three-dimensional virtual avatar of the Rhesus macaque face and head for visual stimulation in behavioral neuroscience experiments involving this important model species. The avatar has been used to produce MF3D R1, the first publicly available database of computer-generated images of macaque facial expression and identity, which you can learn more about below.

Full details of how the model and rendered images were generated have been published here:

The graphical abstract below summarizes how biometric data from real animals were acquired, analysed and parameterized to generate a realistic digital model. For more details and information on continuing development of MF3D, browse the table of contents.

MF3D construction summary

Feature Overview

The following video animations demonstrate some of the parameters of the MF3D avatar that can be controlled and how these variations are encoded.

Facial expression, gaze and lighting

This video demonstrates how our macaque model of emotional facial expressions (for a single identity) can be continuously and parametrically varied to adjust appearance. The model was constructed using computed tomography (CT) data from a real Rhesus macaque, acquired under anesthesia, and was edited and rigged by a professional digital artist. In addition to control of the facial expressions, the model's head and eye gaze direction can be programmatically controlled, as can other variables such as environmental lighting and surface coloration.
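
For users scripting stimuli directly in Blender, this kind of parametric control can be exercised through Blender's Python API (bpy). The sketch below is illustrative only; the object, bone, shape key, and lamp names are hypothetical placeholders rather than the actual MF3D naming scheme:

    # Minimal sketch using the Blender Python API (bpy).
    # All object, bone, and shape key names below are hypothetical placeholders.
    import bpy
    from math import radians

    avatar = bpy.data.objects["MacaqueHead"]   # hypothetical mesh object name
    rig    = bpy.data.objects["MacaqueRig"]    # hypothetical armature name

    # Blend a facial expression shape key (0 = neutral, 1 = full expression)
    avatar.data.shape_keys.key_blocks["Threat"].value = 0.6   # hypothetical key name

    # Rotate the head bone to change head orientation (e.g. 15 degrees azimuth)
    head = rig.pose.bones["Head"]              # hypothetical bone name
    head.rotation_mode = 'XYZ'
    head.rotation_euler = (0.0, 0.0, radians(15))

    # Adjust a light source to change environmental lighting
    lamp = bpy.data.objects["Sun"]             # hypothetical light name
    lamp.data.energy = 3.0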

Facial dynamics estimation

To simulate naturalistic facial dynamics in the macaque avatar, we estimate the time courses of facial motion from video footage of real animals. By applying these time courses to the animation of the model's bones and shape keys, we can mimic the facial motion of the original clip while retaining independent control over a wide range of other variables. The output animation can be rendered at a higher resolution and frame rate (using interpolation) than the input video. (Original video footage in the left panel is used with permission of Off The Fence™.)
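
As a rough illustration of this approach (not the actual MF3D pipeline code), an estimated time course can be applied by keyframing a shape key value at each video frame through Blender's Python API; the object and shape key names below are hypothetical placeholders, and the time course values are invented for the example:

    # Minimal sketch: keyframe a shape key from an estimated per-frame time course.
    import bpy

    avatar = bpy.data.objects["MacaqueHead"]                 # hypothetical object name
    key    = avatar.data.shape_keys.key_blocks["LipSmack"]   # hypothetical key name

    # time_course[i] is the estimated expression intensity at video frame i + 1
    time_course = [0.0, 0.1, 0.35, 0.7, 0.9, 0.6, 0.2, 0.0]  # illustrative values only

    for frame, value in enumerate(time_course, start=1):
        key.value = value
        key.keyframe_insert(data_path="value", frame=frame)

    # Blender interpolates between keyframes, so the animation can be rendered
    # at a higher frame rate than the source video.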

Identity morphing

Individual variations in cranio-facial morphology (3D face shape) can be continuously and parametrically varied to adjust appearance, as in the MF3D R1 Identity stimulus set. The statistical model was constructed through principal component analysis (PCA) of the 3D surface reconstructions of 23 real Rhesus monkeys from computed tomography (CT) data acquired under anesthesia. The 3D plot in the top right corner illustrates the first three principal components of this ‘face-space’, where the origin of the plot represents the sample average face.
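
Conceptually, a point in this face space is reconstructed as the sample-average face plus a weighted sum of principal components. The sketch below illustrates the idea with placeholder arrays; the actual MF3D vertex counts, mean face, and component loadings are not reproduced here:

    # Minimal sketch of face-space synthesis (placeholder data, not MF3D values).
    import numpy as np

    n_vertices = 50000                                # assumed mesh resolution
    mean_face  = np.zeros(n_vertices * 3)             # placeholder average face (x, y, z per vertex)
    components = np.random.randn(3, n_vertices * 3)   # placeholder first 3 principal components

    def synthesize_identity(weights, mean_face, components):
        """Return flattened vertex coordinates for a point in face space."""
        return mean_face + np.asarray(weights) @ components

    # The origin of face space (all weights zero) is the sample-average face.
    average_face = synthesize_identity([0.0, 0.0, 0.0], mean_face, components)

    # Nonzero weights move the shape along the principal components of identity.
    novel_identity = synthesize_identity([2.0, -0.5, 0.3], mean_face, components)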

Animated sequences

Animated facial expression clips from the MF3D R1 Animation stimulus set can be combined to form a longer continuous animation sequence for use in experiments that require more naturalistic dynamics. This example was generated using the Python script MF3D_ConcatClips_Demo.py, which interleaves the appropriate head rotation sequences between consecutive expression clips and assembles them in the open-source Blender software's video sequence editor.
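
The sketch below illustrates the general approach of assembling clips in Blender's video sequence editor from Python; it is not the contents of MF3D_ConcatClips_Demo.py, and the clip file names are hypothetical placeholders:

    # Minimal sketch: concatenate movie clips end-to-end in Blender's video
    # sequence editor. File names are hypothetical placeholders.
    import bpy

    clips = ["expression_clip_01.mp4", "head_turn_01.mp4", "expression_clip_02.mp4"]

    scene = bpy.context.scene
    scene.sequence_editor_create()

    frame = 1
    for i, path in enumerate(clips):
        strip = scene.sequence_editor.sequences.new_movie(
            name=f"clip_{i:02d}", filepath=path, channel=1, frame_start=frame)
        frame = strip.frame_final_end      # place the next clip immediately after

    scene.frame_end = frame - 1
    bpy.ops.render.render(animation=True)  # render the concatenated sequence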