Many researchers code muscle movements to learn more about how and why the face moves. They use the Facial Action Coding System (FACS), which provides a technique for the reliable coding and analysis of facial movements and expressions. FACS, first developed by Paul Ekman and Wallace Friesen in 1978 and revised by Ekman, Friesen, and Hager in 2002, is a comprehensive, anatomically based system for describing all observable movement of the face.
FACS coders describe every observable change in facial movement on the basis of Action Units (AUs). They indicate which AUs moved to produce the changes observed in the face. This makes FACS coding quite objective.
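To make the idea concrete, a coded observation can be thought of as a set of active AUs, each with an intensity on the FACS A–E scale (A = trace, E = maximum). The sketch below is purely illustrative, not part of FACS or any Noldus product; the AU numbers and names follow the published FACS manual, while the data structure and function are hypothetical:

```python
# Illustrative sketch: representing a FACS-coded observation as data.
# AU numbers and names follow the published FACS manual; the dict and
# the describe() helper are hypothetical, for illustration only.

AU_NAMES = {
    1: "Inner brow raiser",
    4: "Brow lowerer",
    6: "Cheek raiser",
    12: "Lip corner puller",
    15: "Lip corner depressor",
}

def describe(observation):
    """Render coded AUs in the usual shorthand, e.g. {6: 'B', 12: 'D'} -> '6B+12D'."""
    return "+".join(f"{au}{intensity}" for au, intensity in sorted(observation.items()))

# AU6 (cheek raiser) combined with AU12 (lip corner puller) is the
# well-known "Duchenne smile", often cited as a marker of felt enjoyment.
smile = {6: "B", 12: "D"}
print(describe(smile))  # 6B+12D
```

This AU-plus-intensity shorthand is why FACS coding is considered objective: the coder records which muscles moved and how strongly, not what emotion the face supposedly shows.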
Scoring requires slow-motion and frame-by-frame viewing to identify the AUs that occur, always alternating with real-time viewing. As a result, FACS coding is very time-intensive.
Want to know what Action Units look like? We have listed 20 Action Units, three of which you can see below, as well as some frequently occurring or difficult Action Unit combinations.
Automatically code facial actions with FaceReader
FaceReader, the software for automatic recognition and analysis of facial expressions, offers you reliable automated facial action coding, which saves you valuable time and resources. It analyzes the facial expressions happy, sad, angry, surprised, scared, and disgusted, as well as affective attitudes such as interest, boredom, and confusion. FaceReader analyzes left and right Action Units separately; this unique feature distinguishes the intensity of the active muscles on the left and right sides of the face.
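To illustrate what separate left/right coding makes possible, here is a hypothetical sketch of comparing per-side AU intensities (on an assumed 0–1 scale) to flag asymmetric activations. This is an illustration only, not FaceReader's actual API or output format:

```python
# Hypothetical sketch: flagging asymmetric AU activations from separate
# left- and right-side intensity estimates (assumed 0-1 scale).
# This does not reflect FaceReader's real API or data format.

def asymmetric_aus(left, right, threshold=0.2):
    """Return AUs present on both sides whose intensities differ
    by more than the threshold, mapped to left-minus-right difference."""
    return {
        au: round(left[au] - right[au], 2)
        for au in left.keys() & right.keys()
        if abs(left[au] - right[au]) > threshold
    }

# Example: AU12 (lip corner puller) is much stronger on the left,
# as in a one-sided smirk; AU6 and AU4 are roughly symmetric.
left = {12: 0.8, 6: 0.5, 4: 0.10}
right = {12: 0.3, 6: 0.5, 4: 0.15}
print(asymmetric_aus(left, right))  # {12: 0.5}
```

Asymmetry of this kind is one reason per-side coding matters: a one-sided activation can carry different meaning than the same AU expressed symmetrically.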
With The Observer® XT, you can code with the full FACS or a subset of it, allowing comprehensive coding when needed. Video and codes can easily be synchronized, and The Observer XT offers multiple analysis options, such as reliability analysis.
Interested? Read the white paper on this subject.
“The Observer XT is an excellent tool to code the complexity of facial expressions, their on- and offset, their time course and their changing intensities.”
Dr. Miriam Kunz | Germany
Non-verbal behavior studies
Research has demonstrated that most human communication is non-verbal and can only be measured through observation. Methods range from straightforward live observations with a handheld device to elaborate lab studies, and from video analysis with The Observer XT to automatic facial expression recording with FaceReader™.
Interesting publications and blog posts
A diverse collection of scientific articles citing Noldus products is published in renowned journals each week. The following list is only a small selection of scientific publications in different research fields.
- Ekman, P.; Friesen, W. (1978). Facial action coding system: A technique for the measurement of facial movement. Palo Alto, CA: Consulting Psychologists Press.
- Forestell, C.A.; Mennella, J.A. (2012). More than just a pretty face. The relationship between infant’s temperament, food acceptance, and mothers’ perceptions of their enjoyment of food. Appetite, 58, 1136-1142.
- Kunz, M.; Lautenbacher, S. (2014). The faces of pain: A cluster analysis of individual differences in facial activity patterns of pain. European Journal of Pain, doi: 10.1002/j.1532-2149.2013.00421
- Platt, T.; Hofmann, J.; Ruch, W.; Proyer, R.T. (2013). Duchenne display responses towards sixteen enjoyable emotions: Individual differences between no and fear of being laughed at. Motivation and Emotion, doi:10.1007/s11031-013-9342-9.
- Zhang, X.; Yin, L.; Cohn, J.F.; Canavan, S.; Reale, M.; Horowitz, A.; Liu, P.; Girard, J.M. (2014). BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing, doi: 10.1016/j.imavis.2014.06.002.
- Zumhasch, J. (2018). A validation-study to review the performance of FaceReader in the detection of basic emotions and actions units. Behavioral Research Blog.