Tools for the Facial Action Coding System (FACS)
Many researchers code muscle movements to learn more about how and why the face moves, using the Facial Action Coding System (FACS), which provides a technique for the reliable coding and analysis of facial movements and expressions. FACS, first developed by Paul Ekman and Wallace Friesen in 1978 and revised by Ekman, Friesen, and Hager in 2002, is a comprehensive, anatomically based system for describing all observable movement of the face.
FACS coders describe every observable change in the face in terms of Action Units (AUs), indicating which AUs moved to produce the change observed. This makes FACS coding highly objective.
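The idea of coding a facial movement as a set of AUs can be sketched in a few lines of Python. This is purely illustrative (it is not FaceReader's API); the AU numbers and their common names below follow the published FACS manual, and the `describe` helper is a hypothetical convenience function.

```python
# Illustrative sketch: a FACS score as a set of Action Units,
# each mapped to the common name of its muscle movement.
# AU numbers and names follow the FACS manual (Ekman, Friesen, & Hager, 2002).
AU_NAMES = {
    1: "Inner brow raiser",
    2: "Outer brow raiser",
    4: "Brow lowerer",
    6: "Cheek raiser",
    12: "Lip corner puller",
    15: "Lip corner depressor",
}

def describe(aus):
    """Return a human-readable label for each coded AU, in numeric order."""
    return [f"AU{n}: {AU_NAMES.get(n, 'unknown')}" for n in sorted(aus)]

# AU6 + AU12 is the classic combination seen in a felt (Duchenne) smile.
print(describe({6, 12}))  # ['AU6: Cheek raiser', 'AU12: Lip corner puller']
```

A coder's output for one video frame is then simply the set of active AUs, which makes scores directly comparable across coders and studies.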
Scoring requires slow-motion and frame-by-frame viewing to identify the AUs that occur, alternating with viewing at real-time speed. As a result, manual FACS coding is very time-intensive.
Automatically code facial actions with FaceReader
FaceReader™, software for the automatic recognition and analysis of facial expressions, offers reliable automated facial action coding, saving you valuable time and resources. When an Action Unit is active, its intensity is displayed in five categories. Output is presented on this scale with different colors and can be exported for further analysis in Excel, The Observer® XT, or another program of your choice.
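The five-category intensity scale can be sketched as a simple binning step. A minimal illustration, assuming a continuous activation value in [0, 1] and the FACS manual's category letters A (trace) through E (maximum); the equal-width thresholds are an assumption for illustration, not FaceReader's exact scale.

```python
# Illustrative sketch: bin a continuous AU activation value (0-1) into
# the five FACS intensity categories A (trace) through E (maximum).
# Equal-width bins are an assumption, not FaceReader's actual thresholds.
def intensity_category(value):
    labels = ["A", "B", "C", "D", "E"]
    if not 0.0 <= value <= 1.0:
        raise ValueError("activation must lie in [0, 1]")
    # value == 1.0 falls into the last (E) bin
    return labels[min(int(value * 5), 4)]

print(intensity_category(0.05))  # A (trace)
print(intensity_category(0.55))  # C (marked)
print(intensity_category(1.00))  # E (maximum)
```

Exporting one such category per AU per frame yields a table that downstream tools such as Excel or The Observer XT can ingest directly.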
FaceReader analyzes left and right Action Units separately, a unique feature that distinguishes the intensity of the active muscles on the left and right sides of the face.
Check out what Facial Action Units look like!
Recent advances in computer vision have allowed for reliable automated facial action coding. Below you can see the 20 Action Units offered in FaceReader as well as some frequently occurring or difficult action unit combinations. Some images have been zoomed in on the area of interest to explicitly show what muscle movement corresponds to the specific Facial Action Unit.
Combinations of action units
Non-verbal behavior studies
Research has demonstrated that most human communication is non-verbal and can only be measured through observation. Methods range from straightforward live observations with a handheld device to elaborate lab studies, and from video analysis with The Observer XT to automatic facial expression recording with FaceReader.
A diverse collection of scientific articles citing Noldus products is published in renowned journals each week. The following list is a small selection of scientific publications about the Facial Action Coding System and/or facial expression analysis.
- Ekman, P.; Friesen, W. V.; Hager, J. C. (2002). Facial action coding system: The manual on CD-ROM. Instructor’s Guide. Salt Lake City: Network Information Research Co.
- Lewinski, P.; Fransen, M. L.; Tan, E.S.H. (2014). Predicting Advertising Effectiveness by Facial Expressions in Response to Amusing Persuasive Stimuli. Journal of Neuroscience, Psychology, and Economics, 7, 1-14.
- Kunz, M.; Lautenbacher, S. (2014). The faces of pain: A cluster analysis of individual differences in facial activity patterns of pain. European Journal of Pain, https://doi.org/10.1002/j.1532-2149.2013.00421.x.
- van der Schalk, J.; Hawk, S. T.; Fischer, A. H.; Doosje, B. (2011). Moving Faces, Looking Places: Validation of the Amsterdam Dynamic Facial Expression Set (ADFES). Emotion, 11 (4), 907-920. https://doi.org/10.1037/a0023853.