FaceReader is a robust automated system for the recognition of specific properties in facial images, including the six basic or universal expressions: happy, sad, angry, surprised, scared, and disgusted.
Additionally, FaceReader can recognize a 'neutral' state and analyze 'contempt'. The software immediately analyzes your data (live, video, or still images), saving valuable time.
FaceReader works in three steps:
- Face finding – an accurate position of the face is found.
- Face modeling – the Active Appearance Model is used to synchronize an artificial face model, which describes the location of 500 key points as well as the texture of the face. These outcomes are combined with the results of the Deep Face algorithm to achieve higher classification accuracy. When face modeling is not successful (for example, when a hand covers the mouth but both eyes can still be found), the Deep Face algorithm takes over.
- Face classification – output is presented as the six basic expressions, contempt, and a neutral state.
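The three steps above can be sketched as a simple pipeline. All function and type names below are illustrative placeholders, not the actual FaceReader API:

```python
from dataclasses import dataclass

# Illustrative placeholder types -- not the actual FaceReader API.
@dataclass
class FaceRegion:
    x: int
    y: int
    width: int
    height: int

EXPRESSIONS = ["happy", "sad", "angry", "surprised",
               "scared", "disgusted", "contempt", "neutral"]

def find_face(image):
    """Step 1: locate the face in the image (stub)."""
    return FaceRegion(x=120, y=80, width=200, height=200)

def model_face(image, region):
    """Step 2: fit an appearance model describing key points
    plus texture (stub returning a feature vector)."""
    return [0.0] * 10  # placeholder features

def classify_face(features):
    """Step 3: map model features to expression intensities (stub)."""
    scores = {name: 0.0 for name in EXPRESSIONS}
    scores["neutral"] = 1.0  # stub: no expression detected
    return scores

def analyze(image):
    region = find_face(image)
    features = model_face(image, region)
    return classify_face(features)

result = analyze(image=None)
print(max(result, key=result.get))  # -> neutral (stub result)
```

The stub returns a fixed result; in practice each step would run a trained detector, model fitter, and classifier on real image data.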
To save you valuable time when analyzing videos, FaceReader also automatically classifies:
- mouth open-closed
- eyes open-shut
- eyebrows raised-neutral-lowered
- head orientation
- gaze direction
Additionally, FaceReader can classify faces based on the following characteristics: gender, age, ethnicity, and facial hair (beard and/or moustache). Other independent variables can be entered manually.
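A per-frame analysis record combining these classifications might look like the following sketch. The field names and value ranges are assumptions for illustration, not FaceReader's export format:

```python
from dataclasses import dataclass, field

@dataclass
class FrameAnalysis:
    # State classifications (illustrative labels)
    mouth: str = "closed"        # "open" or "closed"
    eyes: str = "open"           # "open" or "shut"
    eyebrows: str = "neutral"    # "raised", "neutral", or "lowered"
    head_orientation: tuple = (0.0, 0.0, 0.0)  # pitch, yaw, roll in degrees
    gaze_direction: tuple = (0.0, 0.0)         # horizontal, vertical angle
    # Participant characteristics
    gender: str = "unknown"
    age_range: tuple = (0, 0)
    ethnicity: str = "unknown"
    facial_hair: str = "none"    # "beard", "moustache", "both", or "none"
    # Manually entered independent variables
    custom: dict = field(default_factory=dict)

frame = FrameAnalysis(eyebrows="raised", custom={"condition": "A"})
print(frame.eyebrows)  # -> raised
```

The `custom` dictionary stands in for the manually entered independent variables mentioned above.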
Components of FaceReader
FaceReader is a complete end-to-end solution that consists of the following components:
- FaceReader 7 software
- Project Analysis Module
- Action Unit Module
- Remote Photoplethysmography Module
- Illumination set
FaceReader is used at over 600 sites worldwide. Depending on which emotion is measured, FaceReader scores between 91% and 100% agreement when its outcomes are compared with facial expressions scored manually by professional annotators (Van der Schalk, J., Hawk, S.T., Fischer, A.H., & Doosje, B.J. (2011). Moving faces, looking places: Validation of the Amsterdam Dynamic Facial Expression Set (ADFES). Emotion, 11(4), 907-920. DOI: 10.1037/a0023853).
FaceReader methodology note
Request the FREE FaceReader methodology note to learn more about facial expression analysis theory.
- Learn what FaceReader is and how it works
- Learn how the calibration works
- Get insight into the quality of analysis and output
Online FaceReader Demonstration
Curious what emotions your own face shows? In this demo, the facial expression of a person is automatically extracted from a single picture. Additionally, FaceReader can extract some personal characteristics, such as gender, facial hair, an age indication, and whether a person is wearing glasses. This online demonstration lets you analyze images containing a face by entering a URL or uploading a file.
Participant emotion analysis
Facial expressions can be visualized as bar graphs, in a pie chart, and as a continuous signal. A gauge display summarizes the negativity or positivity of the emotion (valence). The timeline gives you a detailed visual representation of the data. A separate reporting window displays a pie chart with percentages, a smiley, and a traffic light, indicating whether a person’s mood is positive, neutral, or negative. All visualizations are available in real time and can be reviewed afterwards. The Project Analysis Module makes advanced facial expression analysis available in FaceReader.
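As a rough illustration of such a valence summary, one common convention (an assumption here, not necessarily FaceReader's exact formula) scores valence as the intensity of the positive expression minus the intensity of the strongest negative expression:

```python
def valence(intensities):
    """Summarize expression intensities (0..1) as a single valence
    score in [-1, 1]; positive means a pleasant mood.
    Illustrative convention, not FaceReader's published formula.
    """
    negatives = ("sad", "angry", "scared", "disgusted")
    strongest_negative = max(intensities.get(name, 0.0) for name in negatives)
    return intensities.get("happy", 0.0) - strongest_negative

def traffic_light(v, threshold=0.33):
    """Map a valence score to the positive/neutral/negative indicator."""
    if v > threshold:
        return "positive"
    if v < -threshold:
        return "negative"
    return "neutral"

scores = {"happy": 0.75, "sad": 0.25, "angry": 0.05}
v = valence(scores)
print(v, traffic_light(v))  # -> 0.5 positive
```

The `threshold` parameter is a hypothetical cut-off chosen for the example; a real gauge would use whatever boundaries the software defines.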
Circumplex model of affect
The circumplex model of affect describes the distribution of emotions in a 2D circular space, containing arousal and valence dimensions. FaceReader offers a real-time representation of this model with the horizontal axis representing the valence dimension (pleasant - unpleasant) and the vertical axis representing the arousal dimension (active - inactive).
Facial expressions automatically measured with FaceReader can be represented at any level of valence and arousal. Circumplex models are commonly used to assess liking in marketing, consumer science, and psychology (Russell, J.A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161-1178).
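In the circumplex model, a (valence, arousal) pair is a point in a 2D circle: its angle gives the emotion's direction around the circle and its distance from the origin gives the intensity. A minimal sketch of that mapping, using standard polar coordinates rather than any FaceReader-specific convention:

```python
import math

def circumplex_position(valence, arousal):
    """Place a (valence, arousal) pair, each in [-1, 1], in the 2D
    circumplex: valence on the horizontal axis (unpleasant to pleasant),
    arousal on the vertical axis (inactive to active).
    Returns (angle_degrees, intensity), where 0 degrees points toward
    maximally pleasant and 90 degrees toward maximally active.
    """
    angle = math.degrees(math.atan2(arousal, valence)) % 360.0
    intensity = min(math.hypot(valence, arousal), 1.0)  # clamp to the unit circle
    return angle, intensity

# Pleasant and active (e.g. excited): upper-right quadrant
angle, intensity = circumplex_position(valence=0.5, arousal=0.5)
print(round(angle), round(intensity, 2))  # -> 45 0.71
```

A point at (valence=-1, arousal=0) lands at 180 degrees, the maximally unpleasant direction.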
Project Analysis Module
FaceReader is a full-scale solution that enables you to work effectively and efficiently, and to present your research findings quickly. Simply use the marker function to code events such as ‘start commercial’, ‘distraction’, ‘talking’, or ‘presenting stimulus’. By choosing a name, trigger code, and color, you can annotate your videos for further analysis in FaceReader. The Project Analysis Module can create multiple visual presentations of your data.
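An event marker as described above bundles a name, a trigger code, and a color; scoring markers against a video timeline could be sketched as follows (the structures and values are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class Marker:
    name: str          # e.g. "start commercial"
    trigger_code: str  # key pressed during scoring
    color: str         # display color in the timeline

@dataclass
class Event:
    marker: Marker
    timestamp: float   # seconds from the start of the video

markers = {
    "c": Marker("start commercial", "c", "blue"),
    "d": Marker("distraction", "d", "red"),
}

# Simulated key presses while reviewing a video
events = [Event(markers["c"], 12.4), Event(markers["d"], 30.1)]
for event in events:
    print(f"{event.timestamp:>6.1f}s  {event.marker.name}")
```

Collected events like these could then feed into per-segment summaries or the visual presentations the module produces.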
Action Unit Module
FaceReader automatically analyzes a selection of 20 commonly used Action Units, such as raising of cheeks, wrinkling of nose, dimpling, and lip tightening.