FaceReader is the most robust automated system for recognizing specific properties in facial images, including the six basic, or universal, expressions: happy, sad, angry, surprised, scared, and disgusted.
Additionally, FaceReader can recognize a 'neutral' state and analyze 'contempt'. The software immediately analyzes your data (live, video, or still images), saving valuable time.
"FaceReader is a very friendly software that has improved the scope of our research, opening to new questions and to rethink our experiments."
Miguel Ibaceta, MSc. | Pontificia Universidad Catolica de Chile
FaceReader works in three steps
- Face finding – the position of the face in the image is located accurately.
- Face modeling – the Active Appearance Model is used to synthesize an artificial face model, which describes the location of 500 key points as well as the texture of the face. These outcomes are combined with the results of the Deep Face algorithm to achieve higher classification accuracy. When face modeling is not successful (for example, when a hand covers the mouth but both eyes can still be found), the Deep Face algorithm, which is based on deep learning, takes over.
- Face classification – output is presented as the six basic expressions, contempt, and a neutral state.
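The three steps above can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, data structures, and placeholder logic are assumptions for demonstration, not FaceReader's actual API or algorithms.

```python
# Hypothetical sketch of the three-step flow: find the face, model it
# (with a deep-learning fallback when modeling fails), then classify.
# All names and placeholder values are illustrative assumptions.

EXPRESSIONS = ["happy", "sad", "angry", "surprised",
               "scared", "disgusted", "contempt", "neutral"]

def find_face(image):
    # Step 1 (face finding): locate the face. Placeholder: assume it
    # occupies the center of the image.
    h, w = image["height"], image["width"]
    return (w // 4, h // 4, w // 2, h // 2)

def model_face(image, bbox):
    # Step 2 (face modeling): fit a key-point model (an Active Appearance
    # Model in FaceReader). Returns None to signal failure, e.g. when the
    # mouth is occluded by a hand.
    if image.get("mouth_occluded"):
        return None
    x, y, w, h = bbox
    return [(x, y), (x + w, y + h)]  # stand-in for the ~500 key points

def deep_face_fallback(image, bbox):
    # Deep-learning route that takes over when key-point modeling fails.
    return [bbox[:2]]  # stand-in features

def classify_face(features):
    # Step 3 (face classification): map features to expression intensities.
    # Placeholder: uniform scores; a real classifier returns learned values.
    return {e: 1.0 / len(EXPRESSIONS) for e in EXPRESSIONS}

def analyze(image):
    bbox = find_face(image)
    features = model_face(image, bbox) or deep_face_fallback(image, bbox)
    return classify_face(features)
```

Note how the fallback mirrors the text: the pipeline only switches to the deep-learning route when the key-point model cannot be fitted.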
To save you valuable time when analyzing videos, FaceReader also automatically classifies:
- mouth open-closed
- eyes open-shut
- eyebrows raised-neutral-lowered
- head orientation
- gaze direction
Additionally, FaceReader can classify faces based on the following characteristics: gender, age, ethnicity, and facial hair (beard and/or moustache). Other independent variables can be entered manually.
Deep Learning: analyze faces under challenging circumstances
With its Deep Face Model classification engine, FaceReader can make sense of large amounts of complex data. What does the Deep Face Model do exactly?
The Deep Face Model makes use of deep learning, which is based on an artificial neural network with multiple layers between the input and the output. The network moves through these layers, calculating the probability of each output.
Deep learning is currently the most successful artificial intelligence technique in machine learning. As in biological neural networks, information on the input side is collected and processed by interconnected neurons. The mapping from input to output proceeds via a series of nonlinear computations, combining lower-level information into higher-level features (e.g., expressed emotion, age, gender).
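This layered mapping can be illustrated with a minimal feed-forward network. This is a generic sketch of the technique, not FaceReader's actual model: the layer sizes and random weights are arbitrary assumptions, and a real network learns its weights from data.

```python
# Minimal sketch of a multi-layer network: input passes through nonlinear
# layers and ends in a probability for each output class.
import math
import random

def relu(v):
    # Nonlinearity applied between layers.
    return [max(0.0, x) for x in v]

def softmax(v):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(x - max(v)) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def dense(v, weights, bias):
    # One layer: weighted sum of inputs plus bias, per output neuron.
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, bias)]

random.seed(0)
def rand_layer(n_in, n_out):
    # Arbitrary untrained weights, for illustration only.
    return ([[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

W1, b1 = rand_layer(4, 6)   # input features -> hidden layer
W2, b2 = rand_layer(6, 3)   # hidden layer -> 3 output classes

def forward(x):
    hidden = relu(dense(x, W1, b1))        # lower-level features
    return softmax(dense(hidden, W2, b2))  # probability of each output

probs = forward([0.2, -0.5, 0.9, 0.1])
```

The key point is the shape of the computation, not the numbers: each layer combines the previous layer's outputs nonlinearly, which is what lets deeper layers represent higher-level features.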
For more information about deep learning, we refer you to these articles by Gudi et al.:
- Deep Learning based FACS Action Unit Occurrence and Intensity.
- Recognizing Semantic Features in Faces using Deep Learning.
Components of FaceReader
FaceReader is a complete end-to-end solution that consists of the following components:
- FaceReader 7 software
- Project Analysis Module
- Action Unit Module
- Remote Photoplethysmography Module
- Illumination set
FaceReader is used at over 700 sites worldwide. Depending on which emotion is measured, its outcomes agree between 91% and 100% with facial expressions scored manually by professional annotators (Van der Schalk, J.; Hawk, S.T.; Fisher, A.H.; Doosje, B.J. (2011). Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES). Emotion, 11, 907-920. DOI: 10.1037/a0023853).
FaceReader methodology note
Request the FREE FaceReader methodology note to learn more about facial expression analysis theory.
- Learn what FaceReader is and how it works
- Learn how the calibration works
- Get insight into the quality of analysis and output
Online FaceReader Demonstration
Curious what emotions your own face shows? In this demo, the facial expression of a person is automatically extracted from a single picture. Additionally, FaceReader can extract personal characteristics such as gender, facial hair, an age indication, and whether a person is wearing glasses. This online demonstration lets you analyze images containing a face by entering a URL or uploading a file.
Participant emotion analysis
Facial expressions can be visualized as bar graphs, in a pie chart, and as a continuous signal. A gauge display summarizes the negativity or positivity of the emotion (valence). The timeline gives you a detailed visual representation of the data. A separate reporting window displays a pie chart with percentages, a smiley, and a traffic light, indicating whether a person’s mood is positive, neutral, or negative. All visualizations are available in real time and can be reviewed afterwards. With the Project Analysis Module, advanced facial expression analysis is available in FaceReader.
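The traffic-light mood indicator can be sketched as follows. This uses one common definition of valence (happy intensity minus the strongest negative intensity) with an arbitrary neutrality threshold; both are assumptions here, not necessarily FaceReader's exact formula.

```python
# Hedged sketch of a valence-based mood indicator. The valence formula
# (happy minus strongest negative) and the 0.1 threshold are assumptions.

NEGATIVE = ("sad", "angry", "scared", "disgusted")

def valence(scores):
    # Positivity of the expression: happy intensity minus the most
    # intense negative expression.
    return scores.get("happy", 0.0) - max(scores.get(e, 0.0) for e in NEGATIVE)

def mood(v, threshold=0.1):
    # Map valence onto the traffic light: positive, neutral, or negative.
    if v > threshold:
        return "positive"
    if v < -threshold:
        return "negative"
    return "neutral"
```

For example, a face scoring 0.8 on happy and 0.1 on sad would light up "positive", while one scoring 0.7 on angry would light up "negative".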
The circumplex model of affect describes the distribution of emotions in a 2D circular space, containing arousal and valence dimensions. FaceReader offers a real-time representation of this model with the horizontal axis representing the valence dimension (pleasant - unpleasant) and the vertical axis representing the arousal dimension (active - inactive).
Facial expressions automatically measured with FaceReader can be represented at any level of valence and arousal. Circumplex models are commonly used to assess liking in marketing, consumer science, and psychology (Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39 (6), 1161).
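Placing a measurement in the circumplex is a matter of plotting its valence and arousal coordinates. The sketch below assumes both dimensions are scaled to [-1, 1], with valence on the horizontal axis (unpleasant to pleasant) and arousal on the vertical axis (inactive to active); the function names are illustrative, not FaceReader's API.

```python
# Hedged sketch: locate a (valence, arousal) pair in the circumplex model.
# Assumes valence and arousal are each scaled to the range [-1, 1].
import math

def circumplex_position(valence, arousal):
    # Angle in degrees, counter-clockwise from the pleasant (right) axis,
    # and distance from the neutral center as an overall intensity.
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    intensity = math.hypot(valence, arousal)
    return angle, intensity

def quadrant(valence, arousal):
    # Name the quadrant of the 2D circular space.
    v = "pleasant" if valence >= 0 else "unpleasant"
    a = "active" if arousal >= 0 else "inactive"
    return f"{v}/{a}"
```

For instance, a strongly happy, excited face (high valence, high arousal) would land in the "pleasant/active" quadrant, near the top right of the circle.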
Privacy by design