FaceReader
Facial expression analysis
Facial expressions and emotions
Emotions are an important aspect of human life. They instinctively influence our behaviors and decisions. The face is often the best indicator of these emotions: facial expressions convey what we feel without a word being spoken and can be observed by others. Facial expressions are produced by movements of the muscles beneath the skin of the face. For researchers, emotions are fundamental to understanding human behavior, as they are a crucial part of non-verbal communication and a rich source of social signals. Facial expression analysis and emotion data therefore allow researchers to study complex human behaviors in greater depth.

Facial expression analysis with FaceReader
FaceReader is the most robust automated system for the recognition of a number of specific properties in facial images, including the six basic or universal expressions described by Paul Ekman: happy, sad, angry, surprised, scared, and disgusted. Additionally, FaceReader can recognize a 'neutral' state and analyze 'contempt'. It also calculates Action Units, valence, arousal, gaze direction, head orientation, and personal characteristics such as gender and age.
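To give a concrete idea of the kinds of measures listed above, the sketch below defines a hypothetical Python record for one analyzed frame. The field names, value ranges, and example numbers are assumptions for illustration only and do not represent FaceReader's actual export format.

    # Illustrative only: a hypothetical record for one analyzed frame.
    # Field names and ranges are assumptions, not FaceReader's export format.
    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class FrameAnalysis:
        timestamp: float                                              # seconds into the recording
        expressions: Dict[str, float] = field(default_factory=dict)  # e.g. {"happy": 0.82}
        action_units: Dict[str, float] = field(default_factory=dict) # e.g. {"AU12": 0.7}
        valence: float = 0.0                                          # negative .. positive affect
        arousal: float = 0.0                                          # inactive .. active
        gaze_direction: str = "forward"                               # e.g. "left", "right", "forward"
        head_orientation: tuple = (0.0, 0.0, 0.0)                     # pitch, yaw, roll in degrees
        gender: str = "unknown"
        age_estimate: int = 0

    frame = FrameAnalysis(timestamp=1.25,
                          expressions={"happy": 0.82, "neutral": 0.10, "surprised": 0.05},
                          valence=0.6, arousal=0.45)
    print(max(frame.expressions, key=frame.expressions.get))  # prints the dominant expression: "happy"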
Measure emotions accurately
FaceReader software is fast, flexible, accurate, and easy to use. It immediately analyzes your data (live, video, or still images), saving valuable time. The option to record audio as well as video makes it possible to hear what people have been saying – for example, during human-computer interactions, or while watching stimuli.
FaceReader is used at over 1,000 sites worldwide. When FaceReader 8 outcomes are compared with facial expressions scored manually by professional annotators, agreement ranges between 91% and 100%, depending on which emotion is measured.


FaceReader is used in nearly 1,300 publications! Get your free trial to try it out!
Classifying facial expressions
FaceReader’s main output is a classification of the facial expressions of your test participant. These results are visualized in several chart types, such as line graphs, bar graphs, and pie charts that show the percentage per emotion.
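As a simple illustration of the kind of chart described above, the sketch below draws a bar graph of percentages per emotion with matplotlib. The values are made-up example data, not FaceReader output.

    # A minimal sketch of a bar graph showing the percentage per emotion.
    # The percentages are hypothetical example data.
    import matplotlib.pyplot as plt

    emotions = ["happy", "sad", "angry", "surprised", "scared", "disgusted", "neutral"]
    percentages = [42, 5, 3, 12, 2, 1, 35]   # example values summing to 100

    plt.bar(emotions, percentages)
    plt.ylabel("Percentage of analyzed frames (%)")
    plt.title("Facial expression classification (example data)")
    plt.tight_layout()
    plt.show()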
This video shows the real-time modeling capabilities of FaceReader. Based on the model, FaceReader can classify facial expressions and Action Units; the level of arousal and the valence of the expressions can be visualized in the circumplex model.
Circumplex model of affect
The circumplex model of affect describes the distribution of emotions in a 2D circular space, containing arousal and valence dimensions. FaceReader offers a real-time representation of this model with the horizontal axis representing the valence dimension (pleasant - unpleasant) and the vertical axis representing the arousal dimension (active - inactive).
Facial expressions automatically measured with FaceReader can be represented at any level of valence and arousal. Circumplex models are commonly used to assess liking in marketing, consumer science, and psychology (Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39 (6), 1161).
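The sketch below shows one way such a circumplex plot can be drawn: valence on the horizontal axis and arousal on the vertical axis, with each observation placed inside the unit circle. The data points and axis scaling are illustrative assumptions, not FaceReader measurements.

    # A minimal sketch of a circumplex model of affect: valence (horizontal)
    # versus arousal (vertical). Data points are illustrative only.
    import math
    import matplotlib.pyplot as plt

    # (valence, arousal) pairs on a -1..1 scale for a few hypothetical states
    observations = {"relaxed": (0.6, -0.5), "excited": (0.7, 0.8),
                    "bored": (-0.4, -0.6), "tense": (-0.6, 0.7)}

    fig, ax = plt.subplots()
    ax.add_artist(plt.Circle((0, 0), 1.0, fill=False))   # outline of the circular space
    ax.axhline(0, linewidth=0.5)
    ax.axvline(0, linewidth=0.5)
    for label, (valence, arousal) in observations.items():
        ax.plot(valence, arousal, "o")
        ax.annotate(label, (valence, arousal))
        # angle from the positive valence axis summarizes the affective state
        print(label, round(math.degrees(math.atan2(arousal, valence)), 1), "degrees")
    ax.set_xlim(-1.1, 1.1)
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlabel("Valence (unpleasant - pleasant)")
    ax.set_ylabel("Arousal (inactive - active)")
    ax.set_aspect("equal")
    plt.show()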

Download our white papers

FaceReader SDK or API
Need to easily integrate facial expression analysis into other applications? The FaceReader SDK is the perfect solution. The SDK is available for Windows and Android and can run on your PC or server. For cloud-based analysis, a FaceReader Web API is available. Additionally, a FaceReader Application Programming Interface (API) has been developed to serve as an interface between your own software and the standard FaceReader for Windows software, facilitating integration.
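As a rough illustration of what cloud-based analysis through a web API can look like, the sketch below posts an image to a placeholder endpoint with Python's requests library. The endpoint URL, authentication header, and response fields are assumptions for illustration; consult the FaceReader Web API documentation for the actual interface.

    # Illustrative only: posting an image to a placeholder web API endpoint.
    # URL, credentials, and response structure are assumptions, not the real
    # FaceReader Web API interface.
    import requests

    API_URL = "https://example.com/facereader/analyze"   # placeholder endpoint
    API_KEY = "YOUR_API_KEY"                              # placeholder credential

    with open("participant_frame.jpg", "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Hypothetical response structure: a dictionary of expression intensities
    print(result.get("expressions", {}))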
FaceReader Demonstration
Curious what emotions your own face shows? In this online demo, the facial expression of a person is automatically extracted from a single picture. Additionally, FaceReader is capable of extracting personal characteristics, such as gender, facial hair, an age indication, and whether a person is wearing glasses. This online demonstration lets you analyze images containing a face by entering a URL or uploading a file.