FaceReader

Gathering data

FaceReader is a program for facial analysis that has been trained to classify facial expressions. In addition to analyzing facial expressions, it offers a number of extra classifications: it can, for example, detect gaze direction and whether the eyes and mouth are closed. FaceReader classifies facial expressions in the three steps explained below.

To gather accurate and reliable data about facial expressions, FaceReader is the most robust automated system available, offering:

  • Clear insights into the effect of different stimuli on emotions

  • Very easy to use: save valuable time and resources

  • Easy integration with eye tracking data and physiology data

Determine facial expressions in 3 steps with FaceReader

  1. Face finding. FaceReader finds the position of a face within an image using a face-finding algorithm based on deep learning. The algorithm searches, at different scales, for areas in the image that have the appearance of a face.
  2. Face modeling. FaceReader uses a facial modeling technique based on deep neural networks. It creates an artificial face model that describes the location of 468 key points in the face, estimated in a single quick pass that yields the full set of landmarks directly. After this initial estimation, the software compresses the key points using Principal Component Analysis, which leads to a highly compressed vector representation describing the state of the face (see the sketch after these steps).
  3. Face classification. A trained deep artificial neural network recognizes patterns in the face and classifies the facial expressions directly from the image pixels. Over 20,000 manually annotated images were used to train this network.
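
To make step 2 concrete, here is a minimal sketch of how a full set of 468 (x, y, z) key points can be compressed into a compact state vector with Principal Component Analysis. This is an illustration only, not FaceReader's actual code: the landmark data is a random stand-in, and the choice of 50 components is an assumption.

    # Sketch: PCA compression of facial key points (step 2 above).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Stand-in for face-model output: 1,000 frames, 468 (x, y, z) key points each.
    n_frames, n_points = 1000, 468
    landmarks = rng.normal(size=(n_frames, n_points * 3))  # 1,404 values per frame

    # On real faces the key points are strongly correlated, so a few dozen
    # principal components capture most of the variation.
    pca = PCA(n_components=50)
    compressed = pca.fit_transform(landmarks)

    print(f"{landmarks.shape[1]} values per frame -> {compressed.shape[1]}")

    # Each frame is now described by a compact vector of component scores.
    state_vector = compressed[0]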

To save you valuable time when analyzing videos, FaceReader also automatically classifies:

  • mouth: open-closed
  • eyes: open-shut
  • eyebrows: raised-neutral-lowered
  • head orientation
  • gaze direction
  • characteristics: gender, age, and facial hair (beard and/or mustache)

You can also enter other independent variables yourself.
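
To give an idea of the shape of these per-frame data, the sketch below bundles the classifications above into a single record. The field names, types, and units are hypothetical; FaceReader's real export format differs.

    # Hypothetical per-frame record; not FaceReader's actual export format.
    from dataclasses import dataclass

    @dataclass
    class FrameState:
        mouth_open: bool
        eyes_open: bool
        eyebrows: str                                 # 'raised' | 'neutral' | 'lowered'
        head_orientation: tuple[float, float, float]  # pitch, yaw, roll in degrees
        gaze_direction: tuple[float, float]           # horizontal, vertical in degrees
        gender: str
        age: int
        facial_hair: str                              # e.g. 'beard', 'mustache', 'none'

    frame = FrameState(mouth_open=False, eyes_open=True, eyebrows="raised",
                       head_orientation=(5.0, -10.0, 0.0), gaze_direction=(2.0, -1.0),
                       gender="female", age=34, facial_hair="none")
    print(frame.eyebrows)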

Define your own Custom expressions

Custom expressions are facial expressions or mental states that you can define yourself by combining the facial expressions and Action Units that FaceReader can recognize. You can also use valence, arousal, head orientation (Action Units 51-56), heart rate, and heart rate variability in the definition of a custom expression.

You can use custom expressions for a wide range of applications, for example:

  • Define ‘Duchenne smile’ as a custom expression by combining Action Units 6 and 12* (see the sketch after this list)
  • Analyze the positive expression ‘awe’ using a combination of Action Units 1, 5, 25, and 26*
  • Research expressions related to pain, based on Action Units 4, 6, 7, 9, 10, 25, 26, 27, and 43*
  • Change the name of an expression to better fit your research, for example, ‘Smile’ instead of ‘Happy’
  • Modify the intensity of an expression
  • When you purchase the Action Unit Module, the following affective states and behaviors are included as ready-made custom expressions: interest, boredom, confusion, attention, smiling, leaning backward/forward, spontaneous laughter, and talking. The module also covers blinking (AU45), head turn left (AU51), head turn right (AU52), head up (AU53), and head down (AU54).

*These examples are available upon request. Please contact us if you are interested in testing or validating these custom expressions. If you have developed your own custom expression and would like to share it with other researchers, please let us know!
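
To illustrate the idea behind the starred examples, here is a hypothetical Python analogue of a custom expression: a rule that combines Action Unit intensities. FaceReader defines custom expressions in its own interface; the function, the 0-to-1 intensity scale, and the min-combination below are assumptions for illustration.

    # Hypothetical sketch: a custom expression as a rule over AU intensities.
    # 'Duchenne smile' = cheek raiser (AU6) together with lip corner puller
    # (AU12); the score is the weaker of the two intensities.
    def duchenne_smile(au: dict[str, float]) -> float:
        return min(au.get("AU06", 0.0), au.get("AU12", 0.0))

    # A frame with a strong mouth smile (AU12) but little eye involvement (AU6)
    # scores low, separating a posed smile from a felt one.
    frame = {"AU06": 0.2, "AU12": 0.9}
    print(f"Duchenne smile intensity: {duchenne_smile(frame):.2f}")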

Participant emotion analysis

Facial expressions can be visualized as bar graphs, in a pie chart, and as a continuous signal. A gauge display summarizes the negativity or positivity of the emotion, which is also referred to as valence. The timeline gives you a detailed visual representation of the data.

A separate reporting window displays a pie chart with percentages, a smiley, and a traffic light, indicating whether a person’s mood is positive, neutral, or negative. All visualizations are available in real time and can also be reviewed afterwards. With the Project Analysis Module, advanced facial expression analysis is available in FaceReader as well.
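
As an example of working with such data outside the software, the sketch below plots a valence timeline from an exported file. The file name and the 'time' and 'valence' column names are hypothetical; actual exports depend on your FaceReader settings.

    # Sketch: plotting a valence timeline from a (hypothetical) export file.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("facereader_export.csv")  # hypothetical file and columns

    fig, ax = plt.subplots(figsize=(8, 3))
    ax.plot(df["time"], df["valence"], linewidth=1)
    ax.axhline(0, color="grey", linestyle="--", linewidth=0.5)  # neutral valence
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Valence")
    ax.set_title("Valence over time (below 0 = negative, above 0 = positive)")
    fig.tight_layout()
    plt.show()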

Deep Learning: analyze faces under challenging circumstances

With the Deep Face Model as a classification engine, FaceReader can make sense of large amounts of complex data. What does the Deep Face Model do exactly?

The Deep Face Model makes use of deep learning, which is based on an artificial neural network with multiple layers between the input and the output. The network passes information through these layers to calculate the probability of each output.

Currently, it is the most successful technique in machine learning. As in biological neural networks, information on the input side is collected and processed by interconnected neurons. The mapping from input to output takes place through a series of nonlinear computations, combining lower-level information to form higher-level features (for example, expressed emotion, age, and gender).
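
The toy example below illustrates that layered, nonlinear mapping: an input vector passes through successive hidden layers, and a final softmax yields one probability per expression class. It uses random weights and is a sketch of the general principle, not the Deep Face Model itself; all sizes are assumptions.

    # Toy illustration of a deep network's layered, nonlinear mapping.
    import numpy as np

    rng = np.random.default_rng(1)

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Input features -> two hidden layers -> 7 expression classes (assumed sizes).
    layer_sizes = [128, 64, 32, 7]
    weights = [rng.normal(scale=0.1, size=(m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    x = rng.normal(size=layer_sizes[0])  # stand-in for pixel-derived features
    for w in weights[:-1]:
        x = relu(x @ w)                  # nonlinear combination at each layer
    probs = softmax(x @ weights[-1])     # one probability per output class

    print(probs.round(3), "sum =", probs.sum())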

Privacy & ethics

FaceReader is installed on-site and adheres to strict privacy-by-design protocols. For example, the software offers the option not to record the test participant’s face during the analysis. In that case, only metadata are acquired from the recordings of the face, and these cannot be related to an identifiable person. Examples of such metadata are facial expressions, head pose, age, and gender.

FaceReader is a software tool for scientific research. It is not capable of recognizing or identifying faces or people, and is therefore unsuitable for surveillance purposes. For more details, please read our blog post or refer to our ethics statement and privacy policy.