Set up your system

FaceReader is the most robust automated system for the recognition of specific properties in facial images, including the six basic or universal expressions:

  • happy
  • sad
  • angry
  • surprised
  • scared
  • disgusted

Additionally, FaceReader can recognize a 'neutral' state and analyze 'contempt'. The software immediately analyzes your data (live, video, or still images), saving valuable time. The circumplex model of affect describes the distribution of emotions in a 2D circular space along the arousal and valence dimensions.

The circumplex model of affect.
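To make the circumplex model concrete, the sketch below maps a valence/arousal pair to an angle and intensity in that 2D circular space. This is an illustrative calculation only; the function name and the [-1, 1] value ranges are assumptions, not FaceReader output.

```python
import math

def circumplex_position(valence: float, arousal: float):
    """Map a (valence, arousal) pair, each assumed in [-1, 1], to a point on
    the circumplex: an angle in degrees (0 = maximally pleasant,
    90 = maximally aroused) and a radial intensity in [0, 1]."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    intensity = min(1.0, math.hypot(valence, arousal))
    return angle, intensity

# Example: a mildly pleasant, highly aroused state such as 'surprised'
print(circumplex_position(valence=0.3, arousal=0.8))
```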

FaceReader works in three steps

  • Face finding – an accurate position of the face is found.
  • Face modeling – the Active Appearance Model is used to synthesize an artificial face model, which describes the location of 500 key points as well as the texture of the face.
  • Face classification – the output is presented as the six basic expressions, contempt, and a neutral state.
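These three steps can be pictured as a small pipeline. The sketch below is a conceptual outline with stub functions standing in for the real detection, modeling, and classification; it is not FaceReader's internal code.

```python
from typing import Optional

def find_face(frame) -> Optional[tuple]:
    """Stub: locate the face and return a bounding box, or None if no face."""
    return (0, 0, frame["width"], frame["height"])

def fit_appearance_model(frame, box) -> dict:
    """Stub: fit an Active Appearance Model (key points plus texture)."""
    return {"key_points": [(0.0, 0.0)] * 500, "texture": None}

def classify_expressions(model) -> dict:
    """Stub: return an intensity per expression."""
    return {e: 0.0 for e in ("happy", "sad", "angry", "surprised",
                             "scared", "disgusted", "contempt", "neutral")}

def analyze_frame(frame) -> Optional[dict]:
    """Conceptual outline of the three analysis steps for one frame."""
    box = find_face(frame)                    # 1. face finding
    if box is None:
        return None
    model = fit_appearance_model(frame, box)  # 2. face modeling (AAM)
    return classify_expressions(model)        # 3. face classification

print(analyze_frame({"width": 640, "height": 480}))
```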

To save you valuable time when analyzing videos, FaceReader also automatically classifies:

  • mouth open-closed 
  • eyes open-shut 
  • eyebrows raised-neutral-lowered 
  • head orientation
  • gaze direction 

Additionally, FaceReader can classify faces based on the following characteristics: gender, age, ethnicity, and facial hair (beard and/or moustache). Other independent variables can be entered manually.
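One way to picture the combined output is a per-frame record that groups the expression scores, the automatically classified states, and the participant characteristics. The field names below are illustrative assumptions, not FaceReader's actual log schema.

```python
from dataclasses import dataclass, field

@dataclass
class FrameResult:
    """Illustrative per-frame result record (field names are assumptions)."""
    expressions: dict          # e.g. {"happy": 0.82, "neutral": 0.10, ...}
    mouth: str                 # "open" or "closed"
    eyes: str                  # "open" or "shut"
    eyebrows: str              # "raised", "neutral", or "lowered"
    head_orientation: tuple    # (pitch, yaw, roll) in degrees
    gaze_direction: tuple      # (horizontal, vertical) angle in degrees

@dataclass
class Participant:
    """Illustrative participant characteristics; other independent
    variables can be added manually."""
    gender: str
    age_group: str
    ethnicity: str
    facial_hair: str
    custom: dict = field(default_factory=dict)
```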

Video, camera, or image 

FaceReader offers you different options. You can analyze facial expressions from video, using a pre-recorded file. When analyzing from video, you can choose an accurate frame-by-frame mode or skip frames for high-speed analysis. You can also switch to a USB webcam or IP camera, in which case FaceReader analyzes live and records the video at the same time. And you can upload still images for analysis. If you plan to analyze multiple videos, you can process them all in one batch: once you have selected the videos, the software does the work for you.
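The trade-off between frame-by-frame and high-speed analysis, and the idea of batch-processing several videos, can be sketched as a simple loop. The example below uses OpenCV only to step through frames; the analysis function is a placeholder and the file names are made up.

```python
import cv2  # OpenCV, used here only to illustrate frame stepping

def analyze_frame_placeholder(frame):
    """Stand-in for the actual expression analysis of one frame."""
    return {"frame_shape": frame.shape}

def analyze_video(path: str, frame_step: int = 1):
    """Analyze every `frame_step`-th frame; frame_step=1 is frame-by-frame,
    larger values trade accuracy for speed (illustrative sketch only)."""
    capture = cv2.VideoCapture(path)
    index = 0
    results = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            results.append(analyze_frame_placeholder(frame))
        index += 1
    capture.release()
    return results

# Batch analysis: queue several (hypothetical) videos and let the loop work.
for video in ["participant01.mp4", "participant02.mp4"]:
    analyze_video(video, frame_step=5)
```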

Choose your system preferences

FaceReader is a user-friendly package which helps automate your research. The software can quickly detect interesting episodes, even in long series of events. You can choose to analyze the whole video or only parts of it. Moreover, when using the Project Analysis Module, you can add markers to interesting events. The Project Analysis Module also allows you to compare groups; for example, compare the results of one group (commercial 1 - female) to those of another group (commercial 1 - male).
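Group comparisons of the kind the Project Analysis Module supports boil down to computing summary statistics per group. The sketch below, with invented numbers, averages a 'happy' intensity for the two example groups; it does not use the module itself.

```python
from statistics import mean

# Invented example data: mean 'happy' intensity per participant while
# watching commercial 1, split by group.
happy_scores = {
    "commercial 1 - female": [0.42, 0.55, 0.61],
    "commercial 1 - male":   [0.38, 0.47, 0.52],
}

for group, scores in happy_scores.items():
    print(f"{group}: mean happy intensity = {mean(scores):.2f}")
```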

Light and movement are important

The best results are achieved with diffuse frontal lighting on the test participant; Noldus offers illumination solutions to optimize your set-up. You can follow the emotions of your test participants as long as their orientation, movement, and rotation stay within certain limits.

Automatic calibration

When desired, you can use the automatic calibration to tailor the analysis of facial expressions to a specific person. You can either run the calibration before the analysis or continuously during the observation. In version 6, calibration is especially easy.

The individual calibration function enables the correction of person-specific biases towards a certain emotional expression. A calibration model can be created from live camera input, or from images or video of the test participant showing a neutral expression.
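Person-specific calibration can be thought of as measuring the expression scores on a neutral face and subtracting that baseline from later measurements. The sketch below shows one simple way to do such a correction; FaceReader's actual calibration algorithm may differ.

```python
def calibrate_baseline(neutral_samples: list) -> dict:
    """Average the expression scores measured while the participant
    shows a neutral expression (the person-specific bias)."""
    keys = neutral_samples[0].keys()
    n = len(neutral_samples)
    return {k: sum(s[k] for s in neutral_samples) / n for k in keys}

def apply_calibration(scores: dict, baseline: dict) -> dict:
    """Subtract the baseline bias and renormalize the scores to sum to 1."""
    corrected = {k: max(0.0, v - baseline.get(k, 0.0)) for k, v in scores.items()}
    total = sum(corrected.values()) or 1.0
    return {k: v / total for k, v in corrected.items()}

# Example: this participant shows a slight 'sad' bias even when neutral.
baseline = calibrate_baseline([{"happy": 0.05, "sad": 0.20, "neutral": 0.75},
                               {"happy": 0.04, "sad": 0.18, "neutral": 0.78}])
print(apply_calibration({"happy": 0.30, "sad": 0.25, "neutral": 0.45}, baseline))
```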

Easy integration with other programs

FaceReader data can be exported to other programs. You can export both the detailed log and the state log, including data about image quality and key-point coordinates. The FaceReader API enables a direct link between FaceReader and another program. Also, FaceReader works perfectly with The Observer XT, creating an integrated solution to study behavior, emotions, and physiological responses.
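Because the logs can be exported, downstream analysis in other tools is straightforward. The snippet below reads an exported detailed log with pandas; the file name, separator, and column names are assumptions about the export, not a documented schema.

```python
import pandas as pd

# Assumed file name, separator, and column names; adjust to your actual export.
log = pd.read_csv("facereader_detailed_log.csv", sep=";")

# Example: mean intensity per expression over the whole recording.
expression_columns = ["Happy", "Sad", "Angry", "Surprised",
                      "Scared", "Disgusted", "Contempt", "Neutral"]
print(log[expression_columns].mean())
```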