Set up your system

FaceReader is a user-friendly package that helps automate your research. It supports different input sources and system preferences. FaceReader achieves the best performance when it gets a good (video) image: both the placement of the camera and the lighting of the subject’s face are crucial for obtaining reliable classification results. So how do you set up your system for the best results?

Video, camera, infrared recording, or image 

FaceReader can be used with different input sources. You can upload still images for analysis, or analyze facial expressions from a pre-recorded video file. When analyzing from video, you can choose an accurate frame-by-frame mode or skip frames for high-speed analysis. You can also switch to a USB webcam or IP camera, which lets you analyze live while recording video and audio simultaneously.

FaceReader 8 can also analyze infrared (IR) recordings, which makes it possible to use FaceReader in low-light conditions.

If you plan to analyze multiple videos, you can process them all in one batch. Once you have selected the videos, the software does the work for you.

Choose your system preferences

The software can quickly detect interesting episodes, even in long series of events. You can choose to analyze the whole video or only parts of it. Moreover, when using the Project Analysis Module, you can add markers to interesting events. The Project Analysis Module allows you to compare groups. For example, compare the results of one group (commercial 1 - female) to another group (commercial 1 - male).

Light and movement are important

The best results are achieved using diffuse frontal lighting on the test participant; Noldus offers illumination to optimize your set-up. You can follow the emotions of your test participants as long as their orientation, movement, and rotation are within certain limits.

In the behavioral research blog posts about how to use FaceReader, you'll find five tips to get started and to optimize your facial expression analysis.

Automatic calibration

When desired, you can use automatic calibration to tailor the analysis of facial expressions to a specific person. You can either run the calibration before the analysis or continuously during the observation.

The individual calibration function enables the correction of person-specific biases towards a certain emotional expression. A calibration model can be created using live camera input, or from images or video of the test participant showing a neutral expression.

Using FaceReader with Site Licenses

The Site License Manager is a program to manage your FaceReader licenses via your own server. Why use it? The main benefit is that you no longer need a hardware key to use the software. A secured login gives you access to your projects. Furthermore, you can log in at any time, on any computer on the network, which enables you to collaborate with others on a project remotely.

The Site License Manager shows how many licenses you have, how many are in use, and when each license type expires. It distinguishes two license types: Basic and Full.

Basic license:

  • FaceReader
  • Project Analysis Module

Full license:

  • FaceReader
  • Project Analysis Module
  • Action Unit Module
  • Remote PPG Module


The Basic and Full licenses are available for 2 or more authorizations.

Easy integration with other programs

FaceReader data can be exported to other programs. You can export both the detailed log and the state log, including data about image quality and key point coordinates. A FaceReader API enables a direct link between FaceReader and another program. FaceReader also works seamlessly with The Observer XT, creating an integrated solution to study behavior, emotions, and physiological responses.
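As an illustration of what you can do with an exported detailed log, the sketch below finds the dominant emotion per analyzed frame. Note that the column names (Time, Happy, Sad, Angry, Neutral) and the inline sample data are illustrative assumptions, not the actual FaceReader export schema; adapt them to the columns in your own export.

```python
# Minimal sketch of post-processing a FaceReader detailed-log export.
# NOTE: the column names and sample values below are hypothetical; check
# the header row of your own exported log and adjust accordingly.
import csv
import io

sample_log = """Time,Happy,Sad,Angry,Neutral
0.00,0.10,0.05,0.02,0.80
0.04,0.55,0.03,0.01,0.38
0.08,0.72,0.02,0.01,0.20
"""

def dominant_emotions(log_text):
    """Return (time, highest-scoring emotion label) for each frame."""
    reader = csv.DictReader(io.StringIO(log_text))
    result = []
    for row in reader:
        scores = {k: float(v) for k, v in row.items() if k != "Time"}
        result.append((float(row["Time"]), max(scores, key=scores.get)))
    return result

print(dominant_emotions(sample_log))
# → [(0.0, 'Neutral'), (0.04, 'Happy'), (0.08, 'Happy')]
```

With a real export you would read the file from disk (for example with `open(...)` instead of `io.StringIO`) and could aggregate these per-frame labels into episode-level summaries.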

This offers a unique solution for synchronization, integration, and analysis of FaceReader data with other data, including but not limited to physiological data, screen captures, and eye tracking data. This compatibility enables you to perform multimodal measurements: which part of the user interface is the test participant looking at, or which image triggers an emotion?