MultiCom is a multidisciplinary team involved in the design and evaluation of interactive systems. Its members include researchers, computer scientists, a statistician, and ergonomists. These complementary skills allow MultiCom to take part in research projects and industrial contracts concerning new technologies.

The layout of the MultiCom Usability Lab
Audio, video, and screen-activity streams from the experiment room are selected and mixed to produce a convenient output stream.


Our methodology is user-centered: a dedicated experimentation platform supports the observation and behavior capture of users interacting with specific electronic devices. The platform features a large experiment room (100 m²) well suited to simulating small-scale environments (e.g. a shop, classroom, or house). Two one-way mirrors, as well as cameras and microphones embedded in the experiment-room ceiling, enable evaluators to observe and record all relevant user activities from the observation room. A second, smaller experiment room is dedicated to studying, with an eye tracker, the gaze strategies of users interacting with a web site or any other application. The modular and flexible architecture of the platform has been designed to support experiments in various contexts such as home automation, e-learning, intelligent environments with RFID technology, and museums.

Area dedicated to simulated environments (here: a sports shop, restaurant, and information office). Users are asked to perform scenarios with new interaction devices, such as a PDA with RFID technology, in an intelligent environment.
User interacting with a PDA (including an RFID antenna) and reading the information the PDA provides about the equipment she chose. Each piece of equipment is "stamped" with an electronic tag carrying a unique identifier; when the RFID antenna of the PDA comes close to a tag, the PDA reads the identifier and queries a server database to retrieve the appropriate information.
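The tag-to-information lookup described above can be sketched as follows. This is a minimal illustration, not MultiCom's actual software: the database contents, tag identifiers, and function names are all hypothetical.

```python
# Hypothetical sketch of the RFID lookup: the tag identifier read by the
# PDA's antenna is used as a key into a server-side database. In the real
# system the lookup would be a network request, simulated here by a dict.

EQUIPMENT_DB = {
    "04:A2:19:7F": {"name": "Tennis racket", "price_eur": 89.0,
                    "info": "Carbon frame, grip size 3"},
    "04:B7:33:0C": {"name": "Running shoes", "price_eur": 120.0,
                    "info": "Sizes 36-46 in stock"},
}

def lookup_equipment(tag_id: str) -> dict:
    """Return the information record for a scanned tag, or a fallback."""
    record = EQUIPMENT_DB.get(tag_id)
    if record is None:
        return {"name": "Unknown item", "info": "No record for this tag"}
    return record

# When the antenna detects a tag, the PDA displays the returned record:
print(lookup_equipment("04:A2:19:7F")["name"])  # Tennis racket
```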

Eye tracking

Eye tracking studies are performed in a dedicated room. Typical applications are software evaluation and internet site evaluation. MultiCom uses eye tracking in contracts for industrial clients (e.g. air traffic control) and in research projects (e.g. facial expression recognition).

Subject in front of an eye tracker. The camera and infrared illumination are integrated in the display.
Example of the scan path of an expert user gazing at a travel agency site. Each circle is a fixation; its diameter indicates the fixation duration.
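Turning raw gaze samples into the fixations shown in such a scan path is commonly done with dispersion-based detection (the I-DT approach). The sketch below is illustrative, assuming a fixed sampling rate; the thresholds are placeholders that real studies would tune per device.

```python
# A minimal sketch of dispersion-based fixation detection: consecutive gaze
# samples whose spread stays under a threshold are grouped into one fixation.
# Threshold values here are illustrative, not calibrated for any tracker.

def detect_fixations(samples, max_dispersion=30.0, min_samples=5):
    """samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns fixations as (center_x, center_y, n_samples); n_samples is a
    proxy for duration, hence the circle diameter in a scan-path plot."""
    fixations = []
    window = []

    def close_window(win):
        if len(win) >= min_samples:
            fixations.append((sum(p[0] for p in win) / len(win),
                              sum(p[1] for p in win) / len(win),
                              len(win)))

    for point in samples:
        window.append(point)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            close_window(window[:-1])   # emit the fixation so far
            window = [point]            # restart at the outlying sample
    close_window(window)                # flush the final window
    return fixations
```

For example, six samples near (100, 100) followed by six near (300, 300) yield two fixations, one per cluster.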

A user writes information on a paper sheet and, in real time, a "wizard" writes the same information on the PDA (the wizard takes control of the PDA via the network and observes the scene via a camera), so that the user believes the electronic pen really exists. This technique is used to test the future functionalities of an interactive system, and avoids writing complex software for a functionality that users may ultimately reject.
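The relay at the heart of this Wizard of Oz setup can be sketched as a one-way channel from the wizard's console to the PDA display. This is a toy in-process stand-in for the network link; the class and method names are invented for illustration.

```python
# A minimal sketch of the Wizard of Oz relay: text typed by the wizard in
# the observation room is forwarded to the PDA, which displays it as if the
# "electronic pen" had recognised the user's handwriting. A queue stands in
# for the network connection used in the real setup.

import queue
from typing import Optional

class WizardOfOzRelay:
    def __init__(self):
        self._channel = queue.Queue()  # stand-in for the network link

    def wizard_writes(self, text: str):
        """Called on the wizard's console in the observation room."""
        self._channel.put(text)

    def pda_poll(self) -> Optional[str]:
        """Called by the PDA; returns the next text to display, if any."""
        try:
            return self._channel.get_nowait()
        except queue.Empty:
            return None

relay = WizardOfOzRelay()
relay.wizard_writes("Meeting at 10:30, room B12")
print(relay.pda_poll())  # the PDA shows exactly what the wizard typed
```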

Example of a Wizard of Oz experiment concerning the study of the concept for an electronic pen.
Area dedicated to brainstorming and participatory design sessions, equipped with electronic tools.

Participatory design

Another feature of our lab is the use of participatory design methodology, whereby all the actors concerned by a future interactive product or service (designers, engineers, ergonomists, market researchers, and end-users) are involved in the design cycle. MultiCom has designed specific tools to instrument these sessions.

Video annotation

For manual annotation of video streams after an experiment, we use The Observer. For instance, in the ACE project ("Agent Conversationnel Expressif", in English: Expressive Conversational Agent), we annotated the gestures, postures, dialogue phases, and facial expressions of an actress simulating a conversational agent on the web, whose final purpose is to help net surfers navigate a commercial site. The goal of the annotation was to identify which sequences of postures, gestures, and facial expressions are associated with which dialogue phases (e.g. giving information, asking a question, offering a choice, giving advice, waiting for an answer), in order to provide examples to the graphic designer in charge of animating the future web agent.

For this purpose, different Video Play Lists (VPL) were extracted with The Observer, e.g. all sequences where the actress was waiting for the net surfer's answer, or giving information about a product. These video sequences helped the graphic designer give the web agent believably human behavior.

Gestures performed by an actress and coded with The Observer.
Time-event plot of postures, gestures and facial expressions, produced with The Observer.

Focus groups

For another project, manual video annotation with The Observer was performed to analyze four focus groups on the usage of new technologies. Each focus group gathered people with a specific sociological profile, such as enthusiasts or detractors of new technology. The goal of the observation was to understand the social interactions between the users according to their profiles.

Automatic real-time annotation

We use a variety of techniques for automated real-time data collection; the type of data collected depends on the experiment. For example, we use the Noldus software uLog Pro to automatically collect keypresses, mouse movements, and any text typed on a keyboard, which lets us easily gather information during human-computer interactions. In other cases, during usability evaluations of mobile devices "in the wild", we record the user's position, their actions on the device, the system feedback, and the device's location in the building. All these data are monitored and stored in the observation room thanks to wireless technologies (HF or WiFi). This digital data collection is made possible by our specialized software bus, "Usybus", which enables real-time annotation of the events in The Observer. Results of this work have been presented at CHI 2006 by Francis Jambon.
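The software-bus idea above can be sketched as a publish/subscribe hub: heterogeneous capture tools publish timestamped events, and subscribers such as an annotation logger receive them in real time. All names in the sketch are illustrative; the actual Usybus implementation is not described in detail here.

```python
# A hedged sketch of a publish/subscribe event bus in the spirit of "Usybus":
# capture tools (keylogger, RFID reader, localization system) publish
# timestamped events on topics, and any subscriber receives them live.

import time
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to be invoked for every event on `topic`."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Timestamp the payload and deliver it to all topic subscribers."""
        event = {"topic": topic, "time": time.time(), "payload": payload}
        for callback in self._subscribers[topic]:
            callback(event)
        return event

# An annotation logger subscribing to two kinds of interaction events:
log = []
bus = EventBus()
bus.subscribe("keypress", log.append)
bus.subscribe("rfid", log.append)
bus.publish("keypress", {"key": "Enter"})
bus.publish("rfid", {"tag": "04:A2:19:7F"})
print(len(log))  # 2 timestamped events recorded
```

A real deployment would carry these events over the wireless link to the observation room; the in-process callbacks here only illustrate the routing logic.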

MultiCom - CLIPS IMAG Laboratory

Domaine Universitaire
220, rue de la chimie
B.P. 53
F-38041 Grenoble cedex