DriveLab is a modular system that we configure to your needs, with the components that provide the functions you require and predefined scenarios to get you up and running quickly. If you already have your own eye tracker or simulator, we can help you turn it into a custom system for analyzing driver behavior.
Green Dino’s driving simulator is a realistic virtual environment proven to evoke real-world driving behavior. The virtual environment uses artificial intelligence to create a realistic, interactive driving experience, which increases the effectiveness of behavior evaluation. It allows for controllable and reproducible driving scenarios. Multiple driving tasks are integrated for automated assessment of learning curves, driving styles, and road safety. Driving tasks are predefined, but the scenarios can easily be adapted to suit your specific application, even if you’re not a trained programmer. A large set of predefined scenarios is included as standard.
You can select from a wide range of simulator events which ones you want to use in your research, for example:
- Driving tasks, e.g. not moving, driving (forward, reverse), turn (left, right), change lane
- Vehicle handling, e.g. steering, steer movement change, shifting gears, indicator, brake
- Road position, e.g. lane (left, right, middle), ramp on/off
- Time-to-collision for car in front or car behind
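As an illustration, events like those above can be treated as timestamped records and filtered by category for analysis. The sketch below is a minimal example of that idea; the field names and category labels are hypothetical, not DriveLab's actual event format.

```python
from dataclasses import dataclass

# Hypothetical event record; the real DriveLab event fields may differ.
@dataclass
class SimEvent:
    timestamp: float   # seconds since scenario start
    category: str      # e.g. "driving_task", "vehicle_handling"
    name: str          # e.g. "change_lane", "brake"

def events_of(log, category):
    """Return the events of one category, in chronological order."""
    return sorted((e for e in log if e.category == category),
                  key=lambda e: e.timestamp)

log = [
    SimEvent(12.4, "vehicle_handling", "brake"),
    SimEvent(3.1, "driving_task", "change_lane"),
    SimEvent(5.0, "vehicle_handling", "indicator"),
]
print([e.name for e in events_of(log, "vehicle_handling")])
# prints ['indicator', 'brake']
```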
A three-camera remote eye tracking system, made by Smart Eye, is optionally included in the set-up. By using this eye tracker, you can accurately and unobtrusively assess a driver’s gaze behavior, allowing you to see and analyze what a driver is looking at, in what order, and for how long. Whether a driver looks at objects in the car or at traffic can influence task performance: DriveLab allows you to automatically analyze what static and dynamic objects the driver looks at, independent of task or environment, for example:
- Left or right window
- Rearview mirror
- Left or right mirror
- RPM Indicator
Smart Eye is the preferred eye tracker for the automotive industry because of its robustness and because it is not head-mounted, performing well even when the driver’s head turns. Gaze information, measured in full 3D, is automatically imported into The Observer® XT, saving you a lot of time transferring data. Furthermore, the eye tracker automatically collects data on head movement, eyelid position, and pupil dilation.
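To show what "how long a driver looks where" can mean in practice, the sketch below aggregates dwell time per area of interest from already-classified gaze samples. It is purely illustrative: real Smart Eye output is far richer (3D gaze vectors, head pose, etc.), and the sample format and AOI labels here are assumptions.

```python
# Hypothetical gaze samples: (timestamp_s, area_of_interest) pairs,
# assumed to arrive at a fixed sampling rate (here 60 Hz).
def dwell_times(samples, sample_interval=1 / 60):
    """Sum the time spent on each area of interest (AOI)."""
    totals = {}
    for _, aoi in samples:
        totals[aoi] = totals.get(aoi, 0.0) + sample_interval
    return totals

# 0.5 s on the rearview mirror, then 1.5 s on the road ahead.
samples = ([(i / 60, "rearview_mirror") for i in range(30)]
           + [(i / 60, "road_ahead") for i in range(30, 120)])
d = dwell_times(samples)
print(round(d["rearview_mirror"], 2), round(d["road_ahead"], 2))
# prints 0.5 1.5
```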
Multiple video streams are generated in the simulator set-up, including the virtual environment and a video of the driver. Noldus’ digital video recording tool MediaRecorder automatically makes synchronous recordings of all video streams.
Other measurement modalities, such as physiological sensors (ECG, GSR, etc.) and video-based facial expression analysis (FaceReader™), can be added to the set-up and used to assess the driver’s performance, workload, and emotion. Since DriveLab is an open platform, you can also add your own algorithms and sensors.
Data integration and analysis
The Observer XT is Noldus Information Technology’s flagship tool for the analysis of human behavior. Its synchronization and integration capabilities make it easy to accurately link gaze behavior, vehicle control parameters, workload, and expert evaluations, supported by video feeds. Besides an integrated visualization of all data streams, The Observer XT allows you to run a combined analysis across them. You can, for example, compare the driver’s performance when looking at the dashboard with the moments when the driver was looking at traffic. Although all relevant data are combined into one system, the original data remain available to the researcher.
Workload is a key parameter for understanding why drivers do what they do. The integrated driving research environment enables the development of methods for assessing driver workload based on parameters such as pupil diameter, blink rate, scan patterns, and speed variation.
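One simple way to turn such parameters into a single signal is to normalize each per time window and average them. The sketch below combines mean pupil diameter and speed variability into an illustrative index; this is one possible approach for demonstration only, not a validated workload model and not DriveLab's method.

```python
import statistics

def windowed(xs, n):
    """Split a signal into consecutive, non-overlapping windows of n samples."""
    return [xs[i:i + n] for i in range(0, len(xs) - n + 1, n)]

def workload_index(pupil_mm, speed_kmh, n=5):
    """Illustrative per-window workload index: the average of the
    z-scored mean pupil diameter and z-scored speed variability."""
    pupil_means = [statistics.mean(w) for w in windowed(pupil_mm, n)]
    speed_sds = [statistics.pstdev(w) for w in windowed(speed_kmh, n)]

    def z(xs):
        mu, sd = statistics.mean(xs), statistics.pstdev(xs) or 1.0
        return [(x - mu) / sd for x in xs]

    return [(p + s) / 2 for p, s in zip(z(pupil_means), z(speed_sds))]

# A calm window (steady speed, small pupils) followed by a loaded one
# (varying speed, dilated pupils); the index rises in the second window.
idx = workload_index([3.0] * 5 + [5.0] * 5,
                     [50] * 5 + [40, 60, 45, 55, 50], n=5)
print(idx)
# prints [-1.0, 1.0]
```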
For more information, please contact us.