Rakshit Kothari, Zhizhuo Yang, Christopher Kanan, Reynold Bailey, Jeff Pelz, Gabriel Diaz


The interaction between the vestibular and ocular systems has primarily been studied in controlled environments. Consequently, off-the-shelf tools for categorization of gaze events (e.g., fixations, pursuits, saccades) fail when head movements are allowed. Our approach was to collect a novel, naturalistic, and multimodal dataset of eye+head movements when subjects performed everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye+head rotational velocities (\(^\circ/s\)), infrared eye images and scene imagery (RGB+D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 sample-based Cohen's \(\kappa\). This labelled data was used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. Classifiers achieve ~87% of human performance in detecting fixations and saccades but fall short (50%) in detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
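As a rough illustration of the classification setup described above, the sketch below trains a Random Forest on only the magnitudes of eye and head rotational velocity and scores agreement with sample-based Cohen's \(\kappa\). All data, thresholds, and label rules here are synthetic stand-ins, not the GW dataset or the paper's trained models; the sketch only shows the shape of the pipeline, assuming scikit-learn.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical per-sample features: magnitudes of eye and head
# rotational velocity (deg/s). In GW these would come from the eye
# tracker and the head-mounted IMU; here we synthesize stand-in data.
n = 1000
eye_speed = rng.uniform(0, 400, n)   # |eye velocity|
head_speed = rng.uniform(0, 100, n)  # |head velocity|
X = np.column_stack([eye_speed, head_speed])

# Stand-in labels: 0 = fixation, 1 = saccade, 2 = pursuit.
# A crude velocity rule plays the role of a human coder here.
y = np.where(eye_speed > 100, 1, np.where(head_speed > 40, 2, 0))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
pred = clf.predict(X)

# Sample-based Cohen's kappa: the same chance-corrected agreement
# measure used to compare human coders in the paper.
kappa = cohen_kappa_score(y, pred)
```

In the real pipeline, labels come from human coders rather than a rule, and evaluation is done on held-out data with the event-based metrics mentioned above.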

Code and Data

Published dataset (2GB): Sequences

Gaze-in-Wild sequences and labels (2GB): Sequences

Dataset video and image data (2TB): Video data

Gaze-in-Wild repository (bitbucket): GW repo

Scientific Reports (V2): Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities

arXiv submission (V1): Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities

Unpublished data

We also share a cleaned variant of [ProcessData] in [ProcessData_cleaned]. This data is shared publicly in good faith for academic researchers. It is not part of the manuscript above, and no analysis in the publication uses it.

If you use any of the attached code or data, please cite our work. This work is published under the MIT license.

@article{kothari2019gaze,
  title={Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities},
  author={Kothari, Rakshit and Yang, Zhizhuo and Kanan, Christopher and Bailey, Reynold and Pelz, Jeff and Diaz, Gabriel},
  journal={arXiv preprint arXiv:1905.13146},
  year={2019}
}