DIRS Laboratory 76-3215
August 2, 2019 at 1:00am
Anjali K. Jogeshwar
MS Thesis Defense
Abstract

The study of human vision must include our interaction with objects. Such studies can involve modeling behavior, understanding visual attention and motor guidance, and enhancing user experiences, but they all have one thing in common: to analyze the data in detail, researchers typically must work through video data frame by frame. Real-world interaction data often comprises data from both the eye and the hand, and analyzing it frame by frame quickly becomes tedious and time-consuming. A calibrated scene video from an eye tracker captured at 120 Hz for 3 minutes contains over 21,000 frames to be analyzed.
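
For a sense of scale, the frame count follows directly from the frame rate and the recording duration; a quick back-of-the-envelope check in Python:

    sampling_rate_hz = 120        # scene-camera frame rate
    duration_s = 3 * 60           # a 3-minute recording
    n_frames = sampling_rate_hz * duration_s
    print(n_frames)               # 21600 frames to inspect manually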


Automating this process is crucial for interaction research to proceed. Advances in object recognition over the last decade now allow eye-movement data to be analyzed automatically to determine what a subject is looking at and for how long. I will describe my research, in which I developed a pipeline to help researchers analyze interaction data that includes both eye and hand. Inspired by a semi-automated pipeline for analyzing eye-tracking data, I have created a pipeline for analyzing hand grasp along with gaze. Together, the two pipelines can help researchers analyze interaction data.


The hand-grasp pipeline detects skin to locate the hands, then determines which object (if any) the hand is over, and where the thumb and fingers occlude that object. I also compare identification with recognition throughout the pipeline. The current pipeline operates on independent frames; future work will extend it to take advantage of the dynamics of natural interactions.
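
A minimal sketch of the per-frame logic is below, assuming OpenCV (cv2, version 4.x) and NumPy. The HSV skin thresholds, the helper names (hand_regions, grasped_object), and the box-overlap heuristic are illustrative assumptions for exposition, not the thesis's actual method:

    import cv2
    import numpy as np

    # Illustrative HSV skin range; the thesis's actual skin-detection
    # method is not specified in this abstract.
    SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)
    SKIN_HI = np.array([25, 255, 255], dtype=np.uint8)

    def hand_regions(frame_bgr, min_area=500):
        """Bounding boxes of skin-colored blobs (candidate hands)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]

    def box_overlap(a, b):
        """Intersection area of two (x, y, w, h) boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2 = min(a[0] + a[2], b[0] + b[2])
        y2 = min(a[1] + a[3], b[1] + b[3])
        return max(0, x2 - x1) * max(0, y2 - y1)

    def grasped_object(hand_box, labeled_boxes):
        """Label of the object box the hand overlaps most, or None."""
        best = max(labeled_boxes,
                   key=lambda lb: box_overlap(hand_box, lb[1]),
                   default=None)
        if best is None or box_overlap(hand_box, best[1]) == 0:
            return None
        return best[0]

    # Per-frame usage; object boxes would come from an object
    # detector or recognizer (the "mug" box here is hypothetical).
    frame = cv2.imread("frame_0001.png")
    objects = [("mug", (300, 200, 80, 120))]
    for hb in hand_regions(frame):
        print(grasped_object(hb, objects))

The occlusion step described above would additionally intersect the skin mask with the recognized object's region to locate where the thumb and fingers cover it; that step is omitted from this sketch for brevity.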