
Online and Predictive Strategies for Goal-Directed Visual Tracking of a Moving Object

When preparing to intercept a ball in flight, humans make predictive saccades ahead of the ball’s current position, to a location along the ball’s future trajectory. Such visual prediction is ubiquitous amongst novices, extremely accurate, and can reach at least 400 ms into the future. This ocular prediction is not a simple extrapolation of the ball’s trajectory. This was previously demonstrated in a virtual-reality ball interception task, in which subjects were asked to intercept an approaching virtual ball shortly after its bounce upon the ground. Because movement of the body was tracked with a motion-capture system, subjects could interact with a virtual ball seen through a head-mounted display. Subjects made spontaneous pre-bounce saccades to a location along the ball’s eventual post-bounce trajectory, where they fixated until the ball passed within 2˚ of the fixation location. Importantly, these saccades also reflected accurate prediction of the ball’s vertical speed after the bounce, and thus of the ball’s eventual height at the time of the catch.
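
The 2˚ criterion above can be illustrated with a minimal sketch (not the lab's analysis code): given per-frame motion-capture estimates of the eye position, the fixation location, and the ball position, compute the angular separation between the gaze direction and the ball direction and find the first frame at which the ball falls within 2˚ of fixation. All array shapes and variable names are assumptions for illustration.

```python
import numpy as np

def angular_separation_deg(v1, v2):
    """Angle in degrees between two (N, 3) arrays of direction vectors."""
    v1 = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    cos_theta = np.clip(np.sum(v1 * v2, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def first_frame_within(eye_pos, fixation_point, ball_pos, threshold_deg=2.0):
    """Index of the first frame at which the ball passes within `threshold_deg`
    of the fixation location, or None if it never does.

    eye_pos, ball_pos : (N, 3) positions per frame; fixation_point : length-3.
    """
    gaze_dir = fixation_point - eye_pos   # eye -> fixation, per frame
    ball_dir = ball_pos - eye_pos         # eye -> ball, per frame
    sep = angular_separation_deg(gaze_dir, ball_dir)
    hits = np.flatnonzero(sep <= threshold_deg)
    return int(hits[0]) if hits.size else None
```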

 

 

Machine Learning for the Automated Identification of Coordinated Eye + Head Movements

Collaborators: Jeff Pelz, Reynold Bailey, and Chris Kanan at RIT

Sponsors:  Google, Inc.

The future of virtual and augmented reality devices is one in which media and advertising are seamlessly integrated into the 3D environment. Now that the user's visual attention is freed from the confines of the traditional 2D display, new advances will be required to detect when a user is visually attending to an object or advertisement placed in the 3D environment. One promising tool is eye tracking. However, identifying gaze upon a region of interest is particularly challenging when the object or the observer is in motion, as is most often the case in natural or simulated 3D environments. The problem remains largely unsolved because the eye-tracking community is still primarily constrained to the context of 2D displays, and has not yet produced algorithms suitable for transforming a 3D gaze position signal into a usable characterization of the subject's intentions.

Our lab is currently working to novel machine learning classification tool that analyzes movements of the head, and eyes for the automated classification of coordinated movements, including fixation, pursuit, saccade, whether they arise from movements of the eyes, head, or coordinated movements of both. 
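
As a rough illustration of this kind of pipeline (a simplified sketch, not the lab's actual classifier), one can compute sliding-window features from the eye-in-head and head angular velocity signals and feed them to an off-the-shelf classifier trained against hand-labelled gaze events. The signal names, window size, and label scheme below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(eye_vel, head_vel, win=11):
    """Per-sample features from (N,) signed eye-in-head and head angular
    velocities (deg/s). Gaze velocity is approximated as their sum."""
    gaze_vel = eye_vel + head_vel
    half = win // 2
    padded = [np.pad(x, half, mode="edge") for x in (eye_vel, head_vel, gaze_vel)]
    feats = []
    for i in range(len(eye_vel)):
        windows = [p[i:i + win] for p in padded]
        feats.append([f(w) for w in windows for f in (np.mean, np.std, np.max)])
    return np.asarray(feats)

# Hypothetical usage with hand-labelled data (0 = fixation, 1 = saccade, 2 = pursuit):
# X_train = window_features(eye_vel_train, head_vel_train)
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, labels_train)
# predicted_events = clf.predict(window_features(eye_vel_test, head_vel_test))
```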

Graduate student Rakshit Kothari wears his custom hardware:  a helmet that tracks the movements of the eyes and head as they are coordinated by task.  This hardware includes a Pupil Labs eye tracker, an inertial measurement unit, and stereo RGB cameras.

 

 

A custom Matlab program allows a group of trained labellers to find and label fixations, saccades, and pursuit in the eye+head velocity signal.

 

The Statistics of Eye and Head Movements When Coordinated by Natural Task

Decades of research upon eye movements made in the laboratory have produced a firm understanding of their parameters, including duration, dispersion, amplitude, and velocity. No such understanding exists for coordinated movements of the eyes and head made in the wild.  This lack of insight into gaze parameters and event statistics may be largely attributed to limitations in the algorithmic tools for analyzing gaze behavior in 3D environments, which remain in their infancy.  Because an eye tracker reports only the orientation of the eyes within their orbits, existing algorithms for the classification of gaze events developed for the 2D context do not generalize well to the 3D context in which the head is free to move.  For example, in the natural environment, one can maintain "fixation" of a target despite a constantly changing eye-in-head orientation through a counter-rotation of the eyes (the vestibulo-ocular reflex; VOR). Properly detecting VOR and other coordinated head+eye gaze events requires that movements of both the head and eyes be taken into account.  Our laboratory is addressing these problems using custom technology and the machine learning algorithms currently under development for the automated detection of eye movements.
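
A minimal sketch (with assumed signal names, not the lab's algorithm) of why head movement must be taken into account: gaze-in-world velocity is approximately the sum of the eye-in-head and head-in-world angular velocities, so VOR shows up as samples where the head is moving quickly yet gaze is nearly still. An eye-in-head signal alone would misclassify these samples as pursuit or saccade.

```python
import numpy as np

def flag_vor(eye_in_head_vel, head_vel, head_thresh=10.0, gaze_thresh=5.0):
    """Boolean mask of candidate VOR samples.

    eye_in_head_vel, head_vel : (N,) signed horizontal angular velocities (deg/s).
    Thresholds are illustrative, not calibrated values.
    """
    gaze_vel = eye_in_head_vel + head_vel      # small-angle approximation of gaze-in-world
    head_moving = np.abs(head_vel) > head_thresh
    gaze_stable = np.abs(gaze_vel) < gaze_thresh
    return head_moving & gaze_stable
```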

Figure:  Preliminary data shows the main sequence for gaze shifts made by an unconstrained behaving human. 

 

Visual and manual interactions with banknotes in natural contexts

Collaborators: Jeff Pelz (RIT)

Sponsors: the Federal Reserve and National Academy of Sciences

This study is designed to investigate the effectiveness of security features (e.g., watermark, ribbon, etc.) as cashiers interact with potentially counterfeit bills in natural workplace environments. The primary objectives are to:

  • develop a methodology to robustly record the physical behaviors (e.g., actions and gestures) and visual attention of cashiers and customers during cash transactions,
  • develop a methodology to reliably analyze the physical behaviors and visual attention of cashiers and customers during cash transactions,
  • develop a methodology to robustly record the physical behaviors and visual attention of cashiers during currency handling (i.e., counting and sorting) tasks,
  • develop a methodology to reliably analyze the physical behaviors of cashiers during currency handling tasks, and
  • use those methods to collect data on a sample of customers and enough cashiers to explore variation due to age and experience.

 

 

Rehabilitation for Vision Following Stroke

Collaborators:  Krystel Huxlin (University of Rochester) and Ross Maddox (University of Rochester)

Sponsors:  The Unyte Translational Research Network

Stroke-induced occipital damage is an increasingly prevalent, debilitating cause of partial blindness, afflicting about 1% of the population over the age of 50 years. Until recently, this condition was considered permanent. Over the last 10 years, the Huxlin lab has pioneered a new method of retraining visual discriminations in cortically blind (CB) fields. Although exciting and life-altering for patients, and a game-changer clinically, the approach faces some major limitations:  (1) rehabilitation is time consuming, and impractical if training must take place in the laboratory; (2) stimuli must be gaze contingent, which requires an eye tracker; and (3) the current, passive, single-target-on-a-uniform-background stimulus delivery, requiring only a perceptual judgment, may not be the most effective way to retrain vision in CB. An alternative notion is that additional cues present in the real world (e.g., sound, depth, color, and target-in-background context) may enhance the effectiveness of retraining.

The Perform Lab is currently working with Drs. Huxlin and Maddox to address these limitations by integrating precision eye tracking into a modern VR headset and porting the CB visual retraining paradigm into this system.
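
The gaze-contingency requirement mentioned above (limitation 2) amounts to presenting the training stimulus only while the subject's gaze stays within a small tolerance of a fixation point, so the stimulus lands on a fixed retinal location. The sketch below illustrates that logic only; the function names, tolerance, and per-frame loop are hypothetical and this is not the retraining software itself.

```python
import numpy as np

def stimulus_allowed(gaze_xy_deg, fixation_xy_deg, tolerance_deg=1.5):
    """True while the current gaze sample (x, y in degrees) is within
    `tolerance_deg` of the fixation point."""
    offset = np.asarray(gaze_xy_deg, float) - np.asarray(fixation_xy_deg, float)
    return float(np.hypot(offset[0], offset[1])) <= tolerance_deg

# Hypothetical per-frame loop:
# if stimulus_allowed(tracker.latest_gaze(), fixation_cross):
#     draw_stimulus()
# else:
#     pause_trial()
```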

 

Phantogram Groundplane for the study of visually guided walking behavior

Humans in the natural environment are able to maintain efficient and stable gait while navigating across complex terrain. In part, this is because visual feedback is used to guide changes in heading, foot placement, and posture. However, the role of gaze adjustments and of visual information in heading selection and postural control while walking is not well understood. We present a novel method for the study of visually guided walking over complex terrain in a laboratory setting. A groundplane is projected upon a physically flat laboratory floor in a way that makes it appear to have height, similar to a one-plane CAVE environment. The percept of height is reinforced through the use of stereoscopic shutter glasses, and by dynamically updating the projected image to maintain the coincidence of the accidental perspective with the subject’s motion-tracked head position. The benefits of this “phantogram” groundplane are numerous. Obstacle height, shape, and location can be manipulated instantaneously, decreasing setup time and increasing the number of trials per session. Because the groundplane is physically flat, the likelihood of falls is dramatically reduced. Furthermore, because the use of whole-body motion capture, eye trackers, and computer imagery provides a record of the position of the head, body, and gaze in relation to objects in the scene, the environment facilitates the computational analysis of eye-tracking data in relation to postural control during locomotion. In addition, the apparatus offers the possibility of real-time manipulations of the environment that are contingent upon the participant’s behavior. In summary, the apparatus facilitates rigorous hypothesis testing in a controlled environment.
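
The "accidental perspective" at the heart of the phantogram can be sketched as a simple ray-plane intersection: each virtual 3D point is drawn where the line from the tracked eye position through that point meets the physically flat floor, and re-running this every frame with the latest head position keeps the percept of height stable. The coordinate convention (y up, floor at y = 0) is an assumption, and this is not the lab's rendering code.

```python
import numpy as np

def project_to_floor(eye_pos, point, floor_y=0.0):
    """Intersection with the plane y = floor_y of the ray from eye_pos through point.

    eye_pos, point : length-3 arrays (x, y, z) in meters, y being height.
    Returns a length-3 point on the floor, or None if the ray never reaches it.
    """
    eye_pos = np.asarray(eye_pos, float)
    point = np.asarray(point, float)
    direction = point - eye_pos
    if np.isclose(direction[1], 0.0):
        return None                      # ray parallel to the floor
    t = (floor_y - eye_pos[1]) / direction[1]
    if t <= 0:
        return None                      # intersection is behind the eye
    return eye_pos + t * direction       # where to draw the point on the flat floor
```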

This work is being conducted in collaboration with Brett Fajen (RPI), Melissa Parade (RPI), Jon Matthis (UT Austin), and Mary Hayhoe (UT Austin).

Some important questions

  • Is this a valid method for studying “natural” walking behavior?
  • What does gaze tell us about how information is used to guide postural control while walking?
  • What do eye-movements tell us about the degree to which postural control and foot placement rely upon memory of obstacle location?