
Coordinated movements of the hands and eyes in a naturalistic virtual environment

When preparing to intercept a ball in flight, humans make predictive saccades ahead of the ball’s current position, to a location along its future trajectory. Such visual prediction is ubiquitous among novices, extremely accurate, and can reach at least 400 ms into the future. This ocular prediction is not a simple extrapolation of the ball’s current trajectory. This was previously demonstrated in a virtual-reality ball interception task, in which subjects were asked to intercept an approaching virtual ball shortly after it bounced off the ground. Because the body was tracked with a motion-capture system, subjects could interact with the virtual ball seen through a head-mounted display. Subjects made spontaneous pre-bounce saccades to a location along the ball’s eventual post-bounce trajectory, where they fixated until the ball passed within 2° of the fixation location. Importantly, these saccades also reflected accurate prediction of the ball’s vertical velocity after the bounce, and thus of the ball’s eventual height at the time of the catch.
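As a rough illustration of the kind of analysis this description implies (the variable names and numbers below are hypothetical, not the published analysis pipeline), the following sketch computes the angular separation between the gaze direction and the eye-to-ball direction, which is one way to flag the frame at which the ball passes within 2° of the fixation location.

    import numpy as np

    def angular_separation_deg(eye_pos, gaze_point, ball_pos):
        """Angle (deg) between the gaze direction and the eye-to-ball direction."""
        gaze_dir = gaze_point - eye_pos
        ball_dir = ball_pos - eye_pos
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        ball_dir = ball_dir / np.linalg.norm(ball_dir)
        cos_angle = np.clip(np.dot(gaze_dir, ball_dir), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle))

    # Example: flag the first frame at which the ball passes within 2 deg of fixation.
    eye = np.array([0.0, 1.6, 0.0])            # tracked eye position (m)
    fixation = np.array([0.5, 1.2, 3.0])       # pre-bounce fixation location (m)
    ball_frames = [np.array([0.2, 1.8, 6.0]),  # ball position on successive frames (m)
                   np.array([0.4, 1.3, 3.5]),
                   np.array([0.5, 1.21, 3.05])]
    for i, ball in enumerate(ball_frames):
        if angular_separation_deg(eye, fixation, ball) < 2.0:
            print(f"ball within 2 deg of fixation at frame {i}")
            break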



Phantogram Groundplane for the study of visually guided walking behavior

Humans in the natural environment maintain efficient and stable gait while navigating complex terrain, in part because visual feedback is used to guide changes in heading, foot placement, and posture. However, the roles of gaze adjustments and of visual information in heading selection and postural control during walking are not well understood. We present a novel method for studying visually guided walking over complex terrain in a laboratory setting. A groundplane is projected onto a physically flat laboratory floor in a way that makes the terrain appear to have height, similar to a single plane of a CAVE environment. The percept of height is reinforced by stereoscopic shutter glasses and by dynamically updating the projected image so that the accidental perspective remains coincident with the subject’s motion-tracked head position.

The benefits of this “phantogram” groundplane are numerous. Obstacle height, shape, and location can be manipulated instantaneously, decreasing setup time and increasing the number of trials per session. Because the groundplane is physically flat, the likelihood of falls is dramatically reduced. Because whole-body motion capture, eye tracking, and computer imagery provide a record of the position of the head, body, and gaze relative to objects in the scene, the environment facilitates computational analysis of eye-tracking data in relation to postural control during locomotion. The apparatus also allows real-time manipulations of the environment that are contingent upon the participant’s behavior. In summary, the apparatus facilitates rigorous hypothesis testing in a controlled environment.
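A minimal sketch of the core rendering step, under the assumption that the physical floor is the plane y = 0 (the function and variable names below are illustrative, not the actual rendering code): each vertex of a virtual obstacle is projected along the line from the tracked head position through that vertex onto the floor, so that the flat image drawn there is consistent, from the viewer’s current eye point, with an object that has height. In the apparatus this projection is recomputed every frame from the motion-tracked head position, with a separate eye point per eye for the stereoscopic shutter glasses.

    import numpy as np

    def project_to_floor(head_pos, vertex):
        """
        Anamorphic ('phantogram') projection: intersect the ray from the tracked
        head position through a virtual vertex with the physical floor plane y = 0.
        Drawing the projected footprint makes the flat image consistent, from the
        current eye point, with an object that has real height.
        """
        head_pos = np.asarray(head_pos, dtype=float)
        vertex = np.asarray(vertex, dtype=float)
        direction = vertex - head_pos
        if abs(direction[1]) < 1e-9:
            raise ValueError("Ray is parallel to the floor; no intersection.")
        t = -head_pos[1] / direction[1]    # ray parameter at which y reaches 0
        return head_pos + t * direction    # point on the floor to draw

    # Example: a corner of a 0.15 m tall virtual obstacle, viewed from a head
    # position 1.7 m above the floor.
    head = [0.0, 1.7, 0.0]
    obstacle_corner = [0.4, 0.15, 2.0]
    print(project_to_floor(head, obstacle_corner))   # ~[0.44, 0.0, 2.19]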

This work is being conducted in collaboration with Brett Fajen (RPI), Melissa Parade (RPI), Jon Matthis (UT Austin), and Mary Hayhoe (UT Austin).

Some important questions

  • Is this a valid method for studying “natural” walking behavior?
  • What does gaze tell us about how information is used to guide postural control while walking?
  • What do eye-movements tell us about the degree to which postural control and foot placement rely upon memory of obstacle location?



Evaluation of a Stereoscopic Display for Surgical Training


Funded by a Dean’s Research Initiation Grant, in collaboration with Professors Cristian Linte and Jim Ferwerda.

Although the planning and execution of surgical interventions is inherently a 3D problem, traditional forms of medical training rely heavily upon 2D media (e.g., textbooks and slides). One solution is to leverage 3D virtual- and augmented-reality display devices to present and visualize 3D anatomical images. We propose one such augmented-reality medical training device that renders virtual objects upon a 2D tabletop, much like holograms, so that their 3D appearance changes appropriately with the viewing angle. In addition, stereoscopic projection creates the illusion that the objects float above the tabletop. The proposed hardware will serve as a test bed for two additional major aims. First, we propose to develop novel algorithms for generating 3D virtual anatomical models from real-world medical imaging data. To develop these algorithms, we will leverage open-source, high-quality medical imaging datasets (e.g., the Visible Human Project) to generate 2D slices, 3D surfaces, and 4D volume renderings of real human anatomy. Second, we propose a novel methodology for evaluating the tool as a means of supporting natural human behavior and interaction. The evaluation compares eye and hand coordination when subjects interact with the augmented-reality imagery and with 3D-printed physical representations of the same objects, allowing a direct comparison of interactive behavior with real-world and virtual models of the anatomy. In summary, we propose the development of an augmented-reality training device, novel algorithms for generating interactive 3D anatomical models from real-world images, and a method for the empirical evaluation of the proposed visualization paradigms.
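For the model-generation aim, the sketch below shows one standard approach (it is illustrative, not the proposed algorithms themselves): stack 2D image slices into a volume and extract a triangulated isosurface with marching cubes. The file pattern and intensity threshold are placeholders.

    import numpy as np
    import imageio.v3 as iio
    from skimage import measure

    def surface_from_slices(slice_paths, level):
        """Stack 2D image slices into a volume and extract a triangulated isosurface."""
        slices = []
        for path in slice_paths:
            img = iio.imread(path)
            if img.ndim == 3:              # collapse RGB slices to grayscale
                img = img.mean(axis=-1)
            slices.append(img.astype(np.float32))
        volume = np.stack(slices, axis=0)  # shape: (n_slices, rows, cols)
        # Marching cubes yields the vertices and triangles of the isosurface at
        # 'level'; the threshold would be tuned to the tissue of interest.
        verts, faces, normals, values = measure.marching_cubes(volume, level=level)
        return verts, faces

    # Placeholder file pattern; real slices would come from a dataset such as
    # the Visible Human Project.
    paths = [f"slices/slice_{i:04d}.png" for i in range(200)]
    # verts, faces = surface_from_slices(paths, level=300.0)
    # 'verts' and 'faces' would then be handed to the head-tracked stereo renderer.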


This is a mockup of the instrument currently in design.



Multisensor helmet for the study of walking in the natural environment

The goal here is to use a combination of stereo cameras, GPS, and IMU sensors to record a digital model of the groundplane traversed by a subject walking through the natural environment. Subsequently, we will map the subject’s gaze vector onto that groundplane. This tool will facilitate the study of the relationship between anticipatory visual information and foot-placement strategies for humans walking through the natural environment.
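As an illustration of the gaze-mapping step (illustrative names and values; the reconstructed ground model would come from the stereo/GPS/IMU pipeline described above), the sketch below intersects a gaze ray from the eye tracker with one triangle of a reconstructed ground mesh using the standard Möller–Trumbore test, yielding the 3D ground location of a fixation.

    import numpy as np

    def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """
        Möller–Trumbore ray/triangle intersection. Returns the 3D point where the
        gaze ray meets one triangle of the reconstructed ground mesh, or None if
        the ray misses that triangle.
        """
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:                 # gaze ray parallel to the triangle
            return None
        inv_det = 1.0 / det
        t_vec = origin - v0
        u = np.dot(t_vec, p) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(t_vec, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv_det
        return origin + t * direction if t > eps else None

    # Example: a downward gaze ray and one triangle of a (locally flat) ground patch.
    eye = [0.0, 1.6, 0.0]                  # eye position in world coordinates (m)
    gaze = [0.0, -0.6, 1.0]                # gaze direction from the eye tracker
    tri = ([-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 0.0, 4.0])
    print(intersect_ray_triangle(eye, gaze, *tri))   # ~[0.0, 0.0, 2.67]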