Much of the research on eye movements to date has focused on understanding the mechanics and dynamics of the oculomotor system. The question of how successive fixations are aligned spatially has also received much attention. Most of this research has aimed to discover how the visual system 'knows' where the eyes are situated on each fixation, so that the individual images captured with each fixation can be correctly aligned to build the rich internal representation we experience. Evidence is emerging, however, that we may have been asking the wrong question. We are able to use regularities in the environment to maintain a stable representation without resorting to complex alignment mechanisms, and large changes in the environment may go undetected. Understanding visual perception requires us to ask a similar, but orthogonal, question about the temporal stitching of successive views. This issue has not arisen with experimental tasks in the past because task complexity was purposely restricted.
We are studying eye movements in complex tasks and natural environments so that we can better understand the process, rather than the mechanics, of visual perception.
While we know a great deal about the dynamics and characteristics of eye movements in relatively simple tasks performed under reduced laboratory conditions, we know less about oculomotor behavior in complex, multi-step tasks. Complex tasks are not necessarily difficult. Part of the transition from 'hard' to 'easy' in completing complex tasks is the gradual reduction in conscious effort required to complete the sub-tasks. We are interested in learning whether high-level perceptual strategies can aid that transition. In the past, subjects performed relatively simple tasks or the eye movements themselves were the instructed task. But outside the laboratory vision is a tool, not the task. To study the oculomotor system in its native mode, we developed a wearable eyetracker that allows natural eye, head and whole-body movements.
Using over-learned, common tasks (hand washing, filling a cup with water, purchasing something from a vending machine, etc.), we measured the global characteristics of fixation duration, saccade amplitude, and the spatial distribution of fixation positions. An important observation was the emergence of higher-order perceptual strategies in the complex task: while most fixations were related to the immediate action, a small number of fixations were made to objects relevant only to future actions. Based on a control task that differed only in the high-level goal, we conclude that the look-ahead fixations represent a task-dependent strategy, not a general behavior elicited by the salience or conspicuity of objects in the environment. We propose that the strategy of looking ahead to objects of future relevance supports the conscious percept of an environment seamless in time as well as in space.
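The kind of analysis described above can be sketched in a few lines. This is an illustration only, not the actual analysis pipeline: the `Fixation` record and its fields are hypothetical, and we assume each fixation has been hand-coded with the sub-task its target object belongs to, so that a "look-ahead" fixation is simply one whose target belongs to a later sub-task than the one currently underway.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Fixation:
    x: float          # gaze position (hypothetical units, e.g. degrees)
    y: float
    duration: float   # fixation duration in seconds
    target_step: int  # index of the sub-task the fixated object serves

def summarize(fixations, current_steps):
    """Global fixation statistics plus a count of look-ahead fixations.

    `current_steps[i]` is the sub-task in progress when fixation i occurred;
    a fixation is 'look-ahead' if its target serves a later sub-task.
    """
    durations = [f.duration for f in fixations]
    # Saccade amplitude approximated as distance between successive fixations.
    amplitudes = [hypot(b.x - a.x, b.y - a.y)
                  for a, b in zip(fixations, fixations[1:])]
    look_ahead = sum(1 for f, step in zip(fixations, current_steps)
                     if f.target_step > step)
    return {
        "mean_duration": sum(durations) / len(durations),
        "mean_amplitude": sum(amplitudes) / len(amplitudes) if amplitudes else 0.0,
        "look_ahead_count": look_ahead,
    }
```

Comparing `look_ahead_count` between the task and a control condition that differs only in high-level goal is what licenses the conclusion that look-aheads are strategic rather than salience-driven.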
To develop algorithms that predict the most important spatial regions in an image, researchers have attempted to analyze observers' eye movements. Understanding the eye movement patterns of people viewing images may prove useful in designing image quality and perceptual image-difference models. While it is clear that visual data collected from psychophysical experiments are an essential component in the development and evaluation of such models, selecting the best psychophysical technique is often based on the confusability of the sample set, the number of samples used, and observer effort. While there is an implicit assumption that these methods can be selected using such criteria alone, it may be the case that viewers adopt different strategies based on the chosen experimental method. Task-dependent eye movements may be a source of differences between results from different psychometric tasks. One question to be answered through eye movement analysis is whether viewing strategies substantially change across paired comparison, graphical scaling, and rank order judgments. Additionally, by tracking the subject's eye movements, the locus of fixations can be compared across subjects and across images to indicate which regions receive the most attention during image quality judgments.
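One simple way to compare the locus of fixations across tasks or observers is to bin fixation positions into a density map per condition and measure the overlap between maps. The sketch below is a minimal, hypothetical version of such a comparison, assuming fixation coordinates have been normalized to the unit square; it is not the analysis method used in the work described above.

```python
def fixation_histogram(points, bins=4, extent=1.0):
    """Bin normalized (x, y) fixation positions into a bins x bins grid."""
    grid = [[0] * bins for _ in range(bins)]
    for x, y in points:
        i = min(int(y / extent * bins), bins - 1)  # row (clamped to grid)
        j = min(int(x / extent * bins), bins - 1)  # column
        grid[i][j] += 1
    return grid

def histogram_intersection(h1, h2):
    """Normalized overlap (0..1) between two fixation-density maps.

    Assumes both maps were built from the same number of fixations.
    """
    total = sum(sum(row) for row in h1)
    shared = sum(min(a, b) for r1, r2 in zip(h1, h2)
                 for a, b in zip(r1, r2))
    return shared / total if total else 0.0
```

A high intersection between, say, the paired-comparison and rank-order maps for the same image would suggest the viewing strategy is stable across psychometric tasks; a low intersection would be evidence of task-dependent eye movements.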
By replicating the functionality of human visual processing, an anthropomorphic vision system can improve its performance under varying task constraints and in uncertain environments.
Selective attention plays an important role by limiting the amount of information needed to represent the scene to only that which is required at the moment. For humans, eye movements are an external manifestation of selective attention, yet selection can occur without an explicit eye movement. Such covert orienting modulates information retrieval in an analogous way; however, the extent of coverage is more flexible and may encompass non-contiguous regions of space.