Chapter 4

Visual Representations in a Natural Visuo-motor Task


Introduction

Recent developments in eye movement monitoring systems, notably revolving magnetic field eye-coil systems and lightweight, head-mounted IR reflection systems, are allowing conclusions drawn from earlier studies of eye movements to be re-examined. While many of the 'classic' results have been supported (e.g., there is no reason to suspect that the pulse-step model is invalid when the head is freed), other conclusions have been called into question. Many commonly accepted metrics, such as gaze stability in the dark and the peak velocity, duration, and latency of saccades, appear to be different when the head is freed [Collewijn et al. 1992]. Five- to eight-week-old infants' visuo-motor coordination is greatly diminished when their heads are not artificially stabilized [Jeannerod 1988], and adult subjects' ability to hold gaze stable with the head free is poorer than had been reported in earlier experiments with the head immobilized. Collewijn [1985] reported that when subjects' heads are stabilized using a biteboard, retinal slip velocities are limited to approximately 15 arc minutes per second (15'/sec). When subjects tried to minimize head movement without artificial support, the slip velocities nearly doubled, increasing to 27'/sec, and rose to 97'/sec when subjects attempted to hold gaze fixed during natural head movements. Even though retinal slip velocities in this range do not reduce visual acuity significantly, these results demonstrate that oculomotor performance deteriorates by some measures when the head is freed.

Such differences are not unexpected; it seems intuitive that artificially stabilizing the head should reduce retinal slip. But not all of the changes detected when the head is freed are deficits. A striking example is the velocity of vergence eye movements; experiments performed with the head fixed have found that vergence movements are much slower than typical saccadic movements, with maximum velocities of only 20-30deg./second [van der Steen 1992]. With the head free, however, maximum velocities for vergence movements as high as 100deg./second have been reported [Koken & Erkelens 1992]. Other types of eye movements are affected as well. In self-paced saccades between visible targets, peak velocities increased by more than 10% and durations fell, with no deficit in accuracy, when the head was freed [Collewijn et al. 1992]. The gains were not simply the result of summing eye and head component velocities; Collewijn et al.'s analysis of the individual components showed that the eye-in-head component alone was greater than the peak velocity in the head-fixed condition. In fact, the peak values occurred early in the gaze change, before the head had accelerated to its maximum value. The 'profile' of the gaze changes was also different when the head was free to move. The faster acceleration at the beginning of the saccade was matched by sharper deceleration at the end of the gaze shift, resulting in a 'squarer' gaze position profile.

Experiments performed with the head fixed frequently show that subjects undershoot target location when they attempt large saccades; Carpenter [1988] reported that large saccades "almost invariably fall short of their targets," described undershoots of approximately 10% for saccades greater than 20deg., and noted a 'range effect' in which the undershoots were related to the mean size of recent saccades. Becker [1991] reported a much lower frequency of undershoots for small saccades than for large ones (<50% vs. 90%). These undershoots may be an artifact of unnaturally restricting head movements. Becker [1989] reported that when the head is free to move, head movements become a regular feature of gaze shifts at amplitudes of approximately 20 degrees. Carpenter [1988] reported that, with the head fixed, undershoots were less pronounced for saccades directed toward the midline than for saccades directed toward the periphery. Under those conditions the head is, of course, held in a central position, and would not be expected to contribute to a gaze change toward the midline.

Visual localization is also affected by restraining the head. In studies of pointing performance, Skavenski [1990] and Biguer et al. [1985] reported smaller mean errors when the head was free to move naturally. Biguer et al. [1985] reported that when the head was fixed, errors increased with eccentricity, suggesting that the accuracy of the eye-in-head signal falls off at larger angles while head position information is not degraded to the same extent. Biguer et al. [1982] found a correlation between head and arm movements, reporting that the head stabilized before pointing was complete, again suggesting an advantage to having the eye centered in the orbit for localization tasks. Becker and Jürgens [1992] suggested that measurements made with the head fixed by a biteboard may be ".. so unphysiological a situation as to explain in itself the lower velocities that have been observed." [p.429]. Regardless of the cause, there is ample evidence that restricting head movements affects performance, and that much of what is known about oculomotor performance may be better thought of as data regarding 'head-fixed oculomotor performance.' The simple tasks and reduced environments often used in eye movement experiments further call into question whether some of our knowledge of oculomotor performance can be meaningfully applied to 'real-world' behaviors. New instrumentation capable of recording eye and head movements without restraining the subjects' heads allows the study of eye movements under natural conditions.

The coordination of the eye and head can now be studied during complex tasks as well, a capability that has shed light on the motor commands controlling eye and head movements. There is evidence suggesting that under some conditions gaze changes made up of eye and head movements result from a single command. When subjects are in the dark, and a light appears at an unpredictable position, the eye typically leads the head in an attempt to foveate the target. But this temporal dissociation of eye and head movements does not necessarily imply separate commands; EMG recordings in such cases show simultaneous innervation of the extraocular and neck muscles [Biguer et al. 1985]; the lag is apparently the result of the higher inertial load of the head. In the absence of further stimuli, the eye and head then perform coordinated movements that end with the eye near its primary position. There are natural behaviors where the eyes and head work together in a relatively simple, stereotyped manner. Land [1992] studied drivers' eye and head movements as they approached an intersection, looking left and right to check for traffic. Land was able to predict eye and head movements remarkably well with a very simple model, whose input consisted only of the ordered list of fixation points. Head movements were modeled with a constant duration of 400 msec, and a variable velocity (1.9deg./sec per degree of gaze change). Eye movements were modeled with a constant velocity of 400deg./sec and a variable duration (2.5 msec per degree of gaze change). Eye movements were modulated by a unit-gain VOR; i.e., head velocity was subtracted from eye velocity in a form of linear summation model as proposed by Robinson [1981]. Land selected the driving task because the subjects would be "too busy to exert conscious control over head or eye movements" [Land 1992, p. 318].
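To make the structure of Land's model concrete, the sketch below (in Python) generates gaze, head, and eye-in-head traces for a single horizontal gaze shift using the parameters quoted above. The rectangular velocity profiles and the treatment of the interval after the saccade ends are simplifying assumptions made for illustration; they are not Land's implementation.

```python
import numpy as np

def simulate_gaze_shift(amplitude_deg, dt=0.001):
    """Sketch of Land's [1992] eye/head rules for one horizontal gaze shift.

    Head: fixed 400 msec duration, velocity of 1.9 deg/s per degree of gaze change.
    Eye (saccadic command): fixed 400 deg/s velocity, 2.5 msec per degree.
    Eye-in-head velocity = saccadic velocity - head velocity (unit-gain VOR),
    so gaze (eye-in-head + head) follows the saccadic command.
    Rectangular velocity profiles are an assumption for illustration.
    """
    head_dur = 0.400                      # s, constant
    head_vel = 1.9 * amplitude_deg        # deg/s, scales with gaze amplitude
    sacc_vel = 400.0                      # deg/s, constant
    sacc_dur = 0.0025 * amplitude_deg     # s, scales with gaze amplitude

    t = np.arange(0.0, max(head_dur, sacc_dur) + 0.1, dt)
    head_v = np.where(t < head_dur, head_vel, 0.0)
    sacc_v = np.where(t < sacc_dur, sacc_vel, 0.0)
    eye_v = sacc_v - head_v               # VOR subtracts head velocity

    head = np.cumsum(head_v) * dt
    eye_in_head = np.cumsum(eye_v) * dt
    gaze = eye_in_head + head
    return t, gaze, eye_in_head, head

t, gaze, eye, head = simulate_gaze_shift(40.0)   # e.g., a 40 deg. gaze change
```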

Another interesting finding in Land's study was the "strict synchrony" between the onsets of eye and head movements in his measurements. Based on this observation, Land concluded that the eye and head receive commands at the same time. But Biguer et al.'s [1985] EMG data suggest that Land's conclusion is inaccurate, and that makes his result more interesting: because of the greater inertial load of the head, the neck muscles must receive innervation before the eye muscles if the two movements are to begin together [Rosenbaum 1991]. In Land's driving task, subjects were planning and executing a series of gaze movements while performing a complex task, not responding to a flash of light in the dark. Rather than a central gaze command that is sent in parallel to eye and head, there appears to be a central gaze goal that, in order to be executed in "strict synchrony," requires that the head's command be initiated before the eyes'. While the tight coordination between eye and head movements reported by Land is possible, there is evidence that it is not compulsory. Kowler et al. [1992] studied subjects' eye and head movements while reading. They found that when the head was free to move, subjects made substantial head movements (both rotational and translational). When the subjects were performing a real task with the head freed, the strict correlation between eye and head movements sometimes disappeared. Gaze changes from the end of one line to the beginning of the following line were made up of eye and head movements that were not always tightly correlated in time. One of their subjects showed several instances in which his head motion was opposite in direction to his eye movements. There were no significant differences in reading speed when the head was free. But when Kowler et al. had their subjects scan a page covered with characters spaced at word-length intervals (a condition that placed identical demands on the oculomotor system), freeing the head resulted in a slight increase in maximum scan rate.

In an attempt to examine the degree to which eye and head movements are correlated, Kowler and colleagues [1992] had a subject attempt to scan a square grid of targets. The subject performed the task under five different instructions: 1) scanning with the head held as still as possible, 2) scanning in a natural pattern, without regard to head movements, 3) scanning the array as fast as possible, 4) shaking the head while scanning the array of targets, and 5) moving the head and eyes in opposite directions. There was a deficit in speed when the subject tried to minimize head movements, even without artificial restraints. Scan rates increased when the subject's head was free to move, and when the subject was instructed to scan as fast as possible, the amplitude of the head movements increased. When attempting to scan while shaking the head, the subject tended to fall into a pattern in which eye and head movements were synchronized, and reported finding this condition very difficult. Kowler et al. [1992] concluded that these results ".. revealed a natural tendency to program head and eye movements concurrently in similar spatial and temporal patterns." Additionally, "We found that separate commands to the head and eyes are possible, but only with special effort and, perhaps, with some sacrifice in the precision of the visual or oculomotor performance." [p.426].

All of these issues can now be explored with new instrumentation capable of monitoring unconstrained eye, head, and hand movements. The block-copying paradigm offers an ideal environment in which to investigate these issues. These experiments have revealed a tighter linkage between fixations and actions (motor and cognitive) than had previously been understood. Every component of the task is regulated by these externally observable fixations. Gathering information about the color and position of the blocks forming the model is performed by fixating the model; hand movements to pick up blocks from the resource area and place them in the workspace are guided by fixations in the respective areas; and the gaze shifts between those areas are made up of coordinated eye and head movements, allowing us to study the coordination of the eye/head gaze system under natural conditions. It should be noted that not all the fixations are necessary to perform the task. Two subjects occasionally performed the pickup without fixating the resource area. This behavior was rare, occurring on only 1% to 2% of those subjects' block moves, but in these cases the subjects had no difficulty picking up the correct block. So, while not all of the fixations are necessary, subjects chose to regulate the subtasks with fixations in almost every case.

Basic Features of Eye, Head, and Hand Movements

The block-copying task and the instrumentation described earlier provide a unique opportunity to measure the basic features of eye, head, and hand movements during a natural task. The task is made up of identifiable subtasks: the information-gathering eye and head movements and the visually guided, coordinated actions of the eye, head, and hand. Until now, we have had very little information about such complex, natural behaviors.

A striking aspect of task performance is the regular, rhythmic pattern of eye, head, and hand movements observed while subjects perform the block-copying task. Figure 4.1 shows the horizontal components of gaze, head, and hand movements during a trial. The gaze, head, and hand intercepts with the working plane are plotted in cm. The gaze component is the horizontal point of regard on the plane reported by the ASL. The head intercept is computed by projecting a vector fixed to the subject's head to the plane. Gaze and head intercept were set to (0,0) at the beginning of each trial while the subject looked at a central fixation point. The hand intercept is defined by the horizontal and vertical components of the magnetic hand tracker's output, offset to place (0,0) at the same central fixation point. The angular positions of the gaze and head components varied with the distance to the board, but are approximately equal to the intercept values given in cm (they are equivalent at a viewing distance of 57 cm, since 1/tan(1deg.) ~= 57). The rightward gaze and head movements ('up' on the plot) are made when a block in the resource area is targeted for pickup.
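As an illustration of that conversion, the short sketch below computes the visual angle subtended by an intercept on the working plane; the specific values are hypothetical and simply show why the small-angle approximation holds at a 57 cm viewing distance.

```python
import math

def intercept_to_deg(x_cm, distance_cm):
    """Visual angle (deg.) of a point x_cm from straight ahead on a plane
    distance_cm away; at 57 cm, 1 cm subtends ~1 deg. (1/tan(1 deg.) ~= 57.3)."""
    return math.degrees(math.atan2(x_cm, distance_cm))

print(intercept_to_deg(1.0, 57.0))    # ~1.0 deg.
print(intercept_to_deg(10.0, 57.0))   # ~9.9 deg.; the approximation degrades slowly
```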

Figure 4.2 shows a four second section from the same trial, expanded in scale to more clearly illustrate the movements. At this scale, the task-dependent asymmetries of the movements are more evident; the velocities of the head movements are different in each direction, with higher velocities for rightward head movements towards the resource area than for the leftward movements toward the model and workspace. The hand movements also display a marked asymmetry, with longer dwell times to the left (down in the graph) for 'drops' than to the right for pickups.

While subjects almost always fixated the resource area before a pickup event and the workspace before a drop, there was a clear difference in the pattern of eye and head coordination between the pickup and drop events. Figure 4.3 a) shows a frame from the video record of a typical pickup event. As the hand nears the selected block, but before the subject grasps the block and lifts it away from the board, the gaze returns to the model area for the second model fixation or to the workspace in preparation for the drop. In contrast, subjects usually maintain fixation until the block move is completed when performing a drop event (see Figure 4.3 b). This task-dependent asymmetry is also evident in the longer dwell times in the workspace than in the resource area, as seen in Figure 4.2. The head intercept trace in Figure 4.2 illustrates a similar task-dependent pattern for head movements; the head is held stable longer for the putdown, and is then moved more rapidly to the right for the next pickup event (note the difference in velocities for rightward and leftward head movements).

Figure 4.1 Horizontal components of gaze, head, and hand movements during the block-copying task.

Figure 4.4 shows the gaze and head data plotted together to make it easier to see the relative onsets of eye and head movements. It is clear from the figure that the head movement pattern is not simply a 'low-pass' version of the gaze pattern, nor are the onsets of gaze and head movements always synchronous, as Land reported in his study of drivers. In this trace, the head leads the eye by over one hundred milliseconds for rightward (up on the graph) gaze shifts to the resource area, while eye and head movements leaving the resource area (toward the model or workspace) are nearly synchronous, and include instances where the eye leads the head. This task-dependent, independent programming is very different from the type of behavior reported by Land. In addition to the variation in eye/head latencies, the record shows an instance where the eye and head are moving in opposite directions (~6200-6600 msec).
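A simple way to quantify the eye/head leads visible in Figure 4.4 is to mark each movement onset with a velocity threshold and take the difference. The sketch below is only an illustration of that idea; the 20 deg./sec threshold and the simple differencing scheme are assumptions, not the criteria used in this analysis.

```python
import numpy as np

def onset_latency_ms(t_ms, gaze_deg, head_deg, vel_thresh=20.0):
    """Difference between head and gaze movement onsets for one gaze shift.

    Onset is taken as the first sample whose absolute velocity exceeds the
    threshold (a movement is assumed to be present in the window).
    A positive result means the head started moving before the gaze.
    """
    t_ms = np.asarray(t_ms, float)

    def onset(x):
        v = np.abs(np.gradient(np.asarray(x, float), t_ms)) * 1000.0  # deg/s
        idx = np.argmax(v > vel_thresh)    # first super-threshold sample
        return t_ms[idx]

    return onset(gaze_deg) - onset(head_deg)
```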

Correlation of Eye, Head, and Hand Movements

The graphs of gaze, head, and hand movements in Figure 4.1, Figure 4.2, and Figure 4.4 demonstrate the tight coupling of these motor systems. One way to examine that coupling is to determine the degree of correlation between the different systems. A cross-correlation analysis was performed on the gaze/head, gaze/hand, and head/hand records for four subjects. Figure 4.5 shows the result of a typical analysis. The cross-correlation of the horizontal gaze and head position ("gaze/head"), gaze and hand ("gaze/hand"), and head and hand ("head/hand") signals for subject sc are shown.

Figure 4.2 A four second section from Figure 4.1, enlarged to show more detail. (Note that the head position signal is scaled 4X.)

Figure 4.3 Asymmetry in eye/hand coordination for block pickup a) and drop b).

Figure 4.4 Task-dependent asymmetries in the temporal coordination of eye and head movements.

In this trial, the peak correlation of the horizontal gaze and head position signals was 0.69, and occurred at 324 msec, indicating that the gaze pattern leads the head pattern. It is important to note that the cross-correlation analysis provides a measure of the correlation of two waveforms, not of individual movement onsets. As noted, the head movement toward the resource area was often initiated before the associated saccade. It is evident in Figure 4.4, however, that the gaze "waveform" does indeed lead the head's; i.e., the two signals' maximum correlation occurs when the gaze is delayed by several hundred milliseconds. As would be expected from examining the gaze, head, and hand traces (e.g., Figure 4.1), the cross-correlation functions are periodic. Figure 4.6, Figure 4.7, and Figure 4.8 show representative cross-correlations for subjects jw, eb, and mh, respectively.
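The lagged-correlation computation can be illustrated with the minimal sketch below, which evaluates a normalized correlation at each lag and reports the peak and its temporal offset. The sampling rate argument and the Pearson-style normalization at each lag are assumptions made for illustration; they are not necessarily the parameters used to produce Figure 4.5.

```python
import numpy as np

def peak_lagged_correlation(gaze, head, fs_hz, max_lag_s=1.0):
    """Normalized cross-correlation of two position records.

    Returns (peak correlation, lag in msec). A peak at a positive lag means
    the first signal's waveform (gaze) leads the second's (head), i.e., the
    best match occurs when the gaze record is delayed by that amount.
    """
    gaze = np.asarray(gaze, float)
    head = np.asarray(head, float)
    max_lag = int(max_lag_s * fs_hz)
    best_r, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = gaze[:-lag], head[lag:]     # gaze(t) paired with head(t+lag)
        elif lag < 0:
            a, b = gaze[-lag:], head[:lag]
        else:
            a, b = gaze, head
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return best_r, 1000.0 * best_lag / fs_hz

# e.g.: r, lag_ms = peak_lagged_correlation(gaze_trace, head_trace, fs_hz=60)
# (a 60 Hz sample rate is an assumed value for this illustration)
```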

The peak correlation and the offset at which that peak occurred were recorded for each trial. Table 4.1 shows the mean peak correlation obtained for the four subjects (and within-subject standard error), along with the mean across the four subjects (and between-subject standard error). Table 4.2 shows the temporal offset at which the peak value for each cross-correlation occurred for the four subjects. The head and hand were most closely correlated in most trials.

Figure 4.5 Representative cross-correlation functions for subject sc.

Figure 4.6 Representative cross-correlation functions for subject jw.

Figure 4.7 Representative cross-correlation functions for subject eb.

Figure 4.8 Representative cross-correlation functions for subject mh.

Table 4.1 Maximum value of the cross-correlation functions for gaze/head, gaze/hand, and head/hand.

   Subject    gaze/head peak      gaze/hand peak      head/hand peak
              correlation         correlation         correlation
     sc       0.70 (0.01)         0.68 (0.02)         0.79 (0.02)
     jw       0.69 (0.06)         0.71 (0.05)         0.73 (0.06)
     eb       0.77 (0.03)         0.66 (0.01)         0.83 (0.01)
     mh       0.90 (0.01)         0.80 (0.08)         0.82 (0.06)
    mean      0.76 (0.05)         0.71 (0.03)         0.79 (0.02)

Table 4.2 Temporal offset (in msec) at which the cross-correlation reached a maximum value.

   Subject    gaze/head peak      gaze/hand peak      head/hand peak
              offset (msec)       offset (msec)       offset (msec)
     sc       319 (16)            247 (24)            -55 (18)
     jw       112 (61)            302 (81)            159 (38)
     eb       136 (9)             269 (19)            111 (26)
     mh       32 (5)              337 (42)            322 (54)
    mean      150 (60)            289 (20)            134 (78)

Figure 4.9 shows the temporal positions of the peak gaze/head, gaze/hand and head/hand cross-correlations for the four subjects. The cross-correlation is sensitive to the relative timing and the shapes of the gaze, head, and hand waveforms, so the large variability between subjects may be due to timing and/or pattern differences. The mean values and between-subjects standard error are shown in Figure 4.10.

Figure 4.11 shows the peak value of the gaze/head, gaze/hand, and head/hand cross-correlation functions. Figure 4.12 shows the mean and between-subjects standard error for the peak cross-correlation values. The smaller between-subject variability is evident, with most peak correlations very close to 0.8.

Amplitude of Head Movements

Since the capability to monitor head-free gaze movements was developed, reports of the degree to which head movements contribute to gaze changes have varied widely, and there are few published studies reporting the dynamics of head movements measured while humans perform complex visuo-motor tasks. There are several metrics that could be used to measure the amplitude of subjects' head movements. The simplest, the maximum range over which the head is rotated, can be misleading because a single extreme movement during a trial can increase the measure dramatically. A measure that better represents the average behavior over an extended task is the root-mean-square (RMS) variation in head orientation. Head rotations about three axes (azimuth, elevation, and roll) were recorded while subjects performed the block-copying task. The RMS measure, however, could be inflated by a change in the mean orientation of the head over the trial. This had to be taken into account because subjects typically displayed such shifts in mean orientation as they progressed from the beginning of the model to the end; subjects almost always began in the upper-left corner of the model.

Figure 4.13 shows the head's azimuth angle over a nine second period (the beginning and end of each trial are excluded because large head movements not representative of the ongoing task occur as the subject begins the trial and signals its end). The periodic movement is evident, as is a slow 'trend' in orientation. This low-frequency component in the head's orientation signal could inflate the RMS measure, so a regression line was fit to the recorded rotation about each axis. The regression line (shown in Figure 4.13 a) was subtracted from the raw orientation record before analysis. Figure 4.13 b) shows the raw and corrected azimuth records. In this case the uncorrected measure overestimated the actual RMS value by approximately 5% (2.59 vs. 2.48 degrees). Figure 4.14 a) shows the head's elevation over the same period, along with the regression line fit to the data. The trend (downward in this case) is of approximately the same magnitude as the trend in azimuth (<2.5deg. over 9 seconds), but the superimposed movement is much smaller (note that the vertical scale is different than in Figure 4.13). In this case, failing to correct for the shift in mean orientation would lead to a 25% overestimate of the RMS amplitude (1.22 vs. 0.97 degrees). Figure 4.14 b) shows the raw and corrected elevation data.
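The correction amounts to fitting a regression line to each orientation record and computing the RMS of the residuals. A minimal sketch of that computation is given below; the sampling-rate argument is used only to build the time base and is an illustrative assumption.

```python
import numpy as np

def detrended_rms(angle_deg, fs_hz):
    """RMS head-orientation variation before and after removing a linear trend.

    A regression line is fit to the orientation record and subtracted before
    computing the RMS, so slow shifts in mean head orientation over the trial
    do not inflate the measure.
    """
    angle = np.asarray(angle_deg, float)
    t = np.arange(len(angle)) / fs_hz
    slope, intercept = np.polyfit(t, angle, 1)        # linear regression
    corrected = angle - (slope * t + intercept)       # remove the slow trend
    raw_rms = np.sqrt(np.mean((angle - angle.mean()) ** 2))
    corrected_rms = np.sqrt(np.mean(corrected ** 2))
    return raw_rms, corrected_rms
```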

Figure 4.9 Temporal positions of the peak gaze/head, gaze/hand and head/hand cross-correlations for four subjects.

Figure 4.10 Mean of temporal positions of the peak gaze/head, gaze/hand and head/hand cross-correlations for four subjects.

Figure 4.11 Peak values of the cross-correlation functions of gaze, head, and hand for four subjects.

Figure 4.12 Mean peak value of the cross-correlation functions of gaze, head, and hand for four subjects.

Figure 4.15 shows mean azimuth, elevation, and roll RMS values for three subjects. As expected from the configuration of the model, resource, and workspace areas on the working plane, the largest movements were in azimuth. The widest variation between subjects was found in the roll angle. Figure 4.16 shows the average RMS amplitude across the subjects, and the between-subjects s.e.m. Note that for an approximately normal distribution, in which the observed range spans roughly +/-2 standard deviations, the RMS value is about 25% of the range; as seen in Figure 4.13, the range of horizontal head movements was ~10deg., or about two-thirds the size of the gaze shifts.

a)

b)

Figure 4.13 a) Head orientation (azimuth) as a function of time over nine seconds with linear regression. b) Raw and corrected head azimuth records

a)

b)

Figure 4.14 a) Head elevation as a function of time over the same period shown in Figure 4.13. b) Raw and corrected head elevation records.

Figure 4.15 Mean azimuth, elevation, and roll RMS amplitudes for three subjects.

Figure 4.16 Azimuth, elevation, and roll RMS amplitudes averaged across three subjects.

The Coordination of Eye and Head Movements in Gaze Changes

The coordination of eye and head movements during complex visuo-motor tasks is not well understood because previous work either fixed the head or was limited to simple tasks not representative of natural behaviors. What little is known about the programming of eye and head movements suggests that a single gaze change command may be responsible for both eye and head movements under most conditions [Kowler et al. 1992, Land 1992]. The block-copying task provides a new paradigm in which to examine the programming of concurrent eye and head movements. In the series of fixations making up a PMD (pickup-model-drop) sequence, alone or as part of an MPMD sequence, the gaze moves from the resource area to the model, then on to the workspace. The resource -> model gaze change is primarily horizontal, and the model -> workspace gaze change is primarily vertical. Because of the frequency of MPMD and PMD sequences, it is useful to examine gaze and head trajectories during the PMD sequence. Four subjects' gaze and head movement records were examined. The sections of each trial containing a PMD sequence were isolated, and the concurrent head movements were analyzed to determine to what degree the head movements paralleled the gaze changes, as would be expected from reports of a tight linkage between eye and head.

Significant between-subject variability was observed in the degree to which eye and head movements were linked. Figure 4.17 a) shows a two-dimensional plot of gaze and head intersections with the working plane during a PMD sequence. Gaze and head traces are shown over an interval starting when the block is picked up in the resource area and ending when the gaze arrives in the workspace. Figure 4.17 a) shows an example of the tight coupling of eye and head movements typical of that reported in the literature. Eye and head movements toward the model area are initiated at time t1, after a block is picked up in the resource area, and toward the workspace at t2. Some subjects completing the extended block-copying task showed very different patterns of eye and head movements, programming eye and head movements to separate targets. For example, after picking up a block in the resource area, subjects sometimes moved their gaze to the model area while moving the head directly to the workspace in preparation for the putdown. Figure 4.17 b) shows gaze and head traces during such a PMD sequence. At time t1, a leftward gaze change to the model area is initiated. At the same time, the head begins a single, diagonal movement toward the workspace in anticipation of the putdown. At time t2, while the head is still completing its movement toward the workspace, gaze moves vertically to the workspace.

These examples represent extreme cases. Subjects showed evidence of such eye/head dissociation with varying frequency and to varying degrees. Three PMD sequences for subject eb are shown in Figure 4.18 through Figure 4.20; they were selected to illustrate the range of eye/head coordination observed in subjects performing the block-copying task. Figure 4.18 shows eb executing a PMD sequence in which the gaze and head apparently follow coupled movements with common spatial and temporal patterns. Figure 4.18 a) shows a two-dimensional plot of the gaze intercept as a block is picked up in the resource area, gaze returns to the model, then moves to the workspace to guide the putdown. Figure 4.18 b) shows a two-dimensional plot of head orientation over the same period, at an expanded scale to show the head movements more clearly.

a) b)

Figure 4.17 Two-dimensional gaze and head records showing a) tight coupling of eye & head, and b) dissociated eye and head movements.

The common goal of eye and head for each of the two gaze changes is evident in the plot of head position. Figure 4.18 c) shows the same head orientation data plotted as a function of time. It is evident in this plot that the two head movements are executed as separate, sequential movements; the horizontal component of the head's motion is completed before the vertical component is initiated. Figure 4.19 shows another PMD sequence from subject eb, from a near-control trial performed on the same day as that shown in Figure 4.18. It is obvious from the two-dimensional plots of gaze (a) and head orientation (b) that the eye and head are not executing movements programmed with the same goals. The eyes are executing a sequential program, moving first to the model, then to the workspace, while the head is executing a single, diagonal movement toward the workspace in preparation for guiding the placement of the block, as was seen in Figure 4.17 b). Figure 4.19 c) shows the temporal coincidence of the horizontal and vertical head movements. Figure 4.20 shows another PMD sequence for subject eb. This case can be considered intermediate between the two discussed above. Here the head follows a curved trajectory from the resource to the workspace; the horizontal head movement is initiated first, but the vertical component begins before the horizontal movement is complete, producing the curved path.

Analysis of the four subjects' PMD sequences yielded the distributions shown in Table 4.3. Head movements were labeled as "Separate H & V" for block moves like that shown in Figure 4.18, "Diagonal" for cases like that shown in Figure 4.19, and "Curved" for cases like that shown in Figure 4.20. Block moves in which the vertical component of head motion was too small to meaningfully assign the head movement to one of the above categories, and cases where eye and head movements could not be reliably paired, were excluded.

a) b)

c)

Figure 4.18 Example of common commands to eye and head. The vertical component of head motion is initiated after the horizontal component is complete. a) Gaze intercept, b) horizontal and vertical head orientation, and c) horizontal and vertical head orientation vs. time.

a) b)

c)

Figure 4.19 Dissociation of eye and head trajectories. Example of 'diagonal' head trajectory, where the horizontal and vertical components of head motion are executed concurrently. a) Gaze intercept, b) horizontal and vertical head orientation, and c) horizontal and vertical head orientation vs. time.

a) b)

c)

Figure 4.20 Example of 'curved' head trajectory, where the vertical component of head motion begins before the horizontal component is completed. a) Gaze intercept, b) horizontal and vertical head orientation, and c) horizontal and vertical head orientation vs. time.

Table 4.3 Relative frequency of head movement types for PMD sequences for four subjects.

Head Movement        eb         jw         mh         sc
Separate H & V       0.45       0.22       0.67       0.35
Diagonal             0.36       .072       0.00       0.50
Curved               0.19       0.72       0.33       0.15

Note the wide variation between subjects. No diagonal head movements were observed for subject mh, while jw showed 0.72. Subjects eb and sc were intermediate between those extremes.
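One way such a classification could be operationalized is sketched below: the overlap between the active portions of the horizontal and vertical head components determines the label. The velocity threshold, the overlap fractions, and the minimum vertical amplitude are illustrative assumptions, not the criteria actually used to produce Table 4.3.

```python
import numpy as np

def classify_head_trajectory(t, h_az, h_el, overlap_frac=0.25, min_vert_deg=1.0):
    """Assign a PMD head movement to 'Separate H & V', 'Curved', or 'Diagonal'
    from the temporal overlap of its horizontal and vertical components."""
    t = np.asarray(t, float)
    h_az = np.asarray(h_az, float)
    h_el = np.asarray(h_el, float)

    def active(x):                       # samples where a component is moving
        v = np.abs(np.gradient(x, t))
        return v > 5.0                   # deg/s threshold (assumed)

    if np.ptp(h_el) < min_vert_deg:
        return "Unclassified (vertical component too small)"
    h_on, v_on = active(h_az), active(h_el)
    overlap = np.sum(h_on & v_on) / max(1, np.sum(v_on))
    if overlap > 1.0 - overlap_frac:
        return "Diagonal"                # H and V executed together
    if overlap > overlap_frac:
        return "Curved"                  # V begins before H is complete
    return "Separate H & V"              # V begins after H is complete
```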

Discussion

Subjects adopt a regular, rhythmic pattern of eye, head, and hand movements while performing the block-copying task. On first inspection the head and hand records appear almost sinusoidal, but an examination of the details of the movements shows marked asymmetries. The oscillating head and hand movements made by subjects while copying the model pattern vary depending on the specific sub-task being performed. For example, the head is moved at a higher velocity when moving toward the resource area than in the opposite direction, and the head remains stable for a shorter period for the pickup than for the drop. This asymmetry in dwell-time is more pronounced in the hand movement record, where the hand typically remains in the workspace about twice as long as in the resource area. The differences between pickup and drop actions extend to the temporal coordination of eye and hand as well. When picking up a block in the resource area, gaze was typically held on the block targeted for pickup only until the fingers were about to touch it. When the block was being placed in the workspace, however, gaze was usually maintained until the drop was complete and the hand moved away from the board.

Asymmetries were also observed in subjects' eye/head coordination. The head often leads the eye in gaze changes toward the resource area, while eye and head movements were initiated at approximately the same time for gaze changes away from the resource area. These results demonstrate that movements of the eye, head, and hand, which one might expect to be controlled by low-level motor routines, are in fact dependent on the immediate task. Answers to questions like "does the eye lead the head in gaze changes?" are meaningful only when the specific task being performed is clearly understood. Refinements to the question, such as distinguishing 'predictive' saccades, begin to acknowledge this relationship, but these experiments demonstrate that investigations into complex behaviors are only useful when the task is taken into account. This is crucial when considering the reports of Land [1992] and Kowler et al. [1992], who both concluded that the eye and head are programmed together.

In the block-copying task, when subjects are performing a PMD sequence (alone or as part of an MPMD block move), they make two gaze changes, first from the resource to the model, then from the model to the workspace. Because subjects typically move the head toward the area where manual manipulations are performed (and hold the head still for a period), there are two different goals for the eye and head motor systems: the gaze must go from resource -> model -> workspace, while the head must move from the resource to the workspace in preparation for the drop. Current models of gaze changes postulate that both eye and head movements are driven by the desired gaze shift in body-centered or exocentric (spatial) coordinates [Guitton 1992, van der Steen 1992] and that the VOR is suppressed until the spatial goal is achieved. Based on these models, and the results of Land [1992] and Kowler et al. [1992], one would expect the eye and head trajectories to be tightly linked. Some subjects did indeed move the eye and head together, first from resource -> model, then from model -> workspace. Other subjects, however, executed the movements in a very different way, with gaze and head moving to different targets at the same time. The degree to which the eye and head motor programs dissociate varied both between and within subjects.

While there is large variation between subjects, this experiment has shown clear evidence that subjects are capable of programming independent eye and head movements. Given this conclusion, one must ask why evidence of this dissociation has not been observed in the past. One cannot look at behavior divorced from the immediate task being performed by a subject and draw conclusions about the capabilities of the visuo-motor system. In this case, it is not a question of whether the task was 'natural,' or whether unphysiological movement restraints were imposed by the test apparatus, but the nature of the primary task itself that must be considered. Land's [1992] driving task was a natural task and subjects were free to move their heads, yet his data showed no hint of eye/head dissociation. He concluded that in cases where subjects were too busy to exert conscious control over eye and head movements, they were directed to a common goal. The difference between Land's driving task and the block-copying task, however, is not how 'busy' the subjects were; it was the nature of the immediate task being performed. The driving task required large gaze changes with almost no vertical component, and there is no clear advantage to a temporal dissociation between the horizontal eye and head movements in Land's task. Kowler et al.'s [1992] conclusion that there was a natural tendency to program common eye and head movements was based on their study of reading and of nonsense tasks requiring the same kinds of eye and head movements as reading. If their data are examined from a different perspective, with an emphasis on the immediate task, a very different conclusion is possible: while they observed common eye and head trajectories in the nonsense tasks, dissociation of eye and head movements was observed in the natural multi-line reading task. Kowler et al. [1992] reported that subjects sometimes made eye and head movements in opposite directions at the end of a line of text, with the head moving left and down to the next line as the eye made a final saccade to the right. As in the block-copying task, the reading task presents a situation in which dissociation of eye and head is an optimal strategy.



"Visual Representations in a Natural Visuo-motor Task"

By: Jeff B. Pelz
Center for Imaging Science, Rochester Institute of Technology
Department of Brain and Cognitive Sciences, University of Rochester

1995