

PERCEPTION-ACTION

Are first-order disparity gradients spatial primitives of the orientation of lines on the ground plane?

Laura Pérez Zapata I; J. Antonio Aznar-Casanova I; Nelson Torro-Alves II; Hans Supèr I,III

I University of Barcelona, Barcelona, Spain

II Universidade Federal da Paraíba, João Pessoa, PB, Brazil

III Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain

Correspondence: Nelson Torro-Alves, Departamento de Psicologia, Universidade Federal da Paraíba, João Pessoa, PB, 58051-900, Brazil. Phone: 55-83-3216-7337. Fax: 55-83-3216-7064. E-mail: nelsontorro@yahoo.com.br

ABSTRACT

The present study investigated the mechanisms involved in processing orientation on the frontal and ground planes. The stimuli comprised two yellow circles conceived as the endpoints of a segment and depicted on a black background. In Experiment 1, the observers performed two tasks on both planes (frontal and ground). In Task 1 they were asked to indicate the absolute location of the two endpoints, presented one at a time (successive task). In Task 2 they had to locate the relative position of the endpoints presented simultaneously (simultaneous task). Relative and absolute errors were analyzed according to a cyclopean coordinate system derived from the geometry of the visual scene. These two kinds of errors were studied within the framework of the hypothesis that each kind of task would minimize the error related to its codification. The results showed greater absolute and relative errors in the simultaneous task than in the successive task, suggesting that the successive task activated a more accurate encoding of orientation. In Experiment 2 we controlled the availability of visual depth cues by changing the presentation time (50 and 3000 ms) and viewing conditions (monocular and binocular) in the simultaneous task. The results showed that the precision of orientation judgments was poorer on the ground plane than on the frontal plane, except when the observers used binocular vision. These results suggest that the orientation of a segment, at least on the ground plane, can be conceptualized as a gradient of disparities.

Keywords: orientation, size perception, spatial location, depth cues, binocular and monocular vision.

Introduction

Since ancient civilization, mankind has developed its own systems of reference and spatial orientation (i.e., its own ways of representing space, either in a two-dimensional [2-D] system [flat plane] or a three-dimensional [3-D] system [volume]). In ancient times, orientation in space consisted of searching for the orient (i.e., determining the place where the sun rises). Geographically speaking, orientation consists of finding the so-called North-South direction. From this axis, one can posit the cardinal points (North, South, East, and West) that comprise a system of exocentric reference. Therefore, when we seek our orientation, we perform a positioning exercise (i.e., we draw a cognitive map in a reference system that is valid from the position where we find ourselves). Additionally, determining the orientation of a line or figure establishes one's position relative to a frame of reference. Moreover, shapes are composed of oriented lines, so perceiving the orientation of these lines is necessary to recognize shapes. The perception of orientation thus allows shape recognition and establishes our position in the world. People have or develop a sense of orientation. However, despite its importance, whether the processing of oriented lines proceeds from primitive features (e.g., the endpoints of the segment) or from other derived features (e.g., slant, inclination, and tilt) remains unclear. Is the orientation of a line a spatial primitive (i.e., a feature that is not derived from simpler features, such as the locations of the endpoints) or, conversely, a derived feature for spatial processing?

To compute the orientation of lines, observers can use an egocentric (oculocentric or retinotopic) or exocentric (head-centric or cyclopean) frame of reference. In the first case, the orientation of an object is calculated relative to the horizontal and vertical meridians, whose origin lies in the fovea (Kelly, Loomis, & Beall, 2004; Koenderink, van Doorn, & Lappin, 2003; Loomis, da Silva, Fujita, & Fukusima, 1992). In the second case, the orientation of a segment is calculated relative to another segment in the visual scene. However, to deal with the processing of oriented lines, the plane on which these lines are depicted (e.g., the frontoparallel or ground plane) must be taken into account.

Previous studies assumed that the orientation of segments is calculated according to at least one of the following procedures: (1) computation from the retinal coordinates of the endpoints of the oriented line using an oculocentric frame of reference (e.g., Borra, Hooge, & Verstraten, 2007; Li & Westheimer, 1997; Morgan & Glennerster, 1991; Seizova-Cajic & Gillam, 2006) or (2) calculation of the angular deviation (tilt) of a segment projected on the retina with respect to the vertical meridian (e.g., Asch & Witkin, 1948a, b; Westheimer, 1984). Note that the retinal vertical meridian can serve as a reference to establish the true gravitational vertical. However, for stimuli on the horizontal or transverse plane (e.g., on the ground), it also establishes the visual direction when looking straight ahead. Thus, to make sense of the retinal image of lines that lie on the horizontal plane along the ground, the egocentric frame (oculocentric or head-centric) must be brought into correspondence (alignment) with an exocentric frame. The vertical meridian that operates within a cyclopean frame of reference (which is therefore egocentric) would then define the sagittal axis in this frame.

The extracted information within one or another frame of reference is supposed to be processed by means of the integration of responses through receptive field pairs that are tuned to orthogonal orientations (based on the physiological findings of Blakemore, Fiorentini, & Maffei, 1972; Hubel & Wiesel, 1962, 1968), as well as on the psychophysical evidence reported by Dakin, Williams, and Hess (1999), Morgan and Baldassi (1997), and Tyler and Nakayama (1984).

Interestingly, these two procedures use different spatial information as inputs (primitives). The first uses the retinal representation of the stimuli as primitives and operates in an oculocentric or egocentric frame of reference (Kelly et al., 2004; Koenderink et al., 2003; Loomis et al., 1992). The second mechanism uses the disparities between orientations as primitives and operates in an exocentric frame of reference.

In the case of a segment on the ground, such mechanisms cannot extract orientation information as precisely. Geometrically, as the observer's eye level decreases, the retinal projection of a segment on the ground plane progressively shifts from 90º (a frontal plane) to 0º (the ground plane); consequently, greater angular compression (and a greater error) occurs. Likewise, as the orientation shifts between 90º and 180º, less angular compression (and a smaller error) occurs. Thus, both elevation above the ground plane and changes in the orientation of the segment alter the visual compression rate, which affects the precision of direction and size estimates.
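To illustrate this geometry, the following sketch (our illustration, not the authors' code) projects a ground-plane segment onto a frontal image plane under a simple pinhole model, using the experiment's eye height (1.10 m) and viewing distance (6.20 m) and assuming that 0º denotes a transverse segment and 90º a sagittal one:

```python
import numpy as np

def projected_orientation(alpha_deg, h=1.10, d=6.20, length=0.77, f=1.0):
    """Projected orientation of a ground-plane segment on a frontal image plane.

    alpha_deg: physical orientation on the ground (0 = transverse, 90 = sagittal).
    h: eye height above the ground (m); d: distance to the segment midpoint (m);
    length: segment length (m); f: focal distance of the image plane (arbitrary).
    """
    a = np.radians(alpha_deg)
    mid = np.array([0.0, -h, d])                        # eye at origin, +Z straight ahead
    half = 0.5 * length * np.array([np.cos(a), 0.0, np.sin(a)])
    p1, p2 = mid - half, mid + half
    q1 = f * p1[:2] / p1[2]                             # pinhole projection (X/Z, Y/Z)
    q2 = f * p2[:2] / p2[2]
    dx, dy = q2 - q1
    return np.degrees(np.arctan2(dy, dx))

for alpha in (0, 22.5, 45, 67.5, 90):
    print(alpha, round(projected_orientation(alpha), 2))
# Intermediate orientations are strongly compressed toward the transverse
# axis (e.g., 45 deg projects to roughly 10 deg at this eye height), and
# the compression grows as the eye height h shrinks.
```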

In the present study, observers performed two tasks with stimuli presented on the frontal and ground planes. In the first task, they had to indicate the absolute location of the two endpoints of a segment, presented one at a time (successive task). In the second task, they were asked to locate the relative position of the endpoints when presented simultaneously (simultaneous task). In both tasks, the participants could compute orientation using a cyclopean frame of reference (egocentric) with three orthogonal axes: (1) a vertical axis that passes through the midline of the body, (2) a horizontal axis that corresponds to the eye level of the observer, and (3) a sagittal axis that corresponds to the visual direction straight ahead of the observer. Hering and Helmholtz used the term "cyclopean eye" to denote the point midway between the eyes that serves as a center of reference for head-centric directional judgments (see Howard & Rogers, 2002). Julesz (1971) generalized the term "cyclopean" to denote the processing of visual information after inputs from the two eyes are combined. According to Howard and Rogers (2002), the term "cyclopean" has the same connotation as "central processing" as opposed to "retinal processing." In this sense, the processing that is responsible for coding orientation, motion, and disparity has been said to be cyclopean. Nobody questions the idea that the last two are determined in a cyclopean frame. However, the evidence that orientation is coded in a cyclopean system is controversial. Therefore, we attempted to verify the basis for this claim.

We hypothesized that in the successive task, observers would make smaller absolute errors but greater relative errors because they would encode only the spatial location of the endpoints successively. By contrast, in the simultaneous task, the observers would make smaller relative errors but greater absolute errors because they would preferentially encode the relative distance between the two endpoints. This difference is related to the fact that these tasks require the use of different inputs to process orientation. For example, convergence or absolute disparity cues are necessary to compute egocentric or absolute distances, whereas binocular disparity or motion parallax is required to compute exocentric or relative distances.

Here, responses were analyzed according to a cyclopean coordinate system derived from the geometry of the visual scene. Appendix 1 describes in detail how the coordinates of the two endpoints projected onto a 2-D cyclopean system were calculated. Appendix 2 describes how the horizontal and vertical dimensions of the cyclopean system were normalized in cases of (a) absolute errors, (b) relative errors, and (c) orientation ratios of implicit lines projected onto the cyclopean system. The origin of this system is located at the "egocenter" (i.e., the intersection of the body's bilateral symmetry axis with the imaginary line that connects the centers of the two pupils). This system enabled us to register both the absolute and relative coordinates of the stimuli and compare them to the responses of the observers. The assumption behind the experiment was that the magnitudes of errors (absolute or relative) would reveal the nature of the encoding of perceived orientation. Thus, it would be possible to infer whether the processing of orientation derives from the processing of the absolute or relative locations of a segment's endpoints.

The study also combines two novel aspects. First, the results were analyzed according to a cyclopean (head-centric) coordinate system. The coordinates were derived from a geometric analysis of the projection of the visual scene on the retina of the participants. Second, the experimental design combined absolute (successive presentation) and relative (simultaneous presentation) location tasks. Errors were computed as the differences between the physical and perceptual coordinates, derived from the judgment of the participants.

In Experiment 1, the results revealed that the sagittal plane was used as a reference for the judgments of orientation and distance. However, we were unable to determine whether deviations from the sagittal plane were attributable to extra-retinal depth cues (e.g., eye movements) or retinal cues (binocular disparity). In Experiment 2, we manipulated the availability of such cues and found that the precision of orientation judgments was poorer on the ground plane than on the frontal plane, except when the observers used binocular vision. The results of Experiment 2 suggest that the orientation of a segment, at least on the ground plane, can be conceptualized as a gradient of disparities. Finally, we propose that the mechanism that processes orientation on the ground plane, based on extracting a vertical gradient of horizontal positional disparities, is the same as the one used to process the inclination of a surface, such as a gradient of horizontal disparity.

General methods

Stimuli and apparatus

The stimuli were generated by a C++ program that ran on a Pentium IV 3.0 GHz processor using the glut32 library under OpenGL. Two Mitsubishi SD430U/SVGA 2500 lumens (1024 × 768 pixels) digital projectors were used to present stimuli on two screens located on the frontoparallel or ground plane. One of the devices was placed on the ceiling of the laboratory to project stimuli onto the ground plane, whereas the other was set up to present stimuli on a screen located in the frontoparallel plane. In both cases, the projectors were located 4 m from the screen.

The stimuli comprised a projected pair of yellow circles (1.76 cm diameter on the screen) conceived as the endpoints of a segment (Westheimer, 1996). These circles were depicted on a black background, and the distance and orientations between the points were varied. The size of the segments ranged from 49.30 to 105.6 cm (in 14.08 cm steps), and the orientations ranged from 0º to 157.5º (in 22.5º steps; 5 sizes × 8 orientations), resulting in 40 potential positions of the pair of points (Figure 1).
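For concreteness, the following sketch (our reconstruction, not the authors' code) enumerates the 40 size × orientation configurations; centering each implicit segment on the screen origin follows Appendix 1, which states that the midpoint of the interval between the endpoints was always at the origin of the exocentric (screen) system:

```python
import numpy as np

# 5 segment sizes x 8 orientations = 40 endpoint-pair configurations.
sizes_cm = 49.30 + 14.08 * np.arange(5)   # 49.30 ... 105.62 cm (5 sizes)
orients_deg = 22.5 * np.arange(8)         # 0 ... 157.5 deg (8 orientations)

stimuli = []
for s in sizes_cm:
    for o in orients_deg:
        a = np.radians(o)
        dx, dy = 0.5 * s * np.cos(a), 0.5 * s * np.sin(a)
        # The two yellow circles are the endpoints of the implicit segment,
        # placed symmetrically about the screen center.
        stimuli.append({"size_cm": s, "orient_deg": o,
                        "p1": (-dx, -dy), "p2": (dx, dy)})

print(len(stimuli))  # -> 40 potential positions of the pair of points
```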


A chin rest was used to prevent head motion and keep the participant's eyes at 6.20 m from the center of the screen. The observer's eye level was adjusted to a height of 110 cm from the floor. The projected area in the frontal plane condition subtended 16 × 16 degrees of visual angle. The projected area in the ground plane condition subtended 16 × 2.17 degrees of visual angle. The experimental tasks were conducted under reduced visual conditions (4 cd/m²). The software that was used to present the stimuli was also used to record the participants' responses.

Tasks

Two tasks were designed for the study. In Task 1 (successive task), the participants were asked to indicate, using the mouse, the absolute location of two points presented one at a time on the screen (Figure 2A). This task was designed to compel the participants to make egocentric location judgments. In Task 2 (simultaneous task), the participants were required to determine the relative position of two points presented at the same time on the screen (Figure 2B). In this case, they were instructed to preserve, in their responses, the distance between the points and the orientation of the segment. Because the participants had to determine the relative location of one point in relation to the other, this task promoted exocentric location judgments.


The instructions for Task 1 encouraged an attitude akin to performing a purely spatial memory task, in which we asked for the location of each endpoint independently of the other. The instructions for Task 2 emphasized treating the two points as a whole, as if they composed an implicit oriented segment. Therefore, Task 1 was an absolute location task, and Task 2 was analogous to the well-known psychophysical "matching intervals of distance" or relative distance task.

The precision of responses was analyzed by considering the perspective of the participants. The position of the observer relative to the screen enabled us to analyze responses according to an egocentric coordinate system. Responses were used to reconstruct the perceived scene and establish a map between the exocentric frame of reference (location of the points in the world) and egocentric frame of reference (projection of the points in the cyclopean system; see Appendix 1).

EXPERIMENT 1

Methods

Participants

Eight volunteers, four female (Mage = 26 years, SDage = 3 years) and four male (Mage = 39 years, SDage = 11 years) participated in the experiment. All of the participants had normal or corrected-to-normal visual acuity and normal stereo vision (at least 60 sec arc, according to the Titmus test). All of the participants provided informed consent, and the study was approved by the institutional ethics committee of the University of Barcelona in accordance with the ethical standards established in the 1964 Declaration of Helsinki.

Procedure

In Task 1 (successive task), the trials began with the presentation of a warning sound (500 Hz beep for 100 ms) followed by the presentation of a point for 1.5 s and a blank screen for 500 ms to prevent after-effects. The observers were instructed to locate the point on the screen by moving the cursor and pressing the left button of the mouse at the position where they perceived the presentation of the point. Afterward, a new point was displayed, followed by a white screen, and the observers indicated its corresponding position (Figure 2A). Pointing to the locations with the mouse was performed after the stimuli were removed from the screen.

In Task 2 (simultaneous task), the trials began with presentation of a warning sound (500 Hz beep for 100 ms) followed by presentation of the two endpoints of a segment for 3000 ms. A black screen was then presented, and the observers had to mark the points by moving the cursor on the screen and pressing the left button of the mouse over the perceived location (Figure 2B). In this task, the observers were instructed to maintain the same distance and orientation of the endpoints of the segment.

Each observer participated in four experimental sessions, involving a combination of two planes (frontal and ground) and two tasks (successive and simultaneous). Before starting the experimental sessions, the participants responded to a small series of training stimuli. Each session consisted of three blocks of 40 trials, involving a combination of five distances and eight orientations that were randomly presented. Between blocks, a rest period of 5 min was indicated by a message on the screen. On average, the participants took 6 min to complete the simultaneous task and 8 min to complete the successive task. The order of the sessions was counterbalanced across observers.

Data analysis

No differences in the precision of orientation matching were found across the segment sizes under test. This result allowed us to analyze the data without taking size into account as a variable. The data obtained from the different sizes were pooled, and only orientation, plane, and task were taken as variables for the analysis.

To determine the observers' accuracy in encoding the endpoints, we examined (1) absolute errors, calculated in spatial coordinates, with their origin in the center of the screen, and (2) relative errors (geometry of the projection of the segment in the cyclopean coordinate system). Errors were measured according to a cyclopean coordinate system expressed in normalized units of visual subtended angles.

The subtended angles could be conceived as the coordinates of a head-centric system, derived geometrically from the corresponding projections. A detailed description of the geometric analysis in the cyclopean coordinate system is presented in Appendix 1.

Experimental errors (absolute and relative; see Appendix 2) were analyzed as a function of three factors related to the perception of a segment's orientation: (1) the real physical orientation of the stimulus, (2) the plane of presentation of the stimulus (frontal vs. ground), and (3) the task completed by the observer (i.e., the absolute [successive task] or relative [simultaneous task] location). The results were analyzed using repeated-measures analysis of variance (ANOVA) with Greenhouse-Geisser corrections when appropriate.

F values, uncorrected degrees of freedom, probability levels following correction, and ε values are reported. Confidence intervals were calculated in the presence of a significant interaction.

Results

Absolute errors

The accuracy for normalized absolute errors in the horizontal dimension (Abs-Err-H) of the two endpoints was analyzed using repeated-measures ANOVA, with plane (frontal and ground), task (successive and simultaneous), and orientation (0º, 22.5º, 45º, 67.5º, 90º, 112.5º, 135º, and 157.5º) as the within-subjects factors. The means of Abs-Err-H for both tasks and planes are plotted as a function of stimulus orientation in Figure 3 (upper-left panel). Abs-Err-H varied with orientation (F7,161 = 129.803, p < .001, ε = .849) and was greater in the simultaneous task (implying relative location; m = 11.956 ± .447) than in the successive task (which involves absolute location; m = 6.963 ± .260; F1,23 = 80.195, p < .001, ε = .777).


The ANOVA also revealed a significant orientation × task interaction (F7,161 = 9.84, p = .001, ε = .634). Therefore, some orientations (0º, 22.5º, 135º, and 157.5º) maximized the differences in performance on the X-axis, with precision in estimating the horizontal angular component varying as the orientation deviated from 90º.

A main effect of the plane factor (F1,23 = 6.271, p = .020, ε = .214) was also found, with greater mean Abs-Err-H on the ground plane (m = 9.811 ± .331) than on the frontal plane (m = 9.108 ± .205). The other interactions were not significant.

The accuracy for absolute errors in the vertical/depth dimension of the two endpoints (Abs-Err-V) was also analyzed using repeated-measures ANOVA, adopting the same factors as above. The means of Abs-Err-V for both planes and tasks are plotted as a function of stimulus orientation in Figure 3 (upper-right panel). Abs-Err-V showed a significant effect of every single factor under study: plane on which the stimulus was displayed (F1,23 = 510.714, p < .001, ε = .959), with greater errors observed in the depth dimension on the ground plane (m = 55.185 ± 1.974) than in the vertical dimension on the frontal plane (m = 9.742 ± .220); task (F1,23 = 77.272, p < .001, ε = .778), with Abs-Err-V greater in the simultaneous task (m = 40.518 ± 1.861) than in the successive task (m = 24.409 ± .372); and orientation (F7,161 = 11.431, p < .001, ε = .342), with orientations close to 90º producing the greatest errors.

Significant interactions were also found for plane × task (F1,23 = 55.216, p < .001, ε = .715), orientation × task (F7,161 = 10.377, p < .001, ε = .321), and plane × orientation (F7,161 = 2.861, p = .008, ε = .115). Finally and most interestingly, a second-order plane × task × orientation interaction (F7,161 = 9.532, p < .001, ε = .302) was also found.

Certainly, greater absolute errors were made in the depth plane in the simultaneous task, especially when the orientation approached the sagittal plane (67.5º, 90º, and 112.5º).

Relative errors

The accuracy for relative errors in the horizontal dimension of the endpoints was analyzed using repeated-measures ANOVA, with plane (frontal and ground), task (absolute and relative location), and orientation (0º, 22.5º, 45º, 67.5º, 90º, 112.5º, 135º, and 157.5º) as the within-subjects factors. The means of Rel-Err-H for both tasks and planes are plotted as a function of stimulus orientation in Figure 3 (lower-left panel).

Rel-Err-H varied with orientation (F7,161 = 39.069, p < .001, ε = .629) similarly to absolute errors. A significant main effect of task was found (F1,23 = 39.069, p < .001, ε = .629). Relative errors were smaller in the successive task (absolute location; mean = 5.286 ± .48) than in the simultaneous task (relative location; mean = 8.106 ± .71). Moreover, a task × orientation interaction was found as orientation deviated from 90º (F7,161 = 10.325, p < .001, ε = .310). However, neither the main effect of plane nor any other interaction was significant.

The accuracy for normalized relative errors in the vertical dimension (Rel-Err-V) of the two endpoints was analyzed using repeated-measures ANOVA, adopting the same factors as in the previous analysis. The means of Rel-Err-V for both planes and tasks are plotted as a function of stimulus orientation in Figure 3 (lower-right panel). Errors on the Y-axis increased as orientation rose above 90º (F7,161 = 27.835, p < .001, ε = .548). A significant main effect of the plane on which the stimulus was displayed was found (F1,23 = 10.103, p = .004, ε = .305). Thus, the accuracy of the vertical dimension on the frontal plane (Rel-Err-V: m = 10.926 ± .310) was better than the accuracy of the depth dimension on the ground plane (Rel-Err-V: m = 17.672 ± 2.012). A simple main effect of task was also found (F1,23 = 8.752, p = .007, ε = .276), in which Rel-Err-V was smaller for the absolute location task (m = 10.983 ± .852) than for the relative location task (m = 17.615 ± 2.012). Two significant first-order interactions were found: task × orientation (F7,161 = 6.554, p < .001, ε = .222) and plane × orientation (F7,161 = 5.291, p = .007, ε = .187). As with Abs-Err-V, a plane × task × orientation second-order interaction was also found (F7,161 = 3.993, p = .025, ε = .148).

Overall, these results show that the magnitude of the absolute and relative errors was determined by the combination of orientation and task (successive or simultaneous) and by the dimensions involved in a particular plane (horizontal, vertical, and depth). The absolute error analysis demonstrated that these errors were greater for the simultaneous task than for the successive task. Interestingly, absolute errors in depth were larger than absolute errors in the horizontal dimension. Figure 4 (left panel) summarizes these findings.


The relative error analysis revealed that errors were greater for the relative location (simultaneous) task than for the absolute location (successive) task but only in the case of the ground plane in the depth dimension (Z dimension). However, relative errors did not differ between tasks in the other dimensions (see Figure 4, right panel).

Therefore, a specific pattern of results emerged for an interval distance in depth (Z-axis), which differed from the horizontal and vertical dimensions. Specifically, the observers made smaller absolute errors, at least when operating on the ground plane, strongly suggesting that depth is encoded absolutely rather than from the relative spatial locations of each of the endpoints.

Precision of cyclopean orientation

The precision of cyclopean orientation was analyzed using repeated-measures ANOVA, using the same within-subjects factors as in the previous analysis. A significant main effect of orientation was found (F7,161 = 16.351, p < .001, ε = .461), with plane × orientation (F7,161 = 4.803, p = .002, ε = .173) and task × orientation (F7,161 = 10.384, p < .001, ε = .311) interactions.

Interestingly, a plane × task × orientation second-order interaction was found (F7,161 = 2.528, p = .043, ε = .099). The means of orientation precision for the successive (absolute location) and simultaneous (relative location) tasks for each plane are plotted as a function of stimulus orientation in Figure 5.


A more detailed analysis revealed that the precision of perceived orientation followed the same trend on the frontal plane (for both tasks) and on the ground plane (for the successive task only). Judgments of orientation became more accurate when the orientation of the segments was close to 90º. In the case of the simultaneous task, this trend was reversed on the ground plane, in which errors increased as the orientation of the segment approached 90º.

Discussion

Absolute and relative errors for horizontal and vertical projections on a cyclopean coordinate system were analyzed separately. In the case of the horizontal projection, absolute errors were approximately equivalent to relative errors. However, the shape of the psychometric function for the task that requires absolute location judgments (successive task) differed from the shape of the psychometric function obtained for relative location judgments (simultaneous task). In general and as expected, the observers made smaller absolute errors in the successive task than in the simultaneous task. These results indicate strong spatial compression (foreshortening) in the simultaneous task because its psychometric curve was sharper than the one recorded in the successive task. This pattern of results in the horizontal dimension led us to conclude that an absolute codification (successive task) produces fewer absolute errors compared with relative codification (simultaneous task). In the case of relative errors, no differences were found between the two codifications.

The results indicated that orientation judgments were more precise in the successive task (absolute task) than in the simultaneous task (relative location) on both planes. The data also showed that the simultaneous task modulated the participants' judgments differently on the ground and frontal planes. On the ground plane, judgments were more precise in the transverse orientation, decreasing their precision for the sagittal orientation. This suggests that accuracy in depth perception is based on the absolute locations of objects. In short and consistent with Westheimer (1996), we can conclude that the absolute location of the endpoints is necessary to compute orientation.

Under these viewing conditions, the visual angle subtended by the two points on the retina was compressed, with a foreshortening effect. Therefore, the observers seemed to adjust the relative distance between the two points in accordance with the explanation provided by the visual angle hypothesis (Levin & Haber, 1993). Such anisotropies in visual space have been demonstrated by previous studies (Foley, Ribeiro-Filho, & Da Silva, 2004; Loomis et al., 1992; Loomis & Philbeck, 1999; Matsushima, de Oliveira, Ribeiro-Filho, & Da Silva, 2005). The tendency to compress visual space in the sagittal direction occurs mainly under reduced viewing conditions, such as those in the present experiment, but also when extra-retinal visual cues (e.g., fusional vergence and accommodation) cannot be used. These results provide support for the "visual angle" hypothesis.

Although these findings are interesting, the stimuli in Experiment 1 were presented for 3000 ms, thereby allowing the observers to fixate both endpoints. Therefore, we cannot be sure whether two fixations are in fact minimally necessary to encode the stimulus or whether just one fixation is sufficient. Neither can we exclude the possibility that the simultaneous task promoted relative codification because two absolute codifications were possible. Likewise, the participants observed the stimuli binocularly. Therefore, this experimental design was unable to isolate the particular contributions of binocular depth cues (vertical and horizontal disparity) and oculomotor cues (convergence).

To examine how these inputs are used in the processing of orientation, a new experiment was conducted in which we varied the viewing conditions (monocular and binocular), the duration of the stimulus (50 and 3000 ms), and the plane of projection of the stimuli (frontal or ground). We sought to determine which depth cues are sufficient to enable observers to perceive orientation on these planes.

EXPERIMENT 2

Methods

Participants

Five volunteer observers (two female, three male) participated in this experiment. These participants, aged 22, 24, 24, 25, and 54 years, had normal or corrected-to-normal visual acuity and normal stereo vision (at least 60" according to the Titmus test). Their dominant eye was determined through a preferential vision test (hole-in-the-card test; Ehrenstein, Arnold-Schulz-Gahmen, & Jaschinski, 2005). All of the participants provided informed consent, and the study was approved by the institutional ethics committee of the University of Barcelona in accordance with the ethical standards established in the 1964 Declaration of Helsinki.

Experimental procedure

The experiment was conducted over eight sessions, involving the combination of two planes of projection (frontal and ground), two viewing conditions (monocular and binocular), and two durations of stimulus presentation (50 and 3000 ms). Thus, we manipulated the availability of visual cues (binocular disparity and extraretinal information) and, hence, the inputs required to compute orientation (Table 1).

The trials were similar to those in Experiment 1. However, in Experiment 2, only the simultaneous task was conducted (i.e., the one that involved relative codification of the orientation). The experimental sessions began with training trials to instruct observers in the task.

Each session comprised three blocks of 40 trials (5 sizes × 8 orientations). Each block took approximately 5 min, and the participants were requested to rest for 5 min between blocks. The participants performed one session per day, and the sequence of the sessions was counterbalanced as much as possible across observers.

Data analysis

The dependent variables that were used in the data analysis were the same as those described in Experiment 1: accuracy (absolute and relative errors) and precision (angular ratio of orientations; Appendix 2). Similar to Experiment 1, the measures were normalized by calculating the percentage relative to the maximum value for each dimension (horizontal or vertical) on each plane (for a review of models that describe how normalization is achieved, see Frisby et al., 1999).

The accuracy and precision of the variables were analyzed using repeated-measures ANOVA, with plane (frontal and ground), viewing condition (monocular and binocular), duration (50 and 3000 ms), and orientation (0º, 22.5º, 45º, 67.5º, 90º, 112.5º, 135º, and 157.5º) as the within-subjects factors. F values, corrected degrees of freedom, and p values are reported exclusively for significant differences.

Results and discussion

Absolute errors

The means of Abs-Err-H for viewing condition, duration, and plane were analyzed using ANOVA as a function of stimulus orientation. Abs-Err-H varied with orientation (F7,98 = 37.295, p < .001, ε = .727). The ANOVA also revealed a significant orientation × plane interaction (F7,98 = 10.670, p < .001, ε = .433). In this case, performance errors were minimal at 90º in the horizontal cyclopean dimension on the frontal plane, whereas no significant modulation of accuracy was produced by changes in orientation on the ground plane. Other main effects and interactions were not significant.

In short, absolute errors in the horizontal dimension (Abs-Err-H) increased as the orientation deviated from the sagittal visual direction of the observer on both the frontal and ground planes. However, the variation in Abs-Err-H was slightly greater on the frontal plane than on the ground plane, diminishing when the endpoints were aligned with the sagittal direction and increasing as they deviated from it. Therefore, the vertical direction appeared to be a relevant spatial reference for processing orientation. By contrast, no significant differences were found for the factors of group (defined as the combination of viewing condition and duration) and plane (Figure 6, upper left panel).


The accuracy of absolute errors in the vertical and depth dimensions of the two endpoints (Abs-Err-V) was also analyzed using repeated-measures ANOVA, using the same factors as above. Abs-Err-V showed significant main effects for all of the factors tested (p < .05), with the exception of duration. Additionally, all second-order interactions were significant: plane × viewing condition × duration (F1,14 = 5.954, p = .029, ε = .298), plane × viewing condition × orientation (F7,98 = 4.008, p = .006, ε = .223), plane × duration × orientation (F7,98 = 2.805, p = .030, ε = .167), and viewing condition × duration × orientation (F7,98 = 3.089, p = .024, ε = .181). The third-order interaction (plane × viewing condition × duration × orientation) was not significant.

Unlike the Abs-Err-H, absolute errors in the vertical dimension (Abs-Err-V) diminished as the orientation deviated from the observer's sagittal visual direction on both the frontal and ground planes. However, on the ground plane, Abs-Err-V was threefold greater than on the frontal plane. Abs-Err-V was statistically equivalent for all of the groups on the frontal plane. On the ground plane, absolute errors in the vertical dimension were significantly smaller only when the test was conducted under binocular vision and when the stimuli were displayed for 3000 ms (Figure 6, upper right panel). In conclusion, these results clearly showed the adverse effects of the compression of depth distance on the codification of spatial location, although they also indicated that accuracy in the relative (simultaneous) task was greater when the endpoints were displayed on the frontal plane and when the observers viewed them binocularly for 3000 ms (i.e., a sufficient time to fixate on the endpoints).

Relative errors

The means of Rel-Err-H for viewing condition, duration, and plane were analyzed using ANOVA as a function of stimulus orientation. Rel-Err-H varied with plane (F1,14 = 20.348, p < .001, ε = .592) and orientation (F7,98 = 45.331, p < .001, ε = .764). However, the main effects of viewing condition and duration and all other interactions were not significant. Changes in orientation influenced relative errors in the horizontal dimension, causing compression and, thus, greater errors as the endpoints deviated from 90º. Moreover, Rel-Err-H was 25% greater on the ground plane than on the frontal plane. The results also showed that within a particular plane, whether frontal or ground, the magnitude of errors was statistically equivalent between groups (Figure 6, bottom left panel).

The accuracy for normalized relative errors in the vertical dimension (Rel-Err-V) of the two endpoints was analyzed using repeated-measures ANOVA, using the same factors as in the previous analysis. Errors in the vertical dimension (Y-axis on the frontal plane but Z-axis on the ground plane) increased as the orientation approached 90º (F7,98 = 18.988, p < .001, ε = .576). A main effect of the plane on which the stimulus was displayed was also found (F1,14 = 22.084, p = .001, ε = .612).

Accordingly, a significant plane × orientation interaction (F7,98 = 6.30, p < .001, ε = .310) was observed. However, the other interactions were not significant. Orientation caused a compression effect, which might well be responsible for the errors.

The analysis also showed that relative errors in the vertical dimension were 30% smaller on the frontal plane than on the ground plane (see Figure 6, bottom right panel). However, because group had no influence on the relative errors in the vertical dimension, we concluded that neither binocular cues (disparity and convergence) nor oculomotor cues (fixations) enabled the subjects to reduce their relative or absolute errors in the vertical dimension on the ground plane.

Precision of cyclopean orientation

The precision of cyclopean orientation was analyzed using repeated-measures ANOVA, with plane (frontal and ground), viewing condition (monocular and binocular), duration (50 and 3000 ms), and orientation (0º, 22.5º, 45º, 67.5º, 90º, 112.5º, 135º, and 157.5º) as the within-subjects factors. All main effects were significant: plane (F1,14 = 42.964, p < .001, ε = .754), viewing condition (F1,14 = 12.464, p = .003, ε = .471), duration (F1,14 = 4.601, p = .050, ε = .247), and orientation (F7,98 = 79.147, p < .001, ε = .850). A statistically significant plane × orientation interaction was also found (F7,98 = 13.313, p < .001, ε = .487). Thus, differences in observer precision between the two planes were significant when the stimulus orientation deviated from 90º. We found no differences between groups on the frontal plane, whereas the observers' precision in matching orientations on the ground plane was nearer to constancy (θα'/θα = 1) when they viewed the endpoints binocularly rather than monocularly (Figure 7). This indicates that two fixations were not necessary to detect orientation.


General discussion

The present study was designed to investigate the inputs used by a mechanism that processes orientation. In Experiment 1, we found that absolute errors were greater in the simultaneous task (relative location) than in the successive task (absolute location). If we assume that the observers performed the task by minimizing one of the types of errors, then the results would suggest that they encoded the endpoints in a stable exocentric system whose origin was located at the center of the screen. However, this claim cannot be upheld because the absolute errors in depth were three times greater on the ground plane than on the frontal plane.

Relative errors in the simultaneous and successive tasks did not differ in the horizontal dimension. However, when the participants judged stimuli on the ground plane, we found an advantage in the absolute codification of endpoint coordinates, with smaller errors in the vertical dimension for the successive task. The analysis of accuracy showed that sagittal visual direction was a key reference in determining the orientation of stimuli on the ground plane, which is consistent with the findings of Westheimer (1984). Experiment 1 revealed the adverse effect of spatial foreshortening on the accuracy of orientation judgments. Both relative and absolute errors were greater when the participants judged stimuli in the depth dimension on the ground plane.

By contrast, in Experiment 2, the viewing condition × duration combination produced a significant effect only on the ground plane, where binocular vision and a 3000 ms stimulus presentation were associated with smaller absolute errors in depth. Therefore, binocularity and more fixations improved spatial location. However, even when ground plane, binocularity, and 3000 ms stimulus presentation were combined, the relative errors were approximately 30% greater on the ground plane than on the frontal plane. Our results also showed that precision in orientation decreased as the orientation of the endpoints deviated from 90º (the sagittal visual direction). However, the duration of presentation of the stimulus per se did not appear to play a relevant role.

Moreover, the significant interaction between plane and orientation indicated that judgments on the frontal plane were always more precise, except when the observers operated binocularly and when the duration of the stimulus allowed only one fixation (50 ms presentation). Under such conditions, the precision in orientation approached the constancy value (θα'/θα = 1) for observations on both planes. These results suggest that the orientation of a segment in 3-D space (e.g., on the ground plane) can be conceptualized as a gradient of disparities and that two fixations are not required to attain constancy.

Orientation can certainly be specified by changes in spatial disparity. Burt and Julesz (1980) distinguished four gradients of disparity (first-order spatial gradients), composed of two horizontal gradients (in the azimuthal or elevation direction) and two vertical gradients, which specify inclination. The ground plane has a gradient of horizontal disparity in the vertical direction. Therefore, each pair of points has a disparity difference that varies with the interval of distance between them but also according to the orientation of a virtual segment that joins them. This is why errors in the vertical dimension were greater than those in the horizontal dimension in our experiments. Consequently, disparity differences varied as a function of the distance between points and their orientation. Analogously to our cyclopean system, horizontal disparity on the ground plane in the horizontal dimension was maximal for the two endpoints oriented at 90º (vertically) and minimal for endpoints oriented at 0º (horizontally). In other words, we observed a correlation between the vertical gradient (Y-axis) in vertical disparity and horizontal gradient (X-axis) in horizontal disparity but in a cyclopean system.
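As a numerical illustration of such a gradient (ours, not the authors'; the interocular distance of 6.5 cm is an assumed typical value, and standard stereo geometry is used with the experiment's eye height and viewing distance), the following sketch relates the horizontal disparity difference between the two endpoints of a sagittal ground-plane segment to their vertical separation in the cyclopean image:

```python
import numpy as np

I, h, d = 0.065, 1.10, 6.20   # interocular distance, eye height, viewing distance (m)

def convergence_deg(D):
    """Convergence (absolute horizontal disparity) angle for a point at distance D."""
    return np.degrees(2 * np.arctan(I / (2 * D)))

def declination_deg(D):
    """Optical declination below the horizon for a ground point at distance D."""
    return np.degrees(np.arctan(h / D))

# Two endpoints of a sagittally oriented segment (alpha = 90 deg, length 0.77 m)
D1, D2 = d - 0.385, d + 0.385
rel_disparity = convergence_deg(D1) - convergence_deg(D2)    # horizontal disparity difference
vert_separation = declination_deg(D1) - declination_deg(D2)  # cyclopean vertical separation

print(rel_disparity / vert_separation)
# The ratio approximates I/h: the ground plane carries a roughly constant
# vertical gradient of horizontal disparity, which a segment modulates
# through its depth extent, length * sin(alpha).
```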

Several conclusions can be drawn about the mechanism that underlies the perception of orientation. First, to achieve constancy in the perception of orientation (precision = 1), the visual system needs to correct the width and height disparities created by a rotated virtual segment lying on the frontal or ground plane. Indeed, horizontal and vertical eccentricity must be analyzed separately. Second, compression in depth for vertically oriented segments (vertical disparity) occurs in a more accelerated manner than compression for horizontal segments (horizontal disparity). Therefore, the scales of these two orthogonal dimensions must be normalized. Ogle (1938) demonstrated an induced effect in which vertical magnification of one half of a stereogram encouraged the perception of an inclined surface. Furthermore, Rogers and Bradshaw (1994) showed that the relative vertical size of binocular gratings plays an important role in slant (and also inclination) perception. Third, the inclination of the surface that contains the endpoints must be taken into account, particularly when the plane is not a frontal one. Here, the HSR/VSR ratio, in which HSR is the horizontal size ratio and VSR is the vertical size ratio, as proposed by Koenderink (1985; but also see Koenderink & van Doorn, 1976), might also play an important role. Fourth, the average of the two eyes' angular directions to a point has been postulated to be necessary to determine its head-centric eccentricity. This can be obtained either from the oculomotor system (fixations on each point) or from binocular disparity. Our data do not enable a determination of the more prevalent cue because there were no statistically significant differences between the slow (3000 ms) and fast (50 ms) presentations during binocular viewing. Fifth, according to our results, given the absolute spatial locations of the endpoints, orientation could be processed from the gradient of disparities that interpolates points along the virtual segment that joins them, rather than from the responses of receptive fields tuned to differences in the orientation of the virtual segment. Sixth, consistent with Liu, Stevenson, and Schor (1994), if we express disparities in polar coordinates (with a meridional direction [φ] and an eccentricity [θ] in the radial direction), then binocular differences in the directional component (φ) for each endpoint could be used to process the orientation of the virtual segment on the ground plane. Therefore, we propose that the orientation disparity mechanism is the same as the one used to process surface inclination. In other words, consistent with Cagenello and Rogers (1990), it is a mechanism based on extracting a vertical gradient of horizontal positional disparities.

The orientation of straight segments, at least when they are oriented in depth, thus appears to be processed by the binocular disparity mechanism.

Thus, to extract the orientation of a segment (or two endpoints) presented on an inclined surface (e.g., the ground plane), the mechanism uses the spatial allocentric locations of the endpoints as primitives before mapping them onto a cyclopean system. Hence, orientation appears to be processed by the combined action of the two mechanisms: (1) one mechanism that operates on the inclination of the surface so that the resulting representation (output) can be mapped onto the frontal plane and (2) another mechanism that operates on the segment itself on the frontal plane. Knowledge about the inclination of the surface, therefore, improves the precision of orientation. More research is needed to understand how these mechanisms work in the case of different inclinations of the presentation plane.

Acknowledgements

This research was supported by a grant to JAAC (Ref. PSI2012-35194) from the Spanish Ministry of Education and Science (MICINN).

Received 11 January 2014;

received in revised form 08 June 2014;

accepted 26 June 2014.

Available online 25 November 2014.

Laura Pérez Zapata, J. Antonio Aznar-Casanova, and Hans Supèr, Department of Basic Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain. J. Antonio Aznar-Casanova and Hans Supèr, Institute for Brain, Cognition and Behaviour (IR3C). Nelson Torro-Alves, Universidade Federal da Paraíba, João Pessoa, Brazil. Hans Supèr, Catalan Institution for Research and Advanced Studies (ICREA).

Appendix 1

We calculated the coordinates of the two endpoints projected onto a two-dimensional cyclopean system, comprising a horizontal (θh) and a vertical (θv) component. Equation A1-1 calculates θh on both planes, where X1 and X2 are the horizontal coordinates of points 1 and 2, respectively, on the X-axis of the exocentric system (the screen plane of presentation), and Y1 and Y2 are the vertical coordinates on the Y-axis of the exocentric system. Note that the absolute distance of the endpoints on the frontal plane (Figure Append-A1) differed from that when they were placed on the ground plane (Figure Append-A2). Similarly, note that the origin of the exocentric system defined by the screen coincided exactly with the center of the screen and that the midpoint of the interval distance between the endpoints was always at the origin of this system. Thus, we calculated the two horizontal angular coordinates (θh) for each endpoint on the frontal and ground planes, as shown in Equation A1-1; on the frontal plane, Y1 and Y2 were equal to zero.
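The published equation image is not reproduced here; a plausible reconstruction from the variables defined above (taking d as the viewing distance to the screen center, an assumption on our part) is:

```latex
% Plausible reconstruction of Equation A1-1 (the published image is missing);
% d denotes the viewing distance to the screen center.
\theta_{h_i} = \arctan\!\left(\frac{X_i}{d + Y_i}\right), \qquad i = 1, 2,
\quad \text{with } Y_1 = Y_2 = 0 \text{ on the frontal plane.}
```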



Two different equations were used to compute the visual angle projected onto the vertical component of the cyclopean system (θv) in each trial, corresponding to the depth dimension on the ground plane and vertical dimension on the frontal plane.

When the stimuli were on the frontal plane, this optical declination angle was calculated by means of Equation A1-2A (see Figure Append-B1 for a graphical description of the variables involved).

However, when the stimuli were presented on the ground plane, we took into account the angular optical declination (Δ) of each endpoint. The angular optical declinations of point 1 (Δ1) and point 2 (Δ2) were obtained as shown in Equation A1-2B, where h is the height of the observer's eye level relative to the screen plane, d is the viewing distance to the center of the screen, and Y1 and Y2 are the coordinates of the points on the Y-axis of the exocentric system (the screen plane of presentation). The value of the subtended angle on the vertical axis of the cyclopean system (which represents depth on the ground plane) was computed as the difference between the two optical declination angles (Δ1 and Δ2). See Figure Append-B2 for more details.
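The published images of these equations are likewise missing; plausible reconstructions from the definitions above (assuming, for the frontal plane, that the screen center lies at eye level) are:

```latex
% Plausible reconstructions of Equations A1-2A and A1-2B (the published
% images are missing); h and d are as defined in the text.
\text{Frontal plane:}\quad \theta_{v_i} = \arctan\!\left(\frac{Y_i}{d}\right)
\qquad \text{(A1-2A)}

\text{Ground plane:}\quad \Delta_i = \arctan\!\left(\frac{h}{d + Y_i}\right),
\qquad \theta_v = \Delta_1 - \Delta_2
\qquad \text{(A1-2B)}
```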

Appendix 2

Because of anisotropies between the cardinal retinal meridians and, therefore, the existence of a different scaling factor, it was necessary to normalize the axes of the cyclopean system (both horizontal and vertical). This normalization of angular units involved calculating the percentage of retinal projection of the endpoints for either the vertical or horizontal meridian relative to the maximum angular value obtained for each test condition. Thus, the normalized angular horizontal projection is denoted as θh1 (point 1) and θh2 (point 2). Similarly, the normalized angular vertical projection is denoted as θv1 (point 1) and θv2 (point 2). In short, the errors in responses were calculated by means of the following equations:

Normalized absolute error in the horizontal and vertical dimensions of the cyclopean system

Absolute errors in the horizontal and vertical dimensions of a point refer to the mean difference between its precise location in space and the one perceived by the observers.
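The published equation images are missing; a plausible reconstruction, assuming the error is averaged over the two endpoints as the text indicates, is:

```latex
% Plausible reconstructions of Equations A2-1 and A2-2 (absolute errors,
% assumed to be averaged over the two endpoints).
\text{Abs-Err-H} = \frac{\lvert \theta'_{h1} - \theta_{h1} \rvert
                       + \lvert \theta'_{h2} - \theta_{h2} \rvert}{2}
\qquad \text{(A2-1)}

\text{Abs-Err-V} = \frac{\lvert \theta'_{v1} - \theta_{v1} \rvert
                       + \lvert \theta'_{v2} - \theta_{v2} \rvert}{2}
\qquad \text{(A2-2)}
```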

In Equation A2-1, θ'h1 and θ'h2 are the normalized horizontal coordinates of the perceived points 1 and 2, respectively. Analogously, θh1 and θh2 are the normalized horizontal coordinates of the true physical points 1 and 2, respectively. Similarly, in Equation A2-2, θ'v1 and θ'v2 are the normalized vertical coordinates of the perceived points 1 and 2, respectively. Analogously, θv1 and θv2 are the normalized vertical coordinates of the true physical points 1 and 2, respectively.

Normalized relative error in the horizontal and vertical dimensions of the cyclopean system

The relative error in the horizontal dimension is equivalent to the difference in angular width between the two eyes (i.e., a horizontal width disparity or horizontal dif-size disparity) but defined in head-centric coordinates (cyclopean system).

The relative error in the vertical dimension is equivalent to the difference in angular height between the two eyes (i.e., a vertical height disparity or vertical dif-size disparity) but defined in head-centric coordinates.
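The published images are again missing; a plausible reconstruction (the labels A2-3 and A2-4 are assumed) is:

```latex
% Plausible reconstructions of the relative-error equations (labels
% A2-3/A2-4 assumed; the published images are missing).
\text{Rel-Err-H} = \bigl\lvert (\theta'_{h2} - \theta'_{h1})
                 - (\theta_{h2} - \theta_{h1}) \bigr\rvert
\qquad \text{(A2-3)}

\text{Rel-Err-V} = \bigl\lvert (\theta'_{v2} - \theta'_{v1})
                 - (\theta_{v2} - \theta_{v1}) \bigr\rvert
\qquad \text{(A2-4)}
```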

The notation here is the same as in Equation A2-1 and Equation A2-2.

Normalized orientation ratios projected onto the cyclopean system

Additionally, to measure the precision of the mechanism that computes orientation, we compared two oriented segments (the physical and the perceived) projected onto the retina, and we also compared their projections onto the cyclopean system from the same plane of presentation as the stimulus. Therefore, we computed both the normalized cyclopean perceived orientation (θα' in Equation 5) and the normalized cyclopean physical orientation (θα in Equation 6). Finally, we calculated the normalized cyclopean orientation ratios using Equation 7.
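The published equations are again missing; a plausible reconstruction, consistent with the precision measure θα'/θα = 1 used in the main text, is:

```latex
% Plausible reconstructions of Equations 5-7 (the published images are
% missing); primes denote perceived coordinates.
\theta_{\alpha'} = \arctan\!\left(
  \frac{\theta'_{v2} - \theta'_{v1}}{\theta'_{h2} - \theta'_{h1}}\right)
\quad \text{(5)}
\qquad
\theta_{\alpha} = \arctan\!\left(
  \frac{\theta_{v2} - \theta_{v1}}{\theta_{h2} - \theta_{h1}}\right)
\quad \text{(6)}

\text{Orientation ratio} = \frac{\theta_{\alpha'}}{\theta_{\alpha}}
\quad \text{(7)}
```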

  • Asch, S. E., & Witkin, H. A. (1948a). Studies in space orientation: perception of the upright with displaced visual fields. Journal of Experimental Psychology, 38, 325-337.
  • Asch, S. E., & Witkin, H. A., (1948b). Studies in space orientation: perception of the upright with displaced visual fields and with body tilted. Journal of Experimental Psychology, 38, 455-477.
  • Blakemore, C., Fiorentini, A., & Maffei, L. (1972). A second neural mechanism of binocular depth discrimination. Journal of Physiology, 226, 725-749.
  • Borra, T., Hooge, I. T. C., & Verstraten, F. A. J. (2007). The use of optimal object information in fronto-parallel orientation discrimination. Vision Research, 47(26), 3307-3314.
  • Burt, P., & Julesz, B. (1980). A disparity gradient limit for binocular fusion. Science, 208, 615-617.
  • Cagenello, R., & Rogers, B. J. (1990). Orientation disparity, cyclotorsion, and the perception of surface slant. Investigative Ophthalmology and Visual Science, 31(Abstracts), 97.
  • Dakin, S. C., Williams, C. B., & Hess, R. F. (1999). The interaction of first- and second-order cues to orientation. Vision Research, 39, 2867-2884.
  • Ehrenstein, W. H., Arnold-Schulz-Gahmen, B. E., & Jaschinski, W. (2005). Eye preference within the context of binocular functions. Graefe's Archive for Clinical and Experimental Ophthalmology, 243(9), 926-932.
  • Foley, J. M., Ribeiro-Filho, N. P., & Da Silva, J. A. (2004). Visual perception of extent and the geometry of visual space. Vision Research, 44, 147-156.
  • Frisby, J. P., Buckley, D., Grant, H., Gårding, J., Horsman, J. M., Hippisley-Cox, S. D., & Porrill, J. (1999). An orientation anisotropy in the effects of scaling vertical disparities. Vision Research, 39(3), 481-492.
  • Howard, I. P., & Rogers, B. J. (2002). Seeing in depth. Toronto: University of Toronto Press.
  • Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106-154.
  • Hubel, D. H., & Wiesel, T. N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195, 215-243.
  • Julesz, B. (1971). Foundations of cyclopean perception. Chicago: University of Chicago Press.
  • Kelly, J. W., Loomis, J. M., & Beall, A. C. (2004). Judgments of exocentric direction in large-scale space. Perception, 33, 443-454.
  • Koenderink, J. J. (1985). Space, form and optical deformations. In: D. Ingle, M. Jeannerod, & D. Lee (Eds.), Brain mechanisms and spatial vision (pp. 31-58). Dordrecht: Nijhoff.
  • Koenderink, J. J., & van Doorn, A. J. (1976). Geometry of binocular vision and a model for stereopsis. Biological Cybernetics, 21, 29-35.
  • Koenderink, J. J., van Doorn, A. J., & Lappin, J. S. (2003). Exocentric pointing to opposite targets. Acta Psychologica, 112, 71-87.
  • Levin, C. A., & Haber, R. N. (1993). Visual angle as a determinant of perceived interobject distance. Perception and Psychophysics, 54(2), 250-259.
  • Li, W., & Westheimer, G. (1997). Human discrimination of the implicit orientation of simple symmetrical patterns. Vision Research, 37(5), 565-572.
  • Liu, L., Stevenson, S. B., & Schor, C. M. (1994). A polar coordinate system for describing binocular disparity. Vision Research, 34, 1205-1222.
  • Loomis, J. M., & Philbeck, J. W. (1999). Is the anisotropy of perceived 3-D shape invariant across scale? Perception and Psychophysics, 61, 397-402.
  • Loomis, J. M., da Silva, J. A., Fujita, N., & Fukusima, S. S. (1992). Visual space perception and visually directed action. Journal of Experimental Psychology: Human Perception and Performance, 18, 906-921.
  • Matsushima, E. H., de Oliveira, A. P., Ribeiro-Filho, N. P., & Da Silva, J. A. (2005). Visual angle as determinant factor for relative distance perception. Psicologica, 26, 97-104.
  • Morgan, M. J., & Baldassi, S. (1997). How the human visual system encodes the orientation of a texture, and why it makes mistakes. Current Biology, 7, 999-1002.
  • Morgan, M. J., & Glennerster, A. (1991). Efficiency of locating centres of dot-clusters by human observers. Vision Research, 31, 2075-2083.
  • Ogle, K. N. (1938). Induced size effect: I. A new phenomenon in binocular space-perception associated with the relative sizes of the images of the two eyes. Archives of Ophthalmology, 20, 604-624.
  • Rogers, B. J., & Bradshaw, M. F. (1994). Is dif-frequency a stimulus for stereoscopic slant? Investigative Ophthalmology and Visual Science, 35(Abstracts), 1316.
  • Seizova-Cajic, T., & Gillam, B. (2006). Biases in judgments of separation and orientation of elements belonging to different clusters. Vision Research, 46(16), 2525-2534.
  • Tyler, C. W., & Nakayama, K. (1984). Size interactions in the perception of orientation. In: I. Kohler, & L. Spillmann (Eds.), Sensory experience, adaptation, and perception: festschrift for Ivo Kohler (pp. 529-546). Hillsdale, NJ: Lawrence Erlbaum.
  • Westheimer, G. (1984). Sensitivity for vertical retinal image differences. Nature, 307(5952), 632-634.
  • Westheimer, G. (1996). Location and line orientation as distinguishable primitives in spatial vision. Proceedings of the Royal Society of London B, 263, 503-508.