Similar Documents
20 similar documents found (search time: 15 ms)
1.
We investigated orienting of attention by social and symbolic cues presented inside/outside the locus of attention. Participants responded to laterally presented targets preceded by simultaneously presented gaze and arrow cues. Participants’ attention was allocated to either of the cues and the other cue served as a distractor. In Experiments 1–4 nonpredictive cues were employed. The validity of the attended cue and distractor were varied orthogonally. Valid cues and distractors produced additive facilitation of reaction times when compared to invalid cues and distractors. The effects of gaze and arrow distractors were similar. When the cue was 100% valid and the distractor 50% valid (Experiment 5), distractor validity had no effect on reaction times. When realistic gaze and arrow cues were employed (Experiment 6), arrow but not gaze distractors influenced the reaction times. The results suggest that social and symbolic directional information can be integrated for attention orienting. The processing of social and symbolic directional information can be modulated by top-down control, but the efficiency of the control depends on the visual saliency of the cues. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Two methods assessed the use of experimenter-given directional cues by a New World monkey species, cotton top tamarins (Saguinus oedipus). Experiment 1 used cues to elicit visual co-orienting toward distal objects. Experiment 2 used cues to generate responses in an object-choice task. Although there were strong positive correlations between monkey pairs in co-orienting, visual co-orienting with a human experimenter occurred at a low frequency to distal objects. Human hand pointing cues generated more visual co-orienting than did eye gaze to distal objects. Monkeys chose the baited cup significantly above chance with human point-and-tap cues and human look cues. Results highlight the importance of head and body orientation to induce shared attention in cotton top tamarins, both in a task that involved food getting and a task that did not. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The direction of another person's gaze is difficult to ignore when presented at the center of attention. In 6 experiments, perception of unattended gaze was investigated. Participants made directional (left-right) judgments to gazing-face or pointing-hand targets, which were accompanied by a distractor face or hand. Processing of the distractor was assessed via congruency effects on target response times. Congruency effects were found from the direction of distractor hands but not from the direction of distractor gazes (Experiment 1). This pattern persisted even when distractor sizes were increased to compensate for their peripheral presentation (Experiments 2 and 5). In contrast, congruency effects were exerted by profile heads (Experiments 3 and 4). In Experiment 6, isolated eye region distractors produced no congruency effects, even when they were presented near the target. These results suggest that, unlike other facial information, gaze direction cannot be perceived outside the focus of attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Five experiments are reported that investigate the distribution of selective attention to verbal and nonverbal components of an utterance when conflicting information exists in these channels. A Stroop-type interference paradigm is adopted in which attributes from the verbal and nonverbal dimensions are placed into conflict. Static directional (deictic) gestures and corresponding spoken and written words show symmetrical interference (Experiments 1, 2, and 3), as do directional arrows and spoken words (Experiment 4). This symmetry is maintained when the task is switched from a manual keypress to a verbal naming response (Experiment 5), suggesting the mutual influence of the 2 dimensions is independent of spatial stimulus-response compatibility. It is concluded that the results are consistent with a model of interference in which information from pointing gestures and speech is integrated prior to the response selection stage of processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The processing of gaze cues plays an important role in social interactions, and mutual gaze in particular is relevant for natural as well as video-mediated communications. Mutual gaze occurs when an observer looks at or in the direction of the eyes of another person. The authors chose the metaphor of a cone of gaze to characterize this range of gaze directions that constitutes "looking at" another person. In 4 experiments using either a real person or a virtual head, the authors investigated the influences of observer distance, head orientation, visibility of the eyes, and the presence of a 2nd head on the perceived direction and width of the gaze cone. The direction of the gaze cone was largely affected by all experimental manipulations, whereas its angular width remained comparatively stable. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Objective: Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Method: Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Results: Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. Conclusion: The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Five experiments examined children's use of eye gaze information for "mind-reading" purposes, specifically, for inferring another person's desire. When presented with static displays in the first 3 experiments, only by 4 years of age did children use another person's eye direction to infer desires, although younger children could identify the person's focus of attention. Further, 3-year-olds were capable of inferring desire from other nonverbal cues, such as pointing (Experiment 3). When eye gaze was presented dynamically with several other scaffolding cues (Experiment 4), 2- and 3-year-olds successfully used eye gaze for desire inference. Scaffolding cues were removed in Experiment 5, and 2- and 3-year-olds still performed above chance in using eye gaze. Results suggest that 2-year-olds are capable of using eye gaze alone to infer another's desire. The authors propose that the acquisition of the ability to use attentional cues to infer another's mental state may involve both an association process and a differentiation process. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The relationship between facial expression and gaze processing was investigated with the Garner selective attention paradigm. In Experiment 1, participants performed expression judgments without interference from gaze, but expression interfered with gaze judgments. Experiment 2 replicated these results across different emotions. In both experiments, expression judgments occurred faster than gaze judgments, suggesting that expression was processed before gaze could interfere. In Experiments 3 and 4, the difficulty of the emotion discrimination was increased in two different ways. In both cases, gaze interfered with emotion judgments and vice versa. Furthermore, increasing the difficulty of the emotion discrimination resulted in gaze and expression interactions. Results indicate that expression and gaze interactions are modulated by discriminability. Whereas expression generally interferes with gaze judgments, gaze direction modulates expression processing only when facial emotion is difficult to discriminate. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Factors affecting joint visual attention in 12- and 18-month-olds were investigated. In Experiment 1 infants responded to 1 of 3 parental gestures: looking, looking and pointing, or looking, pointing, and verbalizing. Target objects were either identical to or distinctive from distractor objects. Targets were in front of or behind the infant to test G. E. Butterworth's (1991b) hypothesis that 12-month-olds do not follow gaze to objects behind them. Pointing elicited more episodes of joint visual attention than looking alone. Distinctive targets elicited more episodes of joint visual attention than identical targets. Although infants most reliably followed gestures to targets in front of them, even 12-month-olds followed gestures to targets behind them. In Experiment 2 parents were rotated so that the magnitude of their head turns to fixate front and back targets was equivalent. Infants looked more at front than at back targets, but there was also an effect of magnitude of head turn. Infants' relative neglect of back targets is partly due to the "size" of the adult's gesture. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The authors tested 2 bottlenosed dolphins (Tursiops truncatus) for their understanding of human-directed gazing or pointing in a 2-alternative object-choice task. A dolphin watched a human informant either gazing at or pointing toward 1 of 2 laterally placed objects and was required to perform a previously indicated action to that object. Both static and dynamic gaze, as well as static and dynamic direct points and cross-body points, yielded errorless or nearly errorless performance. Gaze with the informant's torso obscured (only the head was shown) produced no performance decrement, but gaze with eyes only resulted in chance performance. The results revealed spontaneous understanding of human gaze accomplished through head orientation, with or without the human informant's eyes obscured, and demonstrated that gaze-directed cues were as effective as point-directed cues in the object-choice task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Dogs' (Canis familiaris) and cats' (Felis catus) interspecific communicative behavior toward humans was investigated. In Experiment 1, the ability of dogs and cats to use human pointing gestures in an object-choice task was compared using 4 types of pointing cues differing in distance between the signaled object and the end of the fingertip and in visibility duration of the given signal. Using these gestures, both dogs and cats were able to find the hidden food; there was no significant difference in their performance. In Experiment 2, the hidden food was made inaccessible to the subjects to determine whether they could indicate the place of the hidden food to a naive owner. Cats lacked some components of attention-getting behavior compared with dogs. The results suggest that individual familiarization with pointing gestures ensures high-level performance in the presence of such gestures; however, species-specific differences could cause differences in signaling toward the human. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
How information is exchanged between the cognitive mechanisms responsible for gaze perception and social attention is unclear. These systems could be independent; the “gaze cueing” effect could emerge from the activation of a general-purpose attentional mechanism that is ignorant of the social nature of the gaze cue. Alternatively, orienting to social gaze direction might be directly determined by the operation of cognitive mechanisms specifically dedicated to gaze perception. This second notion is the dominant assumption in the literature, but there is little direct support for this account. Here, we systematically manipulated observers' perception of gaze direction by implementing a gaze adaptation paradigm. Gaze cueing was reduced only in conditions where perception of specific averted gaze stimuli was impaired (Experiment 1). Adaptation to a pointing stimulus failed to impact gaze cueing (Experiment 2). Overall, these data suggest a direct link between the specific operation of gaze perception mechanisms and the consequential orienting of attention. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

13.
Gaze direction influences younger adults' perception of emotional expressions, with direct gaze enhancing the perception of anger and joy, while averted gaze enhances the perception of fear. Age-related declines in emotion recognition and eye-gaze processing have been reported, indicating that there may be age-related changes in the ability to integrate these facial cues. As there is evidence of a positivity bias with age, age-related difficulties integrating these cues may be greatest for negative emotions. The present research investigated age differences in the extent to which gaze direction influenced explicit perception (e.g., anger, fear and joy; Study 1) and social judgments (e.g., of approachability; Study 2) of emotion faces. Gaze direction did not influence the perception of fear in either age group. In both studies, age differences were found in the extent to which gaze direction influenced judgments of angry and joyful faces, with older adults showing less integration of gaze and emotion cues than younger adults. Age differences were greatest when interpreting angry expressions. Implications of these findings for older adults' social functioning are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Two types of neurons in the rat brain have been proposed to participate in spatial learning and navigation: place cells, which fire selectively in specific locations of an environment and which may constitute key elements of cognitive maps, and head direction cells, which fire selectively when the rat's head is pointed in a specific direction and which may serve as an internal compass to orient the cognitive map. The spatially and directionally selective properties of these cells arise from a complex interaction between input from external landmarks and from idiothetic cues; however, the exact nature of this interaction is poorly understood. To address this issue, directional information from visual landmarks was placed in direct conflict with directional information from idiothetic cues. When the mismatch between the two sources of information was small (45 degrees), the visual landmarks had robust control over the firing properties of place cells; when the mismatch was larger, however, the firing fields of the place cells were altered radically, and the hippocampus formed a new representation of the environment. Similarly, the visual cues had control over the firing properties of head direction cells when the mismatch was small (45 degrees), but the idiothetic input usually predominated over the visual landmarks when the mismatch was larger. In the conditions in which the visual landmarks did predominate after a large mismatch, there was always a delay before the visual cues exerted their control over head direction cells. These results support recent models proposing that prewired intrinsic connections enable idiothetic cues to serve as the primary drive on place cells and head direction cells, whereas modifiable extrinsic connections mediate a learned, secondary influence of visual landmarks.

15.
There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension. However, these studies have not considered the role of head orientation, which is known to play a key role in the processing of gaze direction. In a series of experiments, the relationship between the processing of expression and gaze was tested both with head orientation held constant and with head orientation varied between trials, making it a relevant source of information for computing gaze direction. Results show that when head orientation varied between trials, the processing of facial expression was not subject to interference from gaze direction; conversely, gaze direction could be processed without interference from irrelevant variations in expression. These findings suggest that the processing of gaze and the processing of expression are not functionally interconnected as was previously assumed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In Experiment 3, the actor provided contradictory verbal and eye gaze clues about the location of the toy. Four- and 5-year-olds correctly used the actor's gaze cues to locate the toy, whereas 3-year-olds failed to do so. Results suggest that by 4 years of age, children begin to understand that eye gaze cues displayed by a deceiver can be informative about the true state of affairs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Faces provide a complex source of information via invariant (e.g., race, sex, and age) and variant (e.g., emotional expressions) cues. At present, it is not clear whether these different cues are processed separately or whether they interact. Using the Garner Paradigm, Experiment 1 confirmed that race, sex, and age cues affected the categorization of faces according to emotional expression whereas emotional expression had no effect on the categorization of faces by sex, age, or race. Experiment 2 used inverted faces and replicated this pattern of asymmetrical interference for race and age cues, but not for sex cues for which no interference on emotional expression categorization was observed. Experiment 3 confirmed this finding with a more stringently matched set of facial stimuli. Overall, this study shows that invariant cues interfere with the processing of emotional expressions. It indicates that the processing of invariant cues, but not of emotional expressions, is obligatory and that it precedes that of emotional expressions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

18.
Toddlers' ability to use cues such as eye gaze and gestures to infer the meaning of novel action words was examined. In Experiment 1, 21- and 27-month-olds were taught labels for pairs of videotaped actions that were either similar or dissimilar in appearance. Similar actions differed mainly in the presence of behavioral cues related to the agents' intentions (e.g., extended arms). Only the older children were able to learn the labels for the similar actions. In Experiment 2, 3 new pairs of labels (2 similar, 1 dissimilar) were taught to children in the same age range. Eye gaze and gestures were the main features distinguishing the similar events. The same developmental effect was observed, with only the older children showing learning of both types of verbs and the younger children being impeded by the appearance of the actions. The results show that by the middle of the 2nd year, children begin to consider intentions-in-action when acquiring the meaning of novel action verbs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Two experiments are reported in which the role of attribute exposure duration in naming performance was examined by tracking eye movements. Participants were presented with color-word Stroop stimuli and left- or right-pointing arrows on different sides of a computer screen. They named the color attribute and shifted their gaze to the arrow to manually indicate its direction. The color attribute (Experiment 1) or the complete color-word stimulus (Experiment 2) was removed from the screen 100 ms after stimulus onset. Compared with presentation until trial offset, removing the color attribute diminished Stroop interference, as well as facilitation effects in color naming latencies, whereas removing the complete stimulus diminished interference only. Attribute and stimulus removal reduced the latency of gaze shifting, which suggests decreased rather than increased attentional demand. These results provide evidence that limiting exposure duration contributes to attribute naming performance by diminishing the extent to which irrelevant attributes are processed, which reduces attentional demand. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

20.
Anticipation of others' actions is of paramount importance in social interactions. Cues such as gaze direction and facial expressions can be informative, but can also produce ambiguity with respect to others' intentions. We investigated the combined effect of an actor's gaze and expression on judgments made by observers about the end-point of the actor's head rotation toward the observer. Expressions of approach gave rise to an unambiguous intention to move toward the observer, while expressions of avoidance gave rise to an ambiguous behavioral intention (as the expression and motion cues were in conflict). In the ambiguous condition, observers overestimated how far the actor's head had rotated when the actor's gaze was directed ahead of head rotation (compared to congruent or lagging behind). In the unambiguous condition the estimations were not influenced by the gaze manipulation. These results show that social cue integration does not follow simple additive rules, and suggest that the involuntary allocation of attention to another's gaze depends on the perceived ambiguity of the agent's behavioral intentions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) 京ICP备09084417号