Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
The ability of 4 olive baboons (Papio anubis) to use human gaze cues during a competitive food task was investigated. Three baboons used head orientation as a cue, and 1 individual also used eye direction alone. As the baboons did not receive prior training with gestural cues, their performance suggests that the competitive paradigm may be more appropriate for testing nonhuman primates than the standard object-choice paradigm. However, the baboons were insensitive to whether the experimenter could actually perceive the food item, and therefore the use of visual orientation cues may not be indicative of visual perspective-taking abilities. Performance was disrupted by the introduction of a screen and objects to conceal food items and by the absence of movement in the cues presented. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
The authors tested whether the understanding by dolphins (Tursiops truncatus) of human pointing and head-gazing cues extends to knowing the identity of an indicated object as well as its location. In Experiment 1, the dolphins Phoenix and Akeakamai processed the identity of a cued object (of 2 that were present), as shown by their success in selecting a matching object from among 2 alternatives remotely located. Phoenix was errorless on first trials in this task. In Experiment 2, Phoenix reliably responded to a cued object in alternate ways, either by matching it or by acting directly on it, with each type of response signaled by a distinct gestural command given after the indicative cue. She never confused matching and acting. In Experiment 3, Akeakamai was able to process the geometry of pointing cues (but not head-gazing cues), as revealed by her errorless responses to either a proximal or distal object simultaneously present, when each object was indicated only by the angle at which the informant pointed. The overall results establish that these dolphins could identify, through indicative cues alone, what a human is attending to as well as where. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Several experiments have been performed to examine whether nonhuman primates are able to make use of experimenter-given manual and facial (visual) cues to direct their attention to a baited object. In contrast to the performance of prosimians and monkeys, great apes have repeatedly shown task efficiency in such experiments. However, many of the great ape subjects used have been "enculturated" individuals. In the present study, 3 nonenculturated orangutans (Pongo pygmaeus) were tested for their ability to use experimenter-given pointing, gazing, and glancing cues in an object-choice task. All subjects readily made use of the pointing gesture. However, when subjects were left with only gazing or glancing cues, their performance deteriorated markedly, and they were not able to complete the task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
In a series of experiments, chimpanzees (Pan troglodytes), an orangutan (Pongo pygmaeus), and human infants (Homo sapiens) were investigated as to whether they used experimenter-given cues when responding to object-choice tasks. Five conditions were used in different phases: the experimenter tapping on the correct object, gazing plus pointing, gazing closely, gazing alone, and glancing without head orientation. The 3 subject species were able to use all of the experimenter-given cues, in contrast to previous reports of limited use of such cues by monkeys. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Two methods assessed the use of experimenter-given directional cues by a New World monkey species, cotton-top tamarins (Saguinus oedipus). Experiment 1 used cues to elicit visual co-orienting toward distal objects. Experiment 2 used cues to generate responses in an object-choice task. Although pairs of monkeys showed strong positive correlations in co-orienting with each other, visual co-orienting with a human experimenter toward distal objects occurred at a low frequency. Human hand-pointing cues generated more visual co-orienting toward distal objects than did eye gaze. Significant accurate choices of baited cups occurred with human point-and-tap cues and human look cues. Results highlight the importance of head and body orientation for inducing shared attention in cotton-top tamarins, both in a task that involved food getting and in a task that did not. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
The ability of 3 capuchin monkeys (Cebus apella) to use experimenter-given cues to solve an object-choice task was assessed. The monkeys learned to use explicit gestural and postural cues and then progressed to using eye-gaze-only cues to solve the task, that is, to choose the baited 1 of 2 objects and thus obtain a food reward. Increasing cue-stimulus distance and introducing movement of the eyes impeded the establishment of effective eye-gaze reading. One monkey showed positive but imperfect transfer of use of eye gaze when a novel experimenter presented the cue. When head and eye orientation cues were presented simultaneously and in conflict, the monkeys showed greater responsiveness to head orientation cues. The results show that capuchin monkeys can learn to use eye gaze as a discriminative cue, but there was no evidence for any underlying awareness of eye gaze as a cue to direction of attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
The processing of gaze cues plays an important role in social interactions, and mutual gaze in particular is relevant for natural as well as video-mediated communications. Mutual gaze occurs when an observer looks at or in the direction of the eyes of another person. The authors chose the metaphor of a cone of gaze to characterize this range of gaze directions that constitutes "looking at" another person. In 4 experiments using either a real person or a virtual head, the authors investigated the influences of observer distance, head orientation, visibility of the eyes, and the presence of a 2nd head on the perceived direction and width of the gaze cone. The direction of the gaze cone was largely affected by all experimental manipulations, whereas its angular width remained comparatively stable. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The object-choice task tests animals’ ability to use human-given cues to find a hidden reward located in 1 of 2 (or more) containers. Great apes are generally unskillful in this task whereas other species including dogs (Canis familiaris) and goats (Capra hircus) can use human-given cues to locate the reward. However, great apes are typically positioned proximal to the containers when receiving the experimenter’s cue whereas other species are invariably positioned distally. The authors investigated how the position of the subject, the human giving the cue and the containers (and the distance among them) affected the performance of 19 captive great apes. Compared to the proximal condition, the distal condition involved larger distances and, critically, it reduced the potential ambiguity of the cues as well as the strong influence that the sight of the containers may have had when subjects received the cue. Subjects were far more successful in the distal compared to the proximal condition. The authors suggest several possibilities to account for this difference and discuss their findings in relation to previous and future object-choice research. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
Dogs' (Canis familiaris) and cats' (Felis catus) interspecific communicative behavior toward humans was investigated. In Experiment 1, the ability of dogs and cats to use human pointing gestures in an object-choice task was compared using 4 types of pointing cues differing in distance between the signaled object and the end of the fingertip and in visibility duration of the given signal. Using these gestures, both dogs and cats were able to find the hidden food; there was no significant difference in their performance. In Experiment 2, the hidden food was made inaccessible to the subjects to determine whether they could indicate the place of the hidden food to a naive owner. Cats lacked some components of attention-getting behavior compared with dogs. The results suggest that individual familiarization with pointing gestures ensures high-level performance in the presence of such gestures; however, species-specific differences could cause differences in signaling toward the human. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
The authors assessed the ability of 6 captive dolphins (Tursiops truncatus) to comprehend, without explicit training, 3 human communicative signs (pointing, directed gaze, and replica). Pointing consisted of indicating the target item with the index finger and a fully extended arm. Directed gaze consisted of orienting the head and eyes toward the target item while the rest of the body remained stationary. The replica signal consisted of holding up an exact duplicate of the target item. On the initial series of 12 trials for each condition, 3 dolphins performed above chance on pointing, 2 on gaze, and none on replica. With additional trials, above-chance performance increased to 4 dolphins for pointing, 6 for gaze, and 2 for replica. The replica sign appeared to be the most taxing, with only 2 dolphins achieving results significantly above chance. Taken together, these results indicate that dolphins are able to interpret untrained communicative signs successfully. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Objective: Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Method: Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Results: Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. Conclusion: The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
On the basis of a study by D. J. Povinelli, D. T. Bierschwale, and C. G. Cech (1999), the performance of family dogs (Canis familiaris) was examined in a 2-way food choice task in which 4 types of directional cues were given by the experimenter: pointing and gazing, head-nodding ("at target"), head turning above the correct container ("above target"), and glancing only ("eyes only"). The results showed that the performance of the dogs resembled more closely that of the children in D. J. Povinelli et al.'s study, in contrast to the chimpanzees' performance in the same study. It seems that dogs, like children, interpret the test situation as being a form of communication. The hypothesis is that this similarity is attributable to the social experience and acquired social routines in dogs because they spend more time in close contact with humans than apes do, and as a result dogs are probably more experienced in the recognition of human gestures. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Four experiments explored the processing of pointing gestures comprising hand and combined head and gaze cues to direction. The cross-modal interference effect exerted by pointing hand gestures on the processing of spoken directional words, first noted by S. R. H. Langton, C. O'Malley, and V. Bruce (see record 1996-06577-002), was found to be moderated by the orientation of the gesturer's head-gaze (Experiment 1). Hand and head cues also produced bidirectional interference effects in a within-modalities version of the task (Experiment 2). These findings suggest that both head-gaze and hand cues to direction are processed automatically and in parallel up to a stage in processing where a directional decision is computed. In support of this model, head-gaze cues produced no influence on nondirectional decisions to social emblematic gestures in Experiment 3 but exerted significant interference effects on directional responses to arrows in Experiment 4. It is suggested that the automatic analysis of head, gaze, and pointing gestures occurs because these directional signals are processed as cues to the direction of another individual's social attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Ten domestic dogs (Canis familiaris) of different breeds and ages were exposed to 2 different social cues indicating the location of hidden food, each provided by both a human informant and a conspecific informant (for a total of 4 different social cues). For the local enhancement cue, the informant approached the location where food was hidden and then stayed beside it. For the gaze and point cue, the informant stood equidistant between 2 hiding locations and bodily oriented and gazed toward the 1 in which food was hidden (the human informant also pointed). Eight of the 10 subjects, including the one 6-month-old juvenile, were above chance with 2 or more cues. Results are discussed in terms of the phylogenetic and ontogenetic processes by means of which dogs come to use social cues to locate food. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Captive lowland gorillas (Gorilla gorilla) were tested for their ability to use experimenter-given manual and facial cues in an object-choice task. Performance levels were high when the experimenter tapped on or pointed at an object that contained a reward. Performance remained good when the experimenter withheld manual gestures and instead gazed with eyes and head oriented toward the correct object. In contrast, when only the experimenter's eye orientation served as the cue, the gorillas did not appropriately complete the task. Repeated attempts to establish prolonged mutual eye contact with 1 gorilla failed. The gorillas' failure to use eye signals as a cue may be due to an aversion to direct eye contact and contrasts with findings in other great apes. The results may indicate a difference among great ape species in detection of intentionality, but an alternative interpretation is that performance in such tests is influenced by factors such as rearing experience and temperament. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Povinelli, Bierschwale, and Cech (1999) reported that when tested on a visual attention task, the behavior of juvenile chimpanzees did not support a high-level understanding of visual attention. This study replicates their research using adult humans and aims to investigate the validity of their experimental design. Participants were trained to respond to pointing cues given by an experimenter, and then tested on their ability to locate hidden objects from visual cues. Povinelli et al.'s assertion that the generalization of pointing to gaze is indicative of a high-level framework was not supported by our findings: Training improved performance only on initial probe trials when the experimenter's gaze was not directed at the baited cup. Furthermore, participants performed above chance on such trials, the same result exhibited by chimpanzees and used as evidence by Povinelli et al. to support a low-level framework. These findings, together with the high performance of participants in an incongruent condition, in which the experimenter pointed to or gazed at an unbaited container, challenge the validity of their experimental design. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
How the processing of emotional expression is influenced by perceived gaze remains a debated issue. Discrepancies between previous results may stem from differences in the nature of stimuli and task characteristics. Here we used a highly controlled set of computer-generated animated faces combining dynamic emotional expressions with varying intensity, and gaze shifts either directed at or averted from the observer. We predicted that perceived self-relevance of fearful faces would be higher with averted gaze—signaling a nearby danger; whereas conversely, direct gaze would be more relevant for angry faces—signaling aggressiveness. This interaction pattern was observed behaviorally for emotion intensity ratings, and neurally for functional magnetic resonance imaging activation in amygdala, as well as fusiform and medial prefrontal cortices, but only for mild- and not high-intensity expressions. These results support an involvement of human amygdala in the appraisal of self-relevance and reveal a crucial role of expression intensity in emotion and gaze interactions. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Empirical evidence shows an effect of gaze direction on cueing spatial attention, regardless of the emotional expression shown by a face, whereas a combined effect of gaze direction and facial expression has been observed on individuals' evaluative judgments. In 2 experiments, the authors investigated whether gaze direction and facial expression affect spatial attention depending upon the presence of an evaluative goal. Disgusted, fearful, happy, or neutral faces gazing left or right were followed by positive or negative target words presented either at the spatial location looked at by the face or at the opposite spatial location. Participants responded to target words based on affective valence (i.e., positive/negative) in Experiment 1 and on letter case (lowercase/uppercase) in Experiment 2. Results showed that participants responded much faster to targets presented at the spatial location looked at by disgusted or fearful faces but only in Experiment 1, when an evaluative task was used. The present findings clearly show that negative facial expressions enhance the attentional shifts due to eye-gaze direction, provided that there was an explicit evaluative goal present. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
Three experiments examined 3- to 5-year-olds' use of eye gaze cues to infer truth in a deceptive situation. Children watched a video of an actor who hid a toy in 1 of 3 cups. In Experiments 1 and 2, the actor claimed ignorance about the toy's location but looked toward 1 of the cups, without (Experiment 1) and with (Experiment 2) head movement. In Experiment 3, the actor provided contradictory verbal and eye gaze clues about the location of the toy. Four- and 5-year-olds correctly used the actor's gaze cues to locate the toy, whereas 3-year-olds failed to do so. Results suggest that by 4 years of age, children begin to understand that eye gaze cues displayed by a deceiver can be informative about the true state of affairs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
In 6 experiments, the authors investigated whether attention orienting by gaze direction is modulated by the emotional expression (neutral, happy, angry, or fearful) on the face. The results showed a clear spatial cuing effect by gaze direction but no effect by facial expression. In addition, it was shown that the cuing effect was stronger with schematic faces than with real faces, that gaze cuing could be achieved at very short stimulus onset asynchronies (14 ms), and that there was no evidence for a difference in the strength of cuing triggered by static gaze cues and by cues involving apparent motion of the pupils. In sum, the results suggest that in normal, healthy adults, eye direction processing for attention shifts is independent of facial expression analysis. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
