Similar Documents
20 similar documents found (search time: 15 ms)
1.
The authors tested 2 bottlenosed dolphins (Tursiops truncatus) for their understanding of human-directed gazing or pointing in a 2-alternative object-choice task. A dolphin watched a human informant either gazing at or pointing toward 1 of 2 laterally placed objects and was required to perform a previously indicated action to that object. Both static and dynamic gaze, as well as static and dynamic direct points and cross-body points, yielded errorless or nearly errorless performance. Gaze with the informant's torso obscured (only the head was shown) produced no performance decrement, but gaze with eyes only resulted in chance performance. The results revealed spontaneous understanding of human gaze accomplished through head orientation, with or without the human informant's eyes obscured, and demonstrated that gaze-directed cues were as effective as point-directed cues in the object-choice task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Dogs' (Canis familiaris) and cats' (Felis catus) interspecific communicative behavior toward humans was investigated. In Experiment 1, the ability of dogs and cats to use human pointing gestures in an object-choice task was compared using 4 types of pointing cues differing in distance between the signaled object and the end of the fingertip and in visibility duration of the given signal. Using these gestures, both dogs and cats were able to find the hidden food; there was no significant difference in their performance. In Experiment 2, the hidden food was made inaccessible to the subjects to determine whether they could indicate the place of the hidden food to a naive owner. Cats lacked some components of attention-getting behavior compared with dogs. The results suggest that individual familiarization with pointing gestures ensures high-level performance in the presence of such gestures; however, species-specific differences could cause differences in signaling toward the human. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
The authors tested a dolphin's (Tursiops truncatus) understanding of human manual pointing gestures to 3 distal objects located to the left of, to the right of, or behind the dolphin. The human referred to an object through a direct point (Pd), a cross-body point (Px), or a familiar symbolic gesture (S). In Experiment 1, the dolphin responded correctly to 80% of Pds toward laterally placed objects but to only 40% of Pds to the object behind. Responding to objects behind improved to 88% in Experiment 2 after exaggerated pointing was briefly instituted. Spontaneous comprehension of Pxs also was demonstrated. In Experiment 3, the human produced a sequence of 2 Pds, 2 Pxs, 2 Ss, or all 2-way combinations of these 3 to direct the dolphin to take the object referenced second to the object referenced first. Accuracy ranged from 68% to 77% correct (chance = 17%). These results established that the dolphin understood the referential character of the human manual pointing gesture. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
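The 17% chance level in Experiment 3 follows from simple counting: with 3 objects, the dolphin must pick an ordered pair (take object B to object A), and there are 3 × 2 = 6 such pairs, so random responding succeeds 1/6 ≈ 17% of the time. A minimal check:

```python
from itertools import permutations

# The dolphin must take the second-referenced object to the first-referenced
# one, i.e. select an ordered pair of two distinct objects out of three.
objects = ["A", "B", "C"]
ordered_pairs = list(permutations(objects, 2))
chance = 1 / len(ordered_pairs)
print(len(ordered_pairs), round(chance, 2))  # 6 ordered pairs -> chance = 0.17
```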

4.
Four experiments explored the processing of pointing gestures comprising hand and combined head and gaze cues to direction. The cross-modal interference effect exerted by pointing hand gestures on the processing of spoken directional words, first noted by S. R. H. Langton, C. O'Malley, and V. Bruce (see record 1996-06577-002), was found to be moderated by the orientation of the gesturer's head-gaze (Experiment 1). Hand and head cues also produced bidirectional interference effects in a within-modalities version of the task (Experiment 2). These findings suggest that both head-gaze and hand cues to direction are processed automatically and in parallel up to a stage in processing where a directional decision is computed. In support of this model, head-gaze cues produced no influence on nondirectional decisions to social emblematic gestures in Experiment 3 but exerted significant interference effects on directional responses to arrows in Experiment 4. It is suggested that the automatic analysis of head, gaze, and pointing gestures occurs because these directional signals are processed as cues to the direction of another individual's social attention. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
The authors assessed the ability of 6 captive dolphins (Tursiops truncatus) to comprehend without explicit training 3 human communicative signs (pointing, directed gaze, and replica). Pointing consisted of indicating the target item with the index finger and a fully extended arm. Directed gaze consisted of orienting the head and eyes toward the target item while the rest of the body remained stationary. The replica signal consisted of holding up an exact duplicate of the target item. On the initial series of 12 trials for each condition, 3 dolphins performed above chance on pointing, 2 on gaze, and none on replica. With additional trials, above-chance performance increased to 4 dolphins for pointing, 6 for gazing, and 2 for replica, making the replica sign the most taxing of the three. Taken together, these results indicate that dolphins are able to interpret untrained communicative signs successfully. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
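The abstract does not state the criterion used, but an exact binomial tail shows how few trials are needed to call 12-trial performance "above chance." Assuming a two-alternative choice (chance = .5) and α = .05 — both assumptions, not details from the study — the threshold can be computed directly:

```python
from math import comb

def min_correct(n_trials, p_chance=0.5, alpha=0.05):
    """Smallest k such that P(X >= k) < alpha under a binomial chance model.
    p_chance = 0.5 and alpha = 0.05 are assumed values for illustration."""
    for k in range(n_trials + 1):
        tail = sum(comb(n_trials, j) * p_chance**j * (1 - p_chance)**(n_trials - j)
                   for j in range(k, n_trials + 1))
        if tail < alpha:
            return k
    return None

print(min_correct(12))  # 10 -> at least 10/12 correct to beat chance at p < .05
```

Under these assumptions, a dolphin would need 10 or more correct responses in the initial 12-trial series to count as above chance.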

6.
Experiment 1 tested a dolphin (Tursiops truncatus) for cross-modal recognition of 25 unique pairings of 8 familiar, complexly shaped objects, using the senses of echolocation and vision. Cross-modal recognition was errorless or nearly so for 24 of the 25 pairings under both visual to echoic matching (V-E) and echoic to visual matching (E-V). First-trial recognition occurred for 20 pairings under V-E and for 24 under E-V. Echoic decision time under V-E averaged only 1.88 s. Experiment 2 tested 4 new pairs of objects for 24 trials of V-E and 24 trials of E-V without any prior exposure of these objects. Two pairs yielded performance significantly above chance in both V-E and E-V. Also, the dolphin matched correctly on 7 of 8 1st trials with these pairs. The results support a capacity for direct echoic perception of object shape by this species and demonstrate that prior object exposure is not required for spontaneous cross-modal recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
Two methods assessed the use of experimenter-given directional cues by a New World monkey species, cotton-top tamarins (Saguinus oedipus). Experiment 1 used cues to elicit visual co-orienting toward distal objects. Experiment 2 used cues to generate responses in an object-choice task. Although co-orienting within monkey pairs was strongly positively correlated, visual co-orienting with a human experimenter occurred at a low frequency to distal objects. Human hand pointing cues generated more visual co-orienting than did eye gaze to distal objects. Significant accurate choices of baited cups occurred with human point and tap cues and human look cues. Results highlight the importance of head and body orientation to induce shared attention in cotton-top tamarins, both in a task that involved food getting and a task that did not. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The authors examined the ability of domestic dogs to use human body cues (gestures) and equivalent-sized nonhuman cues to find hidden food in an object-choice paradigm. In Experiment 1 the authors addressed the importance of the human element of the cue, and the effects of size, topography, and familiarity on dogs' success in using cues. Experiment 2 further explored the role of the human as cue-giver, and the impact of a change in the experimenter's attentional state during cue presentation. This included a systematic test of the role inanimate tokens play as cues apart from human placement. Our results indicate that dogs are more sensitive to human cues than equivalent nonhuman cues, and that the size of the cue is a critical element in determining dogs' success in following it. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. Spatially nonpredictive cues preceded search arrays that included a color-defined target. For target-color singleton cues, behavioral spatial cueing effects were accompanied by cue-induced N2pc components, indicative of attentional capture. These effects were only minimally attenuated for nonsingleton target-color cues, underlining the dominance of top-down task set over salience in attentional capture. Nontarget-color singleton cues triggered no N2pc, but instead an anterior N2 component indicative of top-down inhibition. In Experiment 2, inverted behavioral cueing effects of these cues were accompanied by a delayed N2pc to targets at cued locations, suggesting that perceptually salient but task-irrelevant visual events trigger location-specific inhibition mechanisms that can delay subsequent target selection. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
In a series of experiments, chimpanzees (Pan troglodytes), an orangutan (Pongo pygmaeus), and human infants (Homo sapiens) were investigated as to whether they used experimenter-given cues when responding to object-choice tasks. Five conditions were used in different phases: the experimenter tapping on the correct object, gazing plus pointing, gazing closely, gazing alone, and glancing without head orientation. The 3 subject species were able to use all of the experimenter-given cues, in contrast to previous reports of limited use of such cues by monkeys. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
We report 3 studies of the referential pointing of 2 orangutans (Pongo pygmaeus). Chantek was raised in an enculturated environment; Puti, raised in a nursery, had a more typical captive life. In Experiment 1, flexibility of pointing behavior was investigated by requiring subjects to point in novel circumstances (for an out-of-sight tool, not food). In Experiment 2, we investigated the orangutans' comprehension of the significance of a human point in helping them to locate food. In Experiment 3, we investigated whether these pointing subjects comprehended that a human recipient must be looking for the point to achieve its attention-directing goal. In all experiments the enculturated orangutan showed better understanding of pointing than the captive orangutan. This finding is consistent with recent studies that have found differences in the cognitive and social-cognitive abilities of apes that have had different types of experience with humans. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Three experiments examined the ability of birds to discriminate between the actions of walking forwards and backwards as demonstrated by video clips of a human walking a dog. Experiment 1 revealed that budgerigars (Melopsittacus undulatus) could discriminate between these actions when the demonstrators moved consistently from left to right. Test trials then revealed that the discrimination transferred, without additional training, to clips of the demonstrators moving from right to left. Experiment 2 replicated the findings from Experiment 1 except that the demonstrators walked as if on a treadmill in the center of the display screen. The results from the first 2 experiments were replicated with pigeons in Experiment 3. The results cannot be explained if it is assumed that animals rely on static cues, such as those derived from individual postures, in order to discriminate between the actions of another animal. Instead, this type of discrimination appears to be controlled by dynamic cues derived from changes in the posture of the demonstrators. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
The relative role of associative processes and the use of explicit cues about object location in search behavior in dogs (Canis familiaris) was assessed by using a spatial binary discrimination reversal paradigm in which reversal conditions featured: (1) a previously rewarded location and a novel location, (2) a previously nonrewarded location and a novel location, or (3) a previously rewarded location and a previously nonrewarded location. Rule-mediated learning predicts similar performance across these reversal conditions, whereas associative learning predicts the worst performance in Condition 3. Evidence for an associative control of search emerged when no explicit cues about food location were provided (Experiment 1) but also when dogs witnessed the hiding of food in the reversal trials (Experiment 2) and when they did so in both the prereversal and the reversal trials (Experiment 3). Nevertheless, dogs performed better in the prereversal phase of Experiment 3, indicating that their search could be informed by knowledge of the food location. Experiment 4 confirmed the results of Experiments 1 and 2 under a different arrangement of search locations. We conclude that knowledge about object location guides search behavior in dogs but cannot override associative processes. (PsycINFO Database Record (c) 2011 APA, all rights reserved)
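The associative prediction can be illustrated with a simple delta-rule carryover sketch (the learning rate, trial count, and the conditioned-inhibition outcome of −1 for nonreward are all hypothetical choices for illustration, not the authors' model): the previously rewarded location carries a high value into the reversal and the previously nonrewarded location a low one, so pairing them (Condition 3) biases choice most strongly against the newly baited option.

```python
ALPHA = 0.3  # assumed learning rate (hypothetical)
N = 20       # assumed number of prereversal trials (hypothetical)

def train(v, outcome, n):
    """Delta-rule update: value moves a fraction ALPHA toward the outcome."""
    for _ in range(n):
        v += ALPHA * (outcome - v)
    return v

# Outcome +1 for reward, -1 for explicit nonreward (a conditioned-inhibition
# assumption); a novel location starts neutral at 0.
v = {"prev_rewarded": train(0.0, 1.0, N),
     "prev_nonrewarded": train(0.0, -1.0, N),
     "novel": 0.0}

# Assuming the food is hidden at the option NOT favored by carried-over
# values, the bias against the baited option is the value gap within the pair.
conditions = {
    "1: prev_rewarded vs novel (novel baited)": v["prev_rewarded"] - v["novel"],
    "2: prev_nonrewarded vs novel (prev_nonrewarded baited)": v["novel"] - v["prev_nonrewarded"],
    "3: prev_rewarded vs prev_nonrewarded (prev_nonrewarded baited)": v["prev_rewarded"] - v["prev_nonrewarded"],
}
for name, gap in conditions.items():
    print(f"{name}: bias against correct = {gap:.2f}")
# Condition 3 shows the largest bias -> worst predicted performance.
```

Under this sketch the value gap in Condition 3 is roughly double that of the other conditions, matching the qualitative associative prediction the study tests.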

14.
In 3 experiments, the authors examined part-set cuing effects in younger and older adults. Participants heard lists of category exemplars and later recalled them. Recall was uncued or cued with a subset of studied items. In Experiment 1, participants were cued with some of the category names, and they remembered fewer never-cued categories than a free-recall condition. In Experiment 2, a similar effect was observed for category exemplar cues. There was also an age difference: By some measures, a small number of cues impaired older adults more than younger adults. Experiment 3 replicated this result and found that older adults were disproportionately slow in the presence of cues. Across experiments, older adults showed robust part-set cuing effects, and sometimes, they were disproportionately impaired by cues. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Recent studies have shown that the face and voice of an unfamiliar person can be matched for identity. Here the authors compare the relative effects of changing sentence content (what is said) and sentence manner (how it is said) on matching identity between faces and voices. A change between speaking a sentence as a statement and as a question disrupted matching performance, whereas changing the sentence itself did not. This was the case when the faces and voices were from the same race as participants and speaking a familiar language (English; Experiment 1) or from another race and speaking an unfamiliar language (Japanese; Experiment 2). Altering manner between conversational and clear speech (Experiment 3) or between conversational and casual speech (Experiment 4) was also disruptive. However, artificially slowing (Experiment 5) or speeding (Experiment 6) speech did not affect cross-modal matching performance. The results show that bimodal cues to identity are closely linked to manner but that content (what is said) and absolute tempo are not critical. Instead, prosodic variations in rhythmic structure and/or expressiveness may provide a bimodal, dynamic identity signature. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Two rhesus monkeys selected the larger of two sequentially presented sets of items on a computer monitor. In Experiment 1, performance was related to the ratio of set sizes, and the monkeys discriminated between sets with up to 10 items. Performance was not disrupted when 1 set had fewer than 4 items and 1 set had more than 4 items, a critical trial type for differentiating object file and analog models of numerical representation. Experiment 2 controlled the interitem rate of presentation. Experiment 3 included some trials on which number and amount (visual surface area) offered conflicting cues. Experiment 4 varied the total duration of set presentation and the duration of item visibility. In all of the experiments, performance remained high, although total set presentation duration also acted as a partial cue for the monkeys. Overall, the data indicated that rhesus monkeys estimate the approximate number of items in sequentially presented sets and that they are not relying solely on nonnumerical cues such as rate, duration, or cumulative amount. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
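Ratio-dependent performance is the signature of analog-magnitude representation, and it is commonly modeled with a Weber-fraction account in which discriminability depends on the ratio of the two set sizes rather than their absolute difference. A hedged sketch of one common form of this model (the Weber fraction w = 0.2 is an assumed value, not an estimate from this study):

```python
from math import erf, log, sqrt

def p_correct(n1, n2, w=0.2):
    """Predicted accuracy for picking the larger of two sets under a
    log-Gaussian analog-magnitude model: accuracy grows with the log
    ratio of the set sizes, scaled by the Weber fraction w (assumed)."""
    return 0.5 + 0.5 * erf(abs(log(n1 / n2)) / (w * sqrt(2)))

# Same absolute difference (2 items), different ratios:
print(round(p_correct(2, 4), 3))   # ratio 1:2 -> near ceiling
print(round(p_correct(8, 10), 3))  # ratio 4:5 -> much harder
```

The model captures why discriminating 2 vs. 4 is easy while 8 vs. 10 is hard even though both pairs differ by exactly 2 items.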

17.
Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In 2 experiments, human listeners performed as well or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can be tested using dolphins as subjects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Three experiments tested the ability to integrate information about object–object relationships in 2 chimpanzees. In Experiment 1, the subjects were trained to match 1 part of a 2-part object to its other part, match a tool to its assembled object, match a container to its tool, and match a tool to its container. In Experiment 2, the subjects were trained to match a picture of the sample. One subject learned this type of matching task and was then tested on whether she could choose the pictures of related items in Experiment 1. Although the subject was reinforced irrespective of her choices, she chose pictures of items related to the sample when there was no picture of the sample. Experiment 3 showed that the subject was able to match a picture of the item among related items. The results suggest that the subject might integrate information about relationships acquired in Experiment 1 and organize it to make networks of related items. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
A partial report procedure was used to test the ability of observers to split attention over noncontiguous locations. Observers reported the identity of 2 targets that appeared within a 5 × 5 stimulus array, and cues (validity = 80%) informed them of the 2 most likely target locations. On invalid trials, 1 of the targets appeared directly in between the cued locations. Experiments 1, 1a, and 2 showed a strong accuracy advantage at cued locations compared with intervening ones. This effect was larger when the cues were arranged horizontally rather than vertically. Experiment 3 suggests that this effect of cue orientation reflects an advantage for processing targets that appear in different hemifields. Experiments 4 and 4a suggest that the primary mechanism supporting the flexible deployment of spatial attention is the suppression of interference from stimuli at unattended locations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Several experiments have examined whether nonhuman primates can use experimenter-given manual and facial (visual) cues to direct their attention to a baited object. In contrast to prosimians and monkeys, great apes have repeatedly performed efficiently in such tasks. However, many of the great ape subjects tested have been "enculturated" individuals. In the present study, 3 nonenculturated orangutans (Pongo pygmaeus) were tested for their ability to use experimenter-given pointing, gazing, and glancing cues in an object-choice task. All subjects readily made use of the pointing gesture. However, when left with only gazing or glancing cues, their performance deteriorated markedly, and they were not able to complete the task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号