Similar Articles
20 similar articles retrieved (search time: 218 ms)
1.
The authors tested 2 bottlenosed dolphins (Tursiops truncatus) for their understanding of human-directed gazing or pointing in a 2-alternative object-choice task. A dolphin watched a human informant either gazing at or pointing toward 1 of 2 laterally placed objects and was required to perform a previously indicated action to that object. Both static and dynamic gaze, as well as static and dynamic direct points and cross-body points, yielded errorless or nearly errorless performance. Gaze with the informant's torso obscured (only the head was shown) produced no performance decrement, but gaze with eyes only resulted in chance performance. The results revealed spontaneous understanding of human gaze accomplished through head orientation, with or without the human informant's eyes obscured, and demonstrated that gaze-directed cues were as effective as point-directed cues in the object-choice task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
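Several of the object-choice studies in this list describe performance as "errorless," "above chance," or "at chance." As a minimal sketch of what that comparison involves (all trial counts and scores below are hypothetical, not taken from any study listed here), an exact binomial test gives the probability of scoring at least that well by pure guessing in a 2-alternative task:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    trials correct out of n if the subject is merely guessing."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

# Hypothetical session: 24 two-alternative trials, guessing rate p = 0.5.
print(binom_sf(22, 24))  # far below .05 -- would count as "above chance"
print(binom_sf(13, 24))  # around .4 -- indistinguishable from guessing
```

With 24 trials, for example, 18 or more correct would arise from guessing only about 1% of the time, which is the kind of comparison behind "significantly above chance" in these abstracts.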

2.
Factors affecting joint visual attention in 12- and 18-month-olds were investigated. In Experiment 1 infants responded to 1 of 3 parental gestures: looking, looking and pointing, or looking, pointing, and verbalizing. Target objects were either identical to or distinctive from distractor objects. Targets were in front of or behind the infant to test G. E. Butterworth's (1991b) hypothesis that 12-month-olds do not follow gaze to objects behind them. Pointing elicited more episodes of joint visual attention than looking alone. Distinctive targets elicited more episodes of joint visual attention than identical targets. Although infants most reliably followed gestures to targets in front of them, even 12-month-olds followed gestures to targets behind them. In Experiment 2 parents were rotated so that the magnitude of their head turns to fixate front and back targets was equivalent. Infants looked more at front than at back targets, but there was also an effect of magnitude of head turn. Infants' relative neglect of back targets is partly due to the "size" of adult's gesture. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Experiment 1 tested a dolphin (Tursiops truncatus) for cross-modal recognition of 25 unique pairings of 8 familiar, complexly shaped objects, using the senses of echolocation and vision. Cross-modal recognition was errorless or nearly so for 24 of the 25 pairings under both visual to echoic matching (V-E) and echoic to visual matching (E-V). First-trial recognition occurred for 20 pairings under V-E and for 24 under E-V. Echoic decision time under V-E averaged only 1.88 s. Experiment 2 tested 4 new pairs of objects for 24 trials of V-E and 24 trials of E-V without any prior exposure of these objects. Two pairs yielded performance significantly above chance in both V-E and E-V. Also, the dolphin matched correctly on 7 of 8 1st trials with these pairs. The results support a capacity for direct echoic perception of object shape by this species and demonstrate that prior object exposure is not required for spontaneous cross-modal recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
Determined the age at which infants call interesting objects to another's attention by pointing, related their ability to follow another's pointing to their own use of the gesture, and compared the uses of pointing and reaching. 48 Ss aged 10–16 mo were studied with their mothers in a setting containing 6 special stimulus objects. By 12.5 mo, most Ss pointed, usually vocalizing or looking at their partner while pointing. The communicative function of the gesture was further established by the partner's response of verbal acknowledgment and looking at the object. The ability to follow another's points seemed to be acquired before Ss began to point but improved with their own use of the gesture. Reaching partook of the behaviors associated with pointing but developed earlier and decreased as pointing increased. Data show that at an early age Ss exhibit an elementary form of the ability to take the visual perspective of others. (12 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Including gesture in instruction facilitates learning. Why? One possibility is that gesture points out objects in the immediate context and thus helps ground the words learners hear in the world they see. Previous work on gesture's role in instruction has used gestures that either point to or trace paths on objects, thus providing support for this hypothesis. The experiments described here investigated the possibility that gesture helps children learn even when it is not produced in relation to an object but is instead produced "in the air." Children were given instruction in Piagetian conservation problems with or without gesture and with or without concrete objects. The results indicate that children given instruction with speech and gesture learned more about conservation than children given instruction with speech alone, whether or not objects were present during instruction. Gesture in instruction can thus help learners learn even when those gestures do not direct attention to visible objects, suggesting that gesture can do more for learners than simply ground arbitrary, symbolic language in the physical, observable world. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In 2 experiments, human listeners performed as well as or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can be tested using dolphins as subjects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

8.
Several experiments have been performed to examine whether nonhuman primates are able to make use of experimenter-given manual and facial (visual) cues to direct their attention to a baited object. Contrary to the performance of prosimians and monkeys, great apes repeatedly have shown task efficiency in experiments such as these. However, many of the great ape subjects used have been "enculturated" individuals. In the present study, 3 nonenculturated orangutans (Pongo pygmaeus) were tested for their ability to use experimenter-given pointing, gazing, and glancing cues in an object-choice task. All subjects readily made use of the pointing gesture. However, when subjects were left with only gazing or glancing cues, their performance deteriorated markedly, and they were not able to complete the task. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
The authors tested whether the understanding by dolphins (Tursiops truncatus) of human pointing and head-gazing cues extends to knowing the identity of an indicated object as well as its location. In Experiment 1, the dolphins Phoenix and Akeakamai processed the identity of a cued object (of 2 that were present), as shown by their success in selecting a matching object from among 2 alternatives remotely located. Phoenix was errorless on first trials in this task. In Experiment 2, Phoenix reliably responded to a cued object in alternate ways, either by matching it or by acting directly on it, with each type of response signaled by a distinct gestural command given after the indicative cue. She never confused matching and acting. In Experiment 3, Akeakamai was able to process the geometry of pointing cues (but not head-gazing cues), as revealed by her errorless responses to either a proximal or distal object simultaneously present, when each object was indicated only by the angle at which the informant pointed. The overall results establish that these dolphins could identify, through indicative cues alone, what a human is attending to as well as where. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
Six experiments compared spatial updating of an array after imagined rotations of the array versus viewer. Participants responded faster and made fewer errors in viewer tasks than in array tasks while positioned outside (Experiment 1) or inside (Experiment 2) the array. An apparent array advantage for updating objects rather than locations was attributable to participants imagining translations of single objects rather than rotations of the array (Experiment 3). Superior viewer performance persisted when the array was reduced to 1 object (Experiment 4); however, an object with a familiar configuration improved object performance somewhat (Experiment 5). Object performance reached near-viewer levels when rotations included haptic information for the turning object. The researchers discuss these findings in terms of the relative ease with which the human cognitive system transforms the spatial reference frames corresponding to each imagined rotation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
In 3 picture–word interference experiments, speakers named a target object in the presence of an unrelated not-to-be-named context object. Distractor words, which were phonologically related or unrelated to the context object's name, were used to determine whether the context object had become phonologically activated. All objects had high frequency names, and the ease of processing of these objects was manipulated by a visual degradation technique. In Experiment 1, both objects were nondegraded; in Experiment 2, both objects were degraded; and in Experiment 3, either the target object or the context object was degraded. Distractor words, which were phonologically related to the context objects, interfered with the naming response when both objects were nondegraded, indicating that the context objects had become phonologically coactivated. The effect vanished when both objects were degraded, when only the context object was degraded, and when only the target object was degraded. These data demonstrate that the amount of available processing resources constrains the forward cascading of activation in the conceptual-lexical system. Context objects are likely to become phonologically coactivated if they are easily retrieved and if prioritized target processing leaves sufficient resources. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

12.
The authors assessed rats' encoding of the appearance or egocentric position of objects within visual scenes containing 3 objects (Experiment 1) or 1 object (Experiment 2A). Experiment 2B assessed encoding of the shape and fill pattern of single objects, and encoding of configurations (object + position, shape + fill). All were assessed by testing rats' ability to discriminate changes from familiar scenes (constant-negative paradigm). Perirhinal cortex lesions impaired encoding of objects and their shape; postrhinal cortex lesions impaired encoding of egocentric position, but the effect may have been partly due to entorhinal involvement. Neither lesioned group was impaired in detecting configural change. In Experiment 1, both lesion groups were impaired in detecting small changes in relative position of the 3 objects, suggesting that more sensitive tests might reveal configural encoding deficits. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
We report 3 studies of the referential pointing of 2 orangutans (Pongo pygmaeus). Chantek was raised in an enculturated environment; Puti, raised in a nursery, had a more typical captive life. In Experiment 1, flexibility of pointing behavior was investigated by requiring subjects to point in novel circumstances (for an out-of-sight tool, not food). In Experiment 2, we investigated the orangutans' comprehension of the significance of a human point in helping them to locate food. In Experiment 3, we investigated whether these pointing subjects comprehended that a human recipient must be looking for the point to achieve its attention-directing goal. In all experiments the enculturated orangutan showed better understanding of pointing than the captive orangutan. This finding is consistent with recent studies that have found differences in the cognitive and social-cognitive abilities of apes that have had different types of experience with humans. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
We explored infants' ability to perceive stationary, partially occluded objects as connected units (Experiments 1 and 2) with specific appearances (Experiment 3). In each experiment, the infants saw 2 test events involving what appeared to adults to be a tall rectangular object whose middle portion was hidden behind a narrow screen. During the test events, the screen alternately uncovered and covered the object. In Experiments 1 and 2, removal of the screen revealed either a single, connected rectangle (complete object event) or an interrupted rectangle with a gap where the screen had been (broken object event). In Experiment 3, removal of the screen revealed either a rectangle (rectangle event) or a cross-shaped object (cross-shape event). The pattern of infants' looking times at these events suggests that they perceived the unity of the partially occluded object by 6.5 months of age but did not perceive the form of the hidden part of the object until 8 months. The results of baseline control conditions support this interpretation.

15.
Two methods assessed the use of experimenter-given directional cues by a New World monkey species, cotton top tamarins (Saguinus oedipus). Experiment 1 used cues to elicit visual co-orienting toward distal objects. Experiment 2 used cues to generate responses in an object-choice task. Although there were strong positive correlations in co-orienting between monkey pairs, visual co-orienting with a human experimenter occurred at a low frequency to distal objects. Human hand pointing cues generated more visual co-orienting than did eye gaze to distal objects. Significant accurate choices of baited cups occurred with human point and tap cues and human look cues. Results highlight the importance of head and body orientation to induce shared attention in cotton top tamarins, both in a task that involved food getting and a task that did not. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Pointing by monkeys, apes, and human infants is reviewed and compared. Pointing with the index finger is a species-typical human gesture, although human infants exhibit more whole-hand pointing than is commonly appreciated. Captive monkeys and feral apes have been reported to only rarely "spontaneously" point, although apes in captivity frequently acquire pointing, both with the index finger and with the whole hand, without explicit training. Captive apes exhibit relatively more gaze alternation while pointing than do human infants about 1 year old. Human infants are relatively more vocal while pointing than are captive apes, consistent with paralinguistic use of pointing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
Four experiments investigated the influence of a sudden social request on the kinematics of a preplanned action. In Experiment 1, participants were requested to grasp an object and then locate it within a container (unperturbed trials). On 20% of trials, a human agent seated near the participant unexpectedly stretched out her arm and unfolded her hand as if to ask for the object (perturbed trials). In the remaining 3 experiments, similar procedures were adopted except that (a) the human was replaced by a robotic agent, (b) the gesture performed by the human agent did not imply a social request, and (c) the gaze of the human agent was not available. Only when the perturbation was characterized by a social request involving a human agent were there kinematic changes to the action directed toward the target. Conversely, no effects on kinematics were evident when the perturbation was caused by the robotic agent or by a human agent performing a nonsocial gesture. These findings are discussed in the light of current theories proposed to explain the effects of social context on the control of action. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
In 4 experiments, the authors examined the use of the hands in simple arithmetic tasks. Experiments 1 and 2 demonstrated that pointing increases both accuracy and speed in counting arrays of items, whether those items are identical or distinctive. Experiment 3 demonstrated that individuals tend to nod their heads when not allowed to point and that nodding is associated with greater accuracy, suggesting that pointing is functional for reasons other than simply providing additional visual information. Experiment 4 examined changes in speech when adding arrays of digits, depending on whether participants were allowed to use their hands to manipulate the tokens on which the digits were presented. Taken together, the results of these experiments are consistent with recent research suggesting that gesture can serve cognitive functions and that the hands can support the binding of representational elements to their functional roles by providing phase markers for cyclic cognitive processes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
This study describes the use of referential gestures with concomitant gaze orienting behavior to both distal food objects and communicative interactants by 115 chimpanzees, ranging from 3 to 56 years of age. Gaze alternation between a banana and an experimenter was significantly associated with vocal and gestural communication. Pointing was the most common gesture elicited; 47 subjects pointed with the whole hand, whereas 6 subjects pointed with index fingers. Thus, communicative pointing is commonly used by laboratory chimpanzees, without explicit training to point, language training, or home rearing. Juveniles exhibited striking decrements in their propensity to communicate with adult male experimenters compared with older chimpanzees. Significantly fewer mother-reared chimpanzees exhibited gaze alternation compared with nursery-reared chimpanzees. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Previous research using briefly presented displays has indicated that objects in a coherent scene are easier to identify than are objects in incoherent backgrounds. Of interest is whether the identification of the target object depends on the identification of the scene or the identification of other diagnostic objects in the scene. Experiment 1 indicated that objects are more difficult to identify when located in an "episodically" inconsistent background even when the same diagnostic objects are present in both inconsistent and consistent backgrounds. Experiment 2 demonstrated that the degree to which noncued (cohort) objects are consistent with the target object has no effect on this object identification task. Experiment 3 showed that consistent episodic background information facilitated object identification and that inconsistent episodic background information did not interfere relative to "nonsense" backgrounds roughly equated on visual characteristics. Implications for models of scene perception are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号