Similar Documents
20 similar documents found (search time: 15 ms)
1.
Implicit in many informal and formal principles of psychological change is the understudied assumption that change requires either an active approach or an inactive approach. This issue was systematically investigated by comparing the effects of general action goals and general inaction goals on attitude change. As prior attitudes facilitate preparation for an upcoming persuasive message, general action goals were hypothesized to facilitate conscious retrieval of prior attitudes and therefore hinder attitude change to a greater extent than general inaction goals. Experiment 1 demonstrated that action primes (e.g., “go,” “energy”) yielded faster attitude report than inaction primes (e.g., “rest,” “still”) among participants who were forewarned of an upcoming persuasive message. Experiment 2 showed that the faster attitude report identified in Experiment 1 was localized on attitudes toward a message topic participants were prepared to receive. Experiments 3, 4, and 5 showed that, compared with inaction primes, action primes produced less attitude change and less argument scrutiny in response to a counterattitudinal message on a previously forewarned topic. Experiment 6 confirmed that the effects of the primes on attitude change were due to differential attitude retrieval. That is, when attitude expression was induced immediately after the primes, action and inaction goals produced similar amounts of attitude change. In contrast, when no attitude expression was induced after the prime, action goals produced less attitude change than inaction goals. Finally, Experiment 7 validated the assumption that these goal effects can be reduced or reversed when the goals have already been satisfied by an intervening task. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

2.
We demonstrate that right-handed participants make speeded classification responses to pairs of objects when the objects appear in standard co-locations for right-handed actions, relative to when they appear in reflected locations. These effects are greater when participants “weight” information for action when deciding if 2 objects are typically used together, compared with deciding if objects typically occur in a given context. The effects are enhanced, and affect both types of decision, when an agent is shown holding the objects. However, the effects are eliminated when the objects are not viewed from the first-person perspective and when words are presented rather than objects. The data suggest that (a) participants are sensitive to whether objects are positioned correctly for their own actions, (b) the position information is coded within an egocentric reference frame, (c) the critical representation involved is visual and not semantic, and (d) the effects are enhanced by a sense of agency. The results can be interpreted within a dual-route framework for action retrieval in which a direct visual route is influenced by affordances for action. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files—pointers to the neural codes of the features of a given event. The present study investigated the perceptual criteria underlying integration into an object file. Previous studies confounded the sharing of spatial location with belongingness to the same perceptual object, 2 factors we tried to disentangle. Our findings suggest that orientation and color features appearing in a task-irrelevant preview display were integrated irrespective of whether they appeared as part of the same object or of different objects (e.g., 1 stationary and the other moving continuously, or a banana in a particular orientation overlaying an apple of a particular color). In contrast, integration was markedly reduced when the 2 objects were separated in space. Taken together, these findings suggest that spatial overlap of visual features is a sufficient criterion for integrating them into the same object file. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
The problem of feature binding has been examined under conditions of distributed attention or with spatially dispersed stimuli. We studied binding by asking whether selective attention to a feature of a masked object enables perceptual access to the other features of that object, using conditions in which spatial attention was directed at a single location where all objects appeared. In an identification condition, the task required reporting the same property of each object. High rates of identification showed good perceptual availability. In a search condition, the task required reporting the property (e.g., shape) that was associated with a specific value of the searched property (e.g., surface texture) of the same object. Focusing of attention on a target’s searched property value did not result in a high rate of identifying the target’s other property, and strong object masking was found. Backward masking between spatially superimposed visual objects appears to be primarily due to a difficulty in feature binding of a target object rather than to a substitution of an integrated object by the following stimulus. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Pictures of handled objects such as a beer mug or frying pan are shown to prime speeded reach and grasp actions that are compatible with the object. To determine whether the evocation of motor affordances implied by this result is driven merely by the physical orientation of the object's handle as opposed to higher-level properties of the object, including its function, prime objects were presented either in an upright orientation or rotated 90° from upright. Rotated objects successfully primed hand actions that fit the object's new orientation (e.g., a frying pan rotated 90° so that its handle pointed downward primed a vertically oriented power grasp), but only when the required grasp was commensurate with the object's proper function. This constraint suggests that rotated objects evoke motor representations only when they afford the potential to be readily positioned for functional action. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

6.
7.
The purpose of this study was to examine whether the way in which adults process animate and inanimate objects is affected by the distinctiveness of the object, and whether processing ability varies with age and with the quality of aging (e.g., normal aging versus pathological aging). We examined the perceptual functioning of young adults, elderly subjects, and patients suffering from Alzheimer's disease. Generally, the results do not support the distributed model of conceptual representation. However, they do demonstrate that the ability to recognize objects by their distinctiveness is affected by normal and pathological aging. A gradual deterioration in the ability to correctly perceive animate objects was also observed as pathological aging progressed. These results, as well as our methods of assessing semantic memory, are discussed in terms of their theoretical and practical implications. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The present study assessed the functional organization of action semantics by asking subjects to categorize pictures of an actor holding objects with a correct or incorrect grip at either a correct or incorrect goal location. Overall, reaction times were slower if the object was presented with an inappropriate posture, and this effect was stronger for goal violations compared with grip violations (Experiment 1). In addition, the retrieval of action semantics was found to be accompanied by the implicit activation of motor representations. Body-related objects (e.g., cup) were classified faster when a movement toward the subject’s body was required, whereas world-related objects (e.g., pincers) were responded to faster with a movement in the opposite direction (Experiments 2 and 3). In contrast, when subjects were required to retrieve only visual semantics (Experiment 4), no interference effects of postural information were observed, and motor representations were only partially activated. These findings suggest that action semantics can be accessed independently from visual semantics and that the retrieval of action semantics is supported by functional motor activation reflecting the prototypical use of an object. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
With computer simulations of self-motion, Ss controlled their altitude as they approached a floating object and, after getting as close as possible to the object, tried to "jump" over it without collision. Ss jumped significantly later for small objects, compared with larger objects that were approached from equal distances at equal speeds and were positioned at equal clearance heights. This occurred even when accretion-deletion information was present and when object width and length were varied independently. Results were consistent with studies (e.g., P. R. Delucia, see record 1992-00230-001) in which Ss judged a large, far-approaching object to hit the viewpoint before a small, near object that would have arrived sooner. Results suggest that pictorial information such as relative size contributes to active collision-avoidance tasks and must be considered in models of perceived distance and time-to-arrival. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
We examined whether the apparent size of an object is scaled to the morphology of the relevant body part with which one intends to act on it. To be specific, we tested if the visually perceived size of graspable objects is scaled to the extent of apparent grasping ability for the individual. Previous research has shown that right-handed individuals perceive their right hand as larger and capable of grasping larger objects than their left. In the first 2 experiments, we found that objects looked smaller when placed in or judged relative to their right hand compared to their left. In the third experiment, we directly manipulated apparent hand size by magnifying the participants' hands. Participants perceived objects to be smaller when their hand was magnified than when their hand was unmagnified. We interpret these results as demonstrating that perceivers use the extent of their hands' grasping abilities as “perceptual rulers” to scale the apparent size of graspable objects. Furthermore, hand size manipulations did not affect the perceived size of objects too big to be grasped, which suggests that hand size is only used as a scaling mechanism when the object affords the relevant action, in this case, grasping. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

11.
The binding of stimulus and response features into stimulus–response (S-R) episodes or “event files” is a basic process for the efficient control of behavior. However, relevant information is usually accompanied by information that is irrelevant for the selection of action. Recent studies showed that even irrelevant information is bound into event files. In this study, we investigated the boundary conditions of distractor–response binding and subsequent distractor-based response retrieval processes. In particular, we tested whether the inclusion of distractor information into S-R episodes is modulated by whether the distractor and target stimulus are perceived as belonging to the same object or as belonging to different objects. We argue that distracting information is only bound into S-R episodes if it is perceived as belonging to the same object as the relevant information, whereas no binding occurs when the distracting information is perceived as belonging to a separate object. In 6 experiments, we found evidence for the modulation of distractor–response bindings according to perceptual grouping principles. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

12.
"Temporal migration" describes a situation in which subjects viewing rapidly presented stimuli (e.g., 9–20 items/s) confidently report a target element as having been presented in the same display as a previous or following stimulus in the sequence. Four experiments tested a short-term buffer model of this phenomenon. Experiments 1 and 4 tested the hypothesis that subjects' errors are due to the demands of the verbal report procedure rather than to perceptual integration. In Experiment 1, 12 color objects were presented at a rate of 9/s. Prior to each sequence, an object was named and subjects responded "yes" or "no" to indicate whether the target element (a black frame) occurred with that object. Consistent with the perceptual hypothesis, the yes/no procedure yielded the same results as the verbal report procedure. Experiment 2 tested the hypothesis that the direction of migration depends on "frame" detection time. Results showed that reaction time to frame detection was significantly faster in trials in which subjects reported the frame on a preceding rather than a following picture. Experiments 3 and 4 used the standard naming procedure and the yes/no procedure to test temporal migration using more complex, interrelated stimuli (objects and scenes). Implications for the use of the temporal migration effect to study visual integration within eye fixations are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Recent results from Cannon, Hayes, and Tipper (2010) established that the Action Compatibility Effect (ACE) is hedonically marked and elicits a genuine positive reaction. In this work, we aim to show that the hedonic marking of the ACE has incidental consequences for affective judgment. To do so, we used the principle of the affective priming paradigm (for a review, see Musch & Klauer, 2003): participants have to respond, as quickly as they can, to the pleasant or unpleasant character of a target word. In the priming phase, we do not present an affective stimulus; instead, we present two different graspable objects, one after the other. The handles of the graspable objects are shown either both on the same side (i.e., perceptual action compatibility) or not (i.e., perceptual action incompatibility). In addition, the orientation of the handles of the objects is either compatible (i.e., action compatibility) or incompatible (i.e., action incompatibility) with the response hand used for the word evaluation. Consistent with our hypothesis, participants responded faster to positive words after perceptual action compatibility and action compatibility (thus demonstrating the ACE) than after the incompatibility conditions. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

14.
We investigated whether there exists a behavioral dependency between object detection and categorization. Previous work (Grill-Spector & Kanwisher, 2005) suggests that object detection and basic-level categorization may be the very same perceptual mechanism: As objects are parsed from the background they are categorized at the basic level. In the current study, we decouple object detection from categorization by manipulating the between-category contrast of the categorization decision. With a superordinate-level contrast with people as one of the target categories (e.g., cars vs. people), which replicates Grill-Spector and Kanwisher, we found that success at object detection depended on success at basic-level categorization and vice versa. But with a basic-level contrast (e.g., cars vs. boats) or superordinate-level contrast without people as a target category (e.g., dog vs. boat), success at object detection did not depend on success at basic-level categorization. Successful object detection could occur without successful basic-level categorization. Object detection and basic-level categorization do not seem to occur within the same early stage of visual processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
In 2-tone images (e.g., Dallenbach's cow), only two levels of brightness are used to convey image structure: dark object regions and shadows are turned black, and light regions are turned white. Despite a lack of shading, hue, and texture information, many 2-tone images of familiar objects and scenes are accurately interpreted, even by naive observers. Objects frequently appear fully volumetric and are distinct from their shadows. If perceptual interpretation of 2-tone images is accomplished via bottom-up processes on the basis of geometrical structure projected to the image (e.g., volumetric parts, contour and junction information), novel objects should appear volumetric as readily as their familiar counterparts. We demonstrate that accurate volumetric representations are rarely extracted from 2-tone images of novel objects, even when these objects are constructed from volumetric primitives such as generalized cones (Marr, D., & Nishihara, H. K., 1978, Proceedings of the Royal Society of London, 200, 269-294; Biederman, I., 1985, Computer Vision, Graphics, and Image Processing, 32, 29-73), or from the rearranged components of a familiar object which is itself recognizable as a 2-tone image. Even familiar volumes such as canonical bricks and cylinders require scenes with redundant structure (e.g., rows of cylinders) or explicit lighting (a lamp in the image) for recovery of global volumetric shape. We conclude that 2-tone image perception is not mediated by bottom-up extraction of geometrical features such as junctions or volumetric parts, but may rely on previously stored representations in memory and a model of the illumination of the scene. The success of this top-down strategy implies it is available for general object recognition in natural scenes.

16.
Configural coding is known to take place between the parts of individual objects but has never been shown between separate objects. We provide novel evidence here for configural coding between separate objects through a study of the effects of action relations between objects on extinction. Patients showing visual extinction were presented with pairs of objects that were or were not co-located for action. We first confirmed the reduced extinction effect for objects co-located for action. Consistent with prior results showing that inversion disrupts configural coding, we found that inversion disrupted the benefit for action-related object pairs. This occurred both for objects with a standard canonical orientation (e.g., teapot and teacup) and those without, but where grasping and using the objects was made more difficult by inversion (e.g., spanner and nut). The data suggest that part of the affordance effect may reflect a visuo-motor response to the configural relations between stimuli. Experiment 2 showed that distorting the relative sizes of the objects also reduced the advantage for action-related pairs. We conclude that action-related pairs are processed as configurations. (PsycINFO Database Record (c) 2011 APA, all rights reserved)

17.
The role of semantic knowledge in object utilisation is a matter of debate. It is usually presumed that access to semantic knowledge is a necessary condition for manipulation, but a few reports challenged this view. The existence of a direct, pre-semantic route from vision to action has been proposed. We report the case of a patient with a disorder of object use in everyday life, in the context of probable Alzheimer's disease. This patient was also impaired when manipulating single objects. He showed a striking dissociation between impairment in object use and preserved capacity to perform symbolic and meaningless gestures. To elucidate the nature of the disorder, and to clarify the relations between semantic knowledge and object use, we systematically assessed his capacity to recognise, name, access semantic knowledge, and use 15 common objects. We found no general semantic impairment for the objects that were not correctly manipulated, and, more importantly, no difference between the semantic knowledge of objects correctly manipulated and objects incorrectly manipulated. These data, although not incompatible with the hypothesis of a direct route for action, are better accommodated by the idea of a distributed semantic memory, where different types of knowledge are represented, as proposed by Allport (Allport, D. A. Current perspectives in dysphasia, pp. 32-60. Churchill Livingstone, Edinburgh, 1985).

18.
The authors studied 2 patients, S.M. and R.N., to examine perceptual organization and its relationship to object recognition. Both patients had normal, low-level vision and performed simple grouping operations normally but were unable to apprehend a multielement stimulus as a whole. R.N. failed to derive global structure even under optimal stimulus conditions, was less sensitive to grouping by closure, and was more impaired in object recognition than S.M. These findings suggest that perceptual organization involves a multiplicity of processes, some of which are simpler and are instantiated in lower order areas of visual cortex (e.g., collinearity). Other processes are more complex and rely on higher order visual areas (e.g., closure and shape formation). The failure to exploit these latter configural processes adversely affects object recognition. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
24 2.5- and 5-yr-olds were asked to name objects in each of 3 groups first by feeling the objects and then by seeing them. The object groups were miniaturized large objects (e.g., doll's bed), miniaturized small objects (e.g., doll's spoon), and nonminiaturized small objects (e.g., keys). All Ss identified all of the objects by sight. Both age groups tactilely identified more objects in the nonminiaturized small object group than in the miniaturized groups. The 2.5-yr-olds tactilely identified more objects in the miniaturized small object group than in the miniaturized large object group. The 5-yr-olds tactilely identified more objects than the 2.5-yr-olds in all of the groups. Results suggest that for the younger children the tactile identification of miniaturized common objects is inhibited when the normal size of the objects prohibits overall tactile exploration. By the late preschool years, the tactile identification of miniaturized common objects may be facilitated by intermodal transfer of visual–tactile information. (8 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The authors examined the relation between infants' motor skills and attention to object features in events in which a hand acted on an object (e.g., squeezed it) that then produced a sound (e.g., squeaking). In this study, 6- to 7-month-old infants (N = 41) were habituated to a single event and then tested with changes in appearance and action. Infants robustly responded to changes in action, but as a group did not respond to changes in appearance. Moreover, more skilled activity with objects during naturalistic play was associated with longer looking in response to a change in appearance, but not to a change in action. Implications for the relation between perception and action in infancy are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号