Similar Articles (20 results)
1.
This study investigated neuronal activity in the anterior striatum while monkeys repeatedly learned to associate new instruction stimuli with known behavioral reactions and reinforcers. In a delayed go-nogo task with several trial types, an initial picture instructed the animal to execute or withhold a reaching movement and to expect a liquid reward or not. During learning, new instruction pictures were presented, and animals guessed and performed one of the trial types according to a trial-and-error strategy. Learning of a large number of pictures resulted in a learning set in which learning took place in a few trials and correct performance exceeded 80% in the first 60-90 trials. About 200 task-related striatal neurons studied in both familiar and learning conditions showed three forms of changes during learning. Activations related to the preparation and execution of behavioral reactions and the expectation of reward were maintained in many neurons but occurred in inappropriate trial types when behavioral errors were made. The activations became appropriate for individual trial types when the animals' behavior adapted to the new task contingencies. In particular, reward expectation-related activations occurred initially in both rewarded and unrewarded movement trials and became subsequently restricted to rewarded trials. These changes occurred in parallel with the visible adaptation of reward expectations by the animals. The second learning change consisted in decreases of task-related activations that were either restricted to the initial trials of new learning problems or persisted during the subsequent consolidation phase. They probably reflected reductions in the expectation and preparation of upcoming task events, including reward. The third learning change consisted in transient or sustained increases of activations. These might reflect the increased attention accompanying learning and serve to induce synaptic changes underlying the behavioral adaptations. 
Both decreases and increases often induced changes in the trial-selective occurrence of activations. In conclusion, neurons in anterior striatum showed changes related to adaptations or reductions of expectations in new task situations and displayed activations that might serve to induce structural changes during learning.

2.
1. The primate orbitofrontal cortex receives inputs from the primary olfactory (pyriform) cortex and also from the primary taste cortex. To investigate how olfactory information is encoded in the orbitofrontal cortex, the responses of single neurons in the orbitofrontal cortex and surrounding areas were recorded during the performance of an olfactory discrimination task. In the task, the delivery of one of eight different odors indicated that the monkey could lick to obtain a taste of sucrose. If one of two other odors was delivered from the olfactometer, the monkey had to refrain from licking, otherwise he received a taste of saline. 2. Of the 1,580 neurons recorded in the orbitofrontal cortex, 3.1% (48) had olfactory responses and 34 (2.2%) responded differently to the different odors in the task. The neurons responded with a typical latency of 180 ms from the onset of odorant delivery. 3. Of the olfactory neurons with differential responses in the task, 35% responded solely on the basis of the taste reward association of the odorants. Such neurons responded either to all the rewarded stimuli, and none of the saline-associated stimuli, or vice versa. 4. The remaining 65% of these neurons showed differential selectivity for the stimuli based on the odor quality and not on the taste reward association of the odor. 5. The findings show that the olfactory representation within the orbitofrontal cortex reflects for some neurons (65%) which odor is present independently of its association with taste reward, and that for other neurons (35%), the olfactory response reflects (and encodes) the taste association of the odor. The additional finding that some of the odor-responsive neurons were also responsive to taste stimuli supports the hypothesis that odor-taste association learning at the level of single neurons in the orbitofrontal cortex enables such cells to show olfactory responses that reflect the taste association of the odor.

3.
In view of the behavioral deficits arising after lesions of midbrain dopamine systems, we recorded single dopamine neuron activity in monkeys which learned and performed reaction time tasks, delayed response tasks, and controlled, self-initiated movements. Dopamine neurons respond in a rather homogeneous fashion to salient external stimuli that attract the attention of the subject. Depending on the particular behavioral situation, dopamine neurons are activated by primary liquid and food rewards during learning or in the absence of predictive stimuli, by conditioned stimuli predicting reward and eliciting behavioral reactions, and by novel, unexpected stimuli. Thus, dopamine neurons signal the presence of reward-related, alerting stimuli that need to be processed by the subject with high priority. Besides these phasic responses, dopamine systems apparently operate also in a tonic mode, as inferred from the beneficial effects of dopamine receptor agonist drugs on Parkinsonian symptoms. Whereas the phasic responses may mediate alerting functions or possibly reward-directed learning, the tonic activity may be involved in maintaining states of behavioral alertness and thus enable movements and cognitive processes. These data provide neurophysiological correlates for the involvement of dopamine neurons in central processes determining the behavioral reactivity of the subject to important environmental events, and possibly the learning of reward-directed behavior.

4.
Cells in the orbitofrontal cortex (OF) respond to odors and their associated rewards. To determine how these responses are acquired and maintained, the authors recorded single OF units in rats performing an odor discrimination task. Approximately 64% of all cells differentiated between rewarded and nonrewarded odors. These odor valence responses changed during learning in 26% of all cells, and these changes were positively correlated with improving performance, supporting the idea that the information provided by these cells is used in learning the task. However, changes in odor valence responses were also observed after learning, and included not only increases in odor discrimination, but also decreases or mixed increases and decreases. Thus, only some of the changes in firing reflected acquisition of the task. The results suggest that learning triggers a continuing reorganization of OF neural ensembles representing odors and their rewards. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Rewards constitute important goals for voluntary behavior. This study aimed to investigate how expected rewards influence behavior-related neuronal activity in the anterior striatum. In a delayed go-nogo task, monkeys executed or withheld a reaching movement and obtained liquid or sound as reinforcement. An initial instruction picture indicated the behavioral reaction to be performed and the reinforcer to be obtained after a subsequent trigger stimulus. Movements varied according to the reinforcers predicted by the instructions, suggesting that animals differentially expected the two outcomes. About 250 of nearly 1,500 neurons in anterior parts of caudate nucleus, putamen, and ventral striatum showed typical task-related activations that reflected the expectation of instructions and trigger, and the preparation, initiation, and execution of behavioral reactions. Strikingly, most task-related activations occurred only when liquid reward was delivered at trial end, rather than the reinforcing sound. Activations close to the time of reward showed similar preferences for liquid reward over the reinforcing sound, suggesting a relationship to the expectation or detection of the motivational outcome of the trial rather than to a "correct" or "end-of-trial" signal. By contrast, relatively few activations in the present task occurred irrespective of the type of reinforcement. In conclusion, many of the behavior-related neurons investigated in the anterior striatum were influenced by an upcoming primary liquid reward and did not appear to code behavioral acts in a motivationally neutral manner. Rather, these neurons incorporated information about the expected outcome into their behavior-related activity. The activations influenced by reward several seconds before its occurrence may constitute a neuronal basis for the retrograde effects of rewards on behavioral reactions.

6.
The primate orbitofrontal cortex is a site of convergence of information from primary taste, olfactory, and somatosensory cortical areas. We describe the responses of a population of single neurons in the orbitofrontal cortex that responds to fat in the mouth. The neurons respond, when fatty foods are being eaten, to pure fat such as glyceryl trioleate and also to substances with a similar texture but different chemical composition such as paraffin oil (hydrocarbon) and silicone oil [(Si(CH3)2O)n]. This is evidence that the neurons respond to the oral texture of fat, sensed by the somatosensory system. Some of the population of neurons respond unimodally to the texture of fat. Other single neurons show convergence of taste inputs, and others of olfactory inputs, onto single neurons that respond to fat. For example, neurons were found that responded to the mouth feel of fat and the taste of monosodium glutamate (both found in milk), or to the mouth feel of fat and to odor. Feeding to satiety reduces the responses of these neurons to the fatty food eaten, but the neurons still respond to some other foods that have not been fed to satiety. Thus sensory-specific satiety for fat is represented in the responses of single neurons in the primate orbitofrontal cortex. Fat is an important constituent of food that affects its palatability and nutritional effects. The findings described provide evidence that the reward value (or pleasantness) of the mouth feel of fat is represented in the primate orbitofrontal cortex and that the representation is relevant to appetite.

7.
This study assessed how rewards impacted intrinsic motivation when students were rewarded for achievement while learning an activity, for performing at a specific level on a test, or for both. Undergraduate university students engaged in a problem-solving activity. The design was a 2 × 2 factorial with 2 levels of reward in a learning phase (reward for achievement, no reward) and 2 levels of reward in a test phase (reward for achievement, no reward). Intrinsic motivation was measured as time spent on the experimental task and ratings of task interest during a free-choice period. A major finding was that achievement-based rewards during learning or testing increased participants' intrinsic motivation. A path analysis indicated that 2 processes (perceived competence and interest-internal attribution) mediated the positive effects of achievement-based rewards in learning and testing on intrinsic motivation. Findings are discussed in terms of the cognitive evaluation, attribution, and social-cognitive theories. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
The authors present their primary value learned value (PVLV) model for understanding the reward-predictive firing properties of dopamine (DA) neurons as an alternative to the temporal-differences (TD) algorithm. PVLV is more directly related to underlying biology and is also more robust to variability in the environment. The primary value (PV) system controls performance and learning during primary rewards, whereas the learned value (LV) system learns about conditioned stimuli. The PV system is essentially the Rescorla-Wagner/delta-rule and comprises the neurons in the ventral striatum/nucleus accumbens that inhibit DA cells. The LV system comprises the neurons in the central nucleus of the amygdala that excite DA cells. The authors show that the PVLV model can account for critical aspects of the DA firing data, making a number of clear predictions about lesion effects, several of which are consistent with existing data. For example, first- and second-order conditioning can be anatomically dissociated, which is consistent with PVLV and not TD. Overall, the model provides a biologically plausible framework for understanding the neural basis of reward learning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
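The Rescorla-Wagner/delta-rule at the core of the PV system can be stated in a few lines. The sketch below is illustrative only (function names, the learning rate, and the trial loop are assumptions, not details from the PVLV paper): the value estimate moves toward the delivered reward in proportion to the prediction error, which is the quantity DA firing is taken to report.

```python
# Minimal delta-rule (Rescorla-Wagner) sketch of primary-value learning.
# All names and parameters here are illustrative, not from the PVLV paper.

def delta_rule_update(v, reward, lr=0.1):
    """One trial of learning: move the value estimate v toward the
    delivered reward by a fraction lr of the prediction error."""
    return v + lr * (reward - v)

# Acquisition: a stimulus repeatedly paired with a reward of magnitude 1.0.
v = 0.0
for _ in range(100):
    v = delta_rule_update(v, reward=1.0)
print(round(v, 3))  # the value estimate converges toward 1.0
```

As the estimate converges, the prediction error (reward - v) shrinks toward zero, which mirrors the reported decline of DA responses to fully predicted rewards.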

9.
Changes in amplitude are a characteristic feature of most natural sounds, including the biosonar signals used by bats for echolocation. Previous evidence suggests that the nuclei of the lateral lemniscus play an important role in processing timing information that is essential for target range determination in echolocation. Neurons that respond to unmodulated tones with a sustained discharge are found in the dorsal nucleus (DNLL), intermediate nucleus (INLL) and multipolar cell division of the ventral nucleus (VNLLm). These neurons provide a graded response over a broad dynamic range of intensities, and would be expected to provide information about the amplitude envelope of a modulated signal. Neurons that respond only at the onset of a tone make up a small proportion of cells in DNLL, INLL and VNLLm, but are the only type found in the columnar division of the ventral nucleus (VNLLc). Onset neurons in VNLLc maintain a constant latency across a wide range of stimulus frequencies and intensities, thus providing a precise marker for when a sound begins. To determine how these different functional classes of cells respond to amplitude changes, we presented sinusoidally amplitude modulated (SAM) signals monaurally to awake, restrained bats and recorded the responses of single neurons extracellularly. There were clear differences in the ability of neurons in the different cell groups to respond to SAM. In the VNLLm, INLL and DNLL, 90% of neurons responded to SAM with a synchronous discharge. Neurons in the VNLLc responded poorly or not at all to SAM signals. This finding was unexpected given the precise onset responses of VNLLc neurons to unmodulated tones and their ability to respond synchronously to sinusoidally frequency modulated (SFM) signals. 
Among neurons that responded synchronously to SAM, synchronization as a function of modulation rate described either a bandpass or a lowpass function, with the majority of bandpass functions in neurons that responded to unmodulated tones with a sustained discharge. The maximal modulation rates that elicited synchronous responses were similar for the different cell groups, ranging from 320 Hz in VNLLm to 230 Hz in DNLL. The range of best modulation rates was greater for SAM than for SFM; this was also true of the range of maximal modulation rates at which synchronous discharge occurred. There was little correlation between a neuron's best modulation rate or maximal modulation rate for SAM signals and those for SFM signals, suggesting that responsiveness to amplitude and frequency modulations depends on different neural processing mechanisms.
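Synchronization of discharge to a modulated signal, as measured in studies like this one, is conventionally quantified by vector strength: each spike is assigned a phase within the modulation cycle, and the phases are averaged as unit vectors. The abstract does not specify its metric, so the sketch below is only the standard computation under that assumption:

```python
import math

def vector_strength(spike_times, mod_freq):
    """Vector strength of spikes relative to a modulation cycle of
    frequency mod_freq (Hz): 1.0 means perfect phase locking, 0.0 means
    spike phases are uniformly distributed across the cycle."""
    phases = [2 * math.pi * mod_freq * t for t in spike_times]
    n = len(phases)
    x = sum(math.cos(p) for p in phases) / n
    y = sum(math.sin(p) for p in phases) / n
    return math.hypot(x, y)

# Spikes locked to the same phase of a 100-Hz modulation: one spike per
# 10-ms cycle gives maximal vector strength.
locked = [i * 0.01 for i in range(50)]
print(round(vector_strength(locked, 100.0), 2))  # 1.0
```

Plotting vector strength against modulation rate would trace out the bandpass or lowpass synchronization functions the abstract describes.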

10.
Previous research has shown that spatial, movement, and reward information is integrated within the ventral striatum (VS). The present study examined the possible contribution of the basolateral nuclei of the amygdala (BLA) to this interaction by examining behavioral correlates of BLA neurons while rats performed multiple memory trials on an 8-arm radial maze. Alternate arms consistently held 1 of 2 different amounts of reward. Recorded cells were correlated with motion, auditory input, space, and reward acquisition. Reward-related units were found that anticipated reward encounter, that responded during reward consumption, and that differentiated between high and low reward magnitude. This is consistent with the hypothesis that BLA neurons may provide the VS with reward-related information that could then be integrated with spatial information to ultimately affect goal-directed behavior. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
To examine whether the avian hippocampus-parahippocampus (HF) is necessary for nonspatial, paired-associate learning, as has been suggested for rodents, HF-lesioned and control homing pigeons were tested on a visual paired-associate learning task. Both groups learned equally well to discriminate trials that consisted of a stimulus preceded by its paired associate from trials that consisted of a stimulus preceded by stimuli from other paired associates (mispair trials), even when a mispair was experienced for the first time. The groups also learned equally well not to respond to 2 stimuli that were never rewarded. The results demonstrate that HF lesions do not impair nonspatial paired-associate learning in birds, suggesting that the role of HF in nonspatial cognition differs between birds and mammals. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
The effects of partial (intermittent) vs consistent reward on the acquisition and extinction of a shuttling response were studied in 3 experiments with foraging honeybees. Adding nonrewarded trials to rewarded trials (the equated-reinforcements design) improved performance in acquisition and increased resistance to extinction. Substituting nonrewarded trials for some rewarded trials (the equated-trials design), which had little effect on acquisition, also increased resistance to extinction but to a lesser extent than adding nonrewarded trials. Marked variations in the schedule of partial reward (the sequence of rewarded and nonrewarded trials) were without effect. The results are compared with those of analogous experiments on vertebrates. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
Examined the ability of 2 female Atlantic bottle-nosed dolphins to maximize reward in a discrimination learning task with multiple differentially rewarded stimuli. Although both dolphins reportedly performed well, results refer primarily to only 1 S. Results show that the S learned to choose, from among simultaneous groupings of 2–5 objects, the object that represented the greatest food value. The S surpassed the accomplishments of other animals previously tested by responding appropriately to all groupings of 6 different representational objects, each associated with a different food value, even after an 11-wk separation from the objects. (25 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)

14.
Objective: Patients with schizophrenia (SZ) show reinforcement learning impairments related to both the gradual/procedural acquisition of reward contingencies, and the ability to use trial-to-trial feedback to make rapid behavioral adjustments. Method: We used neurocomputational modeling to develop plausible mechanistic hypotheses explaining reinforcement learning impairments in individuals with SZ. We tested the model with a novel Go/NoGo learning task in which subjects had to learn to respond or withhold responses when presented with different stimuli associated with different probabilities of gains or losses in points. We analyzed data from 34 patients and 23 matched controls, characterizing positive- and negative-feedback-driven learning in both a training phase and a test phase. Results: Consistent with simulations from a computational model of aberrant dopamine input to the basal ganglia, patients with SZ showed an overall increased rate of responding in the training phase, together with reduced response-time acceleration to frequently rewarded stimuli across training blocks, and a reduced relative preference for frequently rewarded training stimuli in the test phase. Patients did not differ from controls on measures of procedural negative-feedback-driven learning, although patients with SZ exhibited deficits in trial-to-trial adjustments to negative feedback, with these measures correlating with negative symptom severity. Conclusions: These findings support the hypothesis that patients with SZ have a deficit in procedural “Go” learning, linked to abnormalities in DA transmission at D1-type receptors, despite a “Go bias” (increased response rate), potentially related to excessive tonic dopamine. Deficits in trial-to-trial reinforcement learning were limited to a subset of patients with SZ with severe negative symptoms, putatively stemming from prefrontal cortical dysfunction. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Ordinal learning was investigated in capuchin monkeys (Cebus apella) and rhesus monkeys (Macaca mulatta). In Experiment 1, both species were presented with pairings of the Arabic numerals 0 to 9. Some monkeys were given food rewards equal to the value of the numeral selected and some were rewarded with a single pellet only for choosing the higher numeral within the pair. Both species learned to select the larger numeral, but only rhesus monkeys that were differentially rewarded performed above chance levels when presented with novel probe pairings. In Experiment 2, the monkeys were first presented with arrays of 5 familiar numerals (from the range 0 to 9) and then arrays of 5 novel letters (from the range A to J) with the same reward outcomes in place as in Experiment 1. Both species performed better with the numerals, suggesting that an ordinal sequence of all stimuli had been learned during Experiment 1, rather than a matrix of two-choice discriminations. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
Impulse activity was recorded extracellularly from noradrenergic neurons in the nucleus locus coeruleus of three cynomolgus monkeys performing a visual discrimination (vigilance) task. For juice reward, the subjects were required to release a lever rapidly in response to an improbable target stimulus (20% of trials) that was randomly intermixed with non-target stimuli presented on a video display. All locus coeruleus neurons examined were phasically and selectively activated by target stimuli in this task. Other task events elicited no consistent response from these neurons (juice reward, lever release, fix spot stimuli, non-target stimuli). With reversal of the task contingency, locus coeruleus neurons ceased responding to the former target stimuli, and began responding instead to the new target (old non-target) stimuli. In addition, the latency of locus coeruleus response to target stimuli increased after reversal (by about 140 ms) in parallel with a similar increase in the latency of the behavioral response. These results indicate that the conditioned locus coeruleus responses reflect stimulus meaning and cognitive processing, and are not driven by physical sensory attributes. Notably, the reversal in locus coeruleus response to stimuli after task reversal occurred rapidly, hundreds of trials before reversal was expressed in behavioral responses. These findings indicate that conditioned responses of locus coeruleus neurons are plastic and easily altered by changes in stimulus meaning, and that the locus coeruleus may play an active role in learning the significance of behaviorally important stimuli.

17.
In an effort to answer the question posed in the title, we assessed the effects of rewards on the immediate task performance of preschool children in two studies. Both studies had within-subjects, repeated measures designs, and both yielded highly consistent results showing a detrimental effect of reward on the Peabody Picture Vocabulary Test and on the Goodenough-Harris Draw-a-Man test. Performance decrements were confined to sessions in which subjects were rewarded; when rewarded subjects were shifted to nonreward, their performance improved dramatically. Although these studies were not concerned with the effects of reward on intrinsic motivation, the findings appear to present theoretical difficulties for current cognitive-motivational explanations of the adverse effects of material rewards on immediate task performance. An alternative viewpoint that material rewards can produce a temporary regression in psychological functioning is suggested. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Usually the conditional probabilities needed to calculate transmitted information are estimated directly from empirically measured distributions. Here we show that an explicit model of the relation between response strength (here, spike count) and its variability allows accurate estimates of transmitted information. This method of estimating information is reliable for data sets with nine or more trials per stimulus. We assume that the model characterizes all response distributions, whether observed in a given experiment or not. All stimuli eliciting the same response are considered equivalent. This allows us to calculate the channel capacity, the maximum information that a neuron can transmit given the variability with which it sends signals. Channel capacity is uniquely defined, thus avoiding the difficulty of knowing whether the 'right' stimulus set has been chosen in a particular experiment. Channel capacity increases with increasing dynamic range and decreases as the variance of the signal (noise) increases. Neurons in V1 send more variable signals in a wide dynamic range of spike counts, while neurons in IT send less variable signals in a narrower dynamic range. Nonetheless, neurons in the two areas have similar channel capacities. This suggests that variance is being traded off against dynamic range in coding.
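The direct estimate that this abstract contrasts its model-based method against is just the mutual information of an empirical joint stimulus-response table. A minimal sketch of that baseline computation (the joint table here is invented for illustration; the abstract's model-based estimator is not reproduced):

```python
import math

def mutual_information(joint):
    """Transmitted information in bits from a joint stimulus-response
    probability table (rows = stimuli, columns = response bins):
    I(S;R) = sum over s,r of p(s,r) * log2(p(s,r) / (p(s) * p(r)))."""
    ps = [sum(row) for row in joint]          # marginal over stimuli
    pr = [sum(col) for col in zip(*joint)]    # marginal over responses
    info = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                info += p * math.log2(p / (ps[i] * pr[j]))
    return info

# Two equiprobable stimuli mapped to perfectly distinct spike-count bins
# transmit the full 1 bit available.
perfect = [[0.5, 0.0], [0.0, 0.5]]
print(round(mutual_information(perfect), 3))  # 1.0
```

With few trials per stimulus the empirical table is noisy and this direct estimate is biased upward, which is the motivation the abstract gives for modeling the spike-count variability instead.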

19.
Previous studies have shown that the lateral nucleus of the amygdala (AL) is essential in auditory fear conditioning and that neurons in the AL respond to auditory stimuli. The goals of the present study were to determine whether neurons in the AL are also responsive to somatosensory stimuli and, if so, whether single neurons in the AL respond to both auditory and somatosensory stimulation. Single-unit activity was recorded in the AL in anesthetized rats during the presentation of acoustic (clicks) and somatosensory (footshock) stimuli. Neurons in the dorsal subdivision of the AL responded to both somatosensory and auditory stimuli, whereas neurons in the ventrolateral AL responded only to somatosensory stimuli and neurons in the ventromedial AL did not respond to either stimulus. These findings indicate that the dorsal AL is a site of auditory and somatosensory convergence and may therefore be a focus of convergence of conditioned and unconditioned stimuli (CS and UCS) in auditory fear conditioning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
The basal ganglia have been shown to contribute to habit and stimulus-response (S-R) learning. These forms of learning have the property of slow acquisition and, in humans, can occur without conscious awareness. This paper proposes that one aspect of basal ganglia-based learning is the recoding of cortically derived information within the striatum. Modular corticostriatal projection patterns, demonstrated experimentally, are viewed as producing recoded templates suitable for the gradual selection of new input-output relations in cortico-basal ganglia loops. Recordings from striatal projection neurons and interneurons show that activity patterns in the striatum are modified gradually during the course of S-R learning. It is proposed that this recoding within the striatum can chunk the representations of motor and cognitive action sequences so that they can be implemented as performance units. This scheme generalizes Miller's notion of information chunking to action control. The formation and the efficient implementation of action chunks are viewed as being based on predictive signals. It is suggested that information chunking provides a mechanism for the acquisition and the expression of action repertoires that, without such information compression, would be biologically unwieldy or difficult to implement. The learning and memory functions of the basal ganglia are thus seen as core features of the basal ganglia's influence on motor and cognitive pattern generators.
