Similar Articles
20 similar articles found (search time: 537 ms)
1.
In 4 experiments, college students viewed an animation and listened to concurrent narration explaining the formation of lightning. When students also received concurrent on-screen text that summarized (Experiment 1) or duplicated (Experiment 2) the narration, they performed worse on tests of retention and transfer than did students who received no on-screen text. This redundancy effect is consistent with a dual-channel theory of multimedia learning in which adding on-screen text can overload the visual information-processing channel, causing learners to split their visual attention between 2 sources. Lower transfer performance also occurred when the authors added interesting but irrelevant details to the narration (Experiment 1) or inserted interesting but conceptually irrelevant video clips within (Experiment 3) or before the presentation (Experiment 4). This coherence effect is consistent with a seductive details hypothesis in which the inserted video and narration prime the activation of inappropriate prior knowledge as the organizing schema for the lesson. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

2.
Three studies investigated whether and under what conditions the addition of on-screen text would facilitate the learning of a narrated scientific multimedia explanation. Students were presented with an explanation about the process of lightning formation in the auditory alone (nonredundant) or auditory and visual (redundant) modalities. In Experiment 1, the effects of preceding the nonredundant or redundant explanation with a corresponding animation were examined. In Experiment 2, the effects of presenting the nonredundant or redundant explanation with a simultaneous or a preceding animation were compared. In Experiment 3, environmental sounds were added to the nonredundant or redundant explanation. Learning was measured by retention, transfer, and matching tests. Students better comprehended the explanation when the words were presented auditorily and visually rather than auditorily only, provided there was no other concurrent visual material. The overall pattern of results can be explained by a dual-processing model of working memory, which has implications for the design of multimedia instruction. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

3.
Students viewed a computer-generated animation depicting the process of lightning formation (Experiment 1) or the operation of a car's braking system (Experiment 2). In each experiment, students received either concurrent narration describing the major steps (Group AN) or concurrent on-screen text involving the same words and presentation timing (Group AT). Across both experiments, students in Group AN outperformed students in Group AT in recalling the steps in the process on a retention test, in finding named elements in an illustration on a matching test, and in generating correct solutions to problems on a transfer test. Multimedia learners can integrate words and pictures more easily when the words are presented auditorily rather than visually. This split-attention effect is consistent with a dual-processing model of working memory consisting of separate visual and auditory channels. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

4.
College students learned about botany through an agent-based multimedia game. In Experiment 1, students received either spoken or identical on-screen text explanations; in addition, the lesson was presented either via a desktop display (D), a head-mounted display (HMD) used while sitting, or an HMD used while walking (W). In Experiment 2, we examined the effects of presenting explanations as narration (N), text (T), or both (NT) within the D and W conditions. Students scored higher on retention, transfer, and program ratings in N conditions than in T conditions. The NT condition produced results in between. Students gave higher ratings of presence when learning with HMDs, but media did not affect performance on measures of retention, transfer, or program ratings. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

5.
Students learned about electric motors by asking questions and receiving answers from an on-screen pedagogical agent named Dr. Phyz who stood next to an on-screen drawing of an electric motor. Students performed better on a problem-solving transfer test when Dr. Phyz's explanations were presented as narration rather than on-screen text (Experiment 1), when students were able to ask questions and receive answers interactively rather than receive the same information as a noninteractive multimedia message (Experiments 2a and 2b), and when students were given a prequestion to guide their self-explanations during learning (Experiment 3). Deleting Dr. Phyz's image from the screen had no significant effect on problem-solving transfer performance (Experiment 4). The results are consistent with a cognitive theory of multimedia learning and yield principles for the design of interactive multimedia learning environments. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

6.
In 4 experiments, the authors examined sex differences in audiospatial perception of sounds that moved toward and away from the listener. Experiment 1 showed that both men and women underestimated the time-to-arrival of full-cue looming sounds. However, this perceptual bias was significantly stronger among women than among men. In Experiment 2, listeners estimated the terminal distance of sounds that approached but stopped before reaching them. Women perceived the looming sounds as closer than did men. However, in Experiment 3, with greater statistical power, the authors found no sex difference in the perceived distance of sounds that traveled away from the listener, demonstrating a sex-based specificity for auditory looming perception. Experiment 4 confirmed these results using equidistant looming and receding sounds. The findings suggest that sex differences in auditory looming perception are not due to general differences in audiospatial ability, but rather illustrate the environmental salience and evolutionary importance of perceiving looming objects. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

7.
College students viewed a short multimedia PowerPoint presentation consisting of 16 narrated slides explaining lightning formation (Experiment 1) or 8 narrated slides explaining how a car's braking system works (Experiment 2). Each slide appeared for approximately 8-10 s and contained a diagram along with 1-2 sentences of narration spoken in a female voice. For some students (the redundant group), each slide also contained 2-3 printed words that were identical to the words in the narration, conveyed the main event described in the narration, and were placed next to the corresponding portion of the diagram. For other students (the nonredundant group), no on-screen text was presented. Results showed that the group whose presentation included short redundant phrases within the diagram outperformed the nonredundant group on a subsequent test of retention (d = 0.47 and 0.70 in Experiments 1 and 2, respectively) but not on transfer. Results are explained by R. E. Mayer's (2001, 2005a) cognitive theory of multimedia learning, in which the redundant text served to guide the learner's attention without priming extraneous processing. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

8.
In Experiment 1, students received an illustrated booklet, PowerPoint presentation, or narrated animation that explained 6 steps in how a cold virus infects the human body. The material included 6 high-interest details mainly about the role of viruses in sex or death (high group) or 6 low-interest details consisting of facts and health tips about viruses (low group). The low group outperformed the high group across all 3 media on a subsequent test of problem-solving transfer (d = .80) but not retention (d = .05). In Experiment 2, students who studied a PowerPoint lesson explaining the steps in how digestion works performed better on a problem-solving transfer test if the lesson contained 7 low-interest details rather than 7 high-interest details (d = .86), but the groups did not differ on retention (d = .26). In both experiments, as the interestingness of details was increased, student understanding decreased (as measured by transfer). Results are consistent with a cognitive theory of multimedia learning, in which highly interesting details sap processing capacity away from deeper cognitive processing of the core material during learning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

9.
In 3 experiments, students received a short science lesson on how airplanes achieve lift and then were asked to write an explanation (retention test) and to write solutions to 5 problems, such as how to design an airplane to achieve lift more rapidly (transfer test). For some students, the lesson contained signals, including a preview summary paragraph outlining the 3 main steps involved in lift, section headings, and pointer words such as because or as a result. The signaling did not add any additional content information about lift but helped clarify the structure of the passage. Students who received signaling generated significantly more solutions on the transfer test than did students who did not receive signaling when the explanation was presented as printed text (Experiment 1), spoken text (Experiment 2), and spoken text with corresponding animation (Experiment 3). Results are consistent with a knowledge construction view of multimedia learning in which learners seek to build mental models of cause-and-effect systems. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

10.
In Experiment 1, inexperienced trade apprentices were presented with one of four alternative instructional designs: a diagram with visual text, a diagram with auditory text, a diagram with both visual and auditory text, or the diagram only. An auditory presentation of text proved superior to a visual-only presentation but not when the text was presented in both auditory and visual forms. The diagram-only format was the least intelligible to inexperienced learners. When participants became more experienced in the domain after two specifically designed training sessions, the advantage of a visual diagram-auditory text format disappeared. In Experiment 2, the diagram-only group was compared with the audio-text group after an additional training session. The results were the reverse of those of Experiment 1: The diagram-only group outperformed the audio-text group. Suggestions are made for multimedia instruction that takes learner experience into consideration. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

11.
Social cues in multimedia learning: Role of speaker's voice.
In 2 experiments, learners who were seated at a computer workstation received a narrated animation about lightning formation. Then, they took a retention test, took a transfer test, and rated the speaker. There was a voice effect, in which students performed better on the transfer test and rated the speaker more positively if the voice in the narration had a standard accent rather than a foreign accent (Experiment 1) and if the voice was human rather than machine synthesized (Experiment 2). The retention test results were mixed. The results are consistent with social agency theory, which posits that social cues in multimedia messages can encourage learners to interpret human-computer interactions as more similar to human-to-human conversation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

12.
Previous findings on streaming are generalized to sequences composed of more than 2 subsequences. A new paradigm identified whether listeners perceive complex sequences as a single unit (integrative listening) or segregate them into 2 (or more) perceptual units (stream segregation). Listeners heard 2 complex sequences, each composed of 1, 2, 3, or 4 subsequences. Their task was to detect a temporal irregularity within 1 subsequence. In Experiment 1, the smallest frequency separation under which listeners were able to focus on 1 subsequence was unaffected by the number of co-occurring subsequences; nonfocused sounds were not perceptually organized into streams. In Experiment 2, detection improved progressively, not abruptly, as the frequency separation between subsequences increased from 0.25 to 6 auditory filters. The authors propose a model of perceptual organization of complex auditory sequences. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

13.
It is widely accepted that most developmental dyslexics perform poorly on tasks that assess phonological awareness. One reason for this association might be that the early or "input" phonological representations of speech sounds are distorted or noisy in some way. We have attempted to test this hypothesis directly. In Experiment 1, we measured the confusions that adult dyslexics and controls made when they listened to nine randomly presented consonant-vowel (CV) segments [sequence: see text] under four conditions of increasing white noise masking. Subjects could replay stimuli and were under no obligation to respond quickly. Responses were selected with a computer mouse from a set of nine letter-strings, corresponding to the auditory stimuli, presented on a VDU. While the overall pattern of confusions made by dyslexics and controls was very similar for this stimulus set, dyslexics confused [sequence: see text] significantly more than did controls. In Experiment 2, subjects heard each stimulus once only and were forced to respond as quickly as possible. Under these timed conditions, the pattern of confusions made by dyslexics and controls was the same as before, but dyslexics took longer to respond than controls. The slower responses of dyslexics in Experiment 2 could have arisen because: (a) they were slower at processing the auditory stimuli than controls, (b) they had worse visual pattern memory for letter strings than controls, or (c) they were slower than controls at using the computer mouse. In Experiments 3, 4, and 5, subjects carried out control tasks that eliminated each of these possibilities and confirmed that the results from the auditory tasks genuinely reflected subjects' speech perception. We propose that the fine structure of dyslexics' input phonological representations should be further explored with this confusion paradigm by using other speech sounds containing VCs, CCVs, and VCCs.

14.
The authors tested the hypothesis that personalized messages in a multimedia science lesson can promote deep learning by actively engaging students in the elaboration of the materials and reducing processing load. Students received a multimedia explanation of lightning formation (Experiments 1 and 2) or played an agent-based computer game about environmental science (Experiments 3, 4, and 5). Instructional messages were presented in either a personalized style, where students received spoken or written explanations in the 1st- and 2nd-person points of view, or a neutral style, where students received spoken or written explanations in the 3rd-person point of view. Personalized rather than neutral messages produced better problem-solving transfer performance across all experiments and better retention performance on the computer game. The theoretical and educational implications of the findings are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

15.
Students viewed a computer animation depicting the process of lightning. In Experiment 1, they concurrently viewed on-screen text presented near the animation or far from the animation, or concurrently listened to a narration. In Experiment 2, they concurrently viewed on-screen text or listened to a narration, viewed on-screen text following or preceding the animation, or listened to a narration following or preceding the animation. Learning was measured by retention, transfer, and matching tests. Experiment 1 revealed a spatial-contiguity effect in which students learned better when visual and verbal materials were physically close. Both experiments revealed a modality effect in which students learned better when verbal input was presented auditorily as speech rather than visually as text. The results support 2 cognitive principles of multimedia learning. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

16.
This research focuses on the ability of book-based animated stories, when well designed and produced, to have positive effects on young viewers' narrative comprehension and language skills. Sixty 5-year-olds, learning Dutch as a 2nd language, were randomly assigned to 4 experimental and 2 control conditions. The children profited to some extent from repeated encounters with a storybook with static pictures but more from repeated encounters with the animated form of the story. Both story formats were presented on a computer screen; both included the same oral text spoken in the same voice but the animated story was supplemented with multimedia features (video, sounds, and music) dramatizing the events. Multimedia additions were especially effective for gaining knowledge of implied elements of stories that refer to goals or motives of main characters, and in expanding vocabulary and syntax. The added value of multimedia books was strengthened over sessions. In a group from families with low educational levels who were lagging in language and literacy skills, multimedia storybooks seem to provide a framework for understanding stories and remembering linguistic information. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

17.
This study investigated the mental representation of music notation. Notational audiation is the ability to internally "hear" the music one is reading before physically hearing it performed on an instrument. In earlier studies, the authors claimed that this process engages music imagery contingent on subvocal silent singing. This study refines the previously developed embedded melody task and further explores the phonatory nature of notational audiation with throat-audio and larynx-electromyography measurement. Experiment 1 corroborates previous findings and confirms that notational audiation is a process engaging kinesthetic-like covert excitation of the vocal folds linked to phonatory resources. Experiment 2 explores whether covert rehearsal with the mind's voice also involves actual motor processing systems and suggests that the mental representation of music notation cues manual motor imagery. Experiment 3 verifies findings of both Experiments 1 and 2 with a sample of professional drummers. The study points to the profound reliance on phonatory and manual motor processing (a dual-route stratagem) used during music reading. Further implications concern the integration of auditory and motor imagery in the brain and cross-modal encoding of a unisensory input. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

18.
Two studies tested the effects of social motives during negotiation on postnegotiation group performance. In both experiments, a prosocial or a proself motivation was induced, and participants negotiated in 3-person groups about a joint market. In Experiment 1, groups subsequently performed an advertisement task. Consistent with the authors' predictions, results showed that proself groups performed worse on the convergent aspects of this task but better on the divergent aspects than prosocial groups. In Experiment 2, the authors manipulated social motive and negotiation (negotiation vs. no negotiation), and groups performed a creativity task (requiring divergent performance) or a planning task (requiring convergent performance). Proself groups showed greater dedication, functioned more effectively, and performed better than prosocial groups on the creativity task, whereas prosocial groups showed greater dedication, functioned more effectively, and performed better than proself groups on the planning task, and these effects only occurred when the task was preceded by group negotiation. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

19.
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the direction of auditory motion (Experiment 1). This asymmetry persists even when the perceived quality of apparent motion is equated for the 2 modalities (Experiment 2). Subsequently, it was found that this visual modulation of auditory motion is caused by an illusory reversal in the perceived direction of sounds (Experiment 3). This "dynamic capture" effect occurs over and above ventriloquism among static events (Experiments 4 and 5), and it generalizes to continuous motion displays (Experiment 6). These data are discussed in light of related multisensory phenomena and their support for a "modality appropriateness" interpretation of multisensory integration in motion perception. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

20.
Participants made speeded target-nontarget responses to singly presented auditory stimuli in 2 tasks. In within-dimension conditions, participants listened for either of 2 target features taken from the same dimension; in between-dimensions conditions, the target features were taken from different dimensions. Judgments were based on the presence or absence of either target feature. Speech sounds, defined relative to sound identity and locale, were used in Experiment 1, whereas tones, comprising pitch and locale components, were used in Experiments 2 and 3. In all cases, participants performed better when the target features were taken from the same dimension than when they were taken from different dimensions. Data suggest that the auditory and visual systems exhibit the same higher level processing constraints. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
