Similar Documents
20 similar records found (search time: 15 ms)
1.
This study used eye movement modeling examples (EMME) to support students' integrative processing of verbal and graphical information during the reading of an illustrated text. EMME consists of a replay of a model's eye movements superimposed onto the materials that are processed to accomplish the task. Specifically, the study investigated the effects of modeling the temporal sequence of text and picture processing as shown in various replays of a model's gazes. Eighty-four 7th graders were randomly assigned to one of four experimental conditions: text-first processing sequence (text-first EMME), picture-first processing sequence (picture-first EMME), picture-last processing sequence (picture-last EMME), and no-EMME (control). Online and offline measures were used. Eye movement indices indicated that only readers in the picture-first EMME condition spent significantly longer processing the picture and showed stronger integrative processing of verbal and graphical information than students in the no-EMME condition. Moreover, readers in all EMME conditions outperformed those in the control condition on recall. However, for learning and transfer, only readers in the picture-first EMME condition were significantly superior to readers in the control condition. Furthermore, both the frequency and duration of integrative processing of verbal and graphical information mediated the effect of condition on learning outcomes.

2.
Eye movement modeling examples (EMME) are demonstrations of a computer-based task by a human model (e.g., a teacher), with the model's eye movements superimposed on the task to guide learners' attention. EMME have been shown to enhance learning of perceptual classification tasks; however, it is an open question whether EMME would also improve learning of procedural problem-solving tasks. We investigated this question in two experiments. In Experiment 1 (72 university students, Mage = 19.94), we addressed the effectiveness of EMME for learning simple geometry problems, in which the eye movements cued the underlying principle for calculating an angle. The only significant difference between the EMME and a no-eye-movement control condition was that participants in the EMME condition required less time to solve the transfer test problems. In Experiment 2 (68 university students, Mage = 21.12), we investigated the effectiveness of EMME for more complex geometry problems. Again, we found no significant effects on performance except for time spent on transfer test problems, although it was now in the opposite direction: participants who had studied EMME took longer to solve those items. These findings suggest that EMME may not be more effective than regular video examples for teaching procedural problem-solving skills.

3.
Eye movement modelling examples (EMME) are computer-based videos displaying the visualized eye gaze behaviour of a domain expert (the model) carefully executing a learning or problem-solving task. The role of EMME in promoting cognitive performance (i.e., final scores on learning-outcome or problem-solving measures) has been questioned because of mixed findings from empirical studies. This study tested the effects of EMME on attention guidance and cognitive performance by means of meta-analytic procedures. Data for both experimental and control groups, and for both posttest and pretest, were extracted to calculate the effect sizes. The EMME group was treated as the experimental group and the non-EMME group as the control group. Twenty-five independent articles were included. The overall analysis showed a significant effect of EMME on time to first fixation (d = −0.83), fixation duration (d = 0.74), and cognitive performance (d = 0.43), but not on fixation count, indicating that EMME not only helped learners attend faster and longer to task-relevant elements, but also fostered their final cognitive performance. Interestingly, task type significantly moderated the effect of EMME on cognitive performance: moderation analyses showed that EMME benefited learners' performance when non-procedural tasks (rather than procedural tasks) were used. These findings have implications for future research, as well as for practical application in the field of computers and learning, regarding videos that display a model's visualized eye gaze behaviour.
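The meta-analysis above pools standardized mean differences computed from pre- and posttest data in experimental and control groups. The abstract does not give the exact formula used, but a common choice for this pretest-posttest-control design is Morris's d_ppc, sketched below; the function names and sample data are illustrative, not the authors'.

```python
import math
from statistics import mean, stdev

def pooled_pretest_sd(exp_pre, ctrl_pre):
    # Pooled (sample) pretest standard deviation across the two groups
    n_e, n_c = len(exp_pre), len(ctrl_pre)
    return math.sqrt(((n_e - 1) * stdev(exp_pre) ** 2 +
                      (n_c - 1) * stdev(ctrl_pre) ** 2) / (n_e + n_c - 2))

def d_ppc(exp_pre, exp_post, ctrl_pre, ctrl_post):
    # Pretest-posttest-control effect size: difference of mean gains,
    # scaled by the pooled pretest SD
    gain_e = mean(exp_post) - mean(exp_pre)
    gain_c = mean(ctrl_post) - mean(ctrl_pre)
    return (gain_e - gain_c) / pooled_pretest_sd(exp_pre, ctrl_pre)
```

With toy scores where the EMME group gains one pretest SD and the control group does not improve, the function returns d = 1.0.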

4.
Anecdotal evidence suggests that people with autism may use different processing strategies when accessing the web. However, limited empirical evidence is available to support this. This paper presents an eye tracking study with 18 participants with high-functioning autism and 18 neurotypical participants to investigate the similarities and differences between these two groups in how they search for information within web pages. According to our analysis, people with autism are likely to be less successful in completing their search tasks. They also tend to look at more elements on web pages and make more transitions between elements in comparison with neurotypical people. In addition, they tend to make shorter but more frequent fixations on elements that are not directly related to a given search task. This paper therefore presents the first empirical study to investigate how people with autism differ from neurotypical people when searching for information within web pages, based on an in-depth statistical analysis of their gaze patterns.

5.
This paper introduces a novel approach for collecting and processing data generated by a web user's ocular movements on a web page, captured using an eye-tracking tool. These data reveal the user's exact eye position on the computer screen, and by combining them with the sequence of web page visits registered in the web log, significant insights about his/her behavior within a website can be extracted.

With this approach, we can improve the effectiveness of the current methodology for identifying the most important web objects from the web user's point of view, also called Website Keyobjects. That methodology takes as input the website's logs, the pages that compose the site, and users' interest in the web objects of each page, quantified by means of a survey. The data are then transformed and preprocessed before web mining algorithms are applied to extract the Website Keyobjects.

With eye-tracking technology, we can eliminate the survey, using a more precise and objective tool to improve the classification of Website Keyobjects. It was concluded that eye-tracking technology is useful and accurate for knowing what a user looks at and, therefore, what most attracts their attention. Finally, it was established that there is an improvement of between 15% and 20% when using the information generated by the eye tracker.
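Combining fixation data with a page's layout reduces, at its core, to a point-in-rectangle lookup per gaze sample: the object whose bounding box contains a fixation accumulates that fixation's duration. The minimal sketch below is illustrative only; the object names, coordinate format, and sample structure are assumptions, not the paper's actual data model.

```python
def time_on_objects(gaze_samples, objects):
    """Accumulate viewing time per web object.

    gaze_samples: list of (x, y, duration_ms) fixations
    objects: dict mapping object name -> (x0, y0, x1, y1) bounding box
    Assumes non-overlapping bounding boxes.
    """
    totals = {name: 0.0 for name in objects}
    for x, y, dur in gaze_samples:
        for name, (x0, y0, x1, y1) in objects.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break  # first (only) box containing the fixation
    return totals
```

Ranking the accumulated totals gives an objective per-object interest score of the kind that could replace the survey-based interest measure.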

6.
Web sites contain an ever-increasing amount of information within their pages. As the amount of information increases, so does the complexity of the web site's structure. Consequently, it has become difficult for visitors to find the information relevant to their needs. To overcome this problem, various clustering methods have been proposed to cluster data in an effort to help visitors find the relevant information. These clustering methods have typically focused on either the content or the context of the web pages. In this paper we propose a method based on Kohonen's self-organizing map (SOM) that utilizes both content and context mining clustering techniques to help visitors identify relevant information more quickly. The input of the content mining is the set of web pages of the web site, whereas the source of the context mining is the access logs of the web site. SOM can be used to identify clusters of web sessions with similar context and also clusters of web pages with similar content, and it provides a means of visualizing the outcome of this processing. In this paper we show how this two-level clustering can help visitors identify the relevant information faster. The procedure has been tested on the access logs and web pages of the Department of Informatics and Telecommunications of the University of Athens.
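To make the SOM idea concrete, here is a minimal self-organizing map in plain Python; this is a didactic sketch, not the paper's implementation, and the grid size, learning-rate schedule, and toy data are all assumptions. In the paper's two-level scheme, one map would be trained on page term vectors (content) and another on session vectors derived from the access logs (context).

```python
import math
import random

def train_som(data, grid_w, grid_h, iters=2000, lr0=0.5, sigma0=1.0, seed=1):
    # data: list of equal-length feature vectors
    rng = random.Random(seed)
    dim = len(data[0])
    weights = {(i, j): [rng.random() for _ in range(dim)]
               for i in range(grid_w) for j in range(grid_h)}
    for t in range(iters):
        x = rng.choice(data)
        lr = lr0 * (1 - t / iters)                 # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.01    # shrinking neighborhood
        # best-matching unit: grid node with the closest weight vector
        bmu = min(weights, key=lambda k: sum((w - v) ** 2
                                             for w, v in zip(weights[k], x)))
        for k, w in weights.items():
            d2 = (k[0] - bmu[0]) ** 2 + (k[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma * sigma))  # neighborhood function
            for i in range(dim):
                w[i] += lr * h * (x[i] - w[i])
    return weights

def assign(weights, x):
    # Map a vector to its best-matching grid node (its cluster)
    return min(weights, key=lambda k: sum((w - v) ** 2
                                          for w, v in zip(weights[k], x)))
```

After training on two well-separated toy clusters with a 2 × 1 grid, each node settles near one cluster, so points from different clusters map to different nodes.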

7.
We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate whether we would replicate this finding in adolescents (prevocational education) and to establish how adolescents with autism spectrum disorder, who have been found to look less at faces generally, would process video examples in which the model's face is visible. Results showed that typically developing adolescents who did see the model's face paid significantly less attention to the task area than typically developing adolescents who did not see the model's face. Adolescents with autism spectrum disorder paid less attention to the model's face and more to the task demonstration area than typically developing adolescents who saw the model's face. These differences in viewing behavior, however, did not affect learning outcomes. This study provides further evidence that seeing the model's face in video examples affects students' attention but not their learning outcomes.

8.
Human attention is an important element in human–machine interface design because of the close relationship between an operator's attention and work performance. However, understanding an operator's attention allocation while he or she performs a task remains challenging, because attention is generally unobservable, immeasurable, and uncertain. In our previous study, we demonstrated the effectiveness of using an operator's eye movement information to understand attention allocation, which made attention observable. The present paper describes our study addressing the immeasurability and uncertainty of attention. Specifically, we used eye fixation duration to indicate the operator's attention and developed a new computational model of attention and its allocation using fuzzy logic clustering techniques. Alongside the development of this model, we also designed an experiment to verify its effectiveness. The results of the experiment show that the model is promising.
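The abstract names fuzzy-logic clustering of fixation durations; fuzzy c-means is the standard algorithm in that family, so a one-dimensional version is sketched below. The initialization, parameter values, and toy durations are illustrative assumptions, not the authors' model.

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=100):
    """1-D fuzzy c-means: returns cluster centers and soft memberships.

    xs: list of scalar observations (e.g., fixation durations in ms)
    m:  fuzzifier (> 1); m = 2 is the common default
    """
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # spread init
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # Update memberships: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(xs):
            for j in range(c):
                dj = abs(x - centers[j]) or 1e-12  # guard zero distance
                u[i][j] = 1.0 / sum(
                    (dj / (abs(x - centers[k]) or 1e-12)) ** (2 / (m - 1))
                    for k in range(c))
        # Update centers as membership-weighted means
        for j in range(c):
            num = sum((u[i][j] ** m) * x for i, x in enumerate(xs))
            den = sum(u[i][j] ** m for i in range(len(xs)))
            centers[j] = num / den
    return centers, u
```

On toy fixation durations with a short-fixation group near 105 ms and a long-fixation group near 500 ms, the two centers converge to those group means, and each duration gets a graded membership in both clusters rather than a hard label.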

9.
The purpose of the study was to investigate university learners' visual attention during a PowerPoint (PPT) presentation on the topic of "Dinosaurs" in a real classroom. The presentation, which lasted about 12–15 min, consisted of 12 slides with various text and graphic formats. An instructor gave the presentation to 21 students whose eye movements were recorded by an eye tracking system. Participants came from various science departments of a national university in Taiwan; ten were earth-science majors (ES) and the other 11 were assigned to the non-earth-science group (NES). Eye movement indicators, such as total time spent on the interest zone, fixation count, total fixation duration, and percent time spent in zone, were extracted to indicate their visual attention. One-way ANOVA as well as t-test analysis was applied to find associations between the eye movement data and the students' backgrounds as well as the different formats of the PPT slides. The results showed that the students attended significantly more to the text zones on the PPT slides and the narrations delivered by the instructor. Nevertheless, the average fixation duration, indicating the average information processing time, was longer on the picture zones. In general, the ES students displayed higher visual attention than the NES students to the text zones, but few differences were found for the picture zones. When the students viewed slides containing scientific hypotheses, the difference in attention distributions between the text and pictures was reduced. Further analyses of fixation densities and saccade paths showed that the ES students were better at information decoding and integration.

10.
Knowing students' learning styles allows us to improve their experience in an educational environment. In particular, the perception style is one of the most important dimensions of learning styles, since it describes the way students perceive the world as well as the kind of learning content they prefer. Several approaches to detecting students' perception style according to Felder's model have been proposed. However, these approaches exhibit several limitations that make their implementation difficult. We therefore propose a novel approach to detect a student's perception style by analyzing his/her interaction with games, namely puzzle games. To carry out this detection, we track how students play a puzzle game and extract information about this interaction. Then, we train a naive Bayes classifier to infer the students' perception style from the extracted information. We evaluated our proposed approach with 47 Computer Engineering students. Experimental results showed that the perception style was successfully predicted through the use of games, with an accuracy of 85%. We conclude that games are a promising environment in which students' perception style can be detected.
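As a sketch of the classification step, here is a tiny categorical naive Bayes classifier with Laplace smoothing over discretized interaction features. The feature names, the 'low'/'high' discretization, and the training data are hypothetical; the study's actual features come from puzzle-game telemetry, and Felder's perception dimension distinguishes sensing from intuitive learners.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    # samples: list of (features_dict, label); feature values are 'low'/'high'
    priors = Counter(label for _, label in samples)
    counts = defaultdict(Counter)  # (label, feature) -> Counter of values
    for feats, label in samples:
        for f, v in feats.items():
            counts[(label, f)][v] += 1
    return priors, counts, len(samples)

def predict(model, feats):
    priors, counts, n = model
    def log_posterior(label):
        lp = math.log(priors[label] / n)  # log prior
        for f, v in feats.items():
            c = counts[(label, f)]
            # Laplace smoothing; 2 = number of possible feature values
            lp += math.log((c[v] + 1) / (sum(c.values()) + 2))
        return lp
    return max(priors, key=log_posterior)
```

Trained on a few labeled play sessions, the classifier assigns a new student the perception style whose feature distribution best explains the observed interaction.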

11.
12.
This project analyzed high school students' performance and eye movements while learning in a simulation-based laboratory (SBL) and a microcomputer-based laboratory (MBL). Although the SBL and the MBL both used computers to collect, graph, and analyze data, the MBL involved manual manipulation of concrete materials, whereas the SBL displayed everything on a monitor. Fifty senior high school students at three urban public high schools in Taipei were randomly assigned to the MBL and SBL settings. The participants conducted the Boyle's Law experiment with an accompanying worksheet and completed pre- and post-conceptual tests. FaceLAB and ASL MobileEye were used to record each participant's eye movements in the SBL and MBL settings, respectively. The results showed that lower achievers improved significantly from the pre- to the post-conceptual test. The SBL group tended to carry out more experiments. Moreover, the MBL group's performance on the worksheet was moderately correlated with their post-test; this correlation was not found for the SBL group. Furthermore, at the beginning of the laboratories, the SBL group had a higher percentage of fixations with longer fixation durations, implying more attention to and deeper cognitive processing of the equipment and running experiments, while the MBL group focused on the worksheet. This study concludes that, in e-learning settings like SBLs, students tend to start off doing an experiment and then think about the questions on the worksheets, whereas in physical laboratories like MBLs, they tend to think before doing.

13.
This study compared clicker technology against mobile polling and the Just-in-Time Teaching (JiTT) strategy to investigate how these methods may differently affect students' anxiety, self-efficacy, engagement, academic performance, and attention and relaxation as indicated by brainwave activity. The study utilized a quasi-experimental research design. To assess the differences between the effects of clickers and mobile polling, the study collected data from two courses at a large research university in Taiwan in which 69 students used either clickers or mobile polling. The results showed that mobile polling along with the JiTT strategy and in-class polls reduce graduate students' anxiety, improve student outcomes in an environment comprising both graduate and undergraduate students, and increase students' attention during polling. However, brainwave data revealed that during the polling activities, students' attention in the clicker and mobile polling groups respectively increased and decreased. Students nowadays do not find smartphones a novelty; however, incorporating them into class is still a potentially effective way to increase student attention and provide a direct way for instructors to observe the learning effects of lectures and improve their teaching approach on that basis.

14.
Eye movement modelling examples (EMMEs) are instructional videos of a model's demonstration and explanation of a task that also show where the model is looking. EMMEs are expected to synchronize students' visual attention with the model's, leading to better learning than regular video modelling examples (MEs). However, synchronization is seldom directly tested. Moreover, recent research suggests that EMMEs might be more effective than MEs for low prior knowledge learners. We therefore used a 2 × 2 between-subjects design to investigate whether the effectiveness of EMMEs (EMMEs/MEs) is moderated by prior knowledge (high/low, manipulated by pretraining), applying eye tracking to assess synchronization. Contrary to expectations, EMMEs did not lead to higher learning outcomes than MEs, and no interaction with prior knowledge was found. Structural equation modelling shows the mechanism through which EMMEs affect learning: seeing the model's eye movements helped learners look faster at referenced information, which was associated with higher learning outcomes.

15.
With the rapid development of the Internet, the number of web pages on the Web is growing explosively at an exponential rate, and retrieving and discovering valuable information on the Web has become an important task. The presence of "noise" often reduces the efficiency of algorithms based on page processing. Therefore, how to remove page noise and extract a page's main content is an important problem in Web mining. This paper presents a concrete implementation for extracting the various categories of useful text from web pages.

16.
We aim to identify the salient objects in an image by applying a model of visual attention. We automate the process by predicting those objects in an image that are most likely to be the focus of someone's visual attention. Concretely, we first generate fixation maps from eye tracking data, which express the ground truth of people's visual attention for each training image. Then, we extract high-level features based on the bag-of-visual-words image representation as input attributes, along with the fixation maps, to train a support vector regression model. With this model, we can predict a new query image's saliency. Our experiments show that the model provides a good estimate of human visual attention in test image sets with one salient object and with multiple salient objects. In this way, we seek to reduce the redundant information within the scene and thus provide a more accurate depiction of the scene.
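The bag-of-visual-words representation used as input to the regression model reduces to quantizing each local descriptor against a visual vocabulary and histogramming the assignments. A minimal sketch follows; the vocabulary and descriptors are toy values, whereas the paper's descriptors would come from actual local image features.

```python
def bovw_histogram(descriptors, vocabulary):
    """L1-normalized bag-of-visual-words histogram.

    descriptors: list of local feature vectors extracted from one image
    vocabulary:  list of visual-word centroids (same dimensionality)
    """
    hist = [0] * len(vocabulary)
    for d in descriptors:
        # assign the descriptor to its nearest visual word (squared distance)
        j = min(range(len(vocabulary)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(d, vocabulary[k])))
        hist[j] += 1
    total = sum(hist) or 1  # avoid division by zero for empty images
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what a regressor (support vector regression in the paper) can consume as the image's feature vector.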

17.
This study aimed to explore the relationships between students' visual behaviors and learning outcomes, and between visual behaviors and prior cooking interest in multimedia recipe learning. An eye-tracking experiment, including pretest, recall test, and retention test, was conducted with a sample of 29 volunteer hospitality majors in Taiwan. The multimedia recipe included a static page showing the ingredients in a text-and-picture representation and a dynamic page showing the knife skills in a text-and-video representation. Total reading time, total fixation duration, number of fixations and inter-scanning count were used to explore the students' visual attention distributions among the different representation elements and their visual strategies for learning the recipe. The results showed that all students paid more visual attention to the text than to the picture information for the static recipe, and paid more visual attention to the video than to the text on the dynamic page. In addition, the visual attention paid to the text on the dynamic page was negatively correlated with the retention of the episodic knowledge of knife skills. In contrast, the visual attention paid to the text on the static ingredient page was positively correlated with students' prior cooking interest. Finally, the inter-scanning count between text and video on the dynamic page was the best index to negatively predict students' learning retention. Total fixation duration on the text information on the static page was the best index to positively predict students' prior cooking interest. Future studies and applications are discussed.
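The eye-movement indices used in studies like this one (total fixation duration, fixation count, and inter-scanning count between areas of interest) can all be computed from an ordered sequence of fixations labelled with their area of interest (AOI). A minimal illustrative computation, with made-up AOI labels and durations:

```python
from collections import defaultdict

def aoi_metrics(fixations):
    """Compute per-AOI totals from an ordered fixation sequence.

    fixations: list of (aoi_name, duration_ms) in temporal order
    Returns (total fixation duration per AOI,
             fixation count per AOI,
             inter-scanning count: transitions between different AOIs)
    """
    total_duration = defaultdict(float)
    fixation_count = defaultdict(int)
    transitions = 0
    prev = None
    for aoi, dur in fixations:
        total_duration[aoi] += dur
        fixation_count[aoi] += 1
        if prev is not None and aoi != prev:
            transitions += 1  # gaze moved to a different AOI
        prev = aoi
    return dict(total_duration), dict(fixation_count), transitions
```

For example, the sequence text, text, video, text yields two transitions, which is the inter-scanning count between the text and video zones.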

18.
Medical image interpretation is moving from 2D to volumetric images, thereby changing the cognitive and perceptual processes involved. This is expected to affect medical students' experienced cognitive load while they learn image interpretation skills. In two studies, this exploratory research investigated whether measures inherent to image interpretation, i.e., human-computer interaction and eye tracking, relate to cognitive load. It then investigated the effects of volumetric image interpretation on second-year medical students' cognitive load. Study 1 measured participants' human-computer interactions during two volumetric image interpretation tasks. Using structural equation modelling, the latent variable 'volumetric image information' was identified from the data, and it significantly predicted self-reported mental effort as a measure of cognitive load. Study 2 measured participants' eye movements during multiple 2D and volumetric image interpretation tasks. Multilevel analysis showed that the time to locate a relevant structure in an image was significantly related to pupil dilation, as a proxy for cognitive load. It is discussed how combining human-computer interaction and eye tracking allows for comprehensive measurement of cognitive load. Combining such measures in a single model would allow unique sources of cognitive load to be disentangled, leading to recommendations for implementing volumetric image interpretation in the medical education curriculum.

19.
An XML-enabled data extraction toolkit for web sources (Total citations: 7; self-citations: 0; citations by others: 7)
The amount of useful semi-structured data on the web continues to grow at a stunning pace. Often, interesting web data are not in database systems but in HTML pages, XML pages, or text files. Data in these formats are not directly usable by standard SQL-like query processing engines that support sophisticated querying and reporting beyond keyword-based retrieval. Hence, web users and applications need a smart way of extracting data from these web sources. One popular approach is to write wrappers around the sources, either manually or with software assistance, to bring the web data within the reach of more sophisticated query tools and general mediator-based information integration systems. In this paper, we describe the methodology and the software development of an XML-enabled wrapper construction system, XWRAP, for semi-automatic generation of wrapper programs. By XML-enabled we mean that the metadata about information content that are implicit in the original web pages will be extracted and encoded explicitly as XML tags in the wrapped documents. In addition, the query-based content filtering process is performed against the XML documents. The XWRAP wrapper generation framework has three distinct features. First, it explicitly separates the tasks of building wrappers that are specific to a web source from the tasks that are repetitive for any source, and it uses a component library to provide basic building blocks for wrapper programs. Second, it provides inductive learning algorithms that derive or discover wrapper patterns by reasoning about sample pages or sample specifications. Third, and most importantly, we introduce and develop a two-phase code generation framework. The first phase utilizes an interactive interface facility to encode the source-specific metadata knowledge identified by individual wrapper developers as declarative information extraction rules. The second phase combines the information extraction rules generated in the first phase with the XWRAP component library to construct an executable wrapper program for the given web source.
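The core idea of declarative extraction rules that map page regions to explicit XML tags can be illustrated with a toy rule interpreter. This regex-based rule format is a hypothetical stand-in for exposition only; XWRAP's actual rule language and component library are considerably richer.

```python
import re

def apply_rules(html, rules):
    """Wrap extracted HTML fragments in XML tags.

    html:  source page as a string
    rules: list of (xml_tag, regex) pairs, where each regex has
           exactly one capture group selecting the value to extract
    """
    out = []
    for tag, pattern in rules:
        for m in re.finditer(pattern, html, re.S):
            out.append(f"<{tag}>{m.group(1).strip()}</{tag}>")
    return "<record>" + "".join(out) + "</record>"
```

Running such rules over a product listing turns implicit page structure into explicit XML that downstream query tools can filter.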

20.
Social interactions to supplement learning and asynchronous tools to facilitate exchange of quality ideas have gained much attention in information systems education. While various systems exist, students have difficulty with deep processing of complex instructional materials (e.g., concepts of a theory and pedagogical support mechanisms derived from a theory). This research proposes a theoretical framework that leverages attention guidance in a social constructivist approach to facilitate processing of central domain concepts, principles, and their interrelations. Using an open source anchored discussion system, we designed a set of instructor-based and peer-oriented attention guidance functionalities involving dynamic manipulation of text font size similar to tag clouds. We conducted an experimental study with two small groups of first-year doctoral students in a blended-learning classroom format. Students in the control group had no access to attention guidance functions. Students in the treatment group used instructor-based attention guidance functionality and then switched to peer-oriented attention guidance functionality. The evaluation compared focus, content, and sequential organization of students' online discussion messages with heat maps, content analysis, sequential analysis, and statistical discourse analysis to examine different facets of the phenomenon in a holistic way. The results show that in areas where students struggle to understand challenging concepts, instructor-based attention guidance functionality facilitated elaboration and negotiation of ideas, which is fundamental to higher order thinking. In addition, after switching to peer-oriented attention guidance functionality, students in the treatment group took the lead in pinpointing challenging concepts they did not previously understand. These findings indicate that instructor-based and peer-oriented attention guidance functionalities offer students an indirect way of focusing their attention on deep processing of challenging concepts in an inherently open learning environment. Implications for theory, software design, and future research are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号