Similar articles (20 results)
1.
E-learning systems are increasingly essential in universities, schools, government departments and other organizations that provide education or training services. The objective of adopting e-learning systems is to provide students with educational services via electronic channels. The focus of this study is the impact of IT infrastructure services and IT quality on perceptions of the usefulness of e-learning systems. A model is proposed which includes five constructs: IT infrastructure services, system quality, information quality, service delivery quality, and perceived usefulness. A quantitative study was conducted at an Australian university with 720 survey responses from students enrolled in online courses. The results suggest that IT infrastructure services play a critical role in generating high-quality information, enhancing e-learning system quality, and improving service delivery quality. The impact of IT infrastructure services, system quality, and information quality on perceived usefulness is fully mediated by service delivery quality. Universities need to be aware of the critical impact of IT infrastructure services and consider how investment in these services could improve system and information quality, service delivery quality, and the usefulness and success of e-learning systems.

2.
Educational data mining is an emerging discipline concerned with developing methods for exploring the unique types of data that come from the educational context. This work is a survey of the application of data mining to learning management systems, together with a case-study tutorial using the Moodle system. Our objective is to introduce it both theoretically and practically to all users interested in this new research area, and in particular to online instructors and e-learning administrators. We describe the full process of mining e-learning data step by step, and show how to apply the main data mining techniques used (statistics, visualization, classification, clustering, and association rule mining) to Moodle data. We have used free data mining tools so that any user can immediately begin to apply data mining without having to purchase a commercial tool or program a specific personalized tool.
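As an illustration of the clustering step in such a tutorial, the following minimal sketch groups students by their activity profiles. The CSV file name and its userid/action columns are hypothetical stand-ins for an exported Moodle log, not the paper's actual dataset or tooling.

```python
# A minimal sketch of clustering Moodle activity data, assuming a log exported
# to CSV with hypothetical columns: userid, action, timestamp.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

log = pd.read_csv("moodle_log.csv")  # hypothetical export of the activity log

# Build one behavioural feature vector per student: how often each action occurs.
features = log.pivot_table(index="userid", columns="action",
                           aggfunc="size", fill_value=0)

# Standardize so high-frequency actions do not dominate the distance metric.
X = StandardScaler().fit_transform(features)

# Group students into k behavioural profiles (k = 4 chosen arbitrarily here).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
features["cluster"] = kmeans.labels_
print(features.groupby("cluster").mean())  # average profile of each group
```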

3.
Web mining involves the application of data mining techniques to large amounts of web-related data in order to improve web services. Web traversal pattern mining discovers users' access patterns from web server logs; this information can suggest appropriate navigation actions to web users. However, web logs keep growing continuously, and some entries become out of date over time. Users' behaviors may change as web logs are updated, or when the web site structure is changed. Additionally, it is difficult to determine an ideal minimum support threshold during the mining process for finding interesting rules, so the threshold must be adjusted repeatedly until satisfactory results are found. The essence of incremental and interactive data mining is the ability to reuse previous mining results in order to avoid unnecessary reprocessing when web logs or web site structures are updated, or when the minimum support is changed. In this paper, we propose efficient incremental and interactive data mining algorithms to discover web traversal patterns that match users' requirements. The experimental results show that our algorithms are more efficient than other comparable approaches.
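To make the incremental idea concrete, here is a simplified sketch with toy data and a deliberately naive pattern definition (contiguous page subsequences), not the paper's algorithms: counts from the previous run are retained, only newly appended sessions are scanned, and the minimum support can be re-applied interactively without recounting.

```python
# Incremental support counting for traversal patterns (toy illustration).
from collections import Counter

def count_patterns(sessions, max_len=3):
    """Count contiguous traversal subsequences up to max_len pages."""
    counts = Counter()
    for pages in sessions:
        for n in range(1, max_len + 1):
            for i in range(len(pages) - n + 1):
                counts[tuple(pages[i:i + n])] += 1
    return counts

# Previous run: counts and session total are retained instead of re-mined.
old_counts = count_patterns([["A", "B", "C"], ["A", "C"]])
old_total = 2

# Incremental update: only the new sessions are scanned.
new_sessions = [["A", "B"], ["B", "C", "A"]]
counts = old_counts + count_patterns(new_sessions)
total = old_total + len(new_sessions)

min_support = 0.5  # interactively adjustable without re-counting the logs
frequent = {p: c for p, c in counts.items() if c / total >= min_support}
print(frequent)
```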

4.
In this study, we introduce a web information fusion tool, the web warehouse, which is suitable for web mining and knowledge discovery. To formulate a web warehouse, a four-layer web warehouse architecture for decision support is first proposed. Based on this layered architecture, an extraction–fusion–mapping–loading (EFML) process model for web warehouse construction is then constructed. In this process model, a series of web services is used, including a wrapper service, mediation service, ontology service and mapping service. In particular, two kinds of mediators are introduced to fuse heterogeneous web information. Finally, a simple case study is presented to illustrate the construction process of a web warehouse.
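The four EFML stages can be pictured as a simple pipeline. The skeleton below is an illustrative sketch only; the function names and the naive last-writer-wins fusion rule are our assumptions, not the paper's mediator design.

```python
# A minimal skeleton of the extraction-fusion-mapping-loading (EFML) process.
def extract(sources):
    """Wrapper service: pull raw records from heterogeneous web sources."""
    return [record for src in sources for record in src()]

def fuse(records):
    """Mediation service: reconcile records that describe the same entity."""
    merged = {}
    for r in records:
        merged.setdefault(r["key"], {}).update(r)  # crude last-writer-wins merge
    return list(merged.values())

def map_to_schema(records, mapping):
    """Mapping service: rename source fields onto the warehouse schema."""
    return [{mapping.get(k, k): v for k, v in r.items()} for r in records]

def load(records, warehouse):
    """Load the mapped records into the warehouse store."""
    warehouse.extend(records)

warehouse = []
sources = [lambda: [{"key": 1, "price": "9.9"}],
           lambda: [{"key": 1, "vendor": "acme"}]]
load(map_to_schema(fuse(extract(sources)), {"vendor": "seller"}), warehouse)
print(warehouse)  # [{'key': 1, 'price': '9.9', 'seller': 'acme'}]
```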

5.
This study is concerned with the development of information granules and their use in data analysis. The design of the information granules exploits a concept of overshadowing, meaning that we retain a given level of membership in a given concept unless faced with contrary evidence. A detailed algorithm is provided and illustrated through a number of numerical studies. The idea of noninvasive data analysis is then introduced and discussed from the standpoint of the minimal level of structural dependencies to be used in the model.

6.
Businesses and people often organize their information of interest (IOI) into a hierarchy of folders (or categories). A personalized folder hierarchy provides a natural way for each user to manage and utilize his/her IOI (a folder corresponds to an interest type). Since such interest is relatively long-term, continuous web scanning is essential, and it should be directed by precise and comprehensible specifications of the interest. A precise specification may direct the scanner to those spaces that deserve scanning; a specification comprehensible to the user may facilitate manual refinement; and a specification comprehensible to information providers (e.g. Internet search engines) may facilitate the identification of proper seed sites from which to start scanning. However, expressing such specifications is quite difficult (if not implausible) for the user, since each interest type is often implicitly and collectively defined by the content (i.e. documents) of the corresponding folder, which may even evolve over time. In this paper, we present an incremental text mining technique to efficiently identify the user's current interests by mining the user's information folders. The specification mined for each interest type expresses the context of the interest type in conjunctive normal form, which is comprehensible to general users and information providers. The specification is also shown to be more precise in directing the scanner to those sites that are more likely to provide IOI. The user may thus maintain his/her folders and constantly receive IOI, without paying much attention to the difficult tasks of interest specification and seed identification.
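To show the shape such a specification takes, here is a tiny sketch of matching documents against a conjunctive-normal-form interest specification; the clauses and documents are invented examples, not mined output from the paper.

```python
# CNF interest matching: every clause must be satisfied, and a clause is
# satisfied by the presence of any one of its terms.
cnf_spec = [{"data", "text"}, {"mining", "discovery"}]  # (data OR text) AND (mining OR discovery)

def matches(document_text, spec):
    words = set(document_text.lower().split())
    return all(clause & words for clause in spec)  # each clause needs one hit

print(matches("Incremental text mining of user folders", cnf_spec))  # True
print(matches("A survey of text editors", cnf_spec))                 # False
```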

7.
The World Wide Web (WWW) has been recognized as a unique and ultimate source of information for the information retrieval and knowledge discovery communities. A tremendous amount of knowledge is recorded using various types of media, producing an enormous number of web pages. Retrieving the required information from the WWW is thus an arduous task. Different schemes for retrieving web pages have been used by the WWW community. One of the most widely used schemes is to traverse predefined web directories to reach a user's goal. These web directories are compiled or classified folders of web pages, usually organized into hierarchical structures. The classification of web pages into proper directories and the organization of directory hierarchies are generally performed by human experts. In this work, we provide a corpus-based method that applies text mining techniques to a corpus of web pages to automatically create web directories and organize them into hierarchies. The method is based on the self-organizing map learning algorithm and requires no human intervention during the construction of web directories and hierarchies. The experiments show that our method can produce comprehensible and reasonable web directories and hierarchies.
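The following is a minimal self-organizing map sketch in the spirit of the method, clustering random stand-in page vectors onto a small grid; the grid size, decay schedules, and data are our assumptions, not the authors' implementation.

```python
# Toy SOM: grid cells act as web directories; each page is assigned to the
# cell whose weight vector it best matches.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 8           # 4x4 map over 8-dimensional page vectors
weights = rng.random((grid_h, grid_w, dim))
pages = rng.random((100, dim))          # stand-ins for term-vector encodings

for t in range(1000):
    x = pages[rng.integers(len(pages))]
    # Best-matching unit: grid cell whose weight vector is closest to the page.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    lr = 0.5 * (1 - t / 1000)           # decaying learning rate
    sigma = 2.0 * (1 - t / 1000) + 0.5  # decaying neighbourhood radius
    for i in range(grid_h):
        for j in range(grid_w):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * sigma ** 2))
            weights[i, j] += lr * h * (x - weights[i, j])

# Assign every page to its best-matching cell (its "directory").
labels = [np.unravel_index(np.linalg.norm(weights - p, axis=2).argmin(),
                           (grid_h, grid_w)) for p in pages]
print(labels[:5])
```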

8.
Web mining is one of the new directions in data mining, and its application domains are very broad. This paper designs a Web data mining tool for shopping websites; with this tool, useful information about customer identification, customer acquisition, and customer retention can be discovered, and the effective use of this information can promote the development of the shopping website.

9.
Distance learning programs have been expanding dramatically in accordance with demand. Assessment of the quality of e-learning has become a strategic issue, one that is critical to program survival. In this study we propose a modified SERVQUAL instrument for assessing e-learning quality. The instrument consists of five dimensions: Assurance, Empathy, Responsiveness, Reliability, and Website Content. Data analysis from 203 e-learning students shows that four of these five dimensions (all except Reliability) play a significant role in perceived e-learning quality, which in turn affects learners' satisfaction and future intentions to enroll in online courses. Managerial implications of the major findings are provided.

10.
The quality of discovered association rules is commonly evaluated by interestingness measures (typically support and confidence) in order to supply the user with indicators for understanding and using the newly discovered knowledge. Low-quality datasets have a very negative impact on the quality of the discovered association rules, and one might legitimately wonder whether a so-called "interesting" rule LHS → RHS is meaningful when 30% of the LHS data are no longer up to date, 20% of the RHS data are inaccurate, and 15% of the LHS data come from a data source well known for its poor credibility. This paper presents an overview of data quality characterization and management techniques that can be advantageously employed to make the knowledge discovery and data mining processes quality-aware. We propose to integrate data quality indicators for quality-aware association rule mining, and we propose a cost-based probabilistic model for selecting legitimately interesting rules. Experiments on the challenging KDD-Cup-98 datasets show that variations in data quality have a great impact on the cost and quality of discovered association rules, and they confirm our approach of integrating data quality indicators into the KDD process to ensure the quality of data mining results.
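A worked toy example of the motivation above: compute the usual support and confidence of a rule LHS → RHS, then discount it by per-side quality indicators. The indicators and the simple multiplicative aggregation are our illustrative assumptions, not the paper's cost-based probabilistic model.

```python
# Quality-adjusted interestingness of an association rule (toy data).
transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}]
lhs, rhs = {"a"}, {"b"}

support = sum(lhs | rhs <= t for t in transactions) / len(transactions)
confidence = (sum(lhs | rhs <= t for t in transactions)
              / sum(lhs <= t for t in transactions))

# Hypothetical quality indicators in [0, 1] for freshness, accuracy, credibility.
quality = {"freshness_lhs": 0.7, "accuracy_rhs": 0.8, "credibility_lhs": 0.85}
q = (quality["freshness_lhs"] * quality["credibility_lhs"]
     * quality["accuracy_rhs"])          # crude conjunctive aggregation

print(f"support={support:.2f} confidence={confidence:.2f} "
      f"quality-adjusted confidence={confidence * q:.2f}")
```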

11.
12.
The popularity of the big data domain has boosted corporate interest in collecting and storing tremendous amounts of consumers' textual information. However, decision makers are often overwhelmed by the abundance of information, and the usage of text mining (TM) tools is still in its infancy. This study validates an extended technology acceptance model integrating information quality (IQ) and top management support. Results confirm that IQ influences behavioral intentions and TM tool usage through perceptions of external control, perceived ease of use, and perceived usefulness; top management support also plays a key role in determining the usage of TM tools.

13.
With the explosive growth of the Internet and the World Wide Web comes a dramatic increase in the number of users competing for the shared resources of distributed system environments. Most implementations of application servers and distributed search software do not distinguish among requests to different web pages. The implication is that the behavior of application servers is quite unpredictable, and applications that require timely delivery of fresh information consequently suffer the most in such competitive environments. This paper presents a model of quality of service (QoS) and the design of a QoS-enabled information delivery system that implements this model. The goal of this development is two-fold. On the one hand, we want to enable users or applications to specify the desired quality-of-service requirements for their requests, so that application-aware QoS adaptation is supported throughout web query and search processing. On the other hand, we want to enable an application server to customize how it responds to external requests by setting priorities among query requests and allocating server resources using adaptive QoS control mechanisms. We introduce the Infopipe approach as the systems support architecture and underlying technology for building a QoS-enabled distributed system for fresh information delivery.
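As a minimal sketch of the scheduling idea described above (not the Infopipe implementation), the snippet below attaches a QoS specification to each request and serves requests by priority while honoring a freshness deadline; all fields and values are illustrative assumptions.

```python
# Priority-based serving of QoS-tagged requests with freshness deadlines.
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Request:
    priority: int                            # lower value = served first
    deadline: float = field(compare=False)   # freshness requirement (epoch secs)
    query: str = field(compare=False)

queue = []
now = time.time()
heapq.heappush(queue, Request(2, now + 30.0, "stock quote AAPL"))
heapq.heappush(queue, Request(1, now + 5.0, "breaking news"))
heapq.heappush(queue, Request(3, now + 300.0, "weekly report"))

while queue:
    req = heapq.heappop(queue)
    if time.time() > req.deadline:
        continue                             # stale: freshness can no longer be met
    print(f"serving (priority {req.priority}): {req.query}")
```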

14.
We use information from the Web to perform our daily tasks more and more often. Locating the right resources to help us do so is a daunting task, especially given the present rate of growth of the Web and the many different kinds of resources available. The task of search engines is to assist us in finding the resources that are apt for our given tasks. In this paper we propose to use the notion of quality as a metric for estimating the aptness of online resources for individual searchers. The formal model for quality presented in this paper is firmly grounded in the literature. It is based on the observation that objects (dubbed artefacts in our work) can play different roles (i.e., perform different functions); an artefact can be of high quality in one role but of poor quality in another. Moreover, the notion of quality is highly personal. Our quality computation for estimating the aptness of resources uses the notion of linguistic variables from the field of fuzzy logic. After presenting our model for quality, we also show how the manipulation of online resources by means of transformations can influence their quality.
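A small sketch of scoring quality with linguistic variables, in the fuzzy-logic sense used above; the membership functions, term boundaries, and role scores are illustrative assumptions, not the paper's model.

```python
# Linguistic variable "quality" over a 0..1 scale, with role-dependent scores.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

terms = {"poor": (-0.4, 0.0, 0.4), "fair": (0.2, 0.5, 0.8), "good": (0.6, 1.0, 1.4)}

def describe(score):
    """Pick the linguistic term with the highest membership for this score."""
    return max(terms, key=lambda t: triangular(score, *terms[t]))

# The same artefact can rate differently depending on the role it plays.
scores_by_role = {"tutorial": 0.85, "api_reference": 0.35}
for role, s in scores_by_role.items():
    print(f"as a {role}: quality is '{describe(s)}' ({s})")
```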

15.
The analysis of web usage has mostly focused on sites composed of conventional static pages. However, huge amounts of the information available on the web come from databases or other data collections and are presented to users in the form of dynamically generated pages. The query interfaces of such sites allow the specification of many search criteria, and the generated results support navigation to pages that combine cross-linked data from many sources. For the analysis of visitor navigation behaviour in such web sites, we propose the web usage miner (WUM), which discovers navigation patterns subject to advanced statistical and structural constraints. Since our objective is the discovery of interesting navigation patterns, we do not focus on accesses to individual pages. Instead, we construct conceptual hierarchies that reflect the query capabilities used in the production of those pages. Our experiments with a real web site that integrates data from multiple databases, the German SchulWeb, demonstrate the appropriateness of WUM in discovering navigation patterns and show how those discoveries can help in assessing and improving the quality of the site.
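The key abstraction step can be sketched as follows: dynamically generated result pages are mapped to the query capabilities that produced them rather than being treated as distinct URLs. The URL patterns and parameter names below are invented for illustration, not taken from SchulWeb.

```python
# Abstract dynamic pages to concepts based on the query parameters they use.
from urllib.parse import urlparse, parse_qs

def to_concept(url):
    q = parse_qs(urlparse(url).query)
    used = sorted(k for k in q if k in {"state", "type", "level"})
    return "search[" + ",".join(used) + "]" if used else "static:" + urlparse(url).path

log = ["/schools?state=Berlin&type=Gymnasium",
       "/schools?state=Bayern&type=Gymnasium&level=upper",
       "/about.html"]
print([to_concept(u) for u in log])
# ['search[state,type]', 'search[level,state,type]', 'static:/about.html']
```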

16.
The rapid growth of the Internet has created a tremendous number of multilingual resources, but language boundaries prevent information sharing and discovery across countries. Proper names play an important role in search queries and knowledge discovery, and when foreign names are involved they are often translated phonetically, a process referred to as transliteration. In this research we propose a generic transliteration framework that incorporates an enhanced Hidden Markov Model (HMM) and a Web mining model. We improve traditional statistical transliteration in three ways: (1) we incorporate a simple phonetic transliteration knowledge base; (2) we incorporate bigram and trigram HMMs; (3) we incorporate a Web mining model that uses word frequency-of-occurrence information from the Web. We evaluated the framework on English–Arabic back transliteration. Experiments showed that when using the HMM alone, a combination of the bigram and trigram HMMs performed best for English–Arabic transliteration; the bigram model alone achieved fairly good performance, while the trigram model alone did not. The Web mining approach boosted performance by 79.05%. Overall, our framework achieved a precision of 0.72 when the eight best transliterations were considered. Our results show promise for using transliteration techniques to improve multilingual Web retrieval.
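A hedged sketch of the re-ranking idea: combine an HMM-derived transliteration probability with a web frequency-of-occurrence signal. The candidate spellings, scores, hit counts, and interpolation weights below are made-up stand-ins, not the paper's data or exact scoring function.

```python
# Re-rank transliteration candidates by HMM probability plus web frequency.
import math

# Candidate back-transliterations with hypothetical HMM log-probabilities.
candidates = {"Mohammed": -4.1, "Mohamed": -4.3, "Muhamad": -6.0}

# Hypothetical web hit counts (in the paper, mined from the Web).
web_hits = {"Mohammed": 120_000, "Mohamed": 310_000, "Muhamad": 9_000}

hmm_w, web_w = 0.6, 0.4
max_log_hits = max(math.log(h) for h in web_hits.values())

def combined_score(name):
    web_score = math.log(web_hits[name]) / max_log_hits  # scaled to (0, 1]
    hmm_score = math.exp(candidates[name])               # back to a probability
    return hmm_w * hmm_score + web_w * web_score

ranked = sorted(candidates, key=combined_score, reverse=True)
print(ranked)  # web evidence promotes the more common spelling
```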

17.
E-learning is now used by many organizations as an approach to enhancing the skills of knowledge workers. However, most applications have performed poorly in motivating employee learning and are perceived as less effective because learning is not aligned with work performance. To help solve this problem, we developed a performance-oriented approach using design science research methods. It uses performance measurement to clarify organizational goals and individual learning needs, and links them to e-learning applications. The key concept lies in a Key Performance Indicator model, in which the organizational mission and vision are translated into a set of targets that drive learning towards the goal of improving work performance. We explored the mechanisms needed to utilize our approach and examined the necessary conceptual framework and implementation details. To demonstrate its effectiveness, a prototype workplace e-learning system was developed and evaluated.
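A minimal sketch of the KPI idea described above: targets are compared with measured performance, and gaps point to individual learning needs. The KPI names, numbers, and course links are illustrative assumptions, not the paper's model.

```python
# Derive learning recommendations from KPI target/actual gaps (toy data).
kpi_tree = {
    "customer satisfaction": {"target": 0.90, "actual": 0.78,
                              "courses": ["complaint handling"]},
    "first-call resolution": {"target": 0.85, "actual": 0.86,
                              "courses": ["product knowledge"]},
}

def learning_needs(tree, tolerance=0.02):
    """Recommend courses for every KPI that misses its target."""
    needs = []
    for kpi, v in tree.items():
        gap = v["target"] - v["actual"]
        if gap > tolerance:
            needs.extend((kpi, round(gap, 2), c) for c in v["courses"])
    return needs

print(learning_needs(kpi_tree))
# [('customer satisfaction', 0.12, 'complaint handling')]
```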

18.
The rapid development of the Internet has made it an increasingly important source for finding useful data. Typical search engines are based on keyword queries, which yield low hit rates and cannot provide services tailored to particular users. This paper proposes combining natural language understanding techniques with Web data mining to build personalized Web data mining systems customized to users' specific needs, and presents the application scheme, design, and implementation of News-Miner, a Web mining system for the specific domain of news mining. Preliminary experimental results show that the scheme is feasible, and the method can easily be extended to other specialized application domains.

19.
Web usage mining: extracting unexpected periods from web logs
Existing Web usage mining techniques are based on an arbitrary division of the data (e.g. "one log per month") or guided by presumed results (e.g. "what is the customers' behaviour during the period of Christmas purchases?"). These approaches have two main drawbacks: they depend on this arbitrary organization of the data, and they cannot automatically extract "seasonal peaks" from the stored data. In this paper, we propose a specific data mining process (in particular, to extract frequent behaviour patterns) that reveals the densest periods automatically. From the whole set of possible combinations, our method extracts the frequent sequential patterns related to the extracted periods. A period is considered dense if it contains at least one sequential pattern that is frequent for the set of users connected to the website in that period. Our experiments show that the extracted periods are relevant and that our approach is able to extract both frequent sequential patterns and the associated dense periods.
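A simplified sketch of the dense-period idea: slide a window over timestamped sessions and keep the periods in which enough sessions share a pattern. The data are toy values, and treating a whole visited sequence as the "pattern" is our simplification of the paper's sequential-pattern mining.

```python
# Find dense periods: windows where one pattern reaches minimum support.
from collections import Counter

# (timestamp_hour, visited_pages) pairs standing in for parsed web-log sessions.
sessions = [(1, ("home", "faq")), (2, ("home", "faq")), (2, ("home", "buy")),
            (3, ("home", "faq")), (10, ("buy",)), (11, ("home", "buy"))]

def dense_periods(sessions, window=2, min_support=0.6):
    periods = []
    hours = sorted({t for t, _ in sessions})
    for start in hours:
        batch = [p for t, p in sessions if start <= t < start + window]
        if len(batch) < 2:
            continue
        counts = Counter(batch)  # here a "pattern" is a whole visited sequence
        best, freq = counts.most_common(1)[0]
        if freq / len(batch) >= min_support:
            periods.append((start, start + window, best, freq / len(batch)))
    return periods

print(dense_periods(sessions))
```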

20.
The significance of modeling and measuring various attributes of the Web, in part or as a whole, is undeniable. Modeling information phenomena on the Web constitutes fundamental research towards an understanding that will contribute to the goal of increasing its utility. Although Web-related metrics have become increasingly sophisticated, few employ models to explain their measurements. In this paper, we discuss issues related to metrics for Web page significance. These metrics are used for ranking the quality and relevance of Web pages in response to user needs. We focus on the problem of ascertaining the statistical distribution of some well-known hyperlink-based Web page quality metrics. Based on empirical distributions of Web page degrees, we derive analytically the probability distribution of the PageRank metric and find that it follows the familiar inverse polynomial law reported for Web page degrees. We verify this theoretical exercise with experimental results that suggest a highly concentrated distribution of the metric.
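To make the metric under study concrete, here is a minimal power-iteration PageRank on a toy graph; the graph and iteration count are illustrative, and the damping factor 0.85 is the conventional choice rather than a value from the paper.

```python
# Power-iteration PageRank on a four-page toy graph.
import numpy as np

# Adjacency: links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = len(links), 0.85

pr = np.full(n, 1.0 / n)
for _ in range(100):
    new = np.full(n, (1 - d) / n)       # teleportation term
    for i, outs in links.items():
        for j in outs:
            new[j] += d * pr[i] / len(outs)  # share rank along out-links
    pr = new

print(pr.round(3))  # page 2, with the most in-links, ranks highest
```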
