Similar Documents
Found 20 similar documents.
1.
Abstract

This paper describes a system for explaining solutions generated by genetic algorithms (GAs) using tools developed for case-based reasoning (CBR). In addition, this work empirically supports the building block hypothesis (BBH) which states that genetic algorithms work by combining good sub-solutions called building blocks into complete solutions. Since the space of possible building blocks and their combinations is extremely large, solutions found by GAs are often opaque and cannot be easily explained. Ironically, much of the knowledge required to explain such solutions is implicit in the processing done by the GA. Our system extracts and processes historical information from the GA by using knowledge acquisition and analysis tools developed for case-based reasoning. If properly analysed, the resulting knowledge base can be used: to shed light on the nature of the search space; to explain how a solution evolved; to discover its building blocks; and to justify why it works. Such knowledge about the search space can be used to tune the GA in various ways. As well as being a useful explanatory tool for GA researchers, our system serves as an empirical test of the building block hypothesis. The fact that it works so well lends credence to the theory that GAs work by exploiting common genetic building blocks.
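As a purely illustrative aside (not the authors' explanation system), the building block idea can be seen in the simplest GA operator the hypothesis reasons about: one-point crossover over bit strings, where contiguous high-fitness schemata from two parents can be recombined into one child. The Java sketch below uses made-up representations and names.

```java
import java.util.Arrays;
import java.util.Random;

// Illustrative sketch only: one-point crossover over bit strings, the operator the
// building block hypothesis (BBH) reasons about. Contiguous high-fitness schemata
// from the two parents can end up combined in the child. Representation and names
// are assumptions, not the explanation system described in the abstract.
public class CrossoverSketch {
    private static final Random RNG = new Random(42);

    static int[] onePointCrossover(int[] parentA, int[] parentB) {
        int cut = 1 + RNG.nextInt(parentA.length - 1);      // cut point in [1, length-1]
        int[] child = new int[parentA.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = (i < cut) ? parentA[i] : parentB[i]; // head of A, tail of B
        }
        return child;
    }

    public static void main(String[] args) {
        int[] a = {1, 1, 0, 0, 0};   // carries the building block 110**
        int[] b = {0, 0, 0, 1, 1};   // carries the building block ***11
        System.out.println(Arrays.toString(onePointCrossover(a, b)));
    }
}
```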

2.
In the context of technological expansion and development, companies feel the need to renew and optimize their information systems as they search for the best way to manage knowledge. Business ontologies within the semantic web are an excellent tool for managing knowledge within this space. The proposal in this article consists of a methodology for integrating information in companies. The application of this methodology results in the creation of a specific business ontology capable of semantic interoperability. The resulting ontology, developed from the information system of specific companies, represents the fundamental business concepts, thus making it a highly appropriate information integration tool. Its level of semantic expressivity improves on that of its own sources, and its solidity and consistency are guaranteed by means of checking by current reasoning tools. An ontology created in this way could drive the renewal processes of companies’ information systems. A comparison is also made with a number of well-known business ontologies, and similarities and differences are drawn, highlighting the difficulty in aligning general ontologies to specific ones, such as the one we present.

3.
Previews in hypertexts: effects on navigation and knowledge acquisition
Abstract
In hypertexts, previews can be used as local tools for navigation. They pop up when a link is activated and provide information about the linked page. In an experimental study with 50 participants, the effect of previews on searching and knowledge acquisition was investigated. The participants had to explore a hypertext with the aim either to understand as much as they could or to search for information. Previews enhanced knowledge acquisition in both conditions and supported both intentional and incidental learning. In the searching condition, previews were used for link selection even though they did not enhance the search results.

4.
5.
This paper introduces Ontoolsearch, a new search system that educators can employ to find suitable tools for supporting collaborative learning settings. Current tool search facilities commonly allow only simple keyword searches, limiting the accuracy of the obtained results. In contrast, Ontoolsearch supports semantic querying of tool knowledge bases annotated with the Ontoolcole ontology, specifically designed to fit educators’ questions. Moreover, Ontoolsearch offers an innovative direct manipulation interface to educators, intended to facilitate query formulation as well as the analysis of obtained results. To evaluate this proposal, a group of educators was engaged in a formal comparison study of Ontoolsearch with a keyword search facility based on Lucene. Six search tasks were proposed, each responding to the learning tool needs of a real CSCL setting. Participants had to find tools for these search tasks using both systems alternately. Evaluation results showed that retrieval performance was significantly better with Ontoolsearch, despite educators’ previous experience with keyword searches. Further, educators rated the user interface of Ontoolsearch very positively and considered the system very useful for finding tools for their own learning situations.

6.
Knowledge conceptualization tool
Knowledge acquisition is one of the most important and problematic aspects of developing knowledge-based systems. Many automated tools have been introduced in the past; however, manual techniques are still heavily used. Interviewing is one of the most commonly used manual techniques for knowledge acquisition, yet few automated support tools exist to help knowledge engineers enhance their performance. The paper presents a knowledge conceptualization tool (KCT) with which the knowledge engineer can effectively retrieve, structure, and formalize knowledge components, so that the resulting knowledge base is accurate and complete. The KCT uses information retrieval techniques to facilitate conceptualization, one of the most human-intensive activities of knowledge acquisition. Two information retrieval techniques employing best-match strategies are used: the vector space model and the probabilistic ranking principle model. A prototype of the KCT was implemented to demonstrate the concept. The results from the KCT are compared with the outputs from a manual knowledge acquisition process in terms of the amount of information retrieved and the process time spent. An analysis of the results shows that the KCT retrieves knowledge components (e.g., facts, rules, protocols, and uncertainties) in about half the time of the manual process, and retrieves four times as many knowledge components as the manual process.
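The vector space model named in the abstract can be sketched in a few lines: documents and queries become TF-IDF vectors and candidates are ranked by cosine similarity. The code below is a generic, minimal illustration of that model under assumed tokenization and weighting, not the KCT implementation.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

// Generic vector space model sketch (not the KCT itself): build TF-IDF vectors and
// rank documents against a query by cosine similarity. Tokenization and weighting
// choices here are assumptions for illustration.
public class VectorSpaceSketch {

    static Map<String, Double> tfidf(List<String> tokens, Map<String, Integer> df, int nDocs) {
        Map<String, Double> vec = new HashMap<>();
        for (String t : tokens) vec.merge(t, 1.0, Double::sum);   // raw term frequency
        vec.replaceAll((t, tf) -> tf * Math.log(1.0 + (double) nDocs / (1 + df.getOrDefault(t, 0))));
        return vec;
    }

    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
        for (double v : a.values()) na += v * v;
        for (double v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        List<List<String>> docs = List.of(
                List.of("rule", "fact", "protocol"),
                List.of("fact", "uncertainty"));
        Map<String, Integer> df = new HashMap<>();                // document frequencies
        for (List<String> d : docs) for (String t : new HashSet<>(d)) df.merge(t, 1, Integer::sum);
        Map<String, Double> query = tfidf(List.of("fact", "rule"), df, docs.size());
        for (List<String> d : docs)
            System.out.println(d + " -> " + cosine(query, tfidf(d, df, docs.size())));
    }
}
```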

7.
This paper introduces the TORISHIKI-KAI project, which aims to construct a million-word-scale semantic network from the Web using state-of-the-art knowledge acquisition methods. The resulting network can be browsed as a Web search directory, and we show that the directory is useful for finding “unknown unknowns” — in the infamous words of D.H. Rumsfeld: things “we don't know we don't know.” Because typically we have no way to look for information we don't even know is missing, a crucial characteristic of unknown unknowns is that they are very difficult to discover through keyword-based Web search. Some examples of the unknown unknowns we have found include unexpected troubles associated with commercial products, surprising new combinations of ingredients in new recipes, unexpected tools or methods for committing suicide, and so on. We expect such information to be useful for risk management, innovation support, and the detection of harmful information on the Web.

8.
Computer-supported collaborative learning (CSCL) is a dynamic and varied area of research. Ideally, tools for CSCL support and encourage solo and group learning processes and products. However, most CSCL research does not focus on supporting and sustaining the co-construction of knowledge. We identify four reasons for this situation and three critical resources that every collaborator brings to collaborations but that are underutilized in CSCL research: (a) prior knowledge, (b) information not yet transformed into knowledge that is judged relevant to the task(s) addressed in collaboration, and (c) cognitive processes used to construct these informational resources. Finally, we introduce gStudy, a software tool designed to advance research in the learning sciences. gStudy helps learners manage cognitive load so they can re-assign cognitive resources to self-, co-, and shared regulation; and it automatically and unobtrusively traces each user's engagement with content and the means chosen for cognitively processing content, thus generating real-time performance data about processes of collaborative learning.

9.
Pivot-based algorithms are effective tools for proximity searching in metric spaces. They allow trading space overhead for the number of distance evaluations performed at query time. With additional search structures (which pose extra space overhead) they can also reduce the amount of side computations. We introduce a new data structure, the Fixed Queries Array (FQA), whose novelties are that (1) it permits sublinear extra CPU time without any extra data structure, and (2) it permits trading the number of pivots for their precision so as to make better use of the available memory. We show experimentally that the FQA is an efficient tool for searching in metric spaces and that it compares favorably against other state-of-the-art approaches. Its simplicity makes it an effective tool for practitioners seeking a black-box method to plug into their applications.
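The FQA's internal layout is not reproduced here, but the pivot principle it builds on can be sketched: store d(x, p) for every object x and a few pivots p, and at query time discard any x with |d(q, p) - d(x, p)| > r for some pivot, by the triangle inequality, before computing any real distance to it. The Java sketch below uses assumed names and a plain pivot table; it is not the FQA data structure itself.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Generic pivot-based filtering in a metric space (not the FQA itself): an object x
// can be discarded for a range query (q, r) whenever |d(q,p) - d(x,p)| > r for some
// pivot p, by the triangle inequality, so d(q,x) never has to be computed for it.
public class PivotFilterSketch<T> {
    private final List<T> objects;
    private final List<T> pivots;
    private final BiFunction<T, T, Double> dist;
    private final double[][] table;          // table[i][j] = d(objects.get(i), pivots.get(j))

    PivotFilterSketch(List<T> objects, List<T> pivots, BiFunction<T, T, Double> dist) {
        this.objects = objects;
        this.pivots = pivots;
        this.dist = dist;
        this.table = new double[objects.size()][pivots.size()];
        for (int i = 0; i < objects.size(); i++)
            for (int j = 0; j < pivots.size(); j++)
                table[i][j] = dist.apply(objects.get(i), pivots.get(j));
    }

    List<T> rangeQuery(T q, double r) {
        double[] qToPivot = new double[pivots.size()];
        for (int j = 0; j < pivots.size(); j++) qToPivot[j] = dist.apply(q, pivots.get(j));
        List<T> result = new ArrayList<>();
        for (int i = 0; i < objects.size(); i++) {
            boolean discarded = false;
            for (int j = 0; j < pivots.size() && !discarded; j++)
                if (Math.abs(qToPivot[j] - table[i][j]) > r) discarded = true;   // cannot be within r
            if (!discarded && dist.apply(q, objects.get(i)) <= r)                // verify survivors only
                result.add(objects.get(i));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Double> data = List.of(1.0, 2.0, 5.0, 9.0, 9.5);
        List<Double> pivots = List.of(1.0, 9.0);
        PivotFilterSketch<Double> idx =
                new PivotFilterSketch<>(data, pivots, (x, y) -> Math.abs(x - y));
        System.out.println(idx.rangeQuery(8.8, 1.0));   // expected: [9.0, 9.5]
    }
}
```

Spending memory on more pivots to get tighter filtering is exactly the kind of space/precision trade-off the abstract refers to.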

10.
Knowledge graphs have gained increasing popularity in the past couple of years, thanks to their adoption in everyday search engines. Typically, they consist of fairly static and encyclopedic facts about persons and organizations (e.g. a celebrity’s birth date, occupation and family members) obtained from large repositories such as Freebase or Wikipedia. In this paper, we present a method and tools to automatically build knowledge graphs from news articles. As news articles describe changes in the world through the events they report, we present an approach to create Event-Centric Knowledge Graphs (ECKGs) using state-of-the-art natural language processing and semantic web techniques. Such ECKGs capture long-term developments and histories of hundreds of thousands of entities and are complementary to the static encyclopedic information in traditional knowledge graphs. We describe our event-centric representation schema, the challenges in extracting event information from news, our open source pipeline, and the knowledge graphs we have extracted from four different news corpora: general news (Wikinews), the FIFA world cup, the Global Automotive Industry, and Airbus A380 airplanes. Furthermore, we present an assessment of the accuracy of the pipeline in extracting the triples of the knowledge graphs. Moreover, through an event-centric browser and visualization tool we show how approaching news information in an event-centric manner can increase the user’s understanding of the domain, facilitate the reconstruction of news story lines, and enable exploratory investigation of facts hidden in the news.
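To make "event-centric" concrete, the sketch below shows one plausible shape for an ECKG node: an event grouping its participants, time, place, and provenance links back to the articles it was extracted from. The field layout and sample values are assumptions for illustration, not the authors' actual schema.

```java
import java.util.List;

// Illustrative event-centric record (assumed shape, not the authors' schema): each event
// groups its participants, time and place, and keeps provenance links to the news items
// it was extracted from, which is what distinguishes an ECKG from a static,
// entity-centric knowledge graph.
public record NewsEvent(
        String eventType,            // e.g. "delivery", "match", "acquisition"
        List<String> participants,   // entity URIs involved in the event
        String time,                 // ISO date of the reported event
        String place,                // location URI or label
        List<String> sourceArticles  // provenance: URLs of the articles mentioning it
) {
    public static void main(String[] args) {
        NewsEvent e = new NewsEvent(
                "delivery",
                List.of("http://dbpedia.org/resource/Airbus_A380",
                        "http://dbpedia.org/resource/Singapore_Airlines"),
                "2007-10-15",
                "http://dbpedia.org/resource/Toulouse",
                List.of("https://en.wikinews.org/wiki/example-article"));   // illustrative placeholder
        System.out.println(e);
    }
}
```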

11.
Context: This systematic mapping review is set in a Global Software Engineering (GSE) context, characterized by a highly distributed environment in which project team members work separately in different countries. This geographic separation creates specific challenges associated with global communication, coordination and control. Objective: The main goal of this study is to discover all the available communication and coordination tools that can support highly distributed teams, how these tools have been applied in GSE, and then to describe and classify the tools to allow both practitioners and researchers involved in GSE to make use of the available tool support. Method: We performed a systematic mapping review through a search for studies that answered our research question, “Which software tools (commercial, free or research based) are available to support Global Software Engineering?” Applying a range of related search terms to key electronic databases, selected journals, and conferences and workshops enabled us to extract relevant papers. We then used a data extraction template to classify, extract and record important information about the GSD tools from each paper. This information was synthesized and presented as a general map of the types of GSD tools, each tool’s main features and how each tool was validated in practice. Results: The main result is a list of 132 tools which, according to the literature, have been, or are intended to be, used in global software projects. The classification of these tools includes lists of features for communication, coordination and control as well as how each tool has been validated in practice. We found that, out of the total of 132, the majority of tools were developed at research centers, and only a small percentage of tools (18.9%) are reported as having been tested outside the initial context in which they were developed. Conclusion: The most common features in the GSE tools included in this study are team activity and social awareness, support for informal communication, support for distributed knowledge management, and interoperability with other tools. Finally, there is a need to evaluate these tools to verify their external validity, or their usefulness in a wider global environment.

12.
13.
《Computer》2005,38(11):97-99
This article looks at a custom tool developed by the author that leverages the Google Web search API (or a similar search service) to discover a list of Web pages matching a given topic; identify and extract trends and patterns from these Web pages' text; and transform those trends and patterns into an understandable, useful, and well-organized information resource. The tool accomplishes these tasks using four main components. First, a search engine client discovers a list of relevant Web pages using the Google Web search API. An information extraction engine then mines concepts and associated text passages from these Web pages. Next, a clustering engine organizes the most significant concepts into a hierarchical taxonomy. Finally, a knowledge base generator uses this taxonomy to generate a hypertext knowledge base from the extracted concepts and text passages.
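A structural sketch of that four-component pipeline is shown below. Every interface and name is hypothetical; the retired Google Web search API is deliberately hidden behind a generic SearchClient rather than reproduced, since its exact signatures are not given in the article.

```java
import java.util.List;
import java.util.Map;

// Hypothetical structural sketch of the four-component pipeline described above.
// All names are assumptions; the original Google Web search API calls are hidden
// behind SearchClient rather than guessed at.
public class TopicKnowledgeBaseSketch {

    record Taxonomy(String concept, List<Taxonomy> children) {}

    interface SearchClient        { List<String> findPages(String topic); }                          // component 1
    interface ExtractionEngine    { Map<String, List<String>> mineConcepts(List<String> pageUrls); } // component 2: concept -> passages
    interface ClusteringEngine    { Taxonomy organize(Map<String, List<String>> conceptPassages); }  // component 3
    interface KnowledgeBaseWriter { void generateHypertext(Taxonomy taxonomy,
                                                           Map<String, List<String>> conceptPassages); } // component 4

    static void build(String topic, SearchClient search, ExtractionEngine extract,
                      ClusteringEngine cluster, KnowledgeBaseWriter writer) {
        List<String> pages = search.findPages(topic);                       // 1. discover relevant pages
        Map<String, List<String>> concepts = extract.mineConcepts(pages);   // 2. mine concepts and passages
        Taxonomy taxonomy = cluster.organize(concepts);                     // 3. hierarchical taxonomy
        writer.generateHypertext(taxonomy, concepts);                       // 4. hypertext knowledge base
    }

    public static void main(String[] args) {
        build("semantic web",
              topic -> List.of("http://example.com/page1"),                                   // stub search
              pages -> Map.of("ontology", List.of("passage about ontologies from page1")),    // stub extraction
              concepts -> new Taxonomy("root", List.of(new Taxonomy("ontology", List.of()))), // stub clustering
              (taxonomy, concepts) -> System.out.println("would write KB for: " + taxonomy)); // stub writer
    }
}
```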

14.
Design and Implementation of a Lucene-Based Search Engine
Search engines have become an important tool for finding information on the Web. Although general-purpose search engines are powerful, they respond slowly and index incompletely when searching enterprise portal sites that contain many sub-sites. Lucene is a powerful full-text indexing engine toolkit with which a search engine can be developed quickly. This paper describes a method for developing a customized Chinese search engine using Lucene, a Java-based full-text retrieval toolkit, and compares the customized engine experimentally with Google's site search. The results show that it outperforms Google when searching enterprise portal sites with many sub-sites.
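The paper itself gives no code, but a minimal indexing-and-search round trip against the standard Lucene API (roughly the Lucene 5 to 8 line) looks like the sketch below. The index path, field names and sample content are assumptions, and a real Chinese-language engine would normally substitute a Chinese analyzer such as SmartChineseAnalyzer for the StandardAnalyzer used here.

```java
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Minimal Lucene sketch (Lucene 5-8 era API): index one document, then run a query.
// Paths, field names and the analyzer choice are assumptions, not taken from the paper.
public class LuceneSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("portal-index"));
        StandardAnalyzer analyzer = new StandardAnalyzer();   // a Chinese analyzer would fit better in practice

        // Indexing: one Document per crawled portal page.
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("url", "http://portal.example.com/news/1", Field.Store.YES));
            doc.add(new TextField("content", "enterprise portal sub-site page text", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Searching: parse the user query against the "content" field and print the top hits.
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("content", analyzer).parse("portal");
            TopDocs hits = searcher.search(query, 10);
            for (ScoreDoc sd : hits.scoreDocs) {
                System.out.println(searcher.doc(sd.doc).get("url") + "  score=" + sd.score);
            }
        }
    }
}
```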

15.
Knowledge managers need to select which knowledge management tool to use for any given problem and problem environment. A graphical tool, named the “house of knowledge management tool selection”, is proposed, based on the house of quality matrix used in the quality function deployment methodology. A simple case study is described that acts as a proof of concept to show that the house of knowledge management tool selection can systematically evaluate potential tools to solve a knowledge management problem. To help identify the tools to populate the house, an examination was undertaken of how knowledge management tools had previously been listed and classified, but these existing classifications were found to be of little help. No classification existed that categorised the tools in terms of the knowledge problems they helped resolve, yet such a classification would seem more useful for knowledge managers. To meet this need, knowledge problems were divided into ten subtypes and the knowledge management tools were then categorised according to their effectiveness at solving each subtype. This new classification was flexible enough to include all types of knowledge management tools and could also change with each problem environment. It was found to give a greater understanding of the knowledge management tools in the context of a particular knowledge problem, and it could therefore help populate the house tool. The house of knowledge management tool selection is a promising development that should be able to become an essential part of a manager’s decision-making toolkit.

16.
Years of software development have accumulated a large amount of source code, and many code search tools have been developed, but existing tools are not precise enough and are therefore rarely used. This paper proposes an efficient source code search algorithm that improves search accuracy by identifying the relationship between query statements and APIs. Based on this algorithm, a search tool for C# source code is implemented, and the algorithm is evaluated through objective experiments and a user study. The experimental results show that the proposed search algorithm is highly effective.
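The abstract does not spell out the algorithm, so the sketch below shows only a generic, assumed flavor of relating a query to API names: camel-case identifiers are split into tokens and candidates are scored by token overlap with the query. Names and scoring are illustrative, not the paper's method (the example APIs are .NET-style, while the sketch itself is plain Java).

```java
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Generic illustration only (not the paper's algorithm): score candidate API methods by
// how many query tokens appear among the tokens of the camel-case-split API name.
public class ApiMatchSketch {
    static Set<String> tokens(String identifier) {
        // "File.ReadAllText" -> {file, read, all, text}
        String[] parts = identifier.split("(?<=[a-z0-9])(?=[A-Z])|[._]");
        Set<String> out = new HashSet<>();
        for (String p : parts) if (!p.isEmpty()) out.add(p.toLowerCase());
        return out;
    }

    static double score(String query, String apiName) {
        Set<String> q = tokens(query.replace(' ', '.'));
        Set<String> a = tokens(apiName);
        long overlap = q.stream().filter(a::contains).count();
        return q.isEmpty() ? 0 : (double) overlap / q.size();
    }

    public static void main(String[] args) {
        List<String> candidates = List.of("File.ReadAllText", "File.WriteAllText", "Directory.GetFiles");
        candidates.stream()
                .sorted(Comparator.comparingDouble((String c) -> -score("read text file", c)))
                .forEach(c -> System.out.println(score("read text file", c) + "  " + c));
    }
}
```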

17.
An Interactive Parallelizing System for Fortran77
陈文光  杨博  王紫瑶  郑丰宙  郑纬民 《软件学报》1999,10(12):1259-1267
Parallelizing compilers can convert existing serial programs into parallel programs automatically or semi-automatically. The results of automatic parallelization in existing systems still fall short of hand parallelization, because the analysis capabilities of parallelization tools are limited and the semantic information inherent in programs cannot be understood by these tools. By providing a set of friendly interactive tools that let users cooperate closely with the compiler, TIPS (Tsinghua Interactive Parallelizing System) offers an effective way to improve the capability and efficiency of parallelizing systems.

18.
Web-videoconference systems offer several tools (such as chat, audio, and webcam) that vary in the amount and type of information learners can share with each other and the teacher. It has been proposed that tools fostering more direct social interaction and feedback amongst learners and teachers would lead to higher levels of engagement. If so, one would expect that the richer the tools used, the higher the levels of learner engagement. However, the actual use of tools and contributions to interactions in the learning situation may relate to students’ motivation. Therefore, we investigated the relationship between the tools used, student motivation, participation, and performance on a final exam in an online course in economics (N = 110). In line with our assumptions, we found some support for the expected association between autonomous motivation and participation in web-videoconferences, as well as between autonomous motivation and the grade on the final exam. Students’ tool use and participation were significantly correlated with each other and with exam scores, but participation appeared to be a stronger predictor of the final exam score than tool use. This study adds to the knowledge base needed to develop guidelines on how synchronous communication in e-learning can be used.

19.
《Applied Soft Computing》2007,7(1):398-410
Personalized search engines are important tools for finding web documents for specific users, because they are able to provide the location of information on the WWW as accurately as possible, using efficient methods of data mining and knowledge discovery. Traditional search engines vary in their types and features, including the functionality and ranking methods they support. New search engines that use link structures have produced improved search results, which can overcome the limitations of conventional text-based search engines. Going a step further, this paper presents a system that provides users with personalized results derived from a search engine that uses link structures. The fuzzy document retrieval system (constructed from a fuzzy concept network based on the user's profile) personalizes the results yielded by link-based search engines according to the preferences of the specific user. A preliminary experiment with six subjects indicates that the developed system is capable of retrieving not only relevant but also personalized web pages, depending on the preferences of the user.
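One standard ingredient of a fuzzy concept network is a fuzzy relevance relation between concepts that can be propagated by max-min composition. The sketch below illustrates that textbook operation with assumed concepts and membership values; it is not the system described in the paper.

```java
// Textbook-style sketch of max-min composition over a fuzzy concept relation
// (an assumed ingredient of a fuzzy concept network, not the paper's actual system):
// relevance[i][j] in [0,1] says how relevant concept j is to concept i; composing the
// relation with itself propagates relevance through intermediate concepts.
public class FuzzyCompositionSketch {
    static double[][] maxMinCompose(double[][] r) {
        int n = r.length;
        double[][] out = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)                      // via intermediate concept k
                    out[i][j] = Math.max(out[i][j], Math.min(r[i][k], r[k][j]));
        return out;
    }

    public static void main(String[] args) {
        // concepts: 0 = "search engine", 1 = "ranking", 2 = "link analysis" (illustrative values)
        double[][] r = {
            {1.0, 0.8, 0.0},
            {0.8, 1.0, 0.6},
            {0.0, 0.6, 1.0}
        };
        double[][] composed = maxMinCompose(r);
        System.out.println("relevance(search engine, link analysis) = " + composed[0][2]); // 0.6, via "ranking"
    }
}
```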

20.
We describe in this paper the Internet Fish Construction Kit, a tool for building persistent, personal, dynamic information gatherers (“Internet Fish”) [3] for the World-Wide Web. Internet Fish (IFish) differ from current resource discovery tools in that they are introspective, incorporating deep structural knowledge of the organization and services of the Web, and are also capable of on-the-fly reconfiguration, modification and expansion. Introspection lets IFish examine their own information-gathering processes and identify successful resource discovery techniques while they operate; IFish automatically remember not only what information has been uncovered but also how that information was derived. Dynamic reconfiguration and expansion permit IFish to be modified to take advantage of new information sources or analysis techniques, or to model changes in the user's interests, as they wander the Web. Together, these properties define a much more powerful class of resource discovery tools than previously available. The IFish Construction Kit makes it possible to rapidly and easily build IFish-class tools. The Kit includes a general architecture for individual IFish, a language for specifying IFish information objects, and an operating environment for running individual IFish.
