Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Content annotation for the semantic web: an automatic web-based approach   (Cited by: 1; self-citations: 1; other citations: 0)
Semantic annotation is required to add machine-readable content to natural language text. A global initiative such as the Semantic Web directly depends on the annotation of massive amounts of textual Web resources. However, considering the number of those resources, manual semantic annotation of their contents is neither feasible nor scalable. In this paper we introduce a methodology to partially annotate the textual content of Web resources in an automatic and unsupervised way. It uses several well-established learning techniques and heuristics to discover relevant entities in text and to associate them with classes of an input ontology by means of linguistic patterns. It also relies on the distribution of information on the Web to assess the degree of semantic co-relation between entities and classes of the input domain ontology. Special effort has been put into minimizing the number of Web accesses required to evaluate entities in order to ensure the scalability of the approach. A manual evaluation has been carried out to test the methodology in several domains, showing promising results.
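The web-scale co-relation between an entity and an ontology class can be approximated with a pointwise-mutual-information style score over search-engine hit counts. A minimal sketch under that assumption (the hit counts below are invented, and a real system would obtain them from a web search API):

```python
import math

def association_score(entity_hits, class_hits, joint_hits, total):
    """PMI-style semantic co-relation between an entity and a class
    label, computed from (hypothetical) web hit counts."""
    if joint_hits == 0:
        return float("-inf")          # never co-occur: no association
    p_joint = joint_hits / total
    p_entity = entity_hits / total
    p_class = class_hits / total
    return math.log(p_joint / (p_entity * p_class))

# Hypothetical counts: "Paris" co-occurring with the class label "city".
score = association_score(entity_hits=1_000, class_hits=50_000,
                          joint_hits=800, total=1_000_000)
```

A positive score suggests the entity and class co-occur more often than chance, which is the signal the methodology uses to pick a class for each discovered entity.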

2.
Learning Team Strategies: Soccer Case Studies   (Cited by: 1; self-citations: 0; other citations: 1)
We use simulated soccer to study multiagent learning. Each team's players (agents) share an action set and a policy, but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes and compare several learning algorithms: TD-Q learning with linear neural networks (TD-Q), Probabilistic Incremental Program Evolution (PIPE), and a PIPE version that learns by coevolution (CO-PIPE). TD-Q is based on learning evaluation functions (EFs) that map input/action pairs to expected reward. PIPE and CO-PIPE search policy space directly. They use adaptive probability distributions to synthesize programs that calculate action probabilities from current inputs. Our results show that linear TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE and CO-PIPE, however, do not depend on EFs and find good policies faster and more reliably. This suggests that in some multiagent learning scenarios direct search in policy space can offer advantages over EF-based approaches.
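The linear evaluation function at the core of TD-Q can be sketched as a standard TD(0) update on a linear Q-function over input/action features. The feature encoding, reward, and step sizes here are illustrative assumptions, not the paper's actual setup:

```python
def q_value(w, features):
    # Linear evaluation function: Q(s, a) = w . x(s, a)
    return sum(wi * xi for wi, xi in zip(w, features))

def td_q_update(w, x, reward, x_next_best, alpha=0.1, gamma=0.9):
    """One TD(0) step: move w toward r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * q_value(w, x_next_best)
    error = target - q_value(w, x)
    return [wi + alpha * error * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0]                       # shared weights for the whole team
# A goal is scored (reward 1); features are a toy 2-dimensional encoding.
w = td_q_update(w, x=[1.0, 0.0], reward=1.0, x_next_best=[0.0, 1.0])
```

Because every agent on a team shares `w`, a single collective reward updates the evaluation function used by all positions, which is exactly where the abstract reports difficulties for linear TD-Q.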

3.
Partial evaluation is a program transformation technique. Given part of a program's input, it can be used to specialize the program, performing as much of the computation as possible in advance and ultimately producing efficient residual code. Partial evaluation systems have been studied for many programming languages and applied to fields such as compilation, compiler generation, and computer graphics. This paper surveys partial evaluation theory and its applications, discusses the state of research on partial evaluators for Java, and briefly describes DJmix, a distributed partial evaluation system for Java designed by our group.
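The idea of specializing a program on a known (static) input can be illustrated with a toy specializer that, given a fixed exponent, performs the recursion at specialization time and emits straight-line residual code. Python is used here for brevity, though the survey concerns Java systems:

```python
def power(x, n):
    # General program: both inputs dynamic.
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Partially evaluate `power` with static n: the loop over n runs
    at specialization time, leaving a multiplication chain as residual
    code with no recursion and no test on n."""
    expr = " * ".join(["x"] * n) if n > 0 else "1"
    residual_source = f"lambda x: {expr}"
    return eval(residual_source), residual_source

cube, src = specialize_power(3)      # residual: "lambda x: x * x * x"
```

The residual function computes the same result as the general one but contains only the computation that depended on the dynamic input `x`, which is the efficiency gain partial evaluation is after.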

4.
The tremendous success of the World Wide Web is countervailed by the effort needed to search for and find relevant information. For tabular structures embedded in HTML documents, typical keyword- or link-analysis-based search fails. The Semantic Web relies on annotating resources such as documents by means of ontologies and aims to overcome this bottleneck of finding relevant information. Turning the current Web into a Semantic Web requires automatic approaches to annotation, since manual approaches will not scale in general. Most efforts have been devoted to the automatic generation of ontologies from text, but with quite limited success. Tabular structures, however, require additional effort, mainly because understanding table contents requires comprehending the logical structure of the table on the one hand and its semantic interpretation on the other. The focus of this paper is on the automatic transformation and generation of semantic (F-Logic) frames from table-like structures. The presented work consists of a methodology, an accompanying implementation (called TARTAR) and a thorough evaluation. It is based on a grounded cognitive table model which is stepwise instantiated by the methodology. A typical application scenario is the automatic population of ontologies to enable query answering over arbitrary tables (e.g. HTML tables).
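The final instantiation step, turning a logically understood table into frames, can be sketched as follows: the header row supplies slot names and each body row becomes one frame instance. The paper's cognitive table model is far richer; this only shows the last mile, with an invented frame name:

```python
def table_to_frames(table, frame_name):
    """Turn a header-plus-rows table into a list of frame-like dicts:
    one frame per data row, one slot per column."""
    header, *rows = table
    return [{"frame": frame_name, "slots": dict(zip(header, row))}
            for row in rows]

frames = table_to_frames(
    [["Country", "Capital"],          # header row -> slot names
     ["France", "Paris"],
     ["Japan", "Tokyo"]],
    frame_name="CountryInfo")
```

Once rows are frames, populating an ontology and answering queries over the table reduces to ordinary instance retrieval.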

5.
In this paper, we develop a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site and then automatically check whether these conditions are fulfilled. First, we provide a rewriting-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns and lack of information, that is, incomplete or missing Web pages inside the Web site. Useful information is gathered during the verification process which can be used to repair the Web site. Our methodology is based on a novel rewriting-based technique, called partial rewriting, in which the traditional pattern-matching mechanism is replaced by tree simulation, a suitable technique for recognizing patterns inside semistructured documents. The framework has been implemented in the prototype GVerdi, which is publicly available.
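The tree-simulation idea behind partial rewriting can be sketched on trees encoded as (label, children) tuples: a pattern is simulated by a document node if the labels agree and every pattern child is simulated by some child of the document node, so the pattern need only be embedded, not matched exactly. This is a simplified reading of the technique, not the paper's formal definition:

```python
def simulates(pattern, data):
    """True if `data` simulates `pattern`: root labels agree and each
    pattern child is simulated by some child of the data node.
    Extra children in the data tree are simply ignored."""
    p_label, p_children = pattern
    d_label, d_children = data
    if p_label != d_label:
        return False
    return all(any(simulates(pc, dc) for dc in d_children)
               for pc in p_children)

# A page with extra structure still "contains" the smaller pattern.
page = ("html", [("body", [("h1", []), ("p", [])])])
pat  = ("html", [("body", [("p", [])])])
```

This tolerance for extra, unspecified content is what makes simulation a good fit for semistructured documents, where pages rarely match a template node for node.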

6.
We develop a new approach to Web information discovery and filtering. Our system, called WID, allows the user to specify long-term information needs by means of various topic profile specifications. An entire example page or an index page can be accepted as input for the discovery. The system uses a simulated annealing algorithm to automatically explore new Web pages; simulated annealing possesses some favorable properties for fulfilling the discovery objectives. Information retrieval techniques are adopted to evaluate the content-based relevance of each page being explored. Hyperlink information, in addition to the textual context, is considered in the relevance score of a Web page. WID supports three forms of relevance feedback: positive page feedback, negative page feedback, and positive keyword feedback. The system is domain independent and does not rely on any prior knowledge or information about the Web content. Extensive experiments have been conducted to demonstrate the effectiveness of the discovery performance achieved by WID.
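Simulated-annealing exploration can be sketched with the standard Metropolis acceptance rule: a candidate page with a lower relevance score than the current one is still followed with a probability that shrinks as the temperature cools, which lets the crawler escape locally relevant but globally poor regions. The relevance scores and cooling schedule below are made up:

```python
import math
import random

def accept(current_score, candidate_score, temperature):
    """Metropolis criterion: always accept improvements; accept a
    worse candidate with probability exp((candidate - current) / T)."""
    if candidate_score >= current_score:
        return True
    return random.random() < math.exp(
        (candidate_score - current_score) / temperature)

def anneal(scores, t0=1.0, cooling=0.8, seed=0):
    """Walk a stream of candidate relevance scores, annealing as we go."""
    random.seed(seed)
    current, t = scores[0], t0
    for s in scores[1:]:
        if accept(current, s, t):
            current = s
        t *= cooling                  # geometric cooling schedule
    return current
```

Early on (high temperature) almost any link is worth a look; late in the run the walk settles on consistently relevant pages.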

7.
Web search evaluation is the process of measuring the effectiveness of a Web search system. Such an evaluation helps in identifying the most effective system and helps users find the required information with less effort. Web search systems have been evaluated in many different ways over the last 15 years. In this paper, we review some of the efforts made to evaluate Web search systems, classifying the evaluation studies into eight categories. As the size and content of the Web, and hence Web search techniques, are changing rapidly, we argue for the necessity of an automatic evaluation methodology, while emphasizing that the significance of user-based evaluation cannot be neglected. Finally, we conclude that an automatic evaluation method that models users' feedback-based evaluation is required for the effective and realistic evaluation of Web search systems.
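A feedback-based evaluation of the kind argued for above ultimately reduces to computing effectiveness metrics over ranked result lists; precision-at-k against user relevance judgements is the minimal example (document identifiers here are placeholders):

```python
def precision_at_k(ranked_results, relevant, k):
    """Fraction of the top-k results judged relevant by users.
    `relevant` is the set of documents users marked (e.g. clicked)."""
    top_k = ranked_results[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

# Hypothetical ranking and user judgements.
p = precision_at_k(["d1", "d2", "d3", "d4"], relevant={"d1", "d3"}, k=3)
```

An automatic methodology would derive the `relevant` set from logged user feedback instead of hand-made judgements, which is exactly the modelling step the paper calls for.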

8.
Given the increasingly important role the World Wide Web plays as an information source, and the continuing problems faced by certain individuals, particularly those with disabilities and those using 'non-standard' Web browsing technology, it is vital that web resource providers be aware of design features which introduce barriers affecting the accessibility of on-line information. The accessibility audit plays an important role in uncovering, describing, and explaining potential accessibility barriers present in a web site. It furthermore acts as an educational tool, raising awareness of accessible design among web designers and content providers and giving them a recovery plan for improving the accessibility of the audited resource, and potentially other resources. In 1999, the authors were commissioned to carry out accessibility audits of 11 web sites in the UK Higher Education sector. This paper discusses the development of the methodology used to carry out the audits, the findings of the audits in terms of the accessibility levels of the subject sites, and feedback received as a result of the auditing process. It concludes by looking at ways in which the methodology adopted may be tailored to suit specific types of web resource evaluation.

9.
Ontology-enabled pervasive computing applications   (Cited by: 1; self-citations: 0; other citations: 1)
Information technology's rapid evolution has made tremendous amounts of information and services available at our fingertips. However, we still face the frustration of trying to do simple things in the device- and application-rich environments where we live and work. Task computing is defined as computation that fills the gap between the tasks users want to perform and the services that constitute the available actionable functionality. To support task computing, we have implemented a Task Computing Environment (TCE) including a client environment, a service discovery mechanism, and Semantic Web services and tools. TCE is composed of several components, including STEER (Semantic Task Execution EditoR), White Hole, and PIPE (Pervasive Instance Provision Environment).

10.
As the amount of information on the Internet dramatically increases, more and more limitations of information searching are revealed, because web pages are designed for human use, mixing content with presentation. To overcome these limitations, the Semantic Web, based on ontology, was introduced by the W3C to bring about significant advances in web searching. To accomplish this, the Semantic Web must provide search methods based on the different relationships between resources. In this paper, we propose a semantic association search methodology that consists of the evaluation of resources and the relationships between them, as well as the identification of relevant information based on an ontology, a semantic network of resources and properties. The proposed semantic search method is based on an extended spreading activation technique. In order to evaluate the importance of a query result, we propose weighting methods for measuring properties and resources based on their specificity and generality. With this approach, users can search for resources semantically associated with their query, confident that the retrieved information is valuable and important. The experimental results show that our method is valid and efficient for searching and ranking semantic search results.
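The basic spreading-activation step can be illustrated directly: activation flows from the initially activated query resources along weighted properties, attenuated by a decay factor, for a fixed number of pulses. The graph, weights, and decay below are invented; the paper's extended technique adds property/resource weighting on top of this:

```python
def spread_activation(graph, seeds, decay=0.5, pulses=2):
    """graph: {node: [(neighbor, weight), ...]} over an ontology's
    semantic network; seeds: {node: initial activation}."""
    activation = dict(seeds)
    for _ in range(pulses):
        new = dict(activation)
        for node, a in activation.items():
            for neighbor, w in graph.get(node, []):
                # Activation attenuated by edge weight and global decay.
                new[neighbor] = new.get(neighbor, 0.0) + a * w * decay
        activation = new
    return activation

g = {"query": [("paper", 1.0)], "paper": [("author", 0.8)]}
act = spread_activation(g, {"query": 1.0})
```

Nodes reached through more, shorter, or more strongly weighted paths accumulate more activation, which gives the ranking over semantically associated resources.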

11.
Nowadays, patients and physicians use health-related websites as an important information source, and therefore the quality evaluation of such websites is critical. Quality assessment of health-related websites is especially relevant because their use implies a wide range of threats which can affect people's health. Additionally, website quality evaluation can also help organizations maximize the return on the resources they invest in developing websites of high user-perceived quality. However, there is not yet a clear and unambiguous definition of the concept of website quality, and the debate about quality evaluation on the Web remains open. In this paper, we present a qualitative and user-oriented methodology for assessing the quality of health-related websites based on a 2-tuple fuzzy linguistic approach. To identify the set of quality criteria, qualitative research was carried out using the focus group technique. The measurement method generates linguistic quality assessments from visitors' judgements with respect to those quality criteria. The linguistic judgements are combined without loss of information by applying a 2-tuple linguistic weighted average operator. This methodology improves the quality evaluation of health websites through a commitment to putting users first.
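The 2-tuple model represents an assessment as a pair (s_i, alpha), where s_i is a label from the linguistic scale and alpha in [-0.5, 0.5) is a symbolic translation; aggregation is then an ordinary weighted average on label indices followed by rounding, so no information is lost. A sketch on a five-label scale (the labels and ratings are illustrative, not the paper's):

```python
LABELS = ["very_low", "low", "medium", "high", "very_high"]

def to_two_tuple(beta):
    """Delta operator: map beta in [0, len(LABELS)-1] to a 2-tuple
    (closest label, symbolic translation alpha = beta - index)."""
    i = int(round(beta))
    return LABELS[i], beta - i

def weighted_average(indices, weights):
    """2-tuple linguistic weighted average on label indices."""
    beta = sum(i * w for i, w in zip(indices, weights)) / sum(weights)
    return to_two_tuple(beta)

# Three visitors rate a criterion "high", "medium", "high", equal weights.
label, alpha = weighted_average([3, 2, 3], [1, 1, 1])
```

The result ("high", -1/3) keeps the fractional part that a plain rounding to "high" would discard, which is the "without loss of information" property the abstract highlights.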

12.
In this paper, we develop a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site and then automatically check whether these conditions are fulfilled. First, we provide a rewriting-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which obtains the requirements not fulfilled by the Web site and helps to repair the errors by identifying incomplete information and/or missing pages. Our methodology is based on a novel rewriting-based technique, called partial rewriting, in which the traditional pattern-matching mechanism is replaced by tree simulation, a suitable technique for recognizing patterns inside semistructured documents. The framework has been implemented in the prototype Web verification system Verdi, which is publicly available.

13.
Web applications are fast becoming more widespread, larger, more interactive, and more essential to the international use of computers. It is well understood that web applications must be highly dependable, and as a field we are just now beginning to understand how to model and test them. One straightforward technique is to model Web applications as finite state machines. However, large numbers of input fields, input choices, and the ability to enter values in any order combine to create a state space explosion problem. This paper evaluates a solution that uses constraints on the inputs to reduce the number of transitions, thus compressing the FSM. The paper presents an analysis of the potential savings of the compression technique and reports actual savings from two case studies.
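The explosion, and the effect of an input constraint, can be quantified directly: n independent input fields enterable in any order need one FSM path per permutation (n!), while an "order does not matter" constraint collapses each such group to a single annotated transition. A small counting sketch under that simplified model:

```python
import math

def unconstrained_paths(num_fields):
    # One FSM path per permutation of n order-independent input fields.
    return math.factorial(num_fields)

def compressed_paths(groups):
    """groups: list of (size, order_matters) pairs. A group whose order
    does not matter is compressed into one annotated transition; paths
    multiply over the groups that remain order-sensitive."""
    paths = 1
    for size, order_matters in groups:
        paths *= math.factorial(size) if order_matters else 1
    return paths

# A form with 6 order-independent fields: 720 paths before compression.
saving = unconstrained_paths(6) // compressed_paths([(6, False)])
```

Even one modest form shows why the technique pays off: the savings factor grows factorially with the number of order-independent fields.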

14.
Web-based Online Tibetan Input Technology   (Cited by: 1; self-citations: 0; other citations: 1)
于洪志  何向真 《计算机工程》2008,34(18):260-262
Web-based online Tibetan input technology enables Tibetan text entry inside the browser without a locally installed input method, supporting online Tibetan text interaction and providing a cross-platform Tibetan input solution for networked systems. This paper explains the working principle and basic design of the technique; describes the components of the online input method, the design principles for its internal and external codes, and the input workflow; analyzes the input method as a system and presents an implementation model; and discusses embedding Tibetan font information in the browser to achieve online, real-time Tibetan input. Both embedded and plug-in approaches are used to integrate the online Tibetan input technique with mainstream web page editors.
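The external-code to internal-code conversion at the heart of any such input method can be sketched as a longest-match lookup from keystroke sequences to Unicode code points. The key table below is a hypothetical illustration (the paper's actual code tables are not reproduced here), with a few real Tibetan letters from the U+0F00 block:

```python
# Hypothetical external-code table: Latin keystrokes -> Tibetan letters
# (U+0F40 KA, U+0F41 KHA, U+0F42 GA, U+0F68 A).
KEY_TO_TIBETAN = {"k": "\u0F40", "kh": "\u0F41", "g": "\u0F42", "a": "\u0F68"}

def convert(keystrokes):
    """Greedy longest-match conversion from external code (keystrokes)
    to internal code (Unicode code points)."""
    out, i = [], 0
    while i < len(keystrokes):
        for length in (2, 1):            # prefer the longer key sequence
            chunk = keystrokes[i:i + length]
            if chunk in KEY_TO_TIBETAN:
                out.append(KEY_TO_TIBETAN[chunk])
                i += length
                break
        else:
            i += 1                       # skip keys with no mapping
    return "".join(out)

text = convert("kha")                    # "kh" then "a"
```

In the browser-based setting the same lookup would run in page script, with the embedded font ensuring the resulting code points render even on machines without Tibetan support.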

15.
Improving pattern quality in web usage mining by using semantic information   (Cited by: 1; self-citations: 1; other citations: 0)
Frequent Web navigation patterns generated using Web usage mining techniques provide valuable information for several applications, such as Web site restructuring and recommendation. In conventional Web usage mining, semantic information about Web page content plays no part in the pattern generation process. In this work, we investigate the effect of semantic information on the patterns generated for Web usage mining in the form of frequent sequences. To this end, we developed a technique and a framework for integrating semantic information into the Web navigation pattern generation process, where frequent navigational patterns are composed of ontology instances instead of Web page addresses. The quality of the generated patterns is measured through an evaluation mechanism involving Web page recommendation. Experimental results show that more accurate recommendations can be obtained by including semantic information in navigation pattern generation, which indicates an increase in pattern quality.
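The core integration step, rewriting navigation sequences over page addresses as sequences over ontology instances before mining frequent sequences, can be sketched as follows. The page-to-instance mapping and the sessions are invented, and the miner is reduced to counting consecutive pairs:

```python
from collections import Counter

# Hypothetical mapping from page URLs to ontology instances.
PAGE_TO_INSTANCE = {"/item?id=17": "Movie:MatrixDVD",
                    "/item?id=42": "Movie:InceptionDVD",
                    "/cart": "Action:AddToCart"}

def semantic_sessions(sessions):
    """Rewrite each clickstream of URLs as ontology instances,
    dropping pages that have no semantic mapping."""
    return [[PAGE_TO_INSTANCE[url] for url in s if url in PAGE_TO_INSTANCE]
            for s in sessions]

def frequent_pairs(sessions, min_support=2):
    """Count consecutive instance pairs; keep those meeting support."""
    counts = Counter(tuple(s[i:i + 2])
                     for s in sessions for i in range(len(s) - 1))
    return {p: c for p, c in counts.items() if c >= min_support}

pairs = frequent_pairs(semantic_sessions(
    [["/item?id=17", "/cart"],
     ["/item?id=17", "/cart", "/item?id=42"]]))
```

Mining on instances means that visits to different URLs backed by the same concept reinforce the same pattern, which is the source of the quality improvement the abstract reports.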

16.
Probabilistic incremental program evolution   (Cited by: 1; self-citations: 0; other citations: 1)
Probabilistic incremental program evolution (PIPE) is a novel technique for automatic program synthesis. It combines probability vector coding of program instructions, population-based incremental learning, and tree-coded programs like those used in some variants of genetic programming (GP). PIPE iteratively generates successive populations of functional programs according to an adaptive probability distribution over all possible programs. In each iteration, it uses the best program to refine the distribution, thus stochastically generating better and better programs. Since distribution refinements depend only on the best program of the current population, PIPE can evaluate program populations efficiently when the goal is to discover a program with minimal runtime. We compare PIPE to GP on a function regression problem and the 6-bit parity problem. We also use PIPE to solve tasks in partially observable mazes, where the best programs have minimal runtime.
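The sample-then-refine loop can be sketched for the degenerate case of fixed-length programs over a small instruction set: sample programs from per-position probability vectors, then shift each position's probability mass toward the best program's choice. The instruction set and learning rate are illustrative; real PIPE works on probabilistic prototype trees, not flat vectors:

```python
import random

def sample_program(prob_vectors, instructions, rng):
    """Draw one instruction per position from its probability vector."""
    return [rng.choices(instructions, weights=pv)[0] for pv in prob_vectors]

def refine(prob_vectors, instructions, best, lr=0.3):
    """Move each position's distribution toward the best program's
    choice, then renormalize (population-based incremental learning)."""
    new = []
    for pv, chosen in zip(prob_vectors, best):
        shifted = [p * (1 - lr) + (lr if instr == chosen else 0.0)
                   for p, instr in zip(pv, instructions)]
        total = sum(shifted)
        new.append([p / total for p in shifted])
    return new

instructions = ["inc", "dec", "nop"]
pvs = [[1 / 3] * 3] * 2              # uniform over 2 program positions
pvs = refine(pvs, instructions, best=["inc", "inc"])
program = sample_program(pvs, instructions, random.Random(1))
```

Repeating the sample/evaluate/refine cycle concentrates probability on instruction choices that keep producing the generation's best program.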

17.
Metadata is needed to facilitate data sharing among geospatial information communities. Geographic metadata standards are available, but they tend to be general and complex in nature and are not well suited to overcoming semantic heterogeneities across the vocabularies of different domains and user communities. Current formalizations of metadata standards are not flexible enough to allow reuse and extension of metadata specifications, in particular for Web-based information systems. To address this problem, we propose a methodology to create community-specific metadata profiles for the Semantic Web by reusing metadata specifications and domain vocabularies encoded as resources for the Web. This ensures that these community profiles are semantically compatible, so they can be used in Web-based information systems. The ISO 19115:2003 geographic metadata standard is the most general standard available and is used in conjunction with the Web Ontology Language as the expression medium to test the methodology for each of the possible extensions documented in ISO 19115:2003. It is shown that it is possible to extend and reuse metadata specifications and vocabularies distributed on the Web using the Web Ontology Language, by exploiting the language's flexibility to create restrictions on inherited properties and to make inferences over web-distributed resources. Examples from the area of hydrology are provided to demonstrate the technical details of the approach.

18.
Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the underlying search mechanisms, results management and presentation, and style of input. Each approach affects the quality of the information retrieved and the user's experience of the search process. However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream IR evaluation, with a far less unified approach to the design of evaluation activities. This has led to slow progress and low interest when compared to other established evaluation series, such as TREC for IR or OAEI for Ontology Matching. In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems. Through a discussion of these, we identify their weaknesses and highlight the future need for a more comprehensive evaluation framework that addresses current limitations.

19.
20.
《Journal of Web Semantics》2005,3(2-3):132-146
Turning the current Web into a Semantic Web requires automatic approaches to annotating existing data, since manual approaches will not scale in general. We present an approach for the automatic generation of F-Logic frames from tables, which subsequently supports the automatic population of ontologies from table-like structures. The approach consists of a methodology, an accompanying implementation and a thorough evaluation. It is based on a grounded cognitive table model which is stepwise instantiated by our methodology.
