Similar Documents
20 similar documents retrieved (search time: 249 ms)
1.
The sheer volume of information on the Web, and the variety of sources from which it may be retrieved, make searching those sources a difficult task. Meta-search engines can usually search only Web pages or documents; other major sources, such as databases, library corpora, and the so-called Web databases, are not covered. Given these restrictions, an effective retrieval technology for a much wider range of sources becomes increasingly important. In our previous work, we proposed Integrated Information Retrieval (IIR), based on the Common Object Request Broker Architecture (CORBA), to spare clients the complicated semantics of federating multiple sources. In this paper, we present an IIR-based prototype of an integrated information gathering system. It offers a unified interface for querying sources with heterogeneous interfaces or protocols, and uses an SQL-compatible query language for heterogeneous back-end targets. We use it to link two general search engines (Yahoo and AltaVista), a scientific paper explorer (IEEE), and two library corpus explorers. We also perform preliminary measurements to assess the system's potential. The results show that the overhead incurred per source as the system queries them is reasonable, i.e., that using IIR to construct an integrated gathering system incurs low overhead.
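The unified-interface idea can be pictured as a thin fan-out layer: one query goes in, per-source adapters translate it, and the results are merged. The sketch below is an invented miniature; the real system federates sources over CORBA with an SQL-compatible query language, and these adapter functions are placeholders.

```python
# Invented miniature of the unified-interface idea: per-source adapters
# translate one query for heterogeneous back ends and the results are
# merged. The real system federates sources over CORBA with an
# SQL-compatible query language; these adapters are placeholders.
def search_engine_adapter(query):       # e.g. a Yahoo/AltaVista wrapper
    return [f"web-hit for {query!r}"]

def library_corpus_adapter(query):      # e.g. a library catalogue wrapper
    return [f"catalogue record for {query!r}"]

ADAPTERS = [search_engine_adapter, library_corpus_adapter]

def federated_search(query):
    """Fan the query out to every registered source and merge results."""
    results = []
    for adapter in ADAPTERS:
        results.extend(adapter(query))
    return results

print(federated_search("information integration"))
```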

2.
Research on a data source classification algorithm based on an improved weight-adjustment technique*   Total citations: 1 (self-citations: 0, by others: 1)
Traditional search engines cannot reach the massive information hidden in the Deep Web, and classifying Web databases is a key step toward the classified integration and retrieval of Web databases. This paper proposes a Deep Web database classification method based on weight-adjustment techniques. Features are first extracted from Web forms; a new weighting scheme then estimates the importance of these features; finally, a naive Bayes classifier assigns each Web database to a category. Experiments show that the method achieves good classification results after training on only a small number of samples, and that classifier performance remains stable as training samples increase, with precision and recall fluctuating only within a narrow range.
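As a rough illustration of the pipeline this abstract describes (form feature extraction, feature weighting, naive Bayes classification), the sketch below uses scikit-learn. Plain TF-IDF stands in for the paper's unspecified weight-adjustment scheme, and the sample forms and domain labels are invented.

```python
# A minimal sketch of the pipeline described above: extract textual
# features from a Web form, weight them, and classify with naive Bayes.
# Plain TF-IDF stands in for the paper's own weight-adjustment scheme;
# the sample forms and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each "document" is the text harvested from one search form:
# field labels, select options, button captions, etc.
forms = [
    "title author isbn publisher price",          # books
    "make model year mileage price",              # autos
    "artist album genre label format",            # music
]
domains = ["books", "autos", "music"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(forms, domains)

print(clf.predict(["author title paperback publisher"]))  # -> ['books']
```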

3.
Automatic Deep Web classification is a prerequisite for building Deep Web data integration systems. This paper proposes a Deep Web classification method based on domain feature texts. Ontology knowledge is first used to abstract different words expressing the same meaning into a single concept; domain relevance is then defined and used as a quantitative criterion for selecting feature texts, avoiding the subjectivity and uncertainty of manual selection. In constructing the interface vector model, the differing contributions of feature texts to classification are taken into account and an improved weighting scheme, W-TFIDF, is proposed. Finally, the KNN algorithm classifies the interface vectors. Comparative experiments show that the selected feature texts are accurate and effective, that the new weighting scheme significantly improves classification accuracy, and that it exhibits good stability under KNN.
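The abstract does not give the exact W-TFIDF formula; one plausible reading, sketched below, multiplies each term's TF-IDF weight by a per-term domain-relevance factor before KNN classification. The relevance table, training interfaces, and labels are all hypothetical.

```python
# Hedged sketch: TF-IDF weights scaled by a domain-relevance factor
# (one plausible reading of "W-TFIDF"), then KNN over the interface
# vectors. The relevance scores and training data are hypothetical.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

interfaces = ["title author publisher", "make model mileage",
              "author isbn price", "model year price"]
labels = ["books", "autos", "books", "autos"]

vec = TfidfVectorizer()
X = vec.fit_transform(interfaces).toarray()

# Hypothetical domain-relevance factor per vocabulary term
# (e.g., how strongly the term co-occurs with one domain).
relevance = {"isbn": 2.0, "author": 1.5, "mileage": 2.0}
w = np.array([relevance.get(t, 1.0) for t in vec.get_feature_names_out()])
Xw = X * w  # element-wise reweighting of the TF-IDF matrix

knn = KNeighborsClassifier(n_neighbors=1).fit(Xw, labels)
query = vec.transform(["author title price"]).toarray() * w
print(knn.predict(query))  # -> ['books']
```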

4.
Automatic integration of Web search interfaces with WISE-Integrator   Total citations: 3 (self-citations: 0, by others: 3)
An increasing number of databases are becoming Web accessible through form-based search interfaces, and many of these sources are database-driven e-commerce sites. It is a daunting task for users to visit numerous Web sites individually to get the desired information. Hence, providing unified access to multiple e-commerce search engines selling similar products is of great importance, allowing users to search and compare products from multiple sites with ease. One key task in providing such a capability is to integrate the Web search interfaces of these e-commerce search engines so that user queries can be submitted against the integrated interface. Currently, integrating such search interfaces is carried out either manually or semiautomatically, which is inefficient and difficult to maintain. In this paper, we present WISE-Integrator - a tool that performs automatic integration of Web Interfaces of Search Engines. WISE-Integrator explores a rich set of special metainformation that exists in Web search interfaces and uses this information to identify matching attributes from different search interfaces for integration. It also resolves domain differences between matching attributes. We also discuss how to automatically extract from search interfaces the information WISE-Integrator needs to perform automatic interface integration. Our experimental results, based on 143 real-world search interfaces in four different domains, indicate that WISE-Integrator achieves high attribute-matching accuracy and produces high-quality integrated search interfaces without human interaction. Received: 2 January 2004 / Accepted: 25 March 2004 / Published online: 12 August 2004. Edited by M. Carey.
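WISE-Integrator's actual matching draws on a rich set of metainformation (labels, value types, defaults, layout). The sketch below shows only the simplest ingredient, label-string similarity, using Python's standard difflib; the attribute names and the 0.6 threshold are invented for illustration.

```python
# A deliberately simple stand-in for one ingredient of interface
# matching: pairing attributes from two search interfaces by label
# similarity. WISE-Integrator itself uses much richer metainformation.
from difflib import SequenceMatcher

def match_attributes(iface_a, iface_b, threshold=0.6):
    """Greedily pair attribute labels whose similarity exceeds threshold."""
    matches = []
    remaining = list(iface_b)
    for a in iface_a:
        best, best_score = None, threshold
        for b in remaining:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score > best_score:
                best, best_score = b, score
        if best is not None:
            matches.append((a, best, round(best_score, 2)))
            remaining.remove(best)
    return matches

# Hypothetical attribute labels from two book-search interfaces.
print(match_attributes(["Title", "Author Name", "Publisher"],
                       ["Book Title", "Author", "Publishing House"]))
```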

5.
Many experts predict that the next major step forward in Web information technology will be achieved by adding semantics to Web data, possibly in the form of (some variant of) the Semantic Web. In this paper, we present a novel approach to Semantic Web search, called Serene, which allows for semantic processing of Web search queries and for evaluating complex Web search queries that involve reasoning over the Web. More specifically, we first add ontological structure and semantics to Web pages, which allows both for attaching a meaning to Web search queries and Web pages, and for formulating and processing ontology-based complex Web search queries (i.e., conjunctive queries) that involve reasoning over the Web. Here, we assume the existence of an underlying ontology (in a lightweight ontology language) relative to which Web pages are annotated and Web search queries are formulated. Depending on whether we use a general or a specialized ontology, we obtain a general or a vertical Semantic Web search interface, respectively. That is, we are actually mapping the Web into an ontological knowledge base, which then allows for Semantic Web search relative to the underlying ontology. The latter is realized by reduction to standard Web search over standard Web pages and logically completed ontological annotations; that is, standard Web search engines are used as the main inference engine for ontology-based Semantic Web search. We develop the formal model behind this approach and also provide an implementation in desktop search. Furthermore, we report on extensive experiments, including an implemented Semantic Web search over the Internet Movie Database.
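To make the "ontological annotations plus conjunctive queries" idea concrete, here is a toy evaluator over triple-annotated pages. The annotation vocabulary and data are invented, and the real system compiles such queries down to standard keyword search over the annotations rather than evaluating them directly.

```python
# Toy version of the core idea: pages carry ontological annotations
# (here, subject-predicate-object triples) and a conjunctive query is
# evaluated over them. Vocabulary and data are invented; Serene itself
# reduces such queries to standard Web search over annotations.
from itertools import product

annotations = {
    "page1": [("film:Heat", "hasDirector", "person:MichaelMann"),
              ("film:Heat", "releasedIn", "1995")],
    "page2": [("film:Collateral", "hasDirector", "person:MichaelMann")],
}

def matches(triple, pattern, binding):
    """Extend binding if triple unifies with pattern, else return None."""
    new = dict(binding)
    for t, p in zip(triple, pattern):
        if p.startswith("?"):
            if new.setdefault(p, t) != t:
                return None
        elif p != t:
            return None
    return new

def answer(query):
    """Pages whose annotations satisfy every atom of the query."""
    for page, triples in annotations.items():
        for combo in product(triples, repeat=len(query)):
            binding = {}
            for triple, pattern in zip(combo, query):
                binding = matches(triple, pattern, binding)
                if binding is None:
                    break
            if binding is not None:
                yield page, binding

# "Films directed by Michael Mann released in 1995"
q = [("?f", "hasDirector", "person:MichaelMann"), ("?f", "releasedIn", "1995")]
print(list(answer(q)))  # -> [('page1', {'?f': 'film:Heat'})]
```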

6.
Scaling access to heterogeneous data sources with DISCO   Total citations: 5 (self-citations: 0, by others: 5)
Accessing many data sources aggravates problems for users of heterogeneous distributed databases. Database administrators must deal with fragile mediators, that is, mediators with schemas and views that must be significantly changed to incorporate a new data source. When implementing translators of queries from mediators to data sources, database implementers must deal with data sources that do not support all the functionality required by mediators. Application programmers must deal with graceless failures for unavailable data sources: queries simply return failure and no further information when data sources are unavailable for query processing. The Distributed Information Search COmponent (Disco) addresses these problems. Data modeling techniques manage the connections to data sources, and sources can be added transparently to the users and applications. The interface between mediators and data sources flexibly handles different query languages and different data source functionality. Query rewriting and optimization techniques rewrite queries so they are efficiently evaluated by sources. Query processing and evaluation semantics are developed to process queries over unavailable data sources. In this article, we describe: 1) the distributed mediator architecture of Disco; 2) the data model and its modeling of data source connections; 3) the interface to underlying data sources and the query rewriting process; and 4) query processing semantics. We describe several advantages of our system.
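The "graceless failure" fix the abstract mentions can be pictured as partial-answer semantics: query the sources that respond and report the rest as pending rather than failing outright. The sketch below is an invented miniature, not DISCO's actual interface; the source names and data are placeholders.

```python
# Invented miniature of partial-answer semantics for unavailable
# sources: evaluate the query over reachable sources and report the
# unreachable ones as pending work instead of failing the whole query.
class Unavailable(Exception):
    pass

def query_source(name):
    # Stand-in for a real wrapper call; 'inventory_eu' simulates an outage.
    data = {"inventory_us": [("widget", 12)], "inventory_eu": None}
    if data.get(name) is None:
        raise Unavailable(name)
    return data[name]

def evaluate(sources):
    answers, pending = [], []
    for s in sources:
        try:
            answers.extend(query_source(s))
        except Unavailable:
            pending.append(s)  # re-submit later instead of failing now
    return {"answers": answers, "pending": pending}

print(evaluate(["inventory_us", "inventory_eu"]))
# -> {'answers': [('widget', 12)], 'pending': ['inventory_eu']}
```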

7.
When classifying search queries into a set of target categories, conventional machine-learning approaches usually make use of external sources of information to obtain additional features for search queries and training data for target categories. Unfortunately, these approaches rely on large amounts of training data for high classification precision. Moreover, they are known to adapt poorly to changing target categories, a problem caused by the dynamic changes observed in both the Web topic taxonomy and Web content. In this paper, we propose a feature-free classification approach based on semantic distance. We analyze queries and categories themselves, using the number of Web pages containing both a query and a category as a semantic distance that determines their similarity. The most attractive feature of our approach is that it uses only the Web page counts estimated by a search engine, yet classifies search queries with respectable accuracy. In addition, it adapts easily to changes in the target categories, whereas machine-learning approaches require an extensive updating process (re-labeling outdated training data, re-training classifiers, and so on) that is time-consuming and costly. We conduct an experimental study of the effectiveness of our approach using a set of rank measures, and show that it performs competitively with popular state-of-the-art solutions that rely heavily on external sources and are inherently less flexible.
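The abstract does not spell out the distance formula; the Normalized Google Distance is the best-known instance of this page-count idea, so the sketch below uses it as a concrete stand-in. The hit counts and the total index size are invented, and the `hits` table is a placeholder for a real search-engine hit-count API.

```python
# Sketch of page-count-based semantic distance, using the well-known
# Normalized Google Distance (NGD) formula as a concrete instance.
# `hits` is a stand-in for a real search-engine hit-count API; the
# counts below are invented for illustration.
import math

N = 50e9  # assumed total number of indexed pages

hits = {
    ("jaguar",): 200e6,
    ("car",): 4000e6,
    ("jaguar", "car"): 90e6,   # pages containing both terms
}

def ngd(x, y):
    fx, fy = math.log(hits[(x,)]), math.log(hits[(y,)])
    fxy = math.log(hits[(x, y)])
    return (max(fx, fy) - fxy) / (math.log(N) - min(fx, fy))

# Smaller distance = semantically closer; a query is assigned to the
# target category at the smallest distance.
print(round(ngd("jaguar", "car"), 3))
```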

8.
Information sources such as relational databases, spreadsheets, XML, JSON, and Web APIs contain a tremendous amount of structured data that can be leveraged to build and augment knowledge graphs. However, they rarely provide a semantic model to describe their contents. Semantic models of data sources represent the implicit meaning of the data by specifying the concepts and the relationships within the data. Such models are the key ingredients to automatically publish the data into knowledge graphs. Manually modeling the semantics of data sources requires significant effort and expertise, and although desirable, building these models automatically is a challenging problem. Most of the related work focuses on semantic annotation of the data fields (source attributes). However, constructing a semantic model that explicitly describes the relationships between the attributes in addition to their semantic types is critical. We present a novel approach that exploits the knowledge from a domain ontology and the semantic models of previously modeled sources to automatically learn a rich semantic model for a new source. This model represents the semantics of the new source in terms of the concepts and relationships defined by the domain ontology. Given some sample data from the new source, we leverage the knowledge in the domain ontology and the known semantic models to construct a weighted graph that represents the space of plausible semantic models for the new source. Then, we compute the top k candidate semantic models and suggest to the user a ranked list of the semantic models for the new source. The approach takes into account user corrections to learn more accurate semantic models on future data sources. Our evaluation shows that our method generates expressive semantic models for data sources and services with minimal user input. These precise models make it possible to automatically integrate the data across sources and provide rich support for source discovery and service composition. They also make it possible to automatically publish semantic data into knowledge graphs.
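As a toy analog of the final ranking step, the sketch below scores candidate semantic models (sets of attribute-to-concept mappings) by summed confidence and returns the top k. The weights and ontology terms are fabricated; the real system derives them from the domain ontology and previously learned models, and builds a weighted graph rather than enumerating combinations.

```python
# Toy analog of the final ranking step: each candidate semantic model
# is a set of (attribute, concept/property) mappings with confidence
# weights, and we return the k best-scoring models. Weights and
# ontology terms are fabricated for illustration.
import heapq
from itertools import product

# Hypothetical per-attribute candidate mappings with confidences,
# e.g. learned from the ontology and previously modeled sources.
candidates = {
    "name":      [("Person.name", 0.9), ("City.name", 0.4)],
    "birthDate": [("Person.birthDate", 0.8)],
    "worksFor":  [("Person.worksFor.Organization", 0.7),
                  ("Person.worksFor.Person", 0.2)],
}

def top_k_models(candidates, k=2):
    attrs = list(candidates)
    models = []
    for combo in product(*(candidates[a] for a in attrs)):
        score = sum(conf for _, conf in combo)
        models.append((score, dict(zip(attrs, (m for m, _ in combo)))))
    return heapq.nlargest(k, models, key=lambda m: m[0])

for score, model in top_k_models(candidates):
    print(round(score, 2), model)
```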

9.
Deep Web databases are accessed mainly through query interfaces, which are therefore the gateway for external access to Deep Web databases. To access multiple Web databases in the same domain simultaneously, their query interfaces must be integrated. This paper therefore proposes an ontology-based method for integrating Deep Web query interfaces. A core domain ontology is first constructed and continuously refined during schema matching; with the ontology as a mediator, attribute mappings are then established between different query-interface schemas to discover the semantic relations among attributes; finally, an integrated interface is generated according to the occurrence frequency of ontology concepts. Experiments show that the proposed method for automatic Deep Web query interface integration is feasible and efficient.
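The final step, generating the integrated interface from concept frequencies, can be pictured as counting how many source interfaces map to each ontology concept and keeping the frequent ones. The mapping tables and the support threshold below are invented for illustration.

```python
# Sketch of the final step: each source interface's attributes have
# been mapped to ontology concepts, and the integrated interface keeps
# the concepts that appear in enough sources. Mappings are invented.
from collections import Counter

# attribute -> ontology concept mappings, one dict per source interface
interface_mappings = [
    {"Title": "BookTitle", "Author": "Author", "ISBN": "ISBN"},
    {"Book name": "BookTitle", "Writer": "Author"},
    {"Title": "BookTitle", "ISBN": "ISBN", "Press": "Publisher"},
]

def integrated_interface(mappings, min_support=2):
    counts = Counter(c for m in mappings for c in m.values())
    return [c for c, n in counts.most_common() if n >= min_support]

print(integrated_interface(interface_mappings))
# -> ['BookTitle', 'Author', 'ISBN'] (Publisher appears only once)
```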

10.
Using JavaBeans and CORBA agents in conjunction with Web search technologies, this prototype search engine (Agora) automatically generates and indexes a worldwide database of software products, classified by component model. Users of Agora can search for components in this database by describing specific properties of a component's interface. The system combines Web search engines with an introspection process. Introspection, primarily associated with JavaBeans, describes the capability of components to provide information about their own interfaces. The Common Object Request Broker Architecture offers a similar capability, although this data is maintained externally to the CORBA server in an interface repository.
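As a loose Python analog of the introspection step (the original uses JavaBeans introspection and CORBA interface repositories), the sketch below extracts a component's interface signature with the standard inspect module, producing the kind of record a component search index could store. The sample component is invented.

```python
# Loose Python analog of the introspection step: extract a component's
# public interface (method names and signatures) the way Agora harvests
# JavaBeans interfaces, producing a record an index could store.
import inspect

class ShoppingCart:  # stand-in "component" for illustration
    def add_item(self, sku: str, quantity: int = 1) -> None: ...
    def total(self) -> float: ...

def describe_interface(component):
    """Return {method_name: signature} for the component's public API."""
    return {
        name: str(inspect.signature(member))
        for name, member in inspect.getmembers(component, inspect.isfunction)
        if not name.startswith("_")
    }

# An Agora-style index entry: component name plus interface description.
print({"component": ShoppingCart.__name__,
       "interface": describe_interface(ShoppingCart)})
```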

11.
The Web has undergone a tremendous change from a primarily publication-oriented platform towards a participatory and "programmable" platform, where a large number of heterogeneous Web-delivered services (including SOAP and RESTful Web services, and RSS and Atom feeds) are emerging. This has resulted in Web mashup applications with rich user experiences. However, the integration of Web-delivered services remains a challenging issue: it not only demands tedious effort from developers to understand and coordinate heterogeneous service types, but also leads to time-consuming development of user interfaces. In this paper, we propose the iMashup composition framework to facilitate mashup development and deployment. We provide a unified mashup component model as a common representation of heterogeneous Web-delivered service interfaces. The component model specifies the necessary properties and behaviors at both the business and the user-interface level. We associate the component model with semantically meaningful tags, so that mashup developers can quickly understand service capabilities. Developers can search for suitable mashup components, place them in the Web browser based composition environment, and connect them with data flows based on the tag semantics. This style of integration avoids some low-level programming effort and improves composition efficiency. A series of experimental studies is conducted to evaluate our framework.
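A toy rendering of the tag-based wiring idea: components expose tagged outputs and inputs, and two components can be connected by a data flow when their tags overlap. The component names and tags below are invented, and iMashup's real component model carries much more than tag sets.

```python
# Toy rendering of tag-based mashup wiring: a component's output can
# feed another's input when their semantic tags overlap. Component
# names and tags are invented for illustration.
components = {
    "FlickrFeed": {"outputs": {"photo", "location"}, "inputs": set()},
    "MapWidget":  {"outputs": set(), "inputs": {"location"}},
    "SlideShow":  {"outputs": set(), "inputs": {"photo"}},
}

def possible_dataflows(components):
    """All (producer, consumer, tag) wirings with overlapping tags."""
    for src, s in components.items():
        for dst, d in components.items():
            if src == dst:
                continue
            for tag in s["outputs"] & d["inputs"]:
                yield (src, dst, tag)

for flow in sorted(possible_dataflows(components)):
    print(flow)
# ('FlickrFeed', 'MapWidget', 'location')
# ('FlickrFeed', 'SlideShow', 'photo')
```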

12.
Automatically discovering domain-specific Deep Web entry points with classifiers   Total citations: 4 (self-citations: 0, by others: 4)
王辉, 刘艳威, 左万利. 《软件学报》 (Journal of Software), 2008, 19(2): 246-256
In Deep Web research, general-purpose search engines (such as Google and Yahoo) have notable shortcomings: each covers less than one third of the total Deep Web data, and, unlike in the surface Web, combining several search engines barely increases that coverage. Many Deep Web sites provide large volumes of high-quality information, and the Deep Web is gradually becoming one of the most important information resources. This paper proposes a three-classifier framework for automatically identifying domain-specific Deep Web entry points. Once the query interfaces are obtained, they can be integrated, and a unified interface presented to users so they can query information conveniently. Eight groups of large-scale experiments verify that the proposed method discovers domain-specific Deep Web entry points accurately and efficiently.
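The paper's three classifiers are chained as a cascade; a structural sketch of that control flow is below. The individual stage models are hypothetical keyword-rule placeholders, since the abstract does not specify their features.

```python
# Structural sketch of a three-classifier cascade for finding
# domain-specific Deep Web entry points: a page filter, then a form
# detector, then a domain classifier. Each stage's model is a
# hypothetical placeholder; the abstract does not fix the features.
def is_candidate_page(page_html: str) -> bool:
    """Stage 1: cheap filter - does the page look searchable at all?"""
    return "<form" in page_html.lower()

def is_search_form(form_html: str) -> bool:
    """Stage 2: distinguish search forms from login/registration forms."""
    lowered = form_html.lower()
    return "password" not in lowered and "search" in lowered

def form_domain(form_html: str) -> str:
    """Stage 3: assign the form to a domain (placeholder keyword rule)."""
    return "books" if "isbn" in form_html.lower() else "other"

def find_entry_points(pages, target_domain="books"):
    for url, html in pages:
        if is_candidate_page(html) and is_search_form(html) \
                and form_domain(html) == target_domain:
            yield url

pages = [("http://example.org/a",
          '<form class="search"><input name="isbn"></form>')]
print(list(find_entry_points(pages)))  # -> ['http://example.org/a']
```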

13.
Over the past few years, research and development in bioinformatics (e.g., genomic sequence alignment) has grown rapidly, fueling a continuing demand for vast computing power. This trend usually calls for parallel computing techniques, because cluster computing reduces execution times and increases the efficiency of genomic sequence alignment. For example, mpiBLAST is a parallel version of NCBI BLAST that combines NCBI BLAST with the Message Passing Interface (MPI) standard. However, since most laboratories cannot build powerful cluster computing environments, Grid computing frameworks have been designed to meet this need: they coordinate the resources of distributed virtual organizations and satisfy the various computational demands of bioinformatics applications. In this paper, we report on the design and implementation of a BioGrid framework, called G-BLAST, that performs genomic sequence alignments using Grid computing environments and accessible mpiBLAST applications. G-BLAST is also suitable for cluster computing environments with a server node and several client nodes. It is able to select the most appropriate work nodes, dynamically fragment genomic databases, and self-adjust according to performance data. To enhance G-BLAST's capability and usability, we also provide a WSRF Grid Service Portal and a Grid Service GUI desktop application for general users to submit jobs and for host administrators to maintain work nodes.
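One of G-BLAST's jobs, dividing the sequence database across work nodes in proportion to measured node performance, can be sketched as below. The node speeds and database size are invented, and the real system adjusts these shares dynamically from performance data.

```python
# Sketch of one G-BLAST task: split a sequence database across work
# nodes in proportion to their measured performance, so faster nodes
# receive larger fragments. Node speeds and the database size are
# invented for illustration.
def fragment_database(total_sequences, node_speeds):
    """Assign each node a share proportional to its benchmark speed."""
    total_speed = sum(node_speeds.values())
    shares, assigned = {}, 0
    for node, speed in node_speeds.items():
        shares[node] = int(total_sequences * speed / total_speed)
        assigned += shares[node]
    # Give any rounding remainder to the fastest node.
    fastest = max(node_speeds, key=node_speeds.get)
    shares[fastest] += total_sequences - assigned
    return shares

print(fragment_database(100_000, {"node-a": 3.0, "node-b": 1.0, "node-c": 1.0}))
# -> {'node-a': 60000, 'node-b': 20000, 'node-c': 20000}
```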

14.
Ontologies are the first step toward realizing the full power of online e-commerce. Ontologies enable machine-understandable semantics of data, and building this data infrastructure will enable completely new kinds of automated services. Software agents can search for products, form buyer and seller coalitions, negotiate about products, or help automatically configure products and services according to specified user requirements. The combination of machine-processable semantics of data based on ontologies and the development of many specialized reasoning services will bring the Web to its full power. The authors discuss: taxonomies; information sources; future issues; business viewpoints; the e-marketplace; and B2B e-commerce standardisation and integration.

15.
In this work we present an architecture for XML-based mediator systems and a framework that helps system developers construct mediator services for integrating heterogeneous data sources. A unique feature of our architecture is its capability to manage (proprietary) user software tools and algorithms, modelled as Extended Value Added Services (EVASs) and integrated into the data flow. The mediator offers a view of the system as a single data source in which EVASs are readily available for enhancing query processing. A Web-based graphical interface has been developed to allow dynamic and flexible interconnection of EVASs, creating complex distributed bioinformatics machines. The feasibility and usefulness of these ideas have been validated by the development of a mediator system (Bio-Broker) and by a diverse set of applications that combine gene expression data with genomic, sequence-based, and structural information, providing a general, transparent, and powerful solution that integrates data analysis tools and algorithms.

16.
The Web as a global information space is developing from a Web of documents to a Web of data. This development opens new ways for addressing complex information needs. Search is no longer limited to matching keywords against documents, but instead complex information needs can be expressed in a structured way, with precise answers as results. In this paper, we present Hermes, an infrastructure for data Web search that addresses a number of challenges involved in realizing search on the data Web. To provide an end-user oriented interface, we support expressive user information needs by translating keywords into structured queries. We integrate heterogeneous Web data sources with automatically computed mappings. Schema-level mappings are exploited in constructing structured queries against the integrated schema. These structured queries are decomposed into queries against the local Web data sources, which are then processed in a distributed way. Finally, heterogeneous result sets are combined using an algorithm called map join, making use of data-level mappings. In evaluation experiments with real life data sets from the data Web, we show the practicability and scalability of the Hermes infrastructure.
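The map join combines heterogeneous result sets via data-level mappings; a minimal dictionary-based sketch of that idea follows. The identifiers and the mapping table are invented, standing in for mappings the system computes automatically.

```python
# Minimal sketch of a map join: tuples from two sources use different
# local identifiers, and a data-level mapping table reconciles them
# before the join. Identifiers and mappings are invented.
source_a = [("dbpedia:Berlin", 3_660_000)]        # (entity, population)
source_b = [("geonames:2950159", "Germany")]      # (entity, country)

# Data-level mapping between the two sources' identifiers,
# e.g. computed automatically during integration.
same_as = {"geonames:2950159": "dbpedia:Berlin"}

def map_join(left, right, mapping):
    """Join right onto left after translating right's keys via mapping."""
    index = {key: rest for key, *rest in left}
    for key, *rest in right:
        canonical = mapping.get(key, key)
        if canonical in index:
            yield (canonical, *index[canonical], *rest)

print(list(map_join(source_a, source_b, same_as)))
# -> [('dbpedia:Berlin', 3660000, 'Germany')]
```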

17.
A large number of pages on the Internet are generated dynamically from back-end databases. Traditional search engines cannot retrieve these pages, which we call the Deep Web, and most Deep Web information is structured. Classifying these structured Deep Web databases by domain is a key problem in obtaining Deep Web information. To address the high implementation cost and low efficiency of existing Deep Web database classification methods, this paper proposes a classification algorithm based on the granulation of Web logs, and verifies its classification effectiveness through experiments.

18.
This paper proposes an ontology-based method for discovering Deep Web data sources, which identifies query interfaces belonging to a given domain through web page classification, form content classification, and form structure classification. Modules for semi-automatic ontology construction and automatic ontology extension are introduced into the page and form-content classification stages, and heuristic rules are added to the form-structure classification stage. Experimental results show that the method effectively improves both the recall and the precision of Deep Web data source discovery.

19.
The Semantic Web is attracting increasing interest as a way to fulfill the need for sharing, retrieving, and reusing information. Since Web pages are designed to be read by people, not machines, searching and reusing information on the Web is difficult without human participation. To this aim, adding semantics (i.e., meaning) to a Web page would help machines understand Web content and better support the Web search process. One of the latest developments in this field is Google's Rich Snippets, a service that lets Web site owners add semantics to their Web pages. In this paper we provide a structured approach to automatically annotate a Web page with Rich Snippets RDFa tags. Exploiting a data reverse engineering method, combined with several heuristics and a named entity recognition technique, our method recognizes and annotates a subset of the Rich Snippets vocabulary: all the attributes of its Review concept, and the names of the Person and Organization concepts. We implemented tools and services and evaluated the accuracy of the approach on real e-commerce Web sites.
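A minimal sketch of the annotation output: wrapping extracted review fields in RDFa using the legacy data-vocabulary.org Review vocabulary. The property names (itemreviewed, reviewer, rating) are given as we recall them and should be treated as assumptions to verify against the vocabulary; the extraction itself, the hard part in the paper, is replaced here by a pre-filled dict.

```python
# Minimal sketch of the annotation output: wrap extracted review fields
# in RDFa. Property names follow the legacy data-vocabulary.org Review
# vocabulary as we recall it (itemreviewed, reviewer, rating) - treat
# them as assumptions. The extraction step is replaced by a dict.
review = {"itemreviewed": "ACME Phone X", "reviewer": "J. Smith", "rating": "4.5"}

def to_rdfa(review):
    spans = "\n".join(
        f'  <span property="v:{prop}">{value}</span>'
        for prop, value in review.items()
    )
    return (f'<div xmlns:v="http://rdf.data-vocabulary.org/#" '
            f'typeof="v:Review">\n{spans}\n</div>')

print(to_rdfa(review))
```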

20.
Clustering and classification of Deep Web data sources   Total citations: 1 (self-citations: 0, by others: 1)
With the rapid growth of information on the Internet, much Web information has been absorbed into a wide variety of searchable online databases, hidden behind Web query interfaces. For technical reasons, traditional search engines cannot index this information, known as the Deep Web. This paper analyzes the various types of Deep Web query interfaces, studies a data source clustering method based on query-interface features and a data source classification method based on the clustering results, and discusses a rule-extraction algorithm that derives query probe sets from rule-based and linear document classifiers, together with a query-probing algorithm for classifying Web document databases.
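As a small illustration of feature-based interface clustering, the sketch below vectorizes the textual attributes of each interface and clusters them with k-means. The sample interfaces and the choice of k=2 are invented, and the paper's actual feature set (interface structure, field types, and so on) is richer than raw attribute text.

```python
# Small illustration of clustering Deep Web query interfaces by their
# textual features, here with TF-IDF vectors and k-means. The sample
# interfaces and the choice k=2 are invented; the paper's feature set
# (interface structure, field types, etc.) is richer than raw text.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

interfaces = [
    "title author isbn publisher",
    "author title price format",
    "make model year mileage",
    "model make price color",
]

X = TfidfVectorizer().fit_transform(interfaces)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Interfaces sharing a cluster are candidates for the same domain
# (books vs. autos here); a classifier is then trained per cluster.
print(km.labels_)
```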
