Similar Documents
20 similar documents found.
1.
The most fascinating advantage of the semantic web would be its capability to understand and process the contents of web pages automatically. Realizing the semantic web basically involves two main tasks: (1) representation and management of a large amount of data and metadata for web contents; (2) information extraction and annotation on web pages. On the one hand, recognition of named entities is regarded as a basic and important problem to be solved before the deeper semantics of a web page can be extracted. On the other hand, semantic web information extraction is a language-dependent problem, which requires particular natural language processing techniques. This paper introduces VN-KIM IE, the information extraction module of the semantic web system VN-KIM that we have developed. The function of VN-KIM IE is to automatically recognize named entities in Vietnamese web pages by identifying their classes and, if existing, their addresses in the knowledge base of discourse. That information is then annotated onto those web pages, providing a basis for NE-based search over them, as compared to the current keyword-based search. The design, implementation, and performance of VN-KIM IE are presented and discussed.
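The core annotation step described here (recognize an entity, identify its class and its entry in the knowledge base, and mark it up in the page) can be suggested with a small sketch. The knowledge base entries and the markup format below are invented for illustration; VN-KIM's actual pipeline is considerably richer:

# Gazetteer-style NE annotation: match knowledge-base entity names in the
# text and wrap them with class/id markup. KB and markup are illustrative.
import re

KB = {   # surface form -> (class, knowledge-base id)
    "Hanoi":   ("City",    "kb:City_Hanoi"),
    "Vietnam": ("Country", "kb:Country_Vietnam"),
}

def annotate(text):
    for name, (cls, kb_id) in KB.items():
        text = re.sub(
            rf"\b{re.escape(name)}\b",
            f'<entity class="{cls}" id="{kb_id}">{name}</entity>',
            text)
    return text

print(annotate("Hanoi is the capital of Vietnam."))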

2.
Traditional web crawlers serve general-purpose, keyword-based search engines and cannot capture the category information of web pages, which causes problems of computational efficiency and accuracy for text clustering and topic detection. This paper proposes web page classification and extraction based on the hierarchical structure of web sites: a virtual hierarchical classification tree is constructed and the real hierarchical structure of a site is extracted against it, on which basis a hierarchy-oriented web crawler is designed and implemented. For sites without category information, a title-based web page classification technique is given, covering the construction of a domain knowledge base and word semantic similarity computation based on HowNet (《知网》). Experimental results show that the method achieves good classification performance.
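A minimal sketch of the title-based classification step, assuming a hand-built domain knowledge base of category keywords. Since computing HowNet-based word similarity requires a HowNet resource, the similarity function below is a hypothetical character-overlap stand-in:

# Title-based web page classification: score a page title against each
# category's keyword list with a word-similarity function, pick the best.

def word_sim(w1, w2):
    # Hypothetical stand-in for HowNet-based word semantic similarity:
    # simple character overlap normalized to [0, 1].
    s1, s2 = set(w1), set(w2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0

# Domain knowledge base: category -> representative keywords (illustrative).
KNOWLEDGE_BASE = {
    "sports":  ["football", "match", "league"],
    "finance": ["stock", "market", "fund"],
}

def classify_title(title_words, threshold=0.2):
    best_cat, best_score = None, 0.0
    for cat, keywords in KNOWLEDGE_BASE.items():
        # Average, over title words, of the best keyword match.
        scores = [max(word_sim(w, k) for k in keywords) for w in title_words]
        score = sum(scores) / len(scores)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat if best_score >= threshold else None

print(classify_title(["stock", "fund", "report"]))   # -> "finance"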

3.
Text filtering is a branch of information filtering, which has drawn attention as information retrieval has developed; it is the process of finding information that people are interested in. To improve the efficiency of retrieving web pages, the raw collection of web pages is preprocessed into a structured page set, which is then classified quickly.

4.
田莉霞 《软件》2020,(4):67-71
With the arrival of the information society, all kinds of Internet technologies have emerged, and digital information has become a valuable resource that businesses compete for. Amid this mass of digital information, helping users pinpoint the effective information they need is the great challenge facing today's search engines. Traditional web search is based merely on textual links: a query simply returns the pages containing the search terms and leaves users to dig the answer out of those pages themselves, which is time-consuming, labour-intensive, and still may not yield the answer the user wants. Google therefore pioneered a search engine built on the Knowledge Graph, a major transformation in the search engine field. A knowledge graph represents the concepts and entities of the objective world, and the relations between them, in the form of a graph; it is now widely applied in intelligent services such as semantic search, intelligent question answering, and decision support. This paper explains in detail what a knowledge graph is, how knowledge graphs are represented and constructed, and their main applications, in the hope that more readers will come to understand knowledge graphs and their great contribution to the development of artificial intelligence.
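In the sense used here, a knowledge graph can be reduced to subject-predicate-object triples with wildcard queries over them; the entities and relations in this toy sketch are invented for illustration:

# A toy knowledge graph as a set of (subject, predicate, object) triples,
# with a simple pattern query; names are illustrative only.
triples = {
    ("Google", "developed", "Knowledge Graph"),
    ("Knowledge Graph", "used_in", "semantic search"),
    ("Knowledge Graph", "used_in", "question answering"),
}

def query(s=None, p=None, o=None):
    # None acts as a wildcard, as in a SPARQL-style triple pattern.
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(s="Knowledge Graph", p="used_in"))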

5.
6.
Knowledge discovery based on search engines   (Total citations: 3; self: 0; others: 3)
Data mining is generally applied to large, highly structured databases to discover the knowledge they contain. With the growth of online text, the knowledge embedded in it is becoming ever richer, yet it is difficult to analyse and exploit. Devising an effective scheme for discovering the knowledge hidden in text is therefore very important and a significant current research topic. This paper uses the Google search engine to obtain relevant web pages, filters and cleans them into relevant texts, and then performs text clustering, episode-based event recognition and information extraction, data integration, and data mining, thereby achieving knowledge discovery. Finally, a prototype system is presented and knowledge discovery is tested in practice, with good results.
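A skeleton of the described pipeline (fetch pages, clean, cluster, recognize events, integrate and mine); the fetching and episode-recognition functions below are hypothetical stand-ins, since the originals depend on the Google API and an episode model the abstract does not specify:

# Skeleton of a search-engine-driven knowledge discovery pipeline:
# fetch pages, clean to text, cluster, then extract events per cluster.
# fetch_pages() and recognize_episodes() are hypothetical stand-ins.
import re
from collections import defaultdict

def fetch_pages(query):
    # Placeholder for querying a search engine such as Google.
    return ["<html><body>Flood hits city A on May 1.</body></html>"]

def clean(html):
    # Strip tags; a real system would also remove ads and navigation.
    return re.sub(r"<[^>]+>", " ", html).strip()

def cluster(texts):
    # Trivial one-cluster grouping; substitute TF-IDF + k-means in practice.
    groups = defaultdict(list)
    for t in texts:
        groups[0].append(t)
    return groups

def recognize_episodes(text):
    # Placeholder for episode-based event recognition and extraction.
    return [{"event": "flood", "text": text}]

def discover(query):
    texts = [clean(p) for p in fetch_pages(query)]
    facts = []
    for _, docs in cluster(texts).items():
        for d in docs:
            facts.extend(recognize_episodes(d))   # integration + mining step
    return facts

print(discover("flood news"))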

7.
王斌 《计算机仿真》2004,21(5):95-99
In recent years, use of the internet has provided access to vast amounts of information. However, the lack of information structure within a single web page, and across multiple pages, is an obstacle to harvesting web data. To search web information effectively, a method for managing structured web pages is urgently needed. The structured web page management method proposed in this paper rests on two steps: first, HTML is converted into XML; second, a navigation hierarchy is built. The paper also shows how to query data effectively with this structured management: a user can browse an entire site along its navigation hierarchy, covering both interlinked and internal pages, and search for information of interest.
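A rough sketch of the first step, converting HTML into XML, using Python's standard html.parser; the mapping of headings and paragraphs to nested XML elements is an illustrative choice, not the paper's actual conversion rules:

# Convert a flat HTML fragment into nested XML sections keyed on headings,
# approximating the HTML -> XML step; the tag mapping is illustrative.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class SectionBuilder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.root = ET.Element("site")
        self.current = self.root
        self.capture = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            # Each heading opens a new <section> element.
            self.current = ET.SubElement(self.root, "section")
            self.capture = "title"
        elif tag == "p":
            self.capture = "text"

    def handle_data(self, data):
        if self.capture and data.strip():
            ET.SubElement(self.current, self.capture).text = data.strip()
            self.capture = None

html_doc = "<h1>Products</h1><p>Our catalog.</p><h1>Contact</h1><p>Email us.</p>"
b = SectionBuilder()
b.feed(html_doc)
print(ET.tostring(b.root, encoding="unicode"))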

8.
黄晓  钟琴 《计算机仿真》2004,21(4):83-86
In recent years, the widespread use of the World Wide Web has given people an open way to access a large number of data sources, and a major factor hindering web data access is the lack of structure both between and within web pages. To retrieve web data more effectively, web pages need to be managed in a structured way. The structured management of web pages proposed in this paper proceeds in two steps: (1) converting HyperText Markup Language (HTML) into eXtensible Markup Language (XML); (2) hierarchical navigation and retrieval.

9.
With the Internet growing exponentially, search engines are encountering unprecedented challenges. A focused search engine selectively seeks out web pages that are relevant to user topics. Determining the best strategy for a focused search is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed by considering the hyperlinks (as in the PageRank algorithm), a Vector Space Model, or a combination of the two, and not by considering the semantic relations between the user topic and the unvisited web pages. In this paper, we propose a concept context graph to store the knowledge context based on the user's history of clicked web pages and to guide a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking to guide the web crawler toward highly relevant web pages on the user's topic. By computing the concept distance and concept similarity among the concepts of the concept context graph, and by matching unvisited web pages against the concept context graph, we compute the rank values of the unvisited web pages and pick out the relevant hyperlinks. Additionally, we build the focused crawling system and report the precision, recall, average harvest rate, and F-measure of our approach against Breadth-First search, Cosine Similarity, the Link Context Graph, and the Relevancy Context Graph. The results show that our proposed method outperforms the other methods.
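The ranking idea (score an unvisited page by the similarity of its terms to concepts in the context graph, weighting concepts near the topic more heavily) might be sketched as follows; the graph, the distance weighting, and the similarity measure are simplified placeholders for the paper's definitions:

# Rank unvisited pages by matching their terms against a concept context
# graph; concepts closer to the topic concept contribute more weight.
# Graph, distances and similarity are simplified stand-ins.
concept_distance = {"crawler": 0, "search engine": 1, "index": 2}  # hops from topic

def concept_similarity(term, concept):
    # Placeholder: exact match only; the paper computes a semantic similarity.
    return 1.0 if term == concept else 0.0

def rank_page(page_terms):
    score = 0.0
    for concept, dist in concept_distance.items():
        weight = 1.0 / (1 + dist)              # decay with concept distance
        score += weight * max(
            (concept_similarity(t, concept) for t in page_terms), default=0.0)
    return score

pages = {"url1": ["crawler", "frontier"], "url2": ["index", "compression"]}
print(sorted(pages, key=lambda u: rank_page(pages[u]), reverse=True))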

10.
An XML-enabled data extraction toolkit for web sources   (Total citations: 7; self: 0; others: 7)
The amount of useful semi-structured data on the web continues to grow at a stunning pace. Often interesting web data are not in database systems but in HTML pages, XML pages, or text files. Data in these formats are not directly usable by standard SQL-like query processing engines that support sophisticated querying and reporting beyond keyword-based retrieval. Hence, web users or applications need a smart way of extracting data from these web sources. One of the popular approaches is to write wrappers around the sources, either manually or with software assistance, to bring the web data within the reach of more sophisticated query tools and general mediator-based information integration systems. In this paper, we describe the methodology and the software development of an XML-enabled wrapper construction system, XWRAP, for semi-automatic generation of wrapper programs. By XML-enabled we mean that the metadata about information content that are implicit in the original web pages will be extracted and encoded explicitly as XML tags in the wrapped documents. In addition, the query-based content filtering process is performed against the XML documents. The XWRAP wrapper generation framework has three distinct features. First, it explicitly separates tasks of building wrappers that are specific to a web source from the tasks that are repetitive for any source, and uses a component library to provide basic building blocks for wrapper programs. Second, it provides inductive learning algorithms that derive or discover wrapper patterns by reasoning about sample pages or sample specifications. Third and most importantly, we introduce and develop a two-phase code generation framework. The first phase utilizes an interactive interface facility to encode the source-specific metadata knowledge identified by individual wrapper developers as declarative information extraction rules. The second phase combines the information extraction rules generated in the first phase with the XWRAP component library to construct an executable wrapper program for the given web source.
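The flavor of the first phase, declarative extraction rules that make implicit page metadata explicit as XML tags, can be suggested with a toy rule set; the rule format and the sample page are invented, and XWRAP's actual rule language is far richer:

# Apply declarative extraction rules (regex -> XML tag) to an HTML page,
# emitting the implicit metadata as explicit XML. Rules are illustrative.
import re

RULES = [
    ("title", r"<h1>(.*?)</h1>"),
    ("price", r"\$(\d+\.\d{2})"),
]

def wrap(html):
    out = ["<record>"]
    for tag, pattern in RULES:
        for m in re.finditer(pattern, html):
            out.append(f"  <{tag}>{m.group(1)}</{tag}>")
    out.append("</record>")
    return "\n".join(out)

print(wrap("<h1>USB Cable</h1> Now only $9.99"))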

11.
More and more (semi-)structured information is becoming available on the web in the form of documents embedding metadata (e.g., RDF, RDFa, Microformats and others). There are already hundreds of millions of such documents accessible and their number is growing rapidly. This calls for large-scale systems providing effective means of searching and retrieving this semi-structured information, with the ultimate goal of making it exploitable by humans and machines alike. This article examines the shift from the traditional web document model to a web data object (entity) model and studies the challenges faced in implementing a scalable and high-performance system for searching semi-structured data objects over a large heterogeneous and decentralised infrastructure. Towards this goal, we define an entity retrieval model, develop novel methodologies for supporting this model, and show how to achieve a high-performance entity retrieval system. We introduce an indexing methodology for semi-structured data which offers a good compromise between query expressiveness, query processing and index maintenance compared to other approaches. We address high performance by optimising the index data structure with appropriate compression techniques. Finally, we demonstrate that the resulting system can index billions of data objects and provides keyword-based as well as more advanced search interfaces for retrieving relevant data objects in sub-second time. This work has been part of the Sindice search engine project at the Digital Enterprise Research Institute (DERI), NUI Galway. The Sindice system currently maintains more than 200 million pages downloaded from the web and is being used actively by many researchers within and outside of DERI.
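A toy version of keyword search over entity descriptions, an inverted index from terms to entity identifiers, assuming entities are flat attribute-value objects; the actual system, with its compression and query-expressiveness trade-offs, is far more involved:

# Toy entity index: map each term in an entity's attribute values to the
# entity id, then answer keyword queries by set intersection.
from collections import defaultdict

entities = {   # illustrative entity objects
    "e1": {"name": "Tim Berners-Lee", "field": "semantic web"},
    "e2": {"name": "DERI Galway", "field": "web data"},
}

index = defaultdict(set)
for eid, attrs in entities.items():
    for value in attrs.values():
        for term in value.lower().split():
            index[term].add(eid)

def search(query):
    sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*sets) if sets else set()

print(search("semantic web"))   # -> {'e1'}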

12.
Web information extraction based on a probabilistic model   (Total citations: 1; self: 0; others: 1)
To exploit the two-dimensional structure and the content characteristics of web pages, this paper proposes Tree-structured Hierarchical Conditional Random Fields (TH-CRFs) for web object extraction. First, an improved multi-feature vector space model represents a page's features from both the structural and the content side; second, a Boolean model and multi-rule attributes are introduced to better represent the structural and semantic features of web objects; third, TH-CRFs are used to extract web object information, locating the relevant job postings while optimizing the efficiency of model training. Experiments comparing against existing web information extraction models show that TH-CRF-based extraction improves accuracy and lowers extraction time complexity.
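TH-CRFs themselves are beyond a short sketch, but the multi-feature representation step, turning each page block into features over both structure and content, might look like the following; the feature set is illustrative, and a CRF library would then be trained on such dictionaries:

# Turn web-page blocks into structure+content feature dicts of the kind a
# CRF-based extractor is trained on; the feature set is illustrative.
import re

blocks = [
    {"tag": "h2", "depth": 2, "text": "Software Engineer"},
    {"tag": "p",  "depth": 3, "text": "Salary: $90,000. Apply by email."},
]

def features(block, position):
    text = block["text"]
    return {
        "tag": block["tag"],               # structural: HTML tag
        "depth": block["depth"],           # structural: depth in DOM tree
        "position": position,              # structural: order on the page
        "has_money": bool(re.search(r"\$\d", text)),   # content cue
        "n_words": len(text.split()),                  # content cue
        "starts_upper": text[:1].isupper(),            # content cue
    }

X = [features(b, i) for i, b in enumerate(blocks)]
for x in X:
    print(x)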

13.
Multi-factor representation of the credibility of military information   (Total citations: 1; self: 0; others: 1)
With the rapid development of the Internet, web pages have become the main carrier of information, and military agencies in various countries have begun to collect and monitor the military intelligence contained in web content; how to obtain the needed military information from the Internet quickly and accurately has become a hot research topic. Addressing the large volume, fast turnover, and uncertain veracity of military information on the Internet, this paper proposes a multi-factor method for representing its credibility. First, building on research into representing the credibility of military information, it proposes a representation based on the information source, information eval…

14.
The World Wide Web (WWW) has been recognized as the ultimate and unique source of information for information retrieval and knowledge discovery communities. Tremendous amounts of knowledge are recorded using various types of media, producing an enormous number of web pages in the WWW. Retrieval of required information from the WWW is thus an arduous task. Different schemes for retrieving web pages have been used by the WWW community. One of the most widely used schemes is to traverse predefined web directories to reach a user's goal. These web directories are compiled or classified folders of web pages and are usually organized into hierarchical structures. The classification of web pages into proper directories and the organization of directory hierarchies are generally performed by human experts. In this work, we provide a corpus-based method that applies text mining techniques to a corpus of web pages to automatically create web directories and organize them into hierarchies. The method is based on the self-organizing map learning algorithm and requires no human intervention during the construction of web directories and hierarchies. The experiments show that our method can produce comprehensible and reasonable web directories and hierarchies.
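A compact self-organizing map showing the learning rule behind the method, using only numpy; real web directory construction would feed in document term vectors rather than these toy two-dimensional points:

# Minimal self-organizing map: each grid node holds a weight vector that is
# pulled toward inputs near its best-matching unit; toy 2-D inputs only.
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 4, 4, 2
weights = rng.random((grid_w, grid_h, dim))
coords = np.stack(np.meshgrid(np.arange(grid_w), np.arange(grid_h),
                              indexing="ij"), axis=-1)

def train(data, epochs=50, lr0=0.5, sigma0=2.0):
    global weights
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # shrinking neighborhood
        for x in data:
            # Best-matching unit: node whose weight is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward x too.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)

data = rng.random((100, 2))
train(data)
print(weights[0, 0])   # a learned prototype; documents mapping to the same
                       # node would form one "web directory"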

15.
A new intelligent search method for Internet information   (Total citations: 10; self: 1; others: 9)
A new intelligent method for searching Internet information is proposed. The method can accurately and effectively locate target web pages hidden inside web sites of the same type, that is, sites with similar organizational structure and content descriptions. To do so, it adopts a new representation of search knowledge that organically combines the features of inter-page links with descriptions of page content. Based on this knowledge representation and the knowledge it expresses, the method can not only perform depth-first intelligent search over a site's pages, but can also acquire more and better search knowledge by learning from its own search processes and results. Preliminary experiments show that this new intelligent search method has strong deep-page search capability when hunting for target pages in sites of the same type.
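The combined knowledge representation could be caricatured as scoring each link by both its anchor context and the content of the page containing it, then searching depth-first; the site graph, features, and scoring below are invented for illustration, and the self-learning of search knowledge is not modeled:

# Depth-first site search guided by combined link-context and page-content
# scores; the site graph, features, and weights are toy stand-ins.
site = {   # page -> (content terms, outgoing links with anchor text)
    "/":         ({"home"},    [("/products", "our products")]),
    "/products": ({"catalog"}, [("/products/item1", "USB cable")]),
    "/products/item1": ({"usb", "cable", "price"}, []),
}
TARGET_TERMS = {"price", "cable"}          # describes the target page type
GOOD_ANCHORS = {"products", "catalog"}     # learned link-context knowledge

def link_score(anchor, page_terms):
    a = len(set(anchor.split()) & GOOD_ANCHORS)   # link-context feature
    c = len(page_terms & TARGET_TERMS)            # page-content feature
    return a + c

def dfs(page, visited=None):
    visited = visited or set()
    visited.add(page)
    terms, links = site[page]
    if TARGET_TERMS <= terms:
        return page                               # target page found
    ranked = sorted(links, key=lambda l: link_score(l[1], terms), reverse=True)
    for url, _ in ranked:
        if url not in visited:
            found = dfs(url, visited)
            if found:
                return found
    return None

print(dfs("/"))   # -> /products/item1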

16.
Web search is increasingly entity-centric; as a large fraction of common queries target specific entities, search results get progressively augmented with semi-structured and multimedia information about those entities. However, search over personal web browsing history still mostly revolves around keyword search. In this paper, we present a novel approach to answering queries over web browsing logs that takes into account entities appearing in the web pages, user activities, as well as temporal information. Our system, B-hist, aims at providing web users with an effective tool for searching and accessing information they previously looked up on the web by supporting multiple ways of filtering results using clustering and entity-centric search. In the following, we present our system and motivate our User Interface (UI) design choices by detailing the results of a survey on web browsing and history search. In addition, we present an empirical evaluation of our entity-based approach used to cluster web pages.

17.
《Applied Soft Computing》2007,7(1):398-410
Personalized search engines are important tools for finding web documents for specific users, because they are able to locate information on the WWW as accurately as possible, using efficient methods of data mining and knowledge discovery. Traditional search engines vary in type and features, supporting different functionality and ranking methods. New search engines that use link structures have produced improved search results which can overcome the limitations of conventional text-based search engines. Going a step further, this paper presents a system that provides users with personalized results derived from a search engine that uses link structures. The fuzzy document retrieval system (constructed from a fuzzy concept network based on the user's profile) personalizes the results yielded by link-based search engines according to the preferences of the specific user. A preliminary experiment with six subjects indicates that the developed system is capable of finding not only relevant but also personalized web pages, depending on the preferences of the user.
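The personalization step, combining a link-based engine's score with memberships drawn from a fuzzy concept network built over the user profile, might be sketched as follows; the memberships and the combination rule are invented for illustration:

# Re-rank link-based search results with a user's fuzzy concept profile:
# final score = engine score x the user's maximum membership over the
# concepts a document covers. Memberships and scores are illustrative.
user_profile = {"soccer": 0.9, "finance": 0.2}   # fuzzy memberships in [0, 1]

results = [   # (url, link-based engine score, concepts found in the page)
    ("a.com", 0.80, ["finance"]),
    ("b.com", 0.60, ["soccer"]),
]

def personalize(results):
    rescored = []
    for url, score, concepts in results:
        pref = max((user_profile.get(c, 0.0) for c in concepts), default=0.0)
        rescored.append((url, score * pref))
    return sorted(rescored, key=lambda r: r[1], reverse=True)

print(personalize(results))   # b.com (0.54) now outranks a.com (0.16)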

18.
Future automated question answering systems will typically involve the use of local knowledge available on the users' systems as well as knowledge retrieved from the Web. The determination of what information we should seek out on the Web must be directed by its potential value or relevance to our objective in the light of what knowledge is already available. Here we begin to provide a formal quantification of the concept of relevance and related ideas for systems that use fuzzy-set-based representations to provide the underlying semantics. We also introduce the idea of ease of extraction to quantify the ability to extract relevant information from complex relationships. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 385–396, 2007.

19.
Research on search strategies for focused crawlers   (Total citations: 10; self: 2; others: 8)
When a focused crawler collects topic-relevant information, it must score each page's relevance to the topic and crawl higher-scoring pages first; this determines its search path and, with it, its search efficiency. This paper classifies existing focused crawler search strategies by the page-scoring algorithms they use, points out the characteristics, strengths, and weaknesses of each class, and summarizes several ways of improving a focused crawler's search efficiency.
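The common core of these strategies is a best-first crawl: keep a frontier ordered by an estimated page relevance and always expand the highest-scoring URL next. A minimal sketch, with the relevance function and the link graph stubbed out:

# Best-first topic crawling: a priority queue keyed on estimated topic
# relevance decides the crawl order. relevance() and WEB are stand-ins.
import heapq

WEB = {  # url -> (page text, outgoing links)
    "seed": ("sports news portal", ["p1", "p2"]),
    "p1":   ("football match report", []),
    "p2":   ("cooking recipes", []),
}
TOPIC = {"football", "match", "sports"}

def relevance(text):
    words = set(text.split())
    return len(words & TOPIC) / len(TOPIC)    # crude topic overlap

def crawl(seed, limit=10):
    frontier = [(-relevance(WEB[seed][0]), seed)]
    visited, order = set(), []
    while frontier and len(order) < limit:
        _, url = heapq.heappop(frontier)      # most relevant first
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in WEB[url][1]:
            if link not in visited:
                heapq.heappush(frontier, (-relevance(WEB[link][0]), link))
    return order

print(crawl("seed"))   # -> ['seed', 'p1', 'p2']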

20.
Integrating a large number of Web information sources may significantly increase the utility of the World-Wide Web. A promising solution to the integration is the use of a Web information mediator that provides seamless, transparent access for the clients. Information mediators need wrappers to access a Web source as a structured database, but building wrappers by hand is impractical. Previous work on wrapper induction is too restrictive to handle the large number of Web pages that contain tuples with missing attributes, multiple values, variant attribute permutations, exceptions and typos. This paper presents SoftMealy, a novel wrapper representation formalism. This representation is based on a finite-state transducer (FST) and contextual rules. The approach can wrap a wide range of semistructured Web pages because FSTs can encode each different attribute permutation as a path. A SoftMealy wrapper can be induced from a handful of labeled examples using our generalization algorithm. We have implemented this approach in a prototype system and tested it on real Web pages. The performance statistics show that the sizes of the induced wrappers as well as the required training effort are linear with regard to the structural variance of the test pages. Our experiment also shows that the induced wrappers can generalize over unseen pages.
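The gist of a SoftMealy-style wrapper, a finite-state transducer whose transitions fire on contextual separators between attributes, can be conveyed with a tiny hand-built example; the states, separator rules, and sample page are invented, whereas real SoftMealy wrappers are induced from labeled pages:

# A tiny finite-state transducer for tuple extraction: contextual rules
# (separator patterns) drive transitions between attribute states.
# States, rules, and the sample page are illustrative only.
import re

# Each rule: (current state, separator regex, next state).
RULES = [
    ("start", r"<li>",  "name"),
    ("name",  r" - ",   "phone"),
    ("phone", r"</li>", "start"),
]

def extract(page):
    state, pos, tuple_, records = "start", 0, {}, []
    while pos < len(page):
        for cur, sep, nxt in RULES:
            if cur != state:
                continue
            m = re.compile(sep).search(page, pos)
            if not m:
                continue
            token = page[pos:m.start()].strip()
            if state != "start" and token:
                tuple_[state] = token          # emit the attribute just read
            if nxt == "start" and tuple_:
                records.append(tuple_); tuple_ = {}   # tuple complete
            state, pos = nxt, m.end()
            break
        else:
            break                              # no rule applies: stop
    return records

print(extract("<li>Alice - 555-1234</li><li>Bob - 555-9876</li>"))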
