Similar Documents
20 similar documents found (search time: 359 ms).
1.
Current models of web navigation focus only on the influence of textual information and ignore the role of graphical information. We studied the differential roles of text and graphics in identifying web page widgets, classified into two kinds: textual and graphical. Four versions of each web page were created by systematically removing textual and graphical information. The participants' task was to locate either textual or graphical widgets on the displayed web page. Results show that, for either kind of widget, task-completion time and number of clicks were significantly lower on pages with graphics than on pages without, demonstrating the importance of graphical information. Textual information matters as well: performance in locating graphical widgets under no-graphics conditions was better when text was present than when it was absent. Because text and graphics interact and complement each other in identifying graphical widgets, we conclude that cognitive models of web navigation should include the role of graphical information alongside textual information.

2.
To improve the organization and management of Web animation assets, this paper proposes an annotation algorithm for Web animation assets that fuses textual and visual features. First, the automatically extracted context of an animation asset is combined with attributes such as the asset's name, the page title, the URL, and the ALT text to form a feature set from which textual keywords are extracted. Then, the correlation between visual features and the candidate annotation words is used to filter the automatically extracted words, yielding automatic annotation of Web animation assets. Experiments show that the proposed algorithm can be effectively applied to the automatic annotation of Web animation assets.
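A minimal sketch of the first stage, gathering candidate annotation keywords from an asset's context; the tag names (img/embed) and the toy page are assumptions for illustration, not the paper's implementation:

```python
from urllib.parse import urlparse
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def candidate_keywords(html: str, asset_src: str) -> set:
    """Collect candidate annotation words for one animation asset from
    the page title, the asset's ALT text, and its file name / URL path."""
    soup = BeautifulSoup(html, "html.parser")
    words = set()
    if soup.title and soup.title.string:          # page topic words
        words.update(soup.title.string.lower().split())
    tag = soup.find(["img", "embed"], src=asset_src)
    if tag is not None:                           # ALT attribute words
        words.update(tag.get("alt", "").lower().split())
    path = urlparse(asset_src).path               # file name / URL segments
    for segment in path.replace("-", "/").replace("_", "/").split("/"):
        stem = segment.rsplit(".", 1)[0]
        if stem:
            words.add(stem.lower())
    return words

html = ('<html><head><title>Dancing cat animation</title></head>'
        '<body><img src="/gifs/dancing_cat.gif" alt="cat dance"></body></html>')
print(candidate_keywords(html, "/gifs/dancing_cat.gif"))
```

The second stage, filtering these candidates by their correlation with visual features, would then prune words unrelated to what the animation actually shows.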

3.
Web Page Information Extraction Based on an Extended DOM Tree
With the growth of the Internet, the volume and density of information on Web pages keep increasing. Most Web pages contain multiple information blocks, which are laid out compactly and share similar patterns in their HTML syntax. For pages containing multiple information blocks, this paper proposes an information extraction method: first, an extended DOM (Document Object Model) tree is built and the page is decomposed into discrete information strips; then, guided by the hierarchy of the extended DOM tree together with the necessary visual and semantic cues, the discrete strips are regrouped; finally, the subtrees containing the information blocks are identified and a depth-first traversal of the DOM tree performs the extraction. The algorithm can extract information from Web pages with multiple information blocks.
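A sketch of the strip-then-regroup idea using BeautifulSoup as the DOM; grouping strips by identical tag paths is a crude stand-in for the paper's visual/semantic integration step:

```python
from collections import defaultdict
from bs4 import BeautifulSoup, NavigableString, Tag

def text_strips(node, path=()):
    """Depth-first traversal yielding (tag-path, text) pairs, one per
    non-empty text node -- the 'discrete information strips'."""
    if isinstance(node, NavigableString):
        text = node.strip()
        if text:
            yield path, text
    elif isinstance(node, Tag) and node.name not in ("script", "style"):
        for child in node.children:
            yield from text_strips(child, path + (node.name,))

html = """<div><ul>
  <li><b>Title A</b> <span>2021</span></li>
  <li><b>Title B</b> <span>2022</span></li>
</ul></div>"""
soup = BeautifulSoup(html, "html.parser")
groups = defaultdict(list)   # strips sharing a path form a record field
for path, text in text_strips(soup):
    groups[path].append(text)
for path, texts in groups.items():
    print("/".join(path), "->", texts)
```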

4.
This paper investigates feature selection for genre-based classification of Chinese Web pages. Lexical features are obtained by combining automatic extraction with manual induction. By improving the PAT-tree storage structure and mining sequences over it, frequent-string features are obtained, which frees the text classification system from dependence on word segmentation and dictionaries; a fuzzy string-pattern feature representation is also proposed. In addition, formal (presentation) features of the text are incorporated into the feature set, and link-information features are introduced in view of the characteristics of Web pages. A genre-based Chinese Web page classification system was implemented, and the results show that classification performance is effectively improved.

5.
In this work we propose a model that represents the web as a directed hypergraph (instead of a graph), where links connect pairs of disjoint sets of pages. The web hypergraph is derived from the web graph by dividing the set of pages into non-overlapping blocks and using the links between pages of distinct blocks to create hyperarcs. A hyperarc connects a block of pages to a single page, providing more reliable information for link analysis. We use the hypergraph model to create hypergraph versions of the Pagerank and Indegree algorithms, referred to as HyperPagerank and HyperIndegree, respectively. The hypergraph is derived from the web graph by grouping pages under two different partition criteria: pages belonging to the same web host, or to the same web domain. We compared the original page-based algorithms with the host-based and domain-based versions, considering a combination of page reputation, the textual content of the pages, and the anchor text. Experimental results on three distinct web collections show that HyperPagerank and HyperIndegree may yield better results than the original graph versions of Pagerank and Indegree. We also show that the hypergraph versions of the algorithms are slightly less affected by noise links and spamming.
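The block-to-page hyperarc construction is easy to illustrate for HyperIndegree: all links from pages of one host to a target page collapse into a single hyperarc, so a link farm on one host counts only once. A minimal sketch assuming a plain edge list (not the authors' implementation):

```python
from collections import defaultdict
from urllib.parse import urlparse

def hyper_indegree(edges):
    """Count hyperarcs (source host -> target page): many links from
    the same host to the same page contribute a single hyperarc."""
    source_hosts = defaultdict(set)
    for src, dst in edges:
        src_host, dst_host = urlparse(src).netloc, urlparse(dst).netloc
        if src_host != dst_host:   # links inside a block are discarded
            source_hosts[dst].add(src_host)
    return {page: len(hosts) for page, hosts in source_hosts.items()}

edges = [
    ("http://a.com/1", "http://t.com/x"),
    ("http://a.com/2", "http://t.com/x"),  # same host: same hyperarc
    ("http://b.com/1", "http://t.com/x"),
]
print(hyper_indegree(edges))  # {'http://t.com/x': 2}
```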

6.
The most fascinating advantage of the semantic web would be its capability to understand and process the contents of web pages automatically. Realizing the semantic web involves two main tasks: (1) representation and management of a large amount of data and metadata for web contents; and (2) information extraction and annotation on web pages. On the one hand, recognition of named entities is a basic and important problem to be solved before deeper semantics of a web page can be extracted. On the other hand, semantic web information extraction is a language-dependent problem that requires particular natural language processing techniques. This paper introduces VN-KIM IE, the information extraction module of the semantic web system VN-KIM that we have developed. VN-KIM IE automatically recognizes named entities in Vietnamese web pages by identifying their classes and, where present, their addresses in the knowledge base of discourse. That information is then annotated onto those web pages, providing a basis for NE-based searching over them, as compared to the current keyword-based search. The design, implementation, and performance of VN-KIM IE are presented and discussed.

7.
An effective solution for automating information extraction from Web pages is the wrapper. A wrapper associates a Web page with an XML document that represents part of the information in that page in a machine-readable format. Most existing wrapping approaches have traditionally focused on how to generate extraction rules, while ignoring the potential benefits of using the schema of the extracted information in wrapper evaluation. In this paper, we investigate how the schema of extracted information can be used effectively in both the design and the evaluation of a Web wrapper. We define a clean declarative semantics for schema-based wrappers by introducing the notion of a (preferred) extraction model, which is essential to compute a valid XML document containing the information extracted from a Web page. We developed the SCRAP (SChema-based wRAPper for web data) system for the proposed schema-based wrapping approach, which also provides visual support tools for the wrapper designer. Moreover, we present a wrapper generalization framework to profitably speed up the design of schema-based wrappers. Experimental evaluation shows that SCRAP wrappers not only successfully extract the required data but are also robust to changes that may occur in the source Web pages.

8.
We present a general framework for information extraction from web pages based on a special wrapper language, called token-templates. By using token-templates in conjunction with logic programs, we are able to reason about web page contents, search for and collect facts, and derive new facts from various web pages. We give a formal definition of the semantics of logic programs extended by token-templates and define a general answer-complete calculus for these extended programs. These methods and techniques are used to build intelligent mediators and web information systems.

9.
In a Web public-opinion analysis system, the Web page preprocessing scheme is implemented with information extraction techniques based on structural page analysis, together with data storage techniques. Drawing on the internal structure of HTML pages, a page-parsing template based on node paths in the HTML DOM structure is designed for Web information extraction. A linking mechanism between pages, built from a study of page URL features, is applied to database access and improves its efficiency.
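The node-path template idea can be illustrated with lxml, whose getpath() returns an XPath for a DOM node; the sample pages and the choice of target node are hypothetical:

```python
from lxml import html  # pip install lxml

page = html.fromstring(
    "<html><body><div class='post'><h2>Headline</h2>"
    "<p>Body text.</p></div></body></html>")

# Offline: derive a node-path template from one labeled example node.
example = page.xpath("//div[@class='post']/h2")[0]
template = page.getroottree().getpath(example)   # '/html/body/div/h2'
print("template:", template)

# Online: reuse the same path on a structurally similar page.
other = html.fromstring(
    "<html><body><div class='post'><h2>Another headline</h2>"
    "<p>More text.</p></div></body></html>")
print("extracted:", other.xpath(template)[0].text)
```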

10.
11.
Browsing web pages on mobile devices is now common, but most web pages were designed for desktop computers with large screens. When browsing on a mobile device, users must scroll up and down to find the information they want because of the limited screen size. Some commercial products reformat web pages, but the resulting pages still contain irrelevant information. We propose a system to personalize users' mobile web pages: a user can decide which blocks of a web page should be retained, and the sequence of those blocks can be altered according to individual preferences.
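A toy sketch of the retain-and-reorder idea; the block identifiers and preference list stand in for whatever block segmentation the system actually produces:

```python
def personalize(blocks, preferences):
    """Keep only the blocks the user chose to retain, in the user's
    preferred order. `blocks` maps block id -> rendered content."""
    return [(bid, blocks[bid]) for bid in preferences if bid in blocks]

blocks = {
    "nav":     "<ul>...site navigation...</ul>",
    "weather": "<div>Sunny, 24 C</div>",
    "news":    "<div>Top stories...</div>",
    "ads":     "<div>...</div>",
}
# This user wants news first, then weather, and nothing else.
for bid, content in personalize(blocks, ["news", "weather"]):
    print(bid, "->", content)
```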

12.
Search engines are increasingly efficient at identifying the best sources for any given keyword query, and are often able to identify the answer within the sources. Unfortunately, many web sources are not trustworthy because of erroneous, misleading, biased, or outdated information, and in many cases users are not satisfied with the results from any single source. In this paper, we propose a framework that aggregates query results from different sources in order to save users the hassle of individually checking query-related web sites to corroborate answers. To return the best answers, we assign a score to each individual answer by taking into account the number, relevance, and originality of the sources reporting it, as well as the prominence of the answer within the sources, and we aggregate the scores of similar answers. We conducted extensive qualitative and quantitative experiments of our corroboration techniques on queries extracted from the TREC Question Answering track and from a log of real web search engine queries. Our results show that taking into account the quality of web pages, and of the answers extracted from those pages, in a corroborative way identifies a correct answer for the majority of queries.
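A minimal sketch of corroborative scoring: each (source, answer) extraction carries per-source signals, and scores of similar answers are merged. The weighting formula here is invented for illustration; the paper's scoring combines more signals (originality among them):

```python
from collections import defaultdict

def corroborate(extractions):
    """Aggregate per-source answer scores; similar answers are merged
    by lowercasing. Each tuple is (answer, source_relevance, prominence)."""
    totals = defaultdict(float)
    for answer, relevance, prominence in extractions:
        # Invented weighting: a source contributes its relevance,
        # boosted by how prominent the answer was within it.
        totals[answer.lower()] += relevance * (0.5 + 0.5 * prominence)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

extractions = [
    ("Mount Everest", 0.9, 1.0),
    ("mount everest", 0.7, 0.6),   # a second, corroborating source
    ("K2",            0.8, 0.9),
]
print(corroborate(extractions))    # corroboration lifts Everest above K2
```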

13.
Session identification is an important step in Web log preprocessing. To address the shortcomings of traditional session identification, an improved algorithm is proposed. After individual users have been identified, the large number of frame pages is filtered out; then, based on each page's content and the site structure, a reasonably tailored per-page visit-time threshold is constructed and used to identify user sessions. Finally, comparison with several traditional session identification methods on experimental data shows that the algorithm is more reasonable and effective.
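The per-page threshold idea, sketched for a single user's time-ordered requests; the thresholds here are hypothetical stand-ins for values derived from page content and site structure:

```python
from datetime import datetime, timedelta

def split_sessions(requests, thresholds, default=timedelta(minutes=30)):
    """Start a new session whenever the gap since the previous request
    exceeds that previous page's visit-time threshold (per page, not global)."""
    sessions, current = [], []
    for ts, page in requests:
        if current:
            prev_ts, prev_page = current[-1]
            if ts - prev_ts > thresholds.get(prev_page, default):
                sessions.append(current)
                current = []
        current.append((ts, page))
    if current:
        sessions.append(current)
    return sessions

t0 = datetime(2024, 1, 1, 9, 0)
requests = [(t0, "/home"), (t0 + timedelta(minutes=2), "/article"),
            (t0 + timedelta(minutes=50), "/home")]
# Long articles tolerate longer reading gaps than a home page does.
thresholds = {"/article": timedelta(minutes=40), "/home": timedelta(minutes=10)}
print(len(split_sessions(requests, thresholds)))  # -> 2 sessions
```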

14.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Since traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework that utilizes location information for Web search. The proposed framework consists of an offline stage that extracts the focused locations of crawled Web pages, and an online stage that performs location-aware ranking of search results. The focused locations of a Web page are the locations most appropriately associated with that page. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, forming a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute ranking scores based on location relevance and the location-constrained scores for each query keyword. Experiments on real datasets crawled from nj.gov, the BBC, and the New York Times show that our focused-location extraction outperforms previous methods and that the proposed ranking algorithm performs best across different spatial textual queries.
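A sketch of the online scoring step, assuming the offline stage has already produced per-document <keyword, location> weights; the mixing parameter alpha and the score shapes are invented for illustration:

```python
def location_aware_score(doc, query_terms, query_location, alpha=0.6):
    """Blend textual relevance with location relevance using the
    offline <keyword, location> map stored with the document."""
    text_score = loc_score = 0.0
    for term in query_terms:
        if term in doc["keywords"]:
            text_score += 1.0
        loc_score += (doc["keyword_locations"]
                      .get(term, {})
                      .get(query_location, 0.0))
    return alpha * text_score + (1 - alpha) * loc_score

doc = {
    "keywords": {"pizza", "delivery"},
    "keyword_locations": {"pizza": {"Newark": 0.8, "Trenton": 0.2}},
}
print(location_aware_score(doc, ["pizza", "delivery"], "Newark"))  # 1.52
```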

15.
Research on a Web Page Classification Method Based on Feature Selection and Fuzzy Learning
Information on the WWW is extremely rich, and building an automatic classifier has become an urgent task for accurately extracting useful information from Web pages. Because the number of keywords in a text collection is very large, classification faces a severe dimensionality problem, and most previous classifiers are either slow or lack a self-learning capability. This paper proposes a similarity-based feature selection algorithm and an adaptive fuzzy learning algorithm for classification. The feature selection algorithm addresses the dimensionality problem and speeds up classification, while the adaptive fuzzy learning algorithm gives the classifier the ability to learn human knowledge and improves accuracy.

16.
A Multi-Knowledge-Based Approach to Web Page Information Extraction
Automatically extracting the required information content from Web pages is an important research topic in intelligent Web information retrieval. To effectively solve the problem of acquiring the information-description knowledge needed for Web page information extraction, this paper proposes a multi-knowledge-based Web page information extraction method (MKIE for short). The method divides the knowledge needed for extraction into two classes: one class describes the representational characteristics of the page content itself and gives deterministic patterns for recognizing the information objects in a page; the other describes non-deterministic patterns for the information record blocks and the individual information objects of a page. Using the first class of knowledge, MKIE dynamically analyzes pages to acquire the second class; with both classes, it extracts the required information from pages whose content is similar but whose presentation varies. Experiments on extracting information from American university faculty publication pages show that MKIE has a strong capability for automatic recognition and extraction of Web page information.

17.
Existing Web page segmentation methods are all implemented on the Document Object Model and are relatively difficult to implement. To address this, a new segmentation method based on a string data model is proposed. The method learns the features of Web page titles through machine learning and uses titles to segment pages. First, title features are learned from the page's line-block distribution function and the title tags; then the page is segmented into content blocks based on the titles; finally, the content blocks are merged according to block depth to complete the segmentation. Theoretical analysis and experimental results show that the algorithm has O(n) time and space complexity, that the method segments pages such as university portals, blogs, and resource sites well, and that it can serve a variety of Web information management applications.
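The line-block distribution function can be sketched directly over the string model: strip tags, split into lines, and sum the text length of each sliding window of lines. Peaks mark dense content; valleys suggest segmentation points. The window size here is an illustrative choice:

```python
import re

def line_block_distribution(html: str, block: int = 3):
    """For each line i, the total text length of lines i..i+block-1,
    computed on the tag-stripped string model of the page."""
    text = re.sub(r"(?is)<(script|style).*?</\1>", "", html)
    text = re.sub(r"<[^>]+>", "", text)
    lines = [ln.strip() for ln in text.splitlines()]
    return [sum(len(ln) for ln in lines[i:i + block])
            for i in range(len(lines))]

html = """<html><body>
<a>nav</a>
<h2>Section title</h2>
<p>A fairly long paragraph of body text sits here.</p>
<p>And another dense line of article content follows.</p>
<a>footer</a>
</body></html>"""
print(line_block_distribution(html))
```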

18.
Web Information Extraction Based on a Probabilistic Model
Targeting the two-dimensional structure and content characteristics of Web pages, a tree-structured hierarchical conditional random field (TH-CRF) model is proposed for Web object extraction. First, an improved multi-feature vector space model represents page features from both the structure and the content side; second, a Boolean model and multi-rule attributes are introduced to better represent the structural and semantic features of Web objects; third, TH-CRFs are used for information extraction, locating the relevant job-posting information while optimizing the efficiency of model training. Experiments and comparison with existing Web information extraction models show that TH-CRF-based extraction achieves better precision while reducing time complexity.

19.
Identifying the query intent of search engine users has attracted much attention in information retrieval. This paper proposes a method that fuses several classes of features to identify Web query intent. Query intent identification is cast as a classification problem, and effective features are extracted from resources of different kinds, including the query text, content returned by the search engine, and Web query logs. Intent identification experiments with this method on a manually annotated corpus of real Web queries show that each feature class helps improve identification, and that when all features are used together, 88.5% of the test queries receive the correct intent label.
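As a toy illustration of fusing feature classes, the snippet below concatenates query text with search-result snippet text before vectorizing; scikit-learn is assumed, the three-example corpus is obviously synthetic, and the real system would extract far richer features from each resource:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# (query text, engine-returned snippet text) -> intent label
queries  = ["buy cheap laptop", "python list comprehension", "facebook login"]
snippets = ["shop deals store", "tutorial docs examples", "sign in account"]
intents  = ["transactional", "informational", "navigational"]

# Fuse the two resources by simple concatenation before vectorizing.
combined = [q + " || " + s for q, s in zip(queries, snippets)]

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("lr", LogisticRegression(max_iter=1000)),
]).fit(combined, intents)

print(clf.predict(["java hashmap tutorial || docs examples guide"]))
```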

20.
For large-scale heterogeneous Web pages, information extraction methods based on visual features generally suffer from poor generality and low extraction efficiency. To address the generality problem, this paper proposes WEMLVF, a visual-feature-based Web information extraction framework using supervised machine learning. The framework generalizes well, as verified by extraction experiments on forum sites and news-comment sites. Then, to address the low efficiency caused by the high cost of extracting visual features, the paper uses WEMLVF to propose two methods for automatically generating extraction templates, one based on XPath and one based on the classic wrapper-induction algorithm SoftMealy. Both methods use visual features to generate the templates, but the templates themselves contain no visual features, so applying a template requires no visual-feature extraction from the page. This exploits the value of visual features for information extraction while markedly improving extraction efficiency, a conclusion confirmed by the experimental results.
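The efficiency argument is easy to see in code: once a template has been generated offline (here simply hard-coded, as a hypothetical product of the visual learning stage), applying it touches only the DOM, with no rendering or visual-feature extraction:

```python
from lxml import html

# Hypothetical template, as the offline stage might emit for a forum.
COMMENT_TEMPLATE = "//div[@class='comment']/p/text()"

pages = [
    "<div class='comment'><p>First comment.</p></div>",
    "<div class='comment'><p>Another comment.</p></div>",
]
for raw in pages:
    print(html.fromstring(raw).xpath(COMMENT_TEMPLATE))
```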
