Similar documents
Found 20 similar documents (search time: 375 ms)
1.
A New HTML Page Cleaning and Compression Algorithm   (total citations: 1, self: 0, others: 1)
任仲晟 《福建电脑》2009,25(1):60-61
This paper proposes a new HTML page cleaning and compression algorithm suited to Web information extraction. The algorithm makes full use of the relative position information of the tags in the HTML page tree. Experiments show that it can effectively handle syntax errors in pages and compress redundant page data, giving it good practical value and application prospects.

2.
This paper designs a blind-retrieval system for UAV airborne-communication database information, addressing the low data recall, poor granularity, low retrieval accuracy, and poor real-time performance of traditional blind-retrieval systems. It first presents the system's overall architecture, then optimizes the hardware by improving the memory structure. The software is built with Java and an embedded development library: a visual retrieval page is designed, retrieval information is selected, a middleware search function is added, and the blind-retrieval function is implemented, completing the system. Experimental results show that the system achieves high data recall, fine granularity, high retrieval accuracy, and good real-time performance.

3.
In today's era of exploding Internet data, the direction of forum opinion increasingly influences society, and monitoring and guiding public opinion has become unavoidable; at such data volumes, monitoring opinion effectively is a hard problem. The title, content, and other key fields of forum pages are the primary and most important information for opinion monitoring. To extract opinion-related information such as title, content, and author from forum pages, this paper proposes a web content extraction method combining the VIPS algorithm with intelligent fuzzy dictionary matching. VIPS uses a page's visual cues (background color; font color and size; borders; spacing within and between logical blocks) together with the DOM tree to segment the page into semantic blocks. The intelligent fuzzy dictionary then uses the AC-BM matching algorithm to match the VIPS semantic blocks against labels in a database, extracting the correctly matched fields. Combined, the two can extract a post's title, content, author, posting time, and so on. Concretely, the method first extracts page blocks with VIPS, detects and sets separators, reconstructs the semantic blocks, saves the segmented page as an XML file, and finally matches the semantic blocks in the XML file against the dictionary, extracting the successfully matched content. Experiments demonstrate the method's effectiveness.

4.
Web topic-specific retrieval, which combines collection techniques with filtering methods, is an emerging direction in information retrieval and a research hotspot in information processing. Targeting the weaknesses of existing topical retrieval systems in judging the topical relevance of Web page text and in Spider search strategy, two performance optimizations are introduced: drawing on information extraction, a pattern-set-based method for judging topical relevance that raises judgment accuracy; and, addressing PageRank's shortcomings in topical retrieval, a page evaluation algorithm based on reinforcement learning and a Web-environment-first search strategy. Finally, the performance of the two algorithms is evaluated experimentally.

5.
This paper studies incremental crawling of forums. Because a single topic in a forum is usually spread across multiple pages, while traditional incremental crawlers schedule fetches per page, those techniques are ill-suited to forums. Based on a statistical analysis of how boards change across many forums, the paper proposes a board-based incremental crawling strategy: all pages belonging to one board are treated as a whole and used as the basic unit of fetching, while board weights and local temporal patterns determine fetch frequency and fetch times. Experiments show an average recall of 99.3% for new and newly replied posts, and total system delay reduced by up to 42% compared with uniform scheduling.
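The paper's exact scheduling formula is not given in the abstract. As an assumed illustration of the general idea, the sketch below allocates a fixed crawl budget to boards in proportion to board weight times observed update rate; the proportional rule and the board tuples are hypothetical.

```python
def schedule_boards(boards, budget):
    """Allocate crawl slots per scheduling round, proportional to
    weight * observed update rate. Each board: (name, weight, updates_per_hour).
    Every board keeps at least one slot so low-activity boards are never starved."""
    scores = {name: weight * rate for name, weight, rate in boards}
    total = sum(scores.values())
    return {name: max(1, round(budget * s / total)) for name, s in scores.items()}
```

Active, high-weight boards are then fetched far more often than quiet archives, which is the effect the board-based strategy aims for.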

6.
Content-based video retrieval offers a new means of finding video data with similar content, and motion information, a kind of information unique to video, is one of the key problems in video retrieval research. After studying motion feature extraction algorithms, a practical module for extracting global and local motion features was designed and implemented. Experiments show that the module effectively separates global from local motion, and that the extracted motion features can serve as an important index for a content-based video similarity retrieval system.

7.
Vague sets describe the properties of objects more intuitively and naturally than traditional fuzzy sets. Introducing them into image retrieval, this paper proposes and implements IRUV (Image Retrieval Using Vague), a Vague-set-based image retrieval model, together with its retrieval and feedback algorithms. Experimental results show that the model and algorithms effectively characterize image properties and deliver good image matching and retrieval performance.
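The abstract does not state which similarity measure IRUV uses. As an assumed sketch, the code below uses a common distance-based similarity over Vague values, where each value is a (truth, false) membership pair with t + f <= 1; this measure and the feature vectors are illustrative, not necessarily the paper's.

```python
def vague_similarity(a, b):
    """Similarity between two Vague values, each a (truth, false) membership
    pair; an assumed distance-based measure, 1 at identity, decreasing with
    the average absolute gap in the two membership components."""
    (ta, fa), (tb, fb) = a, b
    return 1.0 - (abs(ta - tb) + abs(fa - fb)) / 2.0

def image_similarity(feats_a, feats_b):
    """Average Vague similarity over corresponding feature dimensions."""
    sims = [vague_similarity(x, y) for x, y in zip(feats_a, feats_b)]
    return sum(sims) / len(sims)
```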

8.
Data fusion exploits the strengths of multiple retrieval systems to improve retrieval results, but when there are too many candidate member systems, fusion performance does not keep improving. This paper proposes a selection algorithm (RFS) based on Chameleon hierarchical clustering and sequential forward selection: it first clusters all retrieval result lists by their estimated relevance, then uses sequential forward selection to pick member systems from different clusters for fusion. Experimental results show that the algorithm effectively selects a better-performing group of member systems.
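The sequential-forward part of the selection can be sketched as a greedy loop that repeatedly adds whichever member system most improves the fused result. The evaluation function below (size of the union of relevant documents) is a toy stand-in for the paper's actual fusion quality measure.

```python
def forward_select(systems, evaluate, max_size=3):
    """Greedy sequential forward selection: repeatedly add the member system
    whose inclusion most improves the score; stop when no candidate helps."""
    chosen, best_score = [], float("-inf")
    while len(chosen) < max_size:
        candidates = [s for s in systems if s not in chosen]
        if not candidates:
            break
        # Score each candidate group; max over (score, name) tuples breaks ties by name.
        score, pick = max((evaluate(chosen + [s]), s) for s in candidates)
        if score <= best_score:
            break
        chosen.append(pick)
        best_score = score
    return chosen
```

In the paper's setting the candidates would first be grouped by Chameleon clustering and picked from different clusters; that step is omitted here.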

9.
A Parallel Retrieval Strategy Based on Anchor Text   (total citations: 1, self: 0, others: 1)
高珊  何婷婷  胡文敏 《计算机工程》2008,34(19):30-31,3
In Web retrieval, the anchor text in a page is strongly correlated with the page body, yet most retrieval systems ignore the contribution anchor text makes to the body. This paper proposes a method for improving retrieval precision: build one index over page body text and one over anchor text for the document collection, and search the two in parallel. Experimental results show that the method effectively handles web page collections with particular structures.
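The abstract does not say how results from the two indexes are combined. As an assumed sketch, the snippet below merges the two result lists with a linear score combination; the weight `alpha` and the score dictionaries are hypothetical.

```python
def merge_scores(body_hits, anchor_hits, alpha=0.7):
    """Merge per-document scores from a body-text index and an anchor-text
    index queried in parallel; alpha weights the body index (assumed linear
    combination). Returns document ids ranked by combined score."""
    docs = set(body_hits) | set(anchor_hits)
    combined = lambda d: alpha * body_hits.get(d, 0.0) + (1 - alpha) * anchor_hits.get(d, 0.0)
    return sorted(docs, key=lambda d: -combined(d))
```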

10.
In the information society, online encyclopedias have become an important way for people to acquire knowledge. Their tag systems, a key component, not only help readers reach related pages from the page they are browsing, but also greatly aid large-scale text classification and improve the retrieval efficiency of encyclopedia search systems. Making full use of the link relationships between encyclopedia pages, this paper proposes HVSM (homogeneous principle based vector space model), a new tag recommendation algorithm for online encyclopedias built on the homogeneity principle between pages and the vector space model. The algorithm is general and can recommend tags across different online encyclopedia systems. Experiments show that, compared with the naive recommendation algorithm NAM (naive recommendation model), the new algorithm achieves higher precision. Analysis of the experimental data yields several useful conclusions that lay a foundation for future work.
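HVSM's exact scoring is not given in the abstract. As an illustrative stand-in for the vector-space half of the idea, the sketch below recommends tags by cosine similarity between a new page's term vector and already-tagged pages; the data shapes are assumptions.

```python
from collections import Counter
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors (Counters)."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend_tags(page_terms, tagged_pages, top_k=2):
    """Rank candidate tags: each tag scores the best similarity between the
    new page and any already-tagged page carrying that tag."""
    target = Counter(page_terms)
    scores = {}
    for tags, terms in tagged_pages:
        sim = cosine(target, Counter(terms))
        for tag in tags:
            scores[tag] = max(scores.get(tag, 0.0), sim)
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_k]
```

HVSM additionally exploits inter-page links (the homogeneity principle); that signal is omitted here.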

11.
The World Wide Web (WWW) has been recognized as the ultimate and unique source of information for the information retrieval and knowledge discovery communities. Tremendous amounts of knowledge are recorded in various types of media, producing an enormous number of web pages on the WWW. Retrieving required information from the WWW is thus an arduous task. Different schemes for retrieving web pages have been used by the WWW community. One of the most widely used is to traverse predefined web directories to reach a user's goal. These web directories are compiled or classified folders of web pages, usually organized into hierarchical structures. The classification of web pages into proper directories and the organization of directory hierarchies are generally performed by human experts. In this work, we provide a corpus-based method that applies text mining techniques to a corpus of web pages to automatically create web directories and organize them into hierarchies. The method is based on the self-organizing map learning algorithm and requires no human intervention during the construction of web directories and hierarchies. Experiments show that our method produces comprehensible and reasonable web directories and hierarchies.
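As a rough sketch of the self-organizing-map idea (not the paper's implementation), the code below trains a tiny winner-take-all map over document feature vectors and then assigns each document to its best-matching node, which plays the role of a "directory". The neighborhood update of a full SOM is omitted for brevity, so this degenerates to online competitive learning.

```python
import random

def train_som(vectors, num_nodes=2, epochs=50, lr=0.5, seed=0):
    """Train a simplified SOM: each node holds a weight vector pulled toward
    the inputs it wins. Learning rate decays linearly; neighborhood omitted."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(num_nodes)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)
        for v in vectors:
            # Best-matching unit by squared Euclidean distance.
            best = min(range(num_nodes),
                       key=lambda i: sum((nodes[i][d] - v[d]) ** 2 for d in range(dim)))
            for d in range(dim):
                nodes[best][d] += rate * (v[d] - nodes[best][d])
    return nodes

def assign(vectors, nodes):
    """Map each document vector to its best-matching node (its 'directory')."""
    dim = len(vectors[0])
    return [min(range(len(nodes)),
                key=lambda i: sum((nodes[i][d] - v[d]) ** 2 for d in range(dim)))
            for v in vectors]
```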

12.
We propose a new way of browsing bilingual web sites through concurrent browsing with automatic similar-content synchronization and viewpoint retrieval facilities. Our prototype browser, the Bilingual Comparative Web Browser (B-CWB), concurrently presents bilingual web pages in a way that enables their contents to be automatically synchronized. The B-CWB allows users to browse multiple web news sites concurrently and compare their viewpoints on news articles written in different languages (English and Japanese). Our viewpoint retrieval is based on detecting similarities and differences: pages are categorized by viewpoint into overall similarity, content difference, and subject difference. Content synchronization means that a user operation (scrolling or clicking) on one web page invokes not necessarily the same operation on the other page, so as to preserve similarity of content between the pages. For example, scrolling one web page may invoke passage-level viewpoint retrieval on the other; clicking a web page (and obtaining a new page) invokes page-level viewpoint retrieval within the other site's pages through the use of an English-Japanese dictionary.

13.
Mining linguistic browsing patterns in the world wide web   (total citations: 2, self: 0, others: 2)
World-wide-web applications have grown very rapidly and have made a significant impact on computer systems. Among them, web browsing for useful information may be the most common. Because of this tremendous amount of use, efficient and effective web retrieval has become a very important research topic in this field. Data mining is the process of extracting desirable knowledge or interesting patterns from existing databases for a certain purpose. In this paper, we use data mining techniques to discover relevant browsing behavior from log data on web servers, thereby helping form rules for the retrieval of web pages. The browsing time of a customer on each web page is used to analyze retrieval behavior. Since the data collected are numeric, fuzzy concepts are used to process them and form linguistic terms. A sophisticated web-mining algorithm is proposed to find relevant browsing behavior from the linguistic data. Each page uses only the linguistic term with the maximum cardinality in later mining processes, making the number of fuzzy regions to be processed the same as the number of pages; computation time is thus greatly reduced. The mined patterns exhibit browsing behavior and can be used to provide appropriate suggestions to web-server managers.
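The "linguistic term with the maximum cardinality" step can be sketched as follows. The triangular membership functions and the particular terms ("short", "middle", "long") are illustrative assumptions, not the paper's actual membership definitions.

```python
def membership(t, points):
    """Triangular membership function; points = (left, peak, right)."""
    left, peak, right = points
    if t <= left or t >= right:
        return 0.0
    if t <= peak:
        return (t - left) / (peak - left)
    return (right - t) / (right - peak)

# Hypothetical linguistic terms for browsing time in seconds.
TERMS = {"short": (0, 0, 40), "middle": (20, 60, 100), "long": (80, 120, 10**9)}

def dominant_term(times):
    """Keep only the linguistic term with maximum total membership
    (cardinality) over all observed browsing times for one page."""
    cardinality = {name: sum(membership(t, pts) for t in times)
                   for name, pts in TERMS.items()}
    return max(cardinality, key=cardinality.get)
```

Reducing each page to a single term is what keeps the number of fuzzy regions equal to the number of pages in the later mining passes.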

14.
Social networking websites, which profile objects with predefined attributes and their relationships, often rely heavily on their users to contribute the required information. We have observed, however, that many web pages are actually created collectively according to the composition of some physical or abstract entity, e.g., a company, a person, or an event. Furthermore, users often like to organize pages into conceptual categories for better search and retrieval, making it feasible to extract relevant attributes and relationships from the web. Given a set of entities, each consisting of a set of web pages, we name the task of assigning pages to the corresponding conceptual categories conceptual web classification. To address it, we propose an entity-based co-training (EcT) algorithm that learns from unlabeled examples to boost its performance. Unlike existing co-training algorithms, EcT takes into account the entity semantics hidden in web pages and requires no prior knowledge about the underlying class distribution, which is crucial in the standard co-training algorithms used in web classification. In our experiments, we evaluated EcT, standard co-training, and three other non-co-training learning methods on the Conf-425 dataset. Both EcT and co-training performed well compared to baseline methods that required large amounts of training examples.

15.
Duplicate web pages are common on today's Internet, and they increase the complexity of data mining and search. To address the shortcomings of existing techniques, this paper applies a Bloom-filter-based deduplication algorithm to duplicate web pages. Pages are first preprocessed with an existing noise-removal algorithm, and the Bloom filter structure then greatly reduces the time and space complexity of deduplication. Long sentences that characterize a page are extracted from it, turning page deduplication into a search over long sentences, with the Bloom filter reducing the algorithm's time complexity.
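As a minimal sketch of the technique (sizes, hash choice, and the duplicate criterion are assumptions, not the paper's parameters), a Bloom filter over extracted feature sentences might look like this:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k salted MD5-derived positions per item."""
    def __init__(self, size=1 << 16, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{item}".encode("utf-8")).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

def is_duplicate(bf, sentences):
    """Flag a page as a candidate duplicate when every one of its extracted
    feature sentences has already been seen (subject to false positives)."""
    return all(s in bf for s in sentences)
```

Membership tests are O(k) per sentence regardless of how many pages have been indexed, which is where the time-complexity saving comes from; the trade-off is a small false-positive rate.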

16.
To provide users with better personalized Web retrieval, an improved algorithm for generating personalized dictionaries, IGAUPD, is implemented. From the many pages a user has browsed with interest, it mines the words that truly match the user's interests, shrinking the traditional lexicon so that, when modeling user interests, feature descriptions of interesting pages can be formed faster and more accurately, better supporting personalized retrieval. IGAUPD adopts a new term-weighting formula, IWTUPD, to better describe a word's importance within a page collection and to effectively exclude overly frequent words. Finally, experiments verify the advantages of the personalized dictionary generated by IGAUPD.

17.
User reviews on forum sites are an important information source for many popular applications (e.g., monitoring and analysis of public opinion) and are usually represented as structured records. To the best of our knowledge, little existing work in the literature has systematically investigated the problem of extracting user reviews from forum sites. Besides the variety of web page templates, user-generated reviews raise two new challenges. First, the inconsistency of review contents in terms of both the document object model (DOM) tree and visual appearance impairs the similarity between review records; second, the review content in a review record corresponds to complicated subtrees rather than single nodes in the DOM tree. To tackle these challenges, we present WeRE, a system that performs automatic user review extraction. Review records are first extracted from web pages with the proposed level-weighted tree similarity algorithm, and the review contents within the records are then extracted exactly by measuring node consistency. Our experimental results on 20 forum sites indicate that WeRE achieves high extraction accuracy.

18.
A Web Page Deduplication Strategy Based on Text Similarity   (total citations: 1, self: 0, others: 1)
To address the frequent appearance of identical or similar content in web retrieval results, this paper proposes a deduplication method based on computing page similarity. The algorithm extracts a feature string from each page; building on earlier feature-code extraction work, it adds the extraction of text-structure features, and page similarity is obtained by comparing the differences between feature strings. Comparison with similar methods shows that this approach reduces time complexity and achieves high recall and precision, making it suitable for large-scale web page deduplication.

19.
The Internet contains large numbers of duplicate pages, and deduplication is one of the keys to efficiency in information retrieval and large-scale web crawling. Building on "fingerprint" and feature-code deduplication algorithms, this paper proposes an edit-distance-based deduplication algorithm that obtains the similarity between pages by computing the edit distance between their fingerprint sequences. It overcomes the failure of fingerprint and feature-code algorithms to account for the structure of the page body: pages are compared on both content and body structure, making duplicate judgments more accurate. Experiments show the algorithm is effective, with fairly high deduplication precision and recall.
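The core computation described here can be sketched directly. The mapping from edit distance to a similarity score is an assumption (normalizing by the longer sequence); the dynamic-programming distance itself is the standard Levenshtein algorithm.

```python
def edit_distance(a, b):
    """Levenshtein distance via single-row dynamic programming."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (a[i - 1] != b[j - 1]))  # substitution/match
            prev = cur
    return dp[n]

def fingerprint_similarity(fp_a, fp_b):
    """Map edit distance between two page fingerprint sequences to [0, 1]
    (assumed normalization by the longer sequence)."""
    if not fp_a and not fp_b:
        return 1.0
    return 1.0 - edit_distance(fp_a, fp_b) / max(len(fp_a), len(fp_b))
```

Two pages would then be flagged as duplicates when their fingerprint similarity exceeds a chosen threshold.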

20.
IPSMS: Design and Implementation of an Internet Public Sentiment Monitoring System   (total citations: 3, self: 0, others: 3)
This paper describes IPSMS (Internet public sentiment monitoring system), a system for monitoring online public opinion. It searches online news and forum/BBS posts by keyword and clusters them by event, so that administrators can learn about ongoing or past events by reading the event clusters; it also continuously and automatically tracks how events develop, helping administrators grasp the full picture of an event quickly, completely, and comprehensively. The system consists of a web crawler, a web page parser, and a tracking-and-detection subsystem. Because online opinion involves huge volumes of data, the system uses page-cleaning techniques for efficiency and a k-d tree in topic tracking. Finally, future work on the system is discussed.
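A k-d tree speeds up the nearest-topic lookups that topic tracking needs. As a self-contained illustration (the feature vectors here are 2-D toys; real topic vectors would be high-dimensional), a minimal k-d tree with pruned nearest-neighbour search might look like this:

```python
def build_kdtree(points, depth=0):
    """Build a k-d tree by recursive median split, cycling through axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    """Nearest-neighbour search, pruning the far branch when the splitting
    plane is further away than the best match found so far."""
    if node is None:
        return best
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist(node["point"]) < dist(best):
        best = node["point"]
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if diff ** 2 < dist(best):
        best = nearest(far, target, best)
    return best
```

In a tracker, each incoming post's feature vector would be matched against event centroids stored in the tree instead of against every event linearly.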

