Similar Literature
20 similar documents found (search took 31 ms)
1.
In recent years, malicious web page detection has relied mainly on semantic analysis or emulated code execution to extract features, but such approaches are complex to implement, incur high computational cost, and enlarge the attack surface. To address this, a deep-learning-based malicious web page detection method is proposed. Simple regular expressions first extract semantics-agnostic tokens directly from the static HTML document; a neural network model then captures locality-sensitive representations of the document at multiple hierarchical spatial scales, enabling tiny malicious code snippets to be located quickly in pages of arbitrary length. Comparative experiments against several baseline and ablated models show that the method achieves a 96.4% detection rate at a 0.1% false-positive rate, yielding better classification accuracy. Its speed and accuracy make it suitable for deployment in endpoints, firewalls, and web proxies.
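The token-extraction step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the regular expression and the token limit are assumptions chosen for demonstration.

```python
import re

# Assumed pattern: runs of word characters plus a few URL-ish punctuation
# marks, pulled straight from raw HTML with no parsing or code execution.
TOKEN_RE = re.compile(r"[\w.:/-]+")

def extract_tokens(html: str, max_tokens: int = 1024):
    """Split a static HTML document into a flat, semantics-agnostic
    token stream suitable for feeding a sequence model."""
    return TOKEN_RE.findall(html)[:max_tokens]

page = '<html><body onload="eval(x)">hi</body></html>'
tokens = extract_tokens(page)
```

Tag names, attribute names, and script fragments all surface as plain tokens, which is what lets a downstream model spot suspicious patterns without a parser.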

2.
Automatic extraction of topical information from web pages based on the DOM   Total citations: 43 (self: 0, other: 43)
The main information conveyed by a Web page is usually buried in large amounts of irrelevant structure and text, so users cannot obtain the topical information quickly, which limits the Web's usability; information extraction helps solve this problem. Based on the DOM specification, and targeting HTML's semi-structured nature and lack of semantic description, an STU-DOM tree model carrying semantic information is proposed. An HTML document is converted into an STU-DOM tree, which is then filtered by structure and pruned by semantics, allowing the topical information to be extracted accurately. The method does not depend on the information source and does not alter the structure or content of the source page; it is automatic, reliable, and general. It has considerable application value and can be applied to Web browsing on PDAs and mobile phones as well as to information retrieval systems.
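The structure-based filtering idea can be illustrated with the standard-library HTML parser. This is a hedged sketch, not the paper's STU-DOM model: the set of noise tags is an assumption, and the semantic pruning step is omitted.

```python
from html.parser import HTMLParser

# Assumed noise set; the actual paper prunes by structure and semantics.
NOISE_TAGS = {"script", "style", "nav", "footer"}

class TopicTextExtractor(HTMLParser):
    """Collect text that is not nested inside a noise-bearing tag."""
    def __init__(self):
        super().__init__()
        self.depth_in_noise = 0   # how many enclosing noise tags are open
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in NOISE_TAGS:
            self.depth_in_noise += 1

    def handle_endtag(self, tag):
        if tag in NOISE_TAGS and self.depth_in_noise:
            self.depth_in_noise -= 1

    def handle_data(self, data):
        if self.depth_in_noise == 0 and data.strip():
            self.chunks.append(data.strip())

p = TopicTextExtractor()
p.feed("<html><script>var x;</script><p>Main topic text</p></html>")
```

After feeding a page, `p.chunks` holds only the text outside pruned subtrees.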

3.
Knowledge discovery based on search engines   Total citations: 3 (self: 0, other: 3)
Data mining is generally applied to large, highly structured databases to discover the knowledge they contain. As online text grows, it carries ever richer knowledge, yet that knowledge is hard to analyze and exploit. Devising an effective scheme for discovering the knowledge embedded in text is therefore important and is a current research topic. This paper uses the search engine Google to obtain relevant Web pages, filters and cleans them to obtain relevant text, and then performs text clustering, Episode-based event recognition and information extraction, data integration, and data mining to achieve knowledge discovery. Finally, a prototype system is presented that puts the approach to a practical test, with good results.

4.
As the Semantic Web develops, research on web page semantics keeps advancing. In today's web, however, non-semantic pages still make up the bulk of information systems. During integration, an information system also needs to understand a page's semantic structure in order to acquire and analyze its information. A method for analyzing web page semantic structure based on visual-feature selection is proposed. Ignoring the page's explicit semantics, it analyzes the semantic relations among a page's structural parts through their visual and content characteristics and uses cluster analysis to infer the semantic structure of the page's semi-structured information. A set of randomly chosen pages was analyzed with the method, and the results show that it performs well.

5.
Web information retrieval based on anchor text and its context   Total citations: 20 (self: 0, other: 20)
The hyperlink structure among documents is one of the biggest differences between Web information retrieval and traditional information retrieval, and it has given rise to retrieval techniques based on hyperlink structure. This paper describes the concept of link-description documents and, on that basis, studies the role of anchor text and its context in retrieval. Tests on a large-scale real-world data set of over 1.69 million web pages, using the relevant documents and evaluation methodology provided by TREC 2001, lead to the following conclusions. First, link-description documents summarize a page's topic with high precision but describe its content very incompletely. Second, compared with traditional retrieval methods, using anchor text improves system performance by 96% on the known-page-finding task, but anchor text and its context cannot improve retrieval performance on unknown-information (ad hoc) queries. Finally, combining the anchor-text-based method with traditional methods improves retrieval performance by nearly 16%.
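The core data structure behind anchor-text retrieval is a mapping from each target URL to the anchor texts that point at it. The sketch below is an illustration under simplifying assumptions (double-quoted `href` attributes, regex-based parsing); it is not the system described in the study.

```python
import re

# Assumed simplification: anchors with double-quoted href attributes.
ANCHOR_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', re.S | re.I)

def anchor_descriptions(html: str):
    """Map each target URL to the list of anchor texts pointing at it,
    so the texts can serve as a surrogate description of the target."""
    index = {}
    for url, text in ANCHOR_RE.findall(html):
        clean = re.sub(r"<[^>]+>", "", text).strip()  # drop nested markup
        index.setdefault(url, []).append(clean)
    return index

html = ('<a href="http://a.example/">search engine</a> '
        '<a href="http://a.example/">engine home</a>')
idx = anchor_descriptions(html)
```

Aggregating these descriptions across many source pages is what gives anchor text its high precision for summarizing the target page's topic.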

6.
As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently input into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information.

7.
8.
This paper proposes a new method for new-word recognition. The method directly harvests the keywords manually assigned to categorized web pages and stores them in per-category word lists according to the category of each page's channel, completing new-word recognition and clustering quickly. The method is simple and fast. Using it, we extracted 229,237 entries from 600 million characters of web pages across 15 categories, of which 175,187 were new words, a new-word rate of 76.42%; the games category had the highest new-word rate and the politics/society category the lowest. The new words are mostly named entities with fixed structure, complete meanings, and strong specificity; they help resolve ambiguous segmentation and out-of-vocabulary problems and improve text-representation tasks such as classification and keyword indexing.
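The harvesting idea above reduces to a simple pass over (category, keywords) pairs. The sketch below is illustrative only; the base lexicon, category names, and sample keywords are invented for demonstration and are not from the paper.

```python
# Assumed tiny base lexicon; in practice this would be a full dictionary.
base_lexicon = {"新闻", "游戏"}

# (category, hand-tagged keywords) pairs, as harvested from tagged pages.
pages = [
    ("games", ["魔兽世界", "游戏"]),
    ("politics", ["新闻", "廉政建设"]),
]

category_words = {}   # per-category word lists (clustering by channel)
new_words = set()     # anything absent from the base lexicon is "new"
for category, keywords in pages:
    bucket = category_words.setdefault(category, set())
    bucket.update(keywords)
    new_words.update(k for k in keywords if k not in base_lexicon)
```

Because the keywords were assigned by human editors, no segmentation or statistical filtering is needed, which is what makes the method fast.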

9.
Research and implementation of topic information extraction from Web pages   Total citations: 5 (self: 0, other: 5)
The main information in a Web page is usually hidden among many irrelevant features, such as unimportant images and unrelated links, so users cannot obtain the topical information quickly, which limits the Web's usability. This paper proposes a method and corresponding algorithm for extracting a page's topical content, tested and evaluated by human judgment on 5,000 pages from 120 websites. The experimental results show that the method is practical, reaching 91.35% accuracy.

10.
Server pages (also called dynamic pages) render a generic web page into many similar ones. The technique is commonly used for implementing web application user interfaces (UIs). Yet our previous study found a high rate of repetitions (also called ‘clones’) in web applications, particularly in UIs. The finding raised the question as to why such repetitions had not been averted with the use of server pages. For an answer, we conducted an experiment using PHP server pages to explore how far server pages can be pushed to achieve generic web applications. Our initial findings suggested that generic representation obtained using server pages sometimes compromises certain important system qualities such as run‐time performance. It may also complicate the use of WYSIWYG editors. We have analysed the nature of these trade‐offs, and now propose a mixed‐strategy approach to obtain optimum generic representation of web applications without unnecessary compromise to critical system qualities and user experience. The mixed‐strategy approach applies the generative technique of XVCL to achieve genericity at the meta‐level representation of a web application, leaving repetitions to the actual web application. Our experiments show that the mixed‐strategy approach can achieve a good level of genericity without conflicting with other system qualities. Our findings should open the way for others to better‐informed decisions regarding generic design solutions, which should in turn lead to simpler, more maintainable and more reusable web applications. Copyright © 2008 John Wiley & Sons, Ltd.

11.
Parallel corpora are fundamental data resources underpinning applications such as machine translation and cross-language information retrieval. Although the number of parallel web pages on the Internet is huge and growing, the heterogeneity and complexity of parallel websites make it a major challenge to acquire high-quality parallel pages quickly and automatically for corpus construction. This paper proposes a parallel-page acquisition method that combines URL patterns with HTML structure: HTML structure drives recursive visiting of parallel pages, while URL patterns optimize the topological order in which a parallel site is traversed, yielding efficient and accurate acquisition. Experiments on two parallel websites, the United Nations and the Hong Kong government, show that compared with traditional acquisition methods this approach cuts acquisition time by more than 50%, raises accuracy by 15%, and significantly improves machine translation quality (BLEU gains of 1.6 and 0.7 points, respectively).
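The URL-pattern idea rests on the observation that on many bilingual sites the two language versions differ by a predictable path substitution. The pattern below is an assumption for illustration, not one taken from the paper.

```python
import re

def candidate_counterpart(url: str) -> str:
    """Guess the Chinese-version URL for an English page by a path
    substitution (assumed pattern: /en/ -> /zh/)."""
    return re.sub(r"/en/", "/zh/", url)

pair = candidate_counterpart("http://example.org/en/doc/123.html")
```

A crawler can use such guesses to order its traversal, fetching the likely counterpart immediately instead of discovering it much later by exhaustive crawling.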

12.
Web information extraction based on a probabilistic model   Total citations: 1 (self: 0, other: 1)
Targeting the two-dimensional structure and content characteristics of Web pages, a tree-structured hierarchical conditional random field (TH-CRF) model is proposed for Web object extraction. First, an improved multi-feature vector space model represents page features from both structure and content. Second, a Boolean model and multi-rule attributes are introduced to better represent the structural and semantic features of Web objects. Third, TH-CRFs are used for Web object information extraction, locating relevant job-posting information and optimizing training efficiency. Experimental comparison with existing Web information extraction models shows that TH-CRF-based extraction improves accuracy while reducing extraction time complexity.

13.
Document representation and its application to page decomposition   Total citations: 6 (self: 0, other: 6)
Transforming a paper document to its electronic version in a form suitable for efficient storage, retrieval, and interpretation continues to be a challenging problem. An efficient representation scheme for document images is necessary to solve this problem. Document representation involves techniques of thresholding, skew detection, geometric layout analysis, and logical layout analysis. The derived representation can then be used in document storage and retrieval. Page segmentation is an important stage in representing document images obtained by scanning journal pages. The performance of a document understanding system greatly depends on the correctness of page segmentation and labeling of different regions such as text, tables, images, drawings, and rulers. We use the traditional bottom-up approach based on connected component extraction to efficiently implement page segmentation and region identification. A new document model which preserves top-down generation information is proposed, based on which a document is logically represented for interactive editing, storage, retrieval, transfer, and logical analysis. Our algorithm has a high accuracy and takes approximately 1.4 seconds on an SGI Indy workstation for model creation, including orientation estimation, segmentation, and labeling (text, table, image, drawing, and ruler) for a 2550×3300 image of a typical journal page scanned at 300 dpi. This method is applicable to documents from various technical journals and can accommodate moderate amounts of skew and noise.

14.
The complexity of web information environments and multiple‐topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured because of low overall relevance of that page. Segmenting the web pages into smaller units will significantly improve the performance. Conquering and traversing irrelevant page to reach a relevant one (tunneling) can improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic‐based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)‐based page partition algorithm to segment a web page into content blocks with a hierarchical structure and investigates how to take advantage of block‐level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi‐topic web page into several single topic context blocks and some of which may be interesting. Accordingly, focused crawler can pursue the interesting content blocks to retrieve the relevant pages. Experimental results indicate that this approach outperforms Breadth‐First, Best‐First and Link‐context algorithm both in harvest rate, target recall and target length. Copyright © 2007 John Wiley & Sons, Ltd.

15.
Web information may currently be acquired by activating search engines. However, our daily experience is not only that web pages are often either redundant or missing but also that there is a mismatch between information needs and the web's responses. If we wish to satisfy more complex requests, we need to extract part of the information and transform it into new interactive knowledge. This transformation may either be performed by hand or automatically. In this article we describe an experimental agent-based framework skilled to help the user both in managing achieved information and in personalizing web searching activity. The first process is supported by a query-formulation facility and by a friendly structured representation of the searching results. On the other hand, the system provides a proactive support to the searching on the web by suggesting pages, which are selected according to the user's behavior shown in his navigation activity. A basic role is played by an extension of a classical fuzzy-clustering algorithm that provides a prototype-based representation of the knowledge extracted from the web. These prototypes lead both the proactive suggestion of new pages, mined through web spidering, and the structured representation of the searching results. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 1101–1122, 2007.

16.
As the basis of all Web text analysis, the text representation method deeply influences analysis results. A multi-dimensional Web text representation is proposed. Traditional representations generally extract features only from the text content, yet a document's deeper features and external features can also represent it. This paper studies surface features, latent features, and social features: the first two can be extracted and learned from the text content, while social features are obtained by analyzing the interaction between documents and users. The proposed multi-dimensional representation is easy to use and can be plugged into various text-analysis models. In the experiments, two common text clustering algorithms, K-means and hierarchical agglomerative clustering, are extended into multi-dimensional versions named MDKM and MDHAC. Extensive experiments demonstrate the method's effectiveness, and the feature-combination results reveal some further insights.

17.
Research and design of a Lucene-based Chinese full-text retrieval system   Total citations: 4 (self: 0, other: 4)
A model for a Chinese full-text retrieval system based on Lucene is proposed. After analyzing Lucene's architecture, the system adopts statistics-based extraction of the main text of web pages and adds a Chinese word segmentation module and an index-document preprocessing module to improve retrieval efficiency and precision. For result handling, text clustering is used to display retrieval results by category, improving users' lookup efficiency. Experimental data show that when retrieving Chinese web pages the system performs markedly better in efficiency, precision, and result handling.

18.
A floorplanning and placement algorithm based on a new constraint-graph model   Total citations: 1 (self: 0, other: 1)
董社勤, 洪先龙, 黄钢, 顾均. Journal of Software (软件学报), 2001, 12(11): 1586-1594
How to represent a floorplan or placement configuration is the core problem for floorplanning and placement algorithms based on stochastic optimization. For non-slicing floorplanning and placement, a new constraint-graph-based model is proposed. From this model and its properties, an efficient placement algorithm with approximately O(n) time complexity can be derived. By introducing a deformed-grid assumption, a new and more precise representation of non-slicing structures is obtained: the trapezoidal grid model, with space complexity n(3 + ⌈lg n⌉), time complexity O(n), and a solution space of size n!·2^(3n-7). It has been proved that the trapezoidal grid model can represent all slicing placements while also representing non-slicing placements effectively, with the same time complexity as slicing representations. Experimental results show that the representation outperforms the recently published O-tree model. The trapezoidal grid model is topological, whereas the O-tree representation depends on module dimensions, so the trapezoidal grid handles floorplanning with soft modules more effectively.

19.
A lexical-chain-based method for keyword extraction from Chinese news pages   Total citations: 1 (self: 0, other: 1)
Lexical chains are an outward manifestation of the coherence created by semantic relations between words, offering important clues about a text's structure and topic. Building on a solution to word-sense disambiguation, this paper proposes extracting keywords from Chinese news pages using lexical chains combined with term-frequency, position, and clustering features. The method represents a document as lexical chains according to the semantic relations among its words and extracts keywords on that basis. Experiments on two corpora, Chinese news pages and academic journal articles, show that the method clearly improves the quality of the extracted keywords.
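The feature-combination step can be sketched as a weighted score per candidate word. This toy version covers only the term-frequency and position (title) features mentioned above; the lexical-chain construction itself is omitted, and the weights are assumptions, not the paper's values.

```python
from collections import Counter

def keyword_scores(words, title_words, tf_weight=1.0, title_bonus=2.0):
    """Score candidates by term frequency, with a bonus for words that
    also appear in the title (a simple position feature)."""
    tf = Counter(words)
    return {w: tf_weight * c + (title_bonus if w in title_words else 0.0)
            for w, c in tf.items()}

body = ["经济", "增长", "经济", "政策"]
scores = keyword_scores(body, title_words={"经济"})
best = max(scores, key=scores.get)
```

In the full method, membership in a strong lexical chain would contribute a further term to this score.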

20.
This paper proposes a new document retrieval (DR) and plagiarism detection (PD) system using multilayer self-organizing map (MLSOM). A document is modeled by a rich tree-structured representation, and a SOM-based system is used as a computationally effective solution. Instead of relying on keywords/lines, the proposed scheme compares a full document as a query for performing retrieval and PD. The tree-structured representation hierarchically includes document features as document, pages, and paragraphs. Thus, it can reflect underlying context that is difficult to acquire from the currently used word-frequency information. We show that the tree-structured data is effective for DR and PD. To handle tree-structured representation in an efficient way, we use an MLSOM algorithm, which was previously developed by the authors for the application of image retrieval. In this study, it serves as an effective clustering algorithm. Using the MLSOM, local matching techniques are developed for comparing text documents. Two novel MLSOM-based PD methods are proposed. Detailed simulations are conducted and the experimental results corroborate that the proposed approach is computationally efficient and accurate for DR and PD.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号