Similar Documents
20 similar documents found
1.
The web page is first converted into a normalized DOM tree; then, for each line of text, features such as text density and relevance to the page title are computed and fed as input parameters to a BP neural network for training, from which extraction rules are formed; finally, experiments verify the feasibility of the method.
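As an illustration, the text-density feature mentioned above can be sketched as follows. This is a minimal, assumption-laden version: the tag-stripping regex and the density definition are simplifications, and the name `text_density` is chosen here, not taken from the paper.

```python
import re

def text_density(line: str) -> float:
    """Ratio of visible text length to total line length for one line of
    HTML. A hypothetical, simplified form of the text-density feature;
    the paper's exact definition may differ."""
    if not line:
        return 0.0
    text = re.sub(r"<[^>]*>", "", line)   # strip tags, keep visible text
    return len(text.strip()) / len(line)

# Content-bearing lines score high; navigation-heavy lines score low.
content = "<p>This paragraph carries the article body text of the page.</p>"
nav = "<a href='/a'>a</a><a href='/b'>b</a>"
```

Features like this, computed per line, are exactly the kind of numeric inputs a small BP (backpropagation) network can be trained on to label lines as content or noise.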

2.
Materials and Process Specifications are complex semi-structured documents containing numeric data, text, and images. This article describes a coarse-grain extraction technique to automatically reorganize and summarize spec content. Specifically, a semantic-markup strategy for capturing content within a semantic ontology, suited to semi-automatic extraction, has been developed and evaluated. The working prototypes were built in the context of Cohesia's existing software infrastructure and use techniques from Information Extraction, XML technology, etc.

3.
马冬雪  宋设  谢振平  刘渊 《计算机应用》2020,40(6):1574-1579
To address the low efficiency of parsing bidding web pages with regular expressions, a new automated parsing method based on a bidding-domain ontology is proposed. First, the structural characteristics of bidding web-page text are analysed; second, a lightweight domain knowledge model of the bidding ontology is constructed; finally, a semantic matching and extraction algorithm for bidding web-page elements is given, realising automated parsing of bidding pages. Experimental results show that, through adaptive parsing, the new method achieves a precision of 95.33% and a recall of 88.29%, improvements of 3.98 and 3.81 percentage points respectively over the regular-expression method. The proposed method achieves adaptive, structured extraction of the semantic information in bidding web pages and meets practical performance requirements.

4.
To accurately mine users' points of interest, the probabilistic latent semantic analysis (PLSA) model is first used to project the page-term matrix vectors into a probabilistic latent semantic vector space, and an "automatic similarity-threshold selection" method is proposed to obtain the similarity threshold between pages. Finally, HAK-medoids, an agglomerative hierarchical k-medoids algorithm combining partitioning with agglomerative hierarchical clustering, is proposed to cluster user interest points. Experimental results show that HAK-medoids clusters better than traditional partition-based algorithms. The proposed user-interest clustering technique can also improve the efficiency of personalised recommendation and search in personalised-service applications.
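A minimal sketch of plain k-medoids clustering, the partitioning building block behind the HAK-medoids algorithm described above. The paper's hierarchical-agglomerative seeding and PLSA projection are not reproduced here; the naive first-k seeding below is purely illustrative.

```python
def k_medoids(points, k, dist, iters=20):
    """Plain k-medoids clustering: a simplified stand-in for the paper's
    HAK-medoids, which instead seeds medoids via hierarchical
    agglomeration. The first-k seeding below is purely illustrative."""
    medoids = list(range(k))          # naive deterministic seeding
    clusters = {}
    for _ in range(iters):
        # assignment step: attach each point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(points[m], p))
            clusters[nearest].append(i)
        # update step: move each medoid to the member that minimises
        # the total in-cluster distance
        new_medoids = []
        for members in clusters.values():
            best = min(members,
                       key=lambda c: sum(dist(points[c], points[j]) for j in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

# Two obvious groups in the plane, Manhattan distance.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
meds, clusters = k_medoids(pts, 2, manhattan)
```

Unlike k-means, the cluster representative is always an actual data point, which is why k-medoids suits spaces where only pairwise similarities (here, between pages) are available.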

5.
Recently, the class imbalance problem has attracted much attention from researchers in the field of data mining. When learning from imbalanced data in which most examples are labeled as one class and only few belong to another class, traditional data mining approaches do not have a good ability to predict the crucial minority instances. Unfortunately, many real world data sets like health examination, inspection, credit fraud detection, spam identification and text mining are all faced with this situation. In this study, we present a novel model called the "Information Granulation Based Data Mining Approach" to tackle this problem. The proposed methodology, which imitates the human ability to process information, acquires knowledge from Information Granules rather than from numerical data. This method also introduces a Latent Semantic Indexing based feature extraction tool using Singular Value Decomposition, to dramatically reduce the data dimensions. In addition, several data sets from the UCI Machine Learning Repository are employed to demonstrate the effectiveness of our method. Experimental results show that our method can significantly increase the ability of classifying imbalanced data.
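The LSI-by-SVD dimensionality reduction mentioned above can be sketched on a toy term-document matrix. The matrix values and the choice k=2 are illustrative, not from the paper.

```python
import numpy as np

# Toy term-document count matrix (rows = terms, columns = documents);
# the paper applies the same idea to much larger matrices.
A = np.array([[2., 0., 1.],
              [1., 0., 0.],
              [0., 3., 2.],
              [0., 1., 1.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                          # keep the top-k latent dimensions
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # best rank-k approximation of A
docs_k = np.diag(s[:k]) @ Vt[:k, :]            # documents in the reduced LSI space
```

Each document is now a k-dimensional vector instead of a vocabulary-sized one, which is the "dramatic reduction of data dimensions" the abstract refers to.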

6.
Phishing attacks are growing significantly each year and are considered one of the most dangerous threats on the Internet, as they may cause people to lose confidence in e-commerce. In this paper, we present a heuristic method to determine whether a webpage is a legitimate or a phishing page. This scheme can detect new phishing pages that blacklist-based anti-phishing tools cannot. We first convert a web page into 12 features, well selected based on existing normal and phishing pages. A training set of web pages, including normal and phishing pages, is then input to a support vector machine for training. A testing set is finally fed into the trained model for evaluation. Compared to existing methods, the experimental results show that the proposed phishing detector achieves a high accuracy rate with relatively low false-positive and false-negative rates.
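A hedged sketch of the kind of heuristic page/URL features a phishing detector converts pages into. The abstract does not list the paper's actual 12 features, so the ones below (and the name `url_features`) are illustrative guesses in the same spirit.

```python
def url_features(url: str) -> dict:
    """A few illustrative phishing heuristics; the paper's actual
    12-feature set is not reproduced here."""
    host = url.split("//")[-1].split("/")[0]
    return {
        "uses_ip_host": host.replace(".", "").isdigit(),  # raw IP instead of a domain
        "num_dots": host.count("."),                      # deeply nested subdomains
        "has_at_sign": "@" in url,                        # '@' hides the real host
        "url_length": len(url),                           # long, obfuscated URLs
    }

f = url_features("http://192.168.0.1/paypal.com/login")
```

Vectors like this, one per page, are exactly what gets fed to the SVM during training and testing.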

7.
This paper studies a CURE-clustering-based method for segmenting web pages into blocks, together with rules for extracting the main-content block. Node attributes are added to the page's DOM tree, converting it into an extended DOM tree carrying information-node offsets. The CURE algorithm then clusters the information nodes, with each resulting cluster representing a different block of the page. Finally, three main features of the content block are extracted to construct a block-weight formula, which is used to identify the main-content block.

8.
An extended DOM tree model with node frequencies, the BF-DOM tree (Block node Frequency-Document Object Model), is proposed and used to extract the main text of web pages. The method builds the new model by adding frequency and relevance attributes to certain DOM-tree nodes, then extracts the main content in combination with semantic distance. It rests on three observations: noise nodes have high frequency within a set of pages from the same source; main content generally consists of non-anchor text; and links related to the main text are semantically close to the article title. Experiments on eight websites show that the method extracts main content effectively, with recall and precision both above 96%, outperforming an entropy-based extraction method.

9.
The enormous amount of information available through the World Wide Web requires the development of effective tools for extracting and summarizing relevant data from Web sources. In this article we present a data model for representing Web documents and an associated SQL-like query language. Our framework provides an easy-to-use and well-formalized method for automatic generation of wrappers extracting data from Web documents.

10.
With the growing availability of online information systems, a need for user interfaces that are flexible and easy to use has arisen. For such systems, an interface that allows the formulation of approximate queries can be of great utility, since it lets users quickly explore the database contents even when they are unaware of the exact values of the database instances. Our work focuses on this problem, presenting a new model for ranking approximate answers and a new algorithm to compute the semantic similarity between attribute values, based on information retrieval techniques. To demonstrate the utility and usefulness of the approach, we performed a series of usability tests. The results suggest that our approach allows the retrieval of more relevant answers with less effort by the user.
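One common IR-style measure of similarity between attribute values is cosine similarity over token counts; whether the paper uses exactly this measure is not stated, so treat the sketch below as an illustrative baseline only.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity of token-count vectors: an IR-style proxy for
    attribute-value similarity (not necessarily the paper's measure)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Scores in [0, 1] like this give approximate-query engines a natural way to rank answers whose attribute values only partially match the query.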

11.
A blog can be described as an online personal journal. As an emerging medium, blogs have become a very popular way of expressing personal views and emotions on the Web, so quickly and accurately extracting useful information from them (post time, title, body, comments, etc.) is an important step in blog applications. This paper proposes a template-based blog information extraction method: it analyses the HTML source code of a blog site, extracts the site's template, and then extracts information from blog pages according to that template. Templates were extracted for 10 well-known Chinese blog sites, and experiments were run on 7,374 blog pages from those sites. The results show that the method can extract information from blog pages quickly and accurately using the extracted templates.

12.
TEG—a hybrid approach to information extraction
This paper describes a hybrid statistical and knowledge-based information extraction model, able to extract entities and relations at the sentence level. The model attempts to retain and improve the high accuracy levels of knowledge-based systems while drastically reducing the amount of manual labour by relying on statistics drawn from a training corpus. The implementation of the model, called TEG (trainable extraction grammar), can be adapted to any IE domain by writing a suitable set of rules in an SCFG (stochastic context-free grammar)-based extraction language and training them on an annotated corpus. The system does not contain any purely linguistic components, such as a PoS tagger or shallow parser, but allows external linguistic components to be used if necessary. We demonstrate the performance of the system on several named entity extraction and relation extraction tasks. The experiments show that our hybrid approach outperforms both purely statistical and purely knowledge-based systems, while requiring orders of magnitude less manual rule writing and smaller amounts of training data. We also demonstrate the robustness of our system under conditions of poor training-data quality. Ronen Feldman is a senior lecturer at the Mathematics and Computer Science Department of Bar-Ilan University in Israel, and the Director of the Data Mining Laboratory. He received his B.Sc. in Math, Physics and Computer Science from the Hebrew University, his M.Sc. in Computer Science from Bar-Ilan University, and his Ph.D. in Computer Science from Cornell University in NY. He was an Adjunct Professor at NYU Stern Business School. He is the founder of ClearForest Corporation, a Boston-based company specializing in the development of text mining tools and applications. He has given more than 30 tutorials on text mining and information extraction and authored numerous papers on these topics.
He is currently finishing his book “The Text Mining Handbook”, to be published by Cambridge University Press. Benjamin Rosenfeld is a research scientist at ClearForest Corporation. He received his B.Sc. in Mathematics and Computer Science from Bar-Ilan University. He is the co-inventor of the DIAL information extraction language. Moshe Fresko is finalizing his Ph.D. in the Computer Science Department at Bar-Ilan University in Israel. He received his B.Sc. in Computer Engineering from Bogazici University, Istanbul, Turkey, in 1991, and his M.Sc. in 1994. He is also an adjunct lecturer at the Computer Science Department of Bar-Ilan University and serves as the Information-Extraction Group Leader in the Data Mining Laboratory.

13.
J. Li  X. Tang  J. Liu  J. Huang  Y. Wang 《Pattern recognition》2008,41(6):1975-1984
Various microarray experiments are now done in many laboratories, resulting in the rapid accumulation of microarray data in public repositories. One of the major challenges of analyzing microarray data is how to extract and select efficient features from it for accurate cancer classification. Here we introduce a new feature extraction and selection method based on information gene pairs that show significant change across different tissue samples. Experimental results on five public microarray data sets demonstrate that the feature subset selected by the proposed method performs well and achieves higher classification accuracy on several classifiers. We perform extensive experimental comparison of the features selected by the proposed method and features selected by other methods, using different evaluation methods and classifiers. The results confirm that the proposed method performs as well as other methods on acute lymphoblastic-acute myeloid leukemia, adenocarcinoma and breast cancer data sets using fewer information genes, and leads to significant improvement of classification accuracy on colon and diffuse large B cell lymphoma cancer data sets.

14.
A technology for automatically assembling large software libraries, which promote software reuse by helping users locate the components closest to their needs, is described. Software libraries are automatically assembled from a set of unorganized components by using information retrieval techniques. The construction of the library is done in two steps. First, attributes are automatically extracted from natural language documentation by using an indexing scheme based on the notions of lexical affinities and quantity of information. Then a hierarchy for browsing is automatically generated using a clustering technique which draws only on the information provided by the attributes. Due to the free-text indexing scheme, tools following this approach can accept free-style natural language queries.

15.
Most biomedical signals are non-stationary, so knowledge of their frequency content and its temporal distribution is useful in a clinical context. Wavelet analysis is well suited to this task. The present paper uses this method to reveal hidden characteristics and anomalies of the human a-wave, an important component of the electroretinogram, since it is a measure of the functional integrity of the photoreceptors. We analyse the time–frequency features of the a-wave both in normal subjects and in patients affected by achromatopsia, a pathology disturbing the functionality of the cones. The results indicate the presence of two or three stable frequencies that, in the pathological case, shift toward lower values and change their times of occurrence. These findings are a first step toward a deeper understanding of the features of the a-wave and possible applications to diagnostic procedures for recognising incipient photoreceptoral pathologies.
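A single level of the Haar discrete wavelet transform illustrates how wavelet analysis localises frequency content in time, the property that suits non-stationary signals like the a-wave. The paper's choice of mother wavelet is not stated, so plain Haar here is only a sketch.

```python
import math

def haar_step(signal):
    """One level of the Haar discrete wavelet transform: pairwise
    averages give a coarse (low-frequency) view of the signal, while
    pairwise differences localise fast changes in time."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

# A flat segment yields zero detail; the step from 2.0 to 0.0 shows up
# as a non-zero detail coefficient at exactly that position.
a, d = haar_step([4.0, 4.0, 2.0, 0.0])
```

Applying the step recursively to the approximation coefficients yields the multi-level decomposition used in practice to track how the dominant frequencies evolve over time.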

16.
In this paper, a simple and robust approach for flame and fire image analysis is proposed. It is based on local binary patterns, double thresholding and the Levenberg–Marquardt optimization technique. The presented algorithm detects sharp edges and removes noise and irrelevant artifacts. The auto-adaptive nature of the algorithm ensures the primary edges of the flame and fire are identified under different conditions. Moreover, a graphical approach is presented that can be used to calculate the combustion-furnace flame temperature. Various experiments are carried out on synthetic as well as real flame and fire images, validating the efficacy and robustness of the proposed approach.
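The local binary pattern operator at the core of the approach can be sketched for a single 3x3 patch. Neighbour ordering and the >= convention vary between implementations; the choice below is one common variant, not necessarily the paper's.

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of a 3x3 patch: each neighbour
    contributes one bit, set when it is >= the centre pixel."""
    c = patch[1][1]
    # neighbours read clockwise starting from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

A uniform patch yields the all-ones code 255, while a bright-left/dark-right patch sets only the bits on the bright side; the histogram of such codes over an image is the texture descriptor used to separate flame regions from background.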

17.
A word-segmentation algorithm based on extracted contextual information
Chinese word segmentation is a special and important part of Chinese text processing. Traditional dictionary-based segmentation algorithms have serious shortcomings and cannot handle out-of-vocabulary words well, while purely probabilistic algorithms consider only the probability model of the training corpus and perform poorly on text from other domains. This paper proposes a probabilistic segmentation algorithm based on extracted contextual information, which incorporates the context of the text being segmented into the segmentation probability model to guide the segmentation. Combined with the classic n-gram model and the EM algorithm, it achieves good results in both closed and open tests.
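The n-gram segmentation idea can be sketched as a unigram Viterbi-style dynamic program over a toy lexicon. The probabilities below are invented for illustration; a real system estimates them (and higher-order n-grams) from a corpus, e.g. via EM as in the paper.

```python
import math

# Toy unigram probabilities, invented for illustration only.
P = {"研究": 0.2, "生命": 0.1, "研究生": 0.15, "命": 0.05, "起源": 0.1, "的": 0.3}

def segment(text):
    """Most-probable segmentation under a unigram model (Viterbi DP)."""
    n = len(text)
    best = [0.0] + [-math.inf] * n        # best log-probability ending at i
    back = [0] * (n + 1)                  # backpointer to the word start
    for i in range(1, n + 1):
        for j in range(max(0, i - 4), i): # consider words up to 4 chars
            w = text[j:i]
            if w in P and best[j] + math.log(P[w]) > best[i]:
                best[i] = best[j] + math.log(P[w])
                back[i] = j
    words, i = [], n                      # recover words from backpointers
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return words[::-1]
```

On the classic ambiguous string 研究生命起源的, the DP prefers 研究|生命 (probability 0.02) over 研究生|命 (0.0075), which is exactly the kind of decision contextual probabilities resolve.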

18.
支宗良  陈少飞 《计算机应用》2008,28(1):152-154
Because existing systems lack analysis of how well extraction rules adapt to page features, the robustness of their extraction rules is hard to guarantee. This paper proposes an optimised Web information extraction method that introduces three interrelated layers of rules. Based on an analysis of page-feature adaptability, it presents an optimisation algorithm for extraction rules that targets both precision and recall, and expresses complex-object extraction rules in standard XQuery. Experiments show that the method effectively improves the robustness and usability of the extraction rules.

19.
Wrapper-model-based text information extraction
Building on an analysis of landmark-based and text-pattern-based algorithms, a new wrapper induction learning algorithm is proposed. The new algorithm combines the advantages of both: it uses a page's landmark information to locate information, and uses text-pattern information to extract it and to filter the extraction results as necessary. Experimental results show that the new algorithm achieves high extraction precision and strong expressive power.

20.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号