Similar Articles
20 similar articles found (search time: 31 ms)
1.
Improved weight computation for latent semantic analysis   Cited: 12 (self-citations: 0, other citations: 12)
Since its introduction, latent semantic analysis (LSA) has been widely applied in information retrieval, text classification, automatic question answering, and other fields. A key step in LSA is the weighted transformation of the term-document matrix, and the weighting function directly determines the quality of the LSA results. This paper first surveys the traditional, well-established weighting methods, covering both local weights and global term weights, then points out their shortcomings and extends the weighting scheme by introducing the concept of a global document weight. In the final experiments, a new way of evaluating LSA results, the document self-retrieval matrix, is proposed; the results show that the improved weighting method increases retrieval effectiveness.

2.
In the vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document can be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. Term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for the text categorization task. From the controlled experimental results, these supervised term weighting methods show mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, performs consistently better than the other term weighting methods, while other supervised term weighting methods based on information theory or statistical metrics perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown uniformly good performance across different data sets.
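The abstract names tf.rf but not its formula; the commonly cited form of relevance frequency is rf = log2(2 + a / max(1, c)), where a and c count positive- and negative-class training documents containing the term. A minimal sketch assuming that form (all variable names are illustrative):

```python
import math

def tf_rf_vectors(docs, labels, pos_label):
    """tf.rf weighting: weight(t, d) = tf(t, d) * log2(2 + a / max(1, c)).

    a = number of positive-class docs containing t,
    c = number of negative-class docs containing t.
    Terms concentrated in the positive class get a larger rf boost.
    """
    vocab = sorted({t for d in docs for t in d})
    a = dict.fromkeys(vocab, 0)  # positive-class doc frequency
    c = dict.fromkeys(vocab, 0)  # negative-class doc frequency
    for d, y in zip(docs, labels):
        for t in set(d):
            (a if y == pos_label else c)[t] += 1
    rf = {t: math.log2(2 + a[t] / max(1, c[t])) for t in vocab}
    return vocab, [[d.count(t) * rf[t] for t in vocab] for d in docs]
```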

3.
Recovering traceability links between code and documentation   Cited: 2 (self-citations: 0, other citations: 2)
Software system documentation is almost always expressed informally in natural language and free text. Examples include requirement specifications, design documents, manual pages, system development journals, error logs, and related maintenance reports. We propose a method based on information retrieval to recover traceability links between source code and free text documents. A premise of our work is that programmers use meaningful names for program items, such as functions, variables, types, classes, and methods. We believe that the application-domain knowledge that programmers possess when writing the code is often captured by the mnemonics for identifiers; therefore, the analysis of these mnemonics can help to associate high-level concepts with program concepts and vice-versa. We apply both a probabilistic and a vector space information retrieval model in two case studies to trace C++ source code onto manual pages and Java code to functional requirements. We compare the results of applying the two models, discuss the benefits and limitations, and describe directions for improvements.
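The retrieval step can be illustrated with a small vector space model sketch: identifiers are split into their mnemonic words and compared against documentation text by cosine similarity. The tokenization rules and example strings here are illustrative, not the paper's setup:

```python
import math
import re
from collections import Counter

def tokens(text):
    """Split text and camelCase/snake_case identifiers into lowercase words."""
    words = []
    for part in re.findall(r"[A-Za-z]+", text):
        # e.g. "getUserName" -> ["get", "User", "Name"]
        words += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", part)
    return [w.lower() for w in words]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```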

4.
A word segmentation algorithm based on extracted contextual information   Cited: 8 (self-citations: 0, other citations: 8)
Chinese word segmentation is a special and important part of Chinese text processing. Traditional dictionary-based segmentation algorithms have a serious drawback: they cannot handle out-of-vocabulary words well. Purely probabilistic algorithms consider only the probability model of the training corpus and perform unsatisfactorily on texts from other domains. This paper proposes a probabilistic segmentation algorithm that extracts contextual information from the text being segmented and incorporates it into the segmentation probability model to guide the segmentation. Combined with the classical n-gram model and the EM algorithm, the method achieves good results in both closed and open tests.

5.
The minimum cost flow problem (MCFP) is the most generic variation of the network flow problem which aims to transfer a commodity throughout the network to satisfy demands. The problem size (in terms of the number of nodes and arcs) and the shape of the cost function are the most critical factors when considering MCFPs. Existing mathematical programming techniques often assume the cost functions to be linear or convex. Unfortunately, the linearity and convexity assumptions are too restrictive for modelling many real-world scenarios. In addition, many real-world MCFPs are large-scale, with networks having a large number of nodes and arcs. In this paper, we propose a probabilistic tree-based genetic algorithm (PTbGA) for solving large-scale minimum cost integer flow problems with nonlinear non-convex cost functions. We first compare this probabilistic tree-based representation scheme with the priority-based representation scheme, which is the most commonly-used representation for solving MCFPs. We then compare the performance of PTbGA with that of the priority-based genetic algorithm (PrGA), and two state-of-the-art mathematical solvers on a set of MCFP instances. Our experimental results demonstrate the superiority and efficiency of PTbGA in dealing with large-sized MCFPs, as compared to the PrGA method and the mathematical solvers.

6.
Automatic text classification is indispensable for managing and analyzing massive data. Feature weighting is fundamental to the classification process, and a good feature weighting algorithm can markedly improve classification performance. This paper compares several different feature weighting algorithms and, to address the shortcomings of earlier ones, proposes a feature weighting algorithm based on document class density (tf-idcd). Besides the traditional term-frequency component, the algorithm introduces a new concept, document class density, measured as the ratio of the number of documents in a class that contain the feature to the total number of documents in that class. Finally, experiments comparing five algorithms on two common Chinese data sets show that the proposed algorithm achieves substantial improvements in both macro-averaged and micro-averaged F1 over the other feature weighting algorithms.
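The document class density factor described above can be sketched as follows. Only the density computation is shown; how it is combined with term frequency into the full tf-idcd weight follows the paper:

```python
from collections import Counter, defaultdict

def class_density(docs, labels):
    """Document class density for each (term, class) pair:

    dcd(t, c) = (# docs of class c containing t) / (# docs of class c)
    """
    n_class = Counter(labels)              # docs per class
    df = defaultdict(int)                  # (term, class) -> doc count
    for d, y in zip(docs, labels):
        for t in set(d):
            df[(t, y)] += 1
    return {tc: cnt / n_class[tc[1]] for tc, cnt in df.items()}
```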

7.
Automatic extraction of Chinese text topics at three levels   Cited: 8 (self-citations: 0, other citations: 8)
To meet the needs of the Internet era and large-scale document processing, this work takes Chinese text as its object and studies methods for automatically extracting text topics at three levels: topic words, topic concepts, and topic sentences, with emphasis on the weighting scheme and on how several empirical values are obtained. Experiments were conducted on news documents, and a brief performance analysis is given.

8.
The efficiency of using the chi-square metric to weigh terms used in text documents is evaluated. The procedure includes the selection and advanced processing of class C and ~C texts, compilation of a reference dictionary and calculation of scores for all the terms in the dictionary, calculation of χ2 coefficients for terms from a class C text, and calculation of the general efficiency factor by the sum of the coefficients found for the terms from the reference dictionary. Weighting by the χ2 formula, by the odds-ratio (OR) formula, and on the basis of probabilistic variables is analyzed and compared. It was found that the best result is yielded by the OR-based weighting.
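Both weightings in the comparison can be computed from a term/class 2x2 contingency table. A minimal sketch; the +1 smoothing in the odds-ratio is an assumption added here to avoid division by zero:

```python
import math

def chi_square(n11, n10, n01, n00):
    """chi^2 statistic for a term/class 2x2 contingency table:

    n11 = class-C docs containing t,  n10 = class-C docs without t,
    n01 = non-C docs containing t,    n00 = non-C docs without t.
    """
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

def odds_ratio(n11, n10, n01, n00):
    """Log odds-ratio weighting (reported best in the abstract);
    +1 smoothing is an assumption to keep the ratio finite."""
    return math.log(((n11 + 1) * (n00 + 1)) / ((n10 + 1) * (n01 + 1)))
```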

9.
Term weighting is a strategy that assigns weights to terms to improve the performance of sentiment analysis and other text mining tasks. In this paper, we propose a supervised term weighting scheme based on two basic factors: importance of a term in a document (ITD) and importance of a term for expressing sentiment (ITS). For ITD, we explore three definitions based on term frequency. Then, seven statistical functions are employed to learn the ITS of each term from training documents with category labels. Compared with previous unsupervised term weighting schemes originating from information retrieval, our scheme can make full use of the available labeling information to assign appropriate weights to terms. We have experimentally evaluated the proposed method against the state-of-the-art method. The experimental results show that our method outperforms it and produces the best accuracy on two of the three data sets.

10.
Based on a study of general-purpose GPU computing and text classification, this paper proposes a GPU-based method for text feature selection and weighting. It first reviews the feature selection and feature weighting methods commonly used in text classification, and then implements the DF (document frequency) and TFIDF methods among them on the GPU. Experimental results show that the proposed parallel feature selection and weighting method effectively speeds up the feature selection and weighting process.

11.
Traditional text classification methods mostly use a single classifier, but different classifiers emphasize different aspects of the classification task, so any single method has limitations; likewise, each feature extraction method views feature terms from a different angle. To address these problems, a text classification method that fuses multiple types of classifiers is proposed. The model uses word2vec, principal component analysis, latent semantic indexing, and TFIDF as the feature extraction methods for the fused classifiers. Because the weighted-voting scheme for fusing multiple classifiers ignores category information, a category-weighted method for computing classifier weights is also proposed. Experimental results show that the multi-classifier fusion method performs well on binary, multi-class, and domain-specific corpora, and that the category-weighted classifier weighting improves classification performance by 1.19% over the plain multi-classifier fusion method.

12.
For Chinese text, traditional n-gram feature selection and weighting algorithms (such as the sliding-window method) have two shortcomings: a word segmentation interface must be invoked on every document before words can be combined into n-gram features, and redundant words inside the n-grams cannot be removed, so redundant n-gram features interfere with useful ones and lower classification accuracy. To solve these problems, drawing on research into the recognition of single- and double-character Chinese words, the text is converted into a character matrix; n-gram features are then obtained by redundancy filtering and intersection operations on the matrix elements, which avoids redundant words in the n-gram features and requires no segmentation interface at all. Experiments on the Sogou Chinese news corpus and the NetEase text corpus show that, compared with the sliding-window method and other n-gram feature selection and weighting algorithms, the character-matrix intersection algorithm produces n-gram features in less time and yields better classification results with a support vector machine (SVM).
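A much-simplified stand-in for the segmentation-free idea: character n-grams are taken directly from unsegmented text, and grams containing redundant characters are filtered out. The stop-character set and the filtering rule are illustrative assumptions; the paper's character-matrix intersection is more involved:

```python
def char_ngrams(text, n=2, redundant=frozenset("的了和")):
    """Character n-grams extracted with no word segmentation at all;
    grams containing any assumed-redundant character are dropped
    (a simplified stand-in for the paper's matrix-based filtering)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)
            if not (set(text[i:i + n]) & redundant)]
```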

13.
In the rapidly evolving field of genomics, many clustering and classification methods have been developed and employed to explore patterns in gene expression data. Biologists face the choice of which clustering algorithm(s) to use and how to interpret different results from various clustering algorithms. No clear objective criteria have been developed to assess the agreement and compare the results from different clustering methods. We describe two generally applicable objective measures to quantify agreement between different clustering methods. These two measures are referred to as the local agreement measure, which is defined for each gene/subject, and the global agreement measure, which is defined for the whole gene expression experiment. The agreement measures are based on a probabilistic weighting scheme applied to the number of concordant and discordant pairs from two clustering methods. In the comparison and assessment process, newly-developed concepts are implemented under the framework of reliability of a cluster. The algorithms are illustrated by simulations and then applied to a yeast sporulation gene expression microarray data. Analysis of the sporulation data identified ∼5% (23 of 477) genes which were not consistently clustered using a neural net algorithm and K-means or pam. The two agreement measures provide objective criteria to conclude whether or not two clustering methods agree with each other. Using the local agreement measure, genes of unknown function which cluster consistently can more confidently be assigned functions based on co-regulation.
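The pair-based agreement idea can be sketched as a Rand-index-style count of concordant item pairs between two clusterings; the paper's probabilistic weighting of pairs is omitted here:

```python
from itertools import combinations

def pair_agreement(c1, c2):
    """Global agreement between two clusterings of the same items.

    A pair (i, j) is concordant when both clusterings agree on whether
    i and j belong together; the score is the concordant fraction.
    """
    pairs = list(combinations(range(len(c1)), 2))
    concordant = sum(
        1 for i, j in pairs
        if (c1[i] == c1[j]) == (c2[i] == c2[j])
    )
    return concordant / len(pairs)
```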

14.
焦庆争, 蔚承建. 《计算机应用》(Journal of Computer Applications), 2009, 29(12): 3303-3306
For text classification, an efficient linear text classifier requiring no feature selection is designed, based on adjusting feature-probability standard deviations with feature-distribution evaluation weights. The basic idea is to use the standard deviation of a feature's probability to quantify its dispersion within a document class and take it as the feature's base weight; then, starting from the Beta distribution of the posterior probability, a probability-certainty density function is used to evaluate the feature's distribution across classes, yielding a distribution weight that adjusts the base weight into the final feature weight, realizing a linear text classifier. Comparative experiments on three corpora (20 Newsgroups, the Fudan Chinese classification corpus, and Reuters-21578) show that the new algorithm significantly outperforms traditional algorithms in classification performance, and is stable, efficient, and practical, making it suitable for large-scale text classification tasks.

15.
The quality of the texts in the training set directly determines text classification results. In practice, building a training set inevitably introduces noisy samples, which degrade the real-world performance of text classification methods. To address noise in text classification, this paper proposes a noise-handling method based on probabilistic topic models: first, the class entropy of each training sample is computed and used to filter out noisy samples; then a topic model is used for data smoothing to further weaken the influence of the remaining noise. The method not only reduces the impact of noisy samples on classification results but also preserves the original size of the training set. Experiments on real data show that the method is robust to the distribution of noisy samples and maintains good classification results even when the noise ratio is large.

16.
Text representation is an indispensable step when applying classification algorithms to text, and the choice of representation is crucial to the final classification accuracy. To address shortcomings of the classic feature weighting method TFIDF (term frequency, inverse document frequency), a feature weighting algorithm based on information entropy, ETFIDF (entropy-based TFIDF), is proposed. ETFIDF considers not only a feature term's frequency in the document and its concentration in the training set, but also its dispersion across the classes. Experimental results show that ETFIDF weights effectively improve text classification performance. The relationship between ETFIDF and feature selection is also analyzed theoretically and studied experimentally in some detail; the results indicate that taking feature-class relationships into account during text representation represents texts more accurately, and that, balancing accuracy against efficiency, combining ETFIDF with a feature selection algorithm yields better classification results.
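The dispersion factor that ETFIDF adds can be illustrated by the entropy of each term's document frequency across classes. Only this entropy part is shown; the exact combination with TFIDF follows the paper:

```python
import math
from collections import Counter, defaultdict

def class_entropy(docs, labels):
    """Entropy of each term's document-frequency distribution over classes.

    High entropy: the term is dispersed evenly across classes (weakly
    discriminative); zero entropy: the term occurs in only one class.
    """
    df = defaultdict(Counter)  # term -> {class: doc count}
    for d, y in zip(docs, labels):
        for t in set(d):
            df[t][y] += 1
    ent = {}
    for t, by_class in df.items():
        total = sum(by_class.values())
        ent[t] = -sum((c / total) * math.log2(c / total)
                      for c in by_class.values())
    return ent
```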

17.
Traditional text similarity computation is mostly based on word matching; it ignores lexical semantic information, so the result largely depends on the rate of word overlap between the texts. Distributed word vectors can express semantic relations between words effectively, but most current word-vector-based text processing methods represent a text by concatenating word vectors and so cannot reflect how words are distributed in the corpus. To address these problems, this paper proposes a new computation method which assumes that the elements of a statistics-based text vector are correlated, and that this correlation can be expressed by word-level semantic similarity. Accordingly, word similarity is used to improve the cosine-based text similarity computation. Experiments show that the method outperforms other methods on the F1 and accuracy measures.
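Treating text-vector elements as correlated through word similarity leads to a soft-cosine-style computation. A minimal sketch with an explicit word-similarity matrix S (assumed given, e.g. derived from word vectors); the paper's exact formulation may differ:

```python
import math

def soft_cosine(u, v, S):
    """Soft cosine between count vectors u and v over the same vocabulary.

    S[i][j] is the similarity between vocabulary words i and j
    (S[i][i] = 1).  sim = uSv / sqrt(uSu * vSv); with S = identity
    this reduces to the ordinary cosine similarity.
    """
    def quad(a, b):
        return sum(a[i] * S[i][j] * b[j]
                   for i in range(len(a)) for j in range(len(b)))
    denom = math.sqrt(quad(u, u) * quad(v, v))
    return quad(u, v) / denom if denom else 0.0
```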

18.
This paper proposes a clustering-based method for extracting keywords from microblogs. The experiment proceeds in three steps. First, the microblog texts are preprocessed and segmented, and word weights are computed with the TF-IDF and TextRank algorithms, using a weighted combination adapted to the short-text nature of microblogs; candidate keywords are then extracted from the weighted words with a clustering algorithm. Second, following n-gram language-model theory with n = 2, maximum left-adjacency and right-adjacency probabilities are defined and used to expand the candidate keywords. Third, using the notions of adjacency-variation count and semantic-unit count from a semantic expansion model, the expanded keywords are filtered to produce the final extraction results. Experimental results show that TextRank outperforms TF-IDF on short texts, and that the method effectively extracts keywords from microblogs.

19.
The proliferation of forums and blogs leads to challenges and opportunities for processing large amounts of information. The information shared on various topics often contains opinionated words which are qualitative in nature. These qualitative words need statistical computations to convert them into useful quantitative data. This data should be processed properly since it expresses opinions. Each of these opinion-bearing words differs based on the significant meaning it conveys. To process the linguistic meaning of words into data and to enhance opinion mining analysis, we propose a novel weighting scheme, referred to as inferred word weighting (IWW). IWW is computed based on the significance of the word in the document (SWD) and the significance of the word in the expression (SWE). The proposed weighting methods give an analytic view and provide appropriate weights to the words compared to existing methods. In addition to the new weighting methods, another type of checking is done on the performance of text classification by including stop-words. Generally, stop-words are removed in text processing. When this new concept of including stop-words is applied to the proposed and existing weighting methods, two facts are observed: (1) classification performance is enhanced; (2) the outcome difference between inclusion and exclusion of stop-words is smaller in the proposed methods, and larger in existing methods. The inferences provided by these observations are discussed. Experimental results on the benchmark data sets show the potential enhancement in terms of classification accuracy.

20.
To address the difficulty of tracking multiple targets in complex environments with single-sensor joint probabilistic data association (JPDA), this paper proposes parallel and sequential multi-sensor data fusion methods weighted by the JPDA measurement-target association probabilities. First, the single-sensor JPDA algorithm is given. Then, a multi-sensor JPDA model is introduced, and on this basis the parallel and sequential multi-sensor fusion formulas are derived using association-probability weighting, which offers some guidance for multi-sensor data fusion. Finally, simulations of single-sensor JPDA target tracking under different clutter densities, process noise, and observation noise show that the target range RMSE grows as all three increase; further simulations comparing the two proposed multi-sensor JPDA methods with several other tracking methods on the PETS2009 pedestrian-tracking data set show that the parallel and sequential multi-sensor JPDA methods are best in tracking accuracy, position accuracy, track maintenance, and track loss, with sequential fusion slightly better than parallel.
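The association-probability-weighted parallel fusion step can be sketched as a normalized weighted average of per-sensor state estimates. This is a simplification for illustration; the full JPDA recursion with gating, measurement validation, and track update is not shown:

```python
def fuse_parallel(estimates, probs):
    """Parallel fusion of per-sensor state estimates.

    estimates: list of state vectors (one per sensor, same length);
    probs: the corresponding association probabilities, used as
    normalized fusion weights.
    """
    total = sum(probs)
    return [sum(p * e[k] for p, e in zip(probs, estimates)) / total
            for k in range(len(estimates[0]))]
```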

