Similar Articles
 20 similar articles found (search time: 15 ms)
1.
An Improved Feature Selection Algorithm Based on Conditional Mutual Information   (Total citations: 10; self-citations: 0; by others: 10)
Most feature selection algorithms commonly used in text classification consider only the relevance between each feature and the class, while paying insufficient attention to the relevance among the features themselves. This weakens the features' joint predictive power and prevents the most effective features from being selected. This paper proposes a new feature selection algorithm for text classification, CMIM, which selects features that are highly discriminative yet weakly correlated with one another. Experiments show that CMIM outperforms traditional feature selection algorithms.
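The criterion described above matches the classic conditional-mutual-information-maximization (CMIM) scheme: greedily pick the feature whose worst-case conditional mutual information with the class, given each already-selected feature, is largest. Below is a minimal pure-Python sketch on discrete toy data; the paper's variant may differ in detail, and all helper names and the toy dataset are our own:

```python
from collections import Counter
from math import log2

def mi(xs, ys):
    """I(X;Y) in bits for two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cmi(xs, ys, zs):
    """I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z)."""
    n = len(zs)
    total = 0.0
    for z, cz in Counter(zs).items():
        idx = [i for i in range(n) if zs[i] == z]
        total += cz / n * mi([xs[i] for i in idx], [ys[i] for i in idx])
    return total

def cmim(features, labels, k):
    """Greedy CMIM: select the feature whose worst-case conditional MI
    with the label, given each already-selected feature, is largest."""
    selected, remaining = [], list(features)
    for _ in range(k):
        def score(f):
            if not selected:
                return mi(features[f], labels)
            return min(cmi(features[f], labels, features[s]) for s in selected)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: y = f1 AND f3; f2 duplicates f1 and is therefore redundant.
feats = {"f1": [0, 0, 1, 1, 0, 1, 0, 1],
         "f2": [0, 0, 1, 1, 0, 1, 0, 1],
         "f3": [0, 1, 0, 1, 1, 0, 0, 1]}
y = [0, 0, 0, 1, 0, 0, 0, 1]
```

On this toy set, `f2` is an exact copy of `f1`, so once `f1` is selected its conditional score collapses to zero and the complementary `f3` is chosen instead — precisely the redundancy-avoiding behavior the abstract describes.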

2.
This article proposes a novel approach to text categorization based on a regularization extreme learning machine (RELM), whose weights can be obtained analytically; a bias-variance trade-off is achieved by adding a regularization term to the linear system of a single-hidden-layer feedforward neural network. To fit the input scale of the RELM, latent semantic analysis is used to represent texts in a reduced-dimensional space. Moreover, a classification algorithm based on the RELM is developed, covering both the uni-label situation (a document can be assigned to only one category) and the multi-label situation (a document can be assigned to multiple categories simultaneously). Experimental results on two benchmarks show that the proposed method performs well in most cases and learns faster than popular methods such as feedforward neural networks and support vector machines.
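The analytic weight solution the abstract refers to is a ridge-regularized least-squares fit of the output layer. A minimal NumPy sketch under our own assumptions (toy Gaussian data, tanh hidden units, λ = 0.1; the paper's LSA preprocessing is omitted): the random input weights are never trained, and only the output weights β are solved from the regularized linear system (HᵀH + λI)β = HᵀT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: class 0 near (0, 0), class 1 near (2, 2).
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
T = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])

n_hidden, lam = 40, 0.1                 # hidden nodes, regularization strength
W = rng.normal(size=(2, n_hidden))      # random input weights (never trained)
b = rng.normal(size=n_hidden)           # random biases

H = np.tanh(X @ W + b)                  # hidden-layer output matrix
# Regularized least squares: (H^T H + lam*I) beta = H^T T
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

pred = np.argmax(H @ beta, axis=1)      # class = argmax of output scores
accuracy = float((pred == np.array([0] * 50 + [1] * 50)).mean())
```

Because only `beta` is computed, training reduces to a single linear solve, which is why the abstract reports faster learning than iteratively trained networks.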

3.
Feature selection is an important preprocessing step for dealing with high-dimensional data. In this paper, we propose a novel unsupervised feature selection method that embeds a subspace-learning regularization, namely principal component analysis (PCA), into a sparse feature selection framework. Specifically, we select informative features via the sparse learning framework while simultaneously preserving the principal components (i.e., the maximal variance) of the data, thereby improving the interpretability of the feature selection model. Furthermore, we propose an effective optimization algorithm for the resulting objective function that achieves a stable optimal result with fast convergence. Compared with five state-of-the-art unsupervised feature selection methods on six benchmark and real-world datasets, our method achieves the best classification performance.

4.
Weibo user gender classification aims to identify a user's gender from user information. Existing research on gender classification has mostly relied on a single type of feature (either textual or social). Unlike prior work, this paper proposes a dual-channel LSTM (Long Short-Term Memory) model that fully combines textual features (the microblogs a user posts) with social features (information about the user's followers) for gender classification. First, a single-channel LSTM learns a representation for each of the two feature groups; then a merge layer is added to the network to combine the two representations for ensemble learning, so that the interactions between textual and social features are fully captured. Experimental results show that the dual-channel LSTM model outperforms traditional classification algorithms on user gender classification.

5.
Fang Ding, Wang Gang. Computer Systems & Applications, 2012, 21(7): 177-181, 248
With the rapid growth of Web 2.0, more and more users are willing to share their opinions and experiences on the Internet. The volume of such review information is expanding so quickly that manual methods can no longer cope with collecting and processing it, which has given rise to computer-based text sentiment classification, where improving classification accuracy is a central research goal. Since ensemble learning is an effective way to improve accuracy and has outperformed single classifiers in many domains, this paper proposes an ensemble-learning approach to text sentiment classification. Experimental results show that three common ensemble methods, Bagging, Boosting, and Random Subspace, all improve the accuracy of the base classifiers, and that across different base classifiers Random Subspace is statistically superior to Bagging and Boosting. These results further confirm the effectiveness of ensemble learning for text sentiment classification.
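Of the three ensemble schemes the abstract compares, Random Subspace is the easiest to sketch: each base learner is trained on a random subset of the features, and the ensemble classifies by majority vote. A minimal pure-Python version with a nearest-centroid base learner standing in for the paper's base classifiers (the toy "term-count" data and all names are ours):

```python
import random

random.seed(1)

def centroid_fit(X, y):
    """Base learner: per-class mean vector; predict the nearest centroid."""
    cents = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def centroid_predict(cents, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(cents, key=lambda c: dist(cents[c], x))

def random_subspace_fit(X, y, n_learners=15, sub_dim=2):
    """Random Subspace: each base learner sees a random subset of features."""
    models, d = [], len(X[0])
    for _ in range(n_learners):
        feats = random.sample(range(d), sub_dim)
        Xs = [[x[i] for i in feats] for x in X]
        models.append((feats, centroid_fit(Xs, y)))
    return models

def vote_predict(models, x):
    """Majority vote across the ensemble."""
    votes = [centroid_predict(c, [x[i] for i in feats]) for feats, c in models]
    return max(set(votes), key=votes.count)

# Toy term-count vectors: dims 0-1 are "positive" words, dims 2-3 "negative".
X = [[3, 2, 0, 0], [2, 3, 1, 0], [3, 3, 0, 1],
     [0, 1, 3, 2], [1, 0, 2, 3], [0, 0, 3, 3]]
y = ["pos", "pos", "pos", "neg", "neg", "neg"]
models = random_subspace_fit(X, y)
```

Bagging follows the same vote_predict pattern, except each base learner is trained on a bootstrap resample of the documents rather than a feature subset.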

6.
Sentiment Polarity Classification of Short Texts Based on Context Reconstruction   (Total citations: 2; self-citations: 1; by others: 2)
Given the inherent polysemy of text, together with the sparse features and missing context of short texts, existing methods cannot resolve the intended meaning, leaving a wide semantic gap between low-level features and high-level expression. This paper attempts to mine the latent relationships among texts through factors such as time, location, and connections, reconstructing the contextual scope of a text to improve sentiment polarity classification. Concretely, this is a two-stage process: 1) short texts are first regrouped into contexts (domains) based on their intrinsic relationships; 2) each short text to be processed is assigned to a suitable context (domain) for further processing. The paper first presents a basic framework for short-text sentiment polarity classification based on a Naive Bayes classifier, revealing how differences in context (domain) scope affect classification performance. It then discusses an enhanced sentiment polarity classification method based on domain assignment, extends the notion of domain to contextual relations, and proposes a sentiment polarity classification method based on specific contextual relations. To address the difficulty of context reconstruction caused by missing information, a genetic-algorithm-based scheme for arbitrary context reconstruction is given. Theoretical analysis shows that, under certain constraints, the context-reconstruction-based method can reduce both the sample error and the approximation error. Experimental results on real-world datasets confirm these theoretical conclusions.
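The Naive Bayes classifier used in the framework's first stage is standard; a minimal multinomial Naive Bayes with Laplace smoothing (the toy vocabulary and helper names are ours, not the paper's):

```python
from collections import Counter, defaultdict
from math import log

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""
    vocab = {w for d in docs for w in d}
    prior = Counter(labels)
    counts = defaultdict(Counter)
    for d, c in zip(docs, labels):
        counts[c].update(d)
    return vocab, prior, counts, len(labels)

def classify_nb(model, doc):
    vocab, prior, counts, n = model
    def logp(c):
        total = sum(counts[c].values())
        lp = log(prior[c] / n)                       # log prior
        for w in doc:
            if w in vocab:                           # ignore unseen words
                lp += log((counts[c][w] + 1) / (total + len(vocab)))
        return lp
    return max(prior, key=logp)

docs = [["good", "great", "fun"], ["great", "happy"],
        ["bad", "boring"], ["awful", "bad", "sad"]]
labels = ["pos", "pos", "neg", "neg"]
model = train_nb(docs, labels)
```

In the paper's setting, each reconstructed context (domain) would get its own such model, so that polarity is judged against the word statistics of the most relevant domain.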

7.
A conceptual model is proposed for a system whose function is to solve the problem of automatic classification of text documents in a natural language, i.e., to determine whether a new text document belongs to a predefined class. The functional requirements of the future system are given. Various representations of natural language texts, as well as statistical and logical-combinatorial methods of text analysis, are discussed. This work may be of interest to specialists in natural-language processing, data mining, and computational linguistics.

8.
Automatic classification of text documents, one of the essential techniques for Web mining, has always been a hot topic due to the explosive growth of digital documents available online. In the text classification community, k-nearest neighbor (kNN) is a simple yet effective classifier. However, as a lazy learning method that builds no model in advance, kNN incurs a high cost to classify new documents when the training set is large. The Rocchio algorithm is another well-known and widely used technique for text classification. One drawback of the Rocchio classifier is that it restricts the hypothesis space to linearly separable hyperplane regions; when the data do not fit this underlying assumption well, the Rocchio classifier suffers. In this paper, a hybrid algorithm based on the variable precision rough set is proposed to combine the strengths of both the kNN and Rocchio techniques and overcome their weaknesses. An experimental evaluation of different methods is carried out on two common text corpora, the Reuters-21578 collection and the 20 Newsgroups collection. The experimental results indicate that the novel algorithm achieves significant performance improvement.
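The two base techniques are simple to sketch: a Rocchio classifier scores a document against per-class prototype vectors, and kNN votes among the most similar training documents. The paper combines them via variable precision rough sets; the sketch below substitutes a much cruder rule — use Rocchio when its top two scores are clearly separated, otherwise fall back to kNN — purely to illustrate the division of labor (all data and the `margin` heuristic are our own):

```python
from collections import Counter
from math import sqrt

def cos(a, b):
    num = sum(a[w] * b.get(w, 0) for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

def rocchio_fit(docs, labels, alpha=16, beta=4):
    """Rocchio prototype per class: alpha * own centroid - beta * rest centroid."""
    protos = {}
    for c in set(labels):
        own = [d for d, l in zip(docs, labels) if l == c]
        rest = [d for d, l in zip(docs, labels) if l != c]
        proto = Counter()
        for d in own:
            for w, v in d.items():
                proto[w] += alpha * v / len(own)
        for d in rest:
            for w, v in d.items():
                proto[w] -= beta * v / len(rest)
        protos[c] = proto
    return protos

def knn_predict(docs, labels, q, k=3):
    near = sorted(zip(docs, labels), key=lambda p: -cos(p[0], q))[:k]
    return Counter(l for _, l in near).most_common(1)[0][0]

def hybrid_predict(protos, docs, labels, q, margin=0.1):
    """Use Rocchio when its decision is clear; fall back to kNN otherwise."""
    scored = sorted(((cos(protos[c], q), c) for c in protos), reverse=True)
    if len(scored) > 1 and scored[0][0] - scored[1][0] < margin:
        return knn_predict(docs, labels, q)
    return scored[0][1]

docs = [Counter(good=2, great=1), Counter(great=2, nice=1),
        Counter(bad=2, poor=1), Counter(poor=2, awful=1)]
labels = ["pos", "pos", "neg", "neg"]
protos = rocchio_fit(docs, labels)
```

The cheap linear Rocchio decision handles easy regions, reserving the expensive kNN search for documents near the boundary — the efficiency/accuracy trade the abstract motivates.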

9.
In this paper, we study the problem of learning from multi-modal data for document classification, where each document consists of two modalities: an image and a text. We propose to represent the two modalities by projecting them into a shared data space using cross-modal factor analysis, and to classify them in that shared space with a linear class-label predictor, the cross-modal classifier. The parameters of the cross-modal classifier and the cross-modal factor analysis are learned jointly, so that each regularizes the learning of the other. We construct a unified objective function for this learning problem that minimizes both the distance between the projections of a document's image and text and the classification error of those projections, measured by the hinge loss. The objective is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two multi-modal document datasets show the advantage of the proposed algorithm over state-of-the-art multimedia data classification methods.

10.
Network representation learning (NRL) aims to embed networks into low-dimensional continuous vector spaces. Most existing methods learn representations purely from the network topology, i.e., the linkage relationships between nodes, but in many networks the nodes also carry rich text features that benefit network analysis tasks such as node classification and link prediction. In this paper, we propose a novel network representation learning model, Text-Enhanced Network Representation learning (TENR), which introduces the text features of nodes to learn more discriminative representations; these come from joint learning of the network topology and the text features, and capture the influencing factors common to both. In the experiments, we evaluate the proposed method against baseline methods on node classification. The results demonstrate that our method outperforms the baselines on three real-world datasets.

11.
Fine-grained image classification typically attends only to local visual information, but in some problems the text appearing in local image regions directly aids classification, so extracting the semantics of that text can further improve fine-grained results. We consider both the visual information of the image and the text in its local regions, and propose an end-to-end model for fine-grained image classification. On one hand, a deep convolutional neural network extracts visual features; on the other, a proposed end-to-end text recognition network extracts the image's textual information. A correlation module then merges the visual and textual features, which are fed to a classification network. The method is evaluated on the public Con-Text dataset for fine-grained classification, and the end-to-end text recognition network is additionally validated on the SVT dataset; both achieve better results than previous methods.

12.
Matrices, or more generally multi-way arrays (tensors), are common forms of data encountered in a wide range of real applications. How to classify such data is an important research topic for both pattern recognition and machine learning. In this paper, by analyzing the relationship between two well-known traditional classification approaches, SVM and STM, a novel tensor-based method, multiple-rank multi-linear SVM (MRMLSVM), is proposed. Different from traditional vector-based and tensor-based methods, multiple-rank left and right projecting vectors are employed to construct the decision boundary and establish the margin function. We reveal that the rank of the transformation can, in essence, be regarded as a trade-off parameter balancing learning capacity and generalization. We also propose an effective approach to solve the resulting non-convex optimization problem. The convergence behavior, initialization, computational complexity, and parameter determination problems are analyzed. Compared with vector-based classification methods, MRMLSVM achieves higher accuracy and has lower computational complexity. Compared with traditional supervised tensor-based methods, MRMLSVM performs better for matrix data classification. Promising experimental results on various kinds of datasets show the effectiveness of our method.

13.
The revolution of the Internet, together with progress in computer technology, makes it easy for institutions to collect an unprecedented amount of personal data. This pervasive data collection, coupled with the increasing need to disseminate and share non-aggregated data, i.e., microdata, has raised many concerns about privacy. One way to ensure privacy is to selectively hide confidential, i.e., sensitive, information before disclosure. However, with data mining techniques, it is now possible for an adversary to predict hidden confidential information from the disclosed datasets. In this paper, we concentrate on one such data mining technique, namely classification. We extend our previous work on microdata suppression to prevent inference based on both probabilistic and decision tree classification. We also provide experimental results on real-life datasets showing the effectiveness of both the proposed methods and the hybrid methods, i.e., methods that suppress microdata against both classification models.

14.
Boosting text segmentation via progressive classification   (Total citations: 5; self-citations: 4; by others: 1)
A novel approach for reconciling tuples stored as free text into an existing attribute schema is proposed. The basic idea is to subject the available text to progressive classification, i.e., a multi-stage classification scheme where, at each intermediate stage, a classifier is learnt that analyzes the textual fragments not reconciled at the end of the previous steps. Classification is accomplished by an ad hoc exploitation of traditional association mining algorithms, and is supported by a data transformation scheme which takes advantage of domain-specific dictionaries/ontologies. A key feature is the capability of progressively enriching the available ontology with the results of the previous stages of classification, thus significantly improving the overall classification accuracy. An extensive experimental evaluation shows the effectiveness of our approach.

15.
We describe approaches for positive data modeling and classification using both finite inverted Dirichlet mixture models and support vector machines (SVMs). The inverted Dirichlet mixture models are used to tackle an outstanding challenge in SVMs, namely the generation of accurate kernels. The kernel-generation approaches we consider, grounded in ideas from information theory, allow the incorporation of the data's structure and its structural constraints. The inverted Dirichlet mixture models are learned within a principled Bayesian framework, using both a Gibbs sampler and Metropolis-Hastings for parameter estimation and the Bayes factor for model selection (i.e., determining the number of mixture components). Our Bayesian learning approach derives priors over the model parameters by showing that the inverted Dirichlet distribution belongs to the family of exponential distributions, and then combines these priors with information from the data to build posterior distributions. We illustrate the merits and effectiveness of the proposed method on two challenging real-world applications: object detection, and visual scene analysis and classification.

16.
Image classification is of great importance for digital photograph management. In this paper, we propose a general statistical learning method based on a boosting algorithm to perform image classification for photograph annotation and management. The proposed method employs both features extracted from image content (i.e., color moment and edge direction histogram) and features from the EXIF metadata recorded by digital cameras. To fully utilize potential feature correlations and improve classification accuracy, feature combination is needed. We incorporate the linear discriminant analysis (LDA) algorithm to implement linear combinations of selected features and generate new combined features. The combined features are used along with the original features in the boosting algorithm to improve classification performance. To make the proposed learning algorithm more efficient, we present two heuristics for selective feature combination, which can significantly reduce training computation without losing performance. The proposed image classification method has several advantages: small model size, computational efficiency, and improved classification performance based on LDA feature combination.

17.
A novel training method is proposed for increasing the efficiency and generalization of the support vector machine (SVM). The efficiency of an SVM in classification is directly determined by the number of support vectors used, which is often huge in complicated classification problems in order to represent a highly convoluted separation hypersurface for better nonlinear classification. However, the separation hypersurface of an SVM might be unnecessarily over-convoluted around extreme outliers, as these outliers can easily dominate the objective function of the SVM. This eventually affects the efficiency and generalization of the SVM in classifying unseen testing samples. To avoid this problem, we propose a novel objective function for the SVM, in which an adaptive penalty term is designed to suppress the effects of extreme outliers, thus simplifying the separation hypersurface and increasing classification efficiency. Since maximization of the margin is no longer dominated by those extreme outliers, the resulting SVM tends to have a wider margin, i.e., better generalization ability. Importantly, since our objective function can be reformulated as a dual problem similar to that of the standard SVM, any existing SVM training algorithm can be borrowed for training our proposed SVM. The performance of our method has been extensively tested on the UCI machine learning repository, as well as on a real clinical problem, namely tissue classification in prostate ultrasound images. Experimental results show that our method simultaneously increases the classification efficiency and the generalization ability of the SVM.

18.
19.
A Fast New Algorithm for Multi-Source Cross-Domain Data Classification   (Total citations: 1; self-citations: 0; by others: 1)
Gu Xin, Wang Shitong, Xu Min. Acta Automatica Sinica, 2014, 40(3): 531-547
Cross-domain learning and classification aims to transfer supervised learning results from multiple source domains to a target domain, enabling classification of the unlabeled target domain. Current cross-domain learning generally focuses on transfer from a single source domain, usually with small sample sizes; such methods adapt poorly across domains and are powerless in the face of large-sample data, directly limiting the accuracy and efficiency of cross-domain classification. To exploit as much useful data from related domains as possible, this paper proposes a multiple sources cross-domain classification algorithm (MSCC), which builds multiple source-domain classifiers, based on the logistic regression model and a consensus method both validated by numerous experiments, to jointly guide classification of the target-domain data. To make full and efficient use of large-sample source-domain data and support fast computation on large samples, the paper combines MSCC with the recent CDdual (dual coordinate descent method) algorithm to derive a fast variant, MSCC-CDdual, together with the related theoretical analysis. Experimental results on artificial, text, and image datasets show that the algorithm attains high classification accuracy, fast running speed, and strong domain adaptability on large-sample datasets. The main contributions are threefold: 1) a new consensus method for multi-source cross-domain classification, which enables MSCC to be developed into the fast MSCC-CDdual algorithm; 2) the MSCC-CDdual fast algorithm itself, suitable for both small-sample and large-sample datasets; 3) the distinctive advantage MSCC-CDdual exhibits over other algorithms on high-dimensional datasets.
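The core of MSCC, per the abstract, is one logistic-regression classifier per source domain plus a consensus rule over their outputs on the target domain. A minimal pure-Python sketch with plain gradient descent and simple probability averaging as the consensus (the paper's actual consensus method and the CDdual solver are more sophisticated; all data and names here are illustrative):

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logreg(X, y, lr=0.5, epochs=200):
    """Plain stochastic-gradient logistic regression for one source domain."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - t
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def consensus_predict(models, x):
    """Consensus by averaging the per-source probabilities, threshold 0.5."""
    p = sum(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            for w, b in models) / len(models)
    return int(p >= 0.5)

# Two toy source domains with slightly shifted but consistent class structure.
sources = [([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0], [2.2, 1.9]], [0, 0, 1, 1]),
           ([[0.1, 0.2], [0.0, 0.1], [1.8, 2.1], [2.0, 2.2]], [0, 0, 1, 1])]
models = [train_logreg(X, y) for X, y in sources]
```

Averaging over several source-domain classifiers dilutes the bias of any single source, which is the intuition behind combining multiple sources rather than transferring from one.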

20.