Similar Documents
20 similar documents found (search time: 302 ms)
1.
Image classification is widely used in smart retail to recognize products, but in practice not every object that appears is known in advance, and unknown objects can disrupt a model trained on a closed set. Addressing this, the proposed method starts from the classification features of a closed dataset of known classes and derives predictions for unknown-class objects by computing and correcting those features. By constructing a feature space over the known classes and combining it with a feature distance optimized for the characteristics of image-classification feature spaces, the normalized principal-class distance, the boundary probability model of the feature space over the known-class dataset can be fitted more accurately. The boundary probability model is then used to correct the original classification features, yielding a prediction that an object belongs to an unknown class; experiments were designed to verify the feasibility of the method. In addition, comparative experiments against existing methods were conducted on a smart-retail dataset. The open-set classification algorithm using the normalized principal-class distance achieves higher known-class accuracy while improving the open-set rejection rate by 14.20 percentage points, reaching 44.85%.
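The correction pipeline described in this abstract can be approximated with a much simpler sketch: class centroids stand in for the learned classification features, and a scale-free distance ratio stands in for the paper's normalized principal-class distance. The function names and the threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one centroid per known class from closed-set training features."""
    classes = np.unique(labels)
    return {c: features[labels == c].mean(axis=0) for c in classes}

def predict_open_set(x, centroids, threshold):
    """Assign x to its nearest known-class centroid, or reject it as 'unknown'
    when the (normalized) distance exceeds a boundary threshold."""
    dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    # Normalize by the mean distance to all centroids so the threshold is
    # scale-free; a far-away outlier is roughly equidistant to every class
    # and its ratio approaches 1, while an in-class point stays near 0.
    norm_dist = dists[best] / (np.mean(list(dists.values())) + 1e-12)
    return "unknown" if norm_dist > threshold else best
```

A far-away object scores near 1 on the ratio and is rejected, which loosely mirrors the boundary-probability correction in the abstract.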

2.
Objective: Few-shot learning aims to classify test samples correctly given only a handful of labeled examples. Metric-based few-shot methods map samples into an embedding space and predict classes from distance-based similarity, but they fail to distill a representative feature for each class concept from its multiple support samples, which caps classification accuracy. To address this, this paper proposes the representative feature network, which improves classification markedly. Method: The network uses a metric-learning strategy built on class representative features: a representative feature learned from each class's support set expresses the class concept and drives classification of test samples. Concretely, the network has two modules: an embedding module extracts high-level embedding vectors, and a representative-feature module distills a representative feature for each class from the stacked support embeddings. A test sample is classified by the distance between its embedding and each class's representative feature, and a proposed hybrid loss enlarges inter-class margins in the embedding space to reduce confusion between similar classes. Results: Extensive experiments on Omniglot, miniImageNet, and Cifar100 show that the model achieves the best known classification accuracy while maintaining high training efficiency. Conclusion: The representative feature network effectively distills a representative feature from a class's multiple support samples; compared with classifying directly against support samples, it is more robust and further improves few-shot classification accuracy.
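The core idea, distill one representative feature per class from its support set and classify queries by distance, can be sketched as follows. Mean-pooling stands in for the paper's learned representative-feature module, and all names are hypothetical.

```python
import numpy as np

def class_prototypes(support_emb, support_lbl):
    """Collapse each class's support embeddings into one representative
    feature; mean-pooling here stands in for the learned module."""
    return {c: support_emb[support_lbl == c].mean(axis=0)
            for c in np.unique(support_lbl)}

def classify(query_emb, prototypes):
    """Label each query embedding by its nearest class representative
    feature under Euclidean distance."""
    classes = list(prototypes)
    protos = np.stack([prototypes[c] for c in classes])
    # Pairwise distances: (n_queries, n_classes)
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in d.argmin(axis=1)]
```

In the paper the embeddings come from a trained embedding module and the pooling is learned; the sketch only shows the nearest-representative decision rule.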

3.
A Zero-Shot Image Classification Method Based on Subspace Learning with Fused Reconstruction
Image classification is an important subfield of computer vision. Traditional classifiers can only recognize classes seen during training, but in real applications new classes keep emerging, which forces costly collection of labeled data for the new classes and retraining of the classifier. Zero-shot image classification, by contrast, can recognize samples of classes never seen during training, and has attracted wide attention in recent years. It bridges seen and unseen classes through a semantic space, transferring knowledge so that samples of unseen classes can be classified. Existing methods typically learn a mapping from the visual space to the semantic space from the seen classes' visual and semantic features, project unseen-class visual features into the semantic space with the learned mapping, and classify by nearest neighbor there. However, the differences between seen and unseen classes and their mismatched image distributions easily cause the domain shift problem, and learning a direct visual-to-semantic mapping loses information. To address both the information loss and the domain shift that arise during knowledge transfer, this paper proposes a zero-shot classification method based on subspace learning and reconstruction. During training, it fully exploits the available information about unseen classes to reduce domain shift: it first transfers the relations between seen and unseen classes from the semantic space into the visual space to learn visual feature prototypes for the unseen classes. It then learns a latent class-prototype feature space from the visual space containing the visual prototypes of all classes (seen and unseen) and the semantic space containing the semantic prototypes, aligning visual and semantic features in this latent subspace so that every class's representation carries both the discriminative information of the visual space and the class-relation information of the semantic space; a reconstruction constraint applied during subspace learning reduces information loss and further alleviates domain shift. In the recognition stage, unseen-class images are classified by nearest neighbor in the different spaces. The main contributions are: first, transferring inter-class relations from the semantic space to learn unseen-class prototypes in the visual space, so that unseen-class information is exploited during training and domain shift is mitigated to some extent; second, learning a shared latent subspace that carries both the rich discriminative information of the visual space and the inter-class relations of the semantic space, with reconstruction during subspace learning alleviating the information loss of knowledge transfer. Comparative experiments on four public zero-shot classification datasets show that the proposed method achieves high average classification accuracy, demonstrating its effectiveness.
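The baseline that this abstract criticizes, learning a visual-to-semantic mapping on seen classes and then classifying by nearest neighbor in semantic space, can be sketched with a closed-form ridge-regression mapping. This is a common simple choice, not the paper's subspace method, and all identifiers are illustrative.

```python
import numpy as np

def learn_mapping(V, S, lam=1.0):
    """Ridge-regression mapping W from visual features V (n x d) to semantic
    features S (n x k): argmin_W ||V W - S||^2 + lam ||W||^2."""
    d = V.shape[1]
    return np.linalg.solve(V.T @ V + lam * np.eye(d), V.T @ S)

def zsl_predict(v, W, unseen_protos):
    """Project a test visual feature into semantic space and return the
    nearest unseen-class semantic prototype."""
    s = v @ W
    names = list(unseen_protos)
    dists = [np.linalg.norm(s - unseen_protos[c]) for c in names]
    return names[int(np.argmin(dists))]
```

The paper replaces this direct projection with a shared latent subspace plus reconstruction constraints precisely because this baseline suffers from domain shift and information loss.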

4.
In practical image classification, some classes may have no labeled training data at all. Zero-shot learning (ZSL) transfers knowledge such as image features from labeled classes to unlabeled classes so that the latter can be classified correctly. Existing methods cannot explicitly decide at test time whether an input image belongs to a seen or an unseen class, which largely explains why unseen-class performance differs so sharply between conventional ZSL and generalized ZSL (GZSL). To mitigate this prediction bias in zero-shot image classification, a method fusing visual error with attribute semantic information is proposed. First, a semi-supervised generative adversarial network architecture is designed to capture visual error information, from which it is predicted whether an image belongs to a seen class; next, a zero-shot classification network fusing attribute semantic information performs the classification itself; finally, the combined method is evaluated on the AwA2 and CUB datasets. Experiments show that, compared with baseline models, the method effectively alleviates prediction bias, improving the harmonic mean H by 31.7 percentage points on AwA2 (Animals with Attributes 2) and by 8.7 percentage points on CUB (Caltech-UCSD Birds-200-2011).

5.
Naive Bayes classifiers are widely used in text classification for their simplicity and speed, but their accuracy degrades when the class distribution of the training set is imbalanced. To address this, a weighted-complement naive Bayes text classification algorithm is proposed: the features of each class are represented by the features of that class's complement set, and the feature weights are normalized. Experiments comparing the method with traditional naive Bayes show that the weighted-complement algorithm yields better text classification results.
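A minimal sketch of complement-based weighting with per-class weight normalization, in the spirit of this abstract (and of the well-known complement naive Bayes of Rennie et al.); the paper's exact weighting scheme may differ, and all names are illustrative.

```python
import numpy as np

def train_wcnb(X, y, alpha=1.0):
    """Weighted-complement naive Bayes: each class's weights come from the
    word counts of its complement set, then are L1-normalized per class."""
    classes = np.unique(y)
    W = []
    for c in classes:
        comp = X[y != c].sum(axis=0) + alpha   # smoothed complement counts
        w = np.log(comp / comp.sum())          # log p(term | not c)
        W.append(w / np.abs(w).sum())          # feature-weight normalization
    return classes, np.array(W)

def predict_wcnb(X, classes, W):
    """Assign each document to the class whose complement it matches least
    (most negative weighted sum)."""
    return classes[np.argmin(X @ W.T, axis=1)]
```

Because each class is parameterized by all the *other* classes' documents, a rare class still gets its weights estimated from plenty of data, which is why the complement formulation helps under class imbalance.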

6.
Zero-shot architectural image classification classifies samples of unknown building categories through knowledge transfer between known and unknown categories when the labeled training samples do not cover all classes. To address the scarcity of labeled data and the inaccurate localization of locally discriminative features in architectural-style classification, a zero-shot classification method based on a dual attention mechanism is proposed. First, channel-attention and spatial-attention models are introduced to strengthen the representation of specific image regions: the channel-attention network learns per-channel weights to localize buildings in the image, while the spatial-attention network embeds positional information into the channel attention maps to capture fine-grained target details, yielding a feature representation with both channel and spatial dimensions. Second, a generator reconstructs the visual features to reduce the information lost during space mapping. Finally, a zero-shot classification model with a common embedding space is designed: visual and semantic features are aligned in a subspace, and classification is performed by nearest-neighbor matching. Experiments show that, compared with current zero-shot learning methods, the approach raises average classification accuracy by 1.3 percentage points on the zero-shot dataset CUB and by 0.7 percentage points on the Architecture Style Dataset.

7.
In few-shot text classification, feature extraction from the query and support sets is one of the keys to classification quality, yet prior work has largely ignored both the matching information between the two sets and the unequal importance of individual features within each set. A new few-shot classification model is therefore proposed. It models text features by combining the global information extraction of a GRU with the local detail learning of an attention mechanism, uses a bidirectional attention mechanism to capture the interaction between support and query samples, and introduces a novel "class generator" that weighs same-class samples differently to produce more discriminative class representations. In addition, a prototype-aware regularization term is designed to sharpen the classification boundary by optimizing prototype learning. On two few-shot classification datasets, the model outperforms the current best baseline models.

8.
For the binary classification of multi-observation samples, a new LS-SVM-based algorithm suited to multi-observation samples is proposed. In each classification round the pattern to be classified is represented by a set of multiple observations. The label of the multi-observation set is first hypothesized, and this hypothesis is imposed as a constraint on the LS-SVM optimization problem, yielding a classification error; comparing the errors under the two hypotheses determines the set's class. The method needs no pre-trained classifier: it uses the labeled samples and the multi-observation set jointly, fully exploiting the fact that samples of the same class are distributed contiguously in feature space. Three sets of experiments verify the effectiveness of the proposed method.

9.
The KNN algorithm has important applications in text classification, a branch of data mining. After analyzing the shortcomings of the traditional KNN method, an improved KNN algorithm based on association analysis is proposed. The method first extracts, for each class of training texts, the class's frequent feature sets and their associated texts; then, based on the association analysis of each class, it determines an appropriate neighbor count k for a text of unknown class, quickly selects the k nearest neighbors among the labeled training texts, and assigns the unknown text's class from those neighbors. Compared with traditional KNN text classification, the improved method chooses k more soundly and lowers time complexity. Experiments show that the proposed method improves both the efficiency and the accuracy of text classification.

10.
An Improved KNN Method for Web Text Classification
The KNN method has two drawbacks: (a) its computational cost is huge, since the similarity between an unknown text and every training sample must be computed to obtain the k nearest neighbors; and (b) when classes share many commonalities, i.e., training samples have substantial feature overlap, KNN's accuracy drops. To address both problems, an improved KNN method is proposed: Rocchio classification first quickly yields the k0 most likely candidate classes; then KNN is applied to a subset of representative samples drawn from the training documents of those k0 classes; finally an improved similarity measure decides the text's final class. Experiments show that the improved KNN method achieves good results on Web text classification.
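The two-stage procedure, Rocchio pruning to k0 candidate classes followed by KNN inside them, might look like the sketch below. Euclidean distance replaces the paper's (improved) similarity measure and the representative-sample subsampling step is omitted, so this is an assumption-laden outline rather than the authors' method.

```python
import numpy as np
from collections import Counter

def two_stage_classify(x, train_X, train_y, k0=2, k=3):
    """Stage 1 (Rocchio): keep the k0 classes whose centroids are closest
    to x. Stage 2 (KNN): run KNN only over training docs of those classes."""
    classes = np.unique(train_y)
    cent_d = {c: np.linalg.norm(x - train_X[train_y == c].mean(axis=0))
              for c in classes}
    candidates = sorted(cent_d, key=cent_d.get)[:k0]
    # Restrict the expensive neighbor search to the candidate classes only.
    mask = np.isin(train_y, candidates)
    Xc, yc = train_X[mask], train_y[mask]
    nn = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
    return Counter(yc[nn]).most_common(1)[0][0]
```

The pruning step is what removes drawback (a): neighbor distances are computed only against the candidate classes' documents instead of the whole training set.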

11.
An Active Text Labeling Method Using Nearest Neighbors and Information Entropy
Because labeling text data at scale is laborious, semi-supervised text classification, which combines a small number of labeled samples with many unlabeled ones, has developed rapidly. In semi-supervised text classification, the few labeled samples mainly initialize the classification model, so how well they are chosen affects the final model's performance. To make the labeled samples match the original data distribution as closely as possible, this paper proposes drawing the next batch of candidate samples while avoiding the K nearest neighbors of already-labeled samples, giving samples distributed across different regions more chance to be labeled. On top of this, to gain more class information, the candidate with maximum information entropy is selected as the final sample to label. Experiments on real text data demonstrate the effectiveness of the proposed method.
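The selection rule described above, skip the K nearest neighbors of already-labeled samples and then pick the maximum-entropy candidate, can be sketched as follows; `probs` is assumed to hold each sample's predicted class distribution from the current model, and all names are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a predicted class distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def select_to_label(X, probs, labeled_idx, k=2):
    """Among samples that are NOT among the k nearest neighbours of any
    labeled sample, pick the one with maximum prediction entropy."""
    excluded = set(labeled_idx)
    for i in labeled_idx:
        d = np.linalg.norm(X - X[i], axis=1)
        # index 0 of argsort is the labeled point itself; skip it
        excluded.update(np.argsort(d)[1:k + 1].tolist())
    pool = [j for j in range(len(X)) if j not in excluded]
    if not pool:  # fall back if the exclusion zone covers everything
        pool = [j for j in range(len(X)) if j not in set(labeled_idx)]
    return max(pool, key=lambda j: entropy(probs[j]))
```

Excluding the labeled points' neighborhoods spreads annotation effort across regions; the entropy criterion then spends it on the most uncertain remaining sample.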

12.
Supervised text classifiers need to learn from many labeled examples to achieve high accuracy. However, in real contexts sufficient labeled examples are not always available, because human labeling is enormously time-consuming. For this reason, there has been recent interest in methods that obtain high accuracy when the training set is small. In this paper we introduce a new single-label text classification method that performs better than baseline methods when the number of labeled examples is small. Unlike most existing methods, which use a feature vector of weighted words, the proposed approach uses a structured feature vector composed of weighted pairs of words. This feature vector is learned automatically from a set of documents, using a global term-extraction method based on Latent Dirichlet Allocation implemented as a Probabilistic Topic Model. Experiments performed using a small percentage of the original training set (about 1%) confirmed our hypotheses.

13.
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets). This has motivated increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word-frequency statistical analysis and have proved inadequate for the classification of short texts, where word occurrence is too sparse. Moreover, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance; such labeled documents might not be available, whereas unlabeled documents can easily be collected. This paper presents an approach to text categorisation that does not need a pre-classified set of training documents. The proposed method only requires the category names as user input. Each of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency, but depends strongly on the definition of the category and on how well a text fits that definition. The proposed approach is therefore appropriate for short-text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process relies on the ability of an extension of the standard Prolog language, named Bousi~Prolog, to perform flexible matching and knowledge representation. This declarative approach yields a text classifier that is quick and easy to build, and a classification process that is easy for the user to understand.
The results of experiments showed that the proposed method achieved a reasonably useful performance.

14.
Text Classification from Labeled and Unlabeled Documents using EM
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However, these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
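The basic EM procedure described here, a multinomial naive Bayes model with soft labels on the unlabeled pool, can be sketched in a few lines. The sketch omits the paper's two extensions (unlabeled-data weighting and multiple mixture components per class), and all identifiers are illustrative.

```python
import numpy as np

def em_nb(Xl, yl, Xu, n_classes, iters=10, alpha=1.0):
    """EM with a multinomial naive Bayes: train on labeled docs, softly
    relabel the unlabeled pool, retrain on everything, and repeat."""
    R = np.eye(n_classes)[yl]                          # hard labeled resp.
    Ru = np.full((len(Xu), n_classes), 1.0 / n_classes)
    for _ in range(iters):
        # M-step: re-estimate priors and word probabilities from all docs.
        resp = np.vstack([R, Ru])
        X = np.vstack([Xl, Xu])
        prior = resp.sum(axis=0) / resp.sum()
        counts = resp.T @ X + alpha                    # Laplace smoothing
        theta = counts / counts.sum(axis=1, keepdims=True)
        # E-step: probabilistically relabel the unlabeled documents.
        log_p = np.log(prior) + Xu @ np.log(theta).T
        log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
        Ru = np.exp(log_p)
        Ru /= Ru.sum(axis=1, keepdims=True)
    return prior, theta

def nb_predict(X, prior, theta):
    return np.argmax(np.log(prior) + X @ np.log(theta).T, axis=1)
```

When the generative assumptions hold, the soft labels sharpen over iterations and the unlabeled pool effectively enlarges the training set, exactly the effect the abstract reports.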

15.
The rapid growth of information technology has produced a massive accumulation of text data, a large share of it short texts. Text classification is essential for automatically extracting knowledge from these vast collections of short texts. But because keywords occur rarely in short texts and labeled training samples are usually few, general-purpose text mining algorithms struggle to reach acceptable accuracy. Some semantics-based methods achieve better accuracy but are too inefficient for massive data. This paper proposes a novel short-text classification algorithm based on a semantic feature graph of the text, with classification performed by a kNN-like method. Experiments show that the algorithm surpasses other algorithms in both accuracy and performance when classifying massive volumes of short texts.

16.
Supervised text classification methods are efficient when they can learn from reasonably sized labeled sets. On the other hand, when only a small set of labeled documents is available, semi-supervised methods become more appropriate. These methods are based on comparing distributions between labeled and unlabeled instances, so it is important to focus on the representation and its discrimination abilities. In this paper we present the ST LDA method for semi-supervised text classification with representations based on topic models. The proposed method comprises a semi-supervised text classification algorithm based on self-training and a model that determines parameter settings for any new document collection. Self-training is used to enlarge the small initial labeled set with the help of information from unlabeled data. We investigate how topic-based representation affects prediction accuracy by applying the NBMN and SVM classification algorithms to an enlarged labeled set, and then compare the results with the same methods on a typical TF-IDF representation. We also compare ST LDA with supervised classification methods and other well-known semi-supervised methods. Experiments were conducted on 11 very small initial labeled sets sampled from six publicly available document collections. The results show that our ST LDA method, when used in combination with NBMN, performed significantly better in terms of classification accuracy than other comparable methods and variations. In this manner, the ST LDA method proved to be a competitive classification method for different text collections when only a small set of labeled instances is available. As such, the proposed ST LDA method may well help to improve text classification tasks, which are essential in many advanced expert and intelligent systems, especially in the case of a scarcity of labeled texts.

17.
Document Classification Based on Active Learning
In the field of text categorization, the number of unlabeled documents is generally much greater than that of labeled documents. Text categorization is a classification problem in a high-dimensional vector space, and more training samples generally improve the accuracy of a text classifier, so how to add unlabeled documents to expand the training set is a valuable problem. This paper introduces the theory of active learning and applies it to text categorization, exploring how unlabeled documents can be used to improve classifier accuracy; the expectation is that adopting a relatively large number of unlabeled document samples will raise it. We propose an active-learning-based algorithm for text categorization, and experiments on the Reuters news corpus show that, when enough training samples are available, the algorithm effectively improves the text classifier's accuracy by adopting unlabeled document samples.

18.
Text mining, intelligent text analysis, text data mining, and knowledge discovery in text are commonly used aliases for the process of extracting relevant and non-trivial information from text. Some crucial issues arise when trying to solve this problem, such as document representation and the deficit of labeled data. This paper addresses these problems by introducing information from unlabeled documents into the training set, using the support vector machine (SVM) separating margin as the differentiating factor. Besides studying the influence of several pre-processing methods and concluding on their relative significance, we also evaluate the benefits of introducing background knowledge into an SVM text classifier. We further evaluate the possibility of active learning and propose a method for successfully combining background knowledge and active learning. Experimental results show that the proposed techniques, used alone or combined, bring a considerable improvement in classification performance, even when only small labeled training sets are available.

19.
Zhang Yuhong, Chen Wei, Hu Xuegang. Computer Science (《计算机科学》), 2016, 43(12): 179-182, 194
Applications such as network monitoring, online reviews, and microblogs generate large volumes of text data streams, whose incompletely labeled data and frequent concept drift challenge existing data stream classification methods. An adaptive classification algorithm for incompletely labeled text data streams is therefore proposed. Starting from one labeled data block, for each unlabeled block the algorithm first extracts the feature set shared between the labeled and unlabeled blocks, then detects concept drift from the features' similarity across the two blocks, and finally computes the polarity of the features in the unlabeled data to predict its labels. Experiments show the algorithm's superiority in classification accuracy, especially when label information is scarce and concept drift is frequent.

20.
Centroid-based classification is highly efficient but needs many training (labeled) documents to guarantee classification accuracy, and since classifying documents by hand costs substantial human effort, training documents are limited while large numbers of unlabeled documents exist on the Web. This paper therefore improves centroid-based classification with the ONUC and OFFUC algorithms, which compensate for the sharp performance drop of centroid classification when training documents are scarce. Since centroid classification is easily affected by outliers, an edge-removal step is applied. Experiments show that, compared with ordinary centroid classification and other classic semi-supervised algorithms, the method achieves good performance when very few training documents are available.
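A sketch of centroid classification with a simple outlier-trimming ("edge removal") step, in the spirit of this abstract; the ONUC/OFFUC use of unlabeled documents is omitted, the trim fraction is an assumption, and all names are illustrative.

```python
import numpy as np

def trimmed_centroids(X, y, trim=0.2):
    """Per-class centroid computed after dropping the `trim` fraction of
    points farthest from the initial centroid (the edge-removal step)."""
    cents = {}
    for c in np.unique(y):
        P = X[y == c]
        d = np.linalg.norm(P - P.mean(axis=0), axis=1)
        keep = d <= np.quantile(d, 1.0 - trim)   # drop the farthest points
        cents[c] = P[keep].mean(axis=0)
    return cents

def centroid_predict(x, cents):
    """Assign x to the class with the nearest trimmed centroid."""
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))
```

With few training documents a single outlier can drag a class centroid far from the class's true center; trimming before the final mean keeps the centroid representative.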
