Similar Articles (20 results)
1.
As a fundamental unsupervised learning task, clustering aims to partition unlabeled, mixed image data into semantically similar classes. Some recent methods introduce data augmentation and use contrastive learning to learn feature representations and cluster assignments; because they focus on the model's ability to distinguish different semantic classes, feature embeddings of samples from the same semantic class may be pushed apart. To address this problem, a contrastive clustering method with consistent structural relations (Contrastive Clustering with Consistent Structural Relations, CCR) is proposed. It performs contrastive learning at both the instance level and the cluster level, and adds a relation-level consistency constraint so that the model learns more "positive pair" information from structural relations, thereby reducing the effect of cluster embeddings being separated. Experimental results show that CCR outperforms recent unsupervised clustering methods on image benchmark datasets: under the same experimental settings, its average accuracy exceeds the best compared method by 1.7% on CIFAR-10 and STL-10 and by 1.9% on CIFAR-100.
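A minimal sketch of the instance-level and cluster-level contrastive losses the abstract refers to (assuming PyTorch; the encoder, the data augmentations, and the relation-level consistency term of CCR are omitted, and the temperatures are illustrative):

```python
# Illustrative instance- and cluster-level contrastive losses (not the exact CCR objective).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Contrast each sample with its augmented view; all other samples act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, d)
    sim = z @ z.t() / temperature                                # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))                   # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def cluster_contrastive(p1, p2, temperature=1.0):
    """Treat each cluster's soft-assignment column as a representation and contrast across views."""
    loss = nt_xent(p1.t(), p2.t(), temperature)
    probs = p1.mean(dim=0)
    entropy = -(probs * torch.log(probs + 1e-8)).sum()           # discourages cluster collapse
    return loss - entropy

# Usage with a backbone producing features z and soft assignments p for two augmented views:
# total_loss = nt_xent(z1, z2) + cluster_contrastive(p1, p2)
```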

2.
Unsupervised algorithms usually cluster hyperspectral data using only spectral information and ignore spatial information, which leads to low clustering accuracy. To address this problem, a hyperspectral image clustering algorithm based on a deep spectral-spatial network and unsupervised discriminative extreme learning is proposed. The deep spectral-spatial network learns spectral and spatial features of the hyperspectral data in a layered, interleaved manner, and repeated learning yields deep spectral-spatial features that facilitate the subsequent unsupervised clustering. Experiments on three hyperspectral images show that the proposed algorithm achieves better clustering results than other extreme-learning-machine-based methods and other unsupervised methods.

3.
To address the low efficiency of object detection methods based on unsupervised feature extraction, a method for accurately detecting foreground objects in unlabeled datasets is proposed. The basic idea is that correct feature clustering can guide object feature extraction, while accurately extracted object features can in turn improve the accuracy of feature clustering. The method first extracts local features from unlabeled sample images and then performs unsupervised feature clustering by minimizing feature distances. Images within the same cluster are matched pairwise, the recurrence of feature matches is used as the feature weight, and the updated feature weights guide the feature clustering of the next iteration. After several iterations, the clustering result and the foreground objects are obtained simultaneously. Experimental results show that the method effectively improves detection accuracy on the Caltech-256 dataset and on Google vehicle images. In addition, since most existing unsupervised object detection methods lack incremental learning capability, an incremental learning implementation is proposed; experiments show that it effectively improves computation speed.

4.
The low-dimensional embeddings learned by traditional multidimensional scaling preserve the topological structure of the data points but ignore the discriminability between classes in the embedded data. On this basis, an unsupervised discriminative feature learning method based on multidimensional scaling, the Discriminative MultiDimensional Scaling model (DMDS), is proposed. DMDS discovers cluster structure while learning the low-dimensional representation, and makes the learned representation more discriminative by pulling the embeddings of points in the same cluster closer together. First, the objective function of DMDS is designed so that the learned features preserve topology while gaining discriminability; second, the objective is derived and solved, and a corresponding iterative optimization algorithm is designed from the derivation; finally, comparative experiments on average clustering accuracy and average purity are conducted on 12 public datasets. According to a comprehensive evaluation with the Friedman statistic, DMDS outperforms both the original data representation and the representation of the traditional multidimensional scaling model on the 12 datasets, and its low-dimensional embeddings are more discriminative.
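The "preserve pairwise distances while compacting clusters" idea can be illustrated with a simplified alternating scheme (a stand-in sketch assuming NumPy, SciPy, and scikit-learn; this is not the DMDS objective or its solver, and the step size and regularization weight are illustrative):

```python
# Simplified stand-in for the DMDS idea: alternate between (a) clustering the current embedding
# and (b) a gradient step that lowers MDS stress while pulling points toward their cluster means.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def dmds_sketch(X, n_components=2, n_clusters=3, iters=200, lr=0.001, lam=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D = squareform(pdist(X))                                     # target pairwise distances
    Y = rng.normal(scale=0.1, size=(X.shape[0], n_components))  # low-dimensional embedding
    labels = np.zeros(X.shape[0], dtype=int)
    for it in range(iters):
        if it % 20 == 0:                                         # re-estimate cluster structure
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Y)
        d = squareform(pdist(Y)) + 1e-9
        diff = Y[:, None, :] - Y[None, :, :]
        stress_grad = (4 * ((d - D) / d)[:, :, None] * diff).sum(axis=1)  # gradient of MDS stress
        centroids = np.array([Y[labels == k].mean(axis=0) for k in range(n_clusters)])
        compact_grad = 2 * lam * (Y - centroids[labels])         # pull toward own cluster centroid
        Y -= lr * (stress_grad + compact_grad)
    return Y, labels

# Example: Y, labels = dmds_sketch(np.random.rand(60, 10))
```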

5.
Unsupervised image clustering is the process of partitioning an entire image collection into subsets based only on the image data, without any prior information. Because the intrinsic dimensionality of images is high, image processing suffers from the "curse of dimensionality". Targeting the characteristics of unsupervised image clustering, a diffuse-interface unsupervised clustering algorithm for images is proposed: images are encoded as points in a high-dimensional observation space and mapped to a low-dimensional feature space by a projection transform; a diffuse-interface unsupervised clustering model is built in that feature space, a dimensionality reduction operator is introduced into the model, and an iterative algorithm is used to optimize the energy function of the diffuse-interface model. Based on the optimal diffuse interface, the entire image collection is clustered into different subsets. Experimental results show that the diffuse-interface unsupervised clustering algorithm outperforms the traditional K-means, DBSCAN, and Spectral Clustering algorithms, achieves better unsupervised image clustering, and attains higher accuracy under the same conditions.

6.
When clustering short texts, traditional shallow text clustering methods face challenges such as limited context, irregular wording, and few content-bearing words, which lead to sparse text embeddings and difficulty in extracting key features. To address these problems, this paper proposes a deep clustering model that incorporates simple data augmentation, SSKU (SBERT SimCSE K-means Umap). The model embeds short texts with SBERT and fine-tunes the embedding model with unsupervised SimCSE jointly with deep K-Means clustering, improving the embeddings so that they are better suited to clustering. The Umap manifold dimensionality reduction method is used to learn the local manifold structure of the embeddings, alleviating the feature sparsity of short texts and refining the embeddings. Finally, K-Means is applied to the reduced embeddings to obtain the clustering result. Extensive experiments on four public short-text datasets, including StackOverflow and Biomedical, and comparisons with recent deep clustering algorithms show that the proposed model achieves good clustering performance on both accuracy and normalized mutual information.
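A runnable sketch of the embed, reduce, and cluster stages of such a pipeline (assuming the sentence-transformers, umap-learn, and scikit-learn packages; the unsupervised SimCSE fine-tuning stage is omitted, and the model name and toy corpus are illustrative):

```python
# Sketch of an SBERT -> UMAP -> K-Means short-text clustering pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import umap

texts = [
    "how to parse json in python",
    "parsing a json string with python",
    "read json file into dict",
    "java list vs array performance",
    "convert array to list in java",
    "java arraylist initialization",
]

# 1. Embed the short texts with a pretrained SBERT model (model name is illustrative).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(texts)

# 2. Reduce dimensionality with UMAP to expose local manifold structure.
reduced = umap.UMAP(n_components=2, n_neighbors=3, metric="cosine").fit_transform(embeddings)

# 3. Cluster the reduced embeddings with K-Means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print(labels)
```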

7.
Sun Shengzi, Wan Yuan, Zeng Cheng. Journal of Computer Applications, 2018, 38(12): 3391-3398.
Most semi-supervised multi-view feature dimensionality reduction methods do not account for differences in feature projections across views, and, lacking sparsity constraints on the reduced low-dimensional matrices, they cannot avoid the influence of noise and other irrelevant features. To address these two problems, a semi-supervised multi-view feature dimensionality reduction method with adaptive embedding is proposed. First, the projection is extended from a single embedding matrix shared across views to different matrices for different views, and a global structure preservation term is introduced. Then, unlabeled data are embedded and projected with an unsupervised method, while labeled data are linearly projected using the discriminative information of their classes. Finally, the two kinds of projections are mapped into a unified low-dimensional space, and a combined weight matrix is used to preserve the global structure, largely eliminating the influence of noise and irrelevant factors. Experimental results show that the clustering accuracy of the proposed method improves by about 9% on average; the method better preserves the correlation of features across views and captures more features carrying discriminative information.

8.
Traditional autoencoders learn features in an unsupervised way and, lacking the guidance of supervisory information, produce weakly discriminative features. To address this, a cluster compact auto-encoder (CCAE) is proposed. First, the fuzzy C-means algorithm clusters the samples to obtain pseudo labels, and the PBMF index is used to determine the optimal number of clusters. Then, the pseudo labels are used to build a cluster compactness regularization term that embeds discriminative information about the class each sample belongs to. Finally, the cluster compactness term is combined with the loss function of a standard autoencoder to form the CCAE loss. By embedding class-discriminative information through pseudo labels, CCAE enhances feature discriminability and thus significantly improves diagnosis performance. The method is validated on rotating-machinery gear and bearing datasets; the results show that CCAE can be widely used in the feature extraction stage of rotating-machinery fault diagnosis and offers engineers a solution for automatically extracting discriminative features.
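A hedged sketch of how a cluster-compactness regularizer can be added to a standard autoencoder reconstruction loss (PyTorch; the network sizes and the exact form of the compactness term are assumptions, and the fuzzy C-means pseudo labels are taken as given):

```python
# Illustrative cluster-compactness regularized autoencoder loss (not the paper's exact formulation).
import torch
import torch.nn as nn

class CCAESketch(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, hid_dim))
        self.decoder = nn.Sequential(nn.Linear(hid_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def ccae_loss(x, z, x_rec, pseudo_labels, n_clusters, lam=0.1):
    recon = nn.functional.mse_loss(x_rec, x)            # standard autoencoder reconstruction term
    compact = 0.0
    for k in range(n_clusters):                          # pull each code toward its cluster mean
        members = z[pseudo_labels == k]
        if len(members) > 0:
            compact = compact + ((members - members.mean(dim=0)) ** 2).sum(dim=1).mean()
    return recon + lam * compact / n_clusters

# Usage: z, x_rec = model(x); loss = ccae_loss(x, z, x_rec, pseudo_labels, n_clusters=5)
```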

9.
Tang Jiamin, Han Hua, Huang Li. Computer Engineering, 2022, 48(4): 269-275, 283.
In person re-identification research, the discriminative information carried by features is often insufficient, and factors such as occlusion and illumination interfere with accurate extraction of effective features, which strongly affects subsequent similarity measurement and ranking. In addition, supervised learning requires label information, which is laborious on large datasets. By introducing an unsupervised learning framework, a coarse- and fine-grained discriminative feature extraction method is proposed. A model framework based on fine-grained and coarse-grained feature learning is built, containing a local branch and a global branch. In the local branch, patches are extracted from the feature maps learned from images, and fine-grained patch features at different positions are learned on the unlabeled dataset; in the global branch, the similarity and diversity of the unlabeled dataset are used as information to learn coarse-grained features. On this basis, attraction and repulsion loss functions are used to increase intra-class similarity and inter-class diversity respectively, and the similarity between features is computed with the minimum-distance criterion for unsupervised cluster merging. Experimental results on the Market-1501 and DukeMTMC-reID datasets show that the method achieves good discriminative performance and robustness for person re-identification: compared with the best results of all compared methods, its Rank-1 metric improves by 5.76% and 5.07%, and its mean average precision improves by 3.2% and 5.6%, respectively.

10.
To cluster and classify images more accurately, an orthogonal semi-supervised subspace learning algorithm based on local spline embedding is proposed. An orthogonal projection matrix is learned so that, after dimensionality reduction, the labeled training data have between-class scatter that is as large as possible and within-class scatter that is as small as possible; local spline regression maps local low-dimensional embedding coordinates to global low-dimensional embedding coordinates, so that the projected data preserve the original manifold structure, and both labeled and unlabeled training samples are used effectively to obtain an optimized image representation. Results of image clustering and classification experiments demonstrate the effectiveness of the algorithm.

11.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276.
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

12.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230.
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition for an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
Transfer in variable-reward hierarchical reinforcement learning
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.

14.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

15.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

16.
Liu Xiao, Mao Ning. Journal of Data Acquisition and Processing, 2015, 30(6): 1310-1317.
A learning automaton (LA) is an adaptive decision-maker that learns to select the optimal action from an allowed action set through continual interaction with a stochastic environment. In most traditional LA models the action set is taken to be finite, so for continuous-parameter learning problems the action space must be discretized, and the learning accuracy depends on the granularity of the discretization. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects its output action according to a uniform distribution over that interval. The learning algorithm adaptively updates the endpoints of the action interval using a binary feedback signal from the environment. A simulation experiment on a multimodal learning problem demonstrates the superiority of the new algorithm over three existing CALA algorithms.
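A toy sketch of the interval-based automaton described above (plain Python; the abstract does not give the endpoint update formula, so the shrink/grow rule below is purely illustrative):

```python
# Toy continuous action-set learning automaton: the action set is an interval [lo, hi],
# actions are drawn uniformly from it, and the endpoints adapt to binary feedback.
import random

def cala_run(environment, lo=0.0, hi=1.0, shrink=0.05, grow=0.01, steps=2000):
    for _ in range(steps):
        action = random.uniform(lo, hi)       # uniform action selection over the current interval
        if environment(action):               # binary feedback: True = favourable response
            lo += shrink * (action - lo)      # contract the interval toward the rewarded action
            hi -= shrink * (hi - action)
        else:
            width = hi - lo                   # widen slightly to keep exploring
            lo -= grow * width
            hi += grow * width
    return (lo + hi) / 2                      # point estimate of the learned action

# Illustrative stochastic environment whose optimum lies near 0.3.
print(cala_run(lambda a: random.random() < max(0.0, 1.0 - 2.0 * abs(a - 0.3))))
```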

17.
Applying different degrees of supervision in automatic text classification
Automatic text classification involves information retrieval, pattern recognition, machine learning, and related fields. Taking the degree of supervision as the organizing thread, this paper surveys several methods belonging to fully supervised, unsupervised, and semi-supervised learning strategies, namely NBC (Naive Bayes Classifier), FCM (Fuzzy C-Means), SOM (Self-Organizing Map), ssFCM (semi-supervised Fuzzy C-Means), and gSOM (guided Self-Organizing Map), and applies them to text classification. Among them, gSOM is a semi-supervised variant that we developed on the basis of SOM. Using Reuters-21578 as the corpus, the influence of the degree of supervision on classification performance is studied, and suggestions for practical text classification work are given.
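As a concrete reference point for the fully supervised end of that spectrum, a minimal Naive Bayes text classifier looks like this (assuming scikit-learn; the toy corpus below is illustrative, not Reuters-21578):

```python
# Minimal fully supervised Naive Bayes text classification example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_docs = [
    "crude oil prices rise sharply",
    "opec cuts oil production again",
    "wheat harvest exceeds forecast",
    "corn and wheat futures fall",
]
train_labels = ["energy", "energy", "grain", "grain"]

# TF-IDF features feed a multinomial Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(train_docs, train_labels)
print(clf.predict(["oil output increases", "wheat prices climb"]))
```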

18.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.

19.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

20.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248.
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.
