Similar Articles
20 similar articles found (search time: 15 ms)
1.
Mel BW  Fiser J 《Neural computation》2000,12(4):731-762
We have studied some of the design trade-offs governing visual representations based on spatially invariant conjunctive feature detectors, with an emphasis on the susceptibility of such systems to false-positive recognition errors (Malsburg's classical binding problem). We begin by deriving an analytical model that makes explicit how recognition performance is affected by the number of objects that must be distinguished, the number of features included in the representation, the complexity of individual objects, and the clutter load, that is, the amount of visual material in the field of view in which multiple objects must be simultaneously recognized, independent of pose, and without explicit segmentation. Using the domain of text to model object recognition in cluttered scenes, we show that with corrections for the nonuniform probability and nonindependence of text features, the analytical model achieves good fits to measured recognition rates in simulations involving a wide range of clutter loads, word sizes, and feature counts. We then introduce a greedy algorithm for feature learning, derived from the analytical model, which grows a representation by choosing those conjunctive features that are most likely to distinguish objects from the cluttered backgrounds in which they are embedded. We show that the representations produced by this algorithm are compact, decorrelated, and heavily weighted toward features of low conjunctive order. Our results provide a more quantitative basis for understanding when spatially invariant conjunctive features can support unambiguous perception in multiobject scenes, and lead to several insights regarding the properties of visual representations optimized for specific recognition tasks.
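The greedy learning rule described above can be illustrated with a toy sketch (not the authors' code): objects are words, conjunctive features are character n-grams, and features are chosen greedily to distinguish target words from a cluttered background string. All data and the scoring weights below are invented for the demo.

```python
# Toy sketch of greedy conjunctive-feature learning in the text domain:
# pick n-gram features that fire on target words but rarely occur in clutter.
from itertools import chain

def ngrams(word, n):
    """All conjunctions of n adjacent letters (conjunctive order n)."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def greedy_features(targets, clutter, max_order=3, k=4):
    """Greedily pick up to k features; each choice maximizes newly
    distinguished targets minus a clutter-occurrence penalty."""
    candidates = set(chain.from_iterable(
        ngrams(w, n) for w in targets for n in range(1, max_order + 1)))
    chosen, covered = [], set()
    while len(chosen) < k and covered != set(targets):
        def score(f):
            hits = {w for w in targets if f in w} - covered
            return len(hits) - 0.5 * clutter.count(f)
        best = max(candidates, key=score)
        chosen.append(best)
        covered |= {w for w in targets if best in w}
        candidates.discard(best)
    return chosen

targets = ["cat", "cot", "dog"]
clutter = "the quick brown fox jumps over the lazy hound"
features = greedy_features(targets, clutter)
```

Consistent with the abstract, the selected set tends to be small and biased toward low conjunctive order: single letters that cover several targets are picked before whole-word trigrams.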

2.
Mel B  Fiser J 《Neural computation》2000,12(2):247-278

3.
Many learning problems require handling high dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
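The two-stage idea — unsupervised discretization followed by selection — can be sketched minimally (this is an illustration of the general approach, not the paper's exact method): discretize each feature into equal-width bins, score features by the entropy of the discretized values, and keep the top-ranked ones. The data and parameters are invented.

```python
# Sketch: unsupervised equal-width discretization + entropy-based selection.
import math

def discretize(column, bins=4):
    """Equal-width binning; a constant feature collapses to a single bin."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0] * len(column)
    w = (hi - lo) / bins
    return [min(int((x - lo) / w), bins - 1) for x in column]

def entropy(labels):
    n = len(labels)
    counts = {}
    for v in labels:
        counts[v] = counts.get(v, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_features(data, k=1, bins=4):
    """data: list of rows; return indices of the k highest-entropy features."""
    n_feat = len(data[0])
    cols = [[row[j] for row in data] for j in range(n_feat)]
    scores = [entropy(discretize(c, bins)) for c in cols]
    return sorted(range(n_feat), key=lambda j: -scores[j])[:k]

# Feature 0 is spread out (informative); feature 1 is constant (uninformative).
data = [(0.1, 5.0), (0.9, 5.0), (0.4, 5.0), (0.7, 5.0), (0.2, 5.0), (0.6, 5.0)]
best = select_features(data, k=1)
```

Because the scoring needs no class labels, the whole pipeline stays unsupervised, matching the setting described in the abstract.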

4.
To address the classification of hyperspectral data in remote sensing images, a hyperspectral remote sensing image classification method is proposed based on deep feature representations learned by a stacked sparse autoencoder (SSAE). First, the spectral data samples are preprocessed and normalized. They are then fed into the SSAE for feature representation learning, with the optimal network parameters found by grid search so as to obtain effective feature representations. Finally, a support vector machine (SVM) classifier classifies the input image features, achieving pixel-level classification of the remote sensing image. Experimental results on two standard datasets show that the method achieves accurate classification of hyperspectral ground objects.
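The outer loop of this pipeline — per-band normalization followed by a grid search over network hyperparameters — can be sketched as follows. The actual SSAE and SVM training are replaced here by a stand-in scoring function (`toy_score`), and the parameter names (`hidden_units`, `sparsity`) are invented for the demo.

```python
# Sketch: min-max normalization of spectral vectors + hyperparameter grid search.
from itertools import product

def normalize(samples):
    """Per-band min-max normalization of spectral vectors to [0, 1]."""
    n_bands = len(samples[0])
    lo = [min(s[b] for s in samples) for b in range(n_bands)]
    hi = [max(s[b] for s in samples) for b in range(n_bands)]
    return [[(s[b] - lo[b]) / (hi[b] - lo[b] or 1) for b in range(n_bands)]
            for s in samples]

def grid_search(score, grid):
    """Return the parameter combination with the highest validation score."""
    best_params, best = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score(params)
        if s > best:
            best_params, best = params, s
    return best_params, best

X = normalize([[0.0, 10.0], [5.0, 10.0], [10.0, 10.0]])

# Stand-in for "train SSAE + SVM, report validation accuracy".
def toy_score(p):
    return -abs(p["hidden_units"] - 100) - 50 * abs(p["sparsity"] - 0.05)

grid = {"hidden_units": [50, 100, 200], "sparsity": [0.01, 0.05, 0.1]}
best_params, best_score = grid_search(toy_score, grid)
```

In the real method, `toy_score` would be the cross-validated accuracy of the SVM trained on SSAE features for each parameter combination.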

5.
This paper describes an algorithm based on 3D clipping for mapping feature models across domains. The problem is motivated by the need to identify feature models corresponding to different domains. Feature mapping (also referred to as feature conversion) involves obtaining a feature model in one domain given a feature model in another. This is in contrast to feature extraction, which works from the boundary representation of the part. Most techniques for feature mapping have focused on obtaining negative feature models only. We propose an algorithm that can convert a feature model with mixed features (both positive and negative) to a feature model containing either only positive or only negative features. The input to the algorithm is a feature model in one domain. The algorithm for mapping this model to another feature model is based on classification of faces of features in the model and 3D clipping. 3D clipping refers to the splitting of a solid by a surface. The feature mapping process involves three major steps. In the first step, faces forming the features in the input model are classified with respect to one another. The spatial arrangement of faces is used next to derive the dependency relationship amongst features in the input model, and a Feature Relationship Graph (FRG) is constructed. In the second step, using the FRG, features are clustered and interactions between features (if any) are resolved. In the final step, the 3D clipping algorithm is used to determine the volumes corresponding to the features in the target domain. These volumes are then classified to identify the features for obtaining the feature model in the target domain. Multiple feature sets (where possible) can be obtained by varying the sequence of faces used for clipping. Results of implementation are presented.

6.
Feature selection via sensitivity analysis of SVM probabilistic outputs   Cited: 1 (self-citations: 0, citations by others: 1)
Feature selection is an important aspect of solving data-mining and machine-learning problems. This paper proposes a feature-selection method for Support Vector Machine (SVM) learning. Like most feature-selection methods, the proposed method ranks all features in decreasing order of importance so that more relevant features can be identified. It uses a novel criterion based on the probabilistic outputs of SVM. This criterion, termed Feature-based Sensitivity of Posterior Probabilities (FSPP), evaluates the importance of a specific feature by computing the aggregate value, over the feature space, of the absolute difference of the probabilistic outputs of SVM with and without the feature. The exact form of this criterion is not easily computable and approximation is needed. Four approximations, FSPP1-FSPP4, are proposed for this purpose. The first two approximations evaluate the criterion by randomly permuting the values of the feature among samples of the training data. They differ in their choices of the mapping function from standard SVM output to its probabilistic output: FSPP1 uses a simple threshold function while FSPP2 uses a sigmoid function. The second two directly approximate the criterion but differ in the smoothness assumptions of the criterion with respect to the features. The performance of these approximations, used in an overall feature-selection scheme, is then evaluated on various artificial problems and real-world problems, including datasets from the recent Neural Information Processing Systems (NIPS) feature selection competition. FSPP1-FSPP3 consistently show good performance, with FSPP2 the best overall by a slight margin. The performance of FSPP2 is competitive with some of the best performing feature-selection methods in the literature on the datasets that we have tested. Its associated computations are modest and hence it is suitable as a feature-selection method for SVM applications. Editor: Risto Miikkulainen.
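The permutation-based approximations (FSPP1/FSPP2) can be sketched with a fixed linear model standing in for the trained SVM: a feature's score is the average absolute change in the sigmoid (probabilistic) output when that feature's values are permuted across samples. The weights and data below are invented; a real run would use the fitted SVM decision function instead.

```python
# Sketch of the FSPP permutation idea with a sigmoid output mapping (FSPP2-style).
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def prob(w, x):
    """Stand-in probabilistic output: sigmoid of a linear score."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def fspp_score(w, X, j, n_perm=20, seed=0):
    """Mean |p(x) - p(x with feature j permuted)| over samples/permutations."""
    rng = random.Random(seed)
    base = [prob(w, x) for x in X]
    total, count = 0.0, 0
    for _ in range(n_perm):
        col = [x[j] for x in X]
        rng.shuffle(col)
        for i, x in enumerate(X):
            xp = list(x)
            xp[j] = col[i]
            total += abs(base[i] - prob(w, xp))
            count += 1
    return total / count

# Feature 0 carries all the signal (weight 3); feature 1 is ignored (weight 0).
w = [3.0, 0.0]
X = [[-1.0, 0.7], [1.0, -0.2], [-0.5, 0.9], [0.8, -0.6], [0.1, 0.3]]
scores = [fspp_score(w, X, j) for j in range(2)]
```

An irrelevant feature leaves the output unchanged under permutation, so its score is exactly zero, which is what makes the ranking meaningful.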

7.
Feature relevance measures commonly used in current feature selection methods can effectively assess the correlation between two features, but they treat features in isolation, ignoring the influence of other features on that correlation. Considering the relationships among all features as a whole, this paper proposes using sparse representation coefficients to assess feature relevance. Unlike existing relevance measures, sparse representation coefficients reveal a feature's relevance to the target under the influence of all other features, reflecting the mutual influence among features. To validate the effectiveness of sparse representation coefficients for assessing feature relevance, experiments on typical high-dimensional, small-sample data compare the classification performance of feature sets selected by the ReliefF method and by feature selection methods using sparse representation coefficients, symmetric uncertainty, and the Pearson correlation coefficient as relevance measures. The results show that the feature sets selected by the proposed method achieve high and relatively stable classification performance.
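The core idea — judging each feature's relevance with all other features present — can be sketched by regressing the target on all features jointly under an L1 penalty (coordinate-descent Lasso) and reading relevance off coefficient magnitudes. This is an illustration of sparse-representation-based relevance in general, not the paper's exact formulation; the tiny dataset and the penalty value are invented.

```python
# Sketch: feature relevance from sparse representation coefficients.
def lasso_cd(X, y, lam=0.1, iters=200):
    """Coordinate-descent Lasso; returns one coefficient per feature."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # Partial residual excluding feature j.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # Soft-thresholding update: small correlations are zeroed out.
            if rho < -lam:
                w[j] = (rho + lam) / z
            elif rho > lam:
                w[j] = (rho - lam) / z
            else:
                w[j] = 0.0
    return w

# y depends on feature 0 only; feature 1 is noise.
X = [[1.0, 0.3], [2.0, -0.5], [3.0, 0.1], [4.0, 0.2], [5.0, -0.4]]
y = [2.0, 4.0, 6.0, 8.0, 10.0]
coef = lasso_cd(X, y)
relevance = [abs(c) for c in coef]
```

Because all features compete in one joint reconstruction, a noise feature is driven exactly to zero even if it has some marginal correlation with the target — the property the abstract contrasts with pairwise measures.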

8.
A Survey of Automatic Feature Recognition Techniques   Cited: 89 (self-citations: 0, citations by others: 89)
高曙明 《计算机学报》1998,21(3):281-288
Automatic feature recognition extracts feature information with specific engineering meaning from solid models of parts. Because automatic feature recognition forms an intelligent interface between CAD and CAPP and is of great importance for CAD/CAPP/CAM integration, it has long been a research focus in the CAD/CAM field, with abundant results. On the other hand, feature recognition is quite difficult, and many problems in this area remain to be solved. This paper comprehensively surveys the history and state of the art of automatic feature recognition, introduces representative recognition methods, and describes the characteristics of each method. Finally, …

9.
The performance of spectral clustering models based on self-representation affinity graphs suffers considerably from redundant features. To mitigate the negative impact of uninformative features in high-dimensional data, this paper proposes a subspace clustering algorithm that combines feature selection and smooth representation. First, a coefficient matrix is constructed based on the self-representation idea, bringing feature selection and data reconstruction into a single framework; weighting factors measure the contribution of relevant features, and a group-effect constraint on the coefficient matrix preserves locality. The objective function is optimized by alternating variable updates. Experiments on synthetic data and standard databases show that the proposed algorithm performs well on all measures.

10.
Many interesting problems in reinforcement learning (RL) are continuous and/or high dimensional, and in such cases RL techniques require the use of function approximators for learning value functions and policies. Often, local linear models have been preferred over distributed nonlinear models for function approximation in RL. We suggest that one reason for the difficulties encountered when using distributed architectures in RL is the problem of negative interference, whereby learning of new data disrupts previously learned mappings. The continuous temporal difference (TD) learning algorithm TD(λ) was used to learn a value function in a limited-torque pendulum swing-up task using a multilayer perceptron (MLP) network. Three different approaches were examined for learning in the MLP networks: 1) simple gradient descent; 2) vario-eta; and 3) a pseudopattern rehearsal strategy that attempts to reduce the effects of interference. Our results show that MLP networks can be used for value function approximation in this task but require long training times. We also found that vario-eta destabilized learning and resulted in a failure of the learning process to converge. Finally, we showed that the pseudopattern rehearsal strategy drastically improved the speed of learning. The results indicate that interference is a greater problem than ill conditioning for this task.
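Pseudopattern rehearsal can be illustrated with a deliberately interference-prone global linear approximator standing in for the MLP (this is a sketch of the rehearsal idea, not the authors' experiment): after learning task A, the model's own input/output pairs on random probe inputs ("pseudopatterns") are mixed into task-B training so the old mapping is not simply overwritten. The task definitions and learning rates are invented.

```python
# Sketch: pseudopattern rehearsal vs. naive continued training.
import random

def train(model, data, lr=0.05, epochs=200):
    """Plain SGD on a 1-D linear model y = w*x + b."""
    w, b = model
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def pseudopatterns(model, n, rng):
    """Probe the current model at random inputs; record its own outputs."""
    w, b = model
    return [(x, w * x + b) for x in (rng.uniform(0.0, 3.0) for _ in range(n))]

rng = random.Random(0)
task_a = [(x / 10.0, 1.0) for x in range(11)]        # y = 1 on [0, 1]
task_b = [(2.0 + x / 10.0, 0.0) for x in range(11)]  # y = 0 on [2, 3]

base = train((0.0, 0.0), task_a)                     # learn task A first
plain = train(base, task_b)                          # naive continued training
rehearsed = train(base, task_b + pseudopatterns(base, 11, rng))

def task_a_error(model):
    w, b = model
    return sum(abs(w * x + b - y) for x, y in task_a) / len(task_a)

err_plain, err_rehearsed = task_a_error(plain), task_a_error(rehearsed)
```

Naive training on task B erases the task-A mapping almost completely, while rehearsal preserves much of it — the interference effect the abstract discusses, in miniature.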

11.
Reinforcement learning (RL) is one of the methods of solving problems defined in multiagent systems. In the real world, the state is continuous, and agents take continuous actions. Since conventional RL schemes are often defined to deal with discrete worlds, there are difficulties such as the representation of an RL evaluation function. In this article, we intend to extend an RL algorithm so that it is applicable to continuous world problems. This extension is done by a combination of an RL algorithm and a function approximator. We employ Q-learning as the RL algorithm, and a neural network model called the normalized Gaussian network as the function approximator. The extended RL method is applied to a chase problem in a continuous world. The experimental result shows that our RL scheme was successful. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.
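The combination described — a normalized Gaussian network approximating Q over a continuous state, updated by Q-learning — can be sketched as follows (an illustration of the technique, not the authors' implementation; centers, widths, and the toy transition are invented).

```python
# Sketch: Q-learning with a normalized Gaussian network on a 1-D state space.
import math

CENTERS = [0.0, 0.5, 1.0]   # Gaussian basis centers
SIGMA = 0.3
ACTIONS = [0, 1]

def activations(s):
    """Normalized Gaussian activations: nonnegative and summing to 1."""
    g = [math.exp(-((s - c) ** 2) / (2 * SIGMA ** 2)) for c in CENTERS]
    z = sum(g)
    return [gi / z for gi in g]

def q_value(w, s, a):
    """Q(s, a) as a weighted sum of basis activations."""
    return sum(wi * phi for wi, phi in zip(w[a], activations(s)))

def q_update(w, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = r + gamma * max(q_value(w, s_next, b) for b in ACTIONS)
    td = target - q_value(w, s, a)
    phi = activations(s)
    for i in range(len(w[a])):
        w[a][i] += alpha * td * phi[i]

w = {a: [0.0, 0.0, 0.0] for a in ACTIONS}
before = q_value(w, 0.2, 0)
q_update(w, s=0.2, a=0, r=1.0, s_next=0.8)
after = q_value(w, 0.2, 0)
```

The normalization is what distinguishes this from a plain RBF network: the activations form a partition of unity, so the update spreads the TD error smoothly across neighboring basis functions.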

12.
Machine-learning-based iterative compilation can effectively predict the best combination of optimization parameters when a new program is compiled iteratively. During model training, existing approaches suffer from inefficient search of the optimization parameter space, inappropriate program feature representation, and low prediction accuracy. Machine-learning-based iterative compilation is therefore a current research focus in the field, with challenges in learning-algorithm selection, optimization-parameter search, and program feature representation. Based on supervised learning, this paper proposes a method for predicting program optimization parameters. The method first searches the optimization parameter space with a constrained multi-objective particle swarm algorithm to find the best optimization parameters for the sample functions; it then extracts function features with a combined static-dynamic program feature representation technique; finally, it builds a supervised model from samples formed by function features and optimization parameters to predict the optimization parameters of new programs. Statistical models were built with the k-nearest-neighbors method and softmax regression, and experimental results show that the new method achieves good predictive performance on the NPB benchmark suite and on large scientific computing programs.
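The prediction step with the k-nearest-neighbors model can be sketched as follows: given (program-feature vector, best flag combination) training samples, a new program's flags are taken by majority vote among its nearest neighbors in feature space. The toy feature vectors and flag strings are invented; in the paper the features come from static/dynamic analysis and the labels from particle-swarm search.

```python
# Sketch: kNN prediction of a program's optimization-flag combination.
import math
from collections import Counter

def knn_predict(samples, x, k=3):
    """samples: list of (features, flags); return the majority flag
    combination among the k nearest training programs."""
    nearest = sorted(samples, key=lambda s: math.dist(s[0], x))[:k]
    votes = Counter(flags for _, flags in nearest)
    return votes.most_common(1)[0][0]

# Toy corpus: (loop_intensity, branch_intensity) -> best flags found offline.
train = [
    ((0.9, 0.1), "-O3 -funroll-loops"),
    ((0.8, 0.2), "-O3 -funroll-loops"),
    ((0.7, 0.1), "-O3 -funroll-loops"),
    ((0.1, 0.9), "-O2"),
    ((0.2, 0.8), "-O2"),
]
pred = knn_predict(train, (0.85, 0.15))
```

The appeal of this scheme, as the abstract notes, is that the expensive parameter-space search happens only offline for the training programs; a new program costs one feature extraction plus a nearest-neighbor lookup.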

13.
In machine learning the so-called curse of dimensionality, pertinent to many classification algorithms, denotes the drastic increase in computational complexity and classification error with data having a great number of dimensions. In this context, feature selection techniques try to reduce dimensionality by finding a new, more compact representation of instances, selecting the most informative features and removing redundant, irrelevant, and/or noisy features. In this paper, we propose a filter-based feature selection method, called ReliefF-MI, for working in the multiple-instance learning scenario; it is based on the principles of the well-known ReliefF algorithm. Different extensions are designed and implemented and their performance checked in multiple instance learning. ReliefF-MI is applied as a pre-processing step that is completely independent of the multi-instance classifier learning process and is therefore more efficient and generic than wrapper approaches proposed in this area. Experimental results on five benchmark real-world data sets and 17 classification algorithms confirm the utility and efficiency of this method, both statistically and from the point of view of execution time.
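The Relief principle underlying ReliefF-MI can be sketched in the plain single-instance setting: each feature's weight decreases with its distance to the nearest same-class sample (hit) and increases with its distance to the nearest other-class sample (miss). Extending the distance computation to bags of instances is what ReliefF-MI adds and is omitted here; the toy data are invented.

```python
# Sketch: Relief-style feature weighting (single-instance version).
def relief_weights(X, y):
    n, d = len(X), len(X[0])
    w = [0.0] * d
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    for i in range(n):
        same = [X[j] for j in range(n) if j != i and y[j] == y[i]]
        diff = [X[j] for j in range(n) if y[j] != y[i]]
        hit = min(same, key=lambda s: dist(s, X[i]))   # nearest hit
        miss = min(diff, key=lambda s: dist(s, X[i]))  # nearest miss
        for f in range(d):
            # Reward separation from the miss, penalize distance to the hit.
            w[f] += abs(X[i][f] - miss[f]) - abs(X[i][f] - hit[f])
    return [wf / n for wf in w]

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.0, 0.3], [0.1, 0.8], [0.2, 0.1],
     [1.0, 0.7], [0.9, 0.2], [1.1, 0.5]]
y = [0, 0, 0, 1, 1, 1]
weights = relief_weights(X, y)
```

Because the weighting needs only neighbor lookups and no classifier training, it works as the cheap, classifier-independent pre-processing filter the abstract describes.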

14.
In aspect-level sentiment classification, existing methods are weak at reinforcing aspect-term information and make insufficient use of local feature information. To address these problems, this paper proposes a feature-fusion learning network for aspect-level sentiment classification. First, reviews are processed into text, aspect, and text-aspect input sequences; after vector representations of the inputs are obtained with a bidirectional Transformer representation encoder, an attention encoder models the context and aspect terms to obtain hidden states and extract semantic information. Then, based on the hidden-state features, an aspect transformation component generates aspect-specific text vector representations, fusing aspect information into the context representation. Finally, local features are extracted from the aspect-specific text vectors by a text-position weighting module and fused with the global features to obtain the final representation for sentiment classification. Experiments on English datasets and a Chinese review dataset show that the proposed network improves classification performance.

15.
Advances in Reinforcement Learning for Autonomous Robots   Cited: 9 (self-citations: 1, citations by others: 8)
陈卫东  席裕庚  顾冬雷 《机器人》2001,23(4):379-384
Although behavior-based autonomous robots are highly robust, they lack the adaptability needed for dynamic environments. Reinforcement learning allows a robot to accomplish tasks through learning, without requiring the designer to fully prescribe all of the robot's actions in advance. It is a novel learning method developed by combining dynamic programming and supervised learning: through trial-and-error interaction with the environment, the robot continually improves its performance using reward and penalty signals derived from successes and failures, thereby reaching its goal, while tolerating delayed evaluation. Owing to its outstanding ability to solve complex problems, reinforcement learning has become a very promising approach to robot learning. This paper systematically reviews the state of reinforcement learning research in autonomous robots, points out open problems, analyzes several solution approaches, and looks ahead to future trends.

16.
By analyzing the structure and writing characteristics of Uyghur letters, this paper proposes an online handwritten Uyghur letter recognition scheme, drawing on the normalization, feature extraction, and common classification methods developed for handwritten Chinese character recognition in order to identify the best technical choices. The comparative experiments use eight different normalization preprocessing methods, the coordinate-normalization-based feature extraction (NCFE) method, and four classifiers: the modified quadratic discriminant function (MQDF), the discriminative learning quadratic discriminant function (DLQDF), learning vector quantization (LVQ), and the support vector machine (SVM). Spatial-geometric features of characters within documents are also considered to further improve recognition performance. In experiments with 128 Uyghur letter classes and 38,400 test samples, the best correct recognition rate reached 89.08%, laying an important foundation for further research on recognition techniques tailored to the characteristics of Uyghur letters.

17.
For online job advertisements on the Internet, an accurate salary prediction model helps job seekers choose suitable positions. Current approaches obtain features of a position's textual description through word frequencies or averaged word vectors, which cannot fully capture the text's semantics. To address this problem, this paper uses the deep document representation model Doc2vec to compute text feature vectors, which characterize the semantic features of the text more deeply. Experimental results show that, compared with TF-IDF and word2vec, features extracted with Doc2vec yield better salary prediction.

18.
Most current tracking approaches utilize only one type of feature to represent the target and learn the appearance model of the target using only the current frame or a few recent ones. The limited representation of a single type of feature might not represent the target well. Moreover, an appearance model learned from the current frame or a few recent ones is intolerant of abrupt appearance changes in short time intervals. These two factors might cause tracking failure. To overcome these two limitations, in this paper, we apply Augmented Kernel Matrix (AKM) classification to combine two complementary features, pixel intensity and LBP (Local Binary Pattern) features, to enrich the target's representation. Meanwhile, we employ AKM clustering to group the tracking results into a few aspects. Then, representative patches are selected and added to the training set to learn the appearance model. This makes the appearance model cover more aspects of the target's appearance and makes it more robust to abrupt appearance changes. Experiments compared with several state-of-the-art methods on challenging sequences demonstrate the effectiveness and robustness of the proposed algorithm.

19.
Within manufacturing, features have been widely accepted as useful concepts, and in particular they are used as an interface between CAD and CAPP systems. Previous research on feature recognition focuses on the issues of intersecting features and multiple interpretations, but does not address the problem of custom feature representation. Representation of features is an important aspect of making feature recognition more applicable in practice. In this paper a hybrid procedural and knowledge-based approach based on artificial intelligence planning is presented, which addresses both the classic feature interpretation problem and the feature representation problem. STEP designs are presented as case studies in order to demonstrate the effectiveness of the model.

20.
Person re-identification accuracy depends mainly on two aspects: feature description and metric learning. For feature description, existing features struggle with viewpoint changes in pedestrian images, so a color-label feature is fused with color and texture features, and histograms are extracted over regions and blocks to obtain image features. For metric learning, the traditional kernel local Fisher discriminant analysis method maps all query images into the same feature space, ignoring the differing importance of different regions of a query image; therefore, building on kernel local Fisher discriminant analysis, features are grouped by region, and a query-adaptive score fusion method is used to describe the importance of different image regions, thereby realizing metric learning. On the VIPeR and iLIDS datasets, experimental results show that the fused feature description clearly outperforms the original features, and the improved metric learning method effectively increases person re-identification accuracy.
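The feature-description side — region-divided histogram extraction — can be sketched minimally: divide an image into horizontal regions, compute a per-region histogram, and concatenate. The color-label and texture channels of the paper and the metric-learning stage are omitted; the image size, bin counts, and single scalar "color" channel are invented for the demo.

```python
# Sketch: region-divided histogram features for person re-identification.
def region_histograms(image, n_regions=3, n_bins=4, max_val=256):
    """image: 2-D list of scalar 'color' values. Returns the concatenated,
    per-region L1-normalized histograms."""
    h = len(image)
    bounds = [round(i * h / n_regions) for i in range(n_regions + 1)]
    feats = []
    for r in range(n_regions):
        hist = [0] * n_bins
        for row in image[bounds[r]:bounds[r + 1]]:
            for v in row:
                hist[min(v * n_bins // max_val, n_bins - 1)] += 1
        total = sum(hist) or 1
        feats.extend([c / total for c in hist])   # L1-normalize each region
    return feats

# Toy 6x4 "image": dark upper body, bright lower body.
image = [[30] * 4] * 3 + [[220] * 4] * 3
feat = region_histograms(image)
```

Horizontal regions roughly track head/torso/legs, which is what gives region-divided histograms some robustness to viewpoint change compared with one global histogram.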

