Similar Documents
 20 similar documents retrieved; search time: 31 ms
1.
方敏  王君  王红艳  李天涯 《红外与激光工程》2016,45(10):1028003-1028003(8)
For feature extraction from hyperspectral remote sensing data, a new Supervised Neighbor Reconstruction Analysis (SNRA) algorithm is proposed. The method first reconstructs each data point from its nearest neighbors of the same class; it then keeps this reconstruction relationship unchanged in the low-dimensional embedding space while separating data points of different classes as far as possible, and uses the total scatter matrix to constrain the correlation among the data. Finally, an optimal projection matrix is obtained and used to extract discriminative features. SNRA not only preserves the local structure of same-class data but also enhances the separability of different-class data, while reducing redundant information. Experimental results on the Indian Pines and KSC hyperspectral remote sensing data sets show that the proposed method better reveals the intrinsic characteristics of hyperspectral data, extracts more effective discriminative features, and improves classification performance.
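The abstract only outlines the SNRA procedure, so a minimal sketch may help make the two main steps concrete: least-squares reconstruction of each sample from its same-class neighbors, followed by a generalized-eigenvalue projection. The formulation below is an assumption based on the abstract rather than the paper's exact objective; all names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def snra_sketch(X, y, k=5, d=10):
    """Rough sketch of supervised neighbor reconstruction analysis (SNRA).
    X: (n, D) samples, y: (n,) class labels, k: same-class neighbors, d: output dim.
    Assumes every class has at least k + 1 samples."""
    n, D = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.where(y == y[i])[0]
        idx = idx[idx != i]
        # k nearest neighbors of x_i within its own class
        nn = idx[np.argsort(np.linalg.norm(X[idx] - X[i], axis=1))[:k]]
        # LLE-style least-squares reconstruction weights (sum to one)
        G = X[nn] - X[i]
        C = G @ G.T + 1e-6 * np.eye(len(nn))
        w = np.linalg.solve(C, np.ones(len(nn)))
        W[i, nn] = w / w.sum()
    E = X - W @ X                      # reconstruction residuals, to be kept small
    S_rec = E.T @ E
    Xc = X - X.mean(axis=0)
    S_tot = Xc.T @ Xc                  # total scatter, used to spread the data out
    # projection maximizing total scatter relative to the reconstruction error
    vals, vecs = eigh(S_tot, S_rec + 1e-6 * np.eye(D))
    P = vecs[:, np.argsort(vals)[::-1][:d]]
    return X @ P, P                    # embedded data and projection matrix
```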

2.
余家林  孙季丰  李万益 《电子学报》2016,44(8):1899-1908
To reconstruct 3D human poses from multi-view images accurately and effectively, this paper proposes a human pose estimation algorithm based on multi-kernel sparse coding. First, to resolve the ambiguity of pose estimation over consecutive frames, an HA-SIFT descriptor is designed for representing multi-view images, jointly encoding local body topology, relative limb positions, and appearance information. Then, an objective function that accounts for both the intrinsic manifold structure of the feature space and the geometric information of the pose space is built within a multiple kernel learning framework and optimized in a Hilbert space to update the sparse codes, the over-complete dictionary, and the kernel weights. Finally, the 3D pose corresponding to an unknown input is estimated as a linear combination of the atoms of the pose dictionary. Experimental results show that, compared with kernel sparse coding, Laplacian sparse coding, and Bayesian sparse coding, the proposed method achieves higher estimation accuracy.
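The final estimation step, expressing the unknown pose as a linear combination of pose-dictionary atoms weighted by the sparse code of the image feature, can be illustrated with a small sketch. It uses a plain Lasso in place of the paper's multi-kernel formulation, and the dictionaries and names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_pose(feature, feat_dict, pose_dict, alpha=0.05):
    """feature:   (d,) image descriptor of the query frame
    feat_dict: (d, m) feature atoms of the learned dictionary
    pose_dict: (p, m) 3D-pose atoms paired with the feature atoms (p = 3 * joints)"""
    # sparse code of the input feature over the feature dictionary
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(feat_dict, feature)
    code = coder.coef_                 # (m,) sparse coefficients
    # the unknown 3D pose is taken as the same linear combination of pose atoms
    return pose_dict @ code
```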

3.
Image classification using correlation tensor analysis
Images, as high-dimensional data, usually embody large variabilities. To classify images for versatile applications, an effective algorithm is necessarily designed by systematically considering the data structure, similarity metric, discriminant subspace, and classifier. In this paper, we provide evidence that, besides the Fisher criterion, graph embedding, and tensorization used in many existing methods, the correlation-based similarity metric embodied in supervised multilinear discriminant subspace learning can additionally improve the classification performance. In particular, a novel discriminant subspace learning algorithm, called correlation tensor analysis (CTA), is designed to incorporate both graph-embedded correlational mapping and discriminant analysis in a Fisher type of learning manner. The correlation metric can estimate intrinsic angles and distances for the locally isometric embedding, which can deal with the case when Euclidean metric is incapable of capturing the intrinsic similarities between data points. CTA learns multiple interrelated subspaces to obtain a low-dimensional data representation reflecting both class label information and intrinsic geometric structure of the data distribution. Extensive comparisons with most popular subspace learning methods on face recognition evaluation demonstrate the effectiveness and superiority of CTA. Parameter analysis also reveals its robustness.
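The core idea of replacing the Euclidean metric with a correlation-based similarity when building the neighborhood graph can be sketched as follows. This covers only the graph-construction step, not the full tensor subspace learning, and the function is illustrative.

```python
import numpy as np

def correlation_affinity(X, k=5):
    """Build a kNN affinity graph using correlation rather than Euclidean distance.
    X: (n, d) samples (e.g., vectorized images)."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each sample
    Xn = Xc / (np.linalg.norm(Xc, axis=1, keepdims=True) + 1e-12)
    C = Xn @ Xn.T                                   # pairwise correlations in [-1, 1]
    W = np.zeros_like(C)
    for i in range(len(X)):
        nn = np.argsort(-C[i])[1:k + 1]             # k most correlated neighbors (skip self)
        W[i, nn] = C[i, nn]
    return np.maximum(W, W.T)                       # symmetrize the affinity matrix
```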

4.
Spatial-consistency neighborhood preserving embedding for hyperspectral data feature extraction
Manifold learning methods such as locally linear embedding (LLE) and neighborhood preserving embedding (NPE) can extract the main structural features of hyperspectral data and aid its interpretation and further processing. However, these methods ignore the correlation between neighboring pixels in a hyperspectral image. To address this problem, a spatial-consistency neighborhood preserving embedding (SC-NPE) feature extraction algorithm is proposed, which builds the local neighborhood structure of the data in the high-dimensional space through an optimized locally linear embedding that takes the correlation of neighboring pixels into account. An optimized transformation matrix is then sought to project this local neighborhood structure into a low-dimensional space, completing the feature extraction. Compared with LLE and NPE, SC-NPE considers both the manifold structure of the hyperspectral data and its spatial information in the image domain, making it better suited to hyperspectral feature extraction. Experimental results show that SC-NPE clearly outperforms comparable algorithms in hyperspectral image classification.
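A minimal way to picture the spatial-consistency idea is to fold each pixel's spatial neighborhood into its spectral feature before running NPE. The sketch below uses a plain window mean as a stand-in for the paper's optimized local reconstruction; shapes and names are illustrative.

```python
import numpy as np

def spatial_smooth(cube, window=3):
    """cube: (H, W, B) hyperspectral image. Replace each pixel's spectrum by the
    mean over a window x window spatial neighborhood; NPE (or any embedding) is
    then run on the flattened result."""
    H, Wd, B = cube.shape
    r = window // 2
    pad = np.pad(cube, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.zeros_like(cube, dtype=float)
    for i in range(H):
        for j in range(Wd):
            out[i, j] = pad[i:i + window, j:j + window].reshape(-1, B).mean(axis=0)
    return out.reshape(-1, B)          # (H*W, B) spatially smoothed spectral features
```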

5.
In response to the problem that traditional multi-view document clustering methods separate the multi-view document representation from the clustering process and ignore the complementary characteristics of the views, an iterative algorithm for complementary multi-view document clustering, CMDC, is proposed, in which multi-view document clustering and multi-view feature adjustment are carried out in a mutually unified manner. In CMDC, complementary text documents are selected from the clustering results to help adjust the contribution of view features by learning a local measurement metric for each document view. The complementary documents selected from the results across views are used to promote feature tuning of the clusters, and the partition consistency of multi-view document clustering is resolved through the consistency of the view-specific metrics. Experimental results show that CMDC effectively improves multi-view clustering performance.

6.
This paper first applies linear reconstruction to the nearest-neighbor distances used in image feature extraction, then optimizes their distribution over the whole manifold to obtain an expression for the local low-dimensional features of the image data with the optimal linear reconstruction weights as variables. Finally, a strategy for automatically selecting the neighborhood size and intrinsic dimension for image feature extraction is constructed from the minimum of a stability measure of this function. Experiments show that the method automatically selects the intrinsic dimension and neighborhood size of image data, is simple to compute, achieves a high matching rate, and has low computational complexity.
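As a rough stand-in for the selection strategy described above, one can grid-search the neighborhood size and embedding dimension and keep the pair with the smallest LLE reconstruction error; note that this proxy is an assumption for illustration and is not the stability measure defined in the paper.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def select_k_and_dim(X, k_range=range(4, 16), d_range=range(2, 11)):
    """Grid-search neighborhood size k and embedding dimension d using the
    LLE reconstruction error as a simple selection criterion."""
    best, best_err = None, np.inf
    for k in k_range:
        for d in d_range:
            if d >= k:          # keep more neighbors than output dimensions
                continue
            lle = LocallyLinearEmbedding(n_neighbors=k, n_components=d)
            lle.fit(X)
            if lle.reconstruction_error_ < best_err:
                best, best_err = (k, d), lle.reconstruction_error_
    return best, best_err       # chosen (k, d) and its reconstruction error
```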

7.
We propose a novel multi-view document clustering method with the graph-regularized concept factorization (MVCF). MVCF makes full use of multi-view features for more comprehensive understanding of the data and learns weights for each view adaptively. It also preserves the local geometrical structure of the manifolds for multi-view clustering. We have derived an efficient optimization algorithm to solve the objective function of MVCF and proven its convergence by utilizing the auxiliary function method. Experiments carried out on three benchmark datasets have demonstrated the effectiveness of MVCF in comparison to several state-of-the-art approaches in terms of accuracy, normalized mutual information and purity.

8.
Most dimensionality reduction methods construct the nearest-neighbor graph using the Euclidean distance between images; this type of distance may not reflect the intrinsic structure. Different from existing methods, we propose to use sets rather than single images as input for more accurate distance calculation. The set, named a neighbor circle, consists of the corresponding data point and its neighbors in the same class. A supervised dimensionality reduction method, intrinsic structure feature transform (ISFT), is then developed: it captures the local structure by constructing the nearest-neighbor graph with the Log-Euclidean distance between neighbor circles as the measurement. Furthermore, ISFT finds representative images for each class and captures the global structure by using the projected samples of these representatives to maximize the between-class scatter measure. The proposed method is compared with several state-of-the-art dimensionality reduction methods on various publicly available databases. Extensive experimental results demonstrate the effectiveness of the proposed approach.
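The Log-Euclidean measurement between two neighbor circles can be illustrated by summarizing each set with a regularized covariance matrix and comparing matrix logarithms; this is a generic sketch of that distance under assumed representations, not the paper's full ISFT pipeline.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(set_a, set_b, eps=1e-6):
    """Distance between two 'neighbor circles' (a sample plus its same-class
    neighbors), each summarized by a regularized covariance matrix.
    set_a, set_b: (m, d) arrays of feature vectors, m >= 2."""
    def log_cov(S):
        C = np.cov(S, rowvar=False) + eps * np.eye(S.shape[1])  # keep it SPD
        return logm(C).real
    return np.linalg.norm(log_cov(set_a) - log_cov(set_b), ord='fro')
```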

9.
Canonical correlation analysis (CCA) is an efficient method for dimensionality reduction on two-view data. However, as an unsupervised learning method, CCA cannot utilize partly given label information in multi-view semi-supervised scenarios. In this paper, we propose a novel two-view semi-supervised learning method, called semi-supervised canonical correlation analysis based on label propagation (LPbSCCA). LPbSCCA incorporates a new sparse representation based label propagation algorithm to infer label information for unlabeled data. Specifically, it firstly constructs dictionaries consisting of all labeled samples; and then obtains reconstruction coefficients of unlabeled samples using sparse representation technique; at last, by combining given labels of labeled samples, estimates label information for unlabeled ones. After that, it constructs soft label matrices of all samples and probabilistic within-class scatter matrices in each view. Finally, in order to enhance discriminative power of features, it is formulated to maximize the correlations between samples of the same class from cross views, while minimizing within-class variations in the low-dimensional feature space of each view simultaneously. Furthermore, we also extend a general model called LPbSMCCA to handle data from multiple (more than two) views. Extensive experimental results from several well-known datasets demonstrate that the proposed methods can achieve better recognition performances and robustness than existing related methods.
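The label-propagation step, inferring soft labels for unlabeled samples from their sparse reconstruction over a dictionary of labeled samples, can be sketched as below; the nonnegative Lasso and the normalization scheme are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_label_propagation(X_lab, y_lab, X_unlab, n_classes, alpha=0.05):
    """Soft labels for unlabeled samples: sparsely reconstruct each one from the
    labeled dictionary and accumulate coefficients per class."""
    D = X_lab.T                                     # (d, n_lab) dictionary of labeled samples
    soft = np.zeros((len(X_unlab), n_classes))
    for i, x in enumerate(X_unlab):
        coder = Lasso(alpha=alpha, fit_intercept=False, positive=True, max_iter=5000)
        coder.fit(D, x)                             # x is approximated by D @ coef
        for c in range(n_classes):
            soft[i, c] = coder.coef_[y_lab == c].sum()
        s = soft[i].sum()
        soft[i] = soft[i] / s if s > 0 else np.full(n_classes, 1.0 / n_classes)
    return soft                                     # each row is a probabilistic label
```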

10.
The robustness against noise, outliers, and corruption is a crucial issue in image feature extraction. To address this concern, this paper proposes a discriminative low-rank embedding image feature extraction algorithm. Firstly, to enhance the discriminative power of the extracted features, a discriminative term is introduced using label information, obtaining global discriminative information and learning an optimal projection matrix for data dimensionality reduction. Secondly, manifold constraints are incorporated, unifying low-rank embedding and manifold constraints into a single framework to capture the geometric structure of local manifolds while considering both local and global information. Finally, test samples are projected into a lower-dimensional space for classification. Experimental results demonstrate that the proposed method achieves classification accuracies of 95.62%, 95.22%, 86.38%, and 86.54% on the ORL, CMUPIE, AR, and COIL20 datasets, respectively, outperforming dimensionality reduction-based image feature extraction algorithms.

11.
赵雪梅  李玉  赵泉华 《电子学报》2016,44(3):679-686
This paper models the neighborhood relations of the label field and the feature field with a hidden Markov random field and a Gaussian model, respectively, and proposes a fuzzy clustering segmentation algorithm based on a hidden Markov Gaussian random field model. The algorithm defines the prior probability with the hidden Markov random field model and introduces it into the KL (Kullback-Leibler) information as a scale control factor; in the objective function, the KL term acts as a regularizer whose coefficient controls the degree of fuzziness. In the Gaussian-model-based posterior probability, pixel correlation is defined over both the spatial and the spectral domain, and the negative log of this probability serves as the dissimilarity measure between a pixel and a cluster center. Segmentation experiments on synthetic and high-resolution remote sensing images demonstrate the effectiveness and generality of the algorithm.
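The membership update implied by the abstract, a KL-regularized fuzzy clustering step in which the MRF prior scales an exponential of the pixel-to-center dissimilarity, can be written compactly; the exact dissimilarity and prior definitions of the paper are not reproduced here, and the names are illustrative.

```python
import numpy as np

def kl_fcm_memberships(dist, prior, lam=1.0):
    """One membership update of KL-regularized fuzzy clustering.
    dist:  (n, K) dissimilarity of each pixel to each cluster center
           (e.g., negative log Gaussian likelihood over space and spectrum)
    prior: (n, K) prior probabilities from the hidden-MRF neighborhood
    lam:   fuzziness / regularization weight"""
    u = prior * np.exp(-dist / lam)
    return u / u.sum(axis=1, keepdims=True)    # rows sum to one
```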

12.
Traditional fuzzy system modeling is essentially a single-view learning paradigm: when faced with scenarios suited to multi-view processing, it can only model each view separately, which often leads to unsatisfactory generalization. To address this shortcoming, this paper studies fuzzy system modeling with multi-view learning ability. Building on the classical L2-type TSK fuzzy system and introducing a collaborative learning term with multi-view learning capability, a multi-view TSK fuzzy system (MV-TSK-FS) modeling method is proposed. MV-TSK-FS not only makes effective use of the independent sample information formed by the distinct features of each view, but also exploits the information inherent in the correlations among views, ultimately improving generalization. Experimental results on simulated and real-world data sets show that the multi-view fuzzy system generalizes better and is more widely applicable than traditional single-view fuzzy modeling methods.

13.
Hyperspectral image classification has been a research hotspot in recent years. The high dimensionality of the data causes the "curse of dimensionality", so dimensionality reduction is key. Given the scarcity of labeled training samples for hyperspectral data, an unsupervised feature selection method is proposed for dimensionality reduction. The method preserves both the discriminative power and the local geometric structure of the original hyperspectral data. To preserve discriminative power, the original data are reconstructed from the selected features, and minimizing the reconstruction error turns feature selection into an optimization problem. To preserve the local geometric structure, a nearest-neighbor graph is built and added to the objective function as a regularization term. The optimization problem is solved by iterative gradient descent, and the selected feature subset is used for hyperspectral image classification. Experiments on real data sets show that the new method improves classification accuracy.

14.
To address the difficulty of classifying hyperspectral images caused by the large number of spectral bands and the high correlation between adjacent bands, a hyperspectral image classification algorithm based on adaptive differential evolution feature selection is proposed. First, the population vectors are initialized and an adaptive differential evolution search over the features generates candidate feature subsets. Then, ReliefF is used to remove duplicate features according to the feature ranking, building a feature list over all features. Finally, a fuzzy k-nearest-neighbor classifier computes the classification accuracy of each vector, and feature subsets are evaluated with a wrapper model. Experimental results on the Indiana and KSC data sets verify the effectiveness and reliability of the algorithm; compared with several other feature selection algorithms, it achieves higher overall classification accuracy and better Kappa coefficients.
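A stripped-down wrapper of this kind can be sketched with a real-coded differential evolution over band masks evaluated by cross-validated kNN; the adaptive parameter control, the ReliefF de-duplication, and the fuzzy kNN classifier of the paper are all replaced by simpler stand-ins here, and the parameters are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def de_feature_select(X, y, pop=20, gens=30, F=0.5, CR=0.9, seed=0):
    """Wrapper-style differential evolution over binary band masks, scored by a
    plain kNN classifier (assumes at least 3 samples per class for 3-fold CV)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.random((pop, d))                       # real-coded population in [0, 1]

    def fitness(v):
        mask = v > 0.5                             # threshold to a band subset
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    fit = np.array([fitness(v) for v in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            trial = np.where(rng.random(d) < CR, a + F * (b - c), P[i])
            trial = np.clip(trial, 0, 1)
            f = fitness(trial)
            if f > fit[i]:                         # greedy selection
                P[i], fit[i] = trial, f
    return P[np.argmax(fit)] > 0.5                 # selected band mask
```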

15.
Latent tree graphical models describe the potential relations among variables by introducing hidden nodes, and can therefore model the correlations among variables more effectively. In learning a tree model, the amount of useful features extracted from the observed data determines how well the model can capture deep relations among the variables; existing learning algorithms compute statistics directly from the observations and do not treat the features in the observations separately. To make fuller use of the information in the observed data, this paper proposes a latent tree learning method based on a fuzzy multi-feature recursive grouping algorithm. First, the raw observations of each variable are converted into several fuzzy features through fuzzy membership functions reflecting their characteristics, forming multi-dimensional fuzzy feature vectors. Next, the distances between the fuzzy feature vectors of every pair of variables are computed and combined into a fuzzy feature distance matrix over all variables. Finally, the latent tree model is learned from this distance matrix with a recursive grouping algorithm. The proposed algorithm is applied to modeling stock return data and temperature data, demonstrating its practicality and effectiveness.
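The first two steps, mapping each variable's observations to fuzzy membership features and turning their pairwise distances into the matrix fed to recursive grouping, can be illustrated as follows; the Gaussian membership functions and quantile centers are illustrative choices, not the paper's.

```python
import numpy as np

def fuzzy_feature_distance(obs, n_memberships=3):
    """obs: (n_vars, n_samples) observations of each variable.
    Each variable is mapped to several Gaussian fuzzy membership features and the
    pairwise distances of these feature vectors form the matrix handed to the
    recursive grouping step."""
    feats = []
    for x in obs:
        centers = np.quantile(x, np.linspace(0.1, 0.9, n_memberships))
        sigma = x.std() + 1e-8
        # (n_memberships, n_samples) fuzzy membership degrees of every observation
        mu = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * sigma ** 2))
        feats.append(mu.ravel())
    feats = np.array(feats)                         # (n_vars, n_memberships * n_samples)
    diff = feats[:, None, :] - feats[None, :, :]
    return np.linalg.norm(diff, axis=2)             # (n_vars, n_vars) distance matrix
```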

16.
For anomaly detection in high-dimensional feature spaces containing multiple attack patterns, this paper proposes a hierarchical support vector machine (HSVM) anomaly detection method with supervised local decisions. The binary-tree structure of the HSVM divides and conquers the complex anomaly detection problem: at each internal node, the training signal required for supervised learning is constructed by an information-gain criterion to supervise the local decision; during the training of the binary SVM embedded at each internal node, a locally optimal feature subset for intrusion detection is selected according to the sensitivity of the local decision boundary to each feature. Experimental results show that the proposed method builds a detection model with good stability under the supervision of local training signals, and improves detection precision and efficiency with a more compact set of features.
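One internal node of such a hierarchy can be sketched as ranking features by the information gain of a simple median split against the node's binary training signal and fitting a local SVM on the top-ranked subset; the split rule, subset size, and kernel are illustrative assumptions, and labels are assumed to be 0/1 integers.

```python
import numpy as np
from sklearn.svm import SVC

def entropy(y):
    """Shannon entropy of an integer label vector."""
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def node_svm(X, y_binary, n_feats=10):
    """One HSVM tree node (sketch): rank features by the information gain of a
    median split against the node's binary training signal, then fit the node's
    binary SVM on the top-ranked local feature subset."""
    base = entropy(y_binary)
    gains = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        mask = X[:, j] > np.median(X[:, j])
        frac = mask.mean()
        if 0 < frac < 1:
            cond = frac * entropy(y_binary[mask]) + (1 - frac) * entropy(y_binary[~mask])
            gains[j] = base - cond
    sel = np.argsort(-gains)[:n_feats]              # locally selected feature subset
    clf = SVC(kernel='rbf').fit(X[:, sel], y_binary)
    return clf, sel
```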

17.
In this paper, a manifold learning based method named local maximal margin discriminant embedding (LMMDE) is developed for feature extraction. Like other manifold learning based approaches, LMMDE preserves locality. Moreover, LMMDE takes into consideration the intra-class compactness and inter-class separability of samples lying on each manifold. More concretely, for each data point, it pulls neighboring data points with the same class label towards it as near as possible, while simultaneously pushing neighboring data points with different class labels away from it as far as possible, under the constraint of locality preservation. Compared with most up-to-date manifold learning based methods, this contributes to pattern classification in two ways: on the one hand, the local structure of each manifold is still kept in the embedding space; on the other hand, the discriminant information in each manifold can be exploited. Experimental results on the ORL, Yale and FERET face databases show the effectiveness of the proposed method.

18.
In law enforcement applications such as surveillance and forensics, video is often presented as evidence. It is therefore of paramount importance to establish the authenticity and reliability of the video data. This paper presents an intelligent video authentication algorithm which integrates learning based Support Vector Machine classification with Singular Value Decomposition watermarking. During video capture and storage, intrinsic local correlation information is extracted from the frames and embedded in the frames at local levels. Tamper detection and classification is performed using the inherent video information and embedded correlation information. The proposed algorithm is independent of the choice of watermark and does not require any key to store. Further, it is robust to global tampering such as frame addition and removal, local attacks such as object alteration and can differentiate between acceptable operations and malicious tampering. Experiments are performed on an extensive database which contains non-tampered videos and videos with several types of tampering. The results show that the proposed algorithm outperforms existing video authentication algorithms.
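The embedding side can be illustrated with a quantization-style perturbation of a frame block's singular values; this is a generic SVD watermarking sketch under assumed parameters, not the paper's exact embedding, and the SVM-based tamper classifier is not shown.

```python
import numpy as np

def embed_bits_svd(block, bits, strength=2.0):
    """Embed a short bit string (e.g., quantized local-correlation features) into one
    grayscale frame block by nudging its leading singular values.
    block: (m, n) array; bits: iterable of 0/1 values."""
    U, S, Vt = np.linalg.svd(block.astype(float), full_matrices=False)
    for i, b in enumerate(bits[:len(S)]):
        # push each singular value toward an even or odd multiple of `strength`
        # depending on the bit value (quantization-index style embedding)
        q = np.round(S[i] / strength)
        if int(q) % 2 != b:
            q += 1
        S[i] = q * strength
    return U @ np.diag(S) @ Vt          # watermarked block
```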

19.
侯榜焕  姚敏立  贾维敏  沈晓卫  金伟 《红外与激光工程》2017,46(12):1228001-1228001(8)
Hyperspectral remote sensing images have many features (bands) and high redundancy, so feature selection has become a research focus in hyperspectral classification. To address this, a hyperspectral classification algorithm that simultaneously preserves spatial and spectral structure is proposed. Considering the physical characteristics of hyperspectral images, the image is first reconstructed with weighted spatial-spectral information so that spatial structure is automatically folded into the spectral features, forming a spatial-spectral feature set. On top of a least-squares regression model that preserves the global similarity structure of the data set, a local manifold-structure regularization term is added so that the selected feature subset better preserves the intrinsic structure of the data; the influence of window size and regularization parameters on classification accuracy is also discussed. Experiments on the Indian Pines, PaviaU, and Salinas data sets show that the overall classification accuracies of the selected feature subsets reach 93.22%, 96.01%, and 95.90%. The algorithm not only makes full use of the spatial structure information of hyperspectral images but also mines the intrinsic structure of the data, yielding a more discriminative feature subset and clearly improving classification accuracy over traditional methods.

20.
Diffusion-based compactness is an effective method for foreground-based saliency detection, in which one key step is the conventional graph construction. However, the conventional graph only captures local structure and does not preserve global relevance information. Therefore, diffusion-based compactness cannot highlight a complete salient object that contains multiple areas with different features, and the extracted salient regions show weak homogeneity. To address these problems, we propose a saliency detection method via coarse-to-fine diffusion-based compactness with a weighted learning affinity matrix. Firstly, we construct multi-view conventional graphs to calculate a rough compactness cue. Secondly, we build two-stage multi-view weighted graphs using a weighted learning affinity matrix and compute the coarse-to-fine compactness cue. Extensive experiments on three benchmark datasets demonstrate its superiority over several state-of-the-art methods.
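The underlying diffusion step, spreading an initial compactness score over a superpixel affinity graph in closed form, can be sketched as below; the affinity matrix construction and the two-stage weighted learning of the paper are not reproduced, and the names are illustrative.

```python
import numpy as np

def diffuse_saliency(W, seed, alpha=0.99):
    """Graph diffusion of a seed saliency vector (manifold-ranking style).
    W:    (n, n) affinity matrix over superpixels
    seed: (n,) initial compactness score per node"""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt                       # symmetrically normalized affinity
    n = len(W)
    return np.linalg.solve(np.eye(n) - alpha * S, seed)   # closed-form diffusion result
```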

