Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
Because linear transforms cannot adequately preserve the nonlinear structure of data, while nonlinear transforms typically require a large amount of complex computation, a fast and efficient nonlinear feature extraction method is proposed. By studying the linear invariance of the mutual-information gradient in kernel space, the method uses a fast quadratic-entropy estimate of mutual information together with a gradient-ascent optimization strategy, extracting discriminative nonlinear higher-order statistics while keeping the computational cost low. Detailed data-projection and classification experiments show that the method outperforms traditional algorithms in both classification performance and time complexity.
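As a hedged sketch of the quadratic-entropy machinery such fast mutual-information methods rely on (the function name and kernel width are illustrative, not from the paper), the Rényi quadratic entropy of a Parzen density has a closed form as a double sum of Gaussian kernels:

```python
import numpy as np

def quadratic_entropy(x, sigma=1.0):
    """Parzen-window estimate of the Renyi quadratic entropy
    H2(X) = -log integral p(x)^2 dx, which has the closed form
    -log( (1/N^2) * sum_ij G(x_i - x_j; sqrt(2)*sigma) )
    for a Gaussian kernel of width sigma."""
    x = np.atleast_2d(x)                      # (N, d)
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]      # pairwise differences
    sq = np.sum(diff ** 2, axis=-1)           # squared distances
    s2 = 2.0 * sigma ** 2                     # widths add under kernel convolution
    gauss = np.exp(-sq / (2.0 * s2)) / ((2.0 * np.pi * s2) ** (d / 2))
    return -np.log(gauss.mean())
```

Because the estimator avoids any explicit density integration, it supports the kind of fast gradient-ascent optimization the abstract describes.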

3.
Local feature information plays an important role in image segmentation; however, text-based instance segmentation depends on the input textual expression and cannot extract local feature information directly from the raw input image. To address this, a noun-guided local feature extraction deep neural network (NgLFNet) is proposed. NgLFNet automatically mines local feature information of the object to be segmented according to key nouns in the input textual expression. Specifically, the model first obtains key nouns through sentence analysis; it then extracts the corresponding features with text and image encoders and uses the key nouns with a multi-head attention mechanism to obtain local features of high-attention regions; next it progressively fuses the multimodal features; finally, a decoding-refinement module uses the obtained local features to refine the predicted mask, yielding the final result. Comparisons with several mainstream text-based instance segmentation methods show that the approach improves segmentation quality.

4.
Feature extraction is a fundamental step in the feature matching task, and many studies are devoted to it. Recent research proposes to extract features with pre-trained neural networks, whose output is used for feature matching. However, the quality and quantity of the features extracted by these methods often fail to meet the requirements of practical applications. In this article, we propose a two-stage object-aware-based feature matching method. Specifically, the proposed object-aware block predicts a weighted feature map through a mask predictor and a prefeature extractor, so that the subsequent feature extractor pays more attention to the key regions by using the weighted feature map. In addition, we introduce a state-of-the-art model estimation algorithm to align image pairs as the input of the object-aware block. Furthermore, our method employs an advanced outlier removal algorithm to further improve matching quality. Experimental results show that our object-aware-based feature matching method improves the performance of feature matching compared with several state-of-the-art methods.

5.
Info-margin maximization for feature extraction   (cited 1 time: 0 self-citations, 1 by others)
We propose a novel linear feature extraction method, info-margin maximization (InfoMargin), from an information-theoretic viewpoint. It aims to achieve a low generalization error by maximizing the information divergence between the distributions of different classes while minimizing the entropy of the distribution within each class. We estimate the density of each class with a Gaussian-kernel Parzen window and develop an efficient, fast-converging algorithm to compute the quadratic entropy and divergence measure. Experimental results show that our method outperforms traditional feature extraction methods in classification and data visualization tasks.
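A between-class divergence of the kind this abstract maximizes can be estimated in closed form from Parzen densities. As a hedged illustration (the Cauchy-Schwarz divergence is one standard kernel-friendly choice; the paper's exact divergence measure may differ):

```python
import numpy as np

def _information_potential(a, b, sigma):
    """Kernel estimate of the cross information potential, integral p_a p_b dx."""
    diff = a[:, None, :] - b[None, :, :]
    sq = np.sum(diff ** 2, axis=-1)
    s2 = 2.0 * sigma ** 2
    d = a.shape[1]
    g = np.exp(-sq / (2.0 * s2)) / ((2.0 * np.pi * s2) ** (d / 2))
    return g.mean()

def cs_divergence(x0, x1, sigma=1.0):
    """Cauchy-Schwarz divergence between two Parzen densities:
    D = -log( V01^2 / (V00 * V11) ); it is zero iff the densities coincide."""
    v00 = _information_potential(x0, x0, sigma)
    v11 = _information_potential(x1, x1, sigma)
    v01 = _information_potential(x0, x1, sigma)
    return -np.log(v01 ** 2 / (v00 * v11))
```

Maximizing such a divergence over a linear projection pushes the projected class densities apart, matching the abstract's stated objective.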

6.
For the behavior-classification problem in intrusion-prevention applications of wireless sensor networks, an electronic-fence intrusion detection and anomalous-intrusion pattern recognition system based on time-domain feature extraction is proposed. Because frequency-domain processing is computationally heavy and complex and the sensor sampling rate is high, the raw signal is first preprocessed to extract time-domain features in order to lighten the transmission burden and reduce latency; a three-layer BP neural network then classifies the target events, and the accuracies of several typical classifiers are compared. Simulation results show that, compared with frequency-domain processing, the method has low complexity and is easy to implement: several classifiers reach over 86% accuracy, the BP network reaches 94% on the test set, and the accuracy gap between training and test sets is small.
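A minimal sketch of the kind of low-cost time-domain preprocessing the abstract contrasts with FFT-based methods (the exact feature set is an assumption; the paper does not list its features here):

```python
import numpy as np

def time_domain_features(sig):
    """Cheap time-domain features for one 1-D sensor frame, avoiding any
    frequency-domain transform (feature choice is illustrative)."""
    sig = np.asarray(sig, dtype=float)
    centered = sig - sig.mean()
    # count sign changes of the mean-removed signal
    zero_cross = np.sum(np.diff(np.sign(centered)) != 0)
    return np.array([
        sig.mean(),              # DC level
        sig.std(),               # spread
        np.abs(sig).max(),       # peak amplitude
        np.mean(sig ** 2),       # average energy
        zero_cross / len(sig),   # zero-crossing rate
    ])
```

Each frame is reduced to a short vector before transmission, and such vectors are what a small BP (multilayer perceptron) classifier would consume.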

7.
To address the problems that the multifractal dimension does not reflect image intensity information well and depends strongly on image scale, two improvements are proposed based on a study of the generalized dimension of order q, D(q). By analyzing the factors that influence the growth probability, a weighted box-counting method incorporating intensity information is proposed, along with a two-dimensional fractal-dimension computation based on grid intensity and mean value. Experiments show that the improved multifractal algorithms increase feature discriminability and yield more robust and effective features; applying them to a blood-cell recognition system improves the accuracy of white-blood-cell classification.
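For reference, the baseline generalized dimension D(q) that the abstract improves upon can be estimated by intensity-weighted box counting; a hedged sketch (box sizes and the regression over scales are illustrative choices):

```python
import numpy as np

def generalized_dimension(img, q, sizes=(2, 4, 8, 16)):
    """Box-counting estimate of the generalized (Renyi) dimension D(q),
    q != 1, with intensity mass p_i = (box sum) / (total sum):
    D(q) = (1/(q-1)) * slope of log sum_i p_i^q versus log s."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    log_s, log_chi = [], []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        # sum intensities inside each s-by-s box
        boxes = img[:h * s, :w * s].reshape(h, s, w, s).sum(axis=(1, 3))
        p = boxes[boxes > 0] / total
        log_s.append(np.log(s))
        log_chi.append(np.log(np.sum(p ** q)))
    slope = np.polyfit(log_s, log_chi, 1)[0]
    return slope / (q - 1)
```

A uniform image has D(q) = 2 for every q, which makes a convenient sanity check for any modified growth-probability scheme.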

8.
Bigram Text Feature Extraction for Uyghur   (cited 1 time: 0 self-citations, 1 by others)
Text feature representation is the most important step in automatic text classification. In vector-space-model (VSM) text representation, the granularity of the feature unit directly affects classification performance. In Uyghur text classification, single-word features do not adequately characterize document content. After analyzing the contribution of Uyghur bigrams to text classification, a new statistic, CHIMI, is constructed, and on this basis a Uyghur bigram feature extraction algorithm is proposed. The extracted bigrams are used as text features, and Uyghur documents are classified with a support vector machine (SVM). Experimental results show that, compared with word features, bigram features improve the precision and recall of Uyghur text classification, verifying the effectiveness of the algorithm.
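CHIMI is defined in the paper itself; as a hedged illustration of combining the two statistics its name suggests, the chi-square and pointwise mutual information of a bigram/class contingency table can both be computed from the same four counts (the product combination below is purely an assumption):

```python
import math

def chi_square(a, b, c, d):
    """Chi-square statistic of a 2x2 feature/class contingency table:
    a = in-class docs containing the bigram, b = other docs containing it,
    c = in-class docs without it,            d = other docs without it."""
    n = a + b + c + d
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / den if den else 0.0

def mutual_information(a, b, c, d):
    """Pointwise mutual information between bigram presence and the class."""
    n = a + b + c + d
    if a == 0:
        return 0.0
    return math.log((a * n) / ((a + b) * (a + c)))

def chimi_score(a, b, c, d):
    """Illustrative combined statistic (the product form is an assumption)."""
    return chi_square(a, b, c, d) * mutual_information(a, b, c, d)
```

A bigram concentrated in one class then outranks one spread evenly across classes, which is the behaviour any such selection statistic needs.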

9.
A Generalized Principal Component Analysis Method for Feature Extraction   (cited 2 times: 0 self-citations, 2 by others)
A generalized PCA feature extraction method is proposed. The method first rearranges the image matrix, constructs the total scatter matrix from the rearranged matrices, and then computes the optimal projection vectors for feature extraction. It is a further generalization of 2DPCA and modular 2DPCA: it can build a scatter matrix of arbitrary dimension and obtain projection vectors of arbitrary dimension. Experiments show that as the dimension of the total scatter matrix decreases, the feature extraction power of generalized PCA grows stronger and feature extraction becomes faster.
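A rough sketch of the idea, assuming one particular re-stacking scheme (row-major flattening cut into width-d rows; the paper's exact rearrangement may differ): the scatter matrix becomes d-by-d for any d dividing the pixel count, which is how the dimension can be chosen freely.

```python
import numpy as np

def generalized_pca(images, d, k):
    """Sketch of a generalized 2DPCA: each h-by-w image is re-stacked into
    an (h*w/d)-by-d matrix, so the total scatter matrix is d-by-d for any
    d that divides h*w; the k leading eigenvectors give the projection."""
    n, h, w = images.shape
    assert (h * w) % d == 0
    stacked = images.reshape(n, (h * w) // d, d)   # re-stacked matrices A_i
    centered = stacked - stacked.mean(axis=0)
    # total scatter: (1/n) * sum_i A_i^T A_i
    scatter = np.einsum('nij,nik->jk', centered, centered) / n
    vals, vecs = np.linalg.eigh(scatter)
    proj = vecs[:, ::-1][:, :k]                    # top-k eigenvectors
    return stacked @ proj, proj                    # features, projection
```

Setting d = w recovers ordinary 2DPCA on this sketch, while smaller d gives the smaller scatter matrices the abstract reports as faster.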

10.
To address the limited expressive power of global statistical features and the noise sensitivity of salient-point features in functional-data classification, a piecewise curve-feature extraction algorithm based on statistical depth is proposed. First, the discretely observed data are smoothed, and the first- and second-order derivative functions are introduced. Then the Mahalanobis integral depth of the function and its low-order derivatives is computed piecewise, and a curve feature vector is constructed on this basis. Finally, three search schemes for selecting the tuning parameters are given and classification is studied. Experiments on the UCR datasets show that, compared with other current curve-feature extraction algorithms, the proposed algorithm effectively extracts curve features and improves classification accuracy.
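A hedged sketch of a segment-wise integrated Mahalanobis depth (the univariate cross-sectional standardisation below is an illustrative simplification of the paper's depth):

```python
import numpy as np

def segment_mahalanobis_depth(curves, start, end):
    """Integrated depth of each curve over the segment [start, end):
    pointwise depth 1 / (1 + z_t^2), with z_t the cross-sectional
    standardisation at time t, averaged over the segment."""
    seg = curves[:, start:end]                     # (n_curves, seg_len)
    mu = seg.mean(axis=0)
    var = seg.var(axis=0) + 1e-12                  # guard against zero variance
    depth_t = 1.0 / (1.0 + (seg - mu) ** 2 / var)  # pointwise depth
    return depth_t.mean(axis=1)                    # integrate over the segment
```

Computing such depths per segment, for the curve and its derivatives, yields the piecewise feature vector the abstract describes; the most central curve receives the largest depth.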

11.
Going beyond linear and kernel-based feature extraction, we propose in this paper a generalized feature extraction formulation based on the so-called Graph Embedding framework, and present two novel correlation-metric-based algorithms derived from it. Correlation Embedding Analysis (CEA), which incorporates both correlational mapping and discriminant analysis, boosts discriminating power by mapping data from a high-dimensional hypersphere onto a low-dimensional hypersphere while preserving intrinsic neighbor relations through local graph modeling. Correlational Principal Component Analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to data distributed on a high-dimensional hypersphere. Their advantages stem from two facts: 1) they are tailored to normalized data, which are often the output of the data preprocessing step, and 2) they are designed directly with the correlation metric, which has been shown to be generally better than Euclidean distance for classification. Extensive comparisons with existing algorithms on visual classification experiments demonstrate the effectiveness of the proposed methods.
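The hypersphere setting can be sketched concretely: L2-normalizing each sample maps it onto the unit hypersphere, where inner products are correlations, and PCA can then be run on the normalized data. This is only a hedged simplification of CPCA, not the paper's full algorithm:

```python
import numpy as np

def correlational_pca(x, k):
    """Simplified CPCA sketch: project rows onto the unit hypersphere
    (so inner products act as a correlation metric), then take the
    k leading principal directions of the normalised data."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)  # map onto hypersphere
    x = x - x.mean(axis=0)
    cov = x.T @ x / len(x)
    vals, vecs = np.linalg.eigh(cov)                  # ascending eigenvalues
    return vecs[:, ::-1][:, :k]                       # top-k directions
```

The normalization step is exactly the "tailored to normalized data" property: preprocessing pipelines that already emit unit-norm vectors lose nothing here.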

12.
13.
The primary goal of linear discriminant analysis (LDA) in face feature extraction is to find an effective subspace for identity discrimination. The introduction of the kernel trick has extended LDA to nonlinear decision hypersurfaces. However, inherent limitations remain when nonlinear LDA is applied under complex environmental factors: the use of a common covariance function for every class, and the limited dimensionality inherent in the definition of the between-class scatter. Since these problems are caused by the definition of Fisher's criterion itself, they may not be solvable within the conventional LDA framework. This paper proposes to adopt a margin-based between-class scatter and a regularization process to resolve the issue. Essentially, we redesign the between-class scatter matrix based on the SVM margins to facilitate effective and reliable feature extraction, followed by a regularization of the within-class scatter matrix. Extensive empirical experiments compare the proposed method with several other variants of the LDA method on the FERET, AR, and CMU-PIE databases.

14.
Two semi-supervised feature extraction methods are proposed for electroencephalogram (EEG) classification. They aim to alleviate two important limitations of brain–computer interfaces (BCIs). The first is the small size of training sets, owing to the need for short calibration sessions. The second is the time-varying property of the signals; e.g., EEG recorded in the training and test sessions often exhibits different discriminant features. These limitations are common in current practical BCI systems and often degrade the performance of traditional feature extraction algorithms. In this paper, we propose two strategies for obtaining semi-supervised feature extractors by improving a previous feature extraction method, extreme energy ratio (EER). The two methods are termed semi-supervised temporally smooth EER and semi-supervised importance-weighted EER. The former constructs a regularization term that preserves the temporal manifold of test samples and adds it as a constraint to the learning of spatial filters. The latter defines two kinds of weights by exploiting the distribution of test samples and assigns the weights to training data points and trials to improve the estimation of covariance matrices. Both methods regularize the spatial filters to make them more robust and adaptive to the test sessions. Experimental results on data sets from nine subjects, with comparisons to the previous EER, demonstrate their better classification capability.
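The importance-weighted idea can be sketched as follows, under loudly stated assumptions: the weights below come from a kernel-density ratio between test and training samples, which is one common way to realise importance weighting, not necessarily the paper's definition.

```python
import numpy as np

def importance_weighted_cov(train, test, bandwidth=1.0):
    """Sketch: weight each training sample by an estimated density ratio
    q_test(x) / p_train(x), then form a weighted covariance estimate that
    leans toward the test-session distribution. (The density-ratio weights
    are an assumption; the paper defines its own two weight types.)"""
    def kde(points, queries, h):
        # unnormalised Gaussian KDE of `points`, evaluated at `queries`
        sq = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * h * h)).mean(axis=1)
    w = kde(test, train, bandwidth) / (kde(train, train, bandwidth) + 1e-12)
    w = w / w.sum()                                   # normalise the weights
    mu = w @ train
    centered = train - mu
    return (centered * w[:, None]).T @ centered       # weighted covariance
```

When the test distribution matches the training one, the weights become uniform and the estimate reduces to the ordinary (biased) sample covariance, so the correction only kicks in when the sessions actually drift.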

15.
This paper extends previous work on applying an ant algorithm to image feature extraction, focusing on edge pattern extraction, as well as the broader study of self-organisation mechanisms in digital image environments. A novel method of distributed adaptive thresholding is introduced into the ant algorithm, enabling automated adaptive thresholding across the swarm. This technique is shown to increase the performance of the algorithm and, furthermore, eliminates the requirement for a user-set threshold, allowing the algorithm to adapt an appropriate threshold autonomously for a given image or data set. Additionally, the approach is extended to allow simultaneous multiple-swarm, multiple-feature extraction, as well as dynamic adaptation to changing imagery.

16.
Multiscale Analysis for Line Feature Extraction   (cited 2 times: 0 self-citations, 2 by others)
Line feature extraction is an important low-level process in computer vision, and multiscale analysis is a key issue when line features are extracted with differential-geometric methods. This paper studies the choice of scale factor when detecting line features of different widths, analyzes the relationship between varying line width and the appropriate scale factor, and derives a new way of determining the scale factor. Experiments show the method to be simple, time-saving, and effective.

17.
A systematic feature extraction procedure is proposed, based on successive extractions of features: at each stage a dimensionality reduction is made and a new feature is extracted. A specific example is given using the Gaussian minus-log-likelihood ratio as the basis for the extracted features. This form has the advantage that if both classes are Gaussian distributed, only a single feature, the sufficient statistic, is extracted. If the classes are not Gaussian distributed, additional features are extracted in an effort to improve the classification performance. Two examples are presented to demonstrate the performance of the procedure.
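The single Gaussian minus-log-likelihood-ratio feature mentioned above can be written directly; a minimal sketch (the class parameters are assumed known or estimated beforehand):

```python
import numpy as np

def neg_log_likelihood_ratio(x, mu0, cov0, mu1, cov1):
    """The Gaussian minus-log-likelihood-ratio feature
    f(x) = -log[ p1(x) / p0(x) ]: for two Gaussian classes this single
    scalar is a sufficient statistic, negative when x favours class 1."""
    def nll(x, mu, cov):
        # negative log density up to the shared (2*pi)^(d/2) constant
        d = x - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * (d @ inv @ d + logdet)
    return nll(x, mu1, cov1) - nll(x, mu0, cov0)
```

Thresholding f at zero reproduces the Bayes rule for equal priors; when the classes are not Gaussian, the procedure stacks further such features after each dimensionality reduction.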

18.
Mutual-Information Feature Extraction with Optimally Computed Feature Dimensionality   (cited 3 times: 0 self-citations, 3 by others)
Using matrix eigendecomposition, a mutual-information feature extraction method that can optimally compute the feature dimensionality is proposed. First, the class-separability property of the mutual-information criterion under a Gaussian assumption is discussed, and existing typical algorithms are shown to be special cases of the proposed one. Then, after giving the criterion a rigorous mathematical definition, an algorithm that computes the optimal feature dimensionality via eigendecomposition is proposed. Finally, the effectiveness of the method is verified on real data.
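A hedged sketch of choosing the feature dimensionality from an eigendecomposition (the 95% cumulative-energy rule below is an illustrative stand-in for the paper's optimality criterion):

```python
import numpy as np

def optimal_feature_dim(criterion_matrix, energy=0.95):
    """Pick the feature dimensionality from the eigenvalue spectrum of a
    criterion matrix: keep the smallest number of leading eigenvectors
    whose eigenvalues capture the given fraction of the total."""
    vals, vecs = np.linalg.eigh(criterion_matrix)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    ratio = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(ratio, energy) + 1)  # first index reaching target
    return k, vecs[:, :k]
```

The point is that the eigenvalue spectrum itself tells how many directions carry criterion mass, so the feature size need not be hand-picked.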

19.
Feature Extraction and Classification Based on an Improved Minimum Noise Fraction Transform   (cited 2 times: 0 self-citations, 2 by others)
Building on the minimum noise fraction (MNF) transform, kernel methods are introduced and a wavelet kernel is used in place of a traditional kernel function to improve the transform. The multiresolution property of the wavelet kernel further strengthens the algorithm's nonlinear mapping ability. Relevance-vector-machine classification of hyperspectral imagery is a relatively new approach; the new kernel MNF transform is combined with a relevance vector machine for classification experiments on hyperspectral image data. Simulation results show that the wavelet-kernel MNF method captures the nonlinear characteristics of hyperspectral imagery. Applied to data acquired by the HYDICE system over Washington DC Mall, it improves classification accuracy by 3%-8% over the baseline algorithms and effectively improves accuracy in small-sample regions.
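One widely used translation-invariant wavelet kernel (due to Zhang, Zhou, and Jiao's wavelet SVM work; whether the paper uses this exact mother wavelet is an assumption) is a product of Morlet-type factors over the coordinates:

```python
import numpy as np

def wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel
    K(x, y) = prod_i h((x_i - y_i) / a), with mother wavelet
    h(u) = cos(1.75 u) * exp(-u^2 / 2); note K(x, x) = 1."""
    u = (np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0)))
```

Dropping such a kernel in place of the usual Gaussian kernel is exactly the substitution the abstract describes; its oscillatory factor is what gives the multiresolution behaviour.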

20.
Because feature selection in an unsupervised setting cannot rely on class-label information, a disagreement measure (DAM) based on fuzzy rough set theory is proposed to quantify the difference between the fuzzy equivalence classes induced by any two feature sets or features. On this basis, the DAMUFS unsupervised feature selection algorithm is implemented; under unsupervised conditions it selects feature subsets that carry more information while keeping attribute redundancy within the subset as small as possible. Experiments compare DAMUFS with several unsupervised and supervised feature selection algorithms on multiple data sets in terms of classification performance, demonstrating its effectiveness.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号