2.
Neural Computing and Applications - Context of data points, which is usually defined as the other data points in a data set, has been found to play important roles in data representation and...
3.
Sparse representation is a mathematical model for data representation that has proved to be a powerful tool for solving problems in various fields such as pattern recognition, machine learning, and computer vision. As one of the building blocks of the sparse representation method, dictionary learning plays an important role in the minimization of the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although using training samples directly as dictionary bases can achieve good performance, the main drawback of this method is that it may result in a very large and inefficient dictionary due to noisy training instances. To obtain a smaller and more representative dictionary, in this paper, we propose an approach called Laplacian sparse dictionary (LSD) learning. Our method is based on manifold learning and double sparsity. We incorporate the Laplacian weighted graph into the sparse representation model and impose l1-norm sparsity on the dictionary. An LSD is a sparse overcomplete dictionary that can preserve the intrinsic structure of the data and learn a smaller dictionary for each class. The learned LSD can be easily integrated into a classification framework based on sparse representation. We compare the proposed method with other methods on three controlled benchmark face image databases, Extended Yale B, ORL, and AR, and one uncontrolled person image dataset, i-LIDS-MA. Results show the advantages of the proposed LSD algorithm over state-of-the-art sparse representation based classification methods.
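A minimal sketch of the kind of objective the abstract describes, combining reconstruction error, Laplacian graph regularization, and l1 sparsity on both the codes and the dictionary ("double sparsity"); the symbols X (training data), D (dictionary), A (sparse codes), L (Laplacian of the weighted neighborhood graph), and the trade-off weights lambda, beta, gamma are notational assumptions rather than the paper's own formulation:

\min_{D,\,A}\; \|X - DA\|_F^2 \;+\; \lambda\,\|A\|_1 \;+\; \beta\,\mathrm{tr}\!\left(A\,L\,A^{\top}\right) \;+\; \gamma\,\|D\|_1

The trace term penalizes codes that differ for neighboring samples in the graph, which is how the intrinsic structure of the data can be preserved, while the l1 penalty on D drives unneeded dictionary atoms toward zero and yields a smaller per-class dictionary.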
4.
To preserve the sparsity structure in dimensionality reduction, sparsity preserving projection (SPP) is widely used in many classification tasks, with the advantages of noise robustness and data adaptivity compared with other graph-based methods. However, the sparsity parameter of SPP is fixed for all samples without any adjustment. In this paper, an improved SPP method is proposed that adopts an adaptive parameter adjustment strategy during sparse graph construction. With this strategy, the sparsity parameter of each sample is adjusted adaptively according to the relationships among the samples with nonzero sparse representation coefficients, which enhances the discriminant information of the graph. With the same aim, similarity information in both the original space and the projection space is used as guidance information for the sparse representation. Besides, a new measurement is introduced to control the influence of each sample's local structure on projection learning, so that more correct discriminant information is preserved in the projection space. With these strategies, a low-dimensional space with high discriminant ability is found, which is more beneficial for classification. Experimental results on three datasets demonstrate that the proposed approach achieves better classification performance than several available state-of-the-art approaches.
5.
To address the problem of choosing a fixed k in the k-NN algorithm, sparse learning and reconstruction techniques are introduced into nearest-neighbor classification so that k is obtained in a data-driven way rather than set manually. Because samples are correlated, all test samples are reconstructed from the training samples to generate a reconstruction coefficient matrix, and this coefficient matrix is sparsified with the l1-norm so that each test sample is reconstructed from the nearest k (a varying number of) training samples in its neighborhood. This resolves the inaccuracy caused by the classical k-NN algorithm using the same k for every sample to be classified. Experimental results on UCI datasets show that the improved k-NN algorithm classifies better than the classical k-NN algorithm.
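A minimal Python sketch of this data-driven neighborhood idea, assuming an l1-regularized least-squares (Lasso) solver for the reconstruction and a simple majority vote over the selected neighbors; the function name, the alpha value, and the voting rule are illustrative choices not given in the abstract.

import numpy as np
from sklearn.linear_model import Lasso

def sparse_knn_predict(X_train, y_train, X_test, alpha=0.05):
    """For each test sample, take as its 'k' neighbors the training samples
    with nonzero l1 reconstruction coefficients, then majority-vote their labels."""
    preds = []
    for x in X_test:
        # Reconstruct the test sample from all training samples (one column each).
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X_train.T, x)              # min ||x - X_train^T c||^2 + alpha * ||c||_1
        idx = np.flatnonzero(lasso.coef_)    # data-driven neighborhood: adaptive k
        if idx.size == 0:                    # fall back to the single nearest sample
            idx = np.array([np.argmin(np.linalg.norm(X_train - x, axis=1))])
        labels, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

Each test sample typically ends up with a different number of nonzero coefficients, which is exactly the per-sample k that the fixed-k classifier lacks.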
6.
To address the low efficiency of traditional sparse-representation dictionary-learning image classification in large-scale distributed environments, an image learning method based on a sparse-representation global dictionary is designed. The traditional dictionary learning steps are distributed over parallel nodes: each node learns a local dictionary with convex optimization and updates the global dictionary in real time, which improves the efficiency of dictionary learning and of classifying large-scale data. Parallel experiments on a MapReduce platform show that, without affecting classification accuracy, the method clearly accelerates the classification of large-scale distributed data and can be applied more efficiently to various large-scale image classification tasks.
7.
Recently, the sparsity preserving projections (SPP) algorithm has been proposed, which combines the l1-graph, preserving the sparse reconstructive relationship of the data, with classical dimensionality reduction. However, when applied to classification problems, SPP only focuses on the sparse structure and ignores the label information of samples. To enhance classification performance, a new algorithm termed discriminative learning by sparse representation projections, or DLSP for short, is proposed in this paper. DLSP incorporates the merits of both local interclass geometrical structure and the sparsity property. This gives it the advantages of sparse reconstruction and, more importantly, better discrimination capacity, especially when the training set is small. Extensive experimental results on several publicly available data sets show the feasibility and effectiveness of the proposed algorithm.
8.
Multimedia Tools and Applications - A rapid increase in brain tumor cases calls on researchers to automate brain tumor detection and diagnosis. Multi-tumor brain image classification...
10.
Vector quantization (VQ) can perform efficient feature extraction from electrocardiogram (ECG) signals, with the advantages of dimensionality reduction and improved accuracy. However, existing dictionary learning algorithms for vector quantization are sensitive to dirty data, which compromises classification accuracy. To tackle this problem, we propose a novel dictionary learning algorithm that employs k-medoids clustering optimized by k-means++ and builds dictionaries by searching for and using representative samples, which avoids the interference of dirty data and thus boosts the classification performance of ECG systems based on vector quantization features. We apply our algorithm to vector quantization feature extraction for ECG beat classification and compare it with popular features such as the sampling point feature, the fast Fourier transform feature, the discrete wavelet transform feature, and our previous beat vector quantization feature. The results show that the proposed method yields the highest accuracy and is capable of reducing the computational complexity of the ECG beat classification system. The proposed dictionary learning algorithm provides more efficient encoding for ECG beats and can improve ECG classification systems based on encoded features.
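A minimal numpy sketch of the two building blocks named in the abstract, k-means++-style seeding followed by k-medoids clustering over the training beats; the function names, iteration counts, and the use of a full pairwise distance matrix are simplifying assumptions, not the paper's implementation.

import numpy as np

def kmeanspp_seeds(X, k, rng):
    """k-means++ style seeding: prefer candidates far from the medoids already chosen."""
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1), axis=1)
        idx.append(int(rng.choice(len(X), p=d2 / d2.sum())))
    return np.array(idx)

def k_medoids_dictionary(X, k, n_iter=50, seed=0):
    """k-medoids: codewords are actual training beats, so outliers ('dirty data')
    drag the dictionary around less than k-means centroids would."""
    rng = np.random.default_rng(seed)
    medoid_idx = kmeanspp_seeds(X, k, rng)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # all pairwise distances
    for _ in range(n_iter):
        assign = np.argmin(dist[:, medoid_idx], axis=1)
        new_idx = medoid_idx.copy()
        for j in range(k):
            members = np.flatnonzero(assign == j)
            if members.size:
                new_idx[j] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=0))]
        if np.array_equal(new_idx, medoid_idx):
            break
        medoid_idx = new_idx
    return X[medoid_idx]   # the learned dictionary (codebook) of representative beats

The VQ feature of a beat is then simply the index (or one-hot code) of its nearest medoid in this dictionary.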
11.
Managing colossal image datasets with high-dimensional hand-crafted features is no longer feasible in most cases. Content based image classification (CBIC) of such large image datasets calls for dimensionality reduction of the extracted features. This paper identifies the escalating challenges in this domain and introduces a technique for feature dimension reduction that identifies the region of interest in a given image using the reconstruction errors computed by sparse autoencoders. The automated process identifies the significant regions in an image for feature extraction. It not only reduces the dimension of the useful features but also improves classification results compared with earlier approaches. Reducing the number of one kind of feature makes room for the inclusion of other features, whose fusion improves classification performance compared with individual feature extraction techniques. Two datasets, the Wang dataset and the Corel 5K dataset, are used for the experiments. State-of-the-art classifiers, namely the Support Vector Machine and the Extreme Learning Machine, are used for CBIC. The proposed techniques are evaluated and compared with both classifiers, and the analysis of results suggests that the proposed methods are suitable for real-time applications.
12.
Sparse representation has attracted great attention in the past few years. The sparse representation based classification (SRC) algorithm was developed and successfully used for classification. In this paper, a kernel sparse representation based classification (KSRC) algorithm is proposed. Samples are first mapped into a high-dimensional feature space, and SRC is then performed in this new feature space by using the kernel trick. Since samples in the high-dimensional feature space are not known explicitly, KSRC cannot be performed directly. To overcome this difficulty, we give a method for solving the sparse representation problem in the high-dimensional feature space. If an appropriate kernel is selected, a test sample in the high-dimensional feature space can be represented more accurately as a linear combination of training samples of the same class. Therefore, KSRC has more powerful classification ability than SRC. Experiments on face recognition, palmprint recognition and finger-knuckle-print recognition demonstrate the effectiveness of KSRC.
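One common way to make the feature-space sparse coding computable is an empirical kernel map: factor the kernel matrix and run ordinary SRC in the factored coordinates. The Python sketch below follows that route with an RBF kernel, a Lasso solver, and class-wise residual classification; all of these are assumptions for illustration, not necessarily the solver the paper derives.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

def ksrc_predict(X_train, y_train, X_test, gamma=0.5, alpha=0.01):
    """Kernel SRC via an empirical kernel map: factor K = L L^T, sparse-code the
    test sample over the factored training samples, classify by class-wise residual."""
    K = rbf_kernel(X_train, X_train, gamma=gamma)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))   # K ~ L L^T
    B = L.T                                              # columns play the role of phi(x_i)
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        k_x = rbf_kernel(X_train, x[None, :]).ravel()
        b_x = np.linalg.solve(L, k_x)                    # phi(x) in the same coordinates
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(B, b_x)                                # sparse code over the training set
        residuals = []
        for c in classes:
            coef_c = np.where(y_train == c, lasso.coef_, 0.0)
            residuals.append(np.linalg.norm(b_x - B @ coef_c))
        preds.append(classes[np.argmin(residuals)])      # smallest per-class residual wins
    return np.array(preds)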
13.
Constructing a visual dictionary is a key step in the BOVW model, and most visual dictionaries are currently built with K-means clustering. However, owing to the limitations of K-means and to the complexity and high dimensionality of the sample space, dictionaries built this way often discriminate poorly. A more discriminative visual dictionary learning algorithm is proposed within the framework of spectral clustering. To reduce the loss of discriminative power during feature quantization and to mitigate the storage and computation problems inherent to spectral clustering, the algorithm partitions the training data according to the class labels of the training samples, obtains the centers of each sub-sample set with Nystrom spectral clustering, and assembles the final visual dictionary. Experimental results on the Scene-15 dataset verify the correctness and effectiveness of the algorithm; in particular, when training samples are limited, the visual dictionary generated by this algorithm performs better.
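A rough Python sketch of the class-wise clustering and codebook assembly described above. sklearn's SpectralClustering is used here as a stand-in (it does not implement the Nystrom approximation the abstract relies on for scalability), cluster means are used as the "centers", and the argument names are hypothetical.

import numpy as np
from sklearn.cluster import SpectralClustering

def class_wise_spectral_codebook(descriptors, descriptor_labels, words_per_class=50, seed=0):
    """Cluster each class's local descriptors separately, then concatenate
    the per-cluster means into one visual dictionary."""
    codebook = []
    for c in np.unique(descriptor_labels):
        Xc = descriptors[descriptor_labels == c]
        assign = SpectralClustering(n_clusters=words_per_class,
                                    affinity='nearest_neighbors',
                                    random_state=seed).fit_predict(Xc)
        for j in range(words_per_class):
            members = Xc[assign == j]
            if len(members):
                codebook.append(members.mean(axis=0))   # one visual word per cluster
    return np.vstack(codebook)

Here descriptors is the matrix of local features (e.g., SIFT) and descriptor_labels carries the class of the image each descriptor came from.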
14.
Automatic annotation is an essential technique for effectively handling and organizing Web objects (e.g., Web pages), which have experienced an unprecedented growth over the last few years. Automatic annotation is usually formulated as a multi-label classification problem. Unfortunately, labeled data are often time-consuming and expensive to obtain. Web data also accommodate a much richer feature space. This calls for new semi-supervised approaches that are less demanding on labeled data to be effective in classification. In this paper, we propose a graph-based semi-supervised learning approach that leverages random walks and l1 sparse reconstruction on a mixed object-label graph with both attribute and structure information for effective multi-label classification. The mixed graph contains an object-affinity subgraph, a label-correlation subgraph, and object-label edges with adaptive weight assignments indicating the assignment relationships. The object-affinity subgraph is constructed using l1 sparse graph reconstruction with extracted structural meta-text, while the label-correlation subgraph captures pairwise correlations among labels via a linear combination of their co-occurrence similarity and kernel-based similarity. A random walk with adaptive weight assignment is then performed on the constructed mixed graph to infer probabilistic assignment relationships between labels and objects. Extensive experiments on real Yahoo! Web datasets demonstrate the effectiveness of our approach.
15.
To address the problems that parameter settings in existing density-based clustering algorithms depend on experience and that clustering accuracy is low in complex density environments, a fuzzy clustering method is proposed that splits and merges density clusters at the maximum-density connection points between clusters. Data-point densities are computed from a Gaussian mixture model to form a high-dimensional discrete density space; the continuous data space is then covered with a coarse grid, and an interpolation algorithm assigns densities to empty cells, building a continuous high-dimensional density space. After sorting the data points by density, density maxima are identified by checking whether a point is continuously reachable from the set of points of higher density; the neighborhoods of the maxima are then expanded in density order, and conflicts during expansion identify the maximum-density connection points at sparse boundaries and split the density clusters. Finally, the membership between density clusters is computed from the maximum-density connection points, and a membership threshold is set to merge related neighboring clusters and complete the clustering. Simulation comparisons with several density-based clustering algorithms show that the algorithm greatly reduces the dependence on empirical parameters, uses a globally unified merge membership, and improves class recognition under multiple densities.
16.
To address the quantization error introduced when a DHMM (discrete hidden Markov model) classifier vector-quantizes audio features, and the high computational complexity caused by an excessive number of feature dimensions, a new automatic audio classification method based on PCA and a CHMM (continuous hidden Markov model) is proposed. The audio features are first assembled into a high-dimensional vector, PCA is used to reduce the dimensionality of these vectors, and a CHMM classifier then classifies the reduced features. Experiments confirm the effectiveness of PCA plus CHMM for audio classification.
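A minimal Python sketch of the PCA-then-CHMM pipeline, using hmmlearn's GaussianHMM as a stand-in for the continuous HMM and one model per audio class; the function names, the number of principal components, and the number of HMM states are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM   # a continuous (Gaussian-emission) HMM

def train_pca_chmm(clip_frames, clip_labels, n_components=12, n_states=4):
    """Fit one PCA over all frame features, then one Gaussian HMM per audio class."""
    pca = PCA(n_components=n_components).fit(np.vstack(clip_frames))
    models = {}
    for c in set(clip_labels):
        seqs = [pca.transform(f) for f, y in zip(clip_frames, clip_labels) if y == c]
        hmm = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=50)
        hmm.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
        models[c] = hmm
    return pca, models

def classify_clip(pca, models, frames):
    """Project a clip's frame features and pick the class whose HMM scores it highest."""
    Z = pca.transform(frames)
    return max(models, key=lambda c: models[c].score(Z))

Here clip_frames is a list of per-clip frame feature matrices and clip_labels holds the class of each clip.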
17.
Most current pooling methods aggregate feature information from either a first-order or a second-order pooling layer, ignoring the combined representational power of multiple pooling strategies, which in turn limits scene recognition performance. To address this, a remote sensing scene classification model that jointly learns first-order and second-order pooling is proposed. First, the convolutional layers of the ResNet-50 residual network extract initial features from the input image. Then a second-order pooling method based on feature-vector similarity is proposed: the similarities between feature vectors yield weighting coefficients that modulate the distribution of the feature values, and effective second-order feature information is computed. An efficient approximation for the square root of the covariance matrix is also introduced to obtain a second-order representation of high-level semantic information. Finally, the whole network is trained with a loss that combines cross-entropy and class-distance weighting, yielding a discriminative classification model. The method reaches classification accuracies of 96.32%, 93.38%, 96.51%, and 83.30% on AID (50% training ratio), NWPU-RESISC45 (20% training ratio), CIFAR-10, and CIFAR-100, respectively, improvements of 1.09, 0.55, 1.05, and 1.57 percentage points over the iSQRT-COV method. The experimental results show that the method effectively improves remote sensing scene classification performance.
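A small numpy sketch of plain covariance (second-order) pooling with a Newton-Schulz iteration for the matrix square root, the kind of approximation the abstract alludes to (and that iSQRT-COV popularized); it does not include the paper's similarity-based weighting, and the function name and iteration count are assumptions.

import numpy as np

def covariance_sqrt_pooling(feature_map, n_iter=5, eps=1e-5):
    """Second-order pooling of a conv feature map (H, W, C): channel covariance
    followed by a Newton-Schulz approximation of its matrix square root."""
    H, W, C = feature_map.shape
    X = feature_map.reshape(H * W, C)
    X = X - X.mean(axis=0, keepdims=True)
    cov = X.T @ X / (H * W) + eps * np.eye(C)        # channel covariance matrix
    norm = np.trace(cov)                             # normalize so the iteration converges
    Y, Z = cov / norm, np.eye(C)
    for _ in range(n_iter):                          # Newton-Schulz: Y -> (cov/norm)^(1/2)
        T = 0.5 * (3.0 * np.eye(C) - Z @ Y)
        Y, Z = Y @ T, T @ Z
    sqrt_cov = np.sqrt(norm) * Y                     # undo the normalization
    return sqrt_cov[np.triu_indices(C)]              # upper triangle as the pooled descriptor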
18.
Artificial intelligence and machine learning have been used by many research groups for processing large-scale data known as big data. Machine learning techniques for handling large-scale, complex datasets are computationally expensive. Spark MLlib, the machine learning library of the Apache Spark framework, is becoming a popular platform for big data analysis and is used for many machine learning problems such as classification, regression and clustering. In this work, Apache Spark together with a deep multilayer perceptron (MLP) is proposed for audio scene classification. Log mel band features are used to represent the characteristics of the input audio scenes. The parameters of the network are set according to the DNN baseline of the DCASE 2017 challenge. The system is evaluated on the TUT dataset (2017) and the result is compared with the provided baseline.
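A brief PySpark sketch of the kind of pipeline the abstract describes, using Spark MLlib's MultilayerPerceptronClassifier; the parquet paths, the flattened feature size, the hidden layer widths, and the class count are placeholder assumptions and do not reproduce the DCASE 2017 baseline configuration.

from pyspark.sql import SparkSession
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("audio-scene-mlp").getOrCreate()

# DataFrames with a 'features' vector column (log mel band features per clip)
# and a 'label' column; the file names are placeholders for the prepared splits.
train_df = spark.read.parquet("train_logmel.parquet")
test_df = spark.read.parquet("test_logmel.parquet")

n_inputs = 40 * 500     # e.g., 40 mel bands x 500 frames, flattened (assumed shape)
n_classes = 15          # acoustic scene classes in TUT 2017

mlp = MultilayerPerceptronClassifier(
    layers=[n_inputs, 512, 256, n_classes],   # input, two hidden layers, output
    maxIter=200,
    blockSize=128,
    seed=42,
)
model = mlp.fit(train_df)

accuracy = MulticlassClassificationEvaluator(metricName="accuracy").evaluate(
    model.transform(test_df))
print("Scene classification accuracy: %.3f" % accuracy)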
19.
Multimedia Tools and Applications - The classification of hyperspectral images with a paucity of labeled samples is a challenging task. In this paper, we present a discriminant sparse representation...