Similar Literature
1.
Conventional sparse representation-based image classification usually codes each sample independently, ignoring the correlation information that exists in the data. If this hidden correlation information can be exploited, the classification result can be improved significantly. To this end, this paper proposes a novel weighted supervised sparse coding method for the image classification problem. The proposed method first explores the structural information hidden in the data via low-rank representation, and then introduces the extracted structural information into a novel weighted sparse representation model that codes the samples in a supervised way. Experimental results show that the proposed method is superior to many conventional image classification methods.
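For illustration, a minimal sketch of the weighted coding idea, not the authors' exact model: per-atom weights (stand-ins for whatever low-rank-derived structure one extracts) rescale the l1 penalty, which can be folded into a standard LASSO solve by rescaling the dictionary columns.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_sparse_code(D, y, w, alpha=0.1):
    """Solve min_x 0.5*||y - D x||^2 + alpha * sum_i w_i |x_i|.

    Substituting z = w * x folds the per-atom weights into the
    dictionary columns (D_i / w_i), reducing to a standard LASSO."""
    Dw = D / w[np.newaxis, :]          # rescale columns by 1/w_i
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    lasso.fit(Dw, y)
    return lasso.coef_ / w             # map z back to x

# toy usage: 64-dim sample, 40 dictionary atoms, uniform weights
rng = np.random.default_rng(0)
D, y, w = rng.standard_normal((64, 40)), rng.standard_normal(64), np.ones(40)
x = weighted_sparse_code(D, y, w)
print(np.count_nonzero(x), "nonzero coefficients")
```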

2.
The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance, yet it still cannot perfectly solve the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a linear combination of all the training samples cannot fully represent the test sample. In this study, we propose a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all training samples that represents it. It then exploits this deviation together with all the training samples to re-solve for the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed coefficients are used to classify the test sample. The proposed framework can work with most RBC methods and, from the viewpoint of regression analysis, has solid theoretical soundness. Because it can, to an extent, identify the bias effect of an RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.
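For context, a compact sketch of the baseline SRC decision rule the framework builds on; the deviation-correction step is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train, labels, test, alpha=0.01):
    """Basic SRC: sparsely code the test sample over all training
    samples (rows of `train`), then assign the class whose
    coefficients reconstruct it with the smallest residual."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(train.T, test)           # columns of train.T are samples
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(test - train[mask].T @ x[mask])
    return min(residuals, key=residuals.get)
```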

3.
To address the fact that conventional dictionary learning on training samples does not exploit class-shared information, this paper introduces a shared space and class-specific residual spaces, and proposes a face recognition algorithm based on hybrid sparse representation over a shared-space basis and per-class residual-space bases. The algorithm first extracts principal component analysis (PCA) features from the training samples and obtains an unlabeled shared-space basis whose reconstructed samples capture the class-shared information. It then combines the original samples to form a differential training set and incorporates between-class difference information to construct class-specific residual-space bases. Finally, it fuses the shared-space basis and the residual-space bases and completes pattern classification with a residual-based discriminant function. The method not only exploits the orthogonality of the hybrid spaces, but also leverages the discriminative power of the residual spaces and the sparse approximation of the shared information, tightly coupling the structured dictionary with pattern classification. Its effectiveness is verified by experiments on the AR, CMU PIE, and Extended Yale B face databases.

4.
The sparse representation-based classification (SRC) algorithm achieves a high face recognition rate, but solving for the optimal sparse representation under the l1 norm greatly increases its computational complexity: as the matrix dimensionality grows, the computation time rises geometrically. This paper derives a matrix-inverse solution via the Lagrangian method and replaces the l1-norm computation with a simplified pseudo-inverse solution, turning the expensive matrix inversion into lightweight vector-matrix operations. Experiments on the AR face database show that the recognition rate reaches 97% at high dimensionality, while the computational complexity and cost are reduced by 95% compared with the SRC algorithm.
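A hedged sketch of the general pseudo-inverse idea (the paper's Lagrangian derivation is not reproduced): the regularized projection matrix is precomputed once, so coding each test sample costs only one matrix-vector product.

```python
import numpy as np

def pinv_code(D, y, lam=1e-3):
    """Closed-form ridge/pseudo-inverse coding:
    x = (D^T D + lam*I)^{-1} D^T y.
    `P` can be cached across all test samples."""
    n = D.shape[1]
    P = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T)
    return P @ y
```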

5.
The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. In the real world, however, the number of available training samples is limited and corrupted by noise interference, so the training samples cannot accurately represent a test sample linearly. Therefore, in this paper we first produce virtual samples from the original training samples with the aim of increasing the number of training samples. We then take the intra-class difference as a data representation of partial noise and use the intra-class differences and training samples simultaneously to represent the test sample linearly, following the theory of the SRC algorithm. Using weighted score-level fusion, the representation scores of the virtual samples and the original training samples are fused to obtain the final classification result. Experimental results on multiple face databases show that the proposed method achieves very satisfactory classification performance.
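One common way to produce virtual face samples is horizontal mirroring; the paper does not specify its exact construction, so treat the following as an illustrative assumption only.

```python
import numpy as np

def add_mirror_virtual_samples(images, labels):
    """Augment a face training set with left-right mirrored copies.
    `images` has shape (n, height, width); mirroring is just one
    simple way to synthesize virtual samples."""
    mirrored = images[:, :, ::-1]
    return (np.concatenate([images, mirrored], axis=0),
            np.concatenate([labels, labels], axis=0))
```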

6.
Recently, sparse representation classification (SRC) and Fisher discrimination dictionary learning (FDDL) have emerged as important methods for vehicle classification. In this paper, inspired by recent breakthroughs in discriminative dictionary learning and multi-task joint covariate selection, we address vehicle classification in real-world applications by formulating it as a multi-task joint sparse representation model based on FDDL, merging the strengths of multiple features from multiple sensors. To improve classification accuracy in complex scenes, we develop a new method, multi-task joint sparse representation classification based on Fisher discrimination dictionary learning, for vehicle classification. In the proposed method, acoustic and seismic data sets measuring the same physical event are captured simultaneously by multiple heterogeneous sensors, and multi-dimensional frequency-spectrum features are extracted from the sensor data using Mel frequency cepstral coefficients (MFCC). Moreover, we extend the model to handle sparse environmental noise. We experimentally demonstrate the benefits of joint information fusion across sensors, based on Fisher discrimination dictionary learning, in vehicle classification tasks.
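The MFCC feature step might look like the following sketch, assuming the `librosa` package and a 1-D channel signal; the sampling rate and coefficient count are illustrative, not the paper's values.

```python
import numpy as np
import librosa

def mfcc_features(signal, sr=4096, n_mfcc=13):
    """Mean-pooled MFCCs for one acoustic/seismic channel; stacking
    the per-channel vectors gives a multi-feature, multi-sensor
    input for a joint sparse representation model."""
    mfcc = librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)           # (n_mfcc,) summary vector
```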

7.
There has been considerable interest in sparse representation and compressive sensing in applied mathematics and signal processing in recent years, but with limited success in medical image processing. In this paper we developed a sparse representation-based classification (SRC) algorithm based on L1-norm minimization for classifying chromosomes from multicolor fluorescence in situ hybridization (M-FISH) images. The algorithm has been tested on a comprehensive M-FISH database that we established, demonstrating improved classification performance. When compared with other pixel-wise M-FISH image classifiers, such as the fuzzy c-means (FCM) and adaptive fuzzy c-means (AFCM) clustering algorithms that we proposed earlier, the current method gave the lowest classification error. To evaluate the performance of different sparse representation methods for M-FISH image analysis, three solvers, namely the Homotopy method, Orthogonal Matching Pursuit (OMP), and Least Angle Regression (LARS), were tested and compared. Our statistical analysis showed that the Homotopy-based method is significantly better than the other two. Our work indicates that sparse representation-based classifiers with proper models can outperform many existing classifiers for M-FISH classification, including those we proposed before, and can significantly improve multicolor imaging systems for chromosome analysis in cancer and genetic disease diagnosis.
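Of the three solvers compared, OMP is the simplest to call from scikit-learn; a sketch of coding one pixel's feature vector over a dictionary (LARS has an analogous estimator, `sklearn.linear_model.Lars`):

```python
from sklearn.linear_model import OrthogonalMatchingPursuit

def omp_code(D, y, n_nonzero=10):
    """Greedy sparse coding of `y` over the columns of `D`."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, y)
    return omp.coef_
```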

8.
In the literature there are only a few papers concerned with classification methods for multi-way arrays. By far the most common procedure is to unfold the multi-way data array into an ordinary matrix and then apply traditional multivariate classification tools. As opposed to unfolding, several possibilities exist for building classification models directly on the multi-way structure of the data: multi-way partial least squares discriminant analysis has been used as a supervised classification method, and another alternative is to classify with Fisher's LDA or SIMCA on the score matrix from, e.g., a PARAFAC or Tucker model. Despite a few attempts at such multi-way classification approaches, no one has looked into how such models are best built and implemented. In this work, the SIMCA method is extended to three-way arrays; the accompanying code works on general multi-way arrays rather than just three-way arrays. In analogy with two-way SIMCA, a decomposition model is built separately for each class's multi-way data, using a multi-way decomposition method such as PARAFAC or Tucker3. In choosing the best class dimensionality, i.e., the number of latent factors, the results of cross-validation and especially the sensitivity/specificity values are evaluated. To estimate the class limits for each class model, orthogonal and score distances are considered, and different statistics are implemented and tested to set confidence limits for these two parameters. Classification performance using different definitions of class boundaries and classification rules, including the use of cross-validated residuals and scores, is compared. The proposed N-SIMCA methodology and code have been tested on simulated data sets of varying dimensionality and on two case studies concerning food authentication tasks for typical food products.
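A minimal sketch of the per-class decomposition step using the `tensorly` package (PARAFAC); the class limits on the residual statistic are omitted, and refitting with the new sample appended is a crude stand-in for a proper projection onto fixed factors.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def class_residual(class_tensor, new_sample, rank=2):
    """Fit a PARAFAC model to one class's three-way training array
    (samples x J x K) and return the reconstruction residual of a
    new J x K sample -- a crude SIMCA-style distance to the class."""
    data = np.concatenate([class_tensor, new_sample[None]], axis=0)
    weights, factors = parafac(tl.tensor(data), rank=rank)
    recon = tl.cp_to_tensor((weights, factors))
    return float(np.linalg.norm(data[-1] - recon[-1]))
```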

9.
Diabetic retinopathy (DR) is a disease of increasing prevalence and the major cause of blindness among the working-age population. The possibility of severe vision loss can be greatly reduced by timely diagnosis and treatment, and automated screening has been identified as an effective method for early DR detection, decreasing the workload associated with manual grading as well as saving diagnosis cost and time. Several studies have developed automated DR detection and classification models. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of DR. The proposed model incorporates data collection, preprocessing, segmentation, feature extraction, and classification. First, in the IoT-based data collection process, the patient wears a head-mounted camera that captures the retinal fundus image and sends it to a cloud server. The contrast of the input DR image is then increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep convolutional neural network (CNN), Inception v4, is applied as a feature extractor, and the resulting feature vectors are classified with a Gaussian Naive Bayes (GNB) model. The proposed model was tested on the benchmark MESSIDOR DR image dataset, and the results showed superior performance over the other models compared in the study.
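The CLAHE preprocessing step has a direct OpenCV counterpart; a sketch assuming an 8-bit grayscale fundus image, with illustrative clip and grid values:

```python
import cv2

def clahe_enhance(gray_image, clip=2.0, grid=(8, 8)):
    """Contrast Limited Adaptive Histogram Equalization on an
    8-bit grayscale fundus image."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    return clahe.apply(gray_image)
```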

10.
付荣荣  隋佳新  刘冲  张扬 《计量学报》2022,43(8):1103-1108
The recognition and classification of motor imagery EEG signals has long been a hot topic in brain-computer interface research. To address this problem, a manifold learning method, distinct from traditional linear dimensionality reduction, is adopted: the common spatial pattern (CSP) algorithm is combined with uniform manifold approximation and projection (UMAP) to fully exploit the nonlinear features of EEG signals for feature extraction and dimensionality reduction, and a KNN classifier is used for classification and evaluation. Comparing classification results before and after dimensionality reduction demonstrates the benefit and necessity of the reduction, and the visualization of the reduced data is further discussed: the reduced feature data visualize markedly better than the raw data. A new EEG recognition method based on CSP and UMAP is thus proposed, providing a reference for in-depth analysis of EEG signals and the mining of their nonlinear features, and offering a new perspective on motor imagery EEG recognition from the standpoint of data manifold distribution and visualization.
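A sketch of the pipeline assuming the `mne` and `umap-learn` packages are available; `epochs_data` (trials x channels x time) and `y` are placeholders, and the score reported here is training accuracy only.

```python
import numpy as np
from mne.decoding import CSP
from umap import UMAP
from sklearn.neighbors import KNeighborsClassifier

def csp_umap_knn(epochs_data, y, n_csp=4, n_dims=2):
    """CSP log-variance features -> UMAP embedding -> KNN."""
    csp = CSP(n_components=n_csp, log=True)
    feats = csp.fit_transform(epochs_data, y)
    embedded = UMAP(n_components=n_dims).fit_transform(feats)
    clf = KNeighborsClassifier(n_neighbors=5).fit(embedded, y)
    return clf.score(embedded, y)      # training accuracy only
```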

11.
In the modern world, one of the most severe eye diseases brought on by diabetes is diabetic retinopathy (DR), which damages the retina and can lead to blindness. DR can be treated well if diagnosed early, and retinal fundus images are used to screen for lesions in the retina. However, detecting DR in the early stages is challenging due to the minimal symptoms, although the occurrence of diseases linked to the vascular anomalies brought on by DR aids diagnosis. The resources required to identify the lesions manually are high, and training Convolutional Neural Networks (CNN) is time-consuming. This research aims to improve diabetic retinopathy diagnosis by developing an enhanced deep learning model (EDLM) for timely DR identification that is potentially more accurate than existing CNN-based models. The proposed model detects various lesions in retinal images at early stages: characteristics are retrieved from the retinal fundus picture and fed into the EDLM for classification, the EDLM is used for dimensionality reduction, and the classification and feature extraction processes are optimized with the stochastic gradient descent (SGD) optimizer. The EDLM's effectiveness is assessed on a KAGGLE dataset of 3459 retinal images, with results compared against VGG16, VGG19, ResNet18, ResNet34, and ResNet50. Experimental results show that the EDLM achieves higher average sensitivity by 8.28% over VGG16, 7.03% over VGG19, 5.58% over ResNet18, 4.26% over ResNet34, and 2.04% over ResNet50, respectively.

12.
To address the high dimensionality and class imbalance of machinery fault data, a fault classification method based on multi-cluster feature selection on the Grassmann manifold and iterative nearest-neighbor oversampling is proposed. Time-domain and frequency-domain features are extracted from the collected vibration signals, and multi-cluster feature selection maps the high-dimensional data to a low-dimensional feature set while preserving the local manifold structure. Unlabeled samples are classified via iterative nearest-neighbor oversampling with the goal of restoring maximal class balance, and the remaining unlabeled samples are classified fuzzily. Fault data of rolling bearings in the normal, outer-race, inner-race, and rolling-element conditions are selected and compared against support vector machines and a graph-based semi-supervised learning algorithm. The results show that the proposed method effectively identifies minority-class faults and achieves significantly better overall classification performance.
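The paper's iterative nearest-neighbor oversampling is not reproduced here; as a rough stand-in, nearest-neighbor-based oversampling of the minority fault classes can be sketched with SMOTE from `imbalanced-learn` (an assumed substitute, not the authors' algorithm).

```python
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

def balanced_fault_classifier(X, y):
    """Rebalance minority fault classes by nearest-neighbor
    interpolation (SMOTE), then fit a baseline classifier."""
    X_bal, y_bal = SMOTE(k_neighbors=5).fit_resample(X, y)
    return SVC().fit(X_bal, y_bal)
```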

13.
The backscatter data of a multibeam bathymetric sonar contains acoustic information about the seabed surface and can be used to classify surficial sediment types. In practice, however, obtaining sediment-type labels over a large area by physical sampling is prohibitively expensive, which limits the performance of traditional supervised classification algorithms. For the practical situation of abundant unlabeled data and only a small amount of labeled data, this paper proposes a semi-supervised sediment classification algorithm based on autoencoder pretraining and pseudo-label self-training. The algorithm is validated on multibeam bathymetric sonar backscatter data collected in two experiments conducted in the same sea area in 2018 and 2019. The results show that, compared with supervised classification using only labeled data, the proposed semi-supervised algorithm maintains classification accuracy while requiring far fewer labeled samples; with autoencoder pretraining, the accuracy remains above 75% even when very few labeled samples are available.
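The pseudo-label self-training half of the algorithm has a ready-made scikit-learn counterpart; a sketch that omits the autoencoder pretraining stage (unlabeled points carry the label -1, per the sklearn convention):

```python
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression

def pseudo_label_train(X, y_partial, threshold=0.9):
    """Self-training on backscatter features: `y_partial` uses -1
    for unlabeled samples; confident predictions are promoted to
    pseudo-labels in successive rounds."""
    base = LogisticRegression(max_iter=1000)
    model = SelfTrainingClassifier(base, threshold=threshold)
    return model.fit(X, y_partial)
```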

14.
尚丽  周燕  孙战里 《计量学报》2021,42(11):1430-1435
Compared with the sparse representation (SR) model, an SR model based on a single kernel function (KSR) can effectively reduce the data dimensionality, lower the computational complexity of the learning model, and improve feature classification accuracy; however, the choice of kernel function and its parameters in such a model usually cannot capture appropriate, complete classification information. To meet the demand for higher feature classification accuracy, an SR model based on multiple kernel functions (M-KSR) and a fast sparse optimization method for it are proposed and applied to the classification of palmprint images. Test results demonstrate the effectiveness and practicality of the M-KSR-based palmprint classification method.

15.
In this paper, a fuzzy min-max hyperbox classifier is designed to solve M-class classification problems using a hybrid SVM and supervised learning approach. To solve a classification problem, a set of training patterns is gathered for the problem under consideration; however, the training set may include several noisy patterns. To delete the noisy patterns from the training set, a support vector machine is applied to find them, so that the remaining training patterns describe the behavior of the classification system well. Subsequently, a supervised learning method is proposed to generate fuzzy min-max hyperboxes for the remaining training patterns, so that the generated fuzzy min-max hyperbox classifier has good generalization performance. Finally, the Iris data set is used to demonstrate the good performance of the proposed approach on this classification problem.
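The core of a min-max hyperbox classifier is the membership function of a box defined by a min point `v` and a max point `w`; a simplified sketch below (Simpson's original membership function is more elaborate, so treat this as an illustrative approximation):

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Membership of pattern x in the hyperbox [v, w]: 1 inside the
    box, decaying with distance outside, averaged per dimension.
    `gamma` controls how fast membership falls off."""
    below = np.maximum(0.0, v - x)     # shortfall under the min point
    above = np.maximum(0.0, x - w)     # excess over the max point
    per_dim = 1.0 - np.minimum(1.0, gamma * (below + above))
    return float(per_dim.mean())
```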

16.
We propose a method for sparse and robust principal component analysis. The methodology is structured in two steps: first, a robust estimate of the covariance matrix is obtained; then this estimate is plugged into an elastic-net regression, which enforces sparseness. Our approach provides an intuitive, general, and flexible extension of sparse principal component analysis to the robust setting. We also show how to implement the algorithm when the dimensionality exceeds the number of observations by adapting the approach to use robust loadings from ROBPCA. The proposed technique compares well on simulated and real datasets.
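A sketch of the two-step recipe assembled from scikit-learn pieces: a robust covariance estimate (Minimum Covariance Determinant) followed by an elastic-net regression of the centered data onto the leading robust direction. This is a simplified stand-in for the paper's full procedure, in the spirit of regression-based sparse PCA.

```python
import numpy as np
from sklearn.covariance import MinCovDet
from sklearn.linear_model import ElasticNet

def sparse_robust_pc1(X, alpha=0.05, l1_ratio=0.8):
    """Step 1: robust covariance via MCD. Step 2: form provisional
    scores from its leading eigenvector, then re-estimate the
    loading with an elastic net so small entries become exactly 0."""
    cov = MinCovDet().fit(X).covariance_
    _, eigvecs = np.linalg.eigh(cov)
    v1 = eigvecs[:, -1]                    # leading robust direction
    Xc = X - X.mean(axis=0)
    scores = Xc @ v1                       # provisional scores
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
    enet.fit(Xc, scores)                   # sparse loading estimate
    b = enet.coef_
    norm = np.linalg.norm(b)
    return b / norm if norm > 0 else b
```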

17.
18.
High-dimensional data monitoring and diagnosis has recently attracted increasing attention among researchers and practitioners. However, existing process monitoring methods fail to fully use the information in high-dimensional data streams due to their complex characteristics, including large dimensionality, spatio-temporal correlation structure, and nonstationarity. In this article, we propose a novel process monitoring methodology for high-dimensional data streams, including profiles and images, that effectively addresses the foregoing challenges. We introduce spatio-temporal smooth sparse decomposition (ST-SSD), which serves as a dimension reduction and denoising technique by decomposing the original tensor into a functional mean, sparse anomalies, and random noise. ST-SSD is followed by a sequential likelihood ratio test on the extracted anomalies for process monitoring. To enable real-time implementation of the proposed methodology, recursive estimation procedures for ST-SSD are developed. ST-SSD also provides useful diagnostic information about the location of change in the functional mean. The proposed methodology is validated through various simulations and real case studies. Supplementary materials for this article are available online.

19.
This paper presents a semisupervised dimensionality reduction (DR) method based on the combination of semisupervised learning (SSL) and metric learning (ML), named CSSLML-DR, to overcome some existing limitations in hyperspectral image (HSI) analysis. Specifically, CSSLML targets the difficulties of the high dimensionality of HSI data, the insufficient number of labelled samples, and inappropriate distance metrics. CSSLML aims to learn local metrics under which similar samples are pushed as close together as possible while dissimilar samples are pulled as far apart as possible. It constructs two locally reweighted dynamic graphs in an iterative two-step approach: in the L-step, the local between-class and within-class graphs are updated; in the V-step, the transformation matrix and the reduced space are updated. The algorithm is repeated until a stopping criterion is satisfied. Experimental results on two well-known hyperspectral image data sets demonstrate the superiority of the CSSLML algorithm over several traditional DR methods.

20.
The text classification process has been extensively investigated in various languages, especially English, and text classification models are vital in several Natural Language Processing (NLP) applications. The Arabic language is highly significant: it is the fourth most-used language on the internet and the sixth official language of the United Nations. Nevertheless, only a few text classification studies have been published for Arabic, and researchers generally face two challenges in the Arabic text classification process: low accuracy and high dimensionality of the features. In this study, an Automated Arabic Text Classification using Hyperparameter Tuned Hybrid Deep Learning (AATC-HTHDL) model is proposed. Its major goal is to identify class labels for Arabic text. The first step in the proposed model is to pre-process the input data into a useful format. The Term Frequency-Inverse Document Frequency (TF-IDF) model is applied to extract the feature vectors. Next, a Convolutional Neural Network with Recurrent Neural Network (CRNN) model is utilized to classify the Arabic text. In the final stage, the Crow Search Algorithm (CSA) fine-tunes the CRNN model's hyperparameters, which constitutes the novelty of this work. The proposed AATC-HTHDL model was experimentally validated under different parameters, and the outcomes established its supremacy over other approaches.
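The TF-IDF feature step maps directly onto scikit-learn; a sketch with toy documents (the CRNN and Crow Search stages are model-specific and omitted):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["نص عربي أول", "نص عربي ثان"]          # toy Arabic documents
vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(docs)             # sparse (n_docs, n_terms)
print(X.shape)
```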
