Similar Documents
1.
In this paper, we focus on two issues: (1) SVM is very sensitive to noise; (2) the solution of SVM does not take into account the intrinsic structure and the discriminant information of the data. To address these two problems, we first propose an integration model that incorporates both the local manifold structure and the local discriminant information into ℓ1 graph embedding. We then add the integration model to the objective function of the ν-support vector machine, yielding a discriminant sparse neighborhood preserving embedding ν-support vector machine (ν-DSNPESVM). Theoretical analysis demonstrates that ν-DSNPESVM is a reasonable maximum margin classifier and achieves a lower upper bound on the generalization error by minimizing the integration model and the upper bound of the margin error. Moreover, in the nonlinear case, we construct a kernel sparse representation-based ℓ1 graph for ν-DSNPESVM, which is more effective at improving classification accuracy than the ℓ1 graph constructed in the original space. Experimental results on real datasets show the effectiveness of the proposed ν-DSNPESVM method.

2.
Support Vector Machine (SVM) is one of the best-known classifiers. SVM parameters such as the kernel parameters and the penalty parameter C significantly influence classification accuracy. In this paper, a novel Chaotic Antlion Optimization (CALO) algorithm is proposed to optimize the parameters of the SVM classifier so that the classification error is reduced. To evaluate the proposed algorithm (CALO-SVM), experiments were conducted on six standard datasets obtained from the UCI machine learning repository. For verification, the results of CALO-SVM are compared with grid search (a conventional method of searching parameter values), standard Ant Lion Optimization (ALO) SVM, and three well-known optimization algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Social Emotional Optimization Algorithm (SEOA). The experimental results show that the proposed algorithm is capable of finding near-optimal values of the SVM parameters while avoiding the local optima problem, and that it achieves lower classification error rates than the GA, PSO, and SEOA algorithms.
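The grid-search baseline that CALO-SVM is compared against can be sketched as follows. The error surface below is a hypothetical stand-in for SVM cross-validation error (the paper's actual objective is classification error on the UCI datasets); only the exhaustive-search mechanism is illustrated.

```python
import itertools
import math

def grid_search(error_fn, c_values, gamma_values):
    """Exhaustively evaluate every (C, gamma) pair and return the best one."""
    best_params, best_err = None, float("inf")
    for c, g in itertools.product(c_values, gamma_values):
        err = error_fn(c, g)
        if err < best_err:
            best_params, best_err = (c, g), err
    return best_params, best_err

# Stand-in validation-error surface (hypothetical), minimised at C=10, gamma=0.1.
def mock_error(c, gamma):
    return (math.log10(c) - 1.0) ** 2 + (math.log10(gamma) + 1.0) ** 2

params, err = grid_search(mock_error, [0.1, 1, 10, 100], [0.001, 0.01, 0.1, 1])
# params == (10, 0.1)
```

Grid search scales exponentially with the number of parameters, which is one motivation for metaheuristics such as CALO.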

3.
We study strategies for feature selection with the sparse support vector machine (SVM). Recently, the so-called Lp-SVM (0 < p < 1) has attracted much attention because it can encourage better sparsity than the widely used L1-SVM. However, Lp-SVM is a non-convex and non-Lipschitz optimization problem, and solving it numerically is challenging. In this paper, we reformulate the Lp-SVM as an optimization model with a linear objective function and smooth constraints (LOSC-SVM) so that it can be solved by numerical methods for smooth constrained optimization. Our numerical experiments on artificial datasets show that LOSC-SVM (0 < p < 1) can improve both feature selection and classification performance by choosing a suitable parameter p. We also apply it to some real-life datasets, and experimental results show that it is superior to L1-SVM.
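The sparsity advantage of the Lp penalty (0 < p < 1) over L1 can be illustrated with a small numeric example (the vectors below are illustrative only, not from the paper's experiments):

```python
def lp_penalty(w, p):
    """Sum of |w_i|^p -- the (non-convex for p < 1) sparsity-inducing penalty."""
    return sum(abs(x) ** p for x in w)

sparse_w = [1.0, 0.0, 0.0, 0.0]      # one large weight
dense_w = [0.25, 0.25, 0.25, 0.25]   # same l1 norm, spread over all features

# Under the l1 norm both vectors cost the same...
assert lp_penalty(sparse_w, 1.0) == lp_penalty(dense_w, 1.0) == 1.0
# ...but for p = 0.5 the dense vector is penalised twice as much,
# so minimisation pushes weights toward exact zeros.
print(lp_penalty(sparse_w, 0.5), lp_penalty(dense_w, 0.5))  # 1.0 2.0
```

This is precisely why Lp-SVM selects fewer features than L1-SVM, at the cost of non-convexity.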

4.
In this paper, we propose a novel ECG arrhythmia classification method using power spectral features and a support vector machine (SVM) classifier. The method extracts the electrocardiogram's spectral features and three timing interval features. Non-parametric power spectral density (PSD) estimation methods are used to extract the spectral features. The proposed approach optimizes the relevant parameters of the SVM classifier with particle swarm optimization (PSO); these parameters are the Gaussian radial basis function (GRBF) kernel parameter σ and the penalty parameter C of the SVM classifier. ECG records from the MIT-BIH arrhythmia database are used as test data. The proposed power spectral-based hybrid particle swarm optimization-support vector machine (SVMPSO) classification method offers significantly improved performance over an SVM with constant, manually selected parameters.
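A minimal sketch of non-parametric PSD feature extraction, assuming a plain periodogram (the paper may use other estimators, e.g. Welch's method) and a synthetic 10 Hz test signal in place of real ECG data:

```python
import numpy as np

def periodogram(x, fs):
    """Non-parametric PSD estimate: squared magnitude of the DFT, normalised."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

fs = 360.0                          # MIT-BIH records are sampled at 360 Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t)    # synthetic 10 Hz component, stand-in for ECG
freqs, psd = periodogram(x, fs)
peak_freq = freqs[np.argmax(psd)]   # the dominant spectral component
```

Values of the PSD at selected frequency bins would then form the spectral part of the feature vector fed to the SVM.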

5.
The Relief algorithm is a feature selection algorithm for binary classification proposed by Kira and Rendell; its computational complexity increases remarkably with both the number of samples and the number of features. To reduce this complexity, a quantum feature selection algorithm based on the Relief algorithm, called the quantum Relief algorithm, is proposed. In the algorithm, all features of each sample are superposed in a quantum state through CMP and rotation operations; the swap test and measurement are then applied to this state to obtain the similarity between two samples. After that, Near-hit and Near-miss are obtained by finding the maximal similarity and are used to update the feature weight vector WT, producing \({\overline{WT}}\), which determines the relevant features via the threshold \(\tau \). To verify the algorithm, a simulation of a simple example was performed on IBM Q. Efficiency analysis shows that the computational complexity of the proposed algorithm is O(M), while that of the original Relief algorithm is O(NM), where N is the number of features per sample and M is the size of the sample set. The quantum Relief algorithm therefore achieves a significant speedup over its classical counterpart.
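For reference, the classical Relief weight update that the quantum algorithm accelerates can be sketched as follows (a minimal NumPy version using Manhattan distance; the toy data are illustrative, not from the paper):

```python
import numpy as np

def relief(X, y):
    """Classical binary Relief: weight each feature by how well it separates
    every sample from its near-miss versus its near-hit."""
    n, m = X.shape
    w = np.zeros(m)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                      # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        near_hit = X[np.where(same)[0][np.argmin(dist[same])]]
        near_miss = X[np.where(diff)[0][np.argmin(dist[diff])]]
        w += np.abs(X[i] - near_miss) - np.abs(X[i] - near_hit)
    return w / n

# Feature 0 separates the classes; feature 1 is pure noise.
X = np.array([[0.0, 0.3], [0.1, 0.9], [1.0, 0.5], [0.9, 0.1]])
y = np.array([0, 0, 1, 1])
w = relief(X, y)
# w[0] clearly dominates w[1], so feature 0 passes any reasonable threshold tau
```

The inner nearest-neighbour search over all samples and features is the O(NM) cost the quantum version replaces with swap-test similarity estimation.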

6.
In this paper, a steganographic scheme adopting the concept of generalized Kd-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, where the embedding distortion, measured by the order-d Minkowski distance, is no larger than K. In contrast to other pixel matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy suitable for arbitrary relative capacity can be developed. Additionally, an optimization algorithm, the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, after being combined with image models, the proposed scheme performs comparably to state-of-the-art content-based approaches.

7.
Image forgery detection has attracted the attention of a significant number of researchers in recent years. The widespread popularity of imaging applications and the advent of powerful, inexpensive cameras are among the reasons for this spike in image manipulation. A considerable number of features, including numerous texture features, have been proposed for identifying image forgery. However, texture-based features have not been explored to their full potential for forgery detection; in particular, a thorough evaluation of texture features has not been conducted. In this paper, features based on image textures are extracted and combined in a specific way to detect the presence of image forgery. First, the input image is converted to the YCbCr color space to extract the chroma channels. Gabor wavelets and Local Phase Quantization are then applied to these channels to extract texture features at different scales and orientations. These features are optimized using Non-negative Matrix Factorization (NMF) and fed to a Support Vector Machine (SVM) classifier. This method classifies images with accuracies of 99.33%, 96.3%, 97.6%, 85%, and 96.36% on the CASIA v2.0, CASIA v1.0, CUISDE, IFS-TC, and Unisa TIDE datasets respectively, showing its ability to identify image forgeries under varying conditions. On CASIA v2.0 the detection accuracy outperforms recent state-of-the-art methods, and on the other datasets it gives comparable performance with much reduced feature dimensions.
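The color-space conversion in the first step can be sketched as follows. Full-range ITU-R BT.601 coefficients are assumed here; the paper does not specify which YCbCr variant it uses.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range BT.601 RGB -> YCbCr conversion (uint8 in, float out).
    The Cb/Cr chroma channels are the ones used for forgery analysis."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

white = np.full((1, 1, 3), 255, dtype=np.uint8)
ycc = rgb_to_ycbcr(white)
# For a white pixel: Y = 255, Cb = Cr = 128 (no chroma content)
```

Tampering traces are often more visible in the chroma planes than in luminance, which is why the texture features are extracted from Cb and Cr.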

8.
The goal of blind image deblurring is to estimate the blur kernel and restore the sharp latent image from an input blurred image. This paper proposes a novel blind image deblurring algorithm based on L0 regularization and kernel shape optimization. First, the objective function of the optimization model is formulated with L0-norm terms on the gradient and intensity of the kernel, which results in good sparsity and less noise in the estimated kernel. Then, a coarse-to-fine iterative framework is adopted to estimate salient image structures implicitly, which reduces computation and accelerates convergence. Finally, the kernel shape is optimized by a weighting method, which brings the estimated kernel closer to the ground truth. Experimental results on public benchmark datasets demonstrate that the restored images are clear, with fewer ringing artifacts.

9.
This paper proposes an Interval Type-2 Fuzzy Kernel-based Support Vector Machine (IT2FK-SVM) for scene classification on a humanoid robot. Type-2 fuzzy sets have been shown to be a promising way to model uncertainty, and kernel design is a key component of many kernel-based methods. By integrating kernel design with type-2 fuzzy sets, a systematic design methodology for IT2FK-SVM classification of scene images is presented to improve robustness and selectivity in humanoid robot vision; it involves feature extraction, dimensionality reduction, and classifier learning. First, scene images are represented as high-dimensional vectors extracted from intensity, edge, and orientation feature maps by a biological-vision feature extraction method. Second, a novel three-domain Fuzzy Kernel-based Principal Component Analysis (3DFK-PCA) method is proposed to select the prominent variables from the high-dimensional scene image representation. Finally, an IT2FK-SVM classifier is developed for comprehensive learning of scene images in complex environments. Different noise levels, viewing angles, and lighting variations are treated as the uncertainties in scene images. Compared with traditional SVM classifiers using an RBF kernel, an MLP kernel, and a Weighted Kernel (WK), the proposed method performs much better than the conventional WK method thanks to its integration of IT2FK, and the WK method in turn performs better than the single-kernel methods (SVM with an RBF or MLP kernel). IT2FK-SVM is able to deal with uncertainties when scene images are corrupted by various noises and captured from different viewing angles. The proposed IT2FK-SVM method yields classification rates of over 92 % in all cases, and even achieves a 98 % classification rate on the newly built dataset under the common lighting condition.

10.
In this paper, we propose a novel supervised dimension reduction algorithm based on the K-nearest neighbor (KNN) classifier. The proposed algorithm reduces the dimensionality of the data in order to improve KNN classification accuracy. This heuristic algorithm seeks independent dimensions that decrease the Euclidean distance between a sample and its K nearest within-class neighbors and increase the Euclidean distance between that sample and its M nearest between-class neighbors. It is a linear dimension reduction algorithm that produces a mapping matrix projecting data into a low-dimensional space; the dimension reduction step is followed by a KNN classifier, so the method is applicable to high-dimensional multiclass classification. Experiments with artificial data such as Helix and Twin-peaks show the algorithm's ability for data visualization. The algorithm is compared with state-of-the-art algorithms in classifying eight different multiclass datasets from the UCI collection, and simulation results show that it outperforms the existing algorithms. Visual place classification is an important problem for intelligent mobile robots: it not only deals with high-dimensional data but also requires solving a multiclass classification problem, and a proper dimension reduction method is usually needed to decrease the computation and memory complexity of algorithms in large environments, so our method is well suited to it. We extract color histograms of omnidirectional camera images as primary features, reduce them to a low-dimensional space, and apply a KNN classifier. Experiments on five real datasets show the superiority of the proposed algorithm over the others.

11.
The k-nearest neighbors (k-NN) classification technique is widely known for its simplicity, effectiveness, and robustness. As a lazy learner, k-NN is a versatile algorithm used in many fields. In this classifier, the parameter k is generally chosen by the user, with the optimal value found experimentally, and the chosen constant k is then used throughout the classification phase. Using the same k for every test sample can degrade overall prediction performance; for more accurate predictions, the optimal k should vary from one test sample to another. In this study, a method that selects a dynamic k value for each instance is proposed. The improved classification method employs a simple clustering procedure. In the experiments, more accurate results are obtained, and the reasons for this success are analyzed and presented.
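The constant-k behaviour criticized above can be seen in a plain k-NN sketch (toy data; the paper's clustering-based per-sample k selection is not reproduced here):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k):
    """Plain k-NN: majority vote among the k nearest training samples.
    Note that the same constant k is applied to every query point."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X = np.array([[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]])
y = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X, y, np.array([0.3]), k=3)
```

A dynamic scheme, as proposed in the abstract, would instead pick a k tailored to each query's local neighbourhood structure.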

12.
This paper presents a novel online learning algorithm for eight popular nonlinear (i.e., kernel) classifiers, based on classic stochastic gradient descent in the primal domain. In particular, the online learning algorithm is derived for the following classifiers: L1 and L2 support vector machines with both the quadratic regularizer wᵀw and the l1 regularizer ‖w‖1; the regularized huberized hinge loss; regularized kernel logistic regression; the regularized exponential loss with the l1 regularizer ‖w‖1; and least squares support vector machines. The online learning algorithm is aimed primarily at designing classifiers for large datasets. The novel learning model is accurate, fast, and extremely simple (comprising only a few lines of code). Comparisons of the proposed algorithm's performance with a state-of-the-art support vector machine algorithm on several real datasets are shown.
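A minimal sketch of primal SGD for one of these losses, the L1 hinge loss with the quadratic regularizer wᵀw, shown here in the linear special case with a Pegasos-style step size (the paper's kernelized updates are not reproduced; the toy data are illustrative):

```python
import numpy as np

def sgd_hinge(X, y, lam=0.01, epochs=200, seed=0):
    """Primal stochastic gradient descent on the regularised hinge loss,
    with step size eta_t = 1 / (lam * t). Labels must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w = (1 - eta * lam) * w          # regularizer shrinkage step
            if y[i] * (w @ X[i]) < 1:        # margin violation: hinge subgradient
                w = w + eta * y[i] * X[i]
    return w

# Linearly separable toy problem
X = np.array([[2.0, 2.0], [1.5, 2.5], [2.5, 1.5],
              [-2.0, -2.0], [-1.5, -2.5], [-2.5, -1.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w = sgd_hinge(X, y)
train_acc = np.mean(np.sign(X @ w) == y)
```

Each update touches one sample, so the cost per step is independent of the dataset size, which is what makes this family of algorithms suitable for large datasets.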

13.
王谦, 张红英. 《测控技术》 (Measurement & Control Technology), 2019, 38(10): 51-55.
To meet increasingly demanding requirements on pedestrian detection accuracy and efficiency, a pedestrian detection method is proposed in which a GA-PSO algorithm optimizes the parameters of a support vector machine (SVM). First, because the histogram-of-oriented-gradients feature descriptor is high-dimensional and slow to extract, PCA is used to reduce its dimensionality. The SVM algorithm serves as the classifier; to avoid the low detection rate of a traditional single-kernel SVM, a combined kernel function is used as the classifier kernel, slack variables are introduced, and a penalty factor is added. A genetic algorithm (GA) is combined with a particle swarm optimization (PSO) algorithm with improved weight coefficients to optimize and select the combination coefficients and parameters, and the final SVM classifier for pedestrian detection is built from the optimized parameters. Experimental results show that, compared with traditional SVM detection and other optimization methods, the detection rate is clearly improved while the detection-efficiency requirements are still met.
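The PCA dimensionality-reduction step applied to the gradient-histogram descriptor can be sketched as follows (SVD-based PCA on stand-in random data; the actual HOG vectors and the GA-PSO stage are not reproduced):

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Project the rows of X onto the top principal components (via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components], mu

rng = np.random.default_rng(0)
# Stand-in for HOG descriptors: 100 samples in 50 dimensions whose
# variance is concentrated in 5 latent directions.
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 50))
Z, components, mu = pca_fit_transform(X, n_components=5)
# 5 components capture essentially all the variance of this rank-5 data
explained = Z.var(axis=0).sum() / (X - mu).var(axis=0).sum()
```

The reduced vectors Z would then be fed to the SVM classifier, cutting both extraction-independent training cost and per-window detection time.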

14.
Enhancing the naturalness and efficiency of spoken-language man-machine interfaces through emotional speech identification and classification has been a prominent research area. The reliability and accuracy of such emotion identification depend greatly on feature selection and extraction. In this paper, a combined feature selection technique is proposed that uses the reduced feature set produced by a vector quantizer (VQ) in a Radial Basis Function Neural Network (RBFNN) environment for classification. In the initial stage, Linear Prediction Coefficients (LPC) and the time-frequency Hurst parameter (pH) are used to extract the relevant features, each carrying complementary information from the emotional speech. Extensive simulations were carried out on the Berlin Database of Emotional Speech (EMO-DB) with various combinations of feature sets. The experimental results show 76 % accuracy for pH and 68 % for LPC as standalone feature sets, whereas the combination of feature sets (LP VQC and pH VQC) raises the average accuracy to 90.55 %.

15.
In this work, we evaluate two schemes for incorporating feature selection processes into multi-class classifier systems on high-dimensional data of low cardinality. These schemes operate on the level of the systems' individual base classifiers and therefore do not fit neatly into the traditional categories of filter, wrapper, and embedded feature selection strategies. They can be seen as two examples of feature selection networks that are only loosely related to the structure of the multi-class classifier system. The architectures are tested on the task of predicting diagnostic phenotypes from gene expression profiles. Their selection stability and overall generalization ability are evaluated in \(10 \times 10\) cross-validation experiments with support vector machines, random forests, and nearest neighbor classifiers on eight publicly available multi-class microarray datasets. Overall, the feature-selecting multi-class classifier systems outperformed their counterparts on at least five of the eight datasets.

16.
Krawtchouk polynomials (KPs) and their moments are widely used in signal processing for their superior discriminatory properties. This study proposes a new fast recursive algorithm to compute Krawtchouk polynomial coefficients (KPCs). The algorithm is based on the symmetry of KPCs along the primary and secondary diagonals of the polynomial array: the \(n-x\) plane of the KP array is partitioned into four triangles that are symmetrical across these diagonals. The proposed algorithm computes the KPCs for only one triangle (partition), while the coefficients of the other three triangles (partitions) are obtained using the derived symmetry properties of the KP; therefore, only N/4 recursion steps are required. The algorithm can also compute polynomial coefficients for different values of the parameter p in the interval (0, 1). Its performance is compared with that of previous methods in the literature in terms of image reconstruction error, polynomial size, and computation cost. Moreover, the algorithm is applied in a face recognition system to determine the impact of the parameter p on feature extraction ability. Simulation results show that the proposed algorithm has a remarkable advantage over existing algorithms for a wide range of parameter values p and polynomial sizes N, especially in reducing computation time and the number of operations.

17.
For solving a class of ℓ2-ℓ0-regularized problems, we convexify the nonconvex ℓ2-ℓ0 term with the help of its biconjugate function. The resulting convex program is given explicitly; it possesses a very simple structure and can be handled by convex optimization tools and standard software. Furthermore, to exploit the advantages of convex and nonconvex approximation approaches simultaneously, we propose a two-phase algorithm in which the convex relaxation is used in the first phase, and in the second phase an efficient DCA (Difference of Convex functions Algorithm) based algorithm is run from the solution given by Phase 1. Applications to feature selection in support vector machine learning are presented, with experiments on several synthetic and real-world datasets. Comparative numerical results with standard algorithms show the efficiency and potential of the proposed approaches.

18.
The density-based notion of clustering is widely used because it is easy to implement and can detect arbitrarily shaped clusters in the presence of noisy data points without requiring prior knowledge of the number of clusters. Density-based spatial clustering of applications with noise (DBSCAN) is the first algorithm in the literature to use this density-based notion for cluster detection. Since most real datasets today contain feature spaces with adjacent nested clusters, DBSCAN is not suitable for detecting adjacent clusters of varying density, because it uses global density parameters: the neighborhood radius N_rad and the minimum number of points in a neighborhood N_pts. The efficiency of DBSCAN therefore depends on these initial parameter settings: for DBSCAN to work properly, the neighborhood radius must be less than the distance between two clusters, otherwise the algorithm merges the two clusters and detects them as one. In this paper: 1) we propose an improved version of the DBSCAN algorithm that detects adjacent clusters of varying density using the concept of neighborhood difference, within the density-based approach and without adding much computational complexity to the original DBSCAN algorithm; 2) we validate our experimental results using the space density indexing (SDI) internal cluster measure, recently proposed by one of the authors, to demonstrate the quality of the proposed clustering method. Our experimental results also suggest that the proposed method is effective in detecting adjacent nested clusters of varying density.
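The global-parameter behaviour discussed above can be seen in a minimal DBSCAN sketch. The names eps and min_pts below correspond to N_rad and N_pts; this is a textbook version for illustration, not the paper's improved algorithm.

```python
import numpy as np
from collections import deque

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN with a single global eps (N_rad) and min_pts (N_pts).
    Returns labels: cluster ids 0, 1, ... and -1 for noise."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                        # already assigned, or not a core point
        queue = deque([i])
        labels[i] = cluster
        while queue:                        # breadth-first expansion from core points
            j = queue.popleft()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                    if len(neighbors[k]) >= min_pts:
                        queue.append(k)
        cluster += 1
    return labels

# Two well-separated blobs plus one far-away noise point
X = np.array([[0, 0], [0, 0.3], [0.3, 0],
              [5, 5], [5, 5.3], [5.3, 5],
              [10, 0]], dtype=float)
labels = dbscan(X, eps=0.5, min_pts=3)
```

Because eps is global, a single value that works for a dense cluster will merge or fragment an adjacent sparser one, which is exactly the failure mode the proposed neighborhood-difference extension targets.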

19.
Secure online communication is a necessity in today's digital world. This paper proposes a novel reversible data hiding technique based on side match vector quantization (SMVQ). The proposed scheme classifies SMVQ indices as Case 1 or Case 2 based on the value of the first state codeword's side match distortion (SMD) and a predefined threshold t, and uses this classification to switch between compression codes designed for Case 1 and Case 2 SMVQ indices. The length of these compression codes is controlled by the parameter ?. Thus, with appropriate ? and t values, the proposed scheme achieves good compression, creating space in which to embed secret information. The embedding algorithm can embed n secret bits into each SMVQ index, where n = 1, 2, 3, or 4. The experimental results show that the proposed scheme obtains embedding rates of 1, 2, 3, or 4 bits per index (bpi) at average bit rates of 0.340, 0.403, 0.465, or 0.528 bits per pixel (bpp) for a codebook of size 256, improving on recent VQ- and SMVQ-based data hiding schemes.

20.
The renowned k-nearest neighbor decision rule is widely used for classification tasks, where the label of any new sample is estimated based on a similarity criterion defined by an appropriate distance function. It has also been used successfully for regression problems, where the purpose is to predict a continuous numeric label. However, alternative neighborhood definitions, such as the surrounding neighborhood, require that neighbors satisfy not only a proximity criterion but also a spatial-location criterion. In this paper, we explore the use of the k-nearest centroid neighbor rule, which is based on the concept of the surrounding neighborhood, for regression problems. Two support vector regression models were run as references. Experiments over a wide collection of real-world datasets, using fifteen different odd values of k, demonstrate that the regression algorithm based on the surrounding neighborhood significantly outperforms the traditional k-nearest neighborhood method and also a support vector regression model with an RBF kernel.
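The k-nearest centroid neighbor (k-NCN) selection can be sketched as a greedy procedure: each new neighbor is the point that keeps the centroid of the selected set closest to the query. This is a minimal illustration of the surrounding-neighborhood idea, not the paper's full experimental pipeline.

```python
import numpy as np

def kncn_regress(X, y, q, k):
    """k-nearest centroid neighbours: greedily pick neighbours so that the
    centroid of the selected set stays close to the query q, then
    predict by averaging their targets."""
    chosen = []
    remaining = list(range(len(X)))
    for _ in range(k):
        best, best_d = None, np.inf
        for i in remaining:
            centroid = X[chosen + [i]].mean(axis=0)
            d = np.linalg.norm(centroid - q)
            if d < best_d:
                best, best_d = i, d
        chosen.append(best)
        remaining.remove(best)
    return y[chosen].mean()

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # target = x on this toy set
pred = kncn_regress(X, y, q=np.array([2.0]), k=3)
```

Unlike plain k-NN, the centroid criterion forces the neighbors to surround the query spatially rather than all lying on one side of it.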

