20 similar documents were found (search time: 15 ms)
1.
De-Shuang Huang, Xing-Ming Zhao, Guang-Bin Huang, Yiu-Ming Cheung 《Pattern Recognition》2006,39(12):2293-2300
The annotation of proteins can be achieved by classifying the protein of interest into a certain known protein family to induce its functional and structural features. This paper presents a new method for classifying protein sequences based upon the hydropathy blocks occurring in protein sequences. First, a fixed-dimensional feature vector is generated for each protein sequence using the frequency of the hydropathy blocks occurring in the sequence. Then, the support vector machine (SVM) classifier is utilized to classify the protein sequences into the known protein families. The experimental results have shown that the proteins belonging to the same family or subfamily can be identified using features generated from the hydropathy blocks.
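As a rough illustration of the pipeline this abstract describes, the sketch below builds fixed-dimensional block-frequency vectors and feeds them to an SVM. The block list, sequences, and labels are purely illustrative, and scikit-learn's SVC stands in for the authors' classifier.

```python
# Sketch: fixed-dimensional frequency features from hypothetical "hydropathy blocks",
# classified with an SVM. The block alphabet and sequences are illustrative only.
import numpy as np
from sklearn.svm import SVC

HYDROPATHY_BLOCKS = ["AIL", "VLF", "KRD", "STN"]  # hypothetical block alphabet

def block_frequency_vector(sequence, blocks=HYDROPATHY_BLOCKS):
    """Count occurrences of each block in the sequence and normalize by sequence length."""
    counts = np.array([sequence.count(b) for b in blocks], dtype=float)
    return counts / max(len(sequence), 1)

# toy training data: sequences with known family labels
train_seqs = ["AILKRDAILSTN", "VLFVLFKRD", "STNSTNAIL", "KRDKRDVLF"]
train_labels = [0, 1, 0, 1]

X = np.array([block_frequency_vector(s) for s in train_seqs])
clf = SVC(kernel="rbf", C=1.0).fit(X, train_labels)

print(clf.predict([block_frequency_vector("AILAILKRDSTN")]))
```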
2.
3.
Support vector machine active learning for music retrieval    Cited by: 7 (self-citations: 0, citations by others: 7)
4.
In practical e-mail filtering applications, owing to certain properties of spam itself, a conventional support vector machine classification model that assigns an e-mail sample unambiguously to one class is prone to error, whereas judging class membership from a probabilistic output is more reasonable. Following this idea, this paper builds on the conventional SVM e-mail classifier and proposes a classifier optimization scheme: the classification output is converted into a probability, which is then compared against a threshold to determine the class of the e-mail. Experiments show that the method is effective and feasible.
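A minimal sketch of the thresholded probability output described above, using scikit-learn's Platt-scaled SVC probabilities as a stand-in for the paper's own calibration; the toy corpus and the 0.8 threshold are assumptions.

```python
# Sketch of thresholding a probabilistic SVM output for spam filtering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# toy corpus: 1 = spam, 0 = ham (illustrative only)
mails = ["win money now", "cheap pills offer", "free prize claim", "lottery winner today",
         "click here to win", "urgent cash offer",
         "meeting at noon", "project status update", "lunch tomorrow?",
         "please review the report", "minutes of the call", "schedule for next week"]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(mails)

clf = SVC(kernel="linear", probability=True).fit(X, labels)

SPAM_THRESHOLD = 0.8  # assumed threshold; the paper tunes this value
p_spam = clf.predict_proba(vec.transform(["free money offer"]))[0, 1]
print("spam" if p_spam >= SPAM_THRESHOLD else "ham", p_spam)
```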
5.
Acoustic events produced in controlled environments may carry information useful for perceptually aware interfaces. In this paper we focus on the problem of classifying 16 types of meeting-room acoustic events. First of all, we have defined the events and gathered a sound database. Then, several classifiers based on support vector machines (SVM) are developed using confusion matrix based clustering schemes to deal with the multi-class problem. Also, several sets of acoustic features are defined and used in the classification tests. In the experiments, the developed SVM-based classifiers are compared with an already reported binary tree scheme and with their corresponding Gaussian mixture model (GMM) classifiers. The best results are obtained with a tree SVM-based classifier that may use a different feature set at each node. With it, a 31.5% relative average error reduction is obtained with respect to the best result from a conventional binary tree scheme.
6.
Chuan-Yu Chang, Ming-Fong Tsai 《Pattern Recognition》2010,43(10):3494-3506
Most thyroid nodules are heterogeneous with various internal components, which confuse many radiologists and physicians with their various echo patterns in ultrasound images. Numerous textural feature extraction methods are used to characterize these patterns to reduce the misdiagnosis rate. Thyroid nodules can be classified using the corresponding textural features. In this paper, six support vector machines (SVMs) are adopted to select significant textural features and to classify the nodular lesions of a thyroid. Experiment results show that the proposed method can correctly and efficiently classify thyroid nodules. A comparison with existing methods shows that the feature-selection capability of the proposed method is similar to that of the sequential-floating-forward-selection (SFFS) method, while the execution time is about 3-37 times faster. In addition, the proposed criterion function achieves higher accuracy than those of the F-score, T-test, entropy, and Bhattacharyya distance methods.
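The feature-selection step is only summarized above; the sketch below shows a generic SVM-driven sequential forward selection on synthetic data. It is not the paper's criterion function, merely an illustration of selecting textural features by SVM cross-validation accuracy.

```python
# Sketch of SVM-driven sequential forward feature selection (generic stand-in,
# not the paper's criterion). Data and the 4-feature budget are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=12, n_informative=4, random_state=0)

selected, remaining = [], list(range(X.shape[1]))
best_score = 0.0
for _ in range(4):                       # greedily pick up to 4 features
    scores = [(cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean(), f)
              for f in remaining]
    score, f = max(scores)
    if score <= best_score:              # stop when accuracy no longer improves
        break
    best_score = score
    selected.append(f)
    remaining.remove(f)

print("selected features:", selected, "CV accuracy: %.3f" % best_score)
```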
7.
Texture classification using the support vector machines    Cited by: 12 (self-citations: 0, citations by others: 12)
In recent years, support vector machines (SVMs) have demonstrated excellent performance in a variety of pattern recognition problems. In this paper, we apply SVMs for texture classification, using translation-invariant features generated from the discrete wavelet frame transform. To alleviate the problem of selecting the right kernel parameter in the SVM, we use a fusion scheme based on multiple SVMs, each with a different setting of the kernel parameter. Compared to the traditional Bayes classifier and the learning vector quantization algorithm, SVMs, and, in particular, the fused output from multiple SVMs, produce more accurate classification results on the Brodatz texture album.
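A small sketch of the fusion idea: several RBF SVMs trained with different kernel parameters, whose decision values are averaged. The gamma grid and data are assumptions; the paper's exact fusion rule may differ.

```python
# Sketch of fusing SVMs that differ only in their RBF kernel width, by averaging
# their signed decision values and taking the sign of the average.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

gammas = [0.01, 0.1, 1.0]  # assumed grid of kernel parameters
svms = [SVC(kernel="rbf", gamma=g).fit(X, y) for g in gammas]

def fused_predict(X_new):
    # average the distances to each SVM's hyperplane, then threshold at zero
    scores = np.mean([clf.decision_function(X_new) for clf in svms], axis=0)
    return (scores > 0).astype(int)

print(fused_predict(X[:5]), y[:5])
```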
8.
9.
Fuzzy support vector classification machines    Cited by: 6 (self-citations: 0, citations by others: 6)
This paper studies the construction of a support vector classification machine when the outputs of the training points are fuzzy numbers. For the linear fuzzy classification problem, the problem is first transformed into a fuzzy-coefficient program. By means of the λ-optimal program of the fuzzy-coefficient program, solving it yields the fuzzy optimal solution (a fuzzy set) and the fuzzy set of optimal classification functions (a fuzzy set whose elements are optimal classification functions with membership degree λ, 0 ≤ λ ≤ 1), from which the linear fuzzy support vector classification machine is constructed. For the nonlinear fuzzy classification problem, a kernel function is introduced and, by analogy with the linear case, the nonlinear fuzzy support vector classification machine is obtained. Finally, the fuzzy support vector set (a fuzzy set whose elements are fuzzy training points with membership degree λ, 0 ≤ λ ≤ 1), which exhibits the characteristics of the fuzzy support vector classification machine, is constructed. The fuzzy support vector classification machine provides a sound solution to classification problems involving fuzzy information within the support vector machine framework.
10.
Knowledge based Least Squares Twin support vector machines    Cited by: 1 (self-citations: 0, citations by others: 1)
We propose knowledge based versions of a relatively new family of SVM algorithms based on two non-parallel hyperplanes. Specifically, we consider prior knowledge in the form of multiple polyhedral sets and incorporate the same into the formulation of linear Twin SVM (TWSVM)/Least Squares Twin SVM (LSTWSVM) and term them as knowledge based TWSVM (KBTWSVM)/knowledge based LSTWSVM (KBLSTWSVM). Both of these formulations are capable of generating non-parallel hyperplanes based on real-world data and prior knowledge. We derive the solution of KBLSTWSVM and use it in our computational experiments for comparison against other linear knowledge based SVM formulations. Our experiments show that KBLSTWSVM is a versatile classifier whose solution is extremely simple when compared with other linear knowledge based SVM algorithms.
11.
There are two well-known characteristics of text classification. One is that the dimension of the sample space is very high, while the number of examples available is usually very small. The other is that the example vectors are sparse. Meanwhile, we find that existing support vector machine active learning approaches are subject to the influence of outliers. Based on these observations, this paper presents a new hybrid active learning approach. In this approach, to select the unlabelled example(s) to query, the learner takes into account both the sparseness and high-dimension characteristics of the examples as well as its uncertainty about the examples' categorization. This way, the active learner needs fewer labelled examples but can still reach a good generalization performance more quickly than competing methods. Our empirical results indicate that this new approach is effective.
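As an illustration of the uncertainty part of this selection strategy, the sketch below queries the unlabelled examples closest to a linear SVM's hyperplane; the sparseness and high-dimension criteria of the paper are not modelled here.

```python
# Sketch of margin-based selection for SVM active learning: query the unlabelled
# examples whose decision values are closest to the separating hyperplane.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=50, random_state=1)
labelled = np.arange(20)          # indices with known labels
unlabelled = np.arange(20, 300)   # pool to query from

clf = LinearSVC(C=1.0).fit(X[labelled], y[labelled])
margins = np.abs(clf.decision_function(X[unlabelled]))
query = unlabelled[np.argsort(margins)[:5]]  # the 5 most uncertain examples
print("query these examples next:", query)
```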
12.
This paper presents a novel reject rule for support vector classifiers, based on the receiver operating characteristic (ROC) curve. The rule minimises the expected classification cost, defined on the basis of the classification and error costs for the particular application at hand. The rationale of the proposed approach is that the ROC curve of the SVM contains all of the necessary information to find the optimal threshold values that minimise the expected classification cost. To evaluate the effectiveness of the proposed reject rule, a large number of tests have been performed on several data sets and with different kernels. A comparison technique, based on the Wilcoxon rank sum test, has been defined and employed to provide the results at an adequate significance level. The experiments have definitely confirmed the effectiveness of the proposed reject rule.
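A rough sketch of a reject rule of this kind: two thresholds on the SVM decision value are chosen to minimize an expected cost, and scores falling between them are rejected. The cost values and the brute-force threshold search are assumptions, not the paper's exact ROC-based procedure.

```python
# Sketch of a cost-driven reject rule: scores between two thresholds are rejected.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
scores = clf.decision_function(X_te)

COST_ERROR, COST_REJECT = 1.0, 0.2   # assumed application costs

def expected_cost(t_low, t_high):
    reject = (scores > t_low) & (scores < t_high)
    pred = (scores >= t_high).astype(int)          # accepted samples only matter below
    errors = (pred != y_te) & ~reject
    return COST_ERROR * errors.mean() + COST_REJECT * reject.mean()

# brute-force search over candidate threshold pairs taken from the score quantiles
cands = np.quantile(scores, np.linspace(0.05, 0.95, 19))
best = min(((expected_cost(a, b), a, b) for a in cands for b in cands if a <= b))
print("best expected cost %.3f with thresholds (%.2f, %.2f)" % best)
```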
13.
Shigeo Abe 《Pattern Analysis & Applications》2007,10(3):203-214
In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space if one of the arguments of the kernels is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that training and testing of kernel-based methods can be done in the empirical feature space and that training of LS SVMs in the empirical feature space reduces to solving a set of linear equations. We then derive sparse LS SVMs by restricting the solution to the linearly independent training data in the empirical feature space, selected by Cholesky factorization. Support vectors correspond to the selected training data, and they do not change even if the value of the margin parameter is changed. Thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that the number of support vectors can be reduced without deteriorating the generalization ability.
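The key computational point, that LS SVM training reduces to one set of linear equations, can be sketched with the standard LS SVM dual system below; the paper's empirical-feature-space mapping and Cholesky-based sparsification are omitted.

```python
# Sketch: training an LS SVM amounts to solving a single linear system.
# Standard LS SVM dual form:  [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

C = 10.0                       # margin (regularization) parameter
K = rbf_kernel(X, X)
n = len(y)

A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / C
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

decision = K @ alpha + b
print("training accuracy:", np.mean(np.sign(decision) == y))
```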
Shigeo Abe received the B.S. degree in Electronics Engineering, the M.S. degree in Electrical Engineering, and the Dr. Eng. degree, all from Kyoto University, Kyoto, Japan in 1970, 1972, and 1984, respectively. After 25 years in the industry, he was appointed as full professor of Electrical Engineering, Kobe University in April 1997. He is now a professor of Graduate School of Science and Technology, Kobe University. His research interests include pattern classification and function approximation using neural networks, fuzzy systems, and support vector machines. He is the author of Neural Networks and Fuzzy Systems (Kluwer, 1996), Pattern Classification (Springer, 2001), and Support Vector Machines for Pattern Classification (Springer, 2005). Dr. Abe was awarded an outstanding paper prize from the Institute of Electrical Engineers of Japan in 1984 and 1995. He is a member of IEEE, INNS, and several Japanese Societies.
14.
Based on the idea that support vector regression (SVR) can be realized by constructing a support vector machine classification problem, the minimum class variance support vector machine (MCVSVM) is extended to regression estimation and a minimum variance support vector regression (MVSVR) algorithm is proposed. The method inherits the robustness and strong generalization ability of MCVSVMs. The relationship between MVSVR and standard SVR is analysed, the solution of the method when the scatter matrix is singular is discussed, and the nonlinear case of MVSVR is also considered. Experiments show that the method is feasible and exhibits stronger generalization ability.
15.
Time series classification is a supervised learning problem aimed at labeling temporally structured multivariate sequences of variable length. The most common approach reduces time series classification to a static problem by suitably transforming the set of multivariate input sequences into a rectangular table composed of a fixed number of columns. Then, one of the alternative efficient methods for classification is applied for predicting the class of new temporal sequences. In this paper, we propose a new classification method, based on a temporal extension of discrete support vector machines, that benefits from the notions of warping distance and softened variable margin. Furthermore, in order to transform a temporal dataset into a rectangular shape, we also develop a new method based on fixed cardinality warping distances. Computational tests performed on both benchmark and real marketing temporal datasets indicate the effectiveness of the proposed method in comparison to other techniques.
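The warping distance underlying the proposed transformation can be illustrated with plain dynamic time warping; the fixed-cardinality variant and the discrete SVM itself are not shown.

```python
# Sketch of the dynamic-time-warping distance between two 1-D sequences.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))  # small: similar shapes
print(dtw_distance([1, 2, 3, 4], [4, 3, 2, 1]))        # larger: reversed shape
```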
16.
Support vector regression (SVR) is a powerful tool in modeling and prediction tasks with widespread application in many areas. The most representative algorithms to train SVR models are Shevade et al.'s Modification 2 and Lin's WSS1 and WSS2 methods in the LIBSVM library. Both are variants of standard SMO in which the updating pairs selected are those that most violate the Karush-Kuhn-Tucker optimality conditions, to which LIBSVM adds a heuristic to improve the decrease in the objective function. In this paper, and after presenting a simple derivation of the updating procedure based on a greedy maximization of the gain in the objective function, we show how cycle-breaking techniques that accelerate the convergence of support vector machines (SVM) in classification can also be applied under this framework, resulting in significantly improved training times for SVR.
17.
18.
In this paper we formulate a least squares version of the recently proposed twin support vector machine (TSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm for generating binary classifiers based on two non-parallel hyperplanes. Here we attempt to solve two modified primal problems of TSVM, instead of the two dual problems usually solved. We show that the solution of the two modified primal problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in TSVM. Classification using a nonlinear kernel also leads to systems of linear equations. Our experiments on publicly available datasets indicate that the proposed least squares TSVM has comparable classification accuracy to that of TSVM but with considerably less computational time. Since the linear least squares TSVM can easily handle large datasets, we further investigated its efficiency for text categorization applications. Computational results demonstrate the effectiveness of the proposed method over linear proximal SVM on all the text corpora considered.
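A sketch of the two linear systems that a linear least squares TSVM solves, one per non-parallel hyperplane. The exact formulation below (signs, regularization) follows the commonly cited LSTSVM form and should be treated as an assumption rather than the paper's verbatim equations.

```python
# Sketch: each non-parallel hyperplane of a linear LSTSVM comes from one small linear system.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(loc=[2, 2], size=(50, 2))    # class +1 points
B = rng.normal(loc=[-2, -2], size=(50, 2))  # class -1 points
c1 = c2 = 1.0                               # assumed penalty parameters

E = np.hstack([A, np.ones((len(A), 1))])    # [A  e]
F = np.hstack([B, np.ones((len(B), 1))])    # [B  e]

# hyperplane close to A, far from B:  z1 = -(F^T F + (1/c1) E^T E)^{-1} F^T e
z1 = -np.linalg.solve(F.T @ F + (1 / c1) * E.T @ E, F.T @ np.ones(len(B)))
# hyperplane close to B, far from A:  z2 =  (E^T E + (1/c2) F^T F)^{-1} E^T e
z2 = np.linalg.solve(E.T @ E + (1 / c2) * F.T @ F, E.T @ np.ones(len(A)))

def classify(x):
    # assign the class of the nearer hyperplane
    x1 = np.append(x, 1.0)
    d1 = abs(x1 @ z1) / np.linalg.norm(z1[:-1])
    d2 = abs(x1 @ z2) / np.linalg.norm(z2[:-1])
    return +1 if d1 < d2 else -1

print(classify([2.5, 1.5]), classify([-2.5, -1.5]))
```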
19.
L. Meng, Q. H. Wu, Department of Electrical Engineering & Electronics, The University of Liverpool, Liverpool, UK 《International Journal of Automation and Computing》2005,2(1):6-12
1 Introduction  Based on recent advances in statistical learning theory, Support Vector Machines (SVMs) compose a new class of learning systems for pattern classification. Training an SVM amounts to solving a quadratic programming (QP) problem with a dense matrix. Standard QP solvers require the full storage of this matrix, and their efficiency relies on its sparseness, which makes their application to SVM training with large training sets intractable. The SVM, pioneered by Vapnik and his te…
20.
The main aim of this paper is to predict NO and NO2 concentrations 4 days in advance by comparing two artificial intelligence learning methods, namely, multi-layer perceptron and support vector machines, on two kinds of spatial embedding of the temporal time series. Hourly values of NO and NO2 concentrations, as well as meteorological variables, were recorded in a cross-road monitoring station with heavy traffic in Szeged, in order to build a model for predicting NO and NO2 concentrations several hours in advance. The prediction of NO and NO2 concentrations was performed partly on the basis of their past values, and partly on the basis of temperature, humidity and wind speed data. Since NO can be predicted more accurately, its values were considered primarily when forecasting NO2. Time series prediction can be interpreted in a way that is suitable for artificial intelligence learning. Two effective learning methods, namely, multi-layer perceptron and support vector regression, are used to provide efficient non-linear models for NO and NO2 time series predictions. Multi-layer perceptron is widely used to predict these time series, but support vector regression has not yet been applied for predicting NO and NO2 concentrations. Three commonly used linear algorithms were considered as references: 1-day persistence, average of several-day persistence and linear regression. Based on the good results of the average of several-day persistence, a prediction scheme was introduced, which forms weighted averages instead of simple ones. The optimization of these weights was performed with linear regression in the linear case and with the learning methods mentioned above in the non-linear case. Concerning the NO predictions, the non-linear learning methods give significantly better predictions than the reference linear methods. In the case of NO2, the improvement of the prediction is considerable; however, it is less notable than for NO.
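A minimal sketch of the reduction to a static regression problem described above: past hourly values become lagged inputs to a support vector regression model. The data are synthetic and the lag count, horizon, and SVR parameters are assumptions, not the study's configuration.

```python
# Sketch: turn an hourly concentration series into lagged features and fit an SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)  # 60 days of hourly values (synthetic)
no_conc = 40 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

LAGS, HORIZON = 24, 4   # use the last 24 hours to predict 4 hours ahead
X, y = [], []
for t in range(LAGS, len(no_conc) - HORIZON):
    X.append(no_conc[t - LAGS:t])
    y.append(no_conc[t + HORIZON])
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```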