19 similar documents found.
1.
2.
A License Plate Character Recognition Method Based on GA and Support Vector Machines    Cited by: 2 (self: 2, others: 0)
The support vector machine with a Gaussian kernel exhibits excellent learning performance in practice and is widely used in pattern classification. Its recognition performance, however, is sensitive to parameter selection: the penalty factor C and the kernel parameter σ both strongly affect SVM performance. For the application of Gaussian-kernel SVMs to license plate character recognition, a genetic-algorithm-based parameter selection method is proposed. A suitable GA fitness function is first defined, the GA is then used to optimize the SVM parameters, and finally the parameter-optimized SVMs are applied in each recognition sub-network to recognize the plate characters. Experimental results show that the method achieves a satisfactory recognition rate.
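As a rough illustration of the scheme this abstract describes, the sketch below evolves log-scaled (C, γ) pairs with a small genetic algorithm and scores each individual by cross-validated SVC accuracy. It is a minimal sketch, assuming scikit-learn's SVC and the digits data set as a stand-in for plate-character images; the population size, mutation scale, and search ranges are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)          # stand-in for plate characters

def fitness(ind):
    C, gamma = 10.0 ** ind                   # genes live on a log10 scale
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

low, high = np.array([-1.0, -5.0]), np.array([3.0, 0.0])
pop = rng.uniform(low, high, size=(8, 2))    # log10(C), log10(gamma)

for gen in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]                 # keep the best half
    pairs = parents[rng.integers(0, 4, size=(8, 2))]       # sample parent pairs
    a = rng.random((8, 1))
    pop = a * pairs[:, 0] + (1 - a) * pairs[:, 1]          # blend crossover
    pop = np.clip(pop + rng.normal(0, 0.2, pop.shape), low, high)  # mutation

best = max(pop, key=fitness)
print("best C=%.3g, gamma=%.3g" % tuple(10.0 ** best))
```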
3.
For regression problems, an asymmetric ν-kernel-free quadratic surface support vector regression machine is proposed. By introducing the pinball loss function, sample points above and below the ε-tube receive different penalties, which yields a better regression function. It is further shown theoretically that the parameters p and ν bound the numbers of erroneous sample points above and below the ε-tube. When p = 0.5, the method reduces to the symmetric ν-kernel-free quadratic surface support vector regression machine, and in this case ν is also proved to control the number of support vectors. Because the algorithm requires no kernel function, it avoids kernel parameter selection without losing the interpretability of the decision function. Numerical experiments show that the algorithm fits better and runs faster, and that the parameter p adds no computational cost.
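A minimal sketch of the idea, not the authors' formulation: a quadratic surface f(x) = xᵀAx + bᵀx + c is fitted directly (no kernel matrix anywhere) by minimizing an asymmetric ε-insensitive pinball loss in which p weights points above the tube and 1 − p weights points below it. The toy data, ε, p, and the Nelder-Mead solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (80, 2))
y = X[:, 0] ** 2 - X[:, 1] + rng.normal(0, 0.1, 80)

def quad_surface(theta, X):
    # f(x) = x'Ax + b'x + c, parameterized by the upper triangle of A
    n = X.shape[1]
    A = np.zeros((n, n))
    A[np.triu_indices(n)] = theta[: n * (n + 1) // 2]
    A = (A + A.T) / 2.0
    b = theta[n * (n + 1) // 2 : -1]
    c = theta[-1]
    return np.einsum("ij,jk,ik->i", X, A, X) + X @ b + c

def objective(theta, eps=0.1, p=0.7, C=10.0):
    r = y - quad_surface(theta, X)
    above = np.maximum(r - eps, 0.0)    # points above the tube: weight p
    below = np.maximum(-r - eps, 0.0)   # points below the tube: weight 1 - p
    return 0.5 * theta @ theta + C * np.mean(p * above + (1.0 - p) * below)

theta0 = np.zeros(2 * 3 // 2 + 2 + 1)   # triu(A) + b + c for 2-D inputs
res = minimize(objective, theta0, method="Nelder-Mead",
               options={"maxiter": 5000})
print("objective at optimum: %.4f" % objective(res.x))
```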
4.
5.
A Mixed-Kernel SVM Modeling Method and Its Application    Cited by: 2 (self: 0, others: 2)
To improve model generalization and accuracy, a support vector machine (SVM) modeling method based on a mixed kernel function is proposed. The mixed kernel is a weighted combination of a radial basis function and a polynomial function, overcoming the limitations of a single kernel in SVM models. Quantum-behaved particle swarm optimization (QPSO) is used to jointly optimize the penalty coefficient, the kernel parameters, and the mixing weight, finding the optimal parameter combination and thereby improving model accuracy. The method was tested on field data from the purification process of zinc hydrometallurgy; the results show that the proposed mixed-kernel SVM model has good generalization performance and prediction accuracy, and its predictions meet the requirements of on-site production.
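The mixing idea is easy to reproduce with scikit-learn, since a convex combination of two positive semidefinite kernels is again positive semidefinite. The sketch below fixes the mixing weight w by hand; in the paper, QPSO would search w together with the penalty coefficient and the kernel parameters. The toy data and all parameter values are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import SVR

def make_mixed_kernel(w=0.7, gamma=0.5, degree=2):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    def kernel(X, Y):
        return w * rbf_kernel(X, Y, gamma=gamma) + \
               (1 - w) * polynomial_kernel(X, Y, degree=degree)
    return kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sinc(X).ravel() + rng.normal(0, 0.05, 200)

# SVR accepts a callable kernel that returns the Gram matrix
model = SVR(kernel=make_mixed_kernel(), C=10.0).fit(X, y)
print("R^2 on training data:", model.score(X, y))
```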
6.
7.
The fitting accuracy and generalization ability of a support vector machine (SVM) model depend on the choice of its parameters. Existing SVM parameter optimization usually targets only the penalty coefficient and the kernel parameter, but introducing a mixed kernel function adds one more tunable parameter to the SVM. For this multi-parameter selection problem in mixed-kernel SVMs, chaotic particle swarm optimization (CPSO), which has strong global search ability, is proposed to optimize and adjust the important parameters in the mixed-kernel SVM modeling process, and each ...
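For illustration, here is a minimal chaotic PSO skeleton in which logistic-map sequences replace the uniform random draws of standard PSO, one common way "chaotic PSO" is realized; the paper's exact variant and update rules may differ. It is demonstrated on the sphere function, but the objective f could equally be a cross-validated mixed-kernel SVM score.

```python
import numpy as np

def cpso(f, low, high, n=20, iters=50, seed=0):
    """Minimal chaotic PSO sketch (assumed variant, not the paper's)."""
    dim = len(low)
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.1, 0.9, (n, dim))          # chaotic state
    def chaos():
        nonlocal z
        z = 4.0 * z * (1.0 - z)                  # logistic map with r = 4
        return z
    pos = low + chaos() * (high - low)
    vel = np.zeros((n, dim))
    pbest, pval = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        vel = 0.7 * vel + 1.5 * chaos() * (pbest - pos) \
                        + 1.5 * chaos() * (g - pos)
        pos = np.clip(pos + vel, low, high)
        val = np.array([f(p) for p in pos])
        better = val < pval
        pbest[better], pval[better] = pos[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# toy check on the sphere function
best, val = cpso(lambda p: np.sum(p ** 2),
                 np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(best, val)
```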
8.
An Improved Support Vector Regression Model for Fair Curve Fitting    Cited by: 2 (self: 1, others: 1)
Fair curve reconstruction in geometric reverse engineering is essentially a regression problem, and support vector regression is a new and highly effective method for solving regression problems. This paper studies the use of support vector regression for fair curve reconstruction. Because fairness imposes special requirements, existing support vector machines are not directly applicable. The SVM is therefore modified through its penalty factors: according to the distribution of the measured data points, the penalty factor of each point is determined from its circle rate, which enables fair reconstruction of free-form curves. Numerical experiments show that the new method removes the influence of unfair points in the input data and approximates the curve effectively within a given accuracy, achieving a good fit.
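One plausible reading of the per-point penalty idea, sketched with scikit-learn: the curvature of the circle through each triple of consecutive measured points stands in for the paper's "circle rate", and SVR's sample_weight argument rescales the penalty factor point by point so that unfair points pull the fit less. The weighting formula and all constants are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def menger_curvature(p0, p1, p2):
    """Curvature of the circle through three consecutive sample points."""
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p2 - p0)
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p1[1] - p0[1]) * (p2[0] - p0[0]))  # twice triangle area
    return 2.0 * area2 / (a * b * c + 1e-12)

rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(0, 0.02, 100)
y[::17] += 0.4                               # inject a few unfair points

pts = np.column_stack([x, y])
kappa = np.array([menger_curvature(*pts[i - 1 : i + 2])
                  for i in range(1, 99)])
kappa = np.pad(kappa, 1, mode="edge")

# shrink the effective penalty C of high-curvature (unfair) points
w = 1.0 / (1.0 + (kappa / np.median(kappa)) ** 2)
model = SVR(kernel="rbf", C=100.0, epsilon=0.01)
model.fit(x[:, None], y, sample_weight=w)
```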
9.
10.
A Least Squares Support Vector Machine Algorithm Based on Feature-Exponent Weighting    Cited by: 1 (self: 1, others: 0)
Based on the principles of support vector regression and the differing importance of sample features for prediction, the least squares support vector regression (LS-SVR) algorithm is adopted to reduce the number of parameters. Considering the influence of parameters on prediction and the significance of feature weighting, features are weighted by exponents whose weight coefficients are determined by grey relational degrees, yielding an LS-SVR algorithm based on feature-exponent weighting. To verify its effectiveness, the algorithm was applied to predicting real stock prices; the results show that, compared with the traditional LS-SVR algorithm, the prediction ability of its regression estimation function is clearly improved, giving the method practical value.
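A hedged sketch of the two ingredients: grey relational grades rank each feature against the target, and LS-SVR replaces the usual QP with a single linear solve (the parameter reduction the abstract mentions). The abstract weights features by exponents; for simplicity this sketch folds the grades multiplicatively into an RBF kernel instead, so the weighting scheme shown is an assumption, not the paper's.

```python
import numpy as np

def grey_relational_grade(X, y, rho=0.5):
    """Grey relational grade of each (normalised) feature against y."""
    Xn = (X - X.min(0)) / (np.ptp(X, axis=0) + 1e-12)
    yn = (y - y.min()) / (np.ptp(y) + 1e-12)
    d = np.abs(Xn - yn[:, None])
    coef = (d.min() + rho * d.max()) / (d + rho * d.max())
    return coef.mean(axis=0)

def lssvr_fit(X, y, w, gamma=1.0, C=10.0):
    """LS-SVR with a feature-weighted RBF kernel: one linear solve."""
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-gamma * np.sum(w * diff ** 2, axis=2))
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / C]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b and multipliers alpha

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4))
y = 2 * X[:, 0] + 0.1 * X[:, 3] + rng.normal(0, 0.1, 120)

w = grey_relational_grade(X, y)
w = w / w.sum()                     # normalised feature weights
b, alpha = lssvr_fit(X, y, w)
print("feature weights:", np.round(w, 3))
```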
11.
Properties of Kernel Functions and Methods for Constructing Them    Cited by: 5 (self: 0, others: 5)
The support vector machine is a machine learning technique that, over nearly ten years of development, has been successfully applied to pattern recognition, regression estimation, and clustering, and has given rise to kernel methods. A support vector machine is completely characterized by its kernel function and training set, so the key to further improving SVM performance is designing a kernel appropriate to the given problem, which requires a deep understanding of kernel functions themselves. This paper first analyzes several important properties of kernel functions and then proposes simple, practical criteria for judging three classes of kernels: translation-invariant kernels, rotation-invariant kernels, and convolution kernels. On this basis, many important kernel functions are verified and constructed.
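A small numerical companion to such criteria: Mercer's condition implies that the Gram matrix of a valid kernel is positive semidefinite on every finite sample, which gives a necessary (not sufficient) check. Below, a translation-invariant Gaussian kernel and a dot-product polynomial kernel pass, while the squared Euclidean distance, which is not a valid kernel, fails. The sample and tolerance are illustrative.

```python
import numpy as np

def psd_on_sample(k, X, tol=-1e-8):
    """Necessary Mercer check: the Gram matrix of a valid kernel
    must be positive semidefinite on every finite sample."""
    G = np.array([[k(a, b) for b in X] for a in X])
    return np.linalg.eigvalsh(G).min() >= tol

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 3))

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # translation-invariant
poly = lambda a, b: (a @ b + 1.0) ** 3             # dot-product kernel
sqdist = lambda a, b: np.sum((a - b) ** 2)         # not a valid kernel

for name, k in [("rbf", rbf), ("poly", poly), ("sqdist", sqdist)]:
    print(name, psd_on_sample(k, X))               # True, True, False
```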
12.
Robust support vector regression networks for function approximation with outliers    Cited by: 17 (self: 0, others: 17)
Chen-Chia Chuang, Shun-Feng Su, Jin-Tsong Jeng, Chih-Ching Hsiao. IEEE Transactions on Neural Networks, 2002, 13(6): 1322-1330
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation, and has been shown to be robust against noise. When the parameters used in SVR are improperly selected, however, overfitting may still occur, and the selection of the various parameters is not straightforward. Moreover, outliers may be taken as support vectors in SVR, and their inclusion can lead to serious overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robust capability of SVR. In the approach, traditional robust learning methods are employed to improve learning performance for any selected parameters. Simulation results show that RSVR improves the performance of the learned systems in all cases; furthermore, even when training lasts for a long period, the testing errors do not rise. In other words, the overfitting phenomenon is indeed suppressed.
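The RSVR network itself is not reproduced here, but the flavor of "traditional robust learning on top of SVR" can be sketched by iteratively refitting an SVR with Huber-style sample weights derived from a robust residual scale, so outliers stop acting as support points. The weight rule, thresholds, and data are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = np.linspace(-3, 3, 150)[:, None]
y = np.sinc(X).ravel() + rng.normal(0, 0.05, 150)
y[rng.choice(150, 10, replace=False)] += 2.0     # inject outliers

w = np.ones(150)
for _ in range(5):                                # IRLS-style refits
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(X, y, sample_weight=w)
    r = np.abs(y - model.predict(X))
    s = 1.4826 * np.median(r) + 1e-12             # robust scale (MAD)
    w = np.where(r <= 2.0 * s, 1.0, 2.0 * s / r)  # Huber-style down-weighting

print("support vectors after reweighting:", model.support_.size)
```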
13.
In the maximum-entropy estimation of distribution algorithm, the probability density used by the algorithm is built according to Jaynes' principle. SVM-based probability density estimation, by contrast, starts from the definition of a probability density and constructs a density function with unknown parameters from kernel functions. A mathematical programming model for this density is built from the sample points and solved with the ε-insensitive-loss support vector machine method. The resulting density is tested by simulation and finally applied in an estimation of distribution algorithm.
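A common concrete realization of this recipe, offered as an assumption rather than the paper's exact programming model: fit the empirical CDF with ε-insensitive SVR, then differentiate the fit to obtain a density, clipping and renormalizing so the result is a valid density.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
s = rng.normal(0, 1, 400)                        # sample from unknown density

xs = np.sort(s)
ecdf = (np.arange(1, 401) - 0.5) / 400           # empirical CDF at the samples

# fit the CDF with eps-insensitive SVR, then differentiate to get a density
cdf_model = SVR(kernel="rbf", C=100.0, gamma=0.5, epsilon=0.01)
cdf_model.fit(xs[:, None], ecdf)

grid = np.linspace(-4, 4, 401)
F = cdf_model.predict(grid[:, None])
pdf = np.clip(np.gradient(F, grid), 0, None)     # enforce non-negativity
dx = grid[1] - grid[0]
pdf /= np.sum(0.5 * (pdf[1:] + pdf[:-1])) * dx   # renormalise to area 1
print("estimated density at 0: %.3f (true ~0.399)" % pdf[200])
```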
14.
Hidden space support vector machines    Cited by: 7 (self: 0, others: 7)
Li Zhang, Weida Zhou, Licheng Jiao. IEEE Transactions on Neural Networks, 2004, 15(6): 1424-1434
Hidden space support vector machines (HSSVMs) are presented in this paper. The input patterns are mapped into a high-dimensional hidden space by a set of hidden nonlinear functions, and the structural risk is then introduced into the hidden space to construct HSSVMs. Moreover, the conditions on the nonlinear kernel function in HSSVMs are more relaxed: even differentiability is not required. Compared with support vector machines (SVMs), HSSVMs can adopt more kinds of kernel functions because positive definiteness of the kernel is not a necessary condition. The performance of HSSVMs for pattern recognition and regression estimation is also analyzed. Experiments on artificial and real-world domains confirm the feasibility and validity of the algorithms.
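The construction is easy to mimic: map inputs through a set of hidden nonlinear functions that need not form a positive definite kernel, then minimize the structural risk with a linear SVM in that hidden space. The random tanh features below are an illustrative choice of hidden functions, not the authors'.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
X, y = make_moons(400, noise=0.2, random_state=0)

# hidden nonlinear map: the 'kernel' here need not be positive definite,
# it only has to supply a set of hidden functions h_j(x)
W = rng.normal(size=(2, 50))
b = rng.normal(size=50)
hidden = lambda X: np.tanh(X @ W + b)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = LinearSVC(C=1.0, max_iter=5000).fit(hidden(Xtr), ytr)
print("test accuracy:", clf.score(hidden(Xte), yte))
```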
15.
Fuzzy Regression Analysis by Support Vector Learning Approach    Cited by: 1 (self: 0, others: 1)
Pei-Yi Hao, Jung-Hsien Chiang. IEEE Transactions on Fuzzy Systems, 2008, 16(2): 428-441
Support vector machines (SVMs) have been very successful in pattern classification and function approximation for crisp data. In this paper, we incorporate fuzzy set theory into the support vector regression machine: the parameters to be estimated in SVM regression, such as the components of the weight vector and the bias term, are set to be fuzzy numbers. This integration preserves the benefits of both the SVM regression model and the fuzzy regression model, and is applied to fuzzy nonlinear regression analysis. In contrast to previous fuzzy nonlinear regression models, the proposed algorithm is model-free in the sense that no underlying model function has to be assumed; by using different kernel functions, learning machines with arbitrary types of nonlinear regression functions can be constructed. Moreover, the proposed method achieves automatic accuracy control in the fuzzy regression analysis task, with the upper bound on the number of errors controlled by user-predefined parameters. Experimental results indicating the performance of the proposed approach are presented.
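The fuzzy-number weight vector itself is not reproduced here; as a loose, assumed stand-in for triangular fuzzy outputs, the sketch fits one SVR for the center curve and a second SVR on the absolute residuals for the spread, printing (left, center, right) triples.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(8)
X = rng.uniform(0, 4, (200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05 + 0.05 * X.ravel(), 200)

center = SVR(kernel="rbf", C=10.0).fit(X, y)          # crisp centre curve
spread = SVR(kernel="rbf", C=10.0).fit(
    X, np.abs(y - center.predict(X)))                 # spread from residuals

grid = np.linspace(0, 4, 5)[:, None]
c = center.predict(grid)
s = np.clip(spread.predict(grid), 0, None)
for xi, ci, si in zip(grid.ravel(), c, s):
    print(f"x={xi:.1f}: fuzzy output ~ ({ci - si:.2f}, {ci:.2f}, {ci + si:.2f})")
```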
16.
The kernel function, the penalty factor, and the kernel parameters are important factors affecting the classification performance of support vector data description (SVDD). This paper studies the multiple-kernel support vector data description (MKSVDD) classification method and gives the steps for implementing it. Using the banana data set, the influence of the penalty factor and the kernel parameters on classification is analyzed, with particular attention to how the weights of the multiple kernels affect the distribution of the SVDD boundary. Simulation results show that, compared with single-kernel SVDD, multiple-kernel SVDD classifies better, providing a reference for parameter selection in practical applications.
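A sketch of the multiple-kernel description idea using scikit-learn's OneClassSVM, which for stationary kernels is equivalent to SVDD: the callable kernel below is a weighted sum of RBF and polynomial kernels, ν plays the penalty-factor role, and the weight w shifts the description boundary. One half-moon from make_moons stands in for the banana set; all values are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.svm import OneClassSVM

def mixed(w=0.6, gamma=2.0, degree=2):
    def k(X, Y):
        return w * rbf_kernel(X, Y, gamma=gamma) + \
               (1 - w) * polynomial_kernel(X, Y, degree=degree)
    return k

# one half-moon as a stand-in for the banana-shaped target class
X, y = make_moons(300, noise=0.08, random_state=0)
X = X[y == 0]

# nu plays the penalty-factor role; w shifts the description boundary
svdd = OneClassSVM(kernel=mixed(w=0.6), nu=0.1).fit(X)
print("fraction flagged as outliers:", (svdd.predict(X) == -1).mean())
```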
17.
To address the influence of kernel function selection on the generalization of least squares support vector machine regression models, a new multiple-kernel learning algorithm for least squares support vector machines based on a ????-norm constraint is proposed. The algorithm offers two solution methods, both organized as a double loop: the outer loop updates the kernel weights, while the inner loop solves for the Lagrange multipliers of the least squares support vector machine. Fully exploiting this multiple-kernel learning scheme effectively improves the generalization ability of the least squares support vector machine, and the method is fairly robust to the choice of the penalty parameter. Simulation experiments on univariate and multivariate functions demonstrate the effectiveness of the proposed algorithm.
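Since the norm in the constraint is garbled in the source, the sketch below assumes an lp-norm constraint on the kernel weights, a standard choice in multiple-kernel learning, and shows the two-loop structure the abstract describes: the inner loop solves the LS-SVM linear system, and the outer loop re-weights the base kernels in closed form and projects back onto the lp ball. The update rule is a common MKL rule, not necessarily the paper's.

```python
import numpy as np

def lssvm_solve(K, y, C=10.0):
    """Inner loop: the LS-SVM dual is a single linear system."""
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / C]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]

def mkl_lssvm(Ks, y, p=2.0, iters=20, C=10.0):
    """Outer loop: re-weight base kernels under an lp-norm constraint
    (assumed rule, in the style of standard lp-norm MKL)."""
    m = len(Ks)
    beta = np.full(m, m ** (-1.0 / p))           # feasible start, ||beta||_p = 1
    for _ in range(iters):
        K = sum(b * Km for b, Km in zip(beta, Ks))
        bias, alpha = lssvm_solve(K, y, C)
        norms = np.array([beta[j] ** 2 * alpha @ Ks[j] @ alpha
                          for j in range(m)])    # ||w_j||^2 per kernel
        beta = norms ** (1.0 / (p + 1))
        beta /= np.sum(beta ** p) ** (1.0 / p)   # project back to the lp ball
    return beta, bias, alpha

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.2 * X[:, 1] + rng.normal(0, 0.05, 100)
d2 = np.sum((X[:, None] - X[None]) ** 2, axis=2)
Ks = [np.exp(-g * d2) for g in (0.1, 1.0, 10.0)]   # three RBF widths
beta, b, alpha = mkl_lssvm(Ks, y)
print("kernel weights:", np.round(beta, 3))
```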
18.
19.
In some nonlinear dynamic systems, the function of the state variables can usually be separated from the function of the control variables, which complicates the identification of such systems. To solve this problem, an improved least squares support vector regression (LSSVR) model with multiple kernels is proposed and applied to identifying nonlinear separable systems. The method exploits the excellent nonlinear mapping ability of the Morlet wavelet kernel function and combines the state and control variable information into one kernel matrix. With the composite wavelet kernel, the LSSVR contains two nonlinear functions, one in the state variables and one in the control variables; in this way the regression function gains better nonlinear mapping ability and can approximate almost any curve in the space of square-integrable continuous functions. The two functions are then used to identify the corresponding functions of the separable nonlinear dynamic system. Simulation results show that the multiple-kernel LSSVR method greatly improves identification accuracy over the single-kernel method, and that the Morlet wavelet kernel is more efficient than the other kernels tested.
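A minimal sketch of the composite-kernel idea under stated assumptions: a Morlet wavelet kernel (the product over dimensions of cos(1.75·d/a)·exp(−d²/2a²), where d is the coordinate difference) is built separately on the state and on the control variables, and the two Gram matrices are summed, so the fitted LSSVR decomposes into a function of x plus a function of u, matching the separable structure. The data, kernel width, and C are illustrative, not the authors' settings.

```python
import numpy as np

def morlet_kernel(X, Z, a=1.0):
    """Morlet wavelet kernel: prod_i cos(1.75 d_i / a) * exp(-d_i^2 / 2a^2)."""
    d = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * d / a) * np.exp(-d ** 2 / (2 * a ** 2)),
                   axis=2)

def lssvr_fit(K, y, C=100.0):
    """LS-SVR dual: one linear solve gives the bias and multipliers."""
    n = len(y)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / C]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]

# separable system: y = f(x) + g(u), modelled with an additive composite kernel
rng = np.random.default_rng(10)
x = rng.uniform(-2, 2, (150, 1))        # state variables
u = rng.uniform(-2, 2, (150, 1))        # control variables
y = np.sin(2 * x).ravel() + u.ravel() ** 2 + rng.normal(0, 0.02, 150)

K = morlet_kernel(x, x) + morlet_kernel(u, u)   # sum keeps f and g separate
b, alpha = lssvr_fit(K, y)
pred = K @ alpha + b
print("train RMSE: %.4f" % np.sqrt(np.mean((pred - y) ** 2)))
```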