Similar Articles
Found 10 similar articles (search time: 171 ms).
1.
This paper addresses the problem of transductive learning of the kernel matrix from a probabilistic perspective. We define the kernel matrix as a Wishart process prior and construct a hierarchical generative model for kernel matrix learning. Specifically, we consider the target kernel matrix as a random matrix following the Wishart distribution with a positive definite parameter matrix and a degree of freedom. This parameter matrix, in turn, has the inverted Wishart distribution (with a positive definite hyperparameter matrix) as its conjugate prior, and the degree of freedom is equal to the dimensionality of the feature space induced by the target kernel. Formulating the task as a missing data problem, we devise an expectation-maximization (EM) algorithm to infer the missing data, parameter matrix, and feature dimensionality in a maximum a posteriori (MAP) manner. Using different settings for the target kernel and hyperparameter matrices, our model can be applied to different types of learning problems. In particular, we consider its application in a semi-supervised learning setting and present two classification methods. Classification experiments are reported on some benchmark data sets with encouraging results. In addition, we also devise the EM algorithm for kernel matrix completion. Editor: Philip M. Long
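The Wishart prior at the core of this model is easy to sample from directly: a Wishart(df, Σ) draw is the sum of df outer products of N(0, Σ) vectors. A minimal NumPy sketch (the 2×2 parameter matrix and degree of freedom are illustrative values, not taken from the paper):

```python
import numpy as np

def sample_wishart(df, scale, rng):
    """Draw one matrix from Wishart(df, scale) as a sum of df
    outer products of N(0, scale) vectors (requires df >= dim)."""
    L = np.linalg.cholesky(scale)            # scale = L @ L.T
    G = rng.standard_normal((scale.shape[0], df))
    X = L @ G                                # columns ~ N(0, scale)
    return X @ X.T

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # positive definite parameter matrix
K = sample_wishart(df=5, scale=Sigma, rng=rng)
```

Each draw K is symmetric positive definite (almost surely, when df is at least the matrix dimension), which is exactly the property a kernel matrix needs.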

2.
Differential evolution (DE) is a simple and effective approach for solving numerical optimization problems. However, the performance of DE is sensitive to the choice of mutation and crossover strategies and their associated control parameters. Therefore, to achieve optimal performance, a time-consuming parameter tuning process is required. In DE, the use of different mutation and crossover strategies with different parameter settings can be appropriate during different stages of the evolution. Therefore, to achieve optimal performance using DE, various adaptation, self-adaptation, and ensemble techniques have been proposed. Recently, a classification-assisted DE algorithm was proposed to overcome trial-and-error parameter tuning and efficiently solve computationally expensive problems. In this paper, we present an evolving surrogate model-based differential evolution (ESMDE) method, wherein a surrogate model constructed from the population members of the current generation is used to assist the DE algorithm in generating competitive offspring with an appropriate parameter setting during different stages of the evolution. As the population evolves over generations, the surrogate model also evolves over the iterations and better represents the basin of search by the DE algorithm. The proposed method employs a simple Kriging model to construct the surrogate. The performance of ESMDE is evaluated on a set of 17 bound-constrained problems and compared to state-of-the-art self-adaptive DE algorithms: the classification-assisted DE algorithm, regression-assisted DE algorithm, and ranking-assisted DE algorithm.
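For reference, the underlying DE algorithm that ESMDE builds on follows the classic DE/rand/1/bin scheme: mutate with a scaled difference of two random members, recombine by binomial crossover, then keep the trial only if it is no worse. This is a sketch of plain DE (not the surrogate-assisted part); the population size, F, and CR values are illustrative:

```python
import numpy as np

def de_rand_1_bin(pop, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of classic DE/rand/1/bin over a (NP, D) population."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    fitness = np.array([f_obj(x) for x in pop])
    new_pop = pop.copy()
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])   # DE/rand/1 mutation
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True             # at least one gene from the mutant
        trial = np.where(cross, mutant, pop[i])   # binomial crossover
        if f_obj(trial) <= fitness[i]:            # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop

rng = np.random.default_rng(1)
sphere = lambda x: float(np.sum(x * x))
pop = rng.uniform(-5, 5, size=(20, 3))
best0 = min(sphere(x) for x in pop)
for _ in range(30):
    pop = de_rand_1_bin(pop, sphere, rng=rng)
best = min(sphere(x) for x in pop)
```

Because selection is greedy per individual, the population's best objective value never worsens from one generation to the next.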

3.
Setting of random number parameters in particle swarm optimization: an experimental analysis
刘志雄, 梁华 《控制理论与应用》(Control Theory & Applications), 2010, 27(11): 1489-1496
The parameters of the particle swarm optimization (PSO) algorithm have a significant effect on its optimization performance; this paper presents an experimental analysis of how the random number parameters in the PSO model are set. First, because the structures of different high-level programming languages differ, implementations of PSO vary in how they set the random number parameters for the individual components of a particle's velocity vector in the velocity update formula. Second, computations on standard benchmark instances of continuous function optimization and job-shop scheduling, as well as on an equipment-capacity parameter optimization problem, show that different random number settings have a considerable impact on PSO's optimization performance, and that assigning the same random number value to the different components of a particle's velocity vector can effectively improve the algorithm's optimization efficiency.
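The two conventions the abstract contrasts can be sketched in a single velocity update. The inertia weight and acceleration coefficients below are common illustrative values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w, c1, c2 = 0.7, 1.5, 1.5          # illustrative inertia and acceleration weights
x = rng.uniform(-1, 1, 5)          # one particle's position (5 dimensions)
v = np.zeros(5)
pbest = rng.uniform(-1, 1, 5)      # personal best
gbest = rng.uniform(-1, 1, 5)      # global best

# Variant A: one shared random number per velocity vector
# (the same r1, r2 multiply every component).
r1, r2 = rng.random(), rng.random()
v_shared = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Variant B: an independent random number per component.
r1 = rng.random(5)
r2 = rng.random(5)
v_per_dim = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Both variants appear in published PSO implementations; the abstract's experiments suggest the shared-value variant (A) can be more efficient.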

4.
Statistical depth functions provide a center-outward ordering of multidimensional data from the deepest point. In this sense, depth functions can measure the extremeness or outlyingness of a data point with respect to a given data set. Hence, they can detect outliers: observations that appear extreme relative to the rest of the observations. Of the various statistical depths, the spatial depth is especially appealing because of its computational efficiency and mathematical tractability. In this article, we propose a novel statistical depth, the kernelized spatial depth (KSD), which generalizes the spatial depth via positive definite kernels. By choosing a proper kernel, the KSD can capture the local structure of a data set where the spatial depth fails. We demonstrate this on the half-moon data and the ring-shaped data. Based on the KSD, we propose a novel outlier detection algorithm, by which an observation with a depth value less than a threshold is declared an outlier. The proposed algorithm is simple in structure: the threshold is the only parameter for a given kernel. It applies to a one-class learning setting, in which normal observations are given as the training data, as well as to a missing label scenario, where the training set consists of a mixture of normal observations and outliers with unknown labels. We give upper bounds on the false alarm probability of a depth-based detector. These upper bounds can be used to determine the threshold. We perform extensive experiments on synthetic data and data sets from real applications. The proposed outlier detector is compared with existing methods. The KSD outlier detector demonstrates a competitive performance.
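Because KSD only involves inner products in feature space, it can be evaluated purely through kernel evaluations. A sketch of that kernel-trick computation, assuming a Gaussian kernel (the kernel choice and width are illustrative):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def kernelized_spatial_depth(x, data, sigma=1.0, eps=1e-12):
    """Kernelized spatial depth of x w.r.t. data: one minus the norm of
    the average unit vector from phi(x) toward each phi(x_i)."""
    n = len(data)
    kxx = rbf(x, x, sigma)
    kxi = np.array([rbf(x, xi, sigma) for xi in data])
    kij = np.array([[rbf(a, b, sigma) for b in data] for a in data])
    # squared feature-space distances ||phi(x) - phi(x_i)||^2
    d2 = kxx + np.diag(kij) - 2 * kxi
    d = np.sqrt(np.maximum(d2, eps))
    # Gram matrix of the unit vectors (phi(x) - phi(x_i)) / d_i
    num = kxx - kxi[None, :] - kxi[:, None] + kij
    s = np.sum(num / np.outer(d, d))
    return 1.0 - np.sqrt(max(s, 0.0)) / n

rng = np.random.default_rng(0)
data = rng.standard_normal((40, 2))
depth_center = kernelized_spatial_depth(np.zeros(2), data)
depth_far = kernelized_spatial_depth(np.array([8.0, 8.0]), data)
```

A central point receives a depth close to 1, while a far-away point receives a low depth, which is what the threshold-based detector exploits.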

5.
Standard fixed symmetric kernel-type density estimators are known to encounter problems for positive random variables with a large probability mass close to zero. It is shown that, in such settings, asymmetric gamma kernel estimators are superior, but that their asymptotic and finite sample performance also depends on the shape of the density near zero and the exact form of the chosen kernel. Therefore, a refined version of the gamma kernel with an additional tuning parameter, adjusted according to the shape of the density close to the boundary, is suggested. A data-driven method for the appropriate choice of the modified gamma kernel estimator is also provided. An extensive simulation study compares the performance of this refined estimator to those of standard gamma kernel estimators and of standard boundary-corrected and adjusted fixed kernels. It is found that the finite sample performance of the proposed new estimator is superior in all settings. Two empirical applications based on high-frequency stock trading volumes and realized volatility forecasts demonstrate the usefulness of the proposed methodology in practice.
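The baseline gamma kernel estimator the paper refines evaluates, at each point x ≥ 0, the average of a Gamma(x/b + 1, b) density over the data, so the kernel's support never spills below zero. A minimal sketch of that baseline (the bandwidth b and the exponential test data are illustrative, and this is the standard estimator, not the paper's refined variant):

```python
import math
import numpy as np

def gamma_pdf(t, shape, scale):
    """Density of Gamma(shape, scale) evaluated at t > 0."""
    return np.exp((shape - 1) * np.log(t) - t / scale
                  - math.lgamma(shape) - shape * math.log(scale))

def gamma_kernel_density(x, data, b):
    """Gamma kernel estimate at x >= 0: average over the data of a
    Gamma(x/b + 1, b) density, which has support on [0, inf)."""
    return float(np.mean(gamma_pdf(data, x / b + 1.0, b)))

rng = np.random.default_rng(0)
data = rng.exponential(scale=1.0, size=500)   # positive data, mass near zero
grid = np.linspace(0.0, 6.0, 121)
fhat = np.array([gamma_kernel_density(x, data, b=0.1) for x in grid])
# trapezoidal rule: the estimate should integrate to roughly 1
mass = float(np.sum((fhat[1:] + fhat[:-1]) / 2 * np.diff(grid)))
```

Unlike a symmetric kernel centered at each data point, nothing here leaks density onto the negative half-line, which is the boundary problem the abstract describes.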

6.
刘俊, 李威, 陈蜀宇, 徐光侠 《软件学报》(Journal of Software), 2022, 33(12): 4574-4589
This paper proposes a feature extraction algorithm based on principal component analysis (PCA) with an anisotropic Gaussian kernel and a kernel penalty, which differs from the traditional kernel PCA algorithm. In nonlinear dimensionality reduction, traditional kernel PCA ignores the normalization of the raw data. In addition, a traditional kernel function is governed in each dimension largely by a single shared width parameter, so it cannot accurately reflect the importance of the different features in each dimension, which lowers accuracy during dimensionality reduction. To address these problems, a mean-based normalization algorithm is first proposed for the raw data, which markedly improves the total variance contribution of the raw data. Second, an anisotropic Gaussian kernel function is introduced, in which each dimension has its own width parameter, and each width parameter accurately reflects the importance of the data features in its dimension. Third, a feature-penalty objective function for kernel PCA is established on the basis of the anisotropic Gaussian kernel, so that the raw data can be represented with fewer features while reflecting the importance of each principal component. Finally, to find the best features, gradient descent is introduced to update the kernel widths in the feature-penalty objective and to control the iteration of the feature extraction algorithm. To verify the effectiveness of the proposed algorithm, the algorithms were compared on public UCI data sets and on the KDDCUP99 data set. The experimental results show that, compared with traditional PCA, the proposed feature extraction algorithm based on anisotropic-Gaussian-kernel kernel-penalized PCA improves accuracy by 4.49% on average across nine public UCI data sets, and by 8% on the KDDCUP99 data set.
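The building blocks of this approach, an anisotropic Gaussian kernel (one width per dimension) fed into standard kernel PCA via the centered Gram matrix, can be sketched as follows. This is a minimal illustration of those two pieces, not the paper's penalized objective or its gradient-descent width tuning; the widths here are fixed illustrative values:

```python
import numpy as np

def aniso_rbf_gram(X, widths):
    """Gram matrix of an anisotropic Gaussian kernel: each dimension d
    is scaled by its own width sigma_d instead of one shared width."""
    Z = X / widths                       # per-dimension scaling
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-0.5 * np.maximum(d2, 0.0))

def kernel_pca(K, n_components):
    """Project onto the leading components of the centered Gram matrix."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 4))
widths = np.array([0.5, 1.0, 2.0, 4.0])  # illustrative per-dimension widths
K = aniso_rbf_gram(X, widths)
Y = kernel_pca(K, n_components=2)
```

A small width makes its dimension count heavily in the kernel distance, while a large width effectively downweights that dimension, which is how per-dimension importance enters the embedding.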

7.
Deep Belief Networks (DBN) have become powerful tools for a wide range of applications. On complex tasks like image reconstruction, DBN performance is highly sensitive to parameter settings. Manually trying out different parameters is tedious and time-consuming, yet it is often required in practice because there are few better options. This work proposes an evolutionary hyper-heuristic framework for automatic parameter optimisation of DBN. The hyper-heuristic framework introduced here is the first of its kind in this domain. It involves a high-level strategy and a pool of evolutionary operators, such as crossover and mutation, that generate DBN parameter settings by perturbing or modifying the current setting of a DBN. Providing a large set of operators could help form a more effective high-level strategy, but it would at the same time enlarge the search space and hence make it more difficult to form a good strategy. To address this issue, a non-parametric statistical test is introduced to identify a subset of effective operators for different phases of the hyper-heuristic search. Three well-known image reconstruction datasets were used to evaluate the performance of the proposed framework. The results reveal that the proposed hyper-heuristic framework is very competitive when compared to state-of-the-art methods.

8.
Subspace clustering is a data-mining task that groups similar data objects and at the same time searches for the subspaces where the similarities appear. For this reason, subspace clustering is recognized as more general and complicated than standard clustering. In this article, we present ChameleoClust+, a bioinspired evolutionary subspace clustering algorithm that takes advantage of an evolvable genome structure to detect various numbers of clusters located in different subspaces. ChameleoClust+ incorporates several biolike features such as a variable genome length, both functional and nonfunctional elements, and mutation operators including large rearrangements. It was assessed and compared with the state-of-the-art methods on a reference benchmark using both real-world and synthetic data sets. Although other algorithms may need complex parameter settings, ChameleoClust+ requires setting only one ad hoc, intuitive subspace clustering parameter: the maximal number of clusters. The remaining parameters of ChameleoClust+ are related to the evolution strategy (e.g., population size, mutation rate), and a single setting for all of them turned out to be effective for all the benchmark data sets. A sensitivity analysis has also been carried out to study the impact of each parameter on the subspace clustering quality.

9.
Motivated by the effectiveness of the K-medoids clustering algorithm on categorical data and the good self-organizing, adaptive, and self-learning abilities of genetic algorithms, a travel behavior analysis method based on a genetic clustering algorithm is proposed. The method uses integer encoding, measures the dissimilarity between activity-pattern objects by the degree of matching between patterns, and takes as the fitness function the total dissimilarity between each activity pattern and its nearest cluster medoid; on this basis, it develops a clustering method for categorical objects that combines K-medoids clustering with a genetic algorithm. By comparing simulation results under different data volumes and different parameter settings, recommended values for the key parameters are given. The study shows that the new method not only handles outliers and local optima well, but also improves the convergence speed and reduces the computational cost, effectively solving the clustering of categorical data.
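The K-medoids half of this hybrid can be sketched on its own with the simple matching dissimilarity for categorical attributes. This is plain K-medoids, not the paper's genetic variant, and the toy data and explicit initial medoids are illustrative:

```python
import numpy as np

def mismatch(a, b):
    """Simple matching dissimilarity: fraction of attributes that differ."""
    return float(np.mean(a != b))

def k_medoids(data, k, init=None, iters=20, rng=None):
    """Plain K-medoids over categorical rows under matching dissimilarity."""
    rng = rng or np.random.default_rng()
    n = len(data)
    medoids = list(init) if init is not None else \
        list(rng.choice(n, size=k, replace=False))
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        # assign each object to its nearest medoid
        d = np.array([[mismatch(data[i], data[m]) for m in medoids]
                      for i in range(n)])
        labels = d.argmin(axis=1)
        # re-pick each cluster's medoid: the member with minimum total dissimilarity
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                new_medoids.append(medoids[c])
                continue
            costs = [sum(mismatch(data[i], data[j]) for j in members)
                     for i in members]
            new_medoids.append(int(members[int(np.argmin(costs))]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return labels, medoids

# Two clearly separated categorical patterns.
data = np.array([["a", "x", "p"]] * 5 + [["b", "y", "q"]] * 5)
labels, medoids = k_medoids(data, k=2, init=[0, 9])
```

In the paper's scheme, a genetic algorithm searches over medoid choices instead of this greedy update, using the same total-dissimilarity quantity as the fitness function.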

10.
We describe an efficient technique for adapting the control parameter settings associated with differential evolution (DE). The DE algorithm has been used in many practical cases and has demonstrated good convergence properties. It has only a few control parameters, which are kept fixed throughout the entire evolutionary process. However, it is not an easy task to properly set the control parameters in DE. We present a new version of the DE algorithm for obtaining self-adaptive control parameter settings that show good performance on numerical benchmark problems. The results show that our algorithm with self-adaptive control parameter settings is better than, or at least comparable to, the standard DE algorithm and evolutionary algorithms from the literature when considering the quality of the solutions obtained.
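The self-adaptation described here attaches an F and CR value to each individual and occasionally resamples them. A sketch of that per-individual update rule, assuming the commonly used settings (resampling probabilities of 0.1 and F drawn from [0.1, 1.0)), which are stated here as assumptions rather than taken from this abstract:

```python
import numpy as np

def self_adapt_params(F, CR, rng, tau1=0.1, tau2=0.1, Fl=0.1, Fu=0.9):
    """Self-adaptive DE parameter update: with probability tau1 resample an
    individual's F uniformly from [Fl, Fl + Fu); with probability tau2
    resample its CR uniformly from [0, 1); otherwise keep the old values."""
    n = len(F)
    F = np.where(rng.random(n) < tau1, Fl + Fu * rng.random(n), F)
    CR = np.where(rng.random(n) < tau2, rng.random(n), CR)
    return F, CR

rng = np.random.default_rng(0)
F = np.full(50, 0.5)     # one F per population member
CR = np.full(50, 0.9)    # one CR per population member
for _ in range(100):     # parameters drift over the generations
    F, CR = self_adapt_params(F, CR, rng)
```

Good parameter values then propagate implicitly: individuals that survive selection carry their F and CR forward, so no separate tuning run is needed.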


Copyright©北京勤云科技发展有限公司  京ICP备09084417号