Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
A relative difference quotient algorithm for discrete optimization (cited 9 times: 0 self-citations, 9 by others)
According to the characteristics of discrete optimization, the concept of a relative difference quotient is proposed, and a highly accurate heuristic algorithm, the relative difference quotient algorithm, is developed for a class of discrete optimization problems with monotonic objective functions and constraint functions. The algorithm starts from the minimum point of the objective function outside the feasible region and advances along the direction of minimum increment of the objective function and maximum decrement of the constraint functions to find a better approximate optimum solution. To evaluate the performance of the algorithm, a stochastic numerical test and a statistical analysis of the test results are also carried out. The algorithm has been successfully applied to the discrete optimization of structures.
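The greedy mechanism described in this abstract can be illustrated with a short sketch: starting from the objective minimum outside the feasible region, repeatedly raise the variable whose next discrete level gives the smallest objective increment per unit of constraint improvement (a "relative difference quotient"). This is an illustrative toy, not the paper's exact update rules for structural optimization.

```python
# Illustrative greedy heuristic guided by relative difference quotients:
# at each step, increase the variable whose next discrete level gives the
# smallest objective increment per unit of constraint improvement.

def rdq_heuristic(levels, cost, constraint, limit):
    """levels: one sorted list of discrete values per variable.
    cost(x), constraint(x): monotone non-decreasing in every component.
    Feasibility requires constraint(x) >= limit."""
    idx = [0] * len(levels)                        # start at the objective minimum
    x = [lv[i] for lv, i in zip(levels, idx)]
    while constraint(x) < limit:                   # still infeasible
        best, best_q = None, float("inf")
        for j, lv in enumerate(levels):
            if idx[j] + 1 >= len(lv):
                continue                           # no higher level available
            trial = x.copy()
            trial[j] = lv[idx[j] + 1]
            d_f = cost(trial) - cost(x)            # objective increment
            d_g = constraint(trial) - constraint(x)  # constraint gain
            if d_g <= 0:
                continue
            q = d_f / d_g                          # relative difference quotient
            if q < best_q:
                best, best_q = j, q
        if best is None:
            raise ValueError("no feasible point reachable")
        idx[best] += 1
        x[best] = levels[best][idx[best]]
    return x

# Toy example: pick discrete cross-section areas minimizing weight subject to a
# minimum total stiffness (both monotone in the areas).
areas = [[1.0, 1.5, 2.0, 3.0]] * 3
weight = lambda x: sum(2.0 * a for a in x)
stiffness = lambda x: sum(10.0 * a for a in x)
print(rdq_heuristic(areas, weight, stiffness, limit=55.0))
```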

3.
The shuffled frog leaping (SFL) optimization algorithm has been successful in solving a wide range of real-valued optimization problems. In this paper we present a discrete version of this algorithm and compare its performance with the SFL algorithm, a binary genetic algorithm (BGA), and a discrete particle swarm optimization (DPSO) algorithm on seven low dimensional and five high dimensional benchmark problems. The obtained results demonstrate that our proposed algorithm, the DSFL, outperforms the BGA and the DPSO in terms of both success rate and speed. On low dimensional functions and for large values of the tolerance, the DSFL is slower than the SFL, but their success rates are equal; part of this slowness can be attributed to the extra bits used for data coding. As the number of variables and the required precision of the answer increase, the DSFL performs very well in terms of both speed and success rate. For high dimensional problems, for intrinsically discrete problems, and when the required precision of the answer is high, the DSFL is the most efficient of the tested methods.
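For readers unfamiliar with SFL, a minimal binary sketch of the memeplex structure is shown below: frogs are sorted, dealt into memeplexes, and the worst frog of each memeplex is moved toward the memeplex best, bit by bit, with a sigmoid-shaped acceptance probability. This is a common binary-coding device, not the exact DSFL operator of the paper.

```python
import math
import random

def onemax(bits):                # toy fitness: number of ones (maximise)
    return sum(bits)

def dsfl_sketch(n_frogs=20, n_bits=16, n_memeplexes=4, iters=50, seed=0):
    rng = random.Random(seed)
    frogs = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=onemax, reverse=True)              # best first
        # deal frogs into memeplexes round-robin (standard SFL shuffling)
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            best, worst = mem[0], mem[-1]
            new = worst.copy()
            for k in range(n_bits):
                step = best[k] - worst[k]
                # copy the best frog's bit with a sigmoid-shaped probability
                if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * step)):
                    new[k] = best[k]
            if onemax(new) > onemax(worst):
                worst[:] = new                            # accept the improvement
            else:
                # no improvement: replace with a random frog, as in standard SFL
                worst[:] = [rng.randint(0, 1) for _ in range(n_bits)]
        frogs = [f for mem in memeplexes for f in mem]    # shuffle back together
    return max(frogs, key=onemax)

print(onemax(dsfl_sketch()))
```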

4.
Clustering methods for switching regression models generally do not account for the influence of noise, so when the data contain noise the clustering results produced by these methods are biased to some degree. To weaken the influence of noisy data during clustering, a new clustering algorithm with noise resistance, called the noise-resistant clustering algorithm, is proposed. The algorithm partitions the given data set into two subsets, a non-noise set and a noise set, and then performs cluster analysis on the non-noise set to estimate the model parameters. By repeatedly adjusting the noise and non-noise subsets and correcting the resulting parameter estimates, the clustering result is progressively refined. Experiments show that the noise-resistant clustering algorithm effectively overcomes the influence of noisy data on the clustering results and yields high-quality parameter estimates.
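A rough sketch of that scheme for switching regression with k linear components: fit the component lines on the non-noise points only, reassign points by residual, and move points with unusually large residuals to the noise subset. The threshold rule and constants below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def noise_resistant_switching_regression(x, y, k=2, tau=4.0, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(x))          # random initial assignment
    noise = np.zeros(len(x), dtype=bool)              # noise subset, initially empty
    coefs = np.zeros((k, 2))                          # slope/intercept per component
    for _ in range(iters):
        # 1) fit one line per component using its non-noise points only
        for c in range(k):
            m = (labels == c) & ~noise
            if m.sum() >= 2:
                coefs[c] = np.polyfit(x[m], y[m], 1)
        # 2) assign every point to the line with the smallest absolute residual
        res = np.stack([np.abs(np.polyval(coefs[c], x) - y) for c in range(k)])
        labels = res.argmin(axis=0)
        best = res.min(axis=0)
        # 3) points whose best residual is far above the typical residual become noise
        noise = best > tau * (np.median(best) + 1e-12)
    return coefs, labels, noise

# toy data: two regression lines plus a few gross outliers
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(rng.random(200) < 0.5, 2 * x + 1, -x + 5) + 0.2 * rng.normal(size=200)
y[:10] += 40                                          # inject outliers
coefs, labels, noise = noise_resistant_switching_regression(x, y)
print(np.round(coefs, 2), int(noise.sum()))
```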

5.
Input selection for nonlinear regression models (cited 3 times: 0 self-citations, 3 by others)
A simple and effective method for the selection of significant inputs in nonlinear regression models is proposed. Given a set of input-output data and an initial superset of potential inputs, the relevant inputs are selected by checking whether after deleting a particular input, the data set is still consistent with the basic property of a function. In order to be able to handle real-valued and noisy data in a sensible manner, fuzzy clustering is first applied. The obtained clusters are compared by using a similarity measure in order to find inconsistencies within the data. Several examples using simulated and real-world data sets are presented to demonstrate the effectiveness of the algorithm.
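A simplified version of that consistency test can be written directly: after removing an input, the data should still look like a function, i.e. near-identical remaining inputs must not map to very different outputs. The paper makes this robust to noise with fuzzy clustering and a cluster-similarity measure; the plain nearest-pair check below is only an illustration of the underlying idea.

```python
import numpy as np

def relevant_inputs(X, y, x_tol=0.03, y_tol=0.2):
    n, d = X.shape
    relevant = []
    for j in range(d):
        Xr = np.delete(X, j, axis=1)                   # candidate: drop input j
        inconsistent = False
        for a in range(n):
            dx = np.linalg.norm(Xr - Xr[a], axis=1)    # distances in reduced input space
            dy = np.abs(y - y[a])                      # distances in the output
            clash = (dx < x_tol) & (dy > y_tol)        # "same" inputs, different outputs
            clash[a] = False
            if clash.any():
                inconsistent = True                    # dropping j broke functionality
                break
        if inconsistent:
            relevant.append(j)                         # so input j is relevant
    return relevant

# toy example: y depends on x0 and x2 only
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))
y = np.sin(3 * X[:, 0]) + X[:, 2] ** 2
print(relevant_inputs(X, y))    # prints the indices judged relevant (here x0 and x2)
```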

6.
Background: Short-term load forecasting is an important issue that has been widely explored and examined with respect to the operation of power systems and commercial transactions in electricity markets. Of the existing forecasting models, support vector regression (SVR) has attracted much attention. While model selection, including feature selection and parameter optimization, plays an important role in short-term load forecasting using SVR, most previous studies have considered feature selection and parameter optimization as two separate tasks, which is detrimental to prediction performance.
Objective: By evolving feature selection and parameter optimization simultaneously, the main aims of this study are to make practitioners aware of the benefits of applying unified model selection in STLF using SVR and to provide one solution for model selection in the framework of a memetic algorithm (MA).
Methods: This study proposes a comprehensive learning particle swarm optimization (CLPSO)-based memetic algorithm (CLPSO-MA) that evolves feature selection and parameter optimization simultaneously. In the proposed CLPSO-MA algorithm, CLPSO is applied to explore the solution space, while a problem-specific local search is proposed for conducting individual learning, thereby enhancing the exploitation of CLPSO.
Results: Compared with other well-established counterparts, the benefits of the proposed unified model selection problem and the proposed CLPSO-MA for model selection are verified using two real-world electricity load datasets, which indicates that the SVR equipped with CLPSO-MA can be a promising alternative for short-term load forecasting.
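The core of the "unified model selection" idea is the encoding: one candidate solution carries both the feature mask and the SVR hyper-parameters, and a single fitness function scores them together. A minimal sketch of that encoding is shown below, assuming scikit-learn is available; plain random search stands in for the CLPSO-MA search engine, which the paper describes in detail.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(candidate, X, y):
    mask, log_C, log_gamma = candidate
    if not mask.any():
        return -np.inf                       # at least one feature must stay selected
    model = SVR(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    # negative MSE over 3-fold cross-validation: higher is better
    return cross_val_score(model, X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def random_search(X, y, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    best, best_fit = None, -np.inf
    for _ in range(n_iter):
        cand = (rng.random(X.shape[1]) < 0.5,           # binary feature mask
                rng.uniform(-1, 3),                      # log10(C)     in [0.1, 1000]
                rng.uniform(-4, 0))                      # log10(gamma) in [1e-4, 1]
        f = fitness(cand, X, y)
        if f > best_fit:
            best, best_fit = cand, f
    return best, best_fit

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=200)   # only features 0 and 3 matter
(best_mask, c, g), score = random_search(X, y)
print(best_mask, round(score, 3))
```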

7.
QoS-based modeling and solution of the optimal service selection problem for composite services (cited 2 times: 1 self-citation, 1 by others)
A framework supporting dynamic Web service composition is proposed, and on this basis the optimal service selection problem in Web service composition is modeled. According to the QoS constraints set by the user, the selection problem is divided into three classes, and a corresponding service selection algorithm is given for each class. Experiments show that the algorithms achieve good running time while still guaranteeing a certain level of solution quality.
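The abstract does not spell out the three selection algorithms, so the sketch below only illustrates the underlying model: a sequential composition in which one candidate service is chosen per task, maximising an aggregate utility subject to a user-specified end-to-end QoS constraint. Service names, QoS values and the utility function are illustrative assumptions.

```python
from itertools import product

# candidates[task] = list of (name, response_time_ms, reliability)
candidates = [
    [("A1", 120, 0.98), ("A2", 60, 0.90)],
    [("B1", 200, 0.99), ("B2", 90, 0.92), ("B3", 150, 0.97)],
    [("C1", 80, 0.95), ("C2", 40, 0.88)],
]
MAX_TOTAL_TIME = 350      # user QoS constraint on the composite service

def utility(plan):
    # simple aggregation: favour reliability, penalise latency
    rel, time = 1.0, 0
    for _, t, r in plan:
        rel *= r
        time += t
    return rel - time / 1000.0

# brute force over the (small) candidate space: keep the best feasible composition
best = max(
    (plan for plan in product(*candidates)
     if sum(t for _, t, _ in plan) <= MAX_TOTAL_TIME),
    key=utility,
)
print([name for name, _, _ in best])
```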

8.

Feature selection (FS) is a critical step in data mining and plays a crucial role in the performance of machine learning algorithms: it reduces processing time and improves classification accuracy. In this paper, three different solutions are proposed for FS. In the first solution, a multi-objective version of the Harris Hawks Optimization (HHO) algorithm is used; in the second, a multi-objective version of the Fruit fly Optimization Algorithm (FOA); and in the third, the two are hybridized and named MOHHOFOA. The results were compared with the MOPSO, NSGA-II, BGWOPSOFS and B-MOABC algorithms for FS on 15 standard data sets using the mean, best, worst and standard deviation (STD) criteria. The Wilcoxon statistical test was also used with a significance level of 5% and the Bonferroni–Holm method to control the family-wise error rate. The results are shown as Pareto-front charts, indicating that the performance of the proposed solutions on these data sets is promising.
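Reporting results as Pareto fronts implies at least two objectives per feature subset, typically classification error and subset size. The dominance filter used to build such a front is sketched below; the exact objectives and search operators of MOHHOFOA are not given in the abstract, so the archive contents are made up for illustration.

```python
def dominates(a, b):
    """a and b are objective tuples to MINIMISE, e.g. (error, n_features)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """solutions: list of (feature_mask, objectives); keep the non-dominated ones."""
    front = []
    for mask, obj in solutions:
        if not any(dominates(other, obj) for _, other in solutions if other != obj):
            front.append((mask, obj))
    return front

# toy archive of evaluated subsets: (mask, (classification error, number of selected features))
archive = [
    ((1, 0, 1, 0), (0.12, 2)),
    ((1, 1, 1, 0), (0.10, 3)),
    ((1, 1, 1, 1), (0.10, 4)),   # dominated by the 3-feature subset
    ((0, 0, 1, 0), (0.25, 1)),
]
for mask, obj in pareto_front(archive):
    print(mask, obj)
```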


9.
Probabilistic latent semantic analysis (PLSA) is a double-structure mixture model that is widely applied in text and web mining. The method establishes hidden semantic relations among the observed features using a number of latent variables, and selecting the correct number of latent variables is critical. In most previous research, the number of latent topics was selected based on the number of classes involved. This paper presents a method, based on a backward elimination approach, that performs unsupervised order selection in PLSA. The method starts with a model having more components than needed and then prunes the mixtures to reach their optimum size. During the elimination process, properly selecting which latent variables to delete is the essential problem, and it directly determines the final performance of the pruned model. To treat this problem, we introduce a new combined pruning method that selects the best options for removal while keeping the computational cost low. We conducted experiments on two datasets from the Reuters-21578 corpus. The obtained results show that this algorithm leads to an optimized number of latent variables and, in turn, achieves better clustering performance than conventional model selection methods. It also shows superiority over the case in which a PLSA model with a fixed number of latent variables, equal to the real number of clusters, is exploited.
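The backward-elimination loop can be sketched compactly: fit PLSA with too many topics by EM, repeatedly prune one topic and continue training, and keep the model size favoured by a selection criterion. The paper combines several criteria to decide which topic to delete; in this simplified sketch the topic with the smallest prior P(z) is pruned and BIC decides the final size.

```python
import numpy as np

def em(N, pz, pdz, pwz, iters=40):
    """EM for PLSA on the doc-word count matrix N; returns updated parameters and log-likelihood."""
    for _ in range(iters):
        joint = pz[:, None, None] * pdz[:, :, None] * pwz[:, None, :]   # (k, D, W)
        resp = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)       # P(z | d, w)
        nz = (resp * N).sum(axis=(1, 2))
        pz = nz / nz.sum()
        pdz = (resp * N).sum(axis=2) / (nz[:, None] + 1e-12)            # P(d | z)
        pwz = (resp * N).sum(axis=1) / (nz[:, None] + 1e-12)            # P(w | z)
    mix = (pz[:, None, None] * pdz[:, :, None] * pwz[:, None, :]).sum(axis=0)
    return pz, pdz, pwz, (N * np.log(mix + 1e-12)).sum()

def backward_plsa(N, k_start=8, seed=0):
    rng = np.random.default_rng(seed)
    D, W = N.shape
    pz = np.full(k_start, 1.0 / k_start)
    pdz = rng.dirichlet(np.ones(D), size=k_start)
    pwz = rng.dirichlet(np.ones(W), size=k_start)
    best_k, best_bic = k_start, np.inf
    while True:
        pz, pdz, pwz, ll = em(N, pz, pdz, pwz)
        k = len(pz)
        n_par = k * (D - 1) + k * (W - 1) + (k - 1)
        bic = -2 * ll + n_par * np.log(N.sum())
        if bic < best_bic:
            best_k, best_bic = k, bic
        if k == 1:
            return best_k
        j = int(np.argmin(pz))                      # prune the weakest topic and renormalize
        pz = np.delete(pz, j); pz = pz / pz.sum()
        pdz = np.delete(pdz, j, axis=0)
        pwz = np.delete(pwz, j, axis=0)

rng = np.random.default_rng(3)
true_topics = rng.dirichlet(np.ones(30) * 0.1, size=3)      # 3 underlying topics
N = np.vstack([rng.multinomial(200, true_topics[i % 3]) for i in range(60)])
print(backward_plsa(N))      # ideally selects a value near 3
```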

10.
As a novel evolutionary technique, particle swarm optimization (PSO) has received increasing attention and wide application in a variety of fields. To our knowledge, this paper presents the first application of the PSO algorithm to the parallel machine scheduling problem. Using update equations analogous to those of classical PSO, we present a discrete PSO algorithm (DPSO) to minimize the makespan (Cmax) criterion. We also investigate the effectiveness of the DPSO algorithm by hybridizing it with an efficient local search heuristic. To verify the performance of the DPSO algorithm and its hybridized version (HDPSO), comparisons are made against a recently proposed simulated annealing algorithm for the problem from the literature. Computational results show that the proposed DPSO algorithm is very competitive and can be rapidly guided toward good solutions when hybridized with a local search heuristic.
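The building blocks mentioned here, a job-to-machine assignment, the makespan objective, and a local search that reassigns jobs, are easy to illustrate. The sketch below shows those ingredients only; the DPSO velocity and position equations are paper-specific and not reproduced, and the shift neighbourhood is a generic stand-in for the paper's heuristic.

```python
import random

def makespan(assign, times, m):
    """Cmax of an assignment: the load of the most loaded machine."""
    load = [0.0] * m
    for job, machine in enumerate(assign):
        load[machine] += times[job]
    return max(load)

def local_search(assign, times, m):
    """First-improvement shift neighbourhood: move one job to another machine
    whenever that reduces the makespan, until no such move exists."""
    assign = assign[:]
    improved = True
    while improved:
        improved = False
        current = makespan(assign, times, m)
        for job in range(len(assign)):
            for machine in range(m):
                if machine == assign[job]:
                    continue
                old = assign[job]
                assign[job] = machine
                if makespan(assign, times, m) < current:
                    current = makespan(assign, times, m)
                    improved = True
                else:
                    assign[job] = old        # undo a non-improving move
    return assign

random.seed(4)
times = [random.randint(1, 20) for _ in range(12)]
m = 3
start = [random.randrange(m) for _ in times]
better = local_search(start, times, m)
print(makespan(start, times, m), "->", makespan(better, times, m))
```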

11.
Existing regression algorithms do not exploit the relationships between features and outputs, among the outputs themselves, or among the samples when handling multi-output regression on high-dimensional data, and therefore tend to produce unstable models. To address this, a new low-rank feature-selection multi-output regression method is proposed. The method uses a low-rank constraint to build a low-rank regression model that captures the correlation structure among the output variables; it further applies the L2,p-norm to the loss of this low-rank model to perform sample selection, reasonably removing the interference of noise and outliers; and it uses an L2,p-norm regularization term to penalize the regression coefficient matrix for feature selection, effectively modeling the feature-output relationships and avoiding the curse of dimensionality. Experimental results on real data sets show that the proposed method performs very well for multi-output regression analysis of high-dimensional data.
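The objective combines three ingredients: a low-rank coefficient matrix W = AB (capturing correlations among outputs), an L2,p loss over the residual rows (down-weighting noisy samples), and an L2,p penalty on the rows of W (driving whole features to zero). A plausible form of that objective is evaluated below; the paper's exact formulation and solver may differ.

```python
import numpy as np

def l2p_norm(M, p):
    """sum_i ||row_i(M)||_2^p, i.e. the L2,p matrix norm raised to the p-th power."""
    return (np.linalg.norm(M, axis=1) ** p).sum()

def objective(A, B, X, Y, lam=0.1, p=0.5):
    W = A @ B                            # rank(W) <= r by construction
    residual = Y - X @ W                 # one row per sample, one column per output
    return l2p_norm(residual, p) + lam * l2p_norm(W, p)

rng = np.random.default_rng(5)
n, d, m, r = 100, 20, 4, 2               # samples, features, outputs, target rank
X = rng.normal(size=(n, d))
Y = X[:, :3] @ rng.normal(size=(3, m)) + 0.05 * rng.normal(size=(n, m))
A = rng.normal(size=(d, r))              # low-rank factors of the coefficient matrix
B = rng.normal(size=(r, m))
print(objective(A, B, X, Y))
```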

12.
In supervised learning, one frequently encounters settings where the dimensionality of the examples far exceeds the number of samples. In such cases many features are irrelevant to the class labels, so algorithms that achieve sparse feature selection and better classification performance at the same time are advantageous. A classification and feature selection algorithm based on a weighted-kernel logistic nonlinear regression model is proposed. The diagonal elements of the weight matrix take values between 0 and 1 and are treated as learning parameters determined by the optimization process; a fast alternating (rotation) optimization algorithm is discussed. The proposed algorithm was tested on ten real data sets, and the experimental results show that it compares favorably with L1-, L2-, and Lp-regularized logistic regression classifiers.
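The mechanism can be illustrated with a Gaussian kernel whose per-feature weights lie in [0, 1]: a weight near 0 switches a feature off, so learning the weights performs feature selection. The paper learns the weights and the classifier jointly with an alternating optimization; the crude coordinate search below, assuming scikit-learn, is only an illustration of the weighted kernel itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def weighted_kernel(X1, X2, w, gamma=0.5):
    # K(a, b) = exp(-gamma * sum_j w_j * (a_j - b_j)^2), with 0 <= w_j <= 1
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2 * w).sum(axis=2)
    return np.exp(-gamma * d)

def score(w, X, y):
    K = weighted_kernel(X, X, w)                 # empirical kernel map used as features
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, K, y, cv=3).mean()

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 6))
y = (X[:, 0] + X[:, 2] > 0).astype(int)          # only features 0 and 2 matter

w = np.ones(6)
for _ in range(3):                               # coordinate search over the feature weights
    for j in range(6):
        cands = [0.0, 0.5, 1.0]
        vals = [score(np.where(np.arange(6) == j, c, w), X, y) for c in cands]
        w[j] = cands[int(np.argmax(vals))]
print("learned feature weights:", w)
```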

13.
Li Lingyu, Liu Zhi-Ping. Applied Intelligence, 2022, 52(10): 11672-11702
Feature selection on a network structure can not only discover interesting variables but also mine out their intricate interactions. Regularization is often employed to...

14.
Comparison of model selection for regression (cited 10 times: 0 self-citations, 10 by others)
Cherkassky V, Ma Y. Neural Computation, 2003, 15(7): 1691-1714
We discuss empirical comparison of analytical methods for model selection. Currently, there is no consensus on the best method for finite-sample estimation problems, even for the simple case of linear estimators. This article presents empirical comparisons between classical statistical methods - Akaike information criterion (AIC) and Bayesian information criterion (BIC) - and the structural risk minimization (SRM) method, based on Vapnik-Chervonenkis (VC) theory, for regression problems. Our study is motivated by empirical comparisons in Hastie, Tibshirani, and Friedman (2001), which claims that the SRM method performs poorly for model selection and suggests that AIC yields superior predictive performance. Hence, we present empirical comparisons for various data sets and different types of estimators (linear, subset selection, and k-nearest neighbor regression). Our results demonstrate the practical advantages of VC-based model selection; it consistently outperforms AIC for all data sets. In our study, SRM and BIC methods show similar predictive performance. This discrepancy (between empirical results obtained using the same data) is caused by methodological drawbacks in Hastie et al. (2001), especially in their loose interpretation and application of SRM method. Hence, we discuss methodological issues important for meaningful comparisons and practical application of SRM method. We also point out the importance of accurate estimation of model complexity (VC-dimension) for empirical comparisons and propose a new practical estimate of model complexity for k-nearest neighbors regression.
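The comparison turns on how each criterion charges for model complexity. For linear least squares with k parameters, n samples and residual sum of squares RSS, the common Gaussian forms are AIC = n*ln(RSS/n) + 2k and BIC = n*ln(RSS/n) + k*ln(n). The snippet below selects a polynomial degree with both; the SRM penalty additionally requires a VC-dimension estimate and is omitted from this sketch.

```python
import numpy as np

def rss_of_poly(x, y, degree):
    coef = np.polyfit(x, y, degree)
    return ((np.polyval(coef, x) - y) ** 2).sum()

def select_degree(x, y, max_degree=8):
    n = len(x)
    table = []
    for deg in range(1, max_degree + 1):
        k = deg + 1                                   # number of parameters
        rss = rss_of_poly(x, y, deg)
        aic = n * np.log(rss / n) + 2 * k
        bic = n * np.log(rss / n) + k * np.log(n)
        table.append((deg, aic, bic))
    best_aic = min(table, key=lambda t: t[1])[0]
    best_bic = min(table, key=lambda t: t[2])[0]
    return best_aic, best_bic

rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 60)
y = 1 + 2 * x - 3 * x ** 3 + 0.1 * rng.normal(size=60)   # true degree is 3
print(select_degree(x, y))                                # e.g. (3, 3) or close
```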

15.
The bacterial foraging optimization (BFO) algorithm is a relatively new swarm intelligence method that performs well on continuous optimization problems through its chemotaxis, swarming, reproduction and elimination-dispersal steps. However, the BFO algorithm is rarely used for feature selection. In this paper, we propose two novel BFO algorithms, named the adaptive chemotaxis bacterial foraging optimization algorithm (ACBFO) and the improved swarming and elimination-dispersal bacterial foraging optimization algorithm (ISEDBFO). ACBFO contains two improvements. First, to handle the discrete problem, the data structure of each bacterium is redefined to establish a mapping between the bacterium and a feature subset. Second, an adaptive method for evaluating the importance of features is designed, so that the primary features in the feature subset are preserved. ISEDBFO builds on ACBFO and also includes two modifications. First, to describe the cell-to-cell attraction-repulsion relationship more accurately, the swarming representation is improved by introducing the hyperbolic tangent function. Second, to retain the primary features of eliminated bacteria, a roulette-wheel technique is applied in the elimination-dispersal phase. In this study, ACBFO and ISEDBFO are tested on 10 public UCI data sets. The performance of the proposed methods is compared with particle swarm optimization based, genetic algorithm based, simulated annealing based, ant lion optimization based, binary bat algorithm based and cuckoo search based approaches. The experimental results demonstrate that the average classification accuracy of the proposed algorithms is nearly 3 percentage points higher than that of the other tested methods. Furthermore, the improved algorithms reduce the length of the feature subset by almost 3 features in comparison with the other methods. In addition, the modified methods achieve excellent performance on the Wilcoxon signed-rank test and the sensitivity-specificity test. In conclusion, the novel BFO algorithms can provide important support for expert and intelligent systems.
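Three mechanisms named in the abstract are sketched in isolation below, with illustrative constants and mappings: a real-valued bacterium position mapped to a binary feature subset (which is what lets BFO address a discrete problem), a hyperbolic-tangent attraction term, and standard roulette-wheel selection as used in the elimination-dispersal phase. These are not the paper's exact operators.

```python
import math
import random

def to_subset(position, threshold=0.5):
    """Map a real-valued position vector to a binary feature mask (sigmoid + threshold)."""
    return [1 if 1.0 / (1.0 + math.exp(-p)) > threshold else 0 for p in position]

def tanh_attraction(distance, depth=0.1, width=0.2):
    """Cell-to-cell attraction term shaped by a hyperbolic tangent of the distance."""
    return -depth * math.tanh(width * distance)

def roulette_pick(weights, rng):
    """Roulette-wheel selection: index i is chosen with probability weights[i] / sum(weights)."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

rng = random.Random(8)
position = [rng.uniform(-2, 2) for _ in range(6)]
print(to_subset(position))
print(round(tanh_attraction(1.5), 4))
print(roulette_pick([0.4, 0.1, 0.3, 0.05, 0.1, 0.05], rng))
```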

16.
Evolutionary selection extreme learning machine optimization for regression (cited 2 times: 1 self-citation, 1 by others)
Neural network models for regression can approximate unknown datasets with small error. As an important method for global regression, the extreme learning machine (ELM) is a typical learning method for single-hidden-layer feedforward networks, owing to its good generalization performance and fast training. The randomness of the input weights allows the nonlinear combination of hidden nodes to approximate arbitrary functions. In this paper, we seek an alternative mechanism for the input connections. The idea is derived from evolutionary algorithms. After predefining the number L of hidden nodes, we generate the original ELM models; each hidden node is treated as a gene. The hidden nodes are ranked, and the nodes with larger output weights are reassigned to the updated ELM. We put the L/2 trivial hidden nodes in a candidate reservoir. Then, we generate L/2 new hidden nodes and combine them with hidden nodes from this candidate reservoir to form L hidden nodes. Another ranking is used to choose among these hidden nodes; fitness-proportional selection then selects L/2 hidden nodes and recombines the evolutionary-selection ELM. The entire algorithm can be applied to large-scale dataset regression. The verification shows that the regression performance is better than that of the traditional ELM and the Bayesian ELM, at lower computational cost.
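A minimal ELM plus one "selection" generation gives the flavour of this scheme: each hidden node (random input weights and bias) is a gene, nodes are ranked by the size of their learned output weight, the weaker half is replaced with freshly generated nodes, and the output weights are re-solved. The reservoir and fitness-proportional recombination of the paper are simplified away here.

```python
import numpy as np

def elm_fit(X, y, W, b):
    H = np.tanh(X @ W + b)                       # hidden layer output
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return beta, H

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

L = 40                                           # number of hidden nodes
W = rng.normal(size=(1, L)); b = rng.normal(size=L)
beta, H = elm_fit(X, y, W, b)
print("initial RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))

# rank the genes by |beta| and replace the weaker half with new random nodes
order = np.argsort(-np.abs(beta))
keep = order[: L // 2]
W_new = rng.normal(size=(1, L // 2)); b_new = rng.normal(size=L // 2)
W = np.concatenate([W[:, keep], W_new], axis=1)
b = np.concatenate([b[keep], b_new])
beta, H = elm_fit(X, y, W, b)
print("after one selection step:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```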

17.
The cross-validation deletion-substitution-addition (cvDSA) algorithm is based on data-adaptive estimation methodology to select and estimate marginal structural models (MSMs) for point treatment studies as well as models for conditional means where the outcome is continuous or binary. The algorithm builds and selects models based on user-defined criteria for model selection, and utilizes a loss function-based estimation procedure to distinguish between different model fits. In addition, the algorithm selects models based on cross-validation methodology to avoid “over-fitting” data. The cvDSA routine is an R software package available for download. An alternative R-package (DSA) based on the same principles as the cvDSA routine (i.e., cross-validation, loss function), but one that is faster and with additional refinements for selection and estimation of conditional means, is also available for download. Analyses of real and simulated data were conducted to demonstrate the use of these algorithms, and to compare MSMs where the causal effects were assumed (i.e., investigator-defined), with MSMs selected by the cvDSA. The package was used also to select models for the nuisance parameter (treatment) model to estimate the MSM parameters with inverse-probability of treatment weight (IPTW) estimation. Other estimation procedures (i.e., G-computation and double robust IPTW) are available also with the package.
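The cross-validated, loss-based selection that cvDSA/DSA build on can be illustrated generically (this is not the R package's API): several candidate model specifications are scored by cross-validated squared-error loss and the one with the smallest loss is kept.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 3))
y = 1.5 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=300)

# candidate model specifications (design matrices), scored by CV squared-error loss
candidates = {
    "main effects":       X,
    "main + interaction": np.column_stack([X, X[:, 0] * X[:, 1]]),
    "x0 only":            X[:, [0]],
}
scores = {name: -cross_val_score(LinearRegression(), Z, y, cv=5,
                                 scoring="neg_mean_squared_error").mean()
          for name, Z in candidates.items()}
print(min(scores, key=scores.get), scores)
```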

18.
Decentralized optimization for distributed-lag models of discrete systems (cited 2 times: 0 self-citations, 2 by others)
Hiroyuki Tamura. Automatica, 1975, 11(6): 593-602
The approach discussed in this paper solves a general class of optimization problems for discrete dynamic systems that include distributed lags, distributed and/or multiple pure delays, and constraints on both state and control variables. The overall system equation of this problem is described by a high-order multidimensional nonlinear difference equation, called the distributed-lag model. Applying Lagrange duality theory to the original problem, the dual problem is formulated and a stage-wise decomposition of the optimization process is obtained. It is shown that by solving the dual problem the delay terms can be handled easily and the optimal solution to the original problem is obtained without reducing the multidimensional high-order system equation to a conventional, larger-dimensional first-order system equation. It is also shown that the dual decentralized method of this paper copes with state and control constraints more easily than primal methods in the space of controls, i.e., gradient and related techniques. The approach developed in this paper is compared with other methods using a simple example, and is applied to a combined marketing and production control problem. Some computational results are included.
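The key mechanism is Lagrangian duality: dualising the coupling constraints makes the Lagrangian separable, so each stage can be minimised on its own while the multipliers are updated by (sub)gradient ascent. The toy problem below, minimising a separable quadratic subject to one coupling constraint and simple bounds, shows only that mechanism, not the paper's full distributed-lag formulation.

```python
import numpy as np

c = np.array([1.0, 4.0, 2.0, 3.0])      # stage-wise targets in sum_i (x_i - c_i)^2
B = 6.0                                  # coupling constraint: sum(x) = B
lo, hi = 0.0, 3.0                        # per-stage bounds, handled stage by stage

lam = 0.0                                # Lagrange multiplier for the coupling constraint
for _ in range(200):
    # each Lagrangian term (x_i - c_i)^2 + lam * x_i is minimised independently
    x = np.clip(c - lam / 2.0, lo, hi)
    # dual ascent: the dual gradient is the constraint violation
    lam += 0.2 * (x.sum() - B)
print("x =", np.round(x, 3), " sum =", round(float(x.sum()), 3))
```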

19.
20.
In this paper, a two-level delimitative and combinatorial algorithm for a kind of (0,1,2) programming is proposed and applied to the discrete optimization of structures. The algorithm generates all combinations within a certain range of objective function values by using a two-level generating method and eliminates the majority of infeasible or non-optimal combinations by using a two-level delimitative algorithm, so that computational efficiency is high. Additionally, a (0,1,2) programming model of discrete structural optimization is established; the algorithm obtains a local optimum solution and thus provides a way to judge whether or not the approximate optimum solution obtained by a heuristic algorithm is a local optimum.
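A brute-force reference for the (0,1,2) programming model makes the setup concrete: every variable takes one of three values, all combinations are enumerated and the cheapest feasible one is returned. The paper's contribution is a two-level generation and delimitation scheme that avoids enumerating most of these combinations; the sketch below only states the model (with made-up coefficients) and yields a check against which a heuristic solution can be compared.

```python
from itertools import product

cost = [3.0, 5.0, 2.0, 4.0]                     # objective: sum(cost[i] * x[i])
gain = [2.0, 3.0, 1.0, 2.5]                     # constraint: sum(gain[i] * x[i]) >= demand
demand = 9.0

def objective(x):
    return sum(c * v for c, v in zip(cost, x))

def feasible(x):
    return sum(g * v for g, v in zip(gain, x)) >= demand

# exhaustive enumeration of the 3^n (0,1,2) combinations, keeping the cheapest feasible one
best = min(
    (x for x in product((0, 1, 2), repeat=len(cost)) if feasible(x)),
    key=objective,
)
print(best, objective(best))
```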
