Similar documents
20 similar documents found (search time: 46 ms)
1.
This paper presents a novel adaptive cuckoo search (ACS) algorithm for optimization. The step size is made adaptive using each solution's fitness value and its current position in the search space. Another important feature of ACS is its speed: it is faster than the standard CS algorithm. The aim is to make the cuckoo search (CS) algorithm parameter-free, without a Lévy step. The proposed algorithm is validated on twenty-three standard benchmark test functions. The second part of the paper proposes an efficient face recognition algorithm using ACS, principal component analysis (PCA) and intrinsic discriminant analysis (IDA). The proposed algorithms are named PCA + IDA and ACS + IDA. Interestingly, PCA + IDA offers a perturbation-free algorithm for dimension reduction, while ACS + IDA finds the optimal feature vectors for classifying face images based on IDA. For the performance analysis, we use three standard face databases: YALE, ORL, and FERET. A comparison of the proposed method with state-of-the-art methods reveals the effectiveness of our algorithm.
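The abstract does not give the exact adaptive step-size rule. A minimal sketch of the general idea (an assumption for illustration, not the authors' formula) is to scale each nest's step by its normalized fitness, so poorer solutions take larger steps and the step shrinks over time, with no Lévy flight involved:

```python
import random

def sphere(x):
    # simple benchmark: f(x) = sum of squares, minimum 0 at the origin
    return sum(v * v for v in x)

def adaptive_cuckoo_search(f, dim=5, n_nests=15, iters=200, lo=-5.0, hi=5.0):
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fits = [f(n) for n in nests]
    for t in range(1, iters + 1):
        best, worst = min(fits), max(fits)
        for i in range(n_nests):
            # adaptive step: worse nests move farther; shrink steps over time
            ratio = (fits[i] - best) / (worst - best + 1e-12)
            step = (hi - lo) * ratio / t
            cand = [min(hi, max(lo, x + step * random.uniform(-1, 1)))
                    for x in nests[i]]
            fc = f(cand)
            if fc < fits[i]:  # greedy replacement
                nests[i], fits[i] = cand, fc
    return min(fits)
```

Note that the current best nest gets a zero step and is only replaced when another nest overtakes it, which is one plausible reading of "step size from fitness and position".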

2.
《Computer Networks》2007,51(11):3172-3196
A search-based heuristic is presented for the optimisation of communication networks where traffic forecasts are uncertain and the problem is NP-complete. While algorithms such as genetic algorithms (GA) and simulated annealing (SA) are often used for this class of problem, this work applies a combination of newer optimisation techniques, specifically fast local search (FLS), an improved hill-climbing method, and guided local search (GLS), which allows escape from local minima. The GLS + FLS combination is compared with optimised GA and SA approaches. In terms of implementation, the parameterisation of GLS + FLS is significantly simpler than that of the GA or SA. The self-regularisation feature of GLS + FLS also provides a distinct advantage over the other techniques, which require manual parameterisation. To compare numerical performance, the three techniques were tested over a number of network sets varying in size, number of switch-circuit demands (network bandwidth demands) and level of uncertainty on those demands. The results show that GLS + FLS outperforms the GA and SA techniques in both solution quality and optimisation speed and, even more importantly, requires significantly less parameterisation time.
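The escape mechanism of GLS can be illustrated by its standard penalty-update rule: at a local minimum, the features present in the stuck solution with maximal utility c_i / (1 + p_i) get their penalty incremented, and search continues on the augmented cost g(s) + λ · Σ p_i. A generic sketch (standard GLS, not the paper's network-specific implementation):

```python
def gls_penalize(features_in_solution, costs, penalties):
    """One GLS penalty update at a local minimum.

    features_in_solution: indices of features present in the stuck solution
    costs[i]: static cost c_i of feature i; penalties[i]: current penalty p_i
    Returns updated penalties: features of maximal utility get p_i += 1.
    """
    utils = {i: costs[i] / (1.0 + penalties[i]) for i in features_in_solution}
    top = max(utils.values())
    return [p + 1 if i in utils and utils[i] == top else p
            for i, p in enumerate(penalties)]

def augmented_cost(base_cost, features_in_solution, penalties, lam):
    # h(s) = g(s) + lambda * sum of penalties of features present in s
    return base_cost + lam * sum(penalties[i] for i in features_in_solution)
```

Penalizing high-cost, rarely penalized features raises the augmented cost of the current basin, which is what lets FLS climb out of the local minimum.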

3.
In this study, we propose a set of new algorithms to enhance the effectiveness of classifying the 5-year survivability of breast cancer patients from a massive, imbalanced data set. The proposed classifiers combine the synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO) with well-known classifiers such as logistic regression, the C5 decision tree (C5) model, and 1-nearest-neighbour search. To assess this new set of classifiers, the g-mean and accuracy indices are used as performance measures, and the proposed classifiers are compared with those in previous studies. Experimental results show that the hybrid SMOTE + PSO + C5 algorithm is the best of all the combinations for classifying the 5-year survivability of breast cancer patients. We conclude that embedding SMOTE in appropriate search algorithms such as PSO, together with classifiers such as C5, can significantly improve classification on massive imbalanced data sets.
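SMOTE's core step is to synthesize new minority samples by interpolating between a minority point and one of its nearest minority neighbours. A simplified, dependency-free sketch of that step (an illustration of standard SMOTE, not the authors' exact pipeline):

```python
import random

def smote(minority, n_new, k=3):
    """Generate n_new synthetic minority samples by linear interpolation
    between a random minority point and one of its k nearest neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    synthetic = []
    for _ in range(n_new):
        x = random.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nn = random.choice(neighbours)
        gap = random.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nn)))
    return synthetic
```

Because each synthetic point is a convex combination of two minority samples, the new points stay inside the minority region rather than duplicating existing samples.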

4.
This paper presents the results of a comparative study whose objective is to identify the most effective and efficient way of applying a local search method embedded in a hybrid algorithm. The hybrid metaheuristic employed in this study is called "DE–HS–HJ" because it comprises two cooperating metaheuristic algorithms, differential evolution (DE) and harmony search (HS), and one local search (LS) method, Hooke–Jeeves (HJ) direct search. Eighteen different ways of applying HJ local search were implemented, and all of them were evaluated on 19 problems in terms of six performance indices covering both accuracy and efficiency. Statistical analyses were conducted to determine the significance of the performance differences. The test results show that the best three LS application strategies overall are: applying local search to every generated solution with a specified probability and also to each newly updated solution (NUS + ESP); applying local search to every generated solution with a specified probability (ESP); and applying local search to every generated solution with a specified probability and also to the updated current global best solution (EUGbest + ESP). ESP is found to be the best local search application strategy in terms of success rate, and integrating it with NUS further improves the overall performance. EUGbest + ESP is the most efficient and also achieves a high level of accuracy (fourth place in terms of success rate, with an average above 0.9).

5.
3-D Networks-on-Chip (NoCs) have been proposed as a potent solution to the interconnection and design-complexity problems facing future System-on-Chip (SoC) designs. In this paper, two topology-aware multicast routing algorithms for 3-D NoCs, Multicasting XYZ (MXYZ) and Alternative XYZ (AL + XYZ), are proposed. In essence, MXYZ is a simple dimension-order multicast routing algorithm that targets 3-D NoC systems built upon regular topologies. To support multicast routing in irregular regions, AL + XYZ can be applied: an alternative output channel is sought to forward/replicate the packets whenever the output channel determined by MXYZ is unavailable. To evaluate the performance of MXYZ and AL + XYZ, extensive experiments were conducted comparing them against a path-based multicast routing algorithm and an irregular-region-oriented multiple-unicast routing algorithm, respectively. The experimental results confirm that the proposed MXYZ and AL + XYZ schemes have lower latency and power consumption than the other two routing algorithms, making the two proposed algorithms more suitable for multicasting in 3-D NoC systems. In addition, the hardware implementation cost of AL + XYZ is shown to be quite modest.

6.
Dynamic time-linkage optimization problems (DTPs) are a special class of dynamic optimization problems (DOPs) with the feature of time-linkage: decisions taken now can influence future problem states. Although DTPs are common in practice, they have received little attention from the field of evolutionary optimization, where prediction is to date the major approach for solving them. However, existing studies have not addressed how to handle unreliable predictions in the complete black-box optimization (BBO) case. In this paper, the prediction approach EA + predictor, proposed by Bosman, is improved to handle such situations. A stochastic-ranking selection scheme based on prediction accuracy is designed to improve EA + predictor under unreliable prediction, where the prediction accuracy is based on the rank of the individuals rather than their fitness. Experimental results show that, compared with the original prediction approach, the performance of the improved algorithm is competitive.

7.
Differential evolution (DE) is a simple and powerful evolutionary algorithm for global optimization. DE combined with constraint-handling techniques, known as constrained differential evolution (CDE), can be used to solve constrained optimization problems (COPs). In existing CDEs, the parents used to produce trial vectors are selected at random from the current population, yet individuals carrying good fitness and diversity information should have a greater chance of being selected. This study proposes a new CDE framework, named MS-CDE, that uses a nondominated-sorting mutation operator based on fitness and diversity information. In MS-CDE, the fitness of each individual is first calculated from the current population state; individuals are then ranked according to their fitness and diversity contributions; finally, parents for the mutation operators are selected with probability proportional to these rankings. Thus, promising individuals with better fitness and diversity are more likely to be selected as parents. The MS-CDE framework can be applied to most CDE variants; in this study, it is applied to two popular representative variants, (μ + λ)-CDE and ECHT-DE. Experimental results on 24 benchmark functions from CEC'2006 and 18 benchmark functions from CEC'2010 show that the proposed framework is an effective way to enhance the performance of CDE algorithms.
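Selecting parents "in proportion to their rankings" can be sketched generically with rank-proportional roulette selection (a toy illustration of the selection idea only; the linear rank weighting is an assumption, not the MS-CDE operator):

```python
import random

def rank_select(ranked_ids, rng=random):
    """Pick a parent with probability proportional to its rank weight:
    the best of n individuals gets weight n, the worst gets weight 1.
    ranked_ids must be ordered best-first."""
    n = len(ranked_ids)
    weights = [n - i for i in range(n)]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for idx, w in zip(ranked_ids, weights):
        acc += w
        if r <= acc:
            return idx
    return ranked_ids[-1]  # guard against floating-point edge cases
```

With this scheme, better-ranked individuals are chosen more often as parents while worse ones still retain a non-zero chance, preserving diversity.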

8.
Stock index forecasting is a hot issue in the financial arena. Because the movements of stock indices are non-linear and subject to many internal and external factors, they pose a great challenge to researchers who try to predict them. In this paper, we select a radial basis function neural network (RBFNN) to train on data and forecast the stock indices of the Shanghai Stock Exchange, and we introduce the artificial fish swarm algorithm (AFSA) to optimize the RBFNN. To increase forecasting efficiency, a K-means clustering algorithm is optimized by AFSA in the learning process of the RBFNN. To verify the usefulness of our algorithm, we compared the forecasting results of the RBFNN optimized by AFSA, genetic algorithms (GA) and particle swarm optimization (PSO), as well as the forecasting results of ARIMA, BP and support vector machine (SVM) models. Our experiments indicate that the RBFNN optimized by AFSA is an easy-to-use algorithm with considerable accuracy. Of all the input combinations we tried in this paper, BIAS6 + MA5 + ASY4 was the optimal group, with the smallest errors.

9.
In biometric systems, reference facial images captured during enrollment are commonly secured using watermarking, where invisible watermark bits are embedded into these images. Evolutionary computation (EC) is widely used to optimize embedding parameters in intelligent watermarking (IW) systems. Traditional IW methods represent all blocks of a cover image within each candidate embedding solution of the EC algorithm and suffer from premature convergence when dealing with high-resolution grayscale facial images. For instance, processing a 2048 × 1536-pixel grayscale facial image that embeds 1 bit per 8 × 8-pixel block yields an optimization problem with 49k variables represented by 293k binary bits. Such large-scale global optimization problems cannot be decomposed into smaller independent ones, because watermarking metrics are calculated over the entire image. In this paper, a blockwise coevolutionary genetic algorithm (BCGA) is proposed for high-dimensional IW optimization of the embedding parameters of high-resolution images. BCGA is based on cooperative coevolution between different candidate solutions at the block level, using a local block watermarking metric (BWM). It is characterized by a novel elitism mechanism driven by local blockwise metrics, where the blocks with higher BWM values are selected to form candidate solutions of higher global fitness. The crossover and mutation operators of BCGA are performed at the block level. Experimental results on the PUT face image database indicate a 17% fitness improvement for BCGA compared to a classical GA. Owing to its improved exploration capabilities, BCGA converges in fewer generations, indicating an optimization speedup.

10.
Protein thermostability information is closely linked to the commercial production of many biomaterials. Recent work has shown that amino acid composition, special sequence patterns, hydrogen bonds, disulfide bonds, salt bridges and similar factors are of considerable importance to thermostability. In this study, we present a system that integrates these various factors to predict protein thermostability. The features of the proteins in the PGTdb are analyzed; both structure and sequence features are considered, and correlation coefficients are incorporated into the feature selection algorithm. Machine learning algorithms are then used to develop identification systems, and the performances of the different algorithms are compared. Two features, (E + F + M + R)/residue and charged/non-charged, are found to be critical to the thermostability of proteins. Although the combined sequence-and-structure models achieve higher accuracy, sequence-only models provide sufficient accuracy for sequence-only thermostability prediction.

11.
《Computer Networks》2003,41(1):73-88
To provide real-time services or engineer constraint-based paths, networks require the underlying routing algorithm to find low-cost paths that satisfy given quality-of-service constraints. However, constrained shortest (least-cost) path routing is known to be NP-hard, and heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in execution time to be applicable to large networks. In this paper, we focus on the delay-constrained minimum-cost path problem and present a fast algorithm to find a near-optimal solution. This algorithm, called delay-cost-constrained routing (DCCR), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, so it can return a near-optimal solution in a very short time. Furthermore, we use a variant of the Lagrangian relaxation method proposed by Handler and Zang [Networks 10 (1980) 293] to further reduce the search space through a tighter bound on path cost, making the algorithm more accurate and even faster. We call this improved algorithm search-space reduction + DCCR (SSR + DCCR). Through extensive simulations, we confirm that SSR + DCCR performs very well compared to the optimal but very expensive solution.
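The core idea of folding the delay constraint into a single path weight can be sketched with a best-first search over the aggregated weight cost + λ·delay, returning the first target pop that also meets the delay bound. This is a simplified heuristic illustration of the Lagrangian-weight idea, not the paper's full k-shortest-path DCCR, and like DCCR it is not guaranteed optimal:

```python
import heapq

def cheapest_delay_feasible(graph, src, dst, delay_bound, lam):
    """Best-first search on the aggregated weight cost + lam * delay.
    graph[u] = list of (v, cost, delay) edges.
    Returns (cost, delay) of the first delay-feasible path popped, or None."""
    pq = [(0.0, 0.0, 0.0, src)]  # (aggregated weight, cost, delay, node)
    seen = set()
    while pq:
        agg, cost, delay, u = heapq.heappop(pq)
        if u == dst:
            if delay <= delay_bound:
                return cost, delay
            continue  # infeasible path; try the next-best aggregated weight
        if u in seen:
            continue
        seen.add(u)
        for v, c, d in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (agg + c + lam * d, cost + c, delay + d, v))
    return None
```

Raising λ biases the search toward low-delay paths; tuning it (as Lagrangian relaxation does) trades path cost against delay feasibility.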

12.
In this paper, a new approach to multiyear expansion planning of distribution systems (MEPDS) is presented. The proposed MEPDS model optimally specifies the expansion schedule of distribution systems, including the reinforcement scheme of distribution feeders as well as the sizing and location of distributed generation (DG) units over a given planning horizon, and it can determine the optimal timing (i.e. year) of each investment/reinforcement. The objective function minimizes the total investment, operation and emission costs while satisfying various technical and operational constraints. To solve the MEPDS model, a complicated multi-dimensional optimization problem, a new two-stage solution approach is presented, composed of a binary modified imperialist competitive algorithm (BMICA) and improved shark smell optimization (ISSO), i.e. BMICA + ISSO. The performance of the proposed MEPDS model and of the BMICA + ISSO approach is verified on two distribution systems, a classic 34-bus and a real-world 94-bus system, as well as on a well-known benchmark function. Additionally, the results of BMICA + ISSO are compared with those of other two-stage solution methods.

13.
When comparing multiple algorithms on multiple optimisation problems, it is expected that the number of algorithms, the number of problems and even the number of independent runs will affect the final conclusions. Our research question was: to what extent do these three factors affect the conclusions of standard null hypothesis significance testing (NHST) and the conclusions of our novel comparison and ranking method, the Chess Rating System for Evolutionary Algorithms (CRS4EAs)? An extensive experiment was conducted in which the results of k = 16 algorithms on N = 40 optimisation problems over n = 100 runs were gathered and saved. These results were then analysed to show how the three values affect the final results, how they affect ranking, and which values produce unreliable results. The influence of the number of algorithms was examined for k = {4, 8, 12, 16}, the number of problems for N = {5, 10, 20, 40}, and the number of independent runs for n = {10, 30, 50, 100}. We were also interested in comparing the two methods (NHST's Friedman test with the post-hoc Nemenyi test, and CRS4EAs) to see whether one has advantages over the other. While the conclusions from the analysis of k were broadly similar, this research showed that a wrong value of N can give unreliable results when analysing with the Friedman test: the Friedman test detects no, or only a few, significant differences for small values of N, whereas CRS4EAs does not suffer from this problem. We also show that CRS4EAs is an appropriate method when only a small number of independent runs n is available.
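CRS4EAs treats each pairwise comparison of algorithm runs as a chess game and updates the algorithms' ratings accordingly. The published method uses a more elaborate chess rating system, so the plain Elo update below is only an illustration of the rating idea, not the CRS4EAs formula:

```python
def elo_update(r_a, r_b, score_a, k=32):
    """One Elo update for algorithm A after a 'game' against algorithm B.
    score_a is 1 for an A win (better run result), 0.5 for a draw, 0 for a loss."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - expected_a)
```

Ranking by rating sidesteps the significance-detection problems NHST has for small N: every game moves the ratings, and confidence can be expressed through rating intervals rather than p-values.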

14.
A new version of the Euclidean algorithm is developed for computing the greatest common divisor of two Gaussian integers. It uses approximation to obtain a sequence of remainders of decreasing absolute value. The algorithm is compared with the new (1 + i)-ary algorithm of Weilert and found to be somewhat faster if properly implemented.
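The remainder sequence described above can be written down directly with Python's complex type: divide, round the quotient to the nearest Gaussian integer, and recurse on the remainder. Rounding both parts guarantees |r| ≤ |b|/√2 < |b|, so the absolute values strictly decrease and the loop terminates. (A textbook sketch of the Gaussian-integer Euclidean algorithm, not the paper's optimized variant.)

```python
def gaussian_gcd(a, b):
    """Euclidean algorithm on Gaussian integers (complex numbers with
    integer real and imaginary parts). Returns a gcd, which is unique
    only up to the units 1, -1, 1j, -1j."""
    while b != 0:
        q = complex(round((a / b).real), round((a / b).imag))
        a, b = b, a - q * b  # |a - q*b| < |b|, so this terminates
    return a
```

For example, gaussian_gcd(5 + 0j, 3 + 1j) returns a unit multiple of 2 - 1j, whose norm 5 divides both norms 25 and 10.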

15.
Based on a detailed check of the LDA + U and GGA + U corrected methods, we find that the transition energy levels depend almost linearly on the effective U parameter. GGA + U appears to perform better than LDA + U, with an effective U parameter of about 5.0 eV. Moreover, although the LDA and GGA results differ considerably before correction, the corrected transition energy levels spread over less than 0.3 eV. These largely consistent results indicate the necessity and validity of the LDA + U and GGA + U corrections.

16.
Cervical cancer is one of the most frequent cancers, but it can be cured effectively if diagnosed at an early stage. This work is a novel effort towards effective characterization of cervix lesions from contrast-enhanced CT-scan images, providing a reliable and objective discrimination between benign and malignant lesions. The performance of such classification models depends largely on the features used to represent the samples in the training dataset. Selecting the optimal feature subset is NP-hard, and randomized algorithms do better here. In this paper, the Grey Wolf Optimizer (GWO), a population-based meta-heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves, is utilized for feature selection. The traditional GWO applies to continuous single-objective optimization problems, but feature selection is inherently multi-objective, so this paper proposes two multi-objective binary GWO algorithms: a scalarized approach (MOGWO) and a non-dominated-sorting-based GWO (NSGWO). These are used for wrapper-based feature selection, selecting the optimal textural feature subset for improved classification of cervix lesions. For the experiments, contrast-enhanced CT-scan (CECT) images of 62 patients were used, in which all lesions had been recommended for surgical biopsy by a specialist. Gray-level co-occurrence-matrix-based texture features are extracted from a two-level wavelet decomposition of the cervix regions extracted from the CECT images. The results of the proposed approaches are compared with widely used meta-heuristics for multi-objective optimization, namely the genetic algorithm (GA) and the firefly algorithm (FA). With better diversification and intensification, GWO obtains Pareto solutions that dominate those obtained by GA and FA on the utilized cervix lesion cases.
Cervix lesions are classified as benign or malignant with up to 91% accuracy using only five features selected by NSGWO. A two-tailed t-test on the mean F-score at a significance level of 0.05 confirms that NSGWO performs significantly better than the other methods on the cervix lesion dataset at hand. Further experiments on high-dimensional microarray gene-expression datasets collected online demonstrate that the proposed method selects relevant genes for high-dimensional, multi-category cancer diagnosis significantly better than the other methods, with an average improvement of 12.82% in F-score.

17.
The discontinuous Galerkin (DG) method is known to provide good wave resolution properties, especially for long-time simulation. In this paper, using Fourier analysis, we provide a quantitative error analysis for the semi-discrete DG method applied to time-dependent linear convection equations with periodic boundary conditions. We apply the same technique to show that, when using piecewise polynomials of degree k, the error is superconvergent of order k + 2 at the Radau points of each element and of order 2k + 1 at the downwind point of each element. An analysis of the fully discretized approximation is also provided, and we compute the number of points per wavelength required to obtain a fixed error for several fully discrete schemes. Numerical results are provided to verify our error analysis.

18.
It is very important for financial institutions to develop credit rating systems that help them decide whether to grant credit to consumers before issuing loans. Statistical and machine learning techniques for credit rating have been extensively studied in the literature, and recent studies of hybrid models that combine different machine learning techniques have shown promising results. However, there are various ways to combine techniques into hybrid models, and it is unknown which hybrid machine learning model performs best in credit rating. In this paper, four types of hybrid models are compared: 'Classification + Classification', 'Classification + Clustering', 'Clustering + Classification', and 'Clustering + Clustering'. A real-world dataset from a bank in Taiwan is used for the experiment. The experimental results show that the 'Classification + Classification' hybrid model, combining logistic regression and neural networks, provides the highest prediction accuracy and maximizes profit.

19.
Due to the challenging constrained search spaces of real-world engineering problems, a variation of the Chimp Optimization Algorithm (ChOA), called the Universal Learning Chimp Optimization Algorithm (ULChOA), is proposed in this paper, in which a universal learning method is applied to all of the previous best knowledge obtained by the chimps (candidate solutions) to update the prey's position (the best solution). This technique preserves population diversity, discouraging premature convergence on multimodal optimization problems. Furthermore, ULChOA introduces a dedicated constraint-management approach for handling the constraints of real-world constrained optimization problems. Fifteen commonly recognized multimodal functions, twelve real-world constrained optimization problems, and the ten IEEE CEC06-2019 suite tests are used to assess ULChOA's performance. The results suggest that ULChOA surpasses sixteen of the eighteen algorithms, with an average Friedman rank better than 78 percent across all 25 numerical functions and 12 engineering problems, while outperforming jDE100 and DISHchain1e+12 by 21% and 39%, respectively. According to the Bonferroni-Dunn and Holm tests, ULChOA is statistically superior to the benchmark algorithms on both the test functions and the engineering challenges. We believe that ULChOA may be utilized to solve challenges involving multimodal search spaces, and it is more widely applicable to engineering problems than the competing benchmark algorithms.

20.
Given the constraints and frame conditions of the real processes, production in bakeries can be modelled as a no-wait permutation flow-shop, following the definitions of scheduling theory. A modified genetic algorithm, ant colony optimization and a random search procedure were used to analyse and optimize the production planning of a bakery production line that processes 40 products on 26 production stages. This setup leads to 8.2 × 10^47 possible schedules in a permutation flow-shop model and is thus not solvable in reasonable time by exact methods. Two objective functions of economic interest were analysed: the makespan and the total idle time of machines. In combination with the created model, the applied algorithms proved capable of producing optimized schedules within a predefined runtime of 15 min, reducing the makespan by up to 8.6% and the total idle time of machines by up to 23%.
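The makespan objective that all three algorithms minimize can be evaluated with the classical permutation flow-shop recurrence, where each job's start on a stage waits for both the previous job on that stage and the same job on the previous stage. (Shown here without the no-wait restriction, which additionally forbids a job from pausing between stages; a simplified evaluation sketch, not the paper's bakery model.)

```python
def makespan(perm, proc):
    """Completion time of the last job on the last stage for a given job
    permutation. proc[j][s] = processing time of job j on stage s."""
    stages = len(proc[0])
    finish = [0.0] * stages  # finish[s]: latest completion time on stage s
    for j in perm:
        for s in range(stages):
            # wait for the previous job on this stage and for this job's
            # previous stage (finish[s-1] was just updated for job j)
            start = max(finish[s], finish[s - 1] if s else 0.0)
            finish[s] = start + proc[j][s]
    return finish[-1]
```

A metaheuristic such as the GA or ACO in the paper then searches over permutations, calling an evaluation like this for each candidate schedule.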
