Similar Literature
20 similar records found.
1.
An efficient swap algorithm for the lattice Boltzmann method   (total citations: 1; self-citations: 0; by others: 1)
During the last decade, the lattice-Boltzmann method (LBM) has become increasingly acknowledged as a valuable tool in computational fluid dynamics. The widespread application of LBM is partly due to the simplicity of its coding. The best-known algorithms for implementing the standard lattice-Boltzmann equation (LBE) are the two-lattice and two-step algorithms. However, implementations of the two-lattice or the two-step algorithm suffer from high memory consumption or poor computational performance, respectively. Ultimately, the available computing resources decide which of the two disadvantages is more critical. Here we introduce a new algorithm, called the swap algorithm, for the implementation of the LBE. Simulation results demonstrate that implementations based on the swap algorithm achieve high computational performance and have very low memory consumption. Furthermore, we show how the performance of its implementations can be further improved by code optimization.
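The abstract does not spell out the mechanics of the swap algorithm, so the following is only a minimal sketch of the general idea of in-place streaming via pairwise swaps (the exact scheme of the paper may differ; the names `C`, `OPP`, `HALF`, and `swap_stream` are ours). For a D2Q9 lattice stored as a NumPy array of shape (nx, ny, 9), a local opposite-pair swap at every cell (which a real implementation fuses into the collision step for free) followed by one neighbour swap per direction pair reproduces periodic streaming without allocating a second lattice.

```python
import numpy as np

# D2Q9 velocity set; OPP[i] is the index of the direction opposite to i.
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
OPP = [0, 3, 4, 1, 2, 7, 8, 5, 6]
HALF = (1, 2, 5, 6)          # one representative of each opposite pair

def swap_stream(f):
    """In-place periodic streaming of a (nx, ny, 9) distribution array."""
    nx, ny, _ = f.shape
    # Step 1: swap each opposite pair locally (free when fused into collision).
    for i in HALF:
        f[:, :, [i, OPP[i]]] = f[:, :, [OPP[i], i]]
    # Step 2: swap the local opposite slot with the downstream neighbour's slot.
    for i in HALF:
        cx, cy = C[i]
        for x in range(nx):
            for y in range(ny):
                xn, yn = (x + cx) % nx, (y + cy) % ny
                f[x, y, OPP[i]], f[xn, yn, i] = f[xn, yn, i], f[x, y, OPP[i]]
    return f
```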

2.
We present a comparative study of numerical algorithms to solve the time-dependent Maxwell equations for systems with spatially varying permittivity and permeability. We show that the Lie-Trotter-Suzuki product-formula approach can be used to construct a family of unconditionally stable algorithms, the conventional Yee algorithm, and two new variants of the Yee algorithm that do not require the use of the staggered-in-time grid. We also consider a one-step algorithm, based on the Chebyshev polynomial expansion, and compare the computational efficiency of the one-step, the Yee-type, the alternating-direction-implicit, and the unconditionally stable algorithms. For applications where the long-time behavior is of main interest, we find that the one-step algorithm may be orders of magnitude more efficient than present multiple time-step, finite-difference time-domain algorithms.
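For reference, here is a hedged sketch of the conventional Yee scheme that the paper takes as a baseline, reduced to one dimension in vacuum with normalized units (the function name, grid size, and source term are our own illustrative choices); the paper's unconditionally stable and Chebyshev one-step algorithms are not reproduced.

```python
import numpy as np

def yee_1d(steps=400, n=200, dx=1.0, courant=0.5):
    """Conventional 1-D Yee (FDTD) leapfrog update in vacuum, with
    normalized units c = eps0 = mu0 = 1 and a soft Gaussian source."""
    ez = np.zeros(n)          # E_z at integer grid points
    hy = np.zeros(n)          # H_y at half-integer grid points
    dt = courant * dx         # stable in 1-D for courant <= 1
    for t in range(steps):
        hy[:-1] += dt / dx * (ez[1:] - ez[:-1])        # half-step H update
        ez[1:] += dt / dx * (hy[1:] - hy[:-1])         # half-step E update
        ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # excitation
    return ez, hy
```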

3.
Simulated Annealing (SA) is a single-solution-based metaheuristic technique inspired by the annealing process in metallurgy. It is also one of the best-known metaheuristic algorithms due to its simplicity and good performance. Despite its interesting characteristics, SA suffers from several limitations, such as premature convergence. On the other hand, Japanese swordsmithing refers to the labour-intensive manual process for producing high-quality bladed weapons from impure raw metals. During this process, Japanese smiths fold and reheat pieces of metal multiple times in order to eliminate impurities and defects. In this paper, an improved version of the SA algorithm is presented. In the new approach, a population of agents is considered, and each agent conducts a search strategy based on a modification of the SA scheme. The proposed algorithm modifies the original SA by incorporating two new operators, folding and reheating, inspired by the ancient Japanese swordsmithing technique. Under the new approach, folding is conceived as a compression of the search space, while the reheating mechanism reinitializes the cooling process of the original SA scheme. With this inclusion, the new algorithm maintains the computational structure of the SA method while improving its search capabilities. To evaluate its performance, the proposed algorithm is tested on a set of 28 benchmark functions, which include multimodal, unimodal, composite and shifted functions, and on 3 real-world optimization problems. The results demonstrate the high performance of the proposed method when compared to the original SA and other popular state-of-the-art algorithms.
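As a rough illustration of the scheme described above, the sketch below runs a small population of SA agents and adds two loosely modelled operators: "folding" shrinks the search bounds around the best solution found so far, and "reheating" restarts the cooling schedule. All parameter values and the exact form of both operators are our assumptions, not the paper's.

```python
import math, random

def folded_sa(f, bounds, agents=5, iters=3000, t0=1.0, fold_every=1000):
    """Population-of-SA-agents sketch with folding and reheating."""
    lo = [l for l, _ in bounds]
    hi = [h for _, h in bounds]
    pop = [[random.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(agents)]
    fit = [f(x) for x in pop]
    b = min(range(agents), key=lambda i: fit[i])
    best, fbest = list(pop[b]), fit[b]
    t = t0
    for k in range(1, iters + 1):
        for a in range(agents):
            cand = [min(h, max(l, x + random.gauss(0, 0.1 * (h - l))))
                    for x, l, h in zip(pop[a], lo, hi)]
            fc = f(cand)
            if fc < fit[a] or random.random() < math.exp((fit[a] - fc) / max(t, 1e-12)):
                pop[a], fit[a] = cand, fc
                if fc < fbest:
                    best, fbest = list(cand), fc
        t *= 0.999                                     # geometric cooling
        if k % fold_every == 0:                        # fold the search space ...
            lo = [l + 0.25 * (b_ - l) for l, b_ in zip(lo, best)]
            hi = [h - 0.25 * (h - b_) for h, b_ in zip(hi, best)]
            t = t0                                     # ... and reheat
    return best, fbest

# usage: best, val = folded_sa(lambda x: sum(v * v for v in x), [(-5, 5)] * 10)
```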

4.
The minimization of binary functions finds many applications in practice, and can be solved by the simulated annealing (SA) algorithm. However, the SA algorithm is designed for general combinatorial problems, not specifically for binary problems. Consequently, a direct application of the SA algorithm might not provide optimal performance and efficiency. Therefore, this study specifically investigated the performance of various implementations of the SA algorithm when applied to binary functions. Results obtained in this investigation demonstrated that 1) the SA algorithm can reliably minimize difficult binary functions, 2) a simple technique, analogous to the local search technique used in minimizing continuous functions, can exploit the special structure of binary problems and significantly improve the solution with negligible computational cost, and 3) this technique effectively reduces computational cost while maintaining reconstruction fidelity in binary tomography problems. This study also developed two classes of binary functions to represent the typical challenges encountered in minimization.
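The abstract does not give the local-search technique itself, so the following is a hedged guess at what such a cheap bit-flip refinement could look like (function name and acceptance rule are ours): sweep over the bits, keep any single flip that lowers the objective, and stop when a full sweep brings no improvement.

```python
def bit_flip_descent(f, x):
    """Greedy single-bit-flip local search over a 0/1 vector x."""
    x = list(x)
    fx = f(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] ^= 1                 # try flipping bit i
            fn = f(x)
            if fn < fx:
                fx = fn               # keep the improving flip
                improved = True
            else:
                x[i] ^= 1             # undo a non-improving flip
    return x, fx

# usage after SA: x_refined, cost = bit_flip_descent(binary_cost, x_from_sa)
```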

5.
Statistical Genetic Algorithm   (total citations: 28; self-citations: 1; by others: 28)
张铃, 张钹. 《软件学报》 (Journal of Software), 1997, 8(5): 335-344
This paper discusses the shortcomings of the schema theorem in genetic algorithms and improves upon it. It then analyzes the similarity between genetic algorithms and the A algorithm, as well as the probabilistic nature of genetic algorithms. This suggests a further similarity with the SA algorithm, and on that basis the authors transplant their previously developed theory of SA algorithms into the genetic algorithm, establishing a new algorithm called the statistical genetic algorithm (SGA). To make it suitable for optimization computations, the authors introduce a maximum-value statistic and its corresponding SA algorithm (the SMA algorithm), and combine the SMA algorithm with the GA (denoted the SGA(MAX) algorithm). The new algorithm not only improves accuracy and reduces computational complexity, but also overcomes the premature-convergence phenomenon of GAs and opens the possibility of parallel computation. More importantly, the new method provides a powerful theoretical and methodological tool for the quantitative analysis of the accuracy, reliability, and computational complexity of GAs.

6.
Nowadays, mixed-model assembly lines are used increasingly as a result of the diversification of customer demand. An important problem in this field is determining the sequence in which products enter the line. Before determining the best sequence of products, a new procedure is introduced to choose important orders for entering the shop floor. The orders are sorted using an analytic hierarchy process (AHP) approach based on three criteria: the critical ratio of each order (CRo), the significance degree of the customer, and innovation in the product; the last criterion is introduced here for the first time. Six objective functions are considered: minimizing total utility work cost, total setup cost, and total production-rate variation cost, which have been presented previously; minimizing total idle cost; and two new objectives, minimizing total operator error cost and total tardiness cost, which are presented for the first time. The total tardiness cost favours sequences that minimize tardiness for customers with high priority. First, GAMS software is used to check the feasibility of the model. Because GAMS could not search the entire solution space, the problem is solved in two stages, and because the problem is NP-hard, particle swarm optimization (PSO) and simulated annealing (SA) algorithms are used. For small-sized problems, to compare the exact method with the proposed algorithms, the problem is solved with the meta-heuristic algorithms in the same two stages as with GAMS, whereas for large-sized problems it can be solved in either one stage or two stages with the proposed algorithms. The computational results and pairwise comparisons (based on the sign test) show that GAMS is suitable for small-sized problems, whereas for large-sized problems the objective function is better when the problem is solved in one stage rather than two; it is therefore proposed to solve large-sized problems in one stage. The PSO algorithm also outperforms the SA algorithm in terms of both the objective function and the pairwise comparisons.
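For readers unfamiliar with the metaheuristics used here, the block below is a textbook continuous PSO in its simplest form (parameter values are our choices); the paper's own sequence encoding, its six-term cost function, and its SA counterpart are not reproduced.

```python
import random

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best particle swarm optimization for a box-bounded
    continuous objective f; returns the best position and its value."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [list(x) for x in X]                          # personal bests
    pbest = [f(x) for x in X]
    gi = min(range(n), key=lambda i: pbest[i])
    g, gbest = list(P[gi]), pbest[gi]                 # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, list(X[i])
                if fx < gbest:
                    gbest, g = fx, list(X[i])
    return g, gbest
```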

7.
A hybrid simulated-annealing global optimization strategy for high-dimensional complex functions   (total citations: 1; self-citations: 0; by others: 1)
For high-dimensional complex function optimization problems, classical optimization algorithms suffer from sensitivity to the initial point and convergence to local optima, while intelligent algorithms such as simulated annealing suffer from high computational cost and premature convergence. The No-Free-Lunch (NFL) theorem [1] suggests that hybrid optimization strategies are the best way to solve practical optimization problems. This paper combines the strengths of simulated annealing and classical algorithms to design a hybrid simulated-annealing optimization strategy for high-dimensional complex functions. The hybrid strategy retains the global convergence of simulated annealing, while a classical algorithm with strong local convergence is introduced as an elite-individual improvement operator, strengthening the local exploitation ability of simulated annealing and accelerating convergence. Numerical simulation results show that the hybrid simulated-annealing strategy greatly outperforms either single algorithm on high-dimensional complex functions, with strong robustness, fast convergence, and high accuracy. The design ideas of this paper provide a useful reference for solving practical problems.
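A minimal sketch of the hybrid idea described in the abstract, with our own choices filled in: plain SA provides global exploration, and a classical local optimizer (here SciPy's Nelder-Mead, standing in for the "elite-individual improvement operator") periodically polishes the incumbent. The coupling details in the paper may differ.

```python
import math, random
import numpy as np
from scipy.optimize import minimize

def hybrid_sa(f, dim, iters=2000, t0=10.0, polish_every=200, lo=-10.0, hi=10.0):
    """SA with periodic local refinement of the best-so-far solution."""
    x = np.random.uniform(lo, hi, dim)
    fx = f(x)
    best, fbest = x.copy(), fx
    t = t0
    for k in range(1, iters + 1):
        cand = np.clip(x + np.random.normal(0.0, 0.5, dim), lo, hi)
        fc = f(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= 0.995
        if k % polish_every == 0:              # elite improvement step
            res = minimize(f, best, method="Nelder-Mead")
            if res.fun < fbest:
                best, fbest = res.x, res.fun
                x, fx = best.copy(), fbest     # continue SA from the polished point
    return best, fbest
```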

8.
The LP and CP methods are two versions of the piecewise perturbation methods for solving the Schrödinger equation. On each step the potential function is approximated by a constant (for CP) or by a linear function (for LP), and the deviation of the true potential from this approximation is treated by perturbation theory. This paper is based on the idea that an LP algorithm can be made faster if expressed in a CP-like form. We obtain a version of order 12 whose two main ingredients are a new set of formulae for the computation of the zeroth-order solution, which replaces the use of the Airy functions, and a convenient way of expressing the formulae for the perturbation corrections. Tests on a set of eigenvalue problems with a very large number of eigenvalues show that the proposed algorithm competes very well with a CP version of the same order and is one order of magnitude faster than the LP algorithms existing in the literature. We also formulate a new technique for step-width adjustment and bring some new elements for a better understanding of the energy dependence of the error for the piecewise perturbation methods.
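To make the CP idea concrete, here is a sketch of only its zeroth-order ingredient (our formulation): on each step the potential is replaced by a constant reference value V0, for which y'' = (V0 - E) y has an exact 2x2 propagator. The high-order perturbation corrections that the paper is actually about are omitted.

```python
import math

def cp_zeroth_order_step(y, dy, V0, E, h):
    """Propagate (y, y') of y'' = (V0 - E) y exactly over a step of length h,
    i.e. the constant-reference-potential (CP) zeroth-order solution."""
    k = V0 - E
    if k > 0.0:                       # classically forbidden region
        w = math.sqrt(k)
        c, s = math.cosh(w * h), math.sinh(w * h)
        return c * y + s / w * dy, w * s * y + c * dy
    if k < 0.0:                       # oscillatory region
        w = math.sqrt(-k)
        c, s = math.cos(w * h), math.sin(w * h)
        return c * y + s / w * dy, -w * s * y + c * dy
    return y + h * dy, dy             # k == 0: free propagation
```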

9.
IDR/QR: an incremental dimension reduction algorithm via QR decomposition   (total citations: 1; self-citations: 0; by others: 1)
Dimension reduction is a critical data preprocessing step for many database and data mining applications, such as efficient storage and retrieval of high-dimensional data. In the literature, a well-known dimension reduction algorithm is linear discriminant analysis (LDA). The common aspect of previously proposed LDA-based algorithms is the use of singular value decomposition (SVD). Due to the difficulty of designing an incremental solution for the eigenvalue problem on the product of scatter matrices in LDA, there has been little work on designing incremental LDA algorithms that can efficiently incorporate new data items as they become available. In this paper, we propose an LDA-based incremental dimension reduction algorithm, called IDR/QR, which applies QR decomposition rather than SVD. Unlike other LDA-based algorithms, this algorithm does not require the whole data matrix in main memory. This is desirable for large data sets. More importantly, with the insertion of new data items, the IDR/QR algorithm can constrain the computational cost by applying efficient QR-updating techniques. Finally, we evaluate the effectiveness of the IDR/QR algorithm in terms of classification error rate on the reduced dimensional space. Our experiments on several real-world data sets reveal that the classification error rate achieved by the IDR/QR algorithm is very close to the best possible one achieved by other LDA-based algorithms. However, the IDR/QR algorithm has much less computational cost, especially when new data items are inserted dynamically.
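A rough, batch-mode sketch of the QR-based idea (our reconstruction, not the paper's pseudocode): project the data onto the orthonormal basis of the class-centroid matrix obtained via QR decomposition and solve a small discriminant problem in that subspace. The incremental QR-updating machinery that gives IDR/QR its efficiency on dynamically inserted data is not shown.

```python
import numpy as np

def qr_lda_fit(X, y):
    """Return a d x k transformation matrix G; reduce data with X @ G."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)
    Q, _ = np.linalg.qr(centroids)             # d x k orthonormal basis
    Z = X @ Q                                  # data in the k-dim subspace
    mean_all = Z.mean(axis=0)
    k = Q.shape[1]
    Sw = np.zeros((k, k))                      # within-class scatter
    Sb = np.zeros((k, k))                      # between-class scatter
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        Sb += len(Zc) * np.outer(mc - mean_all, mc - mean_all)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)   # small k x k problem
    order = np.argsort(-evals.real)
    return Q @ evecs.real[:, order]

# usage: G = qr_lda_fit(X_train, y_train); X_reduced = X_test @ G
```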

10.
A simulated annealing algorithm for dynamic layout problem   (total citations: 1; self-citations: 0; by others: 1)
The increased level of volatility in today's manufacturing world has demanded new approaches for modelling and solving many of its well-known problems, such as the facility layout problem. Over a decade ago Rosenblatt published a key paper on modelling and solving the dynamic version of the facility layout problem. Since then, various other researchers have proposed new and improved models and algorithms to solve the problem. Balakrishnan and Cheng have recently published a comprehensive review of the literature on this subject. The problem has been defined as a complex combinatorial optimisation problem. The efficiency of SA in solving combinatorial optimisation problems is very well known; however, a review of the available literature shows that it had not yet been applied to the dynamic layout problem (DLP). In this research paper an SA-based procedure for the DLP is developed and results for test problems are reported.

Scope and purpose

One of the characteristics of today's manufacturing environments is volatility. Under a volatile (or dynamic) manufacturing environment, demand is not stable. To operate efficiently under such conditions, facilities must adapt to changing demand, which requires solving the dynamic layout problem (DLP). The DLP is a complex combinatorial optimisation problem for which optimal solutions can be found only for small-sized instances. This research paper makes use of an SA algorithm to solve the DLP. Simulated annealing (SA) is a well-established stochastic neighbourhood search technique with the potential to solve complex combinatorial optimisation problems. The paper presents in detail how to apply SA to the DLP, together with an extensive computational study. The computational study shows that SA is quite effective in solving dynamic layout problems.
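The following is a bare SA skeleton for a layout-type problem, offered only to illustrate the kind of procedure developed in the paper (all details are ours): a solution is a facility-to-location permutation, the neighbourhood move is a pairwise exchange, and acceptance follows the Metropolis rule; cost(perm) would encode the dynamic-layout objective, including rearrangement costs between periods.

```python
import math, random

def sa_layout(cost, n, iters=20000, t0=100.0, alpha=0.9995):
    """Simulated annealing over permutations with pairwise-exchange moves."""
    perm = list(range(n))
    random.shuffle(perm)
    f = cost(perm)
    best, fbest = perm[:], f
    t = t0
    for _ in range(iters):
        i, j = random.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]        # exchange two facilities
        fc = cost(perm)
        if fc < f or random.random() < math.exp((f - fc) / max(t, 1e-12)):
            f = fc
            if f < fbest:
                best, fbest = perm[:], f
        else:
            perm[i], perm[j] = perm[j], perm[i]    # reject: undo the move
        t *= alpha
    return best, fbest
```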

11.

This paper describes two new suboptimal mask search algorithms for fuzzy inductive reasoning (FIR), a technique for modelling dynamic systems from observations of their input/output behaviour. Inductive modelling is by its very nature an optimisation problem. Modelling large-scale systems in this fashion involves solving a high-dimensional optimisation problem, a task that invariably carries a high computational cost; suboptimal search algorithms are therefore important. One of the two proposed algorithms is a new variant of a directed hill-climbing method. The other is a statistical technique based on spectral coherence functions. The utility of the two techniques is demonstrated by means of an industrial example: a garbage incinerator process is inductively modelled from observations of 20 variable trajectories. Both suboptimal search algorithms lead to similarly good models, and each carries a computational cost on the order of a few percent of the cost of solving the complete optimisation problem. Both algorithms can also be used to filter out variables of lesser importance, i.e. they can be used as variable selection tools.

12.
An efficient algorithm for learning to rank from preference graphs   (total citations: 1; self-citations: 0; by others: 1)
In this paper, we introduce a framework for ranking cost functions of the regularized least-squares (RLS) type and propose three such cost functions. Further, we propose a kernel-based preference learning algorithm, which we call RankRLS, for minimizing these functions. It is shown that RankRLS has many computational advantages compared to ranking algorithms that are based on minimizing other types of costs, such as the hinge cost. In particular, we present efficient algorithms for training, parameter selection, multiple-output learning, cross-validation, and large-scale learning. Circumstances under which these computational benefits make RankRLS preferable to RankSVM are considered. We evaluate RankRLS on four different types of ranking tasks using RankSVM and standard RLS regression as the baselines. RankRLS outperforms standard RLS regression and its performance is very similar to that of RankSVM, while RankRLS has several computational benefits over RankSVM.
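To give a feel for an RLS-type ranking cost, here is a naive linear version (entirely our own simplification): minimize the squared difference between predicted and observed score differences over the preference pairs, plus a ridge penalty. The actual RankRLS algorithm solves the kernelized problem and obtains its training, cross-validation, and model-selection shortcuts in ways this sketch does not show.

```python
import numpy as np

def pairwise_rls(X, y, pairs, lam=1.0):
    """Fit w minimizing sum over (i, j) in pairs of
    ((y[i] - y[j]) - w @ (X[i] - X[j]))**2 + lam * ||w||**2."""
    D = np.array([X[i] - X[j] for i, j in pairs])   # feature differences
    t = np.array([y[i] - y[j] for i, j in pairs])   # target differences
    d = D.shape[1]
    w = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ t)
    return w                                        # rank items by w @ x
```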

13.
The simulation of fabrics, clothes, and flexible materials is an essential topic in computer animation of realistic virtual humans and dynamic sceneries. New emerging technologies, such as interactive digital TV and multimedia products, make it necessary to develop powerful tools for real-time simulation. Parallelism is one such tool. When analyzing fabric simulations computationally, we found that these codes belong to the complex class of irregular applications. Frequently this kind of code includes reduction operations at its core, so that an important fraction of the computational time is spent on such operations. In fabric simulators these operations appear when evaluating forces, giving rise to the equation system to be solved; for this reason, this paper discusses only this phase of the simulation. This paper analyzes and evaluates different irregular reduction parallelization techniques on ccNUMA shared-memory machines, applied to a real, physically based fabric simulator we have developed. Several issues are taken into account in order to achieve high code performance, such as exploitation of data-access locality and parallelism, as well as careful use of memory resources (memory overhead). In this paper we use the concept of data affinity to develop various efficient algorithms for reduction parallelization exploiting data locality.
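As a toy illustration of one standard way to parallelize such an irregular reduction (the replicated-buffer or privatization idea; the paper itself targets ccNUMA shared-memory machines in compiled code and evaluates several techniques), each worker below accumulates spring forces into its own private array, and the private arrays are summed at the end. The spring model and all names are our assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partial_forces(args):
    """Accumulate forces for a chunk of springs into a private array, so no
    two workers ever write to the same force entry concurrently."""
    springs, pos, n = args
    f = np.zeros((n, 3))
    for i, j, k, rest in springs:                  # i, j: particle indices
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-12
        force = k * (length - rest) * d / length   # linear spring force on i
        f[i] += force
        f[j] -= force
    return f

def spring_forces(springs, pos, workers=4):
    """Irregular reduction parallelized by replicating the output buffer."""
    n = len(pos)
    chunks = [springs[c::workers] for c in range(workers)]
    with ProcessPoolExecutor(workers) as ex:
        partials = ex.map(partial_forces, [(c, pos, n) for c in chunks])
    return sum(partials)                           # combine the private buffers
```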

14.
Stabilized Runge-Kutta methods (also called Chebyshev-Runge-Kutta methods) are explicit methods with extended stability domains, usually along the negative real axis. They are easy to use (they do not require algebra routines) and are especially suited for MOL discretizations of two- and three-dimensional parabolic partial differential equations. Previous codes based on stabilized Runge-Kutta algorithms were tested with mildly stiff problems. In this paper we show that they have some difficulty in efficiently solving problems whose eigenvalues are very large in absolute value (over 10^5). We also develop a new procedure to build this kind of algorithm and derive second-order methods with up to 320 stages and good stability properties. These methods are efficient numerical integrators of very large stiff ordinary differential equations. Numerical experiments support the effectiveness of the new algorithms compared to well-known methods such as RKC, ROCK2, DUMKA3 and ROCK4.
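For context, the simplest member of this family is the first-order, undamped Chebyshev method, sketched below from the standard three-term recurrence (our code, not the paper's second-order damped schemes): with s stages its real stability interval grows like -2s^2, which is what makes such methods attractive for parabolic MOL problems.

```python
def chebyshev_euler_step(f, y, h, s):
    """One step of the s-stage first-order Chebyshev (stabilized explicit
    Runge-Kutta) method for an autonomous system y' = f(y).  For linear
    problems the update amounts to y_{n+1} = T_s(1 + h*lambda/s**2) * y_n,
    so the step is stable for h*lambda roughly in [-2*s**2, 0]."""
    y_prev = y
    y_cur = y + (h / s**2) * f(y)
    for _ in range(2, s + 1):
        y_prev, y_cur = y_cur, 2.0 * y_cur - y_prev + (2.0 * h / s**2) * f(y_cur)
    return y_cur

# usage on a stiff semi-discretized heat equation (A is the discrete Laplacian):
# u = chebyshev_euler_step(lambda v: A @ v, u, dt, s=20)
```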

15.
Much research on the mathematical modeling of the facility location problem has been carried out in the discrete and continuous optimization areas to obtain the optimum number of required facilities along with the relevant allocation processes. This paper proposes a new multi-objective facility-location problem within a batch-arrival queuing framework. Three objective functions are considered: (I) minimizing the weighted sum of the waiting and traveling times, (II) minimizing the maximum idle time of each facility, and (III) minimizing the total cost associated with the opened facilities. In this way, the best combination of facilities is determined from the viewpoints of economy, equilibrium, and enhanced service quality. As the model is shown to be strongly NP-hard, two meta-heuristic algorithms, namely a genetic algorithm (GA) and simulated annealing (SA), are proposed to solve it. Not only is new coding developed for these solution algorithms, but a random search algorithm is also proposed to assess their efficiency. Since the solution quality of all meta-heuristic algorithms severely depends on their parameters, design of experiments and response surface methodologies are utilized to calibrate the parameters of both algorithms. Finally, computational results obtained by implementing both algorithms on several problems of different sizes demonstrate the performance of the proposed methodology.

16.
A new differential evolution algorithm and its applications   (total citations: 1; self-citations: 0; by others: 1)
An improved differential evolution algorithm based on a simple diversity rule is proposed and applied to constrained global optimization problems. The new algorithm is characterized by: 1) a new hybrid adaptive crossover-mutation operator that strengthens the search ability of the algorithm; 2) a constraint-handling technique that preserves population diversity; 3) a simplified scaling factor that reduces the number of control parameters of the basic differential evolution algorithm, making it easier for practitioners to use. The algorithm is tested on 13 standard benchmark functions and compared with other evolutionary algorithms. The experimental results show that the new algorithm performs very well in solution accuracy and stability, and that its average number of function evaluations is lower than that of the other evolutionary algorithms compared.
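For reference, the block below is the classical DE/rand/1/bin scheme with fixed control parameters (our baseline sketch); the paper's contributions, namely the hybrid adaptive crossover-mutation operator, the diversity-preserving constraint handling, and the simplified scaling factor, are not reproduced here.

```python
import random

def de(f, bounds, pop_size=30, gens=300, F=0.5, CR=0.9):
    """Classical DE/rand/1/bin for a box-bounded objective f."""
    dim = len(bounds)
    pop = [[random.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    l, h = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(h, max(l, v)))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=lambda i: fit[i])
    return pop[k], fit[k]
```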

17.
An efficient non-dominated sorting method for evolutionary algorithms   (total citations: 1; self-citations: 0; by others: 1)
We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN^2) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
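For comparison, the baseline the paper improves upon is the O(MN^2) fast non-dominated sort of NSGA-II, sketched below (the paper's dominance-tree, divide-and-conquer algorithm is not reproduced). Here `points` is a list of objective vectors to be minimized, and the function returns the fronts as lists of indices.

```python
def fast_nondominated_sort(points):
    """NSGA-II style non-dominated sorting of objective vectors (minimization)."""
    n = len(points)

    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                      # drop the trailing empty front
```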

18.
This paper aims at the automatic design and cost minimization of reinforced concrete vaults used in road construction. It presents three heuristic optimization methods: multi-start global best descent local search (MGB), meta-simulated annealing (SA) and meta-threshold acceptance (TA). Penalty functions are used for unfeasible solutions. The structure is defined by 49 discrete design variables, and the objective function is the cost of the structure. All methods are applied to a vault with a horizontal free span of 12.40 m, lateral walls 3.00 m high, and 1.00 m of earth cover. The paper presents two original neighborhood-search moves and an algorithm for calibrating the SA and TA algorithms. The MGB algorithm appears to be more efficient than the SA and TA algorithms in terms of mean results, whereas SA outperforms MGB and TA in terms of best results. The optimization method indicates savings of about 10% with respect to a traditional design.
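Of the three heuristics, threshold acceptance is the least often illustrated, so a generic TA skeleton follows (our sketch; the paper's moves, penalty functions, and calibration are not shown): a move is accepted whenever its cost increase stays below the current threshold, with no random acceptance test.

```python
def threshold_accepting(cost, neighbour, x0, thresholds, moves_per_level=100):
    """Generic threshold-accepting search; penalties for unfeasible designs
    are assumed to be folded into cost()."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    for T in thresholds:                     # decreasing threshold schedule
        for _ in range(moves_per_level):
            y = neighbour(x)
            fy = cost(y)
            if fy - fx < T:                  # accept small deteriorations too
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
    return best, fbest

# usage: best, val = threshold_accepting(cost, random_move, start, [10, 5, 2, 1, 0])
```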

19.
We consider a class of constrained nonlinear integer programs which arise in manufacturing batch-sizing problems with multiple raw materials. In this paper, we investigate the use of genetic algorithms (GAs) for solving these models. Both binary-coded and real-coded genetic algorithms with six different penalty functions are developed. The real-coded genetic algorithm works well with all six penalty functions, in contrast to the binary coding. A new method to calculate the penalty coefficient is also discussed. Numerical examples are provided and computational experience is discussed.
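As an illustration of the simplest kind of penalty scheme (a static penalty of our own choosing, not necessarily one of the paper's six), constraint violations written as g(x) <= 0 are added to the objective with a fixed weight before the GA evaluates fitness.

```python
def penalized_fitness(objective, constraints, r=1000.0):
    """Wrap an objective with a static penalty: each constraint g(x) <= 0
    contributes r * max(0, g(x)) to the fitness to be minimized."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + r * violation
    return fitness

# usage inside any GA:  fit = penalized_fitness(cost, [lambda x: 10 - sum(x)])
```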

20.
A novel optimization approach for minimum cost design of trusses   (total citations: 1; self-citations: 0; by others: 1)
This paper describes new optimization strategies that offer significant improvements in performance over existing methods for bridge-truss design. In this study, a real-world cost function is considered that includes costs on the weight of the truss and on the number of products in the design. We propose a new sizing approach that involves two algorithms applied in sequence: (1) a novel approach to generate a "good" initial solution and (2) a local search that attempts to reach the optimal solution starting from the final solution of the previous algorithm. A clustering technique, which identifies members that are likely to have the same product type, is used with cost functions that include a cost on the number of products. The proposed approach gives solutions that are much lower in cost than those generated in a comprehensive study of the same problem using genetic algorithms (GAs), and the number of evaluations needed to arrive at the optimal solution is an order of magnitude lower than that needed by GAs. Since existing optimization techniques use cost functions like those of minimum-weight truss problems to illustrate their performance, the proposed approach is also applied to the same examples in order to compare its relative performance; it is shown not only to generate solutions of better quality but also to do so much more efficiently. To highlight the use of this sizing approach in a broader optimization framework, a simple geometry optimization algorithm that uses the sizing approach is presented. This algorithm is also shown to provide solutions better than the existing results in the literature.
