20 similar documents found (search time: 15 ms)
1.
In energy-constrained wireless sensor networks, energy efficiency is critical for prolonging the network lifetime. This paper proposes DAACA, a family of ant colony algorithms for data aggregation. DAACA consists of three phases: initialization, packet transmission, and pheromone operations. In the transmission phase, each node estimates the residual energy and pheromone levels of its neighbors to compute the probabilities for dynamically selecting the next hop. After a certain number of transmission rounds, pheromone adjustments are performed that combine global and local information when evaporating or depositing pheromones. Four different pheromone adjustment strategies, which constitute the DAACA family, are designed to prolong the network lifetime. Experimental results indicate that, compared with other data aggregation algorithms, DAACA is superior in average node degree, energy efficiency, network lifetime, computational complexity, and the success ratio of one-hop transmission. Finally, the features of DAACA are analyzed.
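The next-hop rule described in the abstract above, where each node weighs neighbors by pheromone level and residual energy, can be sketched as a roulette-wheel choice. The weighting form, the exponents `alpha`/`beta`, and all names below are assumptions for illustration, not the paper's exact formula.

```python
import random

def select_next_hop(neighbors, alpha=1.0, beta=1.0, rng=random):
    """Roulette-wheel next-hop selection from pheromone and residual energy.

    neighbors: dict node_id -> (pheromone, residual_energy)
    alpha, beta: assumed weighting exponents (not from the paper).
    """
    # Weight each neighbor by pheromone^alpha * energy^beta.
    weights = {n: (tau ** alpha) * (e ** beta)
               for n, (tau, e) in neighbors.items()}
    total = sum(weights.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for n, w in weights.items():
        acc += w
        if acc >= r:
            return n
    return n  # numerical fallback: return the last neighbor
```

Neighbors with more pheromone and more remaining energy are proportionally more likely to be chosen, which is the load-balancing effect the abstract attributes to DAACA.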
2.
Networks-on-Chip (NoC) are an interesting option for the design of communication infrastructures in embedded systems, providing a scalable structure and balanced communication between cores. Parallel applications that take advantage of NoC architectures are usually communication-intensive, so a large number of data packets are transmitted through the network simultaneously. To avoid congestion delays that deteriorate the execution time of the implemented applications, an efficient routing strategy must be designed carefully. In this paper, the ant colony optimization paradigm is explored to find and optimize routes in a mesh-based NoC. The proposed routing algorithms are simple yet efficient, and the routing optimization is driven by minimizing the total latency of packet transmission between the tasks that compose the application. The presented performance evaluation is threefold: first, the impact of well-known synthetic traffic patterns is assessed; second, randomly generated applications are mapped onto the NoC infrastructure and synthetic communication traffic following known patterns is used to simulate real situations; third, sixteen real-world applications from the E3S suite and one application for digital image processing are mapped and their execution times evaluated. In all cases, the results are compared with those of known general-purpose algorithms for deadlock-free routing. The comparison confirms the effectiveness and superiority of the ant-colony-inspired routing.
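A latency-driven pheromone update of the kind the abstract above implies could look as follows: evaporate every link, then deposit on the links of the route just used an amount inversely proportional to the measured latency. The evaporation rate `rho` and deposit constant `q` are illustrative parameters, not values from the paper.

```python
def update_pheromone(pheromone, route, latency, rho=0.1, q=1.0):
    """Sketch of a global pheromone update for latency-minimizing NoC routing.

    pheromone: dict (src, dst) -> pheromone level
    route: list of router ids traversed by the packet
    latency: measured end-to-end latency of this route
    """
    # Evaporation on every known link.
    for link in pheromone:
        pheromone[link] *= (1.0 - rho)
    # Deposit on the used route, inversely proportional to latency.
    for link in zip(route, route[1:]):
        pheromone[link] = pheromone.get(link, 0.0) + q / latency
    return pheromone
```

Low-latency routes accumulate pheromone faster, so subsequent ants are biased toward uncongested paths.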
3.
As an important component of data mining, clustering is widely applied in many fields. The ant colony algorithm is a new algorithm studied in recent years; it adopts distributed parallel computation and a positive-feedback mechanism, and is easy to combine with other methods. According to the applications of ant colony algorithms in clustering and their different improved variants, this paper introduces several basic and popular ant colony clustering algorithms, analyses their differences, and discusses future research directions for ant colony clustering.
4.
This paper gives a concrete implementation of the basic ant colony algorithm for multiuser detection. To address the basic algorithm's tendency to become trapped in local optima, an improved ant colony multiuser detection method is proposed. It updates pheromones with a cascaded multi-stage strategy: first a selective pheromone update is performed, then a random perturbation factor is introduced to further modify the pheromones, and finally a maximum threshold bounds the pheromone range. Simulation results show that the proposed cascaded-pheromone-update ant colony multiuser detection (UCP-ACO-MUD) algorithm has a strong ability to escape local optima and performs well.
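The cascaded three-stage update described in the abstract above (selective update, random perturbation, threshold) can be sketched as below. All parameter values and names (`delta`, `eps`, `tau_min`, `tau_max`) are assumptions; the MMAS-style lower bound is an added convenience, not stated in the abstract.

```python
import random

def cascaded_update(tau, best_edges, delta=0.5, eps=0.05,
                    tau_min=0.1, tau_max=5.0, rng=random):
    """Three-stage (cascaded) pheromone update sketch.

    tau: dict component -> pheromone level
    best_edges: set of components in the current best solution
    """
    out = {}
    for k, t in tau.items():
        if k in best_edges:
            t += delta                          # stage 1: selective deposit
        t += rng.uniform(-eps, eps)             # stage 2: random perturbation
        out[k] = min(max(t, tau_min), tau_max)  # stage 3: threshold clamp
    return out
```

The perturbation keeps the search from stagnating on one solution, while the clamp stops any component's pheromone from dominating, which is the abstract's stated mechanism for escaping local optima.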
5.
Multi-objective algorithms aim to obtain a set of solutions, called the Pareto set, covering the whole Pareto front, i.e. the representation of the optimal set of solutions. To this end, the algorithms should yield a large number of near-optimal solutions with good diversity or spread along this front. This work presents a study of different coarse-grained distribution schemes for Multi-Objective Ant Colony Optimization algorithms (MOACOs). Two of them are variations of independent multi-colony structures, respectively having a fixed number of ants in every subset or distributing the whole set of ants into small sub-colonies. We introduce a third method: an island-based model where the colonies communicate by migrating ants, following a neighbourhood topology fitted to the search space. All the methods aim to cover the whole Pareto front, so each sub-colony or island searches for solutions in a limited area, complemented by the rest of the colonies, in order to obtain a more diverse, high-quality set of solutions. The models have been tested with three different MOACOs: two well-known algorithms and CHAC, an algorithm previously proposed by the authors. Three instances of the bi-criteria travelling salesman problem have been considered. The experiments were performed in a parallel environment (a cluster platform) to obtain a time improvement, and the scaling of the system with the number of processors is also analysed. The results show that the proposed Pareto-island model and its novel neighbourhood topology perform better than the other models, yielding a more diverse and better-optimized set of solutions. Moreover, from the algorithmic point of view, CHAC yields the best results on average.
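The migration step of an island model like the one the abstract above introduces can be sketched with a simple ring topology: each island copies its best solutions to its successor. The ring topology and the selection rule (lowest first-objective value) are illustrative stand-ins; the paper fits its neighbourhood topology to the search space.

```python
def migrate(islands, k=1):
    """Ring-topology migration sketch for an island-based MOACO.

    islands: list of lists of solutions, a solution being (cost, data).
    k: number of best solutions each island sends to its neighbour.
    """
    # Select each island's k best solutions before any island is modified.
    bests = [sorted(isl, key=lambda s: s[0])[:k] for isl in islands]
    # Copy them to the next island on the ring.
    for i, b in enumerate(bests):
        islands[(i + 1) % len(islands)].extend(b)
    return islands
```

Migration lets each island keep searching its own region of the front while still receiving good material found elsewhere, which is how the model aims for both diversity and quality.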
6.
The artificial bee colony (ABC) algorithm has been shown to be more effective than other population-based algorithms. However, ABC is good at exploration but poor at exploitation, which harms its convergence in some cases. To improve the convergence of ABC, an efficient and robust artificial bee colony (ERABC) algorithm is proposed. In ERABC, a combinatorial solution search equation is introduced to accelerate the search process, and a chaotic search technique is employed in the scout bee phase to avoid being trapped in local minima. Meanwhile, to maintain a sustainable evolutionary ability, reverse selection based on the roulette wheel is applied to preserve population diversity. In addition, chaotic initialization is used to produce the initial population and enhance global convergence. Finally, experimental results on 23 benchmark functions show that ERABC performs very well compared with two ABC-based algorithms.
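The "chaotic initialization" mentioned in the abstract above is commonly realised with the logistic map, which is the assumption made in this sketch; the paper's exact map and parameters are not given here.

```python
def chaotic_init(n, dim, low, high, x0=0.7, mu=4.0):
    """Chaotic population initialization via the logistic map.

    n: population size; dim: problem dimension;
    low, high: search-space bounds; x0, mu: assumed map parameters.
    """
    pop, x = [], x0
    for _ in range(n):
        ind = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)               # logistic map, stays in [0, 1]
            ind.append(low + (high - low) * x)   # scale into the search space
        pop.append(ind)
    return pop
```

Compared with uniform random initialization, the chaotic sequence is deterministic yet non-repeating, which is the usual argument for its better coverage of the search space.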
7.
An optimization strategy that uses the ant colony algorithm for virtual machine placement (ant colony optimization based virtual machine placement, ACO-VMP) is proposed. A vector-algebra-based description of the multi-dimensional resources of physical servers is established, with reducing the overall energy consumption of the cloud data centre and the number of active physical hosts as the objective function; the update of the pheromone reinforcement variable in the ant colony algorithm is...
8.
One objective of process planning optimization is to cut down the total cost of the machining process, and the ant colony optimization (ACO) algorithm is used for this optimization in this paper. First, the process planning problem, covering the selection of machining resources, operation sequencing, and the manufacturing constraints, is mapped to a weighted graph and converted into a constraint-based travelling salesman problem: the operation sets for each manufacturing feature are mapped to city groups, the costs of machining processes (machine cost and tool cost) become the weights of the cities, and the costs of preparation processes (machine changes, tool changes, and set-up changes) become the 'distances' between cities. Then, a mathematical model of the process planning problem is constructed from the machining constraints and the optimization goal, and the ACO algorithm is employed to solve it. To ensure the feasibility of the process plans, a constraint matrix and a state matrix are used in the algorithm to record the state of the operations and the search range of candidate operations. Two prismatic parts are used to compare the ACO algorithm with tabu search, simulated annealing, and a genetic algorithm. The computational results show that the ACO algorithm performs better in process planning optimization than the other three algorithms.
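Under the weighted-graph view in the abstract above, the cost of a candidate plan is the sum of per-operation (city) weights plus the transition ('distance') costs between consecutive operations. A minimal sketch of that objective, with illustrative names:

```python
def plan_cost(ops, node_cost, change_cost):
    """Total cost of an operation sequence in the weighted-graph model.

    ops: ordered list of operation ids
    node_cost: dict op -> machining cost (machine + tool cost)
    change_cost: dict (op_a, op_b) -> preparation cost between them
    """
    total = sum(node_cost[o] for o in ops)                       # city weights
    total += sum(change_cost[(a, b)] for a, b in zip(ops, ops[1:]))  # distances
    return total
```

This is the quantity an ACO (or tabu search, simulated annealing, genetic algorithm) would minimize over feasible operation orderings.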
9.
This paper presents an ant colony optimization (ACO) algorithm in an agent-based system to integrate process planning and shop-floor scheduling (IPPS). The search-based algorithm, which aims to obtain optimal solutions through an autocatalytic process, is incorporated into an established multi-agent system (MAS) platform, with the advantages of a flexible system architecture and responsive fault tolerance. Artificial ants are implemented as software agents. A graph-based solution method is proposed with the objective of minimizing makespan. Simulation studies have been carried out to evaluate the performance of the ant approach. The experimental results indicate that the ACO algorithm can effectively solve IPPS problems and that the agent-based implementation provides a distributed computation of the algorithm.
11.
The Journal of Supercomputing - Software evolution is a natural phenomenon due to changing requirements. Understanding the program structure is a significant and complicated factor in...
12.
This study develops an enhanced ant colony optimization (E-ACO) meta-heuristic for the integrated process planning and scheduling (IPPS) problem in the job-shop environment. The IPPS problem is represented by AND/OR graphs to implement the search-based algorithm, which aims at obtaining effective and near-optimal solutions in terms of makespan, job flow time, and computation time. In accordance with the characteristics of the IPPS problem, the mechanism of the ACO algorithm has been enhanced with several modifications: quantification of the convergence level, node-based pheromone, an earliest-finishing-time strategy for determining heuristic desirability, and an oriented elitist pheromone deposit strategy. Using test cases that comprehensively cover manufacturing flexibilities, experiments are conducted to evaluate the approach and to study the effects of the algorithm parameters, and a general guideline for ACO parameter tuning on IPPS problems is provided. The results show that with these modifications the ACO algorithm generates encouraging performance that outperforms many other meta-heuristics.
13.
Based on the principle of Gibbs free-energy minimization, this paper builds a numerical model for chemical equilibrium computation in complex systems using the ant colony algorithm. The model embeds a local search algorithm to improve computational accuracy and uses ant movement to obtain the global optimum. Validation computations show that the model can serve as a method for chemical equilibrium calculation in complex systems.
14.
Parallel computers are having a profound impact on computational science. Recently, highly parallel machines have taken the lead as the fastest supercomputers, a trend that is likely to accelerate in the future. We describe some of these new computers and the issues involved in using them. We present elliptic PDE solutions currently running at 3.8 gigaflops, and an atmospheric dynamics model running at 1.7 gigaflops, on a 65 536-processor computer. One intrinsic disadvantage of a parallel machine is the need for inter-processor communication, and it is important to keep such communication time to a small fraction of computation time. We analyze standard multigrid algorithms in two and three dimensions from this point of view, indicating that performance efficiencies in excess of 95% are attainable under suitable conditions on moderately parallel machines. We also demonstrate that such performance is not attainable for multigrid on massively parallel computers, as indicated by an example of poor multigrid efficiency on 65 536 processors. The fundamental difficulty is the inability to keep 65 536 processors busy when operating on very coarse grids. Most algorithms used for implementing applications on parallel machines have been derived directly from algorithms designed for serial machines. The multigrid example indicates that such 'parallelized' algorithms may not always be optimal. Parallel machines open the possibility of finding totally new approaches to solving standard tasks: intrinsically parallel algorithms. In particular, we present a class of superconvergent multiple-scale methods that were motivated directly by massively parallel machines. These methods differ from standard multigrid methods in an intrinsic way and allow all processors to be used at all times, even when processing on the coarsest grid levels. Their serial versions are not sensible algorithms.
The idea that parallel hardware (the Connection Machine in this case) can lead to the discovery of new mathematical algorithms was surprising to us.
15.
Data clustering has attracted a lot of research attention in the fields of computational statistics and data mining. In most related studies, the dissimilarity between two clusters is defined as the distance between their centroids or the distance between the two closest (or farthest) data points. However, all of these measures are vulnerable to outliers, and removing the outliers precisely is itself a difficult task. In view of this, we propose a new similarity measure, referred to as cohesion, to measure inter-cluster distances. Using this new measure, we have designed a two-phase clustering algorithm, called cohesion-based self-merging (CSM), which runs in time linear in the size of the input data set. Combining the features of partitional and hierarchical clustering methods, CSM partitions the input data set into several small subclusters in the first phase and then continuously merges the subclusters based on cohesion in a hierarchical manner in the second phase. The time and space complexities of CSM are analyzed. As shown by our performance studies, cohesion-based clustering is very robust and possesses excellent tolerance to outliers in various workloads. More importantly, CSM is shown to cluster data sets of arbitrary shapes very efficiently and to provide better clustering results than prior methods.
16.
To implement on-line process monitoring techniques such as principal component analysis (PCA) or partial least squares (PLS), it is necessary to extract data associated with normal operating conditions from the plant historical database for calibrating the models. One way to do this is to use robust outlier detection algorithms such as resampling by half-means (RHM), smallest half volume (SHV), or ellipsoidal multivariate trimming (MVT) in the off-line model-building phase. While RHM and SHV are conceptually clear and statistically sound, their computational requirements are heavy. Closest distance to center (CDC) is proposed in this paper as an alternative for outlier detection. Since the use of the Mahalanobis distance in the initial step of MVT is known to be ineffective for detecting outliers, CDC is incorporated into MVT to improve it. Performance was evaluated relative to the goal of finding the best half of a data set, using data sets derived from the Tennessee Eastman process (TEP) simulator. Comparable results were obtained for RHM, SHV, and CDC, and better performance was obtained when CDC was incorporated into MVT than when CDC or MVT was used alone. All robust outlier detection algorithms outperformed the standard PCA algorithm. The effects of auto scaling, robust scaling, and a new scaling approach called modified scaling were investigated. In the presence of multiple outliers, auto scaling was found to degrade the performance of all the robust techniques, while reasonable results were obtained with robust scaling and modified scaling.
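The core CDC step, ranking samples by distance to the center and keeping the closest half, can be sketched as below. Euclidean distance to the coordinate-wise mean is an assumption here; the paper pairs this screen with MVT rather than using it alone.

```python
import math

def closest_distance_to_center(data):
    """CDC-style outlier screen: keep the half of the samples closest
    to the coordinate-wise mean ('the best half of a data set').

    data: list of equal-length numeric tuples.
    """
    dim = len(data[0])
    center = [sum(p[i] for p in data) / len(data) for i in range(dim)]
    # Rank all samples by Euclidean distance to the center.
    ranked = sorted(data, key=lambda p: math.dist(p, center))
    return ranked[: len(data) // 2]
```

Unlike RHM or SHV, this needs only one pass of distances plus a sort, which is the computational advantage the abstract claims for CDC.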
17.
Path planning is one of the important research topics for mobile robots, but many previous methods have shortcomings. To address these problems, a new method, the ant colony cellular model, is proposed and applied to mobile robot path planning. The ant colony algorithm is a powerful tool for solving combinatorial optimization problems, while cellular automata are well suited to simulating large complex systems; building the ant colony algorithm on cellular automata yields the ant colony cellular model. Two groups of experiments examine the feasibility of the model from different angles. The results show that the improved ant colony cellular model can solve the path-planning problem, and that its parameters exhibit certain regularities.
18.
Orthogonal moments have been used successfully in the field of pattern recognition and image analysis. However, the direct computation of orthogonal moments is very expensive. In this paper, we present two new algorithms for fast computation of the two-dimensional (2D) Legendre moments. The first algorithm transforms the pixel-based calculation of Legendre moments into a line-segment-based calculation; after all line-segment moments have been calculated, Hatamian's filter method is extended to calculate the one-dimensional Legendre moments. The second algorithm is based directly on the double-integral formulation: the 2D shape is considered as a continuous region, and the contribution of the boundary points is used for fast calculation of the shape moments. The numerical results show that the new algorithms decrease the computational complexity tremendously and can be used to treat arbitrarily complicated objects.
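For reference, the direct pixel-based computation that the abstract's fast algorithms improve on can be sketched as below. The rectangular sampling of the continuous moment integral on [-1, 1] x [-1, 1] is an assumption for illustration.

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the standard three-term recurrence."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2 * k - 1) * x * p_curr
                                  - (k - 1) * p_prev) / k
    return p_curr

def legendre_moment(img, p, q):
    """Direct O(N*M) pixel-based 2D Legendre moment of a grayscale image.

    img: list of rows of intensities, mapped onto [-1, 1] x [-1, 1].
    """
    n, m = len(img), len(img[0])
    s = 0.0
    for i in range(n):
        x = -1.0 + 2.0 * i / (n - 1) if n > 1 else 0.0
        for j in range(m):
            y = -1.0 + 2.0 * j / (m - 1) if m > 1 else 0.0
            s += legendre(p, x) * legendre(q, y) * img[i][j]
    # Normalization from the continuous definition, with dx*dy ~ 4/(n*m).
    return (2 * p + 1) * (2 * q + 1) * s / (n * m)
```

Every moment order repeats the full double loop over pixels, which is exactly the cost the line-segment and boundary-based algorithms avoid.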
19.
This paper deals with a reliability optimization problem for a series system with multiple-choice and budget constraints. The objective is to choose one technology for each subsystem so as to maximize the reliability of the whole system subject to the available budget. This problem is NP-hard and can be formulated as a binary integer programming problem with a nonlinear objective function. An efficient ant colony optimization (ACO) approach is developed for the problem: an ant generates a solution based on both pheromone trails modified by previous ants and heuristic information treated as a fuzzy set. Constructed solutions are not guaranteed to be feasible; consequently, an appropriate procedure replaces each infeasible solution with a feasible one, and feasible solutions are then improved by a local search. The proposed approach is compared with the existing metaheuristics available in the literature, and computational results demonstrate that it performs better on large problems.
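The problem structure in the abstract above (one technology per subsystem, product of reliabilities, budget cap) is easy to state as an exhaustive baseline, which is what the ACO approach is meant to beat on large instances. This sketch is the baseline, not the paper's algorithm.

```python
from itertools import product

def best_design(options, budget):
    """Exhaustive baseline for the multiple-choice series-reliability problem.

    options[i]: list of (reliability, cost) technologies for subsystem i.
    System reliability of a series system = product of chosen reliabilities.
    """
    best_r, best_choice = -1.0, None
    for choice in product(*options):
        if sum(c for _, c in choice) <= budget:   # budget constraint
            r = 1.0
            for rel, _ in choice:
                r *= rel                          # series-system reliability
            if r > best_r:
                best_r, best_choice = r, choice
    return best_r, best_choice
```

The search space grows as the product of the per-subsystem choice counts, which is why metaheuristics such as ACO are used instead on large problems.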
20.
Defuzzification is a critical block when implementing a fuzzy inference engine, owing to the many variants and the high computational demands of defuzzification algorithms; the various methods represent different cost-accuracy trade-off points. Three new implementation-friendly defuzzification algorithms are presented in this paper and compared with a complete set of existing defuzzification methods. Accuracy simulations and analytic studies demonstrate that these methods provide acceptable precision relative to the existing ones. Software models of the proposed and existing defuzzification methods were developed on three well-known platforms, Intel's Pentium IV, IBM's PowerPC, and TI's C62 DSP, showing that the new methods achieve much lower execution time and instruction count than the most common existing methods. Hardware models of all these methods were also developed and synthesized, demonstrating the superiority of the new methods in terms of area, delay, and power consumption when implemented in hardware.
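As a point of comparison for the abstract above, the classic centre-of-gravity (centroid) method, one of the standard but computationally heavier defuzzifiers that implementation-friendly variants are benchmarked against, can be sketched over a sampled membership function:

```python
def centroid_defuzzify(xs, mu):
    """Centre-of-gravity defuzzification over a sampled membership function:
        x* = sum(x_i * mu_i) / sum(mu_i)

    xs: sample points of the output universe
    mu: aggregated membership degree at each sample point
    """
    den = sum(mu)
    if den == 0.0:
        return 0.0  # convention for an empty fuzzy set
    return sum(x * m for x, m in zip(xs, mu)) / den
```

The multiply-accumulate pair plus a division per inference is precisely the cost (especially the divider in hardware) that cheaper defuzzification schemes try to avoid.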