Similar Documents
20 similar documents found (search time: 31 ms)
1.
Independent component analysis (ICA) separates mixed signals blindly, without any information about the mixing system. FastICA is the most popular gradient-based ICA algorithm. Bacterial foraging optimization based ICA (BFOICA) and constrained genetic algorithm based ICA (CGAICA) are two recently developed derivative-free evolutionary ICA techniques. BFOICA mimics the foraging behavior of the E. coli bacteria present in our intestine to estimate the independent components (ICs), whereas CGAICA uses a genetic algorithm for IC estimation in a constrained manner. The present work evaluates the error performance of FastICA, BFOICA and CGAICA when they are implemented with finite-length registers. Simulation studies are carried out on both fixed- and floating-point versions of the algorithms. It is observed that the word length greatly influences the separation performance, and a comparison of the fixed-point error performance of the three algorithms is also carried out.
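As a rough illustration of how finite word length can be simulated in a study like this one, a signal can be rounded onto a B-bit fixed-point grid and the resulting error measured. The sketch below is hypothetical (the quantize helper and its word-length convention are assumptions, not the authors' code):

```python
import numpy as np

def quantize(x, word_length):
    """Round x onto a signed fixed-point grid: one sign bit and
    word_length - 1 fractional bits, saturating just below +/-1."""
    scale = 2.0 ** (word_length - 1)
    return np.clip(np.round(x * scale) / scale, -1.0, 1.0 - 1.0 / scale)

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 10_000)              # a stand-in source signal
for b in (8, 12, 16, 24):
    err = s - quantize(s, b)
    print(f"{b:2d} bits: RMS quantization error = {np.sqrt(np.mean(err**2)):.2e}")
```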

2.
In this paper we present a topologically correct and efficient version of the algorithm by Guibas and Stolfi (Algorithmica 7 (1992), pp. 381-413) for the exact computation of Delaunay and power triangulations in two dimensions. The algorithm avoids numerical errors and degeneracies caused by the accumulation of rounding errors in fixed-length floating-point arithmetic when constructing these triangulations. Most methods for computing Delaunay and power triangulations involve the calculation of two basic primitives: the INCIRCLE test and the CCW orientation test. Both primitives require the computation of the sign of a determinant. The key to our method is the exact computation of this sign, based on an algorithm for exactly determining the sign of the sum of a finite set of normalized floating-point numbers of fixed mantissa length (machine numbers). The exact computation of the primitives allows the construction of the correct Delaunay and power triangulations. The method has been implemented and tested for the incremental construction of Delaunay and power triangulations. Tests have been conducted for distributions of points on which non-exact algorithms may encounter difficulties, for example, slightly perturbed points on a grid or on a circle. Experimental results show that the performance of our implementation is comparable with that of a simple implementation of the incremental algorithm in single-precision floating-point arithmetic. For random distributions of points the exact algorithm is only 4 times slower than the inexact implementation. The algorithm is easy to implement, robust and portable as long as the input data remain exact.
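Both primitives reduce to the sign of a small determinant. As a minimal illustration of exact sign evaluation, the sketch below substitutes Python's exact rational arithmetic for the paper's sign-of-a-sum technique; the function name is illustrative:

```python
from fractions import Fraction

def ccw_sign_exact(a, b, c):
    """Exact sign of the CCW orientation determinant
    (bx-ax)(cy-ay) - (by-ay)(cx-ax), via rational arithmetic."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

print(ccw_sign_exact((0, 0), (1, 0), (0, 1)))   #  1: left turn
print(ccw_sign_exact((0, 0), (1, 0), (1, -1)))  # -1: right turn
print(ccw_sign_exact((0, 0), (1, 1), (2, 2)))   #  0: exactly collinear
```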

3.
Recently, evolutionary multiobjective optimization (EMO) algorithms have been utilized for the design of accurate and interpretable fuzzy rule-based systems. This research area is often referred to as multiobjective genetic fuzzy systems (MoGFS), where EMO algorithms are used to search for fuzzy rule-based systems that are non-dominated with respect to accuracy and interpretability. In this paper, we examine the ability of EMO algorithms to efficiently search for Pareto optimal or near Pareto optimal fuzzy rule-based systems for classification problems. We use NSGA-II (elitist non-dominated sorting genetic algorithm), its variants, and MOEA/D (multiobjective evolutionary algorithm based on decomposition) in our multiobjective fuzzy genetics-based machine learning (MoFGBML) algorithm. The classification performance of the fuzzy rule-based systems obtained by each EMO algorithm is evaluated on training data and test data under various settings of the available computation load and the granularity of the fuzzy partitions. The experimental results suggest that the classification performance of MoGFS reported in the literature can be further improved using more computation load, more efficient EMO algorithms, and/or more antecedent fuzzy sets from finer fuzzy partitions.
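The relation underlying all of these EMO algorithms is Pareto dominance between objective vectors (e.g., classification error versus rule-base complexity). A minimal sketch for a minimization problem, with illustrative names and data:

```python
def dominates(u, v):
    """True if u Pareto-dominates v (minimization): u is no worse in
    every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (error rate, number of fuzzy rules) pairs.
pts = [(0.10, 20), (0.08, 35), (0.10, 25), (0.15, 10), (0.08, 40)]
print(non_dominated(pts))  # [(0.1, 20), (0.08, 35), (0.15, 10)]
```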

4.
5.
An important problem in the study of evolutionary algorithms is how to continuously predict promising solutions while simultaneously escaping from local optima. In this paper we propose, to the best of our knowledge for the first time, an elitist probability schema (EPS). Our schema is an index over binary strings that expresses the similarity of an elitist population at every string position, capturing the accumulative effect of fitness selection on the coding similarity of the population. For each generation, EPS quantifies the coding similarity of the population objectively and quickly. A key property is that EPS can continuously predict promising solutions while simultaneously escaping from local optima in most cases. To demonstrate its abilities, we designed an elitist probability schema genetic algorithm and an elitist probability schema compact genetic algorithm; both are estimation of distribution algorithms (EDAs). We provide a fair comparison with the persistent elitist compact genetic algorithm (PeCGA), the quantum-inspired evolutionary algorithm (QEA), and particle swarm optimization (PSO) on the 0–1 knapsack problem. The proposed algorithms converge faster than PeCGA, QEA, and PSO, especially on large knapsack instances. Furthermore, their computation time is lower than that of EDAs based on building explicit probability models, and approximately the same as that of QEA and PSO, which is acceptable for evolutionary algorithms and satisfactory for EDAs. The proposed algorithms thus succeed in both convergence performance and computation time, which suggests that EPS is effective.
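The EPS can be pictured as a per-position statistic over the elitist population's binary strings. The sketch below assumes the simplest such index, the frequency of 1s at each locus; the paper's exact definition may differ in detail:

```python
import numpy as np

def elitist_probability_schema(elite_pop):
    """Position-wise frequency of 1s over an elitist population of
    binary strings: a per-locus similarity index in [0, 1]."""
    return np.asarray(elite_pop, dtype=float).mean(axis=0)

elite = [[1, 0, 1, 1, 0],
         [1, 0, 1, 0, 0],
         [1, 1, 1, 0, 0]]
print(elitist_probability_schema(elite))
# approx. [1.0, 0.33, 1.0, 0.33, 0.0]: positions 0 and 2 look fixed at 1,
# position 4 at 0, while positions 1 and 3 remain undecided.
```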

6.
We present an algorithmic solution to the robustness problem in computational geometry, called controlled linear perturbation, and demonstrate it on Minkowski sums of polyhedra. The robustness problem is how to implement real RAM algorithms accurately and efficiently using computer arithmetic. Approximate computation in floating point arithmetic is efficient but can assign incorrect signs to geometric predicates, which can cause combinatorial errors in the algorithm output. We make approximate computation accurate by performing small input perturbations, which we compute using differential calculus. This strategy supports fast, accurate Minkowski sum computation. The only prior robust implementation uses a less efficient algorithm, requires exact algebraic computation, and is far slower based on our extensive testing.
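A heavily simplified sketch of the perturbation idea: if a predicate value is too close to zero for its floating-point sign to be trusted, nudge the input and try again. The real method computes a directed perturbation with differential calculus; here the perturbation is uniform at random and all names are illustrative:

```python
import random

def robust_sign(predicate, points, error_bound, delta, max_tries=32):
    """Sign of a floating-point geometric predicate, retried on slightly
    perturbed input whenever |value| falls inside the error bound."""
    for _ in range(max_tries):
        value = predicate(points)
        if abs(value) > error_bound:
            return 1 if value > 0 else -1
        points = [(x + random.uniform(-delta, delta),
                   y + random.uniform(-delta, delta)) for x, y in points]
    raise ArithmeticError("no reliable sign within the perturbation budget")
```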

7.
In this note we study scaling rules and roundoff noise variances in a fixed-point implementation of the Kalman predictor for an ARMA time series observed noise-free. The Kalman predictor is realized in a fast form that uses the so-called fast Kalman gain algorithm, with the gain algorithm itself implemented in fixed point. Scaling rules and expressions for rounding error variances are derived. The numerical results show that the fixed-point realization performs very close to the floating-point realization for relatively low-order ARMA time series that are not too narrowband. The predictor has been implemented in 16-bit fixed-point arithmetic on an INTEL 8086 microprocessor, and in 16-bit floating-point arithmetic on an INTEL 8080; the fixed-point code was written in assembly language and the floating-point code in Fortran. Experimental results were obtained by running the fixed- and floating-point filters on identical data sets, with all experiments carried out on an INTEL MIDS 230 development system.
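The usual multiplication roundoff model treats each rounding error as uniform on [-q/2, q/2], giving variance q²/12 for quantization step q = 2⁻ᴮ. A quick empirical check of that model (illustrative, not the note's derivation):

```python
import numpy as np

B = 16                                   # fractional bits
q = 2.0 ** -B                            # quantization step
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 1_000_000)
y = rng.uniform(-1, 1, 1_000_000)
err = np.round(x * y / q) * q - x * y    # rounding error of each product
print(err.var(), q**2 / 12)              # empirical variance vs. q^2/12
```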

8.
Many large combinatorial optimization problems tackled with evolutionary algorithms require very high computational times, usually due to fitness evaluation. This forces programmers to use clusters of computers, a solution well suited to running computationally intensive applications but with a high acquisition price and operating cost, mainly due to Central Processing Unit (CPU) power consumption and cooling equipment. A low-cost, high-performance alternative comes from reconfigurable computing, a hardware technology based on Field Programmable Gate Array (FPGA) devices. The main objective of the work presented in this paper is to compare FPGA and CPU implementations of different fitness functions in evolutionary algorithms, in order to study the performance of the floating-point arithmetic that is often present in the optimization problems tackled by these algorithms. We take advantage of the chip-level parallelism of FPGAs to accelerate the fitness functions (and consequently the evolutionary algorithms), showing the parallel scalability needed to reach low-cost, low-power, high-performance computational solutions based on FPGAs. Finally, the recent popularity of GPUs as computational units has led us to include these devices in our performance comparisons. We analyze performance in terms of computation time and economic cost.

9.
We present a heuristically certified form of floating-point arithmetic and its implementation in CoCoALib. This arithmetic is intended to act as a fast alternative to exact rational arithmetic, and is developed from the idea of paired floats expounded by Traverso and Zanoni (2002). As prerequisites we need a source of (pseudo-)random numbers, and an underlying floating-point arithmetic system where the user can set the precision. Twin-float arithmetic can be used only where the input data are exact, or can be obtained at high enough precision. Our arithmetic includes a total cancellation heuristic for sums and differences, and so can be used in classical algebraic algorithms such as Buchberger’s algorithm. We also present a (new) algorithm for recovering an exact rational value from a twin-float, so in some cases an exact answer can be obtained from an approximate computation.
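A toy version of the paired-float idea, loosely following the Traverso-Zanoni scheme: carry each value as two floats whose agreement is checked after every operation. The real CoCoALib arithmetic also manages precision and recovers rationals; everything here is an illustrative simplification:

```python
import random

class TwinFloat:
    """A value carried as two floats; the shadow copy starts from a
    randomly perturbed input, so accumulated rounding error eventually
    makes the twins disagree and the computation is rejected."""
    def __init__(self, primary, shadow=None, eps=1e-10):
        self.p = primary
        self.s = (primary * (1 + random.uniform(-1e-13, 1e-13))
                  if shadow is None else shadow)
        self.eps = eps

    def _checked(self):
        if abs(self.p - self.s) > self.eps * max(1.0, abs(self.p)):
            raise ArithmeticError("twin floats disagree: precision exhausted")
        return self

    def __add__(self, other):
        return TwinFloat(self.p + other.p, self.s + other.s, self.eps)._checked()

    def __mul__(self, other):
        return TwinFloat(self.p * other.p, self.s * other.s, self.eps)._checked()

a, b = TwinFloat(1.0), TwinFloat(3.0)
print((a + b * b).p)   # 10.0, with the twin check passing silently
```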

10.
Finite-word-length arithmetic roundoff noise in adaptive filter algorithms causes statistical variations of the filter weight vector about the infinite-precision weight vector. These roundoff errors may be modeled as a statistically nonstationary driving noise affecting the convergence of the weight mean and covariance. Mean and covariance expressions and bounds are derived for finite word lengths in fixed-point arithmetic by making use of multiplication roundoff error models. The adaptive filter algorithms considered are the LMS algorithm, the Widrow-Hoff LMS algorithm, the pilot-vector algorithm, and the clipped-vector algorithm. All of these algorithms can be implemented on-line and in real time; however, only the behavior of the LMS algorithm is reported here. The implementation of adaptive filter algorithms in finite-word-length arithmetic is most evident in minicomputers, microprocessors, and dedicated digital signal processors for on-line, real-time signal identification and parameter estimation in many disciplines. Radar signal processing, adaptive beamforming, acoustic signal identification, and communication channel enhancement all have a definite need for advanced filtering concepts, and our adaptive algorithms are typically employed in these filter configurations. These filters can also be employed in phase-distortion equalizers; a particular advantage is that they can be trained to equalize a variety of distortions, and should a particular distortion scenario change over time, the filters can easily adapt to the new problem.
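For concreteness, a sketch of an LMS filter whose weight update is rounded to a finite word length, so roundoff acts as the driving noise described above (the word-length convention and all names are assumptions):

```python
import numpy as np

def to_grid(v, bits):
    """Round to a fixed-point grid with bits - 1 fractional bits."""
    q = 2.0 ** -(bits - 1)
    return np.round(v / q) * q

def lms_fixed_point(x, d, n_taps=8, mu=0.01, bits=16):
    """LMS adaptation with a quantized weight update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # most recent sample first
        e = d[n] - w @ u                       # prediction error
        w = to_grid(w + mu * e * u, bits)      # quantized update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5_000)
d = np.convolve(x, [0.5, -0.3, 0.1])[:len(x)]  # hypothetical unknown system
print(np.round(lms_fixed_point(x, d), 3))      # ~[0.5, -0.3, 0.1, 0, ...]
```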

11.
Three numerical algorithms for computing the solution of the covariance matrix differential equations of the states of a linear time-invariant dynamical system forced by white Gaussian noise are analyzed. Estimates of the errors due to truncation and roundoff are derived for each algorithm. The error analyses assume that computation is performed in floating point and that the problem is not numerically ill-conditioned. The computational complexity of each algorithm is also discussed, and two numerical examples are presented to evaluate the performance of each algorithm.
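The equations in question are Lyapunov differential equations, P'(t) = AP + PAᵀ + Q. The paper's three algorithms are not reproduced here; the sketch below uses the simplest scheme, forward Euler, just to show the structure of the computation:

```python
import numpy as np

def covariance_euler(A, Q, P0, dt, steps):
    """Forward-Euler integration of P'(t) = A P + P A^T + Q, the state
    covariance equation for dx = A x dt + dw with cov(dw) = Q dt."""
    P = P0.copy()
    for _ in range(steps):
        P = P + dt * (A @ P + P @ A.T + Q)
        P = 0.5 * (P + P.T)            # re-symmetrize against roundoff drift
    return P

A = np.array([[0.0, 1.0], [-2.0, -3.0]])       # a stable system
Q = np.eye(2)
P = covariance_euler(A, Q, np.zeros((2, 2)), dt=1e-3, steps=20_000)
print(np.round(A @ P + P @ A.T + Q, 3))        # ~0: steady-state Lyapunov eq.
```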

12.
Since they often embody compact but mathematically sophisticated algorithms, operations for computing the common transcendental functions in floating-point arithmetic seem good targets for formal verification using a mechanical theorem prover. We discuss some of the general issues that arise in verifications of this class, and then present a machine-checked verification of an algorithm for computing the exponential function in IEEE-754 standard binary floating-point arithmetic. We confirm (indeed strengthen) the main result of a previously published error analysis, though we uncover a minor error in the hand proof and are forced to confront several subtle issues that might easily be overlooked informally. The development described here includes, apart from the proof itself, a formalization of IEEE arithmetic, a mathematical semantics for the programming language in which the algorithm is expressed, and the body of pure mathematics needed. All of this is developed logically from first principles using the HOL Light prover, which guarantees strict adherence to simple rules of inference while allowing the user to perform proofs using higher-level derived rules.
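A sketch of the kind of algorithm under verification: the standard range reduction exp(x) = 2ᵏ·exp(r) with |r| ≤ ln(2)/2, followed by a polynomial for exp(r). This is only illustrative; the verified algorithm's reduction and polynomial are considerably more careful:

```python
import math

def exp_approx(x, terms=12):
    """exp(x) = 2**k * exp(r) with r = x - k*ln(2), |r| <= ln(2)/2,
    and a truncated Taylor series for exp(r)."""
    k = round(x / math.log(2))
    r = x - k * math.log(2)        # real code splits ln(2) into hi/lo
                                   # parts so r stays accurate
    poly = sum(r**n / math.factorial(n) for n in range(terms))
    return math.ldexp(poly, k)     # poly * 2**k

for x in (-3.7, 0.5, 10.0):
    print(x, exp_approx(x), math.exp(x))
```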

13.
Elitism-based compact genetic algorithms
This paper describes two elitism-based compact genetic algorithms (cGAs): the persistent elitist compact genetic algorithm (pe-cGA) and the nonpersistent elitist compact genetic algorithm (ne-cGA). The aim is to design efficient cGAs by treating them as estimation of distribution algorithms (EDAs) for solving difficult optimization problems without compromising on memory and computation costs. The idea is to compensate for the lack of memory by allowing a selection pressure high enough to offset the disruptive effect of uniform crossover. The pe-cGA finds a near-optimal solution (i.e., a winner) that is maintained as long as no solution generated from the probability vector is better. The ne-cGA further improves on the pe-cGA by avoiding the strong elitism that may lead to premature convergence, thereby maintaining genetic diversity. The paper also proposes an analytic model for investigating convergence enhancement.
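In a compact GA, a probability vector stands in for the whole population; persistent elitism keeps the current best individual until a sampled candidate beats it. An illustrative sketch, with OneMax as the test fitness:

```python
import random

def pe_cga(fitness, length, pop_size=100, generations=2000):
    """Persistent-elitist compact GA sketch: sample one candidate per
    generation, compete it against the surviving elite, and shift the
    probability vector towards the winner."""
    p = [0.5] * length
    sample = lambda: [int(random.random() < pi) for pi in p]
    elite = sample()
    for _ in range(generations):
        cand = sample()
        winner, loser = ((cand, elite) if fitness(cand) > fitness(elite)
                         else (elite, cand))
        for i in range(length):
            if winner[i] != loser[i]:              # shift p towards winner
                step = 1.0 / pop_size
                p[i] = min(1.0, max(0.0, p[i] + (step if winner[i] else -step)))
        elite = winner                             # persistent elitism
    return elite

print(sum(pe_cga(sum, 32)))   # OneMax: should end at or near 32
```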

14.
Double-precision floating-point parallel computation cannot satisfy the accuracy requirements of the high-performance computing field, yet no high-performance solution for beyond-double-precision parallel computation is currently available. Based on the parallel programming language MPI, this paper proposes an implementation method for extended double-precision floating-point parallel computation, and verifies its correctness and performance using the precision-sensitive BBP algorithm for computing π.
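The paper's MPI implementation is not reproduced here. One common software route to beyond-double precision is double-double arithmetic built on Knuth's error-free two-sum; the sketch below shows a simplified double-double addition (illustrative names, not the paper's method):

```python
def two_sum(a, b):
    """Knuth's error-free transformation: s = fl(a + b) and
    a + b = s + e exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def dd_add(x, y):
    """Simplified addition of double-double numbers (hi, lo),
    giving roughly twice the precision of a plain double."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# Accumulate tiny terms: plain doubles drop them, double-double keeps them.
acc_d, acc_dd = 1.0, (1.0, 0.0)
for _ in range(100_000):
    acc_d += 1e-20
    acc_dd = dd_add(acc_dd, (1e-20, 0.0))
print(acc_d - 1.0)   # 0.0: every 1e-20 term was lost
print(acc_dd[1])     # ~1e-15, carried in the low component
```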

15.
This paper presents hardware designs, arithmetic algorithms, and numerical applications for variable-precision, interval arithmetic coprocessors. These coprocessors give the programmer the ability to set the initial precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. Variable-precision, interval arithmetic algorithms are used to reduce the execution times of numerical applications. Three hardware designs with data paths of 16, 32, and 64 bits are examined. These designs are compared based on their estimated chip area, cycle time, and execution times for various numerical applications. Each coprocessor can be implemented on a single chip with a cycle time that is comparable to IEEE double-precision floating point coprocessors. For certain numerical applications, the coprocessors are two to four orders of magnitude faster than a conventional software package for variable-precision, interval arithmetic.
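The interval half of the design can be imitated in software by rounding every endpoint outward, so the enclosure property is preserved. A sketch at fixed double precision (the coprocessors additionally vary the working precision; names are illustrative):

```python
import math

def iv_add(a, b):
    """Interval addition with outward rounding of both endpoints."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def iv_mul(a, b):
    """Interval multiplication: widen the min/max of all endpoint products."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (math.nextafter(min(products), -math.inf),
            math.nextafter(max(products), math.inf))

x = (0.1, 0.1)               # a point interval at the double nearest 0.1
y = iv_mul(iv_add(x, x), x)  # (0.1 + 0.1) * 0.1
print(y)                     # tight interval enclosing the exact real product
```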

16.
High-precision, high-performance floating-point units are an important part of high-performance microprocessor design. Building on a study of conventional double-precision floating-point multiply-add algorithms and on the characteristics of the quadruple-precision floating-point format, this paper designs and implements a high-performance quadruple-precision floating-point fused multiply-adder (QPFMA) that supports multiple floating-point operations, has a latency of 7 cycles, and is fully pipelined. A dual-path adder improves the algorithm structure, and the leading-zero anticipation and normalization-shift logic are optimized to reduce latency and hardware cost. A parameterized design-verification method achieves efficient correctness verification. Logic synthesis results show that, in a 65 nm process, the QPFMA reaches 1.2 GHz, reducing latency by 3 cycles and raising the frequency by about 11.63% compared with existing QPFMA designs.
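The point of a fused multiply-add is a single final rounding for a·b + c. The difference is easy to see by emulating the fused result exactly with rational arithmetic (illustrative only, unrelated to the QPFMA hardware design itself):

```python
from fractions import Fraction

def fma_emulated(a, b, c):
    """a*b + c with one final rounding, as FMA hardware performs it,
    emulated via exact rational arithmetic."""
    return float(Fraction(a) * Fraction(b) + Fraction(c))

a = 1.0 + 2.0**-30
b = 1.0 - 2.0**-30
c = -1.0
print(a * b + c)              # 0.0: two roundings lose the tiny term
print(fma_emulated(a, b, c))  # -2**-60 survives the single rounding
```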

17.
On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphics Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.

Program summary

Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability, resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, yielding the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes
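A minimal sketch of the iterative refinement described above. A real implementation factors A once in single precision and reuses the LU factors; this sketch calls a dense solver repeatedly for brevity, and all names are illustrative:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b in float32 for speed, then refine with float64
    residuals until double accuracy is (usually) recovered."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        x += np.linalg.solve(A32, r.astype(np.float32))  # correction in float32
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500))
b = rng.standard_normal(500)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # ~1e-15 if well-conditioned
```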

18.
This paper shows how the performance of evolutionary multiobjective optimization (EMO) algorithms can be improved by hybridization with local search. The main positive effect of the hybridization is the improvement in convergence speed to the Pareto front; the main negative effect is the increase in computation time per generation, which reduces the number of generations when the available computation time is limited, so that the global search ability of EMO algorithms is not fully utilized. These positive and negative effects are examined by computational experiments on multiobjective permutation flowshop scheduling problems, and the results clearly show the importance of striking a balance between genetic search and local search. In this paper, we first modify our former multiobjective genetic local search (MOGLS) algorithm by choosing only good individuals as initial solutions for local search and assigning an appropriate local search direction to each initial solution. Next, we demonstrate the importance of striking a balance between genetic search and local search through computational experiments. Then we compare the modified MOGLS with recently developed EMO algorithms: the strength Pareto evolutionary algorithm and the revised nondominated sorting genetic algorithm. Finally, we demonstrate that local search can easily be combined with those EMO algorithms for designing multiobjective memetic algorithms.
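A single-objective caricature of the balance being struck: full genetic variation every generation, but local search applied only to a few of the best offspring and capped in the number of moves. (The actual MOGLS additionally assigns each initial solution a search direction in objective space; all names here are illustrative.)

```python
import random

def memetic_generation(population, fitness, crossover, mutate,
                       ls_fraction=0.1, ls_moves=20):
    """One generation of a genetic/local-search hybrid (maximization)."""
    parents = sorted(population, key=fitness, reverse=True)
    top = parents[:max(2, len(parents) // 2)]
    offspring = [mutate(crossover(random.choice(top), random.choice(top)))
                 for _ in population]
    offspring.sort(key=fitness, reverse=True)
    for i in range(max(1, int(ls_fraction * len(offspring)))):
        for _ in range(ls_moves):          # short local search, elites only
            cand = mutate(offspring[i])
            if fitness(cand) > fitness(offspring[i]):
                offspring[i] = cand
    return offspring
```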

19.
The purpose of this paper is to examine the propagation of round-off errors in fixed-point arithmetic, and various problems of checking the solutions of equations already treated by La Porte and Vignes for floating-point arithmetic. We first consider the probabilistic model for the fixed-point numerical representation on a computer, and the evaluation of the mean and standard deviation of the absolute error of the assignment operator A and of the elementary arithmetic operators. We then compute a statistical estimate of the error in the computation of an inner product, which leads to the problem of checking the accuracy of the solutions of linear systems and of algebraic equations.
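The inner-product case is easy to probe empirically: round the operands and each product onto the fixed-point grid and collect error statistics over random trials (an illustrative experiment, not the paper's analysis):

```python
import numpy as np

B = 15                                   # fractional bits
q = 2.0 ** -B
to_grid = lambda v: np.round(v / q) * q  # round onto the fixed-point grid

rng = np.random.default_rng(2)
errors = []
for _ in range(2_000):
    x = rng.uniform(-0.1, 0.1, 50)
    y = rng.uniform(-0.1, 0.1, 50)
    approx = np.sum(to_grid(to_grid(x) * to_grid(y)))  # each product rounded
    errors.append(approx - float(x @ y))
errors = np.array(errors)
# 50 independent product roundings suggest std ~ q * sqrt(50 / 12).
print(errors.mean(), errors.std(), q * np.sqrt(50 / 12))
```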

20.
This document seeks to provide a scientific basis on which different initialization algorithms for evolutionary timetabling may be compared. Seeding the initial population may improve initial quality and provide a better starting point for the evolutionary algorithm. This must be balanced against the consideration that if the seeding algorithm produces very similar solutions, the loss of genetic diversity may well lead to a worse final solution. Diversity, we hope, provides a good indication of the quality of the final solution, although only by running the evolutionary algorithm can the exact result be found. We investigate the effects of heuristic seeding by taking quality and diversity measures of populations generated by heuristic initialization methods on both random and real-life data, as well as assessing the long-term performance of an evolutionary algorithm (found to work well on the timetabling problem) when using heuristic initialization. This shows how heuristic initialization strategies can substantially improve the performance of evolutionary algorithms for the timetabling problem.
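Diversity in this setting is often measured as the mean pairwise Hamming distance between genotypes. A sketch contrasting a random population with one clustered around a single heuristic seed (illustrative):

```python
import numpy as np

def mean_pairwise_hamming(population):
    """Mean pairwise Hamming distance between binary genotypes."""
    P = np.asarray(population)
    n = len(P)
    total = sum(np.sum(P[i] != P[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

rng = np.random.default_rng(3)
random_pop = rng.integers(0, 2, size=(20, 100))        # random seeding
seed = random_pop[:1]
seeded_pop = seed ^ (rng.random((20, 100)) < 0.05)     # near-copies of one seed
print(mean_pairwise_hamming(random_pop))   # ~50: diverse
print(mean_pairwise_hamming(seeded_pop))   # ~10: low diversity
```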
