Similar Articles
20 similar articles found
1.
A Three-Dimensional Pattern Synthesis Algorithm for Conformal Spherical Array Antennas
Building on the particle swarm algorithm and incorporating, with improvements, the strengths of current mainstream optimization algorithms, a three-dimensional pattern synthesis algorithm suited to conformal spherical array antennas is obtained. Targeting the pattern synthesis problem of such arrays, and given the element pattern data of the antenna in advance, the optimized algorithm yields a pattern with any specified beam direction and beamwidth in three-dimensional space. Simulation results show that this 3-D pattern synthesis algorithm effectively solves the pattern synthesis problem of conformal spherical array antennas.

2.
Application of the Ant Colony Algorithm to Examination Timetabling
The ant colony algorithm is a relatively new evolutionary algorithm; studies to date show that it has many desirable properties and offers a new approach to combinatorial optimization problems. This paper applies the ant colony algorithm to the practical problem of examination timetabling, combining graph coloring from graph theory with the knapsack problem from operations research. The solution and analysis of a concrete example demonstrate the advantages of the algorithm.
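The abstract gives no implementation details, so the following is only a minimal sketch of how ant colony optimization might be applied to the graph-coloring view of exam timetabling (courses as vertices, shared students as edges, time slots as colors); every name and parameter below is an illustrative assumption, not the authors' code.

    import random

    def aco_timetable(conflicts, n_courses, n_slots, n_ants=20, n_iters=100,
                      rho=0.1, q=1.0):
        """Sketch: ants assign courses to slots guided by pheromone tau[c][s];
        conflicts is a set of (course_a, course_b) pairs sharing students."""
        tau = [[1.0] * n_slots for _ in range(n_courses)]
        best, best_cost = None, float('inf')
        for _ in range(n_iters):
            for _ in range(n_ants):
                assign = []
                for c in range(n_courses):
                    # slots already taken by conflicting, earlier-assigned courses
                    banned = {assign[d] for d in range(c)
                              if (c, d) in conflicts or (d, c) in conflicts}
                    weights = [tau[c][s] if s not in banned else 1e-6
                               for s in range(n_slots)]
                    r, acc, pick = random.random() * sum(weights), 0.0, n_slots - 1
                    for s, w in enumerate(weights):   # roulette-wheel selection
                        acc += w
                        if r <= acc:
                            pick = s
                            break
                    assign.append(pick)
                cost = sum(1 for (a, b) in conflicts if assign[a] == assign[b])
                if cost < best_cost:
                    best, best_cost = assign[:], cost
            for c in range(n_courses):                # evaporate pheromone ...
                for s in range(n_slots):
                    tau[c][s] *= (1 - rho)
            for c, s in enumerate(best):              # ... then reinforce the best
                tau[c][s] += q / (1 + best_cost)
        return best, best_cost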

3.
A Path-Based Scheduling Algorithm
Operation scheduling is an important task in high-level synthesis. This paper presents, for the first time, a path-based operation scheduling algorithm that handles the scheduling of conditional structures and loop structures simultaneously. Using the algorithm, a schedule is obtained that minimizes the number of control steps required to execute all paths in the control/data flow graph (CDFG). Experiments show that the algorithm is especially well suited to large synthesis designs dominated by microprocessors and control logic.

4.
This paper first surveys several existing initial-topology generation algorithms and points out their limitations. It then proposes a new algorithm, initial-topology generation based on comprehensive link importance, and discusses in detail the new algorithm's implementation principle and its time complexity.

5.
1. Introduction. Deductive synthesis is a formal method for developing algorithmic programs. Starting from an easily understood, concise, and unambiguous specification of the given problem (the specification expresses the purpose of the program to be designed), it formally derives a correct algorithm or program for the problem by means of theorem proving, program transformation, and deductive reasoning. The foundations of the deductive synthesis of algorithmic programs…

6.
Research on Signal Detection Algorithms for Spatial Multiplexing
To recover the transmitter's original data at the receiver, signal detection must be performed at the receiving end. Several classical signal detection algorithms are described and analyzed in detail, simulated in Matlab, and compared in performance. The results show that the improved V-BLAST algorithm is suitable for the development of a TD-LTE wireless integrated test instrument.
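For reference, a minimal numpy sketch of the classical zero-forcing V-BLAST detector (ordered successive interference cancellation); the abstract's improved variant is not specified, so this shows only the textbook baseline, and the QPSK slicer is an assumption.

    import numpy as np

    def vblast_zf_sic(H, y):
        """ZF V-BLAST: detect the strongest stream, cancel it, repeat.
        H: (n_rx, n_tx) channel matrix, y: (n_rx,) received vector."""
        y = y.astype(complex).copy()
        n_tx = H.shape[1]
        remaining = list(range(n_tx))
        x_hat = np.zeros(n_tx, dtype=complex)
        for _ in range(n_tx):
            W = np.linalg.pinv(H[:, remaining])                  # ZF nulling matrix
            k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))   # best post-detection SNR
            idx = remaining[k]
            s = W[k] @ y                                         # nulling step
            x_hat[idx] = np.sign(s.real) + 1j * np.sign(s.imag)  # slice (QPSK assumed)
            y -= H[:, idx] * x_hat[idx]                          # cancel detected stream
            remaining.pop(k)
        return x_hat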

7.
Common fingerprint matching algorithms are surveyed. Experimental results show that applying these common algorithms in combination greatly improves the accuracy of fingerprint recognition, with good results.

8.
An IA-BP Hybrid Algorithm for Optimizing Multilayer Feedforward Networks
To address the slow convergence of the immune algorithm (IA) when optimizing larger multilayer feedforward neural networks, this paper presents an IA-BP hybrid algorithm that combines the strengths of the immune algorithm and the BP algorithm: it first uses the immune algorithm for global search and then invokes the BP algorithm for local search, thereby speeding up convergence. Experimental results show that the algorithm outperforms both the immune algorithm and the BP algorithm when training larger feedforward networks.
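The abstract names only the two phases; a schematic sketch of such a global-then-local hybrid might look like the following, where a clonal-selection step stands in for the paper's immune operators and plain gradient descent stands in for BP (all names and parameters are assumptions).

    import random

    def ia_bp_train(loss, grad, dim, pop=30, ia_iters=200, bp_iters=1000,
                    n_clones=5, sigma=0.5, lr=0.05):
        """Phase 1: population-based global search over weight vectors.
        Phase 2: gradient refinement of the best candidate found."""
        population = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(pop)]
        for _ in range(ia_iters):                     # immune (global) phase
            population.sort(key=loss)
            clones = [[g + random.gauss(0, sigma) for g in population[i]]
                      for i in range(pop // 2) for _ in range(n_clones)]
            population = sorted(population + clones, key=loss)[:pop]
        w = min(population, key=loss)
        for _ in range(bp_iters):                     # BP (local) phase
            w = [wi - lr * gi for wi, gi in zip(w, grad(w))]
        return w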

9.
An Improved Hybrid Particle Swarm Optimization Algorithm
Starting from an analysis of the characteristics of particle swarm optimization and the Guo Tao algorithm, this paper proposes a hybrid algorithm that combines the strengths of both. The new algorithm changes the particle update rule: by combining subspace search with serial search in a multi-point parallel search, it enlarges the search range, reduces the particles' dependence on initial values, and strengthens the algorithm's ability to escape local optima. A subswarm generated by mutating the better offspring individuals improves local search ability. Experiments show that the algorithm is correct and efficient.
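The abstract sketches mechanisms rather than code; one illustrative reading of "mutating better individuals to generate a subswarm" on top of standard PSO is the following (all parameters assumed, not the authors' exact scheme).

    import random

    def hybrid_pso(f, dim, n=30, iters=500, w=0.7, c1=1.5, c2=1.5, pm=0.1):
        """Standard PSO velocity/position update plus Gaussian mutation of
        the better half of personal bests to refine the global best."""
        xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vs = [[0.0] * dim for _ in range(n)]
        pb = [x[:] for x in xs]                       # personal bests
        gb = min(pb, key=f)                           # global best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * random.random() * (pb[i][d] - xs[i][d])
                                + c2 * random.random() * (gb[d] - xs[i][d]))
                    xs[i][d] += vs[i][d]
                if f(xs[i]) < f(pb[i]):
                    pb[i] = xs[i][:]
            for p in sorted(pb, key=f)[: n // 2]:     # mutate better individuals
                cand = [v + random.gauss(0, pm) for v in p]
                if f(cand) < f(gb):
                    gb = cand
            gb = min([gb] + pb, key=f)
        return gb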

10.
林晓帆, 李超. 《计算机工程》, 2007, 33(7): 101-103
This paper proposes an efficient broadcast delivery algorithm for P2P grids that combines the advantages of broadcast delivery algorithms for two kinds of overlay networks: regular networks, and unstructured networks that communicate via an epidemic (gossip) algorithm. The resulting meta-structure algorithm achieves faster broadcast delivery, lower message complexity, and greater robustness than the original algorithms. Experiments show that the approach is feasible.

11.
Convergence Analysis of the BP Neural Network Algorithm Based on a Lyapunov Function
For the self-learning mechanism of feedforward neural networks with time-varying inputs, a Lyapunov function is used to analyze the convergence of the weights, revealing the underlying reason why the BP algorithm adjusts the weights in the direction of minimum error. Building on the convergence analysis of the single-parameter BP algorithm, a discrete BP neural network algorithm with a single-parameter variable adjustment rule is proposed.
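The abstract omits the analysis itself; the standard discrete-time Lyapunov argument it alludes to runs roughly as follows (our sketch, with notation assumed rather than taken from the paper). Take

    V(k) = \tfrac{1}{2} e(k)^2, \qquad e(k) = d(k) - y(k),

with gradient update \Delta w(k) = \eta\, e(k)\, \partial y / \partial w. Then

    \Delta V(k) = \Delta e(k)\left[ e(k) + \tfrac{1}{2}\Delta e(k) \right], \qquad \Delta e(k) \approx -\eta\, e(k) \left\| \frac{\partial y}{\partial w} \right\|^2,

so \Delta V(k) < 0, and the weights move toward a minimum of the error, whenever the learning rate satisfies 0 < \eta < 2 / \| \partial y / \partial w(k) \|^2.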

12.
A general backpropagation algorithm is proposed for feedforward neural network learning with time varying inputs. The Lyapunov function approach is used to rigorously analyze the convergence of weights, with the use of the algorithm, toward minima of the error function. Sufficient conditions to guarantee the convergence of weights for time varying inputs are derived. It is shown that most commonly used backpropagation learning algorithms are special cases of the developed general algorithm.

13.
In this letter, a new approach for the learning process of multilayer feedforward neural networks is introduced. This approach minimizes a modified form of the criterion used in the standard backpropagation algorithm. The criterion is based on the sum of the linear and the nonlinear quadratic errors of the output neuron, with the quadratic linear error signal appropriately weighted. The choice of the weighting design parameter is evaluated via rank convergence series analysis and asymptotic constant error values. The proposed modified backpropagation algorithm (MBP) is first derived for a single-neuron net and then extended to a general feedforward neural network. Simulation results on the 4-b parity checker and the circle-in-the-square problem confirm that the MBP algorithm exceeds standard backpropagation (SBP) in reducing both the total number of iterations and the learning time.
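A plausible rendering of the modified criterion (notation assumed, since the letter's symbols are not reproduced here): for output neuron k with pre-activation s_k, activation f, and target d_k,

    E = \sum_k \left[ \big(d_k - f(s_k)\big)^2 + \lambda \big(f^{-1}(d_k) - s_k\big)^2 \right],

where the second term is the weighted linear quadratic error and \lambda is the weighting design parameter; standard backpropagation is recovered at \lambda = 0.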

14.
A training algorithm for multilayer neural networks that minimizes the probability of classification error is proposed. The claim is made that such an algorithm possesses some clear advantages over the standard backpropagation (BP) algorithm. The convergence analysis of the proposed procedure is performed, and convergence of the sequence of criterion realizations with probability one is proven. An experimental comparison with the BP algorithm on three artificial pattern recognition problems is given.

15.
In training the weights of a feedforward neural network, it is well known that the global extended Kalman filter (GEKF) algorithm has much better performance than the popular gradient descent with error backpropagation in terms of convergence and quality of solution. However, the GEKF is very computationally intensive, which has led to the development of efficient algorithms such as the multiple extended Kalman algorithm (MEKA) and the decoupled extended Kalman filter algorithm (DEKF), that are based on dimensional reduction and/or partitioning of the global problem. In this paper we present a new training algorithm, called local linearized least squares (LLLS), that is based on viewing the local system identification subproblems at the neuron level as recursive linearized least squares problems. The objective function of the least squares problems for each neuron is the sum of the squares of the linearized backpropagated error signals. The new algorithm is shown to give better convergence results for three benchmark problems in comparison to MEKA, and in comparison to DEKF for highly coupled applications. The performance of the LLLS algorithm approaches that of the GEKF algorithm in the experiments.

16.
Genetic algorithms represent a class of highly parallel robust adaptive search processes for solving a wide range of optimization and machine learning problems. The present work is an attempt to demonstrate their effectiveness to search a global optimal solution to select a decision boundary for a pattern recognition problem using a multilayer perceptron. The proposed method incorporates a new concept of nonlinear selection for creating mating pools and a weighted error as a fitness function. Since there is no need for the backpropagation technique, the algorithm is computationally efficient and avoids all the drawbacks of the backpropagation algorithm. Moreover, it does not depend on the sequence of the training data. The performance of the method along with the convergence has been experimentally demonstrated for both linearly separable and nonseparable pattern classes.

17.
An accelerated learning algorithm for multilayer perceptron networks
An accelerated learning algorithm (ABP, adaptive back propagation) is proposed for the supervised training of multilayer perceptron networks. The learning algorithm is inspired by the principle of "forced dynamics" for the total error functional. The algorithm updates the weights in the direction of steepest descent, but with a learning rate that is a specific function of the error and of the error-gradient norm; the form of this function is chosen so as to accelerate convergence. Furthermore, ABP introduces none of the additional "tuning" parameters found in variants of the backpropagation algorithm. Simulation results indicate a superior convergence speed for analog problems only, as compared to other competing methods, as well as reduced sensitivity to variations in the algorithm's step-size parameter.
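The "forced dynamics" idea can be reconstructed as follows (a sketch, not quoted from the paper): impose first-order error dynamics \dot{E} = -\lambda E on gradient descent \dot{w} = -\eta \nabla E. Since \dot{E} = \nabla E \cdot \dot{w} = -\eta \|\nabla E\|^2, the required learning rate is

    \eta(t) = \frac{\lambda\, E(t)}{\left\| \nabla E(t) \right\|^{2}},

which makes the total error decay exponentially as long as \nabla E \neq 0.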

18.
This paper proposes a fast learning algorithm for neural networks based on the Kalman filtering algorithm. Results from its application to image edge detection show that the algorithm is markedly effective in accelerating the convergence of network learning.

19.
Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation as a steepest descent approach is too slow for many applications. In this paper a new learning procedure is presented which is based on a linearization of the nonlinear processing elements and the optimization of the multilayer perceptron layer by layer. In order to limit the introduced linearization error, a penalty term is added to the cost function. The new learning algorithm is applied to the problem of nonlinear prediction of chaotic time series. The proposed algorithm yields results in both accuracy and convergence rates which are orders of magnitude superior compared to conventional backpropagation learning.
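One generic form of such a penalized layer-wise cost (an assumption for illustration, not the paper's exact formulation) is

    J_\ell(W_\ell) = \left\| t_\ell - W_\ell x_\ell \right\|^2 + \lambda \left\| W_\ell - W_\ell^{(0)} \right\|^2,

where t_\ell is the layer's linearized target, W_\ell^{(0)} is the linearization point, and the penalty term keeps the new weights close enough to W_\ell^{(0)} for the linearization to remain valid.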

20.
A robust backpropagation training algorithm with a dead zone scheme is used for the online tuning of the neural-network (NN) tracking control system. This assures the convergence of the multilayer NN in the presence of disturbance. It is proved in this paper that the selection of a smaller range of the dead zone leads to a smaller estimate error of the NN, and hence a smaller tracking error of the NN tracking controller. The proposed algorithm is applied to a three-layered network with adjustable weights and a complete convergence proof is provided. The results can also be extended to the network with more hidden layers.
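The dead-zone scheme is easy to illustrate schematically (a sketch under assumed notation, not the paper's controller): weights are simply frozen whenever the tracking error lies inside a zone of width d, so bounded disturbances cannot drive persistent weight drift.

    import numpy as np

    def dead_zone_step(w, grad, e, d, lr=0.01):
        """Robust BP step: update only when |error| exceeds the dead zone d."""
        if abs(e) <= d:                       # inside the dead zone: freeze weights
            return w
        return w - lr * np.asarray(grad)      # outside: ordinary BP step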
