Similar Articles
20 similar articles found.
1.
The realization of a neurocomputer is an important topic in neural network research. Neural network research has by now produced fairly systematic theoretical models and algorithms, yet research on neurocomputers has so far achieved no major breakthrough; the main difficulties are the excessive scale of the networks and the high density of synaptic connections. To address this problem, this paper proposes a fractal implementation scheme for a neurocomputer based on fractal theory, gives a formula for computing the fractal dimension, and physically realizes a fractal substructure that is self-similar to the overall structure, a useful exploration toward the realization of neural network computers.

2.
Research on neurocomputers is an important part of neural network research. A neurocomputer is a computing system built from electronic, optical, or molecular/chemical devices according to the structure and computational characteristics of neural networks. Neurocomputer research concentrates mainly on two aspects: device research and system construction. Neurocomputer research is also referred to as research on neural network implementation technology.

3.
Recent advances in FPGA technology have permitted the implementation of neurocomputational models, making them an interesting alternative to standard PCs for speeding up the computations involved by exploiting the intrinsic parallelism of FPGAs. In this work, we analyse and compare the FPGA implementations of two neural network learning algorithms: the standard and well-known Back-Propagation algorithm, and C-Mantec, a constructive neural network algorithm that generates compact one-hidden-layer architectures with good predictive capabilities. One of the main differences between the two algorithms is that while Back-Propagation needs a predefined architecture, C-Mantec constructs its network while learning the input patterns. Several aspects of the FPGA implementation of both algorithms are analyzed, focusing on features such as the logic and memory resources needed, transfer function implementation, and computation time. The advantages and disadvantages of both methods in relation to their hardware implementations are discussed.

4.
5.
A Knowledge Refinement Method Based on Neural Network Structure Learning
Knowledge refinement is an indispensable step in knowledge acquisition. The existing KBANN (knowledge-based artificial neural network) method for knowledge refinement has the main limitation that the network topology cannot change during training. This paper proposes a knowledge refinement method based on neural network structure learning: a rule set is first converted into an initial neural network, the initial network is then trained with samples and a structure-learning algorithm, and refined rule knowledge is extracted from the result. Changes to the network topology are achieved during training by a structure-learning algorithm that dynamically adds hidden nodes and prunes the network. Extensive experiments show that the method is effective.
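The rules-to-network step this abstract builds on can be made concrete with a small sketch. This is a hypothetical illustration of the KBANN-style conversion (the weight value ω = 4 and the bias placement follow the common KBANN convention and are an assumption here, not taken from the abstract): a conjunctive rule with P antecedents becomes a sigmoid unit that fires only when all P antecedents are true.

```python
import math

# Hypothetical KBANN-style rule-to-unit conversion: a rule "C :- A1, ..., AP"
# becomes a sigmoid unit with weight OMEGA per antecedent and a bias that
# places the threshold between P-1 and P satisfied antecedents.
OMEGA = 4.0

def rule_to_unit(n_antecedents):
    weights = [OMEGA] * n_antecedents
    bias = -(n_antecedents - 0.5) * OMEGA
    return weights, bias

def unit_output(weights, bias, inputs):
    # Standard sigmoid unit; inputs are truth values in {0, 1}.
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-z))
```

With two antecedents, the unit's net input is +2 when both hold and -2 when only one does, so the sigmoid output is high only for the fully satisfied rule; structure learning would then adjust these initial weights and add hidden nodes.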

6.
In this paper, a hardware-based neural identification method is proposed in order to learn the characteristics or structure of a discrete linear dynamical system. Quick or instant identification of unknown dynamical systems is particularly required for practical control, not only in intelligent mechatronics such as the automatic self-organized running of mobile vehicles, but also in intelligent self-controlled systems. We developed a new method of hardware-based identification for general dynamical systems using a digital neural network very large scale integration (VLSI) chip, the RN-200, in which sixteen neurons and a total of 256 synapses are integrated on a 13.73×13.73 mm² die, fabricated using RICOH 0.8 μm complementary metal oxide semiconductor (CMOS) technology (RICOH, Yokohama, Japan). This paper describes how to implement neural identification in both learning and feedforward (recognition) processing using a RICOH RN-2000 neurocomputer, which consists of seven RN-200 digital neural network VLSI chips.

7.
A new concept learning neural network is presented. This network builds correlation learning into a rule learning neural network where the certainty factor model of traditional expert systems is taken as the network activation function. The main argument for this approach is that correlation learning can help when the neural network fails to converge to the target concept due to insufficient or noisy training data. Both theoretical analysis and empirical evaluation are provided to validate the system.

8.
Back-propagation learning in expert networks
Expert networks are event-driven, acyclic networks of neural objects derived from expert systems. The neural objects process information through a nonlinear combining function that is different from, and more complex than, typical neural network node processors. The authors develop back-propagation learning for acyclic, event-driven networks in general and derive a specific algorithm for learning in EMYCIN-derived expert networks. The algorithm combines back-propagation learning with other features of expert networks, including calculation of gradients of the nonlinear combining functions and the hypercube nature of the knowledge space. It offers automation of the knowledge acquisition task for certainty factors, often the most difficult part of knowledge extraction. Results of testing the learning algorithm with a medium-scale (97-node) expert network are presented.
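The nonlinear combining function the abstract refers to can be illustrated with the classic EMYCIN parallel-combination rule. This is a sketch limited to the case of two non-negative certainty factors (an assumption; EMYCIN also defines cases for negative and mixed-sign factors), shown with the partial derivative that a back-propagation pass through such a node would need.

```python
# EMYCIN-style parallel combination of two non-negative certainty factors,
# used as the node "activation" in an expert network.
def cf_combine(a, b):
    return a + b - a * b          # e.g. cf_combine(0.6, 0.5) ≈ 0.8

def cf_combine_grad_a(a, b):
    # Partial derivative d(cf_combine)/da, needed when back-propagating
    # error through the node. By symmetry, d/db = 1 - a.
    return 1.0 - b
```

Unlike a sigmoid, this combining function is multilinear in its inputs, which is why the paper derives gradients specific to expert networks rather than reusing standard node derivatives.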

9.
Renewed interest has recently appeared in Single Layer Feedforward Neural Network (SLFNN) models in which the hidden layer coefficients are randomly assigned and the output coefficients are calculated by a least squares algorithm. Besides random coefficient initialization, the main advantages of these learning models are the speed of training (no multiple iterations are required) and the absence of initial parameter definitions (e.g. no adaptation constant as in the multilayer perceptron). These features suit real-time operation, since fast online training can be achieved, benefiting applications (industrial, automotive, portable systems) where other neural network learning approaches could not be used due to large resource usage, low speed, and lack of flexibility. Targeting a hardware implementation thus allows their use in embedded systems, expanding their application areas to real-time systems and, in general, to applications where desktop computers cannot be used. Typically, the random vector functional link network (RVFLN) demands a large number of resources and a high computational burden: high-dimension matrices are involved, and computation-intensive algorithms, especially matrix inversion, are required to obtain the output layer coefficients of the neural network. This work describes the implementation and optimization of these models to fit embedded hardware system requirements, together with a parameterizable model that allows different applications to benefit from it. The proposal includes the use of fuzzy activation functions in neurons to reduce computation. An exhaustive analysis of three proposed computation architectures for the learning algorithm is given. Classification results for three standard datasets under fixed-point arithmetic are compared to Matlab floating-point results, together with hardware-related analysis such as speed of operation, bit-length accuracy in fixed-point arithmetic, and logic resource occupation.
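The training scheme the abstract describes, random hidden weights plus a least-squares output layer, fits in a few lines. This is a minimal floating-point sketch (function names are hypothetical; a sigmoid hidden layer and the pseudo-inverse are assumed, whereas the paper's hardware version uses fuzzy activations and fixed-point arithmetic).

```python
import numpy as np

rng = np.random.default_rng(0)

def train_rvfl(X, y, n_hidden=50):
    # Hidden-layer weights and biases are randomly assigned and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden activations
    beta = np.linalg.pinv(H) @ y             # output weights by least squares
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Fit a smooth 1-D target to exercise the whole pipeline.
X = np.linspace(-1, 1, 100).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = train_rvfl(X, y)
err = np.max(np.abs(predict_rvfl(X, W, b, beta) - y))
```

The single `pinv` call is exactly the matrix inversion the abstract identifies as the dominant cost, which motivates the paper's dedicated hardware architectures for it.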

10.
This paper describes a method of implementing a hardware circuit for neural network pattern recognition using analog neural network VLSI pulse-stream technology, classifying faults directly. Raw analog noise signals containing fault information are passed through front-end signal processing and neural network computation to obtain the voltage on the output capacitor of the VLSI circuit, which represents the "Euclidean distance" between the signal to be recognized and the template fault signal, achieving real-time online hardware recognition of noisy fault signals.

11.
Evolutionary computation is a rapidly expanding field of research with a long history. Much of that history remains unknown to most practitioners and researchers. This two-part article offers a review of selected foundational efforts in evolutionary computation, with a focus on those that have not received commensurate attention. Part I presented an initial overview of the essential components of evolutionary algorithms followed by a review of early research in artificial life and modeling genetic systems. Here, Part II reviews seminal results in evolving programs and evolvable hardware. Comments on theoretical developments and future developments conclude Part II.

12.
Continuing advances in deep learning algorithms and GPU computing power are driving the broad adoption of artificial intelligence in fields including computer vision, speech recognition, and natural language processing. At the same time, deep learning has begun to be applied in safety-critical domains, with autonomous driving as the representative example. However, several serious traffic accidents in the past two years show that the maturity of deep learning technology still falls far short of the requirements of safety-critical applications, and research on trustworthy AI systems has therefore become a hot topic. This paper surveys existing work on deep learning for real-time applications. It first introduces the key design issues in applying deep learning to real-time embedded systems; it then analyzes and summarizes existing work on lightweight deep neural network design, GPU timing analysis and task scheduling, resource management on heterogeneous CPU+GPU SoC platforms, and the co-design of deep neural networks and network accelerators; finally, it outlines directions for further research on deep learning for real-time applications.

13.
Da Lin, Xingyuan Wang. Neurocomputing, 2011, 74(12-13): 2241-2249.
This paper proposes self-organizing adaptive fuzzy neural control (SAFNC) for the synchronization of uncertain chaotic systems with randomly varying parameters. The proposed SAFNC system is composed of a computation controller and a robust controller. The computation controller, containing a self-organizing fuzzy neural network (SOFNN) identifier, is the principal controller. The SOFNN identifier estimates the compound uncertainties online, carrying out the structure and parameter learning phases of the fuzzy neural network (FNN) simultaneously. The structure-learning phase consists of growing membership functions and splitting and pruning fuzzy rules, so the SOFNN identifier avoids the time-consuming trial-and-error tuning procedure for determining the network structure of a fuzzy neural network. The robust controller attenuates the effects of the approximation error so that synchronization of the chaotic systems is achieved. All the parameter learning algorithms are derived from the Lyapunov stability theorem to ensure network convergence as well as stable synchronization performance. Simulation results illustrate the effectiveness of the proposed method.

14.
15.
Convolutional neural networks are among the most widely applied deep learning models. Their applications are no longer confined to science and technology but have spread to medicine, the military, and other fields, where they already play a major role. Convolution is the core of a convolutional neural network, and convolution operations account for more than 70% of a network's total runtime, so research on accelerating convolution is of great importance. This paper first reviews convolution algorithms of recent years and analyzes their complexity, summarizes the respective strengths and weaknesses of these algorithms, and finally discusses possible breakthroughs in their theory and applications.
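The cost claim above becomes concrete with a direct implementation. This is a sketch of naive 2-D convolution (strictly, cross-correlation, as in most CNN frameworks; the function name is hypothetical): an H×W input with a k×k kernel costs (H-k+1)(W-k+1)k² multiply-adds per channel, which is the figure the faster algorithms in the survey attack.

```python
import numpy as np

def conv2d_direct(x, k):
    # Naive direct convolution: slide the kernel and take a dot product
    # at each valid position; (H-k+1)*(W-k+1)*k*k multiply-adds in total.
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))
y = conv2d_direct(x, k)   # 2x2 output; each entry sums a 3x3 patch
```

Methods such as im2col-plus-GEMM, FFT-based convolution, and Winograd trade this direct loop for fewer or better-structured multiplications, which is the design space the survey compares.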

16.
In practice, the back-propagation algorithm often runs very slowly, and the question naturally arises whether there are intrinsic computational difficulties in training neural networks, or whether better training algorithms might exist. Two important issues are investigated in this framework. One is establishing a flexible structure for constructing very simple neural networks for multi-input/output systems. The other is how to obtain a learning algorithm that achieves good performance in the training phase. In this paper, a feedforward neural network with flexible bipolar sigmoid functions (FBSFs) is investigated to learn the inverse model of a system. The FBSF can change shape by changing the value of its parameter according to the desired trajectory or the teaching signal. The proposed neural network is trained to learn the inverse dynamic model using back-propagation learning algorithms in which not only the connection weights but also the sigmoid function parameters (SFPs) are adjustable. Feedback-error learning is used as the learning method for the feedforward controller; in this case, the output of a feedback controller is fed to the neural network model. The suggested method is applied to a two-link robotic manipulator control system, configured as a direct controller for the system, to demonstrate the capability of our scheme. The advantages of the proposed structure over traditional neural network structures are also discussed.
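The key ingredient of the abstract, a sigmoid whose shape parameter is itself trainable, can be sketched as follows. The functional form f(x; a) = (1 - e^(-ax)) / (1 + e^(-ax)) = tanh(ax/2) is an assumption (one common bipolar sigmoid; the paper's exact FBSF may differ), and both partial derivatives needed by back-propagation are shown, since the SFP a is updated alongside the weights.

```python
import math

def fbsf(x, a):
    # Bipolar sigmoid with shape parameter a; equals tanh(a*x/2).
    return math.tanh(a * x / 2.0)

def fbsf_dx(x, a):
    # d f / d x, used to back-propagate error to earlier layers.
    t = math.tanh(a * x / 2.0)
    return (a / 2.0) * (1.0 - t * t)

def fbsf_da(x, a):
    # d f / d a, used to update the sigmoid function parameter itself.
    t = math.tanh(a * x / 2.0)
    return (x / 2.0) * (1.0 - t * t)
```

Larger a steepens the transition and smaller a flattens it, so gradient updates on a let each unit adapt its nonlinearity to the teaching signal instead of fixing one slope for the whole network.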

17.
A parallel mapping of self-organizing map (SOM) algorithm is presented for a partial tree shape neurocomputer (PARNEU). PARNEU is a general purpose parallel neurocomputer that is designed for soft computing applications. Practical scalability and a reconfigurable partial tree network are the main architectural features. The presented neuron parallel mapping of SOM with on-line learning illustrates a parallel winner neuron search and a coordinate transfer that are performed in the partial tree network. Phase times are measured to analyse speedup and scalability of the mapping. The performance of the learning phase in SOM with a four processor PARNEU configuration is about 26 MCUPS and the recall phase performs 30 MCPS. Compared to other mappings done for general purpose neurocomputers, PARNEU's performance is very good.
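The step being parallelized, the winner neuron search, reduces to an argmin that splits cleanly across processors. This is a sequential sketch of that decomposition (function names are hypothetical; each "slice" stands for the weight rows one PARNEU processor would hold, and the loop stands in for the partial-tree reduction of local winners).

```python
import numpy as np

def local_winner(weights, x):
    # Each processor finds the closest of its own neurons to input x.
    d = np.sum((weights - x) ** 2, axis=1)   # squared Euclidean distances
    i = int(np.argmin(d))
    return i, d[i]

def tree_winner(slices, x):
    # Reduce the per-processor (index, distance) pairs to a global winner,
    # as the partial tree network would do in log-depth.
    best, offset = (None, np.inf), 0
    for w in slices:
        i, d = local_winner(w, x)
        if d < best[1]:
            best = (offset + i, d)
        offset += len(w)
    return best[0]
```

Because each pair carries only an index and a distance, the tree reduction moves a constant amount of data per processor regardless of map size, which is what makes the mapping scale.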

18.
Vehicle detection is an important application of computer-vision-based object detection. With the great progress deep learning has made in image classification in recent years, vehicle detection algorithms combining machine vision with deep learning have become a research focus in this field. This paper introduces the task, difficulties, and current state of machine-vision-based vehicle detection, along with several representative convolutional neural network models in deep learning, the two-stage and one-stage vehicle detection algorithms derived from these models, the datasets used for model training, and the evaluation criteria for detection performance, and discusses open problems and possible future directions.

19.
In this paper, fuzzy logic theory is used to build a specific decision-making system for heuristic search algorithms. Such algorithms are typically used for expert systems. To improve the performance of the overall system, a set of important parameters of the decision-making system is identified. Two optimization methods for the learning of the optimum parameters, namely genetic algorithms and gradient-descent techniques based on a neural network formulation of the problem, are used to obtain an improvement of the performance. The decision-making system and both optimization methods are tested on a target recognition system.

20.
A Survey of Self-Organizing Incremental Neural Networks
邱天宇, 申富饶, 赵金熙. Journal of Software (软件学报), 2016, 27(9): 2230-2247.
The self-organizing incremental neural network (SOINN) is a two-layer neural network based on competitive learning that performs online clustering and topological representation of dynamic input data without prior knowledge, while remaining robust to noisy data. Its incremental nature allows it to discover and learn new patterns appearing in a data stream without affecting previously learned results. SOINN can therefore serve as a general-purpose learning algorithm for all kinds of unsupervised learning problems. With appropriate adjustments to its model and algorithm, SOINN can also be adapted to supervised learning, associative memory, pattern-based reasoning, manifold learning, and other learning scenarios. SOINN has been applied in many fields, including robot intelligence, computer vision, expert systems, and anomaly detection.
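The incremental behavior described above rests on a simple insertion test, which can be sketched as follows. This is a deliberately simplified illustration (a fixed radius replaces SOINN's adaptive per-node similarity thresholds, and edge management between nodes is omitted): an input far from both its nearest and second-nearest nodes is treated as a new pattern and becomes a new node; otherwise the winner moves toward it.

```python
import numpy as np

def soinn_step(nodes, x, threshold):
    # Simplified SOINN-style update: nodes is a list of weight vectors.
    if len(nodes) < 2:
        nodes.append(np.asarray(x, float))
        return nodes
    d = [np.linalg.norm(n - x) for n in nodes]
    order = np.argsort(d)
    w1, w2 = order[0], order[1]          # winner and second winner
    if d[w1] > threshold or d[w2] > threshold:
        nodes.append(np.asarray(x, float))              # novel pattern: insert node
    else:
        nodes[w1] = nodes[w1] + 0.1 * (x - nodes[w1])   # familiar: move winner
    return nodes
```

Because insertion depends only on the current nodes and the new input, learning a novel cluster never rewrites what earlier inputs established, which is the "does not affect previous results" property the abstract emphasizes.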

