Similar Documents
20 similar documents found (search time: 156 ms)
1.
鲍立威  沈平 《信息与控制》1997,26(2):101-106
This paper studies the construction of a nonlinear-system state observer with the artificial neural network K-Net, covering its principle, numerical description, and algorithm, and carries out simulation studies and comparisons on examples taken from the literature. The neural-network state observer constructed in this way has a clear structure and a concise algorithm, and it applies well in practice.

2.
Two-step texture mapping and environment mapping both generally use a sphere as the intermediate surface, so spherical mapping algorithms are a key component of both techniques; they also apply to free-form surface recognition and scene generation, which makes their study worthwhile. To further reduce the texture distortion that spherical mapping introduces, this paper starts from a texture non-distortion criterion, analyzes the traditional spherical texture mapping algorithm and its shortcomings, and on that basis proposes a new texture mapping algorithm for local spherical surfaces. Because the algorithm incorporates an equal-area-ratio constraint, it markedly improves texture mapping quality. Experimental results show that the algorithm is effective and offers clear advantages.

3.
方新  穆志纯  陈静  杜大鹏 《计算机应用》2005,25(12):2951-2953
Building on the self-organizing feature map, this paper proposes a new SOMNET algorithm and uses an artificial neural network to cluster Chinese characters with similar features, as well as their components. The display of the characters and components is analyzed, illustrating applications of the SOM model from several angles. The results are of reference value for research on Chinese character cognition and for teaching Chinese as a second language to international students.
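The SOM clustering idea in the entry above can be illustrated with a minimal self-organizing map in plain NumPy. This is a generic SOM sketch, not the paper's SOMNET; the function names, grid size, and decay schedules (`train_som`, `bmu_of`, 4×4 grid, linear decay) are assumptions for illustration:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map: each grid cell holds a weight vector
    pulled toward the inputs it wins, with a Gaussian neighborhood."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates used to compute neighborhood distances.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # learning rate decays to 0
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # neighborhood shrinks
        for x in data:
            # Best-matching unit: grid cell whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood pull, strongest at the BMU.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Grid cell won by input x."""
    return np.unravel_index(
        np.argmin(np.linalg.norm(weights - x, axis=-1)), weights.shape[:2])
```

Inputs with similar feature vectors end up winning nearby (or identical) grid cells, which is the basis for clustering characters and character components by feature similarity.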

4.
Register allocation is an important task in datapath synthesis. By analyzing the register allocation problem, this paper shows that it is equivalent to the track assignment problem in channel routing, and therefore adopts a track assignment algorithm, the left-edge algorithm, to solve register allocation, extending it to handle allocation within conditional constructs.
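The left-edge algorithm named above is easy to sketch: sort variable lifetime intervals by start time and greedily pack each into the first register whose current value is already dead, exactly as wire segments are packed into tracks in channel routing. A minimal sketch (the interval representation and function name are assumptions, and the conditional-construct extension is omitted):

```python
def left_edge_allocate(lifetimes):
    """Left-edge algorithm: assign each lifetime interval (start, end)
    to the first register whose last value has already died.
    Returns a list of registers, each a list of intervals."""
    registers = []  # each entry: intervals packed into one register
    for start, end in sorted(lifetimes):
        for reg in registers:
            if reg[-1][1] <= start:   # previous value is dead; reuse register
                reg.append((start, end))
                break
        else:
            registers.append([(start, end)])  # need a fresh register
    return registers
```

For intervals `[(0, 3), (2, 5), (4, 7), (1, 2)]` this packs the four values into two registers, matching the maximum number of simultaneously live values.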

5.
Applications of Neural Networks in Robot Control   (Total citations: 3; self-citations: 1; citations by others: 2)
This paper surveys the algorithms and network structures used for artificial neural networks in robot control, analyzes the open problems, and outlines development trends.

6.
The K-L transform is optimal under the mean-square-error criterion and is an important method for body-surface ECG map data compression. However, the conventional K-L algorithm is computationally expensive, no fast algorithm currently exists, and it is hard to implement. This paper implements the K-L transform with an artificial neural network and applies it to ECG map data compression. Experiments show that the network approach is simple to design, easy to implement, and robust to data perturbations, with compression performance comparable to the conventional K-L algorithm.

7.
The K-L transform is optimal under the mean-square-error criterion and is an important method for body-surface ECG map data compression. However, the conventional K-L algorithm is computationally expensive, no fast algorithm currently exists, and it is hard to implement. This paper implements the K-L transform with an artificial neural network and applies it to ECG map data compression. Experiments show that the network algorithm is simple to design, easy to implement, and robust to data perturbations, with compression performance comparable to the conventional K-L algorithm.
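The conventional K-L transform that the entry compares against can be sketched directly with an eigendecomposition of the sample covariance; compression keeps only the coefficients on the top-k eigenvectors. This is the standard textbook form, not the paper's neural-network implementation; the function names are assumptions:

```python
import numpy as np

def kl_compress(X, k):
    """Conventional K-L transform: project zero-mean data onto the k
    eigenvectors of the sample covariance with the largest eigenvalues."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)
    evals, evecs = np.linalg.eigh(cov)             # eigenvalues ascending
    basis = evecs[:, np.argsort(evals)[::-1][:k]]  # top-k principal axes
    coeffs = Xc @ basis                            # compressed representation
    return coeffs, basis, mean

def kl_reconstruct(coeffs, basis, mean):
    """Invert the projection: expand coefficients back to the signal space."""
    return coeffs @ basis.T + mean
```

When the data actually lie in a k-dimensional subspace, reconstruction from k coefficients is exact; otherwise the discarded eigenvalues bound the mean-square reconstruction error, which is why the transform is optimal under that criterion.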

8.
An Adaptive Prediction Method Based on Online Learning of Artificial Neural Networks   (Total citations: 2; self-citations: 0; citations by others: 2)
After analyzing and comparing traditional prediction methods, this paper studies prediction based on the nonlinear mapping of artificial neural networks. For a class of slowly developing faults, it proposes an adaptive prediction algorithm with single-sample online learning and applies it to diesel engine fault prediction and diagnosis.

9.
Given the importance of face recognition to network security, this paper studies a face recognition algorithm based on fuzzy neural networks. It first briefly examines the feasibility of applying fuzzy-logic neural networks to image recognition, then constructs a fuzzy artificial neural network classifier, giving a layer-by-layer network design for face image recognition and the execution steps of the algorithm. Comparative simulations against a traditional BP neural network show that the proposed algorithm achieves better face recognition accuracy and reliability, providing a useful reference for further advancing the application of fuzzy artificial neural networks in image recognition.

10.
A Comparative Analysis of Decision Trees and Artificial Neural Networks   (Total citations: 2; self-citations: 0; citations by others: 2)
Decision trees and artificial neural networks are two important techniques for classification in data mining, each with its own strengths, so different data types call for different algorithms. To bring out their respective characteristics, this paper compares the two techniques on concrete examples, based on the principle and workflow of the C4.5 decision-tree algorithm and of the BP neural network model, and derives and verifies some of their performance differences in classification.

11.
A novel approach to simulate cellular neural networks (CNN) is presented in this paper. The approach, time-multiplexing simulation, is prompted by the need to simulate hardware models and test hardware implementations of CNN. For practical applications, due to hardware limitations, it is impossible to have a one-to-one mapping between the CNN hardware processors and all the pixels of the image. This simulator provides a solution by processing the input image block by block, with the number of pixels in a block being the same as the number of CNN processors in the hardware. The algorithm for implementing this simulator is presented along with popular numerical integration algorithms. Some simulation results and comparisons are also presented.
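The block-by-block strategy can be sketched as a simple tiling loop, where the tile size stands in for the number of CNN processors available in hardware. This sketch is illustrative only: a real time-multiplexed CNN simulator must also overlap neighboring blocks so that templates see correct boundary values, and must integrate the CNN state equations per block, both of which are omitted here.

```python
import numpy as np

def time_multiplexed_apply(image, op, block_h, block_w):
    """Apply `op` to the image one block at a time; the block size models
    the number of hardware processors.  Edge blocks may be smaller."""
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(0, h, block_h):
        for j in range(0, w, block_w):
            tile = image[i:i + block_h, j:j + block_w]
            out[i:i + block_h, j:j + block_w] = op(tile)  # one hardware pass
    return out
```

With a pointwise `op` the tiled result equals the full-image result; for neighborhood templates the tiles would need a halo of overlap.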

12.
A Heuristic Algorithm for Multiprocessor Task Assignment   (Total citations: 4; self-citations: 1; citations by others: 4)
冯斌  孙俊 《计算机工程》2004,30(14):63-65,157
Compared with other approaches, list scheduling obtains better results at lower cost, but it has only been used for systems with a bounded number of processors; for systems with an unbounded number of processors, scheduling strategies have been based on task clustering. This paper proposes a list scheduling algorithm for task assignment on an unbounded number of processors, called the node migration scheduling algorithm (NTSA). Experiments show that its solutions outperform those of other algorithms.
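The flavor of list scheduling discussed above can be sketched for a task DAG: keep a ready list, repeatedly pick the highest-priority ready task, and place it on the processor where it can start earliest. This generic sketch is not the NTSA algorithm itself; the priority rule (largest cost first), the function name, and the omission of communication costs are all simplifying assumptions:

```python
def list_schedule(tasks, deps, cost, n_procs):
    """Minimal list scheduling: pick the ready task with the highest
    priority (here, largest cost) and place it on the processor that can
    start it earliest.  `deps` maps task -> set of predecessor tasks."""
    finish = {}                   # task -> finish time
    proc_free = [0.0] * n_procs   # earliest free time per processor
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        task = max(ready, key=lambda t: cost[t])   # priority: largest cost
        # Earliest start: all predecessors finished and a processor free.
        pred_ready = max((finish[p] for p in deps.get(task, set())), default=0.0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], pred_ready))
        start = max(proc_free[proc], pred_ready)
        finish[task] = start + cost[task]
        proc_free[proc] = finish[task]
        done.add(task)
        order.append((task, proc, start))
    return order, max(finish.values())
```

For a diamond-shaped DAG (a before b and c, both before d) on two processors, the two middle tasks run in parallel and the makespan is the critical-path length.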

13.
Surmann  H. Ungering  A.P. 《Micro, IEEE》1995,15(4):40-48
The simulation and optimization of fuzzy systems with neural networks or genetic algorithms on general-purpose processors require fast implementations of fuzzy rule-based systems. We present several adapted implementation concepts that precisely analyze the fuzzy algorithm and are based on lookup tables, optimized rule processing, and digital pulse-duration modulation. These concepts allow general-purpose processors to produce solutions more quickly than the second generation of special fuzzy processors.

14.
With the rapid advance of deep-submicron processes, modern network processor chips are widely implemented as MPSoC (Multi-Processor System on Chip) architectures, which calls for a new design method to guide network processor architecture design. This paper studies network processor design methodology and proposes a genetic-algorithm-based method for mapping network applications onto the heterogeneous hardware resources of a network processor. The method first analyzes the design space of the network processor, describes the network application as a weighted dataflow process network, and parameterizes the hardware resources; a genetic algorithm then completes the mapping of the application onto the heterogeneous resources, yielding a network processor architecture design.

15.
The paper concerns the estimation under constraints of the parameters of distributed logic processors (DLP). This optimization problem under constraints is solved using stochastic approximation techniques. DLPs are fuzzy neural networks capable of representing nonlinear functions. They consist of several logic processors, each of which performs a logical fuzzy mapping. A simulation example, using data collected from an industrial fluidized bed combustor, illustrates the feasibility and the performance of this training algorithm.

16.
Overload control of call processors in telecom networks is used to protect the network of call processing computers from excessive load during traffic peaks, and involves techniques of predictive control with limited local information. Here we propose a neural network algorithm, in which a group of neural controllers are trained using examples generated by a globally optimal control method. Simulations show that the neural controllers have better performance than local control algorithms in both the throughput and the response to traffic upsurges. Compared with the centralized control algorithm, the neural control significantly decreases the computational time for making decisions and can be implemented in real time.

17.
A method is given for obtaining independent parts of algorithms represented by affine loop nests (not necessarily perfectly nested). The method is based on a modular affine mapping of algorithm operations onto independent virtual processors, and it can select more independent computations than the known procedures based on affine mappings.

18.
This paper describes a new scheme of binary codification of artificial neural networks, designed to automatically generate neural networks using any optimization method. Instead of directly mapping strings of bits onto network connectivities, this codification abstracts the binary encoding so that it does not reference the artificial indexing of network nodes; it uses shorter strings and avoids illegal points in the search space without excluding any legal neural network. With these goals in mind, an Abelian semigroup structure with a neutral element is obtained on the set of artificial neural networks, using a particular internal operation called superimposition that allows complex neural nets to be built from minimal useful structures. The scheme preserves the significant feature that similar neural networks differ in only one bit, which is desirable when using search algorithms. Experimental results using this codification with genetic algorithms are reported and compared to other codification methods in terms of convergence speed and the size of the networks obtained as solutions.

19.
This paper addresses optimal mapping of parallel programs composed of a chain of data parallel tasks onto the processors of a parallel system. The input to the programs is a stream of data sets, each of which is processed in order by the chain of tasks. This computation structure, also referred to as a data parallel pipeline, is common in several application domains, including digital signal processing, image processing, and computer vision. The performance parameters for such stream processing are latency (the time to process an individual data set) and throughput (the aggregate rate at which data sets are processed). These two criteria are distinct since multiple data sets can be pipelined or processed in parallel. The central contribution of this research is a new algorithm to determine a processor mapping for a chain of tasks that optimizes latency in the presence of a throughput constraint. We also discuss how this algorithm can be applied to solve the converse problem of optimizing throughput with a latency constraint. The problem formulation uses a general and realistic model of intertask communication and addresses the entire problem of mapping, which includes clustering tasks into modules, assigning processors to modules, and possibly replicating modules. The main algorithms are based on dynamic programming, and their execution time complexity is polynomial in the number of processors and tasks. The entire framework is implemented as an automatic mapping tool in the Fx parallelizing compiler for a dialect of High Performance Fortran.

20.
Using a convolutional neural network as an example, we discuss specific aspects of implementing a learning algorithm of pattern recognition on the GPU graphics card using NVIDIA CUDA architecture. The training time of the neural network on a video-adapter is decreased by a factor of 5.96 and the recognition time of a test set is decreased by a factor of 8.76 when compared with the implementation of an optimized algorithm on a central processing unit (CPU). We show that the implementation of the neural network algorithms on graphics processors holds promise.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号