Similar Documents
20 similar documents found (search time: 31 ms)
1.
When feedforward networks are used for pattern classification, effective training has long been a concern. This paper first shows that an invertible linear transformation of the patterns does not change the required network structure, while the shear transformations that compose an invertible linear transformation can, to some extent, change the difficulty of training the network. On this basis, a method is proposed in which an appropriate shear transformation of the patterns can effectively change the training difficulty and improve training efficiency. Experimental results at the end of the paper confirm this.
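A minimal sketch of the shear transform the abstract refers to: a shear is an invertible linear map (determinant 1), so it preserves class membership exactly while changing the geometry the classifier sees. The function names are illustrative, not from the paper.

```python
def shear(points, k):
    """Apply an invertible shear x' = x + k*y, y' = y to 2-D pattern vectors.
    Class membership is preserved, but separability geometry changes,
    which can make the classes easier or harder for a network to learn."""
    return [(x + k * y, y) for x, y in points]

def unshear(points, k):
    # Inverse shear: recovers the original patterns exactly.
    return [(x - k * y, y) for x, y in points]

pts = [(2.0, -1.0), (1.0, 2.0)]
restored = unshear(shear(pts, 1.5), 1.5)
assert all(abs(a - b) < 1e-12 for p, q in zip(pts, restored) for a, b in zip(p, q))
```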

2.
Application of genetic feedforward neural networks to function approximation    Cited by: 5 (self-citations: 1, others: 4)
Artificial neural networks offer high computational power, generalization ability, and nonlinear mapping, and have been applied successfully in many fields, but there are no rules for determining their topology, activation functions, and training methods. This paper proposes a genetic-algorithm method for optimizing feedforward neural networks: the network structure, activation functions, and training method are encoded as individuals, optimal or near-optimal solutions are sought, and a well-suited feedforward network is designed for a given problem. The steps of the genetic algorithm are described, and experiments on nonlinear function approximation show that the optimized feedforward network outperforms one configured by experience, verifying the effectiveness of the method.
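The encode-select-crossover-mutate loop described above can be sketched as below. This is a generic GA skeleton, not the paper's exact encoding: the genome here is just (hidden-unit count, activation name), and `fitness` is a synthetic placeholder standing in for "decode the genome, train the network, return validation error".

```python
import random

random.seed(0)
ACTIVATIONS = ["sigmoid", "tanh", "relu"]

def fitness(genome):
    """Placeholder for: decode genome -> build network -> train -> validation error.
    A synthetic score is used here so the GA loop itself can be shown."""
    hidden, act = genome
    return abs(hidden - 8) + ACTIVATIONS.index(act) * 0.1

def mutate(genome):
    hidden, act = genome
    if random.random() < 0.5:
        hidden = max(1, hidden + random.choice([-1, 1]))
    else:
        act = random.choice(ACTIVATIONS)
    return (hidden, act)

def crossover(a, b):
    # Single-point crossover on the two genes.
    return (a[0], b[1])

pop = [(random.randint(1, 20), random.choice(ACTIVATIONS)) for _ in range(10)]
init_best = min(fitness(g) for g in pop)
for _ in range(30):
    pop.sort(key=fitness)
    parents = pop[:5]               # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(5)]
    pop = parents + children

best = min(pop, key=fitness)
```

Because the top five genomes survive each generation unchanged, the best fitness is monotonically non-increasing.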

3.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to achieve nonlinear speech denoising, in order to implement robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training the clean-speech subspace as the continuous attractor. The network is then used to recognize speech corrupted by both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared with that of the same network with a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed so that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, along with phone recognition. Using these connections, recognition accuracy is improved by 21% for the stationary noise and by 14% for the nonstationary noise at 0 dB SNR, with respect to a reference feedforward neural network.

4.
Traditional recurrent neural networks have complex structures, require large amounts of data to be trained correctly for continuous speech recognition, and are costly to train in both time and hardware. To address these problems, an algorithm based on residual networks and gated convolutional neural networks, combined with connectionist temporal classification (CTC), is proposed to build an end-to-end Chinese speech recognition model. The model takes the spectrogram as input, extracts high-level abstract features through the residual network, and then, through stacked gated convolutional neural...

5.
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks but also learns faster, which makes it more suitable for controller design. Second, we propose an inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning, and we derive a new learning algorithm for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
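The abstract does not spell out the CPBUM internals, but such models build on a Chebyshev feature expansion of the input. A minimal sketch of that expansion, using the standard three-term recurrence, is:

```python
import math

def cheb_features(x, n):
    """Chebyshev basis T_0..T_{n-1} at x via the recurrence
    T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1}.
    A linear output layer over these features gives a polynomial model
    that typically trains much faster than iterating a sigmoidal net."""
    t = [1.0, x]
    for _ in range(2, n):
        t.append(2 * x * t[-1] - t[-2])
    return t[:n]

# Property check: T_k(cos a) == cos(k*a).
a = 0.7
feats = cheb_features(math.cos(a), 6)
assert all(abs(feats[k] - math.cos(k * a)) < 1e-9 for k in range(6))
```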

6.
Amit Y, Mascaro M. Neural Computation, 2001, 13(6): 1415-1442
We describe a system of thousands of binary perceptrons with coarse-oriented edges as input that is able to recognize shapes, even in a context with hundreds of classes. The perceptrons have randomized feedforward connections from the input layer and form a recurrent network among themselves. Each class is represented by a prelearned attractor (serving as an associative hook) in the recurrent net corresponding to a randomly selected subpopulation of the perceptrons. In training, first the attractor of the correct class is activated among the perceptrons; then the visual stimulus is presented at the input layer. The feedforward connections are modified using field-dependent Hebbian learning with positive synapses, which we show to be stable with respect to large variations in feature statistics and coding levels, and which allows the same threshold to be used on all perceptrons. Recognition is based on the visual stimuli alone. These activate the recurrent network, which is then driven by the dynamics to a sustained attractor state, concentrated in the correct class subset and providing a form of working memory. We believe this architecture is more transparent than standard feedforward two-layer networks and has stronger biological analogies.

7.
A neural network structure is presented that uses feedback of unmeasured system states to represent dynamic systems more efficiently than conventional feedforward and recurrent networks, leading to better predictions, reduced training requirement and more reliable extrapolation. The structure identifies the actual system states based on imperfect knowledge of the initial state, which is available in many practical systems, and is therefore applicable only to such systems. It also enables a natural integration of any available partial state-space model directly into the prediction scheme, to achieve further performance improvement. Simulation examples of three varied dynamic systems illustrate the various options and advantages offered by the state-feedback neural structure. Although the advantages of the proposed structure, compared with the conventional feedforward and recurrent networks, should hold for most practical dynamic systems, artificial systems can readily be created and real systems can surely be found for which one or more of these advantages would vanish or even get reversed. Caution is therefore recommended against interpreting the suggested advantages as strict theorems valid in all situations.

8.
Neural network-based image registration using global image features is a relatively new research subject, and the schemes devised so far use a feedforward neural network to find the geometric transformation parameters. In this work, we propose using a radial basis function (RBF) neural network instead of a feedforward neural network, to overcome the lengthy pre-registration training stage. This modification has been tested on the neural network-based registration approach using discrete cosine transform features in the presence of noise. The experimental registration work is conducted at two levels: estimation of transformation parameters from a local range for fine registration and from a medium range for coarse registration. For both levels, the performance of the feedforward and RBF schemes has been obtained and compared. The proposed scheme not only speeds up the training stage enormously but also increases accuracy and gives robust results in the presence of additive Gaussian noise, owing to the better generalization ability of RBF networks.
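The speed advantage claimed above comes from the fact that an RBF network with one center per training sample needs no iterative pre-training: the output weights fall out of a single linear solve. A minimal 1-D sketch (not the paper's registration setup) assuming a Gaussian kernel:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(x, c, s=1.0):
    return math.exp(-((x - c) ** 2) / (2 * s * s))

# One Gaussian center per training point; the Gram matrix is positive
# definite for distinct centers, so the weights are a direct linear solve.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 2.0]
G = [[rbf(x, c) for c in xs] for x in xs]
w = solve(G, ys)

def predict(x):
    return sum(wi * rbf(x, c) for wi, c in zip(w, xs))
```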

9.
In this work, a probabilistic model is established for recurrent networks. The expectation-maximization (EM) algorithm is then applied to derive a new fast training algorithm for recurrent networks through mean-field approximation. This new algorithm converts training a complicated recurrent network into training an array of individual feedforward neurons. These neurons are then trained via a linear weighted regression algorithm. The training time has been improved by 5 to 15 times on benchmark problems.

10.
A recurrent self-organizing neural fuzzy inference network   Cited by: 15 (self-citations: 0, others: 15)
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes initially in the RSONFIN. They are created online via concurrent structure identification and parameter identification. The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems and 2) no predetermination, such as the number of hidden nodes, must be given, since the RSONFIN can find its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are done and performance comparisons with some existing recurrent networks are also made. The efficiency of the RSONFIN is verified by these results.

11.
As a nonlinear system, a recurrent neural network generally has an incremental gain different from its induced norm. While most of the previous research efforts were focused on the latter, this paper presents a method to compute an effective upper bound of the former for a class of discrete-time recurrent neural networks, which is not only applied to systems with arbitrary inputs but also extended to systems with small-norm inputs. The upper bound is computed by simple optimizations subject to linear matrix inequalities (LMIs). To demonstrate the wide connections of our results to problems in control, the servomechanism is then studied, where a feedforward neural network is designed to control the output of a recurrent neural network to track a set of trajectories. This problem can be converted into the synthesis of feedforward-feedback gains such that the incremental gain of a certain system is minimized. An algorithm to perform such a synthesis is proposed and illustrated with a numerical example.

12.
This paper presents a wavelet-based recurrent fuzzy neural network (WRFNN) for prediction and identification of nonlinear dynamic systems. The proposed WRFNN model combines the traditional Takagi-Sugeno-Kang (TSK) fuzzy model and wavelet neural networks (WNN). This paper adopts nonorthogonal and compactly supported functions as the wavelet neural network bases. Temporal relations embedded in the network are created by adding feedback connections representing memory units to the second layer of the feedforward wavelet-based fuzzy neural network (WFNN). An online learning algorithm, consisting of structure learning and parameter learning, is also presented. The structure learning depends on the degree measure to obtain the number of fuzzy rules and wavelet functions, while the parameter learning is based on the gradient descent method for adjusting the shape of the membership functions and the connection weights of the WNN. Finally, computer simulations have demonstrated that the proposed WRFNN model requires fewer adjustable parameters and obtains a smaller RMS error than other methods.

13.
Body surface area (BSA) plays a vital role in clinical medicine, but most existing BSA formulas use only two parameters, height and weight, and estimate BSA by fitting simple functions; clinically, existing BSA methods are also considered to have large errors. To address these problems, a BSA regression prediction model is proposed. The model has two parts: first, correlation and significance analysis are used to select BSA influencing factors with high correlation; second, a deep feedforward neural network is trained on human-body data to build the regression model. The experiments use both 5-fold cross-validation and test-set validation. First, the deep feedforward network is evaluated for accuracy against traditional BSA calculation methods; it is then compared with three other models. Against the traditional methods, the deep feedforward network's coefficient of determination is higher than that of the two traditional methods, improving by 6%, and its error is nearly halved. Against the three other models, its coefficient of determination is at least 2% higher and its error is lower. Agreement analysis also shows that the deep feedforward network has the narrowest 95% limits of agreement, i.e., the best agreement. Overall, the proposed regression prediction model yields more accurate BSA predictions.
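For context, the traditional two-parameter formulas the abstract argues against have this shape. Du Bois and Mosteller are two widely used ones; the abstract does not say which specific formulas the paper compared, so these are representative examples only:

```python
def bsa_du_bois(height_cm, weight_kg):
    """Du Bois & Du Bois (1916): BSA in m^2 from height and weight only."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def bsa_mosteller(height_cm, weight_kg):
    """Mosteller (1987): BSA = sqrt(height * weight / 3600)."""
    return ((height_cm * weight_kg) / 3600.0) ** 0.5

# For a typical adult both land near 1.8 m^2, but they still disagree
# with each other, which is the kind of error the paper targets.
a = bsa_du_bois(170, 70)
b = bsa_mosteller(170, 70)
assert 1.7 < a < 1.9 and 1.7 < b < 1.9
```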

14.
An inverse halftoning algorithm combining a lookup table (LUT) with an Elman network   Cited by: 1 (self-citations: 0, others: 1)
To improve the estimation accuracy, in lookup-table inverse halftoning, of the inverse halftone values of halftone patterns not present in the table, an inverse halftoning algorithm combining a lookup table with an Elman recurrent network is proposed. The algorithm first builds a preliminary inverse-halftoning lookup table from a sample image set, then constructs and trains an inverse-halftoning approximation model using an Elman recurrent network to fit the inverse halftone values of the missing patterns, producing a complete lookup table to support inverse halftoning. Experimental results and performance analysis show that images reconstructed with this algorithm perform well both visually and in PSNR, and that the method runs fast with low space complexity.

15.
In practice, the back-propagation algorithm often runs very slowly, and the question naturally arises as to whether there are intrinsic computational difficulties in training neural networks, or whether better training algorithms might exist. Two important issues are investigated in this framework. One is establishing a flexible structure for constructing very simple neural networks for multi-input/output systems. The other is how to obtain a learning algorithm that achieves good performance in the training phase. In this paper, feedforward neural networks with flexible bipolar sigmoid functions (FBSFs) are investigated to learn the inverse model of the system. The FBSF changes shape by changing the values of its parameter according to the desired trajectory or the teaching signal. The proposed neural network is trained to learn the inverse dynamic model using back-propagation learning algorithms, in which not only the connection weights but also the sigmoid function parameters (SFPs) are adjustable. Feedback-error learning is used as the learning method for the feedforward controller; in this case, the output of a feedback controller is fed to the neural network model. The suggested method is applied to a two-link robotic manipulator control system, configured as a direct controller for the system, to demonstrate the capability of our scheme. The advantages of the proposed structure over traditional neural network structures are also discussed.
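A minimal sketch of a flexible bipolar sigmoid with a trainable shape parameter. The abstract does not give the exact functional form; a common bipolar choice is (1 - e^(-a*x)) / (1 + e^(-a*x)), which equals tanh(a*x/2), with the gradient with respect to `a` available so the shape parameter can be updated alongside the weights:

```python
import math

def fbsf(x, a):
    """Bipolar sigmoid with adjustable shape parameter a.
    One plausible form (equal to tanh(a*x/2)); not necessarily
    the exact expression used in the paper."""
    return (1.0 - math.exp(-a * x)) / (1.0 + math.exp(-a * x))

def dfbsf_da(x, a):
    """Analytic gradient w.r.t. the shape parameter, so that 'a'
    can be trained by back-propagation like a connection weight."""
    s = fbsf(x, a)
    return 0.5 * x * (1.0 - s * s)

# Sanity checks: odd symmetry around 0 and a finite-difference gradient check.
assert fbsf(0.0, 2.0) == 0.0
eps = 1e-6
num = (fbsf(1.3, 0.8 + eps) - fbsf(1.3, 0.8 - eps)) / (2 * eps)
assert abs(num - dfbsf_da(1.3, 0.8)) < 1e-6
```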

16.
This paper explores the feasibility of employing the non-recurrent backpropagation training algorithm for a recurrent neural network, the Simultaneous Recurrent Neural network, for static optimisation. A simplifying observation that maps the recurrent network dynamics, configured to operate in relaxation mode as a static optimizer, to feedforward network dynamics is leveraged to facilitate application of a non-recurrent training algorithm such as standard backpropagation and its variants. A simulation study is conducted to assess the feasibility, optimizing potential, and computational efficiency of training the Simultaneous Recurrent Neural network with non-recurrent backpropagation, along with a comparative computational-complexity analysis against the same network trained with recurrent backpropagation. Simulation results demonstrate that it is feasible to apply non-recurrent backpropagation to train the Simultaneous Recurrent Neural network. The optimality and computational-complexity analysis fails to demonstrate any advantage of non-recurrent over recurrent backpropagation for the optimisation problem considered. However, considerable future potential remains to be explored, given that computationally efficient versions of the backpropagation algorithm, namely quasi-Newton and conjugate gradient descent among others, are also applicable to the neural network proposed for static optimisation in this paper.

17.
In this paper, a constrained optimization technique is explored for a substantial problem: accelerating the training of globally recurrent neural networks. Unlike most previous methods for feedforward neural networks, the authors adopt the constrained optimization technique to improve the gradient-based algorithm of the globally recurrent neural network with an adaptive learning rate during training. Using the recurrent network with the improved algorithm, experiments on two real-world problems, namely filtering additive noise in acoustic data and classifying temporal signals for speaker identification, have been performed. The experimental results show that the recurrent neural network with the improved learning algorithm trains significantly faster and achieves satisfactory performance.

18.
A new evolutionary neural network algorithm, GTEANN, is proposed. Based on the efficient Guo Tao algorithm, it searches the network-structure space and the weight space simultaneously to automate the design of feedforward neural networks. The encoding scheme adopted is intuitive and effective; under this representation, learning a neural network becomes a complex mixed integer-real nonlinear programming problem (for example, the crossover operation includes isomorphism and regularization handling of the networks). Preliminary experimental results show that the method converges and can automatically design optimized multilayer feedforward neural networks from training samples.

19.
顾哲彬, 曹飞龙. 《计算机科学》 (Computer Science), 2018, 45(Z11): 238-243
Traditional artificial neural networks take vector inputs, while an image is represented as a matrix; when an image is processed with such a network it must therefore be flattened into a vector, which destroys the image's structural information and degrades the processing results. To improve the network's ability to handle images, this paper draws on the ideas and methods of deep learning and introduces a multilayer feedforward neural network with matrix inputs. The network is trained with the traditional backpropagation (BP) algorithm; the training procedure and algorithm are given, and numerical experiments are conducted on the USPS handwritten-digit dataset. The results show that, compared with a single-hidden-layer matrix-input feedforward network (2D-BP), the proposed multilayer network achieves better classification. In addition, for color-image classification, an effective and feasible method based on the proposed 2D-BP network is given.
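A minimal sketch of a matrix-input layer of the kind the abstract describes. The abstract does not give the paper's exact layer form; a common bilinear formulation for matrix-input networks is Y = sigmoid(U X V + B), where the left factor U mixes rows and the right factor V mixes columns, so the 2-D structure of X is never flattened:

```python
import math

def matmul(M, N):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)] for row in M]

def matrix_layer(X, U, V, B):
    """One matrix-input layer: Y = sigmoid(U @ X @ V + B).
    X keeps its 2-D image structure instead of being flattened to a vector.
    (Bilinear form assumed for illustration, not taken from the paper.)"""
    Z = matmul(matmul(U, X), V)
    return [[1.0 / (1.0 + math.exp(-(z + b))) for z, b in zip(zr, br)]
            for zr, br in zip(Z, B)]

X = [[0.0, 1.0], [1.0, 0.0]]   # a 2x2 "image"
U = [[1.0, 0.0]]               # row projection: 2x2 -> 1x2
V = [[1.0], [1.0]]             # column projection: 1x2 -> 1x1
B = [[0.0]]
Y = matrix_layer(X, U, V, B)
```

Note the parameter count: U and V together have rows+cols entries per output, versus rows*cols for a dense layer on the flattened vector.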

20.
A recurrent Sigma-Pi-linked back-propagation neural network is presented. The increase in input information is achieved by introducing "higher-order" terms generated through functional-link input nodes. Based on the Sigma-Pi-linked model, this network is capable of approximating more complex functions at a much faster convergence rate. The recurrent network is tested intensively on different types of linear and nonlinear time series. Compared with the conventional feedforward BP network, the training convergence rate is substantially faster. The results indicate that the functional approximation property of this recurrent network is remarkable for time-series applications.
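A minimal sketch of the functional-link expansion behind such "higher-order" terms: pi (product) units over input subsets are appended to the raw input, after which a plain sigma (weighted-sum) layer can capture multiplicative interactions. This illustrates the general Sigma-Pi idea, not the paper's exact node set:

```python
from itertools import combinations

def functional_link(x, order=2):
    """Augment the input with product terms of up to `order` inputs,
    Sigma-Pi style: the pi units here are products over index subsets,
    and the network's sigma layer is then linear in these features."""
    feats = list(x)
    for k in range(2, order + 1):
        for idx in combinations(range(len(x)), k):
            p = 1.0
            for i in idx:
                p *= x[i]
            feats.append(p)
    return feats

# Three inputs at order 2 yield the three pairwise products.
assert functional_link([2.0, 3.0, 5.0]) == [2.0, 3.0, 5.0, 6.0, 10.0, 15.0]
```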

