20 similar documents found (search took 15 ms)
1.
Recurrent neural network models and their derivatives, such as the Long Short-Term Memory model (LSTM) and the Gated Recurrent Unit (GRU), perform well on sequential-data processing tasks. The memory layer of these models can retain information from every time step, but it cannot efficiently handle the non-uniform time intervals and irregular fluctuations found in time-series data from certain domains, such as financial data. This paper proposes a new gated recurrent unit based on fuzzy control (GRU-Fuzzy) to address these prob...
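Since the abstract is cut off, the paper's actual fuzzy-control mechanism is not shown; the sketch below is only one plausible reading of a time-aware gated unit, with all names (gru_step, tau) and the dt-modulated update gate invented for illustration:

```python
import numpy as np

# Hypothetical sketch: a GRU step whose update gate is modulated by the
# elapsed time interval dt, one plausible way to handle non-uniform
# sampling. The truncated abstract does not give the paper's actual
# fuzzy-control rules, so this illustrates the setting, not the method.

def gru_step(x, h, dt, W, U, b, tau=1.0):
    """One GRU step; `dt` shrinks the update gate for short intervals."""
    Wz, Wr, Wh = W          # input weights for update/reset/candidate
    Uz, Ur, Uh = U          # recurrent weights
    bz, br, bh = b
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h + bz) * (1.0 - np.exp(-dt / tau))  # time-aware gate
    r = sig(Wr @ x + Ur @ h + br)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1.0 - z) * h + z * h_tilde
```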
2.
Kijsirikul Boonserm, Sinthupinyo Sukree, Chongkasemwongse Kongsak. Machine Learning, 2001, 44(3): 273-299
This paper presents a method for approximate match of first-order rules with unseen data. The method is useful especially in the case of multi-class problems or noisy domains, where unseen data are often not covered by the rules. Our method employs the Backpropagation Neural Network for the approximation. To build the network, we propose a technique for generating features from the rules to be used as inputs to the network. Our method has been evaluated on four domains of first-order learning problems. The experimental results show improvements of our method over the use of the original rules. We also applied our method to approximate match of propositional rules converted from an unpruned decision tree. In this case, our method can be thought of as soft-pruning of the decision tree. The results on multi-class learning domains in the UCI repository of machine learning databases show that our method performs better than standard C4.5's pruned and unpruned trees.
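As a rough illustration of the pipeline the abstract describes (rules evaluated to produce network inputs), here is a minimal sketch; the toy rules, feature scheme, and network sizes are all invented, and the paper's actual feature-generation technique is surely richer:

```python
import numpy as np

# Minimal sketch of the idea: evaluate each learned rule (here just
# boolean predicates) on an example to build a feature vector, then let
# a small backprop-style network score the classes, so examples not
# covered exactly by any rule still get an approximate match.

rules = [
    lambda e: e["legs"] == 4 and e["fur"],   # toy rule 1 -> mammal-ish
    lambda e: e["wings"] and not e["fur"],   # toy rule 2 -> bird-ish
]

def features(example):
    """One binary feature per rule: does the rule fire on this example?"""
    return np.array([1.0 if r(example) else 0.0 for r in rules])

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, len(rules)))        # feature -> hidden weights
W2 = rng.normal(size=(2, 4))                 # hidden -> class weights

def predict(example):
    h = np.tanh(W1 @ features(example))      # hidden layer
    s = np.exp(W2 @ h)
    return s / s.sum()                       # class probabilities

print(predict({"legs": 4, "fur": True, "wings": False}))  # rough class scores
```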
3.
In this paper, we propose some techniques for injecting finite state automata into Recurrent Radial Basis Function networks (R2BF). We show that, when provided with proper hints and with the weight space suitably constrained, these networks behave as automata. A technique, based on adding a suitable penalty function to the ordinary cost, is suggested for forcing the learning process to develop automaton representations. Successful experimental results are shown for inductive inference of regular grammars.
4.
Multi-step Learning Rule for Recurrent Neural Models: An Application to Time Series Forecasting
Multi-step prediction is a difficult task that has attracted increasing interest in recent years. It aims at making predictions several steps into the future starting from current information. The interest of this work is the development of nonlinear neural models for the purpose of building multi-step time series prediction schemes. In that context, the most popular neural models are based on traditional feedforward neural networks. However, this kind of model may present disadvantages when a long-term prediction problem is formulated, because such networks are trained to predict only the next sampling time. In this paper, a neural model based on a partially recurrent neural network is proposed as a better alternative. For the recurrent model, a learning phase aimed at long-term prediction is imposed, which allows better predictions of the time series further into the future to be obtained. In order to validate the performance of the recurrent neural model in predicting the dynamic behaviour of a series into the future, three different time series have been used as case studies: an artificial time series (the logistic map) and two real time series (sunspot and laser data). Models based on feedforward neural networks have also been used and compared against the proposed model. The results suggest that the recurrent model can help in improving prediction accuracy.
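The core idea, scoring a model against a whole prediction horizon by feeding it its own outputs, can be shown concretely; the sketch below uses the logistic map mentioned in the abstract, but the one-step model and loss are stand-ins, not the paper's partially recurrent network:

```python
import numpy as np

# Sketch of the multi-step objective: instead of scoring a model only on
# the next sample, roll its own predictions forward H steps and
# accumulate the error over the whole horizon.

def logistic_map(n, x0=0.3, r=4.0):
    x = np.empty(n); x[0] = x0
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

def multi_step_loss(step_fn, series, horizon=5):
    loss, count = 0.0, 0
    for t in range(len(series) - horizon):
        x = series[t]
        for h in range(1, horizon + 1):
            x = step_fn(x)                      # feed prediction back in
            loss += (x - series[t + h]) ** 2
            count += 1
    return loss / count

series = logistic_map(200)
print(multi_step_loss(lambda x: 4.0 * x * (1.0 - x), series))  # perfect model -> ~0
```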
5.
Convergence Analysis of the BP Neural Network Algorithm Based on a Lyapunov Function
For the self-learning mechanism of feedforward neural networks under time-varying inputs, a Lyapunov function is used to analyse the convergence of the weights, thereby revealing the underlying reason why the BP neural network algorithm adjusts the weights in the direction of minimum error. Building on the convergence analysis of the single-parameter BP algorithm, a discrete BP neural network algorithm with a single-parameter variable adjustment rule is proposed.
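As a worked illustration of this style of argument (the abstract gives no formulas, so the notation below is the standard textbook Lyapunov analysis of BP, not necessarily the paper's): take the squared instantaneous error as the Lyapunov candidate and require it to decrease at each update.

```latex
V(k) = \tfrac{1}{2}\,e^{2}(k), \qquad e(k) = d(k) - y(k), \qquad
\Delta \mathbf{w}(k) = -\eta\, e(k)\, \frac{\partial e(k)}{\partial \mathbf{w}}

\Delta V(k) = e(k)\,\delta + \tfrac{1}{2}\,\delta^{2}
\quad\text{with}\quad
\delta = \Big(\tfrac{\partial e(k)}{\partial \mathbf{w}}\Big)^{\!T}\Delta\mathbf{w}(k)
       = -\eta\, e(k)\, \Big\lVert \tfrac{\partial e(k)}{\partial \mathbf{w}} \Big\rVert^{2}

\Delta V(k) < 0 \;\Longleftrightarrow\;
0 < \eta < \frac{2}{\big\lVert \partial e(k)/\partial \mathbf{w} \big\rVert^{2}}
```

Any learning rate inside this bound makes V decrease monotonically, which is the sense in which the weights converge toward minimum error; a single-parameter variable adjustment rule then amounts to choosing η(k) within the bound at each step.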
6.
The properties of time series generated by continuous-valued feed-forward networks in which the next input vector is determined from past output values are studied. The asymptotic solutions developed suggest that the typical stable behaviour is (quasi)periodic, with an attractor dimension that is limited by the number of hidden units, independent of the details of the weights. The results are robust under additive noise, except for the expected noise-induced effects – attractor broadening and loss of phase coherence at large times. These effects, however, are moderated by the size of the network N.
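The setting is easy to reproduce numerically; the sketch below iterates a random feedforward net on a window of its own past outputs (sizes and weights are arbitrary choices, not taken from the paper) and typically settles into the (quasi)periodic behaviour described:

```python
import numpy as np

# Numerical illustration of the setting analysed above: a random
# feedforward net whose next input is a window of its own past outputs.
# The trajectory usually settles into (quasi)periodic behaviour.

rng = np.random.default_rng(1)
N, d = 16, 8                       # hidden units, input window length
W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(N, d))
w2 = rng.normal(scale=1.0 / np.sqrt(N), size=N)

x = rng.normal(size=d)             # initial window of past outputs
outputs = []
for t in range(2000):
    y = np.tanh(w2 @ np.tanh(W1 @ x))
    outputs.append(y)
    x = np.roll(x, 1); x[0] = y    # shift the window, insert new output

print(np.round(outputs[-12:], 3))  # tail of the trajectory, often periodic
```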
7.
8.
Noisy Time Series Prediction using Recurrent Neural Networks and Grammatical Inference
Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high-noise, small-sample-size signals. We introduce a new intelligent signal processing method which addresses these difficulties. The proposed method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. We show that the symbolic representation aids the extraction of symbolic knowledge from the trained recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Automata rules related to well-known behaviour such as trend following and mean reversal are extracted.
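The first stage of the method, symbolization via a self-organizing map, can be sketched as follows; the data, SOM size, and training schedule are placeholders, and the subsequent recurrent-network grammatical inference stage is omitted:

```python
import numpy as np

# Sketch of the symbolization stage: turn noisy real-valued returns into
# a short symbol stream with a tiny 1-D self-organizing map. The symbol
# stream would then be fed to a recurrent network for grammatical
# inference (not shown).

rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=500)        # stand-in for FX returns

codebook = np.linspace(returns.min(), returns.max(), 4)  # 4 symbols
for epoch in range(20):                            # simple SOM training
    lr = 0.5 * (1.0 - epoch / 20)
    for r in rng.permutation(returns):
        i = int(np.argmin(np.abs(codebook - r)))   # best-matching unit
        for j in range(len(codebook)):             # neighbourhood update
            h = np.exp(-abs(i - j))
            codebook[j] += lr * h * (r - codebook[j])

symbols = np.argmin(np.abs(returns[:, None] - codebook[None, :]), axis=1)
print(codebook, symbols[:20])                      # symbol stream for the RNN
```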
9.
A fuzzy-recurrent neural network (FRNN) has been constructed by adding feedback connections to a feedforward fuzzy neural network (FNN). The FRNN expands the modeling ability of an FNN in order to deal with temporal problems. The basic idea of the FRNN is first to use process or expert knowledge, including appropriate fuzzy logic rules and membership functions, to construct an initial structure, and then to use parameter-learning algorithms to fine-tune the membership functions and other parameters. Its recurrent property makes it suitable for dealing with temporal problems, such as on-line fault diagnosis. In addition, it also lends human-understandable meaning to the normal feedforward multilayer neural network, whose internal units are otherwise opaque to users. In short, the trained FRNN has good interpreting ability and one-step-ahead predicting ability. To demonstrate the performance of the FRNN in diagnosis, a comparison is made with a conventional feedforward network. The efficiency of the FRNN is verified by the results.
10.
A backpropagation learning algorithm for feedforward neural networks with an adaptive learning rate is derived. The algorithm is based upon minimising the instantaneous output error and does not include any simplifications encountered in the corresponding Least Mean Square (LMS) algorithms for linear adaptive filters. The backpropagation algorithm with an adaptive learning rate, which is derived based upon the Taylor series expansion of the instantaneous output error, is shown to exhibit behaviour similar to that of the Normalised LMS (NLMS) algorithm. Indeed, the derived optimal adaptive learning rate of a neural network trained by backpropagation degenerates to the learning rate of the NLMS for a linear activation function of a neuron. By continuity, the optimal adaptive learning rate for neural networks imposes additional stabilisation effects on the traditional backpropagation learning algorithm.
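For the linear-activation case mentioned above, the correspondence with NLMS can be made concrete; the notation below is standard NLMS, assumed here rather than taken from the paper:

```latex
y(k) = \mathbf{x}^{T}(k)\,\mathbf{w}(k), \qquad e(k) = d(k) - y(k)

\mathbf{w}(k+1) = \mathbf{w}(k) + \eta(k)\, e(k)\, \mathbf{x}(k),
\qquad
\eta(k) = \frac{1}{\lVert \mathbf{x}(k) \rVert_{2}^{2}}
```

This choice of η(k) zeroes the a posteriori error d(k) − xᵀ(k)w(k+1), which is exactly the NLMS normalisation that the derived adaptive rate degenerates to for a linear neuron; practical NLMS adds a small regularising constant to the denominator.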
11.
A Polynomial-Function Recurrent Neural Network Model and Its Applications
Exploiting the fact that a recurrent neural network has both feedforward and feedback paths, this paper sets the activation functions of the hidden-layer neurons to a sequence of adjustable polynomial functions, yielding a new polynomial-function recurrent neural network model. The model retains the characteristics of a conventional recurrent neural network while offering stronger function-approximation ability. A learning algorithm for the model is proposed for recursive-computation problems, and the network is applied to the approximate factorisation of multivariate polynomials, where the learning algorithm shows clear advantages. Worked examples show that the algorithm is effective, converges quickly, and achieves high accuracy, making it suitable for recursive-computation problems. The proposed model and its learning algorithm provide useful guidance for approximate computation with algebraic symbols.
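A minimal sketch of the model's key ingredient, hidden units with adjustable polynomial activations, might look as follows; the degree, sizes, and shared-coefficient choice are illustrative assumptions only:

```python
import numpy as np

# Sketch of the key ingredient described above: each hidden unit applies
# an adjustable polynomial (trainable coefficients) instead of a fixed
# sigmoid. Degree and sizes are illustrative.

def poly_activation(s, coeffs):
    """Evaluate sum_i coeffs[i] * s**i elementwise on pre-activations s."""
    return sum(c * s**i for i, c in enumerate(coeffs))

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))            # input -> hidden weights
coeffs = rng.normal(size=4)            # cubic polynomial, trainable

x = rng.normal(size=3)
h = poly_activation(W @ x, coeffs)     # hidden activations
print(h)
```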
12.
13.
14.
Pérez-Ortiz Juan Antonio, Calera-Rubio Jorge, Forcada Mikel L. Neural Processing Letters, 2001, 14(2): 127-140
Arithmetic coding is one of the most outstanding techniques for lossless data compression. It attains its good performance with the help of a probability model which indicates at each step the probability of occurrence of each possible input symbol given the current context. The better this model, the greater the compression ratio achieved. This work analyses the use of discrete-time recurrent neural networks and their capability for predicting the next symbol in a sequence in order to implement that model. The focus of this study is on online prediction, a task much harder than the classical offline grammatical inference with neural networks. The results obtained show that recurrent neural networks have no problem when the sequences come from the output of a finite-state machine, easily giving high compression ratios. When compressing real texts, however, the dynamics of the sequences seem to be too complex to be learned online correctly by the net.
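How a recurrent predictor plugs into arithmetic coding can be sketched directly: at each step the model emits a next-symbol distribution and the coder narrows its interval by the observed symbol's probability mass. The toy Elman-style cell and fixed weights below are assumptions; real use would train the model online:

```python
import numpy as np

# Sketch of the coupling: the recurrent model supplies the probability
# model, and the arithmetic coder narrows its interval by the mass of
# the symbol actually seen.

rng = np.random.default_rng(0)
K, H = 4, 8                                     # alphabet size, hidden size
Wx, Wh, Wo = (rng.normal(scale=0.3, size=s) for s in [(H, K), (H, H), (K, H)])

def step(h, sym):
    x = np.eye(K)[sym]                          # one-hot input symbol
    h = np.tanh(Wx @ x + Wh @ h)
    p = np.exp(Wo @ h); p /= p.sum()            # next-symbol distribution
    return h, p

low, high, h = 0.0, 1.0, np.zeros(H)
p = np.full(K, 1.0 / K)                         # prior before any context
for sym in [0, 1, 1, 2, 3]:                     # toy message
    cdf = np.concatenate(([0.0], np.cumsum(p)))
    low, high = low + (high - low) * cdf[sym], low + (high - low) * cdf[sym + 1]
    h, p = step(h, sym)

print(low, high)  # any number in [low, high) encodes the message
```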
15.
In [1] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feedforward neural network that approximates the calculation of T_P for a given first-order acyclic logic program P with an injective level mapping arbitrarily well. Extending the feedforward network by recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. This result is proven by taking advantage of the fact that for acyclic logic programs the function T_P is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space R with Euclidean distance, a real-valued function f_P can be defined which corresponds to T_P and is continuous as well as a contraction. Consequently, it can be approximated by an appropriately chosen class of feedforward neural networks.
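The fixed-point argument in the abstract is an instance of Banach's contraction theorem; in the notation above (a distance d on interpretations and a contraction factor q):

```latex
d\big(T_P(I),\,T_P(J)\big) \;\le\; q\, d(I,J), \qquad 0 \le q < 1
\;\Longrightarrow\;
\exists!\, I^{*}:\; T_P(I^{*}) = I^{*}, \qquad
I^{*} = \lim_{n\to\infty} T_P^{\,n}(I_0) \ \text{for any } I_0
```

Since the embedding carries T_P to a continuous contraction f_P on R, f_P lies in the class of functions that feedforward networks approximate arbitrarily well, and iterating the recurrent extension of such an approximation stays close to the iterates of T_P, hence close to the fixed point.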
16.
Adaptive Control of a Class of Nonlinear Model-Free Systems Based on Recurrent Neural Networks
An adaptive control scheme for nonlinear model-free systems based on recurrent neural networks is presented. The scheme is flexible and simple, and can handle nonlinear plants that conventional methods and existing adaptive control methods for nonlinear model-free systems either cannot control or control only poorly. Theoretical analysis and simulation results demonstrate the advantages of this approach.
17.
This work focuses on the ability of evolutionary recurrent neural networks to model time-series and correlated data. On two benchmark problems, using different forms of modelling data, the modelling and prediction performance of feedforward and recurrent neural networks is compared; evolutionary algorithms are then applied to the training of recurrent networks with different structures, and their modelling abilities are compared. Simulation results show that recurrent neural networks model and predict temporally correlated data well; compared with feedforward networks, they require no prior knowledge of the temporal characteristics of the process and can use the simplest form of modelling data. Compared with conventional gradient-descent algorithms, evolutionary algorithms are more generally applicable to training different recurrent network structures, do not suffer from the local-minimum problem during training, and a training process of moderate size can yield neural network models with good performance.
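The training approach, evolving recurrent-network weights without gradients, can be sketched as a simple (mu+lambda) strategy; the toy architecture, mutation scale, and data below are placeholders rather than the paper's setup:

```python
import numpy as np

# Sketch of gradient-free training: evolve the flattened weight vector
# of a small recurrent net by mutation and selection, with one-step
# prediction error as fitness.

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200))            # toy sequence
H = 4; n_w = H + H * H + H                          # in->hid, hid->hid, hid->out

def mse(w):
    Wx, Wh, wo = w[:H], w[H:H + H * H].reshape(H, H), w[-H:]
    h, err = np.zeros(H), 0.0
    for t in range(len(series) - 1):
        h = np.tanh(Wx * series[t] + Wh @ h)        # recurrent state update
        err += (wo @ h - series[t + 1]) ** 2
    return err / (len(series) - 1)

pop = [rng.normal(scale=0.3, size=n_w) for _ in range(20)]
for gen in range(30):                               # (mu + lambda) selection
    pop += [p + rng.normal(scale=0.05, size=n_w) for p in pop]
    pop = sorted(pop, key=mse)[:20]
print(mse(pop[0]))                                  # best model's error
```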
18.
An Intelligent Controller Based on a Class of Dynamic Recurrent Neural Networks
An adaptive control method based on an improved dynamic recurrent neural network is proposed. The learning algorithm of the dynamic recurrent network is studied, its convergence is analysed, and the range of effective learning rates that guarantees convergence is derived; on this basis, a fuzzy-inference adaptive learning-rate method is proposed. Computer simulations show that the proposed control method is effective for unknown, nonlinear plants.
19.
L. Hontoria, J. Aguilera, J. Riesco, P. Zufiria. Journal of Intelligent and Robotic Systems, 2001, 31(1-3): 201-221
In this paper, a neural network method for generating synthetic solar radiation series is proposed and evaluated. In solar energy application fields such as photovoltaic systems and solar heating systems, the need for long sequences of solar irradiation data is fundamental. Nevertheless, those series are frequently unavailable: in many locations the records are incomplete or difficult to manage, whereas in other places there are no records at all. Hence, many authors have proposed different methods to generate synthetic series of irradiation that try to preserve some statistical properties of the recorded ones. The neural procedure shown here represents a simple alternative way to address this problem. A comparative study of the neural-based synthetic series and series generated by other methods has been carried out with the objective of demonstrating the universality and generalisation capabilities of this new approach. The results show the good performance of this irradiation series generation method.
20.
Classification of Multivariate Time Series and Structured Data Using Constructive Induction
We present a method of constructive induction aimed at learning tasks involving multivariate time series data. Using metafeatures, the scope of attribute-value learning is expanded to domains with instances that have some kind of recurring substructure, such as strokes in handwriting recognition, or local maxima in time series data. The types of substructures are defined by the user, but they are extracted automatically and used to construct attributes. Metafeatures are applied to two real domains: sign language recognition and ECG classification. Using metafeatures we are able to generate classifiers that are either comprehensible or accurate, producing results comparable to those of hand-crafted preprocessing and to human experts.
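The metafeature idea can be sketched concretely; here local maxima of a series play the role of the user-defined substructure, and the summary statistics turned into attributes are illustrative choices, not the paper's:

```python
import numpy as np

# Sketch of the metafeature idea: user-defined substructures (here,
# local maxima of a series) are extracted automatically and summarised
# into ordinary attribute-value features for a standard learner.

def local_maxima(x):
    """Return (index, value) pairs of interior local maxima."""
    return [(i, x[i]) for i in range(1, len(x) - 1)
            if x[i - 1] < x[i] >= x[i + 1]]

def attributes(x):
    peaks = local_maxima(x)
    if not peaks:
        return [0, 0.0, 0.0]
    times, heights = zip(*peaks)
    return [len(peaks),                 # how many events
            float(np.mean(heights)),    # typical event magnitude
            float(np.mean(np.diff(times))) if len(times) > 1 else 0.0]

x = np.sin(np.linspace(0, 12, 120)) + 0.1 * np.random.default_rng(0).normal(size=120)
print(attributes(x))                    # fixed-length features for a classifier
```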