Similar Documents
20 similar documents found.
1.
Forecasting foreign exchange rates is an uphill task, and numerous methods have been tried over the years to build an efficient and reliable forecasting network. This study uses recurrent neural networks (RNNs) to forecast foreign currency exchange rates, with Cartesian genetic programming (CGP) evolving the artificial neural network (ANN) that serves as the prediction model. RNNs evolved through CGP have shown great promise in time series forecasting. The approach trains on the trends present in historical data. Thirteen currencies, along with the trade-weighted index (TWI) and special drawing rights (SDR), are used to compare recurrent Cartesian genetic programming-based artificial neural networks (RCGPANN) against various previously proposed prediction models. The experimental results show that RCGPANN yields a prediction model that is both accurate and computationally efficient: 98.872% accuracy (using only 6 neurons) for one-day-ahead prediction and, on average, 92% for predicting exchange rates 1000 days ahead from ten days of data history. The results suggest RCGPANN is a compelling choice for time series prediction, and its capabilities can be explored in a range of other fields.
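As a concrete illustration of the one-step-ahead setup described above (ten days of history in, next day's rate out), here is a minimal Elman-style recurrent forward pass in plain Python. The weights are random stand-ins, not an evolved RCGPANN; only the six-neuron hidden layer and the ten-day window follow the abstract.

```python
import math, random

def elman_forecast(window, Wx, Wh, Wo, bh, bo):
    """One-step-ahead forecast: run an Elman RNN over a window of
    past rates and read the prediction from a linear output neuron."""
    n_hidden = len(bh)
    h = [0.0] * n_hidden                      # initial hidden state
    for x in window:                          # one rate per time step
        h = [math.tanh(Wx[j] * x
                       + sum(Wh[j][k] * h[k] for k in range(n_hidden))
                       + bh[j])
             for j in range(n_hidden)]
    return sum(Wo[j] * h[j] for j in range(n_hidden)) + bo

random.seed(0)
H = 6                                         # 6 hidden neurons, as in the abstract
Wx = [random.uniform(-1, 1) for _ in range(H)]
Wh = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(H)]
Wo = [random.uniform(-1, 1) for _ in range(H)]
bh = [0.0] * H
window = [1.32, 1.31, 1.33, 1.34, 1.33, 1.35, 1.36, 1.35, 1.37, 1.38]  # 10 days
print(elman_forecast(window, Wx, Wh, Wo, bh, 0.0))
```

In practice the weights would be evolved or trained against historical data rather than drawn at random.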

2.
Neural networks do not readily provide an explanation of the knowledge stored in their weights, and until recently they were treated as black boxes. Research since then has produced a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses symbolic knowledge extraction from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). Methods to date have relied on the hypothesis that network states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the space of hidden recurrent state neurons. These limitations, while necessary, may decrease fidelity: the extracted knowledge may not model the true behavior of the trained network, perhaps not even on the training set. The method proposed here uses a polynomial-time symbolic learning algorithm to infer DFAs solely from observation of a trained network's input-output behavior, and thus has the potential to increase the fidelity of the extracted knowledge.
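The extraction idea (inferring a DFA purely from input-output behavior, with no look inside the network) can be sketched with a toy Nerode-style construction. The `oracle` below, an even-parity checker, stands in for the trained network, and the fixed distinguishing suffix set is an assumption of this sketch, not the paper's polynomial-time algorithm.

```python
def oracle(s):
    """Stand-in for the trained network's input-output behavior:
    accepts strings with an even number of '1's (a 2-state language)."""
    return s.count('1') % 2 == 0

def infer_dfa(oracle, alphabet, suffixes, max_depth=6):
    """Group prefixes by their acceptance signature over a fixed
    suffix set (a crude Nerode-congruence approximation)."""
    sig = lambda p: tuple(oracle(p + e) for e in suffixes)
    states = {sig(''): 0}                 # signature -> state id
    trans, accept = {}, {0: oracle('')}
    frontier = ['']
    while frontier:
        p = frontier.pop(0)
        if len(p) > max_depth:
            continue
        for a in alphabet:
            q = p + a
            s = sig(q)
            if s not in states:           # new residual language found
                states[s] = len(states)
                accept[states[s]] = oracle(q)
                frontier.append(q)
            trans[(states[sig(p)], a)] = states[s]
    return states, trans, accept

states, trans, accept = infer_dfa(oracle, '01', ['', '1'])

def dfa_accepts(s):
    q = 0
    for a in s:
        q = trans[(q, a)]
    return accept[q]
```

On this oracle the construction recovers the two-state parity automaton exactly.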

3.
Describes a novel neural architecture for learning deterministic context-free grammars, or equivalently, deterministic pushdown automata. The unique feature of the proposed network is that it forms stable state representations during learning; previous work has shown that conventional analog recurrent networks can be inherently unstable, in that they cannot retain their state memory over long input strings. The authors previously introduced a discrete recurrent network architecture for learning finite-state automata; here they extend this model with a discrete external stack holding discrete symbols. A composite error function handles the different situations encountered during learning, and the pseudo-gradient learning method (introduced in previous work) is extended in turn to minimize these error functions. Empirical trials validate the effectiveness of the pseudo-gradient learning method for networks both with and without an external stack. Experimental results show that the new networks succeed in learning some simple pushdown automata, though overfitting and non-convergent learning can also occur. Once learned, the internal representation of the network is provably stable, i.e., it classifies unseen strings of arbitrary length with 100% accuracy.
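The target behavior of such a stack-augmented network can be illustrated with a hand-coded deterministic pushdown recognizer for the classic language a^n b^n; this is the automaton the network would learn, not the network itself.

```python
def recognize_anbn(s):
    """Deterministic pushdown recognizer for { a^n b^n : n >= 1 }:
    push on 'a', pop on 'b', accept iff the stack empties exactly
    at the end and at least one 'b' was read."""
    stack = []
    phase = 'a'                   # finite-state part: reading a's, then b's
    for c in s:
        if c == 'a':
            if phase != 'a':
                return False      # an 'a' after a 'b': reject
            stack.append('A')
        elif c == 'b':
            if not stack:
                return False      # more b's than a's
            phase = 'b'
            stack.pop()
        else:
            return False          # symbol outside the alphabet
    return phase == 'b' and not stack
```

The finite control (`phase`) plus the external stack mirror the discrete-state, discrete-stack split in the architecture above.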

4.
Classical statistical prediction techniques reach their limits when the data set contains nonlinearities; neural models can counteract these limitations. In this paper, we present a recurrent neural model that associates an adaptive time constant with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate prediction quality, we compute the power spectra of the two signals and the associated fractional error. Results show that introducing adaptive time constants for each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks.
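The core mechanism, a per-neuron time constant, can be sketched as a leaky-integrator update (Euler-discretized continuous-time dynamics). In the paper the time constants are learned; here two are fixed by hand to show their effect.

```python
import math

def leaky_step(h, x_in, tau, dt=0.1):
    """Euler step of  dh/dt = (-h + tanh(x_in)) / tau.
    A large tau gives slow, smooth dynamics; a small tau tracks the
    input quickly. Making tau adaptive lets learning tune this speed."""
    return h + (dt / tau) * (-h + math.tanh(x_in))

# Two neurons driven by the same constant input, differing only in tau:
h_fast, h_slow = 0.0, 0.0
for _ in range(100):
    h_fast = leaky_step(h_fast, 1.0, tau=0.5)
    h_slow = leaky_step(h_slow, 1.0, tau=10.0)
print(h_fast, h_slow)   # the fast neuron is much closer to tanh(1.0)
```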

5.
Deep learning models are widely used in multimedia signal processing, where the introduction of nonlinearity greatly improves performance; their black-box structure, however, does not admit closed-form optima or optimality conditions. How to use classical signal processing theory to approximate deep learning models with transform/basis-mapping models and analyze the optimization problem has therefore become a research frontier. Starting from the fundamentals of signal processing, this paper analyzes the mathematical models and theoretical bounds of current approaches to high-dimensional, nonlinear, irregularly structured data, covering structured sparse representation models, frame-theoretic deep network models, multi-layer convolutional sparse coding, and graph signal processing theory. It describes in detail representation models and optimization methods based on group sparsity and hierarchical sparsity, analyzes deep/multi-layer network models built on semi-discrete frames and convolutional sparse coding, extends these to non-Euclidean domains as graph signal processing models, and compares domestic and international progress on memory networks. Finally, it surveys the outlook for theoretical models of multimedia signal processing, arguing that graph signal processing, by analyzing the mathematical properties of spectral graph models and explaining the correlations within them, provides a theoretical foundation for a general large-scale model of irregular multimedia signals and is one of the key directions for future research.

6.
Fuzzy neural systems have attracted great interest in the last few years, owing to their ability to mediate the exchange of information between symbolic and subsymbolic domains. The models in the literature, however, cannot deal with the structured organization of information that symbolic processing typically requires. In many application domains, patterns are not only structured but carry a fuzziness degree attached to each subsymbolic pattern primitive. The purpose of this paper is to show how recursive neural networks, properly conceived for dealing with structured information, can represent nondeterministic fuzzy frontier-to-root tree automata. Available prior knowledge, expressed as fuzzy state transition rules, is injected into a recursive network, while unknown rules are filled in by data-driven learning. We also prove the stability of the encoding algorithm, extending previous results on the injection of fuzzy finite-state dynamics into high-order recurrent networks.

7.
This paper addresses active noise control using recurrent neural networks. A new learning algorithm for recurrent neural networks, based on the adjoint extended Kalman filter, is developed for active noise control. The overall control structure uses two recurrent neural networks: the first models the secondary path of the active noise control system, while the second generates the control signal. A real-time experiment with the proposed algorithm on a digital signal processor demonstrates the effectiveness of the method.

8.

This article proposes the use of recurrent neural networks to forecast foreign exchange rates. Artificial neural networks have proven efficient and profitable in forecasting financial time series. In particular, recurrent networks, in which activity patterns pass through the network more than once before generating an output pattern, can learn extremely complex temporal sequences. Three recurrent architectures are compared in terms of prediction accuracy for Deutsche mark currency futures. A trading strategy is then devised and optimized, and its profitability, taking transaction costs into account, is shown for the different architectures. The methods described here, which have obtained promising results in real-time trading, are applicable to other markets.

9.
The performance of neural networks for which weights and signals are modeled by shot-noise processes is considered. Examples of such networks are optical neural networks and biological systems. We develop a theory that facilitates the computation of the average probability of error in binary-input/binary-output multistage and recurrent networks. We express the probability of error in terms of two key parameters: the computing-noise parameter and the weight-recording-noise parameter. The former is the average number of particles per clock cycle per signal and it represents noise due to the particle nature of the signal. The latter represents noise in the weight-recording process and is the average number of particles per weight. For a fixed computing-noise parameter, the probability of error decreases with the increase in the recording-noise parameter and saturates at a level limited by the computing-noise parameter. A similar behavior is observed when the role of the two parameters is interchanged. As both parameters increase, the probability of error decreases to zero exponentially fast at a rate that is determined using large deviations. We show that the performance can be optimized by a selective choice of the nonlinearity threshold levels. For recurrent networks, as the number of iterations increases, the probability of error increases initially and then saturates at a level determined by the stationary distribution of a Markov chain.
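The role of the computing-noise parameter (mean particles per signal per clock cycle) can be illustrated with a Monte Carlo sketch of one thresholded binary line. The leakage rate for the '0' bit and the threshold placement are assumptions of this toy, not the paper's model.

```python
import math, random

def poisson(lam, rng):
    """Knuth's algorithm: sample a Poisson-distributed particle count."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def error_rate(lam, trials=20000, seed=1):
    """Bit 1 -> mean lam particles; bit 0 -> mean lam/4 (assumed leakage).
    Threshold halfway between the two means; count decision errors."""
    rng = random.Random(seed)
    thresh = lam * (1 + 0.25) / 2
    errors = 0
    for _ in range(trials):
        errors += poisson(lam, rng) <= thresh        # miss on a '1'
        errors += poisson(lam / 4, rng) > thresh     # false alarm on a '0'
    return errors / (2 * trials)

print(error_rate(5), error_rate(50))  # error falls as particles/signal grow
```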

10.
This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), one used for noise filtering and the other for recognition. The SRNFN is built from recurrent fuzzy if-then rules with fuzzy singletons in the consequents, whose recurrent properties make them suitable for processing speech patterns with temporal characteristics. For n-word recognition, n SRNFNs are created to model the n words; each SRNFN receives the current frame feature and predicts the next frame of the word it models, and its prediction error serves as the recognition criterion. For filtering, a single SRNFN is created, and each SRNFN recognizer is connected to this same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the recognizer. Experiments on Mandarin word recognition under different types of noise are performed, and other recognizers, including the multilayer perceptron (MLP), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are tested for comparison. These experiments and comparisons demonstrate good results with HSRNFN on noisy speech recognition tasks.
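The recognition criterion described (one predictor per word, smallest accumulated prediction error wins) can be sketched with hypothetical one-dimensional feature predictors:

```python
def recognize(frames, predictors):
    """Each word model predicts the next frame from the current one;
    the word whose predictor accumulates the least squared error wins."""
    def accumulated_error(predict):
        return sum((predict(frames[t]) - frames[t + 1]) ** 2
                   for t in range(len(frames) - 1))
    return min(predictors, key=lambda w: accumulated_error(predictors[w]))

# Hypothetical one-dimensional "feature" predictors for two words:
predictors = {
    "rising":  lambda x: x + 1.0,   # a word whose features drift upward
    "falling": lambda x: x - 1.0,   # a word whose features drift downward
}
print(recognize([0.0, 1.1, 1.9, 3.0], predictors))
```

In the paper each predictor is a trained SRNFN over real frame features; the lambdas above only stand in for them.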

11.
To obtain a high-precision reference signal for active noise control (ANC) of vehicle interior noise, a reconstruction method based on the wavelet transform and a BP neural network is proposed. Using noise signals collected in a passenger car, transfer path analysis (TPA) identifies the key source signals that dominate interior noise. Given the complexity of the nonlinear relationship between the noise sources and the interior signal, a BP neural network reconstruction model is built, with wavelet decomposition used to reduce the nonstationarity of the noise signals. A plain BP neural network reconstruction model is also built for comparison. Results show that the mean absolute error between the reconstructed and measured values is smaller for the proposed algorithm than for the plain BP network, and the mean absolute errors of the wavelet-plus-BP reconstruction model are all below 0.01. The method reconstructs vehicle interior noise signals accurately and effectively.

12.
A general framework for adaptive processing of data structures
A structured organization of information is typically required by symbolic processing. On the other hand, most connectionist models assume that data are organized in relatively poor structures, such as arrays or sequences. The framework described in this paper attempts to unify adaptive models such as artificial neural nets and belief nets for the problem of processing structured information. In particular, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. The framework can be regarded as an extension of both recurrent neural networks and hidden Markov models to the case of acyclic graphs. In particular, we study supervised learning as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduce a graphical formalism for representing this class of adaptive transductions by means of recursive networks, i.e., cyclic graphs whose nodes are labeled by variables and whose edges are labeled by generalized delay elements. This representation makes it possible to incorporate both the symbolic and the subsymbolic nature of data. Structures are processed by unfolding the recursive network into an acyclic graph called the encoding network, so that inference and learning algorithms can be inherited directly from the corresponding algorithms for artificial neural networks or probabilistic graphical models.
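The unfolding idea can be sketched on a tiny DAG: each node's state is computed from its label and its children's states, children first, with shared substructures encoded once. The combiner function below is a stand-in for a learned recursive cell, not the paper's model.

```python
import math

def encode_dag(labels, children):
    """Unfold a recursive network over a DAG: encode all children of a
    node first, then combine the node's label with its child states.
    The combiner (tanh of label + mean of child states) is a stand-in."""
    state = {}
    def encode(v):
        if v in state:                      # shared substructures reused
            return state[v]
        kids = [encode(c) for c in children.get(v, [])]
        pooled = sum(kids) / len(kids) if kids else 0.0
        state[v] = math.tanh(labels[v] + pooled)
        return state[v]
    for v in labels:
        encode(v)
    return state

# A small DAG with a shared child: root -> {a, b}, a -> {c}, b -> {c}
labels = {"root": 0.5, "a": 1.0, "b": -1.0, "c": 0.2}
children = {"root": ["a", "b"], "a": ["c"], "b": ["c"]}
states = encode_dag(labels, children)
```

The memoized `state` dict plays the role of the encoding network: each distinct substructure is processed exactly once.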

13.
Rule revision with recurrent neural networks
Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata, and algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that such networks are able to perform rule revision: the inserted rules are compared with the rules in the finite-state automata extracted from trained networks. Results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that the networks not only preserve correct rules but are also able to correct, through training, inserted rules that were initially incorrect (i.e. rules that were not in the randomly generated grammar).
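The rule-insertion step resembles the known construction that programs DFA transitions into the second-order weights of a recurrent network using large saturating values. The sketch below is that style of encoding, with a hypothetical weight strength H, not the paper's exact procedure: it inserts the two-state even-parity DFA and checks that the network mimics it.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def run_second_order(string, W, n_states, H=8.0):
    """Second-order RNN: s'_j = sigmoid(H * sum_ik W[j][i][k] s_i x_k).
    Setting W[j][i][k] = +1 when the DFA rule delta(i, k) = j holds
    (and -1 otherwise) makes large H drive the state vector to a
    near-one-hot encoding of the current DFA state."""
    s = [1.0] + [0.0] * (n_states - 1)        # start in DFA state 0
    for ch in string:
        k = int(ch)                           # input symbol as one-hot index
        s = [sigmoid(H * sum(W[j][i][k] * s[i] for i in range(n_states)))
             for j in range(n_states)]
    return s

# Even-parity DFA: state 0 accepting; '1' toggles the state, '0' keeps it.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
n = 2
W = [[[1.0 if delta[(i, k)] == j else -1.0 for k in range(2)]
      for i in range(n)] for j in range(n)]

def accepts(string):
    return run_second_order(string, W, n)[0] > 0.5   # is the state-0 unit high?
```

In the papers this construction builds on, training then revises these programmed weights, and extraction checks which inserted rules survived.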

14.
A new architecture and a statistical model for a pulse-mode digital multilayer neural network (DMNN) are presented. Algebraic neural operations are replaced by stochastic processes using pseudo-random pulse sequences: synaptic weights and neuron states are represented as probabilities and estimated as average rates of pulse occurrences in the corresponding pulse sequences. A statistical model of error (or noise) is developed to estimate the relative accuracy of this stochastic computing in terms of mean and variance. The stochastic computing technique is implemented with simple logic gates as the basic computing elements, leading to high neuron density on a chip. Moreover, the use of simple logic gates for neural operations, the pulse-mode signal representation, and the modular design techniques yield a massively parallel yet compact and flexible network architecture well suited for VLSI implementation. A feedforward network of any size can be configured, with processing speed independent of network size. Multilayer feedforward networks are modeled and applied to pattern classification problems such as encoding and character recognition.
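The stochastic-computing trick the architecture relies on can be shown in a few lines: values in [0, 1] become pulse probabilities, and a single AND gate per clock cycle multiplies them.

```python
import random

def to_stream(p, n_bits, rng):
    """Encode probability p as a pseudo-random pulse sequence:
    each clock cycle carries a pulse with probability p."""
    return [rng.random() < p for _ in range(n_bits)]

def stochastic_multiply(p, q, n_bits=10000, seed=42):
    """Bitwise AND of two independent streams has pulse rate p*q,
    so multiplication costs one logic gate per clock cycle."""
    rng = random.Random(seed)
    a = to_stream(p, n_bits, rng)
    b = to_stream(q, n_bits, rng)
    pulses = sum(x and y for x, y in zip(a, b))
    return pulses / n_bits        # estimate of p*q from the pulse rate

est = stochastic_multiply(0.8, 0.5)   # close to 0.4, within sampling noise
```

The estimate's variance shrinks with stream length, which is the mean/variance trade-off the paper's statistical error model quantifies.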

15.
The currency market is one of the most efficient markets, which makes future prices very difficult to predict. Several studies have sought more accurate models of the future exchange rate by analyzing econometric models, developing artificial intelligence models, or combining both in hybrid models. This paper proposes a hybrid model for forecasting the variations of five exchange rates against the US Dollar: the Euro, British Pound, Japanese Yen, Swiss Franc and Canadian Dollar. The proposed model uses Independent Component Analysis (ICA) to deconstruct each series into independent components and neural networks (NN) to predict each component. This differentiates the study from previous work in which ICA was used to remove noise from time series or to obtain explanatory variables that were then used in forecasting. The proposed model is compared to random walk, autoregressive and conditional variance models, neural networks, recurrent neural networks and long short-term memory neural networks. The hypothesis is that first deconstructing the exchange rate series and then predicting each component separately produces better forecasts than traditional models. Using the mean squared error and mean absolute percentage error as performance measures, and Model Confidence Sets to statistically test the superiority of the proposed model, the results show that it outperformed the other models examined and significantly improved forecast accuracy. These findings support the model's use in future research and in investment-related decision-making.

16.
Using classical signal processing and filtering techniques for music note recognition faces various kinds of difficulties. This paper proposes a new scheme based on neural networks for music note recognition. The proposed scheme uses three types of neural networks: time delay neural networks, self-organizing maps, and linear vector quantization. Experimental results demonstrate that the proposed scheme achieves 100% recognition rate in moderate noise environments. The basic design of two potential applications of the proposed scheme is briefly demonstrated.
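To make the task concrete, note recognition can be posed with a classical single-frequency detector (the Goertzel algorithm) over a small set of candidate note frequencies. This is the kind of signal-processing baseline the paper argues breaks down in difficult conditions, not the paper's neural scheme.

```python
import math

def goertzel_power(samples, fs, f):
    """Power of the input at frequency f (Goertzel algorithm)."""
    w = 2 * math.pi * f / fs
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

NOTES = {"C4": 261.63, "E4": 329.63, "G4": 392.00, "A4": 440.00}

def recognize_note(samples, fs):
    """Pick the candidate note with the most energy at its frequency."""
    return max(NOTES, key=lambda n: goertzel_power(samples, fs, NOTES[n]))

fs = 8000
tone = [math.sin(2 * math.pi * 440.0 * t / fs) for t in range(2000)]
print(recognize_note(tone, fs))   # a clean 440 Hz tone is identified as A4
```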

17.
Objective: Facial beauty prediction studies how to give computers a human-like ability to judge or predict facial attractiveness. Deep neural networks trained for this task, however, tend to overfit samples with noisy labels, which harms generalization, so this paper proposes a self-correcting noisy-label method for facial beauty prediction. Method: The method comprises a self-training teacher-model mechanism and a relabel-and-retrain mechanism. In the first, a teacher model obtained by self-training helps the student model select and train on clean samples until the student's generalization surpasses the teacher's and it becomes the new teacher, with this process repeated continually. In the second, noisy labels are corrected by comparing the maximum predicted probability with the probability predicted for the current label, and the self-training teacher mechanism is then re-run on the corrected data. Results: Experiments on the large-scale facial beauty database LSFBD and on the SCUT-FBP5500 database show that the method reduces the negative impact of synthetically injected noisy labels, while achieving accuracies of 60.8% on the original LSFBD database and 75.5% on SCUT-FBP5500, higher than conventional methods. Conclusion: Under synthetic noisy labels on the LSFBD and SCUT-FBP5500 databases, as well as on the original LSFBD and SCUT-F...
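The relabeling rule (compare the maximum predicted probability against the probability assigned to the current label) can be sketched as follows; the confidence margin is an assumed hyperparameter, not taken from the paper.

```python
def relabel(probs, label, margin=0.4):
    """probs: predicted class probabilities; label: current (possibly
    noisy) class index. Switch to the argmax class only when the model
    contradicts the label by a confident margin (threshold assumed)."""
    top = max(range(len(probs)), key=lambda c: probs[c])
    if top != label and probs[top] - probs[label] >= margin:
        return top          # corrected label
    return label            # keep the original label

print(relabel([0.05, 0.85, 0.10], label=0))   # confidently contradicted
print(relabel([0.40, 0.45, 0.15], label=0))   # ambiguous: keep as-is
```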

18.
Conventional recurrent neural networks (RNNs) have difficulty learning long-term dependencies. To tackle this problem, we propose an architecture called the segmented-memory recurrent neural network (SMRNN). A symbolic sequence is broken into segments and presented to the SMRNN one symbol per cycle. The SMRNN uses separate internal states to store symbol-level and segment-level context: the symbol-level context is updated for each input symbol, while the segment-level context is updated after each segment. The SMRNN is trained with an extended real-time recurrent learning algorithm. We test the performance of SMRNN on the information latching problem, the "two-sequence problem" and the problem of protein secondary structure (PSS) prediction. Our results indicate that SMRNN performs better on long-term dependency problems than conventional RNNs. We also theoretically analyze how the segmented memory of SMRNN helps in learning long-term temporal dependencies and study the impact of the segment length.
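The two-level update schedule can be sketched on a numeric sequence. The cell below is an untrained stand-in, and resetting the symbol-level state at each segment boundary is one plausible reading of the scheme, not a claim about the paper's exact equations.

```python
import math

def smrnn_states(sequence, segment_len, w_sym=0.5, w_seg=0.5):
    """Toy segmented-memory recurrence: the symbol state absorbs every
    input; the segment state is updated only once per segment, so
    context bridges a long sequence in far fewer recurrent steps."""
    sym, seg = 0.0, 0.0
    for t, x in enumerate(sequence, start=1):
        sym = math.tanh(w_sym * sym + x)          # every symbol
        if t % segment_len == 0:
            seg = math.tanh(w_seg * seg + sym)    # once per segment
            sym = 0.0                             # fresh symbol context (assumed)
    return sym, seg
```

For a sequence of length N and segment length d, the segment state is touched only N/d times, which is the mechanism the paper analyzes for easing long-term dependency learning.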

19.
To keep equipment running normally and accurately predict the remaining useful life of rolling bearings, a life-prediction model combining a two-dimensional convolutional neural network with an improved WaveNet is proposed. To overcome the vanishing-gradient problem that unoptimized recurrent networks suffer during training, the model adopts the WaveNet temporal network structure. Since the original WaveNet structure is not suited to rolling-bearing vibration data, it is modified and combined with a two-dimensional CNN for bearing life prediction: the 2-D convolutional network extracts features from the one-dimensional vibration sequence, which are then fed to the WaveNet to predict the bearing's remaining life. The improved model is more computationally efficient and more accurate than deep recurrent networks, and more accurate than the original CNN-WaveNet-O model. Compared with a deep long short-term memory network, the improved method reduces the root-mean-square error of predictions by 11.04% and the scoring function by 11.34%.
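The WaveNet ingredient referenced here is the dilated causal convolution, whose receptive field grows geometrically with stacked layers. A plain-Python sketch with a fixed averaging kernel (not the paper's trained model):

```python
def dilated_causal_conv(x, kernel, dilation):
    """y[t] = sum_k kernel[k] * x[t - k*dilation], zero-padded on the
    left so the output is causal and the same length as the input."""
    K = len(kernel)
    return [sum(kernel[k] * (x[t - k * dilation] if t - k * dilation >= 0 else 0.0)
                for k in range(K))
            for t in range(len(x))]

# Stacking dilations 1, 2, 4 grows the receptive field to
# 1 + (K - 1) * (1 + 2 + 4) = 8 samples for a kernel of size K = 2.
x = [0.0] * 7 + [1.0]          # unit impulse at t = 7
h = dilated_causal_conv(x, [0.5, 0.5], 1)
h = dilated_causal_conv(h, [0.5, 0.5], 2)
y = dilated_causal_conv(h, [0.5, 0.5], 4)
```

Tracing the impulse shows the last output sample depends on all eight inputs, while earlier samples see only the zeros before the impulse: causality is preserved.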

20.
Features represent patterns with minimal loss of important information. The feature vector, composed of all the features used to describe a pattern, is a reduced-dimensional representation of that pattern. Medical diagnostic accuracy can improve when the pattern is simplified through representation by important features: by identifying a set of salient features, the noise in a classification model can be reduced, yielding more accurate classification. In this study, a signal-to-noise ratio saliency measure was employed to determine the saliency of input features of recurrent neural networks (RNNs) used to classify ophthalmic arterial Doppler signals. Eigenvector methods were used to extract features representing the signals, and the RNNs were trained for the signal-to-noise ratio screening method. Applying the screening method to the ophthalmic arterial Doppler signals demonstrated that the classification accuracy of RNNs restricted to salient input features is higher than that of RNNs using both salient and non-salient input features.
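The signal-to-noise ratio saliency screening can be sketched as comparing each input's first-layer weight energy to that of an injected pure-noise input, in decibels. The weight matrix below is hypothetical, and this formula is one common form of the measure, stated here as an assumption rather than the paper's exact definition.

```python
import math

def snr_saliency(weights, noise_index):
    """weights[j][i]: first-layer weight from input i to hidden unit j.
    Saliency of input i = 10*log10(sum_j w[j][i]^2 / sum_j w[j][noise]^2),
    comparing each input's weight energy to that of an injected
    pure-noise input; inputs near or below 0 dB are candidates to drop."""
    def energy(i):
        return sum(row[i] ** 2 for row in weights)
    noise_energy = energy(noise_index)
    n_inputs = len(weights[0])
    return [10 * math.log10(energy(i) / noise_energy)
            for i in range(n_inputs) if i != noise_index]

# Hypothetical trained weights: input 0 strong, input 1 weak, input 2 = noise.
W = [[0.9, 0.05, 0.10],
     [-1.1, 0.02, -0.12],
     [0.8, -0.04, 0.09]]
saliencies = snr_saliency(W, noise_index=2)
```

A strongly used input scores well above 0 dB; an input whose weights carry less energy than the noise input's scores below 0 dB and would be screened out.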


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)