Similar Documents (20 found)
1.
A novel nonlinear adaptive filter, the pipelined Chebyshev functional link artificial recurrent neural network (PCFLARNN), is presented in this paper; it is trained with a modified real-time recurrent learning algorithm. The PCFLARNN consists of a number of simple, small-scale Chebyshev functional link artificial recurrent neural network (CFLARNN) modules. Compared with a standard recurrent neural network (RNN), the modules of the PCFLARNN can execute simultaneously in a pipelined, parallel fashion, which leads to a significant improvement in overall computational efficiency. Furthermore, in contrast to the pipelined RNN (PRNN) architecture, each module of the PCFLARNN is a CFLARNN whose nonlinearity is introduced by enhancing the input pattern with a Chebyshev functional expansion, whereas each PRNN module uses only the linear input and a first-order recurrent term and thus fails to exploit the higher-order terms of the inputs. The performance of the PCFLARNN can therefore be further improved at the cost of slightly increased computational complexity. In addition, the nonlinear functional expansion in each module allows the number of input signals to be reduced. Computer simulations demonstrate that the proposed filter outperforms the PRNN and RNN in nonlinear colored signal prediction, nonstationary speech signal prediction, and chaotic time series prediction.
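The functional-link idea above replaces hidden-layer nonlinearity with an explicit expansion of the input. As a minimal sketch (the expansion order and normalization are assumptions, not the paper's exact choices), each input sample can be enhanced with Chebyshev polynomials via the standard recurrence:

```python
import numpy as np

def chebyshev_expand(x, order=3):
    """Enhance an input pattern with Chebyshev polynomials T_1..T_order.

    Illustrative sketch of the functional-link expansion described above;
    the expansion order and scaling are assumptions. Uses the recurrence
    T_0(x) = 1, T_1(x) = x, T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x),
    valid for inputs scaled into [-1, 1].
    """
    x = np.asarray(x, dtype=float)
    terms = [np.ones_like(x), x]                  # T_0, T_1
    for _ in range(2, order + 1):
        terms.append(2 * x * terms[-1] - terms[-2])
    # Enhanced pattern: T_1..T_order of every input element, concatenated
    return np.concatenate(terms[1:])

# A 2-sample input becomes a 6-dimensional enhanced pattern for order 3
print(chebyshev_expand([0.5, -0.2], order=3).shape)   # (6,)
```

Each module's linear combiner then operates on this enhanced pattern instead of the raw input, which is where the extra nonlinear capacity comes from.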

2.
Haiquan, Jiashu. Neurocomputing, 2009, 72(13–15): 3046.
A computationally efficient pipelined functional link artificial recurrent neural network (PFLARNN) is proposed in this paper for nonlinear dynamic system identification, trained with a modified real-time recurrent learning (RTRL) algorithm. In contrast to a feedforward artificial neural network such as the functional link artificial neural network (FLANN), the proposed PFLARNN consists of a number of simple, small-scale functional link artificial recurrent neural network (FLARNN) modules. Since these modules can execute simultaneously in a pipelined, parallel fashion, the total computational efficiency improves significantly. Moreover, the nonlinearity of each module is introduced by enhancing the input pattern with a nonlinear functional expansion, which further improves the filter's performance. Computer simulations demonstrate that, with a proper choice of functional expansion, the PFLARNN outperforms the FLANN and the multilayer perceptron (MLP) in nonlinear dynamic system identification.

3.
This paper presents a computationally efficient nonlinear adaptive filter, the pipelined functional link artificial decision feedback recurrent neural network (PFLADFRNN), for the design of a nonlinear channel equalizer. It aims to reduce the computational burden and improve the nonlinear processing capability of the functional link artificial neural network (FLANN). The proposed equalizer consists of several simple, small-scale functional link artificial decision feedback recurrent neural network (FLADFRNN) modules of low computational complexity. Because it is a nested modular architecture comprising a number of modules interconnected in a chained form, its performance can be further improved. Moreover, the decision feedback recurrent structure overcomes the instability that would otherwise arise from its infinite-impulse-response nature. Finally, the performance of the PFLADFRNN modules is evaluated with a modified real-time recurrent learning algorithm through extensive simulations for different linear and nonlinear channel models in digital communication systems. Comparisons with multilayer perceptron, FLANN, and reduced-decision-feedback FLANN equalizers clearly show its advantages in convergence rate, bit error rate, steady-state error, and computational complexity for nonlinear channel equalization.

4.
This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), one used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. In n-word recognition, n SRNFNs are created to model the n words; each SRNFN receives the current frame feature and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. For filtering, one SRNFN is created, and each SRNFN recognizer is connected to this same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including the multilayer perceptron (MLP), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate that the HSRNFN performs well on noisy speech recognition tasks.
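The prediction-error criterion above can be sketched compactly: each word model forecasts the next frame, and the word whose model accumulates the smallest error wins. The `predictors` mapping and its interface below are illustrative stand-ins for trained SRNFNs, not the paper's API:

```python
import numpy as np

def recognize(frames, predictors):
    """Pick the word whose predictor best forecasts the next frame.

    Sketch of the prediction-error recognition criterion described above.
    `predictors` maps each word to a function predicting frame t+1 from
    frame t (a stand-in for a trained SRNFN); names are hypothetical.
    """
    frames = np.asarray(frames, dtype=float)
    errors = {}
    for word, predict in predictors.items():
        err = 0.0
        for t in range(len(frames) - 1):
            err += float(np.sum((predict(frames[t]) - frames[t + 1]) ** 2))
        errors[word] = err
    return min(errors, key=errors.get)   # smallest accumulated error wins

# Toy example: the identity model fits a constant signal best
models = {"A": lambda f: f, "B": lambda f: f + 1.0}
print(recognize([[1.0], [1.0], [1.0]], models))   # A
```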

5.
Although the extraction of symbolic knowledge from trained feedforward neural networks has been widely studied, research on recurrent neural networks (RNNs) has been comparatively neglected, even though they perform better in areas such as control, speech recognition, and time series prediction. A subject of particular current interest is (crisp/fuzzy) grammatical inference, for which these networks have proven suitable. In this paper, we present a method that uses a self-organizing map (SOM) to extract knowledge from a recurrent neural network able to infer a (crisp/fuzzy) regular language. The language is identified only from a (crisp/fuzzy) example set of the language. © 2000 John Wiley & Sons, Inc.

6.
In time-series processing tasks, recurrent neural network models and their derivatives, such as long short-term memory (LSTM) and gated recurrent units (GRU), perform well. The memory layer of these models retains information from every time step, but it cannot efficiently handle the unequal time intervals and irregular fluctuations found in time-series data from certain domains, such as finance. This paper proposes a new fuzzy-control-based gated recurrent unit (GRU-Fuzzy) to address these prob…

7.
We address the choice of the coefficients in the cost function of a modular nested recurrent neural network (RNN) architecture known as the pipelined recurrent neural network (PRNN). Such a network can cope with the vanishing gradient problem experienced in prediction with RNNs. Constraints on the coefficients of the cost function, in the form of a vector norm, are considered. Unlike the previous cost function for the PRNN, which included a forgetting factor motivated by the recursive least squares (RLS) strategy, the proposed cost functions provide "forgetting" of the outputs of adjacent modules based upon the network architecture. This approach takes the number of modules in the PRNN into account through a unit-norm constraint on the coefficients of the cost function. The constraint is particularly suitable because, due to the inherent nesting in the PRNN, every module contributes fully to the learning process, while the unit-norm constrained cost function introduces a sense of forgetting into the memory management of the PRNN. The PRNN based on the modified cost function outperforms existing PRNN schemes in the time series prediction simulations presented.
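The unit-norm idea above can be sketched numerically. Under stated assumptions (a geometric "forgetting" profile over modules and the Euclidean norm, neither of which is necessarily the paper's exact choice), the per-module cost weights look like:

```python
import numpy as np

def module_weights(num_modules, lam=0.9, p=2):
    """Unit-norm coefficients for a PRNN-style cost function.

    Hedged sketch: start from geometric forgetting weights lam**i over the
    nested modules, then normalize to unit p-norm so the constraint encodes
    the module count. The weight family and norm order are assumptions.
    """
    w = lam ** np.arange(num_modules)        # raw forgetting profile
    return w / np.linalg.norm(w, ord=p)      # enforce ||w||_p = 1

def prnn_cost(module_errors, weights):
    """Weighted sum of squared per-module prediction errors."""
    e = np.asarray(module_errors, dtype=float)
    return float(np.sum(weights * e ** 2))

w = module_weights(4)
print(np.linalg.norm(w))   # ~1.0 by construction
```

The normalization ties the weight magnitude to the number of modules: adding modules redistributes the unit budget rather than inflating the cost.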

8.
End-to-end neural networks can automatically learn the transformation from raw data to features for a given task, resolving the mismatch between hand-designed features and the task. Previous end-to-end networks for speech recognition used a single time-domain convolutional layer as the feature extraction model, with recurrent neural networks and fully connected feedforward deep neural networks as the acoustic model, an approach with limitations in both accuracy and efficiency. To improve the effectiveness of the feature extraction module and the training efficiency of the acoustic model, this paper proposes an end-to-end speech recognition model combining a multi-time-frequency-resolution convolutional network with a feedforward neural network equipped with memory blocks. Experimental results show that, on real recorded datasets, the proposed method reduces the character error rate by 10% and the training time by 80% compared with conventional methods.

9.
An associative neural network whose architecture is strongly influenced by biological data is described. The proposed neural network differs significantly in architecture and connectivity from previous models; its emphasis is on high parallelism and modularity. The network connectivity is enriched by recurrent connections within the modules. Each module is, effectively, a Hopfield net. Connections within a module are plastic and are modified by associative learning; connections between modules are fixed and thus not subject to learning. Although the network is tested on character recognition, it cannot be used directly for real-world applications; it must be incorporated as a module in a more complex structure. The architectural principles of the proposed network model can be used in the design of other modules of a whole system. Its architecture makes it a good mathematical prototype for analyzing the properties of modularity, recurrent connections, and feedback. The model makes no contribution to the subject of learning in neural networks.

10.
Many real-world spatio-temporal data exhibit seasonal characteristics and are especially common in urban computing; traffic flow data, for example, show clear statistical periodicity at daily or weekly scales. Effectively exploiting this seasonality, and capturing the correlation between historical observations and the values to be predicted, is the key to forecasting such spatio-temporal data. Traditional time-series methods decompose a series into several signal components and predict with linear models. These methods rest on strong theory, but they impose overly strict stationarity requirements, struggle with data whose trends are complex, and are unsuitable for high-dimensional spatio-temporal data. In real scenarios, moreover, the period length of seasonal spatio-temporal data is variable and the correspondence between periods is often not fixed: patterns shift and drift in both time and space, so the data can hardly be modeled as an ideal periodic signal by traditional time-series methods. Deep neural networks, by contrast, have stronger modeling capacity and can fit more complex data. In recent years, many works have studied how to process spatio-temporal data with convolutional and recurrent neural networks, and some have discussed how to exploit periodicity to improve prediction accuracy. However, deep networks suffer from vanishing gradients and error accumulation, which make long-range dependencies in time series hard to capture, and few methods address how to model spatio-temporal signals with such elastic period correspondence within deep networks. Targeting these problems in real-world seasonal spatio-temporal data, this paper gives a formal definition of the prediction problem for spatio-temporal data with elastic period correspondence and proposes a new seasonal spatio-temporal prediction model. The model comprises three parts, a seasonal network, a trend network, and a spatio-temporal attention module; it captures the near-term trends in short-range data and the seasonal trends hidden in long-range data, and it broadly accounts for the influence of every spatio-temporal element in historical periods on future predictions. To address the difficulty deep recurrent networks have in capturing long-range temporal dependencies, we propose a new recurrent convolutional memory unit that fuses these modules into a single end-to-end trainable neural network, unifying the modeling of temporal and spatial information on the one hand, and of short-term trend features and historical periodic features on the other. Further, to handle the non-fixed correspondence of spatio-temporal elements across periods, we explore several attention-based fusion schemes for spatio-temporal data and propose a novel cascaded spatio-temporal attention module embedded in the recurrent convolutional memory unit. This module models the elastic spatio-temporal correspondence of the memory unit's hidden states across periods and adaptively selects highly correlated seasonal features to assist prediction. In the experiments, we select two of the most typical urban-computing applications of spatio-temporal prediction: traffic flow prediction and meteorological forecasting. The proposed spatio-temporal periodic recurrent neural network achieves state-of-the-art prediction accuracy on Beijing and New York traffic flow datasets and on a US meteorological dataset.

11.
Abstract: A key problem of modular neural networks is finding the optimal aggregation of the different subtasks (or modules) of the problem at hand. Functional networks provide a partial solution to this problem, since the inter-module topology is obtained from domain knowledge (functional relationships and symmetries). However, the learning process may be too restrictive in some situations, since the resulting modules (functional units) are assumed to be linear combinations of selected families of functions. In this paper, we present a non-parametric learning approach for functional networks using feedforward neural networks for approximating the functional modules of the resulting architecture; we also introduce a genetic algorithm for finding the optimal intra-module topology (the appropriate balance of neurons for the different modules according to the complexity of their respective tasks). Some benchmark examples from nonlinear time-series prediction are used to illustrate the performance of the algorithm for finding optimal modular network architectures for specific problems.

12.
Long short-term memory (LSTM) is a recurrent neural network that can store sequence information over long spans; it has been widely applied in language modeling, speech recognition, machine translation, and other areas. We first study how prior work extends the memory blocks of LSTM to syntax trees, yielding a tree-structured LSTM network that captures and stores the deep semantic structure of a sentence. We then add polarity-shift information between the words of a sentence to the tree-structured LSTM, proposing a polarity-shift tree-structured LSTM that better captures sentiment information for sentence classification. Experiments on the Stanford Sentiment Treebank show that the proposed polarity-shift tree-structured LSTM outperforms LSTM, recursive neural networks, and other models on sentence classification.

13.
The main limitations of adaptive Volterra filters are their computational complexity in practical implementations and their significant performance degradation in impulsive noise environments. In this paper, a low-complexity pipelined robust M-estimate second-order Volterra (PRMSOV) filter is proposed to reduce the computational burden of the Volterra filter and enhance its robustness against impulsive noise. The PRMSOV filter consists of a number of extended second-order Volterra (SOV) modules without feedback input, cascaded in a chained form. To suit this modular architecture, modified normalized least mean M-estimate (NLMM) algorithms are derived to suppress the effect of impulsive noise on the nonlinear and linear combiner subsections, respectively. Since the SOV-NLMM modules in the PRMSOV can operate simultaneously in a pipelined, parallel fashion, they yield a significant improvement in computational efficiency and robustness against impulsive noise. The stability and convergence of the nonlinear and linear combiner subsections are also analyzed under the contaminated Gaussian (CG) noise model. Simulations on nonlinear system identification and speech prediction show that the proposed PRMSOV filter outperforms the conventional SOV filter and the joint-process pipelined SOV (JPPSOV) filter in impulsive noise environments. Its initial convergence, steady-state error, robustness, and computational complexity are also better than those of the SOV and JPPSOV filters.
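The M-estimate idea above down-weights impulsive errors before they enter the adaptive update. A minimal sketch, assuming a simple Huber-style weight and a linear normalized-LMS-type update (the paper's exact modified Huber thresholds and nonlinear combiner are not reproduced here):

```python
import numpy as np

def m_estimate_weight(e, threshold=1.0):
    """Huber-style M-estimate weight: scale back impulsive errors.

    Stand-in for the robust error nonlinearity in the NLMM update
    described above; the threshold choice is an assumption.
    """
    return 1.0 if abs(e) <= threshold else threshold / abs(e)

def nlmm_step(w, x, d, mu=0.5, eps=1e-6, threshold=1.0):
    """One robust normalized LMS-type update of linear weights w."""
    x = np.asarray(x, dtype=float)
    e = d - float(w @ x)
    q = m_estimate_weight(e, threshold)            # impulse suppression
    return w + mu * q * e * x / (eps + x @ x), e

# A huge (impulsive) error contributes only threshold/|e| of its weight
print(m_estimate_weight(100.0))   # 0.01
```

For small errors the update behaves like ordinary NLMS; for outliers, the effective step size shrinks as 1/|e|, which is what limits the damage from impulses.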

14.
High-assurance and complex mission-critical software systems are heavily dependent on the reliability of their underlying software applications. Early software fault prediction is a proven technique for achieving high software reliability. Prediction models based on software metrics can predict the number of faults in software modules, and timely predictions from such models can direct cost-effective quality enhancement efforts to the modules likely to have many faults. We evaluate the predictive performance of six commonly used fault prediction techniques: CART-LS (least squares), CART-LAD (least absolute deviation), S-PLUS, multiple linear regression, artificial neural networks, and case-based reasoning. The case study consists of software metrics collected over four releases of a very large telecommunications system. Two performance metrics, average absolute error and average relative error, are used to gauge the accuracy of the different prediction models. Models were built using both the original software metrics (RAW) and their principal components (PCA). Two-way ANOVA randomized-complete-block design models with two blocking variables are designed with average absolute and average relative errors as response variables; system release and model type (RAW or PCA) form the blocking variables, and the prediction technique is treated as a factor. Using multiple pairwise comparisons, the performance order of the prediction models is determined. We observe that for both average absolute and average relative errors, the CART-LAD model performs best while the S-PLUS model ranks sixth.
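The two performance metrics named above are straightforward to state in code. A minimal sketch, assuming the common +1 guard in the relative-error denominator (a module may have zero actual faults); the study's exact formula may differ:

```python
def average_errors(actual, predicted):
    """Average absolute error (AAE) and average relative error (ARE).

    Sketch of the two accuracy metrics described above. The +1 in the
    ARE denominator is an assumed guard against zero-fault modules,
    not necessarily the study's exact definition.
    """
    n = len(actual)
    aae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    are = sum(abs(a - p) / (a + 1) for a, p in zip(actual, predicted)) / n
    return aae, are

# Three modules: off by 1, off by 1, exact
print(average_errors([2, 0, 4], [1, 1, 4]))
```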

15.
This paper proposes a recurrent self-evolving interval type-2 fuzzy neural network (RSEIT2FNN) for dynamic system processing. The RSEIT2FNN incorporates type-2 fuzzy sets in a recurrent neural fuzzy system to increase the system's noise resistance. The antecedent parts of each recurrent fuzzy rule in the RSEIT2FNN are interval type-2 fuzzy sets, and the consequent part is of the Takagi-Sugeno-Kang (TSK) type with interval weights. The antecedent part forms a local internal feedback loop by feeding each rule's firing strength back to itself, while the TSK-type consequent part is a linear model of the exogenous inputs. The RSEIT2FNN initially contains no rules; all rules are learned online via structure and parameter learning. Structure learning uses online type-2 fuzzy clustering. For parameter learning, the consequent-part parameters are tuned by a rule-ordered Kalman filter algorithm to improve learning performance, while the antecedent type-2 fuzzy sets and internal feedback loop weights are learned by gradient descent. The RSEIT2FNN is applied to simulations of dynamic system identification and chaotic signal prediction under both noise-free and noisy conditions. Comparisons with type-1 recurrent fuzzy neural networks validate the performance of the RSEIT2FNN.

16.
Igor, Matthew W. Neurocomputing, 2008, 71(7–9): 1172–1179.
As potential candidates for explaining human cognition, connectionist models of sentence processing must demonstrate their ability to behave systematically, generalizing from a small training set. It has recently been shown that simple recurrent networks and, to a greater extent, echo-state networks possess some ability to generalize in artificial language learning tasks. We investigate this capacity for a recently introduced model that consists of separately trained modules: a recursive self-organizing module for learning temporal context representations and a feedforward two-layer perceptron module for next-word prediction. We show that the performance of this architecture is comparable with echo-state networks. Taken together, these results weaken the criticism of connectionist approaches, showing that various general recursive connectionist architectures share the potential of behaving systematically.

17.
This paper proposes a new speech detection method based on a recurrent neural fuzzy network for variable noise-level environments. The method uses wavelet energy (WE) and zero-crossing rate (ZCR) as detection parameters. The WE is a new and robust parameter derived via the wavelet transform; it reduces the influence of different types of noise at different levels. With the inclusion of ZCR, speech can be detected from noise robustly and effectively with only two parameters. For the detector design, a singleton-type recurrent fuzzy neural network (SRNFN) is proposed. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and its recurrent property makes it suitable for processing speech patterns with temporal characteristics. The learning ability of the SRNFN avoids the empirically determined threshold required by conventional detection algorithms. Experiments with different types of noise and various signal-to-noise ratios (SNRs) are performed. The results show that the WE- and ZCR-based SRNFN achieves good performance. Comparisons with another robust detection method, the refined time-frequency-based method, and with other detectors further verify the performance of the proposed method.
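Of the two detection parameters above, the zero-crossing rate has a simple standard definition that can be sketched directly; this plain per-frame formulation is a common variant and an assumption about the paper's exact one:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ.

    Sketch of one of the two detection parameters described above
    (the other, wavelet energy, needs a wavelet decomposition and is
    not reproduced here).
    """
    frame = np.asarray(frame, dtype=float)
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

# Alternating samples cross zero at every step
print(zero_crossing_rate([1.0, -1.0, 1.0, -1.0]))   # 1.0
```

Unvoiced speech and many noises tend to show high ZCR while voiced speech shows low ZCR, which is why it pairs usefully with an energy measure.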

18.
We present an approach for selecting optimal parameters for the pipelined recurrent neural network (PRNN) in the paradigm of nonlinear and nonstationary signal prediction. We consider the role of nesting, which is inherent to the PRNN architecture. The number of nested modules needed for a given prediction task, and their contribution toward the final prediction gain, give a thorough insight into how the PRNN performs and offer solutions for optimizing its parameters. In particular, nesting allows the forgetting factor in the PRNN's cost function to exceed unity, so that it becomes an emphasis factor. This compensates for the small contribution of the distant modules to the prediction process, due to nesting, and helps to circumvent the vanishing gradient problem experienced in RNNs for prediction. The PRNN is shown to outperform the linear least mean square and recursive least squares predictors, as well as previously proposed PRNN schemes, with no additional computational complexity.

19.
There has been increased interest in combining fuzzy systems with neural networks, because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures can only process static input-output relationships; they cannot process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike deterministic finite-state automata (DFAs), an FFA is not in one particular state; rather, each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes an FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by correct string recognition of randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strength.
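The graded-state behavior of an FFA described above can be sketched with one state update. A hedged sketch: max-min composition is a standard choice for fuzzy transitions, but it is an assumption about the paper's exact FFA semantics, and the transition table below is purely illustrative:

```python
import numpy as np

def ffa_step(memberships, transition, symbol):
    """One fuzzy-state update of an FFA under max-min composition.

    Each state carries a membership degree in [0, 1]. On reading a
    symbol, the new membership of state j is max over states i of
    min(membership[i], transition[symbol][i][j]).
    """
    m = np.asarray(memberships, dtype=float)
    T = np.asarray(transition[symbol], dtype=float)
    return np.max(np.minimum(m[:, None], T), axis=0)

# Two states, one symbol: state 0 reaches state 1 with degree 0.7
T = {"a": [[0.0, 0.7], [0.0, 0.0]]}
print(ffa_step([1.0, 0.0], T, "a"))
```

This is the behavior the recurrent network must reproduce: instead of a one-hot state vector as in a DFA encoding, the hidden units track these membership degrees.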

20.
Classical statistical techniques for prediction reach their limitations in applications with nonlinearities in the data set; neural models, however, can counteract these limitations. In this paper, we present a recurrent neural model in which an adaptive time constant is associated with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we compute the power spectra of the two signals and the associated fractional error. Results show that introducing adaptive time constants associated with each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks.
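The per-neuron time constant above implies leaky-integrator dynamics. As a minimal sketch, assuming continuous-time dynamics ds/dt = (-s + tanh(net_input)) / tau discretized by a forward Euler step (the discretization and the tanh activation are assumptions, not the paper's stated equations):

```python
import numpy as np

def dynamic_neuron_step(state, net_input, tau, dt=0.1):
    """One Euler step of a neuron with its own time constant tau.

    Sketch of the leaky-integrator dynamics implied above; tau would be
    learned alongside the weights. The Euler discretization and tanh
    activation are assumptions.
    """
    return state + dt * (-state + np.tanh(net_input)) / tau

# A larger tau makes the state evolve more slowly toward its target
s_fast = dynamic_neuron_step(0.0, 2.0, tau=0.5)
s_slow = dynamic_neuron_step(0.0, 2.0, tau=5.0)
print(s_fast > s_slow)   # True
```

Because each tau is adjustable, different units can settle at different speeds, which is the extra dynamical freedom credited with the improved prediction.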

