1.
This paper introduces a methodology for the global optimization of neural networks. The aim is the simultaneous optimization of multilayer perceptron (MLP) weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm to create an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained with four classification problems and one prediction problem were better than those obtained by the most commonly used optimization techniques.
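A minimal sketch of how such a hybrid search could look, assuming a one-hidden-layer MLP whose architecture is represented by binary connection masks. The cost function, cooling schedule, tabu criterion and the final backpropagation fine-tuning pass are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: simulated annealing with a tabu list over MLP weights and
# connectivity masks; a backpropagation fine-tuning step would follow.
import numpy as np

rng = np.random.default_rng(0)

def forward(X, W1, W2, m1, m2):
    """One-hidden-layer MLP; masks zero out pruned connections."""
    H = np.tanh(X @ (W1 * m1))
    return 1.0 / (1.0 + np.exp(-(H @ (W2 * m2))))

def cost(X, y, W1, W2, m1, m2, penalty=0.01):
    """Classification error plus a complexity term on active connections."""
    err = np.mean((forward(X, W1, W2, m1, m2).ravel() > 0.5) != y)
    return err + penalty * (m1.sum() + m2.sum()) / (m1.size + m2.size)

def anneal(X, y, n_hidden=8, T0=1.0, alpha=0.95, iters=2000, tabu_size=50):
    n_in = X.shape[1]
    W1, W2 = rng.normal(0, 0.5, (n_in, n_hidden)), rng.normal(0, 0.5, (n_hidden, 1))
    m1, m2 = np.ones_like(W1), np.ones_like(W2)
    best = (W1, W2, m1, m2)
    best_c = cur_c = cost(X, y, W1, W2, m1, m2)
    tabu, T = [], T0                     # recently visited topologies
    for _ in range(iters):
        nW1 = W1 + rng.normal(0, 0.1, W1.shape)
        nW2 = W2 + rng.normal(0, 0.1, W2.shape)
        nm1, nm2 = m1.copy(), m2.copy()
        i, j = rng.integers(n_in), rng.integers(n_hidden)
        nm1[i, j] = 1 - nm1[i, j]        # toggle one input-to-hidden connection
        key = (nm1.tobytes(), nm2.tobytes())
        if key in tabu:
            continue                     # skip recently visited topologies
        c = cost(X, y, nW1, nW2, nm1, nm2)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / max(T, 1e-9)):
            W1, W2, m1, m2, cur_c = nW1, nW2, nm1, nm2, c
            tabu = (tabu + [key])[-tabu_size:]
            if c < best_c:
                best, best_c = (W1, W2, m1, m2), c
        T *= alpha
    return best

# toy usage: noisy XOR-like classification data
X = rng.random((200, 2))
y = ((X[:, 0] > 0.5) ^ (X[:, 1] > 0.5)).astype(float)
W1, W2, m1, m2 = anneal(X, y)
```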
2.
In artificial neural networks (ANNs), the activation functions most used in practice are the logistic sigmoid and the hyperbolic tangent. The activation functions used in ANNs are said to play an important role in the convergence of the learning algorithms. In this paper, we evaluate the use of different activation functions and suggest three new simple functions, complementary log-log, probit and log-log, as activation functions intended to improve the performance of neural networks. Financial time series were used to evaluate the performance of ANN models using these new activation functions and to compare it with that of activation functions existing in the literature. This evaluation is performed with two learning algorithms: conjugate gradient backpropagation with Fletcher–Reeves updates, and Levenberg–Marquardt.
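The three functions named above have standard definitions as link-function inverses; a short sketch follows, assuming the textbook forms (the abstract does not state any rescaling or offset the authors may have applied).

```python
# Hedged sketch of the three activation functions in their standard forms.
import numpy as np
from scipy.stats import norm

def cloglog(x):
    """Complementary log-log: maps R onto (0, 1), asymmetric around 0."""
    return 1.0 - np.exp(-np.exp(x))

def probit(x):
    """Probit: the standard normal cumulative distribution function."""
    return norm.cdf(x)

def loglog(x):
    """Log-log: the mirror image of the complementary log-log."""
    return np.exp(-np.exp(-x))

x = np.linspace(-3, 3, 7)
print(cloglog(x), probit(x), loglog(x), sep="\n")
```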
3.
Social class differences in the prevalence of Common Mental Disorders (CMDs) are likely to vary according to time, culture and stage of economic development. The present study investigated the optimization of the architecture and weights of an Artificial Neural Network (ANN) for identifying the factors related to CMDs. The optimization of the architecture and weights is based on Particle Swarm Optimization with early stopping criteria. This approach achieved good generalization control and similar or better results than other techniques, but at a lower computational cost, with the ability to generate small networks and with the advantage of automated architecture selection, which simplifies the training process. This paper presents the results obtained in the experiments with ANNs, in which an average correct-classification rate of 90.59% was observed for individuals with a positive CMD diagnosis.
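A minimal sketch of particle swarm optimization of an MLP with a validation-based early-stopping rule, in the spirit of the setup described above. The hidden-layer size is kept fixed here for brevity (the study also searches the architecture), and the swarm coefficients and patience are illustrative assumptions.

```python
# Hedged sketch: PSO over an MLP weight vector with early stopping on a
# validation set; architecture search is omitted for brevity.
import numpy as np

rng = np.random.default_rng(1)

def mlp_error(w, X, y, n_hidden):
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = w[n_in * n_hidden:].reshape(n_hidden, 1)
    p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2))).ravel()
    return np.mean((p > 0.5) != y)

def pso(X_tr, y_tr, X_val, y_val, n_hidden=6, n_particles=20,
        iters=300, inertia=0.7, c1=1.5, c2=1.5, patience=20):
    dim = X_tr.shape[1] * n_hidden + n_hidden
    pos = rng.normal(0, 0.5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([mlp_error(p, X_tr, y_tr, n_hidden) for p in pos])
    g = pbest[pbest_err.argmin()].copy()
    best_val, stall = np.inf, 0
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        err = np.array([mlp_error(p, X_tr, y_tr, n_hidden) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        g = pbest[pbest_err.argmin()].copy()
        val = mlp_error(g, X_val, y_val, n_hidden)   # early stopping check
        if val < best_val:
            best_val, stall = val, 0
        else:
            stall += 1
            if stall >= patience:
                break
    return g, best_val
```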
4.
Reservoir computing is a recurrent-neural-network framework for computation that allows black-box modeling of dynamical systems. In contrast to other recurrent neural network approaches, reservoir computing does not train the input and internal weights of the network; only the readout is trained. However, it is still necessary to adjust parameters to create a “good” reservoir for a given application. In this study we introduce a method called RCDESIGN (reservoir computing and design training), which combines an evolutionary algorithm with reservoir computing and simultaneously searches for the best values of the parameters, topology and weight matrices without rescaling the reservoir matrix by its spectral radius. The idea of adjusting the spectral radius to lie within the unit circle in the complex plane comes from linear system theory; this argument, however, does not necessarily apply to nonlinear systems, which is the case in reservoir computing. The results obtained with the proposed method are compared with those obtained by a genetic algorithm searching over the global parameters of reservoir computing. Four time series were used to validate RCDESIGN.
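A minimal sketch of the reservoir-computing step being discussed: a randomly generated recurrent reservoir whose input and internal weights stay fixed, with only a linear readout fitted (here by ridge regression). The optional spectral-radius rescaling is included only to mark what RCDESIGN leaves out; the reservoir size, weight ranges and regularization are illustrative assumptions, and the evolutionary search itself is not shown.

```python
# Hedged sketch: echo-state-style reservoir with a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(2)

def run_reservoir(u, n_res=100, rescale_radius=None, leak=1.0):
    """Drive a reservoir with a 1-D input series u and collect its states."""
    W_in = rng.uniform(-0.5, 0.5, n_res)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    if rescale_radius is not None:               # classical ESN practice
        W *= rescale_radius / np.max(np.abs(np.linalg.eigvals(W)))
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Least-squares readout with Tikhonov regularization."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# one-step-ahead prediction on a toy sine series
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = run_reservoir(u[:-1], rescale_radius=None)   # no spectral-radius rescaling
w_out = train_readout(X[100:], u[101:])          # discard warm-up states
pred = X[100:] @ w_out
```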
5.
In this letter, the computational power of a class of random access memory (RAM)-based neural networks, called general single-layer sequential weightless neural networks (GSSWNNs), is analyzed. The theoretical results presented, besides helping the understanding of the temporal behavior of these networks, could also provide useful insights for the development of new learning algorithms.
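For readers unfamiliar with weightless networks, a sketch of the basic RAM-node idea is given below: a node is a lookup table addressed by a tuple of binary inputs, so "training" writes values into memory positions rather than adjusting weights. The sequential (state-carrying) structure of GSSWNNs is not modeled here; this is only the underlying RAM mechanism, shown as an assumption for illustration.

```python
# Hedged sketch: a single RAM node, the building block of weightless networks.
class RAMNode:
    def __init__(self, n_inputs):
        self.memory = {}            # address (bit tuple) -> stored output
        self.n_inputs = n_inputs

    def train(self, bits, value=1):
        self.memory[tuple(bits)] = value

    def respond(self, bits):
        return self.memory.get(tuple(bits), 0)

node = RAMNode(3)
node.train((1, 0, 1))
print(node.respond((1, 0, 1)), node.respond((0, 0, 0)))  # 1 0
```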
6.
The use of neural network models for time series forecasting has been motivated by experimental results that indicate a high capacity for function approximation with good accuracy. Generally, these models use activation functions with fixed parameters. However, it is known that the choice of activation function strongly influences the complexity and performance of the neural network, and that only a limited number of activation functions have generally been used. We describe the use of a family of asymmetric activation functions with a free parameter for neural networks, and prove that this family satisfies the requirements of the universal approximation theorem. We present a methodology for the global optimization of this activation-function family and of the connections between the processing units of the neural network. The main idea is to optimize, simultaneously, the weights and the activation function used in a Multilayer Perceptron (MLP), through an approach that combines the advantages of simulated annealing, tabu search and a local learning algorithm. We have chosen two local learning algorithms: backpropagation with momentum (BPM) and Levenberg–Marquardt (LM). The overall purpose is to improve performance in time series forecasting.
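One way to make an activation free parameter part of the joint optimization is to append it to the same parameter vector that the global search perturbs. The abstract does not specify the exact activation family, so the asymmetric, parameterized complementary log-log form f(x; a) = 1 - exp(-exp(a·x)) is assumed below purely for illustration; a simulated-annealing/tabu loop like the one sketched for item 1 would then perturb `theta` as a whole.

```python
# Hedged sketch: packing MLP weights and a free activation parameter into one
# optimizable vector; the activation family shown is an assumption.
import numpy as np

def act(x, a):
    """Asymmetric activation with free shape parameter a > 0."""
    return 1.0 - np.exp(-np.exp(a * x))

def unpack(theta, n_in, n_hidden):
    """theta = [W1 (n_in*n_hidden), W2 (n_hidden), a (1)]."""
    W1 = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = theta[n_in * n_hidden:-1].reshape(n_hidden, 1)
    return W1, W2, abs(theta[-1]) + 1e-3   # keep the shape parameter positive

def predict(theta, X, n_hidden):
    W1, W2, a = unpack(theta, X.shape[1], n_hidden)
    return act(X @ W1, a) @ W2             # free parameter enters the hidden layer
```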
7.
8.
Evolutionary Radial Basis Functions for Credit Assessment
Credit analysts generally assess the risk of credit applications based on their previous experience, and frequently employ quantitative methods to this end. Among the methods used, Artificial Neural Networks have been particularly successful and have been incorporated into several computational tools. However, the design of efficient Artificial Neural Networks is largely affected by the definition of adequate values for their free parameters. This article discusses a new approach to the design of a particular Artificial Neural Network model, RBF networks, through Genetic Algorithms. It presents an overall view of the problems involved and the different approaches employed to optimize Artificial Neural Networks genetically. To this end, several methods proposed in the literature for optimizing RBF networks using Genetic Algorithms are discussed. Finally, the model proposed by the authors is described and experimental results using this model on a credit risk assessment problem are presented.
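A minimal sketch of the kind of RBF network a genetic algorithm could configure for credit scoring: the chromosome encodes the Gaussian centers and widths, the linear output weights are fitted in closed form, and classification accuracy serves as the fitness. The encoding and fitness below are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch: decode a GA chromosome into RBF centers/widths, fit the
# output layer by least squares, and score accuracy as the GA fitness.
import numpy as np

def rbf_design(X, centers, widths):
    """Gaussian basis activations for every sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def fitness(chromosome, X, y, n_centers):
    n_feat = X.shape[1]
    centers = chromosome[:n_centers * n_feat].reshape(n_centers, n_feat)
    widths = np.abs(chromosome[n_centers * n_feat:]) + 1e-3
    Phi = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # closed-form output layer
    acc = np.mean((Phi @ w > 0.5) == y)           # credit: good (1) vs bad (0)
    return acc                                    # GA maximizes this fitness
```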