Similar Literature
20 similar documents retrieved.
1.
Recent advances in artificial neural networks (ANNs) have led to the design and construction of neuroarchitectures as simulators and emulators of a variety of problems in science and engineering. Such problems include pattern recognition, prediction, optimization, associative memory, and control of dynamic systems. This paper offers an analytical overview of the most successful designs, implementations, and applications of neuroarchitectures as neurosimulators and neuroemulators. It also outlines historical notes on the formulation of the basic biological neuron, artificial computational models, network architectures, and learning processes of the most common ANNs; describes and analyzes neurosimulation on parallel architectures in both software and hardware (neurohardware); presents the simulation of ANNs on parallel architectures; gives a brief introduction to ANNs on vector microprocessor systems; and presents ANNs in terms of the "new technologies". Specifically, it discusses cellular computing, cellular neural networks (CNNs), a new proposition for unsupervised neural networks (UNNs), and pulse coupled neural networks (PCNNs).

2.
The fault-tolerance characteristics of time-continuous, recurrent artificial neural networks (ANNs) that can be used to solve optimization problems are investigated. The performance of these networks is illustrated using well-known model problems such as the traveling salesman problem and the assignment problem. The ANNs are then subjected to up to 13 simultaneous stuck-at-1 or stuck-at-0 faults for network sizes of up to 900 neurons. The effect of these faults on performance is demonstrated, and the cause of the observed fault tolerance is discussed. An application is presented in which a network performs a critical task for a real-time distributed processing system by generating new task allocations during the reconfiguration of the system. The performance degradation of the ANN in the presence of faults is investigated by large-scale simulations, and the potential benefits of delegating a critical task to a fault-tolerant network are discussed.
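As an illustration of the kind of experiment described above, the following minimal sketch (not taken from the paper; the penalty matrix and fault positions are arbitrary assumptions) simulates a continuous Hopfield-style network and injects stuck-at-0/stuck-at-1 faults by clamping selected neuron outputs:

```python
import numpy as np

def run_hopfield(W, b, faults=None, steps=2000, dt=0.01, gain=5.0, seed=0):
    """Continuous Hopfield-style dynamics; faults is a dict {neuron_index: 0.0 or 1.0}."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    u = rng.normal(0.0, 0.01, n)                    # internal states
    for _ in range(steps):
        v = 0.5 * (1.0 + np.tanh(gain * u))         # sigmoidal outputs in [0, 1]
        if faults:                                  # inject stuck-at faults
            for i, val in faults.items():
                v[i] = val
        u += dt * (-u + W @ v + b)                  # gradient-like descent of the energy
    v = 0.5 * (1.0 + np.tanh(gain * u))
    if faults:
        for i, val in faults.items():
            v[i] = val
    return v

# Toy 16-neuron example with two faulty neurons (assumed penalty matrix, not a
# real assignment-problem encoding).
n = 16
W = -2.0 * np.ones((n, n)) + 2.0 * np.eye(n)
b = np.ones(n)
solution = run_hopfield(W, b, faults={3: 1.0, 7: 0.0})
```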

3.

Purpose

Extracting comprehensible classification rules is one of the most emphasized goals in data mining research. In order to obtain accurate and comprehensible classification rules from databases, a new approach is proposed that combines the advantages of artificial neural networks (ANNs) and swarm intelligence.

Method

Artificial neural networks (ANNs) are a group of very powerful tools applied to prediction, classification and clustering in different domains. The main disadvantage of this general-purpose tool is its limited interpretability and comprehensibility. In order to eliminate these disadvantages, a novel approach is developed to uncover and decode the information hidden in the black-box structure of ANNs. Therefore, this paper presents a study on knowledge extraction from trained ANNs for classification problems. The proposed approach uses the particle swarm optimization (PSO) algorithm to transform the behavior of trained ANNs into accurate and comprehensible classification rules. Particle swarm optimization with time-varying inertia weight and acceleration coefficients is designed to explore the best attribute-value combination by optimizing the ANN output function.

Results

The weights hidden in trained ANNs were turned into comprehensible classification rule sets with higher testing accuracy than traditional rule-based classifiers.
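The following is a minimal sketch of PSO with linearly time-varying inertia weight and acceleration coefficients. The objective function standing in for the trained ANN's class output is a placeholder assumption, not the paper's rule-extraction setup:

```python
import numpy as np

def pso_maximize(f, dim, bounds, n_particles=30, iters=200, seed=0):
    """PSO with inertia weight 0.9->0.4 and acceleration coefficients 2.5->0.5 / 0.5->2.5."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()
    for t in range(iters):
        w = 0.9 - (0.9 - 0.4) * t / iters           # time-varying inertia weight
        c1 = 2.5 - (2.5 - 0.5) * t / iters          # cognitive coefficient decreases
        c2 = 0.5 + (2.5 - 0.5) * t / iters          # social coefficient increases
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, pbest_val.max()

# Example with a placeholder "ANN output": a simple quadratic stands in for the
# trained network's class-membership score over attribute values in [0, 1].
best_x, best_val = pso_maximize(lambda p: -np.sum((p - 0.3) ** 2), dim=4, bounds=(0.0, 1.0))
```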

4.
Speeding up backpropagation using multiobjective evolutionary algorithms
Abbass HA. Neural Computation, 2003, 15(11): 2705-2726
The use of backpropagation for training artificial neural networks (ANNs) is usually associated with a long training process. The user needs to experiment with a number of network architectures; with larger networks, more computational cost in terms of training time is required. The objective of this letter is to present an optimization algorithm, comprising a multiobjective evolutionary algorithm and a gradient-based local search. In the rest of the letter, this is referred to as the memetic Pareto artificial neural network algorithm for training ANNs. The evolutionary approach is used to train the network and simultaneously optimize its architecture. The result is a set of networks, with each network in the set attempting to optimize both the training error and the architecture. We also present a self-adaptive version with lower computational cost. We show empirically that the proposed method is capable of reducing the training time compared to gradient-based techniques.

5.
Artificial neural networks (ANNs) are among the most widely used techniques in classification data mining. Although ANNs can achieve very high classification accuracies, their explanation capability is very limited. Therefore, one of the main challenges in using ANNs in data mining applications is to extract explicit knowledge from them. Based on this motivation, a novel approach is proposed in this paper for generating classification rules from feed-forward ANNs. Although there are several approaches in the literature for classification rule extraction from ANNs, the present approach is fundamentally different from them. In previous studies, ANN training and rule extraction are generally performed independently in a sequential (hierarchical) manner. In the present study, however, the training and rule extraction phases are integrated within a multiple-objective evaluation framework for generating accurate classification rules directly. The proposed approach uses the differential evolution algorithm for training and the touring ant colony optimization algorithm for rule extraction. The proposed algorithm is named DIFACONN-miner. Experimental studies on benchmark data sets and comparisons with other classical and state-of-the-art rule extraction algorithms show that the proposed approach has considerable potential to discover more accurate and concise classification rules.
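A minimal sketch of the training side only: differential evolution (DE/rand/1/bin) searching the weight vector of a small feed-forward classifier. The network layout and fitness definition are assumptions for illustration and are not the DIFACONN-miner implementation:

```python
import numpy as np

def mlp_predict(weights, X, n_hidden=5):
    """Tiny one-hidden-layer network returning binary class labels."""
    n_in = X.shape[1]
    w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
    w2 = weights[-(n_hidden + 1):-1]
    b2 = weights[-1]
    h = np.tanh(X @ w1 + b1)
    return (h @ w2 + b2 > 0).astype(int)

def de_train(X, y, n_hidden=5, pop=40, gens=300, F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin over the flattened weight vector; fitness = training error."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + 2 * n_hidden + 1
    P = rng.uniform(-1, 1, (pop, dim))
    fit = np.array([np.mean(mlp_predict(p, X, n_hidden) != y) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), P[i])
            err = np.mean(mlp_predict(trial, X, n_hidden) != y)
            if err <= fit[i]:
                P[i], fit[i] = trial, err
    return P[np.argmin(fit)]

# Toy usage with random binary-labelled data (placeholder for a benchmark set).
rng = np.random.default_rng(1)
X = rng.normal(0, 1, (120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
best_w = de_train(X, y)
```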

6.
This paper considers the use of artificial neural networks (ANNs) to model six different heuristic algorithms applied to the n-job, m-machine real flowshop scheduling problem with the objective of minimizing makespan. The objective is to obtain six ANN models to be used for predicting the completion times of each job processed on each machine and to introduce the fuzziness of scheduling information into flowshop scheduling. Fuzzy membership functions are generated for completion, job waiting and machine idle times. Different methods are proposed to obtain the fuzzy parameters. To model the functional relation between the input and output variables, multilayered feedforward networks (MFNs) trained with the error backpropagation learning rule are used. The trained network is able to apply the learnt relationship to new problems. In this paper, an implementation alternative to the existing heuristic algorithms is provided. Once the network is trained adequately, it can provide an outcome (solution) faster than conventional iterative methods because of its generalization ability. The results obtained from the study can be extended to solve scheduling problems in the area of manufacturing.
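As a rough illustration of training a feedforward network by backpropagation to predict a completion-time-like quantity, here is a minimal sketch using scikit-learn; the random instances and the makespan-like target are toy assumptions, not the paper's data or heuristics:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_jobs, n_machines = 10, 5
# 500 toy flowshop instances: flattened processing-time matrices as inputs.
X = rng.uniform(1, 20, (500, n_jobs * n_machines))
# Stand-in target (total processing time) in place of a heuristic's completion times.
y = X.reshape(500, n_jobs, n_machines).sum(axis=(1, 2))

model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("test R^2:", model.score(X[400:], y[400:]))
```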

7.
It is becoming increasingly apparent that, without some form of explanation capability, the full potential of trained artificial neural networks (ANNs) may not be realised. This survey gives an overview of techniques developed to redress this situation. Specifically, the survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs (knowledge initialisation), extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement). The survey also introduces a new taxonomy for classifying the various techniques, discusses their modus operandi, and delineates criteria for evaluating their efficacy.

8.

Bored pile foundations are gaining popularity in the construction industry because of their ease of construction and low noise and vibration. The load-carrying capacity of bored pile foundations depends on soil–structure interaction. This three-dimensional problem is further complicated by large variations in soil properties. Moreover, modeling soil is difficult because of its nonlinear and anisotropic nature. For such cases, artificial neural networks (ANNs) and nature-inspired optimization techniques have been found highly suitable for attaining acceptable levels of accuracy. In the present study, two ANNs have been trained to determine unit skin friction and unit end bearing capacity from soil properties. The training data for the ANNs were obtained from finite element analyses of pile foundations for 4809 different soil types. A dataset of 50 field pile loading test results is used to check the performance of the developed ANNs. To enhance the accuracy of the developed ANNs, two correlation factors have been determined by applying four popular nature-inspired optimization algorithms: particle swarm optimization (PSO), firefly, cuckoo search and bacterial foraging. To rank these optimization algorithms, parametric and nonparametric statistical analyses have been carried out. The results of the optimization algorithms have been compared to find the most suitable solution to this multi-dimensional problem, which has a large number of nonlinear equality constraints. The effectiveness and suitability of the nature-inspired algorithms for the presented problem have been demonstrated by computing correlation coefficients with the field pile loading test results and then comparing the total execution time taken by each algorithm. The comparison shows that PSO is the best performer for such constrained problems.


9.
Most artificial neural networks (ANNs) have a fixed topology during learning and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown the ability to overcome many of these problems. This paper introduces location-independent transformations (LITs) as a general strategy for efficiently implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independently of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents a LIT that supports both the standard (static) multilayer backpropagation network and backpropagation with dynamic extensions. The complexity of both the learning and execution algorithms is O(q(N log M)) for a single pattern, where q is the number of weight layers in the original network, N is the number of nodes in the widest node layer in the original network, and M is the number of nodes in the transformed network (which is linear in the number of hidden nodes in the original network). This paper extends previous work with 2-weight-layer backpropagation networks.

10.
帅典勋, 赵宏彬, 吴晓江. Chinese Journal of Computers (计算机学报), 2003, 26(10): 1224-1233
Solving the fast packet switching (FPS) problem through real-time optimization is an important means of improving network performance. Mathematical programming methods such as gradient descent cannot solve the FPS problem in parallel and in real time. Optimization methods based on Hopfield-type neural networks and cellular neural networks, on the other hand, involve only single-granularity cell dynamics equations and interactions between cells of a single granularity; not only is convergence to an equilibrium point slow, but selecting and adjusting the network parameters is also very difficult. This paper proposes a new generalized cellular automaton model and method with multi-granularity macro-cells, in which small-granularity macro-cells aggregate into large-granularity macro-cells that can evolve independently. Through the interaction of swarm intelligence at different levels among the multi-granularity populations, the FPS problem and other similarly complex network optimization problems can be solved in a distributed, parallel manner faster and more effectively than with existing methods.

11.
Over the last two decades, artificial neural networks (ANNs) have been applied to solve a variety of problems such as pattern classification and function approximation. In many applications, it is desirable to extract knowledge from trained neural networks so that users can gain a better understanding of the network's solution. In this paper, we use a neural network rule extraction method to extract knowledge from 2222 dividend initiation and resumption events. We find that the positive relation between the short-term price reaction and the ratio of annualized dividend amount to stock price is primarily limited to 96 small firms with high dividend ratios. The results suggest that the degree of short-term stock price underreaction to dividend events may not be as dramatic as previously believed. The results also show that the relation between the stock price response and firm size differs across different types of firms. Thus, drawing conclusions from the whole dividend event data set may leave some important information unexamined. This study shows that the neural network rule extraction method can reveal more knowledge from the data.

12.
Group decision making is a multi-criteria decision-making method applied in many fields. However, the use of group decision-making techniques in multi-class classification problems and rule generation has not been explored widely. This investigation developed a group decision classifier with particle swarm optimization (PSO) and a decision tree (GDCPSODT) for analyzing students' mathematics and science achievements, which is a multi-class classification problem involving rule generation. The PSO technique is employed to determine the weights of condition attributes; the decision tree is used to generate rules. To demonstrate the performance of the developed GDCPSODT model, other classifiers such as the Bayesian classifier, the k-nearest neighbor (KNN) classifier, the backpropagation neural network classifier with particle swarm optimization (BPNNPSO) and the radial basis function neural network classifier with PSO (RBFNNPSO) are applied to the same data. Experimental results indicate that the testing accuracy of GDCPSODT is higher than that of the other four classifiers. Furthermore, the GDCPSODT model provides rules and some directions for improving academic achievement. Therefore, the GDCPSODT model is a feasible and promising alternative for analyzing student-related mathematics and science achievement data.
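A minimal sketch of the coupling described above, under assumptions: a particle swarm searches for condition-attribute weights, each candidate weight vector rescales the features, and the cross-validated accuracy of a decision tree on the rescaled data serves as the fitness (the Iris data set stands in for the student achievement data):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
n_particles, dim, iters = 20, X.shape[1], 30

def fitness(w):
    # Decision-tree accuracy on attribute-weighted data.
    return cross_val_score(DecisionTreeClassifier(max_depth=3, random_state=0),
                           X * w, y, cv=5).mean()

pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(w) for w in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    fit = np.array([fitness(w) for w in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

# Final tree on the best-weighted attributes; its decision paths can be read off as rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X * gbest, y)
```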

13.
A neural network model (LHONN) is proposed for solving dynamic hierarchical optimization problems of discrete-time large-scale systems. The network is characterized by full integration: 1) the dynamic equations of each subsystem are embedded in the corresponding local optimization network, so the network structure has a low dimension and is easy to implement in hardware; 2) the upper-level coordination network and the local optimization networks are solved simultaneously, giving a high optimization speed that is suitable for real-time system optimization.

14.
Because the search space of artificial neural networks (ANNs) is high-dimensional and multimodal, and is usually polluted by noise and missing data, the process of weight training is a complex continuous optimization problem. This paper deals with the application of a recently invented metaheuristic optimization algorithm, the bird mating optimizer (BMO), to training feed-forward ANNs. BMO is a population-based search method that imitates the mating strategies of bird species to design effective search techniques. To study the usefulness of the proposed algorithm, BMO is applied to the weight training of ANNs for solving three real-world classification problems, namely Iris flower, Wisconsin breast cancer, and Pima Indian diabetes. The performance of BMO is compared with that of other classifiers. Simulation results indicate the superior capability of BMO in tackling the problem of ANN weight training. BMO is also applied to modeling a fuel cell system, which has been addressed as an open and demanding problem in electrical engineering. The promising results verify the potential of the BMO algorithm.

15.
In this paper, perceptron neural networks are applied to approximate the solution of fractional optimal control problems. The necessary (and in most cases also sufficient) optimality conditions are stated in the form of a fractional two-point boundary value problem. This problem is then converted into a Volterra integral equation. Using the perceptron neural network's ability to approximate nonlinear functions, we first propose approximating functions to estimate the control, state and co-state functions, constructed so that they satisfy the initial or boundary conditions. The approximating functions contain a neural network with unknown weights. Using an optimization approach, the weights are adjusted so that the approximating functions satisfy the optimality conditions of the fractional optimal control problem. Numerical results illustrate the advantages of the method.
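A minimal sketch of the trial-function idea, under an assumed form x_trial(t) = x0 + t * N(t; w) that satisfies an initial condition by construction; the residual to be minimized over the weights (e.g., of the Volterra equation or the optimality conditions) is left hypothetical:

```python
import numpy as np

def perceptron(t, w, n_hidden=10):
    """One-hidden-layer perceptron N(t; w) evaluated at the time points t."""
    w1 = w[:n_hidden]
    b1 = w[n_hidden:2 * n_hidden]
    w2 = w[2 * n_hidden:3 * n_hidden]
    return np.tanh(np.outer(t, w1) + b1) @ w2

def x_trial(t, w, x0=1.0):
    # The factor t forces x_trial(0) = x0, so the initial condition holds for any weights.
    return x0 + t * perceptron(t, w)

# The weights w would then be tuned by an optimizer so that x_trial (and the
# analogous control/co-state trial functions) satisfy the optimality conditions;
# here we only evaluate the trial function on a grid.
w = np.random.default_rng(0).normal(0, 0.1, 30)
t_grid = np.linspace(0.0, 1.0, 50)
print(x_trial(t_grid, w)[:3])
```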

16.
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension of the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in the optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of the optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network with four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid-point discretization of the parameter space.
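The following is a much-simplified stand-in for mixed-variable pattern search (not the generalized pattern search algorithm of the paper): it polls discrete neighbors of the current architecture, an integer neuron count plus a categorical transfer function, and moves only when a neighbor improves the cross-validated error of the resulting ANN. The two-parameter target function is a toy assumption:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])      # toy 2-parameter target

def score(n_neurons, activation):
    # Cross-validated mean squared error of one candidate architecture.
    net = MLPRegressor(hidden_layer_sizes=(n_neurons,), activation=activation,
                       max_iter=3000, random_state=0)
    return -cross_val_score(net, X, y, cv=3, scoring="neg_mean_squared_error").mean()

current = (8, "tanh")
best = score(*current)
for _ in range(10):                                 # polling iterations
    neighbors = [(max(2, current[0] + d), act)
                 for d in (-2, 2) for act in ("tanh", "relu", "logistic")]
    cand = min(neighbors, key=lambda c: score(*c))
    cand_score = score(*cand)
    if cand_score < best:
        current, best = cand, cand_score            # accept the improving neighbor
    else:
        break                                       # no improving neighbor: stop
```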

17.
Artificial neural networks (ANNs) are one of the hottest topics in computer science and artificial intelligence because of their potential and advantages in analyzing real-world problems in various disciplines, including but not limited to physics, biology, chemistry, and engineering. However, ANNs lack several key characteristics of biological neural networks, such as sparsity, scale-freeness, and small-worldness. The concept of sparse and scale-free neural networks has been introduced to fill this gap. Network sparsity is implemented by removing weak weights between neurons during the learning process and replacing them with random weights. When the network is initialized, it is fully connected, which means the number of weights is four times the number of neurons. In this study, considering that a biological neural network has some degree of initial sparsity, we design an ANN with a prescribed level of initial sparsity. The neural network is tested on handwritten digits, Arabic characters, CIFAR-10, and Reuters newswire topics. Simulations show that it is possible to reduce the number of weights by up to 50% without losing prediction accuracy. Moreover, in both cases, the testing time is dramatically reduced compared with fully connected ANNs.
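A minimal sketch of the prune-and-regrow step described above (not the authors' code): after a training step, the weakest active weights in a layer are removed and the same number of connections is regrown at random positions with small random values, keeping the sparsity level fixed. The layer size, density, and pruning fraction are arbitrary assumptions:

```python
import numpy as np

def prune_and_regrow(W, mask, prune_frac=0.3, rng=None):
    """Remove the weakest active weights and regrow the same number at random positions."""
    if rng is None:
        rng = np.random.default_rng()
    active = np.flatnonzero(mask)
    k = int(prune_frac * active.size)
    # Remove the k smallest-magnitude active weights.
    weakest = active[np.argsort(np.abs(W.flat[active]))[:k]]
    mask.flat[weakest] = False
    W.flat[weakest] = 0.0
    # Regrow k connections at random inactive positions with small random weights.
    inactive = np.flatnonzero(~mask)
    regrow = rng.choice(inactive, size=k, replace=False)
    mask.flat[regrow] = True
    W.flat[regrow] = rng.normal(0.0, 0.01, size=k)
    return W, mask

# Example: a 784x100 weight matrix initialized at 20% density.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (784, 100))
mask = rng.random((784, 100)) < 0.2
W *= mask
W, mask = prune_and_regrow(W, mask, rng=rng)
```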

18.
The highly nonlinear, chaotic nature of electrocardiogram (EKG) data makes the detection of normal and abnormal heartbeats a well-suited application for artificial neural networks (ANNs). Digitized EKG data were applied to a two-layer feed-forward neural network trained to distinguish between different types of heartbeat patterns. The Levenberg–Marquardt training algorithm was found to provide the best training results. In our study, the trained ANN correctly distinguished between normal heartbeats and premature ventricular contractions in 92% of the cases presented.
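A minimal sketch of fitting a two-layer feedforward network with the Levenberg-Marquardt algorithm via SciPy's least-squares solver; the synthetic features and 0/1 labels are placeholders for the digitized EKG data:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_samples, n_in, n_hidden = 200, 16, 6
X = rng.normal(0, 1, (n_samples, n_in))             # stand-in for EKG-derived features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # stand-in normal/PVC labels

def unpack(w):
    i = n_in * n_hidden
    W1 = w[:i].reshape(n_in, n_hidden)
    b1 = w[i:i + n_hidden]
    W2 = w[i + n_hidden:i + 2 * n_hidden]
    b2 = w[-1]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))

def residuals(w):
    # Levenberg-Marquardt minimizes the sum of squared residuals output - label.
    return forward(w, X) - y

w0 = rng.normal(0, 0.1, n_in * n_hidden + 2 * n_hidden + 1)
fit = least_squares(residuals, w0, method="lm")      # Levenberg-Marquardt
accuracy = np.mean((forward(fit.x, X) > 0.5) == y)
```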

19.
The problems associated with training feedforward artificial neural networks (ANNs) such as the multilayer perceptron (MLP) network and radial basis function (RBF) network have been well documented. The solutions to these problems have inspired a considerable amount of research, one particular area being the application of evolutionary search algorithms such as the genetic algorithm (GA). To date, the vast majority of GA solutions have been aimed at the MLP network. This paper begins with a brief overview of feedforward ANNs and GAs followed by a review of the current state of research in applying evolutionary techniques to training RBF networks.

20.
This paper reviews the use of evolutionary algorithms (EAs) to optimize artificial neural networks (ANNs). First, we briefly introduce the basic principles of artificial neural networks and evolutionary algorithms and, by analyzing the advantages and disadvantages of EAs and ANNs, explain the benefits of using EAs to optimize ANNs. We then provide a brief survey of the basic theories and algorithms for optimizing the weights, the network architecture and the learning rules, and discuss recent research on these three aspects. Finally, we speculate on new trends in the development of this area.
