Similar Documents
 20 similar documents retrieved (search time: 477 ms)
1.
We investigate the complete stability of multistable delayed neural networks. A new formulation, modified from previous studies on multistable networks, is developed to derive a componentwise dynamical property. An iteration argument is then constructed to conclude that every solution of the network converges to a single equilibrium as time tends to infinity. The existence of 3^n equilibria and 2^n positively invariant sets for the n-neuron system remains valid under the new formulation. The theory is demonstrated by a numerical illustration.
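The coexistence of multiple stable equilibria counted in the abstract (3^n of them for n neurons) can be illustrated with a deliberately minimal sketch: a single neuron without delay, x' = -x + 2·tanh(x), which has two stable equilibria and an unstable one at the origin. The model, parameters, and Euler integration below are our own illustration, not the paper's system:

```python
import math

def simulate(x0, steps=2000, dt=0.01):
    """Euler-integrate x' = -x + 2*tanh(x): a one-neuron network with
    three equilibria, two stable and one unstable at the origin."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + 2.0 * math.tanh(x))
    return x

pos = simulate(0.5)    # settles at the positive stable equilibrium
neg = simulate(-0.5)   # settles at the negative stable equilibrium
```

Different initial conditions thus converge to different single equilibria, which is the "every solution converges to some equilibrium" notion of complete stability in miniature.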

2.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm and second-order algorithms, long learning times before convergence remain a problem to be overcome. In this paper, we propose a new hybrid algorithm for an FNN that combines unsupervised training for the hidden neurons (Kohonen algorithm) with supervised training for the output neurons (gradient descent method). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
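A minimal sketch of the hybrid idea, under our own assumptions (a 1-D regression task, Gaussian hidden units, plain winner-take-all standing in for the Kohonen stage): hidden centers are placed unsupervised, then only the linear output weights are trained by gradient descent.

```python
import math, random

random.seed(0)
X = [i / 20.0 for i in range(21)]              # inputs in [0, 1]
Y = [math.sin(2 * math.pi * x) for x in X]     # toy target function

# Stage 1 (unsupervised): winner-take-all center placement for hidden units.
centers = [random.random() for _ in range(5)]
for _ in range(50):
    for x in X:
        w = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
        centers[w] += 0.1 * (x - centers[w])   # move the winner toward the input

def hidden(x):
    return [math.exp(-((x - c) / 0.2) ** 2) for c in centers]

# Stage 2 (supervised): gradient descent on the linear output weights only.
wout = [0.0] * len(centers)
def mse():
    return sum((sum(w * h for w, h in zip(wout, hidden(x))) - y) ** 2
               for x, y in zip(X, Y)) / len(X)

loss_before = mse()
for _ in range(500):
    for x, y in zip(X, Y):
        h = hidden(x)
        err = sum(w * hv for w, hv in zip(wout, h)) - y
        for j in range(len(wout)):
            wout[j] -= 0.05 * err * h[j]
loss_after = mse()
```

Because only the output layer is trained with gradients, each supervised update is cheap, which is the source of the speed-up the abstract claims.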

3.
Feedforward neural networks (FNNs) have been proposed to solve complex problems in pattern recognition, classification, and function approximation. Despite the general success of learning methods for FNNs, such as the backpropagation (BP) algorithm, second-order optimization algorithms, and layer-wise learning algorithms, several drawbacks remain to be overcome; two major ones are convergence to local minima and long learning times. We propose an efficient learning method for an FNN that combines the BP strategy with layer-by-layer optimization. More precisely, we construct the layer-wise optimization method using the Taylor series expansion of the nonlinear operators describing an FNN and propose to update the weights of each layer by a BP-based Kaczmarz iterative procedure. The experimental results show that the new learning algorithm is stable, reduces the learning time, and improves generalization in comparison with other well-known methods.
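The Kaczmarz procedure that the layer-wise update builds on can be shown on a toy consistent linear system. This is the classical row-projection iteration, not the paper's full BP-based variant: each sweep projects the current estimate onto the hyperplane a_i·x = b_i of one row at a time.

```python
# Classical Kaczmarz iteration for a consistent system A x = b.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]          # exact solution: x = [1, 3]
x = [0.0, 0.0]
for _ in range(200):                     # full sweeps over the rows
    for ai, bi in zip(A, b):
        # Project x onto the hyperplane ai . x = bi.
        r = (bi - sum(a * xj for a, xj in zip(ai, x))) / sum(a * a for a in ai)
        x = [xj + r * a for xj, a in zip(x, ai)]
```

Each projection touches one row (here, one training constraint) at a time, which is what makes the procedure attractive for updating one layer's weights in isolation.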

4.
The problem of downscaling the effects of global-scale climate variability into predictions of local hydrology has important implications for water resource management. Our research aims to identify predictive relationships that can be used to integrate solar and ocean-atmospheric conditions into forecasts of regional water flows. In recent work we developed an induction technique called second-order table compression, in which learning is viewed as a process that transforms a table of training data into a second-order table (whose entries are sets of atomic values) with fewer rows by merging rows in consistency-preserving ways. Here, we apply the second-order table compression technique to generate predictive models of future water inflows of Lake Okeechobee, a primary source of water supply for south Florida. We also describe SORCER, a second-order table compression learning system, and compare its performance with three well-established data mining techniques: neural networks, decision tree learning, and association rule mining. SORCER gives more accurate results, on average, than the other methods, with average accuracy between 49% and 56% in predicting inflows discretized into four ranges. We discuss the implications of these results and the practical issues in assessing the results of data mining models to guide decision-making.
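A sketch of the row-merging idea, with our own greedy merge policy and covering test (the published SORCER system is certainly more elaborate): rows of the same class are merged by taking componentwise unions of value sets, but only when the merged second-order row does not cover any row of a different class, which preserves consistency.

```python
def covers(srow, row):
    """A second-order row (sets of values) covers a plain row if every
    attribute value of the row lies in the corresponding set."""
    return all(v in s for s, v in zip(srow, row))

def compress(rows, labels):
    """Greedy consistency-preserving compression: lift each training row to
    a second-order row, then merge same-label rows whenever the union does
    not cover any row carrying a different label."""
    srows = [([{v} for v in r], y) for r, y in zip(rows, labels)]
    merged = True
    while merged:
        merged = False
        for i in range(len(srows)):
            for j in range(i + 1, len(srows)):
                (si, yi), (sj, yj) = srows[i], srows[j]
                if yi != yj:
                    continue
                union = [a | b for a, b in zip(si, sj)]
                if any(covers(union, r) for r, y in zip(rows, labels) if y != yi):
                    continue              # merge would break consistency
                srows[i] = (union, yi)
                del srows[j]
                merged = True
                break
            if merged:
                break
    return srows

rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = ['lo', 'lo', 'lo', 'hi']
table = compress(rows, labels)            # 4 rows compress to 3
```

The compressed table still covers every training row with the correct label; prediction amounts to finding a second-order row that covers a new example.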

5.
Yi Z, Tan KK, Lee TH. Neural Computation, 2003, 15(3): 639-662
Multistability is a property necessary in neural networks in order to enable certain applications (e.g., decision making), where monostable networks can be computationally restrictive. This article focuses on the analysis of multistability for a class of recurrent neural networks with unsaturating piecewise linear transfer functions. It deals fully with the three basic properties of a multistable network: boundedness, global attractivity, and complete convergence. This article makes the following contributions: conditions based on local inhibition are derived that guarantee boundedness of some multistable networks, conditions are established for global attractivity, bounds on global attractive sets are obtained, complete convergence conditions for the network are developed using novel energy-like functions, and simulation examples are employed to illustrate the theory thus developed.

6.
This paper investigates the learning of a wide class of single-hidden-layer feedforward neural networks (SLFNs) with two sets of adjustable parameters: the nonlinear parameters in the hidden nodes and the linear output weights. The main objective is both to speed up the convergence of second-order learning algorithms such as Levenberg-Marquardt (LM) and to improve network performance. This is achieved by reducing the dimension of the solution space and by introducing a new Jacobian matrix. Unlike conventional supervised learning methods, which optimize these two sets of parameters simultaneously, the linear output weights are first converted into dependent parameters, thereby removing the need for their explicit computation. Consequently, the neural network (NN) learning is performed over a solution space of reduced dimension. A new Jacobian matrix is then proposed for use with the popular second-order learning methods in order to achieve a more accurate approximation of the cost function. The efficacy of the proposed method is shown through an analysis of the computational complexity and by simulation results from four different examples.
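The conversion of linear output weights into dependent parameters can be sketched for a one-hidden-neuron network: for each candidate nonlinear parameter, the least-squares output weight has a closed form, so the search runs over a solution space of reduced dimension. The coarse grid search below stands in for the paper's LM iteration, and the data and network size are our own toy choices:

```python
import math

X = [-2.0, -1.0, 0.0, 1.0, 2.0]
Y = [math.tanh(1.5 * x) for x in X]   # data generated with hidden parameter a = 1.5

def loss_given_a(a):
    """Treat the output weight as a dependent parameter: for hidden output
    h = tanh(a*x), the least-squares output weight is w = <h,y>/<h,h>,
    so the optimization runs over the nonlinear parameter a alone."""
    h = [math.tanh(a * x) for x in X]
    hh = sum(v * v for v in h)
    w = sum(hv * yv for hv, yv in zip(h, Y)) / hh if hh else 0.0
    return sum((w * hv - yv) ** 2 for hv, yv in zip(h, Y))

# Search the reduced (one-dimensional) solution space for the best a.
best_a = min((0.1 * k for k in range(1, 40)), key=loss_given_a)
```

The search recovers the generating parameter a = 1.5 without ever treating the output weight as a free variable, which is the dimension-reduction idea in miniature.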

7.
This paper proposes unconstrained functional networks as a new classifier for pattern recognition problems. Both the methodology and the learning algorithm for this computational intelligence classifier, based on an iterative least-squares optimization criterion, are derived. The performance of this new intelligent-systems scheme is demonstrated and examined using real-world applications. A comparative study with the most common classification algorithms in both the machine learning and statistics communities is carried out. The study used only sets of second-order linearly independent polynomial functions to approximate the neuron functions. The results show that this new framework classifier is reliable, flexible, stable, and achieves high-quality performance.

8.
A major problem in designing artificial neural networks is the proper choice of the network architecture. Especially for vision networks classifying three-dimensional (3-D) objects, this problem is very challenging, as these networks are necessarily large and the search space for defining them is therefore of very high dimensionality. This strongly increases the chances of obtaining only suboptimal structures from standard optimization algorithms. We tackle this problem in two ways. First, we use biologically inspired hierarchical vision models to narrow the space of possible architectures and to reduce the dimensionality of the search space. Second, we employ evolutionary optimization techniques to determine optimal features and nonlinearities of the visual hierarchy. Here, we especially focus on higher-order complex features in higher hierarchical stages. We compare two different approaches to performing an evolutionary optimization of these features. In the first setting, we code the features directly into the genome. In the second setting, in analogy to an ontogenetic development process, we suggest the new method of coding the features indirectly via an unsupervised learning process embedded into the evolutionary optimization. In both cases the processing nonlinearities are encoded directly into the genome and are thus subject to optimization. The fitness of the individuals for the evolutionary selection process is computed by measuring the network classification performance on a benchmark image database, using a nearest-neighbor classification approach based on the hierarchical feature output. We compare the found solutions with respect to their ability to generalize, differentiating between first- and second-order generalization. First-order generalization denotes how well the vision system, after evolutionary optimization of the features and nonlinearities using a database A, can classify previously unseen test views of objects from that database. By second-order generalization, we denote the ability of the vision system to perform classification on a database B using the features and nonlinearities optimized on database A. We show that the direct feature coding approach leads to networks with better first-order generalization, whereas second-order generalization is at an equally high level for both direct and indirect coding. We also compare the second-order generalization results with other state-of-the-art recognition systems and show that both approaches lead to optimized recognition systems that are highly competitive with recent recognition algorithms.

9.
This paper demonstrates how unsupervised learning based on Hebb-like mechanisms is sufficient for training second-order neural networks to perform different types of motion analysis. The paper studies the convergence properties of the network in several conditions, including different levels of noise and motion coherence and different network configurations. We demonstrate the effectiveness of a novel variability dependent learning mechanism, which allows the network to learn under conditions of large feature similarity thresholds, which is crucial for noise robustness. The paper demonstrates the particular relevance of second-order neural networks and therefore correlation based approaches as contributing mechanisms for directional selectivity in the retina.
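The core Hebbian mechanism on a second-order (multiplicative, correlation-detecting) unit can be sketched as follows; the perfectly correlated input pair, binary signals, and learning rate are our own toy choices, not the paper's variability-dependent rule:

```python
import random

random.seed(1)
# A second-order unit responds to the product of two inputs. With a plain
# Hebbian rule dw = lr * (pre * post), the weight coupling a correlated
# input pair grows steadily, while that of an uncorrelated pair performs
# a random walk around zero.
w_corr, w_rand = 0.0, 0.0
for _ in range(1000):
    a = random.choice([-1.0, 1.0])
    b = a                              # perfectly correlated with a
    c = random.choice([-1.0, 1.0])     # independent of a
    w_corr += 0.01 * (a * b)           # Hebbian update for the correlated pair
    w_rand += 0.01 * (a * c)           # same update for the uncorrelated pair
```

Correlated motion signals (e.g., a feature seen at neighboring positions in sequence) are exactly what such a unit strengthens, which is the correlation-based account of directional selectivity.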

10.
We present an estimation approach to compute the viscoplastic behavior of a polymer matrix composite under different thermomechanical environments. This investigation uses a computational neural network as the tool for determining the creep behavior of the composite. We propose a new second-order learning algorithm for training multilayer networks. Training a neural network is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (weights and learning rates) corresponding to excitatory and inhibitory connections. We propose a technique for error minimization based on the truncated Newton (TN) large-scale unconstrained minimization technique, which has a quadratic convergence rate. This technique makes more sophisticated use of gradient information than simple steepest descent or conjugate gradient methods. In this work we briefly specify the details necessary for implementing the TN method to train the neural network that predicts the viscoplastic behavior of the polymeric composite. We provide comparative experimental results and explicit model results to verify the effectiveness of the neural-network-based model. These results verify the superiority of the present approach over the explicit modeling scheme. Moreover, the present study demonstrates for the first time the feasibility of introducing the TN method, with its quadratic convergence rate, to the field of neural networks.
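The truncated Newton idea, namely taking a Newton step whose linear system is solved only approximately by a few conjugate-gradient iterations, can be sketched on a small quadratic objective. The paper applies it to a network error function; the 2-D quadratic here is purely illustrative:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def cg(A, rhs, iters=2):
    """Inner loop of truncated Newton: a few conjugate-gradient steps give
    an approximate Newton direction without inverting the Hessian."""
    x = [0.0] * len(rhs)
    r = rhs[:]
    p = r[:]
    for _ in range(iters):
        rr = sum(v * v for v in r)
        if rr < 1e-16:
            break
        Ap = matvec(A, p)
        alpha = rr / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        beta = sum(v * v for v in r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

# Quadratic test objective f(x) = 0.5 x'Hx - g'x, whose minimizer solves H x = g.
H = [[4.0, 1.0], [1.0, 3.0]]
g = [1.0, 2.0]
x = [0.0, 0.0]
for _ in range(10):                       # outer (truncated) Newton loop
    grad = [hv - gv for hv, gv in zip(matvec(H, x), g)]
    d = cg(H, [-gi for gi in grad], iters=2)   # truncated inner solve
    x = [xi + di for xi, di in zip(x, d)]
```

Truncating the inner CG solve is what keeps each outer step cheap on large networks while retaining near-quadratic convergence close to the minimum.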

11.
In this paper, we propose a new passive weight learning law for switched Hopfield neural networks with time-delay under parametric uncertainty. Based on the proposed passive learning law, some new stability results, such as asymptotical stability, input-to-state stability (ISS), and bounded input-bounded output (BIBO) stability, are presented. An existence condition for the passive weight learning law of switched Hopfield neural networks is expressed in terms of strict linear matrix inequality (LMI). Finally, numerical examples are provided to illustrate our results.

12.
徐春荞, 张冰冰, 李培华. 《计算机应用研究》 (Application Research of Computers), 2021, 38(10): 3040-3043, 3048
Domain adversarial learning is a mainstream domain adaptation method that learns discriminative, domain-invariant features through a classifier and a domain discriminator. However, most existing domain adversarial methods rely on first-order features to learn domain-invariant features, ignoring second-order features, which have stronger representational power. We propose a conditional adversarial domain adaptation network that jointly models the second-order representations of images and the cross-covariance between features and classifier predictions, so as to learn discriminative, domain-invariant features more effectively. In addition, an entropy condition is introduced to balance the uncertainty of the classifier's predictions and thereby ensure the transferability of the features. The proposed method was validated on two widely used domain adaptation benchmarks, Office-31 and ImageCLEF-DA; the experimental results show that it outperforms comparable methods and achieves state-of-the-art performance.
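The second-order representation referred to above can, in the covariance-pooling sense, be sketched in a few lines: average (first-order) pooling of a feature set is replaced by its covariance matrix, which captures pairwise feature interactions. The tiny feature set is our own; the paper's network additionally conditions an adversarial discriminator on such statistics.

```python
# Covariance pooling of a set of local feature vectors.
feats = [[1.0, 2.0], [3.0, 0.0], [2.0, 4.0]]   # three 2-D local features
n, d = len(feats), len(feats[0])
mean = [sum(f[j] for f in feats) / n for j in range(d)]        # first-order
cov = [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / (n - 1)
        for j in range(d)] for i in range(d)]                  # second-order
```

The resulting d-by-d symmetric matrix, rather than the d-dimensional mean, is what a second-order head feeds to the classifier and discriminator.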

13.
In this paper, we present a new learning method using prior information for three-layered neural networks. Usually, when neural networks are used for system identification, all of their weights are trained independently, without considering the interrelations among weight values, so the training results are usually not good. The reason is that each parameter influences the others during learning. To overcome this problem, we first give an exact mathematical equation that describes the relation between weight values given by a set of data conveying prior information. We then present a new learning method that trains some of the weights and calculates the others using these exact mathematical equations. In almost all cases, this method keeps the prior information given by a mathematical structure exactly intact during learning. In addition, a learning method using prior information expressed by inequalities is also presented. In any case, the degree of freedom of the networks (the number of …

14.
In a great variety of neuron models, neural inputs are combined using the summing operation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial with the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are calculations of solution set component bounds for new network classes. As to lower bounds, we construct product unit networks of fixed depth with superlinear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction on the network order. We further consider various classes of higher-order units, also known as sigma-pi units, that are characterized by connectivity constraints, and derive some asymptotically tight bounds in terms of these. Multiplication plays an important role both in neural modeling of biological behavior and in computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results presented here provide new tools for assessing the impact of multiplication on the computational power and learning capabilities of neural networks.
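A product unit, one of the multiplicative unit types analyzed above, can be sketched in a few lines (the inputs and weights are toy values of our own choosing): it computes the product of its inputs raised to learnable exponents, so with integer weights it realizes higher-order (sigma-pi style) monomial terms that a single summing unit cannot express directly.

```python
def product_unit(xs, ws):
    """Product unit: prod_i x_i ** w_i, with learnable exponents w_i."""
    out = 1.0
    for x, w in zip(xs, ws):
        out *= x ** w
    return out

val = product_unit([2.0, 3.0], [2.0, 1.0])   # the monomial x1^2 * x2
```

Networks built from such units are exactly the product unit networks whose VC dimension the paper bounds.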

15.
In this paper, the global exponential stability in the Lagrange sense of continuous neutral-type recurrent neural networks (NRNNs) with multiple time delays is studied. Three different types of activation functions are considered, including general bounded functions and two types of sigmoid activation functions. By constructing appropriate Lyapunov functions, some easily verifiable criteria for the ultimate boundedness and global exponential attractivity of NRNNs are obtained. These results can be applied to monostable and multistable neural networks as well as to chaos control and chaos synchronization.

16.
In this paper, we propose some new results on stability for Takagi–Sugeno fuzzy delayed neural networks with a stable learning method. Based on the Lyapunov–Krasovskii approach, for the first time, a new learning method is presented to not only guarantee the exponential stability of Takagi–Sugeno fuzzy neural networks with time-delay, but also reduce the effect of external disturbance to a prescribed attenuation level. The proposed learning method can be obtained by solving a convex optimization problem which is represented in terms of a set of linear matrix inequalities (LMIs). An illustrative example is given to demonstrate the effectiveness of the proposed learning method.

17.
朱明敏, 刘三阳, 汪春峰. 《自动化学报》 (Acta Automatica Sinica), 2011, 37(12): 1514-1519
To address the shortcomings of learning Bayesian network (BN) structure from small-sample data sets, and the instability of statistical conditional independence (CI) tests as the conditioning set grows, this paper proposes an optimization method for learning network structure based on a prior node ordering. By defining an objective function and a feasible region, the new method, for the first time, transforms the Bayesian network structure learning problem into a mathematical programming problem of finding the extremum of the objective function, and proves the existence and uniqueness of the optimal solution, offering a new avenue for the continued development of Bayesian network research. Theoretical proofs and experimental results demonstrate the correctness and effectiveness of the new method.

18.
The proliferation of networked data in various disciplines motivates a surge of research interests on network or graph mining. Among them, node classification is a typical learning task that focuses on exploiting the node interactions to infer the missing labels of unlabeled nodes in the network. A vast majority of existing node classification algorithms overwhelmingly focus on static networks and they assume the whole network structure is readily available before performing learning algorithms. However, it is not the case in many real-world scenarios where new nodes and new links are continuously being added in the network. Considering the streaming nature of networks, we study how to perform online node classification on this kind of streaming networks (a.k.a. online learning on streaming networks). As the existence of noisy links may negatively affect the node classification performance, we first present an online network embedding algorithm to alleviate this problem by obtaining the embedding representation of new nodes on the fly. Then we feed the learned embedding representation into a novel online soft margin kernel learning algorithm to predict the node labels in a sequential manner. Theoretical analysis is presented to show the superiority of the proposed framework of online learning on streaming networks (OLSN). Extensive experiments on real-world networks further demonstrate the effectiveness and efficiency of the proposed OLSN framework.

19.
This paper presents a novel learning algorithm of fuzzy perceptron neural networks (FPNNs) for classifiers that utilize expert knowledge represented by fuzzy IF-THEN rules as well as numerical data as inputs. The conventional linear perceptron network is extended to a second-order one, which is much more flexible for defining a discriminant function. In order to handle fuzzy numbers in neural networks, level sets of fuzzy input vectors are incorporated into perceptron neural learning. At different levels of the input fuzzy numbers, updating the weight vector depends on the minimum of the output of the fuzzy perceptron neural network and the corresponding nonfuzzy target output that indicates the correct class of the fuzzy input vector. This minimum is computed efficiently by employing the modified vertex method. Moreover, the fuzzy pocket algorithm is introduced into our fuzzy perceptron learning scheme to solve the nonseparable problems. Simulation results demonstrate the effectiveness of the proposed FPNN model.

20.
In this paper, we present a new learning method using prior information for three-layer neural networks. Usually, when neural networks are used for system identification, all of their weights are trained independently, without considering the interrelations among weight values, so the training results are usually not good. The reason is that each parameter influences the others during learning. To overcome this problem, we first give an exact mathematical equation that describes the relation between weight values, given a set of data conveying prior information. We then present a new learning method that trains some of the weights and calculates the others using these exact mathematical equations. This method keeps the given a priori mathematical structure exactly intact during learning; in other words, training is done so that the network follows a predetermined trajectory. Numerical computer simulation results are provided to support this approach. This work was presented, in part, at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.
