20 similar documents found
1.
To avoid unstable phenomena during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. In the multiplier method, the explicit stability conditions are introduced into the iterative error index, so the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iteration by an equation derived from the stability conditions. With these stable dynamic backpropagation algorithms, any analog target pattern may be implemented by a steady output vector that is a nonlinear vector function of the stable equilibrium point. The applicability of the presented approaches is illustrated through both analog and binary pattern storage examples.
2.
Some globally asymptotical stability criteria for the equilibrium states of a general class of discrete-time dynamic neural networks with continuous states are presented using a diagonal Lyapunov function approach. Throughout the paper, the neural networks are assumed to have asymmetric weight matrices. The resulting criteria are described by the diagonal stability of some matrices associated with the network parameters. Some novel stability conditions, expressed either as the existence of positive diagonal solutions of the Lyapunov equations or as inequalities, are given. Using the equivalence between diagonal stability and Schur stability for a nonnegative matrix, some simplified global stability conditions are also presented. Finally, some examples are provided to demonstrate the effectiveness of the presented global stability conditions.
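The simplified condition mentioned at the end of the abstract lends itself to a compact numerical check. The sketch below assumes a discrete-time network of the form x(k+1) = f(Wx(k) + u) with slope-bounded activations and tests the Schur stability of the nonnegative matrix |W| scaled by the slope bounds; the matrix construction and function names are illustrative assumptions rather than the paper's exact criteria.

```python
import numpy as np

def is_schur_stable_nonneg(A):
    """For a nonnegative matrix, diagonal (Schur) stability is equivalent to
    Schur stability, so a plain spectral-radius test rho(A) < 1 suffices."""
    return float(np.max(np.abs(np.linalg.eigvals(A)))) < 1.0

def simplified_global_stability_check(W, slope_bounds):
    """Illustrative sufficient test (an assumption, not the paper's criterion):
    build the nonnegative matrix |W| * diag(activation slope bounds)
    and test whether it is Schur stable."""
    A = np.abs(W) @ np.diag(slope_bounds)
    return is_schur_stable_nonneg(A)

# toy example with an asymmetric weight matrix and unit slope bounds
W = np.array([[ 0.2, -0.5],
              [ 0.4,  0.1]])
print(simplified_global_stability_check(W, np.ones(2)))
```

For a nonnegative matrix, diagonal stability and Schur stability coincide, which is why a plain spectral-radius test is enough in this simplified setting.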
3.
An adaptive output feedback control scheme for the output tracking of a class of continuous-time nonlinear plants is presented. An RBF neural network is used to adaptively compensate for the plant nonlinearities. The network weights are adapted using a Lyapunov-based design. The method uses parameter projection, control saturation, and a high-gain observer to achieve semi-global uniform ultimate boundedness. The effectiveness of the proposed method is demonstrated through simulations. The simulations also show that by using adaptive control in conjunction with robust control, it is possible to tolerate larger approximation errors resulting from the use of lower-order networks.
4.
Neural-network front ends in unsupervised learning
Proposed is an idea of partial supervision realized in the form of a neural-network front end to schemes of unsupervised learning (clustering). This neural network induces an anisotropic feature space; the anisotropy provides the local deformations needed to properly represent the labeled data and to enhance the efficiency of the clustering mechanisms exploited afterwards. The training of the network is completed based upon the available labeled patterns; a referential form of the labeling gives rise to reinforcement learning. It is shown that the discussed approach is universal and can be utilized in conjunction with any clustering method. Experimental studies concentrate on three main categories of unsupervised learning: FUZZY ISODATA, Kohonen self-organizing maps, and hierarchical clustering.
5.
An analysis of the absolute stability of a general class of discrete-time recurrent neural networks (RNNs) is presented. A discrete-time model of RNNs is represented by a set of nonlinear difference equations. Some sufficient conditions for absolute stability are derived using Ostrowski's theorem and the similarity transformation approach. For a given RNN model, these conditions are determined by the synaptic weight matrix of the network. The results reported in this paper impose fewer constraints on the weight matrix and the model than previously published studies.
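Ostrowski's theorem gives an eigenvalue-inclusion region built from deleted row and column sums, so a conservative stability check can be run without computing eigenvalues at all. The sketch below is only a rough illustration of that idea for a discrete-time RNN with slope-bounded activations; the conditions derived in the paper are weaker, and the choice of |W| as the test matrix and the helper names are assumptions.

```python
import numpy as np

def ostrowski_radius_bound(A, alpha=0.5):
    """Ostrowski eigenvalue-inclusion bound: every eigenvalue of A lies in some
    oval |z - a_ii| <= R_i**alpha * C_i**(1-alpha), with R_i, C_i the deleted
    row and column sums of |A|. The returned value therefore upper-bounds the
    spectral radius of both A and |A|."""
    absA = np.abs(A)
    R = absA.sum(axis=1) - np.diag(absA)     # deleted row sums
    C = absA.sum(axis=0) - np.diag(absA)     # deleted column sums
    return float(np.max(np.diag(absA) + R**alpha * C**(1 - alpha)))

def conservative_absolute_stability(W, alpha=0.5):
    """Assumed sufficient test, cruder than the paper's conditions: if the
    Ostrowski bound on rho(|W|) is below 1 and the activation slopes are
    bounded by 1, the update map is a contraction, hence absolutely stable."""
    return ostrowski_radius_bound(W, alpha) < 1.0

W = np.array([[ 0.10, 0.30, -0.20],
              [ 0.20, 0.00,  0.40],
              [-0.10, 0.20,  0.10]])
print(conservative_absolute_stability(W))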
6.
We show how the quantum paradigm can be used to speed up unsupervised learning algorithms. More precisely, we explain how it is possible to accelerate learning algorithms by quantizing some of their subroutines. Quantization refers to the process of partially or totally converting a classical algorithm to its quantum counterpart in order to improve performance. In particular, we give quantized versions of clustering via minimum spanning tree, divisive clustering, and k-medians that are faster than their classical analogues. We also describe a distributed version of k-medians that allows the participants to save on the global communication cost of the protocol compared to the classical version. Finally, we design quantum algorithms for the construction of a neighbourhood graph, for outlier detection, and for smart initialization of the cluster centres.
7.
Ling Chen, Chuandong Li, Tingwen Huang, Yiran Chen, Xin Wang. Neural Computing & Applications, 2014, 25(2): 393-400
This letter presents a new memristor crossbar array system and demonstrates its applications in image learning. A controlled-pulse and image-overlay technique is introduced for programming the memristor crossbars, promising better noise-reduction performance. A time-slot technique helps to improve the image-processing speed. Simulink and numerical simulations are employed to demonstrate useful applications of the proposed circuit structure in image learning.
8.
Microsystem Technologies - This research focuses on bot detection through implementation of techniques such as traffic analysis, unsupervised machine learning, and similarity analysis between...
9.
Pena J.M., Lozano J.A., Larranaga P., Inza I. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(6): 590-603
This paper introduces a novel enhancement for unsupervised learning of conditional Gaussian networks that benefits from feature selection. Our proposal is based on the assumption that, in the absence of labels reflecting the cluster membership of each case of the database, features that exhibit low correlation with the rest of the features can be considered irrelevant for the learning process. Thus, we suggest performing this process using only the relevant features. Every irrelevant feature is then added to the learned model to obtain an explanatory model for the original database, which is our primary goal. A simple and thus efficient measure for assessing the relevance of the features to the learning process is presented. Additionally, the form of this measure allows us to calculate a relevance threshold to automatically identify the relevant features. The experimental results reported for synthetic and real-world databases show the ability of our proposal to distinguish between relevant and irrelevant features and to accelerate learning, while still obtaining good explanatory models for the original database.
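As a rough illustration of the stated assumption, namely that features weakly correlated with the remaining ones are irrelevant for structure learning, the sketch below scores each feature by its mean absolute correlation with the others and splits the feature set at an assumed automatic threshold. The paper's actual relevance measure and threshold derivation differ.

```python
import numpy as np

def feature_relevance(X):
    """Score each feature by its mean absolute correlation with all other
    features; low scores are treated as irrelevant. The paper's exact
    relevance measure and threshold derivation differ from this sketch."""
    corr = np.corrcoef(X, rowvar=False)      # feature-by-feature correlation matrix
    np.fill_diagonal(corr, 0.0)
    return np.mean(np.abs(corr), axis=1)

def split_relevant(X, threshold=None):
    scores = feature_relevance(X)
    if threshold is None:
        threshold = scores.mean()            # assumed automatic threshold
    return np.where(scores >= threshold)[0], np.where(scores < threshold)[0]

# toy data: five features driven by two latent factors, plus two pure-noise features
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
X = np.hstack([latent, latent @ rng.normal(size=(2, 3)), rng.normal(size=(500, 2))])
relevant, irrelevant = split_relevant(X)
print(relevant, irrelevant)
```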
10.
Mathew David Mackenzie. Neural Computing & Applications, 1995, 3(1): 2-16
A novel neural network called Class Directed Unsupervised Learning (CDUL) is introduced. The architecture, based on a Kohonen self-organising network, uses additional input nodes to feed class knowledge to the network during training in order to optimise the final positioning of the Kohonen nodes in feature space. The structure and training of CDUL networks are detailed, showing that (a) networks cannot suffer from the problem of single Kohonen nodes being trained by vectors of more than one class, (b) the number of Kohonen nodes necessary to represent the classes is found during training, and (c) the number of training-set passes CDUL requires is low in comparison to similar networks. CDUL is subsequently applied to the classification of chemical excipients from Near Infrared (NIR) reflectance spectra, and its performance is compared with three other unsupervised paradigms. The results demonstrate superior performance that remains relatively constant over a wide range of network parameters.
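The core idea of feeding class knowledge through extra input nodes can be illustrated in a few lines of Python: append a scaled one-hot class code to each training vector so that prototypes of different classes separate in the augmented space, and keep only the feature part at recall time. Everything below (node count, gain, learning-rate decay) is an illustrative assumption rather than the published CDUL procedure, which in particular also determines the number of nodes during training.

```python
import numpy as np

def train_cdul_like(X, y, n_nodes=6, class_gain=2.0, epochs=30, lr=0.5, seed=0):
    """Sketch of class-directed training: scaled one-hot class codes are
    appended to the inputs, a winner-take-all update trains the prototypes in
    the augmented space, and only the feature part is kept for recall."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    onehot = class_gain * (y[:, None] == classes[None, :]).astype(float)
    Xa = np.hstack([X, onehot])                       # augmented training vectors
    W = Xa[rng.choice(len(Xa), n_nodes, replace=False)].copy()
    for _ in range(epochs):
        for v in Xa:
            winner = np.argmin(((v - W) ** 2).sum(axis=1))
            W[winner] += lr * (v - W[winner])         # simple winner-take-all update
        lr *= 0.9
    return W[:, :X.shape[1]], W[:, X.shape[1]:]       # feature prototypes, class signatures

X = np.vstack([np.random.default_rng(c).normal(c * 2.0, 0.3, size=(50, 2)) for c in range(2)])
y = np.repeat([0, 1], 50)
protos, sigs = train_cdul_like(X, y)
print(np.round(protos, 2))
```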
11.
Recursive unsupervised learning of finite mixture models
Zivkovic Z., van der Heijden F. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(5): 651-656
There are two open problems when finite mixture densities are used to model multivariate data: the selection of the number of components and the initialization. In this paper, we propose an online (recursive) algorithm that estimates the parameters of the mixture and that simultaneously selects the number of components. The new algorithm starts with a large number of randomly initialized components. A prior is used as a bias for maximally structured models. A stochastic approximation recursive learning algorithm is proposed to search for the maximum a posteriori (MAP) solution and to discard the irrelevant components.
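A minimal sketch of such an online estimator is given below: start with many randomly placed isotropic Gaussian components, update them per sample by stochastic approximation, apply a small negative bias to the mixing weights to favour compact models, and discard any component whose weight drops to zero. The step size, prior strength, and isotropic covariances are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def recursive_mixture(X, K0=10, rho=0.05, seed=0):
    """Online estimation of an isotropic Gaussian mixture, in the spirit of the
    abstract: many random components, per-sample recursive updates, a negative
    bias on the mixing weights, and removal of components with weight <= 0."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    c_T = 0.5 / n                               # assumed strength of the complexity bias
    mu = X[rng.choice(n, K0, replace=False)].astype(float)
    var = np.full(K0, X.var())
    pi = np.full(K0, 1.0 / K0)

    for x in X:
        d2 = ((x - mu) ** 2).sum(axis=1)
        logp = np.log(pi) - 0.5 * d2 / var - 0.5 * d * np.log(var)
        o = np.exp(logp - logp.max())
        o /= o.sum()                            # "ownership" (responsibility) of each component
        gain = rho * o / np.maximum(pi, 1e-3)
        mu += gain[:, None] * (x - mu)
        var += gain * (d2 / d - var)
        pi = pi + rho * (o - pi) - rho * c_T    # negative bias shrinks unused weights
        keep = pi > 0                           # discard components with non-positive weight
        mu, var, pi = mu[keep], var[keep], pi[keep]
        pi /= pi.sum()
    return pi, mu, var

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(300, 2)) for m in (0.0, 3.0)])
rng.shuffle(X)
pi, mu, var = recursive_mixture(X)
print(len(pi), np.round(mu, 2))
```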
12.
A large and influential class of neural network architectures uses postintegration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented here in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through preintegration lateral inhibition, does provide appropriate coding properties and can be used to learn such representations efficiently. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that preintegration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
13.
Fuzzy identification of systems with unsupervised learning
Luciano A.M., Savastano M. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 1997, 27(1): 138-141
The paper describes a mathematical tool for building a fuzzy model whose membership functions and consequent parameters are estimated from a data set. The proposed method proved capable of approximating any real continuous function, including strongly nonlinear ones, on a compact set to arbitrary accuracy. Without resorting to domain experts, the algorithm constructs a model-free, complete function approximation system. Applications to the modeling of several functions, among them classical nonlinear ones such as the Rosenbrock and sine(x, y) functions, are reported. The proposed algorithm can find applications in the development of fuzzy logic controllers (FLCs).
14.
Optimal, unsupervised learning in invariant object recognition
A means for establishing transformation-invariant representations of objects is proposed and analyzed, in which different views are associated on the basis of the temporal order of the presentation of these views, as well as their spatial similarity. Assuming knowledge of the distribution of presentation times, an optimal linear learning rule is derived. Simulations of a competitive network trained on a character recognition task are then used to highlight the success of this learning rule in relation to simple Hebbian learning and to show that the theory can give accurate quantitative predictions for the optimal parameters of such networks.
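The paper derives its optimal linear rule from the distribution of presentation times; the simpler Hebbian trace rule it is compared against can be sketched as follows, where each unit keeps an exponentially decaying trace of its own activity so that temporally adjacent views drive the same weights. Parameter names and values are illustrative assumptions.

```python
import numpy as np

def trace_rule_step(w, x, y_trace, eta=0.2, alpha=0.01):
    """One step of a Hebbian trace rule: each unit keeps an exponentially
    decaying trace of its activity, and weights are updated by the product of
    that trace with the current input, so temporally adjacent views of an
    object reinforce the same units. Parameter values are illustrative."""
    y = w @ x                                        # linear unit responses
    y_trace = (1 - eta) * y_trace + eta * y          # temporal activity trace
    w = w + alpha * np.outer(y_trace, x)             # trace-driven Hebbian update
    w /= np.linalg.norm(w, axis=1, keepdims=True)    # keep weight vectors bounded
    return w, y_trace

rng = np.random.default_rng(0)
n_units, n_inputs = 4, 16
w = rng.normal(size=(n_units, n_inputs))
w /= np.linalg.norm(w, axis=1, keepdims=True)
y_trace = np.zeros(n_units)

# present noisy "views" of the same object in temporal succession
view = rng.normal(size=n_inputs)
for _ in range(200):
    x = view + 0.1 * rng.normal(size=n_inputs)
    w, y_trace = trace_rule_step(w, x, y_trace)
print(np.round(w @ view, 2))
```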
15.
Existence and uniqueness of equilibrium, as well as its stability and instability, of a continuous-time Hopfield neural network are studied. A set of new and simple sufficient conditions is derived.
16.
Neurocontroller design via supervised and unsupervised learning
In this paper we study the role of supervised and unsupervised neural learning schemes in the adaptive control of nonlinear dynamic systems. We suggest and demonstrate that the teacher's knowledge in the supervised learning mode includes a priori plant structural knowledge, which may be employed in the design of exploratory schedules during learning, resulting in an unsupervised learning scheme. We further demonstrate that neurocontrollers may realize both linear and nonlinear control laws that are given explicitly by an automated teacher or implicitly through a human operator, and that their robustness may be superior to that of a model-based controller. Examples of both learning schemes are provided in the adaptive control of robot manipulators and a cart-pole system.
17.
18.
Slow feature analysis: unsupervised learning of invariances
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
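The core SFA computation, nonlinear expansion, whitening, and principal component analysis of the time derivative, fits in a few dozen lines. The sketch below uses a quadratic expansion and a toy signal in which a slow sine is hidden inside faster oscillations; it is a stripped-down illustration, not the full hierarchical system described in the abstract.

```python
import numpy as np

def quadratic_expansion(X):
    """All monomials of degree 1 and 2 of the input components."""
    n, d = X.shape
    quad = np.column_stack([X[:, i] * X[:, j] for i in range(d) for j in range(i, d)])
    return np.hstack([X, quad])

def sfa(X, n_features=1):
    """Stripped-down Slow Feature Analysis: expand the signal nonlinearly,
    whiten it, and keep the directions in which the time derivative of the
    whitened signal has the smallest variance (the slowest features)."""
    Z = quadratic_expansion(X)
    mean_Z = Z.mean(axis=0)
    Zc = Z - mean_Z
    evals, evecs = np.linalg.eigh(np.cov(Zc, rowvar=False))
    keep = evals > 1e-10
    W_white = evecs[:, keep] / np.sqrt(evals[keep])      # whitening matrix
    dZ = np.diff(Zc @ W_white, axis=0)                   # discrete time derivative
    _, d_evecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
    P = d_evecs[:, :n_features]                          # eigh sorts ascending: slowest first
    return lambda Xnew: ((quadratic_expansion(Xnew) - mean_Z) @ W_white) @ P

# toy signal: a slow sine hidden inside faster oscillations
t = np.linspace(0, 2 * np.pi, 2000)
X = np.column_stack([np.sin(t) + np.cos(11 * t) ** 2, np.cos(11 * t)])
slow = sfa(X, n_features=1)(X)
print(abs(np.corrcoef(slow[:, 0], np.sin(t))[0, 1]))     # close to 1 if the slow sine was found
```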
19.
Dong-Chul Park. IEEE Transactions on Neural Networks, 2000, 11(2): 520-528
An unsupervised competitive learning algorithm based on the classical k-means clustering algorithm is proposed. The proposed learning algorithm, called the centroid neural network (CNN), estimates the centroids of the related cluster groups in the training data. This paper also explains algorithmic relationships between the CNN and some conventional unsupervised competitive learning algorithms, including Kohonen's self-organizing map and Kosko's differential competitive learning algorithm. The CNN algorithm requires neither a predetermined schedule for the learning coefficient nor a total number of iterations for clustering. The simulation results on clustering and image compression problems show that the CNN converges much faster than conventional algorithms with comparable clustering quality, while the other algorithms may give unstable results depending on the initial values of the learning coefficient and the total number of iterations.
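The schedule-free flavour of the centroid update can be sketched as follows: when a sample switches clusters, the gaining and losing centroids are adjusted by exact incremental-mean updates, so each prototype always equals the mean of its current members and no learning-rate schedule is required. The code is a simplified reading of the abstract rather than a faithful reimplementation of the published CNN algorithm.

```python
import numpy as np

def centroid_nn(X, K, epochs=20, seed=0):
    """Centroid-style competitive learning sketch: when a sample moves from its
    old cluster to a new winner, both centroids receive exact incremental-mean
    updates, so no learning-rate schedule is needed. Details are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = X[rng.choice(n, K, replace=False)].copy()   # initial centroids
    counts = np.ones(K)
    labels = np.full(n, -1)
    for _ in range(epochs):
        changed = 0
        for idx, x in enumerate(X):
            winner = int(np.argmin(((x - w) ** 2).sum(axis=1)))
            old = labels[idx]
            if winner == old:
                continue
            # winner gains the sample: move its centroid toward x
            w[winner] += (x - w[winner]) / (counts[winner] + 1)
            counts[winner] += 1
            # previous cluster loses the sample: move its centroid away from x
            if old >= 0 and counts[old] > 1:
                w[old] -= (x - w[old]) / (counts[old] - 1)
                counts[old] -= 1
            labels[idx] = winner
            changed += 1
        if changed == 0:        # converged: no membership changes
            break
    return w, labels

X = np.vstack([np.random.default_rng(i).normal(3 * i, 0.4, size=(100, 2)) for i in range(3)])
w, labels = centroid_nn(X, K=3)
print(np.round(w, 2))
```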
20.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
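The selection criterion can be illustrated with a brute-force version of the greedy step: at each iteration, add the feature whose inclusion most reduces the Frobenius-norm reconstruction error of the data matrix from the selected columns. The paper's contribution is an efficient recursive formula for this error; the sketch below recomputes it naively via least squares and is illustrative only.

```python
import numpy as np

def greedy_feature_selection(X, k):
    """Greedy unsupervised feature selection sketch: at each step, add the
    feature whose inclusion most reduces the Frobenius reconstruction error of
    X from the selected columns (computed here by brute-force least squares,
    not by the paper's recursive formula)."""
    n, d = X.shape
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(d):
            if j in selected:
                continue
            S = X[:, selected + [j]]
            # project X onto the span of the candidate column set
            coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)
            err = np.linalg.norm(X - S @ coeffs) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
informative = rng.normal(size=(200, 3))
X = np.hstack([informative, informative @ rng.normal(size=(3, 5)) + 0.01 * rng.normal(size=(200, 5))])
print(greedy_feature_selection(X, k=3))
```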