Similar Documents
20 similar documents found (search time: 15 ms)
1.
Mario, David, Francisco B. Neurocomputing, 2009, 72(16-18): 3795
A model of an attractor neural network on a small-world topology (local and random connectivity) is investigated. The synaptic weights are random, driving the network towards a disordered state of neural activity. An ordered macroscopic neural state is induced by a bias in the network's weight connections, and the network's evolution when initialized in blocks of positive/negative activity is studied. The retrieval of the block-like structure is investigated, and an application to the Hebbian learning of a pattern carrying local information is presented. The block attractor and the global attractor compete according to the initial conditions, and the change of stability from one to the other depends on the long-range character of the network connectivity, as shown with a flow-diagram analysis. Moreover, a larger number of blocks emerges as the network is diluted.
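The block-retrieval setup described above can be sketched with a stdlib-only toy model. This is not the authors' model: the Watts-Strogatz-style rewiring, the parameters n, k, p, and the single stored block pattern are all illustrative assumptions.

```python
import random

random.seed(0)

def small_world_adj(n, k, p):
    """Ring lattice (k neighbours per side); each local edge is rewired to a
    random target with probability p, giving small-world connectivity."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = random.randrange(n) if random.random() < p else (i + d) % n
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def hebbian_weights(pattern, nbrs):
    """Hebbian weights for one stored pattern, restricted to the graph edges."""
    return {(i, j): pattern[i] * pattern[j] for i in nbrs for j in nbrs[i]}

def retrieve(state, nbrs, w, sweeps=10):
    """Deterministic (zero-temperature) sequential dynamics."""
    for _ in range(sweeps):
        for i in range(len(state)):
            h = sum(w[i, j] * state[j] for j in nbrs[i])
            if h != 0:
                state[i] = 1 if h > 0 else -1
    return state

n = 100
pattern = [1 if i < n // 2 else -1 for i in range(n)]          # one + block, one - block
nbrs = small_world_adj(n, k=3, p=0.2)
w = hebbian_weights(pattern, nbrs)
noisy = [s if random.random() > 0.1 else -s for s in pattern]  # 10% of sites flipped
out = retrieve(noisy, nbrs, w)
overlap = sum(a * b for a, b in zip(out, pattern)) / n
print(overlap)   # overlap with the stored block pattern, close to 1
```

Raising p strengthens the long-range character of the connectivity, which in the paper's analysis shifts the competition between block and global attractors.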

2.
We study the effect of competition between short-term synaptic depression and facilitation on the dynamic properties of attractor neural networks, using Monte Carlo simulation and a mean-field analysis. Depending on the balance of depression, facilitation, and the underlying noise, the network displays different behaviors, including associative memory and switching of activity between different attractors. We conclude that synaptic facilitation enhances attractor instability in a way that (1) intensifies the system's adaptability to external stimuli, in agreement with experiments, and (2) favors the retrieval of information with less error during short time intervals.
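A minimal discrete-time sketch of competing short-term depression and facilitation, in the spirit of the standard Tsodyks-Markram description; the parameters U, tau_f, tau_d and the 20 Hz input train are illustrative assumptions, not values from the paper.

```python
def simulate(spikes, dt=1.0, U=0.2, tau_f=500.0, tau_d=200.0):
    """Discrete-time short-term plasticity: u facilitates (jumps on spikes,
    relaxes to baseline U), x depresses (consumed on spikes, recovers to 1)."""
    u, x = U, 1.0
    efficacies = []
    for spiking in spikes:
        u += dt * (U - u) / tau_f        # facilitation variable relaxes to baseline
        x += dt * (1.0 - x) / tau_d      # synaptic resources recover
        if spiking:
            u += U * (1.0 - u)           # facilitation: u jumps toward 1
            efficacies.append(u * x)     # transmitted strength ~ u * x
            x -= u * x                   # depression: a fraction u of x is consumed
    return efficacies

train = [t % 50 == 0 for t in range(1000)]   # regular 20 Hz spike train (dt = 1 ms)
eff = simulate(train)
print(round(eff[0], 3), round(eff[-1], 3))   # depression dominates at this rate
```

Shifting the balance (larger tau_f, smaller U) lets facilitation win instead, which is the regime the abstract links to attractor instability and switching.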

3.
We study, both analytically and numerically, the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that of the neuron dynamics, and it produces short-time synaptic depression. This is inspired by recent neurobiological findings showing that synaptic strength may either increase or decrease on a short timescale depending on presynaptic activity. We thus describe a mechanism by which fast presynaptic noise enhances the neural network's sensitivity to an external stimulus. The reason is that, in general, presynaptic noise induces nonequilibrium behavior and, consequently, the space of fixed points is qualitatively modified in such a way that the system can easily escape from an attractor. As a result, the model exhibits, in addition to pattern recognition, class identification and categorization, which may be relevant to the understanding of some complex tasks of the brain.

4.
Yumei, Daoyi, Zhichun. Neurocomputing, 2007, 70(16-18): 2953
In this paper, a model is considered that describes the dynamics of a class of non-autonomous neural networks with time-varying delays. By applying the properties of M-matrices, the techniques of inequality analysis, and the Banach fixed-point theorem, we obtain a series of new criteria on dissipativity and the existence of a periodic attractor. Our results extend and improve earlier ones.

5.

Object tracking remains challenging in computer vision because of severe object variation, e.g., deformation, occlusion, and rotation. To handle such variation and achieve robust tracking performance, we propose a novel relationship-based tracking algorithm using neural networks. Compared with existing approaches in the literature, our method assumes the target object consists of several parts and considers the evolution of the topological structure among these parts. After training a candidate neural network to predict the probable areas where each part may be located in the successive frame, we design a novel collaboration neural network to determine the precise area where each part will be located, taking into account the topological structure among the individual parts, which is learned from their historical physical locations during online tracking. Experimental results show that the proposed method outperforms state-of-the-art trackers on a benchmark dataset, yielding significant accuracy improvements on highly distorted sequences.


6.
Zemel RS, Mozer MC. Neural Computation, 2001, 13(5): 1045-1064
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion: cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU-intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have dynamics similar to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments exploring the behavior of localist attractor networks, showing that they yield few spurious attractors and readily exhibit two desirable properties of psychological and neurobiological models: priming (faster convergence to an attractor if it has been recently visited) and gang effects (in which the presence of an attractor enhances the attractor basins of neighboring attractors).
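The statistical dynamics of a localist attractor net can be sketched as a soft assignment that sharpens over time. This is a hedged toy version: the squared-distance responsibilities, the annealing schedule, and all constants are illustrative, not the paper's exact formulation.

```python
import math

def localist_attractor(y, attractors, steps=30, sigma=1.0, decay=0.8):
    """State y moves to a convex combination of the attractors, weighted by
    softmax responsibilities over squared distances; annealing sigma sharpens
    the assignment until a single attractor wins."""
    for _ in range(steps):
        logits = [-sum((yi - wi) ** 2 for yi, wi in zip(y, w)) / (2 * sigma ** 2)
                  for w in attractors]
        m = max(logits)                      # max-shift for numerical stability
        qs = [math.exp(l - m) for l in logits]
        z = sum(qs)
        qs = [q / z for q in qs]
        y = [sum(q * w[d] for q, w in zip(qs, attractors)) for d in range(len(y))]
        sigma = max(0.05, sigma * decay)     # anneal toward a hard assignment
    return y

attractors_ = [[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]]
y = localist_attractor([0.8, 0.9], attractors_)
print([round(v, 3) for v in y])   # settles on the nearest attractor, [1, 1]
```

Because each attractor is represented by its own unit, there are no cross-talk weights, which is why this formulation avoids the spurious attractors the abstract mentions.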

7.
In this paper, neural network- and feature-based approaches are introduced to overcome current shortcomings in the automated integration of topology design and shape optimization. The topology optimization results are reconstructed in terms of features, which consist of attributes required for automation and integration in subsequent applications. Features are defined as cost-efficient simple shapes for manufacturing. A neural network-based image-processing technique is presented to match the arbitrarily shaped holes inside the structure with predefined features. The effectiveness of the proposed approach in integrating topology design and shape optimization is demonstrated with several experimental examples.

8.
We provide a characterization of the expressive powers of several models of deterministic and nondeterministic first-order recurrent neural networks according to their attractor dynamics. The expressive power of neural nets is expressed as the topological complexity of their underlying neural ω-languages, and refers to the ability of the networks to perform more or less complicated classification tasks via the manifestation of specific attractor dynamics. In this context, we prove that most neural models under consideration are strictly more powerful than Muller Turing machines. These results provide new insights into the computational capabilities of recurrent neural networks.

9.
This research explores the use of classification approaches to facilitate the accurate estimation of probabilistic constraints in optimization problems under uncertainty. The efficiency of the proposed framework is achieved by combining a conventional topology optimization method with a classification approach, namely probabilistic neural networks (PNN). Specifically, the implemented framework using PNN is useful for highly nonlinear or disjoint failure-domain problems. The effectiveness of the proposed framework is demonstrated with three examples. The first deals with the estimation of the limit-state function in the case of disjoint failure domains. The second shows the efficacy of the proposed method in designing the stiffest structure through the topology optimization process, considering random field inputs and disjoint failure phenomena such as buckling. The third demonstrates the applicability of the proposed method to a practical engineering problem.

10.
Information theory suggests that extraction of the principal sub-space from data is useful when the input to a neural network is corrupted with additive noise. A number of neural network algorithms exist which can find this principal sub-space, many of which also extract the principal components of the input. However, when there is noise on both input and output of a network, simply extracting the principal sub-space (or components) is not sufficient to optimize information capacity. An approximate solution to maximizing information capacity would be to extract the principal sub-space of components with variances above a certain threshold, and then ensure that these are uncorrelated and that they have equal variance at the output. A neural network is described which uses negative feedback connections to achieve this uncorrelated, equal-variance solution.
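The negative-feedback idea can be illustrated with a single output unit, where the update reduces to Oja-style subspace learning on the feedback residual. This is a sketch under assumed toy data: the 2-D input distribution, learning rate, and iteration count are illustrative, and the full equal-variance mechanism of the paper is not reproduced.

```python
import math
import random

random.seed(1)

eta = 0.02
w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
for _ in range(5000):
    s = random.gauss(0.0, 1.0)
    x = [s + random.gauss(0.0, 0.1), s + random.gauss(0.0, 0.1)]  # variance along [1, 1]
    y = w[0] * x[0] + w[1] * x[1]            # feedforward activation
    e = [x[0] - w[0] * y, x[1] - w[1] * y]   # negative feedback: residual after reconstruction
    w = [w[0] + eta * y * e[0], w[1] + eta * y * e[1]]  # Hebbian update on the residual

norm = math.hypot(w[0], w[1])
cos = abs(w[0] + w[1]) / (norm * math.sqrt(2.0))  # alignment with principal direction [1, 1]
print(round(norm, 2), round(cos, 3))   # weight converges to a unit vector along [1, 1]
```

The feedback term keeps the weight norm bounded without an explicit normalization step, which is the appeal of the architecture described above.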

11.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to build a nonlinear speech-denoising method for robust phoneme recognition and information retrieval. Attractor dynamics are first formed by training the clean-speech subspace as the continuous attractor; the network is then used to recognize noisy speech with both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared to the same network with a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed so that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, along with phone recognition. Using these connections, recognition accuracy is improved by 21% for the stationary signal and by 14% for the nonstationary one at 0 dB SNR, with respect to a reference model, a feedforward neural network.

12.
There are several neural network implementations using software, hardware, or hardware/software co-design. This work proposes a hardware architecture to implement an artificial neural network (ANN) with a multilayer perceptron (MLP) topology. We exploit the parallelism of neural networks and allow on-the-fly changes of the number of inputs, the number of layers, and the number of neurons per layer. This reconfigurability permits any ANN application to be implemented on the proposed hardware. To reduce the time spent on arithmetic computation, a real number is represented as a fraction of integers. In this way, the arithmetic is limited to integer operations, performed by fast combinational circuits; a simple state machine suffices to control sums and products of fractions. The sigmoid is used as the activation function and is approximated by polynomials whose underlying computation requires only sums and products. A theorem is introduced and proved to justify this arithmetic strategy for computing the activation function. Thus, the arithmetic circuitry used to implement the neuron's weighted sum is reused to compute the sigmoid, and this resource sharing drastically reduces the total area of the system. After modeling and simulation for functional validation, the proposed architecture was synthesized on reconfigurable hardware. The results are promising.
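The fraction-of-integers idea can be sketched in a few lines: sums and products of integer pairs, plus a polynomial stand-in for the sigmoid. The low-order Taylor polynomial 1/2 + x/4 - x^3/48 is an illustrative choice; the paper's actual polynomials and hardware details are not reproduced here.

```python
from fractions import Fraction  # stands in for the paper's integer-pair representation

def f_mul(a, b):
    """Product of two fractions: two integer multiplies."""
    return Fraction(a.numerator * b.numerator, a.denominator * b.denominator)

def f_add(a, b):
    """Sum of two fractions: integer multiplies and one integer add."""
    return Fraction(a.numerator * b.denominator + b.numerator * a.denominator,
                    a.denominator * b.denominator)

def sigmoid_poly(x):
    """Polynomial stand-in for the sigmoid (Taylor terms 1/2 + x/4 - x^3/48),
    built only from the sum/product primitives above."""
    x3 = f_mul(f_mul(x, x), x)
    return f_add(f_add(Fraction(1, 2), f_mul(Fraction(1, 4), x)),
                 f_mul(Fraction(-1, 48), x3))

def neuron(inputs, weights):
    """Weighted sum and activation computed with the same two primitives,
    mirroring the resource sharing described above."""
    acc = Fraction(0)
    for xi, wi in zip(inputs, weights):
        acc = f_add(acc, f_mul(xi, wi))
    return sigmoid_poly(acc)

out = neuron([Fraction(1, 2), Fraction(1, 4)], [Fraction(1), Fraction(-2)])
print(out)   # weighted sum is 0, so the output is sigmoid_poly(0) = 1/2
```

Because both the accumulation and the activation use only f_add and f_mul, the same integer adder/multiplier circuitry can serve both stages, which is the point of the reuse argument in the abstract.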

13.
Artificial neural network techniques have been successfully applied to vector quantization (VQ) encoding. The objective of VQ is to statistically preserve the topological relationships existing in a data set and to project the data onto a lattice of lower dimension, for visualization, compression, storage, or transmission purposes. However, one of the major drawbacks in the application of artificial neural networks is the difficulty of properly specifying the structure of the lattice that best preserves the topology of the data. To overcome this problem, in this paper we introduce merging algorithms for machine-fusion, boosting-fusion-based, and hybrid-fusion ensembles of SOM, NG, and GSOM networks. In these ensembles, it is not the output signals of the base learners that are combined; rather, their architectures are properly merged. We empirically show the quality and robustness of the topological representation of our proposed algorithm using both synthetic and real benchmark datasets.

14.
This comment shows that the main proofs of the above paper (Yu et al., IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 207-220, 1993) are incomplete and incorrect: in fact, self-organization cannot be achieved if the adaptation parameter satisfies the classical Robbins-Monro conditions, and Proposition 2 is erroneous. Moreover, the two-dimensional extension (Theorem 3) is not proved. The main point is that the four classes that the authors consider stable are not stable at all. Some references are given.

15.
The goal of this work is to learn and retrieve a sequence of highly correlated patterns using a Hopfield-type attractor neural network (ANN) with a small-world connectivity distribution. For this model, we propose a weight-learning heuristic that combines the pseudo-inverse approach with a row-shifting scheme. The influence of the ratio of random connectivity on retrieval quality and learning time is studied. Our approach has been successfully tested on a complex pattern, as is the case with traffic video sequences, for different combinations of the involved parameters. Moreover, it has been demonstrated to be robust with respect to highly variable frame activity.
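The pseudo-inverse ingredient can be sketched for two stored patterns: because the weight matrix projects onto the pattern subspace, even correlated patterns are exact fixed points. This is a toy 4-neuron example; the row-shifting scheme and small-world dilution are not modeled.

```python
def pseudo_inverse_weights(p0, p1):
    """W = X (X^T X)^{-1} X^T for two stored patterns: the projection onto the
    pattern subspace, so each stored pattern is an exact fixed point even when
    the patterns are correlated (2x2 correlation matrix inverted by hand)."""
    n = len(p0)
    c00 = sum(a * a for a in p0)
    c01 = sum(a * b for a, b in zip(p0, p1))
    c11 = sum(b * b for b in p1)
    det = c00 * c11 - c01 * c01
    i00, i01, i11 = c11 / det, -c01 / det, c00 / det   # inverse correlation matrix
    return [[p0[i] * (i00 * p0[j] + i01 * p1[j]) + p1[i] * (i01 * p0[j] + i11 * p1[j])
             for j in range(n)] for i in range(n)]

p0 = [1, 1, 1, -1]
p1 = [1, 1, 1, 1]          # normalized overlap 1/2 with p0: correlated patterns
W = pseudo_inverse_weights(p0, p1)
recalled = [1 if sum(W[i][j] * p0[j] for j in range(4)) > 0 else -1 for i in range(4)]
print(recalled == p0)   # True: p0 is recovered exactly despite the correlation
```

A plain Hebbian rule degrades quickly as pattern overlap grows, which is why the pseudo-inverse approach is the natural starting point for highly correlated video frames.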

16.
17.
Boolean networks are an important formalism for modelling biological systems and have attracted much attention in recent years. An important challenge in Boolean networks is to exhaustively find attractors, which represent steady states of a biological network. In this paper, we propose a new approach to improve the efficiency of BDD-based attractor detection. Our approach includes a monolithic algorithm for small networks, an enumerative strategy to deal with large networks, a method to accelerate attractor detection based on an analysis of the network structure, and two heuristics on ordering BDD variables. We demonstrate the performance of our approach on a number of examples and on a realistic model of apoptosis in hepatocytes. We compare it with one existing technique in the literature.
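For a network small enough to enumerate, attractor detection reduces to finding the cycles of the synchronous transition function. This explicit-enumeration sketch conveys the idea that the paper's BDD machinery makes feasible for much larger networks; the 3-node network and its update rules are illustrative.

```python
def attractors(update, n):
    """Exhaustive detection: every synchronous trajectory of an n-node Boolean
    network eventually enters a cycle, and each such cycle is an attractor."""
    found = set()
    for s0 in range(2 ** n):
        seen, s = {}, s0
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        cycle_start = seen[s]                 # first state of the cycle we entered
        found.add(frozenset(t for t, i in seen.items() if i >= cycle_start))
    return found

# illustrative 3-node network: x0' = x1 AND x2, x1' = x0, x2' = NOT x0
def step(s):
    x0, x1, x2 = s & 1, (s >> 1) & 1, (s >> 2) & 1
    return (x1 & x2) | (x0 << 1) | ((1 - x0) << 2)

atts = attractors(step, 3)
print([sorted(a) for a in atts])   # one steady state: state 4, i.e. (x0, x1, x2) = (0, 0, 1)
```

The state space grows as 2^n, which is exactly why symbolic BDD representations, enumerative strategies, and variable-ordering heuristics such as those proposed above become necessary.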

18.
Neurophysiological experiments show that the strength of synaptic connections can undergo substantial changes on a short time scale. These changes depend on the history of the presynaptic input. Using mean-field techniques, we study how the short-time dynamics of synaptic connections influence the performance of attractor neural networks in terms of memory capacity and the capability to process external signals. For binary discrete-time as well as firing-rate continuous-time neural networks, the fixed points of the network dynamics are shown to be unaffected by synaptic dynamics. However, the stability of patterns changes considerably: synaptic depression turns out to reduce the storage capacity, but it is advantageous for processing pattern sequences. The analytical results on stability, the size of the basins of attraction, and the switching between patterns are complemented by numerical simulations.

19.
Tao Ye, Xuefeng Zhu. Neurocomputing, 2011, 74(6): 906-915
The process neural network (PrNN) is an ANN model suited to learning problems with signal inputs, whose elementary unit is the process neuron (PN), an emerging neuron model. There is an essential difference between the process neuron and traditional neurons, but a relation also exists between them: the former can be approximated by the latter to any precision. First, the PN model and some PrNNs are briefly introduced. Then, two PN approximation theorems are presented and proved in detail. Each theorem gives an approximating model for the PN model: the time-domain feature expansion model and the orthogonal decomposition feature expansion model. Some corollaries are given for PrNNs based on these two theorems. Thereafter, simulation studies are performed on simulated signal sets and a real dataset. The results show that the PrNN can effectively suppress noise polluting the signals and generalizes quite well. Finally, some open problems on PrNNs are discussed and further research directions are suggested.
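The orthogonal-decomposition approximation can be sketched numerically: expand the weight function and input signal in an orthonormal basis, and the process neuron's integral aggregation collapses to an ordinary dot product of coefficients, i.e., a conventional neuron on expansion features. The cosine basis, the coefficients, and the tanh activation are illustrative assumptions.

```python
import math

def phi(k, t):
    """Orthonormal cosine basis on [0, 1]."""
    return 1.0 if k == 0 else math.sqrt(2.0) * math.cos(k * math.pi * t)

def synth(coeffs, t):
    """Reconstruct a signal from its expansion coefficients."""
    return sum(c * phi(k, t) for k, c in enumerate(coeffs))

a = [0.5, -1.0, 0.3, 0.2]   # weight-function coefficients (illustrative)
b = [1.0, 0.4, -0.7, 0.1]   # input-signal coefficients (illustrative)

# process-neuron aggregation: integral of w(t) x(t) over [0, 1] (midpoint rule)
N = 20000
integral = sum(synth(a, (i + 0.5) / N) * synth(b, (i + 0.5) / N) for i in range(N)) / N

# conventional-neuron equivalent: dot product of the coefficient vectors
dot = sum(ai * bi for ai, bi in zip(a, b))

out_pn = math.tanh(integral)   # process-neuron output
out_tn = math.tanh(dot)        # ordinary neuron on the expansion features
print(round(out_pn, 6), round(out_tn, 6))   # the two outputs agree
```

By orthonormality (Parseval), the integral equals the coefficient dot product whenever both signals lie in the basis span, which is the mechanism behind the approximation theorems cited above.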

20.
This paper concerns the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in known-topology radio networks, i.e., where for each primitive the schedule of transmissions is precomputed in advance based on full knowledge of the size and topology of the network. The first part of the paper examines the two communication primitives in arbitrary graphs. In particular, for the broadcast task we deliver two new results: an efficient deterministic algorithm for computing a radio schedule of length D + O(log^3 n), and a randomized algorithm for computing a radio schedule of length D + O(log^2 n). These results improve on the best previously known D + O(log^4 n) time schedule due to Elkin and Kortsarz (Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 222-231, 2005). We then propose a new (efficiently computable) deterministic schedule that uses 2D + Δ log n + O(log^3 n) time units to complete the gossiping task in any radio network with size n, diameter D, and maximum degree Δ. Our new schedule improves and simplifies the best previously known gossiping schedule, due to Gąsieniec et al. (Proceedings of the 11th International Colloquium on Structural Information and Communication Complexity, vol. 3104, pp. 173-184, 2004), for any network with diameter D = Ω(log^{i+4} n), where i ≥ 0 is an arbitrary integer constant. The second part of the paper focuses on radio communication in planar graphs, devising a new broadcasting schedule that uses fewer than 3D time slots. This result improves, for small values of D, on the best previously known D + O(log^3 n) time schedule proposed by Elkin and Kortsarz (Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 222-231, 2005).
Our new algorithm should also be seen as a separation result between planar and general graphs with small diameter, due to the polylogarithmic inapproximability result for general graphs by Elkin and Kortsarz (Proceedings of the 7th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, vol. 3122, pp. 105-116, 2004; J. Algorithms 52(1), 8-25, 2004). The second author is supported in part by a grant from the Israel Science Foundation and by the Royal Academy of Engineering. Part of this research was performed while this author (Q. Xin) was a PhD student at The University of Liverpool.
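For contrast with the precomputed schedules above, a deliberately naive baseline makes the radio-model constraint concrete: if only one node transmits per slot, collisions are impossible, but the schedule length is far from the D + O(log^3 n) bounds of the paper. The graph and source node are illustrative; this is not the paper's algorithm.

```python
from collections import deque

def bfs_layers(adj, source):
    """BFS distances from the source."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def naive_schedule(adj, source):
    """One transmitter per time slot, layer by layer: trivially collision-free
    in the radio model (a node receives only if exactly one neighbour
    transmits), at the cost of a much longer schedule."""
    dist = bfs_layers(adj, source)
    informed, schedule = {source}, []
    for d in range(max(dist.values()) + 1):
        for u in list(adj):
            if dist.get(u) == d and u in informed and any(v not in informed for v in adj[u]):
                schedule.append(u)          # u transmits alone: every neighbour hears it
                informed.update(adj[u])
    return schedule, informed

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
sched, informed = naive_schedule(adj, 0)
print(len(sched), sorted(informed))   # 3 slots inform all 5 nodes in this toy graph
```

The whole difficulty addressed by the paper is scheduling many simultaneous transmitters per slot without collisions, which is what compresses the schedule from O(n) slots toward D plus polylogarithmic terms.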
