10 similar documents found; search took 93 ms
1.
Jooyoung Park, Hyuk Cho, Daihee Park. IEEE Transactions on Neural Networks, 1999, 10(4): 946-950
This paper concerns reliable search for the optimally performing GBSB (generalized brain-state-in-a-box) neural associative memory given a set of prototype patterns to be stored as stable equilibrium points. First, we observe some new qualitative properties of the GBSB model. Next, we formulate the synthesis of GBSB neural associative memories as a constrained optimization problem. Finally, we convert the optimization problem into a semidefinite program (SDP), which can be solved efficiently by recently developed interior point methods. The validity of this approach is illustrated by a design example.
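As background, the GBSB recurrence itself is simple to state: the state is pushed along W x + b and saturated back onto the hypercube. The sketch below is a minimal numpy illustration that uses a plain Hebbian weight matrix for a single pattern rather than the SDP-synthesized weights of the paper; the pattern and the step size `alpha` are arbitrary choices for demonstration.

```python
import numpy as np

def gbsb_step(x, W, b, alpha=0.5):
    """One iteration of the generalized brain-state-in-a-box recurrence.
    The activation is a linear saturation onto the hypercube [-1, 1]^n."""
    return np.clip(x + alpha * (W @ x + b), -1.0, 1.0)

# Store a single bipolar pattern with a simple Hebbian weight matrix
# (illustrative only; the paper synthesizes W by solving an SDP).
v = np.array([1.0, -1.0, 1.0, -1.0])
W = np.outer(v, v) / len(v)
b = np.zeros(len(v))

# A noisy version of v converges back to the stored pattern.
x = np.array([0.6, -0.4, 0.9, -0.2])
for _ in range(100):
    x = gbsb_step(x, W, b)
print(x)  # → [ 1. -1.  1. -1.]
```

Because the saturation clips every component to ±1, the stored pattern is an exact fixed point of the iteration, not just an approximate one.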
2.
This article is concerned with the synthesis of the optimally performing GBSB (generalized brain-state-in-a-box) neural associative memory given a set of desired binary patterns to be stored as asymptotically stable equilibrium points. Based on some known qualitative properties and newly observed fundamental properties of the GBSB model, the synthesis problem is formulated as a constrained optimization problem. Next, we convert this problem into a quasi-convex optimization problem called GEVP (generalized eigenvalue problem). This conversion is particularly useful in practice, because GEVPs can be efficiently solved by recently developed interior point methods. Design examples are given to illustrate the proposed approach and to compare with existing synthesis methods.
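The quasi-convexity that makes GEVPs tractable can be illustrated on the plainest generalized eigenvalue problem A x = λ B x with B positive definite: feasibility of λB − A ⪰ 0 is monotone in λ, so the largest generalized eigenvalue can be found by bisection over semidefinite feasibility. The matrices below are arbitrary small examples, not the paper's synthesis LMIs.

```python
import numpy as np
from scipy.linalg import eigh

# Symmetric A and positive-definite B define the GEVP  A x = lam * B x.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])

# Direct solution of the generalized eigenvalue problem.
lams = eigh(A, B, eigvals_only=True)

def is_psd(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Quasi-convexity: "lam*B - A is PSD" is monotone in lam, so the largest
# generalized eigenvalue is the infimum over feasible lam, found by bisection.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if is_psd(mid * B - A):
        hi = mid
    else:
        lo = mid

print(round(hi, 6), round(max(lams), 6))  # the two agree
```

Interior-point GEVP solvers replace the eigenvalue test with an SDP feasibility check at each level, but the bisection structure is the same.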
3.
Several novel results concerning the characterization of the equilibrium conditions of a continuous-time dynamical neural network model and a systematic procedure for synthesizing associative memory networks with nonsymmetrical interconnection matrices are presented. The equilibrium characterization focuses on the exponential stability and instability properties of the network equilibria and on equilibrium confinement, viz., ensuring the uniqueness of an equilibrium in a specific region of the state space. While the equilibrium confinement result involves a simple test, the stability results provide explicit estimates of the degree of exponential stability and the regions of attraction of the stable equilibrium points. Using these results as guidelines, a systematic synthesis procedure for constructing a dynamical neural network that stores a given set of vectors as stable equilibrium points is developed.
4.
High-throughput implementations of neural network models are required to transfer the technology from small prototype research problems into large-scale "real-world" applications. The flexibility of these implementations in accommodating modifications to the neural network computation and structure is of paramount importance. The performance of many implementation methods today is greatly dependent on the density and the interconnection structure of the neural network model being implemented. A principal contribution of this paper is to demonstrate an implementation method that exploits the maximum amount of parallelism in neural computation, without enforcing stringent conditions on the neural network interconnection structure, to achieve high implementation efficiency. We propose a new reconfigurable parallel processing architecture, the Dynamically Reconfigurable Extended Array Multiprocessor (DREAM) machine, and an associated mapping method for implementing neural networks with regular interconnection structures. Details of the system execution rate calculation as a function of the neural network structure are presented. Several example neural network structures are used to demonstrate the efficiency of our mapping method and the DREAM machine architecture in implementing diverse interconnection structures. We show that, due to the reconfigurable nature of the DREAM machine, most of the available parallelism of neural networks can be efficiently exploited.
5.
A new neural network model, the delayed standard neural network model (DSNNM), is proposed; it consists of a linear dynamical system interconnected with a bounded static delayed nonlinear operator. Using different Lyapunov functionals together with the S-procedure, sufficient conditions for the global asymptotic stability and global exponential stability of the DSNNM are derived, and these conditions can be expressed in the form of linear matrix inequalities (LMIs). Most stability analyses of delayed (or non-delayed) dynamic neural networks, as well as neural network control systems, can be transformed into a DSNNM, so that stability analysis or stabilizing control can be carried out in a unified way. Application examples, namely the stability analysis of delayed bidirectional associative memory (BAM) neural networks via the DSNNM and the synthesis of a neural controller for a pH neutralization process, show that the obtained stability criteria extend and improve the stability theorems in the previous literature, and that the stability analysis can be extended to the synthesis of nonlinear control systems.
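For the non-delayed linear core of such a model, the Lyapunov condition behind these LMIs can be sketched with scipy by solving the Lyapunov equation directly rather than invoking an LMI solver; the matrix A below is an arbitrary stable example, not one from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable linear part of a (non-delayed) network: all eigenvalues of A
# have negative real part.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])

# Solve the Lyapunov equation  A^T P + P A = -I.  If the solution P is
# positive definite, V(x) = x^T P x certifies global exponential stability.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

eigs = np.linalg.eigvalsh(P)
print(np.all(eigs > 0))  # → True: A is exponentially stable
```

The LMI formulation generalizes this check: instead of solving for P exactly, one searches for any P ≻ 0 with AᵀP + PA ≺ 0, possibly alongside the extra terms contributed by the delayed nonlinear operator.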
6.
7.
8.
This article deals with a special class of neural autoassociative memory, namely, fuzzy BSB and GBSB models and their learning algorithms. These models, defined on a hypercube, solve the problem of fuzzy clustering of a data array owing to the fact that the vertices of the hypercube act as point attractors. A membership function is introduced that allows one to classify data belonging to overlapping clusters.

Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 18–28, November–December 2006.
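The abstract does not spell out the membership function, so the sketch below uses a hypothetical product-form membership for states in the hypercube [-1, 1]^n, purely to illustrate how a state in the overlap region between two vertex-anchored clusters can be graded rather than hard-assigned.

```python
import numpy as np

def membership(x, v):
    """Degree of membership of state x (components in [-1, 1]) in the
    cluster anchored at hypercube vertex v (components in {-1, +1}).
    This product form is a hypothetical illustration, not the
    article's exact definition."""
    return float(np.prod((1.0 + x * v) / 2.0))

v1 = np.array([ 1.0,  1.0, -1.0])
v2 = np.array([-1.0,  1.0, -1.0])
x  = np.array([ 0.6,  0.8, -0.9])  # closer to v1, but in the overlap

print(membership(x, v1), membership(x, v2))
```

Each factor equals 1 when the component agrees exactly with the vertex and 0 when it is at the opposite face, so the grade decays smoothly as the state moves away from an attractor.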
9.
Associative Memory Design Using Support Vector Machines (cited 1 time in total: 0 self-citations, 1 by others)
IEEE Transactions on Neural Networks, 2006, 17(5): 1165-1174
The relation between support vector machines (SVMs) and recurrent associative memories is investigated. The design of associative memories based on the generalized brain-state-in-a-box (GBSB) neural model is formulated as a set of independent classification tasks which can be efficiently solved by standard software packages for SVM learning. Some properties of the networks designed in this way are highlighted; notably, and perhaps surprisingly, they follow a generalized Hebb's law. The performance of the SVM approach is compared with existing methods employing nonsymmetric connections through several design examples.
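The row-by-row decomposition described above can be sketched with scikit-learn: each neuron's incoming weights and bias are fit as an independent linear SVM that predicts that neuron's state from the whole pattern. The prototype set, the `C` value, and the one-step recall check are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Bipolar prototype patterns to store (one per row).  Row i of the
# weight matrix W is fit as an independent linear classification task:
# predict component i of a pattern from the whole pattern.
V = np.array([[ 1, -1,  1, -1],
              [-1,  1,  1, -1],
              [ 1,  1, -1,  1]], dtype=float)
n = V.shape[1]

W = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    clf = LinearSVC(C=10.0).fit(V, V[:, i])
    W[i] = clf.coef_[0]
    b[i] = clf.intercept_[0]

# Every prototype should be reproduced by one synchronous update.
recalled = np.sign(V @ W.T + b)
print(np.array_equal(recalled, V))
```

Since the tasks are independent, nothing forces W to be symmetric, which matches the paper's comparison against nonsymmetric design methods.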
10.
The astonishing development in the field of artificial neural networks (ANN) has brought significant advancement in many application domains, such as pattern recognition, image classification, and computer vision. An ANN imitates neuron behaviors and makes a decision or prediction by learning patterns and features from the given data set. To reach higher accuracies, neural networks are getting deeper, and consequently, the computation and storage demands on hardware platforms are steadily increasing. In addition, the massive data communication among neurons makes the interconnection more complex and challenging. To overcome these challenges, ASIC-based DNN accelerators are being designed which usually incorporate customized processing elements, fixed interconnection, and large off-chip memory storage. As a result, DNN computation involves large memory accesses due to frequent loading and off-loading of data, which significantly increases energy consumption and latency. Also, the rigid architecture and interconnection among processing elements restrict the platform's efficiency to specific applications. In recent years, Network-on-Chip-based (NoC-based) DNN design has become an emerging paradigm because the NoC interconnection can help to reduce off-chip memory accesses while offering better scalability and flexibility. To evaluate NoC-based DNNs in the early design stage, we introduce a cycle-accurate NoC-based DNN simulator, called DNNoC-sim. To support various operations such as convolution and pooling in modern DNN models, we first propose a DNN flattening technique to convert diverse DNN operations into MAC-like operations. In addition, we propose a DNN slicing method to evaluate large-scale DNN models on a resource-constrained NoC platform. The evaluation results show a significant reduction in off-chip memory accesses compared to the state-of-the-art DNN model. We also analyze the performance and discuss the trade-off between different design parameters.
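The flattening idea, turning a convolution into MAC-like operations, is commonly realized as im2col: each sliding window is unrolled into a row so the whole convolution becomes one matrix multiply. The sketch below is a generic illustration of that transformation, not the DNNoC-sim implementation.

```python
import numpy as np

def im2col(x, k):
    """Unroll every k-by-k sliding window of a 2-D input into a row, so
    the convolution reduces to a single MAC-style matrix multiply."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((out_h * out_w, k * k))
    idx = 0
    for r in range(out_h):
        for c in range(out_w):
            cols[idx] = x[r:r + k, c:c + k].ravel()
            idx += 1
    return cols

x = np.arange(16, dtype=float).reshape(4, 4)
kern = np.ones((3, 3))

# Convolution as one matrix-vector product over the flattened patches.
y = im2col(x, 3) @ kern.ravel()
print(y.reshape(2, 2))
```

On a MAC-oriented platform, this recasting lets convolution, pooling, and fully connected layers share the same processing elements, at the cost of duplicating overlapping input values in memory.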