Similar Documents
10 similar documents found.
1.
Morphological neural networks are based on a new paradigm for neural computing. Instead of adding the products of neural values and corresponding synaptic weights, the basic neural computation in a morphological neuron takes the maximum or minimum of the sums of neural values and their corresponding synaptic weights. By taking the maximum (or minimum) of sums instead of the sum of products, morphological neuron computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we restrict our attention to morphological associative memories. After a brief review of morphological neural computing and a short discussion of the properties of morphological associative memories, we present new methodologies and associated theorems for retrieving complete stored patterns from noisy or incomplete patterns using morphological associative memories. These methodologies are derived from the notions of morphological independence, strong independence, minimal representations of pattern vectors, and kernels. Several examples are provided in order to illuminate these novel concepts.
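To make the max-of-sums contrast concrete, here is a minimal NumPy sketch of a classical neuron beside the two morphological variants; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def classical_neuron(x, w):
    """Classical neuron: sum of products of inputs and weights (pre-activation)."""
    return np.dot(w, x)

def morphological_neuron_max(x, w):
    """Morphological neuron: maximum of the sums of inputs and weights."""
    return np.max(x + w)

def morphological_neuron_min(x, w):
    """Dual morphological neuron: minimum of the sums."""
    return np.min(x + w)

x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -0.3, 0.9])
print(classical_neuron(x, w))            # -0.02
print(morphological_neuron_max(x, w))    # max(0.7, 0.4, 1.0) = 1.0
print(morphological_neuron_min(x, w))    # min(0.7, 0.4, 1.0) = 0.4
```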

2.
Morphological neural networks (MNNs) are a class of artificial neural networks whose operations can be expressed in the mathematical theory of minimax algebra. In a morphological neural net, the usual sum of weighted inputs is replaced by a maximum or minimum of weighted inputs, where weighting is performed by adding the weight to the input; the resulting operations are called the max product and the min product, respectively. In recent years, a number of different MNN models and applications have emerged. The emphasis of this paper is on morphological associative memories (MAMs), in particular on binary autoassociative morphological memories (AMMs). We give a new set-theoretic interpretation of recording and recall in binary AMMs and provide a generalization using fuzzy set theory.
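As a hedged sketch of what recording and recall look like in an autoassociative MAM of this kind, the following NumPy code builds the min-memory W and max-memory M and recalls a stored pattern with the max product; it follows the standard construction from the MAM literature rather than the paper's new set-theoretic or fuzzy formulation:

```python
import numpy as np

def record(X):
    """Record the columns of X in the min-memory W and max-memory M:
    W[i, j] = min over patterns k of (x_i^k - x_j^k); M takes the max."""
    diff = X[:, None, :] - X[None, :, :]      # diff[i, j, k] = x_i^k - x_j^k
    return diff.min(axis=2), diff.max(axis=2)

def max_product(W, x):
    """Recall with the max product: y_i = max_j (W[i, j] + x_j)."""
    return (W + x[None, :]).max(axis=1)

def min_product(M, x):
    """Dual recall with the min product: y_i = min_j (M[i, j] + x_j)."""
    return (M + x[None, :]).min(axis=1)

# Two binary patterns stored as columns of X
X = np.array([[1, 0],
              [0, 1],
              [1, 1]], dtype=float)
W, M = record(X)
print(max_product(W, X[:, 0]))   # [1. 0. 1.] -- perfect recall of pattern 1
```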

3.
Associative neural memories are models of biological phenomena that allow for the storage of pattern associations and the retrieval of the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. In this paper, we introduce implicative fuzzy associative memories (IFAMs), a class of associative neural memories based on fuzzy set theory. An IFAM consists of a network of completely interconnected Pedrycz logic neurons with threshold whose connection weights are determined by the minimum of implications of presynaptic and postsynaptic activations. We present a series of results for autoassociative models, including one-pass convergence, unlimited storage capacity, and tolerance with respect to eroded patterns. Finally, we present some results on fixed points and discuss the relationship between implicative fuzzy associative memories and morphological associative memories.
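A minimal sketch of an IFAM built from the Gödel implication may help: the weights take the minimum of implications over the stored pattern pairs, and recall uses the adjoint max-minimum composition. This is an assumption-laden illustration (the Gödel variant, with the neuron thresholds omitted), not the paper's exact model:

```python
import numpy as np

def godel(a, b):
    """Goedel implication: I(a, b) = 1 if a <= b, otherwise b."""
    return np.where(a <= b, 1.0, b)

def ifam_record(X, Y):
    """Weights w_ij = min over patterns k of I(x_j^k, y_i^k);
    patterns are the columns of X (n x p) and Y (m x p)."""
    return godel(X[None, :, :], Y[:, None, :]).min(axis=2)

def ifam_recall(W, x):
    """Recall via the max-minimum composition: y_i = max_j min(w_ij, x_j)."""
    return np.minimum(W, x[None, :]).max(axis=1)

# Autoassociative example with fuzzy patterns (entries in [0, 1])
X = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.6, 0.6]])
W = ifam_record(X, X)
print(ifam_recall(W, X[:, 0]))   # [0.9 0.2 0.6] -- the first stored pattern
```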

4.
In this paper, a new synthesis approach is developed for associative memories based on the perceptron training algorithm. The design (synthesis) problem of feedback neural networks for associative memories is formulated as a set of linear inequalities, so that perceptron training can be applied directly. The perceptron training in the synthesis algorithms is guaranteed to converge for the design of neural networks without any constraints on the connection matrix. For neural networks with constraints on the diagonal elements of the connection matrix, results concerning the properties of such networks and the existence of such a network design are established. For neural networks with sparsity and/or symmetry constraints on the connection matrix, design algorithms are presented. Applications of the present synthesis approach to the design of associative memories realized by means of other feedback neural network models are studied. To demonstrate the applicability of the present results and to compare the present synthesis approach with existing design methods, specific examples are considered.
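The core idea, formulating stability of each stored bipolar pattern as linear inequalities and enforcing them with per-row perceptron updates, can be sketched as follows for the unconstrained case. The stability condition used here (sign agreement of each component of W x with the pattern) is a simplification of the paper's formulation:

```python
import numpy as np

def synthesize(patterns, epochs=100, lr=1.0):
    """Perceptron-based synthesis of a connection matrix W so that every
    stored bipolar pattern x satisfies x_i * (W x)_i > 0 for all i.
    Each row of W is trained as an independent perceptron on these
    linear inequalities."""
    X = np.array(patterns, dtype=float)       # shape (p, n), entries +/- 1
    n = X.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        violations = 0
        for x in X:
            for i in range(n):
                if x[i] * (W[i] @ x) <= 0:    # inequality not yet satisfied
                    W[i] += lr * x[i] * x     # classic perceptron update
                    violations += 1
        if violations == 0:                   # all inequalities satisfied
            break
    return W

patterns = [[1, -1, 1, -1],
            [1, 1, -1, -1]]
W = synthesize(patterns)
for x in patterns:
    print(np.sign(W @ x))                     # reproduces each stored pattern
```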

5.
Recent advances in artificial neural networks (ANNs) have led to the design and construction of neuroarchitectures as simulators and emulators of a variety of problems in science and engineering, including pattern recognition, prediction, optimization, associative memory, and control of dynamic systems. This paper offers an analytical overview of the most successful designs, implementations, and applications of neuroarchitectures as neurosimulators and neuroemulators. It also outlines historical notes on the formulation of the basic biological neuron, artificial computational models, network architectures, and learning processes of the most common ANNs; describes and analyzes neurosimulation on parallel architectures, both in software and in hardware (neurohardware); presents the simulation of ANNs on parallel architectures; gives a brief introduction to ANNs in vector microprocessor systems; and presents ANNs in terms of the "new technologies". Specifically, it discusses cellular computing, cellular neural networks (CNNs), a new proposition for unsupervised neural networks (UNNs), and pulse-coupled neural networks (PCNNs).

6.
Multiplicative neuron model-based artificial neural networks are a recently proposed type of artificial neural network that has produced successful forecasting results. Previous studies used the sigmoid activation function in multiplicative neuron model-based networks. Although networks employing radial basis activation functions often produce better forecasts, the Gaussian activation function had not yet been used in the multiplicative neuron model. In this study, the Gaussian activation function is used in a multiplicative neuron model artificial neural network in place of the sigmoid. The network weights and the parameters of the activation function are optimized by guaranteed convergence particle swarm optimization. The two major contributions of this study are the first use of the Gaussian activation function in the multiplicative neuron model, and the joint optimization of the center and spread parameters of the activation function together with the network weights in a single optimization process. The superior forecasting performance of the proposed Gaussian activation function-based multiplicative neuron model is demonstrated by applying it to real-life time series.
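A hedged sketch of the forward pass may clarify the model: the multiplicative neuron multiplies the weighted-plus-biased inputs instead of summing them, and the Gaussian replaces the sigmoid. All numeric values below are illustrative; the paper obtains the weights and the Gaussian's center and spread via guaranteed convergence particle swarm optimization, which is not reproduced here:

```python
import numpy as np

def multiplicative_neuron(x, w, b, c, sigma):
    """Multiplicative neuron: the net input is the product of the terms
    (w_j * x_j + b_j); the output passes through a Gaussian with center c
    and spread sigma instead of a sigmoid."""
    net = np.prod(w * x + b)
    return np.exp(-((net - c) ** 2) / (2.0 * sigma ** 2))

# One-step-ahead forecast from three lagged observations (illustrative values)
x = np.array([0.42, 0.47, 0.51])    # last three points of a scaled series
w = np.array([0.80, -0.20, 1.10])   # weights (found by PSO in the paper)
b = np.array([0.10, 0.90, 0.05])    # biases
print(multiplicative_neuron(x, w, b, c=0.3, sigma=0.5))
```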

7.
Many well-known fuzzy associative memory (FAM) models can be viewed as (fuzzy) morphological neural networks (MNNs) because they perform an operation of (fuzzy) mathematical morphology at every node, possibly followed by the application of an activation function. The vast majority of these FAMs represent distributive models given by single-layer matrix memories. Although the Kosko subsethood FAM (KS-FAM) can also be classified as a fuzzy morphological associative memory (FMAM), the KS-FAM constitutes a two-layer non-distributive model. In this paper, we prove several theorems concerning the conditions of perfect recall, the absolute storage capacity, and the output patterns produced by the KS-FAM. In addition, we propose a normalization strategy for the training and recall phases of the KS-FAM. We employ this strategy to compare the error-correction capabilities of the KS-FAM and other fuzzy and gray-scale associative memories in experiments on gray-scale image reconstruction. Finally, we apply the KS-FAM to the task of vision-based self-localization in robotics.
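At the heart of the KS-FAM is Kosko's subsethood measure, which grades how far one fuzzy set is contained in another. The following sketch shows only this measure; the full two-layer recall and the proposed normalization strategy are not reproduced:

```python
import numpy as np

def subsethood(a, b, eps=1e-12):
    """Kosko's subsethood degree S(a, b) = |min(a, b)| / |a|: the degree to
    which fuzzy set a is contained in fuzzy set b (1.0 = fully contained)."""
    return np.minimum(a, b).sum() / (a.sum() + eps)

a = np.array([0.2, 0.9, 0.4])
b = np.array([0.5, 0.6, 0.4])
print(subsethood(a, b))   # (0.2 + 0.6 + 0.4) / 1.5 = 0.8
```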

8.
A morphological neural network is generally defined as a type of artificial neural network that performs an elementary operation of mathematical morphology at every node, possibly followed by the application of an activation function. The underlying framework of mathematical morphology can be found in lattice theory. With the advent of granular computing, lattice-based neurocomputing models such as morphological neural networks and fuzzy lattice neurocomputing models are becoming increasingly important, since many information granules, such as fuzzy sets and their extensions, intervals, and rough sets, are lattice ordered. In this paper, we present the lattice-theoretical background and the learning algorithms for morphological perceptrons with competitive learning, which arise by incorporating a winner-take-all output layer into the original morphological perceptron model. Several well-known classification problems that are available on the internet are used to compare our new model with a range of classifiers such as conventional multi-layer perceptrons, fuzzy lattice neurocomputing models, k-nearest neighbors, and decision trees.
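A minimal sketch of the recall side of such a model: one morphological (min-of-sums) unit per class, followed by a winner-take-all decision. The weights shown are illustrative placeholders, not the result of the paper's competitive learning procedure:

```python
import numpy as np

def morphological_wta(x, W):
    """One morphological unit per class, s_c = min_j (x_j + W[c, j]),
    followed by a winner-take-all layer that picks the largest response."""
    scores = (x[None, :] + W).min(axis=1)
    return int(np.argmax(scores)), scores

# Two classes, three features; weights are illustrative placeholders
W = np.array([[-0.1, -0.5, -0.2],
              [-0.8, -0.1, -0.9]])
x = np.array([0.2, 0.6, 0.3])
winner, scores = morphological_wta(x, W)
print(winner, scores)     # 0 [ 0.1 -0.6]
```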

9.
Research on a framework for morphological associative memories
Morphological associative memories (MAMs) are a highly novel class of artificial neural networks. Typical MAM instances include the real-domain MAM (RMAM), the complex-domain MAM (CMAM), the morphological bidirectional associative memory (MBAM), the fuzzy MAM (FMAM), the enhanced FMAM (EFMAM), and the fuzzy MBAM (FMBAM). Although they have many attractive advantages and characteristics, they share the same morphological theoretical foundation and are essentially interconnected, so it is possible to unify them within a single MAM framework. At the same time, establishing a unified framework for associative memories is one of the key open problems in current research. To this end, the authors construct a morphological associative memory framework. The paper first analyzes the algebraic structure of the MAM class, laying a reliable computational foundation for the MAM framework; it then analyzes the basic operations and common features of the MAM class, abstracts their essential attributes and methods, and introduces morphological associative memory paradigms and operators; finally, it distills and proves the main framework theorems. The significance of this framework is twofold: (1) it unifies the MAM objects mathematically, so that their properties and essence can be revealed from a higher vantage point; (2) it helps in discovering new morphological associative memory methods, thereby solving more problems in associative memory, pattern recognition, and fuzzy inference.
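Although the framework's paradigms and operators are not reproduced here, the algebraic core shared by all of the listed MAM variants is the pair of lattice matrix products, which can be written as follows (notation assumed from the standard minimax-algebra literature, not taken from the paper):

```latex
% Max product (\vee) and min product (\wedge) shared by the MAM variants:
\[
  (A \vee B)_{ij} = \bigvee_{k=1}^{p} \bigl( a_{ik} + b_{kj} \bigr),
  \qquad
  (A \wedge B)_{ij} = \bigwedge_{k=1}^{p} \bigl( a_{ik} + b_{kj} \bigr)
\]
```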

10.
Lattice algebra approach to single-neuron computation
Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By replacing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single-layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher-dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single-layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.
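The following sketch illustrates the dendritic computation described here in a simplified form: each dendrite takes the minimum over excitatory terms (x_i + w_exc_i) and inhibitory terms -(x_i + w_inh_i), and the neuron hard-limits the maximum over its dendrites. With one dendrite the neuron accepts exactly a box in Euclidean space, which hints at how compact regions are approximated; the weight values below are illustrative:

```python
import numpy as np

def dendrite_response(x, w_exc, w_inh, p=1):
    """One dendrite in a lattice-algebra neuron: it takes the minimum over
    excitatory terms (x_i + w_exc_i) and inhibitory terms -(x_i + w_inh_i);
    p = +/-1 marks the dendrite as excitatory or inhibitory overall."""
    terms = np.concatenate([x + w_exc, -(x + w_inh)])
    return p * terms.min()

def morphological_neuron(x, dendrites):
    """Total activation: maximum over the dendrite responses, hard-limited."""
    tau = max(dendrite_response(x, we, wi, p) for we, wi, p in dendrites)
    return 1 if tau >= 0 else 0

# A single dendrite that accepts exactly the box [0.2, 0.8]^2:
# x + w_exc >= 0 iff x >= 0.2, and -(x + w_inh) >= 0 iff x <= 0.8
dendrites = [(np.array([-0.2, -0.2]), np.array([-0.8, -0.8]), 1)]
print(morphological_neuron(np.array([0.5, 0.5]), dendrites))   # 1: inside
print(morphological_neuron(np.array([0.9, 0.1]), dendrites))   # 0: outside
```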
