Similar Documents
20 similar documents found (search time: 515 ms)
1.
Morphological neural networks are based on a new paradigm for neural computing. Instead of adding the products of neural values and corresponding synaptic weights, the basic neural computation in a morphological neuron takes the maximum or minimum of the sums of neural values and their corresponding synaptic weights. By taking the maximum (or minimum) of sums instead of the sum of products, morphological neuron computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we restrict our attention to morphological associative memories. After a brief review of morphological neural computing and a short discussion of the properties of morphological associative memories, we present new methodologies and associated theorems for retrieving complete stored patterns from noisy or incomplete patterns using morphological associative memories. These methodologies are derived from the notions of morphological independence, strong independence, minimal representations of pattern vectors, and kernels. Several examples are provided in order to illuminate these novel concepts.
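The max-of-sums computation described in the abstract above can be illustrated with a minimal NumPy sketch of Ritter-style autoassociative recording and recall. The function names are illustrative, and this shows only the basic min-memory with max-plus recall, not the paper's kernel-based methodology:

```python
import numpy as np

def record_min_memory(X):
    # Autoassociative min-memory: w_ij = min over stored patterns of (x_i - x_j).
    n = X.shape[0]
    W = np.full((n, n), np.inf)
    for x in X.T:                      # columns of X are the stored patterns
        W = np.minimum(W, x[:, None] - x[None, :])
    return W

def max_plus_recall(W, x):
    # Morphological (max-plus) product: output_i = max_j (w_ij + x_j),
    # i.e. a maximum of sums instead of a sum of products.
    return np.max(W + x[None, :], axis=1)

# Two 3-dimensional real-valued patterns stored as columns.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = record_min_memory(X)
```

Every stored column is a fixed point of the recall map, which mirrors the perfect-recall property of these memories; tolerance to eroded (lowered) inputs holds under additional conditions discussed in the literature.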

2.
Morphological neural networks (MNNs) are a class of artificial neural networks whose operations can be expressed in the mathematical theory of minimax algebra. In a morphological neural net, the usual sum of weighted inputs is replaced by a maximum or minimum of weighted inputs (in this context, the weighting is performed by adding the weight to the input). We speak of a max product or a min product, respectively. In recent years, a number of different MNN models and applications have emerged. The emphasis of this paper is on morphological associative memories (MAMs), in particular on binary autoassociative morphological memories (AMMs). We give a new set-theoretic interpretation of recording and recall in binary AMMs and provide a generalization using fuzzy set theory.

3.
Associative neural memories are models of biological phenomena that allow for the storage of pattern associations and the retrieval of the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. In this paper, we introduce implicative fuzzy associative memories (IFAMs), a class of associative neural memories based on fuzzy set theory. An IFAM consists of a network of completely interconnected Pedrycz logic neurons with threshold whose connection weights are determined by the minimum of implications of presynaptic and postsynaptic activations. We present a series of results for autoassociative models, including one-pass convergence, unlimited storage capacity and tolerance with respect to eroded patterns. Finally, we present some results on fixed points and discuss the relationship between implicative fuzzy associative memories and morphological associative memories.
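As a rough illustration of "weights determined by the minimum of implications", here is a small sketch of an autoassociative IFAM assuming the Gödel implication, a max-min recall, and a bias vector; the specific implication and the function names are my assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def godel_implication(a, b):
    # Goedel implication: I(a, b) = 1 if a <= b, else b
    return np.where(a <= b, 1.0, b)

def ifam_record(X):
    # X: n x k, columns are fuzzy patterns in [0, 1] (autoassociative case).
    # w_ij = min over patterns of I(x_j, x_i); bias theta_i = min over patterns of x_i.
    n = X.shape[0]
    W = np.ones((n, n))
    for x in X.T:
        W = np.minimum(W, godel_implication(x[None, :], x[:, None]))
    return W, X.min(axis=1)

def ifam_recall(W, theta, x):
    # max-min product of W with x, then a max with the bias theta
    return np.maximum(np.max(np.minimum(W, x[None, :]), axis=1), theta)

X = np.array([[0.2, 0.5],
              [0.7, 0.3]])   # two stored fuzzy patterns (columns)
W, theta = ifam_record(X)
```

With this recording rule, both stored columns are recalled exactly, consistent with the one-pass perfect-recall results the abstract mentions.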

4.
In this paper, a new synthesis approach is developed for associative memories based on the perceptron training algorithm. The design (synthesis) problem of feedback neural networks for associative memories is formulated as a set of linear inequalities such that the use of perceptron training is evident. The perceptron training in the synthesis algorithms is guaranteed to converge for the design of neural networks without any constraints on the connection matrix. For neural networks with constraints on the diagonal elements of the connection matrix, results concerning the properties of such networks and concerning the existence of such a network design are established. For neural networks with sparsity and/or symmetry constraints on the connection matrix, design algorithms are presented. Applications of the present synthesis approach to the design of associative memories realized by means of other feedback neural network models are studied. To demonstrate the applicability of the present results and to compare the present synthesis approach with existing design methods, specific examples are considered.
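The core idea — turning the design problem into linear inequalities solvable by perceptron training — can be sketched as follows. This is my own illustrative implementation of the row-wise formulation (each stored bipolar pattern must be a fixed point of x → sign(Wx)), not the paper's full algorithm with diagonal, sparsity, or symmetry constraints:

```python
import numpy as np

def synthesize(X, max_epochs=100):
    # X: n x m bipolar (+1/-1) patterns as columns. Fixed-point requirement:
    # X[i, k] * (w_i . x^k) > 0 for all i, k -- linear inequalities in row w_i,
    # solved independently for each row by perceptron updates.
    n, m = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        w = np.zeros(n)
        for _ in range(max_epochs):
            updated = False
            for k in range(m):
                if X[i, k] * (w @ X[:, k]) <= 0:   # inequality violated
                    w += X[i, k] * X[:, k]          # perceptron correction
                    updated = True
            if not updated:
                break                               # all inequalities satisfied
        W[i] = w
    return W

X = np.array([[1.0, -1.0],
              [1.0, 1.0],
              [-1.0, 1.0]])   # two stored bipolar patterns (columns)
W = synthesize(X)
```

After convergence, each stored pattern satisfies sign(WX[:, k]) = X[:, k], i.e. it is a fixed point of the feedback network's update map.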

5.
Recent advances in artificial neural networks (ANNs) have led to the design and construction of neuroarchitectures as simulators and emulators of a variety of problems in science and engineering. Such problems include pattern recognition, prediction, optimization, associative memory, and control of dynamic systems. This paper offers an analytical overview of the most successful designs, implementations, and applications of neuroarchitectures as neurosimulators and neuroemulators. It also outlines historical notes on the formulation of the basic biological neuron, artificial computational models, network architectures, and learning processes of the most common ANNs; describes and analyzes neurosimulation on parallel architectures, both in software and in hardware (neurohardware); presents the simulation of ANNs on parallel architectures; gives a brief introduction to ANNs in vector microprocessor systems; and presents ANNs in terms of the "new technologies". Specifically, it discusses cellular computing, cellular neural networks (CNNs), a new proposition for unsupervised neural networks (UNNs), and pulse coupled neural networks (PCNNs).

6.
Multiplicative neuron model-based artificial neural networks are a recently proposed type of artificial neural network that has produced successful forecasting results. Previous studies used the sigmoid activation function in multiplicative neuron model-based artificial neural networks. Although artificial neural networks with radial basis activation functions often produce better forecasts, the Gaussian activation function had not yet been used in the multiplicative neuron model. In this study, a Gaussian activation function is used in the multiplicative neuron model artificial neural network instead of the sigmoid. The weights of the artificial neural network and the parameters of the activation function are optimized by guaranteed convergence particle swarm optimization. The two major contributions of this study are the first use of the Gaussian activation function in the multiplicative neuron model, and the optimization of the center and spread parameters of the activation function together with the network weights in a single optimization process. The superior forecasting performance of the proposed Gaussian activation function-based multiplicative neuron model artificial neural network is demonstrated by applying it to real-life time series.
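The forward pass of a multiplicative neuron with a Gaussian activation can be sketched in a few lines; the parameter names (center c, spread sigma) and this exact form are my assumptions for illustration, and the optimization of these parameters by particle swarm is omitted:

```python
import math

def multiplicative_neuron(x, w, b, c=0.0, sigma=1.0):
    # Multiplicative neuron: the net input is a PRODUCT of (w_j*x_j + b_j)
    # terms rather than the usual weighted sum.
    net = 1.0
    for xj, wj, bj in zip(x, w, b):
        net *= wj * xj + bj
    # Gaussian activation (replacing the sigmoid used in earlier studies):
    return math.exp(-((net - c) ** 2) / (2 * sigma ** 2))
```

For example, with inputs [1.0, 2.0], weights [0.5, 1.0], and biases [0.5, 0.0], the net input is (0.5+0.5)·(2.0) = 2.0, and the output is exp(-2) with the default c and sigma.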

7.
Many well-known fuzzy associative memory (FAM) models can be viewed as (fuzzy) morphological neural networks (MNNs) because they perform an operation of (fuzzy) mathematical morphology at every node, possibly followed by the application of an activation function. The vast majority of these FAMs represent distributive models given by single-layer matrix memories. Although the Kosko subsethood FAM (KS-FAM) can also be classified as a fuzzy morphological associative memory (FMAM), the KS-FAM constitutes a two-layer non-distributive model. In this paper, we prove several theorems concerning the conditions of perfect recall, the absolute storage capacity, and the output patterns produced by the KS-FAM. In addition, we propose a normalization strategy for the training and recall phases of the KS-FAM. We employ this strategy to compare the error correction capabilities of the KS-FAM and other fuzzy and gray-scale associative memories in terms of some experimental results concerning gray-scale image reconstruction. Finally, we apply the KS-FAM to the task of vision-based self-localization in robotics.

8.
A morphological neural network is generally defined as a type of artificial neural network that performs an elementary operation of mathematical morphology at every node, possibly followed by the application of an activation function. The underlying framework of mathematical morphology can be found in lattice theory. With the advent of granular computing, lattice-based neurocomputing models such as morphological neural networks and fuzzy lattice neurocomputing models are becoming increasingly important, since many information granules such as fuzzy sets and their extensions, intervals, and rough sets are lattice ordered. In this paper, we present the lattice-theoretical background and the learning algorithms for morphological perceptrons with competitive learning, which arise by incorporating a winner-take-all output layer into the original morphological perceptron model. Several well-known classification problems that are available on the internet are used to compare our new model with a range of classifiers such as conventional multi-layer perceptrons, fuzzy lattice neurocomputing models, k-nearest neighbors, and decision trees.

9.
Research on a Framework for Morphological Associative Memories   Cited by: 9 (self-citations: 0, other citations: 9)
Morphological associative memories (MAMs) are a highly novel class of artificial neural networks. Typical MAM instances include the real-domain MAM (RMAM), the complex-domain MAM (CMAM), the morphological bidirectional associative memory (MBAM), the fuzzy MAM (FMAM), the enhanced FMAM (EFMAM), and the fuzzy MBAM (FMBAM). Although these models have many attractive advantages and characteristics, they share the same morphological theoretical foundation and are essentially interrelated, so it is possible to unify them within a single MAM framework. At the same time, establishing a unified framework for associative memories is one of the current research priorities and difficulties. The authors therefore construct a framework for morphological associative memories. The paper first analyzes the algebraic structure of the MAM class, laying a reliable computational foundation for the framework; second, it analyzes the basic operations and common features of the MAM class, abstracts their essential properties and methods, and introduces a morphological associative memory paradigm and its operators; finally, it distills and proves the main framework theorems. The significance of the framework is twofold: (1) it unifies the MAM objects mathematically, so that their characteristics and essence can be revealed from a higher vantage point; (2) it helps in discovering new morphological associative memory methods, thereby solving more problems in associative memory, pattern recognition, fuzzy inference, and related areas.

10.
BAO Jian, YU Hongming. 《计算机应用》 (Journal of Computer Applications), 2009, 29(1): 230-233
To make neural network applications meet the embedded-system requirements of fast computation and compact storage, an optimization method for neural networks with fixed-point weights is proposed. The weights and thresholds of the neural network are represented as scaled fixed-point numbers with adjustable precision; the network is trained with a genetic algorithm, and the nonlinear continuous activation functions of the nodes are linearly discretized using the least-squares method. The optimized neural network is applied to touch-screen calibration. Experiments show that this method achieves higher calibration accuracy than traditional calibration methods.
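The scaled fixed-point weight representation can be sketched as a standard Qm.n scheme, where a real number is stored as an integer multiple of 2^-n; this is an assumption for illustration (the paper's adjustable-precision format may differ in detail):

```python
FRAC_BITS = 8           # assumed fractional precision (adjustable in the paper)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # real number -> scaled integer
    return int(round(x * SCALE))

def to_float(q):
    # scaled integer -> real number
    return q / SCALE

def fixed_dot(weights, inputs):
    # Integer-only multiply-accumulate, then one right-shift to renormalize;
    # the result is again a scaled integer.
    acc = sum(to_fixed(w) * to_fixed(x) for w, x in zip(weights, inputs))
    return acc >> FRAC_BITS
```

For instance, the dot product of weights [0.5, 0.25] with inputs [1.0, 2.0] evaluates to 1.0 using only integer additions, multiplications, and a shift.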

11.
Lattice algebra approach to single-neuron computation   Cited by: 1 (self-citations: 0, other citations: 1)
Recent advances in the biophysics of computation and neurocomputing models have brought to the foreground the importance of dendritic structures in a single neuron cell. Dendritic structures are now viewed as the primary autonomous computational units capable of realizing logical operations. By changing the classic simplified model of a single neuron with a more realistic one that incorporates the dendritic processes, a novel paradigm in artificial neural networks is being established. In this work, we introduce and develop a mathematical model of dendrite computation in a morphological neuron based on lattice algebra. The computational capabilities of this enriched neuron model are demonstrated by means of several illustrative examples and by proving that any single layer morphological perceptron endowed with dendrites and their corresponding input and output synaptic processes is able to approximate any compact region in higher dimensional Euclidean space to within any desired degree of accuracy. Based on this result, we describe a training algorithm for single layer morphological perceptrons and apply it to some well-known nonlinear problems in order to exhibit its performance.

12.
Design and analysis of maximum Hopfield networks   Cited by: 7 (self-citations: 0, other citations: 7)
Since McCulloch and Pitts presented a simplified neuron model (1943), several neuron models have been proposed. Among them, the binary maximum neuron model was introduced by Takefuji et al. and successfully applied to some combinatorial optimization problems. Takefuji et al. also presented a proof of the local minimum convergence of the maximum neural network. In this paper we discuss this convergence analysis and show that this model does not guarantee the descent of a large class of energy functions. We also propose a new maximum neuron model, the optimal competitive Hopfield model (OCHOM), that always guarantees and maximizes the decrease of any Lyapunov energy function. Funabiki et al. (1997, 1998) applied the maximum neural network to the n-queens problem and showed that this model presented the best overall performance among the existing neural networks for this problem. Lee et al. (1992) applied the maximum neural network to the bipartite subgraph problem, showing that the solution quality was superior to that of the best existing algorithm. However, simulation results on the n-queens problem and the bipartite subgraph problem show that the OCHOM is much superior to the maximum neural network in terms of solution quality and computation time.
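The binary maximum neuron model referred to above can be sketched as a grouped winner-take-all update: neurons are partitioned into groups, and within each group only the neuron with the largest input fires. This is an illustrative sketch of the maximum neuron rule only; the OCHOM update, which guarantees energy descent, differs:

```python
import numpy as np

def maximum_neuron_update(u, groups):
    # u: vector of net inputs; groups: partition of neuron indices.
    # In each group exactly one neuron -- the one with the largest input --
    # takes the value 1 (ties broken toward the lowest index).
    v = np.zeros_like(u)
    for g in groups:
        winner = g[int(np.argmax(u[g]))]
        v[winner] = 1.0
    return v
```

For example, with inputs [0.2, 0.9, 0.5, 0.1] and groups {0,1} and {2,3}, neurons 1 and 2 win their groups and the output is [0, 1, 1, 0].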

13.
Associative memories have emerged as a powerful computational neural network model for several pattern classification problems. Like most traditional classifiers, these models assume that the classes share similar prior probabilities. However, in many real-life applications the ratios of prior probabilities between classes are extremely skewed. Although the literature provides numerous studies that examine the performance degradation of renowned classifiers in different imbalanced scenarios, so far this effect has not been supported by a thorough empirical study in the context of associative memories. In this paper, we focus on the applicability of associative neural networks to the classification of imbalanced data. The key questions addressed here are whether these models perform better, the same, or worse than other popular classifiers, how the level of imbalance affects their performance, and whether distinct resampling strategies produce a different impact on the associative memories. In order to answer these questions and gain further insight into the feasibility and efficiency of the associative memories, a large-scale experimental evaluation with 31 databases, seven classification models and four resampling algorithms is carried out, along with a non-parametric statistical test to discover any significant differences between each pair of classifiers.
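One of the simplest resampling strategies evaluated in studies like this is random oversampling: duplicating minority-class examples until the class counts match. The abstract does not name its four algorithms, so the following is just a generic sketch of that idea:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    # Duplicate randomly chosen minority-class examples until every class
    # reaches the majority-class count.
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())
    out = []
    for l, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        out.extend((s, l) for s in group + extra)
    return out
```

Applied to a 4:1 imbalanced toy set, the result contains equal numbers of both classes.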

14.
Side weirs are structures often used in irrigation, sewer networks and flood protection. This study aims to obtain the discharge coefficients of sharp-crested rectangular side weirs in a straight channel by using artificial neural network models on data from a total of 843 experiments. The performance of feed-forward neural networks (FFNN) and radial basis neural networks (RBNN) is compared with multiple nonlinear and linear regression models. Root mean square error (RMSE), mean absolute error (MAE) and correlation coefficient (R) statistics are used to evaluate the models' performance. The comparison results indicate that neural computing techniques can be employed successfully in modeling the discharge coefficient. The FFNN is found to be better than the RBNN: the FFNN model, with an RMSE of 0.037 in the test period, is superior in estimating the discharge coefficient to the multiple nonlinear and linear regression models, with RMSEs of 0.054 and 0.106, respectively.
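The three evaluation statistics (RMSE, MAE, R) have standard definitions and can be computed directly; a plain-Python sketch:

```python
import math

def rmse(y, yhat):
    # root mean square error
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    # mean absolute error
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def corr(y, yhat):
    # Pearson correlation coefficient R
    n = len(y)
    my, mh = sum(y) / n, sum(yhat) / n
    cov = sum((a - my) * (b - mh) for a, b in zip(y, yhat))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    sh = math.sqrt(sum((b - mh) ** 2 for b in yhat))
    return cov / (sy * sh)
```

For observations [1, 2, 3] and predictions [1, 2, 5], the RMSE is sqrt(4/3) and the MAE is 2/3; a perfectly linear prediction gives R = 1.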

15.
In this paper, global exponential stability in the Lagrange sense for periodic neural networks with various activation functions is further studied. By constructing appropriate Lyapunov-like functions, we provide easily verifiable criteria for the boundedness and global exponential attractivity of periodic neural networks. These theoretical analyses can narrow the search field in optimization computation, associative memories and chaos control, and provide convenience for applications.

16.
This study compares the multi-period predictive ability of linear ARIMA models to nonlinear time delay neural network models in water quality applications. Comparisons are made for a variety of artificially generated nonlinear ARIMA data sets that simulate the characteristics of wastewater process variables and watershed variables, as well as two real-world wastewater data sets. While the time delay neural network model was more accurate for the two real-world wastewater data sets, the neural networks were not always more accurate than linear ARIMA for the artificial nonlinear data sets. In some cases of the artificial nonlinear data, where multi-period predictions are made, the linear ARIMA model provides a more accurate result than the time delay neural network. This study suggests that researchers and practitioners should carefully consider the nature and intended use of water quality data if choosing between neural networks and other statistical methods for wastewater process control or watershed environmental quality management.

17.
A widely used complex-valued activation function for complex-valued multistate Hopfield networks is revealed to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions. One is based on a multilevel sigmoid function, while the other is based on the characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function bring about improvements in network performance for a multistate associative memory. The advantage of the proposed networks over the complex-valued Hopfield networks with the multilevel step function is more pronounced when a complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256-gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.
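A common form of the multilevel step activation the abstract refers to quantizes the phase of the neuron's complex net input to one of K states on the unit circle. The sketch below uses one simple sector alignment (no half-sector offset); formulations in the literature vary on this detail:

```python
import cmath
import math

def multistate_activation(z, K):
    # Quantize the phase of z to one of K equally spaced unit-circle states:
    # the multilevel step function underlying multistate Hopfield neurons.
    step = 2 * math.pi / K
    k = int((cmath.phase(z) % (2 * math.pi)) / step) % K
    return cmath.exp(1j * k * step)
```

With K = 4 the states are 1, i, -1, -i; any input with phase in [0, pi/2) maps to the state 1, regardless of its magnitude.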

18.
In this work we use the continuous Hopfield network and the continuous bidirectional associative memory (BAM) system to develop two novel methods for structural analysis. The development of these techniques is based on the analogous relationship that results from comparing the energy functions of the above two models with that of the structural displacement method (i.e., the so-called stiffness matrix method), and it takes advantage of the parallel-computation characteristics that artificial neural networks have and classical numerical methods lack. Several examples related to structural deformation are used to illustrate the superiority of the BAM-based neural networks over traditional numerical methods and the Hopfield model, especially in the case of large-dimensional stiffness matrices.

19.
The constructive morphological neural network (CMNN) algorithm is a nonlinear neural network that combines mathematical morphology with traditional neural network models, and it has strong practical value. Its training algorithm is derived from morphological associative memory; in the testing phase, morphological operators are used to assign test samples to the hyperboxes obtained during training. Because this testing procedure cannot correctly classify samples that fall outside the hyperboxes, a fuzzy-lattice-based morphological neural network (FL-CMNN) was later proposed; it improves the classification performance of the original CMNN by judging the membership degree between a sample and a hyperbox, but it increases the complexity of the algorithm and its classification results are unstable. This paper proposes a lifting algorithm based on the constructive morphological neural network (LCMNN), which inherits the fast morphological operators of the original algorithm and can accurately classify samples that fall outside the hyperboxes. Numerical experiments show that, compared with several other algorithms, LCMNN achieves the best classification results while being simple to implement and requiring little computation time.
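The hyperbox membership test at the heart of the CMNN testing phase, plus one simple way to handle samples falling outside every hyperbox, can be sketched as follows; the overflow-distance rule is my illustration, not the LCMNN's actual assignment rule:

```python
def in_hyperbox(x, lo, hi):
    # A sample lies inside a hyperbox iff every coordinate is between the
    # box's minimum and maximum corners.
    return all(l <= xi <= h for xi, l, h in zip(x, lo, hi))

def nearest_box(x, boxes):
    # For a sample outside all hyperboxes, pick the box with the smallest
    # total coordinate-wise overflow (zero for coordinates already inside).
    def overflow(box):
        lo, hi = box
        return sum(max(l - xi, 0.0, xi - h) for xi, l, h in zip(x, lo, hi))
    return min(range(len(boxes)), key=lambda i: overflow(boxes[i]))
```

For instance, a point at [2.5, 0.5] is outside both the unit box and a box spanning [3, 0] to [4, 1], but it overflows the second box by less and would be assigned to it.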

20.
There are several neural network implementations using software, hardware, or a hardware/software co-design. This work proposes a hardware architecture to implement an artificial neural network (ANN) whose topology is the multilayer perceptron (MLP). In this paper, we exploit the parallelism of neural networks and allow on-the-fly changes of the number of inputs, the number of layers and the number of neurons per layer of the net. This reconfigurability permits any application of ANNs to be implemented using the proposed hardware. In order to reduce the processing time spent on arithmetic computation, a real number is represented as a fraction of integers. In this way, the arithmetic is limited to integer operations, performed by fast combinational circuits. A simple state machine is required to control sums and products of fractions. The sigmoid is used as the activation function in the proposed implementation. It is approximated by polynomials, whose underlying computation requires only sums and products. A theorem is introduced and proven to cover the arithmetic strategy of the computation of the activation function. Thus, the arithmetic circuitry used to implement the neuron's weighted sum is reused for computing the sigmoid. This resource sharing drastically decreased the total area of the system. After modeling and simulation for functionality validation, the proposed architecture was synthesized using reconfigurable hardware. The results are promising.
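The combination of fraction-of-integers arithmetic with a polynomial sigmoid approximation can be sketched in software with exact rational arithmetic. The particular polynomial below is a low-order Taylor approximation of the sigmoid around 0 chosen for illustration, not the polynomial used in the paper:

```python
from fractions import Fraction

def frac_sigmoid(x):
    # Sums and products only (as the hardware requires): approximate
    # sigmoid(x) ~ 1/2 + x/4 - x^3/48, its Taylor polynomial at 0,
    # evaluated with exact fraction-of-integers arithmetic.
    x = Fraction(x)
    return Fraction(1, 2) + x / 4 - x ** 3 / 48
```

At x = 0 this returns exactly 1/2, and near 0 it tracks the true sigmoid closely (e.g. about 0.729 vs. 0.731 at x = 1); a hardware design would bound the input range so that the approximation error stays acceptable.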
