Similar Literature
20 similar documents found.
1.
An open problem concerning the computational power of neural networks with symmetric weights is solved. It is shown that these networks possess the same computational power as general networks with asymmetric weights; i.e., they can compute any recursive function. The computations of these networks can be described as a minimization process of a certain energy function; it is shown that for uninitialized symmetric neural networks this process presents a Σ₂-complete problem.
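As a toy illustration of the energy-minimization view (the network size, weights, and update schedule below are hypothetical, not taken from the paper), the following Python sketch shows asynchronous updates in a symmetric network monotonically decreasing the energy E(s) = -1/2 s^T W s:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = rng.standard_normal((n, n))
    W = (W + W.T) / 2            # symmetric weights
    np.fill_diagonal(W, 0.0)     # no self-connections
    s = rng.choice([-1.0, 1.0], size=n)

    def energy(s):
        return -0.5 * s @ W @ s

    for _ in range(50):                  # asynchronous sign updates
        i = rng.integers(n)
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0

    print("final energy:", energy(s))    # a local minimum of E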

2.
The mathematical essence and structure of feedforward neural networks are investigated in this paper, and their interpolation mechanisms are explored. For example, the well-known result that a neural network is a universal approximator follows naturally from the interpolative representations. Finally, learning algorithms for feedforward neural networks are discussed.
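A minimal sketch of the interpolation view (random hidden weights; a hypothetical setup, not the paper's construction): with as many sigmoidal hidden units as samples, the output weights solving H c = y make the network reproduce every sample exactly.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 7)          # sample inputs
    y = np.sin(2 * np.pi * x)             # sample targets
    a = rng.standard_normal(7)            # hidden weights (hypothetical)
    b = rng.standard_normal(7)            # hidden biases (hypothetical)

    H = 1.0 / (1.0 + np.exp(-(np.outer(x, a) + b)))   # hidden activations
    c = np.linalg.solve(H, y)                          # output weights

    print(np.allclose(H @ c, y))          # the network reproduces all samples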

3.
4.
There has recently been a tremendous rebirth of interest in neural networks, ranging from distributed and localist spreading-activation networks to semantic networks with symbolic marker-passing. Ideally these networks would be encoded in dedicated massively-parallel hardware that directly implements their functionality. Cost and flexibility concerns, however, necessitate the use of general-purpose machines that simulate neural networks, especially in the research stages in which various models are being explored and tested. Issues of a simulation's timing and control become more critical when models are made up of heterogeneous networks in which nodes have different processing characteristics and cycling rates, or which are made up of modular, interacting sub-networks. We have developed a simulation environment to create, operate, and control these types of connectionist networks. This paper describes how massively-parallel heterogeneous networks can be simulated on serial machines as efficiently as possible and how large-scale simulations could be handled on current SIMD parallel machines, and outlines how the simulator could be implemented on its ideal hardware, a large-scale MIMD parallel machine.
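A minimal sketch of one way to serialize such heterogeneous networks (the node model and rates are hypothetical): a global event queue orders node updates by simulated time, so nodes with different cycling rates interleave correctly on a serial machine.

    import heapq

    def simulate(nodes, t_end):
        # nodes: list of (period, update_fn); each fires every `period` time units
        agenda = [(period, i) for i, (period, _) in enumerate(nodes)]
        heapq.heapify(agenda)
        while agenda:
            t, i = heapq.heappop(agenda)
            if t > t_end:
                break
            period, update = nodes[i]
            update(t)                         # node-specific processing
            heapq.heappush(agenda, (t + period, i))

    simulate([(1.0, lambda t: print("fast node at", t)),
              (2.5, lambda t: print("slow node at", t))], 5.0)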

5.
The paper describes a methodology, based on evolutionary programming, for constructing transfer functions for the hidden layer of a back-propagation network. The method allows the construction of almost any mathematical form. It is tested using four benchmark classification problems from the well-known machine intelligence problems repository maintained by the University of California, Irvine. It was found that functions other than the commonly used sigmoidal function can perform well as hidden-layer transfer functions. Three of the four problems showed improved test results when these evolved functions were used.
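A hedged sketch of the evolutionary-programming idea (the candidate function forms and the fitness stand-in are assumptions, not the paper's benchmarks): a population of parameterized transfer-function candidates is mutated and selected on error.

    import math, random

    random.seed(0)
    FORMS = [lambda x, p: math.tanh(p * x),
             lambda x, p: x * math.exp(-p * x * x),    # Gaussian-modulated
             lambda x, p: x / (1.0 + abs(p * x))]      # softsign-like

    def fitness(form, p):
        # stand-in for training a network with this transfer function;
        # here: squared error of fitting sin(x) at a few points
        pts = [i / 4.0 for i in range(-8, 9)]
        return sum((form(x, p) - math.sin(x)) ** 2 for x in pts)

    pop = [(random.choice(FORMS), random.uniform(0.1, 2.0)) for _ in range(20)]
    for _ in range(50):
        children = [(f, max(0.01, p + random.gauss(0, 0.1))) for f, p in pop]
        pop = sorted(pop + children, key=lambda fp: fitness(*fp))[:20]

    print("best fitness:", fitness(*pop[0]))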

6.
Hongyong, Guanglan. Neurocomputing, 2007, 70(16-18): 2924.
In this paper, a discrete-time bidirectional associative memory (BAM) neural network model is considered. By employing coincidence degree theory and a Halanay-type inequality technique, we give sufficient conditions ensuring the existence and global exponential stability of periodic solutions for discrete-time bidirectional networks. An example with numerical simulations is provided to show the correctness of our analysis.
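For orientation, a toy discrete-time BAM iteration (the weights and nonlinearity are hypothetical; the paper's analysis concerns periodic solutions of this kind of two-layer alternating dynamics):

    import numpy as np

    W = np.array([[1.0, -1.0],
                  [0.5,  1.0],
                  [-1.0, 0.5]])           # 3x2 connection matrix (hypothetical)
    x = np.array([1.0, -1.0, 1.0])        # X-layer state
    for t in range(20):                   # alternating layer updates
        y = np.tanh(W.T @ x)              # X -> Y through W^T
        x = np.tanh(W @ y)                # Y -> X through W
    print(x, y)                           # settles toward a fixed or periodic orbit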

7.
In this article, we describe an innovative cipher design based on recurrent neural networks. Well-known characteristics of neural networks, such as their parallel distributed structure, high computational power, and ability to learn and represent knowledge as a black box, are applied to cryptography. The proposed cipher has a relatively simple architecture and, by incorporating neural networks, removes the constraint on the length of the secret key. The design of the symmetric cipher is described in detail and its security is analyzed. The cipher is robust against various cryptanalytic attacks and provides efficient data integrity and authentication services. Simulation results are presented to validate the effectiveness of the proposed design.

8.
In this paper, a constructive one-hidden-layer network is introduced in which each hidden unit employs a polynomial activation function different from those of the other units. Both structure-level and function-level adaptation methodologies are used in constructing the network. The function-level adaptation scheme ensures that the growing, or constructive, network has a different activation function for each neuron, so that the network can capture the underlying input-output map more effectively. The activation functions considered are orthonormal Hermite polynomials. It is shown through extensive simulations that the proposed network yields improved performance compared to networks with identical sigmoidal activation functions.
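A sketch of the per-unit activations (the orthonormal Hermite functions; the network wiring around them is not shown here): unit n uses h_n(x) = H_n(x) e^{-x²/2} / sqrt(2^n n! √π), computed via the physicists' Hermite recurrence.

    import math
    import numpy as np

    def hermite_activation(n, x):
        # orthonormal Hermite function h_n(x)
        h_prev, h = np.ones_like(x), 2 * x        # H_0, H_1 (physicists')
        if n == 0:
            h = h_prev
        else:
            for k in range(1, n):                 # H_{k+1} = 2x H_k - 2k H_{k-1}
                h_prev, h = h, 2 * x * h - 2 * k * h_prev
        norm = math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
        return h * np.exp(-x * x / 2.0) / norm

    x = np.linspace(-3, 3, 5)
    hidden = [hermite_activation(n, x) for n in range(4)]   # one function per unit
    print(np.round(hidden, 3))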

9.
An important consideration when applying neural networks to pattern recognition is their sensitivity to weight perturbation and input errors. In this paper, we analyze the sensitivity of single-hidden-layer networks with threshold activation functions. Under weight perturbation or input errors, the probability of an inversion error at an output neuron is derived as a function of the trained weights, the input pattern, and the variance of the weight perturbation or the bit-error probability of the input pattern. The derived results are verified by simulation of a Madaline recognizing handwritten digits. The results show that the sensitivity of trained networks differs markedly from that of networks with random weights.
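The derived probabilities can be checked empirically; a Monte Carlo sketch for a single threshold output neuron (the weights, input pattern, and noise variance are all hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)
    w = rng.standard_normal(16)            # trained weights (hypothetical)
    x = rng.choice([-1.0, 1.0], size=16)   # a fixed input pattern
    clean = np.sign(w @ x)

    sigma, trials = 0.3, 10000
    flips = 0
    for _ in range(trials):
        w_pert = w + rng.normal(0.0, sigma, size=16)   # perturbed weights
        flips += np.sign(w_pert @ x) != clean
    print("estimated inversion-error probability:", flips / trials)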

10.
In this paper, we study the problem of minimizing a multilinear objective function over the discrete set {0, 1}^n. This extends earlier work on minimizing a quadratic function over {0, 1}^n. A gradient-type neural network is proposed to perform the optimization. A novel feature of the network is the introduction of a so-called bias vector. The network is operated in the high-gain region of the sigmoidal nonlinearities. The following comprehensive theorem is proved: for all sufficiently small bias vectors except those in a set of measure zero, for all sufficiently large sigmoidal gains, and for all initial conditions except those in a nowhere dense set, the state of the network converges to a local minimum of the objective function. This considerably generalizes the earlier results for quadratic objective functions, and the proofs here are completely rigorous. The neural-network approach to optimization is briefly compared with the interior-point methods of nonlinear programming, as exemplified by Karmarkar's algorithm. Some problems for future research are suggested.
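A hedged sketch of such dynamics (the multilinear objective, sigmoid gain, step size, and bias magnitude below are illustrative assumptions, not the paper's construction): gradient descent on u = σ(gain·v) with a small generic bias vector b.

    import numpy as np

    def f(u):                       # a multilinear objective (example)
        return u[0] * u[1] * u[2] - 2.0 * u[0] * u[2] + u[1]

    def grad_f(u):
        return np.array([u[1] * u[2] - 2.0 * u[2],
                         u[0] * u[2] + 1.0,
                         u[0] * u[1] - 2.0 * u[0]])

    rng = np.random.default_rng(3)
    gain, dt = 20.0, 0.05
    b = 1e-3 * rng.standard_normal(3)      # small generic bias vector
    v = rng.standard_normal(3)

    for _ in range(2000):
        u = 1.0 / (1.0 + np.exp(-gain * v))
        v += dt * (-grad_f(u) + b)         # gradient dynamics

    print(np.round(u), f(np.round(u)))     # converges to a local minimum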

11.
In this paper, the ability of a binary neural network, comprising only neurons with zero thresholds and binary weights, to map given samples of a Boolean function is studied. A mathematical model describing a network with such restrictions is developed and shown to be quite amenable to algebraic manipulation. A key feature of the model is that it replaces the two input and output variables with a single "normalized" variable. The model is then used to provide a priori criteria, stated in terms of the new variable, that a given Boolean function must satisfy in order to be mapped by a network having one or two layers. These criteria provide necessary, and in the case of a one-layer network sufficient, conditions for samples of a Boolean function to be mapped by a binary neural network with zero thresholds. It is shown that the necessary conditions imposed by the two-layer network are, in some sense, minimal.

12.
We introduce an asset-allocation framework based on active control of the portfolio's value-at-risk. Within this framework, we compare two paradigms for making the allocation using neural networks. The first uses the network to forecast asset behavior, in conjunction with a traditional mean-variance allocator for constructing the portfolio. The second uses the network to make the portfolio allocation decisions directly. We consider a method for performing soft input-variable selection and show its considerable utility. We use model-combination (committee) methods to systematize the choice of hyperparameters during training. We show that committees using both paradigms significantly outperform the benchmark market performance.
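As an illustration of the VaR-control ingredient (a standard parametric VaR formula with made-up forecasts; not necessarily the paper's estimator): scale the allocation whenever the portfolio VaR exceeds the target.

    import numpy as np

    mu = np.array([0.08, 0.03])                    # forecast returns (hypothetical)
    cov = np.array([[0.04, 0.01], [0.01, 0.02]])   # forecast covariance (hypothetical)
    w = np.array([0.6, 0.4])                       # raw allocation

    z = 1.645                                      # 95% one-sided Normal quantile
    var = z * np.sqrt(w @ cov @ w) - w @ mu        # parametric value-at-risk
    target = 0.10
    w_scaled = w * min(1.0, target / var)          # shrink exposure if VaR too high
    print(var, w_scaled)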

13.
A new fuzzy neural network model is established, based on fuzzy arithmetic between polygonal fuzzy numbers and a new extension principle. It is proved that when the inputs are negative fuzzy numbers, the corresponding three-layer feedforward polygonal fuzzy network can serve as a universal approximator for continuous fuzzy functions, and the equivalent conditions that the continuous fuzzy functions must satisfy in this case are given. Finally, a simulation example is provided.

14.
A novel supervised learning method is proposed that combines linear discriminant functions with neural networks, resulting in a tree-structured hybrid architecture. Through constructive learning, the binary-tree hierarchical architecture is generated automatically by a controlled growing process for a specific supervised learning task. Unlike a classic decision tree, the linear discriminant functions are employed only at the intermediate levels of the tree, to heuristically partition a large, complicated task into several smaller, simpler subtasks; these subtasks are then handled by component neural networks at the leaves of the tree. Growing and credit-assignment algorithms are developed to serve the constructive learning of the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g., the multilayer perceptron) to large-scale problems. We have applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. Simulation results show that the proposed method yields better results and faster training than the multilayer perceptron.
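A structural sketch of the hybrid (the split is a least-squares linear discriminant and the leaf training is a stand-in; both simplify the paper's growing and credit-assignment algorithms):

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.standard_normal((200, 2))
    y = (np.sin(3 * X[:, 0]) + X[:, 1] > 0).astype(float)

    A = np.c_[X, np.ones(200)]                     # inputs with a bias column
    w = np.linalg.lstsq(A, 2 * y - 1, rcond=None)[0]
    side = A @ w > 0                               # linear split at the root

    for mask in (side, ~side):                     # one leaf network per side
        # stand-in for training a small MLP on the subtask X[mask], y[mask]
        print("subtask size:", mask.sum())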

15.
Deals with the computational complexity of loading a fixed-architecture neural network with a set of positive and negative examples. This is the first hardness result for loading a simple three-node architecture that does not consist of binary-threshold neurons but instead uses a particular continuous activation function commonly found in the neural-network literature. The authors observe that the loading problem is solvable in polynomial time if the input dimension is constant; otherwise, any learning algorithm based on a particular fixed architecture faces severe computational barriers. Similar theorems had previously been proved by Megiddo and by Blum and Rivest, but only for binary-threshold networks. The authors' theoretical results lend further support to the use of incremental (architecture-changing) techniques for training networks rather than fixed architectures. Furthermore, they imply hardness of learnability in the probably-approximately-correct sense as well.

16.
H. Meijer, S. G. Akl. Computing, 1988, 40(1): 9-17.
Parallel algorithms are examined for a number of fundamental computational problems. All algorithms have as a basic operation the addition of k-bit integers. For each problem, we present a solution in the form of a logical circuit for which the product of the computation time and the number of gates used is smaller than that of the best previously known algorithm.
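For concreteness, a classical fast-addition scheme in this spirit (carry-lookahead via generate/propagate; the carry scan is shown serially here, though in a circuit it is a logarithmic-depth parallel prefix — an illustration, not the paper's circuits):

    def add_kbit(a_bits, b_bits):
        # bits are little-endian lists of 0/1
        g = [a & b for a, b in zip(a_bits, b_bits)]       # generate
        p = [a ^ b for a, b in zip(a_bits, b_bits)]       # propagate
        carry, out = 0, []
        for gi, pi in zip(g, p):                          # carry scan
            out.append(pi ^ carry)
            carry = gi | (pi & carry)
        return out + [carry]

    print(add_kbit([1, 1, 0, 1], [1, 0, 1, 1]))   # 11 + 13 = 24 -> [0, 0, 0, 1, 1]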

17.
P.A., C., M., J.C. Neurocomputing, 2009, 72(13-15): 2731.
This paper proposes a hybrid neural network model that combines different projection transfer functions (sigmoidal units, SU; product units, PU) and kernel functions (radial basis functions, RBF) in the hidden layer of a feed-forward neural network. An evolutionary algorithm is adapted to this model and applied to learn the architecture, weights, and node typology. Three combined basis-function models are proposed, covering all pairs that can be formed from SU, PU, and RBF nodes: product-sigmoidal unit (PSU) neural networks, product-radial basis function (PRBF) neural networks, and sigmoidal-radial basis function (SRBF) neural networks. These are compared with the corresponding pure models: the product unit neural network (PUNN), the multilayer perceptron (MLP), and the RBF neural network. The proposals are tested on ten well-known benchmark classification problems from machine learning. Combined functions using projection and kernel functions are found to be better than pure basis functions for classification on several datasets.
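A minimal sketch of a combined-basis hidden layer (all weights, centers, and the output combination are hypothetical): one sigmoidal, one product, and one radial unit feeding a linear output.

    import numpy as np

    def hidden(x, w_su, w_pu, c_rbf, r):
        su = 1.0 / (1.0 + np.exp(-x @ w_su))            # sigmoidal unit
        # product unit: prod_i x_i^{w_i}; abs() keeps the toy example real-valued
        pu = np.prod(np.power(np.abs(x) + 1e-9, w_pu))
        rbf = np.exp(-np.sum((x - c_rbf) ** 2) / r**2)  # radial basis unit
        return np.array([su, pu, rbf])

    x = np.array([0.5, -1.2])
    h = hidden(x, np.array([1.0, -0.5]), np.array([0.3, 0.7]),
               np.array([0.0, -1.0]), 1.0)
    print(h @ np.array([0.4, 0.3, 0.3]))                # linear output layer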

18.

Accurate vegetation models usually rely on experimental data obtained through measurement campaigns. RET and dRET models now provide a realistic characterization of vegetation volumes, including not only in-excess attenuation but also scattering, diffraction, and depolarization. Nevertheless, both approaches characterize the forest medium by a range of parameters, and thus a simple parameter-extraction method based on propagation measurements is required. Moreover, when dealing with experimental data, two common problems must usually be overcome: scaling the vegetation-mass parameters to different dimensions, and the scarcity of frequencies available in the experimental data set. This paper proposes the use of artificial neural networks as accurate and reliable tools able to scale vegetation parameters across physical dimensions and to predict them for new frequencies. The proposal yields an RMS error lower than 1 dB when compared with unbiased measured data, leading to an accurate parameter-extraction method that is simple enough not to increase the computational cost of the model.

19.
20.
We generalize directed loop networks to loop-symmetric networks with N nodes in which each node has in-degree and out-degree k, subject to the condition that 2k does not exceed N. We show that, by proper selection of links, one can obtain generalized loop networks with optimal or near-optimal diameter and connectivity. The optimized diameter is less than k⌈N^(1/k)⌉, where ⌈x⌉ denotes the ceiling of x. We also show that these networks are rather compact, in that the diameter is no more than twice the average distance. Roughly (1/2)(k-1)N^(1/k) nodes can be removed such that the network of remaining nodes is still strongly connected, provided all remaining nodes retain at least one incoming and one outgoing link.
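The diameter bound is easy to check numerically for a given skip set (the circulant construction below is one standard realization of a loop network, with hypothetical parameters); since the network is vertex-transitive, BFS from a single node suffices.

    from collections import deque

    def diameter(N, S):
        dist = [-1] * N          # distances from node 0; vertex-transitivity
        dist[0] = 0              # makes this eccentricity the diameter
        q = deque([0])
        while q:
            u = q.popleft()
            for s in S:
                v = (u + s) % N
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist)

    print(diameter(100, [1, 10]))   # N=100, k=2; compare with k*ceil(N**(1/k)) = 20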
