1.
Reducibility concepts are fundamental in complexity theory. Usually, they are defined as follows: a problem Π is reducible
to a problem Σ if Π can be computed using a program or device for Σ as a subroutine. However, this approach has its limitations
when restricted computational models are considered. In the case of ordered binary decision diagrams (OBDDs), it permits
little more than the use of the almost unmodified original program as the subroutine.
Here we propose a new reducibility for OBDDs: we say that Π is reducible to Σ if an OBDD for Π can be constructed by applying
a sequence of elementary operations to an OBDD for Σ. In contrast to traditional reducibility notions, the newly introduced
reduction reflects the real needs of a reducibility concept in the context of OBDD-based complexity classes: it
allows the reduction of problems to others that are computable with the same amount of OBDD resources, and it gives a tool
to carry over lower and upper bounds.
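As a toy illustration of what an "elementary operation" on an OBDD can look like, here is a minimal Python sketch (my own construction, not the paper's actual reduction): restricting a variable of an OBDD for one function yields an OBDD for a related function, which is the flavor of transformation such a reducibility would compose.

```python
# Minimal OBDD sketch. Nodes are (var, low, high) tuples; leaves are booleans.
# "restrict" is one example of an elementary operation a reduction could apply
# to an OBDD for Sigma to obtain an OBDD for Pi. Illustrative only.

def restrict(node, var, value):
    """Fix variable `var` to `value`, returning a (possibly smaller) OBDD."""
    if isinstance(node, bool):
        return node
    v, low, high = node
    if v == var:
        return restrict(high if value else low, var, value)
    return (v, restrict(low, var, value), restrict(high, var, value))

def evaluate(node, assignment):
    """Evaluate an OBDD under a variable assignment (dict var -> bool)."""
    if isinstance(node, bool):
        return node
    v, low, high = node
    return evaluate(high if assignment[v] else low, assignment)

# OBDD for x1 AND x2 with variable order x1 < x2
and_bdd = ("x1", False, ("x2", False, True))
# Restricting x1 = True yields an OBDD computing x2 alone
sub = restrict(and_bdd, "x1", True)
```

The point of the paper's notion is that such operations stay within OBDD resource bounds, unlike a general subroutine call.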
The authors are grateful to DAAD Acciones Integradas, Grant No. 322-ai-e-dr.
2.
A. Yu. Dorogov 《Cybernetics and Systems Analysis》2000,36(4):512-519
Methods of construction of structural models of fast two-layer neural networks are considered. The methods are based on the
criteria of minimum computing operations and maximum degrees of freedom. Optimal structural models of two-layer neural networks
are constructed. Illustrative examples are given.
Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 47–56, July–August, 2000.
3.
CAO FeiLong, ZHANG YongQuan & XU ZongBen (College of Science, China Jiliang University, Hangzhou, China; Institute of Information System Sciences, Xi'an Jiaotong University, Xi'an) 《中国科学F辑(英文版)》2009,52(8):1321-1327
Let SF_d and Π_{ψ,n,d} = {∑_{j=1}^{n} b_j ψ(ω_j·x + θ_j) : b_j, θ_j ∈ R, ω_j ∈ R^d} be the set of periodic, Lebesgue square-integrable functions and the set of feedforward neural network (FNN) functions, respectively. Denote by dist(SF_d, Π_{ψ,n,d}) the deviation of the set SF_d from the set Π_{ψ,n,d}. A main purpose of this paper is to estimate this deviation. In particular, based on Fourier transforms and the theory of approximation, a lower estimate for dist(SF_d, Π_{ψ,n,d}) of order (n log₂ n)^{-1/2} is proved. …
4.
Applications in a wide variety of industries require access to multiple heterogeneous distributed databases. One step in
heterogeneous database integration is semantic integration: identifying corresponding attributes in different databases that
represent the same real world concept. The rules of semantic integration cannot be ‘pre-programmed’, since the information
to be accessed is heterogeneous and attribute correspondences can be fuzzy. Manually comparing all possible pairs of attributes
is an unreasonably large task. We have applied artificial neural networks (ANNs) to this problem. Metadata describing attributes
is automatically extracted from a database to represent their ‘signatures’. The metadata is used to train neural networks
to find similar patterns of metadata describing corresponding attributes from other databases. In our system, the rules to
determine corresponding attributes are discovered through machine learning. This paper describes how we applied neural network techniques in a database integration problem
and how we represent an attribute with its metadata as discriminators. The paper focuses on our experiments on the effectiveness of neural networks and of each discriminator. We also discuss difficulties
of using neural networks for this problem and our wish list for the Machine Learning community.
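The core idea, metadata "signatures" compared for similarity, can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's trained networks: the discriminator fields, attribute names, and the distance-based similarity are all hypothetical.

```python
import numpy as np

# Each attribute is summarized by a few metadata "discriminators";
# the fields and numbers here are hypothetical, not from the paper.
def signature(is_numeric, avg_len, frac_null, frac_unique):
    return np.array([float(is_numeric), avg_len / 32.0, frac_null, frac_unique])

# Two attributes are candidate correspondences when their signatures are close.
def similarity(a, b):
    return 1.0 / (1.0 + np.linalg.norm(a - b))

cust_id_db1 = signature(1, 8, 0.00, 1.00)    # "customer_id" in database 1
client_id_db2 = signature(1, 8, 0.01, 0.99)  # "client_id" in database 2
comment_db2 = signature(0, 32, 0.40, 0.30)   # "comment" in database 2

# The attribute with the closest signature is proposed as the match.
best = max([client_id_db2, comment_db2],
           key=lambda s: similarity(cust_id_db1, s))
```

In the paper's system, a neural network learns which metadata patterns indicate correspondence instead of using a fixed distance.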
Received 18 February 1999 / Revised 22 April 1999 / Accepted in revised form 20 November 1999
5.
The article presents a method of accelerating neural network learning by the backpropagation algorithm and by one of its fastest modifications, the Levenberg–Marquardt method. Learning is accelerated by introducing a ‘single-direction’ coefficient for the change of x when calculating its new values (the number of iterations is decreased by approximately 30%). Simulation results of training neural networks with both the classic method and the accelerated procedure are presented.
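One plausible reading of a "single-direction" coefficient (my interpretation, not the authors' exact rule) is: when an update keeps moving a parameter in the same direction as the previous step, its step is amplified. A minimal sketch on a one-dimensional quadratic:

```python
# Hedged sketch of a direction-consistency acceleration heuristic
# (illustrative; the paper's coefficient and update rule are not reproduced).
def accelerated_descent(grad, x, lr=0.1, boost=1.5, steps=100):
    prev = 0.0
    for _ in range(steps):
        g = grad(x)
        step = lr * g
        if g * prev > 0:      # same sign as the previous gradient: accelerate
            step *= boost
        x -= step
        prev = g
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = accelerated_descent(lambda x: 2 * (x - 3), x=0.0)
```

On monotone approaches the boost fires every step, which is where the reported reduction in iteration count would come from.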
6.
Maria Nazaré Munari Angeloni Hahne, Fernando Mendes de Azevedo 《Neural computing & applications》2008,17(1):65-74
This paper presents a methodology that uses evolutionary learning in training ‘A’ model networks, a topology based on Interactive
Activation and Competition (IAC) neural networks. IAC networks show local knowledge and processing units clustered in pools.
The connections among units may assume only the values 1, 0, or −1. In contrast, the ‘A’ model network uses values in the interval [−1, 1]. This feature provides a wider range of applications for the network, including problems that do not involve mutually exclusive concepts. However, there is no algorithm that adjusts the network weights while preserving the desired characteristics of the original network. Accordingly, we propose a new methodology that uses genetic algorithms to obtain the correct weight set for this network. Two examples illustrate the proposed method. The findings are considered consistent and generic enough to allow further applications to similar classes of problems suited to ‘A’ model IAC networks.
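A genetic algorithm for real-valued weights clipped to [−1, 1] can be sketched compactly. This is a toy version under assumed choices (the target weight set, fitness function, population sizes, and mutation scheme are all mine, not the paper's):

```python
# Minimal elitist GA for weights constrained to [-1, 1] (illustrative only).
import random

random.seed(1)
TARGET = [0.5, -0.25, 0.75]          # hypothetical "correct" weight set

def fitness(w):
    # Negative squared error to the target: higher is better.
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def mutate(w, rate=0.3):
    # Gaussian perturbation, clipped back into [-1, 1].
    return [min(1.0, max(-1.0, x + random.gauss(0, 0.1)))
            if random.random() < rate else x for x in w]

pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                # elitism: keep the best individuals
    pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(pop, key=fitness)
```

The clipping step is what keeps evolved weights inside the interval the ‘A’ model requires.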
7.
Recently, a convex incremental algorithm (CI-ELM) has been proposed in Huang and Chen (Neurocomputing 70:3056–3062, 2007), which randomly chooses hidden neurons and then analytically determines the output weights connecting with the hidden layer
and the output layer. Though hidden neurons are generated randomly, the network constructed by CI-ELM is still based on the
principle of universal approximation. The random approximation theory breaks through the limitation of most conventional theories,
eliminating the need for tuning hidden neurons. However, due to this randomness, some of the neurons contribute
little to decreasing the residual error, which eventually increases the complexity and computational cost of the network. Thus,
CI-ELM cannot give a precise convergence rate. Based on Lee's results (Lee et al., IEEE Trans Inf Theory 42(6):2118–2132,
1996), we first show the convergence rate of a maximum CI-ELM, and then systematically analyze the convergence rate of an enhanced
CI-ELM. Different from CI-ELM, the hidden neurons of the two algorithms are chosen by following the maximum or optimality
principle under the same structure as CI-ELM. Further, the proof process also demonstrates that our algorithms achieve smaller
residual errors than CI-ELM. Since the proposed networks remove these “useless” neurons, they improve the efficiency
of neural networks. Experimental results on benchmark regression problems support our conclusions.
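The "maximum" selection idea, among several random candidate neurons, keep the one that best reduces the current residual and set its output weight analytically, can be sketched as follows. This is a sketch in the spirit of the abstract, not the authors' exact algorithm; the target function, candidate count, and sigmoid neuron are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = np.sin(3 * X[:, 0])             # toy target function
residual = y.copy()
outputs = []

def random_neuron():
    # Randomly parameterized sigmoid hidden neuron, as in ELM-style methods.
    w, b = rng.normal(size=1), rng.normal()
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(20):
    candidates = [random_neuron() for _ in range(10)]
    # "Maximum" step: keep the candidate best aligned with the residual.
    h = max(candidates, key=lambda h: abs(h @ residual) / np.linalg.norm(h))
    beta = (h @ residual) / (h @ h)  # analytic output weight (least squares)
    residual = residual - beta * h   # projection never increases the residual
    outputs.append(beta)

final_error = np.linalg.norm(residual)
```

Because each added neuron is the best of its candidate pool, no "useless" neurons with negligible β accumulate, which is the efficiency point the abstract makes.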
This work was funded by Singapore MOE AcRF Tier 1 grant WBS No. R 252-000-221-112.
8.
Tan Qingping 《计算机科学技术学报》1997,12(3):231-243
This paper presents a method to define a set of mutually recursive inductive types, and develops a higher-order unification algorithm for λΠΣ extended with inductive types. The algorithm is an extension of Elliott's algorithm for λΠΣ. The notion of normal forms plays a vital role in higher-order unification. Weak head normal forms in the extended type theory are defined to reveal the ultimate “top-level structures” of fully normalized terms and types. Unification transformation rules are designed to deal with inductive types, a recursive operator, and its reduction rule. The algorithm can construct recursive functions automatically.
9.
Motivated by the slow learning properties of multilayer perceptrons (MLPs) which utilize computationally intensive training
algorithms, such as the backpropagation learning algorithm, and can get trapped in local minima, this work deals with ridge
polynomial neural networks (RPNN), which maintain fast learning properties and powerful mapping capabilities of single layer
high order neural networks. The RPNN is constructed from a number of increasing orders of Pi–Sigma units, which are used to
capture the underlying patterns in financial time series signals and to predict future trends in the financial market. In
particular, this paper systematically investigates a method of pre-processing the financial signals in order to reduce the
influence of their trends. The performance of the networks is benchmarked against the performance of MLPs, functional link
neural networks (FLNN), and Pi–Sigma neural networks (PSNN). Simulation results clearly demonstrate that RPNNs generate higher
profit returns with fast convergence on various noisy financial signals.
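The Pi–Sigma unit that RPNNs stack can be stated in a few lines: an output neuron multiplies ("pi") the outputs of k linear summing ("sigma") units. A minimal sketch (my own toy weights, not from the paper):

```python
import numpy as np

def pi_sigma(x, W, b):
    """Order-k Pi-Sigma unit: W is (k, d) weights, b is (k,) biases.
    Returns the product of the k linear combinations of the input."""
    sums = W @ x + b          # k "sigma" (summing) units
    return np.prod(sums)      # "pi": multiply their outputs together

# Order-2 unit on a 2-d input: (w1 . x + b1) * (w2 . x + b2)
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0])
out = pi_sigma(np.array([2.0, 3.0]), W, b)   # (2) * (3)
```

An RPNN sums Pi–Sigma units of increasing order, giving high-order terms without the combinatorial weight explosion of general higher-order networks.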
10.
Leen Torenvliet 《Theory of Computing Systems》1988,21(1):99-123
In this paper we discuss the concepts of immunity and simplicity in levels of the relativized polynomial-time hierarchy just above P. We consider various diagonalization techniques with which oracle sets can be constructed relative to which strong separations
between language classes in the first two levels of this hierarchy are established. In particular, we build oracle sets for
separation of relativized Σ₂^P from relativized NP with immunity, of relativized Σ₂^P from relativized NP with bi-immunity, of relativized Σ₂^P from relativized Δ₂^P with immunity, of relativized Π₂^P from relativized Δ₂^P with immunity, and finally of relativized Σ₂^P from relativized Π₂^P with simplicity.
11.
This paper presents the results of a computer simulation that combined a small network of spiking neurons with linear quadratic
regulator (LQR) control to solve the acrobot swing-up and balance task. To our knowledge, this task had not previously been
solved with spiking neural networks. Input to the network was drawn from the state of the acrobot, and output was torque,
either applied directly to the actuated joint or used to switch in an LQR controller designed for balance. The neural
network's weights were tuned using a (μ + λ)-evolution strategy without recombination, and the neurons' parameters were chosen
to roughly approximate biological neurons.
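A (μ + λ)-evolution strategy without recombination is simple to state: each generation, λ mutated offspring are generated from μ parents, and parents and offspring compete together for the μ survivor slots. A minimal sketch with a toy fitness (the acrobot reward, population sizes, and mutation scale are my stand-ins):

```python
# Minimal (mu + lambda)-ES without recombination (illustrative only).
import random

random.seed(0)
MU, LAM = 5, 20

def fitness(w):                      # toy stand-in for the task reward
    return -sum(x * x for x in w)    # maximized at w = 0

pop = [[random.uniform(-2, 2) for _ in range(4)] for _ in range(MU)]
for _ in range(100):
    # Each offspring is a Gaussian mutation of one parent (no recombination).
    offspring = [[x + random.gauss(0, 0.1) for x in random.choice(pop)]
                 for _ in range(LAM)]
    # "(mu + lambda)": parents and offspring compete jointly for survival.
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:MU]

best = pop[0]
```

The "+" selection makes the strategy elitist: the best weight vector found so far can never be lost, which suits slow, noisy fitness evaluations like simulated control runs.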
12.
Current analyses of complex biological networks focus either on their global statistical connectivity properties (e.g. topological
path lengths and nodes connectivity ranks) or the statistics of specific local connectivity circuits (motifs). Here we present
a different approach – Functional Topology, to enable identification of hidden topological and geometrical fingerprints of
biological computing networks that afford their functioning – the form-function fingerprints. To do so we represent the network
structure in terms of three matrices: 1. Topological connectivity matrix – each row (i) is the shortest topological path lengths of node i with all other nodes; 2. Topological correlation matrix – the element (i,j) is the correlation between the topological connectivity of nodes (i) and (j); and 3. Weighted graph matrix – in this case the links represent the conductance between nodes that can be simply one over
the geometrical length, the synaptic strengths in case of neural networks or other quantity that represents the strengths
of the connections. Various methods (e.g. clustering algorithms, random matrix theory, eigenvalue spectra) can be
used to analyze these matrices; here we use the newly developed functional holography approach, which is based on clustering
of the matrices following their collective normalization. We illustrate the approach by analyzing networks of different topological
and geometrical properties: 1. Artificial networks, including – random, regular 4-fold and 5-fold lattice and a tree-like
structure; 2. Cultured neural networks: A single network and a network composed of three linked sub-networks; and 3. Model
neural network composed of two overlapping sub-networks. Using these special networks, we demonstrate the method’s ability
to reveal functional topology features of the networks.
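The first two matrices the abstract defines are straightforward to compute: all-pairs shortest path lengths, then the row-wise correlations of that matrix. A minimal sketch on a toy graph (the functional-holography clustering step is not reproduced):

```python
import numpy as np

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # toy 4-node chain graph

def bfs_distances(start):
    """Shortest topological path lengths from `start` to every node."""
    dist = {start: 0}
    frontier = [start]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return [dist[v] for v in sorted(adj)]

n = len(adj)
D = np.array([bfs_distances(i) for i in range(n)])  # topological connectivity
C = np.corrcoef(D)                                  # topological correlations
```

On the chain, the two end nodes have perfectly anticorrelated connectivity rows, the kind of structural fingerprint the correlation matrix is meant to expose.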
13.
Multilayer feedforward small-world neural networks and their function approximation
Drawing on results from complex-network research, we investigate a network model whose structure lies between regular and randomly connected neural networks: the multilayer feedforward small-world neural network. First, the connections of a multilayer feedforward regular neural network are rewired with rewiring probability p to construct the new network model; analysis of its characteristic parameters shows that for 0 < p < 1 the network differs from the Watts–Strogatz model in its clustering coefficient. Second, the network is described by a six-tuple model. Finally, small-world neural networks with different values of p are applied to function approximation. Simulation results show that the network achieves its best approximation performance at p = 0.1, and comparative convergence experiments show that at this value the network outperforms regular and random networks of the same size in convergence and approximation speed.
14.
Recently, hybrid dynamical systems have attracted considerable attention in the automatic control domain. In this article,
a theory for recurrent neural networks is presented from a hybrid dynamical systems point of view. The hybrid dynamical system
is defined by a continuous dynamical system discretely switched by external temporal inputs. The theory suggests that the
dynamics of continuous-time recurrent neural networks, which are stochastically excited by external temporal inputs, are generally
characterized by a set of continuous trajectories with a fractal-like structure in hyper-cylindrical phase space.
This work was presented, in part, at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January
16–18, 2002.
15.
The use of multilayer perceptrons (MLPs) with threshold activation functions (binary step functions) greatly reduces the complexity of hardware implementations of neural networks, provides tolerance to noise, and improves the interpretability of the internal representations. In certain cases, such as learning stationary tasks, it may be sufficient to find appropriate weights for an MLP with threshold activation functions by software simulation and then transfer the weight values to the hardware implementation. Efficient training of these networks is a subject of considerable ongoing research. Methods available in the literature mainly focus on two-state (threshold) nodes and try to train the networks either by approximating the gradient of the error function and modifying gradient descent appropriately, or by progressively altering the shape of the activation functions. In this paper, we propose an evolution-motivated approach, which is eminently suitable for networks with threshold functions, and compare its performance with four other methods. The proposed evolutionary strategy needs no gradient-related information, is applicable when threshold activations are used from the beginning of training, as in “on-chip” training, and is able to train networks with integer weights.
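The kind of network this entry targets, hard step activations and integer weights, is easy to exhibit. The sketch below is my own illustrative network computing XOR, not a result from the paper; it shows why such networks are hardware-friendly (integer arithmetic and comparisons only) and why gradient methods fail on them (the step function has zero gradient almost everywhere):

```python
def step(x):
    """Binary threshold activation: 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def threshold_mlp(inputs, hidden_w, out_w):
    """One hidden layer; each weight row's last entry is the bias."""
    hidden = [step(sum(w * x for w, x in zip(row[:-1], inputs)) + row[-1])
              for row in hidden_w]
    return step(sum(w * h for w, h in zip(out_w[:-1], hidden)) + out_w[-1])

# Integer-weight network computing XOR:
# hidden unit 1 = OR(x1, x2), hidden unit 2 = NAND(x1, x2), output = AND(h1, h2)
hidden_w = [[1, 1, -1], [-1, -1, 1]]
out_w = [1, 1, -2]
```

Finding such integer weight sets is exactly the discrete search problem the paper's evolutionary strategy addresses without gradient information.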
16.
Based on high-order dynamic neural networks, this paper addresses the tracking problem for uncertain nonlinear composite systems that contain external disturbances and whose nonlinearities are assumed to be unknown. A smooth controller is designed to guarantee a uniform ultimate boundedness property for the tracking error and all other signals in the closed loop. Certain measures are utilized to test its performance. No a priori knowledge of an upper bound on the “optimal” weights or the modeling error is required; the weights of the neural networks are updated on-line. Numerical simulations performed on a simple example illustrate and clarify the approach.
17.
Eduardo Masato Iyoda, Takushi Shibata, Hajime Nobuhara, Witold Pedrycz, Kaoru Hirota 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(1):53-61
A high-order feedforward neural architecture, called the pi_t-sigma (π_tσ) neural network, is proposed for lossy digital image compression and reconstruction problems. The π_tσ network architecture is composed of an input layer, a single hidden layer, and an output layer. The hidden layer is composed of classical additive neurons, whereas the output layer is composed of translated multiplicative neurons (π_t-neurons). A two-stage learning algorithm is proposed to adjust the parameters of the π_tσ network: first, a genetic algorithm (GA) is used to avoid premature convergence to poor local minima; in the second stage, a conjugate gradient method is used to fine-tune the solution found by the GA. Experiments using the Standard Image Database and infrared satellite images show that the proposed π_tσ network performs better than the classical multilayer perceptron, improving reconstruction precision (measured by the mean squared error) by about 56% on average.
18.
The possibilities of applying genetic algorithms to optimizing the structure of neural networks that recognize handwritten and printed symbols and words are considered. The results of an experimental study are given. The experiments demonstrate an increase in the efficiency of the neural networks after optimization. Ways of improving the results obtained are discussed.
These results were partially obtained due to grant U4M000 of the Soros International Scientific Fund and also due to the “Neurocomputer” project (1992–1994) of the State Committee of the NAS of Ukraine on Science and Engineering.
Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 23–32, September–October, 1999.
19.
Adenilton J. da Silva, Wilson R. de Oliveira, Teresa B. Ludermir 《Neurocomputing》2012,75(1):52-60
A supervised learning algorithm for quantum neural networks (QNNs), based on a novel quantum neuron node implemented as a very simple quantum circuit, is proposed and investigated. In contrast to the QNNs published in the literature, the proposed model can both perform quantum learning and simulate classical models. This is partly because the neural model used elsewhere has weights and non-linear activation functions. Here, a quantum weightless neural network model is proposed as a quantisation of classical weightless neural networks (WNNs). Theoretical and practical results on WNNs can be inherited by these quantum weightless neural networks (qWNNs). In the quantum learning algorithm proposed here, the patterns of the training set are presented concurrently in superposition. This superposition-based learning algorithm (SLA) has computational cost polynomial in the number of patterns in the training set.
20.
In this paper, a new approach is investigated for adaptive dynamic neural network-based H∞ control, designed for a class of non-linear systems with unknown uncertainties. Non-linear systems with unknown uncertainties are commonly used to express real practical control processes efficiently and accurately, so designing a stable and robust controller for such a process is of critical importance, yet remains a great challenge and is still at an early stage. In the proposed research, dynamic neural networks are first constructed to precisely approximate the non-linear system with unknown uncertainties; next, a non-linear state-feedback H∞ control law is designed; then, an adaptive weight-adjustment mechanism for the dynamic neural networks is developed to achieve H∞ regulation performance; and last, a recurrent neural network is employed as a neuro-solver to efficiently and numerically solve the standard LMI problem and obtain the appropriate control gains. Finally, case studies verify the feasibility and efficiency of the proposed research.