Similar Documents
20 similar documents found (search time: 15 ms)
1.
A class of discrete time recurrent neural networks with multivalued neurons  (Total citations: 1; self-citations: 0; citations by others: 1)
Wei, Jacek M. Neurocomputing, 2009, 72(16-18): 3782
This paper discusses a class of discrete-time recurrent neural networks with multivalued neurons (MVN), which have complex-valued weights and an activation function defined on the argument of the weighted sum. Complementing the state of the art on such networks, our research focuses on the convergence analysis of these networks in synchronous update mode. Two related theorems are presented, and simulation results are used to illustrate the theory.
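As a concrete illustration of an activation function that depends only on the argument of the weighted sum, here is a minimal Python sketch of a k-valued MVN-style activation. The discretization of the phase into k equal sectors, each mapped to a kth root of unity, is the standard construction for such neurons; the exact sector labeling used in the paper may differ.

```python
import cmath
import math

def mvn_activation(z: complex, k: int) -> complex:
    """Quantize the argument of z into one of k equal sectors and return the
    corresponding k-th root of unity (illustrative sketch only)."""
    angle = cmath.phase(z) % (2 * math.pi)      # arg(z) normalized to [0, 2*pi)
    sector = int(angle // (2 * math.pi / k))    # index of the sector hit
    return cmath.exp(2j * math.pi * sector / k)
```

Note that the output depends only on arg(z), never on |z|, which is the defining property of this neuron class.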

2.
Orthogonality of decision boundaries in complex-valued neural networks  (Total citations: 1; self-citations: 0; citations by others: 1)
This letter presents some results of an analysis on the decision boundaries of complex-valued neural networks whose weights, threshold values, and input and output signals are all complex numbers. The main results may be summarized as follows. (1) The decision boundary of a single complex-valued neuron consists of two hypersurfaces that intersect orthogonally and divide the decision region into four equal sections. The XOR problem and the detection-of-symmetry problem, which cannot be solved by two-layered real-valued neural networks, can be solved by two-layered complex-valued neural networks with such orthogonal decision boundaries, which reveals the potent computational power of complex-valued neural networks. Furthermore, the fading equalization problem can be successfully solved by a two-layered complex-valued neural network with the highest generalization ability. (2) The decision boundary of a three-layered complex-valued neural network has this orthogonal property as a basic structure, and its two hypersurfaces approach orthogonality as the net inputs to each hidden neuron grow. In particular, most of the decision boundaries in a three-layered complex-valued neural network intersect orthogonally when the network is trained with the Complex-BP algorithm; as a result, the orthogonality of the decision boundaries improves its generalization ability. (3) The average learning speed of Complex-BP is several times faster than that of Real-BP, and the standard deviation of its learning speed is smaller. For these reasons, complex-valued neural networks and the related algorithms appear natural for learning complex-valued patterns.
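One way to picture the two orthogonal boundaries is to take them as the hypersurfaces where the real part and the imaginary part of the weighted sum vanish; their signs then split the input space into four sections. The sketch below is a hypothetical illustration (the weights are chosen for the example, not taken from the letter):

```python
def neuron_output(weights, inputs, bias=0j):
    """Weighted sum of a single complex-valued neuron (illustrative sketch)."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def section(z: complex) -> int:
    """Which of the four decision sections z falls in; the two orthogonal
    boundaries are the hypersurfaces Re(z) = 0 and Im(z) = 0."""
    return (0 if z.real >= 0 else 1) + (0 if z.imag >= 0 else 2)
```

With the hypothetical weights (1, i) and ±1-encoded inputs, the four XOR input patterns land in four distinct sections, and the two XOR classes occupy sections {0, 3} and {1, 2}, so a single neuron with orthogonal boundaries can separate them.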

3.
杨娟, 陆阳, 俞磊, 方欢. 《自动化学报》 (Acta Automatica Sinica), 2012, 38(9): 1459-1470
In Boolean space, Hamming-sphere bumps (汉明球突) express a class of Boolean functions with clear structure; owing to their special geometric properties, they occur in two spatial forms, linearly separable and linearly non-separable. Analyzing the logical meaning of Hamming-sphere bumps is important for rule extraction in binary neural networks. However, extracting rules with clear logical meaning from linearly separable Hamming-sphere bumps, as well as deciding whether a linearly non-separable structure is a Hamming-sphere bump and obtaining its logical meaning, remain open problems in binary neural network research. To address this, the paper first exploits the geometric properties of Hamming-sphere bumps on the Hamming graph and, using an ordering of true nodes by weighted height, proposes an algorithm that decides whether an arbitrary Boolean function is a Hamming-sphere bump. On this basis, using the logical meanings of known structures, a Hamming-sphere bump is decomposed into a union of known structures, from which its logical meaning is obtained. Finally, examples illustrate the decision procedure for an arbitrary Boolean function and the corresponding logical expressions of Hamming-sphere bumps.

4.
In this article, we study a single-machine scheduling problem in which the processing time of a job is a nonlinear function of its basic processing time and its starting time. The objectives are to minimise the makespan, the sum of weighted completion times, and the sum of the kth powers of completion times. We show that the makespan minimisation problem can be solved in polynomial time, whereas the total completion time and the sum of the kth powers of completion times can be minimised in polynomial time only in some cases. In addition, some useful properties are provided for the sum of weighted completion times problem under certain conditions.

5.
A widely used complex-valued activation function for complex-valued multistate Hopfield networks is shown to be essentially based on a multilevel step function. By replacing the multilevel step function with other multilevel characteristics, we present two alternative complex-valued activation functions: one based on a multilevel sigmoid function, the other on a characteristic of a multistate bifurcating neuron. Numerical experiments show that both modifications to the complex-valued activation function improve network performance for multistate associative memory. The advantage of the proposed networks over complex-valued Hopfield networks with the multilevel step function becomes more pronounced as each complex-valued neuron represents a larger number of multivalued states. Further, the performance of the proposed networks in reconstructing noisy 256-gray-level images is demonstrated in comparison with other recent associative memories to clarify their advantages and disadvantages.

6.
In this paper, we present a fast-learning, fully complex-valued extreme learning machine classifier, referred to as the Circular Complex-valued Extreme Learning Machine (CC-ELM), for handling real-valued classification problems. CC-ELM is a single-hidden-layer network with nonlinear input and hidden layers and a linear output layer. A circular transformation with a translational/rotational bias term, which performs a one-to-one mapping of real-valued features to the complex plane, is used as the activation function for the input neurons. The neurons in the hidden layer employ a fully complex-valued Gaussian-like ('sech') activation function. The input parameters of CC-ELM are chosen randomly and the output weights are computed analytically. This paper also presents an analytical proof that the decision boundaries of a single complex-valued neuron at the hidden and output layers of CC-ELM consist of two hypersurfaces that intersect orthogonally. These orthogonal boundaries and the circular input transformation help CC-ELM perform real-valued classification tasks efficiently. The performance of CC-ELM is evaluated on a set of benchmark real-valued classification problems from the University of California, Irvine machine learning repository. Finally, CC-ELM is compared with existing methods on two practical problems, namely acoustic emission signal classification and mammogram classification. The results show that CC-ELM performs better than existing real-valued and complex-valued classifiers, especially when the data sets are highly unbalanced.
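The core ELM recipe, random fixed hidden-layer parameters plus analytically computed output weights, can be sketched as follows. This is a real-valued stand-in (tanh hidden units) rather than the complex-valued sech network of the abstract; the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, hidden=20):
    """ELM-style training sketch: hidden-layer parameters are drawn at random
    and frozen; only the output weights are solved analytically by
    least squares via the pseudoinverse."""
    W = rng.normal(size=(X.shape[1], hidden))  # random, fixed input weights
    b = rng.normal(size=hidden)                # random, fixed hidden biases
    H = np.tanh(X @ W + b)                     # hidden-layer response matrix
    beta = np.linalg.pinv(H) @ T               # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the only trained parameters are solved in closed form, there is no iterative gradient descent at all, which is what makes this family of classifiers fast to train.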

7.
A multilayer neural network based on multi-valued neurons (MLMVN) is considered in this paper. A multi-valued neuron (MVN) is based on the principles of multiple-valued threshold logic over the field of complex numbers. The most important properties of the MVN are its complex-valued weights; inputs and output coded by the kth roots of unity; and an activation function that maps the complex plane onto the unit circle. MVN learning reduces to movement along the unit circle; it is based on a simple linear error-correction rule and does not require a derivative. It is shown that, by combining the traditional architecture of a multilayer feedforward neural network (MLF) with the high functionality of the MVN, a new, powerful neural network can be obtained. Its training does not require a derivative of the activation function, and its functionality is higher than that of an MLF containing the same number of layers and neurons. These advantages of the MLMVN are confirmed by testing on the parity-n, two-spirals and "sonar" benchmarks and the Mackey–Glass time-series prediction.

8.
Scheduling problems with linearly deteriorating processing times  (Total citations: 11; self-citations: 0; citations by others: 11)
We discuss scheduling problems in which the jobs have linearly deteriorating processing times, i.e., the deterioration function of each job is linear. For the single-machine problems of minimizing the makespan, the weighted sum of completion times, the maximum lateness, and the maximum cost, optimal algorithms are given. For the two-machine flow-shop problem of minimizing the makespan, it is proved that Johnson's rule yields an optimal schedule. In the general case, if the processing times of the operations of the same job are all equal, the flow-shop problem can be reduced to a single-machine problem.
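Under a linear deterioration model, a job started at time t takes a_j + b_j·t to process, so the makespan of a sequence is computed by a simple forward recursion. The sketch below evaluates a sequence and brute-forces the optimal single-machine order on tiny instances; it illustrates the model only, not the paper's optimal algorithms:

```python
from itertools import permutations

def makespan(seq, a, b):
    """Completion time of the last job when a job started at time t has
    processing time a[j] + b[j] * t (linear deterioration)."""
    t = 0.0
    for j in seq:
        t += a[j] + b[j] * t
    return t

def best_order(a, b):
    """Brute-force the optimal single-machine sequence (illustration only;
    feasible for small n)."""
    return min(permutations(range(len(a))), key=lambda s: makespan(s, a, b))
```

For a = [1, 2] and b = [0.5, 0.1], order (0, 1) finishes at 1 + (2 + 0.1·1) = 3.1, while order (1, 0) finishes at 2 + (1 + 0.5·2) = 4, so sequencing matters even with two jobs.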

9.
This paper presents some results of an analysis on the decision boundaries of complex-valued neurons. The main results may be summarized as follows. (a) The weight parameters of a complex-valued neuron are subject to a restriction related to two-dimensional motion. (b) The decision boundary of a complex-valued neuron consists of two hypersurfaces that intersect orthogonally, and divides the decision region into four equal sections.

10.
Dynamical-systems analysis of discrete-time Hopfield networks  (Total citations: 2; self-citations: 0; citations by others: 2)
The discrete-time Hopfield network is a nonlinear dynamical system. By introducing a new energy function on the network state and exploiting the subgradient properties of convex functions, conditions under which the state energy decreases monotonically are obtained. For a Hopfield network whose activation functions are monotonically nondecreasing (not necessarily strictly increasing), the fully parallel iteration converges asymptotically if the gain of each neuron's activation function is greater than the smallest eigenvalue of the weight matrix; in serial mode, it suffices that, for every neuron, the sum of the activation gain and the neuron's self-feedback weight is positive. Moreover, if the activation functions are monotone and the connection weights are symmetric, it is proved, again using the subgradient properties of convex functions, that the fully parallel iteration of the discrete-time Hopfield network converges to a limit cycle of period at most 2.
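The period-at-most-2 conclusion for symmetric weights in fully parallel mode is the classical behaviour of threshold Hopfield dynamics, and it is easy to observe empirically. Here is a minimal sketch with a sign activation (a special case of the monotone activations discussed above); the function names are ours:

```python
import numpy as np

def parallel_step(W, x):
    """One fully parallel update of a discrete Hopfield net with a sign
    (threshold) activation."""
    return np.where(W @ x >= 0, 1, -1)

def cycle_length(W, x, max_iter=200):
    """Iterate parallel updates until a state repeats; return the period of
    the limit cycle reached (1 means a fixed point)."""
    seen = {}
    for k in range(max_iter):
        key = tuple(x)
        if key in seen:
            return k - seen[key]
        seen[key] = k
        x = parallel_step(W, x)
    return None
```

With a symmetric weight matrix, the trajectory from any ±1 initial state settles into a cycle of period 1 or 2, matching the abstract's parallel-mode result.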

11.
Minimum Dominating Sets of Intervals on Lines  (Total citations: 3; self-citations: 0; citations by others: 3)
We study the problem of computing minimum dominating sets of n intervals on lines in three cases: (1) the lines intersect at a single point; (2) all lines except one are parallel; and (3) there is one line with t weighted points on it, and the minimum dominating set must maximize the sum of the weights of the points covered. We propose polynomial-time algorithms for the first two problems, which are special cases of the minimum dominating set problem for path graphs, which is known to be NP-hard. The third problem requires identifying the structure of minimum dominating sets of intervals on a line, so as to be able to select one that maximizes the weight sum of the covered weighted points. Assuming that presorting has been performed, the first problem has an O(n)-time solution, while the second and third problems are solved by dynamic programming algorithms requiring O(n log n) and O(n + t) time, respectively. Received April 13, 1995; revised July 27, 1996.

12.
Goh SL, Mandic DP. Neural Computation, 2004, 16(12): 2699-2713
A complex-valued real-time recurrent learning (CRTRL) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. The proposed CRTRL is derived for a general complex activation function of a neuron, which makes it suitable for nonlinear adaptive filtering of complex-valued nonlinear and nonstationary signals and complex signals with strong component correlations. In addition, this algorithm is generic and represents a natural extension of the real-valued RTRL. Simulations on benchmark and real-world complex-valued signals support the approach.

13.
Scheduling a Single Server in a Two-machine Flow Shop  (Total citations: 1; self-citations: 0; citations by others: 1)
We study the problem of scheduling a single server that processes n jobs in a two-machine flow shop environment. A machine-dependent setup time is needed whenever the server switches from one machine to the other. The problem with a given job sequence is shown to be reducible to a single-machine batching problem. This result enables several cases of the server scheduling problem to be solved in O(n log n) time by known algorithms, namely: finding a schedule feasible with respect to a given set of deadlines; minimizing the maximum lateness; and, if the job processing times are agreeable, minimizing the total completion time. Minimizing the total weighted completion time is shown to be NP-hard in the strong sense. Two pseudopolynomial dynamic programming algorithms are presented for minimizing the weighted number of late jobs. Minimizing the number of late jobs is proved to be NP-hard even if setup times are equal and there are two distinct due dates. This problem is solved in O(n^3) time when all job processing times on the first machine are equal, and in O(n^4) time when all processing times on the second machine are equal. Received November 20, 2001; revised October 18, 2002. Published online: January 16, 2003.

14.
In this paper, we observe some important aspects of the Hebbian and error-correction learning rules for complex-valued neurons. These learning rules, which were previously considered for the multi-valued neuron (MVN), whose inputs and output are located on the unit circle, are generalized to a complex-valued neuron whose inputs and output are arbitrary complex numbers. The Hebbian learning rule is also considered for the MVN with a periodic activation function. It is experimentally shown that Hebbian weights, even when they cannot yet implement the input/output mapping to be learned, are better starting weights for error-correction learning, which converges faster from the Hebbian weights than from random ones.

15.
Síma J. Neural Computation, 2002, 14(11): 2709-2728
We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by a proof that even the simplest architecture, containing only a single neuron that applies a sigmoidal activation function σ: ℝ → [α, β] satisfying certain natural axioms (e.g., the standard logistic sigmoid or the saturated-linear function) to the weighted sum of n inputs, is hard to train. In particular, the problem of finding weights of such a unit that minimize the quadratic training error to within (β − α)² of its infimum, or its average over a training set to within 5(β − α)²/(12n) of its infimum, proves to be NP-hard. Hence the well-known backpropagation learning algorithm appears not to be efficient even for a single neuron, which has negative consequences for constructive learning.

16.
R., S., N., P. Neurocomputing, 2009, 72(16-18): 3771
In a fully complex-valued feed-forward network, the convergence of the complex-valued back-propagation (CBP) learning algorithm depends on the choice of the activation function, the learning sample distribution, the minimization criterion, the initial weights, and the learning rate. The minimization criteria used in existing versions of the CBP learning algorithm do not approximate the phase of the complex-valued output well in function approximation problems, yet the phase of a complex-valued output is critical in telecommunication and in reconstruction and source localization problems in medical imaging. In this paper, the issues related to the convergence of complex-valued neural networks are enumerated using a systematic sensitivity study on existing complex-valued neural networks. In addition, we compare the performance of different types of split complex-valued neural networks. From the observations of the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP learning algorithm directly minimizes both the magnitude and phase errors and provides better convergence characteristics. Its performance is evaluated on two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum-phase equalization problem. A comparative analysis of the convergence of existing fully complex and split complex networks is also presented.
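Why a logarithmic index can capture both magnitude and phase errors is visible from the identity |log(y/d)|² = (ln|y| − ln|d|)² + arg(y/d)², which splits the error into an explicit magnitude term and phase term. The sketch below is a hypothetical per-sample error of this kind, not necessarily the paper's exact index:

```python
import cmath
import math

def log_error(y: complex, d: complex) -> float:
    """Hypothetical logarithmic error between output y and target d."""
    return abs(cmath.log(y / d)) ** 2

def magnitude_phase_split(y: complex, d: complex):
    """The same quantity split into explicit magnitude and phase terms:
    (ln|y| - ln|d|)^2 and arg(y/d)^2."""
    mag = (math.log(abs(y)) - math.log(abs(d))) ** 2
    phase = cmath.phase(y / d) ** 2
    return mag, phase
```

Minimizing this quantity therefore drives both |y| toward |d| and arg(y) toward arg(d), unlike a plain squared error |y − d|², which weights phase mistakes less when the magnitudes are small.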

17.
There are several neural network implementations using software, hardware, or hardware/software co-design. This work proposes a hardware architecture for an artificial neural network (ANN) with a multilayer perceptron (MLP) topology. We exploit the parallelism of neural networks and allow on-the-fly changes to the number of inputs, the number of layers, and the number of neurons per layer; this reconfigurability lets any ANN application be implemented on the proposed hardware. To reduce the time spent on arithmetic computation, real numbers are represented as fractions of integers; arithmetic is thereby limited to integer operations performed by fast combinational circuits, with a simple state machine controlling the sums and products of fractions. The sigmoid is used as the activation function and is approximated by polynomials, whose evaluation requires only sums and products. A theorem is introduced and proved to justify this arithmetic strategy for computing the activation function; thus the circuitry that implements the neuron's weighted sum is reused for computing the sigmoid, and this resource sharing drastically decreases the total area of the system. After modeling and simulation for functional validation, the proposed architecture was synthesized on reconfigurable hardware. The results are promising.

18.
This paper presents a fast adaptive iterative algorithm for solving linearly separable classification problems in R^n. In each iteration, a subset of the sample data (n points, where n is the number of features) is adaptively chosen, and a hyperplane is constructed that separates the chosen n points at a margin and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. The algorithm is then extended to quadratically separable classification problems; the basic idea is to map the physical space to a larger one in which the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when the classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane found, minimizing the number of misclassified points over those steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
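One building block of such a scheme is constructing a hyperplane from n chosen points in R^n, which amounts to solving a small linear system. The sketch below solves w·p = 1 for each chosen point and classifies the remaining data by side; margin handling and the adaptive point selection of the actual algorithm are omitted:

```python
import numpy as np

def hyperplane_through(points):
    """Solve w @ p = 1 for each of n points in R^n, giving the hyperplane
    {x : w @ x = 1} through them (one step of the iterative idea above;
    assumes the points are in general position)."""
    return np.linalg.solve(np.asarray(points, dtype=float),
                           np.ones(len(points)))

def side(w, x):
    """Classify x by which side of the hyperplane w @ x = 1 it falls on."""
    return 1 if np.dot(w, x) >= 1 else -1
```

In 2-D, the points (1, 0) and (0, 1) give w = (1, 1), i.e., the line x + y = 1, and any remaining sample is labeled by which half-plane it occupies.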

19.
This paper continues the line of work of Yin et al. [Some scheduling problems with general position-dependent and time-dependent learning effects, Inform. Sci. 179 (2009) 2416-2425]. For each of three objectives, total weighted completion time, maximum lateness, and discounted total weighted completion time, this paper presents an approximation algorithm based on the optimal algorithm for the corresponding single-machine scheduling problem and analyzes its worst-case bound. It shows that the single-machine scheduling problems under the proposed model can be solved in polynomial time if the objective is to minimize the total lateness or the sum of earliness penalties. It also shows that minimizing the total tardiness, the discounted total weighted completion time, and the total weighted earliness penalty are polynomially solvable under some agreeable conditions on the problem parameters.

20.
In this paper, we investigate the decision-making ability of a fully complex-valued radial basis function (FC-RBF) network in solving real-valued classification problems. The FC-RBF classifier is a single-hidden-layer, fully complex-valued neural network with a nonlinear input layer, a nonlinear hidden layer, and a linear output layer. The neurons in the input layer employ a phase-encoded transformation to map the input features from the real domain to the complex domain. The neurons in the hidden layer employ a fully complex-valued Gaussian-like activation function of the hyperbolic secant (sech) type. The classification ability of the network is first studied analytically, and it is shown that the decision boundaries of the FC-RBF classifier are orthogonal to each other. The performance of the FC-RBF classifier is then studied experimentally on a set of real-valued benchmark problems and a real-world problem. The study clearly indicates the superior classification ability of the FC-RBF classifier.
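The two ingredients named in the abstract, phase encoding of real features and a sech activation, can be sketched as follows. The specific mapping onto the upper half of the unit circle is our assumption; the paper's exact encoding may differ:

```python
import cmath
import math

def phase_encode(x: float, lo: float, hi: float) -> complex:
    """Phase-encoded transformation sketch (assumed form): map a real feature
    in [lo, hi] onto the upper half of the unit circle via its phase."""
    theta = math.pi * (x - lo) / (hi - lo)
    return cmath.exp(1j * theta)

def sech(z: complex) -> complex:
    """Fully complex-valued hyperbolic secant activation."""
    return 1.0 / cmath.cosh(z)
```

Phase encoding keeps every input at unit magnitude, so all the feature information is carried by the phase, which is then shaped by the complex sech units in the hidden layer.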
