Similar Documents
20 similar documents retrieved.
1.
The authors previously proposed a self-organizing Hierarchical Cerebellar Model Articulation Controller (HCMAC) neural network containing a hierarchical GCMAC neural network and a self-organizing input space module to solve high-dimensional pattern classification problems. This novel neural network exhibits fast learning, a low memory requirement, automatic memory parameter determination, and highly accurate high-dimensional pattern classification. However, the original architecture needs to be hierarchically expanded using a full binary tree topology to solve pattern classification problems according to the dimension of the input vectors. This approach creates many redundant GCMAC nodes when the dimension of the input vectors in the pattern classification problem does not exactly match that in the self-organizing HCMAC neural network. These redundant GCMAC nodes waste memory units and degrade the learning performance of the self-organizing HCMAC neural network. Therefore, this study presents a minimal-structure self-organizing HCMAC (MHCMAC) neural network whose input dimension matches that of the pattern classification problem. Additionally, this study compares the learning performance of this novel learning structure with those of the BP neural network, support vector machine (SVM), and original self-organizing HCMAC neural network on ten benchmark pattern classification data sets from the UCI machine learning repository. In particular, the experimental results reveal that the self-organizing MHCMAC neural network handles high-dimensional pattern classification problems better than the BP, SVM, or original self-organizing HCMAC neural network. Moreover, the proposed self-organizing MHCMAC neural network significantly reduces the memory requirement of the original self-organizing HCMAC neural network, and achieves a higher training speed and higher pattern classification accuracy than the original network on most of the benchmark test data sets. The experimental results also show that the MHCMAC neural network learns continuous functions well and is suitable for Web page classification.
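A small counting sketch makes the node-redundancy argument concrete. This is not the authors' implementation; it assumes, purely as an illustration, that every internal node of the hierarchy is a 2-input GCMAC unit, so a full binary tree must pad its leaf level to the next power of two while a minimal tree needs only one fewer internal node than it has input dimensions.

```python
# Sketch: GCMAC node counts for a full binary tree (original HCMAC) versus a
# minimal tree over the actual input dimensions (the MHCMAC idea above).
import math

def full_tree_gcmac_nodes(n_dims: int) -> int:
    # Full binary tree: the leaf level is padded to the next power of two,
    # creating redundant GCMAC nodes when n_dims is not a power of two.
    padded = 2 ** math.ceil(math.log2(max(n_dims, 1)))
    return padded - 1

def minimal_tree_gcmac_nodes(n_dims: int) -> int:
    # Any binary tree whose internal nodes each combine exactly two children
    # needs n_dims - 1 internal (GCMAC) nodes -- no padding required.
    return max(n_dims - 1, 0)

for d in (5, 13, 34):
    print(d, full_tree_gcmac_nodes(d), minimal_tree_gcmac_nodes(d))
```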

2.
An ART-based fuzzy adaptive learning control network (Total citations: 4; self-citations: 0; other citations: 4)
This paper addresses the structure and an associated online learning algorithm of a feedforward multilayer neural net for realizing the basic elements and functions of a fuzzy controller. The proposed fuzzy adaptive learning control network (FALCON) can be contrasted with traditional fuzzy control systems in network structure and learning ability. An online structure/parameter learning algorithm, FALCON-ART, is proposed for constructing FALCON dynamically. It combines backpropagation for parameter learning and fuzzy ART for structure learning. FALCON-ART partitions the input state space and output control space using irregular fuzzy hyperboxes according to the data distribution. In many existing fuzzy or neural fuzzy control systems, the input and output spaces are always partitioned into “grids”. As the number of variables increases, the number of partitioned grids grows combinatorially. To avoid this problem in complex systems, FALCON-ART partitions the I/O spaces flexibly based on the data distribution. It can create and train FALCON in a highly autonomous way. In its initial form, there are no membership functions, fuzzy partitions, or fuzzy logic rules; they are created and begin to grow as the first training pattern arrives. Thus, users need not provide any a priori knowledge or initial information. FALCON-ART can partition the I/O spaces online, tune membership functions, find proper fuzzy logic rules, and annihilate redundant rules dynamically upon receiving online data.
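The hyperbox partitioning step can be illustrated with a minimal fuzzy-ART-style sketch: each category is a box spanned by a min and a max point, and an incoming sample either expands an existing box (if the expanded box still passes a vigilance test) or opens a new one. Function names and the vigilance value are illustrative, not taken from FALCON-ART.

```python
import numpy as np

def fit_hyperboxes(X, vigilance=0.8):
    """Greedy fuzzy-ART-style hyperbox partitioning (illustrative sketch).

    Each category is a box [lo, hi]; an input expands the first box whose
    expanded size still satisfies the vigilance test, otherwise it starts a
    new box.  Inputs are assumed scaled to [0, 1].
    """
    boxes = []                                   # list of (lo, hi) arrays
    dim = X.shape[1]
    for x in X:
        placed = False
        for i, (lo, hi) in enumerate(boxes):
            new_lo, new_hi = np.minimum(lo, x), np.maximum(hi, x)
            # Vigilance test: the expanded box must remain "small enough".
            if (dim - np.sum(new_hi - new_lo)) / dim >= vigilance:
                boxes[i] = (new_lo, new_hi)
                placed = True
                break
        if not placed:
            boxes.append((x.copy(), x.copy()))   # point-sized new box
    return boxes

boxes = fit_hyperboxes(np.random.rand(200, 2), vigilance=0.8)
print(len(boxes), "hyperboxes")
```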

3.
Basak J. Neural Computation, 2004, 16(9): 1959-1981.
Decision trees and neural networks are widely used tools for pattern classification. Decision trees provide a highly localized representation, whereas neural networks provide a distributed but compact representation of the decision space. Decision trees cannot be induced in the online mode, and they are not adaptive to a changing environment, whereas neural networks are inherently capable of online learning and adaptivity. Here we provide a classification scheme called online adaptive decision trees (OADT), which is a tree-structured network like the decision trees and capable of online learning like neural networks. A new objective measure is derived for supervised learning with OADT. Experimental results validate the effectiveness of the proposed classification scheme. Also, with certain real-life data sets, we find that OADT performs better than two widely used models: the hierarchical mixture of experts and the multilayer perceptron.

4.
In this paper, we present an efficient technique for mapping a backpropagation (BP) learning algorithm for multilayered neural networks onto a network of workstations (NOW's). We adopt a vertical partitioning scheme, where each layer in the neural network is divided into p disjoint partitions, and map each partition onto an independent workstation in a network of p workstations. We present a fully distributed version of the BP algorithm and also its speedup analysis. We compare the performance of our algorithm with a recent work involving the vertical partitioning approach for mapping the BP algorithm onto a distributed memory multiprocessor. Our results on SUN 3/50 NOW's show that we are able to achieve better speedups by using only two communication sets and also by avoiding some redundancy in the weight computation for one training cycle of the algorithm.
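A minimal sketch of the vertical-partitioning idea, simulated on a single machine: each layer's weight matrix is split column-wise into p neuron slices, one per workstation, and the concatenation step stands in for the communication phase in which the full activation vector of a layer is exchanged. Names and the activation function are illustrative assumptions.

```python
import numpy as np

def vertical_partition(weights, p):
    """Split each layer's weight matrix column-wise into p neuron slices,
    one slice per workstation (illustrative of the vertical scheme above)."""
    return [[np.array_split(W, p, axis=1)[k] for W in weights]
            for k in range(p)]

def simulated_distributed_forward(worker_weights, x):
    """Simulate one forward pass of the p workers on a single machine.

    Each worker computes activations only for its own neuron slice; the
    concatenation stands in for the exchange of the full activation vector
    among workstations.
    """
    a = x
    for layer in range(len(worker_weights[0])):
        local = [np.tanh(a @ w[layer]) for w in worker_weights]  # per-worker work
        a = np.concatenate(local)                                # "communication"
    return a

# Tiny example: a 4-8-3 network split over p = 2 workstations.
rng = np.random.default_rng(0)
full = [rng.standard_normal((4, 8)), rng.standard_normal((8, 3))]
workers = vertical_partition(full, p=2)
print(simulated_distributed_forward(workers, rng.standard_normal(4)).shape)
```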

5.
A Self-Organizing Neural Network for MCM Partitioning (Total citations: 2; self-citations: 0; other citations: 2)
Building on a proposed representation of the similarity between directly and indirectly connected modules, this paper presents a neural learning method for performance-driven MCM partitioning based on a self-organizing neural network. The algorithm determines how to assign functional modules to MCM chips during high-level design. It considers not only the similarity relations among modules but also the layout structure of the MCM; it pursues the dual optimization objectives of minimizing the number of inter-chip connections and minimizing the clock period; it tends to place connections between neighboring chips; and it satisfies delay, heat-dissipation, and area constraints. The paper also proposes a hierarchical neural network model and an area-constrained MC…

6.
A novel supervised learning method is proposed by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary-tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are employed only in the intermediate levels of the tree, for heuristically partitioning a large and complicated task into several smaller and simpler subtasks. These subtasks are then handled by component neural networks at the leaves of the tree. For constructive learning, growing and credit-assignment algorithms are developed to serve the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g., the multilayer perceptron) to solving large-scale problems. We have applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. Simulation results have shown that the proposed method yields better results and faster training than the multilayer perceptron.

7.
Incremental backpropagation learning networks (Total citations: 2; self-citations: 0; other citations: 2)
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
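A minimal sketch of the bounded-weight-modification idea: each backpropagation-style update is clipped so that new patterns can perturb existing weights only by a limited amount. The bound, learning rate, and update form are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def bounded_update(w, grad, lr=0.1, bound=0.05):
    """Apply a backpropagation-style update whose per-weight change is
    clipped to [-bound, bound], so new patterns can only perturb existing
    weights by a limited amount (the spirit of bounded weight modification
    described above; the exact rule in the paper may differ)."""
    delta = np.clip(-lr * grad, -bound, bound)
    return w + delta

w = np.array([0.3, -1.2, 0.8])
g = np.array([4.0, -0.01, 0.5])   # a large gradient cannot move w[0] by more than 0.05
print(bounded_update(w, g))
```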

8.
Underwater acoustic transients can develop from a wide variety of sources. Accordingly, detection and classification of such transients by automated means can be exceedingly difficult. This paper describes a new approach to this problem based on adaptive pattern recognition employing neural networks and an alternative metric, the Hausdorff metric. The system uses self-organization both to generalize and to provide rapid throughput, while employing supervised learning for decision making. It is based on a concept that temporally partitions acoustic transient signals and then studies their trajectories through power spectral density space. This method has exhibited encouraging results for a large set of simulated underwater transients in both quiet and noisy ocean environments, and requires from five to ten MFLOPS for the implementation described.
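The alternative metric itself is easy to state. Below is a plain NumPy sketch of the symmetric Hausdorff distance between two trajectories of power-spectral-density frames, the kind of comparison the abstract describes; array shapes and names are illustrative.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets A and B
    (rows are points, e.g. power-spectral-density frames of a transient)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(),   # farthest A-point from its nearest B-point
               d.min(axis=0).max())   # farthest B-point from its nearest A-point

traj1 = np.random.rand(40, 64)   # 40 PSD frames of 64 bins
traj2 = np.random.rand(35, 64)
print(hausdorff(traj1, traj2))
```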

9.
Competitive neural trees for pattern classification (Total citations: 1; self-citations: 0; other citations: 1)
This paper presents competitive neural trees (CNeTs) for pattern classification. The CNeT contains m-ary nodes and grows during learning by using inheritance to initialize new nodes. At the node level, the CNeT employs unsupervised competitive learning. The CNeT performs hierarchical clustering of the feature vectors presented to it as examples, while its growth is controlled by forward pruning. Because of the tree structure, the prototype in the CNeT closest to any given example can be determined by searching only a fraction of the tree. The paper introduces different search methods for the CNeT, which are utilized for training as well as for recall. The CNeT is evaluated and compared with existing classifiers on a variety of pattern classification problems.
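A minimal sketch of the node-level unsupervised competitive learning: the prototype closest to the current example wins and is pulled toward it. The learning rate and names are illustrative, not taken from the CNeT paper.

```python
import numpy as np

def competitive_update(prototypes, x, lr=0.05):
    """One step of winner-take-all competitive learning at a CNeT node:
    the prototype nearest to example x wins and moves toward x."""
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    prototypes[winner] += lr * (x - prototypes[winner])
    return winner

protos = np.random.rand(4, 3)            # an m-ary node with m = 4 prototypes
for x in np.random.rand(100, 3):         # stream of examples routed to this node
    competitive_update(protos, x)
print(protos)
```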

10.
Several researchers have shown that substantial improvements can be achieved in difficult pattern recognition problems by combining the outputs of multiple neural networks. In this work, we present and test a pattern classification multi-net system based on both supervised and unsupervised learning. Following the ‘divide-and-conquer’ framework, the input space is partitioned into overlapping subspaces and neural networks are subsequently used to solve the respective classification subtasks. Finally, the outputs of individual classifiers are appropriately combined to obtain the final classification decision. Two clustering methods have been applied for input space partitioning and two schemes have been considered for combining the outputs of the multiple classifiers. Experiments on well-known data sets indicate that the multi-net classification system exhibits promising performance compared with the case of single network training, both in terms of error rates and in terms of training speed (especially if the training of the classifiers is done in parallel).
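A hedged sketch of the divide-and-conquer scheme, assuming scikit-learn is available: the input space is clustered, one member classifier is trained per region (LogisticRegression stands in for the neural-network members), and member outputs are combined with soft, distance-based responsibilities. All names and the combination rule are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def train_multinet(X, y, n_regions=3):
    """Cluster the input space and train one member classifier per region.

    A region that happens to contain a single class simply keeps that label.
    """
    km = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X)
    members = []
    for r in range(n_regions):
        Xr, yr = X[km.labels_ == r], y[km.labels_ == r]
        members.append(LogisticRegression(max_iter=1000).fit(Xr, yr)
                       if len(np.unique(yr)) > 1 else int(yr[0]))
    return km, members

def predict_multinet(km, members, X, n_classes):
    """Weighted vote of the members; weights are soft responsibilities
    derived from each sample's distance to the region centroids."""
    w = 1.0 / (km.transform(X) + 1e-9)
    w /= w.sum(axis=1, keepdims=True)
    votes = np.zeros((len(X), n_classes))
    for r, m in enumerate(members):
        pred = m.predict(X) if hasattr(m, "predict") else np.full(len(X), m)
        votes[np.arange(len(X)), pred] += w[:, r]
    return votes.argmax(axis=1)

rng = np.random.default_rng(0)
X = rng.random((300, 2)); y = (X[:, 0] > X[:, 1]).astype(int)
km, members = train_multinet(X, y)
print((predict_multinet(km, members, X, 2) == y).mean())
```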

11.
Properly adapted Boltzmann machine neural networks are used to devise effective unstructured grid partitioners that are capable of providing equally loaded grid subsets with minimum interface, for concurrent data-handling on parallel computers. The partitioning scheme is based on recursive bisections, so that the outcome always consists of 2^n partitions. Two different techniques are introduced to speed up the otherwise costly partitioning process, and several variants are considered. In particular, a transformation of bipolar Hopfield-type neural networks is developed, providing an effective multi-scale approach. Results on a number of test cases are presented in order to assess the performance of the proposed techniques.
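A minimal sketch of the recursive-bisection outer loop: each level splits every current subset into two equally loaded halves, so n levels yield 2^n partitions. A longest-axis median split stands in here for the Boltzmann-machine bisection developed in the paper; names are illustrative.

```python
import numpy as np

def recursive_bisection(points, levels):
    """Recursively bisect a point set `levels` times, yielding 2**levels
    equally loaded partitions.  A longest-axis median split stands in for
    the neural-network bisection used in the paper."""
    parts = [np.arange(len(points))]
    for _ in range(levels):
        new_parts = []
        for idx in parts:
            axis = np.argmax(np.ptp(points[idx], axis=0))   # longest extent
            order = idx[np.argsort(points[idx][:, axis])]
            half = len(order) // 2
            new_parts += [order[:half], order[half:]]       # two equal halves
        parts = new_parts
    return parts

pts = np.random.rand(1000, 2)               # e.g. grid-node coordinates
parts = recursive_bisection(pts, levels=3)  # 2**3 = 8 partitions
print([len(p) for p in parts])
```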

12.
This paper addresses a phase space partitioning problem in motion planning systems. A class of kinematic and dynamic motion planning systems, including rapid semioptimal motion-planning (RASMO), uses partitions of phase spaces in cumulative optimization criteria. In these systems, a partition results in a uniquely planned motion whose quality is determined by the selected optimization criterion. In this paper, state-dispersion-based phase space partitioning (SDPP), which generates adaptive partitions, is proposed. These partitions allow the motion planning systems to plan better motions. Uniform partitions and the adaptively fixed partitions of SDPP are compared under several conditions using RASMO and a double inverted pendulum model, with the optimality criterion of RASMO set to time. The results reveal that RASMO with SDPP plans motions with shorter times than those obtained with RASMO using uniform partitions.

13.
Support-vector-based fuzzy neural network for pattern classification (Total citations: 3; self-citations: 0; other citations: 3)
Fuzzy neural networks (FNNs) for pattern classification usually use backpropagation or C-cluster type learning algorithms to learn the parameters of the fuzzy rules and membership functions from the training data. However, such learning algorithms usually cannot minimize the empirical risk (training error) and the expected risk (testing error) simultaneously, and thus cannot reach good classification performance in the testing phase. To tackle this drawback, a support-vector-based fuzzy neural network (SVFNN) is proposed for pattern classification in this paper. The SVFNN combines the superior classification power of the support vector machine (SVM) in high-dimensional data spaces and the efficient human-like reasoning of the FNN in handling uncertainty information. A learning algorithm consisting of three learning phases is developed to construct the SVFNN and train its parameters. In the first phase, the fuzzy rules and membership functions are automatically determined by the clustering principle. In the second phase, the parameters of the FNN are calculated by the SVM with the proposed adaptive fuzzy kernel function. In the third phase, the relevant fuzzy rules are selected by the proposed fuzzy rule reduction method. To investigate the effectiveness of the proposed SVFNN classifier, it is applied to the Iris, Vehicle, Dna, Satimage, and Ijcnn1 datasets from the UCI Repository, the Statlog collection, and IJCNN Challenge 2001, respectively. Experimental results show that the proposed SVFNN for pattern classification can achieve good classification performance with a drastically reduced number of fuzzy kernel functions.

14.
Object-based image analysis using multiscale connectivity (Total citations: 2; self-citations: 0; other citations: 2)
This paper introduces a novel approach for image analysis based on the notion of multiscale connectivity. We use the proposed approach to design several novel tools for object-based image representation and analysis, which exploit the connectivity structure of images in a multiscale fashion. More specifically, we propose a nonlinear pyramidal image representation scheme, which decomposes an image at different scales by means of multiscale grain filters. These filters gradually remove connected components from an image that fail to satisfy a given criterion. We also use the concept of multiscale connectivity to design a hierarchical data partitioning tool. We employ this tool to construct another image representation scheme, based on the concept of component trees, which organizes partitions of an image in a hierarchical multiscale fashion. In addition, we propose a geometrically-oriented hierarchical clustering algorithm which generalizes the classical single-linkage algorithm. Finally, we propose two object-based multiscale image summaries, reminiscent of the well-known (morphological) pattern spectrum, which can be useful in image analysis and image understanding applications.
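A hedged sketch of the grain-filter idea for a binary image, assuming SciPy's ndimage module: connected components failing an area criterion are removed, and applying the filter at a sequence of scales gives a simple multiscale decomposition. This illustrates the concept only; it is not the paper's operators.

```python
import numpy as np
from scipy import ndimage

def grain_filter(binary_img, min_area):
    """Remove connected components whose area is below `min_area`
    (a binary area opening -- one simple instance of a grain filter)."""
    labels, n = ndimage.label(binary_img)
    areas = np.bincount(labels.ravel())[1:]            # area per component label
    keep = np.isin(labels, np.flatnonzero(areas >= min_area) + 1)
    return binary_img & keep

def multiscale_decomposition(binary_img, scales=(1, 4, 16, 64)):
    """Pyramid of progressively filtered images; differences between
    consecutive levels isolate the grains removed at each scale."""
    return [grain_filter(binary_img, s) for s in scales]

img = np.random.rand(64, 64) > 0.7
levels = multiscale_decomposition(img)
print([int(lvl.sum()) for lvl in levels])
```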

15.
Fuzzy min-max neural networks. I. Classification. (Total citations: 1; self-citations: 0; other citations: 1)
A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function. The min-max points are determined using the fuzzy min-max learning algorithm, an expansion-contraction process that can learn nonlinear class boundaries in a single pass through the data and provides the ability to incorporate new classes and refine existing ones without retraining. The use of a fuzzy set approach to pattern classification inherently provides degree-of-membership information that is extremely useful in higher-level decision making. The relationship between fuzzy sets and pattern classification is described. The fuzzy min-max classifier neural network implementation is explained, the learning and recall algorithms are outlined, and several examples of operation demonstrate the strong qualities of this new neural network classifier.
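A minimal sketch of the hyperbox machinery described above: a ramp-style membership function that is 1 inside the box and decreases outside it, and an expansion step that grows the box only while a size limit is respected. The per-dimension size test and parameter values are illustrative simplifications of the published algorithm.

```python
import numpy as np

def membership(x, v, w, gamma=4.0):
    """Membership of pattern x in the hyperbox with min point v and max
    point w: 1 inside the box, decreasing linearly (slope gamma) with the
    distance outside it -- a compact ramp-style membership."""
    outside = np.clip(gamma * (v - x), 0, 1) + np.clip(gamma * (x - w), 0, 1)
    return 1.0 - outside.mean() / 2

def expand(v, w, x, theta=0.3):
    """Expansion step: grow the box to include x if the expanded box stays
    within the size limit theta in every dimension, else signal failure.
    (The original algorithm uses an aggregate size test; this per-dimension
    variant is a simplification.)"""
    new_v, new_w = np.minimum(v, x), np.maximum(w, x)
    return (new_v, new_w) if np.all(new_w - new_v <= theta) else None

v, w = np.array([0.2, 0.2]), np.array([0.4, 0.5])
print(membership(np.array([0.3, 0.3]), v, w))   # inside -> 1.0
print(membership(np.array([0.9, 0.9]), v, w))   # outside -> lower
print(expand(v, w, np.array([0.45, 0.45])))     # expansion within the size limit succeeds
```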

16.
Decision-based neural networks with signal/image classification applications (Total citations: 2; self-citations: 0; other citations: 2)
Supervised learning networks based on a decision-based formulation are explored. More specifically, a decision-based neural network (DBNN) is proposed, which combines the perceptron-like learning rule and a hierarchical nonlinear network structure. The decision-based mutual training can be applied to both static and temporal pattern recognition problems. For static pattern recognition, two hierarchical structures are proposed: hidden-node and subcluster structures. The relationships between DBNN's and other models (linear perceptron, piecewise-linear perceptron, LVQ, and PNN) are discussed. As to temporal DBNN's, model-based discriminant functions may be chosen to compensate for possible temporal variations, such as waveform warping and alignments. Typical examples include the DTW distance, prediction error, or likelihood functions. For classification applications, DBNN's are very effective in both computation time and performance. This is confirmed by simulations conducted for several applications, including texture classification, OCR, and ECG analysis.

17.
18.
Hierarchical Fusion of Multiple Classifiers for Hyperspectral Data Analysis (Total citations: 3; self-citations: 0; other citations: 3)
Many classification problems involve high dimensional inputs and a large number of classes. Multiclassifier fusion approaches to such difficult problems typically centre around smart feature extraction, input resampling methods, or input space partitioning to exploit modular learning. In this paper, we investigate how partitioning of the output space (i.e. the set of class labels) can be exploited in a multiclassifier fusion framework to simplify such problems and to yield better solutions. Specifically, we introduce a hierarchical technique to recursively decompose a C-class problem into C-1 two-(meta)class problems. A generalised modular learning framework is used to partition a set of classes into two disjoint groups called meta-classes. The coupled problems of finding a good partition and of searching for a linear feature extractor that best discriminates the resulting two meta-classes are solved simultaneously at each stage of the recursive algorithm. This results in a binary tree whose leaf nodes represent the original C classes. The proposed hierarchical multiclassifier framework is particularly effective for difficult classification problems involving a moderately large number of classes. The proposed method is illustrated on a problem related to classification of landcover using hyperspectral data: a 12-class AVIRIS subset with 180 bands. For this problem, the classification accuracies obtained were superior to most other techniques developed for hyperspectral classification. Moreover, the class hierarchies that were automatically discovered conformed very well with human domain experts’ opinions, which demonstrates the potential of using such a modular learning approach for discovering domain knowledge automatically from data.
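A hedged sketch of the output-space decomposition: the set of classes is recursively split into two meta-classes until singletons remain, yielding a binary tree with one leaf per class and C-1 internal two-metaclass problems. A 2-means clustering of the class mean vectors (via scikit-learn, assumed available) stands in for the paper's coupled partition/feature-extraction search.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_class_hierarchy(class_means):
    """Recursively split a dict {class_label: mean_vector} into two
    meta-classes until singletons remain.  The result is a binary tree
    (nested tuples) with one leaf per class; 2-means on the class means
    stands in for the paper's partition search."""
    labels = list(class_means)
    if len(labels) == 1:
        return labels[0]                                   # leaf: a single class
    M = np.stack([class_means[c] for c in labels])
    split = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(M)
    left = {c: class_means[c] for c, s in zip(labels, split) if s == 0}
    right = {c: class_means[c] for c, s in zip(labels, split) if s == 1}
    if not left or not right:                              # degenerate split fallback
        left = {labels[0]: class_means[labels[0]]}
        right = {c: class_means[c] for c in labels[1:]}
    return (build_class_hierarchy(left), build_class_hierarchy(right))

rng = np.random.default_rng(0)
means = {c: rng.standard_normal(5) for c in range(6)}      # 6 classes, 5-D class means
print(build_class_hierarchy(means))                        # nested tuple = binary tree
```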

19.
Classifiability-based omnivariate decision trees (Total citations: 1; self-citations: 0; other citations: 1)
Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets. New nodes are created to handle each of the resulting partitions, and the process continues. A node is considered terminal if it satisfies some stopping criterion (for example, purity, i.e., all patterns at the node are from a single class). Decision trees may be univariate, linear multivariate, or nonlinear multivariate depending on whether a single attribute, a linear function of all the attributes, or a nonlinear function of all the attributes is used for the partitioning at each node of the decision tree. Though nonlinear multivariate decision trees are the most powerful, they are more susceptible to the risks of overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection is done using a novel classifiability measure that captures the possible sources of misclassification with relative ease and is able to accurately reflect the complexity of the subproblem at each node. The proposed approach is fast and does not suffer from as high a computational burden as that incurred by typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy compared to statistical model selection algorithms.

20.
The hierarchical fast learning artificial neural network (HieFLANN) is a clustering NN that can be initialized using statistical properties of the data set. This provides the possibility of constructing the entire network autonomously with no manual intervention. This distinguishes it from many existing networks that, though hierarchically plausible, still require manual initialization processes. The unique system of hierarchical networks begins with a reduction of the high-dimensional feature space into smaller and more manageable ones. This process involves using the K-iterations fast learning artificial neural network (KFLANN) to systematically cluster a square matrix containing the Mahalanobis distances (MDs) between data set features into homogeneous feature subspaces (HFSs). The KFLANN is used for its heuristic network initialization capabilities on a given data set and requires no supervision. Through the recurring use of the KFLANN and a second stage involving canonical correlation analysis (CCA), the HieFLANN is developed. Experimental results on several standard benchmark data sets indicate that the autonomous determination of the HFSs provides a viable avenue for feasible partitioning of feature subspaces. When coupled with the network transformation process, the HieFLANN yields results showing accuracies comparable with available methods. This provides a new platform by which data sets with high-dimensional feature spaces can be systematically resolved and trained autonomously, alleviating the effects of the curse of dimensionality.
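The first stage operates on a square matrix of Mahalanobis distances between data set features. The sketch below shows one plausible construction of such a matrix (each feature column treated as a point in sample space, with a pseudo-inverse covariance); the paper's exact definition may differ, and the code is not the authors'.

```python
import numpy as np

def feature_mahalanobis_matrix(X):
    """d x d matrix of Mahalanobis distances between the d features
    (columns) of X.  Each feature is treated as a point whose coordinates
    are its values over the samples, with a pseudo-inverse covariance --
    one plausible construction of the MD matrix described above."""
    F = X.T                                          # shape (d, n): one point per feature
    S_inv = np.linalg.pinv(np.cov(F, rowvar=False))  # rank-deficient covariance -> pinv
    diff = F[:, None, :] - F[None, :, :]             # pairwise feature differences
    q = np.einsum('ijk,kl,ijl->ij', diff, S_inv, diff)
    return np.sqrt(np.maximum(q, 0.0))               # guard against tiny negative round-off

X = np.random.rand(100, 6)                           # 100 samples, 6 features
D = feature_mahalanobis_matrix(X)
print(D.shape, np.allclose(D, D.T))                  # (6, 6) True
```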
