Similar Documents
20 similar documents found (search time: 291 ms)
1.
It is widely believed in the pattern recognition field that when a fixed number of training samples is used to design a classifier, the generalization error of the classifier tends to increase as the number of features grows. In this paper, we discuss the generalization error of artificial neural network (ANN) classifiers in high-dimensional spaces, under the practical condition that the ratio of training sample size to dimensionality is small. Experimental results show that the generalization error of ANN classifiers appears to be much less sensitive to feature size than that of 1-NN, Parzen, and quadratic classifiers.
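A minimal sketch of this kind of experiment, assuming scikit-learn; the synthetic data set, feature counts, and network size are choices of this sketch, not the paper's setup. It holds the training-set size fixed, grows the dimensionality, and compares ANN and 1-NN test error.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

n_train = 100  # fixed training-set size, small relative to dimensionality
for n_features in (10, 50, 100, 200):
    X, y = make_classification(n_samples=n_train + 1000, n_features=n_features,
                               n_informative=5, n_redundant=0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=n_train,
                                              random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
    nn1 = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
    print(f"d={n_features}: ANN err={1 - ann.score(X_te, y_te):.3f}, "
          f"1-NN err={1 - nn1.score(X_te, y_te):.3f}")
```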

2.
To address the low classification accuracy of a single neural network and the long training time of using the RUSBoost algorithm to improve an NN classifier, a classification optimization algorithm combining RUSBoost with the Pearson product-moment coefficient is proposed. First, the RUSBoost algorithm generates m training sets; then, the Pearson product-moment coefficient is used to measure the correlation among the attributes of each training set and eliminate redundant attributes, yielding the target training sets; finally, neural network classifiers are trained on the new sub-training sets, and the classifier with the highest accuracy is selected as the final classification model. Four benchmark data sets are used to verify the effectiveness of the proposed algorithm. Experimental results show that, compared with traditional algorithms, the proposed algorithm improves accuracy by up to 8.26% and reduces training time by up to 62.27%.
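A hedged sketch of the described pipeline: plain random resampling stands in for RUSBoost's under-sampling-plus-boosting, and the correlation threshold, network size, and m are illustrative guesses rather than the paper's values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def prune_redundant(X, thresh=0.95):
    """Drop one of every attribute pair whose |Pearson r| exceeds thresh."""
    corr = np.corrcoef(X, rowvar=False)
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        if not keep[i]:
            continue
        for j in range(i + 1, X.shape[1]):
            if keep[j] and abs(corr[i, j]) > thresh:
                keep[j] = False
    return keep

def train_best(X, y, m=5, seed=0):
    best, best_keep, best_acc = None, None, -1.0
    for k in range(m):
        Xs, ys = resample(X, y, random_state=seed + k)  # one of m training sets
        keep = prune_redundant(Xs)                      # remove redundant attributes
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                            random_state=seed).fit(Xs[:, keep], ys)
        acc = clf.score(X[:, keep], y)  # ideally a held-out validation set
        if acc > best_acc:              # keep the most accurate classifier
            best, best_keep, best_acc = clf, keep, acc
    return best, best_keep
```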

3.
Machine learning techniques have been widely applied to classification problems involving high-dimensional, complex data in bioinformatics. Among them, the Bayesian regularized neural network (BRNN) has become a popular choice due to its robustness and its ability to avoid overfitting. On the other hand, the Bayesian approach to neural network training imposes a computational burden and increases time complexity, which restricts the use of BRNN in online machine learning systems. In this article, a Bayesian regularized neural network decision tree (BrNdT) ensemble model is proposed to combat the high computational time complexity of a single classifier model. The key idea behind the proposed ensemble methodology is to weigh and combine several individual classifiers and apply a majority-voting decision scheme to obtain an efficient classifier that outperforms each of them. Simulation results show that the proposed method achieves a significant reduction in time complexity while maintaining higher accuracy than other conventional techniques.
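A minimal sketch of the majority-voting combination step only (the BRNN and decision-tree base learners are not shown; any fitted classifiers producing integer labels would do, and scipy >= 1.9 is assumed for the keepdims argument).

```python
import numpy as np
from scipy import stats

def majority_vote(classifiers, X):
    """Predict by majority vote over a list of fitted classifiers."""
    votes = np.stack([clf.predict(X) for clf in classifiers])  # (n_clf, n_samples)
    return stats.mode(votes, axis=0, keepdims=False).mode       # per-sample winner
```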

4.
In this article, we propose a uniform Parzen window neural network and introduce the network's configuration, methods for decision-boundary determination, and forward training algorithms. For the hidden-layer nodes we adopt the uniformly distributed Parzen window density function, and for the output nodes a union function. We also design a pattern-generator algorithm to create artificial pattern data, which can be used for simulation, performance evaluation, and neural network optimization.
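A sketch of the uniform (box-kernel) Parzen window density estimate that underlies such hidden-layer nodes; the window width h is an illustrative free parameter, not a value from the article.

```python
import numpy as np

def uniform_parzen_density(x, samples, h=1.0):
    """p(x) ~ fraction of samples inside the axis-aligned box of side h around x."""
    samples = np.asarray(samples, dtype=float)
    _, d = samples.shape
    inside = np.all(np.abs(samples - x) <= h / 2.0, axis=1)
    return inside.mean() / (h ** d)  # normalize by the box volume h^d
```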

5.
We propose an approach for neuro-fuzzy system modeling. A neuro-fuzzy system for a given set of input-output data is obtained in two steps. First, the data set is partitioned automatically into a set of clusters based on input-similarity and output-similarity tests. Membership functions associated with each cluster are defined according to the statistical means and variances of the data points included in the cluster, and a fuzzy IF-THEN rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are refined to increase the precision of the fuzzy rule base. To decrease the size of the search space and speed up convergence, we develop a hybrid learning algorithm that combines a recursive singular value decomposition-based least-squares estimator with the gradient descent method. The proposed approach has the advantages of determining the number of rules automatically and of matching membership functions closely to the real distribution of the training data points. Moreover, it learns faster, consumes less memory, and produces lower approximation errors than other methods.
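A rough sketch of the hybrid estimation idea under stated assumptions: linear consequent parameters solved in one SVD-based least-squares step (np.linalg.lstsq uses SVD internally), with Gaussian memberships built from cluster means and variances. The exact rule form and the recursive variant used in the paper may differ.

```python
import numpy as np

def gaussian_firing(X, means, stds):
    """Firing strength of each rule: product of per-dimension Gaussian memberships."""
    d = (X[:, None, :] - means[None]) / stds[None]       # (n_samples, n_rules, dim)
    return np.exp(-0.5 * (d ** 2).sum(axis=2))           # (n_samples, n_rules)

def consequents_by_lstsq(Phi, y):
    """Solve rule consequents from firing strengths Phi by least squares."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```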

6.
Models of real-world applications often include a large number of parameters with a wide dynamic range, which adds to the difficulty of neural network training. Creating the training data set for such applications becomes costly, if not impossible. To overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving overall learning and generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete or continuous) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic for selecting the inversion target values in the continuous network inversion method. Efficiency and generalization were further enhanced by employing node-decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means of reducing the input search dimensionality. The benefits of the overall QBL approach are demonstrated experimentally in two aerospace applications: a classification problem with a large input space and a control distribution problem.
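A hedged sketch of continuous network inversion as gradient descent on the input: starting from a seed input, descend on the squared distance between the network output and a target. Finite differences are used here so it works with any black-box forward function f; the step size, iteration count, and epsilon are illustrative.

```python
import numpy as np

def invert_network(f, y_target, x0, lr=0.05, steps=200, eps=1e-4):
    """Find an input x whose output f(x) approaches y_target."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        base = np.sum((f(x) - y_target) ** 2)
        grad = np.zeros_like(x)
        for i in range(x.size):              # finite-difference gradient w.r.t. x
            xp = x.copy()
            xp[i] += eps
            grad[i] = (np.sum((f(xp) - y_target) ** 2) - base) / eps
        x -= lr * grad
    return x  # candidate performance-critical input to pass to the oracle
```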

7.
Fuzzy min-max neural networks. I. Classification.
A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function. The min-max points are determined using the fuzzy min-max learning algorithm, an expansion-contraction process that can learn nonlinear class boundaries in a single pass through the data and provides the ability to incorporate new classes and refine existing ones without retraining. The use of a fuzzy set approach to pattern classification inherently provides degree-of-membership information that is extremely useful in higher-level decision making. The relationship between fuzzy sets and pattern classification is described, the fuzzy min-max classifier neural network implementation is explained, the learning and recall algorithms are outlined, and several examples of operation demonstrate the strong qualities of this new neural network classifier.
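A simplified sketch in this spirit (not Simpson's exact formula): membership is full inside the hyperbox [v, w] and decays, at a rate set by gamma, with the distance by which a point falls outside it.

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=1.0):
    """Degree to which point x belongs to the hyperbox with min point v, max point w."""
    below = np.clip(gamma * (v - x), 0.0, 1.0)  # penalty for falling under min point v
    above = np.clip(gamma * (x - w), 0.0, 1.0)  # penalty for exceeding max point w
    return float(np.clip(np.mean(1.0 - below - above), 0.0, 1.0))
```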

8.
Different types of neural networks can be used to classify images. We propose to apply the LIRA (LImited Receptive Area) neural classifier to image tasks. To accelerate the network's operation, we propose a digital implementation of the LIRA neural classifier. We begin with the neuron design and then continue with the neural network simulation. The advantages of a neural network are its parallel structure and its trainability. An FPGA (Field Programmable Gate Array) allows these parallel algorithms to be implemented in a single device. Classification speed is one of the most important requirements in adaptive control systems based on computer vision. The contribution of this article is a two-class FPGA implementation of the LIRA neural classifier that accelerates both the training and the recognition process.

9.
To address the problem that convolutional neural networks extract incomplete feature information, which leads to low classification accuracy, this paper builds a convolutional neural network framework using deep learning and proposes an image classification method based on iterative training and ensemble learning. Data augmentation is used to preprocess the image data sets. For feature extraction, the convolutional neural network is trained iteratively to obtain sufficiently effective image features; for classification, the ensemble learning idea from machine learning is adopted: classifiers are trained separately after feature extraction and assigned different weights according to the size of their contributions, achieving better performance than any single classifier and improving image classification accuracy. Experimental results on the Stanford Dogs, UEC FOOD-100, and CIFAR-100 data sets demonstrate its good classification performance.
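A sketch of the weighted-voting step, assuming integer class labels 0..n_classes-1 and taking validation accuracy as one plausible reading of "weights by contribution" (the abstract does not specify the weighting rule).

```python
import numpy as np

def weighted_vote(classifiers, weights, X, n_classes):
    """Combine classifier predictions with per-classifier weights."""
    scores = np.zeros((len(X), n_classes))
    for clf, w in zip(classifiers, weights):
        scores[np.arange(len(X)), clf.predict(X)] += w
    return scores.argmax(axis=1)

# e.g. weights = [clf.score(X_val, y_val) for clf in classifiers]
```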

10.
FANNC: A Fast Adaptive Neural Network Classifier
In this paper, a fast adaptive neural network classifier named FANNC is proposed. FANNC exploits the advantages of both adaptive resonance theory and field theory. It needs only one-pass learning and achieves not only high predictive accuracy but also fast learning speed. Besides, FANNC has incremental learning ability: when new instances are fed in, it does not need to retrain on the whole training set. Instead, it learns the knowledge encoded in those instances by slightly adjusting the network topology when necessary, that is, adaptively appending one or two hidden units and the corresponding connections to the existing network. This characteristic makes FANNC well suited to real-time online learning tasks. Moreover, since the network architecture is set up adaptively, the disadvantage of manually determining the number of hidden units, common to most feed-forward neural networks, is overcome. Benchmark tests show that FANNC is a preferable neural network classifier, superior to several other neural algorithms in both predictive accuracy and learning speed.

11.
Voting over Multiple Condensed Nearest Neighbors

12.
Neural-Based Learning Classifier Systems
UCS is a supervised learning classifier system that was introduced in 2003 for classification in data mining tasks. The representation of a rule in UCS as a univariate classification rule is straightforward for a human to understand. However, the system may require a large number of rules to cover the input space. Artificial neural networks (NNs), on the other hand, normally provide a more compact representation, but the resulting network is not straightforward to understand. In this paper, we propose a novel way to incorporate NNs into UCS. The approach offers a good compromise between compactness, expressiveness, and accuracy. By using a simple artificial NN as the classifier's action, we obtain a more compact population size, better generalization, and the same or better accuracy, while maintaining a reasonable level of expressiveness. We also apply negative correlation learning (NCL) during the training of the resultant NN ensemble. NCL is shown to improve the generalization of the ensemble.
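A sketch of the negative correlation learning penalty for one ensemble member, following the classic NCL formulation (squared error plus lambda times a term that couples the member's deviation from the ensemble mean to the other members' deviations); lambda is the usual trade-off parameter, not a value from the paper.

```python
import numpy as np

def ncl_loss(outputs, i, y, lam=0.5):
    """NCL error of member i: MSE + lam * p_i, with
    p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar)."""
    f = np.asarray(outputs, dtype=float)   # (n_members, n_samples)
    f_bar = f.mean(axis=0)                 # ensemble mean output
    others = f.sum(axis=0) - f[i] - (len(f) - 1) * f_bar  # sum_{j != i}(f_j - f_bar)
    p_i = (f[i] - f_bar) * others
    return np.mean((f[i] - y) ** 2 + lam * p_i)
```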

13.
We compare kernel estimators, single- and multi-layered perceptrons, and radial-basis functions on the problems of classifying handwritten digits and speech phonemes. By taking two different applications and employing many techniques, we report a two-dimensional study whereby a domain-independent assessment of these learning methods becomes possible. We consider a feed-forward network with one hidden layer. As examples of local methods, we use kernel estimators such as k-nearest neighbour (k-nn), Parzen windows, generalised k-nn, and Grow and Learn (Condensed Nearest Neighbour); we have also considered fuzzy k-nn due to its similarity. As distributed networks, we use the linear perceptron, the pairwise separating linear perceptron, and multi-layer perceptrons with sigmoidal hidden units. We also tested the radial-basis function network, which is a combination of local and distributed networks. Four criteria are used for comparison: correct classification of the test set, network size, learning time, and operational complexity. We found that perceptrons, when the architecture is suitable, generalise better than local, memory-based kernel estimators, but require longer training and more precise computation. Local networks are simple and learn very quickly and acceptably, but use more memory.

14.
In this paper, we report results obtained with a Madaline neural network trained to classify inductive signatures of two vehicle classes: trucks with one rear axle and trucks with a double rear axle. To train the Madaline, the inductive signatures were pre-processed, and both classes, named C2 and C3, were subdivided into four subclasses; the initial classification task was thus split into four smaller tasks that are (theoretically) easier to perform. The heuristic adopted in training attempts to minimize the effects of the input space's non-linearity on classifier performance by uncoupling the learning of the classes; to this end, we induce output Adalines to specialize in learning one of the classes. The percentages of correct classifications reported concern patterns that were not presented to the neural network during training, and therefore indicate the network's generalization ability. The results are good and encourage continued research on the use of Madaline networks in vehicle classification tasks with inductive signatures that are not linearly separable.

15.
The domain adaptation problem in machine learning occurs when the distribution generating the test data differs from the one that generates the training data. A common approach to this issue is to train a standard learner for the task on the available training sample (generated by a distribution different from the test distribution). One can view such learning as learning from a not-perfectly-representative training sample. The question we focus on is under which circumstances large sizes of such training samples can guarantee that the learned classifier performs just as well as one learned from target-generated samples. In other words, are there circumstances in which quantity can compensate for quality (of the training data)? We give a positive answer, showing that this is possible when using a nearest neighbor algorithm. We show this under some assumptions about the relationship between the training and target data distributions (the covariate shift assumption, as well as a bound on the ratio of certain probability weights between the source (training) and target (test) distributions). We further show that in a slightly different learning model, when one imposes restrictions on the nature of the learned classifier, these assumptions are not always sufficient to allow such a replacement of the training sample: for proper learning, where the output classifier has to come from a predefined class, we prove that any learner needs access to data generated from the target distribution.

16.
It is demonstrated both theoretically and experimentally that, under appropriate assumptions, a neural network pattern classifier implemented with a supervised learning algorithm generates the empirical Bayes rule that is optimal against the empirical distribution of the training sample. It is also shown that, for a sufficiently large sample size, asymptotic equivalence of the network-generated rule to the theoretical Bayes optimal rule against the true distribution governing the occurrence of data follows immediately from the law of large numbers. It is proposed that a Bayes statistical decision approach leads naturally to a probabilistic definition of the valid generalization which a neural network can be expected to generate from a finite training sample.

17.
This paper constructs a data set of 24 different signal modulation types and proposes an end-to-end neural network for signal modulation recognition. The effects of the number of convolutional layers, the convolution kernels, and the size of the training data set on recognition performance are studied. The proposed method avoids the feature selection, signal synchronization, carrier tracking, SNR estimation, and other complex processing steps required by feature-extraction-based modulation recognition methods. Finally, transfer learning is introduced to address the degradation of recognition performance caused by changes in the channel environment…
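A hedged sketch, assuming PyTorch, of an end-to-end 1-D CNN over raw I/Q samples for 24-class modulation recognition; the layer counts and kernel sizes are illustrative, not those studied in the paper.

```python
import torch
import torch.nn as nn

class ModulationCNN(nn.Module):
    def __init__(self, n_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),  # 2 input channels: I and Q
            nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(), nn.AdaptiveAvgPool1d(1),          # pool out the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 2, n_samples)
        return self.classifier(self.features(x).squeeze(-1))
```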

18.
In this paper, we perform a noise analysis to assess the robustness to noise of a neural classifier aimed at multi-class diagnosis of rolling element bearings. We work on vibration signals collected by means of two accelerometers, and we consider ten levels of noise, each characterized by a different signal-to-noise ratio ranging from 40.55 to −11.35 dB. We classify the noisy signals by means of a neural classifier initially trained on signals without noise, and then repeat the training process with signals affected by increasing levels of noise. We show that adding noisy signals to the training set can significantly increase the classification accuracy of a single classifier. Finally, we apply the two most widely used strategies for combining classifiers, classifier fusion and classifier selection, and show that, in both cases, the performance of the single best classifier can be significantly increased. In particular, classifier selection achieves the best results for low and medium levels of noise, while classifier fusion is the most accurate for high levels of noise. The analysis presented in the paper can be profitably used to identify both the type of classifier (e.g., single classifier or classifier ensemble) and how many and which noise levels should be used in the training phase in order to achieve the desired classification accuracy in the application domain of interest.
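A small sketch of the noise-injection step such a study relies on: adding white Gaussian noise to a signal at a prescribed signal-to-noise ratio in dB.

```python
import numpy as np

def add_noise_snr(signal, snr_db, rng=None):
    """Return signal plus white Gaussian noise at the given SNR (dB)."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))  # SNR = 10*log10(Ps/Pn)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```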

19.
To better apply existing deep convolutional neural networks to facial expression recognition, a method combining pre-training on a purpose-built natural-expression image set with multi-task deep learning is proposed. First, a spontaneous facial expression data set is constructed from social network images and used to pre-train existing deep convolutional neural networks; then, the flat softmax classifier in the output layer is replaced with a two-level tree classifier, yielding a deep multi-task facial expression recognition model. Experimental results show that the proposed method effectively improves facial expression recognition accuracy.

20.
In on-device training of machine learning models on microcontrollers, a neural network is trained directly on the device. A specific approach to collaborative on-device training is federated learning. In this paper, we propose embedded federated learning on microcontroller boards using the communication capacity of a LoRa mesh network. We apply a dual-board design: the machine learning application, which contains a neural network, is trained for a keyword-spotting task on the Arduino Portenta H7, while for the networking of the federated learning process the Portenta is connected to a TTGO LORA32 board that operates as a router within a LoRa mesh network. We run the federated learning application on the LoRa mesh network and analyze performance at the network, system, and application levels. The results of our experimentation suggest the feasibility of the proposed system and exemplify an implementation of a distributed application with re-trainable compute nodes, interconnected over LoRa and deployed entirely at the tiny edge.
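A minimal federated-averaging (FedAvg) sketch of the aggregation step a coordinator could run over parameter vectors reported by the mesh nodes; the paper's exact aggregation scheme is not specified in the abstract, so this standard rule is an assumption.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters by local data-set size."""
    total = float(sum(client_sizes))
    return sum(w * (n / total)
               for w, n in zip(map(np.asarray, client_weights), client_sizes))
```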
