Similar Documents
20 similar documents found (search time: 46 ms)
1.
For learning-based interest point extraction on 3D models, a fully supervised algorithm with hierarchical learning of interest points is proposed. After extracting feature vectors for all vertices on the 3D model surface, the manually annotated interest points are divided into sparse points and dense points: for sparse points a neural network is trained on the whole 3D model, while for dense points the regions where interest points are densely distributed are identified and trained with a separate neural network. The two neural networks are then combined by feature matching, yielding a classifier for predicting interest points on 3D models. At test time, feature vectors of all vertices of a newly input 3D model are extracted and fed into the trained classifier, and an improved density-peaks clustering algorithm is applied to extract the interest points. By adopting this hierarchical learning strategy, the algorithm solves the problem that traditional algorithms have difficulty accurately extracting dense interest points in detailed regions of a model. Experimental results on the SHREC'11 dataset show that, compared with traditional algorithms, the proposed algorithm extracts interest points with higher accuracy and produces fewer missed and erroneous points, which is of considerable help for interest point extraction on increasingly detailed 3D models.
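The final clustering step can be sketched with a generic Rodriguez-Laio-style density-peaks selector: each point gets a local density ρ and a distance δ to the nearest denser point, and points with large ρ·δ are taken as peaks. The cutoff `d_c` and the ρ·δ ranking below are the standard recipe; the paper's "improved" variant is not specified here, so treat this as a baseline sketch.

```python
import numpy as np

def density_peaks(points, d_c, k):
    """Pick k peaks by the density-peaks rule: high local density rho
    and large distance delta to any denser point."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho = (d < d_c).sum(axis=1) - 1          # neighbours within the cutoff
    delta = np.empty(len(points))
    for i in range(len(points)):
        denser = np.where(rho > rho[i])[0]
        delta[i] = d[i, denser].min() if len(denser) else d[i].max()
    return np.argsort(rho * delta)[-k:]      # top-k by gamma = rho * delta
```

On two well-separated blobs the two selected indices are the blob centres, which is the behaviour the interest point extractor relies on.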

2.
3D Reconstruction for Stereo Vision Using a Neural Network Implicit Vision Model (Cited by: 2; self: 0, others: 2)
To address the cumbersome procedure of traditional stereo vision methods based on precise mathematical models, a 3D reconstruction algorithm using an implicit BP neural network vision model is proposed. The algorithm places multiple calibration planes within the effective field of view and uses a neural network to simulate the stereo-vision process of reconstructing 3D geometry from two 2D images; once the network is trained, 3D reconstruction can be performed without camera calibration. Simulation results show that the algorithm is relatively simple and maintains high accuracy.
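As a minimal illustration of an "implicit vision model", the sketch below fits a linear map from paired image coordinates straight to 3-D points for two hypothetical affine cameras; the paper's BP network plays the same role for real perspective cameras. The camera matrices, point counts, and the linear stand-in are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical affine cameras (2x4 projection matrices). The model
# never sees these matrices, only image/world training pairs.
P1 = np.array([[800., 0., 30., 320.], [0., 800., 10., 240.]])
P2 = np.array([[790., 5., -40., 300.], [4., 805., 15., 250.]])

X = rng.uniform(-1, 1, (200, 3))                 # world training points
Xh = np.hstack([X, np.ones((200, 1))])           # homogeneous coordinates
U = np.hstack([Xh @ P1.T, Xh @ P2.T])            # measured (u1, v1, u2, v2)

# "Train" the implicit model: fit a linear map from image features to 3-D.
Uh = np.hstack([U, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(Uh, X, rcond=None)

# Reconstruct a held-out point without using P1/P2 at test time.
x_new = np.array([0.2, -0.5, 0.7])
u_new = np.concatenate([np.append(x_new, 1) @ P1.T, np.append(x_new, 1) @ P2.T])
x_rec = np.append(u_new, 1) @ W
```

For affine cameras the image-to-world map is exactly linear, so least squares recovers it; the BP network in the paper generalizes this to the nonlinear perspective case.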

3.
Comparison of neofuzzy and rough neural networks (Cited by: 18; self: 0, others: 18)
Conventional neural network architectures generally lack semantics. Both rough and neofuzzy neurons introduce semantic structures in the conventional neural network models. Rough neurons make it possible to process data points with a range of values instead of a single precise value. Neofuzzy neurons make it possible to convert crisp values into fuzzy values. This paper compares rough and neofuzzy neural networks. Rough and neofuzzy neurons are demonstrated to be complementary to each other. It is shown that the introduction of rough and fuzzy semantic structures in neural networks can increase the accuracy of predictions.

4.
Abstract: The aim of this work is to exploit aspects of functional approximation techniques in parameter estimation procedures applied to fault detection and isolation tasks, using backpropagation neural networks as functional approximation devices. The major focus of the paper is the strategy used for data selection in the determination of non-conventional process parameters, such as performance or process-efficiency indexes, which are difficult to acquire by direct measurement. The implementation and validation procedure on a real case study is carried out with the aid of commercial neural network toolboxes, which manage databases, neural network structures and highly efficient training algorithms.

5.
A Discussion of Design Methods for Identification Networks in Neural-Network Intelligent Control Systems (Cited by: 1; self: 1, others: 0)
This paper applies the basic principles of artificial neural networks to process control systems, giving a preliminary exploration and summary of the basic structure of the identification network and its training methods. For the identification network of the control system, it compares and analyzes the effects on the transient response of the system of the basic BP algorithm versus the BP algorithm with a momentum term, randomly initialized versus pre-trained initial weights, and the presence versus absence of a reference model. Simulation results show that the adopted methods for improving the network's adaptive capability all improve the regulation quality of the system to varying degrees.
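The basic-BP versus momentum-term comparison the abstract describes can be reproduced on a toy problem. The quadratic surrogate, learning rate, and momentum coefficient below are illustrative choices, not values from the paper.

```python
import numpy as np

def train(momentum=0.0, lr=0.02, steps=200):
    """Gradient descent on an ill-conditioned quadratic, the usual toy
    for contrasting basic BP with the momentum-term variant."""
    A = np.diag([1.0, 25.0])        # curvatures differ by a factor of 25
    w = np.array([1.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        g = A @ w                   # gradient of f(w) = w.T @ A @ w / 2
        v = momentum * v - lr * g   # momentum accumulates past gradients
        w = w + v
    return float(np.linalg.norm(w))
```

With the same learning rate, the momentum run ends much closer to the optimum: the accumulated velocity speeds up progress along the shallow direction, which is the "improved regulation quality" effect in miniature.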

6.
Artificial neural networks are used to model the offset printing process with the aim of developing tools for on-line ink feed control. Inherent in the modelling data are outliers owing to sensor faults, measurement errors and impurity of the materials used. It is essential to identify outliers in process data in order to avoid using these data points when updating the model. We present a hybrid, process-model- and neural-network-based technique for outlier detection. The outliers can then be removed to improve the process model. Several diagnostic measures are aggregated via a neural network that categorizes data points into outlier and inlier classes. We demonstrate experimentally that a soft fuzzy expert can be configured to label data for training the categorization neural network.

7.
To address the complex spatio-temporal aggregation mechanism and long training time of process neural networks, a data-parallel training algorithm for process neural networks is proposed. The method uses batch training with gradient descent and is designed in the MPI parallel-programming model, realizing cluster-based parallel computation across multiple computers on a local area network. The paper presents the data-parallel training algorithm and its implementation mechanism, reports comparative experiments on training-function sample sets of different sizes with different numbers of processes, and analyzes properties of the algorithm such as speedup and parallel efficiency. The experimental results show that, with the parallel granularity chosen appropriately for the network and sample sizes, the algorithm can considerably improve the training efficiency of process neural networks.
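The core of data-parallel batch training is that averaging per-shard gradients reproduces the full-batch gradient. The sketch below emulates the MPI scatter/allreduce pattern in plain NumPy, with four simulated ranks and a linear model as an illustrative stand-in for the process neural network.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = X @ np.array([2.0, -1.0, 0.5])
w = np.zeros(3)

def local_grad(Xs, ys, w):
    # per-worker gradient of the mean-squared error on its shard
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

# Emulate 4 MPI ranks: scatter the batch, compute shard gradients, then
# average them -- the role MPI_Allreduce plays in a data-parallel scheme.
shards = np.array_split(np.arange(64), 4)
grads = [local_grad(X[s], y[s], w) for s in shards]
g_par = np.mean(grads, axis=0)

g_full = local_grad(X, y, w)   # single-machine full-batch gradient
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient exactly, so the parallel and serial training trajectories coincide step for step.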

8.
The structures of optical neural networks (NN) based on new matrix-tensor equivalental models (MTEMs) and algorithms are described in this article. MTE models are a non-iterative neuroparadigm that generalizes Hopfield and Hamming networks. Adaptive multi-layer networks and auto-associative and hetero-associative memories of high-order 2-D images can be built on the basis of MTEMs. The capacity of such networks is increased in comparison with the capacity of Hopfield networks (including the capacity for strongly correlated images). Modeling results show that the number of neurons in MTEM neural networks reaches 10–20 thousand and more. The problems of training such networks and their different modifications, including networks with double adaptive-equivalental auto-weighing of weights, and the organization of the computing process in different network modes are discussed. The basic components of the networks, matrix-tensor "equivalentors", and variants of their realization on the basis of liquid-crystal structures and optical multipliers with spatial and time integration are considered. The efficiency of the proposed optical neural networks based on MTEMs is estimated at the level of 10⁹ connections per second for both variants. Modified optical connections are realized as liquid-crystal television screens.

9.
This study examines the capability of neural networks for linear time-series forecasting. Using both simulated and real data, the effects of neural network factors such as the number of input nodes and the number of hidden nodes, as well as the training sample size, are investigated. Results show that neural networks are quite competent in modeling and forecasting linear time series in a variety of situations, and simple neural network structures are often effective.

Scope and purpose: Neural network capability for nonlinear modeling and forecasting has been established in the literature both theoretically and empirically. The purpose of this paper is to investigate the effectiveness of neural networks for linear time-series analysis and forecasting. Several research studies on neural network capability for linear problems in regression and classification have yielded mixed findings. This study aims to provide further evidence on the effectiveness of neural networks with regard to linear time-series forecasting. The significance of the study is that it is often difficult in practice to determine whether the underlying data-generating process is linear or nonlinear. If neural networks can compete with traditional forecasting models on linear data with noise, they can be used in even broader situations by forecasting researchers and practitioners.
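The simplest competent structure in such a study, a network with p input nodes and no hidden layer, reduces to a linear autoregression fit by least squares. The sketch below fits one to a simulated AR(2) series; the coefficients, noise level, and series length are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated linear AR(2) series: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + noise
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + 0.1 * rng.normal()

p = 2                                            # number of input nodes = lag order
X = np.column_stack([y[p-1:-1], y[p-2:-2]])      # lagged inputs (lags 1..p)
T = y[p:]                                        # one-step-ahead targets
w, *_ = np.linalg.lstsq(X, T, rcond=None)        # zero-hidden-node "network"
```

Recovering weights close to the generating coefficients (0.6, -0.3) is the sense in which a simple network structure "models linear time series well".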

10.
Adaptive structures with algebraic loops (Cited by: 1; self: 0, others: 1)
The contraction theorem has many fields of application, including linear algebraic equations, differential and integral equations, control systems theory, optimization, etc. The paper aims at showing how contraction mapping can be applied to the computation and the training of adaptive structures with algebraic loops. These structures are used for the approximation of unknown functional relations (mappings) represented by training sets. The technique is extended to multilayer neural networks with algebraic loops. Application of a two-layer neural network to breast cancer diagnosis is described.

11.
Rule revision with recurrent neural networks (Cited by: 2; self: 0, others: 2)
Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that the networks not only preserve correct rules but are also able to correct, through training, inserted rules that were initially incorrect (i.e. rules that were not in the randomly generated grammar).

12.
This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of the radial basis functions. During training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms into the training of GRBF networks. These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated and tested on a variety of data sets with very satisfactory results.
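A minimal sketch of the growing cycle follows, using within-cluster scatter as the splitting criterion; the paper proposes two specific criteria, so this variance rule and the child-placement heuristic are assumptions for illustration, not the authors' code.

```python
import numpy as np

def grow(X, cycles=3):
    """Start from one prototype and, each growing cycle, split the
    prototype whose assigned points have the largest scatter."""
    protos = [X.mean(axis=0)]
    for _ in range(cycles):
        P = np.array(protos)
        assign = np.argmin(((X[:, None] - P[None]) ** 2).sum(-1), axis=1)
        scores = [((X[assign == j] - P[j]) ** 2).sum() for j in range(len(P))]
        j = int(np.argmax(scores))              # worst-fitting prototype
        members = X[assign == j]
        off = members.std(axis=0) + 1e-9
        protos[j:j+1] = [P[j] - off, P[j] + off]  # replace by two children
    return np.array(protos)
```

Each cycle adds exactly one prototype, so after c cycles the network has c + 1 radial basis centres.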

13.
This paper proposes GTEANN, a new evolutionary neural network algorithm. Built on the efficient Guo Tao algorithm, it searches the network-structure space and the weight space simultaneously to achieve automated design of feedforward neural networks. The encoding scheme adopted is intuitive and effective; under this representation, the learning process of a neural network is a complex mixed integer-real nonlinear programming problem, and the crossover operation, for example, includes isomorphism and regularization handling of the networks. Preliminary experimental results show that the method converges and achieves the goal of automatically designing optimized multilayer feedforward neural networks from training samples.

14.
Advanced Robotics, 2013, 27(8): 669-682
In this article, a neural network-based grasping system that is able to collect objects of arbitrary shape is introduced. The grasping process is split into three functional blocks: image acquisition and processing, contact point estimation, and contact force determination. The paper focuses on the second block, which contains two neural networks. A competitive Hopfield neural network first determines an approximate polygon for an object outline. These polygon edges are the input for a supervised neural network model [radial basis function (RBF) or multilayer perceptron], which then defines the contact points. Tests were conducted with objects of different shapes, and the experimental results suggest that the performance of the neural gripper and its learning rate are significantly influenced by the choice of supervised training model and RBF learning algorithm.

15.
An Unsupervised Classification Method Based on Genetic Strategies and Neural Networks (Cited by: 2; self: 0, others: 2)
黎明, 严超华, 刘高航. 《软件学报》 (Journal of Software), 1999, 10(12): 1310-1315
This paper proposes a new unsupervised classification method based on genetic strategies and a fuzzy ART (adaptive resonance theory) neural network. First, the fuzzy ART network is trained without supervision on the original training samples; then a genetic strategy is used to generate additional training samples for the fuzzy ART network within the neighborhoods of the inter-class boundaries, after which the network is trained with supervision. This method solves the learning and classification problem of ART-family neural networks when training samples are scarce, improves their classification performance, and extends their range of application.
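For reference, the first (unsupervised) stage can be sketched with a minimal fuzzy ART clusterer: complement coding, a choice function, a vigilance test, and fast learning. The vigilance ρ, choice parameter α, and learning rate β below are generic fuzzy ART parameters, not values from the paper.

```python
import numpy as np

def fuzzy_art(X, rho=0.75, alpha=0.001, beta=1.0):
    """Minimal fuzzy ART: inputs in [0,1]^d, one weight vector per
    category; a sketch of the network the paper trains, not the
    authors' implementation."""
    I = np.hstack([X, 1 - X])                # complement coding
    W = []                                   # category weight vectors
    labels = []
    for x in I:
        Tj = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in W]
        for j in np.argsort(Tj)[::-1]:       # try categories by choice value
            w = W[j]
            if np.minimum(x, w).sum() / x.sum() >= rho:   # vigilance test
                W[j] = beta * np.minimum(x, w) + (1 - beta) * w
                labels.append(j)
                break
        else:                                # no category passed: create one
            W.append(x.copy())
            labels.append(len(W) - 1)
    return np.array(labels), W
```

On two tight groups of points the network settles on one category per group, which is the starting point for the genetic boundary-sampling stage.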

16.
When a convolutional neural network model is trained on a small dataset, the resulting model often generalizes poorly. Traditional remedies mostly increase the total number of training samples through data augmentation. This paper conducts comparative experiments with several network models to verify how much random data augmentation during training improves the recognition accuracy of different neural networks, providing a reference for training neural networks on small-sample datasets.
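A minimal sketch of the random-augmentation toggle such experiments compare: label-preserving flips and small shifts that multiply the effective sample count. The particular transforms and magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Random horizontal flip plus a small translation jitter, two
    common label-preserving augmentations."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                    # horizontal flip
    dy, dx = rng.integers(-2, 3, size=2)      # shift by up to 2 pixels
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def augmented_epoch(images, copies=4):
    # Each original image yields `copies` random variants per epoch,
    # multiplying the effective size of a small training set.
    return np.stack([augment(im) for im in images for _ in range(copies)])
```

Since flips and rolls only permute pixels, every variant contains exactly the pixel values of its source image, which is what makes the augmentation label-preserving.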

17.
Applied Soft Computing, 2007, 7(3): 957-967
In this study, CPBUM neural networks with an annealing robust learning algorithm (ARLA) are proposed to overcome the problems of conventional neural networks in modeling data with outliers and noise. In general, training data obtained in real applications may contain outliers and noise. Although CPBUM neural networks converge quickly, they have difficulty dealing with outliers and noise; hence their robustness must be enhanced. Additionally, the ARLA overcomes the initialization and cut-off point problems of traditional robust learning algorithms and can handle models with outliers and noise. In this study, the ARLA is used as the learning algorithm to adjust the weights of the CPBUM neural networks. It turns out that CPBUM neural networks with the ARLA converge quickly and are more robust against outliers and noise than conventional neural networks with robust mechanisms. Simulation results are provided to show the validity and applicability of the proposed neural networks.

18.
Big data changes rapidly: both its content and its distribution characteristics vary dynamically. Current feedforward neural network models are static learning models that do not support incremental updates and thus have difficulty learning the features of dynamically changing big data in real time. To address this problem, a big data feature-learning model supporting incremental updates is proposed. An objective function is designed for fast incremental updating of the parameters, and a squared-error function is minimized to preserve the network's original knowledge during updates. For data whose features change frequently, the network structure is updated by adding hidden-layer neurons, so that the updated network can learn the features of dynamically changing big data in real time. After the parameters and structure have been updated, the network structure is optimized via SVD decomposition of the weight matrices, deleting redundant connections and enhancing the generalization ability of the model. Experimental results show that the proposed model can learn the features of dynamic big data in real time by continually updating the parameters and structure of the neural network while preserving the model's original knowledge as much as possible.
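The SVD pruning step can be sketched as a low-rank truncation of a weight matrix: keep just enough singular values to retain most of the spectrum and drop the rest as redundant connections. The energy threshold here is an illustrative stand-in for the paper's redundancy criterion.

```python
import numpy as np

def prune(W, energy=0.99):
    """Truncated-SVD compression of a weight matrix: keep the smallest
    rank k whose singular values carry `energy` of the squared spectrum."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(frac, energy)) + 1
    return (U[:, :k] * s[:k]) @ Vt[:k], k
```

On a weight matrix that is (near-)rank-2, the pruned matrix keeps two components and reproduces the original connections almost exactly.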

19.
The inversion of a neural network is the process of computing inputs that produce a given target when fed into the neural network. The inversion algorithm for crisp neural networks is based on gradient descent search, in which a candidate inverse is iteratively refined to decrease the error between its output and the target. In this paper, we derive an inversion algorithm for fuzzified neural networks from that of crisp neural networks. First, we present a framework for learning algorithms of fuzzified neural networks and introduce the idea of adjusting schemes for fuzzy variables. Next, we derive the inversion algorithm for fuzzified neural networks by applying the adjusting scheme for fuzzy variables to the total inputs in the input layer. Finally, we report three experiments on the parity-three problem, examine the effect of the size of training sets on the inversion, and investigate how the fuzziness of inputs and targets of training sets affects the inversion.
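The crisp-network inversion that the paper builds on is gradient descent in input space with the weights frozen. The toy two-input, one-output network below is an illustrative stand-in; the weights, learning rate, and step count are arbitrary choices, not from the paper.

```python
import numpy as np

# A fixed crisp network y = w2 . tanh(W1 x + b1); inversion searches for
# an input x whose output matches a target while the weights stay frozen.
W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
b1 = np.array([0.1, -0.2])
w2 = np.array([0.7, -1.2])

def forward(x):
    return w2 @ np.tanh(W1 @ x + b1)

def invert(target, x0=(0.0, 0.0), lr=0.5, steps=500):
    x = np.array(x0, dtype=float)             # candidate inverse
    for _ in range(steps):
        h = np.tanh(W1 @ x + b1)
        err = w2 @ h - target
        # chain rule: d(err^2/2)/dx = err * W1^T (w2 * (1 - h^2))
        x = x - lr * err * (W1.T @ (w2 * (1 - h * h)))
    return x
```

Since a scalar target generally has a curve of valid inverses, gradient descent returns one point on that curve; the fuzzified algorithm applies the same adjusting scheme to fuzzy total inputs.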

20.
A robust backpropagation learning algorithm for function approximation (Cited by: 3; self: 0, others: 3)
The backpropagation (BP) algorithm allows multilayer feedforward neural networks to learn input-output mappings from training samples. Due to the nonlinear modeling power of such networks, the learned mapping may interpolate all the training points. When erroneous training data are employed, the learned mapping can oscillate badly between data points. In this paper we derive a robust BP learning algorithm that is resistant to the noise effects and is capable of rejecting gross errors during the approximation process. The spirit of this algorithm comes from the pioneering work in robust statistics by Huber and Hampel. Our work is different from that of M-estimators in two aspects: 1) the shape of the objective function changes with the iteration time; and 2) the parametric form of the functional approximator is a nonlinear cascade of affine transformations. In contrast to the conventional BP algorithm, three advantages of the robust BP algorithm are: 1) it approximates an underlying mapping rather than interpolating training samples; 2) it is robust against gross errors; and 3) its rate of convergence is improved since the influence of incorrect samples is gracefully suppressed.
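The influence-bounding idea can be shown on a one-weight "network": clipping residuals in the Huber/Hampel style keeps gross errors from dominating the gradient. The dataset, cutoff, and learning rate below are illustrative, and a full version of the paper's algorithm would additionally anneal the objective's shape over the iterations.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = 3.0 * x + 0.05 * rng.normal(size=50)   # true mapping y = 3x + noise
y[::10] += 8.0                             # gross errors in the training data

def fit(cut=None, lr=0.1, steps=2000):
    """One-parameter 'network' y = w*x trained by gradient descent; with
    a Huber-style cutoff the influence of large residuals is bounded."""
    w = 0.0
    for _ in range(steps):
        r = w * x - y
        if cut is not None:
            r = np.clip(r, -cut, cut)      # bound each sample's influence
        w -= lr * np.mean(r * x)           # gradient step on mean(r^2)/2
    return w
```

Plain least squares is dragged far from the true slope by the gross errors, while the clipped version lands close to it, which is the "graceful suppression of incorrect samples" the abstract describes.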


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号