Similar Articles
 Found 20 similar articles (search time: 31 ms)
1.
Abstract

Although numerical calculations of heat transfer and fluid flow can provide detailed insights into welding processes and welded materials, these calculations are complex and unsuitable in situations where rapid calculations are needed. A recourse is to train and validate a neural network, using results from a well-tested heat and fluid flow model, to significantly expedite calculations and ensure that the computed results conform to the basic laws of conservation of mass, momentum and energy. Seven feedforward neural networks were developed for gas metal arc (GMA) fillet welding, one each for predicting penetration, leg length, throat, weld pool length, cooling time between 800°C and 500°C, maximum velocity and peak temperature in the weld pool. Each model considered 22 inputs that included all the welding variables, such as current, voltage, welding speed, wire radius, wire feed rate, arc efficiency, arc radius and power distribution, and material properties such as thermal conductivity, specific heat and the temperature coefficient of surface tension. The weights in the neural network models were calculated using the conjugate gradient (CG) method and by a hybrid optimisation scheme involving the CG method and a genetic algorithm (GA). The neural network produced by the hybrid optimisation scheme gave better results than the networks based on the CG method with various sets of randomised initial weights. The CG method alone was unable to find the optimal weights for achieving low errors. The hybrid optimisation scheme helped in finding optimal weights through a global search, as evidenced by good agreement between all the outputs from the neural networks and the corresponding results from the heat and fluid flow model.
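The hybrid weight-optimisation scheme can be sketched in a few lines. This is a toy illustration, not the authors' code: the heat and fluid flow model is replaced by a simple analytic function, the conjugate gradient step by plain gradient descent on a numerical gradient, and the 22-input networks by a single small tanh layer; all names and settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the heat and fluid flow model: one output from two inputs.
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

N_H = 6                                   # hidden units
N_W = 2 * N_H + N_H + N_H + 1             # total weights

def unpack(w):
    W1 = w[:2 * N_H].reshape(2, N_H)
    b1 = w[2 * N_H:3 * N_H]
    W2 = w[3 * N_H:4 * N_H]
    b2 = w[4 * N_H]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

def grad(w, eps=1e-5):                    # numerical gradient (stand-in for backprop)
    g = np.zeros_like(w)
    for j in range(len(w)):
        d = np.zeros_like(w)
        d[j] = eps
        g[j] = (mse(w + d) - mse(w - d)) / (2 * eps)
    return g

def refine(w, steps=400, lr=0.05):        # local search, in place of conjugate gradient
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Global search: a small genetic algorithm over the weight vector.
pop = [rng.normal(0, 1, N_W) for _ in range(12)]
for gen in range(10):
    pop.sort(key=mse)                     # elitist selection
    parents = pop[:6]
    children = []
    for _ in range(6):
        a, b = rng.choice(6, size=2, replace=False)
        mask = rng.random(N_W) < 0.5      # uniform crossover
        children.append(np.where(mask, parents[a], parents[b]) + rng.normal(0, 0.1, N_W))
    pop = parents + children

best = refine(min(pop, key=mse))          # GA seed, then gradient refinement
print(round(mse(best), 4))
```

The GA supplies diverse starting points for the local search, which mirrors the paper's finding that a gradient method alone can stall while the hybrid scheme finds low-error weights.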

2.
Effective training data selection in tool condition monitoring system (total citations: 1; self-citations: 1; cited by others: 1)
When neural networks (NNs) are used to identify tool conditions, the richness and size of the training data are crucial. The training data set not only has to cover a wide range of cutting conditions, but also to capture the characteristics of the tool wear process. Such a data set imposes significant computing burdens, results in a complex identification model, and hampers the practical application of NNs. In this paper, a training data selection method is proposed, and a systematic procedure is provided to perform this selection. With this method, the generalization error surface is divided into three regions, and proper sampling factors are chosen for each region to prune data points from the original training set. The quality of the training set is estimated by performance evaluation through decision making. In this work, a support vector machine (SVM) is used in the decision-making method, and the generalization error is used as the performance evaluation criterion. The tradeoff between generalization performance and the size of the training set is key to this selection. Experimental results demonstrate that this selection strategy provides an effective and efficient training set, and that the model developed from this set is fast and reliable for tool condition identification.
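The region-based subsampling idea can be illustrated on synthetic data. Everything below is an assumed stand-in: a nearest-centroid classifier plays the role of the SVM decision maker, and each point's margin under a preliminary model plays the role of the generalization error surface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-class synthetic "cutting condition" data.
X0 = rng.normal(-1.0, 1.0, (300, 2))
X1 = rng.normal(+1.0, 1.0, (300, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 300 + [1] * 300)
idx = rng.permutation(600)
Xtr, ytr = X[idx[:400]], y[idx[:400]]
Xte, yte = X[idx[400:]], y[idx[400:]]

def fit_centroids(X, y):
    return X[y == 0].mean(0), X[y == 1].mean(0)

def predict(c0, c1, X):
    return (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

# Proxy "error surface": margin of each training point under a preliminary model.
c0, c1 = fit_centroids(Xtr, ytr)
margin = np.abs(np.linalg.norm(Xtr - c0, axis=1) - np.linalg.norm(Xtr - c1, axis=1))

# Three regions with different sampling factors: keep hard boundary points,
# thin out the easy, high-margin points.
lo, hi = np.quantile(margin, [0.33, 0.66])
keep = np.zeros(len(Xtr), bool)
for region, factor in [(margin <= lo, 1.0),
                       ((margin > lo) & (margin <= hi), 0.5),
                       (margin > hi, 0.2)]:
    ids = np.flatnonzero(region)
    keep[rng.choice(ids, int(factor * len(ids)), replace=False)] = True

c0p, c1p = fit_centroids(Xtr[keep], ytr[keep])
acc_full = (predict(c0, c1, Xte) == yte).mean()
acc_pruned = (predict(c0p, c1p, Xte) == yte).mean()
print(keep.sum(), round(acc_full, 3), round(acc_pruned, 3))
```

The pruned set is roughly half the original size yet, because the boundary region is fully retained, the model trained on it loses little accuracy, which is the tradeoff the abstract describes.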

3.
Reduction in the size and complexity of neural networks is essential to improve generalization, reduce training error and improve network speed. Most of the known optimization methods heavily rely on weight-sharing concepts for pattern separation and recognition. In weight-sharing methods the redundant weights from specific areas of input layer are pruned and the value of weights and their information content play a very minimal role in the pruning process. The method presented here focuses on network topology and information content for optimization. We have studied the change in the network topology and its effects on information content dynamically during the optimization of the network. The primary optimization uses scaled conjugate gradient and the secondary method of optimization is a Boltzmann method. The conjugate gradient optimization serves as a connection creation operator and the Boltzmann method serves as a competitive connection annihilation operator. By combining these two methods, it is possible to generate small networks which have similar testing and training accuracy, i.e. good generalization, from small training sets. In this paper, we have also focused on network topology. Topological separation is achieved by changing the number of connections in the network. This method should be used when the size of the network is large enough to tackle real-life problems such as fingerprint classification. Our findings indicate that for large networks, topological separation yields a smaller network size, which is more suitable for VLSI implementation. Topological separation is based on the error surface and information content of the network. As such it is an economical way of reducing size, leading to overall optimization. The differential pruning of the connections is based on the weight content rather than the number of connections. 
The training error may vary with the topological dynamics, but the correlation between the error surface and the recognition rate decreases to a minimum. Topological separation reduces the size of the network by changing its architecture without degrading its performance.

4.
In this article, three different methods for hybridization and specialization of real-time recurrent learning (RTRL)-based neural networks (NNs) are presented. The first approach consists of combining recurrent networks with feedforward networks. The second approach continues with the combination of multiple recurrent NNs. The last approach introduces the combination of connectionist systems with instructionist artificial intelligence techniques. Two examples are added to demonstrate properties and advantages of these techniques. The first example is a process diagnosis task where a hybrid NN is connected to a knowledge-based system. The second example is a NN consisting of different recurrent modules that is used to handle missing sensor data in a process modelling task.

5.
A neural network (NN) ensemble is a very successful technique where the outputs of a set of separately trained NNs are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called ADDEMUP that uses genetic algorithms to search explicitly for a highly diverse set of accurate trained networks. ADDEMUP works by first creating an initial population, then uses genetic operators to create new networks continually, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that ADDEMUP is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show ADDEMUP is able to incorporate prior knowledge effectively, if available, to improve the quality of its ensemble.
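A minimal sketch of the ADDEMUP idea, under heavy simplifying assumptions: the "networks" here are random linear classifiers on a toy task, and the fitness is accuracy plus a disagreement term, mirroring the accuracy-plus-diversity objective the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary task; each "network" is just a linear classifier's weight vector.
X = rng.normal(0, 1, (300, 5))
w_true = rng.normal(0, 1, 5)
y = (X @ w_true > 0).astype(int)

def preds(w):
    return (X @ w > 0).astype(int)

def accuracy(w):
    return (preds(w) == y).mean()

def fitness(w, population, lam=0.3):
    # ADDEMUP-style score: own accuracy plus disagreement with the rest.
    others = [p for p in population if p is not w]
    div = np.mean([(preds(w) != preds(o)).mean() for o in others])
    return accuracy(w) + lam * div

pop = [rng.normal(0, 1, 5) for _ in range(10)]
for gen in range(20):
    scores = [fitness(w, pop) for w in pop]
    order = np.argsort(scores)[::-1]
    parents = [pop[i] for i in order[:5]]               # keep the fittest half
    children = [p + rng.normal(0, 0.3, 5) for p in parents]  # mutation
    pop = parents + children

# Majority-vote ensemble of the final population.
vote = (np.mean([preds(w) for w in pop], axis=0) > 0.5).astype(int)
print(round((vote == y).mean(), 3))
```

The `lam` weight trades accuracy against diversity; in the paper this balance is what yields members that are individually strong yet err on different inputs.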

6.
BRUCE E ROSEN, Connection Science, 1996, 8(3-4): 373-384
We describe a decorrelation network training method for improving the quality of regression learning in 'ensemble' neural networks (NNs) that are composed of linear combinations of individual NNs. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors linearly decorrelated with the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the 'three-parity' logic function, a noisy sine function and a one-dimensional non-linear function, and compare the results with ensemble networks composed of independently trained individual networks without decorrelation training. Empirical results show that when individual networks are forced to be decorrelated with one another, the resulting ensemble NNs have lower mean squared errors than ensemble networks of independently trained individual networks. This method is particularly applicable when there is insufficient data to train each individual network on disjoint subsets of training patterns.
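The decorrelation penalty can be sketched on the noisy sine task mentioned above. This is an assumed simplification, not Rosen's implementation: the two networks have fixed random tanh hidden layers and only their output weights are trained, with each net's loss augmented by a term penalising positively correlated errors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy sine task, as in the paper's experiments.
X = rng.uniform(-np.pi, np.pi, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)

# Two networks with fixed random hidden layers (a simplification:
# only the output weights are trained, not full backpropagation).
H1 = np.tanh(X @ rng.normal(0, 1, (1, 10)) + rng.normal(0, 1, 10))
H2 = np.tanh(X @ rng.normal(0, 1, (1, 10)) + rng.normal(0, 1, 10))
w1, w2 = np.zeros(10), np.zeros(10)

lam, lr = 0.5, 0.05                      # decorrelation strength, learning rate
for _ in range(800):
    e1, e2 = H1 @ w1 - y, H2 @ w2 - y
    # Each net minimises its own MSE plus lam * mean(e1 * e2),
    # so gradients couple the two error signals.
    w1 -= lr * H1.T @ (2 * e1 + lam * e2) / len(y)
    w2 -= lr * H2.T @ (2 * e2 + lam * e1) / len(y)

ens = 0.5 * (H1 @ w1 + H2 @ w2)          # linear combination of the members
print(round(np.mean((ens - y) ** 2), 4))
```

When the members' errors are anti-correlated, averaging cancels part of each error, which is why the decorrelated ensemble attains a lower mean squared error than independently trained members.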

7.
Building a surface-quality prediction model for difficult-to-machine materials in automated production is a foundation for sustainable manufacturing. A machined-surface roughness prediction model combining a quantum genetic algorithm with support vector regression (QGA-SVR) is proposed, to address the tendency of existing search methods to fall into local optima when tuning the SVR model parameters. Crossover and mutation operations are added to the quantum-gate update step to preserve the model's global search capability, and K-fold cross-validation is incorporated into the parameter optimisation to improve the generalisation ability of the SVR. Using dry turning experiments on 304 stainless steel together with existing milling data, SVR models based on the quantum genetic algorithm and on a conventional genetic algorithm are compared. The results show that QGA-SVR converges faster and predicts more accurately; the established QGA-SVR model is then used to analyse how the cutting parameters affect turned surface roughness.

8.
Selection of optimal cutting conditions by using GONNS (total citations: 1; self-citations: 2; cited by others: 1)
Machining conditions are optimized to minimize the production cost in conventional manufacturing. In specialized manufacturing applications, such as micro machining and mold making, achievement of specific goals may be the primary objective. The Genetically Optimized Neural Network System (GONNS) is proposed for the selection of optimal cutting conditions from experimental data when analytical or empirical mathematical models are not available. GONNS uses backpropagation (BP) type neural networks (NNs) to represent the input and output relations of the considered system, and a genetic algorithm (GA) obtains the optimal operating condition by using the NNs. In this study, multiple NNs represented the relationship between the cutting conditions and machining-related variables. Performance of the GONNS was tested in two case studies. Optimal operating conditions were found in the first case study to keep the cutting forces in the desired range while a merit criterion (metal removal rate) was maximized in micro-end-milling. Optimal operating conditions were calculated in the second case study to obtain the best possible compromise between the roughness of machined mold surfaces and the duration of the finishing cut. To train the NNs, 81 mold parts were machined at different cutting conditions and inspected.
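A compact sketch of the GONNS loop, with assumed stand-ins throughout: the "trained NNs" are random-feature networks fitted by least squares to a toy cutting-process model, and a small GA searches the cutting conditions for maximum material removal rate under a force constraint.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "process": material removal rate and cutting force as functions of
# speed s and feed f (both scaled to [0, 1]); stand-ins for measured data.
def process(s, f):
    mrr = s * f                        # merit to maximize
    force = 2.0 * f + 0.5 * s          # constraint: force <= 1.5
    return mrr, force

# Surrogate NNs: random-feature networks fitted by least squares
# (a simplification of the paper's backpropagation networks).
Xs = rng.uniform(0, 1, (200, 2))
mrr_t, force_t = process(Xs[:, 0], Xs[:, 1])
W, b = rng.normal(0, 2, (2, 30)), rng.normal(0, 1, 30)
H = np.tanh(Xs @ W + b)
beta_mrr = np.linalg.lstsq(H, mrr_t, rcond=None)[0]
beta_force = np.linalg.lstsq(H, force_t, rcond=None)[0]

def nn(x, beta):
    return np.tanh(x @ W + b) @ beta

def fitness(x):                        # GA fitness: merit minus constraint penalty
    return nn(x, beta_mrr) - 10.0 * max(0.0, nn(x, beta_force) - 1.5)

# GA over the cutting conditions, evaluated only through the surrogate NNs.
pop = rng.uniform(0, 1, (20, 2))
for gen in range(40):
    order = np.argsort([fitness(x) for x in pop])[::-1]
    parents = pop[order[:10]]
    children = np.clip(parents + rng.normal(0, 0.05, (10, 2)), 0, 1)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(np.round(best, 2), round(fitness(best), 3))
```

The key design point, as in GONNS, is that the GA never calls the real process during the search, only the cheap NN surrogates.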

9.
It is well known that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple neural classifiers. This paper analyses the performance of some combination schemes applied to a multi-hybrid neural system which is composed of neural and fuzzy neural networks. Essentially, the combination methods employ different ways to extract valuable information from the output of the experts through the use of confidence (weights) measures of the ensemble members to each class. An empirical evaluation in a handwritten numeral recognition task is used to investigate the performance of the presented methods in comparison with some existing combination methods.

10.
In this paper, we propose an architecture with two different kinds of neural networks for on-line determination of optimal cutting conditions. A back-propagation network with three inputs and four outputs is used to model the cutting process. A second network, which parallelizes the augmented Lagrange multiplier algorithm, determines the corresponding optimal cutting parameters by maximizing the material removal rate according to appropriate operating constraints. Due to its parallelism, this architecture can greatly reduce processing time and make real-time control possible. Numerical simulations and a series of experiments are conducted on end milling to confirm the feasibility of this architecture.
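The second network in this architecture implements the augmented Lagrange multiplier method; the underlying optimisation can be sketched directly (without the NN parallelisation) on an assumed toy milling model:

```python
import numpy as np

# Toy milling model: maximize material removal rate m = v * f subject to a
# force limit 2 f + 0.5 v <= 1.5, with speed v and feed f scaled to [0, 1].
def m(x):
    return x[0] * x[1]

def g(x):
    return 2.0 * x[1] + 0.5 * x[0] - 1.5    # constraint g(x) <= 0

lam, rho = 0.0, 5.0                          # multiplier and penalty weight
x = np.array([0.5, 0.5])
for outer in range(20):
    # Inner loop: projected gradient descent on the augmented Lagrangian
    # L(x) = -m(x) + lam * g(x) + (rho / 2) * max(0, g(x))**2
    for _ in range(100):
        viol = max(0.0, g(x))
        grad_m = np.array([x[1], x[0]])
        grad_g = np.array([0.5, 2.0])
        grad = -grad_m + (lam + rho * viol) * grad_g
        x = np.clip(x - 0.02 * grad, 0.0, 1.0)
    lam = max(0.0, lam + rho * g(x))         # multiplier update

print(np.round(x, 2), round(m(x), 3))
```

The multiplier update drives the iterate onto the active force constraint, where the removal rate is maximal; the paper's contribution is mapping these iterations onto a parallel network for real-time use.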

11.
柳健  洪波  李湘文  刘龙 《焊接学报》2016,37(3):53-56,105
To address the nonlinear, non-stationary character of the seam-tracking signal in magnetically controlled arc welding, a feature-signal extraction method combining matching pursuit with non-parametric basis functions (MP_NBFE) is proposed. In each matching-pursuit iteration, a template signal is first adaptively adjusted to approximate one feature component of the original tracking signal; a non-parametric basis-function waveform extraction step then computes the optimal estimate of the signal component best matching that template. Following the matching-pursuit principle, the optimal estimate is used to update the signal residual, and the best estimates of further feature components are computed from the new residual. This process repeats until the energy of the residual falls below a preset threshold. Feature-extraction experiments on a purpose-built magnetically controlled arc sensor seam-tracking platform show that the extracted V-groove scanning signal follows the same trend as the simulated V-groove scanning signal and accurately reflects the seam deviation.
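The greedy residual-update loop at the heart of matching pursuit can be sketched as follows. This is a generic matching-pursuit illustration on assumed synthetic data, not the MP_NBFE method itself: the adaptive template step is simplified to a fixed bank of shifted and scaled triangular atoms.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 128
t = np.arange(n)

def atom(center, width):
    a = np.maximum(0, 1 - np.abs(t - center) / width)   # triangular template
    return a / np.linalg.norm(a)                        # unit norm

# Synthetic "tracking signal": two feature components plus noise.
signal = 3.0 * atom(40, 10) + 2.0 * atom(90, 6) + rng.normal(0, 0.05, n)

# Dictionary of candidate templates at various positions and scales.
dictionary = [atom(c, w) for c in range(5, 125, 5) for w in (4, 6, 10, 14)]

residual = signal.copy()
components = []
thresh = 0.2 * np.linalg.norm(signal)   # stop when residual energy is small
for _ in range(20):                     # iteration cap for safety
    if np.linalg.norm(residual) <= thresh:
        break
    # Greedy step: pick the atom most correlated with the residual.
    corrs = [d @ residual for d in dictionary]
    k = int(np.argmax(np.abs(corrs)))
    components.append((k, corrs[k]))
    residual = residual - corrs[k] * dictionary[k]      # update the residual

recon = sum(c * dictionary[k] for k, c in components)
rel_err = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
print(len(components), round(rel_err, 3))
```

Each iteration subtracts the best-matching component from the residual, exactly the residual-update principle the abstract describes; MP_NBFE replaces the fixed atoms with adaptively adjusted templates.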

12.
This paper describes a medical application of modular neural networks (NNs) for temporal pattern recognition. In order to increase the reliability of prognostic indices for patients living with the acquired immunodeficiency syndrome (AIDS), survival prediction was performed in a system composed of modular NNs that classified cases according to death in a certain year of follow-up. The output of each NN module corresponded to the probability of survival in a given year. Inputs were the values of demographic, clinical and laboratory variables. The results of the modules were combined to produce survival curves for individuals. The NNs were trained by backpropagation and the results were evaluated on test sets of previously unseen cases. We showed that, for certain combinations of NN modules, the performance of the prognostic index, measured by the area under the receiver operating characteristic curve, was significantly improved (p < 0.05). We also used calibration measurements to quantify the benefits of combining NN modules, and show why, when and how NNs should be combined for building prognostic models.

13.
冯宝  覃科  蒋志勇 《焊接学报》2018,39(9):31-35
To address the low accuracy of weld penetration state recognition caused by nonlinear factors in the evolution of the arc-welding pool, a penetration-state recognition model based on an extreme learning machine (ELM) with L1/L2-norm constraints is proposed. Weld-pool images are acquired with a high-speed vision system, and features are extracted by principal component analysis. The structurally simple and easily trained ELM algorithm is used to train the recognition model. An L1-norm constraint suppresses outliers in the ELM output weights to improve the generalisation ability of the algorithm, while an L2-norm constraint smooths the output weights so that blob features in the weld-pool images are captured, raising the recognition accuracy for the penetration state. The results show that the L1/L2-ELM recognition model can quickly and reliably distinguish the fully penetrated, under-penetrated and over-penetrated states of the weld pool.
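A rough sketch of an L1/L2-regularised ELM, with assumptions throughout: synthetic features stand in for the PCA-reduced weld-pool images, and the combined L1/L2 (elastic-net) output weights are fitted by proximal-gradient soft-thresholding steps rather than the authors' exact solver.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy 3-class task standing in for the full / under / over-penetration states.
X = rng.normal(0, 1, (300, 8))
w_true = rng.normal(0, 1, (8, 3))
y = np.argmax(X @ w_true, axis=1)
Y = np.eye(3)[y]                                  # one-hot targets

# ELM: a fixed random hidden layer; only the output weights are learned.
W, b = rng.normal(0, 1, (8, 40)), rng.normal(0, 1, 40)
H = np.tanh(X @ W + b)

# L1 + L2 fit of the output weights by proximal gradient steps:
# the L2 term smooths the weights, the L1 step shrinks outliers toward zero.
l1, l2, lr = 0.01, 0.1, 0.02
beta = np.zeros((40, 3))
for _ in range(800):
    grad = H.T @ (H @ beta - Y) / len(X) + l2 * beta
    beta -= lr * grad
    beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * l1, 0)  # soft-threshold

acc = (np.argmax(H @ beta, axis=1) == y).mean()
print(round(acc, 3))
```

Because the hidden layer is random and fixed, training reduces to a convex problem in `beta`, which is what makes ELM-style models fast enough for on-line penetration monitoring.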

14.
It has been shown that the ability of echo state networks (ESNs) to generalise in a sentence-processing task can be increased by adjusting their input connection weights to the training data. We present a qualitative analysis of the effect of such weight adjustment on an ESN that is trained to perform the next-word prediction task. Our analysis makes use of CrySSMEx, an algorithm for extracting finite state machines (FSMs) from the data about the inputs, internal states, and outputs of recurrent neural networks that process symbol sequences. We find that the ESN with adjusted input weights yields a concise and comprehensible FSM. In contrast, the standard ESN, which shows poor generalisation, results in a massive and complex FSM. The extracted FSMs show how the two networks differ behaviourally. Moreover, poor generalisation is shown to correspond to a highly fragmented quantisation of the network's state space. Such findings indicate that CrySSMEx can be a useful tool for analysing ESN sentence processing.

15.
SHU Fu-hua, China Foundry, 2007, 4(3): 202-205
This paper presents a process-parameter optimization method for ZA27 squeeze casting that combines an artificial neural network (ANN) with a particle swarm optimizer (PSO). Taking test data as samples, a neural network is used to build a nonlinear mapping model between the ZA27 squeeze-casting process parameters and the mechanical properties; PSO then optimizes over this model to obtain the optimal values of the process parameters, making full use of the network's nonlinear mapping capability and PSO's global optimization capability. The network is a radial basis function neural network, trained with a combination of clustering and gradient methods to enhance its generalization ability. The PSO uses a dynamically changing inertia weight to accelerate convergence and avoid local minima.
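The PSO component with a dynamically changing inertia weight can be sketched as follows; the fitted RBF model is replaced here by an assumed smooth surrogate function, so only the optimizer itself is illustrated.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the fitted RBF process model: a smooth surrogate whose
# minimum plays the role of the optimal squeeze-casting parameters.
def surrogate(x):
    return np.sum((x - np.array([0.3, 0.7])) ** 2) + 0.1 * np.sin(5 * x[0])

n, dim = 15, 2
pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_val = np.array([surrogate(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for it in range(60):
    w = 0.9 - 0.5 * it / 60            # inertia weight decreasing 0.9 -> ~0.4
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)     # keep parameters in their scaled range
    vals = np.array([surrogate(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(np.round(gbest, 2), round(surrogate(gbest), 4))
```

The large early inertia favours global exploration and the small late inertia favours local refinement, which is the convergence-acceleration mechanism the abstract attributes to the dynamic inertia weight.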

16.
Human cognition is said to be systematic: cognitive ability generalizes to structurally related behaviours. The connectionist approach to cognitive theorizing has been strongly criticized for its failure to explain systematicity. Demonstrations of generalization notwithstanding, I show that two widely used networks (feedforward and recurrent) do not support systematicity under the condition of local input/output representations. For a connectionist explanation of systematicity, these results leave two choices: either (1) develop models capable of systematicity under local input/output representations or (2) justify the choice of similarity-based (non-local) component representations sufficient for systematicity.

17.
Based on experiments, a neural network combined with a genetic algorithm was used to optimise the electric-pulse inoculation treatment parameters for an Al-5%Cu alloy. The inputs of the neural network were the pulse voltage, pulse duration and melt temperature during treatment; the output was the grain size of the alloy's solidification structure. After the neural network was trained, a genetic algorithm was used to optimise its input parameters. The results show that the combined neural-network and genetic-algorithm model achieved good optimisation results.

18.
肖磊  郭立渌  汪晓洁  邱杰 《机床与液压》2020,48(12):198-203
To improve the accuracy of a predictive control model, an RBF neural network is used for network-traffic prediction, and the shuffled frog leaping algorithm, a swarm-intelligence method, is used to optimise the model parameters. First, the shuffled frog leaping algorithm is introduced into the modelling process. The weights and thresholds of the RBF network are treated as individual frogs, and randomly generated combinations of weights and thresholds form the frog population. The population is divided into groups, and repeated regrouping and within-group iteration yield the globally best individual, giving the optimal weights and thresholds and thus the optimal predictive control model. Experiments show that the predictive control model based on the swarm-intelligence-optimised RBF network is more accurate.

19.
The study of numerical abilities, and how they are acquired, is being used to explore the continuity between ontogenesis and environmental learning. One technique that proves useful in this exploration is the artificial simulation of numerical abilities with neural networks, using different learning paradigms to explore development. A neural network simulation of subitization, sometimes referred to as visual enumeration, and of counting, a recurrent operation, has been developed using the so-called multi-net architecture. Our numerical ability simulations use two or more neural networks combining supervised and unsupervised learning techniques to model subitization and counting. Subitization has been simulated using networks employing unsupervised self-organizing learning, the results of which agree with infant subitization experiments and are comparable with supervised neural network simulations of subitization reported in the literature. Counting has been simulated using a multi-net system of supervised static and recurrent backpropagation networks that learn their individual tasks within an unsupervised, competitive framework. The developmental profile of the counting simulation shows similarities to that of children learning to count and demonstrates how neural networks can learn how to be combined together in a process modelling development.

20.
Connectionist models of sentence processing must learn to behave systematically by generalizing from a small training set. To what extent recurrent neural networks manage this generalization task is investigated. In contrast to Van der Velde et al. (Connection Sci., 16, pp. 21–46, 2004), it is found that simple recurrent networks do show so-called weak combinatorial systematicity, although their performance remains limited. It is argued that these limitations arise from overfitting in large networks. Generalization can be improved by increasing the size of the recurrent layer without training its connections, thereby combining a large short-term memory with a small long-term memory capacity. Performance can be improved further by increasing the number of word types in the training set.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号