Similar documents
Found 18 similar documents (search time: 140 ms)
1.
To detect the capacity of launcher-vehicle onboard batteries accurately and online under real operating conditions, a soft-sensing method based on the Kalman algorithm is proposed. First, a mathematical model of the battery is built, measurable and easily measured auxiliary variables are selected, and a soft-sensing model is established with the Kalman algorithm. The model is then solved by parameter identification using test data. Finally, the model is simulated in Matlab and verified in field tests. The results show that Kalman-based soft sensing reduces the accumulated current error during battery operation and improves the accuracy of capacity detection.
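The abstract does not give the filter equations; as a minimal, hypothetical sketch of the idea, the scalar Kalman filter below fuses noisy capacity readings into a low-variance estimate (the 60 Ah capacity, noise levels, and tuning constants are all invented for illustration):

```python
import random

def kalman_filter(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: estimate a slowly varying quantity (e.g. battery
    capacity) from noisy readings, suppressing accumulated measurement error."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state ~constant, process noise q
        k = p / (p + r)           # Kalman gain (measurement noise r)
        x = x + k * (z - x)       # update estimate with the innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_capacity = 60.0              # Ah, hypothetical
noisy = [true_capacity + random.gauss(0, 0.7) for _ in range(200)]
est = kalman_filter(noisy, x0=noisy[0])
```

The filtered estimate settles near the true value even though individual readings scatter by roughly ±0.7 Ah.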

2.
Because chemical production reactions are complex and a mechanistic model is hard to build directly, a soft-sensing model based on ISOMAP-ELM is proposed. Isometric mapping (ISOMAP) is combined with an extreme learning machine (ELM): ISOMAP reduces the dimensionality of the input data, removes collinearity, and extracts more representative feature components in the low-dimensional space; an ELM is then trained on these components to build the soft-sensing model. Validation shows that, compared with a conventional ELM model and an MDS-ELM model based on Euclidean-distance dimensionality reduction, the proposed algorithm achieves higher prediction accuracy, with a mean squared error of only 0.28 and a hit rate of 94%, offering practical guidance for chemical production.
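The ISOMAP stage is too long to sketch here; the fragment below shows only the ELM stage under simple assumptions: a random tanh hidden layer whose output weights are obtained in closed form from ridge-regularized least squares (all sizes and data are illustrative, not the paper's):

```python
import math, random

def solve(A, v):
    """Gaussian elimination with partial pivoting for A x = v."""
    n = len(A)
    M = [A[i][:] + [v[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, n_hidden=20, reg=1e-6, seed=1):
    """ELM: random input weights stay fixed; only output weights are solved."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(W[h][j] * x[j] for j in range(d)) + b[h])
          for h in range(n_hidden)] for x in X]
    # Ridge normal equations: (H^T H + reg*I) beta = H^T y
    A = [[sum(H[k][i] * H[k][j] for k in range(len(X))) + (reg if i == j else 0.0)
          for j in range(n_hidden)] for i in range(n_hidden)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(X))) for i in range(n_hidden)]
    return W, b, solve(A, rhs)

def elm_predict(model, X):
    W, b, beta = model
    return [sum(beta[h] * math.tanh(sum(W[h][j] * x[j] for j in range(len(x))) + b[h])
                for h in range(len(beta))) for x in X]

X = [[i * 0.05] for i in range(61)]          # toy process input
y = [math.sin(x[0]) for x in X]              # toy process output
model = elm_train(X, y)
pred = elm_predict(model, X)
mse = sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
```

Because training is a single linear solve rather than iterative backpropagation, ELM fitting is fast, which is the property the paper exploits.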

3.
The radial basis function neural network (RBFNN) has optimal- and global-approximation properties and outperforms the traditional BP network in function fitting. Soft-sensing technology, widely used in the chemical industry, is applied here to torque measurement in motor systems; the feasibility of the approach is demonstrated, and an RBF neural network is used to build a torque soft-sensing model. A BP-network soft-sensing model is also built, trained with an improved Levenberg–Marquardt algorithm, and the two networks are compared. The method requires only current information, so identification is simple. The study shows that the RBF network identifies torque better than the BP network.
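As a sketch of why RBF networks fit functions well: with Gaussian units centered on the training points, the output weights come from a single linear solve and the network interpolates the data exactly. The "torque-vs-current" samples below are purely hypothetical:

```python
import math

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_train(xs, ys, sigma=0.5):
    """Gaussian RBF network with centers at the training points (interpolation)."""
    phi = lambda x, c: math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
    A = [[phi(x, c) for c in xs] for x in xs]
    return xs, gauss_solve(A, list(ys)), sigma

def rbf_predict(model, x):
    centers, w, sigma = model
    return sum(wi * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for wi, c in zip(w, centers))

# Hypothetical torque-vs-current samples (the shape is illustrative only).
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
model = rbf_train(xs, ys)
err_train = max(abs(rbf_predict(model, x) - y) for x, y in zip(xs, ys))
err_mid = abs(rbf_predict(model, 1.25) - math.sin(1.25))
```

The training error is essentially zero by construction; the error between centers stays small because the Gaussian basis is smooth, which is the "optimal approximation" behavior the abstract refers to.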

4.
Based on the Boosting idea, an improved AdaBoost algorithm is proposed, and on that basis a new multi-neural-network construction method, BBMNN, is developed. Applied to soft-sensor modeling, it yields a new scheme for soft sensing of nonlinear systems. Simulations on a typical multivariable nonlinear model and on a complex industrial process, using experimental and actual operating data, show that the scheme resolves the trade-off between the number of sample points and model accuracy in neural-network modeling of complex plants, achieving high training and prediction accuracy simultaneously.
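BBMNN's improved AdaBoost is not specified in the abstract; to illustrate the underlying Boosting idea (an ensemble of weak models that compensates each member's errors), the sketch below uses residual-fitting boosting with one-split regression stumps as weak learners. This is a deliberate simplification; the paper combines neural networks:

```python
def fit_stump(xs, ys):
    """Weak learner: one-split regression stump minimizing squared error."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for j in range(1, len(xs)):
        t = (xs[order[j - 1]] + xs[order[j]]) / 2
        left = [ys[i] for i in range(len(xs)) if xs[i] <= t]
        right = [ys[i] for i in range(len(xs)) if xs[i] > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ys[i] - (ml if xs[i] <= t else mr)) ** 2
                  for i in range(len(xs)))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    return best[1], best[2], best[3]

def stump_predict(stump, x):
    t, ml, mr = stump
    return ml if x <= t else mr

def boost_fit(xs, ys, rounds=30, lr=0.5):
    """Boosting-style ensemble: each stump fits the current residuals."""
    models, resid = [], ys[:]
    for _ in range(rounds):
        s = fit_stump(xs, resid)
        models.append(s)
        resid = [r - lr * stump_predict(s, x) for x, r in zip(xs, resid)]
    return models, lr

def boost_predict(ensemble, x):
    models, lr = ensemble
    return sum(lr * stump_predict(s, x) for s in models)

xs = [i / 10 for i in range(21)]           # 0.0 .. 2.0
ys = [x * x for x in xs]                   # nonlinear target
ens = boost_fit(xs, ys)
mse = sum((boost_predict(ens, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
mse0 = sum(y ** 2 for y in ys) / len(xs)   # error of the zero model
```

Each weak model alone is crude, but the weighted sum drives the residual toward zero, which is how an ensemble can reach high accuracy from limited samples.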

5.
丁知平  刘超  牛培峰 《计量学报》2018,39(3):414-419
A soft-sensing model (IGSA-LSSVM) that uses an improved gravitational search algorithm (IGSA) to optimize a least-squares support vector machine (LSSVM) is proposed for accurate measurement of NOx emissions from pulverized-coal boilers. First, to address the gravitational search algorithm's tendency to fall into local minima and its weak global optimization, an improved version is proposed: the population is initialized with a grid method, and agent positions are updated using an inertia weight that decreases adaptively with fitness, improving global search performance. IGSA is then used to select the LSSVM hyperparameters, improving the model's prediction accuracy and generalization. Finally, an IGSA-LSSVM NOx soft-sensing model is built for a 330 MW coal-fired boiler; simulation results show that the model has higher prediction accuracy and better generalization and can effectively measure NOx emissions.

6.
Engineering application of soft sensing (cited by 2)
Applying soft sensing in industrial production requires solving the engineering design problem. Engineering design of a soft sensor includes mechanism analysis, auxiliary-variable selection, soft-sensing model building, data acquisition and preprocessing, and model correction; the key technologies are the data processing, model building, and model correction steps. The engineering application of soft sensing of CO2 concentration in the carbonation tower of Hangzhou Longshan Chemical Plant is used as an example to describe how several key steps are handled.
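Of the listed engineering steps, data preprocessing is the easiest to sketch. A common minimal recipe (assumed here, not taken from the paper) is 3σ outlier rejection followed by min-max scaling; the concentration readings are invented:

```python
def remove_outliers_3sigma(values):
    """Drop points more than 3 standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) <= 3 * std]

def minmax_scale(values):
    """Scale values to [0, 1] for model training."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

# Hypothetical concentration readings; 500.0 is a sensor spike.
raw = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1,
       19.9, 20.0, 20.2, 19.8, 500.0]
clean = remove_outliers_3sigma(raw)
scaled = minmax_scale(clean)
```

One caveat worth knowing: with n points and a single extreme outlier, the largest possible z-score is (n−1)/√n, which only exceeds 3 for n ≥ 11, so the 3σ rule needs enough samples to flag anything at all.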

7.
This paper combines two adaptive algorithms with conventional AOSVR (accurate online support vector regression) to produce an improved AOSVR that can update the SVR model online while also adjusting the penalty parameter, the insensitive-loss parameter, and the kernel parameter in real time, improving the algorithm's adaptivity and prediction accuracy. Experiments show that AOSVR-based soft sensing of the dynamic liquid level can be applied in oilfield liquid-level measurement.
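Exact incremental AOSVR maintains KKT conditions sample by sample and is too involved for a short sketch; as a stand-in for the online-update idea, here is recursive least squares with a forgetting factor, which likewise revises its model after every new sample (data and dimensions are hypothetical):

```python
class RecursiveLeastSquares:
    """Online linear model y ~ w.x updated one sample at a time; the
    forgetting factor makes recent samples weigh more, mirroring how an
    online soft sensor tracks a drifting process."""
    def __init__(self, dim, lam=0.99, delta=100.0):
        self.w = [0.0] * dim
        # Inverse covariance P, initialized large (weak prior).
        self.P = [[delta if i == j else 0.0 for j in range(dim)]
                  for i in range(dim)]
        self.lam = lam

    def update(self, x, y):
        d = len(x)
        Px = [sum(self.P[i][j] * x[j] for j in range(d)) for i in range(d)]
        denom = self.lam + sum(x[i] * Px[i] for i in range(d))
        k = [pi / denom for pi in Px]                 # gain vector
        err = y - sum(wi * xi for wi, xi in zip(self.w, x))
        self.w = [wi + ki * err for wi, ki in zip(self.w, k)]
        self.P = [[(self.P[i][j] - k[i] * Px[j]) / self.lam
                   for j in range(d)] for i in range(d)]
        return err

model = RecursiveLeastSquares(dim=2)
# Stream of samples from y = 3*x + 1 (bias folded in as a constant feature).
for i in range(200):
    x = [i % 10 * 0.1, 1.0]
    model.update(x, 3 * x[0] + 1)
```

AOSVR additionally moves samples between support/error/remainder sets on each update; that bookkeeping is what the improved variant extends with online hyperparameter adjustment.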

8.
《中国测试》2016,(11):89-93
To measure the decomposition ratio at a cement kiln tail online and in real time, a support vector regression algorithm with improved particle-swarm parameter optimization (IPSO-SVR) is proposed, exploiting the strengths of soft sensing for industrial online measurement. Particle swarm optimization selects the core parameters of the SVR model, and an adaptive inertia weight is introduced into the PSO to overcome its tendency toward premature convergence and trapping in local extrema, yielding an IPSO-SVR soft-sensing model of the kiln-tail decomposition ratio. Simulated comparisons with soft-sensing models whose SVR parameters are tuned by cross-validation (CV) and by unimproved PSO show that the IPSO-SVR model predicts best: the correlation coefficient of the predicted decomposition ratio reaches 0.8575, the maximum relative error is no more than 1.14%, and the mean relative error is 0.75%. The model can therefore be applied to product-decomposition-ratio prediction in large-scale industries such as cement production.
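The abstract's "adaptive inertia weight" is realized below in its simplest common form, a linearly decreasing weight; the swarm tunes a toy 2-D objective rather than actual SVR hyperparameters (population size and coefficients are conventional defaults, not the paper's):

```python
import random

def ipso_minimize(f, bounds, n=20, iters=100, w_max=0.9, w_min=0.4,
                  c1=2.0, c2=2.0, seed=7):
    """PSO with linearly decreasing inertia weight: large w early favors
    exploration, small w late favors refinement."""
    rng = random.Random(seed)
    d = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * d for _ in range(n)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters
        for i in range(n):
            for k in range(d):
                V[i][k] = (w * V[i][k]
                           + c1 * rng.random() * (pbest[i][k] - X[i][k])
                           + c2 * rng.random() * (gbest[k] - X[i][k]))
                X[i][k] = min(max(X[i][k] + V[i][k], bounds[k][0]),
                              bounds[k][1])
            fx = f(X[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, X[i][:]
                if fx < gval:
                    gval, gbest = fx, X[i][:]
    return gbest, gval

gbest, gval = ipso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                            [(-5, 5), (-5, 5)])
```

In the paper's setting, `f` would be the SVR's validation error and the two coordinates its penalty and kernel-width parameters.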

9.
唐晓  王佳 《高技术通讯》2004,14(10):90-93
Soft sensing is proposed for in-situ measurement of the corrosion rate of carbon steel in seawater, in order to evaluate the corrosivity of regional seawater. Starting from a soft-sensing model of corrosion rate in still seawater at oxygen equilibrium, corrections for oxygen saturation and flow velocity are applied, giving a functional expression relating the corrosion rate of A3 steel to marine environmental parameters; this expression is the soft-sensing model. Validation shows that, under the same seawater conditions, values computed by the model agree well with experimental measurements.

10.
The water flow through the outlet gates (colloquially "doukou") from Yellow River diversion main canals into branch canals is the legal basis for settling water fees between the water authority and water users. This paper describes automatic measurement of doukou water flow in Yellow River irrigation canals based on an artificial-neural-network soft-sensing model. Test results show that the network's output agrees well with measurements from a standard triangular weir, so the accuracy of doukou flow measurement can be greatly improved.

11.
Based on an analysis of the relationship between the power and load torque of a high-speed motorized spindle, a soft-sensing method for load torque is proposed. Using stator voltage, stator current, no-load current, and spindle speed as auxiliary variables, a BP-neural-network soft-sensing model of load torque is built. The model is studied in simulation, using torque detection on an aero-engine clutch bearing test rig as an example. The simulation shows that the method meets the required accuracy and offers a way around the expense and installation difficulty of torque sensors in high-speed motorized spindle drive systems.

12.
王华 《计量学报》2006,27(4):309-312
In dimensional quality control of body-in-white manufacturing, inspection mainly uses checking fixtures, coordinate measuring machines (CMM), and in-line optical coordinate measuring machines (OCMM). Comparing the three: fixtures and CMM yield data with "many points, few samples" (many measurement points, one or two readings per point per day), whereas OCMM yields "few points, many samples" (relatively few points, about 200 readings per point per day). Soft sensing lets the CMM and OCMM data complement each other, enabling accurate evaluation of body manufacturing quality.
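The abstract does not state how the two data sources are combined; one textbook way to fuse a precise-but-sparse estimate with a frequent-but-noisier one is inverse-variance weighting, sketched here with invented readings and sensor noise levels:

```python
def inverse_variance_fuse(mean_a, var_a, mean_b, var_b):
    """Combine two independent estimates of the same dimension, weighting
    each by the inverse of its variance; the fused variance shrinks."""
    wa, wb = 1.0 / var_a, 1.0 / var_b
    return (wa * mean_a + wb * mean_b) / (wa + wb), 1.0 / (wa + wb)

def mean_and_var_of_mean(readings, sensor_var):
    """Sample mean and its variance (sensor variance / number of readings)."""
    return sum(readings) / len(readings), sensor_var / len(readings)

# Hypothetical readings of one body point (mm).
cmm = [10.02, 10.00]                                   # few, precise
ocmm = [10.0 + 0.001 * ((i * 37) % 21 - 10)            # many, noisier
        for i in range(200)]                           # (synthetic scatter)

m_cmm, v_cmm = mean_and_var_of_mean(cmm, sensor_var=0.02 ** 2)   # sigma 0.02 mm, assumed
m_ocmm, v_ocmm = mean_and_var_of_mean(ocmm, sensor_var=0.1 ** 2)  # sigma 0.1 mm, assumed
fused, fused_var = inverse_variance_fuse(m_cmm, v_cmm, m_ocmm, v_ocmm)
```

The fused estimate is tighter than either source alone: the OCMM's volume of readings compensates for its higher per-reading noise, which is exactly the complementarity the abstract describes.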

13.
It is shown that threshold functions of many variables can be realized by writing these variables into a multiaperture core and then reading them out synchronously. The multiaperture core is a closed series magnetic circuit of elementary (e.g., toroidal) cores, each of which stores one binary variable. During the read operation, identical EMF pulses are induced in all one-turn figure-eight output windings; their polarity depends on the value of the input variable. The threshold function is realized by summing the EMFs induced in output windings whose numbers of turns are proportional to the weights of the corresponding variables. On one core it is possible to obtain several different threshold functions of the input variables simultaneously. The logical capabilities of the threshold element can be extended by executing some elementary logical functions of two or three variables at the input.
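In code, the described element reduces to a weighted sum of ±1 pulses compared against a threshold: each binary input contributes an EMF pulse whose polarity encodes the value and whose amplitude is proportional to the winding's turn count:

```python
def threshold_element(weights, threshold, inputs):
    """Sum of induced EMF pulses: each binary input contributes a pulse of
    polarity +1 (for 1) or -1 (for 0), scaled by its winding's turns."""
    s = sum(w * (1 if x else -1) for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# 3-input majority: equal turn counts, threshold just above a zero sum.
majority = lambda a, b, c: threshold_element([1, 1, 1], 1, [a, b, c])
```

With weights `[2, 1, 1]` and threshold 0 the same element computes a OR (b AND c), illustrating how one core can realize different logical functions just by changing turn counts and threshold.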

14.
Liu GH  Liu XY  Feng QY 《Applied optics》2011,50(23):4557-4565
This paper presents a method that allows a conventional dual-camera structured light system to directly acquire the three-dimensional shape of the whole surface of an object with a high dynamic range of surface reflectivity. To reduce the degradation in area-based correlation caused by specular highlights and diffused darkness, these highly specular and dark pixels are first disregarded. Then, to recover data in the unmatched areas, the binocular vision system is also used as two camera-projector monocular systems operated simultaneously from different viewing angles, filling in the missing data of the binocular reconstruction. The method produces measurable images by integrating techniques such as multiple exposures and high-dynamic-range imaging to ensure that high-quality phase data are captured for each point. An image-segmentation technique is also introduced to decide which monocular system can accurately reconstruct a given lost point. Experiments demonstrate that these techniques extend the measurable areas on surfaces with a high dynamic range of reflectivity, such as specular objects or high-contrast scenes, to the whole projector-illuminated field.
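Only the multi-exposure HDR step lends itself to a short sketch: for each pixel, average value/exposure-time over the frames in which the pixel is neither saturated nor underexposed. Thresholds and pixel values below are invented:

```python
def fuse_exposures(frames, times, low=10, high=245):
    """Per-pixel radiance: average value/exposure-time over frames in which
    the pixel is neither underexposed (<= low) nor saturated (>= high)."""
    out = []
    for p in range(len(frames[0])):
        ok = [frames[f][p] / times[f] for f in range(len(frames))
              if low < frames[f][p] < high]
        out.append(sum(ok) / len(ok) if ok else None)   # None: never usable
    return out

times = [1.0, 4.0, 16.0]                 # relative exposure times
frames = [                               # [frame][pixel], 8-bit values
    [60, 2],                             # short: specular ok, dark too dark
    [240, 8],                            # medium
    [255, 32],                           # long: specular saturated, dark ok
]
rad = fuse_exposures(frames, times)
```

Each pixel's radiance is recovered from whichever exposures kept it in the measurable range, so both the specular pixel and the dark pixel get a valid estimate.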

15.
This paper describes hazardous maneuvers and their possible use in evaluating the hazard of roadway sites. Established hazardous maneuvers include erratic maneuvers, traffic conflicts, near-misses, and hazardous regions of vehicle pairs.

Hazard is defined as an occurrence function. The possible output consists of a continuous range of manifest-severity events: accidents, hazardous maneuvers, and borderline maneuvers. An interval of that range is described by a subfunction of the occurrence function. The input consists of driver, environment, and vehicle (DEV) variables. A random variable is interrelated with the DEV variables in the occurrence function, forming complex interactions.

In a comprehensive hazard reduction program, the concept of hazardous maneuvers covers only a subset of the total hazard. Remedial techniques would be applied to the DEV variables as suggested by models of occurrence subfunctions and by conventional traffic engineering studies.


16.
This article compares genetic algorithm (GA) and genetic programming (GP) for system modeling in metal forming. As an example, the radial stress distribution in a cold-formed specimen (steel X6Cr13) was predicted by GA and GP. First, cylindrical workpieces were forward extruded and analyzed by the visioplasticity method. After each extrusion, the values of independent variables (radial position of measured stress node, axial position of measured stress node, and coefficient of friction) were collected. These variables influence the value of the dependent variable, radial stress. On the basis of training data, different prediction models for radial stress distribution were developed independently by GA and GP. The obtained models were tested with the testing data. The research has shown that both approaches are suitable for system modeling. However, if the relations between input and output variables are complex, the models developed by the GP approach are much more accurate.

17.
Distribution Envelope Determination (DEnv) is a method for computing the CDFs of random variables whose samples are a function of samples of other random variable(s), termed inputs. DEnv computes envelopes around these CDFs when there is uncertainty about the precise form of the probability distribution describing any input. For example, inputs whose distribution functions have means and variances known only to within intervals can be handled. More generally, inputs can be handled if the set of all plausible cumulative distributions describing each input can be enclosed between left and right envelopes. Results will typically be in the form of envelopes when inputs are envelopes, when the dependency relationship of the inputs is unspecified, or both. For example, in the case of specific input distribution functions with unspecified dependency relationships, each of the infinite number of possible dependency relationships would imply some specific output distribution, and the set of all such output distributions can be bounded with envelopes. The DEnv algorithm is a way to obtain these bounding envelopes. DEnv is implemented in a tool used to solve problems from a benchmark set.
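The core data structure can be sketched directly: evaluate every plausible input CDF on a common grid, then take pointwise maxima and minima to get the left (upper) and right (lower) envelopes between which all plausible CDFs lie. The two candidate CDFs below are hypothetical:

```python
def cdf_envelopes(cdf_family):
    """Pointwise envelopes of a family of candidate CDFs on a common grid:
    the left (upper) envelope is the pointwise max, the right (lower)
    envelope the pointwise min."""
    left = [max(vals) for vals in zip(*cdf_family)]
    right = [min(vals) for vals in zip(*cdf_family)]
    return left, right

# Two plausible CDFs of one input, evaluated on a 5-point grid.
F1 = [0.0, 0.2, 0.5, 0.8, 1.0]
F2 = [0.0, 0.4, 0.6, 0.7, 1.0]
left, right = cdf_envelopes([F1, F2])
```

DEnv itself goes further, propagating such envelopes through arithmetic on the variables under unknown dependency; the sketch shows only the enclosure step.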

18.
The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables or ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that addresses both drawbacks. Further, an efficient yet effective approach to incorporating this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of the approach. The framework can be extended to uncertainty analysis as well.
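A minimal sketch of the two ingredients, assuming a cheap analytic stand-in for the expensive model: a first-order sensitivity index estimated by binning (Var(E[y|x_i]) / Var(y)), and a percentile-bootstrap confidence interval for that index. For y = 2·x1 + x2 with independent uniform inputs, the true S1 is 4/5:

```python
import random

def first_order_index(xs, ys, bins=10):
    """Estimate S_i = Var(E[y|x_i]) / Var(y) by binning x_i."""
    n = len(xs)
    mean_y = sum(ys) / n
    var_y = sum((y - mean_y) ** 2 for y in ys) / n
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / width), bins - 1)
        groups[b].append(y)
    cond = [(sum(g) / len(g), len(g)) for g in groups if g]
    var_cond = sum(w * (m - mean_y) ** 2 for m, w in cond) / n
    return var_cond / var_y

def bootstrap_ci(xs, ys, n_boot=200, seed=5):
    """Percentile bootstrap confidence interval for the sensitivity index,
    quantifying the estimation error most methods ignore."""
    rng = random.Random(seed)
    n = len(xs)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(first_order_index([xs[i] for i in idx],
                                       [ys[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Cheap stand-in for an expensive model: y = 2*x1 + x2  (true S1 = 0.8).
rng = random.Random(5)
x1 = [rng.random() for _ in range(2000)]
x2 = [rng.random() for _ in range(2000)]
y = [2 * a + b for a, b in zip(x1, x2)]
s1 = first_order_index(x1, y)
lo, hi = bootstrap_ci(x1, y)
```

In the paper's setting, the samples would come from a fitted meta-model rather than directly from the expensive simulator, and the bootstrap would account for the surrogate's estimation error.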


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号