Similar Literature
20 similar records found
1.
Some variation is inherent in any measurement process, and measurement system analysis is the area of study that quantifies it. Once an instrument is calibrated, the most adequate technique for evaluating a measurement system's variation is gauge repeatability and reproducibility (GR&R). For multivariate measurement systems, however, discussion has been scarce. Some researchers have applied multivariate analysis of variance to estimate the evaluation indexes, using the geometric mean as an agglutination strategy for the eigenvalues extracted from the variance–covariance matrices; this approach, however, has some weaknesses. This paper thus proposes new multivariate indexes based on four weighted approaches. Statistical analysis of empirical data and data from the literature indicates that the most effective weighting strategy in multivariate GR&R studies is based on the percentages of explanation of the eigenvalues extracted from the measurement system's matrix.
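As a concrete illustration of the weighting idea, here is a minimal Python sketch. The covariance matrices and the aggregation formula are illustrative assumptions, not the paper's exact indexes: eigenvalues of the measurement-system and total covariance matrices are combined using explained-variance weights instead of a plain geometric mean.

```python
import numpy as np

def weighted_grr_index(sigma_ms, sigma_total):
    """Weighted multivariate %R&R sketch: eigenvalues of the
    measurement-system and total covariance matrices are combined
    using weights equal to the proportion of variance each
    measurement-system eigenvalue explains."""
    lam_ms = np.linalg.eigvalsh(sigma_ms)[::-1]      # descending order
    lam_tot = np.linalg.eigvalsh(sigma_total)[::-1]
    weights = lam_ms / lam_ms.sum()                  # explained-variance weights
    ratios = np.sqrt(lam_ms / lam_tot)               # per-direction %R&R ratios
    return 100.0 * np.dot(weights, ratios)

# toy 2-characteristic example (illustrative numbers only)
sigma_ms = np.array([[0.04, 0.01], [0.01, 0.02]])
sigma_total = np.array([[1.00, 0.30], [0.30, 0.80]])
print(f"weighted %R&R = {weighted_grr_index(sigma_ms, sigma_total):.1f}%")
```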

2.
Given the complex nature of their phenomena and interactions, industrial processes often have multiple variables of interest, usually grouped into critical-to-quality and critical-to-performance characteristics. These variables often have significant correlations, which makes the engineering problems multivariate. For this reason, Response Surface Methodology, coupled with multivariate techniques, has been widely used as a logical roadmap for modeling and optimizing the characteristics of interest. However, the variability and prediction capability of the numerical solutions obtained are almost always neglected, reducing the likelihood that numerical results are indeed compatible with observable process improvements. To fill this gap, this paper proposes a nonlinear multiobjective optimization strategy based on multivariate prediction capability ratios. Rotated Factor Analysis is used as the multivariate technique for grouping process characteristics and composing capability ratios, so that the prediction variance is taken as the natural variability of the modeled process and the distances from the expected values to the nadir solutions of the latent variables are taken as the allowed variability. The Normal Boundary Intersection method, combined with the Generalized Reduced Gradient algorithm, is used as the numerical scheme to maximize the prediction capability of Pareto-optimal solutions. To illustrate the feasibility of the proposed strategy, we present a case study of dry end milling of duplex stainless steel UNS S32205. A rotatable Central Composite Design with three cutting parameters was employed for data collection, and the traditional multivariate approach was compared with the proposed one. The results demonstrate that the proposed optimization strategy provides solutions with satisfactory prediction capability for all variables analyzed, regardless of their convexities, optimization directions, and correlation structure. In addition, while critical-to-quality characteristics are more difficult to control, they were favored by the proposed optimization in terms of prediction capability, which was a desirable result.

3.
Sampling uncertainty in coordinate measurement data analysis
There are a number of important software-related issues in coordinate metrology. After measurement data are collected in the form of position vectors, the data analysis software must derive the necessary geometric information from the point set, and uncertainty plays an important role in the analysis. When extreme fit approaches (L∞-norm estimation) are employed for form error evaluation, the uncertainty is closely related to the sampling process used to gather the data. The measurement points are a subset of the true surface, and, consequently, the extreme fit result differs from the true value. In this paper, we investigate the functional relationship between the uncertainty in an extreme fit and the number of points measured. Two major issues are addressed: the first is to identify the parameters that affect this functional relationship; the second is to develop a methodology for applying the relationship to the sampling of measurement points.
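A minimal sketch of an extreme (L∞-norm) fit, posed as a linear program with scipy; the profile function and noise level are hypothetical, and the loop only illustrates how the estimate varies with the number of sampled points.

```python
import numpy as np
from scipy.optimize import linprog

def minimum_zone_line(x, y):
    """L-infinity (extreme) fit of a line: minimize the maximum
    deviation h subject to |y_i - (a*x_i + b)| <= h, as an LP."""
    n = len(x)
    c = np.array([0.0, 0.0, 1.0])                    # variables [a, b, h]
    A = np.vstack([np.column_stack([ x,  np.ones(n), -np.ones(n)]),
                   np.column_stack([-x, -np.ones(n), -np.ones(n)])])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    a, b, h = res.x
    return a, b, h

# sample-size experiment: straightness estimate vs. number of points
rng = np.random.default_rng(0)
true_form = lambda x: 0.002 * np.sin(6 * x)          # hypothetical profile
for n in (5, 20, 100):
    x = rng.uniform(0, 1, n)
    y = true_form(x) + rng.normal(0, 1e-4, n)
    _, _, h = minimum_zone_line(x, y)
    print(f"n={n:4d}  straightness estimate = {2*h:.5f}")
```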

4.
Drilling, one of the primary machining processes, finds widespread application in the manufacture of functional components. Optimizing drilling performance requires a critical understanding of the process parameters that govern the mechanism of the drilling process. Machining at the nanoscale has been studied extensively using numerical modeling approaches, owing to the complexity of conducting experiments at that scale. In this paper, we propose a new evolutionary approach based on multi-gene genetic programming (MGGP) to numerically model the drilling of a graphene sheet, a two-dimensional nanoscale material. The performance of the proposed MGGP model is compared with that of an artificial neural network (ANN), and we observe that our predictions agree well with data obtained using a conventional numerical approach for modeling the machining of nanoscale materials. We anticipate that the proposed MGGP model can find applications in optimizing the machining processes of nanoscale materials.

5.
Rapid measurement and modeling of complex surfaces and RP/NC machining based on measured point clouds
Coordinate measurement is widely applied in manufacturing. This paper reviews the state of research on the key technologies involved in coordinate measurement and related fields, including multi-sensor integrated measurement, measurement point planning, probe path planning, probe radius compensation, measured point cloud segmentation, surface fitting, and direct NC and RP machining based on measured point clouds. Solutions to several bottleneck problems are discussed, and some suggestions are offered.

6.
The paper first presents an AND/OR nets approach for planning a computer numerical control (CNC) machining operation and then describes how adaptive fuzzy Petri nets (AFPNs) can be used to model and control all activities and events within CNC machine tools. It also demonstrates how product quality specifications, such as surface roughness, and machining process quality can be controlled using AFPNs. The paper presents an intelligent control architecture based on AFPNs with learning capability for modeling a CNC machining operation and controlling machining process quality. It is shown that several ideas and approaches proposed in the field of robotic assembly are applicable, with minor modifications, to modeling the planning procedure. Graph theory, Petri nets, and fuzzy logic are the tools employed in this research to model the feasible states for performing a process and to obtain the best process performance path by applying the process designer's criteria.

7.
Laser shock peening is an innovative surface treatment technique that has been successfully applied to improve the fatigue performance of metallic components by improving the surface morphology and microstructure of the material. In this paper, three Nd³⁺:YAG laser process parameters (voltage, focus position, and pulse duration) are varied in an experiment to determine the optimal process parameters that simultaneously meet the specifications for seven correlated responses of processed Nimonic 263 sheets. The modelling and optimisation of the process were performed using an advanced, problem-independent method. First, the responses are expressed using Taguchi's quality loss function; multivariate statistical methods are then applied to uncorrelate them and synthesise them into a single performance measure. Artificial neural networks are used to build the process model, and simulated annealing is utilised to find the optimal process parameter setting in a global continuous space of solutions. The effectiveness of the proposed method in developing a robust laser shock peening process was proved in comparison to several commonly used approaches from the literature, resulting in the highest process performance measure, the most favourable response values, and the corresponding process parameter optimum. Besides the improved surface characteristics, the optimised laser shock peening (LSP) showed an improvement in microhardness and the formation of favourable microstructural transformations.
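The pipeline lends itself to a compact sketch. The Python code below uses synthetic data, smaller-the-better losses, a first-principal-component performance measure, an MLP surrogate, and scipy's dual_annealing as the annealer — all stand-ins for the paper's exact choices — to walk through the four stages.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.decomposition import PCA
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)

# hypothetical experiment: 3 process parameters -> 4 correlated responses
X = rng.uniform(-1, 1, (60, 3))
Y = np.column_stack([
    (X**2).sum(axis=1),
    X[:, 0] * X[:, 1] + X[:, 2],
    np.abs(X).sum(axis=1),
    X[:, 0] - X[:, 1]**2,
]) + rng.normal(0, 0.05, (60, 4))

# 1) smaller-the-better Taguchi-type loss for each response: L ~ y^2
loss = Y**2

# 2) uncorrelate the losses and synthesise one performance measure
#    (first principal component score, as a simple stand-in)
perf = PCA(n_components=1).fit_transform(loss).ravel()

# 3) neural-network process model: parameters -> performance measure
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(X, perf)

# 4) simulated annealing over the continuous parameter space
result = dual_annealing(lambda p: model.predict(p.reshape(1, -1))[0],
                        bounds=[(-1, 1)] * 3, seed=0)
print("optimal parameters:", np.round(result.x, 3))
```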

8.
Most industrial applications of statistical process control involve more than one quality characteristic to be monitored. These characteristics are usually correlated, which challenges the monitoring methods. These challenges are addressed by multivariate quality control charts, which have been widely developed in recent years. Nonetheless, multivariate process monitoring methods encounter a problem when the quality characteristics are of the attribute type and follow nonnormal distributions such as the multivariate binomial or multivariate Poisson. Since the data analysis in the latter case is not as easy as in the normal case, monitoring multiattribute processes involves more complexity. In this paper, a hybrid procedure is developed to monitor multiattribute correlated processes in which the number of defects in each characteristic is important. The monitoring scheme is designed in two phases. In the first phase, the inherent skewness of multiattribute Poisson data is largely removed using a root transformation technique. In the second phase, a method based on the decision on belief (DOB) concept is employed, applied to the transformed data obtained from the first phase. Finally, simulation experiments compare the performance of the proposed methodology with that of the Hotelling T² and MEWMA charts in terms of in-control and out-of-control average run length criteria. The simulation results show that the proposed methodology outperforms the other two methods.
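As a rough sketch of the first phase, the code below applies a square-root (Anscombe-type) transformation to correlated count data and then computes a Hotelling T² statistic as the baseline monitor; the paper's decision-on-belief step is not reproduced here, and the data generator is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# correlated bivariate Poisson-like counts via a shared latent component
common = rng.poisson(2.0, 500)
counts = np.column_stack([common + rng.poisson(3.0, 500),
                          common + rng.poisson(4.0, 500)])

# Phase 1: root transformation to reduce the skewness of Poisson data
z = np.sqrt(counts + 0.375)          # Anscombe-style variance stabilization

mu = z.mean(axis=0)
S_inv = np.linalg.inv(np.cov(z, rowvar=False))

def hotelling_t2(x):
    """Hotelling T^2 statistic for one transformed observation."""
    d = x - mu
    return float(d @ S_inv @ d)

# monitor a shifted observation (extra defects in both characteristics)
new = np.sqrt(np.array([14, 16]) + 0.375)
print(f"T2 = {hotelling_t2(new):.2f}")
```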

9.
The normal fluctuation of wideband signals in process industry systems exhibits behavior that is characteristic of process dynamics, sensor dynamics, vibration of components, and product quality. A baseline statistical signature can be established by systematically processing multivariate signals and determining the cause-and-effect relationships among the process variables characterizing a subsystem. Both the theoretical and computational bases for processing a set of signals using multivariate autoregression (MAR) modeling have been developed and applied to establish frequency-domain statistical signatures for an aluminum rolling mill. A systematic procedure is developed to interpret the causal relationships for the detection and isolation of process anomalies and sensor maloperation. This digital signal processing technique and its implementation clearly demonstrate the applicability of the method for characterizing and monitoring complex industrial processes.
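A minimal MAR/vector-autoregression sketch using statsmodels, with a synthetic two-variable process standing in for coupled mill signals; the Granger-causality test illustrates the kind of cause-and-effect screening described above.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)

# synthetic two-variable process: x2 is partly driven by lagged x1,
# a stand-in for coupled mill variables (e.g., roll force and gauge)
n = 2000
x1 = np.zeros(n); x2 = np.zeros(n)
e = rng.normal(0, 1, (n, 2))
for t in range(1, n):
    x1[t] = 0.7 * x1[t-1] + e[t, 0]
    x2[t] = 0.5 * x2[t-1] + 0.4 * x1[t-1] + e[t, 1]

results = VAR(np.column_stack([x1, x2])).fit(maxlags=5, ic='aic')

# Granger causality as a cause-effect screen between process variables
print(results.test_causality('y2', ['y1'], kind='f').summary())
```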

10.
Modeling of machining has evolved through three main stages over the years, namely empirical modeling, science-based (predictive) modeling, and computer-based modeling. All three now co-exist and synergize with one another. Empirical modeling can be said to have had its beginning as an organized process in the late 1890s to early 1900s; science-based modeling began to emerge in the 1940s and computer-based modeling in the 1970s. Each of these three stages was ushered in by a key event. The first originated with F. W. Taylor's pioneering engineering research and development of an empirical methodology (and equations) for estimating reasonably economic machining conditions. The second stage was initiated largely by Merchant's physics-based modeling and analysis of the basic force system acting between the cutting tool, chip, and workpiece in a machining process. The third (and major) stage was the "watershed" event of the advent of digital computer technology and its application to manufacturing in general, which enabled the integration of computer-based modeling with all of the databases of the full manufacturing system. Today the synergistic combination of these three stages faces a significant challenge arising from the growing need for machine tools to autonomously avoid, or even correct, processing errors or failures in process. Basic to advancing such capability is a continuing increase in the accuracy and realism of machining process models.
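Taylor's empirical relation V·Tⁿ = C is easy to state concretely; in the sketch below the constants are illustrative order-of-magnitude values (roughly carbide-like), not recommendations.

```python
# Taylor tool-life relation V * T**n = C: solve tool life for a given speed
# (illustrative constants; n and C depend on the tool/work material pair)
n_exp, C = 0.25, 350.0

def tool_life_minutes(V):
    """Tool life T (minutes) from Taylor's equation V * T**n = C."""
    return (C / V) ** (1.0 / n_exp)

for V in (100, 150, 200):          # cutting speed, m/min
    print(f"V = {V} m/min  ->  T = {tool_life_minutes(V):.1f} min")
```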

11.
This paper discusses empirical modeling using system identification techniques, with a focus on an interacting series process. The study is carried out experimentally on a gaseous pilot plant whose dynamics exhibit the typical behavior of an interacting series process. Three practical approaches are investigated and their performances evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state error in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state-space model for an interacting series process are discussed.
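A first-order ARX fit by least squares is the simplest instance of the identification task described; the plant below is simulated, not the gaseous pilot plant itself.

```python
import numpy as np

rng = np.random.default_rng(7)

# first-order ARX identification sketch for one loop:
# y[t] = a*y[t-1] + b*u[t-1] + e[t], estimated by least squares
n = 500
u = rng.uniform(-1, 1, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t-1] + rng.normal(0, 0.02)

Phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(f"a = {a_hat:.3f} (true 0.8), b = {b_hat:.3f} (true 0.5)")
```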

12.
Wear, 2002, 252(3–4): 179–188
The modeling of particle–wall impaction in a confined gas-particle flow using both Lagrangian and Eulerian approaches is reported. The Lagrangian method is based on a general computational fluid dynamics (CFD) code, FLUENT (FLUENT-4.3, 1996). In the Eulerian method, based on our previously developed code [J. Eng. Gas Turb. Power 119 (1997) 709], a computational procedure has been developed that decomposes one Eulerian solution of the particulate phase into two equivalent Lagrangian solutions for incident and reflected particles. These two approaches are evaluated against experimental data for particle–wall impaction using spray droplets. Two test cases, a 45° ramp and an isolated single tube, have been studied with both approaches to determine the particle behavior and the physical properties of impacting and reflected particles near the wall surface. Results show that both approaches successfully predict the main features of the particulate flow near the wall; however, the Eulerian approach is much less expensive than the Lagrangian approach in obtaining the flow solution of impacting particles. The particulate flow predictions from both approaches have been applied to predict tube erosion and compared with reported data. Good agreement is observed between the predictions of the two approaches and between the predicted and measured erosion results.
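At the wall, the Lagrangian side of the problem reduces to a reflection rule with restitution coefficients; a minimal sketch with illustrative coefficient values follows (real restitution correlations depend on material and impact angle).

```python
import numpy as np

# minimal Lagrangian particle-wall reflection sketch: a particle hitting
# a 45-degree ramp rebounds with empirical restitution coefficients
e_n, e_t = 0.6, 0.9                          # normal / tangential restitution
n_hat = np.array([-1.0, 1.0]) / np.sqrt(2)   # unit normal of the 45-deg ramp

v_in = np.array([20.0, 0.0])        # incident velocity, m/s
v_n = (v_in @ n_hat) * n_hat        # normal component of velocity
v_t = v_in - v_n                    # tangential component
v_out = -e_n * v_n + e_t * v_t      # reflected velocity after impact
print("reflected velocity:", np.round(v_out, 2))
```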

13.
Features are extracted from dual-lead ECG signals using a data-fusion technique based on the multivariate autoregressive (MAR) model, in order to realize computer-aided automatic diagnosis. For classification, the MAR model coefficients and their K–L transform are used as signal features, and a nonlinear quadratic discriminant function (QDF) classifier is adopted. The method was used to model and test normal sinus rhythm (NSR), atrial premature contraction (APC), premature ventricular contraction (PVC), ventricular tachycardia (VT), ventricular fibrillation (VF), and supraventricular tachycardia (SVT) signals from the MIT-BIH standard database. The results show that fusing dual-lead ECG data achieves more satisfactory results than using single-lead ECG data alone.
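A compact sketch of the feature-fusion-plus-QDF idea, using synthetic AR signals in place of real ECG leads and scikit-learn's quadratic discriminant analysis as the QDF classifier; the K–L transform step is omitted.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(4)

def make_signal(pole):
    """Synthetic stand-in for one ECG lead: an AR(1)-like rhythm."""
    e = rng.normal(0, 1, 400)
    s = np.zeros(400)
    for t in range(1, 400):
        s[t] = pole * s[t-1] + e[t]
    return s

def ar_features(sig, order=4):
    """AR coefficients of one lead as a compact feature vector."""
    return AutoReg(sig, lags=order).fit().params[1:]   # drop the intercept

# two rhythm classes; two leads fused by concatenating their AR features
X, y = [], []
for label, (p1, p2) in enumerate([(0.5, 0.4), (0.9, 0.8)]):
    for _ in range(30):
        X.append(np.concatenate([ar_features(make_signal(p1)),
                                 ar_features(make_signal(p2))]))
        y.append(label)

qdf = QuadraticDiscriminantAnalysis().fit(X, y)
test = np.concatenate([ar_features(make_signal(0.9)),
                       ar_features(make_signal(0.8))])
print("predicted class:", qdf.predict([test])[0])
```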

14.
Modified independent component analysis (MICA) was proposed mainly to obtain a consistent solution, which cannot be ensured by the original ICA algorithm, and has been widely investigated in multivariate statistical process monitoring (MSPM). Within MICA-based non-Gaussian process monitoring, two main problems have not been well addressed so far: the selection of a proper non-quadratic function for measuring non-Gaussianity, and the determination of the dominant ICs for constructing the latent subspace. Given that the MICA method, like other MSPM approaches, is usually implemented in an unsupervised manner, these two problems are always solved by empirical criteria, without regard to enhancing fault detectability. The current work addresses these challenging issues and proposes a double-layer ensemble monitoring method based on MICA (abbreviated DEMICA) for non-Gaussian processes. Instead of selecting a single non-quadratic function and a single set of dominant ICs, the DEMICA method combines all possible base MICA models, developed with different non-quadratic functions and different sets of dominant ICs, into an ensemble; a double-layer Bayesian inference is then formulated as a decision fusion method to form a unique monitoring index for online fault detection. The effectiveness of the proposed approach is validated on two systems, and the achieved results clearly demonstrate its superior proficiency.
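To make the ensemble idea concrete, the sketch below fits scikit-learn FastICA models with the three standard non-quadratic functions and fuses their I² statistics by simple averaging; the paper's double-layer Bayesian inference is more elaborate, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)

# non-Gaussian training data: linear mixtures of uniform/Laplacian sources
S = np.column_stack([rng.uniform(-1, 1, 1000), rng.laplace(0, 1, 1000),
                     rng.uniform(-2, 2, 1000)])
X = S @ rng.normal(0, 1, (3, 3)).T

def i2_statistic(ica, x):
    """I^2 monitoring statistic: squared norm of the estimated ICs."""
    return float((ica.transform(x.reshape(1, -1)) ** 2).sum())

# ensemble of base ICA models with different non-quadratic functions
ensemble = [FastICA(n_components=2, fun=f, whiten='unit-variance',
                    random_state=0).fit(X)
            for f in ('logcosh', 'exp', 'cube')]

# crude decision fusion: average the base statistics
x_new = np.array([5.0, -4.0, 3.0])
stats = [i2_statistic(m, x_new) for m in ensemble]
print("fused statistic:", np.mean(stats))
```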

15.
冷汹涛 《机械》2010,37(5):41-44
CAD/CAM software, typified by Pro/Engineer and UG NX, has become an essential tool for product design and manufacturing engineers. All such CAD/CAM packages must be capable of surface modeling, which is a necessary prerequisite for three-dimensional product design. Surface modeling is therefore an important research topic in computer-aided geometric design and computer graphics, and computational geometry is the mathematical method by which it is studied. After introducing the relevant concepts of surface modeling and computational geometry, this paper elaborates the mathematical modeling processes for constructing the parametric curves of surfaces based on computational-geometry methods. These modeling methods provide theoretical guidance for implementing surface functionality in CAD/CAM systems.
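As a small example of the computational-geometry machinery involved, de Casteljau's algorithm evaluates a Bézier curve by repeated linear interpolation of its control points; the control polygon below is illustrative.

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve point by repeated linear interpolation,
    a basic computational-geometry construction behind parametric
    curve modeling in CAD/CAM systems."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# cubic example: four control points of a planar curve segment
ctrl = [(0, 0), (1, 2), (3, 3), (4, 0)]
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, de_casteljau(ctrl, t))
```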

16.
To mitigate the losses caused by faults in multi-point wind speed or wind pressure sensors, and to reduce computational complexity and the difficulty of engineering application, a model for simultaneously recovering missing data is needed. Whereas traditional multi-channel signal diagnosis uses multivariate empirical mode decomposition (MEMD), this paper proposes the multivariable empirical wavelet transform (MEWT) to simultaneously recover multi-point missing data. In application, MEWT first decomposes the multi-point signals into a series of modes; a kernel-based extreme learning machine (KELM) then performs simultaneous prediction, with the cuckoo search (CS) algorithm used to intelligently tune the model's regularization and kernel parameters. For multi-step prediction, a multi-input multi-output (MIMO) strategy replaces the traditional rolling strategy. Measured multi-point wind pressure data from a building surface and measured multi-point downburst wind speed data are used to verify the feasibility of the model. Comparison with a noise-assisted MEMD-KELM model shows that the proposed model recovers multi-point, multi-step signals simultaneously with higher accuracy.
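A minimal KELM sketch in closed form; the regularization and kernel parameters that the paper tunes with cuckoo search are fixed by hand here, and the sensor data are synthetic.

```python
import numpy as np

def rbf(A, B, gamma):
    """Gaussian kernel matrix between row-sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: closed-form ridge solution
    beta = (K + I/C)^-1 Y. C and gamma are the parameters the paper
    tunes with cuckoo search (tuning omitted in this sketch)."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, Y):
        self.X = np.asarray(X, float)
        K = rbf(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(K)) / self.C,
                                    np.asarray(Y, float))
        return self
    def predict(self, Xq):
        return rbf(np.asarray(Xq, float), self.X, self.gamma) @ self.beta

# MIMO use: predict several missing sensor channels in one shot
rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, (80, 3))                             # healthy channels
Y = np.column_stack([np.sin(X.sum(1)), X[:, 0] * X[:, 1]])  # missing channels
model = KELM().fit(X, Y)
print(model.predict(X[:2]), "\n", Y[:2])
```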

17.
A modeling method for heterogeneous material parts based on spatial micro-tetrahedra
Most current commercial computer-aided design systems, which take homogeneous materials as their design premise, have difficulty describing the structural and material information of heterogeneous objects (HEO), and existing HEO modeling methods generally lack compatibility with general-purpose modeling software and rapid prototyping equipment. Based on the fundamental mapping between geometric and material information, this paper proposes an HEO modeling method that constructs spatial micro-tetrahedral elements from the commonly used stereolithography (STL) surface-triangulation data format. By establishing the correspondence among the solid, the micro-tetrahedra, the STL facets, and the spatial nodes, the HEO solid model is decomposed layer by layer into discretized micro-tetrahedral mesh nodes, to which material information is assigned. The structure and material distribution at each micro-tetrahedron surface are then computed one by one from the three-dimensional positions and material values of the element mesh nodes, thereby realizing structural and material design from the HEO boundary surface through to its interior. An example verifies the effectiveness of the method for designing complex HEO.

18.
Direct manipulation of B-spline surfaces
Engineering design and geometric modeling often require the ability to modify the shape of parametric curves and surfaces so that their shapes satisfy given geometric constraints, including point, normal vector, curve, and surface constraints. Two approaches are presented for directly manipulating the shape of a B-spline surface: the first is based on least squares, whereas the second is based on minimizing the bending energy of the surface. For each method, unified and explicit formulae are derived to compute the new control points of the modified surface, so the methods are simple, fast, and applicable to CAD systems. An algebraic technique is used to simplify the computation of B-spline composition and multiplication. Comparisons and examples are also given.
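For the point-constraint case, the minimum-norm (least-squares) update of the control points has an explicit formula. The sketch below uses scipy B-splines on a curve rather than a surface, with illustrative knots and control points, and moves a curve point by a prescribed displacement.

```python
import numpy as np
from scipy.interpolate import BSpline

# clamped cubic B-spline curve: knots and 2-D control points (illustrative)
k = 3
ctrl = np.array([[0, 0], [1, 2], [2, 3], [3, 3], [4, 1], [5, 0]], float)
n = len(ctrl)
knots = np.concatenate([[0]*k, np.linspace(0, 1, n - k + 1), [1]*k])

def basis_values(u):
    """Values N_i(u) of all n basis functions at parameter u."""
    return np.array([BSpline(knots, np.eye(n)[i], k)(u) for i in range(n)])

# minimum-norm direct manipulation: move the curve point at u* by
# displacement d with the smallest total control-point movement:
# dP_i = N_i(u*) * d / sum_j N_j(u*)^2
u_star, d = 0.4, np.array([0.0, 0.5])
N = basis_values(u_star)
dP = np.outer(N, d) / (N @ N)

curve = BSpline(knots, ctrl, k)
new_curve = BSpline(knots, ctrl + dP, k)
print("before:", curve(u_star), " after:", new_curve(u_star),
      " prescribed offset:", d)
```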

19.
The selection of lean tools is a crucial factor for decision makers and practitioners in a competitive environment. Few efforts have been made on this selection problem; conversely, numerous studies have treated the analytic hierarchy process (AHP)–data envelopment analysis (DEA) combination and DEA with undesirable variables separately, leaving a gap for lean practitioners and the methods involved. The present research integrates AHP and DEA with desirable and undesirable factors to evaluate lean tools and techniques and to rank them by efficacy. We suggest a logical procedure to measure the efficacy of lean tools on leanness and to prioritize them for decision makers. In this research, we apply an integrated multicriteria decision-making approach, combining hybrid group AHP and DEA models with desirable and undesirable variables, to assess the relative efficiency of lean manufacturing tools and techniques. Case studies demonstrate lean implementation in companies and are validated by a panel of experts. The integration of these approaches creates synergy and proves even more powerful: the proposed integrated AHP-DEA model can evaluate and rank different alternatives while considering desirable and undesirable variables in the production processes.
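The AHP stage reduces to the principal eigenvector of a pairwise comparison matrix; a minimal sketch follows (the comparison values are hypothetical, and the DEA stage with undesirable variables is omitted).

```python
import numpy as np

# pairwise comparison of three hypothetical lean tools (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# priority vector = normalized principal eigenvector of the matrix
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = np.abs(vecs[:, i].real)
w /= w.sum()

# consistency ratio (random index RI = 0.58 for a 3x3 matrix)
ci = (vals.real[i] - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), " CR =", round(cr, 3))
```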

20.
Flow velocity controls hillslope soil erosion and is a key hydrodynamic variable in sediment transport and deposition processes. The dye-tracer technique is one of the most widely applied methods for measuring the velocity of shallow interrill and rill flow. It is based on injecting a tracer at a specific point and measuring its speed over the known distance from the injection point to a given channel section. The dye-tracer technique requires that the measured surface flow velocity be corrected to obtain the mean flow velocity, using a correction factor that is generally deduced empirically. The technique has two sources of uncertainty: i) the method applied for measuring the travel time of the dye tracer, and ii) the estimate of the correction factor, i.e., the ratio between the mean flow velocity and the surface velocity, in different flow conditions. This paper presents the results of a wide experimental investigation carried out in a small fixed-bed flume simulating a rill channel. First, a chronometer-based (CB) and a video-based (VB) technique were compared to establish the influence of the travel-time measuring technique. For each experimental run, characterized by a sample of 20 measurements at the same slope and discharge, the analysis showed that the empirical frequency distribution of the ratio between a single measurement and the sample mean (i.e., the average of 20 measurements) is more uniform for the VB technique than for the CB one. In any case, the sample mean was representative of surface flow velocity for both techniques, and the mean value obtained by the CB measurements slightly underestimated (−1.7%) the corresponding mean obtained by the VB technique. Finally, the effects of slope (0.1–8.7%), flow Reynolds number (3462–10040), and Froude number (1.44–5.17) on the correction factor are presented. The analysis demonstrated that the correction factor is independent of the flow Reynolds number, while a relationship with the Froude number (obtained from surface velocity measurement) or the channel slope can be established.
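The basic dye-tracer computation is a one-liner once travel times are in hand; the sketch below uses hypothetical repeat measurements and an assumed correction factor alpha (the paper relates this factor to the Froude number or the channel slope).

```python
# dye-tracer velocity sketch: surface velocity from travel times,
# then mean velocity via an assumed correction factor alpha
distance_m = 2.0                     # injection point to measuring section
travel_times_s = [2.41, 2.35, 2.52, 2.46, 2.38]   # hypothetical repeats

v_surf = sum(distance_m / t for t in travel_times_s) / len(travel_times_s)
alpha = 0.72                         # hypothetical mean/surface velocity ratio
v_mean = alpha * v_surf
print(f"surface velocity = {v_surf:.3f} m/s, mean velocity = {v_mean:.3f} m/s")
```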
