Similar Articles
20 similar articles found (search time: 515 ms)
1.
Tolerance analysis of an assembly is an important issue in mechanical design. Among the various tolerance analysis methods, statistical analysis is the most commonly employed. However, the conventional statistical tolerance method is usually based on the normal distribution and fails to predict the resultant tolerance of an assembly of parts with non-normal distributions. In this paper, a novel method based on statistical moments is proposed. Tolerance distributions of parts are first converted into statistical moments, which are then used to compute the tolerance stack-up. The computed moments, particularly the variance, skewness and kurtosis, are then mapped back to probability distributions to calculate the resultant tolerance of the assembly. The proposed method can analyse the resultant tolerance specification for non-normal distributions with different skewness and kurtosis. Simulated results showed that the tail coefficients of different distributions with the same kurtosis are close to each other for normalised probabilities between −3 and 3; that is, the tail coefficients of a statistical distribution can be predicted from the coefficients of skewness and kurtosis. Two examples are given to demonstrate the proposed method. The predicted resultant tolerances of the two examples differ by only 0.5% and 1.5% from Monte Carlo simulation results with 1,000,000 samples. The proposed method computes much faster, with higher accuracy, than conventional statistical tolerance methods. Its merit is that the computation is fast and comparatively accurate for both symmetrical and unsymmetrical distributions, particularly when the required probability is between ±2σ and ±3σ.
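The moment-transfer step in this kind of analysis can be sketched with cumulants, which simply add for a sum of independent part dimensions; this is a generic sketch of that idea, not the paper's specific transfer formulas:

```python
def stack_moments(parts):
    """Combine (mean, variance, skewness, excess kurtosis) of independent
    part dimensions into the moments of the assembly gap X1 + X2 + ...
    Cumulants of independent variables add: k1 = mean, k2 = var,
    k3 = skew * var**1.5, k4 = exkurt * var**2."""
    k1 = sum(m for m, v, s, k in parts)
    k2 = sum(v for m, v, s, k in parts)
    k3 = sum(s * v ** 1.5 for m, v, s, k in parts)
    k4 = sum(k * v ** 2 for m, v, s, k in parts)
    # map summed cumulants back to (mean, var, skew, excess kurtosis)
    return k1, k2, k3 / k2 ** 1.5, k4 / k2 ** 2

# Two identical flat-topped parts: skewness 0, excess kurtosis -1.2;
# stacking n parts divides skewness by sqrt(n) and excess kurtosis by n.
mean, var, skew, exkurt = stack_moments([(5.0, 0.01, 0.0, -1.2)] * 2)
```

This shows why the stacked distribution drifts toward normality (skewness and kurtosis shrink as parts are added), which is what the mapping back to a probability distribution has to account for.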

2.
An assembly is the integrative process of joining components to make a completed product; it brings together the upstream processes of design, engineering and manufacturing. The functional performance of an assembled product and its manufacturing cost are directly affected by the individual component tolerances. However, the selective assembly method can achieve tight assembly tolerances from components manufactured with wider tolerances. The components are segregated into selective groups (bins) and mated according to a purposeful strategy rather than at random, so that small clearances are obtained at the assembly level at lower manufacturing cost. In this paper, the effect of mean shift in the manufacturing of the mating components and the selection of the number of groups for selective assembly are analysed. A new model is proposed based on these effects to obtain the minimum assembly clearance within the specification range. However, according to Taguchi's concept, manufacturing a product within the specification may not be sufficient; rather, it must be manufactured to the target dimension. Taguchi's loss function is therefore applied to the selective assembly method to evaluate deviation from the mean. Subsequently, a genetic algorithm is used to obtain the best combination of selective groups with minimum clearance and least loss value within the clearance specification. The effect of the ratio between the dimensional distributions of the mating parts' quality characteristics is also analysed.
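The loss evaluation referred to here is the standard Taguchi quadratic loss; a minimal sketch follows, in which the loss coefficient `k` and the target clearance are illustrative values, not taken from the paper:

```python
def taguchi_loss(y, target, k=1.0):
    """Quadratic quality loss L(y) = k * (y - target)**2: zero only at the
    target clearance and growing even while y stays inside specification."""
    return k * (y - target) ** 2

def group_loss(clearances, target, k=1.0):
    """Average loss over the clearances produced by one pairing of
    selective groups; a GA would minimize this over candidate pairings."""
    return sum(taguchi_loss(c, target, k) for c in clearances) / len(clearances)
```

The point of the quadratic form is that two in-spec clearances are not equally good: the one closer to the target carries strictly less loss, which is what the GA's fitness function can reward.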

3.
Tolerance is one of the most important parameters in design and manufacturing. Tolerance synthesis has a significant impact on manufacturing cost and product quality. In the international standards community two approaches for statistical tolerancing of mechanical parts are being discussed: process capability indices and distribution function zone. The distribution function zone (DFZone) approach defines the acceptability of a population of parts by requiring that the distribution function of relevant values of the parts be bounded by a pair of specified distribution functions. In order to apply this approach to statistical tolerancing, one needs a method to decompose the assembly level tolerance specification to obtain tolerance parameters for each component in conjunction with a corresponding tolerance-cost model. This paper introduces an optimization-based statistical tolerance synthesis model based on the DFZone tolerance specifications. A new tolerance-cost model is proposed and the model is illustrated with an assembly example.

4.
A multipole algorithm for plane elasticity based on the direct boundary element method (BEM) is presented. The kernels in the BEM are approximated as truncated Taylor series with expansion points taken from a uniform grid. The algorithm replaces the usual BEM elemental summations with correlation sums on the regular grid in terms of the sampled kernel data and density moments. Far field influences are rapidly computed in the frequency domain using the fast Fourier transform (FFT). The resultant linear system of equations is solved with GMRES. The multipole method is extended to whole-body regularized forms of the standard displacement-BIE and the stress-BIE. Free-term coefficients which arise from regularization in the far field are also rapidly computed as correlation sums with the FFT. The algorithm is shown to be faster than the traditional BEM for models with over 400 quartic elements while maintaining an acceptably high level of accuracy.

5.
Based on the distribution characteristics of form and position tolerance zones, the variation of form and position features is studied, and a method is proposed for judging, within a one-dimensional linear assembly dimension chain, whether the form and position errors in the model should be included in the assembly error accumulation calculation. Building on the statistical root-sum-square model, a new assembly error accumulation model is proposed according to the distribution characteristics of each form and position feature after part manufacturing. With this model, quantitative analysis of assembly accuracy that includes form and position errors can be carried out. An application example illustrates the effectiveness of the proposed method.

6.
The paper presents a simple approximation technique for statistical tolerance analysis, namely, the allocation of component tolerances based on a known assembly tolerance. The technique uses a discretized, multivariate kernel density estimate and a simple transformation to approximate the probability distribution of the overall assembly characteristic. The data-driven approach suits real-world settings in which components are randomly selected from their respective manufacturing processes to form mechanical assemblies. The numerical approach is demonstrated in two dimensions for two distinct cases: first, when the component characteristics are non-normal, independent random variables, and second, when they are highly correlated, normal random variables. The results on initial test problems are promising.
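The data-driven idea can be imitated with plain resampling: draw component values from measured samples, push them through the stack-up function, and read tolerance limits off the empirical quantiles. This is a crude stand-in for the paper's kernel density estimate, with the smoothing step deliberately omitted:

```python
import random

def assembly_quantiles(samples_a, samples_b, stack=lambda a, b: a + b,
                       n=20000, lo=0.00135, hi=0.99865, seed=1):
    """Empirical lo/hi quantiles of the assembly characteristic built by
    resampling measured component data; lo/hi default to the +-3 sigma
    coverage points (99.73%) used in conventional tolerancing."""
    rng = random.Random(seed)
    out = sorted(stack(rng.choice(samples_a), rng.choice(samples_b))
                 for _ in range(n))
    return out[int(lo * n)], out[int(hi * n)]
```

Because the stack-up function is passed in, the same resampling loop handles nonlinear assembly responses and, with paired sampling, correlated component data.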

7.
The accuracy of the moment method with a log-normal size distribution for the aerosol coagulation problem was investigated. The constant-collision-kernel coagulation problem was solved by the moment method and, for comparison, by a very accurate numerical method (Landgrebe and Pratsinis, 1990). Approximate analytical solutions obtained with different choices of moments were compared with the result of the accurate numerical model. The solutions for the moments of the size distribution were obtained in exact form, and it is argued that they can serve as a standard reference for verifying the accuracy of new numerical schemes for solving the coagulation equation. The time evolution of the particle size distribution was obtained by representing the size distribution with a log-normal function. Different choices of three moments yield different solutions for the size distribution parameters; three such choices were compared as an example. The conventional choice of the 0th, 1st and 2nd moments proved most accurate for various polydispersity cases when compared with the numerical result, and is recommended for future studies.

8.
A size distribution function of magnetite nanoparticles has been selected among known distribution functions that best represent the frequency polygon of the nanoparticles. Normal and lognormal distribution functions are shown to provide more accurate representations for the initial and final portions of the size distribution of magnetite nanoparticles. We have determined the transition diameter of magnetite nanoparticles which corresponds to a transition from a normal to a lognormal distribution function and have constructed a composite distribution function consisting of normal and lognormal distribution functions, which accurately represents the size distribution of the magnetite nanoparticles. The distribution moments and magnetization curve calculated using this distribution function agree well with the moments calculated using the frequency polygon of the magnetite nanoparticles and with the experimentally determined magnetization curve.

9.
A method is presented to estimate the process capability index (PCI) for a set of non-normal data from its first four moments. It is assumed that these four moments, i.e. mean, standard deviation, skewness, and kurtosis, are suitable to approximately characterize the data distribution properties. The probability density function of non-normal data is expressed in Chebyshev–Hermite polynomials up to tenth order from the first four moments. An effective range, defined as the value for which a pre-determined percentage of data falls within the range, is solved numerically from the derived cumulative distribution function. The PCI with a specified limit is hence obtained from the effective range. Compared with some other existing methods, the present method gives a more accurate PCI estimation and shows less sensitivity to sample size. A simple algebraic equation for the effective range, derived from the least-square fitting to the numerically solved results, is also proposed for PCI estimation.
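The effective-range idea reduces to replacing the 6σ in the usual Cp formula with the range that actually contains 99.73% of the data. An empirical version of that substitution (not the paper's Chebyshev–Hermite expansion, which works from the four moments rather than raw data) looks like:

```python
def pci_from_effective_range(data, lsl, usl, coverage=0.9973):
    """Capability index Cp = (USL - LSL) / effective range, where the
    effective range is the empirical interval containing `coverage`
    of the data, so non-normal tails are reflected directly.  For
    normal data this range approaches 6 sigma, recovering classic Cp."""
    xs = sorted(data)
    n = len(xs)
    tail = (1.0 - coverage) / 2.0
    lo = xs[int(tail * (n - 1))]
    hi = xs[int((1.0 - tail) * (n - 1))]
    return (usl - lsl) / (hi - lo)
```

The moment-based route in the paper exists precisely because raw empirical quantiles at the 0.135% level need very large samples to stabilize; a fitted density gives the same range from far less data.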

10.
In general, the exact probability distribution of a definite integral of a given non-Gaussian random field is not known. Some information about this unknown distribution can be obtained from the 3rd and 4th moments of the integral. Approximations to these moments can be calculated by discretizing the integral and replacing the integrand by third-degree polynomials of correlated Gaussian variables that reproduce the first four moments and the correlation function of the field correctly. The method based on these ideas (see Ditlevsen O, Mohr G, Hoffmeyer P. Integration of non-Gaussian fields. Probabilistic Engineering Mechanics, 1996) is discussed, further developed, and used in a computer program that produces fairly accurate approximations to these moments with no restrictions on the weight function applied to the field or on the field's correlation function. A pathological example demonstrating the limitations of the method is given.

11.
Probabilistic uncertainty analysis quantifies the effect of input random variables on model outputs. It is an integral part of reliability-based design, robust design, and design for Six Sigma. The efficiency and accuracy of probabilistic uncertainty analysis is a trade-off issue in engineering applications. In this paper, an efficient and accurate mean-value first-order saddlepoint approximation (MVFOSA) method is proposed. As in the mean-value first-order second-moment (MVFOSM) approach, a performance function is approximated with a first-order Taylor expansion at the mean values of the random input variables. Instead of simply using the first two moments of the random variables as in MVFOSM, MVFOSA estimates the probability density function and cumulative distribution function of the response by the accurate saddlepoint approximation. Because it uses complete distribution information, MVFOSA is generally more accurate than MVFOSM with the same computational effort. Without the nonlinear transformation from non-normal variables to normal variables required by the first-order reliability method (FORM), MVFOSA is also more accurate than FORM in certain circumstances, especially when the transformation significantly increases the nonlinearity of a performance function. It is also more efficient than FORM because an iterative search for the so-called most probable point is not required. The features of the proposed method are demonstrated with four numerical examples.
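For contrast with MVFOSA, the classical MVFOSM step it starts from is easy to state: linearize the performance function g at the mean values and propagate only two moments. A minimal sketch, using a finite-difference gradient for generality:

```python
def mvfosm(g, means, stds, h=1e-6):
    """Mean-value first-order second-moment approximation:
    mu_g ~ g(mu),  var_g ~ sum((dg/dx_i * sigma_i)**2),
    with the partial derivatives taken at the input means."""
    mu_g = g(*means)
    var_g = 0.0
    for i, (m, s) in enumerate(zip(means, stds)):
        xp = list(means)
        xp[i] = m + h                      # perturb one input at a time
        grad = (g(*xp) - mu_g) / h         # forward-difference dg/dx_i
        var_g += (grad * s) ** 2
    return mu_g, var_g ** 0.5

# For a linear g the approximation is exact (up to finite-difference error)
mu, sigma = mvfosm(lambda x, y: 2 * x + y, [1.0, 3.0], [0.1, 0.2])
```

MVFOSA keeps exactly this linearization but feeds the full input distributions through a saddlepoint approximation instead of stopping at (mu, sigma), which is where its extra accuracy comes from.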

13.
Tolerance analysis is receiving renewed emphasis as industry recognizes that tolerance management is a key element in their programs for improving quality, reducing overall costs, and retaining market share. The specification of tolerances is being elevated from a menial task to a legitimate engineering design function. New engineering models and sophisticated analysis tools are being developed to assist design engineers in specifying tolerances on the basis of performance requirements and manufacturing considerations. This paper presents an overview of tolerance analysis applications to design with emphasis on recent research that is advancing the state of the art. Major topics covered are (1) new models for tolerance accumulation in mechanical assemblies, including the Motorola Six Sigma model; (2) algorithms for allocating the specified assembly tolerance among the components of an assembly; (3) the development of 2-D and 3-D tolerance analysis models; (4) methods which account for non-normal statistical distributions and nonlinear effects; and (5) several strategies for improving designs through the application of modern analytical tools.
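Two of the accumulation models surveyed here, worst-case and root-sum-square (RSS), compare as follows; the Motorola Six Sigma model additionally inflates each component contribution for a sustained 1.5σ mean shift, a refinement omitted in this sketch:

```python
def worst_case(tols):
    """Worst-case stack-up: component tolerances add linearly, guarding
    against every part sitting at its limit simultaneously."""
    return sum(tols)

def rss(tols):
    """Statistical (root-sum-square) stack-up: tolerances add in
    quadrature, assuming independent, centred component variations."""
    return sum(t * t for t in tols) ** 0.5

# Four components at +-0.1 each: worst case 0.4, RSS only 0.2 --
# the statistical credit grows with the number of components.
tols = [0.1, 0.1, 0.1, 0.1]
```

The gap between the two numbers is exactly what the allocation algorithms in topic (2) exploit: under a statistical model, each component can be granted a wider, cheaper tolerance for the same assembly requirement.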

14.
The general formula for the rth moment of the folded normal distribution is obtained, and formulae for the first four non-central and central moments are calculated explicitly.

To illustrate the mode of convergence of the folded normal to the normal distribution as μ/σ = θ increases, the shape factors β_f1 and β_f2 were calculated and the relationship between them represented graphically.

Two methods of estimating the parameters μ and σ of the parent normal distribution are presented, one using the first and second moments (Method I) and the other using the second and fourth moments (Method II), and their standard errors are calculated. The accuracy of both methods for various values of θ is discussed.
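The first moment referred to above has a well-known closed form that can be checked directly. This reproduces the standard result E|X| = σ√(2/π)·exp(−θ²/2) + μ·erf(θ/√2) with θ = μ/σ, not necessarily the paper's notation:

```python
import math

def folded_normal_mean(mu, sigma):
    """First (non-central) moment of |X| for X ~ N(mu, sigma^2):
    E|X| = sigma*sqrt(2/pi)*exp(-theta**2/2) + mu*erf(theta/sqrt(2)),
    theta = mu/sigma.  The second moment is simply mu**2 + sigma**2."""
    theta = mu / sigma
    return (sigma * math.sqrt(2.0 / math.pi) * math.exp(-theta * theta / 2.0)
            + mu * math.erf(theta / math.sqrt(2.0)))
```

Two sanity checks follow from the convergence behaviour the abstract describes: at μ = 0 the folded normal is the half-normal with mean σ√(2/π), and for large θ the fold is negligible, so E|X| → μ.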

15.
Stiffness, stability and second-order effects of coupled shear walls
Tong Genshu, Su Jian. Engineering Mechanics (工程力学), 2012, 29(11): 115-122
Using a continuum model, this paper studies the in-plane stability of double-leg (coupled) shear wall structures and obtains an exact explicit expression for the critical load under a concentrated compressive force at the top, together with the explicit buckling mode. The critical load formula shows that a coupled shear wall is a dual lateral-force-resisting structure, and the interaction between the two subsystems can be represented by a series-parallel circuit model. Extending the series-parallel model to linear analysis yields an explicit expression for the top lateral stiffness, which is compared with the exact solution. Amplification factors for the lateral displacement, wall-leg bending moments, wall-leg axial forces, and coupling-beam bending moments under various horizontal loads combined with a concentrated vertical load at the top are derived, and approximate formulas are provided.

16.
Given the heavy verification workload for shipboard pressure gauges, the high timeliness requirements, and the urgent need for on-site verification, this paper analyses the indoor environments of China's major ports and shipboard cabin environments and determines the environmental conditions for on-site verification of shipboard pressure gauges. In accordance with the test uncertainty ratio requirements of GJB 5109-2004, it examines the primary reference standard suitable for on-site verification, the test uncertainty ratio, acceptance criteria, and related issues, and validates them experimentally, providing a reference for carrying out on-site verification of shipboard pressure gauges.

17.
JJF 1108-2012 added a requirement to measure the thread crest truncation of petroleum thread gauges, but the measurement method it specifies cannot accurately reflect the machining errors of the thread profile. This paper analyses in detail the shortcomings of the method given in the specification and presents a method and calculation procedure for measuring thread crest truncation using a coordinate measuring machine.

18.
The Lagrange multiplier (LM) method is currently used to allocate tolerances for optimum manufacturing cost. It is a tedious iterative process and sometimes allocates a component's tolerance outside its process tolerance limits. The present work develops a graphical representation that helps the process engineer visualize the minimum and maximum values of assembly tolerances. The representation also lets the process engineer determine the exact total manufacturing cost of the assembly and fix tolerances that do not fall outside the prescribed limits. A simple C program constructs the closed-form equations (CFE), and a single EXCEL graphical representation is derived for the assembly tolerance, allocated tolerances, and total manufacturing cost. The developed algorithm is demonstrated on two- to five-component linear assemblies to aid the process/manufacturing engineer's visualization before the tolerance specification on component dimensions is determined. Test results show a maximum deviation of 0.09% in assembly tolerance and 0.33% in total manufacturing cost between the LM method and the newly developed CFE method.
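For one common cost model, the LM approach actually admits a closed form, which shows why iteration is not always needed. Assuming (this cost model is an illustration, not the paper's) component cost C_i = a_i + b_i/t_i with an RSS assembly constraint √(Σt_i²) = T, the stationarity condition −b_i/t_i² + 2λt_i = 0 gives t_i ∝ b_i^(1/3):

```python
def allocate_tolerances(b, T):
    """Closed-form Lagrange-multiplier allocation minimizing
    sum(b_i / t_i) subject to sqrt(sum(t_i**2)) = T:
    each t_i is proportional to b_i**(1/3), scaled to meet T."""
    w = [bi ** (1.0 / 3.0) for bi in b]       # unscaled optimal shape
    c = T / sum(wi * wi for wi in w) ** 0.5   # scale to the RSS budget
    return [c * wi for wi in w]
```

Costlier-to-tighten components (larger b_i) receive wider tolerances, and the allocation always lands exactly on the assembly budget; the process-limit violations mentioned in the abstract arise because nothing in this formula caps t_i at a process capability bound.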

19.
The use of multi-parameter distribution functions that incorporate empirically derived parameters to more accurately capture the nature of data being studied is investigated. Improving the accuracy of these models is especially important for predicting the extreme values of the non-linear random variables. This study was motivated by problems commonly encountered in the design of offshore systems where the accurate modeling of the distribution tail is of significant importance. A four-parameter Weibull probability distribution model whose structural form is developed using a quadratic transformation of linear random variables is presented. The parameters of the distribution model are derived using the method of linear moments. For comparison, the model parameters are also derived using the more conventional method of moments. To illustrate the behavior of these models, laboratory data measuring the time series of wave run-up on a vertical column of a TLP structure and wave crests interacting in close proximity with an offshore platform are utilized. Comparisons of the extremal predictions using the four-parameter Weibull model and the three-parameter Rayleigh model verify the ability of the new formulation to better capture the tail of the sample distributions.

20.
Selective assembly is a method of obtaining high-precision assemblies from relatively low-precision components. A smaller clearance variation than in interchangeable assembly is achieved even though the components are manufactured with wider tolerances. In selective assembly, the mating parts are partitioned into selective groups with smaller tolerances, and corresponding groups are assembled interchangeably. The mating parts are manufactured on different machines, using different processes, and with different standard deviations, so their dimensional distributions are not similar. Consequently, the numbers of parts in corresponding selective groups are unequal, which results in surplus parts, and the clearance variation is also very high. In this article, a new method for selective assembly is proposed: instead of assembling components from corresponding selective groups, components from different combinations of selective groups are assembled to achieve minimum clearance variation. A genetic algorithm is used to find the best combination of selective groups for minimizing the clearance variation. A hole-and-shaft (radial) assembly is analysed, and the best combination minimizing assembly clearance variation is obtained. The assembly is done in three stages to use all the components completely. The best combinations of selective groups and the resulting clearance variations are tabulated, and surplus parts are minimized to a large extent.
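The GA here searches over ways of pairing hole groups with shaft groups; for a handful of groups the same search can be done exhaustively, which makes the objective concrete. The group mean dimensions below are illustrative values, not data from the article:

```python
from itertools import permutations

def best_pairing(hole_means, shaft_means):
    """Exhaustively pair each hole group with one shaft group to minimize
    the spread (max - min) of the resulting group clearances.  A GA
    replaces this loop when the number of groups makes it infeasible."""
    best = None
    for perm in permutations(range(len(shaft_means))):
        clear = [hole_means[i] - shaft_means[j] for i, j in enumerate(perm)]
        spread = max(clear) - min(clear)
        if best is None or spread < best[0]:
            best = (spread, perm)
    return best

# Equally spaced groups: pairing like with like makes every clearance 0.1
spread, pairing = best_pairing([10.0, 10.2, 10.4], [9.9, 10.1, 10.3])
```

With k groups the search space is k!, so exhaustive enumeration dies quickly; that combinatorial growth, plus the multi-stage reuse of surplus parts, is what motivates the genetic algorithm.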


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号