Similar Documents (20 results)
1.
Technological innovations provide integrated circuits of increased functionality and complexity, and modern design tools enable a new multiplicity of products, such as application-specific integrated circuits (ASICs). Traditional qualification procedures cannot keep pace with this evolution with respect to product-reliability requirements, the ability to qualify the multiplicity of future products, and market demands for saving cost and time. A further development of a reliability assurance concept, discussed here, considers design tools, basic product elements, materials, the manufacturing process and controls as a 'system', which has to be qualified with respect to the consistency and efficiency of all implemented reliability assurance measures. The concept is based on the manufacturer's 'system' knowledge and responsibility. It is compatible with the relevant requirements of ISO 9000 and recent military standard proposals, and the procedure is applied to commercial products. The main part of this concept is the qualification of the manufacturing technology. The procedure is organized as a continuous process starting at the concept phase of a new technology and its pilot product; the subsequent steps follow the development, pre-series and series production phases. The reliability aspects concentrate on the physical properties of product elements relevant to their stability and endurance, i.e. the potential failure mechanisms and their root causes as reliability risks. Thus a major part of the reliability testing for qualifying the pilot product of a new technology can be performed without the final product version. The benefits of this approach are savings in time and cost as well as the capability to handle future product multiplicity.

2.
In innovative industries, four major trends are found to influence product quality and reliability: the increase in product complexity, the strong pressure on time to market, the increasingly global economy, and the decreasing tolerance for quality problems. It thus becomes more difficult to anticipate all potential failures during the product development process, and an efficient field feedback process should be in place to react to unanticipated deviations in product performance. Based on a case study in an innovative company, this paper shows that the problem lies not so much in the collection of information as in its inherent quality and in the manner in which it is processed. Therefore, a new method, presented in this paper, was developed to classify and prioritize field data and to upgrade it into information that can be used for design improvement according to the dominant classes of failures, using the four-phase roller coaster model. Although this newly generated information is richer than raw field data, it is not yet detailed enough to allow direct design optimization. Therefore, a second upgrading stage, based on design of experiments (DoE), was developed. It uses a method that combines physics-of-failure (bottom-up) and field information (top-down). As traditional DoE mainly deals with largely time-independent quality data obtained during the manufacturing process, the approach had to be modified to deal with time-dependent reliability data. Case study results show that it is a promising approach for characterizing and resolving failure mechanisms in innovative companies as well. Copyright © 2008 John Wiley & Sons, Ltd.

3.
The first-order reliability method (FORM) has been widely used to solve reliability-based design optimization (RBDO) problems efficiently. However, the second-order reliability method (SORM) is required to estimate the probability of failure accurately for highly nonlinear performance functions. Despite its accuracy, applying SORM to RBDO is quite challenging because of the unaffordable numerical burden incurred by the Hessian calculation. To reduce this numerical effort, a quasi-Newton approach that approximates the Hessian is introduced in this study instead of calculating the true Hessian. The proposed SORM with the approximated Hessian requires only the computations used in FORM, leading to very efficient and accurate reliability analysis, and it utilizes a generalized chi-squared distribution to achieve better accuracy. Furthermore, a SORM-based inverse reliability method is proposed: an accurate reliability index corresponding to a target probability of failure is updated using the proposed SORM, and two approaches for finding an accurate most probable point using the updated reliability index are proposed. The proposed SORM-based inverse analysis is then extended to RBDO so that a reliability-based optimum design satisfying the probabilistic constraints can be obtained more accurately, even for a highly nonlinear system. Numerical results show that the proposed reliability analysis and RBDO achieve the efficiency of FORM and the accuracy of SORM at the same time. Copyright © 2014 John Wiley & Sons, Ltd.
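To make the FORM/SORM distinction concrete, the following sketch computes a FORM reliability index with the HL-RF iteration and then applies a Breitung-type curvature correction on a simple two-dimensional limit state. It is an illustration only, not the paper's implementation: the example limit state, the finite-difference Hessian and the curvature sign convention are assumptions, whereas the paper replaces the explicit Hessian with a quasi-Newton approximation built from FORM iterations.

```python
import numpy as np
from scipy.stats import norm

def g(u):                                    # example performance function; failure when g(u) <= 0
    return 3.0 - u[0]**2 / 10.0 - u[1]

def grad(f, u, h=1e-6):                      # central-difference gradient
    return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(len(u))])

# FORM: HL-RF iteration for the most probable point (MPP) and reliability index beta
u = np.zeros(2)
for _ in range(100):
    gv, gr = g(u), grad(g, u)
    u_next = (gr @ u - gv) / (gr @ gr) * gr
    if np.linalg.norm(u_next - u) < 1e-9:
        u = u_next
        break
    u = u_next
beta = np.linalg.norm(u)
pf_form = norm.cdf(-beta)

# SORM (Breitung-type): correct Pf with the curvature of g = 0 at the MPP
h = 1e-4
H = np.array([[(g(u + h*ei + h*ej) - g(u + h*ei) - g(u + h*ej) + g(u)) / h**2
               for ej in np.eye(2)] for ei in np.eye(2)])
alpha = u / beta                             # unit vector from the origin to the MPP
t = np.array([-alpha[1], alpha[0]])          # tangent direction (2-D case)
kappa = (t @ H @ t) / np.linalg.norm(grad(g, u))   # principal curvature in this convention
pf_sorm = pf_form / np.sqrt(1.0 + beta * kappa)

print(f"beta = {beta:.3f}, Pf(FORM) = {pf_form:.3e}, Pf(SORM) = {pf_sorm:.3e}")
```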

4.
Equality constraints have been well studied and widely used in deterministic optimization, but they have rarely been addressed in reliability-based design optimization (RBDO). The inclusion of an equality constraint in RBDO results in dependency among random variables. Theoretically, given an equality constraint, one random variable can be expressed in terms of the remaining random variables, and the equality constraint can then be eliminated. In practice, however, eliminating an equality constraint may be difficult or impossible because of complexities such as coupling, recursion, high dimensionality, nonlinearity, implicit formats, and high computational costs. The objective of this work is to develop a methodology to model equality constraints and a numerical procedure to solve RBDO problems with equality constraints. Equality constraints are classified into demand-based and physics-based types. A sequential optimization and reliability analysis strategy is used to solve RBDO with physics-based equality constraints, and the first-order reliability method is employed for reliability analysis. The proposed method is illustrated with a mathematical example and a two-member frame design problem. Copyright © 2007 John Wiley & Sons, Ltd.
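As a toy illustration of the substitution idea (not the paper's mathematical example or frame problem), the sketch below eliminates a physics-based equality constraint by solving it for one random variable and then estimates the failure probability of the remaining inequality constraint by crude Monte Carlo. All distributions, the constraint itself and the design value are assumptions; the paper instead uses a sequential optimization and reliability analysis strategy with FORM.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2.0                                   # design variable (e.g. a cross-section area), assumed
n = 200_000

X1 = rng.normal(10.0, 1.0, n)             # random load
# physics-based equality constraint  h = X2 - X1 / d = 0  (equilibrium-type relation)
X2 = X1 / d                               # X2 becomes dependent; the equality constraint is eliminated

# remaining probabilistic inequality constraint: stress X2 must stay below random strength S
S = rng.normal(8.0, 0.8, n)
pf = np.mean(X2 > S)                      # Monte Carlo failure probability estimate
print(f"P[failure] ≈ {pf:.4f}")
```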

5.
To improve the efficiency of reliability assessment for general-purpose ammunition products and to achieve systematic, computer-aided analysis and assessment of product failures and reliability, the failure process of general-purpose ammunition products is studied, and a general failure analysis procedure, failure diagnosis methods and an ammunition failure-analysis data dictionary are derived. The computer-aided analysis system provides modules for failure analysis, reliability assessment, FMECA and others. It was designed and developed with Java, JSP and related languages under the browser/server (B/S) architecture, and implements process analysis of product failures as well as product reliability assessment. By optimizing the mathematical models for reliability statistical analysis and improving the algorithms for the assessment parameters, the efficiency of the assessment work is increased by more than 30% and the parameter precision reaches eight decimal places.

6.
In this article, the authors present a general methodology for age-dependent reliability analysis of degrading or ageing components, structures and systems. The methodology is based on Bayesian inference and its ability to incorporate prior information, and on the idea that ageing can be viewed as an age-dependent change of beliefs about reliability parameters (mainly the failure rate): beliefs change not only because new failure data or other information becomes available over time, but also continuously with the passage of time itself. The main objective of this article is to show clearly how practitioners can apply Bayesian methods to risk and reliability analysis in the presence of ageing phenomena. The methodology describes step-by-step failure rate analysis of ageing components, from building the Bayesian model to its verification and generalization with Bayesian model averaging, which, as the authors suggest, could serve as an alternative to various goodness-of-fit assessment tools and as a universal tool for coping with various sources of uncertainty. The proposed methodology can deal with sparse and rare failure events, as is the case for electrical components, piping systems and various other highly reliable systems. In a case study of electrical instrumentation and control components, the proposed methodology was applied to analyse age-dependent failure rates together with the treatment of the uncertainty due to age-dependent model selection. Copyright © 2013 John Wiley & Sons, Ltd.
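The conjugate Gamma-Poisson update below is a minimal sketch of the Bayesian building block behind such age-dependent failure-rate analysis. The prior and the age-binned exposure and failure counts are hypothetical, and the paper's full treatment (model verification and Bayesian model averaging) is not reproduced; the point is only that beliefs about the failure rate evolve bin by bin even when failures are rare.

```python
import numpy as np
from scipy.stats import gamma

a0, b0 = 0.5, 1000.0          # Gamma(a0, b0) prior on the failure rate (assumed)
# hypothetical field data per age bin: (cumulative exposure hours, observed failures)
age_bins = [(2.0e4, 0), (2.0e4, 1), (2.0e4, 1), (2.0e4, 3)]

a, b = a0, b0
for i, (T, k) in enumerate(age_bins, start=1):
    a, b = a + k, b + T                        # conjugate Gamma-Poisson update
    post = gamma(a, scale=1.0 / b)
    lo, hi = post.ppf(0.05), post.ppf(0.95)
    print(f"age bin {i}: posterior mean = {a/b:.2e}/h, 90% interval = [{lo:.2e}, {hi:.2e}]")
```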

7.
For high-reliability products, the production cost is usually high and the lifetime is long, so failures may not be observable within a limited test period. In this paper, an accelerated experiment is employed in which the lifetime follows an exponential distribution whose failure rate is related to the acceleration factor exponentially. The underlying parameters are also assumed to have exponential prior distributions. A Bayesian zero-failure reliability demonstration test is designed to determine beforehand the minimum sample size and testing length subject to a specified reliability criterion. The probability of passing the designed test, as well as the predictive probability for additional experiments, is also derived. A sensitivity analysis of the design is carried out through a simulation study. Copyright © 2009 John Wiley & Sons, Ltd.
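For orientation, the classical (non-Bayesian) zero-failure design arithmetic for an exponential lifetime is sketched below; the demonstrated failure rate, confidence level and per-unit test length are assumed values, and the paper's actual design additionally brings in the exponential priors and the acceleration model.

```python
import math

lam_target = 1.0e-5          # failure rate (1/h) to be demonstrated, assumed requirement
confidence = 0.90            # demonstration confidence level
test_hours = 2000.0          # test length per unit, assumed

# zero failures allowed: required total exposure follows from the exponential likelihood
total_hours = -math.log(1.0 - confidence) / lam_target
n_units = math.ceil(total_hours / test_hours)            # minimum sample size
print(f"total exposure = {total_hours:,.0f} unit-hours -> n = {n_units} units "
      f"for a {test_hours:.0f} h test")
```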

8.
This paper presents a design-stage method for assessing the performance reliability of systems with multiple time-variant responses due to component degradation. The component degradation profiles over time are assumed known, and the degradation of the system is related to component degradation using mechanistic models. Selected performance measures (e.g. responses) are related to their critical levels by time-dependent limit-state functions. System failure is defined as the non-conformance of any response, so unions of the multiple failure regions are required. For discrete time, set theory establishes the minimum union size needed to identify a true incremental failure region, and a cumulative failure distribution function is built by summing incremental failure probabilities. A practical implementation of the theory is obtained by approximating the probability of the unions with second-order bounds; further, for numerical efficiency, the probabilities are evaluated by the first-order reliability method (FORM). The presented method is quite different from Monte Carlo sampling methods. It can be used to assess mean and tolerance design through simultaneous evaluation of quality and performance reliability. The work herein sets the foundation for an optimization method that controls both quality and performance reliability and thus, for example, supports the estimation of warranty costs and product recalls. An example from power engineering shows the details of the proposed method and the potential of the approach. Copyright © 2006 John Wiley & Sons, Ltd.
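A standard choice of second-order bounds for the probability of a union of failure events is the Ditlevsen bounds; the sketch below evaluates them for a small set of failure modes with assumed single-mode and pairwise joint probabilities, the kind of quantities the paper obtains from FORM. The numbers are illustrative only.

```python
import numpy as np

P = np.array([1.0e-3, 6.0e-4, 2.0e-4])           # single-mode failure probabilities (assumed)
Pij = np.array([[0.0,   2e-4,  5e-5],             # pairwise joint failure probabilities (assumed)
                [2e-4,  0.0,   3e-5],
                [5e-5,  3e-5,  0.0]])

order = np.argsort(P)[::-1]                        # order the modes by decreasing P_i
P, Pij = P[order], Pij[np.ix_(order, order)]

# Ditlevsen second-order bounds on P(E1 u E2 u ... u En)
lower = P[0] + sum(max(0.0, P[i] - Pij[i, :i].sum()) for i in range(1, len(P)))
upper = P.sum() - sum(Pij[i, :i].max() for i in range(1, len(P)))
print(f"{lower:.3e} <= P(union) <= {upper:.3e}")
```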

9.
A field-data-driven reliability and improvement program was introduced at ForHealth Technologies, Inc. (FHT) in December 2004 for its IntelliFill i.v. (IFiv) System. Since then, FHT has developed and executed a unique reliability-based product development and improvement model. As a result of implementing this model, from January 2005 to February 2007 the IFiv System improved its Mean Syringes Between Failures by 1703%, its Operator Interventions Per Thousand Syringes by 45%, and its Mean Time Between Failures by 1725%, with system uptime of over 99%. Furthermore, system-related product costs were reduced by over 40%. IFiv has now delivered over 16 million doses without any medication errors and has helped hospital pharmacies realize drug cost savings along with higher operational efficiencies. The purpose of this case study is to share FHT's success story of conducting a reliability-based product development and improvement program while keeping the process simple and the results fast. The model is applicable at the system level in all industries, with a positive bottom-line impact. Copyright © 2007 John Wiley & Sons, Ltd.

10.
This paper presents a methodology and a software tool for establishing an eco-design concept of a product and its life cycle by assigning appropriate life cycle options to the components of the product. The product life cycle planning (LCP) methodology provides the following systematic procedure. First, the medium- or long-term production and collection plan for the product family is clarified. Next, target values for the product and its life cycle are set while determining the customer-oriented specification and the eco-specification. Then, eco-solution ideas to realize reasonable resource circulation are generated using various life cycle option analysis charts. Finally, an eco-design concept that incorporates the eco-solution ideas is evaluated for decision-making at the early stages of product development. A design support tool was developed for efficiently planning product life cycles using quality function deployment and life cycle assessment data. Case studies verified that the proposed methodology and tool are useful for developing multi-generational eco-products.

11.
Many industrial products consist of multiple components, all of which are necessary for system operation, and there is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life cycle of a product, it is common for there to be incremental design changes made to improve reliability, to reduce costs, or to respond to changes in the availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating information about changes in part numbers over time (information that is readily available in most production databases), time to failure can be predicted more accurately, yielding better field-failure predictions. This paper presents methods for estimating the parameters of, and making predictions with, this generational model, and compares it with existing methods through simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures. Copyright © 2016 John Wiley & Sons, Ltd.
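A minimal simulation sketch of the generational idea is given below: the system fails at the first component failure (competing risks), and one component's lifetime distribution depends on whether the unit was built before or after a part-number change. The Weibull parameters, change-over fraction and horizon are hypothetical, and the paper's estimation and prediction methods are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
built_after_change = rng.random(n) < 0.4            # 40% of units use the new part number

t_comp1 = rng.weibull(1.8, n) * 8.0e3               # unchanged component (shape 1.8, scale 8000 h)
scale2 = np.where(built_after_change, 1.5e4, 6.0e3) # new part number lasts longer
t_comp2 = rng.weibull(1.2, n) * scale2              # redesigned component

t_system = np.minimum(t_comp1, t_comp2)             # series system: first component failure wins
for label, mask in [("old generation", ~built_after_change),
                    ("new generation", built_after_change)]:
    frac = np.mean(t_system[mask] < 5.0e3)
    print(f"{label}: P(system failure before 5,000 h) ≈ {frac:.3f}")
```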

12.
Condition-based maintenance methods have changed the reliability of systems in general and of individual systems in particular, yet this change is usually not reflected in system reliability analysis. System fault tree analysis (FTA) is performed during the design phase, using component failure rates derived from available sources such as handbooks. Condition-based fault tree analysis (CBFTA) starts from the known FTA. Condition monitoring (CM) methods applied to the system (e.g. vibration analysis, oil analysis, electric current analysis, bearing CM, electric motor CM, and so forth) are used to determine updated failure rate values of sensitive components. CBFTA applies these updated failure rates to the FTA and periodically recalculates the top event (TE) failure rate (λTE), thus determining the probability of system failure and the probability of successful system operation, i.e. the system's reliability. FTA is a tool for enhancing system reliability during the design stages, but it has the disadvantage that it does not relate to a specific system undergoing maintenance. CBFTA is a tool for updating the reliability values of a specific system and for calculating the residual life according to the system's monitored conditions. Using CBFTA, the original FTA becomes a practical tool for use during the system's field life phase, not just during the design phase. This paper describes the CBFTA method, and its advantages are demonstrated by an example.
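The recalculation step of CBFTA can be sketched in a few lines: keep the fault-tree structure from the design phase, replace the handbook failure rate of a monitored component with the value indicated by condition monitoring, and recompute the top-event probability for the mission time. The tree, rates and mission time below are illustrative assumptions, not the paper's example.

```python
import math

def p_fail(lmbda, t):                 # basic-event probability for a constant failure rate
    return 1.0 - math.exp(-lmbda * t)

def OR(*p):                           # independent basic events under an OR gate
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def AND(*p):                          # independent basic events under an AND gate
    q = 1.0
    for pi in p:
        q *= pi
    return q

t = 8760.0                                            # one year of operation (h)
handbook = {"pump": 3e-5, "motor": 2e-5, "valve_a": 1e-5, "valve_b": 1e-5}
monitored = dict(handbook, pump=9e-5)                 # vibration analysis shows pump degradation

for label, rates in [("design FTA", handbook), ("CBFTA update", monitored)]:
    # TOP = pump failure OR motor failure OR (valve_a AND valve_b)
    top = OR(p_fail(rates["pump"], t), p_fail(rates["motor"], t),
             AND(p_fail(rates["valve_a"], t), p_fail(rates["valve_b"], t)))
    print(f"{label}: P(top event within {t:.0f} h) = {top:.4f}")
```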

13.
Most models for software reliability analysis are based on reliability growth models that deal with the fault-detection process, either by assuming that faults are corrected immediately after being detected or by not counting the time needed to correct a fault. Some models have been developed to relax this assumption; however, unlike for the fault-detection process, few published data sets are available to support the modeling and analysis of both the fault-detection and fault-removal processes. In this paper, some useful approaches to modeling both the software fault-detection and fault-correction processes are discussed. Further analysis of the software release time decision, incorporating both a fault-detection model and a fault-correction model, is also presented. The procedure is easy to use and useful for practical applications, and is illustrated with an actual data set from a software development project. Copyright © 2006 John Wiley & Sons, Ltd.
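One common formulation of the combined detection/correction view (not necessarily the exact models used in the paper) treats detection as a Goel-Okumoto NHPP and correction as detection delayed by a constant lag; the sketch below then picks a release time at which the expected backlog of detected-but-uncorrected faults falls below a target. All parameter values are assumed.

```python
import math

a, b = 120.0, 0.05        # expected total faults and detection rate (per week), assumed
lag = 2.0                 # average correction delay in weeks, assumed
target_backlog = 1.0      # acceptable expected number of uncorrected faults at release

def m_detect(t):          # Goel-Okumoto mean value function for detected faults
    return a * (1.0 - math.exp(-b * t))

def m_correct(t):         # corrections modelled as detections delayed by a constant lag
    return m_detect(t - lag) if t > lag else 0.0

t = 0.0
while m_detect(t) - m_correct(t) > target_backlog or t < lag:
    t += 0.1
print(f"suggested release time ≈ {t:.1f} weeks; "
      f"detected = {m_detect(t):.1f}, corrected = {m_correct(t):.1f}")
```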

14.
In this paper, a coupled reliability method for structural fatigue evaluation that accounts for load shedding is first proposed, based on probabilistic fracture mechanics in which the uncertainties of the structural parameters are taken into account. The method is then applied to predict the fatigue reliability of a T-welded structure with and without load shedding. The comparison shows that accounting for load shedding can improve the predicted structural fatigue reliability and reduce conservativeness. The influence of the load-shedding coefficient on the fatigue failure probability of the T-welded component is investigated, and some interesting results are obtained: the influence can be divided into three regions, namely high, medium and low fatigue-failure areas, of which the low-failure area is the most relevant when designing a T-welded structure. The thickness of the T-welded structure along the crack propagation direction is found to be one of the important design variables for fatigue reliability design in which the low-fatigue-failure zone is used as one of the reliability constraints. A basic design framework for the T-welded structure is established to constrain the fatigue failure probability within the low-fatigue-failure area.
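To illustrate what fatigue reliability with load shedding means computationally, the sketch below integrates Paris-law crack growth with the stress range decreasing as the crack grows (governed by a load-shedding coefficient) and estimates the failure probability by Monte Carlo over the scattered initial crack size and Paris constant. The geometry factor, Paris constants, stress level and the linear load-shedding law are all assumptions and do not reproduce the paper's T-welded joint model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
m, Y = 3.0, 1.12                                   # Paris exponent and geometry factor (assumed)
C = rng.lognormal(np.log(3.0e-12), 0.3, n)         # Paris coefficient, m/cycle per (MPa*m^0.5)^m
a0 = rng.lognormal(np.log(0.2e-3), 0.2, n)         # initial crack depth (m)
a_c, dsig0, N_design = 10.0e-3, 120.0, 2.0e6       # critical depth (m), stress range (MPa), design cycles

def failure_probability(eta):
    """P(fatigue life < N_design) for load-shedding coefficient eta."""
    a_grid = np.linspace(0.05e-3, a_c, 400)        # crack-depth integration grid (m)
    dsig = dsig0 * (1.0 - eta * a_grid / a_c)      # load shedding: stress range drops as a grows
    dK = Y * dsig * np.sqrt(np.pi * a_grid)        # stress intensity factor range (MPa*m^0.5)
    inv_rate = dK ** (-m)                          # 1 / dK^m; the 1/C factor is applied per sample
    Nf = np.empty(n)
    for i in range(n):                             # cycles to grow each sampled crack to a_c
        mask = a_grid >= a0[i]
        Nf[i] = np.trapz(inv_rate[mask], a_grid[mask]) / C[i]
    return np.mean(Nf < N_design)

for eta in (0.0, 0.3):
    print(f"eta = {eta:.1f}: P(failure before {N_design:.0e} cycles) ≈ {failure_probability(eta):.3f}")
```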

15.
Degradation tests are an alternative to lifetime tests and accelerated lifetime tests in reliability studies. Based on the degradation of a product quality characteristic over time, degradation tests provide enough information to estimate the time-to-failure distribution. Analytical, numerical or approximate estimation methods can be used to obtain the time-to-failure distribution; the choice depends on the complexity of the degradation model used in the data analysis. An example of the application and analysis of degradation tests is presented in this paper to characterize the durability of a product and to compare the various estimation methods for the time-to-failure distribution. The example concerns the wear of an automobile tyre and was carried out to estimate the average distance covered before failure and some percentiles of interest. Copyright © 2004 John Wiley & Sons, Ltd.
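The approximate route mentioned above is often implemented as a pseudo-failure-time analysis: fit a simple degradation path to each unit, extrapolate each path to the failure threshold, and fit a lifetime distribution to the resulting crossing times. The sketch below does this with simulated data standing in for the tyre measurements; the linear wear model, threshold and Weibull choice are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
threshold = 8.0                                    # tread wear (mm) defining failure, assumed
t = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # inspection points (thousands of km)
n_units = 12

pseudo_ttf = []
for _ in range(n_units):
    rate = rng.normal(0.30, 0.05)                          # unit-to-unit wear-rate variation
    wear = rate * t + rng.normal(0.0, 0.1, t.size)         # noisy degradation measurements
    slope, intercept, *_ = stats.linregress(t, wear)       # per-unit linear degradation path
    pseudo_ttf.append((threshold - intercept) / slope)     # extrapolated threshold crossing time

shape, loc, scale = stats.weibull_min.fit(pseudo_ttf, floc=0)   # lifetime distribution fit
b10 = scale * (-np.log(0.9)) ** (1.0 / shape)                    # 10th percentile life
print(f"Weibull fit: shape = {shape:.2f}, scale = {scale:.1f} (x1000 km); B10 = {b10:.1f}")
```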

16.
Over a typical mission period, only zero-failure data can be obtained for high-quality, long-life products. In zero-failure reliability assessment, the point estimates and confidence interval estimates of the distribution parameters cannot be obtained simultaneously by current reliability assessment models, and the credibility of the assessment results may be reduced if they are obtained at the same time. A new model is proposed in this paper to address this consistency problem. In the proposed model, the point estimate of reliability is obtained from the lifetime probability distribution derived by the matching distribution curve method, while the confidence interval estimate of reliability is obtained from new samples generated from that lifetime distribution according to the parametric bootstrap method. Analysis of zero-failure data from torque motors after real operation shows that the new model not only meets the requirements of reliability assessment but also improves the accuracy of reliability interval estimation.
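The interval-estimation stage can be sketched as a parametric bootstrap: given the lifetime distribution implied by the point estimate (the matching-distribution-curve step is not reproduced here), resample, re-estimate and read off percentile limits for the reliability at a mission time. The Weibull parameters, sample size and mission time below are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
shape_hat, scale_hat = 2.2, 9.0e3          # Weibull point estimate (assumed output of stage 1)
t_mission, n, B = 3.0e3, 20, 1000          # mission time (h), pseudo-sample size, bootstrap reps

R_boot = []
for _ in range(B):
    sample = stats.weibull_min.rvs(shape_hat, scale=scale_hat, size=n, random_state=rng)
    c, _, s = stats.weibull_min.fit(sample, floc=0)       # re-estimate from the pseudo-sample
    R_boot.append(np.exp(-(t_mission / s) ** c))          # reliability at the mission time

R_point = np.exp(-(t_mission / scale_hat) ** shape_hat)
lo, hi = np.percentile(R_boot, [5, 95])
print(f"R({t_mission:.0f} h): point estimate = {R_point:.4f}, "
      f"90% bootstrap interval = [{lo:.4f}, {hi:.4f}]")
```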

17.
Early failures are the dominant concern as integrated circuit technology matures to the point of consistently producing systems of high reliability. These failures are attributed to randomly occurring defects in elementary objects (contacts, vias, metal runs, gate oxides, bonds, etc.) that result in extrinsic rather than intrinsic (wearout-related) mortality. A model relating system failure to failure at the elementary-object level has been developed. Reliability is modelled as a function of circuit architecture, mask layout, material properties, life-test data, worst-case use conditions and the processing environment. The effects of competing failure mechanisms and the presence of redundant sub-systems are accounted for, and hierarchy is exploited in the analysis, allowing large-scale designs to be simulated. Experimental validation of the modelling of oxide-leakage-related failure is presented, based on a correlation between actual failures reported for a production integrated circuit and Monte Carlo simulations that incorporate wafer-level test results and process defect-monitor data. The state of the art in IC reliability simulation is advanced in that a methodology has been developed that provides the capability to design in reliability while accounting for early failures; applications include process qualification, design assessment and fabrication monitoring.
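A generic version of the defect-driven early-failure argument (not the authors' simulator) is sketched below: each class of elementary objects contributes latent defects in proportion to its count, a defect density and the fraction of defects that escape test as reliability defects, and a Poisson model then gives the fraction of dies at risk of early failure. All densities and fractions are illustrative assumptions.

```python
import math

# (object count on the die, defects per object, fraction that become latent reliability defects)
objects = {
    "contacts":   (2.0e6, 1.0e-9, 0.05),
    "vias":       (1.5e6, 2.0e-9, 0.05),
    "gate_oxide": (5.0e5, 4.0e-9, 0.10),
    "metal_runs": (3.0e5, 3.0e-9, 0.02),
}

lam = sum(count * density * frac for count, density, frac in objects.values())
p_at_risk = 1.0 - math.exp(-lam)            # P(die carries at least one latent defect)
print(f"expected latent defects per die = {lam:.2e}; "
      f"fraction of dies at risk of early failure = {p_at_risk:.2e}")
```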

18.
In the present competitive scenario, companies face the challenge of developing new products within a short time, with technology superior to prior developments and at reduced cost, to guarantee the survival of their business. Success is directly coupled with client requirements: quality and reliability should be as high as feasible, whereas deadlines and price have to be as low as possible. This paper discusses tools and methods for the planning and assurance of quality that have to be taken into account during product conception, the phase in which the quality, reliability and final price of a product are technically defined. A methodology is presented for this purpose, and it can be extended to any product or system with few adaptations of the quality, reliability and cost models. The product selected for the case-study analysis in this work is an automotive clutch. The methodology proposed for the analysis combines the KANO method, target costing and value analysis to assess the level of compliance with client requirements and to identify functions whose relative costs exceed their relative importance and therefore offer potential for optimization or elimination. Reliability concepts based on statistical distributions and fault tree analysis are then employed to locate critical components and to quantify the preliminary performance of the design. To provide life-test results for the highest failure risk in the system, accelerated tests are planned and deployed. The final goal of this paper is a reliability assessment based on criticality levels for the analysis of components to be improved or optimized and, above all, the creation of a methodology for the development of optimized products. Copyright © 2007 John Wiley & Sons, Ltd.
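As a reminder of the arithmetic behind accelerated-test planning, the sketch below uses a generic Arrhenius model to translate a field-life requirement into an equivalent test duration at elevated temperature; the activation energy, temperatures and life target are assumptions, and the paper's clutch test plan may rest on a different acceleration model.

```python
import math

k_B = 8.617e-5                                   # Boltzmann constant (eV/K)
Ea = 0.7                                         # activation energy of the dominant mechanism (eV), assumed
T_use, T_test = 273.15 + 40.0, 273.15 + 110.0    # field and test temperatures (K), assumed

AF = math.exp(Ea / k_B * (1.0 / T_use - 1.0 / T_test))   # Arrhenius acceleration factor
field_life_target_h = 10.0 * 8760.0                       # 10-year field requirement, assumed
print(f"acceleration factor = {AF:.1f}; "
      f"equivalent test time = {field_life_target_h / AF:,.0f} h")
```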

19.
The objectives of this paper are to propose a practical procedure for knowledge-based reliability qualification and to provide a checklist of the information required for qualification analysis and testing. The paper investigates the common failure mechanisms of electronic components in automotive environments and addresses the role of the physics-of-failure approach in component reliability qualification.

20.
In gradient-based design optimization, the sensitivities of the constraints with respect to the design variables are required. In reliability-based design optimization (RBDO), the probabilistic constraint is evaluated at the most probable point (MPP), and thus the sensitivities of the probabilistic constraints at the MPP are required. This paper presents a rigorous analytic derivation of the sensitivities of the probabilistic constraint at the MPP for both the first-order reliability method (FORM)-based performance measure approach (PMA) and the dimension reduction method (DRM)-based PMA. Numerical examples demonstrate that the analytic sensitivities agree very well with the sensitivities obtained from the finite difference method (FDM). However, because the sensitivity calculation at the true DRM-based MPP requires second-order derivatives and an additional MPP search, this paper proposes deriving the sensitivity at an approximated DRM-based MPP, which requires neither. A convergence study illustrates that the sensitivity at the approximated DRM-based MPP converges to the sensitivity at the true DRM-based MPP as the design approaches the optimum. Hence, the sensitivity at the approximated DRM-based MPP is proposed for use in DRM-based RBDO to enhance the efficiency of the optimization. Copyright © 2009 John Wiley & Sons, Ltd.
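A minimal sketch of the FORM-based PMA side of this (the DRM-based variant is not reproduced) is given below: an advanced-mean-value style iteration locates the inverse MPP on the sphere of radius beta_t, the probabilistic constraint is the performance measure there, and its analytic design sensitivity, here simply the partial derivative of the limit state with respect to the design variable at the MPP, is checked against a finite difference. The limit state and all parameter values are illustrative assumptions.

```python
import numpy as np

beta_t = 3.0                               # target reliability index
mu = np.array([5.0, 4.0])                  # means of the two normal random variables (assumed)
sigma = np.array([0.4, 0.3])               # standard deviations (assumed)

def g_x(x, d):                             # performance function; failure when g < 0
    return x[0] * x[1] - d                 # capacity x0*x1 versus design-dependent demand d

def performance_measure(d, iters=50):
    """FORM-based PMA: minimise g on the sphere |u| = beta_t (AMV fixed-point iteration)."""
    u = np.zeros(2)
    for _ in range(iters):
        x = mu + sigma * u
        grad_u = np.array([x[1], x[0]]) * sigma        # dg/du by the chain rule
        u = -beta_t * grad_u / np.linalg.norm(grad_u)  # update the inverse-MPP estimate
    return g_x(mu + sigma * u, d)

d0 = 12.0
G = performance_measure(d0)
dG_analytic = -1.0                          # dg/dd evaluated at the MPP (here g = x0*x1 - d)
dG_fdm = (performance_measure(d0 + 1e-4) - G) / 1e-4
print(f"G(d0) = {G:.4f}; dG/dd: analytic = {dG_analytic:.4f}, finite difference = {dG_fdm:.4f}")
```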
