Similar Literature
20 similar records found (search time: 31 ms)
1.
Mapping function to failure mode during component development
When designing aerospace systems, it is essential to provide crucial failure information for failure prevention. Failure modes and effects analyses, together with prior engineering knowledge and experience, are commonly used to determine the potential modes of failure a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information are expanded upon to help designers extrapolate from existing products, based on their similarity and the potential design tradeoffs. In this work, we aim to enhance this process by providing design-aid tools that derive similarities between functionality and failure modes. Specifically, this paper presents the theoretical foundations of a matrix-based approach to derive the similarities that exist between different failure modes, by mapping observed failure modes to the functionality of each component, and applies it to a simple design example. The function–failure mode method is proposed to design new products, or redesign existing ones, with solutions for functions that eliminate or reduce the potential of a failure mode.
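The core of the matrix-based mapping can be illustrated with a small, self-contained sketch: a function–component matrix multiplied by a component–failure matrix yields a function–failure matrix linking each function to the failure modes it may be exposed to. All names and counts below are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a function-failure matrix mapping:
# EC[i][j] = 1 if function i is carried out by component j,
# CF[j][k] = count of times failure mode k was observed on component j.
# The product EF[i][k] links each function to failure modes.

functions = ["transfer torque", "seal fluid"]
components = ["shaft", "gasket"]
failure_modes = ["fatigue", "corrosion", "creep"]

EC = [[1, 0],   # "transfer torque" -> shaft
      [0, 1]]   # "seal fluid"      -> gasket

CF = [[3, 1, 0],   # shaft: 3 fatigue, 1 corrosion
      [0, 2, 1]]   # gasket: 2 corrosion, 1 creep

def matmul(a, b):
    """Plain-Python matrix product."""
    return [[sum(a[i][j] * b[j][k] for j in range(len(b)))
             for k in range(len(b[0]))] for i in range(len(a))]

EF = matmul(EC, CF)
for f, row in zip(functions, EF):
    print(f, row)
```

Each row of `EF` then tells a designer which historically observed failure modes a new product's functions may inherit.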

2.
Published studies and audits have documented that a significant number of U.S. Army systems are failing to demonstrate established reliability requirements. To address this issue, the Army introduced a new reliability policy in December 2007 that encourages the use of cost-effective reliability best practices. The intent of this policy is to improve the reliability of Army systems and materiel, which in turn will have a significant positive impact on mission effectiveness, logistics effectiveness and life-cycle costs. Under this policy, the Army strongly encourages the use of Physics of Failure (PoF) analysis on mechanical and electronic systems. At the US Army Materiel Systems Analysis Activity, PoF analyses are conducted to support contractors, program managers and engineers on systems in all stages of acquisition, from design through test and evaluation (T&E) to fielded systems. This article discusses using the PoF approach to improve the reliability of military products. PoF is a science-based approach to reliability that uses modeling and simulation to eliminate failures early in the design process by addressing root-cause failure mechanisms in a computer-aided engineering environment. The PoF approach involves modeling the root causes of failure such as fatigue, fracture, wear, and corrosion. Computer-aided design tools have been developed to address various loads, stresses, failure mechanisms, and failure sites. This paper focuses on understanding the cause and effect of the physical processes and mechanisms that cause degradation and failure of materials and components. A reliability assessment case study of circuit cards with dense circuitry is discussed. System-level dynamics models, component finite element models and fatigue-life models were used to reveal the underlying physics of the hardware in its mission environment.
Outputs of these analyses included forces acting on the system, displacements of components, accelerations, stress levels, weak points in the design and probable component life. This information may be used to make design changes early in the acquisition process, when changes are easier to make and far more cost effective. Design decisions and corrective actions made early in the acquisition phase lead to improved efficiency and effectiveness of the T&E process. The intent is to make fixes prior to T&E, which will reduce test time and cost, allow more information to be obtained from testing and improve test focus. PoF analyses may also be conducted for failures occurring during test, to better understand the underlying physics of the problem and identify the root cause of failures; this may lead to better fixes for the problems discovered, fewer test-fix-test iterations and reduced decision risk. The same analyses and benefits apply to systems exhibiting failures in the field.
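As one concrete example of the fatigue-life models used in such PoF analyses, Basquin's high-cycle fatigue relation links stress amplitude to cycles to failure. The material coefficients below are hypothetical placeholders, not values from the case study.

```python
# Basquin high-cycle fatigue relation: sigma_a = sigma_f * (2N)^b.
# Solving for cycles to failure N given a stress amplitude.
# The coefficients sigma_f and b are hypothetical placeholders.

def basquin_life(sigma_a, sigma_f=900.0, b=-0.1):
    """Cycles to failure for stress amplitude sigma_a (MPa)."""
    return 0.5 * (sigma_a / sigma_f) ** (1.0 / b)

# With b = -0.1, halving the stress amplitude multiplies
# life by 2^10 = 1024x, illustrating why reducing vibration
# stress early in design pays off so strongly.
n_high = basquin_life(200.0)
n_low = basquin_life(100.0)
print(round(n_low / n_high))
```

The steep stress-life exponent is what makes small design changes (stiffening, damping, relocation of components) so effective when made early.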

3.
The objective of this article is to introduce a method that mitigates product risks during the conceptual design phase by identifying the design variables that affect product failures. Using this comprehensive, step-by-step process, which combines existing techniques in a new way, designers can begin with a simple functional model and emerge from the conceptual design phase with specific components selected and many risks already mitigated. The risk in early design (RED) method plays a significant role in identifying failure modes by function; these modes are then analyzed through modeling equations or lifespan analyses in a manner that emphasizes the variables under the designers' control. With the valuable insight this method provides, informed decisions can be made early in the process, thereby eliminating costly changes later on.
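In the spirit of a risk-in-early-design analysis, a minimal sketch of mapping functions to historically observed failure modes and ranking them by risk might look as follows; the functions, modes, and likelihood/consequence ratings are purely illustrative, not from the article.

```python
# Illustrative sketch: functions from a simple functional model are
# mapped to historically observed failure modes, each with hypothetical
# 1-5 likelihood and consequence ratings; modes are ranked by risk.

history = {
    "convert energy": [("fatigue", 4, 5), ("overheating", 3, 4)],
    "transmit load":  [("fracture", 2, 5)],
}

risks = []
for function, modes in history.items():
    for mode, likelihood, consequence in modes:
        risks.append((likelihood * consequence, function, mode))

# Highest-risk failure modes get design attention first.
for risk, function, mode in sorted(risks, reverse=True):
    print(risk, function, mode)
```

The designer can then carry the top-ranked modes forward into modeling equations or lifespan analyses before any component is committed.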

4.
Analysis of engineering failures is a complex process that requires information from personnel with expertise in many areas. From the information gathered, a failure analyst tries to discover what was fundamentally responsible for the failure. This fundamental cause is termed the "root cause" and helps in determining the sequence of events that led to the final failure. Root cause analysis also helps in finding solutions to the immediate problem and provides valuable guidelines on what needs to be done to prevent the recurrence of similar failures in the future. However, experience suggests that most failure analyses fall short of this goal. A significant number of failure analysts incorrectly use the term "root cause" when what they really establish is the primary cause of failure, or the simple physical cause. This paper examines a few service failures to demonstrate that the term root cause is not adequately understood.

5.
Qualification is frequently a time-critical activity at the end of a development project. As time-to-market is a competitive issue, the most efficient qualification efforts are of interest. A concept is outlined that proactively integrates qualification into the development process, provides a systematic procedure as a support tool for development, and gives early focus to the required activities. It converts product requirements into development and qualification measures, in combination with a risk and opportunity assessment step, and accompanies the development process as a guiding and recording tool for advanced quality planning and confirmation. The collected data enlarge the knowledge database for DFR/BIR (designing for reliability/building-in reliability) to be used in future projects. The procedure challenges and promotes teamwork among all the disciplines involved. Based on the physics-of-failure concept, the reliability qualification methodology is rearranged with regard to the relationships between design, technology, manufacturing and the different product life phases at use conditions. It makes use of the physics-of-failure concept by considering the potential individual failure mechanisms, and relates most reliability aspects to the technology rather than to the individual product design. The evaluation of complex products using common reliability models, and the definition of sample sizes with respect to systematic inherent product properties and fractions of defects, are discussed. Copyright © 2002 John Wiley & Sons, Ltd.

6.
Promoting risk communication in early design through linguistic analyses
The concept of function offers significant potential for transforming thinking and reasoning about engineering design, as well as providing a common thread for relating product risk information. This paper focuses specifically on risk data by examining how this information is addressed by a design team conducting early-stage design for space missions. A fundamental set of risk elements is proposed based on a linguistic analysis of the risk information needs of the design team. Sample risk statements are then decomposed into a set of key attributes that are used to scrutinize the risk information using three approaches from the pragmatics sub-field of linguistics: (1) Gricean analysis, (2) Relevance Theory, and (3) Functional Analysis. Based on the deficiencies identified in this analysis, a format for communicating risk data that explicitly accounts for the five risk attributes developed in this work is formulated.

7.
This paper discusses the fracture-prevention aspects of lifetime prediction. Initially, it is pointed out that lifetime can be determined by factors such as obsolescence and consumer rejection. Lifetime is then related to acceptable risk in order to make it compatible with advances in design philosophy for large welded structures. Accident statistics are cited, and the argument is made that the major opportunities for lifetime improvement are revealed by failure analysis and lie in design and production. However, there are some structures, e.g. boilers and pressure vessels, where the construction rules are so well established that failures occur mainly because of operational errors. Based on the results of the Battelle/NBS Cost of Fracture Study, attention is focused on the effect of material-property reproducibility in driving failure probability. Little evidence could be found of reproducibility improvements in fatigue lifetime and brittle fracture toughness in production lots of alloys over time.

8.
Corrective maintenance is a maintenance task performed to identify and rectify the cause of failures in a failed system. Engineering equipment has many components and failure modes, and its failure mechanisms are complicated: a system-level failure might occur due to failure(s) of any subsystem or component. Thus, the symptom failure of equipment may be caused by multilevel causality among latent failures. This paper proposes a complete corrective maintenance scheme for engineering equipment. First, FMECA is extended to organize the numerous failure modes. Second, the failure propagation model (FPM) is presented to depict the cause-effect relationships between failures; multiple FPMs make up the failure propagation graph (FPG). For a specific symptom failure, the FPG is built by iteratively searching for the cause failures with the FPM. Moreover, when some failure in the FPG is newly ascertained to occur (or not), the FPG needs to be adjusted; an FPG updating process is proposed to accomplish this adjustment under newly ascertained failures. The probability of the cause failures is then calculated by the fault diagnosis process. Third, conventional corrective maintenance recommends that the failure with the largest probability be ascertained first; the proposed approach, however, considers not only the probability but also the failure's detectability and severity. The term REN is introduced to measure the risk of a failure, and a binary decision tree is trained based on REN reduction to determine the failure ascertainment order. Finally, a case study implements the proposed approach on the ram feed subsystem of a boring machine tool. The results demonstrate the validity and practicality of the proposed method for corrective maintenance of engineering equipment.
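The abstract does not reproduce the REN formula, but a plausible sketch of ordering candidate cause failures by a risk measure that combines probability, severity and detectability could look like this; the formula, names and numbers are hypothetical stand-ins.

```python
# Hypothetical sketch: rank candidate cause failures by a combined
# risk score (probability x severity x detectability), so that likely,
# severe, hard-to-detect causes are ascertained first. Higher
# detectability ratings here mean "harder to detect", as in FMEA.

causes = {
    "worn bearing":   {"prob": 0.5, "severity": 3, "detectability": 2},
    "loose coupling": {"prob": 0.3, "severity": 5, "detectability": 4},
    "oil starvation": {"prob": 0.2, "severity": 4, "detectability": 1},
}

def risk(c):
    # Stand-in for a REN-style risk measure.
    return c["prob"] * c["severity"] * c["detectability"]

order = sorted(causes, key=lambda name: risk(causes[name]), reverse=True)
print(order)
```

Note how "loose coupling" outranks the more probable "worn bearing" once severity and detectability enter the score, which is the point of going beyond probability-only ordering.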

9.
Failure mode and effect analysis (FMEA) is a useful technique for identifying and quantifying potential failures. FMEA characterizes a potential failure mode by evaluating risk factors. In recent years, many studies have improved FMEA by allowing multiple experts to use linguistic term sets to evaluate the risk factors. However, it is important to design a framework that can consider both the weights of the risk factors and the weights of the experts. In addition, managing conflicts among experts is an urgent problem to be addressed. In this paper, we propose an FMEA model based on multi-granularity linguistic terms and Dempster–Shafer evidence theory. The weights of both experts and risk factors are taken into consideration; they are computed objectively and subjectively to ensure reasonableness. Further, we apply the method to an emergency department case, which shows its effectiveness.
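The Dempster–Shafer machinery underlying such a model can be sketched compactly: Dempster's rule combines two experts' basic probability assignments over the same frame of discernment, discarding and renormalizing conflicting mass. The frame and the mass values below are hypothetical.

```python
# Dempster's rule of combination for two experts' basic probability
# assignments (BPAs) over the same frame of discernment.
# Masses are assigned to subsets of the frame, here frozensets.
# The two-element frame and all mass values are hypothetical.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    # Normalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

HIGH, LOW = frozenset({"high risk"}), frozenset({"low risk"})
BOTH = HIGH | LOW  # "unsure": mass on the whole frame

expert1 = {HIGH: 0.7, BOTH: 0.3}
expert2 = {HIGH: 0.6, LOW: 0.2, BOTH: 0.2}

fused = combine(expert1, expert2)
print({tuple(sorted(s)): round(w, 3) for s, w in fused.items()})
```

Expert weights can be incorporated by discounting each BPA (shifting part of its mass to the whole frame) before combination, which is one common way such frameworks temper unreliable sources.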

10.
Failure mode and effects analysis (FMEA) is a widely used risk management technique for identifying the potential failures of a system, design, or process and determining the most serious ones for risk reduction. Nonetheless, the traditional FMEA method has been criticized for many deficiencies. Further, in the real world, FMEA team members usually exhibit bounded rationality, and thus their psychological behaviors should be considered. In response, this study presents a novel risk priority model for FMEA using interval two-tuple linguistic variables and an integrated multicriteria decision-making (MCDM) method. The interval two-tuple linguistic variables are used to capture FMEA team members' diverse assessments of the risk of failure modes and the weights of risk factors. An integrated MCDM method based on regret theory and TODIM (an acronym in Portuguese for interactive MCDM) is developed to prioritize failure modes while taking experts' psychological behaviors into account. Finally, an illustrative example on medical product development is included to verify the feasibility and effectiveness of the proposed FMEA. By comparison with other existing methods, the proposed linguistic FMEA approach is shown to be more advantageous for ranking failure modes in uncertain and complex environments.
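For context, the traditional risk priority number (RPN = severity × occurrence × detection) that approaches like this one aim to improve upon can be computed in a few lines; the failure modes and 1–10 ratings below are hypothetical.

```python
# The conventional FMEA risk priority number, RPN = S * O * D,
# which richer linguistic/MCDM models aim to improve upon.
# Ratings are on a 1-10 scale and are hypothetical.

failure_modes = {
    "seal leak":    (7, 4, 3),   # (severity, occurrence, detection)
    "sensor drift": (5, 6, 8),
    "board crack":  (9, 2, 5),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
ranking = sorted(rpn, key=rpn.get, reverse=True)
print(ranking)
```

Crisp multiplication like this ignores rating uncertainty, factor weights and expert behavior, which is exactly the gap interval linguistic variables and regret-theoretic MCDM methods try to close.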

11.
Reliability of products is here regarded with respect to failure avoidance rather than probability of failure. To avoid failures, we emphasize variation and suggest some powerful tools for handling failures due to variation. Thus, instead of technical calculation of probabilities from data that are usually too weak for correct results, we emphasize the statistical thinking that puts the designer's focus on the critical product functions. Making the design insensitive to unavoidable variation is called robust design and is handled by (i) identification and classification of variation, (ii) design of experiments to find robust solutions, and (iii) statistically based estimation of proper safety margins. Extensions of the classical failure mode and effect analysis (FMEA) are presented. The first extension consists of identifying failure modes caused by variation in the traditional bottom-up FMEA analysis. The second, variation mode and effect analysis (VMEA), is a top-down analysis that takes the product characteristics as a starting point and analyzes how sensitive these characteristics are to variation. When there is sufficiently detailed information on potential failure causes, the VMEA can be applied in its most advanced mode, the probabilistic VMEA. Variation is then measured as statistical standard deviations, and sensitivities are measured as partial derivatives. This method gives the opportunity to dimension tolerances and safety margins to avoid failures caused by both unavoidable variation and lack of knowledge regarding failure processes. Copyright © 2012 John Wiley & Sons, Ltd.
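The probabilistic VMEA step can be illustrated with Gauss' approximation formula: the variance of a product characteristic y = f(x1, …, xn) is approximated by the sum over inputs of (∂f/∂xi)² σi². The characteristic and all numbers below are a hypothetical illustration.

```python
# Probabilistic VMEA in miniature: variation in a product
# characteristic y = f(x1, ..., xn) is approximated by Gauss'
# formula, sigma_y^2 = sum_i (df/dx_i)^2 * sigma_i^2.
# The example characteristic (a clearance y = bore - shaft)
# and the numbers are hypothetical.
import math

sensitivities = {"bore": 1.0, "shaft": -1.0}   # partial derivatives
std_devs = {"bore": 0.02, "shaft": 0.015}      # mm

var_y = sum((sensitivities[x] * std_devs[x]) ** 2 for x in std_devs)
sigma_y = math.sqrt(var_y)

# A safety margin of, e.g., 3 sigma can then be dimensioned:
print(round(sigma_y, 4), round(3 * sigma_y, 4))
```

The same bookkeeping extends naturally to many variation sources, including an extra "lack of knowledge" term, which is how the method dimensions tolerances against both kinds of uncertainty.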

12.
Failure analysis of drillstrings
The cost of drilling a well is measured in tens of millions of dollars. The incidence of downhole failure of the drillstring can increase this figure dramatically. The focus placed on cost reduction in the early 1990s – when the oil price was much lower than today's levels – resulted in some scrutiny of drilling operations, amongst other areas. Drillstring failure was a natural part of this.

Despite the earlier attention, failure of drillstrings remains to this day an undesirable feature of oilwell drilling. The costs associated with lost time (to recover the drillstring from the well or to sidetrack; and recommence drilling) and the material cost of the damaged drillstring elements can be very high. This is especially true where the failing drillstring is not detected at the wash-out stage and complete separation subsequently takes place downhole.

Premature failures commonly fall into two general groups: at the threaded connections or in the body of the drillpipe at the internal taper.

This paper presents a number of case studies from the authors' own work that collect the results of investigations on failed drillstrings and components spanning over a decade of activity in the North Sea. Improvements in design practice, manufacture, use and inspection are also discussed.


13.
The research objective of this article is to fortify the failure mode taxonomy by including chemical failures. This inclusion enables comprehensive risk analysis of technology-based products. As technology improves at an exponential rate, partially owing to chemical advances in the semiconductor industry, failure identification tools must keep pace. While the current version of the failure mode taxonomy does consider multiple domains of failure, it does not include a comprehensive collection of chemical failures; it is therefore insufficient for a large number of new products. The research presented here identifies chemical failures from publications in the semiconductor industry. These failures were then analyzed to determine the rudimentary failure modes in each case. Finally, the newly identified failure modes were added to the failure mode taxonomy. A case study demonstrates using the updated failure mode taxonomy to identify both potential failures and product risks.

14.
In many forensic reports, failure cause assessments have so far been carried out by a deterministic approach. However, such a forensic investigation may lead to unreasonable results far from the real collapse scenario, because the deterministic approach does not systematically take into account the uncertainties involved in structural failures. A reliability-based failure cause assessment (reliability-based forensic engineering) methodology is developed that incorporates the uncertainties involved in structures and their failures, and it is applied to a collapsed bridge in order to identify the most critical failure scenario and find the cause that triggered the collapse. Moreover, to save evaluation time and cost, an automated event tree analysis (ETA) algorithm is proposed that automatically calculates the failure probabilities of the failure events and the occurrence probabilities of the failure scenarios. For the reliability analysis, uncertainties are estimated more reasonably by using a Bayesian approach based on the experimental laboratory test data in the forensic report. To demonstrate its applicability, the proposed approach is applied to the Hang-ju Grand Bridge, which collapsed during construction, and compared with the deterministic approach.
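The event tree idea can be sketched in miniature: each failure scenario is one path through the tree, and its occurrence probability is the product of the branch probabilities along that path. The events and probabilities below are hypothetical, not from the bridge investigation.

```python
# Sketch of automated event tree evaluation: enumerate all
# occur/not-occur combinations of the failure events; a scenario's
# occurrence probability is the product of its branch probabilities.
# Events and probabilities are hypothetical.
from itertools import product

events = {"member buckling": 0.02, "weld fracture": 0.05}

scenarios = {}
for outcome in product([True, False], repeat=len(events)):
    p = 1.0
    label = []
    for (name, pf), failed in zip(events.items(), outcome):
        p *= pf if failed else (1.0 - pf)
        label.append(name if failed else "no " + name)
    scenarios[tuple(label)] = p

# Most probable scenario that involves at least one failure event:
failure_paths = [s for s in scenarios
                 if any(not part.startswith("no ") for part in s)]
worst = max(failure_paths, key=scenarios.get)
print(worst, round(scenarios[worst], 4))
```

In the full methodology, each branch probability would itself come from a reliability analysis with Bayesian-updated parameter uncertainties rather than being a fixed number.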

15.
In this paper, marginal parts are equated with low quality and low reliability. Marginal parts can be shown to cause errors in some products during tests, and they are also a cause of field failures in these products. Although marginal-part causes still have a random failure-time component, they exhibit much less variation than our traditional failure causes, hidden flaws. I give marginal parts a measurable definition. If marginal effects can be established for a product, then this knowledge can be used to improve reliability. Some examples of products where I believe this marginal effect holds are discussed in this paper. Such marginal effects on reliability are gaining more and more importance in systems of increasing complexity. A strong point of the marginal parts theory framed in this paper is that it can readily be subjected to statistical testing to see whether it holds for any particular product.

16.
Stochastic analysis of failure of earth structures
Uncertainties in material data are a common inconvenience we face when working in the area of geotechnical engineering. Elements of mathematical statistics then often become a valuable tool for allowing reasonable predictions of the behavior of complex material systems. Such an approach is advocated in this paper through two representative examples. Stochastic analysis of failure of dump slopes (tailings) is addressed first, promoting the entire distribution function as an indispensable source of information to assess the quality of the structural system from the stability perspective. The general concept of probability of failure is then revisited in conjunction with time dependent failure of earth structures impaired by a gradual change in the level of ground water table. A conceptual assessment of the instantaneous failure rate, particularly when combined with in situ measurements, is offered as a valuable tool for the design engineer to foresee sudden and catastrophic failures.
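A minimal sketch of such a stochastic stability assessment: treat the soil parameters as random variables and estimate the probability of failure P(FS < 1) by Monte Carlo sampling of the factor of safety. The limit-state function and the distributions below are hypothetical placeholders, not a real slope model.

```python
# Sketch of a stochastic stability check: the factor of safety (FS)
# of a slope is a function of uncertain soil parameters, and the
# probability of failure is P(FS < 1), estimated by Monte Carlo.
# The toy limit-state function and distributions are hypothetical.
import random

random.seed(1)

def factor_of_safety(cohesion, friction_tan):
    # Toy limit state: resisting term over a fixed driving term.
    return (cohesion + 100.0 * friction_tan) / 60.0

N = 100_000
failures = 0
for _ in range(N):
    c = random.gauss(30.0, 8.0)    # cohesion, kPa
    t = random.gauss(0.45, 0.08)   # tan(friction angle)
    if factor_of_safety(c, t) < 1.0:
        failures += 1

print(failures / N)  # estimated probability of failure
```

Sampling the full distribution, rather than checking a single deterministic FS, is what lets the entire distribution function inform the stability assessment, as the abstract advocates.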

17.
This research addresses a need in systems engineering to verify that a system can meet performance requirements; this is done by integrating failure behavior into the system's nominal model during the initial stages of design. In general, failure behavior is not used in early assessments, leading to increased uncertainty in the model's validity. Current libraries do not model failures and thus cannot confidently address how a design will function in its intended operational environments. Since failures arise from effects of the operational environment, they should be included during verification and validation efforts. Current approaches capture off-nominal behavior using parameter variation, where flow variables and parameters are varied to measure the system-level effect. This approach is ad hoc and does not accurately capture failure mode behavior. To address this limitation, an approach is developed to understand and implement failure mode behavior in nominal models. The Modelica Standard Library (MSL) is used as the example component library of nominal models; MSL contains a significant amount of basic nominal component behavior and is therefore desirable for this research. Two approaches are developed to implement failure mode behavior: the first uses transfer functions and use case graphs, and the second uses existing literature. In addition, complex systems often have a large number of components and an even larger number of failure modes. Since the goal is to limit development time, we present an approach to identify high-risk failure modes; it captures an early system-level effect of each failure mode and uses occurrence ratings to calculate risk. To show the usefulness of each method, two examples are provided: a vehicle drivetrain subsystem with a variety of failures, and a diesel engine with fuel injector and valve failures.

18.
Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.

19.
The increase in life expectancy worldwide over recent decades has led to significant growth in the use of surgical implants to replace bones and teeth in affected patients. Other factors, such as scientific and technological development and more frequent exposure of individuals to trauma risk, have also contributed to this general trend. Metallic materials designed for surgical implants, whether orthopedic or dental, must exhibit a set of properties in which biocompatibility, mechanical strength, and resistance to degradation (by wear or corrosion) are of primary importance. To achieve these aims, orthopedic materials must fulfill certain requirements, usually specified in standards, covering chemical composition, microstructure, and even macrographic appearance. In the present work, three cases of implant failure are presented. These cases demonstrate the most frequent causes of premature failure in orthopedic implants: inadequate surgical procedures and processing/design errors. Evaluation techniques, including optical and scanning electron microscopy (SEM), were used to assess macroscopic and microstructural aspects of the failed implants, and the chemical composition of each material was analyzed. These evaluations showed that design errors and improper surgical procedures, or outright violations of standards, were the causes of the failures.

20.
In the big data era, data unavailability, either temporary or permanent, is a normal daily occurrence. Unlike permanent data failures, which are fixed through a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly recovered data is discarded after serving the request, on the assumption that data experiencing a temporary failure could come back alive later. Such disposal of failure data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g. those caused by recurring unavailability of the same data, or by multiple data failures in one stripe), thereby straining system performance.
To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs an opportunistically greedy caching approach that promotes not only the failed data but also sequential, failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and frequency to accommodate data corruption with a good hit ratio. Data stored in GFCache can also serve fast reads for normal data access. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding schemes and parameters. Evaluations show that GFCache achieves a good hit ratio with its caching algorithm and significantly boosts system performance by reducing unnecessary data recoveries for vulnerable data in the cache.
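As a much-simplified stand-in for a replacement policy that, like FARC, weighs both recency and frequency, the toy cache below evicts the entry with the lowest (frequency, recency) score. This is only an illustration of the balancing idea, not the FARC algorithm from the paper.

```python
# Toy failure cache: on a miss the recovered block is inserted, and
# the victim is the entry with the lowest (frequency, last-access)
# score, so both cold and stale failure data get evicted first.
# This is a hypothetical simplification, not FARC itself.

class FailureCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.entries = {}   # key -> [frequency, last_access]

    def access(self, key):
        """Return True on a hit; insert (possibly evicting) on a miss."""
        self.clock += 1
        if key in self.entries:
            self.entries[key][0] += 1
            self.entries[key][1] = self.clock
            return True
        if len(self.entries) >= self.capacity:
            victim = min(self.entries,
                         key=lambda k: tuple(self.entries[k]))
            del self.entries[victim]
        self.entries[key] = [1, self.clock]
        return False

cache = FailureCache(capacity=2)
trace = ["a", "b", "a", "c", "a", "c"]
hits = sum(cache.access(block) for block in trace)
print(hits)
```

Keeping recurring failures ("a" above) resident is exactly what spares the repeated recovery work that discarding revived data would otherwise incur.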


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号