Similar Documents
20 similar documents found.
1.
Preventive maintenance scheduling for repairable system with deterioration
Maintenance, as an important part of a manufacturing system, keeps equipment in good condition. Many maintenance policies, such as conventional preventive maintenance, help decrease unexpected failures and reduce high operational costs. However, conventional preventive maintenance policies use a fixed time interval T and thus easily neglect the system's reliability, because the system deteriorates with increasing usage and age. Hence, this study develops a reliability-centred sequential preventive maintenance model for a monitored, repairable, deteriorating system. It is assumed that the system's reliability can be monitored continuously and perfectly; whenever it reaches the threshold R, an imperfect repair must be performed to restore the system. In this model, both the system's failure rate function and its operational cost depend on the system's current condition, which helps determine the optimal reliability threshold R and the number of preventive maintenance cycles. Finally, through a case study, simulation results show that the improved sequential preventive maintenance policy is more practical and efficient.
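The threshold policy this abstract describes can be sketched in a few lines. The Weibull reliability form and the ageing factor `a` below are illustrative assumptions, not the paper's actual model: the system is maintained whenever its reliability R(t) falls to the threshold, and each imperfect repair shortens the next cycle.

```python
import math

def pm_intervals(eta, beta, r_threshold, a=1.05, cycles=4):
    """Successive PM intervals for a Weibull system, R(t) = exp(-(t/eta)**beta),
    maintained whenever reliability drops to r_threshold.  After each
    imperfect repair the characteristic life eta is assumed to shrink by
    the factor a, so the intervals get shorter cycle by cycle."""
    intervals = []
    for _ in range(cycles):
        # solve exp(-(t/eta)**beta) = r_threshold for t
        t = eta * (-math.log(r_threshold)) ** (1.0 / beta)
        intervals.append(t)
        eta /= a  # imperfect repair: the system ages faster afterwards
    return intervals
```

With, say, eta = 1000 h, beta = 2 and R = 0.9, this yields a decreasing sequence of maintenance intervals, matching the sequential (non-periodic) character of the policy.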

2.
Proactivity in maintenance, mainly realized through degradation-based anticipation, is essential to avoid failure situations that negatively affect product and/or system condition. It has given rise to the E-maintenance philosophy, which moves from "fail and fix" maintenance practices to "predict and prevent" strategies. Within these new strategies, the anticipation action is fully supported by the prognosis business process. Indeed, it analyses the impact of degradation on the component itself, but also on the global performance of the production system, in order to predict future failures of the system and investigate future maintenance actions. However, only a few research works focus on a generic and scalable prognostic approach; existing methods are generally restricted to a component-level view and to solving the failure-prediction issue. Consequently, the contribution presented in this paper is a global formalization of the generic prognosis business process. Through an instantiation procedure, this generic process can then be used to develop specific prognosis processes for particular applications, as shown in this paper with the case of the E-maintenance platform developed within the DYNAMITE Project.

3.
This article describes the User Model component of AthosMail, a speech-based interactive e-mail application developed in the context of the EU project DUMAS. The focus is on the system's adaptive capabilities and user-expertise modelling, exemplified through the User Model parameters dealing with the initiative and explicitness of the system responses. The purpose of the research was to investigate how users could interact with a system in a more natural way; the two aspects that mainly influence the system's interaction capabilities, and thus the naturalness of the dialogue as a whole, are considered to be dialogue control and the amount of information provided to the user. The User Model produces recommendations for the system's appropriate reaction depending on the user's observed competence level, monitored and computed on the basis of the user's interaction with the system. The article also discusses methods for the evaluation of adaptive user models and presents results from the AthosMail evaluation. (The research was done while the author was affiliated with the University of Art and Design Helsinki as the scientific coordinator of the DUMAS project.)

4.
An important goal of autonomic computing is the development of computing systems that are capable of self-healing with a minimum of human intervention. Typically, recovery from even a simple fault requires knowledge of the environment in which a computing system operates. To meet this need, we present an approach to self-healing and recovery informed by environment knowledge that combines case-based reasoning (CBR) and rule-based reasoning. Specifically, CBR is used for fault diagnosis, and rule-based reasoning for fault remediation, recovery, and referral. We also show how automated information gathering from available sources in a computing system's environment can increase problem-solving efficiency and help reduce the occurrence of service failures. Finally, we demonstrate the approach in an intelligent system for fault management in a local printer network.
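The diagnose-then-remediate split described above can be sketched minimally, using the printer-network setting as an example; the case base, symptom names and rules below are invented for illustration, not taken from the paper's system.

```python
def retrieve_diagnosis(case_base, symptoms):
    """CBR step: return the diagnosis of the stored case whose symptom
    set overlaps most with the observed symptoms."""
    best = max(case_base,
               key=lambda c: len(set(c["symptoms"]) & set(symptoms)))
    return best["diagnosis"]

# Rule-based step: map a diagnosis to a remediation, falling back to
# referral (a human operator) when no rule applies.
RULES = {
    "paper_jam": "clear tray and resume queue",
    "driver_crash": "restart spooler service",
}

def remediate(diagnosis):
    return RULES.get(diagnosis, "refer to operator")

CASE_BASE = [
    {"symptoms": ["offline", "error_13"], "diagnosis": "paper_jam"},
    {"symptoms": ["spooler_dead"], "diagnosis": "driver_crash"},
]
```

The fall-back to referral mirrors the paper's three-way split of remediation, recovery, and referral when automated handling is insufficient.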

5.
One reason that researchers may wish to demonstrate that an external software quality attribute can be measured consistently is so that they can validate a prediction system for the attribute. However, attempts at validating prediction systems for external subjective quality attributes have tended to rely on experts indicating that the values provided by the prediction systems informally agree with the experts' intuition about the attribute. These attempts are undertaken without a pre-defined scale on which the attribute is known to be measurable consistently. Consequently, a valid unbiased estimate of the predictive capability of the prediction system cannot be given, because the experts' measurement process is not independent of the prediction system's values. Usually, no justification is given for not checking whether the experts can measure the attribute consistently. It seems to be assumed that subjective measurement is not proper measurement, that subjective measurement cannot be quantified, or that no one knows the true values of the attributes anyway and they cannot be estimated. However, even though the classification of software systems' or software artefacts' quality attributes is subjective, it is possible to quantify experts' measurements in terms of conditional probabilities. It is then possible, using a statistical approach, to assess formally whether the experts' measurements can be considered consistent. If the measurements are consistent, it is also possible to identify estimates of the true values, which are independent of the prediction system. These values can then be used to assess the predictive capability of the prediction system. In this paper we use Bayesian inference, Markov chain Monte Carlo simulation and missing-data imputation to develop statistical tests for consistent measurement of subjective ordinal-scale attributes.

6.
A new design method for a stable dynamic output feedback (DOF) controller in linear MIMO systems is presented in the framework of real Grassmann spaces. For the analysis, the DOF systems are decomposed into augmented static output feedback (SOF) systems using signal-flow-graph analysis of all DOF loops. For synthesis and design, the characteristic polynomial of the augmented SOF system for the system's stable poles and the sub-characteristic polynomial of the sub-SOF system for the controller's stable poles are parametrized within their Grassmann invariants in real Grassmann spaces, whose coordinates are defined in the real coefficient function spaces of their augmented SOF variables. The numerical parametrization and computation algorithm for a stable controller design is illustrated on a MIMO plant of a practical aircraft carrier model.

7.
We model the reliability allocation and prediction process across a hierarchical software system comprised of modules, subsystems, and the system. We experiment with modeling complex reliability software systems using several software reliability models, to test the feasibility of the process and to evaluate the accuracy of the models for this application. This subject deserves research and experimentation because this type of system is implemented in safety-critical projects, such as the National Aeronautics and Space Administration (NASA) flight software modules that we use in our experiments. Given the reliability requirement of a software system in the software planning or design stage, we predict each module's reliability and their relationships (e.g., reliability interactions among modules, subsystems, and the system). Our critical interfaces and components are failure-mode sequences and the modules that comprise these sequences, respectively. In addition, we evaluate how sensitive the achievement of reliability goals is to predicted component reliabilities that do not meet expectations.
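For the simplest hierarchical case, a series configuration, the allocation step above reduces to straightforward arithmetic. Equal apportionment is just one possible allocation scheme, shown here as an assumption, not as the paper's method:

```python
def system_reliability(module_rs):
    """Series system: the system works only if every module works,
    so system reliability is the product of module reliabilities."""
    r = 1.0
    for m in module_rs:
        r *= m
    return r

def allocate_equal(target, n_modules):
    """Equal apportionment: each of n modules gets target**(1/n),
    so the product of the module reliabilities meets the system target."""
    return target ** (1.0 / n_modules)
```

For example, a 0.90 system target over three series modules requires each module to achieve about 0.965.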

8.
This paper describes a comprehensive prototype of large-scale fault-adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring, adapting automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate the design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved makes it impossible to design an 'expert system' that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.

9.
A dynamic fault detection algorithm for grid environments
Existing grid systems have a relatively high probability of errors, and existing fault detection algorithms cannot effectively meet grid systems' requirements. To address this, a dynamic fault detection algorithm for grid environments is proposed. Based on the characteristics of grid systems and the idea of unreliable failure detectors, a grid system model and a fault detection model are established. Combining a heartbeat strategy with grey prediction, a dynamic heartbeat mechanism is designed, together with its prediction model and real-time prediction strategy. A grid fault detection algorithm based on this dynamic heartbeat mechanism is then proposed, and its reliability is analysed. Simulation results show that the algorithm is correct and effective, and can be used for fault detection in grid environments.
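The grey-prediction part of such a dynamic heartbeat mechanism can be sketched with a standard GM(1,1) model: fit the recent heartbeat inter-arrival times, predict the next one, and set the detection timeout to the prediction plus a safety margin. This is a generic GM(1,1) implementation, not the paper's exact formulation:

```python
import math

def gm11_next(x0):
    """GM(1,1) grey prediction: given a short positive sequence x0
    (e.g. recent heartbeat inter-arrival times), return the predicted
    next value."""
    n = len(x0)
    # accumulated generating sequence x1
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # least squares for the grey parameters a, b in x0(k) + a*z1(k) = b
    s11 = s12 = s22 = t1 = t2 = 0.0
    for k in range(1, n):
        z = 0.5 * (x1[k] + x1[k - 1])   # mean generating value
        s11 += z * z
        s12 += -z
        s22 += 1.0
        t1 += -z * x0[k]
        t2 += x0[k]
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - s12 * t2) / det
    b = (s11 * t2 - s12 * t1) / det
    if abs(a) < 1e-12:
        return x0[-1]  # degenerate (constant) series: repeat last value
    # model values of the accumulated sequence, then difference back
    c = x0[0] - b / a
    x1_pred_next = c * math.exp(-a * n) + b / a
    x1_pred_last = c * math.exp(-a * (n - 1)) + b / a
    return x1_pred_next - x1_pred_last

def next_timeout(intervals, margin=1.5):
    """Dynamic heartbeat timeout: predicted next interval times a margin."""
    return margin * gm11_next(intervals)
```

A process is then suspected only when no heartbeat arrives within the dynamically predicted timeout, which adapts as network delay drifts.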

10.
A model is proposed for controlling the reliability of renewable (repairable) systems; it is applied to determine the moment at which a system failure occurs and its repair should take place. The presented model differs from known models in that the probability of failure detection is multiplied by the leading function among the repair-probability functions; this leading function is called the repair resource. Thus, the probability of correct control determines the magnitude of the repair resource, and thereby also controls the length of the regeneration period of the random process describing the repaired system.

11.
We show a method of representing basic economic characteristics of the functioning of the Russian Compulsory Motor Third Party Liability (CMTPL) system as a reliability-theory system of a special kind with independent components. Each component is characterized by the number of faults (i.e., the number of road traffic accidents), its damage level (i.e., the amount of damage inflicted on third parties), and the initial endurance characteristic (i.e., the insurance premium). We mainly deal with graphical and analytic computerized statistical methods for analyzing the system's operation, and give recommendations on keeping the system operational.

12.
13.
We give a survey of the works of B.V. Gnedenko's reliability school, from the 1950s up to recent years, in two directions: (1) invariance of state distributions for queueing systems and networks, and (2) asymptotic behavior of a redundant system's characteristics under low load.

14.
Due to the rapid development of IC technology, traditional packaging concepts are transitioning into more complex system integration techniques in order to meet the constantly increasing demand for more functionality, performance, miniaturisation and lower cost. These new packaging concepts (e.g. system in package, 3D integration, MEMS devices) will have to combine smaller structures and layers made of new materials with even higher reliability. As these structures will increasingly display nano-features, a coupled experimental and simulation-based approach must account for this development to assure design for reliability in the future. A necessary "nano-reliability" approach as a scientific discipline has to encompass research on the properties and failure behaviour of materials and material interfaces under explicit consideration of their micro- and nano-structure and the effects it induces. It uses micro- and nano-analytical methods in simulation and experiment to consistently describe failure mechanisms over these length scales for more accurate and physically motivated lifetime prediction models. This paper deals with the thermo-mechanical reliability of microelectronic components and systems, and with methods to analyse and predict it. Various methods are presented to enable lifetime prediction at the system, component and material level, the latter promoting the field of nano-reliability for future packaging challenges in advanced electronics system integration.

15.
To study the overall behaviour of system faults under the superposition of different factors, the degree of fault variation, and the fault information content, the concept of system fault entropy is proposed. Based on the linear-uniformity property of linear entropy, a linear entropy model is derived for the case where each factor phase is divided into two states. Linear entropy is taken to characterize system fault entropy, and the time-varying behaviour of system fault entropy is then studied. System faults under the superposition of different factor states over successive time intervals are counted to obtain the system fault probability distribution, and the time-varying curve of system fault entropy is plotted. The results support at least three tasks: obtaining the variation of system fault entropy under the influence of different factors, the overall variation law of system fault entropy, and the stability of system reliability. The approach can be applied to fault and data analysis in various fields under similar conditions.
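The paper's exact linear-entropy model is not reproduced in the abstract, but a common two-state linear entropy, L(p) = 4p(1-p) (an assumed form, used here only for illustration), shows the intended behaviour: it is maximal when the two fault states are equally likely and vanishes when one state is certain. The time-varying curve is then just this function applied to the per-interval fault probabilities:

```python
def linear_entropy(p):
    """Two-state linear entropy, normalised to [0, 1]: maximal at
    p = 0.5, zero at p = 0 or 1.  (Illustrative form, 4*p*(1-p).)"""
    return 4.0 * p * (1.0 - p)

def entropy_curve(fault_probs):
    """System fault entropy over successive time intervals, given the
    estimated fault probability in each interval."""
    return [linear_entropy(p) for p in fault_probs]
```

Plotting `entropy_curve` over time gives the kind of time-varying fault-entropy curve the abstract describes, with peaks where fault occurrence is most uncertain.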

16.
Goal-driven risk assessment in requirements engineering
Risk analysis is traditionally considered a critical activity for the whole software system's lifecycle. Risks are identified by considering technical aspects (e.g., failures of the system, unavailability of services, etc.) and handled by suitable countermeasures through a refined design. This, however, introduces the problem of reconsidering the system requirements. In this paper, we propose a goal-oriented approach for analyzing risks during the requirements analysis phase. Risks are analyzed along with stakeholder interests, and countermeasures are then identified and introduced as part of the system's requirements. This work extends the Tropos goal modeling formal framework, proposing new concepts, qualitative reasoning techniques, and methodological procedures. The approach is based on a conceptual framework composed of three main layers: assets, events, and treatments. We use a "loan origination process" case study to illustrate the proposal, and we present and discuss experimental results obtained from it.

17.
Safety analysis can be labour-intensive and error-prone for system designers. Moreover, even a relatively minor change to a system's design can necessitate a complete reworking of the system safety analysis. This paper proposes the use of Behavior Trees and model checking to automate Cut Set Analysis (CSA): that is, the identification of combinations of component failures that can lead to hazardous system failures. We demonstrate an automated incremental approach to CSA, in which models are extended incrementally and previous results are incorporated in such a way as to significantly reduce the time and effort required for the new analysis. The approach is demonstrated on a case study concerning the hydraulics systems of the Airbus A320 aircraft.
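Behavior Tree model checking is out of scope for a short sketch, but the core of cut set analysis, enumerating minimal combinations of component failures that cause a hazard, can be shown over a small AND/OR fault tree. The tree shape and component names below are invented for illustration:

```python
def cut_sets(node):
    """Expand an AND/OR fault tree into cut sets.  A node is either a
    basic-event name (string) or a tuple ('AND'|'OR', child, child, ...)."""
    if isinstance(node, str):
        return [{node}]
    op, children = node[0], node[1:]
    if op == "OR":                      # any child failing is enough
        sets = []
        for child in children:
            sets.extend(cut_sets(child))
        return sets
    sets = [set()]                      # AND: all children must fail
    for child in children:
        sets = [s | c for s in sets for c in cut_sets(child)]
    return sets

def minimal_cut_sets(sets):
    """Keep only minimal cut sets: drop any set that strictly
    contains another cut set."""
    return [s for s in sets if not any(o < s for o in sets)]
```

For a hazard that occurs if both redundant pumps fail or a single main valve fails, the minimal cut sets are {main_valve} and {pump_A, pump_B}; the incremental approach in the paper aims to avoid recomputing such sets from scratch after each design change.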

18.
Improving the field performance of telecommunication systems is a key objective of both telecom suppliers and operators, as an increasing number of business-critical systems worldwide rely on dependable telecommunication. Early defect detection improves field performance in terms of reduced field failure rates and reduced intrinsic downtime. This paper describes an integrated approach to improving early defect detection and thus the field reliability of telecommunication switching systems. The assumption at the start of the projects discussed in this paper is that wide application of code inspections and thorough module testing must lead to a lower fault detection density in subsequent phases. At the same time, criteria for selecting the most critical components for code reviews, code inspections and module test are provided in order to optimize efficiency. The primary goal is to identify critical components and to make failure predictions as early as possible during the life cycle, and hence reduce the managerial risk associated with too early or too late release of such a system to the field. During testing, release-time prediction and field-performance prediction are both based on tailored and superposed ENHPP reliability models. Experiences from projects of Alcatel's Switching and Routing Division are included to show practical impact. This revised version was published online in June 2006 with corrections to the cover date.
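The tailored ENHPP models themselves are not given in the abstract. As a simpler stand-in, the classic Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) illustrates how such models turn accumulated test time into expected detected and remaining faults, which is what release decisions are based on:

```python
import math

def expected_faults(a, b, t):
    """Goel-Okumoto mean value function: expected number of faults
    detected by time t, with a = total expected faults and
    b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def remaining_faults(a, b, t):
    """Expected faults still latent at time t; a release criterion can
    be phrased as a threshold on this quantity."""
    return a - expected_faults(a, b, t)
```

In practice `a` and `b` are estimated from observed failure data, and the model is refitted as testing progresses, so the release-time prediction sharpens over the test phase.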

19.
Benchmarking quality measurement
This paper gives a simple benchmarking procedure for companies wishing to develop measures for software quality attributes of software artefacts. The procedure does not require that a proposed measure is a consistent measure of a quality attribute. It requires only that the measure shows agreement most of the time. The procedure provides summary statistics for measures of quality attributes of a software artefact. These statistics can be used to benchmark subjective direct measurement of a quality attribute by a company’s software developers. Each proposed measure is expressed as a set of error rates for measurement on an ordinal scale and these error rates enable simple benchmarking statistics to be derived. The statistics can also be derived for any proposed objective indirect measure or prediction system for the quality attribute. For an objective measure or prediction system to be of value to the company it must be ‘better’ or ‘more objective’ than the organisation’s current measurement or prediction capability; and thus confidence that the benchmark’s objectivity has been surpassed must be demonstrated. By using Bayesian statistical inference, the paper shows how to decide whether a new measure should be considered ‘more objective’ or whether a prediction system’s predictive capability can be considered ‘better’ than the current benchmark. Furthermore, the Bayesian inferential approach is easy to use and provides clear advantages for quantifying and inferring differences in objectivity.
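The benchmarking idea above, scoring a proposed measure by how often it agrees with the developers' ordinal ratings and treating the agreement rate as a Bernoulli parameter with a Beta posterior, can be sketched as follows. The conjugate Beta-Bernoulli update is a simplification of the paper's Bayesian treatment, shown only to make the mechanics concrete:

```python
def agreements_within(tolerance, benchmark, proposed):
    """Count ordinal-scale ratings where the proposed measure falls
    within `tolerance` categories of the benchmark rating."""
    return sum(abs(x - y) <= tolerance
               for x, y in zip(benchmark, proposed))

def beta_posterior_mean(successes, trials, a0=1.0, b0=1.0):
    """Posterior mean of the agreement probability under a
    Beta(a0, b0) prior (uniform by default)."""
    return (successes + a0) / (trials + a0 + b0)
```

Two candidate measures or prediction systems can then be compared through their posterior agreement probabilities against the same benchmark ratings.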

20.
In this paper, the problem of the asymptotic and robust stability of genetic regulatory networks with time-varying delays and stochastic disturbance is considered. The time-varying delays include not only discrete delays but also distributed delays. The parameter uncertainties are time-varying and norm-bounded. Based on Lyapunov stability theory and the Lur'e system approach, sufficient conditions are given to ensure the stability of genetic regulatory networks. All the stability conditions are given in terms of linear matrix inequalities, which are easy to verify. An illustrative example is presented to show the effectiveness of the obtained results.
