Similar documents
Found 20 similar documents (search time: 31 ms)
1.
This paper presents an application of functional resonance accident models (FRAM) to the safety analysis of complex socio-technical systems, i.e. systems which include not only technological but also human and organizational components. The supervision of certain industrial domains provides a good example of such systems because, although more and more actions for piloting installations are now automated, there always remains a decision level (at least in the management of degraded modes) involving human behavior and organizations. The field of application of the study presented here is railway traffic supervision, using modern automatic train supervision (ATS) systems. Examples taken from railway traffic supervision illustrate the principal advantage of FRAM over classical safety analysis models, i.e. its ability to take into account technical as well as human and organizational aspects within a single model, thus allowing true multidisciplinary cooperation between specialists from the different domains involved. A FRAM analysis is used to interpret experimental results obtained from a real ATS system linked to a railway simulator that places operators (experimental subjects) in simulated situations involving incidents. The first results show a significant dispersion in performance among different operators when detecting incidents. Subsequent work in progress aims to make these "performance conditions" more homogeneous, mainly through ergonomic modifications. It is clear that the current human-machine interface (HMI) in ATS systems (a legacy of past technologies that used LED displays) has reached its limits and needs to be improved, for example by highlighting the most pertinent information for a given situation (and, conversely, by removing irrelevant information likely to distract operators).

2.
3.
Paes  R. Carvalho  G. Lucena  C. Choren  R. 《Software, IET》2009,3(2):124-139
In an open multi-agent system (MAS), agent autonomy and heterogeneity make it possible for agents to exploit cooperation, leading the system to an undesirable state. Since an MAS has no central control, a coordination mechanism must be developed to allow agents to fulfill their design goals. It is proposed to incorporate the dependability explicit computing (DepEx) ideas into a law-governed approach in order to build a dependable open MAS. The authors show that the law specification can explicitly incorporate dependability concerns, collect data and publish them in a metadata registry. These data can be used to realise DepEx and can, for example, help to guide design and runtime decisions. The advantages of using a law-governed approach are (i) the explicit specification of dependability concerns; (ii) the automatic collection of dependability metadata, reusing the infrastructure of the mediators present in law-governed approaches; and (iii) the ability to specify reactions to undesirable situations, thus preventing service failures.

4.
This paper describes the application of dot chart analysis to a semicontinuous catalytic hydrogenation unit. Dot chart tables have been used as a basis for developing the recursive operability analysis and the fault trees (FTs), whose aim is to determine the safety of both the unit and its operators. The unit is formed of two reactors in parallel: the transfer of operations from one reactor to the other when its catalyst is exhausted is performed by means of the isolation systems installed for this purpose on the inlet and outlet lines. The FTs assessed the expected number of leaks at 3×10−3 occurrences per mission time. The study clearly showed that the operations could be regarded as safe since, with minor modifications to the control system and operating procedures, these leaks would be of pressurised nitrogen and hence without consequences for the unit and its operators.

5.
Today, new technologies (distributed systems, network communication) are increasingly integrated into applications that must meet real-time and criticality constraints. This means that such technology-based components must increasingly be integrated into systems or sub-systems dedicated to safety or subject to a high level of criticality. Control systems are generally evaluated as a function of required performance (overshoot, rise time, response time) under the condition that a stability criterion is respected. Reliability evaluation of such systems is not trivial because classical methods generally do not take into account the temporal and dynamic properties that are the basis of control systems. The methodology proposed in this paper provides an approach for the dependability evaluation of control systems, based on Monte Carlo simulation, contributing to the integration of automatic control and dependability constraints.
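Not part of the abstract: the Monte Carlo approach it mentions can be sketched for a simple series system with exponential failure times. All failure rates, the mission time and the trial count below are illustrative assumptions, not values from the paper.

```python
import random

def simulate_mission(failure_rates, mission_time, n_trials=100_000, seed=42):
    """Estimate mission reliability of a series system by Monte Carlo.

    failure_rates: per-hour failure rates of the components in series.
    Returns the fraction of trials in which no component fails
    before mission_time.
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        # Draw an exponential failure time for each component.
        times = [rng.expovariate(lam) for lam in failure_rates]
        if min(times) > mission_time:
            survived += 1
    return survived / n_trials

# Series system of three components over a 100 h mission.
r_hat = simulate_mission([1e-3, 5e-4, 2e-4], mission_time=100.0)
# Analytical check: exp(-(sum of rates) * t) = exp(-0.17) ≈ 0.844
```

The value of the simulation approach advocated in the paper is that, unlike this closed-form check, it still works once deterministic delays, repairs or control-loop dynamics are added to the model.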

6.
This article presents a model for calculating the dependability performance of complex technical systems. According to ISO‐IEC 300, dependability is an overall indicator of the quality of service and simultaneously considers reliability, maintainability, and maintenance support. For a proper understanding of the quality of service of any technical system, it is important to define dependability performance at the level of a single component as well as at the upper levels, i.e. the levels of subsystems and the entire system. As the dependability indicators (reliability, maintainability, and maintenance support) are defined as linguistic variables, the fuzzy max–min composition is used for the determination of dependability and the integration of its indicators. A procedure for the synthesis of single-component dependability performance up to the upper levels of a complex technical system is proposed. Max–min composition is again used as a tool for fuzzy synthesis because it enables obtaining the comprehensive and synergetic effect in the process of dependability evaluation. A practical engineering example (mechanical systems of a bucket wheel excavator) is used to demonstrate the proposed dependability synthesis model. Copyright © 2012 John Wiley & Sons, Ltd.
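Not from the paper: the fuzzy max–min composition it relies on can be sketched in a few lines. The membership values and the relation matrix below are purely hypothetical; the paper's actual linguistic classes and relations differ.

```python
def max_min_compose(v, R):
    """Max-min composition of fuzzy vector v with relation matrix R:
    out[j] = max over i of min(v[i], R[i][j])."""
    return [max(min(v[i], R[i][j]) for i in range(len(v)))
            for j in range(len(R[0]))]

# Hypothetical membership of a 'reliability' indicator over three
# linguistic classes (e.g. low / medium / high).
reliability = [0.2, 0.7, 0.4]

# Hypothetical fuzzy relation mapping indicator classes to
# dependability classes.
R = [[1.0, 0.3, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.3, 1.0]]

dependability = max_min_compose(reliability, R)
# dependability == [0.3, 0.7, 0.4]
```

Because max and min never amplify values, the composed result stays a valid fuzzy membership vector, which is what makes the operator suitable for propagating linguistic assessments up through subsystem levels.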

7.
Bayesian Networks (BN) provide a robust probabilistic method of reasoning under uncertainty. They have been successfully applied in a variety of real-world tasks but they have received little attention in the area of dependability. The present paper is aimed at exploring the capabilities of the BN formalism in the analysis of dependable systems. To this end, the paper compares BN with one of the most popular techniques for dependability analysis of large, safety critical systems, namely Fault Trees (FT). The paper shows that any FT can be directly mapped into a BN and that basic inference techniques on the latter may be used to obtain classical parameters computed from the former (i.e. reliability of the Top Event or of any sub-system, criticality of components, etc.). Moreover, by using BN, some additional power can be obtained, both at the modeling and at the analysis level. At the modeling level, several restrictive assumptions implicit in the FT methodology can be removed and various kinds of dependencies among components can be accommodated. At the analysis level, a general diagnostic analysis can be performed. The comparison of the two methodologies is carried out by means of a running example, taken from the literature, that consists of a redundant multiprocessor system.
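Not from the paper: the FT-to-BN mapping can be illustrated on a toy tree, TOP = A AND (B OR C). Each gate becomes a deterministic conditional distribution, and enumeration over the joint then yields both the classical top-event probability and a diagnostic posterior that a plain FT cannot give directly. The failure probabilities are illustrative, not the paper's running example.

```python
from itertools import product

# Basic-event failure probabilities (illustrative values).
p = {'A': 0.01, 'B': 0.05, 'C': 0.02}

def top(a, b, c):
    # Fault tree logic encoded as a deterministic CPT: TOP = A AND (B OR C).
    return a and (b or c)

# BN-style inference by enumeration over the joint distribution.
p_top = 0.0
p_top_and_a = 0.0
for a, b, c in product([0, 1], repeat=3):
    w = ((p['A'] if a else 1 - p['A'])
         * (p['B'] if b else 1 - p['B'])
         * (p['C'] if c else 1 - p['C']))
    if top(a, b, c):
        p_top += w
        if a:
            p_top_and_a += w

# Classical FT result: P(TOP) = P(A) * (1 - (1-P(B))*(1-P(C))) = 0.00069
# Diagnostic query enabled by the BN view: P(A failed | TOP occurred).
p_a_given_top = p_top_and_a / p_top   # 1.0 here, since TOP requires A
```

The diagnostic direction (from observed failure back to component posteriors) is exactly the "additional power" at the analysis level that the abstract refers to.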

8.
Very often, in dependability evaluation, the systems under study are assumed to have Markovian behavior. This assumption greatly simplifies the calculations, but it introduces significant errors when the systems contain deterministic or quasi-deterministic processes, as often happens with industrial systems. Existing methodologies for non-Markovian systems, such as the device stage method [1], the supplementary variables method or the imbedded Markov chain method [2], do not provide an effective solution for this class of systems, since their usage is restricted to relatively simple and small systems. This paper presents an analytical methodology for the dependability evaluation of non-Markovian discrete-state systems containing both stochastic and deterministic processes, along with an associated systematic resolution procedure suitable for numerical processing. The methodology was initially developed in the context of a research work [3] addressing the dependability modeling, analysis and evaluation of large industrial information systems. This paper extends the application domain to the evaluation of reliability-oriented indexes and to the assessment of multiple-component systems. Examples are provided throughout the paper in order to illustrate the fundamental concepts of the methodology and to demonstrate its practical usefulness.

9.
Team performance modeling for HRA in dynamic situations
This paper proposes a team behavior network model that can simulate and analyze the response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models, which describe the context of team performance: a task model, an event model, a team model and a human–machine interface model. Each operator demonstrates aspects of his or her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. The model covers the individual human factors that determine the basis of communication and interaction between individuals, as well as the cognitive process of an operator, such as information acquisition, state recognition, decision-making and action execution, during the development of an event scenario. A case of feed-and-bleed operation in a pressurized water reactor under an emergency situation was studied, and the result was compared with an experiment to check the validity of the proposed model.

10.
Quantified risk and safety assessments are now required for safety cases for European air traffic management (ATM) services. Since ATM is highly human-dependent for its safety, this suggests a need for formal human reliability assessment (HRA), as carried out in other industries such as nuclear power. Since the fundamental ingredient of HRA is human error data, in the form of human error probabilities (HEPs), it was decided to take a first step towards the development of an ATM HRA approach by deriving some HEPs in an ATM context. This paper reports a study which collected HEPs by analysing the results of a real-time simulation involving air traffic controllers (ATCOs) and pilots, with a focus on communication errors. The study did indeed derive HEPs that were found to be concordant with other known communication human error data. This is a first step, and it shows promise for HRA in ATM, since HEPs have been derived which could be used in safety assessments, although these HEPs cover only one (albeit critical) aspect of ATCOs' tasks (communications). The paper discusses options and potential ways forward for the development of a full HRA capability in ATM.
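Not from the paper: deriving an HEP from simulation observations reduces, at its simplest, to an error count over an opportunity count, usually reported with an uncertainty band. The counts below are invented for illustration and are not the study's data.

```python
import math

def hep_with_ci(errors, opportunities, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for a human error probability derived from observed counts."""
    hep = errors / opportunities
    se = math.sqrt(hep * (1 - hep) / opportunities)
    return hep, max(0.0, hep - z * se), hep + z * se

# Illustrative counts: 12 communication errors observed in 2000
# pilot-controller transmissions during the simulation runs.
hep, lo, hi = hep_with_ci(12, 2000)
# hep == 0.006
```

For rarer errors (few observed events) the normal approximation becomes poor, which is one reason HRA data collection needs large numbers of simulated opportunities.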

11.
In recent years, the need for a more accurate dependability modelling (encompassing reliability, availability, maintenance, and safety) has favoured the emergence of novel dynamic dependability techniques able to account for temporal and stochastic dependencies of a system. One of the most successful and widely used methods is Dynamic Fault Tree that, with the introduction of the dynamic gates, enables the analysis of dynamic failure logic systems such as fault‐tolerant or reconfigurable systems. Among the dynamic gates, Priority‐AND (PAND) is one of the most frequently used gates for the specification and analysis of event sequences. Despite the numerous modelling contributions addressing the resolution of the PAND gate, its failure logic and the consequences for the coherence behaviour of the system need to be examined to understand its effects for engineering decision‐making scenarios including design optimization and sensitivity analysis. Accordingly, the aim of this short communication is to analyse the coherence region of the PAND gate so as to determine the coherence bounds and improve the efficacy of the dynamic dependability modelling process.
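Not from the communication: the PAND gate's sequence-dependent failure logic is easy to see by simulation. A 2-input PAND fires only when input A fails before input B. Assuming exponential failure times with no mission-time cut-off (an assumption made here for a clean closed-form check), the exact firing probability is lam_a / (lam_a + lam_b); the rates below are illustrative.

```python
import random

def pand_probability(lam_a, lam_b, n_trials=200_000, seed=1):
    """Monte Carlo estimate that a 2-input PAND gate fires, i.e.
    that input A fails strictly before input B, given exponential
    failure times and no mission-time cut-off."""
    rng = random.Random(seed)
    fired = sum(
        rng.expovariate(lam_a) < rng.expovariate(lam_b)
        for _ in range(n_trials)
    )
    return fired / n_trials

# For exponentials the exact answer is lam_a / (lam_a + lam_b).
est = pand_probability(2e-3, 1e-3)   # exact value: 2/3
```

Because the gate's output depends on the order of events and not only on which components have failed, the resulting structure function is non-coherent in general, which is precisely why the communication studies its coherence region.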

12.
This paper presents a model for dependability performance evaluation using fuzzy sets. Basic dependability indicators (reliability, maintainability and maintenance support) are used for the analysis of technical systems' conditions from the aspects of design, construction, maintenance and logistics. These indicators, as well as the associated dependability expression itself, are described by linguistic variables, which are characterized by membership functions over the defined classes. The proposed model is primarily appropriate for the introduction, analysis and synthesis of information related to the quality of systems in operation. Such data are often available only as experts' judgments and estimations. A practical engineering example (a mechanical system of a bucket wheel excavator) is presented to demonstrate the proposed dependability analysis and synthesis model. Copyright © 2008 John Wiley & Sons, Ltd.

13.
14.
In this paper, we design an AVTMR (All Voting Triple Modular Redundancy) system and a dual–duplex system, both of which have fault-tolerant characteristics, and compare the two systems in terms of RAMS (Reliability, Availability, Maintainability and Safety) and MTTF (Mean Time To Failure). The AVTMR system is designed with a triplicated voter technique and the dual–duplex system with a comparator; both systems are based on the MC68000. To evaluate the system characteristics, Markov models are constructed for reliability, availability, safety and MTTF, and the RELEX 6.0 tool is used to calculate the failure rates of the electrical components based on MILSPEC-217F. The results show that both systems provide higher dependability than a single system, and either the AVTMR or the dual–duplex system can be selected for a specific application. In particular, because the AVTMR and dual–duplex systems have better RAMS than a single system, they can be applied to life-critical systems such as airplanes and high-speed railway systems.
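Not from the paper: the reliability gain of triple modular redundancy over a single channel can be sketched with the standard 2-out-of-3 formula for a perfect voter, R_TMR = 3R^2 - 2R^3. The failure rate and mission time below are illustrative, not the paper's MC68000 figures.

```python
import math

def simplex_reliability(lam, t):
    """Single-channel reliability with constant failure rate lam."""
    return math.exp(-lam * t)

def tmr_reliability(lam, t):
    """2-out-of-3 majority voting with a perfect voter:
    R_TMR = 3R^2 - 2R^3, where R is the channel reliability."""
    r = simplex_reliability(lam, t)
    return 3 * r**2 - 2 * r**3

lam = 1e-4   # assumed channel failure rate per hour
t = 1000.0   # assumed mission time in hours

r_single = simplex_reliability(lam, t)   # ~0.905
r_tmr = tmr_reliability(lam, t)          # ~0.975, higher than a single channel
```

Note that the advantage holds only while R > 0.5 per channel; for long missions without repair, TMR eventually becomes less reliable than a simplex system, which is why mission time matters in the paper's comparison.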

15.

Human factors studies the intersection between people, technology and work, with the major aim of finding areas where design and working conditions produce human error. It relies on the knowledge base and research results of multiple fields of inquiry (ranging from computer science to anthropology) to do so. Technological change at this intersection (1) redefines the relationship between various players (both humans and machines), (2) transforms practice and shifts sources of error and excellence, and (3) often drives up operational requirements and pressures on operators. Human factors needs to predict these reverberations of technological change before a mature system has been built, in order to steer design towards cooperative human-machine architectures. The quickening tempo of technological change and the expansion of technological possibilities have largely converted the traditional shortcuts for access to a design process (task analysis, guidelines, verification and validation studies, etc.) into oversimplification fallacies that retard understanding, innovation, and, ultimately, human factors' credibility. There is an enormous need for the development of techniques that gain empirical access to the future: techniques that generate human performance data about systems which have yet to be built.

16.
Event storms are the manifestation of an important class of abnormal behaviors in communication systems. They occur when a large number of nodes throughout the system generate a set of events within a small period of time. It is essential for network management systems to detect every event storm and identify its cause, in order to prevent and repair potential system faults. This paper presents a set of techniques for the effective detection and identification of event storms in communication systems. First, we introduce a new algorithm to synchronize events to a single node in the system. Second, the system's event log is modeled as a normally distributed random process; this is achieved by using data analysis techniques to explore and then model the statistical behavior of the event log. Third, event storm detection is proposed using a simple test statistic combined with an exponential smoothing technique to overcome the non-stationary behavior of event logs. Fourth, the system is divided into non-overlapping regions to locate the main contributing regions of a storm; we show that this technique provides a method for event storm identification. Finally, experimental results from a commercially deployed multimedia communication system that uses these techniques demonstrate their effectiveness.
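Not from the paper: the third step (a test statistic combined with exponential smoothing) can be sketched as a simple EWMA control-chart test on per-interval event counts. The smoothing constant, threshold multiplier and event log below are illustrative assumptions, not the paper's parameters.

```python
def detect_storms(counts, alpha=0.1, k=4.0):
    """Flag intervals whose event count exceeds the smoothed mean by
    k smoothed deviations (a simple EWMA control-chart test)."""
    mean, dev = counts[0], 1.0
    storms = []
    for i, c in enumerate(counts[1:], start=1):
        if c > mean + k * dev:
            storms.append(i)
        else:
            # Update the baseline only on non-storm intervals, so a
            # storm does not inflate its own detection threshold.
            dev = (1 - alpha) * dev + alpha * abs(c - mean)
            mean = (1 - alpha) * mean + alpha * c
    return storms

# Steady background of ~10 events per interval with a burst at index 6.
log = [10, 11, 9, 10, 12, 10, 80, 11, 10]
storms = detect_storms(log)   # -> [6]
```

The exponential smoothing lets the baseline track slow drift in the event rate (the non-stationarity the abstract mentions) while still reacting sharply to a genuine burst.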

17.
The growing demand for safety, reliability, availability and maintainability in modern technological systems has led these systems to become more and more complex. To improve their dependability, many features and subsystems are employed, such as the diagnosis system, control system and backup systems. These subsystems each have their own dynamics, reliability and performance, and they interact with each other in order to provide a dependable and fault‐tolerant system. This makes dependability analysis and assessment very difficult. This paper proposes a method to completely model the diagnosis procedure in fault‐tolerant systems using stochastic activity networks. Combined with Monte Carlo simulation, this allows dependability assessment that includes the diagnosis parameters and performances explicitly. Copyright © 2014 John Wiley & Sons, Ltd.

18.
General equations and numerical tables are developed for quantification of the probabilities of sequentially dependent repeatable human errors. Such errors are typically associated with testing, maintenance or calibration (called "pre-accident" or "pre-initiator" tasks) of redundant safety systems. Guidance is presented for incorporating dependent events in large system fault tree analysis using implicit or explicit methods. Exact relationships between these methods, as well as numerical tables and simple approximate methods for system analysis, are described. Analytical results are presented for a general human error model, while the numerical tables are valid for a specific Handbook (THERP) model. Relationships with earlier methods and guides proposed for error probability quantification are pointed out.
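Not from this paper: to make the idea of sequential dependence concrete, the THERP Handbook's commonly cited dependence model adjusts a nominal HEP p into a conditional HEP given failure on the preceding task, by dependence level. The formulas below are quoted from memory of the THERP model and should be checked against the Handbook; the numerical example is invented.

```python
# THERP-style conditional-HEP formulas, by level of dependence between
# successive tasks (given failure on the previous task).
THERP_DEPENDENCE = {
    'zero':     lambda p: p,
    'low':      lambda p: (1 + 19 * p) / 20,
    'moderate': lambda p: (1 + 6 * p) / 7,
    'high':     lambda p: (1 + p) / 2,
    'complete': lambda p: 1.0,
}

def conditional_hep(p, level):
    """Conditional HEP for a repeated task, given the previous error."""
    return THERP_DEPENDENCE[level](p)

# Illustrative example: nominal HEP of 0.01 for a calibration step,
# repeated on a redundant train with high dependence.
p_cond = conditional_hep(0.01, 'high')   # (1 + 0.01) / 2 = 0.505
p_both = 0.01 * p_cond                   # both trains miscalibrated
```

The point of such models is visible in p_both: with dependence, the joint error probability is orders of magnitude higher than the independence value of 0.01 * 0.01, which is why dependent pre-initiator errors matter so much for redundant safety systems.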

19.
Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of the plant state, as failures of situation assessment may cause wrong decisions for process control and, finally, errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider the human characteristics of nuclear power plant operators. In this paper, we propose a computational model for situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors that significantly affect operators' situation assessment, such as attention, working memory decay, and the mental model. As the proposed model provides quantitative results for situation assessment and diagnostic performance, we expect that it can be used in the design and evaluation of human system interfaces, as well as in the prediction of situation awareness errors in human reliability analysis.

20.
The ability to make mistakes is an innate human trait; however, until recently, the ability to spread death and destruction through one's own mistakes was mainly limited to politicians and generals. Nowadays, there are other individuals capable, when carrying out their work, of making mistakes with exceptionally grave consequences. This is due to the construction of increasingly large plants (with consequently higher destructive potential), to the centralization of controls in one single control room or a few control rooms, and to the fact that many important decisions are concentrated on a few operators. Recent surveys seem to reveal that at least 40% of all disastrous events in industrial activities derive from human error. It therefore appears evident that every risk analysis of systems in which humans play a part must take possible human error into consideration. This report attempts to suggest data, methodologies and programs for an analysis of human factors in process industries.
