Similar literature
20 similar documents found.
1.
Emerging architectures such as partially reconfigurable FPGAs provide a huge potential for adaptivity in the area of embedded systems. Since many system functions are only executed at particular points in time, they can share an adaptive component with other system functions, which can significantly reduce design costs. However, adaptivity adds another dimension of complexity to system design, since the system behaviour changes during the course of adaptation. This imposes additional requirements on the design process, in particular system verification. In this paper we illustrate how adaptivity is treated as a first-class citizen inside the ForSyDe design framework. ForSyDe is a transformational system design methodology in which an initial abstract system model is refined, by the application of semantic-preserving and non-semantic-preserving design transformations, into a detailed model that can be mapped to an implementation. Since ForSyDe is based on the functional paradigm, we can model adaptivity by using functions as signal values, which forms the basis for our concept of adaptive processes. Depending on the level of adaptivity, we categorise four classes of adaptive processes, ranging from parameter-adaptive to interface-adaptive processes. We illustrate our concepts with two typical examples of adaptivity, in which we also show the application of design transformations.
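The abstract itself contains no code; the following minimal Python sketch only illustrates the general idea of "functions as signal values" driving an adaptive process. It is not the ForSyDe API (ForSyDe models are written in Haskell/SystemC), and all names here are illustrative assumptions.

def adaptive_process(data_signal, function_signal):
    """Apply, at each event, the most recently received function to the data.
    The second input is a signal whose values are themselves functions."""
    current_f = lambda x: x                 # behaviour before any adaptation
    functions = iter(function_signal)
    output = []
    for x in data_signal:
        try:
            current_f = next(functions)     # adapt: swap in the next behaviour
        except StopIteration:
            pass                            # no new function: keep the last one
        output.append(current_f(x))
    return output

# The process starts as a doubler and is later adapted into a squarer.
data = [1, 2, 3, 4, 5]
behaviours = [lambda x: 2 * x, lambda x: x * x]
print(adaptive_process(data, behaviours))   # [2, 4, 9, 16, 25]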

2.
Automation is a key element in the safety and reliability of industrial processes. Selecting the right type and level of automation requires careful consideration of how to allocate tasks between operators and automation. This is important so that the joint system, human and machine seen together, performs in the intended manner. The Halden Reactor Project is currently engaged in a project to study this topic, with an emphasis on maximizing the operator's ability to maintain control and handle unexpected events. Functional models can be used to study this in a process control environment, because they explicitly describe the functions that must be provided by the process or the operator. This paper describes how functional modelling of the joint system can be used to provide a basis for deciding how functions should be allocated.

3.
A crucial step in any system design concerns the allocation of functions between human and machine. In simultaneous engineering, function allocation is potentially an issue both in product design (if the product is itself a technical system) and in process design. It is argued that well-founded function allocation decisions in relation to the production process are of particular importance in simultaneous engineering, as less and less time is allowed for compensating for inadequate or missing decisions that would impede production effectiveness. The focus of recent methods for the allocation of functions is to operationalize the concept of complementarity, that is, to provide guidelines for complementing humans with technical systems instead of replacing them gradually as technical systems become more and more sophisticated. As systems design becomes more complex, involving more and faster changes and placing higher demands on engineers as well as system operators, the idea of complementarity gains importance because it stresses the need to allocate functions in a way that supports human control over the production process and the development and maintenance of the necessary skills. Looking at existing methods and instruments for the complementary allocation of functions, one is at first sight confronted with a multitude of criteria for complementarity, but to date neither a fixed set of criteria, comparable to software usability criteria, nor widely accepted methods for their measurement exist. As part of a research project concerned with the development of guidelines intended to assist engineers in designing work systems according to the complementarity principle, four central criteria for complementary function allocation were identified: dynamic coupling, process transparency, human decision authority, and flexibility. These criteria and their application in a design process are illustrated by means of a case study and discussed in terms of other approaches to complementary design as well as their use in simultaneous engineering projects.

Relevance to industry

Adequate allocation of functions between human and machine is crucial for the effectiveness of any production system. With more complex and faster design processes in simultaneous engineering projects, the need for methods supporting prospective analysis and design of human-machine systems has increased even more. The article presents such a method based on the principle of complementarity between human operator and technical system and illustrates its use by means of a case study.
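The guideline described above is not published as code; as a rough illustration only, the sketch below uses the four named criteria as a checklist for comparing candidate allocations. The 0-2 rating scale and the example ratings are assumptions, not the authors' instrument.

CRITERIA = ("dynamic coupling", "process transparency",
            "human decision authority", "flexibility")

def complementarity_score(ratings):
    """Sum per-criterion ratings (0 = not supported, 1 = partly, 2 = well supported)."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in CRITERIA)

# Compare two candidate allocations for the same production task.
fully_automated = {"dynamic coupling": 0, "process transparency": 1,
                   "human decision authority": 0, "flexibility": 1}
complementary   = {"dynamic coupling": 2, "process transparency": 2,
                   "human decision authority": 2, "flexibility": 1}
print(complementarity_score(fully_automated))  # 2
print(complementarity_score(complementary))    # 7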


4.

Increasingly sophisticated and robust automotive automation systems are being developed for application in all aspects of driving. Benefits such as improved safety, task performance, and workload have been reported. However, several critical accidents involving automation assistance have also been reported. Although automation systems may work appropriately, human factors such as driver errors, overtrust in and overreliance on automation due to a lack of understanding of automation functionalities and limitations, and distrust caused by automation surprises may trigger inappropriate human–automation interactions that lead to negative consequences. Several important methodologies and efforts for improving human–automation interactions follow the concept of human-centered automation, which holds that the human must have the final authority over the system; human-centered automation has been proposed as a more cooperative automation approach that reduces the likelihood of human–machine misunderstanding. This study argues that, especially in critical situations, the way control is handed over between agents can improve human–automation interactions even when the system has the final decision-making authority. As ways of improving human–automation interactions, the study proposes adaptive sharing of control, which allows dynamic control distribution between human and system within the same level of automation while the human retains the final authority, and adaptive trading of control, in which control and authority shift dynamically between human and system while changing levels of automation. Authority and control transition strategies are discussed, compared and clarified in terms of levels and types of automation. Finally, design aspects for determining how and when control and authority can be shifted between human and automation are proposed, with recommendations for future designs.
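To make the distinction between the two proposed strategies concrete, here is a minimal Python sketch; the weighting scheme, the clamping of the human's share and the example numbers are illustrative assumptions, not the study's design.

def shared_control(human_cmd, system_cmd, human_share=0.5):
    """Sharing of control: both agents act at once and their commands are
    blended within the same level of automation; the human's share is never
    reduced to zero, so the human keeps final authority over the output."""
    a = min(max(human_share, 0.1), 1.0)
    return a * human_cmd + (1 - a) * system_cmd

def traded_control(human_cmd, system_cmd, controller):
    """Trading of control: exactly one agent is in control at a time, and a
    hand-over changes both the controller and the level of automation."""
    return human_cmd if controller == "human" else system_cmd

# Example steering commands (rad) during an evasive manoeuvre.
print(shared_control(0.20, 0.05, human_share=0.7))       # blended: 0.155
print(traded_control(0.20, 0.05, controller="system"))   # system only: 0.05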


5.
In this article we review and assess human-centered level of automation (LOA), an alternative approach to traditional, technology-centered design of automation in dynamic-control systems. The objective of human-centered LOA is to improve human-machine performance by taking into account both operator and technological capabilities. The automation literature has shown that traditional automation can lead to problems in operator situation awareness (SA) due to the out-of-the-(control)-loop performance problem, which may negatively impact overall system performance. Herein we address a standing paucity of research into LOA as a means of dealing with these problems. Various schemes of generic control system function allocation were developed to establish a LOA taxonomy. The functions allocated to a human operator, a computer, or both included monitoring system variables, generating process plans, selecting an "optimal" plan and implementing the plan. Five different function allocation schemes, or LOAs, were empirically investigated for their usefulness in enhancing telerobot system performance and operator SA, as well as reducing workload. Participants completed experimental trials involving a high-fidelity, interactive simulation of a telerobot performing nuclear materials handling at the various LOAs. Automation failures were attributed to various simulated system deficiencies requiring operator detection and correction before returning to an automated mode. Operator performance at each LOA, and during the failure periods, was evaluated. Operator SA was measured using the Situation Awareness Global Assessment Technique, and perceived workload was measured using the NASA-Task Load Index. Results demonstrated improvements in human-machine system performance at higher LOAs (levels involving greater computer control of system functions) along with lower operator subjective workload. However, under the same conditions, operator SA was reduced for certain types of system problems, and reaction time to, and performance during, automation failures were substantially degraded. Performance during automation failure was best when participants had been functioning at lower, intermediate LOAs (levels involving greater human control of system functions). © 2000 John Wiley & Sons, Inc.
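A LOA taxonomy of this kind can be written down as a simple table mapping each level to the agent responsible for the four generic functions. The level names and assignments in the Python sketch below are simplified assumptions for illustration, not the five schemes actually tested in the study.

FUNCTIONS = ("monitoring", "generating", "selecting", "implementing")

# Who performs each function at a given level of automation.
LOA_TAXONOMY = {
    "manual control":      ("human",    "human",    "human",    "human"),
    "decision support":    ("both",     "both",     "human",    "human"),
    "shared control":      ("both",     "both",     "human",    "both"),
    "supervisory control": ("both",     "computer", "computer", "computer"),
    "full automation":     ("computer", "computer", "computer", "computer"),
}

def allocation(level):
    """Return a {function: agent} mapping for the given level of automation."""
    return dict(zip(FUNCTIONS, LOA_TAXONOMY[level]))

print(allocation("shared control"))
# {'monitoring': 'both', 'generating': 'both', 'selecting': 'human', 'implementing': 'both'}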

6.
One of the challenges of distributed computer systems is the effective allocation of software system functions among the hardware components of the distributed system. Software function allocation methodology (SFAM) provides computer software system designers with a thorough and flexible method to allocate software system functions among the hardware components of a distributed computer system. Software designers select and rank relevant design parameters, analyse how well different distributed computer system components meet the chosen parameters, and allocate the software function accordingly. The paper defines the problem, covers necessary terminology, and discusses the current state of research. The preconditions necessary for an analysis using SFAM are covered along with the environment in which SFAM should be used. Details of SFAM components are discussed. A complete outline of the SFAM methodology is provided, along with discussion of key points and frequent examples.
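SFAM's published details are not reproduced here; the Python sketch below only illustrates the general pattern the abstract describes: rank design parameters, rate each candidate hardware component against them, and allocate the function to the best-scoring component. Parameters, weights and ratings are made-up examples.

def allocate_function(weights, ratings):
    """weights: {parameter: importance}; ratings: {component: {parameter: score}}.
    Returns the best component and all weighted scores."""
    scores = {component: sum(weights[p] * r.get(p, 0) for p in weights)
              for component, r in ratings.items()}
    return max(scores, key=scores.get), scores

weights = {"latency": 0.5, "reliability": 0.3, "cpu headroom": 0.2}
ratings = {
    "edge controller": {"latency": 9, "reliability": 6, "cpu headroom": 4},
    "central server":  {"latency": 3, "reliability": 9, "cpu headroom": 9},
}
best, scores = allocate_function(weights, ratings)
print(best, scores)   # 'edge controller' wins: latency dominates the weighted score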

7.
《Control Engineering Practice》2006,14(10):1249-1258
In human-machine systems, a user predicts the behavior of a machine using partial or abstracted information provided by a user-interface. If the user-interface does not provide sufficient information as required by the user, then the system does not always behave as the user anticipates. Such insufficient information and the consequent discrepancy tend to result in an automation surprise. Thus, the user-interface should be suitably designed so that the user can interact with the automated system safely and reliably. Moreover, a more simplified representation of the underlying system is required in order to reduce the complexity of operation. In the present paper, we consider coordinated actions between a user and a machine through a user-interface, where both the user's actions and the machine's behavior are modeled by discrete event systems. First, automation surprises are classified into three cases: a blocking state, a mode confusion, and a refusal state, from the viewpoint of discrete event system theory. Next, the necessary and sufficient conditions for the nonexistence of automation surprises are derived. The conditions are based on simulation and bisimulation relations between the machine model and the user model. Subsequently, we show that a user-interface and a user model without automation surprises can be designed by utilizing the bisimulation algorithm. Finally, the proposed approach is applied to a model of heating, ventilation, and air conditioning (HVAC) systems in order to illustrate the design procedure of human–machine systems without automation surprises.
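As a rough illustration of the kind of condition involved, the Python sketch below checks a simulation relation between a machine model and a user model, both given as labelled transition systems. It is a textbook greatest-fixed-point check for deterministic models, not the authors' bisimulation-based interface synthesis, and the toy HVAC-style models are assumptions.

def simulates(machine, user, m0, u0):
    """True if the user model can match every machine transition, i.e. the user
    model simulates the machine model. Models are deterministic labelled
    transition systems given as {state: {event: next_state}}."""
    rel = {(m, u) for m in machine for u in user}
    changed = True
    while changed:
        changed = False
        for (m, u) in list(rel):
            for event, m_next in machine[m].items():
                u_next = user.get(u, {}).get(event)
                if u_next is None or (m_next, u_next) not in rel:
                    rel.discard((m, u))
                    changed = True
                    break
    return (m0, u0) in rel

machine = {"off": {"power": "heating"}, "heating": {"power": "off", "timeout": "off"}}
user    = {"off": {"power": "heating"}, "heating": {"power": "off"}}
# The user model does not know about the timeout-driven shutdown, so the machine
# can change state without a matching user expectation: a potential surprise.
print(simulates(machine, user, "off", "off"))   # False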

8.
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human–automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human–automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
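The method itself works on task-analytic models composed with a formal system model and a model checker; the flat-sequence Python sketch below only illustrates what Hollnagel's zero-order phenotypes look like when generated from a normative action sequence. The radiotherapy-style action names and the intrusion set are hypothetical.

def zero_order_phenotypes(actions, intrusions=("spurious_action",)):
    """Generate erroneous variants of a normative action sequence:
    omissions, repetitions, intrusions and forward jumps."""
    variants = []
    n = len(actions)
    for i in range(n):
        variants.append(("omission", actions[:i] + actions[i + 1:]))
        variants.append(("repetition", actions[:i + 1] + actions[i:]))
        for extra in intrusions:
            variants.append(("intrusion", actions[:i] + [extra] + actions[i:]))
    for i in range(n):
        for j in range(i + 2, n):
            variants.append(("jump", actions[:i] + actions[j:]))   # skip ahead
    return variants

normative = ["select_mode", "enter_dose", "confirm", "start_beam"]
for kind, sequence in zero_order_phenotypes(normative)[:3]:
    print(kind, sequence)
# omission ['enter_dose', 'confirm', 'start_beam']
# repetition ['select_mode', 'select_mode', 'enter_dose', 'confirm', 'start_beam']
# intrusion ['spurious_action', 'select_mode', 'enter_dose', 'confirm', 'start_beam']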

9.
Recent accounts of accidents draw attention to “automation surprises” that arise in safety critical systems. An automation surprise can occur when a system behaves differently from the expectations of the operator. Interface mode changes are one class of such surprises that have significant impact on the safety of a dynamic interactive system. They may take place implicitly as a result of other system action. Formal specifications of interactive systems provide an opportunity to analyse problems that arise in such systems. In this paper we consider the role that an interactor based specification has as a partial model of an interactive system so that mode consequences can be checked early in the design process. We show how interactor specifications can be translated into the SMV model checker input language and how we can use such specifications in conjunction with the model checker to analyse potential for mode confusion in a realistic case. Our final aim is to develop a general purpose methodology for the automated analysis of interactive systems. This verification process can be useful in raising questions that have to be addressed in a broader context of analysis.
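The paper translates interactor specifications into SMV; as a language-neutral illustration of the property being checked, the Python sketch below searches the reachable states of a toy mode model for pairs that the interface displays identically but that react differently to the same user action. The autopilot-style model is a made-up example, not the paper's case study.

def reachable(transitions, start):
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def mode_confusions(transitions, display, start):
    """Pairs of reachable states shown identically but reacting differently."""
    states = sorted(reachable(transitions, start))
    issues = []
    for i, a in enumerate(states):
        for b in states[i + 1:]:
            if display[a] != display[b]:
                continue
            for act in set(transitions.get(a, {})) & set(transitions.get(b, {})):
                if transitions[a][act] != transitions[b][act]:
                    issues.append((a, b, act))
    return issues

transitions = {"vs_climb":   {"dial_down": "descend", "push": "open_climb"},
               "open_climb": {"dial_down": "open_climb"},
               "descend":    {"dial_up": "vs_climb"}}
display = {"vs_climb": "CLB", "open_climb": "CLB", "descend": "DES"}
print(mode_confusions(transitions, display, "vs_climb"))
# [('open_climb', 'vs_climb', 'dial_down')]: same display, different response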

10.

The concept of automated driving changes the way humans interact with their cars. However, how humans should interact with automated driving systems remains an open question. Cooperation between a driver and an automated driving system, in which the two jointly exert control to accomplish a common driving task, is expected to be a promising interaction paradigm that can address human factors issues caused by driving automation. Nevertheless, the complex nature of automated driving functions makes it very challenging to apply state-of-the-art frameworks of driver–vehicle cooperation to automated driving systems. To meet this challenge, we propose a hierarchical cooperative control architecture derived from existing architectures of automated driving systems. Throughout this architecture, we discuss how to adapt system functions to realize different forms of cooperation within the framework of driver–vehicle cooperation. We also provide a case study to illustrate the use of this architecture in the design of a cooperative control system for automated driving. By examining the concepts behind this architecture, we highlight that the correspondence between several concepts of planning and control originating from the fields of robotics and automation and the ergonomic frameworks of human cognition and control offers a new opportunity for designing driver–vehicle cooperation.


11.
Sauer J  Kao CS  Wastell D 《Ergonomics》2012,55(8):840-853
The effectiveness of different forms of adaptive and adaptable automation was examined under low- and high-stress conditions, in the form of different levels of noise. Thirty-six participants were assigned to one of three types of variable automation (adaptive event-based, adaptive performance-based, and adaptable, the latter serving as a control condition). Participants received 3 h of training on a simulation of a highly automated process control task and were subsequently tested during a 4-h session under noise exposure and quiet conditions. The results for performance suggested no clear benefits of one automation control mode over the other two. However, it emerged that participants under adaptable automation adopted a more active system management strategy and reported higher levels of self-confidence than in the two adaptive control modes. Furthermore, the results showed higher levels of perceived workload, fatigue and anxiety for performance-based adaptive automation control than for the other two modes. PRACTITIONER SUMMARY: This study compared two forms of adaptive automation (where the automated system flexibly allocates tasks between human and machine) with adaptable automation (where the human allocates the tasks). The adaptable mode showed marginal advantages. This is of relevance, given that this automation mode may also be easier to design.
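A minimal Python sketch of the three allocation policies compared in the study follows; the trigger events, the error threshold and the level names are illustrative assumptions, not the simulation's actual parameters.

def event_based(current_level, event):
    """Adaptive, event-based: the system raises automation on predefined events."""
    return "high" if event in {"alarm", "noise_onset"} else current_level

def performance_based(current_level, recent_error):
    """Adaptive, performance-based: the system raises automation when the
    operator's tracked performance degrades past a threshold."""
    return "high" if recent_error > 0.25 else current_level

def adaptable(current_level, operator_request=None):
    """Adaptable: the operator, not the system, decides the level."""
    return operator_request if operator_request else current_level

print(event_based("low", "alarm"))       # high
print(performance_based("low", 0.31))    # high
print(adaptable("low"))                  # low: the operator stays in charge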

12.
13.
Prognostics and systems health management (PHM) is an integral part of a system. It is used for solving reliability problems that often manifest due to complexities in design, manufacturing, operating environment and system maintenance. For safety-critical applications, using a model-based development process for complex systems might not always be ideal, but it is equally important to establish the robustness of the solution. The information revolution has allowed data-driven methods to diffuse within this field to construct the requisite process (or system) models and cope with the so-called big data phenomenon. This is supported by large datasets that help machine-learning models achieve impressive accuracy. AI technologies are now being integrated into many PHM-related applications including aerospace, automotive, medical robots and even autonomous weapon systems. However, with such rapid growth in complexity and connectivity, a system's behaviour is influenced in unforeseen ways by cyberattacks, human errors, working with incorrect or incomplete models and even adversarial phenomena. Many of these models depend on the training data and how well the data represent the test data. These issues require fine-tuning and even retraining the models when there is even a small change in operating conditions or equipment. Yet there is still ambiguity associated with their implementation, even if the learning algorithms classify as intended. Uncertainties can lie in any part of the AI-based PHM model, including in the requirements, assumptions, or even in the data used for training and validation. These factors lead to sub-optimal solutions with an open interpretation as to why the requirements have not been met. This creates a need to achieve a level of robustness in the implemented PHM, which is a challenging task for a machine learning solution. This article aims to present a framework for testing the robustness of AI-based PHM. It reviews some key milestones achieved in the AI research community to deal with three particular issues relevant for AI-based PHM in safety-critical applications: robustness to model errors, robustness to unknown phenomena and empirical evaluation of robustness during deployment. To deal with model errors, many techniques from probabilistic inference and robust optimisation are often used to provide some robustness guarantee metric. In the case of unknown phenomena, techniques include anomaly detection methods, using causal models, the construction of ensembles and reinforcement learning. The article draws on the authors' work on fault diagnostics and robust optimisation via machine learning techniques to offer guidelines to the PHM research community. Finally, challenges and future directions are also examined, on how to better cope with any uncertainties as they appear during the operating life of an asset.
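As a rough illustration of one robustness tactic the article reviews (ensembles used to expose unknown phenomena), the Python sketch below flags predictions on which a small model ensemble disagrees. The toy degradation models and the disagreement threshold are assumptions, not the proposed framework.

from statistics import mean, pstdev

def ensemble_health_estimate(models, features, disagreement_limit=0.1):
    """Return (mean prediction, flag); the flag marks low-confidence outputs
    where the ensemble spread suggests the input may lie outside the data the
    models were trained on."""
    predictions = [m(features) for m in models]
    return mean(predictions), pstdev(predictions) > disagreement_limit

# Three toy degradation models: remaining-life fraction from a vibration level.
models = [lambda x: 1.0 - 0.5 * x,
          lambda x: 1.0 - 0.8 * x,
          lambda x: 1.0 - 1.0 * x]

print(ensemble_health_estimate(models, 0.2))   # models agree   -> not flagged
print(ensemble_health_estimate(models, 0.9))   # models diverge -> flagged for review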

14.
《Ergonomics》2012,55(11):1905-1922

Today many systems are highly automated. The human operator's role in these systems is to supervise the automation and intervene to take manual control when necessary. The operator's choice of automatic or manual control has important consequences for system performance, and therefore it is important to understand and optimize this decision process. One important determinant of operators' choice of manual or automatic control may be their degree of trust in the automation. However, there have been no experimental tests of this hypothesis until recently, nor is there a model of human trust in machines to form a theoretical foundation for empirical studies. In this paper a model of human trust in machines is developed, taking models of trust between people as a starting point, and extending them to the human-machine relationship. The resulting model defines human trust in machines and specifies how trust changes with experience on a system, providing a framework for experimental research on trust and human intervention in automated systems.
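The model developed in the paper is conceptual rather than computational; purely to illustrate the idea that trust is revised by experience with a system, here is a toy exponential-smoothing sketch in Python. The update rule and its learning rate are assumptions and are not the paper's model.

def update_trust(trust, automation_succeeded, learning_rate=0.2):
    """Move trust toward 1 after a successful interaction, toward 0 after a failure."""
    outcome = 1.0 if automation_succeeded else 0.0
    return trust + learning_rate * (outcome - trust)

trust = 0.5
for outcome in (True, True, True, False, True):
    trust = update_trust(trust, outcome)
    print(round(trust, 3))
# In this toy run, a single failure undoes roughly two successes' worth of trust.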

15.
Agent systems based on the Belief, Desire and Intention model of Rao and Georgeff have been used for a number of successful applications. However, it is often difficult to learn how to apply such systems, due to the complexity of both the semantics of the system and the computational model. In addition, there is a gap between the semantics and the concepts that are presented to the programmer. In this paper we address these issues by re-casting the foundations of such systems into a logic programming framework. In particular we show how the integration of backward- and forward-chaining techniques for linear logic provides a natural starting point for this investigation. We discuss how the integrated system provides for the interaction between the proactive and reactive parts of the system, and we discuss several aspects of this interaction. In particular, one perhaps surprising outcome is that goals and plans may be thought of as declarative and procedural aspects of the same concept. We also discuss the language design issues for such a system, and in particular how the choice of which rules to evaluate in a forward-chaining manner is crucial to the behaviour of the system.

16.
Humans: still vital after all these years of automation
OBJECTIVE: The authors discuss empirical studies of human-automation interaction and their implications for automation design. BACKGROUND: Automation is prevalent in safety-critical systems and increasingly in everyday life. Many studies of human performance in automated systems have been conducted over the past 30 years. METHODS: Developments in three areas are examined: levels and stages of automation, reliance on and compliance with automation, and adaptive automation. RESULTS: Automation applied to information analysis or decision-making functions leads to differential system performance benefits and costs that must be considered in choosing appropriate levels and stages of automation. Human user dependence on automated alerts and advisories reflects two components of operator trust, reliance and compliance, which are in turn determined by the threshold designers use to balance automation misses and false alarms. Finally, adaptive automation can provide additional benefits in balancing workload and maintaining the user's situation awareness, although more research is required to identify when adaptation should be user controlled or system driven. CONCLUSIONS: The past three decades of empirical research on humans and automation has provided a strong science base that can be used to guide the design of automated systems. APPLICATION: This research can be applied to most current and future automated systems.
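The reliance/compliance point can be made concrete with a small Python sketch of the designer's threshold trade-off: lowering the alert threshold removes misses but adds false alarms (eroding compliance), while raising it does the reverse (eroding reliance). The signal values and hazard labels are synthetic.

def alarm_rates(signal_values, is_hazard, threshold):
    """Count misses and false alarms for a given alert threshold."""
    alarms = [v >= threshold for v in signal_values]
    misses = sum(h and not a for h, a in zip(is_hazard, alarms))
    false_alarms = sum(a and not h for h, a in zip(is_hazard, alarms))
    return misses, false_alarms

values = [0.2, 0.9, 0.7, 0.45, 0.1, 0.8, 0.35, 0.65]
hazard = [False, True, False, True, False, True, False, True]

for threshold in (0.3, 0.5, 0.75):
    print(threshold, alarm_rates(values, hazard, threshold))
# 0.30 -> (0, 2): no misses, frequent false alarms ("cry wolf")
# 0.50 -> (1, 1): a compromise setting
# 0.75 -> (2, 0): no false alarms, but hazards slip through unannunciated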

17.
18.
An automation system's operating performance is judged by how well the automation unit is monitored and maintained by its supervisors. Previous research has shown that situation awareness (SA) and trust are critical factors in automation. The purpose of this study was to evaluate and improve supervisory performance in automated manufacturing. First, a conceptual structure of the relationship among SA, trust, and vigilance was developed. Second, a quantitative vigilance performance-measuring model (the η value) was proposed. Third, a matrix experiment based on orthogonal arrays was conducted on a simulated auxiliary feed-water system (AFWS) to verify the effect of the measuring model. Finally, based on the vigilance performance-measuring model, a fuzzy-logic vigilance alarm system was constructed to improve operating performance. The results of the first experiment indicated that the η value, based on human dynamic decision-making characteristics, provided an easy and objective measure of operators' vigilance. With greater vigilance, there is a greater likelihood of maintaining appropriate SA and placing more trust in automation. The results of the second experiment indicated that applying the η value to the design of the fuzzy-logic vigilance alarm system could improve supervisory performance efficiently. Therefore, an adaptive vigilance performance-measuring model combined with a fuzzy technique, applied to the design of a human–machine interface to improve cognitive decision making and operating performance, is an important new direction in automated manufacturing. © 2006 Wiley Periodicals, Inc. Hum Factors Man 16: 409–426, 2006.
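The fuzzy alarm idea can be sketched in Python with ordinary triangular membership functions over a vigilance index and a three-rule base; the memberships, rules and defuzzification below are assumptions for illustration and do not reproduce the paper's η-based design.

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alarm_urgency(vigilance):
    """Map a vigilance index in [0, 1] to an alarm urgency in [0, 1]."""
    low = tri(vigilance, -0.01, 0.0, 0.5)      # left shoulder, approximated
    medium = tri(vigilance, 0.2, 0.5, 0.8)
    high = tri(vigilance, 0.5, 1.0, 1.01)      # right shoulder, approximated
    # Rule base: low vigilance -> urgent alarm, medium -> advisory, high -> no alarm.
    weighted = {1.0: low, 0.5: medium, 0.0: high}
    total = sum(weighted.values())
    return sum(u * w for u, w in weighted.items()) / total if total else 0.0

for vigilance in (0.15, 0.5, 0.9):
    print(vigilance, round(alarm_urgency(vigilance), 2))
# 0.15 -> 1.0 (urgent), 0.5 -> 0.5 (advisory), 0.9 -> 0.0 (no alarm)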

19.
Up to the present time, the control software of production systems has been designed to produce a certain number of goods, in a centralised manner and through a case-by-case, time-consuming and costly process. The current control design approaches therefore hinder factories in their pursuit of the essential capabilities needed to survive in a customer-driven and highly competitive market. Some of these vital production competencies include mass customisation, fault tolerance, reconfigurability, handling complexity, scalability and agility. The intention of this research is to propose a uniform architecture for the control software design of collaborative manufacturing systems. It introduces software components named modular, intelligent, and real-time agents (MIRAs) that represent both intelligent products as clients (C-MIRAs) and machines or robots as operators (O-MIRAs) in a production system. C-MIRAs are in constant interaction with customers and operators through human–machine interfaces, and are responsible for transforming products from concepts to full realisation with the least possible human intervention. This architecture is built upon the IEC 61499 standard, which is recognised for facilitating the distributed control design of automation systems; it also takes into account the intelligent-product concept and envisages the machines' control as a set of modular software components with standardised interfaces. This approach makes the software components intuitive and easy to install, creates the desired behaviour for collaborative manufacturing systems and ultimately paves the way towards mass customisation. A simplified food production case study, whose control is synthesised using the proposed approach, is chosen as an illustrative example of the proposed methodology.
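The abstract does not include the agent interfaces; the Python sketch below only illustrates the client/operator split it describes, with a product agent (C-MIRA) booking each operation from the cheapest capable machine agent (O-MIRA). The bidding scheme and the food-production operation names are assumptions, not the paper's IEC 61499 design.

class OMira:
    """Operator agent wrapping a machine or robot with a set of skills."""
    def __init__(self, name, skills, load):
        self.name, self.skills, self.load = name, skills, load

    def bid(self, operation):
        """Return a cost bid, or None if the machine cannot do the operation."""
        return self.load if operation in self.skills else None

class CMira:
    """Client agent representing one intelligent product and its recipe."""
    def __init__(self, recipe):
        self.recipe = list(recipe)   # ordered operations the product still needs

    def schedule(self, operators):
        plan = []
        for op in self.recipe:
            bids = [(o.bid(op), o.name) for o in operators if o.bid(op) is not None]
            if not bids:
                raise RuntimeError(f"no operator can perform {op!r}")
            plan.append((op, min(bids)[1]))   # cheapest capable operator
        return plan

operators = [OMira("robot_1", {"mix", "fill"}, load=2),
             OMira("robot_2", {"fill", "seal"}, load=1)]
product = CMira(["mix", "fill", "seal"])
print(product.schedule(operators))
# [('mix', 'robot_1'), ('fill', 'robot_2'), ('seal', 'robot_2')]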

20.
“Evolvability” is a concept normally associated with biology or ecology, but recent work on control of interdependent critical infrastructures reveals that network informatics systems can be designed to enable artificial, human systems to “evolve”. To explicate this finding, we draw on an analogy between disruptive behavior and stable variation in the history of science and the adaptive patterns of robustness and resilience in engineered systems. We present a definition of an evolvable system in the context of a model of robust, resilient and sustainable systems. Our review of this context and of standard definitions indicates that many analysts in engineering (as well as in biology and ecology) do not differentiate Resilience from Robustness. Neither do they differentiate overall dependable system adaptability from a multi-phase process that includes graceful degradation and time-constrained recovery, restabilization, and prevention of catastrophic failure. We analyze how systemic Robustness, Resilience, and Sustainability are related to Evolvability. Our analysis emphasizes the importance of Resilience as an adaptive capability that integrates Sustainability and Robustness to achieve Evolvability. This conceptual framework is used to discuss nine engineering principles that should frame systems thinking about developing evolvable systems. These principles are derived from Kevin Kelly's book Out of Control, which describes living and artificial self-sustaining systems. Kelly's last chapter, “The Nine Laws of God,” distills nine principles that govern all life-like systems. We discuss how these principles could be applied to engineering evolvability in artificial systems. This discussion is motivated by a wide range of practical problems in engineered artificial systems. Our goal is to analyze a few examples of system designs across engineering disciplines to explicate a common framework for designing and testing artificial systems. This framework highlights managing increasing complexity, intentional evolution, and resistance to disruptive events. From this perspective, we envision a more imaginative and time-sensitive appreciation of the evolution and operation of “reliable” artificial systems. We conclude with a short discussion of two hypothetical examples of engineering evolvable systems in network-centric communications using Error Resilient Data Fusion (ERDF) and cognitive radio.

