Found 20 similar documents (search time: 15 ms)
1.
Since the early years of computing, programmers, systems analysts, and software engineers have sought ways to improve development process efficiency. Software development tools are programs that help developers create other programs and automate mundane operations while bringing the level of abstraction closer to the application engineer. In practice, software development tools have been in wide use among safety-critical system developers. Typical application areas include space, aviation, automotive, nuclear, railroad, medical, and military. While their use is widespread in safety-critical systems, the tools do not always assure the safe behavior of their respective products. This study examines the assumptions, practices, and criteria for assessing software development tools for building safety-critical real-time systems. Experiments were designed for an avionics testbed and conducted on six industry-strength tools to assess their functionality, usability, efficiency, and traceability. The results shed some light on possible improvements in the tool evaluation process that can lead to potential tool qualification for safety-critical real-time systems.
2.
The STRESS environment is a collection of CASE tools for analysing and simulating the behaviour of hard real-time safety-critical applications. It is primarily intended as a means by which various scheduling and resource management algorithms can be evaluated, but can also be used to study the general behaviour of applications and real-time kernels. This paper describes the structure of the STRESS language and its environment, and gives examples of its use.
3.
Context: Demonstrating compliance of critical systems with safety standards involves providing convincing evidence that the requirements of a standard are adequately met. For large systems, practitioners need to be able to effectively collect, structure, and assess substantial quantities of evidence.
Objective: This paper aims to provide insights into how practitioners deal with safety evidence management for critical computer-based systems. The information currently available about how this activity is performed in the industry is very limited.
Method: We conducted a survey to determine practitioners’ perspectives and practices on safety evidence management. A total of 52 practitioners from 15 countries and 11 application domains responded to the survey. The respondents indicated the types of information used as safety evidence, how evidence is structured and assessed, how evidence evolution is addressed, and what challenges are faced in relation to provision of safety evidence.
Results: Our results indicate that (1) V&V artefacts, requirements specifications, and design specifications are the most frequently used safety evidence types, (2) evidence completeness checking and impact analysis are mostly performed manually at the moment, (3) text-based techniques are used more frequently than graphical notations for evidence structuring, (4) checklists and expert judgement are frequently used for evidence assessment, and (5) significant research effort has been spent on techniques that have seen little adoption in the industry. The main contributions of the survey are to provide an overall and up-to-date understanding of how the industry addresses safety evidence management, and to identify gaps in the state of the art.
Conclusion: We conclude that (1) V&V plays a major role in safety assurance, (2) the industry will clearly benefit from more tool support for collecting and manipulating safety evidence, and (3) future research on safety evidence management needs to place more emphasis on industrial applications.
4.
Jakob Engblom Andreas Ermedahl Mikael Sjödin Jan Gustafsson Hans Hansson 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(4):437-455
In this article we give an overview of the worst-case execution time (WCET) analysis research performed by the WCET group of the ASTEC Competence Centre at Uppsala University. Knowing the WCET of a program is necessary when designing and verifying real-time systems. The WCET depends both on the program flow, such as loop iterations and function calls, and on hardware factors, such as caches and pipelines. WCET estimates should be both safe (no underestimation allowed) and tight (as little overestimation as possible). We have defined a modular architecture for a WCET tool, used both to identify the components of the overall WCET analysis problem, and as a starting point for the development of a WCET tool prototype. Within this framework we have proposed solutions to several key problems in WCET analysis, including representation and analysis of the control flow of programs, modeling of the behavior and timing of pipelines and other low-level timing aspects, integration of control flow information and low-level timing to obtain a safe and tight WCET estimate, and validation of our tools and methods. We have focussed on the needs of embedded real-time systems in designing our tools and directing our research. Our long-term goal is to provide WCET analysis as a part of the standard tool chain for embedded development (together with compilers, debuggers, and simulators). This is facilitated by our cooperation with the embedded systems programming-tools vendor IAR Systems.
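The flow-plus-hardware decomposition described in this abstract can be illustrated with a toy tree-based WCET bound, where a conditional contributes its costlier branch and a loop multiplies its body bound by a known iteration bound. This is a hedged sketch of the general idea only; the node encoding and all cycle costs are hypothetical and unrelated to the ASTEC prototype.

```python
# Toy tree-based WCET bound: a program is a nested structure of
# sequences, conditionals (take the costlier branch), and loops
# (multiply the body bound by a known loop bound). Costs are in
# abstract cycles; a real tool derives them from hardware models.

def wcet(node):
    kind = node[0]
    if kind == "block":            # ("block", cost_in_cycles)
        return node[1]
    if kind == "seq":              # ("seq", [children])
        return sum(wcet(c) for c in node[1])
    if kind == "if":               # ("if", cond_cost, then, else)
        return node[1] + max(wcet(node[2]), wcet(node[3]))
    if kind == "loop":             # ("loop", max_iterations, body)
        return node[1] * wcet(node[2])
    raise ValueError(kind)

program = ("seq", [
    ("block", 5),
    ("loop", 10, ("seq", [
        ("block", 3),
        ("if", 1, ("block", 8), ("block", 2)),
    ])),
])
print(wcet(program))  # → 125  (5 + 10 * (3 + 1 + 8))
```

The bound is safe by construction (every branch and iteration is over-approximated) but not tight; tightening is exactly where flow analysis and pipeline modeling come in.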
5.
Network invariants for real-time systems
We extend the approach of model checking parameterized networks of processes by means of network invariants to the setting of real-time systems. We introduce timed transition structures (which are similar in spirit to timed automata) and define a notion of abstraction that is safe with respect to linear temporal properties. We strengthen the notion of abstraction to allow a finite system, then called a network invariant, to be an abstraction of networks of real-time systems. In general the problem of checking abstraction of real-time systems is undecidable. Hence, we provide sufficient criteria, which can be checked automatically, to conclude that one system is an abstraction of a concrete one. Our method is based on timed superposition and discretization of timed systems. We exemplify our approach by proving mutual exclusion of a simple protocol inspired by Fischer’s protocol, using the model checker TLV.
Part of this work was done during O. Grinchtein’s stay at Weizmann Institute.
This author was supported by the European Research Training Network “Games”.
6.
P. R. Croll A. J. C. Sharkey J. M. Bass N. E. Sharkey P. J. Fleming 《Engineering Applications of Artificial Intelligence》1995,8(6):615-623
An intelligent and dependable voting mechanism for use in real-time control applications is presented. Strategies proposed by current safety standards advocate N-version software to minimize the effects of undetected software design faults (bugs). This requires diversity in design but presents a problem in that truly diverse code produces diverse results; that is, differences in output values, timeliness and reliability. Reaching a consensus requires an intelligent voter, especially when non-stop operation is demanded, e.g. in aerospace applications. This paper, therefore, firstly considers the applicable safety standards and the requirements for an intelligent voter service. The use of replicated voters to improve reliability is examined and a mechanism to ensure non-stop operation is presented. The formal mathematical analysis used to verify the crucial behavioural properties of the voting service design is detailed. Finally, the use of neural nets and genetic algorithms to create N-version redundant voters is considered.
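The core difficulty the abstract names, that truly diverse versions legitimately produce slightly different outputs, can be sketched with an inexact majority voter. The clustering rule, tolerance value, and averaging step below are illustrative assumptions, not the paper's verified voter design.

```python
# Minimal inexact majority voter for N-version outputs: numeric
# results within `tol` of each other count as agreeing, and the
# vote succeeds only if a strict-majority cluster exists.

def vote(results, tol=1e-3):
    clusters = []                      # list of (representative, members)
    for r in results:
        for c in clusters:
            if abs(r - c[0]) <= tol:
                c[1].append(r)
                break
        else:
            clusters.append((r, [r]))
    best = max(clusters, key=lambda c: len(c[1]))
    if len(best[1]) * 2 > len(results):          # strict majority
        return sum(best[1]) / len(best[1])       # consensus value
    return None                                  # no consensus: flag fault

print(vote([1.0001, 0.9999, 5.2]))   # two of three agree -> ~1.0
print(vote([1.0, 2.0, 3.0]))         # no majority -> None
```

Returning `None` rather than an arbitrary value lets a supervisory layer treat a failed vote as a detected fault, which is what non-stop operation schemes build on.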
7.
Software performance is an important non-functional quality attribute and software performance evaluation is an essential activity in the software development process. Especially in embedded real-time systems, software design and evaluation are driven by the needs to optimize the limited resources, to respect time deadlines and, at the same time, to produce the best experience for end-users. Software product family architectures add additional requirements to the evaluation process. In this case, the evaluation includes the analysis of the optimizations and tradeoffs across all products in the family. Performance evaluation of software product family architectures requires knowledge and a clear understanding of different domains: software architecture assessments, software performance and software product family architecture. We have used a scenario-driven approach to evaluate performance and dynamic memory management efficiency in one Nokia software product family architecture. In this paper we present two case studies. Furthermore, we discuss the implications and tradeoffs of software performance against evolvability and maintainability in software product family architectures.
8.
This paper presents the real-time software design and the data transmission, storage, and processing methods for a class of measurement and control devices for safety-critical systems, built mainly around a microcontroller, a PSD, and I2C-interface memory chips. Experimental evaluation shows that this design approach is reliable, flexible, and efficient, and offers good openness and generality.
9.
The design of a fault-tolerant distributed, real-time, embedded system with safety-critical concerns requires the use of formal languages. In this paper, we present the foundations of a new software engineering method for real-time systems that enables the integration of semiformal and formal notations. This new software engineering method is mostly based upon the “COntinuuM” co-modeling methodology that we have used to integrate architecture models of real-time systems (Perseil and Pautet in 12th International conference on engineering of complex computer systems, ICECCS, IEEE Computer Society, Auckland, pp 371–376, 2007) (so we call it “Method C”), and a model-driven development process (ISBN 978-0-387-39361-2 in: From model-driven design to resource management for distributed embedded systems, Springer, chap. MDE benefits for distributed, real time and embedded systems, 2006). The method will be tested in the design and development of integrated modular avionics (IMA) frameworks, with DO178, DO254, DO297, and MILS-CC requirements.
10.
Horst Kesselmeier Inga Tschiersch Klaus Henning Beate Stoffels Sebastian Kutscha 《AI & Society》1998,12(1-2):55-63
Today technology design can no longer be understood as a design process on a greenfield site. Design and implementation of new technology are always dependent on existing technology and the way it is used by people. In this respect, software engineering has also taken on the characteristics of normal technology design, taking existing computer systems into account. Experience shows that the conditions and needs of such software reengineering projects are highly complex and differ in their special characteristics, ranging from the quality of existing system documentation to the organizational structures of the computer departments concerned. The Task-Artifact Cycle presented here gives a suitable reengineering approach emphasizing both analysis and design in software reengineering.
11.
12.
Safety-critical systems are evolving into complex, networked, and distributed systems. As a result of the high interconnectivity among all networked systems and of potential security threats, security countermeasures need to be incorporated. Nonetheless, although cutting-edge security measures, such as the latest recommended encryption algorithms, are adopted and incorporated during system development, these protection mechanisms may become obsolete over the system's long operational life. New security flaws and bugs are continuously detected. Software updates are then essential to restore the security level of the system. However, system shutdowns may not be acceptable when high availability is required. As expressed by the European Union Agency for Network and Information Security (ENISA), “the research in the area of patching and updating equipment without disruption of service and tools” is needed. In this article, a novel live updating approach for zero-downtime safety-critical systems named Cetratus is presented. Cetratus, which is based on quarantine-mode execution and monitoring, enables the update of non-safety-critical software components while running, without compromising the safety integrity level of the system. The focus of this work lies on the incorporation of leading-edge security mechanisms while safety-related software components remain untouched. Other non-safety-related software components could also be updated.
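The quarantine-mode idea can be caricatured in a few lines: run the updated component alongside the trusted one, compare their outputs during a probation window, and promote the update only if no divergence is observed. This is a toy sketch under those assumptions, not Cetratus's actual mechanism, which must also handle state transfer and real-time partitioning.

```python
# Toy quarantine-style update: the trusted component keeps serving
# all requests; the candidate runs in the shadow during a probation
# window and is promoted only if it never diverged.

def quarantined_update(trusted, candidate, inputs, window=100):
    for i, x in enumerate(inputs):
        out = trusted(x)               # trusted output is always used
        if i < window and candidate(x) != out:
            return trusted             # divergence: reject the update
    return candidate                   # probation passed: promote

old = lambda x: x * 2
new_ok = lambda x: x + x               # behaviourally equivalent
new_bad = lambda x: x * 3              # buggy candidate

print(quarantined_update(old, new_ok, range(200)) is new_ok)   # → True
print(quarantined_update(old, new_bad, range(200)) is old)     # → True
```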
13.
Real-time embedded systems are spreading to more and more new fields and their scope and complexity have grown dramatically in the last few years. Nowadays, real-time embedded computers or controllers can be found everywhere, both in very simple devices used in everyday life and in professional environments. Real-time embedded systems have to take into account robustness, safety and timeliness. The most widely used schedulability analysis is the worst-case response time test proposed by Joseph and Pandya (Comput J 29:390–395, 1986). This test provides a binary response (yes/no) indicating whether the processes will meet their corresponding deadlines or not. Nevertheless, sometimes the real-time designer might want to know, more exactly, the probability of the processes meeting their deadlines, in order to assess the risk of a failed schedule depending on the critical requirements of the processes. This paper presents RealNet, a neural network architecture that generates schedules from the timing requirements of a real-time system. The RealNet simulator provides the designer, after iterating and averaging over some trials, an estimation of the probability that the system will not meet its deadlines. Moreover, knowledge of the critical processes in these schedules allows the designer to decide whether changes in the implementation are required. This revised version was published online in November 2004 with a correction to the accepted date.
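The Joseph–Pandya test cited above computes each task's worst-case response time by fixed-point iteration over the interference from higher-priority tasks. A minimal sketch for fixed-priority tasks with deadlines equal to periods (the task set itself is illustrative):

```python
import math

# Worst-case response time analysis (Joseph & Pandya, 1986):
#   R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j,
# solved by fixed-point iteration. tasks = [(C, T)] listed in
# decreasing priority order; None means the task can miss its deadline.

def response_times(tasks):
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            r_next = c_i + sum(math.ceil(r / t) * c for c, t in tasks[:i])
            if r_next == r or r_next > t_i:
                break
            r = r_next
        results.append(r_next if r_next <= t_i else None)
    return results

# Three tasks under rate-monotonic priorities.
print(response_times([(1, 4), (2, 6), (3, 12)]))  # → [1, 3, 10]
```

All three response times stay within their periods, so the set passes the yes/no test; the paper's point is that this binary answer says nothing about failure probability, which RealNet estimates instead.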
14.
One of the most critical phases of software engineering is requirements elicitation and analysis. Success in a software project is influenced by the quality of requirements and their associated analysis, since their outputs contribute to higher-level design and verification decisions. Real-time software systems are event driven and contain temporal and resource limitation constraints. Natural-language-based specification and analysis of such systems are then limited to identifying functional and non-functional elements only. In order to design an architecture, or to be able to test and verify these systems, a comprehensive understanding of dependencies, concurrency, response times, and resource usage is necessary. Scenario-based analysis techniques provide a way to decompose requirements to understand the said attributes of real-time systems. However, they are in themselves inadequate for providing support for all real-time attributes. This paper discusses and evaluates the suitability of certain scenario-based models in a real-time software environment and then proposes an approach, based on timed automata, that constructs a formalised view of scenarios to generate timed specifications. This approach represents the operational view of scenarios with the support of a formal representation that is needed for real-time systems. Our results indicate that models with notations and semantic support for representing the temporal and resource usage of scenarios provide a better analysis domain. H. Saiedian is a member of the Information & Telecommunication Technology Center at the University of Kansas. His research was partially supported by a grant from the National Science Foundation (NSF).
15.
A software complexity model of object-oriented systems
A model for the emerging area of software complexity measurement of OO systems is required for the integration of measures defined by various researchers and to provide a framework for continued investigation. We present a model, based on the literature of OO systems and of software complexity for structured systems. The model defines the software complexity of OO systems at the variable, method, object, and system levels. At each level, measures are identified that account for the cohesion and coupling aspects of the system. The perceptions of complexity reported by users of OO techniques provide support for the levels and measures.
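Level-specific coupling and cohesion measures of the kind the model calls for can be sketched at the object (class) level: a coupling count over distinct external classes referenced, and an LCOM-style cohesion ratio over method pairs. The metric definitions below are illustrative stand-ins, not the measures defined in the paper.

```python
from itertools import combinations

# Toy class-level measures: coupling counts distinct other classes a
# class's methods reference; cohesion is the fraction of method pairs
# that share at least one instance attribute (LCOM-style ratio).

def coupling(refs):
    # refs: {method_name: set of external class names referenced}
    return len(set().union(*refs.values()))

def cohesion(uses):
    # uses: {method_name: set of own attributes accessed}
    pairs = list(combinations(uses.values(), 2))
    if not pairs:
        return 1.0                     # single method: trivially cohesive
    shared = sum(1 for a, b in pairs if a & b)
    return shared / len(pairs)

refs = {"read": {"Sensor"}, "log": {"Logger", "Clock"}}
uses = {"read": {"buf"}, "log": {"buf", "ts"}, "reset": {"ts"}}
print(coupling(refs))   # → 3
print(cohesion(uses))   # 2 of 3 method pairs share an attribute
```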
16.
Derek Messie Mina Jung Jae C. Oh Shweta Shetty Steven Nordstrom Michael Haney 《Artificial Intelligence Review》2006,25(4):299-312
This paper describes a comprehensive prototype of large-scale fault adaptive embedded software developed for the proposed Fermilab BTeV high energy physics experiment. Lightweight self-optimizing agents embedded within Level 1 of the prototype are responsible for proactive and reactive monitoring and mitigation based on specified layers of competence. The agents are self-protecting, detecting cascading failures using a distributed approach. Adaptive, reconfigurable, and mobile objects for reliability are designed to be self-configuring to adapt automatically to dynamically changing environments. These objects provide a self-healing layer with the ability to discover, diagnose, and react to discontinuities in real-time processing. A generic modeling environment was developed to facilitate design and implementation of hardware resource specifications, application data flow, and failure mitigation strategies. Level 1 of the planned BTeV trigger system alone will consist of 2500 DSPs, so the number of components and intractable fault scenarios involved make it impossible to design an ‘expert system’ that applies traditional centralized mitigative strategies based on rules capturing every possible system state. Instead, a distributed reactive approach is implemented using the tools and methodologies developed by the Real-Time Embedded Systems group.
17.
Abdeslam En-Nouaary 《Software Quality Journal》2008,16(1):3-22
Real-time systems (RTSs) are used in different domains such as telephone switching systems, air traffic control systems and patient monitoring systems. The behavior of RTSs is time-sensitive; that is, RTSs interact with their environment with input and output events under time constraints. The violation of such time constraints is the main cause of the misbehavior of RTSs, and may result in severe damage to human lives and the environment [Mandrioli, D., Morasca, S., & Morzenti, A. 1995. ACM Transactions on Computer Systems, 13(4), 365–398]. To prevent failures in RTSs, we must verify that the implementation of an RTS is correct before its deployment. Testing is one of the formal techniques that can be used to achieve this goal. It consists of three main phases: test generation, test execution, and test results analysis. This paper presents a test case generation method for RTSs modeled as Timed Input Output Automata (TIOA). The approach proceeds in two steps. First, the TIOA describing the system under test is sampled to construct a subautomaton that is easily testable (i.e., easy to generate test cases from). Then, the resulting subautomaton is traversed to generate test cases. Our method is scalable in the sense that it generates a small number of test cases even when the specifications are large. Moreover, the test cases derived by our method are executable (i.e., they can be run on any error-free implementation of the system under test).
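The sample-then-traverse scheme can be illustrated with a toy trace generator: each transition guard is sampled at a single representative delay (here its lower bound), and every bounded-depth path through the automaton becomes one test case. The automaton, its action labels, and the sampling rule are hypothetical; the paper's subautomaton construction is more involved than this sketch.

```python
# Toy trace generation from a timed automaton: a transition is
# (source, action, (lo, hi) delay guard, target). Only the guard's
# lower bound is sampled, and every path of up to `depth` transitions
# becomes one test case: a list of (delay, action) pairs.

def generate_tests(transitions, start, depth):
    tests = []
    def walk(state, trace):
        if len(trace) == depth:
            tests.append(trace)
            return
        extended = False
        for src, action, (lo, _hi), dst in transitions:
            if src == state:
                extended = True
                walk(dst, trace + [(lo, action)])
        if not extended and trace:
            tests.append(trace)            # dead end: emit as-is
    walk(start, [])
    return tests

ta = [
    ("idle", "press?",  (0, 5), "busy"),
    ("busy", "done!",   (1, 3), "idle"),
    ("busy", "cancel?", (0, 2), "idle"),
]
for t in generate_tests(ta, "idle", 2):
    print(t)   # one test case (delay/action sequence) per line
```

Sampling one delay per guard is what keeps the number of generated cases small; richer sampling (e.g. boundary values lo and hi) trades test-suite size for fault coverage.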
18.
The design and functional complexity of medical devices have increased during the past 50 years, evolving from the use of a metronome circuit for the initial cardiac pacemaker to functions that include electrocardiogram analysis, laser surgery, and intravenous delivery systems that adjust dosage based on patient feedback. As device functionality becomes more intricate, concerns arise regarding efficacy, safety, and reliability. It thus becomes imperative to adopt a standard or methodology to ensure that the possibility of any defect or malfunction in these devices is minimized. It is with these facts in view that regulatory bodies are interested in investigating mechanisms to certify safety-critical medical devices. These organizations advocate the use of formal methods techniques to evaluate safety-critical medical systems. However, the use of formal methods is keenly debated, with most manufacturers claiming that they are arduous and time-consuming. In this paper we describe our experience in analyzing the requirements documents for the computer-aided resuscitation algorithm (CARA) designed by the Resuscitative Unit of the Walter Reed Army Institute of Research (WRAIR). We present our observations from two different angles – that of a nonbeliever in formal methods and that of a practitioner of formal methods. For the former we catalog the effort required by a novice user of formal methods tools to carry out an analysis of the requirements documents. For the latter we address issues related to the choice of designs, errors discovered in the requirements, and the tool support available for analyzing requirements.
19.
《Journal of Systems Architecture》2015,61(2):82-111
Modern automation systems have to cope with large amounts of sensor data to be processed, stricter security requirements, heterogeneous hardware, and an increasing need for flexibility. The challenges for tomorrow’s automation systems require the software architectures of today’s real-time controllers to evolve. This article presents FASA, a modern software architecture for next-generation automation systems. FASA provides concepts for scalable, flexible, and platform-independent real-time execution frameworks, which also provide advanced features such as software-based fault tolerance and high degrees of isolation and security. We show that FASA caters for robust execution of time-critical applications even in parallel execution environments such as multi-core processors. We present a reference implementation of FASA that controls a magnetic levitation device. This device is sensitive to any disturbance in its real-time control and thus provides a suitable validation scenario. Our results show that FASA can sustain its advanced features even in high-speed control scenarios at 1 kHz.
20.
Many of today’s complex computer applications are being modeled and constructed using the principles inherent to real-time distributed object systems. In response to this demand, the Object Management Group’s (OMG) Real-Time Special Interest Group (RT SIG) has worked to extend the Common Object Request Broker Architecture (CORBA) standard to include real-time specifications. This group’s most recent efforts focus on the requirements of dynamic distributed real-time systems. One open problem in this area is resource access synchronization for tasks employing dynamic priority scheduling.
This paper presents two resource synchronization protocols that meet the requirements of dynamic distributed real-time systems as specified by Dynamic Scheduling Real-Time CORBA 2.0 (DSRT CORBA). The proposed protocols can be applied to both Earliest Deadline First (EDF) and Least Laxity First (LLF) dynamic scheduling algorithms, allow distributed nested critical sections, and avoid unnecessary runtime overhead. These protocols are based on (i) distributed resource preclaiming that allocates resources in the message-based distributed system for deadlock prevention, (ii) distributed priority inheritance that bounds local and remote priority inversion, and (iii) distributed preemption ceilings that delimit the priority inversion time further.
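Mechanism (ii) builds on classic priority inheritance, which can be sketched in a few lines for the uniprocessor case: when a task blocks on a lock held by a lower-priority task, the holder temporarily inherits the blocker's higher priority, bounding the inversion. The class names below are illustrative; the paper's distributed variants additionally bound remote inversion, which this single-node toy does not model.

```python
# Minimal single-node priority-inheritance sketch.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base = self.current = priority

class Lock:
    def __init__(self):
        self.holder = None
        self.waiters = []

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True                # got the lock
        self.waiters.append(task)
        # priority inheritance: boost the holder if a higher-priority
        # task is now blocked on it
        self.holder.current = max(self.holder.current, task.current)
        return False                   # blocked

    def release(self):
        self.holder.current = self.holder.base   # drop inherited priority
        self.holder = self.waiters.pop(0) if self.waiters else None

low, high = Task("low", priority=1), Task("high", priority=10)
lock = Lock()
lock.acquire(low)            # low holds the lock
lock.acquire(high)           # high blocks; low inherits priority 10
print(low.current)           # → 10
lock.release()               # low's priority restored; high gets the lock
print(low.current, lock.holder.name)   # → 1 high
```

A preemption-ceiling variant (mechanism iii) would instead raise the holder to the lock's precomputed ceiling at acquire time, shortening the inversion window further.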
Chen Zhang is an Assistant Professor of Computer Information Systems at Bryant University. He received his M.S. and Ph.D. in Computer Science from the University of Alabama in 2000 and 2002, respectively, and a B.S. from Tsinghua University, Beijing, China. Dr. Zhang’s primary research interests fall into the areas of distributed systems and telecommunications. He is a member of ACM, IEEE and DSI.
David Cordes is a Professor of Computer Science at the University of Alabama; he has also served as Department Head since 1997. He received his Ph.D. in Computer Science from Louisiana State University in 1988, an M.S. in Computer Science from Purdue University in 1984, and a B.S. in Computer Science from the University of Arkansas in 1982. Dr. Cordes’s primary research interests fall into the areas of software engineering and systems. He is a member of ACM and a Senior Member of IEEE.