Similar Documents
1.
In this paper we give a formal definition of the requirements translation language Behavior Trees. This language has been used with success in industry to systematically translate large, complex, and often erroneous requirements documents into a structured model of the system. It contains a mixture of state-based manipulations, synchronisation, message passing, and parallel, conditional, and iterative control structures. The formal semantics of a Behavior Tree is given via a translation to a version of Hoare’s process algebra CSP, extended with state-based constructs such as guards and updates, and a message passing facility similar to that used in publish/subscribe protocols. We first provide the extension of CSP and its operational semantics, which preserves the meaning of the original CSP operators, and then the Behavior Tree notation and its translation into the extended version of CSP.
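To give the flavour of such a state-based extension, two operational rules of the kind the semantics needs, for a hypothetical guard prefix [g]P and update prefix {x := e}P (notation assumed here, not quoted from the paper), might read:

```latex
% Guard: an internal step allowed only in states \sigma satisfying g.
\frac{\sigma \models g}{([g]\,P,\ \sigma) \xrightarrow{\ \tau\ } (P,\ \sigma)}
\qquad
% Update: an internal step that rebinds x to the value of e in \sigma.
\frac{}{(\{x := e\}\,P,\ \sigma) \xrightarrow{\ \tau\ } (P,\ \sigma[x \mapsto \llbracket e \rrbracket \sigma])}
```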

2.
钟学燕, 岳辉, 周国华. 《计算机工程》(Computer Engineering), 2006, 32(17): 150-152
Starting from the definition of requirements volatility risk, this paper summarizes the causes of requirements volatility and its impact on software projects, surveys several representative risk analysis and assessment methods, points out the shortcomings of current research, and proposes directions for further study.

3.
Behavior Trees are a graphical notation used for formalising functional requirements, and have been successfully applied to several industrial case studies. However, the standard notation does not support the concept of time, and consequently its application is limited to non-real-time systems. To overcome this limitation we extend the notation to timed Behavior Trees. We provide an operational semantics which is based on timed automata, and thus serves as a formal basis for the translation of timed Behavior Trees into the input notation of the timed model checker UPPAAL. System-level timing properties of a Behavior Tree model can then be automatically verified using UPPAAL. Based on the notational extensions with model checking support, we introduce timed Failure Mode and Effects Analysis, a process for identifying cause-consequence relationships between component failures and system hazards in real-time safety critical systems.
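As a rough illustration of the timed-automata semantics underlying such a translation, the sketch below (names and structure hypothetical, not the paper's encoding) shows the basic mechanics of clock guards, delay, and resets:

```python
# Minimal timed-automaton step: clock guards, delay, and resets (illustrative only).

def can_fire(clocks, guard):
    """guard: dict clock -> (low, high); every clock value must lie in its range."""
    return all(lo <= clocks[c] <= hi for c, (lo, hi) in guard.items())

def delay(clocks, d):
    """All clocks advance uniformly by d time units."""
    return {c: v + d for c, v in clocks.items()}

def fire(clocks, resets):
    """Taking an edge resets the named clocks to zero."""
    return {c: (0.0 if c in resets else v) for c, v in clocks.items()}

clocks = {"x": 0.0, "y": 0.0}
clocks = delay(clocks, 2.5)
guard = {"x": (2.0, 5.0)}           # edge enabled when 2 <= x <= 5
if can_fire(clocks, guard):
    clocks = fire(clocks, {"y"})    # take the edge, reset clock y
```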

4.
5.
Susan, James, Dan, Gerald. Journal of Systems and Software, 2009, 82(10): 1568-1577
This paper introduces an executable system dynamics simulation model developed to help project managers comprehend the complex impacts of requirements volatility on a software development project. The simulator extends previous research and adds results from an empirical survey, including over 50 new parameters derived from the associated survey data, to a base model. The paper discusses detailed results from two cases that show significant cost, schedule, and quality impacts as a result of requirements volatility. The simulator can be used as an effective tool to demonstrate the complex set of factor relationships and effects related to requirements volatility.
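A toy stock-and-flow fragment in the spirit of such a simulator (all parameters and equations invented for illustration) might integrate the effect of a volatility rate on a rework backlog:

```python
# Illustrative system-dynamics fragment: requirements volatility feeding rework.
# All parameters and equations are invented for illustration.

dt = 0.25                      # time step (weeks)
volatility_rate = 0.08         # fraction of requirements changed per week
rework_per_change = 1.5        # person-weeks of rework per changed requirement
requirements = 400.0           # stock: current requirements count
rework_backlog = 0.0           # stock: accumulated rework (person-weeks)

for step in range(int(52 / dt)):
    changes = volatility_rate * requirements * dt      # flow: changed requirements
    rework_backlog += changes * rework_per_change      # inflow to the rework stock
    rework_backlog -= min(rework_backlog, 10.0 * dt)   # outflow: team rework capacity

print(f"rework backlog after a year: {rework_backlog:.1f} person-weeks")
```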

6.
Information systems security issues have usually been considered only after the system has been developed completely, and rarely during its design, coding, testing or deployment. However, the advisability of considering security from the very beginning of system development, and in particular during the requirements specification phase, has recently begun to be appreciated. We present a practical method to elicit and specify system and software requirements, including a repository of reusable requirements, a spiral process model, and a set of requirements document templates. In this paper, the method is focused on the security of information systems; thus, the reusable requirements repository contains all the requirements taken from MAGERIT, the Spanish public administration risk analysis and management method, which conforms to ISO 15408, the Common Criteria framework. Any information system including these security requirements must therefore pass a risk analysis and management study performed with MAGERIT. The requirements specification templates are hierarchically structured and are based on IEEE standards. Finally, we present a case study of a system in our regional administration for managing state subsidies.
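One might picture a repository entry along these lines (the fields are hypothetical, loosely following the hierarchical, IEEE-inspired templates described):

```python
# Hypothetical shape of a reusable security-requirement repository entry.
from dataclasses import dataclass, field

@dataclass
class ReusableRequirement:
    req_id: str                      # position in the hierarchical template
    text: str                        # parameterised requirement statement
    source: str = "MAGERIT"          # risk analysis method it was taken from
    threats: list[str] = field(default_factory=list)
    children: list["ReusableRequirement"] = field(default_factory=list)

access_control = ReusableRequirement(
    req_id="SEC-3.2",
    text="The system shall authenticate users before granting access to {asset}.",
    threats=["unauthorized access"],
)
```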

7.
段富海, 韩崇昭, 赵骥. 《计算机仿真》(Computer Simulation), 2002, 19(2): 36-38, 42
Building on a brief overview of the basic framework and key technologies of virtual production systems, this paper analyzes the behavior of virtual production systems and of the objects within them, proposes a mechanism for realizing the dynamic behavior of virtual production, and implements behavior control of the virtual production system using WTK. Finally, it focuses on the simulation loop flow within the implementation mechanism and its functions.

8.
Appropriate maintenance technologies that facilitate model consistency in distributed simulation systems are relevant but generally unavailable. To resolve this problem, we analyze the main factors that cause model inconsistency. The analysis methods used for traditional distributed simulations are mostly empirical and qualitative, and disregard the dynamic characteristics of factor evolution during model operation. Furthermore, distributed simulation applications (DSAs) are rapidly evolving in terms of large-scale, distributed, service-oriented, compositional, and dynamic features. Such developments make it difficult to use traditional analysis methods in DSAs to analyze the effects of these factors on simulation models. To solve these problems, we construct a dynamic evolution mechanism of model consistency, called the connected model hyper-digraph (CMH). CMH is developed using formal methods that accurately specify the evolutional processes and activities of models (i.e., self-evolution, interoperability, compositionality, and authenticity). We also develop an algorithm of model consistency evolution (AMCE) based on CMH to quantitatively and dynamically evaluate influencing factors. Experimental results demonstrate that non-combination (33.7% on average) is the most influential factor, non-single-directed understanding (26.6%) is the second most influential, and non-double-directed understanding (5.0%) is the least influential. Unlike previous analysis methods, AMCE provides good feasibility and effectiveness. This research can serve as guidance for designers of consistency maintenance technologies toward achieving a high level of consistency in future DSAs.
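The general idea can only be gestured at here; one entirely hypothetical way to score a factor's influence over a connected-model digraph (not the paper's AMCE algorithm) is to count how often each factor labels an edge ending in an inconsistent model state:

```python
# Entirely illustrative: score each factor by how often it labels an edge
# that ends in an inconsistent model state, a crude proxy for its influence.
from collections import Counter

# edges: (from_model, to_model, factor) in a connected-model digraph
edges = [
    ("m1", "m2", "non-combination"),
    ("m1", "m3", "non-combination"),
    ("m2", "m3", "non-combination"),
    ("m2", "m4", "non-single-directed understanding"),
    ("m3", "m4", "non-double-directed understanding"),
]
inconsistent = {"m3", "m4"}          # model states judged inconsistent

hits = Counter(f for _, dst, f in edges if dst in inconsistent)
total = sum(hits.values())
for factor, n in hits.most_common():
    print(f"{factor}: {n / total:.1%} of inconsistency-causing edges")
```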

9.
This paper presents an automated tool for scenario-driven requirements engineering in which scenario analysis plays the central role. It is shown that a scenario can be described by three views (data flow, entity relationship and state transition models) obtained through slight extensions of the classic data flow, entity relationship and state transition diagrams. The notions of consistency and completeness of a set of scenarios are formally defined in graph-theoretic terms and automatically checked by the tool. The tool supports automatic validation of requirements definitions by analysing the consistency between a set of scenarios and requirements models. It also supports automatic synthesis of requirements models from a set of scenarios. Its utility and usefulness are demonstrated by a non-trivial example in the paper. Case studies of the tool are also presented.
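One simple graph-level completeness check of this kind (invented here, not necessarily the tool's definition) is that every state reachable in the combined state-transition view is exercised by at least one scenario:

```python
# Illustrative completeness check: every reachable state of the requirements
# model should be covered by some scenario. The definitions are invented here.

transitions = {("idle", "start"): "running", ("running", "stop"): "idle"}
reachable = {"idle"}
frontier = ["idle"]
while frontier:                       # compute the reachable state set
    s = frontier.pop()
    for (src, _), dst in transitions.items():
        if src == s and dst not in reachable:
            reachable.add(dst)
            frontier.append(dst)

scenarios = [["idle", "running", "idle"], ["idle", "running"]]
covered = {state for sc in scenarios for state in sc}
missing = reachable - covered
print("complete" if not missing else f"uncovered states: {missing}")
```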

10.
In traditional traffic accident simulation methods, the main parameters must be determined by experience. If a problem is found during the simulation, the model has to be changed by amending the input parameters and re-running the simulation. As a result, the accident reconstruction can differ greatly from the real accident process. This article discusses a coupled calculation method that combines traffic accident simulation with simulation optimization, and confirms the method's validity through a reliability analysis of the simulation results. Finally, through simulation and optimization analysis of actual traffic accident cases, the reconstruction results show a deviation of no more than 10% and match the accident scene investigation data well. The integrated simulation and optimization calculation method provides an effective solution to the problems of traditional traffic accident simulation.
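The coupling can be pictured as an optimizer repeatedly calling the simulator and adjusting its input parameters to minimize the mismatch with scene evidence; a schematic version (the simulate() function is a trivial stand-in, not a real accident simulator):

```python
# Schematic coupling of accident simulation and optimization: the optimizer
# tunes input parameters to minimize deviation from scene evidence.
from scipy.optimize import minimize

evidence = {"skid_length": 23.0, "rest_position": 31.5}   # measured at the scene

def simulate(params):
    v0, mu = params                       # initial speed, friction coefficient
    skid = v0 ** 2 / (2 * 9.81 * mu)      # toy physics, illustration only
    rest = skid * 1.35
    return {"skid_length": skid, "rest_position": rest}

def deviation(params):
    sim = simulate(params)
    return sum((sim[k] - evidence[k]) ** 2 for k in evidence)

result = minimize(deviation, x0=[15.0, 0.7], method="Nelder-Mead")
print("estimated initial speed and friction:", result.x)
```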

11.
The most common method to validate a DEVS model against its requirements is to simulate it several times under different conditions with some simulation tool, comparing the behavior of the model with what the system is supposed to do. The number of different scenarios to simulate is usually infinite, so selecting them becomes a crucial task. In practice, this selection is made following the experience or intuition of an engineer. Here we present a family of criteria to conduct DEVS model simulations in a disciplined way, covering the most significant simulations in order to increase confidence in the model. This is achieved by analyzing the mathematical representation of the DEVS model, so that part of the validation process can be automated.
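For reference, the mathematical structure such criteria analyze is the DEVS atomic model (S, X, Y, delta_int, delta_ext, lambda, ta); an informal skeleton, not any specific DEVS tool's API:

```python
# Skeleton of a DEVS atomic model: states, time advance, external and
# internal transitions, and the output function. Informal sketch only.

class Processor:
    def __init__(self):
        self.state = ("idle", None)       # phase and the job being processed

    def ta(self):                         # time advance in the current state
        return 5.0 if self.state[0] == "busy" else float("inf")

    def delta_ext(self, e, x):            # external transition on input x
        if self.state[0] == "idle":
            self.state = ("busy", x)

    def delta_int(self):                  # internal transition after ta() expires
        self.state = ("idle", None)

    def output(self):                     # lambda: output emitted before delta_int
        return self.state[1]
```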

12.
In this paper we summarize our experiences in building and integrating new-generation, formal-methods-based computer-aided software engineering (CASE) tools to yield pragmatic improvements in software engineering processes in the telecommunications industry. We define an accelerated development methodology (ADM) for the specification, design, testing and re-engineering of telecommunications software. We identify two of the most significant barriers to the adoption of tools and formal methods to speed up software development, namely the requirements engineering barrier and the legacy code re-engineering barrier, and show how the ADM methodology helps to overcome these barriers and improve time-to-market for telecommunications software. Our ADM methodology is based on the most widely accepted formal languages standardized by the International Telecommunications Union (ITU).

This paper emphasizes the key components of our ADM methodology and their placement within the most common software engineering processes.

Author Keywords: Time-to-market; SDL tools; Formal methods; Software engineering processes; Telecommunications; Accelerated development

13.
Requirements analysis defines the goals and evaluation criteria of system design. We introduce a methodology for requirements analysis for customization based on large-sample interactive testing, with the premise that analysis of user behaviour with prototypes leads to requirements discoveries. The methodology uses a relatively large sample (1) to identify relevant user subgroups, (2) to observe significant, empirically determined group differences in the context of task and tool use and (3) to estimate the groups' different requirements and derive design implications. Between 20 and 50 participants are used per test, rather than the three to five often recommended for user testing. Statistical relationships are investigated between subgroups in terms of background variables, questionnaire items, performance data, and coded verbal statements. Customization requirements are inferred from the significant differences observed between empirically determined groups. The methodological framework is illustrated in a case study involving the use of clinical resources on handheld devices by three groups of physicians. The groups were found to have different needs and preferences for evidence-based resources and device form factor, pointing to both opportunities and necessities for group customization requirements.

Relevance to industry

In safety-critical domains such as health care, it is essential to assess user needs and preferences regarding devices and systems to inform appropriate customizations. We present a methodological framework and a case study that demonstrate how large-sample user testing can supplement typical methods of requirements analysis to provide contextualized, quantitative accounts of group differences and customization requirements.
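The kind of group comparison involved can be illustrated with a one-way ANOVA over task-performance times for three empirically determined subgroups (the data below are invented; the real study also drew on questionnaire items and coded verbal statements):

```python
# Illustrative subgroup comparison: do three physician groups differ in task
# completion time? Data are invented for illustration.
from scipy.stats import f_oneway

group_a = [41, 38, 45, 50, 39, 44]   # seconds per task, group A
group_b = [55, 61, 58, 63, 52, 60]   # group B
group_c = [40, 42, 37, 46, 43, 41]   # group C

stat, p = f_oneway(group_a, group_b, group_c)
if p < 0.05:
    print(f"groups differ (F={stat:.2f}, p={p:.3f}); derive group-level requirements")
```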

14.
15.
Coloured Petri Nets (CPNs) is a language for the modelling and validation of systems in which concurrency, communication, and synchronisation play a major role. Coloured Petri Nets is a discrete-event modelling language combining Petri nets with the functional programming language Standard ML. Petri nets provide the foundation of the graphical notation and the basic primitives for modelling concurrency, communication, and synchronisation. Standard ML provides the primitives for the definition of data types, describing data manipulation, and for creating compact and parameterisable models. A CPN model of a system is an executable model representing the states of the system and the events (transitions) that can cause the system to change state. The CPN language makes it possible to organise a model as a set of modules, and it includes a time concept for representing the time taken to execute events in the modelled system. CPN Tools is an industrial-strength computer tool for constructing and analysing CPN models. Using CPN Tools, it is possible to investigate the behaviour of the modelled system using simulation, to verify properties by means of state space methods and model checking, and to conduct simulation-based performance analysis. User interaction with CPN Tools is based on direct manipulation of the graphical representation of the CPN model using interaction techniques, such as tool palettes and marking menus. A license for CPN Tools can be obtained free of charge, also for commercial use.
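Stripped of colours and ML inscriptions, the basic Petri-net firing rule that CPNs build on can be sketched as:

```python
# Basic (uncoloured) Petri-net firing rule: a transition is enabled when every
# input place holds enough tokens; firing consumes inputs and produces outputs.

marking = {"p_wait": 2, "p_resource": 1, "p_done": 0}
transition = {"inputs": {"p_wait": 1, "p_resource": 1},
              "outputs": {"p_done": 1}}

def enabled(marking, t):
    return all(marking[p] >= n for p, n in t["inputs"].items())

def fire(marking, t):
    for p, n in t["inputs"].items():
        marking[p] -= n
    for p, n in t["outputs"].items():
        marking[p] += n

if enabled(marking, transition):
    fire(marking, transition)
print(marking)   # {'p_wait': 1, 'p_resource': 0, 'p_done': 1}
```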

16.
When a software process is changed, a project manager needs to perform two types of change impact analysis: one to identify the elements of the software process affected by the change, and another to analyze the quantitative impact of the change on project performance. We propose an approach that obtains the affected elements of a software process using process slicing, and then develops a simulation model based on those elements to quantitatively analyze the change through simulation. Process slicing identifies the affected elements of a software process using a process dependency model, which captures activity control dependencies, artifact information dependencies, and role replacement dependencies. We also suggest transformation algorithms to automatically derive the simulation model from the process model containing the affected elements; the quantitative analysis can then be performed by running the simulation model. In addition, we provide a tool to support our approach. We perform a case study to validate the usefulness of our approach; its results show that our approach can reduce the effort needed to identify the elements affected by changes and to examine alternatives for the project.
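The slicing idea can be approximated as reachability over the process dependency model; a simplified sketch in which the three dependency kinds are collapsed into plain edges:

```python
# Simplified process slice: all elements reachable from a changed element
# through the dependency model (control, information, and role dependencies
# are collapsed into plain edges here for illustration).

dependencies = {
    "design_activity": ["design_doc"],
    "design_doc": ["coding_activity"],
    "coding_activity": ["code", "tester_role"],
    "code": ["testing_activity"],
}

def process_slice(changed):
    affected, stack = set(), [changed]
    while stack:
        element = stack.pop()
        for dep in dependencies.get(element, []):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected

print(process_slice("design_activity"))
# these are the elements to carry into the derived simulation model
```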

17.
In this research, we introduce a dual-purpose simulation model that integrates two decision support systems used by the US Postal Service to configure and staff their mail processing and distribution centers (P&DCs). The first system is designed to optimize the daily equipment schedules and the second to optimize the size and composition of the permanent workforce. Large-scale integer programs are solved in both cases. Because some compromise is needed in the time granularity, it is important to have an independent means of validating the results. This is the first purpose of our simulation model. The second involves the generation and validation of labor requirements for a category of workers known as mail handlers. While there is a one-to-one relationship between machine operators and the equipment schedule derived from the mail arrival profiles, no such relationship exists for those responsible for moving the mail between workstations. Neither productivity measures nor formal work rules exist. To resolve this shortcoming, we use simulation again, but this time to estimate mail handler requirements and then to determine whether the weekly schedules derived from the staff optimizer are adequate to meet the facility's service standards. Holistically speaking, the simulation serves as a bridge between the two optimization systems. The procedure is demonstrated with data provided by the Dallas P&DC.
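In the absence of productivity measures or formal work rules, the simulation in effect derives mail handler requirements from the movement workload implied by the equipment schedule; a highly simplified stand-in (the numbers and the containers-per-handler rate are invented):

```python
# Highly simplified stand-in for estimating mail handler requirements from an
# hourly mail-movement workload implied by the equipment schedule.
import math

moves_per_hour = [120, 300, 450, 500, 380, 200]   # containers to move, by hour
rate = 60                                          # containers one handler moves per hour

handlers_needed = [math.ceil(m / rate) for m in moves_per_hour]
print("hourly mail handler requirements:", handlers_needed)
print("peak staffing:", max(handlers_needed))
```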

18.
Recently proposed formal reliability analysis techniques have overcome the inaccuracies of traditional simulation-based techniques but can only handle problems involving discrete random variables. In this paper, we extend the capabilities of existing theorem-proving-based reliability analysis by formalizing several important statistical properties of continuous random variables, such as the second moment and the variance. We also formalize commonly used concepts from reliability theory such as the survival, hazard, cumulative hazard and fractile functions. With these extensions, it is now possible to formally reason about important measures of reliability (the probability of failure, the failure risk and the mean time to failure) associated with the life of a system that operates in an uncertain and harsh environment and is usually continuous in nature. We illustrate the modeling and verification process with the help of examples involving the reliability analysis of essential electronic and electrical system components.
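For a continuous lifetime X with density f and distribution function F, the formalized notions correspond to the standard textbook definitions:

```latex
S(t) = \Pr(X > t) = 1 - F(t)                        % survival function
h(t) = \frac{f(t)}{S(t)}                             % hazard (failure rate)
H(t) = \int_0^t h(u)\,du = -\ln S(t)                 % cumulative hazard
\operatorname{Var}(X) = E[X^2] - (E[X])^2            % variance via the second moment
\mathrm{MTTF} = E[X] = \int_0^{\infty} S(t)\,dt      % mean time to failure
```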

19.
We describe an approach and experimental results in the application of mechanized theorem proving to software requirements analysis. Serving as the test article was the embedded controller for SAFER, a backpack propulsion system used as a rescue device by NASA astronauts. SAFER requirements were previously formalized using the prototype verification system (PVS) during a NASA pilot project in formal methods, details of which appear in a NASA guidebook. This paper focuses on the formulation and proof of properties for the SAFER requirements model. To test the prospects for deductive requirements analysis, we used the PVS theorem prover to explore the upper limits of proof automation. A set of property classes was identified, with matching proof schemes later devised. After developing several PVS proof strategies (essentially prover macros), we obtained fully automatic proofs of 42 model properties. These results demonstrate how customized prover strategies can be used to automate moderate-complexity theorem proving for state machine models.
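A representative property class for such state-machine models is the inductive invariant, whose two proof obligations have a fixed shape well suited to reusable prover strategies (this is the standard schema, not a formula quoted from the SAFER model):

```latex
\forall s.\; \mathit{Init}(s) \Rightarrow P(s)
\qquad\quad
\forall s,\, s'.\; P(s) \wedge \mathit{Trans}(s, s') \Rightarrow P(s')
```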

20.
Value stream mapping (VSM) is a useful tool for describing the manufacturing state, especially for distinguishing between those activities that add value and those that do not. It can help in eliminating non-value activities and reducing the work in process (WIP), thereby increasing the service level. This research follows the guidelines for designing future-state VSM. These guidelines consist of five factors which can be changed simply, without any investment: (1) production unit; (2) pacemaker process; (3) number of batches; (4) production sequence; and (5) supermarket size. The five factors are applied to a fishing net manufacturing system and optimized using experimental design and a simulation optimizing tool. The results show that the future-state maps can increase the service level and reduce WIP by at least 29.41% and 33.92%, respectively. In the present study, lean principles are innovatively adopted to solve a fishing net manufacturing problem that is not well addressed in the literature. In light of the promising empirical results, the proposed methodologies are also readily applicable to similar industries.
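The factor-optimization step can be pictured as a full factorial sweep over the five factors with a simulated evaluation of each configuration (the factor levels and the evaluate() objective below are invented stand-ins for a real simulation run):

```python
# Illustrative full-factorial sweep over the five VSM design factors; the
# evaluate() function stands in for a simulation run of the production line.
from itertools import product

factors = {
    "production_unit": ["piece", "batch"],
    "pacemaker": ["weaving", "finishing"],
    "num_batches": [2, 4, 8],
    "sequence": ["FIFO", "EDD"],
    "supermarket_size": [50, 100, 200],
}

def evaluate(cfg):
    # stand-in objective: pretend smaller batches and supermarkets cut WIP
    return cfg["num_batches"] * 10 + cfg["supermarket_size"]   # lower is better

best = min((dict(zip(factors, combo)) for combo in product(*factors.values())),
           key=evaluate)
print("best configuration:", best)
```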
