Similar Literature
20 similar documents were retrieved.
1.
Requirements description and requirements analysis modeling have long been core activities of requirements engineering, and the two are closely related: automated requirements analysis, modeling, and verification must build on standardized requirements descriptions. This paper proposes an automated analysis and modeling method based on structured descriptions of domain requirements. The system as a whole is described according to a prescribed organizational structure, using a combination of semantically rich sentence patterns and ordinary sentence patterns. Natural language processing techniques and predefined transformation rules are then applied to identify modeling elements in the structured requirements text, enabling automated model construction and finally producing UML graphical analysis results.
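To make the idea of rule-based element identification concrete, the sketch below maps one hypothetical restricted sentence pattern to candidate UML elements. The sentence template, the names and the mapping rules are invented for illustration; they are not the paper's actual sentence patterns or transformation rules.

```python
import re

# A hypothetical restricted sentence pattern of the form
# "The <actor> shall <verb> the <object>."; the paper's actual
# templates and transformation rules are not reproduced here.
PATTERN = re.compile(r"^The (?P<actor>[\w ]+?) shall (?P<verb>\w+) the (?P<object>[\w ]+?)\.$")

def extract_model_elements(requirement: str) -> dict:
    """Map one structured requirement sentence to candidate UML elements."""
    match = PATTERN.match(requirement.strip())
    if not match:
        return {}
    actor = match.group("actor").title().replace(" ", "")
    obj = match.group("object").title().replace(" ", "")
    return {
        "classes": [actor, obj],                      # candidate UML classes
        "operations": {actor: [match.group("verb")]}, # the verb becomes an operation of the actor
        "associations": [(actor, obj)],               # the actor-object link becomes an association
    }

print(extract_model_elements("The ticket system shall validate the booking request."))
# {'classes': ['TicketSystem', 'BookingRequest'], ...}
```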

2.
In this paper, we report on our experiences of using lightweight formal methods for the partial validation of natural language requirements documents. We describe our approach to checking properties of models obtained by shallow parsing of natural language requirements, and apply it to a case study based on part of a NASA specification of the Node Control Software on the International Space Station. The experience reported supports our position that it is feasible and useful to perform automated analysis of requirements expressed in natural language. Indeed, we identified a number of errors in our case study that were also independently discovered and corrected by NASA's Independent Validation and Verification Facility in a subsequent version of the same document, and others that were not discovered. The paper describes the techniques we used, the errors we found and reflects on the lessons learned. Copyright © 2001 John Wiley & Sons, Ltd.

3.
safeDpi: a language for controlling mobile code
safeDpi is a distributed version of the Pi-calculus, in which processes are located at dynamically created sites. Parametrised code may be sent between sites using so-called ports, which are essentially higher-order versions of Pi-calculus communication channels. A host location may protect itself by only accepting code which conforms to a given type associated with the incoming port. We define a sophisticated static type system for these ports, which restricts the capabilities and access rights of any processes launched by incoming code. Dependent and existential types are used to add flexibility, allowing the behaviour of these launched processes, encoded as process types, to depend on the host's instantiation of the incoming code. We also show that a natural contextually defined behavioural equivalence can be characterised coinductively, using bisimulations based on typed actions. The characterisation is based on the idea of knowledge acquisition by a testing environment and makes explicit some of the subtleties of determining equivalence in this language of highly constrained distributed code.

4.
The article introduces an experimental system which produces multilingual semantic translations from relatively short texts from a given context. The system was conceived as an investigation instrument whose characteristics are the following: —the elaboration of a single analyzer and generator able to receive, in the form of data, specific information concerning a national language, within the limits of a given area of application; —the use of an exclusively semantic internal representation, whose formation is derived from “frames” (an object is defined as a list of “attribute-value” couples, permitting recursion); —a single knowledge-base is used for each natural language as initial data (the grammar transmitted was of a semantic-syntactic ATN type); —the structure of grammar used in computer language processing being rarely as well adapted to analysis as to generation, the system itself is provided with the possibility of transforming and reorganizing the information for more efficient use at one stage of processing (thus, the ATNs are used directly in analysis, whereas for the purposes of generation they are “explored” and their information reorganized); —to produce a text, the use of general structuring principles (independent of language) is experimented with. These principles are given in the form of metarules. The application of these metarules to the restructured grammar of a natural language produces specific structuration rules, peculiar to this language. Although the system was conceived for any conceptual area or language, the present knowledge-base of the system (the experimental support) is based on a collection of elementary exercises in three-dimensional geometry written in Rumanian and in French. She does research in the CNRS group C.F. Picard headed by Professor Pitrat, University of Paris 6.

5.
Proving the shalls
Incomplete, inaccurate, ambiguous, and volatile requirements have plagued the software industry since its inception. The convergence of model-based development and formal methods offers developers of safety-critical systems a powerful new approach to the early validation of requirements. This paper describes an exercise conducted to determine if formal methods could be used to validate system requirements early in the lifecycle at reasonable cost. Several hundred functional and safety requirements for the mode logic of a typical flight guidance system were captured as natural language “shall” statements. A formal model of the mode logic was written in the RSMLe language and translated into the NuSMV model checker and the PVS theorem prover using translators developed as part of the project. Each “shall” statement was manually translated into a NuSMV or PVS property and proven using these tools. Numerous errors were found in both the original requirements and the RSMLe model. This demonstrates that formal models can be written for realistic systems and that formal analysis tools have matured to the point where they can be effectively used to find errors before implementation. This project was partially funded by the NASA Langley Research Center under contract NCC1-01001 of the Aviation Safety Program.
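As a worked illustration of the kind of translation described above (the requirement and the variable names are invented here, not taken from the paper's flight guidance requirements), a statement such as "When the autopilot is engaged, the flight guidance system shall be in an active mode" can be rendered as the linear temporal logic property G (AutopilotEngaged → FGSActive), i.e. in every reachable state, engagement of the autopilot implies an active flight guidance system; a model checker such as NuSMV can then verify such a property against the mode-logic model.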

6.
Holonic multiagent systems (HMAS) offer a promising software engineering approach for developing complex open software systems. However, the process of building Multi-Agent Systems (MAS) and HMAS is mostly different from the process of building more traditional software systems, as it introduces new design and development challenges. This paper introduces an agent-oriented software process for engineering complex systems called ASPECS. ASPECS is based on a holonic organisational metamodel and provides a step-by-step guide from requirements to code, allowing the modelling of a system at different levels of detail using a set of refinement methods. This paper details the entire ASPECS development process and provides a set of methodological guidelines for each process activity. A complete case study is also used to illustrate the design process and the associated notations. ASPECS uses UML as a modelling language. Because of the specific needs of agents and holonic organisational design, the UML semantics and notation are used as reference points, but they have been extended by introducing new specific profiles.

7.
As embedded software systems are widely deployed in safety-critical domains such as automotive, nuclear, aviation and aerospace, their failure can cause loss of property, environmental damage and even loss of life, which makes assuring software safety an important part of system development. Traditional safety analysis methods are mainly applied during the requirements analysis and design phases, yet the gap between requirements and design has long been a major challenge in software engineering. Because of this gap, the safety analysis results obtained during requirements analysis are difficult to reflect completely and thoroughly in the software design; the root cause is that current software requirements are described mainly in natural language, which is ambiguous and vague and hard to process automatically. To address this problem, this paper targets component-based embedded software and first proposes a semi-structured, restricted natural language requirements template for requirements specification, which effectively reduces the ambiguity and vagueness of natural language requirements. Then, to reduce the complexity of automated processing, a requirements abstract syntax graph is adopted as an intermediate model to realize the transformation between software requirements specified with the restricted natural language template and AADL models, automatically recording the traceability relations between the two during the process. Finally, the proposed method is implemented as a plugin based on the open-source AADL tool OSATE and validated on a spacecraft Guidance, Navigation and Control (GNC) system.

8.
Ontologies can provide many benefits during information systems development. They can provide domain knowledge to requirement engineers, are reusable software components for web applications or intelligent agent developers, and can facilitate semi-automatic model verification and validation. They also assist in software extensibility, interoperability and reuse. All these benefits critically depend on the provision of a suitable ontology(ies). This paper introduces a semantically-based three-stage approach to assist developers in checking the consistency of the requirements models and choosing the most suitable and relevant ontology(ies) for their development project from a given repository. The early requirements models, documented using the i* language, are converted to a retrieval ontology. The consistency of this retrieval ontology is then checked before being used to identify a set of reusable ontologies that are relevant for the development project. The paper also provides an initial validation of each of the stages.

9.
We present a formal semantics for an object-oriented specification language. The formal semantics is presented as a conservative shallow embedding in Isabelle/HOL, and the language is oriented towards OCL formulae in the context of UML class diagrams. On this basis, we formally derive several equational and tableaux calculi, which form the basis of an integrated proof environment including automatic proof support and support for the analysis of this type of specification. We show applications of our proof environment to data refinement based on an adapted standard refinement notion. Thus, we provide an integrated formal method for refinement-based object-oriented development.

10.
This paper describes a method for validating conceptual models of digital systems derived automatically from requirements expressed in natural language. Because natural language is ambiguous and vague, most statements have multiple interpretations. The approach here is to feed back to the requirements authors visualizations of the interpretations of the requirements that have been translated to semantic networks. The visualization task is a component (the Model Generator) of the ASPIN system for automatically interpreting requirements expressed in natural language and diagrams, analyzing the requirements for consistency and completeness, and automatically generating engineering models in the VHDL language. Visualization is performed in two steps: mapping the semantic networks to compound digraphs followed by placement of the nodes of the digraphs to generate a display in terms of icons representing devices, values, actions and events; and connectives indicating carriers, data flow and control dependency.

11.
Software Requirements Specifications (SRS) have been used to fill the communication gap between systems analysts and the end-users. SRSs should satisfy the needs of both systems analysts and end-users. Non-technical end-users require intelligible SRSs, while systems analysts need more precise, clear and concise SRSs. Object-oriented methods cannot represent temporal relations between events precisely. However, object-oriented principles are widely used in systems analysis and design. Hence, there is a need for a software requirements specification language which supports object-oriented analysis methods, represents temporal knowledge precisely and whose representation scheme resembles natural languages. The specification language presented in this paper, GSL, is designed to meet the above requirements. The language is based on First-order Temporal Logic (FTL), which has temporal operators in addition to classical logical connectives and quantifiers. Since FTL cannot represent relative temporal knowledge and it inherits problems with point-based time models, a new logical connective TAND and a redefined AND connective are used to represent relative temporal knowledge and to solve the problems with FTL. The language employs object-oriented principles: events, conditions, rules and activities can be represented as objects as well as attributes of an object. However, systems analysts can decide whether to use object-oriented conceptual modeling or not. © 1998 John Wiley & Sons, Ltd.

12.
Requirements analysts consider a conceptual model to be an important artifact created during the requirements analysis phase of a software development life cycle (SDLC). A conceptual, or domain, model is a visual model of the requirements domain in focus. Owing to its visual nature, the model serves as a platform for the deliberation of requirements by stakeholders and enables requirements analysts to further refine the functional requirements. Conceptual models may evolve into class diagrams during the design and execution phases of the software project. Even a partially automated conceptual model can save significant time during the requirements phase, by quickening the process of graphical communication and visualization. This paper presents a system to create a conceptual model from functional specifications, written in natural language, in an automated manner. Classes and relationships are automatically identified from the functional specifications. This identification is based on the analysis of the grammatical constructs of sentences, and on Object Oriented principles of design. Extended entity-relationship (EER) notations are incorporated into the class relationships. Optimizations are applied to the identified entities during a post-processing stage, and the final conceptual model is rendered. The use of typed dependencies, combined with rules to derive class relationships, offers a granular approach to the extraction of the design elements in the model. The paper illustrates the model creation process using a standard case study, and concludes with an evaluation of the usefulness of this approach for requirements analysis. The analysis is conducted against both standard published models and conceptual models created by humans, for various evaluation parameters.
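As a rough illustration of how typed dependencies can drive class and relationship identification, the sketch below applies one simplified rule to hand-written dependency triples. The triples stand in for real parser output, and the rule set is far smaller than the grammar-based identification the paper describes.

```python
# A minimal, illustrative sketch of deriving class-diagram elements from
# typed dependencies; the dependency triples below are hand-written stand-ins
# for real parser output, and the rules are simplified.
from collections import defaultdict

# (relation, governor, dependent) triples for:
# "A customer places an order." / "An order contains items."
dependencies = [
    ("nsubj", "places", "customer"),
    ("dobj",  "places", "order"),
    ("nsubj", "contains", "order"),
    ("dobj",  "contains", "items"),
]

classes = set()
relationships = defaultdict(list)

# Rule: the subject and object of a transitive verb become candidate classes,
# and the verb labels an association between them.
verbs = defaultdict(dict)
for rel, gov, dep in dependencies:
    if rel in ("nsubj", "dobj"):
        verbs[gov][rel] = dep.rstrip("s").capitalize()   # naive singularisation

for verb, args in verbs.items():
    if "nsubj" in args and "dobj" in args:
        classes.update([args["nsubj"], args["dobj"]])
        relationships[verb].append((args["nsubj"], args["dobj"]))

print("classes:", sorted(classes))
print("associations:", dict(relationships))
# classes: ['Customer', 'Item', 'Order']
# associations: {'places': [('Customer', 'Order')], 'contains': [('Order', 'Item')]}
```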

13.
This paper describes discourse processing in King Kong, a portable natural language interface. King Kong enables users to pose questions and issue commands to a back-end system. The notion of a discourse is central to King Kong, and underlies much of the intelligent assistance that Kong provides to its users. Kong's approach to modeling discourse is based on the work of Grosz and Sidner (1986). We extend Grosz and Sidner's framework in several ways, principally to allow multiple independent discourse contexts to remain active at the same time. This paper also describes King Kong's method of intention recognition, which is similar to that described in Kautz and Allen (1986) and Carberry (1988). We demonstrate that a relatively simple intention recognition component can be exploited by many other discourse-related mechanisms, for example to disambiguate input and resolve anaphora. In particular, this paper describes in detail the mechanism in King Kong that uses information from the discourse model to form a range of cooperative extended responses to queries in an effort to aid the user in accomplishing her goals. Judith Schaffer Sider received her Bachelor of Arts degree in Computer Science and Linguistics and Cognitive Science from Brandeis University. Since 1987 she has been a member of the technical staff at the MITRE Corporation, where she works on King Kong, the natural language interface under development there. The joint research with John D. Burger described in this volume reflects some of her work in the areas of cooperative responding and plan recognition. John D. Burger is a Project Leader at the MITRE Corporation and an instructor at Boston University. He received a Bachelor of Science degree in Mathematics and Computer Science from Carnegie Mellon University. His research interests lie in the fields of natural language processing and intelligent multimedia interfaces. The joint work with Judith Schaffer Sider described in this volume reflects his interest in making use of discourse models in practical intelligent interfaces.

14.
Goal-oriented Requirements Engineering approaches have become popular in the Requirements Engineering community as they provide expressive modelling languages for requirements elicitation and analysis. However, as a common challenge, such approaches are still struggling when it comes to managing the accidental complexity of their models. Furthermore, those models might be incomplete, resulting in insufficient information for proper understanding and implementation. In this paper, we provide a set of metrics, which are formally specified and have tool support, to measure and analyse the complexity and completeness of goal models, in particular social goal models (e.g. i*). Concerning complexity, the aim is to identify refactoring opportunities to improve the modularity of those models, and consequently reduce their accidental complexity. With respect to completeness, the goal is to automatically detect model incompleteness. We evaluate these metrics by applying them to a set of well-known system models from industry and academia. Our results suggest refactoring opportunities in the evaluated models, and provide a timely feedback mechanism for requirements engineers on how close they are to completing their models.
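The sketch below gives a flavour of such metrics on a toy goal model; the model, the complexity proxy and the incompleteness check are simplified illustrations, not the formally specified metrics of the paper.

```python
# An illustrative sketch of the kind of metrics the paper formalises for
# social goal models; the model below and the two metrics are simplified
# stand-ins, not the paper's actual definitions.

# A toy goal model: each element has a type and a list of refinement links.
goal_model = {
    "Schedule meeting":   {"type": "goal", "refined_by": ["Collect timetables", "Choose slot"]},
    "Collect timetables": {"type": "task", "refined_by": []},
    "Choose slot":        {"type": "goal", "refined_by": []},   # goal left unrefined
}

def refinement_complexity(model: dict) -> float:
    """Average number of refinement links per element (a crude complexity proxy)."""
    links = sum(len(e["refined_by"]) for e in model.values())
    return links / len(model)

def unrefined_goals(model: dict) -> list:
    """Goals with no refinement: a simple incompleteness indicator."""
    return [name for name, e in model.items()
            if e["type"] == "goal" and not e["refined_by"]]

print("avg refinements per element:", refinement_complexity(goal_model))
print("possibly incomplete goals:", unrefined_goals(goal_model))
```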

15.
Temporal logics are commonly used for reasoning about concurrent systems. Model checkers and other finite-state verification techniques allow for automated checking of system model compliance with given temporal properties. These properties are typically specified as linear-time formulae in temporal logics. Unfortunately, the level of inherent sophistication required by these formalisms too often represents an impediment to moving these techniques from “research theory” to “industry practice”. The objective of this work is to facilitate the nontrivial and error-prone task of specifying temporal properties correctly and without expertise in temporal logic. In order to understand the basis of a simple but expressive formalism for specifying temporal properties, we critically analyze visual notations commonly used in practice. Then we present a scenario-based visual language called Property Sequence Chart (PSC) that, in our opinion, addresses the identified shortcomings of these notations by extending a subset of UML 2.0 Interaction Sequence Diagrams. We also provide PSC with both denotational and operational semantics. The operational semantics is obtained via translation into Büchi automata, and the translation algorithm is implemented as a plugin of our Charmy tool. The expressiveness of PSC has been validated with respect to well-known property specification patterns. Preliminary results appeared in (Autili et al. 2006a).

16.
This report describes the current state of our central research thrust in the area of natural language generation. We have already reported on our text-level theory of lexical selection in natural language generation ([59, 60]), on a unification-based syntactic processor for syntactic generation ([73]) and designed a relatively flexible blackboard-oriented architecture for integrating these and other types of processing activities in generation ([60]). We have implemented these ideas in our prototype generator, Diogenes — a DIstributed, Opportunistic GENEration System — and tested our lexical selection and syntactic generation modules in a comprehensive natural language processing project — the KBMT-89 machine translation system ([15]). At this stage we are developing a more comprehensive Diogenes system, concentrating on both the theoretical and the system-building aspects of a) formulating a more comprehensive theory of distributed natural language generation; b) extending current theories of text organization as they pertain to the task of planning natural language texts; c) improving and extending the knowledge representation and the actual body of background knowledge (both domain and discourse/pragmatic) required for comprehensive text planning; d) designing and implementing algorithms for dynamic realization of text structure and integrating them into the blackboard style of communication and control; e) designing and implementing control algorithms for distributed text planning and realization. In this document we describe our ideas concerning opportunistic control for a natural language generation planner and present a research and development plan for the Diogenes project. Many people have contributed to the design and development of the Diogenes generation system over the last four years, especially Eric Nyberg, Rita McCardell, Donna Gates, Christine Defrise, John Leavitt, Scott Huffman, Ed Kenschaft and Philip Werner. Eric Nyberg and Masaru Tomita have created genkit, which is used as the syntactic component of Diogenes. A short version of this article appeared in Proceedings of IJCAI-89, co-authored with Victor Lesser and Eric Nyberg. To all the above many thanks. The remaining errors are the responsibility of this author.

17.
Engineering secure software systems requires a thorough understanding of the social setting within which the system-to-be will eventually operate. To obtain such an understanding, one needs to identify the players involved in the system's operation, and to recognize their personal preferences, agendas and powers in relation to other players. The analysis also needs to identify assets that need to be protected, as well as vulnerabilities that lead to system failures when attacked. Equally important, the analyst needs to take rational steps to predict the most likely attackers, knowing their possible motivations, and the capabilities enabled by the latest technologies and available resources. Only an integrated social analysis of both sides (attackers/protectors) can reveal the full space of tradeoffs among which the analyst must choose. Unfortunately, current system development practices treat design decisions on security in an ad-hoc way, often as an afterthought. This paper introduces a methodological framework based on i* for dealing with security and privacy requirements, namely Secure-i*. The framework supports a set of analysis techniques. In particular, attacker analysis helps identify potential system abusers and their malicious intents. Dependency vulnerability analysis helps detect vulnerabilities in terms of organizational relationships among stakeholders. Countermeasure analysis supports the dynamic decision-making process of defensive system players in addressing vulnerabilities and threats. Finally, access control analysis bridges the gap between security requirement models and security implementation models. The framework is illustrated with an example involving security and privacy concerns in the design of electronic health information systems. In addition, we discuss model evaluation techniques, including qualitative goal model analysis and property verification techniques based on model checking.

18.
Agile development uses user stories to express user requirements. Stories are usually written in natural language with a restricted format, but defects in expression frequently arise during story writing. Typical defects include missing necessary information, vague or ambiguous wording, and duplication or conflicts between stories. These defects significantly degrade requirements quality and hinder the progress of software development projects. This paper proposes a method for improving the quality of user story requirements. Starting from the problem of locating story defects, the method builds a conceptual model of user stories and, drawing on real cases, formulates eleven quality criteria that user stories should follow. Techniques including story structure analysis, syntactic pattern analysis and grammatical analysis are then introduced to automatically construct instance-level models of user stories with scenarios and to detect story defects against the criteria, thereby improving user story quality. In an experiment on a real project containing 36 user stories and 84 scenarios, 173 defects were detected automatically, with a detection precision of 88.79% and a recall of 95.06%.
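For illustration, the sketch below automates one check in the spirit of the criteria described above: it verifies that a story follows the usual "As a ..., I want ..., so that ..." template and flags a missing benefit clause. The regular expression and the single rule are simplified stand-ins for the paper's story model and its eleven quality criteria.

```python
import re

# One illustrative check only: a user story should name a role, a goal and
# (ideally) a benefit.  The template and rule below are simplified stand-ins
# for the paper's conceptual model and quality criteria.
TEMPLATE = re.compile(
    r"^As an? (?P<role>.+?), I want (?P<goal>.+?)(?:,? so that (?P<benefit>.+))?\.?$",
    re.IGNORECASE,
)

def check_story(story: str) -> list:
    """Return a list of detected defects for a single user story."""
    match = TEMPLATE.match(story.strip())
    if not match:
        return ["story does not follow the 'As a ..., I want ...' template"]
    defects = []
    if not match.group("benefit"):
        defects.append("missing benefit ('so that ...' clause)")
    return defects

print(check_story("As a registered user, I want to reset my password."))
# ["missing benefit ('so that ...' clause)"]
print(check_story("As an admin, I want to export logs, so that audits are easier."))
# []
```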

19.
Much work has been done to clarify the notion of metamodelling and new ideas, such as strict metamodelling, distinction between ontological and linguistic instantiation, unified modelling elements and deep instantiation, have been introduced. However, many of these ideas have not yet been fully developed and integrated into modelling languages with (concrete) syntax, rigorous semantics and tool support. Consequently, applying these ideas in practice and reasoning about their meaning is difficult, if not impossible. In this paper, we strive to add semantic rigour and conceptual clarity to metamodelling through the introduction of Nivel, a novel metamodelling language capable of expressing models spanning an arbitrary number of levels. Nivel is based on a core set of conceptual modelling concepts: class, generalisation, instantiation, attribute, value and association. Nivel adheres to a form of strict metamodelling and supports deep instantiation of classes, associations and attributes. A formal semantics is given for Nivel by translation to weight constraint rule language (WCRL), which enables decidable, automated reasoning about Nivel. The modelling facilities of Nivel and the utility of the formalisation are demonstrated in a case study on feature modelling.

20.