Similar Documents
20 similar documents found.
1.
This paper proposes a two-step approach to identifying ambiguities in natural language (NL) requirements specifications (RSs). In the first step, a tool would apply a set of ambiguity measures to an RS in order to identify potentially ambiguous sentences in it. In the second step, another tool would show what specifically is potentially ambiguous about each such sentence. The final decision about ambiguity remains with the human users of the tools. The paper describes several ambiguity-identification experiments with several small NL RSs, using four prototypes of the first tool, based on linguistic instruments and resources of differing complexity, and a manual mock-up of the second tool.
Daniel M. Berry (Corresponding author)
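The first, measure-based step could be sketched as a simple lexical check — a minimal illustration, not one of the paper's four prototypes; the keyword list below is an assumption:

```python
# Minimal sketch of a lexical ambiguity measure: flag sentences containing
# words that commonly introduce ambiguity in requirements specifications.
# The keyword list is an illustrative assumption, not the paper's measures.

VAGUE_TERMS = {"some", "several", "appropriate", "etc",
               "user-friendly", "fast", "flexible", "easy"}

def ambiguity_score(sentence):
    """Count potentially ambiguous terms in one requirement sentence."""
    words = [w.strip(".,;:!?") for w in sentence.lower().split()]
    return sum(1 for w in words if w in VAGUE_TERMS)

def flag_ambiguous(requirements, threshold=1):
    """Return the sentences a human reviewer should inspect."""
    return [s for s in requirements if ambiguity_score(s) >= threshold]

reqs = ["The system shall respond within 2 seconds.",
        "The interface shall be user-friendly and fast."]
print(flag_ambiguous(reqs))  # only the second sentence is flagged
```

As in the paper's two-step approach, such a tool only nominates candidates; the final judgment stays with the human reader.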

2.
Wang Yawen, Shi Lin, Li Mingyang, Wang Qing, Yang Yun. Requirements Engineering (2022) 27(3):351-373
Requirements are usually written in natural language and evolve continuously during the process of software development, which involves a large number of...

3.
This paper presents the results of an investigation of natural language specifications created in industrial projects in Germany. One goal of the investigation was to gain an insight into the state of the practice. The objects of the investigation were the requirements documents and the requirements processes. The following aspects were uppermost in my mind when examining the requirements documents: structure of the documents, kinds of information found in the documents, and notations used to express the information. When interviewing the system analysts, the following aspects were most important: usage of tools, change management, communication with the customers, and verification and validation of the documents.

4.
Mapping functional requirements first to specifications and then to code is one of the most challenging tasks in software development. Since requirements are commonly written in natural language, they can be prone to ambiguity, incompleteness and inconsistency. Structured semantic representations allow requirements to be translated into formal models, which can be used to detect problems at an early stage of the development process through validation. Storing and querying such models can also facilitate software reuse. Several approaches constrain the input format of requirements to produce specifications; however, they usually require considerable human effort to adopt domain-specific heuristics and/or controlled languages. We propose a mechanism that automates the mapping of requirements to formal representations using semantic role labeling. We describe the first publicly available dataset for this task, employ a hierarchical framework that allows requirements concepts to be annotated, and discuss how semantic role labeling can be adapted for parsing software requirements.
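A toy version of such a mapping can be sketched with a single template standing in for a trained semantic role labeler — the pattern and role names below are illustrative assumptions, not the paper's framework:

```python
import re

# Extract agent/action/theme roles from the common
# "The <agent> shall <verb> ... <object>" requirement template and emit a
# formal predicate. A real system would obtain the roles from an SRL model.

PATTERN = re.compile(
    r"^The (?P<agent>\w+) shall (?P<action>\w+) (?:the |a |an )?(?P<theme>\w+)")

def to_predicate(requirement):
    m = PATTERN.match(requirement)
    if m is None:
        return None  # template did not match; fall back to a real labeler
    return f"{m['action']}({m['agent']}, {m['theme']})"

print(to_predicate("The system shall send a notification"))
# send(system, notification)
```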

5.
Natural language (NL) deliverables suffer from ambiguity, poor understandability, incompleteness, and inconsistency. However, NL is straightforward to use, and stakeholders are familiar with it for producing their software requirements documents. This paper presents a methodology, SOLIMVA, which aims at model-based test case generation from NL requirements deliverables. The methodology is supported by a tool that automatically translates NL requirements into Statechart models. Once the Statecharts are derived, another tool, GTSC, is used to generate the test cases. SOLIMVA uses combinatorial designs to identify scenarios for system and acceptance testing, and it requires that a test designer define the application domain by means of a dictionary. Within the dictionary there is a Semantic Translation Model in which, among other features, a word sense disambiguation method helps in the translation process. Using a space application software product as a case study, we compared SOLIMVA with a previous manual approach developed by an expert under two aspects: test objective coverage and characteristics of the Executable Test Cases. In the first aspect, the SOLIMVA methodology not only covered the test objectives associated with the expert's scenarios but also proposed a better strategy, with test objectives clearly separated according to the directives of combinatorial designs. The Executable Test Cases derived in accordance with the SOLIMVA methodology not only possessed characteristics similar to the expert's Executable Test Cases but also predicted behaviors that did not exist in the expert's strategy. The key benefits of applying the SOLIMVA methodology and tool within a Verification and Validation process are its ease of use combined with the support of a formal method, which together make the methodology a candidate for acceptance in complex software projects.
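The combinatorial-design idea behind scenario selection can be illustrated with a greedy pairwise (2-way) generator — a sketch with made-up parameters, not the SOLIMVA tool itself:

```python
from itertools import combinations, product

# Greedy pairwise (2-way) test selection: repeatedly pick the candidate test
# covering the most value pairs not yet covered. Parameters are illustrative.

def pairs_of(test, keys):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(keys, 2)}

def pairwise(params):
    keys = sorted(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(keys, 2)
                 for va in params[a] for vb in params[b]}
    candidates = [dict(zip(keys, vals))
                  for vals in product(*(params[k] for k in keys))]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t, keys) & uncovered))
        uncovered -= pairs_of(best, keys)
        suite.append(best)
    return suite

suite = pairwise({"mode": ["nominal", "degraded"],
                  "link": ["up", "down"],
                  "cmd":  ["start", "stop"]})
print(len(suite))  # covers all value pairs with fewer than the 8 exhaustive tests
```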

6.
In the last decade it has become common practice to formalise software requirements to improve the clarity of users' expectations. In this work we build on the fact that functional requirements can be expressed in temporal logic, and we propose new sanity-checking techniques that automatically detect flaws and suggest improvements to given requirements. Specifically, we describe and experimentally evaluate approaches to consistency and redundancy checking that identify all inconsistencies and pinpoint their exact source (the smallest inconsistent set). We further report on the experience obtained from employing consistency and redundancy checking in an industrial environment. To complete the sanity checking, we also describe a semi-automatic completeness evaluation that can assess the coverage of user requirements and suggest missing properties the user might have wanted to formulate. The usefulness of our completeness evaluation is demonstrated in a case study of an aeroplane control system.
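The idea of pinpointing a smallest inconsistent set can be sketched on propositional (rather than temporal-logic) requirements — a deletion-based minimization over assumed toy constraints, not the authors' tool:

```python
from itertools import product

# Deletion-based extraction of a minimal inconsistent core. Each
# "requirement" is a propositional constraint over named boolean variables;
# the real techniques operate on temporal-logic formulae instead.

def consistent(reqs, variables):
    """True if some assignment of the variables satisfies every constraint."""
    return any(all(r(dict(zip(variables, vals))) for r in reqs)
               for vals in product([False, True], repeat=len(variables)))

def minimal_inconsistent(reqs, variables):
    """Drop any requirement whose removal keeps the rest inconsistent."""
    core = list(reqs)
    for r in list(core):
        rest = [x for x in core if x is not r]
        if not consistent(rest, variables):
            core = rest
    return core

variables = ["armed", "alarm"]
reqs = [lambda e: not e["armed"] or e["alarm"],  # if armed, then alarm
        lambda e: not e["alarm"],                # the alarm never sounds
        lambda e: e["armed"]]                    # the system is always armed
core = minimal_inconsistent(reqs, variables)
print(len(core))  # all three requirements are needed for the contradiction
```

Removing any single requirement restores consistency, so the core pinpoints exactly which requirements conflict.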

7.
This article describes the natural language processing techniques used in two computer-assisted language instruction programs: VERBCON and PARSER. VERBCON is a template-type program which teaches students how to use English verb forms in written texts. In the exercises, verbs have been put into the infinitive, and students are required to supply appropriate verb forms. PARSER is intended to help students learn English sentence structure. Using a lexicon and production rules, it generates sentences and asks students to identify their grammatical parts. The article contends that only by incorporating natural language processing techniques can these programs offer a substantial number of exercises and at the same time provide students with informative feedback. Alan Bailin is director of the Effective Writing Program at the University of Western Ontario, London, Ontario, Canada. Philip Thomson is a programmer in the Faculty of Medicine, University of Western Ontario.

8.
Symbolic connectionism in natural language disambiguation
Natural language understanding involves the simultaneous consideration of a large number of different sources of information. Traditional methods employed in language analysis have focused on developing powerful formalisms to represent syntactic or semantic structures, along with rules for transforming language into these formalisms. However, they make use of only small subsets of knowledge. This article describes how to use the whole range of information through a neurosymbolic architecture, a hybridization of a symbolic network and subsymbol vectors generated from a connectionist network. Besides initializing the symbolic network with prior knowledge, the subsymbol vectors are used to enhance the system's capability in disambiguation and to provide flexibility in sentence understanding. The model captures a diversity of information, including word associations, syntactic restrictions, case-role expectations, semantic rules and context. It attains highly interactive processing by representing knowledge in an associative network on which actual semantic inferences are performed. An integrated use of previously analyzed sentences in understanding is another important feature of our model. The model dynamically selects one hypothesis among multiple hypotheses. This notion is supported by three simulations, which show that the degree of disambiguation depends both on the amount of linguistic rules and on the semantic-associative information available to support the inference processes in natural language understanding. Unlike many similar systems, our hybrid system is more sophisticated in tackling language disambiguation problems, using linguistic clues from disparate sources as well as modeling context effects in the sentence analysis. It is potentially more powerful than systems relying on a single processing paradigm.
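The hybrid flavor of such disambiguation can be sketched by mixing a symbolic rule score with a subsymbolic similarity score — toy vectors and weights, not the article's architecture:

```python
import math

# Combine symbolic evidence (how well a sense satisfies syntactic/semantic
# rules) with subsymbolic evidence (cosine similarity between a sense vector
# and the context vector). All numbers here are illustrative assumptions.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def choose_sense(sense_vecs, context_vec, rule_scores, weight=0.5):
    """Pick the sense maximizing weighted symbolic + subsymbolic evidence."""
    def score(name):
        return (weight * rule_scores[name]
                + (1 - weight) * cosine(sense_vecs[name], context_vec))
    return max(sense_vecs, key=score)

sense_vecs = {"bank/finance": [1.0, 0.1], "bank/river": [0.1, 1.0]}
rule_scores = {"bank/finance": 1.0, "bank/river": 0.0}  # syntax favors finance
context = [0.9, 0.2]  # context vector close to the finance sense
print(choose_sense(sense_vecs, context, rule_scores))  # bank/finance
```

Either source of evidence alone can be overridden by the other, which is the point of combining them.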

9.
10.
11.
In many languages (e.g. Latin, Greek, Russian, Turkish, German) the relationship of a noun phrase to the rest of a sentence is indicated by altered forms of the noun. The possible relationships are called (surface) "cases". Because (1) it is difficult to specify semantics-free selection rules for the cases, and (2) related phenomena based on prepositions or word order appear in apparently case-less languages, many have argued that studies of cases should focus on meaning, i.e. on "deep cases".

Deep cases bear a close relationship to the modifiers of a concept. In fact, one could consider a deep case to be a special, or distinguishing, modifier. Several criteria for recognizing deep cases are considered here in the context of the problem of describing an event. Unfortunately, none of the criteria serves as a completely adequate decision procedure. A notion based on the context-dependent "importance" of a relation appears as useful as any rule for selecting deep cases.

A representative sample of proposed case systems is examined. Issues such as surface versus deep versus conceptual levels of cases, and the efficiency of the representations implicit in a case system, are also discussed.
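Treating a deep case as a distinguished modifier suggests a simple event representation — a sketch with an assumed case inventory (agent, patient, instrument), not a claim about any particular proposed case system:

```python
from dataclasses import dataclass, field

# An event whose deep cases are kept apart from ordinary modifiers: the
# cases are the "distinguishing" relations, while time/place remain plain
# modifiers. Role names and the example are illustrative only.

@dataclass
class Event:
    predicate: str
    cases: dict = field(default_factory=dict)      # deep cases
    modifiers: dict = field(default_factory=dict)  # other modifiers

    def describe(self):
        parts = ", ".join(f"{role}={filler}"
                          for role, filler in self.cases.items())
        return f"{self.predicate}({parts})"

e = Event("open",
          cases={"agent": "John", "patient": "door", "instrument": "key"},
          modifiers={"time": "yesterday"})
print(e.describe())  # open(agent=John, patient=door, instrument=key)
```

The unresolved question the paper raises is precisely which relations deserve to go in `cases` rather than `modifiers`.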

12.
Natural language database access requires support of both query and update capabilities. Although a great deal of research effort has gone to support natural language database query, little effort has gone to support update. We describe a model of action that supports natural language database update, as well as query, and the implementation of a system that supports the model. A major goal of this research is to design a system that is easily transportable both to different database domains and different database management systems.
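Why update needs a model of action can be seen in even a toy verb-to-SQL mapping — a sketch with an assumed schema and verb lexicon, not the authors' system:

```python
import sqlite3

# Map action verbs to parameterised SQL statements. A full model of action
# must also decide which rows an update verb affects; here that decision is
# baked into the illustrative templates.

VERB_SQL = {
    "hire":    "INSERT INTO employees(name, dept) VALUES (?, ?)",
    "move":    "UPDATE employees SET dept = ? WHERE name = ?",
    "dismiss": "DELETE FROM employees WHERE name = ?",
}

def apply_action(conn, verb, args):
    conn.execute(VERB_SQL[verb], args)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees(name TEXT, dept TEXT)")
apply_action(conn, "hire", ("Ada", "R&D"))
apply_action(conn, "move", ("Sales", "Ada"))
row = conn.execute("SELECT dept FROM employees WHERE name = 'Ada'").fetchone()
print(row[0])  # Sales
```

Transportability, as the abstract notes, would mean regenerating only the verb-to-statement table for a new schema or DBMS.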

13.
The authors describe the use of Prolog to build a syntactic list structure and a syntactic-semantic structure, and to incorporate the semantic structure into a background structure that conveys the meanings of the individual words in the sentence in the context of general world knowledge. This semantic structure is then placed in the context of the sentence being parsed. Rather than relying solely on case grammar to represent the functions of words in a sentence, the authors have extended this technique to include a frame structure for a sentence's verbs (which gives semantic details of the verb and the relations of other words to it) when building the syntactic-semantic structure. Using this frame structure conveys more information than using only the case-grammar approach. The authors then place the syntactic-semantic structure in the context of its background knowledge, using the principles of partitioned networks, in which word meanings are placed in a hierarchical structure that represents the background knowledge.

14.
We study automata for capturing the transformations in practical natural language processing (NLP) systems, especially those that translate between human languages. For several variations of finite-state string and tree transducers, we survey answers to formal questions about their expressiveness, modularity, teachability, and generalization. We conclude that no formal device yet captures everything that is desirable, and we point to future research.
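A minimal finite-state string transducer illustrates the kind of device being surveyed — the alphabet and rewrite rule are toy choices made for brevity:

```python
# Transitions map (state, input_symbol) -> (output_string, next_state).
# This toy transducer rewrites "a" to "b" in word-initial position only.

def run_fst(transitions, start, finals, inp):
    state, out = start, []
    for sym in inp:
        if (state, sym) not in transitions:
            return None  # input rejected
        emit, state = transitions[(state, sym)]
        out.append(emit)
    return "".join(out) if state in finals else None

transitions = {
    ("q0", "a"): ("b", "q1"),  # initial a becomes b
    ("q0", "b"): ("b", "q1"),
    ("q1", "a"): ("a", "q1"),  # later symbols pass through
    ("q1", "b"): ("b", "q1"),
}
print(run_fst(transitions, "q0", {"q1"}, "abba"))  # bbba
```

Tree transducers generalize this scheme by walking input trees rather than strings, which is where the surveyed expressiveness questions become subtle.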

15.
The simulation of motion for Virtual Reality (VR) objects and humans has seen important developments in the last decade. However, realistic virtual human animation generation remains a major challenge, even though applications are numerous, from VR games to medical training. This paper proposes different methods for animating virtual humans, including blending simultaneous animations of various temporal relations with multiple animation channels, minimal visemes for lip synchronisation, and space sites of virtual human and 3D object models for object grasping and manipulation. We present our work in our natural language visualisation (animation) system, CONFUCIUS, and describe how the proposed approaches are employed in CONFUCIUS' animation engine.

16.
We aim to conduct a dialogue with a machine in a natural manner, without having to learn an artificial language. With the goal of supplying a natural language interface to a variety of operating systems, we have developed a TURBO-PROLOG®-based system called NLDOS that facilitates communication with the well-known operating system MS-DOS®. NLDOS is capable of understanding English requests, providing an automatic spelling correction facility, resolving ambiguity, and discovering and locating logical errors. © 1993 John Wiley & Sons, Inc.
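The kind of automatic spelling correction NLDOS provides can be sketched as closest-match lookup against a known vocabulary — the word list below is illustrative, not NLDOS's actual lexicon:

```python
from difflib import get_close_matches

# Snap an unrecognised word to the closest entry in a known vocabulary;
# leave it untouched if nothing is similar enough.

VOCAB = ["copy", "delete", "rename", "directory", "display"]

def correct(word):
    matches = get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else word

print(correct("dispaly"))  # display
print(correct("delte"))    # delete
```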

17.
18.
The original formal requirements language for security protocols is improved so that it can be applied to complex distributed systems. The improved language is then used to describe the security requirements of scientific computing problems in multi-user collaborative computation in a grid environment.

19.
Computer animation and visualization can facilitate communication between the hearing impaired and those with normal speaking capabilities. This paper presents a model of a system that is capable of translating text from a natural language into animated sign language. Techniques have been developed to analyse language and transform it into sign language in a systematic way. A hand motion coding method, as applied to hand motion representation and control, has also been investigated. Two translation examples are given to demonstrate the practicality of the system.
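The first stage of such a translation pipeline — mapping words to sign glosses, with a fingerspelling fallback for out-of-vocabulary words — can be sketched as follows; the gloss inventory is an assumption, not the paper's coding scheme:

```python
# Map each word to a sign gloss; unknown words are fingerspelled letter by
# letter, as sign languages commonly do for names. Glosses are illustrative.

SIGN_GLOSSES = {"hello": "HELLO", "my": "MY", "name": "NAME"}

def to_glosses(text):
    glosses = []
    for word in text.lower().split():
        if word in SIGN_GLOSSES:
            glosses.append(SIGN_GLOSSES[word])
        else:
            glosses.append("-".join(word.upper()))  # fingerspell: B-O-B
    return glosses

print(to_glosses("hello my name is Bob"))
# ['HELLO', 'MY', 'NAME', 'I-S', 'B-O-B']
```

Each gloss would then index into a hand-motion-coded animation in a later stage of the pipeline.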

20.