Similar Articles
20 similar articles found
1.
Model repositories play a central role in the model-driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments with hard links, i.e., regular references whose target end points to external resources using storage-specific identifiers. In certain application scenarios, this approach may prove too rigid and error-prone a way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as a means of soft interlinking of model elements residing in different model resources. These soft links can be calculated on demand, with graceful handling of temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by incremental model query evaluation. The approach is applicable to modeling environments, or even to property graphs that represent query results as first-class relations, which also allows the chaining of soft links, useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain, with strict traceability requirements enforced by certification standards (DO-178B). It consists of multiple domain models describing the allocation of software functions to hardware components.
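To make the idea of query-backed soft links concrete outside of EMF, here is a minimal Python sketch, assuming a toy two-resource model and a hypothetical SoftLink class; EMF-IncQuery maintains such derived references incrementally, which the explicit invalidate() below only approximates:

```python
# Minimal sketch of a "soft link": a derived reference computed on demand
# by a query over model resources, with graceful handling of unresolved
# targets. All names here are hypothetical; EMF-IncQuery maintains such
# links incrementally in Java, which this toy cache does not attempt.

class SoftLink:
    def __init__(self, query, key):
        self.query = query   # function: key -> target element or None
        self.key = key
        self._cache = None
        self._dirty = True

    def resolve(self):
        if self._dirty:
            self._cache = self.query(self.key)  # may be None if unresolved
            self._dirty = False
        return self._cache

    def invalidate(self):
        # Called when the underlying resources change, so the next
        # resolve() re-evaluates the query instead of using stale data.
        self._dirty = True

# Two "model resources" interlinked by a shared business identifier
processes = {"P1": {"id": "P1", "concept": "loan-approval"}}
concepts  = {"loan-approval": {"name": "Loan Approval"}}

link = SoftLink(lambda k: concepts.get(processes[k]["concept"]), "P1")
print(link.resolve())            # {'name': 'Loan Approval'}
concepts.clear(); link.invalidate()
print(link.resolve())            # None: unresolved, but no hard failure
```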

2.
Toward reference models for requirements traceability
Requirements traceability is intended to ensure continued alignment between stakeholder requirements and the various outputs of the system development process. To be useful, traces must be organized according to some modeling framework. Several such frameworks have indeed been proposed, mostly based on theoretical considerations or on analyses of other literature. This paper, in contrast, follows an empirical approach. Focus groups and interviews conducted in 26 major software development organizations reveal a wide range of traceability practices, with distinct low-end and high-end users of traceability. From these observations, reference models comprising the most important kinds of traceability links for various development tasks have been synthesized. The resulting models have been validated in case studies and are incorporated in a number of traceability tools. A detailed case study on the use of the models is presented. Four kinds of traceability link types are identified, and the critical issues that must be resolved to implement each type, along with potential solutions, are discussed. Implications for the design of next-generation traceability methods and tools are discussed and illustrated.

3.
Although very important in software engineering, establishing traceability links between software artifacts is extremely tedious and error-prone, and it requires significant effort. Even where approaches for automated traceability recovery exist, they provide the requirements analyst with a ranked list of candidate links, usually a very long one, that must be inspected manually. In this paper we introduce an approach called Estimation of the Number of Remaining Links (ENRL), which aims at estimating, via Machine Learning (ML) classifiers, the number of remaining positive links in a ranked list of candidate traceability links produced by a recovery approach based on Natural Language Processing (NLP) techniques. We have evaluated the accuracy of the ENRL approach by considering several ML classifiers and NLP techniques on three datasets from industry and academia, covering traceability links among different kinds of software artifacts, including requirements, use cases, design documents, source code, and test cases. Results from our study indicate that: (i) specific estimation models are able to provide accurate estimates of the number of remaining positive links; (ii) the estimation accuracy depends on the choice of the NLP technique; and (iii) univariate estimation models outperform multivariate ones.
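A rough sketch of the estimation idea follows, using scikit-learn on synthetic ranked lists; the summary features and the random-forest regressor are our own illustrative choices, not necessarily the ENRL models evaluated in the paper:

```python
# Sketch: estimate the number of remaining positive links in a ranked
# candidate list from summary features of its similarity scores.
# Synthetic data and feature choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_ranked_list():
    n_pos = rng.integers(5, 50)                      # true positive links
    pos = rng.beta(5, 2, n_pos)                      # positives score higher
    neg = rng.beta(2, 5, 300 - n_pos)                # negatives score lower
    scores = np.sort(np.concatenate([pos, neg]))[::-1]
    return scores, n_pos

def features(scores):
    # Simple summaries of the score distribution of the ranked list
    return [scores.mean(), scores.std(), scores[:20].mean(),
            np.median(scores), (scores > 0.5).sum()]

X, y = zip(*[(features(s), n) for s, n in (make_ranked_list() for _ in range(500))])
model = RandomForestRegressor(random_state=0).fit(np.array(X[:400]), y[:400])
pred = model.predict(np.array(X[400:]))
print("mean abs. error:", np.abs(pred - np.array(y[400:])).mean())
```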

4.
The interplay between process and decision models plays a crucial role in business process management, as decisions may be based on running processes and may affect process outcomes. Process models often encode decisions through control-flow structures and data-flow elements, which reduces process model maintainability. The Decision Model and Notation (DMN) was proposed to achieve separation of concerns and to complement the Business Process Model and Notation (BPMN) for designing decisions related to process models. Nevertheless, deriving decision models from process models remains challenging, especially when the same data underlie both process and decision models. In this paper, we explore how, and to what extent, the data modeled in BPMN processes and used for decision-making may be represented in the corresponding DMN decision models. To this end, we identify a set of patterns that capture possible representations of data in BPMN processes and that can be used to guide the derivation of decision models related to existing process models. Throughout the paper we refer to real-world healthcare processes to show the applicability of the proposed approach.
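As a minimal illustration of the separation of concerns DMN aims for, the sketch below encodes a decision as a stand-alone table that a process would invoke with its data, rather than as gateway conditions; the rules and data names are invented healthcare-flavoured examples, not patterns from the paper:

```python
# Toy DMN-style decision table (first-hit policy): decision logic is
# kept separate from the BPMN control flow, which merely supplies inputs.
# Rules and data names are invented healthcare-flavoured examples.

RULES = [
    # (predicate over inputs, output)
    (lambda d: d["age"] >= 65 and d["risk"] == "high", "admit"),
    (lambda d: d["risk"] == "high",                    "observe"),
    (lambda d: True,                                   "discharge"),  # default
]

def decide(data):
    for predicate, outcome in RULES:
        if predicate(data):
            return outcome

# The process would call the decision with data from its data objects:
print(decide({"age": 70, "risk": "high"}))   # admit
print(decide({"age": 40, "risk": "low"}))    # discharge
```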

5.
Extensive work on matrix factorization (MF) techniques has been done recently, as they provide accurate rating prediction models in recommendation systems. Additional extensions, such as neighbour-aware models, have been shown to improve rating prediction further, but these models often suffer from long computation times. In this paper, we propose a novel method that applies clustering algorithms to the latent vectors of users and items. Our method can capture the common interests between a cluster of users and a cluster of items in the latent space. A matrix factorization technique is then applied to this cluster-level rating matrix to predict future cluster-level interests. We then aggregate the traditional user-item rating predictions with our cluster-level rating predictions to improve the rating prediction accuracy. Our method is a general "wrapper" that can be applied to any collaborative filtering method. In our experiments, we show that our new approach, when applied to a variety of existing matrix factorization techniques, improves their rating predictions and also yields better rating predictions for cold-start users. Above all, we show that clusters of better quality and in greater quantity lead to better rating prediction accuracy.
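The cluster-level idea can be sketched in a few lines of Python; the plain gradient-descent factorization, cluster counts, and the 0.5 blend weight below are illustrative assumptions rather than the paper's tuned method:

```python
# Sketch of the cluster-level idea: factorize the rating matrix, cluster
# the latent user/item vectors, factorize the cluster-level means, and
# blend both predictions. Dimensions, k, and the 0.5 blend weight are
# illustrative assumptions, not the paper's tuned values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(60, 40)).astype(float)   # toy dense ratings

def factorize(M, k=8, steps=200, lr=0.005, reg=0.02):
    P = rng.normal(0, 0.1, (M.shape[0], k))
    Q = rng.normal(0, 0.1, (M.shape[1], k))
    for _ in range(steps):                             # plain gradient descent
        E = M - P @ Q.T
        P, Q = P + lr * (E @ Q - reg * P), Q + lr * (E.T @ P - reg * Q)
    return P, Q

P, Q = factorize(R)
uc = KMeans(5, n_init=10, random_state=0).fit(P).labels_   # user clusters
ic = KMeans(5, n_init=10, random_state=0).fit(Q).labels_   # item clusters

# Cluster-level rating matrix: mean rating per (user-cluster, item-cluster)
C = np.array([[R[uc == a][:, ic == b].mean() for b in range(5)] for a in range(5)])
Pc, Qc = factorize(C, k=3)

pred = 0.5 * (P @ Q.T) + 0.5 * (Pc @ Qc.T)[uc][:, ic]      # blended prediction
print(pred.shape)   # (60, 40)
```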

6.
Context: Traceability relations among software artifacts often tend to be missing, outdated, or lost. For this reason, various traceability recovery approaches, based on Information Retrieval (IR) techniques, have been proposed. The performance of such approaches is often degraded by "noise" contained in software artifacts (e.g., recurring words in document templates or other words that do not contribute to the retrieval itself). Aim: As a complement and alternative to stop word removal approaches, this paper proposes the use of a smoothing filter to remove "noise" from the textual corpus of artifacts to be traced. Method: We evaluate the effect of a smoothing filter in traceability recovery tasks involving different kinds of artifacts from five software projects, applying three different IR methods, namely Vector Space Models, Latent Semantic Indexing, and the Jensen–Shannon similarity model. Results: Our study indicates that, with the exception of some specific kinds of artifacts (i.e., tracing test cases to source code), the proposed approach is able to significantly improve the performance of traceability recovery and to remove "noise" that simple stop word filters cannot remove. Conclusions: The obtained results not only help to develop traceability recovery approaches able to work in the presence of noisy artifacts, but also suggest that smoothing filters can be used to improve the performance of other software engineering approaches based on textual analysis.
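One plausible reading of such a smoothing filter, sketched below with scikit-learn: subtract the corpus-average term vector from each artifact so that terms shared by all artifacts (template boilerplate) stop contributing to similarity. The tiny corpus and the exact filter form are our assumptions:

```python
# Sketch: VSM traceability with a simple smoothing filter that removes
# the corpus-wide "average document" (template noise) from each vector.
# Corpus and filter form are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reqs = ["the system shall encrypt user passwords",
        "the system shall export reports as pdf"]
code = ["class PasswordEncryptor encrypt hash user password",
        "class ReportExporter export pdf report writer"]

X = TfidfVectorizer().fit_transform(reqs + code).toarray()
smoothed = np.clip(X - X.mean(axis=0), 0, None)   # drop shared "template" mass

for name, M in [("raw", X), ("smoothed", smoothed)]:
    sim = cosine_similarity(M[:2], M[2:])          # requirements vs code
    print(name, np.round(sim, 2))
```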

7.
In this paper, we hypothesize that the distorted traceability tracks of a software system can be systematically re-established through refactoring, a set of behavior-preserving transformations for keeping the system quality under control during evolution. To test our hypothesis, we conduct an experimental analysis using three requirements-to-code datasets from various application domains. Our objective is to assess the impact of various refactoring methods on the performance of automated tracing tools based on information retrieval. Results show that renaming inconsistently named code identifiers, using Rename Identifier refactoring, often leads to improvements in traceability. In contrast, removing code clones, using eXtract Method (XM) refactoring, is found to be detrimental. In addition, results show that moving misplaced code fragments, using Move Method refactoring, has no significant impact on trace link retrieval. We further evaluate Rename Identifier refactoring by comparing its performance with other strategies often used to overcome the vocabulary mismatch problem in software artifacts. In addition, we propose and evaluate various techniques to mitigate the negative impact of XM refactoring. A traceability sign analysis is also conducted to quantify the effect of these refactoring methods on the vocabulary structure of software systems.
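The vocabulary-mismatch effect behind the Rename Identifier result can be illustrated with a simple term-overlap measure; the requirement, the identifiers, and the use of Jaccard overlap as a stand-in for the IR similarity used by tracing tools are all invented for illustration:

```python
# Sketch: why Rename Identifier helps IR-based tracing. Splitting
# camelCase identifiers and renaming them consistently with the
# requirement's vocabulary raises term overlap. Example data invented.
import re

def terms(text):
    # Split camelCase and non-word characters into lowercase terms
    words = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return set(re.split(r"\W+", words.lower())) - {""}

def jaccard(a, b):
    return len(a & b) / len(a | b)

requirement = "The system shall validate the customer credit card number"
before = "boolean chkCCNum(String s)"               # cryptic identifiers
after  = "boolean validateCreditCardNumber(String cardNumber)"

print(jaccard(terms(requirement), terms(before)))   # low overlap
print(jaccard(terms(requirement), terms(after)))    # higher overlap
```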

8.
Recovering traceability links between code and documentation
Software system documentation is almost always expressed informally, in natural language and free text. Examples include requirement specifications, design documents, manual pages, system development journals, error logs, and related maintenance reports. We propose a method based on information retrieval to recover traceability links between source code and free-text documents. A premise of our work is that programmers use meaningful names for program items, such as functions, variables, types, classes, and methods. We believe that the application-domain knowledge that programmers process when writing the code is often captured by the mnemonics for identifiers; therefore, the analysis of these mnemonics can help to associate high-level concepts with program concepts, and vice versa. We apply both a probabilistic and a vector space information retrieval model in two case studies, tracing C++ source code onto manual pages and Java code onto functional requirements. We compare the results of applying the two models, discuss their benefits and limitations, and describe directions for improvement.
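A minimal sketch of the vector space variant of this idea, assuming a toy corpus: identifier mnemonics are split into terms and each code unit is matched against the free-text documents by cosine similarity (the probabilistic model is omitted):

```python
# Sketch of the vector space variant: split identifier mnemonics into
# terms, index free-text documents, and rank them against each code
# unit by cosine similarity. Data and splitting rules are illustrative.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def split_identifiers(src):
    # getUserName / user_name -> "get user name"
    src = re.sub(r"([a-z])([A-Z])", r"\1 \2", src).replace("_", " ")
    return src.lower()

code_units = {
    "Account.java": "class Account deposit withdraw getBalance owner",
    "Report.java":  "class Report render exportPdf pageCount",
}
docs = {
    "manual-accounts.txt": "how to deposit and withdraw money from an account",
    "manual-reports.txt":  "rendering reports and exporting them to pdf",
}

corpus = [split_identifiers(s) for s in code_units.values()] + list(docs.values())
X = TfidfVectorizer().fit_transform(corpus)
sim = cosine_similarity(X[: len(code_units)], X[len(code_units):])

for i, cu in enumerate(code_units):
    best = max(range(len(docs)), key=lambda j: sim[i, j])
    print(cu, "->", list(docs)[best], round(float(sim[i, best]), 2))
```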

9.

Context
For large software projects it is important to have some traceability between artefacts from different phases (e.g. requirements, designs, code), and between artefacts and the involved developers. However, if capturing traceability information during the project feels laborious to developers, they will often be sloppy in registering the relevant traceability links, so that the information is incomplete. This makes automated tool-based collection of traceability links a tempting alternative, but that has the opposite challenge of generating too many potential trace relationships, not all of which are equally relevant.

Objective
This paper evaluates how to rank such auto-generated trace relationships.

Method
We present two approaches for such a ranking: a Bayesian technique and a linear inference technique. Both techniques depend on the interaction event trails left behind by collaborating developers while working within a development tool.

Results
The outcome of a preliminary study suggests the advantage of the linear approach; we also explore the challenges and potentials of the two techniques.

Conclusion
The advantage of the two techniques is that they can be used to provide traceability insights that are contextual and would have been much more difficult to capture manually. We also present some key lessons learnt during this research.
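A minimal sketch of what a linear inference over such interaction event trails might look like: each candidate link is scored as a weighted sum of the event types in which both artifacts occur together. The trail, event types, and weights are invented; the paper's Bayesian technique would instead maintain a belief per link, updated from the same events:

```python
# Sketch of a linear inference over developer interaction trails: a
# candidate trace link (a, b) is scored as a weighted sum of the event
# types in which both artifacts occur together. Weights and the trail
# are invented for illustration.
from collections import Counter
from itertools import combinations

# (event_type, artifacts touched in the same interaction)
trail = [
    ("edit",   {"Req-12.txt", "Billing.java"}),
    ("edit",   {"Req-12.txt", "Billing.java"}),
    ("view",   {"Req-12.txt", "Invoice.java"}),
    ("commit", {"Billing.java", "BillingTest.java"}),
]
WEIGHTS = {"edit": 3.0, "commit": 2.0, "view": 1.0}   # assumed weights

scores = Counter()
for kind, artifacts in trail:
    for pair in combinations(sorted(artifacts), 2):
        scores[pair] += WEIGHTS[kind]

for link, score in scores.most_common():
    print(score, link)   # top-ranked pairs are the suggested trace links
```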

10.
11.
The Business Process Modelling Notation (BPMN) is a standard for capturing business processes in the early phases of systems development. The mix of constructs found in BPMN makes it possible to create models with semantic errors. Such errors are especially serious, because errors in the early phases of systems development are among the most costly and hardest to correct. The ability to statically check the semantic correctness of models is thus a desirable feature for modelling tools based on BPMN. Accordingly, this paper proposes a mapping from BPMN to a formal language, namely Petri nets, for which efficient analysis techniques are available. The proposed mapping has been implemented as a tool that, in conjunction with existing Petri net-based tools, enables the static analysis of BPMN models. The formalisation also led to the identification of deficiencies in the BPMN standard specification.
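The flavour of such a mapping can be shown for its simplest fragment, where each task becomes a transition and each sequence flow a place; gateways, events, and exception handling, the parts where the paper's formalisation does the real work, are omitted from this sketch:

```python
# Sketch of the simplest fragment of a BPMN -> Petri net mapping:
# a task becomes a transition, each sequence flow becomes a place.
# Gateways, events and exception flows (the hard part) are omitted.

def bpmn_to_petri(tasks, flows):
    """tasks: list of task names; flows: list of (source, target) pairs."""
    places = [f"p_{src}->{tgt}" for src, tgt in flows]
    transitions = {
        t: {
            "in":  [f"p_{s}->{g}" for s, g in flows if g == t],
            "out": [f"p_{s}->{g}" for s, g in flows if s == t],
        }
        for t in tasks
    }
    return places, transitions

tasks = ["receive_order", "check_stock", "ship"]
flows = [("start", "receive_order"), ("receive_order", "check_stock"),
         ("check_stock", "ship"), ("ship", "end")]
places, transitions = bpmn_to_petri(tasks, flows)
print(places)
print(transitions["check_stock"])   # {'in': ['p_receive_order->check_stock'], ...}
```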

12.
Requirements traceability offers many benefits to software projects, and it has been identified as critical for successful development. However, numerous challenges exist in the implementation of traceability in the software engineering industry. Some of these challenges can be overcome through organizational policy and procedure changes, but the lack of cost-effective traceability models and tools remains an open problem. A novel, cost-effective solution for the traceability tool problem is proposed, prototyped, and tested in a case study using an actual software project. Metrics from the case study are presented to demonstrate the viability of the proposed solution for the traceability tool problem. The results show that the proposed method offers significant advantages over implementing traceability manually or using existing commercial traceability approaches.

13.
Safety is a system property; hence, high-level safety requirements are incorporated into the implementation of system components. In this paper, we propose an optimized traceability analysis method, based on the means-ends and whole-part concepts of the cognitive systems engineering approach, to trace these safety requirements. A system consists of hardware, software, and humans according to a whole-part decomposition. The safety requirements of a system and its components are enforced or implemented through a means-ends lifecycle. To provide evidence of the safety of a system, the means-ends and whole-part traceability analysis method optimizes the creation of safety evidence from the safety requirements, safety analysis results, and other system artifacts produced throughout a lifecycle. These sources of safety evidence have causal (cause-consequence) relationships with each other. The failure mode and effect analysis (FMEA), hazard and operability analysis (HAZOP), and fault tree analysis (FTA) techniques are generally used for the safety analysis of systems and their components, and they cover the causal relations in a safety analysis. The causal relationships in the proposed method make it possible to trace the safety requirements through the safety analysis results and system artifacts. We present the proposed approach with an example and describe the use of the TRACE and NuSRS tools to apply the approach.
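The causal traversal at the heart of this method can be sketched as a walk over a cause-consequence graph from a safety requirement to its supporting evidence; all node names below are invented, and the TRACE and NuSRS tools are not modelled:

```python
# Sketch: traceability as traversal of a cause-consequence graph linking
# a safety requirement to analysis results (FTA/FMEA) and implementing
# artifacts. Nodes and edges are invented examples.

EDGES = {  # node -> artifacts it is enforced/implemented by
    "SR-1: prevent overpressure":   ["FTA: overpressure top event"],
    "FTA: overpressure top event":  ["FMEA: relief valve failure"],
    "FMEA: relief valve failure":   ["SW: pressure_monitor.c",
                                     "HW: relief valve spec"],
}

def trace(node, depth=0):
    # Depth-first walk printing the chain of safety evidence
    print("  " * depth + node)
    for child in EDGES.get(node, []):
        trace(child, depth + 1)

trace("SR-1: prevent overpressure")
```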

14.
In requirements engineering, the concept of non-functional requirements (NFRs) is notoriously vague and even self-contradictory; the relationships among NFRs, and between NFRs and functional requirements, are intricate and therefore hard to analyze and model; and the traceability links between NFRs and design-stage artifacts are unclear and difficult to record and maintain. To address these problems, this paper analyzes how NFR-related concepts manifest themselves in the requirements analysis and architecture design stages, and gives a structured definition of non-functional requirements. It formalizes the various complex relationships among different types of requirements, establishes a conceptual NFR traceability management framework spanning the analysis and design stages, and specifies the relationships between NFR-related concepts and artifacts in the requirements analysis and architecture design stages. The proposed structured definition and conceptual traceability management framework explicitly delineate the extension of the NFR concept, laying a theoretical foundation for simplifying requirements models and for further developing systematic, practical NFR modeling and traceability management techniques.
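A minimal sketch of what such a structured NFR definition with explicit cross-stage trace links might look like; the field names are our invention, not the paper's conceptual framework:

```python
# Sketch of a structured non-functional requirement with explicit links
# to other requirements and to design-stage artifacts, so that trace
# relationships are recorded rather than left implicit. Field names are
# invented; the paper's actual conceptual framework is richer.
from dataclasses import dataclass, field

@dataclass
class NFR:
    id: str
    quality_attribute: str            # e.g. performance, security
    constraint: str                   # a measurable statement
    refines: list = field(default_factory=list)        # other NFR ids
    conflicts_with: list = field(default_factory=list) # other NFR ids
    realized_by: list = field(default_factory=list)    # design artifacts

nfr = NFR("NFR-7", "performance",
          "95% of queries answered within 200 ms",
          conflicts_with=["NFR-3"],            # e.g. an encryption NFR
          realized_by=["cache-component", "query-planner-design.doc"])
print(nfr)
```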

15.
16.
Process-aware information systems (PAIS) are systems relying on processes that involve human and software resources to achieve concrete goals. There is a need to develop approaches for modeling, analyzing, improving and monitoring processes within PAIS. These approaches include process mining techniques, which are used to discover process models from event logs, find deviations between logs and models, and analyze the performance characteristics of processes. The representational bias (the way processes are modeled) plays an important role in process mining. The BPMN 2.0 (Business Process Model and Notation) standard is widely used and makes it possible to build conventional and understandable process models. In addition to the flat control-flow perspective, subprocesses, data flows and resources can be integrated within one BPMN diagram. This makes BPMN very attractive for both process miners and business users, since the control-flow perspective can be integrated with the data and resource perspectives discovered from event logs. In this paper, we describe and justify robust control-flow conversion algorithms, which provide the basis for more advanced BPMN-based discovery and conformance checking algorithms. On the basis of these conversion algorithms, low-level models (such as Petri nets, causal nets and process trees) discovered from event logs using existing approaches can be represented in terms of BPMN. Moreover, we establish behavioral relations between Petri nets and BPMN models and use them to adapt existing conformance checking and performance analysis techniques in order to visualize conformance and performance information within a BPMN diagram. We believe that the results presented in this paper can be used for a wide variety of BPMN mining and conformance checking algorithms. We also provide metrics for the processes discovered before and after conversion to BPMN structures, identifying cases in which the conversion algorithms produce more compact or more complicated BPMN models in comparison with the initial models.

17.
Context: Model-Driven Software Development (MDSD) has emerged as a very promising approach to cope with the inherent complexity of modern software-based systems. Furthermore, it is well known that the Requirements Engineering (RE) stage is critical for a project's success. Despite the importance of RE, MDSD approaches commonly leave textual requirements specifications to one side. Objective: Our aim is to integrate textual requirements specifications into the MDSD approach by using the MDSD techniques themselves, including metamodelling and model transformations. The proposal is based on the assumption that a reuse-based Model-Driven Requirements Engineering (MDRE) approach will improve the requirements engineering stage and the quality of the development models generated from requirements models, and will enable the traces from requirements to other development concepts (such as analysis or design) to be maintained. Method: The approach revolves around the Requirements Engineering Metamodel, denominated REMM, which supports the definition of the boilerplate-based textual requirements specification languages needed for the definition of model transformations from application requirements models to platform-specific application models and code. Results: The approach has been evaluated through its application to Home Automation (HA) systems. The HA Requirement Specification Language, denominated HAREL, is used to define application requirements models, which are automatically transformed and traced to the application model conforming to the HA Domain Specific Language. Conclusions: An anonymous online survey has been conducted to evaluate the degree of acceptance by both HA application developers and MDSD practitioners. The main conclusion is that 66.7% of the HA experts polled strongly agree that the automatic transformation of the requirements models to HA models improves the quality of the HA models. Moreover, 58.3% of the HA participants strongly agree on the usefulness of the traceability matrix, which links requirements to HA functional units in order to discover which devices are related to a specific requirement. We can conclude that the experts we consulted agree with the proposal presented here, since the average mark given is 4 out of 5.
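Boilerplate-based requirement languages constrain sentences to fixed templates, which is what makes the transformation to models mechanical; the toy sketch below parses one invented home-automation boilerplate into a small model element and is not the actual HAREL/REMM syntax:

```python
# Sketch: parsing a boilerplate-constrained requirement sentence into a
# small model element, the first step of a requirements-to-model
# transformation. The template and fields are invented, not HAREL/REMM.
import re

BOILERPLATE = re.compile(
    r"When (?P<event>.+), the system shall (?P<action>[\w-]+) the (?P<device>.+)\."
)

def parse(sentence):
    m = BOILERPLATE.fullmatch(sentence)
    if not m:
        raise ValueError(f"not a valid boilerplate sentence: {sentence!r}")
    return m.groupdict()   # a minimal "requirement model" element

req = parse("When the living room is empty, the system shall switch-off the lights.")
print(req)  # {'event': 'the living room is empty', 'action': 'switch-off', ...}
```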

18.
A successful cutting-edge semiconductor manufacturer applies Lean principles with the goal of perfecting its operation flow. In 2007 SEMI adopted a new standard, E94-1107, that breaks the dependency between carrier and lot for processing wafers and formally introduces material redirection, which creates several opportunities for waste reduction and improved WIP handling. As shown by simulations, the new standard impacts several KPIs, including throughput, cycle time, yield, and just-in-time customer response. This paper discusses different lean optimizations leveraging the new standard. Some tools require loading many wafers simultaneously to achieve high throughput. Although takt time can be improved by reducing lot sizes, the sequential nature of these tools then becomes limited by the number of load ports, with a negative impact on equipment effectiveness. Before the new standard was implemented, no carrier could be removed from a load port during processing to allow other carriers to load or unload wafers, leaving the ports blocked. With this change, a carrier can now be removed from the load port after unloading its wafers into the tool, allowing the tool to be filled with the optimum number of wafers. Over the years, numerous lean manufacturing studies have been performed to determine the correct lot size (number of wafers in a lot) to meet just-in-time targets. Simulation runs demonstrate that smaller lots have up to 33% and 50% better cycle times for batch tools and single-wafer tools, respectively. Based on fluctuations in demand, there may be a need not only to speed up but also to slow down production quickly. All of this correlates directly with the scheduling of right-sized lots (dynamic lot sizing), carefully planned delivery of carriers for processing, and improved pull production techniques.
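The cycle-time intuition behind smaller lots can be shown with back-of-the-envelope arithmetic for a single-wafer tool; the numbers are invented and ignore queueing, transport, and carrier exchange, so they only illustrate the direction of the effect, not the simulated 33%/50% figures:

```python
# Back-of-the-envelope sketch: why smaller lots cut lot cycle time on a
# single-wafer tool. Numbers are invented; queueing, transport and
# carrier exchange are ignored, so this shows only the direction of
# the effect, not the paper's simulated improvements.

def lot_cycle_time(lot_size, per_wafer_min=1.5, overhead_min=4.0):
    # One carrier load/unload overhead per lot plus sequential processing
    return overhead_min + lot_size * per_wafer_min

for lot in (25, 13):
    per_wafer = lot_cycle_time(lot) / lot
    print(f"lot={lot:2d}: cycle {lot_cycle_time(lot):5.1f} min, "
          f"{per_wafer:4.2f} min/wafer")
# Smaller lots finish sooner (better takt / JIT response) at the cost of
# more carrier overheads per wafer -- the trade-off dynamic lot sizing tunes.
```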

19.
Traceability—the ability to follow the life of software artifacts—is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.

20.
Many students who participate in online courses experience frustration and failure because they are not prepared for the demanding and isolated learning experience. A traditional learning theory known as self-directed learning (SDL) is a foundation that can help establish features of a personalized system that helps students improve their ability to manage their overall learning activities and monitor their own performance. Additionally, such a system enables collaboration, interaction, feedback, and the much-needed support from the instructor and students' peers. A Web 2.0 social-technology application, MediaWiki, was adopted as the platform on which incremental features were developed to apply the fundamental concepts of SDL. Students were able to customize content by setting specific learning goals, reflecting on their learning experiences, self-monitoring activities and performance, and collaborating with others in the class. SDL skills exist to some degree in all learners; this study finds that students' SDL abilities can improve when a course adopts a personalized and collaborative learning system that enables the students to be more proactive in planning, organizing, and monitoring their course activities.
