Similar Documents
20 similar documents found
1.
Context: Enterprise software systems (e.g., enterprise resource planning software) are often deployed in different contexts (e.g., different organizations, or different business units or branches of one organization). However, even though organizations, business units or branches have the same or similar business goals, they may differ in how they achieve these goals. Thus, many enterprise software systems are subject to variability and are adapted depending on the context in which they are used. Objective: Our goal is to provide a snapshot of variability in large-scale enterprise software systems. We aim at understanding the types of variability that occur in large industrial enterprise software systems, and at identifying how variability is handled in such systems. Method: We performed an exploratory case study in two large software organizations, involving two large enterprise software systems. Data were collected through interviews and document analysis, and analyzed following a grounded theory approach. Results: We identified seven types of variability (e.g., functionality, infrastructure) and eight mechanisms to handle variability (e.g., add-ons, code switches). Conclusions: We provide generic types for classifying variability in enterprise software systems, and reusable mechanisms for handling such variability. Some variability types and handling mechanisms found in the real world extend existing concepts and theories; others confirm findings from previous research on variability in software in general and are therefore not specific to enterprise software systems. Our findings also offer a theoretical foundation for describing variability handling in practice. Future work needs to provide more evaluations of the theoretical foundations and refine variability handling mechanisms into more detailed practices.
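Among the handling mechanisms named above, a "code switch" is essentially a configuration-driven branch in shared code. The sketch below is a minimal illustration of that idea, assuming a hypothetical deployment config with invented branch names and tax rules; it is not taken from the systems studied.

```python
# Minimal sketch of a "code switch" (feature toggle) handling per-deployment
# variability. Config layout, branch names, and rates are invented.
DEPLOYMENT_CONFIG = {
    "branch_berlin": {"tax_module": "eu_vat"},
    "branch_boston": {"tax_module": "us_sales_tax"},
}

def calculate_tax(order_total: float, deployment: str) -> float:
    """Dispatch to a context-specific tax rule via a code switch."""
    module = DEPLOYMENT_CONFIG[deployment]["tax_module"]
    if module == "eu_vat":
        return order_total * 0.19    # illustrative VAT rate
    if module == "us_sales_tax":
        return order_total * 0.0625  # illustrative sales-tax rate
    raise ValueError(f"unknown tax module: {module}")

print(calculate_tax(100.0, "branch_berlin"))  # 19.0
```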

2.
Commercial off-the-shelf (COTS) software solutions have become commonplace in many domains, including the military, because they can provide standardized functionality with greater responsiveness, a shorter time-to-market, and lower costs than custom-made solutions. In one domain, however, that of certifiable safety-critical applications, COTS software has not been adopted. One particular certifiable safety-critical domain, the civil air transport industry, is under pressure to reduce cost and time-to-market while simultaneously increasing safety. Therefore, the use of COTS software, rather than exclusive reliance on custom-made software, would appear to be a solution worthy of investigation. This study examines the certifiability of COTS software, its technical feasibility in this environment, and the ability to achieve the expected responsiveness, time-to-market and cost benefits. A detailed evaluation of COTS software and domain-specific certification requirements is used to demonstrate that the certification of COTS-based systems is possible. A prototype COTS-based system (built upon a number of COTS components) is created to illustrate the technical feasibility of such a system in the civil air transport domain. Expected benefits from COTS solutions are evaluated both by examining process artefacts from the development of the COTS-based system and by comparing this development process with the domain's traditional custom-development process.

3.
Software evolution studies have traditionally focused on individual products. In this study we scale up the idea of software evolution by considering software compilations composed of a large quantity of independently developed products, engineered to work together. With the success of libre (free, open source) software, these compilations have become common in the form of 'software distributions', which group hundreds or thousands of software applications and libraries into an integrated system. We have performed an exploratory case study on one of them, Debian GNU/Linux, finding some significant results. First, Debian has been doubling in size every 2 years, totalling about 300 million lines of code as of 2007. Second, the mean size of packages has remained stable over time. Third, the number of dependencies between packages has been growing quickly. Finally, while C is still by far the most commonly used programming language for applications, use of C++, Java, and Python has significantly increased. The study helps not only to understand the evolution of Debian, but also yields insights into the evolution of mature libre software systems in general.
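The "doubling every 2 years" finding is an exponential-growth claim, and the doubling time can be recovered from size measurements with a log-linear fit. A minimal sketch of that arithmetic, using invented yearly counts (the real Debian figures are in the paper):

```python
import math

# Hypothetical (year, lines of code) pairs -- illustrative numbers only.
sizes = [(1998, 25e6), (2000, 55e6), (2002, 105e6), (2005, 220e6), (2007, 300e6)]

# Least-squares slope of log2(size) vs. year gives doublings per year.
xs = [y for y, _ in sizes]
ys = [math.log2(s) for _, s in sizes]
n = len(sizes)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated doubling time: {1 / slope:.1f} years")
```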

Jesus M. Gonzalez-Barahona teaches and researches at Universidad Rey Juan Carlos, Mostoles (Spain). His research interests include libre software development, with a focus on quantitative and empirical studies, and distributed tools for collaboration in libre software projects. He works in the GSyC/LibreSoft research team. Gregorio Robles is Associate Professor at the Universidad Rey Juan Carlos, where he earned his PhD in 2006. His research interests lie in the empirical study of libre software, ranging from technical issues to those related to the human resources of the projects. Martin Michlmayr has been involved in various free and open source software projects for well over 10 years. He acted as the leader of the Debian project for two years and currently serves on the board of the Open Source Initiative (OSI). Martin works for HP as an Open Source Community Expert and acts as the community manager of FOSSBazaar. Martin holds Master's degrees in Philosophy, Psychology and Software Engineering, and earned a PhD from the University of Cambridge. Juan José Amor has an M.Sc. in Computer Science from the Universidad Politécnica de Madrid and is currently pursuing a Ph.D. at the Universidad Rey Juan Carlos, where he is also a project manager. His research interests are related to libre software engineering, mainly effort and schedule estimation in libre software projects. Since 1995 he has collaborated with several libre software organizations; he is also co-founder of LuCAS, the best-known libre software documentation portal in Spanish, and of Hispalinux, the biggest Spanish Linux user group. He also collaborates with Linux+. Daniel M. German is associate professor of computer science at the University of Victoria, Canada. His main areas of interest are software evolution, open source software engineering and intellectual property.

4.
Most organizations no longer take for granted that their deployed applications are secure. But even after conducting penetration tests, network and hosting security personnel spend considerable time chasing incidents. Your organization might be one of the many that have realized the "secure the perimeter" approach doesn't stem the tide of incidents because the software it's building and buying doesn't resist attack. A new approach offers help across the enterprise.

5.
The generation of traceability links or traceability matrices is vital to many software engineering activities. It is also person-power intensive, time-consuming, and error-prone, and it lacks tool support. The activities that require traceability information include, but are not limited to, risk analysis, impact analysis, criticality assessment, test coverage analysis, and verification and validation of software systems. Information Retrieval (IR) techniques have been shown to assist with the automated generation of traceability links by reducing the time it takes to generate the traceability mapping. Researchers have applied techniques such as Latent Semantic Indexing (LSI), vector space retrieval, and probabilistic IR, and have enjoyed some success. This paper concentrates on examining issues not previously widely studied in the context of traceability: the importance of the vocabulary base used for tracing, and the evaluation and assessment of traceability mappings and methods using secondary measures. We examine these areas and perform empirical studies to understand the importance of each to the traceability of software engineering artifacts.
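Of the IR techniques mentioned, vector space retrieval is the simplest to make concrete: the requirement and each artifact become term vectors, and candidate links are ranked by cosine similarity. A minimal sketch with invented artifact names and raw term frequencies (no idf weighting or LSI):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

requirement = Counter("the system shall encrypt user credentials".split())
artifacts = {  # hypothetical artifacts and their token streams
    "CryptoService.java": Counter("encrypt decrypt credentials key user".split()),
    "ReportPrinter.java": Counter("print format page report layout".split()),
}
ranked = sorted(artifacts, key=lambda k: cosine(requirement, artifacts[k]), reverse=True)
print(ranked)  # CryptoService.java ranks first -- a candidate trace link
```

The sketch also hints at why the vocabulary base matters: a link can only be found if the requirement and the artifact share terms at all.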

6.
Software and Systems Modeling - Industrial control applications are usually designed by domain experts instead of software engineers. These experts frequently use visual programming languages based...

7.
Objectives: Software architecture is perceived as one of the most important artefacts created during a system's design. However, implementations often diverge from their intended architectures: a phenomenon called architectural drift. The objective of this research is to assess the occurrence of architectural drift in the context of de novo software development, to characterize it, and to evaluate whether its detection leads to inconsistency removal. Method: An in vivo, longitudinal case study was performed during the development of a commercial software system, where an approach based on Reflexion Modelling was employed to detect architectural drift. Observation and think-aloud data, captured during the system's development, were assessed for the presence and types of architectural drift. When divergences were identified, the data were further analysed to see if identification led to the removal of these divergences. Results: The analysed system diverged from the intended architecture during the initial implementation of the system. Surprisingly, however, this work showed that Reflexion Modelling served to conceal some of the inconsistencies, a finding that directly contradicts the high regard that this technique enjoys as an architectural evaluation tool. Finally, the analysis illustrated that detection of inconsistencies was insufficient to prompt their removal in the small, informal team context studied. Conclusions: Although the utility of the approach for detecting inconsistencies was demonstrated in most cases, it also served to hide several inconsistencies and did not act as a trigger for their removal. Hence additional efforts must be taken to lessen architectural drift, and several improvements in this regard are suggested. Copyright © 2010 John Wiley & Sons, Ltd.
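Reflexion Modelling itself (in the style of Murphy and Notkin) reduces to a small computation: lift extracted source dependencies through a map from files to architectural modules, then classify each architectural edge as a convergence, a divergence, or an absence. A minimal sketch with invented module and file names:

```python
# Intended architecture, source-to-module map, and extracted dependencies
# are all hypothetical.
intended = {("ui", "logic"), ("logic", "storage")}        # allowed edges
mapping = {"menu.c": "ui", "rules.c": "logic", "db.c": "storage"}
extracted = {("menu.c", "rules.c"), ("menu.c", "db.c")}   # from static analysis

lifted = {(mapping[s], mapping[t]) for s, t in extracted}
convergence = intended & lifted    # intended and present
divergence = lifted - intended     # present but not intended -> drift
absence = intended - lifted        # intended but never implemented
print(divergence)  # {('ui', 'storage')}
```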

8.
9.
AOSE (agent-oriented software engineering) was introduced into the development of an enterprise competence diagnosis system, and the shortcomings of AOSE methods in practical work were analyzed. During development, the MESSAGE (Methodology for Engineering Systems of Software Agents) modeling language served as the consistency rule, and agents were designed using a Desire-Intention-Belief (D-I-B) ordered modeling approach. With JADE and Jadex as the development platforms, the implementation of key techniques such as BDI agents is presented, and a complete and systematic agent-oriented software development process is summarized.
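A BDI agent, as realized on platforms such as Jadex, repeatedly revises beliefs from percepts, selects among desires, and commits to an intention. The sketch below is a generic deliberation step written in Python purely for illustration; it is not the JADE/Jadex API, which is Java-based, and all names are invented:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Desire:
    name: str
    priority: int
    relevant: Callable[[dict], bool]  # guard over the current beliefs

def bdi_step(beliefs: dict, desires: list, plans: dict, percept: dict) -> None:
    beliefs.update(percept)                                # belief revision
    options = [d for d in desires if d.relevant(beliefs)]  # option generation
    intention = max(options, key=lambda d: d.priority, default=None)
    if intention:
        plans[intention.name](beliefs)                     # execute the plan

beliefs: dict = {}
desires = [Desire("diagnose", 2, lambda b: b.get("data_ready", False))]
plans = {"diagnose": lambda b: print("running enterprise diagnosis")}
bdi_step(beliefs, desires, plans, {"data_ready": True})
```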

10.
The number of faults not discovered by the design review can be estimated using capture-recapture methods. Since these methods were developed for wildlife population estimation, the assumptions used to derive them do not match design review applications. The authors report on a Monte Carlo simulation to study the effects of broken assumptions on maximum likelihood estimators (MLEs) and jackknife estimators (JEs) of faults remaining. It is found that the MLE performs satisfactorily if faults are classified into a small number of homogeneous groups. Without grouping, the MLE can perform poorly, but it generally does better than the JE.
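In the simplest two-inspector case, the capture-recapture estimate is the Lincoln-Petersen formula N ≈ n1*n2/m, where n1 and n2 are each inspector's fault counts and m is the overlap. A worked sketch with invented review data; note it bakes in exactly the equal-detectability assumption whose violation the simulation study examines:

```python
def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Estimated total faults from two inspectors' findings."""
    if m == 0:
        raise ValueError("no overlap between inspectors: estimator undefined")
    return n1 * n2 / m

n1, n2, m = 20, 15, 10                 # invented review data
total = lincoln_petersen(n1, n2, m)    # 30.0 estimated faults in the document
remaining = total - (n1 + n2 - m)      # 30 - 25 = 5 estimated undiscovered
print(total, remaining)
```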

11.
The need to see compelling evidence before adopting new methods looms greater in large organizations because of their complexity and the need to integrate new technologies and processes with existing ones. To further evaluate agile methods and their underlying software development practices, several Software Experience Center (SEC) member companies initiated a series of activities to discover whether agile practices match their organizations' needs. Although each organization evaluated agile methods according to its specific needs, here we attempt to generalize their findings by analyzing some of their common experiences in the particular context of large organizations with well-established structures and processes. We base this analysis on experience collected and shared among four SEC members, namely ABB, DaimlerChrysler, Motorola, and Nokia.

12.
In current architecture-based software reliability models, whenever considerable uncertainty exists in the operational profile and in component reliabilities, substantial uncertainty carries over into the computed software reliability, regardless of how accurate the models themselves are. If traditional methods are used to estimate software reliability, the variation attributable to parameter uncertainty is ignored and the results may be inaccurate. A new architecture-based uncertainty analysis method is proposed, suitable for large, complex component-based applications and for the whole software life cycle.
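One way to make such an uncertainty analysis concrete is Monte Carlo propagation: sample component reliabilities from intervals instead of fixing point estimates, and observe the spread of the resulting system reliability. A minimal sketch for a simple visit-count architecture model; the component names, visit counts, and intervals are invented:

```python
import random

def system_reliability(rel: dict, visits: dict) -> float:
    """R = product of component reliabilities raised to expected visit counts."""
    r = 1.0
    for component, n in visits.items():
        r *= rel[component] ** n
    return r

visits = {"parser": 4, "engine": 10, "reporter": 1}  # expected executions per run
samples = []
for _ in range(10_000):
    rel = {"parser": random.uniform(0.995, 0.999),   # uncertain parameters,
           "engine": random.uniform(0.990, 0.998),   # drawn from intervals
           "reporter": random.uniform(0.970, 0.990)}
    samples.append(system_reliability(rel, visits))

samples.sort()
print(f"median={samples[5000]:.4f}  5th percentile={samples[500]:.4f}")
```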

13.
14.
There are more reliability models in the literature than experience reports of their application in industry. This paper helps fill that gap, reporting on experience gained at Italtel SIT, a major Italian telecommunications company. The paper deals with the following subjects: goals of the experience; an overview of the product under examination and its testing process; the method followed for conducting the experience; the data analysis strategy; selection of the best models; analysis of their predictive capability; linking of phases by means of compression factors; and the tools used to support the work.

15.
We present an approach based on queuing theory and stochastic simulation to help plan, manage, and control project staffing and the resulting service level in distributed multiphase maintenance processes. Data from a Y2K massive maintenance intervention on a large COBOL/JCL financial software system were used to simulate and study different service center configurations for a geographically distributed software maintenance project. In particular, a monolithic configuration corresponding to the customer's point of view and more fine-grained configurations, accounting for different process phases as well as for rework, were studied. Queuing theory and stochastic simulation provided a means to assess staffing, evaluate service level, and assess the likelihood of meeting the project deadline while executing the project. The approach turned out to be an effective staffing tool for managers, provided that it is complemented with other project-management tools in order to prioritize activities, avoid conflicts, and check the availability of resources.
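The staffing question can be sketched as an M/M/c queue: maintenance tickets arrive at some rate, c maintainers each serve at another rate, and simulation gives the expected waiting time for a candidate staffing level. The rates, job counts, and staffing levels below are invented, not the Y2K project's figures:

```python
import heapq
import random

def simulate(lam: float = 8.0, mu: float = 1.0, c: int = 10,
             n_jobs: int = 20_000, seed: int = 1) -> float:
    """Mean wait in an M/M/c FCFS queue via discrete-event simulation."""
    random.seed(seed)
    t = 0.0
    free_at = [0.0] * c            # time each maintainer becomes free (a heap)
    waits = []
    for _ in range(n_jobs):
        t += random.expovariate(lam)             # next ticket arrives
        start = max(t, heapq.heappop(free_at))   # earliest available maintainer
        waits.append(start - t)
        heapq.heappush(free_at, start + random.expovariate(mu))
    return sum(waits) / len(waits)

for c in (9, 10, 12):   # compare staffing levels against a service-level target
    print(c, round(simulate(c=c), 3))
```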

16.
Model checking large software specifications
In this paper, we present our experiences in using symbolic model checking to analyze a specification of a software system for aircraft collision avoidance. Symbolic model checking has been highly successful when applied to hardware systems. We are interested in whether model checking can be effectively applied to large software specifications. To investigate this, we translated a portion of the state-based system requirements specification of Traffic Alert and Collision Avoidance System II (TCAS II) into input to a symbolic model checker (SMV). We successfully used the symbolic model checker to analyze a number of properties of the system. We report on our experiences, describing our approach to translating the specification to the SMV language, explaining our methods for achieving acceptable performance, and giving a summary of the properties analyzed. Based on our experiences, we discuss the possibility of using model checking to aid specification development by iteratively applying the technique early in the development cycle. We consider the paper to be a data point for optimism about the potential for more widespread application of model checking to software systems.
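At its core, model checking is exhaustive exploration of a state graph against a property. The toy below checks an invariant over an invented two-aircraft advisory machine by breadth-first reachability; SMV itself works symbolically over BDDs rather than enumerating states like this, so this is only a conceptual sketch:

```python
from collections import deque

MOVES = {"none": ("climb", "descend"), "climb": ("none",), "descend": ("none",)}

def successors(state):
    """All one-step advisory changes for either aircraft (toy transition relation)."""
    a, b = state
    for na in MOVES[a]:
        yield (na, b)
    for nb in MOVES[b]:
        yield (a, nb)

def check_invariant(init, invariant):
    """BFS over reachable states; returns a violating state or None."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Property: never advise both aircraft to climb. The toy model violates it.
print(check_invariant(("none", "none"), lambda s: s != ("climb", "climb")))
```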

17.
This research traces the implementation of an information system in the form of ERP modules covering tenant and contract management in a Chinese service company. Misalignments between the ERP system specification and user needs led to the adoption of informal processes within the organisation. These processes are facilitated within an informal organisational structure and are based on human interactions undertaken within the formal organisation. Rather than attempt to suppress the emergence of the informal organisation, the company decided to channel the energies of staff involved in informal processes towards organisational goals. The company achieved this by harnessing the capabilities of what we term a hybrid ERP system, combining the functionality of a traditional (formal) ERP installation with the capabilities of Enterprise Social Software (ESS). However, the company recognised that the successful operation of the hybrid ERP system would require a number of changes in organisational design in areas such as reporting structures and communication channels. A narrative provided by interviews with company personnel is thematised around the formal and informal characteristics of the organisation as defined in the literature. This leads to a definition of the characteristics of the hybrid organisation and strategies for enabling a hybrid organisation, facilitated by a hybrid ERP system, which directs formal and informal behaviour towards organisational goals and provides a template for future hybrid implementations.

18.
A number of papers have investigated the relationships between design metrics and the detection of faults in object-oriented software. Several of these studies have shown that such models can be accurate in predicting faulty classes within one particular software product. In practice, however, prediction models are built on certain products to be used on subsequent software development projects. How accurate can these models be, considering the inevitable differences that may exist across projects and systems? Organizations typically learn and change. From a more general standpoint, can we obtain any evidence that such models are economically viable tools to focus validation and verification effort? This paper attempts to answer these questions by devising a general but tailorable cost-benefit model and by using fault and design data collected on two mid-size Java systems developed in the same environment. Another contribution of the paper is the use of a novel exploratory analysis technique, MARS (multivariate adaptive regression splines), to build such fault-proneness models, whose functional form is a priori unknown. The results indicate that a model built on one system can be accurately used to rank classes within another system according to their fault proneness. The downside, however, is that, because of system differences, the predicted fault probabilities are not representative of the system predicted. However, our cost-benefit model demonstrates that the MARS fault-proneness model is potentially viable from an economic standpoint. The linear model is not nearly as good, thus suggesting that a more complex model is required.
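The ranking result can be made concrete: what transfers across systems, and what matters economically, is the fraction of faults concentrated in the top-ranked classes, not the absolute predicted probabilities. The sketch below evaluates a ranking that way; a crude size-based score stands in for the MARS model, and all class data are invented:

```python
def faults_in_top_ranked(classes, inspect_fraction: float = 0.2) -> float:
    """classes: (name, loc, faults) triples; rank by LOC as a stand-in score."""
    ranked = sorted(classes, key=lambda c: c[1], reverse=True)
    k = max(1, int(len(ranked) * inspect_fraction))
    found = sum(c[2] for c in ranked[:k])
    total = sum(c[2] for c in classes)
    return found / total

system_b = [("A", 1200, 5), ("B", 300, 0), ("C", 900, 3),
            ("D", 150, 0), ("E", 2000, 6), ("F", 400, 1)]
# Fraction of system B's faults caught by inspecting its top 20% of classes:
print(faults_in_top_ranked(system_b))  # 0.4 with these invented numbers
```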

19.
Attacks on computer systems are now attracting increased attention. While the current trends in software vulnerability discovery indicate that the number of newly discovered vulnerabilities continues to be significant, the time between the public disclosure of vulnerabilities and the release of an automated exploit is shrinking. Thus, assessing the vulnerability exploitability risk is critical because this allows decision-makers to prioritize among vulnerabilities, allocate resources to patch and protect systems from these vulnerabilities, and choose between alternatives. Common vulnerability scoring system (CVSS) metrics have become the de facto standard for assessing the severity of vulnerabilities. However, the CVSS exploitability measures assign subjective values based on the views of experts. Two of the factors in CVSS, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence it is unknown whether it considers software properties as a factor. In this work, we introduce a novel measure, Structural Severity, which is based on software properties, namely attack entry points, vulnerability location, the presence of the dangerous system calls, and reachability analysis. These properties represent metrics that can be objectively derived from attack surface analysis, vulnerability analysis, and exploitation analysis. To illustrate the proposed approach, 25 reported vulnerabilities of Apache HTTP server and 86 reported vulnerabilities of Linux Kernel have been examined at the source code level. The results show that the proposed approach, which uses more detailed information, can objectively measure the risk of vulnerability exploitability and results can be different from the CVSS base scores.
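To make the proposed inputs concrete, the toy below folds the four named structural properties into a single score. The weighting scheme is entirely invented here; the paper derives its measure from attack surface, vulnerability, and exploitation analyses rather than from fixed weights:

```python
def structural_severity(entry_points: int, on_attack_surface: bool,
                        dangerous_calls: int, reachable: bool) -> float:
    """Toy score over the four structural properties (weights are invented)."""
    if not reachable:
        return 0.0                 # no path from an entry point: not exploitable
    score = min(entry_points, 5) / 5 * 0.4
    score += 0.3 if on_attack_surface else 0.0
    score += min(dangerous_calls, 3) / 3 * 0.3
    return round(score, 2)

print(structural_severity(entry_points=3, on_attack_surface=True,
                          dangerous_calls=1, reachable=True))  # 0.64
```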

20.

Context

It is important for Product Line Architectures (PLAs) to remain stable while accommodating evolutionary changes in stakeholders' requirements. Otherwise, architectural modifications may have to be propagated to products of a product line, thereby increasing maintenance costs. A key challenge is that several features are likely to exert a crosscutting impact on the PLA decomposition, thereby making it more difficult to preserve its stability in the presence of changes. Some researchers claim that the use of aspects can ameliorate instabilities caused by changes in crosscutting features. Hence, it is important to understand which aspect-oriented (AO) and non-aspect-oriented techniques better cope with PLA stability through evolution.

Objective

This paper evaluates the positive and negative change impact of component- and aspect-based design on PLAs. The objective of the evaluation is to assess how aspects and components promote PLA stability in the presence of various types of evolutionary change. To support a broader analysis, we also evaluate the PLA stability of a hybrid approach (i.e. the combined use of aspects and components) against the isolated use of component-based, OO, and AO approaches.

Method

A quantitative and qualitative analysis of PLA stability which involved four different implementations of a PLA: (i) an OO implementation, (ii) an AO implementation, (iii) a component-based implementation, and (iv) a hybrid implementation where both components and aspects are employed. Each implementation has eight releases and they are functionally equivalent. We used conventional metrics suites for change impact and modularity to evaluate the architectural stability of the four implementations.

Results

The combination of aspects and components promotes superior PLA resilience compared with the other PLAs in most circumstances.

Conclusion

It is concluded that the combination of aspects and components supports the design of highly cohesive and loosely coupled PLAs. It also contributes to improved modularity by untangling feature implementations.
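One simple instance of the change-impact measures used in such studies is the fraction of a PLA's modules touched from one release to the next; lower values across the eight releases indicate a more stable decomposition. A minimal sketch over invented release data:

```python
def change_impact(releases, all_modules) -> list:
    """Fraction of modules added or removed between consecutive releases."""
    impacts = []
    for prev, curr in zip(releases, releases[1:]):
        touched = prev ^ curr      # symmetric difference as a crude impact proxy
        impacts.append(len(touched) / len(all_modules))
    return impacts

hybrid = [{"core", "ui", "persist"},
          {"core", "ui", "persist", "sms"},
          {"core", "ui", "persist", "sms"}]
modules = set().union(*hybrid)
print(change_impact(hybrid, modules))  # [0.25, 0.0] with these invented releases
```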
